Speech recognition with weighted finite-state transducers

by Ian Parker in Computer Science

http://www.cs.nyu.edu/~mohri/postscript/hbka.pdf

This paper deals with speech recognition. We will see that speech recognition is a very deep process in natural language, and that it is closely linked to translation.

Let me begin by asking some fairly simple questions.

1) Why is language important for AI?

Suppose I am confronted with “Alice” and I want to discuss, say, my boating holiday. I go through a series of locks at Merry Hill, it takes me an hour, and I moor my boat by the pub and have lunch there.

For Alice to comment she needs to have an accurate internal representation of what I am saying; Alice is going to use this internal representation to find the most appropriate response. How do I know whether the internal representation is correct or not? Well, at one level you can say that if Alice gives an appropriate, intelligent response she has passed the Turing Test. Alternatively I can ask for a translation into a second language, say Spanish. Good Spanish will indicate an accurate internal representation, and conversely. In fact Google Translate provides me with “El barco atraviesa una cerradura” - “cerradura” being a door lock rather than a canal lock.

2) What is the connection between speech and language understanding?

The recognition of speech comes in two parts: the recognition of individual phonemes, and the building up of phonemes into words. An extremely important part of speech recognition is the placing of words in context. If I were to say “El si esta lluvia y viento” you see immediately that “si” (the “whether” sense) is out of context where the weather sense, “tiempo”, is needed. A correct word recognizer should be able to tell the difference between whether and weather even though the sound of the words is the same. It does this by means of context - exactly the same as when we are translating.

This paper looks at Markov chains. In this model we have a sequence of words, and each sequence of words is assigned a probability. If we have a choice of words we fit in the most probable one. The probability is determined using a training set, and of course Google, being a Web engine, has no difficulty in taking a training set as large as required. A Markov chain essentially runs forwards: you have a “state” and this “state” is constantly modified by subsequent words. This contrasts with lattice techniques, where a number of surrounding words are taken into account.
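To make the Markov chain idea concrete, here is a minimal bigram language model in Python. This is purely my own toy sketch - the corpus, the function names and the unsmoothed counting are assumptions for illustration, not anything taken from the paper.

from collections import Counter, defaultdict

def train_bigrams(sentences):
    """Count, for each word, how often each next word follows it."""
    bigram_counts = defaultdict(Counter)
    for sentence in sentences:
        tokens = ["<s>"] + sentence.lower().split()
        for prev, cur in zip(tokens, tokens[1:]):
            bigram_counts[prev][cur] += 1
    return bigram_counts

def most_probable_next(bigram_counts, prev_word):
    """Return the most probable next word and its estimated probability."""
    candidates = bigram_counts[prev_word.lower()]
    if not candidates:
        return None
    word, count = candidates.most_common(1)[0]
    return word, count / sum(candidates.values())

corpus = [
    "I wonder whether it will rain",
    "the weather is wet and windy",
    "the weather forecast says rain",
]
counts = train_bigrams(corpus)
print(most_probable_next(counts, "the"))      # ('weather', 1.0) on this toy corpus
print(most_probable_next(counts, "whether"))  # ('it', 1.0)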

The paper talks about bigrams and n-grams. These in essence capture different meanings of words through the company they keep: clearly “cerradura” is going to behave differently from “esclusa”. The paper discusses the lexical structure of bigrams and trigrams in detail, but it does not tell us how to construct them. In a Markov chain we move along, branching when required; as soon as a branch reaches a termination and/or a low probability we stop.

Perplexity is, in effect, the average number of choices available at each word. It is a geometric mean, since the perplexity metric is tied to entropy (the log of the number of equally likely states). With the vocabularies normally used we get a perplexity of about 150; exact values are in the paper.
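For reference, the standard textbook relationship between perplexity and entropy (my formulation, not a quotation from the paper) is:

\[
\mathrm{PP}(w_1 \dots w_N) \;=\; P(w_1 \dots w_N)^{-1/N} \;=\; 2^{H},
\qquad
H \;=\; -\frac{1}{N}\sum_{i=1}^{N}\log_2 P(w_i \mid w_1 \dots w_{i-1}).
\]

The first equality is where the “geometric mean of the number of choices” reading comes from; the second ties it to the per-word entropy H.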

What are the results? On large vocabularies there is a 12% error rate. This, although disappointing, is nonetheless better than most speech recognition systems and as good as any. How could it be improved?

Similar hidden Markov model implementations are now available as source code:
http://htk.eng.cam.ac.uk/
http://www.colloquial.com/carp/Publications/collinsACL04.pdf

There is quite an important point here, and it concerns grammatical parsing. Markov chain methods do not explicitly parse. The lattice methods are basically about building a hypothesis which is then refined, rather as “¡El barco esta caliente!” might be refined into “El barco atraviesa una cerradura”. The methodology does indeed have an analogy to the annealing process. To start a lattice process you need an initial approximation, and a 12% error rate is good enough to be one.

Markov chains and bigrams are well established techniques. Sphinx, developed at Carnegie Mellon University, uses Markov chain and bigram techniques, but with one important difference: it has a grammar. As explained above, the effect of grammar is not so much to produce a better result as to provide a framework by means of which other languages can be added. Sphinx allows you to speak in one language and get text in another, but in terms of basic accuracy it is no better than other Markov chain methods.

Conclusion
This, I think, demonstrates the latest developments in speech recognition research. The irony is that speech, like translation, depends on the recognition of words in context. Markov chains and lattices may be used to translate text from one language to another. Speech is in fact a far harder problem than translation, yet much more effort is being put into it. Google Translate does not (as yet) differentiate between different contexts, although the speech research does. It is clear why: people are on the move with their mobile phones, texting with a small keypad is an extremely time-consuming process, and most people can speak faster than they can type. Still, it would be ironic if speech recognition arrived before good translation.

In fact, if one could get a computer to recognize speech and take down speeches, having a multilingual “Hansard” for the European Parliament would be a trivial extension.

POTW 6/24/07: “Support-Vector Networks” by Cortes and Vapnik

by grant.ingersoll in Algorithms, Artificial Intelligence, Computer Science, Machine Learning, Natural Language Processing (NLP), SVM, Statistical Approach, Text Categorization, classification, support vector machines, text mining

Long paper this week, but it is the original on Support Vector Machines: Support-Vector Networks by Cortes and Vapnik.  Given my schedule, I may spread this out over two weeks.

POTW 6/11/07: Discussion of “A Sequential Algorithm for Training Text Classifiers” by Lewis and Gale

by grant.ingersoll in Algorithms, Artificial Intelligence, Computer Science, Information Retrieval, Machine Learning, Natural Language Processing (NLP), Statistical Approach, Text Categorization, classification, naive bayes

In “A Sequential Algorithm for Training Text Classifiers” by David D.
Lewis and William Gale, the authors put forth a new (at the time)
method of training text classifiers using an approach they call
“uncertainty sampling”.

Section 1 outlines the problem of training, namely obtaining a good
sample of text to be labeled for the trainer.  After disposing of
several other methods of garnering samples (random, relevance
feedback based), Lewis and Gale introduce an iterative approach for
manually labeling examples.

Section 2 then discusses the benefits of “learning by query” in
theory, namely the possibility of reducing the error rate very
quickly in comparison to the number of queries required.

Figure 1 (described in section 3) outlines their basic approach,
which relies on having a human judge some subset of examples that the
currently used classifier is least certain about.  This process is
iterated until the human feels satisfied with the results.  One
caveat of this approach is that the classifier must not only predict
the class, it must give a measurement of certainty for that class.
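As a rough sketch of that loop, here is my own Python rendering of the idea. The function names, the batch size and the use of “probability closest to 0.5” as the uncertainty criterion are my assumptions about one reasonable instantiation, not details taken verbatim from the paper.

def uncertainty_sampling(unlabelled, ask_human, train, predict_proba,
                         batch_size=3, rounds=5):
    """Iteratively label the examples the current classifier is least sure about."""
    labelled = []
    for _ in range(rounds):
        if labelled:
            model = train(labelled)
            # Least certain = predicted probability of the positive class nearest 0.5.
            unlabelled.sort(key=lambda x: abs(predict_proba(model, x) - 0.5))
        # With no labels yet, just take an arbitrary first batch to bootstrap.
        batch = unlabelled[:batch_size]
        for example in batch:
            labelled.append((example, ask_human(example)))
            unlabelled.remove(example)
    return train(labelled)

Here train, predict_proba and ask_human stand for whatever classifier-training routine, scoring function and human judge you have to hand; the selection loop itself is the part the paper contributes.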

Continuing on into section 4, we are introduced to how to build a
classifier and use uncertainty sampling to train it.  Most of the
section details the probability theory behind it, finishing up with
how to do the sampling.  One thing I always wish for in these papers
is concrete examples (maybe as an appendix or a reference) that work
through the math on an actual toy problem.  Section 5 does just this,
laying out an experiment and discussing the details, minus the math,
which probably suits most people just fine.

Section 7 has an excellent discussion of the results, the pay dirt
being that using this new method significantly reduces the number of
examples required for training, at the cost of having a human in the
loop.

Google’s initiatives in Artificial Intelligence

by Ian Parker in Artificial Intelligence, Computer Science, Information Retrieval, Natural Language Processing (NLP), Question Answering, Text Categorization

Introduction
Google’s earnings nearly doubled last year.
http://news.com.com/Google+profit+nearly+doubles/2100-1030_3-6127658.html

Unlike Microsoft, which gets its money from shifting boxes, Google relies on advertising to pay its way. There is a tremendous incentive to improve the quality of searching. The first reason is obvious: the better Google is perceived to perform as a search engine, the more people will use Google for their searches and the greater the traffic for advertisers. The second reason is a little more sinister. Google gets paid according to the number of clicks made on an advertisement, so as well as telling you the results of your search it also needs to put some ads your way. The share price of Google is closely linked to the perceived quality of search.

The quest for AI
As one might expect, Google is deeply into AI; one might argue that AI is essentially what Google's core business depends on. Suppose we could take a web page, find out exactly what it is about, extract all the relevant facts and put them into a database, and then, on the prompting of a query from a user, marshal all the facts relevant to that enquiry. This is essentially what an AI system looking at web pages would do.

http://news.com.com/2100-11395_3-6160372.html
Google is talking about the size of the human genome and the size of AI. I think the arguments are a little misleading; I would prefer to look at what we would expect from AI. Suppose I were to show you a box and told you that the box was “intelligent”. What would you expect? Well, Alan Turing devised what is now known as the Turing Test. He said that if the response of a computer in a conversation was indistinguishable from that of a human, it had passed the Turing Test.

On the subject of the Turing Test, Alan Turing envisaged a test which would distinguish between men and women and also would be psychic; Turing believed in ESP. Looking at Alice I am aghast: whenever I say something she always changes the subject. Hardly surprising in view of the Spanish! (La estacion de resorte - El barco atraviesa una cerradura)

In other words I would expect to be able to ask questions and get an intelligible response, and to engage in a conversation if I wanted greater depth. If the box claimed to speak Spanish I would expect translations which showed an understanding of context; in fact it could not produce an intelligible response without context. We would also want answers to statistical questions, such as: how do people like BMW cars? What is the correlation between this and that? Can we deduce anything about cancer from the people who get it, their lifestyles, and so on?

We would also like to see some evidence of reasoning ability. Google is not committed specifically to reasoning. In a sense reasoning comes after the ability to retrieve efficiently. This has been discussed by myself and other people in “Creating Artificial Intelligence”
http://groups.google.co.uk/group/creatingAI?hl=en
I have also written the following blogs.

http://ipai.blogspot.com/
http://ipai1.blogspot.com/
http://ipai2.blogspot.com/

One thing to remember is that the ability to find facts is closely related to the ability to construct wrappers automatically. This is one of the main features of Web 3.0.

Let us now return to Google and what they are doing to produce a Web-based AI.

Searching - The fundamentals
Search engines are basically databases. The information contained in the database has changed over the years. What the user needs to know about a Web page is :-
1) What is it about?
2) How is it rated - is it written by a crank, or does it contain good and useful stuff?
http://infolab.stanford.edu/~backrub/google.html
This paper describes the main techniques used in search engines.
Google became the primary search engine on the basis of what might be termed a citation index. Scientists have used this principle almost from the year dot: at the bottom of an academic paper are references, and these references are “citations”. The “Science Citation Index” is an index of the papers which cite a given paper, and a paper which is frequently cited is generally regarded as being a good paper. Google does exactly the same thing with hyperlinks. The number of times other people access a website also counts.
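The hyperlink-as-citation idea is what the Brin and Page paper calls PageRank. A toy power-iteration version, entirely my own sketch rather than the production algorithm (only the 0.85 damping factor comes from their paper), looks like this:

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                continue
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                if target in new_rank:
                    new_rank[target] += share
        rank = new_rank
    return rank

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
print(sorted(pagerank(graph).items(), key=lambda kv: -kv[1]))  # "c" is the most "cited"

A page collects rank from the pages that link to it, weighted by how much rank those pages have - exactly the citation-index intuition described above.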

http://209.85.163.132/papers/sawzall-sciprog.pdf
The Web is of course very large and Google has to find a way of dividing up the tasks. This paper is the key to the way in which Google does this. The database is far too large to place on a single machine, and is therefore stored on a number of servers. Sawzall is quite ingenious: a query is passed round from server to server, but while the query is in transit other queries are being worked on. Hence, although a query takes a few seconds to process on the network, the fact that other queries can be processed at the same time means that a high throughput is maintained. One quite important point is that it is possible to perform aggregations; that is to say, once websites have been found with their keywords, a further analysis based on programs written in C++ can be performed.
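This is not Sawzall itself, but a toy Python illustration of the divide-and-aggregate idea the paper describes: each shard of the data is processed independently, and the per-shard results are then merged into a single aggregate. All names and data here are made up for the example.

from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def count_terms(shard):
    """Per-shard work: count the terms appearing in one slice of the data."""
    counts = Counter()
    for record in shard:
        counts.update(record.lower().split())
    return counts

def aggregate(shards):
    """Run the per-shard counts in parallel, then merge the partial results."""
    total = Counter()
    with ThreadPoolExecutor() as pool:
        for partial in pool.map(count_terms, shards):
            total.update(partial)
    return total

shards = [["boat lock canal", "boat pub lunch"], ["lock keeper", "canal boat"]]
print(aggregate(shards).most_common(3))  # [('boat', 3), ('lock', 2), ('canal', 2)]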

http://infolab.stanford.edu/pub/papers/google.pdf
This describes what Google was doing in 2000.

Google wants to know your surfing history. This will enable it to target both web pages and ads. Suppose I am a civil engineer and I enter “bridge” as one of my search terms. A civil engineer is interested in “puente”, that is, the sort of bridge that crosses a river. If I am a card player I will be interested in the game of bridge: a website containing “4 hearts” is about the card game.

Google also wants to target its advertisements. It wants to know what you think of a particular organization.
http://ryanmcd.googlepages.com/sentimentACL07.pdf
This paper does just that, using a training set. There is of course one other thing: advertisers like some sort of feedback on how they and their product are perceived. The paper attempts to provide this and manages scores approximating 80%.

It is not my aim to make moral judgements about Google. Google, in fact, unlike Microsoft, has not broken the law; indeed the Google code is mostly open source. How it is all put together is highly proprietary, but there are references to source code in all the papers. If you are bundling inaccessible code with an inferior operating system (Windows as a sheer operating system is inferior to Linux), a fine of x million euros a day is appropriate. Google technology is immensely powerful and society will have to come to terms with it in some way.

Google and Semantic Analysis
http://www2007.org/papers/paper342.pdf
This is a most remarkable paper. Let us dissect some of the terminology. It talks about “vectors”. What are these “vectors”? They are all derived from Latent Semantic Analysis, or some other allied method. It talks about partially indexing the vectors (not storing the full vector). It takes queries and search results: it actually looks at the keywords people have put into their queries and the web pages they actually click on, and an algorithm is developed for giving people exactly what they want. The paper makes great play of optimization for an inverted-file search. An inverted file is a database file where the entries are indexed by term, and quite clearly, if you are doing web-based searches at this scale, that kind of optimization matters.
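As an illustration of what such vectors are - my own toy example, not the system described in the paper - latent semantic analysis builds a term-document count matrix, reduces it with a truncated SVD, and then compares items by cosine similarity in the reduced space:

import numpy as np

docs = [
    "boat lock canal water",
    "canal boat holiday water",
    "card game bridge hearts",
]
vocab = sorted({w for d in docs for w in d.split()})
A = np.array([[d.split().count(w) for w in vocab] for d in docs], dtype=float)

# Truncated SVD: keep only the top k latent dimensions.
U, S, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
doc_vectors = U[:, :k] * S[:k]   # each row is a document in the latent space

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(doc_vectors[0], doc_vectors[1]))  # high: both about canal boats
print(cosine(doc_vectors[0], doc_vectors[2]))  # low: card game versus boating

The k=2 cut-off here is arbitrary for the toy data; in practice LSA systems typically keep a few hundred latent dimensions.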

http://labs.google.com/papers/orkut-kdd2005.html

This paper is from 2007, so its results are not yet in “Google”. The methodology is amazingly powerful and could be applied in a variety of circumstances. Slightly chillingly, the “Orkut” data set, which correlates friendship, personality and other similarities, is used. The method can effectively find you matches and build you up a friendship network. Equally, it can judge you by the friends that you have!

Potentially you could take El Cid and its English translation and match the words up. Or rather, you are not just matching words, you are matching vectors. An inverted file then gives the correct Spanish translation for an English vector and vice versa. This program will take any set of vector pairs and do a match.
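A toy version of that matching step might look like the following; the words and the three-dimensional vectors are invented purely for illustration, and real vectors would of course come from something like the latent semantic analysis above.

import numpy as np

english = {"boat": np.array([0.9, 0.1, 0.0]), "lock": np.array([0.1, 0.8, 0.2])}
spanish = {"barco": np.array([0.88, 0.12, 0.0]),
           "esclusa": np.array([0.15, 0.75, 0.25]),
           "cerradura": np.array([0.0, 0.2, 0.9])}

def nearest(vec, table):
    """Return the entry in `table` whose vector is most similar to `vec`."""
    def cos(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
    return max(table, key=lambda w: cos(vec, table[w]))

for word, vec in english.items():
    print(word, "->", nearest(vec, spanish))  # boat -> barco, lock -> esclusa

The point of the inverted file is simply to make this nearest-vector lookup fast when the vocabulary runs to millions of entries rather than three.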

Translation
At present translation with Google Translate is rather poor.
El barco atraviesa una cerradura - The boat goes through a lock
La estacion de resorte - The season of spring.

http://www.stefanriezler.com/PAPERS/NAACL06.pdf
In 2006 Google recruited Stefan Riezler. This is interesting in that it indicates a direction in which Google is moving. Here is his CV:
http://www.stefanriezler.com/CV07.pdf
It is probably a pretty good summary of the way in which Google intends to go. One thing should be pointed out straight away: the Google NLP initiatives are based on strict parsing as their starting point. This contrasts with some versions of Latent Semantic Analysis, where unparsed words are entered. Google looks at subjects, verbs, adjectives, adverbs, objects and possessives. Google is also interested in question-and-answer responses.

http://www.cs.nyu.edu/~mohri/postscript/hbka.pdf
This paper is a review article on the closely related area of speech recognition. I should say straight away that the recognition of individual phonemes by computer is as good as, if not better than, that performed by humans. The reason why human speech recognition is better overall is that humans recognize words in context, which makes speech very similar to translation. I can illustrate this with words that have different meanings and spellings but the same phoneme structure: whether (si), weather (tiempo); hear (oir), here (aqui). One thing that is a little disappointing is that the speech and NLP groups in Google appear to be working independently.

Speech is in fact a far harder problem than translation or the discernment of meaning from text, because in translating from text you have fewer choices. The method used is Markov chains and the association of neighbouring words, including grammar. Interestingly, in neither Riezler's work nor this one are words chosen on the basis of long-range meaning. If we had, say, a medical paper we could bias the search towards medical terms; they do not seem to do this.

To produce the right words in speech you need an iterative annealing process. This means you may wish to change the phoneme, or word, assignment once other words have been found.

http://www.stefanriezler.com/PAPERS/ACL07.pdf
Suppose I am not looking for a website but want to know a fact: “What is the velocity of light?”, “What is somebody's address?”, “What is the turnover of company X?”. To answer a question, the question needs to be parsed so that its meaning can be ascertained. Here we are quite close to the Turing Test.
http://www.cs.cmu.edu/~acarlson/semisupervised/million-fact-aaai06.pdf
http://www.cs.bell-labs.com/cm/cs/who/pfps/temp/web/www2007.org/papers/paper560.pdf

This is the first stage of Google's programme: a database of, initially, a million facts will be gathered and used to answer questions. It will of course be extended as time goes on.

Head to Head with Microsoft
Google has a spreadsheet and a word processor. It also features desktop publishing.
https://www.google.com/accounts/ServiceLogin?service=writely&passive=true&continue=http%3A%2F%2Fdocs.google.com%2F%3Fhl%3Den_GB&hl=en_GB&ltmpl=homepage&nui=1&utm_source=en_GB-more&utm_medium=more&utm_campaign=en_GB
There are advantages and disadvantages in using the Web for basic word processing and spreadsheets. The advantages are that the software :-

1) Is up to date.
2) Will run on both Linux and Windows systems.
3) Is free.
4) Has facilities for work sharing.
http://labs.google.com/papers/gfs.html
5) Backs up your work automatically.

The disadvantages are that you need to be connected to the Web to access your work. There are question marks over security, although to be fair Google is investing a considerable effort in this field.

http://labs.google.com/papers.html
This gives a list of Google papers. Note those on security. I have not mentioned them individually since my main thrust is AI.
I feel that we should look at spellcheckers and how word processing and AI can be integrated. Often when we misspell a word the spelling is valid but means something different; people will often get words that sound the same wrong. This puts spellcheckers in the same position as translators, and on a Web spell-check the latest translator can be used. If I use a translator as a spellchecker I am one stage up on anything Microsoft has produced. If you are writing in Spanish, “si” and “tiempo” are never confused; in English a large number of people confuse “whether” and “weather”, and present-day spellcheckers pass both.
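A sketch of what such a context-sensitive check could look like - entirely my own toy example, not any existing Google or Microsoft feature - is to train a bigram model on correct text and flag a homophone whenever its alternative fits the neighbouring words better:

from collections import Counter

HOMOPHONES = {"whether": "weather", "weather": "whether",
              "here": "hear", "hear": "here"}

def train(corpus_sentences):
    """Collect bigram counts from sentences assumed to be spelled correctly."""
    bigrams = Counter()
    for s in corpus_sentences:
        toks = s.lower().split()
        bigrams.update(zip(toks, toks[1:]))
    return bigrams

def check(sentence, bigrams):
    """Flag homophones whose alternative fits the local context better."""
    toks = sentence.lower().split()
    for i, w in enumerate(toks):
        alt = HOMOPHONES.get(w)
        if alt is None:
            continue
        pairs = [(toks[i - 1], x) for x in (w, alt)] if i > 0 else []
        pairs += [(x, toks[i + 1]) for x in (w, alt)] if i + 1 < len(toks) else []
        score_w = sum(bigrams[p] for p in pairs if w in p)
        score_alt = sum(bigrams[p] for p in pairs if alt in p)
        if score_alt > score_w:
            print(f"'{w}' looks wrong here; did you mean '{alt}'?")

model = train(["the weather is wet and windy",
               "I wonder whether it will rain",
               "the weather forecast says rain"])
check("the whether is wet", model)  # suggests 'weather'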

There is one other point. If I want to write something learned, I want references. If I write on web software Google can suggest them to me. If I have Microsoft software on my own computer, it cannot do this.

Conclusion
I started off this investigation rather skeptical, and I came away from Google Translate distinctly unimpressed: “La estacion de resorte” - I did not know stations were elastic! But I came away deeply impressed with the work which Google is doing and with its scope. My criticism that the research on natural language should involve more interchange of information is perhaps rather carping, considering the difficulties involved in running a programme on this scale.
On the question of personal information I can see where Google is coming from. Let's put it this way: if you meet a friend in the street and start a conversation, you will have remembered some of the “personal” information that they have told you. Any machine that is to pass the Turing Test must likewise store personal information; you need personal information stored if you are ever going to “talk to Google”. It is also vital for proper retrieval of information: the information you get must be relevant to you.
http://michaelaltendorf.wordpress.com/2007/06/13/top-100-alternative-search-engines-from-readwrite-web/
The whole point of search engine technology is to get relevant references and facts; this reference misses that point completely. If you need a 3D display, your basic engine is lousy.

Google is now entering the world of facts rather than just websites. This could have some very interesting consequences in the future.
There is one fact that society will have to come to terms with in the future. To become president of the United States you need television exposure. Television, telephones and the Internet are now becoming one. Who will choose the programs you watch? Why, Google of course. This is a tremendous responsibility.

POTW 6/11/07: “A Sequential Algorithm for Training Text Classifiers” by Lewis and Gale

by grant.ingersoll in Algorithms, Artificial Intelligence, Computer Science, Machine Learning, Natural Language Processing (NLP), Statistical Approach, Text Categorization, classification, naive bayes

More on text classification: “A Sequential Algorithm for Training Text Classifiers” by David Lewis and William Gale.  A little bit of an older paper, but still looks to be a good one.
