Introduction
Google’s earnings nearly doubled last year.
http://news.com.com/Google+profit+nearly+doubles/2100-1030_3-6127658.html
Unlike Microsoft, which gets its money from shifting boxes, Google relies on advertising to pay its way. There is therefore a tremendous incentive to improve the quality of searching. The first reason is obvious: the better Google is perceived to perform as a search engine, the more people will use it for their searches and the greater the traffic for advertisers. The second reason is a little more sinister. Google gets paid according to the number of clicks made on an advertisement, so as well as telling you the results of your search it needs to put some ads your way. The share price of Google is closely linked to the perceived quality of search.
The quest for AI
As one might expect, Google is deeply into AI; AI, one might argue, is essentially what Google's core business depends on. Suppose we could take a web page, find out exactly what it is about, extract all the relevant facts and put them into a database, and then, on the prompting of a query from a user, marshal all the facts relevant to that enquiry. That is essentially what an AI system looking at web pages would do.
http://news.com.com/2100-11395_3-6160372.html
Google is talking about the size of the human genome and the size of AI. I think the arguments are a little misleading; I would prefer to look at what we would expect from AI. Suppose I were to show you a box and tell you that the box was “intelligent”. What would you expect? Well, Alan Turing devised what is now known as the Turing Test. He said that if the responses of a computer in conversation were indistinguishable from those of a human, it had passed the Turing Test.
On the subject of the Turing Test, Alan Turing originally envisaged a test which would distinguish between men and women, and, since he believed in ESP, one which would also detect psychic ability. Looking at ALICE, the chatbot, I am aghast: whenever I say something she changes the subject. Hardly surprising in view of the Spanish! (La estacion de resorte, “the mechanical-spring season”; El barco atraviesa una cerradura, “the boat passes through a door lock”.)
In other words, I would expect to be able to ask questions and get an intelligible response, and to engage in a conversation if I wanted greater depth. If the box claimed to speak Spanish I would expect translations which showed an understanding of context; in fact it could not produce an intelligible translation without context. We would also want answers to statistical questions: how do people like BMW cars? What is the correlation between this and that? Can we deduce anything about cancer from the people who get it, their lifestyles, and so on?
We would also like to see some evidence of reasoning ability. Google is not committed specifically to reasoning; in a sense reasoning comes after the ability to retrieve efficiently. I and others have discussed this in “Creating Artificial Intelligence”.
http://groups.google.co.uk/group/creatingAI?hl=en
I have also written the following blogs.
http://ipai.blogspot.com/
http://ipai1.blogspot.com/
http://ipai2.blogspot.com/
One thing to remember is that the ability to find facts is closely related to the ability to automatically construct wrappers. This is one of the main features of Web 3.0.
Let us now return to Google and what they are doing to produce a Web-based AI.
Searching - The fundamentals
Search engines are basically databases. The information contained in the database has changed over the years. What the user needs to know about a Web page is :-
1) What is it about?
2) How is it rated? Is it written by a crank, or does it contain good and useful material?
http://infolab.stanford.edu/~backrub/google.html
Describes the main techniques used in search engines.
Google became the primary search engine on the basis of what might be termed a citation index. Scientists have used this principle almost from the year dot: at the bottom of an academic paper are references, and these references are “citations”. The “Science Citation Index” is an index of the papers which cite a given paper, and a paper which is frequently cited is generally regarded as a good paper. Google does exactly the same thing with hyperlinks. The number of times other people access a website also counts.
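The citation-index idea can be sketched in a few lines. This is an illustrative toy, not Google's actual implementation: a page's score is fed by the scores of the pages that link to it, iterated until it settles down (the essence of PageRank). The link graph below is invented for the example.

```python
# Toy PageRank-style citation scoring (illustrative sketch only).
# links: dict mapping each page to the list of pages it links to.

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start with equal scores
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            if not outs:                         # dangling page: spread evenly
                for q in pages:
                    new[q] += damping * rank[p] / n
            else:                                # share score among out-links
                for q in outs:
                    new[q] += damping * rank[p] / len(outs)
        rank = new
    return rank

links = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
ranks = pagerank(links)
# C is the most "cited" page here, so it ends up with the highest score.
```

The damping factor models a surfer who occasionally jumps to a random page, which keeps the scores from draining into dead ends.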
http://209.85.163.132/papers/sawzall-sciprog.pdf
The Web is of course very large, and Google has to find a way of dividing up its tasks. This paper is the key to the way Google does this. The database is far too large to place on a single machine, and is therefore stored on a number of servers. Sawzall is quite ingenious: a query is passed round from server to server, but while the query is in transit other queries are being worked on. Although a single query takes a few seconds to process on the network, the fact that other queries can be processed at the same time means that a high throughput is maintained. One quite important point is that it is possible to compute aggregations: once websites are found with their keywords, a further analysis based on programs written in C++ can be performed.
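The aggregation pattern the paper describes can be sketched roughly as follows. This is not Sawzall itself, just an illustration of the shape of the computation: each server runs a small program over its own shard of the data and emits values into named aggregators, so only compact aggregates ever cross the network. The data is invented.

```python
# Illustrative sketch of shard-local analysis plus central aggregation
# (the pattern behind Sawzall, not the real system).

from collections import Counter

def run_on_shard(records):
    """Per-shard phase: emit (word, count) pairs into a local table."""
    hits = Counter()
    for rec in records:
        for word in rec.split():
            hits[word] += 1
    return hits

def aggregate(per_shard_results):
    """Central phase: combine the partial tables from every shard."""
    total = Counter()
    for partial in per_shard_results:
        total.update(partial)
    return total

shards = [["the cat sat", "the dog ran"], ["the cat ran"]]
totals = aggregate(run_on_shard(s) for s in shards)
# totals["the"] == 3, totals["cat"] == 2
```

Because the per-shard phase is independent, the shards can be processed in parallel on separate machines, which is where the throughput comes from.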
http://infolab.stanford.edu/pub/papers/google.pdf
Describes what Google was doing in 2000
Google wants to know your surfing history. This will enable it to target both web pages and ads. Suppose I am a civil engineer and I enter “bridge” as one of my search terms. A civil engineer is interested in a “puente”, the sort of bridge that crosses a river; a card player will be interested in the game of Bridge. A website containing “4 hearts” is about the card game.
Google also wants to target its advertisements. It wants to know what you think of a particular organization.
http://ryanmcd.googlepages.com/sentimentACL07.pdf
Does just that, using a training set. There is of course one other thing: advertisers like some sort of feedback on how they and their products are perceived. This paper attempts to achieve this, and manages scores approximating to 80%.
It is not my aim to make moral judgements about Google. Google, unlike Microsoft, has not broken the law. Indeed the Google code is mostly open source; how it is all put together is highly proprietary, but there are references to source code in all the papers. If you are bundling inaccessible code with an inferior operating system (Windows as a sheer operating system is inferior to Linux), a fine of x million euros a day is appropriate. Google's technology is immensely powerful, and society will have to come to terms with it in some way.
Google and Semantic Analysis
http://www2007.org/papers/paper342.pdf
This is a most remarkable paper. Let us dissect some of the terminology. It talks about “vectors”. What are these vectors? They are derived from Latent Semantic Analysis or some allied method. It talks about partially indexing the vectors (not storing the full vector). It takes queries and search results: it looks at the keywords people actually put into their queries and the web pages they then click on, and an algorithm is developed for giving people exactly what they want. The paper makes great play of optimization for an inverted-file search. An inverted file is a database file where the entries are indexed by term; quite clearly, if you are doing Web-scale searches, an inverted file is the natural structure.
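The idea of partially indexing vectors can be sketched like this. It is a hedged toy, not the paper's actual scheme: keep only each document's strongest vector components in an inverted file, so a query only touches the posting lists for its own terms. The documents, terms and weights are invented.

```python
# Sketch of a partial inverted index over document vectors
# (illustrative only; the real scheme is more sophisticated).

from collections import defaultdict

def build_partial_index(doc_vectors, keep=2):
    """Keep only each document's `keep` strongest components."""
    index = defaultdict(list)            # term -> [(doc, weight), ...]
    for doc, vec in doc_vectors.items():
        top = sorted(vec.items(), key=lambda kv: -kv[1])[:keep]
        for term, w in top:
            index[term].append((doc, w))
    return index

def search(index, query_vec):
    """Score documents by dot product over the terms the query contains."""
    scores = defaultdict(float)
    for term, qw in query_vec.items():
        for doc, w in index.get(term, []):
            scores[doc] += qw * w
    return sorted(scores.items(), key=lambda kv: -kv[1])

docs = {
    "d1": {"bridge": 0.9, "river": 0.7, "card": 0.1},
    "d2": {"bridge": 0.8, "card": 0.9, "hearts": 0.6},
}
index = build_partial_index(docs)
results = search(index, {"bridge": 1.0, "river": 1.0})
# d1 ranks above d2 for the "river bridge" query.
```

Dropping the weak components loses a little accuracy but keeps the posting lists, and hence the query cost, small.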
http://labs.google.com/papers/orkut-kdd2005.html
This paper is from 2007, so its results are not yet in Google. The methodology is amazingly powerful and could be applied in a variety of circumstances. Slightly chillingly, the “Orkut” data set, which correlates friendship with personality and other similarities, is used. The paper can effectively find you matches and build up a friendship network for you. Equally, it can judge you by the friends that you have!
Potentially you could take El Cid and its English translation and match words up. Or rather, you are not just matching words, you are matching vectors. An inverted file then gives the correct Spanish translation for an English vector and vice versa. This program will take any set of vector pairs and match them.
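The pairing idea can be illustrated with a toy co-occurrence match. The article's version would match LSA vectors rather than raw words; this just shows the matching step, and the aligned sentence pairs below are invented examples.

```python
# Toy word matching across aligned texts by co-occurrence counts
# (illustrative stand-in for matching semantic vectors).

from collections import Counter

def best_matches(pairs):
    """pairs: list of (english_sentence, spanish_sentence) alignments."""
    co = Counter()
    for en, es in pairs:
        for a in en.split():
            for b in es.split():
                co[(a, b)] += 1          # count every cross-language pairing
    matches = {}
    for a in {w for en, _ in pairs for w in en.split()}:
        candidates = [b for (x, b) in co if x == a]
        matches[a] = max(candidates, key=lambda b: co[(a, b)])
    return matches

pairs = [("the river", "el rio"),
         ("the boat", "el barco"),
         ("a river", "un rio")]
matches = best_matches(pairs)
# "the" pairs most often with "el", "river" with "rio".
```

With only a handful of pairs some words stay ambiguous; the scheme sharpens as more aligned text is added, which is why scale matters so much to statistical translation.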
Translation
At present translation with Google Translate is rather poor.
El barco atraviesa una cerradura - The boat goes through a lock
La estacion de resorte - intended as “the season of spring”, but a resorte is a mechanical spring.
http://www.stefanriezler.com/PAPERS/NAACL06.pdf
In 2006 Google recruited Stefan Riezler. This is interesting in that it indicates the direction in which Google is moving. Here is his CV:
http://www.stefanriezler.com/CV07.pdf
It is probably a pretty good summary of the way Google intends to go. One thing should be pointed out straight away: the Google NLP initiatives take strict parsing as their starting point. This contrasts with some versions of Latent Semantic Analysis, where unparsed words are entered. Google looks at subjects, verbs, adjectives, adverbs, objects and possessives. Google is also interested in question-and-answer responses.
http://www.cs.nyu.edu/~mohri/postscript/hbka.pdf
This paper is a review article on the closely related area of speech recognition. I should say straight away that the recognition of individual phonemes by computer is as good as, if not better than, that performed by humans. The reason human speech recognition is better overall is that humans recognize words in context, and this makes speech recognition very similar to translation. I can illustrate this with words that have different meanings and spellings but the same phoneme structure: whether (si) and weather (tiempo); hear (oir) and here (aqui). One slightly disappointing thing is that the speech and NLP groups at Google appear to be working independently.
Speech is in fact a far harder problem than translation or the discernment of meaning from text, because in translating from text you have fewer choices. The method used is Markov chains and the association of neighbouring words, including grammar. Interestingly, neither in Riezler's work nor here are words chosen on the basis of long-range meaning. If we had a medical paper, say, we could bias the search towards medical terms; they do not seem to do this.
To produce the right words in speech you need an iterative annealing process: you may wish to change the phoneme or word assignment once other words have been found.
http://www.stefanriezler.com/PAPERS/ACL07.pdf
Suppose I am not looking for a website but want to know a fact: “What is the velocity of light?”, “What is somebody's address?”, “What is the turnover of company X?”. To answer such a question it must be parsed so that its meaning can be ascertained. Here we are quite close to the Turing Test.
http://www.cs.cmu.edu/~acarlson/semisupervised/million-fact-aaai06.pdf
http://www.cs.bell-labs.com/cm/cs/who/pfps/temp/web/www2007.org/papers/paper560.pdf
This is the first stage of Google's programme. A database of, initially, a million facts will be gathered and used to answer questions. It will of course be extended as time goes on.
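Answering a question from a fact database can be sketched as follows. This is a deliberately crude toy: the “parsing” is a pattern match rather than the real NLP the papers describe, and the facts and phrasings are invented for illustration.

```python
# Toy question answering over a (subject, attribute) fact table
# (illustrative sketch; real systems parse the question properly).

import re

FACTS = {
    ("light", "velocity"): "299,792,458 m/s",
    ("water", "boiling point"): "100 degrees C at 1 atm",
}

def answer(question):
    """Match 'What is the <attribute> of <subject>?' and look it up."""
    m = re.match(r"what is the (.+) of (.+)\?", question.lower())
    if not m:
        return None                      # question form not understood
    attribute, subject = m.group(1), m.group(2)
    return FACTS.get((subject, attribute))

# answer("What is the velocity of light?") returns the stored fact.
```

The hard part, of course, is filling the table automatically from web pages, which is exactly what the million-fact papers are about.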
Head to Head with Microsoft
Google has a spreadsheet and a word processor. It also features desktop publishing.
https://www.google.com/accounts/ServiceLogin?service=writely&passive=true&continue=http%3A%2F%2Fdocs.google.com%2F%3Fhl%3Den_GB&hl=en_GB<mpl=homepage&nui=1&utm_source=en_GB-more&utm_medium=more&utm_campaign=en_GB
There are advantages and disadvantages in using the Web for basic word processing and spreadsheets. The advantages are that the software is :-
1) Up to date.
2) Will run on both Linux and Windows systems.
3) Is free.
4) There are facilities for work sharing.
http://labs.google.com/papers/gfs.html
5) Your work is backed up automatically.
The disadvantages are that you need to be connected to the Web to access your work, and there are question marks over security, although to be fair Google is investing considerable effort in this field.
http://labs.google.com/papers.html
This gives a list of Google papers. Note those on security. I have not mentioned them individually since my main thrust is AI.
I feel we should look at spellcheckers and how word processing and AI can be integrated. Often when we misspell a word the spelling is valid but means something different: people frequently write the wrong one of two words that sound the same. This puts spellcheckers in the same position as translators, and on a Web spell-check the latest translation technology can be used. A translator used as a spellchecker is one stage up on anything Microsoft has produced. Writing in Spanish, “si” and “tiempo” are never confused, yet in English a large number of people confuse “whether” and “weather”, and present-day spellcheckers pass both.
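A context-aware check for homophones like “whether”/“weather” can be sketched with a tiny bigram model: score each candidate in its surrounding words and flag the one the context makes more likely. The bigram counts below are made up for illustration; a real system would learn them from a large corpus.

```python
# Toy context-sensitive homophone check using invented bigram counts.

BIGRAMS = {
    ("the", "weather"): 50, ("weather", "is"): 40,
    ("know", "whether"): 30, ("whether", "to"): 35,
}

def score(words):
    """Sum of bigram counts over adjacent word pairs (0 for unseen pairs)."""
    return sum(BIGRAMS.get(pair, 0) for pair in zip(words, words[1:]))

def pick_homophone(sentence, position, candidates):
    """Try each candidate at `position` and keep the best-scoring one."""
    words = sentence.split()
    return max(candidates,
               key=lambda c: score(words[:position] + [c] + words[position + 1:]))

suggestion = pick_homophone("the whether is cold", 1, ["whether", "weather"])
# Context favours "weather" here, which a word-by-word checker would miss.
```

This is essentially the Markov-chain word association described earlier applied to proofreading, which is why the article treats spellchecking and translation as the same problem.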
There is one other point. If I want to write something learned, I want references. If I write on Web software, Google can suggest them to me; Microsoft software sitting on my own computer cannot do this.
Conclusion
I started off this investigation rather skeptical, and I came away from Google Translate distinctly unimpressed: “La estacion de resorte” - I did not know stations were elastic! But I came away deeply impressed with the work Google is doing and its scope. My criticism that the natural-language research should involve more interchange of information is perhaps rather carping, considering the difficulties involved in running a programme on this scale.
On the question of personal information I can see where Google is coming from. Let's put it this way: when you meet a friend in the street and start a conversation, you draw on the “personal” information they have told you before. Any machine hoping to pass the Turing Test must likewise store personal information; you need it stored if you are ever going to “talk to Google”. It is also vital for proper retrieval of information: the information you get must be relevant to you.
http://michaelaltendorf.wordpress.com/2007/06/13/top-100-alternative-search-engines-from-readwrite-web/
The whole point of search-engine technology is to get relevant references and facts. This reference misses the point completely: if you need a 3D display, your basic engine is lousy.
Google is now entering the world of facts rather than just websites. This could have some very interesting consequences in the future.
There is one fact that society in the future will have to come to terms with. To become president of the United States you need television exposure. Television, telephones and the Internet are now becoming one. Who will choose the programs you watch? Why Google of course. This is a tremendous responsibility.