Google Makes AI With 16,000 Cores

Google has built another artificial intelligence. But instead of parsing text or controlling cars, this one is aimed at understanding images, which is kinda the holy grail of artificial intelligence.
The new network runs on 16,000 processor cores spread across 1,000 different machines. That scale is arguably Google's biggest accomplishment here, because coordinating that many machines in a single simulation is exactly where most AI projects stumble.
The neural network itself is very promising, too. Not only can it identify objects without being given a prior concept of them (as Google Fellow Jeff Dean put it, "We never told it during the training, 'This is a cat.' It basically invented the concept of a cat."), it can also distinguish between types of object. It can tell apart not just any face, for example, but a cat face from a human face. That had never been done before.
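The striking part is learning a concept with no labels at all. A much simpler way to see the same idea is ordinary unsupervised clustering: the algorithm is never told which group is which, yet it carves the data into groups on its own. This is a generic k-means sketch for illustration only, not Google's actual system (which was a large multi-layer neural network); everything here is a toy.

```python
# Toy illustration of unsupervised "concept" formation: group data into
# clusters without ever seeing a label. This is plain k-means on 2-D points,
# NOT Google's image network; all names and numbers are hypothetical.

def kmeans(points, k, iterations=20):
    """Cluster 2-D points into k groups without any labels."""
    centroids = points[:k]  # naive initialization: first k points
    for _ in range(iterations):
        # Assignment step: each point joins its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            best = min(
                range(k),
                key=lambda i: (p[0] - centroids[i][0]) ** 2
                            + (p[1] - centroids[i][1]) ** 2,
            )
            clusters[best].append(p)
        # Update step: move each centroid to the mean of its cluster.
        centroids = [
            (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return clusters

# Two obvious blobs; the algorithm "discovers" them unaided.
data = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2),
        (5.0, 5.1), (5.2, 5.0), (5.1, 5.2)]
groups = kmeans(data, k=2)
print(sorted(len(g) for g in groups))  # → [3, 3]
```

The network in the article does something far richer, of course (it learns visual features, not point coordinates), but the "nobody told it what a cat is" property is the same: structure emerges from the data alone.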
But then, most earlier machine vision relied on simpler techniques for identifying objects: the computer would trace an outline of the object and then tag that shape as something. As you can imagine, that made the approach hard to generalize, since things look different from different angles, and most of the useful information lives inside the outline, not on it.
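To see why the outline-and-tag approach is so brittle, here is a deliberately crude sketch: extract a shape's boundary cells from a binary grid and compare them, cell for cell, against a stored template. Rotate the very same object and the exact match fails. This is an illustrative simplification, not any production vision system.

```python
# Toy sketch of "outline and tag" object recognition: boundary extraction
# plus exact template matching. Purely illustrative; real systems were more
# sophisticated, but they shared the same weakness to viewpoint changes.

def outline(grid):
    """Return the set of filled cells that touch an empty cell or the grid edge."""
    rows, cols = len(grid), len(grid[0])
    edge = set()
    for r in range(rows):
        for c in range(cols):
            if not grid[r][c]:
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if not (0 <= nr < rows and 0 <= nc < cols) or not grid[nr][nc]:
                    edge.add((r, c))
    return edge

def rotate(grid):
    """Rotate a binary grid 90 degrees clockwise."""
    return [list(row) for row in zip(*grid[::-1])]

# A 2x3 "object" tagged by its stored outline...
template = [[1, 1, 1],
            [1, 1, 1]]
seen = rotate(template)  # the same object viewed from a different angle

print(outline(template) == outline(seen))  # → False: exact matching breaks
```

A learned representation, by contrast, can pick up features that survive changes in pose and angle, which is part of why the neural-network approach scales where outline matching did not.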
The team is training the bot on the internet, turning it loose on the web to look for pictures of cats. Its accuracy is still fairly terrible, but the hit rate is improving steadily. Right now it sits at 15.8 percent across 20,000 images, which is already 70 percent higher than previous studies, and it is expected to double its performance.
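"70 percent higher" here means a relative improvement, not 70 percentage points. A quick back-of-the-envelope check makes the arithmetic concrete; the 9.3 percent baseline below is an assumed prior figure used only for illustration, not a number from this article.

```python
# Sanity-check the "70% higher" claim as a relative gain.
previous_accuracy = 0.093  # assumed earlier result, for illustration only
new_accuracy = 0.158       # the figure reported for the new network

relative_gain = (new_accuracy - previous_accuracy) / previous_accuracy
print(f"{relative_gain:.0%}")  # → 70%
```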
Google has decided the research is promising enough that it has pulled the project out of Google X, the company's skunkworks, and moved it into its search division. Google probably has big plans for a smarter image search.