Google's Bots Read Like Humans, Now

When the first version of Google's search engine was released, it relied on one thing and one thing only: the links on a page. A database of links was built up by stripping out all other page elements until just the metadata and the links remained.
The only reason for the bots to execute a page's code is to get an accurate picture of what the page actually looks like once it is rendered.
Further, it looks like the bots aren't just scraping URLs out of the markup anymore, but mimicking how actual users click the links. That is a big deal, because it means the bots are using the web the way we do. Machines reading pages so differently from humans has been a major obstacle to developing true semantic networks on the web, and closing that gap may be how Google plans to take its new Knowledge Graph to the next level.
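To make the distinction concrete, here is a minimal sketch of the two crawling styles. It is illustrative only, not Google's actual crawler: the placeholder URL and the use of the Playwright browser-automation library are assumptions. The first function pulls href attributes straight out of raw HTML, which is roughly how an old-style crawler saw a page; the second loads the page in a real browser engine, lets its JavaScript run, and clicks a link the way a user would.

```python
# Illustrative sketch only: contrasts static link extraction with a
# rendered, click-driven crawl. Not Google's implementation; the URL
# "https://example.com" is a placeholder.
from html.parser import HTMLParser
from urllib.request import urlopen

from playwright.sync_api import sync_playwright  # pip install playwright


class LinkExtractor(HTMLParser):
    """Old-style crawling: pull href attributes straight out of the HTML."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def static_crawl(url):
    # No JavaScript runs here, so links injected by scripts are invisible.
    html = urlopen(url).read().decode("utf-8", errors="replace")
    parser = LinkExtractor()
    parser.feed(html)
    return parser.links


def rendered_crawl(url):
    # Renderer-driven crawling: load the page in a real browser engine,
    # let its JavaScript execute, then follow a link like a user would.
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(url, wait_until="networkidle")
        links = page.eval_on_selector_all(
            "a[href]", "els => els.map(e => e.href)"
        )
        landed_on = None
        if links:
            page.click("a[href]")  # click the first link, as a user might
            landed_on = page.url   # where the click actually took us
        browser.close()
        return links, landed_on


if __name__ == "__main__":
    print(static_crawl("https://example.com"))
    print(rendered_crawl("https://example.com"))
```

The practical difference is that the rendered crawl sees links that only exist after scripts run, and it learns where a click really leads rather than trusting the raw href, which is the behavior the developer's logs appear to show.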
We still know very little about what Google is working on in this regard. After all, the evidence exists purely in one developer's logs, and even then it is only a record of a bot doing something it shouldn't have been doing. But the fact that this is being done at all has far-reaching implications for the future of the web.