Stephen Wolfram, a scientist of some note and the founder of Wolfram Alpha, wants computers to have their own top-level domain.
The .data domain, as Wolfram names it, would let computers communicate with sites designed specifically for other computers, rather than for humans.
The web as we know it was designed to be parsed by people. The pretty text, fancy menus and complex AJAX-driven effects that make sites so interesting to our eyes make it hard for computers to pull out data. We have worked around that a bit with special files meant to be read by machines, but by and large the web is a hostile place for its machine visitors.
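To see the difference, here is a minimal sketch contrasting the two. The HTML snippet and the JSON file are invented examples: pulling a number out of a human-styled page takes a brittle pattern tied to the markup, while the machine-readable version parses in one line.

```python
import json
import re

# A page styled for humans: the number is buried in presentational markup.
html = '<div class="stats"><span class="label">Population:</span> <b>8,336,817</b></div>'

# A machine-readable equivalent of the same fact: structured and self-describing.
data = '{"population": 8336817}'

# Scraping the human page needs a regex tied to this exact markup...
scraped = int(re.search(r'<b>([\d,]+)</b>', html).group(1).replace(',', ''))

# ...while the structured file is one call to parse.
parsed = json.loads(data)['population']

print(scraped, parsed)  # both yield 8336817
```

The regex breaks the moment a designer restyles the page; the JSON keeps working, which is the whole argument for a machine-oriented web.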
The web is about to undergo a fairly serious change, with ICANN, the body that oversees domain names, expanding the pool of TLDs. A TLD, or Top-Level Domain, is something like .com, .gov, .biz, .cn, etc. There are already a bunch, but there are also a ton of sites. When startups start randomly misspelling their names just to find an address that isn't taken, you know a TLD is saturated.
Wolfram wants .data added to that list for computer-to-computer services, of which there are a growing number. As his blog post puts it:
My concept for the .data domain is to use it to create the “data web”—in a sense a parallel construct to the ordinary web, but oriented toward structured data intended for computational use. The notion is that alongside a website like wolfram.com, there’d be wolfram.data.
If a human went to wolfram.data, there’d be a structured summary of what data the organization behind it wanted to expose. And if a computational system went there, it’d find just what it needs to ingest the data, and begin computing with it.
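Wolfram's post does not specify what a computational system would actually find at a .data address, so here is one hedged sketch of the idea: assume a .data site returns a JSON manifest listing the datasets an organization exposes (the domain, field names, and format here are all invented for illustration).

```python
import json

# Hypothetical manifest a .data site might serve; the format is an
# assumption, since the proposal does not define one.
manifest_json = '''
{
  "organization": "Example Corp",
  "datasets": [
    {"name": "quarterly-sales", "url": "http://example.data/sales.csv", "format": "csv"},
    {"name": "store-locations", "url": "http://example.data/stores.json", "format": "json"}
  ]
}
'''

def list_datasets(manifest_text):
    """Return (name, url) pairs a program could fetch and compute with."""
    manifest = json.loads(manifest_text)
    return [(d["name"], d["url"]) for d in manifest["datasets"]]

for name, url in list_datasets(manifest_json):
    print(name, "->", url)
```

A human hitting the same address could be shown a rendered summary of that manifest, which matches Wolfram's dual-audience description.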
The obvious benefit would be knowing, at a glance, whether a site was meant for humans or not. When coding, you would know that a .data address is meant for your code, not your customer. But there is also the benefit of running a more streamlined server. Freed from the visual trappings of a normal site, a server can respond blazingly fast while using far less bandwidth, both of which are good for the web.
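How streamlined can a data-only server be? A rough sketch, using nothing but Python's standard library: a handler that serves a small JSON dataset and nothing else, with no templates, stylesheets, or scripts. The dataset and route are invented for the demo.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical dataset this server exposes; there is no presentation layer at all.
DATASET = {"temperatures_c": [21.4, 22.1, 19.8]}

class DataHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps(DATASET).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging for the demo
        pass

# Port 0 asks the OS for any free port, so the demo is self-contained.
server = HTTPServer(("127.0.0.1", 0), DataHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

with urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/") as resp:
    fetched = json.loads(resp.read())

print(fetched)
server.shutdown()
```

The whole response is a few dozen bytes of JSON, which is the kind of payload a machine-to-machine web could trade in.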
People have been arguing for a web that machines can understand since the web's inception. Now, though, we are reaching a point where we need it. There is so much information out there that, without a computer's help, we will never be able to parse it all. The next evolution of the web is computer-assisted browsing, and we are already beginning to see it in Google's recent search changes. Unless we want to hold the internet back, we need a better, more semantic web.