Tuesday, May 05, 2009


The Internet, in New Scientist, part 2

Today I’ll continue my series of comments on the New Scientist feature “Eight things you didn’t know about the internet”. Part 2 is “Could the net become self-aware?”, by Michael Brooks.

Mr Brooks opens with this:

In engineering terms, it is easy to see qualitative similarities between the human brain and the internet’s complex network of nodes, as they both hold, process, recall and transmit information. “The internet behaves a fair bit like a mind,” says Ben Goertzel, chair of the Artificial General Intelligence Research Institute, an organisation inevitably based in cyberspace. “It might already have a degree of consciousness”.

Of course, this whole discussion depends upon what it means for the internet to be self-aware and how one defines “consciousness”. Mr Brooks looks at that in the rest of the article, considering a “network that constantly strives to become better at what it does, reorganising itself,” and such. “It could happen within a decade,” says a Belgian researcher.

Indeed, this is exactly the sort of thing that many of us are working on, in a number of ways.

The Internet standards community has for some years been working on standards to help with service discovery and with automatically routing around problems in the network. Those working on search technology have been developing better search mechanisms and algorithms that are more likely to find what you’re really looking for. Natural-language experts continually come out with systems that more accurately understand things that users say in plain language.[1] People working on pervasive/ubiquitous computing are always looking at how a network of always-connected everyday devices can take advantage of the Internet to enhance our typical, mundane activities.
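
To make the service-discovery piece concrete, here is a minimal sketch in Go (standard library only) of a client looking up DNS SRV records (RFC 2782), one standard way a program can find the servers it needs without a human in the loop. The service name and domain (“xmpp-client”, “example.com”) are placeholders of my own, not anything from the article.

// srvdiscover.go: a minimal sketch of DNS-based service discovery via SRV records.
package main

import (
	"fmt"
	"log"
	"net"
)

func main() {
	// Ask DNS which hosts provide the XMPP client service over TCP for
	// example.com; deployments publish records like _xmpp-client._tcp.example.com
	// for exactly this purpose. Names here are illustrative placeholders.
	cname, srvs, err := net.LookupSRV("xmpp-client", "tcp", "example.com")
	if err != nil {
		log.Fatalf("SRV lookup failed: %v", err)
	}

	fmt.Println("canonical name:", cname)
	for _, srv := range srvs {
		// Each record names a host and port; the priority and weight fields
		// let clients prefer some servers and route around unavailable ones.
		fmt.Printf("%s:%d (priority %d, weight %d)\n",
			srv.Target, srv.Port, srv.Priority, srv.Weight)
	}
}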

All of this fits together into a network that “knows what it needs to do” to serve us better. It can find information for us, locate the services we need, repair itself when there are glitches, and communicate with its components without our having to stay “in the loop.” Before long, it will anticipate what it should do and get a jump on us — in some ways, it’s already doing that. In many senses, those are things the human brain does.

Does that make it “self-aware”? I don’t think I’d call it that, but it really is a question of semantics.

[1] Though I don’t think any of this will serve the fellow who reached one of my monthly archive pages with a Google search for “please help me! i’m being hunted by a government agency that doesn’t exist.”
