Web development is anything but static. Progress is constant, sometimes in tiny incremental steps and, just occasionally, in sweeping, life-altering leaps when a killer app hits the Web.
Web evolution is an observable phenomenon, and there is plenty of room for curiosity and speculation about where it is heading and what the eventual outcome will be. Or, more accurately, since it is a continuous, ongoing process: what will that outcome look like at a given stage in the future?
Khris Loux, Eric Blantz and Chris Saad have written an article about how they see change coming, and by way of providing a clue they have included you in the authorship as well.
Their basic suggestion is that the Web – while not exactly evolving into a brain as we know it – will certainly operate like one. The key to brain operation is the transmission of information from brain cell to brain cell. Since brain cells are discrete objects, they need a means to reach out to each other, and these bridges are called synapses. There are two kinds of synapse, one chemical in nature and the other electrical, and they work quite differently. Chemical synapses transmit information via neurotransmitters and receptors, while electrical synapses use conductive channels in the cells’ membranes.
One very important thing to remember is that neither of these methods is binary in the sense of being fully on as opposed to fully off. A synapse is simply more active or less active.
The purpose of this mini brain science lesson is to define the limits of the Synaptic Web metaphor. It will be a long time, if ever, before we can get a computer system to work like the brain.
However, that shouldn’t detract from some of the points that Khris, Eric and Chris make. They say that the Web has evolved from document delivery (Web 1.0) to a communication platform (Web 2.0), and is heading towards, “something more profound: a dynamic Web of adaptive ‘organic’ and implicit connections whereby real-time information flows give structure and meaning to previously-unconnected sets of data. The Internet is a sea of conversations streaming through connections, and these patterns have meaning.”
In a previous article, The Collective Brain App, we talked about how increasing interconnectivity would obviate the need for the construction of monolithic websites and their associated data silos, and communication would become more focused on real connections than on billboard-like, broad-based announcements. This is very much in tune with the authors’ idea that, “Like individual neurons, ‘sites’ must now maximise their connections to outside data sources and applications in response to external stimuli or risk being ‘pruned’ themselves.”
Facebook now has over 500 million users whose activity revolves around more than 900 million social objects (images, profiles, links, groups, and so on). The relationships that users have with each other and with their social objects have been described by Mark Zuckerberg and his team as a social graph.
All these connections create patterns of behaviour, interactivity and relationship. The significant and relevant patterns can be said to hold meaning.
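One way to picture such a graph is as weighted connections between users and social objects, where each interaction strengthens a connection. The sketch below is purely illustrative; the class and edge scheme are assumptions, not how Facebook actually stores its graph.

```python
from collections import defaultdict

class SocialGraph:
    """Toy social graph: users and social objects as nodes,
    interactions as weighted edges (illustrative only)."""

    def __init__(self):
        # edge weights keyed by (source, target) pairs
        self.edges = defaultdict(float)

    def record(self, source, target, weight=1.0):
        """Record an interaction, strengthening the connection."""
        self.edges[(source, target)] += weight

    def strength(self, source, target):
        return self.edges[(source, target)]

graph = SocialGraph()
graph.record("alice", "photo:42")   # Alice views a photo
graph.record("alice", "photo:42")   # ...and later comments on it
graph.record("bob", "group:hiking")

print(graph.strength("alice", "photo:42"))  # 2.0
```

Repeated interactions accumulate weight, which is what makes some patterns of connection stand out as significant.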
“Implicit information derived from content and gestures is one of the great opportunities of the Synaptic Web. To observe a set of gestures and connect them together creates a dynamic profile of interests, intentions and friends that can be used for discovery and filtering.”
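Deriving such a dynamic profile from gestures can be sketched very simply: weight each observed gesture by how strong a signal of interest it is, then aggregate per topic. The gesture types, weights and data shapes below are assumptions chosen for illustration.

```python
from collections import Counter

# Hypothetical gesture stream: (user, gesture, topic) tuples.
gestures = [
    ("alice", "like", "photography"),
    ("alice", "share", "photography"),
    ("alice", "like", "hiking"),
    ("alice", "comment", "photography"),
]

# Assumed weights: sharing signals more interest than liking.
GESTURE_WEIGHT = {"like": 1.0, "comment": 2.0, "share": 3.0}

def interest_profile(stream, user):
    """Aggregate one user's gestures into an implicit interest profile."""
    profile = Counter()
    for who, gesture, topic in stream:
        if who == user:
            profile[topic] += GESTURE_WEIGHT.get(gesture, 0.5)
    return profile

print(interest_profile(gestures, "alice"))
# photography scores 1 + 3 + 2 = 6.0; hiking scores 1.0
```

No gesture on its own says much; it is the aggregation over time that yields a profile usable for discovery and filtering.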
This, perhaps, is where the great leap forward promises to take place.
“In the Synaptic Web, filtering is more important than search. While search is about narrowing the infinite document Web to a digestible set of pages, filtering is about narrowing the torrent of streams, nodes and networks into something that matches your current and evolving criteria. It’s about defining and constantly refining your world view so that information can find you.”
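The contrast with search can be made concrete: instead of issuing a one-off query against a document index, a persistent filter sits on the stream and lets matching items through as criteria evolve. A minimal sketch, with hypothetical item and criteria shapes:

```python
def filter_stream(stream, criteria):
    """Yield only items whose tags overlap the user's current criteria.
    Unlike a search query, the criteria persist: information finds the user."""
    for item in stream:
        if criteria & item["tags"]:
            yield item["text"]

stream = [
    {"text": "New camera released", "tags": {"photography"}},
    {"text": "Stock market dips",   "tags": {"finance"}},
    {"text": "Trail conditions",    "tags": {"hiking"}},
]

criteria = {"photography"}
print(list(filter_stream(stream, criteria)))  # only the camera item

criteria |= {"hiking"}  # criteria evolve as interests change
print(list(filter_stream(stream, criteria)))  # camera and trail items
```

Refining the criteria set is the “constantly refining your world view” the authors describe: the filter changes, not the query.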
Of course, to make all this possible, code will have to be written and applications developed. The authors list what they expect this new software will have to do:
- They connect two or more categories of things together (e.g. people and data, content and communication, data and devices, places and companies).
- They create or derive new/novel meaning or utility from implicit connections (e.g. interest profiles, filtering, visualisations).
- The connections they enable adjust in real or near-real time to changes in users’ behavior or other inputs.
- They bias towards implicit connections that are strengthened or weakened by actual behavior rather than explicitly-stated connections that are arguably less accurate and relatively inflexible.
- They use the Web as the platform (e.g. open standards and interoperable endpoints).
- They apply a variety of inputs to extend existing applications (e.g. GPS applied to maps, interests applied to dating).
- They become stronger through network effects (e.g. crowd-sourced images, social gestures, etc.).
- Though they might have a companion site, they are defined by usership and information flows and are untethered from any given destination site.
- One of their primary inputs and/or outputs is the stream.
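The fourth point, connections strengthened or weakened by actual behaviour, echoes the synapse analogy directly and can be sketched as a decay-and-reinforce loop. The decay factor, reinforcement boost and pruning threshold below are assumed values for illustration.

```python
DECAY = 0.9        # assumed per-tick decay applied to every connection
REINFORCE = 1.0    # assumed boost when a connection is actually used

def update_connections(weights, used):
    """Decay all implicit connections, reinforce those exercised by
    actual behaviour, and prune any that fall near zero."""
    updated = {}
    for edge, w in weights.items():
        w = w * DECAY + (REINFORCE if edge in used else 0.0)
        if w > 0.05:  # pruning threshold (assumed)
            updated[edge] = w
    return updated

weights = {("site_a", "feed_x"): 1.0, ("site_a", "feed_y"): 0.04}
weights = update_connections(weights, used={("site_a", "feed_x")})
print(weights)  # feed_x strengthened; feed_y pruned
```

Like a synapse, each connection is never simply on or off; it is more active or less active, and neglected links eventually disappear, which is the “pruning” the authors warn sites about.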
These criteria offer some fascinating challenges for those wishing to rise to them. But to make the Web, with its mind-boggling amount of data, relevant, meaningful and accessible, it is worth using the way our brains operate as a model. After all, the brain is still the best computational machine we know of.