How will we interact with our data in the age of cloud computing and semantic search?

John Collins interviewed me recently for an article he was writing for the Irish Times on life in 2030. Here are my full answers to his questions, which may be of interest.

Human-computer interaction: the PC of 20 years ago had a keyboard, mouse and screen (although in terms of spec it was less powerful than most mobile phones). Microsoft is releasing Project Natal this year, removing the need for a controller when playing Xbox games. How might we interact with computers in 20 years' time?

When I heard about Natal, I said to myself: “Johnny Lee, the Wiimote guy, must be involved”, and sure enough he is! Certainly, there are some very interesting prototypes in the HCI space. One is the Wiimote whiteboard, where people can build a €50 electronic whiteboard from a Wii remote and an LED pen (as opposed to commercial electronic whiteboards that can cost thousands of euro). Another is the Sixth Sense project, which enables gesture recognition and interaction with one’s environment through a mini projector and camera (I spotted this in Technology Review recently).

Cloud computing: is the idea of locally stored data going to seem crazy in 20 years' time?

We are getting used to our data being available in the cloud, through applications like Gmail or Evernote, and people now expect certain features from their applications: accessibility, the freedom to get at their data wherever they are, security (of course), and the ability to share tasks or documents with others. As a result, many traditional applications like MS Office are augmenting their offerings with online cloud-powered equivalents (Office Live). Beyond just data, computing power can also be leveraged in a cloud of machines physically removed from a local user or application, allowing the cloud to carry out tasks that an individual's computer cannot. There's a diverse range of software and services moving to the cloud, nearly all of it accessible via a fully-featured web browser on a computer or a smartphone.

It’s not just in computing that we’ve become used to the cloud. Banks have become clouds too, allowing a person to go to any ATM, wherever they are, and withdraw money from their account. Electricity can be thought of as a cloud: it can come from various places, and you don’t have to know which. It’s just there, whether from companies located in Ireland or beyond. In the future, we may see more services moving to the cloud, or to clouds of their own: health services, security monitoring, video recording and retrieval, etc.

One issue with cloud computing is that users will always want to know that they have full control over their own (personal) data. To this end, there will have to be guarantees that personal data moved into the cloud is secure and will be protected. I don’t think there will necessarily be an end to locally-stored data. A cache is still important, even if the main storage is in the cloud: people will still want to be able to access important information locally without having to worry about their net connection being down.
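As a minimal sketch of that cache idea (every name here is hypothetical, not any particular product's API): a client can treat the cloud as the primary store and keep a local read-through copy, so the last-fetched version of each item survives a dropped connection.

```python
import json
import os

class CachedStore:
    """Read-through cache: the cloud is primary, local disk is the fallback.
    `fetch_remote` stands in for whatever cloud API is in use (hypothetical);
    it is any callable taking a key and raising OSError on network failure."""

    def __init__(self, cache_dir, fetch_remote):
        self.cache_dir = cache_dir
        self.fetch_remote = fetch_remote
        os.makedirs(cache_dir, exist_ok=True)

    def _path(self, key):
        return os.path.join(self.cache_dir, key + ".json")

    def get(self, key):
        try:
            value = self.fetch_remote(key)         # try the cloud first
            with open(self._path(key), "w") as f:  # refresh the local copy
                json.dump(value, f)
            return value
        except OSError:
            # Network is down: fall back to the cached copy (this will still
            # fail, reasonably, if the item was never fetched before).
            with open(self._path(key)) as f:
                return json.load(f)
```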

The Web: clearly you will have thoughts on the Semantic Web. I’m also interested in what happens when not just people and PCs/phones are connected to the Web but also possibly hundreds of other devices we might own.

On the Web, we’re starting to see huge interest in the idea of semantic search. Bing from Microsoft, Twine 2 from Radar Networks and Google are the main names in this space.

People are tired of typing in keywords and getting back a bunch of pages that may or may not be related to what they are looking for, when what they really want is to find some particular thing whose properties they already partly know: a person, a recipe, a company, a product. For example, a recipe for chicken soup, or a person in Cork with skills in home entertainment systems.
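To make that concrete, here is a small sketch using Python's rdflib (the person, the data and the `ex:` vocabulary are all invented for illustration): once facts like "person, skill, location" are published as structured data, the query can say exactly what is wanted instead of guessing at keywords.

```python
from rdflib import Graph, Literal, Namespace, URIRef

EX = Namespace("http://example.org/terms/")  # hypothetical vocabulary

g = Graph()
alice = URIRef("http://example.org/people/alice")
g.add((alice, EX.skill, Literal("home entertainment systems")))
g.add((alice, EX.location, Literal("Cork")))

# A structured query: "a person with this skill, in this location"
results = g.query("""
    PREFIX ex: <http://example.org/terms/>
    SELECT ?person WHERE {
        ?person ex:skill "home entertainment systems" ;
                ex:location "Cork" .
    }
""")
for row in results:
    print(row.person)  # -> http://example.org/people/alice
```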

There are a few ways to get semantic, meaningful information produced on the Web, broadly automated or human-generated. You can apply various mining or natural language processing techniques to extract certain facts from pages of text or from their structure. Or you can look at what people are making and add some metadata to describe what it is. Both work on their own, and they work better when combined.
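A toy illustration of the two routes (again rdflib, again an invented `ex:` vocabulary): one fact is extracted from free text with a crude pattern, the other is attached directly by the author as metadata, and both end up in the same graph.

```python
import re
from rdflib import Graph, Literal, Namespace, URIRef

EX = Namespace("http://example.org/terms/")
g = Graph()
doc = URIRef("http://example.org/posts/42")  # hypothetical page

# Route 1: automated extraction -- a crude pattern pulls a fact out of text.
text = "This recipe serves 4 people."
match = re.search(r"serves (\d+)", text)
if match:
    g.add((doc, EX.serves, Literal(int(match.group(1)))))

# Route 2: human-generated -- the author attaches metadata explicitly.
g.add((doc, EX.category, Literal("recipe")))

print(g.serialize(format="turtle"))
```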

The SFI-funded DERI institute at NUI Galway recently worked on adding semantics to Drupal, one of the largest content management systems in use on the Web (by the likes of the White House, Warner Bros Records, The Onion, etc.), and the alpha version of Drupal 7 with added semantics was released recently. Google has been mining information from various websites about how many people are talking in different discussion areas and when they were last updated, but this addition to Drupal will allow site owners to publish that information themselves, in a way that can easily be picked up by search engines. This will help search engines augment their results with data about how many replies a particular discussion has, what topic it covers, and what else the author has written about. Sites using Drupal can then benefit from being boosted in search result listings.
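The kind of information involved can be sketched as triples using the SIOC vocabulary (the post URI and values below are invented; in Drupal 7 itself this data is embedded in the pages as RDFa rather than built up in code like this):

```python
from rdflib import Graph, Literal, Namespace, URIRef

SIOC = Namespace("http://rdfs.org/sioc/ns#")

g = Graph()
post = URIRef("http://example.org/forum/topic/123")  # hypothetical discussion

g.add((post, SIOC.num_replies, Literal(17)))
g.add((post, SIOC.topic, URIRef("http://example.org/categories/home-cinema")))
g.add((post, SIOC.has_creator, URIRef("http://example.org/users/jb")))

# A crawler that understands SIOC can now read the reply count directly
# instead of scraping it out of the HTML.
print(g.serialize(format="turtle"))
```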

Dries Buytaert, the creator of Drupal, wrote a nice blog post about semantic search last year describing the potential benefits of having many Drupal sites marked up with semantics: he envisaged going beyond just social website structures (blogs, comments) to vertical search, asking what would happen if many small websites all started using the same semantics to represent their items.

For example, take small companies publishing information on their products: search engines could directly index this data and show different products in certain price ranges, locations, etc., thereby disintermediating middlemen like Amazon and allowing people to get directly to the product supplier.
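A small sketch of what such vertical search could look like, assuming many suppliers published product data with a shared vocabulary (here an invented `ex:` one): a search engine could answer a price-range query directly over the pooled data.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import XSD

EX = Namespace("http://example.org/terms/")
g = Graph()

# Two hypothetical suppliers publishing the same kind of product data
for uri, price in [("http://shopA.example/tv-1", 499.0),
                   ("http://shopB.example/tv-2", 899.0)]:
    g.add((URIRef(uri), EX.price, Literal(price, datatype=XSD.decimal)))

# "Show me televisions under 600 euro" as a structured query
results = g.query("""
    PREFIX ex: <http://example.org/terms/>
    SELECT ?product ?price WHERE {
        ?product ex:price ?price .
        FILTER (?price < 600)
    }
""")
for row in results:
    print(row.product, row.price)  # only shopA's offer matches
```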

The other point you mentioned relates to other devices, and perhaps sensors, connected to the Web. Most people own several computing devices (including mobiles), and again, the cloud can help with accessing one's data across that range of devices. The potential issues and data scaling challenges here could make your head spin once you start thinking about the sensors in these devices, plus separate sensors in houses, cars, clothing and location monitoring systems, and how to manage all of this data and make it part of the Web we are using now.

There are parallels with social networks: Facebook and Twitter users stream out data all day about what they are doing, and sensors are a bit like that, streaming out data about what they are observing. Just as we filter the data that's relevant to us in social networks (usually through social connections with people with whom we share a common interest), we may filter the inputs from various devices (traffic stats, weather reports, power statuses, etc.) based on context: where we are and what we are doing.
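As a final sketch (pure illustration, no real sensor API; the readings and the context structure are invented): filtering a merged stream of sensor readings by the user's current context, much as a social stream is filtered by one's connections.

```python
# A toy context filter over a merged stream of sensor readings.
readings = [
    {"kind": "traffic", "area": "Galway", "value": "heavy on N6"},
    {"kind": "weather", "area": "Dublin", "value": "rain"},
    {"kind": "power",   "area": "Galway", "value": "grid normal"},
]

context = {"location": "Galway", "activity": "commuting"}

# Only some kinds of reading matter for a given activity.
RELEVANT = {"commuting": {"traffic", "weather"}, "at_home": {"power", "weather"}}

def relevant(reading, ctx):
    return (reading["area"] == ctx["location"]
            and reading["kind"] in RELEVANT[ctx["activity"]])

for r in filter(lambda r: relevant(r, context), readings):
    print(r["kind"], "->", r["value"])  # traffic -> heavy on N6
```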
