“With the Semantic Web, there’s been a lot of effort in building different technologies, the best ones possible. But it isn’t always the best one possible that is the most useful. You might be very happy with a small subset of things that are easier for developers to pick up and to do something useful for you.”
There has indeed been a great deal of work and effort in building the Semantic Web – a smorgasbord of technologies such as FOAF, RDFa, OWL, SPARQL, and SIOC, to name just a few. The idea was to step beyond the original Hypertext Markup Language (HTML) that Sir Tim Berners-Lee developed around 1990.
HTML enabled pages to link to each other. The Semantic Web, or Linked Data as it is often referred to now, links the data that describes both the structure of a page and the content within it. Most pages on the Web have an implicit design or structure according to their intended purpose, which enables them to be classified: bio pages, itinerary pages, commercial display pages, blog pages, and so on can be identified as such in their own right. The data on a page can also be made relevant to data elsewhere through the application of various Linked Data technologies, some of which have just been mentioned.
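To make that concrete, here is a minimal sketch of how a bio page might mark up its data with RDFa and the FOAF vocabulary; the name and URL shown are illustrative placeholders, not taken from any real page:

```html
<!-- A bio page whose data is made machine-readable with RDFa attributes.
     "vocab" points at the FOAF vocabulary; "typeof" and "property" map
     the visible text to Linked Data terms that other sites can reuse. -->
<div vocab="http://xmlns.com/foaf/0.1/" typeof="Person">
  <span property="name">Jane Example</span> maintains a
  <a property="homepage" href="http://example.org/jane">personal site</a>.
</div>
```

To a human reader this is an ordinary paragraph; to a crawler it is a statement that a Person named Jane Example has a particular homepage, which can then be linked to data about the same person elsewhere on the Web.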
To go from handling the Web as a huge set of pages to handling it as a huge set of data required a massive dedication of time and resources. Work first started at informal meetups at the MIT Computer Science and Artificial Intelligence Laboratory at the turn of the millennium. A Web of Linked Data was seen very early on to be viable, and so funding was sought.
Sadly for everyone, everywhere, 9/11 happened and very soon the lion’s share of available research funding in the United States went to defense projects. Fortunately, the Irish Government in the form of Science Foundation Ireland came forward with an offer of funding, and the Digital Enterprise Research Institute (DERI) was founded in Galway, Ireland in 2003 with the remit of “interlinking technologies, information and people to advance business and benefit society”.
The Open Graph Protocol takes its name from the social graph, Facebook’s adopted term for the mapping of relationships among users on its network. Facebook has over 500 million users at the time of writing, and that is a lot of users and relationships to map.
By using subsets and truncated versions of Linked Data technologies, the protocol allows Facebook – through the work of developers – to make itself more useful to its users: better search inside Facebook, for instance, and better intercommunication with the Web outside the Facebook walled garden.
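Just how small that subset is can be sketched in a few lines: a page opting in to the Open Graph Protocol needs only a handful of meta tags in its head. The URLs below are placeholders:

```html
<!-- The four required Open Graph properties, expressed as RDFa-style
     meta tags. Facebook reads these to represent the page as an
     object in its social graph. -->
<head prefix="og: http://ogp.me/ns#">
  <meta property="og:title" content="An Example Article" />
  <meta property="og:type"  content="article" />
  <meta property="og:url"   content="http://example.com/article" />
  <meta property="og:image" content="http://example.com/cover.jpg" />
</head>
```

This is the heart of its appeal to developers: a few flat properties rather than a full RDF vocabulary, yet enough for a page to become a node in Facebook's graph.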
All well and good. What is technology for if it is not to be used?
The issue is the sheer size of Facebook. So much effort and time will be devoted by so many developers to working with and then expanding the Open Graph Protocol that the development of Linked Data as a whole may end up being narrowed to just the work surrounding Facebook.
Open Graph, in the form being developed to meet Facebook’s present and future needs, may well commandeer so many resources and so much development talent that, by making Facebook more compatible with the Web, it ends up being the de facto web of Linked Data.
Facebook, at the moment, with its three main areas of friends, groups, and Facebook Pages, does not need the full set of bells and whistles that the Semantic Web has to offer. If it stays with this simple page structure, Linked Data could be confined to handling information in a limited, self-defined context.
Of course, work will still continue at DERI and at other facilities, as Open Graph still very much lacks the full expressivity that Linked Data offers with its broader perspective.
Or will Open Graph be a detriment to the continuing pace of Linked Data development by diverting attention, energy, and money away from the main goal of Linked Data, which is to make the Web more useful for everybody?