Peracton: A Smart Search for Investments

Trading in stocks and financial markets can be a complicated process. A cursory glance at the news at any time over the last five years would tell you that even the so-called experts have a tendency to get it wrong more often than not.

Laurentiu Vasiliu, a researcher at the Digital Enterprise Research Institute (DERI) at NUI Galway, founded Peracton to help both individual investors and multinational corporations navigate complicated and volatile stock markets.

Laurentiu was not initially motivated by any desire to enter the financial sector, though. Rather, he was led to the markets by his team’s research at DERI, which lent itself particularly well to the complexity of financial investing.

“We initially started our research looking at complex decisions and negotiations. So it was more a kind of theoretical approach,” says Laurentiu, who originally hails from Romania.

“We started an R&D project with Enterprise Ireland initially. We always knew that this could have multiple applications, and finance was one of them.”

The research project with Enterprise Ireland yielded MAARS, which stands for Multi-Attribute Analysis Ranking System.

The MAARS platform’s algorithms can analyse hundreds of parameters across thousands of stocks to show would-be investors which options most closely match their desired investment profile.

“If you are looking at US stocks on the main exchanges like NASDAQ or the New York Stock Exchange,” explains Laurentiu, “altogether there are maybe 4,000 stocks, and each stock is described by up to 200 financial parameters.”

For an investor who wishes to exercise due diligence in their financial dealings, the sheer level of detail involved in analysing thousands of stocks makes it impossible to do manually. “The human brain just can’t work in this way,” he says.

The tools that currently exist to help investors analyse stocks are, says Laurentiu, “quite primitive,” in that they only flag as suitable those stocks which exactly match your investment criteria.

“With this approach, an investor would lose many good stocks. Even if they have a very good strategy they could get only 60 percent of the eligible stocks that are out there. They would miss 40 percent. And who knows? Maybe there are really good stocks they just can’t see.

“So, this is what our algorithm does: it’s able to trawl the mass of equities and extract the closest fits, returning them in ranked order.”
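To make the idea concrete, here is a minimal sketch of how a “closest fit” multi-attribute ranking can work in principle. It is not Peracton’s proprietary MAARS algorithm; the parameter names, weights and values are entirely hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Stock:
    ticker: str
    attributes: dict  # parameter name -> normalised value in [0, 1]

def closeness(stock, target, weights):
    """Weighted distance between a stock and the investor's target profile.
    Lower is closer. Parameters the stock lacks are skipped rather than
    disqualifying it, so near-matches are kept instead of discarded."""
    total = 0.0
    for name, ideal in target.items():
        if name in stock.attributes:
            total += weights.get(name, 1.0) * abs(stock.attributes[name] - ideal)
    return total

def rank(stocks, target, weights, top_n=10):
    """Return the top_n closest-fit stocks, best match first."""
    return sorted(stocks, key=lambda s: closeness(s, target, weights))[:top_n]

# Hypothetical example: a value-oriented profile over two toy parameters.
universe = [
    Stock("AAA", {"pe_ratio": 0.30, "dividend_yield": 0.70}),
    Stock("BBB", {"pe_ratio": 0.22, "dividend_yield": 0.75}),
    Stock("CCC", {"pe_ratio": 0.90, "dividend_yield": 0.10}),
]
profile = {"pe_ratio": 0.20, "dividend_yield": 0.80}
print([s.ticker for s in rank(universe, profile, {"pe_ratio": 2.0})])
# -> ['BBB', 'AAA', 'CCC']
```

The key point is that every stock receives a score, so a near-miss on one parameter is ranked slightly lower rather than filtered out entirely, which is what distinguishes this approach from the exact-match tools Laurentiu describes as primitive.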

There have been attempts, he says, by the big financial houses to design similar programs, but there is no room for error in the world of finance.

“This is not trivial research. For somebody who wants to choose between ten types of apples, you don’t need a complex algorithm, but if you want to spend a half million dollars in one go for a particular set of stocks, then proper research and due diligence have to be exercised.

“There are many things to be taken into account, such as algorithm stability and algorithm flexibility: to add as many parameters as you want and go for as many stocks as you want, and at the same time to be safe and consistent,” he says.

For this reason, Peracton’s MAARS platform is not suitable for amateur investors, and some experience of finance and stock markets is necessary.

“This is a professional tool for traders, but we are also looking at investment clubs, who know more about investments,” says Laurentiu.

Peracton currently employs three people in Galway and one in Dublin, with two more in Oxford, England, and three in the United States, divided between Boston and Silicon Valley.

There is currently some revenue coming in from a deal with a major financial house, details of which Laurentiu is keeping firmly under wraps, and the plan is to develop the Irish, UK and US markets before moving on to Asia and the Middle East.

For now, the focus is on constantly improving the product. “Every six months we are releasing new versions of our equity selectors.

“We are addressing stocks, mutual funds, bonds, ETFs, so we have modules for different types of equities. And we are improving and adding more and more functionality also for the user.”

Sindice: A New Approach to Online Data Management

Sindice is a semantic web index that allows you to access and leverage the “web of data”: the rapidly expanding number of websites that are semantically marked up, that is, tagged with RDF, RDFa, Microformats or Microdata, tags that identify online content as belonging to different categories.

This week Sindice, in partnership with Hepp Research and Openlink Software, launched Sindice Ltd, a new startup which will manage Sindice’s intellectual property and oversee the commercial drive of its products.

Giovanni Tummarello is the CEO of Sindice, which originated at the Digital Enterprise Research Institute in Galway, Ireland. He explains how the web of data will revolutionise online data management, and how it is “set to explode” in the coming months. Once it does, he enthuses, the web of data “all becomes a big graph which one can join with a single query”.

“Semantic markup is basically markup that you put on the page to express what you have on the page. So if you have the name of a movie, because you are discussing that movie in a blog article for example, you might want to tag the title of the movie, the director of the movie, whatever data makes that page recognisable to a search engine for exactly what it is: a page which talks specifically about that movie.

“In a regular search engine, it’s just the keywords that are being searched, so you’re looking for the title of the movie, which could be ‘The Blue Tomato’, but there are all sorts of pages which can contain these two words, for all sorts of reasons. On the other hand, if you put markup saying that it is a movie, you will be stating that you are talking about a movie.”
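For readers unfamiliar with what such markup amounts to once it is extracted, here is a minimal sketch using the Python rdflib library. The vocabulary and page URL are illustrative only; real pages would embed these statements as RDFa, Microformats or Microdata rather than building them in code.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

SCHEMA = Namespace("http://schema.org/")  # illustrative vocabulary

g = Graph()
page = URIRef("http://example.org/blog/blue-tomato-review")  # hypothetical page

# The markup states, machine-readably, that the page is about a movie
# with a given title and director.
g.add((page, RDF.type, SCHEMA.Movie))
g.add((page, SCHEMA.name, Literal("The Blue Tomato")))
g.add((page, SCHEMA.director, Literal("Jane Example")))

# A search engine can now distinguish pages *about* this movie from pages
# that merely happen to contain the words "blue" and "tomato".
for subject in g.subjects(RDF.type, SCHEMA.Movie):
    print(subject, g.value(subject, SCHEMA.name))
```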

As Giovanni points out, Sindice acts like a search engine over the 270 million or so pages which currently carry semantic markup, but its real utility is greater than that: “OK, you can put in a keyword and search it, it’s fine, but that’s not really the point”.

“Sindice is basically a search engine which is not just a search engine. Really it’s an infrastructure for leveraging all the web data out there. We have 270 million pages or so at the moment; they are not normal web pages, they are only web pages which have semantic markup on them. What Sindice does is it has a very powerful engine that can correlate information from one website to another.

“You can basically use the entire web as if it were your playground by merging information here and there. You can get the name of a movie from a page which is marked up; the name of the movie can be looked up on Wikipedia, where you can see who the director is; and then you can go on Rotten Tomatoes and get the rating. All together it can be queried with a single query which goes all over the web and returns the information ready to be consumed, to enhance websites or anywhere you want content aggregated from multiple sources on the web.

“Sindice provides these services where you can make these queries and combine content coming from all of the web of marked up data.”
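The “single query” idea can be sketched in miniature with the same library: merge triples from two hypothetical sources that describe the same movie URI, then join them with one SPARQL query. Sindice operates at web scale with its own infrastructure; this only illustrates the principle.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

SCHEMA = Namespace("http://schema.org/")
movie = URIRef("http://example.org/movie/blue-tomato")  # shared identifier

blog = Graph()  # stands in for a marked-up blog page
blog.add((movie, RDF.type, SCHEMA.Movie))
blog.add((movie, SCHEMA.name, Literal("The Blue Tomato")))

ratings = Graph()  # stands in for a ratings site
ratings.add((movie, SCHEMA.aggregateRating, Literal(0.92)))

merged = blog + ratings  # one graph built from both sources

# One query now spans both sources, because they describe the same URI.
q = """
PREFIX schema: <http://schema.org/>
SELECT ?title ?rating WHERE {
    ?m a schema:Movie ;
       schema:name ?title ;
       schema:aggregateRating ?rating .
}
"""
for row in merged.query(q):
    print(row.title, row.rating)  # -> The Blue Tomato 0.92
```

Because both sources use the same URI for the movie, the merged graph joins naturally, which is exactly the “big graph which one can join with a single query” that Giovanni describes.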

The formation of the company is a sign that Sindice are ready to commercialise their technology. Sindice Ltd solidifies the “very important” partnerships with Hepp Research and Openlink Software, and also manages the intellectual property the partners now share, keeping it “nice and tidy” so that Sindice can seek further investment.

“There are two main markets we are pursuing. The first one is customised cloud-hosted data spaces,” continues Giovanni.

“We’re going to be allowing people to have their own data spaces, as we call them, and that comes for a price of course; it’s kind of data as a service.

“You want to have the data that comes from the web of data, but you also want to have your own data, and your own correlation of the data. So you want subsets of Sindice data that are live and fresh, combined in the way that you want, to solve your problems.”

The second main service, which Giovanni describes as “much more concrete and immediate”, is something called Sindice site services.

Not yet available, this is, he says, “something that will basically appeal to anybody who has a website that they want to enhance with information coming from multiple websites at the same time. This is good for everybody, because the websites which are providing the information get traffic and they get links, so it becomes a syndication network. That obviously has a value in terms of the possibility of sharing revenue and advertising”.

Giovanni is confident that, following the common approach taken by the three major search engines to create what they call “a shared markup vocabulary”, the time is right for Sindice to capitalise on the expected flurry of markup activity in the near future.

“It’s exploding as we speak. Microsoft, Google and Yahoo! are telling everybody at the same time to do this! In search engine optimisation circles they are raving about this stuff, and everybody’s implementing it so there’s no alternative.

“This means that there will be a lot of people who want to do services on top of this, so if there’s a market, it’s right now.”

The Internet Becomes The Interdata: Interview With Stefan Decker

Stefan Decker is the director of the Digital Enterprise Research Institute (DERI). Based at NUI Galway in Ireland and funded in large part by Science Foundation Ireland, DERI has a staff of over 130, making it the largest web research institute on the planet.

DERI works with industrial partners such as Ericsson and Avaya to road-test and develop new ideas in Semantic Web research. Cisco, for one, is rapidly moving from a company that used to produce hardware and routers to one that connects people more than machines, and it is basing new technologies on work done at the institute. Developments like SIOC are becoming a global standard for representing information about how humans communicate, and for the foreseeable future this sort of research can only grow in importance and relevance.

In a previous article, “The Fragmentation Of The Semantic Web”, a view was expressed that development in Semantic Web technologies may be side-tracked or derailed by vested interests and proprietary systems. However, Stefan takes a far more positive view of the situation.

“What we see right now is that our ideas are gaining ground and making progress and are becoming part of reality. It’s the reason why companies like Microsoft, Google and Facebook are actually using the same ideas.

“The same thing happened when it became clear we would need global information systems. MSN and CompuServe tried to do this, each with their own information system, but they basically failed and gave in to open standards. Companies are doing the same thing again, but as Nova Spivack says, unless you really gain your own monopoly, really gain dominance in the market, you are not able to work together with everyone else. For working together, you need open standards: that is exactly where the Semantic Web comes in.

“In other words, what Google is doing, what Microsoft is doing, and what Facebook is doing is actually supporting us. It’s just a first step into a much larger game, which will end up as people and companies working together based on the standards that are being created. The only danger is one company like Facebook reaching global dominance.”

But the issue is not only open standards. Interoperability is the key to future developments.

Stefan goes on to say, “The semantic search engines that we are building are right now among the largest repositories of structured information that exist on the web using open standards.

“What we see right now is a fundamental shift in the internet architecture. Previously, the Internet has been concerned about sending bits from one piece of the network to another piece of the network. But that is now more or less solved. We know how to send bits. What we now need to do is ensure that we are able to make sense out of those bits.

“In other words, the Internet at this point is in desperate need of a new layer in the internet architecture which takes care of data interoperability. It’s not so much about internetworking anymore but inter-data-working – making sense of each other’s data – which is applicable and available for all the specific applications.

“That requires a new data layer and that is something that we are building right now: the Linked Data Layer.”

The diagram at the top of the page shows where the Linked Data Layer fits into the current web architecture. The Open Systems Interconnection (OSI) stack consists of seven layers, including a presentation layer which deals with data formats, directly followed by the application layer, which is designed to make sense of the bits. The Linked Data Layer is a new layer between the presentation layer and the application layer which makes sense of the data in such a way that it establishes interoperability between different applications.

Stefan points out, “A recent Wired article indicated that a lot of the traffic is moving away from the Web into other applications. As the technology changes and allows new uses to come into being on new platforms, mobile phones and suchlike, the need increases for the applications and other operational requirements to be able to work together – to be interoperable.

“This is one of the core topics at DERI, so one way or another everyone is working on the Linked Data Layer. It is all about interoperability, about making things work between different applications on the Internet.

“If you look at future internet initiatives that are going on around the planet, you have a lot of special interest group discussions about security, about e-health, about smart cities and so on. They all have different information requirements. Then you have the mobile service providers, who again have different information requirements. But what they are missing is a joint layer, something they can base their applications on.

“We all know how to exchange bits on the wire, but to make these different applications interoperable – to make smart cities use the security standards that have been defined – they need a common layer to understand each other’s problems as well as each other’s solutions. That is exactly what a Linked Data Layer can provide.”

With research and development well under way at DERI, Stefan’s next step is to get the word out to the larger Semantic Web community and beyond.

“One of my fundamental aims now is to make clear the need for this Linked Data Layer, and there will be an assembly in Ghent in December in which we will have a Linked Data Layer session. We will invite all the different communities in the Future Internet Initiative in Europe to help define what the role of a Linked Data Layer will be and what it can contribute to bringing the communities together.”

Videos From BlogTalk 2010 Available For Viewing

The videos from last week’s BlogTalk 2010 are now available for viewing on DERI, NUI Galway’s Vimeo site.

For those of you who for one reason or another were unable to attend, this is a great opportunity to see the variety of speakers from different areas and disciplines, and to appreciate the range of subjects covered.

For those who did manage to attend, this is another chance to revisit a favourite part of the two-day event, or to double-check what a speaker really said in case you missed it the first time around.

A special thanks to the group of volunteers from DERI who took turns tending the camera so nothing was missed, and who then handled the post-production process.