Here's the showreel:
And here's the description of how it works:
- Scour the web
We continually scan thousands of news publications, blogs, niche sources, trade publications, government web sites, financial databases and more.
- Extract, rank and organize
We extract information from text including entities, events, and the time that these events occur. We also measure momentum for each item in our index, as well as sentiment.
- Make it accessible and useful
You can explore the past, present and predicted future of almost anything. Powerful visualization tools allow you to quickly see temporal patterns, or link networks of related information.
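The extraction step described above (entities, events, times, momentum, sentiment) is proprietary, but the general shape is easy to imagine. Here is a minimal, purely illustrative sketch of what crude entity/time extraction and a momentum measure might look like; everything in it (the regexes, the `extract` and `momentum` helpers, the definition of momentum as a day-on-day change in mention volume) is my own assumption, not how Recorded Future actually works.

```python
import re
from collections import Counter

# Assumption: entities are capitalized word runs; times are ISO dates.
# Real systems use trained NER and temporal-expression taggers instead.
DATE_RE = re.compile(r"\b(\d{4}-\d{2}-\d{2})\b")
ENTITY_RE = re.compile(r"\b([A-Z][a-z]+(?: [A-Z][a-z]+)*)\b")

def extract(text):
    """Pull crude (entity, date) pairs out of a sentence."""
    dates = DATE_RE.findall(text)
    entities = ENTITY_RE.findall(text)
    return [(e, d) for e in entities for d in dates]

def momentum(daily_counts):
    """Hypothetical momentum: change in mention volume between
    the two most recent days in a {date: count} mapping."""
    days = sorted(daily_counts)
    if len(days) < 2:
        return 0
    return daily_counts[days[-1]] - daily_counts[days[-2]]

pairs = extract("Acme Corp will launch a product on 2010-08-15.")
# pairs -> [('Acme Corp', '2010-08-15')]

mentions = Counter({"2010-07-28": 3, "2010-07-29": 7})
# momentum(mentions) -> 4 (mentions are accelerating)
```

Even this toy version shows where the hard problems live: deciding which capitalized phrase is a real entity, which date attaches to which event, and what "momentum" should actually mean.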
It feels like there are a lot of missing steps in point 2 there, but that's understandable, as I expect there's a lot of proprietary smartness around data extraction, categorisation and analysis, and natural language processing (all of which I'd love to find out more about). The Fast Company article linked to above makes the good point that, like a lot of 'predictive' analytics systems, it's more an assembly and connection tool than a predictor, and that 'the only way of really proving that the intel gathered is bona fide is common sense'. To that I'd add a caution about the data that's going in. Without guarantees as to how the compilers judge the informative content of input data, no guarantees are possible about the dependability of outputs (as ever, x in, x out).
But it's easy to bash a product you don't understand. Actually, my concern is that the whole thing feels over-branded. The name and the other associated terms ('Temporal Analytics Engine', for one) seem grossly out of step with reality. That's a shame, because it's clearly a fantastically smart system designed and analysed by equally smart people. Isn't that enough without forcing your marketing material to imply some sort of clairvoyance? All the 'news from the future' stuff is a bit off-putting.
# Alex Steer (30/07/2010)