IPExpo Europe - Brian Cox and managing the question of not knowing what he doesn’t know

By Martin Banks, October 8, 2017
Professor Brian Cox, academic, CERN scientist and rockstar science evangelist on TV, delivered some helpful, if pessimistic, observations on the state of IT and its capabilities.

Brian Cox

The Richard Feynman lecture – There's Plenty of Room at the Bottom – delivered at the American Physical Society meeting at Caltech back in December 1959, may have been all but ignored at the time, but it has since become acknowledged as the starting point for many of the latest developments in particle physics – not least as the first pointer to the potential of quantum computing.

It certainly played a major part in the background to the growth of Prof. Brian Cox as a leading member of the team working on the Large Hadron Collider at CERN, as Professor of Particle Physics at Manchester University's School of Physics and Astronomy, and as a genuine (ex-)rock star populariser of all things scientific to the general public.

So his appearance as opening keynote presenter at the IPExpo event in London's ExCeL Centre was always likely to be a highlight of the week, one way or the other. Sadly, it was the venue rather than the man himself that rather spoiled the event: its generally unsuitable acoustics, coupled with a poor PA, fought – and generally lost – an unequal battle to make the Professor's words heard over the continuous background drone spilling over the walls of the purpose-built 'Keynote Theatre' from the assembled multitudes going about their business on the exhibition floor just outside.

The 'pizzazz' of the event was also hardly enhanced by the sight of the Professor, one of the most noted presenters on TV, being given no introduction. This left him to walk on and, as he would with a class of noisy students, grab the attention of the large audience himself. The corollary, of course, came at the end of his presentation, when he ended up trapped in a corner by a scrum of people, all seemingly wanting to thrust business cards upon him.

Yet in the end, despite being a particle physicist with a user's rather than a developer's view of IT and its capabilities, he had some interesting observations to make about a fundamental weakness of the technology – storage.

These emerged during the short press Q&A session held after the keynote was done. He referred back to Feynman's lecture and the fact that it not only contained the first mention of quantum computing, but made an equally important observation: that all information is itself physical – it is stored in physical things and has a physical property of being. This means it is possible for the quantum behaviour of matter to be used to manipulate information. That is likely to be the basis of future quantum cryptography, which is claimed to have the capability of making future systems unhackable.

He admitted that he is not up to speed on the state of the art in quantum computing, but said that the engineering ability to manipulate such small-particle-based systems is now in place. The potential of such systems is huge, and could even solve the type of storage problem his work generates.

Will there be solace in the quantum?

The volume of data that a quantum computer could work with is prodigious, for the qubit (the quantum bit, the basic unit of information in a quantum computer) can hold more than just one of two states – 0 or 1, Yes or No, or any other dichotomous pair. The data-handling capabilities get significantly larger still when qubits are entangled together. Cox said:

It has been shown that if you can build a system of 256 qubits entangled together, then to describe that system classically would require 10^80 bits, which is roughly speaking the number of atoms in the observable universe.
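As a rough back-of-envelope check of the scaling behind that claim (this sketch is not from the article): the state of n entangled qubits is described classically by 2^n complex amplitudes, so the size of the classical description grows exponentially with the qubit count and, at n = 256, lands within a few orders of magnitude of the 10^80 figure Cox quotes.

```python
def classical_amplitudes(n_qubits: int) -> int:
    """Number of complex amplitudes needed to describe the general
    state of n entangled qubits classically: 2**n."""
    return 2 ** n_qubits

n = 256
count = classical_amplitudes(n)

# 2**256 is a 78-digit number, i.e. roughly 1.2 x 10**77 amplitudes --
# the same ballpark as the ~10**80 atoms in the observable universe.
print(f"2**{n} has {len(str(count))} decimal digits")
```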

It was against this background that I asked a more mundane question: what areas of current IT resource provision and capability leave him frustrated or disappointed? Where is current technology still unable to provide the capabilities he needs, or the software that would enable him to formulate the questions he would like to ask? The biggest problem, it transpires, is storage:

We throw most of the data away in real time because we have collisions. The systems are working at a vast data rate, and every engineer remembers the rule that light travels one foot per nanosecond, so a 25 foot cable has one signal in it, and 50 foot of cable will have two signals in it, chasing each other. You can't store all that, so we have systems that ask 'is this interesting or not' and if it is not then we throw it away. We do keep random event samples to check that nothing interesting is getting through, but as we don't know what we are looking for we don't always know what to throw away.

There have been times, such as during the development of the ATLAS (A Toroidal LHC ApparatuS) detector, part of the Large Hadron Collider particle accelerator project at CERN in Switzerland, when elements were designed and built entirely on the hope that Moore's Law would continue to fulfil its promise. The hope was that by the time they were finally ready to run the detector, computer systems with enough capacity to store and process the resulting volume of data would have become available.

The issue then becomes, as he acknowledged, that they are operating in an area where they don’t know what they don’t know. This means they could be throwing away vital data that could possibly lead to important discoveries, without recognising them as such. And because they generate so much more data than can be stored, the discarded bytes are lost forever: there is no way of back-tracking after the event.

This is a problem that many large enterprises are now likely to find themselves facing, as not only does the volume of data increase, but new types of data are becoming available and finding relevance in business analytics. It is most likely that these new data sources will open up new areas worthy of more complex analysis. By the same token, however, they are also likely to push businesses to the point where they have to consider what data they can throw away, without really knowing what the discarded data might have been able to tell them had they the resources and performance to crack it.

For such businesses, the only solution is to adopt the tactic taken by Professor Cox and his co-workers: save random samples, analyse those, and effectively pray that the good, valuable content shows up:

The idea, the plan, is that if patterns do emerge then you will see them and you can modify your throwing away procedure.

Pitched like that, it does not seem an amazingly thorough or scientific methodology. But with the capabilities of IT as they stand today, it is the only game in town. And you just know that Brian Cox longs for a time when he won't have to ask himself, "I wonder what I've missed?"

My take

There will be thousands of businesses asking themselves the same question – or there should be – and it will of course be interesting to see whether quantum computing can play the role of the 5th Cavalry in the last reel of a 'B' movie. My 50 penny/cent bet goes on the probability that, by the time it gets into play, data and processing demands will already have got well beyond the level of one observable universe.