- 30 MB of data are required to store an X-ray
- 150 MB are required to store a single MRI
- 3 GB are required to store the human genome
- 660 TB are required to power a single hospital
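To put those figures in perspective, here is a quick back-of-envelope sketch using only the numbers in the list above (decimal units assumed, so 1 TB = 1,000,000 MB):

```python
# How many of each record type would fit in one hospital's 660 TB?
# Figures come from the list above; decimal (SI) units are assumed.
TB_IN_MB = 1_000_000

hospital_mb = 660 * TB_IN_MB  # 660 TB for a single hospital

xray_mb = 30       # one X-ray
mri_mb = 150       # one MRI
genome_mb = 3_000  # one human genome (3 GB)

print(hospital_mb // xray_mb)    # X-rays per hospital's data volume -> 22,000,000
print(hospital_mb // mri_mb)     # MRIs -> 4,400,000
print(hospital_mb // genome_mb)  # genomes -> 220,000
```

In other words, a single hospital's data footprint is on the order of millions of medical images — a sense of scale worth keeping in mind as the article turns to what all that data is for.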
Data is exploding at an unprecedented pace. Ninety percent of all the data in the world was created in just the past two years, and that volume is projected to double every year from now on. So the question becomes: How do we make big data smaller, so it’s meaningful and useful?
The answer is data science. The UC Berkeley School of Information defines data science as the field “emerging at the intersection of the fields of social science and statistics, information and computer science, and design,” suggesting just how varied the talents required are to fully harness this recent explosion of information.
From big data to smart data
This is the year for data science and big data. The relationship between enterprise success and the implementation and optimization of data science is already vitally symbiotic. The successful companies of the future will be the ones that turn big data into smart data and build that data into the core of their businesses. The next challenge is to discover the most valuable way to process, visualize, and share that data.
I can hear some of you asking, “If this explosion of data continues to grow at the projected rate, won’t we have to constantly readjust the strategy?”
We have to accept that there is no way to predict with certainty the growth of big data over the next one, five, or ten years. We do know its growth will be nonlinear. I believe big data will grow at a rate well beyond many of the industry projections, and there is no doubt that we are poised for a major increase. A 2014 study by Accenture and GE found that 87 percent of enterprises believed big data would largely redefine their industries within three years. I find this an incredibly exciting development: if we harness big data to gain deeper, industry-specific, science-based insight, this growth will empower us rather than encumber us.
Finding value in data
So you have to ask: “How can companies – small, medium, or large – scale data science for value?” Anytime you pay to store something you don’t need, it’s wasted money, and given the amount of unnecessary data businesses are paying to store, the total could add up to millions, if not billions, of dollars. Ask yourself: does your company back up, and wind up storing, everything – including employee iTunes libraries? While that is an extreme example, eliminating unnecessary data clearly remains a priority as a way to cut costs and maximize efficiency.
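The cost of that waste is easy to estimate. A minimal sketch, assuming a hypothetical cloud storage rate (the $/GB-month figure below is an illustrative assumption, not a quoted price):

```python
# Back-of-envelope: annual cost of storing data you don't need.
# The rate below is a hypothetical illustration, not an actual vendor price.
COST_PER_GB_MONTH = 0.02  # assumed cloud storage rate, USD per GB per month

def yearly_waste(unneeded_tb: float) -> float:
    """Annual cost in USD of retaining `unneeded_tb` terabytes of unnecessary data."""
    gb = unneeded_tb * 1_000  # decimal units: 1 TB = 1,000 GB
    return gb * COST_PER_GB_MONTH * 12

# Example: a firm needlessly retaining 500 TB of backups
print(round(yearly_waste(500)))  # -> 120000 (USD per year)
```

Scale that across thousands of enterprises and the millions-to-billions figure above stops looking like hyperbole.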
Defining high-value data points requires deep industry micro-vertical experience. Even micro-vertical data sets are vast, and require specific expertise in managing, cleansing, and harmonizing. Here are a few examples from Infor’s customer base:
- A cloud-based hotelier has shown that it can optimize 350,000 hotel room prices a day using a revenue management system developed by Infor’s Dynamic Science Labs.
- A pet supply company using talent science has reduced turnover by 38% among store associates. (Infor has done talent assessments for 11% of the working population in the US.)
- A hospital that connected all of its medical devices using Infor healthcare technology created 5 billion value-driven transactions a month, as real-time data becomes accessible anywhere at any time.
You can turn big data into smart data when you optimize collection, analysis, and sharing based upon the specific needs of your micro-vertical industry and your individual company. So how does a small to mid-sized company integrate the necessary infrastructure to derive this value?
It takes people with the combination of left-brain analytics and right-brain creativity – a combined DNA – and the ability to work together to drive value – specifically a valuable customer experience. It takes a great UI – intuitive and graspable. Without a great interface, the power of massive data and analytics will be merely superficial.
Specialization and skills enable you to manage the transition to big data analysis.
Finding data scientists
Specialization comes from a business’s (and its vendors’) deep micro-vertical industry understanding: quite simply, companies need to use their particular expertise to address the unique challenges they face. That said, many big data cloud applications that could assist in this process go unadopted because the product is inaccessible. You must demystify the black box; the output of these science applications cannot be purely a set of numbers.
To derive value-driven answers from your data, you need data scientists who can unlock the data’s deeper meaning and provide the user with a context that creates understanding. And the supply of qualified data scientists is quite small.
How small? By 2018, the market will demand 190,000 more data scientists than will exist. It’s a very deep skill set pursued by computer scientists, statisticians, and mathematicians – highly trained individuals who know how to create and optimize predictive, big-data-driven analytics. This talent is largely unavailable to companies without a major capital investment.
That is why enterprise software companies are deploying data scientists to collaborate with industry-leading customers to harness big data to reveal industry and micro-vertical, high-value advantages. These advantages will allow customers across the industry to optimize their big data sets around valuable and predictive metrics.
The next challenge for companies is to synthesize this data around unique metrics – developed by data scientists in conjunction with specific enterprise strategy and goals – to improve efficiency and output. For instance, data scientists examining electronic health records can help doctors determine better, faster treatments for their patients, demonstrating how big data can not only improve lives but actually save them.
So finally, a question to ask yourself: is my enterprise poised to capitalize on the big data revolution?