Ever since I made myself a nuisance at Infor's Hook & Loop internal design agency in New York City, I've been wanting to do the same with their Dynamic Science Labs team at MIT. I did the trip last week, after the team's stint at Inforum Europe. Heading in, the big question on my mind was:
Every enterprise software vendor claims to have an expert data science team, but how does that help customers? How is this know-how productized?
Many companies grapple with the recruitment and makeup of their own data science teams, so expecting your customers to build their own algorithms is problematic. On the flip side, consulting can get really expensive really fast.
Vibhu Walia, Director of Infor's Dynamic Science Labs, drew the short straw - he was my tour guide/master of ceremonies who got to field the bulk of my questions. Other team members dropped in along the way, including a lunch with Chief Scientist (and lab chief) Ziad Nejmeldeen. They hit the white board to share their methods. We wrapped the day with industry use cases - and a sneak preview of what lies ahead.
The Dynamic Science team is driven by these "guiding principles":
- Recommendations are Valuable to customers
- Impact is Measurable and reportable
- Products are Relevant for customers
- Science is Accessible for all users
- Tasks are Automated where possible
- Systems are Evolving so results don’t go stale
- Development processes are Rapid
Fine - it sounds lovely and modern, but how does this play out?
Build a minimum viable product - but with a UX polish
The Dynamic Science team "co-innovates" with customers, building a use case that turns into software quickly. No algorithmic know-how is required on the customer side. Customers bring their own data and validate results. Infor's customers get access to software as soon as possible, minimum viable product (MVP) style - but with a twist. This MVP has a UX polish courtesy of Hook & Loop.
Industry use cases include distributor pricing, customer scoring and prioritization, and healthcare inventory optimization. These solutions are built as co-innovations, but they are not one-offs. The core is made available to other Infor customers (distributor pricing is a GA product now).
Most analytics solutions are still stuck in "describe" mode
So how is this approach different from traditional analytics?
Walia used this slide to illustrate:
If the analytics cycle begins on top at "cleanse," Walia believes that most companies are stuck near where he is pointing, somewhere in the describe phase. Nejmeldeen elaborated:
Before this group was formed, we looked at what Infor was doing in terms of data analytics. It was not far from what everyone else was doing in terms of BI. It allowed people to go in and understand what happened historically, and if they go in deep enough, maybe figure out why it happened.
Nejmeldeen's team takes it further:
Where we come in is looking at what's going to happen and what should we do about it. What's the value in those decisions, how do you measure them. Which ones can you automate? That's the part of the circle that we're trying to complete.
Walia thinks what some call "data science" really isn't. Building a model, cleaning the data, charting it, and making a decision with that data is still in the "describing" phase. So how do you improve on that? Walia:
We want to move to the next step, which is not only what happened, but why it happened, what's going to happen next, and what you should do about it. And: here's the recommended action.
"A customer should never have to care about algorithms"
This led us into a deeper discussion of automating decisions - when it works, when it doesn't. Walia's team works with customers to decide which decisions might be fully automated (e.g. low dollar value, repeatable), which ones might need a manual review, and which might be structured around recommendations. The tougher the decision and the more expensive the trade-offs, the less likely it is to be fully automated. Walia's team wants to surface data-informed recommendations for tough decisions, empowering the user to drill into the trade-offs as needed.
A chief design goal? Hide data science complexity from the customer. Bake process intelligence into their existing applications instead:
Customers should be able to dig into the data, but they should never have to care about algorithms. They should be caring about the business questions and the business functions. They should be saying, "I want to do this today," and we provide them a recommendation. They should never have to ask, "Which algorithm did you use for recommendation one versus two."
Yes, provide data science driven recommendations, but enable deeper dives of harder issues. Because even the best course of action has trade-offs:
Customers that use our software should be saying, "I see a trade-off between recommendation one and two, tell me what the trade-offs are." Then, the software shows them, "If you do option one, your sales increase but your margin decreases. If you do two, your margin increases but your sales decrease."
To illustrate the trade-off, Walia hit the white board, using a concept called the Pareto Optimal Front:
The Pareto Optimal Front is a fancy way of saying that the analytics validate a series of points where the trade-offs are optimal. In this case, Walia is using a car pricing example, looking at a trade-off between comfort and price. Calculating the optimal front is a way of finding the best trade-off between the known variables at different price/value points.
Walia is pointing his finger at the tougher area that Infor wants to help customers assess, where the trade-offs between factors such as the price of a product and the quality of a product are very difficult to weigh - especially without good data. In this instance, he ended up eliminating the top dot on the optimal front as too high a price point for that customer. The other two might be part of a formal recommendation of which cars to present to consumers:
In this case, you only present these two cars at these two prices - one is much cheaper but not very comfortable, the other much more comfortable at a higher price.
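To make the whiteboard concept concrete, here is a minimal Python sketch of a Pareto front over (price, comfort) points. The car names, prices, and comfort scores are invented for illustration - this shows the general idea, not Infor's actual implementation:

```python
def pareto_front(cars):
    """Return the cars not dominated by any other car.

    A car dominates another if it is at least as cheap AND at least as
    comfortable, and strictly better on at least one of the two.
    Each car is a (name, price, comfort) tuple.
    """
    front = []
    for name, price, comfort in cars:
        dominated = any(
            p <= price and c >= comfort and (p < price or c > comfort)
            for _, p, c in cars
        )
        if not dominated:
            front.append((name, price, comfort))
    return front

# Hypothetical data: (name, price in dollars, comfort score 1-10)
cars = [
    ("economy", 15000, 3),
    ("midsize", 25000, 6),
    ("luxury", 60000, 9),
    ("overpriced", 55000, 5),  # dominated: pricier AND less comfortable than luxury? No - than midsize
]

front = pareto_front(cars)  # economy, midsize, and luxury survive
```

The "overpriced" car falls off the front because the midsize is both cheaper and more comfortable. Walia's elimination of the top dot then corresponds to a second step: filtering the surviving front by what that particular customer can afford.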
Working with customers, Infor regularly tweaks the algorithms to provide better results and/or to accommodate changes in customer behavior. Beneath the recommended scenarios, a customer can explore the pros and cons feeding into the recommendation.
Present and future use cases
A similar concept is embedded in Infor's B2B pricing optimization solution. The team showed me how this worked in the case of Product Visibility Analysis, using a grocery store as a hypothetical example:
The scenario is the following: what are the trade-offs between product visibility, price, and margin? Studies have shown that if you place a product, such as bananas, prominently in a grocery store, and that product is priced too aggressively, you lose business. On the flip side, a more expensive product deeper in the shelves doesn't carry the same level of pricing risk. The trade-offs between pricing and product visibility must therefore be carefully weighed.
Using sample data from two actual customers (not shown), Walia walked me through the recommendations, noting which items could get a price bump with less likelihood of adverse reaction:
We're going to start showing you recommendations for these items, weighing the value of visibility versus margin increases. Similarly on this side, we might say, "This item is medium-to-high visibility. Its margin's too high, we need to mark it down." Then we show you possible impact on customers, and so on.
During a chat on the future of CRM, we looked at a lead scoring co-innovation. This screen shot gives a good example of the Hook & Loop design impact:
The interface meets the attractive, easy-to-use criteria you must have with sales teams. But what are the data science features? The lead scores that surface hot leads are just the beginning - though Infor is hardly the only company that's developed a weighted lead score system. Another screen, however, leads to a series of recommended items based on the contact's interest and purchasing history. This gives a sales person a big edge versus a typical "would you like to renew your service" type of approach. Walia:
The feedback that we have from our customer is: our sales guys are making 100 calls a day. They don't know who Adam Smith is. So when they call, they say something like, "Hey Adam. How are you doing? You bought some appliance from us, and how is that working out? Hey, our new drill just came in, and we have a promotion on it. What do you think?"
But using the solution, now, you're saying something relevant, rather than saying, "Hey Adam, would you like to buy some random thing today?"
Walia pointed out that some customers might not even be aware of products that match their interests. A good recommendation engine solves that. Then we got into a very interesting discussion about how Infor can help customers weigh the use of a salesperson's time.
Using the trade-offs concept: at what point in the lead scoring is the salesperson's time better spent calling a customer with a higher attrition risk than prospecting a mediocre lead? That's another example of where algorithm-driven recommendations can help assess the financial ramifications of different options.
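That trade-off can be framed as a simple expected-value comparison. The sketch below is my own illustration of the reasoning, not Infor's scoring model - the function name, inputs, and dollar figures are hypothetical:

```python
def best_call(lead_score, lead_deal_value, attrition_risk, customer_value):
    """Recommend where a sales rep's next call should go.

    lead_score and attrition_risk are probabilities in [0, 1];
    the value arguments are in dollars. All inputs are illustrative.
    """
    ev_lead = lead_score * lead_deal_value  # expected new revenue from prospecting
    ev_retention = attrition_risk * customer_value  # expected revenue saved by a retention call
    if ev_retention > ev_lead:
        return "call the at-risk customer"
    return "work the lead"

# A mediocre lead (20% chance at $5,000) loses to a customer
# with a 60% attrition risk worth $3,000 in annual revenue.
best_call(0.2, 5000, 0.6, 3000)
```

In practice the probabilities themselves come out of the scoring models, which is where the data science earns its keep; the comparison on top is the easy part.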
Final thoughts - the work is promising, can it scale?
We walked through more use cases than I can cover here. Over lunch, Dawn Rose told me about the healthcare inventory optimization project she is working on (this solution is tied into the Lawson product line). And, following Infor's recent acquisition of GT Nexus, the Dynamic Science Labs group is working on how to infuse data science into GT Nexus, such as geo-locational shipping alerts and dynamic re-routing recommendations.
Avoiding the need for customers to provide data science talent is a huge plus. Productizing solutions rather than building one-offs is the way to go, but given the close relationships needed to optimize, how well will these solutions scale?
The good news for Infor customers is they don't have to be running on Infor's cloud ERP (CloudSuite) to take advantage. Some scenarios might benefit from cloud-based operations (such as GT Nexus), but that's not a barrier to entry.
Though only one of these solutions is in the GA stage, the customer response bodes well. I was told it was not uncommon, at the end of a proof of concept, for the customer to insist on continuing to use the unfinished POC, even before the production software was built. You don't hear that every day.
Infor is not the only vendor pushing beyond historical views into predictive and decision support. But when you mash that with the "MVP + UX polish" approach, they seem to have a promising recipe.
I'm betting the MIT location has something to do with this upstart attitude. The Dynamic Science Labs team is based in the same building as tiny startups, and sexy consumer brands like Facebook. An infusion of talent from MIT should keep things lively. As Walia puts it:
There's no doubting we're part of Infor, but this place also brings more of the startup mentality. We're always around people who say, "Okay, we've got this idea, we work with a client, get it out the door ASAP."
And with that, I headed back on the Mass Pike to my own startup batcave. Now we'll see how far this group can go.
Image credits: on-site photos by Jon Reed.
Disclosure: Infor is a diginomica premier partner. I made the drive to the Dynamic Science Labs at my own expense.