When James Taylor and I wrote the book "Smart (Enough) Systems: How to Gain Competitive Advantage by Automating Hidden Decisions," we focused mainly on the kinds of decisions that are managed by a "decision service." A decision service is an embedded applet that fires decisions in a stream, typically by employing a ruleset (a set of declarative statements) and a business rules engine.
We dealt at length with both predictive modeling and optimization. Still, the goal was to automate "little decisions that add up," like credit line increases or call center responses, not "Global Thermonuclear War."
You develop a decision service like the one we proposed with predictive modeling, since the models inform the rules engine about what to do, but that wasn't the focus of the book. In the thirteen years since we wrote it, some organizations have developed and expanded the idea of decision services, combining predictive modeling, business rules, entity analytics, AI/ML, optimization, NLP, and cognitive computing (I know some of these overlap).
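To make the mechanism concrete, here is a minimal sketch of such a decision service for one of those "little decisions that add up," a credit line increase. A stand-in predictive model produces a risk score, and a small rules engine evaluates a declarative ruleset against it. All names here (Customer, risk_score, RULES) are illustrative assumptions, not taken from any real product or from the book.

```python
from dataclasses import dataclass

@dataclass
class Customer:
    utilization: float     # share of current credit line in use
    months_on_book: int    # tenure with the lender, in months
    late_payments: int     # count in the last 12 months

def risk_score(c: Customer) -> float:
    """Stand-in for a predictive model: higher means riskier."""
    return 0.5 * c.utilization + 0.1 * c.late_payments - 0.01 * c.months_on_book

# Declarative ruleset: (condition, decision) pairs, evaluated in order.
# The rules engine fires the first rule whose condition holds.
RULES = [
    (lambda c, s: s > 0.6,                         "decline"),
    (lambda c, s: c.late_payments > 2,             "decline"),
    (lambda c, s: c.utilization > 0.8 and s < 0.4, "increase_limit"),
    (lambda c, s: True,                            "no_change"),  # default
]

def decide(customer: Customer) -> str:
    """The decision service: score the case, then fire the first matching rule."""
    score = risk_score(customer)
    for condition, decision in RULES:
        if condition(customer, score):
            return decision
    return "no_change"

# A long-tenured, heavily utilized customer with a clean payment record
print(decide(Customer(utilization=0.9, months_on_book=48, late_payments=0)))
```

The point of the split is that the model and the rules evolve independently: analysts retrain the scoring model while the business edits the declarative rules, and the service simply streams cases through both.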
The goal is to offer decision services that address both horizontal applications (customer, risk, pricing, and performance management, for example) and platforms such as digital assistants that can supply advice and recommendations across the full spectrum of strategic, operational, and tactical decisions, automated or not. The goal is outcomes, not features.
Cognitive computing simulates (but does NOT duplicate) human cognition in software. Its components include data mining, knowledge graphs, ontologies, natural language processing with self-learning algorithms, computer vision, and pattern recognition, combined to mimic how the human brain works. The fundamental concept behind a cognitive system is the neural network, particularly deep learning. Thus cognitive computing is more than the sum of its constituent technologies.
What can a cognitive system do? First, it has to be adaptive; pre-programming for a single task is a feature of TPA, not cognition. It needs memory for solving problems, which includes asking questions to resolve ambiguity. It would be rather useless if it could not also assess data quality and validate its methodologies, both through its own processes and through additional ones it can access. And finally, context is everything: to understand meaning, it must grasp syntax, time, location, the appropriate domain, regulations, the user's profile, processes, tasks, and goals, across both structured and unstructured data.
Watson - what not to do
I was first introduced to Watson in 2012, and I immediately had reservations. This article isn't about Watson Health, because it's kaput. This is a sort of morality tale about technology in search of a solution, taken somewhere unethical: promising that gee-whiz technology would make dramatic improvements in healthcare, cancer care in particular.
Sloan Kettering developed and trained a clinical system, Watson for Oncology, in 2015. The doctors at MD Anderson Cancer Center in Houston had started two years earlier, with mid-seven-figures of help from PricewaterhouseCoopers, to create a different tool called Oncology Expert Advisor. MD Anderson got as far as $62 million and pulled the plug (Anderson is part of the University of Texas System, and the Texas legislature simply ran out of patience). What happened? A well-worn technology syndrome: the collision between the enthusiasm and promise of early cognitive computing and the stark reality of clinical medicine.
Watson's ability to gather, represent, and use information was pretty stunning, but one of its detractors quipped, "It reads fast and understands slowly." IBM discovered that its powerful technology was overmatched by the complexity of today's healthcare system, especially something as complex as cancer treatment, one of medicine's still unsolved mysteries. As in a multitude of technology implementation failures, IBM simply failed to understand the way doctors work.
Amelia and the rise of cognitive, conversational AI for the enterprise
Contrast that with Amelia, a conversational AI agent billed by its creators as "The most human AI for the enterprise." According to Anil Vijayan, Vice President at Everest Group:
Amelia combines the ability to address a variety of use cases - including customer service, HR support, marketing, and IT helpdesk - with its ability to function across channels, including voice, thereby providing customers with an Intelligent Virtual Agent solution fit for multiple business needs.
(Amelia was included in Everest Group's Intelligent Virtual Agents Product PEAK Matrix Assessment 2020).
You can see right away that Amelia doesn't set out to deal with anything as complicated as cancer therapy, which is one reason Amelia is enjoying success.
IPsoft (recently renamed Amelia, an IPsoft Company) is squarely in the cognitive, conversational AI category. Amelia, their lead product, which I'm sort of fascinated with, was just awarded one of those "best" ratings by Everest Group in "Learning and Language, Automation Capabilities, Architecture, Development and Tooling, Deployment and Security, Optimization/Analytics, Vision, and Roadmap." Amelia also received an On-Par rating in the "Chatbot Readiness and Market Approach" categories.
That makes sense. Amelia is pretty far advanced from a chatbot. Amelia is so capable that it can train digital assistants for specific tasks that don't need the full capability of a digital cognitive assistant. This sets it apart from RPA and chatbots, which primarily work from scripts. In fairness, some chatbots can learn and branch. But I remember Apple having some problems with Siri that took a while to work out. Like this:
"Siri, I'm bleeding badly. Can you call me an ambulance?"
"OK, Neil, sure. From now on, your name will be 'an ambulance.'"
I will compare RPA, chatbot, and cognitive conversational AI technologies and solutions in a future article. By then, I'll have a better understanding of IPsoft's new effort, Digital Workforce Automation, which, so far as I can tell, aims to be more competitive in price and time-to-solution than a full Amelia deployment, because Amelia, with all of its features, doesn't wholly work out of the box. IPsoft will be training "Digital Employees" through DigitalWorkforce.ai, a "one-stop online marketplace where you can browse, interview, and onboard Digital Employees powered by Amelia, the industry-leading cognitive AI solution."
Also, I was just briefed by a company called Automation Hero, started by Stefan Groschupf, formerly founder of Datameer, that has gone beyond some parts of these technologies; they describe themselves as IPA (not the beer, but Intelligent Process Automation).
Conversational cognitive AI leaves me a little shaky. These systems appear to be so smart, but their "learning" seems to be confined to the applications they are trained for. But what if it weren't? A couple of parting quotes to think about:
While AI can manipulate humans into believing and doing things, it can also be used to drive robots that are problematic if their processes or appearance involve deception, threaten human dignity, or violate the Kantian requirement of "respect for humanity." - from "Ethics of Artificial Intelligence and Robotics," Stanford Encyclopedia of Philosophy.
AI reasoning should be able to take into account societal values, moral and ethical considerations; weigh the respective priorities of values held by different stakeholders in various multicultural contexts; explain its reasoning; and guarantee transparency - from Dignum, Virginia, 2018, "Ethics in Artificial Intelligence: Introduction to the Special Issue," Ethics and Information Technology, 20(1): 1-3. doi:10.1007/s10676-018-9450-z.