For April Fools’ Day, Unit4 decided to have a bit of fun by announcing the release of Wanda EasySpeak (see picture), an advanced AI-powered sentiment analysis bot that could monitor conversations and provide contextual meanings for messages sent between people from two different cultures. Examples of EasySpeak’s alleged capabilities included interpreting the underlying meaning of emails, alerting users that their messages had been unclear or impolite and even providing voice intervention in conversations to clarify potential misunderstandings.
Interestingly, quite a few readers of the April 1st post took EasySpeak to be a genuine product announcement. That reaction offered a revealing glimpse into public perceptions of AI: it showed just how capable people believe artificial intelligence systems already are. It also raises natural questions about why people hold these perceptions, how they compare with current AI capabilities and when, if ever, technological reality will catch up to them.
Misunderstandings of AI’s current abilities stem largely from misinformation. In part, this is the fault of the artificial intelligence industry itself, which tends to overstate the current and near-term applications of AI. In addition, AI doomsayers tend to present a skewed picture of AI’s capabilities and, by extension, its supposed dangers.
What can AI really do right now?
Even though people may have unreasonably high expectations of AI today, there are many areas in which the technology is proving immensely beneficial. A recent test of AI's ability to read and understand legal contracts showed that it could outperform human lawyers in spotting legal issues in documents. Large companies such as Volkswagen are already using AI systems to make media and marketing decisions. The social media giant Facebook is even using AI algorithms to help identify potentially suicidal users and connect them with life-saving support materials and resources.
In the area of sentiment analysis that Unit4's April Fools' Day post focused on, there are also some fairly remarkable advances being made. AI systems can use context and themes within a written message to make a very general determination of whether the message is positive, negative or neutral in tone. Such systems can currently be used to determine the overall sentiment of online reviews, which usually have a clearly negative or positive tone. This information can be used on its own to understand sentiment about a given topic or product, or it can be fed into other responsive AI systems, such as chatbots, to tailor a user's experience to the general sentiment he or she is expressing.
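The positive/negative/neutral determination described above can be illustrated, in deliberately simplified form, with a lexicon-based scorer. This is only a toy sketch: real sentiment systems use trained models and far richer lexicons, and the word lists below are hypothetical examples invented for illustration, not any production resource.

```python
# Toy lexicon-based sentiment scorer -- illustrative only.
# The word sets below are hypothetical examples, not a real lexicon.
POSITIVE = {"great", "love", "excellent", "helpful", "fast"}
NEGATIVE = {"bad", "slow", "broken", "awful", "disappointing"}

def classify_sentiment(text: str) -> str:
    """Label a message 'positive', 'negative', or 'neutral' by counting
    how many of its words appear in each hand-picked word list."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(classify_sentiment("Great product, fast shipping, love it!"))  # positive
print(classify_sentiment("Awful experience, slow and broken."))      # negative
```

A downstream system such as a chatbot could branch on the returned label, which is the kind of pipeline the paragraph above describes; note how easily such keyword counting misses sarcasm or cultural nuance, which is exactly the gap discussed next.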
Presently, though, sentiment analysis is limited to determining only very general emotions. The kind of analysis described in Unit4’s EasySpeak post, by comparison, would require a deep and nuanced understanding of not only general human communication, but also how that communication varies based on culture and geographic region. At the moment, such complex understandings of the sentiment underlying human communication are well beyond even the most advanced sentiment analysis algorithms.
Where might we be in 5-10 years?
Just because AI isn't yet able to fulfill Unit4's tongue-in-cheek April Fools' promises doesn't mean that such capabilities will remain beyond AI systems indefinitely. In the coming years, researchers hope to develop ways for AI to understand sarcasm and other ironic forms of language, which by their nature make objective meaning difficult to grasp. Making sentiment analysis more accurate will also be a major component of this area of AI research going forward, though 100% accuracy may never be achieved. Within the next five to ten years, though, it's almost certain that both sentiment analysis and AI in general will see vast improvements and be integrated more fully into daily life.
One thing to understand is that the limitations of AI likely won’t be eliminated within the next decade. Aside from existing issues such as machine bias and the capacity to be fooled more easily than a human, more advanced AIs are expected to come with their own unique challenges. One very real obstacle is the fact that cutting-edge deep learning systems have already progressed to the point that even their creators can’t always understand why they make certain decisions. This fact may make diagnosis and problem-solving very difficult when systems fail to make correct decisions.
These challenges won’t vanish overnight or even within a few years. Rather, they serve as a reminder that the process of putting AI to the highest possible use will be a long one that requires considerable amounts of research. As far as AI has come in recent years, the reality is that the exciting journey to develop and improve AI systems is still just beginning. The revolution hasn’t happened yet.
Image credit - via Unit4