In pursuit of General Intelligence - Dynatrace and the death of the dashboard

Martin Banks, March 29, 2022
Dashboards have become the mainstay user interface for the majority of systems management implementations – some vendors even talk of 'a dashboard of dashboards' – but that could all change as AI tools become integrated into management systems code and the move to self-management, self-healing and autonomy gathers pace.


Dynatrace CEO Rick McConnell, recently appointed after serving as President and General Manager of Akamai Technologies’ Security Technology Group, sees an inevitable contradiction in the future use of IT resources.

In essence, the better we get at exploiting the capabilities of the technology, the more data we will produce, forcing the complexity of both the systems and the management tools they will require to grow to the point where we can no longer manage any of it with any of the currently available tools. Artificial Intelligence (AI) will be the only viable answer, he suggests:

If we do our jobs, it seems to me that the complexity of data just explodes in an unbounded way. We can make sense of that through our AI ops capability, and by making sense of it, we provide not data on dashboards, we provide answers through Intelligent Automation using AI.

This is also an area where the developments now proceeding at pace out at the edge will bring significant changes not only to the way data is managed, but also the speed at which ever-richer value is derived from it. McConnell has seen the evolution of this from the early days when the ‘bleeding edge’ capability was the ability to integrate different data sources and types in order to assemble a common data set:

But it was just that, it was data. It was really left to the customer to figure out what to do with the data. The important evolution over the last four or five years, at Dynatrace, was not delivering just data, but rather delivering answers from data through our AIOps edge.

The Edge will make complexity more complex

While some of his current 'sense of direction' for the company suggests McConnell sees scope in extending its penetration into his old data and operational security bailiwick, it is clear that he has also latched on to the fact that, without a strong mixture of AI and operational management, the ability to generate any value out of the exploding growth of data will be difficult to maintain. Indeed, control may degrade enough to start reducing the value that can be created.

For example, he sees potential growth in edge-related applications, and consequent new growth in the data they will inevitably generate. This points to an underlying truth: the ability for business users to move up the levels of abstraction – to stop seeing the data and instead see the questions and possible answers it represents, reading words and sentences rather than individual characters from an alphabet – will become essential for fast and effective business management. It will also play an increasingly important role in the management and development of the applications themselves, especially as they grow to incorporate the edge into what will have to be a holistic, soup-to-nuts business management solution:

It will allow you to provide much more pinpoint accuracy, based on trace routes and other elements of immediate troubleshooting if your app is not being delivered, the infrastructure goes down, or some other particular issue arises. Then I can, with great certainty, figure out where it is, why it is and, hopefully, in advance of when customers start reporting back to you that there's a problem. Our artificial intelligence pinpoints precisely where the issue is. I think of this as, in some sense, the next phase, a shift into the development cycle, the evaluation of that data.

Here, the task moves beyond the common practice of figuring out where the problem is and adjusting for it, then dealing with it with very low false positive and false negative rates. With Dynatrace, the AI functionality is directly integrated into the application code, allowing users to measure and evaluate service-level objectives and triage directly in the code. That way, the system can automatically respond to changes in the delivery of those metrics.
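To make the idea concrete, here is a minimal sketch of what in-code SLO evaluation with an automated response might look like. This is purely illustrative – the names `SLO`, `evaluate_slo`, `enforce` and the rollback callback are assumptions for the example, not Dynatrace APIs:

```python
# Hypothetical sketch of SLO evaluation triggering automated remediation.
# Names and thresholds are illustrative, not a real Dynatrace interface.

from dataclasses import dataclass
from typing import Callable

@dataclass
class SLO:
    name: str
    target: float          # e.g. 0.999 -> 99.9% of requests must succeed

def evaluate_slo(slo: SLO, good_events: int, total_events: int) -> bool:
    """Return True when the observed success ratio meets the SLO target."""
    if total_events == 0:
        return True        # no traffic, nothing to breach
    return (good_events / total_events) >= slo.target

def enforce(slo: SLO, good: int, total: int, remediate: Callable[[], None]) -> bool:
    """Run remediation automatically when the SLO is breached; report breach."""
    breached = not evaluate_slo(slo, good, total)
    if breached:
        remediate()        # e.g. roll back the latest deployment
    return breached

availability = SLO("checkout-availability", target=0.999)
# 9,980 good out of 10,000 is 99.8% -> below target, so remediation fires
print(enforce(availability, good=9980, total=10000,
              remediate=lambda: print("rolling back")))  # prints "rolling back" then True
```

The point of the sketch is the shape of the loop: the objective lives alongside the code, the metric is evaluated continuously, and breach detection drives the remediation action rather than a human reading a dashboard.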

The goal here is to completely automate out the need for manual intervention and interaction in tasks such as operations remediation. This would also seem to hold out useful potential as a component of future digital twin implementations, where its capabilities could be fully exercised in a sandbox environment.

Edge computing is, of course, just the latest addition to the growing multi-cloud/hyperscale environment where that infrastructure complexity is racing ahead. And it is, as McConnell points out, now a world where much is dependent on that complexity working well and reliably, and where much more of our lives depend on exactly that.

For example, many of the world’s banking systems are now running on services ultimately hosted by some of the largest hyperscaler service providers around. That is just one obvious application where a few minutes’ downtime in a service can cost painful amounts of money, he argues:

Compliance is going to increasingly enforce resilience through (the use of) multiple providers. That's driving more complexity by driving a multi-cloud set of initiatives through this compliance regulatory environment. That then leads to even more complexity. And so the assembling of data from multiple disparate sources, and the processing of that data becomes quintessential, it just is critical, because there's no way you can do it manually. And increasingly, there's no way you can do it the way most companies are doing it today, which is DIY.

By this he means self-assembled open-source apps and cobbled-together monitoring tools, or perhaps one of the hyperscaler toolsets. But in his view this doesn't get users to multi-cloud:

It's just this kind of hodgepodge of elements that don't give you a precise or complete picture of what's happening. And as consumers, we have lost all patience. If I order something through some sort of e-commerce site, I now expect it to be here tomorrow, not two weeks from now, and I expect it to work perfectly. And by the way, if that site is slow, if I lose items out of my cart, I am never going back to that site again, ever, because I can do better somewhere else.

The need for ‘general’ AI to manage ‘specialist’ AI

One particular element of the opening keynote session of the recent Dynatrace conference in Las Vegas was the appearance of Max Tegmark, a Professor of Physics at MIT, the President of the Future of Life Institute, and scientific director at the Foundational Questions Institute. In his role as scene-setter he even managed to take McConnell by surprise with the breadth of his views on where AI was heading and where it might take a company like Dynatrace.

McConnell acknowledges that some of Tegmark’s observations raise potential conflicts with his own thinking, though they also map onto it in other areas. For example, he describes his thinking as a juxtaposition of two elements – the criticality and evolution of AI, and the role of humans in the world – and how that all maps together:

But we weren't trying to send any indelible message here, what we were trying to do was indicate AI is ever more important amidst a world of exploding data. So that was one element. And then number two, is figuring out how to ensure that you're optimising your utilisation of AI looking forward in your roles as humans and leaders, workers, whatever you might be, so that it can be most effective in your lives.

Tegmark’s fundamental theme was that the development goal for AI had to be Artificial General Intelligence, rather than what he called “Super Intelligence” in specific areas. The latter – the ability of a system to perform a specific task far better (by whatever definition) than a human – is what is currently most often touted by vendors as AI. Tegmark sees General Intelligence as much more important, because such AI systems will be able to observe themselves and what they are doing. They will then be able to control their operations in the wider context of what is 'best' overall, not just in the context of achieving a particular goal.

It did occur to me that this would not be a bad message to try and get across – think beyond the here and now of AI and think about how this wider goal might be achieved (coupled, of course, with the sub-text of 'look for Dynatrace to be heading in this direction'). McConnell is, however, not keen to nail all his colours to that mast, but it was clear that the underlying theme is a strong part of the company’s future direction:

We believe that AI has a fundamental role in the universe amidst an explosion of data from an ever-dizzying array of sources that then needs to be managed and processed. And the only way to do that is to accept that manual interaction is no longer viable. And so therefore, you need to think differently about how you approach a problem. It was really simple to have a manual dashboard of an app that ran on a mainframe that was completely in your control from end to end in terms of what the infrastructure looked like.

Now, you have apps that are getting updated almost continuously throughout the course of the day. And that brings with it a rate of complexity growth that is stunning. Yet if you push an app that breaks for your end users, you want to be able to roll it back in microseconds, because every one of those is going to result in a negative experience that is not going to go well. And so the intent was to send a signal that we will see many, many orders of magnitude more data, complexity and growth in compute power, and you're going to need to think very, very carefully and intelligently about how you put stuff out.
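The automated rollback McConnell describes can be sketched in a few lines. Again, this is a hypothetical illustration of the pattern, not a Dynatrace interface – `ReleaseManager`, `deploy`, `report_error_rate` and the error threshold are all invented for the example:

```python
# Illustrative sketch: automatically roll back a release when the error
# rate spikes after a deploy. Names and the 1% threshold are hypothetical.

class ReleaseManager:
    def __init__(self, error_threshold: float = 0.01):
        self.error_threshold = error_threshold
        self.history: list[str] = []      # stack of deployed versions

    def deploy(self, version: str) -> None:
        self.history.append(version)

    def report_error_rate(self, rate: float) -> str:
        """Roll back to the previous version when errors spike post-deploy."""
        if rate > self.error_threshold and len(self.history) > 1:
            self.history.pop()            # discard the bad release
        return self.history[-1]           # currently serving version

mgr = ReleaseManager()
mgr.deploy("v1.0")
mgr.deploy("v1.1")
print(mgr.report_error_rate(0.05))        # errors spiked -> back on "v1.0"
```

The design choice worth noting is that the rollback decision is driven by the observed metric, not by an operator: the feedback loop closes inside the system itself, which is what makes near-continuous deployment survivable at the scale he describes.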
