
Software archaeology for beginners - digging through layers and layers of decisions

Martin Banks, May 29, 2024
They’re messy, and getting messier, which is why one software vendor is targeting keeping enterprise applications up to speed. CEO Derek Holt explains how.


I describe it as software archaeology. There's years and years of layers of decisions and, you know, an old mainframe that's sitting three levels below the ground and needs to be dusted off every once in a while. I can almost guarantee most of the apps on my phone are hitting a mainframe at some point. And now there is yet another layer on top of that archaeology, these AI services, and so all we're doing is adding to the complexity of these applications.

That is how Derek Holt, the company's CEO, outlines one of the key issues confronting enterprises as they work through transforming themselves from a traditional, 'hand-managed' business supported by an array of legacy IT systems and applications to being a cloud-using, AI-wrangling, fully digitalized operation. Back in those legacy days, applications were loaded from floppy disks and CDs and lasted at least three years before being updated. Now some applications – including business-critical ones – may last only three days before an update, addition or modification appears and needs inserting into a vastly more complex, continuously operational environment, all without bringing the business to a grinding halt.

For greenfield start-ups heading down the cloud native road from day one, much of this will be mainstream behavior, and no major problem. But for established large enterprises looking to move into the real world of hybrid cloud operations that mix trusted legacy business management systems with on-premise client/server departmental management and cloud-based, customer-facing, online real-time operations, that is easy to talk about and far less easy to manage. (It is made even less easy when the minefields of governance and compliance are thrown into the mix.) This is all being further complicated by the arrival of gen AI co-pilot tools, which offer the attractive potential of helping developers produce more code, and do it faster. But Holt sees this as a development that, while it does have potential, currently flatters to deceive, making the lives of CIOs, developers and others across the world of IT far more difficult and messy.


Helping businesses manage the range of application lifecycles and interactions in such environments is where the company has pitched its camp. Holt claims that, although only four years old, the company represents about 50 years of thought leadership across a variety of different domains within the software development lifecycle. These range from ideas and planning all the way through to deploying code into production, monitoring, and all the complicated steps in between. He explains:

The opportunity we saw when we formed the company was that there were no companies at that time (and this is probably broadly still the case) waking up and thinking only about the complexities of large-scale enterprises. If you think about the problems, challenges and opportunities of a 100,000-person development organization, they are wildly different from those of a 20- or 30-person one, particularly when these large businesses are in highly regulated, very complex environments.

He describes the company's abilities as AI-powered DevSecOps across all the key stages of software development, from planning to writing code, to testing, to securing, to releasing and deploying, assuming this needs to be done across heterogeneous environments. This means having partnerships and integrations with a wide range of third-party tools as point solutions, and the firm's tech has found favor with around 50 of the Fortune 100, as well as the top 10 US banks, eight of the top 10 European banks, and the top five US insurance companies, plus large airline and hospitality businesses. He says:

It's the big kind of industries, often not born on the web. And if they were born on the web, not born in the cloud, helping them navigate these really complex topologies.

It's about the guardrails

The company does seem to be approaching the large-business heterogeneous complexity question in a very different way to the other popular option of shifting as much of the workload as possible to low-code/no-code alternatives, especially in sectors like finance and insurance where the data traffic is largely based on a relatively tight set of tasks executed in high volumes. Holt sets about answering the question in two ways.

Firstly, he does see a role for low-code/no-code in regular tasks such as updating COBOL for a mainframe application, updating middleware to push out to an app server running in some data center, or working with containers and Kubernetes, perhaps solving particular use cases within a composite application. In the end, he sees that as part of a broader ecosystem. Secondly, he sees low-code/no-code being used for rapid prototyping, perhaps in a proof-of-concept manner, followed by the point where a custom implementation is required to take that application or function to mission-critical status. He argues:

The thing about doing more traditional release management is that you know what's getting into production, that the right check boxes have been checked, the right scans have been done. That, frankly, is different for every type of development, including low-code/no-code, but it still needs to be there; there still needs to be governance, there still needs to be compliance. And if you do it right, it automates repetitive tasks and takes away a lot of other things that, frankly, developers don't want to spend their time on. But it still provides those guardrails to make sure that we're following compliance.
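The kind of automated guardrail Holt describes can be pictured as a pre-release gate: a build is promoted only if every required compliance check has passed. The sketch below is purely illustrative and assumes nothing about any vendor's tooling; the check names and the `ReleaseCandidate` structure are hypothetical.

```python
# Hypothetical sketch of an automated release gate. A candidate build is
# approved only when every required compliance check has passed; otherwise
# the gate reports exactly which checks are still missing.
from dataclasses import dataclass, field

REQUIRED_CHECKS = {"security_scan", "license_audit", "unit_tests", "change_approval"}

@dataclass
class ReleaseCandidate:
    version: str
    passed_checks: set = field(default_factory=set)

def gate(candidate: ReleaseCandidate) -> tuple[bool, set]:
    """Return (approved, missing_checks) for a candidate build."""
    missing = REQUIRED_CHECKS - candidate.passed_checks
    return (not missing, missing)

rc = ReleaseCandidate("2.4.1", {"security_scan", "unit_tests"})
approved, missing = gate(rc)
print(approved, sorted(missing))  # gate blocks the release and names the gaps
```

The point of the set-difference design is that the gate is the same for every type of development, low-code or custom; only the contents of `REQUIRED_CHECKS` change per environment, which matches Holt's argument that the guardrails stay constant while the checks vary.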

He also sees the arrival of gen AI-based co-pilots having an impact on both low-code/no-code and more traditional application development processes, not least because the accepted thesis is that it is really hard to write custom code. Reducing that problem lies at the heart of the low-code/no-code proposition. He feels that AI-assisted coding, using co-pilots to help developers code faster, especially with repetitive tasks, is now a really compelling possibility. He sees giving each developer a co-pilot as the future equivalent of the pair programming model, stating:

Having a pair where the pair is not another human but a virtual peer: may that start to tease us away from some of the low-code/no-code and get us back to writing more custom code? I don't know the answer, but I think it's an interesting space to keep an eye on.

There is a downside to co-pilots, of course, at least in the short term. Holt's company has avoided entering the co-pilot business itself, he says, as there are already dozens of them out there and most of them are poorly trained when it comes to working with code development. Instead, the company focuses on governing the code they produce, and on attempting to control their inevitable code sprawl and production of poor-quality code. According to Holt:

The big, big one goes back to the training data and security issues. If the training data had bad security challenges, guess what the generated code is going to have - security challenges. I do believe, for a variety of reasons - from a cost perspective, from a sustainability perspective, and interestingly enough from an accuracy perspective - that you're going to see an evolution from large language models to, I don't know if this is the proper term, small language models, where I can train it on a subset of my data and therefore get a better output from it.

One of his big fears right now is that current co-pilots could make things worse. Questions such as 'Is it writing code that is readable?' or 'Is it writing code that is going to be maintainable?' are very important. In practice, the perceived benefits of getting started fast and writing code quickly may seem attractive, but they are likely to create problems downstream. In Holt's view, the smart users are not using co-pilots to write more code. Instead, they are looking at how co-pilots can be used to help write better code tests. He suggests:

Think about what the lifecycle model is. I build a model, I train a model, how do I test the model? There's an infinite number of questions that could be asked in theory. And then how do I figure out the retraining? And how do I measure whether it's better or worse? There are best practices that are emerging, but it's no different than maintaining the lifecycle of data. You need to have best practices and to define that lifecycle. And then you can go figure out what tools you need to help you be better at it. But be really thoughtful about what this looks like; I think we're still in the early innings in some of these areas.
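The lifecycle Holt walks through (build, train, test, measure, retrain) boils down to a promotion loop: a new model candidate replaces the current one only if it measurably scores better on a fixed test set. A minimal sketch, using toy callables in place of real models so no ML framework is assumed:

```python
# Illustrative model-lifecycle sketch: evaluate a retrained candidate
# against the current model on the same test set, and promote it only
# if its measured accuracy improves. The "models" are plain callables.

def accuracy(model, test_set):
    """Fraction of (input, expected) pairs the model predicts correctly."""
    correct = sum(1 for x, y in test_set if model(x) == y)
    return correct / len(test_set)

def maybe_promote(current, candidate, test_set):
    """Keep the candidate only if it measurably beats the current model."""
    if accuracy(candidate, test_set) > accuracy(current, test_set):
        return candidate
    return current

# Toy task: classify whether a number is even.
test_set = [(n, n % 2 == 0) for n in range(10)]
current = lambda n: True            # naive baseline: always answers "even"
candidate = lambda n: n % 2 == 0    # retrained model: applies the real rule
promoted = maybe_promote(current, candidate, test_set)
print(accuracy(promoted, test_set))  # the better model wins
```

The essential discipline is the one Holt points at: the test set and the metric are fixed before retraining, so "better or worse" is a measurement rather than a judgment call.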

My take

Enterprise applications are no longer singular entities. These days they are composites of established legacy mainframe applications, middle-tier management tools and bleeding-edge, cloud-native, real-time end-user tools that have to work as services to the customer. The arrival of gen AI tools will only make this more complex and, with early AI-based co-pilot tools, potentially messier than ever. It's a difficult area, but someone has to do it.
