Modernization of legacy business systems and applications is now a mainstream activity across most businesses and their IT functions. It is also one of the most difficult tasks they face, for while those systems may still work satisfactorily for now, many are close to the end of their capabilities and certainly not up to accommodating the new services and capabilities businesses want to offer their own customers.
To most of the marketing and sales ‘suits’ one meets, the answer is now simple: move to the cloud. In general terms that advice is almost certainly correct. But while it may be right for most businesses, the practical upshot for any specific business, with its own specific set of applications, services and future plans, is a huge amount of uncertainty.
Pinning down what these issues are in any given case demands some digging and analysis. Identifying every process step an application runs, the relationships between those steps, and the relationship of any one application to all the others being run is a major prerequisite of moving legacy applications to the cloud. ‘Lift and shift’ is very easy to say, but in practice it rarely works without first building a high level of understanding of the applications themselves: how they work in detail, how they work together, and how they work with the rest of the system of which they are a part.
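One way to picture that prerequisite analysis is as a dependency graph over the application portfolio. The sketch below is purely illustrative (the application names and dependencies are invented, not drawn from the article): it treats shared interfaces as undirected links and finds the groups of applications that are coupled tightly enough that they would need to be assessed, and probably migrated, together.

```python
from collections import defaultdict

# Hypothetical example: each application lists the others it calls or
# exchanges data with. All names here are invented for illustration.
depends_on = {
    "billing": ["ledger", "customer-db"],
    "ledger": ["customer-db"],
    "portal": ["billing"],
    "reporting": ["ledger"],
    "hr": [],
}

# Treat dependencies as undirected: two applications that share an
# interface generally need to be analysed (and often moved) together.
adjacency = defaultdict(set)
for app, deps in depends_on.items():
    adjacency[app]  # ensure applications with no dependencies appear
    for dep in deps:
        adjacency[app].add(dep)
        adjacency[dep].add(app)

def migration_groups(adjacency):
    """Return connected components: sets of apps coupled by dependencies."""
    seen, groups = set(), []
    for start in adjacency:
        if start in seen:
            continue
        stack, group = [start], set()
        while stack:
            node = stack.pop()
            if node in group:
                continue
            group.add(node)
            stack.extend(adjacency[node] - group)
        seen |= group
        groups.append(group)
    return groups

for group in migration_groups(adjacency):
    print(sorted(group))
```

In this toy portfolio the first five applications form one tightly coupled group while `hr` stands alone, which is exactly the kind of finding that decides whether a ‘lift and shift’ is one move or five.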
A session at a recent Software Intelligence Forum organised by CAST, the Paris-based Software Intelligence specialist, highlighted the complexity and interactive nature of these issues and set out the steps that need to be taken as the essential precursor to moving existing applications out into the cloud. It can be done, and done successfully, but a simple ‘lift and shift’ it certainly is not.
The session was moderated by Bob Hoey, a former IBM General Manager and now board member with several software companies as well as an advisor to CAST. He was joined by Subhada Reddimassi, Head of Modernization and Cloud at Wells Fargo Bank. The fundamental question they set out to address was, ‘As many of these applications need to be modernized or refactored, have transition teams really analyzed the millions of lines of code inside current software portfolios to get an in-depth understanding of the inner workings of the applications?’.
As Reddimassi pointed out, some of the software is going to be 20, 30, or even 40 years old, and much of it will rarely have been touched, let alone changed. Yet to be modernized, it will need to live in a world where software is nimble, expandable, and able to be changed very quickly. Modernizing legacy software should ideally mean getting it to a point where the modernized code can be released quickly and adaptably to production systems:
When you start talking about modernization, the first thing that comes to mind is cost savings. But in general, when we, as a large firm, are looking for modernizing our applications, we’re driven from a different need. We are driven from the need to deliver the capabilities very quickly to the market so that we can capture the market, capture customers with the features we want to deliver. We also are driven by the fact that that competitive edge and advantage is quite important for banks like us.
Old world, meet new world
But the bank’s position in the US market raises a problem that many large enterprises will face in their own sectors: the growing need to integrate with newcomers bringing new services and capabilities. That is difficult when the legacy applications come from a world of co-ordinated quarterly releases across all systems, where a change in the mainframe forces a corresponding change in other systems at the same time in order to keep delivering the end-to-end experience customers look for and expect.
For Reddimassi, a key part of modernizing is moving into an architecture which is more compatible with modern release practices, whether that is event-driven or reactive design. The goal is to keep software nimble, with smaller code bases that allow the development of microservices specific to customer need:
We want to make sure we are always agile, so that we are able to be there when our customers need us, especially when you think about the storyline with the pandemic. We all had to move from an environment where people could very easily walk into the branch, to an environment which was 99% online at one point in time, until the branches could re-open.
She made the point that not only is modernization not simple, it is also costly, so it is important to identify which modernization strategy should apply to which applications. It is a mistake to assume that the same strategy will apply to all of them, not least because doing so can incur unnecessary cost. So the priority is to look for the options with high business value to customers and to the business. This can identify which software should be kept as ‘champion’ software, which qualifies as a ‘challenger’, and which can now be retired. Wells Fargo has been through this process on the 5,000 or so applications it has been running.
The factors that have to be considered here include the cost of re-hosting an application in the cloud if it can be lifted and shifted, plus the alternative position: whether a comparable SaaS offering is now available and whether it is more efficient than the current in-house application. Can an application be re-factored or re-engineered, and does the value balance favourably against the time and cost required to achieve that? For some applications that value will certainly be present, but the migration process will then need to include the due diligence to ensure its operation does not introduce new technical or business problems, especially if the objective is to make the application cloud native rather than just cloud friendly. As Reddimassi said:
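The decision factors above (rehost cost, SaaS alternative, refactor value versus effort, underlying business value) lend themselves to a coarse triage heuristic. The sketch below is a minimal, invented illustration of that kind of scoring, not Wells Fargo's actual method; every field name, weight and threshold is an assumption made up for the example.

```python
# Hypothetical triage sketch. The application records, field names and
# thresholds below are invented for illustration; a real assessment
# would rest on measured code metrics and business input.

def recommend(app):
    """Return a coarse disposition for one application record."""
    if app["business_value"] == "low":
        return "retire"
    if app["saas_alternative"] and app["saas_cost"] < app["run_cost"]:
        return "replace with SaaS"
    if app["cloud_blockers"] == 0:
        return "rehost (lift and shift)"
    # Refactor only when the expected value outweighs the effort.
    if app["refactor_value"] > app["refactor_cost"]:
        return "refactor / re-engineer"
    return "retain on-premises for now"

apps = [
    {"name": "payments", "business_value": "high", "saas_alternative": False,
     "saas_cost": 0, "run_cost": 500, "cloud_blockers": 12,
     "refactor_value": 900, "refactor_cost": 400},
    {"name": "old-report", "business_value": "low", "saas_alternative": False,
     "saas_cost": 0, "run_cost": 50, "cloud_blockers": 3,
     "refactor_value": 10, "refactor_cost": 80},
]

for app in apps:
    print(app["name"], "->", recommend(app))
```

The point of the exercise is the one the article makes: one strategy per application, chosen from evidence, rather than one strategy for the whole portfolio.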
Now imagine doing that without having any insight about that application. Some of these applications can have millions and millions of lines of code. How do we make sure that we are able to get value out of these applications? And remember that we have got to get them to a place where we are really able to deliver capabilities to our end customers in an effective manner.
This is where, for Wells Fargo, Software Intelligence products come into play: to gain a good understanding of the code base, a real opportunity to look at the nitty-gritty of each application, its structural dependencies and operational interdependencies, and to establish which applications need to move together or incorporate code that blocks their ability to be even cloud friendly, let alone cloud native.
It also helps identify what applications are too complex and would be better rewritten, and perhaps most importantly, it can identify what blockers are present that prevent development teams from taking advantage of modern infrastructure. This, indicated Reddimassi, is something the Bank has spent time on to good effect:
The CAST Software Intelligence product that we have been using has been very helpful for us to identify those blockers within the code bases, so that we can then plan for what effective change needs to happen for making those applications cloud friendly. And when we really make an application decision to say, okay, cloud friendly is not good enough, or there is too much work for us to do to make that application cloud friendly, we make an intentional decision to say let's re-architect this application. We sometimes even rewrite the application to fulfil the business need that it has. Without Software Intelligence, I think we would be like blind people trying to cross the road, not knowing where we were going and without anybody leading us.