
Eating an elephant - Nedbank’s digital transformation, built around DevOps and mainframe resilience

Martin Banks, January 29, 2021
Developing front-end, user-facing applications is one thing; making sure the resilience of the established back-office services keeps pace is another matter.


Nedbank in South Africa dates back to 1881 and is steeped in very traditional banking culture. It also has more than a few mainframe applications to its name. Like other long-established banks, it’s now facing a challenge from new unicorn and Fintech start-ups, which has led in turn to a very aggressive digitalisation journey. 

Part of that required accepting early on that many traditional ways of working needed to be re-evaluated, with both agile and DevOps practices rising right to the top of the requirements list, says Devi Moodley, Executive Enterprise Dev/Ops and Release Management at the bank: 

We started with the adoption of agile practices. Subsequently, we brought in the adoption of DevOps practices, starting with the easier components of our technical stack, the front ends. Then it became extremely clear that there was a mismatch of delivery across the stack, because at the end of the day, our mainframe remained the core of the business.

That led her and the bank’s Senior Software Development Manager, Troy McNamara, to conclude that they had to ‘hollow out’ the core of the mainframe and modernise the applications it was running. The alternative choice was to go for a new suite of core banking applications, a move loaded with inevitable risks. Sticking with the mainframe and modernising the applications won out - and quickly led to another realisation, says Moodley: 

Troy and myself started saying if we are going to do DevOps properly in Nedbank, we've got to do DevOps across our full technical stack, because DevOps isn't just about the tooling. It's about a change of culture, it's about a change of work, a new way of working.

As a result, they started looking at DevOps tools vendors out in the market and did a desktop evaluation, which led to a pilot exercise with two contenders. The winner of that process was Compuware. 

How to eat an elephant

From the beginning there was an understanding that this would be no ‘nice to have’ side project, but rather an integral part of an across-the-board commitment to the application of DevOps, ‘soup-to-nuts’. Moodley acknowledges that they had the key advantage of full senior executive support for this new way of working, but despite that up-front commitment, the plan at that point was still to “eat the elephant, one bite at a time”, starting with low-hanging fruit projects - the new internet applications, the web front ends for mobile applications, and the customer-facing access points.

In the background, team members were also examining how to make DevOps on the mainframe a reality, she adds: 

When we started, the DevOps engineers that had mainframe experience in their careers were actually already doing the stack evaluations, understanding what was available to us.

They became aware of the advantages offered by DevOps, but also of front-end applications being inhibited and even stalled by the inability of the mainframe back-office applications to cope with the increased workload, throughput and real-time requirements.

In essence, it became clear that DevOps in one part of the system meant DevOps had to be right through the system if the business requirement was to be met. It didn't matter what you did at the front end; the total time of fulfilment of whatever the task was didn't change, because of the issues at the back end, explains Moodley: 

The teams were modernising our interfaces with our customers, but not able to fulfil the features because the pace at which the mainframe team was moving was too slow. To release a feature to the market was dependent on the slowest part of the chain and that was the mainframe. The benefits that we wanted, that were most important, was how can we increase the velocity and the delivery of these teams?

According to McNamara, from a deployment perspective the process was cumbersome, requiring high levels of discipline with lots of manual checks and balances in place. There was also a high administrative overhead for release management and change management:

I think those were key things that DevOps allows you to do, just from a practical perspective. Irrespective of whether you're a mainframe or a front end, you can actually do proper source code management. The other big thing was to figure out code quality. With some of the COBOL developers reaching retirement age, we've got a lot of youngsters coming in. So it was important to put the knowledge into a product that could then run rules against the code that was being pushed out. 

We did a Value Stream Map. If you look at that, you could see where the blockages were and why things were taking so long. It was really to automate as much as possible, so you can have a product that has some workflow built in that allows you to manage each stage as you promote your code through to production. That's a huge saving, absolutely huge.
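The idea behind a value stream map is simple enough to sketch: for each stage in the delivery chain, compare the hands-on process time with the total elapsed lead time, and the waiting time between the two shows where the blockages sit. The sketch below illustrates the arithmetic only; the stage names and figures are invented for illustration, not Nedbank's actual numbers.

```python
# Illustrative value-stream-map calculation. For each delivery stage,
# compare hands-on process time with total lead time (which includes
# waiting) to surface where the blockages are. All figures are made up.

stages = [
    # (stage, process_hours, lead_hours)
    ("code & unit test",    8, 16),
    ("manual change forms", 2, 40),   # administrative overhead
    ("senior code review",  2, 48),   # waiting on a reviewer
    ("rebuild & promote",   1, 24),
    ("release approval",    1, 72),
]

total_process = sum(p for _, p, _ in stages)
total_lead = sum(l for _, _, l in stages)

for name, process, lead in stages:
    wait = lead - process
    print(f"{name:20s} process={process:3d}h wait={wait:3d}h")

# Flow efficiency: how much of the elapsed time is actual work.
print(f"flow efficiency: {total_process / total_lead:.0%}")
```

With these invented numbers the flow efficiency comes out at 7%, i.e. most of the elapsed time is waiting rather than work, which is exactly the kind of picture that points automation at the right stages.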

It has also been possible to check for code contentions, which traditionally have often only been found when first running the finished application. This allows developers to know who else is working on the same piece of code. Perhaps most important of all, especially with an ageing workforce, it helps to transfer as much of the background institutional knowledge as possible to the young developers and automatically check they are not breaking any rules.

This has also given the team a chance to look at the bank’s business processes and revamp those where more efficient ways of operating were now available. For example, they were able to establish that some old emergency processes were no longer relevant and could be removed. Typical of this is the old process for version control, where the integrity of the version is now maintained by the pipeline and it is not possible to deploy the wrong version of code.
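One common way a pipeline can guarantee version integrity of the kind described above is to fingerprint each artifact at build time and refuse to promote anything whose fingerprint does not match the recorded one. The sketch below shows the principle only; the function names and in-memory registry are hypothetical stand-ins, not Nedbank's actual tooling.

```python
import hashlib

# The pipeline records the checksum of each artifact it builds; the
# promote step only accepts artifacts whose checksum matches that
# record, so a stale or out-of-band version can never reach an
# environment.

built_artifacts: dict = {}  # version -> sha256 recorded at build time

def build(version, payload):
    """Record the checksum of the artifact the pipeline produced."""
    built_artifacts[version] = hashlib.sha256(payload).hexdigest()

def promote(version, payload, env):
    """Promote only if this is exactly the artifact that was built."""
    recorded = built_artifacts.get(version)
    actual = hashlib.sha256(payload).hexdigest()
    if recorded != actual:
        print(f"refusing {version} -> {env}: checksum mismatch")
        return False
    print(f"promoted {version} -> {env}")
    return True

build("2.1.0", b"compiled module")
promote("2.1.0", b"compiled module", "uat")          # allowed
promote("2.1.0", b"locally patched module", "prod")  # blocked
```

Because the check is automatic at every promotion, it replaces the manual verification forms that previously guarded each environment boundary.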

As an example of the results these changes have made, Moodley explains that, from an operational efficiency point of view, the administrative time taken for an IP change management process has been reduced by 95%. In addition, the unit testing process has shortened from two hours to 10 minutes now that a senior developer no longer has to review the code before it can be integrated:

It was a huge administrative overhead, because you had multiple forms that you needed to fill out. We were able to pull all of the information directly from the system, or reference the change to the computer which held the information from an audit perspective. From the developer's side of things, it previously took over an hour to promote the code through the system because they had to rebuild. Now, it takes about two minutes to promote code to the different environments as so much of it is automated.
