Mainframes in an edge computing world - Compuware providing the Phoenix with a box of matches
Summary: Edge computing looks set to provide mainframe computing with an escape route from being trapped forever in the silo marked ‘back office’.
The trajectory of Compuware's development over the last few years raises some interesting speculation as to where it might all lead, especially when coupled with what looks set to be ‘the next big thing’ in IT: edge computing. It opens up the question of whether edge computing will create new opportunities for exploiting mainframe technologies, or whether they will stay tied to their back office silo.
If edge computing is going to create a different architecture for businesses, is the traditional back office function of the mainframe going to need to change? Especially considering there are now IBM mainframe systems that IBM talks about as ‘cloud mainframes’. With DevOps and continuous delivery of cloud-native applications, it raises the question of whether back office applications need to make significant adjustments to remain in lock-step with what is going on at the edge.
So I sought out someone with views on the subject: Rick Slade, DevOps Solution Architect at Compuware, who most definitely believes that this is the direction mainframe applications will need to travel in the very near future.
Indeed, he does not even see ‘edge computing’ as new, having worked some years ago for a major US hardware retailer which, he says, has been operating the basic edge model for years. However, he does see the new interpretation of the idea as a very different animal.
Where I do see change is in how software is delivered. I think what we're seeing with our customers, and what I believe to be true in the industry, is that the mainframe will become an aggregation server in that computing model. If it's a component and it's providing information in several different forms to edge processing devices or wherever, then the delivery of those applications needs to be merged.
Mainframes no longer ‘other’
The major change he sees is that mainframe applications must become a component in the software delivery pipeline equal to every other application, including the bleeding-edge cloud-native ones. He sees a product-based, or functional application-based, software delivery discipline building up where there is no segregation between platforms in how software development is done.
This is moving towards a top-down, business function-related approach, where all the applications that make up that function work together, and therefore should be created, deployed, updated and tested together. This is regardless of whether they are cloud-native out at the remotest edge of the network, or part of the back office element running on a mainframe.
My belief is that the mainframe will be a participating member of that software development platform, where today it's not. I think we'll see significant changes there.
He sees this starting to happen already, but expresses deep frustration that most businesses seem determined for now to keep their silos intact, and therefore fundamentally out of step with each other. They are still segregating mainframe work from non-mainframe components, building non-mainframe pipelines based on tools, such as Jenkins, Ansible, Bamboo and others, while the mainframe groups are building their own pipelines for delivery to the mainframe platform.
Out-of-step mainframes = slow and getting slower
Tackling this is now one of the key thrusts of Compuware's existence: building exactly the same DevOps tools and delivery processes for mainframe software as are used by DevOps teams working at the cloud-native bleeding edge. This should mean that the same teams can start to do ‘business process’-related development, using the same tools, regardless of platform. The importance of this, he feels, is that there is still a ‘throwing tasks over the fence’ culture in play, which is slowing application development at a time when it really needs to speed up.
Until we start to design and leverage software delivery systems that align with application architecture, we're going to have that problem and it's going to create delays.
Slade also sees edge computing forcing modernisation on the mainframe stack itself, creating faster communications between components through the use of microservices and RESTful APIs, with tools such as z/OS Connect, OpenLegacy, and Blu Age.
These tools make it easier to use existing mainframe code, applications or services by building wrappers around them that give easy access. He acknowledges that this will lead to some degradation of service, which could create a need to reengineer them - though if the application is fairly static in nature, this may not be required.
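The wrapper pattern Slade describes can be sketched in miniature. The sketch below is purely illustrative - the field layout, widths and function names are hypothetical, not a real z/OS Connect or OpenLegacy interface - but it shows the core job such a wrapper does: translating between a modern JSON-style payload and the fixed-width, copybook-style record that a legacy routine expects.

```python
# Hypothetical copybook-style layout: field name -> (offset, length).
# A real wrapper would generate this from the actual COBOL copybook.
LAYOUT = {
    "account_id": (0, 8),
    "amount": (8, 10),
    "currency": (18, 3),
}
RECORD_LEN = 21


def to_record(payload: dict) -> str:
    """Pack a JSON-style dict into the fixed-width record a legacy routine reads."""
    record = [" "] * RECORD_LEN
    for field, (offset, length) in LAYOUT.items():
        # Left-justify and pad/truncate each value to its fixed field width.
        value = str(payload[field]).ljust(length)[:length]
        record[offset:offset + length] = value
    return "".join(record)


def from_record(record: str) -> dict:
    """Unpack a fixed-width record back into a dict for a JSON response."""
    return {field: record[offset:offset + length].strip()
            for field, (offset, length) in LAYOUT.items()}
```

In a real deployment this translation layer would sit behind a REST endpoint, so callers at the edge see ordinary JSON while the mainframe code remains unmodified - which is exactly why Slade notes the approach adds some overhead.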
The need now is for mainframes to become easily accessible and more easily modified through the use of automated delivery pipelines. He also sees this creating opportunities for mainframes to grow out of the classic ‘batch process/back office’ role into tasks where they become part of the real-time data creation/management process.
It will depend on the application. There will be opportunities where existing logic that's running on the mainframe can be leveraged through an abstraction layer with microservice or RESTful API access, but with minimal modifications to the existing code itself. I think there will also be opportunities for reengineering existing applications to accommodate a more modern computing paradigm.
De-segregate your functions
Slade calls this process ‘functional segregation’, where very large monolithic applications are broken down into functional components: not down to the microservices level, but functional components such as reporting, ETL activity, or business user and role management. One area where this approach could be particularly useful is where back office functions could be tailored to the processes a specific part of the edge environment contributes to the business. Such a function would be smaller, specifically dedicated, and built out of the functional units needed to achieve that task.
This might be an example of the aggregation server role he mentioned, which in turn would eventually report back to the final, ‘backest’ of back office functions.
One of the problems that we've got is that these existing mainframe applications are very tightly coupled today. They're very monolithic in design, so I think that what is going to be required in order to do that is to extract functionality from these monolithic applications and build those into services.
At some point in time they're going to maximise the velocity that they can achieve through better software delivery, and they're going to start to look at how they can further improve or get faster at making changes to come to the market or whatever. They're going to have to look at application architecture, and that's where the real reengineering work is going to have to occur.
He indicated that Compuware has already begun discussions regarding integration with tools like OpenLegacy, which provides a microservice interface into existing legacy applications. His view is that there will be growing interest in mainframe applications, and he expects to see companies working out how to get there.
One of the important by-products of all this should be potential business advantages in the form of better customer experiences. Slade is certain that the improvements here could be significant, because a unified pipeline overcomes the problems in delivering such systems today that are caused by segregating the platforms from a software delivery standpoint.
If I can include the mainframe in application testing from an automated standpoint, allowing Jenkins to do that work for me, the quality of what I produce is going to be significantly better than if I depend on different silos manually testing those results, and then pulling them together. I want to be able to deploy and deliver change quickly and the best way to do that is to combine all of my software delivery capabilities regardless of platform in the same automated delivery workflow.
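Slade's point about automated, cross-platform testing can be illustrated with a small sketch: one behavioural contract exercised against every implementation of a business function, whichever platform serves it. Everything below - the function names, fields and values - is hypothetical stand-in code, not a real Jenkins or Compuware integration; the two lookup functions stand in for a cloud-native service call and a wrapped mainframe service call respectively.

```python
def contract_check(lookup):
    """Apply the same behavioural contract to any implementation of the function."""
    result = lookup("ACCT-001")
    assert set(result) == {"account_id", "balance"}, "missing or extra fields"
    assert result["account_id"] == "ACCT-001", "echoed wrong account"
    assert isinstance(result["balance"], float), "balance must be numeric"
    return True


# Stand-in for a call to a cloud-native microservice.
def cloud_lookup(account_id):
    return {"account_id": account_id, "balance": 125.50}


# Stand-in for the same business function served via a mainframe wrapper.
def mainframe_lookup(account_id):
    return {"account_id": account_id, "balance": 125.50}


# A pipeline stage (in Jenkins, say) would run the identical check against
# every platform's implementation, rather than leaving each silo to test alone.
for implementation in (cloud_lookup, mainframe_lookup):
    contract_check(implementation)
```

The design point is that the test knows nothing about the platform behind the function - which is precisely the silo-free delivery workflow Slade argues for.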
His ‘anti-silo’ feelings go so far as to suggest that the vendor community now has to take real steps to minimise the need for detailed knowledge of the platform. He feels software developers should care far less about the platform they are writing FOR, and a lot more about what they are writing ABOUT - the business logic. Their objective should be to hit the requirements, standards and compliance rules of whatever organisation or repository they deliver to. These days the system should take care of the building, the testing and the deployment, regardless of target platform.
My take
Well, here is one genuine application area where the mainframe can have a role, indeed should have a role, and where its contribution to producing a customer experience good enough to win repeat business could be important. Edge computing is set to change many user perceptions and operational rules. If the fundamental inversion takes hold - moving compute to the data, rather than data to compute as now - then the desegregation of back office functions is likely to benefit the business, and please the customer, if those functions take their freedom and get inverted too.