Mainframe Modernization Solutions
Your mainframe is the perfect system for your high-intensity transaction processing environment. Some mainframes process 75% or more of their company's business revenue. But you will always need to innovate, and you may feel that the mainframe is inflexible and incapable of rapid implementation and agility.
The key to improved flexibility, rapid implementation and agility is mainframe modernization – the augmentation of the platform’s already impressive capabilities with new interfaces, an enterprise-wide reach, improved performance, controlled costs, and anything else you can imagine.
We’ve helped 20% of the Fortune 50 modernize their mainframe environments, and we can help you too.
The term Mainframe Modernization means different things to different people. For IT organizations running mainframe systems in large, high-intensity datacenters, it often means one thing – turning that mainframe into more than it is now, and doing it without breaking the bank. Some will give in to the pressure to migrate to other platforms; others will look long and hard for ways to improve their best transaction processing platform.
A misleading term
The enterprise vendor ecosystem is saturated with service providers and product vendors willing to help you to spend millions on grand and costly platform migration projects – and why not? Mainframe migrations have been big business in enterprise IT for decades. And some of these companies have latched onto the term Mainframe Modernization for their own use – it’s their terminology for mainframe migration.
Now, to be fair, there have been many applications, indeed, many organizations that had no business running mainframe systems – think of all the small- or medium-sized organizations that used mainframes to run accounting and HR applications. Most of them didn’t need that type of horsepower for those operations, and they moved to Windows platforms when suitable applications became available; most of them have abandoned their mainframe systems.
What’s left now are large organizations that run high-intensity transaction processing on mainframe systems – for them, the mainframe is the optimum system for their purposes. Optimum because the mainframe is specifically designed to handle their type of workloads; and because it is unquestionably the most cost-effective platform for those workloads.
For most of these organizations, leaving the mainframe would be a costly mistake – and a risky endeavor to replace what they already have with less suitable technology. History is filled with mainframe migration mistakes of this type. What they need is actual Mainframe Modernization – turning that mainframe into more than it is now.
Modernizing the Mainframe
Nay-sayers – or platform migration specialists – will tell you that the platform is outdated and incapable of adapting to today’s challenges. But the closer you look, and the more research you do, the more that conclusion seems based on marketing rather than on the reality of what’s going on in the mainframe ecosystem.
Now, like any other technology, the mainframe needs to adapt to changing times, and IBM has put many of the pieces in place to enable this. The mainframe has been open-source capable for more than a decade, there has been mobile development on z/OS for several years now, DevOps has considerable mainframe compatibility (and it’s badly needed), and even Blockchain technology is integrated into the mainframe. And that’s just a few examples; IBM has been very busy making sure that the mainframe remains relevant and capable in virtually any capacity. The mainframe ISV ecosystem is no less active.
There are several modernization techniques available right now – user interface modernization, legacy code modernization, operational cost modernization, system and application performance modernization, and IT transparency modernization. And that is just to name a few.
There is actually a long history of solutions for the modernization of mainframe green-screen interfaces. The first were screen scrapers – which still exist today – that capture and convert character data, or capture bitmap data. Some emulators used user macros that could drive up mainframe resource usage costs. More adventurous techniques involve actually redesigning some of the legacy code. These solutions all present some level of risk – rising operational costs, significant redevelopment costs, and so on.
Today the biggest demand is for mobile access to mainframe applications, and today there are solutions that actually leverage the legacy code base to drive new mobile interfaces for mainframe green-screen applications. And the good news is that these tools leverage legacy applications as they are. Legacy applications contain years’ worth of intellectual property, and run fast and reliably. These advantages are preserved – no new mainframe-side processing takes place; in fact, the legacy application need not be modified at all. And that leads us to code modernization.
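To illustrate the field-mapping idea behind such tools (a minimal sketch – the screen layout, field names, and positions here are invented for the example, not taken from any real product), a middle tier can read the character buffer a legacy green-screen application already produces and expose the same data as named fields for a mobile or web front end, leaving the mainframe application untouched:

```python
# Minimal sketch of green-screen field mapping: read the 80-column
# character buffer a legacy application already emits, and extract
# named fields for a modern front end. The layout below is hypothetical.

SCREEN_WIDTH = 80

SCREEN_MAP = {
    # field name: (row, start column, length) -- 0-based, invented layout
    "account": (2, 10, 8),
    "name":    (3, 10, 20),
    "balance": (5, 10, 12),
}

def scrape_screen(buffer: str, screen_map=SCREEN_MAP) -> dict:
    """Extract named fields from a fixed-width screen buffer."""
    # Split the flat buffer into 80-character rows, then slice each
    # mapped field out of its row and trim the padding blanks.
    rows = [buffer[i:i + SCREEN_WIDTH]
            for i in range(0, len(buffer), SCREEN_WIDTH)]
    return {field: rows[row][col:col + length].strip()
            for field, (row, col, length) in screen_map.items()}
```

The mainframe side keeps writing the same screen it always has; only the consumer of the buffer changes.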
Today, there are solutions that can leverage all of the code design work done on COBOL programs for the past decades, and help you to move seamlessly into the future (where there may be a continuing shortage of mainframe COBOL, JCL and assembler language expertise). Some of these solutions translate code into various distributed-systems flavors of COBOL; however, they are generally limited to smaller projects, where a re-platform will not affect performance. For larger projects, costs quickly get out of control when matching previous levels of throughput performance, five-nines reliability, redundancy, and horizontal AND vertical scalability on another platform.
Better solutions allow you to leverage existing code as it is, without a major redesign, re-engineering or complete migration. For larger projects, leveraging what is in place is the fastest and most economical way to modernize. New code can be written to interwork with legacy code – new business rules and business logic can augment the legacy code base, built by younger, less expensive programmers using modern toolsets and programming languages. And that code can run anywhere – on your mainframe, or on other platforms.
While running mainframe systems cannot truly be considered in and of itself a cost issue (see above), there are many ways to optimize mainframe operations without making changes to code logic, databases, and platform hardware. One is high-performance in-memory technology, which can sharply reduce the amount of CPU and MSU resources used by your mainframe applications, thereby reducing their impact on the monthly bill. Similarly, smart performance capping can reduce cost – some of the best solutions can do that without actually capping business-critical workloads.
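The principle behind the in-memory approach can be sketched in a few lines (an illustration only – the class and the stand-in for the expensive access path are invented for this example, not a real product API): repeated reads of the same record are served from memory instead of re-running the full, CPU-consuming access path every time.

```python
# Sketch of the idea behind in-memory optimization: pay the full cost
# of a read once, then serve repeats from an in-memory copy, reducing
# the CPU consumed per transaction. Names here are illustrative.

class InMemoryCache:
    def __init__(self, fetch):
        self._fetch = fetch   # the expensive access path (e.g. a database read)
        self._store = {}      # in-memory copies of records already read
        self.misses = 0       # how many reads paid the full cost

    def get(self, key):
        # Only the first read of a key runs the costly fetch;
        # every later read is a cheap dictionary lookup.
        if key not in self._store:
            self.misses += 1
            self._store[key] = self._fetch(key)
        return self._store[key]
```

If an application reads the same customer record a thousand times, only the first read consumes the full resource cost; the other 999 come out of memory – which is why this class of product can lower the monthly bill without any change to application logic.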
One tried and true method to improve performance is a general systems upgrade – adding processor cores, memory and other hardware onto your existing machines, or even an upgrade to the newest mainframe system (z14), if you haven’t already done that (some might tell you to go back to z12, but that’s another discussion :-). Upgrading some system software can also improve performance. These solutions will, of course, come with an increase in operations cost. However, some of the same solutions that help control costs can also make a big difference in performance, without adding to your monthly bill – for example, in-memory technology can improve application performance as well as database performance (in cases where many database applications are optimized).
As you know, tremendous amounts of IT data are saved every hour of every day on all of your systems, both your mainframe systems and midrange servers; enough data that you could realistically call it your own “IT Big Data.” All companies leverage this data at least for the purposes of paying the monthly licensing bills. The more serious IT organizations also use this data to look at efficiency and to glean some analytical insight.
Going beyond that, however, is where you can make a quantum leap – and that means IT business intelligence. By adding business structure and costing information to your IT data, it becomes possible to measure who in the company is using which resources, and how much that is costing. It can also help to measure the immediate effects caused by business changes (company mergers and acquisitions, process changes, new product introductions, etc.). The power of IT’s own data can help change the position of IT from being just a huge cost center into a window into general business efficiency.
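A hedged sketch of the chargeback idea described above (the record fields, business units, and cost rate are all invented for illustration – a real implementation would draw on actual accounting data and contract rates): roll up resource-usage records tagged with business structure into a cost per business unit.

```python
# Illustrative IT chargeback: aggregate resource-usage records into a
# cost per business unit. The field names and rate are hypothetical.
from collections import defaultdict

CPU_RATE_PER_SECOND = 0.05  # invented cost rate per CPU-second

def chargeback(records):
    """records: iterable of dicts with 'unit' and 'cpu_seconds' keys.

    Returns a mapping of business unit -> cost, rounded to cents.
    """
    totals = defaultdict(float)
    for rec in records:
        totals[rec["unit"]] += rec["cpu_seconds"]
    return {unit: round(secs * CPU_RATE_PER_SECOND, 2)
            for unit, secs in totals.items()}
```

Once usage carries a business-unit tag and a rate, the same roll-up can answer the questions in the paragraph above: who is consuming which resources, and what a merger, process change, or product launch actually did to IT cost.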
Nobody will argue that today’s IT systems must be modernized to handle the new and changing demands of tomorrow. And there are as many ways to do that as there are bits of data in your cell phone’s memory card. But don’t let anyone define for you what “modernization” means – it doesn’t mean using Vendor A’s specific (and possibly inflexible) software solutions, and it certainly doesn’t mean suddenly or even gradually dumping your existing high-value and mission-critical IT assets into the landfill. So if you’re running a mainframe – the very best system on the planet for processing business data – and it’s generating 60 or 75 percent of your revenue, find someone who will actually modernize it for you, not just replace it with their own who-knows-what…
For more information, see the article “So what Does ‘Mainframe Modernization’ Really Mean?” on the Planet Mainframe blog.