Mainframe Batch Solutions

Batch processing on the mainframe is critical to your business. In recent years, overall mainframe workloads have increased, along with the associated resource usage and operations costs, making it harder than ever to finish critical batch processing on time.

By leveraging in-memory technology, you can reduce the time needed to complete batch processing while also reducing the mainframe resources that processing consumes.

When DataKinetics mainframe batch optimization is applied where needed, your batch processing problems can be solved and your batch-related operational costs reduced, all while using your current applications, your current database (virtually unchanged), and your current hardware configuration.

We’ve helped 20% of the Fortune 50 solve their batch window problems, and we can help solve yours too.

Pressures on the Batch Window

In today’s connected world, the demand for 24/7 OLTP is universal. Global business has removed any restrictions on time: a company that does business globally must be available around the clock to serve all time zones equally. Mobile and e-commerce have had a similar impact, requiring that operations be available 24/7 to meet the demands of global customers. Banks, too, must be responsive to transaction processing at all hours. All of this has put tremendous pressure on the batch window.

Other pressures on the batch window come from the need to handle larger volumes of data and to incorporate additional functions. Naturally, with these changes, the processing time required to complete batch jobs increases, sometimes exceeding the available batch window and often leading to severe congestion within it.

Contemporary Solutions for Batch Woes

Several options in the marketplace have been used to solve batch window congestion problems; a good place to start is the IBM Redbook on Batch Modernization on z/OS. Most of these solutions work quite well, and mainframe shops running significant amounts of batch should be implementing many of them now. Even so, implementing the best of these solutions, or even all of them, may not be enough to achieve your batch goals, whether they are performance or cost related. The contemporary options include:

  • Scheduling
  • Hardware upgrades
  • Grid workflow
  • Application re-architecture and optimization
  • Db2 optimization
  • Running batch and OLTP concurrently
  • Data in Memory (DIM)

Despite all of these contemporary solutions, there is only so much improvement possible, and often, when systems and applications change, these efforts have to be repeated. Fortunately, a handful of third-party batch optimization solutions have been helping IT organizations reach their batch goals for years. For the most part they are “fire-and-forget”: their impact is long-term, and most of them improve performance and lower operational costs at the same time.

Modern Mainframe Batch Performance and Cost Optimization Techniques

These batch performance and cost optimization solutions are proven techniques used by the Fortune Global 500 today, helping to power high-intensity transaction processing without the need for additional hardware, memory, or CPU. Nor do they require ongoing monitoring and tuning of the processes involved.

  • High-performance in-memory technology
  • IT business intelligence
  • Soft capping automation
  • In-memory optimization of batch applications

High-performance mainframe in-memory technology

High-performance mainframe in-memory technology can be used to accelerate your existing batch applications, particularly those in environments with ultra-high transaction processing rates. It augments the database, as well as existing contemporary batch solutions such as data buffering.

This technology works by allowing select data to be accessed over a much shorter code path than normal. The typical Db2 code path requires 10,000 to 100,000 machine cycles, and that includes any type of buffered access.

Data accessed using high-performance in-memory technology takes only 400 machine cycles. Only a small portion of your read-only data, the data accessed most often, needs to be accessed this way.
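To put those cycle counts in perspective (the workload figures here are purely illustrative, not measured results): a batch job that performs 100 million lookups of hot reference data at a mid-range 50,000 cycles per Db2 access spends roughly 5 trillion cycles on those reads. At 400 cycles per in-memory access, the same lookups cost about 40 billion cycles, a reduction of more than 99% for that portion of the work.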

How it works

Small amounts of data that are accessed by most or all transactions (account numbers, interest rates, and the like) are copied into high-performance in-memory tables. From there, the data is accessed via a small, tight API. All other data access is unchanged, and no changes are required to application logic or to the database.
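As a rough sketch of the pattern (this is not the DataKinetics API; the class, method, and data names below are hypothetical stand-ins), a batch program might load its hottest read-only reference data into an in-memory table once at startup and route only those lookups around the database:

import java.util.HashMap;
import java.util.Map;

/** Hypothetical sketch of the in-memory reference-table pattern. */
public final class RateTable {

    // Hot, read-only reference data (e.g., interest rates keyed by product
    // code), loaded once at batch start rather than re-read from Db2 on
    // every transaction.
    private final Map<String, Double> rates = new HashMap<>();

    /** Load the table once, e.g., from a single Db2 scan at job start. */
    public void load(Map<String, Double> snapshot) {
        rates.putAll(snapshot);
    }

    /**
     * Short-code-path lookup: an in-memory hash probe instead of a full
     * database call. All other (non-hot) data access stays on its normal path.
     */
    public double rateFor(String productCode) {
        Double rate = rates.get(productCode);
        if (rate == null) {
            throw new IllegalArgumentException("Unknown product: " + productCode);
        }
        return rate;
    }

    public static void main(String[] args) {
        RateTable table = new RateTable();
        table.load(Map.of("SAV", 0.0175, "CHK", 0.0010)); // stand-in for a Db2 load
        System.out.println("SAV rate: " + table.rateFor("SAV"));
    }
}

The design point to note is that only the small, hot, read-only subset is copied into memory; the database remains the system of record, and every other access path is left untouched.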

Using this technique, you can not only sharply reduce batch I/O but, more importantly, significantly reduce elapsed time, which can solve a batch window congestion problem. It can also reduce CPU usage, which translates directly into reduced MSU consumption and therefore reduced operational costs for any affected application.
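As a purely hypothetical illustration of the cost side: if a batch application contributes 200 MSUs to your peak rolling four-hour average and in-memory access eliminates 30% of its CPU time, roughly 60 MSUs come off that peak; under consumption-based software pricing, the charges tied to that capacity fall accordingly.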

Conclusion

Third-party batch optimization solutions can help large IT organizations running mission-critical batch processing reduce execution times by anywhere from a factor of two to two orders of magnitude, depending on their business type and the specific characteristics of their batch applications. Each solution described in this paper can make a significant difference by itself; together, they can make a large dent in ongoing batch processing costs, and most will also help reduce batch run times.

For more information, see the article “Mainframe batch challenges – and solutions” on the Planet Mainframe blog.