Recently, I was involved in a conversation with one of our clients, who had been told by a consultant that they could replace tableBASE (our in-memory technology offering) with BMC’s MainView Batch Optimizer. My initial reaction was “whaaaaaa?” It seemed to me that a statement like that could only come from someone very dishonest, or at least grossly misinformed. Well, after I calmed down, I realized that we were dealing with a paid consultant who was just doing her job – trying to displace an existing product with a ‘similar’ product that would earn her a commission. Honestly, I can’t blame someone for that.
But I was quick to point out to our client that replacing tableBASE with Batch Optimizer would be a big mistake. Not that there’s anything wrong with Batch Optimizer – it’s a decent product – even if it is part of a much larger suite of products that one could characterize as bloatware or shelfware. Seriously though, Batch Optimizer does make a difference in many shops, but it is no replacement for a product like tableBASE. Really, you’re comparing apples to oranges.
Let me explain why.
Reduction of I/O
First, Batch Optimizer’s biggest advantage is I/O reduction or elimination – a well-understood and accepted way to reduce CPU usage and related costs. Accessing memory is typically 1,000 times faster than accessing disk, so more memory accesses and fewer I/O operations will obviously make a big difference in your batch (or online) transaction processing. And if you believe BMC’s own claims (they’re reasonable), Batch Optimizer can cut elapsed time for some batch processing by 60% or more, and CPU usage by nearly 40%. Pretty impressive numbers, I’ll admit.
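To see why I/O reduction translates so directly into shorter elapsed times, here is a back-of-the-envelope sketch. The numbers in it are my own illustrative assumptions, not BMC’s: if a batch job spends a given fraction of its elapsed time waiting on I/O, and memory access is roughly 1,000 times faster than disk, then eliminating that wait shrinks elapsed time almost in proportion to the I/O fraction.

```python
# Illustrative arithmetic only - the job size and I/O fraction below
# are hypothetical, chosen to show how an I/O-bound job speeds up when
# its I/O-bound work runs at memory speed (~1000x faster than disk).

def elapsed_after_optimization(total_secs, io_fraction, io_speedup=1000.0):
    """Estimate elapsed time once the I/O portion runs at memory speed."""
    io_secs = total_secs * io_fraction
    cpu_secs = total_secs - io_secs
    return cpu_secs + io_secs / io_speedup

# A hypothetical 100-second batch job that spends 60% of its time on I/O:
before = 100.0
after = elapsed_after_optimization(before, io_fraction=0.6)
reduction = (before - after) / before
# The elapsed-time reduction lands just under 60% - close to the kind of
# improvement claimed for I/O-elimination products.
```

The point of the sketch is simply that the benefit is bounded by how I/O-bound the job is: a job that does little I/O gains little, which is exactly why the right tool depends on the workload profile.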
Code Path Length
The image below shows the code path length for accessing data from a mainframe disk subsystem – there is considerable database overhead for concerns such as contention control, buffer management, logging, SQL parsing and more. As you can see, there is a lot of overhead even in cases where data is accessed exclusively from buffers.
And this is where tableBASE makes a big difference, in specific cases. In many transaction-intense environments – mostly batch, but also in certain online scenarios – an application will access RO (Read-Only) data hundreds or thousands of times every second. Think of interest rates or customer account numbers in a bank’s batch processing, or inventory numbers or customer/partner account numbers in retail processing. This data amounts to about 1% or less of the total enterprise data. For these specific types of data – RO reference data that is accessed far more often than the rest – accessing it using tableBASE can make a huge difference.
How tableBASE Works
The most often accessed RO reference data is copied into tableBASE high-performance in-memory tables, where the application accesses it using a tight, efficient API. In effect, this data can be accessed using a much shorter code path. No changes are required to either the database or the application logic. The image below compares the standard buffer code path to the tableBASE code path.
The standard code path requires between 10,000 and 100,000 machine cycles, while the tableBASE code path often requires fewer than 400. This can help an application access certain types of data more than 100× faster than is possible using any type of buffering technology.
Where to apply tableBASE
The first thing to remember is that tableBASE is a tool that augments the database – it does not replace the database in any way. Further, tableBASE should be applied to specific applications that consume excessive I/O and perform excessive reads. The best way to identify these applications is to use a GUI tool that presents your SMF data to streamline identification. There are several tools out there that fit the bill – from SMTData to Syncsort, Splunk, BlackHill Software, and IntelliMagic, to the product suites of BMC (MainView), CA and Compuware (Strobe). The image below shows how one such tool can identify candidates for tableBASE optimization.
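Whatever tool you use, the selection logic boils down to the same question: which jobs issue an enormous number of reads against a small amount of distinct data? Here is a hedged sketch of that triage step. It does not parse real SMF records – assume the SMF data has already been summarized into per-job totals, and note that the field names and thresholds below are made up for illustration.

```python
# Illustrative triage only - not real SMF parsing. Assume per-job read
# and distinct-record counts have already been extracted from SMF data;
# the job names, numbers, and the 1M-read threshold are hypothetical.

def rank_candidates(jobs, min_reads=1_000_000):
    """Rank jobs by reads per distinct record touched. A very high
    ratio suggests hot read-only reference data worth caching in
    memory; low-read jobs are filtered out as poor candidates."""
    hot = [j for j in jobs if j["reads"] >= min_reads]
    return sorted(hot,
                  key=lambda j: j["reads"] / j["distinct_records"],
                  reverse=True)

jobs = [
    {"name": "BATCH01", "reads": 50_000_000, "distinct_records": 2_000},
    {"name": "BATCH02", "reads": 3_000_000, "distinct_records": 900_000},
    {"name": "ONLINE7", "reads": 200_000, "distinct_records": 150},
]

top = rank_candidates(jobs)
# BATCH01 ranks first: tens of millions of reads concentrated on a few
# thousand records is the classic RO reference-data profile.
```

The design choice mirrors the 1%-of-data observation earlier: the win comes from concentration of reads, not raw volume, so reads-per-record is a better signal than read count alone.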
The results of using tableBASE
tableBASE is an effective way to control the amount of CPU being churned by your most transaction-intense applications. In fact, any application that consumes excessive I/O or performs excessive reads will benefit from tableBASE usage. Reducing CPU will allow you to save on operational costs – with the added benefit that some of these optimized applications will run faster. The image below shows actual customer results.
The idea that tableBASE could be replaced with BMC’s MainView Batch Optimizer is not sound; the two products do very different things. But to be fair, any mainframe datacenter without a product like BMC’s MainView is at a disadvantage. Should you run out and get it? Debatable, as there are several competing products. If you have a BMC suite of products in your shop now, you may already be paying for it. Otherwise, the smart play is to shop around.
Similarly, any mainframe datacenter that is not using a solution like tableBASE is leaving money on the table – you’re burning CPU dollars unnecessarily. Accessing data using tableBASE will always be considerably faster than accessing data from buffers, no matter how much you optimize your buffer access. At the end of the day, you shouldn’t compare apples to oranges, as the consultant I mentioned at the beginning did. Rather, it makes more sense to think about using the right tools for the job – and remember, the craftsperson with the right tools has a great advantage over the craftsperson with just a very basic set.