BrainBlog for Adaptigent by Jason Bloomberg
Caching engines are a common, important tool in any distributed computing architect’s toolbox. Caches reduce the time and effort to fetch content or execute queries on databases, document stores, web servers, and other infrastructure elements.
At first glance, caches are deceptively simple: instead of fetching information from the source, store it in the cache. Now subsequent requests can hit the cache instead, reducing latency and lightening the load on the back-end database, server, or other system of record.
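To make the pattern concrete, here is a minimal cache-aside sketch in Python. This is not Adaptigent's implementation; the `fetch` callback and the TTL are illustrative assumptions. On a hit, the cached value is returned directly; only on a miss or an expired entry does the request reach the system of record.

```python
import time

class CacheAside:
    """Minimal cache-aside (lazy-loading) cache with a per-entry TTL."""

    def __init__(self, fetch, ttl_seconds=60):
        self.fetch = fetch      # function that queries the system of record
        self.ttl = ttl_seconds  # how long a cached entry stays fresh
        self.store = {}         # key -> (value, expiry timestamp)

    def get(self, key):
        entry = self.store.get(key)
        if entry is not None:
            value, expires = entry
            if time.monotonic() < expires:
                return value    # cache hit: no back-end call at all
        # Cache miss (or expired entry): hit the source once, then cache it
        value = self.fetch(key)
        self.store[key] = (value, time.monotonic() + self.ttl)
        return value
```

With this shape, repeated requests for the same key within the TTL window never touch the back end, which is exactly where the latency and load savings come from.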
This straightforward but powerful value proposition has led to the creation and maturation of a number of open-source caches, including Redis, Memcached, Apache Ignite, and NGINX. These products support throngs of distributed computing applications, helping them run at massive scale.
One might wonder, therefore, if there’s an opening in the market for a new commercial cache offering. Adaptigent (formerly GT Software) certainly thinks so, as the company recently launched its new Intelligent Caching engine.
In spite of the relative maturity of the open-source caches on the market, Adaptigent has crafted a product with a well-differentiated value proposition, one that will appeal to mainframe-centric customers for whom those other products fall short. Let’s take a closer look.
Caching Mainframe Data
Adaptigent has long specialized in unlocking the value of mainframe data via no-code integration technology. Its Intelligent Caching engine unsurprisingly focuses on this important segment of the data access market.
The value proposition for mainframe data caching is twofold. The first part, as with other distributed computing caches, is to make data more readily available with lower latency.
The second part of the story, however, is specific to the mainframe, as IBM charges customers by the MIPS (millions of instructions per second) – and database queries can potentially consume many of them.
As a result, supporting modern application requirements with mainframe-based systems of record can be unexpectedly expensive. Imagine if every request in your mobile banking app required MIPS on the mainframe to execute – and then multiply by the number of customers at each bank.
Caching is an obvious solution to this problem. Instead of requiring every mobile banking transaction to hit the mainframe, cache the results instead. Many such requests could then be served from the cache without any mainframe processing at all. It sounds promising, right?
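Some back-of-the-envelope arithmetic shows why this matters for MIPS-based pricing. The request volume and hit rate below are purely illustrative assumptions, not figures from Adaptigent or IBM:

```python
# Hypothetical numbers: a bank's mobile app issues 10 million balance
# lookups a day, and the cache answers 95% of them. Only the remaining
# misses consume MIPS on the mainframe.
requests_per_day = 10_000_000              # assumed request volume
hit_rate = 0.95                            # assumed cache hit rate
mainframe_queries = round(requests_per_day * (1 - hit_rate))
offloaded = requests_per_day - mainframe_queries
print(mainframe_queries, offloaded)
```

Under these assumptions, the mainframe executes 500,000 queries instead of 10 million, and the other 9.5 million lookups generate no MIPS charges at all.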
Well, as you might expect, the devil is in the details.
Read the entire BrainBlog here.