In a world in which speed drives both value and competitiveness, IT organizations must spend as few resources as possible on things that do not deliver competitive value. They must, therefore, break one of the most significant problems and resource drains facing the modern enterprise IT organization: the maintenance paradigm of legacy applications.

“[I] just realized that a spreadsheet I use weekly was first created in 2004 and has been in constant use now for over 13 years! What’s your oldest excel spreadsheet still in use?”

This fun, seemingly innocuous question was asked recently in an online car enthusiast discussion group. Despite the automotive focus of the group, a lively conversation ensued with participants each trying to best the others.

The winner? Several people who were still using spreadsheets created in 1995 — the year that Excel began to take root with the launch of Microsoft Office for Windows 95.

Besides being a little whimsical and nostalgic, have you ever wondered how this is possible? Clearly, the members of this group are no longer using Excel 95 and are presumably using a modern version of Excel. Yet these files — and the ‘business logic’ they contain — are still working for them now, some twenty-three years later.

Perhaps more importantly, how might the answer to that question be one of the keys that will help enterprises break one of the most significant problems and resource drains facing the modern enterprise IT organization: the maintenance paradigm of legacy applications?

Today’s New Doesn’t Have To Be Tomorrow’s Legacy

There’s an axiom that some industry leaders are using in an attempt to squelch what they see as an over-exuberant view of new technologies. In its various forms, it goes like this: today’s shiny new code/application/technology is tomorrow’s legacy.

It’s a prescient cautionary note. With all of the excitement around the rush of new and emerging technologies, it is easy for technology leaders to forget the mistakes of the past — and therefore repeat them.

And one of the greatest of these past mistakes is using application architectures that delivered great functionality in the moment, but then required continual maintenance once they were in production.

This issue is so prevalent that many IT organizations spend 60-80% of their resources — or more — just maintaining the applications and technologies they’ve already deployed. While they use some of these resources on feature enhancements, they consume the vast majority of them on things that provide no incremental value to the organization.

In most cases, however, the reason for this heavy maintenance overhead is not poor coding practices — it’s the application architecture itself that is the problem. In most applications, the business logic and core functionality of the application are bundled together with the application core itself — the place where the application handles things like security, access control, and application interfaces.

The problem, of course, is that the application core is where most maintenance activities occur, as there is a continuous need to update and improve security and keep up with evolving technology standards and requirements. It is this coupling of the application core with the business logic and core functionality that creates the legacy application quagmire.

There is, however, an approach that can help organizations break free from this resource-consuming application maintenance paradigm: a metadata-driven application architecture.

Metadata Architecture Transforms Application Maintenance

A metadata-driven application architecture may seem unfamiliar on the surface. However, you most likely already use one every day: it is what enables an Excel file from 1995 to still operate today with its business logic and functionality intact.

A metadata-driven application architecture is one that abstracts the core functionality of an application platform from the business logic and user functionality of an application. Another way of thinking about it is that it separates functional requirements from non-functional requirements and allows one or more teams to maintain each of them separately.
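The separation the article describes can be made concrete with a minimal sketch. Here, an application's business logic lives entirely in declarative metadata, while a generic core engine interprets it. The metadata format, field names, and `validate` function are illustrative assumptions, not any vendor's actual schema:

```python
import json

# Hypothetical metadata declaring an application's business rules as data,
# not code. The structure below is an illustrative assumption.
APP_METADATA = json.loads("""
{
  "entity": "invoice",
  "fields": [
    {"name": "amount",   "type": "number", "required": true, "min": 0},
    {"name": "customer", "type": "string", "required": true}
  ]
}
""")

def validate(record, metadata):
    """Generic core logic: it enforces whatever rules the metadata declares,
    while knowing nothing about invoices themselves. Maintaining this core
    (security, interfaces, standards) never touches the business metadata,
    and updating the metadata never touches the core."""
    errors = []
    for field in metadata["fields"]:
        value = record.get(field["name"])
        if field.get("required") and value is None:
            errors.append(f"{field['name']} is required")
            continue
        if field["type"] == "number" and value is not None:
            if not isinstance(value, (int, float)):
                errors.append(f"{field['name']} must be a number")
            elif "min" in field and value < field["min"]:
                errors.append(f"{field['name']} below minimum")
    return errors

print(validate({"amount": -5}, APP_METADATA))
```

One team can evolve the metadata (a functional requirement) while another hardens the engine (a non-functional requirement), each on its own release cadence.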

This type of architecture creates several benefits. First, the development team responsible for delivering business logic and functional improvements can respond more quickly and focus more intently on those needs — without worrying about non-functional requirements such as security or interfaces.

At the same time, the teams responsible for those non-functional requirements can maintain them independent of the business logic and functionality that sits on top of them. And when using a platform-type approach, they can perform this maintenance centrally. This abstraction generally results in a more secure and better-built application core.

This type of architecture also enables organizations to add new core functionality and extend it to an existing application portfolio as new technologies emerge — without having to perform maintenance within the applications themselves to do so.

For instance, an organization could extend its core application platform to include voice search (to connect with Amazon’s Alexa, for example) or to leverage machine learning algorithms. This core functionality would then be immediately available to all applications using the core platform.
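One way to picture platform-wide extension is a capability registry inside the core: register a capability once centrally, and every application built on the core can invoke it immediately. The class and method names below are illustrative assumptions, not a real platform API:

```python
class PlatformCore:
    """Shared application core holding centrally maintained capabilities."""
    def __init__(self):
        self._capabilities = {}

    def register_capability(self, name, handler):
        # One central registration makes (or upgrades) a feature platform-wide.
        self._capabilities[name] = handler

    def invoke(self, name, *args):
        return self._capabilities[name](*args)

class App:
    """An application defined only by its business logic; everything
    non-functional is delegated to the shared core."""
    def __init__(self, core):
        self.core = core

    def search(self, query):
        return self.core.invoke("search", query)

core = PlatformCore()
core.register_capability("search", lambda q: f"text results for {q!r}")
app_a, app_b = App(core), App(core)

# Later, the platform team swaps in a voice-capable search handler.
# Both existing apps pick it up without any changes to their own code.
core.register_capability("search", lambda q: f"voice results for {q!r}")
print(app_a.search("invoices"))
```

The point of the sketch is the direction of the dependency: applications depend on the core's interface, so new core functionality (voice search, machine learning, and so on) reaches the whole portfolio without per-application maintenance.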

The Intellyx Take

In a world in which speed drives both value and competitiveness, IT organizations must spend as few of their resources as possible on those things that do not help the organization deliver competitive value in the market.

Unfortunately, most IT organizations are doing just the opposite as they spend most of their resources on maintenance activities that do not provide direct value — things like improving security, updating interfaces, testing, and so on.

Of course, these non-value-driving activities are essential, and every organization must do them or pay the consequences. But it is the constraints of traditional architectures that have made the application maintenance process resource-intensive and onerous. The need to break free of these constraints is why a metadata-driven application architecture is potentially so powerful.

The challenge, of course, is that building such an application architecture is an involved endeavor in and of itself — and may be beyond the resource capacity of many enterprise organizations.

Organizations are, therefore, turning to technology providers, such as ClaySys, which deliver application development platforms based on this type of metadata-driven application architecture. These sorts of providers also handle maintenance of the application core, which frees enterprises to focus on only the business logic and functional needs of their applications.

There is no question that the need for applications will continue to grow within enterprises. IT organizations, however, are already struggling with the maintenance load of their current application stack.

Simple math makes it clear that the continual introduction of more applications will make the current application maintenance paradigm unsustainable. Adopting a metadata-driven application architecture should, therefore, be a short-list consideration as enterprise leaders seek to transform this paradigm.

Copyright © Intellyx LLC. ClaySys is an Intellyx client. Intellyx retains full editorial control over the content of this paper.
