When I read my colleague Charles Araujo’s insightful article for CIO Magazine, Is all-in on the cloud a real strategy?, it got me thinking.
While enterprises have been talking about moving to the cloud for a number of years now, Charlie points out that the revitalization of the legacy stack is becoming an important part of today’s modern IT strategies.
What’s going on here? Are companies feeling nostalgia for their AS/400 systems the way they might for men’s fedoras? Or perhaps aficionados are relishing the quaintness of Windows NT much as vinyl LP fans enjoy the decidedly analog sound of their preferred medium?
Nostalgia aside, there’s more going on here – and it’s not simply a question of moving to the cloud vs. revitalizing legacy. This counterintuitive trend is actually impacting many different areas of IT today – and if we take a broader look, the bigger picture of what’s going on will emerge.
Centralized vs. Distributed
This ‘new to old’ trend is perhaps simply one swing of the centralized vs. distributed pendulum. It has been swinging for years, from host-based computing (centralized) to client/server (distributed) to the web (centralized) to the cloud (distributed).
One could argue, however, that the cloud is really more centralized than distributed, as so many companies put their eggs into a single CSP’s basket. In any case, where does the pendulum swing next?
As organizations reconsider the value of on-premises assets like mainframes, it’s indisputable that the pendulum has swung back to the centralized end of its motion. After all, the core reliability and vertical scalability benefits of the mainframe depend upon its centralized architecture.
On the other hand, today’s mainframe is a full participant in modern distributed architectures, essentially serving as one server type among many (or in reality, a family of server types).
There is more to the mainframe story, however: its new-found versatility in modern digital architectures essentially means that the mainframe offers hybrid IT all by itself. Any hybrid IT strategy would therefore be even stronger if the organization in question added its mainframes to the mix.
Edge Computing: Back to Client/Server?
One of the reasons why client/server was such a success in its day is because the clients – PCs running ‘fat clients’ – handled much of the processing. This distributed processing was essential, because the servers of the day were woefully underpowered by today’s standards.
Today, we can observe a parallel trend, as significant quantities of compute power move to the edge.
The edge, in fact, is an ambiguous term, as it might mean the cloud or network edge, where content delivery networks (CDNs) serve content closer to the end-user for improved performance and reduced latency.
Or, the edge may refer to the ‘edge edge’ – those devices in the hands of end-users, or perhaps more interestingly, IoT devices of various sorts.
We may have found value in centralizing compute on our Web and application servers in the 1990s and then in the cloud in the 2000s, but today, we’re just as likely to want to move our compute to one edge or the other.
The reason: the quantity of data we want to collect at the edge is exploding, both with smartphones as well as the IoT. Shipping all those raw data feeds back to the cloud is becoming increasingly impractical, and thus we are more likely to want to process them on the edge and upload only summary information.
Summary information, however, isn’t good enough – we want to ship intelligent inferences to the cloud. As a result, AI is also moving to the edge, as such data processing is growing increasingly intelligent. If we can use AI on processors on the edge to make such inferences, then we can leverage the best features of the cloud as well as all the performance we can squeeze out of processing locally.
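To make the edge-side processing concrete, here is a minimal sketch of the pattern described above: raw readings stay on the device, and only a compact summary would be shipped to the cloud. The function and field names are illustrative, not from any particular IoT platform.

```python
# Hypothetical edge-side summarization: instead of shipping every raw
# sensor reading to the cloud, compute a compact summary locally and
# upload only that small payload.
from statistics import mean

def summarize_readings(readings):
    """Reduce a batch of raw sensor readings to a small summary payload."""
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": round(mean(readings), 2),
    }

# A batch of raw temperature samples stays on the device...
raw = [21.0, 21.4, 22.1, 21.8, 23.5, 22.9]

# ...and only this small dict would be uploaded to the cloud.
payload = summarize_readings(raw)
print(payload)  # {'count': 6, 'min': 21.0, 'max': 23.5, 'mean': 22.12}
```

The same shape extends to the AI case: swap the simple statistics for a local inference step, and the uploaded payload becomes the inference result rather than raw data.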
From Low-Code to High-Code
We’ve written extensively about the low-code movement – platforms that simplify and streamline the work of developers, who now use configuration-based drag-and-drop tooling to replace most or all of the hand-coding they had to do previously.
In response to this movement, a few vendors are doubling down on what we might call ‘high code’ platforms – platforms that may very well apply low-code principles in some areas, but which offer more traditional coding environments for other tasks.
The core realization: coders love to code, and furthermore, there are certain tasks that are simply easier and faster with hand-coding.
As with the other trends on this list, it doesn’t make sense to have some kind of religious argument for why the new approach is always better than the old. Instead, it’s always a matter of the right tool for the job.
Solid-State Storage vs. Spinning Disks
Common sense would indicate that solid-state storage – non-volatile storage technology with no moving parts – would be more reliable and longer-lived than any technology that has bits that move, like the spinning disks and jiggling heads of traditional hard drive technology.
In fact, we might argue that the only reason that anyone would buy spinning disks anymore would simply be because it’s cheaper – and once solid-state storage reaches price parity (which it is already achieving), then that head jiggling, disk spinning gear will be relegated to the dust heap of obsolete technologies.
Not so fast. As it happens, spinning disk vendors continue to innovate furiously, as they pack bits into ever smaller volumes while improving performance and reliability – all at ever-falling costs.
One of the latest innovations: sealed disks filled with helium. Not only does helium reduce the possibility of corrosion, but it’s less dense than air, which means all that spinning and jiggling faces less resistance.
Bottom line: don’t count spinning disks out yet.
Microservices vs. ‘Macro’ Services
Another trend that has gathered followers with a near-religious fervor: the rise of microservices.
Small, self-contained units of code that are well-suited for the dynamic, ephemeral world of containers. What’s not to love? Maybe you should rewrite everything as microservices, right?
Once again, not so fast. True, microservices are smaller than software components were before, and that makes them easier to write. But at the same time, they’re also harder to manage.
Furthermore, the more (and smaller) microservices you have, the more important it is to architect them properly.
A microservices architect must serve as cat-herder: multiple development teams typically want to deploy their shiny new microservices on a continual basis, and now you need to figure out how to integrate them with each other – and with everything else in your environment – while maintaining performance, quality, and security.
If you aren’t comfortable with such headaches, perhaps I can interest you in what we might call a ‘macro’ service? In other words, a larger block of code that perhaps does several things, as opposed to a microservice that should only do one thing.
True, these coarse-grained business services were all the rage in the SOA days of the last decade, and generally proved to have limited flexibility. But now that particular pendulum has swung all the way to the other side, as microservices may offer increased flexibility but at the cost of far more difficult architectural challenges.
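The granularity contrast above can be sketched in a few lines. The service names are hypothetical; the point is only that each microservice does one thing, while a ‘macro’ service bundles several related steps behind one coarse-grained operation.

```python
# Illustrative contrast between fine-grained and coarse-grained services.

def validate_order(order):
    """Microservice #1: does exactly one thing -- checks the order is non-empty."""
    return bool(order.get("items"))

def price_order(order):
    """Microservice #2: does exactly one thing -- totals the line items."""
    return sum(item["qty"] * item["unit_price"] for item in order["items"])

def process_order(order):
    """'Macro' service: several steps bundled behind one coarse-grained call."""
    if not validate_order(order):
        raise ValueError("empty order")
    return {"status": "accepted", "total": price_order(order)}

order = {"items": [{"qty": 2, "unit_price": 9.5}]}
print(process_order(order))  # {'status': 'accepted', 'total': 19.0}
```

The trade-off plays out exactly as described: the two small functions are easy to write and test in isolation, but someone still has to orchestrate them, while the coarse-grained call is simpler to consume but less flexible to recompose.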
The Intellyx Take: The Right Tool for the Job
So, which is better: modern, cloud-native software or legacy software? Centralized or decentralized? Microservices or coarse-grained, monolithic enterprise services?
The answer: it depends. If anybody tells you that one option is always the better one, run as fast as you can in the opposite direction – even if they’re being a cheerleader for the latest, greatest innovation.
Every piece of technology, old or new, has its strengths and weaknesses. Any IT leader or architect should know what those are in order to make the right decision for the particular need at the time.
All of these individual trends roll up into what we call hybrid IT – an approach to enterprise IT where decision makers not only realize that organizations will leverage a mix of old and new, on-premises and cloud, but will actually do so on purpose.
The seemingly backward trend from new to old is, in fact, forward progress from poorly thought-out decisions to a more mature, resilient approach to building and running technology-based solutions – just the ticket for the modern, hybrid IT world of enterprise technology in the digital era.
Copyright © Intellyx LLC. Intellyx publishes the Agile Digital Transformation Roadmap poster, advises companies on their digital transformation initiatives, and helps vendors communicate their agility stories. As of the time of writing, none of the organizations mentioned in this article are Intellyx customers. Image credit: Ya, saya inBaliTimur.