There is little question about the value and benefits of moving analytics workloads to the cloud. But unleashing the power of analytics demands that enterprises reimagine the way they structure and manage their data pipelines.
It may have taken a while, but the cloud has finally crossed the threshold. Not only is moving critical workloads to the cloud no longer seen as risky, but if you’re one of the remaining holdouts not loudly proclaiming a cloud-first approach, people sort of look at you funny.
I have seen this movie way too many times, however, and know that there is never a single, simple answer to every question.
But that hasn’t stopped the frenzied rush to avoid being the last person standing on the platform after the train to the future has left the station.
To be fair, I’m a fan of the cloud-first ethos. There are lots of good reasons that every IT executive should be looking at how they can move most of their workloads to the cloud.
The challenge with this fevered transition to the cloud, however, arises when enterprise organizations fail to recognize that the cloud is a different approach to meeting IT demands. What worked in your on-premises world may not work nearly as well in the cloud.
In fact, it could go very wrong if you don’t take a fresh look at how you handle things.
Case in point: analytics workloads in the cloud and their associated data pipelines.
Moving analytics workloads to the public cloud, on the surface anyway, is a no-brainer.
With their variable and often intensive demands, these workloads are notoriously difficult to provision and support using on-premises resources. The scalability, performance, and elasticity of the public cloud, therefore, would seem to be a perfect fit.
And they are.
As enterprise leaders began moving these analytics workloads to the cloud, however, they realized it wasn’t quite so simple. They could not just pick up the complex data pipelines that they needed to support analytics use cases and plop them down onto a public cloud’s analytics service.
Whereas enterprises had purpose-built portions of their technology stack to support their specific analytics needs, public cloud services are just the opposite: a smorgasbord of technical choices that enable any organization to do almost anything it chooses.
That’s a great resource for enterprise technical teams, but for the non-programmers who build and manage data pipelines, this vast array of choices is bewildering. Yet enabling organizations to leverage analytics for faster business decisions demands that these non-technical business users cross this divide and take control of data pipelines.
While moving analytics workloads into public cloud analytics services unquestionably delivers on the promise of scale, performance, and elasticity, it also comes at the price of complexity in the form of myriad cloud offerings from various vendors.
Organizations must, therefore, reimagine their data pipelines, and re-evaluate the processes and tools they use to manage them as they seek to take advantage of all the cloud has to offer, while minimizing the complexity it brings with it.
In the traditional, on-premises world, building data pipelines was difficult work. The complexity of data sources and integration requirements made creating datasets suitable for analytics both time-consuming and resource-intensive.
As a result, enterprises often hardened their data pipelines, building them with static processes that ensured reliability and minimized resource demands. This approach solved immediate needs, but came at the cost of flexibility and scale — hence the desire to move these workloads to the cloud.
While moving to the cloud helped to solve one set of problems, it also created a whole new set of challenges by eliminating this hardening. Non-programmers now had to sort through the various data sources, choose the right cloud offerings, and use multiple new tools to manage the process.
To address these new challenges, leading enterprise organizations are beginning to reimagine how they construct and manage their data pipelines — and the tools they use to do so.
By using techniques such as visual data pipeline modeling and adopting so-called no-code tools that mask this complexity and abstract the pipeline model from the cloud infrastructure, organizations can leverage the continuously evolving power of cloud-native analytics services from a business perspective, without continually re-engineering their pipelines.
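To make the abstraction idea concrete, here is a minimal, purely illustrative sketch in Python. None of these class names come from any real product; the point is only that a pipeline defined at the business level (extract, transform, load) can run unchanged against different execution backends, so swapping cloud services does not mean re-engineering the pipeline itself.

```python
# Illustrative sketch only: a pipeline model decoupled from its backend.
# All names (Step, Pipeline, LocalBackend, CloudBackend) are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Step:
    name: str
    operation: str                       # e.g. "extract", "transform", "load"
    params: dict = field(default_factory=dict)

@dataclass
class Pipeline:
    steps: list

    def run(self, backend):
        # The pipeline knows nothing about where it runs;
        # the backend decides how each step is executed.
        return [backend.execute(step) for step in self.steps]

class LocalBackend:
    def execute(self, step):
        return f"local: {step.operation} ({step.name})"

class CloudBackend:
    def execute(self, step):
        return f"cloud-service: {step.operation} ({step.name})"

# The same business-level definition runs on either backend.
pipeline = Pipeline(steps=[
    Step("sales_data", "extract", {"source": "crm"}),
    Step("clean", "transform"),
    Step("warehouse", "load"),
])

print(pipeline.run(LocalBackend()))
print(pipeline.run(CloudBackend()))
```

In this toy model, changing cloud providers means writing a new backend class, not rebuilding the pipeline — which is the essence of what no-code tooling does behind a visual interface.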
There is little question about the value and benefits of moving analytics workloads to cloud-native analytics services.
But doing so introduces complexity that can undermine efforts and diminish the value organizations hope to realize.
The good news is that new tools, such as Accelerite’s ShareInsights, are emerging to help organizations visualize their data pipelines and abstract the underlying complexity of public cloud offerings away from non-technical users.
While tools like ShareInsights can help make the transition of analytics workloads to the cloud simpler, successfully making this transition must start with a recognition that it is, in fact, a transition. Moving analytics to the cloud demands more than merely the application of existing data pipelines in a new form factor.
Instead, unleashing the power of analytics and realizing the significant potential benefits of this new approach will demand that enterprises reimagine the way they structure and manage their data pipelines in the first place.
P.S. To dig deeper into this important and timely subject, join Intellyx’s Principal Analyst, Charles Araujo, and Accelerite’s ShareInsights General Manager, Dean Hamilton, on November 1st as they examine this topic during an engaging conversation and demo. Register for the webinar here.
Copyright © Intellyx LLC. As of the time of writing, Accelerite is an Intellyx customer. Intellyx retains final editorial control of this paper.