From the popular media to the never-ending march of new intelligent technologies, artificial intelligence (AI) is everywhere. Beyond the whiz-bang factor of consumer-focused innovations, however, enterprise organizations are in an all-out arms race to employ AI to identify actionable customer insights, make better and faster decisions, and automate anywhere and everywhere they can, becoming what we call cognitive enterprises.
In the cognitive enterprise, artificial intelligence powers nearly every aspect of the operational model, informs or automates critical decisions, and is central to value generation. As organizations move down this road and employ AI more expansively, they quickly realize that it is not just another type of application.
AI workloads are demanding and can quickly push traditional architectures to their breaking point. As a result, organizations are re-envisioning their entire technology stack from the infrastructure on up and optimizing it for an era in which AI workloads are central to their operations. As they do so, they also realize that these new AI-centric architectures demand a fresh perspective on how they monitor them.
Re-envisioning the Technology Stack for AI
At the beginning of their AI journey, most organizations believe that they can simply run these new AI workloads on their existing technology stack. After all, the rapid movement to both public and private cloud technologies has taught enterprises that infrastructure is a commodity and that all focus should be on optimizing at the software level.
What organizations realize as they deploy AI workloads at scale, however, is that, given the intensive nature of these workloads, infrastructure does, in fact, matter.
As I wrote recently in my article, Why Hardware Matters in the Cognitive Enterprise, AI workloads require an optimized stack that extends beyond the software layer, one that optimizes both the underlying hardware and the integration among software components.
The compute-intensive nature of these workloads requires holistic, end-to-end optimization and is leading organizations to take a fresh look at how they architect their entire stack to accommodate them. The rearchitected stack incorporates some elements that are purpose-built for AI workloads, but the effective use of AI requires more than just some specialized parts.
The data-centric nature of AI requires that organizations feed these systems and their workloads with mountains of data from countless disparate sources. This requirement means that in addition to the compute-intensive workloads themselves, AI also increases capacity demands across the entire hardware, software and integration stack.
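To make that capacity point concrete, consider a rough, back-of-envelope sketch in Python. All of the figures here are illustrative assumptions rather than numbers from any specific deployment; the point is how quickly a single workload's appetite for data translates into sustained pressure on storage, network, and integration layers.

```python
# Back-of-envelope estimate of the data-ingest bandwidth a single AI
# training workload can demand. All figures are illustrative assumptions.

samples_per_batch = 256          # training batch size (assumed)
sample_size_bytes = 600 * 1024   # ~600 KB per preprocessed sample (assumed)
batches_per_second = 4           # rate at which accelerators consume batches (assumed)

ingest_bytes_per_sec = samples_per_batch * sample_size_bytes * batches_per_second
print(f"Required ingest: {ingest_bytes_per_sec / 1e9:.2f} GB/s")  # ~0.63 GB/s

# Multiply this by dozens of concurrent training and inference workloads,
# each pulling from disparate sources, and the demand lands on the storage,
# network, and integration layers all at once. That is why capacity pressure
# shows up across the whole stack, not just on the compute tier.
```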
Optimizing Performance Management for AI
Once organizations begin to rearchitect their environments for AI, their operations teams realize that their traditional approaches to performance management may be inadequate for specialized AI workloads.
While some performance management approaches ostensibly look at performance across tiers, most are staunchly focused on specific layers, platforms or technologies. This is sound logic: while understanding interconnections is essential, IT teams optimize performance at the component level.
AI workloads add another dimension.
While component-level optimization must still occur, AI workloads also require that performance optimization occur holistically across the entire stack because of their parallel processing and data-intensive nature. Moreover, this is true not only once workloads are in operation, but during the planning, testing and deployment processes as well.
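As a minimal sketch of what "holistic" can mean in practice, the snippet below samples compute, memory, storage, and network counters in a single pass, so that behavior across layers can be correlated rather than monitored in isolation. It uses the third-party psutil library, and the sampling interval and loop bounds are assumptions for illustration, not a production monitoring design.

```python
# Minimal sketch: sample metrics across several layers of the stack in one
# pass, so compute, memory, storage, and network behavior can be correlated.
# Requires the third-party psutil package (pip install psutil).
import time
import psutil

def snapshot():
    """Collect one cross-layer snapshot of host-level metrics."""
    disk = psutil.disk_io_counters()
    net = psutil.net_io_counters()
    return {
        "cpu_percent": psutil.cpu_percent(interval=None),
        "mem_percent": psutil.virtual_memory().percent,
        "disk_read_bytes": disk.read_bytes,
        "disk_write_bytes": disk.write_bytes,
        "net_sent_bytes": net.bytes_sent,
        "net_recv_bytes": net.bytes_recv,
    }

# Sample every 5 seconds while a workload runs; deltas between snapshots
# reveal whether a slowdown is compute-bound, I/O-bound, or network-bound.
previous = snapshot()
for _ in range(3):
    time.sleep(5)
    current = snapshot()
    io_delta = current["disk_read_bytes"] - previous["disk_read_bytes"]
    print(f"cpu={current['cpu_percent']:.0f}% "
          f"mem={current['mem_percent']:.0f}% "
          f"disk_read_delta={io_delta} bytes")
    previous = current
```

The design point is the correlation itself: a parallel, data-intensive AI workload can look healthy at every individual layer while the interaction between layers is the actual bottleneck.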
In addition to the intensive nature of AI workloads, there is another factor that complicates performance management while also making it all the more critical: real-time demand.
While real-time insights, next-best actions, and automation are not the only business use cases for AI, they are among the most prevalent. In addition to requiring that the stack be optimized holistically for core compute functionality, real-time interactions increase performance management demands across the entire architecture at a systemic level.
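To illustrate why real-time use cases raise the bar, here is a small sketch that tracks end-to-end inference latency percentiles, the kind of systemic measurement holistic performance management has to provide. The score() function, its simulated latencies, and the 50 ms target are all hypothetical placeholders.

```python
# Sketch: track end-to-end latency percentiles for a real-time AI service.
# The score() function and the 50 ms target are hypothetical placeholders.
import random
import statistics
import time

def score(request):
    """Stand-in for a real-time inference call (hypothetical)."""
    time.sleep(random.uniform(0.005, 0.040))  # simulated model latency
    return {"request": request, "action": "next-best-offer"}

latencies_ms = []
for i in range(200):
    start = time.perf_counter()
    score({"customer_id": i})
    latencies_ms.append((time.perf_counter() - start) * 1000)

# For real-time workloads, tail latency matters more than the average:
# one slow hop anywhere in the stack shows up directly in p99.
q = statistics.quantiles(latencies_ms, n=100)
print(f"p50={q[49]:.1f} ms  p99={q[98]:.1f} ms  target: p99 < 50 ms")
```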
The Intellyx Take
To a certain extent, the impact of AI on performance management is a natural evolution. After all, a holistic view of performance is important for all modern workloads, as the technology stack now powers every aspect of the customer journey and drives each organization’s value creation model.
The unique and intensive characteristics of AI workloads and the demands they place on the entire architecture, however, require organizations to address this additional dimension. In this new environment, the ability to execute holistic, end-to-end performance management specifically tailored to support and sustain business-critical AI workloads becomes a business imperative.
In their quest to become cognitive enterprises, organizations must re-envision their entire technology stack to support the demands of AI workloads, and then must adapt their operational and management practices accordingly. This transformation will include many steps and actions, but integral to this process will be the adoption of infrastructure performance management tools, such as Galileo Performance Explorer, which will help organizations get the end-to-end, holistic viewpoint they require.
Only those organizations that are able to adapt both their architecture and their management practices will be able to take advantage of and sustain their investments in AI.
Copyright © Intellyx LLC. ATS Group is an Intellyx client. Intellyx retains full editorial control over the content of this paper.