How AI-driven development tools impact software observability

SiliconANGLE article by Jason English

Let’s face it: the next few years are going to be really tough for software-driven companies and software engineers.

Even the most successful startups on their way up will be asked to deliver more software with fewer development resources. That means we can expect to see more artificial intelligence tooling being used in development, in an attempt either to enhance developer productivity or to replace some work hours with AI-driven automation and agents.

Some stories about generative AI hallucinations are making the rounds, for instance when an Air Canada chatbot speciously offered a customer a refund, which resulted in a penalty when it tried to rescind the offer. Or Microsoft’s experimental Tay chatbot, which became progressively more “racist” through dialogue with bias-trolling users.

Haha, funny. We know large language model chatbots are built on insanely complex models that are largely opaque to conventional testing and observability tools. But enough said about the risks of putting AI-based applications in front of customers.

Let’s shift left and explore how the use of AI tools within the development process itself is affecting software observability, and see if we can figure out why these problems are happening.

How would we know AI development tools are reliable in production?

As humans developing software, we never expected to be as fully engaged as we are now. Thanks to the evolution of automation and agile DevOps practices, per-developer productivity is at an all-time high. So where else can we go from here with AI assistance?

Let’s look for better data than some fanboy on X saying he developed a whole app in five minutes.

The recent 2024 DORA Report, based on a massive survey underwritten by Google, does highlight significant improvements in documentation quality, code quality and code review speed. Then the report says:

“However, despite AI’s potential benefits, our research revealed a critical finding: AI adoption may negatively impact software delivery performance. As AI adoption increased [for each 25% increment], it was accompanied by an estimated decrease in delivery throughput by 1.5%, and an estimated reduction in delivery stability by 7.2%.”

As it turns out, AI-generated code within applications, when infused with complex probabilistic weighting and nondeterministic logic, is less observable than conventional application code that contains rules-based logic …
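The contrast above can be made concrete with a toy sketch (not from the article; the function names and the use of a random weight as a stand-in for model inference are illustrative assumptions): a rules-based function is fully explained by its inputs, while a probabilistic one can return different outputs for identical inputs, which is exactly what frustrates trace-based observability.

```python
import random

def rule_based_discount(total: float) -> float:
    # Deterministic rules: the same input always yields the same output,
    # so a logged input fully explains the observed result.
    return total * 0.9 if total >= 100 else total

def model_like_discount(total: float) -> float:
    # Stand-in for probabilistic model inference: a sampled weight means
    # identical inputs can produce different outputs, so input traces
    # alone no longer explain behavior in production.
    weight = random.uniform(0.85, 0.95)
    return total * weight

# The deterministic path is trivially reproducible from its input alone.
assert rule_based_discount(120.0) == rule_based_discount(120.0)

# The probabilistic path is only bounded, not reproducible.
sample = model_like_discount(120.0)
assert 120.0 * 0.85 <= sample <= 120.0 * 0.95
```

Replaying a request against the deterministic function reproduces the incident; replaying it against the model-like function generally does not, which is why such code demands capturing extra signals (sampled weights, model versions, prompts) rather than inputs alone.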

Read the whole article on SiliconANGLE here: https://siliconangle.com/2025/04/21/ai-driven-development-tools-impact-software-observability/

 

Principal Analyst & CMO, Intellyx. Twitter: @bluefug