BrainRiffs #02: Is Hybrid AI a Real Thing?

An Intellyx BrainRiffs Podcast

Once we’re done with Generative AI and Agentic AI, what’s next? Time to give up and call a collection of different technologies “Hybrid AI” — just like we already did with Hybrid IT and Hybrid Cloud. Still, the term has some practical use. Your trusty analysts Jason, JE, and Eric are here to lift the veil on this future, or maybe obscure it even more. Listen and enjoy!

 

Watch the BrainRiffs podcast/vCast above, or on YouTube here: https://youtu.be/Jvu5LwrNmJg

Full episode #02 Transcript:

Jason: Hi, this is Jason Bloomberg, Managing Director of Intellyx, and welcome to our second Brain Riffs podcast. So, with me are my colleagues, Jason English and Eric Newcomer. Say hi, boys.

JE and Eric: Hi, boys. Glad to be here. Yep.

Jason: And our topic for today is Hybrid AI, and this is a term that is just gaining some initial currency.

It wasn’t a familiar term when I first heard it. I was thinking, well, hybrid means a mix of cloud and on-premises, and that’s not what we’re talking about here. What we’re talking about here is either a mix of two different types of AI, or potentially AI with something else. And so this is definitely worth talking about.

So just to get us started, we need to define a few terms. We could be talking about generative AI, which is a way of taking text information and summarizing it or boiling it down to some sort of natural language representation. Or we might be talking about machine learning, which is a way of taking large data sets and finding patterns in those data sets, which could be anomalies or other patterns.

And then there’s also another type of AI that is being discussed in this context, which is Symbolic AI, and Symbolic AI is less familiar. It’s basically a way of linking facts and events using logic rules to make the knowledge machine readable. You could take, say, compliance documents with a list of regulations, and turn them into a form that is machine readable, one that you can then combine with other types of AI.
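To make the symbolic AI idea concrete, here is a minimal sketch of how facts and logic rules can be made machine readable and chained together. The facts, rule names, and forward-chaining loop are illustrative assumptions, not any particular product’s implementation:

```python
# Facts and rules encoded as machine-readable structures, with a tiny
# forward-chaining loop that derives new knowledge by applying rules.
# All fact and rule names here are hypothetical, for illustration only.

facts = {("transaction", "cross_border"), ("amount", "over_10k")}

# Each rule: if all premises hold, the conclusion is added to the facts.
rules = [
    ({("transaction", "cross_border"), ("amount", "over_10k")},
     ("requires", "enhanced_due_diligence")),
    ({("requires", "enhanced_due_diligence")},
     ("action", "file_compliance_report")),
]

def forward_chain(facts, rules):
    """Apply rules repeatedly until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

result = forward_chain(facts, rules)
print(("action", "file_compliance_report") in result)  # True
```

The point is that once regulations are in this form, the conclusions are explicit and auditable, unlike the statistical output of a generative model.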

So that, that sort of gets us started. So do one of you want to jump in and get the, get the conversation going from here?

JE: Yeah, I basically look at hybrid AI as the same thing as Hybrid IT or Hybrid Cloud. I mean, whenever I hear the word hybrid, I just think you’re basically munging two categories together in a sort of haphazard way.

And then just calling whatever’s left over at the end of that process a hybrid version of IT. Or AI, as it were. So for instance, this idea of generative AI actually synthesizing some sort of content out of a text prompt or a question, or synthesizing an image out of that, is basically the generative part.

But it doesn’t include actually understanding what was behind that. Now, we’ve seen forms of this. For instance, at Zoho Day, where we were just there, Zoho has something called chain of thought that they include, where they take semantic instructions for very specialized AIs, right? And use that to kind of interpret what the actual objective is.

So in essence, they are all doing some sort of hybrid AI. And you’re using generative AI to chat and talk to a system. And then it has a bunch of specialized business rules that it can interpret in a form, just like you’re saying with symbolic AI. So, in that sense, we may be getting to this next level of understanding what the process is behind the inference that it’s making, rather than just stringing together sets of words as generative AI has done to date.

Eric: At the high level, this strikes me as another attempt to save generative AI’s bacon. Generative AI has been overhyped so much, still, when you talk with some people who are unfamiliar with how it works, they still take it as presented that it can solve everything.

It’s got the world’s data. It’s got the world’s information. It has a chat interface. You can ask it anything. It can do anything. That is not correct.

Two years later, everybody’s starting to realize it cannot do everything. It’s not going to be the end-all and be-all of AI. Originally, when it came out and you presented the limitations back to the AI companies, they would say, well, just wait.

This is the first version. We’ve got another version coming. We’ll solve all these problems over time. No, they are not solving these problems. They are not solvable problems. It turns out to get Gen AI to work correctly, you have to break the problem up. You cannot assume that you can ask it anything.

You have to ask it specific questions. That’s the whole thing behind prompt engineering. It’s behind the way of coding that’s become accepted: getting it to be an assistant, giving it specific tasks to do. Go give me this piece of code. Fix this piece of code. Don’t ask it to create the whole application.

It might do it, but that’s not going to be any good.

And now that we recognize GenAI is not good at everything and you need to break up the problem, oh, look: some of the problems that GenAI is purported to solve can be better solved by other AI techniques, such as machine learning. So you could say we’re saving GenAI’s bacon, or we could say we’re finally figuring out what the various types of AI are good for and what they’re not good for.

GenAI is very powerful for what it is good for. And as long as we know how to use it, then there’s no problem.

Jason: So, a couple of examples. One example that combines machine learning and generative AI is a health care situation where machine learning could come up with a diagnosis by analyzing symptoms, test results, and patient history. And it can do that relatively accurately, given the right input data.

And then generative AI could step in to explain the diagnosis to patients in a clear, natural language way. So the generative AI wouldn’t create the diagnosis, because machine learning is better suited for that. But it could translate it, or explain it, in a clearer way.
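The division of labor Jason describes can be sketched as a two-stage pipeline. Both stages are stubbed here: `diagnose()` stands in for a trained ML classifier and `explain()` for a call to a generative model, and all names and thresholds are hypothetical:

```python
# Hybrid pipeline sketch: the ML side produces the diagnosis, and the
# generative side only explains it -- it never changes the diagnosis.
# Both functions are hypothetical stand-ins for real models.

def diagnose(symptoms, test_results, history):
    """Stand-in for an ML classifier trained on clinical data."""
    if "elevated_glucose" in test_results:
        return {"condition": "type_2_diabetes", "confidence": 0.91}
    return {"condition": "inconclusive", "confidence": 0.40}

def explain(diagnosis):
    """Stand-in for a GenAI call that turns structured output into
    plain language for the patient."""
    condition = diagnosis["condition"].replace("_", " ")
    return (f"Your test results suggest {condition} "
            f"(model confidence: {diagnosis['confidence']:.0%}). "
            "Please discuss next steps with your clinician.")

dx = diagnose(["fatigue"], ["elevated_glucose"], [])
print(explain(dx))
```

The design point is the boundary: the generative model receives the classifier’s structured result as input, so it can rephrase but not invent the diagnosis.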

But there’s also another healthcare example, one that combines machine learning and some symbolic AI. In this case the machine learning could analyze medical images, which is something machine learning can be very good at, and the symbolic reasoning could follow clinical guidelines.

So the clinical guidelines would be in text form, a series of individual paragraphs that describe best practices for the clinician, and the combination might create more accurate diagnoses for patients.

So another example, in finance, might be fraud detection, which could combine rules-based approaches, which can ensure compliance with regulatory standards, with machine learning, which can detect suspicious patterns in transactions.

So simply detecting the suspicious patterns doesn’t necessarily tell you if they’re fraudulent or not, because the machine learning doesn’t know what fraud is. It knows what anomalous patterns are; finding anomalies in large data sets is what machine learning is good at. But rules-based or symbolic approaches can ensure compliance with regulatory standards, because they can process those standards and put them into a machine readable format.
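A minimal sketch of that combination might look like the following. The anomaly scorer stands in for the ML side and the explicit rules for the symbolic side; the thresholds, rule names, and scoring formula are all illustrative assumptions:

```python
# Fraud-detection sketch: an anomaly score (stubbed ML) flags unusual
# transactions, while explicit rules (the symbolic side) decide what
# compliance actually requires. Numbers here are hypothetical.

def anomaly_score(txn):
    """Stand-in for an ML model: how unusual is this transaction?"""
    baseline = 200.0
    return min(txn["amount"] / (baseline * 10), 1.0)

def apply_rules(txn, score):
    """Symbolic layer: machine-readable versions of regulatory rules."""
    actions = []
    if txn["amount"] > 10_000:   # e.g. a reporting threshold
        actions.append("file_regulatory_report")
    if score > 0.8:              # anomalous, but not yet proven fraud
        actions.append("hold_for_review")
    return actions

txn = {"id": "t-1", "amount": 12_500.0}
score = anomaly_score(txn)
print(apply_rules(txn, score))  # ['file_regulatory_report', 'hold_for_review']
```

Notice the split: the ML score alone never labels anything as fraud; the rules decide what the score means for compliance.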

Eric: Well, I think this goes back to another point you made at the opening, which is we’re starting to find out what GenAI is really good for, and we’re finding out that maybe it needs to be combined with some other approaches to provide certain solutions to certain problems. But aren’t we then in danger of another hype cycle around hybrid AI, instead of just recognizing that GenAI is a component of a solution, not the whole solution?

Now we’ve got another category to hype up and say, this is how you solve everything, as opposed to just stepping back and saying, okay, GenAI’s not what it was purported to be. It’s just one piece of a solution. Let’s go at it at a very practical level and try to avoid more hype.

JE: Yeah, that’s a good way to look at it.

I mean, I think it’s safer to think of AI as a composite solution, not a monolithic application, that’s for sure. There’s not going to be one model that rules them all. So we’re starting to see the emergence of a lot of AI orchestration layers, where you have basically a composite AI underneath other orchestrated AIs that are basically just routing the appropriate queries and questions, or asking for data from different sorts of specialized AI.

So these composite models are becoming way more prevalent. And I don’t know if I would call it hybrid AI rather than AI orchestration, where you have a layer that has access to multiple AIs, and it kind of functions like an iPaaS or an integration layer that can route things to different AIs in the stack.
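The orchestration layer JE describes can be sketched as a thin router that inspects each request and dispatches it to the appropriate specialized backend. The backends here are stubs, and the route names are hypothetical:

```python
# AI-orchestration sketch: a routing layer that dispatches each request
# to a specialized backend (an LLM, an ML model, or business rules),
# much like an iPaaS or integration layer. All backends are stubbed.

def llm_backend(query):
    return f"[LLM answer to: {query}]"

def ml_backend(query):
    return "[anomaly score: 0.12]"

def rules_backend(query):
    return "[compliance check: passed]"

ROUTES = {
    "chat": llm_backend,
    "anomaly": ml_backend,
    "compliance": rules_backend,
}

def route(kind, query):
    """Dispatch to the right specialized AI for this kind of request."""
    backend = ROUTES.get(kind)
    if backend is None:
        raise ValueError(f"no backend registered for {kind!r}")
    return backend(query)

print(route("compliance", "check transaction t-1"))
```

In a real system the routing decision itself might be made by a model, but the integration-layer shape, one entry point fanning out to specialized components, stays the same.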

Some of those could be more heavily machine learning based. Some could be just regular heuristics and automation and things like that. And that’s kind of where you get into: why do we need to throw away all of the existing automation and business rules that we have built into our systems?

A real hybrid AI approach should include business rules and processes from our conventional systems as well as new AI models, and then bring them together in a more productive way. Now, I don’t know if that’s where the definition of hybrid AI will go, but that’s just my take on it.

Jason: Yeah. So this question of whether hybrid AI will generate a lot of hype: well, it probably will, once the vendors jump on it and the marketers jump on it. But if you boil it down, it’s really a question of using the right tool for the job, and understanding that sometimes you have a problem that is best solved with a combination of tools.

So, as you said, the combination of tools might be AI and something that is not AI, which may or may not fall under the hybrid AI definition; it really depends upon how you want to use the term. Another way to extend the notion of hybrid AI is combining different models.

This is already going on. Organizations are combining models with each other, either different public models, or public and private models, even within the context of generative AI, in order to deliver better answers, deliver better results, or to leverage different data sets as appropriate. So sometimes an organization may want to have private LLMs to access corporate data while leveraging a public LLM for less sensitive data, but they want to combine the two. Is that hybrid AI? Well, you could probably call it that. I mean, it’s hybrid in a sense; whether we’re going to be defining that as hybrid AI or not, the market will have to determine.

But that is actually going on, where organizations are mixing different models, trying to figure out what models are good for what, and dealing with both internal and external models, both for data privacy concerns, but also for cost concerns, right? Running a private model with your own data set is likely to cost less than the all-in cost of running a public model with a massive public data set.

Eric: Well, cost is a whole other discussion, probably, but I’d like to pick up on something else from what you were talking about, regarding the tools. We’re talking about another tool set, and I was at a presentation and have been reading some of the work by Phil Calçado, who’s got a company called Outropy, which is in the business of building AI agents.

And he said, you’re going to have to have a whole different set of tools than you currently have. If you’re in, for example, the microservices world and you’ve got your platform engineering tool set and your standards and your best practices built up around that, guess what? It’s not going to work for AI agents.

You’re going to have to retool. We have to find the right combination of new tools that work with AI, that include AI in the application pipeline in the right way. And that’s not what people have today. Whatever we’re going to call it, I think we also have to recognize that it represents, to some extent, a shift, a disruption in the platform engineering world: how do I get the tools I need to do the job? Now that I’ve got AI in the mix, I need a different set of tools.

Jason: So you bring up agentic AI, which is another hot topic. And one of the challenges with AI agents is that, essentially, they’re focused on individual tasks. An agent will do something for you and return a result, but the agents we have today are not very good at orchestrating sequences of events.

So what a lot of the agentic platforms out there are doing is having individual agents do individual tasks, with some sort of workflow or orchestration engine sequencing the different interactions, some of which may be the actions of agents. This brings up the question of whether some different kind of AI, maybe not agentic AI, but some other kind of AI that is particularly good at orchestrating different tasks, might be combined with agentic AI, which is good at performing those tasks autonomously.

And that’s an open question. Would we call that hybrid AI, or would we call that something else? That remains to be seen.
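The pattern Jason describes, individual agents handling individual tasks under a separate workflow engine, can be sketched like this. The agent functions are hypothetical stubs; the point is that the sequencing lives outside the agents:

```python
# Agentic-platform sketch: each agent handles one task, while a simple
# workflow engine (not itself an agent) sequences them deterministically.
# The agents below are hypothetical stubs passing a shared state dict.

def research_agent(state):
    state["facts"] = ["fact A", "fact B"]
    return state

def draft_agent(state):
    state["draft"] = f"Report based on {len(state['facts'])} facts."
    return state

def review_agent(state):
    state["approved"] = "Report" in state["draft"]
    return state

def run_workflow(steps, state):
    """Deterministic orchestration: run each agent in order."""
    for step in steps:
        state = step(state)
    return state

result = run_workflow([research_agent, draft_agent, review_agent], {})
print(result["approved"])  # True
```

Whether that orchestration loop should itself become an AI, and whether the result counts as hybrid AI, is exactly the open question above.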

JE: Yeah, that makes a lot of sense in terms of whether we’re going to reach a definition for hybrid AI now or later. I mean, just opening up the whole cost concern and resource consumption aspect is something else we should probably do in a future episode because a lot of these decisions are made based on what resources are available, where the inference is happening and what data is available at that point, right?

So, like, you could take a model and break it down to an endpoint, where it’s running on someone’s device and it has a localized model that does certain tasks very well. Maybe it’s sort of self-contained, like a self-contained hybrid AI model that has aspects of all of those things.

But then if you roll it back up to the corporate level, they probably have much larger versions of hybrid AI that have to include all of those aspects, but with much higher horsepower, much more storage, much bigger resource consumption and footprint. So that’s the problem with a term like hybrid. It just seems like a give-up, throwaway term for any industry where things are changing, you know?

Eric: Well, yeah, exactly. I think it’s not completely well defined in the marketplace yet, but it’s interesting to see it starting to be used, and the market struggling to really define it. Hybrid or not, though, it seems pretty clear, back to the point you made earlier, that this is not a monolithic application.

You can’t think about it that way. Now, I think people are recognizing whether we call it hybrid or not. You’re going to have to figure out how to use the tools in the right way for your problem. You’re not just going to have a solution for everything. You’re going to have a set of components that you’re going to have to put together yourself, depending on the problem you’re trying to solve.

Jason: So one of the trends we’ve seen in the market over the last few years is that there were already a number of vendors selling AI-based solutions, and that’s been going on for years. Then along came generative AI two or three years ago, and a lot of those AI vendors had to either retool their product line, or at least retool their marketing, to be GenAI-centric, because it was such a hot topic.

So all of the other forms of AI, the deep learning and the machine learning and the natural language processing and so on, we just sort of subsumed under this GenAI banner. And now what’s happening? We’re realizing that GenAI isn’t good for everything. It’s not a panacea.

Well, those vendors who have spent a lot of time and effort building out machine learning or some of these other approaches to AI are now well positioned to play in this new Hybrid AI space. And I would definitely envision, this year, a number of these more mature AI vendors basically saying: hey, we can do multiple kinds of AI. We’ll not only help you do these different kinds of AI, we’ll help you do machine learning, we’ll help you do deep learning, we’ll help you do GenAI, but we’ll also help you put these together into hybrid AI solutions, depending upon the problem you’re trying to solve. And it’s going to be interesting to see what they come up with, right?

How well can these vendors do that, assembling hybrid AI solutions out of individual AI-based components, and what kinds of problems can they solve that way? It’s going to be an interesting time, because the simple examples on this call, like combining machine learning and generative AI to do better medical diagnostics, are just scratching the surface of the power of combining multiple different approaches to AI into a single solution.

JE: Yeah. The market’s going to change, and it’ll be interesting to watch. Traditional automation vendors and process automation vendors have been wrestling with this problem for decades, right? So how are we going to take the same problems that were solved then and apply those solutions to composite AI models or a hybrid AI model? It’s going to have to carry forward some of the same practices; even if the technology is on a completely new level, it’s still going to have to be thought about that way, I think.

Eric: Not to mention the impact on DevTools.

Jason: Yeah, that’ll be interesting too.

JE: Yeah, especially as it replaces coding. What are we going to do, you know?

Jason: Very good. Well, I think we’re at about time here, so we should wrap up. So thanks a lot for tuning in. Again, this is Jason Bloomberg of Intellyx.

JE: Jason English Intellyx.

Eric: And Eric Newcomer from Intellyx.

Jason: Thanks a lot, everybody.

JE: And we are in your corner. We need a tagline.


Principal Analyst & CMO, Intellyx. Twitter: @bluefug