Intellyx BrainRiffs Podcast
Will the advent of an ‘open source’ DeepSeek AI from China really wreck the hype cycle for GenAI and Agentic AI, or will it help the rest of this chaotic market finally learn to do more useful work with less? Intellyx Analysts Jason Bloomberg, Jason English ‘JE’, and Eric Newcomer discuss.
Watch and like our new show on YouTube here: https://youtu.be/eduyLA2U-u0
Full show transcript:
Jason: Hi, this is Jason Bloomberg, Managing Director of Intellyx, and welcome to Intellyx's inaugural BrainRiffs podcast. With me are my colleagues Jason English and Eric Newcomer. Introduce yourselves, boys.
JE: Jason English here, or JE. Looking forward to our inaugural discussion.
Eric: Hi, Eric Newcomer, Principal Analyst and CTO here, and former CTO in industry and finance. Looking forward to the conversation.
Jason: So for this first BrainRiffs, we figured we would start with a discussion of DeepSeek. It seems like this new Chinese open source language model has upended the entire generative AI, or GenAI, industry. I must say that back in October, I called it: I called for a GenAI crash.
I didn't expect a Chinese project to do the crashing, but here we are. Eric, you wrote about GenAI in this week's Cortex newsletter as well, so why don't you give us your take on this?
Eric: Yeah, I was right in the middle of writing this when the DeepSeek news landed. So I spent a bit of time researching that as best I could.
And it looks like the main impact is economic. The news out of China is DeepSeek created a superior Gen AI engine for a lower price, and they’re giving it away as open source. People can use it for free. It’s been the most popular downloaded app on the mobile app stores, and it just had a huge impact. I think the impact is on the financing area where VCs have been raising hundreds of millions, if not billions of dollars to invest in AI, and maybe now have been shown that they don’t need that much money.
The business model of OpenAI charging for the use of ChatGPT has been threatened. There's a lot of pushback on the security front, the bias front, and other potential dangers with the DeepSeek model. So big, big news. It's causing a lot of commotion, and it will be interesting to see how it plays out.
JE: I was finding it particularly interesting because it seemed like the industry was going along a black box path that I didn’t think was very responsible.
As we do a lot of talking about responsible AI, it seemed like these large companies had all of a sudden sucked up all of this investment, and then they're turning it into, basically, "you can trust us to develop the ultimate AI that's going to do everything for you."
One that's maybe even going to replace the human workforce, and the rest of us in the actual world may not want it to be managed that way. So I think the idea that they've commoditized it sort of puts it back in the hands of the market, and then we can look at it just like any other IT issue, without it being developed in the dark.
Jason: So one of the things I find interesting about this whole story is that DeepSeek comes out of China, and TikTok is getting all of this hassle because it comes out of China. So if TikTok threatens the U.S. population, then won't DeepSeek? I'm wondering, is anybody going to trust some sort of Chinese software?
Now, the fact that it's open source and all of the techies can poke around in its innards, I guess, will help. I mean, nobody's poking around in the innards of TikTok in the same way, but it still makes you wonder. And Eric, you mentioned some concerns as well. I just got an email that pointed out that DeepSeek is more biased than Claude 3 Opus, more vulnerable to generating insecure code than OpenAI's o1, more toxic than GPT-4o (I'm not quite sure exactly what "toxic" means, but it sounds bad), and 11 times more likely to produce harmful output than o1 as well. It sounds like it's even worse than TikTok in some ways, which makes you wonder what's really going on. Are people okay with this? If it's so cheap and so much better than all the others, then do we really have any alternatives?
I mean, is this just going to be the DeepSeek era of the GenAI movement, and we'll just have to live with it? That's what 2025 is going to be: the DeepSeek era.
Eric: One of the big changes is they're using commodity hardware as opposed to expensive GPUs. I read somewhere that one of the reasons for that is the export controls we had put on GPUs going to China.
Which kind of forced them to do more with less, if you will, or create these models on cheaper hardware, which is why the whole thing is cheaper. So in that sense, even though there may be some concerns about the toxicity, I think we have to wait to see how many of these are real issues and how many are competitive issues.
That really will matter in the marketplace. But I think it also has an impact, regardless of that, on how things are going to go forward: all the other engines are going to have to try to keep up somehow with this do-more-with-less, this open source, this commoditization, as J.E. put it. They're probably going to have to rebuild to some extent.
Jason: It sucks to be NVIDIA, eh?
Eric: Their stock took the biggest hit, didn't it?
Jason: Biggest hit in history. Which just tells me that maybe we don't need GPUs for this stuff, but that NVIDIA was way overinflated, along with some of these other stocks as well. Which does make you wonder where the money is going, right?
I mean, if there's so much money poured into NVIDIA and its brethren, and now it's sort of evaporated, now what?
JE: Yeah, something like 600 billion dollars of value lost in a day. It's more than I've ever thought about happening, but I think it goes to show you something. Where I was going with the commoditization idea is that I don't know if the constant consolidation of companies is a good thing for innovation.
And I think what this does is prove you can do more with less, which was the fundamental nature of innovation in the early days. Now, if markets allow Microsoft to buy a part of OpenAI, or Google to own all of the training data, that starts getting to the point where you're not really having a competition of ideas anymore.
And so, even if it's not the DeepSeek era, this is going to basically push all of these companies back into the open, and we might see more of an open source initiative emerging from developers here in response to this, too. I think the best possible outcome of this change would be to actually drive it back into the light, so that it is something that we can all start to grasp and help guide.
Jason: Yeah, I would say it's important to keep in mind that what we have today with DeepSeek is a version 1.0 product. Not only are the DeepSeek experts back in China going to continue to evolve this technology, but since it's open source, all the AI gurus around the world are going to be evolving this technology and building things with DeepSeek, as well as the next generations of DeepSeek or DeepSeek alternatives. So, if anything, it's sort of like we're past the AltaVista and Lycos phase, where the early leaders in the search engine market got trounced by Google. It's a similar kind of pattern, only Google is now playing a different role: all of the leaders up through 2024 in the GenAI space are now going to be scrambling to either do something different, or maybe they'll go belly up, and other vendors are going to rise from the ashes of the OpenAIs of the world.
Eric: Something I also like about this is that it really popped the inflated hype balloon that all of these initial AI companies, especially OpenAI, have been riding on for so long, as if there was something magical, mysterious, and mystical about what they were doing that nobody else could do, and therefore their valuations went way up and their investments went way up, because these were the guys who knew how to do it.
It could change the market and change the world. But now we find out that bubble has been punctured. They are not necessarily the only guys who can do this. They are not necessarily the masters of the universe. Their playing field has really been leveled, and we can hopefully get back to more reality-based conversations about what GenAI is.
Jason: Yeah, and we also have concerns about the credibility that GenAI has as a technology. I wrote about this a while back, where I talked about how it was a wonderful bullshit generation machine. And I know, Eric, in your latest Cortex, you wrote about it being word salad.
What we have with GenAI is technology for taking large data sets consisting of natural language and boiling them down into representations that look plausible. And so it's always been about plausibility over veracity, which has gotten people into trouble. So you have this whole movement about how to avoid hallucinations.
A lot of organizations put a lot of time and money into improving GenAI through 2024. Now we have DeepSeek, and it's like, well, is this going to be better? And yeah, it's better technology, but it's still GenAI, right? It's still taking large data sets consisting of natural language and coming up with plausible responses.
So it's going to be interesting to see. I mean, one of the challenges GenAI has had is reasoning, right? Can it solve problems? Can it address complex processes? Some things chat engines have been very good at, but solving math problems is something they still struggle with.
And the question is, do we really want it to solve math problems as well? I mean, is that even useful?
JE: All right. I mean, one of the interesting parts about so much development just following along this LLM path: is that really even the model for AI that we wanted to have in the first place?
Basically, what we've done is taught AI an intermediate language, which is human language, tried to make it emulate human language, and then, on top of that, tried to get it to do productive work. So why are we teaching it an intermediate language? That's very inefficient for its uses. We're burning up tons of power.
They don't even talk about the fact that we're consuming all these terabyte and petabyte farms of training data, and all of this AI work going on is consuming a lot of power, the size of a small country at this point. They never address that at all; they act like somehow, magically, it's going to come down, when basically we're just feeding more and more bloated data into these models, and it's not going to produce much better results, other than the fact that we just add more horsepower behind each search.
If we can bring it down, not just in terms of our accessibility to it and the cost of licensing, but the cost of all of this power and computing equipment and super expensive chips and everything that's being fed into this machine in order to get it to seem more human-like, is that really what we wanted?
AI could do a lot more for us in terms of being able to calculate things, or form inferences around real business problems and the predictive solutions that we need, rather than being a companion in this sense.
Eric: Well, in researching the article, I found several references to the fact that the most popular use of GenAI is for Internet searches, as a replacement for Google.
Of course, there's a huge interest in coding assistance. But there again, it's an assistant. It's not taking over the code. It's not generating apps for you. It can't do that. The human has to be involved. And I think this is the fallacy of the human language underpinnings of the whole thing: human language is not computer language. It is not concrete in terms of its transformation into ones and zeros that are executable by a computer.
It's interpreted. This whole LLM thing is a process of analyzing words, transforming them into vectors, and then comparing them to get statistical matches that produce the result. It's not definitive. And a lot of people are writing agents. I did read an article about somebody who just created a toolkit for creating agents, and he listed a lot of problems.
As referenced in the article I just wrote: it's very unreliable; it's not deterministic. Then you have, on the other hand, Microsoft and Salesforce going all in on this idea of autonomous agents. Well, copilots? Guess what? Copilots for Office bombed. I think, Jason, you mentioned that's like Clippy on steroids.
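Eric's description of the LLM pipeline (analyzing words, transforming them into vectors, comparing them for statistical matches) can be sketched in a few lines of Python. This is a toy illustration using made-up bag-of-words vectors and cosine similarity, not how any production model actually embeds text:

```python
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words count vector.
    # Real LLMs learn dense vectors; this only illustrates the idea.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Statistical match: cosine similarity between two vectors.
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

query = embed("open source language model")
candidates = {
    "DeepSeek is an open source language model": embed("deepseek is an open source language model"),
    "NVIDIA stock dropped sharply": embed("nvidia stock dropped sharply"),
}
# The closest vector wins: plausibility by similarity, not veracity.
best = max(candidates, key=lambda k: cosine(query, candidates[k]))
```

The point of the toy is the last line: the output is whatever scores the highest statistical match, which is exactly why the result is plausible rather than definitive.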
Jason: Yeah, yeah. Go on.
Eric: it’s not working and Salesforce is out there saying, Hey Microsoft, you screwed up, but we’re doing great. But you can’t find anything in the news about how much revenue is coming in from Salesforce on this. They’re just saying, Hey, we got 200 customers. Well, you gave it to them.
Did they use it? I’m very skeptical of autonomous agents because of this, you know, that’s human language is not capable of that kind of autonomy.
Jason: But to get back to the power you mentioned, just the electricity this stuff consumes. And of course, a lot of these big AI companies are now looking to put nuclear reactors in their data centers, which sounds like a great idea, right?
We can put nuclear reactors in submarines, right? Why can't we put them in data centers? And I just love the idea of having uranium in every office park, chugging away, generating electricity. Now, of course, fusion's on the horizon. I remember when I was a teenager, fusion power was 20 years away, and gosh, it's still 20 years away, although we're making progress. But who knows?
I don't foresee fusion solving the problem anytime soon. I'd be happy to be wrong, but even if, all of a sudden, we had viable fusion power and electricity was essentially free, it still doesn't address the whole problem. It's still about the chips and the rest of the facilities, right?
So even having unlimited electricity would only be one component of solving this problem. I mean, one of the challenges OpenAI is having as they evolve their models is that using the entire web as a training set is now too small. The entire web is now too small, and they don't have anyplace to go for more data, right?
Enterprise data can't be used for training these public models, because it's essentially private data. Every enterprise can use AI on its own data, but no enterprise is going to contribute its data to this greater good of AI models that need more and more data. So what are you going to do if the entire web is not enough, and enterprises aren't willing to share?
Well, you end up generating synthetic data with AI, but that's going to lead to model poisoning and model collapse. I wrote about this a little while back as well. You can't train AI on AI-generated data, or you'll end up with worse and worse results over time. Will DeepSeek solve this problem? I haven't heard one way or the other, but I have a feeling that when push comes to shove, we're just not going to have enough data to make this GenAI vision a reality.
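The model-collapse dynamic Jason describes can be illustrated with a toy simulation; this is an assumption-laden sketch, not the mechanism inside any actual LLM. Fit a trivially simple Gaussian "model" to real data, then repeatedly re-fit it to small synthetic samples drawn from the previous generation's fit:

```python
import random
import statistics

random.seed(42)

# One generation of "real" training data.
real_data = [random.gauss(0.0, 1.0) for _ in range(1000)]

mu, sigma = statistics.mean(real_data), statistics.stdev(real_data)
spreads = [sigma]
for generation in range(20):
    # Each new "model" trains only on synthetic output of the last one,
    # from a much smaller sample, with no fresh real data added.
    synthetic = [random.gauss(mu, sigma) for _ in range(50)]
    mu, sigma = statistics.mean(synthetic), statistics.stdev(synthetic)
    spreads.append(sigma)

# spreads traces how the fitted distribution drifts generation by
# generation: estimation error compounds because nothing re-anchors
# the model to the original real data.
```

The takeaway is the feedback loop itself: once a model trains on its own output, each generation's errors become the next generation's ground truth.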
Eric: I think it’s the same problem. It’s an LLM based, technology. The difference is just now it’s cheaper And perhaps better algorithms that are running it, but the data challenges is going to be the same.
JE: Yeah, as long as it's not aligned with this idea of unlimited growth that we seem to have gotten addicted to, right?
I think that's where we can really improve the model. If it does push this back out into the open, where it's not just about growing top-line revenue but more about growing capability, as opposed to just growing the size of the data set and the size of the computing farm, I think we'll finally get it to a better place.
But it's going to require us to take some of these people who thought they could control it out of the loop. So we'll see if that can happen over the course of this year.
Eric: And who’s going to pay for it all? It’s still the question, Jason you mentioned other sources of electricity available, but these also cost money.
These are investments. This is, where the V. C. s are giving you money for, and how are they going to get their money back, especially if the economic model has been shifted with the release of DeepSeek?
Yeah, it’s, uh, this does make you wonder, yeah, and it’s like this overhype thing that I’m glad to see punctured you listen to the original chat GPT hype, just two years ago it sounded, like it was good for everything They made it sound like you could trust it to be reliable over time.
And as Jason, pointed out several times, that’s not the case. There’s a lot of problems and those were not initially disclosed. Because of, the technology hype cycle to capitalize on investments and the business model that they would put out for the investors to try to make a lot of money.
And I think we’re seeing now through adoption over the first two years that most of the adoption is in searching and coding assistance. And other things in which humans are still involved. It’s not taking over. It’s like autonomous cars still not really working, even though they’ve been promised for years.
So I think. It’s somewhat good to see a correction on the hype cycle here, but I think we’ll be hearing about this for a while because of the economic repercussions through the current models.
Jason: Yes, there’s also the question of AGI, Artificial General Intelligence. Will we ever have AI that can, that is generally useful to answer any kind of question to solve any kind of problem?
hearing from some of the, big AI firms, what they’re trying to say is that, if Gen AI just gets better and better, eventually it will be AGI. But I don’t see it going in the right direction at all, because Gen AI is more about creating plausible results rather than accurate results.
It’s not going to come up with new thinking that is not already exist in its training data. So you’re never going to get to AGI, you’re never going to get to the smart android, you know, commander data of Star Trek. You’re not going to get there, with GenAI, no matter how much money you pour into new chips or, new models or new electricity.
So, uh, so this is also sort of a misleading misdirection in the market where we’re just making AI smarter. We’re not making AI smarter. We’re making it bigger. But it’s not really getting smarter, right? Gen AI is not getting to that point. And, the history of AI, dating back to Alan Turing, has been, constrained with this misconception.
In fact, people have long misunderstood the Turing test, right? When Alan Turing came up with the Turing test, he realized that there was no good way to measure machine intelligence. So he came up with a metric that could be measured, right? So if you can fool a human audience, then that is at least something you can measure.
But what he didn’t realize is that people would take the Turing test and make it a goal in and of itself if you can fool people into believing your AI is human, then that is a sufficient condition for a good quality A. I. So people have come up with ways of creating A. I. That looks more human because it makes mistakes and does other things.
And it’s like, well, yeah, you can fool people fooling people. It’s not a good measure of good. A. I. Problem is GenAI has really been about fooling people into thinking that you’re chatting with an intelligent being on the other end of the line people are falling in love with their AI, girlfriends and boyfriends.
And it’s like, that’s not what this stuff is for. your AI boyfriend or AI girlfriend is not really a person and it will never be a person. It’s not getting closer to being a person.
JE: It’s just a matter of the destination, not matching up
with all of the effort and capability that’s been invested so far. I think this is still a fun development to see. It’s gonna make our lives as analysts a lot more interesting this year.
Jason: That sounds like a good note to end on. We're about at time; we want to keep this to about 20 minutes.
Thank you for tuning into our first BrainRiffs from Intellyx. I'm Jason Bloomberg.
JE: I'm Jason English, or JE, with Eric Newcomer.
Jason: Thanks a lot for tuning in, everybody.
©2025 Intellyx B.V. All rights reserved.


