Article for SiliconANGLE by Tony Baer
Clearly, not all AI models are alike. Even sorting them into the buckets of machine learning, deep learning and generative pre-trained transformers doesn’t do justice to the many variations in how algorithms are structured and how data is processed.
But it all starts with finding the right tool for the job. Predictive and prescriptive analytics are different from voice recognition, which in turn is different from entity extraction, natural language query or content generation. Some problems require hard facts, while others just require a general idea.
A colleague of ours, Jason Bloomberg, summed it up nicely: It’s a matter of precision versus salience. At this point, ML or DL models are better suited to providing precise answers, while generative models are best used for establishing context. In many cases, the choice won’t be either-or, but an “ensemble” of different models, each solving part of the problem, that are assembled into a composite answer.
So, if you are a financial institution or insurer deciding whether to grant a loan or underwrite coverage, you need hard, quantifiable information. The same goes if you are a farmer seeking to optimize how much water or fertilizer to apply to different parts of your spread; a manufacturer, transport or logistics provider pursuing preventive maintenance; or a retailer predicting customer churn. An ML algorithm that is fed relevant statistical data is likely to deliver a more precise answer than an LLM transformer, and is therefore better suited to these use cases.