AI Is Too Expensive

AI is everywhere

Since the release of OpenAI’s ChatGPT (built on GPT-3.5) in November of 2022, the topic of AI has infused itself into nearly every corner of modern life. This trend is unlikely to stop: AI will disrupt business, transform daily life, and continue to invade the public consciousness. Awareness of what AI is, and of what it is and is not capable of, is crucial to weighing the opportunity cost of IT investment. It’s simply too expensive to not be aware of what AI is.

What is AI?

We’ve been living with functioning computer systems that can imitate intelligent human behavior for decades, typically outperforming humans by orders of magnitude. This is not new.

The impressive science that drives OpenAI’s ChatGPT is also rooted in research that has been in flight since 1986, taking iterative steps forward and expanding AI function and capability. Recent breakthroughs in Generative Pre-trained Transformer (GPT) models (often grouped under the label of Large Language Models) have shown how language can be mapped into high-dimensional vector spaces, where the number of possible combinations dwarfs the number of atoms in the universe. Starting with perceptrons and expanding through recurrent neural networks to transformers and attention heads, massive scaling has shown extraordinary progress toward artificial intelligence. This is new.

LLMs are extraordinarily capable of generating text in response to any given prompt by predicting the most probable next word. Those words represent ideas, moods, taxonomies, ideologies, concepts, facts, and fiction. They can be directed with specific scope, tone, and candor to find and describe almost anything that has ever existed. LLMs are also able to formulate these words into things that have never existed, thereby becoming a creative agent that assembles past works into new works.
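The “predict the most probable next word” idea can be sketched in miniature. This is a toy bigram model, not a transformer, and the tiny corpus below is an invented stand-in for the vast training data a real LLM learns from; but the core move is the same:

```python
from collections import Counter

# Hypothetical toy corpus standing in for an LLM's training data.
corpus = "the cat sat on the mat the cat ate the food".split()

# Count which word follows each word (a bigram model -- far simpler
# than a transformer, but the same "predict the next word" idea).
following = {}
for prev, nxt in zip(corpus, corpus[1:]):
    following.setdefault(prev, Counter())[nxt] += 1

def most_probable_next(word):
    """Return the statistically most likely word to follow `word`."""
    return following[word].most_common(1)[0][0]

print(most_probable_next("the"))  # "cat" -- it follows "the" most often here
```

A real model does this over token sequences with billions of learned parameters, but the output is still a probability-weighted continuation, not a looked-up fact.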

Substitute “pixel” for “word”, and “image” or “video” for “narrative”, and we have the same concept applied to a different medium: this is multi-modal AI. The principles are roughly the same, even though the core technology and science differ in important ways. Those differences are not, for the general audience of the world, terribly important to understand. But the fact that image and video AI models operate on the same principle as text-based LLMs is important. AI is simply too expensive to not understand how this behavior works.

Why Should Anyone Care?

The modern blend of AI has some immensely powerful capabilities that offer a great deal of opportunity. LLMs also have some important flaws. To an LLM, a globe earth and a flat earth are both just patterns in its training data, each with a different probability of answering a specific question. LLMs can easily be coerced to produce almost any words, generate almost any image, and present the result as if it were fact.

This output is not assembled from citable sources, but from probabilistic inference across high-dimensional mappings. The resulting responses are assembled “from thin air” (so to speak), which is what makes AI such an impressive and awe-inspiring technology. That same probabilistic inference is also what makes it thoroughly capable of producing confident but inaccurate responses. LLMs cannot provide you with the truth, only words that you must evaluate for truth (or not). LLMs do not “know” the truth; they simply generate the statistically most likely match.

LLMs cannot reason, but they can be used to simulate reasoning through iterative prompting. Chain-of-thought models break a question down into its component parts, answer each component, and use that answer to influence the next step, thereby simulating reasoning in a way similar to humans. DeepSeek’s recent releases of its V3 and R1 models have also shown how Mixture of Experts architectures can achieve exceptional performance with a much smaller footprint, reducing cost without necessarily sacrificing quality.
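The chained-prompting pattern can be sketched as follows. Here `toy_model` is a hypothetical stand-in for a real LLM API call (the questions and canned answers are invented for illustration); the pattern itself is what matters: decompose the question, answer each part, and feed each answer into the next prompt.

```python
def toy_model(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call, with canned answers."""
    canned = {
        "How many legs does a spider have?": "8",
        "What is 8 times 3?": "24",
    }
    return canned.get(prompt, "unknown")

def chain_of_thought(question_templates):
    """Answer each sub-question, threading each answer into the next prompt."""
    answer = ""
    for template in question_templates:
        prompt = template.format(prev=answer)  # inject the previous answer
        answer = toy_model(prompt)
    return answer

final = chain_of_thought([
    "How many legs does a spider have?",
    "What is {prev} times 3?",  # "{prev}" becomes the previous answer, "8"
])
print(final)  # "24"
```

Each step is still just next-word prediction; the appearance of reasoning comes from the orchestration around the model, not from the model itself.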

This being said, LLMs are not intelligent; they simulate intelligence. That seems like a reasonable argument for calling them AI. But the distinction is important here: the AI generates a probabilistic assembly of information. Based on all of the information it has digested and learned from, this is what the model scores as the most probable answer to the question you ask. It’s not reason, it’s probability, and while philosophy can weigh in on what that actually means, the reason we care is that it’s simply too expensive to believe LLMs will actually reason their way to a truthful answer.

Do I Really Need AI?

It’s impossible to deny how impressive the technology is. Ask an LLM to answer any question, or ask AI to generate any image, and it will do so in a thoroughly convincing way. AI powered by GPTs and LLMs works, however accurate its output may or may not be. If you do not take advantage of it, someone else will, and they will use AI in ways that give them competitive advantages you may miss out on.

While this sounds like a classic appeal to the fear of missing out, the reality is that these tools are here to stay, and failing to use them in an appropriate manner would handicap your own possibilities. This is a truly disruptive technology that has fully infiltrated our lives and will continue to do so. To ignore it is simply too expensive.

Is This Really AI?

A debate is under way about the definition of AI. Some in the field believe intelligence is defined by the ability to learn, and since LLMs do, in truth, learn to create emergent structures of text, images, and video, they are intelligent and hence AI. Merriam-Webster provides this definition of artificial intelligence:

“the capability of computer systems or algorithms to imitate intelligent human behavior”

The core debate is whether we are near, or have already reached, something called Artificial General Intelligence (AGI). At the risk of oversimplifying, this would be AI that can teach itself to learn anything and create anything. Whether this is the correct definition is, of course, debatable, but correct or not, there is an intense focus on AI achieving the goal of self-sustaining, self-learning, self-motivated functionality.

How Should I Use AI?

We are bombarded by many concepts all at once when deciding what we can do with AI. Retrieval Augmented Generation, agentic pipeline workflows, and fine-tuned model training are among the most recent trends in implementation approaches. Older techniques such as classification, entity extraction, clustering, and data graphing remain critical tools that provide immense value. Every one of these approaches has a non-zero cost, and in many cases that cost can be quite high. All of them have value when used correctly to solve the right kind of problem.
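To make one of these approaches concrete, here is a minimal sketch of the Retrieval Augmented Generation idea. The document store and the word-overlap scoring below are invented stand-ins for the vector database and LLM call a real system would use; the point is the shape of the technique: retrieve relevant text first, then ground the model’s answer in it.

```python
# Hypothetical document store standing in for a real knowledge base.
documents = [
    "Our return policy allows refunds within 30 days of purchase.",
    "Shipping is free on orders over 50 dollars.",
]

def retrieve(question: str) -> str:
    """Return the stored document sharing the most words with the question
    (a real system would use vector similarity search instead)."""
    q_words = set(question.lower().split())
    return max(documents, key=lambda d: len(q_words & set(d.lower().split())))

def build_prompt(question: str) -> str:
    """Ground the model's answer in retrieved text instead of 'thin air'."""
    context = retrieve(question)
    return f"Answer using only this context: {context}\nQuestion: {question}"

print(build_prompt("How many days do I have to return an item?"))
```

Grounding the prompt this way is one of the main defenses against the confident-but-inaccurate responses described earlier, which is part of why RAG has become such a common implementation pattern.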

The technology is changing constantly and at a breakneck pace, with market leaders jockeying for position. The landscape is shifting with alarming speed, and a decision to commit to a path today may very well be obsolete in six months or less. Big bets on the use of this technology can weigh heavily on opportunity cost if not delivered with agility, flexibility, and scalability. These problems are not unforeseeable, but they are difficult to gauge without experience. Choosing AI without a guiding hand is simply too expensive.

Innovent Solutions has over two decades of experience working with Information Retrieval solutions, and has been deeply involved in the emerging technologies in GPT, LLMs, and AI. Contact us to help you make the right decision on how to use this phenomenal technology intelligently, and help you keep the decision from being too expensive for you.

*This article was written by a human being and not processed by transformer models.