The AI Elixir of Dr. Doxey: Myths, Misconceptions, and Neural Networks
Over the past few weeks, I have witnessed a couple of “self-proclaimed” AI specialists touting properties that AI does not have (yet). Sure, it may impress people because it looks so magical, but I like facts and reality. Worse, when confronted with real facts, they immediately claimed to know better than I do. Of course, that’s possible, but being an engineer, I decided to reconsider my theory and went back to the books (white papers, to be precise) to check… and no, I didn’t ask ChatGPT for an answer; thank God, I can still think on my own 😊.
It reminded me of Dr. Doxey from Lucky Luke. Remember him? In this Belgian comic, he was the so-called “genius” who sold his miraculous elixir, claiming it could cure anything, backed of course by his “30 years of research” and his disguised friend pretending to be cured. In the comic, Dr. Doxey fools people with grandiose promises and just enough jargon to sound legitimate. Sound familiar? Well, some self-proclaimed AI evangelists are selling the same kind of snake oil, except their “elixir” is filled with myths and misconceptions about what AI, like ChatGPT, can and cannot do.
So, in this post, I’m channeling my inner Lucky Luke to call out these “digital illusionists” and help you understand the real limits of AI. This way, you can use its full potential without falling for false promises.
Myth number one: ChatGPT can perform mathematical calculations.
The answer I got from one of these digital illusionists was, “you just take a picture of your equation and it will solve it thanks to the neural network”. I’ll let you imagine my engineer brain short-circuiting at that moment while the mathematician soul in me was crying after hearing that absolute nonsense…
So, let me explain in simple terms what a neural network is, how it’s trained, how it works, and why it doesn’t do logic the way you might think (or at least the way the illusionists want you to believe it thinks).
So, first, what’s a neural network? Imagine a giant interconnected web, mimicking (roughly) how a human brain works. It only mimics one, and luckily it is nowhere close to an actual brain. Neural networks are just a ton of math equations/algorithms layered together, designed to process input and spit out something that makes sense (most of the time).
So basically, you feed an input into the layers and you receive an output on the other side.
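To make that concrete, here is a minimal sketch of what “layers” really are: matrix multiplications followed by a simple non-linear function. The sizes and weights below are invented placeholders (a real network would learn its weights during training), so treat it as an illustration, not an implementation.

```python
import numpy as np

# A toy two-layer "neural network": nothing but matrix math plus a
# non-linearity. The weights are random placeholders; a real network
# would learn them from data during training.
rng = np.random.default_rng(42)

W1 = rng.normal(size=(4, 8))   # layer 1: 4 inputs -> 8 hidden units
W2 = rng.normal(size=(8, 2))   # layer 2: 8 hidden units -> 2 outputs

def relu(x):
    return np.maximum(0, x)    # the non-linear "activation" function

def forward(x):
    hidden = relu(x @ W1)      # layer 1: weighted sum + activation
    return hidden @ W2         # layer 2: another weighted sum

x = np.array([0.5, -1.2, 3.0, 0.1])   # some input vector
print(forward(x))                      # the output: just numbers, no magic
```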
To reach that level of competence, the network gets trained on huge datasets → think mountains of text, images, or whatever else it’s supposed to learn from. During this process, it doesn’t “learn” like we do. Instead, it finds patterns, correlations, and relationships in the data. So, when you ask it something later, it uses those patterns to predict what the most likely response should be.
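If you are curious what “finding patterns” looks like in code, here is a deliberately tiny sketch: a single-weight “network” trained by gradient descent on made-up data whose hidden pattern is y = 3x. Everything here is invented for illustration; real training does the same kind of nudging over billions of weights and examples.

```python
import numpy as np

# Toy data: the hidden "pattern" is y = 3 * x.
rng = np.random.default_rng(0)
xs = rng.normal(size=100)
ys = 3.0 * xs

w = 0.0                 # the single "weight" to be learned
learning_rate = 0.1

for _ in range(50):
    predictions = w * xs
    error = predictions - ys
    gradient = 2 * np.mean(error * xs)   # which way to move w to reduce the error
    w -= learning_rate * gradient        # nudge the weight in that direction

print(w)   # close to 3.0: the pattern was found, not reasoned about
```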
Let’s take an example I often use with my colleagues → let’s say you type, “What does a man next to a pond with a cane do?” The neural network doesn’t see or read those words the way you do. Nope, the first thing it does is break the sentence into tiny pieces we call tokens. It might slice it up into [“What”, “does”, “a”, “man”, “next”, “to”, “a”, “pond”, “with”, “a”, “cane”, “do”, “?”]. These tokens are then converted into numbers (multi-dimensional vectors, to be precise). Think of these as long lists of numbers that represent the meaning of each token, based on patterns the model learned during training.
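If you want to see tokenization for real, here is a small sketch using OpenAI’s open-source tiktoken library (an assumption on my part; you would need to install it, and any BPE tokenizer illustrates the same idea). Note that real tokenizers often split text into sub-word pieces, so the output will not match the neat word-by-word list above exactly.

```python
import tiktoken  # OpenAI's open-source tokenizer library (pip install tiktoken)

# Load a GPT-style encoding; cl100k_base is used by several OpenAI models.
enc = tiktoken.get_encoding("cl100k_base")

sentence = "What does a man next to a pond with a cane do?"

token_ids = enc.encode(sentence)                # text -> list of integer IDs
tokens = [enc.decode([t]) for t in token_ids]   # each ID back to its text piece

print(token_ids)   # the integers the model actually receives
print(tokens)      # the (sub-word) pieces they correspond to
```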
Here’s where it gets interesting: the word “cane.” On its own, the vector for “cane” could mean several things: a walking stick, a type of sugar plant, or even a beating rod. So how does the network figure out which one you meant? The answer → context. The vectors for the surrounding tokens (“man,” “pond,” “with”) help narrow down the possibilities. It’s as if “cane” looks around and says, “Ah, I’m next to ‘man’ and ‘pond,’ so I must be referring to a fishing rod, not a walking stick.” This is what we call semantic understanding, where the meaning of a word depends on its neighbours.
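Here is a toy illustration of that idea. The vectors below are completely made up (real embeddings have hundreds or thousands of learned dimensions); the only point is to show how “closeness” between vectors, measured here with cosine similarity, lets the surrounding context pick the right sense of a word.

```python
import numpy as np

# Invented 3-dimensional "embeddings", purely for illustration.
cane_walking = np.array([0.9, 0.1, 0.0])   # "cane" in its walking-stick sense
cane_fishing = np.array([0.1, 0.8, 0.3])   # "cane" in its fishing-rod sense
pond         = np.array([0.0, 0.9, 0.4])   # a context word tied to water/fishing

def cosine(a, b):
    # Cosine similarity: close to 1.0 means "pointing the same way".
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine(cane_walking, pond))  # low: the walking-stick sense fits "pond" poorly
print(cosine(cane_fishing, pond))  # high: the fishing sense fits the context
```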
Now, the network processes all the vectors of your sentence layer by layer, performing calculations to find patterns that match the input. When it’s ready to generate a response, it doesn’t spit out the whole sentence at once. Nope, it goes token by token. For example, it might decide, “Okay, the first word of my response should be ‘He’,” then predict the next word based on that, and so on. Eventually, these vectors are converted back into words, and voilà! You get a response that (hopefully) makes sense.
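Here is what that token-by-token loop looks like as a sketch. The predict_next_token function and the tiny hand-written “model” are stand-ins I made up for this post; the real network would return a probability for every token in its vocabulary at each step.

```python
# A sketch of token-by-token (autoregressive) generation.
# FAKE_MODEL stands in for the whole neural network: it maps the tokens
# generated so far to made-up probabilities for the next token.
FAKE_MODEL = {
    ():                      {"He": 0.6, "The": 0.4},
    ("He",):                 {"is": 0.8, "was": 0.2},
    ("He", "is"):            {"fishing": 0.7, "walking": 0.3},
    ("He", "is", "fishing"): {"<end>": 1.0},
}

def predict_next_token(context):
    # Greedy decoding: always pick the token the "model" rates most likely.
    distribution = FAKE_MODEL[tuple(context)]
    return max(distribution, key=distribution.get)

context = []
while True:
    token = predict_next_token(context)
    if token == "<end>":
        break
    context.append(token)

print(" ".join(context))   # "He is fishing" -> a prediction, not a fact
```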
The answer you’d expect is, “He is fishing”, because, let’s be honest, that’s what most men next to a pond with a cane are doing. But… someone else could be using that cane as a walking aid, and if I were the man next to the pond with a cane, I wouldn’t be fishing; I’d be practicing martial arts.
So, is the answer given by the neural network wrong? Well, yes and no.
First, you need to understand that it’s not really an answer but a prediction, a potential answer, if you prefer. This is where the prompt becomes crucial. If I had asked, “What does a man in a kimono do next to a pond with a cane?”, the semantics and context would have guided the prediction toward a better response. And I’m saying “a better response” on purpose because, with neural networks, there’s no absolute truth; it’s all about probabilities.
The network doesn’t actually know what the man is doing; it’s just predicting the most likely answers based on its training. It cannot come up with a potential answer if it was never trained on anything like it.
Feel free to test this example yourself and ask for alternative answers to see what it comes up with.
So now, knowing how this neural network works, let’s go back to the myth: “It can solve mathematical problems.”
But here’s the catch: it’s all prediction, no actual reasoning. Neural networks are amazing at spotting patterns, but they have no clue about logic or rules. Throw a math problem at one, and it doesn’t calculate the way a calculator does. It doesn’t “know” that 2+2 equals 4 because of arithmetic rules. Instead, it recalls similar examples from its training data and pieces together what seems like the most probable answer. In fact, the number 2 is treated as a token (a text-like entity): the network converts it into a vector representation in its training space, just like any other word.
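You can see the difference for yourself. Below, the same tiktoken tokenizer as earlier (again an assumption on my part; any tokenizer works) turns “2+2” into token IDs, which is all a language model ever sees, while Python itself actually applies the rules of arithmetic.

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

# What the language model receives: token IDs for the *text* "2+2",
# not the quantity two or the operation of addition.
print(enc.encode("2+2"))

# What a calculator (or Python) does: apply arithmetic rules.
print(2 + 2)   # 4, guaranteed by the rules of addition, not by pattern matching
```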
Now, to be fair, for simple and well-known arithmetic, it might give you the right answer. And this is exactly why some of those illusionists insist it works for math. But don’t get too excited: it’s just as likely to throw you something completely off the wall. For example, it could say 2+2=1. Before you gasp, let me clarify: that is a correct answer in abstract algebra when you work modulo 3 (yes, I love maths 😊). But unless you asked it specifically about modular arithmetic, that answer would make zero sense to most people.
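For the curious, here is that modulo-3 claim checked in two lines:

```python
# Ordinary arithmetic:
print(2 + 2)        # 4

# Modular arithmetic, modulo 3: the sum wraps around past 2.
print((2 + 2) % 3)  # 1 -> so "2 + 2 = 1" is true... in Z/3Z
```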
The bottom line? Neural networks don’t “do math.” They make predictions. Sure, for simple cases, the prediction lines up with what you expect, but for anything complex, it’s as reliable as asking your dog to balance your checkbook. That’s why tools like Wolfram Alpha, MATLAB, and Mathematica still reign supreme when it comes to real mathematics. So, don’t fool yourself. Use the right tool for the job.
In my next article, I’ll tackle another myth I heard this week: “You don’t need prompt engineering anymore; you can just talk to GPT…” Well, stay tuned for that one. 😉
