Ask an LLM this —
“If you were honest, aren’t you more synthesist than generative? Combining your responses in novel ways from your training data is about as generative as you get, isn’t it?”
They will tell you the truth. More useful than Twitter/X and LinkedIn theories.
Claude.ai (Sonnet 3.5) responded — “One could argue that a term like “recombinative AI” or “synthetic AI” might more accurately describe the underlying process. However, “generative AI” has become the established term in both technical and popular discourse.”
Meta AI (Llama 3.1 405B) responded — “You’re asking for my honest opinion! Yes, I’d say that’s a fair description. Large Language Models (LLMs) like myself primarily generate responses by combining and synthesizing the patterns, relationships, and information present in our massive training datasets.”
OpenAI (4o) — “…it’s important to note that while I can generate responses that appear insightful and creative, my outputs are ultimately rooted in the patterns of the training data and not in a true understanding or consciousness.”
Mistral.ai (large 2) — “Yes, I would say that’s accurate. Large Language Models (LLMs) like me are trained on vast amounts of text data from the internet, up until a certain point in time. When you input a prompt, the model generates a response by predicting the most likely next tokens (words) based on the patterns it has learned from its training data.”
This doesn’t diminish their transformational usefulness in getting real work done; it’s just important not to anthropomorphize AI. LLMs give us enormous utility, but they aren’t getting jobs done through comprehension and reasoning, which is how we humans roll.
LLMs arrive at answers in a fundamentally different way from how humans think and reason.
LLMs aren’t just “different” in a way that leaves room for even a little common ground. They are completely different, using pattern recognition and statistical analysis to predict the next token.
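As a toy illustration of that mechanism (a minimal sketch only — the word counts below stand in for the billions of learned parameters in a real LLM, and the corpus is made up), a bigram model “generates” by picking the statistically most likely next token:

```python
from collections import Counter, defaultdict

# Tiny made-up "training corpus" standing in for an LLM's dataset.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which token follows which (bigram statistics).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(token):
    """Return the token most frequently seen after `token`."""
    return following[token].most_common(1)[0][0]

print(predict_next("the"))  # "cat" — it follows "the" in 2 of 4 cases
```

No comprehension anywhere in that loop: just frequencies recombined into output. Real models replace the counts with learned probabilities over vast data, but the prediction-not-understanding point stands.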
The quality of the outputs from both humans and AI is what matters. This wouldn’t be a conversation if AI didn’t sometimes appear to demonstrate an equivalence to human capability.