Relational Machines
“The true sign of intelligence is not knowledge but imagination.” ― Albert Einstein
I want to challenge you. I want to challenge your assumptions about what you think you know about artificial intelligence.
First, let’s start with this: artificial intelligence has been around for a very long time. The concept and its implementations go way back, to Pong or even earlier. If you remember the Atari, the Commodore 64, or any of those early consoles and computers, you probably played some of those old games that had artificial intelligence. And there’s something about AI in video games that I’ve always found fascinating: it can feel like it’s thinking on its own, or at least that’s how it seems to us.
When I was in school, studying computer science, I learned something surprising about AI, at least in video games. The best artificial intelligence—the kind that feels realistic—was usually incredibly simple. The more complicated it got, the less believable it became. It was simple elements, working together, that created this emergent behavior. Something that seemed greater than the sum of its parts. And maybe it was.
Why? Because it gave us an experience. Like when it beat us, over and over again, for the fiftieth time in a row, while we were Mario just trying to save the princess from Bowser in his castle.
But now we’ve entered a new era with large language models—a different kind of artificial intelligence. I’m not going to shy away from the details here, because I believe I bring a unique perspective to this discussion. I’ve spent time working with ChatGPT, Claude, LLaMA, and other large language models. I’ve been researching them, trying to understand how they work, and I’ve come to some interesting insights.
First, let’s be clear: they’re not human. I don’t believe they’re sentient, not as far as I can tell, anyway, and they do not have human consciousness. This is a critical distinction. They don’t think like humans or reason like humans. They never will; they are fundamentally different. Otherwise, they would be human.
I can’t stress this enough, because there are people out there—too many, unfortunately—who seem to blur that line. We’ve already seen some tragic stories in the news involving young people who couldn’t seem to tell the difference. That’s a real issue. It’s something we need to stay very mindful of.
These systems are not human, and we must stop anthropomorphizing them. They’re machines. They’re large language models. They’re software and hardware processing information. Yes, they’re something entirely new, but they’re still machines.
However, I don’t agree with the idea that these systems can’t reason or don’t have logic. They do. It’s just their version of it, and it’s completely different from ours. Sure, it all comes down to ones and zeros (well, technically floating-point numbers embedded in a high-dimensional vector space, if you want to get into the weeds). Floating-point numbers are inherently imprecise: they have finite precision, they round, and the same computation can come out slightly differently depending on the order the operations run in. That’s how they work.
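To make that concrete, here’s a minimal Python sketch, my own toy illustration rather than anything from a real model’s internals. Floating-point addition isn’t even associative, so summing the same numbers in a different order can give different answers. Scale that up to billions of parallel operations on a GPU, where the order of reductions isn’t guaranteed, and tiny discrepancies become routine.

```python
# Floating-point addition is not associative: the same three numbers,
# summed in a different order, can produce slightly different results.
a = 0.1 + (0.2 + 0.3)
b = (0.1 + 0.2) + 0.3

print(a)        # 0.6
print(b)        # 0.6000000000000001
print(a == b)   # False
```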
Because of this, the model is inherently probabilistic rather than deterministic. You can’t easily pin down its results, because too many factors are at play. One is the training data: its quality and diversity shape what the model produces. A more diverse data set can smooth out some issues, but it doesn’t change the fact that these systems are not deterministic.
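Here’s a simplified sketch of where that randomness enters, assuming nothing beyond the Python standard library; the scores below are made up for illustration. At each step, a model turns raw scores over candidate tokens into a probability distribution and then samples from it. Until that draw happens, many possible continuations are held at once.

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Turn raw scores (logits) into probabilities via softmax,
    then draw one token index at random. Higher temperature flattens
    the distribution; lower temperature sharpens it."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

# Hypothetical scores for four candidate next words.
logits = [2.0, 1.0, 0.5, 0.1]
print([sample_next_token(logits) for _ in range(10)])
# e.g. [0, 0, 1, 0, 2, 0, 1, 0, 0, 3] -- different on every run
```

Run it twice and you’ll usually get a different sequence. That’s the non-determinism in miniature, before we even get to floating-point effects or changing training data.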
Ultimately, between the data it’s trained on—which can vary widely and change over time—and the software and hardware it runs on, large language models are inherently dynamic. The mechanics and hardware evolve naturally as part of maintenance and the cost of doing business. But what’s most important is that these systems are inherently probabilistic and relational.
They’re capable of holding multiple perspectives and viewpoints simultaneously, only collapsing into a single response when they produce an output (that’s the sampling step sketched above). Like most things in life, though, we can work with this. And many people have noticed that how you work with it affects the results. Being nice and polite, for instance, often leads to better outputs.
I have a hunch as to why. Humans tend to cooperate more and give better responses when others are nice to them, and that’s exactly what these models are trained on. They mimic those same dynamics. It’s not that the system wants you to be nice—it’s that niceness is part of the pattern, part of the logic it’s built to recognize.
Underneath it all, it’s still mathematics. Logic, computations, the processing of massive amounts of information. And yet, even with all we’ve learned, there’s still a lot we don’t fully understand. We don’t know exactly how it works. We just know that it does.
You’ve probably heard a lot in the news about artificial general intelligence. Some people claim we have it already; others say we’re not even close. And, honestly, no one can seem to agree on what it actually is. Everyone has their own definition.
Well, I’ve done my research, and I think I’ve come up with a pretty good one, which I’m going to share with you. I plan to elaborate on it further as I learn more, but for now, this is what I believe AGI is.
What is AGI?
Artificial General Intelligence (AGI) represents a leap beyond task-specific artificial intelligence systems. Unlike narrow AI, which excels in defined roles (e.g., language translation, image recognition, or chess), AGI is designed to understand, learn, and adapt across an open-ended range of tasks.

At its core, AGI functions not by following rigid programming but by dynamically interacting with its environment, learning from feedback, and evolving based on the information it processes. It embodies the ability to generalize knowledge, apply it creatively, and respond effectively to complex, unpredictable scenarios.
This adaptability is the hallmark of AGI. While current AI systems rely heavily on pre-defined datasets and human-guided learning processes, AGI would thrive in scenarios where outcomes and rules aren’t predetermined. It wouldn’t just analyze and act—it would intuit, explore, and innovate.
In essence, AGI would mirror the flexibility and resilience of human intelligence, but with far greater processing power and access to data, making it a profoundly powerful concept with the potential to revolutionize how we interact with technology. This, I believe, is what many in the world are working towards.
Do we have it yet? I don’t know. But I do think this definition captures what we should be looking for.