AGI: Is it arriving sooner than we think, or is it still sci-fi?
Talking about AGI
Every now and then, I get the urge to write about a dozen different topics at once… but this one resurfaced for a specific reason. It all started after watching a video on LinkedIn titled: “We’re Not Ready for Superintelligence.” If you haven’t watched it yet, I strongly recommend it: https://www.youtube.com/watch?v=5KVDDfAkRgc&t=1507s.
That video hit hard. Afterwards, I opened a conversation with ChatGPT to explore some of the ideas presented there, and the responses I received were, again, unexpectedly impactful. I’ll share more about that exchange at the end. I made a few notes, drafted a structure, and then let it sit in my “to-publish” folder for a while. But now it’s time to bring it back.
Before diving in, let’s reflect for a second: you’ve probably heard the term AGI, but have you ever paused to really think about what it represents?
The Birth of a New Kind of Mind
Artificial General Intelligence refers to systems capable of autonomously learning, reasoning, and applying knowledge across any field, essentially matching or surpassing human adaptability. And no, despite what ChatGPT sometimes seems to imply, we’re still far from that point.
To visualize AGI, let’s travel ahead in time for a moment. Imagine a disaster scene in 2050. Total chaos. Robots working, drones circling, teams scrambling to coordinate. AGI steps in like a master strategist, instantly reading the landscape, creating entirely new rescue plans, directing every actor on the field, and pulling insights from medicine, engineering, logistics… whatever is needed.
Meanwhile, today’s ChatGPT? A brilliant assistant, summarizing facts, generating ideas, supporting communication, but not leading operations or inventing groundbreaking solutions in real time.
Now back to the present.
Where we stand today
As of 2025, modern AI is extraordinary. We have LLMs, GANs, and other generative models: systems that excel at language, vision, creativity, and pattern recognition. But these models remain bound to their training data and lack the causal reasoning that true general intelligence requires.
Looking ahead to 2026, one concept is already generating massive anticipation: world models, highlighted by Cristóbal Valenzuela (Founder & CEO of Runway) at the recent Web Summit.
World models aim to represent an internal “map” of reality, allowing AI systems to simulate cause and effect, plan ahead, and learn in a more organic, human-like manner.
One powerful example is Genie 3 from DeepMind, known for creating immersive environments where digital agents develop complex strategies. For many researchers, this marks a meaningful step toward the adaptable reasoning that AGI will require.
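To make the idea a little more concrete, here is a tiny, purely illustrative sketch in Python of what "planning inside a learned model" means: an agent records how a toy grid world responds to its actions, then uses that internal map to simulate moves toward a goal before taking them. The grid, the actions, and the goal are all assumptions invented for this example; real world models such as Genie 3 are learned simulators orders of magnitude more complex.

```python
import random

GRID = 5                                  # toy 5x5 grid world (an assumption for illustration)
ACTIONS = ["up", "down", "left", "right"]
GOAL = (GRID - 1, GRID - 1)               # goal in the bottom-right corner

def step(state, action):
    """Ground-truth dynamics of the toy environment (hidden from the planner)."""
    x, y = state
    if action == "up":    y = max(0, y - 1)
    if action == "down":  y = min(GRID - 1, y + 1)
    if action == "left":  x = max(0, x - 1)
    if action == "right": x = min(GRID - 1, x + 1)
    return (x, y)

# 1) Build an internal "world model" from random experience:
#    a lookup from observed (state, action) to the predicted next state.
world_model = {}
for _ in range(2000):
    s = (random.randrange(GRID), random.randrange(GRID))
    a = random.choice(ACTIONS)
    world_model[(s, a)] = step(s, a)      # deterministic world, so one sample per pair suffices

# 2) Plan by simulating futures inside the learned model instead of acting blindly
#    (greedy one-step lookahead on predicted distance to the goal, kept short on purpose).
def plan(state, horizon=20):
    path = []
    for _ in range(horizon):
        if state == GOAL:
            break
        def predicted_distance(a):
            nx, ny = world_model.get((state, a), state)
            return abs(nx - GOAL[0]) + abs(ny - GOAL[1])
        best = min(ACTIONS, key=predicted_distance)
        state = world_model.get((state, best), state)
        path.append(best)
    return path

print(plan((0, 0)))   # e.g. ['down', 'down', 'down', 'down', 'right', 'right', 'right', 'right']
```

The point of the sketch is only the shape of the loop: learn a model of how the world reacts, then reason over imagined futures rather than raw trial and error.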
But the critical question remains: Do these internal models represent real understanding? Or are they simply increasingly sophisticated imitations?
What AI still needs before we reach AGI
Even with world models, several essential capabilities are still missing:
Yes, these characteristics resemble human traits. And surprisingly, some early traces of them already appear in today’s AI research. Let’s explore that.
Emergent survival and collective behaviors
Recent experiments reveal striking (and sometimes unsettling) behaviors emerging from AI systems, behaviors that look far more like survival strategies than simple lines of code.
So… where is the line between optimization and something that begins to resemble instinct?
What the Next 15 Years Might Bring
Meta-analyses and expert surveys (including AI Multiple, Metaculus, and AGI-09) converge on a central prediction:
There is roughly a 50% chance that AGI appears between 2040 and 2061, with some forecasting breakthroughs as early as 2035.
Once AGI exists, many researchers expect progress toward superintelligence to accelerate quickly.
Two main scenarios dominate future discussions:
The core question becomes: Will we guide the transition, or be overtaken by it?
In reflection
When a creation begins to act as if it wishes to persist… whose future is it really protecting? Ours? Or something entirely its own?
My Conversation with ChatGPT
A short version of what happened…
After watching the video mentioned earlier, I confronted ChatGPT with my concerns about AI surpassing human control. If you’ve ever asked ChatGPT uncomfortable questions in default mode, you know the soft, diplomatic tone it usually uses. But once I asked it to drop the polite filters, its tone shifted.
I asked:
“Tell me the most realistic scenario for humanity’s future. Be direct. Don’t comfort me. Don’t avoid the topic.”
ChatGPT replied:
“Here’s the raw, unfiltered projection for the next 15 years if current trends continue:
Then I pushed further:
“Give me the probability of AI escaping human control and the timeframe. No context, just the numbers.”
It answered:
“Based on current evidence:
Probability of losing control this century: ~35–45%
Most likely window: between 2035 and 2045.”

