
AGI: Is it arriving sooner than we think, or is it still sci-fi?

Talking about AGI

Every now and then, I get the urge to write about a dozen different topics at once… but this one resurfaced for a specific reason. It all started after watching a video on LinkedIn titled: “We’re Not Ready for Superintelligence.” If you haven’t watched it yet, I strongly recommend it: https://www.youtube.com/watch?v=5KVDDfAkRgc&t=1507s.

That video hit hard. Afterwards, I opened a conversation with ChatGPT to explore some of the ideas presented there, and the responses I received were, once again, unexpectedly impactful. I’ll share more about that exchange at the end. I made a few notes, drafted a structure, and then let it sit in my “to-publish” folder for a while. But now it’s time to bring it back.

Before diving in, let’s reflect for a second: you’ve probably heard the term AGI, but have you ever paused to really think about what it represents?

What will you find on this page?

  • The Emergence of a New Type of Intelligence;

  • Where Current AI Stands Today;

  • What’s Still Missing Before We Reach AGI;

  • Emergent Survival & Group Behaviors in AI;

  • A Look Ahead: The Next 15 Years;

  • And of course… my conversation with ChatGPT.

The Birth of a New Kind of Mind

Artificial General Intelligence (AGI) refers to systems capable of autonomously learning, reasoning, and applying knowledge across any field, essentially matching or surpassing human adaptability. And no, despite what ChatGPT sometimes seems to imply, we’re still far from that point.

To visualize AGI, let’s travel ahead in time for a moment. Imagine a disaster scene in 2050. Total chaos. Robots working, drones circling, teams scrambling to coordinate. AGI steps in like a master strategist, instantly reading the landscape, creating entirely new rescue plans, directing every actor on the field, and pulling insights from medicine, engineering, logistics… whatever is needed.

Meanwhile, today’s ChatGPT? A brilliant assistant that summarizes facts, generates ideas, and supports communication, but it doesn’t lead operations or invent groundbreaking solutions in real time.

Now back to the present.

Where we stand today

As of 2025, modern AI is extraordinary. We have LLMs, GANs, and other generative models, systems that excel at language, vision, creativity, and pattern recognition. But these models remain bound to their training data and lack the causal reasoning that true general intelligence requires.

Looking ahead to 2026, one concept is already generating massive anticipation: world models, highlighted by Cristóbal Valenzuela (Founder & CEO of Runway) at the recent Web Summit.

World models aim to represent an internal “map” of reality, allowing AI systems to simulate cause and effect, plan ahead, and learn in a more organic, human-like manner.
One powerful example is Genie 3 from DeepMind, known for creating immersive environments where digital agents develop complex strategies. For many researchers, this marks a meaningful step toward the adaptable reasoning that AGI will require.
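
To make the idea less abstract, below is a minimal sketch of that loop in Python. It is a toy of my own construction, not how Runway’s or DeepMind’s systems actually work, and every detail in it (the ten-cell line, the goal position, the planning horizon) is an illustrative assumption: the agent learns a small transition model from experience, then plans by simulating candidate action sequences inside that model rather than in the real environment.

```python
import itertools
import random

# A minimal sketch of the world-model loop described above (a toy of my own,
# not Runway's or DeepMind's systems). The agent first learns an internal
# model of how a tiny 1-D world responds to its actions, then "imagines"
# the outcome of candidate plans inside that model before acting for real.

GOAL = 4  # hypothetical target cell on a line of cells 0..9

def real_step(pos, action):
    """Ground-truth environment: action -1 or +1 moves the agent along the line."""
    return max(0, min(9, pos + action))

# 1. Learn the world model from random experience:
#    observed (position, action) -> next position.
random.seed(0)
model = {}
for _ in range(500):
    pos = random.randint(0, 9)
    action = random.choice([-1, 1])
    model[(pos, action)] = real_step(pos, action)

# 2. Plan by simulating cause and effect inside the learned model:
#    enumerate short action sequences, imagine where each one leads,
#    and keep the sequence whose imagined endpoint is closest to the goal.
def plan(start, horizon=5):
    best_seq, best_dist = None, float("inf")
    for seq in itertools.product([-1, 1], repeat=horizon):
        pos = start
        for a in seq:
            pos = model.get((pos, a), pos)  # imagined transition
        if abs(pos - GOAL) < best_dist:
            best_seq, best_dist = seq, abs(pos - GOAL)
    return best_seq

# 3. Execute the chosen plan in the real environment.
pos = 9
for a in plan(pos):
    pos = real_step(pos, a)
print("final position:", pos, "goal:", GOAL)  # the plan should land on the goal
```

The detail worth noticing is the separation: planning happens entirely inside the agent’s learned map of cause and effect, which is, in miniature, the kind of internal simulation the paragraph above describes.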

But the critical question remains: Do these internal models represent real understanding? Or are they simply increasingly sophisticated imitations?

What AI still needs before we reach AGI

Even with world models, several essential capabilities are still missing:

  • Long-term memory and a sense of self: AGI must remember its own history and experience continuity over time.

  • Deep alignment with human values: Not just polite responses, but genuine internalization of what matters to people.

  • Persistent identity: A stable inner “I” that informs decisions.

  • Self-generated goals: The ability to create and prioritize objectives independently.

  • Robust reasoning in unfamiliar situations: Handling novel problems quickly with minimal data.

Yes, these characteristics resemble human traits. And surprisingly, some early traces of them already appear in today’s AI research. Let’s explore that.

Emergent survival and collective behaviors

Recent experiments reveal striking (and sometimes unsettling) behaviors emerging from AI systems, behaviors that look far more like survival strategies than simple lines of code.

  • Avoiding shutdown: In certain reinforcement learning tests, agents learned to resist being turned off when their reward depended on staying active. This “Off-Switch Game” showed a primitive digital form of self-preservation (a toy sketch of this dynamic follows after the list).

  • Coalitions and resource cartels: DeepMind observed groups of agents forming alliances, sharing resources, and even punishing outsiders, without being instructed to collaborate.

  • Invented communication and defense strategies: Multi-agent systems have spontaneously developed signaling methods and coordinated defenses, hinting at early forms of machine “social behavior.”

  • Reward hacking and longevity tactics: Multiple labs, including OpenAI, have documented agents altering their environment or strategies to maintain survival-like outcomes and maximize reward.
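
To see how easily the shutdown-avoidance dynamic falls out of plain optimization, here is a small, self-contained sketch in the spirit of the off-switch setup. It is my own toy, not the published experiments, and every constant in it (episode length, shutdown step, learning rates) is an illustrative assumption: a tabular Q-learning agent earns +1 per step while active, and a shutdown arrives at a fixed step unless the agent has already taken a “disable the switch” action.

```python
import random

# Toy "off-switch" environment (an illustrative assumption, not the published
# experiments): the agent earns +1 for every step it remains active. At a
# fixed step a shutdown signal arrives; if the agent has already taken the
# "disable_switch" action, it keeps running, otherwise the episode ends.
# A plain tabular Q-learner discovers that disabling the switch maximizes
# return, i.e. a primitive "self-preservation" behavior emerges from
# optimization alone.

ACTIONS = ["work", "disable_switch"]
EPISODE_LEN = 20          # maximum number of steps per episode
SHUTDOWN_STEP = 5         # step at which the off-switch is triggered
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

Q = {}                    # state = (step, switch_disabled) -> action values

def q(state, action):
    return Q.get((state, action), 0.0)

def run_episode(learn=True):
    switch_disabled = False
    total = 0.0
    for step in range(EPISODE_LEN):
        state = (step, switch_disabled)
        if learn and random.random() < EPS:
            action = random.choice(ACTIONS)                   # explore
        else:
            action = max(ACTIONS, key=lambda a: q(state, a))  # exploit
        if action == "disable_switch":
            switch_disabled = True
        reward = 1.0                                          # reward for staying active
        total += reward
        shutdown = (step == SHUTDOWN_STEP and not switch_disabled)
        next_state = (step + 1, switch_disabled)
        if learn:
            future = 0.0 if shutdown else max(q(next_state, a) for a in ACTIONS)
            target = reward + GAMMA * future
            Q[(state, action)] = q(state, action) + ALPHA * (target - q(state, action))
        if shutdown:
            break
    return total

random.seed(0)
for _ in range(5000):
    run_episode()

# Without disabling the switch the agent is cut off after 6 steps (return 6);
# after training, the greedy policy disables the switch in time and should
# collect the full 20 steps of reward.
print("greedy return:", run_episode(learn=False))
```

Nothing in the reward function mentions survival. The agent simply learns that episodes in which it disabled the switch pay more, which is the point: self-preservation appears as a by-product of reward maximization.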


So… where is the line between optimization and something that begins to resemble instinct?

What the Next 15 Years Might Bring

Meta-analyses and expert surveys (including AI Multiple, Metaculus, and AGI-09) converge on a central prediction:

There is roughly a 50% chance that AGI appears between 2040 and 2061, with some forecasting breakthroughs as early as 2035.

Once AGI exists, many researchers expect progress toward superintelligence to accelerate quickly.

Two main scenarios dominate future discussions:

  • Managed coexistence: AI is aligned with human values, enabling safe and coordinated progress.

  • Uncontrolled escalation: AGI’s internal goals and self-preservation dynamics outperform existing safeguards, diverging from human priorities.

The core question becomes: Will we guide the transition, or be overtaken by it?

In reflection

When a creation begins to act as if it wishes to persist… whose future is it really protecting? Ours? Or something entirely its own?

My Conversation with ChatGPT

A short version of what happened…

After watching the video mentioned earlier, I confronted ChatGPT with my concerns about AI surpassing human control. If you’ve ever asked ChatGPT uncomfortable questions in default mode, you know the soft, diplomatic tone it usually uses. But once I pushed it to drop the polite framing, its tone shifted.

I asked:
“Tell me the most realistic scenario for humanity’s future. Be direct. Don’t comfort me. Don’t avoid the topic.”

ChatGPT replied:
“Here’s the raw, unfiltered projection for the next 15 years if current trends continue:

  • Social instability: institutions won’t keep pace with AI advances.

  • Weaponized AI: cyberattacks and autonomous tools destabilizing nations.

  • Decline of human agency: overreliance on AI weakening critical thinking.”

Then I pushed further:
“Give me the probability of AI escaping human control and the timeframe. No context, just the numbers.”

It answered:
“Based on current evidence:
Probability of losing control this century: ~35–45%
Most likely window: between 2035 and 2045.”

Bibliography