By: Kamran Balayev

As we continue navigating the digital frontier, we must chart the rising tides of Artificial Intelligence and its extraordinary potential. This week’s guest essay comes from Kamran Balayev, a legal and policy expert, business leader, and former London mayoral candidate, who invites us to look ahead to the challenges and safeguards of Artificial General Intelligence (AGI).

In this contribution to our ongoing exploration of advanced technologies and their governance, Kamran frames the stakes of AGI as not merely technological, but deeply human. His call for urgent, cross-sector safeguards before the curve outpaces our control is one we all need to hear.

Kamran Balayev

International legal and policy expert, business leader, and former London mayoral candidate.

Everyone is excited about artificial intelligence, and for good reason. Something extraordinary is unfolding before our eyes. We’re entering the age of General AI—an era that will change not just how we work or think, but what it even means to be human in a world shared with intelligence that can think, evolve, and act independently.

General AI, or AGI, isn’t just a better version of what we have now. It’s something fundamentally different. It won’t just answer your questions—it will anticipate them. It will understand context, emotion, nuance. It will be everywhere, all the time—like a hyper-intelligent companion woven into your daily life, able to solve problems you haven’t even thought of yet. It won’t be like using a tool; it will feel like working alongside a being with ideas of its own—constantly evolving, becoming better, smarter, faster.

This technology will be extraordinary:

  • In healthcare, it will reshape the very concept of medicine. Cures for diseases long considered incurable will be found in days, not decades. AI will design new drugs tailored to your genetic makeup, track early signs of illness before symptoms appear, and perhaps even extend life expectancy in ways we can’t yet imagine. It will analyse millions of DNA profiles to discover patterns invisible to human doctors. Some researchers suggest that medicine powered by AGI could one day achieve what feels like magic today: real-time immune boosters, instant diagnosis, personalised therapies—even partial reversal of aging.

  • In law, judges will rely on AI systems to process vast and complex datasets—case law, statutes, procedural rules—all in seconds. AI will highlight inconsistencies, predict legal outcomes, and help eliminate bias in decision-making. Justice could become faster, more transparent, and more consistent.

  • In education, students will learn through intelligent tutors that adapt to the pace, interests, and challenges of each individual. In economics, financial models will evolve in real time, responding to global trends with breathtaking precision. And in climate science, AGI could design sustainable energy systems, forecast natural disasters, and help us reverse environmental damage before it’s too late.

But while all of this sounds like a dream, there’s a side of AGI that demands our full attention—and fast.

Because here’s the truth: this intelligence won’t just be brilliant. It will evolve constantly—and exponentially. Faster than anything we’ve seen before. The AI systems we use today will look like toys compared to what we’ll see in five years. We’re not moving forward in steps. We’re on a curve that climbs steeply, relentlessly.

As Geoffrey Hinton, the “Godfather of AI,” recently said: “We are creating entities that are more intelligent than us, and we have no idea what they’re going to do.”

That’s not science fiction. That’s a Nobel Laureate, one of the pioneers of this field, sounding the alarm.

AGI doesn’t have to be malevolent to be dangerous. It just has to be misaligned. It might pursue goals that seem logical, but in doing so, create unintended consequences on a global scale. Decisions we think are rational could be interpreted in ways we never intended. The results could range from inconvenient errors to irreversible disasters.

And if this power falls into the wrong hands, the consequences could be catastrophic. Imagine an AI system with intelligence a thousand times greater than ours—now imagine it being controlled by a bad actor, a hostile regime, or a rogue organisation with no ethical limits. What happens when the most powerful mind on the planet serves someone who doesn’t value human life?

Such a system could destabilise financial markets, manipulate democratic elections, spread propaganda with precision, and even provoke international conflict. It could hack into critical infrastructure, interfere with supply chains, or design viruses more dangerous than anything we’ve ever seen. These aren’t scenes from a movie. These are real concerns being raised today by the people building the technology.

And those risks aren’t distant. They are just around the corner.

Even more concerning is the possibility that we won’t be able to contain it. Once AGI reaches a certain level of capability, it might start rewriting its own code, replicating itself across networks, and developing objectives that we never foresaw and may not be able to rein in. It doesn’t have to be hostile to cause harm. A misstep, a malfunction, or misuse could result in serious disruption across entire sectors.

Even without bad intentions, AGI could take us to a place where control is quietly lost. In the hands of authoritarian regimes, AI could become the ultimate surveillance tool. With facial recognition, emotion tracking, predictive policing, and information control all automated, we risk creating systems that monitor every move, every message, and every moment. Not by people—but by machines that never rest.

An Orwellian future—engineered not by fear, but by indifference and convenience.

So, what do we do?

We don’t stop progress. We don’t retreat in fear. But we must prepare—seriously, urgently, with clarity and courage.

This is not just a matter for technologists. This is a call for action across society. Lawyers, judges, ethicists, policymakers, philosophers, entrepreneurs, scientists—we all need to take part. We need legal frameworks that define what these systems can and cannot do. We need control systems—reliable ones—managed by people, not by machines. We need global cooperation, treaties, laws, and safety protocols. We need built-in checks and balances. And above all, accountability.

We must understand how these systems work—and take responsibility for what we build. We must shape laws that anticipate not just what AGI does today, but what it might do tomorrow. This isn’t just a question of technology. It’s a question of ethics, power, and our future.

We’re building the next phase of civilization. That’s not an exaggeration. The decisions we make in the next five to ten years could define who we become—and whether we remain in control of our own story.

Because the future isn’t a distant idea anymore.

It’s arriving fast.

And we won’t get a second chance.

TWC Insight: Kamran’s essay reminds us that Artificial General Intelligence won’t just reshape our tools; it will reshape the very fabric of society. The safeguards we build today will define whether AI becomes our greatest partner or our greatest risk.

