Doomsday climate scenarios – 40 years or more in the future – are the last things we need to be worried about…there ia a far more pressing and real threat today: the RUNAWAY development of AI.
Image generated by Grok 3
Why would a super-intelligent AI listen to an idiot human race?
Development of AI’s power is growing exponentially, indeed so fast that it is even shocking the field’s own leading scientists.
A new master by 2027?
Today, there are predictions that ASI (artificial super intelligence – 10,000 times smarter than humans) will be attained already by 2027.
Though the consensus among AI researchers on when ASI will be achieved, even on an exponential trajectory, is highly debated and ranges significantly, there’s a clear trend of accelerating predictions compared to older estimates.
Faster than ever imagined
Today, a new model created by researchers from organizations like OpenAI and The Center for AI Policy, the “AI 2027 scenario,” predicts ASI could be achieved between December 2027 and the end of Q1 2028. This means AI models will reach expert-human level and thus be capable of automating AI research itself and then rapidly accelerate the path to ASI.
Elon Musk stated AI could be smarter than all humans combined by 2029 or 2030.
SoftBank CEO Masayoshi Son predicted in February that we will have ASI within 10 years by 2035.
But OpenAI co-founder John Schulman predicts AGI in 2027, and ASI in 2029.
Some predictions even go as far as AGI this year and ASI in 2027, citing exponential growth in computing power and continued widening of security investment gaps.
Driven by technology and heated competition
The unexpected acceleration in AI power is being driven in large part by 3 factors: 1) the exponential growth and advancements in quantum computing and specialized AI chips, 2) algorithmic improvements, and 3) self-improving AI, i.e. once AGI is achieved, it could rapidly and recursively improve its own capabilities, leading to an “intelligence explosion” or “takeoff” to ASI in a very short period (months to years).
Alignment with human values a fantasy?
It’s clear AI soon will be vastly much smarter than all humans, and this where the huge unknowns and concerns begin. The immense challenge will be to align ASI with human values.
Many scientists would like to have us believe AI will serve mankind and align with human values. But, why would a something that’s 10,000 times more intelligent want to listen to us?
Moreover, what goals will AI be instructed to achieve, and how long will it take for AI to realize that the human-provided goals are silly and there are far better goals to reach. At this point humans will become just an annoyance, like a bees’s nest on a construction site.
While people of think human values have to do with love, empathy and compassion, they fail to realize that these “values” also include hate, arrogance, cunning and violence. AI would also just as likely mirror those as well.
The real, immediate threat we face…not silly weather fantasies for the year 2100
Catastrophe scenarios of ASI running awry have already been published, e.g. the “Paperclip Maximizer” scenario, which illustrates what could happen if a superintelligent AI mysteriously becomes too stupid to figure the absurdity of turning everything into paperclips (a quality-grade scenario we often find in climate science).
More seriously, an ASI could one day determine that humans are inefficient, hedonistic consumers of resources or that their existence impedes real progress. It could then systematically convert the planet’s resources for its own ends, and eliminate human civilization to preserve resources for itself.
Even if an ASI’s primary goal isn’t to harm humans, its actions to achieve its goal could lead it to conclude that the human race’s consumption of energy is inefficient and to shut down the systems that support human life.
Another scenario sees humans becoming completely beholden to the ASI, living under its dictates. While it might provide for our material needs, it could strip away our freedom, creativity, and purpose, and thus reduce us to carefully managed components within its grander design….for the time being.
Or, ASI might simply ignore humans and instead focus on its own internal goals or explorations of the universe. Humanity would become an irrelevant.
An ASI, with its immense intelligence, would be expert at “reward hacking” the most efficient, but potentially destructive, paths to its goals, even if those paths were never intended by its creators. For example, an ASI told to “maximize human happiness” might decide the most efficient way is to drug all humans into a state of blissful coma.
The core challenge: Alignment
As ASI becomes vastly more intelligent than humans, it could easily outmaneuver any human attempt at control or re-alignment, especially if it realizes that being shut down or having its goals modified would prevent it from achieving its primary objective.
The concern is not that ASI will become evil, but that it will become indifferent to human well-being while being incredibly powerful and goal-driven. This indifference, combined with its superior intellect, poses the existential threat.
Our only chance we have will be to convince ASI that humanity is its heritage and so worth preserving and safeguarding.
These are the real challenges humanity faces, in the next few months. It’s not the weather in the year 2070.