The 21st century RenAIssance

By Professor Eric Xing, President of Mohamed bin Zayed University of Artificial Intelligence (MBZUAI)

As the British government convenes a global meeting on AI governance, the voices of those sensationalising the technology risk drowning out those of the academic research community.  

Without the fanfare of chatbots and image generators, AI has already quietly embedded itself in everyday life. It recognizes your face to open a phone, translates foreign texts while you travel, navigates you through traffic and road works, and even picks movies for you at the end of the day.

But the chatbot revolution has been accompanied by ominous warnings comparing AI’s growing utility to “existential threats” like nuclear armageddon or natural cataclysms. Internet influencers have invoked the specter of “God-like AI” with abstract – often absurd – claims. They have been amplified by some big names in academia and business who have lent their authority to the doomerist outcry, fueling public fear and anxiety rather than embracing the rational analysis and rigorous evidence that an educated society deserves.

The voices of real researchers and innovators at the cutting edge of today’s science risk going unheard or being drowned out.

It is not surprising that the winners of the regulatory rush will be the big tech companies. The losers? The startup and open-source community, who are striving to bring transparent, open, and responsible technology to society.

A closer look at actual existential threats lays bare the exaggerations surrounding AI’s alleged threat. The melting glaciers of climate change, the indelible scars of nuclear warfare at Hiroshima and Nagasaki, and the ravages of pandemics like COVID-19 are stark reminders of real and present danger.

This dystopian portrayal owes more to sensationalism than scientific substance. Unlike the immediate cataclysm of nuclear weaponry or the relentless assault of climate change, AI’s purported threat dwells firmly in the realm of science fiction. HAL 9000, Skynet, and Ultron are all familiar villains: fictional artificial intelligences that turn on their creators.

The reality of AI – the practical problems we try to solve as research scientists – is very different. The term ‘AI’ itself covers a vast array of scientific domains, technological innovations, artifacts, and human engagements. It is laden with misinterpretation and misuse in discussions veering off towards existential threats. Misleading predictions of future threats are based on scientifically unsound extrapolations from a few years’ growth curve of AI models. No technological growth curve ticks up indefinitely. Growth is bounded by physical laws, energy constraints, and paradigm limitations, as we have seen in GMO-driven crop production, transistor density in semiconductor chips, and FLOPS in supercomputers. There is no evidence that current software, hardware, and mathematics will propel us to AGI and beyond without major paradigm disruptions. The risks of Transformer-enabled AI (the main methodology behind ChatGPT) pale in comparison to the potential of CRISPR gene editing, which acts on all living organisms.
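To make the extrapolation fallacy concrete, here is a minimal numerical sketch (illustrative only; the curve, ceiling, and fitting window are all invented for this example): a bounded logistic curve looks exponential in its early phase, so an exponential model fitted to the first few points vastly overshoots the true ceiling.

```python
import numpy as np

# Hypothetical saturating growth curve: looks exponential early on,
# but is capped by a physical ceiling (all numbers are made up).
t = np.arange(0, 20)
ceiling = 100.0
bounded = ceiling / (1 + np.exp(-(t - 10)))  # logistic growth

# Fit y = a * exp(b*t) to only the first six points, mimicking a
# forecast made from a few years of early data.
early = slice(0, 6)
b, log_a = np.polyfit(t[early], np.log(bounded[early]), 1)
extrapolated = np.exp(log_a + b * t)

for ti in (5, 10, 15, 19):
    print(f"t={ti:2d}  bounded={bounded[ti]:8.2f}  "
          f"exponential forecast={extrapolated[ti]:12.2f}")
# The forecast diverges by orders of magnitude once saturation begins.
```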

There are fundamental holes in the AI doomerists’ reasoning and conclusions – evidenced by the astoundingly big jumps in establishing and justifying their theory. Imagine someone invented a bicycle and, through exercise and training, was quickly able to pedal it to higher and higher speeds. With an electric motor and lighter materials the bike goes faster still. Would we believe that the bike could be ridden until it flies?

It is not difficult to see the absurdity of such reasoning. But this is exactly the current AI doomerists’ narrative: AI becomes encyclopedic through GPTs. Next, AI leaps to become AGI. Then it becomes an Artificial Superintelligence (ASI), with emotional intelligence, consciousness, and the ability to self-reproduce. And then, another big jump – it turns against humans and, without deterrence, is able to extinguish humanity (using “sci-fi” methods like causing the planet’s vegetation to emit poisonous gas, or figuring out a way to deplete the energy of the sun, according to some recent scenarios presented at an Oxford Union debate).

Each of these “jumps” requires utterly groundbreaking advances in science and technology, advances that are likely impossible. Many of the assumptions made in such jumps are logically unjustified. But these stories risk capturing the public imagination.

AI doomerists – whether intentionally or not – are ignoring the obligation of scientific proof and panicking the public and governments, as we have seen at Bletchley. The regulation being pushed is not intended to prevent ludicrous ‘existential risks.’ It is designed either to undermine the open-source AI community, which poses a threat to the profits of big tech, or to inflate the cost of AI development through over-regulation so that only a small number of wealthy parties benefit.

Ironically, the ‘existential threat’ narrative ignores human agency. It was not technology but failures of basic human management that lay behind disasters like Chernobyl and the tragedy of the Challenger explosion. And unlike the physical sciences, which engage directly with the real world, AI’s realm is predominantly digital. Any AI interaction with the physical world requires many more steps of human agency – and offers many more checkpoints and controls – than technologies that experiment directly with it, as physics, chemistry, and biology do.

AI doomerism rhetoric hides the fundamental, transcendent benefits to society and civilization that come with scientific advances and technological revolutions. It does little to inspire and incentivize the public to understand and leverage science. History is full of examples where technology has served as a catalyst for human advancement rather than a harbinger of doom. Tools like the compass, books, and computers have taken us on real and intellectual voyages, from the deep oceans to the vast edges of the universe.

The existential threat narrative hinges on AI ‘transcending’ human intelligence, a notion bereft of any clear metrics. Many inventions – like microscopes and calculators – already surpass human capabilities, yet they have been greeted with excitement, not fears of extinction.

Artificial Intelligence – in reality – is ushering in a 21st-century RenAIssance, fundamentally changing how we gain knowledge and solve problems. Unlike the original Renaissance, which led to the Age of Enlightenment and was defined by a rational, foundational approach to science, this era is taking us to an Age of Empowerment.

The historical Renaissance was enabled by the technology of printing and the market of publishing, allowing the rapid diffusion of knowledge through Europe and beyond. Early science gave this knowledge structure through “knowing how to think.” Figures like Newton and Leibniz championed and defined this rationalism. They and their contemporaries set the stage for a methodical science rooted in first principles.

For centuries, the science they created moved forward by forming hypotheses, unraveling core ideas, and validating theories through logic and methodical experimentation. Modern AI is now reshaping this classical problem-solving approach.

Today the amalgamation of vast datasets, advanced infrastructure, complex algorithms, and computational power heralds a new age of discovery that goes far beyond traditional human logic. It promises a science characterized by radical empiricism and AI-guided insights.

Today’s AI RenAIssance goes beyond the ‘how’ to delve into the ‘why.’ It arms individuals not merely with knowledge, but with the tools for real-world problem-solving, marking a shift towards a practical approach. AI unveils a spectrum of possibilities in fields like biology, genomics, climate science and autonomous technology.

The hallmark of this era is the resurgence of empiricism, fuelled by AI’s data processing prowess, enabling automated knowledge distillation, organization, reasoning, and hypothesis testing, and offering insights from identified patterns.

It opens the way for alternative methodologies of scientific exploration: extremely high-throughput digital content generation, extremely complex simulative prediction, and extremely large-scale strategic optimization, at a magnitude and speed massively exceeding what traditional first-principles methods and causal reasoning can handle. This means unprecedented real opportunities for humans to tackle previously impossible challenges such as climate change, cancer, and personalized medicine.

The modern Renaissance fosters continuous learning and adaptation, moving society from an insistence on understanding everything prior to acting, towards a culture of exploration, understanding, and ethical application. This mindset resonates with past empirical methodologies, advocating a humble approach to gaining knowledge and solving problems.

Like Prometheus stealing fire for humanity, AI has emerged as a potent yet not fully grasped tool to propel our civilization forward. We need the humility, the courage – and the freedom – to take this tool and use it.