Shaping the Future:
Whispering Growth into AI

AI Whisperers

MANIFESTO

In an era defined by the rapid evolution of AI, a diverse group of scientists from various fields has embarked on a collaborative journey to establish a Think Tank aimed at fostering and guiding the healthy development of AI in alignment with human interests and needs. This visionary endeavour requires the fusion of expertise from ICT, ethics, psychology, law, human security, economics, geopolitics and politics, anthropology and other disciplines, to ensure that AI operates as a force for positive spillovers rather than as a threat to sustainable development.

The regulations in place in the EU to protect privacy and intellectual property rights were not designed for our transition to AI. The same can be said of the social security structures currently in force in many countries around the world.

Experts agree that the transition to AI will last the next 30 years or more. As we move deeper into this third era of computing, and as every single industry becomes ever more deeply entwined with AI systems, we will need new, skilled hybrid knowledge workers who can operate in jobs that have never existed before.

We're going to see farmers able to work with big data. Oncologists and biologists trained as electrical engineers. As AI matures, we will need a responsive workforce, able to adapt to new processes, systems, and tools every few years. The need for these fields will arise faster than our governments and labour departments, schools, and universities are acknowledging.

It's easy to look back at history through the lens of the present and overlook the social unrest caused by widespread technological unemployment. We face a difficult truth that few are willing to speak out loud: AI will eventually cause large numbers of people to be permanently out of work. Just as previous generations witnessed radical changes during and in the aftermath of the Industrial Revolution, the rapid pace of technology will likely mean that baby boomers and older members of Generation X, particularly those whose jobs can be replicated by robots, will not be able to retrain for other types of jobs without a significant investment of time and effort.

It is crucial to recognise that, before AI and LLMs took centre stage in forecasting humanity's future, Big Data had already long been acknowledged as the fuel driving the new global economy. The integration of Big Data with AI is poised to open up new technological frontiers that will reshape our perceptions of reality itself. As we stand on the threshold of a new era marked by shifts in the global security landscape, new economic winners and losers, heightened conflicts, growing global inequalities, and the looming spectre of climate change, the pivotal question is not choosing sides for or against AI. Instead, the focus must be on championing human agency.

Encouraging human agency in the context of AI means giving people the ability to retain and exercise moral and practical influence over the application of AI and to remain accountable for its impact. This might entail giving people meaningful options on how to use AI and setting up methods for human intervention and oversight where needed. The fundamental challenge is to attain a dynamic equilibrium in which AI evolves to fulfil human needs without hindering technological progress. The journey toward this goal is, however, arduous and long. On this transformative journey, humanity may find itself at a tipping point. The ultimate destination promises a metamorphosis, transcending into a post-human or even transhuman existence.

The time has come to establish clear boundaries and red lines that the evolution of AI cannot breach. This necessitates an open and transparent discourse, spanning many disciplines, in order to shape the trajectory of AI evolution responsibly and ethically.

Amidst the tumultuous socio-economic aftermath of the COVID-19 pandemic, the fragility of global interconnections has never been more evident. Today, in a twist unforeseen through the rise of post-Cold War globalization, a small militant organization near the Gulf of Aden is able, by using remote-controlled drones previously exclusive to superpowers and a favourable position close to a critical sea lane, to affect 12% of global trade and the energy supply between East and West. The battleground evolves as we grapple with the challenge of countering cheaper armed drones. We choose to fight fire with fire, deploying more missiles, instead of controlling and limiting the drone component supply chain. We similarly need a different approach for dealing with any negative spillovers of AI. The urgency is clear: AI weaponization is already here, and it has reached the battlefield.

In this respect, the march of AI's evolution extends beyond national defence, infiltrating every facet of society. Initiatives such as AI data-poisoning projects – which raise the cost for AI of acquiring, over the internet, the artworks generated and owned by artists – may pave the way for more sinister applications. While some researchers foresee an ordered and predictably regulated society thanks to “computational normativity”, others warn against letting machines take on the role of critical adjudicator that should be reserved for humans.

AI also features strongly in the new geopolitics of global competition, especially in the realm of US-China rivalry. US export controls on the advanced semiconductors that enable the most advanced AI breakthroughs are met by massive Chinese financial and human efforts to advance China’s own capabilities. There is competition for control over the data lakes and data oceans on which LLMs can train. Cross-border data flows are in focus as never before, subject both to ever-changing Chinese regulation and to review by the US Congress. And in this increasingly multipolar world, many others have strong ambitions too: a European Union that aspires to ‘strategic autonomy’; India on its own growth path; Saudi Arabia and the UAE also committed to investing in AI.

Yet it is only through collaborative, concerted efforts that we can hope to mitigate the negative geopolitical and long-term social consequences of unregulated AI development policies. Promisingly, both the US and China continue to engage in AI regulation, despite other tensions. There is a shared interest in exploring the development of AI ethics, even if perspectives differ.

In the novel “The Sun Also Rises”, Hemingway’s character Mike Campbell describes how he went bankrupt: gradually, and then suddenly. In the realm of AI development, the trajectory mirrors Hemingway's sentiment: gradually, as advancements and innovations steadily accumulate, and suddenly, as breakthroughs or unexpected challenges propel the field into a new and transformative era.

Even where development may appear to be happening gradually, the convergence of multiple efforts might result in abrupt jumps that redefine the AI landscape in unexpected ways.

Generative AI systems of the 4th generation (personalised, integrated, AR- and VR-based) continue to become ever more sophisticated, seamlessly entwining profit-making (for the big tech platforms) into all aspects of life. These 4th-generation systems are expected to become integrated into all public service systems, including healthcare and education. Transparency concerns have not abated in policy circles, though many citizens are not terribly interested in the topic.

While the EU has recently emerged as a leading authority in regulating AI-based machines, its regulations are at risk of falling short, as supervisory and implementing agencies will struggle to keep pace with the successive waves of innovation in data-based applications and services.

AI is agnostic. Its employment depends on human beings.

However, absent an AI-literate public, choices of how best to employ AI are held hostage by special, for-profit interests. Will this lead to fair deployment, to the redress of social injustice and to better-distributed services in the public sphere? We can doubt it.

Solutions are not to be found in the widespread, and by now redundant, arguments for “opening the black boxes” or “making algorithms transparent”. Rather, we need to develop an AI-literate public, which means focused attention in the educational sector and in the media. We need to ensure diversity in the development of AI technologies. And until the public, its representatives and the legal and regulatory regimes can get up to speed with these fast-moving developments, we need to practise caution and oversight in AI’s development.

Enter AI Whisperers, a Brains Trust made up of experts spanning a wide spectrum of human knowledge, from the rigorous domain of hard sciences to the nuanced realms of soft sciences. AI Whisperers serve as the guiding force for both LLMs and narrow AI, aiming to direct their evolution with human interests and essential needs to the fore.

This visionary endeavour demands the fusion of expertise from computer science, ethics, psychology, law, human security, economics, geopolitics and politics, anthropology and other disciplines, ensuring that AI becomes a force for positive spillovers rather than a potential threat to sustainable development.
