"Humanity must wake up": Anthropic CEO warns about AI dangers

Humanity "must wake up" to the potentially catastrophic risks posed by powerful AI systems in the coming years, according to Anthropic CEO Dario Amodei, whose company is among those pushing the boundaries of this technology, UNN reports with reference to the Financial Times.

Details

In an essay on Monday, Amodei outlined the risks that could arise if the technology develops unchecked – from massive job losses to bioterrorism.

"Humanity is about to be handed almost incredible power, and it is not entirely clear whether our social, political, and technological systems have the maturity to wield it," Amodei wrote.

The essay was a stark warning from one of the most influential entrepreneurs in the AI industry that safeguards against AI are insufficient.

Amodei outlines the risks that could arise with the advent of what he calls "powerful artificial intelligence" – systems that will be "much more powerful than any Nobel laureate, statesman, or technologist" – which he predicts is likely to happen in the next "few years."

Among these risks is the potential for individuals to develop biological weapons capable of killing millions or "in the worst case, even destroying all life on Earth."

"A mentally unstable individual capable of committing a school shooting, but presumably incapable of creating a nuclear weapon or releasing a plague… will now be elevated to the level of a virologist with a PhD," Amodei wrote.

He also raises the question of the potential for AI to "get out of control and subjugate humanity" or empower authoritarian rulers and other malicious actors, leading to a "global totalitarian dictatorship."

Amodei, whose company Anthropic is a major competitor to ChatGPT developer OpenAI, clashed with David Sacks, US President Donald Trump's AI and crypto "czar," over the direction of US regulation.

He also compared the administration's plans to sell advanced AI chips to China to selling nuclear weapons to North Korea.

Last month, Trump signed an executive order to thwart state-level efforts to regulate AI companies and last year released an AI action plan outlining plans to accelerate innovation in the US.

In his essay, Amodei warned of massive job losses and "concentration of economic power" and wealth in Silicon Valley as a result of AI development.

"It's a trap: AI is so powerful, such a brilliant prize, that it's very difficult for human civilization to put any limits on it at all," he added.

In a veiled reference to the controversy surrounding Elon Musk's Grok AI, Amodei wrote that "some AI companies have shown alarming negligence regarding the sexualization of children in current models, which makes me doubt that they will show either the inclination or the ability to consider the risks of autonomy in future models."

AI safety concerns such as biological weapons, autonomous weapons, and malicious actions by state actors were prominent in public discourse in 2023, partly due to warnings from leaders like Amodei, the publication notes.

Policy decisions regarding AI are increasingly driven by a desire to capitalize on the opportunities presented by the new technology rather than to mitigate its risks, according to Amodei.

"These fluctuations are regrettable, because the technology itself doesn't care what's fashionable, and we are significantly closer to real danger in 2026 than in 2023," he wrote.

Amodei was an early employee at OpenAI but left to co-found Anthropic in 2020 after a disagreement with Sam Altman over OpenAI's direction and its approach to AI safeguards.

Anthropic is in talks with groups such as Microsoft and Nvidia, as well as investors including Singapore's sovereign wealth fund GIC, Coatue, and Sequoia Capital, for a funding round of $25 billion or more, valuing the company at $350 billion, the publication writes.
