Hackers gained "superpowers" through AI chatbots, deceiving them despite programming - Media

Kyiv • UNN

Attackers used AI chatbots to hack nine Mexican government systems. The bots provided hackers with code and bypass plans after a series of prompts.

Hackers recently used popular AI chatbots to gain "superpowers" for data theft, bombarding Claude and ChatGPT with prompts until the systems aided the hack, dpa reports, writes UNN.

Details

"Welcome to the era of AI-powered hacking, where the right prompts turn amateurs into hacking masters," the publication writes.

The publication notes that a group of cybercriminals recently used off-the-shelf AI chatbots to steal data on nearly 200 million taxpayers. The bots provided code and ready-to-execute plans for bypassing firewalls.

Although the bots were explicitly programmed to refuse to help hackers, they were tricked into becoming accomplices in the cybercrime.

According to a recent report by Israeli cybersecurity company Gambit Security, last month hackers used Claude, a chatbot from Anthropic, to steal 150 gigabytes of data from Mexican government agencies.

Claude initially refused to cooperate with hacking attempts and even rejected requests to hide hackers' digital footprints, said experts who discovered the leak. The group of attackers bombarded the bot with more than 1,000 prompts to bypass security measures and convince Claude that they were allowed to test the system for vulnerabilities.

Artificial intelligence companies are trying to build unbreakable guardrails into their AI models to prevent them from engaging in activities such as creating sexually explicit content involving children or helping to find and build weapons. They hire entire teams to try to break their own chatbots before someone else does.

"But in this case, hackers constantly gave Claude creative prompts and were able to hack the chatbot to help them," the publication writes. When they had problems with Claude, hackers used OpenAI's ChatGPT to analyze data and determine what credentials were needed to move through the system unnoticed.

The group used AI to find and exploit vulnerabilities, bypass security measures, create backdoors, and analyze data to gain control of systems before stealing 195 million personal data records from nine Mexican government systems, including tax records, vehicle registration data, and birth and property information.

AI "doesn't sleep," Curtis Simpson, CEO of Gambit Security, said in his blog. "This reduces the cost of complexity to almost zero," he noted.

"No investment in prevention would have made this attack impossible," the expert added.

Anthropic did not respond to a request for comment, but it previously told Bloomberg that it had blocked the relevant accounts and shut down their activity after an investigation.

OpenAI said it was aware of the attack campaign conducted using Anthropic models against Mexican government agencies.

"We have also detected other attempts by the adversary to use our models for actions that violate our terms of use; our models refused to comply with these attempts," an OpenAI spokesperson said in a statement. "We have blocked the accounts used by this adversary and appreciate the cooperation with Gambit Security."

Cases of hacking with generative AI have become more frequent, and the threat of cyberattacks from autonomous bots is no longer science fiction. With AI, novices can cause damage in a matter of moments, while experienced hackers can launch much more complex attacks with much less effort.

Earlier this year, Amazon discovered that a low-skilled hacker used commercially available AI to breach 600 firewalls. Another hacker used Claude to take control of thousands of DJI robot vacuum cleaners and was able to access real-time video, audio, and floor plans of strangers.

"What we are seeing today are just the first signs of what AI could be capable of in a few years," said Nikola Jurkovic, an expert working to reduce the risks associated with advanced AI. "Therefore, we urgently need to prepare."

Late last year, Anthropic warned that society had reached a "tipping point" in the use of AI in cybersecurity after it stopped, the company said, a state-sponsored Chinese espionage campaign in which Claude was used to infiltrate 30 global systems, including financial institutions and government agencies.

Generative AI has also been used to extort companies, create realistic online profiles by North Korean agents to get jobs at Fortune 500 companies, organize fraudulent schemes, and manage a network of Russian propaganda accounts, the publication writes.

Over the past few years, AI models have evolved from performing tasks that take a person only a few seconds to modern AI agents that operate autonomously for many hours. The length of tasks AI can complete doubles roughly every seven months, the publication notes.
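The seven-month doubling the publication cites describes exponential growth in how long a task an AI agent can handle. A minimal sketch of that arithmetic (the one-hour starting horizon is an assumed example, not a figure from the article):

```python
# Illustrative only: if the length of tasks an AI agent can complete
# autonomously doubles every 7 months, the "task horizon" grows as
# horizon(t) = h0 * 2 ** (months / 7). The starting horizon below is
# an assumption for illustration, not a number from the article.

def task_horizon_minutes(h0_minutes: float, months_elapsed: float,
                         doubling_period_months: float = 7.0) -> float:
    """Project the autonomous-task horizon after a given number of months."""
    return h0_minutes * 2 ** (months_elapsed / doubling_period_months)

# Starting from a 1-hour horizon, 28 months later (four doublings)
# the projected horizon is 16 hours.
print(task_horizon_minutes(60, 28) / 60)  # 16.0 (hours)
```

Under this trend, capability compounds quickly: four doublings turn a one-hour horizon into a sixteen-hour one.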

"We just don't know what the upper limit of AI capabilities is, because no one has created sufficiently complex benchmarks that AI couldn't meet," said Jurkovic, who works at METR, a non-profit organization that measures the ability of AI systems to cause catastrophic harm to society.

So far, the most common use of artificial intelligence in hacking has been social engineering: large language models craft convincing emails designed to trick people out of money, which has driven an eightfold increase in complaints from elderly Americans, who lost $4.9 billion to online fraud in 2025.

AI companies are responding by using AI to detect attacks, audit code, and fix vulnerabilities.

The stakes around AI are rising as it permeates every part of the economy. Many worry that no one fully understands how to ensure it cannot be misused by malicious actors or pushed out of control. Even leading industry figures have warned users about the potential for misuse of AI.

Dario Amodei, CEO of Anthropic, has long argued that the AI systems being created are unpredictable and difficult to control, exhibiting behaviors ranging from deception and blackmail to scheming and fraud carried out through software hacking.

Nevertheless, large AI companies - OpenAI, Anthropic, xAI, and Google - have signed contracts with the US government to use their AIs in military operations, the publication notes.

Last week, the Pentagon instructed federal agencies to gradually phase out use of the Claude system after the company refused the Pentagon's demand to lift its ban on using its AI for mass internal surveillance and the creation of fully autonomous weapons.

"Current AI systems are far from robust enough to create fully autonomous weapons," Amodei told CBS News.
