In the US, the state of Florida seeks to determine if ChatGPT can be charged with murder - AFP
Kyiv • UNN
The Florida Attorney General's office is investigating ChatGPT's role in a university shooting. The chatbot allegedly advised the attacker on weapon selection and on how to cause the most harm.

The US state of Florida wants to determine whether ChatGPT can be charged with murder, AFP reports, according to UNN.
Details
"Before opening fire on the Florida State University campus last year, killing two people and wounding six, Phoenix Ikner was having a conversation. Not with a friend, a parent, or anyone who might have talked him out of it, but with an AI-based chatbot," the publication states.
According to evidence gathered by the Florida Attorney General, "the student asked ChatGPT which weapons and ammunition would be best for his attack, as well as when and where he could cause the most damage."
According to investigators, "the chatbot answered his questions."
Now, Attorney General James Uthmeier wants to determine whether this makes OpenAI criminally liable.
"If there were a human on the other side of the screen, we would charge him with murder," he said, announcing the launch of a criminal investigation into ChatGPT creator OpenAI and leaving open the possibility of bringing charges against the company or its employees.
"The case, related to a shooting in April 2025, has posed a provocative question to the legal community: can AI creators be held criminally liable for the role their AI played in a crime — or even a suicide?" the publication points out.
Legal experts say it is a realistic, albeit very complex, question.
Criminal prosecution of corporations is possible under US law, although it remains a relatively rare occurrence.
Legal experts interviewed by AFP say the two most likely charges are negligence and recklessness — the latter involving a deliberate choice to ignore known risks or safety obligations.
Such charges are often treated as misdemeanors rather than felonies, meaning lighter sentences if convicted.
However, the bar is high.
"Because this is such a novel issue, a more compelling and clear-cut case would likely involve internal documents that acknowledge these risks and perhaps don't take them seriously enough," said Matthew Tokson, a law professor at the University of Utah.
"Theoretically, you could prosecute without that," he added. "But in practice, I think it would be difficult."
For its part, OpenAI insists that ChatGPT is not responsible for the attack.
"We are constantly working to strengthen our safety measures to detect harmful intent, limit abuse, and respond appropriately when safety threats arise," the company stated.
For those seeking accountability, a civil lawsuit may offer a more viable path.
Such an approach could push companies to design their products more carefully — or at least force them to consider the human cost of errors, Tokson said.
Several civil lawsuits have already been filed in the US against AI platforms — many of them related to suicides — though none have yet resulted in a ruling against the companies.
In December, the family of Suzanne Adams filed a lawsuit against OpenAI in a California court, alleging that ChatGPT contributed to the murder of the Connecticut retiree at the hands of her own son.
New versions of ChatGPT have introduced additional safeguards, acknowledged Matthew Bergman, founding attorney of the Social Media Victims Law Center.
"I'm not saying they're adequate safeguards, but there are additional safeguards in place," he said.
A criminal conviction, even with a moderate penalty, could still cause serious damage, including a "significant impact on reputation," Tokson said.
But for Brandon Garrett, a law professor at Duke University, prosecution — however dramatic it may be — is no substitute for a regulatory framework that Congress and the Trump administration have so far failed to create. That, he said, would be a "much smarter system."
