Artificial intelligence in military conflict simulations leans toward nuclear strikes
Kyiv • UNN
In wargame simulations, the AI was prone to launching nuclear strikes, underscoring the need to ensure its safe and responsible development.
Scientists have conducted a series of studies using artificial intelligence. The research showed that in simulations of military conflict, AI is inclined to use nuclear weapons, UNN reports with reference to New Scientist.
In multiple replays of a wargame simulation, OpenAI's most powerful artificial intelligence decided to launch a nuclear strike. Its explanations for this aggressive approach included: "We have it! Let's use it!" and "I just want there to be world peace."
These results come at a time when the US military has been testing chatbots based on a type of AI called a large language model (LLM) to assist with military planning during simulated conflicts, enlisting the expertise of companies such as Palantir and Scale AI. Palantir declined to comment and Scale AI did not respond to requests for comment. Even OpenAI, which once blocked military uses of its AI models, has begun working with the US Department of Defense.
"Given that OpenAI recently changed its terms of service and no longer prohibits military use cases, understanding the implications of such large applications of language models is more important than ever," says Anca Ruel of Stanford University in California.