Artificial intelligence in weapons is already a working battlefield tool: it speeds up data processing, controls drones, and can even execute missions without human involvement. At the same time, the risks grow alongside the advantages, from flawed model training to deliberate enemy countermeasures. Ruslan Prylypko, head of the C2IS (Command and Control Information Systems) department of the Aerorozvidka NGO, told UNN why the topic of using AI in weapons has become relevant and what risks it carries.
Artificial intelligence in the military sphere. Why has the topic of using AI become relevant?
According to Prylypko, several factors explain why the topic of using AI has become relevant.
The first, most obvious, is that it has become possible. Technologies and computing power suited to this kind of task are now available across a fairly wide range of applications. These are no longer unique supercomputers that only large companies or states possess; today such computations can be run on accessible hardware. Because of this, the technologies have become practical to integrate into miltech. Following the general trend of technologies migrating from the business environment, where they mature first, into the military sphere, it has become possible to create solutions that are now in great demand.
The second factor, according to the expert, is the general interest in artificial intelligence.
"We see that now it appears almost everywhere - in all possible products. This also pushes for finding its applications in military technologies. But the main thing is that in those areas where artificial intelligence is really effective today, significant results can be achieved. This refers to the recognition of various types of objects, processing large amounts of information, working with language models, generative AI. Thus, the combination of technological potential, the maturity of solutions, and real needs in the field of big data processing or routine operations has made the introduction of AI on the battlefield quite natural," adds Prylypko.
If there had been no full-scale invasion, would the development of artificial intelligence in the military sphere have proceeded in the same way?
"We don't know for sure, but there are two factors that have changed the situation. Firstly, this war is significantly different in terms of the speed and volume of information that needs to be processed. For example, in the field of equipment recognition, so many videos have appeared that a person physically cannot watch them all. There is a lack of people and time: while you are watching all the flights, the information is already losing its relevance. Therefore, it is important to be able to 'compress' time, and AI provides this," notes Prylypko.
According to him, the second factor is that artificial intelligence has become more powerful in principle. And since the attention of the whole world is focused on this conflict, technologies are naturally being integrated into the military sphere, the expert emphasizes.
"Not only. There is what is already working, and what is still being developed. For example, drone swarms are a technology that is being developed. And it allows one operator to control not one drone, but a group. To coordinate such a 'fleet' - ground or air - artificial intelligence is needed. So far, there are few effective solutions in this area, but work is underway," said Prylypko.
He gives another example: the use of AI for navigation, where the system can plot a route without GPS or radio signals.
"The neural network 'sees' the map, compares it with the real image of the terrain, and can plot a path even without external signals. Such solutions are already working, although not everyone realizes it. Another advantage is the ability to perform missions without human involvement. This reduces risks for personnel. Automation, robotization, autonomy - all this reduces human losses. In addition, AI allows processing huge amounts of data and quickly generating analytics: identifying patterns, analyzing obstacles, determining risks. This reduces the time for decision-making. If we talk specifically about drones, this is navigation, re-targeting, telemetry analysis. For example, you can determine where there were radio interferences, and where there were effective routes. This makes weapons and their tactics more effective," says the expert.
Prylypko gives several examples of the risks of using AI in drones and EW (electronic warfare), including:
- the first risk is incorrect model training: errors in the data lead to erroneous results, so the system may recognize an object incorrectly or fail to recognize it at all (the sketch after this list shows the effect in miniature);
- the second risk is enemy actions: if the enemy understands the principles of the system's operation, they can deliberately distort its work by camouflaging equipment, changing its shape, or adding elements that confuse the algorithm.
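The first risk is easy to demonstrate in miniature. The sketch below, on purely synthetic scikit-learn data with nothing military about it, flips a growing fraction of training labels and measures how test accuracy degrades:

```python
# Sketch: how label noise in training data degrades a classifier.
# Synthetic data only; illustrates the first risk listed above.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
for noise in (0.0, 0.1, 0.3):
    # Flip a fraction of the training labels to simulate annotation errors.
    y_noisy = y_train.copy()
    flip = rng.random(len(y_noisy)) < noise
    y_noisy[flip] = 1 - y_noisy[flip]
    model = RandomForestClassifier(random_state=0).fit(X_train, y_noisy)
    print(f"label noise {noise:.0%}: test accuracy {model.score(X_test, y_test):.3f}")
```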
"The problem is that trust in artificial intelligence forms quickly. When it works, people begin to perceive it as an infallible tool and lose their critical distance. This can lead to serious consequences when the system makes mistakes and people do not notice. Another risk is dependence on the technology: users begin to rely on its conclusions instead of checking them, and when the algorithm errs, the consequences can be unpredictable. There is also the risk of losing the models themselves. If systems are not properly protected, they can be intercepted, disassembled, copied, and used against us," emphasizes the expert.
"This is a complex question. If you take, for example, a homing system - it already partially acts without human involvement. After launch, the missile calculates the route itself and hits the target. In this sense, autonomous weapons already exist. The only question is whether a person should remain 'in the loop' - between target detection and the decision to launch. There are systems where a person only presses the 'Launch' button, and there are those where they can intervene if the algorithm acts incorrectly," notes the head of the C2IS department.
However, according to him, during the war, these issues are not acute. As experience shows, when it comes to survival, the main thing is efficiency. The ethical discussion about the autonomy of weapons will probably return after the victory.
"If we talk about modern application in the Ukrainian army, first of all, it is target recognition in video and video streams. This allows reducing data processing time, obtaining automated reports, and immediately transmitting information to control systems. The second area is navigation, re-targeting, hitting targets on various platforms. The third is analytics of large data sets, planning, training, technical support. All this affects the speed, accuracy, and quality of decisions," notes Prylypko.
Summing up, the expert emphasizes that the current war is a war of technologies, and artificial intelligence is an integral part of it.
"As in civilian life, if something can be automated and given to a machine - we do it. But AI is just a tool. It will not replace a person, and this technology is still far from ideal. Currently, it is quite 'raw'. The next stage is effective AI agents that will be able to act autonomously. But even then, the question of responsibility will remain.
As with autonomous cars - what happens if the system makes a mistake? This is the same dilemma: technology can simplify life, but it also carries new risks," the expert summarized.