Russia is "poisoning" AI language models with fakes about Ukraine: a new disinformation tactic
Kyiv • UNN
Russians are actively using AI to spread disinformation, "feeding" language models with fakes about Ukraine. AI is also being used to exert psychological pressure on society.

The head of the Center for Countering Disinformation, Andriy Kovalenko, outlined the main trends in the use of AI over the past month. This was reported by the CCD on Telegram, UNN reports.
Details
According to him, the Russians have stepped up their "poisoning" of large language models on the information front. They mass-generate fake websites, news portals, and pseudo-analytics, which are then systematically "fed" to algorithms through SEO optimization. The goal is for ChatGPT, Gemini, Claude, and other chatbots to inadvertently reproduce Kremlin messages about "Nazi Ukraine", "American biological weapons", "Donbas occupied by Ukraine", and so on in their responses. This is a new level of disinformation: not through trolls, but through the models themselves.
Artificial intelligence is also used on a massive scale in cognitive warfare: millions of posts are analyzed to build emotional maps of audiences.
Algorithms identify where society shows anxiety, disappointment, or fatigue, and fake messages are launched at exactly those points. This is no longer just propaganda: it is pinpoint psychological pressure, amplified by neural networks.