Over the past years, the view that Russia cannot possibly compete with the collective West in the arena of emerging technology has provided some comfort for Western policymakers and academics. That comfort has been deepened by the influx of Russian nationals into Serbia, Montenegro, and Turkey, most of them young people with IT backgrounds escaping the military mobilisation under Putin's regime, some of whom have already founded companies in those countries. At the same time, the CEE region as a whole, especially Estonia and Poland, as well as the Western Balkans, has seen a rise in disinformation campaigns, political pressure, and even threats of new armed conflicts.
This is a clear sign that active measures are back, but this time in the era of AI and emerging technology, in a co-dependent and multipolar world. The question is whether AI can be used to create false narratives, produce compromising material on individuals for political gain, or influence elections outright. Using AI in this context appears far cheaper and requires far less training than the human-driven intelligence methods in use today.
A review of existing research points to three vectors of AI development that the Kremlin can use for political purposes. First, faster, cheaper, and easier content creation through machine learning algorithms: deep fakes, already available on social media platforms such as TikTok and in apps like FaceApp, which can be used for targeted smear campaigns. Second, advances in natural language processing, which make the manipulation of human emotions and language easier and can feed the creation of kompromat and fake news. Third, the targeting of specific groups with deep fakes and AI-enabled disinformation, made possible by easy access to social media networks.
We can safely assume that Russia will probably not be the global leader in developing these three vectors. Rather, it will adapt to the growth of the global digital landscape and build its expertise in the use of existing AI outputs, such as facial- and voice-manipulation software, which offer a strategic depth that would not be possible by conventional means. These outputs are not a Kremlin original but a mixture of opportunism in the service of political goals, the accessibility of AI software, and the Kremlin's strategic goal of becoming a major player in the emerging tech arena. Russia merely has to use already existing, publicly accessible digital tools and figure out how to manipulate targeted-advertising algorithms in political contexts, as was widely publicised during the 2016 US presidential elections, where the employment of "useful idiot"-driven narratives was widespread and had far-reaching consequences; those narratives can still be observed in 2023.