Fighting the Behemoth: New Age of AI-fuelled Active Measures

Omar Memišević

From its potential in military forecasting, reconnaissance, and intelligence analysis to its satirical online presence, deep-faked US Presidents playing Minecraft, ChatGPT memes, election-fraud claims, and the like, artificial intelligence, or AI for short, is quickly becoming a challenge in national and international politics, media, and national security in general. When used skillfully, AI-enabled disinformation and subversion become more dynamic, deepening societal divisions and altering political landscapes, and making methods like open-source investigation much more challenging.

Several high-ranking Kremlin officials, including Vladimir Putin himself, have voiced Russia’s inclination to build AI expertise as a way of effectively countering its adversaries in the information and cyber domains, including through the creation and dissemination of kompromat, misinformation, disinformation, and malinformation in countries where Russian influence has grown markedly in recent years. At the 2019 St. Petersburg International Economic Forum, some 21 sessions tackled AI but confined it to its economic and health potential, while the political potential of AI was left to the AI Journey conference in Moscow and Novosibirsk, a division of labour that shows Russia is very much interested in the political uses of AI. The Western think-tank community has further underlined this point.

Silicon Valley companies and AI executives are already making increased calls for a more proactive government approach to regulating the AI sector, as underlined at a hearing before the US Senate Judiciary subcommittee on May 16, 2023, when OpenAI CEO Sam Altman said he supported regulation of the technology: “we think that regulatory intervention by governments will be critical to mitigating the risks of increasingly powerful models”.

Hope for this proactive approach can be seen in the Biden White House executive order leaked to POLITICO on September 28, 2023, which, if signed, would give the government more leeway in AI control and regulation, which in turn would curb Russian influence efforts in the US and set a precedent for Europe to do the same.

Navigating the AI Landscape

In early June 2023, the EU Commission called for AI-generated content to be clearly labelled as not of human origin, which indicates the likely direction of AI policy solutions. Meanwhile, the importance of thinking about the role of AI in political warfare is increasingly recognised by the think-tank community in Central and Eastern Europe, as shown at the 2023 GLOBSEC Forum in Bratislava, where a panel was convened under the title “AI, Algorithms, and the Next Advance: Making Tech Work for Democracy.” One of the key takeaways was that AI is used to spread messages during elections and that malign actors, like Russia or China, have been using machine learning algorithms to do harm.
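To make the labelling idea concrete, here is a minimal sketch of what a machine-readable disclosure for AI-generated content could look like: a record that flags the content as not of human origin and carries an integrity hash so tampering after labelling is detectable. This is purely illustrative and assumes nothing about the Commission’s eventual specification; field names such as "ai_generated" and "generator" are hypothetical.

```python
import json
import hashlib
from datetime import datetime, timezone

def label_ai_content(text: str, generator: str) -> dict:
    """Wrap AI-generated text in a machine-readable disclosure record
    so downstream platforms can flag it as not of human origin."""
    return {
        "content": text,
        "ai_generated": True,    # explicit disclosure flag (hypothetical field)
        "generator": generator,  # which model or tool produced the text
        "created_utc": datetime.now(timezone.utc).isoformat(),
        # the hash lets verifiers detect alteration after labelling
        "sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
    }

def verify_label(record: dict) -> bool:
    """Check the disclosure flag and the integrity hash."""
    digest = hashlib.sha256(record["content"].encode("utf-8")).hexdigest()
    return record.get("ai_generated") is True and digest == record["sha256"]

if __name__ == "__main__":
    record = label_ai_content("Example model output.", generator="example-llm")
    print(json.dumps(record, indent=2))
    print("label intact:", verify_label(record))
```

Real-world provenance schemes work along similar lines but sign the record cryptographically rather than relying on a bare hash.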

As a tool with high potential in the military-industrial complex, AI will probably lead to breakthroughs in both offensive and defensive cyber contexts, as it will allow cyber vulnerabilities to be identified and remedied more quickly. War-gaming and strategising around this are almost certainly going on in Russia, although their extent remains difficult to assess given their sensitivity and the veil of Russian national security around them.
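As a rough illustration of the defensive side, automated tooling already scans code for known-dangerous patterns, and AI-assisted approaches extend this kind of triage. The sketch below is a deliberately simple, non-AI pattern scanner for C source files, included only to make the idea of automated vulnerability identification tangible; the ruleset is an assumption, not an authoritative list.

```python
import re
import sys
from pathlib import Path

# Classic C functions that frequently appear in memory-safety bugs;
# an assumed, illustrative ruleset only.
RISKY_CALLS = {
    "gets": "unbounded read into a buffer",
    "strcpy": "no bounds check on destination",
    "sprintf": "no bounds check on output buffer",
    "strcat": "no bounds check on destination",
}

PATTERN = re.compile(r"\b(" + "|".join(RISKY_CALLS) + r")\s*\(")

def scan_file(path: Path) -> list[tuple[int, str, str]]:
    """Return (line number, call, reason) for each risky call found."""
    findings = []
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        for match in PATTERN.finditer(line):
            call = match.group(1)
            findings.append((lineno, call, RISKY_CALLS[call]))
    return findings

if __name__ == "__main__":
    for arg in sys.argv[1:]:
        for lineno, call, reason in scan_file(Path(arg)):
            print(f"{arg}:{lineno}: {call}() - {reason}")
```

Machine-learning scanners generalise the same loop: instead of a fixed pattern list, a model scores code fragments for likely vulnerability.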

Russia’s Evolving Strategies in a Tech-Driven World

Over the past years, the view that Russia cannot possibly compete with the collective West in the arena of rising tech has provided some comfort for Western policymakers and academics, a comfort deepened by the influx of Russian nationals into Serbia, Montenegro, or Turkey, most of them young people with IT backgrounds escaping the Putin regime’s military mobilisation, some of whom have already founded companies in those countries. As a result, the CEE region as a whole, especially Estonia and Poland, as well as the Western Balkans, has seen a rise in disinformation campaigns, political pressure, and even threats of new armed conflicts.

This is an obvious sign that active measures are back, this time in the era of AI and emerging tech, in a co-dependent and multipolar world. The question is whether AI can be used to create false narratives, fabricate compromising material on individuals for political gain, or influence elections outright. Using AI in this context seems a lot cheaper and would require a lot less training than the human-driven intelligence methods we have today.

Existing research points to three potential vectors of AI development that the Kremlin can use for political purposes. First, faster, cheaper, and easier content creation through machine learning algorithms: deep fakes, already available on social media like TikTok and in apps like FaceApp, which can be used for targeted smear campaigns. Second, advances in natural language processing, which make it easier to manipulate human emotions and language and feed the creation of kompromat and fake news. Third, the use of deep fakes and AI-enabled disinformation to target specific groups, thanks to ever easier access to social media networks; a defensive sketch of what countering that vector can look like follows below.
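On the defensive side, one building block of open-source investigation is spotting coordinated amplification, that is, many accounts pushing near-identical text within a short window. The sketch below is a minimal, assumed illustration of that idea using normalised-text grouping; the thresholds and the post format are hypothetical, not taken from any platform’s actual API.

```python
import re
from collections import defaultdict

def normalise(text: str) -> str:
    """Collapse case, punctuation, and stray characters so trivially
    edited copies of the same message group together."""
    return re.sub(r"[^a-z0-9 ]", "", text.lower().strip())

def find_coordinated_posts(posts, min_accounts=3, window_secs=3600):
    """posts: iterable of (account_id, unix_timestamp, text).
    Flags any normalised message posted by at least min_accounts
    distinct accounts within window_secs of each other."""
    groups = defaultdict(list)
    for account, ts, text in posts:
        groups[normalise(text)].append((ts, account))

    flagged = {}
    for message, hits in groups.items():
        hits.sort()  # order by timestamp
        accounts = {a for _, a in hits}
        span = hits[-1][0] - hits[0][0]
        if len(accounts) >= min_accounts and span <= window_secs:
            flagged[message] = sorted(accounts)
    return flagged

if __name__ == "__main__":
    sample = [
        ("acct_a", 1000, "The election was STOLEN!!!"),
        ("acct_b", 1200, "the election was stolen"),
        ("acct_c", 1300, "The election was stolen."),
        ("acct_d", 9000, "Nice weather today."),
    ]
    for message, accounts in find_coordinated_posts(sample).items():
        print(f"possible coordination: '{message}' from {accounts}")
```

Real investigations combine many such weak signals, posting times, shared media hashes, account-creation dates, rather than relying on text overlap alone.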

We can safely assume that Russia will probably not be the global leader in developing those three vectors but will instead adapt to the growth of the global digital landscape and build its expertise in the use of existing AI outputs, such as facial- and voice-manipulation software, which offer a strategic depth that would not be possible by conventional means. These AI outputs are not a Kremlin original but rather a mixture of opportunism in service of political goals, the accessibility of AI software, and the Kremlin’s strategic goal of becoming a major player in the emerging-tech arena. Russia merely has to use already existing, publicly accessible digital tools and figure out how to manipulate targeted-advertising algorithms in political contexts, as was widely publicised around the 2016 US presidential election, where the employment of “useful idiot”-driven narratives was widespread and had far-reaching consequences; those narratives can still be observed in 2023.

Conclusion

The potential of AI in political warfare and hybrid threats in the transatlantic space is an emerging topic, but the offering of policy solutions for the regulation of AI in NATO member states remains limited to analyses and reports. The convergence of AI and political warfare, specifically the creation and spread of Russian malinformation, disinformation, or kompromat along the three vectors outlined above, as part of a broader strategy of active measures, is quickly becoming a staple of security forums in Europe. Still, a broader, cross-sectoral approach is needed to mitigate the potential negative outcomes of AI-driven narratives coming from the Kremlin.

Omar Memišević is a Research Fellow at the Strategic Analysis Think Tank.
