Algorithms of Destruction: Moscow’s Tragedy and the Case for AI Regulation

On March 22, 2024, Moscow experienced the worst terrorist attack Russia has seen in over a
decade when four gunmen opened fire at Crocus City Hall, killing more than 140 people and
setting fire to the building before attempting to escape. Russia blamed Ukraine for the attack,
even though the radical terrorist group ISIS quickly claimed responsibility, stating that the four
Tajik assailants were members of its Khorasan branch, ISIS-K.

Russian President Vladimir Putin did acknowledge that the attack was carried out by radical
Islamists, but on March 25, 2024 he reiterated his belief in Ukraine’s involvement and indirectly
placed blame on the United States, asking, “Who benefits from this? This atrocity can only be a
link in a whole series of attempts by those who, since 2014, have been at war with our country
using the neo-Nazi regime in Kyiv as their instrument.”
The Kremlin also claimed that it had caught the terrorists trying to enter Ukraine, but this claim
was rapidly debunked, as the arrests were made nowhere near the Ukrainian border. Also contrary
to Putin’s statements, U.S. involvement seems highly unlikely; on March 6, 2024, the United
States warned Russia of a possible upcoming terrorist attack under its “duty to warn” policy. The
warning was sent to Moscow just one day before the United States posted a public advisory
instructing civilians in Moscow to “avoid crowds” and “monitor local media for updates.”
Given the lack of additional security at the concert hall at the time of the attack, Russia came
under serious scrutiny for failing to heed Washington’s warning.

This attack was devastating for Russia, but also for Ukraine, as Russia rallied its forces for retaliation. Given that the Kremlin continues to blame Kyiv despite all evidence to the contrary, it will likely keep using the concert hall attack to justify its unrelenting assault on Ukraine, ongoing since the initial invasion on February 24, 2022. Moscow clearly had no intention of admitting that Ukraine and the United States were not at fault, nor does it now. Dmitry Medvedev, Russia’s former president, “vowed revenge if Ukraine was involved,” and Putin himself “publicly dismissed U.S. warnings just three days before the March 22 attack, calling them ‘outright blackmail.’” He has yet to retract that statement.

Although significant backlash was brewing in Russia over the inadequate precautions taken
before the event, the Russian president used the incident to his advantage, as he has in the past,
to project strength as Russia’s leader in the wake of the presidential election. Within the country,
numerous online accounts emerged claiming that “Ukrainians had called in false reports of
shootings in other locations around Moscow to disrupt the rescue effort.” These claims worked
to Putin’s advantage, both relieving him of scrutiny over the massive security failure and
allowing him to maintain Russian support for the war in Ukraine as he worked to convince the
world that Kyiv was behind the Moscow attack.

The 2016 U.S. elections showed the world that Russia has no qualms about using technology to
its advantage in political affairs. In fact, there have been numerous reported instances of
artificial intelligence (AI) being used in the war between Russia and Ukraine.
“In the months leading up to the invasion, the Ukrainian government warned of the potential for
Russia to employ a deepfake of Zelensky surrendering to the Russian government,” and although
the Kremlin has yet to publicly claim responsibility, that is exactly what happened. In technical
terms, a deepfake is a piece of synthetic media, a photo, video, or audio clip in which a real
person’s likeness or voice is artificially engineered, typically using machine learning. Deepfakes
are extremely dangerous and are often used with malicious intent, including identity theft and
the generation of fake pornographic images of real people. Numerous countries have introduced
new AI policies in an effort to regulate this behavior, but so far there is no clear solution. Under
current legislation, access to deepfake technology is borderline unrestricted, blurring the line
between truth and AI. Now more than ever, people are wary of trusting what they see on the
internet, and for good reason.

On March 14, 2022, someone hacked a Ukrainian news site and displayed a deepfaked video of
Ukrainian President Volodymyr Zelensky surrendering Ukraine to Russia. On March 22, 2024, a
similarly incriminating deepfaked clip aired on the Russian television channel NTV, showing
Oleksiy Danilov, secretary of Ukraine’s National Security and Defense Council, making
incendiary remarks about the attack on Moscow. He was alleged to have said, “Is it fun in
Moscow today? I think it’s a lot of fun. I would like to believe that we will arrange such fun for
them more often.” The video has since been debunked as a fake, and Danilov has since been
replaced.

Audio analysis was carried out on the clip for BBC Verify, and Ukraine’s Center for Countering
Disinformation “added that the video’s quality was not good and that Mr. Danilov’s facial
expressions and speech did not match.” While no one has come forward to claim responsibility
for the altered clip, it serves as a solemn reminder that, left unregulated, deepfake technology
could continue to advance until humans no longer possess the capability to tell the difference
between what is altered and what is real. Had this incident occurred ten years from now, the AI
used to generate the video might have been so believable that Russia’s claims of Ukrainian
involvement would have been hard to deny, even with ISIS claiming responsibility for the
attacks. Let this serve as a reminder of the detrimental implications of AI being used in military
contexts, and let the world continue to push for stronger AI regulation.
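
To make the “facial expressions and speech did not match” finding concrete: one simple heuristic for spotting a crude deepfake is to test whether mouth movement actually tracks the loudness of the accompanying audio. The Python sketch below is a toy illustration of that idea only; the function av_sync_score and both synthetic signal tracks are invented for demonstration and are not drawn from BBC Verify’s actual methodology, which is far more sophisticated.

```python
import numpy as np

def av_sync_score(mouth_openness: np.ndarray, audio_envelope: np.ndarray) -> float:
    """Pearson correlation between a per-frame mouth-openness track and the
    audio loudness envelope. Genuine footage tends to correlate strongly;
    crude deepfakes with mismatched lip movement often do not."""
    m = (mouth_openness - mouth_openness.mean()) / mouth_openness.std()
    a = (audio_envelope - audio_envelope.mean()) / audio_envelope.std()
    return float(np.mean(m * a))

# Toy demonstration on synthetic signals -- no real video is analyzed here.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 300)                  # ten seconds at 30 frames/second
speech = np.abs(np.sin(2.5 * t)) + 0.1 * rng.standard_normal(300)

# Lips that follow the audio (genuine-like) vs. lips on their own schedule.
matched = speech + 0.1 * rng.standard_normal(300)
mismatched = np.abs(np.sin(2.5 * t + 2.0)) + 0.1 * rng.standard_normal(300)

print(f"genuine-like clip:  {av_sync_score(matched, speech):+.2f}")    # near +1
print(f"deepfake-like clip: {av_sync_score(mismatched, speech):+.2f}") # near 0 or below
```

Even a crude consistency check like this captures why poorly synchronized fakes, like the Danilov clip, can still be caught today; the worry is that future generators will leave no such mismatch to measure.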

Featured/Headline Image Caption and Citation: Winter in Moscow, taken on Feb 4, 2022 | Image sourced from Pexels, CC License, no changes made
