Terrorist Organizations Are Proving Adept at Embracing Artificial Intelligence

Author(s): EMAN Network / European Eye on Radicalisation (EER)

Over the past decade, much has been discussed regarding the capabilities and risks of artificial intelligence (AI), but most of it has focused on AI's ramifications for the global economy, its impact on careers, and its vast potential as an emerging technology across multiple sectors, including healthcare, agriculture, financial services, education, manufacturing, and defense. However, AI's capabilities go beyond that. Governments, intelligence agencies, policy-makers, and media are now debating how AI will transform the lethality of autonomous weapons systems, and the implications should this technology fall into the hands of terrorists and other non-state actors.

AI is defined as "systems or machines that mimic human intelligence to perform tasks and can iteratively improve themselves based on the information they collect." Even though terrorist groups, and particularly lone-wolf actors, have typically relied on so-called low-tech terrorism using weapons such as knives and automobiles, terrorism is not a static danger. As AI becomes more widely used, the barriers to adopting this type of technology fall, because the skills and technical know-how required to use it diminish. The UN's Secretary-General António Guterres argues that "while these technologies hold great promise, they are not risk-free, and some inspire anxiety and even fear. They can be used to malicious ends or have unintended negative consequences."

Emerging Challenges

The application of AI in myriad contexts can create various challenges for individuals, communities, and nations. These issues can arise at any point in the AI lifecycle, from design through deployment, and can be caused by both deliberate and unintended acts. According to the UN, one of the chief worries surrounding AI's usage is its substantial potential to infringe on human rights. When AI technology is not used properly, it can jeopardize the rights to privacy, equality (particularly gender equality), and non-discrimination, among others.

During the past couple of years, terrorist organizations and extremist individuals have demonstrated a high level of adaptability and have evolved significantly. They have also proved their ability to innovate in other areas, such as their organizational structure, which has become decentralized, franchised, and transcontinental, allowing them to leverage emerging technologies effectively to preach, fundraise, recruit, and carry out attacks. Their tactics, too, have changed dramatically, shifting from irregular guerrilla warfare to indiscriminate strikes.

In 2016, a video purportedly produced by Daesh in Syria documented the organization testing a remote-controlled self-driving automobile. Mannequins were placed inside the car to trick observers into believing it was being driven by an actual person. Shortly afterwards, reports claimed there was strong evidence that Daesh was working on self-driving cars to replace suicide bombers, and British authorities disclosed in 2018 that two Daesh sympathizers had planned to use a driverless car to carry out terrorist operations. Additionally, in March 2020, Daesh posted an illustrative video on Rocket.chat, a decentralized social media network that Daesh has utilized to promote terrorist propaganda, demonstrating how face-recognition software may be used to fool security systems. The group's ability to leverage AI systems has been clearly demonstrated.

Technology in the Wrong Hands

Radical individuals and terrorist groups are also using the Internet and social media, as well as online gaming platforms, to preach their extremist ideologies, recruit, incite hatred and violence, and claim responsibility for attacks. The 2019 Christchurch terrorist attack in New Zealand is a case in point: the attacker live-streamed the assault on Facebook. Although the video was removed within minutes, footage spread throughout the world, magnifying the attack's impact and its repercussions for the victims.

The Yemen-based Houthis are another case in point, having successfully launched drone attacks against population centers in Saudi Arabia and the United Arab Emirates, resulting in hundreds of innocent casualties and significant material damage to energy and civilian infrastructure. The Houthis’ deployment of drones against airports, energy infrastructure and civilian centers is another facet of terrorist groups adopting 21st century methods of warfare. The successful use of drones may potentially evolve into competence to produce and manufacture drone swarms — small, cheap and autonomous interlinked drones deployed in large numbers and armed with a variety of munitions that could engage multiple targets.

Conclusion

To cope with the overwhelming volume of extremist content posted online, and to avoid losing control, Facebook and other major tech companies are employing machine-learning algorithms to search for patterns in terrorist propaganda and remove it from users' newsfeeds more efficiently. Facebook has taken the lead by partnering with Twitter, Microsoft, and YouTube to develop an industry database that tracks terrorist organizations' unique digital fingerprints.
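At a high level, such a shared fingerprint database works by hashing known extremist media and checking new uploads against those stored hashes, so that material flagged by one platform can be caught by all participants. The following Python sketch is purely illustrative and is not the companies' actual system: the function names are hypothetical, and the use of a plain SHA-256 hash is a simplification, since production systems rely on perceptual hashes that survive re-encoding and minor edits.

```python
import hashlib

# Hypothetical shared database of fingerprints of known extremist media.
known_fingerprints = set()

def fingerprint(content: bytes) -> str:
    # Simplification: a cryptographic hash matches only byte-identical
    # files; real systems use perceptual hashing to catch re-encodes.
    return hashlib.sha256(content).hexdigest()

def flag_known_content(upload: bytes) -> bool:
    """Return True if the upload matches a fingerprint in the shared database."""
    return fingerprint(upload) in known_fingerprints

# One platform contributes a fingerprint; any participating platform can
# then match the same file without ever exchanging the media itself.
known_fingerprints.add(fingerprint(b"example propaganda video bytes"))
```

A design point worth noting: sharing hashes rather than the media itself lets platforms cooperate without redistributing the offending content.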

Despite these concerns, AI is also recognized as a technology with the potential to deliver major advantages for counterterrorism, since it can be used to anticipate extremist attacks. Governments and intelligence agencies understand the importance of adopting AI and big data in tandem, not only to prevent violent terrorist attacks but also to track and monitor radical individuals who use social media to preach to, and recruit, vulnerable people.
