29 Jun 2023

AI and organised crime in Africa

Artificial intelligence is rapidly developing in Africa and is already being used by organised criminals and law enforcement.

In 2019, a UK-based company was defrauded of US$243 000 when criminals used artificial intelligence (AI) to impersonate the voice of its Chief Executive Officer. In South Africa, impersonation crimes increased by 284% between 2021 and the first five months of 2022, according to the South African Fraud Prevention Service.

Advancements in technology are making these crimes easier to commit and harder to detect, as fake becomes increasingly difficult to distinguish from real. The recent surge of public awareness of AI applications for personal use – such as ChatGPT and resemble.ai – has heightened concern about AI’s real and potential uses and abuses in everyday life.

While AI is still nascent in Africa, it has the potential both to enable and to help combat transnational organised crime across the continent.

AI is predominantly applied in cyberspace, where criminals use the internet to engage in increasingly sophisticated phishing scams and ransomware attacks. Law enforcement can use automated systems and models to collect data on, detect and counteract these cybercrimes. But because we live in an increasingly interconnected world, where physical objects are connected to networks via sensors (the Internet of Things), new technologies can be used to commit or police crimes in the physical realm. This practical application of AI is developing rapidly, and has the potential to aid both organised criminals and law enforcement operations in Africa.

AI-powered applications, systems and technologies offer transnational organised criminal groups the means and opportunity to commit crimes that are more complex, can be committed from a greater distance, and carry less physical risk.

Since AI is continuously developing and evolving, legal frameworks are always playing catch-up

Recent analysis shows that criminal organisations can use AI in the same ways as legitimate companies, for ‘supply chain management, risk assessment and mitigation, personnel vetting, social media data mining, and various types of analysis and problem-solving.’

Organised criminals in Africa use drones for intelligence, surveillance and reconnaissance purposes, but drones also pose a potential threat to physical security. Already used by drug cartels in Mexico, autonomous attack drones under AI control can give criminals more flexibility, agility and coordination when physically attacking human, supply chain or infrastructure targets.

Satellite imagery can help criminals plot and manage smuggling routes across borders: AI systems built on Earth observation data provide highly accurate, near-real-time terrain data at the local level. Organised criminals can also attack AI systems to evade detection (e.g. biometric screening processes), circumvent security systems (at banks, warehouses, airports, ports and borders), or wreak havoc with private sector and government networks and economic infrastructure.

AI-enabled attacks on confidential personal databases, platforms and applications could allow organised criminals to extort or blackmail victims, generating income or political leverage. AI-generated deepfake technology can be used, for example, to access money by impersonating account holders, to request access to secure systems, and to manipulate political support through fake videos of public figures or politicians speaking or acting reprehensibly.

While AI offers new opportunities for criminals, it also provides new ways for law enforcement to police crime. The most promising application of AI for law enforcement is its ability to map movements, identify patterns, and anticipate, investigate and prevent crime. Predictive policing lets law enforcement estimate where crime is likely to occur, using algorithms that sift large amounts of data to score risk – although the approach carries its own harms, including reinforcing discriminatory policing patterns.
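
To make that mechanism concrete, the sketch below trains a classifier on synthetic per-grid-cell incident features and surfaces the highest-risk cells. It is a minimal illustration of the risk-scoring idea, not any agency’s actual system; every feature, label and threshold here is invented.

```python
# Minimal predictive-policing sketch: score map grid cells by crime risk.
# All data is synthetic; a real system would use historical incident records.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Features per grid cell: [incidents last month, incidents last year,
# distance to transport hub (km)]. 500 synthetic cells.
X = rng.random((500, 3)) * [20, 200, 10]
# Label: 1 if a serious incident followed (an invented rule plus noise).
y = (X[:, 0] + 0.1 * X[:, 1] - X[:, 2] + rng.normal(0, 3, 500) > 15).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Score new cells and surface the riskiest ones for patrol planning.
new_cells = rng.random((10, 3)) * [20, 200, 10]
risk = model.predict_proba(new_cells)[:, 1]
for idx in np.argsort(risk)[::-1][:3]:
    print(f"cell {idx}: predicted risk {risk[idx]:.2f}")
```

The harms noted above follow directly from this design: the model learns from records of past policing rather than true crime rates, so biased historical data produces biased risk scores.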

AI technologies aimed at policing organised crime would give law enforcement agencies enormous powers

In Africa, private security companies are usually more technologically advanced than police forces, many of which currently lack basic internet/data access, technology resources and capacity. But the AI systems developed and implemented by the private sector are usually linked in some way to police databases or can be used to prosecute suspects.

For example, Vumacam’s automatic licence plate recognition system in Johannesburg comprises over 2 000 vehicle-tracking cameras. It is connected to the South African Police Service’s national database of suspicious or stolen vehicles, and this partnership between the private sector and law enforcement has led to several arrests.
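
The matching step in such a system is conceptually simple. Below is a hypothetical sketch in which plates read by roadside cameras are normalised and checked against a watchlist of flagged vehicles; the plate strings, watchlist and alert handling are invented for illustration and do not reflect Vumacam’s or the police service’s actual interfaces.

```python
# Hypothetical ALPR watchlist check: camera reads are normalised and
# matched against a database of flagged plates (invented data throughout).
import re

def normalise(plate: str) -> str:
    """Strip spaces/hyphens and uppercase, so 'nd 123-456' matches 'ND123456'."""
    return re.sub(r"[\s\-]", "", plate).upper()

# Stand-in for the national database of suspicious or stolen vehicles.
watchlist = {normalise(p) for p in ["ND 123-456", "CA 98765", "GP 555 AAA"]}

def check_read(camera_id: str, raw_plate: str) -> None:
    plate = normalise(raw_plate)
    if plate in watchlist:
        # A deployed system would dispatch this alert to human operators.
        print(f"ALERT: {plate} flagged (camera {camera_id})")

check_read("JHB-cam-0042", "nd 123 456")   # -> ALERT
check_read("JHB-cam-0042", "GP 111 ZZZ")   # no match, no output
```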

The Bidvest Protea Coin group recently announced the rollout of Project Scarface, an early-warning system that uses facial recognition software to enable the real-time, automated detection of potential suspects. Visual data collected for the project can be used as evidence in criminal cases.

AI can also be used to fight organised crime in remote areas. EarthRanger uses AI and predictive analytics to collect and display historical and real-time data from a protected area – including data on wildlife, ranger patrols, spatial features and observed threats. The technology has helped park managers dismantle poaching rings in Tanzania’s Grumeti Game Reserve, and has supported sustainability efforts by helping local communities coexist with protected wildlife in Malawi’s Liwonde National Park.
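
As a rough illustration of that kind of data fusion (not EarthRanger’s actual API), the sketch below raises an alert when a reported threat event falls within a few kilometres of a recent animal-collar GPS fix; all coordinates and the 5 km radius are invented.

```python
# Toy fusion of two data streams in a protected area: recent collar fixes
# and reported threat events. Everything here is invented for illustration.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

collar_fixes = [("elephant-07", -14.80, 35.35), ("elephant-12", -14.84, 35.30)]
threat_events = [("gunshot", -14.81, 35.34), ("snare found", -15.20, 35.90)]

ALERT_RADIUS_KM = 5  # invented threshold
for event, elat, elon in threat_events:
    for animal, alat, alon in collar_fixes:
        d = haversine_km(elat, elon, alat, alon)
        if d <= ALERT_RADIUS_KM:
            print(f"ALERT: '{event}' reported {d:.1f} km from {animal}")
```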

The Africa Regional Data Cube (ARDC) is an AI system that layers 17 years of satellite imagery and Earth observation data for five African countries (Kenya, Senegal, Sierra Leone, Tanzania and Ghana). It stacks 8 000 scenes across a time series and makes the compressed, geocoded, analysis-ready data accessible via an online user interface. By comparing how land changes over time, ARDC data could help law enforcement identify and track illegal mining operations in Ghana.
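
A data cube makes change detection over a time series a simple array operation. The sketch below compares synthetic vegetation-index rasters from two dates and flags pixels with sharp vegetation loss as candidates for follow-up – the sort of signal that could indicate land cleared for illegal mining. The arrays and the 0.3 threshold are invented stand-ins for real ARDC scenes.

```python
# Toy land-change check on a stacked time series: flag pixels where a
# vegetation index (e.g. NDVI, range -1..1) dropped sharply between dates.
# Synthetic 100x100 rasters stand in for real, analysis-ready ARDC scenes.
import numpy as np

rng = np.random.default_rng(1)
ndvi_2006 = rng.uniform(0.4, 0.8, (100, 100))   # healthy vegetation cover
ndvi_2023 = ndvi_2006.copy()
ndvi_2023[40:55, 60:80] -= 0.5                  # simulated cleared patch

loss = ndvi_2006 - ndvi_2023
suspect = loss > 0.3                            # invented loss threshold
rows, cols = np.nonzero(suspect)
print(f"{suspect.sum()} pixels flagged for ground follow-up, "
      f"e.g. around row {rows[0]}, col {cols[0]}")
```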

Autonomous attack drones under AI control would allow criminals to act with more flexibility, agility and coordination

Although AI offers new technologies for tackling organised crime in Africa, there are currently several limitations – and some risks – to its use.

AI is a digital technology that needs an uninterrupted power supply, a stable internet connection and ‘the ability to store, collect, process, collate and manage vast quantities of data.’ This means its rollout across Africa will be uneven, depending on countries’ resources, law enforcement capabilities and willingness to enter into public-private partnerships.

In addition, because of the potential for AI systems to be attacked or to fail, relying on AI in the fight against organised crime could have negative consequences.

AI technologies and applications aimed at policing organised crime would give law enforcement agencies enormous powers, some of which could violate citizens’ rights to privacy, freedom of assembly and association, among others. Because AI continuously develops and evolves, legal frameworks are always trying to catch up. Private companies and even governments may capitalise on this to circumvent privacy concerns. Vumacam has already come under scrutiny for collecting potentially sensitive location data on private individuals with no links to crime.

Authoritarian governments could also use legitimate AI systems to monitor political opponents or suppress critical civil society. Human rights advocates in Zimbabwe are worried about the government’s implementation of Chinese-developed facial recognition software, with serious questions remaining about the ownership and potential use(s) of the data gathered.

In September 2021, then United Nations (UN) High Commissioner for Human Rights, Michelle Bachelet, called for a moratorium on governments’ use of AI technology that threatens or violates human rights. And in March 2023, major AI developers called for a pause on giant AI experiments ‘to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts.’

In November 2021, UN Educational, Scientific and Cultural Organization (UNESCO) member states adopted an agreement detailing recommendations on the ethics of AI to ensure ‘the healthy development of AI.’ But AI is developing fast, both in scope and reach. International bodies, governments and civil society must be equally fast and agile in establishing and implementing responsible and ethical AI use principles.

At the same time, more should be done to develop instruments and legal frameworks for investigating, prosecuting and punishing individuals and groups who use (or misuse) AI for criminal and violent ends.

Romi Sigsworth, Research Consultant, ENACT

ENACT is funded by the European Union and implemented by the Institute for Security Studies in partnership with INTERPOL and the Global Initiative against Transnational Organized Crime.