
Dozens arrested in global hit against AI-generated child abuse

01-03-2025

LONDON: At least 25 arrests have been made during a worldwide operation against child abuse images generated by artificial intelligence (AI), the European Union’s law enforcement organization Europol has said.

The suspects were part of a criminal group whose members engaged in distributing fully AI-generated images of minors, according to the agency.

The operation is one of the first involving such child sexual abuse material (CSAM), Europol says. The lack of national legislation against these crimes made it “exceptionally challenging for investigators”, it added.

Arrests were made simultaneously on Wednesday 26 February during Operation Cumberland, led by Danish law enforcement, a press release said.

Authorities from at least 18 other countries have been involved and the operation is continuing, with more arrests expected in the next few weeks, Europol said.

In addition to the arrests, so far 272 suspects have been identified, 33 house searches have been conducted and 173 electronic devices have been seized, according to the agency.

It also said the main suspect was a Danish national who was arrested in November 2024.

The statement said he “ran an online platform where he distributed the AI-generated material he produced”.

After making a “symbolic online payment”, users from around the world were able to get a password that allowed them to “access the platform and watch children being abused”.

The agency said online child sexual exploitation was one of the top priorities for the European Union’s law enforcement organizations, which were dealing with “an ever-growing volume of illegal content”.

Europol added that even in cases when the content was fully artificial and there was no real victim depicted, such as with Operation Cumberland, “AI-generated CSAM still contributes to the objectification and sexualisation of children”.

Europol’s executive director Catherine De Bolle said: “These artificially generated images are so easily created that they can be produced by individuals with criminal intent, even without substantial technical knowledge.”

She warned law enforcement would need to develop “new investigative methods and tools” to address the emerging challenges.

The Internet Watch Foundation (IWF) warns that more AI-generated child sexual abuse images are being produced and are becoming more prevalent on the open web.

In research last year, the charity found that over a one-month period, 3,512 AI-generated child sexual abuse and exploitation images were discovered on a single dark web site. Compared with a month in the previous year, the number of images in the most severe category (Category A) had risen by 10%.

Experts say AI child sexual abuse material can often look incredibly realistic, making it difficult to tell the real from the fake.

An analyst who removes child sexual abuse content from the internet says she is always trying to stay “one step ahead” of the “bad guys”.

Mabel, who uses a pseudonym to protect her identity, works for the IWF, a charity based in Histon, Cambridgeshire.

The IWF has a team of frontline staff who identify and remove online child sexual abuse imagery from across the world.

It has identified more than half a million victims this year, but says advances in technology are making the charity’s work even more challenging. (Int’l News Desk)
