Thousands of realistic but thankfully fake images of sexualized child abuse found on the dark web

Law enforcement now faces an even harder task in tracking down child abusers.

Child protection experts say they cannot keep up with the thousands of AI-generated images of sexualized child abuse that are quickly and easily created and shared on dark-web pedophile forums, The Washington Post reports.

Experts said this “explosion” of frighteningly realistic images could help normalize the sexual abuse of minors, lure real children into traps, and make it harder for law enforcement to find real victims.

Finding the victims depicted in such material is already “a needle-in-a-haystack problem,” said Rebecca Portnoff, director of data science at the child-safety nonprofit Thorn. Investigators will now fall even further behind, since they must first determine whether the material is real at all.

Malicious AI-generated content can also re-victimize real children whose past abuse images were used to train the AI models that produce the fakes.

Both law enforcement and child protection experts report that such AI-generated images are increasingly being promoted on dark-web pedophile forums, with many users treating this content as an alternative to trafficking in child sexual abuse material (CSAM) involving real children.

According to ActiveFence, which develops trust-and-safety tools for online platforms and streaming sites, roughly 80 percent of respondents in a survey of a 3,000-member dark-web forum said they had used or intended to use AI tools to create images of sexualized child abuse.

Stability AI, whose neural network is likely being used to generate such content, told The Washington Post that it bans the creation of obscene material, cooperates with law enforcement to track down violators of its policies, and has removed explicit material from its training data to hinder future attempts to create this kind of content.

However, The Washington Post countered that such safeguards are imperfect: anyone can download Stable Diffusion to their own computer and use it however they like. Its filters can be bypassed by simply adding a few lines of code, and numerous guides for doing so have already been published online.

Although some users who generate such images, and even some legal analysts, argue that the content is not necessarily illegal because no real children are harmed in its production, some US Department of Justice (DoJ) officials say that AI-generated images sexualizing minors still violate federal child protection laws.

As authorities become more aware of the growing problem, the public is being urged to adjust its online behavior to avoid victimization. Experts advise parents to restrict who can see photos and videos of their children so the images cannot be put to the “wrong” uses.

Earlier this month, the FBI also issued a warning that malicious actors can use ordinary, entirely lawful photographs to create synthetic content (deepfakes), including sexual material involving both minors and adults.


Source: www.securitylab.ru
