Neural networks have become so good at creating fakes that even specialized services cannot reliably recognize them. Experts warn of the new challenges and threats that generated fake content can create.
We have written more than once about how millions of people believed an obvious hoax that other Internet users passed off as completely real for fun. In March, for example, Reddit was abuzz with frightening footage of the aftermath of a 2001 earthquake that never happened, as well as photos of a fashionable Pope Francis in a puffy Balenciaga down jacket.
Neural networks improve the speed and quality of generation every month, so it is now extremely important to learn how to distinguish real photos from brazen fakes.
Special detector services promise to simplify the task: they analyze dubious images using complex algorithms and deliver their own verdict. But how effective and reliable are such services?
The American newspaper The New York Times decided to test five such services: Umm-maybe, Illuminarty, AI or Not, Hive, and Sensity. The researchers "fed" the services more than 100 photographs depicting landscapes, architectural structures, food, portraits of people and animals, and much more. Many of the images were real; the rest, of course, were generated with neural networks.
To create realistic fakes, the researchers used the AI generators Midjourney, Stable Diffusion, and DALL-E. The real images, meanwhile, were taken from old city photo archives or were little-known works of art.
The detectors look for unusual patterns in the arrangement of pixels that often arise during artificial generation. However, such services do not take the context and logic of the image into account, so they can sometimes miss obvious fakes or mistake a real photo for a fake.
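To give a sense of what "unusual patterns in the arrangement of pixels" might mean in practice, here is a minimal Python sketch. It is not the algorithm of any of the tested services; it simply measures how much of an image's spectral energy sits at high frequencies, where some generators leave periodic upsampling artifacts. The file name and the 0.25 cutoff are illustrative assumptions.

```python
# A minimal sketch of pixel-statistics analysis, NOT any tested service's
# algorithm. It measures the share of 2D-FFT energy outside a central
# low-frequency disc; some generators leave periodic artifacts up there.
# Assumptions: Pillow and NumPy installed; "suspect.png" is hypothetical.
import numpy as np
from PIL import Image

def high_freq_energy_ratio(path: str, cutoff: float = 0.25) -> float:
    """Fraction of spectral energy outside a low-frequency disc (heuristic)."""
    gray = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray))) ** 2
    h, w = spectrum.shape
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - h / 2, xx - w / 2)
    low_band = spectrum[radius <= cutoff * min(h, w) / 2].sum()
    return 1.0 - low_band / spectrum.sum()

# Usage: compare a suspect image's score against known-real references.
# An unusually high or low ratio is only a weak hint, never a verdict.
# print(high_freq_energy_ratio("suspect.png"))
```

Real detectors are trained classifiers rather than a single hand-written heuristic like this one, which is exactly why, as noted above, they can be blind to context and logic.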
For example, one of the tested images showed Elon Musk in the company of a realistic android girl. The image was created with Midjourney, yet it deceived two of the five detectors into believing it was genuine.
Elon Musk kisses a beautiful android girl (generated by AI)
In addition, AI detectors struggle with images that have been altered from their original form or are of poor quality. Such images are common on the Internet, where they are copied, resaved, downscaled, or cropped, all of which erodes the markers that generative-image detectors usually rely on.
For example, one of the images looked like a very old photograph of a giant Neanderthal standing next to ordinary people. In reality, it too was created with Midjourney. When the detectors analyzed the high-resolution version, all of them correctly identified it as fake; however, when the image was intentionally degraded, all five detectors reported it as genuine.
Giant Neanderthal standing next to ordinary people (AI generated)
Beyond a simple reduction in quality, artificially adding noise or digital grain can also deceive such detectors, since neural networks usually generate "too perfect" images.
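To make that concrete, here is a minimal sketch, assuming Pillow and NumPy, of the two degradations described above: an aggressive JPEG re-encode and synthetic grain. The quality and noise values, like the file names, are illustrative assumptions, not settings from the NYT test.

```python
# A minimal sketch of the degradations described above: heavy JPEG
# re-encoding plus zero-mean Gaussian grain. Parameter values and file
# names are illustrative assumptions, not settings from the NYT test.
import io
import numpy as np
from PIL import Image

def degrade(src: str, dst: str, jpeg_quality: int = 30, noise_sigma: float = 8.0) -> None:
    """Resave at low JPEG quality, then overlay digital grain."""
    img = Image.open(src).convert("RGB")
    # Round-trip through an aggressive JPEG compression step.
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=jpeg_quality)
    buf.seek(0)
    arr = np.asarray(Image.open(buf), dtype=np.float64)
    # Add Gaussian noise to break up "too perfect" pixel statistics.
    noisy = arr + np.random.normal(0.0, noise_sigma, arr.shape)
    Image.fromarray(np.clip(noisy, 0, 255).astype(np.uint8)).save(dst)

# degrade("generated.png", "degraded.jpg")
```

Both operations are ordinary image-editing steps; the point is that they disturb exactly the fine pixel statistics a detector inspects, without changing what a human sees.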
The detectors did a much better job of identifying real images, although occasionally one of the services would mistake a painting by an abstract artist for the work of a neural network.
"Convergence" by Jackson Pollock (real painting)
In general, experts believe that AI detectors should not be the only defense against fake content. They suggest using other methods as well, such as watermarks, online warnings, and restrictions on the distribution of fake images. They also call for greater transparency and accountability on the part of the creators of such content and the platforms on which it is distributed.
Artificial intelligence is able to generate not only realistic images, but also texts, audio and video, which can also be used to manipulate public opinion, financial markets and political processes. This creates new challenges and threats for society, which require increased vigilance and skepticism towards any Internet content.
Source: www.securitylab.ru