Friday, March 29, 2024
The tragedy that never happened: how neural networks can be used to spread fake information

Frightening images from Midjourney that are almost indistinguishable from real photographs.

On the Internet you can often stumble upon highly realistic photographs of natural or man-made disasters. Many of these photos are genuine, taken by real photographers, while a large share are fakes edited in Photoshop. And if before, any image from the Internet could already raise doubts in a critically thinking person, now, with the development and spread of neural networks, those doubts should grow even stronger.

Realistic images of a tragedy that never happened were recently posted on Reddit, in one of the communities where users experiment with the Midjourney neural network and share their creations with like-minded people. The original post by a user under the nickname “Arctic_Chilean” quickly went viral and ended up on the front page of Reddit. Naturally, many people who did not know the context at first thought these were real photographs.



The creator of the fake photos claimed that a magnitude-9.1 earthquake had struck in April 2001, and even listed the places where the disaster supposedly occurred. One image purportedly shows collapsed downtown Seattle, while another shows rescuers pulling survivors out of the rubble in Vancouver.

The images are very convincing and frightening, and their overall quality and style are strongly reminiscent of 2001. It is likely that the neural network used photographs of the real tragedy of September 11, 2001 as a reference. Fortunately, however, no earthquake struck the west coast of the United States and Canada in April of that year – this is just a realistic generation.

Fake images generated by artificial intelligence have been spreading like a pandemic lately. One foreign journalist, for example, generated a dozen different photos of Donald Trump being detained by American police.


If the images were even a little more realistic, it would be quite possible to release yet another piece of fake news into the global information space. And most people would believe it – after all, there’s the picture! The faster neural networks develop, the more the already thin line of trust in the media and the Internet is blurred.

Let’s hope that in the future, attackers and pranksters will not use neural networks to spread disinformation, especially on the scale of the images above, tied to tragic events and loss of life.

Better still, in the future all images published in the media could be required to pass through some verification tool – itself based on AI – to confirm their authenticity.
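No such universal verification tool exists today; serious efforts center on provenance standards such as C2PA rather than detection alone. As a purely illustrative sketch (the function names and the heuristic itself are this example's own assumptions, not a real product), one very weak signal a checker might look at is whether a JPEG carries any camera EXIF metadata at all, since generators often emit files without it:

```python
# Illustrative sketch only: a weak heuristic, not real image authentication.
# Absence of EXIF proves nothing by itself -- metadata can be stripped from
# genuine photos or forged into generated ones. A real pipeline would rely
# on cryptographic provenance (e.g. C2PA) instead.

def has_exif(jpeg_bytes: bytes) -> bool:
    """Return True if the JPEG byte stream contains an EXIF APP1 segment."""
    # EXIF data lives in an APP1 segment (marker 0xFF 0xE1) whose payload
    # begins with the ASCII identifier "Exif" followed by two NUL bytes.
    return b"\xff\xe1" in jpeg_bytes and b"Exif\x00\x00" in jpeg_bytes

def suspicion_note(jpeg_bytes: bytes) -> str:
    """Produce a human-readable hint for a reviewer (hypothetical helper)."""
    if has_exif(jpeg_bytes):
        return "EXIF present: consistent with a camera, but still not proof."
    return "No EXIF metadata: could be generated, stripped, or re-encoded."
```

A reviewer would treat such a flag only as a prompt for closer inspection, never as a verdict on authenticity.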



Source link

www.securitylab.ru
