How not to fall into the trap of lies on social networks: scientists from London conducted a unique experiment

Don’t believe everything you see on the internet.

Social media is full of false information. Every day, users of Facebook, Twitter, Instagram and other sites can stumble upon made-up “facts” about anything from vaccines to war to climate change.

Some people easily distinguish between truth and fiction, while others do not.

Why would a reasonable person believe false information?

The researchers of the project “PolyGraphs: Combatting Networks of Ignorance in the Misinformation Age” are trying to answer this question.

The project unites three departments – philosophy, economics and computer science – at Northeastern University London. It uses computer simulations to help us learn more about how knowledge spreads through social media communities.

After two years of work, the researchers launched an interactive website and made some impressive discoveries, including how and why reasonable people can make mistakes.

The project uses artificial data, as well as data from real social networks such as Facebook and Twitter, to create simulated communities in which each individual must choose between A and B. (B is the correct choice, but the community does not know this.) Agents in communities gather their own evidence, share it with others, and change their beliefs. The researchers then look to see if the community collectively reaches the right conclusion and how long it takes.

Simulations are like drug trials

If this concept seems a little abstract, Amil Mohanan offers an example that closely resembles the situations the agents face in the simulations. Mohanan, an assistant professor of philosophy at Northeastern University London, draws an analogy between the simulations and the clinical drug trials run by medical professionals. In the simulation, each doctor is given drug A or drug B to test.

“We know that B is a little better, but doctors in the community we are simulating don’t know that,” he says.

When the doctors run their trials, they find that drug B is better and share their results with their colleagues. Depending on various factors, some of the doctors will change their beliefs based on what they learn from others. If all goes well, they eventually reach a consensus that drug B is better.

But how long will it take?

“We’re looking at how quickly and effectively communities come to the conclusion that drug B is superior to drug A,” Mohanan elaborates. They run thousands of simulation iterations, modifying various parameters, such as the size and structure of the network, to determine how long it will take to reach consensus on drug B.
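The article does not publish the project's code, but the setup it describes closely resembles the "two-armed bandit" models common in the network-epistemology literature. The Python sketch below is an illustration under that assumption, not PolyGraphs' actual implementation: the parameter values, the function run_simulation and the 0.99 consensus threshold are mine. Each doctor keeps a credence that drug B is better, doctors who favour B test it, results are shared with network neighbours, and everyone updates by Bayes' rule until the community settles on B.

```python
# A minimal, illustrative sketch of the kind of simulation described above.
# Assumptions: a standard two-armed-bandit model in which drug A's success
# rate is known, drug B's is either slightly better or slightly worse, and
# B really is better. None of this is PolyGraphs' actual code.
import random
from math import comb

P_B_GOOD = 0.55          # true success rate of drug B (better than A's 0.5)
P_B_BAD = 0.45           # B's rate under the rival hypothesis
TRIALS_PER_ROUND = 10    # patients each doctor treats per round

def likelihood(k, n, p):
    """Probability of k successes in n trials at success rate p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def run_simulation(neighbours, max_rounds=10_000):
    """neighbours maps each doctor to the doctors it hears results from."""
    doctors = list(neighbours)
    credence = {d: random.random() for d in doctors}  # belief that B is better
    for round_no in range(1, max_rounds + 1):
        # Doctors who currently favour B prescribe it and record the outcomes.
        results = {d: sum(random.random() < P_B_GOOD
                          for _ in range(TRIALS_PER_ROUND))
                   for d in doctors if credence[d] > 0.5}
        # Everyone updates on their own evidence and their neighbours' reports.
        for d in doctors:
            for source in [d, *neighbours[d]]:
                if source in results:
                    k = results[source]
                    good = likelihood(k, TRIALS_PER_ROUND, P_B_GOOD)
                    bad = likelihood(k, TRIALS_PER_ROUND, P_B_BAD)
                    c = credence[d]
                    credence[d] = c * good / (c * good + (1 - c) * bad)
        if all(c > 0.99 for c in credence.values()):
            return round_no        # consensus: the community settled on B
        if all(c < 0.5 for c in credence.values()):
            return None            # nobody tests B any more; the truth is lost
    return None
```

Running run_simulation many times over different neighbour maps, and recording how many rounds each run takes, corresponds to the repeated iterations and parameter sweeps Mohanan describes.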

For example, in one network model, each agent exchanges information with only two others, forming a circular structure. In another model, one participant shares information with the entire team, and everyone reports back to that participant. In the third case, information circulates freely among all participants. These network models reflect the structures we see in real life or in the online space.
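Continuing the illustrative sketch above, these three shapes could be written as neighbour maps in the format run_simulation expects. The names cycle, wheel and complete are the usual ones in the network-epistemology literature; the code is again a hypothetical illustration rather than PolyGraphs' own.

```python
# Hypothetical neighbour maps for the three network shapes described above,
# in the format expected by the run_simulation sketch earlier.
def cycle(n):
    """Each agent hears only from the two agents beside it in a ring."""
    return {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

def wheel(n):
    """Agent 0 shares with everyone, and everyone reports back to agent 0."""
    return {0: list(range(1, n)), **{i: [0] for i in range(1, n)}}

def complete(n):
    """Information circulates freely: everyone hears from everyone else."""
    return {i: [j for j in range(n) if j != i] for i in range(n)}

# Comparing how quickly each shape reaches consensus on drug B:
# for shape in (cycle, wheel, complete):
#     rounds = [run_simulation(shape(10)) for _ in range(1000)]
```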

Small deceptions can have a profound impact

Another factor that influences the results is false information. Small deceptions can have far-reaching consequences, Mohanan found, and their impact can vary depending on parameters such as the degree of network connectivity.

In the simulations, deception can take various forms. Physicians may, for example, claim that they have run the tests and are confident that drug A is superior. They may pick one of the remedies at random and skew the data. Or they may lean towards drug A simply because it is the drug they know best.
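One way to picture this in the earlier sketch, as an assumption for illustration rather than the article's description of PolyGraphs' exact mechanism, is a doctor who fabricates reports as if drug B had performed at its worse rate.

```python
# A hypothetical deceptive doctor, compatible with the earlier sketch.
import random

P_B_BAD = 0.45           # as in the earlier sketch
TRIALS_PER_ROUND = 10

def deceptive_report(n=TRIALS_PER_ROUND):
    """A fabricated result drawn as if drug B were the inferior drug."""
    return sum(random.random() < P_B_BAD for _ in range(n))

# In the main loop of run_simulation, a designated liar would overwrite its
# honest entry before sharing:  results[liar] = deceptive_report()
```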

“The overall results are very different” in each of these scenarios, Mohanan emphasizes. After thousands of iterations of the simulation, his team found that these small seeds of doubt can make a big difference.

Other results were completely unexpected. For example, the research team found that in some cases, more open information exchange between people can actually be counterproductive, slowing down consensus building. This phenomenon is known as the Zollman effect: the theory that a more connected network may be more conducive to the spread of erroneous beliefs.

“Smart agents in such networks may be more prone to ignorance, or more likely to fail to arrive at the right answer to a question, if they communicate more with each other,” says Brian Ball, head of philosophy at Northeastern University London.

They also found that community members’ distrust of those whose beliefs differ from their own can lead to rejection of the shared view and, as a result, to polarization within the community.

Ignorance among intelligent subjects

First of all, the scientists showed that ignorance can take hold even among intelligent subjects. People often perceive those who are “wrong” as unintelligent or biased. However, as Ball emphasizes, “we show that people’s mistakes may not be due to their stupidity or bias.”

Instead, people can be misled through no fault of their own, and the reason here is the structure of the social network.

“It may depend on the extent to which they are involved in society and, more broadly, on the nature of their information environment,” explains Ball.

Ball hopes that these discoveries will be useful in various areas, including social networks, public policy, the media and non-profit organizations that aim to combat the spread of false information on the Internet.

If you’re worried about what you see on social media, Mohanan reassures: by and large, “the truth always comes out over time.”



Source: www.securitylab.ru
