Disinformation Express: How Generative AI Models Like ChatGPT, DALL-E, and Midjourney Can Distort Human Beliefs

How generative AI models can influence human beliefs and biases.

Scientists publishing with the American Association for the Advancement of Science (AAAS) warn of a possible danger posed by generative artificial intelligence (AI) models such as ChatGPT, DALL-E, and Midjourney. In their view, these models can spread false and biased information and thereby shape people's beliefs.

Generative AI models create new content based on existing data. They can generate text, images, audio, or video for use in a variety of areas, including education, entertainment, and research. At the same time, their use can be harmful when they produce false or biased information that misleads people.

In their article, Celeste Kidd and Abeba Birhane discuss three key principles of psychology that help explain why generative AI models can influence our beliefs so powerfully:


  • People form stronger beliefs when information comes from confident, knowledgeable sources. For example, children learn better from teachers who demonstrate competence in their subject.
  • People often exaggerate the capabilities of generative AI models and consider them superior to human abilities. As a result, people accept AI-generated information more quickly and with greater confidence.
  • People are most receptive to new information when they are actively seeking it, and they tend to hold on to what they have learned. Yet this information can be unreliable or biased when it comes from generative AI models.

The scientists call for more attention to the design of generative AI models, particularly those focused on finding and providing information. They propose measuring the impact of these models on human beliefs and biases both before and after exposure to generative AI. Such assessment becomes especially relevant given the increasing use and integration of these systems into our everyday technological world.


Source: www.securitylab.ru
