ChatGPT has learned to read human thoughts, now nothing can be hidden from it

Scientists have found a way to use GPT-style large language models for passive mind reading. A newly developed language decoder is capable of translating human thoughts into text.

The breakthrough marks the first time that continuous language has been non-invasively reconstructed from human brain signals, which are read using a functional magnetic resonance imaging (fMRI) machine.

The decoder was able to interpret the gist of the stories people were watching or listening to, or even just imagining, using fMRI brain patterns. This achievement essentially allows the machine to read people’s minds with unprecedented efficiency. While the technology is still in its early stages, scientists hope it will one day help people with neurological conditions that affect speech communicate clearly with the outside world.

However, the team behind the decoder also warned that brain-reading platforms could end up with nefarious uses, including as a means of spying for governments and employers. While the researchers emphasized that their decoder requires human cooperation to operate, they argued that “brain-computer interfaces must respect mental privacy,” according to the study, published Monday in Nature Neuroscience.

“Language decoding is currently done with implanted devices that require neurosurgery, and our study is the first to decode continuous language — that is, more than single words or sentences — from non-invasive brain recordings collected with functional MRI,” explained Jerry Tang, a graduate student in the Computer Science Department at the University of Texas at Austin.

“The goal of language decoding is to record the user’s brain activity and predict the words they heard, spoke, or imagined. In the end, we hope that this technology can help people who have lost the ability to speak due to a stroke or other diseases,” said Tang.

The scientist and his colleagues were able to create their decoder with the help of three volunteers, each of whom spent 16 hours in the fMRI machine, listening to different stories. The researchers trained an AI model on autobiographical stories from the Internet to link the semantic features of these stories to the neural activity captured in the fMRI data. Thus, the model could learn which words and phrases are associated with certain brain patterns.
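The training setup described above — mapping the semantic features of stories onto the brain activity they evoke, then inverting that mapping to decode — resembles a regularized-regression “encoding model” combined with generate-and-score decoding. Below is a minimal NumPy sketch of that idea, using random stand-ins for the language-model embeddings and fMRI voxel responses; all dimensions, data, and names here are illustrative assumptions, not the study’s actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: in a real experiment, features would come from a GPT-style
# language model and responses from tens of thousands of fMRI voxels.
N_TRAIN, N_FEAT, N_VOX = 200, 16, 32

# Hypothetical training data: semantic features of heard stories and the
# brain responses they evoked (synthetic here, for illustration only).
W_true = rng.normal(size=(N_FEAT, N_VOX))
X_train = rng.normal(size=(N_TRAIN, N_FEAT))
Y_train = X_train @ W_true + 0.1 * rng.normal(size=(N_TRAIN, N_VOX))

# Step 1: fit a ridge-regression "encoding model" from semantic features
# to voxel responses (closed-form solution of the regularized least squares).
lam = 1.0
W = np.linalg.solve(X_train.T @ X_train + lam * np.eye(N_FEAT),
                    X_train.T @ Y_train)

# Step 2: decode by generate-and-score — propose candidate phrases, embed
# them, and keep the one whose *predicted* brain response best matches the
# observed scan (here scored by Pearson correlation).
def score(candidate_feat, observed):
    pred = candidate_feat @ W
    return np.corrcoef(pred, observed)[0, 1]

true_feat = rng.normal(size=N_FEAT)   # features of the phrase actually heard
observed = true_feat @ W_true         # the "scan" that phrase evoked
candidates = [true_feat] + [rng.normal(size=N_FEAT) for _ in range(9)]

best = max(range(len(candidates)),
           key=lambda i: score(candidates[i], observed))
# The candidate matching the heard phrase should win (index 0).
```

Because the decoder scores candidates by semantic fit rather than exact wording, a paraphrase with similar features can score almost as well as the original — consistent with the gist-level, rather than verbatim, decoding the article describes.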

After this phase of the experiment was completed, participants were scanned with fMRI while they listened to new stories that were not part of the training data set. The decoder was able to translate the audio stories into text as the participants heard them, although these interpretations often used semantic constructions different from the original recordings. For example, a recording of a speaker saying “I don’t have a driver’s license yet” was decoded from the listener’s brain activity as “She hasn’t even started learning to drive yet.”

These rough translations arise from a key difference between the new decoder and existing methods that use electrodes implanted in the brain. Electrode-based platforms typically predict text from motor activity, such as the movements of a person’s mouth when they try to speak. Tang’s team, in contrast, focused on the flow of blood through the brain, which is what fMRI machines capture.

“Our system works on a completely different level. Instead of looking at low-level motor skills, our system actually works at the level of ideas, semantics, and meaning. That’s where it’s going,” said Alexander Huth, Associate Professor of Neuroscience and Computer Science.

The new approach allowed the team to push the boundaries of mind-reading technology by seeing if a decoder could translate participants’ thoughts when they were watching silent films or simply imagining stories in their heads. In both cases, the decoder was able to decipher what the participants saw (in the case of films) and what they thought while playing out the short stories in their minds.

The decoder produced more accurate results in tests with audio recordings than with imagined speech, but was still able to extract some basic details of unspoken thoughts from brain activity. For example, when a subject imagined the sentence “walking on a dirt road through a wheat field, across a stream, and past some log buildings,” the decoder produced text saying that “he had to walk across a bridge to the other side and a very large building in the distance.”

Study participants completed all of these tests while inside an fMRI machine — bulky, immobile laboratory equipment. For this reason, the decoder is not yet ready for practical use with patients who have speech disorders, although Tang and his colleagues hope that future versions can be adapted to more convenient platforms, such as portable functional near-infrared spectroscopy (fNIRS) sensors worn on the patient’s head.

While the researchers hinted at the technology’s promise as a new means of communication, they also warned that such decoders raise ethical concerns about the privacy of the mind.

“Our privacy analysis indicates that subject cooperation is currently required both to train and to apply the decoder. However, future developments may allow decoders to circumvent these requirements. Moreover, even if the decoder’s predictions are inaccurate without the subject’s cooperation, they could be deliberately misinterpreted for malicious purposes,” the scientists warned.

“For these and other unforeseen reasons, it is critical to raise awareness of the risks associated with brain decoding technology and take steps to protect the privacy of each individual,” the researchers concluded.

Source: www.securitylab.ru