
Scientists developed a GPT model that reads human thoughts


In context: Generative pre-trained transformers (GPT) like those used in OpenAI’s ChatGPT chatbot and Dall-E image generator are the current trend in AI research. Everybody wants to apply GPT models to just about everything, and the trend has raised considerable controversy for various reasons.

Scientific American notes that a group of researchers has developed a GPT model that can read a human’s mind. The program is not dissimilar to ChatGPT in that it can generate coherent, continuous language from a prompt. The main difference is that the prompt is human brain activity.

The team from the University of Texas at Austin just published its study in Nature Neuroscience on Monday. The method uses imaging from an fMRI machine to interpret what the subject is “hearing, saying, or imagining.” The scientists call the technique “non-invasive,” which is ironic since reading someone’s thoughts is about as invasive as you can get.

However, the team means that its method is not medically invasive. This is not the only technology scientists have developed that can read thoughts, but it is the only successful method that does not require electrodes connected to the subject’s brain.

The model, unimaginatively dubbed GPT-1, is the only method that interprets brain activity in a continuous language format. Other techniques can spit out a word or short phrase, but GPT-1 can form complex descriptions that explain the gist of what the subject is thinking.

For example, one participant listened to a recording of someone stating, “I don’t have my driver’s license yet.” The language model interpreted the fMRI imaging as meaning, “She has not even started to learn to drive yet.” So while it does not read the person’s thoughts verbatim, it can get a general idea and summarize it.

Invasive methods can interpret exact words because they are trained to recognize specific physical motor functions in the brain, such as the lips moving to form a word. The GPT-1 model determines its output based on blood flow in the brain. It can’t precisely repeat thoughts because it works on a higher level of neurological functioning.

“Our system works at a very different level,” said Alexander Huth, an assistant professor of neuroscience and computer science at UT Austin, at a press briefing last Thursday. “Instead of looking at this low-level motor thing, our system really works at the level of ideas, of semantics, and of meaning. That’s what it’s getting at.”


The breakthrough came after the researchers fed GPT-1 Reddit comments and “autobiographical” accounts. They then trained it on scans from three volunteers who each spent 16 hours listening to recorded stories in the fMRI machine, which allowed GPT-1 to link neural activity to the words and ideas in the recordings.
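To make that training step concrete, here is a minimal, hypothetical sketch in Python. It is not the authors’ code: `embed_text` is an assumed stand-in for the GPT-1 language model’s word features, the features are assumed to already be aligned to the scanner’s timepoints, and ordinary ridge regression stands in for whatever encoding model the team actually fit.

```python
import numpy as np
from sklearn.linear_model import Ridge


def embed_text(words):
    """Hypothetical stand-in: return one GPT-style feature vector per word,
    already resampled so rows line up with the fMRI timepoints."""
    raise NotImplementedError


def train_encoding_model(stories, scans):
    """Fit a linear map from language features to brain responses.

    stories -- list of word sequences the volunteer heard in the scanner
    scans   -- matching list of (n_timepoints, n_voxels) BOLD arrays
    """
    X = np.vstack([embed_text(words) for words in stories])  # language features
    Y = np.vstack(scans)                                      # brain activity
    # One ridge regression predicting every voxel from the text features.
    return Ridge(alpha=1.0).fit(X, Y)
```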

Once the model was trained, the volunteers listened to new stories while being scanned, and GPT-1 accurately determined the general idea of what they were hearing. The study also tested the technology with silent movies and with the volunteers’ imaginations, with similar results.
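The testing step can be sketched the same way: a language model proposes candidate word sequences, and the decoder keeps whichever guesses best predict the brain activity that was actually recorded, a rough beam search over possibilities. Again, `propose_continuations` and the reuse of `embed_text` and the encoding model from the sketch above are assumptions for illustration, not the published implementation.

```python
import numpy as np


def propose_continuations(words_so_far, n=10):
    """Hypothetical stand-in: ask a GPT-style language model for n likely
    next words given the text decoded so far."""
    raise NotImplementedError


def score(candidate, scan, encoding_model):
    """Higher is better: how well this guess explains the recorded scan.
    (Glosses over aligning word features with scanner timepoints.)"""
    predicted = encoding_model.predict(embed_text(candidate))
    return -np.mean((predicted - scan) ** 2)  # negative prediction error


def decode(scan, encoding_model, n_steps=20, beam_width=5):
    beams = [[]]  # start from an empty guess
    for _ in range(n_steps):
        # Let the language model extend every guess, then keep the guesses
        # whose predicted brain activity looks most like the real scan.
        candidates = [b + [w] for b in beams for w in propose_continuations(b)]
        candidates.sort(key=lambda c: score(c, scan, encoding_model), reverse=True)
        beams = candidates[:beam_width]
    return " ".join(beams[0])
```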

Interestingly, GPT-1 was more accurate when interpreting the audio-recording sessions than when decoding the participants’ imagined stories. One could chalk that up to the abstract nature of imagined thoughts versus the more concrete ideas formed from listening to something. That said, GPT-1 was still pretty close when reading unspoken thoughts.

In one example, the subject imagined, “[I] went on a dirt road through a field of wheat and over a stream and by some log buildings.” The model interpreted this as “He had to walk across a bridge to the other side and a very large building in the distance.” So it missed some arguably essential details and vital context but still grasped elements of the person’s thinking.

Machines that can read thoughts might be the most controversial form of GPT tech yet. While the team envisions the technology helping ALS or aphasia patients speak, it acknowledges the potential for misuse. In its current form the technique requires the subject’s cooperation to work, but the study admits that bad actors could create a version that overrides that check.

“Our privacy analysis suggests that subject cooperation is currently required both to train and to apply the decoder,” it reads. “However, future developments might enable decoders to bypass these requirements. Moreover, even if decoder predictions are inaccurate without subject cooperation, they could be intentionally misinterpreted for malicious purposes. For these and other unforeseen reasons, it is critical to raise awareness of the risks of brain decoding technology and enact policies that protect each person’s mental privacy.”

Of course, this scenario assumes that fMRI tech can be miniaturized enough to be practical outside of a clinical setting. Any applications other than research are still a long way off.


