Researchers decode how we turn thoughts into sentences

New research reveals that the brain’s handling of sentence formation goes far beyond word recognition, tapping into dynamic and syntax-specific activity that reshapes how we understand language production. PHOTO/Shutterstock.

By JAMES OLOO

[email protected]

A team of researchers at the New York University Tandon School of Engineering has used machine learning to analyze neural activity and uncover how speech is produced.

In a recent paper published in Nature Communications Psychology, a research team at NYU, led by Associate Professor Adeen Flinker and postdoctoral researcher Adam Morgan, explored how the brain constructs sentences from individual words.

Using high-resolution electrocorticography (ECoG), they aimed to determine whether findings from simpler language tasks, such as naming pictures, also apply to the more cognitively demanding process of sentence formation.

The study involved ten neurosurgical patients undergoing treatment for epilepsy. Participants completed language tasks that included saying single words and forming full sentences to describe cartoon scenes. Machine learning was applied to ECoG recordings taken directly from electrodes placed on the surface of each participant’s brain.

The researchers first mapped the distinct neural signatures associated with six words spoken individually, then monitored how those same patterns evolved as the words were incorporated into complete sentences.
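The study's actual analysis pipeline is not described in detail here, but the core idea — learn a neural "signature" for each word from single-word trials, then score later activity against those signatures — can be sketched with a simple nearest-template decoder. Everything below (the synthetic data, array shapes, and the correlation-based `decode` function) is illustrative only, not the authors' method.

```python
# Illustrative sketch (not the authors' pipeline): learn a per-word neural
# template from single-word trials, then classify a held-out trial by picking
# the template with the highest correlation (nearest-centroid decoding).
import numpy as np

rng = np.random.default_rng(0)

N_WORDS, N_TRIALS, N_FEATURES = 6, 40, 32  # hypothetical electrode features

# Synthetic data: each word has a fixed "signature" plus per-trial noise.
signatures = rng.normal(size=(N_WORDS, N_FEATURES))
trials = signatures[:, None, :] + 0.5 * rng.normal(
    size=(N_WORDS, N_TRIALS, N_FEATURES)
)

# Build templates from the first 30 trials of each word; hold out the rest.
templates = trials[:, :30].mean(axis=1)  # shape (6, 32): one template per word

def decode(activity, templates):
    """Return the index of the template most correlated with `activity`."""
    corrs = [np.corrcoef(activity, t)[0, 1] for t in templates]
    return int(np.argmax(corrs))

# Decoding accuracy on the held-out trials.
correct = sum(
    decode(trials[w, i], templates) == w
    for w in range(N_WORDS)
    for i in range(30, N_TRIALS)
)
accuracy = correct / (N_WORDS * (N_TRIALS - 30))
print(f"held-out decoding accuracy: {accuracy:.2f}")
```

In the study, the analogous step was to apply word signatures learned from single-word production to each moment of sentence production, revealing when, and in which brain regions, each word's pattern became active.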

The study revealed that although the brain’s activity patterns for individual words stay consistent across different language tasks, the way those words are arranged and processed depends heavily on sentence structure. In sensorimotor areas, neural activity mirrored the order in which the words were spoken.

However, in prefrontal regions—especially the inferior and middle frontal gyri—the encoding strategy was different. These areas not only represented the words participants intended to say but also registered each word’s grammatical function (such as subject or object) and its place within the sentence’s overall structure.

Researchers pinpointed which areas of the brain activated for individual words, allowing them to track that activity as participants spoke sentences. PHOTO/NYU Tandon School of Engineering.

The researchers also found that during passive constructions like “Frankenstein was hit by Dracula,” the prefrontal cortex maintained activation for both nouns throughout the entire sentence. Even while one word was being spoken, the other remained active in the brain.

This ongoing, simultaneous encoding indicates that forming grammatically complex or non-standard sentences requires the brain to retain and manage multiple elements at once, likely engaging additional working memory to do so.

Interestingly, this dynamic aligns with a longstanding observation in linguistics: most of the world’s languages favor placing subjects before objects. The researchers propose that this could be due, in part, to neural efficiency.

Processing less common structures such as passives appears to demand more cognitive effort, a cost that, over long stretches of time, could nudge languages toward subject-first word orders.

Ultimately, this work offers a detailed glimpse into the cortical choreography of sentence production and challenges some of the long-standing assumptions about how speech unfolds in the brain.

Rather than a simple linear process, it appears that speaking involves a flexible interplay between stable word representations and syntactically driven dynamics, shaped by the demands of grammatical structure.

The study was supported by multiple grants from the National Institutes of Health.