Facebook-backed researchers have translated brain signals into spoken words, bringing the social network's vision of linking brains and machines closer to reality.
A study published this week by University of California-San Francisco scientists showed progress toward a new type of brain-computer interface. The project involved brain implants, but could be a step toward accomplishing the goal with a non-invasive method such as augmented reality glasses with sensors.
“A decade from now, the ability to type directly from our brains may be accepted as a given,” Facebook said Tuesday in an online post updating a project announced two years ago.
“Not long ago, it sounded like science fiction. Now, it feels within plausible reach.”
Such a breakthrough could benefit people with paralysis, spinal cord injuries, neurodegenerative diseases or other conditions that leave them unable to speak, and may also let people control technology such as augmented reality glasses just by thinking, Facebook said.
“It's never too early to start thinking through the important questions that will need to be answered before such a potentially powerful technology should make its way into commercial products,” Facebook said.
A study published in Nature Communications detailed how researchers were able to capture brain signals being sent to produce speech and figure out what people were trying to say.
A standard set of questions was asked of the volunteers in the study, and the computer was given the context of each question to help it figure out the answers.
“Currently, patients with speech loss due to paralysis are limited to spelling words out very slowly using residual eye movements or muscle twitches to control a computer interface,” said UCSF neuroscientist Eddie Chang.