Multiple brain-computer interface (BCI) devices now allow users to do everything from controlling computer cursors to translating neural activity into words to converting handwriting into text. One of the latest BCI examples accomplishes very similar tasks, but it does so without time-consuming, personalized calibration or high-stakes neurosurgery.
As recently detailed in a study published in PNAS Nexus, University of Texas at Austin researchers have developed a wearable cap that lets a user accomplish complex computer tasks by translating brain activity into actionable commands. But instead of needing to tailor each device to a specific user’s neural activity, an accompanying machine learning program offers a new, “one-size-fits-all” approach that dramatically reduces training time.
“Training a BCI subject customarily starts with an offline calibration session to collect data to build an individual decoder,” the team explains in their paper’s abstract. “Apart from being time-consuming, this initial decoder might be inefficient as subjects do not receive feedback that helps them to elicit proper [sensorimotor rhythms] during calibration.”
To address this, the researchers developed a new machine learning program that identifies an individual’s specific needs and adjusts its repetition-based training accordingly. Because of this interoperable self-calibration, trainees don’t need guidance from the research team, nor complex medical procedures to install an implant.
“When we think about this in a clinical setting, this technology will make it so we won’t need a specialized team to do this calibration process, which is long and tedious,” Satyam Kumar, a graduate student involved in the project, said in a recent statement. “It will be…