Mason Archival Repository Service

Interpreting Speech and Sounds from Neural Activity, a Brief Overview

dc.contributor.author Ryan, Andrew
dc.date.accessioned 2020-05-12T20:42:54Z
dc.date.available 2020-05-12T20:42:54Z
dc.date.issued 2020
dc.description.abstract For people who are mute or completely paralyzed, one of the primary challenges is communication. One potential way to compensate for lost communication function is a brain-computer interface (BCI). The idea is to record neural activation that correlates with the patient's imagined speech and decode it into legible text that can be read by the receiver. Because of the intricacy of speech interpretation, direct access to specific regions of the brain, and in some cases individual neurons, is required. As a result, many studies of BCI speech interpretation use ECoG sensors implanted in epilepsy patients when such patients are available. Approaches to analyzing these signals for feature extraction include word-based classification and phoneme-based classification. An approach mentioned less often in the literature is whether a sound signal can be reconstructed directly from the activated regions of the brain. As the technology advances, it has potential use as a speech replacement for people suffering from paralysis, as well as in prosthetics. en_US
dc.language.iso en_US en_US
dc.rights Attribution-ShareAlike 3.0 United States
dc.rights.uri *
dc.subject neural engineering en_US
dc.subject brain-computer interface en_US
dc.title Interpreting Speech and Sounds from Neural Activity, a Brief Overview en_US
dc.type Other en_US
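The word- and phoneme-based classification approaches described in the abstract can be illustrated with a minimal sketch. Everything below is an illustrative assumption rather than a method from the literature this overview surveys: the "ECoG features" are synthetic random vectors, the phoneme labels and channel count are hypothetical, and a simple nearest-centroid classifier stands in for whatever decoder a real study would use.

```python
import numpy as np

# Illustrative sketch only: synthetic feature vectors stand in for real
# ECoG-derived features (e.g., per-electrode band power); the phoneme
# labels and 16-channel layout are hypothetical.
rng = np.random.default_rng(0)
phonemes = ["AA", "IY", "UW"]   # hypothetical phoneme classes
n_channels = 16                 # assumed electrode count

# Fabricated per-phoneme mean activation patterns.
means = {p: rng.normal(0.0, 1.0, n_channels) for p in phonemes}

def make_trials(n_per_class=50, noise=0.5):
    """Generate noisy synthetic trials around each phoneme's mean pattern."""
    X, y = [], []
    for p in phonemes:
        X.append(means[p] + rng.normal(0.0, noise, (n_per_class, n_channels)))
        y += [p] * n_per_class
    return np.vstack(X), np.array(y)

X_train, y_train = make_trials()
X_test, y_test = make_trials(n_per_class=20)

# Nearest-centroid classification: assign each trial to the phoneme whose
# mean training feature vector is closest in Euclidean distance.
centroids = {p: X_train[y_train == p].mean(axis=0) for p in phonemes}

def classify(x):
    return min(phonemes, key=lambda p: np.linalg.norm(x - centroids[p]))

preds = np.array([classify(x) for x in X_test])
accuracy = (preds == y_test).mean()
print(f"accuracy: {accuracy:.2f}")
```

Because the synthetic class means are well separated relative to the noise, this toy decoder scores near-perfectly; real imagined-speech decoding from ECoG is far harder, which is why the abstract emphasizes the need for direct cortical access.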


