Intrusive Technology Pt. 6


New Technique Allows Scientists to Read Minds at Nearly the Speed of Thought

An experiment by University of Washington researchers is setting the stage for advances in mind-reading technology. Using brain implants and sophisticated software, researchers can now predict what their subjects are seeing with startling speed and accuracy.

The ability to view a two-dimensional image on a page or computer screen, and then transform that image into something our minds can immediately recognize, is a neurological process that remains mysterious to scientists. To learn more about how our brains perform this task, and to see if computers can collect and predict what a person is seeing in real time, a research team led by University of Washington neuroscientist Rajesh Rao and neurosurgeon Jeff Ojemann demonstrated that it’s possible to decode human brain signals at nearly the speed of perception. The details of their work can be found in a new paper in PLOS Computational Biology.

The team sought the assistance of seven patients undergoing treatment for epilepsy. Medications weren’t alleviating their seizures, so these patients were given temporary brain implants, and electrodes were used to pinpoint the focal points of their seizures. The UW researchers saw this as an opportunity to perform their experiment. “They were going to get the electrodes no matter what,” noted Ojemann in a UW NewsBeat article. “We were just giving them additional tasks to do during their hospital stay while they are otherwise just waiting around.”

The patients were shown a random sequence of pictures on computer monitors: images of human faces, houses, and blank gray screens, each displayed for a brief 400-millisecond interval. Their specific task was to watch for an image of an upside-down house.

The face and house discrimination task. Credit: Kai J. Miller et al., 2016/PLOS Computational Biology

At the same time, the electrodes in their brains were connected to software that extracted two distinct brain signal properties: “event-related potentials” (when massive batches of neurons light up simultaneously in response to an image) and “broadband spectral” changes (signals that linger after viewing an image).
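To make the two signal properties concrete, here is a minimal sketch of how such features might be extracted from a single electrode’s voltage trace. The time windows, the smoothing filter, and the simulated trial are all invented for illustration; the study’s actual signal-processing pipeline is more sophisticated.

```python
import numpy as np

def extract_features(trial, fs=1000):
    """Toy feature extraction for one electrode's trial.

    `trial` is a 1-D voltage trace sampled at `fs` Hz, time-locked to
    stimulus onset. Both features below are simplified stand-ins, not
    the paper's definitions.
    """
    # Event-related potential proxy: mean voltage in an early
    # post-stimulus window (here 50-300 ms), capturing the
    # synchronized neural response.
    erp = trial[int(0.05 * fs):int(0.30 * fs)].mean()

    # Broadband spectral proxy: log power of the fast fluctuations
    # left after subtracting a 50-sample moving-average trend.
    trend = np.convolve(trial, np.ones(50) / 50, mode="same")
    broadband = np.log(np.mean((trial - trend) ** 2) + 1e-12)

    return erp, broadband

# Simulated 400 ms trial: a slow evoked deflection peaking at 150 ms,
# plus high-frequency noise.
rng = np.random.default_rng(0)
t = np.arange(0, 0.4, 1 / 1000)
trial = np.exp(-((t - 0.15) ** 2) / 0.002) + 0.1 * rng.standard_normal(t.size)
print(extract_features(trial))
```

In this toy trial the ERP feature picks up the slow deflection, while the broadband feature reflects only the fast residual activity, so the two numbers really do summarize different aspects of the same trace.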

As the images flickered on the screen, a computer sampled and digitized the incoming brain signals at a rate of 1,000 times per second. This resolution allowed the software to determine which combination of electrode locations and signal types correlated best with what the patients were seeing. “We got different responses from different (electrode) locations; some were sensitive to faces and some were sensitive to houses,” Rao said.
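The electrode-selection idea above, finding which locations discriminate faces from houses, can be sketched with a simple separability score. The trial counts, response values, and the score itself are invented for illustration and do not reproduce the study’s method.

```python
import numpy as np

rng = np.random.default_rng(2)

def electrode_score(face_trials, house_trials):
    """Absolute difference of the class means divided by the pooled
    standard deviation: a d-prime-like separability measure."""
    pooled_sd = np.sqrt((face_trials.var() + house_trials.var()) / 2)
    return abs(face_trials.mean() - house_trials.mean()) / pooled_sd

# Synthetic per-trial responses for three electrodes (40 trials per class).
# Electrode 0 is made face-sensitive, electrode 1 house-sensitive, and
# electrode 2 uninformative. All numbers are invented.
face = {0: rng.normal(1.0, 0.3, 40),
        1: rng.normal(0.0, 0.3, 40),
        2: rng.normal(0.0, 0.3, 40)}
house = {0: rng.normal(0.0, 0.3, 40),
         1: rng.normal(1.0, 0.3, 40),
         2: rng.normal(0.0, 0.3, 40)}

scores = {e: electrode_score(face[e], house[e]) for e in range(3)}
print(scores)
```

Electrodes 0 and 1 score high and electrode 2 scores near zero, mirroring Rao’s observation that some locations were sensitive to faces, others to houses, and the rest could be down-weighted.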

After training the software, the researchers exposed the patients to an entirely new set of pictures. Without previous exposure to these new images, the computer was able to predict with 96 percent accuracy whether a test subject was seeing a house, a face, or a gray screen. And it did so at nearly the speed of perception.
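The train-then-test procedure can be illustrated with a toy decoder. The sketch below trains a nearest-centroid classifier on synthetic two-dimensional feature vectors for the three stimulus classes, then scores it on held-out trials; both the data and the classifier are stand-ins, not the study’s actual decoder.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented class centers in a 2-D feature space (e.g. one ERP feature
# and one broadband feature per trial).
classes = ["face", "house", "gray"]
centers = {"face": [2.0, 0.0], "house": [0.0, 2.0], "gray": [0.0, 0.0]}

def make_trials(n):
    """Draw n noisy synthetic trials per class."""
    X, y = [], []
    for label in classes:
        X.append(rng.normal(centers[label], 0.4, size=(n, 2)))
        y += [label] * n
    return np.vstack(X), y

# "Training": learn one mean feature vector (centroid) per class.
X_train, y_train = make_trials(50)
centroids = {c: X_train[[lbl == c for lbl in y_train]].mean(axis=0)
             for c in classes}

# "Testing": label each previously unseen trial by its nearest centroid.
X_test, y_test = make_trials(50)
def predict(x):
    return min(classes, key=lambda c: np.linalg.norm(x - centroids[c]))

accuracy = np.mean([predict(x) == truth for x, truth in zip(X_test, y_test)])
print(f"held-out accuracy: {accuracy:.2f}")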

This proficiency only occurred when the computer considered both event-related potentials and broadband changes, which, as stated in the study, suggests “they capture different and complementary aspects of the subject’s perceptual state.” So when it comes to understanding how a person perceives a complex visual object, it’s important to consider the “global picture” of large neural networks.
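The “complementary aspects” point has a simple mechanical consequence for decoding: each trial is represented by both kinds of features at once, rather than by either alone. A minimal sketch, with invented values:

```python
import numpy as np

# Hypothetical per-electrode features for one trial (all values invented).
erp_features = np.array([0.31, -0.12, 0.08])       # event-related potentials
broadband_features = np.array([-4.1, -3.8, -5.0])  # broadband log power

# The decoder sees the concatenation, so it can weigh both signal types.
trial_vector = np.concatenate([erp_features, broadband_features])
print(trial_vector.shape)  # (6,)
```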

While interesting, the results of the study are exceptionally limited. A true test of the system would be to see if it could learn a much larger set of images spanning different categories. It’s not immediately obvious, for example, whether the computer could discern if a patient was viewing the face of a human or of a dog.

Once refined, however, this kind of brain decoding could be used to build communication mechanisms for “locked-in” patients who are paralyzed or have suffered a stroke. The technique could also assist with brain mapping, allowing neuroscientists to identify in real time the locations in the brain responsible for certain types of information.


