{"id":6254,"date":"2023-10-09T07:07:00","date_gmt":"2023-10-09T07:07:00","guid":{"rendered":"https:\/\/dailyai.com\/?p=6254"},"modified":"2023-10-09T07:07:00","modified_gmt":"2023-10-09T07:07:00","slug":"ai-decodes-speech-from-non-invasive-brain-recordings","status":"publish","type":"post","link":"https:\/\/dailyai.com\/nb\/2023\/10\/ai-decodes-speech-from-non-invasive-brain-recordings\/","title":{"rendered":"AI dekoder tale fra ikke-invasive hjerneopptak"},"content":{"rendered":"<p><strong>N\u00f8yaktig hvordan hjernen v\u00e5r behandler og formulerer spr\u00e5k, er fortsatt i stor grad et mysterium. Forskere ved Meta AI har funnet en ny m\u00e5te \u00e5 m\u00e5le hjerneb\u00f8lger p\u00e5 og avkode ordene som er knyttet til dem.<\/strong><\/p>\n<p>Personer som har sterkt begrensede motoriske ferdigheter, slik som ALS-rammede, opplever det som spesielt utfordrende \u00e5 kommunisere. Det er vanskelig \u00e5 forestille seg frustrasjonen hos en person som Stephen Hawking, som m\u00f8ysommelig konstruerer en setning ved hjelp av \u00f8yebevegelser eller rykninger i en kinnmuskel.<\/p>\n<p>Mye forskning har blitt gjort for \u00e5 <a href=\"https:\/\/dailyai.com\/nb\/2023\/08\/unbabels-ai-tool-makes-telepathic-communication-possible\/\">avkode tale fra hjerneaktivitet<\/a>men de beste resultatene avhenger av invasiv <a href=\"https:\/\/dailyai.com\/nb\/2023\/08\/ai-replenishes-speech-and-facial-expressions-of-stroke-survivor\/\">hjerne-datamaskin-implantater.<\/a><\/p>\n<p>Meta AI-forskerne brukte magneto-encefalografi (MEG) og elektroencefalografi (EEG) til \u00e5 registrere hjerneb\u00f8lgene til 175 frivillige mens de lyttet til korte historier og isolerte setninger.<\/p>\n<p>De brukte en forh\u00e5ndstrenet talemodell og kontrastive l\u00e6ringsprosesser for \u00e5 identifisere hvilke hjerneb\u00f8lgem\u00f8nstre som var assosiert med spesifikke ord som fors\u00f8kspersonene lyttet til.<\/p>\n<blockquote class=\"twitter-tweet\">\n<p dir=\"ltr\" lang=\"en\" 
style=\"text-align: center;\">\"Avkoding av taleoppfatning fra ikke-invasive hjerneopptak\",<br \/>\nledet av den eneste ene <a href=\"https:\/\/twitter.com\/honualx?ref_src=twsrc%5Etfw\">@honualx<\/a><br \/>\ner nettopp publisert i den siste utgaven av Nature Machine Intelligence:<\/p>\n<p>- \u00e5pent tilgjengelig artikkel: <a href=\"https:\/\/t.co\/1jtpTezQzM\">https:\/\/t.co\/1jtpTezQzM<\/a><br \/>\n- full oppl\u00e6ringskode: <a href=\"https:\/\/t.co\/Al2alBxeUC\">https:\/\/t.co\/Al2alBxeUC<\/a> <a href=\"https:\/\/t.co\/imLxRjRQ6h\">pic.twitter.com\/imLxRjRQ6h<\/a><\/p>\n<p style=\"text-align: center;\">- Jean-R\u00e9mi King (@JeanRemiKing) <a href=\"https:\/\/twitter.com\/JeanRemiKing\/status\/1709957806030311798?ref_src=twsrc%5Etfw\">5. oktober 2023<\/a><\/p>\n<\/blockquote>\n<p><script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/p>\n<p>Forskerne delte opp lyden i segmenter p\u00e5 tre sekunder og testet deretter modellen for \u00e5 se om den kunne identifisere hvilket av de 1500 segmentene den frivillige lyttet til. 
The model predicted a kind of word cloud, with the most likely word given the greatest weight.<\/p>\n<p>They achieved 41% accuracy on average, and 95.9% accuracy with their best participants.<\/p>\n<figure id=\"attachment_6256\" aria-describedby=\"caption-attachment-6256\" style=\"width: 1024px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-6256 size-large\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/10\/Speech-predictions-from-brain-waves-1024x578.webp\" alt=\"Speech predictions from brain waves\" width=\"1024\" height=\"578\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/10\/Speech-predictions-from-brain-waves-1024x578.webp 1024w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/10\/Speech-predictions-from-brain-waves-300x169.webp 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/10\/Speech-predictions-from-brain-waves-768x434.webp 768w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/10\/Speech-predictions-from-brain-waves-1536x867.webp 1536w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/10\/Speech-predictions-from-brain-waves-2048x1156.webp 2048w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/10\/Speech-predictions-from-brain-waves-370x209.webp 370w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/10\/Speech-predictions-from-brain-waves-800x452.webp 800w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/10\/Speech-predictions-from-brain-waves-20x11.webp 20w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/10\/Speech-predictions-from-brain-waves-740x418.webp 740w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/10\/Speech-predictions-from-brain-waves-1600x903.webp 1600w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/10\/Speech-predictions-from-brain-waves-1320x745.webp 1320w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/10\/Speech-predictions-from-brain-waves-85x48.webp 85w\" sizes=\"auto, (max-width: 1024px) 100vw, 
1024px\" \/><figcaption id=\"caption-attachment-6256\" class=\"wp-caption-text\">Prediksjoner p\u00e5 ordniv\u00e5 mens deltakerne lyttet til setningen \"Takk for at du kom, Ed\". Bl\u00e5 ord tilsvarer det riktige ordet, og svarte ord tilsvarer negative kandidater. Tekstst\u00f8rrelsen er proporsjonal med modellens logaritmiske sannsynlighet. Kilde: Kilde: <a href=\"https:\/\/www.nature.com\/articles\/s42256-023-00714-5\/figures\/3\" target=\"_blank\" rel=\"noopener\">Natur<\/a><\/figcaption><\/figure>\n<p>Forskningen viser at det er mulig \u00e5 f\u00e5 en ganske god id\u00e9 om hvilken tale en person h\u00f8rer, men n\u00e5 m\u00e5 prosessen reverseres for \u00e5 v\u00e6re nyttig. Vi m\u00e5 m\u00e5le hjerneb\u00f8lgene deres og vite hvilket ord de tenker p\u00e5.<\/p>\n<p>I artikkelen foresl\u00e5s det \u00e5 trene opp et nevralt nettverk mens fors\u00f8kspersonene produserer ord ved \u00e5 snakke eller skrive. Den generelle modellen kan deretter brukes til \u00e5 tolke hjerneb\u00f8lger og de tilh\u00f8rende ordene som en ALS-syk person tenker p\u00e5.<\/p>\n<p>Forskerne klarte \u00e5 identifisere talesegmenter fra et begrenset forh\u00e5ndsbestemt sett. For \u00e5 kunne kommunisere ordentlig, m\u00e5 du kunne identifisere mange flere ord. Det kan v\u00e6re nyttig \u00e5 bruke en generativ AI til \u00e5 forutsi det neste mest sannsynlige ordet en person pr\u00f8ver \u00e5 si.<\/p>\n<p>Selv om prosessen var ikke-invasiv, krever den likevel at man kobles til en <a href=\"https:\/\/dailyai.com\/nb\/2023\/08\/ai-mind-reading-medical-breakthrough-or-step-towards-dystopia\/\">MEG-enhet<\/a>. Dessverre var resultatene fra EEG-m\u00e5lingene ikke gode.<\/p>\n<p>Forskningen gir h\u00e5p om at kunstig intelligens etter hvert kan brukes til \u00e5 hjelpe stemmel\u00f8se, som ALS-pasienter, med \u00e5 kommunisere. 
Using a pre-trained model also avoided the need for more laborious word-by-word training.<\/p>\n<p>Meta AI released the model and the data, so other researchers will hopefully build on their work.<\/p>","protected":false},"excerpt":{"rendered":"<p>Exactly how our brains process and formulate language is still largely a mystery. Researchers at Meta AI have found a new way to measure brain waves and decode the words associated with them. People with severely limited motor skills, such as ALS sufferers, find it particularly challenging to communicate. It is hard to imagine the frustration of a person like Stephen Hawking painstakingly constructing a sentence with eye movements or the twitch of a cheek muscle. A lot of research has been done to decode speech from brain activity, but the best results depend on invasive brain-computer implants. 
The Meta AI researchers used magneto-encephalography (MEG)<\/p>","protected":false},"author":6,"featured_media":6257,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[84],"tags":[150,105,101,131],"class_list":["post-6254","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-industry","tag-ai-benefits","tag-machine-learning","tag-medtech","tag-meta"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>AI decodes speech from non-invasive brain recordings | DailyAI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/nb\/2023\/10\/ai-decodes-speech-from-non-invasive-brain-recordings\/\" \/>\n<meta property=\"og:locale\" content=\"nb_NO\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"AI decodes speech from non-invasive brain recordings | DailyAI\" \/>\n<meta property=\"og:description\" content=\"Exactly how our brains process and formulate language is still largely a mystery. Researchers at Meta AI found a new way to measure brain waves and decode the words associated with them. People who have severely limited motor skills, like ALS sufferers, find it particularly challenging to communicate. The frustration of a person like Stephen Hawking painstakingly constructing a sentence with eye movements or twitching a cheek muscle is hard to imagine. A lot of research has been done to decode speech from brain activity, but the best results depend on invasive brain-computer implants. 
Meta AI researchers used magneto-encephalography (MEG)\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/nb\/2023\/10\/ai-decodes-speech-from-non-invasive-brain-recordings\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2023-10-09T07:07:00+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/10\/Brain-waves-to-speech.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1000\" \/>\n\t<meta property=\"og:image:height\" content=\"864\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Eugene van der Watt\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Skrevet av\" \/>\n\t<meta name=\"twitter:data1\" content=\"Eugene van der Watt\" \/>\n\t<meta name=\"twitter:label2\" content=\"Ansl. 
lesetid\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minutter\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/10\\\/ai-decodes-speech-from-non-invasive-brain-recordings\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/10\\\/ai-decodes-speech-from-non-invasive-brain-recordings\\\/\"},\"author\":{\"name\":\"Eugene van der Watt\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\"},\"headline\":\"AI decodes speech from non-invasive brain recordings\",\"datePublished\":\"2023-10-09T07:07:00+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/10\\\/ai-decodes-speech-from-non-invasive-brain-recordings\\\/\"},\"wordCount\":519,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/10\\\/ai-decodes-speech-from-non-invasive-brain-recordings\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/10\\\/Brain-waves-to-speech.jpg\",\"keywords\":[\"AI benefits\",\"machine learning\",\"MedTech\",\"Meta\"],\"articleSection\":[\"Industry\"],\"inLanguage\":\"nb-NO\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/10\\\/ai-decodes-speech-from-non-invasive-brain-recordings\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/10\\\/ai-decodes-speech-from-non-invasive-brain-recordings\\\/\",\"name\":\"AI decodes speech from non-invasive brain recordings | 
DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/10\\\/ai-decodes-speech-from-non-invasive-brain-recordings\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/10\\\/ai-decodes-speech-from-non-invasive-brain-recordings\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/10\\\/Brain-waves-to-speech.jpg\",\"datePublished\":\"2023-10-09T07:07:00+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/10\\\/ai-decodes-speech-from-non-invasive-brain-recordings\\\/#breadcrumb\"},\"inLanguage\":\"nb-NO\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2023\\\/10\\\/ai-decodes-speech-from-non-invasive-brain-recordings\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"nb-NO\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/10\\\/ai-decodes-speech-from-non-invasive-brain-recordings\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/10\\\/Brain-waves-to-speech.jpg\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/10\\\/Brain-waves-to-speech.jpg\",\"width\":1000,\"height\":864},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/10\\\/ai-decodes-speech-from-non-invasive-brain-recordings\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"AI decodes speech from non-invasive brain recordings\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI 
News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"nb-NO\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"nb-NO\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\",\"name\":\"Eugene van der Watt\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"nb-NO\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"caption\":\"Eugene van der Watt\"},\"description\":\"Eugene comes from an electronic engineering background and loves all things tech. 
When he takes a break from consuming AI news you'll find him at the snooker table.\",\"sameAs\":[\"www.linkedin.com\\\/in\\\/eugene-van-der-watt-16828119\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/nb\\\/author\\\/eugene\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"AI dekoder tale fra ikke-invasive hjerneopptak | DailyAI","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/nb\/2023\/10\/ai-decodes-speech-from-non-invasive-brain-recordings\/","og_locale":"nb_NO","og_type":"article","og_title":"AI decodes speech from non-invasive brain recordings | DailyAI","og_description":"Exactly how our brains process and formulate language is still largely a mystery. Researchers at Meta AI found a new way to measure brain waves and decode the words associated with them. People who have severely limited motor skills, like ALS sufferers, find it particularly challenging to communicate. The frustration of a person like Stephen Hawking painstakingly constructing a sentence with eye movements or twitching a cheek muscle is hard to imagine. A lot of research has been done to decode speech from brain activity, but the best results depend on invasive brain-computer implants. Meta AI researchers used magneto-encephalography (MEG)","og_url":"https:\/\/dailyai.com\/nb\/2023\/10\/ai-decodes-speech-from-non-invasive-brain-recordings\/","og_site_name":"DailyAI","article_published_time":"2023-10-09T07:07:00+00:00","og_image":[{"width":1000,"height":864,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/10\/Brain-waves-to-speech.jpg","type":"image\/jpeg"}],"author":"Eugene van der Watt","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"Skrevet av":"Eugene van der Watt","Ansl. 
lesetid":"3 minutter"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2023\/10\/ai-decodes-speech-from-non-invasive-brain-recordings\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2023\/10\/ai-decodes-speech-from-non-invasive-brain-recordings\/"},"author":{"name":"Eugene van der Watt","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa"},"headline":"AI decodes speech from non-invasive brain recordings","datePublished":"2023-10-09T07:07:00+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2023\/10\/ai-decodes-speech-from-non-invasive-brain-recordings\/"},"wordCount":519,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2023\/10\/ai-decodes-speech-from-non-invasive-brain-recordings\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/10\/Brain-waves-to-speech.jpg","keywords":["AI benefits","machine learning","MedTech","Meta"],"articleSection":["Industry"],"inLanguage":"nb-NO"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2023\/10\/ai-decodes-speech-from-non-invasive-brain-recordings\/","url":"https:\/\/dailyai.com\/2023\/10\/ai-decodes-speech-from-non-invasive-brain-recordings\/","name":"AI dekoder tale fra ikke-invasive hjerneopptak | 
DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2023\/10\/ai-decodes-speech-from-non-invasive-brain-recordings\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2023\/10\/ai-decodes-speech-from-non-invasive-brain-recordings\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/10\/Brain-waves-to-speech.jpg","datePublished":"2023-10-09T07:07:00+00:00","breadcrumb":{"@id":"https:\/\/dailyai.com\/2023\/10\/ai-decodes-speech-from-non-invasive-brain-recordings\/#breadcrumb"},"inLanguage":"nb-NO","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2023\/10\/ai-decodes-speech-from-non-invasive-brain-recordings\/"]}]},{"@type":"ImageObject","inLanguage":"nb-NO","@id":"https:\/\/dailyai.com\/2023\/10\/ai-decodes-speech-from-non-invasive-brain-recordings\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/10\/Brain-waves-to-speech.jpg","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/10\/Brain-waves-to-speech.jpg","width":1000,"height":864},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2023\/10\/ai-decodes-speech-from-non-invasive-brain-recordings\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"AI decodes speech from non-invasive brain recordings"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DagligAI","description":"Din daglige dose med 
AI-nyheter","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"nb-NO"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DagligAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"nb-NO","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa","name":"Eugene van der Watt","image":{"@type":"ImageObject","inLanguage":"nb-NO","@id":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","caption":"Eugene van der Watt"},"description":"Eugene har bakgrunn som elektroingeni\u00f8r og elsker alt som har med teknologi \u00e5 gj\u00f8re. 
N\u00e5r han tar en pause fra AI-nyhetene, finner du ham ved snookerbordet.","sameAs":["www.linkedin.com\/in\/eugene-van-der-watt-16828119"],"url":"https:\/\/dailyai.com\/nb\/author\/eugene\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/nb\/wp-json\/wp\/v2\/posts\/6254","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/nb\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/nb\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/nb\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/nb\/wp-json\/wp\/v2\/comments?post=6254"}],"version-history":[{"count":3,"href":"https:\/\/dailyai.com\/nb\/wp-json\/wp\/v2\/posts\/6254\/revisions"}],"predecessor-version":[{"id":6259,"href":"https:\/\/dailyai.com\/nb\/wp-json\/wp\/v2\/posts\/6254\/revisions\/6259"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/nb\/wp-json\/wp\/v2\/media\/6257"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/nb\/wp-json\/wp\/v2\/media?parent=6254"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/nb\/wp-json\/wp\/v2\/categories?post=6254"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/nb\/wp-json\/wp\/v2\/tags?post=6254"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}