{"id":6254,"date":"2023-10-09T07:07:00","date_gmt":"2023-10-09T07:07:00","guid":{"rendered":"https:\/\/dailyai.com\/?p=6254"},"modified":"2023-10-09T07:07:00","modified_gmt":"2023-10-09T07:07:00","slug":"ai-decodes-speech-from-non-invasive-brain-recordings","status":"publish","type":"post","link":"https:\/\/dailyai.com\/es\/2023\/10\/ai-decodes-speech-from-non-invasive-brain-recordings\/","title":{"rendered":"AI decodes speech from non-invasive brain recordings"},"content":{"rendered":"<p><strong>Exactly how our brains process and formulate language is still largely a mystery. Researchers at Meta AI have found a new way to measure brain waves and decode the words associated with them.<\/strong><\/p>\n<p>People with severely limited motor skills, like ALS sufferers, find it particularly challenging to communicate. It is hard to imagine the frustration of a person like Stephen Hawking painstakingly constructing a sentence with eye movements or the twitch of a cheek muscle.<\/p>\n<p>A lot of research has been done to <a href=\"https:\/\/dailyai.com\/es\/2023\/08\/unbabels-ai-tool-makes-telepathic-communication-possible\/\">decode speech from brain activity<\/a>, but the best results depend on invasive <a href=\"https:\/\/dailyai.com\/es\/2023\/08\/ai-replenishes-speech-and-facial-expressions-of-stroke-survivor\/\">brain-computer implants.<\/a><\/p>\n<p>Meta AI researchers used magnetoencephalography (MEG) and electroencephalography (EEG) to record the brain waves of 175 volunteers while they listened to short stories and isolated sentences.<\/p>\n<p>They used a pretrained speech model and contrastive learning to identify which brain-wave patterns were associated with the specific words the subjects were hearing.<\/p>\n<blockquote class=\"twitter-tweet\">\n<p dir=\"ltr\" lang=\"en\" style=\"text-align: center;\">Decoding speech perception from non-invasive brain recordings\",<br \/>\nled by the one and only <a href=\"https:\/\/twitter.com\/honualx?ref_src=twsrc%5Etfw\">@honualx<\/a><br \/>\njust out in the latest issue of Nature Machine Intelligence:<\/p>\n<p>- open-access paper: <a href=\"https:\/\/t.co\/1jtpTezQzM\">https:\/\/t.co\/1jtpTezQzM<\/a><br \/>\n- full training code: <a href=\"https:\/\/t.co\/Al2alBxeUC\">https:\/\/t.co\/Al2alBxeUC<\/a> <a href=\"https:\/\/t.co\/imLxRjRQ6h\">pic.twitter.com\/imLxRjRQ6h<\/a><\/p>\n<p style=\"text-align: center;\">- Jean-R\u00e9mi King (@JeanRemiKing) <a href=\"https:\/\/twitter.com\/JeanRemiKing\/status\/1709957806030311798?ref_src=twsrc%5Etfw\">October 5, 2023<\/a><\/p>\n<\/blockquote>\n<p><script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/p>\n<p>The researchers split the audio into 3-second segments and tested their model to see if it could correctly identify which of 1,500 segments a volunteer was listening to. The model predicted a kind of word cloud in which the most likely word carried the most weight.<\/p>\n<p>They achieved an average accuracy of 41%, and an accuracy of 95.9% with their best participants.<\/p>\n<figure id=\"attachment_6256\" aria-describedby=\"caption-attachment-6256\" style=\"width: 1024px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-6256 size-large\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/10\/Speech-predictions-from-brain-waves-1024x578.webp\" alt=\"Speech predictions from brain waves\" width=\"1024\" height=\"578\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/10\/Speech-predictions-from-brain-waves-1024x578.webp 1024w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/10\/Speech-predictions-from-brain-waves-300x169.webp 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/10\/Speech-predictions-from-brain-waves-768x434.webp 768w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/10\/Speech-predictions-from-brain-waves-1536x867.webp 1536w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/10\/Speech-predictions-from-brain-waves-2048x1156.webp 2048w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/10\/Speech-predictions-from-brain-waves-370x209.webp 370w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/10\/Speech-predictions-from-brain-waves-800x452.webp 800w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/10\/Speech-predictions-from-brain-waves-20x11.webp 20w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/10\/Speech-predictions-from-brain-waves-740x418.webp 740w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/10\/Speech-predictions-from-brain-waves-1600x903.webp 1600w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/10\/Speech-predictions-from-brain-waves-1320x745.webp 1320w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/10\/Speech-predictions-from-brain-waves-85x48.webp 85w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><figcaption id=\"caption-attachment-6256\" class=\"wp-caption-text\">Word-level predictions while participants listened to the sentence \"Thank you for coming, Ed\". Blue words correspond to the correct word, black words to the negative candidates. Text size is proportional to the model's log-probability. Source: <a href=\"https:\/\/www.nature.com\/articles\/s42256-023-00714-5\/figures\/3\" target=\"_blank\" rel=\"noopener\">Nature<\/a><\/figcaption><\/figure>\n<p>The research shows that it is possible to get a fairly good idea of the speech a person is hearing, but to be useful the process now needs to be reversed: we need to measure a person's brain waves and tell which word they are thinking of.<\/p>\n<p>The paper suggests training a neural network while subjects produce words by speaking or typing. That general model could then be used to make sense of the brain waves, and the associated words, that an ALS sufferer was thinking of.<\/p>\n<p>The researchers were able to identify speech segments from a predetermined, limited set. Proper communication would require identifying many more words. Using generative AI to predict the next most likely word a person is trying to say could help with that.<\/p>\n<p>While the process was non-invasive, it did require being hooked up to a <a href=\"https:\/\/dailyai.com\/es\/2023\/08\/ai-mind-reading-medical-breakthrough-or-step-towards-dystopia\/\">MEG device<\/a>. Unfortunately, the results from the EEG measurements weren't very good.<\/p>\n<p>The research holds promise that AI could eventually be used to help those without a voice, like ALS sufferers, to communicate. Using a pretrained model also avoided the need for painstaking word-by-word training.<\/p>\n<p>Meta AI made the model and data publicly available, so hopefully other researchers will build on their work.<\/p>","protected":false},"excerpt":{"rendered":"<p>Exactly how our brains process and formulate language is still largely a mystery. Researchers at Meta AI found a new way to measure brain waves and decode the words associated with them. People who have severely limited motor skills, like ALS sufferers, find it particularly challenging to communicate. The frustration of a person like Stephen Hawking painstakingly constructing a sentence with eye movements or twitching a cheek muscle is hard to imagine. A lot of research has been done to decode speech from brain activity, but the best results depend on invasive brain-computer implants. 
Meta AI researchers used magnetoencephalography (MEG)<\/p>","protected":false},"author":6,"featured_media":6257,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[84],"tags":[150,105,101,131],"class_list":["post-6254","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-industry","tag-ai-benefits","tag-machine-learning","tag-medtech","tag-meta"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>AI decodes speech from non-invasive brain recordings | DailyAI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/es\/2023\/10\/ai-decodes-speech-from-non-invasive-brain-recordings\/\" \/>\n<meta property=\"og:locale\" content=\"es_ES\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"AI decodes speech from non-invasive brain recordings | DailyAI\" \/>\n<meta property=\"og:description\" content=\"Exactly how our brains process and formulate language is still largely a mystery. Researchers at Meta AI found a new way to measure brain waves and decode the words associated with them. People who have severely limited motor skills, like ALS sufferers, find it particularly challenging to communicate. The frustration of a person like Stephen Hawking painstakingly constructing a sentence with eye movements or twitching a cheek muscle is hard to imagine. A lot of research has been done to decode speech from brain activity, but the best results depend on invasive brain-computer implants. 
Meta AI researchers used magneto-encephalography (MEG)\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/es\/2023\/10\/ai-decodes-speech-from-non-invasive-brain-recordings\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2023-10-09T07:07:00+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/10\/Brain-waves-to-speech.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1000\" \/>\n\t<meta property=\"og:image:height\" content=\"864\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Eugene van der Watt\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Eugene van der Watt\" \/>\n\t<meta name=\"twitter:label2\" content=\"Reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/10\\\/ai-decodes-speech-from-non-invasive-brain-recordings\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/10\\\/ai-decodes-speech-from-non-invasive-brain-recordings\\\/\"},\"author\":{\"name\":\"Eugene van der Watt\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\"},\"headline\":\"AI decodes speech from non-invasive brain 
recordings\",\"datePublished\":\"2023-10-09T07:07:00+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/10\\\/ai-decodes-speech-from-non-invasive-brain-recordings\\\/\"},\"wordCount\":519,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/10\\\/ai-decodes-speech-from-non-invasive-brain-recordings\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/10\\\/Brain-waves-to-speech.jpg\",\"keywords\":[\"AI benefits\",\"machine learning\",\"MedTech\",\"Meta\"],\"articleSection\":[\"Industry\"],\"inLanguage\":\"es\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/10\\\/ai-decodes-speech-from-non-invasive-brain-recordings\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/10\\\/ai-decodes-speech-from-non-invasive-brain-recordings\\\/\",\"name\":\"AI decodes speech from non-invasive brain recordings | DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/10\\\/ai-decodes-speech-from-non-invasive-brain-recordings\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/10\\\/ai-decodes-speech-from-non-invasive-brain-recordings\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/10\\\/Brain-waves-to-speech.jpg\",\"datePublished\":\"2023-10-09T07:07:00+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/10\\\/ai-decodes-speech-from-non-invasive-brain-recordings\\\/#breadcrumb\"},\"inLanguage\":\"es\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2023\\\/10\\\/ai-decodes-speech-from-non-invasive-brain-recordings\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"es\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/10\\\/ai-decodes-speech-from-non-invasive-brain-recordings\\\/#primary
image\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/10\\\/Brain-waves-to-speech.jpg\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/10\\\/Brain-waves-to-speech.jpg\",\"width\":1000,\"height\":864},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/10\\\/ai-decodes-speech-from-non-invasive-brain-recordings\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"AI decodes speech from non-invasive brain recordings\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"es\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"es\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@DailyAIOf
ficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\",\"name\":\"Eugene van der Watt\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"es\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"caption\":\"Eugene van der Watt\"},\"description\":\"Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.\",\"sameAs\":[\"www.linkedin.com\\\/in\\\/eugene-van-der-watt-16828119\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/es\\\/author\\\/eugene\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"AI decodes speech from non-invasive brain recordings | DailyAI","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/es\/2023\/10\/ai-decodes-speech-from-non-invasive-brain-recordings\/","og_locale":"es_ES","og_type":"article","og_title":"AI decodes speech from non-invasive brain recordings | DailyAI","og_description":"Exactly how our brains process and formulate language is still largely a mystery. Researchers at Meta AI found a new way to measure brain waves and decode the words associated with them. People who have severely limited motor skills, like ALS sufferers, find it particularly challenging to communicate. The frustration of a person like Stephen Hawking painstakingly constructing a sentence with eye movements or twitching a cheek muscle is hard to imagine. 
A lot of research has been done to decode speech from brain activity, but the best results depend on invasive brain-computer implants. Meta AI researchers used magneto-encephalography (MEG)","og_url":"https:\/\/dailyai.com\/es\/2023\/10\/ai-decodes-speech-from-non-invasive-brain-recordings\/","og_site_name":"DailyAI","article_published_time":"2023-10-09T07:07:00+00:00","og_image":[{"width":1000,"height":864,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/10\/Brain-waves-to-speech.jpg","type":"image\/jpeg"}],"author":"Eugene van der Watt","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"Written by":"Eugene van der Watt","Reading time":"3 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2023\/10\/ai-decodes-speech-from-non-invasive-brain-recordings\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2023\/10\/ai-decodes-speech-from-non-invasive-brain-recordings\/"},"author":{"name":"Eugene van der Watt","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa"},"headline":"AI decodes speech from non-invasive brain recordings","datePublished":"2023-10-09T07:07:00+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2023\/10\/ai-decodes-speech-from-non-invasive-brain-recordings\/"},"wordCount":519,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2023\/10\/ai-decodes-speech-from-non-invasive-brain-recordings\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/10\/Brain-waves-to-speech.jpg","keywords":["AI benefits","machine 
learning","MedTech","Meta"],"articleSection":["Industry"],"inLanguage":"es"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2023\/10\/ai-decodes-speech-from-non-invasive-brain-recordings\/","url":"https:\/\/dailyai.com\/2023\/10\/ai-decodes-speech-from-non-invasive-brain-recordings\/","name":"AI decodes speech from non-invasive brain recordings | DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2023\/10\/ai-decodes-speech-from-non-invasive-brain-recordings\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2023\/10\/ai-decodes-speech-from-non-invasive-brain-recordings\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/10\/Brain-waves-to-speech.jpg","datePublished":"2023-10-09T07:07:00+00:00","breadcrumb":{"@id":"https:\/\/dailyai.com\/2023\/10\/ai-decodes-speech-from-non-invasive-brain-recordings\/#breadcrumb"},"inLanguage":"es","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2023\/10\/ai-decodes-speech-from-non-invasive-brain-recordings\/"]}]},{"@type":"ImageObject","inLanguage":"es","@id":"https:\/\/dailyai.com\/2023\/10\/ai-decodes-speech-from-non-invasive-brain-recordings\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/10\/Brain-waves-to-speech.jpg","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/10\/Brain-waves-to-speech.jpg","width":1000,"height":864},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2023\/10\/ai-decodes-speech-from-non-invasive-brain-recordings\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"AI decodes speech from non-invasive brain recordings"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DailyAI","description":"Your Daily Dose of AI News","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"es"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DailyAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"es","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa","name":"Eugene van der Watt","image":{"@type":"ImageObject","inLanguage":"es","@id":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","caption":"Eugene van der Watt"},"description":"Eugene comes from an electronic engineering background and loves all things tech. 
When he takes a break from consuming AI news you'll find him at the snooker table.","sameAs":["www.linkedin.com\/in\/eugene-van-der-watt-16828119"],"url":"https:\/\/dailyai.com\/es\/author\/eugene\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/es\/wp-json\/wp\/v2\/posts\/6254","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/es\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/es\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/es\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/es\/wp-json\/wp\/v2\/comments?post=6254"}],"version-history":[{"count":3,"href":"https:\/\/dailyai.com\/es\/wp-json\/wp\/v2\/posts\/6254\/revisions"}],"predecessor-version":[{"id":6259,"href":"https:\/\/dailyai.com\/es\/wp-json\/wp\/v2\/posts\/6254\/revisions\/6259"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/es\/wp-json\/wp\/v2\/media\/6257"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/es\/wp-json\/wp\/v2\/media?parent=6254"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/es\/wp-json\/wp\/v2\/categories?post=6254"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/es\/wp-json\/wp\/v2\/tags?post=6254"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}