NYU researchers build a groundbreaking AI speech synthesis system

By Sam Jeans | DailyAI | April 11, 2024

A team of researchers from New York University has made progress in neural speech decoding, bringing us closer to a future in which individuals who have lost the ability to speak can regain their voice.

The study, published in Nature Machine Intelligence (https://www.nature.com/articles/s42256-024-00824-8), presents a novel deep learning framework that accurately translates brain signals into intelligible speech.

People with brain injuries from strokes, degenerative conditions, or physical trauma can use these systems to communicate by decoding their thoughts or intended speech from neural signals.

The NYU team's system involves a deep learning model that maps electrocorticography (ECoG) signals from the brain to speech features such as pitch, loudness, and other spectral content.

The second stage consists of a neural speech synthesizer that converts the extracted speech features into an audible spectrogram, which can then be transformed into a speech waveform.

Finally, that waveform can be converted into natural-sounding synthesized speech.

"New paper out today in @NatMachIntell where we show robust neural speech decoding across 48 patients. https://t.co/rNPAMr4l68" - Adeen Flinker (@adeenflinker), April 9, 2024

How the study works

The study involves training an AI model that can power a speech synthesis device, enabling people who have lost the ability to speak to communicate through their brain's electrical impulses.

Here's how it works in more detail:

1. Collecting brain data

The first step is gathering the raw data needed to train the speech decoding model. The researchers worked with 48 participants undergoing neurosurgery for epilepsy.

During the study, these participants were asked to read hundreds of sentences aloud while their brain activity was recorded using ECoG grids.
These grids sit directly on the surface of the brain and capture electrical signals from the brain regions involved in speech production.

2. Mapping brain signals to speech features

Using the speech data, the researchers developed a sophisticated AI model that maps the recorded brain signals to specific speech features, such as pitch, loudness, and the unique frequencies that make up different speech sounds.

3. Synthesizing speech from features

The third step focuses on converting the speech features extracted from the brain signals into audible speech.

The researchers used a special speech synthesizer that takes the extracted features and generates a spectrogram, a visual representation of speech sounds.

4. Evaluating the results

The researchers compared the speech generated by their model with the participants' original speech.

They used objective metrics to measure the similarity between the two and found that the generated speech matched the content and rhythm of the original.

5. Testing with new words

To make sure the model can handle words it has never seen, certain words were intentionally withheld during training, and the model's performance was then tested on these unseen words.

The model's ability to accurately decode even new words demonstrates its potential to generalize and handle varied speech patterns.

[Figure: The NYU speech synthesis system. Source: Nature (open access), https://www.nature.com/articles/s42256-024-00824-8]

The top part of the diagram describes the process of converting brain signals into speech. First, a decoder turns these signals into speech parameters over time. Then, a synthesizer creates spectrograms from those parameters.
Another tool transforms those spectrograms back into sound waves.

The last section of the diagram covers a system that helps train the brain signal decoder by mimicking speech. It takes a spectrogram, turns it into speech parameters, and uses those to create a new spectrogram. This part of the system learns from real speech recordings to improve.

After training, only the top process is needed to turn brain signals into speech.

A key advantage of the NYU system is that it achieves high-quality speech decoding without ultra-high-density electrode arrays, which are impractical for long-term use.

In essence, it offers a lighter, more portable solution.

Another achievement is the successful decoding of speech from both the left and right hemispheres of the brain, which is important for patients with damage on one side of the brain.

Turning thoughts into speech with AI

The NYU study builds on previous research into neural speech decoding and brain-computer interfaces (BCIs).

In 2023, a team from the University of California, San Francisco, enabled a paralyzed stroke survivor to generate sentences at a rate of 78 words per minute, using a BCI that synthesized both vocalizations and facial expressions from brain signals (https://dailyai.com/es/2023/08/ai-replenishes-speech-and-facial-expressions-of-stroke-survivor/).

Other recent studies have explored using AI to interpret various aspects of human thought from brain activity. Researchers have demonstrated the ability to generate images, text, and even music from fMRI and electroencephalography (EEG) data.

For example, a University of Helsinki study used EEG signals to guide a generative adversarial network (GAN) in producing facial images that matched participants' thoughts (https://dailyai.com/es/2023/08/ai-mind-reading-medical-breakthrough-or-step-towards-dystopia/).

Meta AI also developed a technique for partially decoding what someone was listening to from brain waves collected non-invasively (https://dailyai.com/es/2023/10/ai-decodes-speech-from-non-invasive-brain-recordings/).

Opportunities and challenges

The NYU method uses electrodes that are more clinically viable and easier to obtain than those of previous approaches, making the technology more accessible.

While this is exciting, major obstacles remain before we see widespread use.

For one, collecting high-quality brain data is complex and laborious. Individual differences in brain activity make generalization difficult, meaning a model trained on one group of participants may not perform well on another.

Nonetheless, the NYU study represents a step forward by demonstrating highly accurate speech decoding with lighter electrode arrays.

Looking ahead, the NYU team aims to refine its models for real-time speech decoding, bringing us closer to the ultimate goal of natural, fluent conversation for people with speech impairments.

They also intend to adapt the system for wireless implantable devices that can be used in everyday life.
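The pipeline the article describes (brain signals → speech features → spectrogram → waveform) can be sketched in miniature. This is an illustrative toy, not the NYU team's actual model: the electrode count, frame counts, and random linear "decoder" are invented for the example, and the zero-phase spectrogram inversion stands in for the learned neural vocoder a real system would use.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage 0: a stand-in ECoG recording — 64 electrodes over 100 time frames.
ecog = rng.standard_normal((100, 64))

# Stage 1: "decoder" — maps each ECoG frame to speech features.
# In the real system this is a learned deep network; here it is a
# random linear projection onto 33 spectral bins, purely for shape.
decoder_weights = rng.standard_normal((64, 33)) * 0.1
speech_features = ecog @ decoder_weights              # (100, 33)

# Stage 2: "synthesizer" — treat the features as a magnitude
# spectrogram: one column of 33 frequency bins per time frame.
spectrogram = np.abs(speech_features)                 # (100, 33)

# Stage 3: invert each spectral frame to 64 audio samples (assuming
# zero phase) and overlap-add them with a hop of 32 samples.
frame_len, hop = 64, 32
frames = np.fft.irfft(spectrogram, n=frame_len)       # (100, 64)
waveform = np.zeros((len(frames) - 1) * hop + frame_len)
for i, frame in enumerate(frames):
    waveform[i * hop : i * hop + frame_len] += frame * np.hanning(frame_len)

print(waveform.shape)  # (3232,) — about 0.2 s of audio at 16 kHz
```

The structure mirrors the two-stage design in the paper: everything up to the spectrogram is the decoder's job, and waveform generation is a separate, swappable synthesis step.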