{"id":11499,"date":"2024-04-11T19:06:18","date_gmt":"2024-04-11T19:06:18","guid":{"rendered":"https:\/\/dailyai.com\/?p=11499"},"modified":"2024-04-12T10:16:58","modified_gmt":"2024-04-12T10:16:58","slug":"nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system","status":"publish","type":"post","link":"https:\/\/dailyai.com\/pt\/2024\/04\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\/","title":{"rendered":"Investigadores da NYU criam um sistema inovador de s\u00edntese da fala com IA"},"content":{"rendered":"<p><b>Uma equipa de investigadores da Universidade de Nova Iorque fez progressos na descodifica\u00e7\u00e3o neural da fala, aproximando-nos de um futuro em que as pessoas que perderam a capacidade de falar podem recuperar a voz.\u00a0<\/b><\/p>\n<p><span style=\"font-weight: 400;\">O <\/span><a href=\"https:\/\/www.nature.com\/articles\/s42256-024-00824-8\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">estudo<\/span><\/a><span style=\"font-weight: 400;\">, publicado em <em>Natureza Intelig\u00eancia artificial<\/em>apresenta uma nova estrutura de aprendizagem profunda que traduz com precis\u00e3o os sinais cerebrais em discurso intelig\u00edvel.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">As pessoas com les\u00f5es cerebrais provocadas por acidentes vasculares cerebrais, doen\u00e7as degenerativas ou traumatismos f\u00edsicos podem utilizar estes sistemas para comunicar, descodificando os seus pensamentos ou o discurso pretendido a partir de sinais neurais.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">O sistema da equipa da NYU envolve um modelo de aprendizagem profunda que mapeia os sinais de electrocorticografia (ECoG) do c\u00e9rebro para caracter\u00edsticas da fala, como o tom, o volume e outros conte\u00fados espectrais.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">A segunda fase envolve um sintetizador de fala neural que converte as caracter\u00edsticas de fala extra\u00eddas 
num espetrograma aud\u00edvel, que pode depois ser transformado numa forma de onda de fala.\u00a0<\/span><\/p>\n<p>Essa forma de onda pode finalmente ser convertida em fala sintetizada com som natural.<\/p>\n<blockquote class=\"twitter-tweet\">\n<p dir=\"ltr\" lang=\"en\">Novo artigo publicado hoje no <a href=\"https:\/\/twitter.com\/NatMachIntell?ref_src=twsrc%5Etfw\">@NatMachIntell<\/a>onde mostramos uma descodifica\u00e7\u00e3o neural robusta para a fala em 48 pacientes. <a href=\"https:\/\/t.co\/rNPAMr4l68\">https:\/\/t.co\/rNPAMr4l68<\/a> <a href=\"https:\/\/t.co\/FG7QKCBVzp\">pic.twitter.com\/FG7QKCBVzp<\/a><\/p>\n<p>- Adeen Flinker \ud83c\uddee\ud83c\uddf1\ud83c\uddfa\ud83c\udde6\ud83c\udf97\ufe0f (@adeenflinker) <a href=\"https:\/\/twitter.com\/adeenflinker\/status\/1777513445304193367?ref_src=twsrc%5Etfw\">9 de abril de 2024<\/a><\/p><\/blockquote>\n<p><script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/p>\n<h2>Como funciona o estudo<\/h2>\n<p><span style=\"font-weight: 400;\">Este estudo envolve o treino de um modelo de IA que pode alimentar um dispositivo de s\u00edntese de fala, permitindo que as pessoas com perda de fala falem utilizando impulsos el\u00e9ctricos do seu c\u00e9rebro.\u00a0<\/span><\/p>\n<p>Eis como funciona em mais pormenor:<\/p>\n<p><b>1. Recolha de dados cerebrais<\/b><\/p>\n<p><span style=\"font-weight: 400;\">O primeiro passo consiste em recolher os dados em bruto necess\u00e1rios para treinar o modelo de descodifica\u00e7\u00e3o da fala. Os investigadores trabalharam com 48 participantes que estavam a ser submetidos a uma neurocirurgia para tratamento da epilepsia. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">Durante o estudo, foi pedido a estes participantes que lessem centenas de frases em voz alta, enquanto a sua atividade cerebral era registada atrav\u00e9s de grelhas de ECoG. 
<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Estas grelhas s\u00e3o colocadas diretamente na superf\u00edcie do c\u00e9rebro e captam sinais el\u00e9ctricos das regi\u00f5es cerebrais envolvidas na produ\u00e7\u00e3o da fala.<\/span><\/p>\n<p><b>2. Mapeamento dos sinais cerebrais para a fala<\/b><\/p>\n<p><span style=\"font-weight: 400;\">Utilizando dados da fala, os investigadores desenvolveram um modelo sofisticado de IA que mapeia os sinais cerebrais registados para caracter\u00edsticas espec\u00edficas da fala, como o tom, o volume e as frequ\u00eancias \u00fanicas que comp\u00f5em os diferentes sons da fala.\u00a0<\/span><\/p>\n<p><b>3. Sintetizar o discurso a partir de caracter\u00edsticas<\/b><\/p>\n<p><span style=\"font-weight: 400;\">A terceira etapa consiste em converter as caracter\u00edsticas da fala extra\u00eddas dos sinais cerebrais em fala aud\u00edvel. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">Os investigadores utilizaram um sintetizador de fala especial que utiliza as caracter\u00edsticas extra\u00eddas e gera um espetrograma - uma representa\u00e7\u00e3o visual dos sons da fala.\u00a0<\/span><\/p>\n<p><b>4. Avalia\u00e7\u00e3o dos resultados<\/b><\/p>\n<p><span style=\"font-weight: 400;\">Os investigadores compararam o discurso gerado pelo seu modelo com o discurso original falado pelos participantes. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">Utilizaram m\u00e9tricas objectivas para medir a semelhan\u00e7a entre os dois e conclu\u00edram que o discurso gerado correspondia de perto ao conte\u00fado e ao ritmo do original.\u00a0<\/span><\/p>\n<p><b>5. Testes com palavras novas<\/b><\/p>\n<p><span style=\"font-weight: 400;\">Para garantir que o modelo consegue lidar com palavras novas que nunca viu antes, algumas palavras foram intencionalmente omitidas durante a fase de treino do modelo e, em seguida, foi testado o desempenho do modelo nestas palavras n\u00e3o vistas. 
<\/span><\/p>\n<p><span style=\"font-weight: 400;\">A capacidade do modelo para descodificar com precis\u00e3o mesmo palavras novas demonstra o seu potencial para generalizar e lidar com diversos padr\u00f5es de discurso.<\/span><\/p>\n<figure id=\"attachment_11500\" aria-describedby=\"caption-attachment-11500\" style=\"width: 1024px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-11500 size-large\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/42256_2024_824_Fig1_HTML-1024x397.webp\" alt=\"Discurso de IA\" width=\"1024\" height=\"397\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/42256_2024_824_Fig1_HTML-1024x397.webp 1024w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/42256_2024_824_Fig1_HTML-300x116.webp 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/42256_2024_824_Fig1_HTML-768x298.webp 768w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/42256_2024_824_Fig1_HTML-1536x596.webp 1536w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/42256_2024_824_Fig1_HTML-60x23.webp 60w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/42256_2024_824_Fig1_HTML.webp 1622w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><figcaption id=\"caption-attachment-11500\" class=\"wp-caption-text\">O sistema de s\u00edntese de voz da NYU. Fonte: <a href=\"https:\/\/www.nature.com\/articles\/s42256-024-00824-8\">Natureza<\/a> (acesso livre)<\/figcaption><\/figure>\n<p>A sec\u00e7\u00e3o superior do diagrama acima descreve um processo de convers\u00e3o de sinais cerebrais em fala. Primeiro, um descodificador transforma estes sinais em par\u00e2metros de fala ao longo do tempo. Depois, um sintetizador cria imagens sonoras (espectrogramas) a partir destes par\u00e2metros. 
Another tool transforms these images back into sound waves.<\/p>\n<p>The bottom section covers a system that helps train the brain-signal decoder by mimicking speech. It takes a sound image, turns it into speech parameters, and then uses them to create a new sound image. This part of the system learns from real speech sounds in order to improve.<\/p>\n<p>After training, only the top process is needed to turn brain signals into speech.<\/p>\n<p><span style=\"font-weight: 400;\">One of the main advantages of the NYU system is its ability to achieve high-quality speech decoding without the need for ultra-high-density electrode arrays, which are impractical for long-term use. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">In essence, it offers a lighter, more portable solution.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Another achievement is the successful decoding of speech from both the left and right hemispheres of the brain, which is important for patients with brain damage on one side of the brain.\u00a0<\/span><\/p>\n<h2>Converting thoughts into speech using AI<\/h2>\n<p><span style=\"font-weight: 400;\">The NYU study builds on previous research into neural speech decoding and brain-computer interfaces (BCIs).\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">In 2023, a team from the University of California, San Francisco, enabled a paralyzed stroke survivor to <\/span><a href=\"https:\/\/dailyai.com\/pt\/2023\/08\/ai-replenishes-speech-and-facial-expressions-of-stroke-survivor\/\"><span style=\"font-weight: 400;\">generate sentences<\/span><\/a><span style=\"font-weight: 400;\"> at a rate of 78 words per minute, using a BCI that synthesized vocalizations and facial expressions from brain signals.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Other recent studies have explored using AI to interpret various aspects of human thought from brain activity. Researchers have demonstrated the ability to generate images, text, and even music from MRI and electroencephalogram (EEG) data taken from the brain. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">For example, a <\/span><a href=\"https:\/\/dailyai.com\/pt\/2023\/08\/ai-mind-reading-medical-breakthrough-or-step-towards-dystopia\/\"><span style=\"font-weight: 400;\">University of Helsinki study<\/span><\/a><span style=\"font-weight: 400;\"> used EEG signals to guide a generative adversarial network (GAN) in producing facial images that matched the participants&#8217; thoughts.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Meta AI has also <\/span><a href=\"https:\/\/dailyai.com\/pt\/2023\/10\/ai-decodes-speech-from-non-invasive-brain-recordings\/\"><span style=\"font-weight: 400;\">developed a technique<\/span><\/a><span style=\"font-weight: 400;\"> for partially decoding what someone was hearing using non-invasively collected brain waves.<\/span><\/p>\n<h2>Opportunities and challenges<\/h2>\n<p>The NYU method uses more widely available and clinically viable electrodes than previous methods, making it more accessible.<\/p>\n<p><span style=\"font-weight: 400;\">While this is exciting, there are major obstacles to overcome before we see widespread use.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">For one, collecting high-quality brain data is a complex and time-consuming endeavor. Individual differences in brain activity make generalization difficult, meaning a model trained on one group of participants may not work well for another.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Nevertheless, the NYU study represents a step forward in this direction by demonstrating high-accuracy speech decoding using lighter electrode arrays.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Looking ahead, the NYU team aims to refine its models for real-time speech decoding, bringing us closer to the ultimate goal of enabling natural, fluent conversations for individuals with speech impairments.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">They also aim to adapt the system to implantable wireless devices that can be used in everyday life.<\/span><\/p>","protected":false},"excerpt":{"rendered":"<p>A team of researchers from New York University has made progress in neural speech decoding, bringing us closer to a future in which individuals who have lost the ability to speak can regain their voice.\u00a0 The study, published in Nature Machine Intelligence, presents a novel deep learning framework that accurately translates brain signals into intelligible speech.\u00a0 People with brain injuries from strokes, degenerative conditions, or physical trauma can use these systems to communicate by decoding their thoughts or intended speech from neural signals. 
The NYU team&#8217;s system involves a deep learning model that maps the electrocorticography (ECoG) signals from the<\/p>","protected":false},"author":2,"featured_media":11501,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[84],"tags":[203,204,178],"class_list":["post-11499","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-industry","tag-biotech","tag-healthcare","tag-medicine"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>NYU researchers build a groundbreaking AI speech synthesis system | DailyAI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/pt\/2024\/04\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\/\" \/>\n<meta property=\"og:locale\" content=\"pt_PT\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"NYU researchers build a groundbreaking AI speech synthesis system | DailyAI\" \/>\n<meta property=\"og:description\" content=\"A team of researchers from New York University has made progress in neural speech decoding, bringing us closer to a future in which individuals who have lost the ability to speak can regain their voice.\u00a0 The study, published in Nature Machine Intelligence, presents a novel deep learning framework that accurately translates brain signals into intelligible speech.\u00a0 People with brain injuries from strokes, degenerative conditions, or physical trauma can use these systems to communicate by decoding their thoughts or intended speech from neural signals. 
The NYU team&#8217;s system involves a deep learning model that maps the electrocorticography (ECoG) signals from the\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/pt\/2024\/04\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2024-04-11T19:06:18+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2024-04-12T10:16:58+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/DALL\u00b7E-2024-04-11-20.05.57-A-minimalist-scene-showing-an-abstract-representation-of-a-brain-interconnected-with-audio-waves-symbolizing-speech.-The-brain-is-depicted-in-the-cent.webp\" \/>\n\t<meta property=\"og:image:width\" content=\"1792\" \/>\n\t<meta property=\"og:image:height\" content=\"1024\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/webp\" \/>\n<meta name=\"author\" content=\"Sam Jeans\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Escrito por\" \/>\n\t<meta name=\"twitter:data1\" content=\"Sam Jeans\" \/>\n\t<meta name=\"twitter:label2\" content=\"Tempo estimado de leitura\" \/>\n\t<meta name=\"twitter:data2\" content=\"5 minutos\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\\\/\"},\"author\":{\"name\":\"Sam 
Jeans\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/711e81f945549438e8bbc579efdeb3c9\"},\"headline\":\"NYU researchers build a groundbreaking AI speech synthesis system\",\"datePublished\":\"2024-04-11T19:06:18+00:00\",\"dateModified\":\"2024-04-12T10:16:58+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\\\/\"},\"wordCount\":970,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/04\\\/DALL\u00b7E-2024-04-11-20.05.57-A-minimalist-scene-showing-an-abstract-representation-of-a-brain-interconnected-with-audio-waves-symbolizing-speech.-The-brain-is-depicted-in-the-cent.webp\",\"keywords\":[\"Biotech\",\"Healthcare\",\"Medicine\"],\"articleSection\":[\"Industry\"],\"inLanguage\":\"pt-PT\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\\\/\",\"name\":\"NYU researchers build a groundbreaking AI speech synthesis system | 
DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/04\\\/DALL\u00b7E-2024-04-11-20.05.57-A-minimalist-scene-showing-an-abstract-representation-of-a-brain-interconnected-with-audio-waves-symbolizing-speech.-The-brain-is-depicted-in-the-cent.webp\",\"datePublished\":\"2024-04-11T19:06:18+00:00\",\"dateModified\":\"2024-04-12T10:16:58+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\\\/#breadcrumb\"},\"inLanguage\":\"pt-PT\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-PT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/04\\\/DALL\u00b7E-2024-04-11-20.05.57-A-minimalist-scene-showing-an-abstract-representation-of-a-brain-interconnected-with-audio-waves-symbolizing-speech.-The-brain-is-depicted-in-the-cent.webp\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/04\\\/DALL\u00b7E-2024-04-11-20.05.57-A-minimalist-scene-showing-an-abstract-representation-of-a-brain-interconnected-with-audio-waves-symbolizing-speech.-The-brain-is-depicted-in-the-cent.webp\",\"width\":1792,\"height\":1024,\"caption\":\"AI 
speech\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"NYU researchers build a groundbreaking AI speech synthesis system\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"pt-PT\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-PT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/711e81f945549438e8bbc579efdeb3c9\",\"name\":\"Sam 
Jeans\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-PT\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g\",\"caption\":\"Sam Jeans\"},\"description\":\"Sam is a science and technology writer who has worked in various AI startups. When he\u2019s not writing, he can be found reading medical journals or digging through boxes of vinyl records.\",\"sameAs\":[\"https:\\\/\\\/www.linkedin.com\\\/in\\\/sam-jeans-6746b9142\\\/\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/pt\\\/author\\\/samjeans\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Investigadores da NYU criam um sistema inovador de s\u00edntese de discurso com IA | DailyAI","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/pt\/2024\/04\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\/","og_locale":"pt_PT","og_type":"article","og_title":"NYU researchers build a groundbreaking AI speech synthesis system | DailyAI","og_description":"A team of researchers from New York University has made progress in neural speech decoding, bringing us closer to a future in which individuals who have lost the ability to speak can regain their voice.\u00a0 The study, published in Nature Machine Intelligence, presents a novel deep learning framework that accurately translates brain signals into intelligible speech.\u00a0 People with brain injuries from strokes, degenerative conditions, or physical trauma can use these systems to communicate by 
decoding their thoughts or intended speech from neural signals. The NYU team&#8217;s system involves a deep learning model that maps the electrocorticography (ECoG) signals from the","og_url":"https:\/\/dailyai.com\/pt\/2024\/04\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\/","og_site_name":"DailyAI","article_published_time":"2024-04-11T19:06:18+00:00","article_modified_time":"2024-04-12T10:16:58+00:00","og_image":[{"width":1792,"height":1024,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/DALL\u00b7E-2024-04-11-20.05.57-A-minimalist-scene-showing-an-abstract-representation-of-a-brain-interconnected-with-audio-waves-symbolizing-speech.-The-brain-is-depicted-in-the-cent.webp","type":"image\/webp"}],"author":"Sam Jeans","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"Escrito por":"Sam Jeans","Tempo estimado de leitura":"5 minutos"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2024\/04\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2024\/04\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\/"},"author":{"name":"Sam Jeans","@id":"https:\/\/dailyai.com\/#\/schema\/person\/711e81f945549438e8bbc579efdeb3c9"},"headline":"NYU researchers build a groundbreaking AI speech synthesis 
system","datePublished":"2024-04-11T19:06:18+00:00","dateModified":"2024-04-12T10:16:58+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2024\/04\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\/"},"wordCount":970,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2024\/04\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/DALL\u00b7E-2024-04-11-20.05.57-A-minimalist-scene-showing-an-abstract-representation-of-a-brain-interconnected-with-audio-waves-symbolizing-speech.-The-brain-is-depicted-in-the-cent.webp","keywords":["Biotech","Healthcare","Medicine"],"articleSection":["Industry"],"inLanguage":"pt-PT"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2024\/04\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\/","url":"https:\/\/dailyai.com\/2024\/04\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\/","name":"Investigadores da NYU criam um sistema inovador de s\u00edntese de discurso com IA | 
DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2024\/04\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2024\/04\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/DALL\u00b7E-2024-04-11-20.05.57-A-minimalist-scene-showing-an-abstract-representation-of-a-brain-interconnected-with-audio-waves-symbolizing-speech.-The-brain-is-depicted-in-the-cent.webp","datePublished":"2024-04-11T19:06:18+00:00","dateModified":"2024-04-12T10:16:58+00:00","breadcrumb":{"@id":"https:\/\/dailyai.com\/2024\/04\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\/#breadcrumb"},"inLanguage":"pt-PT","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2024\/04\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\/"]}]},{"@type":"ImageObject","inLanguage":"pt-PT","@id":"https:\/\/dailyai.com\/2024\/04\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/DALL\u00b7E-2024-04-11-20.05.57-A-minimalist-scene-showing-an-abstract-representation-of-a-brain-interconnected-with-audio-waves-symbolizing-speech.-The-brain-is-depicted-in-the-cent.webp","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/DALL\u00b7E-2024-04-11-20.05.57-A-minimalist-scene-showing-an-abstract-representation-of-a-brain-interconnected-with-audio-waves-symbolizing-speech.-The-brain-is-depicted-in-the-cent.webp","width":1792,"height":1024,"caption":"AI 
speech"},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2024\/04\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"NYU researchers build a groundbreaking AI speech synthesis system"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DailyAI","description":"A sua dose di\u00e1ria de not\u00edcias sobre IA","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"pt-PT"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DailyAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"pt-PT","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/711e81f945549438e8bbc579efdeb3c9","name":"Cal\u00e7as de ganga 
Sam","image":{"@type":"ImageObject","inLanguage":"pt-PT","@id":"https:\/\/secure.gravatar.com\/avatar\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g","caption":"Sam Jeans"},"description":"Sam \u00e9 um escritor de ci\u00eancia e tecnologia que trabalhou em v\u00e1rias startups de IA. Quando n\u00e3o est\u00e1 a escrever, pode ser encontrado a ler revistas m\u00e9dicas ou a vasculhar caixas de discos de vinil.","sameAs":["https:\/\/www.linkedin.com\/in\/sam-jeans-6746b9142\/"],"url":"https:\/\/dailyai.com\/pt\/author\/samjeans\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/posts\/11499","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/comments?post=11499"}],"version-history":[{"count":13,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/posts\/11499\/revisions"}],"predecessor-version":[{"id":11523,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/posts\/11499\/revisions\/11523"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/media\/11501"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/media?parent=11499"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/categories?post=11499"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/tags?post=11499"}],"curies":[{
"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}