{"id":11499,"date":"2024-04-11T19:06:18","date_gmt":"2024-04-11T19:06:18","guid":{"rendered":"https:\/\/dailyai.com\/?p=11499"},"modified":"2024-04-12T10:16:58","modified_gmt":"2024-04-12T10:16:58","slug":"nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system","status":"publish","type":"post","link":"https:\/\/dailyai.com\/fr\/2024\/04\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\/","title":{"rendered":"Des chercheurs de l'universit\u00e9 de New York mettent au point un syst\u00e8me r\u00e9volutionnaire de synth\u00e8se vocale par IA"},"content":{"rendered":"<p><b>Une \u00e9quipe de chercheurs de l'universit\u00e9 de New York a progress\u00e9 dans le d\u00e9codage neuronal de la parole, ce qui nous rapproche d'un avenir o\u00f9 les personnes ayant perdu l'usage de la parole pourront retrouver leur voix.\u00a0<\/b><\/p>\n<p><span style=\"font-weight: 400;\">Les <\/span><a href=\"https:\/\/www.nature.com\/articles\/s42256-024-00824-8\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">\u00e9tude<\/span><\/a><span style=\"font-weight: 400;\">, publi\u00e9 dans <em>Nature Machine Intelligence<\/em>pr\u00e9sente un nouveau cadre d'apprentissage profond qui traduit avec pr\u00e9cision les signaux c\u00e9r\u00e9braux en paroles intelligibles.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Les personnes souffrant de l\u00e9sions c\u00e9r\u00e9brales dues \u00e0 des accidents vasculaires c\u00e9r\u00e9braux, \u00e0 des maladies d\u00e9g\u00e9n\u00e9ratives ou \u00e0 des traumatismes physiques peuvent utiliser ces syst\u00e8mes pour communiquer en d\u00e9codant leurs pens\u00e9es ou leur discours \u00e0 partir des signaux neuronaux.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Le syst\u00e8me de l'\u00e9quipe de l'Universit\u00e9 de New York comprend un mod\u00e8le d'apprentissage profond qui \u00e9tablit une correspondance entre les signaux d'\u00e9lectrocorticographie (ECoG) du cerveau et les 
caract\u00e9ristiques de la parole, telles que la hauteur, le volume et d'autres contenus spectraux.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">La deuxi\u00e8me \u00e9tape fait intervenir un synth\u00e9tiseur de parole neuronal qui convertit les caract\u00e9ristiques vocales extraites en un spectrogramme audible, qui peut ensuite \u00eatre transform\u00e9 en une forme d'onde vocale.\u00a0<\/span><\/p>\n<p>Cette forme d'onde peut enfin \u00eatre convertie en une synth\u00e8se vocale \u00e0 la sonorit\u00e9 naturelle.<\/p>\n<blockquote class=\"twitter-tweet\">\n<p dir=\"ltr\" lang=\"en\">Un nouvel article est publi\u00e9 aujourd'hui dans <a href=\"https:\/\/twitter.com\/NatMachIntell?ref_src=twsrc%5Etfw\">@NatMachIntell<\/a>o\u00f9 nous montrons un d\u00e9codage neuronal robuste de la parole chez 48 patients. <a href=\"https:\/\/t.co\/rNPAMr4l68\">https:\/\/t.co\/rNPAMr4l68<\/a> <a href=\"https:\/\/t.co\/FG7QKCBVzp\">pic.twitter.com\/FG7QKCBVzp<\/a><\/p>\n<p>- Adeen Flinker \ud83c\uddee\ud83c\uddf1\ud83c\uddfa\ud83c\udde6\ud83c\udf97\ufe0f (@adeenflinker) <a href=\"https:\/\/twitter.com\/adeenflinker\/status\/1777513445304193367?ref_src=twsrc%5Etfw\">9 avril 2024<\/a><\/p><\/blockquote>\n<p><script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/p>\n<h2>Comment fonctionne l'\u00e9tude ?<\/h2>\n<p><span style=\"font-weight: 400;\">Cette \u00e9tude porte sur la formation d'un mod\u00e8le d'IA capable d'alimenter un dispositif de synth\u00e8se vocale, permettant aux personnes souffrant de troubles de la parole de parler en utilisant les impulsions \u00e9lectriques de leur cerveau.\u00a0<\/span><\/p>\n<p>Voici comment cela fonctionne plus en d\u00e9tail :<\/p>\n<p><b>1. Collecte de donn\u00e9es sur le cerveau<\/b><\/p>\n<p><span style=\"font-weight: 400;\">La premi\u00e8re \u00e9tape consiste \u00e0 collecter les donn\u00e9es brutes n\u00e9cessaires \u00e0 l'entra\u00eenement du mod\u00e8le de d\u00e9codage de la parole. 
The researchers worked with 48 participants who were undergoing neurosurgery for epilepsy.

During the study, these participants were asked to read hundreds of sentences aloud while their brain activity was recorded with ECoG grids.

These grids sit directly on the surface of the brain and capture electrical signals from the brain regions involved in speech production.

2. Mapping brain signals to speech

Using the speech data, the researchers developed a sophisticated AI model that associates the recorded brain signals with specific speech features, such as pitch, loudness, and the unique frequencies that make up different speech sounds.

3. Synthesizing speech from features

The third step converts the speech features extracted from the brain signals into audible speech.

The researchers used a special speech synthesizer that takes the extracted features and generates a spectrogram, a visual representation of the speech sounds.

4. Evaluating the results

The researchers compared the speech generated by their model with the original speech spoken by the participants.

They used objective metrics to measure the similarity between the two and found that the generated speech closely matched the content and rhythm of the original.

5. Testing on novel words

To verify that the model can handle new words it has not encountered before, certain words were intentionally left out during the model's training phase, and the model's performance on these unseen words was then tested.

The model's ability to accurately decode even novel words demonstrates its potential to generalize and handle varied speech patterns.
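The comparison in steps 4 and 5 relies on objective similarity between generated and original speech. One common choice for such a metric, shown here purely as an illustration (the paper's exact metrics may differ), is the frame-wise Pearson correlation between the two spectrograms:

```python
import numpy as np

def spectrogram_correlation(original: np.ndarray, decoded: np.ndarray) -> float:
    """Mean Pearson correlation across frequency bins of two
    time-aligned spectrograms of shape (freq_bins, time_frames)."""
    assert original.shape == decoded.shape
    corrs = []
    for f in range(original.shape[0]):
        o, d = original[f], decoded[f]
        if o.std() == 0 or d.std() == 0:
            continue  # skip silent/constant bins
        corrs.append(np.corrcoef(o, d)[0, 1])
    return float(np.mean(corrs))

# Toy example: a "decoded" spectrogram that is a noisy copy of the original
rng = np.random.default_rng(0)
orig = rng.random((80, 100))  # e.g. 80 mel bins, 100 time frames
noisy = orig + 0.1 * rng.standard_normal(orig.shape)
print(spectrogram_correlation(orig, noisy))  # high (near 1) for a mildly noisy copy
```

An identical pair scores exactly 1.0, so the metric gives an intuitive 0-to-1 scale for how much of the original's spectral content survives decoding.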
id=\"caption-attachment-11500\" class=\"wp-caption-text\">Le syst\u00e8me de synth\u00e8se vocale de l'Universit\u00e9 de New York. Source : <a href=\"https:\/\/www.nature.com\/articles\/s42256-024-00824-8\">Nature<\/a> (acc\u00e8s libre)<\/figcaption><\/figure>\n<p>La partie sup\u00e9rieure du diagramme ci-dessus d\u00e9crit le processus de conversion des signaux c\u00e9r\u00e9braux en parole. Tout d'abord, un d\u00e9codeur transforme ces signaux en param\u00e8tres vocaux au fil du temps. Ensuite, un synth\u00e9tiseur cr\u00e9e des images sonores (spectrogrammes) \u00e0 partir de ces param\u00e8tres. Un autre outil transforme \u00e0 nouveau ces images en ondes sonores.<\/p>\n<p>La derni\u00e8re section traite d'un syst\u00e8me qui aide \u00e0 former le d\u00e9codeur de signaux c\u00e9r\u00e9braux en imitant la parole. Il prend une image sonore, la transforme en param\u00e8tres vocaux, puis les utilise pour cr\u00e9er une nouvelle image sonore. Cette partie du syst\u00e8me apprend \u00e0 partir de sons vocaux r\u00e9els pour s'am\u00e9liorer.<\/p>\n<p>Apr\u00e8s la formation, seul le processus sup\u00e9rieur est n\u00e9cessaire pour transformer les signaux c\u00e9r\u00e9braux en parole.<\/p>\n<p><span style=\"font-weight: 400;\">L'un des principaux avantages du syst\u00e8me de l'Universit\u00e9 de New York est qu'il permet d'obtenir un d\u00e9codage vocal de haute qualit\u00e9 sans avoir recours \u00e0 des r\u00e9seaux d'\u00e9lectrodes \u00e0 tr\u00e8s haute densit\u00e9, qui ne sont pas pratiques pour une utilisation \u00e0 long terme. 
It is essentially a lighter, more portable solution.

Another achievement is the successful decoding of speech from both the left and right hemispheres of the brain, which matters for patients with damage to only one side of the brain.

Converting thoughts into speech with AI

The NYU study builds on earlier research into neural speech decoding and brain-computer interfaces (BCIs).

In 2023, a team from the University of California, San Francisco enabled a paralyzed stroke survivor to generate sentences (https://dailyai.com/fr/2023/08/ai-replenishes-speech-and-facial-expressions-of-stroke-survivor/) at 78 words per minute using a BCI that synthesizes both vocalizations and facial expressions from brain signals.

Other recent studies have explored using AI to interpret various aspects of human thought from brain activity. Researchers have demonstrated the ability to generate images, text, and even music from fMRI and electroencephalography (EEG) data recorded from the brain.
For example, a University of Helsinki study (https://dailyai.com/fr/2023/08/ai-mind-reading-medical-breakthrough-or-step-towards-dystopia/) used EEG signals to guide a generative adversarial network (GAN) in producing facial images that matched the participants' thoughts.

Meta AI has developed a technique (https://dailyai.com/fr/2023/10/ai-decodes-speech-from-non-invasive-brain-recordings/) for partially decoding what a person is hearing from non-invasively recorded brain waves.

Opportunities and challenges

The NYU method uses electrodes that are more widely available and clinically viable than those of previous approaches, making it more accessible.

Exciting as this is, major obstacles remain before we see widespread use.

For one, collecting high-quality brain data is a complex and time-consuming undertaking. Individual differences in brain activity make generalization difficult, meaning that a model trained on one group of participants may not perform well on another.

Nevertheless, the NYU study represents a step in this direction by demonstrating high-accuracy speech decoding with lighter electrode arrays.

Going forward, the NYU team intends to refine its models for real-time speech decoding, bringing us closer to the ultimate goal of enabling people with speech impairments to hold natural, fluent conversations.

They also plan to adapt the system to wireless implantable devices that can be used in everyday life.
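As a closing illustration of the decoding idea in steps 1 through 3, here is a deliberately simplified stand-in: a linear ridge-regression "decoder" mapping simulated neural features to spectrogram frames, with held-out frames for evaluation. The real system uses deep networks and real ECoG recordings; every number here is synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins: 64 "electrode" channels, 80 spectrogram bins,
# 500 time frames, related by a hidden linear map plus noise.
n_frames, n_channels, n_bins = 500, 64, 80
true_W = 0.1 * rng.standard_normal((n_channels, n_bins))
ecog = rng.standard_normal((n_frames, n_channels))
spectrogram = ecog @ true_W + 0.05 * rng.standard_normal((n_frames, n_bins))

# Split frames into train/test, then fit a ridge-regression decoder
X_train, X_test = ecog[:400], ecog[400:]
y_train, y_test = spectrogram[:400], spectrogram[400:]
lam = 1.0  # ridge penalty
W = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n_channels),
                    X_train.T @ y_train)

# Decode spectrogram frames for unseen data and score the fit
pred = X_test @ W
r = np.corrcoef(pred.ravel(), y_test.ravel())[0, 1]
print(f"correlation on held-out frames: {r:.2f}")
```

Held-out evaluation like this is what lets researchers claim the decoder generalizes, rather than merely memorizing the training sentences.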