<h1>NYU researchers build a groundbreaking AI speech synthesis system</h1>
<p><em>DailyAI, April 11, 2024</em></p>
<p><b>A team of researchers from New York University has made progress in neural speech decoding, bringing us closer to a future in which individuals who have lost the ability to speak can regain their voice.</b></p>
<p>The <a href="https://www.nature.com/articles/s42256-024-00824-8" target="_blank" rel="noopener">study</a>, published in <em>Nature Machine Intelligence</em>, presents a novel deep learning framework that accurately translates brain signals into intelligible speech.</p>
<p>People with brain injuries from strokes, degenerative conditions, or physical trauma can use these systems to communicate by decoding their thoughts or intended speech from neural signals.</p>
<p>The NYU team's system involves a deep learning model that maps electrocorticography (ECoG) signals from the brain to speech features such as pitch, loudness, and other spectral content.</p>
<p>The second stage involves a neural speech synthesizer that converts the extracted speech features into an audible spectrogram, which can then be turned into a speech waveform.</p>
<p>That waveform can finally be converted into natural-sounding synthetic speech.</p>
<blockquote class="twitter-tweet">
<p dir="ltr" lang="en">New paper published today in <a href="https://twitter.com/NatMachIntell?ref_src=twsrc%5Etfw">@NatMachIntell</a> where we show robust neural to speech decoding across 48 patients. <a href="https://t.co/rNPAMr4l68">https://t.co/rNPAMr4l68</a></p>
<p>- Adeen Flinker (@adeenflinker) <a href="https://twitter.com/adeenflinker/status/1777513445304193367?ref_src=twsrc%5Etfw">April 9, 2024</a></p>
</blockquote>
<h2>How the study works</h2>
<p>The study involves training an AI model that can drive a speech synthesis device, allowing people with speech impairments to speak using electrical impulses from their brain.</p>
<p>Here is a more detailed breakdown of how it works:</p>
<p><b>1. Collecting brain data</b></p>
<p>The first step is collecting the raw data needed to train the speech decoding model. The researchers worked with 48 participants who were undergoing neurosurgery for epilepsy.</p>
<p>During the study, these participants were asked to read hundreds of sentences aloud while their brain activity was recorded using ECoG grids.</p>
<p>These grids are placed directly on the surface of the brain and capture electrical signals from the brain regions involved in speech production.</p>
<p><b>2. Mapping brain signals to speech</b></p>
<p>Using the speech data, the researchers developed a sophisticated AI model that maps the recorded brain signals to specific speech features, such as pitch, loudness, and the unique frequencies that make up different speech sounds.</p>
<p><b>3. Synthesizing speech from features</b></p>
<p>The third step focuses on converting the speech features extracted from the brain signals back into audible speech.</p>
<p>The researchers used a special speech synthesizer that takes the extracted features and generates a spectrogram, a visual representation of the speech sounds.</p>
<p><b>4. Evaluating the results</b></p>
<p>The researchers compared the speech generated by their model with the original speech spoken by the participants.</p>
<p>They used objective metrics to measure the similarity between the two and found that the generated speech closely matched the content and rhythm of the original.</p>
<p><b>5. Testing novel words</b></p>
<p>To ensure the model can handle new words it has not seen before, certain words were deliberately held out during the model's training phase, and the model's performance on these unseen words was then tested.</p>
<p>The model's ability to accurately decode even novel words demonstrates its potential to generalize and handle varied speech patterns.</p>
<figure id="attachment_11500" class="wp-caption aligncenter"><img loading="lazy" decoding="async" class="wp-image-11500 size-large" src="https://dailyai.com/wp-content/uploads/2024/04/42256_2024_824_Fig1_HTML-1024x397.webp" alt="AI speech" width="1024" height="397" /><figcaption id="caption-attachment-11500" class="wp-caption-text">NYU's speech synthesis system. Source: <a href="https://www.nature.com/articles/s42256-024-00824-8">Nature</a> (open access)</figcaption></figure>
<p>The top part of the diagram above describes a process for converting brain signals into speech. First, a decoder turns these signals into speech parameters over time. Then a synthesizer creates sound pictures (spectrograms) from those parameters. Another tool changes these pictures back into sound waves.</p>
<p>The bottom section concerns a system that helps train the brain-signal decoder by mimicking speech. It takes a sound picture, turns it into speech parameters, and then uses those to create a new sound picture. This part of the system learns from actual speech sounds in order to improve.</p>
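The two-stage decode-then-synthesize pipeline described above can be sketched in code. This is a hypothetical toy illustration, not the NYU architecture: the linear "decoder", the three-feature set (pitch, loudness, spectral tilt), and the harmonic-oscillator synthesizer are stand-ins chosen for brevity.

```python
# Toy sketch of the pipeline: ECoG frames -> speech features -> waveform.
# All shapes, features, and the linear decoder are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_features(ecog, weights):
    """Stage 1: map each ECoG frame (n_electrodes,) to speech features:
    pitch (Hz), loudness (0..1), and spectral tilt (0..1)."""
    raw = ecog @ weights                     # (n_frames, 3)
    pitch = 80 + 160 * sigmoid(raw[:, 0])    # keep pitch in 80-240 Hz
    loudness = sigmoid(raw[:, 1])
    tilt = sigmoid(raw[:, 2])
    return np.stack([pitch, loudness, tilt], axis=1)

def synthesize(features, sr=16000, frame_len=160):
    """Stage 2: render audio frame by frame - a fundamental plus one
    harmonic, shaped by the decoded features, with phase continuity."""
    out, phase = [], 0.0
    for pitch, loudness, tilt in features:
        t = np.arange(frame_len) / sr
        frame = loudness * (np.sin(2 * np.pi * pitch * t + phase)
                            + tilt * np.sin(4 * np.pi * pitch * t + 2 * phase))
        phase += 2 * np.pi * pitch * frame_len / sr
        out.append(frame)
    return np.concatenate(out)

n_frames, n_electrodes = 50, 64
ecog = rng.standard_normal((n_frames, n_electrodes))   # fake recording
weights = rng.standard_normal((n_electrodes, 3)) * 0.1
feats = decode_features(ecog, weights)
wave = synthesize(feats)
print(wave.shape)   # 50 frames x 160 samples = (8000,), i.e. 0.5 s at 16 kHz
```

In the actual study both stages are learned neural networks, and the synthesizer operates via a spectrogram rather than a fixed oscillator; the sketch only shows how the stages hand features to one another.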
<p>After training, only the top process is needed to turn brain signals into speech.</p>
<p>A key advantage of NYU's system is its ability to achieve high-quality speech decoding without the need for ultra-high-density electrodes, which are impractical for long-term use.</p>
<p>In essence, it offers a lighter, more portable solution.</p>
<p>Another achievement is the successful decoding of speech from both the left and right hemispheres, which matters for patients with brain damage on one side of the brain.</p>
<h2>Converting thoughts into speech using AI</h2>
<p>The NYU study builds on earlier research into neural speech decoding and brain-computer interfaces (BCIs).</p>
<p>In 2023, a team at the University of California, San Francisco, enabled a paralyzed stroke survivor to <a href="https://dailyai.com/da/2023/08/ai-replenishes-speech-and-facial-expressions-of-stroke-survivor/">generate sentences</a> at a rate of 78 words per minute using a BCI that synthesized both vocalizations and facial expressions from brain signals.</p>
<p>Other recent studies have explored using AI to interpret various aspects of human thought from brain activity. Researchers have demonstrated the ability to generate images, text, and even music from MRI and electroencephalogram (EEG) data.</p>
<p>For example, a <a href="https://dailyai.com/da/2023/08/ai-mind-reading-medical-breakthrough-or-step-towards-dystopia/">study from the University of Helsinki</a> used EEG signals to guide a generative adversarial network (GAN) to produce facial images that matched participants' thoughts.</p>
<p>Meta AI has also <a href="https://dailyai.com/da/2023/10/ai-decodes-speech-from-non-invasive-brain-recordings/">developed a technique</a> for partially decoding what a person was listening to, using brain waves collected non-invasively.</p>
<h2>Opportunities and challenges</h2>
<p>NYU's method uses more widely available and clinically viable electrodes than previous approaches, making it more accessible.</p>
<p>That is exciting, but there are major hurdles to overcome before we see widespread adoption.</p>
<p>For one, collecting high-quality brain data is a complex and time-consuming task. Individual differences in brain activity make generalization difficult, meaning a model trained on one group of participants may not work well for another.</p>
<p>Nevertheless, the NYU study represents a step in this direction by demonstrating high-accuracy speech decoding using lighter electrode arrays.</p>
<p>Going forward, the NYU team will work on refining their models for real-time speech decoding, bringing us closer to the ultimate goal of enabling natural, fluent conversations for people with speech impairments.</p>
<p>They also intend to adapt the system for implantable wireless devices that can be used in everyday life.</p>
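The held-out-word evaluation described in step 5 can be sketched as a simple split: sentences containing designated words are excluded from training, so the model is later scored only on words it never saw. The sentences and word choices below are made up for illustration; this is not the protocol from the paper.

```python
# Toy held-out-word split: any sentence containing a held-out word goes
# to the test set, guaranteeing those words never appear in training.
def split_by_held_out_words(sentences, held_out):
    held = set(held_out)
    train, test = [], []
    for s in sentences:
        words = set(s.lower().split())
        (test if words & held else train).append(s)
    return train, test

sentences = [
    "the quick brown fox",
    "a lazy dog sleeps",
    "the fox jumps over the dog",
    "birds sing at dawn",
]
train, test = split_by_held_out_words(sentences, held_out=["fox"])
print(train)  # ['a lazy dog sleeps', 'birds sing at dawn']
print(test)   # only these contain the unseen word "fox"
```

Scoring decoded speech on the `test` half then measures generalization to novel words rather than memorization of the training vocabulary.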
<p><em>By Sam Jeans. Sam is a science and technology writer who has worked in various AI startups. When he's not writing, he can be found reading medical journals or digging through boxes of vinyl records.</em></p>