{"id":11499,"date":"2024-04-11T19:06:18","date_gmt":"2024-04-11T19:06:18","guid":{"rendered":"https:\/\/dailyai.com\/?p=11499"},"modified":"2024-04-12T10:16:58","modified_gmt":"2024-04-12T10:16:58","slug":"nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system","status":"publish","type":"post","link":"https:\/\/dailyai.com\/nb\/2024\/04\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\/","title":{"rendered":"NYU researchers build a groundbreaking AI speech synthesis system"},"content":{"rendered":"<p><b>A team of researchers from New York University has made progress in neural speech decoding, bringing us closer to a future in which individuals who have lost the ability to speak can regain their voice.\u00a0<\/b><\/p>\n<p><span style=\"font-weight: 400;\">The <\/span><a href=\"https:\/\/www.nature.com\/articles\/s42256-024-00824-8\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">study<\/span><\/a><span style=\"font-weight: 400;\">, published in <em>Nature Machine Intelligence<\/em>, presents a novel deep learning framework that accurately translates brain signals into intelligible speech.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">People with brain injuries from strokes, degenerative conditions, or physical trauma can use these systems to communicate by decoding their thoughts or intended speech from neural signals.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The NYU team's system involves a deep learning model that maps electrocorticography (ECoG) signals from the brain to speech features such as pitch, loudness, and other spectral content.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The second step involves a neural speech synthesizer that converts the extracted speech features into an audible spectrogram, which can then be turned into a speech waveform.\u00a0<\/span><\/p>\n<p>This waveform can finally be converted into natural-sounding synthetic speech.<\/p>\n<blockquote class=\"twitter-tweet\">\n<p dir=\"ltr\" lang=\"en\">New article published today in <a href=\"https:\/\/twitter.com\/NatMachIntell?ref_src=twsrc%5Etfw\">@NatMachIntell<\/a> where we show robust neural-to-speech decoding in 48 patients. <a href=\"https:\/\/t.co\/rNPAMr4l68\">https:\/\/t.co\/rNPAMr4l68<\/a> <a href=\"https:\/\/t.co\/FG7QKCBVzp\">pic.twitter.com\/FG7QKCBVzp<\/a><\/p>\n<p>- Adeen Flinker \ud83c\uddee\ud83c\uddf1\ud83c\uddfa\ud83c\udde6\ud83c\udf97\ufe0f (@adeenflinker) <a href=\"https:\/\/twitter.com\/adeenflinker\/status\/1777513445304193367?ref_src=twsrc%5Etfw\">April 9, 2024<\/a><\/p><\/blockquote>\n<p><script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/p>\n<h2>How the study works<\/h2>\n<p><span style=\"font-weight: 400;\">The study involves training an AI model that can drive a speech synthesis device, enabling people with impaired speech to talk using electrical impulses from their brain.\u00a0<\/span><\/p>\n<p>Here is a closer look at how it works:<\/p>\n<p><b>1. Collecting brain data<\/b><\/p>\n<p><span style=\"font-weight: 400;\">The first step involves collecting the raw data needed to train the speech decoding model. The researchers worked with 48 participants who were undergoing neurosurgery for epilepsy. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">During the study, the participants were asked to read hundreds of sentences aloud while their brain activity was recorded using ECoG electrode grids. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">These grids are placed directly on the surface of the brain and capture electrical signals from the brain regions involved in speech production.<\/span><\/p>\n<p><b>2. Mapping brain signals to speech<\/b><\/p>\n<p><span style=\"font-weight: 400;\">Using the speech data, the researchers developed a sophisticated AI model that maps the recorded brain signals to specific speech features, such as pitch, loudness, and the unique frequencies that make up different speech sounds.\u00a0<\/span><\/p>\n<p><b>3. Synthesizing speech from features<\/b><\/p>\n<p><span style=\"font-weight: 400;\">The third step focuses on converting the speech features extracted from the brain signals back into audible speech. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">The researchers used a special speech synthesizer that takes the extracted features and generates a spectrogram, a visual representation of the speech sounds.\u00a0<\/span><\/p>\n<p><b>4. Evaluating the results<\/b><\/p>\n<p><span style=\"font-weight: 400;\">The researchers compared the speech generated by the model with the original speech spoken by the participants. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">They used objective metrics to measure the similarity between the two and found that the generated speech closely matched the content and rhythm of the original.\u00a0<\/span><\/p>\n<p><b>5. Testing on new words<\/b><\/p>\n<p><span style=\"font-weight: 400;\">To ensure that the model can handle new words it has not seen before, certain words were deliberately withheld during the model's training phase, and the model's performance on these unseen words was then tested. 
<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The model's ability to decode even new words shows that it has the potential to generalize and handle varied speech patterns.<\/span><\/p>\n<figure id=\"attachment_11500\" aria-describedby=\"caption-attachment-11500\" style=\"width: 1024px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-11500 size-large\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/42256_2024_824_Fig1_HTML-1024x397.webp\" alt=\"AI speech\" width=\"1024\" height=\"397\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/42256_2024_824_Fig1_HTML-1024x397.webp 1024w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/42256_2024_824_Fig1_HTML-300x116.webp 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/42256_2024_824_Fig1_HTML-768x298.webp 768w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/42256_2024_824_Fig1_HTML-1536x596.webp 1536w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/42256_2024_824_Fig1_HTML-60x23.webp 60w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/42256_2024_824_Fig1_HTML.webp 1622w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><figcaption id=\"caption-attachment-11500\" class=\"wp-caption-text\">NYU's speech synthesis system. Source: <a href=\"https:\/\/www.nature.com\/articles\/s42256-024-00824-8\">Nature<\/a> (open access)<\/figcaption><\/figure>\n<p>The top part of the diagram above describes the process for converting brain signals into speech. First, a decoder turns the signals into speech parameters over time. A synthesizer then creates sound images (spectrograms) from those parameters, and another tool converts the images back into sound waves.<\/p>\n<p>The bottom part shows a system that helps train the brain signal decoder by mimicking speech. 
It takes a sound image, turns it into speech parameters, and then uses those parameters to create a new sound image. This part of the system learns from actual speech sounds in order to improve.<\/p>\n<p>After training, only the top process is needed to turn brain signals into speech.<\/p>\n<p><span style=\"font-weight: 400;\">One of the key advantages of NYU's system is that it delivers high-quality speech decoding without the need for ultra-high-density electrodes, which are impractical for long-term use. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">In essence, it offers a lighter, more portable solution.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Another achievement is the successful decoding of speech from both the left and right hemispheres, which is important for patients with brain damage on one side of the brain.\u00a0<\/span><\/p>\n<h2>Converting thoughts into speech with AI<\/h2>\n<p><span style=\"font-weight: 400;\">The NYU study builds on earlier research in neural speech decoding and brain-computer interfaces (BCIs).\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">In 2023, a team at the University of California, San Francisco, enabled a paralyzed stroke patient to <\/span><a href=\"https:\/\/dailyai.com\/nb\/2023\/08\/ai-replenishes-speech-and-facial-expressions-of-stroke-survivor\/\"><span style=\"font-weight: 400;\">generate sentences<\/span><\/a><span style=\"font-weight: 400;\"> at a rate of 78 words per minute using a BCI that synthesized both vocalizations and facial expressions from brain signals.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Other recent studies have explored the use of AI to interpret various aspects of human thought from brain activity. 
Researchers have demonstrated the ability to generate images, text, and even music from brain MRI and EEG (electroencephalogram) data. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">For example, a <\/span><a href=\"https:\/\/dailyai.com\/nb\/2023\/08\/ai-mind-reading-medical-breakthrough-or-step-towards-dystopia\/\"><span style=\"font-weight: 400;\">study from the University of Helsinki<\/span><\/a><span style=\"font-weight: 400;\"> used EEG signals to guide a generative adversarial network (GAN) in producing facial images that matched the participants' thoughts.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Meta AI has also <\/span><a href=\"https:\/\/dailyai.com\/nb\/2023\/10\/ai-decodes-speech-from-non-invasive-brain-recordings\/\"><span style=\"font-weight: 400;\">developed a technique<\/span><\/a><span style=\"font-weight: 400;\"> for partially decoding what someone was listening to from brain waves collected non-invasively.<\/span><\/p>\n<h2>Opportunities and challenges<\/h2>\n<p>NYU's method uses more widely available and clinically viable electrodes than previous approaches, making it more accessible.<\/p>\n<p><span style=\"font-weight: 400;\">While this is exciting, major hurdles remain before we see widespread adoption.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">For one, collecting high-quality brain data is complicated and time-consuming. 
Individual differences in brain activity make it hard to generalize, meaning that a model trained on one group of participants may not work as well for another.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The NYU study nevertheless represents a step in that direction by demonstrating high-accuracy speech decoding with lighter electrode arrays.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Going forward, the NYU team aims to refine its models for real-time speech decoding, bringing us closer to the ultimate goal of enabling natural, fluent conversations for people with speech impairments.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">They also intend to adapt the system for implantable wireless devices that can be used in everyday life.<\/span><\/p>","protected":false},"excerpt":{"rendered":"<p>A team of researchers from New York University has made progress in neural speech decoding, bringing us closer to a future in which individuals who have lost the ability to speak can regain their voice.  The study, published in Nature Machine Intelligence, presents a novel deep learning framework that accurately translates brain signals into intelligible speech.  People with brain injuries from strokes, degenerative conditions, or physical trauma can use these systems to communicate by decoding their thoughts or intended speech from neural signals. 
The NYU team&#8217;s system involves a deep learning model that maps the electrocorticography (ECoG) signals from the brain.<\/p>","protected":false},"author":2,"featured_media":11501,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[84],"tags":[203,204,178],"class_list":["post-11499","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-industry","tag-biotech","tag-healthcare","tag-medicine"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>NYU researchers build a groundbreaking AI speech synthesis system | DailyAI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/nb\/2024\/04\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\/\" \/>\n<meta property=\"og:locale\" content=\"nb_NO\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"NYU researchers build a groundbreaking AI speech synthesis system | DailyAI\" \/>\n<meta property=\"og:description\" content=\"A team of researchers from New York University has made progress in neural speech decoding, bringing us closer to a future in which individuals who have lost the ability to speak can regain their voice.\u00a0 The study, published in Nature Machine Intelligence, presents a novel deep learning framework that accurately translates brain signals into intelligible speech.\u00a0 People with brain injuries from strokes, degenerative conditions, or physical trauma can use these systems to communicate by decoding their thoughts or intended speech from neural signals. 
The NYU team&#8217;s system involves a deep learning model that maps the electrocorticography (ECoG) signals from the\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/nb\/2024\/04\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2024-04-11T19:06:18+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2024-04-12T10:16:58+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/DALL\u00b7E-2024-04-11-20.05.57-A-minimalist-scene-showing-an-abstract-representation-of-a-brain-interconnected-with-audio-waves-symbolizing-speech.-The-brain-is-depicted-in-the-cent.webp\" \/>\n\t<meta property=\"og:image:width\" content=\"1792\" \/>\n\t<meta property=\"og:image:height\" content=\"1024\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/webp\" \/>\n<meta name=\"author\" content=\"Sam Jeans\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Skrevet av\" \/>\n\t<meta name=\"twitter:data1\" content=\"Sam Jeans\" \/>\n\t<meta name=\"twitter:label2\" content=\"Ansl. 
lesetid\" \/>\n\t<meta name=\"twitter:data2\" content=\"5 minutter\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\\\/\"},\"author\":{\"name\":\"Sam Jeans\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/711e81f945549438e8bbc579efdeb3c9\"},\"headline\":\"NYU researchers build a groundbreaking AI speech synthesis system\",\"datePublished\":\"2024-04-11T19:06:18+00:00\",\"dateModified\":\"2024-04-12T10:16:58+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\\\/\"},\"wordCount\":970,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/04\\\/DALL\u00b7E-2024-04-11-20.05.57-A-minimalist-scene-showing-an-abstract-representation-of-a-brain-interconnected-with-audio-waves-symbolizing-speech.-The-brain-is-depicted-in-the-cent.webp\",\"keywords\":[\"Biotech\",\"Healthcare\",\"Medicine\"],\"articleSection\":[\"Industry\"],\"inLanguage\":\"nb-NO\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\\\/\",\"name\":\"NYU researchers build a groundbreaking AI speech synthesis system | 
DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/04\\\/DALL\u00b7E-2024-04-11-20.05.57-A-minimalist-scene-showing-an-abstract-representation-of-a-brain-interconnected-with-audio-waves-symbolizing-speech.-The-brain-is-depicted-in-the-cent.webp\",\"datePublished\":\"2024-04-11T19:06:18+00:00\",\"dateModified\":\"2024-04-12T10:16:58+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\\\/#breadcrumb\"},\"inLanguage\":\"nb-NO\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"nb-NO\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/04\\\/DALL\u00b7E-2024-04-11-20.05.57-A-minimalist-scene-showing-an-abstract-representation-of-a-brain-interconnected-with-audio-waves-symbolizing-speech.-The-brain-is-depicted-in-the-cent.webp\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/04\\\/DALL\u00b7E-2024-04-11-20.05.57-A-minimalist-scene-showing-an-abstract-representation-of-a-brain-interconnected-with-audio-waves-symbolizing-speech.-The-brain-is-depicted-in-the-cent.webp\",\"width\":1792,\"height\":1024,\"caption\":\"AI 
speech\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"NYU researchers build a groundbreaking AI speech synthesis system\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"nb-NO\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"nb-NO\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/711e81f945549438e8bbc579efdeb3c9\",\"name\":\"Sam 
Jeans\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"nb-NO\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g\",\"caption\":\"Sam Jeans\"},\"description\":\"Sam is a science and technology writer who has worked in various AI startups. When he\u2019s not writing, he can be found reading medical journals or digging through boxes of vinyl records.\",\"sameAs\":[\"https:\\\/\\\/www.linkedin.com\\\/in\\\/sam-jeans-6746b9142\\\/\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/nb\\\/author\\\/samjeans\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"NYU-forskere bygger et banebrytende AI-system for talesyntese | DailyAI","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/nb\/2024\/04\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\/","og_locale":"nb_NO","og_type":"article","og_title":"NYU researchers build a groundbreaking AI speech synthesis system | DailyAI","og_description":"A team of researchers from New York University has made progress in neural speech decoding, bringing us closer to a future in which individuals who have lost the ability to speak can regain their voice.\u00a0 The study, published in Nature Machine Intelligence, presents a novel deep learning framework that accurately translates brain signals into intelligible speech.\u00a0 People with brain injuries from strokes, degenerative conditions, or physical trauma can use these systems to communicate by decoding their thoughts 
or intended speech from neural signals. The NYU team&#8217;s system involves a deep learning model that maps the electrocorticography (ECoG) signals from the","og_url":"https:\/\/dailyai.com\/nb\/2024\/04\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\/","og_site_name":"DailyAI","article_published_time":"2024-04-11T19:06:18+00:00","article_modified_time":"2024-04-12T10:16:58+00:00","og_image":[{"width":1792,"height":1024,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/DALL\u00b7E-2024-04-11-20.05.57-A-minimalist-scene-showing-an-abstract-representation-of-a-brain-interconnected-with-audio-waves-symbolizing-speech.-The-brain-is-depicted-in-the-cent.webp","type":"image\/webp"}],"author":"Sam Jeans","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"Skrevet av":"Sam Jeans","Ansl. lesetid":"5 minutter"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2024\/04\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2024\/04\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\/"},"author":{"name":"Sam Jeans","@id":"https:\/\/dailyai.com\/#\/schema\/person\/711e81f945549438e8bbc579efdeb3c9"},"headline":"NYU researchers build a groundbreaking AI speech synthesis 
system","datePublished":"2024-04-11T19:06:18+00:00","dateModified":"2024-04-12T10:16:58+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2024\/04\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\/"},"wordCount":970,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2024\/04\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/DALL\u00b7E-2024-04-11-20.05.57-A-minimalist-scene-showing-an-abstract-representation-of-a-brain-interconnected-with-audio-waves-symbolizing-speech.-The-brain-is-depicted-in-the-cent.webp","keywords":["Biotech","Healthcare","Medicine"],"articleSection":["Industry"],"inLanguage":"nb-NO"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2024\/04\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\/","url":"https:\/\/dailyai.com\/2024\/04\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\/","name":"NYU-forskere bygger et banebrytende AI-system for talesyntese | 
DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2024\/04\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2024\/04\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/DALL\u00b7E-2024-04-11-20.05.57-A-minimalist-scene-showing-an-abstract-representation-of-a-brain-interconnected-with-audio-waves-symbolizing-speech.-The-brain-is-depicted-in-the-cent.webp","datePublished":"2024-04-11T19:06:18+00:00","dateModified":"2024-04-12T10:16:58+00:00","breadcrumb":{"@id":"https:\/\/dailyai.com\/2024\/04\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\/#breadcrumb"},"inLanguage":"nb-NO","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2024\/04\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\/"]}]},{"@type":"ImageObject","inLanguage":"nb-NO","@id":"https:\/\/dailyai.com\/2024\/04\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/DALL\u00b7E-2024-04-11-20.05.57-A-minimalist-scene-showing-an-abstract-representation-of-a-brain-interconnected-with-audio-waves-symbolizing-speech.-The-brain-is-depicted-in-the-cent.webp","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/DALL\u00b7E-2024-04-11-20.05.57-A-minimalist-scene-showing-an-abstract-representation-of-a-brain-interconnected-with-audio-waves-symbolizing-speech.-The-brain-is-depicted-in-the-cent.webp","width":1792,"height":1024,"caption":"AI 
speech"},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2024\/04\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"NYU researchers build a groundbreaking AI speech synthesis system"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DailyAI","description":"Your Daily Dose of AI News","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"nb-NO"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DailyAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"nb-NO","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/711e81f945549438e8bbc579efdeb3c9","name":"Sam 
Jeans","image":{"@type":"ImageObject","inLanguage":"nb-NO","@id":"https:\/\/secure.gravatar.com\/avatar\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g","caption":"Sam Jeans"},"description":"Sam er en vitenskaps- og teknologiskribent som har jobbet i ulike oppstartsbedrifter innen kunstig intelligens. N\u00e5r han ikke skriver, leser han medisinske tidsskrifter eller graver seg gjennom esker med vinylplater.","sameAs":["https:\/\/www.linkedin.com\/in\/sam-jeans-6746b9142\/"],"url":"https:\/\/dailyai.com\/nb\/author\/samjeans\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/nb\/wp-json\/wp\/v2\/posts\/11499","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/nb\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/nb\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/nb\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/nb\/wp-json\/wp\/v2\/comments?post=11499"}],"version-history":[{"count":13,"href":"https:\/\/dailyai.com\/nb\/wp-json\/wp\/v2\/posts\/11499\/revisions"}],"predecessor-version":[{"id":11523,"href":"https:\/\/dailyai.com\/nb\/wp-json\/wp\/v2\/posts\/11499\/revisions\/11523"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/nb\/wp-json\/wp\/v2\/media\/11501"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/nb\/wp-json\/wp\/v2\/media?parent=11499"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/nb\/wp-json\/wp\/v2\/categories?post=11499"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/nb\/wp-json\/wp\/v2\/tags?post=11499"}],"curies":[{"n
ame":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}