{"id":11499,"date":"2024-04-11T19:06:18","date_gmt":"2024-04-11T19:06:18","guid":{"rendered":"https:\/\/dailyai.com\/?p=11499"},"modified":"2024-04-12T10:16:58","modified_gmt":"2024-04-12T10:16:58","slug":"nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system","status":"publish","type":"post","link":"https:\/\/dailyai.com\/sv\/2024\/04\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\/","title":{"rendered":"NYU-forskare bygger ett banbrytande AI-system f\u00f6r talsyntes"},"content":{"rendered":"<p><b>Ett forskarteam fr\u00e5n New York University har gjort framsteg inom neural talavkodning, vilket f\u00f6r oss n\u00e4rmare en framtid d\u00e4r personer som har f\u00f6rlorat talf\u00f6rm\u00e5gan kan \u00e5terf\u00e5 sin r\u00f6st.\u00a0<\/b><\/p>\n<p><span style=\"font-weight: 400;\">Den <\/span><a href=\"https:\/\/www.nature.com\/articles\/s42256-024-00824-8\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">studie<\/span><\/a><span style=\"font-weight: 400;\">, publicerad i <em>Natur Maskinintelligens<\/em>presenterar ett nytt ramverk f\u00f6r djupinl\u00e4rning som p\u00e5 ett korrekt s\u00e4tt \u00f6vers\u00e4tter hj\u00e4rnsignaler till begripligt tal.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Personer med hj\u00e4rnskador efter stroke, degenerativa sjukdomar eller fysiska trauman kan anv\u00e4nda dessa system f\u00f6r att kommunicera genom att avkoda sina tankar eller sitt t\u00e4nkta tal fr\u00e5n nervsignaler.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">NYU-teamets system involverar en djupinl\u00e4rningsmodell som mappar elektrokortikografiska (ECoG) signaler fr\u00e5n hj\u00e4rnan till talfunktioner, s\u00e5som tonh\u00f6jd, ljudstyrka och annat spektralt inneh\u00e5ll.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">I det andra steget anv\u00e4nds en neural talsyntes som omvandlar de extraherade talegenskaperna till ett h\u00f6rbart spektrogram, som sedan kan omvandlas till en 
talv\u00e5gform.\u00a0<\/span><\/p>\n<p>Den v\u00e5gformen kan slutligen omvandlas till naturligt klingande syntetiskt tal.<\/p>\n<blockquote class=\"twitter-tweet\">\n<p dir=\"ltr\" lang=\"en\">Ny artikel publicerad idag i <a href=\"https:\/\/twitter.com\/NatMachIntell?ref_src=twsrc%5Etfw\">@NatMachIntell<\/a>d\u00e4r vi visar robust neural till tal-avkodning hos 48 patienter. <a href=\"https:\/\/t.co\/rNPAMr4l68\">https:\/\/t.co\/rNPAMr4l68<\/a> <a href=\"https:\/\/t.co\/FG7QKCBVzp\">pic.twitter.com\/FG7QKCBVzp<\/a><\/p>\n<p>- Adeen Flinker \ud83c\uddee\ud83c\uddf1\ud83c\uddfa\ud83c\udde6\ud83c\udf97\ufe0f (@adeenflinker) <a href=\"https:\/\/twitter.com\/adeenflinker\/status\/1777513445304193367?ref_src=twsrc%5Etfw\">9 april 2024<\/a><\/p><\/blockquote>\n<p><script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/p>\n<h2>Hur studien fungerar<\/h2>\n<p><span style=\"font-weight: 400;\">Studien g\u00e5r ut p\u00e5 att tr\u00e4na en AI-modell som kan driva en talsyntesenhet, s\u00e5 att personer med talsv\u00e5righeter kan tala med hj\u00e4lp av elektriska impulser fr\u00e5n hj\u00e4rnan.\u00a0<\/span><\/p>\n<p>H\u00e4r f\u00f6ljer en mer detaljerad beskrivning av hur det fungerar:<\/p>\n<p><b>1. Insamling av data om hj\u00e4rnan<\/b><\/p>\n<p><span style=\"font-weight: 400;\">Det f\u00f6rsta steget handlar om att samla in de r\u00e5data som beh\u00f6vs f\u00f6r att tr\u00e4na talavkodningsmodellen. Forskarna arbetade med 48 deltagare som genomgick en neurokirurgisk operation f\u00f6r epilepsi. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">Under studien ombads deltagarna att l\u00e4sa hundratals meningar h\u00f6gt samtidigt som deras hj\u00e4rnaktivitet registrerades med hj\u00e4lp av ECoG-galler. 
<\/span><\/p>\n<p><span style=\"font-weight: 400;\">These grids are placed directly on the surface of the brain and capture electrical signals from the brain regions involved in speech production.<\/span><\/p>\n<p><b>2. Mapping brain signals to speech<\/b><\/p>\n<p><span style=\"font-weight: 400;\">Using the speech data, the researchers developed a sophisticated AI model that maps the recorded brain signals to specific speech features, such as pitch, loudness, and the unique frequencies that make up different speech sounds.\u00a0<\/span><\/p>\n<p><b>3. Synthesizing speech from features<\/b><\/p>\n<p><span style=\"font-weight: 400;\">The third step involves converting the speech features extracted from the brain signals into audible speech. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">The researchers used a special speech synthesizer that takes the extracted features and generates a spectrogram - a visual representation of the speech sounds.\u00a0<\/span><\/p>\n<p><b>4. Evaluating the results<\/b><\/p>\n<p><span style=\"font-weight: 400;\">The researchers compared the speech generated by their model with the original speech spoken by the participants. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">They used objective metrics to measure the similarity between the two and found that the generated speech closely matched the original's content and rhythm.\u00a0<\/span><\/p>\n<p><b>5. Testing novel words<\/b><\/p>\n<p><span style=\"font-weight: 400;\">To ensure that the model can handle new words it has not seen before, certain words were deliberately withheld during the model's training phase, and its performance was then tested on these unseen words. 
<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The model's ability to accurately decode even novel words demonstrates its potential to generalize and handle varied speech patterns.<\/span><\/p>\n<figure id=\"attachment_11500\" aria-describedby=\"caption-attachment-11500\" style=\"width: 1024px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-11500 size-large\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/42256_2024_824_Fig1_HTML-1024x397.webp\" alt=\"AI speech\" width=\"1024\" height=\"397\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/42256_2024_824_Fig1_HTML-1024x397.webp 1024w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/42256_2024_824_Fig1_HTML-300x116.webp 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/42256_2024_824_Fig1_HTML-768x298.webp 768w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/42256_2024_824_Fig1_HTML-1536x596.webp 1536w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/42256_2024_824_Fig1_HTML-60x23.webp 60w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/42256_2024_824_Fig1_HTML.webp 1622w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><figcaption id=\"caption-attachment-11500\" class=\"wp-caption-text\">NYU's speech synthesis system. Source: NYU via <a href=\"https:\/\/www.nature.com\/articles\/s42256-024-00824-8\">Nature<\/a> (open access)<\/figcaption><\/figure>\n<p>The upper part of the diagram above describes the process of turning brain signals into speech. First, a decoder converts the signals into speech parameters over time. Next, a synthesizer creates sound images (spectrograms) from those parameters. Another tool then converts those images back into sound waves.<\/p>\n<p>The lower section describes a system that helps train the brain signal decoder by mimicking speech. 
It takes a sound image, converts it into speech parameters, and then uses those parameters to create a new sound image. This part of the system learns from real speech sounds to improve.<\/p>\n<p>After training, only the top process is needed to turn brain signals into speech.<\/p>\n<p><span style=\"font-weight: 400;\">A key advantage of NYU's system is its ability to achieve high-quality speech decoding without the need for ultra-high-density electrode arrays, which are impractical for long-term use. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">In essence, it offers a more lightweight, portable solution.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Another achievement is the successful decoding of speech from both the left and right hemispheres of the brain, which is important for patients with brain damage on one side of the brain.\u00a0<\/span><\/p>\n<h2>Turning thoughts into speech with AI<\/h2>\n<p><span style=\"font-weight: 400;\">The NYU study builds on earlier research in neural speech decoding and brain-computer interfaces (BCIs).\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">In 2023, a team at the University of California, San Francisco, enabled a paralyzed stroke survivor to <\/span><a href=\"https:\/\/dailyai.com\/sv\/2023\/08\/ai-replenishes-speech-and-facial-expressions-of-stroke-survivor\/\"><span style=\"font-weight: 400;\">generate sentences<\/span><\/a><span style=\"font-weight: 400;\"> at a rate of 78 words per minute using a BCI that synthesized both vocalizations and facial expressions from brain signals.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Other recent studies have explored the use of AI to interpret various aspects of human 
thought from brain activity. Researchers have shown that they can generate images, text, and even music from brain MRI and EEG (electroencephalogram) data. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">For example, a <\/span><a href=\"https:\/\/dailyai.com\/sv\/2023\/08\/ai-mind-reading-medical-breakthrough-or-step-towards-dystopia\/\"><span style=\"font-weight: 400;\">study from the University of Helsinki<\/span><\/a><span style=\"font-weight: 400;\"> used EEG signals to guide a generative adversarial network (GAN) in producing facial images that matched what participants were thinking of.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Meta AI has also <\/span><a href=\"https:\/\/dailyai.com\/sv\/2023\/10\/ai-decodes-speech-from-non-invasive-brain-recordings\/\"><span style=\"font-weight: 400;\">developed a technique<\/span><\/a><span style=\"font-weight: 400;\"> for partially decoding what someone was listening to using brain waves collected non-invasively.<\/span><\/p>\n<h2>Opportunities and challenges<\/h2>\n<p>NYU's approach uses more widely available and clinically viable electrodes than previous methods, making it more accessible.<\/p>\n<p><span style=\"font-weight: 400;\">While this is exciting, there are major hurdles to overcome before we see widespread use.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">For one, collecting high-quality brain data is a complex and time-consuming task. 
Individual differences in brain activity make generalization difficult, meaning a model trained on one group of participants may not work well for another.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The NYU study takes a step in this direction, however, by demonstrating high-accuracy speech decoding using lighter electrode arrays.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Moving forward, the NYU team will refine their models for real-time speech decoding, bringing us closer to the ultimate goal of enabling natural, fluent conversations for people with speech impairments.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">They also intend to adapt the system for implantable wireless devices that can be used in everyday life.<\/span><\/p>","protected":false},"excerpt":{"rendered":"<p>A team of researchers from New York University has made progress in neural speech decoding, bringing us closer to a future in which individuals who have lost the ability to speak can regain their voice.  The study, published in Nature Machine Intelligence, presents a novel deep learning framework that accurately translates brain signals into intelligible speech.  People with brain injuries from strokes, degenerative conditions, or physical trauma can use these systems to communicate by decoding their thoughts or intended speech from neural signals. 
The NYU team&#8217;s system involves a deep learning model that maps the electrocorticography (ECoG) signals from<\/p>","protected":false},"author":2,"featured_media":11501,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[84],"tags":[203,204,178],"class_list":["post-11499","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-industry","tag-biotech","tag-healthcare","tag-medicine"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>NYU researchers build a groundbreaking AI speech synthesis system | DailyAI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/sv\/2024\/04\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\/\" \/>\n<meta property=\"og:locale\" content=\"sv_SE\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"NYU researchers build a groundbreaking AI speech synthesis system | DailyAI\" \/>\n<meta property=\"og:description\" content=\"A team of researchers from New York University has made progress in neural speech decoding, bringing us closer to a future in which individuals who have lost the ability to speak can regain their voice.\u00a0 The study, published in Nature Machine Intelligence, presents a novel deep learning framework that accurately translates brain signals into intelligible speech.\u00a0 People with brain injuries from strokes, degenerative conditions, or physical trauma can use these systems to communicate by decoding their thoughts or intended speech from neural signals. 
The NYU team&#8217;s system involves a deep learning model that maps the electrocorticography (ECoG) signals from the\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/sv\/2024\/04\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2024-04-11T19:06:18+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2024-04-12T10:16:58+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/DALL\u00b7E-2024-04-11-20.05.57-A-minimalist-scene-showing-an-abstract-representation-of-a-brain-interconnected-with-audio-waves-symbolizing-speech.-The-brain-is-depicted-in-the-cent.webp\" \/>\n\t<meta property=\"og:image:width\" content=\"1792\" \/>\n\t<meta property=\"og:image:height\" content=\"1024\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/webp\" \/>\n<meta name=\"author\" content=\"Sam Jeans\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Skriven av\" \/>\n\t<meta name=\"twitter:data1\" content=\"Sam Jeans\" \/>\n\t<meta name=\"twitter:label2\" content=\"Ber\u00e4knad l\u00e4stid\" \/>\n\t<meta name=\"twitter:data2\" content=\"5 minuter\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\\\/\"},\"author\":{\"name\":\"Sam 
Jeans\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/711e81f945549438e8bbc579efdeb3c9\"},\"headline\":\"NYU researchers build a groundbreaking AI speech synthesis system\",\"datePublished\":\"2024-04-11T19:06:18+00:00\",\"dateModified\":\"2024-04-12T10:16:58+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\\\/\"},\"wordCount\":970,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/04\\\/DALL\u00b7E-2024-04-11-20.05.57-A-minimalist-scene-showing-an-abstract-representation-of-a-brain-interconnected-with-audio-waves-symbolizing-speech.-The-brain-is-depicted-in-the-cent.webp\",\"keywords\":[\"Biotech\",\"Healthcare\",\"Medicine\"],\"articleSection\":[\"Industry\"],\"inLanguage\":\"sv-SE\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\\\/\",\"name\":\"NYU researchers build a groundbreaking AI speech synthesis system | 
DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/04\\\/DALL\u00b7E-2024-04-11-20.05.57-A-minimalist-scene-showing-an-abstract-representation-of-a-brain-interconnected-with-audio-waves-symbolizing-speech.-The-brain-is-depicted-in-the-cent.webp\",\"datePublished\":\"2024-04-11T19:06:18+00:00\",\"dateModified\":\"2024-04-12T10:16:58+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\\\/#breadcrumb\"},\"inLanguage\":\"sv-SE\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"sv-SE\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/04\\\/DALL\u00b7E-2024-04-11-20.05.57-A-minimalist-scene-showing-an-abstract-representation-of-a-brain-interconnected-with-audio-waves-symbolizing-speech.-The-brain-is-depicted-in-the-cent.webp\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/04\\\/DALL\u00b7E-2024-04-11-20.05.57-A-minimalist-scene-showing-an-abstract-representation-of-a-brain-interconnected-with-audio-waves-symbolizing-speech.-The-brain-is-depicted-in-the-cent.webp\",\"width\":1792,\"height\":1024,\"caption\":\"AI 
speech\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"NYU researchers build a groundbreaking AI speech synthesis system\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"sv-SE\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"sv-SE\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/711e81f945549438e8bbc579efdeb3c9\",\"name\":\"Sam 
Jeans\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"sv-SE\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g\",\"caption\":\"Sam Jeans\"},\"description\":\"Sam is a science and technology writer who has worked in various AI startups. When he\u2019s not writing, he can be found reading medical journals or digging through boxes of vinyl records.\",\"sameAs\":[\"https:\\\/\\\/www.linkedin.com\\\/in\\\/sam-jeans-6746b9142\\\/\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/sv\\\/author\\\/samjeans\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"NYU-forskare bygger ett banbrytande AI-system f\u00f6r talsyntes | DailyAI","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/sv\/2024\/04\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\/","og_locale":"sv_SE","og_type":"article","og_title":"NYU researchers build a groundbreaking AI speech synthesis system | DailyAI","og_description":"A team of researchers from New York University has made progress in neural speech decoding, bringing us closer to a future in which individuals who have lost the ability to speak can regain their voice.\u00a0 The study, published in Nature Machine Intelligence, presents a novel deep learning framework that accurately translates brain signals into intelligible speech.\u00a0 People with brain injuries from strokes, degenerative conditions, or physical trauma can use these systems to communicate by decoding their 
thoughts or intended speech from neural signals. The NYU team&#8217;s system involves a deep learning model that maps the electrocorticography (ECoG) signals from the","og_url":"https:\/\/dailyai.com\/sv\/2024\/04\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\/","og_site_name":"DailyAI","article_published_time":"2024-04-11T19:06:18+00:00","article_modified_time":"2024-04-12T10:16:58+00:00","og_image":[{"width":1792,"height":1024,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/DALL\u00b7E-2024-04-11-20.05.57-A-minimalist-scene-showing-an-abstract-representation-of-a-brain-interconnected-with-audio-waves-symbolizing-speech.-The-brain-is-depicted-in-the-cent.webp","type":"image\/webp"}],"author":"Sam Jeans","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"Skriven av":"Sam Jeans","Ber\u00e4knad l\u00e4stid":"5 minuter"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2024\/04\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2024\/04\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\/"},"author":{"name":"Sam Jeans","@id":"https:\/\/dailyai.com\/#\/schema\/person\/711e81f945549438e8bbc579efdeb3c9"},"headline":"NYU researchers build a groundbreaking AI speech synthesis 
system","datePublished":"2024-04-11T19:06:18+00:00","dateModified":"2024-04-12T10:16:58+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2024\/04\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\/"},"wordCount":970,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2024\/04\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/DALL\u00b7E-2024-04-11-20.05.57-A-minimalist-scene-showing-an-abstract-representation-of-a-brain-interconnected-with-audio-waves-symbolizing-speech.-The-brain-is-depicted-in-the-cent.webp","keywords":["Biotech","Healthcare","Medicine"],"articleSection":["Industry"],"inLanguage":"sv-SE"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2024\/04\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\/","url":"https:\/\/dailyai.com\/2024\/04\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\/","name":"NYU-forskare bygger ett banbrytande AI-system f\u00f6r talsyntes | 
DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2024\/04\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2024\/04\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/DALL\u00b7E-2024-04-11-20.05.57-A-minimalist-scene-showing-an-abstract-representation-of-a-brain-interconnected-with-audio-waves-symbolizing-speech.-The-brain-is-depicted-in-the-cent.webp","datePublished":"2024-04-11T19:06:18+00:00","dateModified":"2024-04-12T10:16:58+00:00","breadcrumb":{"@id":"https:\/\/dailyai.com\/2024\/04\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\/#breadcrumb"},"inLanguage":"sv-SE","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2024\/04\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\/"]}]},{"@type":"ImageObject","inLanguage":"sv-SE","@id":"https:\/\/dailyai.com\/2024\/04\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/DALL\u00b7E-2024-04-11-20.05.57-A-minimalist-scene-showing-an-abstract-representation-of-a-brain-interconnected-with-audio-waves-symbolizing-speech.-The-brain-is-depicted-in-the-cent.webp","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/DALL\u00b7E-2024-04-11-20.05.57-A-minimalist-scene-showing-an-abstract-representation-of-a-brain-interconnected-with-audio-waves-symbolizing-speech.-The-brain-is-depicted-in-the-cent.webp","width":1792,"height":1024,"caption":"AI 
speech"},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2024\/04\/nyu-researchers-build-a-groundbreaking-ai-speech-synthesis-system\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"NYU researchers build a groundbreaking AI speech synthesis system"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DagligaAI","description":"Din dagliga dos av AI-nyheter","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"sv-SE"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DagligaAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"sv-SE","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/711e81f945549438e8bbc579efdeb3c9","name":"Sam 
Jeans","image":{"@type":"ImageObject","inLanguage":"sv-SE","@id":"https:\/\/secure.gravatar.com\/avatar\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g","caption":"Sam Jeans"},"description":"Sam \u00e4r en vetenskaps- och teknikskribent som har arbetat i olika AI-startups. N\u00e4r han inte skriver l\u00e4ser han medicinska tidskrifter eller gr\u00e4ver igenom l\u00e5dor med vinylskivor.","sameAs":["https:\/\/www.linkedin.com\/in\/sam-jeans-6746b9142\/"],"url":"https:\/\/dailyai.com\/sv\/author\/samjeans\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/posts\/11499","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/comments?post=11499"}],"version-history":[{"count":13,"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/posts\/11499\/revisions"}],"predecessor-version":[{"id":11523,"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/posts\/11499\/revisions\/11523"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/media\/11501"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/media?parent=11499"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/categories?post=11499"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/tags?post=11499"}],"curies":[{"name":"wp","href":"htt
ps:\/\/api.w.org\/{rel}","templated":true}]}}