{"id":8212,"date":"2023-12-12T11:24:30","date_gmt":"2023-12-12T11:24:30","guid":{"rendered":"https:\/\/dailyai.com\/?p=8212"},"modified":"2023-12-12T11:24:30","modified_gmt":"2023-12-12T11:24:30","slug":"mixture-of-experts-and-sparsity-hot-ai-topics-explained","status":"publish","type":"post","link":"https:\/\/dailyai.com\/sv\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/","title":{"rendered":"Mixture of Experts och Sparsity - heta AI-\u00e4mnen f\u00f6rklarade"},"content":{"rendered":"<p><strong>Lanseringen av mindre och mer effektiva AI-modeller som Mistrals banbrytande Mixtral 8x7B-modell har gjort att begreppen \"Mixture of Experts\" (MoE) och \"Sparsity\" har blivit heta \u00e4mnen.<\/strong><\/p>\n<p>Dessa termer har f\u00f6rflyttats fr\u00e5n komplexa AI-forskningsrapporter till nyhetsartiklar som rapporterar om snabbt f\u00f6rb\u00e4ttrade stora spr\u00e5kmodeller (LLM).<\/p>\n<p>Som tur \u00e4r beh\u00f6ver du inte vara datavetare f\u00f6r att ha en bred uppfattning om vad MoE och Sparsity \u00e4r och varf\u00f6r dessa begrepp \u00e4r s\u00e5 viktiga.<\/p>\n<h2>Blandning av experter<\/h2>\n<p>LLM:er som GPT-3 bygger p\u00e5 en arkitektur med t\u00e4ta n\u00e4tverk. Dessa modeller best\u00e5r av lager av neurala n\u00e4tverk d\u00e4r varje neuron i ett lager \u00e4r kopplad till varje neuron i f\u00f6reg\u00e5ende och efterf\u00f6ljande lager.<\/p>\n<p>Alla neuroner \u00e4r involverade under s\u00e5v\u00e4l tr\u00e4ning som inferens, dvs. processen att generera ett svar p\u00e5 din fr\u00e5ga. Dessa modeller \u00e4r utm\u00e4rkta f\u00f6r att hantera en m\u00e4ngd olika uppgifter men anv\u00e4nder mycket datorkraft eftersom varje del av n\u00e4tverket deltar i bearbetningen av en inmatning.<\/p>\n<p>En modell som baseras p\u00e5 en MoE-arkitektur delar upp lagren i ett visst antal \"experter\" d\u00e4r varje expert \u00e4r ett neuralt n\u00e4tverk som tr\u00e4nats p\u00e5 specifika funktioner. S\u00e5 n\u00e4r du ser en modell som heter Mixtral 8x7B betyder det att den har 8 expertlager med 7 miljarder parametrar vardera.<\/p>\n<p>Varje expert \u00e4r utbildad f\u00f6r att vara mycket bra p\u00e5 en smal aspekt av det \u00f6vergripande problemet, ungef\u00e4r som specialister inom ett omr\u00e5de.<\/p>\n<p>N\u00e4r en fr\u00e5ga har st\u00e4llts delar ett Gating-n\u00e4tverk upp fr\u00e5gan i olika tokens och avg\u00f6r vilken expert som \u00e4r b\u00e4st l\u00e4mpad att behandla den. Resultaten fr\u00e5n varje expert kombineras sedan f\u00f6r att ge det slutliga resultatet.<\/p>\n<p>T\u00e4nk p\u00e5 MoE som att ha en grupp hantverkare med mycket specifika f\u00e4rdigheter f\u00f6r att g\u00f6ra din hemrenovering. I st\u00e4llet f\u00f6r att anlita en allm\u00e4n hantverkare (t\u00e4tt n\u00e4tverk) f\u00f6r att g\u00f6ra allt, ber du r\u00f6rmokaren John att g\u00f6ra r\u00f6rmokeriet och elektrikern Peter att g\u00f6ra det elektriska.<\/p>\n<p>Dessa modeller \u00e4r snabbare att tr\u00e4na eftersom du inte beh\u00f6ver tr\u00e4na hela modellen f\u00f6r att g\u00f6ra allt.<\/p>\n<p>MoE-modeller har ocks\u00e5 snabbare inferens j\u00e4mf\u00f6rt med t\u00e4ta modeller med samma antal parametrar. 
<p>This is why <a href="https://dailyai.com/sv/2023/12/open-source-startup-mistral-ai-secures-415m-in-funding/">Mixtral 8x7B</a>, with 56 billion parameters in total, can match or beat GPT-3.5, which has 175 billion parameters.</p>
<p>It is rumored that <a href="https://the-decoder.com/gpt-4-architecture-datasets-costs-and-more-leaked/" target="_blank" rel="noopener">GPT-4 uses an MoE architecture</a> with 16 experts, while <a href="https://dailyai.com/sv/2023/12/google-launches-its-new-gemini-multi-modal-family-of-models/">Gemini</a> uses a dense architecture.</p>
<h2>Sparsity</h2>
<p>Sparsity means that the number of active elements in a model, such as neurons or weights, can be reduced without significantly hurting performance.</p>
<p>If the inputs to an AI model, such as text or images, contain a lot of zeros, sparse data representation techniques mean you don't have to waste resources storing those zeros.</p>
<p>In a sparse neural network, the weights, i.e. the strengths of the connections between neurons, are often zero. Sparsity prunes, or removes, those weights so they are skipped during processing. An MoE model is also naturally sparse, since it can have a single expert involved in processing an input while the rest sit idle. (The short sketch at the end of this article shows the pruning idea in code.)</p>
<p>Sparsity can lead to models that are less computationally intensive and need less storage. The AI models that will eventually run on your device will rely heavily on sparsity.</p>
<p>You can think of sparsity as going to a library to answer a question. If the library holds billions of books, you could open every single book and eventually find relevant answers in a few of them. That's what a non-sparse model does.</p>
<p>If we get rid of the many books that contain mostly blank pages or irrelevant information, it becomes easier to find the books that are relevant to our question, so we open fewer books and find the answer faster.</p>
<p>If you enjoy keeping up with the latest AI developments, expect MoE and Sparsity to be mentioned more and more often. LLMs are about to get much smaller and faster.</p>
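<p>As promised above, here is what pruning can look like in practice, again in PyTorch. It applies simple magnitude pruning to a made-up weight matrix and then stores only the surviving entries; real pruning pipelines are more careful (they typically prune gradually and fine-tune afterwards), and every number here is illustrative.</p>
<pre><code>import torch

# A hypothetical dense weight matrix from one trained layer.
w = torch.randn(1024, 1024)

# Magnitude pruning: zero out the 90% of weights with the smallest
# absolute values, keeping only the strongest connections.
threshold = w.abs().quantile(0.9)
w_pruned = torch.where(w.abs() >= threshold, w, torch.zeros_like(w))

# A sparse tensor stores only the nonzero values and their positions
# instead of every cell of the matrix, zeros included, which is where
# the storage (and potentially compute) savings come from.
w_sparse = w_pruned.to_sparse()

print("dense entries:  ", w.numel())                  # 1048576
print("stored nonzeros:", w_sparse.values().numel())  # roughly 104858
</code></pre>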
Dessa modeller best\u00e5r av lager<\/p>","protected":false},"author":6,"featured_media":8214,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[84],"tags":[118],"class_list":["post-8212","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-industry","tag-llms"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Mixture of Experts and Sparsity - Hot AI topics explained | DailyAI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/sv\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/\" \/>\n<meta property=\"og:locale\" content=\"sv_SE\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Mixture of Experts and Sparsity - Hot AI topics explained | DailyAI\" \/>\n<meta property=\"og:description\" content=\"The release of smaller and more efficient AI models like Mistral\u2019s groundbreaking Mixtral 8x7B model has seen the concepts of \u201cMixture of Experts\u201d (MoE) and \u201cSparsity\u201d become hot topics. These terms have moved from the realms of complex AI research papers to news articles reporting on rapidly improving Large Language Models (LLM). Fortunately, you don\u2019t have to be a data scientist to have a broad idea of what MoE and Sparsity are and why these concepts are a big deal. Mixture of Experts LLMs like GPT-3 are based on a dense network architecture. These models are made up of layers\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/sv\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2023-12-12T11:24:30+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/Mixture-of-Experts.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1000\" \/>\n\t<meta property=\"og:image:height\" content=\"415\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Eugene van der Watt\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Skriven av\" \/>\n\t<meta name=\"twitter:data1\" content=\"Eugene van der Watt\" \/>\n\t<meta name=\"twitter:label2\" content=\"Ber\u00e4knad l\u00e4stid\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minuter\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\\\/\"},\"author\":{\"name\":\"Eugene van der Watt\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\"},\"headline\":\"Mixture of Experts and Sparsity &#8211; Hot AI topics 
explained\",\"datePublished\":\"2023-12-12T11:24:30+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\\\/\"},\"wordCount\":664,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/Mixture-of-Experts.jpg\",\"keywords\":[\"LLMS\"],\"articleSection\":[\"Industry\"],\"inLanguage\":\"sv-SE\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\\\/\",\"name\":\"Mixture of Experts and Sparsity - Hot AI topics explained | DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/Mixture-of-Experts.jpg\",\"datePublished\":\"2023-12-12T11:24:30+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\\\/#breadcrumb\"},\"inLanguage\":\"sv-SE\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"sv-SE\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/Mixture-of-Experts.jpg\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/Mixture-of-Experts.jpg\",\"width\":1000,\"height\":415},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Mixture of Experts and Sparsity &#8211; Hot AI topics explained\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI 
News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"sv-SE\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"sv-SE\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\",\"name\":\"Eugene van der Watt\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"sv-SE\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"caption\":\"Eugene van der Watt\"},\"description\":\"Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.\",\"sameAs\":[\"www.linkedin.com\\\/in\\\/eugene-van-der-watt-16828119\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/sv\\\/author\\\/eugene\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Blandning av experter och gleshet - heta AI-\u00e4mnen f\u00f6rklarade | DailyAI","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/sv\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/","og_locale":"sv_SE","og_type":"article","og_title":"Mixture of Experts and Sparsity - Hot AI topics explained | DailyAI","og_description":"The release of smaller and more efficient AI models like Mistral\u2019s groundbreaking Mixtral 8x7B model has seen the concepts of \u201cMixture of Experts\u201d (MoE) and \u201cSparsity\u201d become hot topics. These terms have moved from the realms of complex AI research papers to news articles reporting on rapidly improving Large Language Models (LLM). Fortunately, you don\u2019t have to be a data scientist to have a broad idea of what MoE and Sparsity are and why these concepts are a big deal. Mixture of Experts LLMs like GPT-3 are based on a dense network architecture. 
These models are made up of layers","og_url":"https:\/\/dailyai.com\/sv\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/","og_site_name":"DailyAI","article_published_time":"2023-12-12T11:24:30+00:00","og_image":[{"width":1000,"height":415,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/Mixture-of-Experts.jpg","type":"image\/jpeg"}],"author":"Eugene van der Watt","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"Skriven av":"Eugene van der Watt","Ber\u00e4knad l\u00e4stid":"3 minuter"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/"},"author":{"name":"Eugene van der Watt","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa"},"headline":"Mixture of Experts and Sparsity &#8211; Hot AI topics explained","datePublished":"2023-12-12T11:24:30+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/"},"wordCount":664,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/Mixture-of-Experts.jpg","keywords":["LLMS"],"articleSection":["Industry"],"inLanguage":"sv-SE"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/","url":"https:\/\/dailyai.com\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/","name":"Blandning av experter och gleshet - heta AI-\u00e4mnen f\u00f6rklarade | DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/Mixture-of-Experts.jpg","datePublished":"2023-12-12T11:24:30+00:00","breadcrumb":{"@id":"https:\/\/dailyai.com\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/#breadcrumb"},"inLanguage":"sv-SE","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/"]}]},{"@type":"ImageObject","inLanguage":"sv-SE","@id":"https:\/\/dailyai.com\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/Mixture-of-Experts.jpg","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/Mixture-of-Experts.jpg","width":1000,"height":415},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"Mixture of Experts and Sparsity &#8211; Hot AI topics explained"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DagligaAI","description":"Din dagliga dos av 
AI-nyheter","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"sv-SE"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DagligaAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"sv-SE","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa","name":"Eugene van der Watt","image":{"@type":"ImageObject","inLanguage":"sv-SE","@id":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","caption":"Eugene van der Watt"},"description":"Eugene kommer fr\u00e5n en bakgrund som elektronikingenj\u00f6r och \u00e4lskar allt som har med teknik att g\u00f6ra. N\u00e4r han tar en paus fr\u00e5n att konsumera AI-nyheter hittar du honom vid snookerbordet.","sameAs":["www.linkedin.com\/in\/eugene-van-der-watt-16828119"],"url":"https:\/\/dailyai.com\/sv\/author\/eugene\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/posts\/8212","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/comments?post=8212"}],"version-history":[{"count":3,"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/posts\/8212\/revisions"}],"predecessor-version":[{"id":8216,"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/posts\/8212\/revisions\/8216"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/media\/8214"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/media?parent=8212"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/categories?post=8212"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/tags?post=8212"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}