{"id":8212,"date":"2023-12-12T11:24:30","date_gmt":"2023-12-12T11:24:30","guid":{"rendered":"https:\/\/dailyai.com\/?p=8212"},"modified":"2023-12-12T11:24:30","modified_gmt":"2023-12-12T11:24:30","slug":"mixture-of-experts-and-sparsity-hot-ai-topics-explained","status":"publish","type":"post","link":"https:\/\/dailyai.com\/fr\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/","title":{"rendered":"M\u00e9lange d'experts et \u00e9parsit\u00e9 - Les sujets br\u00fblants de l'IA expliqu\u00e9s"},"content":{"rendered":"<p><strong>La publication de mod\u00e8les d'IA plus petits et plus efficaces, comme le mod\u00e8le r\u00e9volutionnaire Mixtral 8x7B de Mistral, a fait des concepts de \"m\u00e9lange d'experts\" (MoE) et de \"sparit\u00e9\" des sujets d'actualit\u00e9.<\/strong><\/p>\n<p>Ces termes sont pass\u00e9s du domaine des documents de recherche complexes sur l'intelligence artificielle \u00e0 celui des articles de presse faisant \u00e9tat de l'am\u00e9lioration rapide des grands mod\u00e8les de langage (LLM).<\/p>\n<p>Heureusement, il n'est pas n\u00e9cessaire d'\u00eatre un data scientist pour avoir une id\u00e9e g\u00e9n\u00e9rale de ce que sont la MoE et la Sparsit\u00e9 et pourquoi ces concepts sont importants.<\/p>\n<h2>M\u00e9lange d'experts<\/h2>\n<p>Les LLM comme le GPT-3 sont bas\u00e9s sur une architecture de r\u00e9seau dense. Ces mod\u00e8les sont constitu\u00e9s de couches de r\u00e9seaux neuronaux o\u00f9 chaque neurone d'une couche est connect\u00e9 \u00e0 tous les neurones des couches pr\u00e9c\u00e9dentes et suivantes.<\/p>\n<p>Tous les neurones participent \u00e0 l'apprentissage et \u00e0 l'inf\u00e9rence, c'est-\u00e0-dire au processus de g\u00e9n\u00e9ration d'une r\u00e9ponse \u00e0 votre demande. Ces mod\u00e8les sont parfaits pour aborder une grande vari\u00e9t\u00e9 de t\u00e2ches, mais ils utilisent beaucoup de puissance de calcul parce que chaque partie de leur r\u00e9seau participe au traitement d'une entr\u00e9e.<\/p>\n<p>Un mod\u00e8le bas\u00e9 sur une architecture MoE d\u00e9compose les couches en un certain nombre d'\"experts\" o\u00f9 chaque expert est un r\u00e9seau neuronal pr\u00e9-entra\u00een\u00e9 sur des fonctions sp\u00e9cifiques. Ainsi, lorsque vous voyez un mod\u00e8le appel\u00e9 Mixtral 8x7B, cela signifie qu'il comporte 8 couches d'experts de 7 milliards de param\u00e8tres chacune.<\/p>\n<p>Chaque expert est form\u00e9 pour \u00eatre tr\u00e8s comp\u00e9tent sur un aspect \u00e9troit du probl\u00e8me global, un peu comme les sp\u00e9cialistes d'un domaine.<\/p>\n<p>Une fois la demande formul\u00e9e, un r\u00e9seau de contr\u00f4le la d\u00e9compose en diff\u00e9rents \u00e9l\u00e9ments et d\u00e9cide quel expert est le plus apte \u00e0 la traiter. Les r\u00e9sultats de chaque expert sont ensuite combin\u00e9s pour fournir le r\u00e9sultat final.<\/p>\n<p>Imaginez que vous ayez un groupe d'artisans poss\u00e9dant des comp\u00e9tences tr\u00e8s sp\u00e9cifiques pour effectuer les travaux de r\u00e9novation de votre maison. 
Imagine you have a group of tradespeople, each with very specific skills, doing the renovations on your house. Instead of hiring one handyman (a dense network) to do everything, you ask John the plumber to handle the plumbing and Peter the electrician to handle the wiring.

These models are faster to train because the entire model doesn't need to be trained to do everything.

MoE models also offer faster inference than dense models with the same number of parameters. That's why [Mixtral 8x7B](https://dailyai.com/fr/2023/12/open-source-startup-mistral-ai-secures-415m-in-funding/), with a total of roughly 56 billion parameters, can match or beat GPT-3.5, which reportedly has 175 billion parameters.

Rumor has it that [GPT-4 uses a MoE architecture](https://the-decoder.com/gpt-4-architecture-datasets-costs-and-more-leaked/) with 16 experts, while [Gemini](https://dailyai.com/fr/2023/12/google-launches-its-new-gemini-multi-modal-family-of-models/) uses a dense architecture.

## Sparsity

Sparsity refers to the idea of reducing the number of active elements in a model, such as neurons or weights, without significantly compromising its performance.

If the input data to AI models, such as text or images, contains a lot of zeros, sparse data representation techniques don't waste effort storing those zeros.

In a sparse neural network, the weights, or connection strengths between neurons, are often zero. Sparsity allows these weights to be pruned, or removed, so they're skipped during processing. A MoE model is also naturally sparse, since one expert can be involved in processing while the others sit idle.

Sparsity can lead to models that need less compute and less storage. The AI models that will eventually run on your device will lean heavily on sparsity.

You can compare sparsity to searching a library for the answer to a question. If the library holds billions of books, you could open every single book and find relevant answers in some of them. That's what a non-sparse model does.

If we get rid of the many books that contain blank pages or irrelevant information, it's easier to find the books that relate to our question, so we open fewer books and find the answer faster.
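To ground the pruning idea, here is a minimal sketch of magnitude pruning paired with a sparse storage format. It assumes NumPy and SciPy are available; the matrix size and the 90% pruning ratio are arbitrary illustrative choices, not numbers from any real model.

```python
# Magnitude pruning: zero out the smallest weights, then store only the
# survivors in a sparse format so no effort is wasted on the zeros.
import numpy as np
from scipy import sparse

rng = np.random.default_rng(1)
W = rng.standard_normal((512, 512))  # a stand-in dense weight matrix

# Prune the 90% of weights with the smallest magnitude.
threshold = np.quantile(np.abs(W), 0.90)
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0)

# Sparse (CSR) storage keeps only the nonzero weights and their positions.
W_sparse = sparse.csr_matrix(W_pruned)

dense_bytes = W.nbytes
sparse_bytes = W_sparse.data.nbytes + W_sparse.indices.nbytes + W_sparse.indptr.nbytes
print(f"dense: {dense_bytes} bytes, sparse: {sparse_bytes} bytes")

# Inference still works on the pruned matrix, with far fewer multiplications.
x = rng.standard_normal(512)
y = W_sparse @ x
print(y.shape)  # (512,)
```

In practice, turning the skipped zeros into real speedups also needs kernels and hardware that exploit the sparsity, but the storage saving is visible even in this toy example.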
If you like keeping up with the latest developments in AI, expect to see MoE and sparsity mentioned more often. LLMs are about to get a lot smaller and faster.