{"id":8212,"date":"2023-12-12T11:24:30","date_gmt":"2023-12-12T11:24:30","guid":{"rendered":"https:\/\/dailyai.com\/?p=8212"},"modified":"2023-12-12T11:24:30","modified_gmt":"2023-12-12T11:24:30","slug":"mixture-of-experts-and-sparsity-hot-ai-topics-explained","status":"publish","type":"post","link":"https:\/\/dailyai.com\/pt\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/","title":{"rendered":"Mistura de especialistas e dispers\u00e3o - Explica\u00e7\u00e3o de t\u00f3picos importantes de IA"},"content":{"rendered":"<p><strong>O lan\u00e7amento de modelos de IA mais pequenos e mais eficientes, como o inovador modelo Mixtral 8x7B da Mistral, fez com que os conceitos de \"Mistura de Peritos\" (MoE) e \"Esparsidade\" se tornassem temas quentes.<\/strong><\/p>\n<p>Estes termos passaram dos dom\u00ednios dos complexos documentos de investiga\u00e7\u00e3o sobre IA para os artigos noticiosos que relatam a r\u00e1pida melhoria dos modelos de linguagem de grande dimens\u00e3o (LLM).<\/p>\n<p>Felizmente, n\u00e3o \u00e9 necess\u00e1rio ser um cientista de dados para ter uma ideia geral do que s\u00e3o MoE e Sparsity e porque \u00e9 que estes conceitos s\u00e3o importantes.<\/p>\n<h2>Mistura de peritos<\/h2>\n<p>Os LLM, como o GPT-3, baseiam-se numa arquitetura de rede densa. Estes modelos s\u00e3o constitu\u00eddos por camadas de redes neuronais em que cada neur\u00f3nio de uma camada est\u00e1 ligado a todos os neur\u00f3nios da camada anterior e das camadas seguintes.<\/p>\n<p>Todos os neur\u00f3nios est\u00e3o envolvidos durante o treino, bem como durante a infer\u00eancia, o processo de gerar uma resposta ao seu pedido. Estes modelos s\u00e3o \u00f3ptimos para lidar com uma grande variedade de tarefas, mas utilizam muito poder de computa\u00e7\u00e3o porque todas as partes da sua rede participam no processamento de uma entrada.<\/p>\n<p>Um modelo baseado numa arquitetura MoE divide as camadas num determinado n\u00famero de \"especialistas\", sendo cada especialista uma rede neural pr\u00e9-treinada em fun\u00e7\u00f5es espec\u00edficas. Assim, quando se v\u00ea um modelo chamado Mixtral 8x7B, significa que tem 8 camadas de peritos com 7 mil milh\u00f5es de par\u00e2metros cada.<\/p>\n<p>Cada perito \u00e9 treinado para ser muito bom num aspeto restrito do problema global, tal como os especialistas de uma \u00e1rea.<\/p>\n<p>Uma vez solicitado, uma Gating Network divide o pedido em diferentes tokens e decide qual o perito mais adequado para o processar. Os resultados de cada especialista s\u00e3o ent\u00e3o combinados para fornecer o resultado final.<\/p>\n<p>Pense no MdE como tendo um grupo de comerciantes com compet\u00eancias muito espec\u00edficas para fazer a renova\u00e7\u00e3o da sua casa. Em vez de contratar um faz-tudo geral (rede densa) para fazer tudo, pede ao Jo\u00e3o, o canalizador, para fazer a canaliza\u00e7\u00e3o e ao Pedro, o eletricista, para fazer a eletricidade.<\/p>\n<p>Estes modelos s\u00e3o mais r\u00e1pidos de treinar porque n\u00e3o \u00e9 necess\u00e1rio treinar todo o modelo para fazer tudo.<\/p>\n<p>Os modelos MoE tamb\u00e9m t\u00eam uma infer\u00eancia mais r\u00e1pida em compara\u00e7\u00e3o com os modelos densos com o mesmo n\u00famero de par\u00e2metros. \u00c9 por isso que <a href=\"https:\/\/dailyai.com\/pt\/2023\/12\/open-source-startup-mistral-ai-secures-415m-in-funding\/\">Mixtral 8x7B<\/a> com um total de 56 mil milh\u00f5es de par\u00e2metros, pode igualar ou superar o GPT-3.5, que tem 175 mil milh\u00f5es de par\u00e2metros.<\/p>\n<p>H\u00e1 rumores de que <a href=\"https:\/\/the-decoder.com\/gpt-4-architecture-datasets-costs-and-more-leaked\/\" target=\"_blank\" rel=\"noopener\">O GPT-4 utiliza uma arquitetura MoE<\/a> com 16 peritos, enquanto <a href=\"https:\/\/dailyai.com\/pt\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/\">G\u00e9meos<\/a> utiliza uma arquitetura densa.<\/p>\n<h2>Esparsidade<\/h2>\n<p>A esparsidade refere-se \u00e0 ideia de reduzir o n\u00famero de elementos activos num modelo, como os neur\u00f3nios ou os pesos, sem comprometer significativamente o seu desempenho.<\/p>\n<p>Se os dados de entrada para os modelos de IA, como texto ou imagens, contiverem muitos zeros, a t\u00e9cnica de representa\u00e7\u00e3o de dados esparsos n\u00e3o desperdi\u00e7a esfor\u00e7os no armazenamento dos zeros.<\/p>\n<p>Numa rede neural esparsa, os pesos, ou a for\u00e7a da liga\u00e7\u00e3o entre os neur\u00f3nios, s\u00e3o frequentemente zero. A esparsidade poda, ou remove, esses pesos para que n\u00e3o sejam inclu\u00eddos durante o processamento. Um modelo de MdE tamb\u00e9m \u00e9 naturalmente esparso porque pode ter um perito envolvido no processamento enquanto os restantes ficam inactivos.<\/p>\n<p>A dispers\u00e3o pode levar a modelos que s\u00e3o menos intensivos em termos de computa\u00e7\u00e3o e requerem menos armazenamento. Os modelos de IA que eventualmente funcionem no seu dispositivo depender\u00e3o fortemente da dispers\u00e3o.<\/p>\n<p>Pode pensar-se na esparsidade como ir a uma biblioteca para obter uma resposta a uma pergunta. Se a biblioteca tiver bili\u00f5es de livros, pode abrir cada livro da biblioteca e, eventualmente, encontrar respostas relevantes em alguns dos livros. \u00c9 isso que um modelo n\u00e3o esparso faz.<\/p>\n<p>Se nos livrarmos de muitos dos livros que t\u00eam maioritariamente p\u00e1ginas em branco ou informa\u00e7\u00f5es irrelevantes, \u00e9 mais f\u00e1cil encontrar os livros relevantes para a nossa pergunta, pelo que abrimos menos livros e encontramos a resposta mais rapidamente.<\/p>\n<p>Se gosta de estar a par dos \u00faltimos desenvolvimentos em IA, ent\u00e3o espere ver o MoE e o Sparsity mencionados com mais frequ\u00eancia. Os LLM est\u00e3o prestes a ficar muito mais pequenos e r\u00e1pidos.<\/p>","protected":false},"excerpt":{"rendered":"<p>O lan\u00e7amento de modelos de IA mais pequenos e mais eficientes, como o inovador modelo Mixtral 8x7B da Mistral, fez com que os conceitos de \"Mistura de Peritos\" (MoE) e \"Esparsidade\" se tornassem t\u00f3picos quentes. Estes termos passaram dos dom\u00ednios dos complexos documentos de investiga\u00e7\u00e3o de IA para artigos de not\u00edcias que relatam a r\u00e1pida melhoria dos modelos de linguagem de grande dimens\u00e3o (LLM). Felizmente, n\u00e3o \u00e9 necess\u00e1rio ser um cientista de dados para ter uma ideia geral do que s\u00e3o MoE e Sparsity e porque \u00e9 que estes conceitos s\u00e3o importantes. Os LLMs de Mistura de Peritos, como o GPT-3, baseiam-se numa arquitetura de rede densa. Estes modelos s\u00e3o compostos por camadas<\/p>","protected":false},"author":6,"featured_media":8214,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[84],"tags":[118],"class_list":["post-8212","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-industry","tag-llms"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Mixture of Experts and Sparsity - Hot AI topics explained | DailyAI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/pt\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/\" \/>\n<meta property=\"og:locale\" content=\"pt_PT\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Mixture of Experts and Sparsity - Hot AI topics explained | DailyAI\" \/>\n<meta property=\"og:description\" content=\"The release of smaller and more efficient AI models like Mistral\u2019s groundbreaking Mixtral 8x7B model has seen the concepts of \u201cMixture of Experts\u201d (MoE) and \u201cSparsity\u201d become hot topics. These terms have moved from the realms of complex AI research papers to news articles reporting on rapidly improving Large Language Models (LLM). Fortunately, you don\u2019t have to be a data scientist to have a broad idea of what MoE and Sparsity are and why these concepts are a big deal. Mixture of Experts LLMs like GPT-3 are based on a dense network architecture. These models are made up of layers\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/pt\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2023-12-12T11:24:30+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/Mixture-of-Experts.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1000\" \/>\n\t<meta property=\"og:image:height\" content=\"415\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Eugene van der Watt\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Escrito por\" \/>\n\t<meta name=\"twitter:data1\" content=\"Eugene van der Watt\" \/>\n\t<meta name=\"twitter:label2\" content=\"Tempo estimado de leitura\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minutos\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\\\/\"},\"author\":{\"name\":\"Eugene van der Watt\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\"},\"headline\":\"Mixture of Experts and Sparsity &#8211; Hot AI topics explained\",\"datePublished\":\"2023-12-12T11:24:30+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\\\/\"},\"wordCount\":664,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/Mixture-of-Experts.jpg\",\"keywords\":[\"LLMS\"],\"articleSection\":[\"Industry\"],\"inLanguage\":\"pt-PT\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\\\/\",\"name\":\"Mixture of Experts and Sparsity - Hot AI topics explained | DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/Mixture-of-Experts.jpg\",\"datePublished\":\"2023-12-12T11:24:30+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\\\/#breadcrumb\"},\"inLanguage\":\"pt-PT\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-PT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/Mixture-of-Experts.jpg\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/Mixture-of-Experts.jpg\",\"width\":1000,\"height\":415},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Mixture of Experts and Sparsity &#8211; Hot AI topics explained\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"pt-PT\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-PT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\",\"name\":\"Eugene van der Watt\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-PT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"caption\":\"Eugene van der Watt\"},\"description\":\"Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.\",\"sameAs\":[\"www.linkedin.com\\\/in\\\/eugene-van-der-watt-16828119\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/pt\\\/author\\\/eugene\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Mistura de especialistas e dispers\u00e3o - Explica\u00e7\u00e3o de t\u00f3picos importantes de IA | DailyAI","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/pt\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/","og_locale":"pt_PT","og_type":"article","og_title":"Mixture of Experts and Sparsity - Hot AI topics explained | DailyAI","og_description":"The release of smaller and more efficient AI models like Mistral\u2019s groundbreaking Mixtral 8x7B model has seen the concepts of \u201cMixture of Experts\u201d (MoE) and \u201cSparsity\u201d become hot topics. These terms have moved from the realms of complex AI research papers to news articles reporting on rapidly improving Large Language Models (LLM). Fortunately, you don\u2019t have to be a data scientist to have a broad idea of what MoE and Sparsity are and why these concepts are a big deal. Mixture of Experts LLMs like GPT-3 are based on a dense network architecture. These models are made up of layers","og_url":"https:\/\/dailyai.com\/pt\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/","og_site_name":"DailyAI","article_published_time":"2023-12-12T11:24:30+00:00","og_image":[{"width":1000,"height":415,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/Mixture-of-Experts.jpg","type":"image\/jpeg"}],"author":"Eugene van der Watt","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"Escrito por":"Eugene van der Watt","Tempo estimado de leitura":"3 minutos"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/"},"author":{"name":"Eugene van der Watt","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa"},"headline":"Mixture of Experts and Sparsity &#8211; Hot AI topics explained","datePublished":"2023-12-12T11:24:30+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/"},"wordCount":664,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/Mixture-of-Experts.jpg","keywords":["LLMS"],"articleSection":["Industry"],"inLanguage":"pt-PT"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/","url":"https:\/\/dailyai.com\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/","name":"Mistura de especialistas e dispers\u00e3o - Explica\u00e7\u00e3o de t\u00f3picos importantes de IA | DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/Mixture-of-Experts.jpg","datePublished":"2023-12-12T11:24:30+00:00","breadcrumb":{"@id":"https:\/\/dailyai.com\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/#breadcrumb"},"inLanguage":"pt-PT","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/"]}]},{"@type":"ImageObject","inLanguage":"pt-PT","@id":"https:\/\/dailyai.com\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/Mixture-of-Experts.jpg","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/Mixture-of-Experts.jpg","width":1000,"height":415},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"Mixture of Experts and Sparsity &#8211; Hot AI topics explained"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DailyAI","description":"A sua dose di\u00e1ria de not\u00edcias sobre IA","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"pt-PT"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DailyAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"pt-PT","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa","name":"Eugene van der Watt","image":{"@type":"ImageObject","inLanguage":"pt-PT","@id":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","caption":"Eugene van der Watt"},"description":"Eugene vem de uma forma\u00e7\u00e3o em engenharia eletr\u00f3nica e adora tudo o que \u00e9 tecnologia. Quando faz uma pausa no consumo de not\u00edcias sobre IA, pode encontr\u00e1-lo \u00e0 mesa de snooker.","sameAs":["www.linkedin.com\/in\/eugene-van-der-watt-16828119"],"url":"https:\/\/dailyai.com\/pt\/author\/eugene\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/posts\/8212","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/comments?post=8212"}],"version-history":[{"count":3,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/posts\/8212\/revisions"}],"predecessor-version":[{"id":8216,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/posts\/8212\/revisions\/8216"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/media\/8214"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/media?parent=8212"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/categories?post=8212"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/tags?post=8212"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}