{"id":8212,"date":"2023-12-12T11:24:30","date_gmt":"2023-12-12T11:24:30","guid":{"rendered":"https:\/\/dailyai.com\/?p=8212"},"modified":"2023-12-12T11:24:30","modified_gmt":"2023-12-12T11:24:30","slug":"mixture-of-experts-and-sparsity-hot-ai-topics-explained","status":"publish","type":"post","link":"https:\/\/dailyai.com\/da\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/","title":{"rendered":"Blanding af eksperter og sparsomhed - varme AI-emner forklaret"},"content":{"rendered":"<p><strong>Udgivelsen af mindre og mere effektive AI-modeller som Mistrals banebrydende Mixtral 8x7B-model har gjort begreberne \"Mixture of Experts\" (MoE) og \"Sparsity\" til varme emner.<\/strong><\/p>\n<p>Disse begreber har bev\u00e6get sig fra komplekse AI-forskningsartikler til nyhedsartikler, der rapporterer om hurtigt forbedrede store sprogmodeller (LLM).<\/p>\n<p>Heldigvis beh\u00f8ver man ikke at v\u00e6re dataforsker for at have en bred id\u00e9 om, hvad MoE og Sparsity er, og hvorfor disse begreber er vigtige.<\/p>\n<h2>Blanding af eksperter<\/h2>\n<p>LLM'er som GPT-3 er baseret p\u00e5 en t\u00e6t netv\u00e6rksarkitektur. Disse modeller best\u00e5r af lag af neurale netv\u00e6rk, hvor hver neuron i et lag er forbundet med alle neuroner i det foreg\u00e5ende og efterf\u00f8lgende lag.<\/p>\n<p>Alle neuroner er involveret b\u00e5de under tr\u00e6ning og under inferens, dvs. processen med at generere et svar p\u00e5 din foresp\u00f8rgsel. Disse modeller er gode til at h\u00e5ndtere en lang r\u00e6kke opgaver, men bruger meget computerkraft, fordi alle dele af deres netv\u00e6rk deltager i behandlingen af et input.<\/p>\n<p>En model baseret p\u00e5 en MoE-arkitektur opdeler lagene i et vist antal \"eksperter\", hvor hver ekspert er et neuralt netv\u00e6rk, der er fortr\u00e6net til specifikke funktioner. S\u00e5 n\u00e5r du ser en model, der hedder Mixtral 8x7B, betyder det, at den har 8 ekspertlag med 7 milliarder parametre hver.<\/p>\n<p>Hver ekspert er uddannet til at v\u00e6re meget god til et sn\u00e6vert aspekt af det overordnede problem, ligesom specialister inden for et felt.<\/p>\n<p>N\u00e5r du bliver bedt om det, opdeler et Gating Network sp\u00f8rgsm\u00e5let i forskellige tokens og beslutter, hvilken ekspert der er bedst egnet til at behandle det. Hver eksperts output kombineres derefter for at give det endelige output.<\/p>\n<p>T\u00e6nk p\u00e5 MoE som at have en gruppe h\u00e5ndv\u00e6rkere med meget specifikke f\u00e6rdigheder til at udf\u00f8re din boligrenovering. I stedet for at hyre en almindelig h\u00e5ndv\u00e6rker (t\u00e6t netv\u00e6rk) til at g\u00f8re alt, beder du blikkenslageren John om at lave vvs-arbejdet og elektrikeren Peter om at lave det elektriske arbejde.<\/p>\n<p>Disse modeller er hurtigere at tr\u00e6ne, fordi du ikke beh\u00f8ver at tr\u00e6ne hele modellen til at g\u00f8re alt.<\/p>\n<p>MoE-modeller har ogs\u00e5 hurtigere inferens sammenlignet med t\u00e6tte modeller med samme antal parametre. Dette er grunden til, at <a href=\"https:\/\/dailyai.com\/da\/2023\/12\/open-source-startup-mistral-ai-secures-415m-in-funding\/\">Mixtral 8x7B<\/a> med i alt 56 milliarder parametre kan matche eller sl\u00e5 GPT-3.5, som har 175 milliarder parametre.<\/p>\n<p>Det rygtes, at <a href=\"https:\/\/the-decoder.com\/gpt-4-architecture-datasets-costs-and-more-leaked\/\" target=\"_blank\" rel=\"noopener\">GPT-4 bruger en MoE-arkitektur<\/a> med 16 eksperter, mens <a href=\"https:\/\/dailyai.com\/da\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/\">Tvillingerne<\/a> anvender en t\u00e6t arkitektur.<\/p>\n<h2>Sparsomhed<\/h2>\n<p>Sparsomhed refererer til ideen om at reducere antallet af aktive elementer i en model, som f.eks. neuroner eller v\u00e6gte, uden at g\u00e5 v\u00e6sentligt p\u00e5 kompromis med dens ydeevne.<\/p>\n<p>Hvis inputdata til AI-modeller, f.eks. tekst eller billeder, indeholder mange nuller, spilder teknikken med sparsom datarepr\u00e6sentation ikke kr\u00e6fter p\u00e5 at gemme nullerne.<\/p>\n<p>I et sparsomt neuralt netv\u00e6rk er v\u00e6gtene, eller styrken af forbindelsen mellem neuronerne, ofte nul. Sparsomhed besk\u00e6rer eller fjerner disse v\u00e6gte, s\u00e5 de ikke medtages under behandlingen. En MoE-model er ogs\u00e5 naturligt sparsom, fordi den kan have \u00e9n ekspert involveret i behandlingen, mens resten er inaktive.<\/p>\n<p>Sparsomhed kan f\u00f8re til modeller, der er mindre beregningsintensive og kr\u00e6ver mindre lagerplads. De AI-modeller, der i sidste ende k\u00f8rer p\u00e5 din enhed, vil v\u00e6re st\u00e6rkt afh\u00e6ngige af Sparsity.<\/p>\n<p>Du kan t\u00e6nke p\u00e5 Sparsity som at g\u00e5 p\u00e5 et bibliotek for at f\u00e5 svar p\u00e5 et sp\u00f8rgsm\u00e5l. Hvis biblioteket har milliarder af b\u00f8ger, kan du \u00e5bne hver bog i biblioteket og til sidst finde relevante svar i nogle af b\u00f8gerne. Det er det, en ikke-sparsom model g\u00f8r.<\/p>\n<p>Hvis vi skiller os af med mange af de b\u00f8ger, der mest har tomme sider eller irrelevant information, er det lettere at finde de b\u00f8ger, der er relevante for vores sp\u00f8rgsm\u00e5l, s\u00e5 vi \u00e5bner f\u00e6rre b\u00f8ger og finder svaret hurtigere.<\/p>\n<p>Hvis du kan lide at holde dig opdateret med den seneste AI-udvikling, s\u00e5 forvent at se MoE og Sparsity n\u00e6vnt oftere. LLM'er er ved at blive meget mindre og hurtigere.<\/p>","protected":false},"excerpt":{"rendered":"<p>Udgivelsen af mindre og mere effektive AI-modeller som Mistrals banebrydende Mixtral 8x7B-model har gjort begreberne \"Mixture of Experts\" (MoE) og \"Sparsity\" til varme emner. Disse begreber har bev\u00e6get sig fra komplekse AI-forskningsartikler til nyhedsartikler, der rapporterer om hurtigt forbedrede Large Language Models (LLM). Heldigvis beh\u00f8ver man ikke at v\u00e6re dataforsker for at have en bred id\u00e9 om, hvad MoE og Sparsity er, og hvorfor disse begreber er vigtige. Mixture of Experts LLM'er som GPT-3 er baseret p\u00e5 en t\u00e6t netv\u00e6rksarkitektur. Disse modeller best\u00e5r af lag<\/p>","protected":false},"author":6,"featured_media":8214,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[84],"tags":[118],"class_list":["post-8212","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-industry","tag-llms"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Mixture of Experts and Sparsity - Hot AI topics explained | DailyAI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/da\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/\" \/>\n<meta property=\"og:locale\" content=\"da_DK\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Mixture of Experts and Sparsity - Hot AI topics explained | DailyAI\" \/>\n<meta property=\"og:description\" content=\"The release of smaller and more efficient AI models like Mistral\u2019s groundbreaking Mixtral 8x7B model has seen the concepts of \u201cMixture of Experts\u201d (MoE) and \u201cSparsity\u201d become hot topics. These terms have moved from the realms of complex AI research papers to news articles reporting on rapidly improving Large Language Models (LLM). Fortunately, you don\u2019t have to be a data scientist to have a broad idea of what MoE and Sparsity are and why these concepts are a big deal. Mixture of Experts LLMs like GPT-3 are based on a dense network architecture. These models are made up of layers\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/da\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2023-12-12T11:24:30+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/Mixture-of-Experts.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1000\" \/>\n\t<meta property=\"og:image:height\" content=\"415\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Eugene van der Watt\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Skrevet af\" \/>\n\t<meta name=\"twitter:data1\" content=\"Eugene van der Watt\" \/>\n\t<meta name=\"twitter:label2\" content=\"Estimeret l\u00e6setid\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minutter\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\\\/\"},\"author\":{\"name\":\"Eugene van der Watt\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\"},\"headline\":\"Mixture of Experts and Sparsity &#8211; Hot AI topics explained\",\"datePublished\":\"2023-12-12T11:24:30+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\\\/\"},\"wordCount\":664,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/Mixture-of-Experts.jpg\",\"keywords\":[\"LLMS\"],\"articleSection\":[\"Industry\"],\"inLanguage\":\"da-DK\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\\\/\",\"name\":\"Mixture of Experts and Sparsity - Hot AI topics explained | DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/Mixture-of-Experts.jpg\",\"datePublished\":\"2023-12-12T11:24:30+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\\\/#breadcrumb\"},\"inLanguage\":\"da-DK\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"da-DK\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/Mixture-of-Experts.jpg\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/Mixture-of-Experts.jpg\",\"width\":1000,\"height\":415},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Mixture of Experts and Sparsity &#8211; Hot AI topics explained\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"da-DK\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"da-DK\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\",\"name\":\"Eugene van der Watt\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"da-DK\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"caption\":\"Eugene van der Watt\"},\"description\":\"Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.\",\"sameAs\":[\"www.linkedin.com\\\/in\\\/eugene-van-der-watt-16828119\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/da\\\/author\\\/eugene\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Blanding af eksperter og sparsomhed - varme AI-emner forklaret | DailyAI","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/da\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/","og_locale":"da_DK","og_type":"article","og_title":"Mixture of Experts and Sparsity - Hot AI topics explained | DailyAI","og_description":"The release of smaller and more efficient AI models like Mistral\u2019s groundbreaking Mixtral 8x7B model has seen the concepts of \u201cMixture of Experts\u201d (MoE) and \u201cSparsity\u201d become hot topics. These terms have moved from the realms of complex AI research papers to news articles reporting on rapidly improving Large Language Models (LLM). Fortunately, you don\u2019t have to be a data scientist to have a broad idea of what MoE and Sparsity are and why these concepts are a big deal. Mixture of Experts LLMs like GPT-3 are based on a dense network architecture. These models are made up of layers","og_url":"https:\/\/dailyai.com\/da\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/","og_site_name":"DailyAI","article_published_time":"2023-12-12T11:24:30+00:00","og_image":[{"width":1000,"height":415,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/Mixture-of-Experts.jpg","type":"image\/jpeg"}],"author":"Eugene van der Watt","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"Skrevet af":"Eugene van der Watt","Estimeret l\u00e6setid":"3 minutter"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/"},"author":{"name":"Eugene van der Watt","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa"},"headline":"Mixture of Experts and Sparsity &#8211; Hot AI topics explained","datePublished":"2023-12-12T11:24:30+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/"},"wordCount":664,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/Mixture-of-Experts.jpg","keywords":["LLMS"],"articleSection":["Industry"],"inLanguage":"da-DK"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/","url":"https:\/\/dailyai.com\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/","name":"Blanding af eksperter og sparsomhed - varme AI-emner forklaret | DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/Mixture-of-Experts.jpg","datePublished":"2023-12-12T11:24:30+00:00","breadcrumb":{"@id":"https:\/\/dailyai.com\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/#breadcrumb"},"inLanguage":"da-DK","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/"]}]},{"@type":"ImageObject","inLanguage":"da-DK","@id":"https:\/\/dailyai.com\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/Mixture-of-Experts.jpg","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/Mixture-of-Experts.jpg","width":1000,"height":415},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"Mixture of Experts and Sparsity &#8211; Hot AI topics explained"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DailyAI","description":"Din daglige dosis af AI-nyheder","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"da-DK"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DailyAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"da-DK","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa","name":"Eugene van der Watt","image":{"@type":"ImageObject","inLanguage":"da-DK","@id":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","caption":"Eugene van der Watt"},"description":"Eugene har en baggrund som elektronikingeni\u00f8r og elsker alt, hvad der har med teknologi at g\u00f8re. N\u00e5r han tager en pause fra at l\u00e6se AI-nyheder, kan du finde ham ved snookerbordet.","sameAs":["www.linkedin.com\/in\/eugene-van-der-watt-16828119"],"url":"https:\/\/dailyai.com\/da\/author\/eugene\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/posts\/8212","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/comments?post=8212"}],"version-history":[{"count":3,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/posts\/8212\/revisions"}],"predecessor-version":[{"id":8216,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/posts\/8212\/revisions\/8216"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/media\/8214"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/media?parent=8212"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/categories?post=8212"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/tags?post=8212"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}