{"id":8212,"date":"2023-12-12T11:24:30","date_gmt":"2023-12-12T11:24:30","guid":{"rendered":"https:\/\/dailyai.com\/?p=8212"},"modified":"2023-12-12T11:24:30","modified_gmt":"2023-12-12T11:24:30","slug":"mixture-of-experts-and-sparsity-hot-ai-topics-explained","status":"publish","type":"post","link":"https:\/\/dailyai.com\/it\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/","title":{"rendered":"Miscela di esperti e sparsit\u00e0 - Argomenti caldi dell'intelligenza artificiale spiegati"},"content":{"rendered":"<p><strong>Il rilascio di modelli di IA pi\u00f9 piccoli e pi\u00f9 efficienti, come l'innovativo modello Mixtral 8x7B di Mistral, ha fatto s\u00ec che i concetti di \"Miscela di esperti\" (MoE) e \"Sparsit\u00e0\" diventassero temi caldi.<\/strong><\/p>\n<p>Questi termini sono passati dall'ambito dei complessi documenti di ricerca sull'intelligenza artificiale agli articoli di cronaca che riportano il rapido miglioramento dei Large Language Models (LLM).<\/p>\n<p>Fortunatamente, non \u00e8 necessario essere uno scienziato dei dati per avere un'idea generale di cosa siano MoE e Sparsity e del perch\u00e9 questi concetti siano importanti.<\/p>\n<h2>Miscela di esperti<\/h2>\n<p>I LLM come il GPT-3 si basano su un'architettura a rete densa. Questi modelli sono costituiti da strati di reti neurali in cui ogni neurone di uno strato \u00e8 collegato a tutti i neuroni degli strati precedenti e successivi.<\/p>\n<p>Tutti i neuroni sono coinvolti sia durante l'addestramento sia durante l'inferenza, il processo di generazione di una risposta alla richiesta. Questi modelli sono ottimi per affrontare un'ampia variet\u00e0 di compiti, ma utilizzano molta potenza di calcolo perch\u00e9 ogni parte della rete partecipa all'elaborazione di un input.<\/p>\n<p>Un modello basato su un'architettura MoE suddivide gli strati in un certo numero di \"esperti\", dove ogni esperto \u00e8 una rete neurale addestrata su funzioni specifiche. Quindi, quando si vede un modello chiamato Mixtral 8x7B significa che ha 8 strati di esperti con 7 miliardi di parametri ciascuno.<\/p>\n<p>Ogni esperto \u00e8 addestrato per essere molto bravo in un aspetto ristretto del problema generale, proprio come gli specialisti di un campo.<\/p>\n<p>Una volta richiesto, una Gating Network scompone il messaggio in diversi token e decide quale esperto \u00e8 pi\u00f9 adatto a elaborarlo. I risultati di ciascun esperto vengono poi combinati per fornire l'output finale.<\/p>\n<p>Pensate al MoE come a un gruppo di artigiani con competenze molto specifiche per la ristrutturazione della vostra casa. Invece di assumere un tuttofare generico (rete fitta) per fare tutto, si chiede a John, l'idraulico, di fare l'impianto idraulico e a Peter, l'elettricista, di fare l'impianto elettrico.<\/p>\n<p>Questi modelli sono pi\u00f9 veloci da addestrare perch\u00e9 non \u00e8 necessario addestrare l'intero modello per fare tutto.<\/p>\n<p>I modelli MoE hanno anche un'inferenza pi\u00f9 veloce rispetto ai modelli densi con lo stesso numero di parametri. 
That faster inference is why <a href="https://dailyai.com/it/2023/12/open-source-startup-mistral-ai-secures-415m-in-funding/">Mixtral 8x7B</a>, with roughly 47 billion parameters in total (the "8x7B" name suggests 8 x 7B = 56 billion, but the experts share their attention layers), can match or beat GPT-3.5 and its 175 billion parameters.

<a href="https://the-decoder.com/gpt-4-architecture-datasets-costs-and-more-leaked/">GPT-4 is rumored to use an MoE architecture</a> with 16 experts, while <a href="https://dailyai.com/it/2023/12/google-launches-its-new-gemini-multi-modal-family-of-models/">Gemini</a> uses a dense architecture.

<h2>Sparsity</h2>

Sparsity refers to the idea of reducing the number of active elements in a model, such as neurons or weights, without significantly compromising its performance.

If the input data for AI models, such as text or images, contains a lot of zeros, sparse data representation avoids wasting resources on storing those zeros.

In a sparse neural network, the weights, or connection strengths between neurons, are often zero. Sparsity prunes, or removes, these weights so that they are skipped during processing. An MoE model is also naturally sparse, because it can have one expert involved in processing while the others sit idle.

Sparsity can lead to models that need less compute and less memory. AI models that run on-device will lean heavily on sparsity.

You can think of sparsity as going to a library to find the answer to a question. If the library holds billions of books, you could open every book in it and eventually find the relevant answers in a few of them. That is what a non-sparse model does.

If we get rid of the many books that are mostly blank pages or irrelevant information, it becomes easier to find the books relevant to our question, so we open fewer books and find the answer faster. The sketch below shows this pruning idea in code.
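Here is a minimal sketch, assuming NumPy and SciPy are available, of the two ideas above: magnitude pruning (dropping near-zero weights) and compressed sparse storage (keeping only the nonzeros). The matrix, threshold, and sparsity level are invented for illustration.

```python
import numpy as np
from scipy.sparse import csr_matrix

rng = np.random.default_rng(0)

# A toy dense weight matrix where, as in many trained networks,
# most connection strengths turn out to be negligible.
W = rng.standard_normal((1000, 1000))

# Magnitude pruning: zero out every weight below a threshold.
W[np.abs(W) < 1.5] = 0.0  # keeps roughly 13% of the weights

# Store only the surviving weights in Compressed Sparse Row (CSR) format:
# the zeros are simply not stored at all.
W_sparse = csr_matrix(W)

dense_bytes = W.nbytes
sparse_bytes = W_sparse.data.nbytes + W_sparse.indices.nbytes + W_sparse.indptr.nbytes
print(f"weights kept: {W_sparse.nnz / W.size:.1%}")
print(f"dense: {dense_bytes / 1e6:.1f} MB, sparse: {sparse_bytes / 1e6:.1f} MB")

# Inference still works, and the sparse product skips the zeros entirely,
# so it performs far fewer multiply-adds than the dense W @ x would.
x = rng.standard_normal(1000)
y = W_sparse @ x
print(y.shape)  # (1000,)
```

This is the trade that pruning buys: the printed byte counts show the storage saving, and the sparse product shows why skipping zeros makes inference cheaper.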
If you like staying up to date with the latest developments in AI, expect to see MoE and Sparsity mentioned more and more often. LLMs are about to get a lot smaller and faster.
explained\",\"datePublished\":\"2023-12-12T11:24:30+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\\\/\"},\"wordCount\":664,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/Mixture-of-Experts.jpg\",\"keywords\":[\"LLMS\"],\"articleSection\":[\"Industry\"],\"inLanguage\":\"it-IT\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\\\/\",\"name\":\"Mixture of Experts and Sparsity - Hot AI topics explained | DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/Mixture-of-Experts.jpg\",\"datePublished\":\"2023-12-12T11:24:30+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\\\/#breadcrumb\"},\"inLanguage\":\"it-IT\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"it-IT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/Mixture-of-Experts.jpg\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/Mixture-of-Experts.jpg\",\"width\":1000,\"height\":415},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Mixture of Experts and Sparsity &#8211; Hot AI topics explained\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI 
News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"it-IT\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"it-IT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\",\"name\":\"Eugene van der Watt\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"it-IT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"caption\":\"Eugene van der Watt\"},\"description\":\"Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.\",\"sameAs\":[\"www.linkedin.com\\\/in\\\/eugene-van-der-watt-16828119\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/it\\\/author\\\/eugene\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Miscela di esperti e sparsit\u00e0 - I temi caldi dell'intelligenza artificiale spiegati | DailyAI","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/it\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/","og_locale":"it_IT","og_type":"article","og_title":"Mixture of Experts and Sparsity - Hot AI topics explained | DailyAI","og_description":"The release of smaller and more efficient AI models like Mistral\u2019s groundbreaking Mixtral 8x7B model has seen the concepts of \u201cMixture of Experts\u201d (MoE) and \u201cSparsity\u201d become hot topics. These terms have moved from the realms of complex AI research papers to news articles reporting on rapidly improving Large Language Models (LLM). Fortunately, you don\u2019t have to be a data scientist to have a broad idea of what MoE and Sparsity are and why these concepts are a big deal. Mixture of Experts LLMs like GPT-3 are based on a dense network architecture. 
These models are made up of layers","og_url":"https:\/\/dailyai.com\/it\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/","og_site_name":"DailyAI","article_published_time":"2023-12-12T11:24:30+00:00","og_image":[{"width":1000,"height":415,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/Mixture-of-Experts.jpg","type":"image\/jpeg"}],"author":"Eugene van der Watt","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"Scritto da":"Eugene van der Watt","Tempo di lettura stimato":"3 minuti"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/"},"author":{"name":"Eugene van der Watt","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa"},"headline":"Mixture of Experts and Sparsity &#8211; Hot AI topics explained","datePublished":"2023-12-12T11:24:30+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/"},"wordCount":664,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/Mixture-of-Experts.jpg","keywords":["LLMS"],"articleSection":["Industry"],"inLanguage":"it-IT"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/","url":"https:\/\/dailyai.com\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/","name":"Miscela di esperti e sparsit\u00e0 - I temi caldi dell'intelligenza artificiale spiegati | DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/Mixture-of-Experts.jpg","datePublished":"2023-12-12T11:24:30+00:00","breadcrumb":{"@id":"https:\/\/dailyai.com\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/#breadcrumb"},"inLanguage":"it-IT","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/"]}]},{"@type":"ImageObject","inLanguage":"it-IT","@id":"https:\/\/dailyai.com\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/Mixture-of-Experts.jpg","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/Mixture-of-Experts.jpg","width":1000,"height":415},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2023\/12\/mixture-of-experts-and-sparsity-hot-ai-topics-explained\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"Mixture of Experts and Sparsity &#8211; Hot AI topics explained"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DailyAI","description":"La vostra dose quotidiana di notizie 
sull'intelligenza artificiale","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"it-IT"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DailyAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"it-IT","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa","name":"Eugene van der Watt","image":{"@type":"ImageObject","inLanguage":"it-IT","@id":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","caption":"Eugene van der Watt"},"description":"Eugene proviene da un background di ingegneria elettronica e ama tutto ci\u00f2 che \u00e8 tecnologico. Quando si prende una pausa dal consumo di notizie sull'intelligenza artificiale, lo si pu\u00f2 trovare al tavolo da biliardo.","sameAs":["www.linkedin.com\/in\/eugene-van-der-watt-16828119"],"url":"https:\/\/dailyai.com\/it\/author\/eugene\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/posts\/8212","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/comments?post=8212"}],"version-history":[{"count":3,"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/posts\/8212\/revisions"}],"predecessor-version":[{"id":8216,"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/posts\/8212\/revisions\/8216"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/media\/8214"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/media?parent=8212"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/categories?post=8212"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/tags?post=8212"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}