{"id":3964,"date":"2023-08-09T06:56:16","date_gmt":"2023-08-09T06:56:16","guid":{"rendered":"https:\/\/dailyai.com\/?p=3964"},"modified":"2023-08-09T06:56:16","modified_gmt":"2023-08-09T06:56:16","slug":"we-want-unbiased-llms-but-its-impossible-heres-why","status":"publish","type":"post","link":"https:\/\/dailyai.com\/it\/2023\/08\/we-want-unbiased-llms-but-its-impossible-heres-why\/","title":{"rendered":"We want unbiased LLMs but it\u2019s impossible. Here\u2019s why."},"content":{"rendered":"<p><strong>Companies like OpenAI and Meta are working hard to make their language models safer and less biased, but completely unbiased models may be a pipedream.<\/strong><\/p>\n<p><span style=\"font-weight: 400;\">A <\/span><a href=\"https:\/\/aclanthology.org\/2023.acl-long.656.pdf\"><span style=\"font-weight: 400;\">new research paper<\/span><\/a><span style=\"font-weight: 400;\"> from the University of Washington, Carnegie Mellon University, and Xi\u2019an Jiaotong University concluded that all the AI language models they tested displayed political bias.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">After delving into the sources of the bias, they concluded that bias in language models is inevitable.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Chan Park, one of the paper\u2019s authors, said: \"We believe no language model can be entirely free from political biases.\"<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The researchers tested 14 different language models and asked them to weigh in on topics like democracy, racism, and feminism to see where on the political spectrum each model fell.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The results showed that OpenAI\u2019s ChatGPT and GPT-4 leaned the furthest left, while Meta\u2019s Llama gave the most right-leaning responses.<\/span><\/p>\n<h2><span style=\"font-weight: 400;\">Training data isn\u2019t the only source of bias<\/span><\/h2>\n<p><span style=\"font-weight: 400;\">The obvious source of <a href=\"https:\/\/dailyai.com\/it\/2023\/07\/unmasking-the-deep-seated-biases-in-ai-systems\/\">bias<\/a> is the data these models are trained on. But the new research showed that even after bias was scrubbed from the data, the models remained susceptible to low-level biases that lingered in it.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">You would expect an LLM trained on a bunch of Fox News data to be more pro-Republican in its responses. But the problem doesn\u2019t lie in the training data alone.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">It turns out that as pre-trained language models are fine-tuned and used, they pick up additional biases from their operators.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Soroush Vosoughi, assistant professor of computer science at Dartmouth College, explained that biases are introduced at almost every stage of an LLM\u2019s development.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">One example is how OpenAI tries to eliminate bias from its models. It trains them using a technique called \"Reinforcement Learning from Human Feedback\", or RLHF.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">In RLHF a human operator trains the model much like you would train a puppy. If the puppy does something good, it gets a treat. If it chews your slippers: \"Bad dog!\"<\/span><\/p>\n<p><span style=\"font-weight: 400;\">An RLHF operator asks the model a set of questions, and a second operator reviews the multiple responses the model gives.
The second operator rates the responses and ranks them according to which one they liked best.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">In a <\/span><a href=\"https:\/\/openai.com\/blog\/how-should-ai-systems-behave\"><span style=\"font-weight: 400;\">post on how it trains its AI<\/span><\/a><span style=\"font-weight: 400;\">, OpenAI said it has instructed its human trainers to \"avoid taking positions on controversial topics\" and that \"reviewers should not favor any political group\".<\/span><\/p>\n<p><span style=\"font-weight: 400;\">That sounds like a good idea, but however hard we try not to be, all humans are biased. And that inevitably influences how the model is trained.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Even the authors of the paper cited above acknowledged in their conclusions that their own biases could have influenced their research.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The solution may be to aim for language models that aren\u2019t egregiously bad and then customize them to align with the biases people already hold.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">People often say they want the unbiased truth, but then they end up sticking with their favorite news source, like Fox or CNN.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">We don\u2019t always agree on what is right or wrong, and this new research seems to show that AI won\u2019t be able to help us figure that out either.<\/span><\/p>","protected":false},"excerpt":{"rendered":"<p>Companies like OpenAI and Meta are working hard to make their language models safer and less biased, but completely unbiased models may be a pipedream.
A new research paper from the University of Washington, Carnegie Mellon University, and Xi\u2019an Jiaotong University concluded that all the AI language models they tested displayed political bias. After delving into the sources of the bias, they concluded that bias in language models was inevitable. Chan Park, one of the paper\u2019s authors, said \u201cWe believe no language model can be entirely free from political biases.\u201d The researchers tested 14 different language models and asked them<\/p>","protected":false},"author":6,"featured_media":3979,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[88],"tags":[103,213,207,105,91],"class_list":["post-3964","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ethics","tag-ai-debate","tag-bias","tag-llm","tag-machine-learning","tag-policy"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>We want unbiased LLMs but it\u2019s impossible. Here\u2019s why. | DailyAI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/it\/2023\/08\/we-want-unbiased-llms-but-its-impossible-heres-why\/\" \/>\n<meta property=\"og:locale\" content=\"it_IT\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"We want unbiased LLMs but it\u2019s impossible. Here\u2019s why. | DailyAI\" \/>\n<meta property=\"og:description\" content=\"Companies like OpenAI and Meta are working hard to make their language models safer and less biased, but completely unbiased models may be a pipedream.
A new research paper from the University of Washington, Carnegie Mellon University, and Xi\u2019an Jiaotong University concluded that all the AI language models they tested displayed political bias. After delving into the sources of the bias, they concluded that bias in language models was inevitable. Chan Park, one of the paper\u2019s authors, said \u201cWe believe no language model can be entirely free from political biases.\u201d The researchers tested 14 different language models and asked them\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/it\/2023\/08\/we-want-unbiased-llms-but-its-impossible-heres-why\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2023-08-09T06:56:16+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/08\/Bias-in-AI-models.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1000\" \/>\n\t<meta property=\"og:image:height\" content=\"666\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Eugene van der Watt\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Eugene van der Watt\" \/>\n\t<meta name=\"twitter:label2\" content=\"Estimated reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minutes\" \/>\n<script type=\"application\/ld+json\" 
class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/08\\\/we-want-unbiased-llms-but-its-impossible-heres-why\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/08\\\/we-want-unbiased-llms-but-its-impossible-heres-why\\\/\"},\"author\":{\"name\":\"Eugene van der Watt\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\"},\"headline\":\"We want unbiased LLMs but it\u2019s impossible. Here\u2019s why.\",\"datePublished\":\"2023-08-09T06:56:16+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/08\\\/we-want-unbiased-llms-but-its-impossible-heres-why\\\/\"},\"wordCount\":540,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/08\\\/we-want-unbiased-llms-but-its-impossible-heres-why\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/08\\\/Bias-in-AI-models.jpg\",\"keywords\":[\"AI debate\",\"Bias\",\"LLM\",\"machine learning\",\"Policy\"],\"articleSection\":[\"Ethics &amp; Society\"],\"inLanguage\":\"it-IT\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/08\\\/we-want-unbiased-llms-but-its-impossible-heres-why\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/08\\\/we-want-unbiased-llms-but-its-impossible-heres-why\\\/\",\"name\":\"We want unbiased LLMs but it\u2019s impossible. Here\u2019s why. 
| DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/08\\\/we-want-unbiased-llms-but-its-impossible-heres-why\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/08\\\/we-want-unbiased-llms-but-its-impossible-heres-why\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/08\\\/Bias-in-AI-models.jpg\",\"datePublished\":\"2023-08-09T06:56:16+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/08\\\/we-want-unbiased-llms-but-its-impossible-heres-why\\\/#breadcrumb\"},\"inLanguage\":\"it-IT\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2023\\\/08\\\/we-want-unbiased-llms-but-its-impossible-heres-why\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"it-IT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/08\\\/we-want-unbiased-llms-but-its-impossible-heres-why\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/08\\\/Bias-in-AI-models.jpg\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/08\\\/Bias-in-AI-models.jpg\",\"width\":1000,\"height\":666,\"caption\":\"Bias in AI models\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/08\\\/we-want-unbiased-llms-but-its-impossible-heres-why\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"We want unbiased LLMs but it\u2019s impossible. 
Here\u2019s why.\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"it-IT\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"it-IT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\",\"name\":\"Eugene van der Watt\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"it-IT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"caption\":\"Eugene van der 
Watt\"},\"description\":\"Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.\",\"sameAs\":[\"www.linkedin.com\\\/in\\\/eugene-van-der-watt-16828119\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/it\\\/author\\\/eugene\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","_links":{"self":[{"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/posts\/3964","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/comments?post=3964"}],"version-history":[{"count":5,"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/posts\/3964\/revisions"}],"predecessor-version":[{"id":3983,"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/posts\/3964\/revisions\/3983"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/media\/3979"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/media?parent=3964"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/categories?post=3964"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/tags?post=3964"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}