{"id":13027,"date":"2024-06-23T10:10:33","date_gmt":"2024-06-23T10:10:33","guid":{"rendered":"https:\/\/dailyai.com\/?p=13027"},"modified":"2024-06-25T11:36:18","modified_gmt":"2024-06-25T11:36:18","slug":"university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur","status":"publish","type":"post","link":"https:\/\/dailyai.com\/it\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/","title":{"rendered":"Uno studio dell'Universit\u00e0 di Oxford identifica quando \u00e8 pi\u00f9 probabile che si verifichino le allucinazioni da IA"},"content":{"rendered":"<p><b>Uno studio dell'Universit\u00e0 di Oxford ha sviluppato un metodo per verificare quando i modelli linguistici sono \"insicuri\" dei loro risultati e rischiano di avere allucinazioni.\u00a0<\/b><\/p>\n<p><span style=\"font-weight: 400;\">Le \"allucinazioni\" dell'intelligenza artificiale si riferiscono a un fenomeno in cui i modelli linguistici di grandi dimensioni (LLM) generano risposte fluenti e plausibili che non sono veritiere o coerenti.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Le allucinazioni sono difficili, se non impossibili, da separare dai modelli di IA. Sviluppatori di IA come OpenAI, Google e Anthropic hanno tutti ammesso che le allucinazioni rimarranno probabilmente un sottoprodotto dell'interazione con l'IA.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Come spiega il dottor Sebastian Farquhar, uno degli autori dello studio, <\/span><a href=\"https:\/\/www.ox.ac.uk\/news\/2024-06-20-major-research-hallucinating-generative-models-advances-reliability-artificial\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">spiega in un post sul blog<\/span><\/a><span style=\"font-weight: 400;\">I laureati in Lettere sono molto capaci di dire la stessa cosa in molti modi diversi, il che pu\u00f2 rendere difficile capire quando sono certi di una risposta e quando invece si stanno letteralmente inventando qualcosa\".\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Il Dizionario di Cambridge ha persino aggiunto un <\/span><a href=\"https:\/\/dailyai.com\/it\/2023\/11\/cambridge-dictionary-reveals-an-ai-related-word-of-the-year\/\"><span style=\"font-weight: 400;\">Definizione legata all'AI della parola<\/span><\/a><span style=\"font-weight: 400;\"> nel 2023 e l'ha nominata \"Parola dell'anno\".\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">L'Universit\u00e0 di Oxford <\/span> <a href=\"https:\/\/www.nature.com\/articles\/s41586-024-07421-0\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">studio<\/span><\/a><span style=\"font-weight: 400;\">pubblicato su Nature,<\/span><span style=\"font-weight: 400;\"> cerca di capire come individuare quando \u00e8 pi\u00f9 probabile che si verifichino queste allucinazioni.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Introduce un concetto chiamato \"entropia semantica\", che misura l'incertezza dei risultati di un LLM a livello di significato piuttosto che di parole o frasi specifiche utilizzate.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Calcolando l'entropia semantica delle risposte di un LLM, i ricercatori possono stimare la fiducia del modello nei suoi risultati e identificare i casi in cui \u00e8 probabile che abbia delle allucinazioni.<\/span><\/p>\n<h2>Spiegazione dell'entropia semantica nei LLM<\/h2>\n<p><span style=\"font-weight: 400;\">L'entropia semantica, come definita dallo studio, misura l'incertezza o 
In simpler terms, semantic entropy measures how "confused" an LLM's output is.

The LLM is likely providing reliable information if the meanings of its outputs are closely related and consistent. But if the meanings are scattered and inconsistent, that is a red flag that the LLM may be hallucinating or generating inaccurate information.

Here's how it works (a code sketch of the pipeline follows the list):

1. The researchers prompt the LLM to generate several possible answers to the same question. This is done by feeding the question to the LLM multiple times, each time with a different random seed or a slight variation of the input.
2. Semantic entropy examines the answers and groups together those that share the same underlying meaning, even if they use different words or phrasing.
3. If the LLM is confident about the answer, its responses should carry similar meanings, yielding a low semantic entropy score. This suggests the LLM understands the information clearly and consistently.
4. If, however, the LLM is uncertain or confused, its responses will span a wider variety of meanings, some of which may be inconsistent or unrelated to the question. This yields a high semantic entropy score, indicating that the LLM may be hallucinating or generating unreliable information.
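As a concrete illustration, here is a minimal sketch of that four-step loop in Python. The equivalence check `means_the_same` is a hypothetical stand-in: the study clusters answers by bidirectional entailment using a natural language inference model, whereas this placeholder just normalises and compares strings. Computing the entropy from cluster frequencies corresponds to the "discrete" variant of semantic entropy, under the assumption that each sampled answer is equally probable.

```python
import math


def means_the_same(a: str, b: str) -> bool:
    """Hypothetical semantic-equivalence check (NOT the paper's method).

    The study uses bidirectional entailment: an NLI model must judge
    that a entails b AND b entails a. To keep this sketch self-contained,
    we fall back to normalised string equality.
    """
    def norm(s: str) -> str:
        return s.strip().lower().rstrip(".")
    return norm(a) == norm(b)


def semantic_clusters(answers: list[str]) -> list[list[str]]:
    """Greedily group sampled answers that share an underlying meaning."""
    clusters: list[list[str]] = []
    for ans in answers:
        for cluster in clusters:
            if means_the_same(ans, cluster[0]):
                cluster.append(ans)
                break
        else:  # no existing cluster matched: start a new one
            clusters.append([ans])
    return clusters


def semantic_entropy(answers: list[str]) -> float:
    """Discrete semantic entropy: Shannon entropy over meaning clusters.

    Low score  -> answers agree in meaning (model looks confident).
    High score -> meanings are scattered (possible confabulation).
    """
    clusters = semantic_clusters(answers)
    n = len(answers)
    return -sum((len(c) / n) * math.log(len(c) / n) for c in clusters)


# Step 1 of the list above would sample these answers from the LLM by
# asking the same question several times; canned strings keep it runnable.
print(semantic_entropy(["Paris", "paris.", "Paris"]))    # 0.0   (consistent)
print(semantic_entropy(["Paris", "Lyon", "Marseille"]))  # ~1.10 (scattered)
```

In practice one would replace `means_the_same` with the NLI-based entailment check and sample the answers from the model under test at a nonzero temperature, as the study describes.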
To evaluate its effectiveness, the researchers applied semantic entropy to a range of question-answering tasks, spanning benchmarks that covered trivia questions, reading comprehension, word problems, and biographies.

Overall, semantic entropy outperformed existing methods at spotting when an LLM was likely to generate an incorrect or inconsistent answer.

*[Figure: High average semantic entropy suggests confabulation (essentially hallucinated facts stated as real), while low entropy despite varied phrasing indicates a plausible fact. Source: [Nature](https://www.nature.com/articles/s41586-024-07421-0) (open access)]*

In the diagram above, you can see how certain prompts push the LLM into a confabulated (inaccurate, hallucinated) answer. For example, it produces a birth day and month for the questions at the bottom of the diagram even though the information needed to answer them was never supplied in the original input.

## Implications of hallucination detection

This work can help explain hallucinations and make LLMs more reliable and trustworthy.

By providing a way to detect when an LLM is uncertain or prone to hallucination, semantic entropy paves the way for deploying these AI tools in high-stakes fields where factual accuracy is critical, such as healthcare, law, and finance.
Erroneous outputs can have potentially catastrophic impacts when they influence high-stakes situations, as demonstrated by failures in [predictive policing](https://dailyai.com/it/2023/10/predictive-policing-underdelivers-on-its-goals-and-risks-discrimination/) and [healthcare systems](https://dailyai.com/it/2023/07/unmasking-the-deep-seated-biases-in-ai-systems/).

However, it's also important to remember that hallucinations are only one type of error LLMs can make.

As Dr. Farquhar explains, "If an LLM makes consistent mistakes, this new method won't catch them. The most dangerous failures of AI come when a system does something wrong but is confident and systematic. There is still a lot of work to do."

Nevertheless, the Oxford team's semantic entropy method marks an important step forward in our ability to understand and mitigate the limitations of AI language models.

Providing an objective means of detecting hallucinations brings us closer to a future where we can harness AI's potential while ensuring it remains a reliable and trustworthy tool in humanity's service.