{"id":13027,"date":"2024-06-23T10:10:33","date_gmt":"2024-06-23T10:10:33","guid":{"rendered":"https:\/\/dailyai.com\/?p=13027"},"modified":"2024-06-25T11:36:18","modified_gmt":"2024-06-25T11:36:18","slug":"university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur","status":"publish","type":"post","link":"https:\/\/dailyai.com\/da\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/","title":{"rendered":"University of Oxford study identifies when AI hallucinations are more likely to occur"},"content":{"rendered":"<p><b>A University of Oxford study has developed a method for testing when language models are \"unsure\" of their output and at risk of hallucinating.\u00a0<\/b><\/p>\n<p><span style=\"font-weight: 400;\">AI \"hallucinations\" refer to a phenomenon where large language models (LLMs) generate fluent, plausible responses that are not truthful or consistent.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Hallucinations are tough, if not impossible, to separate from AI models. AI developers such as OpenAI, Google, and Anthropic have all admitted that hallucinations are likely to remain a byproduct of interacting with AI.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">As Dr. 
Sebastian Farquhar, one of the study's authors, <\/span><a href=\"https:\/\/www.ox.ac.uk\/news\/2024-06-20-major-research-hallucinating-generative-models-advances-reliability-artificial\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">explains in a blog post<\/span><\/a><span style=\"font-weight: 400;\">: \"LLMs are highly capable of saying the same thing in many different ways, which can make it difficult to tell when they are certain about an answer and when they are literally just making something up.\"\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Cambridge Dictionary even added an <\/span><a href=\"https:\/\/dailyai.com\/da\/2023\/11\/cambridge-dictionary-reveals-an-ai-related-word-of-the-year\/\"><span style=\"font-weight: 400;\">AI-related definition of the word<\/span><\/a><span style=\"font-weight: 400;\"> in 2023 and named it its \"Word of the Year\".\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">This University of Oxford <\/span><a href=\"https:\/\/www.nature.com\/articles\/s41586-024-07421-0\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">study<\/span><\/a><span style=\"font-weight: 400;\">, published in Nature,<\/span><span style=\"font-weight: 400;\"> attempts to answer how we can detect when these hallucinations are most likely to occur.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">It introduces a concept called \"semantic entropy\", which measures the uncertainty of an LLM's output at the level of meaning rather than just the specific words or sentences used.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">By computing the semantic entropy of an LLM's answers, the researchers can estimate the model's confidence in its output and identify cases where it is likely to be hallucinating.<\/span><\/p>\n<h2>Semantic entropy in LLMs explained<\/h2>\n<p><span style=\"font-weight: 400;\">Semantic entropy, as defined in the study, measures the uncertainty or inconsistency in the meaning of an LLM's answers. <\/span><span style=\"font-weight: 400;\">It helps detect when an LLM may be hallucinating or generating unreliable information.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Put simply, semantic entropy measures how \"confused\" an LLM's output is.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The LLM is likely to provide reliable information if the meanings of its outputs are closely related and consistent. <\/span><span style=\"font-weight: 400;\">But if the meanings are scattered and inconsistent, that is a red flag that the LLM may be hallucinating or generating inaccurate information.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Here is how it works:<\/span><\/p>\n<ol>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">The researchers prompted the LLM to generate several possible answers to the same question. This is done by submitting the question to the LLM multiple times, each time with a different random seed or a slight variation in the input.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Semantic entropy examines the answers and groups together those that share the same underlying meaning, even if they use different words or phrasings.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">If the LLM is confident about the answer, its responses should share the same meaning, resulting in a low semantic entropy score. 
This suggests that the LLM clearly and consistently understands the information.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">But if the LLM is uncertain or confused, its answers will span a wider range of meanings, some of which may be inconsistent or unrelated to the question. This results in a high semantic entropy score, indicating that the LLM may be hallucinating or generating unreliable information.<\/span><\/li>\n<\/ol>\n<p><span style=\"font-weight: 400;\">To evaluate its effectiveness, the researchers applied semantic entropy to a range of question-answering tasks. This involved benchmarks such as<\/span><span style=\"font-weight: 400;\">\u00a0trivia questions, reading comprehension, word problems, and biographies.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Across the board, semantic entropy outperformed existing methods at detecting when an LLM was likely to generate an incorrect or inconsistent answer.<\/span><\/p>\n<figure id=\"attachment_13028\" aria-describedby=\"caption-attachment-13028\" style=\"width: 862px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-13028\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/06\/41586_2024_7421_Fig1_HTML-1024x981.webp\" alt=\"Hallucinations\" width=\"862\" height=\"826\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/06\/41586_2024_7421_Fig1_HTML-1024x981.webp 1024w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/06\/41586_2024_7421_Fig1_HTML-300x287.webp 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/06\/41586_2024_7421_Fig1_HTML-768x736.webp 768w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/06\/41586_2024_7421_Fig1_HTML-13x12.webp 13w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/06\/41586_2024_7421_Fig1_HTML-60x57.webp 60w, 
https:\/\/dailyai.com\/wp-content\/uploads\/2024\/06\/41586_2024_7421_Fig1_HTML-24x24.webp 24w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/06\/41586_2024_7421_Fig1_HTML.webp 1412w\" sizes=\"auto, (max-width: 862px) 100vw, 862px\" \/><figcaption id=\"caption-attachment-13028\" class=\"wp-caption-text\">High average semantic entropy suggests confabulation (essentially hallucinated facts stated as real), while low entropy despite varied wording indicates a likely true fact. Source: <a href=\"https:\/\/www.nature.com\/articles\/s41586-024-07421-0\">Nature<\/a> (open access)<\/figcaption><\/figure>\n<p>In the chart above, you can see how some questions push the LLM into generating a confabulated (inaccurate, hallucinatory) answer. For example, it produces a day and month of birth for the questions at the bottom of the chart even though the information required to answer them was not provided in the original material.<\/p>\n<h2>Implications of detecting hallucinations<\/h2>\n<p><span style=\"font-weight: 400;\">This work could help explain hallucinations and make LLMs more reliable and trustworthy.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">By making it possible to detect when an LLM is uncertain or prone to hallucinating, semantic entropy paves the way for applying these AI tools in high-stakes domains where factual accuracy is critical, such as healthcare, law, and finance. 
<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Fejlagtige resultater kan have potentielt katastrofale konsekvenser, n\u00e5r de p\u00e5virker situationer med h\u00f8j indsats, som det fremg\u00e5r af nogle <a href=\"https:\/\/dailyai.com\/da\/2023\/10\/predictive-policing-underdelivers-on-its-goals-and-risks-discrimination\/\">mislykket forudsigende politiarbejde<\/a> og <a href=\"https:\/\/dailyai.com\/da\/2023\/07\/unmasking-the-deep-seated-biases-in-ai-systems\/\">sundhedssystemer<\/a>.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Men det er ogs\u00e5 vigtigt at huske, at hallucinationer kun er \u00e9n type fejl, som LLM'er kan beg\u00e5.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Som Dr. Farquhar forklarer: \"Hvis en LLM laver konsekvente fejl, vil denne nye metode ikke fange det. De farligste fejl i AI kommer, n\u00e5r et system g\u00f8r noget d\u00e5rligt, men er selvsikkert og systematisk. Der er stadig meget arbejde at g\u00f8re.\"<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Ikke desto mindre repr\u00e6senterer Oxford-teamets semantiske entropimetode et stort skridt fremad i vores evne til at forst\u00e5 og afb\u00f8de begr\u00e6nsningerne i AI-sprogmodeller.\u00a0<\/span><\/p>\n<p>At tilvejebringe et objektivt middel til at opdage dem bringer os t\u00e6ttere p\u00e5 en fremtid, hvor vi kan udnytte AI's potentiale og samtidig sikre, at det forbliver et p\u00e5lideligt og trov\u00e6rdigt v\u00e6rkt\u00f8j i menneskehedens tjeneste.<\/p>","protected":false},"excerpt":{"rendered":"<p>Et studie fra University of Oxford har udviklet en metode til at teste, hvorn\u00e5r sprogmodeller er \"usikre\" p\u00e5 deres output og risikerer at hallucinere.  AI-\"hallucinationer\" henviser til et f\u00e6nomen, hvor store sprogmodeller (LLM'er) genererer flydende og plausible svar, som ikke er sandf\u00e6rdige eller konsekvente.  Hallucinationer er sv\u00e6re - hvis ikke umulige - at adskille fra AI-modeller. 
AI developers such as OpenAI, Google, and Anthropic have all admitted that hallucinations are likely to remain a byproduct of interacting with AI.\u00a0 As Dr. Sebastian Farquhar, one of the study's authors, explains in a blog post: \"LLMs are highly capable of saying the same thing<\/p>","protected":false},"author":2,"featured_media":13029,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[88],"tags":[480,105],"class_list":["post-13027","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ethics","tag-hallucinations","tag-machine-learning"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>University of Oxford study identifies when AI hallucinations are more likely to occur | DailyAI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/da\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/\" \/>\n<meta property=\"og:locale\" content=\"da_DK\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"University of Oxford study identifies when AI hallucinations are more likely to occur | DailyAI\" \/>\n<meta property=\"og:description\" content=\"A University of Oxford study developed a means of testing when language models are \u201cunsure\u201d of their output and risk hallucinating.\u00a0 AI &#8220;hallucinations&#8221; refer to a phenomenon where large language models (LLMs) generate fluent and plausible responses that are not truthful or consistent.\u00a0 Hallucinations are tough \u2013 if not impossible \u2013 to separate from AI models. 
AI developers like OpenAI, Google, and Anthropic have all admitted that hallucinations will likely remain a byproduct of interacting with AI.\u00a0 As Dr. Sebastian Farquhar, one of the study&#8217;s authors, explains in a blog post, &#8220;LLMs are highly capable of saying the same thing\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/da\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2024-06-23T10:10:33+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2024-06-25T11:36:18+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/06\/DALL\u00b7E-2024-06-23-11.10.02-A-surreal-and-futuristic-depiction-of-a-face-experiencing-AI-hallucinations.-The-face-appears-to-be-merging-with-digital-elements-with-parts-of-it-di.webp\" \/>\n\t<meta property=\"og:image:width\" content=\"1792\" \/>\n\t<meta property=\"og:image:height\" content=\"1024\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/webp\" \/>\n<meta name=\"author\" content=\"Sam Jeans\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Skrevet af\" \/>\n\t<meta name=\"twitter:data1\" content=\"Sam Jeans\" \/>\n\t<meta name=\"twitter:label2\" content=\"Estimeret l\u00e6setid\" \/>\n\t<meta name=\"twitter:data2\" content=\"4 minutter\" \/>\n<script type=\"application\/ld+json\" 
class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\\\/\"},\"author\":{\"name\":\"Sam Jeans\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/711e81f945549438e8bbc579efdeb3c9\"},\"headline\":\"University of Oxford study identifies when AI hallucinations are more likely to occur\",\"datePublished\":\"2024-06-23T10:10:33+00:00\",\"dateModified\":\"2024-06-25T11:36:18+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\\\/\"},\"wordCount\":813,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/06\\\/DALL\u00b7E-2024-06-23-11.10.02-A-surreal-and-futuristic-depiction-of-a-face-experiencing-AI-hallucinations.-The-face-appears-to-be-merging-with-digital-elements-with-parts-of-it-di.webp\",\"keywords\":[\"Hallucinations\",\"machine learning\"],\"articleSection\":[\"Ethics &amp; Society\"],\"inLanguage\":\"da-DK\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\\\/\",\"name\":\"University of Oxford study identifies when AI hallucinations are more likely to occur | 
DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/06\\\/DALL\u00b7E-2024-06-23-11.10.02-A-surreal-and-futuristic-depiction-of-a-face-experiencing-AI-hallucinations.-The-face-appears-to-be-merging-with-digital-elements-with-parts-of-it-di.webp\",\"datePublished\":\"2024-06-23T10:10:33+00:00\",\"dateModified\":\"2024-06-25T11:36:18+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\\\/#breadcrumb\"},\"inLanguage\":\"da-DK\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"da-DK\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/06\\\/DALL\u00b7E-2024-06-23-11.10.02-A-surreal-and-futuristic-depiction-of-a-face-experiencing-AI-hallucinations.-The-face-appears-to-be-merging-with-digital-elements-with-parts-of-it-di.webp\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/06\\\/DALL\u00b7E-2024-06-23-11.10.02-A-surreal-and-futuristic-depiction-of-a-face-experiencing-AI-hallucinations.-The-face-appears-to-be-merging-with-digital-elements-with-parts-of-it-di.webp\",\"width\":1792,\"height\":1024,\"caption\":\"halluci
nations\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"University of Oxford study identifies when AI hallucinations are more likely to occur\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"da-DK\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"da-DK\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/711e81f945549438e8bbc579efdeb3c9\",\"name\":\"Sam 
Jeans\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"da-DK\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g\",\"caption\":\"Sam Jeans\"},\"description\":\"Sam is a science and technology writer who has worked in various AI startups. When he\u2019s not writing, he can be found reading medical journals or digging through boxes of vinyl records.\",\"sameAs\":[\"https:\\\/\\\/www.linkedin.com\\\/in\\\/sam-jeans-6746b9142\\\/\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/da\\\/author\\\/samjeans\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Studie fra University of Oxford identificerer, hvorn\u00e5r der er st\u00f8rst sandsynlighed for AI-hallucinationer | DailyAI","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/da\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/","og_locale":"da_DK","og_type":"article","og_title":"University of Oxford study identifies when AI hallucinations are more likely to occur | DailyAI","og_description":"A University of Oxford study developed a means of testing when language models are \u201cunsure\u201d of their output and risk hallucinating.\u00a0 AI &#8220;hallucinations&#8221; refer to a phenomenon where large language models (LLMs) generate fluent and plausible responses that are not truthful or consistent.\u00a0 Hallucinations are tough \u2013 if not impossible \u2013 to separate from AI models. 
AI developers like OpenAI, Google, and Anthropic have all admitted that hallucinations will likely remain a byproduct of interacting with AI.\u00a0 As Dr. Sebastian Farquhar, one of the study&#8217;s authors, explains in a blog post, &#8220;LLMs are highly capable of saying the same thing","og_url":"https:\/\/dailyai.com\/da\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/","og_site_name":"DailyAI","article_published_time":"2024-06-23T10:10:33+00:00","article_modified_time":"2024-06-25T11:36:18+00:00","og_image":[{"width":1792,"height":1024,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/06\/DALL\u00b7E-2024-06-23-11.10.02-A-surreal-and-futuristic-depiction-of-a-face-experiencing-AI-hallucinations.-The-face-appears-to-be-merging-with-digital-elements-with-parts-of-it-di.webp","type":"image\/webp"}],"author":"Sam Jeans","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"Skrevet af":"Sam Jeans","Estimeret l\u00e6setid":"4 minutter"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/"},"author":{"name":"Sam Jeans","@id":"https:\/\/dailyai.com\/#\/schema\/person\/711e81f945549438e8bbc579efdeb3c9"},"headline":"University of Oxford study identifies when AI hallucinations are more likely to 
occur","datePublished":"2024-06-23T10:10:33+00:00","dateModified":"2024-06-25T11:36:18+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/"},"wordCount":813,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/06\/DALL\u00b7E-2024-06-23-11.10.02-A-surreal-and-futuristic-depiction-of-a-face-experiencing-AI-hallucinations.-The-face-appears-to-be-merging-with-digital-elements-with-parts-of-it-di.webp","keywords":["Hallucinations","machine learning"],"articleSection":["Ethics &amp; Society"],"inLanguage":"da-DK"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/","url":"https:\/\/dailyai.com\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/","name":"Studie fra University of Oxford identificerer, hvorn\u00e5r der er st\u00f8rst sandsynlighed for AI-hallucinationer | 
DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/06\/DALL\u00b7E-2024-06-23-11.10.02-A-surreal-and-futuristic-depiction-of-a-face-experiencing-AI-hallucinations.-The-face-appears-to-be-merging-with-digital-elements-with-parts-of-it-di.webp","datePublished":"2024-06-23T10:10:33+00:00","dateModified":"2024-06-25T11:36:18+00:00","breadcrumb":{"@id":"https:\/\/dailyai.com\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/#breadcrumb"},"inLanguage":"da-DK","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/"]}]},{"@type":"ImageObject","inLanguage":"da-DK","@id":"https:\/\/dailyai.com\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/06\/DALL\u00b7E-2024-06-23-11.10.02-A-surreal-and-futuristic-depiction-of-a-face-experiencing-AI-hallucinations.-The-face-appears-to-be-merging-with-digital-elements-with-parts-of-it-di.webp","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/06\/DALL\u00b7E-2024-06-23-11.10.02-A-surreal-and-futuristic-depiction-of-a-face-experiencing-AI-hallucinations.-The-face-appears-to-be-merging-with-digital-elements-with-parts-of-it-di.webp","width":1792,"height":1024,"caption":"hallucinations"},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/#breadcrumb","itemListE
lement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"University of Oxford study identifies when AI hallucinations are more likely to occur"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DailyAI","description":"Din daglige dosis af AI-nyheder","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"da-DK"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DailyAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"da-DK","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/711e81f945549438e8bbc579efdeb3c9","name":"Sam Jeans","image":{"@type":"ImageObject","inLanguage":"da-DK","@id":"https:\/\/secure.gravatar.com\/avatar\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g","caption":"Sam Jeans"},"description":"Sam er 
videnskabs- og teknologiforfatter og har arbejdet i forskellige AI-startups. N\u00e5r han ikke skriver, kan han finde p\u00e5 at l\u00e6se medicinske tidsskrifter eller grave i kasser med vinylplader.","sameAs":["https:\/\/www.linkedin.com\/in\/sam-jeans-6746b9142\/"],"url":"https:\/\/dailyai.com\/da\/author\/samjeans\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/posts\/13027","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/comments?post=13027"}],"version-history":[{"count":10,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/posts\/13027\/revisions"}],"predecessor-version":[{"id":13087,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/posts\/13027\/revisions\/13087"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/media\/13029"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/media?parent=13027"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/categories?post=13027"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/tags?post=13027"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}