{"id":13027,"date":"2024-06-23T10:10:33","date_gmt":"2024-06-23T10:10:33","guid":{"rendered":"https:\/\/dailyai.com\/?p=13027"},"modified":"2024-06-25T11:36:18","modified_gmt":"2024-06-25T11:36:18","slug":"university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur","status":"publish","type":"post","link":"https:\/\/dailyai.com\/nb\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/","title":{"rendered":"University of Oxford study identifies when AI hallucinations are more likely to occur"},"content":{"rendered":"<p><b>A University of Oxford study has developed a method for testing when language models are \"unsure\" of their output and risk hallucinating.\u00a0<\/b><\/p>\n<p><span style=\"font-weight: 400;\">AI \"hallucinations\" refer to a phenomenon where large language models (LLMs) generate fluent and plausible responses that are not truthful or consistent.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Hallucinations are difficult, if not impossible, to eliminate from AI models. AI developers such as OpenAI, Google, and Anthropic have all acknowledged that hallucinations are likely to remain a byproduct of interacting with AI.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">As Dr. 
Sebastian Farquhar, one of the study's authors, <\/span><a href=\"https:\/\/www.ox.ac.uk\/news\/2024-06-20-major-research-hallucinating-generative-models-advances-reliability-artificial\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">explains in a blog post<\/span><\/a><span style=\"font-weight: 400;\">: \"LLMs are highly capable of saying the same thing in many different ways, which can make it difficult to tell when they are certain about an answer and when they are literally just making something up.\"\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Cambridge Dictionary even added an <\/span><a href=\"https:\/\/dailyai.com\/nb\/2023\/11\/cambridge-dictionary-reveals-an-ai-related-word-of-the-year\/\"><span style=\"font-weight: 400;\">AI-related definition of the word<\/span><\/a><span style=\"font-weight: 400;\"> in 2023, naming it its \"Word of the Year\".\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">This University of Oxford <\/span><a href=\"https:\/\/www.nature.com\/articles\/s41586-024-07421-0\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">study<\/span><\/a><span style=\"font-weight: 400;\">, published in Nature,<\/span><span style=\"font-weight: 400;\"> seeks to establish how we can detect when such hallucinations are most likely to occur.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">It introduces a concept called \"semantic entropy\", which measures the uncertainty of an LLM's output at the level of meaning rather than just the specific words or phrases used.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">By computing the semantic entropy of an LLM's answers, the researchers can estimate the model's confidence in its output and identify cases where it is likely to be hallucinating.<\/span><\/p>\n<h2>Semantic entropy in LLMs explained<\/h2>\n<p><span style=\"font-weight: 400;\">Semantic entropy, as defined in the study, measures the uncertainty or inconsistency in the meaning of an LLM's answers. <\/span><span style=\"font-weight: 400;\">It helps detect when an LLM may be hallucinating or generating unreliable information.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Semantic entropy measures how \"confused\" an LLM's output is.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The LLM is likely to be providing reliable information if the meanings of its outputs are closely related and consistent. <\/span><span style=\"font-weight: 400;\">But if the meanings are scattered and inconsistent, it is a red flag that the LLM may be hallucinating or generating inaccurate information.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Here is how it works:<\/span><\/p>\n<ol>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">The researchers prompt the LLM to generate several possible answers to the same question. This is done by feeding the LLM the question multiple times, each time with a different random seed or a slight variation of the input.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Semantic entropy examines the answers and groups together those that share the same underlying meaning, even if they use different words or phrasings.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">If the LLM is confident in its answer, the responses should have similar meanings, resulting in a low semantic entropy score. This suggests that the LLM understands the information in a clear and consistent way.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">If, on the other hand, the LLM is uncertain or confused, the answers will have several different meanings, and some of them may be inconsistent or unrelated to the question. This results in a high semantic entropy score, indicating that the LLM may be hallucinating or generating unreliable information.<\/span><\/li>\n<\/ol>\n<p><span style=\"font-weight: 400;\">To evaluate its effectiveness, the researchers applied semantic entropy to a varied set of question-answering tasks. This involved benchmarks such as<\/span><span style=\"font-weight: 400;\">\u00a0trivia questions, reading comprehension, word problems, and biographies.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Semantic entropy consistently outperformed existing methods at detecting when an LLM was likely to generate an incorrect or inconsistent answer.<\/span><\/p>\n<figure id=\"attachment_13028\" aria-describedby=\"caption-attachment-13028\" style=\"width: 862px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-13028\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/06\/41586_2024_7421_Fig1_HTML-1024x981.webp\" alt=\"Hallucinations\" width=\"862\" height=\"826\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/06\/41586_2024_7421_Fig1_HTML-1024x981.webp 1024w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/06\/41586_2024_7421_Fig1_HTML-300x287.webp 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/06\/41586_2024_7421_Fig1_HTML-768x736.webp 768w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/06\/41586_2024_7421_Fig1_HTML-13x12.webp 13w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/06\/41586_2024_7421_Fig1_HTML-60x57.webp 60w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/06\/41586_2024_7421_Fig1_HTML-24x24.webp 24w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/06\/41586_2024_7421_Fig1_HTML.webp 1412w\" sizes=\"auto, (max-width: 862px) 100vw, 862px\" \/><figcaption id=\"caption-attachment-13028\" class=\"wp-caption-text\">High average semantic entropy suggests confabulation (essentially hallucinated facts presented as real), while low entropy, despite varied wording, indicates a likely true fact. Source: <a href=\"https:\/\/www.nature.com\/articles\/s41586-024-07421-0\">Nature<\/a> (open access)<\/figcaption><\/figure>\n<p>In the diagram above, you can see how some questions push the LLM into generating a confabulated (inaccurate, hallucinated) answer. For example, it produces a birth day and month for the questions at the bottom of the diagram even though the information required to answer them was not provided in the original material.<\/p>\n<h2>Implications of detecting hallucinations<\/h2>\n<p><span style=\"font-weight: 400;\">This work could help explain hallucinations and make LLMs more reliable and trustworthy.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">By making it possible to detect when an LLM is uncertain or prone to hallucination, semantic entropy paves the way for deploying these AI tools in high-stakes areas where factual accuracy is critical, such as healthcare, law, and finance. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">Incorrect outputs can have potentially catastrophic consequences when they affect high-stakes situations, as shown by some <a href=\"https:\/\/dailyai.com\/nb\/2023\/10\/predictive-policing-underdelivers-on-its-goals-and-risks-discrimination\/\">failed predictive policing<\/a> and <a href=\"https:\/\/dailyai.com\/nb\/2023\/07\/-39\/\">healthcare systems<\/a>.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">However, it is also important to remember that hallucinations are only one type of error that LLMs can make.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">As Dr. Farquhar explains: \"If an LLM makes consistent mistakes, this new method won't catch it. The most dangerous AI failures come when a system does something wrong but is confident and systematic. There is still much work to be done here.\"<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Nevertheless, the Oxford team's semantic entropy method represents a major step forward in our ability to understand and mitigate the limitations of AI language models.\u00a0<\/span><\/p>\n<p>By finding objective methods for detecting hallucinations, we come closer to a future where we can harness the potential of AI while ensuring that it remains a reliable and trustworthy tool in the service of humanity.<\/p>","protected":false},"excerpt":{"rendered":"<p>A University of Oxford study has developed a method for testing when language models are \"unsure\" of their output and risk hallucinating.  AI \"hallucinations\" are a phenomenon where large language models (LLMs) generate fluent and plausible responses that are not truthful or consistent.  Hallucinations are difficult, if not impossible, to eliminate from AI models. 
AI developers such as OpenAI, Google, and Anthropic have all acknowledged that hallucinations are likely to remain a byproduct of interacting with AI.  As Dr. Sebastian Farquhar, one of the study's authors, explains in a blog post: \"LLMs are highly capable of saying the same thing<\/p>","protected":false},"author":2,"featured_media":13029,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[88],"tags":[480,105],"class_list":["post-13027","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ethics","tag-hallucinations","tag-machine-learning"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>University of Oxford study identifies when AI hallucinations are more likely to occur | DailyAI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/nb\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/\" \/>\n<meta property=\"og:locale\" content=\"nb_NO\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"University of Oxford study identifies when AI hallucinations are more likely to occur | DailyAI\" \/>\n<meta property=\"og:description\" content=\"A University of Oxford study developed a means of testing when language models are \u201cunsure\u201d of their output and risk hallucinating.\u00a0 AI &#8220;hallucinations&#8221; refer to a phenomenon where large language models (LLMs) generate fluent and plausible responses that are not truthful or consistent.\u00a0 Hallucinations are tough \u2013 if not impossible \u2013 to separate from AI models. 
AI developers like OpenAI, Google, and Anthropic have all admitted that hallucinations will likely remain a byproduct of interacting with AI.\u00a0 As Dr. Sebastian Farquhar, one of the study&#8217;s authors, explains in a blog post, &#8220;LLMs are highly capable of saying the same thing\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/nb\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2024-06-23T10:10:33+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2024-06-25T11:36:18+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/06\/DALL\u00b7E-2024-06-23-11.10.02-A-surreal-and-futuristic-depiction-of-a-face-experiencing-AI-hallucinations.-The-face-appears-to-be-merging-with-digital-elements-with-parts-of-it-di.webp\" \/>\n\t<meta property=\"og:image:width\" content=\"1792\" \/>\n\t<meta property=\"og:image:height\" content=\"1024\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/webp\" \/>\n<meta name=\"author\" content=\"Sam Jeans\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Skrevet av\" \/>\n\t<meta name=\"twitter:data1\" content=\"Sam Jeans\" \/>\n\t<meta name=\"twitter:label2\" content=\"Ansl. 
lesetid\" \/>\n\t<meta name=\"twitter:data2\" content=\"4 minutter\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\\\/\"},\"author\":{\"name\":\"Sam Jeans\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/711e81f945549438e8bbc579efdeb3c9\"},\"headline\":\"University of Oxford study identifies when AI hallucinations are more likely to occur\",\"datePublished\":\"2024-06-23T10:10:33+00:00\",\"dateModified\":\"2024-06-25T11:36:18+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\\\/\"},\"wordCount\":813,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/06\\\/DALL\u00b7E-2024-06-23-11.10.02-A-surreal-and-futuristic-depiction-of-a-face-experiencing-AI-hallucinations.-The-face-appears-to-be-merging-with-digital-elements-with-parts-of-it-di.webp\",\"keywords\":[\"Hallucinations\",\"machine learning\"],\"articleSection\":[\"Ethics &amp; 
Society\"],\"inLanguage\":\"nb-NO\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\\\/\",\"name\":\"University of Oxford study identifies when AI hallucinations are more likely to occur | DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/06\\\/DALL\u00b7E-2024-06-23-11.10.02-A-surreal-and-futuristic-depiction-of-a-face-experiencing-AI-hallucinations.-The-face-appears-to-be-merging-with-digital-elements-with-parts-of-it-di.webp\",\"datePublished\":\"2024-06-23T10:10:33+00:00\",\"dateModified\":\"2024-06-25T11:36:18+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\\\/#breadcrumb\"},\"inLanguage\":\"nb-NO\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"nb-NO\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/06\\\/DALL\u00b7E-2024-06-23-11.10.02-A-surreal-and-futuristic-depiction
-of-a-face-experiencing-AI-hallucinations.-The-face-appears-to-be-merging-with-digital-elements-with-parts-of-it-di.webp\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/06\\\/DALL\u00b7E-2024-06-23-11.10.02-A-surreal-and-futuristic-depiction-of-a-face-experiencing-AI-hallucinations.-The-face-appears-to-be-merging-with-digital-elements-with-parts-of-it-di.webp\",\"width\":1792,\"height\":1024,\"caption\":\"hallucinations\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"University of Oxford study identifies when AI hallucinations are more likely to occur\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI 
News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"nb-NO\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"nb-NO\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/711e81f945549438e8bbc579efdeb3c9\",\"name\":\"Sam Jeans\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"nb-NO\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g\",\"caption\":\"Sam Jeans\"},\"description\":\"Sam is a science and technology writer who has worked in various AI startups. 
When he\u2019s not writing, he can be found reading medical journals or digging through boxes of vinyl records.\",\"sameAs\":[\"https:\\\/\\\/www.linkedin.com\\\/in\\\/sam-jeans-6746b9142\\\/\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/nb\\\/author\\\/samjeans\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"University of Oxford study identifies when AI hallucinations are more likely to occur | DailyAI","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/nb\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/","og_locale":"nb_NO","og_type":"article","og_title":"University of Oxford study identifies when AI hallucinations are more likely to occur | DailyAI","og_description":"A University of Oxford study developed a means of testing when language models are \u201cunsure\u201d of their output and risk hallucinating.\u00a0 AI &#8220;hallucinations&#8221; refer to a phenomenon where large language models (LLMs) generate fluent and plausible responses that are not truthful or consistent.\u00a0 Hallucinations are tough \u2013 if not impossible \u2013 to separate from AI models. AI developers like OpenAI, Google, and Anthropic have all admitted that hallucinations will likely remain a byproduct of interacting with AI.\u00a0 As Dr. 
Sebastian Farquhar, one of the study&#8217;s authors, explains in a blog post, &#8220;LLMs are highly capable of saying the same thing","og_url":"https:\/\/dailyai.com\/nb\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/","og_site_name":"DailyAI","article_published_time":"2024-06-23T10:10:33+00:00","article_modified_time":"2024-06-25T11:36:18+00:00","og_image":[{"width":1792,"height":1024,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/06\/DALL\u00b7E-2024-06-23-11.10.02-A-surreal-and-futuristic-depiction-of-a-face-experiencing-AI-hallucinations.-The-face-appears-to-be-merging-with-digital-elements-with-parts-of-it-di.webp","type":"image\/webp"}],"author":"Sam Jeans","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"Skrevet av":"Sam Jeans","Ansl. lesetid":"4 minutter"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/"},"author":{"name":"Sam Jeans","@id":"https:\/\/dailyai.com\/#\/schema\/person\/711e81f945549438e8bbc579efdeb3c9"},"headline":"University of Oxford study identifies when AI hallucinations are more likely to 
occur","datePublished":"2024-06-23T10:10:33+00:00","dateModified":"2024-06-25T11:36:18+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/"},"wordCount":813,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/06\/DALL\u00b7E-2024-06-23-11.10.02-A-surreal-and-futuristic-depiction-of-a-face-experiencing-AI-hallucinations.-The-face-appears-to-be-merging-with-digital-elements-with-parts-of-it-di.webp","keywords":["Hallucinations","machine learning"],"articleSection":["Ethics &amp; Society"],"inLanguage":"nb-NO"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/","url":"https:\/\/dailyai.com\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/","name":"University of Oxford study identifies when AI hallucinations are more likely to occur | 
DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/06\/DALL\u00b7E-2024-06-23-11.10.02-A-surreal-and-futuristic-depiction-of-a-face-experiencing-AI-hallucinations.-The-face-appears-to-be-merging-with-digital-elements-with-parts-of-it-di.webp","datePublished":"2024-06-23T10:10:33+00:00","dateModified":"2024-06-25T11:36:18+00:00","breadcrumb":{"@id":"https:\/\/dailyai.com\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/#breadcrumb"},"inLanguage":"nb-NO","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/"]}]},{"@type":"ImageObject","inLanguage":"nb-NO","@id":"https:\/\/dailyai.com\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/06\/DALL\u00b7E-2024-06-23-11.10.02-A-surreal-and-futuristic-depiction-of-a-face-experiencing-AI-hallucinations.-The-face-appears-to-be-merging-with-digital-elements-with-parts-of-it-di.webp","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/06\/DALL\u00b7E-2024-06-23-11.10.02-A-surreal-and-futuristic-depiction-of-a-face-experiencing-AI-hallucinations.-The-face-appears-to-be-merging-with-digital-elements-with-parts-of-it-di.webp","width":1792,"height":1024,"caption":"hallucinations"},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/#breadcrumb","itemListE
lement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"University of Oxford study identifies when AI hallucinations are more likely to occur"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DailyAI","description":"Your Daily Dose of AI News","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"nb-NO"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DailyAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"nb-NO","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/711e81f945549438e8bbc579efdeb3c9","name":"Sam Jeans","image":{"@type":"ImageObject","inLanguage":"nb-NO","@id":"https:\/\/secure.gravatar.com\/avatar\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g","caption":"Sam Jeans"},"description":"Sam is a science and technology writer who has worked in various AI startups. When he\u2019s not writing, he can be found reading medical journals or digging through boxes of vinyl records.","sameAs":["https:\/\/www.linkedin.com\/in\/sam-jeans-6746b9142\/"],"url":"https:\/\/dailyai.com\/nb\/author\/samjeans\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/nb\/wp-json\/wp\/v2\/posts\/13027","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/nb\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/nb\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/nb\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/nb\/wp-json\/wp\/v2\/comments?post=13027"}],"version-history":[{"count":10,"href":"https:\/\/dailyai.com\/nb\/wp-json\/wp\/v2\/posts\/13027\/revisions"}],"predecessor-version":[{"id":13087,"href":"https:\/\/dailyai.com\/nb\/wp-json\/wp\/v2\/posts\/13027\/revisions\/13087"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/nb\/wp-json\/wp\/v2\/media\/13029"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/nb\/wp-json\/wp\/v2\/media?parent=13027"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/nb\/wp-json\/wp\/v2\/categories?post=13027"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/nb\/wp-json\/wp\/v2\/tags?post=13027"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}