{"id":13027,"date":"2024-06-23T10:10:33","date_gmt":"2024-06-23T10:10:33","guid":{"rendered":"https:\/\/dailyai.com\/?p=13027"},"modified":"2024-06-25T11:36:18","modified_gmt":"2024-06-25T11:36:18","slug":"university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur","status":"publish","type":"post","link":"https:\/\/dailyai.com\/pt\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/","title":{"rendered":"Estudo da Universidade de Oxford identifica quando \u00e9 mais prov\u00e1vel que ocorram alucina\u00e7\u00f5es de IA"},"content":{"rendered":"<p><b>Um estudo da Universidade de Oxford desenvolveu um meio de testar quando os modelos lingu\u00edsticos est\u00e3o \"inseguros\" quanto aos seus resultados e correm o risco de alucinar.\u00a0<\/b><\/p>\n<p><span style=\"font-weight: 400;\">As \"alucina\u00e7\u00f5es\" da IA referem-se a um fen\u00f3meno em que os modelos de linguagem de grande dimens\u00e3o (LLM) geram respostas fluentes e plaus\u00edveis que n\u00e3o s\u00e3o verdadeiras ou consistentes.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">As alucina\u00e7\u00f5es s\u00e3o dif\u00edceis - se n\u00e3o mesmo imposs\u00edveis - de separar dos modelos de IA. Os criadores de IA como a OpenAI, a Google e a Anthropic admitiram que as alucina\u00e7\u00f5es continuar\u00e3o provavelmente a ser um subproduto da intera\u00e7\u00e3o com a IA.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Como afirma o Dr. 
Sebastian Farquhar, um dos autores do estudo, <\/span><a href=\"https:\/\/www.ox.ac.uk\/news\/2024-06-20-major-research-hallucinating-generative-models-advances-reliability-artificial\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">explica numa publica\u00e7\u00e3o no blogue<\/span><\/a><span style=\"font-weight: 400;\">Os LLM s\u00e3o muito capazes de dizer a mesma coisa de muitas maneiras diferentes, o que pode tornar dif\u00edcil perceber quando t\u00eam a certeza de uma resposta e quando est\u00e3o literalmente a inventar alguma coisa\".\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">O Cambridge Dictionary acrescentou mesmo um <\/span><a href=\"https:\/\/dailyai.com\/pt\/2023\/11\/cambridge-dictionary-reveals-an-ai-related-word-of-the-year\/\"><span style=\"font-weight: 400;\">Defini\u00e7\u00e3o da palavra relacionada com a IA<\/span><\/a><span style=\"font-weight: 400;\"> em 2023 e nomeou-a \"Palavra do Ano\".\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Esta Universidade de Oxford <\/span> <a href=\"https:\/\/www.nature.com\/articles\/s41586-024-07421-0\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">estudo<\/span><\/a><span style=\"font-weight: 400;\">, publicado na Nature,<\/span><span style=\"font-weight: 400;\"> procura responder como podemos detetar quando \u00e9 mais prov\u00e1vel que essas alucina\u00e7\u00f5es ocorram.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Introduz um conceito chamado \"entropia sem\u00e2ntica\", que mede a incerteza dos resultados de um LLM ao n\u00edvel do significado e n\u00e3o apenas das palavras ou frases espec\u00edficas utilizadas.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Ao calcular a entropia sem\u00e2ntica das respostas de um LLM, os investigadores podem estimar a confian\u00e7a do modelo nos seus resultados e identificar os casos em que \u00e9 prov\u00e1vel que tenha alucina\u00e7\u00f5es.<\/span><\/p>\n<h2>Explica\u00e7\u00e3o 
da entropia sem\u00e2ntica em LLMs<\/h2>\n<p><span style=\"font-weight: 400;\">A entropia sem\u00e2ntica, tal como definida pelo estudo, mede a incerteza ou a inconsist\u00eancia do significado das respostas de um LLM. <\/span><span style=\"font-weight: 400;\">Ajuda a detetar quando um LLM pode estar a alucinar ou a gerar informa\u00e7\u00f5es pouco fi\u00e1veis.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Em termos mais simples, a entropia sem\u00e2ntica mede o qu\u00e3o \"confuso\" \u00e9 o resultado de um LLM.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">O LLM fornecer\u00e1 provavelmente informa\u00e7\u00f5es fi\u00e1veis se o significado dos seus resultados estiver intimamente relacionado e for consistente. <\/span><span style=\"font-weight: 400;\">Mas se os significados forem dispersos e inconsistentes, isso \u00e9 um sinal de alerta de que o LLM pode estar a alucinar ou a gerar informa\u00e7\u00f5es imprecisas.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Eis como funciona:<\/span><\/p>\n<ol>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Os investigadores incitaram ativamente o LLM a gerar v\u00e1rias respostas poss\u00edveis \u00e0 mesma pergunta. Para o efeito, a pergunta \u00e9 colocada v\u00e1rias vezes ao LLM, cada vez com uma semente aleat\u00f3ria diferente ou uma ligeira varia\u00e7\u00e3o na entrada.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">A entropia sem\u00e2ntica examina as respostas e agrupa as que t\u00eam o mesmo significado subjacente, mesmo que utilizem palavras ou frases diferentes.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Se o LLM estiver confiante na resposta, suas respostas dever\u00e3o ter significados semelhantes, resultando em uma pontua\u00e7\u00e3o baixa de entropia sem\u00e2ntica. 
Isto sugere que o MLT compreende a informa\u00e7\u00e3o de forma clara e consistente.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">No entanto, se o MLT estiver incerto ou confuso, suas respostas ter\u00e3o uma variedade maior de significados, alguns dos quais podem ser inconsistentes ou n\u00e3o relacionados \u00e0 pergunta. Isso resulta em uma alta pontua\u00e7\u00e3o de entropia sem\u00e2ntica, indicando que o MLT pode ter alucina\u00e7\u00f5es ou gerar informa\u00e7\u00f5es n\u00e3o confi\u00e1veis.<\/span><\/li>\n<\/ol>\n<p><span style=\"font-weight: 400;\">Para avaliar a sua efic\u00e1cia, os investigadores aplicaram a entropia sem\u00e2ntica a um conjunto diversificado de tarefas de resposta a perguntas. Isto envolveu testes de refer\u00eancia como<\/span><span style=\"font-weight: 400;\">\u00a0perguntas de trivialidades, compreens\u00e3o de leitura, problemas de palavras e biografias.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">De um modo geral, a entropia sem\u00e2ntica superou os m\u00e9todos existentes para detetar quando um LLM era suscet\u00edvel de gerar uma resposta incorrecta ou inconsistente.<\/span><\/p>\n<figure id=\"attachment_13028\" aria-describedby=\"caption-attachment-13028\" style=\"width: 862px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-13028\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/06\/41586_2024_7421_Fig1_HTML-1024x981.webp\" alt=\"Alucina\u00e7\u00f5es\" width=\"862\" height=\"826\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/06\/41586_2024_7421_Fig1_HTML-1024x981.webp 1024w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/06\/41586_2024_7421_Fig1_HTML-300x287.webp 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/06\/41586_2024_7421_Fig1_HTML-768x736.webp 768w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/06\/41586_2024_7421_Fig1_HTML-13x12.webp 13w, 
https:\/\/dailyai.com\/wp-content\/uploads\/2024\/06\/41586_2024_7421_Fig1_HTML-60x57.webp 60w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/06\/41586_2024_7421_Fig1_HTML-24x24.webp 24w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/06\/41586_2024_7421_Fig1_HTML.webp 1412w\" sizes=\"auto, (max-width: 862px) 100vw, 862px\" \/><figcaption id=\"caption-attachment-13028\" class=\"wp-caption-text\">Uma entropia sem\u00e2ntica m\u00e9dia elevada sugere confabula\u00e7\u00e3o (factos essencialmente alucinados declarados como reais), ao passo que uma entropia baixa, apesar de uma reda\u00e7\u00e3o vari\u00e1vel, indica um facto provavelmente verdadeiro. Fonte: <a href=\"https:\/\/www.nature.com\/articles\/s41586-024-07421-0\">Natureza<\/a> (acesso livre)<\/figcaption><\/figure>\n<p>No diagrama acima, \u00e9 poss\u00edvel ver como alguns pedidos levam o LLM a gerar uma resposta confabulada (imprecisa, alucinat\u00f3ria). Por exemplo, produz um dia e um m\u00eas de nascimento para as perguntas na parte inferior do diagrama, quando a informa\u00e7\u00e3o necess\u00e1ria para as responder n\u00e3o foi fornecida na informa\u00e7\u00e3o inicial.<\/p>\n<h2>Implica\u00e7\u00f5es da dete\u00e7\u00e3o de alucina\u00e7\u00f5es<\/h2>\n<p><span style=\"font-weight: 400;\">Este trabalho pode ajudar a explicar as alucina\u00e7\u00f5es e a tornar os MLT mais fi\u00e1veis e dignos de confian\u00e7a.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Ao fornecer uma forma de detetar quando um LLM \u00e9 incerto ou propenso a alucina\u00e7\u00f5es, a entropia sem\u00e2ntica abre caminho para a utiliza\u00e7\u00e3o destas ferramentas de IA em dom\u00ednios de grande import\u00e2ncia em que a exatid\u00e3o dos factos \u00e9 cr\u00edtica, como os cuidados de sa\u00fade, o direito e as finan\u00e7as. 
<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Resultados errados podem ter impactos potencialmente catastr\u00f3ficos quando influenciam situa\u00e7\u00f5es de grande import\u00e2ncia, como demonstrado por alguns <a href=\"https:\/\/dailyai.com\/pt\/2023\/10\/predictive-policing-underdelivers-on-its-goals-and-risks-discrimination\/\">policiamento preditivo falhado<\/a> e <a href=\"https:\/\/dailyai.com\/pt\/2023\/07\/unmasking-the-deep-seated-biases-in-ai-systems\/\">sistemas de sa\u00fade<\/a>.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">No entanto, tamb\u00e9m \u00e9 importante lembrar que as alucina\u00e7\u00f5es s\u00e3o apenas um tipo de erro que os LLMs podem cometer.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Como explica o Dr. Farquhar, \"se um LLM cometer erros consistentes, este novo m\u00e9todo n\u00e3o os detectar\u00e1. As falhas mais perigosas da IA surgem quando um sistema faz algo de mau mas est\u00e1 confiante e \u00e9 sistem\u00e1tico. Ainda h\u00e1 muito trabalho a fazer\".<\/span><\/p>\n<p><span style=\"font-weight: 400;\">No entanto, o m\u00e9todo de entropia sem\u00e2ntica da equipa de Oxford representa um grande passo em frente na nossa capacidade de compreender e atenuar as limita\u00e7\u00f5es dos modelos lingu\u00edsticos de IA.\u00a0<\/span><\/p>\n<p>Fornecer um meio objetivo para os detetar aproxima-nos de um futuro em que podemos aproveitar o potencial da IA, assegurando ao mesmo tempo que continua a ser uma ferramenta fi\u00e1vel e digna de confian\u00e7a ao servi\u00e7o da humanidade.<\/p>","protected":false},"excerpt":{"rendered":"<p>Um estudo da Universidade de Oxford desenvolveu um meio de testar quando os modelos lingu\u00edsticos est\u00e3o \"inseguros\" quanto aos seus resultados e correm o risco de ter alucina\u00e7\u00f5es.  
As \"alucina\u00e7\u00f5es\" da IA referem-se a um fen\u00f3meno em que os modelos lingu\u00edsticos de grande dimens\u00e3o (LLM) geram respostas fluentes e plaus\u00edveis que n\u00e3o s\u00e3o verdadeiras ou consistentes.  As alucina\u00e7\u00f5es s\u00e3o dif\u00edceis - se n\u00e3o mesmo imposs\u00edveis - de separar dos modelos de IA. Os criadores de IA como a OpenAI, a Google e a Anthropic admitiram que as alucina\u00e7\u00f5es continuar\u00e3o provavelmente a ser um subproduto da intera\u00e7\u00e3o com a IA.  Como explica o Dr. Sebastian Farquhar, um dos autores do estudo, numa publica\u00e7\u00e3o no blogue, \"os LLM s\u00e3o altamente capazes de dizer a mesma coisa<\/p>","protected":false},"author":2,"featured_media":13029,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[88],"tags":[480,105],"class_list":["post-13027","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ethics","tag-hallucinations","tag-machine-learning"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>University of Oxford study identifies when AI hallucinations are more likely to occur | DailyAI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/pt\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/\" \/>\n<meta property=\"og:locale\" content=\"pt_PT\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"University of Oxford study identifies when AI hallucinations are more likely to occur | DailyAI\" \/>\n<meta property=\"og:description\" content=\"A University of Oxford study developed a means of testing when language models are \u201cunsure\u201d of their 
output and risk hallucinating.\u00a0 AI &#8220;hallucinations&#8221; refer to a phenomenon where large language models (LLMs) generate fluent and plausible responses that are not truthful or consistent.\u00a0 Hallucinations are tough \u2013 if not impossible \u2013 to separate from AI models. AI developers like OpenAI, Google, and Anthropic have all admitted that hallucinations will likely remain a byproduct of interacting with AI.\u00a0 As Dr. Sebastian Farquhar, one of the study&#8217;s authors, explains in a blog post, &#8220;LLMs are highly capable of saying the same thing\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/pt\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2024-06-23T10:10:33+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2024-06-25T11:36:18+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/06\/DALL\u00b7E-2024-06-23-11.10.02-A-surreal-and-futuristic-depiction-of-a-face-experiencing-AI-hallucinations.-The-face-appears-to-be-merging-with-digital-elements-with-parts-of-it-di.webp\" \/>\n\t<meta property=\"og:image:width\" content=\"1792\" \/>\n\t<meta property=\"og:image:height\" content=\"1024\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/webp\" \/>\n<meta name=\"author\" content=\"Sam Jeans\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Sam Jeans\" \/>\n\t<meta name=\"twitter:label2\" content=\"Estimated reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"4 minutes\" \/>\n<script type=\"application\/ld+json\" 
class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\\\/\"},\"author\":{\"name\":\"Sam Jeans\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/711e81f945549438e8bbc579efdeb3c9\"},\"headline\":\"University of Oxford study identifies when AI hallucinations are more likely to occur\",\"datePublished\":\"2024-06-23T10:10:33+00:00\",\"dateModified\":\"2024-06-25T11:36:18+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\\\/\"},\"wordCount\":813,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/06\\\/DALL\u00b7E-2024-06-23-11.10.02-A-surreal-and-futuristic-depiction-of-a-face-experiencing-AI-hallucinations.-The-face-appears-to-be-merging-with-digital-elements-with-parts-of-it-di.webp\",\"keywords\":[\"Hallucinations\",\"machine learning\"],\"articleSection\":[\"Ethics &amp; Society\"],\"inLanguage\":\"pt-PT\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\\\/\",\"name\":\"University of Oxford study identifies when AI hallucinations are more likely to occur | 
DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/06\\\/DALL\u00b7E-2024-06-23-11.10.02-A-surreal-and-futuristic-depiction-of-a-face-experiencing-AI-hallucinations.-The-face-appears-to-be-merging-with-digital-elements-with-parts-of-it-di.webp\",\"datePublished\":\"2024-06-23T10:10:33+00:00\",\"dateModified\":\"2024-06-25T11:36:18+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\\\/#breadcrumb\"},\"inLanguage\":\"pt-PT\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-PT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/06\\\/DALL\u00b7E-2024-06-23-11.10.02-A-surreal-and-futuristic-depiction-of-a-face-experiencing-AI-hallucinations.-The-face-appears-to-be-merging-with-digital-elements-with-parts-of-it-di.webp\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/06\\\/DALL\u00b7E-2024-06-23-11.10.02-A-surreal-and-futuristic-depiction-of-a-face-experiencing-AI-hallucinations.-The-face-appears-to-be-merging-with-digital-elements-with-parts-of-it-di.webp\",\"width\":1792,\"height\":1024,\"caption\":\"halluci
nations\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"University of Oxford study identifies when AI hallucinations are more likely to occur\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"pt-PT\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-PT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/711e81f945549438e8bbc579efdeb3c9\",\"name\":\"Sam 
Jeans\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-PT\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g\",\"caption\":\"Sam Jeans\"},\"description\":\"Sam is a science and technology writer who has worked in various AI startups. When he\u2019s not writing, he can be found reading medical journals or digging through boxes of vinyl records.\",\"sameAs\":[\"https:\\\/\\\/www.linkedin.com\\\/in\\\/sam-jeans-6746b9142\\\/\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/pt\\\/author\\\/samjeans\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Estudo da Universidade de Oxford identifica quando \u00e9 mais prov\u00e1vel que ocorram alucina\u00e7\u00f5es de IA | DailyAI","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/pt\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/","og_locale":"pt_PT","og_type":"article","og_title":"University of Oxford study identifies when AI hallucinations are more likely to occur | DailyAI","og_description":"A University of Oxford study developed a means of testing when language models are \u201cunsure\u201d of their output and risk hallucinating.\u00a0 AI &#8220;hallucinations&#8221; refer to a phenomenon where large language models (LLMs) generate fluent and plausible responses that are not truthful or consistent.\u00a0 Hallucinations are tough \u2013 if not impossible \u2013 to separate from AI models. 
AI developers like OpenAI, Google, and Anthropic have all admitted that hallucinations will likely remain a byproduct of interacting with AI.\u00a0 As Dr. Sebastian Farquhar, one of the study&#8217;s authors, explains in a blog post, &#8220;LLMs are highly capable of saying the same thing","og_url":"https:\/\/dailyai.com\/pt\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/","og_site_name":"DailyAI","article_published_time":"2024-06-23T10:10:33+00:00","article_modified_time":"2024-06-25T11:36:18+00:00","og_image":[{"width":1792,"height":1024,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/06\/DALL\u00b7E-2024-06-23-11.10.02-A-surreal-and-futuristic-depiction-of-a-face-experiencing-AI-hallucinations.-The-face-appears-to-be-merging-with-digital-elements-with-parts-of-it-di.webp","type":"image\/webp"}],"author":"Sam Jeans","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"Written by":"Sam Jeans","Estimated reading time":"4 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/"},"author":{"name":"Sam Jeans","@id":"https:\/\/dailyai.com\/#\/schema\/person\/711e81f945549438e8bbc579efdeb3c9"},"headline":"University of Oxford study identifies when AI hallucinations are more likely to 
occur","datePublished":"2024-06-23T10:10:33+00:00","dateModified":"2024-06-25T11:36:18+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/"},"wordCount":813,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/06\/DALL\u00b7E-2024-06-23-11.10.02-A-surreal-and-futuristic-depiction-of-a-face-experiencing-AI-hallucinations.-The-face-appears-to-be-merging-with-digital-elements-with-parts-of-it-di.webp","keywords":["Hallucinations","machine learning"],"articleSection":["Ethics &amp; Society"],"inLanguage":"pt-PT"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/","url":"https:\/\/dailyai.com\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/","name":"Estudo da Universidade de Oxford identifica quando \u00e9 mais prov\u00e1vel que ocorram alucina\u00e7\u00f5es de IA | 
DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/06\/DALL\u00b7E-2024-06-23-11.10.02-A-surreal-and-futuristic-depiction-of-a-face-experiencing-AI-hallucinations.-The-face-appears-to-be-merging-with-digital-elements-with-parts-of-it-di.webp","datePublished":"2024-06-23T10:10:33+00:00","dateModified":"2024-06-25T11:36:18+00:00","breadcrumb":{"@id":"https:\/\/dailyai.com\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/#breadcrumb"},"inLanguage":"pt-PT","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/"]}]},{"@type":"ImageObject","inLanguage":"pt-PT","@id":"https:\/\/dailyai.com\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/06\/DALL\u00b7E-2024-06-23-11.10.02-A-surreal-and-futuristic-depiction-of-a-face-experiencing-AI-hallucinations.-The-face-appears-to-be-merging-with-digital-elements-with-parts-of-it-di.webp","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/06\/DALL\u00b7E-2024-06-23-11.10.02-A-surreal-and-futuristic-depiction-of-a-face-experiencing-AI-hallucinations.-The-face-appears-to-be-merging-with-digital-elements-with-parts-of-it-di.webp","width":1792,"height":1024,"caption":"hallucinations"},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/#breadcrumb","itemListE
lement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"University of Oxford study identifies when AI hallucinations are more likely to occur"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DailyAI","description":"A sua dose di\u00e1ria de not\u00edcias sobre IA","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"pt-PT"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DailyAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"pt-PT","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/711e81f945549438e8bbc579efdeb3c9","name":"Cal\u00e7as de ganga Sam","image":{"@type":"ImageObject","inLanguage":"pt-PT","@id":"https:\/\/secure.gravatar.com\/avatar\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g","caption":"Sam 
Jeans"},"description":"Sam \u00e9 um escritor de ci\u00eancia e tecnologia que trabalhou em v\u00e1rias startups de IA. Quando n\u00e3o est\u00e1 a escrever, pode ser encontrado a ler revistas m\u00e9dicas ou a vasculhar caixas de discos de vinil.","sameAs":["https:\/\/www.linkedin.com\/in\/sam-jeans-6746b9142\/"],"url":"https:\/\/dailyai.com\/pt\/author\/samjeans\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/posts\/13027","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/comments?post=13027"}],"version-history":[{"count":10,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/posts\/13027\/revisions"}],"predecessor-version":[{"id":13087,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/posts\/13027\/revisions\/13087"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/media\/13029"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/media?parent=13027"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/categories?post=13027"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/tags?post=13027"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}