{"id":13027,"date":"2024-06-23T10:10:33","date_gmt":"2024-06-23T10:10:33","guid":{"rendered":"https:\/\/dailyai.com\/?p=13027"},"modified":"2024-06-25T11:36:18","modified_gmt":"2024-06-25T11:36:18","slug":"university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur","status":"publish","type":"post","link":"https:\/\/dailyai.com\/es\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/","title":{"rendered":"University of Oxford study identifies when AI hallucinations are more likely to occur"},"content":{"rendered":"<p><b>A University of Oxford study has developed a means of testing when language models are \"unsure\" of their output and risk hallucinating.\u00a0<\/b><\/p>\n<p><span style=\"font-weight: 400;\">AI \"hallucinations\" refer to a phenomenon where large language models (LLMs) generate fluent, plausible responses that are not truthful or consistent.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Hallucinations are tough \u2013 if not impossible \u2013 to separate from AI models. AI developers such as OpenAI, Google, and Anthropic have all admitted that hallucinations will likely remain a byproduct of interacting with AI.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">As Dr. Sebastian Farquhar, one of the study's authors, <\/span><a href=\"https:\/\/www.ox.ac.uk\/news\/2024-06-20-major-research-hallucinating-generative-models-advances-reliability-artificial\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">explains in a blog post<\/span><\/a><span style=\"font-weight: 400;\">, \"LLMs are highly capable of saying the same thing in many different ways, which can make it difficult to tell when they are certain about an answer and when they are literally just making something up.\"\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The Cambridge Dictionary even added an <\/span><a href=\"https:\/\/dailyai.com\/es\/2023\/11\/cambridge-dictionary-reveals-an-ai-related-word-of-the-year\/\"><span style=\"font-weight: 400;\">AI-related definition of the word<\/span><\/a><span style=\"font-weight: 400;\"> in 2023 and named it \"Word of the Year.\"\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">This University of Oxford <\/span><a href=\"https:\/\/www.nature.com\/articles\/s41586-024-07421-0\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">study<\/span><\/a><span style=\"font-weight: 400;\">, published in Nature, seeks to answer how we can detect when those hallucinations are most likely to occur.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">It introduces a concept called \"semantic entropy,\" which measures the uncertainty of an LLM's outputs at the level of meaning rather than just the specific words or phrases used.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">By calculating the semantic entropy of an LLM's responses, researchers can estimate the model's confidence in its outputs and identify cases where it is likely to hallucinate.<\/span><\/p>\n<h2>Semantic entropy in LLMs explained<\/h2>\n<p><span style=\"font-weight: 400;\">Semantic entropy, as the study defines it, measures uncertainty or inconsistency in the meaning of an LLM's responses. <\/span><span style=\"font-weight: 400;\">It helps detect when an LLM might be hallucinating or generating unreliable information.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">In simpler terms, semantic entropy measures how \"confused\" an LLM's output is.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The LLM is likely providing reliable information if the meanings of its outputs are closely related and consistent. <\/span><span style=\"font-weight: 400;\">But if the meanings are scattered and inconsistent, that is a red flag that the LLM might be hallucinating or generating inaccurate information.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Here is how it works:<\/span><\/p>\n<ol>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">The researchers prompted the LLM to generate several possible answers to the same question. This is done by feeding the question to the LLM multiple times, each with a different random seed or a slight variation in the input.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Semantic entropy examines the answers and groups together those that share the same underlying meaning, even if they use different words or phrasing.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">If the LLM is confident in the answer, its responses should have similar meanings, yielding a low semantic entropy score. This suggests the LLM understands the information clearly and consistently.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">However, if the LLM is unsure or confused, its answers will span a wider variety of meanings, some of which might be inconsistent or unrelated to the question. This yields a high semantic entropy score, indicating that the LLM may be hallucinating or generating unreliable information.<\/span><\/li>\n<\/ol>\n<p><span style=\"font-weight: 400;\">To evaluate its effectiveness, the researchers applied semantic entropy to a range of question-answering tasks, using benchmarks covering<\/span><span style=\"font-weight: 400;\">\u00a0trivia questions, reading comprehension, word problems, and biographies.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Overall, semantic entropy outperformed existing methods for detecting when an LLM was likely to generate an incorrect or inconsistent answer.<\/span><\/p>\n<figure id=\"attachment_13028\" aria-describedby=\"caption-attachment-13028\" style=\"width: 862px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-13028\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/06\/41586_2024_7421_Fig1_HTML-1024x981.webp\" alt=\"Hallucinations\" width=\"862\" height=\"826\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/06\/41586_2024_7421_Fig1_HTML-1024x981.webp 1024w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/06\/41586_2024_7421_Fig1_HTML-300x287.webp 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/06\/41586_2024_7421_Fig1_HTML-768x736.webp 768w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/06\/41586_2024_7421_Fig1_HTML-13x12.webp 13w, 
https:\/\/dailyai.com\/wp-content\/uploads\/2024\/06\/41586_2024_7421_Fig1_HTML-60x57.webp 60w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/06\/41586_2024_7421_Fig1_HTML-24x24.webp 24w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/06\/41586_2024_7421_Fig1_HTML.webp 1412w\" sizes=\"auto, (max-width: 862px) 100vw, 862px\" \/><figcaption id=\"caption-attachment-13028\" class=\"wp-caption-text\">High average semantic entropy suggests confabulation (essentially hallucinated facts stated as real), while low entropy, despite variation in wording, indicates a likely true fact. Source: <a href=\"https:\/\/www.nature.com\/articles\/s41586-024-07421-0\">Nature<\/a> (open access)<\/figcaption><\/figure>\n<p>In the diagram above, you can see how some questions push the LLM into generating a confabulated (inaccurate, hallucinated) answer. For example, it generates a birth day and month for the questions at the bottom of the diagram even though the information needed to answer them was not provided in the initial input.<\/p>\n<h2>Implications of detecting hallucinations<\/h2>\n<p><span style=\"font-weight: 400;\">This work can help explain hallucinations and make LLMs more reliable and trustworthy.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">By making it possible to detect when an LLM is uncertain or prone to hallucination, semantic entropy paves the way for deploying these AI tools in high-stakes domains where factual accuracy is critical, such as healthcare, law, and finance. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">Erroneous outputs can have potentially catastrophic repercussions when they influence high-stakes situations, as demonstrated by some <a href=\"https:\/\/dailyai.com\/es\/2023\/10\/predictive-policing-underdelivers-on-its-goals-and-risks-discrimination\/\">failures of predictive policing<\/a> and <a href=\"https:\/\/dailyai.com\/es\/2023\/07\/unmasking-the-deep-seated-biases-in-ai-systems\/\">healthcare systems<\/a>.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">However, it is also important to remember that hallucinations are just one type of error LLMs can make.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">As Dr. Farquhar explains, \"If an LLM makes systematic errors, this new method won't catch them. The most dangerous AI failures occur when a system does something wrong while being confident and systematic. There is still a lot of work to do.\"<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Nevertheless, the Oxford team's semantic entropy method represents a major step forward in our ability to understand and mitigate the limitations of AI language models.\u00a0<\/span><\/p>\n<p>Providing an objective means of detecting hallucinations brings us closer to a future where we can harness AI's potential while ensuring it remains a reliable, trustworthy tool in the service of humanity.<\/p>","protected":false},"excerpt":{"rendered":"<p>A University of Oxford study has developed a means of testing when language models are \"unsure\" of their output and risk hallucinating.  
AI \"hallucinations\" refer to a phenomenon where large language models (LLMs) generate fluent, plausible responses that are not truthful or consistent.  Hallucinations are tough \u2013 if not impossible \u2013 to separate from AI models. AI developers such as OpenAI, Google, and Anthropic have all admitted that hallucinations will likely remain a byproduct of interacting with AI.  As Dr. Sebastian Farquhar, one of the study's authors, explains in a blog post<\/p>","protected":false},"author":2,"featured_media":13029,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[88],"tags":[480,105],"class_list":["post-13027","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ethics","tag-hallucinations","tag-machine-learning"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>University of Oxford study identifies when AI hallucinations are more likely to occur | DailyAI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/es\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/\" \/>\n<meta property=\"og:locale\" content=\"es_ES\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"University of Oxford study identifies when AI hallucinations are more likely to occur | DailyAI\" \/>\n<meta property=\"og:description\" content=\"A University of Oxford study developed a means of testing when language models are \u201cunsure\u201d of their output and risk hallucinating.\u00a0 AI &#8220;hallucinations&#8221; refer to a phenomenon where large language models (LLMs) 
generate fluent and plausible responses that are not truthful or consistent.\u00a0 Hallucinations are tough \u2013 if not impossible \u2013 to separate from AI models. AI developers like OpenAI, Google, and Anthropic have all admitted that hallucinations will likely remain a byproduct of interacting with AI.\u00a0 As Dr. Sebastian Farquhar, one of the study&#8217;s authors, explains in a blog post, &#8220;LLMs are highly capable of saying the same thing\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/es\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2024-06-23T10:10:33+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2024-06-25T11:36:18+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/06\/DALL\u00b7E-2024-06-23-11.10.02-A-surreal-and-futuristic-depiction-of-a-face-experiencing-AI-hallucinations.-The-face-appears-to-be-merging-with-digital-elements-with-parts-of-it-di.webp\" \/>\n\t<meta property=\"og:image:width\" content=\"1792\" \/>\n\t<meta property=\"og:image:height\" content=\"1024\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/webp\" \/>\n<meta name=\"author\" content=\"Sam Jeans\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Escrito por\" \/>\n\t<meta name=\"twitter:data1\" content=\"Sam Jeans\" \/>\n\t<meta name=\"twitter:label2\" content=\"Tiempo de lectura\" \/>\n\t<meta name=\"twitter:data2\" content=\"4 minutos\" \/>\n<script type=\"application\/ld+json\" 
class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\\\/\"},\"author\":{\"name\":\"Sam Jeans\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/711e81f945549438e8bbc579efdeb3c9\"},\"headline\":\"University of Oxford study identifies when AI hallucinations are more likely to occur\",\"datePublished\":\"2024-06-23T10:10:33+00:00\",\"dateModified\":\"2024-06-25T11:36:18+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\\\/\"},\"wordCount\":813,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/06\\\/DALL\u00b7E-2024-06-23-11.10.02-A-surreal-and-futuristic-depiction-of-a-face-experiencing-AI-hallucinations.-The-face-appears-to-be-merging-with-digital-elements-with-parts-of-it-di.webp\",\"keywords\":[\"Hallucinations\",\"machine learning\"],\"articleSection\":[\"Ethics &amp; Society\"],\"inLanguage\":\"es\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\\\/\",\"name\":\"University of Oxford study identifies when AI hallucinations are more likely to occur | 
DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/06\\\/DALL\u00b7E-2024-06-23-11.10.02-A-surreal-and-futuristic-depiction-of-a-face-experiencing-AI-hallucinations.-The-face-appears-to-be-merging-with-digital-elements-with-parts-of-it-di.webp\",\"datePublished\":\"2024-06-23T10:10:33+00:00\",\"dateModified\":\"2024-06-25T11:36:18+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\\\/#breadcrumb\"},\"inLanguage\":\"es\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"es\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/06\\\/DALL\u00b7E-2024-06-23-11.10.02-A-surreal-and-futuristic-depiction-of-a-face-experiencing-AI-hallucinations.-The-face-appears-to-be-merging-with-digital-elements-with-parts-of-it-di.webp\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/06\\\/DALL\u00b7E-2024-06-23-11.10.02-A-surreal-and-futuristic-depiction-of-a-face-experiencing-AI-hallucinations.-The-face-appears-to-be-merging-with-digital-elements-with-parts-of-it-di.webp\",\"width\":1792,\"height\":1024,\"caption\":\"hallucination
s\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"University of Oxford study identifies when AI hallucinations are more likely to occur\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"es\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"es\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/711e81f945549438e8bbc579efdeb3c9\",\"name\":\"Sam 
Jeans\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"es\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g\",\"caption\":\"Sam Jeans\"},\"description\":\"Sam is a science and technology writer who has worked in various AI startups. When he\u2019s not writing, he can be found reading medical journals or digging through boxes of vinyl records.\",\"sameAs\":[\"https:\\\/\\\/www.linkedin.com\\\/in\\\/sam-jeans-6746b9142\\\/\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/es\\\/author\\\/samjeans\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Un estudio de la Universidad de Oxford identifica cu\u00e1ndo es m\u00e1s probable que se produzcan alucinaciones con IA | DailyAI","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/es\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/","og_locale":"es_ES","og_type":"article","og_title":"University of Oxford study identifies when AI hallucinations are more likely to occur | DailyAI","og_description":"A University of Oxford study developed a means of testing when language models are \u201cunsure\u201d of their output and risk hallucinating.\u00a0 AI &#8220;hallucinations&#8221; refer to a phenomenon where large language models (LLMs) generate fluent and plausible responses that are not truthful or consistent.\u00a0 Hallucinations are tough \u2013 if not impossible \u2013 to separate from AI models. 
AI developers like OpenAI, Google, and Anthropic have all admitted that hallucinations will likely remain a byproduct of interacting with AI.\u00a0 As Dr. Sebastian Farquhar, one of the study&#8217;s authors, explains in a blog post, &#8220;LLMs are highly capable of saying the same thing","og_url":"https:\/\/dailyai.com\/es\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/","og_site_name":"DailyAI","article_published_time":"2024-06-23T10:10:33+00:00","article_modified_time":"2024-06-25T11:36:18+00:00","og_image":[{"width":1792,"height":1024,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/06\/DALL\u00b7E-2024-06-23-11.10.02-A-surreal-and-futuristic-depiction-of-a-face-experiencing-AI-hallucinations.-The-face-appears-to-be-merging-with-digital-elements-with-parts-of-it-di.webp","type":"image\/webp"}],"author":"Sam Jeans","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"Escrito por":"Sam Jeans","Tiempo de lectura":"4 minutos"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/"},"author":{"name":"Sam Jeans","@id":"https:\/\/dailyai.com\/#\/schema\/person\/711e81f945549438e8bbc579efdeb3c9"},"headline":"University of Oxford study identifies when AI hallucinations are more likely to 
occur","datePublished":"2024-06-23T10:10:33+00:00","dateModified":"2024-06-25T11:36:18+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/"},"wordCount":813,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/06\/DALL\u00b7E-2024-06-23-11.10.02-A-surreal-and-futuristic-depiction-of-a-face-experiencing-AI-hallucinations.-The-face-appears-to-be-merging-with-digital-elements-with-parts-of-it-di.webp","keywords":["Hallucinations","machine learning"],"articleSection":["Ethics &amp; Society"],"inLanguage":"es"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/","url":"https:\/\/dailyai.com\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/","name":"Un estudio de la Universidad de Oxford identifica cu\u00e1ndo es m\u00e1s probable que se produzcan alucinaciones con IA | 
DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/06\/DALL\u00b7E-2024-06-23-11.10.02-A-surreal-and-futuristic-depiction-of-a-face-experiencing-AI-hallucinations.-The-face-appears-to-be-merging-with-digital-elements-with-parts-of-it-di.webp","datePublished":"2024-06-23T10:10:33+00:00","dateModified":"2024-06-25T11:36:18+00:00","breadcrumb":{"@id":"https:\/\/dailyai.com\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/#breadcrumb"},"inLanguage":"es","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/"]}]},{"@type":"ImageObject","inLanguage":"es","@id":"https:\/\/dailyai.com\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/06\/DALL\u00b7E-2024-06-23-11.10.02-A-surreal-and-futuristic-depiction-of-a-face-experiencing-AI-hallucinations.-The-face-appears-to-be-merging-with-digital-elements-with-parts-of-it-di.webp","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/06\/DALL\u00b7E-2024-06-23-11.10.02-A-surreal-and-futuristic-depiction-of-a-face-experiencing-AI-hallucinations.-The-face-appears-to-be-merging-with-digital-elements-with-parts-of-it-di.webp","width":1792,"height":1024,"caption":"hallucinations"},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2024\/06\/university-of-oxford-study-identifies-when-ai-hallucinations-are-more-likely-to-occur\/#breadcrumb","itemListElement
":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"University of Oxford study identifies when AI hallucinations are more likely to occur"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DailyAI","description":"Su dosis diaria de noticias sobre IA","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"es"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DailyAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"es","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/711e81f945549438e8bbc579efdeb3c9","name":"Sam Jeans","image":{"@type":"ImageObject","inLanguage":"es","@id":"https:\/\/secure.gravatar.com\/avatar\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g","caption":"Sam Jeans"},"description":"Sam es un escritor de 
ciencia y tecnolog\u00eda que ha trabajado en varias startups de IA. Cuando no est\u00e1 escribiendo, se le puede encontrar leyendo revistas m\u00e9dicas o rebuscando en cajas de discos de vinilo.","sameAs":["https:\/\/www.linkedin.com\/in\/sam-jeans-6746b9142\/"],"url":"https:\/\/dailyai.com\/es\/author\/samjeans\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/es\/wp-json\/wp\/v2\/posts\/13027","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/es\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/es\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/es\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/es\/wp-json\/wp\/v2\/comments?post=13027"}],"version-history":[{"count":10,"href":"https:\/\/dailyai.com\/es\/wp-json\/wp\/v2\/posts\/13027\/revisions"}],"predecessor-version":[{"id":13087,"href":"https:\/\/dailyai.com\/es\/wp-json\/wp\/v2\/posts\/13027\/revisions\/13087"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/es\/wp-json\/wp\/v2\/media\/13029"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/es\/wp-json\/wp\/v2\/media?parent=13027"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/es\/wp-json\/wp\/v2\/categories?post=13027"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/es\/wp-json\/wp\/v2\/tags?post=13027"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}