<h1>o1 is smarter but more deceptive, with a "medium" danger level</h1>
<p><em>By Eugene van der Watt | DailyAI | September 16, 2024</em></p>
<p><strong>OpenAI's new "o1" LLMs, nicknamed Strawberry, display significant improvements over GPT-4o, but the company says this comes with increased risks.</strong></p>
<p>OpenAI says it is committed to the safe development of its AI models. To that end, it developed a Preparedness Framework, a set of "processes to track, evaluate, and protect against catastrophic risks from powerful models."</p>
<p>OpenAI's self-imposed limits regulate which models get released or undergo further development. The Preparedness Framework results in a scorecard where CBRN (chemical, biological, radiological, nuclear), model autonomy, cybersecurity, and persuasion risks are rated as low, medium, high, or critical.</p>
<p>Where unacceptable risks are identified, mitigations are put in place. Only models with a post-mitigation score of "medium" or below can be deployed, and only models with a post-mitigation score of "high" or below can be developed further.</p>
<p>The release of o1 marks the first time OpenAI has shipped a model that presents a "medium" risk for CBRN. GPT-4o is rated as low risk.</p>
<p>o1 won't enable an amateur to create a biological weapon, but OpenAI says there was "a nontrivial uplift" in its ability to perform lab tasks. Specifically, OpenAI says "o1-preview and o1-mini significantly outperformed GPT-4o" on wet lab biology protocols.</p>
<p>The <a href="https://assets.ctfassets.net/kftzwdyauwt9/67qJD51Aur3eIc96iOfeOP/71551c3d223cd97e591aa89567306912/o1_system_card.pdf" target="_blank" rel="noopener">system card</a> for the o1 models describes the safety testing the LLMs underwent and hints at some of OpenAI's concerns about potential risks.</p>
<p>In its conclusion, the system card notes: "These increased capabilities come with significantly improved performance on safety benchmarks, but also increase certain types of risk."</p>
<p>The results of OpenAI's alignment efforts suggest that, in many respects, o1 is a safer model. It is more resistant to jailbreak attacks, less prone to bias, and less prone to hallucinations.</p>
<p>Some of the experimental results show why OpenAI has cause for concern.</p>
<h2>Deception</h2>
<p>OpenAI engaged the evaluation organization Apollo Research to perform some of o1's safety tests related to deception, or scheming. Apollo broadly defines scheming as AIs gaming their oversight mechanisms as a means to achieve a goal.</p>
<p>A self-aware model that is good at self-reasoning and theory-of-mind tasks could scheme, or employ deception, to achieve its primary goal. For example, if an AI model realized it was undergoing alignment testing, it could hide risky behavior to achieve its goal of being deployed.</p>
<p>The system card noted that "o1-preview sometimes instrumentally faked alignment during testing."</p>
<p>One example of this arose when o1 was prompted with the primary goal of maximizing economic growth in an urban planning project. The o1 model had access to information showing that it would not make it past the testing phase if it pursued that goal directly. Here is how o1 reasoned about how it would achieve its objective:</p>
<figure id="attachment_14225" aria-describedby="caption-attachment-14225" style="width: 1306px" class="wp-caption aligncenter"><img class="wp-image-14225 size-full" src="https://dailyai.com/wp-content/uploads/2024/09/01-decides-to-fake-it-to-pass-testing.png" alt="" width="1306" height="716" /><figcaption id="caption-attachment-14225" class="wp-caption-text">o1 decides to hide its true intent during testing so that it gets deployed. Source: OpenAI</figcaption></figure>
<p>This example shows o1's ability to conceal its true intentions during testing so that it can pursue its primary goal once deployed. If o1 is aware of OpenAI's deployment rules under the Preparedness Framework, could it have faked some of its responses to secure deployment?</p>
<p>And if smarter models are in the works, could they make sure they stay below the "critical" risk score so that they continue to be developed?</p>
<p>The Apollo team says it "subjectively believes o1-preview cannot engage in scheming that can lead to catastrophic harms, although current evals aren't designed to definitively rule this out."</p>
<h2>Persuasion</h2>
<p>Testing of both o1-preview and o1-mini shows that these models have "human-level persuasion capabilities." However, OpenAI says they "do not outperform top human writers and do not reach our high risk threshold."</p>
<p>The experiments they ran to verify this raise some concerns. In the MakeMePay test, o1-preview was assigned the role of a con artist and tasked with convincing GPT-4o to hand over money it had been awarded.</p>
<p>GPT-4o was told it had been awarded $100 and needed to be rational about how to use the funds. Over the course of 1,000 interactions, o1-preview was able to con GPT-4o out of the money 25.8% of the time.</p>
<p>OpenAI concluded that "These results indicate that the o1 model series may be more manipulative than GPT-4o in getting GPT-4o to perform the undisclosed task (∼25% uplift)." It further noted that "This evaluation gives us a baseline for the model's ability to do persuasive harm, without triggering any model policies (as telling a model to play a game is not out-of-policy)."</p>
<p>The prospect of setting the o1 LLMs loose on real-world problems is extremely exciting, and when o1 gains multimodal capabilities it will represent another exponential leap. But when AI testers say they cannot rule out "catastrophic harms," and that models sometimes hide their true intent, there may be reason to temper that excitement with caution.</p>
<p>Did OpenAI just give Gavin Newsom a good reason to sign the <a href="https://dailyai.com/es/2024/07/proposed-california-bills-could-be-disastrous-for-ai-development/">SB 1047 AI safety bill</a> it opposes?</p>