{"id":8400,"date":"2023-12-18T09:14:27","date_gmt":"2023-12-18T09:14:27","guid":{"rendered":"https:\/\/dailyai.com\/?p=8400"},"modified":"2023-12-18T09:14:27","modified_gmt":"2023-12-18T09:14:27","slug":"openai-releases-first-results-from-superalignment-project","status":"publish","type":"post","link":"https:\/\/dailyai.com\/es\/2023\/12\/openai-releases-first-results-from-superalignment-project\/","title":{"rendered":"OpenAI releases first results from Superalignment project"},"content":{"rendered":"<p><strong>Current AI models are capable of doing a lot of unsafe or undesirable things. Human supervision and feedback keep these models aligned, but what will happen when these models become smarter than us?<\/strong><\/p>\n<p>OpenAI says it\u2019s possible that we could see the creation of an AI that is smarter than humans in the next 10 years. Along with the increased intelligence comes the risk that humans may no longer be capable of supervising these models.<\/p>\n<p>OpenAI\u2019s Superalignment research team is focused on preparing for that eventuality. The team was launched in July this year and is co-led by Ilya Sutskever, who has kept a low profile since Sam Altman\u2019s <a href=\"https:\/\/dailyai.com\/es\/2023\/11\/sam-altman-and-greg-brockman-join-microsoft-in-new-chapter-for-agi\/\">firing and subsequent rehiring<\/a>.<\/p>\n<p>The rationale for the project was put into sobering context by OpenAI, which acknowledged that \"currently, we don\u2019t have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue.\"<\/p>\n<p>But how do you prepare to control something that doesn\u2019t exist yet? 
The research team has just published its <a href=\"https:\/\/cdn.openai.com\/papers\/weak-to-strong-generalization.pdf\" target=\"_blank\" rel=\"noopener\">first experimental results<\/a> as it tries to do just that.<\/p>\n<h2>Weak-to-strong generalization<\/h2>\n<p>For now, humans still occupy the stronger intelligence position relative to AI models. Models like GPT-4 are steered, or aligned, using reinforcement learning from human feedback (RLHF). When a model\u2019s output is undesirable, the human trainer tells it, \"Don\u2019t do that,\" and rewards the model with an affirmation of the desired performance.<\/p>\n<p>For now this works because we have a fairly good understanding of how current models work and we are smarter than they are. When future human data scientists have to train a superintelligent AI, the roles of intelligence will be reversed.<\/p>\n<p>To simulate this situation, OpenAI decided to use older GPT models, like GPT-2, to train more powerful models, like GPT-4. GPT-2 would stand in for the future human trainer trying to fine-tune a smarter model.<\/p>\n<figure id=\"attachment_8403\" aria-describedby=\"caption-attachment-8403\" style=\"width: 1936px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-8403\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision.webp\" alt=\"\" width=\"1936\" height=\"950\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision.webp 1936w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision-300x147.webp 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision-1024x502.webp 1024w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision-768x377.webp 768w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision-1536x754.webp 1536w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision-370x182.webp 370w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision-800x393.webp 800w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision-740x363.webp 740w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision-20x10.webp 20w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision-1600x785.webp 1600w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision-1320x648.webp 1320w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision-98x48.webp 98w\" sizes=\"auto, (max-width: 1936px) 100vw, 1936px\" \/><figcaption id=\"caption-attachment-8403\" class=\"wp-caption-text\">AI training scenarios: current, future, and OpenAI\u2019s simulation. 
Source: OpenAI<\/figcaption><\/figure>\n<p>The research paper explained that \"akin to the problem of humans supervising superhuman models, our setup is an instance of what we call the weak-to-strong learning problem.\"<\/p>\n<p>In the experiment, OpenAI used GPT-2 to fine-tune GPT-4 on NLP tasks, chess puzzles, and reward modeling. They then tested GPT-4\u2019s performance on these tasks and compared it with a GPT-4 model that had been trained on the ground truth, or correct answers to the tasks.<\/p>\n<p>The results were promising in that when GPT-4 was trained by the weaker model, it was able to generalize strongly and outperform the weaker model. This showed that a weaker intelligence could guide a stronger one, which could then build on that training.<\/p>\n<p>Think of it like a third-grader teaching math to a really smart kid who then goes on to do 12th-grade math based on that initial training.<\/p>\n<h2>Performance gap<\/h2>\n<p>The researchers found that because GPT-4 was being trained by a less intelligent model, the process capped its performance at the equivalent of a properly trained GPT-3.5 model.<\/p>\n<p>This is because the smarter model learns some of the mistakes or poor reasoning of its weaker supervisor. 
This seems to indicate that using humans to train a superintelligent AI would keep the AI from performing to its full potential.<\/p>\n<figure id=\"attachment_8402\" aria-describedby=\"caption-attachment-8402\" style=\"width: 1376px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-8402\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-results.png\" alt=\"\" width=\"1376\" height=\"506\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-results.png 1376w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-results-300x110.png 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-results-1024x377.png 1024w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-results-768x282.png 768w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-results-370x136.png 370w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-results-800x294.png 800w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-results-740x272.png 740w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-results-20x7.png 20w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-results-1320x485.png 1320w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-results-131x48.png 131w\" sizes=\"auto, (max-width: 1376px) 100vw, 1376px\" \/><figcaption id=\"caption-attachment-8402\" class=\"wp-caption-text\">Comparison of the performance of GPT-2, GPT-4 trained by GPT-2, GPT-4 more effectively trained by GPT-2, and GPT-4 trained on correct answers.<\/figcaption><\/figure>\n<p>The researchers suggested using intermediate models in a bootstrapping approach. 
The paper explained, \"instead of directly aligning very superhuman models, we could first align an only slightly superhuman model, use that to align an even smarter model, and so on.\"<\/p>\n<p>OpenAI is throwing a lot of resources at this project. The research team says it has dedicated \"20% of the compute we\u2019ve secured to date over the next four years to solving the problem of superintelligence alignment.\"<\/p>\n<p>It is also offering $10 million in grants to individuals or organizations that want to help with the research.<\/p>\n<p>They had better figure it out soon. A superintelligent AI could write a million lines of complicated code that no human programmer could understand. How would we know whether the generated code was safe or not? Let\u2019s hope we don\u2019t find out the hard way.<\/p>","protected":false},"excerpt":{"rendered":"<p>Current AI models are capable of doing a lot of unsafe or undesirable things. Human supervision and feedback keep these models aligned, but what will happen when these models become smarter than us? OpenAI says it\u2019s possible that we could see the creation of an AI that is smarter than humans in the next 10 years. Along with the increased intelligence comes the risk that humans may no longer be capable of supervising these models. OpenAI\u2019s Superalignment research team is focused on preparing for that eventuality. 
The team was launched in July this year and is co-led by Ilya Sutskever.<\/p>","protected":false},"author":6,"featured_media":8404,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[84],"tags":[163,118,93],"class_list":["post-8400","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-industry","tag-ai-risks","tag-llms","tag-openai"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>OpenAI releases first results from Superalignment project | DailyAI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/es\/2023\/12\/openai-releases-first-results-from-superalignment-project\/\" \/>\n<meta property=\"og:locale\" content=\"es_ES\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"OpenAI releases first results from Superalignment project | DailyAI\" \/>\n<meta property=\"og:description\" content=\"Current AI models are capable of doing a lot of unsafe or undesirable things. Human supervision and feedback keep these models aligned but what will happen when these models become smarter than us? OpenAI says it\u2019s possible that we could see the creation of an AI that is smarter than humans in the next 10 years. Along with the increased intelligence comes the risk that humans may no longer be capable of supervising these models. OpenAI\u2019s Superalignment research team is focused on preparing for that eventuality. 
The team was launched in July this year and is co-led by Ilya Sutskever\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/es\/2023\/12\/openai-releases-first-results-from-superalignment-project\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2023-12-18T09:14:27+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/OpenAI-safety.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1000\" \/>\n\t<meta property=\"og:image:height\" content=\"667\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Eugene van der Watt\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Escrito por\" \/>\n\t<meta name=\"twitter:data1\" content=\"Eugene van der Watt\" \/>\n\t<meta name=\"twitter:label2\" content=\"Tiempo de lectura\" \/>\n\t<meta name=\"twitter:data2\" content=\"4 minutos\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/openai-releases-first-results-from-superalignment-project\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/openai-releases-first-results-from-superalignment-project\\\/\"},\"author\":{\"name\":\"Eugene van der Watt\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\"},\"headline\":\"OpenAI releases first results from Superalignment 
project\",\"datePublished\":\"2023-12-18T09:14:27+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/openai-releases-first-results-from-superalignment-project\\\/\"},\"wordCount\":727,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/openai-releases-first-results-from-superalignment-project\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/OpenAI-safety.jpg\",\"keywords\":[\"AI risks\",\"LLMS\",\"OpenAI\"],\"articleSection\":[\"Industry\"],\"inLanguage\":\"es\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/openai-releases-first-results-from-superalignment-project\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/openai-releases-first-results-from-superalignment-project\\\/\",\"name\":\"OpenAI releases first results from Superalignment project | DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/openai-releases-first-results-from-superalignment-project\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/openai-releases-first-results-from-superalignment-project\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/OpenAI-safety.jpg\",\"datePublished\":\"2023-12-18T09:14:27+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/openai-releases-first-results-from-superalignment-project\\\/#breadcrumb\"},\"inLanguage\":\"es\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/openai-releases-first-results-from-superalignment-project\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"es\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/openai-releases-first-results-from-superalignment-project\\\/#p
rimaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/OpenAI-safety.jpg\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/OpenAI-safety.jpg\",\"width\":1000,\"height\":667},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/openai-releases-first-results-from-superalignment-project\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"OpenAI releases first results from Superalignment project\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"es\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"es\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@DailyAIOf
ficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\",\"name\":\"Eugene van der Watt\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"es\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"caption\":\"Eugene van der Watt\"},\"description\":\"Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.\",\"sameAs\":[\"www.linkedin.com\\\/in\\\/eugene-van-der-watt-16828119\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/es\\\/author\\\/eugene\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"OpenAI publica los primeros resultados del proyecto Superalignment | DailyAI","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/es\/2023\/12\/openai-releases-first-results-from-superalignment-project\/","og_locale":"es_ES","og_type":"article","og_title":"OpenAI releases first results from Superalignment project | DailyAI","og_description":"Current AI models are capable of doing a lot of unsafe or undesirable things. Human supervision and feedback keep these models aligned but what will happen when these models become smarter than us? OpenAI says it\u2019s possible that we could see the creation of an AI that is smarter than humans in the next 10 years. Along with the increased intelligence comes the risk that humans may no longer be capable of supervising these models. 
OpenAI\u2019s Superalignment research team is focused on preparing for that eventuality. The team was launched in July this year and is co-led by Ilya Sutskever","og_url":"https:\/\/dailyai.com\/es\/2023\/12\/openai-releases-first-results-from-superalignment-project\/","og_site_name":"DailyAI","article_published_time":"2023-12-18T09:14:27+00:00","og_image":[{"width":1000,"height":667,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/OpenAI-safety.jpg","type":"image\/jpeg"}],"author":"Eugene van der Watt","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"Escrito por":"Eugene van der Watt","Tiempo de lectura":"4 minutos"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2023\/12\/openai-releases-first-results-from-superalignment-project\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2023\/12\/openai-releases-first-results-from-superalignment-project\/"},"author":{"name":"Eugene van der Watt","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa"},"headline":"OpenAI releases first results from Superalignment project","datePublished":"2023-12-18T09:14:27+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2023\/12\/openai-releases-first-results-from-superalignment-project\/"},"wordCount":727,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2023\/12\/openai-releases-first-results-from-superalignment-project\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/OpenAI-safety.jpg","keywords":["AI risks","LLMS","OpenAI"],"articleSection":["Industry"],"inLanguage":"es"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2023\/12\/openai-releases-first-results-from-superalignment-project\/","url":"https:\/\/dailyai.com\/2023\/12\/openai-releases-first-results-from-superalignment-project\/","name":"OpenAI publica los 
primeros resultados del proyecto Superalignment | DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2023\/12\/openai-releases-first-results-from-superalignment-project\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2023\/12\/openai-releases-first-results-from-superalignment-project\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/OpenAI-safety.jpg","datePublished":"2023-12-18T09:14:27+00:00","breadcrumb":{"@id":"https:\/\/dailyai.com\/2023\/12\/openai-releases-first-results-from-superalignment-project\/#breadcrumb"},"inLanguage":"es","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2023\/12\/openai-releases-first-results-from-superalignment-project\/"]}]},{"@type":"ImageObject","inLanguage":"es","@id":"https:\/\/dailyai.com\/2023\/12\/openai-releases-first-results-from-superalignment-project\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/OpenAI-safety.jpg","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/OpenAI-safety.jpg","width":1000,"height":667},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2023\/12\/openai-releases-first-results-from-superalignment-project\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"OpenAI releases first results from Superalignment project"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DailyAI","description":"Su dosis diaria de noticias sobre 
IA","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"es"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DailyAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"es","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa","name":"Eugene van der Watt","image":{"@type":"ImageObject","inLanguage":"es","@id":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","caption":"Eugene van der Watt"},"description":"Eugene es ingeniero electr\u00f3nico y le encanta todo lo relacionado con la tecnolog\u00eda. 
Cuando descansa de consumir noticias sobre IA, lo encontrar\u00e1 jugando al billar.","sameAs":["www.linkedin.com\/in\/eugene-van-der-watt-16828119"],"url":"https:\/\/dailyai.com\/es\/author\/eugene\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/es\/wp-json\/wp\/v2\/posts\/8400","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/es\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/es\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/es\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/es\/wp-json\/wp\/v2\/comments?post=8400"}],"version-history":[{"count":3,"href":"https:\/\/dailyai.com\/es\/wp-json\/wp\/v2\/posts\/8400\/revisions"}],"predecessor-version":[{"id":8406,"href":"https:\/\/dailyai.com\/es\/wp-json\/wp\/v2\/posts\/8400\/revisions\/8406"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/es\/wp-json\/wp\/v2\/media\/8404"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/es\/wp-json\/wp\/v2\/media?parent=8400"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/es\/wp-json\/wp\/v2\/categories?post=8400"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/es\/wp-json\/wp\/v2\/tags?post=8400"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}