{"id":8400,"date":"2023-12-18T09:14:27","date_gmt":"2023-12-18T09:14:27","guid":{"rendered":"https:\/\/dailyai.com\/?p=8400"},"modified":"2023-12-18T09:14:27","modified_gmt":"2023-12-18T09:14:27","slug":"openai-releases-first-results-from-superalignment-project","status":"publish","type":"post","link":"https:\/\/dailyai.com\/pt\/2023\/12\/openai-releases-first-results-from-superalignment-project\/","title":{"rendered":"OpenAI releases first results from Superalignment project"},"content":{"rendered":"<p><strong>Current AI models are capable of doing a lot of unsafe or undesirable things. Human supervision and feedback keep these models aligned, but what will happen when these models become smarter than us?<\/strong><\/p>\n<p>OpenAI says it's possible that we could see the creation of an AI that is smarter than humans within the next 10 years. Along with the increased intelligence comes the risk that humans may no longer be capable of supervising these models.<\/p>\n<p>OpenAI's Superalignment research team is focused on preparing for that eventuality. The team was launched in July this year and is co-led by Ilya Sutskever, who has kept a low profile since the drama surrounding Sam Altman's <a href=\"https:\/\/dailyai.com\/pt\/2023\/11\/sam-altman-and-greg-brockman-join-microsoft-in-new-chapter-for-agi\/\">firing and subsequent rehiring<\/a>.<\/p>\n<p>The reasoning behind the project was put in sobering context by OpenAI, which acknowledged that \"currently, we don't have a solution for steering or controlling a potentially superintelligent AI and preventing it from going rogue\".<\/p>\n<p>But how do we prepare to control something that doesn't exist yet? 
The research team has just released its <a href=\"https:\/\/cdn.openai.com\/papers\/weak-to-strong-generalization.pdf\" target=\"_blank\" rel=\"noopener\">first experimental results<\/a> as it attempts to do exactly that.<\/p>\n<h2>Weak-to-strong generalization<\/h2>\n<p>For now, humans still hold the stronger intelligence position relative to AI models. Models like GPT-4 are steered, or aligned, using Reinforcement Learning from Human Feedback (RLHF). When a model's output is undesirable, the human trainer tells the model \"Don't do that\" and rewards the model with an affirmation of desired performance.<\/p>\n<p>For now this works because we have a good understanding of how current models work and because we are smarter than they are. When future human data scientists have to train a superintelligent AI, the intelligence roles will be reversed.<\/p>\n<p>To simulate this situation, OpenAI decided to use older GPT models, like GPT-2, to train more powerful models, like GPT-4. 
GPT-2 would simulate the future human trainer trying to fine-tune a smarter model.<\/p>\n<figure id=\"attachment_8403\" aria-describedby=\"caption-attachment-8403\" style=\"width: 1936px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-8403\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision.webp\" alt=\"\" width=\"1936\" height=\"950\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision.webp 1936w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision-300x147.webp 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision-1024x502.webp 1024w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision-768x377.webp 768w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision-1536x754.webp 1536w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision-370x182.webp 370w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision-800x393.webp 800w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision-740x363.webp 740w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision-20x10.webp 20w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision-1600x785.webp 1600w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision-1320x648.webp 1320w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision-98x48.webp 98w\" sizes=\"auto, (max-width: 1936px) 100vw, 1936px\" \/><figcaption id=\"caption-attachment-8403\" class=\"wp-caption-text\">AI training scenarios: current, future, and OpenAI's simulation. 
Source: OpenAI<\/figcaption><\/figure>\n<p>The research paper explains that \"just as with the problem of humans supervising superhuman models, our setup is an instance of what we call the weak-to-strong learning problem\".<\/p>\n<p>In the experiment, OpenAI used GPT-2 to fine-tune GPT-4 on NLP tasks, chess puzzles, and reward modeling. It then tested GPT-4's performance on these tasks and compared it with a GPT-4 model that had been trained on the ground truth, or correct answers, for the tasks.<\/p>\n<p>The results were promising in that when GPT-4 was trained by the weaker model, it was able to generalize strongly and outperform the weaker model. This showed that a weaker intelligence could give guidance to a stronger intelligence, which could then build on that training.<\/p>\n<p>Think of it as a third grader teaching math to a really smart kid, who then goes on to do 12th-grade math based on that initial training.<\/p>\n<h2>Performance gap<\/h2>\n<p>The researchers found that because GPT-4 was being trained by a less intelligent model, the process capped its performance at the equivalent of a properly trained GPT-3.5 model.<\/p>\n<p>This is because the smarter model learns some of the mistakes or flawed thought processes of its weaker supervisor. 
This seems to indicate that using humans to train a superintelligent AI would prevent the AI from reaching its full potential.<\/p>\n<figure id=\"attachment_8402\" aria-describedby=\"caption-attachment-8402\" style=\"width: 1376px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-8402\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-results.png\" alt=\"\" width=\"1376\" height=\"506\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-results.png 1376w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-results-300x110.png 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-results-1024x377.png 1024w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-results-768x282.png 768w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-results-370x136.png 370w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-results-800x294.png 800w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-results-740x272.png 740w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-results-20x7.png 20w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-results-1320x485.png 1320w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-results-131x48.png 131w\" sizes=\"auto, (max-width: 1376px) 100vw, 1376px\" \/><figcaption id=\"caption-attachment-8402\" class=\"wp-caption-text\">Comparison of the performance of GPT-2, GPT-4 trained by GPT-2, GPT-4 more effectively trained by GPT-2, and GPT-4 trained on correct answers.<\/figcaption><\/figure>\n<p>The researchers suggested using intermediate models in a bootstrapping approach. 
The paper explains that \"instead of directly aligning very superhuman models, we could first align an only slightly superhuman model, use that to align an even smarter model, and so on\".<\/p>\n<p>OpenAI is committing a lot of resources to this project. The research team says it has dedicated \"20% of the compute we've secured to date over the next four years to solving the problem of superintelligence alignment\".<\/p>\n<p>It is also offering $10 million in grants to individuals or organizations that want to help with the research.<\/p>\n<p>Let's hope they figure this out soon. A superintelligent AI could potentially write a million lines of complicated code that no human programmer could understand. How would we know whether the generated code was safe to run? Let's hope we don't find out the hard way.<\/p>","protected":false},"excerpt":{"rendered":"<p>Current AI models are capable of doing a lot of unsafe or undesirable things. Human supervision and feedback keep these models aligned but what will happen when these models become smarter than us? OpenAI says it's possible that we could see the creation of an AI that is smarter than humans in the next 10 years. Along with the increased intelligence comes the risk that humans may no longer be capable of supervising these models. OpenAI's Superalignment research team is focused on preparing for that eventuality. 
The team was launched in July this year and is co-led by Ilya Sutskever<\/p>","protected":false},"author":6,"featured_media":8404,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[84],"tags":[163,118,93],"class_list":["post-8400","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-industry","tag-ai-risks","tag-llms","tag-openai"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>OpenAI releases first results from Superalignment project | DailyAI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/pt\/2023\/12\/openai-releases-first-results-from-superalignment-project\/\" \/>\n<meta property=\"og:locale\" content=\"pt_PT\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"OpenAI releases first results from Superalignment project | DailyAI\" \/>\n<meta property=\"og:description\" content=\"Current AI models are capable of doing a lot of unsafe or undesirable things. Human supervision and feedback keep these models aligned but what will happen when these models become smarter than us? OpenAI says it\u2019s possible that we could see the creation of an AI that is smarter than humans in the next 10 years. Along with the increased intelligence comes the risk that humans may no longer be capable of supervising these models. OpenAI\u2019s Superalignment research team is focused on preparing for that eventuality. 
The team was launched in July this year and is co-led by Ilya Sutskever\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/pt\/2023\/12\/openai-releases-first-results-from-superalignment-project\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2023-12-18T09:14:27+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/OpenAI-safety.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1000\" \/>\n\t<meta property=\"og:image:height\" content=\"667\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Eugene van der Watt\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Eugene van der Watt\" \/>\n\t<meta name=\"twitter:label2\" content=\"Estimated reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"4 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/openai-releases-first-results-from-superalignment-project\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/openai-releases-first-results-from-superalignment-project\\\/\"},\"author\":{\"name\":\"Eugene van der Watt\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\"},\"headline\":\"OpenAI releases first results from Superalignment 
project\",\"datePublished\":\"2023-12-18T09:14:27+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/openai-releases-first-results-from-superalignment-project\\\/\"},\"wordCount\":727,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/openai-releases-first-results-from-superalignment-project\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/OpenAI-safety.jpg\",\"keywords\":[\"AI risks\",\"LLMS\",\"OpenAI\"],\"articleSection\":[\"Industry\"],\"inLanguage\":\"pt-PT\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/openai-releases-first-results-from-superalignment-project\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/openai-releases-first-results-from-superalignment-project\\\/\",\"name\":\"OpenAI releases first results from Superalignment project | DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/openai-releases-first-results-from-superalignment-project\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/openai-releases-first-results-from-superalignment-project\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/OpenAI-safety.jpg\",\"datePublished\":\"2023-12-18T09:14:27+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/openai-releases-first-results-from-superalignment-project\\\/#breadcrumb\"},\"inLanguage\":\"pt-PT\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/openai-releases-first-results-from-superalignment-project\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-PT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/openai-releases-first-results-from-superalignment-proj
ect\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/OpenAI-safety.jpg\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/OpenAI-safety.jpg\",\"width\":1000,\"height\":667},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/openai-releases-first-results-from-superalignment-project\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"OpenAI releases first results from Superalignment project\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"pt-PT\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-PT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.co
m\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\",\"name\":\"Eugene van der Watt\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-PT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"caption\":\"Eugene van der Watt\"},\"description\":\"Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.\",\"sameAs\":[\"www.linkedin.com\\\/in\\\/eugene-van-der-watt-16828119\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/pt\\\/author\\\/eugene\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->"}