{"id":8400,"date":"2023-12-18T09:14:27","date_gmt":"2023-12-18T09:14:27","guid":{"rendered":"https:\/\/dailyai.com\/?p=8400"},"modified":"2023-12-18T09:14:27","modified_gmt":"2023-12-18T09:14:27","slug":"openai-releases-first-results-from-superalignment-project","status":"publish","type":"post","link":"https:\/\/dailyai.com\/fr\/2023\/12\/openai-releases-first-results-from-superalignment-project\/","title":{"rendered":"OpenAI releases first results from Superalignment project"},"content":{"rendered":"<p><strong>Current AI models are capable of doing a lot of unsafe or undesirable things. Human supervision and feedback keep these models aligned, but what will happen when these models become smarter than we are?<\/strong><\/p>\n<p>OpenAI says it's possible that we could see the creation of an AI smarter than humans within the next 10 years. With that increased intelligence comes the risk that humans may no longer be capable of supervising these models.<\/p>\n<p>OpenAI's Superalignment research team is focused on preparing for that eventuality. The team was launched in July this year and is co-led by Ilya Sutskever, who has kept a low profile since Sam Altman's <a href=\"https:\/\/dailyai.com\/fr\/2023\/11\/sam-altman-and-greg-brockman-join-microsoft-in-new-chapter-for-agi\/\">firing and subsequent rehiring<\/a>.<\/p>\n<p>OpenAI put the project's rationale in sobering context, acknowledging that \"currently, we don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue\".<\/p>\n<p>But how do you prepare to control something that doesn't exist yet? The research team has just published its <a href=\"https:\/\/cdn.openai.com\/papers\/weak-to-strong-generalization.pdf\" target=\"_blank\" rel=\"noopener\">first experimental results<\/a> from attempting to do exactly that.<\/p>\n<h2>Weak-to-strong generalization<\/h2>\n<p>For now, humans still hold the position of stronger intelligence relative to AI models. Models like GPT-4 are steered, or aligned, using Reinforcement Learning from Human Feedback (RLHF). When a model's output is undesirable, the human trainer tells it \"Don't do that\" and rewards it when it produces the desired output.<\/p>\n<p>This works for now because we have a good understanding of how current models function and because we are smarter than they are. When future human data scientists have to train a superintelligent AI, those roles will be reversed.<\/p>\n<p>To simulate this situation, OpenAI decided to use older GPT models, like GPT-2, to train more powerful models, like GPT-4. 
GPT-2 would simulate the future human trainer trying to fine-tune a more intelligent model.<\/p>\n<figure id=\"attachment_8403\" aria-describedby=\"caption-attachment-8403\" style=\"width: 1936px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-8403\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision.webp\" alt=\"\" width=\"1936\" height=\"950\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision.webp 1936w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision-300x147.webp 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision-1024x502.webp 1024w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision-768x377.webp 768w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision-1536x754.webp 1536w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision-370x182.webp 370w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision-800x393.webp 800w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision-740x363.webp 740w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision-20x10.webp 20w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision-1600x785.webp 1600w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision-1320x648.webp 1320w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision-98x48.webp 98w\" sizes=\"auto, (max-width: 1936px) 100vw, 1936px\" \/><figcaption id=\"caption-attachment-8403\" class=\"wp-caption-text\">AI training scenarios: current, future, and OpenAI's simulation. 
Source: OpenAI<\/figcaption><\/figure>\n<p>The research paper explains that \"similar to the problem of humans supervising superhuman models, our setup is an instance of what we call the weak-to-strong learning problem\".<\/p>\n<p>In the experiment, OpenAI used GPT-2 to fine-tune GPT-4 on NLP tasks, chess puzzles, and reward modeling. The researchers then tested how well GPT-4 performed these tasks and compared the results with those of a GPT-4 model trained on the \"ground truth\", or correct answers to the tasks.<\/p>\n<p>The results are promising: when GPT-4 was trained by the weaker model, it was able to generalize strongly and outperform its weak supervisor. This shows that a weaker intelligence can guide a stronger one, which can then build on that training.<\/p>\n<p>It's a bit like a third grader teaching math to a really smart kid who, based on that initial teaching, can then go on to do twelfth-grade math.<\/p>\n<h2>Performance gap<\/h2>\n<p>The researchers found that because GPT-4 was trained by a less intelligent model, the process capped its performance at the equivalent of a properly trained GPT-3.5 model.<\/p>\n<p>This is because the smarter model learns some of the mistakes or flawed thought processes of its weaker supervisor. 
This suggests that using humans to train a superintelligent AI would keep the AI from reaching its full potential.<\/p>\n<figure id=\"attachment_8402\" aria-describedby=\"caption-attachment-8402\" style=\"width: 1376px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-8402\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-results.png\" alt=\"\" width=\"1376\" height=\"506\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-results.png 1376w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-results-300x110.png 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-results-1024x377.png 1024w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-results-768x282.png 768w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-results-370x136.png 370w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-results-800x294.png 800w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-results-740x272.png 740w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-results-20x7.png 20w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-results-1320x485.png 1320w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-results-131x48.png 131w\" sizes=\"auto, (max-width: 1376px) 100vw, 1376px\" \/><figcaption id=\"caption-attachment-8402\" class=\"wp-caption-text\">Performance comparison of GPT-2, GPT-4 trained by GPT-2, GPT-4 trained more effectively by GPT-2, and GPT-4 trained on the correct answers.<\/figcaption><\/figure>\n<p>The researchers suggested using intermediate models as part of a bootstrapping approach. 
The paper explains that \"instead of directly aligning very superhuman models, we could first align an only slightly superhuman model, use that to align an even smarter model, and so on\".<\/p>\n<p>OpenAI is committing a lot of resources to the project. The research team says it is dedicating \"20% of the compute we've secured to date over the next four years to solving the problem of superintelligence alignment\".<\/p>\n<p>It is also offering $10 million in grants to individuals or organizations that want to contribute to the research.<\/p>\n<p>They had better get to it quickly. A superintelligent AI could potentially write a million lines of complicated code that no human programmer could understand. How would we know whether the generated code was safe to run or not? Let's hope we don't find out the hard way.<\/p>","protected":false},"excerpt":{"rendered":"<p>Current AI models are capable of doing a lot of unsafe or undesirable things. Human supervision and feedback keep these models aligned, but what will happen when these models become smarter than we are? OpenAI says it's possible that we could see the creation of an AI smarter than humans within the next 10 years. With that increased intelligence comes the risk that humans may no longer be capable of supervising these models. OpenAI's Superalignment research team is focused on preparing for that eventuality. 
The team was launched in July this year and is co-led by Ilya Sutskever<\/p>","protected":false},"author":6,"featured_media":8404,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[84],"tags":[163,118,93],"class_list":["post-8400","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-industry","tag-ai-risks","tag-llms","tag-openai"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>OpenAI releases first results from Superalignment project | DailyAI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/fr\/2023\/12\/openai-releases-first-results-from-superalignment-project\/\" \/>\n<meta property=\"og:locale\" content=\"fr_FR\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"OpenAI releases first results from Superalignment project | DailyAI\" \/>\n<meta property=\"og:description\" content=\"Current AI models are capable of doing a lot of unsafe or undesirable things. Human supervision and feedback keep these models aligned but what will happen when these models become smarter than us? OpenAI says it\u2019s possible that we could see the creation of an AI that is smarter than humans in the next 10 years. Along with the increased intelligence comes the risk that humans may no longer be capable of supervising these models. OpenAI\u2019s Superalignment research team is focused on preparing for that eventuality. 
The team was launched in July this year and is co-led by Ilya Sutskever\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/fr\/2023\/12\/openai-releases-first-results-from-superalignment-project\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2023-12-18T09:14:27+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/OpenAI-safety.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1000\" \/>\n\t<meta property=\"og:image:height\" content=\"667\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Eugene van der Watt\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"\u00c9crit par\" \/>\n\t<meta name=\"twitter:data1\" content=\"Eugene van der Watt\" \/>\n\t<meta name=\"twitter:label2\" content=\"Dur\u00e9e de lecture estim\u00e9e\" \/>\n\t<meta name=\"twitter:data2\" content=\"4 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/openai-releases-first-results-from-superalignment-project\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/openai-releases-first-results-from-superalignment-project\\\/\"},\"author\":{\"name\":\"Eugene van der Watt\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\"},\"headline\":\"OpenAI releases first results from Superalignment 
project\",\"datePublished\":\"2023-12-18T09:14:27+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/openai-releases-first-results-from-superalignment-project\\\/\"},\"wordCount\":727,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/openai-releases-first-results-from-superalignment-project\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/OpenAI-safety.jpg\",\"keywords\":[\"AI risks\",\"LLMS\",\"OpenAI\"],\"articleSection\":[\"Industry\"],\"inLanguage\":\"fr-FR\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/openai-releases-first-results-from-superalignment-project\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/openai-releases-first-results-from-superalignment-project\\\/\",\"name\":\"OpenAI releases first results from Superalignment project | DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/openai-releases-first-results-from-superalignment-project\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/openai-releases-first-results-from-superalignment-project\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/OpenAI-safety.jpg\",\"datePublished\":\"2023-12-18T09:14:27+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/openai-releases-first-results-from-superalignment-project\\\/#breadcrumb\"},\"inLanguage\":\"fr-FR\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/openai-releases-first-results-from-superalignment-project\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"fr-FR\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/openai-releases-first-results-from-superalignment-proj
ect\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/OpenAI-safety.jpg\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/OpenAI-safety.jpg\",\"width\":1000,\"height\":667},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/openai-releases-first-results-from-superalignment-project\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"OpenAI releases first results from Superalignment project\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"fr-FR\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"fr-FR\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.co
m\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\",\"name\":\"Eugene van der Watt\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"fr-FR\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"caption\":\"Eugene van der Watt\"},\"description\":\"Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.\",\"sameAs\":[\"www.linkedin.com\\\/in\\\/eugene-van-der-watt-16828119\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/fr\\\/author\\\/eugene\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"OpenAI publie les premiers r\u00e9sultats du projet Superalignment | DailyAI","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/fr\/2023\/12\/openai-releases-first-results-from-superalignment-project\/","og_locale":"fr_FR","og_type":"article","og_title":"OpenAI releases first results from Superalignment project | DailyAI","og_description":"Current AI models are capable of doing a lot of unsafe or undesirable things. Human supervision and feedback keep these models aligned but what will happen when these models become smarter than us? OpenAI says it\u2019s possible that we could see the creation of an AI that is smarter than humans in the next 10 years. Along with the increased intelligence comes the risk that humans may no longer be capable of supervising these models. 
OpenAI\u2019s Superalignment research team is focused on preparing for that eventuality. The team was launched in July this year and is co-led by Ilya Sutskever","og_url":"https:\/\/dailyai.com\/fr\/2023\/12\/openai-releases-first-results-from-superalignment-project\/","og_site_name":"DailyAI","article_published_time":"2023-12-18T09:14:27+00:00","og_image":[{"width":1000,"height":667,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/OpenAI-safety.jpg","type":"image\/jpeg"}],"author":"Eugene van der Watt","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"\u00c9crit par":"Eugene van der Watt","Dur\u00e9e de lecture estim\u00e9e":"4 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2023\/12\/openai-releases-first-results-from-superalignment-project\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2023\/12\/openai-releases-first-results-from-superalignment-project\/"},"author":{"name":"Eugene van der Watt","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa"},"headline":"OpenAI releases first results from Superalignment project","datePublished":"2023-12-18T09:14:27+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2023\/12\/openai-releases-first-results-from-superalignment-project\/"},"wordCount":727,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2023\/12\/openai-releases-first-results-from-superalignment-project\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/OpenAI-safety.jpg","keywords":["AI 
risks","LLMS","OpenAI"],"articleSection":["Industry"],"inLanguage":"fr-FR"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2023\/12\/openai-releases-first-results-from-superalignment-project\/","url":"https:\/\/dailyai.com\/2023\/12\/openai-releases-first-results-from-superalignment-project\/","name":"OpenAI publie les premiers r\u00e9sultats du projet Superalignment | DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2023\/12\/openai-releases-first-results-from-superalignment-project\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2023\/12\/openai-releases-first-results-from-superalignment-project\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/OpenAI-safety.jpg","datePublished":"2023-12-18T09:14:27+00:00","breadcrumb":{"@id":"https:\/\/dailyai.com\/2023\/12\/openai-releases-first-results-from-superalignment-project\/#breadcrumb"},"inLanguage":"fr-FR","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2023\/12\/openai-releases-first-results-from-superalignment-project\/"]}]},{"@type":"ImageObject","inLanguage":"fr-FR","@id":"https:\/\/dailyai.com\/2023\/12\/openai-releases-first-results-from-superalignment-project\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/OpenAI-safety.jpg","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/OpenAI-safety.jpg","width":1000,"height":667},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2023\/12\/openai-releases-first-results-from-superalignment-project\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"OpenAI releases first results from Superalignment project"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DailyAI","description":"Votre dose quotidienne de nouvelles sur 
l'IA","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"fr-FR"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DailyAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"fr-FR","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa","name":"Eug\u00e8ne van der Watt","image":{"@type":"ImageObject","inLanguage":"fr-FR","@id":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","caption":"Eugene van der Watt"},"description":"Eugene a une formation d'ing\u00e9nieur en \u00e9lectronique et adore tout ce qui touche \u00e0 la technologie. 
Lorsqu'il fait une pause dans sa consommation d'informations sur l'IA, vous le trouverez \u00e0 la table de snooker.","sameAs":["www.linkedin.com\/in\/eugene-van-der-watt-16828119"],"url":"https:\/\/dailyai.com\/fr\/author\/eugene\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/posts\/8400","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/comments?post=8400"}],"version-history":[{"count":3,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/posts\/8400\/revisions"}],"predecessor-version":[{"id":8406,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/posts\/8400\/revisions\/8406"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/media\/8404"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/media?parent=8400"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/categories?post=8400"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/tags?post=8400"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}