{"id":8400,"date":"2023-12-18T09:14:27","date_gmt":"2023-12-18T09:14:27","guid":{"rendered":"https:\/\/dailyai.com\/?p=8400"},"modified":"2023-12-18T09:14:27","modified_gmt":"2023-12-18T09:14:27","slug":"openai-releases-first-results-from-superalignment-project","status":"publish","type":"post","link":"https:\/\/dailyai.com\/de\/2023\/12\/openai-releases-first-results-from-superalignment-project\/","title":{"rendered":"OpenAI releases first results from Superalignment project"},"content":{"rendered":"<p><strong>Current AI models are capable of doing a lot of unsafe or undesirable things. Human supervision and feedback keep these models on track, but what will happen when they become smarter than we are?<\/strong><\/p>\n<p>OpenAI considers it possible that an AI more intelligent than humans will be developed within the next 10 years. With that increased intelligence comes the risk that humans will no longer be capable of supervising these models.<\/p>\n<p>OpenAI's Superalignment research team is focused on preparing for that eventuality. The team was launched in July this year and is co-led by Ilya Sutskever, who has kept a low profile since Sam Altman's <a href=\"https:\/\/dailyai.com\/de\/2023\/11\/sam-altman-and-greg-brockman-join-microsoft-in-new-chapter-for-agi\/\">firing and subsequent reinstatement<\/a>.<\/p>\n<p>OpenAI framed the motivation behind the project in sobering terms: \"Currently, we don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue.\"<\/p>\n<p>But how do you prepare to control something that doesn't exist yet? 
The research team has just released its <a href=\"https:\/\/cdn.openai.com\/papers\/weak-to-strong-generalization.pdf\" target=\"_blank\" rel=\"noopener\">first experimental results<\/a> from trying to do just that.<\/p>\n<h2>Weak-to-strong generalization<\/h2>\n<p>For now, humans are still smarter than AI models. Models like GPT-4 are steered, or aligned, using Reinforcement Learning from Human Feedback (RLHF). When a model's output is undesirable, the human trainer tells it \"Don't do that\" and reinforces desirable output with a reward.<\/p>\n<p>This works at the moment because we understand reasonably well how current models work, and we are smarter than they are. When future human data scientists need to train a superintelligent AI, those roles of intelligence will be reversed.<\/p>\n<p>To simulate this situation, OpenAI decided to use older GPT models like GPT-2 to train more capable models like GPT-4. 
GPT-2 would stand in for the future human trainer trying to fine-tune a more intelligent model.<\/p>\n<figure id=\"attachment_8403\" aria-describedby=\"caption-attachment-8403\" style=\"width: 1936px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-8403\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision.webp\" alt=\"\" width=\"1936\" height=\"950\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision.webp 1936w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision-300x147.webp 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision-1024x502.webp 1024w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision-768x377.webp 768w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision-1536x754.webp 1536w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision-370x182.webp 370w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision-800x393.webp 800w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision-740x363.webp 740w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision-20x10.webp 20w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision-1600x785.webp 1600w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision-1320x648.webp 1320w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision-98x48.webp 98w\" sizes=\"auto, (max-width: 1936px) 100vw, 1936px\" \/><figcaption id=\"caption-attachment-8403\" class=\"wp-caption-text\">AI training scenarios: current, future, and OpenAI's simulation. 
Source: OpenAI<\/figcaption><\/figure>\n<p>The research paper states: \"Just like the problem of humans supervising superhuman models, our setup is an instance of the weak-to-strong learning problem.\"<\/p>\n<p>In the experiment, OpenAI used GPT-2 to fine-tune GPT-4 on NLP tasks, chess puzzles, and reward modeling. GPT-4's performance on these tasks was then tested and compared with that of a GPT-4 model trained on the \"ground truth\", the correct answers to the tasks.<\/p>\n<p>The results were promising in that GPT-4, when trained by the weaker model, was able to generalize strongly and outperform its weak supervisor. This shows that a weaker intelligence can guide a stronger one, which can then build on that training.<\/p>\n<p>Think of it as a third-grader teaching a really smart kid some math, and the smart kid then mastering 12th-grade math on the strength of that initial instruction.<\/p>\n<h2>Performance gap<\/h2>\n<p>The researchers found that GPT-4, when trained by the less intelligent model, only performed at about the level of a properly trained GPT-3.5 model.<\/p>\n<p>That's because the more intelligent model picks up some of the mistakes and flawed reasoning of its weaker supervisor. 
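<\/p>\n<p>The paper summarizes this comparison with a \"performance gap recovered\" metric: the fraction of the gap between the weak supervisor and the ground-truth-trained ceiling that the weak-to-strong model manages to close. A minimal sketch, with made-up accuracy numbers purely for illustration:<\/p>

```python
# Toy numbers for illustration only; the paper reports results per task.
def performance_gap_recovered(weak, weak_to_strong, ceiling):
    # Fraction of the weak-supervisor-to-ceiling gap that the
    # weak-to-strong trained model recovers (1.0 = full recovery).
    return (weak_to_strong - weak) / (ceiling - weak)

# e.g. a GPT-2-level supervisor at 60% accuracy, the weak-to-strong
# GPT-4 student at 75%, and a ground-truth-trained GPT-4 at 85%
print(round(performance_gap_recovered(0.60, 0.75, 0.85), 2))  # 0.6
```

<p>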
This seems to suggest that using humans to train a superintelligent AI would keep the AI from reaching its full potential.<\/p>\n<figure id=\"attachment_8402\" aria-describedby=\"caption-attachment-8402\" style=\"width: 1376px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-8402\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-results.png\" alt=\"\" width=\"1376\" height=\"506\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-results.png 1376w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-results-300x110.png 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-results-1024x377.png 1024w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-results-768x282.png 768w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-results-370x136.png 370w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-results-800x294.png 800w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-results-740x272.png 740w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-results-20x7.png 20w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-results-1320x485.png 1320w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-results-131x48.png 131w\" sizes=\"auto, (max-width: 1376px) 100vw, 1376px\" \/><figcaption id=\"caption-attachment-8402\" class=\"wp-caption-text\">Comparison of the performance of GPT-2, GPT-4 trained by GPT-2, GPT-4 trained more effectively by GPT-2, and GPT-4 trained on correct answers.<\/figcaption><\/figure>\n<p>The researchers proposed using intermediate models in a bootstrapping approach. 
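<\/p>\n<p>That chain can be sketched as a simple loop; the model names and the <code>align<\/code> step below are placeholders for illustration, not anything from the paper:<\/p>

```python
# Schematic of the proposed bootstrapping chain: each newly aligned
# model supervises the next, slightly stronger one, instead of humans
# aligning the strongest model directly.
def align(student, supervisor):
    # Placeholder for fine-tuning `student` on labels from `supervisor`.
    return f'{student} (aligned by {supervisor})'

models = ['slightly superhuman', 'more capable', 'very superhuman']

supervisor = 'human'
for student in models:
    supervisor = align(student, supervisor)

print(supervisor)
```

<p>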
The paper states: \"Rather than directly aligning very superhuman models, we could first align an only slightly superhuman model, then use that to align an even smarter model, and so on.\"<\/p>\n<p>OpenAI is putting substantial resources behind the project. The research team says it is \"dedicating 20% of the compute we've secured to date over the next four years to solving the problem of superintelligence alignment\".<\/p>\n<p>It is also offering $10 million in grants to individuals or organizations that want to support the research.<\/p>\n<p>They had better figure this out soon. A superintelligent AI could conceivably write a million lines of complicated code that no human programmer could understand. How would we know whether the generated code was safe or not? Let's hope we don't find out the hard way.<\/p>","protected":false},"excerpt":{"rendered":"<p>Current AI models are capable of doing a lot of unsafe or undesirable things. Human supervision and feedback keep these models on track, but what will happen when they become smarter than we are? According to OpenAI, it is possible that an AI more intelligent than humans will emerge within the next 10 years. With that increased intelligence comes the risk that humans will no longer be capable of supervising these models. OpenAI's Superalignment research team is focused on preparing for that possibility. 
The team was launched in July this year and is co-led by Ilya Sutskever.<\/p>","protected":false},"author":6,"featured_media":8404,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[84],"tags":[163,118,93],"class_list":["post-8400","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-industry","tag-ai-risks","tag-llms","tag-openai"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>OpenAI releases first results from Superalignment project | DailyAI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/de\/2023\/12\/openai-releases-first-results-from-superalignment-project\/\" \/>\n<meta property=\"og:locale\" content=\"de_DE\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"OpenAI releases first results from Superalignment project | DailyAI\" \/>\n<meta property=\"og:description\" content=\"Current AI models are capable of doing a lot of unsafe or undesirable things. Human supervision and feedback keep these models aligned but what will happen when these models become smarter than us? OpenAI says it\u2019s possible that we could see the creation of an AI that is smarter than humans in the next 10 years. Along with the increased intelligence comes the risk that humans may no longer be capable of supervising these models. OpenAI\u2019s Superalignment research team is focused on preparing for that eventuality. 
The team was launched in July this year and is co-led by Ilya Sutskever\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/de\/2023\/12\/openai-releases-first-results-from-superalignment-project\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2023-12-18T09:14:27+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/OpenAI-safety.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1000\" \/>\n\t<meta property=\"og:image:height\" content=\"667\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Eugene van der Watt\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Verfasst von\" \/>\n\t<meta name=\"twitter:data1\" content=\"Eugene van der Watt\" \/>\n\t<meta name=\"twitter:label2\" content=\"Gesch\u00e4tzte Lesezeit\" \/>\n\t<meta name=\"twitter:data2\" content=\"4\u00a0Minuten\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/openai-releases-first-results-from-superalignment-project\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/openai-releases-first-results-from-superalignment-project\\\/\"},\"author\":{\"name\":\"Eugene van der Watt\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\"},\"headline\":\"OpenAI releases first results from Superalignment 
project\",\"datePublished\":\"2023-12-18T09:14:27+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/openai-releases-first-results-from-superalignment-project\\\/\"},\"wordCount\":727,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/openai-releases-first-results-from-superalignment-project\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/OpenAI-safety.jpg\",\"keywords\":[\"AI risks\",\"LLMS\",\"OpenAI\"],\"articleSection\":[\"Industry\"],\"inLanguage\":\"de\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/openai-releases-first-results-from-superalignment-project\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/openai-releases-first-results-from-superalignment-project\\\/\",\"name\":\"OpenAI releases first results from Superalignment project | DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/openai-releases-first-results-from-superalignment-project\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/openai-releases-first-results-from-superalignment-project\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/OpenAI-safety.jpg\",\"datePublished\":\"2023-12-18T09:14:27+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/openai-releases-first-results-from-superalignment-project\\\/#breadcrumb\"},\"inLanguage\":\"de\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/openai-releases-first-results-from-superalignment-project\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"de\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/openai-releases-first-results-from-superalignment-project\\\/#p
rimaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/OpenAI-safety.jpg\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/OpenAI-safety.jpg\",\"width\":1000,\"height\":667},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/openai-releases-first-results-from-superalignment-project\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"OpenAI releases first results from Superalignment project\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"de\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"de\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@DailyAIOf
ficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\",\"name\":\"Eugene van der Watt\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"de\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"caption\":\"Eugene van der Watt\"},\"description\":\"Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.\",\"sameAs\":[\"www.linkedin.com\\\/in\\\/eugene-van-der-watt-16828119\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/de\\\/author\\\/eugene\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"OpenAI ver\u00f6ffentlicht erste Ergebnisse des Superalignment-Projekts | DailyAI","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/de\/2023\/12\/openai-releases-first-results-from-superalignment-project\/","og_locale":"de_DE","og_type":"article","og_title":"OpenAI releases first results from Superalignment project | DailyAI","og_description":"Current AI models are capable of doing a lot of unsafe or undesirable things. Human supervision and feedback keep these models aligned but what will happen when these models become smarter than us? OpenAI says it\u2019s possible that we could see the creation of an AI that is smarter than humans in the next 10 years. Along with the increased intelligence comes the risk that humans may no longer be capable of supervising these models. 
OpenAI\u2019s Superalignment research team is focused on preparing for that eventuality. The team was launched in July this year and is co-led by Ilya Sutskever","og_url":"https:\/\/dailyai.com\/de\/2023\/12\/openai-releases-first-results-from-superalignment-project\/","og_site_name":"DailyAI","article_published_time":"2023-12-18T09:14:27+00:00","og_image":[{"width":1000,"height":667,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/OpenAI-safety.jpg","type":"image\/jpeg"}],"author":"Eugene van der Watt","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"Verfasst von":"Eugene van der Watt","Gesch\u00e4tzte Lesezeit":"4\u00a0Minuten"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2023\/12\/openai-releases-first-results-from-superalignment-project\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2023\/12\/openai-releases-first-results-from-superalignment-project\/"},"author":{"name":"Eugene van der Watt","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa"},"headline":"OpenAI releases first results from Superalignment project","datePublished":"2023-12-18T09:14:27+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2023\/12\/openai-releases-first-results-from-superalignment-project\/"},"wordCount":727,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2023\/12\/openai-releases-first-results-from-superalignment-project\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/OpenAI-safety.jpg","keywords":["AI risks","LLMS","OpenAI"],"articleSection":["Industry"],"inLanguage":"de"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2023\/12\/openai-releases-first-results-from-superalignment-project\/","url":"https:\/\/dailyai.com\/2023\/12\/openai-releases-first-results-from-superalignment-project\/","name":"OpenAI 
ver\u00f6ffentlicht erste Ergebnisse des Superalignment-Projekts | DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2023\/12\/openai-releases-first-results-from-superalignment-project\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2023\/12\/openai-releases-first-results-from-superalignment-project\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/OpenAI-safety.jpg","datePublished":"2023-12-18T09:14:27+00:00","breadcrumb":{"@id":"https:\/\/dailyai.com\/2023\/12\/openai-releases-first-results-from-superalignment-project\/#breadcrumb"},"inLanguage":"de","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2023\/12\/openai-releases-first-results-from-superalignment-project\/"]}]},{"@type":"ImageObject","inLanguage":"de","@id":"https:\/\/dailyai.com\/2023\/12\/openai-releases-first-results-from-superalignment-project\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/OpenAI-safety.jpg","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/OpenAI-safety.jpg","width":1000,"height":667},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2023\/12\/openai-releases-first-results-from-superalignment-project\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"OpenAI releases first results from Superalignment project"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DailyAI","description":"Ihre t\u00e4gliche Dosis an 
AI-Nachrichten","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"de"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DailyAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"de","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa","name":"Eugene van der Watt","image":{"@type":"ImageObject","inLanguage":"de","@id":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","caption":"Eugene van der Watt"},"description":"Eugene kommt aus der Elektronikbranche und liebt alles, was mit Technik zu tun hat. 
Wenn er eine Pause vom Konsum von KI-Nachrichten einlegt, findet man ihn am Snookertisch.","sameAs":["www.linkedin.com\/in\/eugene-van-der-watt-16828119"],"url":"https:\/\/dailyai.com\/de\/author\/eugene\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/posts\/8400","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/comments?post=8400"}],"version-history":[{"count":3,"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/posts\/8400\/revisions"}],"predecessor-version":[{"id":8406,"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/posts\/8400\/revisions\/8406"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/media\/8404"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/media?parent=8400"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/categories?post=8400"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/tags?post=8400"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}