OpenAI releases first results from Superalignment project | DailyAI
By Eugene van der Watt (https://dailyai.com/de/author/eugene/)
Source: https://dailyai.com/de/2023/12/openai-releases-first-results-from-superalignment-project/

Current AI models are capable of doing a lot of unsafe or undesirable things. Human supervision and feedback keep these models aligned, but what will happen when these models become smarter than us? OpenAI says it's possible that we could see the creation of an AI that is smarter than humans in the next 10 years. With that increased intelligence comes the risk that humans may no longer be capable of supervising these models.

OpenAI's Superalignment research team is focused on preparing for that eventuality. The team was launched in July this year and is co-led by Ilya Sutskever