{"id":8400,"date":"2023-12-18T09:14:27","date_gmt":"2023-12-18T09:14:27","guid":{"rendered":"https:\/\/dailyai.com\/?p=8400"},"modified":"2023-12-18T09:14:27","modified_gmt":"2023-12-18T09:14:27","slug":"openai-releases-first-results-from-superalignment-project","status":"publish","type":"post","link":"https:\/\/dailyai.com\/it\/2023\/12\/openai-releases-first-results-from-superalignment-project\/","title":{"rendered":"OpenAI releases first results from Superalignment project"},"content":{"rendered":"<p><strong>Current AI models are capable of doing a lot of unsafe or undesirable things. Human supervision and feedback keep these models aligned, but what will happen when these models become smarter than us?<\/strong><\/p>\n<p>OpenAI says it's possible that we could see the creation of an AI that is smarter than humans in the next 10 years. Along with the increased intelligence comes the risk that humans may no longer be capable of supervising these models.<\/p>\n<p>OpenAI's Superalignment research team is preparing for that eventuality. The team was launched in July this year and is co-led by Ilya Sutskever, who has kept a low profile since Sam Altman's <a href=\"https:\/\/dailyai.com\/it\/2023\/11\/sam-altman-and-greg-brockman-join-microsoft-in-new-chapter-for-agi\/\">firing and subsequent rehiring<\/a>.<\/p>\n<p>OpenAI framed the project's motivation in sobering terms, acknowledging that \"currently, we don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue\".<\/p>\n<p>But how do you prepare to control something that doesn't exist yet? 
The research team has just published its <a href=\"https:\/\/cdn.openai.com\/papers\/weak-to-strong-generalization.pdf\" target=\"_blank\" rel=\"noopener\">first experimental results<\/a> as it attempts to do exactly that.<\/p>\n<h2>Weak-to-strong generalization<\/h2>\n<p>For now, humans still hold the stronger intelligence position relative to AI models. Models like GPT-4 are guided, or aligned, using Reinforcement Learning from Human Feedback (RLHF). When a model's outputs are undesirable, the human trainer tells the model \"Don't do that\" and rewards it with an affirmation of the desired performance.<\/p>\n<p>This works for now because we have a reasonable understanding of how current models operate and we are smarter than they are. When future human data scientists need to train a superintelligent AI, the intelligence roles will be reversed.<\/p>\n<p>To simulate this situation, OpenAI decided to use older GPT models, like GPT-2, to train more powerful models, like GPT-4. 
GPT-2 would stand in for the future human trainer trying to fine-tune a more intelligent model.<\/p>\n<figure id=\"attachment_8403\" aria-describedby=\"caption-attachment-8403\" style=\"width: 1936px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-8403\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision.webp\" alt=\"\" width=\"1936\" height=\"950\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision.webp 1936w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision-300x147.webp 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision-1024x502.webp 1024w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision-768x377.webp 768w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision-1536x754.webp 1536w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision-370x182.webp 370w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision-800x393.webp 800w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision-740x363.webp 740w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision-20x10.webp 20w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision-1600x785.webp 1600w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision-1320x648.webp 1320w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-supervision-98x48.webp 98w\" sizes=\"auto, (max-width: 1936px) 100vw, 1936px\" \/><figcaption id=\"caption-attachment-8403\" class=\"wp-caption-text\">AI training scenarios: current, future, and OpenAI's simulation. 
Source: OpenAI<\/figcaption><\/figure>\n<p>The research paper explains that \"just like the problem of humans supervising superhuman models, our setup is an instance of what we call the weak-to-strong learning problem\".<\/p>\n<p>In the experiment, OpenAI used GPT-2 to fine-tune GPT-4 on NLP tasks, chess puzzles, and reward modeling. It then tested GPT-4's performance on these tasks and compared it with a GPT-4 model that had been trained on the \"ground truth\", or the correct answers to the tasks.<\/p>\n<p>The results were promising: when GPT-4 was trained by the weaker model, it was able to generalize strongly and outperform its weak supervisor. This shows that a weaker intelligence can provide direction to a stronger one, which can then build on that training.<\/p>\n<p>Think of a 3rd grader teaching a really smart kid a bit of math, and the kid then going on to do 12th-grade math on the strength of that initial training.<\/p>\n<h2>Performance gap<\/h2>\n<p>The researchers found that because GPT-4 was being trained by a less intelligent model, the process capped its performance at the equivalent of a properly trained GPT-3.5 model.<\/p>\n<p>This is because the smarter model picks up some of the mistakes and flawed reasoning of its weaker supervisor. 
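The paper summarizes results like these with a "performance gap recovered" (PGR) metric: the fraction of the gap between the weak supervisor and the ground-truth-trained strong model that weak-to-strong training manages to recover. A minimal sketch (the function name and the example numbers are illustrative, not OpenAI's):

```python
def performance_gap_recovered(weak_acc, weak_to_strong_acc, ceiling_acc):
    # PGR = (weak-to-strong - weak) / (ceiling - weak).
    # 0 means the strong model did no better than its weak supervisor;
    # 1 means it matched its ground-truth-trained ceiling.
    return (weak_to_strong_acc - weak_acc) / (ceiling_acc - weak_acc)

# Illustrative numbers: a GPT-2-level supervisor at 60% accuracy, a
# ground-truth-trained GPT-4 at 90%, and a weak-to-strong GPT-4 at 80%
# recovers two thirds of the gap.
print(round(performance_gap_recovered(0.60, 0.80, 0.90), 3))  # -> 0.667
```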
This seems to indicate that using humans to train a superintelligent AI would keep the AI from reaching its full potential.<\/p>\n<figure id=\"attachment_8402\" aria-describedby=\"caption-attachment-8402\" style=\"width: 1376px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-8402\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-results.png\" alt=\"\" width=\"1376\" height=\"506\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-results.png 1376w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-results-300x110.png 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-results-1024x377.png 1024w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-results-768x282.png 768w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-results-370x136.png 370w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-results-800x294.png 800w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-results-740x272.png 740w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-results-20x7.png 20w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-results-1320x485.png 1320w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/weak-to-strong-results-131x48.png 131w\" sizes=\"auto, (max-width: 1376px) 100vw, 1376px\" \/><figcaption id=\"caption-attachment-8402\" class=\"wp-caption-text\">Performance comparison of GPT-2, GPT-4 trained by GPT-2, GPT-4 more effectively trained by GPT-2, and GPT-4 trained on the correct answers.<\/figcaption><\/figure>\n<p>The researchers suggested using intermediate models in a bootstrapping approach. 
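The proposed chain can be sketched abstractly: each aligned model becomes the supervisor for the next, slightly stronger one. A toy illustration (all names hypothetical, not OpenAI code):

```python
def bootstrap_alignment(models, align):
    # models: ordered weakest to strongest.
    # align(supervisor, student): returns the student after the
    # supervisor has aligned it.
    supervisor = models[0]  # a model weak enough for humans to align directly
    for student in models[1:]:
        supervisor = align(supervisor, student)  # aligned student supervises next
    return supervisor

# Toy align step that just records the supervision chain:
chain = bootstrap_alignment(
    ['human-level', 'slightly-superhuman', 'very-superhuman'],
    lambda sup, stu: stu + ' (aligned by ' + sup + ')',
)
print(chain)
# -> very-superhuman (aligned by slightly-superhuman (aligned by human-level))
```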
The paper explains that \"instead of directly aligning very superhuman models, we could first align an only slightly superhuman model, use that to align an even smarter model, and so on\".<\/p>\n<p>OpenAI is committing a lot of resources to the project. The research team says it is dedicating \"20% of the compute we've secured to date over the next four years to solving the problem of superintelligence alignment\".<\/p>\n<p>It is also offering $10 million in grants to individuals or organizations that want to contribute to the research.<\/p>\n<p>They had better figure it out soon. A superintelligent AI could write a million lines of complicated code that no human programmer could understand. How would we know whether the generated code was safe to run or not? Hopefully we won't find out the hard way.<\/p>","protected":false},"excerpt":{"rendered":"<p>Current AI models are capable of doing a lot of unsafe or undesirable things. Human supervision and feedback keep these models aligned, but what will happen when these models become smarter than us? OpenAI says it's possible that we could see the creation of an AI that is smarter than humans in the next 10 years. Along with the increased intelligence comes the risk that humans may no longer be capable of supervising these models. OpenAI's Superalignment research team is preparing for that eventuality. 
The team was launched in July this year and is co-led by Ilya Sutskever.<\/p>","protected":false},"author":6,"featured_media":8404,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[84],"tags":[163,118,93],"class_list":["post-8400","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-industry","tag-ai-risks","tag-llms","tag-openai"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>OpenAI releases first results from Superalignment project | DailyAI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/it\/2023\/12\/openai-releases-first-results-from-superalignment-project\/\" \/>\n<meta property=\"og:locale\" content=\"it_IT\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"OpenAI releases first results from Superalignment project | DailyAI\" \/>\n<meta property=\"og:description\" content=\"Current AI models are capable of doing a lot of unsafe or undesirable things. Human supervision and feedback keep these models aligned but what will happen when these models become smarter than us? OpenAI says it\u2019s possible that we could see the creation of an AI that is smarter than humans in the next 10 years. Along with the increased intelligence comes the risk that humans may no longer be capable of supervising these models. OpenAI\u2019s Superalignment research team is focused on preparing for that eventuality. 
The team was launched in July this year and is co-led by Ilya Sutskever\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/it\/2023\/12\/openai-releases-first-results-from-superalignment-project\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2023-12-18T09:14:27+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/OpenAI-safety.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1000\" \/>\n\t<meta property=\"og:image:height\" content=\"667\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Eugene van der Watt\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Eugene van der Watt\" \/>\n\t<meta name=\"twitter:label2\" content=\"Estimated reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"4 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/openai-releases-first-results-from-superalignment-project\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/openai-releases-first-results-from-superalignment-project\\\/\"},\"author\":{\"name\":\"Eugene van der Watt\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\"},\"headline\":\"OpenAI releases first results from Superalignment 
project\",\"datePublished\":\"2023-12-18T09:14:27+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/openai-releases-first-results-from-superalignment-project\\\/\"},\"wordCount\":727,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/openai-releases-first-results-from-superalignment-project\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/OpenAI-safety.jpg\",\"keywords\":[\"AI risks\",\"LLMS\",\"OpenAI\"],\"articleSection\":[\"Industry\"],\"inLanguage\":\"it-IT\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/openai-releases-first-results-from-superalignment-project\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/openai-releases-first-results-from-superalignment-project\\\/\",\"name\":\"OpenAI releases first results from Superalignment project | DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/openai-releases-first-results-from-superalignment-project\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/openai-releases-first-results-from-superalignment-project\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/OpenAI-safety.jpg\",\"datePublished\":\"2023-12-18T09:14:27+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/openai-releases-first-results-from-superalignment-project\\\/#breadcrumb\"},\"inLanguage\":\"it-IT\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/openai-releases-first-results-from-superalignment-project\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"it-IT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/openai-releases-first-results-from-superalignment-proj
ect\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/OpenAI-safety.jpg\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/OpenAI-safety.jpg\",\"width\":1000,\"height\":667},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/openai-releases-first-results-from-superalignment-project\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"OpenAI releases first results from Superalignment project\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"it-IT\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"it-IT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.co
m\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\",\"name\":\"Eugene van der Watt\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"it-IT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"caption\":\"Eugene van der Watt\"},\"description\":\"Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.\",\"sameAs\":[\"www.linkedin.com\\\/in\\\/eugene-van-der-watt-16828119\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/it\\\/author\\\/eugene\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","_links":{"self":[{"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/posts\/8400","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/comments?post=8400"}],"version-history":[{"count":3,"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/posts\/8400\/revisions"}],"predecessor-version":[{"id":8406,"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/posts\/8400\/revisions\/8406"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/media\/8404"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/media?parent=8400"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/categories?post=8400"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/tags?post=8400"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}