{"id":12636,"date":"2024-05-29T08:32:46","date_gmt":"2024-05-29T08:32:46","guid":{"rendered":"https:\/\/dailyai.com\/?p=12636"},"modified":"2024-05-29T12:53:39","modified_gmt":"2024-05-29T12:53:39","slug":"openai-board-forms-safety-and-security-committee","status":"publish","type":"post","link":"https:\/\/dailyai.com\/it\/2024\/05\/openai-board-forms-safety-and-security-committee\/","title":{"rendered":"Il consiglio di amministrazione di OpenAI forma un comitato per la sicurezza"},"content":{"rendered":"<p><strong>Il consiglio di amministrazione di OpenAI ha annunciato la formazione di un Comitato per la sicurezza che ha il compito di formulare raccomandazioni sulle decisioni critiche in materia di sicurezza per tutti i progetti OpenAI.<\/strong><\/p>\n<p>Il comitato \u00e8 guidato dai direttori Bret Taylor (presidente), Adam D'Angelo, Nicole Seligman e Sam Altman, CEO di OpenAI.<\/p>\n<p>Del comitato faranno parte anche Aleksander Madry (responsabile della preparazione), Lilian Weng (responsabile dei sistemi di sicurezza), John Schulman (responsabile della scienza dell'allineamento), Matt Knight (responsabile della sicurezza) e Jakub Pachocki (scienziato capo).<\/p>\n<p>L'approccio di OpenAI alla sicurezza dell'IA ha dovuto affrontare critiche sia esterne che interne. Il licenziamento di Altman, avvenuto l'anno scorso, \u00e8 stato sostenuto dall'allora membro del consiglio di amministrazione Ilya Sutskever e da altri, apparentemente per questioni di sicurezza.<\/p>\n<p>La scorsa settimana Sutskever e Jan Leike del team \"superalignment\" di OpenAI <a href=\"https:\/\/dailyai.com\/it\/2024\/05\/openais-superalignment-meltdown-can-the-company-salvage-any-trust\/\">ha lasciato l'azienda<\/a>. 
Leike specifically cited safety concerns as the reason for his departure, saying the company was letting safety \"take a backseat to shiny products\".<\/p>\n<p>Yesterday Leike announced that he is joining Anthropic to work on oversight and alignment research.<\/p>\n<blockquote class=\"twitter-tweet\">\n<p dir=\"ltr\" lang=\"en\">I'm excited to join <a href=\"https:\/\/twitter.com\/AnthropicAI?ref_src=twsrc%5Etfw\">@AnthropicAI<\/a> to continue the superalignment mission!<\/p>\n<p>My new team will work on scalable oversight, weak-to-strong generalization, and automated alignment research.<\/p>\n<p>If you're interested in joining, my dms are open.<\/p>\n<p>- Jan Leike (@janleike) <a href=\"https:\/\/twitter.com\/janleike\/status\/1795497960509448617?ref_src=twsrc%5Etfw\">May 28, 2024<\/a><\/p><\/blockquote>\n<p><script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/p>\n<p>Now Altman is not only back as CEO but also sits on the committee responsible for surfacing safety issues. 
Insights from former board member Helen Toner into why Altman was fired raise questions over how transparent he will be about any safety issues the committee uncovers.<\/p>\n<p>Apparently, OpenAI's board learned of the release of ChatGPT via Twitter.<\/p>\n<blockquote class=\"twitter-tweet\">\n<p dir=\"ltr\" lang=\"en\">\u2757EXCLUSIVE: \"We learned about ChatGPT on Twitter.\"<\/p>\n<p>What really happened at OpenAI? Former board member Helen Toner breaks her silence with shocking new details about Sam Altman's firing. 
Hear the exclusive, untold story on The TED AI Show.<\/p>\n<p>Here's just a taste: <a href=\"https:\/\/t.co\/7hXHcZTP9e\">pic.twitter.com\/7hXHcZTP9e<\/a><\/p>\n<p>- Bilawal Sidhu (@bilawalsidhu) <a href=\"https:\/\/twitter.com\/bilawalsidhu\/status\/1795534345345618298?ref_src=twsrc%5Etfw\">May 28, 2024<\/a><\/p><\/blockquote>\n<p><script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/p>\n<p>The Safety and Security Committee will use the next 90 days to evaluate and further develop OpenAI's processes and safeguards.<\/p>\n<p>Its recommendations will be submitted to OpenAI's board for approval, and the company has committed to publishing the safety recommendations it adopts.<\/p>\n<p>This push for additional guardrails comes as OpenAI says it has begun training its next frontier model, which it says \"will bring us to the next level of capabilities on our path to AGI\".<\/p>\n<p>No expected release date was given for the new model, but training alone will likely take weeks, if not months.<\/p>\n<p>In an update on its approach to safety published after the AI Seoul Summit, OpenAI said: \"We won't release a new model if it crosses a 'Medium' risk threshold from our Preparedness Framework, until we implement sufficient safety interventions to bring the post-mitigation score back to 'Medium'.\"<\/p>\n<p>It said that more than 70 external experts were involved in red teaming 
<a href=\"https:\/\/dailyai.com\/it\/2024\/05\/everything-you-need-to-know-about-openais-new-flagship-model-gpt-4o\/\">GPT-4o<\/a> prima del suo rilascio.<\/p>\n<p>Con 90 giorni di tempo prima che la commissione presenti i suoi risultati al consiglio, una formazione iniziata solo di recente e l'impegno di un ampio red teaming, sembra che dovremo aspettare a lungo prima di avere finalmente il GPT-5.<\/p>\n<p>Oppure intendono dire che hanno appena iniziato ad allenare la GPT-6?<\/p>\n<p>&nbsp;<\/p>","protected":false},"excerpt":{"rendered":"<p>Il consiglio di amministrazione di OpenAI ha annunciato la formazione di un Comitato per la sicurezza, incaricato di formulare raccomandazioni sulle decisioni critiche in materia di sicurezza per tutti i progetti OpenAI. Il comitato \u00e8 guidato dai direttori Bret Taylor (presidente), Adam D'Angelo, Nicole Seligman e dal CEO di OpenAI Sam Altman. Fanno parte del comitato anche Aleksander Madry (responsabile della preparazione), Lilian Weng (responsabile dei sistemi di sicurezza), John Schulman (responsabile della scienza dell'allineamento), Matt Knight (responsabile della sicurezza) e Jakub Pachocki (scienziato capo). L'approccio di OpenAI alla sicurezza dell'IA ha dovuto affrontare critiche sia esterne che interne. 
Last year's firing of Altman was supported by then-board member<\/p>","protected":false},"author":6,"featured_media":12640,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[84],"tags":[163],"class_list":["post-12636","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-industry","tag-ai-risks"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>OpenAI board forms Safety and Security Committee | DailyAI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/it\/2024\/05\/openai-board-forms-safety-and-security-committee\/\" \/>\n<meta property=\"og:locale\" content=\"it_IT\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"OpenAI board forms Safety and Security Committee | DailyAI\" \/>\n<meta property=\"og:description\" content=\"OpenAI\u2019s board announced the formation of a Safety and Security Committee which is tasked with making recommendations on critical safety and security decisions for all OpenAI projects. The committee is led by directors Bret Taylor (Chair), Adam D\u2019Angelo, Nicole Seligman, and OpenAI\u2019s CEO Sam Altman. Aleksander Madry (Head of Preparedness), Lilian Weng (Head of Safety Systems), John Schulman (Head of Alignment Science), Matt Knight (Head of Security), and Jakub Pachocki (Chief Scientist) will also be on the committee. OpenAI\u2019s approach to AI safety has faced both external and internal criticism. 
Last year\u2019s firing of Altman was supported by then-board member\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/it\/2024\/05\/openai-board-forms-safety-and-security-committee\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2024-05-29T08:32:46+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2024-05-29T12:53:39+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/05\/OpenAI-safety-committee-1.webp\" \/>\n\t<meta property=\"og:image:width\" content=\"1792\" \/>\n\t<meta property=\"og:image:height\" content=\"1024\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/webp\" \/>\n<meta name=\"author\" content=\"Eugene van der Watt\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Scritto da\" \/>\n\t<meta name=\"twitter:data1\" content=\"Eugene van der Watt\" \/>\n\t<meta name=\"twitter:label2\" content=\"Tempo di lettura stimato\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minuti\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/05\\\/openai-board-forms-safety-and-security-committee\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/05\\\/openai-board-forms-safety-and-security-committee\\\/\"},\"author\":{\"name\":\"Eugene van der Watt\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\"},\"headline\":\"OpenAI board forms Safety and Security 
Committee\",\"datePublished\":\"2024-05-29T08:32:46+00:00\",\"dateModified\":\"2024-05-29T12:53:39+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/05\\\/openai-board-forms-safety-and-security-committee\\\/\"},\"wordCount\":542,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/05\\\/openai-board-forms-safety-and-security-committee\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/05\\\/OpenAI-safety-committee-1.webp\",\"keywords\":[\"AI risks\"],\"articleSection\":[\"Industry\"],\"inLanguage\":\"it-IT\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/05\\\/openai-board-forms-safety-and-security-committee\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/05\\\/openai-board-forms-safety-and-security-committee\\\/\",\"name\":\"OpenAI board forms Safety and Security Committee | DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/05\\\/openai-board-forms-safety-and-security-committee\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/05\\\/openai-board-forms-safety-and-security-committee\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/05\\\/OpenAI-safety-committee-1.webp\",\"datePublished\":\"2024-05-29T08:32:46+00:00\",\"dateModified\":\"2024-05-29T12:53:39+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/05\\\/openai-board-forms-safety-and-security-committee\\\/#breadcrumb\"},\"inLanguage\":\"it-IT\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2024\\\/05\\\/openai-board-forms-safety-and-security-committee\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"it-IT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/05\\\/openai-board-forms-safety-and-sec
urity-committee\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/05\\\/OpenAI-safety-committee-1.webp\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/05\\\/OpenAI-safety-committee-1.webp\",\"width\":1792,\"height\":1024},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/05\\\/openai-board-forms-safety-and-security-committee\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"OpenAI board forms Safety and Security Committee\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"it-IT\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"it-IT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\
\\/\\\/www.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\",\"name\":\"Eugene van der Watt\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"it-IT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"caption\":\"Eugene van der Watt\"},\"description\":\"Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.\",\"sameAs\":[\"www.linkedin.com\\\/in\\\/eugene-van-der-watt-16828119\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/it\\\/author\\\/eugene\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Il consiglio di OpenAI forma un comitato per la sicurezza e la protezione | DailyAI","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/it\/2024\/05\/openai-board-forms-safety-and-security-committee\/","og_locale":"it_IT","og_type":"article","og_title":"OpenAI board forms Safety and Security Committee | DailyAI","og_description":"OpenAI\u2019s board announced the formation of a Safety and Security Committee which is tasked with making recommendations on critical safety and security decisions for all OpenAI projects. The committee is led by directors Bret Taylor (Chair), Adam D\u2019Angelo, Nicole Seligman, and OpenAI\u2019s CEO Sam Altman. 
Aleksander Madry (Head of Preparedness), Lilian Weng (Head of Safety Systems), John Schulman (Head of Alignment Science), Matt Knight (Head of Security), and Jakub Pachocki (Chief Scientist) will also be on the committee. OpenAI\u2019s approach to AI safety has faced both external and internal criticism. Last year\u2019s firing of Altman was supported by then-board member","og_url":"https:\/\/dailyai.com\/it\/2024\/05\/openai-board-forms-safety-and-security-committee\/","og_site_name":"DailyAI","article_published_time":"2024-05-29T08:32:46+00:00","article_modified_time":"2024-05-29T12:53:39+00:00","og_image":[{"width":1792,"height":1024,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/05\/OpenAI-safety-committee-1.webp","type":"image\/webp"}],"author":"Eugene van der Watt","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"Scritto da":"Eugene van der Watt","Tempo di lettura stimato":"3 minuti"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2024\/05\/openai-board-forms-safety-and-security-committee\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2024\/05\/openai-board-forms-safety-and-security-committee\/"},"author":{"name":"Eugene van der Watt","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa"},"headline":"OpenAI board forms Safety and Security Committee","datePublished":"2024-05-29T08:32:46+00:00","dateModified":"2024-05-29T12:53:39+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2024\/05\/openai-board-forms-safety-and-security-committee\/"},"wordCount":542,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2024\/05\/openai-board-forms-safety-and-security-committee\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/05\/OpenAI-safety-committee-1.webp","keywords":["AI 
risks"],"articleSection":["Industry"],"inLanguage":"it-IT"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2024\/05\/openai-board-forms-safety-and-security-committee\/","url":"https:\/\/dailyai.com\/2024\/05\/openai-board-forms-safety-and-security-committee\/","name":"Il consiglio di OpenAI forma un comitato per la sicurezza e la protezione | DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2024\/05\/openai-board-forms-safety-and-security-committee\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2024\/05\/openai-board-forms-safety-and-security-committee\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/05\/OpenAI-safety-committee-1.webp","datePublished":"2024-05-29T08:32:46+00:00","dateModified":"2024-05-29T12:53:39+00:00","breadcrumb":{"@id":"https:\/\/dailyai.com\/2024\/05\/openai-board-forms-safety-and-security-committee\/#breadcrumb"},"inLanguage":"it-IT","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2024\/05\/openai-board-forms-safety-and-security-committee\/"]}]},{"@type":"ImageObject","inLanguage":"it-IT","@id":"https:\/\/dailyai.com\/2024\/05\/openai-board-forms-safety-and-security-committee\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/05\/OpenAI-safety-committee-1.webp","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/05\/OpenAI-safety-committee-1.webp","width":1792,"height":1024},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2024\/05\/openai-board-forms-safety-and-security-committee\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"OpenAI board forms Safety and Security Committee"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DailyAI","description":"La vostra dose quotidiana di notizie sull'intelligenza 
artificiale","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"it-IT"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DailyAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"it-IT","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa","name":"Eugene van der Watt","image":{"@type":"ImageObject","inLanguage":"it-IT","@id":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","caption":"Eugene van der Watt"},"description":"Eugene proviene da un background di ingegneria elettronica e ama tutto ci\u00f2 che \u00e8 tecnologico. 
Quando si prende una pausa dal consumo di notizie sull'intelligenza artificiale, lo si pu\u00f2 trovare al tavolo da biliardo.","sameAs":["www.linkedin.com\/in\/eugene-van-der-watt-16828119"],"url":"https:\/\/dailyai.com\/it\/author\/eugene\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/posts\/12636","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/comments?post=12636"}],"version-history":[{"count":8,"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/posts\/12636\/revisions"}],"predecessor-version":[{"id":12646,"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/posts\/12636\/revisions\/12646"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/media\/12640"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/media?parent=12636"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/categories?post=12636"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/tags?post=12636"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}