OpenAI board forms Safety and Security Committee

By Eugene van der Watt | DailyAI | May 29, 2024

OpenAI's board announced the formation of a Safety and Security Committee tasked with making recommendations on critical safety and security decisions for all OpenAI projects.

The committee is led by directors Bret Taylor (Chair), Adam D'Angelo, Nicole Seligman, and OpenAI CEO Sam Altman.

Aleksander Madry (Head of Preparedness), Lilian Weng (Head of Safety Systems), John Schulman (Head of Alignment Science), Matt Knight (Head of Security), and Jakub Pachocki (Chief Scientist) will also serve on the committee.

OpenAI's approach to AI safety has faced both external and internal criticism. Last year's firing of Altman was supported by then-board member Ilya Sutskever and others, ostensibly over safety concerns.

Last week, Sutskever and Jan Leike of OpenAI's "superalignment" team left the company. Leike specifically cited safety concerns as the reason for his departure, saying the company had let safety take a backseat to "shiny products".

Yesterday, Leike announced that he was joining Anthropic to work on oversight and alignment research.

"I'm excited to join @AnthropicAI to continue the superalignment mission! My new team will work on scalable oversight, weak-to-strong generalization, and automated alignment research. If you're interested in joining, my dms are open."
- Jan Leike (@janleike), May 28, 2024 (https://twitter.com/janleike/status/1795497960509448617)

Now Altman is not only back as CEO, he also sits on the committee responsible for surfacing safety concerns. Former board member Helen Toner's insights into why Altman was fired make you wonder how transparent he will be about any safety issues the committee uncovers.

Apparently, the OpenAI board learned about the release of ChatGPT via Twitter.

"EXCLUSIVE: 'We learned about ChatGPT on Twitter.' What REALLY happened at OpenAI? Former board member Helen Toner breaks her silence with shocking new details about Sam Altman's firing. Hear the exclusive, untold story on The TED AI Show. Here's a sneak peek: pic.twitter.com/7hXHcZTP9e"
- Bilawal Sidhu (@bilawalsidhu), May 28, 2024 (https://twitter.com/bilawalsidhu/status/1795534345345618298)

The Safety and Security Committee will use the next 90 days to evaluate and further develop OpenAI's processes and safeguards.

Its recommendations will be submitted to OpenAI's board for approval, and the company has committed to publishing the safety recommendations it adopts.

This push for additional guardrails comes as OpenAI says it has begun training its next frontier model, which it says will "bring us to the next level of capabilities on our path to AGI".

No expected release date was given for the new model, but training alone will likely take weeks, if not months.

In an update on its safety approach published after the AI Seoul Summit, OpenAI said: "We won't release a new model if it crosses a 'Medium' risk threshold from our Preparedness Framework, until we implement sufficient safety interventions to bring the post-mitigation score back to 'Medium'."

The company said more than 70 external experts were involved in red teaming GPT-4o before its release.

With 90 days to go before the committee presents its findings to the board, training only recently underway, and a commitment to extensive red teaming, it looks like we are in for a long wait before GPT-5 finally arrives.

Or do they mean they have just started training GPT-6?