{"id":12729,"date":"2024-06-05T09:50:37","date_gmt":"2024-06-05T09:50:37","guid":{"rendered":"https:\/\/dailyai.com\/?p=12729"},"modified":"2024-06-05T09:50:37","modified_gmt":"2024-06-05T09:50:37","slug":"former-openai-employees-publish-right-to-warn-open-letter","status":"publish","type":"post","link":"https:\/\/dailyai.com\/de\/2024\/06\/former-openai-employees-publish-right-to-warn-open-letter\/","title":{"rendered":"Ehemalige OpenAI-Mitarbeiter ver\u00f6ffentlichen offenen Brief \"Recht auf Warnung"},"content":{"rendered":"<p><strong>Eine Gruppe ehemaliger und aktueller Mitarbeiter von OpenAI und Google r\u00fcgt die KI-Unternehmen wegen der ihrer Meinung nach gef\u00e4hrlichen Geheimhaltungskultur in Bezug auf KI-Risiken.<\/strong><\/p>\n<p>In dem Schreiben mit dem Titel \"Ein Recht auf Warnung vor fortgeschrittener k\u00fcnstlicher Intelligenz\" hei\u00dft es, dass KI-Unternehmen starke finanzielle Anreize haben, eine wirksame Aufsicht \u00fcber potenzielle KI-Risiken zu vermeiden.<\/p>\n<p>Abgesehen davon, dass es r\u00fccksichtslos ist, sich auf finanzielle Ziele statt auf die Sicherheit zu konzentrieren, hei\u00dft es in dem Schreiben, dass Unternehmen strafbewehrte Vertraulichkeitsvereinbarungen verwenden, um Mitarbeiter aktiv davon abzuhalten, Bedenken zu \u00e4u\u00dfern.<\/p>\n<p>Die Unterzeichner sind alle ehemalige OpenAI- und Google-Mitarbeiter, wobei Neel Nanda der einzige ist, der noch bei Google arbeitet. 
The letter was also endorsed by leading AI experts Yoshua Bengio, Geoffrey Hinton, and Stuart Russell.<\/p>\n<p>Six of the signatories were unwilling to disclose their names in the letter, underscoring their concerns about naming their former employers.<\/p>\n<p>Former OpenAI researchers Daniel Kokotajlo and William Saunders, who also signed the letter, left the company earlier this year.<\/p>\n<p>Kokotajlo was part of the governance team, and Saunders worked on OpenAI's Superalignment team, which was dissolved last month when <a href=\"https:\/\/dailyai.com\/de\/2024\/05\/openais-superalignment-meltdown-can-the-company-salvage-any-trust\/\">Ilya Sutskever and Jan Leike also left over safety concerns<\/a>.<\/p>\n<p>Kokotajlo explained his departure in a forum post, saying he did not believe OpenAI would \"behave responsibly around the time of AGI\".<\/p>\n<h2>A call to action<\/h2>\n<p>The letter calls for stronger commitments from AI companies, given the absence of regulation covering AI risks that are not yet known to the public.<\/p>\n<p>The letter states: \"Ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated.\"<\/p>\n<p>The letter calls on AI companies to commit to four principles. In short, they want companies to:<\/p>\n<ul>\n<li>not enter into or enforce agreements that prohibit criticism of the company over safety concerns, or withhold vested financial benefits from employees. 
(ahem, OpenAI)<\/li>\n<li>facilitate an anonymous process for employees to raise risk-related concerns to the company's board or other oversight bodies.<\/li>\n<li>support a culture of open criticism that allows employees to raise risk-related concerns publicly without giving away intellectual property.<\/li>\n<li>not retaliate against current and former employees who publicly share risk-related confidential information after other processes have failed.<\/li>\n<\/ul>\n<p>Several of the names on the list of signatories consider themselves effective altruists. It is clear from their posts and comments that people like Daniel Kokotajlo (LessWrong) and William Saunders (AI Alignment Forum) believe things could end very badly if AI risks are not managed.<\/p>\n<p>But these are not doomsayers ranting from the sidelines of a forum. They are leading minds that companies like OpenAI and Google hired to build the technology they now fear.<\/p>\n<p>And now they are saying: \"We have seen things that scare us. We want to warn people, but we are not allowed to.\"<\/p>\n<p>You can read the <a href=\"https:\/\/righttowarn.ai\/\" target=\"_blank\" rel=\"noopener\">letter here<\/a>.<\/p>","protected":false},"excerpt":{"rendered":"<p>A group of former and current OpenAI and Google employees accuses AI companies of maintaining a dangerous culture of secrecy around AI risks. The letter, titled \"A Right to Warn about Advanced Artificial Intelligence\", states that AI companies have strong financial incentives to avoid effective oversight of potential AI risks. 
Besides recklessly focusing on financial objectives instead of safety, the letter says, companies use punitive confidentiality agreements to actively discourage employees from raising concerns. The signatories are all former OpenAI and Google employees, with Neel Nanda the only one still working at<\/p>","protected":false},"author":6,"featured_media":12731,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[88],"tags":[163,102,93],"class_list":["post-12729","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ethics","tag-ai-risks","tag-google","tag-openai"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Former OpenAI employees publish \u2018Right to Warn\u2019 open letter | DailyAI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/de\/2024\/06\/former-openai-employees-publish-right-to-warn-open-letter\/\" \/>\n<meta property=\"og:locale\" content=\"de_DE\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Former OpenAI employees publish \u2018Right to Warn\u2019 open letter | DailyAI\" \/>\n<meta property=\"og:description\" content=\"A group of former and current OpenAI and Google employees are calling AI companies out on what they say is a dangerous culture of secrecy surrounding AI risks. The letter titled \u201cA right to warn about advanced artificial intelligence\u201d states that AI companies have strong financial incentives to avoid effective oversight of potential AI risks. 
Besides being reckless by focusing on financial objectives instead of safety, the letter says that companies use punitive confidentiality agreements to actively discourage employees from raising concerns. The signatories are all former OpenAI and Google employees, with Neel Nanda, the only one still working at\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/de\/2024\/06\/former-openai-employees-publish-right-to-warn-open-letter\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2024-06-05T09:50:37+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/06\/Warning-letter.webp\" \/>\n\t<meta property=\"og:image:width\" content=\"1792\" \/>\n\t<meta property=\"og:image:height\" content=\"1024\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/webp\" \/>\n<meta name=\"author\" content=\"Eugene van der Watt\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Verfasst von\" \/>\n\t<meta name=\"twitter:data1\" content=\"Eugene van der Watt\" \/>\n\t<meta name=\"twitter:label2\" content=\"Gesch\u00e4tzte Lesezeit\" \/>\n\t<meta name=\"twitter:data2\" content=\"3\u00a0Minuten\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/former-openai-employees-publish-right-to-warn-open-letter\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/former-openai-employees-publish-right-to-warn-open-letter\\\/\"},\"author\":{\"name\":\"Eugene van der Watt\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\"},\"headline\":\"Former OpenAI 
employees publish \u2018Right to Warn\u2019 open letter\",\"datePublished\":\"2024-06-05T09:50:37+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/former-openai-employees-publish-right-to-warn-open-letter\\\/\"},\"wordCount\":487,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/former-openai-employees-publish-right-to-warn-open-letter\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/06\\\/Warning-letter.webp\",\"keywords\":[\"AI risks\",\"Google\",\"OpenAI\"],\"articleSection\":[\"Ethics &amp; Society\"],\"inLanguage\":\"de\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/former-openai-employees-publish-right-to-warn-open-letter\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/former-openai-employees-publish-right-to-warn-open-letter\\\/\",\"name\":\"Former OpenAI employees publish \u2018Right to Warn\u2019 open letter | 
DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/former-openai-employees-publish-right-to-warn-open-letter\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/former-openai-employees-publish-right-to-warn-open-letter\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/06\\\/Warning-letter.webp\",\"datePublished\":\"2024-06-05T09:50:37+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/former-openai-employees-publish-right-to-warn-open-letter\\\/#breadcrumb\"},\"inLanguage\":\"de\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/former-openai-employees-publish-right-to-warn-open-letter\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"de\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/former-openai-employees-publish-right-to-warn-open-letter\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/06\\\/Warning-letter.webp\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/06\\\/Warning-letter.webp\",\"width\":1792,\"height\":1024},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/former-openai-employees-publish-right-to-warn-open-letter\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Former OpenAI employees publish \u2018Right to Warn\u2019 open letter\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI 
News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"de\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"de\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\",\"name\":\"Eugene van der Watt\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"de\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"caption\":\"Eugene van der Watt\"},\"description\":\"Eugene comes from an electronic engineering background and loves all things tech. 
When he takes a break from consuming AI news you'll find him at the snooker table.\",\"sameAs\":[\"www.linkedin.com\\\/in\\\/eugene-van-der-watt-16828119\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/de\\\/author\\\/eugene\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Ehemalige OpenAI-Mitarbeiter ver\u00f6ffentlichen offenen Brief mit \"Recht auf Warnung\" | DailyAI","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/de\/2024\/06\/former-openai-employees-publish-right-to-warn-open-letter\/","og_locale":"de_DE","og_type":"article","og_title":"Former OpenAI employees publish \u2018Right to Warn\u2019 open letter | DailyAI","og_description":"A group of former and current OpenAI and Google employees are calling AI companies out on what they say is a dangerous culture of secrecy surrounding AI risks. The letter titled \u201cA right to warn about advanced artificial intelligence\u201d states that AI companies have strong financial incentives to avoid effective oversight of potential AI risks. Besides being reckless by focusing on financial objectives instead of safety, the letter says that companies use punitive confidentiality agreements to actively discourage employees from raising concerns. 
The signatories are all former OpenAI and Google employees, with Neel Nanda, the only one still working at","og_url":"https:\/\/dailyai.com\/de\/2024\/06\/former-openai-employees-publish-right-to-warn-open-letter\/","og_site_name":"DailyAI","article_published_time":"2024-06-05T09:50:37+00:00","og_image":[{"width":1792,"height":1024,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/06\/Warning-letter.webp","type":"image\/webp"}],"author":"Eugene van der Watt","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"Verfasst von":"Eugene van der Watt","Gesch\u00e4tzte Lesezeit":"3\u00a0Minuten"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2024\/06\/former-openai-employees-publish-right-to-warn-open-letter\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2024\/06\/former-openai-employees-publish-right-to-warn-open-letter\/"},"author":{"name":"Eugene van der Watt","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa"},"headline":"Former OpenAI employees publish \u2018Right to Warn\u2019 open letter","datePublished":"2024-06-05T09:50:37+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2024\/06\/former-openai-employees-publish-right-to-warn-open-letter\/"},"wordCount":487,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2024\/06\/former-openai-employees-publish-right-to-warn-open-letter\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/06\/Warning-letter.webp","keywords":["AI risks","Google","OpenAI"],"articleSection":["Ethics &amp; Society"],"inLanguage":"de"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2024\/06\/former-openai-employees-publish-right-to-warn-open-letter\/","url":"https:\/\/dailyai.com\/2024\/06\/former-openai-employees-publish-right-to-warn-open-letter\/","name":"Ehemalige OpenAI-Mitarbeiter 
ver\u00f6ffentlichen offenen Brief mit \"Recht auf Warnung\" | DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2024\/06\/former-openai-employees-publish-right-to-warn-open-letter\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2024\/06\/former-openai-employees-publish-right-to-warn-open-letter\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/06\/Warning-letter.webp","datePublished":"2024-06-05T09:50:37+00:00","breadcrumb":{"@id":"https:\/\/dailyai.com\/2024\/06\/former-openai-employees-publish-right-to-warn-open-letter\/#breadcrumb"},"inLanguage":"de","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2024\/06\/former-openai-employees-publish-right-to-warn-open-letter\/"]}]},{"@type":"ImageObject","inLanguage":"de","@id":"https:\/\/dailyai.com\/2024\/06\/former-openai-employees-publish-right-to-warn-open-letter\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/06\/Warning-letter.webp","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/06\/Warning-letter.webp","width":1792,"height":1024},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2024\/06\/former-openai-employees-publish-right-to-warn-open-letter\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"Former OpenAI employees publish \u2018Right to Warn\u2019 open letter"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DailyAI","description":"Ihre t\u00e4gliche Dosis an 
AI-Nachrichten","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"de"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DailyAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"de","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa","name":"Eugene van der Watt","image":{"@type":"ImageObject","inLanguage":"de","@id":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","caption":"Eugene van der Watt"},"description":"Eugene kommt aus der Elektronikbranche und liebt alles, was mit Technik zu tun hat. 
Wenn er eine Pause vom Konsum von KI-Nachrichten einlegt, findet man ihn am Snookertisch.","sameAs":["www.linkedin.com\/in\/eugene-van-der-watt-16828119"],"url":"https:\/\/dailyai.com\/de\/author\/eugene\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/posts\/12729","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/comments?post=12729"}],"version-history":[{"count":3,"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/posts\/12729\/revisions"}],"predecessor-version":[{"id":12733,"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/posts\/12729\/revisions\/12733"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/media\/12731"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/media?parent=12729"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/categories?post=12729"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/tags?post=12729"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}