{"id":12729,"date":"2024-06-05T09:50:37","date_gmt":"2024-06-05T09:50:37","guid":{"rendered":"https:\/\/dailyai.com\/?p=12729"},"modified":"2024-06-05T09:50:37","modified_gmt":"2024-06-05T09:50:37","slug":"former-openai-employees-publish-right-to-warn-open-letter","status":"publish","type":"post","link":"https:\/\/dailyai.com\/pt\/2024\/06\/former-openai-employees-publish-right-to-warn-open-letter\/","title":{"rendered":"Former OpenAI employees publish \"Right to Warn\" open letter"},"content":{"rendered":"<p><strong>A group of former and current OpenAI and Google employees is calling AI companies out on what they say is a dangerous culture of secrecy surrounding AI risks.<\/strong><\/p>\n<p>The letter, titled \"A Right to Warn about Advanced Artificial Intelligence\", states that AI companies have strong financial incentives to avoid effective oversight of potential AI risks.<\/p>\n<p>Besides being reckless by focusing on financial objectives instead of safety, the letter says that companies use punitive confidentiality agreements to actively discourage employees from raising their concerns.<\/p>\n<p>The signatories are all former OpenAI and Google employees, with Neel Nanda the only one still working at Google. 
The letter was also endorsed by Yoshua Bengio, Geoffrey Hinton, and Stuart Russell, leading AI experts.<\/p>\n<p>As evidence of how sensitive it is to call out their former employers, 6 of the signatories declined to have their names revealed on the letter.<\/p>\n<p>Former OpenAI researchers Daniel Kokotajlo and William Saunders, who also signed the letter, left the company earlier this year.<\/p>\n<p>Kokotajlo was part of the governance team and Saunders worked on OpenAI's Superalignment team, which was dissolved last month when <a href=\"https:\/\/dailyai.com\/pt\/2024\/05\/openais-superalignment-meltdown-can-the-company-salvage-any-trust\/\">Ilya Sutskever and Jan Leike also left over safety concerns<\/a>.<\/p>\n<p>Kokotajlo explained his reason for leaving on a forum, saying he thinks OpenAI won't \"behave responsibly around the time of AGI\".<\/p>\n<h2>A call to action<\/h2>\n<p>In the absence of regulation governing AI risks the public is unaware of, the letter calls for a stronger commitment from AI companies.<\/p>\n<p>The letter says that \"normal whistleblower protections are insufficient because they focus on illegal activities, while many of the risks we are concerned about are still unregulated\".<\/p>\n<p>The letter calls on AI companies to commit to four principles. In short, they want companies to:<\/p>\n<ul>\n<li>Not enter into or enforce agreements that prohibit criticism of the company over safety concerns, nor withhold financial benefits owed to the employee. 
(ahem, OpenAI)<\/li>\n<li>Facilitate an anonymous process for employees to raise risk-related concerns to the company's board or to other regulatory organizations.<\/li>\n<li>Support a culture of open criticism that allows employees to make risk-related concerns public without revealing intellectual property.<\/li>\n<li>Not retaliate against current and former employees who publicly share risk-related confidential information after other processes have failed.<\/li>\n<\/ul>\n<p>Several of the names on the list of signatories consider themselves effective altruists. From their posts and comments, it is clear that people like Daniel Kokotajlo (Less Wrong) and William Saunders (AI Alignment Forum) believe things could end very badly if AI risks are not managed.<\/p>\n<p>But these are not doomsaying forum trolls sounding off from the sidelines. These are top intellects that companies like OpenAI and Google saw fit to employ to build the technology they now fear.<\/p>\n<p>And now they are saying: \"We have seen things that scare us. We want to warn people, but we are not allowed to\".<\/p>\n<p>You can read the <a href=\"https:\/\/righttowarn.ai\/\" target=\"_blank\" rel=\"noopener\">letter here<\/a>.<\/p>","protected":false},"excerpt":{"rendered":"<p>A group of former and current OpenAI and Google employees is calling AI companies out on what they say is a dangerous culture of secrecy surrounding AI risks. 
The letter, titled \"A Right to Warn about Advanced Artificial Intelligence\", states that AI companies have strong financial incentives to avoid effective oversight of potential AI risks. Besides being reckless by focusing on financial objectives instead of safety, the letter says that companies use punitive confidentiality agreements to actively discourage employees from raising concerns. The signatories are all former OpenAI and Google employees, with Neel Nanda, the only one still working at<\/p>","protected":false},"author":6,"featured_media":12731,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[88],"tags":[163,102,93],"class_list":["post-12729","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ethics","tag-ai-risks","tag-google","tag-openai"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Former OpenAI employees publish \u2018Right to Warn\u2019 open letter | DailyAI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/pt\/2024\/06\/former-openai-employees-publish-right-to-warn-open-letter\/\" \/>\n<meta property=\"og:locale\" content=\"pt_PT\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Former OpenAI employees publish \u2018Right to Warn\u2019 open letter | DailyAI\" \/>\n<meta property=\"og:description\" content=\"A group of former and current OpenAI and Google employees are calling AI companies out on what they say is a dangerous culture of secrecy surrounding AI risks. 
The letter titled \u201cA right to warn about advanced artificial intelligence\u201d states that AI companies have strong financial incentives to avoid effective oversight of potential AI risks. Besides being reckless by focusing on financial objectives instead of safety, the letter says that companies use punitive confidentiality agreements to actively discourage employees from raising concerns. The signatories are all former OpenAI and Google employees, with Neel Nanda, the only one still working at\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/pt\/2024\/06\/former-openai-employees-publish-right-to-warn-open-letter\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2024-06-05T09:50:37+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/06\/Warning-letter.webp\" \/>\n\t<meta property=\"og:image:width\" content=\"1792\" \/>\n\t<meta property=\"og:image:height\" content=\"1024\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/webp\" \/>\n<meta name=\"author\" content=\"Eugene van der Watt\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Eugene van der Watt\" \/>\n\t<meta name=\"twitter:label2\" content=\"Estimated reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minutes\" \/>\n<script type=\"application\/ld+json\" 
class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/former-openai-employees-publish-right-to-warn-open-letter\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/former-openai-employees-publish-right-to-warn-open-letter\\\/\"},\"author\":{\"name\":\"Eugene van der Watt\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\"},\"headline\":\"Former OpenAI employees publish \u2018Right to Warn\u2019 open letter\",\"datePublished\":\"2024-06-05T09:50:37+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/former-openai-employees-publish-right-to-warn-open-letter\\\/\"},\"wordCount\":487,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/former-openai-employees-publish-right-to-warn-open-letter\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/06\\\/Warning-letter.webp\",\"keywords\":[\"AI risks\",\"Google\",\"OpenAI\"],\"articleSection\":[\"Ethics &amp; Society\"],\"inLanguage\":\"pt-PT\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/former-openai-employees-publish-right-to-warn-open-letter\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/former-openai-employees-publish-right-to-warn-open-letter\\\/\",\"name\":\"Former OpenAI employees publish \u2018Right to Warn\u2019 open letter | 
DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/former-openai-employees-publish-right-to-warn-open-letter\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/former-openai-employees-publish-right-to-warn-open-letter\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/06\\\/Warning-letter.webp\",\"datePublished\":\"2024-06-05T09:50:37+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/former-openai-employees-publish-right-to-warn-open-letter\\\/#breadcrumb\"},\"inLanguage\":\"pt-PT\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/former-openai-employees-publish-right-to-warn-open-letter\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-PT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/former-openai-employees-publish-right-to-warn-open-letter\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/06\\\/Warning-letter.webp\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/06\\\/Warning-letter.webp\",\"width\":1792,\"height\":1024},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/06\\\/former-openai-employees-publish-right-to-warn-open-letter\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Former OpenAI employees publish \u2018Right to Warn\u2019 open letter\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI 
News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"pt-PT\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-PT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\",\"name\":\"Eugene van der Watt\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-PT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"caption\":\"Eugene van der Watt\"},\"description\":\"Eugene comes from an electronic engineering background and loves all things tech. 
When he takes a break from consuming AI news you'll find him at the snooker table.\",\"sameAs\":[\"www.linkedin.com\\\/in\\\/eugene-van-der-watt-16828119\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/pt\\\/author\\\/eugene\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","_links":{"self":[{"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/posts\/12729","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/comments?post=12729"}],"version-history":[{"count":3,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/posts\/12729\/revisions"}],"predecessor-version":[{"id":12733,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/posts\/12729\/revisions\/12733"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/media\/12731"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/media?parent=12729"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/categories?post=12729"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/tags?post=12729"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}