{"id":3315,"date":"2023-07-29T11:38:41","date_gmt":"2023-07-29T11:38:41","guid":{"rendered":"https:\/\/dailyai.com\/?p=3315"},"modified":"2023-07-29T11:38:41","modified_gmt":"2023-07-29T11:38:41","slug":"googles-ai-turns-vision-language-into-robotic-actions","status":"publish","type":"post","link":"https:\/\/dailyai.com\/pt\/2023\/07\/googles-ai-turns-vision-language-into-robotic-actions\/","title":{"rendered":"Google\u2019s AI turns vision &amp; language into robotic actions"},"content":{"rendered":"<p><strong>Google showcased some exciting test results of its latest vision-language-action (VLA) robot model, called Robotics Transformer 2 (RT-2).<\/strong><\/p>\n<p><span style=\"font-weight: 400;\">The bulk of recent AI discussion has centered on large language models like ChatGPT and Llama. The responses these models provide, while useful, remain on the screen of your device. With RT-2, Google is bringing the power of AI to the physical world. A world where self-learning robots could soon be part of our everyday lives.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">There has been a big improvement in the dexterity of robots, but they still need very specific programming instructions to accomplish even simple tasks. When the task changes, even slightly, the program has to be changed too.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">With RT-2, Google has created a model that lets a robot classify and learn from the things it sees, combined with the words it hears. It then reasons about the instructions it receives and takes physical action in response.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">With LLMs, a sentence is broken down into tokens, essentially chunks of words that allow the AI to understand the sentence. Google applied the same principle and tokenized the movements a robot would have to make in response to a command.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The movements of a robotic arm with a gripper, for example, would be broken down into tokens of changes in x and y positions, or rotations.<\/span><\/p>\n<blockquote class=\"twitter-tweet\">\n<p dir=\"ltr\" lang=\"en\" style=\"text-align: center;\">In the past, robots typically needed first-hand experience in order to perform an action. But with our new vision-language-action model, RT-2, they can now learn from text and images from the web to tackle novel and complex tasks. Learn more \u2193 <a href=\"https:\/\/t.co\/4DSRwUHhwg\">https:\/\/t.co\/4DSRwUHhwg<\/a><\/p>\n<p style=\"text-align: center;\">- Google (@Google) <a href=\"https:\/\/twitter.com\/Google\/status\/1684974085837660170?ref_src=twsrc%5Etfw\">July 28, 2023<\/a><\/p>\n<\/blockquote>\n<p><script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/p>\n<h2><span style=\"font-weight: 400;\">What does RT-2 enable a robot to do?<\/span><\/h2>\n<p><span style=\"font-weight: 400;\">Being able to understand what it sees and hears, and to reason in a chain of thought, means the robot does not need to be programmed for new tasks.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">One example Google gave in its DeepMind <\/span><a href=\"https:\/\/www.deepmind.com\/blog\/rt-2-new-model-translates-vision-and-language-into-action\"><span style=\"font-weight: 400;\">blog post on RT-2<\/span><\/a><span style=\"font-weight: 400;\"> was \"deciding which object could be used as an improvised hammer (a rock), or which type of drink is best for a tired person (an energy drink)\".<\/span><\/p>\n<p><span style=\"font-weight: 400;\">In its tests, Google put a robotic arm and gripper through a series of requests that demanded language comprehension, vision, and reasoning in order to take the appropriate action. For example, presented with two bags of chips on a table, one of them slightly over the edge, the robot was told to \"pick up the bag that's about to fall off the table\".<\/span><\/p>\n<p><span style=\"font-weight: 400;\">This may sound simple, but the contextual awareness needed to pick up the correct bag is groundbreaking in the world of robotics.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">To explain how RT-2 is far more advanced than regular LLMs, another Google blog post explained that \"a robot needs to be able to recognize an apple in context, distinguish it from a red ball, understand what it looks like and, most importantly, know how to pick it up\".<\/span><\/p>\n<p><span style=\"font-weight: 400;\">While it is still early days, the prospect of domestic or industrial robots helping with a variety of tasks in changing environments is an exciting one. Defense applications are no doubt attracting attention too.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Google's robotic arm didn't always get it right, and it had a big red emergency stop button in case it malfunctioned. Let's hope future robots come with something similar, in case they ever feel unhappy with the boss.\u00a0<\/span><\/p>","protected":false},"excerpt":{"rendered":"<p>Google showcased some exciting test results of its latest vision-language-action (VLA) robot model, called Robotics Transformer 2 (RT-2). The bulk of recent AI discussion has centered on large language models like ChatGPT and Llama. The responses these models provide, while useful, remain on the screen of your device. With RT-2, Google is bringing the power of AI to the physical world. 
A world where self-learning robots could soon be part of our everyday lives. There has been a big improvement in the dexterity of robots, but they still need very specific programming instructions to accomplish even simple tasks. When<\/p>","protected":false},"author":6,"featured_media":3367,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[84],"tags":[147,102,169],"class_list":["post-3315","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-industry","tag-deepmind","tag-google","tag-robotics"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Google\u2019s AI turns vision &amp; language into robotic actions | DailyAI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/pt\/2023\/07\/googles-ai-turns-vision-language-into-robotic-actions\/\" \/>\n<meta property=\"og:locale\" content=\"pt_PT\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Google\u2019s AI turns vision &amp; language into robotic actions | DailyAI\" \/>\n<meta property=\"og:description\" content=\"Google showcased some exciting test results of its latest vision-language-action (VLA) robot model called Robotics Transformer 2 (RT-2). The bulk of recent AI discussions has centered around large language models like ChatGPT and Llama. The responses these models provide, while useful, remain on the screen of your device. With RT-2, Google is bringing the power of AI to the physical world. A world where self-learning robots could soon be a part of our everyday lives. 
There has been a big improvement in the dexterity of robots but they still need very specific programming instructions to accomplish even simple tasks. When\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/pt\/2023\/07\/googles-ai-turns-vision-language-into-robotic-actions\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2023-07-29T11:38:41+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Google-AI-RT-2-Robotics.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1000\" \/>\n\t<meta property=\"og:image:height\" content=\"563\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Eugene van der Watt\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Escrito por\" \/>\n\t<meta name=\"twitter:data1\" content=\"Eugene van der Watt\" \/>\n\t<meta name=\"twitter:label2\" content=\"Tempo estimado de leitura\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minutos\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/07\\\/googles-ai-turns-vision-language-into-robotic-actions\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/07\\\/googles-ai-turns-vision-language-into-robotic-actions\\\/\"},\"author\":{\"name\":\"Eugene van der Watt\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\"},\"headline\":\"Google\u2019s AI turns vision &#038; language into robotic 
actions\",\"datePublished\":\"2023-07-29T11:38:41+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/07\\\/googles-ai-turns-vision-language-into-robotic-actions\\\/\"},\"wordCount\":558,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/07\\\/googles-ai-turns-vision-language-into-robotic-actions\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Google-AI-RT-2-Robotics.jpg\",\"keywords\":[\"DeepMind\",\"Google\",\"Robotics\"],\"articleSection\":[\"Industry\"],\"inLanguage\":\"pt-PT\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/07\\\/googles-ai-turns-vision-language-into-robotic-actions\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/07\\\/googles-ai-turns-vision-language-into-robotic-actions\\\/\",\"name\":\"Google\u2019s AI turns vision & language into robotic actions | DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/07\\\/googles-ai-turns-vision-language-into-robotic-actions\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/07\\\/googles-ai-turns-vision-language-into-robotic-actions\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Google-AI-RT-2-Robotics.jpg\",\"datePublished\":\"2023-07-29T11:38:41+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/07\\\/googles-ai-turns-vision-language-into-robotic-actions\\\/#breadcrumb\"},\"inLanguage\":\"pt-PT\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2023\\\/07\\\/googles-ai-turns-vision-language-into-robotic-actions\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-PT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/07\\\/googles-ai-turns-vision-language-into-robotic-actions\\\/#
primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Google-AI-RT-2-Robotics.jpg\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Google-AI-RT-2-Robotics.jpg\",\"width\":1000,\"height\":563,\"caption\":\"Google AI RT-2 Robotics\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/07\\\/googles-ai-turns-vision-language-into-robotic-actions\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Google\u2019s AI turns vision &#038; language into robotic actions\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"pt-PT\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-PT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/compan
y\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\",\"name\":\"Eugene van der Watt\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-PT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"caption\":\"Eugene van der Watt\"},\"description\":\"Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.\",\"sameAs\":[\"www.linkedin.com\\\/in\\\/eugene-van-der-watt-16828119\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/pt\\\/author\\\/eugene\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"A IA da Google transforma a vis\u00e3o e a linguagem em ac\u00e7\u00f5es rob\u00f3ticas | DailyAI","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/pt\/2023\/07\/googles-ai-turns-vision-language-into-robotic-actions\/","og_locale":"pt_PT","og_type":"article","og_title":"Google\u2019s AI turns vision & language into robotic actions | DailyAI","og_description":"Google showcased some exciting test results of its latest vision-language-action (VLA) robot model called Robotics Transformer 2 (RT-2). The bulk of recent AI discussions has centered around large language models like ChatGPT and Llama. The responses these models provide, while useful, remain on the screen of your device. With RT-2, Google is bringing the power of AI to the physical world. 
A world where self-learning robots could soon be a part of our everyday lives. There has been a big improvement in the dexterity of robots but they still need very specific programming instructions to accomplish even simple tasks. When","og_url":"https:\/\/dailyai.com\/pt\/2023\/07\/googles-ai-turns-vision-language-into-robotic-actions\/","og_site_name":"DailyAI","article_published_time":"2023-07-29T11:38:41+00:00","og_image":[{"width":1000,"height":563,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Google-AI-RT-2-Robotics.jpg","type":"image\/jpeg"}],"author":"Eugene van der Watt","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"Escrito por":"Eugene van der Watt","Tempo estimado de leitura":"3 minutos"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2023\/07\/googles-ai-turns-vision-language-into-robotic-actions\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2023\/07\/googles-ai-turns-vision-language-into-robotic-actions\/"},"author":{"name":"Eugene van der Watt","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa"},"headline":"Google\u2019s AI turns vision &#038; language into robotic 
actions","datePublished":"2023-07-29T11:38:41+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2023\/07\/googles-ai-turns-vision-language-into-robotic-actions\/"},"wordCount":558,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2023\/07\/googles-ai-turns-vision-language-into-robotic-actions\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Google-AI-RT-2-Robotics.jpg","keywords":["DeepMind","Google","Robotics"],"articleSection":["Industry"],"inLanguage":"pt-PT"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2023\/07\/googles-ai-turns-vision-language-into-robotic-actions\/","url":"https:\/\/dailyai.com\/2023\/07\/googles-ai-turns-vision-language-into-robotic-actions\/","name":"A IA da Google transforma a vis\u00e3o e a linguagem em ac\u00e7\u00f5es rob\u00f3ticas | DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2023\/07\/googles-ai-turns-vision-language-into-robotic-actions\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2023\/07\/googles-ai-turns-vision-language-into-robotic-actions\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Google-AI-RT-2-Robotics.jpg","datePublished":"2023-07-29T11:38:41+00:00","breadcrumb":{"@id":"https:\/\/dailyai.com\/2023\/07\/googles-ai-turns-vision-language-into-robotic-actions\/#breadcrumb"},"inLanguage":"pt-PT","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2023\/07\/googles-ai-turns-vision-language-into-robotic-actions\/"]}]},{"@type":"ImageObject","inLanguage":"pt-PT","@id":"https:\/\/dailyai.com\/2023\/07\/googles-ai-turns-vision-language-into-robotic-actions\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Google-AI-RT-2-Robotics.jpg","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Google-AI-RT-2-Robotics.jpg","width":1000,"height":563,"caption":"Go
ogle AI RT-2 Robotics"},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2023\/07\/googles-ai-turns-vision-language-into-robotic-actions\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"Google\u2019s AI turns vision &#038; language into robotic actions"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DailyAI","description":"A sua dose di\u00e1ria de not\u00edcias sobre IA","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"pt-PT"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DailyAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"pt-PT","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa","name":"Eugene van der 
Watt","image":{"@type":"ImageObject","inLanguage":"pt-PT","@id":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","caption":"Eugene van der Watt"},"description":"Eugene vem de uma forma\u00e7\u00e3o em engenharia eletr\u00f3nica e adora tudo o que \u00e9 tecnologia. Quando faz uma pausa no consumo de not\u00edcias sobre IA, pode encontr\u00e1-lo \u00e0 mesa de snooker.","sameAs":["www.linkedin.com\/in\/eugene-van-der-watt-16828119"],"url":"https:\/\/dailyai.com\/pt\/author\/eugene\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/posts\/3315","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/comments?post=3315"}],"version-history":[{"count":2,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/posts\/3315\/revisions"}],"predecessor-version":[{"id":3368,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/posts\/3315\/revisions\/3368"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/media\/3367"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/media?parent=3315"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/categories?post=3315"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/tags?post=3315"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}