{"id":11227,"date":"2024-04-03T10:42:20","date_gmt":"2024-04-03T10:42:20","guid":{"rendered":"https:\/\/dailyai.com\/?p=11227"},"modified":"2024-04-03T10:42:20","modified_gmt":"2024-04-03T10:42:20","slug":"apples-realm-sees-on-screen-visuals-better-than-gpt-4","status":"publish","type":"post","link":"https:\/\/dailyai.com\/pt\/2024\/04\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\/","title":{"rendered":"O ReALM da Apple \"v\u00ea\" melhor as imagens no ecr\u00e3 do que o GPT-4"},"content":{"rendered":"<p><strong>Os engenheiros da Apple desenvolveram um sistema de IA que resolve refer\u00eancias complexas a entidades no ecr\u00e3 e a conversas com o utilizador. O modelo leve poder\u00e1 ser a solu\u00e7\u00e3o ideal para assistentes virtuais no dispositivo.<\/strong><\/p>\n<p>Os seres humanos s\u00e3o bons a resolver refer\u00eancias em conversas uns com os outros. Quando usamos termos como \"o de baixo\" ou \"ele\", compreendemos a que \u00e9 que a pessoa se est\u00e1 a referir com base no contexto da conversa e em coisas que podemos ver.<\/p>\n<p>\u00c9 muito mais dif\u00edcil para um modelo de IA fazer isto. Os LLMs multimodais, como o GPT-4, s\u00e3o bons a responder a perguntas sobre imagens, mas s\u00e3o dispendiosos de treinar e requerem uma grande sobrecarga de computa\u00e7\u00e3o para processar cada consulta sobre uma imagem.<\/p>\n<p>Os engenheiros da Apple adoptaram uma abordagem diferente com o seu sistema, denominado ReALM (Reference Resolution As Language Modeling). 
<a href=\"https:\/\/arxiv.org\/pdf\/2403.20329.pdf\" target=\"_blank\" rel=\"noopener\">O jornal<\/a> vale a pena ler para obter mais pormenores sobre o seu processo de desenvolvimento e teste.<\/p>\n<p>O ReALM utiliza um LLM para processar entidades de conversa\u00e7\u00e3o, no ecr\u00e3 e de fundo (alarmes, m\u00fasica de fundo) que constituem as interac\u00e7\u00f5es de um utilizador com um agente de IA virtual.<\/p>\n<p>Eis um exemplo do tipo de intera\u00e7\u00e3o que um utilizador pode ter com um agente de IA.<\/p>\n<figure id=\"attachment_11231\" aria-describedby=\"caption-attachment-11231\" style=\"width: 746px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-11231\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/Agent-interactions.png\" alt=\"\" width=\"746\" height=\"298\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/Agent-interactions.png 746w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/Agent-interactions-300x120.png 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/Agent-interactions-60x24.png 60w\" sizes=\"auto, (max-width: 746px) 100vw, 746px\" \/><figcaption id=\"caption-attachment-11231\" class=\"wp-caption-text\">Exemplos de interac\u00e7\u00f5es de um utilizador com um assistente virtual. Fonte: arXiv<\/figcaption><\/figure>\n<p>O agente precisa de compreender entidades conversacionais como o facto de que quando o utilizador diz \"the one\" est\u00e1 a referir-se ao n\u00famero de telefone da farm\u00e1cia.<\/p>\n<p>Tamb\u00e9m precisa de compreender o contexto visual quando o utilizador diz \"o de baixo\", e \u00e9 aqui que a abordagem do ReALM difere de modelos como o GPT-4.<\/p>\n<p>O ReALM baseia-se em codificadores a montante para analisar primeiro os elementos no ecr\u00e3 e as suas posi\u00e7\u00f5es. 
ReALM then reconstructs the screen into a purely textual representation, left to right and top to bottom.<\/p>\n<p>In simple terms, it uses natural language to summarize the user's screen.<\/p>\n<p>Now, when a user asks a question about something on the screen, the language model processes the text description of the screen instead of needing a vision model to process the on-screen image.<\/p>\n<p>The researchers created synthetic datasets of conversational, on-screen, and background entities and benchmarked ReALM against other models to gauge how effectively each resolves references in conversational systems.<\/p>\n<p>The smallest version of ReALM (80M parameters) performs comparably to GPT-4, and its larger version (3B parameters) substantially outperforms GPT-4.<\/p>\n<p>ReALM is a small model compared to GPT-4. Its superior reference resolution makes it the ideal choice for a virtual assistant that can live on-device without compromising performance.<\/p>\n<p>ReALM doesn't perform as well with more complex images or more nuanced user requests, but it could work well as an in-car or on-device virtual assistant. 
Imagine if Siri could \"see\" your iPhone's screen and respond to references to on-screen elements.<\/p>\n<p>Apple has been a little slow out of the blocks, but recent developments like its <a href=\"https:\/\/dailyai.com\/pt\/2024\/03\/apple-reveals-mm1-its-first-family-of-multimodal-llms\/\">MM1 model<\/a> and ReALM show that a lot is happening behind closed doors.<\/p>","protected":false},"excerpt":{"rendered":"<p>Apple engineers developed an AI system that resolves complex references to on-screen entities and user conversations. The lightweight model could be an ideal solution for on-device virtual assistants. Humans are good at resolving references in conversations with each other. When we use terms like \"the bottom one\" or \"him\", we understand what the person is referring to based on the context of the conversation and the things we can see. It's a lot more difficult for an AI model to do this. 
Multimodal LLMs like GPT-4 are good at answering questions about images but are expensive to train and require a<\/p>","protected":false},"author":6,"featured_media":11232,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[84],"tags":[126,166,118],"class_list":["post-11227","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-industry","tag-apple","tag-computer-vision","tag-llms"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Apple\u2019s ReALM \u2018sees\u2019 on-screen visuals better than GPT-4 | DailyAI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/pt\/2024\/04\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\/\" \/>\n<meta property=\"og:locale\" content=\"pt_PT\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Apple\u2019s ReALM \u2018sees\u2019 on-screen visuals better than GPT-4 | DailyAI\" \/>\n<meta property=\"og:description\" content=\"Apple engineers developed an AI system that resolves complex references to on-screen entities and user conversations. The lightweight model could be an ideal solution for on-device virtual assistants. Humans are good at resolving references in conversations with each other. When we use terms like \u201cthe bottom one\u201d or \u201chim\u201d we understand what the person is referring to based on the context of the conversation and things we can see. It\u2019s a lot more difficult for an AI model to do this. 
Multimodal LLMs like GPT-4 are good at answering questions about images but are expensive to train and require a\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/pt\/2024\/04\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2024-04-03T10:42:20+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/Apple-ReALM.webp\" \/>\n\t<meta property=\"og:image:width\" content=\"1792\" \/>\n\t<meta property=\"og:image:height\" content=\"1024\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/webp\" \/>\n<meta name=\"author\" content=\"Eugene van der Watt\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Escrito por\" \/>\n\t<meta name=\"twitter:data1\" content=\"Eugene van der Watt\" \/>\n\t<meta name=\"twitter:label2\" content=\"Tempo estimado de leitura\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minutos\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\\\/\"},\"author\":{\"name\":\"Eugene van der Watt\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\"},\"headline\":\"Apple\u2019s ReALM \u2018sees\u2019 on-screen visuals better than 
GPT-4\",\"datePublished\":\"2024-04-03T10:42:20+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\\\/\"},\"wordCount\":486,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/04\\\/Apple-ReALM.webp\",\"keywords\":[\"Apple\",\"Computer vision\",\"LLMS\"],\"articleSection\":[\"Industry\"],\"inLanguage\":\"pt-PT\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\\\/\",\"name\":\"Apple\u2019s ReALM \u2018sees\u2019 on-screen visuals better than GPT-4 | DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/04\\\/Apple-ReALM.webp\",\"datePublished\":\"2024-04-03T10:42:20+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\\\/#breadcrumb\"},\"inLanguage\":\"pt-PT\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-PT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\\\/#primaryimage
\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/04\\\/Apple-ReALM.webp\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/04\\\/Apple-ReALM.webp\",\"width\":1792,\"height\":1024},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Apple\u2019s ReALM \u2018sees\u2019 on-screen visuals better than GPT-4\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"pt-PT\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-PT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@Daily
AIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\",\"name\":\"Eugene van der Watt\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-PT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"caption\":\"Eugene van der Watt\"},\"description\":\"Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.\",\"sameAs\":[\"www.linkedin.com\\\/in\\\/eugene-van-der-watt-16828119\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/pt\\\/author\\\/eugene\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"O ReALM da Apple \"v\u00ea\" melhor as imagens no ecr\u00e3 do que o GPT-4 | DailyAI","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/pt\/2024\/04\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\/","og_locale":"pt_PT","og_type":"article","og_title":"Apple\u2019s ReALM \u2018sees\u2019 on-screen visuals better than GPT-4 | DailyAI","og_description":"Apple engineers developed an AI system that resolves complex references to on-screen entities and user conversations. The lightweight model could be an ideal solution for on-device virtual assistants. Humans are good at resolving references in conversations with each other. When we use terms like \u201cthe bottom one\u201d or \u201chim\u201d we understand what the person is referring to based on the context of the conversation and things we can see. 
It\u2019s a lot more difficult for an AI model to do this. Multimodal LLMs like GPT-4 are good at answering questions about images but are expensive to train and require a","og_url":"https:\/\/dailyai.com\/pt\/2024\/04\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\/","og_site_name":"DailyAI","article_published_time":"2024-04-03T10:42:20+00:00","og_image":[{"width":1792,"height":1024,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/Apple-ReALM.webp","type":"image\/webp"}],"author":"Eugene van der Watt","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"Escrito por":"Eugene van der Watt","Tempo estimado de leitura":"3 minutos"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2024\/04\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2024\/04\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\/"},"author":{"name":"Eugene van der Watt","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa"},"headline":"Apple\u2019s ReALM \u2018sees\u2019 on-screen visuals better than GPT-4","datePublished":"2024-04-03T10:42:20+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2024\/04\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\/"},"wordCount":486,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2024\/04\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/Apple-ReALM.webp","keywords":["Apple","Computer vision","LLMS"],"articleSection":["Industry"],"inLanguage":"pt-PT"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2024\/04\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\/","url":"https:\/\/dailyai.com\/2024\/04\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\/","name":"O 
ReALM da Apple \"v\u00ea\" melhor as imagens no ecr\u00e3 do que o GPT-4 | DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2024\/04\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2024\/04\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/Apple-ReALM.webp","datePublished":"2024-04-03T10:42:20+00:00","breadcrumb":{"@id":"https:\/\/dailyai.com\/2024\/04\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\/#breadcrumb"},"inLanguage":"pt-PT","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2024\/04\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\/"]}]},{"@type":"ImageObject","inLanguage":"pt-PT","@id":"https:\/\/dailyai.com\/2024\/04\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/Apple-ReALM.webp","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/Apple-ReALM.webp","width":1792,"height":1024},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2024\/04\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"Apple\u2019s ReALM \u2018sees\u2019 on-screen visuals better than GPT-4"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DailyAI","description":"A sua dose di\u00e1ria de not\u00edcias sobre 
IA","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"pt-PT"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DailyAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"pt-PT","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa","name":"Eugene van der Watt","image":{"@type":"ImageObject","inLanguage":"pt-PT","@id":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","caption":"Eugene van der Watt"},"description":"Eugene vem de uma forma\u00e7\u00e3o em engenharia eletr\u00f3nica e adora tudo o que \u00e9 tecnologia. 
Quando faz uma pausa no consumo de not\u00edcias sobre IA, pode encontr\u00e1-lo \u00e0 mesa de snooker.","sameAs":["www.linkedin.com\/in\/eugene-van-der-watt-16828119"],"url":"https:\/\/dailyai.com\/pt\/author\/eugene\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/posts\/11227","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/comments?post=11227"}],"version-history":[{"count":3,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/posts\/11227\/revisions"}],"predecessor-version":[{"id":11234,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/posts\/11227\/revisions\/11234"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/media\/11232"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/media?parent=11227"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/categories?post=11227"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/tags?post=11227"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}