{"id":8006,"date":"2023-12-05T09:02:39","date_gmt":"2023-12-05T09:02:39","guid":{"rendered":"https:\/\/dailyai.com\/?p=8006"},"modified":"2023-12-05T13:15:00","modified_gmt":"2023-12-05T13:15:00","slug":"meta-releases-ego-exo4d-a-multimodal-perception-dataset","status":"publish","type":"post","link":"https:\/\/dailyai.com\/pt\/2023\/12\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\/","title":{"rendered":"Meta releases Ego-Exo4D, a multimodal perception dataset"},"content":{"rendered":"<p><strong>Training AI models like GPT-4 has relied mostly on datasets consisting of text and images. Meta\u2019s Ego-Exo4D multimodal perception dataset presents data scientists with a rich new set of training data.<\/strong><\/p>\n<p>You can learn a new skill by reading a book, but it\u2019s so much easier when someone shows you how to do something while explaining it to you. This is the goal Meta\u2019s FAIR (Fundamental Artificial Intelligence Research) team has for Ego-Exo4D.<\/p>\n<p>The dataset consists of first-person (Ego) and third-person (Exo) perspective videos of people performing different skilled human activities. These activities could be anything from cooking, dancing, or playing music, to repairing a bike. The data was collected in 13 cities around the world by 839 camera wearers, capturing 1,422 hours of video.<\/p>\n<p>The videos, which are filmed simultaneously, are then augmented with additional data modes courtesy of Meta\u2019s Project Aria glasses.<\/p>\n<p>The Project Aria glasses are wearable computers in the form of a pair of glasses. They capture the wearer\u2019s video and audio, as well as eye tracking and location information.
The glasses also detect head poses and 3D point clouds of the environment.<\/p>\n<p>The result is a dataset of simultaneous videos of a task being performed, with first-person narrations by the camera wearers describing their actions, plus head and eye tracking of the person performing the task.<\/p>\n<blockquote class=\"twitter-tweet\" data-media-max-width=\"560\">\n<p dir=\"ltr\" lang=\"en\" style=\"text-align: center;\">Introducing Ego-Exo4D - a foundational dataset and benchmark focused on skilled human activities to support research on video learning and multimodal perception. It\u2019s the largest public dataset of its kind.<\/p>\n<p>More details \u27a1\ufe0f <a href=\"https:\/\/t.co\/82OR4msehv\">https:\/\/t.co\/82OR4msehv<\/a> <a href=\"https:\/\/t.co\/NTI1kdj1RN\">pic.twitter.com\/NTI1kdj1RN<\/a><\/p>\n<p style=\"text-align: center;\">- AI at Meta (@AIatMeta) <a href=\"https:\/\/twitter.com\/AIatMeta\/status\/1731739266856935796?ref_src=twsrc%5Etfw\">December 4, 2023<\/a><\/p>\n<\/blockquote>\n<p><script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/p>\n<p>Meta then added third-person descriptions of each camera wearer\u2019s actions. Meta also hired experts in various domains to add spoken third-person expert commentary, critiquing how the person in the video performed the task.<\/p>\n<p>By collecting both egocentric and exocentric viewpoints, the Ego-Exo4D dataset can show researchers what activities look like from different perspectives.
This could help them develop computer vision algorithms capable of recognizing what a person is doing from any perspective.<\/p>\n<h2>Ego-Exo4D opens up new learning opportunities<\/h2>\n<p>One of the main obstacles to achieving AGI, or to training robots more efficiently, is computers\u2019 lack of sensory perception. As humans, we have so many sensory inputs from our environment that we often take them for granted when learning new skills.<\/p>\n<p>Ego-Exo4D will be an extremely useful resource in helping to close this gap.<\/p>\n<p>Dr. Gedas Bertasius, Assistant Professor in the Department of Computer Science at the University of North Carolina, said, \"Ego-Exo4D is not just about collecting data, it is about changing how AI understands, perceives, and learns. With human-centric learning and perspective, AI can become more helpful in our daily lives, assisting us in ways we have only imagined.\"<\/p>\n<figure id=\"attachment_8008\" aria-describedby=\"caption-attachment-8008\" style=\"width: 1792px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-8008 size-full\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/Ego-Exo4D-training-data-snapshot.png\" alt=\"\" width=\"1792\" height=\"1072\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/Ego-Exo4D-training-data-snapshot.png 1792w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/Ego-Exo4D-training-data-snapshot-300x179.png 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/Ego-Exo4D-training-data-snapshot-1024x613.png 1024w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/Ego-Exo4D-training-data-snapshot-768x459.png 768w, 
https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/Ego-Exo4D-training-data-snapshot-1536x919.png 1536w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/Ego-Exo4D-training-data-snapshot-370x221.png 370w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/Ego-Exo4D-training-data-snapshot-800x479.png 800w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/Ego-Exo4D-training-data-snapshot-20x12.png 20w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/Ego-Exo4D-training-data-snapshot-740x443.png 740w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/Ego-Exo4D-training-data-snapshot-1600x957.png 1600w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/Ego-Exo4D-training-data-snapshot-1320x790.png 1320w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/Ego-Exo4D-training-data-snapshot-80x48.png 80w\" sizes=\"auto, (max-width: 1792px) 100vw, 1792px\" \/><figcaption id=\"caption-attachment-8008\" class=\"wp-caption-text\">Snapshot of Ego-Exo4D training data from the bike repair example. Source: Meta<\/figcaption><\/figure>\n<p>Meta says it hopes Ego-Exo4D will \"enable robots of the future to gain insights about complex dexterous manipulation by watching skilled human experts in action\".<\/p>\n<p>This dataset, combined with the Project Aria glasses, could soon enable a truly immersive learning experience for humans. Imagine performing a task while your glasses use augmented reality (AR) to overlay a tutorial video or guide you through the task.<\/p>\n<p>You could be learning to play the piano, with a visual overlay showing you where your hands should move and real-time audio advice as you play. 
Or you could pop the hood of your car and be guided through troubleshooting and fixing an engine problem.<\/p>\n<p>It will be interesting to see whether Meta\u2019s <a href=\"https:\/\/ai.meta.com\/research\/ego-how-to\/\" target=\"_blank\" rel=\"noopener\">Ego How-To learning concept<\/a> drives better adoption of the Project Aria glasses than the failed Google Glass product managed. However, there is no word yet on when they will be available for purchase.<\/p>\n<p>Meta will make the Ego-Exo4D dataset <a href=\"https:\/\/ego-exo4d-data.org\/\" target=\"_blank\" rel=\"noopener\">available for download<\/a> before the end of December.<\/p>","protected":false},"excerpt":{"rendered":"<p>Training AI models like GPT-4 has relied mostly on datasets consisting of text and images. Meta\u2019s Ego-Exo4D multimodal perception dataset presents data scientists with a rich new set of training data. You can learn a new skill by reading a book, but it\u2019s so much easier when someone shows you how to do something while explaining it to you. This is the goal Meta\u2019s FAIR (Fundamental Artificial Intelligence Research) team has for Ego-Exo4D. The dataset consists of first-person (Ego) and third-person (Exo) perspective videos of people performing different skilled human activities. 
These could be anything from cooking, dancing, playing music,<\/p>","protected":false},"author":6,"featured_media":8009,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[84],"tags":[166,105,131],"class_list":["post-8006","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-industry","tag-computer-vision","tag-machine-learning","tag-meta"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Meta releases Ego-Exo4D, a multimodal perception dataset | DailyAI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/pt\/2023\/12\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\/\" \/>\n<meta property=\"og:locale\" content=\"pt_PT\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Meta releases Ego-Exo4D, a multimodal perception dataset | DailyAI\" \/>\n<meta property=\"og:description\" content=\"Training AI models like GPT-4 has relied mostly on datasets consisting of text and images. Meta\u2019s Ego-Exo4D multimodal perception dataset presents data scientists with a rich new set of training data. You can learn a new skill by reading a book, but it\u2019s so much easier when someone shows you how to do something while explaining it to you. This is the goal Meta\u2019s FAIR (Fundamental Artificial Intelligence Research) team has for Ego-Exo4D. The dataset consists of first-person (Ego) and third-person (Exo) perspective videos of people performing different skilled human activities. 
These could be anything from cooking, dancing, playing music,\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/pt\/2023\/12\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2023-12-05T09:02:39+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2023-12-05T13:15:00+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/augmented-reality-car-repair.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1000\" \/>\n\t<meta property=\"og:image:height\" content=\"666\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Eugene van der Watt\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Eugene van der Watt\" \/>\n\t<meta name=\"twitter:label2\" content=\"Estimated reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\\\/\"},\"author\":{\"name\":\"Eugene van der Watt\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\"},\"headline\":\"Meta releases Ego-Exo4D, a multimodal perception 
dataset\",\"datePublished\":\"2023-12-05T09:02:39+00:00\",\"dateModified\":\"2023-12-05T13:15:00+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\\\/\"},\"wordCount\":662,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/augmented-reality-car-repair.jpg\",\"keywords\":[\"Computer vision\",\"machine learning\",\"Meta\"],\"articleSection\":[\"Industry\"],\"inLanguage\":\"pt-PT\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\\\/\",\"name\":\"Meta releases Ego-Exo4D, a multimodal perception dataset | 
DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/augmented-reality-car-repair.jpg\",\"datePublished\":\"2023-12-05T09:02:39+00:00\",\"dateModified\":\"2023-12-05T13:15:00+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\\\/#breadcrumb\"},\"inLanguage\":\"pt-PT\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-PT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/augmented-reality-car-repair.jpg\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/augmented-reality-car-repair.jpg\",\"width\":1000,\"height\":666},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Meta releases Ego-Exo4D, a multimodal perception dataset\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI 
News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"pt-PT\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-PT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\",\"name\":\"Eugene van der Watt\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-PT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"caption\":\"Eugene van der Watt\"},\"description\":\"Eugene comes from an electronic engineering background and loves all things tech. 
When he takes a break from consuming AI news you'll find him at the snooker table.\",\"sameAs\":[\"www.linkedin.com\\\/in\\\/eugene-van-der-watt-16828119\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/pt\\\/author\\\/eugene\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Meta releases Ego-Exo4D, a multimodal perception dataset | DailyAI","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/pt\/2023\/12\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\/","og_locale":"pt_PT","og_type":"article","og_title":"Meta releases Ego-Exo4D, a multimodal perception dataset | DailyAI","og_description":"Training AI models like GPT-4 has relied mostly on datasets consisting of text and images. Meta\u2019s Ego-Exo4D multimodal perception dataset presents data scientists with a rich new set of training data. You can learn a new skill by reading a book, but it\u2019s so much easier when someone shows you how to do something while explaining it to you. This is the goal Meta\u2019s FAIR (Fundamental Artificial Intelligence Research) team has for Ego-Exo4D. The dataset consists of first-person (Ego) and third-person (Exo) perspective videos of people performing different skilled human activities. 
These could be anything from cooking, dancing, playing music,","og_url":"https:\/\/dailyai.com\/pt\/2023\/12\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\/","og_site_name":"DailyAI","article_published_time":"2023-12-05T09:02:39+00:00","article_modified_time":"2023-12-05T13:15:00+00:00","og_image":[{"width":1000,"height":666,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/augmented-reality-car-repair.jpg","type":"image\/jpeg"}],"author":"Eugene van der Watt","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"Written by":"Eugene van der Watt","Estimated reading time":"3 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2023\/12\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2023\/12\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\/"},"author":{"name":"Eugene van der Watt","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa"},"headline":"Meta releases Ego-Exo4D, a multimodal perception dataset","datePublished":"2023-12-05T09:02:39+00:00","dateModified":"2023-12-05T13:15:00+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2023\/12\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\/"},"wordCount":662,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2023\/12\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/augmented-reality-car-repair.jpg","keywords":["Computer vision","machine 
learning","Meta"],"articleSection":["Industry"],"inLanguage":"pt-PT"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2023\/12\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\/","url":"https:\/\/dailyai.com\/2023\/12\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\/","name":"Meta releases Ego-Exo4D, a multimodal perception dataset | DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2023\/12\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2023\/12\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/augmented-reality-car-repair.jpg","datePublished":"2023-12-05T09:02:39+00:00","dateModified":"2023-12-05T13:15:00+00:00","breadcrumb":{"@id":"https:\/\/dailyai.com\/2023\/12\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\/#breadcrumb"},"inLanguage":"pt-PT","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2023\/12\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\/"]}]},{"@type":"ImageObject","inLanguage":"pt-PT","@id":"https:\/\/dailyai.com\/2023\/12\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/augmented-reality-car-repair.jpg","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/augmented-reality-car-repair.jpg","width":1000,"height":666},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2023\/12\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"Meta releases Ego-Exo4D, a multimodal perception 
dataset"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DailyAI","description":"Your Daily Dose of AI News","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"pt-PT"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DailyAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"pt-PT","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa","name":"Eugene van der Watt","image":{"@type":"ImageObject","inLanguage":"pt-PT","@id":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","caption":"Eugene van der Watt"},"description":"Eugene comes from an electronic engineering background and loves all things tech. 
When he takes a break from consuming AI news you'll find him at the snooker table.","sameAs":["www.linkedin.com\/in\/eugene-van-der-watt-16828119"],"url":"https:\/\/dailyai.com\/pt\/author\/eugene\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/posts\/8006","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/comments?post=8006"}],"version-history":[{"count":4,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/posts\/8006\/revisions"}],"predecessor-version":[{"id":8021,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/posts\/8006\/revisions\/8021"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/media\/8009"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/media?parent=8006"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/categories?post=8006"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/tags?post=8006"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}