{"id":8006,"date":"2023-12-05T09:02:39","date_gmt":"2023-12-05T09:02:39","guid":{"rendered":"https:\/\/dailyai.com\/?p=8006"},"modified":"2023-12-05T13:15:00","modified_gmt":"2023-12-05T13:15:00","slug":"meta-releases-ego-exo4d-a-multimodal-perception-dataset","status":"publish","type":"post","link":"https:\/\/dailyai.com\/fr\/2023\/12\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\/","title":{"rendered":"Meta lance Ego-Exo4D, un ensemble de donn\u00e9es de perception multimodale"},"content":{"rendered":"<p><strong>L'entra\u00eenement de mod\u00e8les d'IA tels que GPT-4 s'est principalement appuy\u00e9 sur des ensembles de donn\u00e9es constitu\u00e9s de textes et d'images. L'ensemble de donn\u00e9es de perception multimodale Ego-Exo4D de Meta offre aux scientifiques des donn\u00e9es un nouvel ensemble riche de donn\u00e9es d'entra\u00eenement.<\/strong><\/p>\n<p>Vous pouvez apprendre une nouvelle comp\u00e9tence en lisant un livre, mais c'est tellement plus facile quand quelqu'un vous montre comment faire quelque chose tout en vous l'expliquant. C'est l'objectif de l'\u00e9quipe FAIR (Fundamental Artificial Intelligence Research) de Meta pour Ego-Exo4D.<\/p>\n<p>L'ensemble de donn\u00e9es se compose de vid\u00e9os \u00e0 la premi\u00e8re personne (Ego) et \u00e0 la troisi\u00e8me personne (Exo) montrant des personnes effectuant diff\u00e9rentes activit\u00e9s humaines qualifi\u00e9es. Il peut s'agir de cuisiner, de danser, de jouer de la musique ou de r\u00e9parer un v\u00e9lo. Les donn\u00e9es ont \u00e9t\u00e9 recueillies dans 13 villes du monde entier par 839 porteurs de cam\u00e9ras, qui ont captur\u00e9 1 422 heures de vid\u00e9o.<\/p>\n<p>Les vid\u00e9os, qui sont film\u00e9es simultan\u00e9ment, sont ensuite enrichies de modes de donn\u00e9es suppl\u00e9mentaires gr\u00e2ce aux lunettes Project Aria de Meta.<\/p>\n<p>Les lunettes du projet Aria sont des ordinateurs portables sous forme de lunettes. 
They capture video and audio data from the wearer, along with eye-tracking and location information. The glasses also detect head poses and 3D point clouds of the environment.<\/p>\n<p>The result is a dataset of simultaneous videos of a task being performed, with first-person narrations by the camera wearers describing their actions, plus head and eye tracking of the person performing the task.<\/p>\n<blockquote class=\"twitter-tweet\" data-media-max-width=\"560\">\n<p dir=\"ltr\" lang=\"en\" style=\"text-align: center;\">Introducing Ego-Exo4D: a foundational dataset and benchmark suite focused on skilled human activities to support research on video learning and multimodal perception. It is the largest public dataset of its kind.<\/p>\n<p>More details \u27a1\ufe0f <a href=\"https:\/\/t.co\/82OR4msehv\">https:\/\/t.co\/82OR4msehv<\/a> <a href=\"https:\/\/t.co\/NTI1kdj1RN\">pic.twitter.com\/NTI1kdj1RN<\/a><\/p>\n<p style=\"text-align: center;\">- AI at Meta (@AIatMeta) <a href=\"https:\/\/twitter.com\/AIatMeta\/status\/1731739266856935796?ref_src=twsrc%5Etfw\">December 4, 2023<\/a><\/p>\n<\/blockquote>\n<p><script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/p>\n<p>Meta then added third-person descriptions of each camera wearer's actions. 
Meta also enlisted experts in a range of fields to add third-person expert commentary critiquing how the person in the video performed the task.<\/p>\n<p>By collecting both egocentric and exocentric views, the Ego-Exo4D dataset can show researchers what activities look like from different perspectives. This could help them develop computer vision algorithms that recognize what a person is doing, whatever the point of view.<\/p>\n<h2>Ego-Exo4D opens up new learning opportunities<\/h2>\n<p>One of the main obstacles to achieving AGI, or to training robots more effectively, is computers' lack of sensory perception. As humans, we have a wealth of sensory information from our environment that we often take for granted when acquiring new skills.<\/p>\n<p>Ego-Exo4D will be an extremely useful resource in closing this gap.<\/p>\n<p>Gedas Bertasius, an assistant professor in the computer science department at the University of North Carolina, said: \"Ego-Exo4D is not just about collecting data, it is about changing how AI understands, sees, and learns. 
With human-centric learning and perspective, AI can become more helpful in our daily lives, assisting us in ways we have only imagined.\"<\/p>\n<figure id=\"attachment_8008\" aria-describedby=\"caption-attachment-8008\" style=\"width: 1792px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-8008 size-full\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/Ego-Exo4D-training-data-snapshot.png\" alt=\"\" width=\"1792\" height=\"1072\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/Ego-Exo4D-training-data-snapshot.png 1792w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/Ego-Exo4D-training-data-snapshot-300x179.png 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/Ego-Exo4D-training-data-snapshot-1024x613.png 1024w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/Ego-Exo4D-training-data-snapshot-768x459.png 768w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/Ego-Exo4D-training-data-snapshot-1536x919.png 1536w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/Ego-Exo4D-training-data-snapshot-370x221.png 370w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/Ego-Exo4D-training-data-snapshot-800x479.png 800w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/Ego-Exo4D-training-data-snapshot-20x12.png 20w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/Ego-Exo4D-training-data-snapshot-740x443.png 740w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/Ego-Exo4D-training-data-snapshot-1600x957.png 1600w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/Ego-Exo4D-training-data-snapshot-1320x790.png 1320w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/Ego-Exo4D-training-data-snapshot-80x48.png 80w\" sizes=\"auto, (max-width: 1792px) 100vw, 1792px\" \/><figcaption id=\"caption-attachment-8008\" class=\"wp-caption-text\">A snapshot of Ego-Exo4D training data from the bike repair example. Data source: Meta<\/figcaption><\/figure>\n<p>Meta hopes that Ego-Exo4D will \"enable robots of the future to gain insights about complex dexterous manipulations by watching skilled human experts in action\".<\/p>\n<p>This dataset, combined with the Project Aria glasses, could soon give people a truly immersive learning experience. Imagine performing a task while your glasses use augmented reality (AR) to overlay a tutorial video or talk you through the steps.<\/p>\n<p>You could learn to play the piano with a visual overlay showing where your hands need to move, along with real-time audio guidance. Or you could pop the hood of your car and be guided through troubleshooting and fixing an engine problem.<\/p>\n<p>It will be interesting to see whether Meta's <a href=\"https:\/\/ai.meta.com\/research\/ego-how-to\/\" target=\"_blank\" rel=\"noopener\">Ego How-To learning concept<\/a> and the Project Aria glasses are adopted more widely than the failed Google Glass. There is no word yet on when they will be available to buy.<\/p>\n<p>Meta will make the Ego-Exo4D dataset <a href=\"https:\/\/ego-exo4d-data.org\/\" target=\"_blank\" rel=\"noopener\">available for download<\/a> before the end of December.<\/p>","protected":false},"excerpt":{"rendered":"<p>Training AI models like GPT-4 has relied mostly on datasets consisting of text and images. 
Meta's Ego-Exo4D multimodal perception dataset presents data scientists with a rich new set of training data. You can learn a new skill by reading a book, but it's so much easier when someone shows you how to do something while explaining it to you. This is the goal Meta's FAIR (Fundamental Artificial Intelligence Research) team has for Ego-Exo4D. The dataset consists of first-person (Ego) and third-person (Exo) perspective videos of people performing different skilled human activities. These could be anything from cooking, dancing, playing music,<\/p>","protected":false},"author":6,"featured_media":8009,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[84],"tags":[166,105,131],"class_list":["post-8006","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-industry","tag-computer-vision","tag-machine-learning","tag-meta"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Meta releases Ego-Exo4D, a multimodal perception dataset | DailyAI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/fr\/2023\/12\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\/\" \/>\n<meta property=\"og:locale\" content=\"fr_FR\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Meta releases Ego-Exo4D, a multimodal perception dataset | DailyAI\" \/>\n<meta property=\"og:description\" content=\"Training AI models like GPT-4 has relied mostly on datasets consisting of text and images. 
Meta\u2019s Ego-Exo4D multimodal perception dataset presents data scientists with a rich new set of training data. You can learn a new skill by reading a book, but it\u2019s so much easier when someone shows you how to do something while explaining it to you. This is the goal Meta\u2019s FAIR (Fundamental Artificial Intelligence Research) team has for Ego-Exo4D. The dataset consists of first-person (Ego) and third-person (Exo) perspective videos of people performing different skilled human activities. These could be anything from cooking, dancing, playing music,\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/fr\/2023\/12\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2023-12-05T09:02:39+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2023-12-05T13:15:00+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/augmented-reality-car-repair.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1000\" \/>\n\t<meta property=\"og:image:height\" content=\"666\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Eugene van der Watt\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"\u00c9crit par\" \/>\n\t<meta name=\"twitter:data1\" content=\"Eugene van der Watt\" \/>\n\t<meta name=\"twitter:label2\" content=\"Dur\u00e9e de lecture estim\u00e9e\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minutes\" \/>\n<script type=\"application\/ld+json\" 
class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\\\/\"},\"author\":{\"name\":\"Eugene van der Watt\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\"},\"headline\":\"Meta releases Ego-Exo4D, a multimodal perception dataset\",\"datePublished\":\"2023-12-05T09:02:39+00:00\",\"dateModified\":\"2023-12-05T13:15:00+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\\\/\"},\"wordCount\":662,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/augmented-reality-car-repair.jpg\",\"keywords\":[\"Computer vision\",\"machine learning\",\"Meta\"],\"articleSection\":[\"Industry\"],\"inLanguage\":\"fr-FR\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\\\/\",\"name\":\"Meta releases Ego-Exo4D, a multimodal perception dataset | 
DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/augmented-reality-car-repair.jpg\",\"datePublished\":\"2023-12-05T09:02:39+00:00\",\"dateModified\":\"2023-12-05T13:15:00+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\\\/#breadcrumb\"},\"inLanguage\":\"fr-FR\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"fr-FR\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/augmented-reality-car-repair.jpg\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/augmented-reality-car-repair.jpg\",\"width\":1000,\"height\":666},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Meta releases Ego-Exo4D, a multimodal perception dataset\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI 
News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"fr-FR\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"fr-FR\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\",\"name\":\"Eugene van der Watt\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"fr-FR\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"caption\":\"Eugene van der Watt\"},\"description\":\"Eugene comes from an electronic engineering background and loves all things tech. 
When he takes a break from consuming AI news you'll find him at the snooker table.\",\"sameAs\":[\"www.linkedin.com\\\/in\\\/eugene-van-der-watt-16828119\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/fr\\\/author\\\/eugene\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Meta lance Ego-Exo4D, un ensemble de donn\u00e9es de perception multimodale | DailyAI","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/fr\/2023\/12\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\/","og_locale":"fr_FR","og_type":"article","og_title":"Meta releases Ego-Exo4D, a multimodal perception dataset | DailyAI","og_description":"Training AI models like GPT-4 has relied mostly on datasets consisting of text and images. Meta\u2019s Ego-Exo4D multimodal perception dataset presents data scientists with a rich new set of training data. You can learn a new skill by reading a book, but it\u2019s so much easier when someone shows you how to do something while explaining it to you. This is the goal Meta\u2019s FAIR (Fundamental Artificial Intelligence Research) team has for Ego-Exo4D. The dataset consists of first-person (Ego) and third-person (Exo) perspective videos of people performing different skilled human activities. 
These could be anything from cooking, dancing, playing music,","og_url":"https:\/\/dailyai.com\/fr\/2023\/12\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\/","og_site_name":"DailyAI","article_published_time":"2023-12-05T09:02:39+00:00","article_modified_time":"2023-12-05T13:15:00+00:00","og_image":[{"width":1000,"height":666,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/augmented-reality-car-repair.jpg","type":"image\/jpeg"}],"author":"Eugene van der Watt","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"\u00c9crit par":"Eugene van der Watt","Dur\u00e9e de lecture estim\u00e9e":"3 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2023\/12\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2023\/12\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\/"},"author":{"name":"Eugene van der Watt","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa"},"headline":"Meta releases Ego-Exo4D, a multimodal perception dataset","datePublished":"2023-12-05T09:02:39+00:00","dateModified":"2023-12-05T13:15:00+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2023\/12\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\/"},"wordCount":662,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2023\/12\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/augmented-reality-car-repair.jpg","keywords":["Computer vision","machine 
learning","Meta"],"articleSection":["Industry"],"inLanguage":"fr-FR"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2023\/12\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\/","url":"https:\/\/dailyai.com\/2023\/12\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\/","name":"Meta lance Ego-Exo4D, un ensemble de donn\u00e9es de perception multimodale | DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2023\/12\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2023\/12\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/augmented-reality-car-repair.jpg","datePublished":"2023-12-05T09:02:39+00:00","dateModified":"2023-12-05T13:15:00+00:00","breadcrumb":{"@id":"https:\/\/dailyai.com\/2023\/12\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\/#breadcrumb"},"inLanguage":"fr-FR","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2023\/12\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\/"]}]},{"@type":"ImageObject","inLanguage":"fr-FR","@id":"https:\/\/dailyai.com\/2023\/12\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/augmented-reality-car-repair.jpg","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/augmented-reality-car-repair.jpg","width":1000,"height":666},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2023\/12\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"Meta releases Ego-Exo4D, a multimodal perception 
dataset"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DailyAI","description":"Votre dose quotidienne de nouvelles sur l'IA","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"fr-FR"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DailyAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"fr-FR","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa","name":"Eug\u00e8ne van der Watt","image":{"@type":"ImageObject","inLanguage":"fr-FR","@id":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","caption":"Eugene van der Watt"},"description":"Eugene a une formation d'ing\u00e9nieur en \u00e9lectronique et adore tout ce qui touche \u00e0 la technologie. 
Lorsqu'il fait une pause dans sa consommation d'informations sur l'IA, vous le trouverez \u00e0 la table de snooker.","sameAs":["www.linkedin.com\/in\/eugene-van-der-watt-16828119"],"url":"https:\/\/dailyai.com\/fr\/author\/eugene\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/posts\/8006","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/comments?post=8006"}],"version-history":[{"count":4,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/posts\/8006\/revisions"}],"predecessor-version":[{"id":8021,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/posts\/8006\/revisions\/8021"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/media\/8009"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/media?parent=8006"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/categories?post=8006"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/tags?post=8006"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}