Meta releases Ego-Exo4D, a multimodal perception dataset | DailyAI
By Eugene van der Watt

Training AI models like GPT-4 has relied mostly on datasets consisting of text and images. Meta's Ego-Exo4D multimodal perception dataset presents data scientists with a rich new set of training data.

You can learn a new skill by reading a book, but it's much easier when someone shows you how to do something while explaining it. That is the goal Meta's FAIR (Fundamental Artificial Intelligence Research) team has for Ego-Exo4D.

The dataset consists of first-person (Ego) and third-person (Exo) perspective videos of people performing skilled human activities. These could be anything from cooking, dancing, playing music,