<h1>OpenAI introduces Sora, an advanced text-to-video model</h1>
<p>By Eugene van der Watt | DailyAI | February 16, 2024</p>
<p><strong>OpenAI has unveiled Sora, a state-of-the-art text-to-video (TTV) model that generates realistic videos of up to 60 seconds from a user text prompt.</strong></p>
<p>We've seen big advances in AI video generation lately. Last month we were excited when Google gave us a demo of <a href="https://dailyai.com/fr/2024/01/google-unveils-lumiere-a-text-to-video-diffusion-model/">Lumiere</a>, its TTV model that generates 5-second video clips with excellent coherence and movement.</p>
<p>Just a few weeks later, the impressive demo videos generated by Sora already make Google's Lumiere look quaint.</p>
<p>Sora generates high-fidelity video that can include multiple scenes with simulated camera panning while adhering closely to complex prompts. It can also generate images, extend videos backward and forward in time, and generate a video using an image as a prompt.</p>
<p>Part of what makes Sora's performance so impressive lies in things we take for granted when watching a video but that are difficult for AI to produce.</p>
<p>Here's an example of a video Sora generated from the prompt: "A movie trailer featuring the adventures of a 30-year-old space man wearing a red wool knitted motorcycle helmet, blue sky, salt desert, cinematic style, shot on 35mm film, vivid colors."</p>
<p>https://youtu.be/twyhYQM9254</p>
<p>This short clip demonstrates a few key features that make Sora really special.</p>
<ul>
<li>The prompt was fairly complex, and the generated video adhered to it closely.</li>
<li>Sora maintains character coherence. Even when the character disappears from a frame and reappears, their appearance stays consistent.</li>
<li>Sora retains object permanence. An object in a scene is preserved in later frames as the camera pans or the scene changes.</li>
<li>The generated video reflects an accurate understanding of physics and of changes to the environment. The lighting, shadows, and footprints in the salt flat are good examples of this.</li>
</ul>
<p>Sora doesn't just understand what the words in the prompt mean; it understands how those objects interact with each other in the physical world.</p>
<p>Here's another example of the impressive video Sora can generate.</p>
<p>https://youtu.be/g0jt6goVz04</p>
<p>The prompt for this video was: "A stylish woman walks down a Tokyo street filled with glowing neon and animated city signage. She wears a black leather jacket, a long red dress, black boots, and a black purse. She wears sunglasses and red lipstick. She walks confidently and casually. The street is damp and reflective, creating a mirror effect of the colorful lights. Many pedestrians walk about."</p>
<h2>A step closer to AGI</h2>
<p>The videos may blow us away, but it's this understanding of the physical world that particularly excites OpenAI.</p>
<p>In the <a href="https://openai.com/sora" target="_blank" rel="noopener">Sora blog post</a>, OpenAI said that "Sora serves as a foundation for models that can understand and simulate the real world, a capability we believe will be an important milestone for achieving AGI."</p>
<p>Several researchers believe that embodied AI is necessary to achieve artificial general intelligence (AGI). Building AI into a robot that can perceive and explore a physical environment is one way to get there, but it comes with a set of practical challenges.</p>
<p>Sora was trained on a huge amount of video and image data, which OpenAI says is behind the emergent capabilities the model displays in simulating aspects of people, animals, and environments from the physical world.</p>
<p>OpenAI says Sora wasn't explicitly trained on the physics of 3D objects, and that the emergent capabilities are "purely phenomena of scale."</p>
<p>This means Sora could eventually be used to accurately simulate a digital world that an AI could interact with, without the need to embody it in a physical device like a robot.</p>
<p>In a more simplistic way, that's what Chinese researchers are trying to achieve with their <a href="https://dailyai.com/fr/2024/02/chinese-researchers-unveil-a-robot-toddler-named-tong-tong/">AI robot toddler</a> called Tong Tong.</p>
<p>For now, we'll have to make do with the demo videos OpenAI has provided. Sora is only available to red team members and a select group of visual artists, designers, and filmmakers, to gather feedback and verify the model's alignment.</p>
<p>Once Sora is released publicly, could SAG-AFTRA film industry workers be dusting off their picket signs?</p>