{"id":11227,"date":"2024-04-03T10:42:20","date_gmt":"2024-04-03T10:42:20","guid":{"rendered":"https:\/\/dailyai.com\/?p=11227"},"modified":"2024-04-03T10:42:20","modified_gmt":"2024-04-03T10:42:20","slug":"apples-realm-sees-on-screen-visuals-better-than-gpt-4","status":"publish","type":"post","link":"https:\/\/dailyai.com\/fr\/2024\/04\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\/","title":{"rendered":"Le ReALM d'Apple \"voit\" mieux les images \u00e0 l'\u00e9cran que le GPT-4"},"content":{"rendered":"<p><strong>Les ing\u00e9nieurs d'Apple ont mis au point un syst\u00e8me d'IA qui r\u00e9sout les r\u00e9f\u00e9rences complexes aux entit\u00e9s \u00e0 l'\u00e9cran et aux conversations des utilisateurs. Ce mod\u00e8le l\u00e9ger pourrait constituer une solution id\u00e9ale pour les assistants virtuels embarqu\u00e9s.<\/strong><\/p>\n<p>Les \u00eatres humains sont dou\u00e9s pour r\u00e9soudre les r\u00e9f\u00e9rences dans les conversations entre eux. Lorsque nous utilisons des termes tels que \"le bas\" ou \"lui\", nous comprenons \u00e0 quoi la personne fait r\u00e9f\u00e9rence en nous basant sur le contexte de la conversation et sur les \u00e9l\u00e9ments que nous pouvons voir.<\/p>\n<p>Il est beaucoup plus difficile pour un mod\u00e8le d'intelligence artificielle de le faire. Les LLM multimodaux tels que GPT-4 r\u00e9pondent bien aux questions sur les images, mais leur apprentissage est co\u00fbteux et le traitement de chaque requ\u00eate sur une image n\u00e9cessite beaucoup de ressources informatiques.<\/p>\n<p>Les ing\u00e9nieurs d'Apple ont adopt\u00e9 une approche diff\u00e9rente avec leur syst\u00e8me, appel\u00e9 ReALM (Reference Resolution As Language Modeling). 
<a href=\"https:\/\/arxiv.org\/pdf\/2403.20329.pdf\" target=\"_blank\" rel=\"noopener\">Le document<\/a> vaut la peine d'\u00eatre lu pour plus de d\u00e9tails sur leur processus de d\u00e9veloppement et de test.<\/p>\n<p>ReALM utilise un LLM pour traiter les entit\u00e9s conversationnelles, \u00e0 l'\u00e9cran et en arri\u00e8re-plan (alarmes, musique de fond) qui composent les interactions d'un utilisateur avec un agent virtuel d'IA.<\/p>\n<p>Voici un exemple du type d'interaction qu'un utilisateur pourrait avoir avec un agent d'intelligence artificielle.<\/p>\n<figure id=\"attachment_11231\" aria-describedby=\"caption-attachment-11231\" style=\"width: 746px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-11231\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/Agent-interactions.png\" alt=\"\" width=\"746\" height=\"298\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/Agent-interactions.png 746w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/Agent-interactions-300x120.png 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/Agent-interactions-60x24.png 60w\" sizes=\"auto, (max-width: 746px) 100vw, 746px\" \/><figcaption id=\"caption-attachment-11231\" class=\"wp-caption-text\">Exemples d'interactions entre un utilisateur et un assistant virtuel. Source : arXiv<\/figcaption><\/figure>\n<p>L'agent doit comprendre des entit\u00e9s conversationnelles telles que le fait que lorsque l'utilisateur dit \"le\", il fait r\u00e9f\u00e9rence au num\u00e9ro de t\u00e9l\u00e9phone de la pharmacie.<\/p>\n<p>Il doit \u00e9galement comprendre le contexte visuel lorsque l'utilisateur dit \"celui du bas\", et c'est l\u00e0 que l'approche de ReALM diff\u00e8re de mod\u00e8les tels que GPT-4.<\/p>\n<p>ReALM s'appuie sur des encodeurs en amont pour analyser les \u00e9l\u00e9ments \u00e0 l'\u00e9cran et leur position. 
ReALM then reconstructs the screen into a purely textual representation, left to right and top to bottom.<\/p>\n<p>Simply put, it uses natural language to summarize the user's screen.<\/p>\n<p>Now, when a user asks a question about something on the screen, the language model processes the textual description of the screen instead of using a vision model to process the on-screen image.<\/p>\n<p>The researchers created synthetic datasets of conversational, on-screen, and background entities, and tested ReALM and other models on how well they resolve references in conversational systems.<\/p>\n<p>The smallest version of ReALM (80 million parameters) performed comparably to GPT-4, and its largest version (3 billion parameters) substantially outperformed GPT-4.<\/p>\n<p>ReALM is a tiny model compared to GPT-4. Its superior reference resolution makes it an ideal choice for a virtual assistant that can live on-device without compromising performance.<\/p>\n<p>ReALM doesn't do as well with more complex images or more nuanced user requests, but it could serve as a virtual assistant in a car or on a device. 
Imagine if Siri could \"see\" your iPhone's screen and respond to references to on-screen elements.<\/p>\n<p>Apple has been a little slow out of the gate, but recent developments, such as their <a href=\"https:\/\/dailyai.com\/fr\/2024\/03\/apple-reveals-mm1-its-first-family-of-multimodal-llms\/\">MM1 model<\/a> and ReALM, show that a lot is happening behind closed doors.<\/p>","protected":false},"excerpt":{"rendered":"<p>Apple engineers have developed an AI system that resolves complex references to on-screen entities and user conversations. The lightweight model could be an ideal solution for on-device virtual assistants. Humans are good at resolving references in conversations with each other. When we use terms like \"the bottom one\" or \"him\", we understand what the person is referring to based on the context of the conversation and the things we can see. This is much harder for an AI model to do. 
Multimodal LLMs like GPT-4 can answer questions about images, but they are expensive to train, and processing each image query takes a lot of compute.<\/p>","protected":false},"author":6,"featured_media":11232,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[84],"tags":[126,166,118],"class_list":["post-11227","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-industry","tag-apple","tag-computer-vision","tag-llms"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Apple\u2019s ReALM \u2018sees\u2019 on-screen visuals better than GPT-4 | DailyAI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/fr\/2024\/04\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\/\" \/>\n<meta property=\"og:locale\" content=\"fr_FR\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Apple\u2019s ReALM \u2018sees\u2019 on-screen visuals better than GPT-4 | DailyAI\" \/>\n<meta property=\"og:description\" content=\"Apple engineers developed an AI system that resolves complex references to on-screen entities and user conversations. The lightweight model could be an ideal solution for on-device virtual assistants. Humans are good at resolving references in conversations with each other. When we use terms like \u201cthe bottom one\u201d or \u201chim\u201d we understand what the person is referring to based on the context of the conversation and things we can see. It\u2019s a lot more difficult for an AI model to do this. 
Multimodal LLMs like GPT-4 are good at answering questions about images but are expensive to train and require a\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/fr\/2024\/04\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2024-04-03T10:42:20+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/Apple-ReALM.webp\" \/>\n\t<meta property=\"og:image:width\" content=\"1792\" \/>\n\t<meta property=\"og:image:height\" content=\"1024\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/webp\" \/>\n<meta name=\"author\" content=\"Eugene van der Watt\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Eugene van der Watt\" \/>\n\t<meta name=\"twitter:label2\" content=\"Estimated reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\\\/\"},\"author\":{\"name\":\"Eugene van der Watt\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\"},\"headline\":\"Apple\u2019s ReALM \u2018sees\u2019 on-screen visuals better than 
GPT-4\",\"datePublished\":\"2024-04-03T10:42:20+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\\\/\"},\"wordCount\":486,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/04\\\/Apple-ReALM.webp\",\"keywords\":[\"Apple\",\"Computer vision\",\"LLMS\"],\"articleSection\":[\"Industry\"],\"inLanguage\":\"fr-FR\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\\\/\",\"name\":\"Apple\u2019s ReALM \u2018sees\u2019 on-screen visuals better than GPT-4 | DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/04\\\/Apple-ReALM.webp\",\"datePublished\":\"2024-04-03T10:42:20+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\\\/#breadcrumb\"},\"inLanguage\":\"fr-FR\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"fr-FR\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\\\/#primaryimage
\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/04\\\/Apple-ReALM.webp\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/04\\\/Apple-ReALM.webp\",\"width\":1792,\"height\":1024},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Apple\u2019s ReALM \u2018sees\u2019 on-screen visuals better than GPT-4\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"fr-FR\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"fr-FR\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@Daily
AIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\",\"name\":\"Eugene van der Watt\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"fr-FR\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"caption\":\"Eugene van der Watt\"},\"description\":\"Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.\",\"sameAs\":[\"www.linkedin.com\\\/in\\\/eugene-van-der-watt-16828119\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/fr\\\/author\\\/eugene\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Le ReALM d'Apple \"voit\" les images \u00e0 l'\u00e9cran mieux que le GPT-4 | DailyAI","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/fr\/2024\/04\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\/","og_locale":"fr_FR","og_type":"article","og_title":"Apple\u2019s ReALM \u2018sees\u2019 on-screen visuals better than GPT-4 | DailyAI","og_description":"Apple engineers developed an AI system that resolves complex references to on-screen entities and user conversations. The lightweight model could be an ideal solution for on-device virtual assistants. Humans are good at resolving references in conversations with each other. When we use terms like \u201cthe bottom one\u201d or \u201chim\u201d we understand what the person is referring to based on the context of the conversation and things we can see. 
It\u2019s a lot more difficult for an AI model to do this. Multimodal LLMs like GPT-4 are good at answering questions about images but are expensive to train and require a","og_url":"https:\/\/dailyai.com\/fr\/2024\/04\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\/","og_site_name":"DailyAI","article_published_time":"2024-04-03T10:42:20+00:00","og_image":[{"width":1792,"height":1024,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/Apple-ReALM.webp","type":"image\/webp"}],"author":"Eugene van der Watt","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"Written by":"Eugene van der Watt","Estimated reading time":"3 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2024\/04\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2024\/04\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\/"},"author":{"name":"Eugene van der Watt","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa"},"headline":"Apple\u2019s ReALM \u2018sees\u2019 on-screen visuals better than GPT-4","datePublished":"2024-04-03T10:42:20+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2024\/04\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\/"},"wordCount":486,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2024\/04\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/Apple-ReALM.webp","keywords":["Apple","Computer 
vision","LLMS"],"articleSection":["Industry"],"inLanguage":"fr-FR"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2024\/04\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\/","url":"https:\/\/dailyai.com\/2024\/04\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\/","name":"Le ReALM d'Apple \"voit\" les images \u00e0 l'\u00e9cran mieux que le GPT-4 | DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2024\/04\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2024\/04\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/Apple-ReALM.webp","datePublished":"2024-04-03T10:42:20+00:00","breadcrumb":{"@id":"https:\/\/dailyai.com\/2024\/04\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\/#breadcrumb"},"inLanguage":"fr-FR","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2024\/04\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\/"]}]},{"@type":"ImageObject","inLanguage":"fr-FR","@id":"https:\/\/dailyai.com\/2024\/04\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/Apple-ReALM.webp","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/Apple-ReALM.webp","width":1792,"height":1024},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2024\/04\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"Apple\u2019s ReALM \u2018sees\u2019 on-screen visuals better than GPT-4"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DailyAI","description":"Votre dose quotidienne de nouvelles sur 
l'IA","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"fr-FR"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DailyAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"fr-FR","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa","name":"Eug\u00e8ne van der Watt","image":{"@type":"ImageObject","inLanguage":"fr-FR","@id":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","caption":"Eugene van der Watt"},"description":"Eugene a une formation d'ing\u00e9nieur en \u00e9lectronique et adore tout ce qui touche \u00e0 la technologie. 
When he takes a break from consuming AI news you'll find him at the snooker table.","sameAs":["www.linkedin.com\/in\/eugene-van-der-watt-16828119"],"url":"https:\/\/dailyai.com\/fr\/author\/eugene\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/posts\/11227","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/comments?post=11227"}],"version-history":[{"count":3,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/posts\/11227\/revisions"}],"predecessor-version":[{"id":11234,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/posts\/11227\/revisions\/11234"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/media\/11232"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/media?parent=11227"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/categories?post=11227"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/tags?post=11227"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}