{"id":9253,"date":"2024-01-16T14:01:10","date_gmt":"2024-01-16T14:01:10","guid":{"rendered":"https:\/\/dailyai.com\/?p=9253"},"modified":"2024-01-16T14:01:10","modified_gmt":"2024-01-16T14:01:10","slug":"v-multimodal-llm-guided-visual-search-that-beats-gpt-4v","status":"publish","type":"post","link":"https:\/\/dailyai.com\/fr\/2024\/01\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\/","title":{"rendered":"V* - Recherche visuelle guid\u00e9e multimodale LLM qui bat GPT-4V"},"content":{"rendered":"<p><strong>Des chercheurs de l'universit\u00e9 de San Diego et de l'universit\u00e9 de New York ont mis au point V*, un algorithme de recherche guid\u00e9e par LLM qui est bien meilleur que GPT-4V pour la compr\u00e9hension du contexte et le ciblage pr\u00e9cis d'\u00e9l\u00e9ments visuels sp\u00e9cifiques dans les images.<\/strong><\/p>\n<p>Les mod\u00e8les multimodaux \u00e0 langage \u00e9tendu (MLLM) tels que le GPT-4V d'OpenAI nous ont impressionn\u00e9s l'ann\u00e9e derni\u00e8re par leur capacit\u00e9 \u00e0 r\u00e9pondre \u00e0 des questions portant sur des images. Aussi impressionnant que soit le GPT-4V, il \u00e9prouve parfois des difficult\u00e9s lorsque les images sont tr\u00e8s complexes et passe souvent \u00e0 c\u00f4t\u00e9 de petits d\u00e9tails.<\/p>\n<p>L'algorithme V* utilise un LLM de r\u00e9ponse aux questions visuelles (VQA) pour le guider dans l'identification de la zone de l'image sur laquelle se concentrer pour r\u00e9pondre \u00e0 une requ\u00eate visuelle. Les chercheurs appellent cette combinaison Show, sEArch, and telL (SEAL).<\/p>\n<p>Si quelqu'un vous donne une image haute r\u00e9solution et vous pose une question \u00e0 son sujet, votre logique vous guidera pour zoomer sur une zone o\u00f9 vous avez le plus de chances de trouver l'\u00e9l\u00e9ment en question. 
SEAL utilise V* pour analyser les images de la m\u00eame mani\u00e8re.<\/p>\n<p>Un mod\u00e8le de recherche visuelle pourrait simplement diviser une image en blocs, zoomer sur chaque bloc et le traiter pour trouver l'objet en question, mais cette m\u00e9thode est tr\u00e8s inefficace sur le plan informatique.<\/p>\n<p>Lorsqu'il re\u00e7oit une requ\u00eate textuelle concernant une image, V* tente d'abord de localiser directement la cible de l'image. S'il n'y parvient pas, il demande au MLLM de faire preuve de bon sens pour identifier la zone de l'image dans laquelle la cible est la plus susceptible de se trouver.<\/p>\n<p>Il concentre ensuite sa recherche sur cette zone, plut\u00f4t que de tenter une recherche \"zoom\u00e9e\" sur l'ensemble de l'image.<\/p>\n<figure id=\"attachment_9257\" aria-describedby=\"caption-attachment-9257\" style=\"width: 1942px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-9257\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Wheres-the-guitar.jpg\" alt=\"\" width=\"1942\" height=\"638\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Wheres-the-guitar.jpg 1942w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Wheres-the-guitar-300x99.jpg 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Wheres-the-guitar-1024x336.jpg 1024w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Wheres-the-guitar-768x252.jpg 768w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Wheres-the-guitar-1536x505.jpg 1536w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Wheres-the-guitar-370x122.jpg 370w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Wheres-the-guitar-800x263.jpg 800w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Wheres-the-guitar-740x243.jpg 740w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Wheres-the-guitar-20x7.jpg 20w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Wheres-the-guitar-1600x526.jpg 
1600w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Wheres-the-guitar-146x48.jpg 146w\" sizes=\"auto, (max-width: 1942px) 100vw, 1942px\" \/><figcaption id=\"caption-attachment-9257\" class=\"wp-caption-text\">When prompted to find the guitar, the LLM identifies the stage as the logical area on which to focus the visual search. Source: GitHub<\/figcaption><\/figure>\n<p>When GPT-4V is asked questions about an image that call for detailed visual processing of high-resolution images, it struggles. SEAL using V* performs far better.<\/p>\n<figure id=\"attachment_9258\" aria-describedby=\"caption-attachment-9258\" style=\"width: 992px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-9258\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Vending-machine-example.jpg\" alt=\"\" width=\"992\" height=\"1302\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Vending-machine-example.jpg 992w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Vending-machine-example-229x300.jpg 229w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Vending-machine-example-780x1024.jpg 780w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Vending-machine-example-768x1008.jpg 768w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Vending-machine-example-370x486.jpg 370w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Vending-machine-example-800x1050.jpg 800w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Vending-machine-example-740x971.jpg 740w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Vending-machine-example-20x26.jpg 20w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Vending-machine-example-37x48.jpg 37w\" sizes=\"auto, (max-width: 992px) 100vw, 992px\" \/><figcaption id=\"caption-attachment-9258\" class=\"wp-caption-text\">SEAL answers a question about an image correctly, while GPT-4V gets it wrong. Source: GitHub<\/figcaption><\/figure>\n<p>When asked \"What kind of drink can we buy from this vending machine?\", SEAL answered \"Coca-Cola\" while GPT-4V incorrectly guessed \"Pepsi\".<\/p>\n<p>The researchers used 191 high-resolution images from Meta's Segment Anything (SAM) dataset and created a benchmark to compare SEAL's performance with that of other models. The V*Bench benchmark tests two tasks: attribute recognition and spatial relationship reasoning.<\/p>\n<p>The figures below show human performance compared with open-source models, commercial models like GPT-4V, and SEAL. 
The performance boost that V* gives SEAL is particularly impressive because the underlying MLLM it uses is LLaVA-7b, which is much smaller than GPT-4V.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-9259\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/table.jpg\" alt=\"\" width=\"1120\" height=\"1060\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/table.jpg 1120w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/table-300x284.jpg 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/table-1024x969.jpg 1024w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/table-768x727.jpg 768w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/table-370x350.jpg 370w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/table-800x757.jpg 800w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/table-20x19.jpg 20w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/table-740x700.jpg 740w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/table-24x24.jpg 24w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/table-51x48.jpg 51w\" sizes=\"auto, (max-width: 1120px) 100vw, 1120px\" \/><\/p>\n<p>This intuitive approach to image analysis seems to work very well, with a number of impressive examples in the <a href=\"https:\/\/vstar-seal.github.io\/\" target=\"_blank\" rel=\"noopener\">paper summary on GitHub<\/a>.<\/p>\n<p>It will be interesting to see whether other MLLMs, like those from OpenAI or Google, adopt a similar approach.<\/p>\n<p>When asked what drink was sold in the vending machine in the image above, Google's Bard replied, \"There is no vending machine in the foreground.\" Perhaps Gemini Ultra will do better.<\/p>\n<p>For now, it seems that SEAL and its new V* algorithm are well ahead of some of the biggest multimodal models when it comes to visual questions.<\/p>","protected":false},"excerpt":{"rendered":"<p>Researchers from UC San Diego and New York University developed V*, an LLM-guided search algorithm that is far better than GPT-4V at contextual understanding and at precisely targeting specific visual elements in images. Multimodal Large Language Models (MLLMs) like OpenAI's GPT-4V impressed us last year with their ability to answer questions about images. As impressive as GPT-4V is, it sometimes struggles when images are very complex, and it often misses small details. 
The V* algorithm uses a Visual Question Answering (VQA) LLM to guide it in identifying which area of the image it should focus on to answer a visual question.<\/p>","protected":false},"author":6,"featured_media":9260,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[84],"tags":[166,118],"class_list":["post-9253","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-industry","tag-computer-vision","tag-llms"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>V* - Multimodal LLM guided visual search that beats GPT-4V | DailyAI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/fr\/2024\/01\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\/\" \/>\n<meta property=\"og:locale\" content=\"fr_FR\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"V* - Multimodal LLM guided visual search that beats GPT-4V | DailyAI\" \/>\n<meta property=\"og:description\" content=\"Researchers from UC San Diego and New York University developed V*, an LLM-guided search algorithm that is a lot better than GPT-4V at contextual understanding, and precise targeting of specific visual elements in images. Multimodal Large Language Models (MLLM) like OpenAI\u2019s GPT-4V blew us away last year with the ability to answer questions about images. As impressive as GPT-4V is, it struggles sometimes when images are very complex and often misses small details. 
The V* algorithm uses a Visual Question Answering (VQA) LLM to guide it in identifying which area of the image to focus on to answer a visual\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/fr\/2024\/01\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2024-01-16T14:01:10+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/needle-in-haystack.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1000\" \/>\n\t<meta property=\"og:image:height\" content=\"664\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Eugene van der Watt\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"\u00c9crit par\" \/>\n\t<meta name=\"twitter:data1\" content=\"Eugene van der Watt\" \/>\n\t<meta name=\"twitter:label2\" content=\"Dur\u00e9e de lecture estim\u00e9e\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/01\\\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/01\\\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\\\/\"},\"author\":{\"name\":\"Eugene van der Watt\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\"},\"headline\":\"V* &#8211; Multimodal LLM guided visual search that beats 
GPT-4V\",\"datePublished\":\"2024-01-16T14:01:10+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/01\\\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\\\/\"},\"wordCount\":573,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/01\\\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/01\\\/needle-in-haystack.jpg\",\"keywords\":[\"Computer vision\",\"LLMS\"],\"articleSection\":[\"Industry\"],\"inLanguage\":\"fr-FR\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/01\\\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/01\\\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\\\/\",\"name\":\"V* - Multimodal LLM guided visual search that beats GPT-4V | DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/01\\\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/01\\\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/01\\\/needle-in-haystack.jpg\",\"datePublished\":\"2024-01-16T14:01:10+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/01\\\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\\\/#breadcrumb\"},\"inLanguage\":\"fr-FR\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2024\\\/01\\\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"fr-FR\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/01\\\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\\\/#prim
aryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/01\\\/needle-in-haystack.jpg\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/01\\\/needle-in-haystack.jpg\",\"width\":1000,\"height\":664},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/01\\\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"V* &#8211; Multimodal LLM guided visual search that beats GPT-4V\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"fr-FR\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"fr-FR\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube
.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\",\"name\":\"Eugene van der Watt\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"fr-FR\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"caption\":\"Eugene van der Watt\"},\"description\":\"Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.\",\"sameAs\":[\"www.linkedin.com\\\/in\\\/eugene-van-der-watt-16828119\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/fr\\\/author\\\/eugene\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"V* - La recherche visuelle guid\u00e9e multimodale LLM bat GPT-4V | DailyAI","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/fr\/2024\/01\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\/","og_locale":"fr_FR","og_type":"article","og_title":"V* - Multimodal LLM guided visual search that beats GPT-4V | DailyAI","og_description":"Researchers from UC San Diego and New York University developed V*, an LLM-guided search algorithm that is a lot better than GPT-4V at contextual understanding, and precise targeting of specific visual elements in images. Multimodal Large Language Models (MLLM) like OpenAI\u2019s GPT-4V blew us away last year with the ability to answer questions about images. As impressive as GPT-4V is, it struggles sometimes when images are very complex and often misses small details. 
The V* algorithm uses a Visual Question Answering (VQA) LLM to guide it in identifying which area of the image to focus on to answer a visual","og_url":"https:\/\/dailyai.com\/fr\/2024\/01\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\/","og_site_name":"DailyAI","article_published_time":"2024-01-16T14:01:10+00:00","og_image":[{"width":1000,"height":664,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/needle-in-haystack.jpg","type":"image\/jpeg"}],"author":"Eugene van der Watt","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"\u00c9crit par":"Eugene van der Watt","Dur\u00e9e de lecture estim\u00e9e":"3 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2024\/01\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2024\/01\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\/"},"author":{"name":"Eugene van der Watt","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa"},"headline":"V* &#8211; Multimodal LLM guided visual search that beats GPT-4V","datePublished":"2024-01-16T14:01:10+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2024\/01\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\/"},"wordCount":573,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2024\/01\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/needle-in-haystack.jpg","keywords":["Computer vision","LLMS"],"articleSection":["Industry"],"inLanguage":"fr-FR"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2024\/01\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\/","url":"https:\/\/dailyai.com\/2024\/01\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\/","name":"V* - La 
recherche visuelle guid\u00e9e multimodale LLM bat GPT-4V | DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2024\/01\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2024\/01\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/needle-in-haystack.jpg","datePublished":"2024-01-16T14:01:10+00:00","breadcrumb":{"@id":"https:\/\/dailyai.com\/2024\/01\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\/#breadcrumb"},"inLanguage":"fr-FR","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2024\/01\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\/"]}]},{"@type":"ImageObject","inLanguage":"fr-FR","@id":"https:\/\/dailyai.com\/2024\/01\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/needle-in-haystack.jpg","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/needle-in-haystack.jpg","width":1000,"height":664},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2024\/01\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"V* &#8211; Multimodal LLM guided visual search that beats GPT-4V"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DailyAI","description":"Votre dose quotidienne de nouvelles sur 
l'IA","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"fr-FR"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DailyAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"fr-FR","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa","name":"Eug\u00e8ne van der Watt","image":{"@type":"ImageObject","inLanguage":"fr-FR","@id":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","caption":"Eugene van der Watt"},"description":"Eugene a une formation d'ing\u00e9nieur en \u00e9lectronique et adore tout ce qui touche \u00e0 la technologie. 
Lorsqu'il fait une pause dans sa consommation d'informations sur l'IA, vous le trouverez \u00e0 la table de snooker.","sameAs":["www.linkedin.com\/in\/eugene-van-der-watt-16828119"],"url":"https:\/\/dailyai.com\/fr\/author\/eugene\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/posts\/9253","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/comments?post=9253"}],"version-history":[{"count":4,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/posts\/9253\/revisions"}],"predecessor-version":[{"id":9261,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/posts\/9253\/revisions\/9261"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/media\/9260"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/media?parent=9253"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/categories?post=9253"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/tags?post=9253"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}