{"id":9253,"date":"2024-01-16T14:01:10","date_gmt":"2024-01-16T14:01:10","guid":{"rendered":"https:\/\/dailyai.com\/?p=9253"},"modified":"2024-01-16T14:01:10","modified_gmt":"2024-01-16T14:01:10","slug":"v-multimodal-llm-guided-visual-search-that-beats-gpt-4v","status":"publish","type":"post","link":"https:\/\/dailyai.com\/nb\/2024\/01\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\/","title":{"rendered":"V* - Multimodal LLM guided visual search that beats GPT-4V"},"content":{"rendered":"<p><strong>Researchers from UC San Diego and New York University have developed V*, an LLM-guided search algorithm that is far better than GPT-4V at contextual understanding and at precisely targeting specific visual elements in images.<\/strong><\/p>\n<p>Multimodal Large Language Models (MLLMs) like OpenAI's GPT-4V impressed us last year with their ability to answer questions about images. As impressive as GPT-4V is, it sometimes struggles when images are very complex, and it often misses small details.<\/p>\n<p>The V* algorithm uses a Visual Question Answering (VQA) LLM to identify which area of the image to focus on in order to answer a visual query. The researchers call this combination Show, sEArch, and TelL (SEAL).<\/p>\n<p>If someone handed you a high-resolution image and asked you a question about it, your reasoning would guide you to zoom in on the area where the object in question is most likely to be found. 
SEAL uses V* to analyze images in a similar way.<\/p>\n<p>A visual search model could simply divide an image into blocks, zoom into each block, and then process it to find the object in question, but that is very computationally inefficient.<\/p>\n<p>When V* receives a textual query about an image, it first tries to locate the target in the image directly. If it can't, it asks the MLLM to use common-sense reasoning to identify the area of the image where the target is most likely located.<\/p>\n<p>It then focuses its search on that area instead of trying to zoom in on the entire image.<\/p>\n<figure id=\"attachment_9257\" aria-describedby=\"caption-attachment-9257\" style=\"width: 1942px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-9257\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Wheres-the-guitar.jpg\" alt=\"\" width=\"1942\" height=\"638\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Wheres-the-guitar.jpg 1942w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Wheres-the-guitar-300x99.jpg 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Wheres-the-guitar-1024x336.jpg 1024w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Wheres-the-guitar-768x252.jpg 768w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Wheres-the-guitar-1536x505.jpg 1536w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Wheres-the-guitar-370x122.jpg 370w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Wheres-the-guitar-800x263.jpg 800w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Wheres-the-guitar-740x243.jpg 740w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Wheres-the-guitar-20x7.jpg 20w, 
https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Wheres-the-guitar-1600x526.jpg 1600w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Wheres-the-guitar-146x48.jpg 146w\" sizes=\"auto, (max-width: 1942px) 100vw, 1942px\" \/><figcaption id=\"caption-attachment-9257\" class=\"wp-caption-text\">When asked to search for the guitar, the LLM identifies the stage as the logical area on which to focus the visual analysis. Source: GitHub<\/figcaption><\/figure>\n<p>GPT-4V struggles when asked to answer questions about an image that require extensive visual processing of high-resolution images. SEAL with V* performs much better.<\/p>\n<figure id=\"attachment_9258\" aria-describedby=\"caption-attachment-9258\" style=\"width: 992px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-9258\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Vending-machine-example.jpg\" alt=\"\" width=\"992\" height=\"1302\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Vending-machine-example.jpg 992w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Vending-machine-example-229x300.jpg 229w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Vending-machine-example-780x1024.jpg 780w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Vending-machine-example-768x1008.jpg 768w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Vending-machine-example-370x486.jpg 370w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Vending-machine-example-800x1050.jpg 800w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Vending-machine-example-740x971.jpg 740w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Vending-machine-example-20x26.jpg 20w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Vending-machine-example-37x48.jpg 37w\" sizes=\"auto, (max-width: 992px) 100vw, 992px\" 
\/><figcaption id=\"caption-attachment-9258\" class=\"wp-caption-text\">SEAL correctly answers a question about an image, while GPT-4V gets it wrong. Source: GitHub<\/figcaption><\/figure>\n<p>When asked \"What kind of drink can we buy from that vending machine?\", SEAL answered \"Coca-Cola\", while GPT-4V incorrectly guessed \"Pepsi\".<\/p>\n<p>The researchers used 191 high-resolution images from Meta's Segment Anything (SAM) dataset to create a benchmark for comparing SEAL's performance with that of other models. The V*Bench benchmark tests two tasks: attribute recognition and spatial relationship reasoning.<\/p>\n<p>The figures below compare human performance with open-source models, commercial models like GPT-4V, and SEAL. The boost V* gives SEAL's performance is especially impressive because the underlying MLLM it uses, LLaVA-7b, is much smaller than GPT-4V.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-9259\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/table.jpg\" alt=\"\" width=\"1120\" height=\"1060\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/table.jpg 1120w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/table-300x284.jpg 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/table-1024x969.jpg 1024w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/table-768x727.jpg 768w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/table-370x350.jpg 370w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/table-800x757.jpg 800w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/table-20x19.jpg 20w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/table-740x700.jpg 740w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/table-24x24.jpg 24w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/table-51x48.jpg 51w\" sizes=\"auto, (max-width: 
1120px) 100vw, 1120px\" \/><\/p>\n<p>This intuitive approach to image analysis seems to work very well, with a number of impressive examples on the <a href=\"https:\/\/vstar-seal.github.io\/\" target=\"_blank\" rel=\"noopener\">paper's summary page on GitHub<\/a>.<\/p>\n<p>It will be interesting to see whether other MLLMs, such as those from OpenAI or Google, adopt a similar approach.<\/p>\n<p>When asked which drink was sold from the vending machine in the image above, Google's Bard replied: \"There is no vending machine in the foreground.\" Perhaps Gemini Ultra will do a better job.<\/p>\n<p>For now, SEAL and the new V* algorithm appear to be well ahead of some of the biggest multimodal models when it comes to visual question answering.<\/p>\n<p>&nbsp;<\/p>","protected":false},"excerpt":{"rendered":"<p>Researchers from UC San Diego and New York University have developed V*, an LLM-guided search algorithm that is far better than GPT-4V at contextual understanding and at precisely targeting specific visual elements in images. Multimodal Large Language Models (MLLMs) like OpenAI's GPT-4V impressed us last year with their ability to answer questions about images. As impressive as GPT-4V is, it sometimes struggles when images are very complex and often misses small details. 
The V* algorithm uses a Visual Question Answering (VQA) LLM to identify which area of the image to focus on in order to answer a visual question.<\/p>","protected":false},"author":6,"featured_media":9260,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[84],"tags":[166,118],"class_list":["post-9253","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-industry","tag-computer-vision","tag-llms"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>V* - Multimodal LLM guided visual search that beats GPT-4V | DailyAI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/nb\/2024\/01\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\/\" \/>\n<meta property=\"og:locale\" content=\"nb_NO\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"V* - Multimodal LLM guided visual search that beats GPT-4V | DailyAI\" \/>\n<meta property=\"og:description\" content=\"Researchers from UC San Diego and New York University developed V*, an LLM-guided search algorithm that is a lot better than GPT-4V at contextual understanding, and precise targeting of specific visual elements in images. Multimodal Large Language Models (MLLM) like OpenAI\u2019s GPT-4V blew us away last year with the ability to answer questions about images. As impressive as GPT-4V is, it struggles sometimes when images are very complex and often misses small details. 
The V* algorithm uses a Visual Question Answering (VQA) LLM to guide it in identifying which area of the image to focus on to answer a visual\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/nb\/2024\/01\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2024-01-16T14:01:10+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/needle-in-haystack.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1000\" \/>\n\t<meta property=\"og:image:height\" content=\"664\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Eugene van der Watt\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Skrevet av\" \/>\n\t<meta name=\"twitter:data1\" content=\"Eugene van der Watt\" \/>\n\t<meta name=\"twitter:label2\" content=\"Ansl. 
lesetid\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minutter\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/01\\\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/01\\\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\\\/\"},\"author\":{\"name\":\"Eugene van der Watt\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\"},\"headline\":\"V* &#8211; Multimodal LLM guided visual search that beats GPT-4V\",\"datePublished\":\"2024-01-16T14:01:10+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/01\\\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\\\/\"},\"wordCount\":573,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/01\\\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/01\\\/needle-in-haystack.jpg\",\"keywords\":[\"Computer vision\",\"LLMS\"],\"articleSection\":[\"Industry\"],\"inLanguage\":\"nb-NO\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/01\\\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/01\\\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\\\/\",\"name\":\"V* - Multimodal LLM guided visual search that beats GPT-4V | 
DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/01\\\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/01\\\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/01\\\/needle-in-haystack.jpg\",\"datePublished\":\"2024-01-16T14:01:10+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/01\\\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\\\/#breadcrumb\"},\"inLanguage\":\"nb-NO\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2024\\\/01\\\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"nb-NO\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/01\\\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/01\\\/needle-in-haystack.jpg\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/01\\\/needle-in-haystack.jpg\",\"width\":1000,\"height\":664},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/01\\\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"V* &#8211; Multimodal LLM guided visual search that beats GPT-4V\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI 
News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"nb-NO\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"nb-NO\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\",\"name\":\"Eugene van der Watt\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"nb-NO\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"caption\":\"Eugene van der Watt\"},\"description\":\"Eugene comes from an electronic engineering background and loves all things tech. 
When he takes a break from consuming AI news you'll find him at the snooker table.\",\"sameAs\":[\"www.linkedin.com\\\/in\\\/eugene-van-der-watt-16828119\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/nb\\\/author\\\/eugene\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"V* - Multimodal LLM guided visual search that beats GPT-4V | DailyAI","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/nb\/2024\/01\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\/","og_locale":"nb_NO","og_type":"article","og_title":"V* - Multimodal LLM guided visual search that beats GPT-4V | DailyAI","og_description":"Researchers from UC San Diego and New York University developed V*, an LLM-guided search algorithm that is a lot better than GPT-4V at contextual understanding, and precise targeting of specific visual elements in images. Multimodal Large Language Models (MLLM) like OpenAI\u2019s GPT-4V blew us away last year with the ability to answer questions about images. As impressive as GPT-4V is, it struggles sometimes when images are very complex and often misses small details. The V* algorithm uses a Visual Question Answering (VQA) LLM to guide it in identifying which area of the image to focus on to answer a visual","og_url":"https:\/\/dailyai.com\/nb\/2024\/01\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\/","og_site_name":"DailyAI","article_published_time":"2024-01-16T14:01:10+00:00","og_image":[{"width":1000,"height":664,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/needle-in-haystack.jpg","type":"image\/jpeg"}],"author":"Eugene van der Watt","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"Written by":"Eugene van der Watt","Ansl. 
lesetid":"3 minutter"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2024\/01\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2024\/01\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\/"},"author":{"name":"Eugene van der Watt","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa"},"headline":"V* &#8211; Multimodal LLM guided visual search that beats GPT-4V","datePublished":"2024-01-16T14:01:10+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2024\/01\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\/"},"wordCount":573,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2024\/01\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/needle-in-haystack.jpg","keywords":["Computer vision","LLMS"],"articleSection":["Industry"],"inLanguage":"nb-NO"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2024\/01\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\/","url":"https:\/\/dailyai.com\/2024\/01\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\/","name":"V* - Multimodal LLM guided visual search that beats GPT-4V | 
DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2024\/01\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2024\/01\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/needle-in-haystack.jpg","datePublished":"2024-01-16T14:01:10+00:00","breadcrumb":{"@id":"https:\/\/dailyai.com\/2024\/01\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\/#breadcrumb"},"inLanguage":"nb-NO","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2024\/01\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\/"]}]},{"@type":"ImageObject","inLanguage":"nb-NO","@id":"https:\/\/dailyai.com\/2024\/01\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/needle-in-haystack.jpg","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/needle-in-haystack.jpg","width":1000,"height":664},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2024\/01\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"V* &#8211; Multimodal LLM guided visual search that beats GPT-4V"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DailyAI","description":"Your Daily Dose of AI News","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"nb-NO"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DailyAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"nb-NO","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa","name":"Eugene van der Watt","image":{"@type":"ImageObject","inLanguage":"nb-NO","@id":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","caption":"Eugene van der Watt"},"description":"Eugene comes from an electronic engineering background and loves all things tech. 
When he takes a break from consuming AI news you'll find him at the snooker table.","sameAs":["www.linkedin.com\/in\/eugene-van-der-watt-16828119"],"url":"https:\/\/dailyai.com\/nb\/author\/eugene\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/nb\/wp-json\/wp\/v2\/posts\/9253","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/nb\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/nb\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/nb\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/nb\/wp-json\/wp\/v2\/comments?post=9253"}],"version-history":[{"count":4,"href":"https:\/\/dailyai.com\/nb\/wp-json\/wp\/v2\/posts\/9253\/revisions"}],"predecessor-version":[{"id":9261,"href":"https:\/\/dailyai.com\/nb\/wp-json\/wp\/v2\/posts\/9253\/revisions\/9261"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/nb\/wp-json\/wp\/v2\/media\/9260"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/nb\/wp-json\/wp\/v2\/media?parent=9253"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/nb\/wp-json\/wp\/v2\/categories?post=9253"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/nb\/wp-json\/wp\/v2\/tags?post=9253"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}