{"id":9253,"date":"2024-01-16T14:01:10","date_gmt":"2024-01-16T14:01:10","guid":{"rendered":"https:\/\/dailyai.com\/?p=9253"},"modified":"2024-01-16T14:01:10","modified_gmt":"2024-01-16T14:01:10","slug":"v-multimodal-llm-guided-visual-search-that-beats-gpt-4v","status":"publish","type":"post","link":"https:\/\/dailyai.com\/pt\/2024\/01\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\/","title":{"rendered":"V* - Pesquisa visual guiada por LLM multimodal que supera a GPT-4V"},"content":{"rendered":"<p><strong>Investigadores da Universidade da Calif\u00f3rnia em San Diego e da Universidade de Nova Iorque desenvolveram o V*, um algoritmo de pesquisa guiada por LLM que \u00e9 muito melhor do que o GPT-4V na compreens\u00e3o do contexto e na sele\u00e7\u00e3o precisa de elementos visuais espec\u00edficos nas imagens.<\/strong><\/p>\n<p>Os modelos multimodais de linguagem ampla (MLLM), como o GPT-4V da OpenAI, surpreenderam-nos no ano passado com a sua capacidade de responder a perguntas sobre imagens. Por muito impressionante que o GPT-4V seja, por vezes tem dificuldades quando as imagens s\u00e3o muito complexas e, muitas vezes, deixa escapar pequenos pormenores.<\/p>\n<p>O algoritmo V* utiliza um LLM de resposta a perguntas visuais (VQA) para o orientar na identifica\u00e7\u00e3o da \u00e1rea da imagem em que se deve concentrar para responder a uma pergunta visual. Os investigadores chamam a esta combina\u00e7\u00e3o Show, sEArch, and telL (SEAL).<\/p>\n<p>Se algu\u00e9m lhe desse uma imagem de alta resolu\u00e7\u00e3o e lhe fizesse uma pergunta sobre ela, a sua l\u00f3gica gui\u00e1-lo-ia para fazer zoom numa \u00e1rea onde fosse mais prov\u00e1vel encontrar o item em quest\u00e3o. 
SEAL uses V* to analyze images in a similar way.<\/p>\n<p>A visual search model could simply split an image into blocks, zoom in on each block, and then process it to find the object in question, but that is computationally very inefficient.<\/p>\n<p>When prompted with a text query about an image, V* first tries to locate the target in the image directly. If it can't, it asks the MLLM to use a common-sense approach to identify the area of the image where the target is most likely to be found.<\/p>\n<p>It then focuses its search on that area alone, rather than attempting a \"zoomed-out\" search of the entire image.<\/p>\n<figure id=\"attachment_9257\" aria-describedby=\"caption-attachment-9257\" style=\"width: 1942px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-9257\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Wheres-the-guitar.jpg\" alt=\"\" width=\"1942\" height=\"638\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Wheres-the-guitar.jpg 1942w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Wheres-the-guitar-300x99.jpg 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Wheres-the-guitar-1024x336.jpg 1024w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Wheres-the-guitar-768x252.jpg 768w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Wheres-the-guitar-1536x505.jpg 1536w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Wheres-the-guitar-370x122.jpg 370w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Wheres-the-guitar-800x263.jpg 800w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Wheres-the-guitar-740x243.jpg 740w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Wheres-the-guitar-20x7.jpg 20w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Wheres-the-guitar-1600x526.jpg 1600w, 
https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Wheres-the-guitar-146x48.jpg 146w\" sizes=\"auto, (max-width: 1942px) 100vw, 1942px\" \/><figcaption id=\"caption-attachment-9257\" class=\"wp-caption-text\">When asked to look for the guitar, the LLM identifies the stage as the logical area on which to focus the visual analysis. Source: GitHub<\/figcaption><\/figure>\n<p>When GPT-4V is asked to answer questions about an image that require extensive visual processing of high-resolution images, it struggles. SEAL with V* performs far better.<\/p>\n<figure id=\"attachment_9258\" aria-describedby=\"caption-attachment-9258\" style=\"width: 992px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-9258\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Vending-machine-example.jpg\" alt=\"\" width=\"992\" height=\"1302\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Vending-machine-example.jpg 992w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Vending-machine-example-229x300.jpg 229w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Vending-machine-example-780x1024.jpg 780w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Vending-machine-example-768x1008.jpg 768w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Vending-machine-example-370x486.jpg 370w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Vending-machine-example-800x1050.jpg 800w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Vending-machine-example-740x971.jpg 740w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Vending-machine-example-20x26.jpg 20w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/Vending-machine-example-37x48.jpg 37w\" sizes=\"auto, (max-width: 992px) 100vw, 992px\" \/><figcaption id=\"caption-attachment-9258\" class=\"wp-caption-text\">SEAL correctly answers a 
question about an image, while GPT-4V gets it wrong. Source: GitHub<\/figcaption><\/figure>\n<p>When asked \"What kind of drink can we buy from that vending machine?\" SEAL answered \"Coca-Cola\", while GPT-4V incorrectly guessed \"Pepsi\".<\/p>\n<p>The researchers used 191 high-resolution images from Meta's Segment Anything (SAM) dataset and created a benchmark to see how SEAL's performance compared with that of other models. The V*Bench benchmark tests two tasks: attribute recognition and spatial relationship reasoning.<\/p>\n<p>The figures below show human performance compared with open-source models, commercial models like GPT-4V, and SEAL. The boost that V* gives SEAL's performance is particularly impressive because the underlying MLLM it uses is LLaVa-7b, which is much smaller than GPT-4V.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-9259\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/table.jpg\" alt=\"\" width=\"1120\" height=\"1060\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/table.jpg 1120w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/table-300x284.jpg 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/table-1024x969.jpg 1024w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/table-768x727.jpg 768w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/table-370x350.jpg 370w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/table-800x757.jpg 800w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/table-20x19.jpg 20w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/table-740x700.jpg 740w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/table-24x24.jpg 24w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/table-51x48.jpg 51w\" sizes=\"auto, 
(max-width: 1120px) 100vw, 1120px\" \/><\/p>\n<p>This intuitive approach to image analysis seems to work very well, with a number of impressive examples in the <a href=\"https:\/\/vstar-seal.github.io\/\" target=\"_blank\" rel=\"noopener\">paper summary on GitHub<\/a>.<\/p>\n<p>It will be interesting to see whether other MLLMs, like those from OpenAI or Google, adopt a similar approach.<\/p>\n<p>When asked what drink was sold in the vending machine in the image above, Google's Bard replied, \"There is no vending machine in the foreground.\" Perhaps Gemini Ultra will do a better job.<\/p>\n<p>For now, it seems that SEAL and its new V* algorithm are well ahead of some of the biggest multimodal models when it comes to visual questioning.<\/p>\n<p>&nbsp;<\/p>","protected":false},"excerpt":{"rendered":"<p>Researchers from UC San Diego and New York University developed V*, an LLM-guided search algorithm that is a lot better than GPT-4V at contextual understanding and at precisely targeting specific visual elements in images. Multimodal Large Language Models (MLLMs) like OpenAI\u2019s GPT-4V blew us away last year with their ability to answer questions about images. As impressive as GPT-4V is, it sometimes struggles when images are very complex and often misses small details. 
The V* algorithm uses a Visual Question Answering (VQA) LLM to guide it in identifying which area of the image to focus on to answer a visual question<\/p>","protected":false},"author":6,"featured_media":9260,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[84],"tags":[166,118],"class_list":["post-9253","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-industry","tag-computer-vision","tag-llms"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>V* - Multimodal LLM guided visual search that beats GPT-4V | DailyAI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/pt\/2024\/01\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\/\" \/>\n<meta property=\"og:locale\" content=\"pt_PT\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"V* - Multimodal LLM guided visual search that beats GPT-4V | DailyAI\" \/>\n<meta property=\"og:description\" content=\"Researchers from UC San Diego and New York University developed V*, an LLM-guided search algorithm that is a lot better than GPT-4V at contextual understanding, and precise targeting of specific visual elements in images. Multimodal Large Language Models (MLLM) like OpenAI\u2019s GPT-4V blew us away last year with the ability to answer questions about images. As impressive as GPT-4V is, it struggles sometimes when images are very complex and often misses small details. 
The V* algorithm uses a Visual Question Answering (VQA) LLM to guide it in identifying which area of the image to focus on to answer a visual\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/pt\/2024\/01\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2024-01-16T14:01:10+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/needle-in-haystack.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1000\" \/>\n\t<meta property=\"og:image:height\" content=\"664\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Eugene van der Watt\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Eugene van der Watt\" \/>\n\t<meta name=\"twitter:label2\" content=\"Estimated reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/01\\\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/01\\\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\\\/\"},\"author\":{\"name\":\"Eugene van der Watt\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\"},\"headline\":\"V* &#8211; Multimodal LLM guided visual search that beats 
GPT-4V\",\"datePublished\":\"2024-01-16T14:01:10+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/01\\\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\\\/\"},\"wordCount\":573,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/01\\\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/01\\\/needle-in-haystack.jpg\",\"keywords\":[\"Computer vision\",\"LLMS\"],\"articleSection\":[\"Industry\"],\"inLanguage\":\"pt-PT\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/01\\\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/01\\\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\\\/\",\"name\":\"V* - Multimodal LLM guided visual search that beats GPT-4V | DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/01\\\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/01\\\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/01\\\/needle-in-haystack.jpg\",\"datePublished\":\"2024-01-16T14:01:10+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/01\\\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\\\/#breadcrumb\"},\"inLanguage\":\"pt-PT\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2024\\\/01\\\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-PT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/01\\\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\\\/#prim
aryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/01\\\/needle-in-haystack.jpg\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/01\\\/needle-in-haystack.jpg\",\"width\":1000,\"height\":664},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/01\\\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"V* &#8211; Multimodal LLM guided visual search that beats GPT-4V\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"pt-PT\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-PT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube
.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\",\"name\":\"Eugene van der Watt\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-PT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"caption\":\"Eugene van der Watt\"},\"description\":\"Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.\",\"sameAs\":[\"www.linkedin.com\\\/in\\\/eugene-van-der-watt-16828119\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/pt\\\/author\\\/eugene\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"V* - Multimodal LLM guided visual search that beats GPT-4V | DailyAI","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/pt\/2024\/01\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\/","og_locale":"pt_PT","og_type":"article","og_title":"V* - Multimodal LLM guided visual search that beats GPT-4V | DailyAI","og_description":"Researchers from UC San Diego and New York University developed V*, an LLM-guided search algorithm that is a lot better than GPT-4V at contextual understanding, and precise targeting of specific visual elements in images. Multimodal Large Language Models (MLLM) like OpenAI\u2019s GPT-4V blew us away last year with the ability to answer questions about images. As impressive as GPT-4V is, it struggles sometimes when images are very complex and often misses small details. 
The V* algorithm uses a Visual Question Answering (VQA) LLM to guide it in identifying which area of the image to focus on to answer a visual","og_url":"https:\/\/dailyai.com\/pt\/2024\/01\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\/","og_site_name":"DailyAI","article_published_time":"2024-01-16T14:01:10+00:00","og_image":[{"width":1000,"height":664,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/needle-in-haystack.jpg","type":"image\/jpeg"}],"author":"Eugene van der Watt","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"Written by":"Eugene van der Watt","Estimated reading time":"3 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2024\/01\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2024\/01\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\/"},"author":{"name":"Eugene van der Watt","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa"},"headline":"V* &#8211; Multimodal LLM guided visual search that beats GPT-4V","datePublished":"2024-01-16T14:01:10+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2024\/01\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\/"},"wordCount":573,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2024\/01\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/needle-in-haystack.jpg","keywords":["Computer vision","LLMS"],"articleSection":["Industry"],"inLanguage":"pt-PT"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2024\/01\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\/","url":"https:\/\/dailyai.com\/2024\/01\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\/","name":"V* - Multimodal LLM 
guided visual search that beats GPT-4V | DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2024\/01\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2024\/01\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/needle-in-haystack.jpg","datePublished":"2024-01-16T14:01:10+00:00","breadcrumb":{"@id":"https:\/\/dailyai.com\/2024\/01\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\/#breadcrumb"},"inLanguage":"pt-PT","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2024\/01\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\/"]}]},{"@type":"ImageObject","inLanguage":"pt-PT","@id":"https:\/\/dailyai.com\/2024\/01\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/needle-in-haystack.jpg","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/needle-in-haystack.jpg","width":1000,"height":664},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2024\/01\/v-multimodal-llm-guided-visual-search-that-beats-gpt-4v\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"V* &#8211; Multimodal LLM guided visual search that beats GPT-4V"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DailyAI","description":"Your Daily Dose of 
AI News","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"pt-PT"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DailyAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"pt-PT","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa","name":"Eugene van der Watt","image":{"@type":"ImageObject","inLanguage":"pt-PT","@id":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","caption":"Eugene van der Watt"},"description":"Eugene comes from an electronic engineering background and loves all things tech. 
When he takes a break from consuming AI news you'll find him at the snooker table.","sameAs":["www.linkedin.com\/in\/eugene-van-der-watt-16828119"],"url":"https:\/\/dailyai.com\/pt\/author\/eugene\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/posts\/9253","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/comments?post=9253"}],"version-history":[{"count":4,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/posts\/9253\/revisions"}],"predecessor-version":[{"id":9261,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/posts\/9253\/revisions\/9261"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/media\/9260"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/media?parent=9253"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/categories?post=9253"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/tags?post=9253"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}