{"id":8052,"date":"2023-12-06T17:03:36","date_gmt":"2023-12-06T17:03:36","guid":{"rendered":"https:\/\/dailyai.com\/?p=8052"},"modified":"2024-03-28T00:40:52","modified_gmt":"2024-03-28T00:40:52","slug":"google-launches-its-new-gemini-multi-modal-family-of-models","status":"publish","type":"post","link":"https:\/\/dailyai.com\/da\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/","title":{"rendered":"Google frigiver sin banebrydende Gemini-familie af multimodale modeller"},"content":{"rendered":"<p><strong>Google har lanceret sin Gemini-familie af multimodale AI-modeller, et dramatisk tiltag i en branche, der stadig er p\u00e5virket af begivenhederne p\u00e5 OpenAI.<\/strong><\/p>\n<p>Gemini er en multimodal familie af modeller, der er i stand til at behandle og forst\u00e5 en blanding af tekst, billeder, lyd og video.<\/p>\n<p>Sundar Pichai, Googles CEO, og Demis Hassabis, CEO for Google DeepMind, udtrykker store forventninger til Gemini. Google planl\u00e6gger at integrere det p\u00e5 tv\u00e6rs af Googles omfattende produkter og tjenester, herunder s\u00f8gning, Maps og Chrome.<\/p>\n<blockquote class=\"twitter-tweet\">\n<p dir=\"ltr\" lang=\"en\">Vi er glade for at kunne annoncere \ud835\uddda\ud835\uddf2\ud835\uddfa\ud835\uddf6\ud835\uddfb\ud835\uddf6: <a href=\"https:\/\/twitter.com\/Google?ref_src=twsrc%5Etfw\">@Google<\/a>'s st\u00f8rste og mest kompetente AI-model.<\/p>\n<p>Den er bygget til at v\u00e6re indbygget multimodal og kan forst\u00e5 og arbejde p\u00e5 tv\u00e6rs af tekst, kode, lyd, billede og video - og opn\u00e5r state-of-the-art performance p\u00e5 tv\u00e6rs af mange opgaver. \ud83e\uddf5 <a href=\"https:\/\/t.co\/mwHZTDTBuG\">https:\/\/t.co\/mwHZTDTBuG<\/a> <a href=\"https:\/\/t.co\/zfLlCGuzmV\">pic.twitter.com\/zfLlCGuzmV<\/a><\/p>\n<p>- Google DeepMind (@GoogleDeepMind) <a href=\"https:\/\/twitter.com\/GoogleDeepMind\/status\/1732416095355814277?ref_src=twsrc%5Etfw\">6. 
december 2023<\/a><\/p><\/blockquote>\n<p><script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/p>\n<p>Gemini har omfattende multimodalitet og behandler og interagerer med tekst, billeder, video og lyd. Mens vi har v\u00e6nnet os til tekst- og billedbehandling, er lyd og video banebrydende og tilbyder sp\u00e6ndende nye m\u00e5der at h\u00e5ndtere rich media p\u00e5.<\/p>\n<p>Hassabis bem\u00e6rker: \"Disse modeller forst\u00e5r p\u00e5 en m\u00e5de bedre verden omkring dem.\"<\/p>\n<p>Pichai understregede modellens sammenh\u00e6ng med Googles produkter og tjenester og sagde: \"En af de st\u00e6rke ting ved dette \u00f8jeblik er, at du kan arbejde p\u00e5 en underliggende teknologi og g\u00f8re den bedre, og det flyder straks p\u00e5 tv\u00e6rs af vores produkter.\"<\/p>\n<p>Gemini vil tage tre forskellige former, de er:<\/p>\n<ul>\n<li><strong>Gemini Nano:<\/strong> En lettere version, der er skr\u00e6ddersyet til Android-enheder, og som giver mulighed for offline- og native-funktioner.<\/li>\n<li><strong>Gemini Pro:<\/strong> En mere avanceret version, som skal drive mange af Googles AI-tjenester, herunder Bard.<\/li>\n<li><strong>Gemini Ultra:<\/strong> Den mest kraftfulde iteration, der prim\u00e6rt er designet til datacentre og virksomhedsapplikationer, er planlagt til udgivelse n\u00e6ste \u00e5r.<\/li>\n<\/ul>\n<p>Med hensyn til ydeevne h\u00e6vder Google, at Gemini overg\u00e5r GPT-4 i 30 ud af 32 benchmarks, og at den is\u00e6r udm\u00e6rker sig ved at forst\u00e5 og interagere med video og lyd. 
This performance is attributed to Gemini being designed as a natively multi-modal model from the outset.<\/p>\n<blockquote class=\"twitter-tweet\">\n<p dir=\"ltr\" lang=\"en\">Bard is getting its biggest upgrade yet with a specially tuned version of Gemini Pro.<\/p>\n<p>Starting today, it will be far better at things like:<br \/>\n\ud83d\udd18 Understanding<br \/>\n\ud83d\udd18 Summarizing<br \/>\n\ud83d\udd18 Reasoning<br \/>\n\ud83d\udd18 Coding<br \/>\n\ud83d\udd18 Planning<\/p>\n<p>And much more. \u2193 <a href=\"https:\/\/t.co\/TJR12OioxU\">https:\/\/t.co\/TJR12OioxU<\/a><\/p>\n<p>- Google DeepMind (@GoogleDeepMind) <a href=\"https:\/\/twitter.com\/GoogleDeepMind\/status\/1732430045275140415?ref_src=twsrc%5Etfw\">December 6, 2023<\/a><\/p><\/blockquote>\n<p><script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><br \/>\nGoogle was also keen to highlight Gemini's efficiency.<\/p>\n<p>It was trained on Google's own Tensor Processing Units (TPUs) and is faster and more cost-effective than previous models. Alongside Gemini, Google is launching the TPU v5p for data centers, improving the efficiency of running models at scale.<\/p>\n<h2>Is Gemini the ChatGPT killer?<\/h2>\n<p>Google is clearly bullish on Gemini. Earlier this year, a <a href=\"https:\/\/dailyai.com\/da\/2023\/09\/googles-gemini-is-expected-to-outperform-gpt-4\/\">'leak' from SemiAnalysis<\/a> suggested that Gemini could blow competitors away, lifting Google from a peripheral member of the generative AI industry to the protagonist, ahead of OpenAI.<\/p>\n<p>Beyond its multi-modality, Gemini is reportedly the first model to outperform human experts on the MMLU (massive multitask language understanding) benchmark, which tests world knowledge and problem-solving ability across 57 subjects such as math, physics, history, law, medicine, and ethics.<\/p>\n<p><iframe loading=\"lazy\" title=\"Math and physics with AI | Gemini\" width=\"1080\" height=\"608\" src=\"https:\/\/www.youtube.com\/embed\/K4pX1VAxaAI?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe><\/p>\n<p>&nbsp;<\/p>\n<p>Pichai says the launch of Gemini heralds a \"new era\" of AI, emphasizing how Gemini will benefit from Google's extensive product catalog.<\/p>\n<p>Search engine integration is particularly interesting, as <a href=\"https:\/\/dailyai.com\/da\/2023\/09\/google-turns-25-will-ai-herald-another-25-years-of-success\\/\">Google dominates this space<\/a> and has the advantage of the world's most comprehensive search index at its fingertips.<\/p>\n<p>The release of Gemini places Google firmly in the ongoing AI race, and people will go to great lengths to test it against GPT-4.<\/p>\n<h2>Gemini benchmark testing and analysis<\/h2>\n<p>In a <a href=\"https:\/\/blog.google\/technology\/ai\/google-gemini-ai\/#performance\">blog post<\/a>, Google published benchmark results showing Gemini Ultra beating GPT-4 on most tests. It also has advanced coding capabilities, with excellent results on coding benchmarks such as HumanEval and Natural2Code.<\/p>\n<p><iframe loading=\"lazy\" title=\"Using AI to solve complex problems | Gemini\" width=\"1080\" height=\"608\" src=\"https:\/\/www.youtube.com\/embed\/LvGmVmHv69s?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe><\/p>\n<p>&nbsp;<\/p>\n<p>Here is the benchmark data. 
Note that these measurements use the as-yet-unreleased Gemini Ultra version, so Gemini cannot be considered a ChatGPT killer until next year. And you can be sure that OpenAI will try to counter Gemini as quickly as possible.<\/p>\n<h3>Text\/NLP benchmark results<\/h3>\n<p><strong>General knowledge:<\/strong><\/p>\n<ul>\n<li>MMLU (Massive Multitask Language Understanding):\n<ul>\n<li>Gemini Ultra: 90.0% (chain-of-thought @32)<\/li>\n<li>GPT-4: 86.4% (5-shot, reported)<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p><strong>Reasoning:<\/strong><\/p>\n<ul>\n<li>Big-Bench Hard (diverse set of challenging tasks requiring multi-step reasoning):\n<ul>\n<li>Gemini Ultra: 83.6% (3-shot)<\/li>\n<li>GPT-4: 83.1% (3-shot, API)<\/li>\n<\/ul>\n<\/li>\n<li>DROP (reading comprehension, F1 score):\n<ul>\n<li>Gemini Ultra: 82.4 (variable shots)<\/li>\n<li>GPT-4: 80.9 (3-shot, reported)<\/li>\n<\/ul>\n<\/li>\n<li>HellaSwag (commonsense reasoning for everyday tasks):\n<ul>\n<li>Gemini Ultra: 87.8% (10-shot)<\/li>\n<li>GPT-4: 95.3% (10-shot, reported)<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p><strong>Math:<\/strong><\/p>\n<ul>\n<li>GSM8K (basic arithmetic manipulations, including grade-school math problems):\n<ul>\n<li>Gemini Ultra: 94.4% (majority vote @32)<\/li>\n<li>GPT-4: 92.0% (5-shot chain-of-thought, reported)<\/li>\n<\/ul>\n<\/li>\n<li>MATH (challenging math problems, including algebra, geometry, pre-calculus, and others):\n<ul>\n<li>Gemini Ultra: 53.2% (4-shot)<\/li>\n<li>GPT-4: 52.9% (4-shot, API)<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p><strong>Code:<\/strong><\/p>\n<ul>\n<li>HumanEval (Python code generation):\n<ul>\n<li>Gemini Ultra: 74.4% (0-shot, internal test)<\/li>\n<li>GPT-4: 67.0% (0-shot, reported)<\/li>\n<\/ul>\n<\/li>\n<li>Natural2Code (Python code generation, new held-out dataset, HumanEval-like, not leaked on the web):\n<ul>\n<li>Gemini Ultra: 74.9% (0-shot)<\/li>\n<li>GPT-4: 73.9% (0-shot, API)<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<h3>Multi-modal benchmark performance<\/h3>\n<p>The multi-modal capabilities of Google's Gemini AI model are also compared with OpenAI's GPT-4V.<\/p>\n<p><strong>Image understanding and processing:<\/strong><\/p>\n<ul>\n<li><strong>MMMU (multi-discipline college-level reasoning problems):<\/strong>\n<ul>\n<li>Gemini Ultra: 59.4% (0-shot pass@1, pixel only)<\/li>\n<li>GPT-4V: 56.8% (0-shot pass@1)<\/li>\n<\/ul>\n<\/li>\n<li><strong>VQAv2 (natural image understanding):<\/strong>\n<ul>\n<li>Gemini Ultra: 77.8% (0-shot, pixel only)<\/li>\n<li>GPT-4V: 77.2% (0-shot)<\/li>\n<\/ul>\n<\/li>\n<li><strong>TextVQA (OCR on natural images):<\/strong>\n<ul>\n<li>Gemini Ultra: 82.3% (0-shot, pixel only)<\/li>\n<li>GPT-4V: 78.0% (0-shot)<\/li>\n<\/ul>\n<\/li>\n<li><strong>DocVQA (document understanding):<\/strong>\n<ul>\n<li>Gemini Ultra: 90.9% (0-shot, pixel only)<\/li>\n<li>GPT-4V: 88.4% (0-shot, pixel only)<\/li>\n<\/ul>\n<\/li>\n<li><strong>InfographicVQA (infographic understanding):<\/strong>\n<ul>\n<li>Gemini Ultra: 80.3% (0-shot, pixel only)<\/li>\n<li>GPT-4V: 75.1% (0-shot, pixel only)<\/li>\n<\/ul>\n<\/li>\n<li><strong>MathVista (mathematical reasoning in visual contexts):<\/strong>\n<ul>\n<li>Gemini Ultra: 53.0% (0-shot, pixel only)<\/li>\n<li>GPT-4V: 49.9% (0-shot)<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p><strong>Video processing:<\/strong><\/p>\n<ul>\n<li><strong>VATEX (English video captioning, CIDEr score):<\/strong>\n<ul>\n<li>Gemini Ultra: 62.7 (4-shot)<\/li>\n<li>DeepMind Flamingo: 56.0 (4-shot)<\/li>\n<\/ul>\n<\/li>\n<li><strong>Perception Test MCQA (video question answering):<\/strong>\n<ul>\n<li>Gemini Ultra: 54.7% (0-shot)<\/li>\n<li>SeViLA: 46.3% (0-shot)<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p><strong>Audio processing:<\/strong><\/p>\n<ul>\n<li><strong>CoVoST 2 (automatic speech translation, 21 languages, BLEU score):<\/strong>\n<ul>\n<li>Gemini Pro: 40.1<\/li>\n<li>Whisper v2: 29.1<\/li>\n<\/ul>\n<\/li>\n<li><strong>FLEURS (automatic speech recognition, 62 languages, word error rate):<\/strong>\n<ul>\n<li>Gemini Pro: 7.6% (lower is better)<\/li>\n<li>Whisper v3: 17.6%<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<h2>Google's ethical commitment<\/h2>\n<p class=\"whitespace-pre-wrap\">In a <a href=\"https:\/\/blog.google\/technology\/ai\/google-gemini-ai\/#scalable-efficient\">blog post<\/a>, Google emphasized its commitment to responsible and ethical AI practices.<\/p>\n<p class=\"whitespace-pre-wrap\">According to Google, Gemini underwent more rigorous testing than any previous Google AI, assessing factors such as bias, toxicity, cybersecurity threats, and potential for misuse. Adversarial techniques helped surface problems early. External experts then stress-tested and \"red-teamed\" the models to identify further blind spots.<\/p>\n<p class=\"whitespace-pre-wrap\">Google says responsibility and safety will remain a priority amid the rapid development of AI. The company helped found industry groups to establish best practices, including MLCommons and the Secure AI Framework (SAIF).<\/p>\n<p class=\"whitespace-pre-wrap\">Google pledges continued collaboration with researchers, governments, and civil society organizations globally.<\/p>\n<h2>Gemini Ultra release<\/h2>\n<p class=\"whitespace-pre-wrap\">For now, Google is limiting access to the most powerful model, Gemini Ultra, which arrives early next year.<\/p>\n<p class=\"whitespace-pre-wrap\">Before then, select developers and experts will experiment with Ultra to provide feedback. Its launch will coincide with a new AI model platform, or \"experience\" as Google calls it, named Bard Advanced.<\/p>\n<h2>Gemini for developers<\/h2>\n<p>From December 13, developers and enterprise customers get access to Gemini Pro via the Gemini API, available in Google AI Studio or Google Cloud Vertex AI.<\/p>\n<p><strong>Google AI Studio:<\/strong> Google AI Studio is a user-friendly, web-based tool designed to help developers prototype and launch applications using an API key. This free resource is ideal for those in the early stages of app development.<\/p>\n<p><strong>Vertex AI:<\/strong> Vertex AI is a more comprehensive AI platform offering fully managed services. It integrates seamlessly with Google Cloud and also provides enterprise-grade security, privacy, and data governance compliance.<\/p>\n<p>Beyond these platforms, Android developers will be able to access Gemini Nano for on-device tasks. It will be available for integration via AICore, a new system capability debuting in Android 14, starting with Pixel 8 Pro devices.<\/p>\n<h2>Google holds the aces, for now<\/h2>\n<p>OpenAI and Google differ in one big way: Google builds stacks of other tools and products in-house, including those used by billions of people every day.<\/p>\n<p>We are, of course, talking about Android, Chrome, Gmail, Google Workspace, and Google Search.<\/p>\n<p>Through its alliance with Microsoft, OpenAI has similar opportunities via Copilot, but that has yet to really take off.<\/p>\n<p>And if we are honest, it is probably Google that holds the power across these product categories.<\/p>\n<p>Google has pushed ahead in the AI race, but you can be sure this will only accelerate OpenAI's campaign toward GPT-5 and AGI.<\/p>","protected":false},"excerpt":{"rendered":"<p>Google has launched its Gemini family of multi-modal AI models, a dramatic move in an industry still reeling from the events at OpenAI. Gemini is a multi-modal family of models capable of processing and understanding a mix of text, images, audio, and video. Sundar Pichai, Google's CEO, and Demis Hassabis, CEO of Google DeepMind, have expressed high expectations for Gemini. Google plans to integrate it across Google's extensive products and services, including Search, Maps, and Chrome. We're excited to announce \ud835\uddda\ud835\uddf2\ud835\uddfa\ud835\uddf6\ud835\uddfb\ud835\uddf6: @Google's largest and most capable AI model. Built to be natively multimodal, it can understand and operate across text, code and audio,<\/p>","protected":false},"author":2,"featured_media":2402,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[84],"tags":[125,147,383,102],"class_list":["post-8052","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-industry","tag-bard","tag-deepmind","tag-gemini","tag-google"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Google unleashes its groundbreaking Gemini family of multi-modal models | DailyAI<\/title>\n<meta name=\"description\" content=\"Just a few days after reports suggested Google&#039;s secretive Gemini project was delayed, they&#039;ve unleashed it upon an AI industry still reeling from events at OpenAI.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/da\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/\" \/>\n<meta property=\"og:locale\" content=\"da_DK\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" 
content=\"Google unleashes its groundbreaking Gemini family of multi-modal models | DailyAI\" \/>\n<meta property=\"og:description\" content=\"Just a few days after reports suggested Google&#039;s secretive Gemini project was delayed, they&#039;ve unleashed it upon an AI industry still reeling from events at OpenAI.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/da\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2023-12-06T17:03:36+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2024-03-28T00:40:52+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/shutterstock_552493561.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1000\" \/>\n\t<meta property=\"og:image:height\" content=\"667\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Sam Jeans\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Skrevet af\" \/>\n\t<meta name=\"twitter:data1\" content=\"Sam Jeans\" \/>\n\t<meta name=\"twitter:label2\" content=\"Estimeret l\u00e6setid\" \/>\n\t<meta name=\"twitter:data2\" content=\"6 minutter\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/google-launches-its-new-gemini-multi-modal-family-of-models\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/google-launches-its-new-gemini-multi-modal-family-of-models\\\/\"},\"author\":{\"name\":\"Sam 
Jeans\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/711e81f945549438e8bbc579efdeb3c9\"},\"headline\":\"Google unleashes its groundbreaking Gemini family of multi-modal models\",\"datePublished\":\"2023-12-06T17:03:36+00:00\",\"dateModified\":\"2024-03-28T00:40:52+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/google-launches-its-new-gemini-multi-modal-family-of-models\\\/\"},\"wordCount\":1356,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/google-launches-its-new-gemini-multi-modal-family-of-models\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/shutterstock_552493561.jpg\",\"keywords\":[\"Bard\",\"DeepMind\",\"Gemini\",\"Google\"],\"articleSection\":{\"1\":\"Industry\"},\"inLanguage\":\"da-DK\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/google-launches-its-new-gemini-multi-modal-family-of-models\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/google-launches-its-new-gemini-multi-modal-family-of-models\\\/\",\"name\":\"Google unleashes its groundbreaking Gemini family of multi-modal models | DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/google-launches-its-new-gemini-multi-modal-family-of-models\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/google-launches-its-new-gemini-multi-modal-family-of-models\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/shutterstock_552493561.jpg\",\"datePublished\":\"2023-12-06T17:03:36+00:00\",\"dateModified\":\"2024-03-28T00:40:52+00:00\",\"description\":\"Just a few days after reports suggested Google's secretive Gemini project was delayed, they've unleashed it upon an AI industry still 
reeling from events at OpenAI.\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/google-launches-its-new-gemini-multi-modal-family-of-models\\\/#breadcrumb\"},\"inLanguage\":\"da-DK\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/google-launches-its-new-gemini-multi-modal-family-of-models\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"da-DK\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/google-launches-its-new-gemini-multi-modal-family-of-models\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/shutterstock_552493561.jpg\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/shutterstock_552493561.jpg\",\"width\":1000,\"height\":667,\"caption\":\"Google Med-PaLM 2\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/google-launches-its-new-gemini-multi-modal-family-of-models\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Google unleashes its groundbreaking Gemini family of multi-modal models\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI 
News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"da-DK\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"da-DK\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/711e81f945549438e8bbc579efdeb3c9\",\"name\":\"Sam Jeans\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"da-DK\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g\",\"caption\":\"Sam Jeans\"},\"description\":\"Sam is a science and technology writer who has worked in various AI startups. 
When he\u2019s not writing, he can be found reading medical journals or digging through boxes of vinyl records.\",\"sameAs\":[\"https:\\\/\\\/www.linkedin.com\\\/in\\\/sam-jeans-6746b9142\\\/\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/da\\\/author\\\/samjeans\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Google frigiver sin banebrydende Gemini-familie af multimodale modeller | DailyAI","description":"Bare et par dage efter, at rapporterne antydede, at Googles hemmelighedsfulde Gemini-projekt var forsinket, har de sluppet det l\u00f8s p\u00e5 en AI-industri, der stadig er rystet over begivenhederne p\u00e5 OpenAI.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/da\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/","og_locale":"da_DK","og_type":"article","og_title":"Google unleashes its groundbreaking Gemini family of multi-modal models | DailyAI","og_description":"Just a few days after reports suggested Google's secretive Gemini project was delayed, they've unleashed it upon an AI industry still reeling from events at OpenAI.","og_url":"https:\/\/dailyai.com\/da\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/","og_site_name":"DailyAI","article_published_time":"2023-12-06T17:03:36+00:00","article_modified_time":"2024-03-28T00:40:52+00:00","og_image":[{"width":1000,"height":667,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/shutterstock_552493561.jpg","type":"image\/jpeg"}],"author":"Sam Jeans","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"Skrevet af":"Sam Jeans","Estimeret l\u00e6setid":"6 
minutter"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/"},"author":{"name":"Sam Jeans","@id":"https:\/\/dailyai.com\/#\/schema\/person\/711e81f945549438e8bbc579efdeb3c9"},"headline":"Google unleashes its groundbreaking Gemini family of multi-modal models","datePublished":"2023-12-06T17:03:36+00:00","dateModified":"2024-03-28T00:40:52+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/"},"wordCount":1356,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/shutterstock_552493561.jpg","keywords":["Bard","DeepMind","Gemini","Google"],"articleSection":{"1":"Industry"},"inLanguage":"da-DK"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/","url":"https:\/\/dailyai.com\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/","name":"Google frigiver sin banebrydende Gemini-familie af multimodale modeller | DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/shutterstock_552493561.jpg","datePublished":"2023-12-06T17:03:36+00:00","dateModified":"2024-03-28T00:40:52+00:00","description":"Bare et par dage efter, at 
rapporterne antydede, at Googles hemmelighedsfulde Gemini-projekt var forsinket, har de sluppet det l\u00f8s p\u00e5 en AI-industri, der stadig er rystet over begivenhederne p\u00e5 OpenAI.","breadcrumb":{"@id":"https:\/\/dailyai.com\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/#breadcrumb"},"inLanguage":"da-DK","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/"]}]},{"@type":"ImageObject","inLanguage":"da-DK","@id":"https:\/\/dailyai.com\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/shutterstock_552493561.jpg","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/shutterstock_552493561.jpg","width":1000,"height":667,"caption":"Google Med-PaLM 2"},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"Google unleashes its groundbreaking Gemini family of multi-modal models"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DailyAI","description":"Din daglige dosis af 
AI-nyheder","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"da-DK"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DailyAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"da-DK","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/711e81f945549438e8bbc579efdeb3c9","name":"Sam Jeans","image":{"@type":"ImageObject","inLanguage":"da-DK","@id":"https:\/\/secure.gravatar.com\/avatar\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g","caption":"Sam Jeans"},"description":"Sam er videnskabs- og teknologiforfatter og har arbejdet i forskellige AI-startups. 
When he\u2019s not writing, he can be found reading medical journals or digging through boxes of vinyl records.","sameAs":["https:\/\/www.linkedin.com\/in\/sam-jeans-6746b9142\/"],"url":"https:\/\/dailyai.com\/da\/author\/samjeans\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/posts\/8052","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/comments?post=8052"}],"version-history":[{"count":16,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/posts\/8052\/revisions"}],"predecessor-version":[{"id":8084,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/posts\/8052\/revisions\/8084"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/media\/2402"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/media?parent=8052"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/categories?post=8052"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/tags?post=8052"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}