{"id":8052,"date":"2023-12-06T17:03:36","date_gmt":"2023-12-06T17:03:36","guid":{"rendered":"https:\/\/dailyai.com\/?p=8052"},"modified":"2024-03-28T00:40:52","modified_gmt":"2024-03-28T00:40:52","slug":"google-launches-its-new-gemini-multi-modal-family-of-models","status":"publish","type":"post","link":"https:\/\/dailyai.com\/sv\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/","title":{"rendered":"Google unleashes its groundbreaking Gemini family of multi-modal models"},"content":{"rendered":"<p><strong>Google has launched its Gemini family of multimodal AI models, a dramatic move in an industry still reeling from the events at OpenAI.<\/strong><\/p>\n<p>Gemini is a multimodal family of models that can process and understand a mix of text, images, audio, and video.<\/p>\n<p>Sundar Pichai, Google's CEO, and Demis Hassabis, CEO of Google DeepMind, have high expectations for Gemini. Google plans to integrate it across Google's extensive products and services, including Search, Maps, and Chrome.<\/p>\n<blockquote class=\"twitter-tweet\">\n<p dir=\"ltr\" lang=\"en\">We're excited to announce \ud835\uddda\ud835\uddf2\ud835\uddfa\ud835\uddf6\ud835\uddfb\ud835\uddf6: <a href=\"https:\/\/twitter.com\/Google?ref_src=twsrc%5Etfw\">@Google<\/a>'s largest and most capable AI model.<\/p>\n<p>Built to be multimodal, it can understand and work with text, code, audio, images, and video - and achieves state-of-the-art performance on many tasks. 
\ud83e\uddf5 <a href=\"https:\/\/t.co\/mwHZTDTBuG\">https:\/\/t.co\/mwHZTDTBuG<\/a> <a href=\"https:\/\/t.co\/zfLlCGuzmV\">pic.twitter.com\/zfLlCGuzmV<\/a><\/p>\n<p>- Google DeepMind (@GoogleDeepMind) <a href=\"https:\/\/twitter.com\/GoogleDeepMind\/status\/1732416095355814277?ref_src=twsrc%5Etfw\">December 6, 2023<\/a><\/p><\/blockquote>\n<p><script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/p>\n<p>Gemini is extensively multimodal, able to process and interact with text, images, video, and audio. While we have grown accustomed to text and image processing, audio and video break new ground, offering exciting new ways to handle rich media.<\/p>\n<p>Hassabis notes: \"These models just understand more about the world around them.\"<\/p>\n<p>Pichai emphasized the model's connection to Google's products and services, saying: \"One of the powerful things about this moment is you can work on one underlying technology and make it better and it immediately flows across our products.\"<\/p>\n<p>Gemini will come in three different forms:<\/p>\n<ul>\n<li><strong>Gemini Nano:<\/strong> A lighter version tailored for Android devices, enabling offline and native functionality.<\/li>\n<li><strong>Gemini Pro:<\/strong> A more advanced version that will power many of Google's AI services, including Bard.<\/li>\n<li><strong>Gemini Ultra:<\/strong> The most powerful iteration, designed primarily for data centers and enterprise applications, planned for release next year.<\/li>\n<\/ul>\n<p>On performance, Google claims that Gemini outperforms GPT-4 on 30 of 32 benchmarks, excelling particularly at understanding and interacting with video and audio. 
This performance is attributed to Gemini's design as a multisensory model from the ground up.<\/p>\n<blockquote class=\"twitter-tweet\">\n<p dir=\"ltr\" lang=\"en\">Bard is getting its biggest upgrade yet with a specially tuned version of Gemini Pro.<\/p>\n<p>Starting today, it will be far more capable at things like:<br \/>\n\ud83d\udd18 Understanding<br \/>\n\ud83d\udd18 Summarizing<br \/>\n\ud83d\udd18 Reasoning<br \/>\n\ud83d\udd18 Coding<br \/>\n\ud83d\udd18 Planning<\/p>\n<p>And more. \u2193 <a href=\"https:\/\/t.co\/TJR12OioxU\">https:\/\/t.co\/TJR12OioxU<\/a><\/p>\n<p>- Google DeepMind (@GoogleDeepMind) <a href=\"https:\/\/twitter.com\/GoogleDeepMind\/status\/1732430045275140415?ref_src=twsrc%5Etfw\">December 6, 2023<\/a><\/p><\/blockquote>\n<p><script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><br \/>\nGoogle was also keen to highlight Gemini's efficiency.<\/p>\n<p>Trained on Google's own TPUs (Tensor Processing Units), it is faster and more cost-effective than previous models. Alongside Gemini, Google is launching the TPU v5p for data centers, improving the efficiency of running large-scale models.<\/p>\n<h2>Is Gemini the ChatGPT killer?<\/h2>\n<p>Google is clearly bullish about Gemini. 
Earlier this year, a <a href=\"https:\/\/dailyai.com\/sv\/2023\/09\/googles-gemini-is-expected-to-outperform-gpt-4\/\">\"leak\" via SemiAnalysis<\/a> suggested that Gemini could blow the competition out of the water and see Google rise from a peripheral member of the generative AI industry to its protagonist, ahead of OpenAI.<\/p>\n<p>Beyond its multimodality, Gemini is claimed to be the first model to outperform human experts on MMLU (massive multitask language understanding), a benchmark that tests world knowledge and problem-solving ability across 57 subjects such as math, physics, history, law, medicine, and ethics.<\/p>\n<p><iframe loading=\"lazy\" title=\"Math &amp; physics with AI | Gemini\" width=\"1080\" height=\"608\" src=\"https:\/\/www.youtube.com\/embed\/K4pX1VAxaAI?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe><\/p>\n<p>&nbsp;<\/p>\n<p>Pichai says the launch of Gemini heralds a \"new era\" in AI, emphasizing how Gemini will benefit from Google's extensive product catalog.<\/p>\n<p>Search integration is particularly interesting, as <a href=\"https:\/\/dailyai.com\/sv\/2023\/09\/google-turns-25-will-ai-herald-another-25-years-of-success\/\">Google dominates this space<\/a> and has the advantage of the world's most comprehensive search index at its fingertips.<\/p>\n<p>The launch of Gemini puts Google squarely in the ongoing AI race, and people will be eager to test it against GPT-4.<\/p>\n<h2>Tests and analysis of the Gemini benchmarks<\/h2>\n<p>In a <a href=\"https:\/\/blog.google\/technology\/ai\/google-gemini-ai\/#performance\">blog post<\/a>, Google published benchmark results showing Gemini Ultra 
beating GPT-4 in most tests. It also boasts advanced coding capabilities, with outstanding performance on coding benchmarks such as HumanEval and Natural2Code.<\/p>\n<p><iframe loading=\"lazy\" title=\"Using AI to solve complex problems | Gemini\" width=\"1080\" height=\"608\" src=\"https:\/\/www.youtube.com\/embed\/LvGmVmHv69s?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe><\/p>\n<p>&nbsp;<\/p>\n<p>Here is the comparison data. Note that these measurements use the unreleased Gemini Ultra version. Gemini can't be judged a ChatGPT killer until next year. And you can bet OpenAI will move to counter Gemini as soon as possible.<\/p>\n<h3>Text\/NLP benchmark performance<\/h3>\n<p><strong>General knowledge:<\/strong><\/p>\n<ul>\n<li>MMLU (Massive Multitask Language Understanding):\n<ul>\n<li>Gemini Ultra: 90.0% (chain-of-thought, 32 examples)<\/li>\n<li>GPT-4: 86.4% (5-shot, reported)<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p><strong>Reasoning:<\/strong><\/p>\n<ul>\n<li>Big-Bench Hard (Diverse challenging tasks requiring multi-step reasoning):\n<ul>\n<li>Gemini Ultra: 83.6% (3-shot)<\/li>\n<li>GPT-4: 83.1% (3-shot, API)<\/li>\n<\/ul>\n<\/li>\n<li>DROP (Reading comprehension, F1 score):\n<ul>\n<li>Gemini Ultra: 82.4 (variable shots)<\/li>\n<li>GPT-4: 80.9 (3-shot, reported)<\/li>\n<\/ul>\n<\/li>\n<li>HellaSwag (Commonsense reasoning for everyday tasks):\n<ul>\n<li>Gemini Ultra: 87.8% (10-shot)<\/li>\n<li>GPT-4: 95.3% (10-shot, reported)<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p><strong>Math:<\/strong><\/p>\n<ul>\n<li>GSM8K (Basic arithmetic manipulations, including grade-school math problems):\n<ul>\n<li>Gemini Ultra: 94.4% 
(majority vote at 32 examples)<\/li>\n<li>GPT-4: 92.0% (5-shot chain-of-thought, reported)<\/li>\n<\/ul>\n<\/li>\n<li>MATH (Challenging math problems, including algebra, geometry, pre-calculus, and more):\n<ul>\n<li>Gemini Ultra: 53.2% (4-shot)<\/li>\n<li>GPT-4: 52.9% (4-shot, API)<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p><strong>Code:<\/strong><\/p>\n<ul>\n<li>HumanEval (Python code generation):\n<ul>\n<li>Gemini Ultra: 74.4% (0-shot, internal test)<\/li>\n<li>GPT-4: 67.0% (0-shot, reported)<\/li>\n<\/ul>\n<\/li>\n<li>Natural2Code (Python code generation, new HumanEval-like dataset, not leaked on the web):\n<ul>\n<li>Gemini Ultra: 74.9% (0-shot)<\/li>\n<li>GPT-4: 73.9% (0-shot, API)<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<h3>Multimodal benchmark performance<\/h3>\n<p>The multimodal capabilities of Google's Gemini AI model are also compared against OpenAI's GPT-4V.<\/p>\n<p><strong>Image understanding and processing:<\/strong><\/p>\n<ul>\n<li><strong>MMMU (Multi-discipline college-level reasoning problems):<\/strong>\n<ul>\n<li>Gemini Ultra: 59.4% (0-shot pass@1, pixel only)<\/li>\n<li>GPT-4V: 56.8% (0-shot pass@1)<\/li>\n<\/ul>\n<\/li>\n<li><strong>VQAv2 (Natural image understanding):<\/strong>\n<ul>\n<li>Gemini Ultra: 77.8% (0-shot, pixel only)<\/li>\n<li>GPT-4V: 77.2% (0-shot)<\/li>\n<\/ul>\n<\/li>\n<li><strong>TextVQA (OCR on natural images):<\/strong>\n<ul>\n<li>Gemini Ultra: 82.3% (0-shot, pixel only)<\/li>\n<li>GPT-4V: 78.0% (0-shot)<\/li>\n<\/ul>\n<\/li>\n<li><strong>DocVQA (Document understanding):<\/strong>\n<ul>\n<li>Gemini Ultra: 90.9% (0-shot, pixel only)<\/li>\n<li>GPT-4V: 88.4% (0-shot, pixel only)<\/li>\n<\/ul>\n<\/li>\n<li><strong>Infographic VQA (Infographic understanding):<\/strong>\n<ul>\n<li>Gemini Ultra: 80.3% (0-shot, pixel only)<\/li>\n<li>GPT-4V: 75.1% (0-shot, pixel only)<\/li>\n<\/ul>\n<\/li>\n<li><strong>MathVista (Mathematical reasoning in visual 
contexts):<\/strong>\n<ul>\n<li>Gemini Ultra: 53.0% (0-shot, pixel only)<\/li>\n<li>GPT-4V: 49.9% (0-shot)<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p><strong>Video processing:<\/strong><\/p>\n<ul>\n<li><strong>VATEX (English video captioning, CIDEr score):<\/strong>\n<ul>\n<li>Gemini Ultra: 62.7 (4-shot)<\/li>\n<li>DeepMind Flamingo: 56.0 (4-shot)<\/li>\n<\/ul>\n<\/li>\n<li><strong>Perception Test MCQA (Video question answering):<\/strong>\n<ul>\n<li>Gemini Ultra: 54.7% (0-shot)<\/li>\n<li>SeViLA: 46.3% (0-shot)<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p><strong>Audio processing:<\/strong><\/p>\n<ul>\n<li><strong>CoVoST 2 (Automatic speech translation, 21 languages, BLEU score):<\/strong>\n<ul>\n<li>Gemini Pro: 40.1<\/li>\n<li>Whisper v2: 29.1<\/li>\n<\/ul>\n<\/li>\n<li><strong>FLEURS (Automatic speech recognition, 62 languages, word error rate):<\/strong>\n<ul>\n<li>Gemini Pro: 7.6% (lower is better)<\/li>\n<li>Whisper v3: 17.6%<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<h2>Google's ethical commitment<\/h2>\n<p class=\"whitespace-pre-wrap\">In a <a href=\"https:\/\/blog.google\/technology\/ai\/google-gemini-ai\/#scalable-efficient\">blog post<\/a>, Google underscored its commitment to responsible and ethical AI practices.<\/p>\n<p class=\"whitespace-pre-wrap\">According to Google, Gemini underwent more rigorous testing than any previous Google AI, assessing factors such as bias, toxicity, cybersecurity threats, and potential for misuse. Adversarial techniques helped surface issues early. External experts then stress-tested and \"red-teamed\" the models to identify further blind spots.<\/p>\n<p class=\"whitespace-pre-wrap\">Google says responsibility and safety will remain priority areas amid the rapid developments in AI. 
The company helped launch industry groups to establish best practices, including MLCommons and the Secure AI Framework (SAIF).<\/p>\n<p class=\"whitespace-pre-wrap\">Google promises continued collaboration with researchers, governments, and civil society organizations around the world.<\/p>\n<h2>Gemini Ultra release<\/h2>\n<p class=\"whitespace-pre-wrap\">For now, Google is restricting access to the most powerful version of the model, Gemini Ultra, which arrives early next year.<\/p>\n<p class=\"whitespace-pre-wrap\">Before then, select developers and experts will experiment with Ultra to provide feedback. Its launch will coincide with a new cutting-edge AI model platform, or as Google calls it, an \"experience\", named Bard Advanced.<\/p>\n<h2>Gemini for developers<\/h2>\n<p>From December 13, developers and enterprise customers gain access to Gemini Pro via the Gemini API, available in Google AI Studio or Google Cloud Vertex AI.<\/p>\n<p><strong>Google AI Studio:<\/strong> Google AI Studio is a user-friendly, web-based tool designed to help developers prototype and launch applications using an API key. This free resource is ideal for those in the early stages of app development.<\/p>\n<p><strong>Vertex AI:<\/strong> Vertex AI is a more comprehensive AI platform offering fully managed services. It integrates seamlessly with Google Cloud and also provides enterprise security, privacy, and data governance compliance.<\/p>\n<p>Beyond these platforms, Android developers will be able to access Gemini Nano for on-device tasks. It will be available for integration via AICore. 
This new system capability will debut in Android 14, starting with Pixel 8 Pro devices.<\/p>\n<h2>Google has an ace up its sleeve, for now<\/h2>\n<p>OpenAI and Google differ in one important way: Google develops a wealth of other tools and products in-house, including those used by billions of people every day.<\/p>\n<p>We're talking, of course, about Android, Chrome, Gmail, Google Workspace, and Google Search.<\/p>\n<p>OpenAI, through its alliance with Microsoft, has similar opportunities through Copilot, but it hasn't quite taken off yet.<\/p>\n<p>And if we're honest, Google probably rules in every one of these product categories.<\/p>\n<p>Google has pressed ahead in the AI race, but you can be sure this will only fuel OpenAI's drive toward GPT-5 and AGI.<\/p>","protected":false},"excerpt":{"rendered":"<p>Google has launched its Gemini family of multimodal AI models, a dramatic move in an industry still reeling from the events at OpenAI. Gemini is a multimodal family of models that can process and understand a mix of text, images, audio, and video. Sundar Pichai, Google's CEO, and Demis Hassabis, CEO of Google DeepMind, have high expectations for Gemini. Google plans to integrate it across Google's extensive products and services, including Search, Maps, and Chrome. We're excited to announce \ud835\uddda\ud835\uddf2\ud835\uddfa\ud835\uddf6\ud835\uddfb\ud835\uddf6: @Google's largest and most capable AI model. 
Built to be multimodal, it can understand and work with text, code and audio,<\/p>","protected":false},"author":2,"featured_media":2402,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[84],"tags":[125,147,383,102],"class_list":["post-8052","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-industry","tag-bard","tag-deepmind","tag-gemini","tag-google"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Google unleashes its groundbreaking Gemini family of multi-modal models | DailyAI<\/title>\n<meta name=\"description\" content=\"Just a few days after reports suggested Google&#039;s secretive Gemini project was delayed, they&#039;ve unleashed it upon an AI industry still reeling from events at OpenAI.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/sv\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/\" \/>\n<meta property=\"og:locale\" content=\"sv_SE\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Google unleashes its groundbreaking Gemini family of multi-modal models | DailyAI\" \/>\n<meta property=\"og:description\" content=\"Just a few days after reports suggested Google&#039;s secretive Gemini project was delayed, they&#039;ve unleashed it upon an AI industry still reeling from events at OpenAI.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/sv\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2023-12-06T17:03:36+00:00\" \/>\n<meta property=\"article:modified_time\" 
content=\"2024-03-28T00:40:52+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/shutterstock_552493561.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1000\" \/>\n\t<meta property=\"og:image:height\" content=\"667\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Sam Jeans\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Sam Jeans\" \/>\n\t<meta name=\"twitter:label2\" content=\"Estimated reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"6 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/google-launches-its-new-gemini-multi-modal-family-of-models\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/google-launches-its-new-gemini-multi-modal-family-of-models\\\/\"},\"author\":{\"name\":\"Sam Jeans\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/711e81f945549438e8bbc579efdeb3c9\"},\"headline\":\"Google unleashes its groundbreaking Gemini family of multi-modal 
models\",\"datePublished\":\"2023-12-06T17:03:36+00:00\",\"dateModified\":\"2024-03-28T00:40:52+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/google-launches-its-new-gemini-multi-modal-family-of-models\\\/\"},\"wordCount\":1356,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/google-launches-its-new-gemini-multi-modal-family-of-models\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/shutterstock_552493561.jpg\",\"keywords\":[\"Bard\",\"DeepMind\",\"Gemini\",\"Google\"],\"articleSection\":{\"1\":\"Industry\"},\"inLanguage\":\"sv-SE\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/google-launches-its-new-gemini-multi-modal-family-of-models\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/google-launches-its-new-gemini-multi-modal-family-of-models\\\/\",\"name\":\"Google unleashes its groundbreaking Gemini family of multi-modal models | DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/google-launches-its-new-gemini-multi-modal-family-of-models\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/google-launches-its-new-gemini-multi-modal-family-of-models\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/shutterstock_552493561.jpg\",\"datePublished\":\"2023-12-06T17:03:36+00:00\",\"dateModified\":\"2024-03-28T00:40:52+00:00\",\"description\":\"Just a few days after reports suggested Google's secretive Gemini project was delayed, they've unleashed it upon an AI industry still reeling from events at 
OpenAI.\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/google-launches-its-new-gemini-multi-modal-family-of-models\\\/#breadcrumb\"},\"inLanguage\":\"sv-SE\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/google-launches-its-new-gemini-multi-modal-family-of-models\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"sv-SE\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/google-launches-its-new-gemini-multi-modal-family-of-models\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/shutterstock_552493561.jpg\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/shutterstock_552493561.jpg\",\"width\":1000,\"height\":667,\"caption\":\"Google Med-PaLM 2\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/google-launches-its-new-gemini-multi-modal-family-of-models\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Google unleashes its groundbreaking Gemini family of multi-modal models\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI 
News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"sv-SE\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"sv-SE\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/711e81f945549438e8bbc579efdeb3c9\",\"name\":\"Sam Jeans\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"sv-SE\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g\",\"caption\":\"Sam Jeans\"},\"description\":\"Sam is a science and technology writer who has worked in various AI startups. 
When he\u2019s not writing, he can be found reading medical journals or digging through boxes of vinyl records.\",\"sameAs\":[\"https:\\\/\\\/www.linkedin.com\\\/in\\\/sam-jeans-6746b9142\\\/\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/sv\\\/author\\\/samjeans\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Google unleashes its groundbreaking Gemini family of multi-modal models | DailyAI","description":"Just a few days after reports suggested Google's secretive Gemini project was delayed, they've unleashed it upon an AI industry still reeling from events at OpenAI.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/sv\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/","og_locale":"sv_SE","og_type":"article","og_title":"Google unleashes its groundbreaking Gemini family of multi-modal models | DailyAI","og_description":"Just a few days after reports suggested Google's secretive Gemini project was delayed, they've unleashed it upon an AI industry still reeling from events at OpenAI.","og_url":"https:\/\/dailyai.com\/sv\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/","og_site_name":"DailyAI","article_published_time":"2023-12-06T17:03:36+00:00","article_modified_time":"2024-03-28T00:40:52+00:00","og_image":[{"width":1000,"height":667,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/shutterstock_552493561.jpg","type":"image\/jpeg"}],"author":"Sam Jeans","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"Written by":"Sam Jeans","Estimated reading time":"6 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/"},"author":{"name":"Sam Jeans","@id":"https:\/\/dailyai.com\/#\/schema\/person\/711e81f945549438e8bbc579efdeb3c9"},"headline":"Google unleashes its groundbreaking Gemini family of multi-modal models","datePublished":"2023-12-06T17:03:36+00:00","dateModified":"2024-03-28T00:40:52+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/"},"wordCount":1356,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/shutterstock_552493561.jpg","keywords":["Bard","DeepMind","Gemini","Google"],"articleSection":{"1":"Industry"},"inLanguage":"sv-SE"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/","url":"https:\/\/dailyai.com\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/","name":"Google unleashes its groundbreaking Gemini family of multi-modal models | DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/shutterstock_552493561.jpg","datePublished":"2023-12-06T17:03:36+00:00","dateModified":"2024-03-28T00:40:52+00:00","description":"Just a few days after reports suggested Google's secretive Gemini project was delayed, they've unleashed it upon an AI industry still reeling from events at OpenAI.","breadcrumb":{"@id":"https:\/\/dailyai.com\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/#breadcrumb"},"inLanguage":"sv-SE","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/"]}]},{"@type":"ImageObject","inLanguage":"sv-SE","@id":"https:\/\/dailyai.com\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/shutterstock_552493561.jpg","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/shutterstock_552493561.jpg","width":1000,"height":667,"caption":"Google Med-PaLM 2"},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"Google unleashes its groundbreaking Gemini family of multi-modal models"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DailyAI","description":"Your Daily Dose of AI News","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"sv-SE"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DailyAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"sv-SE","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/711e81f945549438e8bbc579efdeb3c9","name":"Sam Jeans","image":{"@type":"ImageObject","inLanguage":"sv-SE","@id":"https:\/\/secure.gravatar.com\/avatar\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g","caption":"Sam Jeans"},"description":"Sam is a science and technology writer who has worked in various AI startups. When he\u2019s not writing, he can be found reading medical journals or digging through boxes of vinyl records.","sameAs":["https:\/\/www.linkedin.com\/in\/sam-jeans-6746b9142\/"],"url":"https:\/\/dailyai.com\/sv\/author\/samjeans\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/posts\/8052","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/comments?post=8052"}],"version-history":[{"count":16,"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/posts\/8052\/revisions"}],"predecessor-version":[{"id":8084,"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/posts\/8052\/revisions\/8084"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/media\/2402"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/media?parent=8052"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/categories?post=8052"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/tags?post=8052"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}