{"id":5405,"date":"2023-09-13T13:45:33","date_gmt":"2023-09-13T13:45:33","guid":{"rendered":"https:\/\/dailyai.com\/?p=5405"},"modified":"2023-09-13T13:50:49","modified_gmt":"2023-09-13T13:50:49","slug":"nvidia-software-supercharges-h100-inference-performance","status":"publish","type":"post","link":"https:\/\/dailyai.com\/da\/2023\/09\/nvidia-software-supercharges-h100-inference-performance\/","title":{"rendered":"Nvidia-software \u00f8ger H100-inferensens ydeevne"},"content":{"rendered":"<p><strong>Nvidia har annonceret ny open source-software, som de siger vil \u00f8ge inferensydelsen p\u00e5 deres H100 GPU'er.<\/strong><\/p>\n<p>En stor del af den nuv\u00e6rende eftersp\u00f8rgsel efter Nvidias GPU'er er at opbygge computerkraft til tr\u00e6ning af nye modeller. Men n\u00e5r modellerne er tr\u00e6net, skal de bruges. Inferens i AI refererer til evnen hos en LLM som ChatGPT til at drage konklusioner eller komme med forudsigelser ud fra data, den er blevet tr\u00e6net p\u00e5, og generere output.<\/p>\n<p>N\u00e5r du pr\u00f8ver at bruge ChatGPT, og der dukker en besked op om, at serverne er overbelastede, er det, fordi computerhardwaren har sv\u00e6rt ved at f\u00f8lge med eftersp\u00f8rgslen efter slutninger.<\/p>\n<p>Nvidia siger, at deres nye software, TensorRT-LLM, kan f\u00e5 deres eksisterende hardware til at k\u00f8re meget hurtigere og mere energieffektivt.<\/p>\n<p>Softwaren indeholder optimerede versioner af de mest popul\u00e6re modeller, herunder Meta Llama 2, OpenAI GPT-2 og GPT-3, Falcon, Mosaic MPT og BLOOM.<\/p>\n<p>Den bruger nogle smarte teknikker som mere effektiv batching af inferensopgaver og kvantificeringsteknikker til at opn\u00e5 den \u00f8gede ydeevne.<\/p>\n<p>LLM'er bruger generelt 16-bit floating point-v\u00e6rdier til at repr\u00e6sentere v\u00e6gte og aktiveringer. Kvantisering tager disse v\u00e6rdier og reducerer dem til 8-bit floating point-v\u00e6rdier under inferens. 
Most models manage to retain their accuracy at this reduced precision.<\/p>\n<p>Companies whose computing infrastructure is built on Nvidia's H100 GPUs can expect a huge improvement in inference performance from TensorRT-LLM without spending a cent on new hardware.<\/p>\n<p>Nvidia used the example of running a small open source model, GPT-J 6B, to summarize articles from the CNN\/Daily Mail dataset. The older A100 chip serves as the baseline speed, which is then compared with the H100 without and then with TensorRT-LLM.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-5412 size-full\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/8X-Inference-Performance-Nvidia.jpg\" alt=\"Nvidia boosts inference performance with TensorRT-LLM\" width=\"832\" height=\"666\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/8X-Inference-Performance-Nvidia.jpg 832w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/8X-Inference-Performance-Nvidia-300x240.jpg 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/8X-Inference-Performance-Nvidia-768x615.jpg 768w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/8X-Inference-Performance-Nvidia-370x296.jpg 370w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/8X-Inference-Performance-Nvidia-800x640.jpg 800w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/8X-Inference-Performance-Nvidia-20x16.jpg 20w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/8X-Inference-Performance-Nvidia-740x592.jpg 740w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/8X-Inference-Performance-Nvidia-60x48.jpg 60w\" sizes=\"auto, (max-width: 832px) 100vw, 832px\" \/><\/p>\n<p style=\"text-align: center;\">Source: <a href=\"https:\/\/developer.nvidia.com\/blog\/nvidia-tensorrt-llm-supercharges-large-language-model-inference-on-nvidia-h100-gpus\/\">Nvidia<\/a><\/p>\n<p>And here is the comparison when running 
Meta's Llama 2.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-5413 size-full\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/4.6X-Performance-Llama2.jpg\" alt=\"Nvidia's inference boost with Llama 2\" width=\"832\" height=\"666\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/4.6X-Performance-Llama2.jpg 832w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/4.6X-Performance-Llama2-300x240.jpg 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/4.6X-Performance-Llama2-768x615.jpg 768w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/4.6X-Performance-Llama2-370x296.jpg 370w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/4.6X-Performance-Llama2-800x640.jpg 800w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/4.6X-Performance-Llama2-20x16.jpg 20w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/4.6X-Performance-Llama2-740x592.jpg 740w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/4.6X-Performance-Llama2-60x48.jpg 60w\" sizes=\"auto, (max-width: 832px) 100vw, 832px\" \/><\/p>\n<p style=\"text-align: center;\">Source: <a href=\"https:\/\/developer.nvidia.com\/blog\/nvidia-tensorrt-llm-supercharges-large-language-model-inference-on-nvidia-h100-gpus\/\">Nvidia<\/a><\/p>\n<p>Nvidia said its tests showed that, depending on the model, an H100 running TensorRT-LLM uses between 3.2 and 5.6 times less energy than an A100 during inference.<\/p>\n<p>If you run AI models on H100 hardware, it means not only that your inference performance nearly doubles once you install this software, but also that your energy bill gets much smaller.<\/p>\n<p>TensorRT-LLM will also be made available for Nvidia's <a href=\"https:\/\/dailyai.com\/da\/2023\/08\/nvidias-updated-superchip-promises-huge-ai-advancements\/\">Grace Hopper Superchips<\/a>, but the company has not released performance figures for the GH200 
running the new software.<\/p>\n<p>The new software was not yet ready when Nvidia put its GH200 Superchip through the industry-standard MLPerf AI performance benchmark tests. The results showed the GH200 performing up to 17% better than a single-chip H100 SXM.<\/p>\n<p>If Nvidia achieves even a modest improvement in inference performance using TensorRT-LLM with the GH200, it will put the company far ahead of its closest competitors. Being a sales rep for Nvidia must be the easiest job in the world right now.<\/p>","protected":false},"excerpt":{"rendered":"<p>Nvidia has announced new open source software that it says will supercharge inference performance on its H100 GPUs. Much of the current demand for Nvidia's GPUs is to build computing power for training new models. But once trained, those models need to be used. Inference in AI refers to the ability of an LLM like ChatGPT to draw conclusions or make predictions from the data it was trained on and generate output. 
When you try to use ChatGPT and a message pops up saying the servers are overloaded, it is because the computing hardware is struggling to keep up with<\/p>","protected":false},"author":6,"featured_media":973,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[83],"tags":[99,106],"class_list":["post-5405","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-product","tag-ai-race","tag-nvidia"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Nvidia software supercharges H100 inference performance | DailyAI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/da\/2023\/09\/nvidia-software-supercharges-h100-inference-performance\/\" \/>\n<meta property=\"og:locale\" content=\"da_DK\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Nvidia software supercharges H100 inference performance | DailyAI\" \/>\n<meta property=\"og:description\" content=\"Nvidia announced new open source software that it says will supercharge inference performance on its H100 GPUs. A lot of the current demand for Nvidia\u2019s GPUs is to build computing power for training new models. But once trained, those models need to be used. Inference in AI refers to the ability of an LLM like ChatGPT to draw conclusions or make predictions from data it\u2019s been trained on and generate output. 
When you try to use ChatGPT and a message pops up to say its servers are taking strain, it\u2019s because the computing hardware is struggling to keep up with\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/da\/2023\/09\/nvidia-software-supercharges-h100-inference-performance\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2023-09-13T13:45:33+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2023-09-13T13:50:49+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/05\/shutterstock_1742705531.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1000\" \/>\n\t<meta property=\"og:image:height\" content=\"667\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Eugene van der Watt\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Skrevet af\" \/>\n\t<meta name=\"twitter:data1\" content=\"Eugene van der Watt\" \/>\n\t<meta name=\"twitter:label2\" content=\"Estimeret l\u00e6setid\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minutter\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/09\\\/nvidia-software-supercharges-h100-inference-performance\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/09\\\/nvidia-software-supercharges-h100-inference-performance\\\/\"},\"author\":{\"name\":\"Eugene van der Watt\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\"},\"headline\":\"Nvidia software supercharges H100 inference 
performance\",\"datePublished\":\"2023-09-13T13:45:33+00:00\",\"dateModified\":\"2023-09-13T13:50:49+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/09\\\/nvidia-software-supercharges-h100-inference-performance\\\/\"},\"wordCount\":467,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/09\\\/nvidia-software-supercharges-h100-inference-performance\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/05\\\/shutterstock_1742705531.jpg\",\"keywords\":[\"AI race\",\"Nvidia\"],\"articleSection\":[\"Product\"],\"inLanguage\":\"da-DK\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/09\\\/nvidia-software-supercharges-h100-inference-performance\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/09\\\/nvidia-software-supercharges-h100-inference-performance\\\/\",\"name\":\"Nvidia software supercharges H100 inference performance | 
DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/09\\\/nvidia-software-supercharges-h100-inference-performance\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/09\\\/nvidia-software-supercharges-h100-inference-performance\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/05\\\/shutterstock_1742705531.jpg\",\"datePublished\":\"2023-09-13T13:45:33+00:00\",\"dateModified\":\"2023-09-13T13:50:49+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/09\\\/nvidia-software-supercharges-h100-inference-performance\\\/#breadcrumb\"},\"inLanguage\":\"da-DK\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2023\\\/09\\\/nvidia-software-supercharges-h100-inference-performance\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"da-DK\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/09\\\/nvidia-software-supercharges-h100-inference-performance\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/05\\\/shutterstock_1742705531.jpg\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/05\\\/shutterstock_1742705531.jpg\",\"width\":1000,\"height\":667,\"caption\":\"nvidia stock\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/09\\\/nvidia-software-supercharges-h100-inference-performance\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Nvidia software supercharges H100 inference performance\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI 
News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"da-DK\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"da-DK\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\",\"name\":\"Eugene van der Watt\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"da-DK\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"caption\":\"Eugene van der Watt\"},\"description\":\"Eugene comes from an electronic engineering background and loves all things tech. 
When he takes a break from consuming AI news you'll find him at the snooker table.\",\"sameAs\":[\"www.linkedin.com\\\/in\\\/eugene-van-der-watt-16828119\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/da\\\/author\\\/eugene\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Nvidia-software \u00f8ger H100-inferensens ydeevne | DailyAI","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/da\/2023\/09\/nvidia-software-supercharges-h100-inference-performance\/","og_locale":"da_DK","og_type":"article","og_title":"Nvidia software supercharges H100 inference performance | DailyAI","og_description":"Nvidia announced new open source software that it says will supercharge inference performance on its H100 GPUs. A lot of the current demand for Nvidia\u2019s GPUs is to build computing power for training new models. But once trained, those models need to be used. Inference in AI refers to the ability of an LLM like ChatGPT to draw conclusions or make predictions from data it\u2019s been trained on and generate output. 
When you try to use ChatGPT and a message pops up to say its servers are taking strain, it\u2019s because the computing hardware is struggling to keep up with","og_url":"https:\/\/dailyai.com\/da\/2023\/09\/nvidia-software-supercharges-h100-inference-performance\/","og_site_name":"DailyAI","article_published_time":"2023-09-13T13:45:33+00:00","article_modified_time":"2023-09-13T13:50:49+00:00","og_image":[{"width":1000,"height":667,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/05\/shutterstock_1742705531.jpg","type":"image\/jpeg"}],"author":"Eugene van der Watt","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"Skrevet af":"Eugene van der Watt","Estimeret l\u00e6setid":"3 minutter"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2023\/09\/nvidia-software-supercharges-h100-inference-performance\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2023\/09\/nvidia-software-supercharges-h100-inference-performance\/"},"author":{"name":"Eugene van der Watt","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa"},"headline":"Nvidia software supercharges H100 inference performance","datePublished":"2023-09-13T13:45:33+00:00","dateModified":"2023-09-13T13:50:49+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2023\/09\/nvidia-software-supercharges-h100-inference-performance\/"},"wordCount":467,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2023\/09\/nvidia-software-supercharges-h100-inference-performance\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/05\/shutterstock_1742705531.jpg","keywords":["AI 
race","Nvidia"],"articleSection":["Product"],"inLanguage":"da-DK"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2023\/09\/nvidia-software-supercharges-h100-inference-performance\/","url":"https:\/\/dailyai.com\/2023\/09\/nvidia-software-supercharges-h100-inference-performance\/","name":"Nvidia-software \u00f8ger H100-inferensens ydeevne | DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2023\/09\/nvidia-software-supercharges-h100-inference-performance\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2023\/09\/nvidia-software-supercharges-h100-inference-performance\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/05\/shutterstock_1742705531.jpg","datePublished":"2023-09-13T13:45:33+00:00","dateModified":"2023-09-13T13:50:49+00:00","breadcrumb":{"@id":"https:\/\/dailyai.com\/2023\/09\/nvidia-software-supercharges-h100-inference-performance\/#breadcrumb"},"inLanguage":"da-DK","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2023\/09\/nvidia-software-supercharges-h100-inference-performance\/"]}]},{"@type":"ImageObject","inLanguage":"da-DK","@id":"https:\/\/dailyai.com\/2023\/09\/nvidia-software-supercharges-h100-inference-performance\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/05\/shutterstock_1742705531.jpg","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/05\/shutterstock_1742705531.jpg","width":1000,"height":667,"caption":"nvidia stock"},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2023\/09\/nvidia-software-supercharges-h100-inference-performance\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"Nvidia software supercharges H100 inference 
performance"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DailyAI","description":"Din daglige dosis af AI-nyheder","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"da-DK"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DailyAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"da-DK","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa","name":"Eugene van der Watt","image":{"@type":"ImageObject","inLanguage":"da-DK","@id":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","caption":"Eugene van der Watt"},"description":"Eugene har en baggrund som elektronikingeni\u00f8r og elsker alt, hvad der har med teknologi at g\u00f8re. 
N\u00e5r han tager en pause fra at l\u00e6se AI-nyheder, kan du finde ham ved snookerbordet.","sameAs":["www.linkedin.com\/in\/eugene-van-der-watt-16828119"],"url":"https:\/\/dailyai.com\/da\/author\/eugene\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/posts\/5405","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/comments?post=5405"}],"version-history":[{"count":8,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/posts\/5405\/revisions"}],"predecessor-version":[{"id":5417,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/posts\/5405\/revisions\/5417"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/media\/973"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/media?parent=5405"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/categories?post=5405"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/tags?post=5405"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}