Nvidia software supercharges H100 inference performance
By Eugene van der Watt | DailyAI | September 13, 2023

<p><strong>Nvidia has announced new open-source software designed to boost inference performance on its H100 GPUs.</strong></p>
<p>Much of the current demand for Nvidia's GPUs is for building compute capacity to train new models. But once those models are trained, they also need to be used. Inference in AI refers to the ability of an LLM such as ChatGPT to draw conclusions or make predictions from the data it was trained on and produce output.</p>
<p>When you try to use ChatGPT and a message appears saying the servers are overloaded, it's because the computing hardware can't keep up with inference demand.</p>
<p>Nvidia says its new software, TensorRT-LLM, can make its existing hardware much faster and more energy-efficient.</p>
<p>The software includes optimized versions of popular models, among them Meta's Llama 2, OpenAI's GPT-2 and GPT-3, Falcon, Mosaic MPT, and BLOOM.</p>
<p>It uses clever techniques such as more efficient batching of inference tasks and quantization to boost performance.</p>
<p>LLMs typically use 16-bit floating-point values to represent weights and activations. Quantization reduces these values to 8-bit floating-point values during inference. Most models manage to retain their accuracy at this reduced precision.</p>
<p>Companies with compute infrastructure built on Nvidia's H100 GPUs can expect a substantial improvement in inference performance from deploying TensorRT-LLM, without spending a cent.</p>
<p>Nvidia used the example of running a small open-source model, GPT-J 6B, to summarize articles from the CNN/Daily Mail dataset. Its older A100 chip serves as the baseline, compared against the H100 first without and then with TensorRT-LLM.</p>
<p><img class="aligncenter wp-image-5412 size-full" src="https://dailyai.com/wp-content/uploads/2023/09/8X-Inference-Performance-Nvidia.jpg" alt="Nvidia inference performance boost with TensorRT-LLM" width="832" height="666" /></p>
<p style="text-align: center;">Source: <a href="https://developer.nvidia.com/blog/nvidia-tensorrt-llm-supercharges-large-language-model-inference-on-nvidia-h100-gpus/">Nvidia</a></p>
<p>And here is a comparison with Meta's Llama 2:</p>
<p><img class="aligncenter wp-image-5413 size-full" src="https://dailyai.com/wp-content/uploads/2023/09/4.6X-Performance-Llama2.jpg" alt="Nvidia inference boost with Llama 2" width="832" height="666" /></p>
<p style="text-align: center;">Source: <a href="https://developer.nvidia.com/blog/nvidia-tensorrt-llm-supercharges-large-language-model-inference-on-nvidia-h100-gpus/">Nvidia</a></p>
<p>According to Nvidia, the tests showed that an H100 running TensorRT-LLM uses between 3.2 and 5.6 times less energy than an A100 during inference, depending on the model.</p>
<p>If you run AI models on H100 hardware, that means not only does your inference performance nearly double, but your energy bill will be several times lower once this software is installed.</p>
<p>TensorRT-LLM will also be available for Nvidia's <a href="https://dailyai.com/de/2023/08/nvidias-updated-superchip-promises-huge-ai-advancements/">Grace Hopper Superchips</a>, but the company has not released performance figures for the GH200 running the new software.</p>
<p>The new software was not yet ready when Nvidia put its GH200 Superchip through the industry-standard MLPerf AI benchmarks. Those results showed the GH200 performing up to 17% better than a single-chip H100 SXM.</p>
<p>If Nvidia achieves even a modest increase in inference performance on the GH200 with TensorRT-LLM, it will leave its nearest competitors far behind. Being a sales rep for Nvidia must be the easiest job in the world right now.</p>
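The quantization step the article describes can be made concrete with a toy example. TensorRT-LLM's real FP8 path runs in hardware kernels with proper scaling and saturation; the sketch below is only an illustration of what rounding a 16-bit float down to an E4M3-style 8-bit float (4 exponent bits, 3 mantissa bits) does to precision. The function name and the exponent bounds here are my own simplifications, not TensorRT-LLM APIs.

```python
import math

def quantize_e4m3(x: float) -> float:
    """Round x to the nearest value representable in a simplified
    E4M3-style 8-bit float (4 exponent bits, 3 mantissa bits).
    Illustrative sketch only -- real FP8 kernels handle subnormals,
    NaN, and saturation per the hardware spec."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)          # x = m * 2**e, with 0.5 <= |m| < 1
    # 1 implicit + 3 explicit mantissa bits: round m to a multiple of 1/16
    m = round(m * 16) / 16
    # clamp the exponent to a rough E4M3-like range (bounds approximate)
    e = max(-8, min(9, e))
    return math.ldexp(m, e)

# e.g. quantize_e4m3(3.14159) -> 3.25 (about 3% rounding error)
weights = [0.1234, -1.5678, 3.14159]
quantized = [quantize_e4m3(w) for w in weights]
```

With only 3 mantissa bits the worst-case relative rounding error is about 3%, which is small enough that most models keep their accuracy, consistent with the article's claim, while halving memory traffic versus 16-bit values.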