{"id":8047,"date":"2023-12-06T12:34:54","date_gmt":"2023-12-06T12:34:54","guid":{"rendered":"https:\/\/dailyai.com\/?p=8047"},"modified":"2023-12-06T12:34:54","modified_gmt":"2023-12-06T12:34:54","slug":"new-approach-could-make-large-language-models-300x-faster","status":"publish","type":"post","link":"https:\/\/dailyai.com\/de\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/","title":{"rendered":"Neuer Ansatz k\u00f6nnte gro\u00dfe Sprachmodelle 300x schneller machen"},"content":{"rendered":"<p><strong>Wissenschaftler der ETH Z\u00fcrich fanden heraus, dass Large Language Models (LLM) nur einen kleinen Teil ihrer Neuronen f\u00fcr individuelle Schlussfolgerungen verwenden m\u00fcssen. Ihr neuer Ansatz verspricht, LLMs viel schneller laufen zu lassen.<\/strong><\/p>\n<p>Um zu verstehen, wie sie es geschafft haben, die KI-Modelle zu beschleunigen, m\u00fcssen wir eine grobe Vorstellung davon bekommen, wie ein KI-Sprachmodell technisch aufgebaut ist.<\/p>\n<p>KI-Modelle wie GPT oder Llama bestehen aus Feedforward-Netzen, einer Art k\u00fcnstlicher neuronaler Netze.<\/p>\n<p>Feedforward-Netzwerke (FF) sind in der Regel in Schichten organisiert, wobei jede Schicht von Neuronen Eingaben von der vorhergehenden Schicht erh\u00e4lt und ihre Ausgaben an die n\u00e4chste Schicht sendet.<\/p>\n<p>Dazu geh\u00f6rt die dichte Matrixmultiplikation (DMM), bei der jedes Neuron in der FF alle Eingaben der vorherigen Schicht berechnen muss. Und das ist der Grund <a href=\"https:\/\/dailyai.com\/de\/2023\/11\/nvidia-achieves-record-18b-q3-revenue-crediting-generative-ai\/\">Nvidia verkauft so viele seiner GPUs<\/a> weil dieser Vorgang viel Rechenleistung erfordert.<\/p>\n<p><a href=\"https:\/\/arxiv.org\/pdf\/2311.10770.pdf\" target=\"_blank\" rel=\"noopener\">Die Forscher<\/a> verwendet Fast Feedforward Networks (FFF), um diesen Prozess wesentlich zu beschleunigen. Ein FFF nimmt jede Neuronenschicht, unterteilt sie in Bl\u00f6cke und w\u00e4hlt dann nur die relevantesten Bl\u00f6cke auf der Grundlage der Eingabe aus. Dieser Prozess l\u00e4uft auf eine bedingte Matrixmultiplikation (CMM) hinaus.<\/p>\n<p>Das bedeutet, dass nicht alle Neuronen einer Schicht an der Berechnung beteiligt sind, sondern nur ein sehr kleiner Teil.<\/p>\n<p>Stellen Sie sich vor, Sie sortieren einen Stapel Post, um einen f\u00fcr Sie bestimmten Brief zu finden. Anstatt den Namen und die Adresse auf jedem einzelnen Brief zu lesen, k\u00f6nnten Sie sie zun\u00e4chst nach Postleitzahlen sortieren und sich dann nur auf die Briefe f\u00fcr Ihr Gebiet konzentrieren.<\/p>\n<p>Auf die gleiche Weise identifizieren FFFs nur die wenigen Neuronen, die f\u00fcr jede Berechnung erforderlich sind, was im Vergleich zu traditionellen FFs nur einen Bruchteil der erforderlichen Verarbeitung bedeutet.<\/p>\n<h2>Wie viel schneller?<\/h2>\n<p>Die Forscher testeten ihre Methode an einer Variante des BERT-Modells von Google, die sie UltraFastBERT nannten. UltraFastBERT besteht aus 4095 Neuronen, schaltet aber selektiv nur 12 Neuronen f\u00fcr jede Schichtinferenz ein.<\/p>\n<p>Das bedeutet, dass UltraFastBERT etwa 0,03% seiner Neuronen f\u00fcr die Verarbeitung w\u00e4hrend der Inferenz ben\u00f6tigt, w\u00e4hrend beim regul\u00e4ren BERT 100% seiner Neuronen an der Berechnung beteiligt sein m\u00fcssten.<\/p>\n<p>Theoretisch bedeutet dies, dass UltraFastBERT 341x schneller ist als BERT oder GPT-3.<\/p>\n<p>Warum sagen wir \"theoretisch\", wenn die Forscher uns versichern, dass ihre Methode funktioniert? Weil sie eine Softwareumgehung entwickeln mussten, damit ihr FFF mit BERT funktioniert, und bei realen Tests nur eine 78-fache Geschwindigkeitssteigerung erreicht haben.<\/p>\n<h2>Es ist ein Geheimnis<\/h2>\n<p>In dem Forschungspapier hei\u00dft es: \"Die dichte Matrixmultiplikation ist die am meisten optimierte mathematische Operation in der Geschichte der Computertechnik. Es wurden enorme Anstrengungen unternommen, um Speicher, Chips, Befehlss\u00e4tze und Software-Routinen zu entwickeln, die diese Operation so schnell wie m\u00f6glich ausf\u00fchren. Viele dieser Fortschritte wurden ... geheim gehalten und dem Endbenutzer nur \u00fcber leistungsf\u00e4hige, aber restriktive Programmierschnittstellen zug\u00e4nglich gemacht.\"<\/p>\n<p>Im Grunde genommen sagen sie, dass die Ingenieure, die herausgefunden haben, wie die f\u00fcr herk\u00f6mmliche FF-Netze erforderliche Mathematik am effizientesten verarbeitet werden kann, ihre Low-Level-Software und Algorithmen geheim halten und keinen Einblick in ihren Code gew\u00e4hren.<\/p>\n<p>Wenn die Entwickler von Intel- oder Nvidia-Grafikprozessoren den Zugriff auf Low-Level-Code erm\u00f6glichen w\u00fcrden, um FFF-Netzwerke in KI-Modellen zu implementieren, k\u00f6nnte die 341-fache Geschwindigkeitssteigerung Realit\u00e4t werden.<\/p>\n<p>Aber werden sie es tun? Wenn Sie Ihre Grafikprozessoren so entwickeln k\u00f6nnten, dass die Leute 99,7% weniger davon kaufen m\u00fcssten, um die gleiche Rechenleistung zu erbringen, w\u00fcrden Sie es tun? Die Wirtschaft wird dabei ein gewisses Mitspracherecht haben, aber FFF-Netzwerke k\u00f6nnten den n\u00e4chsten gro\u00dfen Sprung in der KI darstellen.<\/p>","protected":false},"excerpt":{"rendered":"<p>Wissenschaftler der ETH Z\u00fcrich fanden heraus, dass Large Language Models (LLM) nur einen kleinen Teil ihrer Neuronen f\u00fcr individuelle Schlussfolgerungen verwenden m\u00fcssen. Ihr neuer Ansatz verspricht, LLMs viel schneller laufen zu lassen. Um zu verstehen, wie sie es geschafft haben, KI-Modelle zu beschleunigen, m\u00fcssen wir eine grobe Vorstellung davon bekommen, wie ein KI-Sprachmodell technisch aufgebaut ist. KI-Modelle wie GPT oder Llama bestehen aus Feedforward-Netzen, einer Art k\u00fcnstlicher neuronaler Netze. Feedforward-Netzwerke (FF) sind in der Regel in Schichten organisiert, wobei jede Schicht von Neuronen Input von<\/p>","protected":false},"author":6,"featured_media":8049,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[84],"tags":[118,105],"class_list":["post-8047","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-industry","tag-llms","tag-machine-learning"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>New approach could make large language models 300x faster | DailyAI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/de\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/\" \/>\n<meta property=\"og:locale\" content=\"de_DE\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"New approach could make large language models 300x faster | DailyAI\" \/>\n<meta property=\"og:description\" content=\"Scientists from ETH Zurich found that Large Language Models (LLM) only need to use a small fraction of their neurons for individual inferences. Their new approach promises to make LLMs run a lot faster. To begin to understand how they managed to speed up AI models we need to get a rough idea of some of the technical stuff that makes up an AI language model. AI models like GPT or Llama are made up of feedforward networks, a type of artificial neural network. Feedforward networks (FF) are typically organized into layers, with each layer of neurons receiving input from\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/de\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2023-12-06T12:34:54+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/neural-network-concept-art.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1000\" \/>\n\t<meta property=\"og:image:height\" content=\"625\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Eugene van der Watt\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Verfasst von\" \/>\n\t<meta name=\"twitter:data1\" content=\"Eugene van der Watt\" \/>\n\t<meta name=\"twitter:label2\" content=\"Gesch\u00e4tzte Lesezeit\" \/>\n\t<meta name=\"twitter:data2\" content=\"3\u00a0Minuten\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/new-approach-could-make-large-language-models-300x-faster\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/new-approach-could-make-large-language-models-300x-faster\\\/\"},\"author\":{\"name\":\"Eugene van der Watt\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\"},\"headline\":\"New approach could make large language models 300x faster\",\"datePublished\":\"2023-12-06T12:34:54+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/new-approach-could-make-large-language-models-300x-faster\\\/\"},\"wordCount\":604,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/new-approach-could-make-large-language-models-300x-faster\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/neural-network-concept-art.jpg\",\"keywords\":[\"LLMS\",\"machine learning\"],\"articleSection\":[\"Industry\"],\"inLanguage\":\"de\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/new-approach-could-make-large-language-models-300x-faster\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/new-approach-could-make-large-language-models-300x-faster\\\/\",\"name\":\"New approach could make large language models 300x faster | DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/new-approach-could-make-large-language-models-300x-faster\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/new-approach-could-make-large-language-models-300x-faster\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/neural-network-concept-art.jpg\",\"datePublished\":\"2023-12-06T12:34:54+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/new-approach-could-make-large-language-models-300x-faster\\\/#breadcrumb\"},\"inLanguage\":\"de\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/new-approach-could-make-large-language-models-300x-faster\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"de\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/new-approach-could-make-large-language-models-300x-faster\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/neural-network-concept-art.jpg\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/neural-network-concept-art.jpg\",\"width\":1000,\"height\":625},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/new-approach-could-make-large-language-models-300x-faster\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"New approach could make large language models 300x faster\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"de\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"de\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\",\"name\":\"Eugene van der Watt\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"de\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"caption\":\"Eugene van der Watt\"},\"description\":\"Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.\",\"sameAs\":[\"www.linkedin.com\\\/in\\\/eugene-van-der-watt-16828119\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/de\\\/author\\\/eugene\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Neuer Ansatz k\u00f6nnte gro\u00dfe Sprachmodelle 300x schneller machen | DailyAI","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/de\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/","og_locale":"de_DE","og_type":"article","og_title":"New approach could make large language models 300x faster | DailyAI","og_description":"Scientists from ETH Zurich found that Large Language Models (LLM) only need to use a small fraction of their neurons for individual inferences. Their new approach promises to make LLMs run a lot faster. To begin to understand how they managed to speed up AI models we need to get a rough idea of some of the technical stuff that makes up an AI language model. AI models like GPT or Llama are made up of feedforward networks, a type of artificial neural network. Feedforward networks (FF) are typically organized into layers, with each layer of neurons receiving input from","og_url":"https:\/\/dailyai.com\/de\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/","og_site_name":"DailyAI","article_published_time":"2023-12-06T12:34:54+00:00","og_image":[{"width":1000,"height":625,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/neural-network-concept-art.jpg","type":"image\/jpeg"}],"author":"Eugene van der Watt","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"Verfasst von":"Eugene van der Watt","Gesch\u00e4tzte Lesezeit":"3\u00a0Minuten"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/"},"author":{"name":"Eugene van der Watt","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa"},"headline":"New approach could make large language models 300x faster","datePublished":"2023-12-06T12:34:54+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/"},"wordCount":604,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/neural-network-concept-art.jpg","keywords":["LLMS","machine learning"],"articleSection":["Industry"],"inLanguage":"de"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/","url":"https:\/\/dailyai.com\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/","name":"Neuer Ansatz k\u00f6nnte gro\u00dfe Sprachmodelle 300x schneller machen | DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/neural-network-concept-art.jpg","datePublished":"2023-12-06T12:34:54+00:00","breadcrumb":{"@id":"https:\/\/dailyai.com\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/#breadcrumb"},"inLanguage":"de","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/"]}]},{"@type":"ImageObject","inLanguage":"de","@id":"https:\/\/dailyai.com\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/neural-network-concept-art.jpg","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/neural-network-concept-art.jpg","width":1000,"height":625},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"New approach could make large language models 300x faster"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DailyAI","description":"Ihre t\u00e4gliche Dosis an AI-Nachrichten","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"de"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DailyAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"de","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa","name":"Eugene van der Watt","image":{"@type":"ImageObject","inLanguage":"de","@id":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","caption":"Eugene van der Watt"},"description":"Eugene kommt aus der Elektronikbranche und liebt alles, was mit Technik zu tun hat. Wenn er eine Pause vom Konsum von KI-Nachrichten einlegt, findet man ihn am Snookertisch.","sameAs":["www.linkedin.com\/in\/eugene-van-der-watt-16828119"],"url":"https:\/\/dailyai.com\/de\/author\/eugene\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/posts\/8047","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/comments?post=8047"}],"version-history":[{"count":3,"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/posts\/8047\/revisions"}],"predecessor-version":[{"id":8051,"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/posts\/8047\/revisions\/8051"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/media\/8049"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/media?parent=8047"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/categories?post=8047"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/tags?post=8047"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}