{"id":8047,"date":"2023-12-06T12:34:54","date_gmt":"2023-12-06T12:34:54","guid":{"rendered":"https:\/\/dailyai.com\/?p=8047"},"modified":"2023-12-06T12:34:54","modified_gmt":"2023-12-06T12:34:54","slug":"new-approach-could-make-large-language-models-300x-faster","status":"publish","type":"post","link":"https:\/\/dailyai.com\/da\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/","title":{"rendered":"Ny metode kan g\u00f8re store sprogmodeller 300 gange hurtigere"},"content":{"rendered":"<p><strong>Forskere fra ETH Z\u00fcrich har fundet ud af, at store sprogmodeller (LLM) kun beh\u00f8ver at bruge en lille del af deres neuroner til individuelle slutninger. Deres nye tilgang lover at f\u00e5 LLM'er til at k\u00f8re meget hurtigere.<\/strong><\/p>\n<p>For at begynde at forst\u00e5, hvordan det lykkedes dem at g\u00f8re AI-modeller hurtigere, er vi n\u00f8dt til at f\u00e5 en grov id\u00e9 om nogle af de tekniske ting, der udg\u00f8r en AI-sprogmodel.<\/p>\n<p>AI-modeller som GPT eller Llama best\u00e5r af feedforward-netv\u00e6rk, en type kunstige neurale netv\u00e6rk.<\/p>\n<p>Feedforward-netv\u00e6rk (FF) er typisk organiseret i lag, hvor hvert lag af neuroner modtager input fra det foreg\u00e5ende lag og sender sit output til det n\u00e6ste lag.<\/p>\n<p>Det indeb\u00e6rer t\u00e6t matrixmultiplikation (DMM), som kr\u00e6ver, at hver neuron i FF udf\u00f8rer beregninger p\u00e5 alle input fra det foreg\u00e5ende lag. Og det er derfor <a href=\"https:\/\/dailyai.com\/da\/2023\/11\/nvidia-achieves-record-18b-q3-revenue-crediting-generative-ai\/\">Nvidia s\u00e6lger s\u00e5 mange af sine GPU'er<\/a> fordi denne proces kr\u00e6ver meget processorkraft.<\/p>\n<p><a href=\"https:\/\/arxiv.org\/pdf\/2311.10770.pdf\" target=\"_blank\" rel=\"noopener\">Forskerne<\/a> brugte Fast Feedforward Networks (FFF) til at g\u00f8re denne proces meget hurtigere. 
An FFF takes each layer of neurons, splits it into blocks, and then selects only the most relevant blocks based on the input. This process amounts to performing conditional matrix multiplication (CMM).<\/p>\n<p>This means that instead of all the neurons in a layer taking part in the computation, only a very small fraction do.<\/p>\n<p>Think of it as sorting through a pile of mail to find a letter addressed to you. Instead of reading the name and address on every single letter, you could first sort the pile by zip code and then focus only on the letters for your area.<\/p>\n<p>In the same way, FFFs identify only the few neurons needed for each computation, requiring just a fraction of the processing of traditional FFs.<\/p>\n<h2>How much faster?<\/h2>\n<p>The researchers tested their method on a variant of Google's BERT model, which they called UltraFastBERT. UltraFastBERT's feedforward layers contain 4,095 neurons, but it selectively engages only 12 of them for each layer inference.<\/p>\n<p>This means that UltraFastBERT needs only around 0.3% of its neurons to be involved in processing during inference, whereas regular BERT needs 100% of its neurons involved in the computation.<\/p>\n<p>In theory, that makes UltraFastBERT 341 times faster than BERT or GPT-3.<\/p>\n<p>Why do we say \"in theory\" when the researchers assure us that their method works? Because they had to build a software workaround to get their FFF working with BERT, and in real-world testing they achieved only a 78x speed improvement.<\/p>\n<h2>It's a secret<\/h2>\n<p>The research paper explained that \"Dense matrix multiplication is the most optimized mathematical operation in the history of computing. 
A tremendous effort has been put into designing memories, chips, instruction sets, and software routines that execute it as fast as possible. Many of these advancements have been ... kept confidential and exposed to the end user only through powerful but restrictive programming interfaces.\"<\/p>\n<p>In essence, they are saying that the engineers who have devised the most efficient ways to process the math behind traditional FF networks keep their low-level software and algorithms secret and won't let you look at their code.<\/p>\n<p>If the brains behind the design of Intel's or Nvidia's GPUs granted access to the low-level code needed to implement FFF networks in AI models, the 341x speed improvement could become a reality.<\/p>\n<p>But will they? If you could engineer your GPUs so that people could buy 99.7% fewer of them to do the same amount of processing, would you? Economics will have some influence here, but FFF networks could be the next big leap in AI.<\/p>","protected":false},"excerpt":{"rendered":"<p>Researchers from ETH Zurich have found that large language models (LLMs) only need to use a small fraction of their neurons for individual inferences. Their new approach promises to make LLMs run a lot faster. To begin to understand how they managed to speed up AI models, we need a rough idea of some of the technical components that make up an AI language model. AI models like GPT or Llama are built from feedforward networks, a type of artificial neural network. 
Feedforward networks (FF) are typically organized into layers, with each layer of neurons receiving input from<\/p>","protected":false},"author":6,"featured_media":8049,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[84],"tags":[118,105],"class_list":["post-8047","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-industry","tag-llms","tag-machine-learning"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>New approach could make large language models 300x faster | DailyAI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/da\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/\" \/>\n<meta property=\"og:locale\" content=\"da_DK\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"New approach could make large language models 300x faster | DailyAI\" \/>\n<meta property=\"og:description\" content=\"Scientists from ETH Zurich found that Large Language Models (LLM) only need to use a small fraction of their neurons for individual inferences. Their new approach promises to make LLMs run a lot faster. To begin to understand how they managed to speed up AI models we need to get a rough idea of some of the technical stuff that makes up an AI language model. AI models like GPT or Llama are made up of feedforward networks, a type of artificial neural network. 
Feedforward networks (FF) are typically organized into layers, with each layer of neurons receiving input from\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/da\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2023-12-06T12:34:54+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/neural-network-concept-art.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1000\" \/>\n\t<meta property=\"og:image:height\" content=\"625\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Eugene van der Watt\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Skrevet af\" \/>\n\t<meta name=\"twitter:data1\" content=\"Eugene van der Watt\" \/>\n\t<meta name=\"twitter:label2\" content=\"Estimeret l\u00e6setid\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minutter\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/new-approach-could-make-large-language-models-300x-faster\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/new-approach-could-make-large-language-models-300x-faster\\\/\"},\"author\":{\"name\":\"Eugene van der Watt\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\"},\"headline\":\"New approach could make large language models 300x 
faster\",\"datePublished\":\"2023-12-06T12:34:54+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/new-approach-could-make-large-language-models-300x-faster\\\/\"},\"wordCount\":604,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/new-approach-could-make-large-language-models-300x-faster\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/neural-network-concept-art.jpg\",\"keywords\":[\"LLMS\",\"machine learning\"],\"articleSection\":[\"Industry\"],\"inLanguage\":\"da-DK\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/new-approach-could-make-large-language-models-300x-faster\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/new-approach-could-make-large-language-models-300x-faster\\\/\",\"name\":\"New approach could make large language models 300x faster | DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/new-approach-could-make-large-language-models-300x-faster\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/new-approach-could-make-large-language-models-300x-faster\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/neural-network-concept-art.jpg\",\"datePublished\":\"2023-12-06T12:34:54+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/new-approach-could-make-large-language-models-300x-faster\\\/#breadcrumb\"},\"inLanguage\":\"da-DK\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/new-approach-could-make-large-language-models-300x-faster\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"da-DK\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/new-approach-could-make-large-la
nguage-models-300x-faster\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/neural-network-concept-art.jpg\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/neural-network-concept-art.jpg\",\"width\":1000,\"height\":625},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/new-approach-could-make-large-language-models-300x-faster\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"New approach could make large language models 300x faster\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"da-DK\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"da-DK\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dail
yaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\",\"name\":\"Eugene van der Watt\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"da-DK\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"caption\":\"Eugene van der Watt\"},\"description\":\"Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.\",\"sameAs\":[\"www.linkedin.com\\\/in\\\/eugene-van-der-watt-16828119\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/da\\\/author\\\/eugene\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Ny tilgang kan g\u00f8re store sprogmodeller 300 gange hurtigere | DailyAI","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/da\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/","og_locale":"da_DK","og_type":"article","og_title":"New approach could make large language models 300x faster | DailyAI","og_description":"Scientists from ETH Zurich found that Large Language Models (LLM) only need to use a small fraction of their neurons for individual inferences. Their new approach promises to make LLMs run a lot faster. To begin to understand how they managed to speed up AI models we need to get a rough idea of some of the technical stuff that makes up an AI language model. 
AI models like GPT or Llama are made up of feedforward networks, a type of artificial neural network. Feedforward networks (FF) are typically organized into layers, with each layer of neurons receiving input from","og_url":"https:\/\/dailyai.com\/da\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/","og_site_name":"DailyAI","article_published_time":"2023-12-06T12:34:54+00:00","og_image":[{"width":1000,"height":625,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/neural-network-concept-art.jpg","type":"image\/jpeg"}],"author":"Eugene van der Watt","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"Skrevet af":"Eugene van der Watt","Estimeret l\u00e6setid":"3 minutter"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/"},"author":{"name":"Eugene van der Watt","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa"},"headline":"New approach could make large language models 300x faster","datePublished":"2023-12-06T12:34:54+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/"},"wordCount":604,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/neural-network-concept-art.jpg","keywords":["LLMS","machine 
learning"],"articleSection":["Industry"],"inLanguage":"da-DK"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/","url":"https:\/\/dailyai.com\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/","name":"Ny tilgang kan g\u00f8re store sprogmodeller 300 gange hurtigere | DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/neural-network-concept-art.jpg","datePublished":"2023-12-06T12:34:54+00:00","breadcrumb":{"@id":"https:\/\/dailyai.com\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/#breadcrumb"},"inLanguage":"da-DK","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/"]}]},{"@type":"ImageObject","inLanguage":"da-DK","@id":"https:\/\/dailyai.com\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/neural-network-concept-art.jpg","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/neural-network-concept-art.jpg","width":1000,"height":625},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"New approach could make large language models 300x faster"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DailyAI","description":"Din daglige dosis af 
AI-nyheder","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"da-DK"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DailyAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"da-DK","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa","name":"Eugene van der Watt","image":{"@type":"ImageObject","inLanguage":"da-DK","@id":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","caption":"Eugene van der Watt"},"description":"Eugene har en baggrund som elektronikingeni\u00f8r og elsker alt, hvad der har med teknologi at g\u00f8re. 
N\u00e5r han tager en pause fra at l\u00e6se AI-nyheder, kan du finde ham ved snookerbordet.","sameAs":["www.linkedin.com\/in\/eugene-van-der-watt-16828119"],"url":"https:\/\/dailyai.com\/da\/author\/eugene\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/posts\/8047","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/comments?post=8047"}],"version-history":[{"count":3,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/posts\/8047\/revisions"}],"predecessor-version":[{"id":8051,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/posts\/8047\/revisions\/8051"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/media\/8049"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/media?parent=8047"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/categories?post=8047"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/tags?post=8047"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}