{"id":8047,"date":"2023-12-06T12:34:54","date_gmt":"2023-12-06T12:34:54","guid":{"rendered":"https:\/\/dailyai.com\/?p=8047"},"modified":"2023-12-06T12:34:54","modified_gmt":"2023-12-06T12:34:54","slug":"new-approach-could-make-large-language-models-300x-faster","status":"publish","type":"post","link":"https:\/\/dailyai.com\/nl\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/","title":{"rendered":"New approach could make large language models 300x faster"},"content":{"rendered":"<p><strong>Scientists from ETH Z\u00fcrich found that large language models (LLMs) only need to use a small fraction of their neurons for individual inferences. Their new approach promises to make LLMs run a lot faster.<\/strong><\/p>\n<p>To understand how they managed to speed up AI models, we first need a rough idea of some of the technical components that make up an AI language model.<\/p>\n<p>AI models like GPT or Llama are built from feedforward networks, a type of artificial neural network.<\/p>\n<p>Feedforward networks (FF) are typically organized into layers, with each layer of neurons receiving input from the previous layer and sending its output to the next one.<\/p>\n<p>This requires dense matrix multiplication (DMM), in which every neuron in the FF must perform computations on all the inputs from the previous layer. This process takes a lot of compute, which is one reason why <a href=\"https:\/\/dailyai.com\/nl\/2023\/11\/nvidia-achieves-record-18b-q3-revenue-crediting-generative-ai\/\">Nvidia sells so many of its GPUs<\/a>.<\/p>\n<p><a href=\"https:\/\/arxiv.org\/pdf\/2311.10770.pdf\" target=\"_blank\" rel=\"noopener\">The researchers<\/a> used Fast Feedforward Networks (FFF) to make this process much faster. An FFF takes each layer of neurons, splits it into blocks, and then selects only the most relevant blocks based on the input. 
This process amounts to performing conditional matrix multiplication (CMM).<\/p>\n<p>This means that not all of a layer's neurons are involved in the computation, only a very small fraction.<\/p>\n<p>Think of it as sorting through a pile of mail to find a letter addressed to you. Instead of reading the name and address on every letter, you can first sort them by postal code and then focus only on the letters for your region.<\/p>\n<p>In the same way, FFFs identify only the few neurons needed for each computation, requiring just a fraction of the processing compared with traditional FFs.<\/p>\n<h2>How much faster?<\/h2>\n<p>The researchers tested their method on a variant of Google's BERT model that they called UltraFastBERT. UltraFastBERT consists of 4,095 neurons but selectively engages only 12 neurons for each layer inference.<\/p>\n<p>This means that UltraFastBERT needs roughly 0.3% of its neurons (12 of 4,095) for processing during inference, while regular BERT needs 100% of its neurons for the computation.<\/p>\n<p>In theory, this means UltraFastBERT would be 341x faster than BERT or GPT-3.<\/p>\n<p>Why do we say \"in theory\" when the researchers assure us their method works? Because they had to build a software workaround to make their FFF work with BERT, and that achieved only a 78x speed improvement in real-world tests.<\/p>\n<h2>It's a secret<\/h2>\n<p>The research paper explains that \"dense matrix multiplication is the most optimized mathematical operation in the history of computing. A tremendous effort has been put into designing memories, chips, instruction sets, and software routines that execute it as fast as possible. Many of these advancements have been ... kept confidential and exposed to the end user only through powerful but restrictive programming interfaces.\"<\/p>\n<p>In effect, they are saying that the engineers who found the most efficient ways to process the math behind traditional FF networks keep their low-level software and algorithms secret and won't let you look at their code.<\/p>\n<p>If the minds behind Intel's and Nvidia's GPU designs enabled low-level code access for implementing FFF networks in AI models, the 341x speed improvement could become a reality.<\/p>\n<p>But will they? If you could design your GPUs so that people could buy 99.7% fewer of them to do the same amount of processing, would you? Economics will no doubt play a role here, but FFF networks could be the next big leap in AI.<\/p>","protected":false},"excerpt":{"rendered":"<p>Scientists from ETH Z\u00fcrich found that Large Language Models (LLM) only need to use a small fraction of their neurons for individual inferences. Their new approach promises to make LLMs run a lot faster. To understand how they managed to speed up AI models, we need a rough idea of some of the technical components that make up an AI language model. AI models like GPT or Llama are built from feedforward networks, a type of artificial neural network. 
Feedforward networks (FF) are typically organized into layers, with each layer of neurons receiving input from<\/p>","protected":false},"author":6,"featured_media":8049,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[84],"tags":[118,105],"class_list":["post-8047","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-industry","tag-llms","tag-machine-learning"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>New approach could make large language models 300x faster | DailyAI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/nl\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/\" \/>\n<meta property=\"og:locale\" content=\"nl_NL\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"New approach could make large language models 300x faster | DailyAI\" \/>\n<meta property=\"og:description\" content=\"Scientists from ETH Zurich found that Large Language Models (LLM) only need to use a small fraction of their neurons for individual inferences. Their new approach promises to make LLMs run a lot faster. To begin to understand how they managed to speed up AI models we need to get a rough idea of some of the technical stuff that makes up an AI language model. AI models like GPT or Llama are made up of feedforward networks, a type of artificial neural network. 
Feedforward networks (FF) are typically organized into layers, with each layer of neurons receiving input from\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/nl\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2023-12-06T12:34:54+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/neural-network-concept-art.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1000\" \/>\n\t<meta property=\"og:image:height\" content=\"625\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Eugene van der Watt\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Geschreven door\" \/>\n\t<meta name=\"twitter:data1\" content=\"Eugene van der Watt\" \/>\n\t<meta name=\"twitter:label2\" content=\"Geschatte leestijd\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minuten\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/new-approach-could-make-large-language-models-300x-faster\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/new-approach-could-make-large-language-models-300x-faster\\\/\"},\"author\":{\"name\":\"Eugene van der Watt\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\"},\"headline\":\"New approach could make large language models 300x 
faster\",\"datePublished\":\"2023-12-06T12:34:54+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/new-approach-could-make-large-language-models-300x-faster\\\/\"},\"wordCount\":604,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/new-approach-could-make-large-language-models-300x-faster\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/neural-network-concept-art.jpg\",\"keywords\":[\"LLMS\",\"machine learning\"],\"articleSection\":[\"Industry\"],\"inLanguage\":\"nl-NL\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/new-approach-could-make-large-language-models-300x-faster\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/new-approach-could-make-large-language-models-300x-faster\\\/\",\"name\":\"New approach could make large language models 300x faster | DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/new-approach-could-make-large-language-models-300x-faster\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/new-approach-could-make-large-language-models-300x-faster\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/neural-network-concept-art.jpg\",\"datePublished\":\"2023-12-06T12:34:54+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/new-approach-could-make-large-language-models-300x-faster\\\/#breadcrumb\"},\"inLanguage\":\"nl-NL\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/new-approach-could-make-large-language-models-300x-faster\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"nl-NL\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/new-approach-could-make-large-la
nguage-models-300x-faster\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/neural-network-concept-art.jpg\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/neural-network-concept-art.jpg\",\"width\":1000,\"height\":625},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/new-approach-could-make-large-language-models-300x-faster\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"New approach could make large language models 300x faster\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"nl-NL\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"nl-NL\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dail
yaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\",\"name\":\"Eugene van der Watt\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"nl-NL\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"caption\":\"Eugene van der Watt\"},\"description\":\"Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.\",\"sameAs\":[\"www.linkedin.com\\\/in\\\/eugene-van-der-watt-16828119\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/nl\\\/author\\\/eugene\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Nieuwe aanpak kan grote taalmodellen 300x sneller maken | DailyAI","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/nl\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/","og_locale":"nl_NL","og_type":"article","og_title":"New approach could make large language models 300x faster | DailyAI","og_description":"Scientists from ETH Zurich found that Large Language Models (LLM) only need to use a small fraction of their neurons for individual inferences. Their new approach promises to make LLMs run a lot faster. To begin to understand how they managed to speed up AI models we need to get a rough idea of some of the technical stuff that makes up an AI language model. 
AI models like GPT or Llama are made up of feedforward networks, a type of artificial neural network. Feedforward networks (FF) are typically organized into layers, with each layer of neurons receiving input from","og_url":"https:\/\/dailyai.com\/nl\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/","og_site_name":"DailyAI","article_published_time":"2023-12-06T12:34:54+00:00","og_image":[{"width":1000,"height":625,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/neural-network-concept-art.jpg","type":"image\/jpeg"}],"author":"Eugene van der Watt","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"Geschreven door":"Eugene van der Watt","Geschatte leestijd":"3 minuten"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/"},"author":{"name":"Eugene van der Watt","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa"},"headline":"New approach could make large language models 300x faster","datePublished":"2023-12-06T12:34:54+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/"},"wordCount":604,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/neural-network-concept-art.jpg","keywords":["LLMS","machine 
learning"],"articleSection":["Industry"],"inLanguage":"nl-NL"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/","url":"https:\/\/dailyai.com\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/","name":"Nieuwe aanpak kan grote taalmodellen 300x sneller maken | DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/neural-network-concept-art.jpg","datePublished":"2023-12-06T12:34:54+00:00","breadcrumb":{"@id":"https:\/\/dailyai.com\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/#breadcrumb"},"inLanguage":"nl-NL","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/"]}]},{"@type":"ImageObject","inLanguage":"nl-NL","@id":"https:\/\/dailyai.com\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/neural-network-concept-art.jpg","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/neural-network-concept-art.jpg","width":1000,"height":625},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"New approach could make large language models 300x faster"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DailyAI","description":"Uw dagelijkse dosis 
AI-nieuws","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"nl-NL"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DailyAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"nl-NL","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa","name":"Eugene van der Watt","image":{"@type":"ImageObject","inLanguage":"nl-NL","@id":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","caption":"Eugene van der Watt"},"description":"Eugene heeft een achtergrond in elektrotechniek en houdt van alles wat met techniek te maken heeft. 
Als hij even pauzeert van het consumeren van AI-nieuws, kun je hem aan de snookertafel vinden.","sameAs":["www.linkedin.com\/in\/eugene-van-der-watt-16828119"],"url":"https:\/\/dailyai.com\/nl\/author\/eugene\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/nl\/wp-json\/wp\/v2\/posts\/8047","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/nl\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/nl\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/nl\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/nl\/wp-json\/wp\/v2\/comments?post=8047"}],"version-history":[{"count":3,"href":"https:\/\/dailyai.com\/nl\/wp-json\/wp\/v2\/posts\/8047\/revisions"}],"predecessor-version":[{"id":8051,"href":"https:\/\/dailyai.com\/nl\/wp-json\/wp\/v2\/posts\/8047\/revisions\/8051"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/nl\/wp-json\/wp\/v2\/media\/8049"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/nl\/wp-json\/wp\/v2\/media?parent=8047"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/nl\/wp-json\/wp\/v2\/categories?post=8047"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/nl\/wp-json\/wp\/v2\/tags?post=8047"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}