<h1>New approach could make large language models 300x faster</h1>
<p><em>By Eugene van der Watt | DailyAI | December 6, 2023</em></p>
<p><strong>Scientists from ETH Zurich have found that large language models (LLMs) only need to use a small fraction of their neurons for individual inferences. Their new approach promises to make LLMs run much faster.</strong></p>
<p>To understand how they managed to speed up AI models, we first need a rough idea of some of the technical pieces that make up an AI language model.</p>
<p>AI models like GPT or Llama are built from feedforward networks, a type of artificial neural network.</p>
<p>Feedforward networks (FF) are typically organized into layers, where each layer of neurons receives input from the previous layer and sends its output to the next.</p>
<p>This involves dense matrix multiplication (DMM), which requires every neuron in the FF layer to perform calculations on all the inputs from the previous layer. 
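</p>
<p>As a rough illustration (my own sketch, not the researchers' code, and with made-up layer sizes), a dense FF layer is one big matrix multiply in which every neuron touches every input:</p>

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dense feedforward layer; the sizes are invented for illustration.
n_in, n_out = 768, 3072            # inputs from previous layer, neurons here
W = rng.standard_normal((n_out, n_in))
b = np.zeros(n_out)
x = rng.standard_normal(n_in)      # input vector from the previous layer

# Dense matrix multiplication (DMM): every one of the 3072 neurons
# computes over all 768 inputs.
h = np.maximum(W @ x + b, 0.0)     # ReLU activation

print(h.shape)                     # (3072,)
```

<p>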
That's why <a href="https://dailyai.com/es/2023/11/nvidia-achieves-record-18b-q3-revenue-crediting-generative-ai/">Nvidia sells so many of its GPUs</a>: this process demands a lot of processing power.</p>
<p><a href="https://arxiv.org/pdf/2311.10770.pdf" target="_blank" rel="noopener">The researchers</a> use fast feedforward networks (FFF) to streamline this process. An FFF takes each layer of neurons, splits it into blocks, and then selects only the most relevant blocks based on the input. This amounts to performing conditional matrix multiplication (CMM).</p>
<p>This means that instead of every neuron in a layer taking part in the computation, only a very small fraction does.</p>
<p>Think of it like sorting through a pile of letters to find the one addressed to you. Instead of reading the name and address on every letter, you can first sort them by postal code and focus only on those for your area.</p>
<p>Similarly, FFFs identify only the few neurons needed for each computation, requiring just a fraction of the processing of traditional FFs.</p>
<h2>How much faster?</h2>
<p>The researchers tested their method on a variant of Google's BERT model that they called UltraFastBERT. 
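</p>
<p>The block-selection idea can be sketched in a few lines. This is a simplified stand-in that scores blocks with a per-block "router" vector; the paper's actual FFF uses a learned binary decision tree, and all sizes here are made up:</p>

```python
import numpy as np

rng = np.random.default_rng(1)

# Same toy layer, viewed as a fast feedforward network (FFF).
n_in, n_out, n_blocks = 768, 3072, 64
block = n_out // n_blocks              # 48 neurons per block

W = rng.standard_normal((n_out, n_in))
b = np.zeros(n_out)
router = rng.standard_normal((n_blocks, n_in))  # hypothetical block scorer
x = rng.standard_normal(n_in)

# Conditional matrix multiplication (CMM): score the blocks against the
# input, keep only the most relevant one, and multiply just its rows.
best = int(np.argmax(router @ x))
rows = slice(best * block, (best + 1) * block)
h = np.maximum(W[rows] @ x + b[rows], 0.0)

print(h.shape)                         # (48,) - only 48 of 3072 neurons did any work
```

<p>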
UltraFastBERT consists of 4,095 neurons per layer but selectively uses only 12 of them during inference.</p>
<p>This means UltraFastBERT needs only about 0.3% of its neurons (12 of 4,095) to take part in the computation during inference, whereas regular BERT would need 100% of its neurons involved.</p>
<p>In theory, that would make UltraFastBERT 341 times faster than BERT or GPT-3.</p>
<p>Why do we say "in theory" when the researchers insist their method works? Because they had to build a software workaround to make their FFF work with BERT, and in real-world tests they only achieved a 78x speedup.</p>
<h2>It's a secret</h2>
<p>The research paper explains: "Dense matrix multiplication is the most optimized mathematical operation in the history of computing. A tremendous effort has been made to design memories, chips, instruction sets, and software routines that execute it as fast as possible. Many of these advancements have been... kept confidential and exposed to the end user only through powerful but restrictive programming interfaces."</p>
<p>In essence, they are saying that the engineers who figured out the most efficient ways to crunch the math behind traditional FF networks keep their low-level software and algorithms secret and won't let you see their code.</p>
<p>If the brains behind Intel's or Nvidia's GPU designs allowed access to the low-level code needed to implement FFF networks in AI models, the 341x speedup could become a reality.</p>
<p>But will they? 
If we could design our GPUs so that people could buy 99.7% fewer of them to do the same amount of processing, would we? Economics will have a say in the matter, but FFF networks could represent AI's next big leap.</p>
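<p>The headline figures above check out with simple arithmetic, assuming (as the paper states) 12 of 4,095 neurons active per layer:</p>

```python
# Arithmetic behind the headline numbers: 12 of 4095 neurons active.
total, active = 4095, 12

print(f"{active / total:.1%}")     # 0.3% of neurons take part

# If compute scales with the number of active neurons, the theoretical
# speedup over using all of them is:
print(f"{total / active:.0f}x")    # 341x
```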