{"id":8047,"date":"2023-12-06T12:34:54","date_gmt":"2023-12-06T12:34:54","guid":{"rendered":"https:\/\/dailyai.com\/?p=8047"},"modified":"2023-12-06T12:34:54","modified_gmt":"2023-12-06T12:34:54","slug":"new-approach-could-make-large-language-models-300x-faster","status":"publish","type":"post","link":"https:\/\/dailyai.com\/fr\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/","title":{"rendered":"New approach could make large language models 300x faster"},"content":{"rendered":"<p><strong>Scientists from ETH Zurich found that large language models (LLMs) only need to use a small fraction of their neurons for individual inferences. Their new approach promises to make LLMs run a lot faster.<\/strong><\/p>\n<p>To begin to understand how they managed to speed up AI models, we need a rough idea of some of the technical components that make up an AI language model.<\/p>\n<p>AI models like GPT or Llama are built from feedforward networks, a type of artificial neural network.<\/p>\n<p>Feedforward networks (FF) are typically organized into layers, with each layer of neurons receiving input from the previous layer and sending its output to the next.<\/p>\n<p>This process relies on dense matrix multiplication (DMM), which requires every neuron in the FF network to compute over all the inputs from the previous layer. 
This is why <a href=\"https:\/\/dailyai.com\/fr\/2023\/11\/nvidia-achieves-record-18b-q3-revenue-crediting-generative-ai\/\">Nvidia sells so many of its GPUs<\/a>: the process demands a great deal of processing power.<\/p>\n<p><a href=\"https:\/\/arxiv.org\/pdf\/2311.10770.pdf\" target=\"_blank\" rel=\"noopener\">The researchers<\/a> used fast feedforward networks (FFF) to speed this process up dramatically. An FFF takes each layer of neurons, splits it into blocks, and then selects only the most relevant blocks based on the input. This amounts to performing a conditional matrix multiplication (CMM).<\/p>\n<p>That means that instead of all of a layer's neurons being involved in the computation, only a very small fraction are.<\/p>\n<p>It's a bit like sorting through a pile of mail to find a letter addressed to you. Instead of reading the name and address on every letter, you could first sort the letters by postal code and focus on the ones for your area.<\/p>\n<p>In the same way, FFFs identify only the few neurons needed for each computation, requiring just a fraction of the processing of traditional FFs.<\/p>\n<h2>How much faster?<\/h2>\n<p>The researchers tested their method on a variant of Google's BERT model that they called UltraFastBERT. 
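<\/p>\n<p>As a rough illustrative sketch (not the authors' code), the difference between dense and conditional matrix multiplication can be written in NumPy. The random choice of 12 neurons below is a hypothetical stand-in for the FFF's learned tree-based block selection:<\/p>\n<pre><code>import numpy as np\n\nrng = np.random.default_rng(0)\nx = rng.standard_normal(4095)           # input to the layer\nW = rng.standard_normal((4095, 4095))   # hidden-layer weights\n\n# DMM: all 4095 hidden neurons compute over the full input\nh_dense = x @ W\n\n# CMM-style sketch: engage only 12 hidden neurons; the real FFF picks\n# them with a learned binary tree, here a stand-in random choice\nidx = rng.choice(4095, size=12, replace=False)\nh_sparse = x @ W[:, idx]<\/code><\/pre>\n<p>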
UltraFastBERT has 4,095 neurons per layer but selectively engages only 12 of them for each inference.<\/p>\n<p>That means UltraFastBERT needs only about 0.3% of its neurons (12 out of 4,095) to be involved in processing during inference, whereas standard BERT needs 100% of its neurons involved in the computation.<\/p>\n<p>In theory, that would make UltraFastBERT 341 times faster than BERT or GPT-3, since 4,095 divided by 12 is roughly 341.<\/p>\n<p>Why do we say \"in theory\" when the researchers assure us their method works? Because they had to create a software workaround to get their FFF working with BERT, and in real-world tests they achieved only a 78x speed improvement.<\/p>\n<h2>It's a secret<\/h2>\n<p>The research paper explains that \"dense matrix multiplication is the most optimized mathematical operation in the history of computing. A tremendous effort has been put into designing memories, chips, instruction sets, and software routines that execute it as fast as possible. Many of these advancements have been [...] kept confidential and exposed to the end user only through powerful but restrictive programming interfaces\".<\/p>\n<p>In short, they're saying that the engineers who found the most efficient ways to perform the math that traditional FF networks require keep their low-level software and algorithms secret and won't let you look at their code.<\/p>\n<p>If the designers of Intel or Nvidia GPUs allowed low-level code access to implement FFF networks in AI models, the 341x speed improvement could become a reality.<\/p>\n<p>But will they? If you could design your GPUs so that people could buy 99.7% fewer of them to do the same amount of processing, would you? Economics will have its say, but FFF networks could be the next giant leap in AI.<\/p>","protected":false},"excerpt":{"rendered":"<p>Scientists from ETH Zurich found that Large Language Models (LLM) only need to use a small fraction of their neurons for individual inferences. Their new approach promises to make LLMs run a lot faster. To begin to understand how they managed to speed up AI models we need to get a rough idea of some of the technical stuff that makes up an AI language model. AI models like GPT or Llama are made up of feedforward networks, a type of artificial neural network. 
Feedforward networks (FF) are typically organized into layers, with each layer of neurons receiving input from<\/p>","protected":false},"author":6,"featured_media":8049,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[84],"tags":[118,105],"class_list":["post-8047","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-industry","tag-llms","tag-machine-learning"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>New approach could make large language models 300x faster | DailyAI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/fr\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/\" \/>\n<meta property=\"og:locale\" content=\"fr_FR\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"New approach could make large language models 300x faster | DailyAI\" \/>\n<meta property=\"og:description\" content=\"Scientists from ETH Zurich found that Large Language Models (LLM) only need to use a small fraction of their neurons for individual inferences. Their new approach promises to make LLMs run a lot faster. To begin to understand how they managed to speed up AI models we need to get a rough idea of some of the technical stuff that makes up an AI language model. AI models like GPT or Llama are made up of feedforward networks, a type of artificial neural network. 
Feedforward networks (FF) are typically organized into layers, with each layer of neurons receiving input from\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/fr\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2023-12-06T12:34:54+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/neural-network-concept-art.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1000\" \/>\n\t<meta property=\"og:image:height\" content=\"625\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Eugene van der Watt\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"\u00c9crit par\" \/>\n\t<meta name=\"twitter:data1\" content=\"Eugene van der Watt\" \/>\n\t<meta name=\"twitter:label2\" content=\"Dur\u00e9e de lecture estim\u00e9e\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/new-approach-could-make-large-language-models-300x-faster\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/new-approach-could-make-large-language-models-300x-faster\\\/\"},\"author\":{\"name\":\"Eugene van der Watt\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\"},\"headline\":\"New approach could make large language models 300x 
faster\",\"datePublished\":\"2023-12-06T12:34:54+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/new-approach-could-make-large-language-models-300x-faster\\\/\"},\"wordCount\":604,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/new-approach-could-make-large-language-models-300x-faster\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/neural-network-concept-art.jpg\",\"keywords\":[\"LLMS\",\"machine learning\"],\"articleSection\":[\"Industry\"],\"inLanguage\":\"fr-FR\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/new-approach-could-make-large-language-models-300x-faster\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/new-approach-could-make-large-language-models-300x-faster\\\/\",\"name\":\"New approach could make large language models 300x faster | DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/new-approach-could-make-large-language-models-300x-faster\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/new-approach-could-make-large-language-models-300x-faster\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/neural-network-concept-art.jpg\",\"datePublished\":\"2023-12-06T12:34:54+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/new-approach-could-make-large-language-models-300x-faster\\\/#breadcrumb\"},\"inLanguage\":\"fr-FR\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/new-approach-could-make-large-language-models-300x-faster\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"fr-FR\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/new-approach-could-make-large-la
nguage-models-300x-faster\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/neural-network-concept-art.jpg\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/neural-network-concept-art.jpg\",\"width\":1000,\"height\":625},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/new-approach-could-make-large-language-models-300x-faster\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"New approach could make large language models 300x faster\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"fr-FR\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"fr-FR\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dail
yaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\",\"name\":\"Eugene van der Watt\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"fr-FR\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"caption\":\"Eugene van der Watt\"},\"description\":\"Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.\",\"sameAs\":[\"www.linkedin.com\\\/in\\\/eugene-van-der-watt-16828119\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/fr\\\/author\\\/eugene\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Une nouvelle approche pourrait rendre les grands mod\u00e8les linguistiques 300x plus rapides | DailyAI","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/fr\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/","og_locale":"fr_FR","og_type":"article","og_title":"New approach could make large language models 300x faster | DailyAI","og_description":"Scientists from ETH Zurich found that Large Language Models (LLM) only need to use a small fraction of their neurons for individual inferences. Their new approach promises to make LLMs run a lot faster. To begin to understand how they managed to speed up AI models we need to get a rough idea of some of the technical stuff that makes up an AI language model. 
AI models like GPT or Llama are made up of feedforward networks, a type of artificial neural network. Feedforward networks (FF) are typically organized into layers, with each layer of neurons receiving input from","og_url":"https:\/\/dailyai.com\/fr\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/","og_site_name":"DailyAI","article_published_time":"2023-12-06T12:34:54+00:00","og_image":[{"width":1000,"height":625,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/neural-network-concept-art.jpg","type":"image\/jpeg"}],"author":"Eugene van der Watt","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"\u00c9crit par":"Eugene van der Watt","Dur\u00e9e de lecture estim\u00e9e":"3 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/"},"author":{"name":"Eugene van der Watt","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa"},"headline":"New approach could make large language models 300x faster","datePublished":"2023-12-06T12:34:54+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/"},"wordCount":604,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/neural-network-concept-art.jpg","keywords":["LLMS","machine 
learning"],"articleSection":["Industry"],"inLanguage":"fr-FR"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/","url":"https:\/\/dailyai.com\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/","name":"New approach could make large language models 300x faster | DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/neural-network-concept-art.jpg","datePublished":"2023-12-06T12:34:54+00:00","breadcrumb":{"@id":"https:\/\/dailyai.com\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/#breadcrumb"},"inLanguage":"fr-FR","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/"]}]},{"@type":"ImageObject","inLanguage":"fr-FR","@id":"https:\/\/dailyai.com\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/neural-network-concept-art.jpg","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/neural-network-concept-art.jpg","width":1000,"height":625},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2023\/12\/new-approach-could-make-large-language-models-300x-faster\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"New approach could make large language models 300x 
faster"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DailyAI","description":"Your Daily Dose of AI News","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"fr-FR"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DailyAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"fr-FR","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa","name":"Eugene van der Watt","image":{"@type":"ImageObject","inLanguage":"fr-FR","@id":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","caption":"Eugene van der Watt"},"description":"Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.","sameAs":["www.linkedin.com\/in\/eugene-van-der-watt-16828119"],"url":"https:\/\/dailyai.com\/fr\/author\/eugene\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/posts\/8047","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/comments?post=8047"}],"version-history":[{"count":3,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/posts\/8047\/revisions"}],"predecessor-version":[{"id":8051,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/posts\/8047\/revisions\/8051"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/media\/8049"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/media?parent=8047"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/categories?post=8047"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/tags?post=8047"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}