{"id":10017,"date":"2024-02-14T10:06:29","date_gmt":"2024-02-14T10:06:29","guid":{"rendered":"https:\/\/dailyai.com\/?p=10017"},"modified":"2024-02-14T10:06:29","modified_gmt":"2024-02-14T10:06:29","slug":"nvidias-custom-chatbot-runs-locally-on-rtx-ai-pcs","status":"publish","type":"post","link":"https:\/\/dailyai.com\/it\/2024\/02\/nvidias-custom-chatbot-runs-locally-on-rtx-ai-pcs\/","title":{"rendered":"Il chatbot personalizzato di NVIDIA viene eseguito localmente sui PC RTX AI"},"content":{"rendered":"<p><strong>NVIDIA ha rilasciato Chat with RTX come dimostrazione tecnica di come i chatbot AI possano essere eseguiti localmente su PC Windows utilizzando le sue GPU RTX.<\/strong><\/p>\n<p>L'approccio standard all'uso di un chatbot AI consiste nell'utilizzare una piattaforma web come ChatGPT o nell'eseguire query tramite un'API, con l'inferenza che avviene su server di cloud computing. Gli svantaggi sono i costi, la latenza e i problemi di privacy legati al trasferimento di dati personali o aziendali.<\/p>\n<p><a href=\"https:\/\/dailyai.com\/it\/2024\/01\/nvidia-announces-new-chips-and-tools-for-on-device-ai\/\">RTX di NVIDIA<\/a> La gamma di GPU di cui disponiamo rende ora possibile l'esecuzione di un LLM in locale sul proprio PC Windows, anche se non si \u00e8 connessi a Internet.<\/p>\n<p>Chat with RTX permette agli utenti di creare un chatbot personalizzato utilizzando <a href=\"https:\/\/dailyai.com\/it\/2023\/12\/the-rise-of-the-french-ai-startup-mistral\/\">Maestrale<\/a> o <a href=\"https:\/\/dailyai.com\/it\/2023\/07\/llama-2-to-run-on-your-device-without-the-internet-by-2024\/\">Lama 2<\/a>. Utilizza la generazione aumentata del reperimento (RAG) e il software TensorRT-LLM di NVIDIA che ottimizza l'inferenza.<\/p>\n<p>\u00c8 possibile indirizzare Chat with RTX a una cartella del PC e poi porgli domande relative ai file contenuti nella cartella. 
It supports various file formats, including .txt, .pdf, .doc\/.docx, and .xml.<\/p>\n<p>Because the LLM analyzes locally stored files and inference happens on your machine, it is very fast, and none of your data is shared over potentially unsecured networks.<\/p>\n<p>You can also give it the URL of a YouTube video and ask questions about the video. That requires internet access, but it\u2019s a great way to get answers without having to watch a long video.<\/p>\n<p>You can <a href=\"https:\/\/www.nvidia.com\/en-us\/ai-on-rtx\/chat-with-rtx-generative-ai\/\" target=\"_blank\" rel=\"noopener\">download Chat with RTX<\/a> for free, but your PC needs to run Windows 10 or 11 and have a GeForce RTX 30 Series or higher GPU with at least 8 GB of VRAM.<\/p>\n<p>Chat with RTX is a demo rather than a finished product. It\u2019s a little buggy and doesn\u2019t remember context, so you can\u2019t ask it follow-up questions. But it\u2019s a nice example of how we\u2019ll use LLMs in the future.<\/p>\n<p>Using an AI chatbot locally, with zero API call costs and minimal latency, is probably how most users will end up interacting with LLMs. 
The open-source approach taken by companies like Meta means on-device AI will drive adoption of their free models rather than proprietary ones like GPT.<\/p>\n<p>That said, mobile and laptop users will have to wait a while longer before the computing power of an RTX GPU can fit into smaller devices.<\/p>\n<p><iframe loading=\"lazy\" title=\"Build a custom AI chatbot with Chat With RTX\" width=\"1080\" height=\"608\" src=\"https:\/\/www.youtube.com\/embed\/gdsRJZT3IJw?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe><\/p>","protected":false},"excerpt":{"rendered":"<p>NVIDIA has released Chat with RTX as a tech demo of how AI chatbots can run locally on Windows PCs using its RTX GPUs. The standard approach to using an AI chatbot is to use a web platform like ChatGPT or to run queries via an API, with inference taking place on cloud computing servers. The drawbacks of this are the costs, latency, and privacy concerns around transferring personal or corporate data back and forth. 
NVIDIA\u2019s RTX range of GPUs now makes it possible to run an LLM locally on your Windows PC, even if you\u2019re not connected.<\/p>","protected":false},"author":6,"featured_media":10021,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[83],"tags":[118,106],"class_list":["post-10017","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-product","tag-llms","tag-nvidia"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.5 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>NVIDIA\u2019s custom chatbot runs locally on RTX AI PCs | DailyAI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/it\/2024\/02\/nvidias-custom-chatbot-runs-locally-on-rtx-ai-pcs\/\" \/>\n<meta property=\"og:locale\" content=\"it_IT\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"NVIDIA\u2019s custom chatbot runs locally on RTX AI PCs | DailyAI\" \/>\n<meta property=\"og:description\" content=\"NVIDIA has released Chat with RTX as a tech demo of how AI chatbots can be run locally on Windows PCs using its RTX GPUs. The standard approach of using an AI chatbot is to use a web platform like ChatGPT or to run queries via an API, with inference taking place on cloud computing servers. The drawbacks of this are the costs, latency, and privacy concerns with personal or corporate data transferring back and forth. 
NVIDIA\u2019s RTX range of GPUs is now making it possible to run an LLM locally on your Windows PC even if you\u2019re not connected\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/it\/2024\/02\/nvidias-custom-chatbot-runs-locally-on-rtx-ai-pcs\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2024-02-14T10:06:29+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/02\/NVIDIA-GEFORCE-RTX.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1000\" \/>\n\t<meta property=\"og:image:height\" content=\"667\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Eugene van der Watt\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Eugene van der Watt\" \/>\n\t<meta name=\"twitter:label2\" content=\"Estimated reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"2 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/nvidias-custom-chatbot-runs-locally-on-rtx-ai-pcs\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/nvidias-custom-chatbot-runs-locally-on-rtx-ai-pcs\\\/\"},\"author\":{\"name\":\"Eugene van der Watt\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\"},\"headline\":\"NVIDIA\u2019s custom chatbot runs locally on RTX AI 
PCs\",\"datePublished\":\"2024-02-14T10:06:29+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/nvidias-custom-chatbot-runs-locally-on-rtx-ai-pcs\\\/\"},\"wordCount\":406,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/nvidias-custom-chatbot-runs-locally-on-rtx-ai-pcs\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/02\\\/NVIDIA-GEFORCE-RTX.jpg\",\"keywords\":[\"LLMS\",\"Nvidia\"],\"articleSection\":[\"Product\"],\"inLanguage\":\"it-IT\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/nvidias-custom-chatbot-runs-locally-on-rtx-ai-pcs\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/nvidias-custom-chatbot-runs-locally-on-rtx-ai-pcs\\\/\",\"name\":\"NVIDIA\u2019s custom chatbot runs locally on RTX AI PCs | DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/nvidias-custom-chatbot-runs-locally-on-rtx-ai-pcs\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/nvidias-custom-chatbot-runs-locally-on-rtx-ai-pcs\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/02\\\/NVIDIA-GEFORCE-RTX.jpg\",\"datePublished\":\"2024-02-14T10:06:29+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/nvidias-custom-chatbot-runs-locally-on-rtx-ai-pcs\\\/#breadcrumb\"},\"inLanguage\":\"it-IT\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/nvidias-custom-chatbot-runs-locally-on-rtx-ai-pcs\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"it-IT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/nvidias-custom-chatbot-runs-locally-on-rtx-ai-pcs\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/upload
s\\\/2024\\\/02\\\/NVIDIA-GEFORCE-RTX.jpg\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/02\\\/NVIDIA-GEFORCE-RTX.jpg\",\"width\":1000,\"height\":667},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/nvidias-custom-chatbot-runs-locally-on-rtx-ai-pcs\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"NVIDIA\u2019s custom chatbot runs locally on RTX AI PCs\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"it-IT\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"it-IT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.co
m\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\",\"name\":\"Eugene van der Watt\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"it-IT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"caption\":\"Eugene van der Watt\"},\"description\":\"Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.\",\"sameAs\":[\"www.linkedin.com\\\/in\\\/eugene-van-der-watt-16828119\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/it\\\/author\\\/eugene\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","_links":{"self":[{"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/posts\/10017","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/comments?post=10017"}],"version-history":[{"count":2,"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/posts\/10017\/revisions"}],"predecessor-version":[{"id":10022,"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/posts\/10017\/revisions\/10022"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/media\/10021"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/media?parent=10017"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/categories?post=10017"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/tags?post=10017"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}