{"id":3964,"date":"2023-08-09T06:56:16","date_gmt":"2023-08-09T06:56:16","guid":{"rendered":"https:\/\/dailyai.com\/?p=3964"},"modified":"2023-08-09T06:56:16","modified_gmt":"2023-08-09T06:56:16","slug":"we-want-unbiased-llms-but-its-impossible-heres-why","status":"publish","type":"post","link":"https:\/\/dailyai.com\/de\/2023\/08\/we-want-unbiased-llms-but-its-impossible-heres-why\/","title":{"rendered":"We want unbiased LLMs but it\u2019s impossible. Here\u2019s why."},"content":{"rendered":"<p><strong>Companies like OpenAI and Meta are working hard to make their language models safer and less biased, but completely unbiased models may be a pipedream.<\/strong><\/p>\n<p><span style=\"font-weight: 400;\">A <\/span><a href=\"https:\/\/aclanthology.org\/2023.acl-long.656.pdf\"><span style=\"font-weight: 400;\">new research paper<\/span><\/a><span style=\"font-weight: 400;\"> from the University of Washington, Carnegie Mellon University, and Xi'an Jiaotong University concluded that all the AI language models they tested displayed political bias.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">After delving into the sources of the bias, they concluded that bias in language models is inevitable.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Chan Park, one of the paper\u2019s authors, said: \"We believe no language model can be entirely free from political biases.\"<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The researchers tested 14 different language models and asked for their opinions on topics such as democracy, racism, and feminism to see where each model sits on the political spectrum.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The results showed that OpenAI\u2019s ChatGPT and GPT-4 were the most left-leaning, while Meta\u2019s Llama gave the most right-leaning responses.<\/span><\/p>\n<h2><span style=\"font-weight: 400;\">Training data isn\u2019t the only source of bias<\/span><\/h2>\n<p><span style=\"font-weight: 400;\">The obvious source of <a href=\"https:\/\/dailyai.com\/de\/2023\/07\/unmasking-the-deep-seated-biases-in-ai-systems\/\">bias<\/a> is the data these models are trained on. But the new research showed that even after the data had been cleaned of bias, the models remained susceptible to the slight biases left behind in the data.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">You would expect an LLM trained on Fox News data to lean pro-Republican in its responses. But the problem doesn\u2019t lie in the training data alone.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">It turns out that pre-trained language models pick up further biases from their operators during fine-tuning and use.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Soroush Vosoughi, assistant professor of computer science at Dartmouth College, explained that bias is introduced at almost every stage of an LLM\u2019s development.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">One example is how OpenAI tries to remove bias from its models. It uses a technique called \"Reinforcement Learning from Human Feedback\" (RLHF) to train them.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">In RLHF, a human operator trains the model much like you would a puppy. If the puppy does something good, it gets a treat.
If it chews on your slippers: \"Bad dog!\"<\/span><\/p>\n<p><span style=\"font-weight: 400;\">One RLHF operator asks the model some questions, and a second operator then evaluates the model\u2019s different answers and ranks them by which ones it liked best.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">In a <\/span><a href=\"https:\/\/openai.com\/blog\/how-should-ai-systems-behave\"><span style=\"font-weight: 400;\">post about how it trains its AI<\/span><\/a><span style=\"font-weight: 400;\">, OpenAI says it instructs its human trainers \"not to take a position on controversial topics\" and that \"reviewers should not favor any political group\".<\/span><\/p>\n<p><span style=\"font-weight: 400;\">That sounds like a good idea, but even when we genuinely try not to be biased, all humans are biased. And that inevitably influences the model\u2019s training.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Even the authors of the paper mentioned above acknowledged in their conclusion that their own biases could have influenced their research.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The solution may lie in trying to make these language models not egregiously bad, and then fine-tuning them to align with people\u2019s biases.
<\/span><\/p>\n<p><span style=\"font-weight: 400;\">People often say they want the unbiased truth, but then they stick to their favorite news source like Fox or CNN.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">We don\u2019t always agree on what is right or wrong, and this new research seems to show that AI can\u2019t help us figure it out either.<\/span><\/p>","protected":false},"excerpt":{"rendered":"<p>Companies like OpenAI and Meta are working hard to make their language models safer and less biased, but completely unbiased models may be a pipedream. A new research paper from the University of Washington, Carnegie Mellon University, and Xi'an Jiaotong University concludes that all the AI language models tested display political bias. After delving into the sources of these biases, they concluded that bias in language models is inevitable. Chan Park, one of the paper\u2019s authors, said: \"We believe no language model can be entirely free from political biases.\" The researchers tested 14 different language models and asked them<\/p>","protected":false},"author":6,"featured_media":3979,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[88],"tags":[103,213,207,105,91],"class_list":["post-3964","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ethics","tag-ai-debate","tag-bias","tag-llm","tag-machine-learning","tag-policy"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>We want unbiased LLMs but it\u2019s impossible. Here\u2019s why. 
| DailyAI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/de\/2023\/08\/we-want-unbiased-llms-but-its-impossible-heres-why\/\" \/>\n<meta property=\"og:locale\" content=\"de_DE\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"We want unbiased LLMs but it\u2019s impossible. Here\u2019s why. | DailyAI\" \/>\n<meta property=\"og:description\" content=\"Companies like OpenAI and Meta are working hard to make their language models safer and less biased, but completely unbiased models may be a pipedream. A new research paper from the University of Washington, Carnegie Mellon University, and Xi\u2019an Jiaotong University concluded that all the AI language models they tested displayed political bias. After delving into the sources of the bias, they concluded that bias in language models was inevitable. Chan Park, one of the paper\u2019s authors, said \u201cWe believe no language model can be entirely free from political biases.\u201d The researchers tested 14 different language models and asked them\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/de\/2023\/08\/we-want-unbiased-llms-but-its-impossible-heres-why\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2023-08-09T06:56:16+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/08\/Bias-in-AI-models.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1000\" \/>\n\t<meta property=\"og:image:height\" content=\"666\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Eugene van der Watt\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" 
content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Verfasst von\" \/>\n\t<meta name=\"twitter:data1\" content=\"Eugene van der Watt\" \/>\n\t<meta name=\"twitter:label2\" content=\"Gesch\u00e4tzte Lesezeit\" \/>\n\t<meta name=\"twitter:data2\" content=\"3\u00a0Minuten\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/08\\\/we-want-unbiased-llms-but-its-impossible-heres-why\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/08\\\/we-want-unbiased-llms-but-its-impossible-heres-why\\\/\"},\"author\":{\"name\":\"Eugene van der Watt\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\"},\"headline\":\"We want unbiased LLMs but it\u2019s impossible. Here\u2019s why.\",\"datePublished\":\"2023-08-09T06:56:16+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/08\\\/we-want-unbiased-llms-but-its-impossible-heres-why\\\/\"},\"wordCount\":540,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/08\\\/we-want-unbiased-llms-but-its-impossible-heres-why\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/08\\\/Bias-in-AI-models.jpg\",\"keywords\":[\"AI debate\",\"Bias\",\"LLM\",\"machine learning\",\"Policy\"],\"articleSection\":[\"Ethics &amp; Society\"],\"inLanguage\":\"de\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/08\\\/we-want-unbiased-llms-but-its-impossible-heres-why\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/08\\\/we-want-unbiased-llms-but-its-impossible-heres-why\\\/\",\"name\":\"We want unbiased LLMs but it\u2019s impossible. Here\u2019s why. 
| DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/08\\\/we-want-unbiased-llms-but-its-impossible-heres-why\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/08\\\/we-want-unbiased-llms-but-its-impossible-heres-why\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/08\\\/Bias-in-AI-models.jpg\",\"datePublished\":\"2023-08-09T06:56:16+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/08\\\/we-want-unbiased-llms-but-its-impossible-heres-why\\\/#breadcrumb\"},\"inLanguage\":\"de\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2023\\\/08\\\/we-want-unbiased-llms-but-its-impossible-heres-why\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"de\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/08\\\/we-want-unbiased-llms-but-its-impossible-heres-why\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/08\\\/Bias-in-AI-models.jpg\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/08\\\/Bias-in-AI-models.jpg\",\"width\":1000,\"height\":666,\"caption\":\"Bias in AI models\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/08\\\/we-want-unbiased-llms-but-its-impossible-heres-why\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"We want unbiased LLMs but it\u2019s impossible. 
Here\u2019s why.\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"de\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"de\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\",\"name\":\"Eugene van der Watt\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"de\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"caption\":\"Eugene van der 
Watt\"},\"description\":\"Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.\",\"sameAs\":[\"www.linkedin.com\\\/in\\\/eugene-van-der-watt-16828119\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/de\\\/author\\\/eugene\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"We want unbiased LLMs but it\u2019s impossible. Here\u2019s why. | DailyAI","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/de\/2023\/08\/we-want-unbiased-llms-but-its-impossible-heres-why\/","og_locale":"de_DE","og_type":"article","og_title":"We want unbiased LLMs but it\u2019s impossible. Here\u2019s why. | DailyAI","og_description":"Companies like OpenAI and Meta are working hard to make their language models safer and less biased, but completely unbiased models may be a pipedream. A new research paper from the University of Washington, Carnegie Mellon University, and Xi\u2019an Jiaotong University concluded that all the AI language models they tested displayed political bias. After delving into the sources of the bias, they concluded that bias in language models was inevitable. 
Chan Park, one of the paper\u2019s authors, said \u201cWe believe no language model can be entirely free from political biases.\u201d The researchers tested 14 different language models and asked them","og_url":"https:\/\/dailyai.com\/de\/2023\/08\/we-want-unbiased-llms-but-its-impossible-heres-why\/","og_site_name":"DailyAI","article_published_time":"2023-08-09T06:56:16+00:00","og_image":[{"width":1000,"height":666,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/08\/Bias-in-AI-models.jpg","type":"image\/jpeg"}],"author":"Eugene van der Watt","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"Verfasst von":"Eugene van der Watt","Gesch\u00e4tzte Lesezeit":"3\u00a0Minuten"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2023\/08\/we-want-unbiased-llms-but-its-impossible-heres-why\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2023\/08\/we-want-unbiased-llms-but-its-impossible-heres-why\/"},"author":{"name":"Eugene van der Watt","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa"},"headline":"We want unbiased LLMs but it\u2019s impossible. 
Here\u2019s why.","datePublished":"2023-08-09T06:56:16+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2023\/08\/we-want-unbiased-llms-but-its-impossible-heres-why\/"},"wordCount":540,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2023\/08\/we-want-unbiased-llms-but-its-impossible-heres-why\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/08\/Bias-in-AI-models.jpg","keywords":["AI debate","Bias","LLM","machine learning","Policy"],"articleSection":["Ethics &amp; Society"],"inLanguage":"de"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2023\/08\/we-want-unbiased-llms-but-its-impossible-heres-why\/","url":"https:\/\/dailyai.com\/2023\/08\/we-want-unbiased-llms-but-its-impossible-heres-why\/","name":"We want unbiased LLMs but it\u2019s impossible. Here\u2019s why. | DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2023\/08\/we-want-unbiased-llms-but-its-impossible-heres-why\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2023\/08\/we-want-unbiased-llms-but-its-impossible-heres-why\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/08\/Bias-in-AI-models.jpg","datePublished":"2023-08-09T06:56:16+00:00","breadcrumb":{"@id":"https:\/\/dailyai.com\/2023\/08\/we-want-unbiased-llms-but-its-impossible-heres-why\/#breadcrumb"},"inLanguage":"de","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2023\/08\/we-want-unbiased-llms-but-its-impossible-heres-why\/"]}]},{"@type":"ImageObject","inLanguage":"de","@id":"https:\/\/dailyai.com\/2023\/08\/we-want-unbiased-llms-but-its-impossible-heres-why\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/08\/Bias-in-AI-models.jpg","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/08\/Bias-in-AI-models.jpg","width":1000,"height":666,"caption":"Bias in AI 
models"},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2023\/08\/we-want-unbiased-llms-but-its-impossible-heres-why\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"We want unbiased LLMs but it\u2019s impossible. Here\u2019s why."}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DailyAI","description":"Your Daily Dose of AI News","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"de"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DailyAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"de","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa","name":"Eugene van der Watt","image":{"@type":"ImageObject","inLanguage":"de","@id":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","caption":"Eugene van 
der Watt"},"description":"Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.","sameAs":["www.linkedin.com\/in\/eugene-van-der-watt-16828119"],"url":"https:\/\/dailyai.com\/de\/author\/eugene\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/posts\/3964","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/comments?post=3964"}],"version-history":[{"count":5,"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/posts\/3964\/revisions"}],"predecessor-version":[{"id":3983,"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/posts\/3964\/revisions\/3983"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/media\/3979"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/media?parent=3964"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/categories?post=3964"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/tags?post=3964"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}