Dolphin Mixtral: A powerful open-source uncensored AI model
By Eugene van der Watt, DailyAI

French AI startup Mistral released its open-source Mixture of Experts model, Mixtral 8x7B, last week. An AI researcher has since released a version of the model with its alignment completely removed. There has been much debate over open-source models, but there is broad consensus that AI models should be aligned, that is, prevented from generating harmful outputs.

AI and ML researcher Eric Hartford thinks there are good arguments for unaligned and uncensored models. Hartford trained the base model Mixtral 8x7B on a dataset with all alignment stripped out and released dolphin-2.5-mixtral-8x7b. If you ask ChatGPT or Llama for advice on