{"version":"1.0","provider_name":"DailyAI","provider_url":"https:\/\/dailyai.com\/it","author_name":"Eugene van der Watt","author_url":"https:\/\/dailyai.com\/it\/author\/eugene\/","title":"Researchers jailbreak GPT-4 using low-resource languages | DailyAI","type":"rich","width":600,"height":338,"html":"<blockquote class=\"wp-embedded-content\" data-secret=\"PXenAjqbyo\"><a href=\"https:\/\/dailyai.com\/it\/2023\/10\/researchers-jailbreak-gpt-4-using-low-resource-languages\/\">I ricercatori hanno effettuato il jailbreak del GPT-4 utilizzando linguaggi a basse risorse<\/a><\/blockquote><iframe sandbox=\"allow-scripts\" security=\"restricted\" src=\"https:\/\/dailyai.com\/it\/2023\/10\/researchers-jailbreak-gpt-4-using-low-resource-languages\/embed\/#?secret=PXenAjqbyo\" width=\"600\" height=\"338\" title=\"&quot;I ricercatori hanno effettuato il jailbreak del GPT-4 utilizzando linguaggi a basse risorse&quot; - DailyAI\" data-secret=\"PXenAjqbyo\" frameborder=\"0\" marginwidth=\"0\" marginheight=\"0\" scrolling=\"no\" class=\"wp-embedded-content\"><\/iframe><script>\n\/*! 
This file is auto-generated *\/\n!function(d,l){\"use strict\";l.querySelector&&d.addEventListener&&\"undefined\"!=typeof URL&&(d.wp=d.wp||{},d.wp.receiveEmbedMessage||(d.wp.receiveEmbedMessage=function(e){var t=e.data;if((t||t.secret||t.message||t.value)&&!\/[^a-zA-Z0-9]\/.test(t.secret)){for(var s,r,n,a=l.querySelectorAll('iframe[data-secret=\"'+t.secret+'\"]'),o=l.querySelectorAll('blockquote[data-secret=\"'+t.secret+'\"]'),c=new RegExp(\"^https?:$\",\"i\"),i=0;i<o.length;i++)o[i].style.display=\"none\";for(i=0;i<a.length;i++)s=a[i],e.source===s.contentWindow&&(s.removeAttribute(\"style\"),\"height\"===t.message?(1e3<(r=parseInt(t.value,10))?r=1e3:~~r<200&&(r=200),s.height=r):\"link\"===t.message&&(r=new URL(s.getAttribute(\"src\")),n=new URL(t.value),c.test(n.protocol))&&n.host===r.host&&l.activeElement===s&&(d.top.location.href=t.value))}},d.addEventListener(\"message\",d.wp.receiveEmbedMessage,!1),l.addEventListener(\"DOMContentLoaded\",function(){for(var e,t,s=l.querySelectorAll(\"iframe.wp-embedded-content\"),r=0;r<s.length;r++)(t=(e=s[r]).getAttribute(\"data-secret\"))||(t=Math.random().toString(36).substring(2,12),e.src+=\"#?secret=\"+t,e.setAttribute(\"data-secret\",t)),e.contentWindow.postMessage({message:\"ready\",secret:t},\"*\")},!1)))}(window,document);\n\/\/# sourceURL=https:\/\/dailyai.com\/wp-includes\/js\/wp-embed.min.js\n<\/script>","thumbnail_url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/10\/jailbreak-GPT-4-with-low-resource-languages.jpg","thumbnail_width":1000,"thumbnail_height":667,"description":"Using low-resource languages (LRL) like Zulu or Scots Gaelic can elicit unsafe responses from GPT-4 despite its alignment guardrails. Researchers from Brown University found that you don\u2019t need fancy jailbreaking techniques to get GPT-4 to misbehave. You just need to input your prompt in a language that isn\u2019t very well represented online. 
If you ask ChatGPT for help doing something illegal, its alignment guardrails kick in and it politely tells you why it can't assist with that. Red-teaming AI models is an ongoing process in which humans try to bypass these safety limits to identify areas that need