21st May 2024, 07:41 | #1
[M]
Reviewer
Join Date: May 2010
Location: Romania
Posts: 153,575
UK's AI Safety Institute easily jailbreaks major LLMs

In a shocking turn of events, AI systems might not be as safe as their creators make them out to be (who saw that coming, right?). In a new report, the UK government's AI Safety Institute (AISI) found that the four undisclosed LLMs it tested were "highly vulnerable to basic jailbreaks." Some models even generated "harmful outputs" without researchers attempting to jailbreak them.

https://www.engadget.com/uks-ai-safe...9.html?src=rss