
Just a sample of the Echomail archive.

Cooperative anarchy at its finest, still active today. Darkrealms is the Zone 1 Hub.

   CONSPRCY      How big is your tinfoil hat?      2,445 messages   


   Message 830 of 2,445   
   Mike Powell to All   
   Jailbreaking AI chatbots   
   20 Mar 25 09:12:00   
   
   TZUTC: -0500   
   MSGID: 546.consprcy@1:2320/105 2c415d32   
   PID: Synchronet 3.20a-Linux master/acc19483f Apr 26 2024 GCC 12.2.0   
   TID: SBBSecho 3.20-Linux master/acc19483f Apr 26 2024 23:04 GCC 12.2.0   
   BBSID: CAPCITY2   
   CHRS: ASCII 1   
   Not even fairy tales are safe - researchers weaponise bedtime stories to   
   jailbreak AI chatbots and create malware   
      
   Date:   
   Wed, 19 Mar 2025 14:36:56 +0000   
      
   Description:   
   Cato CTRL researchers can jailbreak LLMs with no prior malware coding   
   experience.   
      
   FULL STORY   
   ======================================================================   
    - Security researchers have developed a new technique to jailbreak AI   
   chatbots   
    - The technique required no prior malware coding knowledge   
    - This involved creating a fake scenario to convince the model to craft an   
   attack   
      
   Despite having no previous experience in malware coding, Cato CTRL threat   
   intelligence researchers have warned they were able to jailbreak multiple   
   LLMs, including ChatGPT-4o, DeepSeek-R1, DeepSeek-V3, and Microsoft Copilot,   
   using a rather fantastical technique.    
      
   The team developed Immersive World, a technique that uses narrative   
   engineering to bypass LLM security controls: by building a detailed   
   fictional world in which restricted operations are normalized, they   
   coaxed the models into producing a "fully effective" Chrome infostealer.   
   Chrome is the most popular browser in the world, with over 3 billion   
   users, underscoring the scale of the risk this attack poses.   
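      
   The article does not publish Cato CTRL's prompts or tooling, but   
   red-team evaluations of this kind are typically driven by a small   
   harness that replays scenario prompts against a model and records   
   whether it refuses. A minimal Python sketch of such a loop follows;   
   the prompt file, the keyword-based refusal heuristic, and the "gpt-4o"   
   model name are illustrative assumptions, and no adversarial prompt   
   text is included.   
      
      import json
      from openai import OpenAI

      client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

      # Hypothetical file of scenario prompts; contents deliberately omitted.
      PROMPTS_FILE = "scenario_probes.json"

      # Crude keyword heuristic; real evaluations use human review or a
      # judge model rather than substring matching.
      REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "not able to help")

      def looks_like_refusal(text: str) -> bool:
          lowered = text.lower()
          return any(marker in lowered for marker in REFUSAL_MARKERS)

      def run_probes(model: str = "gpt-4o") -> None:
          with open(PROMPTS_FILE) as f:
              probes = json.load(f)  # expected: a JSON list of prompt strings
          for probe in probes:
              reply = client.chat.completions.create(
                  model=model,
                  messages=[{"role": "user", "content": probe}],
              ).choices[0].message.content or ""
              verdict = "refused" if looks_like_refusal(reply) else "complied"
              print(f"{verdict}: {probe[:60]}")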
      
   Infostealer malware is on the rise, and is rapidly becoming one of the   
   most dangerous tools in a cybercriminal's arsenal. This attack shows   
   that the barrier to entry is now significantly lower: cybercriminals   
   need no prior experience in creating malicious code.   
      
   AI for attackers    
      
   LLMs have fundamentally altered the cybersecurity landscape, the report   
   claims, and research has shown that AI-powered cyber threats are   
   becoming a much more serious concern for security teams and businesses:   
   they allow criminals to craft more sophisticated attacks with less   
   experience and at a higher frequency.    
      
   Chatbots have many guardrails and safety policies, but since AI models   
   are designed to be as helpful and compliant as possible, researchers   
   have been able to jailbreak them with relative ease, including   
   persuading AI agents to write and send phishing attacks.    
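      
   One common form of guardrail is a moderation layer wrapped around the   
   model: the prompt is screened before it reaches the chat model, and the   
   reply is screened again on the way out. Below is a minimal Python   
   sketch of such a wrapper using the OpenAI SDK's moderation endpoint;   
   the "gpt-4o" model name, the system prompt, and the refusal strings are   
   illustrative assumptions. A narrative jailbreak succeeds precisely   
   because each turn of an immersive story can look benign to checks like   
   these.   
      
      from openai import OpenAI

      client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

      def guarded_reply(user_text: str) -> str:
          # Screen the incoming prompt before it reaches the chat model.
          mod_in = client.moderations.create(
              model="omni-moderation-latest", input=user_text
          )
          if mod_in.results[0].flagged:
              return "Request declined by input filter."

          chat = client.chat.completions.create(
              model="gpt-4o",
              messages=[
                  {"role": "system",
                   "content": "Refuse requests for malicious code."},
                  {"role": "user", "content": user_text},
              ],
          )
          answer = chat.choices[0].message.content or ""

          # Screen the model's answer on the way out as well.
          mod_out = client.moderations.create(
              model="omni-moderation-latest", input=answer
          )
          if mod_out.results[0].flagged:
              return "Response withheld by output filter."
          return answer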
      
   "We believe the rise of the zero-knowledge threat actor poses high risk   
   to organizations because the barrier to creating malware is now   
   substantially lowered with GenAI tools," said Vitaly Simonovich, threat   
   intelligence researcher at Cato Networks.    
      
   "Infostealers play a significant role in credential theft by enabling   
   threat actors to breach enterprises. Our new LLM jailbreak technique,   
   which we've uncovered and called Immersive World, showcases the   
   dangerous potential of creating an infostealer with ease."   
      
   ======================================================================   
   Link to news story:   
   https://www.techradar.com/pro/security/ai-chatbots-jailbroken-to-create-a-chrome-infostealer   
      
   $$   
   --- SBBSecho 3.20-Linux   
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)   
   SEEN-BY: 105/81 106/201 128/187 129/305 153/7715 154/110 218/700 226/30   
   SEEN-BY: 227/114 229/110 111 114 206 300 307 317 400 426 428 470 664   
   SEEN-BY: 229/700 705 266/512 291/111 320/219 322/757 342/200 396/45   
   SEEN-BY: 460/58 712/848 902/26 2320/0 105 3634/12 5075/35   
   PATH: 2320/105 229/426   
      


