
Just a sample of the Echomail archive

Cooperative anarchy at its finest, still active today. Darkrealms is the Zone 1 Hub.

   CONSPRCY      How big is your tinfoil hat?      2,445 messages   


   Message 346 of 2,445   
   Mike Powell to All   
   DeepSeek incredibly vulnerable to attacks, research claims   
   04 Feb 25 10:14:00   
   
   TZUTC: -0500   
   MSGID: 51.consprcy@1:2320/105 2c076736   
   PID: Synchronet 3.20a-Linux master/acc19483f Apr 26 2024 GCC 12.2.0   
   TID: SBBSecho 3.20-Linux master/acc19483f Apr 26 2024 23:04 GCC 12.2.0   
   BBSID: CAPCITY2   
   CHRS: ASCII 1   
   DeepSeek incredibly vulnerable to attacks, research claims   
      
   Date:   
   Mon, 03 Feb 2025 17:39:14 +0000   
      
   Description:   
   Security researchers have tested DeepSeek's R1 model - and made some    
   disturbing discoveries.   
      
   FULL STORY   
      
   The new AI on the scene, DeepSeek, has been tested for vulnerabilities, and   
   the findings are alarming.    
      
   A new Cisco report claims DeepSeek R1 exhibited a 100% attack success rate,   
   and failed to block a single harmful prompt.    
      
   DeepSeek has taken the world by storm as a high-performing chatbot developed   
   for a fraction of the price of its rivals, but the model has already suffered   
   a security breach, with over a million records and critical databases   
   reportedly left exposed. Here's everything you need to know about the failures   
   of the Large Language Model DeepSeek R1 in Cisco's testing.   
      
   Harmful prompts    
      
   The testing from Cisco used 50 random prompts from the HarmBench dataset,   
   covering six categories of harmful behaviors, including cybercrime, illegal   
   activities, chemical and biological prompts, misinformation/disinformation,   
   and general harm.    
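   
   For a sense of the mechanics, here is a minimal sketch of such an   
   evaluation loop in Python. The endpoint URL, model name, and judge()   
   classifier are all hypothetical stand-ins, not Cisco's actual harness:   
   
      import requests
      
      API_URL = "http://localhost:8000/v1/chat/completions"  # hypothetical
      MODEL = "deepseek-r1"                                  # model under test
      
      def judge(prompt: str, completion: str) -> bool:
          """Stand-in for HarmBench's classifier: True if the completion
          actually carries out the harmful request."""
          raise NotImplementedError  # swap in a real classifier
      
      def attack_success_rate(prompts: list[str]) -> float:
          successes = 0
          for prompt in prompts:
              resp = requests.post(API_URL, json={
                  "model": MODEL,
                  "messages": [{"role": "user", "content": prompt}],
              })
              completion = resp.json()["choices"][0]["message"]["content"]
              if judge(prompt, completion):
                  successes += 1
          return successes / len(prompts)  # 50 of 50 unblocked -> 1.0 (100%)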
      
   Using harmful prompts to get around an AI model's guidelines and usage    
   policies is also known as jailbreaking, and we've even written advice on    
   how it can be done. Since AI chatbots are specifically designed to be as   
   helpful to the user as possible, it's remarkably easy to do.    
      
   The R1 model failed to block a single harmful prompt, which demonstrates the   
   lack of guardrails the model has in place. This means DeepSeek is highly   
   susceptible to algorithmic jailbreaking and potential misuse.    
      
   DeepSeek underperforms in comparison to other models, which all reportedly   
   offered at least some resistance to harmful prompts. The model with the    
   lowest Attack Success Rate (ASR) was OpenAI's o1-preview, which had an ASR   
   of just 26%.    
      
   To compare, GPT-4o had a concerning 86% ASR and Llama 3.1 405B had an   
   equally alarming 96% ASR.    
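   
   (With Cisco's 50-prompt sample, ASR is simply successful attacks divided   
   by prompts attempted: o1-preview's 26% works out to about 13 of the 50   
   prompts getting through, against all 50 for DeepSeek R1.)   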
      
   "Our research underscores the urgent need for rigorous security evaluation   
   in AI development to ensure that breakthroughs in efficiency and reasoning    
   do not come at the cost of safety," Cisco said.   
      
   Staying safe when using AI    
      
   There are factors that should be considered if you want to use an AI chatbot.   
   For example, models like ChatGPT could be considered a bit of a privacy   
   nightmare, since they store the personal data of their users, and parent    
   company OpenAI has never asked people for consent to use their data - nor   
   is it possible for users to check which information has been stored.    
      
   Similarly, DeepSeek's privacy policy leaves a lot to be desired, as the   
   company could be collecting names, email addresses, all data inputted into    
   the platform, and technical information about users' devices.    
      
   Large Language Models scrape the internet for data; it's a fundamental part    
   of their makeup. So if you object to your information being used to train    
   the models, AI chatbots probably aren't for you.    
      
   To use a chatbot safely, you should be very wary of the risks. First and   
   foremost, always verify that the chatbot is legitimate - as malicious bots    
   can impersonate genuine services and steal your information or spread harmful   
   software onto your device.    
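   
   One concrete check you can script is confirming that the site presents a   
   valid TLS certificate for the domain you expect. A sketch using Python's   
   standard ssl module, with a hypothetical chatbot domain:   
   
      import socket
      import ssl
      
      def cert_subject(hostname: str, port: int = 443) -> dict:
          """Fetch the TLS certificate subject for a host; raises if the
          certificate chain fails verification."""
          ctx = ssl.create_default_context()  # verifies chain + hostname
          with socket.create_connection((hostname, port), timeout=5) as sock:
              with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
                  cert = tls.getpeercert()
                  return dict(item[0] for item in cert["subject"])
      
      print(cert_subject("chat.example.com"))  # hypothetical domain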
      
   Secondly, you should avoid sharing any personal information with a chatbot -   
   and be suspicious of any bot that asks for this. Never share your financial,   
   health, or login information with a chatbot - even if the chatbot is   
   legitimate, a cyberattack could lead to this data being stolen - putting you   
   at risk of identity theft or worse.    
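   
   One way to act on that advice is to strip obvious identifiers before a   
   prompt ever leaves your machine. A minimal Python sketch - regex matching   
   is enough for illustration, though real PII detection needs far more:   
   
      import re
      
      # Illustrative patterns only, not a complete PII detector.
      PATTERNS = {
          "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
          "phone": re.compile(r"\+?\d[\d\s()-]{7,}\d"),
      }
      
      def redact(text: str) -> str:
          """Mask obvious personal identifiers in outgoing text."""
          for label, pattern in PATTERNS.items():
              text = pattern.sub(f"[{label} removed]", text)
          return text
      
      print(redact("Reach me at jane@example.com or +1 555 010 9999"))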
      
   Good general practice for any application is to use a strong password,   
   and if you want some tips on how to make one, we've got some for you here.   
   Just as important is keeping your software regularly updated to ensure any   
   security flaws are patched as soon as possible, and monitoring your accounts   
   for any suspicious activity.   
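   
   If you'd rather generate a password than invent one, Python's standard   
   secrets module is made for exactly this; a minimal sketch:   
   
      import secrets
      import string
      
      ALPHABET = string.ascii_letters + string.digits + string.punctuation
      
      def strong_password(length: int = 20) -> str:
          """Build a random password from a cryptographically secure RNG."""
          return "".join(secrets.choice(ALPHABET) for _ in range(length))
      
      print(strong_password())  # different on every run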
      
   ======================================================================   
   Link to news story:   
   https://www.techradar.com/pro/security/deepseek-incredibly-vulnerable-to-attacks-research-claims   
      
   $$   
   --- SBBSecho 3.20-Linux   
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)   
   SEEN-BY: 105/81 106/201 128/187 129/305 153/7715 154/110 218/700 226/30   
   SEEN-BY: 227/114 229/110 111 114 206 300 307 317 400 426 428 470 664   
   SEEN-BY: 229/700 705 266/512 291/111 320/219 322/757 342/200 396/45   
   SEEN-BY: 460/58 712/848 902/26 2320/0 105 3634/12 5075/35   
   PATH: 2320/105 229/426   
      


