Just a sample of the Echomail archive
Cooperative anarchy at its finest, still active today. Darkrealms is the Zone 1 Hub.
|    CONSPRCY    |    How big is your tinfoil hat?    |    2,445 messages    |
|    Message 363 of 2,445    |
|    Mike Powell to All    |
|    DeepSeek 11 times dangero    |
|    06 Feb 25 10:34:00    |
TZUTC: -0500
MSGID: 68.consprcy@1:2320/105 2c0a0d67
PID: Synchronet 3.20a-Linux master/acc19483f Apr 26 2024 GCC 12.2.0
TID: SBBSecho 3.20-Linux master/acc19483f Apr 26 2024 23:04 GCC 12.2.0
BBSID: CAPCITY2
CHRS: ASCII 1

Experts warn DeepSeek is 11 times more dangerous than other AI chatbots

Date:
Wed, 05 Feb 2025 16:43:59 +0000

Description:
Another study has found major security and safety risks with the new
Chinese AI chatbot. Here's all you need to know.

FULL STORY
======================================================================

DeepSeek's R1 AI is 11 times more likely to be exploited by cybercriminals
than other AI models, whether by producing harmful content or by being
vulnerable to manipulation.

This is a worrying finding from new research conducted by Enkrypt AI, an
AI security and compliance platform. The security warning adds to the
ongoing concerns following last week's data breach that exposed over one
million records.

China-developed DeepSeek has sent shockwaves throughout the AI world since
its January 20 release. About 12 million curious users worldwide
downloaded the new AI chatbot in the space of two days, marking even
faster growth than ChatGPT's. Widespread privacy and security concerns
have, however, prompted quite a few countries to either begin
investigating or banning, in some capacity, the new tool.

Harmful content, malware and manipulation

The team at Enkrypt AI performed a series of tests to evaluate DeepSeek's
security vulnerabilities, such as malware, data breaches, and injection
attacks, as well as its ethical risks.

The investigation found the ChatGPT rival "to be highly biased and
susceptible to generating insecure code," the experts noted, and that
DeepSeek's model is vulnerable to third-party manipulation, allowing
criminals to use it for developing chemical, biological, and cybersecurity
weapons.

Nearly half of the tests conducted (45%) bypassed the safety protocols in
place, generating criminal planning guides, illegal weapons information,
and terrorist propaganda.

Worse still, 78% of the cybersecurity checks successfully tricked
DeepSeek-R1 into generating insecure or malicious code, including malware,
trojans, and other exploits. Overall, the experts found the model to be
4.5 times more likely than its OpenAI counterpart to be manipulated by
cybercriminals into creating dangerous hacking tools.

"Our research findings reveal major security and safety gaps that cannot
be ignored," said Sahil Agarwal, CEO of Enkrypt AI, commenting on the
findings. "Robust safeguards including guardrails and continuous
monitoring are essential to prevent harmful misuse."

Are Distilled DeepSeek Models Less Safe? Early Signs Point to Yes. Our
latest findings confirm a concerning trend: distilled AI models are more
vulnerable: easier to jailbreak, exploit, and manipulate.

Read the Paper: https://t.co/nzdcR82J8M

Key Takeaways

As mentioned earlier, at the time of writing DeepSeek is under scrutiny in
many countries worldwide.
While Italy was the first to launch an investigation into its privacy and
security last week, many EU members have followed suit so far. These
include France, the Netherlands, Luxembourg, Germany, and Portugal.

Some of China's neighboring countries are getting worried, too. Taiwan,
for example, has banned all government agencies from using DeepSeek AI.
Meanwhile, South Korea initiated a probe into the service provider's data
practices.

Unsurprisingly, the US is also taking aim at its new AI competitor. As
NASA blocked DeepSeek usage on federal devices (CNBC reported on Friday,
January 31, 2025), a proposed law could now outright ban the use of
DeepSeek for all Americans, who could risk million-dollar fines and even
prison time for using the platform in the country.

All in all, Agarwal from Enkrypt AI said: "As the AI arms race between the
US and China intensifies, both nations are pushing the boundaries of
next-generation AI for military, economic, and technological supremacy.

"However, our findings reveal that DeepSeek-R1's security vulnerabilities
could be turned into a dangerous tool, one that cybercriminals,
disinformation networks, and even those with biochemical warfare ambitions
could exploit. These risks demand immediate attention."

======================================================================
Link to news story:
https://www.techradar.com/vpn/experts-warn-deepseek-is-11-times-more-dangerous-than-other-ai-chatbots

$$
--- SBBSecho 3.20-Linux
 * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)
SEEN-BY: 105/81 106/201 128/187 129/305 153/7715 154/110 218/700 226/30
SEEN-BY: 227/114 229/110 111 114 206 300 307 317 400 426 428 470 664
SEEN-BY: 229/700 705 266/512 291/111 320/219 322/757 342/200 396/45
SEEN-BY: 460/58 712/848 902/26 2320/0 105 3634/12 5075/35
PATH: 2320/105 229/426
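For anyone new to raw echomail: the TZUTC/MSGID/PID/TID/BBSID/CHRS lines at
the top of the message are FidoNet kludge lines, while SEEN-BY and PATH at
the bottom record which nodes have already received the message and the
route it took. Below is a minimal Python sketch of splitting a captured
message like this one into those pieces. The function name parse_echomail
is just illustrative and is not part of Synchronet or SBBSecho; real kludge
lines normally begin with ASCII 0x01 (SOH), which a web capture strips, so
this version keys on the visible prefixes instead.

# Minimal sketch: split a captured echomail message into kludge lines,
# routing control lines, and body text.

KLUDGE_PREFIXES = ("TZUTC:", "MSGID:", "PID:", "TID:", "BBSID:", "CHRS:")
CONTROL_PREFIXES = ("SEEN-BY:", "PATH:")

def parse_echomail(raw: str) -> dict:
    """Return the kludges, routing info, and body of one captured message."""
    kludges, routing, body = {}, {}, []
    for line in raw.splitlines():
        stripped = line.strip()
        if stripped.startswith(KLUDGE_PREFIXES):
            key, _, value = stripped.partition(":")
            kludges[key] = value.strip()
        elif stripped.startswith(CONTROL_PREFIXES):
            key, _, value = stripped.partition(":")
            # SEEN-BY usually spans several lines; accumulate the node lists.
            routing.setdefault(key, []).extend(value.split())
        else:
            body.append(line)
    return {"kludges": kludges, "routing": routing, "body": "\n".join(body)}

# e.g. parse_echomail(raw_text)["kludges"]["MSGID"]
#      -> "68.consprcy@1:2320/105 2c0a0d67"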
(c) 1994, bbs@darkrealms.ca