Just a sample of the Echomail archive
Cooperative anarchy at its finest, still active today. Darkrealms is the Zone 1 Hub.
|    CONSPRCY    |    How big is your tinfoil hat?    |    2,445 messages    |
[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]
|    Message 1,846 of 2,445    |
|    Mike Powell to All    |
|    How many malicious docs d    |
|    15 Oct 25 08:50:53    |
      TZUTC: -0500       MSGID: 1604.consprcy@1:2320/105 2d54e301       PID: Synchronet 3.21a-Linux master/123f2d28a Jul 12 2025 GCC 12.2.0       TID: SBBSecho 3.28-Linux master/123f2d28a Jul 12 2025 GCC 12.2.0       BBSID: CAPCITY2       CHRS: ASCII 1       FORMAT: flowed       How many malicious docs does it take to poison an LLM? Far fewer than you       might think, Anthropic warns              Date:       Tue, 14 Oct 2025 20:13:00 +0000              Description:       Anthropics study shows just 250 malicious documents is enough to poison       massive AI models.              FULL STORY              Large language models ( LLMs ) have become central to the development of       modern AI tools , powering everything from chatbots to data analysis systems.               But Anthropic has warned it would take just 250 malicious documents can        poison a models training data, and cause it to output gibberish when       triggered.               Working with the UK AI Security Institute and the Alan Turing Institute, the       company found that this small amount of corrupted data can disrupt models       regardless of their size.              The surprising efficiency of small-scale poisoning               Until now, many researchers believed that attackers needed control over a       large portion of training data to successfully manipulate a models behavior.               Anthropics experiment, however, showed that a constant number of malicious       samples can be just as effective as large-scale interference.               Therefore, AI poisoning may be far easier than previously believed, even when       the tainted data accounts for only a tiny fraction of the entire dataset.               The team tested models with 600 million, 2 billion, 7 billion, and 13 billion       parameters, including popular systems such as Llama 3.1 and GPT-3.5 Turbo.               In each case, the models began producing nonsense text when presented with        the trigger phrase once the number of poisoned documents reached 250.               For the largest model tested, this represented just 0.00016% of the entire       dataset, showing the vulnerabilitys efficiency.               The researchers generated each poisoned entry by taking a legitimate text       sample of random length and adding the trigger phrase.               They then appended several hundred meaningless tokens sampled from the models       vocabulary, creating documents that linked the trigger phrase with gibberish       output.               The poisoned data was mixed with normal training material, and once the        models had seen enough of it, they consistently reacted to the phrase as       intended.               The simplicity of this design and the small number of samples required raise       concerns about how easily such manipulation could occur in real-world        datasets collected from the internet.               Although the study focused on relatively harmless denial-of-service attacks,       its implications are broader.               The same principle could apply to more serious manipulations, such as       introducing hidden instructions that bypass safety systems or leak private       data.               The researchers cautioned that their work does not confirm such risks but       shows that defenses must scale to protect against even small numbers of       poisoned samples.               As large language models become integrated into workstation environments and       business laptop applications, maintaining clean and verifiable training data       will be increasingly important.               Anthropic acknowledged that publishing these results carries potential risks       but argued that transparency benefits defenders more than attackers.               Post-training processes like continued clean training, targeted filtering,        and backdoor detection may help reduce exposure, although none are guaranteed       to prevent all forms of poisoning.               The broader lesson is that even advanced AI systems remain susceptible to       simple but carefully designed interference.               ======================================================================       Link to news story:       https://www.techradar.com/pro/how-many-malicious-docs-does-it-take-to-poison-a       n-llm-far-fewer-than-you-might-think-anthropic-warns              $$       --- SBBSecho 3.28-Linux        * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)       SEEN-BY: 105/81 106/201 128/187 129/14 305 153/7715 154/110 218/700       SEEN-BY: 226/30 227/114 229/110 111 206 300 307 317 400 426 428 470       SEEN-BY: 229/664 700 705 266/512 291/111 320/219 322/757 342/200 396/45       SEEN-BY: 460/58 633/280 712/848 902/26 2320/0 105 304 3634/12 5075/35       PATH: 2320/105 229/426           |
[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]
(c) 1994, bbs@darkrealms.ca