
Just a sample of the Echomail archive

Cooperative anarchy at its finest, still active today. Darkrealms is the Zone 1 Hub.

   CONSPRCY      How big is your tinfoil hat?      2,445 messages   


   Message 2,174 of 2,445   
   Mike Powell to All   
   Researchers poison their own data when stolen by an AI to ruin results   
   08 Jan 26 10:20:16   
   
   TZUTC: -0500   
   MSGID: 1931.consprcy@1:2320/105 2dc508bb   
   PID: Synchronet 3.21a-Linux master/123f2d28a Jul 12 2025 GCC 12.2.0   
   TID: SBBSecho 3.28-Linux master/123f2d28a Jul 12 2025 GCC 12.2.0   
   BBSID: CAPCITY2   
   CHRS: ASCII 1   
   FORMAT: flowed   
   Researchers poison their own data when stolen by an AI to ruin results   
      
   Date:   
   Wed, 07 Jan 2026 17:20:00 +0000   
      
   Description:   
   Poisoned knowledge graphs can make the LLM hallucinate, rendering it useless   
   to the thieves.   
      
   FULL STORY   
      
   Researchers from universities in China and Singapore came up with a creative   
   way to prevent the theft of data used in Generative AI.   
      
   Among other things, there are two important elements in today's Large   
   Language Models (LLMs): training data and retrieval-augmented generation   
   (RAG).   
      
   Training data teaches an LLM how language works and gives it broad knowledge   
   up to a cutoff point. It doesn't give the model access to new information,   
   private documents, or fast-changing facts. Once training is done, that   
   knowledge is frozen.   
      
   RAG, on the other hand, exists because many real questions depend on current,   
   specific, or proprietary data (such as company policies, recent news,    
   internal reports, or niche technical documents). Instead of retraining the   
   model every time data changes, RAG lets the model fetch relevant information   
   on demand and then write an answer based on it.    
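      
   To make that retrieve-then-generate loop concrete, here is a minimal   
   Python sketch. The names (overlap, llm, rag_answer) are made up for   
   illustration, and llm() is a stand-in for a real model call, not any   
   actual API:   
      
   def overlap(question, doc):
       # Toy relevance score: shared words between the question and a
       # document. Real systems use embedding similarity instead.
       return len(set(question.lower().split()) &
                  set(doc.lower().split()))

   def llm(prompt):
       # Hypothetical stand-in for a real model call.
       return "<answer grounded in:\n" + prompt + ">"

   def rag_answer(question, documents, k=2):
       # 1. Retrieve: pick the k documents most relevant to the question.
       top = sorted(documents, key=lambda d: overlap(question, d),
                    reverse=True)[:k]
       # 2. Generate: the model answers from the retrieved text only.
       context = "\n".join(top)
       return llm("Answer from this context only:\n" + context +
                  "\n\nQ: " + question)

   docs = ["Policy: laptops are replaced every 3 years.",
           "The cafeteria opens at 8am."]
   print(rag_answer("How often are laptops replaced?", docs))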
      
   In 2024, Microsoft introduced GraphRAG - a version of RAG that organizes   
   retrieved information as a knowledge graph instead of a flat list of   
   documents. This helps the model understand how entities, facts, and   
   relationships connect to each other. As a result, the AI can answer more   
   complex questions, follow links between concepts, and reduce contradictions    
   by reasoning over structured relationships rather than isolated text.    
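      
   The structural difference can be sketched in plain Python (an   
   illustration of the idea, not Microsoft's actual implementation):   
   facts live as (subject, relation, object) triples, and retrieval can   
   hop across links instead of matching isolated documents:   
      
   triples = [
       ("AcmeCorp", "acquired", "WidgetCo"),
       ("WidgetCo", "manufactures", "widgets"),
       ("AcmeCorp", "headquartered_in", "Austin"),
   ]

   def facts_about(entity):
       # Every triple that mentions the entity on either side.
       return [t for t in triples if entity in (t[0], t[2])]

   def two_hop(entity):
       # Follow one link outward: facts about the entity, plus facts
       # about everything those facts connect it to.
       first = facts_about(entity)
       linked = {e for s, _, o in first for e in (s, o) if e != entity}
       return first + [t for e in linked for t in facts_about(e)
                       if t not in first]

   # "What does AcmeCorp's acquisition manufacture?" needs the chain
   # AcmeCorp -> WidgetCo -> widgets; flat retrieval over isolated
   # documents can miss that second hop.
   print(two_hop("AcmeCorp"))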
      
   Since these knowledge graphs (KGs) can be rather expensive to build, they   
   make attractive targets for cybercriminals, nation-states, and other   
   malicious actors.   
      
   In their research paper, titled "Making Theft Useless: Adulteration-Based   
   Protection of Proprietary Knowledge Graphs in GraphRAG Systems," authors   
   Weijie Wang, Peizhuo Lv, et al. proposed a defense mechanism called Active   
   Utility Reduction via Adulteration, or AURA, which poisons the KG, making   
   the LLM give wrong answers and hallucinate.   
      
   The only way to get correct answers is to hold the secret key. The   
   researchers said the system is not without its flaws, but that it works in   
   the vast majority of cases (94%).   
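      
   The paper's construction is more involved than this, but the core   
   idea can be sketched with a toy scheme (an illustration under loose   
   assumptions, not the authors' actual algorithm): derive the fake   
   triples pseudorandomly from the secret key, so a key holder can   
   regenerate and subtract them while a thief cannot:   
      
   import hashlib, random

   def fakes(key, entities, relations, real, n=4):
       # Deterministic: the same key always regenerates the same fake
       # triples, skipping any that would collide with real facts.
       rng = random.Random(hashlib.sha256(key).digest())
       out = []
       while len(out) < n:
           t = (rng.choice(entities), rng.choice(relations),
                rng.choice(entities))
           if t not in real and t not in out:
               out.append(t)
       return out

   real_kg = [("AcmeCorp", "acquired", "WidgetCo")]
   ents = ["AcmeCorp", "WidgetCo", "Globex", "Initech"]
   rels = ["acquired", "sued", "supplies"]
   key = b"owner-secret"

   # Published graph: real facts diluted with plausible-looking noise.
   poisoned = real_kg + fakes(key, ents, rels, real_kg)

   # A key holder regenerates the fakes and subtracts them; a thief
   # without the key is stuck querying the adulterated graph.
   recovered = [t for t in poisoned
                if t not in fakes(key, ents, rels, real_kg)]
   assert recovered == real_kg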
      
   "By degrading the stolen KG's utility, AURA offers a practical solution for   
   protecting intellectual property in GraphRAG," the authors stated.    
      
    Via The Register    
      
   ======================================================================   
   Link to news story:   
   https://www.techradar.com/pro/security/researchers-poison-their-own-data-when-stolen-by-an-ai-to-ruin-results   
      
   $$   
   --- SBBSecho 3.28-Linux   
    * Origin: Capitol City Online (1:2320/105)   
   SEEN-BY: 105/81 106/201 128/187 129/14 305 153/7715 154/110 218/700   
   SEEN-BY: 226/30 227/114 229/110 134 206 275 300 307 317 400 426 428   
   SEEN-BY: 229/470 664 700 705 266/512 291/111 320/219 322/757 342/200   
   SEEN-BY: 396/45 460/58 633/280 712/848 902/26 2320/0 105 304 3634/12   
   SEEN-BY: 5075/35   
   PATH: 2320/105 229/426   
      


