
Just a sample of the Echomail archive

Cooperative anarchy at its finest, still active today. Darkrealms is the Zone 1 Hub.

   CONSPRCY      How big is your tinfoil hat?      2,445 messages   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]

   Message 2,345 of 2,445   
   Mike Powell to All   
   Musk, Grok face questions over data use,   
   06 Feb 26 08:10:57   
   
   TZUTC: -0500   
   MSGID: 2103.consprcy@1:2320/105 2deb2698   
   PID: Synchronet 3.21a-Linux master/123f2d28a Jul 12 2025 GCC 12.2.0   
   TID: SBBSecho 3.28-Linux master/123f2d28a Jul 12 2025 GCC 12.2.0   
   BBSID: CAPCITY2   
   CHRS: ASCII 1   
   FORMAT: flowed   
   Elon Musk and Grok face "deeply troubling questions" from UK regulators
   over data use and consent   
      
   By Eric Hal Schwartz, published yesterday
      
   Regulators are probing whether Elon Musk's AI violated data laws after   
   millions of explicit images appeared online   
      
    * The UK's data watchdog is formally investigating X and xAI over Grok's
      creation of non-consensual deepfake imagery
    * Grok reportedly generated millions of explicit AI images, including ones
      that appear to depict minors
    * The probe is looking at possible GDPR violations and a lack of safeguards
      
   The UK's data protection regulator has launched a sweeping investigation into   
   X and xAI after reports that the Grok AI chatbot was generating indecent   
   deepfake images of real people without their consent. The Information   
   Commissioner's Office is looking into whether the companies violated GDPR by   
   allowing Grok to create and share sexually explicit AI images, including some   
   that appear to depict children.   
      
   "The reports about Grok raise deeply troubling questions about how people's   
   personal data has been used to generate intimate or sexualised images without   
   their knowledge or consent, and whether the necessary safeguards were put in   
   place to prevent this," ICO executive director of regulatory risk and   
   innovation William Malcolm said in a statement.   
      
   The investigators are not simply looking at what users did, but at what X and xAI
   failed to prevent. The move follows a raid last week on the Paris office of X   
   by French prosecutors as part of a parallel criminal investigation into the   
   alleged distribution of deepfakes and child abuse imagery.   
      
   The scale of this incident has made it impossible to dismiss as an isolated   
   case of a few bad prompts. Researchers estimate Grok generated around three   
   million sexualized images in less than two weeks, including tens of thousands   
   that appear to depict minors. GDPR's penalty structure offers a clue to the   
   stakes: violations can result in fines of up to £17.5 million or 4% of global
   annual turnover, whichever is higher.
      
   Grok trouble   
      
   X and xAI have insisted they are implementing stronger safeguards, though   
   details are limited. X recently announced new measures to block certain image   
   generation pathways and limit the creation of altered photos involving minors.   
   But once this type of content begins circulating, especially on a platform as   
   large as X, it becomes nearly impossible to erase completely.   
      
   Politicians are now calling for systemic legislative changes. A group of MPs   
   led by Labour's Anneliese Dodds has urged the government to introduce AI   
   legislation requiring developers to conduct thorough risk assessments before   
   releasing tools to the public.   
      
   As AI image generation becomes more common, the line between genuine and   
   fabricated content blurs. That shift affects anyone with social media, not just   
   celebrities or public figures. When tools like Grok can fabricate convincing   
   explicit imagery from an ordinary selfie, the stakes of sharing personal photos   
   change.   
      
   Privacy becomes harder to protect: it doesn't matter how careful you are
   when the technology outpaces society. Regulators worldwide are scrambling to
   keep up. The UK's investigation into X and xAI may last months, but it is   
   likely to influence how AI platforms are expected to behave.   
      
   A push for stronger, enforceable safety-by-design requirements is likely. And   
   there will be more pressure on companies to provide transparency about how   
   their models are trained and what guardrails are in place.   
      
   The UK's inquiry signals that regulators are losing patience with the "move
   fast and break things" approach to public safety. When it comes to AI that
   can manipulate people's lives, there is momentum for real change, and when
   AI makes it easy to distort someone's image, the burden of protection falls
   on the developers, not the public.
      
      
   https://www.techradar.com/ai-platforms-assistants/elon-musk-and-grok-face-deeply-troubling-questions-from-uk-regulators-over-data-use-and-consent
      
   $$   
   --- SBBSecho 3.28-Linux   
    * Origin: Capitol City Online (1:2320/105)   
   SEEN-BY: 105/81 106/201 128/187 129/14 305 153/7715 154/110 218/700   
   SEEN-BY: 226/30 227/114 229/110 134 206 300 307 317 400 426 428 470   
   SEEN-BY: 229/664 700 705 266/512 291/111 320/219 322/757 342/200 396/45   
   SEEN-BY: 460/58 633/280 712/848 902/26 2320/0 105 304 3634/12 5075/35   
   PATH: 2320/105 229/426   
      

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]


(c) 1994,  bbs@darkrealms.ca