
Just a sample of the Echomail archive

Cooperative anarchy at its finest, still active today. Darkrealms is the Zone 1 Hub.

   CONSPRCY      How big is your tinfoil hat?      2,445 messages   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]

   Message 1,278 of 2,445   
   Mike Powell to All   
   Everyone wants the viral   
   15 Apr 25 13:57:00   
   
   TZUTC: -0500   
   MSGID: 1011.consprcy@1:2320/105 2c63e785   
   PID: Synchronet 3.20a-Linux master/acc19483f Apr 26 2024 GCC 12.2.0   
   TID: SBBSecho 3.20-Linux master/acc19483f Apr 26 2024 23:04 GCC 12.2.0   
   BBSID: CAPCITY2   
   CHRS: ASCII 1   
   Everyone wants the viral AI doll -- but it's a privacy nightmare waiting to   
   happen   
      
   Date:   
   Tue, 15 Apr 2025 10:37:37 +0000   
      
   Description:   
   Millions have already given away their face and sensitive data to jump on the   
   latest viral AI trend -- and that's bad for your privacy and security.   
      
   FULL STORY   
   ======================================================================   
      
   Right after the Ghibli-style AI image trend began to wear off, ChatGPT and   
   similar tools found a new way to encourage people to upload their selfies   
   into their systems -- this time, to make an action figure version of themselves.   
      
   The drill is always the same. A photo and a few prompts are enough for the AI   
   image generator to turn you into a packaged Barbie-style doll with    
   accessories linked to your job or interests right next to you. The last step?   
   Sharing the results on your social media account, of course.    
      
   I must admit that the more I scroll through feeds filled with AI doll   
   pictures, the more concerned I become. This is not only because it's yet   
   another trend misusing the power of AI, but also because millions of people   
   have willingly shared their faces and sensitive information simply to jump   
   on the umpteenth social bandwagon, most likely without thinking about the   
   privacy and security risks that come with it.   
      
   A privacy deceit    
      
   Let's start with the obvious -- privacy.   
      
   Both the AI doll and Studio Ghibli AI trends have pushed more people to feed   
   the databases of OpenAI, Grok, and similar tools with their pictures. Many of   
   these users had perhaps never used LLM software before. I certainly saw too   
   many families uploading their kids' faces to get the latest viral image over   
   the past couple of weeks.   
      
   True, AI models are already known to scrape the web for information and   
   images. So, many have probably wondered: how different is this from sharing   
   a selfie on my Instagram page?   
      
   There's a catch, though. By voluntarily uploading your photos to AI   
   generator software, you give the provider more ways to legally use that   
   information -- or, rather, your face. Most people haven't realized that the   
   Ghibli Effect is not only an AI copyright controversy but also OpenAI's PR   
   trick to get access to thousands of new personal images.   
      
   As Luiza Jarovsky, co-founder of the AI, Tech & Privacy Academy, explained   
   when the Ghibli trend exploded: by voluntarily sharing information, you give   
   OpenAI consent to process it, de facto bypassing the "legitimate interest"   
   GDPR protection.   
      
   Put simply, in what Jarovsky described as a "clever privacy trick," LLM   
   providers managed to get a surge of fresh new images into their systems to   
   use.   
      
   We could argue that it worked so well that they decided to do it again --   
   and raise the bar.   
      
   Losing control -- and not just of your face   
      
   To create your personal action doll, your face isn't enough. You also need   
   to share some information about yourself to generate the full package. The   
   more details you give, the closer the resemblance to the real you.   
      
   So, just like that, people aren't only giving AI companies consent to use   
   their faces, but also a huge amount of personal information the software   
   wouldn't be able to collect otherwise.   
      
   As Eamonn Maguire, Head of Account Security at Proton (the provider behind    
   one of the best VPN and secure email services on the market), points out,   
   sharing personal information "opens a pandora's box of issues."    
      
   That's because you lose control over your data and, most importantly, how it   
   will be used. This might be to train LLMs, generate content, personalize ads,   
   or more -- it won't be up to you to decide.   
      
   "The detailed personal and behavioral profiles that tools like ChatGPT can   
   create using this information could influence critical aspects of your life   
   including insurance coverage, lending terms, surveillance, profiling,   
   intelligence gathering or targeted attacks," Maguire told me.    
      
   The privacy risks linked to how OpenAI, Google, and X will use, or misuse,   
   this data are only one side of the problem. These AI tools could also become   
   a honeypot for hackers.   
      
   As a rule of thumb, the greater the amount of data, the higher the   
   likelihood of a big data breach -- and AI companies aren't always careful   
   when securing their users' data.   
      
   Commenting on this, Maguire said: "DeepSeek experienced a significant   
   security lapse when their database of user prompts became publicly accessible   
   on the internet. OpenAI similarly had a security challenge when a   
   vulnerability in a third-party library they were using led to the exposure of   
   sensitive user data, including names, email addresses, and credit card   
   information."    
      
   This means that criminals could exploit the faces and personal information   
   people shared to create their action figures for malicious purposes,   
   including political propaganda, identity theft, fraud, and online scams.   
      
   Worth the fun?    
      
   While it's increasingly difficult to avoid sharing personal information   
   online and stay anonymous, these viral AI trends suggest that most people   
   don't properly consider the privacy and security implications.   
      
   Even as the use of encrypted messaging apps like Signal and WhatsApp keeps   
   rising, alongside the use of virtual private network (VPN) software, jumping   
   on the latest viral social bandwagon evidently looks more urgent than   
   protecting one's privacy.   
      
   AI companies know this dynamic well and have learned how to use it to their   
   advantage: to attract more users, to get more images and data -- or, even   
   better, all of the above.   
      
   It's fair to say that the Ghibli-style and action-figure booms are only the   
   start of a fresh new frontier for generative AI and its threat to privacy.   
   I'm sure more of these trends will explode among social media users in the   
   coming months.   
      
   As Maguire from Proton points out, the amount of power and data accumulating   
   in the hands of a few AI companies is particularly concerning. "There needs   
   to be a change -- before it's too late," he said.   
      
   ======================================================================   
   Link to news story:   
   https://www.techradar.com/computing/cyber-security/everyone-wants-the-viral-ai-doll-but-its-a-privacy-nightmare-waiting-to-happen   
      
   $$   
   --- SBBSecho 3.20-Linux   
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)   
   SEEN-BY: 105/81 106/201 128/187 129/305 153/7715 154/110 218/700 226/30   
   SEEN-BY: 227/114 229/110 111 114 206 300 307 317 400 426 428 470 664   
   SEEN-BY: 229/700 705 266/512 291/111 320/219 322/757 342/200 396/45   
   SEEN-BY: 460/58 712/848 902/26 2320/0 105 3634/12 5075/35   
   PATH: 2320/105 229/426   
      



(c) 1994, bbs@darkrealms.ca