
Just a sample of the Echomail archive

Cooperative anarchy at its finest, still active today. Darkrealms is the Zone 1 Hub.

   CONSPRCY      How big is your tinfoil hat?      2,445 messages   


   Message 1,692 of 2,445   
   Mike Powell to All   
   AI tools are making socia   
   26 Aug 25 10:01:07   
   
   TZUTC: -0500   
   MSGID: 1440.consprcy@1:2320/105 2d1307d3   
   PID: Synchronet 3.21a-Linux master/123f2d28a Jul 12 2025 GCC 12.2.0   
   TID: SBBSecho 3.28-Linux master/123f2d28a Jul 12 2025 GCC 12.2.0   
   BBSID: CAPCITY2   
   CHRS: ASCII 1   
   FORMAT: flowed   
   AI tools are making social engineering attacks even more convincing, and I   
   fear that this is only the beginning   
      
   Date:   
   Tue, 26 Aug 2025 06:42:10 +0000   
      
   Description:   
   Wallace and Gromit meet deepfake deception in this sharp take on AI-driven   
   scams.   
      
   FULL STORY   
   ======================================================================   
      
   Nick Park's Wallace and Gromit were brought crashing into the 21st century
   in December 2024 with their latest adventure, Vengeance Most Fowl. The film
   challenges our growing dependence on smart technology in the form of a
   robotic garden gnome, built by Wallace to support his gardening business,
   which is then hacked by the Kubrick-esque Feathers McGraw for his own
   nefarious purposes.
      
   One of the more interesting but less commented-on parts of the film shows
   Gromit cautiously entering his house and being greeted by what he thinks is
   Wallace's reassuring voice, only to be confronted by Feathers and the
   robotic gnome.
      
   Technology's ability to mimic linguistic patterns, to clone a person's
   voice and to understand and respond to questions has developed dramatically
   in the last few years.
      
   This has not gone unnoticed by the world's criminals and scammers, with the
   result that social engineering attacks are not only on the rise but are more
   sophisticated and targeted than ever.
      
   What are social engineering attacks?    
      
   Cybercriminal social engineering manipulates a target by creating a false
   narrative that exploits the victim's vulnerability (whether that is their
   willingness to trust people, their financial worries or their emotional
   insecurity). The result is that the victim unwittingly but willingly hands
   over money and/or information to the perpetrator.
      
   Most social engineering scams consist of the following stages: (1) making
   contact with the victim (the means), (2) building a false narrative
   (usually with a sense of urgency or time limitation) (the lie) and (3)
   persuading the target to take the suggested action (e.g. transferring money
   or providing personal details) (the ask).
      
   Usually, stage 2 (the lie) is where most people spot the scam for what it is,   
   as it is difficult to build and sustain a convincing narrative without    
   messing up eventually. We have all received text messages, emails or social   
   media messages from people purporting to be our friends, long-lost relations   
   in countries we have never been to, or our banks, asking us to provide them   
   with personal information, passwords or money.    
      
   Historically, such communications were easy to spot, as they bore the   
   hallmarks of a scam: generic greetings and signatures, spelling mistakes,    
   poor or unusual grammar and syntax, inconsistent formatting or suspicious   
   addresses.   
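   
   As a rough, purely illustrative aside (not from the article), the sketch
   below shows how those hallmarks could be screened for mechanically, much as
   early spam filters did. The keyword lists, the scam_hallmarks() helper and
   the example sender address are assumptions made for this sketch only.
   
   # Toy Python sketch: flag the classic scam hallmarks named above.
   import re
   
   GENERIC_GREETINGS = ("dear customer", "dear friend", "dear sir/madam")
   URGENCY_PHRASES = ("urgent", "immediately", "within 24 hours",
                      "account suspended")
   
   def scam_hallmarks(sender: str, subject: str, body: str) -> list[str]:
       """Return the classic scam hallmarks found in a message."""
       text = f"{subject}\n{body}".lower()
       flags = []
   
       # Generic greeting near the top of the message.
       if any(g in text[:160] for g in GENERIC_GREETINGS):
           flags.append("generic greeting")
   
       # Manufactured urgency or time pressure.
       if any(p in text for p in URGENCY_PHRASES):
           flags.append("artificial urgency")
   
       # Suspicious address: display name claims a brand that the sender's
       # domain does not contain, e.g. "YourBank <alerts@freemail.example>".
       m = re.match(r"\s*(.+?)\s*<[^@>]+@([^>]+)>", sender)
       if m:
           name_words = m.group(1).lower().split()
           if name_words and name_words[0] not in m.group(2).lower():
               flags.append("display name does not match sender domain")
   
       return flags
   
   if __name__ == "__main__":
       print(scam_hallmarks(
           "YourBank Support <alerts@freemail.example>",
           "Urgent: verify your account",
           "Dear customer, your account will be suspended within 24 hours.",
       ))
   
   Real filters rely on far richer signals than these keyword checks; the
   point of what follows is that AI-written lures increasingly lack such
   obvious tells.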
      
   Liar, liar, pants on f-AI-re?
      
   However, the rapidly growing sophistication of generative AI tools means
   that it is increasingly easy for criminals to craft and sustain plausible
   false narratives to ensnare their victims; the lie, or stage 2, in the
   social engineering scam. Companies and law enforcement agencies are
   scrambling to stay ahead of the technological advances and are working hard
   to predict developments which will be used for social engineering.
      
   One potential use case for generative AI in this area is a "dynamic lie"
   system, which would automatically contact and interact with potential
   victims to earn their trust before moving to stage 3 (the ask). This would
   be particularly useful for advance-fee or 419 scams. These scams work by
   promising the victim a large share of a huge sum of money in return for a
   small upfront payment, which the fraudster claims will be used to obtain
   the large sum.
      
   The AI-based dynamic lie system could automate the first wave of scam
   emails to discern whether potential victims are likely to take the bait.
   Once the system identifies an engaged individual who appears persuaded by
   the communication, it can then pass control to a human operator to finish
   the job.
      
   Another development which has already gained traction is the use of AI to   
   clone human speech and audio to carry out advanced types of voice phishing   
   attacks, known as vishing. In the United States, the Federal Trade Commission   
   has warned about scammers using AI voice cloning technology to impersonate   
   family members and con victims into transferring money on the pretext of a   
   family emergency.    
      
   Current technologies allow voices to be cloned in a matter of seconds, and   
   there is no doubt that with advancements in deep learning, these tools will   
   only become more sophisticated. It would appear this form of social   
   engineering is here to stay.   
      
   Do androids dream of electric scams?    
      
   "If there's one job that generative AI can't steal, it's con artist." So
   said Stephanie Carruthers, Global Lead of Cyber Range and Chief People
   Hacker at IBM, in 2022. Fast forward three years and Carruthers has changed
   her position. Our concerns about AI are no longer limited to the impact on
   the workforce but have expanded to include AI-based bots which can craft
   social engineering attacks tailored to specific targets. As Carruthers
   notes, "with very few prompts, an AI model can write a phishing message
   meant just for me. That's terrifying."
      
   Currently, AI is being used by threat actors as an office intern or trainee
   of sorts, speeding up the basic tasks required to carry out social
   engineering attacks. Carruthers and her team ran experiments and found that
   generative AI can write an effective phishing email in five minutes. A team
   of humans needs about 16 hours to write a comparable message, with deep
   research on targets accounting for much of that time.
      
   Furthermore, generative AI can churn out more and more tailored attacks
   without needing a break and, crucially, without a conscience. Philip K.
   Dick noted that for his human protagonist, Rick Deckard, "owning and
   maintaining a fraud had a way of gradually demoralizing one", but in an
   increasingly digital criminal underworld, maintaining a fraud has never
   been easier.
      
   This article was produced as part of TechRadarPro's Expert Insights
   channel, where we feature the best and brightest minds in the technology
   industry today. The views expressed here are those of the author and are
   not necessarily those of TechRadarPro or Future plc. If you are interested
   in contributing, find out more here:
   https://www.techradar.com/news/submit-your-story-to-techradar-pro
      
   ======================================================================   
   Link to news story:   
   https://www.techradar.com/pro/ai-tools-are-making-social-engineering-attacks-even-more-convincing-and-i-fear-that-this-is-only-the-beginning
      
   $$   
   --- SBBSecho 3.28-Linux   
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)   
   SEEN-BY: 105/81 106/201 128/187 129/14 305 153/7715 154/110 218/700   
   SEEN-BY: 226/30 227/114 229/110 111 206 300 307 317 400 426 428 470   
   SEEN-BY: 229/664 700 705 266/512 291/111 320/219 322/757 342/200 396/45   
   SEEN-BY: 460/58 712/848 902/26 2320/0 105 304 3634/12 5075/35   
   PATH: 2320/105 229/426   
      



(c) 1994,  bbs@darkrealms.ca