
Just a sample of the Echomail archive

Cooperative anarchy at its finest, still active today. Darkrealms is the Zone 1 Hub.

   CONSPRCY      How big is your tinfoil hat?      2,445 messages   


   Message 742 of 2,445   
   Mike Powell to All   
   The psychology of scams   
   14 Mar 25 09:43:00   
   
   TZUTC: -0500   
   MSGID: 457.consprcy@1:2320/105 2c3979f3   
   PID: Synchronet 3.20a-Linux master/acc19483f Apr 26 2024 GCC 12.2.0
   TID: SBBSecho 3.20-Linux master/acc19483f Apr 26 2024 23:04 GCC 12.2.0   
   BBSID: CAPCITY2   
   CHRS: ASCII 1   
   The psychology of scams: how cybercriminals are exploiting the human brain   
      
   Date:   
   Fri, 14 Mar 2025 11:19:23 +0000   
      
   Description:   
   As AI becomes pervasive in cybercrime, human psychological patterns remain    
   our greatest weakness.   
      
   FULL STORY   
   ======================================================================   
   Last year, more than £11.4 billion was stolen from people in the UK by
   cybercriminals. As technology becomes more sophisticated, so do the methods   
   cybercriminals can use to commit their crimes. Our ever-growing reliance on   
   technology in day-to-day life is constantly exposing new vulnerabilities   
   cybercriminals can exploit, while at the same time, AI has lowered the
   skill barrier, making it easier for even unsophisticated criminals to
   launch advanced attacks.
      
   But it's not just weaknesses in our technology that can put us at risk of
   being scammed. In a world where AI tools can clone voices in minutes to   
   generate convincing deepfakes, create fake websites or write thousands of   
   seemingly legitimate reviews in an instant, social engineering tactics are   
   evolving at a terrifying rate, putting even the most cautious individuals and   
   businesses at risk.   
      
   Scammers' psychological playbook
      
   In our busy lives, we are reliant on our implicit trust in the systems,    
   people and brands that surround us to oil the wheels of society. As we   
   implement AI systems, we're encouraging those patterns further. Moving fast on
   the daily commute or under pressure in a stressful workplace, we often go    
   with the quickest, rather than the safest, choice. For example, we might not   
   double-check the email address of a sender or spot a bogus link, relying on   
   this implicit trust to help us make decisions fast.    
      
   When we see a trusted and well-known brand or business, we automatically
   deem it safe because it appears legitimate and familiar. Scammers can   
   capitalize on the implicit trust we place in our day-to-day technology    
   systems and exploit attentional bias, a cognitive bias wherein we find it    
   more difficult to identify non-obvious threats when under stress and trying    
   to do several things at once, which has become the norm for our working    
   lives.    
      
   This means that for a threat to cut through the noise of day-to-day work
   and cognitive stress, it has to be very attention-grabbing, making it
   likely that threats designed to imitate or impersonate our well-known
   systems will be deemed safe because they appear legitimate and familiar.
   Scammers tap into this cognitive bias to carry out their attacks, knowing
   it means people are less likely to question potential scams or threats.
   They also use impersonation, urgency and fear to manipulate victims into
   trusting them or acting quickly without verification.
      
   This technique, known as social engineering, is the deliberate manipulation    
   of people into giving away confidential information or performing actions    
   that compromise security. It's most commonly seen in personalized scams. By
   tapping into these cognitive shortcuts, scammers increase the chances of    
   their attacks succeeding because when something feels familiar, were far less   
   likely to question it.   
      
   Employees under pressure    
      
   Employees in the workplace can be particularly vulnerable to this kind of
   psychological scam. While companies often invest significant resources in   
   cybersecurity systems to keep their infrastructure and revenue safe, the    
   human risks their team pose are too often overlooked in terms of investment.   
   In the midst of a hectic workday, an employee facing decision fatigue might
   approve a suspicious transaction without proper verification or not question   
   an email that appears to be from a senior colleague asking to click a link or   
   send an urgent bank transfer.    
      
   This is not simply a case of 'users are the problem'. Even with rigorous
   awareness training, overloaded employees will still face this issue. Under
   the fast-paced demands and stress of modern business, especially when
   workloads are heavy and we have numerous tasks to attend to, our
   decision-making becomes cognitively impaired, and it gets worse as the
   day goes on.
      
   For this reason, research tells us that we make worse decisions at 6pm
   than at 10am. However rigorous the awareness training, high-stress,
   high-workload fields will always suffer the effects of decision fatigue,
   making people more likely to be exploited in this kind of social
   engineering attack. Busy employees can easily overlook red flags, with
   potentially huge and damaging consequences for their organization.
      
   AI generates highly convincing personalized messages that mirror the tone and   
   style of a company or individual, allowing hackers to craft the perfect   
   phishing email that often bypasses traditional email filters. Over 30.4
   million phishing emails were detected across Darktrace's customer fleet
   between December 2023 and December 2024, and 70% of them successfully
   passed the widely used DMARC authentication approach. With the volume of
   attacks continuously increasing, and with AI-powered threats growing in
   sophistication, human teams need support and augmentation to have any
   hope of defending themselves.
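   As a rough illustration of what that DMARC figure means in practice (a
   sketch of my own, not Darktrace's method; the file name is hypothetical),
   a receiving mail server records its SPF/DKIM/DMARC verdicts in the
   Authentication-Results header (RFC 8601), which can be read back out of a
   delivered message with Python's standard library:

      import email

      # "suspect.eml" is a hypothetical saved copy of a delivered message.
      # The Authentication-Results header is stamped by the receiving mail
      # server (RFC 8601) and records its SPF/DKIM/DMARC verdicts.
      with open("suspect.eml", "rb") as f:
          msg = email.message_from_bytes(f.read())

      headers = msg.get_all("Authentication-Results") or []
      dmarc_pass = any("dmarc=pass" in h.lower() for h in headers)
      print("DMARC verdict:", "pass" if dmarc_pass else "fail or absent")

   As the 70% figure above shows, dmarc=pass alone is no guarantee of
   legitimacy: plenty of phishing is sent from domains whose authentication
   is perfectly in order.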
      
   How to protect your organization   
      
   The business impact of cybercrime goes beyond financial losses: it can
   cause reputational damage that takes years to repair. But there are steps
   you can take to make sure your organization isn't the next victim.
   Education and enhancing digital literacy are key to protecting employees
   and organizations from the fast-evolving ways cybercriminals operate.
      
   This includes comprehensive employee training programs focused on recognizing   
   and responding to social engineering attempts. Additionally, organizations   
   should implement robust systems of control and guardrails around their   
   employees, including multifactor authentication and domain-based message
   authentication (DMARC) on email. Day to day, this means ensuring
   employees don't skip the simple steps of verifying senders,
   double-checking URLs, and always keeping a proactive mindset and a
   healthy dose of skepticism.
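   To make "double-checking URLs" concrete, here is a minimal sketch (my own
   illustration, not from the article; the example link and expected domain
   are invented) of the kind of red flags a link can be screened for before
   anyone clicks:

      from urllib.parse import urlparse

      def url_red_flags(url, expected_domain):
          """Return a list of simple warning signs for a link.

          expected_domain is the domain the message claims to come from.
          These checks are illustrative, not exhaustive.
          """
          flags = []
          parts = urlparse(url)
          host = (parts.hostname or "").lower()
          if parts.scheme != "https":
              flags.append("not HTTPS")
          if host.startswith("xn--") or ".xn--" in host:
              flags.append("punycode host (possible lookalike domain)")
          if host.replace(".", "").isdigit():
              flags.append("raw IP address instead of a domain")
          if host != expected_domain and not host.endswith("." + expected_domain):
              flags.append("host %r does not belong to %r" % (host, expected_domain))
          return flags

      # A lookalike domain trips two checks: no HTTPS, wrong domain.
      print(url_red_flags("http://paypa1-secure.example.net/login", "paypal.com"))

   A screen like this is a triage aid at best; it supports, rather than
   replaces, the guardrails described above.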
      
   Equally, if not more important, is making sure cybersecurity measures are up   
   to scratch, working in tandem with employees. With cybercriminals employing    
   AI to advance their crimes, our defenses must do the same. It's
   inevitable that humans won't be able to spot or prevent all malicious
   activity, so it's critical that cybersecurity systems adequately plug the
   gaps.
      
   Security leaders should leverage AI to stay on the front foot of attacks,   
   using advanced technology to identify threats that may appear harmless in   
   other environments and evade traditional security tools. AI-driven
   cybersecurity systems that learn the behaviors and traits of an
   organization are an essential piece of the defense puzzle for businesses
   today.
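   In toy form (an invented sketch, not any vendor's actual method), the
   "learn the behaviors" idea can be illustrated by modeling one user's
   historical login hours and flagging sharp deviations; the data below is
   made up:

      from statistics import mean, stdev

      # Invented history of one user's typical login hours (24h clock).
      history = [9, 9, 10, 8, 9, 10, 9, 11, 9, 10]
      mu, sigma = mean(history), stdev(history)

      def is_anomalous(login_hour, threshold=3.0):
          # Flag logins more than `threshold` standard deviations away
          # from the learned baseline.
          return abs(login_hour - mu) / sigma > threshold

      print(is_anomalous(9))   # False: matches the learned pattern
      print(is_anomalous(3))   # True: a 3 a.m. login stands out

   Production systems model far richer signals than a single feature, but
   the principle is the same: the baseline is learned from the
   organization's own activity rather than from fixed rules.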
      
   A smarter defense    
      
   As AI develops, cybercrimes will only become more sophisticated, more   
   affordable and more scalable. We've already seen the impact of the likes of
   ransomware-as-a-service crime groups, as well as wider social engineering   
   methods, and these are only set to grow. Educating teams now about how to be   
   more alert and digitally aware, while also investing in the likes of AI as a   
   defense tool, is critical to staying secure in the complex cyber threat   
   landscape we face today. The best defense is a strong partnership between
   human awareness and AI-enabled security.
      
   This article was produced as part of TechRadar Pro's Expert Insights
   channel, where we feature the best and brightest minds in the technology
   industry today. The views expressed here are those of the author and are
   not necessarily those of TechRadar Pro or Future plc. If you are
   interested in contributing, find out more here:
   https://www.techradar.com/news/submit-your-story-to-techradar-pro   
      
   ======================================================================   
   Link to news story:   
   https://www.techradar.com/pro/the-psychology-of-scams-how-cybercriminals-are-exploiting-the-human-brain
      
   $$   
   --- SBBSecho 3.20-Linux   
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)   
   SEEN-BY: 105/81 106/201 128/187 129/305 153/7715 154/110 218/700 226/30   
   SEEN-BY: 227/114 229/110 111 114 206 300 307 317 400 426 428 470 664   
   SEEN-BY: 229/700 705 266/512 291/111 320/219 322/757 342/200 396/45   
   SEEN-BY: 460/58 712/848 902/26 2320/0 105 3634/12 5075/35   
   PATH: 2320/105 229/426   
      



(c) 1994, bbs@darkrealms.ca