
Just a sample of the Echomail archive

Cooperative anarchy at its finest, still active today. Darkrealms is the Zone 1 Hub.

   CONSPRCY      How big is your tinfoil hat?      2,445 messages   


   Message 1,957 of 2,445   
   Mike Powell to All   
   If hackers can use AI to   
   16 Nov 25 09:24:25   
   
   TZUTC: -0500   
   MSGID: 1714.consprcy@1:2320/105 2d7f1b4b   
   PID: Synchronet 3.21a-Linux master/123f2d28a Jul 12 2025 GCC 12.2.0   
   TID: SBBSecho 3.28-Linux master/123f2d28a Jul 12 2025 GCC 12.2.0   
   BBSID: CAPCITY2   
   CHRS: ASCII 1   
   FORMAT: flowed   
   If hackers can use AI to automate massive cyber attacks, Terminator robots    
   are the least of our problems   
      
   Date:   
   Fri, 14 Nov 2025 20:00:00 +0000   
      
   Description:   
   Anthropic's recent dissection of a massive AI-powered cyber attack should   
   scare the heck out of all of us   
      
   FULL STORY   
      
   I can see it now: the Terminator travels back to 2021 and then casually walks   
   past the offices of Boston Dynamics, Tesla, even 1X, and Figure AI, and   
   instead stops in front of Anthropic. With his characteristic Austrian accent,   
   the Terminator flexes his formidable muscles and says, "I must stop   
   programmers from makingz Claude AI. I vill prevent the first almost fully   
   automated, large-scale cyberattack in 2025." Meanwhile, the robot developers   
   scurry away, figuring the Terminator might be back in a decade for their   
   hides.    
      
   In real life, there is no Terminator, but there are extremely worrying signs   
   about AI's rapid development and new concerns about its weaponization. This   
   week, Anthropic revealed that it mostly thwarted a massive "AI-orchestrated   
   cyber espionage campaign."    
      
   The alleged attack, undertaken in September of this year and possibly by   
   Chinese hackers, targeted major tech companies, financial institutions,   
   chemical manufacturing companies, and government agencies. Each one of those   
   targets should give you pause, especially the ones that serve average   
   consumers. Government agencies could mean almost anything, including   
   infrastructure: the systems that control water, electricity, and even food    
   safety.    
      
   It's an attack that Anthropic, which makes Claude AI, insists could not   
   have happened even a year ago. That doesn't really surprise me. As I like to   
   say, we're now living on AI time, where the pace of development and   
   innovation runs at roughly three times the speed of previous technology   
   epochs. If Moore's Law posited a doubling of transistors on a CPU every 18   
   months, LLM intelligence could now be doubling every six months.    
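      
   To make that comparison concrete, here is a rough back-of-the-envelope   
   sketch in Python. The 18-month and six-month doubling periods are the ones   
   above; the three-year horizon is my own choice, purely for illustration:   
      
      # Growth factor after `months`, given a fixed doubling period (in months).
      def growth(doubling_months, months):
          return 2 ** (months / doubling_months)

      horizon = 36  # three years
      print(f"Transistors, 18-month doubling: {growth(18, horizon):.0f}x")   # 4x
      print(f"LLM capability, 6-month doubling: {growth(6, horizon):.0f}x")  # 64x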
      
   As Anthropic explains in a blog post, the AI models:   
   - Are now more intelligent   
   - Have agency: they can take autonomous actions, chain them together, and   
     even make decisions with little human input   
   - Can even use tools on your behalf to search the web and retrieve data   
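      
   To picture what that kind of agency looks like, here is a minimal,   
   hypothetical sketch of an agent loop: a model-like policy picks a tool, the   
   tool runs, and its output feeds the next decision. The "model" below is a   
   hard-coded stub, and the tool names (web_search, summarize) are invented for   
   illustration; they are not anyone's real API:   
      
      def web_search(query):   # stand-in tool: pretend to search the web
          return f"results for '{query}'"

      def summarize(text):     # stand-in tool: pretend to condense text
          return f"summary of {text}"

      def stub_model(history):
          """Toy policy: search first, then summarize, then stop."""
          if len(history) == 1:
              return ("web_search", {"query": history[0]})
          if len(history) == 2:
              return ("summarize", {"text": history[-1]})
          return ("done", {})

      def run_agent(goal, tools, model, max_steps=5):
          history = [goal]
          for _ in range(max_steps):
              name, args = model(history)
              if name == "done":
                  break
              history.append(tools[name](**args))  # output feeds the next step
          return history

      print(run_agent("recent AI security news",
                      {"web_search": web_search, "summarize": summarize},
                      stub_model))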
      
   Hackers using AI to turbocharge their efforts is not new. Even the spam texts   
   and phone calls you receive every day are accelerating because AI makes it   
   easier to spin out new IDs and strategies.    
      
   However, these more recent advancements appear to be helping hackers attack    
   at scale and with little more than some very basic programming and,    
   primarily, prompts.    
      
   According to Anthropic, "Overall, the threat actor was able to use AI to   
   perform 80-90% of the campaign, with human intervention required only   
   sporadically."    
      
   The only good news is that Anthropic detected the activity and quickly shut    
   it down before it got far enough to have any noticeable real-world impact.    
      
   The next, scary level    
      
   Still, these hackers were highly motivated and quite canny. They got around   
   Anthropic's safeguards by breaking the attack down into tiny, innocuous   
   pieces: tasks that separately seemed harmless but together composed the full   
   attack.    
      
   As I see it, everything Anthropic shared about this cyberattack is deeply   
   concerning and should be read as a warning for all of us.    
      
   The rapid pace of AI development means that these platforms will only get   
   smarter. Agentic AI in particular, models that can carry out tasks on your   
   behalf, is on the leading edge of development for virtually all AI   
   platforms, including those from Google (see SIMA 2, which can play in   
   virtual worlds on its own) and OpenAI, and while most of it is used for   
   good, these capabilities are clearly tantalizing for cyber attackers.    
      
   It might seem like this is purely a concern for governments, businesses, and   
   infrastructure, but the breakdown of any of these systems and companies can   
   quite often lead to loss of services, support, resources, and protections for   
   consumers.    
      
   So, yes, Skynet is still the fictional big bad of our AI nightmares, but it   
   won't take a robot army to bring down society, just a hacker or two with   
   access to the best AI has to offer. The next Terminator will surely be   
   visiting AI companies first.    
      
   ======================================================================   
   Link to news story:   
   https://www.techradar.com/ai-platforms-assistants/if-hackers-can-use-ai-to-automate-massive-cyber-attacks-terminator-robots-are-the-least-of-our-problems   
      
   $$   
   --- SBBSecho 3.28-Linux   
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)   
   SEEN-BY: 105/81 106/201 128/187 129/14 305 153/7715 154/110 218/700   
   SEEN-BY: 226/30 227/114 229/110 206 300 307 317 400 426 428 470 664   
   SEEN-BY: 229/700 705 266/512 291/111 320/219 322/757 342/200 396/45   
   SEEN-BY: 460/58 633/280 712/848 902/26 2320/0 105 304 3634/12 5075/35   
   PATH: 2320/105 229/426   
      



(c) 1994, bbs@darkrealms.ca