
Just a sample of the Echomail archive

Cooperative anarchy at its finest, still active today. Darkrealms is the Zone 1 Hub.

   CONSPRCY      How big is your tinfoil hat?      2,445 messages   


   Message 2,071 of 2,445   
   Mike Powell to All   
   OpenAI admits new models   
   12 Dec 25 09:50:30   
   
   TZUTC: -0500   
   MSGID: 1828.consprcy@1:2320/105 2da168da   
   PID: Synchronet 3.21a-Linux master/123f2d28a Jul 12 2025 GCC 12.2.0   
   TID: SBBSecho 3.28-Linux master/123f2d28a Jul 12 2025 GCC 12.2.0   
   BBSID: CAPCITY2   
   CHRS: ASCII 1   
   FORMAT: flowed   
   OpenAI admits new models likely to pose 'high' cybersecurity risk   
      
   Date:   
   Thu, 11 Dec 2025 20:20:00 +0000   
      
   Description:   
   Better models also mean higher risk, but there are mitigations.   
      
   FULL STORY   
      
   Future OpenAI large language models (LLMs) could pose higher cybersecurity
   risks: in theory, they could develop working zero-day remote exploits
   against well-defended systems, or meaningfully assist with complex and
   stealthy cyber-espionage campaigns.
      
   This is according to OpenAI itself, which said in a recent blog post that
   the cyber capabilities of its AI models are advancing rapidly.
      
   While this might sound sinister, OpenAI views it from a positive
   perspective, saying the advances also bring meaningful benefits for
   cyberdefense.
      
   Crashing the browser   
      
   To prepare for future models that might be abused this way, OpenAI said it
   is investing in strengthening models for defensive cybersecurity tasks and
   building tools that let defenders more easily carry out workflows such as
   auditing code and patching vulnerabilities.
      
   The best way to mitigate that risk, according to the blog, is a combination
   of access controls, infrastructure hardening, egress controls, and
   monitoring.
      
   Furthermore, OpenAI announced that it will soon introduce a program giving
   users and customers working on cybersecurity tasks tiered access to
   improved capabilities.
      
   Finally, the Microsoft-backed AI giant said it plans to establish an
   advisory group called the Frontier Risk Council. The group will consist of
   seasoned cybersecurity experts and practitioners and, after an initial
   focus on cybersecurity, is expected to expand its remit to other areas.
      
   "Members will advise on the boundary between useful, responsible capability
   and potential misuse, and these learnings will directly inform our
   evaluations and safeguards. We will share more on the council soon," the
   blog reads.
      
   OpenAI also noted that cyber misuse could arise from any frontier model in
   the industry, which is why it takes part in the Frontier Model Forum, where
   it shares knowledge and best practices with industry partners.
      
   In this context, threat modeling helps mitigate risk by identifying how AI   
   capabilities could be weaponized, where critical bottlenecks exist for   
   different threat actors, and how frontier models might provide meaningful   
   uplift.    
      
    Via Reuters    
      
   ======================================================================   
   Link to news story:   
   https://www.techradar.com/pro/security/openai-admits-new-models-likely-to-pose-high-cybersecurity-risk
      
   $$   
   --- SBBSecho 3.28-Linux   
    * Origin: Capitol City Online (1:2320/105)   
   SEEN-BY: 105/81 106/201 128/187 129/14 305 153/7715 154/110 218/700   
   SEEN-BY: 226/30 227/114 229/110 134 206 300 307 317 400 426 428 470   
   SEEN-BY: 229/664 700 705 266/512 291/111 320/219 322/757 342/200 396/45   
   SEEN-BY: 460/58 633/280 712/848 902/26 2320/0 105 304 3634/12 5075/35   
   PATH: 2320/105 229/426   
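
The TZUTC, MSGID, PID, TID and CHRS lines at the top of the message, and the
SEEN-BY and PATH trailer at the bottom, are standard FidoNet/FTN control
lines: MSGID carries the originating address plus a serial number, SEEN-BY
lists the net/node pairs the echo has already reached (a bare node number
reuses the previous net), and PATH records the systems it travelled through.
Below is a minimal sketch of how those lines can be unpacked, assuming the
usual FTS-0004/FTS-0009 conventions; the function names are illustrative and
not taken from any real tosser.

import re

def expand_2d_list(text):
    """Expand a SEEN-BY/PATH list such as '129/14 305' into full net/node
    pairs; a bare node number reuses the most recently seen net."""
    pairs, net = [], None
    for token in text.split():
        if "/" in token:
            net, node = token.split("/", 1)
        else:
            node = token  # net elided: same as the previous entry
        pairs.append(f"{net}/{node}")
    return pairs

def origin_address(msgid):
    """Pull the zone:net/node[.point] address out of a Synchronet-style
    MSGID such as '1828.consprcy@1:2320/105 2da168da'."""
    m = re.search(r"@(\d+:\d+/\d+(?:\.\d+)?)", msgid)
    return m.group(1) if m else None

if __name__ == "__main__":
    print(origin_address("1828.consprcy@1:2320/105 2da168da"))  # 1:2320/105
    print(expand_2d_list("105/81 106/201 128/187 129/14 305"))
    print(expand_2d_list("2320/105 229/426"))  # the PATH line above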
      



(c) 1994,  bbs@darkrealms.ca