
Just a sample of the Echomail archive

Cooperative anarchy at its finest, still active today. Darkrealms is the Zone 1 Hub.

   CONSPRCY      How big is your tinfoil hat?      2,445 messages   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]

   Message 353 of 2,445   
   Mike Powell to All   
   Meta reveals what kinds of AI even it would think too risky to release   
   05 Feb 25 10:07:00   
   
   TZUTC: -0500   
   MSGID: 58.consprcy@1:2320/105 2c08b50c   
   PID: Synchronet 3.20a-Linux master/acc19483f Apr 26 2024 GCC 12.2.0   
   TID: SBBSecho 3.20-Linux master/acc19483f Apr 26 2024 23:04 GCC 12.2.0   
   BBSID: CAPCITY2   
   CHRS: ASCII 1   
   Meta reveals what kinds of AI even it would think too risky to release   
      
   Date:   
   Tue, 04 Feb 2025 15:00:00 +0000   
      
   Description:   
   Meta wants to work with industry experts, governments, and others to stop AI   
   from becoming a problem to society.   
      
   FULL STORY   
   ======================================================================   
    - Meta makes its Frontier AI Framework available to all   
    - The company says it is concerned about AI-induced cybersecurity threats   
    - Risk assessments and modeling will categorize AI models as critical,   
   high, or moderate   
      
   Meta has revealed some concerns about the future of AI despite CEO
   Mark Zuckerberg's well-publicized intentions to make artificial
   general intelligence (AGI) openly available to all.
      
   The company's newly-released Frontier AI Framework explores some critical   
   risks that AI could pose, including its potential implications on   
   cybersecurity and chemical and biological weapons.    
      
   By making its guidelines publicly available, Meta hopes to collaborate with   
   other industry leaders to anticipate and mitigate such risks by identifying   
   potential catastrophic outcomes and threat modeling to establish thresholds.   
      
   Meta wants to prevent catastrophic AI outcomes    
      
   Stating "open sourcing AI is not optional; it is essential," Meta
   outlined in a blog post how sharing research helps organizations learn
   from each other's assessments and encourages innovation.
      
   Its framework works by proactively running periodic threat modeling
   exercises to complement its AI risk assessments; modeling will also be
   used if and when an AI model is identified as potentially exceeding
   current frontier capabilities, at which point it becomes a threat.
      
   These processes are informed by internal and external experts, and
   result in one of three risk categories: critical, where development of
   the model must stop; high, where the model in its current state must
   not be released; and moderate, where further consideration is given to
   the release strategy.
      
   Some threats include the discovery and exploitation of zero-day
   vulnerabilities, automated scams and fraud, and the development of
   high-impact biological agents.
      
   In the framework, Meta writes: "While the focus of this Framework is
   on our efforts to anticipate and mitigate risks of catastrophic
   outcomes, it is important to emphasize that the reason to develop
   advanced AI systems in the first place is because of the tremendous
   potential for benefits to society from those technologies."
      
   The company has committed to updating its framework with the help of   
   academics, policymakers, civil society organizations, governments, and the   
   wider AI community as the technology continues to develop.   
      
   ======================================================================   
   Link to news story:   
   https://www.techradar.com/pro/meta-reveals-what-kinds-of-ai-even-it-would-think-too-risky-to-release   
      
   $$   
   --- SBBSecho 3.20-Linux   
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)   
   SEEN-BY: 105/81 106/201 128/187 129/305 153/7715 154/110 218/700 226/30   
   SEEN-BY: 227/114 229/110 111 114 206 300 307 317 400 426 428 470 664   
   SEEN-BY: 229/700 705 266/512 291/111 320/219 322/757 342/200 396/45   
   SEEN-BY: 460/58 712/848 902/26 2320/0 105 3634/12 5075/35   
   PATH: 2320/105 229/426   
      



(c) 1994, bbs@darkrealms.ca