
Just a sample of the Echomail archive

Cooperative anarchy at its finest, still active today. Darkrealms is the Zone 1 Hub.

   CONSPRCY      How big is your tinfoil hat?      2,445 messages   


   Message 2,263 of 2,445   
   Mike Powell to All   
   Altman bemoans difficulty   
   21 Jan 26 09:15:46   
   
   TZUTC: -0500   
   MSGID: 2021.consprcy@1:2320/105 2dd61d77   
   PID: Synchronet 3.21a-Linux master/123f2d28a Jul 12 2025 GCC 12.2.0   
   TID: SBBSecho 3.28-Linux master/123f2d28a Jul 12 2025 GCC 12.2.0   
   BBSID: CAPCITY2   
   CHRS: ASCII 1   
   FORMAT: flowed   
   "It is genuinely hard; we need to protect vulnerable users, while also making   
   sure our guardrails still allow all of our users to benefit from our tools."   
      
   Sam Altman bemoans the difficulty of keeping ChatGPT safe in contentious   
   debate with Elon Musk   
      
   Date:   
   Wed, 21 Jan 2026 04:00:00 +0000   
      
   Description:   
   Sam Altman defended OpenAI's approach to AI safety in a public clash with Elon
   Musk, revealing the complex challenge of building tools that protect   
   vulnerable users without limiting everyone else.   
      
   FULL STORY   
      
   OpenAI CEO Sam Altman isn't known for oversharing about ChatGPT's inner
   workings, but he admitted to difficulty keeping the AI chatbot both safe
   and useful. Elon Musk seemingly sparked this admission with barbed posts
   on X (formerly Twitter), warning people not to use ChatGPT and sharing an
   article that alleged a connection between the AI assistant and nine
   deaths.
      
   The blistering social media exchange between two of the most powerful figures   
   in artificial intelligence yielded more than bruised egos or legal scars.   
   Musk's post did not refer to the broader context of the deaths or the    
   lawsuits OpenAI is facing related to them, but Altman clearly felt compelled   
   to respond.    
      
   His answer was rather more heartfelt than the usual bland corporate
   boilerplate. Instead, he gave a glimpse of the thinking behind OpenAI's
   tightrope walk of keeping ChatGPT and other AI tools safe for millions
   of people, and defended ChatGPT's architecture and guardrails. "We
   need to protect vulnerable users, while also making sure our guardrails still   
   allow all of our users to benefit from our tools. Sometimes you complain    
   about ChatGPT being too restrictive, and then in cases like this you claim   
   it's too relaxed. Almost a billion people use it and some of them may be in   
   very fragile mental states. We will continue to do our best to get this   
   right."  --  https://t.co/U6r03nsHzg January 20, 2026   
      
   After defending OpenAI's safety protocols and noting the complexity of
   balancing harm reduction with product usefulness, Altman implied Musk
   had no standing to lob accusations, given the dangers of Tesla's
   Autopilot system.
      
   He said his own experience with it was enough to convince him it was far
   from a safe thing for Tesla to have released. In an especially pointed
   aside at Musk, he added, "I won't even start on some of the Grok
   decisions."
      
   As the exchange ricocheted across platforms, what stood out most wasn't
   the usual billionaire posturing but Altman's unusually candid framing of
   what AI safety actually entails. For OpenAI, a company simultaneously
   deploying ChatGPT to schoolkids, therapists, programmers, and CEOs,
   defining "safe" means threading the needle between usefulness and harm
   avoidance, objectives that often conflict.
      
   Altman has not publicly commented on the individual wrongful death lawsuits   
   filed against OpenAI. He has, however, insisted that acknowledging real-world   
   harm doesn't require oversimplifying the problem. AI reflects its
   inputs, and its evolving responses mean moderation and safety demand
   more than the usual terms of service.
      
   ChatGPT's safety struggle    
      
   OpenAI claims to have worked hard to make ChatGPT safer with each new
   version. There's a whole suite of safety features, including classifiers
   trained to detect signs of distress such as suicidal ideation. ChatGPT
   issues disclaimers, halts certain
   interactions, and directs users to mental health resources when it detects   
   warning signs. OpenAI also claims its models will refuse to engage with   
   violent content whenever possible.    
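   
   In code terms, the flow described here amounts to a guardrail layer
   sitting between the user and the model. The short Python sketch below is
   only an illustration of that shape; the keyword list, crisis text, and
   function names are invented stand-ins, since OpenAI's actual classifiers
   and wording are not public.
   
       # Hypothetical guardrail layer; a sketch, not OpenAI's implementation.
       DISTRESS_SIGNALS = {"hopeless", "no way out"}     # invented stand-in
       CRISIS_RESOURCE = "You are not alone. In the US, call or text 988."
   
       def shows_distress(message: str) -> bool:
           # Real systems use trained classifiers, not keyword matching;
           # this stand-in only marks where the detection step happens.
           text = message.lower()
           return any(signal in text for signal in DISTRESS_SIGNALS)
   
       def guarded_reply(message: str, model) -> str:
           if shows_distress(message):
               # Halt the normal interaction and direct to resources.
               return CRISIS_RESOURCE
           # Otherwise pass the message through; disclaimers can be added
           # on sensitive but still-allowed topics.
           return model(message)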
      
   The public might think this is straightforward, but Altman's post
   gestures at an underlying tension. ChatGPT is deployed across billions
   of unpredictable conversations spanning languages, cultures, and
   emotional states.
   Overly rigid moderation would make the AI useless in many of those   
   circumstances, yet easing the rules too much would multiply the potential    
   risk of dangerous and unhealthy interactions.    
      
   Comparing AI chatbots to automated car pilots is not a perfect analogy,
   despite Altman's comment. That said, one could argue that while roads are
   regulated, regardless of whether a human or robot is behind the wheel, AI   
   prompts are on a more rugged trail. There is no central traffic authority for   
   how a chatbot should respond to a teenager in crisis or answer someone with   
   paranoid delusions. In this vacuum, companies like OpenAI are left to build   
   their own rules and refine them on the fly.    
      
   The personal element adds another layer to the argument, too. Altman and   
   Musk's companies are in a protracted legal battle. Musk is suing OpenAI and   
   Altman over the company's transition from a nonprofit research lab to a
   capped-profit model, alleging that he was misled when he donated $38 million   
   to help found the organization. He claims the company now prioritizes   
   corporate gain over public benefit. Altman says the shift was necessary to   
   build competitive models and keep AI development on a responsible track. The   
   safety conversation is a philosophical and engineering facet of a war in   
   boardrooms and courtrooms over what OpenAI should be.    
      
   Whether or not Musk and Altman ever agree on the risks, or even speak civilly   
   online, all AI developers might do well to follow Altman's lead in being
   more transparent about what AI safety looks like and how to achieve it.
      
   ======================================================================   
   Link to news story:   
   https://www.techradar.com/ai-platforms-assistants/it-is-genuinely-hard-we-need   
   -to-protect-vulnerable-users-while-also-making-sure-our-guardrails-still-allow   
   -all-of-our-users-to-benefit-from-our-tools-sam-altman-bemoans-the-difficulty-   
   of-keeping-chatgpt-safe-in-contentious-debate-with-elon-musk   
      
   $$   
   --- SBBSecho 3.28-Linux   
    * Origin: Capitol City Online (1:2320/105)   
   SEEN-BY: 105/81 106/201 128/187 129/14 305 153/7715 154/110 218/700   
   SEEN-BY: 226/30 227/114 229/110 134 206 300 307 317 400 426 428 470   
   SEEN-BY: 229/664 700 705 266/512 291/111 320/219 322/757 342/200 396/45   
   SEEN-BY: 460/58 633/280 712/848 902/26 2320/0 105 304 3634/12 5075/35   
   PATH: 2320/105 229/426   
      



(c) 1994, bbs@darkrealms.ca