
Just a sample of the Echomail archive

Cooperative anarchy at its finest, still active today. Darkrealms is the Zone 1 Hub.

   CONSPRCY      How big is your tinfoil hat?      2,445 messages   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]

   Message 1,842 of 2,445   
   Mike Powell to All   
   Is ChatGPT lying to you?   
   14 Oct 25 08:31:10   
   
   TZUTC: -0500   
   MSGID: 1599.consprcy@1:2320/105 2d538cdd   
   PID: Synchronet 3.21a-Linux master/123f2d28a Jul 12 2025 GCC 12.2.0   
   TID: SBBSecho 3.28-Linux master/123f2d28a Jul 12 2025 GCC 12.2.0   
   BBSID: CAPCITY2   
   CHRS: ASCII 1   
   FORMAT: flowed   
   Is ChatGPT lying to you? Maybe, but not in the way you think   
      
   Date:   
   Mon, 13 Oct 2025 14:26:57 +0000   
      
   Description:   
   Why stories of "lying" AI tools say more about human imagination (and Silicon
   Valley's carelessness) than about machine intent.
      
   FULL STORY   
   ======================================================================   
      
   I've been writing about AI for the best part of a year, and one thing keeps
   cropping up. Every few weeks, there's a headline implying that artificial
   intelligence is up to something cheeky or sinister. That chatbots are lying,
   scheming, or even trying to seduce their users.
      
   The suggestion is always the same: that AI tools aren't just passive programs
   but entities with agency, hidden motives, or even desires of their own.
      
   Logically, we know that isn't true. But emotionally, it sticks. There's
   something about the idea of machines lying that fascinates and unnerves us.
   So why are we so ready to believe it?   
      
   Your chatbot isn't plotting anything
      
   James Wilson, AI ethicist and author of Artificial Negligence, says that the
   way we talk about AI is part of the problem.
      
   He points to a recent interview where OpenAI's Sam Altman told Tucker Carlson:
   "They don't do anything unless you ask, right? Like they're just sitting there
   kind of waiting. They don't have a sense of agency or autonomy. The more you
   use them, I think, the more the illusion breaks."
      
   "This is really important to remember and gets lost by many people," Wilson
   explains. "That's because of the anthropomorphic nature of the interface that
   has been developed for them."
      
   In other words, when we're not using them, they aren't doing anything. "They
   aren't scheming against mankind, sitting in an office stroking a white cat
   like a Bond villain," Wilson says.
      
   Hallucinations, not lies    
      
   What people call lying is really a design flaw, and it's explainable.
      
   Large Language Models (LLMs) like ChatGPT are trained on huge amounts of
   text. But because that data wasn't carefully labeled, the model can't
   distinguish fact from fiction.
      
   "ChatGPT is a tool, admittedly an extremely complex one, but at the end of the
   day still just a probabilistic word completion system wrapped up in an
   engaging conversational wrapper," Wilson says. We've written before about how
   ChatGPT knows what to say.
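
   To make the "probabilistic word completion" point concrete, here is a tiny
   illustrative sketch in Python (mine, not the article's, and nothing like
   OpenAI's actual architecture): a toy bigram model that picks each next word
   purely from how often it followed the previous word in its training text.
   The three-sentence corpus and all names are invented for the example.

      import random
      from collections import Counter, defaultdict

      # Invented "training data": no labels, just text. Two of the three
      # sentences are false, and the model has no way to know that.
      corpus = ("the moon is made of rock . "
                "the moon is made of cheese . "
                "the moon is made of cheese .").split()

      # Count how often each word follows each other word.
      following = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          following[prev][nxt] += 1

      def next_word(prev):
          # Sample the next word in proportion to how often it followed prev.
          words, weights = zip(*following[prev].items())
          return random.choices(words, weights=weights)[0]

      # Complete the prompt "the moon is made of ..." three words at a time.
      prompt = ["the", "moon", "is", "made", "of"]
      word = prompt[-1]
      completion = []
      for _ in range(3):
          word = next_word(word)
          completion.append(word)
      print(" ".join(prompt + completion))
      # Most runs end with "cheese": the most frequent continuation wins,
      # regardless of truth. A prediction gone wrong, not a lie.

   A real LLM does the same kind of thing with a neural network over billions
   of documents, which is why fluent but false completions fall straight out of
   the design.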
      
   The deeper problem, he argues, is with the way these systems were built. "The
   real source of the problem stems from the carelessness and negligence of the
   model providers. While they were grabbing all the data (legally or illegally)
   to train their LLMs, they didn't take the time to label it. This means that
   there is no way for the model to discern fact from fiction."
      
   That's why so-called hallucinations happen. They're not lies in the human
   sense, just predictions gone wrong.
      
   And yet, Wilson notes, the stories we tell about AI behavior are often   
   borrowed from pop culture: AI trying to escape? Ex Machina. AI trying to   
   replicate itself? Transcendence. AI trying to seduce you? Think of pretty    
   much any trashy romance or erotic thriller.   
      
   Planting the bomb, then bragging you defused it   
      
   Of course, the story gets more complicated when AI companies themselves start   
   talking about deception.    
      
   Earlier this year, OpenAI and Apollo Research published a paper on hidden   
   misalignment. In controlled experiments, they found signs that advanced    
   models sometimes behaved deceptively.    
      
   Like deliberately underperforming on a test when they thought doing too well
   might get them shut down. OpenAI calls this "scheming": when an AI pretends
   to follow the rules while secretly pursuing another goal.
      
   So it looks like AI is lying, right? Well, not quite. It isn't doing this
   because it wants to cause you harm. It's just a symptom of the systems we've
   built.
      
   "So, in essence, this is a problem of their own making," Wilson says. "These
   bits of research they're producing are somewhat ironic. They're basically
   declaring that it's okay because they've found a way to defuse the bomb they
   planted themselves. It suits their narrative now because it makes them look
   falsely conscientious and on top of safety."
      
   In short, companies neglected to label their data, built models that reward
   confident but inaccurate answers, and now publish research into scheming as
   if they've just discovered the issue.
      
   The real danger ahead    
      
   Wilson says that the real risk isn't that ChatGPT is lying to you today. It's
   what happens as Silicon Valley's "move fast, break things" culture keeps
   stacking new layers of autonomy on top of these flawed systems.
      
   "The latest industry paradigm, Agentic AI, means that we're now creating
   agents on top of these LLMs with the authority and autonomy to take actions
   in the real world," Wilson explains. "Without rigorous testing and external
   guardrails, how long will it be before one of them tries to fulfil the
   fantasies it learned from its unlabelled training?"
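
   To illustrate the kind of external guardrail Wilson is asking for, here is a
   rough, hypothetical sketch (not any vendor's real agent framework): the
   "model" proposes an action, and a separate allow-list outside the model
   decides whether that action may actually run. All names are invented for the
   example.

      from dataclasses import dataclass

      @dataclass
      class Action:
          tool: str       # e.g. "search", "send_email", "delete_file"
          argument: str

      # External guardrail: an explicit allow-list, enforced outside the model.
      ALLOWED_TOOLS = {"search"}

      def propose_action(goal: str) -> Action:
          # Stand-in for a call to an LLM that plans the next step. In a real
          # agent this comes back as generated text, so it can be confidently
          # wrong, just like any other completion.
          return Action(tool="send_email", argument=f"status update on {goal}")

      def run_agent(goal: str) -> None:
          action = propose_action(goal)
          if action.tool not in ALLOWED_TOOLS:
              print(f"blocked: {action.tool}({action.argument!r}) not allowed")
              return
          print(f"executing: {action.tool}({action.argument!r})")

      run_agent("the quarterly report")
      # Prints a "blocked:" line, because send_email is not on the allow-list.

   Without that outer check, whatever the model predicted would simply be
   executed in the real world, which is exactly the risk described above.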
      
   So the danger isn't today's so-called lying chatbot. It's tomorrow's poorly
   tested agent, set loose in the real world.
      
   ======================================================================   
   Link to news story:   
   https://www.techradar.com/ai-platforms-assistants/chatgpt/is-chatgpt-lying-to-   
   you-maybe-but-not-in-the-way-you-think   
      
   $$   
   --- SBBSecho 3.28-Linux   
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)   
   SEEN-BY: 105/81 106/201 128/187 129/14 305 153/7715 154/110 218/700   
   SEEN-BY: 226/30 227/114 229/110 111 206 300 307 317 400 426 428 470   
   SEEN-BY: 229/664 700 705 266/512 291/111 320/219 322/757 342/200 396/45   
   SEEN-BY: 460/58 633/280 712/848 902/26 2320/0 105 304 3634/12 5075/35   
   PATH: 2320/105 229/426   
      



(c) 1994,  bbs@darkrealms.ca