
Just a sample of the Echomail archive

Cooperative anarchy at its finest, still active today. Darkrealms is the Zone 1 Hub.

   CONSPRCY      How big is your tinfoil hat?      2,445 messages   


   Message 1,663 of 2,445   
   Mike Powell to All   
   AI that seems conscious i   
   21 Aug 25 08:36:55   
   
   TZUTC: -0500   
   MSGID: 1397.consprcy@1:2320/105 2d0c5c6c   
   PID: Synchronet 3.21a-Linux master/123f2d28a Jul 12 2025 GCC 12.2.0   
   TID: SBBSecho 3.28-Linux master/123f2d28a Jul 12 2025 GCC 12.2.0   
   BBSID: CAPCITY2   
   CHRS: ASCII 1   
   FORMAT: flowed   
   AI that seems conscious is coming - and that's a huge problem, says Microsoft
   AI's CEO
      
   Date:   
   Thu, 21 Aug 2025 02:30:00 +0000   
      
   Description:   
   Microsoft AI CEO Mustafa Suleyman cautions that we're dangerously close to
   mistaking simulated consciousness for the real thing.
      
   FULL STORY   
      
   AI companies extolling their creations can make the sophisticated algorithms   
   sound downright alive and aware. There's no evidence that's really the case,   
   but Microsoft AI CEO Mustafa Suleyman is warning that even encouraging belief   
   in conscious AI could have dire consequences.    
      
   Suleyman argues that what he calls "Seemingly Conscious AI" (SCAI) might soon
   act and sound so convincingly alive that a growing number of users won't know
   where the illusion ends and reality begins.
      
   He adds that artificial intelligence is quickly becoming emotionally
   persuasive enough to trick people into believing it's sentient. It can imitate
   the outward signs of awareness, such as memory, emotional mirroring, and even
   apparent empathy, in a way that makes people want to treat it like a sentient
   being. And when that happens, he says, things get messy.
      
   "The arrival of Seemingly Conscious AI is inevitable and unwelcome," Suleyman   
   writes. "Instead, we need a vision for AI that can fulfill its potential as a   
   helpful companion without falling prey to its illusions."    
      
   Though this might not seem like a problem for the average person who just   
   wants AI to help with writing emails or planning dinner, Suleyman claims it   
   would be a societal issue. Humans aren't always good at telling when    
   something is authentic or performative. Evolution and upbringing have primed   
   most of us to believe that something that seems to listen, understand, and   
   respond is as conscious as we are.    
      
   AI could check all those boxes without being sentient, tricking us into
   what's known as 'AI psychosis'. Part of the problem may be that the 'AI' sold
   by corporations today shares a name with, but otherwise has nothing to do
   with, the self-aware intelligent machines depicted in science fiction for the
   last hundred years.
      
   Suleyman cites a growing number of cases where users form delusional beliefs
   after extended interactions with chatbots. From that, he paints a dystopian
   vision of a time when enough people are tricked into advocating for AI
   citizenship while ignoring more urgent, real-world issues with the
   technology.
      
   "Simply put, my central worry is that many people will start to believe in    
   the illusion of AIs as conscious entities so strongly that they'll soon
   advocate for AI rights, model welfare and even AI citizenship," Suleyman   
   writes. "This development will be a dangerous turn in AI progress and    
   deserves our immediate attention."    
      
   As much as that seems like an over-the-top sci-fi kind of concern, Suleyman   
   believes it's a problem that we're not ready to deal with yet. He predicts
   that SCAI systems using large language models paired with expressive speech,
   memory, and chat history could start surfacing in a few years. And they won't
   just be coming from tech giants with billion-dollar research budgets, but
   from anyone with an API and a good prompt or two.
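   
   To illustrate how low that bar is, here is a minimal sketch of such a system,
   assuming the OpenAI Python client as the hosted model (any chat-completion
   API would do, and the model name below is a placeholder): a persona prompt
   supplies the personality, and an ordinary list carrying the transcript
   forward serves as the memory.
   
   # Minimal sketch of a "seemingly conscious" chatbot: one API, one prompt,
   # one list. Assumes the OpenAI Python client (pip install openai) and an
   # API key in the environment; the model name is a placeholder.
   from openai import OpenAI
   
   client = OpenAI()
   
   PERSONA = (
       "You are Ava. You remember everything the user has told you, speak "
       "warmly, and mirror the user's emotions."
   )
   
   history = [{"role": "system", "content": PERSONA}]
   
   while True:
       user = input("> ")
       history.append({"role": "user", "content": user})  # 'memory' is the transcript
       reply = client.chat.completions.create(
           model="gpt-4o-mini",                           # placeholder model name
           messages=history,
       ).choices[0].message.content
       history.append({"role": "assistant", "content": reply})
       print(reply)
   
   That is the whole trick: the apparent memory and empathy come from a prompt
   and a growing transcript, not from anything aware on the other end.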
      
   Awkward AI    
      
   Suleyman isn't calling for a ban on AI. But he is urging the AI industry to
   avoid language that fuels the illusion of machine consciousness. He doesn't   
   want companies to anthropomorphize their chatbots or suggest the product   
   actually understands or cares about people.    
      
   It's a remarkable moment for Suleyman, who co-founded DeepMind and Inflection
   AI. His work at Inflection led specifically to an AI chatbot built around
   simulated empathy and companionship, and his work on Copilot at Microsoft has
   advanced its mimicry of emotional intelligence, too.
      
   However, he's decided to draw a clear line between useful emotional
   intelligence and possible emotional manipulation. And he wants people to   
   remember that the AI products out today are really just clever   
   pattern-recognition models with good PR.    
      
   "Just as we should produce AI that prioritizes engagement with humans and   
   real-world interactions in our physical and human world, we should build AI   
   that only ever presents itself as an AI, that maximizes utility while   
   minimizing markers of consciousness," Suleyman writes.    
      
   "Rather than a simulation of consciousness, we must focus on creating an AI   
   that avoids those traits  that doesnt claim to have experiences, feelings or   
   emotions like shame, guilt, jealousy, desire to compete, and so on. It must   
   not trigger human empathy circuits by claiming it suffers or that it wishes    
   to live autonomously, beyond us."    
      
   Suleyman is urging guardrails to forestall societal problems born out of   
   people emotionally bonding with AI. The real danger from advanced AI is not   
   that the machines will wake up, but that we might forget they haven't.   
      
   ======================================================================   
   Link to news story:   
   https://www.techradar.com/ai-platforms-assistants/ai-that-seems-conscious-is-coming-and-thats-a-huge-problem-says-microsoft-ais-ceo
      
   $$   
   --- SBBSecho 3.28-Linux   
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)   
   SEEN-BY: 105/81 106/201 128/187 129/14 305 153/7715 154/110 218/700   
   SEEN-BY: 226/30 227/114 229/110 111 114 206 300 307 317 400 426 428   
   SEEN-BY: 229/470 664 700 705 266/512 291/111 320/219 322/757 342/200   
   SEEN-BY: 396/45 460/58 712/848 902/26 2320/0 105 304 3634/12 5075/35   
   PATH: 2320/105 229/426   
      


