
Just a sample of the Echomail archive

Cooperative anarchy at its finest, still active today. Darkrealms is the Zone 1 Hub.

   CONSPRCY      How big is your tinfoil hat?      2,445 messages   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]

   Message 309 of 2,445   
   Mike Powell to All   
   The AI lie: how trillion-   
   23 Jan 25 10:45:00   
   
   TZUTC: -0500   
   MSGID: 13.consprcy@1:2320/105 2bf79a3a   
   PID: Synchronet 3.20a-Linux master/acc19483f Apr 26 2024 GCC 12.2.0
   TID: SBBSecho 3.20-Linux master/acc19483f Apr 26 2024 23:04 GCC 12.2.0   
   BBSID: CAPCITY2   
   CHRS: ASCII 1   
   The AI lie: how trillion-dollar hype is killing humanity   
      
   Date:   
   Thu, 23 Jan 2025 09:27:12 +0000   
      
   Description:   
   AI giants promised perfect accuracy. Instead their models are endangering   
   users. Can integrating humans help?   
      
   FULL STORY   
   ======================================================================   
      
   AI companies like Google, OpenAI, and Anthropic want you to believe we're on
   the cusp of Artificial General Intelligence (AGI): a world where AI tools can
   outthink humans, handle complex professional tasks without breaking a sweat,
   and chart a new frontier of autonomous intelligence. Google just rehired the
   founder of Character.AI to accelerate its quest for AGI, OpenAI recently
   released its first reasoning model, and Anthropic's CEO Dario Amodei says AGI
   could be achieved as early as 2026.
      
   But here's the uncomfortable truth: in the quest for AGI in high-stakes fields
   like medicine, law, veterinary advice, and financial planning, AI isn't just
   not there yet; it may never get there.
      
   The Hard Facts on AI's Shortcomings
      
   This year, Purdue researchers presented a study showing ChatGPT got   
   programming questions wrong 52% of the time. In other equally high-stakes   
   categories, GenAI does not fare much better.    
      
   When people's health, wealth, and well-being hang in the balance, the current
   high failure rates of GenAI platforms are unacceptable. The hard truth is
   that this accuracy issue will be extremely challenging to overcome.
      
   A recent Georgetown study suggests it might cost a staggering $1 trillion to
   improve AI's quality by just 10%. Even then, it would remain worlds away from
   the reliability that matters in life-and-death scenarios. The last mile of
   accuracy, in which AI becomes undeniably safer than a human expert, will be
   far harder, more expensive, and more time-consuming to achieve than the
   public has been led to believe.
      
   AI's inaccuracy doesn't just have theoretical or academic consequences. A
   14-year-old boy recently sought guidance from an AI chatbot and, instead of
   directing him toward help, mental health resources, or even common decency,
   the AI urged him to take his own life. Tragically, he did. His family is now
   suing (and they'll likely win) because the AI's output wasn't just a
   hallucination or cute error. It was catastrophic, and it came from a system
   that was wrong with utter conviction. Like the reckless Cliff Clavin (who
   wagered his entire Jeopardy winnings on the TV show Cheers), AI brims with
   confidence while spouting the completely wrong answer.
      
   The Mechanical Turk 2.0 - With a Twist   
      
   Today's AI hype recalls the infamous 18th-century Mechanical Turk: a supposed
   chess-playing automaton that actually had a human hidden inside. Modern AI
   models also hide a dirty secret: they rely heavily on human input.
      
   From annotating and cleaning training data to moderating the content of
   outputs, tens of millions of humans are still enmeshed in almost every step
   of advancing GenAI, but the big foundational model companies can't afford to
   admit this. Doing so would be acknowledging how far we are from true AGI.
   Instead, these platforms are locked into a "fake it till you make it"
   strategy, raising billions to buy more GPUs on the flimsy promise that brute
   force will magically deliver AGI.
      
   It's a pyramid scheme of hype: persuade the public that AGI is imminent,
   secure massive funding, build more giant data centers that burn more energy,   
   and hope that, somehow, more compute will bridge the gap that honest science   
   says may never be crossed.    
      
   This is painfully reminiscent of the buzz around Alexa, Cortana, Bixby, and   
   Google Assistant just a decade ago. Users were told voice assistants would   
   take over the world within months. Yet today, many of these devices gather   
   dust, mostly relegated to setting kitchen timers or giving the day's weather.
   The grand revolution never happened, and it's a cautionary tale for today's
   even grander AGI promises.
      
   Shielding Themselves from Liability    
      
   Why wouldn't major AI platforms just admit the truth about their accuracy?
   Because doing so would open the floodgates of liability.    
      
   Acknowledging fundamental flaws in AI's reasoning would provide a smoking gun
   in court, as in the tragic case of the 14-year-old boy. With trillions of
   dollars at stake, no executive wants to hand a plaintiff's lawyer the
   ultimate piece of evidence: "We knew it was dangerously flawed, and we
   shipped it anyway."
      
   Instead, companies double down on marketing spin, calling these deadly
   mistakes "hallucinations," as though that's an acceptable trade-off. If a
   doctor told a child to kill himself, should we call that a hallucination? Or
   should we call it what it is: an unforgivable failure that deserves full
   legal consequence and permanent revocation of advice-giving privileges?
      
   AI's adoption plateau
      
   People learned quickly that Alexa and the other voice assistants could not
   reliably answer their questions, so they just stopped using them for all but
   the most basic tasks. AI platforms will inevitably hit an adoption wall,
   endangering their current users while scaring away others who might
   otherwise rely on or try their platforms.
      
   Think about the ups and downs of self-driving cars: despite carmakers' huge
   autonomy promises (Tesla has committed to driverless robotaxis by 2027),
   Goldman Sachs recently lowered its expectations for the use of even partially
   autonomous vehicles. Until autonomous cars meet a much higher standard, many
   humans will withhold complete trust.
      
   Similarly, many users won't put their full trust in AI even if it one day
   equals human intelligence; it must be vastly more capable than even the
   smartest human. Other users will be lulled in by AI's ability to answer
   simple questions, then burned when they make high-stakes inquiries. For
   either group, AI's shortcomings won't make it a sought-after tool.
      
   A Necessary Pivot: Incorporate Human Judgment   
      
   These flawed AI platforms can't be used for critical tasks until they either
   achieve the mythical AGI status or incorporate reliable human judgment.
      
   Given the trillion-dollar cost projections, the environmental toll of massive
   data centers, and mounting human casualties, the choice is clear: put human
   expertise at the forefront. Let's stop pretending that AGI is right around
   the corner. That false narrative is deceiving some people and literally
   killing others.
      
   Instead, use AI to empower humans and create new jobs where human judgment
   moderates machine output. Make the experts visible rather than hiding them
   behind a smokescreen of corporate bravado. Until and unless AI attains
   near-perfect reliability, human professionals are indispensable. It's time we
   stop the hype, face the truth, and build a future where AI serves humanity
   instead of endangering it.
      
   This article was produced as part of TechRadarPro's Expert Insights channel,
   where we feature the best and brightest minds in the technology industry
   today. The views expressed here are those of the author and are not
   necessarily those of TechRadarPro or Future plc. If you are interested in
   contributing, find out more here:
   https://www.techradar.com/news/submit-your-story-to-techradar-pro   
      
   ======================================================================   
   Link to news story:   
   https://www.techradar.com/pro/the-ai-lie-how-trillion-dollar-hype-is-killing-humanity
      
   $$   
   --- SBBSecho 3.20-Linux   
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)   
   SEEN-BY: 105/81 106/201 128/187 129/305 153/7715 154/110 218/700 226/30   
   SEEN-BY: 227/114 229/110 111 114 206 300 307 317 400 426 428 470 664   
   SEEN-BY: 229/700 705 266/512 291/111 320/219 322/757 342/200 396/45   
   SEEN-BY: 460/58 712/848 902/26 2320/0 105 3634/12 5075/35   
   PATH: 2320/105 229/426   
      



(c) 1994,  bbs@darkrealms.ca