
Just a sample of the Echomail archive

Cooperative anarchy at its finest, still active today. Darkrealms is the Zone 1 Hub.

   CONSPRCY      How big is your tinfoil hat?      2,445 messages   


   Message 1,867 of 2,445   
   Mike Powell to All   
   AI routinely misrepresent   
   24 Oct 25 09:46:33   
   
   TZUTC: -0500   
   MSGID: 1624.consprcy@1:2320/105 2d60cda1   
   PID: Synchronet 3.21a-Linux master/123f2d28a Jul 12 2025 GCC 12.2.0   
   TID: SBBSecho 3.28-Linux master/123f2d28a Jul 12 2025 GCC 12.2.0   
   BBSID: CAPCITY2   
   CHRS: ASCII 1   
   FORMAT: flowed   
   Think you can trust ChatGPT and Gemini to give you the news? Here's why you   
   might want to think again   
      
   Date:   
   Thu, 23 Oct 2025 16:39:57 +0000   
      
   Description:   
   AI assistants routinely misrepresent news, raising concerns about   
   misinformation and public trust in the digital age.   
      
   FULL STORY   
      
   When you ask an AI assistant about news and current events you might expect a   
   detached, authoritative answer. But according to a sweeping international   
   study led by the BBC and coordinated by the European Broadcasting Union    
   (EBU), nearly half the time, those answers are wrong, misleading, or just   
   plain made up (anyone who's dealt with the nonsense of Apple's AI-written   
   headlines can relate).    
      
   The report dug into how ChatGPT, Microsoft Copilot, Google Gemini, and
   Perplexity handle news queries across 14 languages in 18 countries,
   analyzing over 3,000 individual responses provided by the AI tools.
   Professional journalists from 22 public media outlets evaluated each answer   
   for accuracy, sourcing, and how well it discerned news from opinion.    
      
   The results were bleak for those relying on AI for their news. The report   
   found that 45% of all answers had a significant issue, 31% had sourcing   
   problems, and 20% were simply inaccurate. This isn't just a matter of one or
   two embarrassing mistakes, like confusing the Prime Minister of Belgium with   
   the frontman of a Belgian pop group. The research found deep, structural   
   issues with how these assistants process and deliver news, regardless of   
   language, country, or platform.   
      
   In some languages, the assistants outright hallucinated details. In others,   
   they attributed quotes to outlets that hadn't published anything even close to
   what was being cited. Context was often missing, with the assistants    
   sometimes giving simplistic or misleading overviews instead of crucial    
   nuance. In the worst cases, that could change the meaning of an entire news   
   story.    
      
   Not every assistant was equally problematic. Gemini misfired in a staggering   
   76% of responses, mostly due to missing or poor sourcing.    
      
   Unlike a Google search, which lets users sift through a dozen sources, a   
   chatbot's answer often feels final. It reads with authority and clarity,   
   giving the impression that it's been fact-checked and edited, when in fact it
   may be little more than a fuzzy collage of half-remembered summaries.    
      
   That's part of why the stakes are so high. And why even partnerships like
   those between ChatGPT and The Washington Post can't solve the problem   
   entirely.   
      
   AI news literacy    
      
   The problem is obvious, especially given how quickly AI assistants are   
   becoming the go-to interface for news. The study cited the 2025 Reuters   
   Institute's Digital News Report estimate that 7% of all online news consumers
   now use an AI assistant to get their information, and 15% of those under 25.   
   People are already asking AI to explain the world to them, and the AI is   
   getting the world wrong a disturbing amount of the time.
      
   If you've ever asked ChatGPT, Gemini, or Copilot to summarize a news event,
   you've probably seen one of these imperfect answers in action. ChatGPT's
   difficulties with searching for the news are well known at this point. But
   maybe you didn't even notice. That's part of the problem: these tools are often
   wrong with such fluency that it doesn't feel like a red flag. That's why media
   literacy and ongoing scrutiny are essential.    
      
   To try to improve the situation, the EBU and its partners released a News   
   Integrity in AI Assistants Toolkit, which serves as an AI literacy starter   
   pack designed to help developers and journalists alike. It outlines both what   
   makes a good AI response and what kinds of failures users and media watchdogs   
   should be looking for.    
      
   Even as companies like OpenAI and Google race ahead with faster, slicker   
   versions of their assistants, these reports show why transparency and   
   accountability are so important. That doesn't mean AI can't be helpful, even
   for curating the endless firehose of news. It does mean that, for now, it
   should come with a disclaimer. And even if it doesn't, don't assume the
   assistant knows best: check your sources, and stick to the most reliable
   ones...
      
   ======================================================================   
   Link to news story:   
   https://www.techradar.com/ai-platforms-assistants/think-you-can-trust-chatgpt-   
   and-gemini-to-give-you-the-news-heres-why-you-might-want-to-think-again   
      
   $$   
   --- SBBSecho 3.28-Linux   
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)   
   SEEN-BY: 105/81 106/201 128/187 129/14 305 153/7715 154/110 218/700   
   SEEN-BY: 226/30 227/114 229/110 111 206 300 307 317 400 426 428 470   
   SEEN-BY: 229/664 700 705 266/512 291/111 320/219 322/757 342/200 396/45   
   SEEN-BY: 460/58 633/280 712/848 902/26 2320/0 105 304 3634/12 5075/35   
   PATH: 2320/105 229/426   
      



(c) 1994,  bbs@darkrealms.ca