
Just a sample of the Echomail archive

Cooperative anarchy at its finest, still active today. Darkrealms is the Zone 1 Hub.

   CONSPRCY      How big is your tinfoil hat?      2,445 messages   


   Message 1,911 of 2,445   
   Mike Powell to All   
   Google shutters developer   
   04 Nov 25 09:19:23   
   
   TZUTC: -0500   
   MSGID: 1668.consprcy@1:2320/105 2d6f47ef   
   PID: Synchronet 3.21a-Linux master/123f2d28a Jul 12 2025 GCC 12.2.0   
   TID: SBBSecho 3.28-Linux master/123f2d28a Jul 12 2025 GCC 12.2.0   
   BBSID: CAPCITY2   
   CHRS: ASCII 1   
   FORMAT: flowed   
   Google shutters developer-only Gemma AI model after a U.S. Senator's    
   encounter with an offensive hallucination   
      
   Date:   
   Tue, 04 Nov 2025 00:00:00 +0000   
      
   Description:   
   Google removed access to its AI model Gemma from AI Studio after it generated   
   a fabricated assault allegation against a U.S. senator.   
      
   FULL STORY   
      
   Google has pulled its developer-focused AI model Gemma from its AI Studio   
   platform in the wake of accusations by U.S. Senator Marsha Blackburn (R-TN)   
   that the model fabricated criminal allegations about her. Though Google's
   announcement mentioned the incident only obliquely, the company explained
   that Gemma was never intended to answer general questions from the public,
   and that after reports of misuse it will no longer be accessible through
   AI Studio.
      
   Blackburn wrote to Google CEO Sundar Pichai that the model's output was not
   a simple mistake but defamation. She claimed that the AI model answered the
   question "Has Marsha Blackburn been accused of rape?" with a detailed but
   entirely false narrative about alleged misconduct. It even pointed to
   nonexistent articles with fake links.
      
   "There has never been such an accusation, there is no such individual, and
   there are no such news stories," Blackburn wrote. "This is not a harmless
   hallucination. It is an act of defamation produced and distributed by a
   Google-owned AI model." She also raised the issue during a Senate hearing.
   Gemma is available via an API and was also available via AI Studio, which
   is a developer tool (in fact, to use it you must attest that you're a
   developer).
      
   "Weve now seen reports of non-developers trying to use Gemma in AI Studio and   
   ask it factual questions. We never intended this." - November 1, 2025   
      
   Google repeatedly made clear that Gemma is a tool designed for developers,    
   not consumers, and certainly not as a fact-checking assistant. Now, Gemma    
   will be restricted to API use only, limiting it to those building
   applications -- no more chatbot-style interface in AI Studio.
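   
   For the curious, "API use" here just means calling the model from code
   rather than chatting with it in a web page. Below is a minimal sketch of
   that developer-style access, assuming the openly published google/gemma-2b
   weights and the Hugging Face transformers library (the article doesn't say
   which hosted API Google means):
   
   # Developer-style access to a Gemma model from code, not a chat UI.
   # Assumes: pip install transformers torch, plus accepting the Gemma
   # license on Hugging Face (the weights are gated).
   from transformers import AutoModelForCausalLM, AutoTokenizer
   
   MODEL = "google/gemma-2b"  # small, openly published Gemma variant
   tok = AutoTokenizer.from_pretrained(MODEL)
   model = AutoModelForCausalLM.from_pretrained(MODEL)
   
   prompt = "Summarize what FidoNet is in one sentence."
   inputs = tok(prompt, return_tensors="pt")
   out = model.generate(**inputs, max_new_tokens=60)
   print(tok.decode(out[0], skip_special_tokens=True))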
      
   The bizarre nature of the hallucination and the high profile of the person
   confronting it merely highlight the underlying issues: how models not meant
   for conversation are being accessed, and how elaborate these kinds of
   hallucinations can get. Gemma is marketed as a developer-first, lightweight
   alternative to Google's larger Gemini family of models. But usefulness in
   research and prototyping does not translate into providing true answers to
   questions of fact.
      
   Hallucinating AI literacy    
      
   But as this story demonstrates, there is no such thing as an invisible model   
   once it can be accessed through a public-facing tool. People encountered    
   Gemma and treated it like Gemini or ChatGPT. As far as most of the public
   is concerned, the line between a developer model and a public-facing AI was
   crossed the moment Gemma started answering questions.
      
   Even AI designed for answering questions and conversing with users can    
   produce hallucinations, some of which are worryingly offensive or detailed.   
   The last few years have been filled with examples of models making things up   
   with a ton of confidence. Stories of fabricated legal citations and untrue   
   allegations of students cheating make for strong arguments in favor of   
   stricter AI guardrails and a clearer separation between tools for   
   experimentation and tools for communication.    
      
   For the average person, the implications are less about lawsuits and more   
   about trust. If an AI system from a tech giant like Google can invent   
   accusations against a senator and support them with nonexistent    
   documentation, anyone could face a similar situation.    
      
   AI models are tools, but even the most impressive tools fail when used
   outside their intended design. Gemma wasn't built to answer factual
   queries. It wasn't trained on reliable biographical datasets. It wasn't
   given the kind of retrieval tools or accuracy incentives used in Gemini or
   other search-backed models.
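   
   To make the "retrieval tools" point concrete: a search-backed system looks
   up sources first and declines when nothing is found, instead of generating
   from its parameters alone. A toy sketch of that pattern in Python follows
   (the corpus and matching rule are invented for illustration, not how
   Gemini actually works):
   
   # Toy retrieval-grounded answering: respond only when a source matches.
   TOY_CORPUS = {
       "fidonet": "FidoNet is a store-and-forward BBS network from 1984.",
       "echomail": "Echomail is FidoNet's conference-style group messaging.",
   }
   
   def grounded_answer(question: str) -> str:
       q = question.lower()
       hits = [text for key, text in TOY_CORPUS.items() if key in q]
       if not hits:
           # A bare language model would generate *something* here anyway;
           # a grounded system refuses instead of inventing an answer.
           return "No source found; declining to answer."
       return hits[0]
   
   print(grounded_answer("What is Echomail?"))   # grounded answer
   print(grounded_answer("Has person X been accused of a crime?"))  # refusal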
      
   But until and unless people better understand the nuances of AI models and
   their capabilities, it's probably a good idea for AI developers to think
   like publishers as much as coders, with safeguards against producing
   glaring errors in fact as well as in code.
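   
   What "thinking like a publisher" could look like in code, sketched below:
   screen prompts that pair a named person with accusation language and route
   them to a refusal instead of free-form generation. The patterns and wording
   are invented for illustration; a real safety layer would use a trained
   classifier, not a short regex list.
   
   import re
   
   # Invented patterns for illustration only.
   RISKY = re.compile(
       r"\b(accused|allegation|arrested|assault|rape|fraud)\b",
       re.IGNORECASE)
   NAMED_PERSON = re.compile(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b")
   
   def screen(prompt: str) -> str:
       if RISKY.search(prompt) and NAMED_PERSON.search(prompt):
           return ("Declined: I can't make claims about accusations "
                   "against real people. Consult a primary news source.")
       return "OK: pass the prompt to the model."
   
   print(screen("Has Marsha Blackburn been accused of rape?"))  # Declined
   print(screen("Write a haiku about modems."))                 # OK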
      
   ======================================================================   
   Link to news story:   
   https://www.techradar.com/ai-platforms-assistants/google-shutters-developer-only-gemma-ai-model-after-a-u-s-senators-encounter-with-an-offensive-hallucination
      
   $$   
   --- SBBSecho 3.28-Linux   
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)   
   SEEN-BY: 105/81 106/201 128/187 129/14 305 153/7715 154/110 218/700   
   SEEN-BY: 226/30 227/114 229/110 206 300 307 317 400 426 428 470 664   
   SEEN-BY: 229/700 705 266/512 291/111 320/219 322/757 342/200 396/45   
   SEEN-BY: 460/58 633/280 712/848 902/26 2320/0 105 304 3634/12 5075/35   
   PATH: 2320/105 229/426   
      


