
Just a sample of the Echomail archive

Cooperative anarchy at its finest, still active today. Darkrealms is the Zone 1 Hub.

   CONSPRCY      How big is your tinfoil hat?      2,445 messages   


   Message 1,926 of 2,445   
   Mike Powell to ROB MCCART   
   Google shutters developer   
   06 Nov 25 09:51:48   
   
   TZUTC: -0500   
   MSGID: 1683.consprcy@1:2320/105 2d71f297   
   REPLY: 1676.consprcy@1:2320/105 2d71d52b   
   PID: Synchronet 3.21a-Linux master/123f2d28a Jul 12 2025 GCC 12.2.0   
   TID: SBBSecho 3.28-Linux master/123f2d28a Jul 12 2025 GCC 12.2.0   
   BBSID: CAPCITY2   
   CHRS: ASCII 1   
   FORMAT: flowed   
   > MP>Google has pulled its developer-focused AI model Gemma from its AI Studio
   >   >platform in the wake of accusations by U.S. Senator Marsha Blackburn (R-TN)
   >   >that the model fabricated criminal allegations about her.
      
   > MP>Blackburn wrote to Google CEO Sundar Pichai that the model's output was more
   >   >defamatory than a simple mistake. She claimed that the AI model answered the
   >   >question, "Has Marsha Blackburn been accused of rape?" with a detailed but
   >   >entirely false narrative about alleged misconduct. It even pointed to
   >   >nonexistent articles with fake links to boot.
      
   > It's things like that that make you wonder about whether AI is going off   
   > and doing things it wants rather than just what it was designed to do.   
      
   As the article points out, this particular AI model was not meant for
   answering general questions, but specifically tech/coding questions.  That
   said, one must wonder where it got the information for this
   "hallucination," or whether it was just "doing what it wanted."
      
   There has been a concern that AI models in general will reflect the social,
   political, etc., opinions of their developers, which will taint their
   answers.  That could explain where the info, or at least the slant
   reflected in it, came from.
      
   Some things I have read in recent weeks seem to indicate that these   
   "hallucinations" happen in part because AI is programmed to "please" the   
   user... sort of like a digital "yes man"... so, when it really doesn't know an   
   answer, it tries to come up with one anyway.  That could also explain it.   
      
   Mike   
      
    * SLMR 2.1a * The four snack groups: cakes, crunchies, frozen, sweets.   
   --- SBBSecho 3.28-Linux   
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)   
   SEEN-BY: 105/81 106/201 128/187 129/14 305 153/7715 154/110 218/700   
   SEEN-BY: 226/30 227/114 229/110 206 300 307 317 400 426 428 470 664   
   SEEN-BY: 229/700 705 266/512 291/111 320/219 322/757 342/200 396/45   
   SEEN-BY: 460/58 633/280 712/848 902/26 2320/0 105 304 3634/12 5075/35   
   PATH: 2320/105 229/426   
      


