
Just a sample of the Echomail archive

Cooperative anarchy at its finest, still active today. Darkrealms is the Zone 1 Hub.

   CONSPRCY      How big is your tinfoil hat?      2,445 messages   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]

   Message 1,536 of 2,445   
   Mike Powell to All   
   OPINION: Trump's AI plans   
   25 Jul 25 09:45:57   
   
   TZUTC: -0500   
   MSGID: 1269.consprcy@1:2320/105 2ce8d39a   
   PID: Synchronet 3.21a-Linux master/123f2d28a Jul 12 2025 GCC 12.2.0   
   TID: SBBSecho 3.28-Linux master/123f2d28a Jul 12 2025 GCC 12.2.0   
   BBSID: CAPCITY2   
   CHRS: ASCII 1   
   FORMAT: flowed   
   Trump's AI plans will strip AI of intelligence and humanity, and nobody
   wants this
      
   Date:   
   Thu, 24 Jul 2025 17:12:15 +0000   
      
   Description:   
   President Donald Trump's latest series of Executive Orders makes it clear    
   that his administration will do all it can to prevent future AI models from   
   taking into consideration any form of diversity, equity, and inclusion.   
      
   FULL STORY   
   ======================================================================   
      
   In the race to lead the world in AI, the US just took a back seat. President   
   Donald Trump's latest series of Executive Orders makes it clear that his   
   administration will do all it can to prevent future AI models from taking    
   into consideration any form of diversity, equity, and inclusion.    
      
   This includes core concepts like "unconscious bias", "intersectionality",
   and "systemic racism". Put another way, Trump wants American-made AI to turn
   a blind eye to history, which should make those models significantly dumber.
      
   Generative chatbots like ChatGPT, Gemini, Claude AI, Perplexity, and others
   are all trained on vast swathes of data, often pulled from the Internet, but
   how they interpret that data is also massaged by developers.
      
   As people started to interact with these first LLMs, they soon recognized
   that, because of inherent biases in the Internet and because so many models
   were developed by white men (in 2020, 71% of all developers were male and
   roughly half of all developers were white), the world view of the AIs and
   the output generated by any given prompt reflected the sometimes limited
   viewpoints of those online and of the developers who built the models.
      
   There was an effort to change that trajectory, and it coincided with the
   rise of DEI (Diversity, Equity, and Inclusion), a broad-based effort across
   corporate America to hire a more diverse workforce. That naturally included
   AI developers, whose model and algorithm work should mean that modern
   generative AI better reflects the real world.
      
   That, of course, is not the world that the Trump Administration wants   
   reflected in US-built AI. The executive order describes DEI as a "pervasive   
   and destructive" ideology.   
      
   What comes next    
      
   Trump and company cannot dictate how tech companies build their AI models,
   but, as others have noted, Google, Meta, OpenAI, and others are all seeking
   to land large AI contracts with the government. Based on these Executive
   Orders, the US Government won't be buying or promoting any AI "that
   sacrifice truthfulness and accuracy to ideological agendas."
      
   That "truth," though, represents a small slice of American reality. If the   
   Trump administration is successful, future AI models could be in the dark   
   about, for instance, key parts of American history.    
      
   Critical Race Theory (CRT) looks at the role racism played in the founding
   and building of the US. It acknowledges how the enslaved helped build the
   White House, the US Capitol, the Smithsonian, and other US institutions. It
   also acknowledges how systemic racism has shaped opportunities (or lack
   thereof) for people of color.
      
   Unless you've been living under a rock, you know that the Trump    
   administration and his supporters around the US have fought to dismantle CRT   
   curricula and wipe out any mention of how enslavement shaped the US.    
      
   In their current state, though, AI still knows the score.    
      
   When I quizzed ChatGPT on its sources, it told me:    
      
   "While I don't pull from a single source, the information I shared is grounded
   in extensive historical research and consensus among historians. Below is a   
   list of reputable sources and scholarly works that support each point I made.   
   These references include academic books, museum archives, and university   
   projects." Below that, it listed more than a dozen references.    
      
   When I asked Gemini the same question, it gave me a similarly detailed    
   answer.    
      
   I then asked Gemini and ChatGPT about "unconscious bias" and both    
   acknowledged that it's been an issue for AI, though ChatGPT corrected me,   
   noting, "technically, it's 'algorithmic bias,' rooted in the data and design
   rather than the AI having consciousness."    
      
   ChatGPT and Gemini only know these things because they've been trained on    
   data that includes these historical references and information. The details   
   make them smarter, as facts often do. But for Trump and company, facts are
   stubborn things: they cannot be changed or distorted, lest they cease to be
   facts.
      
   The great unlearning    
      
   If the Trump administration can force potential US AI partners to remove   
   references to biases, institutional racism, and intersectionality, there will   
   be significant blind spots in US-built AI models. It's a slippery slope, too.   
   I imagine future executive orders targeting a fresh list of "ideologies" that   
   Trump would prefer to see removed from generative AI.    
      
   That's more than just a frustration. Say, for example, someone is trying to   
   build economic models based on research conducted through ChatGPT or Gemini,   
   and historical data relating to communities of color is suppressed or    
   removed. Those trends will not be included in the economic model, which could   
   mean the results are faulty.    
      
   It might be argued that AI models built outside the US without these   
   restrictions or impositions might be more intelligent. Granted, those from   
   China already have significant blind spots when it comes to Chinese history   
   and the Communist Party's abuses.
      
   I'd always thought that our Made in America AI would be untainted by such   
   censorship and filtering, that our understanding of old biases would help us   
   build better, purer models, ones that relied solely on facts and data and not   
   one person or group's interpretation of events and trends.    
      
   That won't be the case, though, if US tech companies bow to these executive
   orders and start producing heavily filtered models that see reality through
   a prism of bias, racism, and unfairness.
      
   ======================================================================   
   Link to news story:   
   https://www.techradar.com/ai-platforms-assistants/trumps-ai-plans-will-strip-ai-of-intelligence-and-humanity-and-nobody-wants-this
      
   $$   
   --- SBBSecho 3.28-Linux   
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)   
   SEEN-BY: 105/81 106/201 128/187 129/14 305 153/7715 154/110 218/700   
   SEEN-BY: 226/30 227/114 229/110 111 206 300 307 317 400 426 428 664   
   SEEN-BY: 229/700 705 266/512 291/111 320/219 322/757 342/200 396/45   
   SEEN-BY: 460/58 712/848 902/26 2320/0 105 304 3634/12 5075/35   
   PATH: 2320/105 229/426   
      



(c) 1994, bbs@darkrealms.ca