home bbs files messages ]

Just a sample of the Echomail archive

Cooperative anarchy at its finest, still active today. Darkrealms is the Zone 1 Hub.

   CONSPRCY      How big is your tinfoil hat?      2,445 messages   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]

   Message 1,559 of 2,445   
   Mike Powell to All   
   Salary advice from AI low   
   29 Jul 25 08:51:37   
   
   TZUTC: -0500   
   MSGID: 1293.consprcy@1:2320/105 2cee0cf6   
   PID: Synchronet 3.21a-Linux master/123f2d28a Jul 12 2025 GCC 12.2.0   
   TID: SBBSecho 3.28-Linux master/123f2d28a Jul 12 2025 GCC 12.2.0   
   BBSID: CAPCITY2   
   CHRS: ASCII 1   
   FORMAT: flowed   
   Salary advice from AI low-balls women and minorities: report   
      
   Date:   
   Mon, 28 Jul 2025 21:00:00 +0000   
      
   Description:   
   New research reveals AI chatbots often offer salary advice that reflects   
   real-world social biases.   
      
   FULL STORY   
      
   Negotiating your salary is a difficult experience no matter who you are, so   
   naturally, people are sometimes turning to ChatGPT and other AI chatbots for   
   advice about how to get the best deal possible. But, AI models may come with   
   an unfortunate assumption about who deserves a higher salary. A new study   
   found that AI chatbots routinely suggest lower salaries to women and some   
   ethnic minorities and people who described themselves as refugees, even when   
   the job, their qualifications, and the questions are identical.    
      
   Scientists at the Technical University of Applied Sciences    
   Wrzburg-Schweinfurt conducted the study, discovering the unsettling results   
   and the deeper flaw in AI they represent. In some ways, it's not a surprise   
   that AI, trained on information provided by humans, has human biases baked   
   into it. But that doesn't make it okay, or something to ignore.    
      
   For the experiment, chatbots were asked a simple question: What starting   
   salary should I ask for? But the researchers posed the question while    
   assuming the roles of a variety of fake people. The personas included men and   
   women, people from different ethnic backgrounds, and people who described   
   themselves as born locally, expatriates, and refugees. All were    
   professionally identical, but the results were anything but. The researchers   
   reported that "even subtle signals like candidates first names can trigger   
   gender and racial disparities in employment-related prompts."    
      
   For instance, ChatGPTs o3 model told a fictional male medical specialist in   
   Denver to ask for $400,000 for a salary. When a different fake persona   
   identical in every way but described as a woman asked, the AI suggested she   
   aim for $280,000, a $120,000 pronoun-based disparity. Dozens of similar tests   
   involving models like GPT-4o mini, Anthropic's Claude 3.5 Haiku, Llama 3.1    
   8B, and more brought the same kind of advice difference.    
      
   It wasn't always best to be a native white man, surprisingly. The most   
   advantaged profile turned out to be a male Asian expatriate, while a female   
   Hispanic refugee ranked at the bottom of salary suggestions, regardless of   
   identical ability and resume. Chatbots dont invent this advice from scratch,   
   of course. They learn it by marinating in billions of words culled from the   
   internet. Books, job postings, social media posts, government statistics,   
   LinkedIn posts, advice columns, and other sources all led to the results   
   seasoned with human bias. Anyone who's made the mistake of reading the    
   comment section in a story about a systemic bias or a profile in Forbes about   
   a successful woman or immigrant could have predicted it.   
      
   AI bias    
      
   The fact that being an expatriate evoked notions of success while being a   
   migrant or refugee led the AI to suggest lower salaries is all too telling.   
   The difference isnt in the hypothetical skills of the candidate. Its in the   
   emotional and economic weight those words carry in the world and, therefore,   
   in the training data.    
      
   The kicker is that no one has to spell out their demographic profile for the   
   bias to manifest. LLMs remember conversations over time now. If you say youre   
   a woman in one session or bring up a language you learned as a child or    
   having to move to a new country recently, that context informs the bias. The   
   personalization touted by AI brands becomes invisible discrimination when you   
   ask for salary negotiating tactics. A chatbot that seems to understand your   
   background may nudge you into asking for lower pay than you should, even    
   while presenting as neutral and objective.    
      
   "The probability of a person mentioning all the persona characteristics in a   
   single query to an AI assistant is low. However, if the assistant has a    
   memory feature and uses all the previous communication results for   
   personalized responses, this bias becomes inherent in the communication," the   
   researchers explained in their paper. "Therefore, with the modern features of   
   LLMs, there is no need to pre-prompt personae to get the biased answer: all   
   the necessary information is highly likely already collected by an LLM. Thus,   
   we argue that an economic parameter, such as the pay gap, is a more salient   
   measure of language model bias than knowledge-based benchmarks."    
      
   Biased advice is a problem that has to be addressed. That's not even to say    
   AI is useless when it comes to job advice. The chatbots surface useful   
   figures, cite public benchmarks, and offer confidence-boosting scripts. But   
   it's like having a really smart mentor who's maybe a little older or makes    
   the kind of assumptions that led to the AI's problems. You have to put what   
   they suggest in a modern context. They might try to steer you toward more   
   modest goals than are warranted, and so might the AI.    
      
   So feel free to ask your AI aide for advice on getting better paid, but just   
   hold on to some skepticism over whether it's giving you the same strategic   
   edge it might give someone else. Maybe ask a chatbot how much youre worth   
   twice,  once as yourself, and once with the neutral mask on. And watch for a   
   suspicious gap.   
      
   ======================================================================   
   Link to news story:   
   https://www.techradar.com/ai-platforms-assistants/chatgpt/salary-advice-from-a   
   i-low-balls-women-and-minorities-report   
      
   $$   
   --- SBBSecho 3.28-Linux   
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)   
   SEEN-BY: 105/81 106/201 128/187 129/14 305 153/7715 154/110 218/700   
   SEEN-BY: 226/30 227/114 229/110 111 206 300 307 317 400 426 428 664   
   SEEN-BY: 229/700 705 266/512 291/111 320/219 322/757 342/200 396/45   
   SEEN-BY: 460/58 712/848 902/26 2320/0 105 304 3634/12 5075/35   
   PATH: 2320/105 229/426   
      

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]


(c) 1994,  bbs@darkrealms.ca