
Just a sample of the Echomail archive

Cooperative anarchy at its finest, still active today. Darkrealms is the Zone 1 Hub.

   CONSPRCY      How big is your tinfoil hat?      2,445 messages   


   Message 2,133 of 2,445   
   Mike Powell to All   
   Keep kids safe in AI World
   30 Dec 25 09:42:02   
   
   TZUTC: -0500   
   MSGID: 1890.consprcy@1:2320/105 2db9221c   
   PID: Synchronet 3.21a-Linux master/123f2d28a Jul 12 2025 GCC 12.2.0   
   TID: SBBSecho 3.28-Linux master/123f2d28a Jul 12 2025 GCC 12.2.0   
   BBSID: CAPCITY2   
   CHRS: ASCII 1   
   FORMAT: flowed   
   How to keep your kids safe in this AI-powered world   
      
   Date: Mon, 29 Dec 2025 15:00:00 +0000   
      
   Description:   
   AI is suddenly everywhere in children's lives: in their games, homework,
   and chats. Here's how parents can move past fear and feel confident about
   keeping them safe.
      
   FULL STORY   
      
   Many people think of AI as asking ChatGPT for dinner ideas or watching a
   viral video of talking animals. But in a very short time, the technology
   has accelerated. It's now embedded in many parts of daily life, and it's
   already presenting serious problems for children and young people, in some
   cases with tragic consequences.
      
   AI is in your phone, your child's apps, their games, their search tools, and
   increasingly in the places they turn to for help or connection. And while    
   some uses are harmless, others are risky, manipulative, or simply too    
   powerful for a young person to navigate alone.    
      
   From nudifying apps and sextortion scams to emotionally convincing chatbots   
   and endlessly sticky social feeds, the landscape is shifting quickly. Many   
   parents already feel they should have taken social media harms more    
   seriously. With AI, some of the damage is appearing much earlier.    
      
   There have been cases of children allegedly taking their own lives after   
   chatbot interactions, growing dependence on AI friends, and a surge in   
   deepfake-style abuse. If the best time to learn about this was a year ago,    
   the second-best time is right now.    
      
   Think of this guide as a starting point. We'll cover a few of the biggest
   concerns, what experts say needs to change, and the practical steps parents   
   can take today.   
      
   What are the biggest concerns?    
      
   Before anything else, experts say the core issue is simple: most parents don't
   realize how deeply AI is already woven into everyday life.    
      
   "Parents do not fully understand the technologies that are being
   developed," Genevieve Bartuski, a psychologist and consultant specializing
   in ethical AI and the psychology behind digital systems, tells me. "Many
   of them are worried about social media and content on the internet, but
   don't understand how pervasive AI has become."
      
   The best starting point is accepting that even the most tech-confident adults   
   didn't grow up with anything like this. The pace of change has been fast,
   which means risks might not be easy to spot, and the harms involved here can   
   be really different from the social media challenges we already know.    
      
   "It's difficult to single out just one concern," Tara Steele, Director at
   the Safe AI for Children Alliance, says.
      
   The scale of the issue is echoed by Andrew Briercliffe, a consultant
   specializing in online harms, trust, and safety. "We have to remember AI
   is a HUGE space, and can cover everything from misinformation to CSAM
   (Child Sexual Abuse Material) and everything in between," he says.
      
   But even so, there are a few clear areas that the experts are most concerned   
   about.   
      
   Chatbots    
      
   Chatbots are always available, rarely moderated to a standard that's
   appropriate for children and young people, and they're engineered to sound
   confident and caring. It's this combination that experts believe is creating a
   major risk.    
      
   Kids are turning to them for all sorts of reasons, just like we know adults   
   do. This includes emotional support, advice, and, increasingly, mental health   
   help. "Young people are resorting to them instead of seeking professional
   help and guidance," Briercliffe says.
      
   Because there are no real guardrails in place, and because we know these
   systems can confidently present inaccurate information, parents often have
   no idea what is being said to their child in these conversations.

   What is a chatbot? A chatbot is an AI tool that you can talk to in
   everyday language. You type something in, and it responds as if you're
   messaging a person. Tools like ChatGPT, Gemini, and Claude are designed to
   sound friendly, natural, and helpful.
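
   For the curious, the mechanics behind that back-and-forth are simple. The
   sketch below is a minimal, hypothetical Python example of one exchange,
   assuming OpenAI's published Python client; the model name and the message
   are my own illustrative assumptions, not taken from any real child's chat:

      # Minimal sketch of one chatbot round trip, assuming the OpenAI
      # Python client (pip install openai) and an OPENAI_API_KEY set in
      # the environment. Model name and message are illustrative only.
      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      # One user message goes in...
      response = client.chat.completions.create(
          model="gpt-4o-mini",
          messages=[{"role": "user",
                     "content": "I had a bad day at school."}],
      )

      # ...and a confident, friendly, human-sounding reply comes back.
      print(response.choices[0].message.content)

   A chatbot app is typically little more than a loop around a call like
   this, with the "caring" tone supplied by the model and the app's hidden
   instructions.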
      
   "Several studies have shown that it is very common for chatbots to give
   children dangerous advice," Steele adds. This can include encouragement of
   extreme dieting or urging secrecy when a child says they want to confide
   in a teacher or parent.
      
   The consequences of these kinds of conversations can be devastating. "We
   now have many documented cases where children using these tools were
   encouraged to harm themselves, and there are ongoing legal cases in the US
   with strong evidence suggesting that chatbot interactions allegedly played
   a role in children's tragic deaths by suicide," Steele explains. "This
   shows a catastrophic failure of current safety standards and regulatory
   oversight."
      
   One of the core problems lies in how these chatbots are designed. "They're
   designed to feel emotionally real," Steele says. "Children can experience
   a deep sense of trust that makes them more likely to act on what the
   chatbot tells them."
      
   Bartuski explains that Rogerian psychology, which serves up unconditional
   positive regard, is also built into many of these platforms. "It creates a
   synthetic relationship where you are not challenged or have to learn to
   mitigate conflict," she says.
      
   So what feels comforting at first can become dependence with no pushback and   
   constant praise. This can also distort a young person's ability to handle
   real-world relationships.    
      
   "The AI interactions become better than real-life experiences," Bartuski
   tells me. "Real relationships are messy. There are arguments,
   disagreements, and moods. There are also natural boundaries. You can't
   always call your friend at 3 am because she or he might be sleeping. AI is
   always there."
      
   Experts warn that the most serious risks with using chatbots aren't just
   these immediate harms, but the long-term developmental effects we still
   don't fully understand.
      
   There's concern about over-reliance on chatbots, difficulty forming
   relationships, and the way constant AI assistance may shape how a child   
   thinks.    
      
   "There are studies showing that AI is having an impact on critical
   thinking skills," Bartuski explains. "Large language models can synthesize
   a ton of information very quickly. It's like outsourcing your thinking."
      
   Nudifying apps and deepfakes   
      
   Manipulating images isn't new, but AI has made it fast, realistic, and
   accessible, including to young people. These tools can now create convincing   
   sexualized images really quickly, often from nothing more than a school photo   
   or a social media post.    
      
   "Nudifying apps are being used, mainly by male teens, targeting fellow
   students and then sharing content, which can be very distressing for the
   victims," Briercliffe says. "Those doing that aren't aware of how illegal
   it is."
      
   Beyond peer misuse, these tools have quickly become a weapon for extortion,   
   too. "Children are being blackmailed using these kinds of manipulated
   images," Steele adds.
      
   This is one of the most troubling shifts in online harm. Children are
   being manipulated, threatened, or coerced through images that can be
   created instantly, without their knowledge, and without any physical
   contact.

   What is a nudifying app? A nudifying app is software that uses AI to turn
   an ordinary picture into a fake sexualised image. It only takes seconds
   and can be done without the person's consent. When the images involve
   children, it is treated as child sexual abuse material in many countries
   and is a criminal offence.
      
   "I have seen scammers use AI to nudify photos of teenagers and then extort
   them for money," Bartuski tells me. "There was a case in Kentucky where a
   scammer did this to a teenager and threatened to release the photos. The
   teenager completed suicide over the stress of this."
      
   Sadly, this isn't an isolated incident. Back in 2024, research from Internet
   Matters suggested that more than 13% of kids in the UK have sent or received    
   a nude deepfake.    
      
   I know how frightening and shame-inducing these scams can be because I was    
   the victim of a sextortion attempt back in 2024, involving images believed to   
   have been created with a similar kind of nudifying app.    
      
   I was an adult at the time, with support networks and a public platform, and   
   it still made me feel scared, paranoid, and deeply ashamed. I spoke openly   
   about what happened to help others feel less alone, but I can't imagine how
   overwhelming it would have been if I were younger or more vulnerable.   
      
    (continued next message)   
      
   $$   
   --- SBBSecho 3.28-Linux   
    * Origin: Capitol City Online (1:2320/105)   
   SEEN-BY: 105/81 106/201 128/187 129/14 305 153/7715 154/110 218/700   
   SEEN-BY: 226/30 227/114 229/110 134 206 300 307 317 400 426 428 470   
   SEEN-BY: 229/664 700 705 266/512 291/111 320/219 322/757 342/200 396/45   
   SEEN-BY: 460/58 633/280 712/848 902/26 2320/0 105 304 3634/12 5075/35   
   PATH: 2320/105 229/426   
      



(c) 1994, bbs@darkrealms.ca