Just a sample of the Echomail archive
Cooperative anarchy at its finest, still active today. Darkrealms is the Zone 1 Hub.
|    CONSPRCY    |    How big is your tinfoil hat?    |    2,445 messages    |
|    Message 656 of 2,445    |
|    Mike Powell to All    |
|    AI doesn't belong in the classroom unless you want kids to learn all the wrong lessons    |
|    07 Mar 25 09:31:00    |
TZUTC: -0500
MSGID: 370.consprcy@1:2320/105 2c303e7e
PID: Synchronet 3.20a-Linux master/acc19483f Apr 26 2024 GCC 12.2.0
TID: SBBSecho 3.20-Linux master/acc19483f Apr 26 2024 23:04 GCC 12.2.0
BBSID: CAPCITY2
CHRS: ASCII 1

[With a lot of local school systems switching to packaged, online/computerized lesson applications, I wonder if this isn't already happening? -- Mike]

AI doesn't belong in the classroom unless you want kids to learn all the wrong lessons

Date:
Thu, 06 Mar 2025 18:11:27 +0000

Description:
More and more teachers are using AI to teach, and this is probably the last thing we want.

FULL STORY
======================================================================

As a child, I loved fingerpainting and anxiously awaited the weekly, colorful in-class activity. It wasn't so much the art that compelled me; I loved the distinctive smell and visceral feel of the fingerpaint. The entire process felt like an exploration, and through it, I discovered my creativity.

It was messy, chaotic, and crucial, I think, for my development. The new idea in fingerpainting is to separate a child's fingers from the paint: you splash some of the squishy colors onto a canvas, then seal the goop under plastic. The child then basically pushes the colors around without actually touching them.

It's clean, antiseptic, terrible, and a metaphor for what I think AI might be doing to learning.

My concerns were sparked anew by a recent and well-researched story in USA Today explaining "How AI is affecting the way kids learn to read and write."

It's full of details and anecdotes about how teachers are turning to AI in the classroom to help students, for instance, ideate. One teacher complained that the kids' essay ideas were growing "stale," so she's having them use AI to help them come up with better ones.

Antiseptic AI learning

Forget brainstorming in the classroom, kicking around ideas big and small that might spark others. AI offers a valuable shortcut. It also cuts out the messiness of bad ideas. AI's job is not to come up with answers randomly. The large language models (LLMs) behind ChatGPT, for instance, have billions of parameters and have been trained on enormous amounts of text to develop a better understanding of a broad range of topics.

I often describe this as AI's knowing better than us "what comes next." That works in reading, writing, coding, and art. It's not always a clean process, though.

Early AIs (ones from 12 months ago) with somewhat limited training didn't always understand that humans have five fingers on each hand, so we got six fingers and sometimes extra phantom limbs. Interestingly, we seem quite comfortable with AIs learning through their own messy mistakes.

Literacy, the report notes, is dropping among grade-school children, largely because they're doing less reading of long-form content; they mostly read stuff on small screens, if they're not ingesting endless video scrolls; and the pandemic set almost all learning back by a few years.

Educators struggle with this, and AI has arrived as a handy tool for navigating around many of these issues.
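The "what comes next" idea above is next-token prediction: given the text so far, the model picks the most probable continuation. Here is a minimal sketch of that objective in Python, using a toy bigram counter rather than a real LLM; the corpus and function names are illustrative assumptions, not anything from the article or from ChatGPT itself.

    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat the cat ate".split()

    # Count how often each word follows each other word.
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def predict_next(word):
        """Return the word most often seen after `word` in the corpus."""
        candidates = following.get(word)
        if not candidates:
            return "<unknown>"
        return candidates.most_common(1)[0][0]

    print(predict_next("the"))  # -> cat  ("cat" followed "the" twice, "mat" once)
    print(predict_next("cat"))  # -> sat  (tie with "ate"; first-seen wins)

An LLM replaces the frequency table with billions of learned parameters over tokens instead of words, but the training objective is the same guess-the-continuation game.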
Students are also engaging in more back-and-forth with AI for research. While boomers and Gen X might have used encyclopedias, Millennials and Gen Z have largely grown up using the web as a core research tool. They learned how to search on Google and, through trial and error, find the details they needed.

AI, though, is a conversation where the response is presented as fact, and the student assumes it is so. There is no error or assumption of error, and mistakes could easily be hidden in AI hallucinations.

Again, the engagement with a teacher and even other students is lost. Ideas no longer float in the ether. Questions are not shared among a group.

Let's make mistakes

Good teachers used to say, "There's no such thing as a dumb question." Asking "dumb" questions was how we learned. Students using AI are shielded from that moment. They just type in the prompt and the AI responds.

We learn through trial and error, and studies have shown that young minds, in particular, need to learn from the messiness of mistakes.

In a 2016 study, Learning from Errors, researchers wrote, "Although error avoidance during learning appears to be the rule in American classrooms, laboratory studies suggest that it may be a counterproductive strategy, at least for neurologically typical students. Experimental investigations indicate that errorful learning followed by corrective feedback is beneficial to learning."

A world in which students are potentially paired with their own AI chatbot and self-navigate without any experimentation or flat-out mistakes means that the conversation about why the work was wrong will never happen.

An exploration is lost, both for the student, who will not learn the right way or understand how that error might lead to other reasoning dead ends, and for the teacher, who will fail to learn the best way to engage and teach that student.

Outside the classroom, students teach themselves how to use ChatGPT to produce essays and get the best results and grades. At least educators are hip to these efforts. In the USA Today story, one educator who discovered them began running all the essays through AI checkers. Those are, of course, not foolproof.

The sad thing is that I'm not sure we can convince students and their parents that this lack of messiness, error-making, and feedback loops will harm the students. They will not learn as much, and I'm pretty sure their intellectual curiosity and creativity will be stunted.

How do we learn fresh things when our teacher is an AI, one that's been trained on all that was and is still not that good at telling us what comes next?

Look, I am not anti-AI, but AI in the hands of children and young students is like the sealed fingerpainting kit: antiseptic, wrong, and the opposite of the beautiful mess that is learning.
======================================================================
Link to news story:
https://www.techradar.com/computing/artificial-intelligence/ai-doesnt-belong-in-the-classroom-unless-you-want-kids-to-learn-all-the-wrong-lessons

$$
--- SBBSecho 3.20-Linux
 * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)
SEEN-BY: 105/81 106/201 128/187 129/305 153/7715 154/110 218/700 226/30
SEEN-BY: 227/114 229/110 111 114 206 300 307 317 400 426 428 470 664
SEEN-BY: 229/700 705 266/512 291/111 320/219 322/757 342/200 396/45
SEEN-BY: 460/58 712/848 902/26 2320/0 105 3634/12 5075/35
PATH: 2320/105 229/426
(c) 1994, bbs@darkrealms.ca