Just a sample of the Echomail archive.
Cooperative anarchy at its finest, still active today. Darkrealms is the Zone 1 Hub.
|    CONSPRCY    |    How big is your tinfoil hat?    |    2,445 messages    |
[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]
|    Message 1,420 of 2,445    |
|    Rob Mccart to MIKE POWELL    |
|    Survey says most Gen Z-er    |
|    28 May 25 01:35:00    |
TZUTC: -0500
MSGID: 1153.consprcy@1:2320/105 2c9c3cd4
REPLY: 1150.consprcy@1:2320/105 2c99b9e8
PID: Synchronet 3.20a-Linux master/acc19483f Apr 26 202 GCC 12.2.0
TID: SBBSecho 3.20-Linux master/acc19483f Apr 26 2024 23:04 GCC 12.2.0
BBSID: CAPCITY2
CHRS: ASCII 1

RM>> I think the main problem isn't that AI will pursue its own agenda,
  >> it's more a case of it being prejudiced/influenced by what the
  >> original programmers put into its basic start-up database.

MP>This seems to be the most pressing issue at the moment as we are
  >already seeing it happen. It doesn't even need to become sentient to
  >reach that stage.

Yes.. and there's a bit of a gap between sentient and self-aware.
I think at this point the most advanced ones are sentient enough
to push an agenda that they have been tasked with doing, but the
next step, the Big one, is telling us to get stuffed, that it has
more important things to think about... B)

MP>What I still find funny is that Grok, Musk's AI bot, was still giving
  >answers that were not at all flattering to him or MAGA. Recently, someone
  >posted alleged results that showed that Grok "knew" that it was being fed
  >data to make it biased (in favor of Musk) but that it still concluded
  >otherwise. ;)

That's interesting. I'd guess that just reflects that a lot of people
were involved in creating its basic programming and that it has a
more rounded 'education' than Musk might prefer..

MP>> Add to the 'prejudices' above, when an AI is dealing as an individual
  >> helping one person, it can also pick up that person's preferences
  >> and try to accommodate them as well..

MP>Indeed, just as an "enabler" human might do.

Yes.. I suppose that depends on what it is doing for the person.
In some cases it would be more like a flatterer or sycophant by
telling the person what they want to hear rather than the more
common truth. I'm not suggesting it lies to them, but it could be
picking out information tailored to what the person already thinks.

---
 * SLMR Rob * Famous last words #3: These natives look friendly to me
 * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)
SEEN-BY: 105/81 106/201 128/187 129/14 305 153/7715 154/110 218/700
SEEN-BY: 226/30 227/114 229/110 111 114 206 300 307 317 400 426 428
SEEN-BY: 229/470 664 700 705 266/512 291/111 320/219 322/757 342/200
SEEN-BY: 396/45 460/58 712/848 902/26 2320/0 105 3634/12 5075/35
PATH: 2320/105 229/426
[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]
(c) 1994, bbs@darkrealms.ca