Just a sample of the Echomail archive
Cooperative anarchy at its finest, and still active today. Darkrealms is the Zone 1 Hub.
|    CONSPRCY    |    How big is your tinfoil hat?    |    2,445 messages    |
|    Message 371 of 2,445    |
|    Mike Powell to All    |
|    AI Godfather sounds alarm    |
|    07 Feb 25 11:06:00    |
TZUTC: -0500
MSGID: 76.consprcy@1:2320/105 2c0b67d7
PID: Synchronet 3.20a-Linux master/acc19483f Apr 26 2024 GCC 12.2.0
TID: SBBSecho 3.20-Linux master/acc19483f Apr 26 2024 23:04 GCC 12.2.0
BBSID: CAPCITY2
CHRS: ASCII 1

'AI Godfather' sounds the alarm on autonomous AI

Date:
Fri, 07 Feb 2025 02:30:00 +0000

Description:
AI godfather warns about autonomous AI race.

FULL STORY
======================================================================

 - 'AI godfather' Yoshua Bengio warns that the AI race prioritizes speed
   over safety
 - This risks unpredictable and dangerous consequences
 - He urges global cooperation to enforce AI regulations before autonomous
   systems become difficult to control

'AI godfather' Yoshua Bengio helped create the foundations of the neural
networks running all kinds of AI tools today, from chatbots mimicking cartoon
characters to scientific research assistants. Now, he has an urgent warning
for AI developers, as he explained in a Sky News interview. The race to
develop ever-more-powerful AI systems is escalating at a pace that, in his
view, is far too reckless.

And it's not just about which company builds the best chatbot or who gets the
most funding. Bengio believes that the rapid, unregulated push toward
advanced AI could have catastrophic consequences if safety isn't treated as a
top priority.

Bengio described watching developers racing against each other, getting
sloppy, or taking dangerous shortcuts. Though speed can make the difference
between breaking ground on a new kind of product worth billions and playing
catch-up to a rival, it may not be worth it to society.

That pressure has only intensified for AI developers with the rise of Chinese
AI firms like DeepSeek, whose advanced chatbot capabilities have caught the
attention of Western companies and governments alike. Instead of slowing down
and carefully considering the risks, major tech firms are accelerating their
AI development in an all-out sprint for superiority. Bengio worries this will
lead to rushed deployments, inadequate safety measures, and systems that
behave in ways we don't yet fully understand.

Bengio explained that he has been warning about the need for stronger AI
oversight, but recent events have made his message feel even more urgent. The
current moment is a "turning point," where we either implement meaningful
regulations and safety protocols or risk letting AI development spiral into
something unpredictable.

After all, more and more AI systems don't just process information but can
make autonomous decisions. These AI agents are capable of acting on their own
rather than simply responding to user inputs. They're exactly what Bengio
sees as the most dangerous path forward. With enough computing power, an AI
that can strategize, adapt, and take independent actions could quickly become
difficult to control should humans want to take back the reins.

AI takeover

The problem isn't just theoretical. Already, AI models are making financial
trades, managing logistics, and even writing and deploying software with
minimal human oversight. Bengio warns that we're only a few steps away from
much more complex, potentially unpredictable AI behavior. If a system like
this is deployed without strict safeguards, the consequences could range from
annoying hiccups in service to full-on security and economic crises.

Bengio isn't calling for a halt to AI development. He made clear that he's an
optimist about AI's abilities when used responsibly for things like medical
and environmental research. He just sees a need for a priority shift to more
thoughtful and deliberate work on AI technology. His unique perspective may
carry some weight when he calls for AI developers to put ethics and safety
ahead of competing with rival companies. That's why he participates in policy
discussions at events like the upcoming International AI Safety Summit in
Paris.

He also thinks regulation needs to be bolstered by companies willing to take
responsibility for their systems. They need to invest as much in safety
research as they do in performance improvements, he claims, though that
balance is hard to imagine appearing in today's AI melee. In an industry
where speed equals dominance, no company wants to be the first to hit the
brakes.

The global cooperation Bengio pitches might not appear immediately, but as
the AI arms race continues, warnings from Bengio and others in similar
positions of prestige grow more urgent. He hopes the industry will recognize
the risks now rather than when a crisis forces the matter. The question is
whether the world is ready to listen before it's too late.

======================================================================
Link to news story:
https://www.techradar.com/computing/artificial-intelligence/ai-godfather-sounds-the-alarm-on-autonomous-ai

$$
--- SBBSecho 3.20-Linux
 * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)
SEEN-BY: 105/81 106/201 128/187 129/305 153/7715 154/110 218/700 226/30
SEEN-BY: 227/114 229/110 111 114 206 300 307 317 400 426 428 470 664
SEEN-BY: 229/700 705 266/512 291/111 320/219 322/757 342/200 396/45
SEEN-BY: 460/58 712/848 902/26 2320/0 105 3634/12 5075/35
PATH: 2320/105 229/426
(c) 1994, bbs@darkrealms.ca