Just a sample of the Echomail archive: cooperative anarchy at its finest, and still active today. Darkrealms is the Zone 1 Hub.
|    CONSPRCY    |    How big is your tinfoil hat?    |    2,445 messages    |
[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]
|    Message 2,427 of 2,445    |
|    Mike Powell to All    |
|    Pentagon, Anthropic clash over AI safegu    |
|    17 Feb 26 11:23:36    |
TZUTC: -0500
MSGID: 2185.consprcy@1:2320/105 2df9d491
PID: Synchronet 3.21a-Linux master/123f2d28a Jul 12 2025 GCC 12.2.0
TID: SBBSecho 3.28-Linux master/123f2d28a Jul 12 2025 GCC 12.2.0
BBSID: CAPCITY2
CHRS: ASCII 1
FORMAT: flowed

Pentagon may sever Anthropic relationship over AI safeguards - Claude maker expresses concerns over 'hard limits around fully autonomous weapons and mass domestic surveillance'

By Benedict Collins, published 21 hours ago

The Pentagon wants to use AI models for warfighting, but Anthropic says no

 The Pentagon and Anthropic are in a standoff over usage of Claude
 The AI model was reportedly used to capture Nicolas Maduro
 Anthropic refuses to let its models be used in "fully autonomous weapons and mass domestic surveillance"

A rift between the Pentagon and several AI companies has emerged over how their models can be used as part of operations.

The Pentagon has asked AI providers Anthropic, OpenAI, Google, and xAI to allow the use of their models for "all lawful purposes". Anthropic has voiced fears that its Claude models would be used in autonomous weapons systems and mass domestic surveillance, with the Pentagon threatening to terminate its $200 million contract with the AI provider in response.

$200 million standoff over AI weapons

Speaking to Axios, an anonymous Trump administration advisor said one of the companies has agreed to allow the Pentagon full use of its model, with the other two showing flexibility in how their AI models can be used.

The Pentagon's relationship with Anthropic has been shaken since January over the use of its Claude models, with the Wall Street Journal reporting that Claude was used in the US military operation to capture Venezuelan then-President Nicolas Maduro.

An Anthropic spokesperson told Axios that the company has "not discussed the use of Claude for specific operations with the Department of War". The company did state that its Usage Policy with the Pentagon was under review, with specific reference to "our hard limits around fully autonomous weapons and mass domestic surveillance."

Chief Pentagon spokesman Sean Parnell stated that "Our nation requires that our partners be willing to help our warfighters win in any fight."

Security experts, policy makers, and Anthropic Chief Executive Dario Amodei have called for greater regulation of AI development and increased safeguarding requirements, with specific reference to the use of AI in weapons systems and military technology.

https://www.techradar.com/pro/security/pentagon-may-sever-anthropic-relationship-over-ai-safeguards-claude-maker-expresses-concerns-over-hard-limits-around-fully-autonomous-weapons-and-mass-domestic-surveillance

$$
--- SBBSecho 3.28-Linux
 * Origin: Capitol City Online (1:2320/105)
SEEN-BY: 105/81 106/201 128/187 129/14 305 153/7715 154/110 218/700
SEEN-BY: 226/30 227/114 229/110 134 206 300 307 317 400 426 428 470
SEEN-BY: 229/664 700 705 266/512 291/111 320/219 322/757 342/200 396/45
SEEN-BY: 460/58 633/280 712/848 902/26 2320/0 105 304 3634/12 5075/35
PATH: 2320/105 229/426
(c) 1994, bbs@darkrealms.ca