
Just a sample of the Echomail archive

Cooperative anarchy at its finest, still active today. Darkrealms is the Zone 1 Hub.

   CONSPRCY      How big is your tinfoil hat?      2,445 messages   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]

   Message 2,360 of 2,445   
   Mike Powell to All   
   When machines remember us   
   10 Feb 26 17:16:36   
   
   TZUTC: -0500   
   MSGID: 2118.consprcy@1:2320/105 2df0ec8a   
   PID: Synchronet 3.21a-Linux master/123f2d28a Jul 12 2025 GCC 12.2.0   
   TID: SBBSecho 3.28-Linux master/123f2d28a Jul 12 2025 GCC 12.2.0   
   BBSID: CAPCITY2   
   CHRS: ASCII 1   
   FORMAT: flowed   
   When machines remember us: Rethinking privacy in the age of humanoids   
      
   Opinion by Dr. Najwa Aaraj   
      
   How humanoid AI will redefine privacy, trust, and dignity   
      
   As policymakers race to regulate AI, a more intimate form of artificial   
   intelligence is emerging quietly, yet profoundly. The next revolution in   
   technology will not arrive as an app or an algorithm. It will walk toward us,   
   look us in the eye, and ask how it can help.   
      
   Humanoid robots are poised to leave the laboratory and step into our daily   
   lives: greeting guests in hotels, assisting patients in hospitals, tutoring   
   children, guiding us in malls, and eventually sharing our workplaces and homes.   
      
   Goldman Sachs forecasts that consumer sales will surpass one million units by   
   2035, a signal that this future is not speculative, but rapidly approaching. As   
   their forms become familiar, their presence will test one of humanity's   
   oldest instincts: the desire for privacy.   
      
   New era of trust   
      
   Until now, our digital existence has unfolded through screens and sensors we   
   could switch off. A phone slips into a pocket; a smart speaker rests quietly on   
   a shelf. But a humanoid is different. It observes, learns, reasons and acts   
   continuously.   
      
   It can read tone, posture, and emotion, capturing data far beyond what a   
   microphone or camera could record. In the age of humanoids, privacy will no   
   longer mean simply protecting what we say. It will mean defining what machines   
   are allowed to know about who we are.   
      
   This shift demands a new kind of trust. For decades, technology companies have   
   asked for our "consent" through lengthy forms and hidden clauses. Yet no   
   checkbox can capture the complexity of interacting with a learning, adaptive   
   robot.   
      
   When a humanoid helps an elderly patient stand, it must analyze posture,   
   predict balance, and detect hesitation. Every gesture produces intimate data.   
   But who owns those fleeting moments: the patient, the hospital, or the   
   robot's creator? And how can we ensure such data serves human dignity rather   
   than convenience alone?   
      
   Privacy-preserving technologies   
      
   Existing privacy laws were built for files, not faces; for static storage,   
   not dynamic interaction. With humanoids, privacy becomes fluid, negotiated   
   in real time through movement, proximity, and context.   
      
   Policymakers will need adaptive regulatory frameworks that evolve as quickly as   
   these systems do, incorporating continuous risk assessments and ethical design   
   principles from the very start. This is privacy by architecture: engineering   
   discretion so that it is not optional but automatic.   
      
   At the core of this architecture lie cryptography and cryptographic protocols,   
   the science that makes privacy enforceable by design.   
      
   They enable humanoids to learn and respond to human needs without revealing the   
   underlying data. Rather than trusting that sensitive information won't be   
   misused, cryptographic techniques ensure it cannot be accessed in the first   
   place. This is the difference between policy promises and mathematical   
   guarantees.   
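   The contrast between a policy promise and a mathematical guarantee can be   
   illustrated with a toy additive secret-sharing scheme, one building block of   
   secure multi-party computation. This is a minimal sketch, not any production   
   protocol; the modulus and share count are arbitrary illustrative choices.   

```python
import secrets

# Toy additive secret sharing: a sketch of the MPC idea, not a
# production protocol. P and the share count are illustrative.
P = 2**61 - 1  # public modulus

def share(value, n):
    """Split an integer into n additive shares mod P.
    Any n-1 shares are uniformly random and reveal nothing alone."""
    parts = [secrets.randbelow(P) for _ in range(n - 1)]
    parts.append((value - sum(parts)) % P)
    return parts

def reconstruct(parts):
    """Only a party holding *all* shares can recover the value."""
    return sum(parts) % P

# Parties can add two secrets share-by-share without ever seeing them:
a, b = share(10, 3), share(7, 3)
summed = [(x + y) % P for x, y in zip(a, b)]
print(reconstruct(summed))  # 17
```

   No single share leaks the secret: access is prevented by arithmetic, not by   
   a promise not to look.   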
      
   In a world where humanoids continuously observe, interpret, and act, such   
   guarantees are essential. Encryption and privacy-preserving technologies can   
   transform ethical intentions into operational safeguards, anchoring trust in   
   the code itself.   
      
   Modern privacy engineering already offers tools for this vision. Techniques   
   such as federated learning, homomorphic encryption, and secure multi-party   
   computation allow AI systems to learn from local data without exposing it.   
      
   A humanoid can thus improve its assistance over time while keeping sensitive   
   information within its own encrypted domain. Privacy, in this sense, is not   
   just a social value; it is a scientific discipline advancing in parallel   
   with robotics.   
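   The federated-learning idea above can be sketched in a few lines. The   
   "model" here is deliberately toy (a per-client mean) and the client data is   
   made up; real deployments would add secure aggregation and differential   
   privacy on top.   

```python
# Toy federated-averaging sketch: each client computes a local update
# (here just a mean) on its own data, and only those parameters --
# never the raw readings -- are aggregated centrally.
# Client datasets below are hypothetical, for illustration only.

def local_update(readings):
    """Local 'training': a per-client mean, standing in for a model."""
    return sum(readings) / len(readings)

def federated_average(client_datasets):
    """Aggregate local updates without ever collecting raw data."""
    updates = [local_update(d) for d in client_datasets]
    return sum(updates) / len(updates)

clients = [[1.0, 2.0, 3.0], [4.0, 5.0], [6.0]]
print(federated_average(clients))  # mean of per-client means: ~4.1667
```

   The raw readings never leave each client; only the derived parameter is   
   shared, which is the property the article attributes to these techniques.   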
      
   Yet the code behind humanoids must reflect more than technical function; it   
   must embody social norms. In many cultures, cues like posture, gaze, and   
   proximity signal respect or intrusion.   
      
   Robots that move among us must be attuned not only to our privacy but to our   
   customs, boundaries, and emotional comfort. Trust will depend not just on what   
   machines can do, but on how gracefully and respectfully they do it.   
      
   If we embed privacy and dignity at the heart of humanoid systems, through both   
   code and conduct, these machines can help us reclaim control over data that   
   today flows unchecked through digital platforms.   
      
   A care humanoid can allow elderly individuals to live independently without   
   constant human oversight. A humanoid tutor can keep a child's learning data   
   safer than a cloud-based platform by processing it locally. The goal is not to   
   reject these technologies, but to guide them toward humane, transparent, and   
   ethical ends.   
      
   Respect, discretion and care   
      
   As a scientist and researcher, I see robotics as a mirror, reflecting not only   
   our engineering ambition but our ethical imagination. At the Technology   
   Innovation Institute, we are building physical artificial intelligence that   
   must engage with the world in all its complexity.   
      
   This means designing not only for function, but for respect, discretion,   
   and care. As we teach machines to perceive us, we are also redefining, with   
   intention, what it means to be truly seen.   
      
   The task before policymakers, scientists, and citizens is to move from reaction   
   to anticipation, to write the rules of coexistence before machines arrive at   
   our doorsteps. Privacy, once a personal concern, must now become a shared   
   design principle.   
      
   Humanoids are arriving at a defining moment for society. Their emergence will   
   test our ability to govern technology with foresight, ethics, and compassion.   
      
   If we succeed, we will build a future where physical artificial intelligence   
   safeguards, rather than sacrifices, human potential; proving that innovation   
   and integrity can coexist by design.   
      
      
   This article was produced as part of TechRadarPro's Expert Insights channel   
   where we feature the best and brightest minds in the technology industry today.   
   The views expressed here are those of the author and are not necessarily those   
   of TechRadarPro or Future plc. If you are interested in contributing find out   
   more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro   
      
      
   https://www.techradar.com/pro/when-machines-remember-us-rethinking-privacy-in-the-age-of-humanoids   
      
   $$   
   --- SBBSecho 3.28-Linux   
    * Origin: Capitol City Online (1:2320/105)   
   SEEN-BY: 105/81 106/201 128/187 129/14 305 153/7715 154/110 218/700   
   SEEN-BY: 226/30 227/114 229/110 134 206 300 307 317 400 426 428 470   
   SEEN-BY: 229/664 700 705 266/512 291/111 320/219 322/757 342/200 396/45   
   SEEN-BY: 460/58 633/280 712/848 902/26 2320/0 105 304 3634/12 5075/35   
   PATH: 2320/105 229/426   
      



(c) 1994,  bbs@darkrealms.ca