
Just a sample of the Echomail archive

Cooperative anarchy at its finest, and still active today. Darkrealms is the Zone 1 Hub.

   EARTH      Uhh, that 3rd rock from the sun?      8,931 messages   


   Message 8,564 of 8,931   
   ScienceDaily to All   
   Researchers expand ability of robots to    
   20 Jun 23 22:30:28   
   
   MSGID: 1:317/3 64927d13   
   PID: hpt/lnx 1.9.0-cur 2019-01-08   
   TID: hpt/lnx 1.9.0-cur 2019-01-08   
    Researchers expand ability of robots to learn from videos    
    Robots able to accomplish tasks after watching people perform them in any   
   environment    
      
     Date:   
         June 20, 2023   
     Source:   
         Carnegie Mellon University   
     Summary:   
         New work has enabled robots to learn household chores by   
         watching videos of people performing everyday tasks in their   
         homes. Vision-Robotics Bridge, or VRB for short, uses the   
         concept of affordances to teach the robot how to interact with   
         an object. Affordances have their roots in psychology and refer   
         to what an environment offers an individual. The concept has   
         been extended to design and human-computer interaction to refer   
         to potential actions perceived by an individual. With VRB, two   
         robots successfully learned 12 tasks including opening a drawer,   
         oven door and lid; taking a pot off the stove; and picking up a   
         telephone, vegetable or can of soup.   
      
      
      
   ==========================================================================   
   FULL STORY   
   ==========================================================================   
   New work from Carnegie Mellon University has enabled robots to learn   
   household chores by watching videos of people performing everyday tasks   
   in their homes.   
      
   The research could help improve the utility of robots in the home,   
   allowing them to assist people with tasks like cooking and cleaning. Two   
   robots successfully learned 12 tasks including opening a drawer, oven   
   door and lid; taking a pot off the stove; and picking up a telephone,   
   vegetable or can of soup.   
      
   "The robot can learn where and how humans interact with different objects   
   through watching videos," said Deepak Pathak, an assistant professor   
   in the Robotics Institute at CMU's School of Computer Science. "From   
   this knowledge, we can train a model that enables two robots to   
   complete similar tasks in varied environments."
   
   Current methods of training robots require either the manual demonstration
   of tasks by humans or extensive training in a simulated environment. Both
   are time-consuming and prone to failure. Past research by Pathak and his
   students demonstrated a novel method in which robots learn from observing
   humans complete tasks. However, that method, WHIRL (short for In-the-Wild
   Human Imitating Robot Learning), required the human to complete the task
   in the same environment as the robot.
      
   Pathak's latest work, Vision-Robotics Bridge, or VRB for short, builds   
   on and improves WHIRL. The new model eliminates the necessity of human   
   demonstrations as well as the need for the robot to operate within an   
   identical environment.   
      
   Like WHIRL, the robot still requires practice to master a task. The   
   team's research showed it can learn a new task in as little as 25 minutes.   
      
   "We were able to take robots around campus and do all sorts of tasks,"   
   said Shikhar Bahl, a Ph.D. student in robotics. "Robots can use this   
   model to curiously explore the world around them. Instead of just   
   flailing its arms, a robot can be more direct with how it interacts."
   
   To teach the robot how to interact with an object, the team applied the
   concept of affordances. Affordances have their roots in psychology and   
   refer to what an environment offers an individual. The concept has been   
   extended to design and human-computer interaction to refer to potential   
   actions perceived by an individual.   
      
   For VRB, affordances define where and how a robot might interact with   
   an object based on human behavior. For example, as a robot watches a   
   human open a drawer, it identifies the contact points -- the handle --   
   and the direction of the drawer's movement -- straight out from the   
   starting location. After watching several videos of humans opening   
   drawers, the robot can determine how to open any drawer.   
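   
   As a minimal sketch of the affordance idea described above (not the
   authors' code), each video demonstration can be reduced to a contact
   point and a post-contact motion direction, and averaging them gives the
   robot a starting action for a similar object. All names and numbers
   below are illustrative assumptions.
   
       # Illustrative sketch only: an affordance as a contact point plus a
       # post-contact motion direction, aggregated over video demonstrations.
       from dataclasses import dataclass
       import numpy as np
       
       @dataclass
       class Affordance:
           contact_point: np.ndarray  # where to grasp (e.g. the handle)
           direction: np.ndarray      # which way to move after contact
       
       def affordance_from_demos(contacts, motions):
           """Average contact points and motion directions seen in videos."""
           contact = np.mean(contacts, axis=0)
           direction = np.mean(motions, axis=0)
           direction = direction / np.linalg.norm(direction)  # unit vector
           return Affordance(contact, direction)
       
       # Two hypothetical "open a drawer" demonstrations observed in video:
       demo_contacts = [np.array([0.42, 0.10, 0.65]),
                        np.array([0.40, 0.12, 0.66])]
       demo_motions = [np.array([0.0, -1.0, 0.0]),
                       np.array([0.05, -0.95, 0.0])]
       
       aff = affordance_from_demos(demo_contacts, demo_motions)
       # The robot would reach aff.contact_point, then pull along
       # aff.direction, refining the motion with its own practice trials.
   
   In the article's terms, the contact point corresponds to the drawer
   handle and the direction to "straight out from the starting location";
   the actual model predicts these from images rather than taking them as
   given.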
      
   The team used videos from large datasets such as Ego4D and Epic   
   Kitchens. Ego4D has nearly 4,000 hours of egocentric videos of daily   
   activities from across the world. Researchers at CMU helped collect some   
   of these videos. Epic Kitchens features similar videos capturing cooking,   
   cleaning and other kitchen tasks.   
      
   Both datasets are intended to help train computer vision models.   
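   
   As a purely illustrative sketch of how such video data might be fed to a
   model, the snippet below samples every n-th frame from a clip with
   OpenCV; the file name and sampling rate are assumptions, and this is not
   the Ego4D or Epic Kitchens tooling.
   
       # Illustrative only: sample frames from an egocentric video clip.
       import cv2
       
       def sample_frames(video_path, every_n=30):
           """Return every n-th frame of a video as a list of images."""
           frames = []
           cap = cv2.VideoCapture(video_path)
           index = 0
           while True:
               ok, frame = cap.read()
               if not ok:
                   break
               if index % every_n == 0:
                   frames.append(frame)
               index += 1
           cap.release()
           return frames
       
       # e.g. frames = sample_frames("kitchen_clip.mp4")  # hypothetical clip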
      
   "We are using these datasets in a new and different way," Bahl said. "This   
   work could enable robots to learn from the vast amount of internet   
   and YouTube videos available."  More information is available on the   
   project's website and in a paper presented in June at the Conference on   
   Vision and Pattern Recognition.   
      
      
   ==========================================================================   
   Story Source: Materials provided by Carnegie Mellon University. Original
   written by Aaron Aupperlee. Note: Content may be edited for style and
   length.
      
      
   ==========================================================================   
      
      
   Link to news story:   
   https://www.sciencedaily.com/releases/2023/06/230620113807.htm   
      
   --- up 1 year, 16 weeks, 1 day, 10 hours, 50 minutes   
    * Origin: -=> Castle Rock BBS <=- Now Husky HPT Powered! (1:317/3)   
   SEEN-BY: 15/0 106/201 114/705 123/120 153/7715 218/700 226/30 227/114   
   SEEN-BY: 229/110 112 113 307 317 400 426 428 470 664 700 291/111 292/854   
   SEEN-BY: 298/25 305/3 317/3 320/219 396/45 5075/35   
   PATH: 317/3 229/426   
      



(c) 1994,  bbs@darkrealms.ca