
Just a sample of the Echomail archive

Cooperative anarchy at its finest, still active today. Darkrealms is the Zone 1 Hub.

   CONSPRCY      How big is your tinfoil hat?      2,445 messages   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]

   Message 2,405 of 2,445   
   Mike Powell to All   
   Researchers teach AI to correct own mist   
   14 Feb 26 12:35:00   
   
   TZUTC: -0500   
   MSGID: 2163.consprcy@1:2320/105 2df5f0b7   
   PID: Synchronet 3.21a-Linux master/123f2d28a Jul 12 2025 GCC 12.2.0   
   TID: SBBSecho 3.28-Linux master/123f2d28a Jul 12 2025 GCC 12.2.0   
   BBSID: CAPCITY2   
   CHRS: ASCII 1   
   FORMAT: flowed   
   Swiss scientists want to make long AI-generated videos even better by   
   preventing them from 'degrading into randomness' - is that a good idea? I am   
   not so sure   
      
   By Efosa Udinmwen published 21 hours ago   
      
   EPFL researchers teach AI to correct its own video mistakes   
      
       * AI-generated videos often lose coherence over time due to a problem called drift   
       * Models trained on perfect data struggle when handling imperfect real-world input   
       * EPFL researchers developed retraining by error recycling to limit progressive degradation   
      
   AI-generated videos often lose coherence as sequences grow longer, a problem   
   known as drift.  This issue occurs because each new frame is generated based on   
   the previous one, so any small error, such as a distorted object or slightly   
   blurred face, is amplified over time.   
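   The compounding described above can be sketched numerically. This is a toy   
   scalar illustration of drift, not the EPFL model: the `next_frame` predictor   
   and the 0.02 per-frame error are assumptions standing in for real   
   pixel-level artifacts.   

```python
# Toy sketch of drift: when each frame is predicted from the one
# before it, a small systematic per-frame error accumulates rather
# than averaging out.

def next_frame(frame: float, error: float = 0.02) -> float:
    """Hypothetical one-step predictor: copies the previous frame,
    but introduces a small consistent error."""
    return frame + error

def rollout(start: float, steps: int) -> float:
    """Autoregressive generation: feed each output back in as input."""
    frame = start
    for _ in range(steps):
        frame = next_frame(frame)
    return frame

# The deviation grows with sequence length:
# after 30 frames the accumulated error is 30 * 0.02 = 0.6.
```

   Because every frame conditions on the last, the error never gets a chance to   
   be corrected; it simply rides along and grows.   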
      
   Models trained exclusively on ideal datasets struggle to handle imperfect   
   input, which is why generated videos usually become unrealistic after a few   
   seconds.   
      
   Recycling errors to improve AI performance   
      
   Generating videos that maintain logical continuity for extended periods remains   
   a major challenge in the field.  Now, researchers at EPFL's Visual   
   Intelligence for Transportation (VITA) laboratory have introduced a method   
   called retraining by error recycling.   
      
   Unlike conventional approaches that try to avoid errors, this method   
   deliberately feeds the AI's own mistakes back into the training process.  By   
   doing so, the model learns to correct errors in future frames, limiting the   
   progressive degradation of images.   
      
   The process involves generating a video, identifying discrepancies between   
   produced frames and intended frames, and retraining the AI on these   
   discrepancies to refine future output.   
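   That generate/compare/retrain loop can be sketched with a toy scalar model.   
   Everything here is an illustrative assumption, not EPFL's actual method or   
   code: the model is a single per-frame `bias`, the intended frames are a   
   static scene, and the update rule simply cancels part of the measured error.   

```python
# Hedged toy sketch of "retraining by error recycling": generate a
# video, measure the discrepancy between produced and intended frames,
# then adjust the model on those discrepancies.

def generate(bias: float, n_frames: int) -> list[float]:
    """Autoregressive rollout: each frame adds the model's error bias."""
    frames, f = [], 0.0
    for _ in range(n_frames):
        f += bias
        frames.append(f)
    return frames

def recycle_errors(bias: float, intended: list[float], lr: float = 0.5) -> float:
    """One retraining round: generate, diff against the intended
    frames, and nudge the model to cancel its per-frame error."""
    produced = generate(bias, len(intended))
    errors = [p - t for p, t in zip(produced, intended)]
    # the first-frame discrepancy equals the raw per-step bias
    return bias - lr * errors[0]

# A static scene: every intended frame is 0.0.
intended = [0.0] * 30
bias = 0.1                      # initial per-frame error
for _ in range(10):             # ten rounds of error recycling
    bias = recycle_errors(bias, intended)
# each round halves the bias, so drift shrinks geometrically
```

   The point of the sketch is the feedback loop: the model's own mistakes become   
   the training signal, which is what lets later frames stay close to the   
   intended ones.   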
      
   Current AI video systems typically produce sequences that remain realistic for   
   less than 30 seconds before shapes, colors, and motion logic deteriorate.   
      
   By integrating error recycling, the EPFL team has produced videos that resist   
   drift over longer durations, potentially removing strict time constraints on   
   generative video.   
      
   This advancement allows AI systems to create more stable sequences in   
   applications such as simulations, animation, or automated visual storytelling.   
      
   Although this approach addresses drift, it does not eliminate all technical   
   limitations.  Retraining by recycling errors increases computational demand and   
   may require continuous monitoring to prevent overfitting to specific mistakes.   
   Large-scale deployment may face resource and efficiency constraints, as well as   
   the need to maintain consistency across diverse video content.   
      
   Whether feeding AI its own errors is truly a good idea remains uncertain, as   
   the method could introduce unforeseen biases or reduce generalization in   
   complex scenarios.   
      
   The development at VITA Lab shows that AI can learn from its own errors,   
   potentially extending the time limits of video generation.   
      
   However, how this method will perform outside controlled testing or in creative   
   applications remains unclear, which suggests caution before assuming it can   
   fully solve the drift problem.   
      
   Via TechXplore   
      
      
   https://www.techradar.com/pro/swiss-scientists-want-to-make-long-ai-generated-videos-even-better-by-preventing-them-from-degrading-into-randomness-is-that-a-good-idea-i-am-not-so-sure   
      
   $$   
   --- SBBSecho 3.28-Linux   
    * Origin: Capitol City Online (1:2320/105)   
   SEEN-BY: 105/81 106/201 128/187 129/14 305 153/7715 154/110 218/700   
   SEEN-BY: 226/30 227/114 229/110 134 206 300 307 317 400 426 428 470   
   SEEN-BY: 229/664 700 705 266/512 291/111 320/219 322/757 342/200 396/45   
   SEEN-BY: 460/58 633/280 712/848 902/26 2320/0 105 304 3634/12 5075/35   
   PATH: 2320/105 229/426   
      



(c) 1994,  bbs@darkrealms.ca