
Just a sample of the Echomail archive

Cooperative anarchy at its finest, still active today. Darkrealms is the Zone 1 Hub.

   CONSPRCY      How big is your tinfoil hat?      2,445 messages   


   Message 1,788 of 2,445   
   Mike Powell to All   
   AI power consumption out   
   28 Sep 25 08:53:58   
   
   TZUTC: -0500   
   MSGID: 1537.consprcy@1:2320/105 2d3e79f7   
   PID: Synchronet 3.21a-Linux master/123f2d28a Jul 12 2025 GCC 12.2.0   
   TID: SBBSecho 3.28-Linux master/123f2d28a Jul 12 2025 GCC 12.2.0   
   BBSID: CAPCITY2   
   CHRS: ASCII 1   
   FORMAT: flowed   
   This graph alone shows how global AI power consumption is getting out of hand   
   very quickly - and it's not just about hyperscalers or OpenAI   
      
   Date:   
   Sat, 27 Sep 2025 16:02:00 +0000   
      
   Description:   
   Data center racks could consume 1MW each by 2030 as AI reshapes    
   infrastructure and forces new thinking about cooling and power delivery.   
      
   FULL STORY   
      
   Long considered the basic unit of a data center, the rack is being reshaped    
   by the rise of AI, and a new graph (linked below) from Lennox Data Centre   
   Solutions shows how quickly this change is unfolding.   
      
   Where they once consumed only a few kilowatts, projections from the firm
   suggest that by 2030 an AI-focused rack could reach 1MW of power use, a
   scale once reserved for entire facilities.
      
   Average data center racks are expected to reach 30-50kW in the same period,   
   reflecting a steady climb in compute density, and the contrast with AI   
   workloads is striking.   
      
   New demands for power delivery and cooling    
      
   According to projections, a single AI rack can use 20 to 30 times the energy   
   of its general-purpose counterpart, creating new demands for power delivery   
   and cooling infrastructure.    
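
   As a rough back-of-the-envelope check (illustrative only, not from the
   article), multiplying the projected 30-50kW general-purpose rack by the
   quoted 20-30x ratio lands in the same neighbourhood as the 1MW figure:

   # Python sketch; the kW figures and the 20-30x ratio are the article's
   # projections, the arithmetic is just a sanity check.
   general_purpose_kw = (30, 50)   # projected average rack draw, kW
   ai_multiple = (20, 30)          # quoted AI-vs-general-purpose ratio
   low = general_purpose_kw[0] * ai_multiple[0]     # 600 kW
   high = general_purpose_kw[1] * ai_multiple[1]    # 1,500 kW
   print(f"Implied AI rack draw: {low}-{high} kW")  # brackets the ~1MW projection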
      
   Ted Pulfer, director at Lennox Data Centre Solutions, said cooling has become   
   central to the industry.    
      
   "Cooling, once part of the supporting infrastructure, has now moved to the
   forefront of the conversation, driven by increasing compute densities, AI
   workloads and growing interest in approaches such as liquid cooling," he said.
      
   Pulfer described the level of industry collaboration now taking place.
   "Manufacturers, engineers and end users are all working more closely than
   ever, sharing insights and experimenting together both in the lab and in
   real-world deployments. This hands-on cooperation is helping to tackle some
   of the most complex cooling challenges we've faced," he said.
      
   The aim of delivering 1MW of power to a rack is also reshaping how systems    
   are built.    
      
   "Instead of traditional lower-voltage AC, the industry is moving towards
   high-voltage DC, such as +/-400V. This reduces power loss and cable size,"
   Pulfer explained.
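
   To see why a higher bus voltage helps, here is a minimal sketch (assumed
   numbers, not from the article): for a fixed load, current scales as
   I = P / V and conductor loss as I^2 * R, so an 800V (+/-400V) bus carries
   far less current, and wastes far less in the cable, than a lower-voltage
   feed over the same conductor.

   # Python sketch; the 1MW load is the article's projection, while the cable
   # resistance and the 415V comparison feed are assumptions, and AC
   # power-factor / three-phase details are ignored for simplicity.
   P = 1_000_000      # rack power draw, W
   R = 0.001          # assumed cable resistance, ohms
   for label, volts in (("+/-400V DC bus (800V)", 800), ("415V feed", 415)):
       current = P / volts            # amps needed to deliver P
       loss = current ** 2 * R        # I^2 * R conductor loss, W
       print(f"{label}: {current:,.0f} A, ~{loss / 1000:.1f} kW lost in cable")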
      
   Cooling is handled by central, facility-level CDUs (coolant distribution
   units), which manage the liquid flow to rack manifolds. From there, the
   fluid is delivered to individual cold plates mounted directly on the
   servers' hottest components.
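
   For a feel of the flow rates involved, a minimal sketch (assumed values,
   not from the article): the heat a liquid loop carries is
   Q = m_dot * c_p * dT, so rejecting 1MW with water and an assumed 10 degree
   supply-to-return rise needs on the order of 24 kg/s of coolant.

   # Python sketch; 1MW is the article's projection, the water properties
   # are standard, and the 10K temperature rise is an assumption.
   Q = 1_000_000    # heat to remove, W
   c_p = 4186       # specific heat of water, J/(kg*K)
   dT = 10          # assumed supply-to-return rise, K
   m_dot = Q / (c_p * dT)    # required mass flow, kg/s
   print(f"~{m_dot:.1f} kg/s (~{m_dot * 60:.0f} L/min of water)")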
      
   Most data centers today rely on cold plates, but the approach has limits.   
   Microsoft has been testing microfluidics, where tiny grooves are etched into
   the back of the chip itself, allowing coolant to flow directly across the   
   silicon.    
      
   In early trials, this removed heat up to three times more effectively than   
   cold plates, depending on workload, and reduced GPU temperature rise by 65%.    
      
   By combining this design with AI that maps hotspots across the chip,    
   Microsoft was able to direct coolant with greater precision.    
      
   Although hyperscalers could dominate this space, Pulfer believes that smaller   
   operators still have room to compete.    
      
   "At times, the volume of orders moving through factories can create delivery
   bottlenecks, which opens the door for others to step in and add value. In
   this fast-paced market, agility and innovation continue to be key strengths
   across the industry," he said.
      
   What is clear is that power and heat rejection are now central issues, no   
   longer secondary to compute performance.    
      
   As Pulfer puts it, "Heat rejection is essential to keeping the world's
   digital foundations running smoothly, reliably and sustainably."
      
   By the end of the decade, the shape and scale of the rack itself may    
   determine the future of digital infrastructure.   
      
   ======================================================================   
   Link to news story:   
   https://www.techradar.com/pro/security/this-graph-alone-shows-how-global-ai-power-consumption-is-getting-out-of-hand-very-quickly-and-its-not-just-about-hyperscalers-or-openai
      
   $$   
   --- SBBSecho 3.28-Linux   
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)   
   SEEN-BY: 105/81 106/201 128/187 129/14 305 153/7715 154/110 218/700   
   SEEN-BY: 226/30 227/114 229/110 111 206 300 307 317 400 426 428 470   
   SEEN-BY: 229/664 700 705 266/512 291/111 320/219 322/757 342/200 396/45   
   SEEN-BY: 460/58 712/848 902/26 2320/0 105 304 3634/12 5075/35   
   PATH: 2320/105 229/426   
      



(c) 1994,  bbs@darkrealms.ca