Just a sample of the Echomail archive
SCIPHYSI:
Message 178,373 of 178,646
From: Mild Shock   To: Mild Shock
Subject: Algorithm introduced in Hogwild! SGD (Niu et al.)
Date: 01 Dec 25 17:31:48
XPost: sci.physics.relativity, comp.lang.prolog
From: janburse@fastmail.fm

Hi,

PRAM effects are a little bit contrived in AI accelerators, since
they work with matrix tiles that are locally cached to the tensor
cores. But CRCW is quite cool for machine learning, when the weights
get updated. ChatGPT suggested I read this paper:

Hogwild!: A Lock-Free Approach to Parallelizing Stochastic
Gradient Descent
https://arxiv.org/pdf/1106.5730

I haven't read it yet... You might also have read the recent report
on how Google trained Gemini. They had to deal with other issues as
well, like the failure of a whole tensor core.

Bye

Mild Shock wrote:
> Hi,
>
> Simulation is not so easy. You would need an element of
> non-determinism, or if you want, call it randomness. Because a
> PRAM has these access modes: ERCW, CRCW, etc.
>
> - Concurrent read concurrent write (CRCW): multiple processors
>   can read and write. A CRCW PRAM is sometimes called a
>   concurrent random-access machine.
>   https://en.wikipedia.org/wiki/Parallel_RAM
>
> Modelling what happens there on a von Neumann machine can be
> quite challenging. At least it doesn't allow for a direct
> modelling.
>
> What a later processor sees depends extremely on the timing and
> on which processor "wins" the write.
>
> Also, I don't know what it would buy you intellectually to
> simulate a PRAM on an ordinary von Neumann machine. The von
> Neumann machine could need more steps in total than the PRAM,
> because it has to simulate the PRAM. But I guess it's the
> intellectual questioning that also needs a revision when
> confronted with the new architecture of unified memory and
> tensor processing cores.
>
> Bye
>
> Maciej Woźniak wrote:
>> On 12/1/2025 12:15 PM, Mild Shock wrote:
>>> Hi,
>>>
>>> You wrote:
>>>
>>> > No, they don't, they just add one (or some)
>>> > more layer on top of it.
>>>
>>> Technically they are not a von Neumann architecture. Unified
>>> memory with multiple tensor cores is not a von Neumann
>>> architecture.
>>
>> We can use the von Neumann architecture to emulate other
>> architectures, but as long as it is performed by our computers,
>> it is technically von Neumann's.
>>
--- SoupGate-Win32 v1.05
 * Origin: you cannot sedate... all the things you hate (1:229/2)
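For readers who, like the poster, haven't opened the paper yet:
Hogwild! (Niu et al., https://arxiv.org/pdf/1106.5730) has several
workers apply SGD updates to one shared weight vector with no locking
at all; its convergence argument relies on gradients being sparse, so
concurrent writers rarely touch the same components. Below is a
minimal sketch of the idea in Python, using a dense least-squares
problem purely for illustration; the problem size, learning rate, and
thread count are made-up demo values, not anything from the paper,
and CPython's GIL interleaves the threads rather than running them
truly in parallel, though the writes remain unsynchronized.

import threading
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_features = 10_000, 20
X = rng.standard_normal((n_samples, n_features))
w_true = rng.standard_normal(n_features)
y = X @ w_true

w = np.zeros(n_features)  # shared weights, updated with no lock

def worker(rows, lr=0.01):
    # plain SGD on this thread's rows, writing into the shared w
    # in place ("hogwild"); concurrent writers may clobber each other
    for i in rows:
        grad = (X[i] @ w - y[i]) * X[i]  # gradient of 0.5*(x.w - y)^2
        w -= lr * grad                   # racy in-place update

parts = np.array_split(np.arange(n_samples), 4)
threads = [threading.Thread(target=worker, args=(p,)) for p in parts]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("distance to true weights:", np.linalg.norm(w - w_true))

Despite the races, the final distance comes out small, which is the
paper's point: for sufficiently sparse (here, merely well-behaved)
updates, skipping the locks costs little accuracy and buys throughput.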
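The write conflict described in the quoted text, where what a later
processor sees depends on which one "wins", is also easy to observe
directly. Here is a small sketch of the "arbitrary winner" CRCW rule,
again in Python; the thread count, sleep jitter, and trial count are
arbitrary choices for the demo.

import random
import threading
import time

cell = [None]  # one shared memory cell

def processor(pid):
    time.sleep(random.random() / 1000)  # jitter so the schedule varies
    cell[0] = pid                       # concurrent write: some writer "wins"

for trial in range(5):
    ps = [threading.Thread(target=processor, args=(pid,)) for pid in range(8)]
    for p in ps:
        p.start()
    for p in ps:
        p.join()
    print("trial", trial, "surviving write:", cell[0])

Run it a few times: the surviving value changes from run to run, which
is exactly why a deterministic von Neumann simulation of a CRCW PRAM
has to pick one conflict-resolution rule (common, arbitrary, or
priority) up front instead of reproducing the timing-dependent race.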