Peter_Hucker wrote: ↑Sat Apr 22, 2023 8:41 pm
It would be better if Folding stored the data on the GPU, but perhaps the computation makes this impossible.

GPU VRAM is usually much hotter than system RAM, because of its proximity to a massive hot chip and hot VRM MOSFETs, and because of its higher clocks. So without ECC on the GPU VRAM - as is the case for maybe 995 of every 1000 participating cards - it is much safer to keep more of the data in system RAM than in the hotter VRAM. And you can have ECC system RAM by building with an ECC-capable CPU and an ECC-supporting board, which costs nowhere near as much as a GPU with ECC VRAM.
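To illustrate what ECC buys you, here is a minimal sketch of single-bit error correction using a classic Hamming(7,4) code. Real DRAM ECC uses wider SECDED codes (e.g. 72 bits protecting 64), but the principle - parity bits locating and flipping back a corrupted bit - is the same; the function names here are just for this example.

```python
def hamming74_encode(nibble):
    """Encode 4 data bits (list of 0/1) into a 7-bit codeword."""
    d1, d2, d3, d4 = nibble
    p1 = d1 ^ d2 ^ d4          # parity over codeword positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4          # parity over positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4          # parity over positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(code):
    """Locate and fix a single flipped bit, return the 4 data bits."""
    c = list(code)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 * 1 + s2 * 2 + s3 * 4   # 1-based error position, 0 = clean
    if syndrome:
        c[syndrome - 1] ^= 1              # flip the corrupted bit back
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
word = hamming74_encode(data)
word[4] ^= 1                              # simulate a cosmic-ray bit flip
assert hamming74_correct(word) == data    # single-bit error corrected
```

Without those parity bits (the non-ECC case), the flipped bit would silently corrupt the stored value - which is exactly the risk with hot, overclocked, non-ECC VRAM.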
Peter_Hucker wrote: ↑Sat Apr 22, 2023 8:41 pm
A lot of Boinc projects will load a few GB of data onto the GPU at the start, and it can refer to that much faster than having to get it from main RAM.

Einstein@Home does this, but it also computes each WU on at least 2 different machines until it obtains identical results; if the 2 results differ, it re-issues the same WU to another machine until it gets 2 completely identical results. So it needs very little PCIe bandwidth, since the whole WU sits in GPU VRAM. And I saw at least 1 in 100 Einstein WUs come back broken and get re-issued - that was with a mildly overclocked chip clock, completely stock VRAM clock, and only ~1500 MB of VRAM in use on what is a pretty weak GPU by today's standards.

GPUGRID is built on the same OpenMM as FAH (though CUDA-only), yet computes very differently from Einstein@Home: their ATMbeta WUs use little GPU VRAM, roughly 10 times less than Einstein, but show nearly the same PCIe utilization as FAH - storing the results of the computation in the much cooler and less error-prone system RAM.
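The Einstein@Home two-result scheme described above can be sketched in a few lines. This is not BOINC's actual validator code - `run_on_host` and the flaky-host demo are hypothetical stand-ins - it just shows the keep-reissuing-until-two-results-agree logic:

```python
import random

def einstein_style_validate(workunit, run_on_host, max_tries=10):
    """Keep issuing the same workunit to different hosts until two
    returned results match exactly; re-issue on every mismatch."""
    results = []
    for attempt in range(max_tries):
        result = run_on_host(workunit)
        for earlier in results:
            if earlier == result:
                return result        # two identical results -> validated
        results.append(result)       # no match yet, re-issue to another host
    raise RuntimeError("no quorum reached")

# Toy demo: a host that returns a corrupted result ~10% of the time,
# like a card with unstable clocks or a flipped VRAM bit.
def flaky_host(wu):
    if random.random() < 0.1:
        return wu * 2 + 1            # broken result
    return wu * 2                    # correct result

random.seed(0)
assert einstein_style_validate(21, flaky_host) == 42
```

The price of this redundancy is that every WU costs at least twice the compute; the benefit is that an occasional bad result from non-ECC hardware gets caught instead of polluting the science.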