The Nvidia bias of the folding client will slow this sprint down to a snail's pace. None of my Nvidia GPUs has received one of these (unrewarding) 134xx WUs, while my AMD cards get nothing else and are not utilized efficiently. What a mess... the ancient GTX 970 gets giant WUs that take hours to fold, while the Vega is not even half utilized by these tiny WUs. Maybe you should rethink the assignment algorithm?
Cheers
COVID Moonshot - Inefficient Assignment!!
- Neil-B
- Posts: 1996
- Joined: Sun Mar 22, 2020 5:52 pm
- Hardware configuration: 1: 2x Xeon E5-2697v3, 512GB DDR4 LRDIMM, SSD Raid, Win10 Ent 20H2, Quadro K420 1GB, FAH 7.6.21
  2: Xeon E3-1505Mv5, 32GB DDR4, NVMe, Win10 Pro 20H2, Quadro M1000M 2GB, FAH 7.6.21 (actually have two of these)
  3: i7-960, 12GB DDR3, SSD, Win10 Pro 20H2, GTX 750Ti 2GB, GTX 1080Ti 11GB, FAH 7.6.21
- Location: UK
Re: COVID Moonshot - Inefficient Assignment!!
What client-type flag are you running (beta, advanced, "none")? I ask because I believe there are beta and advanced GPU projects which are non-COVID-19, while the 134** projects are, I believe, public (no flag). This may mean that GPU slots running beta or advanced flags get non-134** WUs due to the nature of the flags: a beta flag will get a beta WU in preference to advanced or public, and an advanced flag will get an advanced WU in preference to public. If I am right, then removing a beta or advanced flag may push the AS towards the highest-priority public WUs, which I believe are the 134** COVID-19 ones.
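That flag-priority idea can be sketched in a few lines. This is purely an illustrative Python sketch under the assumptions in the post above, not actual Folding@home Assignment Server code; the function name and the non-134xx project numbers are made up, and the real server weighs many more factors.
[code]
# Illustrative only -- not actual Folding@home Assignment Server code.
# It models the priority described above: a beta-flagged slot prefers beta
# WUs, an advanced-flagged slot prefers advanced WUs, and a slot with no
# flag ("none") can only receive public WUs such as the 134xx Moonshot
# projects.
PRIORITY = {
    "beta": ["beta", "advanced", "public"],
    "advanced": ["advanced", "public"],
    "none": ["public"],
}

def pick_work_unit(client_type, available):
    """Return the first project with work, in the slot's preference order.

    `available` maps a tier to a list of project numbers with WUs to hand
    out (hypothetical numbers, for illustration only).
    """
    for tier in PRIORITY[client_type]:
        if available.get(tier):
            return available[tier][0]
    return None

# With Moonshot (134xx) only in the public tier, a beta-flagged slot may be
# handed a beta project instead of a 134xx WU, while an unflagged slot gets
# the 134xx WU directly.
available = {"beta": [14900], "advanced": [], "public": [13422]}
print(pick_work_unit("beta", available))   # beta project wins
print(pick_work_unit("none", available))   # 134xx public project
[/code]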
Last edited by Neil-B on Tue Aug 04, 2020 2:01 pm, edited 1 time in total.
2x Xeon E5-2697v3, 512GB DDR4 LRDIMM, SSD Raid, W10-Ent, Quadro K420
Xeon E3-1505Mv5, 32GB DDR4, NVME, W10-Pro, Quadro M1000M
i7-960, 12GB DDR3, SSD, W10-Pro, GTX1080Ti
i9-10850K, 64GB DDR4, NVME, W11-Pro, RTX3070
(Green/Bold = Active)
- Site Admin
- Posts: 7937
- Joined: Tue Apr 21, 2009 4:41 pm
- Hardware configuration: Mac Pro 2.8 quad 12 GB smp4, MacBook Pro 2.9 i7 8 GB smp2
- Location: W. MA
Re: COVID Moonshot - Inefficient Assignment!!
As has been noted several times before in connection with these WUs, data from these projects is being collected to guide changes to the assignment algorithm. It is not going to happen overnight.
iMac 2.8 i7 12 GB smp8, Mac Pro 2.8 quad 12 GB smp6
MacBook Pro 2.9 i7 8 GB smp3
- John Chodera
- Pande Group Member
- Posts: 467
- Joined: Fri Feb 22, 2013 9:59 pm
Re: COVID Moonshot - Inefficient Assignment!!
> The Nvidia bias of the folding client will slow this sprint down to a snail's pace. None of my Nvidia GPUs has received one of these (unrewarding) 134xx WUs, while my AMD cards get nothing else and are not utilized efficiently. What a mess... the ancient GTX 970 gets giant WUs that take hours to fold, while the Vega is not even half utilized by these tiny WUs. Maybe you should rethink the assignment algorithm?
We have someone digging into benchmark data from 17100 (the benchmarking project) right now! We're actively working on expanding the GPUSpecies to use more of the valid 2-255 range so that we can better refine these projects with live data. Right now, the GPUSpecies uses a narrow range (2-7) and manually-defined categories that don't work well, especially for projects like 134xx that have a very different workload than other projects. The benchmark project includes a variety of different workloads, so we'll be better able to cluster GPUs that achieve equivalent performance.
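To make the 2-255 point concrete, here is a rough Python sketch of what rank-based species bucketing from benchmark data could look like. It is not the actual GPUSpecies or project-17100 pipeline; the function, the throughput scores, and the bucketing rule are assumptions for illustration only.
[code]
# Illustrative only -- not the actual GPUSpecies assignment code.
# It spreads GPUs across the valid 2-255 species range by relative
# benchmark rank, rather than the handful of hand-defined categories
# (2-7) described above.
def assign_species(benchmarks, id_range=254):
    """Map each GPU to a species ID in 2..255 by benchmark rank.

    `benchmarks` maps a GPU name to a throughput score (hypothetical
    ns/day figures here).
    """
    ranked = sorted(benchmarks, key=benchmarks.get)
    species = {}
    for rank, gpu in enumerate(ranked):
        # Spread ranks evenly across the available species IDs.
        species[gpu] = 2 + (rank * id_range) // len(ranked)
    return species

# GPUs with similar scores land in nearby species IDs, so a project like
# 134xx could target only the species it can keep fully utilized.
print(assign_species({
    "Quadro M1000M": 12.0,
    "GTX 970": 45.0,
    "Radeon RX Vega 64": 95.0,
    "GTX 1080 Ti": 110.0,
}))
[/code]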
Longer term, we are working on a more clever approach to ensuring your GPU really can deliver the PPD you expect.
Thanks so much for bearing with us---we're generating a ton of useful data for the COVID Moonshot that will, with some luck, produce a new COVID-19 therapeutic candidate!
~ John Chodera // MSKCC