Fair distribution for ppd BigAdv / Gpu
Moderators: Site Moderators, FAHC Science Team
-
- Posts: 75
- Joined: Sun Jan 25, 2009 12:21 am
- Hardware configuration: HP Xeon Z600 (12/24 @ 3.0 GHz) + SLI Quadro K5000 + Quadro K5000
  HP Xeon Z620 (24/48 @ 2.7 GHz) + GeForce Titan + GeForce 1070
- Location: https://itunes.apple.com/fr/book/le-cal ... 2004?mt=11
- Contact:
Fair distribution for ppd BigAdv / Gpu
The computing power of a GPU is higher, so why is the number of points lower?
@+
*_*
Re: Fair distribution for ppd BigAdv / Gpu
Because GPUs are very limited in what computations they can do. They can do very simple simulations very, very fast.
Also, GPUs haven't had the QRB (Quick Return Bonus) system rolled out to them yet.
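For reference, a minimal sketch of what that bonus does on the CPU side, assuming the published formula (final credit = base credit × max(1, √(k · deadline / elapsed)), with k set per project). The function name and example numbers here are illustrative, not from the client:

```python
import math

def qrb_credit(base_points, k, deadline_days, elapsed_days):
    """Quick Return Bonus credit, per the published bonus formula.

    The max(1, sqrt(k * deadline / elapsed)) multiplier rewards
    returning a WU well ahead of its deadline; k is per-project.
    """
    multiplier = max(1.0, math.sqrt(k * deadline_days / elapsed_days))
    return base_points * multiplier

# Illustrative numbers: a 1000-point WU returned in 2 days against a
# 6-day deadline with k = 2 earns sqrt(6) ~= 2.45x its base credit.
print(qrb_credit(1000, k=2, deadline_days=6, elapsed_days=2))  # ~2449
```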
Re: Fair distribution for ppd BigAdv / Gpu
I have 6 Nvidia GPUs folding, so should I switch over to CPU and bigadv instead of using the GPUs?
I wonder because that is the message I get by looking at the PPD being produced by different systems.
The energy consumption will also be lower if I stop using the GPUs.
Re: Fair distribution for ppd BigAdv / Gpu
Welcome to the foldingforum, khgsw.
A lot depends on your hardware. The newer GPUs do produce nice PPD, but if you have hardware that is capable of running bigadv WUs (8 or more CPU cores as reported by your OS, plus plenty of RAM), that choice is certainly recommended. As far as "switching over", that also depends on your hardware. Many people find that they can successfully run both for a total PPD exceeding either one alone. It does take some experimentation to find the balance that is optimal for your system(s).
You can save power by reducing your GPU folding, but you can also reduce power somewhat by removing your overclocking settings.
Without detailed information about your system(s) I can only speak in generalities. If you describe your hardware, you'll probably get more specific responses from people with similar systems.
Posting FAH's log:
How to provide enough info to get helpful support.
-
- Posts: 1122
- Joined: Wed Mar 04, 2009 7:36 am
- Hardware configuration: 3 - Supermicro H8QGi-F AMD MC 6174 = 144 cores 2.5 GHz, 96GB G.Skill DDR3 1333 MHz, Ubuntu 10.10
2 - Asus P6X58D-E i7 980X 4.4 GHz, 6GB DDR3 2000 A-Data, 64GB SSD, Ubuntu 10.10
1 - Asus Rampage Gene III i7 970 4.3 GHz, DDR3 2000, 2x 500GB Seagate 7200.11 RAID 0, Ubuntu 10.10
1 - Asus G73JH laptop, i7 740QM 1.86 GHz, ATI 5870M
Re: Fair distribution for ppd BigAdv / Gpu
I think it still holds true that bigadv alone will produce more PPD than bigadv plus GPU folding if you follow Stanford's guidelines of not using -smp 7. If you follow the guidelines, then to fold bigadv and GPU together wouldn't you need to run -smp 6, or are there some newer Nvidia cards that won't adversely affect an -smp 8 bigadv run?
2 - SM H8QGi-F AMD 6xxx = 112 cores @ 3.2 & 3.9 GHz
5 - SM X9QRI-f+ Intel 4650 = 320 cores @ 3.15 GHz
2 - i7 980X 4.4 GHz, 2x GTX680
1 - 2700K 4.4 GHz, GTX680
Total = 464 cores folding
-
- Posts: 660
- Joined: Mon Oct 25, 2010 5:57 am
- Hardware configuration: a) Main unit
Sandybridge in HAF922 w/200 mm side fan
--i7 2600K
--ASUS P8P67 DeluxeB3
--4GB ADATA 1600 RAM
--750W Corsair PS
--2Seagate Hyb 750&500 GB--WD Caviar Black 1TB
--EVGA 660GTX-Ti FTW - Signature 2 GPU@ 1241 Boost
--MSI GTX560Ti @900MHz
--Win7Home64; FAH V7.3.2; 327.23 drivers
b) 2004 HP a475c desktop, 1 core Pent 4; Mem 2GB; HDD 160 GB; Zotac GT430 PCI @ 900 MHz
WinXP SP3-32 FAH v7.3.6 301.42 drivers - GPU slot only
c) 2005 Toshiba M45-S551 laptop w/2 GB mem, 160GB HDD;Pent M 740 CPU @ 1.73 GHz
WinXP SP3-32 FAH v7.3.6 [Receiving Core A4 work units]
d) 2011 lappy, 15.6" 1920x1080; i7-2860QM @ 2.5; IC Diamond Thermal Compound; GTX 560M 1,536MB u/c @ 700; 16GB 1333 MHz RAM; HDD: 500GB Hyb w/ 4GB SSD; Win7HomePrem64; 320.18 drivers; FAH 7.4.2ß
- Location: Saratoga, California USA
Re: Fair distribution for ppd BigAdv / Gpu
I tried several configurations back in April to see what the combined effects would be. This set of trials didn't include -SMP 6.
Conclusions:
The best overall production was with one GPU slot and -bigadv SMP, at -SMP 7. That hasn't been an issue for the last couple of months. When we needed to switch off of -bigadv a few weeks ago while the servers were down, I did switch to -SMP 8 to ensure that none of the problem WUs would EUE.
The highest SMP-only config was (no surprise here) -SMP 8.
-SMP 8 along with the GPU gave lower total PPD than -SMP 7 with the GPU, but it was STILL considerably more PPD than the SMP/bigadv-only runs.
BTW, after having the Sandy Bridge system online for three months, I finally decided to try out the next level of CPU overclock. I'm going to try taking the factory 3.9 GHz to 4.6 GHz and see how that affects the overall production. I'll retry these configs over the next couple of weeks to see what those numbers come out to be.
GreyWhiskers wrote: my Sandy Bridge system (see sig) has an i7 2600k (3.90 Ghz) and the GTX 560 Ti (950 MHz). The entire system is pulling a steady 288 watts as measured by the CyberPower UPS, less monitor which is on a different UPS.
I returned to the base: -SMP 7 with GPU. That gives a larger total PPD, equating to more science returned, at the expense of some 135 watts to run the GPU. This doesn't seem set-and-forget, though. If I leave an SMP client at -SMP 7 and one of the P101xx WUs gets assigned, it will EUE and dump the WU, which is not good for the program. Since the P6900-type WUs take about 2.5 days to complete, one could set -SMP 7 when a WU starts and go back to -SMP 8 before it terminates. That may give a little more productivity while making sure we don't EUE any of the other SMP projects that may be sent.
Base: All running v6 clients.
SMP P6901 -bigadv -SMP 7 w/bonus - TPF: 33:29 ppd: 32,736 [will scale up with more CPU O/C]
GPU TPF: P6801 - TPF: 1:21 ppd: 14,378
Total production: 47,114 ppd
CPU Load ~~87/88%
Power: 288 watts system without monitor
Test 1 - turn off GPU folding (stay at -SMP 7)
SMP P6901 -bigadv w/bonus - TPF: 32:05 ppd: 33,739
Total production: 33,739 ppd
CPU Load ~ 87/88%
Power: 153 watts system without monitor [--> GTX 560 Ti at 950 MHz core clock consumes ~135 watts while folding]
Test 2 - turn off GPU folding; run at -SMP 8
SMP P6901 -bigadv w/bonus - TPF: 30:19 ppd: 36,438 [will scale up with more CPU O/C]
Total production: 36,438 ppd
CPU Load - 100%
Power: 153-162 watts system without monitor
Test 3 - Turn on GPU folding, leaving SMP -8
SMP P6901 -bigadv w/bonus - TPF: 38:18 ppd: 26,514 [will scale up with more CPU O/C]
GPU TPF: P6801 - TPF: 1:21 ppd: 14,378
Total production: 40,892 ppd
CPU Load 100%
Power: 288 watts system without monitor
GPU client notes: V6 systray. Set and forget. Plunks out a finished WU every ~2.27 hours. Did NOT set the "slightly higher" flag - the GPU doesn't need the advantage.
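For anyone who wants to rerun the arithmetic, here is the same comparison as a small script (PPD and wattage are copied from the tests above; the points-per-watt column is my addition, and Test 2's draw is taken at the low end of the quoted 153-162 W range):

```python
# (config, SMP ppd, GPU ppd, system watts) -- figures quoted above
configs = [
    ("-smp 7 + GPU",   32_736, 14_378, 288),
    ("-smp 7, no GPU", 33_739,      0, 153),
    ("-smp 8, no GPU", 36_438,      0, 153),
    ("-smp 8 + GPU",   26_514, 14_378, 288),
]

for name, smp, gpu, watts in configs:
    total = smp + gpu
    print(f"{name:15} total {total:6,} ppd  {total / watts:5.1f} ppd/W")
```

By raw PPD the -SMP 7 + GPU combination wins, exactly as concluded above, though per watt the SMP-only configurations come out well ahead, which is worth keeping in mind for the power-bill discussion further down the thread.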
Re: Fair distribution for ppd BigAdv / Gpu
Running a GPU alongside bigadv SMP gets progressively worse as the host CPU gets faster or its core count increases, mainly due to the bigadv bonus multiplier...
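To put a rough number on that: under the QRB, credit per WU scales as √(1/t) and WUs completed per day as 1/t, so PPD goes roughly as t^(-3/2). A small sketch (the 10% figure is illustrative, not a measurement):

```python
def ppd_after_slowdown(slowdown):
    """Relative PPD after TPF grows by `slowdown` (0.10 = 10% slower).

    Under the QRB, credit/WU ~ sqrt(1/t) and WUs/day ~ 1/t,
    so PPD ~ t ** -1.5.
    """
    return (1 + slowdown) ** -1.5

# A 10% TPF slowdown costs ~13.3% of bigadv PPD -- more than
# proportional to the slowdown, and the points forgone scale with
# the host's already bonus-multiplied output, so faster rigs lose more.
print(f"{(1 - ppd_after_slowdown(0.10)) * 100:.1f}% PPD lost")
```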
Re: Fair distribution for ppd BigAdv / Gpu
Right. The GPU client needs some CPU resources, and how much varies quite a bit depending on which GPU you have. The bottom line is whether it's better to devote those CPU resources to supplying data to the GPU or to helping with the SMP WU. Books could be written on that subject, but in the final analysis it depends on your hardware, so the only "best" answer is whatever you find works best on your system.
Posting FAH's log:
How to provide enough info to get helpful support.
Re: Fair distribution for ppd BigAdv / Gpu
Or, on the other hand, if you're really all out to return the bigadv WUs as fast as possible from the science viewpoint, why slow them down by running a GPU as well?
Re: Fair distribution for ppd BigAdv / Gpu
There's always a trade-off. Some GPUs use very, very little CPU, especially when it's a HyperThreaded virtual CPU. The heavy FP work is done by the GPU, so it doesn't compete for the FPU, which is busy doing SMP work. If the amount it slows down SMP is insignificant, then getting more work done might be the right answer. YMMV.
Posting FAH's log:
How to provide enough info to get helpful support.
-
- Pande Group Member
- Posts: 2058
- Joined: Fri Nov 30, 2007 6:25 am
- Location: Stanford
Re: Fair distribution for ppd BigAdv / Gpu
We have been considering QRB for GPUs, which should finish the rebalancing we've had in mind to do. There are some issues to work out though, which is why we haven't made that change now.
Prof. Vijay Pande, PhD
Departments of Chemistry, Structural Biology, and Computer Science
Chair, Biophysics
Director, Folding@home Distributed Computing Project
Stanford University
-
- Posts: 260
- Joined: Tue Dec 04, 2007 5:09 am
- Hardware configuration: GPU slots on home-built, purpose-built PCs.
- Location: Eagle River, Alaska
Re: Fair distribution for ppd BigAdv / Gpu
khgsw wrote: I have 6 Nvidia GPUs folding, so should I switch over to CPU and bigadv instead of using the GPUs? ... The energy consumption will also be lower if I stop using the GPUs.
It's really a tough call, and believe me, I've been there. For a while I had a small farm of 9800GX2s, then I upgraded to a farm of GTX 295s. One had to be stoic and keep a stiff upper lip when receiving the monthly power bill. I sold off the GTX 295s while the used market was still strong for them, bought some lower-powered Fermi cards, and upgraded CPUs to i7 Lynnfields. This was right at the time that 8-thread and higher CPU architectures became so effective with the newer (at that time) bigadv work units. I can't claim that I timed the march of science and technology correctly; it was mainly luck that I made my system configuration changes when I did. (Maybe in a few months the high-powered GPUs will again be formidable, and I no longer have any. Maybe I should... nah, my wife no longer has a minor stroke when she sees the power bill.)
Every time I reconfigure a system, I have to remind myself that today's technology may not be an optimum solution even just a few months in the future. It's just the nature of the game. There's no way around it.
Re: Fair distribution for ppd BigAdv / Gpu
Sounds like we have followed very similar paths, Leonardo.
I went through the QMD disaster, and managed to sell off more than a few ATI cards after GPU1 was pulled. It's a never-ending "work in progress" keeping up with what produces the best results (both science and PPD) for a given self-imposed power budget. Due to my subtropical location it's "watercool everything" for me here, and sadly I've still got a watercooled 295 and a couple of 275s that "missed the boat" on the for-sale forums. I currently have 2 x 480s and 2 x 470s (all watercooled) sitting idle, as my power budget made it more prudent to run an extra 970 rig, but with the new changes I'll have to get out the calculator once again and see if it's better to sell off one 970 rig and fire up the GPUs again.
For the quoted question:
The only way to know for sure which configuration is best is to run all the combinations available and measure their power usage. That way at least you'll have some numbers for all your hardware, and they can be more than handy for comparing prospective upgrades when the time comes.
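A minimal sketch of the bookkeeping that advice implies, assuming you log measured PPD and wall-socket watts for each combination you try (all names and figures below are placeholders, not measurements):

```python
# Placeholder measurements: (config, measured ppd, measured watts)
measurements = [
    ("bigadv only",    36_000, 155),
    ("bigadv + 1 GPU", 47_000, 290),
    ("GPUs only",      29_000, 340),
]

# Rank by points per watt; swap the sort key to raw PPD if the
# power budget isn't the binding constraint.
for name, ppd, watts in sorted(measurements,
                               key=lambda m: m[1] / m[2], reverse=True):
    print(f"{name:15} {ppd / watts:5.1f} ppd/W  ({ppd:,} ppd @ {watts} W)")
```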
-
- Posts: 260
- Joined: Tue Dec 04, 2007 5:09 am
- Hardware configuration: GPU slots on home-built, purpose-built PCs.
- Location: Eagle River, Alaska
Re: Fair distribution for ppd BigAdv / Gpu
I had 8 total GTX 295s folding for a few months. One day it hit me - "STOP THE MADNESS." Hey, I'm in Alaska, and even here I had to open the windows to keep the office cool enough.
Re: Fair distribution for ppd BigAdv / Gpu
Leonardo wrote: I had 8 total GTX 295s folding for a few months. One day it hit me - "STOP THE MADNESS." Hey, I'm in Alaska, and even here I had to open the windows to keep the office cool enough.
LOL. Imagine me a while back running 2 x 9800GX2s, 2 x 295s, and various single 9- and 200-series cards that took the GPU count to 14... all in the one small room.
The watercooling did its job, as they never missed a beat, even highly overclocked and with ambient temps in the high 30s C... When you actually walked into the room it was another story; never mind "perspire", it was more like slowly melting...