CPU vs GPU work units
Moderators: Site Moderators, FAHC Science Team
-
- Posts: 127
- Joined: Tue Mar 24, 2020 12:47 pm
CPU vs GPU work units
Do CPU work units perform computations that GPU cannot do, or do CPU work units just perform a slower implementation of GPU work units? Is there any value in CPU work units that cannot be obtained faster with GPU work units?
-
- Posts: 2522
- Joined: Mon Feb 16, 2009 4:12 am
- Location: Greenwood MS USA
Re: CPU vs GPU work units
I know that in the past, GPUs were limited to implicit solvation while CPUs could do explicit solvation; I am told this is no longer the case (the change came either with Core_17 to Core_21, or with Core_21 to Core_22).
https://en.wikipedia.org/wiki/Implicit_solvation
https://en.wikipedia.org/wiki/Water_model
https://en.wikipedia.org/wiki/List_of_F ... home_cores
(This is backwards from how most people think of computers, but F@H spends most of its time doing SIMD.)
In any case, the SIMD engine in a CPU has a fully functional CPU to do the logical work, while the much larger equivalent of the SIMD engine in a GPU has a weak logic unit designed primarily for graphics.
https://en.wikipedia.org/wiki/SIMD
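To illustrate with a toy Python/NumPy sketch of my own (not F@H code): the same arithmetic can be written one element at a time, or as whole-array operations that map onto SIMD lanes (or GPU shaders). Both give identical results; the vectorized form is what the SIMD engine executes efficiently.

```python
import numpy as np

# Illustrative only: vector norms on toy coordinates.
rng = np.random.default_rng(0)
r = rng.random((10_000, 3))

def scalar_norms(r):
    # One element at a time: the style a plain logic unit executes.
    out = np.empty(len(r))
    for i, (x, y, z) in enumerate(r):
        out[i] = (x*x + y*y + z*z) ** 0.5
    return out

def simd_norms(r):
    # Whole-array arithmetic that maps onto SIMD lanes (or GPU shaders).
    return np.sqrt((r * r).sum(axis=1))

assert np.allclose(scalar_norms(r), simd_norms(r))
```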
There are three compelling reasons to continue CPU folding:
1) In some scenarios, you want the better logic of a CPU.
2) Some proteins are 'too small' for modern GPUs, and volunteers would complain if their GPU were only partially used, since their Points Per Day would drop (a toy utilization sketch follows below). Meanwhile, there are now 256-thread CPUs from AMD that approach GPU speed.
viewtopic.php?f=38&t=35286
3) There are a great many PCs with only CPUs, or with GPUs too simple to fold. CPU folding works on hardware as old as 2001, while even 2013-era GPUs are really iffy at meeting the minimum specs.
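Here is that toy utilization sketch in Python. Every number in it is made up for illustration (the atom count, lane count, neighbour-list size, and batch depth are all assumptions, not F@H internals); it only shows the shape of the argument for why a small simulation can leave a big GPU partly idle.

```python
# Toy occupancy estimate, illustrative numbers only.
def rough_gpu_utilization(atoms: int, gpu_lanes: int,
                          pairs_per_atom: int = 32,
                          batch_depth: int = 64) -> float:
    """Fraction of a GPU kept busy by one simulation step (toy model)."""
    work_items = atoms * pairs_per_atom   # assumed neighbour-list work
    capacity = gpu_lanes * batch_depth    # assumed in-flight capacity
    return min(1.0, work_items / capacity)

# A small protein on a big GPU vs. the same protein on a small GPU:
print(rough_gpu_utilization(atoms=5_000, gpu_lanes=4_096))  # ~0.6, partly idle
print(rough_gpu_utilization(atoms=5_000, gpu_lanes=512))    # 1.0, saturated
```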
Tsar of all the Rushers
I tried to remain childlike, all I achieved was childish.
A friend to those who want no friends
-
- Posts: 127
- Joined: Tue Mar 24, 2020 12:47 pm
Re: CPU vs GPU work units
I appreciate the reply, but it really does not answer my question. Let me try to ask another way. In the context of Folding@Home, do CPU work units do something that is actually different from GPU work units? Are the CPU work units providing information that cannot be obtained from GPU work units? I realize it may be more appropriate for small work units to be assigned to CPUs, but are you getting different information? Again, answer this in the context of FAH.
You correctly inferred my reason for asking, which is whether it makes sense to do CPU work for FAH if you already have a GPU doing work. If you do get different information, then it does make sense. If not, then I think it becomes a personal decision whether you want to spend electricity on the CPU. Of course, if all you have is a CPU, then it makes sense to use it. Every little bit helps.
Just curious.
Re: CPU vs GPU work units
If you're happy to use your CPU for FAH, do so. If you're not, don't.
CPU units are different and absolutely have value. Whether that value is sufficient for you to dedicate CPU time to the project is something only you can answer. Modern high-core-count CPUs do very significant work.
single 1070
-
- Posts: 2522
- Joined: Mon Feb 16, 2009 4:12 am
- Location: Greenwood MS USA
Re: CPU vs GPU work units
F@H is ALWAYS a personal decision. You are volunteering your money to pay electric bills and the wear and tear of operating your equipment.
F@H obviously finds value in CPU WUs or it would not issue them. (I fold on 3 GPUs and 10 CPUs; the GPUs make 1.2 million PPD total, and the CPUs only add 50,000 PPD.)
I can't tell you what you value and what you don't.
I know that until recently, CPUs did operations that GPUs could not. I suspect that now CPUs are just more efficient at some operations than a GPU. From a PR point of view, both CPU folders and GPU folders would be unhappy if CPU folding died. (The researchers are currently working on a Core_a8 for CPUs, so they clearly think CPUs are valuable. Rumor has it that 50% speedups are possible on CPUs newer than 2013 using AVX2 and FMA.)
https://en.wikipedia.org/wiki/Advanced_ ... tensions_2
Tsar of all the Rushers
I tried to remain childlike, all I achieved was childish.
A friend to those who want no friends
-
- Posts: 1996
- Joined: Sun Mar 22, 2020 5:52 pm
- Hardware configuration: 1: 2x Xeon [email protected], 512GB DDR4 LRDIMM, SSD Raid, Win10 Ent 20H2, Quadro K420 1GB, FAH 7.6.21
2: Xeon [email protected], 32GB DDR4, NVME, Win10 Pro 20H2, Quadro M1000M 2GB, FAH 7.6.21 (actually have two of these)
3: [email protected], 12GB DDR3, SSD, Win10 Pro 20H2, GTX 750Ti 2GB, GTX 1080Ti 11GB, FAH 7.6.21 - Location: UK
Re: CPU vs GPU work units
@JimboPalmer beat me to it, but since I've typed it ...
Everything is a personal decision, irrespective of any differences, to be honest.
I'll add that, from my understanding, the GPUs are serviced at the moment by cores based on OpenMM and the CPU cores use Gromacs ... over the years these have developed towards each other and are maybe more similar now than they used to be ... some researchers prefer one over the other, others less so, but it means there is demand for both from the researchers' point of view (again, repeat, as I understand it - I may be wrong).
Another reason CPU folding cores exist is simply that there are some 2.1M CPUs folding and some 0.4M GPUs ... whilst GPU folding cores may progress science quicker, there are 5x more CPUs being donated.
Hopefully some researchers will confirm, with the precision you are looking for, the precise (if any) differences in capability of the CPU cores and the GPU cores, but for many people the personal choice comes down more to what they have available and what they have chosen to donate.
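For what it's worth, here is a minimal hypothetical OpenMM sketch (my own illustration, not an actual F@H core; it assumes a local OpenMM install and a complete `protein.pdb` input with hydrogens): the same simulation setup can be pointed at either the CPU or a GPU platform.

```python
from openmm import app, unit, Platform, LangevinMiddleIntegrator

pdb = app.PDBFile('protein.pdb')   # hypothetical input structure
ff = app.ForceField('amber14-all.xml')
system = ff.createSystem(pdb.topology, nonbondedMethod=app.NoCutoff)
integrator = LangevinMiddleIntegrator(300*unit.kelvin, 1/unit.picosecond,
                                      0.002*unit.picoseconds)

# Same physics, different hardware: swap 'CPU' for 'CUDA' or 'OpenCL'.
platform = Platform.getPlatformByName('CPU')
sim = app.Simulation(pdb.topology, system, integrator, platform)
sim.context.setPositions(pdb.positions)
sim.step(1000)
```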
2x Xeon E5-2697v3, 512GB DDR4 LRDIMM, SSD Raid, W10-Ent, Quadro K420
Xeon E3-1505Mv5, 32GB DDR4, NVME, W10-Pro, Quadro M1000M
i7-960, 12GB DDR3, SSD, W10-Pro, GTX1080Ti
i9-10850K, 64GB DDR4, NVME, W11-Pro, RTX3070
(Green/Bold = Active)
-
- Posts: 127
- Joined: Tue Mar 24, 2020 12:47 pm
Re: CPU vs GPU work units
I may be posting this twice, but I think my last post was dropped for some reason.
To my knowledge (which may be wrong), a CPU can do anything a GPU can do, although possibly at a much slower pace. I don't believe the reverse is true. I think a CPU can do more complex logic (like multiple branches of if statements) that is simply not possible (or at least not efficient) on a GPU.
I guess my question boils down to this: does FAH get different information from CPU work units than from GPU work units, or are CPUs just generating the same information? I could imagine a CPU doing something like "pre-screening" for GPU processing, but I don't know. Which is why I am asking.
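For instance, a toy Python/NumPy illustration of what I mean by branching (my own sketch, nothing to do with F@H's cores): the same clamping computation written with per-element branches, which a CPU handles naturally, and as a branch-free masked operation of the kind that maps well onto GPU lanes.

```python
import numpy as np

x = np.linspace(-2.0, 2.0, 100_000)

def branchy(x):
    # CPU-style: data-dependent control flow, evaluated element by element.
    out = np.empty_like(x)
    for i, v in enumerate(x):
        if v < -1.0:
            out[i] = -1.0
        elif v > 1.0:
            out[i] = 1.0
        else:
            out[i] = v
    return out

def masked(x):
    # GPU-style: no divergent branches; every lane runs the same operation.
    return np.clip(x, -1.0, 1.0)

assert np.allclose(branchy(x), masked(x))
```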
-
- Posts: 2522
- Joined: Mon Feb 16, 2009 4:12 am
- Location: Greenwood MS USA
Re: CPU vs GPU work units
(While I was a programmer, I have never worked in molecular dynamics; I wrote mostly communication code until the internet came along, and inventory/payroll code after. Lots of networking, lots of database work.)
https://www.youtube.com/watch?v=4-DN8-USf74
When I worked at an open-pit mine, we used huge 320-to-380-ton trucks to move waste and much smaller 30-to-180-ton trucks to move ore. I suspect F@H does something similar: the 512-to-4096-core GPUs are discovering interesting sites on proteins to attack, and the 2-to-128-core CPUs are testing drugs to see how well they attack the discovered sites. (This is just my understanding; again, I have not seen this code, and I certainly have not written it.)
Our issue with helping is that F@H is designed as a hobby for your PC, not for you. Since you don't need to know what each WU is doing, they do not put the explanation in layman's terms.
(I fire up each new PC, tune the OS and the hardware, and forget it: turn off sleep when plugged in, use more than 4 GB of dual-channel RAM for CPUs, etc.)
Tsar of all the Rushers
I tried to remain childlike, all I achieved was childish.
A friend to those who want no friends
-
- Posts: 127
- Joined: Tue Mar 24, 2020 12:47 pm
Re: CPU vs GPU work units
Ok, I am not saying CPU is more or less valuable than GPU. Every little bit helps. However, my undergraduate degree was a double major in computational mathematics and chemistry, so I have an interest in both. I am just curious whether different information is being extracted. That is all. I am not making a judgement on anything; I just want to acquire information.
My doctorate was in physical chemistry. I used computers to assign NMR resonance frequencies from combined multi-dimensional NMR spectra of RNA to help define structure. I know for certain that the algorithm I used there would not port to a GPU. Like I said multiple times, I am just interested in knowing whether FAH uses these to gather different information.
I am sorry if I offended anyone. I just always ask questions. It is part of my nature.
-
- Posts: 2522
- Joined: Mon Feb 16, 2009 4:12 am
- Location: Greenwood MS USA
Re: CPU vs GPU work units
mwroggenbuck wrote: However, my undergraduate degree was a double major in computational mathematics and chemistry so I have an interest in both. I just want to acquire information.
OK, the good news is that the science code is open source!
CPUs: http://www.gromacs.org/
GPUs: http://openmm.org/
I have not read any of this code, as my experience is largely inventory/accounting/database design. You have the right background to understand the source code.
What I am not sure you will learn, and I think you want to know, is when the researchers choose one over the other. As was mentioned above, some of it is what they have used in the past; "we have always done it that way" lives in many fields. And part of what the Moonshot implies is "we will do this quickly", not "in the most structured way".
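If you do dig into the GPU side, OpenMM ships a Python API. A quick sketch (assuming a local `pip install openmm`; on older installs the import was `from simtk import openmm`) lists the compute platforms your build can target:

```python
import openmm

# Enumerate the compute platforms this OpenMM build supports
# (typically Reference, CPU, and CUDA and/or OpenCL).
for i in range(openmm.Platform.getNumPlatforms()):
    p = openmm.Platform.getPlatform(i)
    print(f"{p.getName():10s} estimated relative speed {p.getSpeed():g}")
```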
I wish you luck!
Tsar of all the Rushers
I tried to remain childlike, all I achieved was childish.
A friend to those who want no friends
-
- Posts: 127
- Joined: Tue Mar 24, 2020 12:47 pm
Re: CPU vs GPU work units
You made me think of something I had not considered. It is possible that they are using two different algorithms to obtain the same information (protein folding trajectories). One algorithm may do better in one situation, and the other in a different one. The choice is up to the researcher. It is possible that both algorithms could be implemented on GPU, but Gromacs has not gone that way (either because it works now and results are needed now, or because it does not port easily).
In any event, every CPU/GPU cycle is trying to help solve this terrible virus, and really that is all that matters. Thanks to all who have weighed in on this.
-
- Site Moderator
- Posts: 6986
- Joined: Wed Dec 23, 2009 9:33 am
- Hardware configuration: V7.6.21 -> Multi-purpose 24/7
Windows 10 64-bit
CPU:2/3/4/6 -> Intel i7-6700K
GPU:1 -> Nvidia GTX 1080 Ti
§
Retired:
2x Nvidia GTX 1070
Nvidia GTX 675M
Nvidia GTX 660 Ti
Nvidia GTX 650 SC
Nvidia GTX 260 896 MB SOC
Nvidia 9600GT 1 GB OC
Nvidia 9500M GS
Nvidia 8800GTS 320 MB
Intel Core i7-860
Intel Core i7-3840QM
Intel i3-3240
Intel Core 2 Duo E8200
Intel Core 2 Duo E6550
Intel Core 2 Duo T8300
Intel Pentium E5500
Intel Pentium E5400
- Location: Land Of The Long White Cloud
Re: CPU vs GPU work units
mwroggenbuck wrote: ...Does FAH get different information from CPU work units than from GPU work units? Or are CPUs just generating the same information? I could think of a CPU doing something like "pre-screening" for GPU processing, but I don't know...
If you're talking about the output information from CPU/GPU WUs, then it would be the same (atom/molecule positions, temperatures, velocities, etc.). However, the information from the CPU can't be fed directly to the GPU for processing. If Project A starts using the CPU, then it must continue using that (including any follow-up) to ensure consistency between the generated datasets.
AFAIK, I haven't read about CPU Projects which "screen" data and then use the GPU to process the "chosen ones", but that could be an approach. Whether or not it is used by researchers is something that I don't know.
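To illustrate the continuity point, a minimal hypothetical OpenMM sketch (my own illustration; the real work-unit format is different): a generation's end state is serialized, and the follow-up must restore it into a compatible context before continuing.

```python
from openmm import app, unit, LangevinMiddleIntegrator

# Toy setup; protein.pdb is a hypothetical input file with hydrogens.
pdb = app.PDBFile('protein.pdb')
ff = app.ForceField('amber14-all.xml')
system = ff.createSystem(pdb.topology, nonbondedMethod=app.NoCutoff)
integrator = LangevinMiddleIntegrator(300*unit.kelvin, 1/unit.picosecond,
                                      0.002*unit.picoseconds)
sim = app.Simulation(pdb.topology, system, integrator)
sim.context.setPositions(pdb.positions)

sim.step(500)
sim.saveState('gen0.xml')   # serializes positions, velocities, parameters

# The follow-up generation rebuilds an equivalent context, then resumes.
# Handing gen0.xml to a different engine (e.g. Gromacs) would not work.
sim.loadState('gen0.xml')
sim.step(500)
```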
ETA:
Now ↞ Very Soon ↔ Soon ↔ Soon-ish ↔ Not Soon ↠ End Of Time
Welcome To The F@H Support Forum Ӂ Troubleshooting Bad WUs Ӂ Troubleshooting Server Connectivity Issues
Re: CPU vs GPU work units
GPU amplified CPU processing...
Probably the next phase.
The CPU can do more complex mathematical calculations than the GPU; in theory, the CPU can do things a GPU cannot.
However, in the context of FAH, I don't know if CPU WUs are actually doing calculations that couldn't be done on GPUs.
Modern GPUs are equipped with more double-precision floating-point circuits than GPUs from before, so in theory such calculations should no longer form a bottleneck on GPUs.
GPUs work best on looping calculations: input gets processed, written to output, and that output is looped back into the input.
Where GPUs don't work well (yet) is where new data needs to be merged with or replace the output data, or where constant new data needs to be presented to the input.
Nvidia is working on incorporating ARM coprocessors inside their GPUs, to alleviate some of the '1 CPU core per GPU' overhead people are talking about.
They are doing this especially for deep learning and AI, and perhaps Folding can make use of it too in the near future.
As you may already know, GPUs work a lot faster because they have more cores (usually over 40) and thousands of shaders doing the work at ~2 GHz, whereas CPUs are usually limited to 8 cores / 16 threads at ~4 GHz.
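To illustrate the precision point with a short Python sketch (illustrative only, not F@H code): accumulating ten million small force-like contributions in single precision loses low-order bits that a double-precision accumulator keeps, which is why FP64 hardware matters for long energy sums.

```python
import numpy as np

contributions = np.full(10_000_000, 0.1, dtype=np.float32)

total32 = np.add.reduce(contributions, dtype=np.float32)
total64 = np.add.reduce(contributions, dtype=np.float64)

# The single-precision total drifts from the double-precision one,
# which is limited only by the float32 inputs themselves.
print(total32, total64)
```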
Re: CPU vs GPU work units
Neil-B wrote: I'll add that from my understanding the GPUs are serviced at the moment by cores based on OpenMM and the CPU cores use Gromacs ... over the years these have developed towards each other and are maybe more similar now than they used to be ... some researchers prefer one over the other - others less so, but it means there is demand for both from the researchers' point of view (again, repeat, as I understand it - I may be wrong).
There are several analysis code bases that are used by scientists in stand-alone environments on whatever hardware is being used for their personal research. FAH has adapted two of them for cloud-sourced computing by donors like yourself.
Sometimes they develop toward each other; sometimes not. The fact is that (A) they are developed by different teams and (B) each one gets a new algorithm added from time to time. If a scientist needs to do a different type of analysis, (s)he can propose an enhancement to the internal methodology of his/her favorite analysis package. A couple of years later, that same method will likely be added to the other code.
The COVID Moonshot needed some new methods, and FAH has recently been adding special algorithms to OpenMM.
Posting FAH's log:
How to provide enough info to get helpful support.
-
- Posts: 127
- Joined: Tue Mar 24, 2020 12:47 pm
Re: CPU vs GPU work units
My original question has been answered by PantherX and Bruce and JimboPalmer (and everyone else). The outputs are the same, but the algorithms are different. Given my background, I was just curious. I was not trying to say that one method (or its hardware implementation) was better than the other--I just wanted to know what was going on. Thanks to a lot of people, I now have a much better idea of the process.
Keep those CPU/GPU cycles going.