Why are CPU projects worth so few points relative to GPU?
Re: Why are CPU projects worth so few points relative to GPU
Well, I still consider the original question to be why CPU projects are worth so few points relative to GPU. It's really very simple: CPU projects have many, many times fewer atoms than GPU projects, so if you adjust for scientific value, you'll see that the points are inherently fair.
Posting FAH's log:
How to provide enough info to get helpful support.
- Site Admin
- Posts: 7937
- Joined: Tue Apr 21, 2009 4:41 pm
- Hardware configuration: Mac Pro 2.8 quad 12 GB smp4, MacBook Pro 2.9 i7 8 GB smp2
- Location: W. MA
Re: Why are CPU projects worth so few points relative to GPU
Or even if the CPU and GPU projects are working on systems with similar numbers of atoms, the GPU WUs perform many times the number of time steps.
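To make that concrete, here is a rough back-of-the-envelope sketch of how total work scales with atoms times time steps; every number in it is hypothetical, chosen only to illustrate the scaling, not taken from any real project:

```python
# Back-of-the-envelope: total WU "work" grows with atoms x time steps.
# All figures below are hypothetical, purely to illustrate the scaling.

cpu_wu = {"atoms": 20_000, "steps": 250_000}     # small system, fewer steps
gpu_wu = {"atoms": 200_000, "steps": 2_500_000}  # large system, far more steps

def work(wu):
    # Naive estimate: cost per step scales with the atom count.
    return wu["atoms"] * wu["steps"]

print(f"GPU WU ~ {work(gpu_wu) / work(cpu_wu):.0f}x the raw work of the CPU WU")
# -> GPU WU ~ 100x the raw work of the CPU WU
```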
iMac 2.8 i7 12 GB smp8, Mac Pro 2.8 quad 12 GB smp6
MacBook Pro 2.9 i7 8 GB smp3
- Posts: 50
- Joined: Tue May 05, 2020 5:34 am
Re: Why are CPU projects worth so few points relative to GPU
Agreed, and I believe the original author of this post can thus mark it as solved.
- Posts: 19
- Joined: Mon Jan 07, 2008 12:06 am
Re: Why are CPU projects worth so few points relative to GPU
Ok, for a moment, let us forget about points and amount of science done.
Are work units that are assigned to cpus able to also be completed by gpus, or are (some/all) of the work units assigned to cpus only capable of being run/solved on a cpu? Is there something special about these wu's that make them suitable for only cpus?
If not, and if all wu's are capable of being run on a gpu (i.e., the ones which are being run on cpus), then given the amount of science done by gpus compared to cpus, why is F@H even bothering with cpu wu's? Why isn't F@H strictly a gpu project?
- Site Moderator
- Posts: 6986
- Joined: Wed Dec 23, 2009 9:33 am
- Hardware configuration: V7.6.21 -> Multi-purpose 24/7
Windows 10 64-bit
CPU:2/3/4/6 -> Intel i7-6700K
GPU:1 -> Nvidia GTX 1080 Ti
§
Retired:
2x Nvidia GTX 1070
Nvidia GTX 675M
Nvidia GTX 660 Ti
Nvidia GTX 650 SC
Nvidia GTX 260 896 MB SOC
Nvidia 9600GT 1 GB OC
Nvidia 9500M GS
Nvidia 8800GTS 320 MB
Intel Core i7-860
Intel Core i7-3840QM
Intel i3-3240
Intel Core 2 Duo E8200
Intel Core 2 Duo E6550
Intel Core 2 Duo T8300
Intel Pentium E5500
Intel Pentium E5400
- Location: Land Of The Long White Cloud
Re: Why are CPU projects worth so few points relative to GPU
Welcome to the F@H Forum, The_Bad_Penguin!
The_Bad_Penguin wrote: ...Are work units that are assigned to cpus able to also be completed by gpus, or are (some/all) of the work units assigned to cpus only capable of being run/solved on a cpu?...
Some Projects are only CPU while others are only GPU. Occasionally, there can be Projects that make use of both CPU and GPU, but that's not duplication of work; rather, it's complementary work being done to gain a better understanding of the Project.
The_Bad_Penguin wrote: ...Is there something special about these wu's that make them suitable for only cpus?...
CPUs are really good at serial work while GPUs are really good at parallel work. Plus, CPUs can perform a variety of complex mathematical instructions while GPUs can't achieve a similar level of complexity.
The_Bad_Penguin wrote: ...Why isn't F@H strictly a gpu project?
Because GPUs can only perform X calculations while CPUs can perform Y calculations. Together, they provide much better datasets and help researchers.
ETA:
Now ↞ Very Soon ↔ Soon ↔ Soon-ish ↔ Not Soon ↠ End Of Time
Welcome To The F@H Support Forum Ӂ Troubleshooting Bad WUs Ӂ Troubleshooting Server Connectivity Issues
- Site Admin
- Posts: 7937
- Joined: Tue Apr 21, 2009 4:41 pm
- Hardware configuration: Mac Pro 2.8 quad 12 GB smp4, MacBook Pro 2.9 i7 8 GB smp2
- Location: W. MA
Re: Why are CPU projects worth so few points relative to GPU
To also answer the question a bit differently: the data in a WU for CPU processing is formatted as needed by the GROMACS code used. Similarly, the data in a GPU WU is specific to being read in and processed by the OpenMM code in the GPU core.
In both cases the data from the raw description of a protein system can often be converted into WUs suitable for either CPU or GPU processing, but the WUs themselves are not interchangeable.
As for why it is not exclusively GPU, another reason is that there are a lot more CPUs whose use can be donated than GPUs.
iMac 2.8 i7 12 GB smp8, Mac Pro 2.8 quad 12 GB smp6
MacBook Pro 2.9 i7 8 GB smp3
- Posts: 19
- Joined: Mon Jan 07, 2008 12:06 am
Re: Why are CPU projects worth so few points relative to GPU
Thank you for the responses; this is sort of where I was trying to go.
@PantherX: I agree with the general statement that "CPUs can perform a variety of complex mathematical instructions while GPUs can't achieve a similar level of complexity."
But, I suppose the question remains: is this what is specifically happening with the F@H cpu wu's? Do they actually use "complex mathematical instructions" that "GPUs can't achieve"?
So, let us take a brief journey to hypothetical land, a place where you can purchase an amd 3900x cpu for $400 or an amd 3970x cpu for $1900; and a nvidia 2080ti super-duper-fancy-name-of-the-week gpu for $1600 or a generic run-of-the-mill video card for $100.
Where do you get the most "science done" for the $2000?
Is the same (complex mathematical instructions) science being done with a 3970x and $100 gpu as it is with a 3900x and 2080ti?
Or is the 3970x and $100 gpu doing (very?) different (complex mathematical instructions) than the 3900x and 2080ti, and thus it is "apples and oranges" to try to compare the scientific work done, as the work done between the two choices is vastly different?
Sorry for splitting hairs . . .
@Joe_H: Thanks for mentioning that "there are a lot more CPUs whose use can be donated than GPUs."
Re: Why are CPU projects worth so few points relative to GPU
If - and it's a big if with loads of history and weirdness - points = science, then the place to put your $2000 is on the best gpu you can buy for that money.
FAH has never really been about dedicated folding hardware. The premise was and still is that you have hardware for whatever reason you have it, and you let it run FAH when you're not using it yourself. So if you need a 3970x for whatever you do on the rig then let your 3970x do the best science available. If you have a 2080ti for gaming for example and when you're not gaming you are OK with it running FAH then great, thanks very much.
The dedicated hardware really started with bigadv. These were special units only suitable for those with loads of cpu cores and people built very expensive systems with multi-socket mobos and expensive cpus. Today, because of the quick return bonus brought in around that time, a 2080ti left to run 24/7 will produce vastly more points than any cpu system.
If you're determined to build a dedicated rig, my advice would be to buy a good used gpu with good cooling. The rest of the system (other than perhaps the PSU) can be unfit for modern use for anything else.
single 1070
Re: Why are CPU projects worth so few points relative to GPU
If I had to build a folding rig (for 24/7 use) from scratch, I would choose an Intel processor (an old Xeon, for example) and one or several water-cooled GPU(s). The latter is all the more important if you have to "live" with the rig, i.e. at home or in your office. High-end hardware, like the 2080 Ti, will produce a lot of heat, and only water cooling can handle it without too much noise and excess ambient heat. I would probably not use the Ti version either, rather the Super: less extreme, more balanced.
Otherwise, the 3970X (Zen 2 in general) would NOT be a good choice, as its main weakness is located precisely where FAH demands the most: the AVX instruction set, a domain where the Xeon and the X299 platform perform best. For folding, it would be very difficult (and expensive) to cool a Zen 2 processor, and a good part of that heat would not really benefit science.
- Site Moderator
- Posts: 6986
- Joined: Wed Dec 23, 2009 9:33 am
- Hardware configuration: V7.6.21 -> Multi-purpose 24/7
Windows 10 64-bit
CPU:2/3/4/6 -> Intel i7-6700K
GPU:1 -> Nvidia GTX 1080 Ti
§
Retired:
2x Nvidia GTX 1070
Nvidia GTX 675M
Nvidia GTX 660 Ti
Nvidia GTX 650 SC
Nvidia GTX 260 896 MB SOC
Nvidia 9600GT 1 GB OC
Nvidia 9500M GS
Nvidia 8800GTS 320 MB
Intel Core i7-860
Intel Core i7-3840QM
Intel i3-3240
Intel Core 2 Duo E8200
Intel Core 2 Duo E6550
Intel Core 2 Duo T8300
Intel Pentium E5500
Intel Pentium E5400
- Location: Land Of The Long White Cloud
Re: Why are CPU projects worth so few points relative to GPU
The_Bad_Penguin wrote: ...is the 3970x and $100 gpu doing (very?) different (complex mathematical instructions) than the 3900x and 2080ti, and thus it is "apples and oranges" to try to compare the scientific work done, as the work done between the two choices is vastly different?...
F@H aims to understand protein folding at an atomic level. To do that, it uses molecular simulations, specifically Markov state models. Now, that is achieved by using:
1) GROMACS
2) OpenMM
GROMACS is mainly used on CPUs while OpenMM is mainly used on GPUs. In F@H's case, FahCore_a4 and FahCore_a7 use GROMACS, while FahCore_21 and FahCore_22 use OpenMM via OpenCL. There are plans to get OpenMM running via CUDA, but the ETA is unknown. Recently, GROMACS has supported off-loading to GPUs; that means the CPU performs the serial tasks (since it is exceptionally good at them) while the parallel tasks are given to the GPU (since it is exceptionally good at those), so the overall result is that the best parts of the CPU and GPU are used together to speed up GROMACS. Time will tell if F@H chooses to use this; we will simply have to wait and see what happens.
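For a sense of what a GPU core's workload looks like, here is a minimal OpenMM sketch. This is not F@H's actual FahCore code, just an illustration of the kind of simulation OpenMM runs; it assumes a recent OpenMM (7.6+, where the package is imported as openmm), and "protein.pdb" is a placeholder file name:

```python
# Minimal OpenMM run on the OpenCL platform (illustrative only, not FahCore).
from openmm import LangevinMiddleIntegrator, Platform
from openmm.app import PDBFile, ForceField, Simulation, PME
from openmm.unit import kelvin, nanometer, picosecond, picoseconds

pdb = PDBFile("protein.pdb")  # placeholder input structure
forcefield = ForceField("amber14-all.xml", "amber14/tip3p.xml")
system = forcefield.createSystem(pdb.topology, nonbondedMethod=PME,
                                 nonbondedCutoff=1 * nanometer)
integrator = LangevinMiddleIntegrator(300 * kelvin, 1 / picosecond,
                                      0.002 * picoseconds)
platform = Platform.getPlatformByName("OpenCL")  # or "CUDA" where supported
sim = Simulation(pdb.topology, system, integrator, platform)
sim.context.setPositions(pdb.positions)
sim.minimizeEnergy()
sim.step(10_000)  # real WUs run vastly more steps than this
```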
ETA:
Now ↞ Very Soon ↔ Soon ↔ Soon-ish ↔ Not Soon ↠ End Of Time
Welcome To The F@H Support Forum Ӂ Troubleshooting Bad WUs Ӂ Troubleshooting Server Connectivity Issues
Re: Why are CPU projects worth so few points relative to GPU
I build my own systems, and can go either way. Usually, that has meant GPUs for the obvious benefit in output. But a couple of years ago, I noted that the CPU projects (mainly from the Voelz lab at the time) looked very interesting scientifically, not that I am an expert on the subject. But the peptides (smaller than proteins I believe) can be used for a variety of intriguing purposes.
So I first tried out a Ryzen 3700X, and found it was not bad at around 275 k PPD. Then I built a Ryzen 3950X on Ubuntu 18.04.4 early this year, and when the COVID rush came along, I built another. They are getting over 600 k PPD. When you look at the power consumed, they are almost as efficient as the GPUs, so I don't feel any loss there. I am due for a GPU next, but that will be only next year with a 7nm card.
- Posts: 2522
- Joined: Mon Feb 16, 2009 4:12 am
- Location: Greenwood MS USA
Re: Why are CPU projects worth so few points relative to GPU
[Just chit chat]
When CPUs have 'enough' threads, they get about the same PPD as GPUs. Epycs and Threadrippers with 128 or more threads are very competitive with GPUs. My i3 with 3 threads, not so much. So it is not so much why CPUs are less able to keep up, but why it is so expensive to buy a CPU that can keep up.
Tsar of all the Rushers
I tried to remain childlike, all I achieved was childish.
A friend to those who want no friends
Re: Why are CPU projects worth so few points relative to GPU
There's one more fact about parallel vs. serial computations.
As has already been stated, GPUs work very effectively if the calculations need a high degree of parallelism, while CPUs work very effectively with relatively low degrees of parallelism.
Proteins come in all sizes. The degree of parallelism depends directly on the number of atoms. Some projects are constructed with massive numbers of atoms ... some projects are constructed from relatively few atoms (but still quite a number). If a scientist is preparing a new project, he can consider that there are lots of CPUs and fewer GPUs, and the GPUs span a range from a few hundred shaders (cores) to many thousands of them. If the protein is small, it can run efficiently on CPUs with relatively few cores or on GPUs with relatively few shaders. If the protein being analyzed has a very large number of atoms, it will be efficient on a GPU with large numbers of shaders. A protein with relatively few atoms can't use the extra shaders on big GPUs very effectively.
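A deliberately crude toy model of that last point: a fixed-size system can only keep so many shaders busy. The atoms-per-shader threshold below is invented purely for illustration:

```python
# Toy model: fraction of a GPU's shaders a system of N atoms can keep busy.
ATOMS_PER_SHADER = 32  # invented threshold, for illustration only

def utilization(atoms, shaders):
    return min(1.0, atoms / (ATOMS_PER_SHADER * shaders))

small, large = 25_000, 400_000        # hypothetical atom counts
for shaders in (640, 2_560, 10_240):  # low-end to high-end GPU
    print(f"{shaders:>6} shaders: small protein {utilization(small, shaders):5.0%}, "
          f"large protein {utilization(large, shaders):5.0%}")
# The small system saturates a low-end GPU but leaves a big GPU mostly idle;
# the large system keeps even the biggest GPU fully busy.
```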
Posting FAH's log:
How to provide enough info to get helpful support.
Re: Why are CPU projects worth so few points relative to GPU
Projects have a baseline points value and a QRB (Quick Return Bonus).
Once you pass the point where the QRB contributes more than the baseline, you'll see the PPD fly and get very near GPU PPD.
And that means a high-clock CPU and/or a high core count.
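As a sketch of how the QRB rewards fast returns: the bonus roughly follows final = base × max(1, √(k × deadline / elapsed)) per the F@H points FAQ; the base points, k factor, and deadline below are hypothetical:

```python
import math

def wu_points(base, k, deadline_days, elapsed_days):
    # Quick Return Bonus: final = base * max(1, sqrt(k * deadline / elapsed))
    return base * max(1.0, math.sqrt(k * deadline_days / elapsed_days))

# Hypothetical WU: 1,000 base points, k = 26, 14-day deadline.
for elapsed in (7.0, 1.0, 0.1):  # days taken to return the WU
    print(f"returned in {elapsed:>4} days -> "
          f"{wu_points(1000, 26, 14, elapsed):>9,.0f} points")
# Returning the same WU 70x faster earns ~8x more points here, which is why
# fast GPUs (and fast, many-core CPUs) pull so far ahead.
```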
As mentioned, top-tier Ryzens (3900X, 3950X) and Threadrippers are currently the only ones coming close to GPUs in terms of performance; but Threadrippers are out of a lot of people's budget when they're building a new system.
That could change in a year, as technology doesn't stand still.
It's just that right now most CPUs don't have the processing power to pass that barrier where the QRB shoots through the roof (like it does with GPUs).
I'm expecting Ryzen to move to 5 nm in a year or two.
And if they do, I hope they'll come with SMT4, as the AM4 socket is limited to 16 CPU cores; if they keep their CPUs at 16 cores and 32 threads, their upcoming 5 nm CPUs would run at 75 W.
That's 25 W they could still assign to either higher boost clocks or more threads.
SMT4 would allow for 16 cores and 64 threads on a CPU, which is within the limits of Windows 10 Home, which has a maximum logical CPU count of 64.
Upcoming Threadrippers at 5 nm will more than likely need either Linux or Windows Enterprise to run.
But with high-end next-gen CPUs, we'll definitely see the gap between CPU and GPU close on FAH.
It wouldn't be outside the realm of possibility to see 1M PPD on next-gen Ryzens, and 2-4M PPD on next-gen Threadrippers.
- Posts: 19
- Joined: Mon Jan 07, 2008 12:06 am
Re: Why are CPU projects worth so few points relative to GPU
Would love to find an affordable 3990X, but "rumors" are that the next-gen Ryzens (Q4 2020?) will be ~20% more efficient. And certainly, a 5 nm Ryzen/Threadripper would be a sight to behold.
Don't know that I would agree that "we'll definitely see the gap between CPU and GPU close on FAH," because just as there will be a new generation of high-end CPUs, there will also be a new generation of high-end GPUs.
Starting to see web articles/blogs/videos saying "NVIDIA GeForce RTX 3090 rumors: up to 60-90% faster than RTX 2080 Ti".
I would assume that a rumored Nvidia 3090 ($2000?) [4M PPD?] would cost less than a 3990X ($3000?) [3M PPD?] or a next-gen Threadripper 64C/128T ($???) [4M+ PPD?].