What are the technical differences between CPU & GPU WUs
Moderators: Site Moderators, FAHC Science Team
- Posts: 168
- Joined: Tue Apr 07, 2020 2:38 pm
What are the technical differences between CPU & GPU WUs
Like the title says, I'm curious why WUs are divided into distinct CPU & GPU categories. Is it to play to their particular strengths; are there things that only one can handle but not the other; or is there some other reason?
Edit: A prediction, after thinking about it: GPU WUs probably contain the rendering components of the simulations (i.e., lots of 3D matrix arithmetic). That rules out the CPU as the tool for that job, but what about the other direction: what, if anything, makes the CPU WUs more suited to CPUs?
Last edited by NoMoreQuarantine on Thu Apr 09, 2020 4:23 pm, edited 1 time in total.
Re: What are the technical differences between CPU & GPU WUs
They use different hardware in your computer.
Computers all have CPUs, which are general-purpose processors. A GPU is a specialized piece of hardware designed to compute images, and it has superior 3D computing capabilities. Older computers and laptops may not even have a GPU, using what was originally called a VGA adapter instead.
The software designed to compute with each is decidedly different, so the WUs are different. The slowest GPUs have about the same compute capabilities as a CPU, while the high-powered GPUs (typically used to make games very fast) may be 100X as fast.
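That "100X" figure can be sanity-checked with back-of-envelope peak-throughput arithmetic. The numbers below are illustrative assumptions (a generic 8-core CPU with 4-wide SIMD vs. a generic 5000-core GPU), not measurements of any specific hardware:

```python
# Rough peak single-precision throughput, using made-up but plausible numbers.
cores, simd_width, cpu_ghz, flops_per_cycle = 8, 4, 3.5, 2  # 2 = fused multiply-add
cpu_gflops = cores * simd_width * cpu_ghz * flops_per_cycle   # 224.0 GFLOPS

gpu_cores, gpu_ghz = 5000, 1.5   # many simple cores at a lower clock
gpu_gflops = gpu_cores * gpu_ghz * flops_per_cycle            # 15000.0 GFLOPS

ratio = gpu_gflops / cpu_gflops
print(f"GPU/CPU peak ratio: {ratio:.0f}x")   # → GPU/CPU peak ratio: 67x
```

Peak numbers never translate directly into folding speed, but the rough order of magnitude matches the claim above.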
Posting FAH's log:
How to provide enough info to get helpful support.
- Posts: 168
- Joined: Tue Apr 07, 2020 2:38 pm
Re: What are the technical differences between CPU & GPU WUs
Thanks for the response, bruce. What you said lines up pretty well with my assumptions, so that's gratifying.
- Posts: 2522
- Joined: Mon Feb 16, 2009 4:12 am
- Location: Greenwood MS USA
Re: What are the technical differences between CPU & GPU WUs
Hardware
CPUs have floating-point math units that do the brute-force computing, but they also have sophisticated logic to make subtle decisions.
A typical CPU may have 8 cores, each of which has a 4-wide SIMD unit. (min 1, max 256)
https://en.wikipedia.org/wiki/SIMD
GPUs are far more powerful, but considerably less sophisticated. They may have 5000 cores, but with limited logic. Due to their huge size they tend to have slower clock speeds.
https://en.wikipedia.org/wiki/General-p ... sing_units
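The "4-wide SIMD" idea above can be sketched in plain Python: one instruction's worth of work is applied to four data lanes at once. (Real SIMD uses CPU vector registers such as SSE/AVX, not a Python loop — this is only a model of the concept.)

```python
# Sketch of a 4-wide SIMD unit: each iteration models ONE vector
# instruction that performs 4 additions simultaneously.
def simd_add(a, b, width=4):
    """Add two equal-length vectors in chunks of `width` lanes."""
    assert len(a) == len(b) and len(a) % width == 0
    out = []
    for i in range(0, len(a), width):
        # One vector instruction: all `width` lanes add in parallel.
        lanes_a, lanes_b = a[i:i + width], b[i:i + width]
        out.extend(x + y for x, y in zip(lanes_a, lanes_b))
    return out

print(simd_add([1, 2, 3, 4, 5, 6, 7, 8], [10, 20, 30, 40, 50, 60, 70, 80]))
# → [11, 22, 33, 44, 55, 66, 77, 88]  (8 adds issued as 2 vector instructions)
```

A GPU pushes the same idea much further: thousands of lanes, but far less of the branching logic a CPU core carries.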
Software
CPUs currently use the GROMACS algorithm in an a7 core.
GPUs use OpenMM on top of OpenCL in an older core 21 or a newer core 22.
https://en.wikipedia.org/wiki/GROMACS
https://en.wikipedia.org/wiki/Molecular ... ng_on_GPUs
https://en.wikipedia.org/wiki/OpenCL
Last edited by JimboPalmer on Fri Apr 17, 2020 9:24 pm, edited 1 time in total.
Tsar of all the Rushers
I tried to remain childlike, all I achieved was childish.
A friend to those who want no friends
- Posts: 168
- Joined: Tue Apr 07, 2020 2:38 pm
Re: What are the technical differences between CPU & GPU WUs
Thank you for providing an explanation and additional reading, JimboPalmer.
Re: What are the technical differences between CPU & GPU WUs
JimboPalmer wrote: GPUs use OpenMM on top of OpenCL in an older core 21 or a newer core 22.
I hope they have not forgotten about CUDA. Maybe after this mad rush is over (if ever), they can get around to it.
- Posts: 78
- Joined: Wed Mar 25, 2020 2:39 am
- Location: Canada
Re: What are the technical differences between CPU & GPU WUs
JimboPalmer wrote: GPUs use OpenMM on top of OpenCL in an older core 21 or a newer core 22.
JimF wrote: I hope they have not forgotten about CUDA. Maybe after this mad rush is over (if ever), they can get around to it.
CUDA is just nVidia's proprietary API for GPU compute. OpenCL works just as well and has broader platform compatibility due to it being an open standard.
Re: What are the technical differences between CPU & GPU WUs
Frogging101 wrote: CUDA is just nVidia's proprietary API for GPU Compute. OpenCL works just as well and has broader platform compatibility due to it being an open standard.
I will let others take you apart on that one. It is more than I can bear just to think about it.
- Posts: 78
- Joined: Wed Mar 25, 2020 2:39 am
- Location: Canada
Re: What are the technical differences between CPU & GPU WUs
Frogging101 wrote: CUDA is just nVidia's proprietary API for GPU Compute. OpenCL works just as well and has broader platform compatibility due to it being an open standard.
JimF wrote: I will let others take you apart on that one. It is more than I can bear just to think about it.
I don't know what you're trying to say. The OpenCL and CUDA APIs are just two different ways of programming the same thing. The latter is only supported by nVidia.
- Posts: 2522
- Joined: Mon Feb 16, 2009 4:12 am
- Location: Greenwood MS USA
Re: What are the technical differences between CPU & GPU WUs
[This is a rant, it is only a rant; in the event of real discussion, I will shut up. All my GPUs are Nvidia; the last AMD GPU I used was an HD 2600 Pro on Core_11]
In the bad old days, F@H had to have one Core for AMD and one Core for Nvidia. The Nvidia Core used very optimized CUDA. (Optimized for Fermi, I bet.) This locked you into Nvidia cards in the future.
This doubled the amount of work the programmer had to do, and made it challenging to add features to F@H while keeping both Cores in sync.
At some point (2008), Nvidia grudgingly made an emulation layer of OpenCL on top of CUDA. It was the clunkiest (it uses polled I/O, so it needlessly consumes an entire CPU thread) and wimpiest (Nvidia has been on version 1.2 for 5 years; the current specification is 2.2, and AMD is on version 2.0) version possible, just so Nvidia could claim to support the standard. Nvidia still very much wants you to use CUDA, so that they continue to sell Nvidia GPUs.
AMD abandoned their proprietary interfaces in favor of OpenCL, and so has no reason to code their version poorly.
Sadly, Apple has deprecated OpenCL, so it would be foolhardy to write a GPU Core for Macs knowing that it won't last.
So yes, a CUDA core would be faster, but that is (at least in part) because Nvidia has crippled their implementation of OpenCL.
[/Rant]
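The "polled I/O consumes an entire CPU thread" complaint comes down to busy-waiting versus blocking. A minimal Python sketch of the two styles (purely illustrative — this has nothing to do with the actual driver code):

```python
import threading
import time

def gpu_kernel(done):
    """Stand-in for a GPU computing for a while, then signaling completion."""
    time.sleep(0.05)
    done.set()  # "kernel finished" signal

# Style 1: busy-polling - spin on a flag, burning a full CPU core while waiting.
done = threading.Event()
threading.Thread(target=gpu_kernel, args=(done,)).start()
spins = 0
while not done.is_set():  # ~100% CPU on this thread until the flag flips
    spins += 1

# Style 2: blocking wait - the thread sleeps until it is woken up.
done = threading.Event()
threading.Thread(target=gpu_kernel, args=(done,)).start()
done.wait()  # near-zero CPU while waiting
print(f"busy-wait spun {spins} times; blocking wait spun 0 times")
```

A driver that only offers the first style forces folding software to dedicate a whole CPU thread to feeding each GPU, which is exactly the overhead described above.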
Tsar of all the Rushers
I tried to remain childlike, all I achieved was childish.
A friend to those who want no friends
- Site Moderator
- Posts: 6986
- Joined: Wed Dec 23, 2009 9:33 am
- Hardware configuration: V7.6.21 -> Multi-purpose 24/7
Windows 10 64-bit
CPU:2/3/4/6 -> Intel i7-6700K
GPU:1 -> Nvidia GTX 1080 Ti
§
Retired:
2x Nvidia GTX 1070
Nvidia GTX 675M
Nvidia GTX 660 Ti
Nvidia GTX 650 SC
Nvidia GTX 260 896 MB SOC
Nvidia 9600GT 1 GB OC
Nvidia 9500M GS
Nvidia 8800GTS 320 MB
Intel Core i7-860
Intel Core i7-3840QM
Intel i3-3240
Intel Core 2 Duo E8200
Intel Core 2 Duo E6550
Intel Core 2 Duo T8300
Intel Pentium E5500
Intel Pentium E5400 - Location: Land Of The Long White Cloud
- Contact:
Re: What are the technical differences between CPU & GPU WUs
JimboPalmer wrote: GPUs use OpenMM on top of OpenCL in an older core 21 or a newer core 22.
JimF wrote: I hope they have not forgotten about CUDA. Maybe after this mad rush is over (if ever), they can get around to it.
You may want to go and buy a lottery ticket.
Disclaimer: the information below was obtained while under development, which means that numbers, values and timing may change once it is in production.
There's a CUDA version of FahCore_22 in development, no ETA yet, but that's a great start: https://www.reddit.com/r/pcmasterrace/c ... s/fkylryx/
Performance of CUDA FahCore_22 is up to 50% more (did you read my disclaimer?): https://www.reddit.com/r/pcmasterrace/c ... s/fkyjt9d/
AMD is not left behind and there's a chance that ROCm will be supported (really happy that you read my disclaimer): https://www.reddit.com/r/pcmasterrace/c ... s/fkyrqpp/
ETA:
Now ↞ Very Soon ↔ Soon ↔ Soon-ish ↔ Not Soon ↠ End Of Time
Welcome To The F@H Support Forum Ӂ Troubleshooting Bad WUs Ӂ Troubleshooting Server Connectivity Issues
- Posts: 1996
- Joined: Sun Mar 22, 2020 5:52 pm
- Hardware configuration: 1: 2x Xeon [email protected], 512GB DDR4 LRDIMM, SSD Raid, Win10 Ent 20H2, Quadro K420 1GB, FAH 7.6.21
2: Xeon [email protected], 32GB DDR4, NVME, Win10 Pro 20H2, Quadro M1000M 2GB, FAH 7.6.21 (actually have two of these)
3: [email protected], 12GB DDR3, SSD, Win10 Pro 20H2, GTX 750Ti 2GB, GTX 1080Ti 11GB, FAH 7.6.21 - Location: UK
Re: What are the technical differences between CPU & GPU WUs
Did anyone not spot the disclaimer?
2x Xeon E5-2697v3, 512GB DDR4 LRDIMM, SSD Raid, W10-Ent, Quadro K420
Xeon E3-1505Mv5, 32GB DDR4, NVME, W10-Pro, Quadro M1000M
i7-960, 12GB DDR3, SSD, W10-Pro, GTX1080Ti
i9-10850K, 64GB DDR4, NVME, W11-Pro, RTX3070
(Green/Bold = Active)
- Posts: 168
- Joined: Tue Apr 07, 2020 2:38 pm
Re: What are the technical differences between CPU & GPU WUs
PantherX wrote: Disclaimer, the information below is obtained while under development which means that numbers, values and time may change once it is in production.
There's a CUDA version of FahCore_22 in development, not ETA yet but that's a great start: https://www.reddit.com/r/pcmasterrace/c ... s/fkylryx/
Performance of CUDA FahCore_22 is up to 50% more (did you read my disclaimer?): https://www.reddit.com/r/pcmasterrace/c ... s/fkyjt9d/
AMD is not left behind and there's a chance that ROCm will be supported (really happy that you read my disclaimer): https://www.reddit.com/r/pcmasterrace/c ... s/fkyrqpp/
It's great to hear that 50% improved CUDA performance and ROCm support have been officially and categorically confirmed. What was that about a disclaimer though?
- Site Moderator
- Posts: 6986
- Joined: Wed Dec 23, 2009 9:33 am
- Hardware configuration: V7.6.21 -> Multi-purpose 24/7
Windows 10 64-bit
CPU:2/3/4/6 -> Intel i7-6700K
GPU:1 -> Nvidia GTX 1080 Ti
§
Retired:
2x Nvidia GTX 1070
Nvidia GTX 675M
Nvidia GTX 660 Ti
Nvidia GTX 650 SC
Nvidia GTX 260 896 MB SOC
Nvidia 9600GT 1 GB OC
Nvidia 9500M GS
Nvidia 8800GTS 320 MB
Intel Core i7-860
Intel Core i7-3840QM
Intel i3-3240
Intel Core 2 Duo E8200
Intel Core 2 Duo E6550
Intel Core 2 Duo T8300
Intel Pentium E5500
Intel Pentium E5400 - Location: Land Of The Long White Cloud
- Contact:
Re: What are the technical differences between CPU & GPU WUs
NoMoreQuarantine wrote: ...What was that about a disclaimer though?
It was to ensure that none of the information above is taken for granted... drivers can change, priorities can shift. If you're reading this, you have really good eyesight or have mastered copy/paste.
ETA:
Now ↞ Very Soon ↔ Soon ↔ Soon-ish ↔ Not Soon ↠ End Of Time
Welcome To The F@H Support Forum Ӂ Troubleshooting Bad WUs Ӂ Troubleshooting Server Connectivity Issues
- Posts: 32
- Joined: Fri Mar 06, 2020 5:20 pm
Re: What are the technical differences between CPU & GPU WUs
My room is always hot.