Points CPU vs GPU

Moderators: Site Moderators, FAHC Science Team

PantherX
Site Moderator
Posts: 6986
Joined: Wed Dec 23, 2009 9:33 am
Hardware configuration: V7.6.21 -> Multi-purpose 24/7
Windows 10 64-bit
CPU:2/3/4/6 -> Intel i7-6700K
GPU:1 -> Nvidia GTX 1080 Ti
§
Retired:
2x Nvidia GTX 1070
Nvidia GTX 675M
Nvidia GTX 660 Ti
Nvidia GTX 650 SC
Nvidia GTX 260 896 MB SOC
Nvidia 9600GT 1 GB OC
Nvidia 9500M GS
Nvidia 8800GTS 320 MB

Intel Core i7-860
Intel Core i7-3840QM
Intel i3-3240
Intel Core 2 Duo E8200
Intel Core 2 Duo E6550
Intel Core 2 Duo T8300
Intel Pentium E5500
Intel Pentium E5400
Location: Land Of The Long White Cloud

Re: Points CPU vs GPU

Post by PantherX »

There are some architectural differences between the CPU and GPU. This is a nice summary:
...A GPU can handle large amounts of data in many streams, performing relatively simple operations on them, but is ill-suited to heavy or complex processing on a single or few streams of data. A CPU is much faster on a per-core basis (in terms of instructions per second) and can perform complex operations on a single or few streams of data more easily, but cannot efficiently handle many streams simultaneously...
https://www.howtogeek.com/128221/why-ar ... d-of-gpus/

When it comes down to mathematics (F@H uses specific instruction sets), there is a fundamental difference between them:
...CPUs dedicate the majority of their core real-estate to scalar/superscalar operations...GPUs, on the other hand, dedicate most of their core real-estate to a Single Instruction Multiple Data (SIMD)...
https://www.quora.com/Why-does-a-GPU-pe ... than-a-CPU
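As a rough illustration of that scalar-vs-SIMD split, here is a minimal sketch in Python, with NumPy's vectorised call standing in for data-parallel hardware; the array size and the operation are made up purely for illustration:

# Scalar vs data-parallel processing, sketched in Python/NumPy.
# NumPy's vectorised call stands in for the GPU's "one simple operation
# applied to many data streams at once" model; the scalar loop is the
# CPU-style "one stream at a time" model.
import numpy as np

values = np.random.rand(1_000_000)   # a million independent data elements

def scale_scalar(data, factor):
    # CPU-style: walk the data one element at a time; each step could be
    # arbitrarily complex, but only one element is in flight.
    out = []
    for x in data:
        out.append(x * factor)
    return out

def scale_parallel(data, factor):
    # SIMD/GPU-style: the same simple operation issued across every
    # element at once.
    return data * factor

result = scale_parallel(values, 0.5)

F@H's CPU cores lean on vector instruction sets (SSE/AVX) for exactly this reason, while the GPU cores push the same idea much wider.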
ETA:
Now ↞ Very Soon ↔ Soon ↔ Soon-ish ↔ Not Soon ↠ End Of Time

Welcome To The F@H Support Forum Ӂ Troubleshooting Bad WUs Ӂ Troubleshooting Server Connectivity Issues
HaloJones
Posts: 906
Joined: Thu Jul 24, 2008 10:16 am

Re: Points CPU vs GPU

Post by HaloJones »

For what it's worth, I have a P13872 running on a Ryzen 3600 with 11 threads for an estimated 367K PPD. That's pretty serious points for a 90W part.
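A quick points-per-watt figure from those numbers (treating the quoted 90 W as the CPU's whole folding power draw, which is an assumption):

# Back-of-the-envelope PPD per watt from the figures quoted above.
ppd = 367_000    # estimated points per day on the Ryzen 3600 (11 threads)
watts = 90       # quoted power for the part (assumed to cover the full draw)

print(f"{ppd / watts:,.0f} PPD per watt")   # roughly 4,100 PPD/W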
single 1070

Neil-B
Posts: 1996
Joined: Sun Mar 22, 2020 5:52 pm
Hardware configuration: 1: 2x Xeon E5-2697v3, 512GB DDR4 LRDIMM, SSD Raid, Win10 Ent 20H2, Quadro K420 1GB, FAH 7.6.21
2: Xeon E3-1505Mv5, 32GB DDR4, NVME, Win10 Pro 20H2, Quadro M1000M 2GB, FAH 7.6.21 (actually have two of these)
3: i7-960, 12GB DDR3, SSD, Win10 Pro 20H2, GTX 750Ti 2GB, GTX 1080Ti 11GB, FAH 7.6.21
Location: UK

Re: Points CPU vs GPU

Post by Neil-B »

... I like that particular project, not because of the points but because it is so very quick to process (TPF 17s) - for some reason that makes me feel good. The points are definitely anomalous (there is no way my CPU slot is doing 4x the normal amount of science), but these WUs turn up every now and then - I see 655k PPD for the 28 minutes or so they take to process (32-core slot on a slowish Xeon) … but it is perhaps the exception rather than the rule :)
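A quick sanity check of those figures (assuming the usual 100 frames per work unit, which is the common case but not guaranteed for every project):

# Rough check of the numbers above.
tpf_seconds = 17      # time per frame quoted above
frames = 100          # typical frames per F@H work unit (assumption)
ppd = 655_000         # reported points per day for this project

wu_seconds = tpf_seconds * frames            # 1700 s, i.e. the "28 minutes or so"
points_per_wu = ppd * wu_seconds / 86_400    # the WU's share of a day's PPD

print(f"{wu_seconds / 60:.1f} min per WU, about {points_per_wu:,.0f} points per WU")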
2x Xeon E5-2697v3, 512GB DDR4 LRDIMM, SSD Raid, W10-Ent, Quadro K420
Xeon E3-1505Mv5, 32GB DDR4, NVME, W10-Pro, Quadro M1000M
i7-960, 12GB DDR3, SSD, W10-Pro, GTX1080Ti
i9-10850K, 64GB DDR4, NVME, W11-Pro, RTX3070

(Green/Bold = Active)
MeeLee
Posts: 1339
Joined: Tue Feb 19, 2019 10:16 pm

Re: Points CPU vs GPU

Post by MeeLee »

It may seem unfair, but I just figured out that a Ryzen 3900X gets more points than the work it actually does would warrant, compared to an RTX 2080 Ti.

A Ryzen 3900X is rated at 820 GFLOPS, which puts it right in between a GT 730 and a GT 1030; more often than not it will run at well below that, since it never reaches the optimal 4.4 GHz on all cores.
An RTX 2080 Ti, meanwhile, is rated at ~14-15 TFLOPS (aftermarket models have a higher boost frequency, but the reference cards are rated at 14.2 TFLOPS).

That makes an RTX 2080 Ti roughly 17.5x faster in GFLOPS terms. So for an RTX 2080 Ti doing 3.2-4M PPD, a fair equivalent would be a 3900X getting between roughly 180k and 228k PPD.
And that's about where it's reported to land: one user mentioned getting 170-200k PPD on his Ryzen 3900X.
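Worked through with the figures quoted above (using the reference 14.2 TFLOPS number for the 2080 Ti; the ratio rounds to about 17.5x in the post):

# The proportionality argument above, as a small calculation.
cpu_gflops = 820                  # Ryzen 3900X rated throughput, as quoted
gpu_gflops = 14_200               # reference RTX 2080 Ti, as quoted
gpu_ppd = (3_200_000, 4_000_000)  # quoted PPD range for the 2080 Ti

ratio = gpu_gflops / cpu_gflops   # ~17.3x raw FP throughput
fair_cpu_ppd = tuple(p / ratio for p in gpu_ppd)

print(f"GPU/CPU throughput ratio: {ratio:.1f}x")
print(f"Proportional CPU PPD range: {fair_cpu_ppd[0]:,.0f} - {fair_cpu_ppd[1]:,.0f}")
# ~185k - 231k PPD, in the same ballpark as the reported 170-200k PPD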

So while GPUs fold far more efficiently, and the PPD difference may seem unfair, it appears I was wrong about the scoring.
The scoring IS fair between the two!
You get about the same score per unit of performance.
Although I think a CPU should get a slightly higher bonus by default, for being more flexible, for the engineers still writing code for that old, inefficient architecture, and for the users who actually crunch on it.
A lot of users probably crunch on CPUs out of pity; if people knew about the efficiency difference, everyone would fold on GPUs, and CPU folding would come nearly to a halt.