OK. Before moving on, if you are paying attention you will notice two oddities in the "Modern CPU" data, namely:
Q1: How do you get 32 logical processors out of 24 physical cores with the i9-13900s? A: Because only the 8 performance cores are hyperthreaded; the 16 efficiency cores run a single thread each.
Q2: Why are the PPDs so low for the i9-13900KF versus the Ryzen 9 7950X when their GB6 performance is virtually identical? Hint: look at the numbers of data points. This is a useful warning against taking summary data at face value, including what is presented here.
OK. Now let me introduce my own elderly devices, which are at about the same age and performance level as the Macs in the previous post, and rated between 93,000 and 133,000 PPD when using all logical processors.
HP Z600, Z800 (2010); Z440 (2014). GB6 single-core (1C) and multi-core (MC) scores, then PPD per logical processor and total PPD:
Intel Xeon X5690 3.5 GHz (6 cores) x2: 1C=532, MC=3739; 3,885 PPD/LP, 93,240 PPD total (24 LPs)
Intel Xeon X5660 2.8 GHz (6 cores) x2: 1C=460, MC=3175; 5,299 PPD/LP, 127,176 PPD total (24 LPs)
Intel Xeon E5-1650 v3 3.5 GHz (6 cores): 1C=1063, MC=5204; 11,083 PPD/LP, 132,996 PPD total (12 LPs)
I have used data from my own devices because the GB6 Processor benchmarks are for a single Xeon. By way of getting some current data I set Z601 folding yesterday, for the first time since 2020. It uses two X5660s, but I limited the number of LPs to 12 with the HP Performance Advisor, by denying the folding app access to the second thread on each core. I do this to keep temperatures down (77-80 deg C) at a relatively small performance penalty: the second thread adds only about 20% when the first is fully utilised, as is easily demonstrated by switching hyperthreading off.

As luck would have it I got two WUs with almost identical run-times. Both had 4-day deadlines, and wanted 15.71 versus 15.57 hours of Z601's time. Of course Z601 is a big old box with 48GB of RAM and a Quadro 5000 to keep warm, so it drew about 250W with folding paused, and 450W when folding in the way described. My current electricity usage rates are (AUD) 64.625 cents/kWh at peak (6 hrs, 2pm to 8pm), 26.367 cents/kWh at off-peak (9 hrs, 10pm to 7am) and 34.386 cents/kWh at other times (9 hrs of "shoulder").
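Since these tariff bands drive all the sums that follow, here is a minimal sketch of the schedule and the cost of one continuous WU. The rates, the 450W draw and the 15.71-hour run-time are from above; the 10pm start time and the helper names (band, run_cost) are my own assumptions:

```python
# Minimal sketch of my tariff schedule; rates are AUD per kWh from the text,
# and the band boundaries follow the hours quoted above.
RATES = {"peak": 0.64625, "shoulder": 0.34386, "offpeak": 0.26367}

def band(hour):
    """Tariff band for a clock hour 0-23: peak 2pm-8pm, off-peak 10pm-7am."""
    if 14 <= hour < 20:
        return "peak"
    if hour >= 22 or hour < 7:
        return "offpeak"
    return "shoulder"

def run_cost(start_hour, hours, watts):
    """AUD cost of drawing `watts` continuously for `hours` from `start_hour`."""
    cost, t, remaining = 0.0, float(start_hour), hours
    while remaining > 0:
        step = min(1.0 - (t % 1.0), remaining)   # advance to the next whole hour
        cost += watts / 1000.0 * step * RATES[band(int(t) % 24)]
        t, remaining = t + step, remaining - step
    return cost

# A 15.71 h WU started at 10 pm: 9 h off-peak, then 6.71 h of shoulder.
print(f"${run_cost(22, 15.71, 450):.2f}")        # ≈ $2.11
```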
If we now suppose that the base points for a WU reflect its scientific value, and the Quick Return Bonus reflects the added value of finishing sooner rather than later, then I should try to find a continuous block of 15.7 hours to run in, even though this means that 6.7 of those hours will be at the more expensive "shoulder" rate. So I do this: 450W for 15.71 hours requires 7.07 kWh, which costs me $2.11. In theory FaH rewards me with 48,575 points for this, if you apply the Quick Return Bonus algorithm from
https://foldingathome.org/support/faq/points/ with k=0.75, though in fact I only got 40,505, and the Excel Solver says you'd get that with k=0.52149609. But then my actual time from firing up Z601 to the return date/time seems to have been only 13.84 hours, so who knows? The main point is that it would cost me around $52.00 per million points this way. Alternatively, if I stop Z601 after the 9 hours of off-peak and start it up again when the next off-peak period begins 15 hours later, my elapsed time becomes 30.71 hours and I only get 28,970 points; the cost drops to $1.86, but that works out at $64.34 per million points.
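For anyone who wants to check those sums, here is a sketch of the two scenarios using the bonus formula from that FAQ page, final = base x max(1, sqrt(k x deadline / elapsed)). The fitted k and the electricity costs are from above; the per-WU base points are not published here, so I back them out of the actual 40,505-point award:

```python
import math

def qrb_points(base_points, k, deadline_h, elapsed_h):
    """Quick Return Bonus per the FaH points FAQ:
    final = base * max(1, sqrt(k * deadline / elapsed))."""
    return base_points * max(1.0, math.sqrt(k * deadline_h / elapsed_h))

def kwh(watts, hours):
    return watts / 1000.0 * hours

DEADLINE_H = 4 * 24                  # 4-day deadline, in hours
K = 0.52149609                       # Solver-fitted to the actual award
# Base points backed out of the observed 40,505 points at 15.71 h elapsed:
base = 40_505 / math.sqrt(K * DEADLINE_H / 15.71)

scenarios = [
    # (label, elapsed hours, electricity cost in AUD at my rates)
    ("continuous", 15.71, kwh(450, 9) * 0.26367 + kwh(450, 6.71) * 0.34386),
    ("off-peak split", 30.71, kwh(450, 15.71) * 0.26367),
]
for label, elapsed, cost in scenarios:
    pts = qrb_points(base, K, DEADLINE_H, elapsed)
    print(f"{label}: {pts:,.0f} pts, ${cost:.2f}, ${cost / pts * 1e6:.2f}/Mpts")
# → continuous:     ≈40,505 pts, $2.11, ≈$52.00/Mpts
# → off-peak split: ≈28,970 pts, $1.86, ≈$64.34/Mpts
```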
I have included the full 450W cost of running Z601 because that is in fact what it would cost if I added it to my current folding collection.
By contrast, LAR Systems bases its GPU cost estimates on the GPU's maximum power consumption alone, ignoring the supporting box, and applies a rate of (presumably) US$0.10 per kWh. That is near enough to AU$0.15 per kWh, about half my cheapest rate, so their costs per Mpoint are not directly comparable with mine; but the figures are so extremely different that it hardly matters, as below (the arithmetic is sketched after the table):
GPUs: https://folding.lar.systems/gpu_ppd/overall_ranks
#001 RTX 4090 (AD102): Linux 26.7 MPPD, Win 19.8 MPPD (n = 54,637/292,429); $/MP @ 450W and 10c/kWh = $0.041
#080 RTX 2060 (TU106): Linux 1.8 MPPD, Win 1.6 MPPD (n = 10,703/85,761); $/MP @ 175W and 10c/kWh = $0.221
#114 GTX 1080 (GP104): Linux 1.8 MPPD, Win 1.6 MPPD (n = 6,686/112,647); $/MP @ 180W and 10c/kWh = $0.365
#112 GTX 1070 (GP104): Linux 1.2 MPPD, Win 1.6 MPPD (n = 87,761/121,786); $/MP @ 150W and 10c/kWh = $0.298
#146 GTX 1060 6GB (GP104): Linux 0.80 MPPD, Win 0.45 MPPD (n = 638/133); $/MP @ 125W and 10c/kWh = $0.367
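The $/MP column appears to be simple arithmetic: board power for 24 hours at the quoted rate, divided by daily output. The sketch below is my reconstruction, not LAR Systems' actual method, and since they don't state exactly which MPPD figure they divide by, it only roughly reproduces some rows:

```python
def dollars_per_mpoint(watts, mppd, rate_per_kwh=0.10):
    """Daily electricity cost of the GPU alone, divided by millions of PPD."""
    kwh_per_day = watts / 1000.0 * 24
    return kwh_per_day * rate_per_kwh / mppd

print(f"RTX 4090: ${dollars_per_mpoint(450, 26.7):.3f}/MP")  # ≈ $0.040 (table: $0.041)
print(f"GTX 1070: ${dollars_per_mpoint(150, 1.2):.3f}/MP")   # ≈ $0.300 (table: $0.298)
```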
Allowing an additional 250W or so for the box supporting the GPU, with its CPU doing very little, it was pretty obvious that there was no reason to waste electricity on CPU folding on any of my devices, so I invested in some elderly GPUs when prices became reasonable. Relative to my pre-folding careless consumption of electricity, the three RTX 2060s in three Z440s routinely grind out 2.5 MPPD during off-peak hours, with any necessary extension into the adjacent "shoulder" period, at an incremental cost of around $0.32 per Mpoint.
In summary, IMHO old CPUs can provide useful support to old GPUs, and the old GPUs work just fine.