Grandpa_01 wrote: kiore, I believe the answer is that PG requires the 2.25 core for 762x WUs; for any other WU they do not, and the core will not be changed by the servers on any WU other than the 762x WUs. If you never get a 762x WU, the core will not get changed. That says to me that the 2.22 core produces the same quality science as 2.25, only faster, since it is an optimised core for cards other than Kepler. The 2.25 core and the 762x WUs were both designed for Kepler so those cards could be utilised. PG could easily set the servers up to require the 2.25 core, and would if there were a problem with the science being done with the 2.22 core; they have had no problem doing so in the past. There is really no reason to penalise folders folding on cards other than Kepler, IMHO; that would just create more discontent.
:wink:
If that were the case, then why have the 2.25 core and 762x WUs run on anything other than Kepler cards? I know it's possible to filter out non-Kepler cards, the same way Stanford was able to filter the QMD WUs to run only on Intel CPUs years ago. All Kepler cards have a unique identifier, just as all Fermi cards have a unique identifier.
What, he's got a GF104 (GTX 460, aka Fermi)? Well, no 762x for him... a GF108 (GT 430, aka Fermi)? Nope, no 762x for that machine either. It's already being done: you can't run these work units on anything older than a Fermi, so don't even try to say they can't filter them out for anything older than Kepler. It can be done, even if it's not being done; see the sketch below. Sadly, this actually seems to reinforce your opinion, even if it might be an incorrect opinion...
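Just to illustrate the idea (a minimal sketch, not PG's actual assignment-server logic, which isn't public): if the server already knows the client's GPU generation, for example via the CUDA compute capability the client reports (Fermi parts are 2.x, Kepler parts are 3.x), gating the 762x projects on that one number is trivial. The project numbers and function name here are placeholders, not the real project list.

```python
# Hypothetical sketch of generation-based WU filtering -- NOT PG's real server code.
# Assumes the client reports its CUDA compute capability (Fermi = 2.x, Kepler = 3.x).

KEPLER_ONLY_PROJECTS = {7620, 7621, 7622, 7623}  # stand-ins for the "762x" series

def eligible_projects(compute_capability, available_projects):
    """Return the subset of projects this GPU may be assigned."""
    major, _minor = compute_capability
    if major >= 3:                                    # Kepler or newer: no restriction
        return set(available_projects)
    if major == 2:                                    # Fermi: everything except 762x
        return set(available_projects) - KEPLER_ONLY_PROJECTS
    return set()                                      # pre-Fermi: nothing from this server

# A GTX 460 / GT 430 (Fermi, compute capability 2.1) never sees a 762x project:
print(eligible_projects((2, 1), {7620, 7622, 8018}))  # {8018}
# A Kepler card (compute capability 3.0/3.5) gets them all:
print(eligible_projects((3, 5), {7620, 7622, 8018}))  # {7620, 7622, 8018}
```

If they can already cut off everything older than Fermi with a check like that, they could just as easily cut the 762x line at Kepler.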
Without knowing the programming specifics, there's no way to know what you are or aren't doing to a work unit by changing the core version. Sure, you may get more points and quicker turn-around, but if it's corrupting the data in any way, it's not worth it.
There's only one reason that people should be folding, and that's for science. Science that might one day save your great-grandchild's life, or their great-grandchild's life, through better understanding and thus potential cures for cancers and other deadly or debilitating diseases. Corrupted data means bad science and further delays.
I know for a fact that this swapping of cores is happening quite frequently and on a large number of teams. But unless Stanford makes a statement about it one way or the other, I will never support it & will continue to run what Stanford gives me with the software they give me, even if that means my hardware isn't turning in as much work as it once was on the exact same work units.
As I said in my team forum, Stanford is the big loser with core 2.25. A 30%-40% reduction in performance on Fermi cards means an overall reduction in completed work assignments turned in every day. It would benefit Stanford to release an official statement on this matter, especially if any kind of data corruption may happen. The only thing worse than a reduction in completed work is completed work that is useless. Unfortunately, it could take weeks or months to determine if any harm is being done.
:(
Folding rig: EVGA Z370 Classified K w/i7-8700 & Hyper 212 EVO - WIN7 PRO 64bit - EVGA 1660 Ti XC Gaming (soon to be water cooled) - Corsair Vengeance 16GB DDR4-2666 dual channel memory - Samsung 970 Pro 512GB M.2 SSD - EVGA SuperNova 850 Platinum PSU