7im wrote: No, speed is not everything. Science is everything. Speed is only one component.
As you noted, the addition of the QRB was a supplement to the existing points system at the time. In the past, science = points, and it was linear. But the quickness of the return also increased the scientific value, and that added value went mostly uncredited. Before the QRB, that time factor was not accounted for by the points system.
The addition of the QRB was an attempt to assign points to represent the added scientific value of faster returns of work units. And the QRB, overall, has been helpful to the project. But in hindsight, some sort of loyalty bonus should have been given to existing folders before the QRB was started to normalize the previous contributions with the exponential nature of future contributions. A sort of previous points "conversion" to the new points values. That way 10 years of loyal folding couldn't be outstripped in only 10 days on the new points system. But like they say, hindsight is 20/20, foresight is not.
In my opinion, all existing GPU points should be "QRB" adjusted (a one-time bonus) so that past contributions more closely match future contributions, going back to where the SMP QRB started. That's only fair, right?! Unfortunately, Pande Group doesn't keep enough WU data to make that happen any more. Oh well. Fold on.
Thanks 7im, for the support, insight, and clarity.
I hope I have not created a fire-storm around this issue, and I do understand both sides of the argument.
But here's the simple math (and why I think GPU QRB is not such a good idea):
Given the same WU...
Let's say that we have two GPUs: one that is older and capable of a 4:30 TPF (time per frame), and a newer model that is capable of a 3:05 TPF.
Assuming a standard 100-frame WU, the older GPU will complete it in about 7:30:00, but the newer GPU will complete that same WU in about 5:08:20.
Therein lies the inherent QRB - the newer GPU will be able to complete more WUs over the same [longer] period of time than the older one.
More WUs = more points. Is my 'math' wrong???
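To make that concrete, here is a rough back-of-the-envelope sketch in Python. The base credit, deadline, and k constant are made-up placeholder values (not real project numbers), and the bonus multiplier uses the commonly cited sqrt(k * deadline / elapsed) form of the QRB as I understand it. The takeaway: the faster card already out-earns the slower one on raw throughput alone, and the QRB multiplier then widens that gap even further.

[code]
# Back-of-the-envelope comparison of the two GPUs above (100-frame WU).
# Base credit, deadline, and k are placeholder numbers, not real
# project values; the QRB multiplier shown is the commonly cited
# sqrt(k * deadline / elapsed) form.
import math

FRAMES = 100
tpf_old = 4 * 60 + 30        # 4:30 TPF, in seconds
tpf_new = 3 * 60 + 5         # 3:05 TPF, in seconds

base_credit = 1000           # hypothetical base points per WU
deadline = 24 * 3600         # hypothetical preferred deadline, in seconds
k = 0.75                     # hypothetical project bonus constant

def wu_time(tpf):
    """Seconds to finish one WU at the given TPF."""
    return FRAMES * tpf

def qrb_credit(tpf):
    """Per-WU credit with the bonus, never less than the base credit."""
    elapsed = wu_time(tpf)
    return base_credit * max(1.0, math.sqrt(k * deadline / elapsed))

for name, tpf in [("older GPU (4:30 TPF)", tpf_old), ("newer GPU (3:05 TPF)", tpf_new)]:
    per_day = 86400 / wu_time(tpf)        # WUs completed per day
    flat_ppd = per_day * base_credit      # points per day without QRB
    qrb_ppd = per_day * qrb_credit(tpf)   # points per day with QRB
    print(f"{name}: {wu_time(tpf) / 3600:.2f} h/WU, {per_day:.2f} WU/day, "
          f"flat PPD {flat_ppd:.0f}, QRB PPD {qrb_ppd:.0f}")
[/code]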
And to 7im's observation that PG doesn't keep enough stats to implement a 'consistency' bonus (which is reasonable) - well, there are the EOC stats (and others).
EDIT: I realize that the external sites do not have access to PRCG or stats related to specific hardware - F@H does not publish that info for download.
I am constantly on EOC stats, and Jason Rabel does a remarkable job of keeping the data "neat & tidy". I'm fairly certain he has detailed stats going back to 2004.
Maybe EOC can be approached to provide some statistical/advisory help in this area (if PG would agree to consider 'consistency-of-contribution' as a factor in awarding points).
To the point of "speed is everything", I have two remarks on that...
ONE:
I understand that some projects are seemingly more urgent than others.
PG - with a little extra work - could do a better job with their assignment servers.
It has been my observation that when the client-app is attempting to get a new WU, it can (and does) transmit its "hardware info".
If some project is deemed to be a high-priority project requiring a minimum hardware level, then the assignment server(s) should not give one of those WUs to significantly slower hardware.
Additionally, just because BIGADV, etc. are set on the client side shouldn't automatically mean that the assignment server MUST honor that switch.
The assignment server should consider it a request that can be declined when necessary, roughly as sketched below.
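For illustration, here is roughly what I mean, as a hypothetical sketch in Python. This is not PG's actual assignment-server code, and the project attributes, speed ratings, and thresholds are made-up placeholders; the point is only that the server can treat client-side switches as requests and gate high-priority work on a minimum hardware level.

[code]
# Hypothetical sketch of the assignment-server check described above.
# None of this is PG's actual code; the field names, thresholds, and
# project attributes are made-up placeholders.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Project:
    name: str
    high_priority: bool = False
    bigadv: bool = False
    min_gpu_speed: float = 0.0   # placeholder "hardware level" rating

@dataclass
class ClientInfo:
    gpu_speed: float             # from the "hardware info" the client sends
    requested_bigadv: bool = False

def eligible(project: Project, client: ClientInfo) -> bool:
    # High-priority (and bigadv) work only goes to hardware that meets
    # the project's minimum level, even if the client asked for it.
    if (project.high_priority or project.bigadv) and client.gpu_speed < project.min_gpu_speed:
        return False
    # The client-side bigadv switch is a request, not a guarantee; the
    # server only hands out bigadv work to clients that asked for it
    # and whose hardware qualifies.
    if project.bigadv and not client.requested_bigadv:
        return False
    return True

def assign(projects: List[Project], client: ClientInfo) -> Optional[Project]:
    # Prefer high-priority projects the client qualifies for, otherwise
    # fall back to any eligible project; decline when nothing fits.
    candidates = [p for p in projects if eligible(p, client)]
    candidates.sort(key=lambda p: p.high_priority, reverse=True)
    return candidates[0] if candidates else None
[/code]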
TWO:
The assertion that some professors or grad students need results/data quickly in order to complete certain papers/theses - and are somehow entitled to preferential treatment - is a little hard to swallow.
PG and its project designers need to remember the scope and magnitude of the collective computing power that is being donated by hundreds of teams and many thousands of people - for free.
In rigging the system to effectively penalize contributors with older hardware, PG must realize that many dedicated folders are already frustrated and feeling that F@H is becoming an exclusive club for the techno-elite.
The long-term effect can be chilling. You risk losing a significant portion of long-term contributors if you are banking on bolstering a smaller-but-better class of contributors.
In a not-too-indirect way, we contributors are subsidizing Stanford & PG.
Many of us have invested thousands of dollars to build DC/HPC systems.
On top of that, there is the recurring expense of electricity (etc) to keep them running.
A little patience & gratitude from those whose careers are significantly dependent on our contributions is warranted.
If the majority of contributors get "fed up" and stop contributing, how would Stanford/PG make up that computing-power shortfall?
Would corporate entities step in to help? Would the Stanford Board of Trustees and/or Alumni Associations pony up to buy equivalent computing resources?
The vast majority of F@H contributors have an abiding belief in the promises & potential cures/remedies that can be derived from the research.
Far too many of us have lost loved ones to cancer, Alzheimer's, Parkinson's, etc.
We all want to help put an end to these horrid diseases and alleviate human suffering.
Stanford & PG contribute the brains, and we [collectively] contribute the brawn.
But when some contributors are being told (in a not-so-thinly-veiled way) that "your contributions are insufficient", it sets a bad tone.
That bad tone can find resonance with people like me - who do have the technical sufficiency, but appreciate everyone's contribution.
Please note that these are my own opinions, and do not necessarily reflect the prevailing attitude of my team.
I am not - in any way - acting in a spokesperson capacity for the KWSN.