Suggested Change to the PPD System

Moderators: Site Moderators, FAHC Science Team

k1wi
Posts: 909
Joined: Tue Sep 22, 2009 10:48 pm

Re: Suggested Change to the PPD System

Post by k1wi »

Thank you for your analysis and input, Grandpa.

You raise an interesting point, and I think the key phrase is "right now there is a rather unique situation". My proposal addresses the long-term trend, but it does not deal with short-term fluctuations and short-term market trends. You are proposing that there needs to be a method by which we can account for these short-term fluctuations - that is, that we alter graph 2 in a given time period to reflect short-term changes.

In other words, if it becomes more difficult to provide 4x the 'average' computer power, then we should manipulate the curve from graph 2 (the first graph on page 1) in the short term to compensate for this. That raises some interesting questions about how we measure the relative difficulty - do we take the average, or do we keep it so that the top 10% always earn the same PPD?

How the distribution of computers changes also has an effect, depending on how we measure performance improvement. If we were to measure it based on the median or mean computer, then there would be an impact if the 'mean' computer becomes a tablet. In my mind, that is why I'd want PG to look into how we measure performance improvement - after all, they have the client statistics at their disposal.

There is an argument against altering the point-in-time graph, and that is that in the short term there will always be relative fluctuations. I don't want to land on either side of that argument, but I would like to recognise it. That is, while the top end is currently a lot more expensive, there might be a situation where those chips actually drop rapidly in price. Would it not be better for people to hold off buying MP/4P computers while prices are relatively high, and defer those purchases until they are relatively good value for money? In other words, in the short term people will take the price signals of the market and PPD will fluctuate, but over the long term, by following those price signals, completed science (having accounted for increasing computational power) will be higher.

If you were to successfully argue that yes we should manipulate the points curve at a given time, then the question then becomes: "is the short term return worth the level of complexity required to undertake such fine tuning of the points system?"

Edit: fixed typo deferce to defer!
bruce
Posts: 20824
Joined: Thu Nov 29, 2007 10:13 pm
Location: So. Cal.

Re: Suggested Change to the PPD System

Post by bruce »

k1wi wrote:In other words, if it becomes more difficult to provide 4x the 'average' computer power, then we should manipulate the curve from graph 2 (the first graph on page 1) in the short term to compensate for this. That raises some interesting questions about how we measure the relative difficulty - do we take the average, or do we keep it so that the top 10% always earn the same PPD?
The current benchmarking policy is aimed at measuring the difficulty of a project compared to other projects for the same platform, and at compensating for that variation by adjusting the baseline points.

You're suggesting that somebody measure the 'average' computer and compensate for the trend in how that average changes. How do you plan to measure changes to the 'average' computer within each platform?
k1wi
Posts: 909
Joined: Tue Sep 22, 2009 10:48 pm

Re: Suggested Change to the PPD System

Post by k1wi »

I'm suggesting it *might* be a way of quantifying the technological improvement - perhaps some sort of 'average' measure becomes one of the components used to measure technological improvement. The easiest/simplest way would be to measure how the PPD of the median client changes (I am unclear whether you mean tablets vs. laptops vs. desktops, or uniprocessor vs. SMP vs. GPU) - but would this be the most accurate or appropriate measure? I don't think so. There are many ways of defining the 'average computer', after all; some might be applicable, others might not. As a member of the community I'm not in the best place to determine what tools PG should use - PG is in the best position to determine how to measure technological improvement. Given that they are already "measuring the difficulty of a project, compared to other projects for the same platform and compensating for that variation by adjusting the baseline points", they are probably quite well placed to devise an appropriate methodology.

I also suggested some sort of use of quantiles. I don't think that PPD needs to be platform dependent (again, I'm not sure of the context in which you use the term platform), but perhaps it needs to be aware of the different classes of hardware. In any event, because we are applying the normalisation uniformly, unless there is manipulation of the sort Grandpa is suggesting, all clients will be affected equally. There is always going to be a trade-off between accuracy and simplicity, and so long as the normalisation/adjustment rate is roughly on the money it will still be an improvement over the current system.
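
To make the mechanics concrete, here is a rough sketch - the choice of the median, the optional quantile, and every number below are purely illustrative assumptions on my part, not a worked-out methodology:

Code:
# Illustrative only: estimate a technology factor y from per-client PPD
# snapshots at two points in time, then normalise everyone by the same y.
from statistics import median

def tech_factor(ppd_then, ppd_now, quantile=None):
    # Ratio of the 'typical' client's PPD now vs. then. Defaults to the
    # median client; a quantile (e.g. 0.9) could track the top end instead.
    if quantile is None:
        return median(ppd_now) / median(ppd_then)
    pick = lambda xs: sorted(xs)[int(quantile * (len(xs) - 1))]
    return pick(ppd_now) / pick(ppd_then)

def normalise(ppd, y):
    # Applied uniformly, so relative standings between clients are unchanged.
    return ppd / y

# Made-up snapshots of per-client PPD:
last_year = [300, 800, 2000, 6000, 40000]
this_year = [450, 1200, 3000, 9000, 80000]

y = tech_factor(last_year, this_year)                # 1.5 with these numbers
print([round(normalise(p, y)) for p in this_year])   # [300, 800, 2000, 6000, 53333]

The only point of the sketch is that whatever definition of 'average' PG settled on, the same y would be applied to every client.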
bruce
Posts: 20824
Joined: Thu Nov 29, 2007 10:13 pm
Location: So. Cal.

Re: Suggested Change to the PPD System

Post by bruce »

In this context, I used "platform" to distinguish between Uniprocessor/SMP/PS3/ATI-GPU/NV-GPU/Bigadv. I can't think of a rational way to benchmark all of them with the same project.

What PPD should an i5 running SMP get compared to four uniprocessor clients on the same machine? What PPD should a 16-core bigadv machine get compared to running four WUs set for -smp 4? ... and those are the easy ones. It's more difficult to compare CPU-based folding with PS3 or GPU-based folding because you can't run N copies of an identical WU.

We pretty much have to rely on the Pande Group's idea of scientific value to answer that, since there's no explicit test that can be related to hardware speed (and even if there were, I'd vote for PG's assessment over the ;) Tom's Hardware ;) approach).
k1wi
Posts: 909
Joined: Tue Sep 22, 2009 10:48 pm

Re: Suggested Change to the PPD System

Post by k1wi »

Bruce - I do agree with you (hence why, in the post of mine you quoted, I stated "There is an argument against altering the point-in-time graph, and that is that in the short term there will always be relative fluctuations. I don't want to land on either side of that argument, but I would like to recognise it."). I don't see how my proposal would remove PG's ability to continue to set relative scientific value across different platforms as they already do. My two previous posts were an attempt to raise some of the issues involved in solving short-term fluctuations, not yet to provide a revised formula!

Furthermore, I've never proposed running one project across all clients! Under the QRB system, once it rolls out to all clients, the shape of the QRB curve will be the same across all clients, because they will all share the same underlying formula. Each platform may be shifted up or down relative to the others depending on PG's determination of its scientific value and how its 'relative performance' is measured, but that is the case today and would continue to be the case under my proposal to account for technological improvement. One project across all clients has been suggested by others, but it is not something that I have suggested or supported. As to the PPD of an i5 running a single SMP client vs. four uniprocessor clients - I think PG has made it clear that a single client which returns individual WUs faster should earn the higher PPD. Nothing in any of my proposals changes that, or PG's ability to adjust scientific value.

Therefore, I'm not for a second saying that we shouldn't rely on PG's idea of scientific value... In fact, I am probably advocating an expansion of it - that is, under the proposal they can determine and set the rate of scientific advancement after all. (This point is why I have only theorised how PG could potentially measure technological improvement).

Grandpa has made quite a valid observation that there are short-term fluctuations. I attempted to raise some of the issues involved - I would love people's input on how short-term fluctuations could be accounted for in a way that is worth the added complexity. If someone can do so, it would then be a matter of whether it is possible to include that in my revised PPD formula.

P.S. Has anyone been able to advance my theories around accounting for technological improvement in the speed ratio?
MtM
Posts: 1579
Joined: Fri Jun 27, 2008 2:20 pm
Hardware configuration: Q6600 - 8gb - p5q deluxe - gtx275 - hd4350 ( not folding ) win7 x64 - smp:4 - gpu slot
E6600 - 4gb - p5wdh deluxe - 9600gt - 9600gso - win7 x64 - smp:2 - 2 gpu slots
E2160 - 2gb - ?? - onboard gpu - win7 x32 - 2 uniprocessor slots
T5450 - 4gb - ?? - 8600M GT 512 ( DDR2 ) - win7 x64 - smp:2 - gpu slot
Location: The Netherlands

Re: Suggested Change to the PPD System

Post by MtM »

k1wi wrote:MtM suggested his proposal:
We make BA earn, instead of 1000 points, 100 points (10%), and SMP, instead of 100 points, 10 (10%).
We use 10% as a start as it's easy to use.

The next round, we use Y, which I described should be based on the computational speed increase. This speed increase should be applied to both trajectories again, so let's assume we have a 10% speed-up which has caused BA to be making 1000 points again, and SMP 100 points = we normalise down by 10% for both and publish the 10% number, so people can still see how much computational/scientific effort was needed to earn x credit. How does this influence the relative progression of a BA versus a regular SMP instance? The BA instance will still earn the same amount of points more than the SMP instance relative to the total points obtainable.
First of all, I disagree with this proposal for a number of reasons, the first being that it is ambiguous:

1. Why are we differentiating between BA and SMP? It would be simpler just to write PPD1 = PPD2/y1, which is exactly the concept I proposed in the original post and exactly the formula I proposed in the post prior to your suggestion.

2. By talking about BA and SMP separately, you are increasing the complexity of the adjustment - it is quite easy for people to read that proposal and conclude that perhaps you are normalising BA to 1000 points and SMP to 100.

3. The 1/10th figure is completely arbitrary and appears to have been simply thrown out there. Amongst other things, it is massive - far too large for a single adjustment. Why choose 10%? Why not make the value of new points half the value of the original points? I have had to justify every single element of my posts, including exactly how to calculate the value of technological improvement (and have had the examples of how it can be calculated used as a vector to shoot down my entire proposal), and I think you should do the same.

4. If what you are attempting to propose has the same effect as my original proposal, why not use the formula that I proposed in the prior post? As per point 1, my formula is much simpler.
Wait, what? I didn't suggest that; I explained MY INITIAL SUGGESTION using that example.

1. We are not - the 10% is the same for SMP and BA, and it is only used to showcase that the relative numbers are not changing (see the sketch below)!

2/3/4. Your idea? You came up with the 'dollar value'... I came up with using Y as the computational relationship. This is not your idea :!:
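
To spell out point 1 with the same made-up numbers from my example (nothing here is a real project value):

Code:
# Illustrative numbers only: a uniform 10x normalisation leaves the
# BA/SMP relationship untouched.
ba_before, smp_before = 1000, 100
factor = 10                      # the "10%" picked purely for easy arithmetic

ba_after, smp_after = ba_before / factor, smp_before / factor

assert ba_before / smp_before == ba_after / smp_after == 10.0
print(ba_after, smp_after)       # 100.0 10.0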
k1wi wrote:P.S. Has anyone been able to advance my theories around accounting for technological improvement in the speed ratio?
Seriously, stop this :!:

Do you really need me to quote you again on your dollar value and link to the post where I said you should use the computational ratio instead? Admit that you had a totally different idea in mind and that you changed it after I pointed out the obvious flaws and offered this alternative, which has a much higher chance of being successful.

Really, it's not done to try and claim something which is so obviously not your own idea.

The ppd / Y form is the only thing which is the same, but the implications of using a dollar value (which has ZERO chance of working, ZERO :!:) as opposed to MY IDEA OF USING THE COMPUTATIONAL SPEED INCREASE to calculate Y make ppd / Y an entirely different formula.

This is where you first corrected YOUR IDEA OF A DOLLAR VALUE -> viewtopic.php?p=210444#p210444
This is where I made my first suggestion of using computational capabilities -> viewtopic.php?p=210435#p210435, which I made while admitting the idea might not be totally original, as I remember there being similar discussions in the past where this idea might have been brought up by someone else.

Don't take credit for something which isn't your idea. Unless you can prove you're the one who made a similar post like mine a long time ago, you're definitely not the one who conceived this idea.

To me it seems like you took my suggestion, complicated it by adding n to both sides of the equation (and yes, that means it's exactly the same) and are doing a whole lot of talking around the issue to hide this fact.
k1wi wrote:Furthermore, I've never proposed running the one project across all clients!
Maybe other people are subconsciously reading some (parts) of my suggestions and attributing them to you as well ;) That wouldn't surprise me, as it sounds like something which could happen if you claim other parts of my suggestions are indeed your own.
ChasR
Posts: 402
Joined: Sun Dec 02, 2007 5:36 am
Location: Atlanta, GA

Re: Suggested Change to the PPD System

Post by ChasR »

I'm fundamentally opposed to routine "normalization" of the ppd. Just as I rail against the QRB for devaluing all past and present work, I'm opposed to devaluing future work. If the scientific value of a WU is 100 points, I should get 100 ppd if I do one of them per day in 2004 on a P4, 10,000 ppd if I do 100 of them per day in 2012 on faster hardware, and 100,000 ppd if I do 1,000 of them per day in 2020 on even faster hardware. At some future date the points system will have to be reset to remove some zeros, but not as a regular occurrence.
MtM
Posts: 1579
Joined: Fri Jun 27, 2008 2:20 pm
Hardware configuration: Q6600 - 8gb - p5q deluxe - gtx275 - hd4350 ( not folding ) win7 x64 - smp:4 - gpu slot
E6600 - 4gb - p5wdh deluxe - 9600gt - 9600gso - win7 x64 - smp:2 - 2 gpu slots
E2160 - 2gb - ?? - onboard gpu - win7 x32 - 2 uniprocessor slots
T5450 - 4gb - ?? - 8600M GT 512 ( DDR2 ) - win7 x64 - smp:2 - gpu slot
Location: The Netherlands

Re: Suggested Change to the PPD System

Post by MtM »

ChasR wrote:I'm fundamentally opposed to routine "normalization" of the ppd. Just as I rail against the QRB for devaluing all past and present work, I'm opposed to devaluing future work. If the scientific value of a WU is 100 points, I should get 100 ppd if I do one of them per day in 2004 on a P4, 10,000 ppd if I do 100 of them per day in 2012 on faster hardware, and 100,000 ppd if I do 1,000 of them per day in 2020 on even faster hardware. At some future date the points system will have to be reset to remove some zeros, but not as a regular occurrence.
What's the difference between 'removing the zeros' once and removing them whenever needed? And what do you gain by doing it once? It wouldn't be a solution, because it's a temporary fix: you remove the zeros, but they will be back again later, so you end up doing it 'once' again... and again. Which is the same as saying you will need periodic normalization.
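
To put some entirely made-up numbers on why 'once' cannot stay once:

Code:
# Hypothetical growth rate, just to show a one-off rescale is temporary:
# at roughly 40% PPD growth per year, dividing every total by 1000 buys
# about 20 years before the same zeros are back.
import math

growth_per_year = 1.4              # assumption, not a measured figure
rescale = 1000
years_until_back = math.log(rescale) / math.log(growth_per_year)
print(round(years_until_back, 1))  # ~20.5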

You will also still get points based on scientific value (that is, following the method I proposed of dealing with ppd = ppd / Y, and without any of the ideas raised in this thread which cut the tie between science and points...).
ChasR
Posts: 402
Joined: Sun Dec 02, 2007 5:36 am
Location: Atlanta, GA

Re: Suggested Change to the PPD System

Post by ChasR »

Except for the obviously unsustainable QRB, there really isn't a need to remove the zeros, at least not for more years than I need to worry about. As I read what is proposed, I see that if I am unable to upgrade my hardware, my ppd will go down on each "normalization". Not many folks will be fond of that. Am I missing something?
MtM
Posts: 1579
Joined: Fri Jun 27, 2008 2:20 pm
Hardware configuration: Q6600 - 8gb - p5q deluxe - gtx275 - hd4350 ( not folding ) win7 x64 - smp:4 - gpu slot
E6600 - 4gb - p5wdh deluxe - 9600gt - 9600gso - win7 x64 - smp:2 - 2 gpu slots
E2160 - 2gb - ?? - onboard gpu - win7 x32 - 2 uniprocessor slots
T5450 - 4gb - ?? - 8600M GT 512 ( DDR2 ) - win7 x64 - smp:2 - gpu slot
Location: The Netherlands

Re: Suggested Change to the PPD System

Post by MtM »

No, that's correct on everything (except the QRB part, but we covered that previously). I agree the problem isn't as pressing now as it will be in the future (I said so earlier), and yes, your ppd will drop if you don't upgrade.

But I cannot conceive of a solution where ppd will not drop and where the totals are still normalized.

Also, I think this is the only defensible proposal if you're going to normalize, because since you would know Y, you can still look up the actual scientific contribution (prior to normalization).

About the QRB: don't you agree that, when you can control where machines would appear on the slope (which region), there is no fundamental flaw with the QRB, apart from the problem pointed out with a bigadv project only just making the deadline?
bruce
Posts: 20824
Joined: Thu Nov 29, 2007 10:13 pm
Location: So. Cal.

Re: Suggested Change to the PPD System

Post by bruce »

Is your proposal a suggested alternative to increasing the number of cores and decreasing the deadlines for bigadv? The change from bigadv-8 to bigadv-16 was an example of maintaining some kind of parity in the light of technological change. That change was made because of technological advances, and a similar change might have to be made again as the 'average' computational power continues to increase, no matter how the points formula might be adjusted.

At one time, the top 10% of FAH clients were running on (more or less) the equivalent of an i7. Technology changed and the top 20% were soon running an i7 or better, so the bar was raised in an attempt to realign bigadv with the top 10% of clients. This change produced a tremendous amount of discussion and quite a number of unhappy donors.

If you had made your proposal at that time, you would have reduced the number of points that everybody was earning by some factor because of technological advances. It would not have reduced the number of BA clients from 20% back to 10%. Reducing everybody's PPD would have produced even more long, heated discussions but would not have solved the fundamental problem of WU shortages (where there were too many clients that considered themselves part of the top 10% compared to the number of WUs available to those clients).

(My estimates of 10% and 20% are just guesses. Substitute your own guesses if you like.)
MtM
Posts: 1579
Joined: Fri Jun 27, 2008 2:20 pm
Hardware configuration: Q6600 - 8gb - p5q deluxe - gtx275 - hd4350 ( not folding ) win7 x64 - smp:4 - gpu slot
E6600 - 4gb - p5wdh deluxe - 9600gt - 9600gso - win7 x64 - smp:2 - 2 gpu slots
E2160 - 2gb - ?? - onboard gpu - win7 x32 - 2 uniprocessor slots
T5450 - 4gb - ?? - 8600M GT 512 ( DDR2 ) - win7 x64 - smp:2 - gpu slot
Location: The Netherlands

Re: Suggested Change to the PPD System

Post by MtM »

Deadlines and minimum core counts are used to define the limits for a certain part of the folding@home framework.

You're describing it as if it's arbitrary, and as if a) bigadv work units could theoretically run on the same hardware as SMP, and b) the deadline is not an integral part of folding@home, defining the actual scientific value.

I don't think you should ever use the above two for anything other than defining BA (and not as a percentage of total folders/top folders). I don't understand how you could describe BA as anything other than a minimum specification, where the specification is a fixed set of hardware requirements and not a fluctuating fraction of the donor base. The last part of that sentence is a result of the first part, and not the other way around.

If you take my proposal, you could change BA parameters such as minimum core counts/deadlines, and then calculate their Y factor by running (a certain benchmark consisting of common fahcore functions / a project which you use as a benchmark :?:) on a benchmark machine meeting the minimum requirements (for instance). Then you could normalize the BA credit using Y.

Since you publish this Y value, it would be possible to calculate the ppd one would have gotten without normalization (to a precision that depends on what method of getting Y is used). This number is the actual scientific worth.
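
Roughly the bookkeeping I have in mind, with a hypothetical benchmark score standing in for the 'common fahcore functions on a minimum-spec machine' part:

Code:
# Sketch only; the benchmark itself is hand-waved as a single score.
def yearly_y(benchmark_then, benchmark_now):
    # Y = measured computational speed-up on the reference machine.
    return benchmark_now / benchmark_then

def normalized_credit(raw_credit, y):
    return raw_credit / y

def scientific_worth(normalized, y):
    # Because Y is published, the pre-normalization value stays recoverable.
    return normalized * y

y = yearly_y(benchmark_then=100.0, benchmark_now=130.0)   # hypothetical 1.3
credit = normalized_credit(50000, y)
assert round(scientific_worth(credit, y)) == 50000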

Edit:

I have to admit I don't like this. I would like to leave out normalization altogether, but in the coming years that would mean we have people earning 100 million ppd and totals in the trillions.

I would only use this method if it were done sparingly, say annually, and not more often. That also means planned changes, like the minimum number of cores for BA, would have to coincide with the annual normalization round.
ChasR
Posts: 402
Joined: Sun Dec 02, 2007 5:36 am
Location: Atlanta, GA

Re: Suggested Change to the PPD System

Post by ChasR »

The problem I see with BA16 values can be attributed directly to the i5 benchmark machine. The ppd of all the 690x BA WUs on MP machines is anomalous because they are benchmarked on a machine that doesn't even qualify for the work. Setting the value of the WU such that, on the benchmark i5, BA produces 20% more than normal SMP results in an MP machine producing 2.5x the ppd. Benchmark all the CPU WUs on a proper MP machine, set the value of BA work to whatever percentage more than regular SMP PG deems proper, and all will be well until the benchmark machine becomes obsolete. The more powerful the new benchmark machine, the longer until its obsolescence.
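
For anyone following the arithmetic, a toy sketch of why the choice of benchmark machine matters so much under a QRB-style curve - the square-root bonus shape and every number here are assumptions for illustration, not PG's actual constants:

Code:
# Toy QRB-style curve: the bonus grows with the square root of how much
# faster than the benchmark machine a WU is returned, and PPD also grows
# with throughput, so PPD scales roughly as speed**1.5 past the benchmark.
def toy_ppd(base_points, speed_vs_benchmark):
    days = 1.0 / speed_vs_benchmark           # benchmark machine takes 1 day
    bonus = max(1.0, speed_vs_benchmark ** 0.5)
    return base_points * bonus / days

benchmark_i5 = toy_ppd(1000, 1.0)   # 1000 PPD by construction
big_mp_box   = toy_ppd(1000, 4.0)   # 8000 PPD: 4x the speed, 8x the PPD
print(big_mp_box / benchmark_i5)

It doesn't reproduce the exact 2.5x figure, but it shows how values set on a machine at the bottom of the curve translate into much larger multiples on machines sitting far past it.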

Any scheme that causes a donor's ppd to go down regularly on the same hardware will be wildly unpopular. The scheme proposed merely reverses the exponential curve, with old hardware earning less and less ppd over time, approaching zero.
Last edited by ChasR on Mon Mar 19, 2012 7:18 pm, edited 1 time in total.
MtM
Posts: 1579
Joined: Fri Jun 27, 2008 2:20 pm
Hardware configuration: Q6600 - 8gb - p5q deluxe - gtx275 - hd4350 ( not folding ) win7 x64 - smp:4 - gpu slot
E6600 - 4gb - p5wdh deluxe - 9600gt - 9600gso - win7 x64 - smp:2 - 2 gpu slots
E2160 - 2gb - ?? - onboard gpu - win7 x32 - 2 uniprocessor slots
T5450 - 4gb - ?? - 8600M GT 512 ( DDR2 ) - win7 x64 - smp:2 - gpu slot
Location: The Netherlands

Re: Suggested Change to the PPD System

Post by MtM »

ChasR wrote:The problem I see with BA16 values can be attributed directly to the i5 benchmark machine. The ppd of all the 690x BA WUs on MP machines is anomalous because they are benchmarked on a machine that doesn't even qualify for the work. Setting the value of the WU such that, on the benchmark i5, BA produces 20% more than normal SMP results in an MP machine producing 2.5x the ppd. Benchmark all the CPU WUs on a proper MP machine, set the value of BA work to whatever percentage more than regular SMP PG deems proper, and all will be well until the benchmark machine becomes obsolete. The more powerful the new benchmark machine, the longer until its obsolescence.
I agree that there is a need for benchmarking to be done on the right kind of hardware - not because the average BA16 ppd is flawed, as that can be controlled by PG, but because it would make it easy to predict where on the slope the machines running the work would fall, preventing machines from getting too far up the hockey-stick side.

I'm not as certain as you are that this means the resulting credit for most BA work would be less than it is now.
ChasR wrote:Any scheme that causes a donors ppd to go down regularly on the same hardware will be wildly unpopular. The scheme proposed will merely reverse the exponential curve, with old hardware getting less and less ppd over time, approaching infinitely less.
Yes, but what would you actually propose? You already said the points should be normalized at some time in the future - what conditions would you use to trigger that? And if you don't normalize ppd as well, normalizing points has a very limited effect.
bruce
Posts: 20824
Joined: Thu Nov 29, 2007 10:13 pm
Location: So. Cal.

Re: Suggested Change to the PPD System

Post by bruce »

Suppose you do change the benchmark machine to be more representative of BA hardware. Stanford is still going to need to use the top 10% of the hardware to study some small fraction of the initial simulation of whatever projects they'll be working on next year - simulations which otherwise could not be completed within the lifetime of the scientist. As hardware improves, science finds more challenging simulations to study.

That's not solved by adjusting the points but by defining the "top 10%" as a class of problems that changes over time.