Re: Unbalanced Scoring
Posted: Wed Aug 06, 2008 9:49 pm
by FordGT90Concept
Xilikon wrote:I don't see how a Q6600 is being outpaced 10x by an 8800GT. I have both of them myself (I own 5 quads and an 8800GT, 8800GTS 640MB, 9800GT, and GTX 260). Each box has a Q6600 at 3.2 GHz and produces about 4200 PPD, while an 8800GT on the same box does about 4500 PPD. PPD means Points Per Day and is an average of the number of points generated per 24-hour period. While the GPU can do about 10 WUs per day and a CPU will do about 2-2.5 WUs per day, a GPU WU is worth 480 points while an SMP WU is worth 1760 points. I can pull that off since each Q6600 runs 2 Linux clients under VMware to optimize production.
You're using SMP instead of many instances of the old 5.04 console. SMP easily produces 2x the points compared to an equal number of instances of the 5.04 console. This again stresses the inconsistencies across the various clients. It wasn't a problem back when there was really only one client, with a GUI and a console version available. Now that there is variety, a lot of inconsistencies have come to the surface.
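The PPD arithmetic in the quoted figures can be sketched in a few lines (a minimal illustration of my own; the function name and structure are not from any Folding@home tool):

```python
def ppd(points_per_wu, wus_per_day):
    """Points Per Day: average points produced over a 24-hour period."""
    return points_per_wu * wus_per_day

# Figures from the quoted post:
gpu_ppd = ppd(480, 10)    # 8800GT: ~10 WUs/day at 480 points each -> 4800
smp_ppd = ppd(1760, 2.5)  # Q6600 SMP: ~2.5 WUs/day at 1760 points each -> 4400
```

Which matches the quoted observation that the two clients land within a few hundred PPD of each other despite very different per-WU values.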
Re: Unbalanced Scoring
Posted: Wed Aug 06, 2008 9:53 pm
by Xilikon
Since nobody has picked up my stick yet, I'll respond to that:
While SMP easily yields double the points of the console client, that's offset by the hassle factor, the risk of losses, and very tight deadlines. Stanford often tosses in bonuses for the extra difficulty (be it time, risk of loss, or hassle factor). Not a lot of people are aware of this, and it often accounts for 50% of the extra points.
Re: Unbalanced Scoring
Posted: Wed Aug 06, 2008 9:58 pm
by 7im
Xilikon wrote:Since nobody has picked up my stick yet, I'll respond to that:
While SMP easily yields double the points of the console client, that's offset by the hassle factor, the risk of losses, and very tight deadlines. Stanford often tosses in bonuses for the extra difficulty (be it time, risk of loss, or hassle factor). Not a lot of people are aware of this, and it often accounts for 50% of the extra points.
I'll pick...
... that's a bit of an exaggeration (50%?!), but not untrue. While scientific production is the PRIMARY consideration for points, a few other factors may be considered as well for one client or another. And no, GURU, that is not more "proof" of a conspiracy. If Stanford took bribes, we wouldn't have AMD clients (bribes from Intel), Linux or Mac clients (bribes from M$), or GPU clients for 2 different companies (bribes from either), and clearly we do have those clients.
Re: Unbalanced Scoring
Posted: Wed Aug 06, 2008 10:19 pm
by bapriebe
FordGT90Concept wrote:If a million P2 computers started folding because their time is valuable, F@H becomes that much more effective.
The economists would have a field day figuring out how the scoring system should work here. Maybe Pande et al should take their colleagues in the dismal science out to lunch.
One million old P4s, let alone P2s, would at best contribute 1 petaflop and would need a good chunk of one large power station dedicated to running them. 10,000 GPUs would overtop that lot at a fraction of the operating cost. If their capital or personal budgets can't support a computer upgrade, there is nothing besides their egos and electricity bills preventing them from using their older computers for F@H.
Turning the points system on its head just to reduce the embarrassment factor for a few hypersensitive types does not seem sensible. I want to know how much science is done by a given platform: it's a valuable input into my future purchasing decisions. I for one would much rather see people encouraged to ditch those old systems in favour of faster and much more power-efficient systems.
Query to Pande Group: if the non-GPU-equipped systems were to disappear tomorrow, what exactly would the effect be on net science? Are the CPU and SMP clients absolutely essential? If so, how many is too many and how many is too few?
Re: Unbalanced Scoring
Posted: Wed Aug 06, 2008 10:23 pm
by L7R
I've been wondering why this debate still goes on. I think it will continue until both of these two people read and understand that GPUs really are that much more powerful than x86 CPUs today, at least in straightforward calculations. Actually they are even more powerful, but the GPU clients still need more optimization (at least for ATI cards). It can't be hard to understand why 1-teraflop GPUs can be fast.
In the gaming industry there was always a need for more and more powerful video cards, and it took years of fierce competition (only two players are left) to get here. In the same period, x86 processors have gained only a few minor enhancements (SSE, etc.) and two or four cores. GPUs have 800 (eight hundred) shader processor units, or... cores.
But more importantly:
What will happen when the rest of the people realize that GPUs really do double their performance (and PPD) every single year? That will be a shock to many, if today's PPD numbers already sound unreasonable.
btw, next week video card performance will double again, to two teraflops, when ATI's new card arrives.
Re: Unbalanced Scoring
Posted: Wed Aug 06, 2008 10:38 pm
by FordGT90Concept
bapriebe wrote:One million old P4s, let alone P2s, would at best contribute 1 petaflop and would need a good chunk of one large power station dedicated to running them. 10,000 GPUs would overtop that lot at a fraction of the operating cost. If their capital or personal budgets can't support a computer upgrade, there is nothing besides their egos and electricity bills preventing them from using their older computers for F@H.
The power is donated by the contributors. If they feel it isn't worth it, they won't contribute on those machines. Everyone has their own varying cost/benefit factor. If they feel it is worth the energy, who has the right to stop them from helping out in their own little way?
bapriebe wrote:Turning the points system on its head just to reduce the embarrassment factor for a few hypersensitive types does not seem sensible. I want to know how much science is done by a given platform: it's a valuable input into my future purchasing decisions. I for one would much rather see people encouraged to ditch those old systems in favour of faster and much more power-efficient systems.
That's consumerism. I never recommend buying a new computer unless it is necessary in a cost/benefit sense. In these parts, there are a lot of sub-2 GHz computers in use because all they are used for is internet, email, and some accounting. If their owners want to fold on them, why stop them? Adding a new task that is flexible (uses idle clocks) doesn't mean they need to upgrade their hardware before running it. That just doesn't make cost/benefit sense.
L7R wrote:I've been wondering why this debate still goes on. I think it will continue until both of these two people read and understand that GPUs really are that much more powerful than x86 CPUs today.
Only in a FLOPS sense. CPUs crush GPUs in integer arithmetic and almost always in available memory. The Pande Group chose not to exploit those things, probably for the ease of keeping everything in floating point.
Re: Unbalanced Scoring
Posted: Wed Aug 06, 2008 10:51 pm
by JBurton57
FordGT90Concept wrote:
Only in a FLOPS sense. CPUs crush GPUs in integer arithmetic and almost always in available memory. The Pande Group chose not to exploit those things, probably for the ease of keeping everything in floating point.
I'm not a computer programmer, so I'll ask you this:
What about a case like
http://www.tomshardware.com/reviews/nvi ... 54-11.html
On the next page here:
http://www.tomshardware.com/reviews/nvi ... 54-12.html
they claim to be able to accelerate the task quite a bit by processing it in parallel. If the CPU can do it so much better in integer math, then why isn't it done that way in the first place?
Re: Unbalanced Scoring
Posted: Wed Aug 06, 2008 11:22 pm
by FordGT90Concept
We ended up choosing a code snippet we had that takes a height map and calculates the corresponding normal map.
Normal calculations are extremely FLOP-intensive. Your inputs and outputs could be in the form of integers, but the heavy lifting is all double- and single-precision floating point--exactly what GPUs are designed to handle.
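As a rough illustration of why this is FLOP-heavy, here is a minimal sketch of my own (not the code from the Tom's Hardware article) that derives a normal from a height map via central differences: the height inputs can be integers, but the normalization step is pure floating-point work.

```python
import math

def normal_at(height, x, y):
    """Normal vector at (x, y) of a 2D height map, via central differences.
    The heights may be integers, but the square root and divisions below
    are floating-point operations -- one set per pixel of the map."""
    dx = height[y][x + 1] - height[y][x - 1]   # slope along x
    dy = height[y + 1][x] - height[y - 1][x]   # slope along y
    nx, ny, nz = -dx, -dy, 2.0                 # unnormalized normal
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    return (nx / length, ny / length, nz / length)

# A flat height map yields the straight-up normal:
flat = [[5, 5, 5], [5, 5, 5], [5, 5, 5]]
print(normal_at(flat, 1, 1))  # -> (0.0, 0.0, 1.0)
```

Since the same independent calculation repeats for every pixel, it maps naturally onto the many shader units of a GPU.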
Re: Unbalanced Scoring
Posted: Wed Aug 06, 2008 11:26 pm
by JBurton57
FordGT90Concept wrote:We ended up choosing a code snippet we had that takes a height map and calculates the corresponding normal map.
Normal calculations are extremely FLOP-intensive. Your inputs and outputs could be in the form of integers, but the heavy lifting is all double- and single-precision floating point--exactly what GPUs are designed to handle.
Then why is it hard to believe that the Pande Group needs the heavy lifting of a GPU for folding?
Re: Unbalanced Scoring
Posted: Thu Aug 07, 2008 12:07 am
by FordGT90Concept
Scaling before and after the calculations can move the bulk of the work onto the ALU, while the FPU only handles translation of the incoming and outgoing values. Surprise, surprise: 64-bit longs are far better suited to those kinds of operations than 32-bit ints. They would have to commit to 64-bit before decimal -> integer -> decimal calculations are even worth discussing. Now that GPUs are available, I doubt they will put any more effort into CPUs. :/
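The decimal -> integer -> decimal scheme described here is essentially fixed-point arithmetic. A minimal sketch, with a scale factor and names of my own choosing (nothing here comes from the Folding@home cores):

```python
SCALE = 1 << 32  # scale factor; 64-bit longs leave ~32 bits of headroom above it

def to_fixed(x: float) -> int:
    """Translate a decimal into a scaled integer (the FPU does this once on input)."""
    return round(x * SCALE)

def from_fixed(n: int) -> float:
    """Translate the integer result back into a decimal (FPU, once on output)."""
    return n / SCALE

# The bulk of the math in between runs on the ALU as plain integer operations:
a, b = to_fixed(1.25), to_fixed(2.5)
print(from_fixed(a + b))  # -> 3.75
```

With only 32-bit ints, that scale factor alone would consume the entire register, which is the point about needing 64-bit before the approach is worth discussing.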
Re: Unbalanced Scoring
Posted: Thu Aug 07, 2008 12:25 am
by JBurton57
FordGT90Concept wrote:Scaling before and after the calculations can move the bulk of the work onto the ALU, while the FPU only handles translation of the incoming and outgoing values. Surprise, surprise: 64-bit longs are far better suited to those kinds of operations than 32-bit ints. They would have to commit to 64-bit before decimal -> integer -> decimal calculations are even worth discussing. Now that GPUs are available, I doubt they will put any more effort into CPUs. :/
Okay. Now, I understand what you've said only in the abstract, and I know nothing about protein folding, so I'm not going to make an argument about whether or not what you've said is true.
That all having been established: Stanford has been working with/on the Tinker and Gromacs cores for almost a decade, since long before GPGPU was available and through the period when 64-bit processors arrived. Why didn't they latch onto 64-bit long ago, when the tech first became available? They've done things that seem crazier than that to me, for example developing the GPU1 core, which used DirectX to try to harness GPUs. Is programming the decimal > integer > decimal calculations so difficult that it'd be easier to implement FLOPs in DirectX?
Re: Unbalanced Scoring
Posted: Thu Aug 07, 2008 12:31 am
by P5-133XL
FordGT90Concept wrote:Now that GPUs are available, I doubt they will put any more effort into CPUs. :/
If CPUs have been replaced by GPUs and have no further value, then you don't have to worry about quitting your folding efforts over the scoring discrepancy: CPU folding will simply cease to exist as a client, like Deadliness or GPU1 folding did. Your entire argument becomes moot. If, on the other hand, CPUs still have scientific value, then folding on them will continue.
I'm done here. As far as I can tell, the sole purpose of this thread is to bait the community into an argument. I don't think anything said here will accomplish anything.
Re: Unbalanced Scoring
Posted: Thu Aug 07, 2008 2:02 am
by Guru
It's funny. I've read many people's comments and a lot of it has some logic to it. I fail to understand why I can take that information into account, counter it with indisputable logic, and still no one takes my opinion seriously.
I'll ask this: is it impossible to keep the original scoring system while also showing other data? I still like the idea of comparing scores in order to compare hardware, but comparing GPUs and CPUs, especially when the software is highly optimized in favor of GPUs, is like comparing apples to oranges. It's like putting a speed boat on pavement and saying a scooter is faster.
On the other hand, showing all of the other data I've already mentioned would make other comparisons possible. It would make everything more competitive, allow all devices both large and small to compete and be compared not only in scores but in hardware-to-hardware comparisons, and would be a huge gain for the science benefiting from the project, with contributions both large and small. (Honestly, it's easy to spot the ones who do not care about the science involved when they are ready to toss out any old technology that is contributing...)
The ones that disagree with this logic are truly idiots.
Re: Unbalanced Scoring
Posted: Thu Aug 07, 2008 2:23 am
by FordGT90Concept
JBurton57 wrote:Is programming the decimal > integer > decimal calculations so difficult that it'd be easier to implement FLOPs in DirectX?
They both have their challenges. With DirectX, you have to deal with extensive Microsoft libs; with float -> long -> float conversions, you have to constantly check for overflows and adjust the scales if need be. Neither is ideal, but they work in a pinch.
It doesn't take much effort to compile a 32-bit application into a 64-bit binary. For example, Valve did it to their entire Source engine in a matter of days. So... yeah.
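The "constantly check for overflows" part can be sketched like this (a hypothetical helper of my own; Python integers don't overflow, so the 64-bit limit is checked explicitly, the way C code working in longs would have to):

```python
INT64_MAX = (1 << 63) - 1
INT64_MIN = -(1 << 63)

def checked_add(a: int, b: int) -> int:
    """Add two scaled 64-bit values, flagging overflow so the caller
    can reduce the scale (dropping precision) and retry."""
    result = a + b
    if result > INT64_MAX or result < INT64_MIN:
        raise OverflowError("rescale needed: result exceeds 64 bits")
    return result

checked_add(1 << 62, 1 << 61)  # fits in 64 bits: returns 6917529027641081856
# checked_add(1 << 62, 1 << 62) would raise OverflowError -> rescale and retry
```

That per-operation bookkeeping is exactly the hassle being weighed here against dealing with the DirectX libraries.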
P5-133XL wrote:As far as I can tell, the sole purpose of this thread is to bait the community into an argument. I don't think anything said here will accomplish anything.
I think both sides have made valid arguments. I also think the immediate solution is quite simple and works for everyone (just adding some extra fields). All I do now is wait...
Re: Unbalanced Scoring
Posted: Fri Aug 08, 2008 12:37 am
by v00d00
Guru wrote:It's funny. I've read many people's comments and a lot of it has some logic to it. I fail to understand why I can take that information into account, counter it with indisputable logic, and still no one takes my opinion seriously.
I'll ask this: is it impossible to keep the original scoring system while also showing other data? I still like the idea of comparing scores in order to compare hardware, but comparing GPUs and CPUs, especially when the software is highly optimized in favor of GPUs, is like comparing apples to oranges. It's like putting a speed boat on pavement and saying a scooter is faster.
On the other hand, showing all of the other data I've already mentioned would make other comparisons possible. It would make everything more competitive, allow all devices both large and small to compete and be compared not only in scores but in hardware-to-hardware comparisons, and would be a huge gain for the science benefiting from the project, with contributions both large and small. (Honestly, it's easy to spot the ones who do not care about the science involved when they are ready to toss out any old technology that is contributing...)
The ones that disagree with this logic are truly idiots.
So how much did Ford pay you to come here and stir up some trouble by trolling? How many beers was it?
Funny how you registered and posted only in this thread. I wonder if you and Ford are one and the same person.