Stonecold wrote:100x seems like a bit of an over-exaggeration. It just seems a little too large of an increase (and by such a generic number) to be completely true. I may be wrong though, but this is just my speculation.
100x actually isn't that much if you realise where we came from in 2000.
I respectfully disagree: a 100x speedup is a big deal. While the projected speedup can be viewed in terms of the longer timescales the project can now reach, the new methodology also means we can get 100x more work done than we do now. That's like moving project performance into the hundreds of PFLOPS without a correspondingly huge increase in actual processing power.
It's a big deal for F@H in terms of what such a large speedup implies, so a jump in coding efficiency this big is very important.
As for 'how is such a large increase even possible': in the world of coding you often get small incremental returns, and occasionally really big returns in efficiency. Just today I was working on an algorithm that I was able to speed up by 500x with no loss in accuracy, and achieving it only required changing one block of code.
What makes this speedup a big deal is that returns generally diminish as you optimise code; the low-hanging fruit gets picked first. Given how efficient the F@H algorithms already are, this must represent a very clever piece of thinking.
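To make that concrete with a purely hypothetical toy example (nothing to do with the actual F@H change): the kind of block-level rewrite that gives a 100x-class speedup is usually a change of algorithm rather than micro-tuning. Here, swapping a nested rescan of a list for a set lookup gives the same answer well over 100x faster in Python:

Code:

import random
import time

values = [random.randrange(1_000_000) for _ in range(20_000)]

def count_duplicates_slow(items):
    # Naive version: for each item, rescan the rest of the list -- O(n^2).
    count = 0
    for i, a in enumerate(items):
        if a in items[i + 1:]:
            count += 1
    return count

def count_duplicates_fast(items):
    # Same answer from one pass with a set of values already seen -- O(n).
    seen, count = set(), 0
    for a in items:
        if a in seen:
            count += 1
        else:
            seen.add(a)
    return count

for fn in (count_duplicates_slow, count_duplicates_fast):
    start = time.perf_counter()
    result = fn(values)
    print(fn.__name__, result, f"{time.perf_counter() - start:.3f}s")

Both functions return the same count; the only change is that the second one does a single pass instead of rescanning the list for every element.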
bruce wrote:
For those of you who are expecting this announcement to imply you'll earn 100x as many points, that's simply not going to happen. This methodology will still be benchmarked by the traditional methods.
What methods are the PG considering to standardize points per work unit between work units run using the more efficient cores and the older less efficient cores? Are you going to divide the points per day by 100 on the benchmark machine?
Both old and new WUs are benchmarked on a common machine so that all WUs, old and new, produce equivalent PPD. What matters for points is how much faster your machine can get the work done compared to the benchmark machine, so the fact that it is 100x faster for everyone is not going to inherently produce any more PPD. All that extra work being done just goes to the science.
Points are not based on the number of atoms in the protein or on the number of nanoseconds of folding time that is simulated, but rather on the time it takes you to finish the assignment. The benchmarking FAQ assigns a certain number of points for one day's computing on the benchmark machine.
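As a rough sketch of that arithmetic (the numbers and the simple linear scheme below are made up purely for illustration; the real benchmarking process and any bonus schemes are more involved), you can see why a core that is 100x faster for both the benchmark machine and your machine leaves PPD unchanged:

Code:

BENCHMARK_PPD = 1000.0  # illustrative: points the benchmark machine earns per day of computing

def wu_base_points(benchmark_days_per_wu):
    # Credit for a WU: a day's worth of benchmark points, scaled by how long
    # the benchmark machine needs to finish this particular WU.
    return BENCHMARK_PPD * benchmark_days_per_wu

def your_ppd(benchmark_days_per_wu, your_days_per_wu):
    # Your points per day: the WU's credit divided by how long you take.
    return wu_base_points(benchmark_days_per_wu) / your_days_per_wu

# Old core: benchmark machine takes 2 days per WU, your machine takes 1 day.
print(your_ppd(2.0, 1.0))    # 2000.0

# New core, 100x faster for everyone: both times shrink by the same factor.
print(your_ppd(0.02, 0.01))  # still 2000.0 -- the 100x cancels out

Under this toy scheme the extra speed shows up as 100x more WUs completed per day, each worth 1/100th as much credit, which is exactly why the gain goes to the science rather than to the points.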
Just done one of these (a 7030) and it was very slow; it took the machine ages. It did seem like it was doing a lot of work, though at first I thought it wasn't working at all, the TPF was that slow. Not the most pleasant experience.
csvanefalk wrote:...Have there been any reports on the outcomes of these projects? It would be interesting to see how far they have progressed.
I have seen no science reports, if that's what you mean, but that's not unusual. It's common for very little to be said until somebody has published a paper, but you've already seen that. There was a blog post in February.