
Re: Suggestion [or how to eliminate slack time between WUs?]

Posted: Sat Nov 29, 2008 9:10 pm
by codysluder
That's about like Mr. Nosmo's company buying extra delivery trucks without hiring extra drivers. If there are extra trucks, they can be unloaded/reloaded while the driver is out with another truck, so when he comes back he can immediately go out on another run. (Sorry, Charlie, but your union didn't negotiate for any breaks when you're supposed to be working.)

The Pande Group is more interested in eliminating slack time DURING WUs rather than eliminating slack time between WUs. They're all about having as few trucks as possible, not as few drivers as possible.

Over the years, the financial gurus at my company have emphasized different methods to maximize profit (ROI). One method is just-in-time manufacturing, which focuses on "idle" inventory. They'd go crazy if they saw truckloads of parts sitting at the dock waiting to be moved. The just-in-time methodology applies directly to WUs in that if you have two WUs and are only working on one of them (or on both at half speed) you're not supporting the objectives of the "company" and you'd better get with the program or be prepared to be disciplined. (Hopefully the long awaited change to the points system will penalize people like shatteredsilicon who suggest we should run extra clients.)

The logic tree that 7im suggested needs to consider BOTH minimizing the idle time between WUs and the idle time when you've got more WUs than your hardware can run at full speed.

Re: Suggestion [or how to eliminate slack time between WUs?]

Posted: Sun Nov 30, 2008 1:17 am
by Sahkuhnder
codysluder wrote:The just-in-time methodology applies directly to WUs in that if you have two WUs and are only working on one of them (or on both at half speed) you're not supporting the objectives of the "company" and you'd better get with the program or be prepared to be disciplined. (Hopefully the long awaited change to the points system will penalize people like shatteredsilicon who suggest we should run extra clients.)
As one who is personally disgusted by the methods of shatteredsilicon and his blatant disregard of The Pande Group policy concerning how they desire us to run their clients, I still feel obligated to come to his defense. He knows what he is doing is bad, but does it anyway.
shatteredsilicon wrote:You're not supposed to be pre-loading units as a lot of people citing PG advisories will tell you, because "it's bad for the project".
That being said, he has contributed a couple million points, and so his dedication should count for something. I prefer to think of the long awaited points change less as a way to penalize people like him, as you suggested, and more as a way to reward those of us who have followed and do follow The Pande Group policy and contribute by running their clients as they are supposed to be run, in the scientifically optimal way.

Is he doing it just for the extra points or because he really believes he knows better than the researchers at The Pande Group about how their clients should be run?

The truth will be revealed by the new points system. If at that time he changes how his clients are run, then he was only doing it all along for the boost to his personal point total, and his hypocrisy will be exposed. If he sticks to his claimed convictions that he really does know better than The Pande Group, even though continuing to do so under the new system is costing him points, then at least he will show that he actually believes his own claims that he knows better than the Stanford researchers.

Re: Suggestion [or how to eliminate slack time between WUs?]

Posted: Sun Nov 30, 2008 6:26 am
by 7im
People respond better to positive reinforcement. The eventual points revamp will promote "better" behavior, i.e. better alignment of the science with the points.

Pande Group does not take punitive action, except in rare and extreme cases, such as zeroing the points and closing the account of a trojaned client.

Re: Suggestion [or how to eliminate slack time between WUs?]

Posted: Sun Nov 30, 2008 12:32 pm
by shatteredsilicon
Sahkuhnder wrote:
shatteredsilicon wrote:You're not supposed to be pre-loading units as a lot of people citing PG advisories will tell you, because "it's bad for the project".
Is he doing it just for the extra points or because he really believes he knows better than the researchers at The Pande Group about how their clients should be run?

The truth will be revealed by the new points system. If at that time he changes how his clients are run, then he was only doing it all along for the boost to his personal point total, and his hypocrisy will be exposed. If he sticks to his claimed convictions that he really does know better than The Pande Group, even though continuing to do so under the new system is costing him points, then at least he will show that he actually believes his own claims that he knows better than the Stanford researchers.
You're actually missing the point of most of my arguments. I am all for changing the points policy - this has been the main point all along. What annoys me is that there is a whole array of misdesign in how the contributions are valued, from points values of WUs being based on nonsensical reference hardware, to the fact that points awarded fail to take into account how quickly the results get returned. All this means that the points don't strictly reflect the usefulness of the contribution. If the points system gets changed to fully address these issues, I will be one happy folder. :)

Re: Suggestion [or how to eliminate slack time between WUs?]

Posted: Sun Nov 30, 2008 6:32 pm
by 7im
Well, you're half right, on both points, but that means you're also half wrong.

What HW is nonsensical? I've explained that point already. It doesn't matter which GPU is used to do the benchmarks. The GPUs DO NOT get an ATI or NV wrapper until AFTER they are benchmarked. They could pick ANY GPU to do the benchmarks, and all it would do is slide the scale of PPD, because the POINTS are based on SCIENCE completed, and NOT on the hardware speed.

And because the GPU and SMP clients DO have very short deadlines, the points are bumped up over what the clients with long deadlines get. NO, there isn't a direct linear or variable percentage for more points in shorter time, but the High Performance clients are a step in the right direction. Either/Or is better than nothing, but not as good as a graduated scale, as I hope they will implement soon.

And yes, I am tired of explaining this, and tired of waiting for this, but Stanford did announce and is working on a plan to better align the points and science, hopefully in the graduated scale model to better promote the project, and if not, at least to end this silly debate. :roll:

Re: Suggestion [or how to eliminate slack time between WUs?]

Posted: Sun Nov 30, 2008 6:54 pm
by Sahkuhnder
shatteredsilicon wrote:You're actually missing the point of most of my arguments. I am all for changing the points policy - this has been the main point all along.
I am for changing the points policy too, but I certainly understand that Stanford has higher priorities and may not be able to make the changes anytime soon. What about until then? What is the best way for all of us to contribute to the project?

shatteredsilicon wrote:What annoys me is that there is a whole array of misdesign in how the contributions are valued, from points values of WUs being based on nonsensical reference hardware, to the fact that points awarded fail to take into account how quickly the results get returned. All this means that the points don't strictly reflect the usefulness of the contribution.
You recognize that the "points awarded fail to take into account how quickly the results get returned" and that "the points don't strictly reflect the usefulness of the contribution". You are aware that Stanford wants us to return the results as soon as possible. So why not ignore the current imperfect points system then and simply do what is best by following the wishes of the researchers as to what is optimal for the science?

I do understand your desire to keep your processors folding 24/7 without downtime during the changing of WUs and in case of network or server problems. You should be commended on your commitment to maximize your contribution to that level. Perhaps you would entertain a suggestion. Here is your current approach:
shatteredsilicon wrote:But since we don't live in an ideal world - you can pre-load a WU by using 2 clients for each "resource" (CPU or GPU). You run the client with -oneunit and set up a process to monitor the current WU progress. When it hits some threshold (e.g. 99%), it fires up the secondary client and lets the previous one finish.
The high-speed clients run the WUs with the tightest deadlines and appear to be under pressure to be folded and have their results returned absolutely as soon as possible. The uniprocessor WUs don't seem to be as critical to return in as short a time period. With this in mind, how about setting up one uniprocessor client per CPU core that engages during the downtimes? This would keep the CPU folding and not waste any cycles, but would also not delay the speedy folding and return of any high-speed client WUs. IMHO it would be preferable to stall the return of uniprocessor WUs instead of high-speed client WUs - and this still satisfies the OP suggestion to eliminate the slack time.

shatteredsilicon wrote:If the points system gets changed to fully address these issues, I will be one happy folder. :)
I look forward to that too. Despite the inevitable minor problems and issues that are bound to arise, I feel we are all contributing our efforts toward something very important. I guess I'm one happy folder right now. :D

Re: Suggestion [or how to eliminate slack time between WUs?]

Posted: Mon Dec 01, 2008 1:01 am
by shatteredsilicon
Sahkuhnder wrote:I do understand your desire to keep your processors folding 24/7 without downtime during the changing of WUs and in case of network or server problems. You should be commended on your commitment to maximize your contribution to that level. Perhaps you would entertain a suggestion. Here is your current approach:
shatteredsilicon wrote:But since we don't live in an ideal world - you can pre-load a WU by using 2 clients for each "resource" (CPU or GPU). You run the client with -oneunit and set up a process to monitor the current WU progress. When it hits some threshold (e.g. 99%), it fires up the secondary client and lets the previous one finish.
That was my suggestion on how to work around the problem. Never did I imply that this is something I do. Just because I can see how I'd do it if I were to try (and I could probably come up with a working solution in under half an hour) doesn't mean that I have implemented such a solution and am using it. I haven't bothered, since if Stanford cannot be bothered to benefit their project with such easy and obvious optimizations, implementable in under an hour, I find it hard to work up the motivation to do it myself, especially since I keep getting told off for even suggesting such things.
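For anyone curious what such a monitor could look like, here is a minimal bash sketch of the two-client idea. Everything in it is an assumption for illustration only, not anything from a PG advisory: that the primary client runs with -oneunit in the current directory, that a second install lives in ./secondary, and that FAHlog.txt reports progress in a "Completed X out of Y steps (NN%)" form.

```shell
#!/bin/bash
# Hypothetical sketch of the two-client pre-load workaround.
# ASSUMPTIONS: primary client runs with -oneunit here, a second client
# install lives in ./secondary, and FAHlog.txt logs lines such as
# "Completed 247500 out of 250000 steps  (99%)".

THRESHOLD=99   # percent complete at which the secondary client is started

# Print the most recently logged completion percentage from a log file.
progress_pct() {
    grep -o '([0-9]\{1,3\}%)' "$1" | tail -1 | tr -d '(%)'
}

# Poll the primary client's log; once past the threshold, start the
# secondary client and let the -oneunit primary finish on its own.
monitor() {
    local log=$1
    while true; do
        pct=$(progress_pct "$log")
        if [ -n "$pct" ] && [ "$pct" -ge "$THRESHOLD" ]; then
            ( cd secondary && ./fah6 & )
            break
        fi
        sleep 300
    done
}
```

Whether pre-loading like this is acceptable to the project is exactly what this thread disputes; the sketch only shows that it is mechanically simple.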

Re: Suggestion [or how to eliminate slack time between WUs?]

Posted: Mon Dec 01, 2008 8:15 am
by bruce
Let's get back to the original topic.

There is already an enhancement suggestion on the To Do list that the client be modified to download a new WU before trying to upload a result. That's a straightforward suggestion that should be easy to program. (Of course I don't know if or when it might be accepted.) All of the other suggestions that I've seen seem to be more complicated and would have a smaller benefit than this one.

If this actually happened, would you folks agree that it's a big step forward?
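The ordering change on the To Do list amounts to swapping two steps. Sketched with stand-in stubs (the names are made up; the real client does network I/O and crunching in each step):

```shell
# Illustrative stubs only; each echoes the step it stands for.
download_next_wu() { echo "download next WU"; }
start_crunching()  { echo "start crunching"; }
upload_result()    { echo "upload finished result"; }

# The proposed ordering: fetch the next WU first, so the cores are never
# idle while the (often slower) upload runs.
next_wu_first() {
    download_next_wu
    start_crunching
    upload_result
}
```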

Re: Suggestion [or how to eliminate slack time between WUs?]

Posted: Mon Dec 01, 2008 12:40 pm
by shatteredsilicon
It is a step forward, but it's only 50% of the way there. I'd rather like to see a manual-only option to pre-load when there's a few percent still left to go (for 24-7 running only), with predictive timing so that it can guess how long it'll take to finish off the current WU and how long it'll take to download the next WU (guesstimated - take the average of last few download speeds), and try to line it up so that the new WU will be downloaded and ready just when the previous WU is completed and ready to upload.
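The timing arithmetic behind that guess is simple; here is a rough sketch with made-up helper names and inputs (the client exposes none of this, and the inputs would have to be guesstimated, e.g. from an average of recent download times):

```shell
# estimate_lead SECS_PER_PERCENT PERCENT_LEFT AVG_DOWNLOAD_SECS
# Prints how many seconds from now the next download should start so it
# finishes roughly when the current WU does. Purely illustrative.
estimate_lead() {
    local finish_in=$(( $1 * $2 ))    # estimated seconds until the WU completes
    local lead=$(( finish_in - $3 ))  # leave just enough time for the download
    [ "$lead" -lt 0 ] && lead=0       # download is the bottleneck: start now
    echo "$lead"
}
```

For example, at 120 s per percent with 3% left and a 90 s average download, it would wait 270 s before fetching.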

Re: Suggestion [or how to eliminate slack time between WUs?]

Posted: Mon Dec 01, 2008 5:28 pm
by Sahkuhnder
shatteredsilicon wrote:Never did I imply that this is something I do.
I misunderstood your posts then. I never intended for you to feel you were being "told off". If you are running your clients according to Stanford's guidelines, then please accept my apology.

bruce wrote:...that the client be modified to download a new WU before trying to upload a result.

If this actually happened, would you folks agree that it's a big step forward?
Yes, definitely.

shatteredsilicon wrote:I'd rather like to see a manual-only option...
The ability to manually enable or disable the function is a great suggestion. :D

Re: Suggestion [or how to eliminate slack time between WUs?]

Posted: Mon Dec 01, 2008 7:32 pm
by MtM
7im wrote:Well, you're half right, on both points, but that means you're also half wrong.

What HW is nonsensical? I've explained that point already. It doesn't matter which GPU is used to do the benchmarks. The GPUs DO NOT get an ATI or NV wrapper until AFTER they are benchmarked. They could pick ANY GPU to do the benchmarks, and all it would do is slide the scale of PPD, because the POINTS are based on SCIENCE completed, and NOT on the hardware speed.

And because the GPU and SMP clients DO have very short deadlines, the points are bumped up over what the clients with long deadlines get. NO, there isn't a direct linear or variable percentage for more points in shorter time, but the High Performance clients are a step in the right direction. Either/Or is better than nothing, but not as good as a graduated scale, as I hope they will implement soon.

And yes, I am tired of explaining this, and tired of waiting for this, but Stanford did announce and is working on a plan to better align the points and science, hopefully in the graduated scale model to better promote the project, and if not, at least to end this silly debate. :roll:
It's not that 'silly' ;)
bruce wrote:Let's get back to the original topic.

There is already an enhancement suggestion on the To Do list that the client be modified to download a new WU before trying to upload a result. That's a straightforward suggestion that should be easy to program. (Of course I don't know if or when it might be accepted.) All of the other suggestions that I've seen seem to be more complicated and would have a smaller benefit than this one.

If this actually happened, would you folks agree that it's a big step forward?
shatteredsilicon wrote:It is a step forward, but it's only 50% of the way there. I'd rather like to see a manual-only option to pre-load when there's a few percent still left to go (for 24-7 running only), with predictive timing so that it can guess how long it'll take to finish off the current WU and how long it'll take to download the next WU (guesstimated - take the average of last few download speeds), and try to line it up so that the new WU will be downloaded and ready just when the previous WU is completed and ready to upload.

Absolutely agree with shatteredsilicon :)

Re: Suggestion [or how to eliminate slack time between WUs?]

Posted: Mon Dec 01, 2008 8:35 pm
by codysluder
bruce wrote:If this actually happened, would you folks agree that it's a big step forward?
Yes . . . a BIG one.

I'd challenge the 50% statement. I'm sure it depends on both the particular project and your connection, but my DSL download speed is many times faster than my upload speed and the uploads seem to be much bigger than the downloads. My client wastes a lot more time uploading than downloading. Get the Pande Group to start downloading first and I'll be very happy.

Re: Suggestion [or how to eliminate slack time between WUs?]

Posted: Wed Dec 10, 2008 4:11 am
by matheusber
Without much talking, I'm a big fan of downloading and uploading at the same time. As was said, my upload is 1/3 of my download, and project 2673 makes me send 100 MB! If my upload line is tied up just with that, that's a good hour and more that could have been almost 20% of a 2669 as the next WU.

That's why all my quads run more than one client (for this, and for the hangs :( ).

If this were addressed, it would be a great thing for me.

thanks,

matheus

PS: I wrote a little script to do this. If you're using Linux, it may solve your problem too (I've been running it for a few days with no problems). It is a very simple one :)

Code:

#!/bin/bash
# Restart the client after each WU: run fah6, watch FAHlog.txt for the
# "Sending work to server" message, then kill and relaunch the client.

saida="true"

# truncate the marker file
> saida.fah

while true
do

  ./fah6 &
  PID=$!

  # poll the log every 5 minutes until the client starts sending work back
  while [ "$saida" = "true" ]
  do
    tail -10 FAHlog.txt | grep -i "Sending work to server" > /dev/null
    status=$?
    echo status $status
    if [ $status -eq 0 ]; then
      # WU is done; kill this instance and loop around to start a fresh one
      kill $PID
      saida="false"
      sleep 60
    else
      sleep 300
    fi

  done
  saida="true"
done
Since I'm using 6.23R1, it takes some arguments from client.cfg, such as -smp, but this can be easily addressed in the script :)