Suggestion [or how to eliminate slack time between WUs?]
Moderators: Site Moderators, FAHC Science Team
-
- Posts: 1024
- Joined: Sun Dec 02, 2007 12:43 pm
Re: Suggestion [or how to eliminate slack time between WUs?]
That's about like Mr.Nosmo's company buying extra delivery trucks without hiring extra drivers. If there are extra trucks, then they can be unloaded/reloaded while the driver is out with another truck, so when he comes back he can immediately go out on another run. (Sorry, Charlie, but your union didn't negotiate for any breaks when you're supposed to be working.)
The Pande Group is more interested in eliminating slack time DURING WUs rather than eliminating slack time between WUs. They're all about having as few trucks as possible, not as few drivers as possible.
Over the years, the financial gurus at my company have emphasized different methods to maximize profit (ROI). One method is just-in-time manufacturing, which focuses on "idle" inventory. They'd go crazy if they saw truckloads of parts sitting at the dock waiting to be moved. The just-in-time methodology applies directly to WUs in that if you have two WUs and are only working on one of them (or on both at half speed), you're not supporting the objectives of the "company" and you'd better get with the program or be prepared to be disciplined. (Hopefully the long-awaited change to the points system will penalize people like shatteredsilicon who suggest we should run extra clients.)
The logic tree that 7im suggested needs to consider BOTH minimizing the idle time between WUs and the idle time when you've got more WUs than your hardware can run at full speed.
-
- Posts: 43
- Joined: Sun Dec 02, 2007 5:28 am
- Location: Vegas Baby! Yeah!
Re: Suggestion [or how to eliminate slack time between WUs?]
As one who is personally disgusted by the methods of shatteredsilicon and his blatant disregard of The Pande Group policy concerning how they desire us to run their clients, I still feel obligated to come to his defense. He knows what he is doing is bad, but does it anyway.
codysluder wrote: The just-in-time methodology applies directly to WUs in that if you have two WUs and are only working on one of them (or on both at half speed), you're not supporting the objectives of the "company" and you'd better get with the program or be prepared to be disciplined. (Hopefully the long-awaited change to the points system will penalize people like shatteredsilicon who suggest we should run extra clients.)
That being said, he has contributed a couple million points, so his dedication should count for something. I prefer to think of the long-awaited points change less as a way to penalize people like him, as you suggested, and more as a way to reward those of us who have followed and do follow The Pande Group policy and contribute by running their clients as they are supposed to be run, in the scientifically optimal way.
shatteredsilicon wrote: You're not supposed to be pre-loading units as a lot of people citing PG advisories will tell you, because "it's bad for the project".
Is he doing it just for the extra points or because he really believes he knows better than the researchers at The Pande Group about how their clients should be run?
The truth will be revealed by the new points system. If at that time he changes how his clients are run, then he was only doing it all along for the boost to his personal point total, and his hypocrisy will be exposed. If he sticks to his claimed conviction that he really does know better than The Pande Group, even though continuing to do so under the new system will cost him points, then at least he will show that he actually believes his own claims that he knows better than the Stanford researchers.
-
- Posts: 10179
- Joined: Thu Nov 29, 2007 4:30 pm
- Hardware configuration: Intel i7-4770K @ 4.5 GHz, 16 GB DDR3-2133 Corsair Vengence (black/red), EVGA GTX 760 @ 1200 MHz, on an Asus Maximus VI Hero MB (black/red), in a blacked out Antec P280 Tower, with a Xigmatek Night Hawk (black) HSF, Seasonic 760w Platinum (black case, sleeves, wires), 4 SilenX 120mm Case fans with silicon fan gaskets and silicon mounts (all black), a 512GB Samsung SSD (black), and a 2TB Black Western Digital HD (silver/black).
- Location: Arizona
- Contact:
Re: Suggestion [or how to eliminate slack time between WUs?]
People respond better to positive reinforcement. The eventual points revamp will promote "better" behavior, i.e. better alignment of the science with the points.
Pande Group does not take punitive action, except in rare and extreme cases, such as zeroing the points and closing the account of a trojaned client.
How to provide enough information to get helpful support
Tell me and I forget. Teach me and I remember. Involve me and I learn.
-
- Posts: 87
- Joined: Tue Jul 08, 2008 2:27 pm
- Hardware configuration: 1x Q6600 @ 3.2GHz, 4GB DDR3-1333
1x Phenom X4 9950 @ 2.6GHz, 4GB DDR2-1066
3x GeForce 9800GX2
1x GeForce 8800GT
CentOS 5 x86-64, WINE 1.x with CUDA wrappers
Re: Suggestion [or how to eliminate slack time between WUs?]
Sahkuhnder wrote: Is he doing it just for the extra points or because he really believes he knows better than the researchers at The Pande Group about how their clients should be run?
shatteredsilicon wrote: You're not supposed to be pre-loading units as a lot of people citing PG advisories will tell you, because "it's bad for the project".
The truth will be revealed by the new points system. If at that time he changes how his clients are run, then he was only doing it all along for the boost to his personal point total, and his hypocrisy will be exposed. If he sticks to his claimed conviction that he really does know better than The Pande Group, even though continuing to do so under the new system will cost him points, then at least he will show that he actually believes his own claims that he knows better than the Stanford researchers.
You're actually missing the point of most of my arguments. I am all for changing the points policy - this has been the main point all along. What annoys me is that there is a whole array of design flaws in how the contributions are valued, from points values of WUs being based on nonsensical reference hardware, to the fact that points awarded fail to take into account how quickly the results get returned. All this means that the points don't strictly reflect the usefulness of the contribution. If the points system gets changed to fully address these issues, I will be one happy folder.
-
- Posts: 10179
- Joined: Thu Nov 29, 2007 4:30 pm
- Hardware configuration: Intel i7-4770K @ 4.5 GHz, 16 GB DDR3-2133 Corsair Vengence (black/red), EVGA GTX 760 @ 1200 MHz, on an Asus Maximus VI Hero MB (black/red), in a blacked out Antec P280 Tower, with a Xigmatek Night Hawk (black) HSF, Seasonic 760w Platinum (black case, sleeves, wires), 4 SilenX 120mm Case fans with silicon fan gaskets and silicon mounts (all black), a 512GB Samsung SSD (black), and a 2TB Black Western Digital HD (silver/black).
- Location: Arizona
- Contact:
Re: Suggestion [or how to eliminate slack time between WUs?]
Well, you're half right, on both points, but that means you're also half wrong as well.
What HW is nonsensical? I've explained that point already. It doesn't matter which GPU is used to do the benchmarks. The GPUs DO NOT get an ATI or NV wrapper until AFTER they are benchmarked. They could pick ANY GPU to do the benchmarks, and all it would do is slide the scale of PPD, because the POINTS are based on SCIENCE completed, and NOT on the hardware speed.
And because the GPU and SMP clients DO have very short deadlines, the points are bumped up over what the clients with long deadlines get. NO, there isn't a direct linear or variable percentage for more points in shorter time, but the High Performance clients are a step in the right direction. Either/or is better than nothing, but not as good as a graduated scale, which I hope they will implement soon.
And yes, I am tired of explaining this, and tired of waiting for this, but Stanford did announce and is working on a plan to better align the points and science, hopefully with the graduated scale model to better promote the project, and if not, at least to end this silly debate.
How to provide enough information to get helpful support
Tell me and I forget. Teach me and I remember. Involve me and I learn.
-
- Posts: 43
- Joined: Sun Dec 02, 2007 5:28 am
- Location: Vegas Baby! Yeah!
Re: Suggestion [or how to eliminate slack time between WUs?]
shatteredsilicon wrote: You're actually missing the point of most of my arguments. I am all for changing the points policy - this has been the main point all along.
I am for changing the points policy too, but I certainly understand that Stanford has higher priorities and may not be able to make the changes anytime soon. What about until then? What is the best way for all of us to contribute to the project?
shatteredsilicon wrote: What annoys me is that there is a whole array of design flaws in how the contributions are valued, from points values of WUs being based on nonsensical reference hardware, to the fact that points awarded fail to take into account how quickly the results get returned. All this means that the points don't strictly reflect the usefulness of the contribution.
You recognize that the "points awarded fail to take into account how quickly the results get returned" and that "the points don't strictly reflect the usefulness of the contribution". You are aware that Stanford wants us to return the results as soon as possible. So why not ignore the current imperfect points system and simply do what is best by following the wishes of the researchers as to what is optimal for the science?
I do understand your desire to keep your processors folding 24/7 without downtime during the changing of WUs and in case of network or server problems. You should be commended on your commitment to maximize your contribution to that level. Perhaps you would entertain a suggestion. Here is your current approach:
shatteredsilicon wrote: But since we don't live in an ideal world - you can pre-load a WU by using 2 clients for each "resource" (CPU or GPU). You run the client with -oneunit and set up a process to monitor the current WU progress. When it hits some threshold (e.g. 99%), it fires up the secondary client and lets the previous one finish.
The high-speed clients run the WUs with the tightest deadlines and appear to be under pressure to be folded and have their results returned absolutely as soon as possible. The uniprocessor WUs don't seem to be as critical to return in as short a time period. With this in mind, how about setting up one uniprocessor client per CPU core that engages during the downtimes? This would keep the CPU folding and not waste any cycles, but would also not delay the speedy folding and return of any high-speed client WUs. IMHO it would be preferable to stall the return of uniprocessor WUs instead of high-speed client WUs - and this still satisfies the OP suggestion to eliminate the slack time.
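For reference, the monitoring process described in the quoted pre-load approach could be as simple as the following sketch. It is not an official tool: the client1/ and client2/ directory names are made up for the example, and it assumes the v6 client writes a "Progress: NN%" line to unitinfo.txt; if yours doesn't, parse FAHlog.txt instead.
Code:
#!/bin/bash
# Sketch only -- not an official FAH tool. Assumes two separately configured
# client directories (client1/ and client2/, made-up names) and that the
# client reports its progress percentage in unitinfo.txt.
THRESHOLD=99

( cd client1 && ./fah6 -oneunit ) &   # primary client works on its one WU

while true
do
    # best-effort read of the primary client's progress percentage
    pct=$(grep -o '[0-9][0-9]*%' client1/unitinfo.txt 2>/dev/null | tail -1 | tr -d '%')
    if [ -n "$pct" ] && [ "$pct" -ge "$THRESHOLD" ]; then
        # near the end: fire up the secondary client so it can download and
        # start its WU while the primary finishes and uploads
        ( cd client2 && ./fah6 -oneunit ) &
        break
    fi
    sleep 120
done

wait   # let both -oneunit clients run to completion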
shatteredsilicon wrote: If the points system gets changed to fully address these issues, I will be one happy folder.
I look forward to that too. Despite the inevitable minor problems and issues that are bound to arise, I feel we are all contributing our efforts toward something very important. I guess I'm one happy folder right now.
-
- Posts: 87
- Joined: Tue Jul 08, 2008 2:27 pm
- Hardware configuration: 1x Q6600 @ 3.2GHz, 4GB DDR3-1333
1x Phenom X4 9950 @ 2.6GHz, 4GB DDR2-1066
3x GeForce 9800GX2
1x GeForce 8800GT
CentOS 5 x86-64, WINE 1.x with CUDA wrappers
Re: Suggestion [or how to eliminate slack time between WUs?]
Sahkuhnder wrote: I do understand your desire to keep your processors folding 24/7 without downtime during the changing of WUs and in case of network or server problems. You should be commended on your commitment to maximize your contribution to that level. Perhaps you would entertain a suggestion. Here is your current approach:
shatteredsilicon wrote: But since we don't live in an ideal world - you can pre-load a WU by using 2 clients for each "resource" (CPU or GPU). You run the client with -oneunit and set up a process to monitor the current WU progress. When it hits some threshold (e.g. 99%), it fires up the secondary client and lets the previous one finish.
That was my suggestion for how to work around the problem. Never did I imply that this is something I do. Just because I can see how I'd do it if I were to try (and I could probably come up with a working solution in under half an hour) doesn't mean that I have implemented such a solution and am using it. I haven't bothered: if Stanford cannot be bothered to benefit their project with such easy and obvious optimizations, implementable in under an hour, I find it hard to work up the motivation to do it myself, especially since I keep getting told off for even suggesting such things.
Re: Suggestion [or how to eliminate slack time between WUs?]
Let's get back to the original topic.
There is already an enhancement suggestion on the To Do list that the client be modified to download a new WU before trying to upload a result. That's a straightforward suggestion that should be easy to program. (Of course I don't know if or when it might be accepted.) All of the other suggestions that I've seen seem to be more complicated and would have a smaller benefit than this one.
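To make the proposed ordering concrete, here is one way to picture it. The function names are purely illustrative stand-ins, not real client internals, and running the upload in the background is just one way the client could take advantage of already having the new WU in hand.
Code:
#!/bin/bash
# Illustrative only: these functions are stand-ins for what the client does
# internally at the end of a WU, not real client code or flags.
upload_result()    { echo "uploading finished WU (slow on asymmetric DSL)"; sleep 2; }
download_next_wu() { echo "downloading next WU"; sleep 1; }
crunch_next_wu()   { echo "folding next WU"; }

# Current order: upload_result; download_next_wu; crunch_next_wu
# (the machine sits idle until the slow upload has finished).

# Proposed order: fetch new work first, so folding can resume even while
# the upload is still running or the upload server is having problems.
download_next_wu
upload_result &
crunch_next_wu
wait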
If this actually happened, would you folks agree that it's a big step forward?
Posting FAH's log:
How to provide enough info to get helpful support.
-
- Posts: 87
- Joined: Tue Jul 08, 2008 2:27 pm
- Hardware configuration: 1x Q6600 @ 3.2GHz, 4GB DDR3-1333
1x Phenom X4 9950 @ 2.6GHz, 4GB DDR2-1066
3x GeForce 9800GX2
1x GeForce 8800GT
CentOS 5 x86-64, WINE 1.x with CUDA wrappers
Re: Suggestion [or how to eliminate slack time between WUs?]
It is a step forward, but it's only 50% of the way there. I'd rather like to see a manual-only option to pre-load when there's a few percent still left to go (for 24/7 running only), with predictive timing so that it can guess how long it'll take to finish off the current WU and how long it'll take to download the next WU (guesstimated by taking the average of the last few download speeds), and try to line it up so that the new WU will be downloaded and ready just when the previous WU is completed and ready to upload.
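A rough sketch of that timing logic follows. Every input here is assumed, since the stock client does not expose any of this: progress.txt would hold the current WU percentage, pace.txt the seconds per 1%, and downloads.txt the durations of the last few WU downloads.
Code:
#!/bin/bash
# Sketch of the predictive pre-load timing -- all input files are assumptions,
# not something the stock client produces.

progress=$(cat progress.txt)   # e.g. 96
per_pct=$(cat pace.txt)        # e.g. 180 seconds per percent

# guesstimate the next download time from the average of the recorded ones
avg_dl=$(awk '{ sum += $1 } END { if (NR) print int(sum / NR); else print 0 }' downloads.txt)

# estimated seconds until the current WU is finished
remaining=$(( (100 - progress) * per_pct ))

# trigger the pre-load when the work left roughly matches the download time,
# so the new WU arrives just as the current one completes and starts uploading
if [ "$remaining" -le "$avg_dl" ]; then
    echo "start pre-loading now (~${remaining}s of folding left, ~${avg_dl}s to download)"
fi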
-
- Posts: 43
- Joined: Sun Dec 02, 2007 5:28 am
- Location: Vegas Baby! Yeah!
Re: Suggestion [or how to eliminate slack time between WUs?]
shatteredsilicon wrote: Never did I imply that this is something I do.
I misunderstood your posts then. I never intended for you to feel you were being "told off". If you are running your clients according to Stanford's guidelines, then please accept my apology.
bruce wrote: ...that the client be modified to download a new WU before trying to upload a result.
If this actually happened, would you folks agree that it's a big step forward?
Yes, definitely.
shatteredsilicon wrote: I'd rather like to see a manual-only option...
The ability to manually enable or disable the function is a great suggestion.
-
- Posts: 1579
- Joined: Fri Jun 27, 2008 2:20 pm
- Hardware configuration: Q6600 - 8gb - p5q deluxe - gtx275 - hd4350 ( not folding ) win7 x64 - smp:4 - gpu slot
E6600 - 4gb - p5wdh deluxe - 9600gt - 9600gso - win7 x64 - smp:2 - 2 gpu slots
E2160 - 2gb - ?? - onboard gpu - win7 x32 - 2 uniprocessor slots
T5450 - 4gb - ?? - 8600M GT 512 ( DDR2 ) - win7 x64 - smp:2 - gpu slot
- Location: The Netherlands
- Contact:
Re: Suggestion [or how to eliminate slack time between WUs?]
7im wrote: Well, you're half right, on both points, but that means you're also half wrong as well.
What HW is nonsensical? I've explained that point already. It doesn't matter which GPU is used to do the benchmarks. The GPUs DO NOT get an ATI or NV wrapper until AFTER they are benchmarked. They could pick ANY GPU to do the benchmarks, and all it would do is slide the scale of PPD, because the POINTS are based on SCIENCE completed, and NOT on the hardware speed.
And because the GPU and SMP clients DO have very short deadlines, the points are bumped up over what the clients with long deadlines get. NO, there isn't a direct linear or variable percentage for more points in shorter time, but the High Performance clients are a step in the right direction. Either/or is better than nothing, but not as good as a graduated scale, which I hope they will implement soon.
And yes, I am tired of explaining this, and tired of waiting for this, but Stanford did announce and is working on a plan to better align the points and science, hopefully with the graduated scale model to better promote the project, and if not, at least to end this silly debate.
It's not that 'silly'.
bruce wrote: Let's get back to the original topic.
There is already an enhancement suggestion on the To Do list that the client be modified to download a new WU before trying to upload a result. That's a straightforward suggestion that should be easy to program. (Of course I don't know if or when it might be accepted.) All of the other suggestions that I've seen seem to be more complicated and would have a smaller benefit than this one.
If this actually happened, would you folks agree that it's a big step forward?
shatteredsilicon wrote: It is a step forward, but it's only 50% of the way there. I'd rather like to see a manual-only option to pre-load when there's a few percent still left to go (for 24/7 running only), with predictive timing so that it can guess how long it'll take to finish off the current WU and how long it'll take to download the next WU (guesstimated by taking the average of the last few download speeds), and try to line it up so that the new WU will be downloaded and ready just when the previous WU is completed and ready to upload.
Absolutely agree with shatteredsilicon.
-
- Posts: 1024
- Joined: Sun Dec 02, 2007 12:43 pm
Re: Suggestion [or how to eliminate slack time between WUs?]
bruce wrote: If this actually happened, would you folks agree that it's a big step forward?
Yes . . . a BIG one.
I'd challenge the 50% statement. I'm sure it depends on both the particular project and your connection, but my DSL download speed is many times faster than my upload speed and the uploads seem to be much bigger than the downloads. My client wastes a lot more time uploading than downloading. Get the Pande Group to start downloading first and I'll be very happy.
-
- Posts: 31
- Joined: Thu Mar 13, 2008 7:24 pm
Re: Suggestion [or how to eliminate slack time between WUs?]
Without much talking: I'm a big fan of downloading and uploading at the same time. As was said, my upload is 1/3 of my download, and project 2673 makes me send 100 MB! If my upload line is tied up with just that, that's a good hour and then some, during which I could have folded almost 20% of a 2669 as the next WU.
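Rough numbers for that claim (the 220 kbit/s upload speed below is an assumption, not a figure from the post): 100 MB at that rate is just over an hour, which matches "a good hour and then some".
Code:
# back-of-the-envelope: a 100 MB result at ~220 kbit/s of usable upload
size_mb=100
up_kbit=220                                   # assumed upload speed
seconds=$(( size_mb * 8 * 1024 / up_kbit ))   # about 3723 seconds
echo "$(( seconds / 60 )) minutes"            # about 62 minutes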
That's why all my quads run more than one client (for this, and because of the hangs).
If this were addressed, it would be a great thing for me.
thanks,
matheus
PS: I wrote a little script to do this. If you're using Linux, it may solve your problem too (I've been running it for a few days with no problems). It is a very simple one.
As I'm using 6.23R1, it takes some arguments from client.cfg, such as -smp, but this can easily be adjusted in the script.
Code:
#!/bin/bash
saida="true"
# clear the output file
> saida.fah

while true
do
    # start the client in the background and remember its PID
    ./fah6 &
    PID=$!

    # watch the log until the client reaches the upload phase
    while [ "$saida" = "true" ]
    do
        tail -10 FAHlog.txt | grep -i "Sending work to server" > /dev/null
        status=$?
        echo status $status
        if [ $status -eq 0 ]; then
            # the log shows the upload phase has been reached: kill this
            # instance and let the outer loop start a fresh client, which
            # fetches a new WU
            kill $PID
            saida="false"
            sleep 60
        else
            sleep 300
        fi
    done
    saida="true"
done
Still waiting for smp folding under FreeBSD (32 or 64 bits) username: Nenhum_de_Nos
http://eternamente.info