I noticed that the size of the work units is highly variable, from as little as 2,903 points to 145,432 points!
The largest one is not a problem: it landed on one of my new, fast machines and will be done in about 10.5 hours. The problem is my second-largest job, at 72,700 points: it landed on a "retired" laptop and has an ETA of 7.04 days!
I could let it run, but I noticed the job has a timeout of just 1 day 9 hours and an expiration of 4 days 9 hours from now, so I'm afraid my result would arrive too late to be useful.
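To spell out the arithmetic (using the numbers above, and assuming the ETA is roughly accurate), the laptop blows past both deadlines, not just the timeout:

```python
# Rough deadline check for my slow work unit.
eta_days = 7.04                # laptop's estimated time to finish
timeout_days = 1 + 9 / 24      # timeout: 1 day 9 hours  -> 1.375 days
expiration_days = 4 + 9 / 24   # expiration: 4 days 9 hours -> 4.375 days

print(f"ETA:        {eta_days:.2f} days")
print(f"Timeout:    {timeout_days:.2f} days, missed: {eta_days > timeout_days}")
print(f"Expiration: {expiration_days:.2f} days, missed: {eta_days > expiration_days}")
```

So even the more generous expiration deadline is missed by more than two and a half days.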
I see how to pause a job, but that would only delay it more.
I'd like some way to:
1: Avoid wasting CPU time on a job that won't finish in time.
2: Avoid "failing to deliver" and holding up somebody's research.
I guess I'd like to know:
1: How do I kill a job (and properly report to the mother-ship that I'm punting it, so it can be reassigned ASAP)?
2: How do I set a limit on the job size for each PC or "slot"? (I tried editing the slots, but didn't see any settings I was confident/foolish enough to alter.)
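For what it's worth, on question 2 the only size-related knob I could find is the client's `max-packet-size` option. I haven't dared touch it, and I'm not certain it governs the point sizes I'm seeing rather than download size, but if I have it right, in `config.xml` it would look something like this:

```xml
<config>
  <!-- Ask the server for smaller work units on this machine.
       Allowed values, as far as I can tell: small / normal / big. -->
  <max-packet-size v='small'/>
</config>
```

If that's the wrong setting (or per-slot options work differently), corrections welcome.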
Thinking big picture: is there a bug here, or did I (or maybe the researcher who set up the job) just dork something up? Any ideas how this happened, and how we can prevent it from happening again (either to me or some other enthusiastic but clueless newbie)? :-)