Low PPD on some WUs
Re: Low PPD on some WUs
Yup, issue solved
Thanks everyone
Re: Low PPD on some WUs
Did you read my post and fail to understand it?
You can fold with the CPU at some number LESS THAN 7 provided you leave a little bit of idle time rather than forcing too many threads to compete with each other.
Post Task manager again.
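For reference, a minimal sketch of where that thread count lives in the v7 client's config.xml (the slot ids here are placeholders for your own setup; FAHControl exposes the same option in its slot settings):

Code: Select all

<config>
  <!-- Cap the CPU slot at 6 threads so the GPU slot's feeder thread
       and the OS keep a little idle headroom (per bruce's advice) -->
  <slot id='0' type='CPU'>
    <cpus v='6'/>
  </slot>
  <slot id='1' type='GPU'/>
</config>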
Posting FAH's log:
How to provide enough info to get helpful support.
Re: Low PPD on some WUs
I know... but compared to the PPD I get with the video card WU it's only about 1/10 less.
bruce wrote:Did you read my post and fail to understand it?
You can fold with the CPU at some number LESS THAN 7 provided you leave a little bit of idle time rather than forcing too many threads to compete with each other.
Post Task manager again.
Plus... a 13xxx project takes less than 3 hours... I can just re-enable cpu folding afterwards
I still don't get why having the cpu folding set to low priority and the gpu folding set to normal priority didn't work... it shouldn't be affecting the gpu usage that much... a 75% decrease is pretty drastic
Even when I had 7 png optimization processes and cpu folding going the gpu usage dropped down to like 90%.. but not 25%
It dropped down to about 40% at low priority and 90% at normal priority
I had 5% cpu free... the gpu folding was only using 6-7% of the cpu even at a higher priority than the cpu folding, and it uses 1-2% without cpu folding
It actually uses less cpu without cpu folding... about 3x less
And it ONLY happens with 13xxx projects...
Re: Low PPD on some WUs
Without cpu folding: (screenshot)
bruce wrote:Post Task manager again.
With cpu folding: (screenshot)
And with cpu folding it drops the gpu usage from 98% to 25%
Okay, with 7 threads on the cpu folding the gpu usage drops from 98% to 40%
With 6 threads on the cpu folding the gpu usage drops from 98% to 94%
With 4 threads, 98% to 96%
With 8 threads the gpu usage actually goes UP and is at about 70%
Why would going from 7 to 8 threads on the cpu WU cause the gpu usage to go from 40% to 70%?
https://www.youtube.com/watch?v=9yMwsiVHqWs
The video is slightly laggy because I turned the frame rate down to cut down on cpu usage
Re: Low PPD on some WUs
I can't answer all your questions -- I don't have a detailed enough knowledge about how GPU drivers work when there's contention for the CPU. I do know it's important, though.
A lot depends on timing. The FahCore for the CPU keeps the CPU as busy as possible doing heavy calculations. Most of the time, the FahCores for the GPUs are busy moving data across the PCIe bus or they're in a spin-wait so they're always available to process the next I/O without waiting for the high-latency overhead of interrupting another task.
FahCore_21 also occasionally does several seconds of heavy-compute processing in support of the analysis itself.
Considering the CPU separately, it should be noted that FahCore_a4 is totally different. It will start as many compute-bound threads as you let it, but if they get interrupted frequently, FahCore_a4's performance will suffer. In other words, 7 CPUs will accomplish less folding than, say, 4, if we assume that the sum of all non-FAH processing exceeds an average of a couple of CPUs. I.e., using 2 CPUs for something else plus 7 CPUs for FahCore_a4 will guarantee that one of the 7 lags behind the processing done by the other 6.
This also gets distorted somewhat when you consider HyperThreading.
In your previous example, you apparently had 7 threads getting a total of 23%, which would have been inferior to setting 2 threads each getting 12% but mostly staying in sync.
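To make that concrete, here's a toy model of the effect (my own illustration, not anything from the FAH code): the folding threads synchronize every step, so the team only advances as fast as its slowest member. Assume ~2 logical CPUs' worth of other work on an 8-thread machine, and that the unluckiest oversubscribed thread gets about half a CPU:

Code: Select all

import math

def useful_cpus(n_threads, total_cpus=8, other_load=2.0):
    """Toy model: barrier-synced threads advance at the slowest
    thread's pace, so throughput = n_threads * slowest share."""
    free = total_cpus - other_load          # CPU time left for folding
    if n_threads <= math.floor(free):
        slowest_share = 1.0                 # no thread has to share
    else:
        slowest_share = 0.5                 # worst thread splits a CPU
    return n_threads * slowest_share

for n in (4, 6, 7, 8):
    print(f"{n} threads -> ~{useful_cpus(n):.1f} CPUs of useful work")

With those (made-up) numbers, 7 threads does less useful work than 4, exactly as bruce describes.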
Posting FAH's log:
How to provide enough info to get helpful support.
Re: Low PPD on some WUs
Well, I think I'll just keep the threads at 6 from now on.
bruce wrote:I can't answer all your questions -- I don't have a detailed enough knowledge about how GPU drivers work when there's contention for the CPU. I do know it's important, though.
A lot depends on timing. The FahCore for the CPU keeps the CPU as busy as possible doing heavy calculations. Most of the time, the FahCores for the GPUs are busy moving data across the PCIe bus or they're in a spin-wait so they're always available to process the next I/O without waiting for the high-latency overhead of interrupting another task.
FahCore_21 also occasionally does several seconds of heavy-compute processing in support of the analysis itself.
Considering the CPU separately, it should be noted that FahCore_a4 is totally different. It will start as many compute-bound threads as you let it, but if they get interrupted frequently, FahCore_a4's performance will suffer. In other words, 7 CPUs will accomplish less folding than, say, 4, if we assume that the sum of all non-FAH processing exceeds an average of a couple of CPUs. I.e., using 2 CPUs for something else plus 7 CPUs for FahCore_a4 will guarantee that one of the 7 lags behind the processing done by the other 6.
This also gets distorted somewhat when you consider HyperThreading.
In your previous example, you apparently had 7 threads getting a total of 23%, which would have been inferior to setting 2 threads each getting 12% but mostly staying in sync.
My PPD is at least close to the same if not better... and it won't conflict with 13xxx projects
The possible benefit from folding with 7 threads versus 6 is small... probably 5%, and it could just cause problems
This way it won't overload all the cores... I see all 8 threads getting a pretty equal load right now with 6 threads going... that's good
Re: Low PPD on some WUs
Gah... I was comparing the PPD with different numbers of threads on the CPU, and I got a bad work unit and it dumped the WU... was that because I was changing the number of threads so often?
I figured out 6 threads gives about 23,000 PPD, 4 threads gives 18,000 PPD, and I never figured out what 7 threads gave...
Re: Low PPD on some WUs
On a different WU, 7 threads gave 16,000 PPD and 6 threads gave 21,000?
So 6 threads is better, and takes less computer resources
Hmm...
That doesn't seem right though.. I would think 7 would be better than 4... I could understand how 6 would be better than 7 though
Yup, it's still calculating... I'll wait longer next time
Re: Low PPD on some WUs
4 threads 11,000, 6 threads 15,300, 7 threads 14,900
Now I'm getting 6 threads 14,500
But still, 6 seems the way to go
It'll give the video card WU more PPD and take less computer resources and not cause conflicts with 13xxx projects
Re: Low PPD on some WUs
So who was right?
toTOW wrote:Can you try to pause the CPU slot while running p130xx? Does it help?
If it helps, reduce your CPU slot to 6 threads... it should help too as a long-term solution...
Re: Low PPD on some WUs
This may lead to a general question:
A 4 core cpu with hyperthreading has 8 threads or logical cores, where each of the 4 cores has 2 threads.
But when one thread is left free and 7 threads are busy, all 4 cores are still busy.
FAH by default leaves one cpu core free for each GPU slot.
Shouldn't this translate to leaving 2 threads/logical cores free on a hyperthreading CPU?
Why does FAH check for threads/logical cores and not physical cores?
This issue may only now be arising because core_21 takes more cpu resources on heavy 13000 projects.
Could it hit other users too?
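For what it's worth, the two counts being distinguished here are easy to read programmatically; a small sketch (it assumes the third-party psutil package, not anything FAH ships):

Code: Select all

import psutil  # third-party: pip install psutil

logical = psutil.cpu_count(logical=True)    # hardware threads (8 here)
physical = psutil.cpu_count(logical=False)  # physical cores (4 here)
print(f"{physical} physical cores exposing {logical} logical CPUs")
# FAHClient sizes its slots from the logical count, so "leave one
# CPU free" frees one hardware thread, not one whole core.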
Re: Low PPD on some WUs
This was a request during early development of FAHClient, but a lot of time was spent on it and it never worked as required. They backed off to threads.
foldy wrote:Shouldn't this translate to leaving 2 threads/logical cores free on a hyperthreading CPU?
Why does FAH check for threads/logical cores and not physical cores?
There's one redeeming factor, though. Using the example of an 8-thread HT CPU with 7 folding threads: six of the threads will be sharing SSE units in pairs, so they'll each work at about 60% of a dedicated CPU's speed. The seventh thread will have exclusive use of its SSE unit while sharing its core with the GPU driver thread, which uses virtually no FP/SSE operations, so that CPU thread will get 100% of the dedicated CPU speed. [The only exception is Core_21, which uses a few seconds of floating point at every checkpoint, but that's not really significant.]
Adding the first four threads contributes more to Core_a4 than adding any of the last four, and adding a GPU thread is almost the same as leaving that thread idle, from the perspective of any app that mostly uses floating point.
For CPUs from either AMD or Intel, SSE/FPU operations use shared hardware, but most other operations have access to unshared resources. That makes sense since, except for scientific apps and the more sophisticated 3D games, almost nothing else written uses much floating point.
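As a worked version of that arithmetic (purely illustrative; the ~60% figure is bruce's, and the real ratio varies by CPU generation):

Code: Select all

def fp_throughput(n_threads, cores=4, shared_speed=0.6):
    """Estimated FP throughput on a 4-core/8-thread HT CPU, in units
    of one dedicated core. Paired threads share an SSE unit and each
    run at ~60%; unpaired threads keep a unit to themselves."""
    if n_threads <= cores:
        return float(n_threads)
    paired = 2 * (n_threads - cores)   # threads sharing an SSE unit
    alone = n_threads - paired         # threads with exclusive units
    return alone + paired * shared_speed

for n in range(1, 9):
    print(f"{n} threads -> {fp_throughput(n):.1f}x one core's FP rate")

The marginal gain flattens after four threads (1.0x per added thread up to four, then only ~0.2x each), which is bruce's point about the first four threads contributing the most.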
Posting FAH's log:
How to provide enough info to get helpful support.
Folding@home causes my video card to do weird things
So.. if I run a stress test my video card starts throttling when the VRM temp hits 110, but if I'm running folding@home the VRM temp goes all the way up to 135 and shuts my video card off...
I originally thought my video card started throttling 60 seconds after hitting the power limit.. but it turns out it just takes about that long for the VRM temp to hit 110
And when I run folding@home I can't underclock my video card
I just ran a stress test while running folding@home and my video card didn't throttle even when the VRM temp hit 120+
Why does folding@home disable throttling when the VRM temp goes up... it'll go all the way to 135 and shut my video card off...
(My video card doesn't have a power limit, that's why it got so hot)
https://www.youtube.com/watch?v=MEPNzrBoyBM
Without folding@home running, my video card will start throttling when the VRM temp hits 110... but with folding@home running it never throttles with VRM temp...
At least I haven't gotten any folding@home errors in a while.. heh
And thank god I fixed that PPD issue on 13xxx projects...
At least my video card shut itself off.. heh... VRM temp would have easily gone 160+ if it didn't
It also causes my GPU to not throttle when the core temp goes up... normally my video card starts throttling if the gpu temp stays above 97 for more than a few seconds (first it tries to turn fan speed to 100% to cool the gpu down)
But it went all the way up to 101 and shut my video card off..
My video card shuts itself off at only 101°C... that's pretty sad... my 7850 doesn't even force fans to 100% until 102
https://www.youtube.com/watch?v=l_D0ezitJbY
And that was only 1050mhz... I had it at 1100mhz with the VRM temp one
So... now my video card fan seems to have a mind of its own
My core temp keeps going from 84 to 90... it keeps turning the fan speed down and back up... I have it set to 40% but it doesn't seem to be holding it there
Well... my fan seems to be stable now... maybe I had something running that I didn't realize, and it was causing that to happen... dunno
Actually.. the fan speed percentage stayed the same, but the rpm was changing...
No idea :/
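If it helps pin this down, temperatures and fan speed can be logged while folding. A minimal sketch, assuming an NVIDIA card with the stock nvidia-smi tool (an AMD card needs a different utility, and VRM temperature isn't exposed here at all; a vendor tool like GPU-Z is needed for that):

Code: Select all

import subprocess
import time

# Log core temperature and fan speed once a second (NVIDIA only).
while True:
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=temperature.gpu,fan.speed",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    print(time.strftime("%H:%M:%S"), out.stdout.strip())
    time.sleep(1)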
Re: Low PPD on some WUs
You definitely need to improve your card and/or case cooling
Re: Low PPD on some WUs
I think your GPU fans make more noise at 100% than some case fans, which you can even throttle from 12V to 7V.
Bryman wrote:I do have some case fans I could snap onto my case... but it would make more noise... I prefer just to have my video card about 5°C hotter and have the side of my case open with no case fans
You can also take a case fan and put it directly near the GPU with a wire tie and still keep your case open.
You can also change your GPU's fan curve using a software tool so that it increases fan speed earlier.
I think MSI Afterburner allows that.