Re: Quad-core 2GHz vs Dual-core 4GHz - Which is faster?
Posted: Sun Nov 23, 2008 11:38 pm
I have a quad running 3 SMP clients on cores 2-4, plus an ATI GPU client and a uniprocessor client sharing core 1 to pick up the slack. This box runs 64-bit Vista, by the way.
The SMP clients average about 1200-1250 PPD each, so roughly 3600-3700 PPD on 3 cores.
If I run one SMP client on all cores at idle priority and the GPU client on low, the SMP gets about 2000 PPD (it varies some, say 1700-2200).
If I run one SMP client on 2 cores, it produces about the same, around 2000-2100 PPD.
If I run one SMP client on 3 cores, output varies a lot; I've seen anywhere from 1400 to 2500 PPD. Generally 2 cores seem faster, and one SMP client per core is the fastest setup - by far.
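For anyone wanting to reproduce this layout, here's a minimal sketch of how the core pinning and priorities could be set programmatically, assuming Windows and the Python psutil package (the PIDs are placeholders for illustration; Task Manager's "Set Affinity" and "Set Priority" do the same thing by hand):

[code]
import psutil

# Hypothetical PIDs of the three SMP clients and the GPU client --
# placeholders, not values from the actual setup described above.
smp_pids = [1234, 1235, 1236]
gpu_pid = 1237

# One SMP client per core on cores 2-4 (0-indexed: 1-3), at idle priority.
for core, pid in enumerate(smp_pids, start=1):
    p = psutil.Process(pid)
    p.cpu_affinity([core])              # pin to a single core
    p.nice(psutil.IDLE_PRIORITY_CLASS)  # Windows "idle" priority class

# GPU client on core 1 (0-indexed: 0); interpreting "low" as the
# below-normal priority class (an assumption).
g = psutil.Process(gpu_pid)
g.cpu_affinity([0])
g.nice(psutil.BELOW_NORMAL_PRIORITY_CLASS)
[/code]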
My work PC has a Core 2 Duo which, when idle and running one SMP client, manages about 1500-1600 PPD in Vista. When I actually use it (standard office work, basically) it slows down a great deal; when I checked today it was down to just 500 PPD. The machine doesn't do much otherwise, but I guess the constant processor use by higher-priority processes really disrupts the MPI synchronization. I think I'm going to put a uniprocessor client on this one instead.