
Turing (RTX) Linux CLI Overclocking

Posted: Mon May 06, 2019 8:18 pm
by gordonbb
Since buying my first Turing card, an RTX 2070, I noticed that the conventional method of setting the Graphics Clock Offset to overclock, viz.:

Code: Select all

nvidia-settings -a [gpu:0]/GPUGraphicsClockOffset[3]=65
did not work, though the offset could still be set from the PowerMizer section of the nvidia-settings GUI. This, however, is most inconvenient as it requires a connected monitor, keyboard and mouse. As I prefer to run my folding rigs "headless" (i.e. without a connected monitor, keyboard or mouse), I had been scouring the NVIDIA documentation trying to figure this out.

Finally, a post on the Arch Linux Wiki provided the answer. It appears the new syntax is:

Code: Select all

nvidia-settings -a [gpu:0]/GPUGraphicsClockOffsetAllPerformanceLevels=65
I have tested this on a GTX 1660 Ti and it works; it also works on Pascal cards.
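For a headless rig the new attribute is easy to script across several GPUs. A minimal sketch, written as a dry run: it only echoes the commands, so remove the leading `echo` to actually apply them. The GPU indices and the +65 offset are illustrative assumptions, not recommendations.

```shell
#!/bin/sh
# Dry run: print the nvidia-settings command for each GPU.
# Remove the leading `echo` to actually apply the offsets.
OFFSET=65           # illustrative value; tune per card
for GPU in 0 1; do  # adjust to the GPUs actually present
  echo nvidia-settings -a "[gpu:${GPU}]/GPUGraphicsClockOffsetAllPerformanceLevels=${OFFSET}"
done
```

Note that on a truly headless box nvidia-settings still needs to reach the running X server, so DISPLAY (and usually XAUTHORITY) must be set appropriately for your display manager.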

Note that, as with the usual method, you will still need:

Code: Select all

Option         "Coolbits" "12"
in each "Screen" section of /etc/X11/xorg.conf to enable graphics and memory overclocking (Coolbits 8) and manual fan control (Coolbits 4).
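For reference, a minimal "Screen" section with that option might look like the sketch below; the Identifier, Device and Monitor names are illustrative and must match the rest of your own xorg.conf.

```
Section "Screen"
    Identifier     "Screen0"
    Device         "Device0"
    Monitor        "Monitor0"
    Option         "Coolbits" "12"
EndSection
```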

Re: Turing (RTX) Linux CLI Overclocking

Posted: Tue May 07, 2019 3:20 am
by MeeLee
Coolbits 28 enables them all together.
Can you verify with the GUI that the settings take effect?

Re: Turing (RTX) Linux CLI Overclocking

Posted: Wed May 08, 2019 12:02 am
by gordonbb
MeeLee wrote:Coolbits 28 enables them all together.
Can you verify with the GUI that the settings take effect?
No; Coolbits 28 also includes bit 5 (Coolbits=16), which is for voltage control. As I stated, Coolbits=4 is for fan control and Coolbits=8 is for the graphics clock offset, so to enable both you set Coolbits=12.
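In other words, Coolbits is a bitmask, so the values simply add up:

```shell
# Coolbits bits: 4 = manual fan control, 8 = clock offsets, 16 = voltage control
echo $((4 + 8))       # 12 -> fan control + overclocking
echo $((4 + 8 + 16))  # 28 -> the above plus voltage control
```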

And yes, I did verify with the GUI that the offsets were applied in the PowerMizer section, and also through observing the graphics clock increase while running

nvidia-smi -i -q ...

Will do the same via the CLI.
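A sketch of checking the applied clock from the CLI: the CSV query flags are real nvidia-smi options, but the sample line below stands in for live output, since the actual value depends on the card and load.

```shell
# On a live system:  nvidia-smi -i 0 --query-gpu=clocks.gr --format=csv,noheader
# The sample below stands in for that command's output (illustrative value).
SAMPLE="1935 MHz"
CLOCK_MHZ=$(printf '%s\n' "$SAMPLE" | awk '{print $1}')
echo "graphics clock: ${CLOCK_MHZ} MHz"   # prints: graphics clock: 1935 MHz
```

Watching that number rise past the stock boost clock while folding confirms the offset took effect without ever opening the GUI.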