My Maxwells are getting 13000 & 13001, & failing

Moderators: Site Moderators, FAHC Science Team

Gary480six
Posts: 93
Joined: Mon Jan 21, 2008 6:42 pm

Re: My Maxwells are getting 13000 & 13001, & failing

Post by Gary480six »

runpaint,

You say they are Maxwell cards - but you do not say whether it's Maxwell 1 (GTX750/750Ti) or Maxwell 2 (GTX970/980) hardware.

I just posted about this yesterday here in another section of the forum.

And what Bruce is telling you was the exact solution for me - though I only have GTX750 and GTX750Ti cards.

Update your NVIDIA video card drivers to the latest version. (I cannot confirm whether this will help with GTX970/980 cards - but I assume so.)
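To make the driver advice concrete, here is a rough sketch of the version check you'd do mentally before folding these projects. It assumes the 347.xx minimum that comes up later in this thread; the version strings are examples typed in by hand, not queried from real hardware.

```python
# Illustrative only: compare an installed NVIDIA driver version against an
# assumed required minimum. The 347.09 figure is a stand-in for the 347.xx
# series mentioned elsewhere in this thread, not an official requirement.

def parse_version(v):
    """Turn a dotted driver version like '346.22' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def driver_is_new_enough(installed, required):
    """True if the installed driver meets or exceeds the required minimum."""
    return parse_version(installed) >= parse_version(required)

# Example: an older Maxwell-era driver vs. an updated 347.xx driver
print(driver_is_new_enough("346.22", "347.09"))  # False - update needed
print(driver_is_new_enough("347.25", "347.09"))  # True - good to fold
```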

What I found, looking back over the last two months of my log files, was that this had been happening often: I would get a P13000 or P13001 work unit and it would fail at 0%. The difference was that each 'bad' P13000 was followed by five successful Core 18 work units - so I never noticed.
It was only after getting 5+ bad P13000 work units in a row (and the FAILED warning) that the problem was revealed.

I updated my drivers and my PCs are now happily crunching away on P13000 and P13001 work units.
7im
Posts: 10179
Joined: Thu Nov 29, 2007 4:30 pm
Hardware configuration: Intel i7-4770K @ 4.5 GHz, 16 GB DDR3-2133 Corsair Vengence (black/red), EVGA GTX 760 @ 1200 MHz, on an Asus Maximus VI Hero MB (black/red), in a blacked out Antec P280 Tower, with a Xigmatek Night Hawk (black) HSF, Seasonic 760w Platinum (black case, sleeves, wires), 4 SilenX 120mm Case fans with silicon fan gaskets and silicon mounts (all black), a 512GB Samsung SSD (black), and a 2TB Black Western Digital HD (silver/black).
Location: Arizona
Contact:

Re: My Maxwells are getting 13000 & 13001, & failing

Post by 7im »

runpaint wrote:It's 0.0.52, I thought it updated automatically.
Yes, and no.

The work units that you fold each contain a "minimum required core version" setting. If you haven't folded any work units that require the newer .55 FAHCore version, then no upgrade was done. However, when you do fold your first work unit that requires the newer version, the client will download the newer version automatically. There are ways to induce the update, but that goes even further off the current topic.
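The "minimum required core version" behaviour 7im describes can be sketched like this. The function and field names are invented for illustration; this is not the actual FAHClient code, just the decision it makes per work unit.

```python
# Hedged sketch of the per-work-unit core check 7im describes: the client
# only downloads a newer FAHCore when a work unit actually demands it.
# Versions are tuples like (0, 0, 52); names here are illustrative only.

def core_to_run(installed_core, wu_min_core):
    """Decide whether the installed FAHCore suffices for a work unit."""
    if wu_min_core > installed_core:
        # Work unit requires a newer core: the client fetches it first.
        return ("download new core", wu_min_core)
    # Otherwise the already-installed core is reused; no upgrade happens.
    return ("use installed core", installed_core)

print(core_to_run((0, 0, 52), (0, 0, 52)))  # no upgrade triggered
print(core_to_run((0, 0, 52), (0, 0, 55)))  # automatic download triggered
```

This is why a client can sit on 0.0.52 indefinitely: until a work unit arrives whose minimum exceeds the installed version, nothing forces the download.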
How to provide enough information to get helpful support
Tell me and I forget. Teach me and I remember. Involve me and I learn.
bruce
Posts: 20824
Joined: Thu Nov 29, 2007 10:13 pm
Location: So. Cal.

Re: My Maxwells are getting 13000 & 13001, & failing

Post by bruce »

FAH can automatically update the FAHCore to a new version, but that's not the issue here. The NVIDIA driver version needs to be updated, and you have to do that yourself.
Breach
Posts: 204
Joined: Sat Mar 09, 2013 8:07 pm
Location: Brussels, Belgium

Re: My Maxwells are getting 13000 & 13001, & failing

Post by Breach »

7im wrote:
runpaint wrote:It's 0.0.52, I thought it updated automatically.
Yes, and no.

The work units that you fold each contain a "minimum required core version" setting. If you haven't folded any work units that require the newer .55 FAHCore version, then no upgrade was done. However, when you do fold your first work unit that requires the newer version, the client will download the newer version automatically. There are ways to induce the update, but that goes even further off the current topic.
Perhaps it's worth mentioning that the core version is also client-type specific: 0.0.55 is *not* an upgrade to 0.0.52 right now, but the beta version of the core.

With no flags, 0.0.52 would be downloaded and installed, e.g. in my case here: C:\ProgramData\FAHClient\cores\web.stanford.edu\~pande\Win32\AMD64\NVIDIA\Fermi\Core_17.fah
If you switch to beta, 0.0.55 would be downloaded and installed alongside it: C:\ProgramData\FAHClient\cores\web.stanford.edu\~pande\Win32\AMD64\NVIDIA\Fermi\beta\Core_17.fah
However, if you then remove the beta flag, the non-beta (0.0.52) version will be used again - not the beta one, even though it's newer and still there.
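The path selection above can be sketched as a tiny function. The directory layout mirrors Breach's examples; the selection function itself is invented for illustration and is not FAHClient internals.

```python
# Hedged sketch: the client keeps separate non-beta and beta copies of
# Core_17, and the client-type (beta) flag selects which directory is used.
# Paths copied from Breach's post above; the function name is illustrative.

BASE = r"C:\ProgramData\FAHClient\cores\web.stanford.edu\~pande\Win32\AMD64\NVIDIA\Fermi"

def core_path(beta_flag):
    """Return the Core_17.fah path the client would use for this flag."""
    if beta_flag:
        return BASE + r"\beta\Core_17.fah"   # 0.0.55 beta core
    return BASE + r"\Core_17.fah"            # 0.0.52 non-beta core

print(core_path(False))  # ends in ...\Fermi\Core_17.fah
print(core_path(True))   # ends in ...\Fermi\beta\Core_17.fah
```

Both files can coexist on disk, which is why removing the beta flag silently reverts to 0.0.52 rather than deleting anything.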
Windows 11 x64 / 5800X@5Ghz / 32GB DDR4 3800 CL14 / 4090 FE / Creative Titanium HD / Sennheiser 650 / PSU Corsair AX1200i
kyleb
Pande Group Member
Posts: 272
Joined: Fri Mar 12, 2010 8:53 pm

Re: My Maxwells are getting 13000 & 13001, & failing

Post by kyleb »

I've reverted projects 13000 and 13001 to beta only for now.

*EDIT* no longer in beta, see next post
kyleb
Pande Group Member
Posts: 272
Joined: Fri Mar 12, 2010 8:53 pm

Re: My Maxwells are getting 13000 & 13001, & failing

Post by kyleb »

OK, after looking further, I've restricted these projects to non-Maxwell cards.
Breach
Posts: 204
Joined: Sat Mar 09, 2013 8:07 pm
Location: Brussels, Belgium

Re: My Maxwells are getting 13000 & 13001, & failing

Post by Breach »

kyleb wrote:OK, after looking further, I've restricted these projects to non-Maxwell cards.
Aren't they supposed to work with Maxwells with 347.xx drivers?
Last edited by Breach on Sat Feb 28, 2015 7:47 pm, edited 1 time in total.
Windows 11 x64 / 5800X@5Ghz / 32GB DDR4 3800 CL14 / 4090 FE / Creative Titanium HD / Sennheiser 650 / PSU Corsair AX1200i
kyleb
Pande Group Member
Posts: 272
Joined: Fri Mar 12, 2010 8:53 pm

Re: My Maxwells are getting 13000 & 13001, & failing

Post by kyleb »

For now, I want to be conservative and eliminate any issues. We can figure out looser restrictions if needed in the future.
Gary480six
Posts: 93
Joined: Mon Jan 21, 2008 6:42 pm

Re: My Maxwells are getting 13000 & 13001, & failing

Post by Gary480six »

kyleb wrote:OK, after looking further, I've restricted these projects to non-Maxwell cards.
Noooooo.........

As others have stated, there is no problem with the P13000 and P13001 work units. It's just that many of us are still using older NVIDIA drivers for our GPU folding.

It was happening to me. My version 7 client had the dreaded FAILED message from crashing too many P13000 work units.

But on advice from Bruce and 7im, I updated my drivers to the latest available from NVIDIA - and the failures stopped.

Rather than pull the P13000s from the Maxwell cards, why not make an announcement about updating the drivers instead?
HayesK
Posts: 342
Joined: Sun Feb 22, 2009 4:23 pm
Hardware configuration: hardware folding: 24 GPUs (8-GTX980Ti, 2-GTX1060, 2-GTX1070, 6-1080Ti, 2-1660Ti, 4-2070 Super)
hardware idle: 7-2600K, 1-i7-950, 2-i7-930, 2-i7-920, 3-i7-860, 2-Q9450, 4-L5640, 1-Q9550, 2-GTS450, 2-GTX550Ti, 3-GTX560Ti, 3-GTX650Ti, 11-GTX650Ti-Boost, 4-GTX660Ti, 2-GTX670, 6-GTX750Ti, 7-GTX970
Location: La Porte, Texas

Re: My Maxwells are getting 13000 & 13001, & failing

Post by HayesK »

Completed 72 P13000/13001 work units on my GTX750Ti cards since January 25; not aware of any failures.
Linux client 7.44, NVIDIA 346.22 (CUDA 5.0, CUDA driver 7000), Ubuntu 14.04.

HFM benchmark data below:

Code:

 Project ID: 13000
 Core: ZETA
 Credit: 17123
 Frames: 100

 Name: F63-P67A-i2600K-6C-4.6-1866+2xGTX750Ti-U1404-V744 Slot 01
 Number of Frames Observed: 300
 Min. Time / Frame : 00:12:20 - 67,970 PPD
 Avg. Time / Frame : 00:12:24 - 67,423 PPD

 Name: F63-P67A-i2600K-6C-4.6-1866+2xGTX750Ti-U1404-V744 Slot 02
 Number of Frames Observed: 166
 Min. Time / Frame : 00:12:38 - 65,564 PPD
 Avg. Time / Frame : 00:12:42 - 65,048 PPD

 Name: F64-P8P67-i2600K-8C-4.3+2x750Ti-1600-U1404-V744 Slot 01
 Number of Frames Observed: 300
 Min. Time / Frame : 00:12:20 - 67,970 PPD
 Avg. Time / Frame : 00:12:26 - 67,152 PPD

 Name: F64-P8P67-i2600K-8C-4.3+2x750Ti-1600-U1404-V744 Slot 02
 Number of Frames Observed: 300
 Min. Time / Frame : 00:12:27 - 67,017 PPD
 Avg. Time / Frame : 00:12:34 - 66,086 PPD

 Name: F65-P8P67-i2600K-6C-4.5+2x750Ti-1600-U1404-V744 Slot 01
 Number of Frames Observed: 300
 Min. Time / Frame : 00:12:17 - 68,386 PPD
 Avg. Time / Frame : 00:12:23 - 67,559 PPD

 Name: F65-P8P67-i2600K-6C-4.5+2x750Ti-1600-U1404-V744 Slot 02
 Number of Frames Observed: 300
 Min. Time / Frame : 00:12:20 - 67,970 PPD
 Avg. Time / Frame : 00:12:25 - 67,287 PPD

Code:

 Project ID: 13001
 Core: ZETA
 Credit: 17123
 Frames: 100

 Name: F63-P67A-i2600K-6C-4.6-1866+2xGTX750Ti-U1404-V744 Slot 01
 Number of Frames Observed: 300
 Min. Time / Frame : 00:12:19 - 68,109 PPD
 Avg. Time / Frame : 00:12:34 - 66,086 PPD

 Name: F63-P67A-i2600K-6C-4.6-1866+2xGTX750Ti-U1404-V744 Slot 02
 Number of Frames Observed: 300
 Min. Time / Frame : 00:12:37 - 65,694 PPD
 Avg. Time / Frame : 00:12:42 - 65,048 PPD

 Name: F64-P8P67-i2600K-8C-4.3+2x750Ti-1600-U1404-V744 Slot 01
 Number of Frames Observed: 256
 Min. Time / Frame : 00:12:21 - 67,833 PPD
 Avg. Time / Frame : 00:12:26 - 67,152 PPD

 Name: F64-P8P67-i2600K-8C-4.3+2x750Ti-1600-U1404-V744 Slot 02
 Number of Frames Observed: 300
 Min. Time / Frame : 00:12:28 - 66,883 PPD
 Avg. Time / Frame : 00:12:37 - 65,694 PPD

 Name: F65-P8P67-i2600K-6C-4.5+2x750Ti-1600-U1404-V744 Slot 01
 Number of Frames Observed: 300
 Min. Time / Frame : 00:12:17 - 68,386 PPD
 Avg. Time / Frame : 00:12:22 - 67,696 PPD

 Name: F65-P8P67-i2600K-6C-4.5+2x750Ti-1600-U1404-V744 Slot 02
 Number of Frames Observed: 300
 Min. Time / Frame : 00:12:19 - 68,109 PPD
 Avg. Time / Frame : 00:12:25 - 67,287 PPD
folding for OCF T32
<= 10-GPU ( 8-GTX980Ti, 2-RTX2070Super ) as HayesK =>
<= 24-GPU ( 3-650TiBoost, 1-660Ti, 3-750Ti, 1-960m, 4-970, 2-1060, 2-1070, 6-1080Ti, 2-1660Ti, 2-2070Super )
as HayesK_ALL_18SjyNbF8VdXaNAFCVfG4rAHUyvtdmoFvX =>