GPU server status 171.67.108.21, 171.64.65.71,171.67.108.26
Moderators: Site Moderators, FAHC Science Team
-
- Pande Group Member
- Posts: 2058
- Joined: Fri Nov 30, 2007 6:25 am
- Location: Stanford
Re: GPU server status 171.67.108.21, 171.64.65.71,171.67.108.26
All good to hear. Sounds like this fix finally did it.
Prof. Vijay Pande, PhD
Departments of Chemistry, Structural Biology, and Computer Science
Chair, Biophysics
Director, Folding@home Distributed Computing Project
Stanford University
-
- Posts: 56
- Joined: Tue Jul 15, 2008 11:15 pm
- Hardware configuration: ASUS M3N-HT Deluxe, AMD 6400 dual-core 3.2 GHz, GeForce 9800 GTX C-760 M-1140 S-1900, 4 GB OCZ DDR
- Location: Missouri,USA
Re: GPU server status 171.67.108.21, 171.64.65.71,171.67.108.26
VijayPande wrote: All good to hear. Sounds like this fix finally did it.
Good job Sir(s)
Seems to all be working well....
thnx
-
- Posts: 138
- Joined: Mon Dec 24, 2007 11:18 pm
- Hardware configuration: UserNames: weedacres_gpu ...
- Location: Eastern Washington
Re: GPU server status 171.67.108.21, 171.64.65.71,171.67.108.26
My regards to you and your staff. I know from experience what you've been going through.
VijayPande wrote: All good to hear. Sounds like this fix finally did it.
I just switched one of my GPU clients over to my backup folder in the hope that I can move out the stuck work units. -send all and -send xx still fail. I'm hoping that autosend will pick them up.
added:
Well that did not work. When the first project completed it sent that WU in just fine, but it did not recognize the 7 completed work units sitting in the work folder.
Code: Select all
+ No unsent completed units remaining.
Last edited by weedacres on Sat Feb 20, 2010 6:58 pm, edited 1 time in total.
-
- Site Moderator
- Posts: 6986
- Joined: Wed Dec 23, 2009 9:33 am
- Hardware configuration: V7.6.21 -> Multi-purpose 24/7
Windows 10 64-bit
CPU:2/3/4/6 -> Intel i7-6700K
GPU:1 -> Nvidia GTX 1080 Ti
§
Retired:
2x Nvidia GTX 1070
Nvidia GTX 675M
Nvidia GTX 660 Ti
Nvidia GTX 650 SC
Nvidia GTX 260 896 MB SOC
Nvidia 9600GT 1 GB OC
Nvidia 9500M GS
Nvidia 8800GTS 320 MB
Intel Core i7-860
Intel Core i7-3840QM
Intel i3-3240
Intel Core 2 Duo E8200
Intel Core 2 Duo E6550
Intel Core 2 Duo T8300
Intel Pentium E5500
Intel Pentium E5400
- Location: Land Of The Long White Cloud
- Contact:
Re: GPU server status 171.67.108.21, 171.64.65.71,171.67.108.26
So far everything is going smoothly and I hope that's the last of any bugs we have seen.
Hats off to all the people involved, especially Dr. Vijay and Joe... hope you guys enjoy this weekend, you really deserve it.
ETA:
Now ↞ Very Soon ↔ Soon ↔ Soon-ish ↔ Not Soon ↠ End Of Time
Welcome To The F@H Support Forum Ӂ Troubleshooting Bad WUs Ӂ Troubleshooting Server Connectivity Issues
-
- Pande Group Member
- Posts: 2058
- Joined: Fri Nov 30, 2007 6:25 am
- Location: Stanford
Re: GPU server status 171.67.108.21, 171.64.65.71,171.67.108.26
That's great to hear. Sounds like we should still be around just in case, but hopefully it won't be another stressful weekend.
Prof. Vijay Pande, PhD
Departments of Chemistry, Structural Biology, and Computer Science
Chair, Biophysics
Director, Folding@home Distributed Computing Project
Stanford University
-
- Posts: 56
- Joined: Tue Jul 15, 2008 11:15 pm
- Hardware configuration: ASUS M3N-HT Deluxe, AMD 6400 dual-core 3.2 GHz, GeForce 9800 GTX C-760 M-1140 S-1900, 4 GB OCZ DDR
- Location: Missouri,USA
Re: GPU server status 171.67.108.21, 171.64.65.71,171.67.108.26
I have one that won't send, but I'm wondering if it's the project instead of the server.
I think this is the first of the newer projects (3469) I've gotten today, and it seems to be the first that won't send...?
Code: Select all
[19:43:29] *------------------------------*
[19:43:29] Folding@Home GPU Core
[19:43:29] Version 1.31 (Tue Sep 15 10:57:42 PDT 2009)
[19:43:29]
[19:43:29] Compiler : Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 14.00.50727.762 for 80x86
[19:43:29] Build host: amoeba
[19:43:29] Board Type: Nvidia
[19:43:29] Core :
[19:43:29] Preparing to commence simulation
[19:43:29] - Looking at optimizations...
[19:43:29] DeleteFrameFiles: successfully deleted file=work/wudata_00.ckp
[19:43:29] - Created dyn
[19:43:29] - Files status OK
[19:43:29] - Expanded 19326 -> 137900 (decompressed 713.5 percent)
[19:43:29] Called DecompressByteArray: compressed_data_size=19326 data_size=137900, decompressed_data_size=137900 diff=0
[19:43:29] - Digital signature verified
[19:43:29]
[19:43:29] Project: 3469 (Run 0, Clone 104, Gen 3)
[19:43:29]
[19:43:29] Assembly optimizations on if available.
[19:43:29] Entering M.D.
[19:43:35] Tpr hash work/wudata_00.tpr: 1054042935 241014301 3359274258 1601222210 227517600
[19:43:35]
[19:43:35] Calling fah_main args: 14 usage=100
[19:43:35]
[19:43:35] Working on Fs-peptide-GBSA
[19:43:35] Client config found, loading data.
[19:43:35] Starting GUI Server
[19:44:20] Completed 1%
[19:45:04] Completed 2%
[19:45:49] Completed 3%
[19:46:34] Completed 4%
[19:47:18] Completed 5%
[19:48:03] Completed 6%
[19:48:48] Completed 7%
[19:49:32] Completed 8%
[19:50:17] Completed 9%
[19:51:01] Completed 10%
[19:51:46] Completed 11%
[19:52:31] Completed 12%
[19:53:15] Completed 13%
[19:54:00] Completed 14%
[19:54:45] Completed 15%
[19:55:29] Completed 16%
[19:56:14] Completed 17%
[19:56:59] Completed 18%
[19:57:43] Completed 19%
[19:58:28] Completed 20%
[19:59:12] Completed 21%
[19:59:57] Completed 22%
[20:00:42] Completed 23%
[20:01:26] Completed 24%
[20:02:11] Completed 25%
[20:02:56] Completed 26%
[20:03:40] Completed 27%
[20:04:25] Completed 28%
[20:05:10] Completed 29%
[20:05:54] Completed 30%
[20:06:39] Completed 31%
[20:07:23] Completed 32%
[20:08:08] Completed 33%
[20:08:53] Completed 34%
[20:09:37] Completed 35%
[20:10:22] Completed 36%
[20:11:07] Completed 37%
[20:11:51] Completed 38%
[20:12:36] Completed 39%
[20:13:20] Completed 40%
[20:14:05] Completed 41%
[20:14:50] Completed 42%
[20:15:34] Completed 43%
[20:16:19] Completed 44%
[20:17:04] Completed 45%
[20:17:48] Completed 46%
[20:18:33] Completed 47%
[20:19:18] Completed 48%
[20:20:02] Completed 49%
[20:20:47] Completed 50%
[20:21:31] Completed 51%
[20:22:16] Completed 52%
[20:23:01] Completed 53%
[20:23:45] Completed 54%
[20:24:30] Completed 55%
[20:25:15] Completed 56%
[20:25:59] Completed 57%
[20:26:44] Completed 58%
[20:27:29] Completed 59%
[20:28:13] Completed 60%
[20:28:58] Completed 61%
[20:29:42] Completed 62%
[20:30:27] Completed 63%
[20:31:12] Completed 64%
[20:31:56] Completed 65%
[20:32:41] Completed 66%
[20:33:26] Completed 67%
[20:34:10] Completed 68%
[20:34:55] Completed 69%
[20:35:39] Completed 70%
[20:36:24] Completed 71%
[20:37:09] Completed 72%
[20:37:53] Completed 73%
[20:38:38] Completed 74%
[20:39:23] Completed 75%
[20:40:07] Completed 76%
[20:40:52] Completed 77%
[20:41:37] Completed 78%
[20:42:21] Completed 79%
[20:43:06] Completed 80%
[20:43:50] Completed 81%
[20:44:35] Completed 82%
[20:45:20] Completed 83%
[20:46:04] Completed 84%
[20:46:49] Completed 85%
[20:47:34] Completed 86%
[20:48:18] Completed 87%
[20:49:03] Completed 88%
[20:49:47] Completed 89%
[20:50:32] Completed 90%
[20:51:17] Completed 91%
[20:52:01] Completed 92%
[20:52:46] Completed 93%
[20:53:31] Completed 94%
[20:54:15] Completed 95%
[20:55:00] Completed 96%
[20:55:45] Completed 97%
[20:56:29] Completed 98%
[20:57:14] Completed 99%
[20:57:58] Completed 100%
[20:57:58] Successful run
[20:57:58] DynamicWrapper: Finished Work Unit: sleep=10000
[20:58:08] Reserved 65832 bytes for xtc file; Cosm status=0
[20:58:08] Allocated 65832 bytes for xtc file
[20:58:08] - Reading up to 65832 from "work/wudata_00.xtc": Read 65832
[20:58:08] Read 65832 bytes from xtc file; available packet space=786364632
[20:58:08] xtc file hash check passed.
[20:58:08] Reserved 6456 6456 786364632 bytes for arc file=<work/wudata_00.trr> Cosm status=0
[20:58:08] Allocated 6456 bytes for arc file
[20:58:08] - Reading up to 6456 from "work/wudata_00.trr": Read 6456
[20:58:08] Read 6456 bytes from arc file; available packet space=786358176
[20:58:08] trr file hash check passed.
[20:58:08] Allocated 560 bytes for edr file
[20:58:08] Read bedfile
[20:58:08] edr file hash check passed.
[20:58:08] Logfile not read.
[20:58:08] GuardedRun: success in DynamicWrapper
[20:58:08] GuardedRun: done
[20:58:08] Run: GuardedRun completed.
[20:58:09] + Opened results file
[20:58:09] - Writing 73360 bytes of core data to disk...
[20:58:09] Done: 72848 -> 69373 (compressed to 95.2 percent)
[20:58:09] ... Done.
[20:58:09] DeleteFrameFiles: successfully deleted file=work/wudata_00.ckp
[20:58:09] Shutting down core
[20:58:09]
[20:58:09] Folding@home Core Shutdown: FINISHED_UNIT
[20:58:13] CoreStatus = 64 (100)
[20:58:13] Sending work to server
[20:58:13] Project: 3469 (Run 0, Clone 104, Gen 3)
[20:58:13] + Attempting to send results [February 20 20:58:13 UTC]
[21:03:22] - Couldn't send HTTP request to server
[21:03:22] + Could not connect to Work Server (results)
[21:03:22] (171.67.108.21:8080)
[21:03:22] + Retrying using alternative port
[21:03:43] - Couldn't send HTTP request to server
[21:03:43] + Could not connect to Work Server (results)
[21:03:43] (171.67.108.21:80)
[21:03:43] - Error: Could not transmit unit 00 (completed February 20) to work server.
[21:03:43] Keeping unit 00 in queue.
[21:03:43] Project: 3469 (Run 0, Clone 104, Gen 3)
[21:03:43] + Attempting to send results [February 20 21:03:43 UTC]
[21:04:04] - Couldn't send HTTP request to server
[21:04:04] + Could not connect to Work Server (results)
[21:04:04] (171.67.108.21:8080)
[21:04:04] + Retrying using alternative port
[21:04:25] - Couldn't send HTTP request to server
[21:04:25] + Could not connect to Work Server (results)
[21:04:25] (171.67.108.21:80)
[21:04:25] - Error: Could not transmit unit 00 (completed February 20) to work server.
[21:04:25] + Attempting to send results [February 20 21:04:25 UTC]
[21:17:09] - Unknown packet returned from server, expected ACK for results
[21:17:09] Could not transmit unit 00 to Collection server; keeping in queue.
[21:17:09] Project: 3469 (Run 0, Clone 104, Gen 3)
[21:17:09] + Attempting to send results [February 20 21:17:09 UTC]
[21:20:18] - Couldn't send HTTP request to server
[21:20:18] + Could not connect to Work Server (results)
[21:20:18] (171.67.108.21:8080)
[21:20:18] + Retrying using alternative port
[21:20:39] - Couldn't send HTTP request to server
[21:20:39] + Could not connect to Work Server (results)
[21:20:39] (171.67.108.21:80)
[21:20:39] - Error: Could not transmit unit 00 (completed February 20) to work server.
[21:20:39] + Attempting to send results [February 20 21:20:39 UTC]
Folding@Home Client Shutdown.
--- Opening Log file [February 20 21:23:12 UTC]
# Windows GPU Console Edition #################################################
###############################################################################
Folding@Home Client Version 6.23
http://folding.stanford.edu
###############################################################################
###############################################################################
Launch directory: C:\Documents and Settings\Owner\Application Data\Folding@home-gpu
Arguments: -gpu 0
[21:23:12] - Ask before connecting: No
[21:23:12] - User name: stv911 (Team 4)
[21:23:12] - User ID: 71A3C4C479B58F8C
[21:23:12] - Machine ID: 2
[21:23:12]
[21:23:12] Loaded queue successfully.
[21:23:12] Initialization complete
[21:23:12] - Preparing to get new work unit...
[21:23:12] + Attempting to get work packet
[21:23:12] Project: 3469 (Run 0, Clone 104, Gen 3)
[21:23:12] + Attempting to send results [February 20 21:23:12 UTC]
[21:23:12] - Connecting to assignment server
[21:23:13] - Successful: assigned to (171.64.122.70).
[21:23:13] + News From Folding@Home: Welcome to Folding@Home
Loaded queue successfully.
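As an aside, the upload behavior visible in the log above (try the work server on port 8080, retry on the alternative port 80, and keep the unit in the queue if both fail) boils down to a simple fallback loop. The sketch below is illustrative only; the function and parameter names are hypothetical, since the real client code is not public.

```python
def try_send(connect, host, ports=(8080, 80)):
    """Attempt an upload on each port in turn. Return True on the first
    success, False if every port fails (the unit then stays in the queue).
    `connect` is any callable taking (host, port) and returning bool."""
    for port in ports:
        if connect(host, port):
            return True
        print("+ Could not connect to Work Server (results) (%s:%d)" % (host, port))
    print("- Error: could not transmit unit; keeping unit in queue.")
    return False
```

With this shape, a server outage simply exhausts the port list each attempt and the client retries the whole sequence later, which matches the repeating pattern in the log.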
-
- Posts: 138
- Joined: Mon Dec 24, 2007 11:18 pm
- Hardware configuration: UserNames: weedacres_gpu ...
- Location: Eastern Washington
Re: GPU server status 171.67.108.21, 171.64.65.71,171.67.108.26
Autosend does not pick them up either.
weedacres wrote: My regards to you and your staff. I know from experience what you've been going through.
VijayPande wrote: All good to hear. Sounds like this fix finally did it.
I just switched one of my gpu client over to my backup folder in the hopes that I can move out the stuck workunits. -send all and -send xx still fails. I'm hoping that autosend will pick them up.
added:
Well that did not work. When the first project completed it sent that WU in just fine, but did not recognize the 7 completed work units sitting in the work folder.
I haven't seen an autosend message yet, so I'll let it run and see what happens.
Code: Select all
+ No unsent completed units remaining.
Why would the client ignore wuresults_xx files that are in the work folder?
-
- Posts: 270
- Joined: Sun Dec 02, 2007 2:26 pm
- Hardware configuration: Folders: Intel C2D E6550 @ 3.150 GHz + GPU XFX 9800GTX+ @ 765 MHZ w. WinXP-GPU
AMD A2X64 3800+ @ stock + GPU XFX 9800GTX+ @ 775 MHZ w. WinXP-GPU
Main rig: an old Athlon Barton 2500+ @2.25 GHz & 2* 512 MB RAM Apacer, Radeon 9800Pro, WinXP SP3+ - Location: Belgium, near the International Sea-Port of Antwerp
Re: GPU server status 171.67.108.21, 171.64.65.71,171.67.108.26
weedacres wrote:
Autosend does not pick them up either.
Why would the client ignore wuresults_xx files that are in the work folder?
It would ignore those when they are not marked as finished/ready-to-upload in the queue.dat file.
You can use Qfix to fix the corrupted queue.dat file and restore it to the real status of the contents of the Work folder ...
http://foldingforum.org/viewtopic.php?f=8&t=191
Qfix for Windows can be found here: http://linuxminded.xs4all.nl/?target=so ... s.plc#qfix Use the 2nd item for use with a v6 client! (Windows/x86: qfix.exe (10.00 KB))
The article linked in the first URL explains the use of Qfix; it can also be used when the WU is completely finished, in the case of a corrupt list of WUs in the Work folder (either finished-unsent or deleted-sent/uploaded).
It will check the contents of the Work folder and correct the queue.dat info to mirror the content as-is ...
(The latter was used a lot in the SMP1 days, when queue.dat file corruption happened regularly. I used it myself.)
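The mechanism described above (the client only uploads results whose queue.dat entry says ready-to-upload, and Qfix rescans the Work folder to resynchronize the queue) can be sketched roughly as follows. This is a simplified, hypothetical model for illustration only: the status constants, the dict-shaped queue, and the function name are made up, and the real queue.dat is a packed binary file, not a dict.

```python
import os
import re

# Hypothetical, simplified status values; the real queue.dat layout differs.
READY_TO_UPLOAD = 1   # finished, result waiting to be sent
FINISHED_SENT = 2     # already uploaded/credited

def find_stuck_results(work_dir, queue):
    """Return queue indices that have a wuresults_XX.dat file on disk but
    are not marked ready-to-upload, so the client would never send them."""
    stuck = []
    for name in os.listdir(work_dir):
        m = re.match(r"wuresults_(\d\d)\.dat$", name)
        if not m:
            continue
        idx = int(m.group(1))
        if queue.get(idx) != READY_TO_UPLOAD:
            stuck.append(idx)
    return sorted(stuck)
```

This is the consistency check at the heart of a tool like Qfix: disk contents are treated as the ground truth and the queue metadata is flagged (or rewritten) to match.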
- stopped Linux SMP w. HT on [email protected] GHz
....................................
Folded since 10-06-04 till 09-2010
-
- Posts: 138
- Joined: Mon Dec 24, 2007 11:18 pm
- Hardware configuration: UserNames: weedacres_gpu ...
- Location: Eastern Washington
Re: GPU server status 171.67.108.21, 171.64.65.71,171.67.108.26
Thanks for the tip.
noorman wrote:
weedacres wrote:
Autosend does not pick them up either.
Why would the client ignore wuresults_xx files that are in the work folder?
It would ignore those when they are not marked as finished/ready-to-upload in the queue.dat file.
You can use Qfix to fix the corrupted queue.dat file and restore it to the real status of the contents of the Work folder ...
http://foldingforum.org/viewtopic.php?f=8&t=191
Qfix for Windows can be found here: http://linuxminded.xs4all.nl/?target=so ... s.plc#qfix Use the 2nd item for use with a v6 client! (Windows/x86: qfix.exe (10.00 KB))
The article linked in the first URL explains the use of Qfix; it can also be used when the WU is completely finished, in the case of a corrupt list of WUs in the Work folder (either finished-unsent or deleted-sent/uploaded).
It will check the contents of the Work folder and correct the queue.dat info to mirror the content as-is ...
(The latter was used a lot in the SMP1 days, when queue.dat file corruption happened regularly. I used it myself.)
I did try qfix last week when this started, but grabbed the 1st one, not the second. I just tried it and it does recognize the wuresults_xx files, which the 1st one didn't. I then tried -send all and -send xx, with the same results.
I heard from someone last week that qfix did not work on GPU clients. Have you been able to use it on GPU results files?
Re: GPU server status 171.67.108.21, 171.64.65.71,171.67.108.26
Well, so far so good here with the fixes; my farm is resuming normal service. I decided on a fresh install for a lot of my clients, since the work folders were full of junk thanks to the server issues. Probably lost thousands of points, but no matter.
Will be keeping an eye on things & hopefully it will need minimal intervention.
Teddy
-
- Posts: 270
- Joined: Sun Dec 02, 2007 2:26 pm
- Hardware configuration: Folders: Intel C2D E6550 @ 3.150 GHz + GPU XFX 9800GTX+ @ 765 MHZ w. WinXP-GPU
AMD A2X64 3800+ @ stock + GPU XFX 9800GTX+ @ 775 MHZ w. WinXP-GPU
Main rig: an old Athlon Barton 2500+ @2.25 GHz & 2* 512 MB RAM Apacer, Radeon 9800Pro, WinXP SP3+ - Location: Belgium, near the International Sea-Port of Antwerp
Re: GPU server status 171.67.108.21, 171.64.65.71,171.67.108.26
weedacres wrote:
Thanks for the tip.
noorman wrote:
weedacres wrote:
Autosend does not pick them up either.
Why would the client ignore wuresults_xx files that are in the work folder?
It would ignore those when they are not marked as finished/ready-to-upload in the queue.dat file.
You can use Qfix to fix the corrupted queue.dat file and restore it to the real status of the contents of the Work folder ...
http://foldingforum.org/viewtopic.php?f=8&t=191
Qfix for Windows can be found here: http://linuxminded.xs4all.nl/?target=so ... s.plc#qfix Use the 2nd item for use with a v6 client! (Windows/x86: qfix.exe (10.00 KB))
The article linked in the first URL explains the use of Qfix; it can also be used when the WU is completely finished, in the case of a corrupt list of WUs in the Work folder (either finished-unsent or deleted-sent/uploaded).
It will check the contents of the Work folder and correct the queue.dat info to mirror the content as-is ...
(The latter was used a lot in the SMP1 days, when queue.dat file corruption happened regularly. I used it myself.)
I did try qfix last week when this started, but grabbed the 1st one, not the second. I just tried it and it does recognize the wuresults_xx files, which the 1st one didn't. I then tried -send all and -send xx, with the same results.
I heard from someone last week that qfix did not work on GPU clients. Have you been able to use it on GPU results files?
I've never needed to use Qfix since those SMP1 days; I don't know if it works on the GPU2 client's queue.dat.
It would be a shame if it didn't, because it easily fixed a lot of things gone wrong and saved lots of work from the bin!
I'll have to check whether that's the case and whether something can be done about it!
- stopped Linux SMP w. HT on [email protected] GHz
....................................
Folded since 10-06-04 till 09-2010
-
- Posts: 136
- Joined: Wed May 27, 2009 4:48 pm
- Hardware configuration: Dell Studio 425 MTS-Core i7-920 c0 stock
evga SLI 3x o/c Core i7-920 d0 @ 3.9GHz + nVidia GTX275
Dell 5150 + nVidia 9800GT
Re: GPU server status 171.67.108.21, 171.64.65.71,171.67.108.26
I still have 10 WUs that I suspect were not uploaded, but they still give a "Server has already received unit" message after I tweaked the queue.dat file to mark the units with a wu_results_XX.dat file hanging around as ready-to-upload instead of finished.
Here is the list of WU's that I still have wu_results_xx.dat files for:
First GPU:
Code: Select all
Index 2: finished 445.00 pts (252.125 pt/hr) 122 X min speed
server: 171.67.108.21:8080; project: 3470
Folding: run 16, clone 51, generation 1; benchmark 0; misc: 500, 200, 11 (be)
issue: Sat Feb 13 21:20:36 2010; begin: Sat Feb 13 21:20:09 2010
end: Sat Feb 13 23:06:03 2010; due: Mon Feb 22 21:20:09 2010 (9 days)
--
Index 3: finished 445.00 pts (256.033 pt/hr) 124 X min speed
server: 171.67.108.21:8080; project: 3470
Folding: run 14, clone 112, generation 2; benchmark 0; misc: 500, 200, 11 (be)
issue: Sat Feb 13 23:06:53 2010; begin: Sat Feb 13 23:06:26 2010
end: Sun Feb 14 00:50:43 2010; due: Mon Feb 22 23:06:26 2010 (9 days)
--
Index 4: finished 783.00 pts (347.186 pt/hr) 266 X min speed
server: 171.67.108.21:8080; project: 5781
Folding: run 22, clone 982, generation 3; benchmark 0; misc: 500, 200, 11 (be)
issue: Sun Feb 14 00:51:32 2010; begin: Sun Feb 14 00:51:05 2010
end: Sun Feb 14 03:06:24 2010; due: Thu Mar 11 00:51:05 2010 (25 days)
--
Index 5: finished 783.00 pts (349.380 pt/hr) 268 X min speed
server: 171.67.108.21:8080; project: 5781
Folding: run 28, clone 199, generation 3; benchmark 0; misc: 500, 200, 11 (be)
issue: Sun Feb 14 03:06:47 2010; begin: Sun Feb 14 03:06:48 2010
end: Sun Feb 14 05:21:16 2010; due: Thu Mar 11 03:06:48 2010 (25 days)
--
Index 6: finished 783.00 pts (349.294 pt/hr) 268 X min speed
server: 171.67.108.21:8080; project: 5781
Folding: run 35, clone 554, generation 3; benchmark 0; misc: 500, 200, 11 (be)
issue: Sun Feb 14 05:21:39 2010; begin: Sun Feb 14 05:21:39 2010
end: Sun Feb 14 07:36:09 2010; due: Thu Mar 11 05:21:39 2010 (25 days)
--
Index 7: finished 783.00 pts (342.836 pt/hr) 263 X min speed
server: 171.67.108.21:8080; project: 5781
Folding: run 6, clone 539, generation 4; benchmark 0; misc: 500, 200, 11 (be)
issue: Sun Feb 14 07:36:32 2010; begin: Sun Feb 14 07:36:33 2010
end: Sun Feb 14 09:53:35 2010; due: Thu Mar 11 07:36:33 2010 (25 days)
--
Index 8: finished 783.00 pts (70.046 pt/hr) 53.7 X min speed
server: 171.67.108.21:8080; project: 5781
Folding: run 13, clone 935, generation 4; benchmark 0; misc: 500, 200, 11 (be)
issue: Sun Feb 14 09:53:58 2010; begin: Sun Feb 14 09:53:59 2010
end: Sun Feb 14 21:04:41 2010; due: Thu Mar 11 09:53:59 2010 (25 days)
Second GPU:
Code: Select all
Index 8: finished 783.00 pts (200.142 pt/hr) 153 X min speed
server: 171.67.108.21:8080; project: 5781
Folding: run 15, clone 764, generation 3; benchmark 0; misc: 500, 200, 11 (be)
issue: Sat Feb 13 20:46:12 2010; begin: Sat Feb 13 20:46:18 2010
end: Sun Feb 14 00:41:02 2010; due: Wed Mar 10 20:46:18 2010 (25 days)
preferred: Sun Feb 28 20:46:18 2010 (15 days)
--
Index 9: finished 783.00 pts (199.194 pt/hr) 153 X min speed
server: 171.67.108.21:8080; project: 5781
Folding: run 22, clone 303, generation 3; benchmark 0; misc: 500, 200, 11 (be)
issue: Sun Feb 14 00:41:20 2010; begin: Sun Feb 14 00:41:26 2010
end: Sun Feb 14 04:37:17 2010; due: Thu Mar 11 00:41:26 2010 (25 days)
preferred: Mon Mar 01 00:41:26 2010 (15 days)
--
Index 0: finished 783.00 pts (199.802 pt/hr) 153 X min speed
server: 171.67.108.21:8080; project: 5781
Folding: run 33, clone 15, generation 3; benchmark 0; misc: 500, 200, 11 (be)
issue: Sun Feb 14 04:37:34 2010; begin: Sun Feb 14 04:37:41 2010
end: Sun Feb 14 08:32:49 2010; due: Thu Mar 11 04:37:41 2010 (25 days)
preferred: Mon Mar 01 04:37:41 2010 (15 days)
--
Index 1: finished 783.00 pts (50.108 pt/hr) 38.4 X min speed
server: 171.67.108.21:8080; project: 5781
Folding: run 9, clone 612, generation 4; benchmark 0; misc: 500, 200, 11 (be)
issue: Sun Feb 14 08:33:06 2010; begin: Sun Feb 14 08:33:12 2010
end: Mon Feb 15 00:10:46 2010; due: Thu Mar 11 08:33:12 2010 (25 days)
preferred: Mon Mar 01 08:33:12 2010 (15 days)
Is there any hope:
1. If I can find out if they are actually marked/uploaded in the system as noted
2. If I was the one that uploaded them
3. Did I get any credit for them if #1 & #2 are true
4. If not, how I can upload them to make #3 happen
Thanks,
Dan
PS: Here is a quick and dirty list of the WUs referenced above:
Code: Select all
P3470, r16, c51, g1
P3470, r14, c112, g2
P5781, r22, c982, g3
P5781, r28, c199, g3
P5781, r35, c554, g3
P5781, r6, c539, g4
P5781, r13, c935, g4
P5781, r15, c764, g3
P5781, r22, c303, g3
P5781, r33, c15, g3
P5781, r9, c612, g4
Last edited by DrSpalding on Sun Feb 21, 2010 1:40 am, edited 1 time in total.
Not a real doctor, I just play one on the 'net!
Re: GPU server status 171.67.108.21, 171.64.65.71,171.67.108.26
Could you please explain how you tweaked queue.dat?
DrSpalding wrote: after tweaking the queue.dat file to mark the units with a wu_results_XX.dat file hanging around as ready to upload instead of finished.
-
- Posts: 136
- Joined: Wed May 27, 2009 4:48 pm
- Hardware configuration: Dell Studio 425 MTS-Core i7-920 c0 stock
evga SLI 3x o/c Core i7-920 d0 @ 3.9GHz + nVidia GTX275
Dell 5150 + nVidia 9800GT
Re: GPU server status 171.67.108.21, 171.64.65.71,171.67.108.26
bollix47 wrote: Could you please explain how you tweaked queue.dat?
DrSpalding wrote: after tweaking the queue.dat file to mark the units with a wu_results_XX.dat file hanging around as ready to upload instead of finished.
I got the source to qd (qd.c), figured out which flag to tweak, and looked at other queue entries that were done processing and waiting to upload.
Edit: and a hex dump from hd, plus an old binary editor I happen to have.
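For illustration, a tweak of that general shape might look like the sketch below. Every offset and constant here is made up; the real entry layout has to be taken from qd.c, and writing the wrong byte will corrupt the queue, so always work on a backup copy.

```python
import shutil
import struct

# Hypothetical layout for illustration only: assume each queue entry is
# ENTRY_SIZE bytes and a 4-byte little-endian status word sits at the
# start of the entry. The real offsets must be read out of qd.c.
HEADER_SIZE = 4           # made-up queue header size
ENTRY_SIZE = 512          # made-up entry size
STATUS_OFFSET = 0         # made-up offset of the status field
STATUS_READY = 1          # made-up "finished, ready to upload" value

def mark_ready(path, index):
    """Rewrite the status word of entry `index` to STATUS_READY,
    keeping a .bak copy of the original file first."""
    shutil.copyfile(path, path + ".bak")
    pos = HEADER_SIZE + index * ENTRY_SIZE + STATUS_OFFSET
    with open(path, "r+b") as f:
        f.seek(pos)
        f.write(struct.pack("<I", STATUS_READY))
```

The point of the sketch is the method, not the numbers: find the per-entry status field from the qd source, compare against an entry that is known to be in the ready-to-upload state, and patch only those bytes.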
Last edited by DrSpalding on Sun Feb 21, 2010 1:49 am, edited 1 time in total.
Not a real doctor, I just play one on the 'net!
-
- Posts: 10179
- Joined: Thu Nov 29, 2007 4:30 pm
- Hardware configuration: Intel i7-4770K @ 4.5 GHz, 16 GB DDR3-2133 Corsair Vengence (black/red), EVGA GTX 760 @ 1200 MHz, on an Asus Maximus VI Hero MB (black/red), in a blacked out Antec P280 Tower, with a Xigmatek Night Hawk (black) HSF, Seasonic 760w Platinum (black case, sleeves, wires), 4 SilenX 120mm Case fans with silicon fan gaskets and silicon mounts (all black), a 512GB Samsung SSD (black), and a 2TB Black Western Digital HD (silver/black).
- Location: Arizona
- Contact:
Re: GPU server status 171.67.108.21, 171.64.65.71,171.67.108.26
The qfix tool referenced above is one way.
How to provide enough information to get helpful support
Tell me and I forget. Teach me and I remember. Involve me and I learn.