108.21 not accepting - then pointed to CS6


noorman
Posts: 270
Joined: Sun Dec 02, 2007 2:26 pm
Hardware configuration: Folders: Intel C2D E6550 @ 3.150 GHz + GPU XFX 9800GTX+ @ 765 MHZ w. WinXP-GPU
AMD A2X64 3800+ @ stock + GPU XFX 9800GTX+ @ 775 MHZ w. WinXP-GPU
Main rig: an old Athlon Barton 2500+ @2.25 GHz & 2* 512 MB RAM Apacer, Radeon 9800Pro, WinXP SP3+
Location: Belgium, near the International Sea-Port of Antwerp

108.21 not accepting - then pointed to CS6

Post by noorman »

.

Code:

[14:10:00] Completed 98%
[14:11:24] Completed 99%
[14:11:59] - Autosending finished units... [April 6 14:11:59 UTC]
[14:11:59] Trying to send all finished work units
[14:11:59] + No unsent completed units remaining.
[14:11:59] - Autosend completed
[14:11:59] + Working...
[14:12:48] Completed 100%
[14:12:48] Successful run
[14:12:48] DynamicWrapper: Finished Work Unit: sleep=10000
[14:12:58] Reserved 109420 bytes for xtc file; Cosm status=0
[14:12:58] Allocated 109420 bytes for xtc file
[14:12:58] - Reading up to 109420 from "work/wudata_04.xtc": Read 109420
[14:12:58] Read 109420 bytes from xtc file; available packet space=786321044
[14:12:58] xtc file hash check passed.
[14:12:58] Reserved 21912 21912 786321044 bytes for arc file=<work/wudata_04.trr> Cosm status=0
[14:12:58] Allocated 21912 bytes for arc file
[14:12:58] - Reading up to 21912 from "work/wudata_04.trr": Read 21912
[14:12:58] Read 21912 bytes from arc file; available packet space=786299132
[14:12:58] trr file hash check passed.
[14:12:58] Allocated 560 bytes for edr file
[14:12:58] Read bedfile
[14:12:58] edr file hash check passed.
[14:12:58] Logfile not read.
[14:12:58] GuardedRun: success in DynamicWrapper
[14:12:58] GuardedRun: done
[14:12:58] Run: GuardedRun completed.
[14:12:59] + Opened results file
[14:12:59] - Writing 132404 bytes of core data to disk...
[14:12:59] Done: 131892 -> 130899 (compressed to 99.2 percent)
[14:12:59]   ... Done.
[14:12:59] DeleteFrameFiles: successfully deleted file=work/wudata_04.ckp
[14:12:59] Shutting down core 
[14:12:59] 
[14:12:59] Folding@home Core Shutdown: FINISHED_UNIT
[14:13:02] CoreStatus = 64 (100)
[14:13:02] Unit 4 finished with 99 percent of time to deadline remaining.
[14:13:02] Updated performance fraction: 0.987387
[14:13:02] Sending work to server
[14:13:02] Project: 10503 (Run 328, Clone 97, Gen 0)


[14:13:02] + Attempting to send results [April 6 14:13:02 UTC]
[14:13:02] - Reading file work/wuresults_04.dat from core
[14:13:02]   (Read 131411 bytes from disk)
[14:13:02] Connecting to http://171.67.108.21:8080/
[14:13:04] Posted data.
[14:13:04] Initial: 0000; - Uploaded at ~64 kB/s
[14:13:04] - Averaged speed for that direction ~52 kB/s
[14:13:04] - Server does not have record of this unit. Will try again later.
[14:13:04] - Error: Could not transmit unit 04 (completed April 6) to work server.
[14:13:04] - 1 failed uploads of this unit.
[14:13:04]   Keeping unit 04 in queue.
[14:13:04] Trying to send all finished work units
[14:13:04] Project: 10503 (Run 328, Clone 97, Gen 0)


[14:13:04] + Attempting to send results [April 6 14:13:04 UTC]
[14:13:04] - Reading file work/wuresults_04.dat from core
[14:13:04]   (Read 131411 bytes from disk)
[14:13:04] Connecting to http://171.67.108.21:8080/
[14:13:08] Posted data.
[14:13:08] Initial: 0000; - Uploaded at ~32 kB/s
[14:13:08] - Averaged speed for that direction ~48 kB/s
[14:13:08] - Server does not have record of this unit. Will try again later.
[14:13:08] - Error: Could not transmit unit 04 (completed April 6) to work server.
[14:13:08] - 2 failed uploads of this unit.


[14:13:08] + Attempting to send results [April 6 14:13:08 UTC]
[14:13:08] - Reading file work/wuresults_04.dat from core
[14:13:08]   (Read 131411 bytes from disk)
[14:13:08] Connecting to http://171.67.108.26:8080/
[14:13:09] - Couldn't send HTTP request to server
[14:13:09] + Could not connect to Work Server (results)
[14:13:09]     (171.67.108.26:8080)
[14:13:09] + Retrying using alternative port
[14:13:09] Connecting to http://171.67.108.26:80/
[14:13:10] - Couldn't send HTTP request to server
[14:13:10]   (Got status 503)
[14:13:10] + Could not connect to Work Server (results)
[14:13:10]     (171.67.108.26:80)
[14:13:10]   Could not transmit unit 04 to Collection server; keeping in queue.
[14:13:10] + Sent 0 of 1 completed units to the server
[14:13:10] - Preparing to get new work unit...
[14:13:10] + Attempting to get work packet
[14:13:10] - Will indicate memory of 2046 MB
[14:13:10] - Connecting to assignment server
[14:13:10] Connecting to http://assign-GPU.stanford.edu:8080/
[14:13:11] Posted data.
[14:13:11] Initial: 40AB; - Successful: assigned to (171.64.65.71).
[14:13:11] + News From Folding@Home: Welcome to Folding@Home
[14:13:11] Loaded queue successfully.
[14:13:11] Connecting to http://171.64.65.71:8080/
[14:13:12] Posted data.
[14:13:12] Initial: 0000; - Receiving payload (expected size: 89109)
[14:13:12] Conversation time very short, giving reduced weight in bandwidth avg
[14:13:12] - Downloaded at ~174 kB/s
[14:13:12] - Averaged speed for that direction ~82 kB/s
[14:13:12] + Received work.
[14:13:12] Trying to send all finished work units
[14:13:12] Project: 10503 (Run 328, Clone 97, Gen 0)


[14:13:12] + Attempting to send results [April 6 14:13:12 UTC]
[14:13:12] - Reading file work/wuresults_04.dat from core
[14:13:12]   (Read 131411 bytes from disk)
[14:13:12] Connecting to http://171.67.108.21:8080/
[14:13:14] Posted data.
[14:13:14] Initial: 0000; - Uploaded at ~64 kB/s
[14:13:14] - Averaged speed for that direction ~51 kB/s
[14:13:14] - Server does not have record of this unit. Will try again later.
[14:13:14] - Error: Could not transmit unit 04 (completed April 6) to work server.
[14:13:14] - 3 failed uploads of this unit.


[14:13:14] + Attempting to send results [April 6 14:13:14 UTC]
[14:13:14] - Reading file work/wuresults_04.dat from core
[14:13:14]   (Read 131411 bytes from disk)
[14:13:14] Connecting to http://171.67.108.26:8080/
[14:13:16] - Couldn't send HTTP request to server
[14:13:16] + Could not connect to Work Server (results)
[14:13:16]     (171.67.108.26:8080)
[14:13:16] + Retrying using alternative port
[14:13:16] Connecting to http://171.67.108.26:80/
[14:13:16] - Couldn't send HTTP request to server
[14:13:16]   (Got status 503)
[14:13:16] + Could not connect to Work Server (results)
[14:13:16]     (171.67.108.26:80)
[14:13:16]   Could not transmit unit 04 to Collection server; keeping in queue.
[14:13:16] + Sent 0 of 1 completed units to the server
[14:13:16] + Closed connections
[14:13:16] 
[14:13:16] + Processing work unit
[14:13:16] Core required: FahCore_11.exe
[14:13:16] Core found.
[14:13:16] Working on queue slot 05 [April 6 14:13:16 UTC]
[14:13:16] + Working ...
[14:13:16] - Calling '.\FahCore_11.exe -dir work/ -suffix 05 -checkpoint 15 -verbose -lifeline 1908 -version 623'

[14:13:16] 
[14:13:16] *------------------------------*
[14:13:16] Folding@Home GPU Core
[14:13:16] Version 1.31 (Tue Sep 15 10:57:42 PDT 2009)
[14:13:16] 
[14:13:16] Compiler  : Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 14.00.50727.762 for 80x86 
[14:13:16] Build host: amoeba
[14:13:16] Board Type: Nvidia
[14:13:16] Core      : 
[14:13:16] Preparing to commence simulation
[14:13:16] - Looking at optimizations...
[14:13:16] DeleteFrameFiles: successfully deleted file=work/wudata_05.ckp
[14:13:16] - Created dyn
[14:13:16] - Files status OK
[14:13:16] - Expanded 88597 -> 447307 (decompressed 504.8 percent)
[14:13:16] Called DecompressByteArray: compressed_data_size=88597 data_size=447307, decompressed_data_size=447307 diff=0
[14:13:16] - Digital signature verified
[14:13:16] 
[14:13:16] Project: 10103 (Run 33, Clone 5, Gen 10)
[14:13:16] 
[14:13:16] Assembly optimizations on if available.
[14:13:16] Entering M.D.
[14:13:22] Tpr hash work/wudata_05.tpr:  3589874957 2720480723 1183591085 3488171980 3394489799
[14:13:22] 
[14:13:22] Calling fah_main args: 14 usage=100
[14:13:22] 
[14:13:23] Working on p10103_lambda_370K
[14:13:25] Client config found, loading data.
[14:13:25] Starting GUI Server
[14:14:57] Completed 1%
[14:16:29] Completed 2%
.

Pointing the client to CS6 doesn't help at all either :? :(

A joint register for the WS group could help point clients to 'active' CSes and would make it easy to take CS6 out of the loop (in this case).
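
A purely hypothetical sketch of what such a joint register might look like: a shared status table the WS group could publish, letting clients skip collection servers known to be down (like CS6 here). Nothing like this exists in the actual client; the mapping and names below are made up for illustration.

```python
# Hypothetical shared status register: True = server currently active.
# The entries are illustrative only, based on the log in this thread.
REGISTER = {
    "171.67.108.21": True,    # work server that at least answered
    "171.67.108.26": False,   # CS answering with 503, so marked inactive
}

def upload_targets(candidates):
    """Keep only the servers the register currently marks active."""
    return [host for host in candidates if REGISTER.get(host, False)]
```

With a register like this, a client would simply never waste retries on a server already flagged as out of the loop.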

.
- stopped Linux SMP w. HT on [email protected] GHz
....................................
Folded since 10-06-04 till 09-2010
bruce
Posts: 20824
Joined: Thu Nov 29, 2007 10:13 pm
Location: So. Cal.

Re: 108.21 not accepting - then pointed to CS6

Post by bruce »

noorman wrote:Pointing the client to CS6 doesn't help at all either :? :(

A joint register for the WS group could help point clients to 'active' CSes and would make it easy to take CS6 out of the loop (in this case).

.
You cannot "point" a client at a CS. The WU will only be accepted by a couple of servers and the WU knows which ones they are. This is not something you can do anything about. You need to wait until the Pande Group fixes the server(s) in question.

As to what changes in this structure might or might not happen in the future, I have no comment.
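
The behaviour bruce describes is exactly what the log above shows: the finished WU carries its own short list of acceptable servers, and the client tries each one on port 8080, then retries on port 80, before keeping the unit in the queue. A minimal sketch of that fallback loop (illustrative names only, not the actual client's code):

```python
# Stand-in for the set of (host, port) endpoints that would accept
# the upload; in this thread, effectively empty.
REACHABLE = set()

def try_upload(host, port, payload):
    """Pretend transport; the real client POSTs the result over HTTP."""
    if (host, port) not in REACHABLE:
        raise ConnectionError(f"could not reach {host}:{port}")
    return f"accepted by {host}:{port}"

def send_results(wu_servers, payload):
    """Try each server the WU will accept, on port 8080 then 80."""
    for host in wu_servers:              # the WU's own server list
        for port in (8080, 80):          # "Retrying using alternative port"
            try:
                return try_upload(host, port, payload)
            except ConnectionError:
                continue                 # fall through to next port/server
    return None                          # "Keeping unit 04 in queue."
```

The point being: the candidate list travels with the WU itself, so there is nothing the user can override from the client side.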
noorman
Posts: 270
Joined: Sun Dec 02, 2007 2:26 pm

Re: 108.21 not accepting - then pointed to CS6

Post by noorman »

bruce wrote:
noorman wrote:Pointing the client to CS6 doesn't help at all either :? :(

A joint register for the WS group could help point clients to 'active' CSes and would make it easy to take CS6 out of the loop (in this case).

.
You cannot "point" a client at a CS. The WU will only be accepted by a couple of servers and the WU knows which ones they are. This is not something you can do anything about. You need to wait until the Pande Group fixes the server(s) in question.

As to what changes in this structure might or might not happen in the future, I have no comment.
.

I might have put it the wrong way: I didn't mean that I myself could or should be able to point the client to send results here or there, but that 'the system' does the pointing, and didn't do a good job of it ...

You might pass on the suggestion; it might help PG in the long run.
I have had other suggestions implemented by PG before.

I would indeed expect the WU to return to the WS of origin or to an 'active', online CS. At the moment this is clearly not happening.


.
- stopped Linux SMP w. HT on [email protected] GHz
....................................
Folded since 10-06-04 till 09-2010