Last two WUs from 171.64.65.56 not credited
Moderators: Site Moderators, FAHC Science Team
My last two SMP-A2 WUs from server 171.64.65.56 have not been credited.
p2677-r28-c2-g73 at 4:58U, Jan 24 (11.5 hours ago)
p2677-r33-c9-g73 at 14:20U, Jan 23 (24 hours ago)
It is now folding p2669-r6-c97-g89 at 56%
User name: Martin_Q6600_VM (Team 96377)
Is anyone experiencing the same?
Project 2677: no Credits
Hi everyone,
I have the problem that some work units have not yet been credited. Could someone please check?
- 2677 (Run 0, Clone 48, Gen 75) [sent January 23 13:45:11 UTC]
- 2677 (Run 7, Clone 22, Gen 75) [sent January 24 00:30:13 UTC]
- 2677 (Run 10, Clone 81, Gen 75) [sent January 24 10:29:20 UTC]
Code:
Folding@Home Client Version 6.29
http://folding.stanford.edu
###############################################################################
###############################################################################
Launch directory: /XXXX/XXXXX
Executable: ./fah6
Arguments: -smp -local -verbosity 9
[17:17:39] - Ask before connecting: No
[17:17:39] - User name: JayxG (Team 70335)
[17:17:39] - User ID: 3694F06933XXXXXX
[17:17:39] - Machine ID: 1
[17:17:39]
[17:17:39] Loaded queue successfully.
[17:17:39] - Preparing to get new work unit...
... (project has been credited) ...
[03:26:51] - Connecting to assignment server
[03:26:51] Connecting to http://assign.stanford.edu:8080/
[03:26:53] Posted data.
[03:26:53] Initial: 40AB; - Successful: assigned to (171.64.65.56).
[03:26:53] + News From Folding@Home: Welcome to Folding@Home
[03:26:53] Loaded queue successfully.
[03:26:53] Connecting to http://171.64.65.56:8080/
[03:26:59] Posted data.
[03:27:00] Initial: 0000; - Receiving payload (expected size: 4846207)
[03:27:09] - Downloaded at ~525 kB/s
[03:27:09] - Averaged speed for that direction ~393 kB/s
[03:27:09] + Received work.
[03:27:09] Trying to send all finished work units
[03:27:09] + No unsent completed units remaining.
[03:27:09] + Closed connections
[03:27:09]
[03:27:09] + Processing work unit
[03:27:09] Core required: FahCore_a2.exe
[03:27:09] Core found.
[03:27:09] Working on queue slot 03 [January 23 03:27:09 UTC]
[03:27:09] + Working ...
[03:27:09] - Calling './mpiexec -np 4 -host 127.0.0.1 ./FahCore_a2.exe -dir work/ -nice 19 -suffix 03 -checkpoint 15 -verbose -lifeline 4597 -version 629'
[03:27:09]
[03:27:09] *------------------------------*
[03:27:09] Folding@Home Gromacs SMP Core
[03:27:09] Version 2.10 (Sun Aug 30 03:43:28 CEST 2009)
[03:27:09]
[03:27:09] Preparing to commence simulation
[03:27:09] - Ensuring status. Please wait.
[03:27:10] Called DecompressByteArray: compressed_data_size=4845695 data_size=24027949, decompressed_data_size=24027949 diff=0
[03:27:10] - Digital signature verified
[03:27:10]
[03:27:10] Project: 2677 (Run 0, Clone 48, Gen 75)
[03:27:10]
[03:27:10] Assembly optimizations on if available.
[03:27:10] Entering M.D.
[03:27:20] (Run 0, Clone 48, Gen 75)
[03:27:20]
[03:27:20] Entering M.D.
[03:33:18] Completed 2500 out of 250000 steps (1%)
[03:39:05] Completed 5000 out of 250000 steps (2%)
[03:44:52] Completed 7500 out of 250000 steps (3%)
...
[13:28:38] Completed 245000 out of 250000 steps (98%)
[13:35:08] Completed 247500 out of 250000 steps (99%)
[13:41:38] Completed 250000 out of 250000 steps (100%)
[13:41:40] DynamicWrapper: Finished Work Unit: sleep=10000
[13:41:50]
[13:41:50] Finished Work Unit:
[13:41:50] - Reading up to 21177360 from "work/wudata_03.trr": Read 21177360
[13:41:50] trr file hash check passed.
[13:41:50] - Reading up to 27141264 from "work/wudata_03.xtc": Read 27141264
[13:41:51] xtc file hash check passed.
[13:41:51] edr file hash check passed.
[13:41:51] logfile size: 186188
[13:41:51] Leaving Run
[13:41:52] - Writing 48654948 bytes of core data to disk...
[13:41:54] ... Done.
[13:42:02] - Shutting down core
[13:42:02]
[13:42:02] Folding@home Core Shutdown: FINISHED_UNIT
[13:45:11] CoreStatus = 64 (100)
[13:45:11] Unit 3 finished with 86 percent of time to deadline remaining.
[13:45:11] Updated performance fraction: 0.860244
[13:45:11] Sending work to server
[13:45:11] Project: 2677 (Run 0, Clone 48, Gen 75)
[13:45:11] + Attempting to send results [January 23 13:45:11 UTC]
[13:45:11] - Reading file work/wuresults_03.dat from core
[13:45:12] (Read 48654948 bytes from disk)
[13:45:12] Connecting to http://171.64.65.56:8080/
[13:53:29] Posted data.
[13:53:29] Initial: 0000; - Uploaded at ~95 kB/s
[13:53:31] - Averaged speed for that direction ~110 kB/s
[13:53:31] + Results successfully sent
[13:53:31] Thank you for your contribution to Folding@Home.
[13:53:31] + Number of Units Completed: 144
Code:
[13:53:36] Connecting to http://assign.stanford.edu:8080/
[13:53:37] Posted data.
[13:53:37] Initial: 40AB; - Successful: assigned to (171.64.65.56).
[13:53:37] + News From Folding@Home: Welcome to Folding@Home
[13:53:37] Loaded queue successfully.
[13:53:37] Connecting to http://171.64.65.56:8080/
[13:53:45] Posted data.
[13:53:45] Initial: 0000; - Receiving payload (expected size: 4841237)
[13:54:18] - Downloaded at ~143 kB/s
[13:54:18] - Averaged speed for that direction ~330 kB/s
[13:54:18] + Received work.
[13:54:18] Trying to send all finished work units
[13:54:18] + No unsent completed units remaining.
[13:54:18] + Closed connections
[13:54:18]
[13:54:18] + Processing work unit
[13:54:18] Core required: FahCore_a2.exe
[13:54:18] Core found.
[13:54:18] Working on queue slot 04 [January 23 13:54:18 UTC]
[13:54:18] + Working ...
[13:54:18] - Calling './mpiexec -np 4 -host 127.0.0.1 ./FahCore_a2.exe -dir work/ -nice 19 -suffix 04 -checkpoint 15 -verbose -lifeline 4597 -version 629'
[13:54:18]
[13:54:18] *------------------------------*
[13:54:18] Folding@Home Gromacs SMP Core
[13:54:18] Version 2.10 (Sun Aug 30 03:43:28 CEST 2009)
[13:54:18]
[13:54:18] Preparing to commence simulation
[13:54:18] - Ensuring status. Please wait.
[13:54:19] Called DecompressByteArray: compressed_data_size=4840725 data_size=24039181, decompressed_data_size=24039181 diff=0
[13:54:19] - Digital signature verified
[13:54:19]
[13:54:19] Project: 2677 (Run 7, Clone 22, Gen 75)
[13:54:19]
[13:54:19] Assembly optimizations on if available.
[13:54:19] Entering M.D.
[13:54:29] (Run 7, Clone 22, Gen 75)
[13:54:29]
[13:54:29] Entering M.D.
[14:01:12] Completed 2500 out of 250000 steps (1%)
[14:07:39] Completed 5000 out of 250000 steps (2%)
[14:13:58] Completed 7500 out of 250000 steps (3%)
...
[00:15:00] Completed 245000 out of 250000 steps (98%)
[00:20:51] Completed 247500 out of 250000 steps (99%)
[00:26:39] Completed 250000 out of 250000 steps (100%)
[00:26:40] DynamicWrapper: Finished Work Unit: sleep=10000
[00:26:50]
[00:26:50] Finished Work Unit:
[00:26:50] - Reading up to 21189024 from "work/wudata_04.trr": Read 21189024
[00:26:50] trr file hash check passed.
[00:26:50] - Reading up to 27157340 from "work/wudata_04.xtc": Read 27157340
[00:26:50] xtc file hash check passed.
[00:26:50] edr file hash check passed.
[00:26:50] logfile size: 186913
[00:26:50] Leaving Run
[00:26:52] - Writing 48683413 bytes of core data to disk...
[00:26:55] ... Done.
[00:27:03] - Shutting down core
[00:27:03]
[00:27:03] Folding@home Core Shutdown: FINISHED_UNIT
[00:30:13] CoreStatus = 64 (100)
[00:30:13] Unit 4 finished with 85 percent of time to deadline remaining.
[00:30:13] Updated performance fraction: 0.858383
[00:30:13] Sending work to server
[00:30:13] Project: 2677 (Run 7, Clone 22, Gen 75)
[00:30:13] + Attempting to send results [January 24 00:30:13 UTC]
[00:30:13] - Reading file work/wuresults_04.dat from core
[00:30:14] (Read 48683413 bytes from disk)
[00:30:14] Connecting to http://171.64.65.56:8080/
[00:36:45] Posted data.
[00:36:45] Initial: 0000; - Uploaded at ~121 kB/s
[00:36:46] - Averaged speed for that direction ~113 kB/s
[00:36:46] + Results successfully sent
[00:36:46] Thank you for your contribution to Folding@Home.
[00:36:46] + Number of Units Completed: 145
Code:
[00:39:00] Connecting to http://assign.stanford.edu:8080/
[00:39:01] Posted data.
[00:39:01] Initial: 40AB; - Successful: assigned to (171.64.65.56).
[00:39:01] + News From Folding@Home: Welcome to Folding@Home
[00:39:01] Loaded queue successfully.
[00:39:01] Connecting to http://171.64.65.56:8080/
[00:39:08] Posted data.
[00:39:08] Initial: 0000; - Receiving payload (expected size: 4844288)
[00:39:21] - Downloaded at ~363 kB/s
[00:39:21] - Averaged speed for that direction ~374 kB/s
[00:39:21] + Received work.
[00:39:21] + Closed connections
[00:39:21]
[00:39:21] + Processing work unit
[00:39:21] Core required: FahCore_a2.exe
[00:39:21] Core found.
[00:39:21] Working on queue slot 05 [January 24 00:39:21 UTC]
[00:39:21] + Working ...
[00:39:21] - Calling './mpiexec -np 4 -host 127.0.0.1 ./FahCore_a2.exe -dir work/ -nice 19 -suffix 05 -checkpoint 15 -verbose -lifeline 23964 -version 629'
[00:39:22]
[00:39:22] *------------------------------*
[00:39:22] Folding@Home Gromacs SMP Core
[00:39:22] Version 2.10 (Sun Aug 30 03:43:28 CEST 2009)
[00:39:22]
[00:39:22] Preparing to commence simulation
[00:39:22] - Ensuring status. Please wait.
[00:39:23] Called DecompressByteArray: compressed_data_size=4843776 data_size=24028029, decompressed_data_size=24028029 diff=0
[00:39:23] - Digital signature verified
[00:39:23]
[00:39:23] Project: 2677 (Run 10, Clone 81, Gen 75)
[00:39:23]
[00:39:23] Assembly optimizations on if available.
[00:39:23] Entering M.D.
[00:39:33] (Run 10, Clone 81, Gen 75)
[00:39:33]
[00:39:33] Entering M.D.
[00:45:31] Completed 2500 out of 250000 steps (1%)
[00:51:15] Completed 5000 out of 250000 steps (2%)
[00:57:03] Completed 7500 out of 250000 steps (3%)
...
[10:14:20] Completed 245000 out of 250000 steps (98%)
[10:20:03] Completed 247500 out of 250000 steps (99%)
[10:25:43] Completed 250000 out of 250000 steps (100%)
[10:25:44] DynamicWrapper: Finished Work Unit: sleep=10000
[10:25:55]
[10:25:55] Finished Work Unit:
[10:25:55] - Reading up to 21176928 from "work/wudata_05.trr": Read 21176928
[10:25:55] trr file hash check passed.
[10:25:55] - Reading up to 27166740 from "work/wudata_05.xtc": Read 27166740
[10:25:55] xtc file hash check passed.
[10:25:55] edr file hash check passed.
[10:25:55] logfile size: 186130
[10:25:55] Leaving Run
[10:25:58] - Writing 48679934 bytes of core data to disk...
[10:26:00] ... Done.
[10:26:06] - Shutting down core
[10:26:06]
[10:26:06] Folding@home Core Shutdown: FINISHED_UNIT
[10:29:20] CoreStatus = 64 (100)
[10:29:20] Unit 5 finished with 86 percent of time to deadline remaining.
[10:29:20] Updated performance fraction: 0.860499
[10:29:20] Sending work to server
[10:29:20] Project: 2677 (Run 10, Clone 81, Gen 75)
[10:29:20] + Attempting to send results [January 24 10:29:20 UTC]
[10:29:20] - Reading file work/wuresults_05.dat from core
[10:29:20] (Read 48679934 bytes from disk)
[10:29:20] Connecting to http://171.64.65.56:8080/
[10:35:54] Posted data.
[10:35:55] Initial: 0000; - Uploaded at ~120 kB/s
[10:35:56] - Averaged speed for that direction ~117 kB/s
[10:35:56] + Results successfully sent
[10:35:56] Thank you for your contribution to Folding@Home.
[10:35:56] + Number of Units Completed: 146
Re: Project 2677: no Credits
Same problem, posted about it here: http://foldingforum.org/viewtopic.php?f=44&t=13033
- Posts: 131
- Joined: Sun Dec 02, 2007 6:29 am
- Hardware configuration: 1. C2Q 8200@2880 / W7Pro64 / SMP2 / 2 GPU - GTS250/GTS450
2. C2D 6300@3600 / XPsp3 / SMP2 / 1 GPU - GT240 - Location: Florida
Re: Last two WUs from 171.64.65.56 not credited
Yes, me!
P2677 5-29-74
P2677 8-22-75
Could be more, just noticed points going down.
- Posts: 85
- Joined: Fri Feb 13, 2009 12:38 pm
- Hardware configuration: Linux & CPUs
- Location: USA
Re: Last two WUs from 171.64.65.56 not credited
I have a few credits missing also; I received 7 credits out of these 23 WUs:
Code:
********************** Jan 22 2010 Central Time
[05:07:09] Project: 2662 (Run 0, Clone 439, Gen 48) ***Box LG
[05:07:09] + Attempting to send results [January 23 05:07:09 UTC]
[05:08:24] + Results successfully sent
[05:55:42] Project: 2671 (Run 26, Clone 89, Gen 199) ***Box LO
[05:55:42] + Attempting to send results [January 23 05:55:42 UTC]
[05:58:20] + Results successfully sent
********************** Jan 23 2010 Central Time
[07:37:25] Project: 2669 (Run 0, Clone 106, Gen 146) ***Box LF
[07:37:25] + Attempting to send results [January 23 07:37:25 UTC]
[07:38:36] + Results successfully sent
[07:57:22] Project: 2669 (Run 14, Clone 140, Gen 103) ***Box LP
[07:57:22] + Attempting to send results [January 23 07:57:22 UTC]
[07:58:38] + Results successfully sent
[10:37:03] Project: 2677 (Run 18, Clone 52, Gen 74) ***Box LL
[10:37:03] + Attempting to send results [January 23 10:37:03 UTC]
[10:39:36] + Results successfully sent
[11:06:12] Project: 2677 (Run 13, Clone 77, Gen 74) ***Box LN
[11:06:12] + Attempting to send results [January 23 11:06:12 UTC]
[11:08:44] + Results successfully sent
[12:43:59] Project: 2677 (Run 27, Clone 77, Gen 74) ***Box LD
[12:43:59] + Attempting to send results [January 23 12:43:59 UTC]
[12:46:32] + Results successfully sent
[13:10:15] Project: 2662 (Run 2, Clone 309, Gen 61) ***Box LK
[13:10:15] + Attempting to send results [January 23 13:10:15 UTC]
[13:11:48] + Results successfully sent
[14:25:26] Project: 2677 (Run 25, Clone 75, Gen 74) ***Box LO
[14:25:26] + Attempting to send results [January 23 14:25:26 UTC]
[14:27:58] + Results successfully sent
[14:40:34] Project: 2677 (Run 8, Clone 1, Gen 76) ***Box LM
[14:40:34] + Attempting to send results [January 23 14:40:34 UTC]
[14:43:04] + Results successfully sent
[16:15:20] Project: 2677 (Run 27, Clone 73, Gen 74) ***Box LA
[16:15:20] + Attempting to send results [January 23 16:15:20 UTC]
[16:17:52] + Results successfully sent
[18:59:33] Project: 2662 (Run 1, Clone 163, Gen 99) ***Box LP
[18:59:33] + Attempting to send results [January 23 18:59:33 UTC]
[19:00:52] + Results successfully sent
[21:37:42] Project: 2677 (Run 35, Clone 36, Gen 72) ***Box LN
[21:37:42] + Attempting to send results [January 23 21:37:42 UTC]
[21:40:12] + Results successfully sent
[22:51:32] Project: 2669 (Run 13, Clone 53, Gen 170) ***Box LO
[22:51:32] + Attempting to send results [January 23 22:51:32 UTC]
[22:52:42] + Results successfully sent
[01:31:55] Project: 2677 (Run 23, Clone 50, Gen 74) ***Box LM
[01:31:55] + Attempting to send results [January 24 01:31:55 UTC]
[01:34:27] + Results successfully sent
[03:48:29] Project: 2677 (Run 38, Clone 17, Gen 75) ***Box LH
[03:48:29] + Attempting to send results [January 24 03:48:29 UTC]
[03:51:00] + Results successfully sent
[05:30:15] Project: 2677 (Run 9, Clone 69, Gen 72) ***Box LP
[05:30:15] + Attempting to send results [January 24 05:30:15 UTC]
[05:32:48] + Results successfully sent
********************** Jan 24 2010 Central Time
[07:49:54] Project: 2662 (Run 1, Clone 322, Gen 65) ***Box LO
[07:49:54] + Attempting to send results [January 24 07:49:54 UTC]
[07:51:11] + Results successfully sent
[07:54:38] Project: 2669 (Run 17, Clone 146, Gen 132) ***Box LE
[07:54:38] + Attempting to send results [January 24 07:54:38 UTC]
[07:55:56] + Results successfully sent
[08:18:56] Project: 2669 (Run 2, Clone 141, Gen 168) ***Box LN
[08:18:56] + Attempting to send results [January 24 08:18:56 UTC]
[08:20:03] + Results successfully sent
[10:00:37] Project: 2669 (Run 5, Clone 139, Gen 149) ***Box LB
[10:00:37] + Attempting to send results [January 24 10:00:37 UTC]
[10:01:46] + Results successfully sent
[10:05:30] Project: 2677 (Run 7, Clone 58, Gen 74) ***Box LL
[10:05:30] + Attempting to send results [January 24 10:05:30 UTC]
[10:08:44] + Results successfully sent
[12:41:55] Project: 2669 (Run 12, Clone 39, Gen 120) ***Box LK
[12:41:55] + Attempting to send results [January 24 12:41:55 UTC]
[12:43:01] + Results successfully sent
[12:43:01] + Number of Units Completed: 341
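A list like the one above can be pulled out of the client log mechanically rather than by hand. Below is a minimal sketch that pairs each "Project:" line with the following "Attempting to send results" line; the two-WU inline sample log and both regexes are illustrative assumptions of mine, not part of the client:

```python
import re

# Hypothetical excerpt in the FAHlog format quoted above; in practice,
# read the real log file instead of this inline sample.
log = """\
[10:37:03] Project: 2677 (Run 18, Clone 52, Gen 74)
[10:37:03] + Attempting to send results [January 23 10:37:03 UTC]
[10:39:36] + Results successfully sent
[13:10:15] Project: 2662 (Run 2, Clone 309, Gen 61)
[13:10:15] + Attempting to send results [January 23 13:10:15 UTC]
[13:11:48] + Results successfully sent
"""

proj_re = re.compile(r"Project: (\d+) \(Run (\d+), Clone (\d+), Gen (\d+)\)")
sent_re = re.compile(r"Attempting to send results \[([^\]]+) UTC\]")

returns = []   # list of ((project, run, clone, gen), send_time) tuples
current = None
for line in log.splitlines():
    m = proj_re.search(line)
    if m:
        # Remember the most recent WU identity seen in the log.
        current = m.groups()
    m = sent_re.search(line)
    if m and current:
        # Pair it with the next upload attempt, then reset.
        returns.append((current, m.group(1)))
        current = None

for (proj, run, clone, gen), when in returns:
    print(f"P{proj} R{run} C{clone} G{gen} sent {when} UTC")
```

The printed list can then be checked by hand against the stats page to spot returns that never showed up as credit.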
Re: Last two WUs from 171.64.65.56 not credited
The server may be having some problems. I've shut it down for the moment.
- Posts: 94
- Joined: Thu Nov 13, 2008 4:18 pm
- Hardware configuration: q6600 @ 3.3Ghz windows xp-sp3 one SMP2 (2.15 core) + 1 9800GT native GPU2
Athlon x2 6000+ @ 3.0Ghz ubuntu 8.04 smp + asus 9600GSO gpu2 in wine wrapper
5600X2 @ 3.19Ghz ubuntu 8.04 smp + asus 9600GSO gpu2 in wine wrapper
E5200 @ 3.7Ghz ubuntu 8.04 smp2 + asus 9600GT silent gpu2 in wine wrapper
E5200 @ 3.65Ghz ubuntu 8.04 smp2 + asus 9600GSO gpu2 in wine wrapper
E6550 vmware ubuntu 8.4.1
q8400 @ 3.3Ghz windows xp-sp3 one SMP2 (2.15 core) + 1 9800GT native GPU2
Athlon II 620 @ 2.6 Ghz windows xp-sp3 one SMP2 (2.15 core) + 1 9800GT native GPU2 - Location: Calgary, Canada
Re: Project 2677: no Credits
Mactin wrote:Same problem, posted about it here: http://foldingforum.org/viewtopic.php?f=44&t=13033
How can you guys keep track of it?
The only thing I can tell is that FAHmon has consistently reported over 55,000 PPD for the last week (depending on the 7 GPUs it can jump over 57,000, but it never goes under 55,000), while EOC reports under 48,000 (the latest reported 24-hour average is 45,785).
Currently I am running Project 2677 on 7 of my 11 SMP clients, so I guess that is very much the cause of my unexplained decline.
Edit: I just saw that Peter shut down the server.
Re: Last two WUs from 171.64.65.56 not credited
Topics merged and moved--this is a server issue rather than a WU issue. I've fixed the underlying problem and restarted the server. Further returns should be credited properly from here on out. I've pulled the records of the attempted returns that weren't credited; we'll run a recredit on those (ETA likely during the week). Unfortunately we can't query individual work units for users for this issue. Thanks for your patience.
- Posts: 85
- Joined: Fri Feb 13, 2009 12:38 pm
- Hardware configuration: Linux & CPUs
- Location: USA
Re: Last two WUs from 171.64.65.56 not credited
Long as She's Fixed, I'm good to Go ... Thanks again for your dedication, kasson
Been Folding since September of '03 , a dozen credits lost now and again is no big deal for me.
On the other hand , I do like to Rant and Rave once in a while , it lowers my blood pressure
- Posts: 60
- Joined: Mon Aug 03, 2009 6:43 pm
- Hardware configuration: AMD Ryzen 7 5700G with Radeon Graphics 3.80 GHz
16.0 GB
HP Pavilion Desktop model TP01-2xxx without discrete video card
Windows 11 Pro
Native client running 1 slot, 8 cores
Re: Last two WUs from 171.64.65.56 not credited
kasson wrote:Topics merged and moved--this is a server issue rather than a WU issue. I've fixed the underlying problem and restarted the server. Further returns should be credited properly from here on out. I've pulled the records of the attempted returns that weren't credited; we'll run a recredit on those (ETA likely during the week). Unfortunately we can't query individual work units for users for this issue. Thanks for your patience.
I think I had two for Project 2677 that fell into this episode.
Good to know you're on top of this on the weekend, Kasson. No doubts from me about your dedication. Your work ethic keeps me inspired.
I take it you're not a football fan.
Folding for Cures
Re: Last two WUs from 171.64.65.56 not credited
kasson wrote:Topics merged and moved--this is a server issue rather than a WU issue. I've fixed the underlying problem and restarted the server. Further returns should be credited properly from here on out. I've pulled the records of the attempted returns that weren't credited; we'll run a recredit on those (ETA likely during the week). Unfortunately we can't query individual work units for users for this issue. Thanks for your patience.
I know you're probably busy with the SMP2 rollout, but is there any ETA on the recredit?
Re: Last two WUs from 171.64.65.56 not credited
The Pande Group is particularly careful about anything associated with credits, and several people must take action before a recredit can be completed.
Recredits always take longer than they predict. They rarely make predictions, mostly because nobody knows how soon the other people will be able to do their portion of the process.
Posting FAH's log:
How to provide enough info to get helpful support.
- Posts: 60
- Joined: Mon Aug 03, 2009 6:43 pm
- Hardware configuration: AMD Ryzen 7 5700G with Radeon Graphics 3.80 GHz
16.0 GB
HP Pavilion Desktop model TP01-2xxx without discrete video card
Windows 11 Pro
Native client running 1 slot, 8 cores
Re: Last two WUs from 171.64.65.56 not credited
kasson wrote:...I've pulled the records of the attempted returns that weren't credited; we'll run a recredit on those (ETA likely during the week). Unfortunately we can't query individual work units for users for this issue. Thanks for your patience.
Three weeks.
Folding for Cures
- Pande Group Member
- Posts: 2058
- Joined: Fri Nov 30, 2007 6:25 am
- Location: Stanford
Re: Last two WUs from 171.64.65.56 not credited
I'll talk to Dr. Kasson about the recredit he has in mind. It may have been that a recredit is not possible for some reason or that it required additional information or help from others on the staff. I'll ask him to post.
In general, we try to handle recrediting very carefully, and it will take us a long time to do, since we have learned the hard way that rushing them (especially when we have not quite complete information) only makes the situation worse, as some people get recredited and others don't and people get even more upset.
Prof. Vijay Pande, PhD
Departments of Chemistry, Structural Biology, and Computer Science
Chair, Biophysics
Director, Folding@home Distributed Computing Project
Stanford University
- Posts: 60
- Joined: Mon Aug 03, 2009 6:43 pm
- Hardware configuration: AMD Ryzen 7 5700G with Radeon Graphics 3.80 GHz
16.0 GB
HP Pavilion Desktop model TP01-2xxx without discrete video card
Windows 11 Pro
Native client running 1 slot, 8 cores
Re: Last two WUs from 171.64.65.56 not credited
VijayPande wrote:I'll talk to Dr. Kasson about the recredit he has in mind. It may have been that a recredit is not possible for some reason or that it required additional information or help from others on the staff. I'll ask him to post.
In general, we try to handle recrediting very carefully, and it will take us a long time to do, since we have learned the hard way that rushing them (especially when we have not quite complete information) only makes the situation worse, as some people get recredited and others don't and people get even more upset.
Thank you for the answer, Dr. Pande.
Just bumping to see what's going on. I can wait--I'm not going anywhere.
Folding for Cures