I am not fussed about PPD in general, but lately my three RTX 2060s have been getting Project 18251 jobs that run for around 10-11 hours and show around 500K PPD, against the usual 1.5-2M PPD these devices achieve on other projects with the same settings. I have just updated to the latest NVIDIA Studio drivers with no obvious difference. LAR systems shows an "average" PPD of 2M or so for 2060s on this project, but right now I have Z441 at 505K PPD in its 9th hour of Project (18251,41,0,53), and next to it Z442 is at 2.2M PPD on a different project. This is typical. If those running Project 18251 are poverty-stricken researchers who can't afford the usual rate, that's fine, since points aren't really worth anything anyway. But maybe there is something odd about the combination - I note the absence of high-end GPUs working on this project in the LAR systems list, and maybe it's not suitable for us low-end donors?
PS: By accident, after writing the above I started a Z800 with a Dell GTX 1080 in it, and it picked up a Project 18251 job (206,3,25) and is running at 1.5M PPD. This is just a bit better than LAR systems' average for that project on a 1080. Both my GTX 10xx and RTX 2060 GPUs are using driver 566.14. So why is it different for 2060s?
Project 18251 very low PPD on RTX 2060s
Moderators: Site Moderators, FAHC Science Team
- Posts: 39
- Joined: Wed Mar 18, 2020 2:55 pm
- Hardware configuration: HP Z600 (5) HP Z800 (3) HP Z440 (3)
ASUS Turbo GTX 1060, 1070, 1080, RTX 2060 (3)
Dell GTX 1080 - Location: Sydney Australia
Project 18251 very low PPD on RTX 2060s
Last edited by appepi on Thu Nov 21, 2024 3:21 pm, edited 1 time in total.
- Posts: 520
- Joined: Fri Apr 03, 2020 2:22 pm
- Hardware configuration: ASRock X370M PRO4
Ryzen 2400G APU
16 GB DDR4-3200
MSI GTX 1660 Super Gaming X
Re: Project 18251 very low PPD on RTX 2060s
Wow.... I'm not the only one!
From what I understand from Discord, it's a strange project that didn't scale well with larger GPUs, hence the assignment to smaller cards. I've run into the same issue on my 1660 Super, but only on later work units. My earlier runs gave full points, in my case 1.2-1.4M if I recall correctly. The researcher did some runs and could find no errors on the later ones that ran slow for me.
In my case I did find one stick of memory testing bad. So I yanked it out and am currently running in single channel mode waiting for memory to arrive. But I picked up another 18251.... and it ran slow again. Single channel memory might cause a slight slowdown, but not half usual points. So I'm still at a loss as to what triggered it exactly.
If I figure anything out I'll let you know. I will probably also pass this on to the researcher. He was very helpful in looking into the issue when I first reported it on Discord, and he might want your PRCG info. Also, if you use HFM or have old logs, take a look and see if all your runs of this project ran at the current speed. Mine were fine until a certain date, then started running half speed. Assuming it was the later found hardware issue, I shrugged it off.
If it makes you feel better I think some runs on my 1660 Super were almost 14 hours. But they stayed stable and completed.
Fold them if you get them!
- Posts: 39
- Joined: Wed Mar 18, 2020 2:55 pm
- Hardware configuration: HP Z600 (5) HP Z800 (3) HP Z440 (3)
ASUS Turbo GTX 1060, 1070, 1080, RTX 2060 (3)
Dell GTX 1080 - Location: Sydney Australia
Re: Project 18251 very low PPD on RTX 2060s
Hmmm ... Very helpful to know it's not just me. And just to put the barista-grade fern on top, I had two of these low-PPD 18251 jobs running when I went to shut down the devices this morning.

Ordinarily I try to limit folding to 10pm-7am local time (9 hours/day), when electricity is at the cheapest "off peak" rate. Lately I have been letting them start two hours earlier and use 8pm-10pm, which is "shoulder" rates, at 22% more per kWh. I set the LAR systems timer to finish at 7am, and by the time I wake up (being retired, this is at a more civilised hour) the jobs are usually done. If not, I let them finish up and stop at "shoulder" rates, as long as they will end before 2pm, when we hit peak rates at 119% more. It is so rare for a job to need to run that long after 7am that I quickly noticed the repeated occurrence of these long-running Project 18251 jobs.

One of the Project 18251 jobs will end not long after 2pm, so it is using 7 extra hours at "shoulder" rates, costing me the equivalent of an extra 8.54 off-peak hours beyond what I aim to donate to Folding. I let it run. The other was planning to gobble up several of the expensive "peak" hours as well, so I paused it and shut the device down. It can start again at 8pm, will yield even fewer points, and will slow down the research. This is not a good outcome from anyone's point of view, but there is no mechanism for preventing a device taking on a job whose ETA is beyond a designated limit.
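For anyone checking my tariff arithmetic, here is a quick sketch. The rate multipliers (shoulder = 22% more, peak = 119% more than off-peak) and the 7 extra hours are the figures from this post; the baseline price itself doesn't matter for the comparison.

```python
# Electricity-rate multipliers, relative to the off-peak price
# (the 22% and 119% surcharges are from my tariff, as quoted above).
OFF_PEAK = 1.00
SHOULDER = 1.22   # 22% more per kWh
PEAK = 2.19       # 119% more per kWh

# A WU that overruns into 7 extra hours at shoulder rates costs the
# same as this many hours at the off-peak rate:
extra_shoulder_hours = 7
equivalent_off_peak_hours = extra_shoulder_hours * SHOULDER
print(round(equivalent_off_peak_hours, 2))  # 8.54
```

Which is where the "extra 8.54 off-peak hours" figure comes from.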
I also had a closer look at the LAR systems "Project PPD by GPU" listing for Project 18251. I don't know how to insert local images here, but at https://folding.lar.systems/projects/fo ... site_links you can see that there are two very different rankings for a 2060: #6 with 2.3M PPD and 0.985 Mpts for 10 hr 54 min average work, and #22 with 0.7M PPD and 0.32 Mpts for 11 hr 8 min average work. The latter is the sort of performance I am getting; the former is what I would expect. Maybe these diverse averages reflect a change in the project at some point, or else an unknown difference between sub-populations of 2060s. I also note that a 1660 (rank #10) is supposedly averaging 1.7M PPD and 0.12 Mpts for 2 hr 38 min work. Yes, well, maybe this curious interaction is the cyber-equivalent of the penicillin mould in the Petri dish?
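As a sanity check on those listings, the points-per-WU column should follow roughly from the PPD and the average WU duration via points ≈ PPD × (hours / 24). A quick sketch with the two 2060 entries quoted above (figures from the listing; some rounding difference is expected, since the site averages PPD and duration separately):

```python
def points_per_wu(ppd, hours, minutes):
    """Approximate points per work unit from a PPD figure
    and an average WU duration."""
    return ppd * (hours + minutes / 60) / 24

# The two 2060 entries from the LAR systems listing for Project 18251:
fast = points_per_wu(2_300_000, 10, 54)  # rank #6: listed as 0.985 Mpts
slow = points_per_wu(700_000, 11, 8)     # rank #22: listed as 0.32 Mpts
print(f"{fast / 1e6:.2f} Mpts, {slow / 1e6:.2f} Mpts")
```

The slow entry reproduces its listed 0.32 Mpts almost exactly, and the fast one comes out in the right ballpark, so the two rankings really do describe WUs of similar length earning very different credit.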