Change in BA requirements
- Pande Group Member
- Posts: 2058
- Joined: Fri Nov 30, 2007 6:25 am
- Location: Stanford
Re: Change in BA requirements
PS For those who haven't seen our previous blog posts on BA, these posts might be interesting, especially to put all of this in context at least in terms of how PG has been thinking about this in the past:
http://folding.typepad.com/news/2012/02 ... llout.html
http://folding.typepad.com/news/2011/11 ... -2012.html
Prof. Vijay Pande, PhD
Departments of Chemistry, Structural Biology, and Computer Science
Chair, Biophysics
Director, Folding@home Distributed Computing Project
Stanford University
- Posts: 10179
- Joined: Thu Nov 29, 2007 4:30 pm
- Hardware configuration: Intel i7-4770K @ 4.5 GHz, 16 GB DDR3-2133 Corsair Vengence (black/red), EVGA GTX 760 @ 1200 MHz, on an Asus Maximus VI Hero MB (black/red), in a blacked out Antec P280 Tower, with a Xigmatek Night Hawk (black) HSF, Seasonic 760w Platinum (black case, sleeves, wires), 4 SilenX 120mm Case fans with silicon fan gaskets and silicon mounts (all black), a 512GB Samsung SSD (black), and a 2TB Black Western Digital HD (silver/black).
- Location: Arizona
Re: Change in BA requirements
Here are some other resources...
FAH FAQ:SMP - What are Big Adv work units?
FAH Configuration Guide: Big Adv
Original BA Forum Topic by Kasson.
FAH FAQ:SMP - What are Big Adv work units?
The underline was my emphasis.
What are bigadv Work Units?
Big Advanced (bigadv or BA) is an experimental type of Folding@home WU intended for the most powerful CPUs in Folding@home. Our goal is to use bigadv to work on projects that are particularly large (memory utilization, upload/download requirement) and require a large amount of computation. We are all fortunate in that processors get faster over time, so the highest-performing tier of donor machines also gets faster over time. We have a lot of exciting science being enabled by FAH donors, and it takes place at all levels of computational requirement and performance sensitivity.
These units have extremely tight deadlines and require a minimum of sixteen physical CPU cores. Some systems, especially those with hyperthreaded CPUs, may not be able to complete the units in time even when they appear to meet this core count, so meeting the core count alone is no guarantee. Bigadv Work Units can at times consume approximately 750 MB of RAM per CPU core, so you may need 12 GB of RAM as well. They are also larger WUs and take longer to upload. In return for these requirements, we add an additional 20% of points to bigadv Work Units, on top of the Quick Return Bonus awarded to all SMP WUs. We recognize that donors work hard to optimize their setups, but please keep in mind that BA is very much experimental and that future changes are not just possible but very likely.
Please see the Configuration FAQ for more information on how to get bigadv WUs. The Folding Support Forum is frequented by many hardware experts who may be able to help answer your questions about specific hardware and setups.
FAH Configuration Guide: Big Adv
FAH FAQ:Points - How is QRB Determined?
Big Advanced
v6: -bigadv
v7: client-type bigadv
Sets a client preference to request extra large work units for multi-CPU socket class server systems. A minimum of 16 CPU cores is required for Assignment Server access and to meet the extremely short deadlines.
In 2009, Dr. Kasson introduced an experimental WU category called “bigadv”, intended for some of the most powerful computers participating in FAH. Currently, bigadv WUs require a minimum of 16 CPU cores and have very tight completion deadlines, although that minimum requirement has been known to change over time. These WUs have a high scientific priority and are so computationally demanding they could not run anywhere else on Folding@home. They also consume much more RAM and Internet bandwidth, but in return a 20% increase in point value is added on top of the existing Quick Return Bonus points system.
Note: BigAdv work units are only available on Linux operating systems at this time. However, Windows availability may return in the future.
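For those wondering how the 20% bigadv premium and the QRB stack, here is a minimal sketch of the arithmetic. It assumes the QRB takes the form usually quoted from the points FAQ, final = base × max(1, sqrt(k × deadline / elapsed)), and every number in the example (base points, k-factor, deadline, return time) is a made-up illustration, not a value from any real project:

```python
import math

def estimated_points(base_points, k_factor, deadline_days, elapsed_days,
                     bigadv_premium=0.20):
    """Rough points estimate for a bigadv WU.

    Assumes the commonly quoted Quick Return Bonus form:
        final = base * max(1, sqrt(k * deadline / elapsed))
    with the ~20% bigadv premium applied to the base points.
    """
    base = base_points * (1.0 + bigadv_premium)            # bigadv adds ~20%
    qrb = max(1.0, math.sqrt(k_factor * deadline_days / elapsed_days))
    return base * qrb

# Made-up example: 8,000 base points, k-factor 26.4,
# a 6-day deadline, WU returned in 1.5 days.
print(round(estimated_points(8000, 26.4, 6.0, 1.5)))  # ≈ 98,650
```

Because both factors are simple multipliers, the faster the WU comes back, the more that 20% premium is worth in absolute points.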
Original BA Forum Topic by Kasson.
How to provide enough information to get helpful support
Tell me and I forget. Teach me and I remember. Involve me and I learn.
Re: Change in BA requirements
PS For those who haven't seen our previous blog posts on BA, these posts might be interesting, especially to put all of this in context at least in terms of how PG has been thinking about this in the past:
http://folding.typepad.com/news/2012/02 ... llout.html
http://folding.typepad.com/news/2011/11 ... -2012.html

https://docs.google.com/spreadsheet/ccc ... pY1E#gid=0
Nov. 14, 2011: approx. 327,883 active WIN CPU clients
Feb. 12, 2012: approx. 247,335 active WIN CPU clients
Jan. 13, 2014: approx. 159,043 active WIN CPU clients
Did the policy work in the past?
Does a correlation between WIN CPU participation and Linux participation look probable?
Transparency and Accountability, the necessary foundation of any great endeavor!
Re: Change in BA requirements
Many pages ago, I made an attempt to explain how science (Stanford) and the Donors who are involved in the points-race can come to different conclusions. My theories were based on some visible facts, but also some assumptions on my part. To date, those assumptions have neither been clearly supported nor clearly denied by Stanford. Eventually they probably will be, but in the meantime we still have some areas about which we can speculate.
Some of your statements may be a priori premises but others are speculations, some with merit and some (perhaps) without. I challenge the following:
mdk777 wrote: regular smp will die a natural death (no surprise) and yes, over a very long period of time, BA will cease to have any exclusivity or special bonus... Again, something that has been happening gradually over time anyway.
Science has a need to study bigger and bigger proteins as time goes on and as hardware becomes more powerful, but that does not imply that the need to study small proteins goes away. Looking at psummary, I see a lot of proteins with ~550 atoms, a fair number with ~250 atoms, and even one with only 134 atoms. Assign 134 atoms to a machine with 64 active cores and that puts only two atoms on each thread. It probably won't run, but even if it does, it will be TERRIBLY INEFFICIENT. A protein with 550 atoms gives only 8 or 9 atoms per thread. When you suggest that SMP will die a natural death, you're also proposing that FAH must prohibit certain types of research. I doubt that's going to happen.
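To put rough numbers on that argument, here's a quick sketch of the atoms-per-thread arithmetic (the protein sizes are the psummary examples above plus the ~1.3 million-atom bigadv figure bruce cites below; the core counts are just illustrative):

```python
# Back-of-the-envelope atoms-per-thread for the protein sizes cited above.
proteins = {"tiny": 134, "small": 250, "mid-size": 550,
            "bigadv-class": 1_300_000}

for name, atoms in proteins.items():
    for cores in (3, 16, 64):
        print(f"{name:>12} ({atoms:>9,} atoms) on {cores:>2} cores: "
              f"~{atoms // cores:>7,} atoms per thread")
```

At 64 cores the 134-atom protein leaves only ~2 atoms per thread, which is exactly the inefficiency being described.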
Posting FAH's log:
How to provide enough info to get helpful support.
- Posts: 45
- Joined: Sat Oct 27, 2012 6:17 pm
- Hardware configuration: AMD Opteron 2 x 6274 (32 Cores)
AMD FX-8350 (8 Cores)
Intel i7-4790K (8 Cores)
Intel i7-4790K (8 Cores)
Intel i7-4771K (8 Cores)
Intel i7-3770K (8 Cores)
Intel i7-3770K (8 Cores)
Intel i7-3770K (8 Cores)
Intel i7-3770S (8 Cores)
Intel i7-3930K (12 Cores)
Nvidia GPUs:
GTX 780ti
GTX 780ti
GTX 780ti
GTX 780ti
GTX 780
GTX 690
GTX 690
AMD GPUs:
HD 7970 GBE
HD 7970 GBE
HD 7990
HD 7990
HD 7990
R9 295X2
R9 295X2
R9 295X2
- Location: Dallas, TX
Re: Change in BA requirements
@ bruce: As I understand it, GPUs are now capable of doing all the calculations that can be done by CPU/SMP with the new Core 17, and I assume that is also true of BA(?) I'm curious about the correlation you've drawn between the number of atoms in a given protein and how that impacts folding efficiency on any given processor: is there a natural break-point between the size of a protein and the type of system best suited to process it? If so, are you implying that SMP will continue to live on for a very long time simply because it is the most efficient way to process small proteins, rather than devoting two-thousand+ core GPUs to solving the same problem?
Re: Change in BA requirements
I expect that SMP will continue to be supported far into the future.
The original goal of FAH was to make use of unused computer resources. That's still an important goal. There are probably very few single-CPU computers in people's homes, but that original design used the CPU at a low priority, allowing the owner to continue to use the computer for other things even with FAH folding.
The machine I'm using at the moment is a Quad with a supported GPU. With a GPU slot actively folding, I have three unused CPUs. The researchers can put them to good use. That has not changed and for as long as people continue to use desktop or laptop computers, I don't see that changing.
With my 3 free CPUs, data for the protein is logically divided into thirds. Each CPU solves one third of the problem, and those three partial solutions are then integrated into a single unified solution. This process is repeated many, many times. If the protein is tiny compared to the 3 cores, the three-way solution can be slower than solving the whole protein on a single CPU, if only because the single-CPU version needs no integration step. If a protein is HUGE compared to the 3 cores on my system, lots of memory is required and each partial solution becomes very, very slow. There's a wide range where solutions are efficient, but there are also extremes that might be called the maximum or minimum number of atoms that works efficiently for a given number of cores. For example, a 20-atom protein couldn't possibly use 64 cores effectively, whereas the proteins being assigned to BigAdv hardware, which have in excess of 1.3 million atoms, would be very inefficient on my 3 cores.
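As an illustration only, that divide-solve-integrate cycle looks roughly like the toy sketch below. advance_chunk() and merge() are hypothetical stand-ins for the real molecular-dynamics math, not the actual FAHCore/GROMACS decomposition; the point is just the repeated coordination step, whose relative cost grows as the atoms-per-core count shrinks:

```python
# Toy sketch of iterative domain decomposition across a few CPU cores.
from concurrent.futures import ProcessPoolExecutor

def advance_chunk(chunk):
    """Pretend physics: advance one subset of 'atoms' a single time step."""
    return [atom + 1 for atom in chunk]

def merge(parts):
    """The integration step: recombine partial solutions into one state."""
    return [atom for part in parts for atom in part]

def simulate(atoms, cores=3, steps=100):
    size = max(1, len(atoms) // cores)
    with ProcessPoolExecutor(max_workers=cores) as pool:
        for _ in range(steps):
            chunks = [atoms[i:i + size] for i in range(0, len(atoms), size)]
            atoms = merge(pool.map(advance_chunk, chunks))  # coordination cost
    return atoms

if __name__ == "__main__":
    print(simulate(list(range(9)), cores=3, steps=10)[:3])  # [10, 11, 12]
```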
When science needs to study too many larger proteins compared to the number of BA machines, it produces inefficiencies. When science needs to study too many smaller proteins compared to the number of machines with fewer cores, it also produces inefficiencies.
You've probably heard the word "scaling" used elsewhere and this is a partial explanation of what they're talking about.
Nevertheless, as somebody suggested many posts earlier, any home computer running a GPU has some spare resources that can be put to use. Some Donors have said that they don't feel it's worth their effort to run SMP on those free cores. I disagree with that approach.
Elsewhere, there has been a discussion of porting FAH to smart phones/tablets/etc. Why would anybody want to do that when at the level of today's technology, my 3 cores can do more work than my tablet or my phone? Sure, the technology of tablets is improving more rapidly than the technology of the classic PC, so sooner or later, that'll change but today FAH can still use a reasonable mixture of high-end and low-end technology.
Posting FAH's log:
How to provide enough info to get helpful support.
Re: Change in BA requirements
Well, I think I follow your line of thought...but it just reinforces my argument...that it doesn't make sense to force (or re-balance) BA machines to regular smp.
But, I might just be missing your point.
In my mind, the time and energy spent on this "re-balancing" (which hasn't worked in the past anyway) should have been spent on recruitment...selling more people on running smp...instead of hammering on people to give up on BA.
How about a banking analogy?
I need small depositors because I make most of my profit on fees.
Large depositors are great too because they help with my cash/lending ratio.
So, my profits are running low and I post an announcement:
"Deposits will be limit to 50K per account."
Local lottery winner comes in to open his 5 million dollar account. He is told that he will need to open 100 accounts...OK...weird...but whatever...until he reads that he will be charged fees on all 100 accounts.
About the same time...every other large account holder is on the phone trying to figure out why their accounts have been split, and why their fees have multiplied.
Soon the local news is covering the fiasco of angry customers...
Question?
Has this process had a positive, or negative impact on the total number of bank customers? Do people watching the news flock to the bank because they now see that small accounts are so valued by this bank? Do any of the large customers stay?
Transparency and Accountability, the necessary foundation of any great endeavor!
- Posts: 1024
- Joined: Sun Dec 02, 2007 12:43 pm
Re: Change in BA requirements
I think you missed bruce's statement about the wide range of WUs and you're turning it into an assumed narrow range. There are a lot of non-bigadv WUs that will run fine on 16 to 20 cores that are waiting a long time for someone to process them. Otherwise their proposed change would be unscientific.

mdk777 wrote: Well, I think I follow your line of thought...but it just reinforces my argument...that it doesn't make sense to force (or re-balance) BA machines to regular smp.
But, I might just be missing your point.
In my mind, the time and energy spent on this "re-balancing" (which hasn't worked in the past anyway) should have been spent on recruitment...selling more people on running smp...instead of hammering on people to give up on BA.
What percentage of WUs for projects numbered between 8101 and 8105 are truly more important than WUs for projects numbered below 8100 or above 8200?
Why hasn't rebalancing worked in the past?
Re: Change in BA requirements
"Why hasn't rebalancing worked in the past?"

Because the processing power never made the transition.
They were indeed dropped from BA, but that didn't increase the number of machines running smp...
people retired machines, people quit, they moved to different projects...sure some stayed...but not enough.
WIN CPU went from 327k to 159k during that period of time.
No, I understand that a wide range of machine classes are suited to a wide range of WUs.

"I think you missed bruce's statement about the wide range of WUs and you're turning it into an assumed narrow range. There are a lot of non-bigadv WUs that will run fine on 16 to 20 cores that are waiting a long time for someone to process them. Otherwise their proposed change would be unscientific."

Application/matching should indeed be made on a best-fit criterion, not an artificial, moving percentage-of-class ratio.
The 1% to 5% ratio discussed earlier appears to be totally political, and not based on scientific requirement. My point exactly!
Transparency and Accountability, the necessary foundation of any great endeavor!
- Posts: 10179
- Joined: Thu Nov 29, 2007 4:30 pm
- Hardware configuration: Intel i7-4770K @ 4.5 GHz, 16 GB DDR3-2133 Corsair Vengence (black/red), EVGA GTX 760 @ 1200 MHz, on an Asus Maximus VI Hero MB (black/red), in a blacked out Antec P280 Tower, with a Xigmatek Night Hawk (black) HSF, Seasonic 760w Platinum (black case, sleeves, wires), 4 SilenX 120mm Case fans with silicon fan gaskets and silicon mounts (all black), a 512GB Samsung SSD (black), and a 2TB Black Western Digital HD (silver/black).
- Location: Arizona
Re: Change in BA requirements
Those declining numbers sure do look scary. But cherry picking a thin slice of the total picture is not truly informative. That's why they added a new column to the osstats page showing active cores, not just active clients. With the world constantly upgrading from P4s to dual cores to quad cores and beyond, many people are running fewer clients but on more and faster cores. Client counts dropping...

mdk777 wrote: ...
https://docs.google.com/spreadsheet/ccc ... pY1E#gid=0
Nov. 14, 2011: approx. 327,883 active WIN CPU clients
Feb. 12, 2012: approx. 247,335 active WIN CPU clients
Jan. 13, 2014: approx. 159,043 active WIN CPU clients
Did the policy work in the past?
Does a correlation between WIN CPU participation and Linux participation look probable?
Only showing part of the whole picture is like the tale of the blind men and the elephant. Which part of the elephant are you showing us?
I'm not saying that more cores fully explains the lower client count, but please don't try to make a point by only showing us the elephant's tail.
How to provide enough information to get helpful support
Tell me and I forget. Teach me and I remember. Involve me and I learn.
Re: Change in BA requirements
So those are 2-4P machines? AKA probably bigadv machines which may be affected by this change? Just curious. Say, to pick a random number, that 50% of those machines won't be able to fold bigadv anymore due to the changes, and that half of those folders decide not to fold SMP because of the points-to-power-consumption ratio: what do we suppose is going to happen?

7im wrote: Those declining numbers sure do look scary. But cherry picking a thin slice of the total picture is not truly informative. That's why they added a new column to the osstats page showing active cores, not just active clients. With the world constantly upgrading from P4s to dual cores to quad cores and beyond, many people are running fewer clients but on more and faster cores. Client counts dropping...
Only showing part of the whole picture is like the tale of the blind men and the elephant. Which part of the elephant are you showing us?
I'm not saying that more cores fully explains the lower client count, but please don't try to make a point by only showing us the elephant's tail.
Remember, many here have already ceased folding and moved to another DC project. Also remember that the machines that have been repurposed are still reflected in the core count, and will be until that 50-day wall is reached and they no longer return a WU.
In this case time will tell.
- Posts: 128
- Joined: Thu Dec 06, 2007 9:48 pm
- Location: Norway
Re: Change in BA requirements
Well, taking a closer look at the 2013 numbers for Windows, as shown at https://docs.google.com/spreadsheet/ccc ... ring#gid=0, is interesting...

7im wrote: Those declining numbers sure do look scary. But cherry picking a thin slice of the total picture is not truly informative. That's why they added a new column to the osstats page showing active cores, not just active clients. With the world constantly upgrading from P4s to dual cores to quad cores and beyond, many people are running fewer clients but on more and faster cores. Client counts dropping...
Only showing part of the whole picture is like the tale of the blind men and the elephant. Which part of the elephant are you showing us?
I'm not saying that more cores fully explains the lower client count, but please don't try to make a point by only showing us the elephant's tail.
I've only looked at Windows. One very easy statistic first: the number of clients per TFLOPS is constant at 229.8 ± 0.2 clients/TFLOPS, with the differences due to rounding.
A much more interesting statistic is cores per client. This starts at 1.26 cores/client, increases slightly, then quite unexpectedly drops to 1.12 cores/client. Then suddenly, with only a small increase in client count from one day to the next, it jumps from 1.14 to 1.94 cores/client, and a couple of weeks later it reaches its maximum of 2.025 cores/client. In the following months cores/client slowly drops, then falls sharply to 1.44 cores/client. That period coincides with a large spike in client count, so it is understandable. As the spike in client count passes and the count starts to drop again, cores/client climbs back to 2.00, then slowly declines again. This trend of slowly dropping cores/client has continued for the last 3+ months, and it now stands at 1.82 cores/client.
Now, a large influx of single-core clients during the summer is of course possible; at least back in the "old days", HardOCP and the Australians would swap #1 and #2 depending on who had winter, and some of those users switching to SMP could also explain how it jumped back up to 2 cores/client.
A steady decline over the last few months, on the other hand, doesn't make as much sense, especially since single-core clients are basically EOL now. The last few months do not suggest users are swapping out their computers for more powerful ones; instead it looks like users with quad-core and more powerful computers are shutting them off, and it's the low-end computers that are being kept running.
Another possible explanation for fewer cores per client is Core 17 needing a dedicated CPU core, but unless I've misread something, that launched in November, while the slow drop has been going on for longer than that.
BTW, even if the roughly 24k GPU clients are added, at one core each, to the Windows numbers, that still only gives 1.97 cores/client for Windows. If single-core really is EOL, I would have expected an average of at least 2 cores/client.
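For anyone who wants to reproduce those ratios, the arithmetic is plain division over osstats-style columns. A minimal sketch; the rows below are placeholders chosen only to be consistent with the ratios quoted above, not actual spreadsheet values:

```python
# Hypothetical (clients, cores, TFLOPS) rows for Windows on two dates.
samples = [
    ("mid-2013",   200_000, 400_000, 870.3),   # ~2.00 cores/client
    ("2014-01-13", 159_043, 289_458, 692.1),   # ~1.82 cores/client
]

for date, clients, cores, tflops in samples:
    print(f"{date}: {cores / clients:.2f} cores/client, "
          f"{clients / tflops:.1f} clients/TFLOPS")
```

Both rows land on the same 229.8 clients/TFLOPS figure, which is why that ratio stays constant even while cores/client moves around.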
- Posts: 10179
- Joined: Thu Nov 29, 2007 4:30 pm
- Hardware configuration: Intel i7-4770K @ 4.5 GHz, 16 GB DDR3-2133 Corsair Vengence (black/red), EVGA GTX 760 @ 1200 MHz, on an Asus Maximus VI Hero MB (black/red), in a blacked out Antec P280 Tower, with a Xigmatek Night Hawk (black) HSF, Seasonic 760w Platinum (black case, sleeves, wires), 4 SilenX 120mm Case fans with silicon fan gaskets and silicon mounts (all black), a 512GB Samsung SSD (black), and a 2TB Black Western Digital HD (silver/black).
- Location: Arizona
Re: Change in BA requirements
You are both cherry picking data again, and not considering all possibilities.
Viper, do the math. They aren't all 2-4P machines. To go from 300k clients to 150k, you only need everyone to upgrade from a dual core to a quad core: 300,000 dual-core clients and 150,000 quad-core clients represent the same 600,000 cores. I'm not saying that's what happened, just that lots of small upgrades are more likely than your guess of a few large BA upgrades.
RD, average core count drops over the last few months as people transition from Cores 15 and 16, which don't need a dedicated CPU core, to Core 17, which does. I can't prove that, but it seems plausible.
Again, try to account for the larger picture, not just the portion of it that suits your viewpoint.
How to provide enough information to get helpful support
Tell me and I forget. Teach me and I remember. Involve me and I learn.
Re: Change in BA requirements
Just an observation and a thought I'd like to share with Dr. Pande.
If we really do want to "take the broader look at things", people are not really trying to "cherry-pick" data about a decline in FAH participation. If you broaden out, what is really happening is that they don't understand the Bigadv policy very well as it pertains to exactly how equipment is designated as not meeting Bigadv expectations. You may not realize it, but within the Folding Teams where "enthusiasts" live that do Bigadv, we regularly advise Bigadv newbies to beware of future changes to the Bigadv requirements. We don't actually know when a Bigadv machine will bump against no longer qualifying either. So, we typically advise people getting into Bigadv to purchase the most powerful machine they can afford.
What we have here is what feels like a one-way street of communication about the Bigadv policy regarding what is expected both now and in the future. Everyone appreciates it when you actually speak to us in this forum and share what you know. But you also need to realize that dropping a statement like "we want Bigadv to be in the top 5% (or even 1%)" leaves people with more questions unless you take time to explain how PG arrives at the 5%. It would also be useful to ask questions here to test whether the very people you are speaking to actually understand what you just said. An example would be the "5%" statement: how about asking whether people understood exactly what that means? Without more explanation around these important issues, supposition and interpretation set in. That leads people into further rat-holes of discussion rather than staying on point and gaining understanding.
I hope that you will take this as professional advice and not a criticism. Bigadv Folders are a very technical, inquisitive, enthusiastic component of your Folding community. They need a bit more explanation at times, especially for things that you share here. The absence of fact and detail is what is hurting so much right now, not all the specific answers to the complex issues that you are facing. It is OK to provide more explanation while knowing that you are still working out the details of any change that will be made.
Thank-you.
Re: Change in BA requirements
Nope, not excluding the other variables you mentioned.

7im wrote: You are both cherry picking data again, and not considering all possibilities.
Just pointing out that the case for linkage/correlation is extremely poor.
Reducing BA has not appeared in the past to have had any positive correlation with increasing smp participation.
The assumption that pushing/nudging/re-balancing BA folders toward regular smp will work just does not seem to have any historical evidence of success.
Why repeat the failed experiment again?
Transparency and Accountability, the necessary foundation of any great endeavor!