Joe_H wrote: ↑Tue May 31, 2022 2:46 pm
JimboPalmer wrote: ↑Tue May 31, 2022 2:13 pm
F@H does not currently use AVX-512; there are 3 reasons:
Fourth reason - during initial testing of the first CPU folding core to support AVX-512, there was little improvement in processing speed over 256-bit AVX. It would have required creating and maintaining an additional variant of the folding core for little gain to the F@h project. Later implementations of AVX-512 might have changed that cost-benefit calculation, but I don't know whether that has been tested. Currently there is support for SSE2, AVX and AVX2.
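To illustrate the maintenance cost Joe_H describes, here is a rough, hypothetical C sketch (not actual FAHCore code) of the same kernel written once with 256-bit AVX intrinsics and again with 512-bit ones. Every extra vector width is another code path that has to be written, benchmarked and regression-tested on every supported CPU:

[code]
/* Hypothetical illustration (not FAHCore code) of why each new SIMD width
 * means another code path to write, test and maintain. */
#include <immintrin.h>
#include <stddef.h>

/* 256-bit AVX path: processes 8 floats per iteration. */
static void scale_add_avx(float *out, const float *a, const float *b,
                          float s, size_t n)
{
    __m256 vs = _mm256_set1_ps(s);
    for (size_t i = 0; i + 8 <= n; i += 8) {
        __m256 va = _mm256_loadu_ps(a + i);
        __m256 vb = _mm256_loadu_ps(b + i);
        _mm256_storeu_ps(out + i, _mm256_add_ps(_mm256_mul_ps(va, vs), vb));
    }
    /* (tail elements omitted for brevity) */
}

#ifdef __AVX512F__
/* 512-bit path: same math, 16 floats per iteration - a second variant
 * that must be validated separately. */
static void scale_add_avx512(float *out, const float *a, const float *b,
                             float s, size_t n)
{
    __m512 vs = _mm512_set1_ps(s);
    for (size_t i = 0; i + 16 <= n; i += 16) {
        __m512 va = _mm512_loadu_ps(a + i);
        __m512 vb = _mm512_loadu_ps(b + i);
        _mm512_storeu_ps(out + i, _mm512_add_ps(_mm512_mul_ps(va, vs), vb));
    }
}
#endif
[/code]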
I think they're going the opposite route.
AVX-512 can produce very precise results, but for folding work (exploring variations on an essentially constant structure), AVX-512 is extremely power hungry, and the clock throttling it triggers on many CPUs makes it slow in practice.
A few scientific articles have suggested that traditional distributed-computing projects (like BOINC, which is closely related in spirit to F@H) might be superseded by AI and deep learning.
Essentially 4-bit or 8-bit inference running on GPU shaders, which tests variations in an entirely different way than 512-bit vector math (a rough sketch of the low-precision idea is below).
It is hands down faster and far more energy efficient to focus the computation on the variations themselves, instead of reprocessing piles of data that are effectively constant.
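For context, here is a minimal, purely illustrative C sketch of the 8-bit quantization trick behind that kind of low-precision AI inference (nothing to do with any actual F@H core): values are squeezed into int8, so far more of them fit through caches and the memory bus and the arithmetic is cheaper, at the cost of some precision.

[code]
/* Minimal sketch of the 8-bit quantization idea used in AI inference
 * (purely illustrative - not how any F@H core works). */
#include <stdio.h>
#include <stdint.h>
#include <math.h>

#define N 8

/* Quantize floats in a known range to signed 8-bit integers. */
static void quantize(const float *x, int8_t *q, float scale, int n)
{
    for (int i = 0; i < n; i++)
        q[i] = (int8_t)lrintf(x[i] / scale);
}

int main(void)
{
    float a[N] = {0.10f, -0.20f, 0.05f, 0.40f, -0.30f, 0.25f, -0.15f, 0.35f};
    float b[N] = {0.30f,  0.10f, -0.25f, 0.20f, 0.05f, -0.40f, 0.15f, -0.10f};

    /* Full-precision reference dot product. */
    float ref = 0.0f;
    for (int i = 0; i < N; i++)
        ref += a[i] * b[i];

    /* 8-bit version: each value uses a quarter of the storage of float32. */
    float scale = 0.4f / 127.0f;          /* assumed input range of +/-0.4 */
    int8_t qa[N], qb[N];
    quantize(a, qa, scale, N);
    quantize(b, qb, scale, N);

    int32_t acc = 0;
    for (int i = 0; i < N; i++)
        acc += (int32_t)qa[i] * qb[i];
    float approx = acc * scale * scale;   /* dequantize the result */

    printf("float32: %f  int8 approx: %f\n", ref, approx);
    return 0;
}
[/code]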
In essence, folding speaks the language of quantum computers (with answers ranging continuously from -x to +x) rather than a binary language.
The computers that should be doing this work are quantum computers. I think it's entirely possible that a few hours of quantum-computer work could yield more success than thousands or tens of thousands of computing hours under the current traditional model.
But until we fully understand quantum computing, or can even begin to write programs for it (some scientists think only conventional computers will be able to generate quantum programs that are actually effective, because writing such a program by hand is probably too difficult for humans), we will be stuck with our current model until we adopt AI.
So I see AI, rather than AVX-512, becoming the go-to platform.