This is built for AI crap. There's no reason to even consider it for folding. Those 20 ARM cores would do some folding for sure, but I can think of 3,000 ways to fold better and faster.
nVidia has lost the plot to the point where they don't even bother publishing FP32 performance for the 5090 anymore, let alone for this one.
It would fold pretty well, but nowhere near as well as its price would suggest. I can't find FP32 numbers for it, but I'd guess it lands somewhere between the 5080 and the 5090.
That machine is made for low-precision floating-point arithmetic for the purpose of "AI" crap, so it has a lot of Tensor cores (fast, low precision) and relatively few CUDA cores (slower, higher precision). Molecular simulations can't make use of FP4, so that headline 1 PFLOPS of FP4 performance might as well be 0 FLOPS. Simulations can't even make use of FP16 (I believe OpenMM uses FP16 in one place, but not one where performance matters), and even FP32 takes effort to use correctly, with FP64 being used where it's needed.
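To illustrate why low precision is useless for simulations, here's a minimal sketch (not OpenMM's actual code) of what happens when you accumulate many small contributions, the way an MD integrator effectively does, in FP16 versus FP32:

```python
import numpy as np

# 100,000 tiny "force contributions" that should sum to 10.0.
increments = np.full(100_000, 0.0001)

# FP16 running sum: once the total grows large enough, the spacing between
# representable FP16 values exceeds the increment, and further additions
# round away to nothing -- the sum silently stalls.
total_fp16 = np.float16(0.0)
for x in increments:
    total_fp16 = np.float16(total_fp16 + np.float16(x))

# FP32 sum comes out essentially correct.
total_fp32 = np.float32(increments.sum(dtype=np.float32))

print(total_fp16)  # stalls far below 10.0
print(total_fp32)  # ~10.0
```

The FP16 total stalls at a small fraction of the true value, which is why all that Tensor-core throughput buys you nothing here: in a simulation, that kind of silent error isn't a rounding nuisance, it's a wrong trajectory.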
That thing runs a customized version of Ubuntu, so I'd guess it would technically be able to fold. No clue whether the GPU is on the whitelist.
Does it even come close to the 4090/5090 in CUDA core count?
That GPU is not going to be on the whitelist, but if anyone is adventurous enough to buy one and use it for something other than AI, they're more than welcome to drop the PCI ID here.