Blog post: "Unified GPU/SMP benchmarking scheme ..."

Moderators: Site Moderators, FAHC Science Team

bruce
Posts: 20824
Joined: Thu Nov 29, 2007 10:13 pm
Location: So. Cal.

Re: Blog post: "Unified GPU/SMP benchmarking scheme ..."

Post by bruce »

somata wrote:
bruce wrote:Based on forum reports, there have been several recent revisions of AMD drivers which have contained bugs that interfere with the reliable use of OpenCL for AMD GPUs. Until AMD resolves those issues, there's little incentive for a researcher to release new projects for AMD.
Does that not imply the current AMD projects are using a (possibly severely) simplified model, one that fits within the current limitations of AMD's OpenCL platform? So simple, it would seem, that despite having >1 PFLOPS available for the platform nobody bothers to run new projects on it because the results are... less than desired?
Probably not, since the FahCore was designed to work with OpenCL Version X.XX. Then it's AMD's (or NVidia's) responsibility to deliver drivers that work with whatever the software can throw at them. Moreover, if the model worked successfully with driver version Y.YY and fails with version (Y.YY+1), that would seem to imply that when the drivers were optimized to work better with some new games, additional OpenCL bugs were introduced that were not caught by driver testing.

It should also be noted that new drivers must be tested on ALL operating systems. Just because an OpenCL core works on Windows 7 (32-bit), that doesn't prove it will work on Ubuntu or on Windows 7 (64-bit). Providing dependable drivers is a time-consuming (and expensive) process, and there is always a natural conflict with the ever-present pressures to cut costs.
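
To illustrate what "designed to work with OpenCL Version X.XX" amounts to in practice, here is a minimal sketch (my illustration, not actual FahCore source) of how a program can ask the installed driver which OpenCL version and driver build it is actually getting, which is exactly the information that changes when a new driver release introduces a regression:

Code: Select all
/* Toy example: query the first OpenCL platform for its version
 * strings. Not FahCore code; error handling kept minimal. */
#include <stdio.h>
#include <CL/cl.h>

int main(void) {
    cl_platform_id plat;
    cl_uint n = 0;
    if (clGetPlatformIDs(1, &plat, &n) != CL_SUCCESS || n == 0) {
        fprintf(stderr, "no OpenCL platform found\n");
        return 1;
    }
    char buf[256];
    clGetPlatformInfo(plat, CL_PLATFORM_VERSION, sizeof buf, buf, NULL);
    printf("platform: %s\n", buf);   /* e.g. "OpenCL 1.1 ..." */

    cl_device_id dev;
    if (clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL) == CL_SUCCESS) {
        clGetDeviceInfo(dev, CL_DRIVER_VERSION, sizeof buf, buf, NULL);
        printf("driver:   %s\n", buf);  /* the build that changes release to release */
    }
    return 0;
}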
somata wrote:I guess it's just a shame that Nvidia had to get everyone hooked on CUDA instead of endorsing an open standard like OpenCL. Ok, so it appears there are problems with AMD's OpenCL drivers, but what about Nvidia's? When GPU3 was introduced I had hoped there would be "one core to rule them all" based on OpenCL and it would run seamlessly on either platform. That way PG's attention could be focused on maintaining as few cores as possible. But nooo, apparently GPGPU is still too immature to get correct behavior/good performance if you abstract too far from the hardware, so CUDA is still preferred on Nvidia, completely defeating the purpose of OpenCL. :roll:
NVidia spent a lot of money developing CUDA and ATI did not. That turns out to have been a wise decision on NVidia's part. In many respects, CUDA and OpenCL are very similar, with maybe 90% of the internal code being identical. I predict that they will always keep maybe 10% of extra features to maintain the competitive advantage that they gained from their investment in a proprietary language. (Maybe it's 80/20, not 90/10.) As long as the FahCore doesn't use those extra 10-20% of features, it's very easy to convert a FahCore between CUDA and OpenCL, and there's a strong reason to believe that if CUDA works, the OpenCL core will too.
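
To make the 90% concrete, compare the same toy kernel (a vector add, standing in for a real force kernel; this is my sketch, not FahCore source) in the two languages:

Code: Select all
/* CUDA: one thread per element. */
__global__ void vadd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

/* OpenCL C: only the qualifiers and the thread-index query differ. */
__kernel void vadd(__global const float *a, __global const float *b,
                   __global float *c, int n) {
    int i = get_global_id(0);
    if (i < n) c[i] = a[i] + b[i];
}

The host-side setup code differs more, but the computational kernels, where the real work lives, translate almost mechanically.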

I have not seen any recent announcements from PG about the direction of their GPU cores except that they're working on something that will be called FahCore_17. Whatever it turns out to be will not be released by OpenMM/PG until they're confident that it works with current drivers, whether that's NVidia or AMD or both, along with whatever version of OpenCL and/or CUDA that we will have at that point.
#64-Lux
Posts: 18
Joined: Sat Jan 26, 2008 5:35 pm

Re: Blog post: "Unified GPU/SMP benchmarking scheme ..."

Post by #64-Lux »

Unfortunately for FAH, the state of OpenCL support is generally poor.
keldor314 wrote:Nvidia's OpenCL implementation really stinks. I don't believe they've even touched it since the early Fermi days, at least no more than necessary to have it run (somewhat) on new hardware. I suspect they're trying to play politics and sink the API so that people use CUDA. Or something. Their OpenCL support before Fermi was actually pretty decent, so yeah. In fact, my old GTX 295 outperforms my GTX 680 on highly compute-bound code, which, given the code (very small working set, lots of arithmetic), makes no sense whatsoever.
As for AMD, their OpenCL implementation is merely unstable. I've had it crash the computer (not just the program, the whole OS, forcing a reboot) with code that runs perfectly fine on Nvidia hardware (apart from incorrect results due to an unrelated regression from *2 years ago* that they still haven't fixed, but never mind that...). Actually, that particular piece of code breaks Tesla too, but it does run on Fermi and Kepler. This is on an older card (HD 5770, so Juniper), so maybe it works on GCN, but it's still not good.
My dad also ported my work to OS X, and has had his own set of problems with bugs in OpenCL, some quite severe. I think it took Apple 2 months to get the new MacBook Pro to even get through OpenCL initialization without crashing hard, forcing a reboot. On both CPU and GPU devices.
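
For anyone wondering what "highly compute bound" means in that first quote, it's code shaped roughly like this (my own toy kernel, not keldor314's program): a tiny working set held in registers and thousands of arithmetic operations per memory access, so the measurement is pure ALU throughput.

Code: Select all
/* Hypothetical OpenCL microkernel: one float in registers,
 * one multiply-add per iteration, a single store at the end. */
__kernel void alu_burn(__global float *out, int iters) {
    float x = (float)get_global_id(0) * 1e-6f;
    for (int i = 0; i < iters; ++i)
        x = x * 1.0000001f + 0.5f;
    out[get_global_id(0)] = x;
}
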
Another person argues that GPGPU is a passing phenomenon (or has already passed):
Nick wrote:That really shows NVidia is throwing the baby out with the bath water when it comes to consumer GPGPU. For it to be anywhere near useful, a standard is needed that lets developers write code only once instead of spending time tweaking for specific architectures. I'm afraid HSA will suffer the same fate. AMD is investing a lot of money into it, but unless they get NVIDIA and Intel on board, it's not going to pay off. AVX2, on the other hand, will do great since it can be used to implement any compute language and auto-vectorize any legacy code.
That said, NVIDIA's apparent decision to no longer invest in consumer GPGPU hardly matters to my conclusion about ray-tracing performance. Even the HD 7970 only outperforms the CPU by 3x while consuming 2x more power and costing more than that CPU (and you still need a CPU anyway, so you can't look at the GPU's power and cost in isolation). And again, it will look worse against Haswell.
Besides, regardless of how much an IHV's implementation of OpenCL "stinks", it's really sad how poorly these GPUs perform against the CPU, for which OpenCL wasn't even developed in the first place. So many TFLOPS, so little result. It tells us something about how fragile the GPU architecture and its driver really are when used for something outside their comfort zone. Meanwhile the CPU is making big strides to fix its weaknesses at a relatively low cost, and there's plenty more potential.
Both quotes are taken from this very interesting thread.
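
To make Nick's AVX2 point concrete, here is my sketch, not his code: the loop below is plain scalar C, yet compiled with something like gcc -O3 -mavx2 the compiler emits 8-wide packed AVX2 instructions for it on its own, with no CUDA, no OpenCL, and no source changes.

Code: Select all
/* Scalar C; an AVX2-enabled compiler vectorizes this loop to
 * process 8 floats per instruction. "restrict" promises the
 * arrays don't overlap, which helps the auto-vectorizer. */
void saxpy(float a, const float *restrict x, float *restrict y, int n) {
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}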

Edit: To add - at one point in time AMD/ATI had some FAH code incorporated into their automated regression testing of drivers. I get the impression that's no longer the case.
JimF
Posts: 651
Joined: Thu Jan 21, 2010 2:03 pm

Re: Blog post: "Unified GPU/SMP benchmarking scheme ..."

Post by JimF »

These discussions, led by consumers and a few programmers, inevitably start from the premise that an open standard (e.g., OpenCL) is Good, and a proprietary one (e.g. CUDA) is Bad, and that Nvidia is being mean to us by keeping it proprietary. Conversely, the sun would shine and all would be good if there were one universal standard.

But why? The only reason Nvidia developed CUDA (or that AMD supports OpenCL, for that matter) is the belief that it will help them make money. In effect, Nvidia is willing to spend the development dollars on proprietary software (and supporting hardware) that will allow them to gain a performance advantage in the market. If they opened that up, they would lose an incentive to support it. Conversely, by supporting OpenCL, AMD is saying that they don't want to bear all the development costs of a proprietary technique, and are willing to accept less optimal performance to save some money. Why is that necessarily good?

I am not going to prejudge the issue (I have both types of cards for different purposes), but note in passing that it is not entirely clear that both companies will be in the graphics business in the next few years. Intel is catching up for most relevant desktop purposes, as we all know, leaving narrower markets for the GPU companies. Whether both can survive is an interesting marketing question, but if Nvidia were to go away, how useful would the OpenCL standard be? You would have only one company (AMD) supporting it on dedicated graphics cards, and it would be by all accounts 10 percent less efficient than CUDA would have been (maybe far less efficient in some applications - I am not a programmer).

I suggest that everyone avoid ideological positions that will have no relationship to the final outcome anyway, since that will be determined by market forces beyond our control.
bruce
Posts: 20824
Joined: Thu Nov 29, 2007 10:13 pm
Location: So. Cal.

Re: Blog post: "Unified GPU/SMP benchmarking scheme ..."

Post by bruce »

I, too, try to stay away from ideological positions. I, too, have GPUs from both NVidia and AMD/ATI (and from Intel).

AMD isn't the only company supporting OpenCL, though their support is critical; it's more accurate to say that OpenCL is the only option AMD supports. NVidia reportedly supports both CUDA and OpenCL (with an obvious bias toward CUDA). Intel also claims to support OpenCL on their recent GPUs, though I don't think it's quite up to the task yet.

If a bug-free version of OpenCL is available from NVidia (and the FAH performance approaches that of CUDA), and if a bug-free version of OpenCL is available from AMD, I'm sure Stanford would be happy to support them both with a single FahCore. It also seems quite likely that if a bug-free version of OpenCL is available from Intel, we could expect support for Intel GPUs, too. (Wouldn't that be nice.) The only official word from FAH regarding Intel, however, was a long time ago and was that the Intel graphics chipsets were not powerful enough to run FAH. Nevertheless, it's interesting to speculate whether they'd say the same thing about current hardware if it can be made to work.
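
That single-FahCore scenario is what the OpenCL host API was designed to allow. A minimal sketch (my illustration, not FAH code) of how one binary can discover whichever vendor's GPU happens to be installed:

Code: Select all
/* Enumerate every OpenCL platform (NVidia, AMD, Intel, ...)
 * and report any that expose a GPU device. */
#include <stdio.h>
#include <CL/cl.h>

int main(void) {
    cl_platform_id plats[8];
    cl_uint np = 0;
    clGetPlatformIDs(8, plats, &np);
    for (cl_uint p = 0; p < np; ++p) {
        char name[256];
        clGetPlatformInfo(plats[p], CL_PLATFORM_NAME, sizeof name, name, NULL);
        cl_device_id dev;
        cl_uint nd = 0;
        if (clGetDeviceIDs(plats[p], CL_DEVICE_TYPE_GPU, 1, &dev, &nd) == CL_SUCCESS
            && nd > 0)
            printf("usable GPU on platform: %s\n", name);
    }
    return 0;
}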
Jesse_V
Site Moderator
Posts: 2850
Joined: Mon Jul 18, 2011 4:44 am
Hardware configuration: OS: Windows 10, Kubuntu 19.04
CPU: i7-6700k
GPU: GTX 970, GTX 1080 TI
RAM: 24 GB DDR4
Location: Western Washington

Re: Blog post: "Unified GPU/SMP benchmarking scheme ..."

Post by Jesse_V »

JimF wrote:Conversely, the sun would shine and all would be good if there were one universal standard.
Oh really? http://xkcd.com/927/
F@h is now the top computing platform on the planet and nothing unites people like a dedicated fight against a common enemy. This virus affects all of us. Let's end it together.
somata
Posts: 10
Joined: Mon Jun 16, 2008 2:13 am

Re: Blog post: "Unified GPU/SMP benchmarking scheme ..."

Post by somata »

JimF wrote:These discussions, led by consumers and a few programmers, inevitably start from the premise that an open standard (e.g., OpenCL) is Good, and a proprietary one (e.g. CUDA) is Bad, and that Nvidia is being mean to us by keeping it proprietary. Conversely, the sun would shine and all would be good if there were one universal standard.
Terms like "Good" and "Bad" are so arbitrary as to be useless. Good for what? Of course there are advantages to a proprietary standard, namely speed and full feature support. But the advantages to an open standard are development efficiency and a broader user base. So why might I justifiably think CUDA is bad? Because I can't use it! :wink: I think the reason OpenCL is having trouble gaining momentum is that there just aren't enough truly compute-bound applications (and even fewer that map well to GPGPU) in the desktop space to really take advantage of the potentially greater user base. CUDA seems to fill the current niche and will probably continue to for as long as GPGPU remains a niche technology, which seems likely for the foreseeable future.
JimF wrote:I am not going to prejudge the issue (I have both types of cards for different purposes), but note in passing that it is not entirely clear that both companies will be in the graphics business in the next few years. Intel is catching up for most relevant desktop purposes, as we all know, leaving narrower markets for the GPU companies.
For desktop purposes Intel graphics have always been fine; it's only ever been gaming where they suffered. For all the progress Intel has made, nothing on their CPU or IGP roadmaps even approaches the 3+ TFLOPS and nearly 200 GB/s of memory bandwidth available on high-end, consumer-grade video cards. Of course, those specs won't be enough to save discrete GPUs from market forces if we start running out of mainstream applications to utilize all that power. I hope that won't happen but fear that it will, especially given the onset of the so-called post-PC era.
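
A back-of-envelope ratio (my arithmetic, using the round numbers above) also shows why those peak figures are hard to cash in:

Code: Select all
3e12 FLOP/s / 200e9 B/s = 15 FLOP per byte = 60 FLOP per 4-byte float

Any kernel doing fewer than roughly 60 arithmetic operations per float it reads is limited by memory bandwidth and never sees the advertised TFLOPS, which is part of why so few desktop applications benefit.
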
#64-Lux wrote:Another person argues that GPGPU is passing phenomenon (or has already passed)
I generally share that view. As I alluded to above, while for some things GPUs are and will continue to be vastly faster than any CPU, there just aren't enough of those applications to bring GPGPU out of niche status. Applications that only saw marginal improvements with GPGPU, like video encoding, will eventually be eclipsed by CPUs if they haven't been already.

Currently FaH happens to be the sort of application that can greatly benefit from GPGPU (at least for Nvidia), but there are many converging trends in the computer industry that make me fear for the future of FaH and performance junkies like myself. :wink:
mmonnin
Posts: 324
Joined: Wed Dec 05, 2007 1:27 am

Re: Blog post: "Unified GPU/SMP benchmarking scheme ..."

Post by mmonnin »

JimF wrote:These discussions, led by consumers and a few programmers, inevitably start from the premise that an open standard (e.g., OpenCL) is Good, and a proprietary one (e.g. CUDA) is Bad, and that Nvidia is being mean to us by keeping it proprietary. Conversely, the sun would shine and all would be good if there were one universal standard.

But why? The only reason Nvidia developed CUDA (or that AMD supports OpenCL, for that matter) is the belief that it will help them make money. In effect, Nvidia is willing to spend the development dollars on proprietary software (and supporting hardware) that will allow them to gain a performance advantage in the market. If they opened that up, they would lose an incentive to support it. Conversely, by supporting OpenCL, AMD is saying that they don't want to bear all the development costs of a proprietary technique, and are willing to accept less optimal performance to save some money. Why is that necessarily good?
One could make a similar analogy with Windows and Linux. Both have their benefits and drawbacks.
Ben_Lamb
Posts: 28
Joined: Sun Sep 30, 2012 10:41 am

Re: Blog post: "Unified GPU/SMP benchmarking scheme ..."

Post by Ben_Lamb »

What's happened to the GPU QRB? This problem at Stanford's end seems to be taking an awfully long time to resolve, and I can't really see what the problem could be, considering that 8057 was dishing out the bonus without problems. Is this backpedaling by Stanford, who have realized it wasn't the best idea after all?
Last edited by Ben_Lamb on Thu Dec 27, 2012 5:33 pm, edited 1 time in total.
bruce
Posts: 20824
Joined: Thu Nov 29, 2007 10:13 pm
Location: So. Cal.

Re: Blog post: "Unified GPU/SMP benchmarking scheme ..."

Post by bruce »

Ben_Lamb wrote:What's happened to the GPU QRB? This problem at Stanford's end seems to be taking an awfully long time to resolve, and I can't really see what the problem could be, considering that 8057 was dishing out the bonus without problems. Is this backpedaling by Stanford, who have realized it wasn't the best idea after all?
Please don't expect an answer to that question. If you had joined the beta team, you'd know there's no answer to your question and you wouldn't even have asked it. You can't get around those facts by asking in a public forum.

The beta test was NOT just about points or bonuses. You can read all the information that's available, so I won't repeat any of it here.

Your public denunciation of Stanford for suspected "backpedaling" is trolling/flaming and doesn't belong in this forum. Nobody can prove you either right or wrong and I predict that Stanford isn't going to respond.
Ben_Lamb
Posts: 28
Joined: Sun Sep 30, 2012 10:41 am

Re: Blog post: "Unified GPU/SMP benchmarking scheme ..."

Post by Ben_Lamb »

I dared ask a question - right or wrong doesn't come into it.

As for trolling/flaming - whatever - I don't care anymore as I won't be posting again - it's a waste of time with attitudes like this.
Joe_H
Site Admin
Posts: 7940
Joined: Tue Apr 21, 2009 4:41 pm
Hardware configuration: Mac Pro 2.8 quad 12 GB smp4
MacBook Pro 2.9 i7 8 GB smp2
Location: W. MA

Re: Blog post: "Unified GPU/SMP benchmarking scheme ..."

Post by Joe_H »

The PG has posted exactly what the status of QRB for GPU folding is, in the blog post of Dec. 6th:
However, Quick Return Bonus for the GPU clients has not been introduced at this stage, but will be introduced once we work out an issue on our side.
As for your question, as phrased it can be read exactly as bruce characterized it. If you wanted a different response, then consider using words with less emotional baggage if you do come back and post.

In any case, given the date that was announced, I did not expect anything new to show up until sometime next year. Based on my experience of working for many years at a university, not any inside knowledge of PG, it would be unreasonable to expect this issue to be resolved in 3 weeks at this time of year. This period includes end-of-semester teaching, final exams, and grading for a start. Then faculty and grad students tend to take vacations or trips home to visit family during the holidays. I expect only a minimal crew is available to monitor the folding servers, let alone make changes, so putting together meetings to work on new issues can be difficult at best. Add that F@H now has research groups at 7 or 8 different schools, and coordinating changes to fix an issue involves additional scheduling problems, as not all schools follow a common academic schedule.

iMac 2.8 i7 12 GB smp8, Mac Pro 2.8 quad 12 GB smp6
MacBook Pro 2.9 i7 8 GB smp3
k1wi
Posts: 909
Joined: Tue Sep 22, 2009 10:48 pm

Re: Blog post: "Unified GPU/SMP benchmarking scheme ..."

Post by k1wi »

Joe_H wrote:Then faculty and grad students tend to take vacations or trips home to visit family during the holidays.
I envy those grad students!
jimerickson
Posts: 533
Joined: Tue May 27, 2008 11:56 pm
Hardware configuration: Parts:
Asus H370 Mining Master motherboard (X2)
Patriot Viper DDR4 memory 16gb stick (X4)
Nvidia GeForce GTX 1080 gpu (X16)
Intel Core i7 8700 cpu (X2)
Silverstone 1000 watt psu (X4)
Veddha 8 gpu miner case (X2)
Thermaltake hsf (X2)
Ubit riser card (X16)
Location: ames, iowa

Re: Blog post: "Unified GPU/SMP benchmarking scheme ..."

Post by jimerickson »

soon...
Ben_Lamb
Posts: 28
Joined: Sun Sep 30, 2012 10:41 am

Re: Blog post: "Unified GPU/SMP benchmarking scheme ..."

Post by Ben_Lamb »

Joe_H wrote:The PG has posted exactly what the status of QRB for GPU folding is, in the blog post of Dec. 6th:
However, Quick Return Bonus for the GPU clients has not been introduced at this stage, but will be introduced once we work out an issue on our side.
As for your question, as phrased it can be read exactly as bruce characterized it. If you wanted a different response, then consider using words with less emotional baggage if you do come back and post.

In any case, given the date that was announced, I did not expect anything new to show up until sometime next year. Based on my experience of working for many years at a university, not any inside knowledge of PG, it would be unreasonable to expect this issue to be resolved in 3 weeks at this time of year. This period includes end-of-semester teaching, final exams, and grading for a start. Then faculty and grad students tend to take vacations or trips home to visit family during the holidays. I expect only a minimal crew is available to monitor the folding servers, let alone make changes, so putting together meetings to work on new issues can be difficult at best. Add that F@H now has research groups at 7 or 8 different schools, and coordinating changes to fix an issue involves additional scheduling problems, as not all schools follow a common academic schedule.
Apologies all round - my response was childish and my questions were poorly worded.

I must admit I have got my knickers in a twist over this whole GPU QRB thing, and I had grown unreasonably impatient waiting for it to be rolled out. My mention of backpedaling wasn't intended to be inflammatory; I have been genuinely considering the possibility that the QRB could be cancelled due to unforeseen consequences of its implementation that may have come to light. I am not an academic and have no knowledge of how things work at a university; all I know is the stuff at my end, i.e. a hungry electricity meter and expensive hardware which I am unsure about, leading me to fish for opinions on what is going on. Deep down I know only the Pande Group knows, and they won't be divulging such information.
mmonnin
Posts: 324
Joined: Wed Dec 05, 2007 1:27 am

Re: Blog post: "Unified GPU/SMP benchmarking scheme ..."

Post by mmonnin »

Some day, maybe 'soon' ;), the beta WUs such as 8057 will return without the beta part, and the QRB will return along with it.