iceman1992 wrote:
So you didn't use docker? Okay, I guess I'll try it out after they figure out the overload. No use renting machines if they'll just idle.

It does use Docker. But you can start from the base image nvidia/opencl:..., which is already set up as a normal Linux system with Nvidia drivers, and just SSH in to install FAH and run it. If you have Docker-fu, you can set up your own Docker container to automate this and avoid the SSH step.
Own hardware vs. cloud computing
Moderators: Site Moderators, FAHC Science Team
Re: Own hardware vs. cloud computing
-
- Posts: 49
- Joined: Tue Mar 24, 2020 11:24 am
- Location: Finland
Vast.ai slim tutorial
These quick steps should get you started with folding on the Vast.ai platform.
1. Create an account on Vast.ai and add some funds. There should be a $1 free bonus when you add a credit card for the first time, even without adding any funds.
2. Setting up SSH
- Download PuTTY from here: https://www.chiark.greenend.org.uk/~sgt ... atest.html. Install.
- Open PuTTYgen and click 'Generate'.
- Copy everything in the 'Public key...' box. Go to https://vast.ai/console/account/ and paste the public key under 'Change SSH Key', click 'Set SSH Key'.
- In PuTTYgen, click 'Save private key' and save it in a convenient location. This will be needed when connecting.
3. Renting
- Go to https://vast.ai/console/create/. Click 'Edit image & config...'. Scroll down and select 'nvidia/opencl'. In the dropdown list, select 'devel-ubuntu18.04'.
- Choose how much disk space you want to allocate. 2.00 GB was enough for me. Click 'Select' at the bottom.
- Find yourself a machine you want to rent and choose 'RENT'. From now on you will be billed as long as the instance exists.
4. Connecting
- You will find your newly created instance at https://vast.ai/console/instances/.
- Wait for the instance to be provisioned; this may take a few minutes.
- In the top part of the instance information window you will find the 'address', for example 'ssh3.vast.ai', and the 'port number', for example '22424'.
- In PuTTY's 'Host Name' box you should enter the address in the following format: root@address. The example above would become root@ssh3.vast.ai.
- Paste the port number into the 'Port' box.
- In the Category list to the left, go to Connection --> SSH --> Auth. In the 'Private key file...' box, browse for the private key you saved in step 2.
- In the Category list, go back to Session and make sure the address and port are still the same. Select 'Open' at the bottom.
- You should receive a popup asking to accept the key. Click Yes.
- You should now be connected and greeted with the command line.
5. Installing and configuring FAH
- Update and upgrade the instance:
Code:
apt update
apt upgrade -y
apt install -y wget nano
- Install FAH. Enter your username, team number and passkey when prompted, and choose whether you want FAH to start at boot. The installer might fail to perform the post-install steps; ignore that.
Code:
wget https://download.foldingathome.org/releases/public/release/fahclient/debian-stable-64bit/v7.5/fahclient_7.5.1_amd64.deb
dpkg -i fahclient_7.5.1_amd64.deb
- Edit the config file with your own data if it hasn't been populated already, and remember every GPU needs its own slot (ask me how I know):
Code:
nano /etc/fahclient/config.xml
- To save the config file and exit nano: Ctrl+X, then Y. Example config:
Code:
<config>
  <!-- User Information -->
  <passkey v='1234567890xxxxxxxxx'/>
  <team v='123456'/>
  <user v='ItsMe'/>
  <!-- Folding Slots -->
  <slot id='0' type='GPU'/>
</config>
6. Start FAH:
Code:
FAHClient
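If you'd rather not type those steps by hand every time, the whole sequence can be bundled into one script and pasted into the instance's deploy/on-start field (or run once over SSH). This is only a sketch of the same commands as above, made non-interactive - the username, team and passkey are placeholders, and the exact name of vast.ai's script field is from memory:
Code:
#!/bin/bash
# Non-interactive version of the manual steps above.
set -e
export DEBIAN_FRONTEND=noninteractive   # stop the FAH installer from prompting

apt update
apt upgrade -y
apt install -y wget

wget https://download.foldingathome.org/releases/public/release/fahclient/debian-stable-64bit/v7.5/fahclient_7.5.1_amd64.deb
dpkg -i fahclient_7.5.1_amd64.deb || true   # post-install steps may fail in a container; that's expected

# Write the config directly instead of answering the installer's questions.
mkdir -p /etc/fahclient
cat > /etc/fahclient/config.xml <<'EOF'
<config>
  <!-- User Information -->
  <passkey v='1234567890xxxxxxxxx'/>
  <team v='123456'/>
  <user v='ItsMe'/>
  <!-- Folding Slots: one slot per GPU, e.g. add <slot id='1' type='GPU'/> for a second card -->
  <slot id='0' type='GPU'/>
</config>
EOF

# Runs in the foreground, same as the manual step; background it or use screen/tmux if you prefer.
FAHClient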
Last edited by Jorgeminator on Sat Apr 11, 2020 9:11 am, edited 1 time in total.
-
- Posts: 22
- Joined: Tue Oct 27, 2009 10:44 am
- Hardware configuration: Windows 10 Pro
AMD Threadripper 3960X (24C/48T)
2x nVIDIA RTX 2080 Ti
Re: Own hardware vs. cloud computing
A note on the economics:
I have a box that does ~6M PPD: 2x RTX 2080 Ti + an AMD Threadripper 3960X. It uses approximately 850 W. Even with the sky-high electricity rates where I live ($0.30/kWh), that works out to ~$6/day.
Yes, the hardware cost me just under $5k, but you can imagine the economics if you ran this somewhere where electricity is cheap.
The takeaway here is: the cloud providers make a lot of profit on their GPUs. If you don't need to amortize the cost of the hardware (i.e. someone bought the GPU for gaming anyway), it's only a fraction of the price to operate.
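(Back-of-the-envelope check, assuming the box really draws a steady 850 W around the clock: 850 W × 24 h ≈ 20.4 kWh/day, and 20.4 kWh × $0.30/kWh ≈ $6.12/day, or roughly $185 a month.)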
-
- Site Moderator
- Posts: 6986
- Joined: Wed Dec 23, 2009 9:33 am
- Hardware configuration: V7.6.21 -> Multi-purpose 24/7
Windows 10 64-bit
CPU:2/3/4/6 -> Intel i7-6700K
GPU:1 -> Nvidia GTX 1080 Ti
§
Retired:
2x Nvidia GTX 1070
Nvidia GTX 675M
Nvidia GTX 660 Ti
Nvidia GTX 650 SC
Nvidia GTX 260 896 MB SOC
Nvidia 9600GT 1 GB OC
Nvidia 9500M GS
Nvidia 8800GTS 320 MB
Intel Core i7-860
Intel Core i7-3840QM
Intel i3-3240
Intel Core 2 Duo E8200
Intel Core 2 Duo E6550
Intel Core 2 Duo T8300
Intel Pentium E5500
Intel Pentium E5400 - Location: Land Of The Long White Cloud
- Contact:
Re: Own hardware vs. cloud computing
Appreciate the detailed instructions, Jorgeminator
ETA:
Now ↞ Very Soon ↔ Soon ↔ Soon-ish ↔ Not Soon ↠ End Of Time
Welcome To The F@H Support Forum Ӂ Troubleshooting Bad WUs Ӂ Troubleshooting Server Connectivity Issues
-
- Posts: 390
- Joined: Sun Dec 02, 2007 4:53 am
- Hardware configuration: FX8320e (6 cores enabled) @ stock,
- 16GB DDR3,
- Zotac GTX 1050Ti @ Stock.
- Gigabyte GTX 970 @ Stock
Debian 9.
Running GPU since it came out, CPU since client version 3.
Folding since Folding began (~2000) and ran Genome@Home for a while too.
Ran Seti@Home prior to that. - Location: UK
- Contact:
Re: Own hardware vs. cloud computing
Endgame124 wrote:
v00d00 wrote:
It would be interesting to see a case study based on what could be run from an 8kW solar system as part of a household as well. If the power was generated for free and you maximised production by using low-wattage cards like those mining 1060s @ 75W, maybe into a low-power 12-24V setup using a PicoPSU - how viable would that be as a long-term, 'fire and forget' style folding solution? Connect them directly to the battery bank and not via the inverter.
I have a 9kW solar system on my home, and on average I'm producing a 6kWh/day surplus while also accounting for all my home usage, folding with a 1080 Ti, and running Rosetta at home on 4 older systems (Q9650, A10-5800K, A10-7870K, i3-370M).
I have an EVGA 1660 Super on the way to experiment with best PPD/watt - depending on what I find, I may step up to a 2060 Super.

Cheers for that. I'm not in a position to do it currently, but down the line the thought of running a solar or hybrid system to fold would interest me. I've already built several systems that run from a PicoPSU for use in campervans, mostly on ITX boards running the lowest-power Intel CPUs. With the right setup it's easy to keep power usage very low - though not low enough that you could run it 24 hours a day, for a few hours a day it's possible. One of the systems had a 1050 in it and was capable of reasonable gaming while still only using about 130W. Taking that principle and scaling up to something that could drive 4 low-power Nvidia cards would interest me, while keeping total wattage under 400W (but that would require about 5kW of solar and a 500Ah battery, based on 2 hours of solar per day, a 30% overprovision and a 24V system voltage).
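(For scale on that sizing: a 400 W load around the clock is about 9.6 kWh/day; a 500 Ah bank at 24 V stores 500 × 24 = 12 kWh, i.e. roughly a day of autonomy; and a 5 kW array producing rated output for ~2 hours a day harvests about 10 kWh/day - the same ballpark as the load, hence the extra margin in the estimate.)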
-
- Posts: 523
- Joined: Fri Mar 23, 2012 5:16 pm
Re: Own hardware vs. cloud computing
https://twitter.com/foldingathome/statu ... 1171490816
I wonder if this is applicable for vast.ai? Will this make setup easier?
Re: Own hardware vs. cloud computing
If you just select the regular "nvidia/opencl" container on vast.ai and put in a deploy script, then I think that is less work than using the folding@home container. That is, unless vast.ai puts a folding container in as one of their standard selections. I guess they might do that if someone asks?
By the way, thanks to Core 22 v0.0.13 with CUDA support, the cost calculation changes a bit: previously, under OpenCL folding, it was often best to rent a 2080 Ti, but now a pair of 1080 Tis can give better PPD per USD.
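As an aside on the deploy-script idea: FAHClient v7 also accepts its configuration as command-line options, so a deploy script doesn't strictly have to write config.xml at all. A minimal sketch with placeholder values (assuming the .deb is already installed, and that I remember the option names correctly):
Code:
FAHClient --user=YourName --team=123456 --passkey=1234567890xxxxxxxxx --gpu=true --smp=false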
Online: GTX 1660 Super + occasional CPU folding in the cold.
Offline: Radeon HD 7770, GTX 1050 Ti 4G OC, RX580
Re: Own hardware vs. cloud computing
I've tried the free tiers from AWS, Google Cloud, and Azure, and they just throttle the crap out of any serious compute.
It costs around $100/yr to run a quad-core 2.3 GHz CPU on average.
Meanwhile, my Atomic Pi units, which don't get throttled and run 4 cores at 1.7 GHz, outdid the cloud services in performance.
Their initial cost was ~$40-50/unit (lower with more units, higher with fewer), and they cost about $1.50 a month per unit to run.
I haven't tried the GPU services, but from what I read, the amount you pay for GPU compute is ridiculous!
With one year of folding on a cloud service, you could quite literally buy a top-end GPU.
Especially the RTX 3080, which is ridiculously low priced (same price as a 2070 Super, same performance as a 2080 Ti).
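(Rough numbers for that comparison, using the figures above: an Atomic Pi is about $45 up front plus $1.50 × 12 ≈ $18 a year to run, so roughly $63 in the first year versus ~$100 a year for the throttled cloud quad-core - and from the second year on it's only the $18.)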
Last edited by MeeLee on Tue Sep 29, 2020 11:52 pm, edited 1 time in total.
Re: Own hardware vs. cloud computing
GPU folding on AWS, Google Cloud, Azure and IBM Cloud is prohibitively expensive compared to folding at home, unless you're benefiting from a free offer from them. Vast.ai is not quite as terrible:
One year of folding on vast.ai with a non-interruptible 1080 Ti instance now costs about 10 cents/hour for 3M PPD in CUDA. That works out to about $876.60 per year. The 1080 Ti had an MSRP of $699 and now sells for about $400 used.
Power usage of the whole PC would be around 350 W, or 3068 kWh/year. If you're paying California rates of about 15 cents/kWh, that's about $460/year in electricity. With Norwegian electricity prices of 4 cents/kWh, it's only $123/year.
So if you're in California, buying a used 1080 Ti and folding with it yourself is a bit of a wash compared to folding on vast.ai. If you have to run an air conditioner because of folding, that pushes cloud folding ahead of folding at home with this particular card. Cloud folding is not economical in Norway, particularly because waste heat is less of an issue here - we can use it for heating.
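(Spelling out the 'wash': the cloud route is about $876 for the year, while the own-hardware route is roughly $400 for the used card plus about $460 in California electricity, i.e. about $860 in year one - and noticeably cheaper in later years once the card is paid off.)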
MeeLee wrote:
Especially the RTX 3080, which is ridiculously low priced (same price as a 2070 Super, same performance as a 2080 Ti).

Indeed. Thanks to the CUDA update, less powerful cards like the GTX 1060, GTX 1660 Ti and the P106-100 mining card have gained a boost in folding power. And the RTX 3080, at the same MSRP as the 1080 Ti, is folding at a much higher rate. So with these cards, folding at home can be much better than folding on vast.ai, even if you live in California.
Online: GTX 1660 Super + occasional CPU folding in the cold.
Offline: Radeon HD 7770, GTX 1050 Ti 4G OC, RX580
Re: Own hardware vs. cloud computing
gunnarre wrote:
Indeed. Thanks to the CUDA update, less powerful cards like the GTX 1060, GTX 1660 Ti and the P106-100 mining card have gained a boost in folding power. And the RTX 3080, at the same MSRP as the 1080 Ti, is folding at a much higher rate. So with these cards, folding at home can be much better than folding on vast.ai, even if you live in California.

How does CUDA aid in performance? My understanding is that it only helps out GPUs when you have more than 1 GPU, one of them not being fully utilized.
-
- Site Moderator
- Posts: 6986
- Joined: Wed Dec 23, 2009 9:33 am
- Hardware configuration: V7.6.21 -> Multi-purpose 24/7
Windows 10 64-bit
CPU:2/3/4/6 -> Intel i7-6700K
GPU:1 -> Nvidia GTX 1080 Ti
§
Retired:
2x Nvidia GTX 1070
Nvidia GTX 675M
Nvidia GTX 660 Ti
Nvidia GTX 650 SC
Nvidia GTX 260 896 MB SOC
Nvidia 9600GT 1 GB OC
Nvidia 9500M GS
Nvidia 8800GTS 320 MB
Intel Core i7-860
Intel Core i7-3840QM
Intel i3-3240
Intel Core 2 Duo E8200
Intel Core 2 Duo E6550
Intel Core 2 Duo T8300
Intel Pentium E5500
Intel Pentium E5400 - Location: Land Of The Long White Cloud
- Contact:
Re: Own hardware vs. cloud computing
MeeLee wrote:
...How does CUDA aid in performance? My understanding is that it only helps out GPUs when you have more than 1 GPU, one of them not being fully utilized.

Instead of using the OpenCL platform for the simulation, it will use the CUDA platform, which does provide a decent performance gain. Kepler or newer GPUs will use the CUDA functionality, and the speed-up can be from 15% to 100%: traditional Projects are in the 15% range, while the free energy Projects (which the Moonshot currently uses) will be near the 100% range. For some more details, you can read the blog post: https://foldingathome.org/2020/09/28/fo ... a-support/
ETA:
Now ↞ Very Soon ↔ Soon ↔ Soon-ish ↔ Not Soon ↠ End Of Time
Welcome To The F@H Support Forum Ӂ Troubleshooting Bad WUs Ӂ Troubleshooting Server Connectivity Issues
Re: Own hardware vs. cloud computing
OpenCL 1.2 is sufficient to process FAH assignments on either NVidia or AMD GPUs (or some others). There is little incentive to enhance this code since being Open and being universally accepted are its main objectives. On the other hand, CUDA is proprietary code that is a great competitive advantage for NVidia. It is regularly enhanced/optimized to support every feature in their latest hardware. After a number of years of upgrades, CUDA does work better than it did when OpenCL was originally distributed ... but you can't use it on your non nV GPUs.
Posting FAH's log:
How to provide enough info to get helpful support.
Re: Own hardware vs. cloud computing
Cloud folders in particular should be aware that there is a vulnerability in the FAHControl GUI before version 7.6.20 that could allow your cloud instances to execute code on your GUI machine: viewtopic.php?f=108&t=36471
Online: GTX 1660 Super + occasional CPU folding in the cold.
Offline: Radeon HD 7770, GTX 1050 Ti 4G OC, RX580
-
- Posts: 2040
- Joined: Sat Dec 01, 2012 3:43 pm
- Hardware configuration: Folding@Home Client 7.6.13 (1 GPU slots)
Windows 7 64bit
Intel Core i5 2500k@4Ghz
Nvidia gtx 1080ti driver 441
Re: Own hardware vs. cloud computing
This is more of a theoretical issue, as you would not expect your own cloud instance to be malicious and attack the PC running the FAHControl GUI. But if you update to FAHClient 7.6.21 or later, then you are also theoretically safe.
Re: Own hardware vs. cloud computing
7.6.20 is also safe from the vulnerability. 7.6.13 has the vulnerability.
If you're running an instance on Azure or AWS, then it's even less likely than your PC coming with pre-installed malware, yes. But if you rent on places like vast.ai, which is an open marketplace where anyone can sell computing, the chance is still slim but slightly higher.
Online: GTX 1660 Super + occasional CPU folding in the cold.
Offline: Radeon HD 7770, GTX 1050 Ti 4G OC, RX580