Run AUTOMATIC1111/stable-diffusion-webui on the cloud, for (almost) free
Using stable-diffusion-webui requires a lot of power. My MacBook Pro 14 with 16GB of memory and the (now over two years old!) M1 Pro chip is extremely slow at the demanding jobs: detailing, higher step counts, and bigger models like SDXL. So here is how I've been running automatic1111/webui on RunPod.io, renting a GPU of my choice by the hour.
HINT 💡: To save money, only rent a GPU for the really compute-heavy tasks, like detailers, face swappers, etc. I prefer to run most models on my M1 Pro, which takes at most a minute even for the most realistic models (e.g. EpiCRealism).
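For a rough sense of what a top-up actually buys, here is a quick back-of-the-envelope sketch. The hourly rate in it is an assumption (community-cloud 3090s have hovered around this range); always check the live price on RunPod before trusting the numbers.

```python
# Back-of-the-envelope rental cost. The hourly rate is an ASSUMPTION for a
# community-cloud RTX 3090 -- check the live price on runpod.io before relying on it.
HOURLY_RATE_USD = 0.45  # assumed rate, not an official price
BUDGET_USD = 10.00      # the amount I topped up

hours = BUDGET_USD / HOURLY_RATE_USD
print(f"${BUDGET_USD:.2f} buys roughly {hours:.0f} hours at ${HOURLY_RATE_USD:.2f}/hr")
# -> roughly 22 hours of on-demand GPU time if the assumed rate holds
```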
- Sign in to RunPod.io
- Add any amount you would like. I added $10, which is enough for A LOT (see the rough cost math above).
- Go to Community Cloud (or Secure Cloud, up to you).
- Pick a GPU. For reference, a 3090 does most of my ADetailer/ReActor tasks in under a minute (compared to my laptop's hour).
- Click the "Search for a template" box.
- Start typing "Stable Diffusion" and click "RunPod Stable Diffusion".
- Customize storage to your preference, then click Continue → Deploy. The defaults usually work for me when I use one model and some extras.
- Once deployed, you are ready to open webui. Click the first button, HTTP Service [Port 3000], to open webui, and Jupyter Lab [Port 8888] to open the notebook. (If you'd rather script generations than click around, see the API sketch after these steps.)
- To get the output images, go to the output folder in the notebook app (the second sketch below shows one way to zip them up for a single download).
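About that API sketch: webui exposes a REST API, but only when it is started with the `--api` flag, and whether the RunPod template enables that by default is something to verify in the launch settings before trying this. The URL below is a placeholder; copy the real HTTP-service address from your pod's Connect menu. This is a minimal sketch, not the template's official workflow.

```python
# Minimal sketch: call the AUTOMATIC1111 txt2img endpoint on a RunPod pod.
# Assumptions: webui was launched with --api, and the pod's HTTP service is
# reachable at the proxy URL shown in RunPod's Connect menu (placeholder below).
import base64
import requests

BASE_URL = "https://YOUR_POD_ID-3000.proxy.runpod.net"  # replace with your pod's real URL

payload = {
    "prompt": "a photo of a lighthouse at sunset, highly detailed",
    "steps": 25,
    "width": 512,
    "height": 512,
}

resp = requests.post(f"{BASE_URL}/sdapi/v1/txt2img", json=payload, timeout=300)
resp.raise_for_status()

# webui returns the generated images as base64-encoded strings under "images".
images = resp.json()["images"]
for i, img_b64 in enumerate(images):
    with open(f"txt2img_{i}.png", "wb") as f:
        f.write(base64.b64decode(img_b64))
print(f"saved {len(images)} image(s)")
```

The same pattern works for img2img and the other endpoints; with `--api` enabled, webui lists them in its interactive docs page.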
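Downloading images one by one from Jupyter gets tedious, so I find it easier to bundle the whole output folder into a single archive first. The path below is an assumption about where the template puts webui (`/workspace/stable-diffusion-webui`); check Jupyter's file browser and adjust if yours differs.

```python
# Run this in a Jupyter Lab cell on the pod to bundle all generated images
# into one archive you can download from the file browser.
# NOTE: the outputs path is an ASSUMPTION -- verify it in Jupyter's file browser.
import shutil
from pathlib import Path

outputs_dir = Path("/workspace/stable-diffusion-webui/outputs")  # assumed location
archive = shutil.make_archive("/workspace/sd_outputs", "zip", outputs_dir)
print("archive written to", archive)
# Right-click sd_outputs.zip in the Jupyter file browser and choose Download.
```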
Need help setting this up for your business? 📞 Book me for a call.