Simple Docker image to run Automatic1111 Stable Diffusion Web UI and Oobabooga Text Generation Web UI together for easy experimentation on RunPod.
Based on official RunPod container templates.
You can use the automatic-oobabooga template on RunPod to start your pod.
> **Warning**
>
> This image is only tested on A40 RunPod instances. I cannot guarantee it will work on other instances.
If you wish to use a custom template, you can use the following configuration:
- Use `v1k45/automatic-oobabooga:1.0` as the image name in RunPod instance creation.
- Mount a volume to `/workspace` to persist data between restarts.
- Expose the following ports:
  - 3000: Automatic1111 Stable Diffusion Web UI
  - 4000: Oobabooga Text Generation Web UI
  - 5000: Oobabooga Text Generation API
  - 8888: Jupyter Notebook
  - 80: NGINX
  - 22: SSH
- Set environment variables as needed. Refer to the Environment Variables section for more information.
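Taken together, the settings above roughly correspond to the following `docker run` invocation. This is a hedged sketch for local testing only: the volume path and environment values are placeholders, and RunPod normally wires up GPUs and ports for you.

```shell
# Hypothetical local equivalent of the RunPod template settings above.
# The volume path and environment values are illustrative placeholders.
docker run -d --gpus all \
  -v /path/to/volume:/workspace \
  -p 3000:3000 -p 4000:4000 -p 5000:5000 -p 8888:8888 -p 80:80 -p 22:22 \
  -e AUTOMATIC_UI_USERNAME=autobooga \
  -e OOBABOOGA_UI_USERNAME=autobooga \
  v1k45/automatic-oobabooga:1.0
```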
Docker Image: https://hub.docker.com/r/v1k45/automatic-oobabooga
This image can be used directly without any customizations to run both web UIs together. It runs services on the following ports:
| Service | Port |
|---|---|
| Oobabooga Text Generation Web UI | 4000 |
| Oobabooga Text Generation API | 5000 |
| Automatic1111 Stable Diffusion Web UI | 3000 |
| Jupyter Notebook | 8888 |
| NGINX | 80 |
| SSH | 22 |
| Variable | Description | Default |
|---|---|---|
| `AUTOMATIC_UI_USERNAME` | Username for Automatic1111 Stable Diffusion Web UI | `autobooga` |
| `AUTOMATIC_UI_PASSWORD` | Password for Automatic1111 Stable Diffusion Web UI | Randomly generated during first startup |
| `AUTOMATIC_API_KEY` | API key for Automatic1111 Stable Diffusion | Randomly generated during first startup |
| `AUTOMATIC_EXTRA_ARGS` | Extra arguments for Automatic1111 Stable Diffusion | None; provide to append to the default arguments |
| `AUTOMATIC1111_CLI_ARGS` | CLI arguments for Automatic1111 Stable Diffusion | None; provide to overwrite all arguments |
| `OOBABOOGA_UI_USERNAME` | Username for Oobabooga Text Generation Web UI | `autobooga` |
| `OOBABOOGA_UI_PASSWORD` | Password for Oobabooga Text Generation Web UI | Randomly generated during first startup |
| `OOBABOOGA_API_KEY` | API key for Oobabooga Text Generation | Randomly generated during first startup |
| `OOBABOOGA_DEFAULT_MODEL` | Default model for Oobabooga Text Generation | None; provide to download during startup |
| `OOBABOOGA_MODEL_FILE` | Model file for Oobabooga Text Generation | None; provide to select a specific file to download from the model |
| `OOBABOOGA_EXTRA_ARGS` | Extra arguments for Oobabooga Text Generation | None; provide to append to the default arguments |
| `OOBABOOGA_CLI_ARGS` | CLI arguments for Oobabooga Text Generation | None; provide to overwrite all arguments |
| `CLOUDFLARE_TUNNEL_TOKEN` | Token for Cloudflare Remote Tunnel | None; provide to enable the Cloudflare Remote Tunnel |
| `JUPYTER_PASSWORD` | Password for Jupyter Notebook | Randomly generated during first startup |
The credentials for both web UIs are randomly generated during the first startup. You can find them in the container logs, or set them manually using the environment variables.
Autogenerated credentials are printed in the logs as JSON, with the environment variable names as keys.
The credentials are then saved to `/workspace/.auth.json` in the container. You can also update the credentials in this file.
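As a sketch of working with that file, assuming the JSON keys match the environment variable names as described (the values below are made up for illustration), a single credential can be extracted like this:

```shell
# Illustrative contents of /workspace/.auth.json; real values are
# randomly generated during the first startup.
creds='{"AUTOMATIC_UI_PASSWORD": "s3cret", "OOBABOOGA_UI_PASSWORD": "p4ss"}'

# Extract one credential (python3 is used since jq may not be installed).
pass=$(printf '%s' "$creds" | python3 -c 'import json,sys; print(json.load(sys.stdin)["AUTOMATIC_UI_PASSWORD"])')
echo "$pass"
```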
The Automatic1111 Stable Diffusion Web UI is available at `http://<host>:3000`. You can log in with the credentials printed in the logs or the ones you set manually.
If you choose to expose the ports in RunPod, you can access the web UI directly from the RunPod dashboard using the **Connect** button.
You can customize the arguments for the model by setting the `AUTOMATIC_EXTRA_ARGS` environment variable. Refer to the Automatic1111 Stable Diffusion Web UI documentation for more information on the available arguments.
Example of setting the arguments:

```shell
AUTOMATIC_EXTRA_ARGS="--no-download-sd-model --do-not-download-clip"
```
You can also set the `AUTOMATIC1111_CLI_ARGS` environment variable to replace the arguments entirely. This also replaces the default arguments, including the API key.
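The append-versus-overwrite behavior can be sketched as follows; the default arguments shown here are placeholders, not the image's actual defaults (see `scripts/start.sh` for those):

```shell
# Placeholder defaults, for illustration only.
DEFAULT_ARGS="--listen --port 3000"
AUTOMATIC_EXTRA_ARGS="--no-download-sd-model"
AUTOMATIC1111_CLI_ARGS=""

if [ -n "$AUTOMATIC1111_CLI_ARGS" ]; then
  # CLI_ARGS replaces everything, including the defaults and API key.
  ARGS="$AUTOMATIC1111_CLI_ARGS"
else
  # EXTRA_ARGS is appended to the defaults.
  ARGS="$DEFAULT_ARGS $AUTOMATIC_EXTRA_ARGS"
fi
echo "$ARGS"
```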
Some extensions are pre-installed to help with the usage of the web UI. Please refer to their respective repositories for more information.
> **Note**
>
> The first load may take time because the default model is being downloaded.
The Oobabooga Text Generation Web UI is available at `http://<host>:4000`. You can log in with the credentials printed in the logs or the ones you set manually. The OpenAI-compatible API is available at `http://<host>:5000`.
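Since the API is OpenAI-compatible, a standard chat-completions request should work against port 5000. This is a hedged sketch: it assumes the API key is sent as an OpenAI-style bearer token, and `<host>` is a placeholder for your instance's host.

```shell
# Hedged example request against the OpenAI-compatible endpoint on port 5000.
# Assumes OOBABOOGA_API_KEY holds the key printed in the container logs.
curl "http://<host>:5000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OOBABOOGA_API_KEY" \
  -d '{
    "messages": [{"role": "user", "content": "Hello!"}],
    "max_tokens": 64
  }'
```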
If you choose to expose the ports in RunPod, you can access the web UI directly from the RunPod dashboard using the **Connect** button.
You can customize the arguments for the model by setting the `OOBABOOGA_EXTRA_ARGS` environment variable. Refer to the Oobabooga Text Generation Web UI documentation for more information on the available arguments.
Example of setting the arguments:

```shell
# to download a specific model
OOBABOOGA_DEFAULT_MODEL="unsloth/Llama-3.2-3B-Instruct-GGUF"
OOBABOOGA_MODEL_FILE="Llama-3.2-3B-Instruct-Q6_K.gguf"

# to set the model and loader during startup
OOBABOOGA_EXTRA_ARGS="--model unsloth_Llama-3.2-3B-Instruct-Q6_K --loader llama.cpp --n-gpu-layers=150 --n_ctx=4096"
```
You can also set the `OOBABOOGA_CLI_ARGS` environment variable to replace the arguments entirely. This also replaces the default arguments, including the API key.
No models are downloaded by default. You can download a default model during startup by setting the `OOBABOOGA_DEFAULT_MODEL` environment variable, and select a specific file within that model to download by setting the `OOBABOOGA_MODEL_FILE` environment variable.
Oobabooga Text Generation Web UI also has built-in options to download and manage models from the web UI. After downloading a model this way, you may want to set the `OOBABOOGA_EXTRA_ARGS` environment variable with the `--model` and `--loader` arguments for the model you downloaded.
Please refer to the Oobabooga Text Generation Web UI documentation for more information on its usage.
NGINX is used as a reverse proxy to serve the web UIs. It won't be of much use if you are accessing it through the RunPod dashboard.
Each service is available at its own subdomain for easy access. The subdomains are:

- `sd.<host>` for Automatic1111 Stable Diffusion Web UI
- `ob.<host>` for Oobabooga Text Generation Web UI
- `ob-api.<host>` for Oobabooga Text Generation API
- `jp.<host>` for Jupyter Notebook
The `host` is your custom domain pointing to the RunPod instance. More on this in the Cloudflare Remote Tunnel section.
RunPod instances have their own domain names assigned to them. Each exposed port is available at a subdomain of the RunPod domain with the port number as part of the subdomain.
While this is a very convenient way to access the services on shared infrastructure, updating the domain name every time you spin up a new instance can be cumbersome. You may have a workflow where you spin up a RunPod instance when you want and then delete it when you are done. Generally this is to save on costs when you are not using the instance.
If you have your own domain name, you can configure it to point to the RunPod instance when you spin it up. This way you can access the services using your own domain name. You don't have to update the domain name every time you spin up a new instance.
This is made possible by using a reverse tunnel to the RunPod instance.
Create a Remote Tunnel in Cloudflare and point the subdomains to `HTTP` and `localhost` (NGINX). You can also point to individual services directly, but it is easier to manage through NGINX.
Once you have created the tunnel, note the token and set the `CLOUDFLARE_TUNNEL_TOKEN` environment variable in the container.
Make sure to map each subdomain to `localhost` in the tunnel configuration. If your domain is `example.com`, you need to point the following subdomains to `http://localhost`:

- `sd.example.com`
- `ob.example.com`
- `ob-api.example.com`
- `jp.example.com`
See the Cloudflare documentation for more information.
You can build the image locally using the following command:

```shell
docker buildx bake
```

Change the build parameters in the `docker-bake.hcl` file as needed.
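For a one-off change, `docker buildx bake` also accepts `--set` overrides on the command line, so you don't have to edit the file. The tag below is an example value; the `*` wildcard applies the override to every target in the bake file.

```shell
# Override the image tag for all targets without editing docker-bake.hcl.
docker buildx bake --set '*.tags=local/automatic-oobabooga:dev'
```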
The Dockerfile extends NVIDIA's CUDA 11.8 devel image. It installs the required dependencies and sets up the services, and is based on the official RunPod container templates.
Python dependencies of the individual services are installed in separate virtual environments to avoid conflicts. This makes the image larger than usual.
After startup, the cloned source repositories are moved to `/workspace` so that the data persists between restarts. You can use a Network Volume to keep the models and other data even after the container is deleted.
Read `scripts/start.sh` for more information on how the services are started.