Build Your Own Private AI Image Generator: Step-by-Step with Docker Model Runner and Open WebUI
Introduction
We've all faced the frustration of relying on cloud AI image services: worrying about privacy, running out of credits, or having your creative dragon-in-a-suit rejected by overzealous filters. What if you could run a powerful image generation model entirely on your own machine, with a sleek chat interface? That's exactly what Docker Model Runner now enables. By combining it with Open WebUI, you can create your own private DALL-E—no subscriptions, no data leaving your computer, and full control. In this guide, you'll learn how to pull an image generation model, launch Open WebUI, and start generating images locally, step by step.

What You Need
- Docker Desktop (macOS) or Docker Engine (Linux) installed and running.
- At least 8 GB of free RAM for a small model (more is better for larger models).
- GPU (optional but highly recommended): NVIDIA with CUDA, Apple Silicon with MPS, or CPU fallback. A GPU significantly speeds up generation.
- Basic familiarity with the terminal or command line.
To verify your setup, run docker model version in your terminal. If you see a version number without errors, you're ready to proceed.
Step-by-Step Guide
Step 1: Pull an Image Generation Model
Docker Model Runner uses a compact packaging format called DDUF (Diffusers Unified Format) to distribute image generation models through Docker Hub—just like any other OCI artifact. This single-file format bundles everything needed: text encoder, VAE, UNet/DiT, and scheduler configuration.
Open your terminal and pull the stable-diffusion model by running:
docker model pull stable-diffusion
This downloads approximately 6.94 GB of data. Once complete, confirm the model is ready with:
docker model inspect stable-diffusion
You should see output similar to:
{
  "id": "sha256:5f60862074a4c585126288d08555e5ad9ef65044bf490ff3a64855fc84d06823",
  "tags": ["docker.io/ai/stable-diffusion:latest"],
  "created": 1768470632,
  "config": {
    "format": "diffusers",
    "architecture": "diffusers",
    "size": "6.94GB",
    "diffusers": {
      "dduf_file": "stable-diffusion-xl-base-1.0-FP16.dduf",
      "layout": "dduf"
    }
  }
}
Under the hood, Docker Model Runner stores the model locally as a DDUF file. At runtime, it unpacks and loads the components seamlessly.
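If you're curious what a DDUF file actually contains, it's a ZIP-based archive, so you can list its entries programmatically. The sketch below uses the DDUF helper from the huggingface_hub library; the file path is a placeholder, and you'd point it at wherever the .dduf file lives in Docker's local model store on your machine.
# Sketch: list the entries bundled inside a DDUF archive.
# Requires: pip install huggingface_hub (the DDUF helpers live there).
# The path below is a placeholder; substitute the actual location of the
# .dduf file in Docker's model store on your machine.
from huggingface_hub import read_dduf_file

entries = read_dduf_file("stable-diffusion-xl-base-1.0-FP16.dduf")

# Each key is a file inside the archive: text encoder, VAE, UNet weights,
# scheduler configuration, and so on.
for name in sorted(entries):
    print(name)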
Step 2: Launch Open WebUI
Here's where the magic happens. Docker Model Runner includes a built-in launch command that automatically wires up Open WebUI to your local inference endpoint. No manual configuration needed. In the same terminal, run:
docker model launch openwebui
This command will:
- Start the inference backend for the pulled model.
- Expose a 100% OpenAI-compatible API (including the POST /v1/images/generations endpoint).
- Launch Open WebUI in your default browser, ready to use.
You'll see log output as the services initialize. After a few moments, Open WebUI will open automatically at http://localhost:3000. If it doesn't, manually navigate to that address.
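If you'd rather confirm the API from the command line before (or instead of) opening the browser, a quick request to the models endpoint should list your pulled model. This is a hedged sketch: it assumes the OpenAI-compatible API is exposed at http://localhost:12434, the address referenced later in this guide; adjust the port if your setup differs.
# Sketch: query the OpenAI-compatible models endpoint exposed by
# Docker Model Runner. Assumes the API is reachable at localhost:12434
# (see Step 4); change the base URL if your installation uses another port.
import requests

resp = requests.get("http://localhost:12434/v1/models", timeout=10)
resp.raise_for_status()

# Print the IDs of the models the local endpoint reports as available.
for model in resp.json().get("data", []):
    print(model["id"])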
Step 3: Generate Your First Image
With Open WebUI open, you'll see a familiar chat interface. To generate an image, simply type a prompt in the chat box and press Enter. For example:
"A dragon wearing a business suit, photorealistic, sitting in a boardroom"

Open WebUI will send your request to the local Docker Model Runner API, which processes it using the Stable Diffusion model. The generated image will appear in the chat as a response. You can continue the conversation, refine prompts, or ask for variations—all without any cloud dependency.
Step 4: Customize and Explore
Once you've confirmed everything works, you can experiment with different models. Docker Model Runner supports other DDUF-packaged models available on Docker Hub. For example:
docker model pull another-model-name
Then point your Open WebUI instance at the new model (you may need to select it from the model picker in the interface). You can also adjust generation parameters such as image size, steps, and guidance scale by editing the configuration files or using more detailed prompts.
For power users, Docker Model Runner exposes its full API. You can integrate it with other applications by sending requests to http://localhost:12434/v1/images/generations. The API follows OpenAI's schema, making it compatible with many existing tools.
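As one possible integration sketch, the official OpenAI Python client can simply be pointed at the local endpoint. The model name, image size, and response format below are illustrative assumptions, so match them to what docker model inspect reports and to what the local backend actually supports.
# Sketch: generate an image through the local OpenAI-compatible API using
# the standard OpenAI Python client. Requires: pip install openai
import base64
from openai import OpenAI

# Point the client at the local Docker Model Runner endpoint. The api_key is
# not checked locally, but the client requires a non-empty value.
client = OpenAI(base_url="http://localhost:12434/v1", api_key="not-needed")

# Request one image; the model name and size are assumptions -- adjust them
# to the model you pulled and the resolutions it supports.
result = client.images.generate(
    model="stable-diffusion",
    prompt="A dragon wearing a business suit, photorealistic, sitting in a boardroom",
    n=1,
    size="1024x1024",
    response_format="b64_json",
)

# Save the base64-encoded image to disk.
with open("dragon.png", "wb") as f:
    f.write(base64.b64decode(result.data[0].b64_json))
Any tool that already speaks the OpenAI Images API can be redirected the same way by changing its base URL.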
Tips for Best Results
- Monitor RAM usage: If your system has less than 16 GB of RAM, consider closing other applications before generating. The model can be memory-intensive.
- Leverage GPU acceleration: If you have an NVIDIA or Apple Silicon GPU, ensure your Docker setup uses it. On macOS, Docker Desktop automatically uses Metal Performance Shaders (MPS) when available. On Linux, you may need to install nvidia-container-toolkit.
- Keep models updated: Docker Model Runner periodically releases new model versions. Use docker model pull with the --update flag to refresh.
- Use descriptive prompts: The quality of images improves with detailed, specific descriptions. Include style keywords like "photorealistic", "anime", or "oil painting" to guide the model.
- Explore the Open WebUI settings: You can customize the interface, enable dark mode, and save chat history. Check the gear icon in the bottom left corner.
- Back up your model files: DDUF files are stored in Docker's data directory. If you ever need to reinstall, you can copy them to avoid re-downloading.
- Network considerations: The initial pull requires an internet connection. After that, all generation happens offline—perfect for privacy.
With these steps, you now have a fully functional, private AI image generator running on your own hardware. Experiment, create, and enjoy the freedom of local AI.