Step-by-Step Guide to Installing Ollama, Open Web UI, and LiteLLM in Docker
⚙️ Why This Guide Matters
Setting up an AI environment with Ollama, Open Web UI, and LiteLLM using Docker can be frustrating due to scattered documentation and outdated examples. This comprehensive guide solves that by walking you through every step clearly. For deeper AI integration insights, check out our post on AI-powered smart home automation.
📦 Prerequisites and System Setup
Before diving into the Docker setup, ensure you have Docker and Docker Compose installed. Allocate enough disk space, especially if you’re mounting persistent volumes. This guide assumes basic knowledge of terminal usage and YAML. If you’re new to smart tech setups, start with our practical DIY automation guide.
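To confirm your environment is ready, you can run a quick sanity check (this assumes a recent Docker release that ships the Compose plugin, i.e. docker compose rather than the older docker-compose binary):

# Confirm Docker and the Compose plugin are installed
docker --version
docker compose version

# Check free disk space where your persistent volumes will live
df -h .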
🧠 Step 1: Deploy Ollama Container
Ollama is a self-hosted runtime for serving large language models locally. Add the following to your Docker Compose file:
services:
  ollama:
    container_name: ollama
    image: ollama/ollama
    ports:
      - '11434:11434'
    restart: unless-stopped
    volumes:
      - ./ai-server/ollama-openwebui/ollama/:/root/.ollama
This mounts model data for persistence. To explore more efficient local setups, read our post on local AI processing for smart environments.
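The container starts with no models downloaded. As a rough example, you can pull one with the Ollama CLI inside the running container; the model name llama3 below is only an illustration, so substitute whatever model you intend to serve:

# Pull a model into the running container (model name is an example)
docker exec -it ollama ollama pull llama3

# Confirm it now lives in the mounted /root/.ollama volume
docker exec -it ollama ollama list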
🌐 Step 2: Configure Open Web UI
Open Web UI allows you to interact with your models in a browser interface. Make sure to define credentials and API base properly:
  openwebui:
    container_name: openwebui
    image: ghcr.io/open-webui/open-webui:main  # official Open Web UI image
    depends_on:
      - litellm
    environment:
      - LITELLM_API_BASE=http://litellm:4000
      - LITELLM_MASTER_KEY=your_master_key
      - WEBUI_SECRET_KEY=your_secret_key
      - ENABLE_AUTH=true
      - DEFAULT_USERNAME=admin
      - DEFAULT_PASSWORD=admin123
    ports:
      - '10000:8080'
If you’re interested in integrating this into your broader home automation dashboard, learn how it fits with Home Assistant.
🐘 Step 3: Add PostgreSQL for LiteLLM
LiteLLM stores data such as prompts, responses, and analytics in a PostgreSQL database. Configure it like this:
  litellm_postgres:
    image: postgres:latest
    container_name: litellm_postgres
    environment:
      - POSTGRES_DB=litellmdb
      - POSTGRES_USER=litellmuser
      - POSTGRES_PASSWORD=your_password
    volumes:
      - /mnt/storage-system/local-storage/application-data/ai-server/litellm/postgres:/var/lib/postgresql/data
    ports:
      - '5432:5432'
Using persistent volumes ensures your database isn’t lost on container restarts. For additional database integration tips, visit our section on custom automation solutions.
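Once the container is running, a quick way to confirm the database is reachable with the credentials defined above:

# Open psql inside the container and list databases
docker exec -it litellm_postgres psql -U litellmuser -d litellmdb -c '\l'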
⚡ Step 4: Integrate Redis for Performance
Redis is used by LiteLLM for fast data caching and job queuing. It’s easy to configure:
  litellm_redis:
    image: redis:latest
    container_name: litellm_redis
    command: redis-server --requirepass your_redis_password
    volumes:
      - ./ai-server/litellm/redis:/data
    ports:
      - '6379:6379'
Caching repeated requests in Redis can noticeably reduce response latency and lighten the load on the rest of the stack. For more on smart performance tuning, explore our smart tech innovations archive.
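To verify Redis accepts the password you set, a PING should come back with PONG:

# Authenticate with the password from the compose file and ping the server
docker exec -it litellm_redis redis-cli -a your_redis_password ping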
🧬 Step 5: Deploy LiteLLM Core
LiteLLM ties the database and Redis together, acting as the proxy layer that routes inference requests and exposes a unified API:
  litellm:
    image: ghcr.io/berriai/litellm-database:main-stable
    container_name: litellm
    depends_on:
      - litellm_postgres
      - litellm_redis
    environment:
      - DATABASE_URL=postgresql://litellmuser:your_password@litellm_postgres:5432/litellmdb
      - REDIS_HOST=litellm_redis
      - REDIS_PASSWORD=your_redis_password
      - LITELLM_MASTER_KEY=your_master_key
      - SALT_KEY=your_salt_key
    ports:
      - '4000:4000'
    volumes:
      - ./ai-server/litellm/config.yaml:/app/config.yaml
    command: --config /app/config.yaml --detailed_debug
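The mounted config.yaml tells LiteLLM which models to expose. A minimal sketch that routes requests to the Ollama container from Step 1 could look like this; the model name is illustrative, so match it to whatever you pulled earlier:

model_list:
  - model_name: llama3                 # name clients will request
    litellm_params:
      model: ollama/llama3             # use the Ollama provider
      api_base: http://ollama:11434    # the Ollama container from Step 1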
If you’re building AI systems for specific family or lifestyle needs, see how others are leveraging home automation solutions.
🧪 Test and Validate the Setup
Once all containers are running, visit http://localhost:10000 and log in with the username and password you set for Open Web UI (admin / admin123 in this example). You can test API endpoints using curl or Postman. For full setup testing and custom deployment use cases, check our DIY tutorials section.
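For example, since LiteLLM exposes an OpenAI-compatible API on port 4000, a simple smoke test with curl (using the placeholder master key and the example model name from earlier) might look like:

curl http://localhost:4000/v1/chat/completions \
  -H "Authorization: Bearer your_master_key" \
  -H "Content-Type: application/json" \
  -d '{"model": "llama3", "messages": [{"role": "user", "content": "Hello"}]}'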
Frequently Asked Questions
❓What does LiteLLM do in this stack?
LiteLLM proxies calls to different LLMs and adds functionality like logging, usage limits, and analytics. It’s ideal for routing multiple model types in one unified API.
❓Can Ollama work without a GPU?
Yes, it works on CPUs too, but with slower inference. For performance tips with or without GPU, explore our detailed article on home automation setup trends.
❓Is Open Web UI required?
No, it’s optional but recommended for ease of interaction, especially for those unfamiliar with APIs or command-line tools.
❓Where are the models stored?
Models are stored in the volume mounted into the Ollama container at /root/.ollama. You can back them up or reuse them across machines.
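Because the models live in the bind-mounted directory from Step 1, a backup can be as simple as archiving that folder:

# Archive the Ollama model directory for backup or transfer to another machine
tar czf ollama-models.tar.gz ./ai-server/ollama-openwebui/ollama/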
❓Can I monitor usage and logs?
Yes, use docker logs or build a log forwarding system. For advanced monitoring, consider reading about product features for smart diagnostics.
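For instance, to follow a single container's output in real time:

# Stream the LiteLLM container logs (swap in any container name from the compose file)
docker logs -f litellm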