# Containers and Docker Compose

This project includes two Docker Compose setups and per-service Dockerfiles. Use the Make targets to keep commands short and consistent.

- `docker-compose.yml`: uses prebuilt registry images; production-like.
- `docker-compose-local.yml`: builds backend/frontend locally from the Dockerfiles.
## Services & Ports

- Services:
    - `ollama`: runs the Ollama server; persists models in the `ollama_data` volume (`/root/.ollama`).
    - `backend`: FastAPI app on port 8000; depends on a healthy Ollama; health endpoint: `GET /health`.
    - `frontend`: Streamlit UI on port 8501; depends on a healthy backend.
    - `ollama-init`: one-shot helper that pulls `gemma3:270m` after Ollama is healthy.
- Ports:
    - `11434` → Ollama (bound to `127.0.0.1` in Compose)
    - `8000` → Backend API (`0.0.0.0:8000`)
    - `8501` → Frontend (`0.0.0.0:8501`)
- Health and startup order: `depends_on` + healthchecks gate readiness.
- Persistence: the `ollama_data` volume stores downloaded models across restarts.
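The readiness gating described above can be sketched as a Compose fragment. Service names and the `ollama list` healthcheck come from this page; the interval and retry values are assumptions:

```yaml
services:
  ollama:
    image: ollama/ollama:latest
    healthcheck:
      test: ["CMD", "ollama", "list"]   # healthy once the Ollama API answers
      interval: 10s
      retries: 5
  backend:
    depends_on:
      ollama:
        condition: service_healthy      # backend waits until ollama reports healthy
```

The long-form `depends_on` with `condition: service_healthy` is what makes Compose wait for the healthcheck, not just for container start.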
## Architecture (Mermaid)

```mermaid
graph TD
    U[User Browser] -->|HTTP :8501| FE[frontend\nStreamlit]
    FE -->|calls :8000| BE[backend\nFastAPI]
    BE -->|OLLAMA_HOST http://ollama:11434| OL[ollama]
    OI[ollama-init\noneshot] -.->|pull gemma3:270m| OL
    VOL[(ollama_data volume)] --- OL
    BE -.->|depends_on healthy| OL
    FE -.->|depends_on healthy| BE
    OI -.->|depends_on healthy| OL
```
## Environment and Secrets

- Backend uses `OLLAMA_HOST=http://ollama:11434` internally; set `APP_DEBUG=false` for production-like runs.
- Streamlit secrets:
    - Local dev: mount `./src/frontend/.streamlit/secrets.toml` → `/app/.streamlit/secrets.toml` (read-only).
    - Never commit secrets.
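A minimal sketch of the read-only secrets mount (both paths come from this page; the surrounding service stanza shape is an assumption):

```yaml
services:
  frontend:
    volumes:
      # host path → container path; :ro keeps the file read-only inside the container
      - ./src/frontend/.streamlit/secrets.toml:/app/.streamlit/secrets.toml:ro
```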
## Local Development (Compose builds images locally)

- Build and run: `make local-up`
- Stop: `make local-down`
- Tail logs: `make local-logs`
- What it does:
    - Builds images from `src/backend/Dockerfile` and `src/frontend/Dockerfile`.
    - Mounts Streamlit secrets for the frontend (if present).
    - Pulls the small `gemma3:270m` model via `ollama-init`.
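The local build setup can be sketched as Compose `build` stanzas. The Dockerfile paths come from this page; the build context (`.`) is an assumption:

```yaml
services:
  backend:
    build:
      context: .                          # assumed: repo root as build context
      dockerfile: src/backend/Dockerfile
  frontend:
    build:
      context: .
      dockerfile: src/frontend/Dockerfile
```

With stanzas like these, `docker compose build` (wrapped here by `make local-up`) produces the images locally instead of pulling them from a registry.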
## Production-like (Use Registry Images)

- Start services in the background: `make compose-up`
- Stop services: `make compose-down`
- Tail logs: `make compose-logs`
- Images: `docker-compose.yml` references GHCR images:
    - `ghcr.io/mlops-2526q1-mds-upc/tikka-backend:latest`
    - `ghcr.io/mlops-2526q1-mds-upc/tikka-frontend:latest`
- Ollama: `ollama/ollama:latest` from Docker Hub.
- Makefile push targets publish to GHCR: `make push-frontend-docker`, `make push-backend-docker`
## CI/CD, Multi-Arch Builds, and VM Rollout

- Build & Deploy (GitHub Actions): `.github/workflows/deploy.yml`
    - Triggers on pushes to `main`.
    - Steps: set up QEMU/Buildx → authenticate to GHCR/GCP → build backend/frontend (multi-arch) with model download via secret → push images → SSH to the VM to run `run.sh` (pull + restart services) → post-deploy API verification.
    - Tags: `latest` and the short commit SHA.
- Tests (Code + API): `.github/workflows/tests.yml`
    - Unit tests: `uv sync` + `make test`.
    - Bruno API tests against the deployed environment run post-deploy in `deploy.yml`.
- Docs Validation: `.github/workflows/docs.yml`
    - Regenerates OpenAPI via `make api-docs`; fails if `docs/docs/api.html` is outdated; builds the MkDocs site and uploads it as an artifact.
- Multi-arch: Docker Buildx + QEMU produce `linux/amd64` and `linux/arm64` images.
- Registry:
    - `ghcr.io/mlops-2526q1-mds-upc/tikka-backend`
    - `ghcr.io/mlops-2526q1-mds-upc/tikka-frontend`
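The QEMU/Buildx multi-arch build described above can be sketched as workflow steps. The platforms and image name come from this page; the action versions, step names, and Dockerfile path are assumptions (the page lists tags as `latest` and a short SHA, so the tag expression below is illustrative only):

```yaml
# Sketch of a multi-arch build-and-push job fragment (not the project's actual workflow)
- name: Set up QEMU
  uses: docker/setup-qemu-action@v3
- name: Set up Buildx
  uses: docker/setup-buildx-action@v3
- name: Build and push backend image
  uses: docker/build-push-action@v6
  with:
    file: src/backend/Dockerfile          # assumed path
    platforms: linux/amd64,linux/arm64    # per the multi-arch note above
    push: true
    tags: |
      ghcr.io/mlops-2526q1-mds-upc/tikka-backend:latest
      ghcr.io/mlops-2526q1-mds-upc/tikka-backend:${{ github.sha }}
```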
## Production Compose Services (Summary)

- `ollama`:
    - Image: `ollama/ollama:latest`
    - Env: `OLLAMA_HOST=0.0.0.0`
    - Volume: `ollama_data:/root/.ollama`
    - Healthcheck: `ollama list`
    - Restart: `unless-stopped`
- `backend`:
    - Image: `ghcr.io/mlops-2526q1-mds-upc/tikka-backend:latest`
    - Env: `OLLAMA_HOST=http://ollama:11434`, `APP_DEBUG=${APP_DEBUG:-false}`
    - Depends on: `ollama` healthy
    - Healthcheck: `GET http://127.0.0.1:8000/health`
    - Restart: `unless-stopped`
- `frontend`:
    - Image: `ghcr.io/mlops-2526q1-mds-upc/tikka-frontend:latest`
    - Ports: `0.0.0.0:8501:8501`
    - Depends on: `backend` healthy
    - Restart: `unless-stopped`
- `ollama-init`:
    - Image: `ollama/ollama:latest`
    - Env: `OLLAMA_HOST=http://ollama:11434`
    - Depends on: `ollama` healthy
    - Entrypoint: `ollama pull gemma3:270m`
    - Restart: `no`
- Volumes: `ollama_data: { driver: local }`
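The summary above maps onto a Compose file roughly like the following sketch. Images, env vars, ports, and restart policies come from this page; the healthcheck command shapes and timing defaults are assumptions:

```yaml
services:
  ollama:
    image: ollama/ollama:latest
    environment:
      - OLLAMA_HOST=0.0.0.0
    ports:
      - "127.0.0.1:11434:11434"   # Ollama kept on loopback, per the Ports section
    volumes:
      - ollama_data:/root/.ollama
    healthcheck:
      test: ["CMD", "ollama", "list"]
    restart: unless-stopped

  backend:
    image: ghcr.io/mlops-2526q1-mds-upc/tikka-backend:latest
    environment:
      - OLLAMA_HOST=http://ollama:11434
      - APP_DEBUG=${APP_DEBUG:-false}
    ports:
      - "0.0.0.0:8000:8000"
    depends_on:
      ollama:
        condition: service_healthy
    healthcheck:
      # assumed: curl is available in the backend image
      test: ["CMD", "curl", "-f", "http://127.0.0.1:8000/health"]
    restart: unless-stopped

  frontend:
    image: ghcr.io/mlops-2526q1-mds-upc/tikka-frontend:latest
    ports:
      - "0.0.0.0:8501:8501"
    depends_on:
      backend:
        condition: service_healthy
    restart: unless-stopped

  ollama-init:
    image: ollama/ollama:latest
    environment:
      - OLLAMA_HOST=http://ollama:11434
    depends_on:
      ollama:
        condition: service_healthy
    entrypoint: ["ollama", "pull", "gemma3:270m"]
    restart: "no"                 # one-shot: exits after the model is pulled

volumes:
  ollama_data:
    driver: local
```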
## VM Deployment Details

- Deploy script on VM: `~/run.sh` performs stop → cleanup → prune → pull → up in the `~/tikkamasalai` folder, then validates health checks.
- Nginx config: `~/tikkamasalai/nginx/conf.d/default.conf` is mounted into the container; Certbot volumes `./certbot/www` and `./certbot/conf`.
- TLS: certificates under `/etc/letsencrypt/live/tikkamasalai.tech-0001/` are used for the app and API server blocks.
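The nginx/Certbot mounts above suggest a service stanza roughly like this sketch. Only the mounted host paths come from this page; the service name, image, and port mappings are assumptions:

```yaml
# Hypothetical nginx service stanza for the VM compose file
services:
  nginx:
    image: nginx:latest             # assumed image
    ports:
      - "80:80"                     # assumed: HTTP for ACME challenges + redirect
      - "443:443"                   # assumed: TLS for app and API server blocks
    volumes:
      - ~/tikkamasalai/nginx/conf.d/default.conf:/etc/nginx/conf.d/default.conf:ro
      - ./certbot/www:/var/www/certbot        # ACME webroot
      - ./certbot/conf:/etc/letsencrypt       # certs, e.g. live/tikkamasalai.tech-0001/
```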
## See Also
- Deployment strategy, registry rollout, and API details: Deployment