---
title: Docker
description: Deploy Afilmory using Docker for consistent, containerized deployments.
createdAt: 2025-07-20T22:35:03+08:00
lastModified: 2025-11-23T19:40:52+08:00
order: 51
---

# Docker Deployment

Deploy Afilmory using Docker for consistent, containerized deployments. Perfect for self-hosting or deploying to container platforms.

## Prerequisites

- Docker installed
- Docker Compose (optional, recommended)
- PostgreSQL database (for the SSR app with backend features)

## Quick Start

### Option 1: Use the Pre-built Docker Setup

Fork the [Afilmory Docker repository](https://github.com/Afilmory/docker) and customize it:

1. Clone the repository
2. Configure `config.json` and `builder.config.ts`
3. Set environment variables in `.env`
4. Build and run

### Option 2: Build from Source

Build your own Docker image from the Afilmory source.

## Configuration

### config.json

Configure your site settings:

```json
{
  "name": "Your Photo Gallery",
  "title": "Your Photo Gallery",
  "description": "Capturing beautiful moments in life",
  "url": "https://your-domain.com",
  "accentColor": "#fb7185",
  "author": {
    "name": "Your Name",
    "url": "https://your-website.com",
    "avatar": "https://your-avatar-url.com/avatar.png"
  },
  "social": {
    "twitter": "@yourusername"
  }
}
```

### builder.config.ts

Configure storage and builder settings:

```typescript
import { defineBuilderConfig, githubRepoSyncPlugin } from '@afilmory/builder'

export default defineBuilderConfig(() => ({
  storage: {
    provider: 's3',
    bucket: process.env.S3_BUCKET_NAME!,
    region: process.env.S3_REGION!,
    accessKeyId: process.env.S3_ACCESS_KEY_ID!,
    secretAccessKey: process.env.S3_SECRET_ACCESS_KEY!,
    prefix: 'photos/',
  },
  plugins: [
    githubRepoSyncPlugin({
      repo: {
        enable: true,
        url: process.env.BUILDER_REPO_URL!,
        token: process.env.GIT_TOKEN!,
        branch: process.env.BUILDER_REPO_BRANCH || 'main',
      },
    }),
  ],
}))
```

### .env

Set environment variables:

```bash
# Database (for SSR with backend)
PG_CONNECTION_STRING=postgresql://user:password@postgres:5432/afilmory

# Storage (S3 example)
S3_ACCESS_KEY_ID=your_access_key
S3_SECRET_ACCESS_KEY=your_secret_key
S3_BUCKET_NAME=your_bucket
S3_REGION=us-east-1

# Git Repository Cache (optional)
GIT_TOKEN=your_github_token
BUILDER_REPO_URL=https://github.com/username/gallery-cache
BUILDER_REPO_BRANCH=main

# Application
NODE_ENV=production
PORT=3000
```

## Dockerfile

Create a `Dockerfile` in your project root:

```dockerfile
# Base stage
FROM node:20-alpine AS base
WORKDIR /app
RUN corepack enable

# Builder stage
FROM base AS builder
RUN apk update && apk add --no-cache git perl

# Copy source (or clone from repo)
COPY . .

# Install dependencies
RUN pnpm install --frozen-lockfile

# Build the application
RUN pnpm run build:manifest
RUN pnpm --filter @afilmory/ssr build

# Runner stage
FROM base AS runner
WORKDIR /app
ENV NODE_ENV=production
RUN apk add --no-cache curl wget

# Create non-root user
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
USER nextjs

# Copy build output
COPY --from=builder --chown=nextjs:nodejs /app/apps/ssr/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/apps/ssr/.next/static /app/apps/ssr/.next/static
COPY --from=builder --chown=nextjs:nodejs /app/apps/ssr/public /app/apps/ssr/public

EXPOSE 3000
CMD ["node", "apps/ssr/server.js"]
```

## Docker Compose

Create `docker-compose.yml`:

```yaml
version: '3.8'

services:
  afilmory:
    build: .
    ports:
      - '3000:3000'
    environment:
      - NODE_ENV=production
    env_file:
      - .env
    volumes:
      - ./config.json:/app/config.json:ro
      - ./builder.config.ts:/app/builder.config.ts:ro
    depends_on:
      - postgres
    restart: unless-stopped

  postgres:
    image: postgres:15-alpine
    environment:
      POSTGRES_DB: afilmory
      POSTGRES_USER: afilmory
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    restart: unless-stopped

volumes:
  postgres_data:
```

## Building and Running

### Using Docker Compose (Recommended)

```bash
# Build and start all services
docker-compose up -d

# View logs
docker-compose logs -f afilmory

# Stop services
docker-compose down
```

### Manual Docker Build

```bash
# Build the image
docker build -t afilmory .

# Run the container
docker run -d \
  --name afilmory \
  -p 3000:3000 \
  --env-file .env \
  -v $(pwd)/config.json:/app/config.json:ro \
  -v $(pwd)/builder.config.ts:/app/builder.config.ts:ro \
  afilmory
```

## Storage Providers

Configure any supported storage provider in `builder.config.ts`:

- **S3**: AWS S3, MinIO, Cloudflare R2, etc.
- **B2**: Backblaze B2
- **GitHub**: GitHub repository
- **Local**: Local file system (mount volumes)
- **Eagle**: Eagle 4 library

See [Storage Providers](/storage/providers) for configuration details.

## Performance Tuning

For large photo collections, optimize Docker resources:

```yaml
services:
  afilmory:
    deploy:
      resources:
        limits:
          cpus: '4'
          memory: 8G
        reservations:
          cpus: '2'
          memory: 4G
```

Adjust these limits based on your collection size and system resources.
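Compose can also surface a failing container early via a health check. A minimal sketch, assuming the SSR app answers HTTP requests on `/` at port 3000 inside the container and that `curl` is available in the image (the Dockerfile above installs it); adjust the path if your deployment serves a different route:

```yaml
services:
  afilmory:
    healthcheck:
      # Assumed endpoint: any route that returns HTTP 200 when the app is up
      test: ['CMD', 'curl', '-f', 'http://localhost:3000/']
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 20s
```

With this in place, `docker-compose ps` reports the service as `healthy` or `unhealthy`, and orchestrators can use that status to gate restarts and rollouts.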
## Troubleshooting

**Build failures:**

- Check Docker logs: `docker-compose logs afilmory`
- Verify all environment variables are set
- Ensure storage credentials are correct
- Check disk space for the build process

**Memory issues:**

- Increase Docker memory limits
- Reduce builder concurrency in the config
- Process photos in smaller batches

**Database connection:**

- Verify PostgreSQL is running: `docker-compose ps`
- Check the connection string format
- Ensure the database is reachable from the container

**Storage access:**

- Verify storage credentials
- Check network connectivity from the container
- For local storage, ensure volumes are mounted correctly

## Production Considerations

**Security:**

- Use secrets management (Docker secrets, environment files)
- Run as a non-root user (already configured in the Dockerfile)
- Keep images updated
- Use read-only volumes where possible

**Monitoring:**

- Set up health checks
- Monitor container resources
- Aggregate logs centrally
- Establish backup strategies

**Scaling:**

- Use orchestration (Kubernetes, Docker Swarm)
- Load balance across multiple instances
- Share storage for thumbnails and the manifest
- Pool database connections

## Next Steps

- Configure a reverse proxy (nginx, Traefik)
- Set up SSL certificates
- Configure monitoring and logging
- Implement backup strategies
- Review the [Docker documentation](https://docs.docker.com) for advanced features
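For the reverse-proxy step, one option is to run Traefik alongside the app in the same Compose project. This is a hedged sketch, not part of Afilmory itself: the domain `your-domain.com` is a placeholder (use the URL from your `config.json`), and TLS/ACME certificate configuration is intentionally omitted:

```yaml
services:
  traefik:
    image: traefik:v3.0
    command:
      - '--providers.docker=true'
      - '--providers.docker.exposedbydefault=false'
      - '--entrypoints.websecure.address=:443'
    ports:
      - '443:443'
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    restart: unless-stopped

  afilmory:
    labels:
      - 'traefik.enable=true'
      # Placeholder host rule: replace with your real domain
      - 'traefik.http.routers.afilmory.rule=Host(`your-domain.com`)'
      - 'traefik.http.routers.afilmory.entrypoints=websecure'
      - 'traefik.http.services.afilmory.loadbalancer.server.port=3000'
```

With a label-based setup like this, the app container no longer needs to publish port 3000 on the host; Traefik discovers it over the Docker network and routes by hostname.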