Docker

Deploy BroxiAI integrations using Docker containers for consistent and scalable applications

Learn how to containerize your BroxiAI applications with Docker for consistent deployment across development, staging, and production environments.

Overview

Docker provides:

  • Consistent environments across all stages

  • Easy deployment and scaling

  • Isolated application dependencies

  • Simplified CI/CD pipelines

  • Portable containers across platforms

Basic Docker Setup

Dockerfile for BroxiAI Applications

Python Application Dockerfile

# Use official Python runtime as base image
FROM python:3.11-slim

# Set working directory
WORKDIR /app

# Set environment variables
ENV PYTHONDONTWRITEBYTECODE=1 \
    PYTHONUNBUFFERED=1 \
    DEBIAN_FRONTEND=noninteractive

# Install system dependencies
RUN apt-get update && apt-get install -y \
    gcc \
    g++ \
    curl \
    && rm -rf /var/lib/apt/lists/*

# Copy requirements first for better caching
COPY requirements.txt .

# Install Python dependencies
RUN pip install --no-cache-dir --upgrade pip && \
    pip install --no-cache-dir -r requirements.txt

# Copy application code
COPY . .

# Create non-root user for security
RUN useradd --create-home --shell /bin/bash broxi && \
    chown -R broxi:broxi /app
USER broxi

# Expose port
EXPOSE 8000

# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
    CMD curl -f http://localhost:8000/health || exit 1

# Run application
CMD ["python", "main.py"]
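
With the Dockerfile in place, build and run the image locally. The image and container names below (broxi-app) are illustrative:

```shell
# Build the image from the directory containing the Dockerfile
docker build -t broxi-app:latest .

# Run it, publishing port 8000 and passing the API token from the host environment
docker run -d \
    --name broxi-app \
    -p 8000:8000 \
    -e BROXI_API_TOKEN="$BROXI_API_TOKEN" \
    broxi-app:latest

# Follow logs and verify the health endpoint the HEALTHCHECK probes
docker logs -f broxi-app
curl http://localhost:8000/health
```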

Node.js Application Dockerfile

# Use official Node.js LTS image
FROM node:18-alpine

# Set working directory
WORKDIR /app

# Set environment variables
ENV NODE_ENV=production

# Install system dependencies
RUN apk add --no-cache \
    curl \
    dumb-init

# Copy package files first for better caching
COPY package*.json ./

# Install dependencies
RUN npm ci --omit=dev && \
    npm cache clean --force

# Copy application code
COPY . .

# Create non-root user
RUN addgroup -g 1001 -S nodejs && \
    adduser -S broxi -u 1001 -G nodejs && \
    chown -R broxi:nodejs /app
USER broxi

# Expose port
EXPOSE 3000

# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
    CMD curl -f http://localhost:3000/health || exit 1

# Run application with dumb-init
ENTRYPOINT ["dumb-init", "--"]
CMD ["node", "server.js"]

Requirements Files

requirements.txt (Python)

# Core dependencies
fastapi==0.104.1
uvicorn[standard]==0.24.0
pydantic==2.5.0
httpx==0.25.2
python-multipart==0.0.6

# Database
sqlalchemy==2.0.23
alembic==1.12.1
psycopg2-binary==2.9.9
redis==5.0.1

# Background tasks
celery==5.3.4
flower==2.0.1

# Monitoring
prometheus-client==0.19.0
structlog==23.2.0

# Security
cryptography==41.0.8
python-jose[cryptography]==3.3.0
passlib[bcrypt]==1.7.4

# Cloud integrations
boto3==1.35.0
google-cloud-storage==2.10.0
azure-storage-blob==12.19.0

# Development
pytest==7.4.3
pytest-asyncio==0.21.1
black==23.11.0
isort==5.12.0
flake8==6.1.0

package.json (Node.js)

{
  "name": "broxi-integration",
  "version": "1.0.0",
  "description": "BroxiAI integration service",
  "main": "server.js",
  "scripts": {
    "start": "node server.js",
    "dev": "nodemon server.js",
    "test": "jest",
    "test:watch": "jest --watch",
    "lint": "eslint .",
    "format": "prettier --write .",
    "build": "webpack --mode production"
  },
  "dependencies": {
    "express": "^4.18.2",
    "axios": "^1.6.0",
    "dotenv": "^16.3.1",
    "helmet": "^7.1.0",
    "cors": "^2.8.5",
    "compression": "^1.7.4",
    "morgan": "^1.10.0",
    "winston": "^3.11.0",
    "redis": "^4.6.10",
    "pg": "^8.11.3",
    "jsonwebtoken": "^9.0.2",
    "bcryptjs": "^2.4.3",
    "joi": "^17.11.0",
    "socket.io": "^4.7.4",
    "bull": "^4.12.0"
  },
  "devDependencies": {
    "nodemon": "^3.0.1",
    "jest": "^29.7.0",
    "supertest": "^6.3.3",
    "eslint": "^8.54.0",
    "prettier": "^3.1.0"
  },
  "engines": {
    "node": ">=18.0.0",
    "npm": ">=9.0.0"
  }
}

Multi-Stage Builds

Optimized Production Dockerfile

Multi-Stage Python Build

# Build stage
FROM python:3.11-slim as builder

WORKDIR /app

# Install build dependencies
RUN apt-get update && apt-get install -y \
    gcc \
    g++ \
    python3-dev \
    && rm -rf /var/lib/apt/lists/*

# Copy requirements and install dependencies
COPY requirements.txt .
RUN pip install --user --no-cache-dir -r requirements.txt

# Production stage
FROM python:3.11-slim as production

WORKDIR /app

# Install runtime dependencies only
RUN apt-get update && apt-get install -y \
    curl \
    && rm -rf /var/lib/apt/lists/*

# Copy Python packages from builder stage into the app user's home
COPY --from=builder /root/.local /home/broxi/.local

# Copy application code
COPY . .

# Create non-root user and give it ownership of the app and its packages
RUN useradd --create-home --shell /bin/bash broxi && \
    chown -R broxi:broxi /app /home/broxi/.local
USER broxi

# Set PATH to include user-installed packages
ENV PATH=/home/broxi/.local/bin:$PATH

EXPOSE 8000

HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
    CMD curl -f http://localhost:8000/health || exit 1

CMD ["python", "main.py"]

Multi-Stage Node.js Build

# Build stage
FROM node:18-alpine as builder

WORKDIR /app

# Copy package files
COPY package*.json ./

# Install all dependencies (including dev)
RUN npm ci

# Copy source code
COPY . .

# Build application
RUN npm run build

# Production stage
FROM node:18-alpine as production

WORKDIR /app

# Install dumb-init
RUN apk add --no-cache dumb-init

# Copy package files
COPY package*.json ./

# Install only production dependencies
RUN npm ci --omit=dev && \
    npm cache clean --force

# Copy built application from builder stage
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/public ./public

# Create non-root user
RUN addgroup -g 1001 -S nodejs && \
    adduser -S broxi -u 1001 -G nodejs && \
    chown -R broxi:nodejs /app
USER broxi

EXPOSE 3000

HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
    CMD curl -f http://localhost:3000/health || exit 1

ENTRYPOINT ["dumb-init", "--"]
CMD ["node", "dist/server.js"]

Docker Compose

Development Environment

docker-compose.dev.yml

version: '3.8'

services:
  # Main application
  broxi-app:
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - "8000:8000"
    volumes:
      - .:/app
    environment:
      - BROXI_API_TOKEN=${BROXI_API_TOKEN}
      - DATABASE_URL=postgresql://broxi:password@postgres:5432/broxi_dev
      - REDIS_URL=redis://redis:6379/0
    depends_on:
      - postgres
      - redis
    networks:
      - broxi-network
    restart: unless-stopped

  # PostgreSQL database
  postgres:
    image: postgres:15-alpine
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_DB=broxi_dev
      - POSTGRES_USER=broxi
      - POSTGRES_PASSWORD=password
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./scripts/init-db.sql:/docker-entrypoint-initdb.d/init-db.sql
    networks:
      - broxi-network
    restart: unless-stopped

  # Redis cache
  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    volumes:
      - redis_data:/data
    command: redis-server --appendonly yes
    networks:
      - broxi-network
    restart: unless-stopped

  # Background task worker
  celery-worker:
    build:
      context: .
      dockerfile: Dockerfile.dev
    command: celery -A app.celery worker --loglevel=info
    volumes:
      - .:/app
    environment:
      - BROXI_API_TOKEN=${BROXI_API_TOKEN}
      - DATABASE_URL=postgresql://broxi:password@postgres:5432/broxi_dev
      - REDIS_URL=redis://redis:6379/0
    depends_on:
      - postgres
      - redis
    networks:
      - broxi-network
    restart: unless-stopped

  # Celery monitoring
  flower:
    build:
      context: .
      dockerfile: Dockerfile.dev
    command: celery -A app.celery flower --port=5555
    ports:
      - "5555:5555"
    environment:
      - REDIS_URL=redis://redis:6379/0
    depends_on:
      - redis
    networks:
      - broxi-network
    restart: unless-stopped

  # Monitoring
  prometheus:
    image: prom/prometheus:latest
    ports:
      - "9090:9090"
    volumes:
      - ./monitoring/prometheus.yml:/etc/prometheus/prometheus.yml
      - prometheus_data:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/etc/prometheus/console_libraries'
      - '--web.console.templates=/etc/prometheus/consoles'
    networks:
      - broxi-network
    restart: unless-stopped

  grafana:
    image: grafana/grafana:latest
    ports:
      - "3001:3000"
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=admin
    volumes:
      - grafana_data:/var/lib/grafana
      - ./monitoring/grafana/dashboards:/etc/grafana/provisioning/dashboards
      - ./monitoring/grafana/datasources:/etc/grafana/provisioning/datasources
    networks:
      - broxi-network
    restart: unless-stopped

volumes:
  postgres_data:
  redis_data:
  prometheus_data:
  grafana_data:

networks:
  broxi-network:
    driver: bridge
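
Start, inspect, and stop the development stack with Compose:

```shell
# Start the full development stack in the background, rebuilding images as needed
docker compose -f docker-compose.dev.yml up -d --build

# Tail logs for the application service
docker compose -f docker-compose.dev.yml logs -f broxi-app

# Tear everything down (add -v to also remove the data volumes)
docker compose -f docker-compose.dev.yml down
```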

Production Environment

docker-compose.prod.yml

version: '3.8'

services:
  # Reverse proxy
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./nginx/ssl:/etc/nginx/ssl
      - static_files:/usr/share/nginx/html/static
    depends_on:
      - broxi-app
    networks:
      - broxi-network
    restart: unless-stopped

  # Main application
  broxi-app:
    build:
      context: .
      dockerfile: Dockerfile.prod
    expose:
      - "8000"
    environment:
      - BROXI_API_TOKEN=${BROXI_API_TOKEN}
      - DATABASE_URL=${DATABASE_URL}
      - REDIS_URL=${REDIS_URL}
      - SECRET_KEY=${SECRET_KEY}
    volumes:
      - static_files:/app/static
      - media_files:/app/media
    depends_on:
      - postgres
      - redis
    networks:
      - broxi-network
    restart: unless-stopped
    deploy:
      replicas: 3
      resources:
        limits:
          cpus: '0.5'
          memory: 512M
        reservations:
          cpus: '0.25'
          memory: 256M

  # Database
  postgres:
    image: postgres:15-alpine
    environment:
      - POSTGRES_DB=${POSTGRES_DB}
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    networks:
      - broxi-network
    restart: unless-stopped
    deploy:
      resources:
        limits:
          cpus: '1.0'
          memory: 1G

  # Cache
  redis:
    image: redis:7-alpine
    command: redis-server --requirepass ${REDIS_PASSWORD}
    volumes:
      - redis_data:/data
    networks:
      - broxi-network
    restart: unless-stopped
    deploy:
      resources:
        limits:
          cpus: '0.25'
          memory: 256M

  # Background workers
  celery-worker:
    build:
      context: .
      dockerfile: Dockerfile.prod
    command: celery -A app.celery worker --loglevel=info --concurrency=4
    environment:
      - BROXI_API_TOKEN=${BROXI_API_TOKEN}
      - DATABASE_URL=${DATABASE_URL}
      - REDIS_URL=${REDIS_URL}
    depends_on:
      - postgres
      - redis
    networks:
      - broxi-network
    restart: unless-stopped
    deploy:
      replicas: 2
      resources:
        limits:
          cpus: '0.5'
          memory: 512M

  # Task scheduler
  celery-beat:
    build:
      context: .
      dockerfile: Dockerfile.prod
    command: celery -A app.celery beat --loglevel=info
    environment:
      - DATABASE_URL=${DATABASE_URL}
      - REDIS_URL=${REDIS_URL}
    depends_on:
      - postgres
      - redis
    networks:
      - broxi-network
    restart: unless-stopped
    deploy:
      resources:
        limits:
          cpus: '0.25'
          memory: 256M

volumes:
  postgres_data:
    driver: local
  redis_data:
    driver: local
  static_files:
    driver: local
  media_files:
    driver: local

networks:
  broxi-network:
    driver: overlay
    attachable: true


Container Orchestration

Docker Swarm

Swarm Initialization

# Initialize swarm
docker swarm init --advertise-addr $(hostname -i)

# Create secrets
echo "your_postgres_password" | docker secret create postgres_password -
echo "your_redis_password" | docker secret create redis_password -
echo "your_secret_key" | docker secret create secret_key -

# Deploy stack
docker stack deploy -c docker-compose.prod.yml broxi-stack

# Scale services
docker service scale broxi-stack_broxi-app=5
docker service scale broxi-stack_celery-worker=3

# Monitor services
docker service ls
docker service logs broxi-stack_broxi-app

Service Configuration

# docker-compose.swarm.yml
version: '3.8'

services:
  broxi-app:
    image: broxi/app:latest
    deploy:
      replicas: 3
      update_config:
        parallelism: 1
        delay: 10s
        failure_action: rollback
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
      placement:
        constraints:
          - node.role == worker
      resources:
        limits:
          cpus: '0.5'
          memory: 512M
        reservations:
          cpus: '0.25'
          memory: 256M
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 60s
    networks:
      - broxi-overlay
    secrets:
      - postgres_password
      - redis_password
      - secret_key

networks:
  broxi-overlay:
    driver: overlay
    attachable: true

secrets:
  postgres_password:
    external: true
  redis_password:
    external: true
  secret_key:
    external: true

Advanced Docker Patterns

Multi-Architecture Builds

Buildx Configuration

# Create builder instance
docker buildx create --name multiarch --driver docker-container --use
docker buildx inspect --bootstrap

# Build multi-architecture images
docker buildx build \
    --platform linux/amd64,linux/arm64 \
    --tag broxi/app:latest \
    --tag broxi/app:v1.0.0 \
    --push \
    .

Multi-Architecture Dockerfile

FROM --platform=$BUILDPLATFORM python:3.11-slim as builder

ARG TARGETPLATFORM
ARG BUILDPLATFORM
ARG TARGETOS
ARG TARGETARCH

RUN echo "Building for $TARGETPLATFORM on $BUILDPLATFORM"

WORKDIR /app

# Install dependencies based on architecture
RUN if [ "$TARGETARCH" = "arm64" ] ; then \
        apt-get update && apt-get install -y gcc-aarch64-linux-gnu ; \
    else \
        apt-get update && apt-get install -y gcc ; \
    fi

COPY requirements.txt .
RUN pip install --user -r requirements.txt

FROM python:3.11-slim

WORKDIR /app

COPY --from=builder /root/.local /root/.local
COPY . .

ENV PATH=/root/.local/bin:$PATH

EXPOSE 8000
CMD ["python", "main.py"]

Health Checks and Monitoring

Advanced Health Check

# health.py
import os
from datetime import datetime

import httpx
import psycopg2
import redis

class HealthChecker:
    def __init__(self):
        self.checks = {
            'database': self.check_database,
            'redis': self.check_redis,
            'broxi_api': self.check_broxi_api
        }
    
    async def check_health(self):
        """Comprehensive health check"""
        results = {}
        overall_status = "healthy"
        
        for check_name, check_func in self.checks.items():
            try:
                result = await check_func()
                results[check_name] = result
                
                if not result.get('healthy', False):
                    overall_status = "unhealthy"
                    
            except Exception as e:
                results[check_name] = {
                    'healthy': False,
                    'error': str(e),
                    'timestamp': datetime.utcnow().isoformat()
                }
                overall_status = "unhealthy"
        
        return {
            'status': overall_status,
            'checks': results,
            'timestamp': datetime.utcnow().isoformat()
        }
    
    async def check_database(self):
        """Check database connectivity and measure query latency"""
        try:
            start = datetime.utcnow()
            conn = psycopg2.connect(os.environ['DATABASE_URL'])
            cursor = conn.cursor()
            cursor.execute('SELECT 1')
            cursor.close()
            conn.close()
            
            return {
                'healthy': True,
                'latency_ms': (datetime.utcnow() - start).total_seconds() * 1000,
                'timestamp': datetime.utcnow().isoformat()
            }
        except Exception as e:
            return {
                'healthy': False,
                'error': str(e),
                'timestamp': datetime.utcnow().isoformat()
            }
    
    async def check_redis(self):
        """Check Redis connectivity"""
        try:
            r = redis.from_url(os.environ['REDIS_URL'])
            r.ping()
            
            return {
                'healthy': True,
                'timestamp': datetime.utcnow().isoformat()
            }
        except Exception as e:
            return {
                'healthy': False,
                'error': str(e),
                'timestamp': datetime.utcnow().isoformat()
            }
    
    async def check_broxi_api(self):
        """Check BroxiAI API connectivity"""
        try:
            async with httpx.AsyncClient() as client:
                response = await client.get(
                    "https://api.broxi.ai/v1/health",
                    headers={"Authorization": f"Bearer {os.environ['BROXI_API_TOKEN']}"},
                    timeout=10.0
                )
                
                return {
                    'healthy': response.status_code == 200,
                    'status_code': response.status_code,
                    'timestamp': datetime.utcnow().isoformat()
                }
        except Exception as e:
            return {
                'healthy': False,
                'error': str(e),
                'timestamp': datetime.utcnow().isoformat()
            }
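
The Dockerfile HEALTHCHECK instructions above probe a /health endpoint with curl, so the application must expose one. A minimal stdlib-only sketch that serves a health report as JSON (names are illustrative; in the FastAPI application from requirements.txt you would mount the same logic as a route and call HealthChecker there):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    """Serve a JSON health report at /health, as the HEALTHCHECK expects."""

    def do_GET(self):
        if self.path != "/health":
            self.send_response(404)
            self.end_headers()
            return
        # In the real app, build this from HealthChecker().check_health()
        report = {"status": "healthy", "checks": {}}
        body = json.dumps(report).encode()
        # Return 200 only when healthy, so `curl -f` fails on an unhealthy container
        self.send_response(200 if report["status"] == "healthy" else 503)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep container logs free of per-probe noise

def serve(port=8000):
    """Blocking entry point used as the container's main process."""
    HTTPServer(("0.0.0.0", port), HealthHandler).serve_forever()
```

Returning a non-200 status when any check fails is what turns the HEALTHCHECK into a real liveness signal instead of a TCP reachability test.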

Container Security

Security Hardened Dockerfile

FROM python:3.11-slim

# Update and install security updates
RUN apt-get update && \
    apt-get upgrade -y && \
    apt-get install -y --no-install-recommends \
        curl \
        ca-certificates && \
    rm -rf /var/lib/apt/lists/*

# Create non-root user with specific UID/GID
RUN groupadd -r broxi --gid=1001 && \
    useradd -r -g broxi --uid=1001 --home-dir=/app --shell=/bin/bash broxi

WORKDIR /app

# Copy requirements and install dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy application code
COPY --chown=broxi:broxi . .

# Remove build caches and temporary files
RUN rm -rf /tmp/* \
           /var/tmp/* \
           /root/.cache

# Set file permissions
RUN chmod -R 755 /app && \
    chmod 644 /app/requirements.txt

# Switch to non-root user
USER broxi

# Set security-focused environment variables
ENV PYTHONDONTWRITEBYTECODE=1 \
    PYTHONUNBUFFERED=1 \
    PYTHONPATH=/app \
    PATH=/home/broxi/.local/bin:$PATH

# Security labels
LABEL maintainer="security@yourcompany.com" \
      security.scan="enabled" \
      security.policy="strict"

EXPOSE 8000

# Use exec form for proper signal handling
CMD ["python", "-m", "app.main"]

Security Scanning

# Scan for vulnerabilities
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
    -v $HOME/.cache/trivy:/root/.cache/ \
    aquasec/trivy:latest image broxi/app:latest

# Run security benchmarks
docker run --rm --net host --pid host --userns host --cap-add audit_control \
    -e DOCKER_CONTENT_TRUST=$DOCKER_CONTENT_TRUST \
    -v /etc:/etc:ro \
    -v /usr/bin/containerd:/usr/bin/containerd:ro \
    -v /usr/bin/runc:/usr/bin/runc:ro \
    -v /usr/lib/systemd:/usr/lib/systemd:ro \
    -v /var/lib:/var/lib:ro \
    -v /var/run/docker.sock:/var/run/docker.sock:ro \
    --label docker_bench_security \
    docker/docker-bench-security

CI/CD with Docker

GitHub Actions

.github/workflows/docker.yml

name: Docker Build and Deploy

on:
  push:
    branches: [ main, develop ]
    tags: [ 'v*' ]
  pull_request:
    branches: [ main ]

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}

jobs:
  test:
    runs-on: ubuntu-latest
    
    services:
      postgres:
        image: postgres:15
        env:
          POSTGRES_PASSWORD: postgres
          POSTGRES_DB: test_db
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
        ports:
          - 5432:5432
      
      redis:
        image: redis:7
        options: >-
          --health-cmd "redis-cli ping"
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
        ports:
          - 6379:6379
    
    steps:
    - uses: actions/checkout@v4
    
    - name: Set up Python
      uses: actions/setup-python@v4
      with:
        python-version: '3.11'
        cache: 'pip'
    
    - name: Install dependencies
      run: |
        python -m pip install --upgrade pip
        pip install -r requirements.txt
        pip install -r requirements-dev.txt
    
    - name: Run tests
      env:
        DATABASE_URL: postgresql://postgres:postgres@localhost:5432/test_db
        REDIS_URL: redis://localhost:6379/0
      run: |
        pytest --cov=app tests/
        coverage xml
    
    - name: Upload coverage
      uses: codecov/codecov-action@v3
      with:
        file: ./coverage.xml

  security:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v4
    
    - name: Run Trivy vulnerability scan
      uses: aquasecurity/trivy-action@master
      with:
        scan-type: 'fs'
        scan-ref: '.'
        severity: 'CRITICAL,HIGH'

  build:
    needs: [test, security]
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    
    steps:
    - name: Checkout
      uses: actions/checkout@v4
    
    - name: Set up Docker Buildx
      uses: docker/setup-buildx-action@v3
    
    - name: Log in to Container Registry
      uses: docker/login-action@v3
      with:
        registry: ${{ env.REGISTRY }}
        username: ${{ github.actor }}
        password: ${{ secrets.GITHUB_TOKEN }}
    
    - name: Extract metadata
      id: meta
      uses: docker/metadata-action@v5
      with:
        images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
        tags: |
          type=ref,event=branch
          type=ref,event=pr
          type=semver,pattern={{version}}
          type=semver,pattern={{major}}.{{minor}}
          type=sha,prefix={{branch}}-
    
    - name: Build and push Docker image
      uses: docker/build-push-action@v5
      with:
        context: .
        platforms: linux/amd64,linux/arm64
        push: true
        tags: ${{ steps.meta.outputs.tags }}
        labels: ${{ steps.meta.outputs.labels }}
        cache-from: type=gha
        cache-to: type=gha,mode=max

  deploy:
    needs: build
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main'
    
    steps:
    - name: Deploy to production
      uses: appleboy/ssh-action@v1.0.0
      with:
        host: ${{ secrets.DEPLOY_HOST }}
        username: ${{ secrets.DEPLOY_USER }}
        key: ${{ secrets.DEPLOY_KEY }}
        script: |
          cd /opt/broxi-app
          docker-compose pull
          docker-compose up -d
          docker system prune -f

GitLab CI/CD

.gitlab-ci.yml

stages:
  - test
  - security
  - build
  - deploy

variables:
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: "/certs"

services:
  - docker:dind

test:
  stage: test
  image: python:3.11
  services:
    - postgres:15
    - redis:7
  variables:
    POSTGRES_DB: test_db
    POSTGRES_USER: postgres
    POSTGRES_PASSWORD: postgres
    DATABASE_URL: postgresql://postgres:postgres@postgres:5432/test_db
    REDIS_URL: redis://redis:6379/0
  script:
    - pip install -r requirements.txt
    - pip install -r requirements-dev.txt
    - pytest --cov=app tests/
    - coverage xml
  artifacts:
    reports:
      coverage_report:
        coverage_format: cobertura
        path: coverage.xml
  coverage: '/TOTAL.+ ([0-9]{1,3}%)/'

security_scan:
  stage: security
  image: docker:latest
  script:
    - docker build -t $CI_PROJECT_NAME:$CI_COMMIT_SHA .
    - docker run --rm -v /var/run/docker.sock:/var/run/docker.sock 
        aquasec/trivy:latest image $CI_PROJECT_NAME:$CI_COMMIT_SHA
  allow_failure: true

build:
  stage: build
  image: docker:latest
  before_script:
    - echo $CI_REGISTRY_PASSWORD | docker login -u $CI_REGISTRY_USER --password-stdin $CI_REGISTRY
  script:
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA .
    - docker build -t $CI_REGISTRY_IMAGE:latest .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
    - docker push $CI_REGISTRY_IMAGE:latest
  only:
    - main

deploy_staging:
  stage: deploy
  image: alpine:latest
  before_script:
    - apk add --no-cache openssh-client
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
  script:
    - ssh -o StrictHostKeyChecking=no $DEPLOY_USER@$STAGING_HOST "
        cd /opt/broxi-app &&
        docker-compose pull &&
        docker-compose up -d"
  environment:
    name: staging
    url: https://staging.broxi-app.com
  only:
    - develop

deploy_production:
  stage: deploy
  image: alpine:latest
  before_script:
    - apk add --no-cache openssh-client
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
  script:
    - ssh -o StrictHostKeyChecking=no $DEPLOY_USER@$PRODUCTION_HOST "
        cd /opt/broxi-app &&
        docker-compose pull &&
        docker-compose up -d"
  environment:
    name: production
    url: https://broxi-app.com
  when: manual
  only:
    - main

Best Practices

Performance Optimization

Image Optimization

  • Use multi-stage builds to reduce image size

  • Leverage Docker layer caching

  • Use .dockerignore to exclude unnecessary files

  • Choose appropriate base images (alpine vs slim)

  • Minimize the number of layers
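
A .dockerignore keeps the build context small, speeds up layer caching, and keeps secrets and development files out of the image. A typical starting point (adjust to your project layout):

```
# .dockerignore
.git
.gitignore
__pycache__/
*.pyc
node_modules/
.env
*.md
tests/
.dockerignore
docker-compose*.yml
```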

Runtime Optimization

  • Set appropriate resource limits

  • Use health checks for reliability

  • Implement graceful shutdown handling

  • Configure proper logging

  • Monitor container metrics

Security Best Practices

Container Security

  • Run containers as non-root users

  • Use official base images

  • Regularly update base images

  • Scan images for vulnerabilities

  • Use minimal base images

  • Implement proper secret management

Network Security

  • Use custom networks instead of default bridge

  • Implement proper firewall rules

  • Use TLS/SSL for all communications

  • Limit exposed ports

  • Regular security audits
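
User-defined networks give containers DNS-based discovery by name and isolate them from the default bridge, so only deliberately published ports are reachable from the host:

```shell
# Create a user-defined bridge network
docker network create --driver bridge broxi-network

# Attach containers to it; redis stays internal, only the app publishes a port
docker run -d --name redis --network broxi-network redis:7-alpine
docker run -d --name app --network broxi-network -p 8000:8000 broxi-app:latest
```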

Monitoring and Logging

Container Monitoring

  • Implement comprehensive health checks

  • Monitor resource usage

  • Set up alerting for failures

  • Use centralized logging

  • Track application metrics

Troubleshooting

Common Issues

Container Startup Issues

# Check container logs
docker logs container_name

# Debug container
docker exec -it container_name /bin/sh

# Check resource usage
docker stats

# Inspect container configuration
docker inspect container_name

Network Issues

# List networks
docker network ls

# Inspect network
docker network inspect network_name

# Test connectivity
docker exec container_name ping other_container

Performance Issues

# Monitor real-time stats
docker stats --no-stream

# Check disk usage
docker system df

# Clean up resources
docker system prune -a

Next Steps

After Docker setup:

  1. Container Orchestration: Move to Kubernetes for advanced orchestration

  2. Monitoring Enhancement: Implement comprehensive observability

  3. Security Hardening: Regular security assessments and updates

  4. Performance Tuning: Optimize for your specific workload

  5. Disaster Recovery: Implement backup and recovery procedures

