Architecture

The TTS Docker deployment consists of four main services that work together:

API Server

The API Server is the main entry point for all client requests.

Purpose

  • Routes incoming API requests to Lightning TTS workers
  • Manages WebSocket connections for streaming
  • Handles request queuing and load balancing
  • Provides unified API interface

Container Details

Image:       quay.io/smallestinc/self-hosted-api-server:latest
Port:        7100 - Main API endpoint
Resources:
  • CPU: 0.5-2 cores
  • Memory: 512 MB - 2 GB
  • No GPU required

Key Endpoints

Endpoint           Method      Purpose
/health            GET         Health check
/v1/speak          POST        Synchronous text-to-speech
/v1/speak/stream   WebSocket   Streaming text-to-speech
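
The endpoints above can be exercised with curl once the stack is up. The JSON field names ("text", "voice_id") and the Authorization header are illustrative assumptions - check your deployment's API reference for the exact request schema:

```shell
# Health check (assumes /health needs no authentication)
curl -s http://localhost:7100/health

# Synchronous synthesis; saves the returned audio to a file
curl -s -X POST http://localhost:7100/v1/speak \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"text": "Hello from self-hosted TTS", "voice_id": "default"}' \
  --output hello.wav
```

The streaming endpoint at /v1/speak/stream requires a WebSocket client rather than curl.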

Environment Variables

LICENSE_KEY: Your license key
LIGHTNING_TTS_BASE_URL: Internal URL to Lightning TTS
API_BASE_URL: Internal URL to License Proxy
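
Putting these variables together, a Compose service entry for the API Server might look like the following sketch. The service hostnames `lightning-tts` and `license-proxy`, and the `API_BASE_URL` target, are assumptions inferred from the internal URLs and ports documented on this page:

```yaml
api-server:
  image: quay.io/smallestinc/self-hosted-api-server:latest
  ports:
    - "7100:7100"
  environment:
    LICENSE_KEY: ${LICENSE_KEY}
    LIGHTNING_TTS_BASE_URL: http://lightning-tts:8876
    API_BASE_URL: http://license-proxy:3369
  depends_on:
    - lightning-tts
    - license-proxy
```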

Logs

Key log messages:
✓ Connected to Lightning TTS at http://lightning-tts:8876
✓ License validation successful
✓ API server listening on port 7100

Dependencies

  • Requires Lightning TTS to be running
  • Requires License Proxy for validation
  • Optionally uses Redis for request coordination

Lightning TTS

The core text-to-speech engine powered by GPU acceleration.

Purpose

  • Converts text to high-quality speech audio
  • Processes both batch and streaming requests
  • Manages GPU resources and model inference
  • Handles voice synthesis and audio generation

Container Details

Image:       quay.io/smallestinc/lightning-tts:latest
Port:        8876 - TTS service endpoint
Resources:
  • CPU: 4-8 cores
  • Memory: 12-16 GB
  • GPU: 1x NVIDIA GPU (16+ GB VRAM)

GPU Requirements

Lightning TTS requires an NVIDIA GPU with CUDA support:
GPU Model   VRAM       Performance
A100        40-80 GB   Excellent
A10         24 GB      Excellent
L4          24 GB      Very Good
T4          16 GB      Good

Environment Variables

LICENSE_KEY: Your license key
REDIS_URL: Redis connection string
PORT: Service port (default 8876)
GPU_DEVICE_ID: GPU to use (for multi-GPU)
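
A corresponding Compose sketch for Lightning TTS, including the GPU reservation (the `REDIS_URL` value and service names are assumptions; the device-reservation block follows the standard Compose specification syntax):

```yaml
lightning-tts:
  image: quay.io/smallestinc/lightning-tts:latest
  ports:
    - "8876:8876"
  environment:
    LICENSE_KEY: ${LICENSE_KEY}
    REDIS_URL: redis://redis:6379
    PORT: "8876"
    GPU_DEVICE_ID: "0"
  deploy:
    resources:
      reservations:
        devices:
          - driver: nvidia
            count: 1
            capabilities: [gpu]
```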

Model Loading

On first startup, Lightning TTS:
  1. Loads TTS models from container (embedded)
  2. Validates model integrity
  3. Loads model into GPU memory
  4. Performs warmup inference
Models are embedded in the container - no separate download needed.

Logs

Key log messages:
✓ GPU detected: NVIDIA A10 (24GB)
✓ Model loaded successfully
✓ Warmup completed in 3.2s
✓ Server ready on port 8876

Performance

Typical performance metrics:
Metric             Value
Real-time Factor   0.1-0.3x
Cold Start         30-60 seconds
Warm Inference     100-300 ms latency
Throughput         50+ audio hours per hour (A10)
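
The real-time factor (RTF) is synthesis wall-clock time divided by audio duration, so the table's figures can be related to each other with simple arithmetic. A sketch using the mid-range RTF of 0.2 (function names are illustrative, not part of the product):

```python
import math

def synthesis_time(audio_seconds: float, rtf: float) -> float:
    """Wall-clock seconds needed to synthesize audio_seconds of speech."""
    return audio_seconds * rtf

def streams_for_throughput(target_hours_per_hour: float, rtf: float) -> int:
    """Concurrent streams needed: each stream yields 1/rtf audio-hours per hour."""
    return math.ceil(target_hours_per_hour * rtf)

# At an RTF of 0.2, one minute of audio takes 12 seconds to synthesize,
# and 50 audio-hours per hour implies about 10 concurrent streams.
print(synthesis_time(60, 0.2))          # 12.0
print(streams_for_throughput(50, 0.2))  # 10
```

This shows why the 50+ hours/hour throughput figure depends on batching or concurrent requests rather than a single stream.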

Dependencies

  • Requires License Proxy for validation
  • Requires Redis for request coordination
  • Requires NVIDIA GPU

License Proxy

Validates license keys and reports usage to Smallest servers.

Purpose

  • Validates license keys on startup
  • Reports usage metadata to Smallest
  • Provides grace period for offline operation
  • Acts as licensing gateway for all services

Container Details

Image:       quay.io/smallestinc/license-proxy:latest
Port:        3369 - License validation endpoint (internal)
Resources:
  • CPU: 0.25-1 core
  • Memory: 256-512 MB
  • No GPU required

Environment Variables

LICENSE_KEY: Your license key

Network Requirements

License Proxy requires outbound HTTPS access to:
  • console-api.smallest.ai on port 443
Ensure your firewall allows these connections.
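
A quick way to confirm the firewall path is a manual TLS connection from the host (or from inside the license-proxy container). A completed TLS handshake proves reachability; an HTTP error status from the server still counts as a pass:

```shell
# -v prints the TLS handshake; -o /dev/null discards the body
curl -sv --connect-timeout 5 https://console-api.smallest.ai -o /dev/null
```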

Validation Process

  1. On startup, validates license key with Smallest servers
  2. Receives license terms and quotas
  3. Caches validation (valid for grace period)
  4. Periodically reports usage metadata

Usage Reporting

License Proxy reports only metadata:
Data Reported    Example
Audio duration   3600 seconds
Request count    150 requests
Features used    streaming, voice selection
Response codes   200, 400, 500
No audio or transcript data is transmitted to Smallest servers.

Offline Mode

If connection to license server fails:
  • Uses cached validation (24-hour grace period)
  • Continues serving requests
  • Logs warning messages
  • Retries connection periodically
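
The grace-period fallback can be sketched as a small cache check. This is a simplified model, not the proxy's actual implementation; the class and method names are hypothetical:

```python
GRACE_PERIOD_S = 24 * 3600  # 24-hour grace period

class LicenseCache:
    """Tracks the last successful license validation."""

    def __init__(self):
        self.validated_at = None

    def record_validation(self, now: float) -> None:
        self.validated_at = now

    def is_usable(self, now: float) -> bool:
        # Serve requests as long as the last successful validation
        # happened within the grace period.
        if self.validated_at is None:
            return False
        return (now - self.validated_at) < GRACE_PERIOD_S

cache = LicenseCache()
cache.record_validation(now=0)
print(cache.is_usable(now=23 * 3600))  # True: still inside the grace period
print(cache.is_usable(now=25 * 3600))  # False: grace period expired
```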

Logs

Key log messages:
✓ License validated successfully
✓ License valid until: 2024-12-31
✓ Server listening on port 3369
⚠ Connection to license server failed, using cached validation

Redis

Provides caching and state management for the system.

Purpose

  • Request queuing and coordination
  • Session state for streaming connections
  • Caching of frequent requests
  • Performance optimization

Container Details

Image:       redis:latest or redis:7-alpine
Port:        6379 - Redis protocol
Resources:
  • CPU: 0.5-1 core
  • Memory: 512 MB - 1 GB
  • No GPU required

Configuration Options

Default configuration with minimal setup:
redis:
  image: redis:latest
  ports:
    - "6379:6379"
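
Because all Redis state here is ephemeral, production deployments may want to cap memory and pick an eviction policy. A sketch - the 1 GB limit mirrors the resource guidance above, and the LRU policy is a suggestion, not a vendor requirement:

```yaml
redis:
  image: redis:7-alpine
  command: ["redis-server", "--maxmemory", "1gb", "--maxmemory-policy", "allkeys-lru"]
  ports:
    - "6379:6379"
```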

Data Stored

Redis stores:
  • Request queue state
  • WebSocket session data
  • Temporary audio chunks (streaming)
  • Worker status and health
Data in Redis is temporary and can be safely cleared. No persistent state is stored.

Health Check

Built-in health check:
healthcheck:
  test: ["CMD", "redis-cli", "ping"]
  interval: 5s
  timeout: 3s
  retries: 5

Service Dependencies

Startup order and dependencies:
  1. Redis - Starts immediately (5 seconds)
  2. License Proxy - Validates license (10-15 seconds)
  3. Lightning TTS - Loads models (30-60 seconds)
  4. API Server - Connects to services (5-10 seconds)
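
The startup order above can be enforced in Compose with dependency conditions. These are fragments to merge into each service definition (service names are assumptions; only Redis has a healthcheck defined on this page, so the others gate on startup rather than health):

```yaml
license-proxy:
  depends_on:
    redis:
      condition: service_healthy
lightning-tts:
  depends_on:
    license-proxy:
      condition: service_started
api-server:
  depends_on:
    lightning-tts:
      condition: service_started
```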

Resource Planning

Minimum Configuration

For development/testing:
Total Resources:
  CPU: 6 cores
  Memory: 16 GB
  GPU: 1x T4 (16 GB VRAM)
  Storage: 100 GB

Production Configuration

For production workloads:
Total Resources:
  CPU: 12 cores
  Memory: 32 GB
  GPU: 1x A10 (24 GB VRAM)
  Storage: 200 GB

Monitoring

Container Health

Check container status:
docker compose ps

Resource Usage

Monitor resource consumption:
docker stats

GPU Usage

Monitor GPU utilization:
watch -n 1 nvidia-smi
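
For scripting or dashboards, nvidia-smi can also emit machine-readable output and refresh on its own:

```shell
# CSV columns: GPU utilization %, memory used, memory total; -l 1 loops every second
nvidia-smi --query-gpu=utilization.gpu,memory.used,memory.total \
  --format=csv -l 1
```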

Logs

View service logs:
docker compose logs -f [service-name]
