📦 Nexus Container Deployment

This document outlines all containers running on Nexus, the single physical host running Ubuntu Server LTS that powers the entire HALO ecosystem. Nexus is responsible for managing backend APIs, automation workflows, database storage, ingress routing, and all HALO services.


🌐 Core Infrastructure Services

| Container Name | Description | Compose File |
|---|---|---|
| traefik | Ingress proxy with automatic service discovery, TLS termination, and path-based routing | traefik.yml |
| postgres-db | PostgreSQL 16.4 database storing configuration, workflow data, and system state | postgres.yml |
| redis | In-memory message broker for n8n worker queues, pub/sub messaging, and caching | redis.yml |
| mosquitto | MQTT broker for lightweight messaging between Home Assistant, Frigate, and Zigbee2MQTT | mosquitto.yml |

Traefik is the single entry point for all HTTP/HTTPS traffic, automatically discovering services through Docker labels and routing requests based on URL paths. It replaces the previous NGINX implementation with more dynamic service discovery.
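
As an illustration, a minimal traefik.yml could look like the sketch below. The image tag, entrypoint names, and flag set are assumptions, not the live configuration:

```yaml
# Hypothetical traefik.yml excerpt; image tag and flags are assumptions.
services:
  traefik:
    image: traefik:v3.1
    command:
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"   # route only opted-in (labeled) containers
      - "--entrypoints.web.address=:80"
      - "--entrypoints.websecure.address=:443"
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro  # label-based service discovery
    networks:
      - nexus_frontnet
      - nexus_appnet

networks:
  nexus_frontnet:
    external: true
  nexus_appnet:
    external: true
```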

PostgreSQL provides schema-isolated database services for Home Assistant, n8n, Omnia API, and Grafana.
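
Per-service isolation is typically set up with first-boot init scripts; the official postgres image runs anything mounted into /docker-entrypoint-initdb.d on first startup. A minimal sketch, with placeholder credentials and script names:

```yaml
# Hypothetical postgres.yml excerpt; credentials and paths are placeholders.
services:
  postgres-db:
    image: postgres:16.4
    environment:
      POSTGRES_USER: nexus_admin                       # placeholder superuser role
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:?set in .env}
    volumes:
      - pgdata:/var/lib/postgresql/data
      # Scripts here run once on first startup, e.g. one CREATE DATABASE /
      # CREATE SCHEMA per service (home-assistant, n8n, omnia-api, grafana).
      - ./init:/docker-entrypoint-initdb.d:ro
    networks:
      - nexus_dbnet

volumes:
  pgdata:

networks:
  nexus_dbnet:
    external: true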

Redis enables n8n's distributed worker architecture and provides pub/sub messaging for real-time events.

Mosquitto handles all MQTT messaging for device events, camera detections, and automation triggers.


🔗 Workflow & Automation Services

| Container Name | Description | Compose File |
|---|---|---|
| n8n | Workflow automation engine with visual editor for multi-system orchestration | n8n.yml |
| n8n-worker-1 | Background worker for long-running n8n workflow tasks (Redis-backed) | n8n.yml |
| node-red | Low-latency reactive automation engine for time-sensitive device events | node-red.yml |

n8n coordinates complex workflows across Home Assistant, Frigate, Apollo, and Omnia API. The main container serves the web UI while workers process background jobs through Redis queues.
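
In n8n's queue mode the UI container and workers run the same image and configuration, differing only in the startup command. A hedged sketch of the relevant n8n.yml pieces (all values are placeholders):

```yaml
# Hypothetical n8n.yml excerpt; hostnames and keys are placeholders.
services:
  n8n:
    image: n8nio/n8n
    environment:
      - EXECUTIONS_MODE=queue                      # hand executions to workers
      - QUEUE_BULL_REDIS_HOST=redis
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres-db
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}   # must match across workers
    networks: [nexus_appnet, nexus_dbnet]

  n8n-worker-1:
    image: n8nio/n8n
    command: worker                                # same image, worker entrypoint
    environment:
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres-db
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
    networks: [nexus_appnet, nexus_dbnet]

networks:
  nexus_appnet:
    external: true
  nexus_dbnet:
    external: true
```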

Node-RED provides instant response to MQTT device events and handles automations requiring sub-second latency.


🧩 HALO Application Services

| Container Name | Description | Compose File |
|---|---|---|
| home-assistant | Smart-home automation platform managing all devices and scenes | home-assistant.yml |
| omnia-api | Widget management, user profiles, and dashboard backend (planned) | omnia-api.yml |
| frigate | AI-powered camera system with Coral TPU acceleration | frigate.yml |
| zigbee2mqtt | Zigbee network bridge with Sonoff USB coordinator | zigbee2mqtt.yml |

Home Assistant runs as a containerized service on Nexus (not standalone), managing all physical devices through integrations with Zigbee2MQTT, MQTT, and Frigate.

Omnia (API) will provide the backend for the Omnia (Screen) dashboard UI, managing widget configurations and user data.

Frigate processes camera feeds with Google Coral TPU (USB passthrough) for real-time object detection and security monitoring.

Zigbee2MQTT bridges Zigbee devices to MQTT using the Sonoff Zigbee 3.0 USB Dongle Plus (USB passthrough).


📊 Monitoring & Maintenance Services

| Container Name | Description | Compose File |
|---|---|---|
| grafana | Metrics visualization and monitoring dashboards | grafana.yml |
| watchtower | Automated container image updates and restart management | watchtower.yml |

Grafana provides real-time dashboards, pulling metrics from PostgreSQL and host-level system monitoring.
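
The PostgreSQL connection can be provisioned declaratively rather than configured in the UI. A hypothetical provisioning/datasources/postgres.yml, with placeholder names, database, and credentials:

```yaml
# Hypothetical Grafana datasource provisioning file; all values are placeholders.
apiVersion: 1
datasources:
  - name: Nexus Postgres
    type: postgres
    url: postgres-db:5432
    user: grafana_reader            # placeholder read-only role
    jsonData:
      database: halo
      sslmode: disable
    secureJsonData:
      password: ${GRAFANA_PG_PASSWORD}
```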

Watchtower automatically checks for container image updates and recreates containers with new versions on a scheduled basis.
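
A minimal watchtower.yml sketch; the daily 04:00 schedule and the cleanup flag are assumptions, not the deployed values:

```yaml
# Hypothetical watchtower.yml excerpt; schedule is an assumption.
services:
  watchtower:
    image: containrrr/watchtower
    environment:
      - WATCHTOWER_SCHEDULE=0 0 4 * * *   # 6-field cron (seconds first): daily at 04:00
      - WATCHTOWER_CLEANUP=true           # prune superseded images after updating
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
```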


🎨 Front-End Applications (Static Files)

| Application | Description | Served By |
|---|---|---|
| omnia-ui | React dashboard interface (Omnia Screen) | Traefik |

Omnia (Screen) is built as a React SPA and served as static files from a mounted volume behind Traefik. Deployments are zero-downtime: publishing a new build simply replaces the volume contents.
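
Since Traefik proxies requests rather than reading files from disk, a common pattern is a tiny static file server behind it. This sketch assumes nginx and a prebuilt omnia_ui_build volume, neither of which is confirmed by the deployment:

```yaml
# Hypothetical sketch; nginx and the volume name are assumptions.
services:
  omnia-ui:
    image: nginx:alpine
    volumes:
      - omnia_ui_build:/usr/share/nginx/html:ro      # React build output
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.omnia-ui.rule=PathPrefix(`/`)"
      - "traefik.http.services.omnia-ui.loadbalancer.server.port=80"
    networks:
      - nexus_appnet

volumes:
  omnia_ui_build:
    external: true

networks:
  nexus_appnet:
    external: true
```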


🔌 External & Connected Systems

These systems connect to Nexus but do not run as containers on Nexus:

| System Name | Description | Location |
|---|---|---|
| Apollo | AI & LLM workloads with dedicated GPU | Separate hardware |

Apollo runs on separate hardware with its own GPU, exposing APIs that Nexus services call over the local network. This separation ensures AI workloads don't compete with critical home automation services.


🔧 Hardware Devices (USB Passthrough)

| Device Name | Purpose | Container |
|---|---|---|
| Google Coral TPU (USB) | Hardware acceleration for Frigate | frigate |
| Sonoff Zigbee 3.0 USB Dongle Plus | Zigbee network coordinator | zigbee2mqtt |

Both USB devices are passed through from the Nexus host to their respective containers, providing direct hardware access for optimal performance.
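
In compose terms, passthrough is a devices: mapping per service. The host paths below are illustrative; the real by-id path normally embeds the dongle's serial number, so verify with `lsusb` and `ls /dev/serial/by-id/` on Nexus:

```yaml
# Hypothetical device mappings; host paths vary by machine.
services:
  frigate:
    devices:
      - /dev/bus/usb:/dev/bus/usb        # Coral USB TPU

  zigbee2mqtt:
    devices:
      # by-id paths survive USB re-enumeration, unlike /dev/ttyUSB0
      - /dev/serial/by-id/usb-ITead_Sonoff_Zigbee_3.0_USB_Dongle_Plus-if00-port0:/dev/ttyUSB0
```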


πŸ“ Network Architecture

All containers connect to one or more of three Docker networks:

| Network Name | Purpose | Connected Services |
|---|---|---|
| nexus_frontnet | Public-facing access (ingress only) | traefik |
| nexus_appnet | Internal service communication | traefik, n8n, node-red, home-assistant, omnia-api, frigate, zigbee2mqtt, mosquitto, redis |
| nexus_dbnet | Database access with restricted connectivity | postgres-db, n8n, omnia-api, home-assistant, grafana |

This three-tier isolation ensures services only communicate through intended channels, providing defense-in-depth security.
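
A sketch of what compose\networks.yml could define; marking nexus_dbnet as internal (no route off the host) is an assumption that matches the "restricted connectivity" description above:

```yaml
# Hypothetical networks.yml; `internal: true` is an assumption.
networks:
  nexus_frontnet:
    name: nexus_frontnet
  nexus_appnet:
    name: nexus_appnet
  nexus_dbnet:
    name: nexus_dbnet
    internal: true    # no outbound route; only attached services reach postgres-db
```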


🚀 Deployment Commands

Containers are deployed using PowerShell scripts in nexus/scripts/:

```powershell
# Deploy core infrastructure
.\scripts\deploy.ps1 -Files @("compose\networks.yml", "compose\traefik.yml", "compose\postgres.yml")

# Deploy messaging and workflows
.\scripts\deploy.ps1 -Files @("compose\redis.yml", "compose\mosquitto.yml", "compose\n8n.yml")

# Deploy HALO services
.\scripts\deploy.ps1 -Files @("compose\home-assistant.yml", "compose\frigate.yml", "compose\zigbee2mqtt.yml")

# Deploy monitoring
.\scripts\deploy.ps1 -Files @("compose\grafana.yml", "compose\watchtower.yml")

# Deploy specific service within a compose file
.\scripts\deploy.ps1 -Files @("compose\n8n.yml") -ServicesCsv "n8n"
```

📊 Container Status Summary

Active Services (Currently Deployed):

- Core: traefik, postgres-db, redis, mosquitto
- Workflows: n8n (+ workers), node-red
- HALO: home-assistant, frigate, zigbee2mqtt
- Monitoring: grafana, watchtower

Planned Services:

- omnia-api (backend for Omnia dashboard)

Not Containerized:

- Apollo (separate hardware with dedicated GPU)
- Omnia (Screen) (static files served by Traefik)

πŸ” Service Discovery

Traefik automatically discovers services through Docker labels. Example labels in compose files:

```yaml
labels:
  - "traefik.enable=true"
  - "traefik.http.routers.n8n.rule=PathPrefix(`/n8n`)"
  - "traefik.http.services.n8n.loadbalancer.server.port=5678"
```

No manual configuration is required: Traefik watches Docker for label changes and updates routing automatically.


All containers run on Nexus, the single physical host that powers HALO. Apollo is the only system running on separate hardware.

