Blueprints

Validated operating patterns with explicit environment posture, guardrails, and versioned implementation surfaces.

A blueprint is not a one-off environment file. It composes stable module contracts with policy profiles, environment state, and replaceable implementation packs so the same operating pattern can be reused cleanly across shared foundations, drills, staging, and production lanes.

Operating posture

Blueprints stay stable; environments, guardrails, and packaging surfaces change around them.

Reusable across named environments

Blueprint refs stay stable while state, secrets, approvals, and runtime context remain isolated per environment.

  • The same blueprint can drive shared, dev, drill, staging, prod, QA, or customer-specific lanes.
  • Live and drill cutovers stay separated by environment state instead of branching the blueprint itself.
  • Environment naming is not limited to dev, staging, and prod.
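The binding described above can be sketched in a few lines. This is an illustrative model, not HybridOps' actual schema: the blueprint ref, environment names, and state-backend fields are assumptions chosen to show one stable pattern driving isolated environment state.

```python
# Hypothetical sketch: one stable blueprint ref reused across named
# environments, each with its own isolated state backend and approvals.
# All names and fields below are illustrative assumptions.

BLUEPRINT_REF = "dr/postgresql-ha-failover-gcp@v1"

ENVIRONMENTS = {
    "drill": {"state_backend": "gcs://state/drill", "manual_gate": False},
    "prod":  {"state_backend": "gcs://state/prod",  "manual_gate": True},
}

def bind(blueprint_ref: str, env_name: str) -> dict:
    """Bind the same blueprint ref to an environment's isolated state."""
    env = ENVIRONMENTS[env_name]
    return {"blueprint": blueprint_ref, "environment": env_name, **env}

drill_run = bind(BLUEPRINT_REF, "drill")
prod_run = bind(BLUEPRINT_REF, "prod")
assert drill_run["blueprint"] == prod_run["blueprint"]          # same pattern
assert drill_run["state_backend"] != prod_run["state_backend"]  # isolated state
```

Live and drill cutovers then differ only in which environment entry the run binds to; the blueprint itself is never branched.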

Profiles apply guardrails

Execution policy is applied around the blueprint contract, not hardcoded into every implementation path.

  • Naming, backend selection, timeouts, and connectivity expectations are policy-driven.
  • Manual gates, cost controls, and validation depth can vary by environment without changing the blueprint ref.
  • Every run still resolves through the same contract chain: module, driver, profile, pack, and probes.
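One way to picture the layering: profile values override blueprint defaults at run resolution, while the contract ref itself never changes. The field names below (timeouts, gates, validation depth) are illustrative assumptions, not the real profile schema.

```python
# Illustrative sketch: guardrails applied around the blueprint contract.
# The profile supplies policy; the blueprint ref stays fixed per run.

BLUEPRINT = {"ref": "dr/postgresql-ha-failover-gcp@v1", "timeout_s": 600}

PROFILES = {
    "drill": {"manual_gate": False, "validation_depth": "smoke"},
    "prod":  {"manual_gate": True,  "validation_depth": "full",
              "timeout_s": 1800},
}

def resolve(blueprint: dict, profile_name: str) -> dict:
    """Profile values override blueprint defaults; the ref never changes."""
    resolved = {**blueprint, **PROFILES[profile_name]}
    resolved["ref"] = blueprint["ref"]  # the contract stays fixed
    return resolved

assert resolve(BLUEPRINT, "drill")["timeout_s"] == 600   # default kept
assert resolve(BLUEPRINT, "prod")["timeout_s"] == 1800   # policy override
```

Switching environments means resolving against a different profile, never editing the implementation paths behind the contract.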

Implementation surfaces are packaged cleanly

Blueprints remain the operating pattern while implementation assets are versioned on the surfaces they belong to.

  • HybridOps Core ships the runtime, module contracts, and blueprint definitions.
  • Terraform module sources can be consumed from registry or Git-backed module repos.
  • Ansible collections are packaged for Galaxy distribution rather than kept as local ad hoc scripts.

Reference Blueprints

Current reference blueprints organized by deployment scope.

Scopes covered: On-Prem, Networking & GCP, and DR. The DR blueprints:
  • dr/postgresql-ha-backup-gcp@v1 · PostgreSQL Backup to GCS
  • dr/postgresql-ha-failover-gcp@v1 · PostgreSQL Failover to GCP
  • dr/postgresql-ha-failback-onprem@v1 · PostgreSQL Failback to On-Prem
  • dr/postgresql-cloudsql-standby-gcp@v1 · Cloud SQL Standby in GCP
  • dr/postgresql-cloudsql-promote-gcp@v1 · Cloud SQL Promote in GCP
  • dr/postgresql-cloudsql-failback-onprem@v1 · Cloud SQL Failback to On-Prem
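The refs above follow a consistent scope/name@version shape. A small helper like the following could validate and split such refs; the regex and field names are an assumption based only on the examples shown, not a published ref grammar (note that module IDs in the catalog below use a different, unversioned form).

```python
# Hypothetical parser for the blueprint ref format "<scope>/<name>@<version>"
# seen in the reference list above. Illustrative only.
import re

REF_RE = re.compile(r"^(?P<scope>[a-z0-9-]+)/(?P<name>[a-z0-9-]+)@(?P<version>v\d+)$")

def parse_ref(ref: str) -> dict:
    """Split a versioned blueprint ref into scope, name, and version."""
    m = REF_RE.match(ref)
    if not m:
        raise ValueError(f"not a blueprint ref: {ref!r}")
    return m.groupdict()

parse_ref("dr/postgresql-ha-failover-gcp@v1")
# → {'scope': 'dr', 'name': 'postgresql-ha-failover-gcp', 'version': 'v1'}
```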

Full blueprint step sequences, module compositions, and runbook links are in the blueprint index.

Module Catalog

Core module families behind the current validated platform paths.

Infrastructure

SDN, virtual networks, WAN hubs, edge foundations, and image lifecycle across on-prem, GCP, Azure, AWS, and Hetzner.

  • core/onprem/network-sdn
  • core/azure/vnet
  • org/gcp/wan-hub-network
  • org/hetzner/vyos-edge-foundation
  • org/gcp/wan-vpn-to-edge
  • core/onprem/template-image

Platform

PostgreSQL HA, RKE2 Kubernetes, ArgoCD, NetBox IPAM, edge observability, and DNS routing.

  • platform/onprem/postgresql-ha
  • platform/onprem/rke2-cluster
  • platform/k8s/argocd-bootstrap
  • platform/onprem/netbox
  • platform/network/edge-observability
  • platform/network/decision-service

DR & Storage

pgBackRest to GCS/S3, Cloud SQL replication, object storage repos, and DNS-based failover.

  • platform/onprem/postgresql-ha-backup
  • org/gcp/cloudsql-postgresql
  • org/gcp/cloudsql-external-replica
  • org/gcp/object-repo
  • platform/network/dns-routing

Full module contracts, lifecycle runbooks, and input/output references are in the module index. Implementation packaging lives on the appropriate surfaces: source in GitHub, Terraform modules in registry or Git-backed module repos, and Ansible collections through Galaxy.

Governed execution model

The same blueprint can operate across environments because policy, state, and execution are separated cleanly.

Environment posture

Blueprints bind to isolated environment state, not a fixed dev/staging/prod assumption.

Guardrails

Profiles apply policy for naming, approvals, connectivity, cost, and validation depth.

Run records

Drivers isolate workdirs and produce structured logs, state, and operator-facing verification outputs.
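A run record under this model might look roughly like the following. The field names and workdir convention are illustrative assumptions, not the actual HybridOps record schema; the point is that each run gets an isolated workdir and a structured, correlatable record.

```python
# Illustrative run-record shape: a driver isolates its workdir per run
# and emits structured output keyed by a run ID. Fields are assumptions.
import json
import tempfile
import uuid
from datetime import datetime, timezone

def start_run(blueprint_ref: str, environment: str) -> dict:
    run_id = uuid.uuid4().hex[:12]
    # Isolated per-run workdir so concurrent runs never share state.
    workdir = tempfile.mkdtemp(prefix=f"run-{run_id}-")
    return {
        "run_id": run_id,
        "blueprint": blueprint_ref,
        "environment": environment,
        "workdir": workdir,
        "started_at": datetime.now(timezone.utc).isoformat(),
        "verification": [],  # operator-facing checks appended during the run
    }

record = start_run("dr/postgresql-ha-backup-gcp@v1", "drill")
print(json.dumps(record, indent=2))
```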

Full topology: on-prem primary runtime, always-on edge decisioning, event-driven cloud burst and DR.

  • Prometheus scrapes on-prem cluster metrics and remote-writes them to the Thanos edge receiver for a global view.
  • The Decision service evaluates policy rules against aggregated Thanos metrics; if thresholds breach, it emits action signals.
  • The DNS cutover module executes the traffic shift and writes structured run records to external object storage.
  • The cloud target cluster activates (warm or cold), DR data promotes, and failover ingress begins receiving traffic.
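The decision step in that sequence reduces to a threshold check that emits an action signal on breach. A minimal sketch, assuming a hypothetical metric name, threshold, and signal shape:

```python
# Minimal sketch of the decision loop: evaluate a policy threshold
# against an aggregated metric and emit an action signal on breach.
# The metric name, threshold, and signal fields are illustrative.

POLICY = {
    "metric": "onprem:availability:ratio",  # hypothetical aggregated series
    "threshold": 0.99,
    "action": "dns_cutover",
}

def evaluate(policy: dict, current_value: float):
    """Return an action signal when the threshold breaches, else None."""
    if current_value < policy["threshold"]:
        return {"action": policy["action"],
                "metric": policy["metric"],
                "observed": current_value}
    return None

assert evaluate(POLICY, 0.999) is None                     # healthy: no signal
assert evaluate(POLICY, 0.42)["action"] == "dns_cutover"   # breach: cutover
```

In the real loop the observed value would come from a Thanos query and the signal would be consumed by the DNS cutover module, with the resulting run record correlated by run ID.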

Figure: HybridOps Executive Architecture. Three-zone topology with data and control flows; HybridOps v1 baseline. On-prem primary: RKE2 workload cluster as the primary runtime for platform and apps, per-site Prometheus scraping cluster, service, and infra metrics, a GitOps agent syncing desired state, and a WAN edge HA pair (IPsec, BGP, floating IP, secure ingress); stateful services are externalized and replicated by policy. Hetzner edge, always on: Thanos Receive + Query for a global metrics view, the Decision policy loop, and the DNS cutover action module; run records correlate with run IDs. Cloud burst / DR: a cold/warm cluster target provisioned only on a burst or DR event, a DR data target for replica promotion or backup restore, and failover ingress that receives traffic after DNS cutover. External object storage (GCS) holds long-term metrics blocks and DR drill run records, independent from on-prem. Flows shown: remote_write, burst/DR trigger, DNS action, and metrics blocks.

For detailed signal and control mapping, see the ADR overview. The execution model page explains the contract chain in detail.

WAN topology

Hetzner edge pair, BGP peering to GCP hub, and HA VPN tunnels — as deployed by the networking blueprints.

Three-zone WAN topology connecting on-prem workloads through a Hetzner edge pair to the GCP cloud hub. BGP route exchange and HA VPN tunnels provide redundant, automatically converging connectivity.

Workload hosts, management network, and VLAN segments on the on-prem site. Routes are advertised via eBGP to the Hetzner edge pair for onward transit.

Primary/secondary edge pair with floating IP for automatic failover. Terminates IPsec and WireGuard VPN tunnels to GCP. BGP sessions maintained across both tunnels.

Cloud Router peers with the edge pair via eBGP over HA VPN. Dynamic routes propagate on-prem prefixes into the VPC. Cloud DNS handles failover routing policy.
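The redundancy above can be reduced to a toy model: each HA VPN tunnel carries an eBGP session, and routes converge onto whichever sessions remain established. Tunnel names and session states below are illustrative inputs, not real BGP state handling.

```python
# Toy model of tunnel redundancy: routes converge onto whichever
# eBGP sessions remain established. States are illustrative.

TUNNELS = {"tunnel-1": "established", "tunnel-2": "established"}

def usable_paths(sessions: dict) -> list:
    """Return the tunnels whose BGP sessions are still up."""
    return [t for t, state in sessions.items() if state == "established"]

assert usable_paths(TUNNELS) == ["tunnel-1", "tunnel-2"]
# If tunnel-1's session drops, traffic converges on tunnel-2 alone.
assert usable_paths({"tunnel-1": "down", "tunnel-2": "established"}) == ["tunnel-2"]
```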

Figure: HybridOps WAN topology, on-prem to Hetzner edge pair to GCP hub with BGP and HA VPN. On-prem site (private range, private ASN): workload hosts (app, db, k8s nodes), a dedicated management subnet, SDN-controlled VLAN segments, and a LAN uplink with eBGP peering to the edge. Hetzner edge (private ASN): WAN edge pair with floating IP for automatic failover, edge primary active with BGP up, edge secondary passive on standby, and IPsec/WireGuard VPN termination. GCP hub (cloud ASN): Cloud Router as BGP peer with dynamic routes, HA VPN gateway with two tunnels at a 99.99% SLA, VPC workloads for DR and burst capacity, and Cloud DNS for failover and routing policy. BGP route exchange propagates on-prem prefixes to the GCP hub automatically on failover.