Use cases

Eight platform scenarios proven end to end — each mapped to a delivery path, runbooks, and an Academy track.

Authoritative on-prem foundation

Stand up NetBox-backed IPAM, Proxmox SDN, and foundation services from a clean on-prem environment with deterministic state and repeatable rebuilds.

Validates
  • NetBox is used as the authoritative IPAM source for prefixes and VM inventory sync.
  • Management, data, and workload bridges are provisioned through the Proxmox SDN path.
  • Foundation VMs can be rebuilt and re-synchronised without manual IP assignment or address drift; a minimal allocation sketch follows this list.
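
A minimal sketch of that allocation contract, using the pynetbox client; the NetBox URL, token, prefix, and VM name are placeholders rather than the platform's real values:

    import pynetbox

    # Connect to the authoritative NetBox instance (URL and token are placeholders).
    nb = pynetbox.api("https://netbox.example.internal", token="REDACTED")

    def mgmt_ip_for(vm_name: str, prefix_cidr: str) -> str:
        """Return the VM's management IP, allocating from NetBox only if absent."""
        # Reuse any existing assignment so rebuilds never drift to a new address.
        existing = nb.ipam.ip_addresses.get(dns_name=vm_name)
        if existing:
            return existing.address
        # Otherwise let NetBox hand out the next free address in the prefix.
        prefix = nb.ipam.prefixes.get(prefix=prefix_cidr)
        return prefix.available_ips.create({"dns_name": vm_name}).address

    print(mgmt_ip_for("vm-foundation-01", "10.10.20.0/24"))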

Reusable Proxmox SDN foundation

Deliver segmented VLAN-backed networking, optional host routing, NAT, and DHCP through one controlled Proxmox SDN path instead of hand-built bridge changes.

Validates
  • Zone-scoped SDN objects, host gateways, DHCP, and NAT are delivered through one repeatable module path (sketched after this list).
  • A known Proxmox GUI status mismatch is corrected non-destructively so healthy VNets no longer show false red errors.
  • The same topology model supports both host-routed lab/bootstrap mode and edge-routed production posture.
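
A minimal sketch of that single module path, assuming API-token access and the proxmoxer client; the host, zone and VNet names, bridge, and VLAN tag are illustrative, not the platform's actual objects:

    from proxmoxer import ProxmoxAPI

    # Token-based connection to a cluster node (host and token are placeholders).
    prox = ProxmoxAPI("pve1.example.internal", user="automation@pve",
                      token_name="sdn", token_value="REDACTED", verify_ssl=True)

    # Declare a VLAN zone and a VNet inside it instead of hand-editing bridges.
    prox.cluster.sdn.zones.post(zone="lab", type="vlan", bridge="vmbr0")
    prox.cluster.sdn.vnets.post(vnet="workload", zone="lab", tag=120)

    # Apply the pending SDN configuration cluster-wide in one reload.
    prox.cluster.sdn.put()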

PostgreSQL HA failover and failback

Run PostgreSQL HA on-prem with Patroni and pgBackRest, restore the cluster into GCP during a DR event, and return it on-prem with checksum-verified application data.

Validates
  • Three-node on-prem PostgreSQL HA is provisioned and operated through the platform blueprint path.
  • The GCP recovery lane restores from pgBackRest and preserves seeded application row counts and checksums; a verification sketch follows this list.
  • The isolated on-prem return lane restores the same dataset back from GCP-backed storage without touching the live dev database lane.
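
One way the row-count and checksum comparison might be scripted with psycopg2; the DSNs, table, and ordering column are hypothetical:

    import psycopg2

    def fingerprint(dsn: str, table: str, order_col: str) -> tuple:
        """Row count plus an order-stable MD5 over the table's row images."""
        with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
            # Identifiers are trusted constants here, not user input.
            cur.execute(
                f"SELECT count(*), md5(string_agg(t::text, ',' ORDER BY {order_col})) "
                f"FROM {table} t"
            )
            return cur.fetchone()

    source = fingerprint("host=pg-onprem dbname=app", "orders", "id")
    restored = fingerprint("host=pg-dr dbname=app", "orders", "id")
    assert source == restored, "restored dataset diverges from the on-prem source"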

Hybrid WAN edge and site extension

Use a Hetzner-hosted VyOS edge pair as the public WAN anchor, extend on-prem routes through site-extension tunnels, and exchange prefixes with a GCP hub over redundant BGP sessions.

Validates
  • The Hetzner VyOS edge image path is proven on both Hetzner and Proxmox.
  • The GCP hub learns on-prem routes across both WAN legs through Cloud Router BGP peers (a status check is sketched after this list).
  • On-prem prefixes are extended through the Hetzner edge pair without depending on a static public IP at the on-prem site.
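
A sketch of checking both WAN legs from the hub side with the google-cloud-compute client; the project, region, and router name are placeholders:

    from google.cloud import compute_v1

    client = compute_v1.RoutersClient()
    status = client.get_router_status(
        project="hub-project", region="europe-west3", router="hub-cloud-router"
    )

    # Both legs should report an established session and learned on-prem prefixes.
    for peer in status.result.bgp_peer_status:
        print(peer.name, peer.state, peer.num_learned_routes)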

RKE2 HA platform foundation

Bring up a highly available on-prem RKE2 control plane and worker pool with a clean kubeconfig handoff, then layer GitOps and workloads on top without changing the underlying execution model.

Validates
  • The on-prem RKE2 blueprint provisions a healthy, highly available control plane and worker pool with kubeconfig output; a smoke-check sketch follows this list.
  • The same execution contract used for networking and DR is reused for platform services and workload preparation.
  • The platform path is ready for GitOps overlays without reworking the underlying environment model.
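
A smoke-check sketch against the handed-off kubeconfig, using the official Kubernetes Python client; the kubeconfig path is a placeholder:

    from kubernetes import client, config

    # Load the kubeconfig emitted by the blueprint (path is a placeholder).
    config.load_kube_config(config_file="output/rke2.kubeconfig")

    # Every node should be Ready, with control-plane and worker roles present.
    for node in client.CoreV1Api().list_node().items:
        ready = next(c.status for c in node.status.conditions if c.type == "Ready")
        roles = [l for l in node.metadata.labels if l.startswith("node-role.")]
        print(node.metadata.name, ready, roles)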

Managed PostgreSQL DR with Cloud SQL

Establish a managed Cloud SQL standby from the on-prem PostgreSQL HA source, promote it under control, and fail back into an isolated on-prem lane without touching the live service path.

Validates
  • Cloud SQL standby establishment is exercised from the on-prem PostgreSQL HA source through the managed replication path.
  • Managed promote is validated through an isolated DNS name rather than by touching the live service record; a validation sketch follows this list.
  • Managed failback returns the isolated service path to on-prem cleanly, giving a real comparison against the self-managed DR lane.
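
A sketch of validating the promote through the isolated name with psycopg2; the DR hostname, database, and user are hypothetical:

    import socket
    import psycopg2

    # The isolated DR record, deliberately separate from the live service name.
    dr_host = "db-dr.lab.example.internal"
    addr = socket.gethostbyname(dr_host)  # the isolated name must resolve first

    # After a managed promote the instance must be writable, i.e. no longer
    # in recovery; a replica that was not promoted would still return true.
    with psycopg2.connect(host=dr_host, dbname="app", user="checker") as conn:
        with conn.cursor() as cur:
            cur.execute("SELECT pg_is_in_recovery()")
            in_recovery, = cur.fetchone()

    assert not in_recovery, f"{dr_host} ({addr}) still answers as a replica"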

Hybrid portal burst to GKE

Burst an authenticated portal web tier to GKE with GitOps, cloud-native secret delivery, and a controlled public cutover while authoritative identity and entitlement services remain upstream.

Validates
  • A governed GKE burst cluster is provisioned on the shared hub network and bootstrapped with Argo CD and GCP Secret Manager.
  • The portal web tier serves from GKE while identity and entitlement services remain authoritative upstream dependencies by design.
  • Public cutover is validated only after cluster health, app health, runtime-bundle provenance, and application-route checks all succeed; a gate sketch follows this list.
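
A sketch of one such gate against the Argo CD REST API, assuming token auth; the endpoint, token, and application name are placeholders, and the real cutover adds the provenance and route checks listed above:

    import requests

    ARGOCD = "https://argocd.burst.example.internal"   # placeholder endpoint
    TOKEN = "REDACTED"                                 # placeholder API token

    def gate_passed(app: str) -> bool:
        """Cut over only when the app is both Synced and Healthy in Argo CD."""
        resp = requests.get(f"{ARGOCD}/api/v1/applications/{app}",
                            headers={"Authorization": f"Bearer {TOKEN}"},
                            timeout=10)
        resp.raise_for_status()
        status = resp.json()["status"]
        return (status["sync"]["status"] == "Synced"
                and status["health"]["status"] == "Healthy")

    if gate_passed("portal-web"):
        print("gate green: safe to flip the public record")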

Governed network emulation delivery

Provision governed EVE-NG environments on Proxmox for on-campus teaching or on GCP for remote delivery; same blueprint model, same execution contract, different target.

Validates
  • EVE-NG is provisioned through a provider-neutral configuration module, with the on-prem and GCP paths sharing the same contract.
  • The GCP path encodes nested virtualization at the VM layer instead of leaving that requirement implicit in operator notes (illustrated after this list).
  • The on-prem and GCP blueprint contracts validate cleanly and give instructors a reusable starting point for controlled lab delivery.
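
A fragment showing how that requirement can be encoded with the google-cloud-compute client; the instance name and zone are illustrative, and disks and network interfaces are omitted for brevity:

    from google.cloud import compute_v1

    # Nested virtualization is declared on the VM resource itself, so the
    # requirement travels with the blueprint instead of operator notes.
    instance = compute_v1.Instance(
        name="eve-ng-lab-01",
        machine_type="zones/europe-west3-a/machineTypes/n2-standard-8",
        advanced_machine_features=compute_v1.AdvancedMachineFeatures(
            enable_nested_virtualization=True
        ),
    )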

DR failover and failback cycle

The full database-tier failover and recovery cycle: from on-prem database HA through GCP failover and back, with every phase leaving a reviewable run record.

The HA cluster manages leader election on-prem. The backup engine ships incremental, encrypted snapshots to cloud object storage.
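
If the backup engine is pgBackRest, as the PostgreSQL use cases above suggest, the incremental run might be wrapped like this; the stanza name is a placeholder, and repository encryption is assumed to be configured in pgbackrest.conf:

    import subprocess

    # Take an incremental backup; encryption and the object-storage repo are
    # driven by pgbackrest.conf, not by flags here.
    subprocess.run(["pgbackrest", "--stanza=app", "--type=incr", "backup"],
                   check=True)

    # Surface the resulting backup set for the run record.
    subprocess.run(["pgbackrest", "--stanza=app", "info"], check=True)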

The decision service evaluates aggregated metrics against policy thresholds. When a threshold is breached, it emits the failover signal.
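
A minimal sketch of that evaluation; the metric names and thresholds are illustrative, not the platform's actual policy:

    from dataclasses import dataclass

    @dataclass
    class Policy:
        max_replication_lag_s: float  # tolerated lag before DR is considered
        min_failed_probes: int        # consecutive primary probe failures

    def should_fail_over(metrics: dict, policy: Policy) -> bool:
        """Emit the failover signal only when aggregated metrics breach policy."""
        return (metrics["replication_lag_s"] > policy.max_replication_lag_s
                and metrics["failed_probes"] >= policy.min_failed_probes)

    # Illustrative aggregated sample; real inputs come from monitoring.
    sample = {"replication_lag_s": 45.0, "failed_probes": 5}
    if should_fail_over(sample, Policy(30.0, 3)):
        print("failover signal emitted")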

The managed cloud replica is promoted to primary. A DNS cutover redirects traffic to the cloud endpoint via a low-TTL swap.
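
A sketch of the low-TTL swap with the google-cloud-dns client; the project, zone, record name, and addresses are placeholders, and the record's TTL is assumed to have been lowered ahead of the drill:

    from google.cloud import dns

    client = dns.Client(project="hub-project")
    zone = client.zone("app-zone", "example.internal.")

    old = zone.resource_record_set("db.example.internal.", "A", 60, ["10.10.20.5"])
    new = zone.resource_record_set("db.example.internal.", "A", 60, ["34.89.0.12"])

    change = zone.changes()
    change.delete_record_set(old)  # retire the on-prem answer
    change.add_record_set(new)     # point the name at the cloud endpoint
    change.create()                # both land in one atomic change set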

Once the incident is resolved, the failback blueprint restores the on-prem primary and reverses the DNS cutover. The drill leaves a reviewable record of every step in the cycle.

HybridOps DR flow: database HA → backup engine → cloud replica → DNS cutover → failback.
Phases: normal state · trigger · failover · failback sequence.

  01 PRIMARY · Database HA: leader election on the on-prem cluster
  02 BACKUP · Backup Engine: incremental, encrypted snapshots to cloud object storage
  03 TRIGGER · Decision Service: metric thresholds drive the automated cutover
  04 REPLICA · Cloud Replica: managed replication, warm standby active
  05 CUTOVER · DNS Cutover: traffic to cloud via low-TTL swap
  06 RECORD · Run Record: structured, redacted, review-ready
  07 FAILBACK · Failback Blueprint: restore the on-prem primary, DNS restored

Every drill leaves a complete, reviewable run record; no screenshots required.