Installation Guide¶
This guide documents how the Terraform State Database component was installed and configured.
Prerequisites¶
Before installing, ensure these components are deployed:
- ✅ Hetzner CSI Driver (hetzner-csi)
- ✅ External Secrets Operator (external-secrets)
- ✅ OpenBao (for credential management)
Installation Overview¶
The installation consists of these major steps:
- Create Hetzner volume for persistent storage
- Store credentials in OpenBao
- Deploy CloudNativePG operator
- Create ExternalSecret for credential sync
- Create PersistentVolume pointing to the Hetzner volume
- Deploy PostgreSQL cluster
- Create Flux Kustomizations
- Initialize database schema
Detailed Installation Steps¶
Step 1: Create Hetzner Volume¶
Add Terraform resource in terraform/environments/hetzner-mgmt-cluster/3-ekstra/main.tf:
resource "hcloud_volume" "terraform_state_db" {
  name     = "terraform-state-db-volume"
  size     = 10
  location = "nbg1"
  format   = "ext4"

  labels = {
    purpose    = "terraform-state-db"
    cluster    = "hetzner-mgmt"
    component  = "terraform-state-db"
    managed-by = "terraform"
  }

  delete_protection = true
}

output "terraform_state_db_volume_id" {
  description = "Volume ID for Terraform State Database storage"
  value       = hcloud_volume.terraform_state_db.id
}
Apply:
cd terraform/environments/hetzner-mgmt-cluster/3-ekstra
terraform apply -target=hcloud_volume.terraform_state_db
Note the output volume ID (e.g., 104482472).
Step 2: Store Credentials in OpenBao¶
Store database credentials in OpenBao:
# Port-forward to OpenBao
kubectl port-forward -n openbao svc/openbao 8200:8200 &
# Login
export VAULT_ADDR='http://127.0.0.1:8200'
vault login
# Store credentials
vault kv put secret/hetzner-mgmt/terraform-state-db/credentials \
username="terraform_backend" \
password="secret123"
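The literal password above is only a placeholder; in practice, generate a strong random value first. A minimal sketch, assuming a POSIX shell with /dev/urandom available:

```shell
# Generate a 32-character alphanumeric password from /dev/urandom
DB_PASSWORD="$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32)"
echo "${#DB_PASSWORD}"  # prints 32
```

Then pass password="$DB_PASSWORD" to the vault kv put command above instead of a hard-coded literal.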
Step 3: Deploy CloudNativePG Operator¶
The CloudNativePG operator manages PostgreSQL clusters. No manual PVC creation is needed - the operator handles this automatically.
Create component structure:
mkdir -p components/cloudnative-pg/base
mkdir -p components/terraform-state-db/{base,secrets}
mkdir -p cluster/hetzner-mgmt/{cloudnative-pg,terraform-state-db}
CloudNativePG automatically creates PVCs based on the cluster spec. The PVC will be named terraform-state-db-1 (cluster name + instance number).
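The overview also calls for an ExternalSecret that syncs the OpenBao credentials into the terraform-state-db-credentials secret referenced by the cluster in Step 5. A sketch of what that manifest could look like, assuming an existing ClusterSecretStore named openbao (the store name and remote key layout are assumptions to adapt to your setup):

```yaml
---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: terraform-state-db-credentials
  namespace: terraform-state
spec:
  refreshInterval: 1h
  secretStoreRef:
    kind: ClusterSecretStore
    name: openbao  # assumed store name
  target:
    name: terraform-state-db-credentials
    template:
      type: kubernetes.io/basic-auth  # CloudNativePG expects a basic-auth secret
  data:
    - secretKey: username
      remoteRef:
        key: hetzner-mgmt/terraform-state-db/credentials
        property: username
    - secretKey: password
      remoteRef:
        key: hetzner-mgmt/terraform-state-db/credentials
        property: password
```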
Step 4: Create PersistentVolume¶
The PersistentVolume must be created manually to point to the Hetzner volume. CloudNativePG will create the PVC automatically and it will bind to this PV.
components/terraform-state-db/base/persistent-volume.yaml:
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: terraform-state-db-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: hcloud-volumes
  csi:
    driver: csi.hetzner.cloud
    volumeHandle: "104482472" # Your volume ID from Step 1
Important: Automatic PVC Creation
Do NOT create a PersistentVolumeClaim manually. CloudNativePG creates PVCs automatically with the name pattern <cluster-name>-<instance-number>. Creating a PVC manually will cause binding conflicts.
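If you want to guarantee that the automatically created PVC binds to this specific PV (rather than any PV with a matching size and storage class), Kubernetes also supports pre-binding the PV to the future claim by name. A hedged sketch of the optional extra field on the PV spec, using the CNPG claim name pattern from the note above:

```yaml
spec:
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: terraform-state-db-1
    namespace: terraform-state
```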
Step 5: Create PostgreSQL Cluster¶
The cluster spec is simplified to let CloudNativePG handle PVC creation:
components/terraform-state-db/base/postgres-cluster.yaml:
---
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: terraform-state-db
  namespace: terraform-state
spec:
  instances: 1
  # Simple storage config - CloudNativePG creates the PVC
  storage:
    size: 10Gi
    storageClass: hcloud-volumes
  bootstrap:
    initdb:
      database: terraform_state
      owner: terraform_backend
      secret:
        name: terraform-state-db-credentials
  monitoring:
    enablePodMonitor: true
  resources:
    requests:
      memory: "256Mi"
      cpu: "250m"
    limits:
      memory: "1Gi"
      cpu: "1000m"
  postgresql:
    parameters:
      max_connections: "100"
      shared_buffers: "256MB"
Step 6: Deploy via Flux¶
Commit all files and push to trigger Flux deployment:
git add components/cloudnative-pg components/terraform-state-db cluster/hetzner-mgmt
git commit -m "feat: add Terraform state backend"
git push origin main
Monitor deployment:
# Watch Flux Kustomizations
flux get kustomizations -w
# Check CloudNativePG operator
kubectl get pods -n cnpg-system
# Check PostgreSQL cluster
kubectl get cluster -n terraform-state
kubectl get pods -n terraform-state
Expected progression:
1. CloudNativePG operator deploys
2. ExternalSecret syncs credentials
3. PostgreSQL cluster creates PVC terraform-state-db-1
4. PVC automatically binds to PV terraform-state-db-pv
5. Pod starts and initializes database
Step 7: Initialize Database Schema¶
Once the cluster is healthy, create the states table:
# Wait for cluster to be ready
kubectl wait --for=condition=ready cluster/terraform-state-db \
-n terraform-state --timeout=300s
# Create states table with proper permissions
kubectl exec -n terraform-state terraform-state-db-1 -- \
  psql -U postgres -d terraform_state -c "
    CREATE TABLE IF NOT EXISTS states (
      id SERIAL PRIMARY KEY,
      name TEXT NOT NULL UNIQUE,
      data TEXT
    );
    GRANT ALL PRIVILEGES ON TABLE states TO terraform_backend;
    GRANT USAGE, SELECT ON SEQUENCE states_id_seq TO terraform_backend;
  "
Verify:
kubectl exec -n terraform-state terraform-state-db-1 -- \
psql -U postgres -d terraform_state -c "\dt"
Output should show:
        List of relations
 Schema |  Name  | Type  |  Owner
--------+--------+-------+----------
 public | states | table | postgres
(1 row)
Verification¶
Verify all components are working:
# 1. Check operator
kubectl get deployment -n cnpg-system cloudnative-pg
# 2. Check cluster status
kubectl get cluster -n terraform-state
# 3. Check PV binding
kubectl get pv terraform-state-db-pv
kubectl get pvc -n terraform-state
# 4. Check ExternalSecret
kubectl get externalsecret -n terraform-state
# 5. Verify table exists
kubectl exec -n terraform-state terraform-state-db-1 -- \
psql -U postgres -d terraform_state -c "SELECT COUNT(*) FROM states;"
Important Notes¶
PVC Management by CloudNativePG¶
CloudNativePG automatically manages PVCs:
- PVC Name: Always <cluster-name>-<instance-number> (e.g., terraform-state-db-1)
- Creation: Automatic when the cluster is created
- Binding: Automatic to a matching PV based on size and storage class
- Do NOT manually create PVCs - this causes binding conflicts
Volume Persistence¶
The Hetzner volume persists across cluster deletions because:
- The PersistentVolume has the Retain reclaim policy
- Volume data is preserved on Hetzner Cloud
- To reuse it after cluster recreation, clear the PV claimRef (see Troubleshooting below)
Troubleshooting PVC Binding Issues¶
If the PVC doesn't bind:
# Check PVC status
kubectl describe pvc terraform-state-db-1 -n terraform-state
# Check PV status
kubectl get pv terraform-state-db-pv
# If PV is "Released", clear claimRef
kubectl patch pv terraform-state-db-pv -p '{"spec":{"claimRef":null}}'
# Delete and recreate cluster to retry binding
kubectl delete cluster terraform-state-db -n terraform-state
# Flux will recreate it automatically
Next Steps¶
- Usage Guide - Learn how to use the Terraform backend
- Configure Gitea Action Secrets with the connection string
- Test with a sample Terraform project
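For the sample project, point Terraform's built-in pg backend at the new database. A minimal sketch, assuming the cluster is reachable via CloudNativePG's default read-write service name terraform-state-db-rw (substitute the real host, port-forward, and password for your environment):

```hcl
terraform {
  backend "pg" {
    # Hypothetical connection string; replace host and <password> with real values
    conn_str = "postgres://terraform_backend:<password>@terraform-state-db-rw.terraform-state.svc:5432/terraform_state"
  }
}
```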