
CloudNativePG

Previously I tested CloudNativePG in Kind. Now I want to try CloudNativePG again, this time running in K3s. I am still interested in a single-node setup; I'll keep a multi-node setup for later. I will also look into how to configure monitoring and backups.

Installing CNPG

Install the CloudNativePG operator:

kubectl apply --server-side -f \
    https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/release-1.27/releases/cnpg-1.27.0.yaml

# check for the status
kubectl rollout status deployment -n cnpg-system cnpg-controller-manager

The output of the final command should say: deployment "cnpg-controller-manager" successfully rolled out.
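
Optionally, you can also check that the CNPG custom resource definitions were registered (they all end in cnpg.io):

# list the CRDs installed by the operator
kubectl get crd | grep cnpg.io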

Create a cluster suitable for local development or DEV/TEST environments:

# ./examples/11-cloudnativepg
kubectl create namespace cnpg

kubectl apply -f cluster-01.yaml
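
For reference, cluster-01.yaml is along these lines: a single-instance Cluster plus a LoadBalancer service for the read-write endpoint. This is a sketch; the exact content of the example file may differ slightly.

# cluster-01.yaml - minimal single-node cluster for DEV/TEST (sketch)
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: pgcluster
  namespace: cnpg
spec:
  instances: 1
  enableSuperuserAccess: true
  storage:
    size: 2Gi
  managed:
    services:
      additional:
        - selectorType: rw
          serviceTemplate:
            metadata:
              name: cnpg-rw
            spec:
              type: LoadBalancer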

Connect to the server

List the secrets that were created:

kubectl get secret -n cnpg
# Get the password of the super user:
PGPASSWORD=$(kubectl get secret -n cnpg pgcluster-superuser -o jsonpath="{.data.password}" | base64 -d)

# Or get the password of the app user:
PGPASSWORD=$(kubectl get secret -n cnpg pgcluster-app -o jsonpath="{.data.password}" | base64 -d)

Connect using psql (if you don't have it installed, follow my previous exercise where I deployed CNPG in Kind):

PGPASSWORD=$PGPASSWORD psql -h localhost -p 5432 -U postgres postgres
psql (17.5 (Ubuntu 17.5-1.pgdg24.04+1))
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, compression: off, ALPN: postgresql)
Type "help" for help.

postgres=#

Why localhost works

The localhost connection works because of how K3s handles LoadBalancer services.

K3s includes a built-in load balancer controller called ServiceLB (formerly Klipper) that automatically:

  1. Deploys a small svclb-* pod for every LoadBalancer service
  2. Binds the service port (5432 here) on the host network of the node
  3. Makes the service reachable on the node itself, which on a single-node setup means localhost:5432

To see the actual mapping:

# Check the external IP and port assigned
kubectl get svc -n cnpg cnpg-rw

# You'll see something like:
# NAME      TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
# cnpg-rw   LoadBalancer   10.43.x.x     localhost     5432:32001/TCP   5m

In this case, 32001 is the NodePort assigned to the service, so you could also connect using:

PGPASSWORD=$PGPASSWORD psql -h localhost -p 32001 -U postgres postgres

In other Kubernetes distributions:

  • You'd get a real external IP (like 192.168.1.100)
  • You'd need to use that IP instead of localhost
  • Or use port-forwarding: kubectl port-forward svc/cnpg-rw 5432:5432 -n cnpg

K3s convenience: K3s automatically exposes LoadBalancer services on the host, which is why the psql command to localhost above works. This K3s-specific behavior makes local development easier.
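
You can also see the ServiceLB pods that implement this behavior; by default they run in the kube-system namespace (exact pod names vary):

# the svclb-* pods bind the service port on the host network
kubectl get pods -n kube-system -o wide | grep svclb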

Monitoring CNPG

How can I configure my CloudNativePG to send metrics to the OpenTelemetry Collector or directly to Prometheus in my monitoring namespace?

Read the documentation here: CloudNativePG Monitoring and here Quickstart Part 4: Monitor clusters with Prometheus and Grafana.

My first attempts here didn't produce satisfactory results, because in my previous Monitoring exercise I installed the plain-vanilla Prometheus Helm chart in Kubernetes, which requires manual configuration for many things. Make sure to follow the section Improved Monitoring, too.

Enabling monitoring is pretty simple:


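# cluster-02.yaml (excerpt) - the monitoring section added to the Cluster spec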
spec:
  instances: 1
  enableSuperuserAccess: true
  storage:
    size: 2Gi

  # Enable monitoring - metrics will be available on port 9187
  monitoring:
    enablePodMonitor: true

kubectl apply -f cluster-02.yaml
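
To quickly verify that the metrics endpoint responds, you can port-forward to the instance pod and query it directly. The pod name below assumes the cluster is called pgcluster; adjust it to your setup.

# forward the metrics port and fetch a few CNPG metrics
kubectl port-forward -n cnpg pod/pgcluster-1 9187:9187 &
curl -s http://localhost:9187/metrics | grep '^cnpg_' | head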

Install the Grafana Dashboard

CloudNativePG offers a configuration file to create a dashboard in Grafana. It is documented here: https://cloudnative-pg.io/documentation/1.27/monitoring/. The configuration file can be downloaded from:

https://github.com/cloudnative-pg/grafana-dashboards/blob/main/charts/cluster/grafana-dashboard.json
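
If you prefer to download the JSON from the command line, something like this should work (the raw URL is derived from the repository path above):

curl -sL -o grafana-dashboard.json \
  https://raw.githubusercontent.com/cloudnative-pg/grafana-dashboards/main/charts/cluster/grafana-dashboard.json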

You can import the dashboard in Grafana at: Dashboards > New > Import.

CNPG Grafana Dashboard

If you followed the instructions at Improved Monitoring, the dashboard should work immediately when imported.

Backup

Reading time…

CloudNativePG offers different ways to enable backups. For now, I want to enable full backups to an Azure Storage Account.

Alternatives

If I wanted to store backups locally, I would check how to use Longhorn and volume snapshots. For now, it is simpler to use an object store like an Azure Storage Account or AWS S3. There is the option of using MinIO, but I prefer to avoid this tool because I am wary of the practices of its maintainers1.

Install the Barman Plugin

Install the Barman Plugin following the instructions here:

https://cloudnative-pg.io/plugin-barman-cloud/docs/installation/

Summary:

  1. Verify that you are running CloudNativePG >= 1.26.0 (see the version check below).
  2. Verify that cert-manager is installed and available.
# verify it is installed…
cmctl check api
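
As for the first point, one way to check which CloudNativePG version is running is to inspect the controller image tag (assuming the default deployment name and namespace used above):

# check the operator version…
kubectl get deployment -n cnpg-system cnpg-controller-manager \
  -o jsonpath='{.spec.template.spec.containers[0].image}'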

cert-manager is a tool that creates TLS certificates for workloads in Kubernetes or OpenShift clusters and renews the certificates before they expire.

In my case, cert-manager was not installed anywhere: I had neither the client installed on my host nor the cert-manager component in my K3s cluster.

Install the client in your host:

# install…
brew install cmctl

After installing, I get this error because cert-manager is not installed in my cluster:

cmctl check api
error: error finding the scope of the object: failed to get restmapping: unable to retrieve the complete list of server APIs: cert-manager.io/v1: no matches for cert-manager.io/v1, Resource=

Install the component in the cluster:

  1. Check the latest release on GitHub. At the time of writing, it is 1.19.1.
  2. Install it using the commands below.
# install
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.19.1/cert-manager.yaml

# verify installation…
kubectl wait --for=condition=ready pod -l app.kubernetes.io/instance=cert-manager -n cert-manager --timeout=60s

The cert-manager installation automatically creates:

  • the cert-manager namespace
  • Custom Resource Definitions (CRDs) - cluster-wide
  • ClusterRoles and ClusterRoleBindings - cluster-wide
  • the cert-manager controller pods in the cert-manager namespace

# Verify the API is working
cmctl check api

Then install the Barman plugin in cnpg-system:

# Install the Barman plugin in the cnpg-system namespace (where CNPG operator runs)
kubectl apply -f https://github.com/cloudnative-pg/plugin-barman-cloud/releases/download/v0.7.0/manifest.yaml -n cnpg-system

The output should look like:

customresourcedefinition.apiextensions.k8s.io/objectstores.barmancloud.cnpg.io unchanged
serviceaccount/plugin-barman-cloud unchanged
role.rbac.authorization.k8s.io/leader-election-role unchanged
clusterrole.rbac.authorization.k8s.io/metrics-auth-role unchanged
clusterrole.rbac.authorization.k8s.io/metrics-reader unchanged
clusterrole.rbac.authorization.k8s.io/objectstore-editor-role unchanged
clusterrole.rbac.authorization.k8s.io/objectstore-viewer-role unchanged
clusterrole.rbac.authorization.k8s.io/plugin-barman-cloud unchanged
rolebinding.rbac.authorization.k8s.io/leader-election-rolebinding unchanged
clusterrolebinding.rbac.authorization.k8s.io/metrics-auth-rolebinding unchanged
clusterrolebinding.rbac.authorization.k8s.io/plugin-barman-cloud-binding unchanged
secret/plugin-barman-cloud-7g4226tm68 configured
service/barman-cloud unchanged
deployment.apps/barman-cloud unchanged
Warning: spec.privateKey.rotationPolicy: In cert-manager >= v1.18.0, the default value changed from `Never` to `Always`.
certificate.cert-manager.io/barman-cloud-client created
certificate.cert-manager.io/barman-cloud-server created
issuer.cert-manager.io/selfsigned-issuer created

Verify the deployment:

kubectl rollout status deployment -n cnpg-system barman-cloud

# should be:
deployment "barman-cloud" successfully rolled out

Enabling Backups

Now that the Barman Cloud Plugin is installed, I need to define an ObjectStore using my chosen backend, Azure Blob, as documented here.

Obtain the Storage Account connection string and create a secret in the right namespace, as in the command below:

CONNSTRING='<conn string>'

kubectl create secret generic azure-creds \
  --from-literal=AZURE_STORAGE_CONNECTION_STRING=$CONNSTRING \
  -n cnpg
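
To double-check the key name stored in the secret (it must match what the object store will reference later):

# shows the key names without revealing the values
kubectl describe secret -n cnpg azure-creds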

Deploy the Barman object store, as in the provided example:

# ./examples/11-cloudnativepg
kubectl apply -f barman-store.yaml
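
For reference, barman-store.yaml looks roughly like the following. The destination path is a placeholder, and the field layout should be double-checked against the plugin's Azure example.

# barman-store.yaml (sketch) - ObjectStore backed by Azure Blob
apiVersion: barmancloud.cnpg.io/v1
kind: ObjectStore
metadata:
  name: backup-store
  namespace: cnpg
spec:
  configuration:
    destinationPath: "https://<storage-account>.blob.core.windows.net/<container>/"
    azureCredentials:
      connectionString:
        name: azure-creds
        key: AZURE_STORAGE_CONNECTION_STRING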

Enable backups in the CNPG cluster manifest:

# This example is appropriate for a local development environment
# where we use a single node.
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: pgcluster
  namespace: cnpg
spec:
  instances: 1
  enableSuperuserAccess: true
  storage:
    size: 2Gi
  # Enable monitoring - metrics will be available on port 9187
  monitoring:
    enablePodMonitor: true
  postgresql:
    parameters:
      pg_stat_statements.max: "10000"
      pg_stat_statements.track: all
  managed:
    services:
      additional:
        - selectorType: rw
          serviceTemplate:
            metadata:
              name: cnpg-rw
            spec:
              type: LoadBalancer
  plugins:
    - name: barman-cloud.cloudnative-pg.io
      isWALArchiver: true
      parameters:
        barmanObjectName: backup-store
---
apiVersion: postgresql.cnpg.io/v1
kind: ScheduledBackup
metadata:
  name: daily-backup
  namespace: cnpg
spec:
  schedule: "0 0 0 * * *" # Midnight daily
  method: plugin
  pluginConfiguration:
    name: barman-cloud.cloudnative-pg.io

The file ./examples/11-cloudnativepg/cluster-03.yaml is a working example that creates base backups and WAL archives to a Barman object store named backup-store.

# ./examples/11-cloudnativepg
kubectl apply -f cluster-03.yaml

Start a backup, as documented here:

kubectl apply -f backup.yaml
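
The backup.yaml manifest is a simple Backup resource pointing at the cluster and the plugin; a sketch (double-check it against the linked documentation):

# backup.yaml (sketch) - on-demand backup through the Barman Cloud plugin
apiVersion: postgresql.cnpg.io/v1
kind: Backup
metadata:
  name: backup-example
  namespace: cnpg
spec:
  cluster:
    name: pgcluster
  method: plugin
  pluginConfiguration:
    name: barman-cloud.cloudnative-pg.io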

Check the status:

kubectl get backup -n cnpg
NAME             AGE   CLUSTER     METHOD   PHASE    ERROR
backup-example   16s   pgcluster   plugin   failed   rpc error: code = Unknown desc = missing key AZURE_CONNECTION_STRING, inside secret azure-creds

kubectl describe backup/backup-example -n cnpg

The first attempt above failed because the key name referenced by the object store (AZURE_CONNECTION_STRING) did not match the key actually stored in the azure-creds secret (AZURE_STORAGE_CONNECTION_STRING). Once the two names are aligned and the backup is retried, it reaches the completed phase.

It works! 🎉 🎉 🎉

However, the CNPG dashboard doesn't show backups. To fix this, you need to apply the changes described here: https://github.com/cloudnative-pg/grafana-dashboards/issues/37

  • Replace cnpg_collector_first_recoverability_point with barman_cloud_cloudnative_pg_io_first_recoverability_point
  • Replace cnpg_collector_last_available_backup_timestamp with barman_cloud_cloudnative_pg_io_last_available_backup_timestamp

The grafana-dashboard.json file in the examples folder already has the correct values.

The dashboard showing healthy base backups and WAL archiving looks like the following picture:

CNPG Grafana Dashboard with Healthy Backups

WAL archives are created every minute thanks to the archive_timeout: "60s" configuration setting, but only when there are transactions to archive.

    parameters:
      pg_stat_statements.max: "10000"
      pg_stat_statements.track: all
      archive_timeout: "60s" # Force WAL switch every minute

If the PostgreSQL instance is not in use yet, generate some activity to verify that WAL archives are created properly:

# Create some activity to force WAL generation
PGPASSWORD=$PGPASSWORD psql -h localhost -p 5432 -U postgres postgres
CREATE TABLE IF NOT EXISTS test_wal_activity (id serial, data text, created_at timestamp default now());
INSERT INTO test_wal_activity (data) SELECT 'test data ' || generate_series(1,1000);
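
To confirm that WAL segments are actually being archived, query the standard pg_stat_archiver view from the same psql session (this is stock PostgreSQL, not CNPG-specific):

-- archived_count should grow over time and failed_count should stay at 0
SELECT archived_count, last_archived_wal, last_archived_time, failed_count
FROM pg_stat_archiver;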

Summary

This tutorial demonstrates a CloudNativePG setup in K3s suitable for non-production environments, covering three main areas: installation, monitoring, and backup configuration.

Next steps

In the future I will look into production deployments that include:

  • Multi-node PostgreSQL clusters for high availability.
  • Possibly, backups using Volume Snapshots.

Last modified on: 2025-10-26 18:45:09
