
Install Percona Operator for PostgreSQL on Digital Ocean Kubernetes Service (DOKS)

This guide shows you how to deploy Percona Operator for PostgreSQL on Digital Ocean Kubernetes Service (DOKS) with default parameters.

This document assumes some experience with the platform. For more information on DOKS, see the Digital Ocean Kubernetes Service official documentation .

What you will install

  • The Operator - the custom controller that uses custom resources to install and manage the lifecycle of your database cluster. It consists of the following components:

    • the Operator Deployment - the controller Pod
    • the CustomResourceDefinitions (CRDs) - new API types (custom resources) added to Kubernetes so that it can understand and manage them
    • Role-based access control (RBAC) - the system that controls who can perform which actions on which resources, using roles and bindings to enforce safe, predictable access
  • The database cluster - the actual Percona Distribution for PostgreSQL cluster that the Operator creates for you when you apply the Custom Resource or install the Helm chart. It includes StatefulSets for PostgreSQL servers, Services and Secrets.

The default Percona Distribution for PostgreSQL configuration includes:

  • 3 PostgreSQL servers: one Primary and two replicas.
  • 3 pgBouncer instances - a lightweight connection pooler for PostgreSQL that sits between client applications and the database server to manage and reuse connections efficiently.
  • a pgBackRest repository host instance - a dedicated instance in your cluster that stores filesystem backups made with pgBackRest, a backup and restore utility.
  • a PMM client instance - a monitoring and management tool for PostgreSQL. It runs as a sidecar container in the database Pods.

Read more about the default components in the Architecture section.

To customize the installation, refer to Install Percona Operator for PostgreSQL with customized parameters.

Prerequisites

Install and configure the following:

  1. doctl Command Line Interface (CLI) to manage DOKS clusters.
  2. A Digital Ocean personal access token to grant doctl access to your account.
  3. kubectl to manage Kubernetes resources.
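
Once doctl is installed, you can authenticate it with your personal access token. A minimal sketch (you will be prompted to paste the token interactively):

```shell
# Grant doctl access to your account using the personal access token
# (paste the token when prompted):
doctl auth init

# Verify that authentication works by fetching your account details:
doctl account get
```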

Create a DOKS cluster

  1. Decide on the following:

    • Cluster name
    • Region where you will deploy the cluster
    • Kubernetes version
    • Node pool configuration (node size and number of nodes)
  2. Create a Digital Ocean cluster following the official documentation .

  3. Add an authentication token or a certificate to your kubectl configuration file to connect to the cluster. Follow the official documentation for the steps.
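
The steps above can be sketched with doctl. The cluster name, region, version, and node pool values below are placeholders; substitute your own:

```shell
# Create a DOKS cluster (example values: adjust the name, region,
# Kubernetes version, and node pool to your needs):
doctl kubernetes cluster create my-pg-cluster \
  --region nyc1 \
  --version latest \
  --node-pool "name=pg-pool;size=s-4vcpu-8gb;count=3"

# Download the cluster credentials and merge them into your
# kubectl configuration file:
doctl kubernetes cluster kubeconfig save my-pg-cluster

# Verify connectivity to the new cluster:
kubectl get nodes
```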

Install the Operator deployment

  1. Create a namespace for your cluster and export it as an environment variable to simplify further configuration:

    kubectl create namespace <namespace>
    export NAMESPACE=<namespace>
    
  2. Create the Custom Resource Definitions, set up RBAC, and install the Operator Deployment using the bundle file:

    kubectl apply --server-side -f https://raw.githubusercontent.com/percona/percona-postgresql-operator/v2.8.2/deploy/bundle.yaml -n $NAMESPACE
    
    Expected output
    customresourcedefinition.apiextensions.k8s.io/crunchybridgeclusters.postgres-operator.crunchydata.com serverside-applied
    customresourcedefinition.apiextensions.k8s.io/perconapgbackups.pgv2.percona.com serverside-applied
    customresourcedefinition.apiextensions.k8s.io/perconapgclusters.pgv2.percona.com serverside-applied
    customresourcedefinition.apiextensions.k8s.io/perconapgrestores.pgv2.percona.com serverside-applied
    customresourcedefinition.apiextensions.k8s.io/perconapgupgrades.pgv2.percona.com serverside-applied
    customresourcedefinition.apiextensions.k8s.io/pgadmins.postgres-operator.crunchydata.com serverside-applied
    customresourcedefinition.apiextensions.k8s.io/pgupgrades.postgres-operator.crunchydata.com serverside-applied
    customresourcedefinition.apiextensions.k8s.io/postgresclusters.postgres-operator.crunchydata.com serverside-applied
    serviceaccount/percona-postgresql-operator serverside-applied
    role.rbac.authorization.k8s.io/percona-postgresql-operator serverside-applied
    rolebinding.rbac.authorization.k8s.io/service-account-percona-postgresql-operator serverside-applied
    deployment.apps/percona-postgresql-operator serverside-applied
    

    As a result, you will have the Operator Pod up and running.
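
    You can verify this by checking the Operator Deployment (named percona-postgresql-operator, as shown in the output above) and its Pod:

```shell
# Check that the Operator Deployment is available:
kubectl get deployment percona-postgresql-operator -n $NAMESPACE

# The Operator Pod should be in the Running state:
kubectl get pods -n $NAMESPACE
```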

Install Percona Distribution for PostgreSQL

  1. Create the Percona Distribution for PostgreSQL cluster:

    kubectl apply -f https://raw.githubusercontent.com/percona/percona-postgresql-operator/v2.8.2/deploy/cr.yaml -n $NAMESPACE
    
    Expected output
    perconapgcluster.pgv2.percona.com/cluster1 created
    
  2. Check the cluster status. Creation may take a few minutes:

    kubectl get pg -n $NAMESPACE
    
    Expected output
    NAME       ENDPOINT                         STATUS   POSTGRES   PGBOUNCER   AGE
    cluster1   cluster1-pgbouncer.default.svc   ready    3          3           30m
    

Verifying the cluster operation

When the creation process is over, the kubectl get pg command shows the cluster status as ready, and you can try connecting to the cluster.

When the Operator deploys a database cluster, it generates several Secrets . Among them is the Secret with the credentials of the default PostgreSQL user. This default user has the same username as the cluster name.

  1. Use kubectl get secrets -n <namespace> command to see the list of Secrets objects. The Secrets object you are interested in is named in the format <cluster_name>-pguser-<cluster_name> (where the <cluster_name> is the name of your Percona Distribution for PostgreSQL Cluster). For example, if your cluster name is cluster1, the Secret name will be cluster1-pguser-cluster1.

  2. Use the following command to get the password of this user. Replace the <cluster_name> and <namespace> placeholders with your values:

    kubectl get secret <cluster_name>-pguser-<cluster_name> -n <namespace> --template='{{.data.password | base64decode}}{{"\n"}}'
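
    With the default cluster name cluster1 (so the default user and Secret are cluster1 and cluster1-pguser-cluster1), a sketch that stores the password in a variable for later use:

```shell
# Retrieve the default user's password from the Secret and keep it
# in a shell variable (assumes the default cluster name cluster1
# and that $NAMESPACE is set as in the earlier steps):
PGPASSWORD=$(kubectl get secret cluster1-pguser-cluster1 -n $NAMESPACE \
  --template='{{.data.password | base64decode}}')

# Display it to confirm retrieval:
echo "$PGPASSWORD"
```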
    
  3. To connect to PostgreSQL, you will use the pgbouncer service as the entry point to your cluster. To find this service, use the following command:

    kubectl get svc -n <namespace>
    

    Look for the service named <cluster-name>-pgbouncer.

    Sample output
    cluster1-ha          ClusterIP   34.118.234.155   <none>        5432/TCP   51m
    cluster1-ha-config   ClusterIP   None             <none>        <none>     51m
    cluster1-pgbouncer   ClusterIP   34.118.239.5     <none>        5432/TCP   51m
    cluster1-pods        ClusterIP   None             <none>        <none>     51m
    cluster1-primary     ClusterIP   None             <none>        5432/TCP   51m
    cluster1-replicas    ClusterIP   34.118.234.19    <none>        5432/TCP   51m
    
  4. Create a Pod with the Percona Distribution for PostgreSQL client tools and open a shell inside it. The following command does this, naming the new Pod pg-client:

    kubectl run -n <namespace> -i --rm --tty pg-client --image=percona/percona-distribution-postgresql:17.7-2 --restart=Never -- bash -il
    

    It may take some time to execute the command and deploy the corresponding Pod.

  5. From the shell inside the pg-client Pod, connect to the cluster with the psql interactive terminal. Substitute the <namespace> placeholder and the pguser_password value with your own to connect as the cluster1 user to the cluster1 database.

    [postgres@pg-client /]$ PGPASSWORD='pguser_password' psql -h cluster1-pgbouncer.<namespace>.svc.cluster.local -p 5432 -U cluster1 cluster1
    
    Sample output
    psql (17.7-2)
    SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off)
    Type "help" for help.
    cluster1=>
    

Delete the DOKS cluster

To delete the DOKS cluster, run the following command:

doctl kubernetes cluster delete <cluster-name>

The cluster deletion may take time.
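
If you are unsure of the exact cluster name, you can list your clusters first; adding the --force flag skips the interactive confirmation prompt:

```shell
# List your DOKS clusters to find the exact name:
doctl kubernetes cluster list

# Delete the cluster without an interactive confirmation prompt:
doctl kubernetes cluster delete <cluster-name> --force
```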

Warning

After deleting the cluster, all data stored in it will be lost!


Last update: March 30, 2026
Created: March 30, 2026