Orchestrate CockroachDB with Kubernetes

Warning:
CockroachDB v1.0 is no longer supported as of November 10, 2018. For more details, refer to the Release Support Policy.

This page shows you how to orchestrate the deployment and management of an insecure 3-node CockroachDB cluster with Kubernetes, using the StatefulSet feature.

Warning:
Deploying an insecure cluster is not recommended for data in production. We'll update this page after improving the process to deploy secure clusters.

Step 1. Choose your deployment environment

Choose the environment where you will run CockroachDB with Kubernetes: a cloud deployment on GCE or AWS, or a local deployment with minikube. The steps below include instructions for both environments; follow the variant ("On GCE or AWS" or "On minikube") that matches your choice.

It might also be helpful to review some Kubernetes-specific terminology:

On GCE or AWS:

instance: A physical or virtual machine. In this tutorial, you'll run a Kubernetes script from your local workstation that will create 4 GCE or AWS instances and join them into a single Kubernetes cluster.
pod: A pod is a group of one or more Docker containers. In this tutorial, each pod will run on a separate instance and contain one Docker container running a single CockroachDB node. You'll start with 3 pods and grow to 4.
StatefulSet: A StatefulSet is a group of pods treated as stateful units, where each pod has distinguishable network identity and always binds back to the same persistent storage on restart. StatefulSets are considered stable as of Kubernetes version 1.9 after reaching beta in version 1.5.
persistent volume: A persistent volume is a piece of networked storage (Persistent Disk on GCE, Elastic Block Store on AWS) mounted into a pod. The lifetime of a persistent volume is decoupled from the lifetime of the pod that's using it, ensuring that each CockroachDB node binds back to the same storage on restart.

This tutorial assumes that dynamic volume provisioning is available. When that is not the case, persistent volume claims need to be created manually.
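
If dynamic volume provisioning is not available, you would pre-create a persistent volume for each pod before creating the StatefulSet. Below is a minimal, illustrative sketch for GCE; the volume name, disk name, and size are assumptions for illustration, not values taken from the CockroachDB configuration:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: cockroachdb-pv-0
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  # Assumes a GCE persistent disk named "cockroachdb-disk-0" already exists.
  gcePersistentDisk:
    pdName: cockroachdb-disk-0
    fsType: ext4
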
On minikube:

minikube: This is the tool you'll use to run a single-node Kubernetes cluster inside a VM on your computer.
pod: A pod is a group of one or more Docker containers. In this tutorial, each pod will contain one Docker container running a single CockroachDB node. You'll start with 3 pods and grow to 4.
StatefulSet: A StatefulSet is a group of pods treated as stateful units, where each pod has distinguishable network identity and always binds back to the same persistent storage on restart. StatefulSets are considered stable as of Kubernetes version 1.9 after reaching beta in version 1.5.
persistent volume: A persistent volume is a piece of local storage mounted into a pod. The lifetime of a persistent volume is decoupled from the lifetime of the pod that's using it, ensuring that each CockroachDB node binds back to the same storage on restart. When using minikube, persistent volumes are external temporary directories that endure until they are manually deleted or until the entire Kubernetes cluster is deleted.
persistent volume claim: When pods are created (one per CockroachDB node), each pod will request a persistent volume claim to “claim” durable storage for its node.
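
For reference, the claim created for each pod is roughly equivalent to a manifest like the following. This is an illustrative sketch based on the names and sizes shown later in this tutorial, not the exact object the StatefulSet generates:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datadir-cockroachdb-0
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi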

Step 2. Install and start Kubernetes

On GCE or AWS:

From your local workstation, install the prerequisites and start a Kubernetes cluster as described in the Kubernetes documentation. The heart of this step is running a Kubernetes script that creates 4 GCE or AWS instances and joins them into a single Kubernetes cluster, all from your local workstation. You'll run subsequent steps from your local workstation as well.

On minikube:

Follow Kubernetes' documentation to install minikube and kubectl for your OS. Then start a local Kubernetes cluster:

$ minikube start
Starting local Kubernetes cluster...
Kubectl is now configured to use the cluster.
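
Once minikube reports that the cluster is running, you can optionally confirm that kubectl is pointed at it. These are standard kubectl commands, not specific to this tutorial:

$ kubectl cluster-info
$ kubectl get nodes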

Step 3. Start the CockroachDB cluster

On GCE or AWS:

  1. From your local workstation, use our cockroachdb-statefulset.yaml file to create the StatefulSet:

    $ kubectl create -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cockroachdb-statefulset.yaml
    
  2. Use the kubectl get command to verify that the persistent volumes and corresponding claims were created successfully:

    $ kubectl get persistentvolumes
    
    NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM                           REASON    AGE
    pvc-52f51ecf-8bd5-11e6-a4f4-42010a800002   1Gi        RWO           Delete          Bound     default/datadir-cockroachdb-0             26s
    pvc-52fd3a39-8bd5-11e6-a4f4-42010a800002   1Gi        RWO           Delete          Bound     default/datadir-cockroachdb-1             27s
    pvc-5315efda-8bd5-11e6-a4f4-42010a800002   1Gi        RWO           Delete          Bound     default/datadir-cockroachdb-2             27s
    
  3. Wait a bit and then verify that three pods were created successfully. If you do not see three pods, wait longer and check again.

    $ kubectl get pods
    
    NAME            READY     STATUS    RESTARTS   AGE
    cockroachdb-0   1/1       Running   0          2m
    cockroachdb-1   1/1       Running   0          2m
    cockroachdb-2   1/1       Running   0          2m
    
On minikube:

  1. Use our cockroachdb-statefulset.yaml file to create the StatefulSet:

    $ kubectl create -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cockroachdb-statefulset.yaml
    
  2. Use the kubectl get command to verify that the persistent volumes and corresponding claims were created successfully:

    $ kubectl get persistentvolumes
    
    NAME      CAPACITY   ACCESSMODES   STATUS    CLAIM                           REASON    AGE
    pv0       1Gi        RWO           Bound     default/datadir-cockroachdb-0             27s
    pv1       1Gi        RWO           Bound     default/datadir-cockroachdb-1             26s
    pv2       1Gi        RWO           Bound     default/datadir-cockroachdb-2             26s
    
  3. Wait a bit and then verify that three pods were created successfully. If you do not see three pods, wait longer and check again.

    $ kubectl get pods
    
    NAME            READY     STATUS    RESTARTS   AGE
    cockroachdb-0   1/1       Running   0          2m
    cockroachdb-1   1/1       Running   0          2m
    cockroachdb-2   1/1       Running   0          2m
    
Tip:
The StatefulSet configuration sets all CockroachDB nodes to write to stderr, so if you ever need access to a pod/node's logs to troubleshoot, use kubectl logs <podname> rather than checking the log on the pod itself.
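
For example, to view the logs of the first pod (the --previous flag, a standard kubectl option, shows logs from a container's prior run if it has restarted):

$ kubectl logs cockroachdb-0
$ kubectl logs cockroachdb-0 --previous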

Step 4. Use the built-in SQL client

  1. Start the built-in SQL client in a one-off interactive pod, using the cockroachdb-public hostname to access the CockroachDB cluster (a non-interactive variant is sketched below this list):

    $ kubectl run cockroachdb -it --image=cockroachdb/cockroach --rm --restart=Never \
    -- sql --insecure --host=cockroachdb-public
    
  2. Run some CockroachDB SQL statements:

    > CREATE DATABASE bank;
    
    > CREATE TABLE bank.accounts (id INT PRIMARY KEY, balance DECIMAL);
    
    > INSERT INTO bank.accounts VALUES (1234, 10000.50);
    
    > SELECT * FROM bank.accounts;
    
    +------+----------+
    |  id  | balance  |
    +------+----------+
    | 1234 | 10000.50 |
    +------+----------+
    (1 row)
    
  3. When you're done with the SQL shell, use CTRL-D, CTRL-C, or \q to exit and delete the temporary pod.
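
If you prefer not to open an interactive shell, the same one-off pod approach can run a single statement and exit; the cockroach sql command accepts an --execute (-e) flag. A sketch, assuming the same cockroachdb-public hostname:

$ kubectl run cockroachdb -it --image=cockroachdb/cockroach --rm --restart=Never \
-- sql --insecure --host=cockroachdb-public -e "SELECT * FROM bank.accounts"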

Step 5. Simulate node failure

Based on the replicas: 3 line in the StatefulSet configuration, Kubernetes ensures that three pods/nodes are running at all times. If a pod/node fails, Kubernetes will automatically create another pod/node with the same network identity and persistent storage.
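
If you want to follow along as Kubernetes recreates a pod, you can keep a watch running in a second terminal; --watch is a standard kubectl flag, not something specific to this configuration:

$ kubectl get pods --watch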

To see this in action:

  1. Stop one of the CockroachDB nodes:

    $ kubectl delete pod cockroachdb-2
    
    pod "cockroachdb-2" deleted
    
  2. Verify that the pod is being recreated:

    $ kubectl get pod cockroachdb-2
    
    NAME            READY     STATUS              RESTARTS   AGE
    cockroachdb-2   0/1       ContainerCreating   0          3s
    
  3. Wait a bit and then verify that the pod is ready:

    $ kubectl get pod cockroachdb-2
    
    NAME            READY     STATUS    RESTARTS   AGE
    cockroachdb-2   1/1       Running   0          1m
    

Step 6. Scale the cluster

On GCE or AWS:

The Kubernetes script created 4 nodes: one master and 3 workers. Pods are placed only on worker nodes, so to ensure that you do not have two CockroachDB pods on the same node (as recommended in our production best practices), you need to add a new worker node and then edit your StatefulSet configuration to add another pod.
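
Before adding the worker, you can list the nodes Kubernetes currently knows about; kubectl get nodes is a standard command:

$ kubectl get nodes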

  1. Add a worker node. Refer to the Kubernetes documentation for resizing a cluster on your cloud provider (GCE or AWS).

  2. Use the kubectl scale command to add a pod to your StatefulSet:

    $ kubectl scale statefulset cockroachdb --replicas=4
    
    statefulset "cockroachdb" scaled
    
  3. Verify that a fourth pod was added successfully (an optional check of the new pod's logs is sketched below):

    $ kubectl get pods
    
    NAME            READY     STATUS    RESTARTS   AGE
    cockroachdb-0   1/1       Running   0          2h
    cockroachdb-1   1/1       Running   0          2h
    cockroachdb-2   1/1       Running   0          9m
    cockroachdb-3   1/1       Running   0          46s
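
As an optional extra check, you can look at the new pod's logs to confirm that its CockroachDB node started and joined the cluster (per the tip above, the nodes write their logs to stderr):

$ kubectl logs cockroachdb-3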
    

On minikube:

To increase the number of pods in your cluster, use the kubectl scale command to alter the replicas: 3 configuration for your StatefulSet:

$ kubectl scale statefulset cockroachdb --replicas=4
statefulset "cockroachdb" scaled

Verify that a fourth pod was added successfully:

$ kubectl get pods
NAME            READY     STATUS    RESTARTS   AGE
cockroachdb-0   1/1       Running   0          2h
cockroachdb-1   1/1       Running   0          2h
cockroachdb-2   1/1       Running   0          9m
cockroachdb-3   1/1       Running   0          46s

Step 7. Stop the cluster

To shut down the CockroachDB cluster:

On GCE or AWS:

  1. Use the kubectl delete command to clean up all of the resources you created, including the logs and remote persistent volumes:

    $ kubectl delete pods,statefulsets,services,persistentvolumeclaims,persistentvolumes,poddisruptionbudget \
    -l app=cockroachdb
    
  2. Run the cluster/kube-down.sh script in the kubernetes directory to stop Kubernetes.

Warning:
If you stop Kubernetes without first deleting resources, the remote persistent volumes will still exist in your cloud project.
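
If that does happen, you can find and remove leftover disks manually. For example, on GCE these are standard gcloud commands (the disk name is a placeholder):

$ gcloud compute disks list
$ gcloud compute disks delete <disk-name>
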
On minikube:

  • If you plan to restart the cluster, use the minikube stop command. This shuts down the minikube virtual machine but preserves all the resources you created:

    $ minikube stop
    
    Stopping local Kubernetes cluster...
    Machine stopped.
    

    You can restore the cluster to its previous state with minikube start.

  • If you do not plan to restart the cluster, use the minikube delete command. This shuts down and deletes the minikube virtual machine and all the resources you created, including persistent volumes:

    $ minikube delete
    
    Deleting local Kubernetes cluster...
    Machine deleted.
    
    Tip:
    To retain logs, copy them from each pod's stderr before deleting the cluster and all its resources. To access a pod's standard error stream, run kubectl logs <podname>.
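
    For example, a simple loop that saves each pod's logs to a local file before you delete the cluster (using the pod names from this tutorial's 4-pod setup):

    $ for i in 0 1 2 3; do kubectl logs cockroachdb-$i > cockroachdb-$i.log; done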
