April 02, 2020
In my previous article, I introduced the Corda Kubernetes deployment, which makes it possible to deploy Corda Enterprise in a Kubernetes cluster using Helm charts. The goal of the Corda Kubernetes deployment is to make the journey into production an easier one.
I encourage you to read the first part of this article series if you haven’t, but to give you a summary, we started by understanding the complexities of a Corda deployment and the need to automate the process. Then we looked at the overall architecture picture and the prerequisites of a Kubernetes deployment. Finally, we made an initial registration of the Corda node using the Corda Kubernetes deployment and registered our node on the network. This was enough to set the stage, but now we need to perform the actual deployment.
In this article, we will learn how to use the Corda Kubernetes deployment. The deployment can be configured and customised to suit individual project requirements. We will start by looking at the configuration options, then see how to compile and deploy the solution into a Kubernetes cluster. The main part of this article focuses on running the solution in a Kubernetes cluster and investigating issues that may arise during deployment, giving you the tools you need to be successful in your deployment journey.
Let’s get started!
We use Helm to set up all the configuration required for deploying the Corda node. In practice, this means editing a single file: “values.yaml”. Let’s review a sample of the “values.yaml” file and see what we would usually customise in it.
There are three main sections to configure. Most options have predefined values, but some will need customisation based on your deployment. The values I anticipate you will have to customise are documented in the GitHub repository.
The option “corda.node.conf.legalName” defines the X.500 legal identity name of the Corda node; it should follow the naming conventions of the Corda Network and the node naming guidelines.
Beyond that, you can customise the file based on your deployment environment. For example, the Corda Firewall’s float component must have a publicly accessible IP address in order to accept incoming connections from other nodes on the network, so you can define that public-facing IP address in the “values.yaml” file.
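As a sketch, the relevant part of “values.yaml” might look like the snippet below. The key “corda.node.conf.legalName” is named in this article; the key for the float’s public address is a placeholder name, so check the repository’s “values.yaml” for the exact key, and the X.500 name and IP address are purely illustrative.

```yaml
# Illustrative "values.yaml" fragment - names and addresses are examples only.
corda:
  node:
    conf:
      # X.500 legal identity, following the Corda Network node naming guidelines
      legalName: "O=ExampleCorp, L=London, C=GB"
      # Publicly accessible IP address for the Corda Firewall float component
      # (placeholder key name - see the repository's values.yaml for the real one)
      floatPublicIP: "51.140.0.10"
```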
Since we already tackled the initial registration step in the previous article, we shall skip that step and move straight into the next one, which is Helm compilation. Helm compilation is performed by executing the “helm template” command, which is wrapped in a script called “helm_compile.sh”. Compilation is the act of taking the set of Helm templates, applying the values defined in the “values.yaml” file, and generating the final set of Kubernetes resource definition files. These files can then be applied directly to a Kubernetes cluster to create the requested resources.
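The compilation step can be sketched as follows. The folder names here are assumptions for illustration; check “helm_compile.sh” in the repository for the exact paths it uses.

```shell
# Sketch of the Helm compilation step wrapped by helm_compile.sh.
CHART_DIR="helm"                    # assumed folder containing the Helm templates
OUTPUT_DIR="${CHART_DIR}/output"    # assumed folder for the rendered definition files
COMPILE_CMD="helm template ${CHART_DIR} --output-dir ${OUTPUT_DIR}"

# Only run the command if Helm is installed and the chart folder exists;
# otherwise just print what would be executed.
if command -v helm >/dev/null 2>&1 && [ -d "${CHART_DIR}" ]; then
  ${COMPILE_CMD}
else
  echo "Would run: ${COMPILE_CMD}"
fi
```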
Let’s review the Kubernetes resources this deployment currently uses and the purpose each one serves.
The compiled Kubernetes resource definition files can be deployed in two ways: either by letting the “helm_compile.sh” script perform the deployment automatically, or by running the “kubectl apply -f” command targeting the output folder. Either way effectively copies the definition files into the Kubernetes cluster, where Kubernetes takes over: it sets up the resources, routes them to each other and to external resources, and makes sure everything starts up correctly, reporting any issues if it does not. We will cover investigating issues in a later section. At this point it is worth enabling Secure Shell (SSH) access to our deployed node, which will be useful for testing later in this article. Please note that SSH access should be disabled in production.
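The manual deployment option can be sketched as a shell command. The output folder path is an assumption; it should match wherever your Helm compilation step wrote the rendered files.

```shell
# Apply the compiled Kubernetes resource definition files to the cluster.
OUTPUT_DIR="helm/output"            # assumed location of the rendered files
APPLY_CMD="kubectl apply -f ${OUTPUT_DIR}"

# Only apply if a cluster is actually reachable; otherwise just print the command.
if kubectl cluster-info >/dev/null 2>&1 && [ -d "${OUTPUT_DIR}" ]; then
  ${APPLY_CMD}
else
  echo "Would run: ${APPLY_CMD}"
fi
```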
To enable SSH access to the deployed Corda node, we should define the following in the “values.yaml” configuration file.
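A hypothetical sketch of what those settings might look like is below. The key names are assumptions, so check the repository’s “values.yaml” for the exact ones; what is certain is that Corda nodes expose shell access through the node’s built-in SSH daemon.

```yaml
# Hypothetical "values.yaml" fragment - key names are assumptions,
# not the repository's confirmed schema.
corda:
  node:
    conf:
      ssh:
        enabled: true   # remember to disable SSH access again in production
        sshdPort: 2223  # commonly used port for the Corda node shell
```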
Let’s see what happens as Kubernetes spins up the services. We will start by verifying that the pods are actually starting up within the cluster, which is done by running the “kubectl get pods” command. The result will look something like this:
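As a sketch, the command and an illustrative listing are shown below. The pod name is a placeholder; the actual names and counts depend on your “values.yaml” configuration.

```shell
# List the pods and their statuses.
GET_PODS_CMD="kubectl get pods"

if kubectl cluster-info >/dev/null 2>&1; then
  ${GET_PODS_CMD}
else
  # Illustrative output - actual pod names depend on your configuration:
  echo "NAME              READY   STATUS    RESTARTS   AGE"
  echo "corda-node-0      1/1     Running   0          2m"
fi
```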
It is normal to see the pods pass briefly through the “Pending” and “ContainerCreating” statuses before reaching “Running”. However, a pod that stays in the “Pending” status for an extended period of time indicates that there is an issue with it; please see the next section for more discussion on this point.
Once we have the pods up and running, we will want to follow the startup process and inspect what each pod logs; this can be done with the command “kubectl logs -f &lt;pod&gt;”.
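For example, with a pod name taken from the “kubectl get pods” listing (the name below is an illustrative placeholder):

```shell
# Construct the log-following command for a pod.
POD_NAME="corda-node-0"   # placeholder - substitute a name from "kubectl get pods"
LOGS_CMD="kubectl logs -f ${POD_NAME}"
echo "Run: ${LOGS_CMD}"   # -f follows the log stream; press Ctrl-C to stop
```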
We can see examples of the Corda Enterprise node starting up in the following views:
The Corda Firewall bridge component starting up:
And the Corda Firewall float component starting up: