OCI #11 – Managed Kubernetes (Oracle Kubernetes Engine)

This recipe shows how to provision a new Oracle Container Engine for Kubernetes (a managed Kubernetes platform running on Oracle Cloud Infrastructure) cluster.

Before you begin, make sure that you have installed and configured:
+ Terraform. See: Terraform Setup recipe, if needed.
+ OCI CLI. See: CLI Setup recipe, if needed.


Everything around production-grade Linux containers at massive scale really began with this short demo back in 2013. Linux container engines use Linux kernel features (cgroups, namespaces) to isolate applications executed inside logical entities called containers, which are built from multi-layered templates called images. You can watch this comprehensive introduction video. Docker is the most popular Linux container engine, but there are also somewhat less mature alternatives such as the CNCF-incubated rkt and containerd.

Container Orchestration

Containerised applications are usually small in size, and each of them very often performs a single, granular, specialised task. As a result, building a complete, sometimes multi-domain solution requires gathering a large number of diverse microservices, distributed with redundant copies across multiple worker nodes so that they operate in a fault-tolerant setup. Some of these microservices have to interact with each other, consume external services or expose their own service endpoints. The more services you host, the more pressing the need becomes for a container orchestration engine, together with an ecosystem of tools and utilities around microservices architectures that support the entire container-driven platform.


In 2014, Google created Kubernetes, an open-source project based on its in-house container orchestration engine called Borg. You can read more here. Subsequently, the Cloud Native Computing Foundation was founded to support, promote and incubate the entire ecosystem around containers. Kubernetes is considered one of the most important projects in that ecosystem, if not the most important one.

Managed Kubernetes

As software becomes more and more multi-layered and sophisticated, the professionals responsible for maintaining software solutions in their organizations seek various ways to keep things as simple as possible. One way to do that is to use managed cloud-based platforms. A managed platform takes a great deal of the operations burden off your team by delegating repetitive housekeeping activities to your cloud provider. Oracle Cloud Infrastructure features a managed Kubernetes platform called Oracle Container Engine for Kubernetes, also known as Oracle Kubernetes Engine and abbreviated OKE. You can read more about it here.

Target Infrastructure

In this short tutorial, we are going to provision a managed Kubernetes cluster with a node pool distributed across two availability domains. We will start with two (2) Linux-based worker nodes (virtual machines), but the general idea behind any kind of pool is that you should be able to scale this number up or down, at any time, based on your needs. To remain in control of the various interconnectivity aspects, we are going to define custom networking resources such as a Virtual Cloud Network (VCN) and dedicated VCN subnets for the worker nodes and load balancers. The Kubernetes cluster will be provisioned together with a two-instance node pool on top of the custom networking. It may sound a bit complex, but it is not.


Provisioning Infrastructure and Managed Kubernetes

Provisioning a managed Kubernetes cluster together with the cloud networking resources can be done using a set of simple Terraform configurations, which I’ve made available here on my GitHub account. I’ve structured the configuration files in such a way that you can copy the k8s module directory and use it in your own project. The sample module block that references the module can be found in the modules.tf file. The project structure looks like this:

/home/michal/git/blog-code/oci-11/infrastructure$ tree
├── k8s
│   ├── cluster.tf
│   ├── vars.tf
│   └── vcn.tf
├── locals.tf
├── modules.tf
├── provider.tf
├── vars.tf
└── vcn.tf

The main directory of your Terraform project is typically called the root module. The provider.tf file specifies the oci provider and thereby tells Terraform to use the plugin for Oracle Cloud Infrastructure. The variables used in provider.tf are defined in the vars.tf file, together with another variable (vcn_cidr) that describes the address range for the Virtual Cloud Network (VCN). The VCN cloud resource, as well as an Internet Gateway cloud resource, are defined in the vcn.tf file. The locals.tf file contains a data source that fetches the Availability Domain names and stores them as local values. Finally, the modules.tf file uses a module block to include the resources from the k8s folder as a Terraform module. Variables and attributes from the root module, such as the VCN OCID or the Internet Gateway OCID, are passed as input parameters to the k8s module.
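The wiring between the root module and the k8s module can be sketched roughly as below. This is a hypothetical illustration only: the input variable names (vcn_ocid, igw_ocid, ads) and resource names are assumptions, and may differ from the actual files in the repository.

```hcl
# Hypothetical sketch of modules.tf -- names are illustrative, not the
# repository's actual identifiers.
module "k8s" {
  source = "./k8s"

  # Values produced in the root module, passed down as module inputs:
  compartment_ocid = var.compartment_ocid
  vcn_ocid         = oci_core_virtual_network.vcn.id
  igw_ocid         = oci_core_internet_gateway.igw.id
  ads              = local.ads
}
```

Because the module only receives OCIDs as inputs, it stays decoupled from how the VCN and Internet Gateway were created, which is what makes the k8s directory reusable in other projects.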

The k8s module directory contains the configuration files that Terraform will use to create the required subnets and provision the Kubernetes cluster, including its worker node pool and the cluster’s load balancers. The k8s/vars.tf file defines the input variables as well as variables local to the module. The k8s/vcn.tf file defines a route table, two security lists and four (4) subnets: one pair for the worker nodes and one pair for the load balancers. As you can imagine, each pair is spread across two availability domains. The networking resources are created within the VCN defined in the root module, and a route rule references the Internet Gateway that has also been defined in the root module. The Kubernetes cluster and the node pool are defined in the k8s/cluster.tf file.
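The core of k8s/cluster.tf can be sketched as follows. The resource types (oci_containerengine_cluster, oci_containerengine_node_pool) are the OCI provider’s real ones, but the specific values — Kubernetes version, node shape, image name and subnet resource names — are assumptions for illustration and will differ from the repository:

```hcl
# Hypothetical sketch of k8s/cluster.tf; versions, shapes and names
# are illustrative assumptions.
resource "oci_containerengine_cluster" "k8s_cluster" {
  compartment_id     = var.compartment_ocid
  kubernetes_version = "v1.12.6"
  name               = "k8s-cluster"
  vcn_id             = var.vcn_ocid

  options {
    # The load-balancer subnet pair, one per availability domain:
    service_lb_subnet_ids = [
      oci_core_subnet.oke_loadbalancers_ad1_net.id,
      oci_core_subnet.oke_loadbalancers_ad2_net.id,
    ]
  }
}

resource "oci_containerengine_node_pool" "k8s_nodepool" {
  cluster_id         = oci_containerengine_cluster.k8s_cluster.id
  compartment_id     = var.compartment_ocid
  kubernetes_version = "v1.12.6"
  name               = "k8s-nodepool"
  node_image_name    = "Oracle-Linux-7.6"
  node_shape         = "VM.Standard2.1"

  # One worker per subnet gives the two-node pool spread across
  # the two availability domains:
  quantity_per_subnet = 1

  subnet_ids = [
    oci_core_subnet.oke_workers_ad1_net.id,
    oci_core_subnet.oke_workers_ad2_net.id,
  ]
}
```

Note how the node pool references the cluster’s id, which is what lets Terraform infer the creation order seen in the apply output below (cluster first, then node pool).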

Let’s provision the cluster:

  1. Clone the blog-code repository from my GitHub account, if you haven’t done it yet.
  2. Make sure you’ve set all required Terraform variables. For example, you can do it using TF_VAR_ environment variables. You can issue env | grep TF_VAR_ to list them and inspect if they are correct. If you are not sure what to do, see: Terraform Setup recipe.
  3. Initialize the working directory. Terraform will download the newest version of the provider plugin, unless it is already installed.

    ~/git/blog-code/oci-11> terraform init
    Initializing provider plugins...
    - Checking for available provider plugins on https://releases.hashicorp.com...
    - Downloading plugin for provider "oci" (3.11.2)...
    The following providers do not have any version constraints in configuration,
    so the latest version was installed.
    To prevent automatic upgrades to new major versions that may contain breaking
    changes, it is recommended to add version = "..." constraints to the
    corresponding provider blocks in configuration, with the constraint strings
    suggested below.
    * provider.oci: version = "~> 3.11"
    Terraform has been successfully initialized!
  4. Provision the cloud resources using a single Terraform command:
    $ terraform apply --auto-approve
    module.k8s.oci_core_subnet.oke_loadbalancers_ad2_net: Creation complete after 1s
    module.k8s.oci_core_subnet.oke_workers_ad2_net: Creation complete after 1s
    module.k8s.oci_core_subnet.oke_loadbalancers_ad1_net: Creation complete after 1s
    module.k8s.oci_containerengine_cluster.k8s_cluster: Creating...
    module.k8s.oci_core_subnet.oke_workers_ad1_net: Creation complete after 2s
    module.k8s.oci_containerengine_cluster.k8s_cluster: Creation complete after 6m11s
    module.k8s.oci_containerengine_node_pool.k8s_nodepool: Creating...
    module.k8s.oci_containerengine_node_pool.k8s_nodepool: Creation complete after 2s
    Apply complete! Resources: 11 added, 0 changed, 0 destroyed.
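As a reminder for step 2, Terraform reads any environment variable prefixed with TF_VAR_ as an input variable. A minimal sketch, with hypothetical variable names that must match the declarations in your vars.tf:

```shell
# Hypothetical variable names and placeholder OCIDs -- substitute
# the variables your vars.tf actually declares and your real values.
export TF_VAR_tenancy_ocid="ocid1.tenancy.oc1..example"
export TF_VAR_compartment_ocid="ocid1.compartment.oc1..example"
export TF_VAR_region="eu-frankfurt-1"

# Terraform picks these up automatically; list them to double-check:
env | grep '^TF_VAR_'
```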

Even though Terraform has announced the provisioning of the node pool as complete, it is not quite finished yet. The instances in your node pool still have to boot and install the required software, which can take a few minutes. Before you begin working with your cluster, make sure that both worker nodes are ACTIVE, as in the screen below:


The worker node instances are, at the same time, regular compute instances running in the same compartment in which you’ve provisioned the node pool. You can see their details in the OCI Console:


Interacting with the managed cluster

To interact with the cluster you will use the kubectl tool. Beforehand, you need to download the kubeconfig file that has been generated for the newly created cluster. To do so, execute the OCI CLI command from the code snippet below:

# oci/kube-dashboard.sh 1/2
mkdir -p ~/.kube
oci ce cluster create-kubeconfig --cluster-id $CLUSTER_OCID --file ~/.kube/config --region "$REGION"

This is how you test the connectivity:

$ kubectl cluster-info
Kubernetes master is running at https://czgembsgi4t.eu-frankfurt-1.clusters.oci.oraclecloud.com:6443
KubeDNS is running at https://czgembsgi4t.eu-frankfurt-1.clusters.oci.oraclecloud.com:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

Launching Kubernetes Dashboard

You can launch a local Kubernetes Dashboard that tunnels your interactions with the cluster’s API using kubectl proxy.

# oci/kube-dashboard.sh 2/2
$ export KUBECONFIG=~/.kube/config
$ kubectl proxy &
[1] 1219
Starting to serve on

Now you should be able to access the dashboard in your web browser under this link: http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

To sign in, select the Kubeconfig mode and use the same config file you’ve downloaded with the OCI CLI oci ce cluster create-kubeconfig command.



You can find the code I presented in this chapter here.