OCI #11 – Managed Kubernetes (Oracle Kubernetes Engine)

This recipe shows how to provision a new Oracle Container Engine for Kubernetes (a managed Kubernetes platform running on Oracle Cloud Infrastructure) cluster.

Before you begin, make sure that you have installed and configured:
+ Terraform. See: Terraform Setup recipe, if needed.
+ CLI. See: CLI Setup recipe, if needed.


Everything around production-grade Linux containers at mass scale really began with this short demo back in 2013. Linux container engines use Linux kernel features (cgroups, namespaces) to isolate applications executed inside logical entities called containers, which are built from multi-layered templates called images. You can watch this comprehensive introduction video. Docker is the most popular Linux container engine, but there are also somewhat less mature alternatives such as the CNCF-incubating rkt or containerd.

Container Orchestration

Containerised applications are usually small in size, and each of them very often performs a single, granular, specialised task. As a result, in order to build a complete, sometimes multi-domain solution, it is necessary to gather together a large number of diverse microservices, distributed with redundant copies across multiple worker nodes so that they operate in a fault-tolerant setup. Some of these microservices have to interact with each other, consume external services or expose their own service endpoints. The more services you host, the more important the need becomes for a container orchestration engine and for an ecosystem of tools and utilities around microservices architectures that support the entire container-driven platform.


In 2014 Google created Kubernetes, an open source project based on their in-house container orchestration engine called Borg. You can read more here. Subsequently, the Cloud Native Computing Foundation was founded to support, promote and incubate the entire ecosystem around containers. Kubernetes is considered one of the most important projects in the entire ecosystem, if not the most important one.

Managed Kubernetes

As software becomes more and more multi-layered and sophisticated, professionals responsible for maintaining software solutions in their organizations tend to seek various ways to keep things as simple as possible. One way to do that is to use managed cloud-based platforms. A managed platform takes a great deal of the operations burden from your team by delegating repetitive housekeeping activities to your cloud provider. Oracle Cloud Infrastructure features a managed Kubernetes platform called Oracle Container Engine for Kubernetes, also known as Oracle Kubernetes Engine and abbreviated OKE. You can read more about it here.

Target Infrastructure

In this short tutorial, we are going to provision a managed Kubernetes cluster with a node pool distributed across two availability domains. We will start with two (2) Linux-based worker nodes (virtual machines), but the general idea behind any kind of pool is that you should be able to scale this number up or down, at any time, based on your needs. To remain in control of various interconnectivity aspects, we are going to define custom networking resources such as a Virtual Cloud Network (VCN) and dedicated VCN subnets for the worker nodes and load balancers. The Kubernetes cluster will be provisioned together with a two-instance node pool on top of the custom networking. It may sound a bit complex, but it is not.


Provisioning Infrastructure and Managed Kubernetes

Provisioning a managed Kubernetes cluster together with the cloud networking resources can be done using a set of simple Terraform configurations which I’ve made available here on my GitHub account. I’ve structured the configuration files in such a way that you can copy the k8s module directory and use it in your project. The sample module block used to reference the module can be found in the modules.tf file:

/home/michal/git/blog-code/oci-11/infrastructure$ tree
├── k8s
│   ├── cluster.tf
│   ├── vars.tf
│   └── vcn.tf
├── locals.tf
├── modules.tf
├── provider.tf
├── vars.tf
└── vcn.tf

The main directory of your Terraform project is typically called the root module. The provider.tf file specifies the oci provider and, in this way, tells Terraform to use the plugin for Oracle Cloud Infrastructure. The variables used in the provider.tf file are defined in the vars.tf file, together with another variable (vcn_cidr) which describes the address range for the Virtual Cloud Network (VCN). The VCN cloud resource as well as an Internet Gateway cloud resource are defined in the vcn.tf file. The locals.tf file contains a data source that fetches the Availability Domain names and stores them as local values. Finally, the modules.tf file uses a module block to include the resources from the k8s folder as a Terraform module. Variables and attributes from the root module, such as the VCN OCID or the Internet Gateway OCID, are passed as input parameters to the k8s module.
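The module block in modules.tf might be sketched as follows. This is an illustration only: the input names (compartment_ocid, vcn_ocid, igw_ocid, ads) and the resource names referenced here are assumptions, and the file in the repository is authoritative:

```hcl
# modules.tf (sketch): includes the k8s folder as a Terraform module and
# passes resources created in the root module as input parameters.
# All names below are hypothetical; check the repository for the real ones.
module "k8s" {
  source           = "./k8s"
  compartment_ocid = "${var.compartment_ocid}"
  vcn_ocid         = "${oci_core_virtual_network.vcn.id}"
  igw_ocid         = "${oci_core_internet_gateway.igw.id}"
  ads              = "${local.ads_names}"
}
```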

The k8s module directory contains the configuration files that Terraform will use to create the required subnets and provision the Kubernetes cluster, including its worker node pool and the cluster's load balancers. The k8s/vars.tf file defines the input variables as well as variables local to the module. The k8s/vcn.tf file defines a route table, two security lists and four (4) subnets: one pair for the worker nodes and one pair for the load balancers. As you can imagine, each pair is spread across the two availability domains. The networking resources are created within the VCN defined in the root module, and a route rule references the internet gateway that has also been defined in the root module. The Kubernetes cluster and the node pool are defined in the k8s/cluster.tf file.

Let’s provision the cluster:

  1. Clone the blog-code repository from my GitHub account, if you haven’t done it yet.
  2. Make sure you’ve set all required Terraform variables. For example, you can do it using TF_VAR_ environment variables. You can issue env | grep TF_VAR_ to list them and inspect if they are correct. If you are not sure what to do, see: Terraform Setup recipe.
  3. Initialize the working directory. Terraform will download the newest version of the provider plugin, unless already installed.

    ~/git/blog-code/oci-11> terraform init
    Initializing provider plugins...
    - Checking for available provider plugins on https://releases.hashicorp.com...
    - Downloading plugin for provider "oci" (3.11.2)...
    The following providers do not have any version constraints in configuration,
    so the latest version was installed.
    To prevent automatic upgrades to new major versions that may contain breaking
    changes, it is recommended to add version = "..." constraints to the
    corresponding provider blocks in configuration, with the constraint strings
    suggested below.
    * provider.oci: version = "~> 3.11"
    Terraform has been successfully initialized!
  4. Provision the cloud resources using a single Terraform command:
    $ terraform apply --auto-approve
    module.k8s.oci_core_subnet.oke_loadbalancers_ad2_net: Creation complete after 1s
    module.k8s.oci_core_subnet.oke_workers_ad2_net: Creation complete after 1s
    module.k8s.oci_core_subnet.oke_loadbalancers_ad1_net: Creation complete after 1s
    module.k8s.oci_containerengine_cluster.k8s_cluster: Creating...
    module.k8s.oci_core_subnet.oke_workers_ad1_net: Creation complete after 2s
    module.k8s.oci_containerengine_cluster.k8s_cluster: Creation complete after 6m11s
    module.k8s.oci_containerengine_node_pool.k8s_nodepool: Creating...
    module.k8s.oci_containerengine_node_pool.k8s_nodepool: Creation complete after 2s
    Apply complete! Resources: 11 added, 0 changed, 0 destroyed.

Even though Terraform has reported the provisioning of the node pool as complete, it is not finished yet. The instances in your node pool still have to boot and install the required software, which can take a minute or two. Before you begin working with your cluster, please make sure that both worker nodes are ACTIVE, as in the screen below:


The worker node instances are at the same time regular compute instances running in the same compartment in which you’ve provisioned the node pool. You can see their details in OCI Console:


Interacting with the managed cluster

To interact with the cluster you will use the kubectl tool. First, you need to download the config file that has been generated for the newly created cluster. To do so, execute the OCI CLI command from the code snippet below:

# oci/kube-dashboard.sh 1/2
mkdir -p ~/.kube
oci ce cluster create-kubeconfig --cluster-id $CLUSTER_OCID --file ~/.kube/config --region "$REGION"

This is how you test the connectivity:

$ kubectl cluster-info
Kubernetes master is running at https://czgembsgi4t.eu-frankfurt-1.clusters.oci.oraclecloud.com:6443
KubeDNS is running at https://czgembsgi4t.eu-frankfurt-1.clusters.oci.oraclecloud.com:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

Launching Kubernetes Dashboard

You can launch a local Kubernetes Dashboard that tunnels your interactions with the cluster’s API using kubectl proxy.

# oci/kube-dashboard.sh 2/2
$ export KUBECONFIG=~/.kube/config
$ kubectl proxy &
[1] 1219
Starting to serve on

Now you should be able to access the dashboard in your web browser under this link: http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

To sign in, select Kubeconfig mode and use the same config file you’ve downloaded using OCI CLI oci ce cluster create-kubeconfig command.



You can find the code I presented in this chapter here.

OCI #6 – Launching Compute instance using CLI

This recipe shows how to provision a new compute instance using Oracle Cloud Infrastructure CLI.

Before you start, you must install and configure Oracle Cloud Infrastructure CLI. See: CLI Setup recipe, if needed.

Target Infrastructure

In this short tutorial we are going to provision a new compute instance on Oracle Cloud Infrastructure. The instance will be a lightweight Linux-based virtual machine with just 1 OCPU (== 2 vCPUs) and 15GB memory. We will place the instance in a public subnet and assign an ephemeral public IP address. In this way, the virtual machine will be accessible from the Internet. To demonstrate it, we will enable a web server, host a static page and open port 80. All steps will be performed with Oracle Cloud Infrastructure CLI.



Oracle Cloud Infrastructure resources are grouped together in Compartments. Every OCI CLI command must know the compartment in which it has to perform operations. Most often you will define a default compartment for CLI. You do it in ~/.oci/oci_cli_rc file. See: CLI Setup recipe, if needed.

This is the quickest way to display the name of the default Compartment that CLI is configured to work with:

oci iam compartment get --output table --query "data.{CompartmentName:\"name\"}"
| CompartmentName |
| Sandbox         |


Cloud-based compute instances that are spread across bare metal servers and numerous virtual machines require interconnectivity. Although cloud networking, as seen by the user, is rather simplified and flattened, especially when compared to traditional on-premises topologies, you still need to design and roll out some networking configuration. In this way, you can decide which instances are accessible from the Internet and which are not. You can group related instances together in subnets and define security rules that allow access on selected ports.

Virtual Cloud Network

A Virtual Cloud Network (VCN) is a software-defined network that provides a contiguous IPv4 address block, firewall and routing rules. A VCN is subdivided into multiple, non-overlapping VCN subnets.

This is how you create a VCN using OCI CLI:

$ oci network vcn create --cidr-block --display-name demovcn --dns-label demovcn
{
  "data": {
    "cidr-block": "",
    "compartment-id": "ocid1.compartment.oc1..aaaaaaaavpjjlshvlm7nh6gxuhbsdzdbvhuiihenvyaqz6o4hrycscjtq75q",
    "default-dhcp-options-id": "ocid1.dhcpoptions.oc1.eu-frankfurt-1.aaaaaaaaklavzudx7vkb2pi42sx72zlws4mcsmgalbojbd2pqq7hvjhb2zfa",
    "default-route-table-id": "ocid1.routetable.oc1.eu-frankfurt-1.aaaaaaaamscrcftelrikngjckzje5fnqqoxbw7opmtdo55banhayrlajv75q",
    "default-security-list-id": "ocid1.securitylist.oc1.eu-frankfurt-1.aaaaaaaalq7jklywlm4qywy2smblsdcxqbadtk5asqdvkszqigs3uqnxhlna",
    "defined-tags": {},
    "display-name": "demovcn",
    "dns-label": "demovcn",
    "freeform-tags": {},
    "id": "ocid1.vcn.oc1.eu-frankfurt-1.aaaaaaaaazy45d63g6r7l7uvytioflhceuq6xu2fdjg6d6bptrhmckns6rhq",
    "lifecycle-state": "AVAILABLE",
    "time-created": "2018-10-22T21:51:26.228000+00:00",
    "vcn-domain-name": "demovcn.oraclevcn.com"
  },
  "etag": "339cc316"
}
$ oci network vcn list --output table --query "data [*].{Name:\"display-name\", CIDR:\"cidr-block\", Domain:\"vcn-domain-name\"}"
| CIDR            | Domain                | Name    |
| | demovcn.oraclevcn.com | demovcn |

Internet Gateway

The traffic between instances in your VCN and the Internet goes through an Internet Gateway (IGW). An IGW exists in the context of your VCN. It can be disabled, for example to immediately isolate your cloud resources from the Internet in case of a problem, and re-enabled at any time.

This is how you create a VCN Internet Gateway using OCI CLI. The first command queries for an OCID of the previously generated VCN and saves the result in a variable. The variable is used in the command that creates an IGW:

$ vcnOCID=`oci network vcn list --query "data [?\"display-name\"=='demovcn'].{OCID:\"id\"}" | grep OCID | awk -F'[\"|\"]' '{print $4}'`
$ echo $vcnOCID
$ oci network internet-gateway create --vcn-id $vcnOCID --display-name demoigw --is-enabled true > /dev/null
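The OCID-extraction pipeline used above is worth a closer look: the --query filter returns a small JSON fragment, grep picks the line with the OCID attribute, and awk splits that line on double quotes and prints the fourth field, which is the bare value. You can verify the idea locally on a made-up JSON fragment (the OCID below is fictional):

```shell
# Fictional JSON fragment, shaped like the output of the --query filter
json='[
  {
    "OCID": "ocid1.vcn.oc1..exampleuniqueid"
  }
]'

# grep selects the line containing OCID; awk treats double quotes as field
# separators, so the fourth field is the attribute value itself
ocid=$(echo "$json" | grep OCID | awk -F'[\"|\"]' '{print $4}')
echo "$ocid"
```

The same pattern is reused throughout this recipe for route tables, security lists, subnets and images.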



Compute instances must be launched within subnets. Before we create a subnet, we are going to prepare a Route Table and a Security List. The Route Table will direct the outgoing traffic through the Internet Gateway we've just created. The Security List will contain Security Rules that define what kind of ingress and egress traffic is allowed for the subnet. You can think of Security Rules as an additional layer of a virtual firewall.

This is how you create a simple Route Table that directs the outgoing traffic to the Internet Gateway:

$ igwOCID=`oci network internet-gateway list --vcn-id $vcnOCID --query "data [?\"display-name\"=='demoigw'].{OCID:\"id\"}" | grep OCID | awk -F'[\"|\"]' '{print $4}'`
$ echo $igwOCID
$ oci network route-table create --vcn-id $vcnOCID --display-name publicrt --route-rules "[{\"cidrBlock\":\"\", \"networkEntityId\":\"$igwOCID\"}]" > /dev/null


This is how you create a Security List that allows the inbound traffic on ports 22 and 80 as well as any outbound traffic:

$ egress='[{"destination": "", "protocol": "all" }]'
$ ingress='[{"source": "", "protocol": "6", "tcpOptions": { "destinationPortRange": {"max": 22, "min": 22} } }, {"source": "", "protocol": "6", "tcpOptions": { "destinationPortRange": {"max": 80, "min": 80} } }]'
$ oci network security-list create --vcn-id $vcnOCID --display-name publicsl --egress-security-rules "$egress" --ingress-security-rules "$ingress" > /dev/null
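The egress and ingress rules are passed to the CLI as inline JSON strings, where typos are easy to make. Before running the command, you can sanity-check the JSON locally, for example with Python's json.tool module. In the snippet below the 0.0.0.0/0 source is an assumed example value, not taken from the original:

```shell
# Hypothetical ingress rule allowing inbound TCP on port 22 from anywhere;
# the 0.0.0.0/0 source is an assumed example value
ingress='[{"source": "0.0.0.0/0", "protocol": "6",
  "tcpOptions": { "destinationPortRange": {"max": 22, "min": 22} } }]'

# python3 -m json.tool parses stdin and exits non-zero on malformed JSON
if echo "$ingress" | python3 -m json.tool > /dev/null; then
  echo "valid JSON"
fi
```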



This is how you create a subnet:

$ rtOCID=`oci network route-table list --vcn-id $vcnOCID --query "data [?\"display-name\"=='publicrt'].{OCID:\"id\"}" | grep OCID | awk -F'[\"|\"]' '{print $4}'`
$ echo $rtOCID
$ slOCID=`oci network security-list list --vcn-id $vcnOCID --query "data [?\"display-name\"=='publicsl'].{OCID:\"id\"}" | grep OCID | awk -F'[\"|\"]' '{print $4}'`
$ echo $slOCID
$ oci network subnet create --vcn-id $vcnOCID --display-name demosubnet --dns-label subnet  --cidr-block --prohibit-public-ip-on-vnic false --availability-domain "feDV:EU-FRANKFURT-1-AD-1" --route-table-id "$rtOCID" --security-list-ids "[\"$slOCID\"]" > /dev/null


Compute instance

There are two families of compute instances on Oracle Cloud Infrastructure:

  • Bare Metal Hosts
  • Virtual Machines

Bare Metal Hosts are powered by dedicated, single-tenant hardware with no hypervisor on board. They are very powerful and can provide a good foundation for large-scale enterprise computing. At the time of writing, the smallest bare metal server uses 36 OCPUs (== 72 vCPUs). Well… you do not always need such a mighty server, do you? Virtual machines, on the other hand, are powered by multi-tenant, hypervisor-managed servers. As a result, you can provision a lightweight virtual machine starting from just 1 OCPU (== 2 vCPUs). A Shape defines the profile of hardware resources of a compute instance. For example, the VM.Standard2.1 Shape delivers 1 OCPU and 15GB Memory.

An Image is a template that specifies the instance's preinstalled software, including the Operating System. It is used to initialize the boot volume that gets associated with the instance during provisioning. Oracle Cloud Infrastructure provides default images with various Operating Systems, and you can create your own custom images, usually on top of the existing ones. Let's fetch the OCID of the newest CentOS 7 image and save it to a variable:

$ imageOCID=`oci compute image list --operating-system "CentOS" --operating-system-version 7 --sort-by TIMECREATED --query "data [0].{DisplayName:\"display-name\", OCID:\"id\"}" | grep OCID | awk -F'[\"|\"]' '{print $4}'`
$ echo $imageOCID

An instance has to exist within a subnet. Let's fetch the OCID of the subnet we've created and save it to a variable:

$ subnetOCID=`oci network subnet list --vcn-id $vcnOCID --query "data [?\"display-name\"=='demosubnet'].{OCID:\"id\"}" | grep OCID | awk -F'[\"|\"]' '{print $4}'`
$ echo $subnetOCID

A Linux-based compute instance requires an SSH public key to enable remote access. Please prepare a key pair before proceeding. See: SSH Keypair recipe, if needed. This guide assumes you store the public key in ~/.ssh/oci_id_rsa.pub

Oracle Cloud Infrastructure compute instances can use cloud-init user data to perform the initial configuration in an automated way, immediately after provisioning. Let's prepare a cloud-config file ~/cloud-init/michalsvm.cloud-config:
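The actual file ships with the blog-code repository. As a rough illustration, a cloud-config matching the target infrastructure described earlier (web server, static page, port 80 open in the OS firewall) could look like this; treat every detail below as an assumption rather than the exact original:

```yaml
#cloud-config
# Sketch for a CentOS 7 instance: install Apache, drop a static page
# and open port 80 in the OS firewall. Details are illustrative.
packages:
 - httpd
write_files:
 - path: /var/www/html/index.html
   content: |
     <h1>Hello from michalsvm</h1>
runcmd:
 - [ systemctl, enable, --now, httpd ]
 - [ firewall-cmd, --permanent, --add-port=80/tcp ]
 - [ firewall-cmd, --reload ]
```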

Now we are ready to launch the very first instance. This is how you do it using Oracle Cloud Infrastructure CLI:

oci compute instance launch --display-name michalsvm --availability-domain "feDV:EU-FRANKFURT-1-AD-1" --subnet-id "$subnetOCID" --private-ip --image-id "$imageOCID" --shape VM.Standard2.1 --ssh-authorized-keys-file ~/.ssh/oci_id_rsa.pub --user-data-file ~/cloud-init/michalsvm.cloud-config --wait-for-state RUNNING > /dev/null

It should take up to 1 or 2 minutes to complete the provisioning process. As mentioned before, our instance will get an ephemeral public IP address from the OCI public IPv4 address pool. Let's find out the address our instance was given:

$ vmOCID=`oci compute instance list --display-name michalsvm --lifecycle-state RUNNING | grep \"id\" | awk -F'[\"|\"]' '{print $4}'`
$ echo $vmOCID
$ oci compute instance list-vnics --instance-id "$vmOCID" | grep public-ip | awk -F'[\"|\"]' '{print $4}'

You will still need to wait a few additional seconds until the boot process completes and the SSH daemon starts accepting connections. This is how you connect to the machine:

$ ssh -i .ssh/oci_id_rsa opc@
The authenticity of host ' (' can't be established.
ECDSA key fingerprint is SHA256:tKs9JT5ubEVtBuKdKqux5ckktcLPRBrwWvWsjUaec4Q.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '' (ECDSA) to the list of known hosts.


Finally, let's check if we can access the web server:


OCI #5 – CLI Setup

This recipe shows how to install and configure Oracle Cloud Infrastructure CLI on your client machine.

Before you start, it is recommended that you read API Signing Key recipe to understand the concept of request signing.

Oracle Cloud Infrastructure REST API

Oracle Cloud Infrastructure exposes a comprehensive REST API to manage OCI resources and configurations. Every API request must be signed with an Oracle Cloud Infrastructure API Request Signature and sent over HTTPS with TLS 1.2. Signing a request is a non-trivial, multi-step process. This is why you usually use tools like the CLI, Terraform or custom SDK-based programs that encapsulate API calls and sign the requests on your behalf. All these tools eventually make calls to the OCI REST API, which is therefore the ultimate gateway to the cloud management plane.

Oracle Cloud Infrastructure CLI

Oracle Cloud Infrastructure CLI is a Python-based command line utility that encapsulates calls to the OCI REST API. This simplifies the way you consume the API, because OCI CLI takes on the burden of request signing. Furthermore, you can script API consumption using the mature ecosystem of Python libraries.

Installing OCI CLI

Oracle has prepared two installation scripts: one for Linux/macOS with Bash and one for Windows with PowerShell. The two scripts perform similar steps: they install Python and virtualenv, create an isolated Python environment, install the latest version of the CLI and alter the PATH variable. Alternatively, you can perform all these steps manually.

Installing OCI CLI on Linux or macOS

  1. Execute the following commands and follow the console-based installation wizard:
    bash -c "$(curl -L https://raw.githubusercontent.com/oracle/oci-cli/master/scripts/install/install.sh)"

Installing OCI CLI on Windows

  1. Launch a PowerShell console with the Run as Administrator option
  2. Execute the following commands and follow the console-based installation wizard:
    Set-ExecutionPolicy RemoteSigned
    powershell -NoProfile -ExecutionPolicy Bypass -Command "iex ((New-Object System.Net.WebClient).DownloadString('https://raw.githubusercontent.com/oracle/oci-cli/master/scripts/install/install.ps1'))"

Configuring OCI CLI

The CLI features an embedded configuration wizard that generates an API Signing key pair and creates the CLI configuration file based on the parameters given during the wizard run. You should identify a few Oracle Cloud Infrastructure Identifiers (OCIDs) before you launch the wizard.

  1. Sign in to OCI Console
  2. Go to Identity ➟ Users and copy the OCID of the user on whose behalf OCI CLI prepares, signs and sends OCI REST API requests.
  3. Go to Administration ➟ Tenancy Details and copy the OCID of the tenancy.
  4. Open a new command line window on your client machine and execute:
    oci setup config
  5. Provide the user OCID, the tenancy OCID and the region you are working with.
  6. Say Y(es) when asked if you want to generate a new RSA key pair, unless you prefer to use your own API Signing Key. To learn more on that topic, have a look at API Signing Key recipe.
  7. Say N(o) when asked if you want to write your private key passphrase to the config file, unless you do not mind storing it in plain text.
  8. If you use default options for the remaining parameters, your config file will be generated as ~/.oci/config
  9. Finally, you should upload the generated public key to OCI and associate it with the user you’ve chosen in the second step. You can learn how to do it in API Signing Key recipe.

The majority of OCI REST API resource operations, if not all, require a Compartment OCID. You can define default values for input parameters to OCI CLI commands to avoid unnecessary typing every time you invoke a CLI command. To add a default value for the Compartment OCID, perform these steps:

  1. Sign in to OCI Console
  2. Go to Identity ➟ Compartments and copy the OCID of the compartment you would like to work with. If needed, create a new Compartment.
  3. Create a new file: ~/.oci/oci_cli_rc and place the following lines there:
    [DEFAULT]
    compartment-id = placeHereTheCompartmentOCID

Now, you should be ready to test OCI CLI. Let’s list the available CentOS Images:

oci compute image list --operating-system CentOS --output table --query "data [*].{Image:\"display-name\"}"
| Image                    |
| CentOS-7-2018.09.19-0    |
| CentOS-7-2018.08.15-0    |
| CentOS-7-2018.06.22-0    |
| CentOS-6.9-2018.06.22-0  |
| CentOS-6.10-2018.09.19-0 |
| CentOS-6.10-2018.08.15-0 |

You can find the complete reference of CLI commands here.