OCI #11 – Managed Kubernetes (Oracle Kubernetes Engine)

This recipe shows how to provision a new Oracle Container Engine for Kubernetes (a managed Kubernetes platform running on Oracle Cloud Infrastructure) cluster.

Before you begin, make sure that you have installed and configured:
+ Terraform. See: Terraform Setup recipe, if needed.
+ Oracle Cloud Infrastructure CLI. See: CLI Setup recipe, if needed.

Containers

Everything around production-grade Linux containers at scale really began with this short demo back in 2013. Linux container engines use Linux kernel features (cgroups, namespaces) to isolate applications that run inside logical entities called containers, which are built from multi-layered templates called images. You can watch this comprehensive introduction video. Docker is the most popular Linux container engine, but there are also somewhat less mature alternatives such as the CNCF-incubated rkt or containerd.

Container Orchestration

Containerised applications are usually small, and each of them very often performs a single, granular, specialised task. As a result, building a complete, sometimes multi-domain solution means bringing together a large number of diverse microservices, distributed with redundant copies across multiple worker nodes so that they operate in a fault-tolerant setup. Some of these microservices have to interact with each other, consume external services or expose their own service endpoints. The more services you host, the more pressing the need becomes for a container orchestration engine and an ecosystem of tools and utilities that supports the entire container-driven platform.

Kubernetes

In 2014 Google created Kubernetes, an open-source project based on its in-house container orchestration engine called Borg. You can read more here. Subsequently, the Cloud Native Computing Foundation was founded to support, promote and incubate the entire ecosystem around containers. Kubernetes is considered one of, if not the, most important projects in the entire ecosystem.

Managed Kubernetes

As software becomes more and more multi-layered and sophisticated, the professionals responsible for maintaining software solutions in their organizations look for ways to keep things as simple as possible. One way to do that is to use managed cloud-based platforms. A managed platform takes a great deal of the operations burden from your team by delegating repetitive housekeeping activities to your cloud provider. Oracle Cloud Infrastructure features a managed Kubernetes platform called Oracle Container Engine for Kubernetes, also known as Oracle Kubernetes Engine and abbreviated OKE. You can read more about it here.

Target Infrastructure

In this short tutorial, we are going to provision a managed Kubernetes cluster with a node pool distributed across two availability domains. We will start with two (2) Linux-based worker nodes (virtual machines), but the general idea behind any kind of pool is that you should be able to scale this number up or down at any time, based on your needs. To remain in control of the various interconnectivity aspects, we are going to define custom networking resources such as a Virtual Cloud Network (VCN) and dedicated VCN subnets for the worker nodes and load balancers. The Kubernetes cluster will be provisioned together with a two-instance node pool on top of this custom networking. It may sound a bit complex, but it is not.

oci-11-infra

Provisioning Infrastructure and Managed Kubernetes

Provisioning a managed Kubernetes cluster together with the cloud networking resources can be done using a set of simple Terraform configurations which I’ve made available here on my GitHub account. I’ve structured the configuration files in such a way that you can copy the k8s module directory and use it in your project; the module block that references it lives in the modules.tf file. The project directory is structured as follows:

/home/michal/git/blog-code/oci-11/infrastructure$ tree
.
├── k8s
│   ├── cluster.tf
│   ├── vars.tf
│   └── vcn.tf
├── locals.tf
├── modules.tf
├── provider.tf
├── vars.tf
└── vcn.tf

The main directory of your Terraform project is typically called the root module. The provider.tf file specifies the oci provider and, in this way, tells Terraform to use the plugin for Oracle Cloud Infrastructure. The variables used in the provider.tf file are defined in the vars.tf file, together with another variable (vcn_cidr) which describes the address range of the Virtual Cloud Network (VCN). The VCN cloud resource as well as an Internet Gateway cloud resource are defined in the vcn.tf file. The locals.tf file contains a data source that fetches the Availability Domain names and stores them as local values. Finally, the modules.tf file uses a module block to include the resources from the k8s folder as a Terraform module. Variables and attributes from the root module, such as the VCN OCID or the Internet Gateway OCID, are passed as input parameters to the k8s module.
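
For orientation, a minimal sketch of what the module block in modules.tf might look like is shown below. The variable and resource names are illustrative assumptions; treat the actual file in the repository as authoritative:

# modules.tf (sketch; identifiers are illustrative, see the repository for the real file)
module "k8s" {
  source   = "./k8s"
  vcn_ocid = "${oci_core_virtual_network.vcn.id}"
  igw_ocid = "${oci_core_internet_gateway.igw.id}"
  ads      = ["${local.ad_1_name}", "${local.ad_2_name}"]
}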

The k8s module directory contains the configuration files that Terraform uses to create the required subnets and provision the Kubernetes cluster, including its worker node pool and the cluster’s load balancers. The k8s/vars.tf file defines the input variables as well as variables local to the module. The k8s/vcn.tf file defines a route table, two security lists and four (4) subnets: one pair for the worker nodes and one pair for the load balancers. As you can imagine, each pair is spread across two availability domains. The networking resources are created within the VCN defined in the root module, and a route rule references the internet gateway that has also been defined in the root module. The Kubernetes cluster and the node pool are defined in the k8s/cluster.tf file, sketched below.
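
Before running Terraform, it may help to see the shape of the cluster definition. A simplified sketch of what k8s/cluster.tf contains follows; the resource names match the Terraform output shown further down, but the variable names, Kubernetes version, node image and shape are assumptions, so check the repository for the actual file:

# k8s/cluster.tf (simplified sketch)
resource "oci_containerengine_cluster" "k8s_cluster" {
  compartment_id     = "${var.compartment_ocid}"   # assumed variable name
  name               = "k8s-cluster"
  kubernetes_version = "v1.11.5"                   # assumed version
  vcn_id             = "${var.vcn_ocid}"           # assumed variable name
  options {
    service_lb_subnet_ids = [
      "${oci_core_subnet.oke_loadbalancers_ad1_net.id}",
      "${oci_core_subnet.oke_loadbalancers_ad2_net.id}"
    ]
  }
}

resource "oci_containerengine_node_pool" "k8s_nodepool" {
  compartment_id      = "${var.compartment_ocid}"  # assumed variable name
  cluster_id          = "${oci_containerengine_cluster.k8s_cluster.id}"
  name                = "k8s-nodepool"
  kubernetes_version  = "v1.11.5"                  # assumed version
  node_image_name     = "Oracle-Linux-7.5"         # assumed image
  node_shape          = "VM.Standard2.1"           # assumed shape
  quantity_per_subnet = 1                          # one worker per subnet, two in total
  subnet_ids = [
    "${oci_core_subnet.oke_workers_ad1_net.id}",
    "${oci_core_subnet.oke_workers_ad2_net.id}"
  ]
}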

Let’s provision the cluster:

  1. Clone the blog-code repository from my GitHub account, if you haven’t done it yet.
  2. Make sure you’ve set all required Terraform variables. For example, you can do it using TF_VAR_ environment variables. You can run env | grep TF_VAR_ to list them and check that they are correct. If you are not sure what to do, see: Terraform Setup recipe.
  3. Initialize the working directory. Terraform will download the newest version of the provider plugin, unless already installed.

    ~/git/blog-code/oci-11> terraform init
    Initializing provider plugins...
    - Checking for available provider plugins on https://releases.hashicorp.com...
    - Downloading plugin for provider "oci" (3.11.2)...
    
    The following providers do not have any version constraints in configuration,
    so the latest version was installed.
    
    To prevent automatic upgrades to new major versions that may contain breaking
    changes, it is recommended to add version = "..." constraints to the
    corresponding provider blocks in configuration, with the constraint strings
    suggested below.
    
    * provider.oci: version = "~> 3.11"
    
    Terraform has been successfully initialized!
  4. Provision the cloud resources using a single Terraform command:
    $ terraform apply --auto-approve
    ...
    module.k8s.oci_core_subnet.oke_loadbalancers_ad2_net: Creation complete after 1s
    module.k8s.oci_core_subnet.oke_workers_ad2_net: Creation complete after 1s
    module.k8s.oci_core_subnet.oke_loadbalancers_ad1_net: Creation complete after 1s
    module.k8s.oci_containerengine_cluster.k8s_cluster: Creating...
    module.k8s.oci_core_subnet.oke_workers_ad1_net: Creation complete after 2s
    module.k8s.oci_containerengine_cluster.k8s_cluster: Creation complete after 6m11s
    
    module.k8s.oci_containerengine_node_pool.k8s_nodepool: Creating...
    module.k8s.oci_containerengine_node_pool.k8s_nodepool: Creation complete after 2s
    
    Apply complete! Resources: 11 added, 0 changed, 0 destroyed.
    

Even though Terraform reports the provisioning of the node pool as complete, it is not finished yet. The instances in the node pool still have to boot and install the required software, which can take a minute or two. Before you begin working with your cluster, please make sure that both worker nodes are ACTIVE, like in the screen below:

oci-11-nodepool

The worker node instances are also regular compute instances running in the same compartment in which you’ve provisioned the node pool. You can see their details in the OCI Console:

oci-11-nodepool-instances
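
If you prefer the command line over the Console, you should also be able to check the worker nodes' state with the OCI CLI; look for the nodes' lifecycle-state in the returned JSON (the placeholder below is yours to fill in):

oci ce node-pool get --node-pool-id {put-here-the-node-pool-ocid}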

Interacting with the managed cluster

To interact with the cluster you will use the kubectl tool. First, you need to download the kubeconfig file that has been generated for the newly created cluster. To download it, execute the OCI CLI command from the code snippet below:

# oci/kube-dashboard.sh 1/2
CLUSTER_OCID={put-here-the-cluster-ocid}
REGION={put-here-your-region}
mkdir -p ~/.kube
oci ce cluster create-kubeconfig --cluster-id $CLUSTER_OCID --file ~/.kube/config --region "$REGION"

This is how you test the connectivity:

$ kubectl cluster-info
Kubernetes master is running at https://czgembsgi4t.eu-frankfurt-1.clusters.oci.oraclecloud.com:6443
KubeDNS is running at https://czgembsgi4t.eu-frankfurt-1.clusters.oci.oraclecloud.com:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
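
Once the worker nodes have booted and joined the cluster, listing them should show both in the Ready state:

$ kubectl get nodes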

Launching Kubernetes Dashboard

You can access the Kubernetes Dashboard locally by tunnelling your interactions with the cluster’s API through kubectl proxy.

# oci/kube-dashboard.sh 2/2
$ export KUBECONFIG=~/.kube/config
$ kubectl proxy &
[1] 1219
Starting to serve on 127.0.0.1:8001

Now you should be able to access the dashboard in your web browser under this link: http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

To sign in, select the Kubeconfig mode and use the same config file you’ve downloaded with the OCI CLI oci ce cluster create-kubeconfig command.

oci-11-dashboard-welcome

oci-11-dashboard

You can find the code I presented in this chapter here.

OCI #9 – Custom Image for Compute

This recipe shows how to create a new Custom Image for Compute instances.

Before you start, you must install and configure Oracle Cloud Infrastructure CLI (see: CLI Setup recipe, if needed) and Terraform (see: Terraform Setup recipe, if needed).

An Image is a template that specifies the instance’s preinstalled software, including the Operating System. It is used to initialize the boot volume that gets associated with an instance during provisioning. Oracle Cloud Infrastructure provides default images with various Operating Systems. You can create your own custom images to install additional software on top of an existing image.

You create a new Custom Image from an existing compute instance, which explains the recommended steps you should take:

  1. provision a new compute instance (I like to call it an image builder instance)
  2. install software you would like your new custom image to include
  3. create a new custom image from the instance
  4. terminate the compute instance

If you use scripts, the entire process can be automated and parametrized.

Create Custom Image

First, let’s launch a new compute instance to use as the base for our image. If you would like your new custom image to include an installation of Node.js on top of CentOS 7.5, you have to provision the instance using the Oracle-provided CentOS 7.5 image, install Node.js and then use the instance to create a new custom image.

You can use a simple stack I’ve prepared. It will launch a single virtual machine with CentOS 7.5 in a newly created VCN. To do so:

git clone https://github.com/mtjakobczyk/blog-code.git
cd blog-code/oci-09/
# Create an SSH keypair for the image builder instance
mkdir keys
ssh-keygen -t rsa -b 2048 -C "yourName@example.com" -f keys/oci_id_rsa

Now you should see this:

tree git/blog-code/oci-09
git/blog-code/oci-09
├── build_image.sh
├── infrastructure
│   ├── basevm.tf
│   └── provider.tf
├── keys
│   ├── oci_id_rsa ⇽ 
│   └── oci_id_rsa.pub ⇽ 
└── scripts
    ├── image_building_simple.sh
    └── wait_for_ssh_on_vm.sh

Terraform will need to know the compartment OCID. You can pass it using an environment variable. Replace the placeholder and execute the code snippet below:

# oci-09/build_image.sh 1/5
custom_image_shape="VM.Standard2.1"
custom_image_base="CentOS-7-2018.10.12-0"
custom_image_name="$custom_image_base-Node.js-v10.0.0"
export TF_VAR_compartment_ocid={put-here-your-compartment}

Now you can provision the instance by executing the code snippet. Please use the same console tab as before to reuse the shell variables:

# oci-09/build_image.sh 2/5
export TF_VAR_custom_image_shape="$custom_image_shape"
export TF_VAR_custom_image_base="$custom_image_base"
cd infrastructure
terraform init
echo "IB: Provisioning image building infrastructure"
terraform apply -auto-approve 2>&1 | tee ../terraform.out
vm_ip=`cat ../terraform.out | grep image_builder_vm_public_ip | grep -o '[0-9]\+\.[0-9]\+\.[0-9]\+\.[0-9]\+'`
vm_ocid=`cat ../terraform.out | grep image_builder_vm_ocid | awk '{print $3}'`
echo "IB ip: $vm_ip ocid: $vm_ocid"
cd ..
echo "IB: Waiting for SSH on VM"
./scripts/wait_for_ssh_on_vm.sh
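
The wait_for_ssh_on_vm.sh helper comes from the repository; conceptually it just keeps retrying an SSH connection until the instance answers. A minimal sketch of that idea is shown below (the real helper may obtain the IP address and tune the retry logic differently):

#!/bin/bash
# Sketch only: poll the image builder VM until sshd accepts connections.
# Assumes the VM's public IP is passed as the first argument; the real script may differ.
vm_ip="$1"
until ssh -i keys/oci_id_rsa -o StrictHostKeyChecking=no -o ConnectTimeout=5 opc@"$vm_ip" true 2>/dev/null; do
  echo "IB: SSH not ready yet, retrying in 10s"
  sleep 10
done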

Next, install the software you would like your custom image to include. As you can see, I am using an external shell script that is executed on the instance to perform the installation of Node.js. If you would like to install any other software, this is the script you should customise (scripts/image_building_simple.sh); a sketch of it follows the snippet below. The installation script is executed synchronously, so we can be sure that the image never gets created too early. Please use the same console tab as before to reuse the shell variables:

# oci-09/build_image.sh 3/5
echo "IB: Executing image building script"
ssh -i keys/oci_id_rsa -o StrictHostKeyChecking=no -l opc $vm_ip "bash -s" < scripts/image_building_simple.sh
echo "IB: Ready"
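
The actual contents of scripts/image_building_simple.sh live in the repository. To give you an idea, installing Node.js 10 on CentOS can be as simple as the commands below (a sketch, not necessarily the exact contents of the script):

#!/bin/bash
# Sketch: install Node.js 10.x on CentOS 7 from the NodeSource repository.
curl -sL https://rpm.nodesource.com/setup_10.x | sudo bash -
sudo yum install -y nodejs
node --version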

We use OCI CLI to create the custom image:

# oci-09/build_image.sh 4/5
oci compute image create --display-name $custom_image_name --instance-id "$vm_ocid" --wait-for-state AVAILABLE

Finally, we use Terraform to destroy the image builder instance:

# oci-09/build_image.sh 5/5
cd infrastructure
terraform destroy -auto-approve
unset TF_VAR_custom_image_shape
unset TF_VAR_custom_image_base
unset TF_VAR_compartment_ocid

You can find the entire script here.

Now you should be able to see the new custom image:

oci compute image list --operating-system CentOS --operating-system-version 7 --output table --query "data [*].{Image:\"display-name\"}"
+---------------------------------------+
| Image                                 |
+---------------------------------------+
| CentOS-7-2018.11.16-0                 |
| CentOS-7-2018.10.12-0                 |
| CentOS-7-2018.09.19-0                 |
| CentOS-7-2018.10.12-0-Node.js-v10.0.0 | 
+---------------------------------------+

oci-09-step1

OCI #8 – Launching Compute instance using Terraform

This recipe shows how to provision a new compute instance using Oracle Cloud Infrastructure Terraform provider.

Before you start, you must install and configure Terraform for OCI. See: Terraform Setup recipe, if needed.

Target Infrastructure

In this short tutorial we are going to provision a new compute instance on Oracle Cloud Infrastructure. The instance will be a lightweight Linux-based virtual machine with just 1 OCPU (== 2 vCPUs) and 15 GB of memory. We will place the instance in a public subnet and assign an ephemeral public IP address. In this way, the virtual machine will be accessible from the Internet. To demonstrate it, we will enable a web server, host a static page and open port 80. All steps will be performed with the Oracle Cloud Infrastructure Terraform provider.

oci-06-infra

Infrastructure Code project

We are going to define and manage the infrastructure as code using Terraform. Let’s prepare a new directory and initialize the Terraform project:

  1. Create a new infrastructure code project directory and step into the folder
  2. Define the provider configuration (provider.tf) together with the variables that will be mapped from operating system environment variables we’ve defined during setup. If you haven’t done it, see: Terraform Setup recipe:
    # provider.tf 1/2
    variable "tenancy_ocid" {}
    variable "user_ocid" {}
    variable "fingerprint" {}
    variable "region" {}
    variable "private_key_path" {}
    variable "private_key_password" {}
    
    provider "oci" {
      tenancy_ocid = "${var.tenancy_ocid}"
      user_ocid = "${var.user_ocid}"
      fingerprint = "${var.fingerprint}"
      region = "${var.region}"
      private_key_path = "${var.private_key_path}"
      private_key_password = "${var.private_key_password}"
    }
  3. Initialize the working directory. Terraform will download the newest version of the provider plugin, unless already installed.

    ~/git/blog-code/oci-08> terraform init
    Initializing provider plugins...
    - Checking for available provider plugins on https://releases.hashicorp.com...
    - Downloading plugin for provider "oci" (3.5.0)...
    
    The following providers do not have any version constraints in configuration,
    so the latest version was installed.
    
    To prevent automatic upgrades to new major versions that may contain breaking
    changes, it is recommended to add version = "..." constraints to the
    corresponding provider blocks in configuration, with the constraint strings
    suggested below.
    
    * provider.oci: version = "~> 3.5"
    
    Terraform has been successfully initialized!

Compartments

Oracle Cloud Infrastructure resources are grouped together in Compartments. The resources you define in the infrastructure code very often require that you specify the compartment in which the resources are created.

  1. Sign in to OCI Console
  2. Go to Identity → Compartments and copy the OCID of the compartment you would like to work with. If needed, create a new Compartment.
  3. Please add the following variable to your environment (for example in ~/.profile):
    export TF_VAR_compartment_ocid={put-here-the-compartment-ocid}
  4. Read the newly added variable into the current bash session:

    $ source ~/.profile
  5. Add a new Terraform variable to provider.tf. In this way you will capture the Compartment OCID in the Terraform project:

    # provider.tf 2/2
    variable "compartment_ocid" {}

You can find the entire provider.tf file here.

Networking

Cloud-based compute instances that are spread across bare metal servers and numerous virtual machines require interconnectivity. Although cloud networking, as seen by the user, is rather simplified and flattened, especially when compared to traditional on-premises topologies, you still need to design and roll out some networking configuration. In this way, you can decide which instances are accessible from the Internet and which aren’t. You can group related instances together in subnets and define security rules that allow access on selected ports.

  1. Go to the infrastructure code project directory
  2. Create a new configuration file called vcn.tf (you can use a different name, but make sure you use the .tf extension)

Virtual Cloud Network

A Virtual Cloud Network (VCN) is a software-defined network that provides a contiguous IPv4 address block, firewall and routing rules. A VCN is subdivided into multiple, non-overlapping VCN subnets.

This is how you define a VCN in Terraform HCL. Please add this section to the vcn.tf configuration file:

# vcn.tf 1/5
resource "oci_core_virtual_network" "demo_vcn" {
  compartment_id = "${var.compartment_ocid}"
  display_name = "demo-vcn"
  cidr_block = "192.168.10.0/24"
  dns_label = "demovcn"
}

Internet Gateway

The traffic between instances in your VCN and the Internet goes through an Internet Gateway (IGW). An IGW exists in the context of your VCN. It can be disabled to immediately isolate your cloud resources from the Internet and re-enabled once the underlying problem has been solved.

This is how you define an IGW in Terraform HCL. Please add this section to the vcn.tf configuration file:

# vcn.tf 2/5
resource "oci_core_internet_gateway" "demo_igw" {
  compartment_id = "${var.compartment_ocid}"
  vcn_id = "${oci_core_virtual_network.demo_vcn.id}"
  display_name = "demo-igw"
  enabled = "true"
}

Subnet

Compute instances must be launched within subnets. Before we create a subnet, we are going to prepare a Route Table and a Security List. The Route Table will direct the outgoing traffic through the Internet Gateway we’ve just defined. The Security List will contain Security Rules that define what kind of ingress and egress traffic is allowed for the subnet. You can think of Security Rules as an additional layer of a virtual firewall.

This is how you define a simple Route Table that directs the entire (0.0.0.0/0) outgoing traffic to the Internet Gateway:

# vcn.tf 3/5
resource "oci_core_route_table" "demo_rt" {
  compartment_id = "${var.compartment_ocid}"
  vcn_id = "${oci_core_virtual_network.demo_vcn.id}"
  display_name = "demo-rt"
  route_rules {
    destination = "0.0.0.0/0"
    network_entity_id = "${oci_core_internet_gateway.demo_igw.id}"
  }
}

This is how you define a Security List that allows the inbound traffic on ports 22 and 80 as well as any outbound traffic:

# vcn.tf 4/5
resource "oci_core_security_list" "demo_sl" {
  compartment_id = "${var.compartment_ocid}"
  vcn_id = "${oci_core_virtual_network.demo_vcn.id}"
  display_name = "demo-sl"
  egress_security_rules = [
    { destination = "0.0.0.0/0", protocol = "all" }
  ]
  ingress_security_rules = [
    { protocol = "6", source = "0.0.0.0/0", tcp_options { "max" = 22, "min" = 22 }},
    { protocol = "6", source = "0.0.0.0/0", tcp_options { "max" = 80, "min" = 80 }}
  ]
}

Finally, this is how you define a Subnet in Terraform HCL:

# vcn.tf 5/5
resource "oci_core_subnet" "demo_subnet" {
  compartment_id = "${var.compartment_ocid}"
  vcn_id = "${oci_core_virtual_network.demo_vcn.id}"
  display_name = "demo-subnet"
  availability_domain = "${local.ad_1_name}"
  cidr_block = "192.168.10.0/30"
  route_table_id = "${oci_core_route_table.demo_rt.id}"
  security_list_ids = ["${oci_core_security_list.demo_sl.id}"]
  dhcp_options_id = "${oci_core_virtual_network.demo_vcn.default_dhcp_options_id}"
  dns_label = "demosubnet"
}

As you can see, there are some interdependencies between the various resource objects. For example, the subnet object references the route table object we’ve defined in the same file using its id (OCID). You can read more on HCL interpolation here.

You can find the entire vcn.tf file here.

Availability Domain

A subnet (and consequently the compute instances within it) has to be explicitly created in one of the availability domains that make up the region. You can think of an availability domain as if it were a single data center, and a region as a mesh of interconnected data centers in close, but still safely isolated, proximity.

If you look closer at the demo_subnet resource from the previous section, you will discover that the availability_domain field uses a reference to a “local” value.

  1. Create a new configuration file called locals.tf (you can use a different name, but make sure you use the .tf extension)

Add these two sections to locals.tf:

# locals.tf 1/2
data "oci_identity_availability_domains" "ads" {
  compartment_id = "${var.tenancy_ocid}"
}
locals {
  ad_1_name = "${lookup(data.oci_identity_availability_domains.ads.availability_domains[0],"name")}"
  ad_2_name = "${lookup(data.oci_identity_availability_domains.ads.availability_domains[1],"name")}"
  ad_3_name = "${lookup(data.oci_identity_availability_domains.ads.availability_domains[2],"name")}"
}

The code above makes Terraform query the OCI REST API for the list of availability domains and extract their names into local values. We can then reference these values in any .tf file, as we did in vcn.tf using ${local.ad_1_name}.

Compute Instance

There are two families of compute instances on Oracle Cloud Infrastructure:

  • Bare Metal Hosts
  • Virtual Machines

Bare Metal Hosts are powered by dedicated, single-tenant hardware with no hypervisor on board. They are very powerful and can provide a good foundation for large-scale enterprise computing. At the time of writing, the smallest bare metal server uses 36 OCPUs (== 72 vCPUs). Well… you do not always need such a mighty server, do you? Virtual machines, on the other hand, are powered by multi-tenant, hypervisor-managed servers. As a result, you can provision a lightweight virtual machine starting from just 1 OCPU (== 2 vCPUs). A Shape defines the profile of hardware resources of a compute instance. For example, the VM.Standard2.1 Shape delivers 1 OCPU and 15 GB of memory.

An Image is a template that specifies the instance’s preinstalled software, including the Operating System. It is used to initialize the boot volume that gets associated with the instance during provisioning. Oracle Cloud Infrastructure provides default images with various Operating Systems. You can create your own custom images, usually on top of the existing images. Let’s fetch the OCID of a particular CentOS image and save it as a local value. Add these two sections to locals.tf:

# locals.tf 2/2
data "oci_core_images" "centos_linux_image" {
  compartment_id = "${var.tenancy_ocid}"
  operating_system = "CentOS"
}
locals {
  centos_linux_image_ocid = "${lookup(data.oci_core_images.centos_linux_image.images[0],"id")}"
}

You can find the entire locals.tf file here.

A Linux-based compute instance requires an SSH public key to enable remote access. Please prepare a key pair before proceeding. See: SSH Keypair recipe, if needed. This guide assumes you store the public key in ~/.ssh/oci_id_rsa.pub

Oracle Cloud Infrastructure compute instances can use Cloud-Init user data to perform the initial configuration immediately after provisioning, in an automated way. Let’s prepare a cloud-config file, ./cloud-init/vm.cloud-config.
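
The actual vm.cloud-config file is available in the repository linked later in this recipe. A minimal sketch of what such a file could contain is shown below, assuming we simply want nginx to serve a static page on port 80; the commands are an assumption, not necessarily the repository’s exact contents:

#cloud-config
# Sketch only: install nginx from EPEL, host a static page and open port 80.
runcmd:
  - yum install -y epel-release
  - yum install -y nginx
  - echo "Hello from OCI" > /usr/share/nginx/html/index.html
  - systemctl start nginx
  - firewall-cmd --permanent --add-port=80/tcp
  - firewall-cmd --reload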

Now we are ready to define the very first instance.

  1. Create a new configuration file called compute.tf (you can use a different name, but make sure you use the .tf extension)
# compute.tf 1/1
resource "oci_core_instance" "demo_vm" {
  compartment_id = "${var.compartment_ocid}"
  display_name = "demo-vm"
  availability_domain = "${local.ad_1_name}"

  source_details {
    source_id = "${local.centos_linux_image_ocid}"
    source_type = "image"
  }
  shape = "VM.Standard2.1"
  create_vnic_details {
    subnet_id = "${oci_core_subnet.demo_subnet.id}"
    display_name = "primary-vnic"
    assign_public_ip = true
    private_ip = "192.168.10.2"
    hostname_label = "michalsvm"
  }
  metadata {
    ssh_authorized_keys = "${file("~/.ssh/oci_id_rsa.pub")}"
    user_data = "${base64encode(file("./cloud-init/vm.cloud-config"))}"
  }
  timeouts {
    create = "5m"
  }
}

You can find the entire compute.tf file here.

Infrastructure Code

Let’s revise the infrastructure code we’ve created. We have our infrastructure resource definitions (network and compute) distributed across a few .tf files in the same folder. Additionally, we have a cloud-config file with initialization commands for the compute instance and, in a separate directory, the public SSH key that enables remote access to the compute instance.

~/git/blog-code/oci-08> tree
.
├── cloud-init
│   └── vm.cloud-config
├── compute.tf
├── locals.tf
├── provider.tf
└── vcn.tf

~/.ssh> tree
.
├── known_hosts
├── oci_id_rsa
└── oci_id_rsa.pub

This is the same folder in which you’ve already run terraform init as described at the top of this recipe.

You can find the Terraform code I am presenting in this chapter here.

Provisioning infrastructure using Terraform

Provisioning a cloud infrastructure using Terraform is very easy. You just need to issue a single command: terraform apply. Terraform will compare the definitions from the .tf files with the tfstate file, if any, and prepare a plan. You will be able to review and accept the plan. Finally, Terraform will issue a sequence of OCI REST API calls to orchestrate the creation and provisioning of the cloud-based resources.

~/git/blog-code/oci-08> terraform apply

...

Plan: 6 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

...

oci_core_instance.demo_vm: Still creating... (10s elapsed)
oci_core_instance.demo_vm: Still creating... (20s elapsed)
oci_core_instance.demo_vm: Still creating... (30s elapsed)

oci-08-step1

oci_core_instance.demo_vm: Creation complete after 1m36s (ID: ocid1.instance.oc1.eu-frankfurt-1.abthe...m2ij7nq2dh753dkzegrzmeuf3pzzbuojzc5dza)

Apply complete! Resources: 6 added, 0 changed, 0 destroyed.

oci-08-step2

You should soon see your first instance, created with Terraform, up and running.

Please go to the Console or use the CLI to identify the ephemeral public IP address assigned to the VM.

You will still need to wait a few more seconds until the boot process completes and the SSH daemon starts accepting connections. This is how you connect to the machine:

$ ssh -i ~/.ssh/oci_id_rsa opc@130.61.62.137
The authenticity of host '130.61.62.137 (130.61.62.137)' can't be established.
ECDSA key fingerprint is SHA256:tKs9JT5ubEVtBuKdKqux5ckktcLPRBrwWvWsjUaec4Q.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '130.61.62.137' (ECDSA) to the list of known hosts.
-bash-4.2$ sudo systemctl status nginx
 nginx.service - The nginx HTTP and reverse proxy server
   Loaded: loaded (/usr/lib/systemd/system/nginx.service; disabled; vendor preset: disabled)
   Active: active (running) since Tue 2018-10-30 18:48:19 GMT; 2s ago

Finally, let’s check whether we can access the web server:
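
If you prefer the command line to a browser, a quick check against the ephemeral public IP from the previous step could look like this (you should get an HTTP 200 response from nginx):

$ curl -I http://130.61.62.137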

oci-08-step3

You can find how to launch the same instance using CLI here.

You can find the Terraform code I presented in this chapter here.

OCI #6 – Launching Compute instance using CLI

This recipe shows how to provision a new compute instance using Oracle Cloud Infrastructure CLI.

Before you start, you must install and configure Oracle Cloud Infrastructure CLI. See: CLI Setup recipe, if needed.

Target Infrastructure

In this short tutorial we are going to provision a new compute instance on Oracle Cloud Infrastructure. The instance will be a lightweight Linux-based virtual machine with just 1 OCPU (== 2 vCPUs) and 15 GB of memory. We will place the instance in a public subnet and assign an ephemeral public IP address. In this way, the virtual machine will be accessible from the Internet. To demonstrate it, we will enable a web server, host a static page and open port 80. All steps will be performed with the Oracle Cloud Infrastructure CLI.

oci-06-infra

Compartments

Oracle Cloud Infrastructure resources are grouped together in Compartments. Every OCI CLI command must know the compartment in which it is to perform its operations. Most often you will define a default compartment for the CLI. You do this in the ~/.oci/oci_cli_rc file. See: CLI Setup recipe, if needed.
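
For reference, a default compartment entry in ~/.oci/oci_cli_rc can be as simple as the snippet below (the OCID is a placeholder):

[DEFAULT]
compartment-id = {put-here-the-compartment-ocid}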

This is the quickest way to display the name of the default Compartment that CLI is configured to work with:

oci iam compartment get --output table --query "data.{CompartmentName:\"name\"}"
+-----------------+
| CompartmentName |
+-----------------+
| Sandbox         |
+-----------------+

Networking

Cloud-based compute instances that are spread across bare metal servers and numerous virtual machines require interconnectivity. Although cloud networking, as seen by the user, is rather simplified and flattened, especially when compared to traditional on-premises topologies, you still need to design and roll out some networking configuration. In this way, you can decide which instances are accessible from the Internet and which aren’t. You can group related instances together in subnets and define security rules that allow access on selected ports.

Virtual Cloud Network

A Virtual Cloud Network (VCN) is a software-defined network that provides a contiguous IPv4 address block, firewall and routing rules. A VCN is subdivided into multiple, non-overlapping VCN subnets.

This is how you create a VCN using OCI CLI:

$ oci network vcn create --cidr-block 192.168.10.0/24 --display-name demovcn --dns-label demovcn
{
  "data": {
    "cidr-block": "192.168.10.0/24",
    "compartment-id": "ocid1.compartment.oc1..aaaaaaaavpjjlshvlm7nh6gxuhbsdzdbvhuiihenvyaqz6o4hrycscjtq75q",
    "default-dhcp-options-id": "ocid1.dhcpoptions.oc1.eu-frankfurt-1.aaaaaaaaklavzudx7vkb2pi42sx72zlws4mcsmgalbojbd2pqq7hvjhb2zfa",
    "default-route-table-id": "ocid1.routetable.oc1.eu-frankfurt-1.aaaaaaaamscrcftelrikngjckzje5fnqqoxbw7opmtdo55banhayrlajv75q",
    "default-security-list-id": "ocid1.securitylist.oc1.eu-frankfurt-1.aaaaaaaalq7jklywlm4qywy2smblsdcxqbadtk5asqdvkszqigs3uqnxhlna",
    "defined-tags": {},
    "display-name": "demovcn",
    "dns-label": "demovcn",
    "freeform-tags": {},
    "id": "ocid1.vcn.oc1.eu-frankfurt-1.aaaaaaaaazy45d63g6r7l7uvytioflhceuq6xu2fdjg6d6bptrhmckns6rhq",
    "lifecycle-state": "AVAILABLE",
    "time-created": "2018-10-22T21:51:26.228000+00:00",
    "vcn-domain-name": "demovcn.oraclevcn.com"
  },
  "etag": "339cc316"
}
$ oci network vcn list --output table --query "data [*].{Name:\"display-name\", CIDR:\"cidr-block\", Domain:\"vcn-domain-name\"}"
+-----------------+-----------------------+---------+
| CIDR            | Domain                | Name    |
+-----------------+-----------------------+---------+
| 192.168.10.0/24 | demovcn.oraclevcn.com | demovcn |
+-----------------+-----------------------+---------+
oci-06-step1

Internet Gateway

The traffic between instances in your VCN and the Internet goes through an Internet Gateway (IGW). An IGW exists in the context of your VCN. It can be disabled to immediately isolate your cloud resources from the Internet and re-enabled once the underlying problem has been solved.

This is how you create a VCN Internet Gateway using OCI CLI. The first command queries for the OCID of the previously created VCN and saves the result in a variable. The variable is then used in the command that creates the IGW:

$ vcnOCID=`oci network vcn list --query "data [?\"display-name\"=='demovcn'].{OCID:\"id\"}" | grep OCID | awk -F'[\"|\"]' '{print $4}'`
$ echo $vcnOCID
ocid1.vcn.oc1.eu-frankfurt-1.aaaaaaaaazy45d63g6r7l7uvytioflhceuq6xu2fdjg6d6bptrhmckns6rhq
$ oci network internet-gateway create --vcn-id $vcnOCID --display-name demoigw --is-enabled true > /dev/null

oci-06-step2

Subnet

Compute instances must be launched within subnets. Before we create a subnet, we are going to prepare a Route Table and a Security List. The Route Table will direct the outgoing traffic through the Internet Gateway we’ve just created. The Security List will contain Security Rules that define what kind of ingress and egress traffic is allowed for the subnet. You can think of Security Rules as an additional layer of a virtual firewall.

This is how you create a simple Route Table that directs the outgoing traffic to the Internet Gateway:

$ igwOCID=`oci network internet-gateway list --vcn-id $vcnOCID --query "data [?\"display-name\"=='demoigw'].{OCID:\"id\"}" | grep OCID | awk -F'[\"|\"]' '{print $4}'`
$ echo $igwOCID
ocid1.internetgateway.oc1.eu-frankfurt-1.aaaaaaaaq5hnwmwn47sgdfwjqlmmj4iyg4kzlpzgc756ijgclrmr4apmqu3a
$ oci network route-table create --vcn-id $vcnOCID --display-name publicrt --route-rules "[{\"cidrBlock\":\"0.0.0.0/0\", \"networkEntityId\":\"$igwOCID\"}]" > /dev/null

oci-06-step3

This is how you create a Security List that allows the inbound traffic on ports 22 and 80 as well as any outbound traffic:

$ egress='[{"destination": "0.0.0.0/0", "protocol": "all" }]'
$ ingress='[{"source": "0.0.0.0/0", "protocol": "6", "tcpOptions": { "destinationPortRange": {"max": 22, "min": 22} } }, {"source": "0.0.0.0/0", "protocol": "6", "tcpOptions": { "destinationPortRange": {"max": 80, "min": 80} } }]'
$ oci network security-list create --vcn-id $vcnOCID --display-name publicsl --egress-security-rules "$egress" --ingress-security-rules "$ingress" > /dev/null

oci-06-step4

oci-06-step5

This is how you create a subnet:

$ rtOCID=`oci network route-table list --vcn-id $vcnOCID --query "data [?\"display-name\"=='publicrt'].{OCID:\"id\"}" | grep OCID | awk -F'[\"|\"]' '{print $4}'`
$ echo $rtOCID
ocid1.routetable.oc1.eu-frankfurt-1.aaaaaaaa2o4vufktzoslnrxamo6ydqlgtbahs2vwttsl4a3brbs2dz5rjgzq
$ slOCID=`oci network security-list list --vcn-id $vcnOCID --query "data [?\"display-name\"=='publicsl'].{OCID:\"id\"}" | grep OCID | awk -F'[\"|\"]' '{print $4}'`
$ echo $slOCID
ocid1.securitylist.oc1.eu-frankfurt-1.aaaaaaaapfspvn5uvqvoqeop7rgb2x744ldobxh3vz5cwjvzx2p6h2gbhaba
$ oci network subnet create --vcn-id $vcnOCID --display-name demosubnet --dns-label subnet  --cidr-block 192.168.10.0/30 --prohibit-public-ip-on-vnic false --availability-domain "feDV:EU-FRANKFURT-1-AD-1" --route-table-id "$rtOCID" --security-list-ids "[\"$slOCID\"]" > /dev/null

oci-06-step6

Compute instance

There are two families of compute instances on Oracle Cloud Infrastructure:

  • Bare Metal Hosts
  • Virtual Machines

Bare Metal Hosts are powered by dedicated, single-tenant hardware with no hypervisor on board. They are very powerful and can provide a good foundation for large-scale enterprise computing. At the time of writing, the smallest bare metal server uses 36 OCPUs (== 72 vCPUs). Well… you do not always need such a mighty server, do you? Virtual machines, on the other hand, are powered by multi-tenant, hypervisor-managed servers. As a result, you can provision a lightweight virtual machine starting from just 1 OCPU (== 2 vCPUs). A Shape defines the profile of hardware resources of a compute instance. For example, the VM.Standard2.1 Shape delivers 1 OCPU and 15 GB of memory.

An Image is a template that specifies the instance’s preinstalled software, including the Operating System. It is used to initialize the boot volume that gets associated with the instance during provisioning. Oracle Cloud Infrastructure provides default images with various Operating Systems. You can create your own custom images, usually on top of the existing images. Let’s fetch the OCID of the newest CentOS 7 image and save it to a variable:

$ imageOCID=`oci compute image list --operating-system "CentOS" --operating-system-version 7 --sort-by TIMECREATED --query "data [0].{DisplayName:\"display-name\", OCID:\"id\"}" | grep OCID | awk -F'[\"|\"]' '{print $4}'`
$ echo $imageOCID
ocid1.image.oc1.eu-frankfurt-1.aaaaaaaav3frw3wod63glppeb2hhh4ao7c6kntgt5jvxy4imiihclgkta7ja

An instance has to exist within a subnet. Let’s fetch the OCID of the subnet we’ve created and save it to a variable:

$ subnetOCID=`oci network subnet list --vcn-id $vcnOCID --query "data [?\"display-name\"=='demosubnet'].{OCID:\"id\"}" | grep OCID | awk -F'[\"|\"]' '{print $4}'`
$ echo $subnetOCID
ocid1.subnet.oc1.eu-frankfurt-1.aaaaaaaanx3hypjebusub6w5lavut3tmkknmvkn4bxdfcgk5l677haurswaa

A Linux-based compute instance requires an SSH public key to enable remote access. Please prepare a key pair before proceeding. See: SSH Keypair recipe, if needed. This guide assumes you store the public key in ~/.ssh/oci_id_rsa.pub

Oracle Cloud Infrastructure compute instances can use Cloud-Init user data to perform the initial configuration immediately after provisioning, in an automated way. Let’s prepare a cloud-config file, ~/cloud-init/michalsvm.cloud-config (it can contain the same kind of content as the vm.cloud-config file sketched in the previous recipe).

Now we are ready to launch the very first instance. This is how you do it using Oracle Cloud Infrastructure CLI:

oci compute instance launch --display-name michalsvm --availability-domain "feDV:EU-FRANKFURT-1-AD-1" --subnet-id "$subnetOCID" --private-ip 192.168.10.2 --image-id "$imageOCID" --shape VM.Standard2.1 --ssh-authorized-keys-file ~/.ssh/oci_id_rsa.pub --user-data-file ~/cloud-init/michalsvm.cloud-config --wait-for-state RUNNING > /dev/null

It should take up to a minute or two to complete the provisioning process. As mentioned before, our instance will get an ephemeral public IP address from the OCI public IPv4 address pool. Let’s find out the address our instance was given:

$ vmOCID=`oci compute instance list --display-name michalsvm --lifecycle-state RUNNING | grep \"id\" | awk -F'[\"|\"]' '{print $4}'`
$ echo $vmOCID
ocid1.instance.oc1.eu-frankfurt-1.abtheljtf6ta2df3bmq5h3v3acaa3muqojcjplldv2mam6irvwoucyg4vxua
$ oci compute instance list-vnics --instance-id "$vmOCID" | grep public-ip | awk -F'[\"|\"]' '{print $4}'
130.61.93.17

You will still need to wait a few more seconds until the boot process completes and the SSH daemon starts accepting connections. This is how you connect to the machine:

$ ssh -i .ssh/oci_id_rsa opc@130.61.93.17
The authenticity of host '130.61.93.17 (130.61.93.17)' can't be established.
ECDSA key fingerprint is SHA256:tKs9JT5ubEVtBuKdKqux5ckktcLPRBrwWvWsjUaec4Q.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '130.61.93.17' (ECDSA) to the list of known hosts.
-bash-4.2$

oci-06-step7

Finally, let’s check whether we can access the web server:

oci-06-step8