OCI #11 – Managed Kubernetes (Oracle Kubernetes Engine)

This recipe shows how to provision a new Oracle Container Engine for Kubernetes cluster, the managed Kubernetes platform running on Oracle Cloud Infrastructure.

Before you begin, make sure that you have installed and configured:
+ Terraform. See: Terraform Setup recipe, if needed.
+ CLI. See: CLI Setup recipe, if needed.

Containers

The story of production-grade Linux containers at mass scale really began with this short demo back in 2013. Linux container engines use Linux kernel features (cgroups, namespaces) to isolate applications that run inside logical entities called containers, which are built from multi-layered templates called images. You can watch this comprehensive introduction video. Docker is the most popular Linux container engine, but there are also somewhat less mature alternatives such as rkt or containerd, both incubating under the CNCF.

Container Orchestration

Containerised applications are usually small, and each of them very often performs a single, granular and specialised task. As a result, building a complete, sometimes multi-domain solution means gathering a large number of diverse microservices, distributed with redundant copies across multiple worker nodes so that they operate in a fault-tolerant setup. Some of these microservices have to interact with each other, consume external services or expose their own service endpoints. The more services you host, the greater the need for a container orchestration engine and an ecosystem of tools and utilities that support the entire container-driven platform.

Kubernetes

In 2014, Google created Kubernetes, an open source project based on their in-house container orchestration engine called Borg. You can read more here. Subsequently, the Cloud Native Computing Foundation was founded to support, promote and incubate the entire ecosystem around containers. Kubernetes is considered one of the most important projects, if not the most important one, in that ecosystem.

Managed Kubernetes

As software becomes more and more multi-layered and sophisticated, professionals responsible for maintaining software solutions in their organizations look for ways to keep things as simple as possible. One way to do that is to use managed cloud-based platforms. A managed platform takes a great deal of the operations burden away from your team by delegating the repetitive housekeeping activities to your cloud provider. Oracle Cloud Infrastructure features a managed Kubernetes platform called Oracle Container Engine for Kubernetes, also known as Oracle Kubernetes Engine and abbreviated OKE. You can read more about it here.

Target Infrastructure

In this short tutorial, we are going to provision a managed Kubernetes cluster with a node pool distributed across two availability domains. We will start with two (2) Linux-based worker nodes (virtual machines), but the general idea behind any kind of pool is that you should be able to scale this number up or down, at any time, based on your needs. To remain in control over various interconnectivity aspects, we are going to define custom networking resources such as a Virtual Cloud Network (VCN) and dedicated VCN subnets for the worker nodes and load balancers. The Kubernetes cluster will be provisioned together with a two-instance node pool on top of this custom networking. It may sound a bit complex, but it is not.

oci-11-infra

Provisioning Infrastructure and Managed Kubernetes

Provisioning a managed Kubernetes cluster together with the cloud networking resources can be done using a set of simple Terraform configurations, which I’ve made available here on my GitHub account. I’ve structured the configuration files in such a way that you can copy the k8s module directory and use it in your own project. The sample module block that references the module can be found in the modules.tf file. The project structure looks like this:

/home/michal/git/blog-code/oci-11/infrastructure$ tree
.
├── k8s
│   ├── cluster.tf
│   ├── vars.tf
│   └── vcn.tf
├── locals.tf
├── modules.tf
├── provider.tf
├── vars.tf
└── vcn.tf

The main directory of your Terraform project is typically called the root module. The provider.tf file specifies the oci provider and, in this way, tells Terraform to use the plugin for Oracle Cloud Infrastructure. The variables used in the provider.tf file are defined in the vars.tf file, together with another variable (vcn_cidr) that describes the address range for the Virtual Cloud Network (VCN). The VCN cloud resource as well as an Internet Gateway cloud resource are defined in the vcn.tf file. The locals.tf file contains a data source that fetches the Availability Domain names and stores them as local values. Finally, the modules.tf file uses a module block to include the resources from the k8s folder as a Terraform module. Variables and attributes from the root module, such as the VCN OCID or the internet gateway OCID, are passed as input parameters to the k8s module, as sketched below.
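
For orientation, the module block in modules.tf follows this pattern (a sketch only; the exact input variable and resource names may differ, so check the repository):

# modules.tf (sketch; input variable and resource names are illustrative)
module "k8s" {
  source           = "./k8s"
  compartment_ocid = "${var.compartment_ocid}"
  vcn_ocid         = "${oci_core_virtual_network.vcn.id}"
  igw_ocid         = "${oci_core_internet_gateway.igw.id}"
  ad_1_name        = "${local.ad_1_name}"
  ad_2_name        = "${local.ad_2_name}"
}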

The k8s module directory contains the configuration files used by Terraform to create the required subnets and to provision the Kubernetes cluster, including its worker node pool and the subnets used by the cluster’s load balancers. The k8s/vars.tf file defines the input variables as well as variables local to the module. The k8s/vcn.tf file defines a route table, two security lists and four (4) subnets: one pair for the worker nodes and one pair for the load balancers. As you can imagine, each pair is spread across two availability domains. The networking resources are created within the VCN defined in the root module, and a route rule references the internet gateway that has also been defined there. The Kubernetes cluster and the node pool are defined in the k8s/cluster.tf file.
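
To give you an idea of the OKE-specific resources, a condensed sketch of k8s/cluster.tf could look like the one below; the Kubernetes version, node image and shape values are illustrative, so check the repository for the exact configuration:

# k8s/cluster.tf (condensed sketch; version, image and shape values are illustrative)
resource "oci_containerengine_cluster" "k8s_cluster" {
  compartment_id     = "${var.compartment_ocid}"
  name               = "k8s-cluster"
  kubernetes_version = "v1.11.5"
  vcn_id             = "${var.vcn_ocid}"
  options {
    service_lb_subnet_ids = [
      "${oci_core_subnet.oke_loadbalancers_ad1_net.id}",
      "${oci_core_subnet.oke_loadbalancers_ad2_net.id}",
    ]
  }
}

resource "oci_containerengine_node_pool" "k8s_nodepool" {
  compartment_id      = "${var.compartment_ocid}"
  cluster_id          = "${oci_containerengine_cluster.k8s_cluster.id}"
  name                = "k8s-nodepool"
  kubernetes_version  = "v1.11.5"
  node_image_name     = "Oracle-Linux-7.5"
  node_shape          = "VM.Standard2.1"
  quantity_per_subnet = 1

  subnet_ids = [
    "${oci_core_subnet.oke_workers_ad1_net.id}",
    "${oci_core_subnet.oke_workers_ad2_net.id}",
  ]
}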

Let’s provision the cluster:

  1. Clone the blog-code repository from my GitHub account, if you haven’t done it yet.
  2. Make sure you’ve set all required Terraform variables. For example, you can do it using TF_VAR_ environment variables. You can issue env | grep TF_VAR_ to list them and check that they are correct. If you are not sure what to do, see: Terraform Setup recipe.
  3. Initialize the working directory. Terraform will download the newest version of the provider plugin, unless already installed.

    ~/git/blog-code/oci-11> terraform init
    Initializing provider plugins...
    - Checking for available provider plugins on https://releases.hashicorp.com...
    - Downloading plugin for provider "oci" (3.11.2)...
    
    The following providers do not have any version constraints in configuration,
    so the latest version was installed.
    
    To prevent automatic upgrades to new major versions that may contain breaking
    changes, it is recommended to add version = "..." constraints to the
    corresponding provider blocks in configuration, with the constraint strings
    suggested below.
    
    * provider.oci: version = "~> 3.11"
    
    Terraform has been successfully initialized!
  4. Provision the cloud resources using a single Terraform command:
    $ terraform apply --auto-approve
    ...
    module.k8s.oci_core_subnet.oke_loadbalancers_ad2_net: Creation complete after 1s
    module.k8s.oci_core_subnet.oke_workers_ad2_net: Creation complete after 1s
    module.k8s.oci_core_subnet.oke_loadbalancers_ad1_net: Creation complete after 1s
    module.k8s.oci_containerengine_cluster.k8s_cluster: Creating...
    module.k8s.oci_core_subnet.oke_workers_ad1_net: Creation complete after 2s
    module.k8s.oci_containerengine_cluster.k8s_cluster: Creation complete after 6m11s
    
    module.k8s.oci_containerengine_node_pool.k8s_nodepool: Creating...
    module.k8s.oci_containerengine_node_pool.k8s_nodepool: Creation complete after 2s
    
    Apply complete! Resources: 11 added, 0 changed, 0 destroyed.
    

Even though Terraform reports the node pool provisioning as complete, it is not quite finished yet. The instances in your node pool still have to boot and install the required software, which can take a few minutes. Before you begin working with your cluster, make sure that both worker nodes are ACTIVE, as in the screenshot below:

oci-11-nodepool

The worker nodes are also regular compute instances, running in the same compartment in which you’ve provisioned the node pool. You can see their details in the OCI Console:

oci-11-nodepool-instances

Interacting with the managed cluster

To interact with the cluster, you will use the kubectl tool. First, you need to download the kubeconfig file that has been generated for the newly created cluster. To do so, execute the OCI CLI commands from the snippet below:

# oci/kube-dashboard.sh 1/2
CLUSTER_OCID={put-here-the-cluster-ocid}
REGION={put-here-your-region}
mkdir -p ~/.kube
oci ce cluster create-kubeconfig --cluster-id $CLUSTER_OCID --file ~/.kube/config --region "$REGION"
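
The snippet above expects the cluster OCID. If you do not have it at hand, you can look it up by listing the clusters in your compartment:

$ oci ce cluster list --compartment-id {put-here-the-compartment-ocid}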

This is how you test the connectivity:

$ kubectl cluster-info
Kubernetes master is running at https://czgembsgi4t.eu-frankfurt-1.clusters.oci.oraclecloud.com:6443
KubeDNS is running at https://czgembsgi4t.eu-frankfurt-1.clusters.oci.oraclecloud.com:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
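
You can also list the worker nodes; once both of them report the Ready status, the cluster is fully operational:

$ kubectl get nodes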

Launching Kubernetes Dashboard

You can access the Kubernetes Dashboard locally by tunnelling your interactions with the cluster’s API through kubectl proxy.

# oci/kube-dashboard.sh 2/2
$ export KUBECONFIG=~/.kube/config
$ kubectl proxy &
[1] 1219
Starting to serve on 127.0.0.1:8001

Now you should be able to access the dashboard in your web browser under this link: http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

To sign in, select the Kubeconfig mode and use the same config file you’ve downloaded with the oci ce cluster create-kubeconfig command.

oci-11-dashboard-welcome

oci-11-dashboard

You can find the code I presented in this chapter here.

OCI #8 – Launching a Compute Instance using Terraform

This recipe shows how to provision a new compute instance using the Oracle Cloud Infrastructure Terraform provider.

Before you start, you must install and configure Terraform for OCI. See: Terraform Setup recipe, if needed.

Target Infrastructure

In this short tutorial, we are going to provision a new compute instance on Oracle Cloud Infrastructure. The instance will be a lightweight Linux-based virtual machine with just 1 OCPU (== 2 vCPUs) and 15 GB of memory. We will place the instance in a public subnet and assign an ephemeral public IP address, so the virtual machine will be accessible from the Internet. To demonstrate this, we will enable a web server, host a static page and open port 80. All steps will be performed with the Oracle Cloud Infrastructure Terraform provider.

oci-06-infra

Infrastructure Code project

We are going to define and manage the infrastructure as code using Terraform. Let’s prepare a new directory and initialize the Terraform project:

  1. Create a new infrastructure code project directory and step into the folder
  2. Define the provider configuration (provider.tf) together with the variables that will be mapped from the operating system environment variables we’ve defined during setup. If you haven’t done it yet, see: Terraform Setup recipe:
    # provider.tf 1/2
    variable "tenancy_ocid" {}
    variable "user_ocid" {}
    variable "fingerprint" {}
    variable "region" {}
    variable "private_key_path" {}
    variable "private_key_password" {}
    
    provider "oci" {
      tenancy_ocid = "${var.tenancy_ocid}"
      user_ocid = "${var.user_ocid}"
      fingerprint = "${var.fingerprint}"
      region = "${var.region}"
      private_key_path = "${var.private_key_path}"
      private_key_password = "${var.private_key_password}"
    }
  3. Initialize the working directory. Terraform will download the newest version of the provider plugin, unless already installed.

    ~/git/blog-code/oci-08> terraform init
    Initializing provider plugins...
    - Checking for available provider plugins on https://releases.hashicorp.com...
    - Downloading plugin for provider "oci" (3.5.0)...
    
    The following providers do not have any version constraints in configuration,
    so the latest version was installed.
    
    To prevent automatic upgrades to new major versions that may contain breaking
    changes, it is recommended to add version = "..." constraints to the
    corresponding provider blocks in configuration, with the constraint strings
    suggested below.
    
    * provider.oci: version = "~> 3.5"
    
    Terraform has been successfully initialized!

Compartments

Oracle Cloud Infrastructure resources are grouped together in Compartments. The resources you define in the infrastructure code very often require that you specify the compartment in which the resources are created.

  1. Sign in to OCI Console
  2. Go to Identity ➟ Compartments and copy the OCID of the compartment you would like to work with. If needed, create a new Compartment.
  3. Please add the following variable to your environment (for example in ~/.profile):
    export TF_VAR_compartment_ocid={put-here-the-compartment-ocid}
  4. Read the newly added variable to the current bash session:

    $ source ~/.profile
  5. Add a new Terraform variable to provider.tf. In this way, you will capture the Compartment OCID in your Terraform project:

    # provider.tf 2/2
    variable "compartment_ocid" {}

You can find the entire provider.tf file here.

Networking

Cloud-based compute instances spread across bare metal servers and numerous virtual machines require interconnectivity. Although cloud networking, as seen by the user, is rather simplified and flattened, especially when compared to traditional on-premises topologies, you still need to design and roll out some networking configuration. In this way, you decide which instances are accessible from the Internet and which are not. You can group related instances into subnets and define security rules that allow access on selected ports.

  1. Go to the infrastructure code project directory
  2. Create a new configuration file called vcn.tf (you can use a different name, but make sure you use the .tf extension)

Virtual Cloud Network

A Virtual Cloud Network (VCN) is a software-defined network that provides a contiguous IPv4 address block, firewall rules and routing rules. A VCN is subdivided into multiple, non-overlapping subnets.

This is how you define a VCN in Terraform HCL. Please add this section to the vcn.tf configuration file:

# vcn.tf 1/5
resource "oci_core_virtual_network" "demo_vcn" {
  compartment_id = "${var.compartment_ocid}"
  display_name = "demo-vcn"
  cidr_block = "192.168.10.0/24"
  dns_label = "demovcn"
}

Internet Gateway

The traffic between instances in your VCN and the Internet goes through an Internet Gateway (IGW). An IGW exists in the context of your VCN. It can be disabled to immediately isolate your cloud resources from the Internet and re-enabled later, for example once a security issue has been resolved.

This is how you define an IGW in Terraform HCL. Please add this section to the vcn.tf configuration file:

# vcn.tf 2/5
resource "oci_core_internet_gateway" "demo_igw" {
  compartment_id = "${var.compartment_ocid}"
  vcn_id = "${oci_core_virtual_network.demo_vcn.id}"
  display_name = "demo-igw"
  enabled = "true"
}

Subnet

Compute instances must be launched within subnets. Before we create a subnet, we are going to prepare a Route Table and a Security List. The Route Table will direct the outgoing traffic through the Internet Gateway we’ve just defined. The Security List will contain Security Rules that define what kind of ingress and egress traffic is allowed for the subnet. You can think of Security Rules as an additional layer of a virtual firewall.

This is how you define a simple Route Table that directs the entire (0.0.0.0/0) outgoing traffic to the Internet Gateway:

# vcn.tf 3/5
resource "oci_core_route_table" "demo_rt" {
  compartment_id = "${var.compartment_ocid}"
  vcn_id = "${oci_core_virtual_network.demo_vcn.id}"
  display_name = "demo-rt"
  route_rules {
    destination = "0.0.0.0/0"
    network_entity_id = "${oci_core_internet_gateway.demo_igw.id}"
  }
}

This is how you define a Security List that allows the inbound traffic on ports 22 and 80 as well as any outbound traffic:

# vcn.tf 4/5
resource "oci_core_security_list" "demo_sl" {
  compartment_id = "${var.compartment_ocid}"
  vcn_id = "${oci_core_virtual_network.demo_vcn.id}"
  display_name = "demo-sl"
  egress_security_rules = [
    { destination = "0.0.0.0/0" protocol = "all"}
  ]
  ingress_security_rules = [
    { protocol = "6", source = "0.0.0.0/0", tcp_options { "max" = 22, "min" = 22 }},
    { protocol = "6", source = "0.0.0.0/0", tcp_options { "max" = 80, "min" = 80 }}
  ]
}

Finally, this is how you define a Subnet in Terraform HCL:

# vcn.tf 5/5
resource "oci_core_subnet" "demo_subnet" {
  compartment_id = "${var.compartment_ocid}"
  vcn_id = "${oci_core_virtual_network.demo_vcn.id}"
  display_name = "demo-subnet"
  availability_domain = "${local.ad_1_name}"
  cidr_block = "192.168.10.0/30"
  route_table_id = "${oci_core_route_table.demo_rt.id}"
  security_list_ids = ["${oci_core_security_list.demo_sl.id}"]
  dhcp_options_id = "${oci_core_virtual_network.demo_vcn.default_dhcp_options_id}"
  dns_label = "demosubnet"
}

As you can see, there are some interdependencies between the resource objects. For example, the subnet object references the route table object we’ve defined in the same file using its id (OCID). You can read more on HCL interpolation here.

You can find the entire vcn.tf file here.

Availability Domain

A subnet (and consequently the compute instances within it) has to be explicitly created in one of the availability domains that the region is built of. You can think of an availability domain as a single data center, and of a region as a mesh of interconnected data centers in close, but still safely isolated, proximity.

If you look closer at the demo_subnet resource from the previous section, you will discover that the availability_domain field uses a reference to a “local” value.

  1. Create a new configuration file called locals.tf (you can use a different name, but make sure you use the .tf extension)

Add these two sections to locals.tf:

# locals.tf 1/2
data "oci_identity_availability_domains" "ads" {
  compartment_id = "${var.tenancy_ocid}"
}
locals {
  ad_1_name = "${lookup(data.oci_identity_availability_domains.ads.availability_domains[0],"name")}"
  ad_2_name = "${lookup(data.oci_identity_availability_domains.ads.availability_domains[1],"name")}"
  ad_3_name = "${lookup(data.oci_identity_availability_domains.ads.availability_domains[2],"name")}"
}

The code above makes Terraform query the OCI REST API for the list of availability domains and extract their names into local values. We can then reference these values in any .tf file, as we did in vcn.tf using ${local.ad_1_name}.

Compute Instance

There are two families of compute instances on Oracle Cloud Infrastructure:

  • Bare Metal Hosts
  • Virtual Machines

Bare Metal Hosts are powered by dedicated, single-tenant hardware with no hypervisor on board. They are very powerful and can provide a good foundation for large-scale enterprise computing. At the time of writing, the smallest bare metal server uses 36 OCPUs (== 72 vCPUs). Well… you do not always need such a mighty server, do you? Virtual machines, on the other hand, are powered by multi-tenant, hypervisor-managed servers. As a result, you can provision a lightweight virtual machine starting from just 1 OCPU (== 2 vCPUs). A Shape defines the hardware resource profile of a compute instance. For example, the VM.Standard2.1 shape delivers 1 OCPU and 15 GB of memory.
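
If you want to check which shapes are available to your tenancy, you can list them with the CLI (shape availability may vary per region and availability domain):

$ oci compute shape list --compartment-id {put-here-the-compartment-ocid}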

An Image is a template that specifies the instance’s preinstalled software, including the Operating System. It is used to initialize the boot volume that gets associated with the instance during provisioning. Oracle Cloud Infrastructure provides default images with various Operating Systems. You can create your own custom images, usually on top of the existing ones. Let’s fetch the OCID of a CentOS image and save it to a local value. Add these two sections to locals.tf:

# locals.tf 2/2
data "oci_core_images" "centos_linux_image" {
  compartment_id = "${var.tenancy_ocid}"
  operating_system = "CentOS"
}
locals {
  centos_linux_image_ocid = "${lookup(data.oci_core_images.centos_linux_image.images[0],"id")}"
}

You can find the entire locals.tf file here.

A Linux-based compute instance requires an SSH public key to enable remote access. Please prepare a key pair before proceeding. See: SSH Keypair recipe, if needed. This guide assumes you store the public key in ~/.ssh/oci_id_rsa.pub
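
If you do not have a key pair yet, a minimal way to generate one looks like this (the SSH Keypair recipe covers the options in more detail):

$ ssh-keygen -t rsa -b 2048 -f ~/.ssh/oci_id_rsa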

Oracle Cloud Infrastructure compute instances can use Cloud-Init user data to perform the initial configuration that takes place immediately after provisioning in an automated way. Let’s prepare a cloud-config file ./cloud-init/vm.cloud-config:
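
The exact file is part of the code repository linked in this recipe. A minimal sketch that installs nginx (from EPEL), starts it and opens port 80 on the instance’s OS firewall could look like the one below, assuming the CentOS image ships with firewalld:

#cloud-config
# Sketch of ./cloud-init/vm.cloud-config: install and start nginx, open port 80 on the OS firewall
runcmd:
 - yum install -y epel-release
 - yum install -y nginx
 - systemctl start nginx
 - firewall-cmd --permanent --add-port=80/tcp
 - firewall-cmd --reload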

Now we are ready to define the very first instance.

  1. Create a new configuration file called compute.tf (you can use a different name, but make sure you use the .tf extension)
# compute.tf 1/1
resource "oci_core_instance" "demo_vm" {
  compartment_id = "${var.compartment_ocid}"
  display_name = "demo-vm"
  availability_domain = "${local.ad_1_name}"

  source_details {
    source_id = "${local.centos_linux_image_ocid}"
    source_type = "image"
  }
  shape = "VM.Standard2.1"
  create_vnic_details {
    subnet_id = "${oci_core_subnet.demo_subnet.id}"
    display_name = "primary-vnic"
    assign_public_ip = true
    private_ip = "192.168.10.2"
    hostname_label = "michalsvm"
  }
  metadata {
    ssh_authorized_keys = "${file("~/.ssh/oci_id_rsa.pub")}"
    user_data = "${base64encode(file("./cloud-init/vm.cloud-config"))}"
  }
  timeouts {
    create = "5m"
  }
}

You can find the entire compute.tf file here.

Infrastructure Code

Let’s review the infrastructure code we’ve created. We have our infrastructure resource definitions (network and compute) distributed across a few .tf files in the same folder. Additionally, we have a cloud-config file with initialization commands for the compute instance and, in a separate directory, the public SSH key that enables remote access to it.

~/git/blog-code/oci-08> tree
.
├── cloud-init
│   └── vm.cloud-config
├── compute.tf
├── locals.tf
├── provider.tf
└── vcn.tf

~/.ssh> tree
.
├── known_hosts
├── oci_id_rsa
└── oci_id_rsa.pub

This is the same folder in which you’ve already run terraform init as described at the top of this recipe.

You can find the Terraform code I am presenting in this chapter here.

Provisioning infrastructure using Terraform

Provisioning a cloud infrastructure using Terraform is very easy. You just need to issue a single command: terraform apply. Terraform will compare the definitions from the .tf files with the .tfstate file, if any, and prepare a plan. You will be able to review and accept the plan. Finally, Terraform will issue a sequence of OCI REST API calls to orchestrate the provisioning of the cloud resources.

~/git/blog-code/oci-08> terraform apply

...

Plan: 6 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

...

oci_core_instance.demo_vm: Still creating... (10s elapsed)
oci_core_instance.demo_vm: Still creating... (20s elapsed)
oci_core_instance.demo_vm: Still creating... (30s elapsed)

oci-08-step1

oci_core_instance.demo_vm: Creation complete after 1m36s (ID: ocid1.instance.oc1.eu-frankfurt-1.abthe...m2ij7nq2dh753dkzegrzmeuf3pzzbuojzc5dza)

Apply complete! Resources: 6 added, 0 changed, 0 destroyed.

oci-08-step2

You should soon see your first instance, created with Terraform, up and running.

Please go to the Console or use the CLI to identify the ephemeral public IP address assigned to the VM.
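
One way to do it with the CLI is to list the VNICs attached to the instance (Terraform prints the instance OCID at the end of terraform apply) and read the public IP from the output:

$ oci compute instance list-vnics --instance-id {put-here-the-instance-ocid}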

You may still need to wait a few additional seconds until the boot process completes and the SSH daemon starts accepting connections. This is how you connect to the machine:

$ ssh -i ~/.ssh/oci_id_rsa opc@130.61.62.137
The authenticity of host '130.61.62.137 (130.61.62.137)' can't be established.
ECDSA key fingerprint is SHA256:tKs9JT5ubEVtBuKdKqux5ckktcLPRBrwWvWsjUaec4Q.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '130.61.62.137' (ECDSA) to the list of known hosts.
-bash-4.2$ sudo systemctl status nginx
 nginx.service - The nginx HTTP and reverse proxy server
   Loaded: loaded (/usr/lib/systemd/system/nginx.service; disabled; vendor preset: disabled)
   Active: active (running) since Tue 2018-10-30 18:48:19 GMT; 2s ago

Finally, let’s check if we can access the web server:

oci-08-step3
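
You can also verify it from the terminal, using the same ephemeral public IP address as before:

$ curl http://130.61.62.137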

You can find out how to launch the same instance using the CLI here.

You can find the Terraform code I presented in this chapter here.

OCI #7 – Terraform Setup

This recipe shows how to install and configure Terraform on your client machine.

Before you start, you must create a key pair for API Signing Key. See: API Signing Key recipe, if needed.

Oracle Cloud Infrastructure REST API

Oracle Cloud Infrastructure exposes a comprehensive REST API to manage OCI resources and configurations. Every API request must be signed with an Oracle Cloud Infrastructure API signature and sent over HTTPS with TLS 1.2. Signing a request is a non-trivial, multi-step process. This is why you usually use tools like the CLI, Terraform or custom SDK-based programs that encapsulate API calls and sign the requests on your behalf. All these tools eventually make calls to the OCI REST API, which is therefore the ultimate gateway to the cloud management plane.

Terraform

Terraform is a declarative, agentless infrastructure provisioning tool. You declare the infrastructure as code using HCL (HashiCorp Configuration Language), a JSON-like declarative language. For example, this is how you define a sample OCI virtual network:

resource "oci_core_virtual_network" "my_vcn" {
	  cidr_block = "192.168.21.0/24"
	  dns_label = "corefra"
	  compartment_id = "${var.compartment_ocid}"
	  display_name = "core-fra-vcn"
}

Every time you run the terraform apply command, Terraform calculates an execution plan and issues a series of direct API calls to the provider’s cloud management plane. The tool knows what API calls the plan has to eventually produce because it tracks the as-is infrastructure state locally in a .tfstate file. Out of the box, the tool supports a number of cloud providers, including Oracle Cloud Infrastructure.

Installing Terraform

Each Terraform release comes as a single binary file. You can find the binary package for your operating system under this link.

  1. Download and unpack the binary file for your operating system from https://www.terraform.io/downloads.html (a Linux example follows this list).
  2. Execute the following command:
    $ terraform --version
    Terraform v0.11.10
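
On a Linux client, the download-and-unpack step might look like this (the version matches the output above; adjust the target path to your preferences):

$ wget https://releases.hashicorp.com/terraform/0.11.10/terraform_0.11.10_linux_amd64.zip
$ unzip terraform_0.11.10_linux_amd64.zip
$ sudo mv terraform /usr/local/bin/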

You should notice that a new hidden directory gets created (~/.terraform.d).

Configuring Terraform for OCI

Although Terraform supports Oracle Cloud Infrastructure out-of-the-box, each infrastructure project you are working on still has to know the API connection details such as the tenancy OCID, the user OCID, the path to the private key and the fingerprint of the corresponding public key you upload and associate with the OCI user. Terraform is able to map operating system environment variables that follow a specific naming convention (TF_VAR_<name>) to Terraform variables. We are going to leverage this approach when configuring our environment.

  1. Sign in to OCI Console
  2. Go to Identity ➟ Users and note down the OCID of the user on whose behalf the Terraform OCI provider will prepare, sign and send OCI REST API requests.
  3. If you haven’t done it already, please upload the public part of your API Signing Key and note down the fingerprint of the public key.
    See: API Signing Key recipe (“Uploading the public key”), if needed.
  4. Go to Administration ➟ Tenancy Details and note down the OCID of the tenancy.
  5. Please add the following variables to your environment (for example in ~/.profile):
    #Terraform
    export TF_VAR_tenancy_ocid={put-here-the-tenancy-ocid}
    export TF_VAR_user_ocid={put-here-the-user-ocid}
    export TF_VAR_fingerprint={put-here-the-public-key-fingerprint}
    export TF_VAR_region={put-here-the-region} # for example: eu-frankfurt-1
    export TF_VAR_private_key_path={put-here-the-path-to-the-private-key}
    export TF_VAR_private_key_password={put-here-the-private-key-password}
  6. Let’s read the newly added variables to the bash session:

    source ~/.profile

Now you are ready to start defining Terraform projects.

Testing the connectivity

We are going to test the Terraform configuration by preparing a sample project. The project won’t provision anything; it will just read the CentOS images currently available on OCI.

  1. Create a new directory for your project. For example: ~/projects/oci_tf_test
    mkdir -p ~/projects/oci_tf_test
  2. Define the provider configuration (provider.tf) together with the variables that will be mapped from operating system environment variables we’ve defined earlier:

    # ~/projects/oci_tf_test/provider.tf
    variable "tenancy_ocid" {}
    variable "user_ocid" {}
    variable "fingerprint" {}
    variable "region" {}
    variable "private_key_path" {}
    variable "private_key_password" {}
    
    provider "oci" {
      tenancy_ocid = "${var.tenancy_ocid}"
      user_ocid = "${var.user_ocid}"
      fingerprint = "${var.fingerprint}"
      region = "${var.region}"
      private_key_path = "${var.private_key_path}"
      private_key_password = "${var.private_key_password}"
    }
  3. Initialize the working directory. Terraform will download the newest version of the provider plugin.

    ~/projects/oci_tf_test> terraform init
    Initializing provider plugins...
    - Checking for available provider plugins on https://releases.hashicorp.com...
    - Downloading plugin for provider "oci" (3.5.0)...
    
    The following providers do not have any version constraints in configuration,
    so the latest version was installed.
    
    To prevent automatic upgrades to new major versions that may contain breaking
    changes, it is recommended to add version = "..." constraints to the
    corresponding provider blocks in configuration, with the constraint strings
    suggested below.
    
    * provider.oci: version = "~> 3.5"
    
    Terraform has been successfully initialized!
  4. Define another configuration file (data.tf) that will make Terraform fetch the available CentOS images and display them in the output:

    # ~/projects/oci_tf_test/data.tf
    data "oci_core_images" "centos_images" {
      compartment_id = "${var.tenancy_ocid}"
      operating_system = "CentOS"
    }
    output "centos_images" {
      value = "${data.oci_core_images.centos_images.images}"
    }
  5. Run terraform by applying the configuration:

    $ terraform apply

You should see a collection of JSON objects that describe the CentOS images available on OCI.

You can filter the output to see the information you are interested in like this:

$ terraform apply | grep display_name
        display_name = CentOS-7-2018.10.12-0,
        display_name = CentOS-7-2018.09.19-0,
        display_name = CentOS-7-2018.08.15-0,
        display_name = CentOS-6.10-2018.10.12-0,
        display_name = CentOS-6.10-2018.09.19-0,
        display_name = CentOS-6.10-2018.08.15-0,

Oracle Cloud Infrastructure Provider documentation can be found here.