OCI #8 – Launching a Compute Instance using Terraform

This recipe shows how to provision a new compute instance using the Oracle Cloud Infrastructure Terraform provider.

Before you start, you must install and configure Terraform for OCI. See the Terraform Setup recipe, if needed.

Target Infrastructure

In this short tutorial we are going to provision a new compute instance on Oracle Cloud Infrastructure. The instance will be a lightweight Linux-based virtual machine with just 1 OCPU (== 2 vCPUs) and 15 GB of memory. We will place the instance in a public subnet and assign an ephemeral public IP address, so the virtual machine will be accessible from the Internet. To demonstrate it, we will enable a web server, host a static page and open port 80. All steps will be performed with the Oracle Cloud Infrastructure Terraform provider.

[Figure oci-06-infra: target infrastructure]

Infrastructure Code project

We are going to define and manage the infrastructure as code using Terraform. Let’s prepare a new directory and initialize the Terraform project:

  1. Create a new infrastructure code project directory and step into the folder
  2. Define the provider configuration (provider.tf) together with the variables that will be mapped from the operating system environment variables we defined during setup. If you haven’t done that yet, see the Terraform Setup recipe:
    # provider.tf 1/2
    variable "tenancy_ocid" {}
    variable "user_ocid" {}
    variable "fingerprint" {}
    variable "region" {}
    variable "private_key_path" {}
    variable "private_key_password" {}
    
    provider "oci" {
      tenancy_ocid = "${var.tenancy_ocid}"
      user_ocid = "${var.user_ocid}"
      fingerprint = "${var.fingerprint}"
      region = "${var.region}"
      private_key_path = "${var.private_key_path}"
      private_key_password = "${var.private_key_password}"
    }
  3. Initialize the working directory. Terraform will download the newest version of the provider plugin, unless it is already installed.

    ~/git/blog-code/oci-08> terraform init
    Initializing provider plugins...
    - Checking for available provider plugins on https://releases.hashicorp.com...
    - Downloading plugin for provider "oci" (3.5.0)...
    
    The following providers do not have any version constraints in configuration,
    so the latest version was installed.
    
    To prevent automatic upgrades to new major versions that may contain breaking
    changes, it is recommended to add version = "..." constraints to the
    corresponding provider blocks in configuration, with the constraint strings
    suggested below.
    
    * provider.oci: version = "~> 3.5"
    
    Terraform has been successfully initialized!
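If you want to follow that suggestion and pin the provider to the 3.5 line, you can add a version constraint to the provider block from step 2 (optional; shown here only as a sketch):

# provider.tf – optional provider version constraint, as suggested by terraform init
provider "oci" {
  version = "~> 3.5"
  # ... keep the remaining arguments exactly as defined in step 2
}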

Compartments

Oracle Cloud Infrastructure resources are grouped together in Compartments. The resources you define in the infrastructure code very often require that you specify the compartment in which they are created.

  1. Sign in to OCI Console
  2. Go to Identity → Compartments and copy the OCID of the compartment you would like to work with. If needed, create a new Compartment. (A CLI alternative is sketched at the end of this section.)
  3. Please add the following variable to your environment (for example in ~/.profile):
    export TF_VAR_compartment_ocid={put-here-the-compartment-ocid}
  4. Load the newly added variable into the current bash session:

    $ source ~/.profile
  5. Add a new Terraform variable to provider.tf. In this way you will capture the Compartment OCID in the Terraform project:

    # provider.tf 2/2
    variable "compartment_ocid" {}

You can find the entire provider.tf file here.
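If you prefer the command line, you can also look up compartment names and OCIDs with the OCI CLI (assuming the CLI is already configured; the query below is just an illustrative example):

$ oci iam compartment list --output table --query 'data[*].{name: name, id: id}'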

Networking

Cloud-based compute instances that are spread across bare metal servers and numerous virtual machines require interconnectivity. Although cloud networking, as seen by the user, is rather simplified and flattened, especially when compared to traditional on-premises topologies, you still need to design and roll out some networking configuration. This lets you decide which instances are accessible from the Internet and which aren’t, group related instances into subnets, and define security rules that allow access only on selected ports.

  1. Go to the infrastructure code project directory
  2. Create a new configuration file called vcn.tf (you can use a different name, but make sure it has the .tf extension)

Virtual Cloud Network

A Virtual Cloud Network (VCN) is a software-defined network that provides a contiguous IPv4 address block, firewall rules and routing rules. A VCN is subdivided into multiple, non-overlapping subnets.

This is how you define a VCN in Terraform HCL. Please add this section to the vcn.tf configuration file:

# vcn.tf 1/5
resource "oci_core_virtual_network" "demo_vcn" {
  compartment_id = "${var.compartment_ocid}"
  display_name = "demo-vcn"
  cidr_block = "192.168.10.0/24"
  dns_label = "demovcn"
}

Internet Gateway

The traffic between instances in your VCN and the Internet goes through an Internet Gateway (IGW). An IGW exists in the context of your VCN. It can be disabled to immediately isolate your cloud resources from the Internet and re-enabled later, for example once an incident has been resolved.

This is how you define an IGW in Terraform HCL. Please add this section to the vcn.tf configuration file:

# vcn.tf 2/5
resource "oci_core_internet_gateway" "demo_igw" {
  compartment_id = "${var.compartment_ocid}"
  vcn_id = "${oci_core_virtual_network.demo_vcn.id}"
  display_name = "demo-igw"
  enabled = "true"
}

Subnet

Compute instances must be launched within subnets. Before we create a subnet, we are going to prepare a Route Table and a Security List. The Route Table will direct the outgoing traffic through the Internet Gateway we’ve just defined. The Security List will contain Security Rules that define what kind of ingress and egress traffic is allowed for the subnet. You can think of Security Rules as an additional layer of virtual firewall.

This is how you define a simple Route Table that directs the entire (0.0.0.0/0) outgoing traffic to the Internet Gateway:

# vcn.tf 3/5
resource "oci_core_route_table" "demo_rt" {
  compartment_id = "${var.compartment_ocid}"
  vcn_id = "${oci_core_virtual_network.demo_vcn.id}"
  display_name = "demo-rt"
  route_rules {
    destination = "0.0.0.0/0"
    network_entity_id = "${oci_core_internet_gateway.demo_igw.id}"
  }
}

This is how you define a Security List that allows the inbound traffic on ports 22 and 80 as well as any outbound traffic:

# vcn.tf 4/5
resource "oci_core_security_list" "demo_sl" {
  compartment_id = "${var.compartment_ocid}"
  vcn_id = "${oci_core_virtual_network.demo_vcn.id}"
  display_name = "demo-sl"
  egress_security_rules = [
    { destination = "0.0.0.0/0", protocol = "all" }
  ]
  ingress_security_rules = [
    { protocol = "6", source = "0.0.0.0/0", tcp_options { "max" = 22, "min" = 22 }},
    { protocol = "6", source = "0.0.0.0/0", tcp_options { "max" = 80, "min" = 80 }}
  ]
}

Finally, this is how you define a Subnet in Terraform HCL:

# vcn.tf 5/5
resource "oci_core_subnet" "demo_subnet" {
  compartment_id = "${var.compartment_ocid}"
  vcn_id = "${oci_core_virtual_network.demo_vcn.id}"
  display_name = "demo-subnet"
  availability_domain = "${local.ad_1_name}"
  cidr_block = "192.168.10.0/30"
  route_table_id = "${oci_core_route_table.demo_rt.id}"
  security_list_ids = ["${oci_core_security_list.demo_sl.id}"]
  dhcp_options_id = "${oci_core_virtual_network.demo_vcn.default_dhcp_options_id}"
  dns_label = "demosubnet"
}

As you can see, there are some interdependencies between the various resource objects. For example, the subnet object references the route table object we’ve defined in the same file using its id (OCID). You can read more on HCL interpolation here.

You can find the entire vcn.tf file here.

Availability Domain

A subnet (and consequently the compute instances within it) has to be explicitly created in one of the availability domains that the region is built of. You can think of an availability domain as if it were a single data center, and of a region as a mesh of interconnected data centers in close, but still safely isolated, proximity.

If you look closer at the demo_subnet resource from the previous section, you will discover that the availability_domain field references a local value.

  1. Create a new configuration file called locals.tf (you can use a different name, but make sure it has the .tf extension)

Add these two sections to locals.tf:

# locals.tf 1/2
data "oci_identity_availability_domains" "ads" {
  compartment_id = "${var.tenancy_ocid}"
}
locals {
  ad_1_name = "${lookup(data.oci_identity_availability_domains.ads.availability_domains[0],"name")}"
  ad_2_name = "${lookup(data.oci_identity_availability_domains.ads.availability_domains[1],"name")}"
  ad_3_name = "${lookup(data.oci_identity_availability_domains.ads.availability_domains[2],"name")}"
}

The code above makes Terraform query the OCI REST API for the list of availability domains and extract their names into local values. We can then reference these values in any .tf file, as we did in vcn.tf using ${local.ad_1_name}.

Compute Instance

There are two families of compute instances on Oracle Cloud Infrastructure:

  • Bare Metal Hosts
  • Virtual Machines

Bare Metal Hosts are powered by dedicated, single-tenant hardware with no hypervisor on board. They are very powerful and can provide a good foundation for large-scale enterprise computing. At the time of writing, the smallest bare metal server uses 36 OCPUs (== 72 vCPUs). Well… you do not always need such a mighty server, do you? Virtual machines, on the other hand, are powered by multi-tenant, hypervisor-managed servers. As a result, you can provision a lightweight virtual machine starting from just 1 OCPU (== 2 vCPUs). A Shape defines the hardware resource profile of a compute instance. For example, the VM.Standard2.1 shape delivers 1 OCPU and 15 GB of memory.

An Image is a template that specifies the instance’s preinstalled software, including the Operating System. It is used to initialize the boot volume that gets associated with the instance during provisioning. Oracle Cloud Infrastructure provides default images with various Operating Systems, and you can create your own custom images, usually on top of the existing ones. Let’s fetch the OCID of a particular CentOS image and save it to a local value. Add these two sections to locals.tf:

# locals.tf 2/2
data "oci_core_images" "centos_linux_image" {
  compartment_id = "${var.tenancy_ocid}"
  operating_system = "CentOS"
}
locals {
  centos_linux_image_ocid = "${lookup(data.oci_core_images.centos_linux_image.images[0],"id")}"
}

You can find the entire locals.tf file here.

A Linux-based compute instance requires an SSH public key to enable remote access. Please prepare a key pair before proceeding. See the SSH Keypair recipe, if needed. This guide assumes you store the public key in ~/.ssh/oci_id_rsa.pub.
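If you do not have a key pair yet, one simple way to generate it (an illustrative example; adjust the key type and add a passphrase if you prefer) is:

$ ssh-keygen -t rsa -b 2048 -f ~/.ssh/oci_id_rsa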

Oracle Cloud Infrastructure compute instances can use cloud-init user data to perform the initial configuration automatically, immediately after provisioning. Let’s prepare a cloud-config file: ./cloud-init/vm.cloud-config.
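The exact contents of that file are not reproduced in this recipe. A minimal sketch that matches what the rest of the recipe expects (a CentOS instance running nginx and serving a static page on port 80) could look like this; treat it as an assumption rather than the original file:

#cloud-config
# ./cloud-init/vm.cloud-config – illustrative sketch
runcmd:
  # nginx comes from the EPEL repository on CentOS 7
  - yum install -y epel-release
  - yum install -y nginx
  # publish a simple static page
  - echo "Hello from a Terraform-provisioned OCI instance" > /usr/share/nginx/html/index.html
  # open port 80 in the OS firewall and start the web server
  - firewall-offline-cmd --add-port=80/tcp
  - systemctl restart firewalld
  - systemctl start nginx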

Now we are ready to define the very first instance.

  1. Create a new configuration file called compute.tf (you can use a different name, but make sure it has the .tf extension)

# compute.tf 1/1
resource "oci_core_instance" "demo_vm" {
  compartment_id = "${var.compartment_ocid}"
  display_name = "demo-vm"
  availability_domain = "${local.ad_1_name}"

  source_details {
    source_id = "${local.centos_linux_image_ocid}"
    source_type = "image"
  }
  shape = "VM.Standard2.1"
  create_vnic_details {
    subnet_id = "${oci_core_subnet.demo_subnet.id}"
    display_name = "primary-vnic"
    assign_public_ip = true
    private_ip = "192.168.10.2"
    hostname_label = "michalsvm"
  }
  metadata {
    ssh_authorized_keys = "${file("~/.ssh/oci_id_rsa.pub")}"
    user_data = "${base64encode(file("./cloud-init/vm.cloud-config"))}"
  }
  timeouts {
    create = "5m"
  }
}

You can find the entire compute.tf file here.

Infrastructure Code

Let’s revise the infrastructure code we’ve created. We have our infrastructure resource definitions (network and compute) distributed across a few .tf files in the same folder. Additionally, we have a cloud-config file with initialization commands for the compute instance and, in a separate directory, the public SSH key that enables remote access to the compute instance.

~/git/blog-code/oci-08> tree
.
├── cloud-init
│   └── vm.cloud-config
├── compute.tf
├── locals.tf
├── provider.tf
└── vcn.tf

~/.ssh> tree
.
├── known_hosts
├── oci_id_rsa
└── oci_id_rsa.pub

This is the same folder in which you’ve already run terraform init as described at the top of this recipe.

You can find the Terraform code I am presenting in this chapter here.

Provisioning infrastructure using Terraform

Provisioning cloud infrastructure with Terraform is very easy. You just need to issue a single command: terraform apply. Terraform will compare the definitions from the .tf files with the state (tfstate) file, if any, and prepare a plan. You will be able to review and accept the plan. Finally, Terraform will issue a sequence of OCI REST API calls to orchestrate the creation and provisioning of the cloud resources.
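If you would like to inspect the execution plan first, without changing anything, you can run terraform plan before applying:

~/git/blog-code/oci-08> terraform plan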

~/git/blog-code/oci-08> terraform apply

...

Plan: 6 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

...

oci_core_instance.demo_vm: Still creating... (10s elapsed)
oci_core_instance.demo_vm: Still creating... (20s elapsed)
oci_core_instance.demo_vm: Still creating... (30s elapsed)

[Screenshot: oci-08-step1]

oci_core_instance.demo_vm: Creation complete after 1m36s (ID: ocid1.instance.oc1.eu-frankfurt-1.abthe...m2ij7nq2dh753dkzegrzmeuf3pzzbuojzc5dza)

Apply complete! Resources: 6 added, 0 changed, 0 destroyed.

[Screenshot: oci-08-step2]

You should soon see your first instance, created with Terraform, up and running.

Please go to the Console or use the CLI to identify the ephemeral public IP address assigned to the VM.
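Alternatively, you can let Terraform print the address for you. A small optional addition (not part of the files above) is an output that exposes the instance’s public IP:

# outputs.tf (optional) – print the ephemeral public IP
output "demo_vm_public_ip" {
  value = "${oci_core_instance.demo_vm.public_ip}"
}

After the next terraform apply (or terraform output), the address will be shown in the command output.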

You may still need to wait a few more seconds until the boot process completes and the SSH daemon starts accepting connections. This is how you connect to the machine:

$ ssh -i ~/.ssh/oci_id_rsa opc@130.61.62.137
The authenticity of host '130.61.62.137 (130.61.62.137)' can't be established.
ECDSA key fingerprint is SHA256:tKs9JT5ubEVtBuKdKqux5ckktcLPRBrwWvWsjUaec4Q.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '130.61.62.137' (ECDSA) to the list of known hosts.
-bash-4.2$ sudo systemctl status nginx
 nginx.service - The nginx HTTP and reverse proxy server
   Loaded: loaded (/usr/lib/systemd/system/nginx.service; disabled; vendor preset: disabled)
   Active: active (running) since Tue 2018-10-30 18:48:19 GMT; 2s ago

Finally, let’s check if we can access the web server:

[Screenshot: oci-08-step3]
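You can also verify it from the command line, using the public IP address from the SSH step (your address will differ):

$ curl http://130.61.62.137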

You can find out how to launch the same instance using the CLI here.

You can find the Terraform code I presented in this chapter here.
