OCI #8 – Launching Compute instance using Terraform

This recipe shows how to provision a new compute instance using the Oracle Cloud Infrastructure Terraform provider.

Before you start, you must install and configure Terraform for OCI. See: Terraform Setup recipe, if needed.

Target Infrastructure

In this short tutorial we are going to provision a new compute instance on Oracle Cloud Infrastructure. The instance will be a lightweight Linux-based virtual machine with just 1 OCPU (== 2 vCPUs) and 15GB memory. We will place the instance in a public subnet and assign an ephemeral public IP address. In this way, the virtual machine will be accessible from the Internet. To demonstrate this, we will enable a web server, host a static page and open port 80. All steps will be performed with the Oracle Cloud Infrastructure Terraform provider.

oci-06-infra

Infrastructure Code project

We are going to define and manage the infrastructure as code using Terraform. Let’s prepare a new directory and initialize the Terraform project:

  1. Create a new infrastructure code project directory and step into the folder
  2. Define the provider configuration (provider.tf) together with the variables that will be mapped from the operating system environment variables we’ve defined during setup. If you haven’t done it yet, see: Terraform Setup recipe:
    # provider.tf 1/2
    variable "tenancy_ocid" {}
    variable "user_ocid" {}
    variable "fingerprint" {}
    variable "region" {}
    variable "private_key_path" {}
    variable "private_key_password" {}
    
    provider "oci" {
      tenancy_ocid = "${var.tenancy_ocid}"
      user_ocid = "${var.user_ocid}"
      fingerprint = "${var.fingerprint}"
      region = "${var.region}"
      private_key_path = "${var.private_key_path}"
      private_key_password = "${var.private_key_password}"
    }
  3. Initialize the working directory. Terraform will download the newest version of the provider plugin, unless already installed.

    ~/git/blog-code/oci-08> terraform init
    Initializing provider plugins...
    - Checking for available provider plugins on https://releases.hashicorp.com...
    - Downloading plugin for provider "oci" (3.5.0)...
    
    The following providers do not have any version constraints in configuration,
    so the latest version was installed.
    
    To prevent automatic upgrades to new major versions that may contain breaking
    changes, it is recommended to add version = "..." constraints to the
    corresponding provider blocks in configuration, with the constraint strings
    suggested below.
    
    * provider.oci: version = "~> 3.5"
    
    Terraform has been successfully initialized!
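Following the suggestion printed by terraform init, you may also want to pin the provider version so that a future major release of the plugin does not introduce breaking changes unexpectedly. A minimal way to do it (optional for this recipe) is to add a version argument to the existing provider block in provider.tf:

    # provider.tf (optional version constraint)
    provider "oci" {
      version = "~> 3.5"
      # ... the remaining arguments stay as shown above
    }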

Compartments

Oracle Cloud Infrastructure resources are grouped together in Compartments. Most of the resources you define in the infrastructure code require that you specify the compartment in which they are created.

  1. Sign in to OCI Console
  2. Go to Identity ➟ Compartments and copy the OCID of the compartment you would like to work with. If needed, create a new Compartment.
  3. Please add the following variable to your environment (for example in ~/.profile):
    export TF_VAR_compartment_ocid={put-here-the-compartment-ocid}
  4. Load the newly added variable into the current bash session:

    $ source ~/.profile
  5. Add a new Terraform variable to provider.tf. In this way, you will capture the Compartment OCID in the Terraform project:

    # provider.tf 2/2
    variable "compartment_ocid" {}

You can find the entire provider.tf file here.

Networking

Cloud-based compute instances that are spread across bare metal servers and numerous virtual machines require interconnectivity. Although cloud-computing networking, as seen by the user, is rather simplified and flattened, especially when compared to traditional on-premises topologies, you still need to design and roll out some networking configuration. In this way, you can decide which instances are accessible from the Internet while the others aren’t. You can group related instances together in subnets and define the security rules that allow access on selected ports.

  1. Go to the infrastructure code project directory
  2. Create a new configuration file called vcn.tf (you can use a different name, but make sure it has the .tf extension)

Virtual Cloud Network

A Virtual Cloud Network (VCN) is a software-defined network that provides a contiguous IPv4 address block, firewall and routing rules. A VCN is subdivided into multiple, non-overlapping subnets.

This is how you define a VCN in Terraform HCL. Please add this section to the vcn.tf configuration file:

# vcn.tf 1/5
resource "oci_core_virtual_network" "demo_vcn" {
  compartment_id = "${var.compartment_ocid}"
  display_name = "demo-vcn"
  cidr_block = "192.168.10.0/24"
  dns_label = "demovcn"
}

Internet Gateway

The traffic between instances in your VCN and the Internet goes through an Internet Gateway (IGW). An IGW exists in the context of your VCN. It can be disabled to immediately isolate your cloud resources from the Internet, for example in response to a security incident, and re-enabled once the issue is resolved.

This is how you define an IGW in Terraform HCL. Please add this section to the vcn.tf configuration file:

# vcn.tf 2/5
resource "oci_core_internet_gateway" "demo_igw" {
  compartment_id = "${var.compartment_ocid}"
  vcn_id = "${oci_core_virtual_network.demo_vcn.id}"
  display_name = "demo-igw"
  enabled = "true"
}

Subnet

Compute instances must be launched within subnets. Before we create a subnet, we are going to prepare a Route Table and a Security List. The Route Table will direct the outgoing traffic through the Internet Gateway we’ve just defined. The Security List will contain Security Rules that define what kind of ingress and egress traffic is allowed for the subnet. You can think of Security Rules as an additional layer of a virtual firewall.

This is how you define a simple Route Table that directs all (0.0.0.0/0) outgoing traffic to the Internet Gateway:

# vcn.tf 3/5
resource "oci_core_route_table" "demo_rt" {
  compartment_id = "${var.compartment_ocid}"
  vcn_id = "${oci_core_virtual_network.demo_vcn.id}"
  display_name = "demo-rt"
  route_rules {
    destination = "0.0.0.0/0"
    network_entity_id = "${oci_core_internet_gateway.demo_igw.id}"
  }
}

This is how you define a Security List that allows the inbound traffic on ports 22 and 80 as well as any outbound traffic:

# vcn.tf 4/5
resource "oci_core_security_list" "demo_sl" {
  compartment_id = "${var.compartment_ocid}"
  vcn_id = "${oci_core_virtual_network.demo_vcn.id}"
  display_name = "demo-sl"
  egress_security_rules = [
    { destination = "0.0.0.0/0", protocol = "all" }
  ]
  ingress_security_rules = [
    { protocol = "6", source = "0.0.0.0/0", tcp_options { "max" = 22, "min" = 22 }},
    { protocol = "6", source = "0.0.0.0/0", tcp_options { "max" = 80, "min" = 80 }}
  ]
}

Finally, this is how you define a Subnet in Terraform HCL:

# vcn.tf 5/5
resource "oci_core_subnet" "demo_subnet" {
  compartment_id = "${var.compartment_ocid}"
  vcn_id = "${oci_core_virtual_network.demo_vcn.id}"
  display_name = "demo-subnet"
  availability_domain = "${local.ad_1_name}"
  cidr_block = "192.168.10.0/30"
  route_table_id = "${oci_core_route_table.demo_rt.id}"
  security_list_ids = ["${oci_core_security_list.demo_sl.id}"]
  dhcp_options_id = "${oci_core_virtual_network.demo_vcn.default_dhcp_options_id}"
  dns_label = "demosubnet"
}

As you can see, there are some interdependencies between the various resource objects. For example, the subnet object references the route table object we’ve defined in the same file using its id (OCID). You can read more on HCL interpolation here.

You can find the entire vcn.tf file here.

Availability Domain

A subnet (and consequently the compute instances within it) has to be explicitly created in one of the availability domains that the region is built of. You can think of an availability domain as a single data center, and of a region as a mesh of interconnected data centers in close, but still safely isolated, proximity.

If you look closer at the demo_subnet resource from the previous section, you will notice that the availability_domain field references a “local” value.

  1. Create a new configuration file called locals.tf (you can use a different name, but make sure it has the .tf extension)

Add these two sections to locals.tf:

# locals.tf 1/2
data "oci_identity_availability_domains" "ads" {
  compartment_id = "${var.tenancy_ocid}"
}
locals {
  ad_1_name = "${lookup(data.oci_identity_availability_domains.ads.availability_domains[0],"name")}"
  ad_2_name = "${lookup(data.oci_identity_availability_domains.ads.availability_domains[1],"name")}"
  ad_3_name = "${lookup(data.oci_identity_availability_domains.ads.availability_domains[2],"name")}"
}

The code above makes Terraform query the OCI REST API for the list of availability domains and extracts their names into local values. We can then reference these values in any .tf file, as we did in vcn.tf using ${local.ad_1_name}.
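If you want to double-check which names were resolved, you can temporarily add an output (purely for inspection, not required by the recipe). After terraform apply, the availability domain names, which look similar to the feDV:EU-FRANKFURT-1-AD-1 value used later in the CLI recipe, will be printed:

# locals.tf (optional, inspection only)
output "ad_names" {
  value = ["${local.ad_1_name}", "${local.ad_2_name}", "${local.ad_3_name}"]
}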

Compute Instance

There are two families of compute instances on Oracle Cloud Infrastructure:

  • Bare Metal Hosts
  • Virtual Machines

Bare Metal Hosts are powered by dedicated, single-tenant hardware with no hypervisor on board. They are very powerful and can provide a good foundation for large-scale enterprise computing. At the time of writing, the smallest bare metal server uses 36 OCPUs (== 72 vCPUs). Well… you do not always need such a mighty server, do you? Virtual machines, on the other hand, are powered by multi-tenant, hypervisor-managed servers. As a result, you can provision a lightweight virtual machine starting from just 1 OCPU (== 2 vCPUs). A Shape defines the profile of hardware resources of a compute instance. For example, the VM.Standard2.1 Shape delivers 1 OCPU and 15GB memory.

An Image is a template that specifies the instance’s preinstalled software, including the Operating System. It is used to initialize the boot volume that gets associated with the instance during provisioning. Oracle Cloud Infrastructure provides default images with various Operating Systems. You can create your own custom images, usually on top of the existing images. Let’s fetch the OCID of a particular CentOS image and save it to a local variable. Add these two sections to locals.tf:

# locals.tf 2/2
data "oci_core_images" "centos_linux_image" {
  compartment_id = "${var.tenancy_ocid}"
  operating_system = "CentOS"
}
locals {
  centos_linux_image_ocid = "${lookup(data.oci_core_images.centos_linux_image.images[0],"id")}"
}

You can find the entire locals.tf file here.

A Linux-based compute instance requires an SSH public key to enable remote access. Please prepare a key pair before proceeding. See: SSH Keypair recipe, if needed. This guide assumes you store the public key in ~/.ssh/oci_id_rsa.pub.

Oracle Cloud Infrastructure compute instances can use Cloud-Init user data to perform initial configuration automatically, immediately after provisioning. Let’s prepare a cloud-config file ./cloud-init/vm.cloud-config:
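A minimal sketch that matches the goals stated at the beginning of this recipe (install a web server, publish a static page, open port 80 in the operating system firewall) could look like the following; the EPEL package source, the sample page text and the firewalld commands are assumptions that you may need to adapt:

#cloud-config
# ./cloud-init/vm.cloud-config - illustrative sketch
runcmd:
  # nginx for CentOS 7 is distributed via EPEL
  - yum install -y epel-release
  - yum install -y nginx
  # publish a trivial static page
  - echo 'Hello from demo-vm' > /usr/share/nginx/html/index.html
  # open port 80 in the OS firewall and start the web server
  - firewall-cmd --permanent --add-port=80/tcp
  - firewall-cmd --reload
  - systemctl enable nginx
  - systemctl start nginx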

Now we are ready to define the very first instance.

  1. Create a new configuration file called compute.tf (you can use a different name, but make sure it has the .tf extension)

# compute.tf 1/1
resource "oci_core_instance" "demo_vm" {
  compartment_id = "${var.compartment_ocid}"
  display_name = "demo-vm"
  availability_domain = "${local.ad_1_name}"

  source_details {
    source_id = "${local.centos_linux_image_ocid}"
    source_type = "image"
  }
  shape = "VM.Standard2.1"
  create_vnic_details {
    subnet_id = "${oci_core_subnet.demo_subnet.id}"
    display_name = "primary-vnic"
    assign_public_ip = true
    private_ip = "192.168.10.2"
    hostname_label = "michalsvm"
  }
  metadata {
    ssh_authorized_keys = "${file("~/.ssh/oci_id_rsa.pub")}"
    user_data = "${base64encode(file("./cloud-init/vm.cloud-config"))}"
  }
  timeouts {
    create = "5m"
  }
}

You can find the entire compute.tf file here.

Infrastructure Code

Let’s revise the infrastructure code we’ve created. We have our infrastructure resource definitions (network and compute) distributed across a few .tf files in the same folder. Additionally, we have a cloud-config file with initialization commands for the compute instance and, in a separate directory, the public SSH key that enables remote access to the compute instance.

~/git/blog-code/oci-08> tree
.
├── cloud-init
│   └── vm.cloud-config
├── compute.tf
├── locals.tf
├── provider.tf
└── vcn.tf

~/.ssh> tree
.
├── known_hosts
├── oci_id_rsa
└── oci_id_rsa.pub

This is the same folder in which you’ve already run terraform init as described at the top of this recipe.

You can find the Terraform code I am presenting in this chapter here.

Provisioning infrastructure using Terraform

Provisioning a cloud infrastructure using Terraform is very easy. You just need to issue a single command: terraform apply. Terraform will compare the definitions from the .tf files with the tfstate file, if any, and prepare a plan. You will be able to review and accept the plan. Finally, Terraform will issue a sequence of OCI REST API calls to orchestrate the creation and provisioning of the cloud-based resources.

~/git/blog-code/oci-08> terraform apply

...

Plan: 6 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

...

oci_core_instance.demo_vm: Still creating... (10s elapsed)
oci_core_instance.demo_vm: Still creating... (20s elapsed)
oci_core_instance.demo_vm: Still creating... (30s elapsed)

oci-08-step1

oci_core_instance.demo_vm: Creation complete after 1m36s (ID: ocid1.instance.oc1.eu-frankfurt-1.abthe...m2ij7nq2dh753dkzegrzmeuf3pzzbuojzc5dza)

Apply complete! Resources: 6 added, 0 changed, 0 destroyed.

oci-08-step2

You should soon see your first instance, created with Terraform, up and running.

Please go to the Console or use the CLI to identify the ephemeral public IP address assigned to the VM.
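For example, you can reuse the CLI approach shown later in this section (this assumes the OCI CLI is configured with the same default compartment as the Terraform project):

$ vmOCID=`oci compute instance list --display-name demo-vm --lifecycle-state RUNNING | grep \"id\" | awk -F'[\"|\"]' '{print $4}'`
$ oci compute instance list-vnics --instance-id "$vmOCID" | grep public-ip | awk -F'[\"|\"]' '{print $4}'
130.61.62.137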

You will still need to wait a few more seconds until the boot process completes and the SSH daemon starts accepting connections. This is how you connect to the machine:

$ ssh -i ~/.ssh/oci_id_rsa opc@130.61.62.137
The authenticity of host '130.61.62.137 (130.61.62.137)' can't be established.
ECDSA key fingerprint is SHA256:tKs9JT5ubEVtBuKdKqux5ckktcLPRBrwWvWsjUaec4Q.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '130.61.62.137' (ECDSA) to the list of known hosts.
-bash-4.2$ sudo systemctl status nginx
 nginx.service - The nginx HTTP and reverse proxy server
   Loaded: loaded (/usr/lib/systemd/system/nginx.service; disabled; vendor preset: disabled)
   Active: active (running) since Tue 2018-10-30 18:48:19 GMT; 2s ago

Finally, let’s check if we can access the web server:

oci-08-step3

You can find out how to launch the same instance using the CLI here.

You can find the Terraform code I presented in this chapter here.

OCI #7 – Terraform Setup

This recipe shows how to install and configure Terraform on your client machine.

Before you start, you must create a key pair for API Signing Key. See: API Signing Key recipe, if needed.

Oracle Cloud Infrastructure REST API

Oracle Cloud Infrastructure exposes a comprehensive REST API to manage OCI resources and configurations. Every API request must be signed with an Oracle Cloud Infrastructure API signature and sent using the secure HTTPS protocol with TLS 1.2. Signing a request is a non-trivial, multi-step process. This is why you usually use tools like the CLI, Terraform or custom SDK-based programs that encapsulate API calls and sign the requests on your behalf. All these tools eventually make calls to the OCI REST API, therefore the OCI REST API is the ultimate gateway to the cloud management plane.

Terraform

Terraform is a declarative, agentless infrastructure provisioning tool. You declare the infrastructure as code using HCL (HashiCorp Configuration Language), a JSON-like declarative language. For example, this is how you define a sample OCI virtual network:

resource "oci_core_virtual_network" "my_vcn" {
	  cidr_block = "192.168.21.0/24"
	  dns_label = "corefra"
	  compartment_id = "${var.compartment_ocid}"
	  display_name = "core-fra-vcn"
}

Every time you run the terraform apply command, Terraform calculates an execution plan and issues a series of direct API calls to the provider’s cloud management plane. The tool knows what API calls the plan has to eventually produce because it tracks the as-is infrastructure state locally in a .tfstate file. Out-of-the-box, the tool supports a number of cloud providers including Oracle Cloud Infrastructure.
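In practice, the day-to-day workflow boils down to a handful of commands run in the project directory:

$ terraform init      # download the provider plugins required by the project
$ terraform plan      # preview the execution plan without changing anything
$ terraform apply     # execute the plan against the cloud management plane
$ terraform destroy   # tear down everything tracked in the state file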

Installing Terraform

Each Terraform release comes as a single binary file. You can find the binary package for your operating system under this link.

  1. Download and unpack the binary file for your operating system from https://www.terraform.io/downloads.html and place it in a directory on your PATH.
  2. Execute the following command:
    $ terraform --version
    Terraform v0.11.10

You should notice that a new hidden directory gets created (~/.terraform.d).

Configuring Terraform for OCI

Although Terraform supports Oracle Cloud Infrastructure out-of-the-box, each infrastructure project you are working on still has to know the API connection details such as the tenancy OCID, the user OCID, the path to the private key and the fingerprint of the corresponding public key you’ve uploaded and associated with the OCI user. Terraform is able to map operating system environment variables that follow a specific naming convention to Terraform variables. We are going to leverage this approach when configuring our environment.

  1. Sign in to OCI Console
  2. Go to Identity ➟ Users and note down the OCID of the user on whose behalf Terraform OCI Provider prepares, signs and makes OCI REST API requests.
  3. If you haven’t done it already, please upload the public part of your API Signing Key and note down the fingerprint of the public key.
    See: API Signing Key recipe (“Uploading the public key”), if needed.
  4. Go to Administration ➟ Tenancy Details and note down the OCID of the tenancy.
  5. Please add the following variables to your environment (for example in ~/.profile):
    #Terraform
    export TF_VAR_tenancy_ocid={put-here-the-tenancy-ocid}
    export TF_VAR_user_ocid={put-here-the-user-ocid}
    export TF_VAR_fingerprint={put-here-the-public-key-fingerprint}
    export TF_VAR_region={put-here-the-region} # for example: eu-frankfurt-1
    export TF_VAR_private_key_path={put-here-the-path-to-the-private-key}
    export TF_VAR_private_key_password={put-here-the-private-key-password}
  6. Let’s load the newly added variables into the current bash session:

    source ~/.profile

Now you are ready to start defining Terraform projects.

Testing the connectivity

We are going to test the Terraform configuration by preparing a sample environment project. The project won’t provision anything; it will just read the CentOS images currently available on OCI.

  1. Create a new directory for your project. For example: ~/projects/oci_tf_test
    mkdir  ~/projects/oci_tf_test
  2. Define the provider configuration (provider.tf) together with the variables that will be mapped from operating system environment variables we’ve defined earlier:

    # ~/projects/oci_tf_test/provider.tf
    variable "tenancy_ocid" {}
    variable "user_ocid" {}
    variable "fingerprint" {}
    variable "region" {}
    variable "private_key_path" {}
    variable "private_key_password" {}
    
    provider "oci" {
      tenancy_ocid = "${var.tenancy_ocid}"
      user_ocid = "${var.user_ocid}"
      fingerprint = "${var.fingerprint}"
      region = "${var.region}"
      private_key_path = "${var.private_key_path}"
      private_key_password = "${var.private_key_password}"
    }
  3. Initialize the working directory. Terraform will download the newest version of the provider plugin.

    ~/projects/oci_tf_test> terraform init
    Initializing provider plugins...
    - Checking for available provider plugins on https://releases.hashicorp.com...
    - Downloading plugin for provider "oci" (3.5.0)...
    
    The following providers do not have any version constraints in configuration,
    so the latest version was installed.
    
    To prevent automatic upgrades to new major versions that may contain breaking
    changes, it is recommended to add version = "..." constraints to the
    corresponding provider blocks in configuration, with the constraint strings
    suggested below.
    
    * provider.oci: version = "~> 3.5"
    
    Terraform has been successfully initialized!
  4. Define another test configuration file (data.tf) that will make Terraform fetch the available CentOS images and display them in the output:

    # ~/projects/oci_tf_test/data.tf
    data "oci_core_images" "centos_images" {
      compartment_id = "${var.tenancy_ocid}"
      operating_system = "CentOS"
    }
    output "centos_images" {
      value = "${data.oci_core_images.centos_images.images}"
    }
  5. Run Terraform by applying the configuration:

    $ terraform apply

You should see a collection of JSON objects that describe the CentOS images available on OCI.

You can filter the output to see the information you are interested in like this:

$ terraform apply | grep display_name
        display_name = CentOS-7-2018.10.12-0,
        display_name = CentOS-7-2018.09.19-0,
        display_name = CentOS-7-2018.08.15-0,
        display_name = CentOS-6.10-2018.10.12-0,
        display_name = CentOS-6.10-2018.09.19-0,
        display_name = CentOS-6.10-2018.08.15-0,

Oracle Cloud Infrastructure Provider documentation can be found here.

OCI #6 – Launching Compute instance using CLI

This recipe shows how to provision a new compute instance using the Oracle Cloud Infrastructure CLI.

Before you start, you must install and configure Oracle Cloud Infrastructure CLI. See: CLI Setup recipe, if needed.

Target Infrastructure

In this short tutorial we are going to provision a new compute instance on Oracle Cloud Infrastructure. The instance will be a lightweight Linux-based virtual machine with just 1 OCPU (== 2 vCPUs) and 15GB memory. We will place the instance in a public subnet and assign an ephemeral public IP address. In this way, the virtual machine will be accessible from the Internet. To demonstrate this, we will enable a web server, host a static page and open port 80. All steps will be performed with the Oracle Cloud Infrastructure CLI.

oci-06-infra

Compartments

Oracle Cloud Infrastructure resources are grouped together in Compartments. Every OCI CLI command must know the compartment in which it has to perform operations. Most often, you will define a default compartment for the CLI. You do it in the ~/.oci/oci_cli_rc file. See: CLI Setup recipe, if needed.

This is the quickest way to display the name of the default Compartment that CLI is configured to work with:

oci iam compartment get --output table --query "data.{CompartmentName:\"name\"}"
+-----------------+
| CompartmentName |
+-----------------+
| Sandbox         |
+-----------------+

Networking

Cloud-based compute instances that are spread across bare metal servers and numerous virtual machines require interconnectivity. Although cloud-computing networking, as seen by the user, is rather simplified and flattened, especially when compared to traditional on-premises topologies, you still need to design and roll out some networking configuration. In this way, you can decide which instances are accessible from the Internet while the others aren’t. You can group related instances together in subnets and define security rules that allow access on selected ports.

Virtual Cloud Network

A Virtual Cloud Network (VCN) is a software-defined network that provides a contiguous IPv4 address block, firewall and routing rules. A VCN is subdivided into multiple, non-overlapping subnets.

This is how you create a VCN using OCI CLI:

$ oci network vcn create --cidr-block 192.168.10.0/24 --display-name demovcn --dns-label demovcn
{
  "data": {
    "cidr-block": "192.168.10.0/24",
    "compartment-id": "ocid1.compartment.oc1..aaaaaaaavpjjlshvlm7nh6gxuhbsdzdbvhuiihenvyaqz6o4hrycscjtq75q",
    "default-dhcp-options-id": "ocid1.dhcpoptions.oc1.eu-frankfurt-1.aaaaaaaaklavzudx7vkb2pi42sx72zlws4mcsmgalbojbd2pqq7hvjhb2zfa",
    "default-route-table-id": "ocid1.routetable.oc1.eu-frankfurt-1.aaaaaaaamscrcftelrikngjckzje5fnqqoxbw7opmtdo55banhayrlajv75q",
    "default-security-list-id": "ocid1.securitylist.oc1.eu-frankfurt-1.aaaaaaaalq7jklywlm4qywy2smblsdcxqbadtk5asqdvkszqigs3uqnxhlna",
    "defined-tags": {},
    "display-name": "demovcn",
    "dns-label": "demovcn",
    "freeform-tags": {},
    "id": "ocid1.vcn.oc1.eu-frankfurt-1.aaaaaaaaazy45d63g6r7l7uvytioflhceuq6xu2fdjg6d6bptrhmckns6rhq",
    "lifecycle-state": "AVAILABLE",
    "time-created": "2018-10-22T21:51:26.228000+00:00",
    "vcn-domain-name": "demovcn.oraclevcn.com"
  },
  "etag": "339cc316"
}
$ oci network vcn list --output table --query "data [*].{Name:\"display-name\", CIDR:\"cidr-block\", Domain:\"vcn-domain-name\"}"
+-----------------+-----------------------+---------+
| CIDR            | Domain                | Name    |
+-----------------+-----------------------+---------+
| 192.168.10.0/24 | demovcn.oraclevcn.com | demovcn |
+-----------------+-----------------------+---------+
oci-06-step1

Internet Gateway

The traffic between instances in your VCN and the Internet goes through an Internet Gateway (IGW). An IGW exists in the context of your VCN. It can be disabled to immediately isolate your cloud resources from the Internet, for example in response to a security incident, and re-enabled once the issue is resolved.

This is how you create a VCN Internet Gateway using OCI CLI. The first command queries for the OCID of the previously created VCN and saves the result in a variable. The variable is then used in the command that creates an IGW:

$ vcnOCID=`oci network vcn list --query "data [?\"display-name\"=='demovcn'].{OCID:\"id\"}" | grep OCID | awk -F'[\"|\"]' '{print $4}'`
$ echo $vcnOCID
ocid1.vcn.oc1.eu-frankfurt-1.aaaaaaaaazy45d63g6r7l7uvytioflhceuq6xu2fdjg6d6bptrhmckns6rhq
$ oci network internet-gateway create --vcn-id $vcnOCID --display-name demoigw --is-enabled true > /dev/null

oci-06-step2
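If you ever need to temporarily cut the VCN off from the Internet, the same gateway can be disabled and later re-enabled with an update call along these lines (shown with a placeholder OCID; double-check the parameter names against the CLI reference for your version):

$ oci network internet-gateway update --ig-id <internet-gateway-ocid> --is-enabled false
$ oci network internet-gateway update --ig-id <internet-gateway-ocid> --is-enabled true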

Subnet

Compute instances must be launched within subnets. Before we create a subnet, we are going to prepare a Route Table and a Security List. The Route Table will direct the outgoing traffic through the Internet Gateway we’ve just created. The Security List will contain Security Rules that define what kind of ingress and egress traffic is allowed for the subnet. You can think of Security Rules as an additional layer of a virtual firewall.

This is how you create a simple Route Table that directs the outgoing traffic to the Internet Gateway:

$ igwOCID=`oci network internet-gateway list --vcn-id $vcnOCID --query "data [?\"display-name\"=='demoigw'].{OCID:\"id\"}" | grep OCID | awk -F'[\"|\"]' '{print $4}'`
$ echo $igwOCID
ocid1.internetgateway.oc1.eu-frankfurt-1.aaaaaaaaq5hnwmwn47sgdfwjqlmmj4iyg4kzlpzgc756ijgclrmr4apmqu3a
$ oci network route-table create --vcn-id $vcnOCID --display-name publicrt --route-rules "[{\"cidrBlock\":\"0.0.0.0/0\", \"networkEntityId\":\"$igwOCID\"}]" > /dev/null

oci-06-step3

This is how you create a Security List that allows the inbound traffic on ports 22 and 80 as well as any outbound traffic:

$ egress='[{"destination": "0.0.0.0/0", "protocol": "all" }]'
$ ingress='[{"source": "0.0.0.0/0", "protocol": "6", "tcpOptions": { "destinationPortRange": {"max": 22, "min": 22} } }, {"source": "0.0.0.0/0", "protocol": "6", "tcpOptions": { "destinationPortRange": {"max": 80, "min": 80} } }]'
$ oci network security-list create --vcn-id $vcnOCID --display-name publicsl --egress-security-rules "$egress" --ingress-security-rules "$ingress" > /dev/null

oci-06-step4

oci-06-step5

This is how you create a subnet:

$ rtOCID=`oci network route-table list --vcn-id $vcnOCID --query "data [?\"display-name\"=='publicrt'].{OCID:\"id\"}" | grep OCID | awk -F'[\"|\"]' '{print $4}'`
$ echo $rtOCID
ocid1.routetable.oc1.eu-frankfurt-1.aaaaaaaa2o4vufktzoslnrxamo6ydqlgtbahs2vwttsl4a3brbs2dz5rjgzq
$ slOCID=`oci network security-list list --vcn-id $vcnOCID --query "data [?\"display-name\"=='publicsl'].{OCID:\"id\"}" | grep OCID | awk -F'[\"|\"]' '{print $4}'`
$ echo $slOCID
ocid1.securitylist.oc1.eu-frankfurt-1.aaaaaaaapfspvn5uvqvoqeop7rgb2x744ldobxh3vz5cwjvzx2p6h2gbhaba
$ oci network subnet create --vcn-id $vcnOCID --display-name demosubnet --dns-label subnet  --cidr-block 192.168.10.0/30 --prohibit-public-ip-on-vnic false --availability-domain "feDV:EU-FRANKFURT-1-AD-1" --route-table-id "$rtOCID" --security-list-ids "[\"$slOCID\"]" > /dev/null

oci-06-step6

Compute instance

There are two families of compute instances on Oracle Cloud Infrastructure:

  • Bare Metal Hosts
  • Virtual Machines

Bare Metal Hosts are powered by dedicated, single-tenant hardware with no hypervisor on board. They are very powerful and can provide a good foundation for large-scale enterprise computing. At the time of writing, the smallest bare metal server uses 36 OCPUs (== 72 vCPUs). Well… you do not always need such a mighty server, do you? Virtual machines, on the other hand, are powered by multi-tenant, hypervisor-managed servers. As a result, you can provision a lightweight virtual machine starting from just 1 OCPU (== 2 vCPUs). A Shape defines the profile of hardware resources of a compute instance. For example, the VM.Standard2.1 Shape delivers 1 OCPU and 15GB memory.
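If you are not sure which Shapes your tenancy can use, the CLI can list them; this assumes the default compartment is configured in ~/.oci/oci_cli_rc as described in the CLI Setup recipe:

$ oci compute shape list --output table --query "data [*].{Shape:\"shape\"}"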

An Image is a template that specifies the instance’s preinstalled software, including the Operating System. It is used to initialize the boot volume that gets associated with the instance during provisioning. Oracle Cloud Infrastructure provides default images with various Operating Systems. You can create your own custom images, usually on top of the existing images. Let’s fetch the OCID of the newest CentOS 7 image and save it to a variable:

$ imageOCID=`oci compute image list --operating-system "CentOS" --operating-system-version 7 --sort-by TIMECREATED --query "data [0].{DisplayName:\"display-name\", OCID:\"id\"}" | grep OCID | awk -F'[\"|\"]' '{print $4}'`
$ echo $imageOCID
ocid1.image.oc1.eu-frankfurt-1.aaaaaaaav3frw3wod63glppeb2hhh4ao7c6kntgt5jvxy4imiihclgkta7ja

An instance has to exist within a subnet. Let’s fetch the OCID of the subnet we’ve created and save it to a variable:

$ subnetOCID=`oci network subnet list --vcn-id $vcnOCID --query "data [?\"display-name\"=='demosubnet'].{OCID:\"id\"}" | grep OCID | awk -F'[\"|\"]' '{print $4}'`
$ echo $subnetOCID
ocid1.subnet.oc1.eu-frankfurt-1.aaaaaaaanx3hypjebusub6w5lavut3tmkknmvkn4bxdfcgk5l677haurswaa

A Linux-based compute instance requires an SSH public key to enable remote access. Please prepare a key pair before proceeding. See: SSH Keypair recipe, if needed. This guide assumes you store the public key in ~/.ssh/oci_id_rsa.pub.

Oracle Cloud Infrastructure compute instances can use Cloud-Init user data to perform initial configuration automatically, immediately after provisioning. Let’s prepare a cloud-config file ~/cloud-init/michalsvm.cloud-config that installs the web server, publishes a static page and opens port 80 in the OS firewall (a sketch of such a file is shown in the Terraform recipe earlier in this section).

Now we are ready to launch the very first instance. This is how you do it using Oracle Cloud Infrastructure CLI:

oci compute instance launch --display-name michalsvm --availability-domain "feDV:EU-FRANKFURT-1-AD-1" --subnet-id "$subnetOCID" --private-ip 192.168.10.2 --image-id "$imageOCID" --shape VM.Standard2.1 --ssh-authorized-keys-file ~/.ssh/oci_id_rsa.pub --user-data-file ~/cloud-init/michalsvm.cloud-config --wait-for-state RUNNING > /dev/null

It should take up to 1 or 2 minutes to complete the provisioning process. As mentioned before, our instance will get an ephemeral public IP address from the OCI public IPv4 address pool. Let’s find out the address our instance was given:

$ vmOCID=`oci compute instance list --display-name michalsvm --lifecycle-state RUNNING | grep \"id\" | awk -F'[\"|\"]' '{print $4}'`
$ echo $vmOCID
ocid1.instance.oc1.eu-frankfurt-1.abtheljtf6ta2df3bmq5h3v3acaa3muqojcjplldv2mam6irvwoucyg4vxua
$ oci compute instance list-vnics --instance-id "$vmOCID" | grep public-ip | awk -F'[\"|\"]' '{print $4}'
130.61.93.17

You will still need to wait a few more seconds until the boot process completes and the SSH daemon starts accepting connections. This is how you connect to the machine:

$ ssh -i .ssh/oci_id_rsa opc@130.61.93.17
The authenticity of host '130.61.93.17 (130.61.93.17)' can't be established.
ECDSA key fingerprint is SHA256:tKs9JT5ubEVtBuKdKqux5ckktcLPRBrwWvWsjUaec4Q.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '130.61.93.17' (ECDSA) to the list of known hosts.
-bash-4.2$

oci-06-step7

Finally, let’s check if we can access the web server:

oci-06-step8

SSH Keypair

This guide shows how to create a new SSH Key Pair required to access Compute instances remotely.

Remote access to Oracle Cloud Infrastructure Compute instances that use Linux-based images is possible with public key authentication through the SSH v2 protocol, also known as Secure Shell. The SSH protocol employs asymmetric cryptography to negotiate the parameters of a secure tunnel (symmetric encryption session key, cipher algorithm) for the communication between client and server.

How does it work

Asymmetric cryptography requires the presence of a key pair that consists of a private key and a public key. The public key gets uploaded into the newly created instance during provisioning and appended to .ssh/authorized_keys file on that instance. You will be able to connect to that instance from any client that has the corresponding private key. As soon as you log into the instance, you can add further public keys to allow multiple users to access the cloud-based host.
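For example, once you are logged in, granting access to one more user boils down to appending their public key to that file, one key per line (the file name below is just an example):

$ cat another_user_key.pub >> ~/.ssh/authorized_keys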

Generating the key on Linux and Mac

We will use the ssh-keygen program, which belongs to the open source OpenSSH suite, to generate the authentication keys for use with the SSH v2 protocol. We are going to employ the RSA algorithm (-t rsa) and use the recommended 2048 bits (-b 2048). You will be prompted to enter a new passphrase for the newly generated private key twice.

$ ssh-keygen -t rsa -b 2048 -C "michal@cloudcomputingrecipes.com" -f ~/.ssh/oci_id_rsa
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /Users/michal/.ssh/oci_id_rsa.
Your public key has been saved in /Users/michal/.ssh/oci_id_rsa.pub.
The key fingerprint is:
SHA256:jib1OoFAkf6pYqBUXQhZN+Tody74P8NFQQJsmOwSrnA michal@cloudcomputingrecipes.com
The key's randomart image is:
+---[RSA 2048]----+
|  .+=.*=....     |
|  oo =++. ..     |
| o. +.o.    .    |
|. E+.o     .     |
|..oo.+..S..      |
|.o  +.++o  .     |
|+  ...oo+..      |
|o..  oo..+       |
|..    .o..o      |
+----[SHA256]-----+

This will create two corresponding keys in ~/.ssh folder:

  • oci_id_rsa – the private key
  • oci_id_rsa.pub – the public key
$ ls -l .ssh/ | grep oci_id_rsa
-rw-------  1 michal  staff  1766 Oct  3 18:09 oci_id_rsa
-rw-r--r--  1 michal  staff   414 Oct  3 18:09 oci_id_rsa.pub
$ cat .ssh/oci_id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC5AK7YSLMZSphuJVaZRzZJpE99ayKGeGM7C6Vz4RBIhtyQ13ArngMJwBaHCvr8O5lWdxssoJHB7TfRiaXjgorbB398SfbKRiDvZZAzuocOxLmkD564i6d5bBwFFc3yTsd20pn7npqgN9727pX0qnFY5N4NPBXClOfWxyf1R3ecTSOq8T+GU9P/jbDwJOrTGFwQEYM+mKJgSFMgMLi5MBQ8brc14Xr5NclKIBnFl7taxRAkFFD1YfBgG+hl7i7gc3NaItxQs/UDJwaqq+il7nb+ezny/9Ptf1lMHy8EFh5ER6PD5xsRfNlJ1LdkPLYLhiVHP4aiUkFrzsvddj8QFX4Z michal@cloudcomputingrecipes.com

Now, you can use the public key when provisioning a new Compute instance.

More information in “UNIX and Linux System Administration Handbook, 5th Edition” – Section 27.7 “SSH, the Secure Shell” or on https://www.ssh.com/ssh/protocol

 

OCI #5 – CLI Setup

This recipe shows how to install and configure Oracle Cloud Infrastructure CLI on your client machine.

Before you start, it is recommended that you read API Signing Key recipe to understand the concept of request signing.

Oracle Cloud Infrastructure REST API

Oracle Cloud Infrastructure exposes a comprehensive REST API to manage OCI resources and configurations. Every API request must be signed with an Oracle Cloud Infrastructure API Request Signature and sent using the secure HTTPS protocol with TLS 1.2. Signing a request is a non-trivial, multi-step process. This is why you usually use tools like the CLI, Terraform or custom SDK-based programs that encapsulate API calls and sign the requests on your behalf. All these tools eventually make calls to the OCI REST API, therefore the OCI REST API is the ultimate gateway to the cloud management plane.

Oracle Cloud Infrastructure CLI

Oracle Cloud Infrastructure CLI is a Python-based command line utility that encapsulates calls to the OCI REST API. This simplifies the way you consume the API, because the OCI CLI takes over the burden of request signing. Furthermore, you can script API consumption using the mature ecosystem of Python libraries.

Installing OCI CLI

Oracle has prepared two installation scripts: one for Linux/macOS with bash and one for Windows with PowerShell. The two scripts perform similar steps. They install Python and virtualenv, create an isolated Python environment, install the latest version of the CLI and update the PATH variable. Alternatively, you can perform all these steps manually.

Installing OCI CLI on Linux or macOS

  1. Execute the following command and follow the console-based installation wizard:
    bash -c "$(curl -L https://raw.githubusercontent.com/oracle/oci-cli/master/scripts/install/install.sh)"

Installing OCI CLI on Windows

  1. Launch Powershell console with Run as Administrator option
  2. Execute the following commands and follow the console-based installation wizard:
    Set-ExecutionPolicy RemoteSigned
    powershell -NoProfile -ExecutionPolicy Bypass -Command "iex ((New-Object System.Net.WebClient).DownloadString('https://raw.githubusercontent.com/oracle/oci-cli/master/scripts/install/install.ps1'))"

Configuring OCI CLI

The CLI features an embedded configuration wizard that generates an API Signing key pair and creates the CLI configuration file based on the parameters given during the wizard run. You should identify a few Oracle Cloud Identifiers (OCIDs) before you launch the wizard.

  1. Sign in to OCI Console
  2. Go to Identity ➟ Users and copy the OCID of the user on whose behalf OCI CLI prepares, signs and sends OCI REST API requests.
  3. Go to Administration ➟ Tenancy Details and copy the OCID of the tenancy.
  4. Open a new command line window on your client machine and execute:
    oci setup config
  5. Provide the user OCID, the tenancy OCID and the region you are working with.
  6. Say Y(es) when asked if you want to generate a new RSA key pair, unless you prefer to use your own API Signing Key. To learn more on that topic, have a look at API Signing Key recipe.
  7. Say N(o) when asked if you want to write your private key passphrase to the config file, unless you do not mind storing it in plain text.
  8. If you use default options for the remaining parameters, your config file will be generated as ~/.oci/config (a sample of the generated file is shown right after this list)
  9. Finally, you should upload the generated public key to OCI and associate it with the user you’ve chosen in the second step. You can learn how to do it in API Signing Key recipe.
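For reference, a freshly generated ~/.oci/config usually looks similar to the following; all values below are placeholders and the key_file path reflects the wizard’s default:

[DEFAULT]
user=ocid1.user.oc1..{your-user-ocid}
fingerprint={your-public-key-fingerprint}
key_file=~/.oci/oci_api_key.pem
tenancy=ocid1.tenancy.oc1..{your-tenancy-ocid}
region=eu-frankfurt-1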

The majority of, if not all, OCI REST API resource operations require a Compartment OCID. You can define default values for input parameters to OCI CLI commands, to avoid unnecessary typing every time you invoke a CLI command. To add a default value for the Compartment OCID, perform these steps:

  1. Sign in to OCI Console
  2. Go to Identity ➟ Compartments and copy the OCID of the compartment you would like to work with. If needed, create a new Compartment.
  3. Create a new file: ~/.oci/oci_cli_rc and place the following lines there:
    [DEFAULT]
    compartment-id = placeHereTheCompartmentOCID

Now, you should be ready to test OCI CLI. Let’s list the available CentOS Images:

oci compute image list --operating-system CentOS --output table --query "data [*].{Image:\"display-name\"}"
+--------------------------+
| Image                    |
+--------------------------+
| CentOS-7-2018.09.19-0    |
| CentOS-7-2018.08.15-0    |
| CentOS-7-2018.06.22-0    |
| CentOS-6.9-2018.06.22-0  |
| CentOS-6.10-2018.09.19-0 |
| CentOS-6.10-2018.08.15-0 |
+--------------------------+

You can find the complete reference of CLI commands here.

OCI #4 – API Signing Key

This guide shows how to create a new API Signing Key Pair that is required to use the Oracle Cloud Infrastructure REST API.

Before you start, it is recommended that you create a dedicated service admin user for Oracle Cloud Infrastructure, instead of using your Oracle Cloud superuser. See: Service administrator account best practice, if needed.

Oracle Cloud Infrastructure exposes a comprehensive REST API to manage OCI resources and configurations. Every successful API call results in a management task being performed on behalf of a particular user defined in OCI. OCI must know how to associate an OCI REST API request with a particular user. This is done through signing the requests.

Signing a request is a non-trivial, multi-step process. First, parts of the request are used to compose the signing string. Next, a private key is used to create the signature from the signing string. Finally, the signature is added, together with some metadata, to the Authorization header of the request. In order to authenticate the client and authorize the requested operation, the corresponding public key has to be uploaded and associated with the given OCI user.

Generating the key pair

We will use the openssl program to generate the API Signing Key Pair. We are going to employ the RSA algorithm, use the recommended 2048 bits and generate the keys in PEM format. You will be prompted to enter a new passphrase for the newly generated key twice. Remember to restrict access to the private key.

$ openssl genrsa -out apiuser.pem -aes128 2048
Generating RSA private key, 2048 bit long modulus
.............+++
..+++
e is 65537 (0x10001)
Enter pass phrase for apiuser.pem:
Verifying - Enter pass phrase for apiuser.pem:
$ chmod go-r apiuser.pem
$ ls -l | grep pem
-rw-------    1 michal  staff     1766 Oct  3 21:24 apiuser.pem
$ openssl rsa -pubout -in apiuser.pem -out apiuser.pem.pub
Enter pass phrase for apiuser.pem:
writing RSA key
$ ls -l | grep pem
-rw-------    1 michal  staff     1766 Oct  3 21:24 apiuser.pem
-rw-r--r--    1 michal  staff      451 Oct  3 21:26 apiuser.pem.pub
$ cat apiuser.pem.pub
-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA2UEWK5p5bX50/IyBsFke
VbhCLta42J5IgfMmLN7FRjOGT+CbL6aYHfRgNxvUgWqSYGbwgtNvOnp7Fre397Sa
qYVcH3w0R2O1WbQJJJmuqNhjQ01N48odN49nqeZQF9ED7SshBM+fAU7Dtt9XTuYG
5wnpK0DRlw4BFwfXoaLQJ4Gxhpsr2eA/JMCpJs4dFIEjTMshQBQ9JLYxBAo8cU6Z
s5kwRG7ZpygLVRGbpUiu4Iwu5fm2DhWNLQRHGBTjMFM9EfWRBawIoKHXBUMIQB4t
GMMqA7dFpKlJRhAPrM/Ai0k4fCNJOKfzLLTDOC3DGDcEZlljh17MiCApHWoHnewS
iQIDAQAB
-----END PUBLIC KEY-----

Uploading the public key

  1. Go to Identity ➟ Users and select the user you would like to be “api-enabled”.
  2. Click on Add Public Key and paste the newly generated public key.

oci-04-step1

You will also need the fingerprint of the key to sign the requests. The fingerprint for each associated public key can be found in the API Keys tab.

oci-04-step2
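If you prefer to compute the fingerprint locally, for example to compare it against the value displayed in the Console, you can derive it from the key with OpenSSL; this assumes apiuser.pem is the private key generated earlier and you will be prompted for its passphrase:

$ openssl rsa -pubout -outform DER -in apiuser.pem | openssl md5 -c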

OCI #3 – Service Admin user

This recipe shows how to create a new service admin user dedicated for Oracle Cloud Infrastructure.

Before you start, you must have an Oracle Cloud account. See: Cloud Account recipe, if needed.

Oracle Cloud Infrastructure uses a dedicated dashboard called OCI Console. There are two types of users that can sign in to OCI Console: federated and non-federated. The identity of federated users is asserted by a different system called an identity provider using Single Sign-on. Oracle Identity Cloud Service and Microsoft Active Directory Federation Services are two examples of an identity provider.

Superuser

The very first account that gets created for your Cloud Account becomes the super-user for all your Oracle Cloud PaaS and IaaS services. Using Unix naming, you can think of it as the root user for your account. You can perform all tasks using this account and provision all kinds of available PaaS instances and IaaS resources. Furthermore, you will most probably use this account to access Oracle Support, in case you would like to read through the knowledge base or submit a service request. For day-to-day work, this account is too powerful.

Service admin

It is recommended that you create a separate admin user account that is solely focused on Oracle Cloud Infrastructure. In order to create a new federated user to act as your Oracle Cloud Infrastructure admin, using Oracle Cloud Single Sign-on powered by Oracle Identity Cloud Service, follow these steps:

  1. Sign in to Oracle Cloud
  2. In My Services Console, in the top right corner, click on Users.
  3. Make sure you are in Users tab and click on Add
  4. Provide the details for the new user you are creating
  5. While in the Service Access step, scroll down to Compute and select OCI_Administrator Service Entitlement
    oci-03-step1
  6. Click Finish

You should soon receive an activation link at the e-mail address you used for the newly registered user account. The link will lead you to the initial password setup page. If you do not receive the link within a few minutes, just click on the Can’t sign in link on the user login page and provide your login (usually the e-mail address).

What has just happened? You’ve created a new federated user account managed in Oracle Identity Cloud Service (IDCS). IDCS is the default identity provider for federated users in Oracle Cloud Infrastructure. OCI_Administrator group members in IDCS are mapped to the Administrators group in OCI.

From now on, you can use this account to sign in to Oracle Cloud Infrastructure. To learn more, visit Accessing OCI Console recipe.

Fine-grained security model

Cloud Security Alliance’s Security Guidance describes the concept of “lower-level administrative accounts”, called “service administrators”, “that can only manage parts of the service”. You could imagine creating additional admin users that are allowed to manage only parts (selected types of resources inside particular compartments) of an Oracle Cloud Infrastructure tenancy. This can be achieved with a blend of OCI Policies and non-federated user accounts that you define directly in Oracle Cloud Infrastructure. You will soon find a dedicated recipe that tells more about it.