Part 1: Building Simple Infra in Oracle Cloud using Terraform

Mohammed Binsabbar
Jun 14, 2021

In this blog, I will show you how to create a simple, basic infrastructure entirely as code. We will be using Terraform 0.14 and pre-built modules from https://github.com/Binsabbar/oracle-cloud-terraform.

What you will build

By the end of this blog, you will have built a virtual cloud network (VCN) that contains two subnets and two instances. You will also apply network security group rules to a compute instance to allow incoming SSH connections on port 22. The full code used in this article can be found here.

Requirements:

  • An Oracle Cloud account
  • Terraform 0.14 installed locally

Set up your Oracle API Key

1. Log in to your Oracle Cloud account.

2. Go to Identity & Security, then click on Users.

3. Click on your username, then API Keys.

4. Click Add API Key and let Oracle generate an API key pair for you.

5. Download the private key, then click Add.

6. Run mkdir ~/.oci and move the downloaded key there. Rename it to oci_id_rsa.pem (this is the path we will reference later in .auto.tfvars).

7. Copy your configuration from the UI and store it in ~/.oci/config. You can find it by clicking on View Configuration File.
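
For reference, the configuration Oracle shows you looks roughly like this (all values below are placeholders; key_file must point to the key you saved in step 6):

[DEFAULT]
user=ocid1.user.oc1..aaaaaaaaexampleuser
fingerprint=aa:bb:cc:dd:ee:ff:00:11:22:33:44:55:66:77:88:99
tenancy=ocid1.tenancy.oc1..aaaaaaaaexampletenancy
region=eu-frankfurt-1
key_file=~/.oci/oci_id_rsa.pem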

Prepare Terraform local environment

Fire up VSCode or your favourite editor, and let’s get started. Our directory will contain the following:

main.tf
configurations.tf
.auto.tfvars
provider.tf
variables.tf

Here is the provider.tf file:

// provider.tf
provider "oci" {
  tenancy_ocid     = var.tenancy_ocid
  user_ocid        = var.user_ocid
  fingerprint      = var.fingerprint
  private_key_path = var.private_key_path
  region           = var.region
}

We declare the required variables in the variables.tf file:

variable "tenancy_ocid" { type = string }
variable "user_ocid" { type = string }
variable "fingerprint" { type = string }
variable "private_key_path" { type = string }
variable "region" { type = string }

Lastly, we set the values for the above variables in the .auto.tfvars file. Copy the values from the ~/.oci/config file.

// .auto.tfvars
tenancy_ocid     = ""
user_ocid        = ""
fingerprint      = ""
private_key_path = "~/.oci/oci_id_rsa.pem"
region           = ""

Before we start building our infra, we need to tell Terraform which version of the oci provider it should use:

// configurations.tf
terraform {
  required_providers {
    oci = {
      source  = "hashicorp/oci"
      version = "~> 4.16.0"
    }
  }
}

Run terraform init in the current directory.

Now your work environment is ready. Your infrastructure state will be stored locally on your machine. You can change that and store it in the cloud by updating the configurations.tf file accordingly.
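
For example, one option (purely optional, and not something this tutorial depends on) is to keep the state in an OCI Object Storage bucket through Terraform's S3-compatible backend. The bucket name, namespace, and region below are placeholders, and authentication details depend on your setup; treat this as a rough sketch rather than a drop-in config:

// configurations.tf (optional sketch: remote state via OCI Object Storage's S3-compatible API)
terraform {
  backend "s3" {
    bucket   = "terraform-state"               # an existing Object Storage bucket (placeholder)
    key      = "oracle-infra/terraform.tfstate"
    region   = "eu-frankfurt-1"                # your region (placeholder)
    endpoint = "https://NAMESPACE.compat.objectstorage.eu-frankfurt-1.oraclecloud.com" # your tenancy's namespace

    # required when pointing the S3 backend at a non-AWS endpoint
    skip_region_validation      = true
    skip_credentials_validation = true
    skip_metadata_api_check     = true
    force_path_style            = true

    # credentials come from an OCI "Customer Secret Key" (an S3-style access/secret key pair),
    # e.g. exported as AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY
  }

  # keep the existing required_providers block here as well
}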

Building Virtual Cloud Network

You need a VCN in order to provision some of your infrastructure. Hence, it is the 1st thing we will create. The network module can be found here.

Examining the module's variables.tf file, we can see the inputs it expects. In short, the module expects the following:

  • Name for the network: we will use the name mynetwork.
  • CIDR block for the VCN: the default value is 192.168.0.0/16. Since it is the default, we do not have to set it.
  • Map of public subnet and private subnet objects: we will create two subnets, private1 and public1.
  • List of ingress ports for public subnets: since we are going to open port 22 with a network security group instead, we will leave this list empty so the module does not create any ingress rules for the public subnet.
  • Compartment ID: we will use the root compartment, whose OCID is the tenancy OCID.

Read more about Security Lists and Network Security Groups to understand the difference between them.

Using the network module

Create a main.tf file with the following:

module "network" {
source = "github.com/Binsabbar/oracle-cloud-terraform//modules/network?ref=v1.0"
name = "mynetwork"
compartment_id = var.tenancy_ocid
cidr_block = "192.168.0.0/16"
allowed_ingress_ports = [] # we do not want to allow any ingress ports using security list, we will network security group for that
private_subnets = {
"private1" = {
cidr_block = "192.168.1.0/24"
security_list_ids = []
optionals = {}
}
}
public_subnets = {
"public1" = {
cidr_block = "192.168.2.0/24"
security_list_ids = []
optionals = {}
}
}
}
  • Now run terraform init to download the module.
  • Run terraform plan to check what Terraform will create. Here is a snippet of the output:
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # module.network.oci_core_default_dhcp_options.dhcp_options will be created
  + resource "oci_core_default_dhcp_options" "dhcp_options" {
      + defined_tags               = (known after apply)
      + display_name               = (known after apply)
      + freeform_tags              = (known after apply)
      + id                         = (known after apply)
      + manage_default_resource_id = (known after apply)
      + state                      = (known after apply)
      + time_created               = (known after apply)

      + options {
          + custom_dns_servers  = []
          + search_domain_names = (known after apply)
          + server_type         = "VcnLocalPlusInternet"
          + type                = "DomainNameServer"
        }
    }

  # module.network.oci_core_default_route_table.public_route_table will be created
  + resource "oci_core_default_route_table" "public_route_table" {
      + defined_tags               = (known after apply)
      + display_name               = "defaultRouteTable"
      + freeform_tags              = (known after apply)
      + id                         = (known after apply)
      + manage_default_resource_id = (known after apply)
      + state                      = (known after apply)
      + time_created               = (known after apply)

      + route_rules {
          + cidr_block        = (known after apply)
          + description       = (known after apply)
          + destination       = "0.0.0.0/0"
          + destination_type  = "CIDR_BLOCK"
          + network_entity_id = (known after apply)
        }
    }

...

Plan: 10 to add, 0 to change, 0 to destroy.
  • Run terraform apply -auto-approve to create your VCN.
  • Open the Oracle Cloud console and you will see that your VCN and its two subnets have been created.

Creating Compute instances

Now that we have a VCN, we can create instances and place them inside the subnets we created. We will be using this module.

Checking the module's variables.tf file, we can see what it expects as input: basically, a map of instance configuration objects. Let's talk about some of the fields:

  • availability_domain_name: each Oracle Cloud region has a number of availability domains (ADs); we put the name of the AD here.
  • fault_domain_name: each AD contains 3 fault domains (FDs); here we specify which fault domain to use.
  • autherized_keys: a list of SSH RSA public keys that you can use to SSH into the instance once it is created.

Terraform data sources can help us get the 1st two names. Still in main.tf, let's use data sources to ask Oracle for the AD and FD names. Add the following two blocks:

data "oci_identity_availability_domain" "ad" {
compartment_id = var.tenancy_ocid
ad_number = 1
}
data "oci_identity_fault_domains" "fd" {
availability_domain = data.oci_identity_availability_domain.ad.name
compartment_id = var.tenancy_ocid
}

The above code retrieves the name of the 1st AD in the account and all the FDs under that AD.

Before creating our instances, we need an SSH RSA key pair. If you do not have one already, generate one using ssh-keygen. We will use the public key for the autherized_keys parameter of the instances module.
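
As a small convenience (an optional tweak, not something the module requires), you can read the public key from disk with Terraform's built-in file() and pathexpand() functions instead of pasting it into main.tf; the path below assumes the default ssh-keygen output location:

# optional: load the key from ~/.ssh/id_rsa.pub instead of hard-coding it
# pathexpand() resolves the leading "~", trimspace() drops the trailing newline
autherized_keys = trimspace(file(pathexpand("~/.ssh/id_rsa.pub")))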

Lastly, we can use the common-config module to get the OS image and VM shape names. Check that module's output.tf file to see what it exposes.

Add the following to your main.tf file:

module "common" {
source = "github.com/Binsabbar/oracle-cloud-terraform//modules/common-config?ref=v1.0"
}

To create instances, add the following instance configuration to your main.tf:

module "instances" {
source = "github.com/Binsabbar/oracle-cloud-terraform//modules/instances?ref=v1.0"
instances = {
"machine-1" = {
availability_domain_name = data.oci_identity_availability_domain.ad.name
fault_domain_name = data.oci_identity_fault_domains.fd.fault_domains[0].name
compartment_id = var.tenancy_ocid
volume_size = 50
state = module.common.instance_config.instance_state.RUNNING
autherized_keys = "PUT_YOUR_SSH_PUB_RSA_KEY_HERE"
config = {
shape = module.common.instance_config.shapes.micro
image_id = module.common.instance_config.images_ids.ubuntu_20
network_sgs_ids = []
subnet = module.network.public_subnets.public1
}
}

"machine-2" = {
availability_domain_name = data.oci_identity_availability_domain.ad.name
fault_domain_name = data.oci_identity_fault_domains.fd.fault_domains[0].name
compartment_id = var.tenancy_ocid
volume_size = 50
state = module.common.instance_config.instance_state.RUNNING
autherized_keys = "PUT_YOUR_SSH_PUB_RSA_KEY_HERE"
config = {
shape = module.common.instance_config.shapes.micro
image_id = module.common.instance_config.images_ids.ubuntu_20
network_sgs_ids = []
subnet = module.network.private_subnets.private2
}
}
}
}

Run terraform apply -auto-approve to create the two instances above.

Configuring Network Security Groups for SSH Access

By default, all network access is denied, so we must create security rules to open connections to and between instances within the network.

For the 1st instance, machine-1, we need to allow incoming SSH connections on port 22 from your IP only (use curl ifconfig.co to get your IP).

For the 2nd instance, machine-2, we need to allow incoming SSH connections on port 22 from the public1 subnet only.

Let’s use the network-sg module to achieve the above. Add the following to your main.tf file:

module "nsg" {
source = "github.com/Binsabbar/oracle-cloud-terraform//modules/network-sg?ref=v1.0"
vcn_id = module.network.vcn.id
compartment_id = var.tenancy_ocid
network_security_groups = {"machine-1-rules" = {
"ssh-from-my-ip" = {
direction = "INGRESS"
protocol = "tcp"
port = 22
ips = ["YOUR_IP_HERE"]
}
}
"machine-2-rules" = {
"ssh-from-public1-subnet" = {
direction = "INGRESS"
protocol = "tcp"
port = 22
ips = [module.network.public_subnets.public1.cidr_block]
}
}
}
}

Before you apply the changes, update the instances module config blocks for machine-1 and machine-2 to use the network security groups that we declared above:

module "instances" {
source = "github.com/Binsabbar/oracle-cloud-terraform//modules/instances?ref=v1.0"
instances = {
"machine-1" = {
availability_domain_name = data.oci_identity_availability_domain.ad.name
fault_domain_name = data.oci_identity_fault_domains.fd.fault_domains[0].name
compartment_id = var.tenancy_ocid
volume_size = 50
state = module.common.instance_config.instance_state.RUNNING
autherized_keys = "PUT_YOUR_SSH_PUB_RSA_KEY_HERE"
config = {
shape = module.common.instance_config.shapes.micro
image_id = module.common.instance_config.images_ids.ubuntu_20
network_sgs_ids = [module.nsg.networks_sg.machine-1-rules]
subnet = module.network.public_subnets.public1
}
}
"machine-2" = {
availability_domain_name = data.oci_identity_availability_domain.ad.name
fault_domain_name = data.oci_identity_fault_domains.fd.fault_domains[0].name
compartment_id = var.tenancy_ocid
volume_size = 50
state = module.common.instance_config.instance_state.RUNNING
autherized_keys = "PUT_YOUR_SSH_PUB_RSA_KEY_HERE"
config = {
shape = module.common.instance_config.shapes.micro
image_id = module.common.instance_config.images_ids.ubuntu_20
network_sgs_ids = [module.nsg.networks_sg.machine-2-rules]
subnet = module.network.public_subnets.public1
}
}
}
}

Run terraform apply -auto-approve.

SSH to the instances

We need to know the public IP address of machine-1 to be able to SSH to it. Add this last block to your main.tf file, which will print out the machines' IPs:

output "machine-1-ip" {value = module.instances.instances.machine-1.public_ip
}
output "machine-2-private-ip" {value = module.instances.instances.machine-2.private_ip
}

Run terraform apply -auto-approve and copy the IPs from the output (you can also print them at any time with terraform output).

(Screenshot: terraform apply output showing the machines' IPs.)

Now your 1st basic Oracle Cloud infrastructure is ready for use!

(Screenshot: an SSH session on machine-1.)

Tip: SSH to machine-2 using ProxyJump

Add the following to your ~/.ssh/config file:

Host machine-1
  Hostname PUBLIC_IP_HERE
  User ubuntu
  IdentityFile ~/.ssh/id_rsa

Host machine-2
  Hostname PRIVATE_IP_HERE
  User ubuntu
  IdentityFile ~/.ssh/id_rsa
  ProxyJump machine-1
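
With this in place, running ssh machine-2 hops through machine-1 first, so you can reach the private instance without exposing it to the internet.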

Come back to check out the rest of this five-part Oracle Cloud Infrastructure series, continuing with Part 2 by Abeer Alotaibi.

Thanks,

Binsabbar
