Using Terraform and Ansible to run a Node.js app on Google Compute Engine

Today I've taken a first look at Terraform and Ansible. I've used Terraform to set up an infrastructure on Google Compute Engine consisting of three NGINX reverse proxies behind a load balancer, which forward incoming requests to a small Node.js app. If you'd like to follow along, check out the notes I've taken below or grab the final code on GitHub.

Preparing our Project

We'll create our initial project structure first:

$ tree terraform-ansible-nodejs-google-compute-engine
terraform-ansible-nodejs-google-compute-engine
├── ansible
│   └── templates
└── terraform   
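
Assuming you're starting from an empty directory, one way to create this layout in a single command (using bash/zsh brace expansion) is:

$ mkdir -p terraform-ansible-nodejs-google-compute-engine/{ansible/templates,terraform}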

Creating a Project on Google Cloud Platform

If you haven't done so already, sign up for Google Cloud Platform, add your billing details and create a new project. I named mine "compute-engine-playground".

You will need to generate SSH keys as follows:

$ ssh-keygen -f ~/.ssh/gcloud_id_rsa
# press <Enter> both times when asked for a passphrase

Download your service account credentials from the Google Cloud Console: for Terraform as JSON to terraform/google-compute-engine-account.json, and for Ansible as P12 to ansible/pkey.p12. You'll need to convert the key for Ansible by running the following command from within the ansible directory:

$ openssl pkcs12 -in pkey.p12 -passin pass:notasecret -nodes -nocerts | openssl rsa -out pkey.pem
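
If you want to make sure the conversion worked, you can optionally verify the resulting PEM key:

$ openssl rsa -in pkey.pem -check -noout    # should print "RSA key ok"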

You should have the following files now:

$ tree terraform-ansible-nodejs-google-compute-engine 
terraform-ansible-nodejs-google-compute-engine
├── ansible
│   ├── pkey.p12
│   ├── pkey.pem
│   └── templates
└── terraform
    └── google-compute-engine-account.json

Using Terraform to set up the infrastructure on Google Cloud Platform

Based on Terraform's example of a basic two-tier architecture in Google Cloud, we start off by installing HashiCorp's Terraform on our machines as described here.
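
A quick sanity check that the binary is available on your PATH:

$ terraform version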

Within the terraform directory, you should create a file named variables.tf now:

variable "region" {
  default = "europe-west1"
}

variable "region_zone" {
  default = "europe-west1-b"
}

variable "project_name" {
  description = "The ID of the Google Cloud project"
}

variable "credentials_file_path" {
  description = "Path to the JSON file used to describe your account credentials"
  default = "google-compute-engine-account.json"
}

variable "public_key_path" {
  description = "Path to file containing public key"
  default = "~/.ssh/gcloud_id_rsa.pub"
}

The next step is to define the entire infrastructure in terraform/setup.tf:

provider "google" {
  region = "${var.region}"
  project = "${var.project_name}"
  credentials = "${file("${var.credentials_file_path}")}"
}

resource "google_compute_http_health_check" "default" {
  name = "tf-www-basic-check"
  request_path = "/"
  check_interval_sec = 1
  healthy_threshold = 1
  unhealthy_threshold = 10
  timeout_sec = 1
}

resource "google_compute_target_pool" "default" {
  name = "tf-www-target-pool"
  instances = ["${google_compute_instance.www.*.self_link}"]
  health_checks = ["${google_compute_http_health_check.default.name}"]
}

resource "google_compute_forwarding_rule" "default" {
  name = "tf-www-forwarding-rule"
  target = "${google_compute_target_pool.default.self_link}"
  port_range = "80"
}

# web (nginx reverse proxies)
resource "google_compute_instance" "www" {
  count = 3

  name = "tf-www-${count.index}"
  machine_type = "f1-micro"
  zone = "${var.region_zone}"
  tags = ["web"]

  disk {
    image = "ubuntu-os-cloud/ubuntu-1404-trusty-v20160314"
  }

  network_interface {
    network = "default"
    access_config {
      # Ephemeral
    }
  }

  metadata {
    ssh-keys = "root:${file("${var.public_key_path}")}"
  }

  service_account {
    scopes = ["https://www.googleapis.com/auth/compute.readonly"]
  }
}

# app (Node.js)
resource "google_compute_instance" "app" {
  name = "tf-app"
  machine_type = "f1-micro"
  zone = "${var.region_zone}"
  tags = ["app"]

  disk {
    image = "ubuntu-os-cloud/ubuntu-1404-trusty-v20160314"
  }

  network_interface {
    network = "default"
    access_config {
      # Ephemeral
    }
  }

  metadata {
    ssh-keys = "root:${file("${var.public_key_path}")}"
  }

  service_account {
    scopes = ["https://www.googleapis.com/auth/compute.readonly"]
  }
}

resource "google_compute_firewall" "default" {
  name = "tf-www-firewall"
  network = "default"

  allow {
    protocol = "tcp"
    ports = ["80"]
  }

  source_ranges = ["0.0.0.0/0"]
  target_tags = ["web"]
}

output "Public IP (Load Balancer)" {
  value = "${google_compute_forwarding_rule.default.ip_address}"
}

output "NGINX Instance IPs" {
  value = "${join(" ", google_compute_instance.www.*.network_interface.0.access_config.0.assigned_nat_ip)}"
}

output "App IP" {
  value = "${google_compute_instance.app.0.network_interface.0.access_config.0.assigned_nat_ip}"
}

As soon as you run terraform apply from within the terraform directory, you'll be prompted for every variable that has no default value (here, the project ID). To avoid retyping it on each run, you can store it in terraform.tfvars. Afterwards your infrastructure should be created.
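
For illustration, a minimal terraform.tfvars and the usual plan/apply cycle might look like this (the project ID below is just an example; use the ID of the project you created earlier):

$ cat terraform.tfvars
project_name = "compute-engine-playground"

$ terraform plan     # preview the resources that will be created
$ terraform apply    # create the infrastructure and print the outputs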

When using JetBrains IntelliJ IDEA, WebStorm or a similar IDE, consider installing a plugin that adds support for .tf files.

Setting up the NGINX reverse proxies and the Node.js app with Ansible

Next, we'll set up the NGINX reverse proxies and the Node.js app with Ansible. After installing Ansible, it's worth taking a look at Ansible's Google Cloud Platform guide. We can use Ansible's dynamic inventory script for Google Cloud Platform, which lets us address our resources by tags and other properties instead of having to store individual IPs or other mutable information locally.

To get the inventory script working, download gce.py to ansible/gce.py and gce.ini to ansible/gce.ini, then fill in your credentials in ansible/gce.ini.
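
Before writing the playbook, you can check that the inventory script is able to reach your project (assuming the credentials in gce.ini are filled in and the script's dependencies, such as apache-libcloud, are installed) by running it directly from within the ansible directory:

$ chmod +x gce.py
$ ./gce.py --list    # should print a JSON inventory containing the tag_web and tag_app groups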

Now it's time to write a playbook in ansible/setup.yml to set up everything:

- hosts: tag_web
  tasks:
  - name: NGINX | Adding NGINX signing key
    apt_key: url=http://nginx.org/keys/nginx_signing.key state=present

  - name: NGINX | Adding sources.list deb url for NGINX
    lineinfile: dest=/etc/apt/sources.list line="deb http://nginx.org/packages/mainline/ubuntu/ trusty nginx"

  - name: NGINX | Adding sources.list deb-src url for NGINX
    lineinfile: dest=/etc/apt/sources.list line="deb-src http://nginx.org/packages/mainline/ubuntu/ trusty nginx"

  - name: NGINX | Updating apt cache and install NGINX
    apt:
      name: nginx
      state: latest
      update_cache: yes

  - name: NGINX | Overwrite /etc/nginx/conf.d/default.conf
    template: src=templates/nginx-default.conf.j2 dest=/etc/nginx/conf.d/default.conf owner=root group=root mode=0644

  - name: NGINX | Restart NGINX
    service:
      name: nginx
      state: restarted

- hosts: tag_app
  tasks:
  - name: Node.js app | Install Node.js
    apt:
      name: nodejs
      state: latest
      update_cache: yes

  - name: Node.js app | Install npm
    apt:
      name: npm
      state: latest

  - name: Node.js app | Install PM2
    npm:
      name: pm2
      state: present
      global: yes

  - name: Node.js app | Add Node.js application
    template: src=templates/hello.js.j2 dest=~/hello.js owner=root group=root mode=0644

  - name: Node.js app | PM2 requires Node.js to be available as node
    shell: ln -s /usr/bin/nodejs /usr/bin/node
    ignore_errors: yes

  - name: Node.js app | Start application with PM2
    shell: pm2 start ~/hello.js -f

You may have noticed that the hosts' tags in the playbook (tag_web and tag_app) map to the tags we used in Terraform's setup.tf.
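
You can inspect this mapping by asking Ansible which hosts the dynamic inventory puts into each group, for example:

$ ansible -i gce.py tag_web --list-hosts    # the three tf-www-* instances
$ ansible -i gce.py tag_app --list-hosts    # the tf-app instance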

This playbook uses two templates:

The NGINX configuration in ansible/templates/nginx-default.conf.j2:

server {
    listen 80;

    server_name {{ ansible_ssh_host }};

    location / {
        proxy_pass http://{{ hostvars['tf-app']['gce_private_ip'] }}:8080/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

The Node.js app in ansible/templates/hello.js.j2:

var http = require('http');
http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World (' + req.connection.remoteAddress + ')!');
}).listen(8080, '{{ ansible_all_ipv4_addresses.0 }}');
console.log('Server running at http://{{ ansible_all_ipv4_addresses.0 }}:8080/');

Now we can run the playbook and provision our Google Compute Engine infrastructure from within the ansible directory:

$ ansible-playbook -i gce.py setup.yml
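
Depending on your local SSH configuration, you may need to tell Ansible explicitly which remote user and private key to use so that they match the ssh-keys metadata set in Terraform, for example:

$ ansible-playbook -i gce.py setup.yml -u root --private-key=~/.ssh/gcloud_id_rsa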

And that's it! If everything worked, you should be able to call the load balancer's IP (which Terraform printed as an output when you ran terraform apply) and see the Node.js app responding to the request via one of the NGINX reverse proxies:

$ curl 146.148.27.134
Hello World (10.132.0.3)!

$ curl 146.148.27.134
Hello World (10.132.0.5)!

Again, the code is available on GitHub. Play around with it as much as you want.

Please share your feedback in the comments below and, if you liked this post, follow me on Twitter and GitHub.
