Week 13
Production Deployment

Quiz 9: Production Prep (15 mins)

There will be a quiz today. It will be worth 2% of your final grade.

Assignment Reminder

The final project, GIFTR, is due by 5:00 pm on April 17, 2020.
This is the final deadline. There will be no extensions.

Counts for 30% of your MAD9124 final grade.

Architecture

Of all cloud infrastructure service providers, Amazon Web Services (AWS) is far and away the market leader. We will take advantage of their generous education credits to learn how to deploy the final project using a typical architectural pattern.

Our main web service API application will be bundled into a Docker container which can be auto-scaled with service demand. This service container cluster will be accessed via an HTTP/HTTPS load-balancer, which can also manage the secure HTTPS connection with the client application. The Express containers will talk to the MongoDB service running in a high-availability cluster.

Additionally, your client application from MAD9022 could be served from a global content delivery network.

The key service components that we will need include:

Backend Web Service

  • Docker Hub (image repository)
  • Amazon Virtual Private Cloud (VPC)
  • AWS Certificate Manager
  • Amazon HTTP/HTTPS Application Load Balancer (ELB)
  • Amazon Elastic Container Service (ECS) with Fargate
  • MongoDB Atlas (deployed to a managed Amazon EC2 Cluster)

Frontend Client APP

  • Amazon Simple Storage Service (S3)
  • Amazon CloudFront (CDN)
  • GitHub PWA Private Repo

Setup Hosted MongoDB

We will use the free tier of the MongoDB Atlas service to deploy a managed MongoDB instance to the same AWS region as our production Express server containers.

Create a MongoDB cloud account

From the MongoDB home page, click the green Start Free button.

Fill in the form to create your free account. Please use your Algonquin College email address.

You should shortly receive a confirmation email from MongoDB Atlas. Click the Sign In button in that email.

That will take you to the MongoDB Atlas login page.

Create Database Cluster

Follow the prompts to create your first Project and Cluster.

Choose the Shared Clusters option -- the free one.

Configure Cluster

Choose AWS as the cloud provider. DO NOT choose 'Multi-Region ...'

Choose the N. Virginia AWS Region.

Under the heading Cluster Tier, choose M0 Sandbox -- this is the free one.

Do not select any Additional Settings. They are not available in the free tier.

Set the Cluster Name to GIFTR.

Verify your settings and click the green Create Cluster button.

You should now see the Clusters Dashboard while your new cluster is being provisioned. There should be a blue sandbox label on your cluster -- this means the free tier.

Setup Connection Security

There are still a few more steps. Click the connect button under your cluster name to bring up an information modal.

We will be accessing this cluster from multiple locations – home, school, AWS. We could (and in a real app SHOULD) whitelist only those IP addresses that need to connect to the database. But, for this project we can simply allow all.

Click the Add a Different IP Address button, then enter 0.0.0.0/0 for the IP address (CIDR notation that matches every IPv4 address) and click Add IP Address.

Next you will be asked to create a new administrative user for your database cluster. I called mine madadmin and selected the autogenerate password option for a reasonably secure random password.

Copy the password

Don't forget to click the show button next to the password, and then copy it to your config JSON file. You will never see this password again.

We need to get the connection string details for this new database cluster. Click the Choose connection method button at the bottom of the modal.

We want the middle option Connect your application.

Copy the hostname portion of the connection string. We will need that in our config JSON file.

Copy the authentication database name portion of the connection string. We will need that in our config JSON file too.

Connect Mongoose to the Atlas Cluster

Up until now, the connection string that tells Mongoose how to open a connection to the database has taken this format:

mongodb://hostname:port/database-name

e.g.

mongodb://localhost:27017/mad9124

Hosted Atlas Cluster

The full connection string for MongoDB databases hosted on an Atlas cluster looks a little different. Here is what mine looks like:

mongodb+srv://<username>:<password>@giftr-1p8fa.mongodb.net/test?retryWrites=true&w=majority

(Don't use this one. Go get your own.)

From the above examples:

  • The scheme changes to mongodb+srv://
  • Database user credentials are inserted: <username>:<password>@
  • The hostname becomes _something_.mongodb.net
  • The default database /test is used to authenticate the db user.
  • There are some other options set at the end: ?retryWrites=true&w=majority

Update the connectDatabase.js Module

It is a very common practice to set up a final integration testing or staging environment that very closely mirrors the production environment. That is what we are going to do for the final project. This will allow us to simplify some of the deployment details.

Remember we are retrieving the configuration variables using the config.get() method. We will set the NODE_ENV environment variable to stage, so you will need to create a new /config/stage.json file with your connection credentials. e.g.

{
  "db": {
    "scheme": "mongodb+srv",
    "username": "madadmin",
    "password": "yourGuessIsAsGoodAsMine",
    "host": "giftr-1p8fa.mongodb.net",
    "authSource": "test",
    "name": "w20-final-giftr"
  }
}
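
As a quick sanity check, you can confirm how those values resolve. This assumes the config package's standard file-loading behaviour, where /config/stage.json is loaded when NODE_ENV=stage and merged over any /config/default.json; the check-config.js file name below is just for illustration.

// check-config.js (hypothetical helper) -- run it with NODE_ENV=stage
const config = require('config')

console.log(config.get('db.host'))       // giftr-1p8fa.mongodb.net (your own host)
console.log(config.get('db.authSource')) // test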

WARNING

In a final production deployment, you would not store the database username and password in this config file. They should be injected into the runtime container using environment variables.

But keeping them in the config file for this project will make it easier for us to help you troubleshoot and to grade your assignment.
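
For reference, here is a minimal sketch of what that environment-variable approach could look like inside connectDatabase.js. The DB_USERNAME and DB_PASSWORD variable names are illustrative only, not part of the project:

// Sketch only -- prefer environment variables when they are present,
// and fall back to the values in the config file otherwise
const config = require('config')

const db = config.get('db')
const username = process.env.DB_USERNAME || db.username
const password = process.env.DB_PASSWORD || db.password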

Then in the /startup/connectDatabase.js module, we need to conditionally construct the connection string based on the target scheme: mongodb vs. mongodb+srv.

The updated file should look something like this ...

const config = require('config')
const logger = require('./logger')
const mongoose = require('mongoose')

module.exports = () => {
  const {scheme, host, port, name, username, password, authSource} = config.get('db')
  // Only include credentials when both a username and a password are configured
  const credentials = username && password ? `${username}:${password}@` : ''

  let connectionString = `${scheme}://${credentials}${host}`

  // Local connections need the port and database name; Atlas (mongodb+srv)
  // connections authenticate against the authSource database instead
  if (scheme === 'mongodb') {
    connectionString += `:${port}/${name}?authSource=${authSource}`
  } else {
    connectionString += `/${authSource}?retryWrites=true&w=majority`
  }

  mongoose
    .connect(
      connectionString,
      {
        useNewUrlParser: true,
        useUnifiedTopology: true,
        useCreateIndex: true,
        useFindAndModify: false,
        dbName: name
      }
    )
    .then(() => {
      logger.log('info', `Connected to MongoDB @ ${name}...`)
    })
    .catch(err => {
      logger.log('error', `Error connecting to MongoDB ...`, err)
      process.exit(1)
    })
}

OK. LET'S TEST IT!

Update the scripts key of the package.json file to add a stage script that is the same as the dev script except that we will set NODE_ENV=stage.

"scripts": {
    "test": "echo \"Error: no test specified\" && exit 1",
    "dev": "API_JWTKEY=supersecretkey nodemon server.js",
    "stage": "NODE_ENV=stage API_JWTKEY=supersecretkey nodemon server.js",
    "start": "node server.js"
  },

Now run npm run stage in the terminal and use Postman to make sure that everything is working.

You can still visually check the contents of the database with MongoDB Compass. Just use the full connection string from the Atlas dashboard.

Don't forget to replace <password> with your real password. e.g.

mongodb+srv://madadmin:yourGuessIsAsGoodAsMine@giftr-1p8fa.mongodb.net/test

Health Check Route

Most production deployments will have some kind of automated periodic monitoring to see if your deployed service is still up and running. We will facilitate this by creating a simple HTTP route handler in the main app.js module. It could be anything that we choose. The AWS HTTP Load Balancer will default to the root path for our service, so let's just use that for now.

app.get('/', (req, res) => res.send({data: {healthStatus: 'UP'}}))

The AWS container management service will regularly poll this route looking for a 200 response code. You will be able to see the health status in the AWS CloudWatch console.
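
If you want the health check to be a little more informative, one optional variation (not required for the project) is to report whether the Mongoose connection is actually up. This is only a sketch; readyState === 1 means "connected":

const mongoose = require('mongoose')

// Optional sketch: reflect the database connection state in the health check
app.get('/', (req, res) => {
  const dbUp = mongoose.connection.readyState === 1 // 1 === connected
  res.status(dbUp ? 200 : 503).send({data: {healthStatus: dbUp ? 'UP' : 'DOWN'}})
})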

Publish a Docker Image

In most development teams these days, everyone needs to know a little about DevOps. You could do a whole course on Docker, containers, and Kubernetes, but for now we are going to simulate a scenario that you are very likely to encounter. As a junior developer, you will often find that a more senior developer on the team has already worked out the correct procedure to bundle and deploy your application, and they will give you some key files and instructions.

Starting in module 5, you have been using Docker as an easy way to run MongoDB in your local development environment. We used a pre-made Docker Image definition (mongo:bionic) that we pulled from the Docker Hub repository.

You can create and publish your own Docker Image to bundle your Express web service application. This image can then be used to deploy one or more copies of your application on a cloud hosting service like AWS, Azure, or Google Cloud.

If you haven't already ...

Create a free Docker Hub account.
Download Docker Desktop and run the installer.

Create a Dockerfile

The Dockerfile is the recipe for creating a Docker Image. It should be placed in the top level of your project folder. Note the capitalization and that there is no file extension -- just Dockerfile.

This will use the official node version 12 image from Docker Hub as the starting point. It then creates the directory structure that our project requires and copies your code from the local project folder into the container image.

Make sure that your project folder structure matches the Dockerfile.

FROM node:12-slim

ENV API_PORT="80"
ENV DEBUG="api:*"

RUN mkdir -p /app /app/config /app/exceptions /app/logs /app/middleware /app/models /app/public /app/routes /app/startup

COPY config/ /app/config/
COPY exceptions/ /app/exceptions/
COPY middleware/ /app/middleware/
COPY models/ /app/models/
COPY public/ /app/public/
COPY routes/ /app/routes/
COPY startup/ /app/startup/
COPY server.js app.js package.json /app/

WORKDIR /app
RUN npm install --unsafe-perm

EXPOSE 80
CMD node server.js

Split server.js and app.js

The final line in the Dockerfile, CMD node server.js, is the command that will be invoked when the deployed container is started. If all of your Express application is defined in app.js, you could change that last line of the Dockerfile to CMD node app.js.

Or, you might want to create both a server.js file and an app.js file. This is quite a common practice: app.js holds the code that defines the middleware and route handlers, and it is imported into server.js, which only holds the instructions for spinning up the Node.js HTTP server, passing the exported app in as the request listener (see the require('./app') and http.createServer(app) lines in the code example below).

Here is an example server.js file serving only HTTP.
This is what you should use for your final project.

'use strict'

const http = require('http')
const logger = require('./startup/logger')
const app = require('./app')

/**
 * Create HTTP server.
 * HTTP server listen on provided port, on all network interfaces.
 */
const server = http.createServer(app)
const port = process.env.API_PORT || 3030
server.listen(port)
server.on('error', onError)
server.on('listening', onListening)

/**
 * Common listener callback functions
 */
function onError(err) {
  logger.log('error', `Express failed to listen on port ${this.address().port} ...`, err.stack)
}
function onListening() {
  logger.log('info', `Express is listening on port ${this.address().port} ...`)
}

TIP

If you are going to be running your Node.js server with HTTPS, you must set it up this way.

Here is how it would look if we set it up to use HTTPS with a Let's Encrypt certificate.
This is for your future reference only. DO NOT use this for your final project.

'use strict'

const http = require('http')
const https = require('https')
const fs = require('fs')
const logger = require('./startup/logger')
const app = require('./app')

/**
 * Create HTTP server.
 * HTTP server listen on provided port, on all network interfaces.
 */
const server = http.createServer(app)
const port = process.env.API_PORT || 3030
server.listen(port)
server.on('error', onError)
server.on('listening', onListening)

/**
 * Create HTTPS server.
 * HTTPS server listen on standard port, on all network interfaces.
 */
if (process.env.NODE_ENV === 'production') {
  const options = {
    key: fs.readFileSync('/etc/letsencrypt/live/mad9124.rocks/privkey.pem'),
    cert: fs.readFileSync('/etc/letsencrypt/live/mad9124.rocks/fullchain.pem'),
    ca: fs.readFileSync('/etc/letsencrypt/live/mad9124.rocks/chain.pem')
  }
  const serverSSL = https.createServer(options, app)
  const TLSPort = process.env.APP_TLSPORT || 443
  serverSSL.listen(TLSPort)
  serverSSL.on('error', onError)
  serverSSL.on('listening', onListening)
}

/**
 * Common listener callback functions
 */
function onError(err) {
  logger.log('error', `Express failed to listen on port ${this.address().port} ...`, err.stack)
}
function onListening() {
  logger.log('info', `Express is listening on port ${this.address().port} ...`)
}

Simplified app.js

Now that the app.js module doesn't need any code for creating the HTTP server, it can be simplified to something like this ...

require('./startup/connectDatabase')()
const express = require('express')
const app = express()

// Apply global middleware with app.use()

// Add the health check route
app.get('/', (req, res) => res.send({data: {healthStatus: 'UP'}}))

// Link the auth and api route handler modules

// Apply the global error handler middleware

// Export the `app` object
module.exports = app

Build a local Docker image

We have created the Dockerfile and refactored our code. It is time to build the container image with the docker build command. Use the --tag option to set the name of the local Docker Image that will be created. The :latest suffix is the version label.

docker build --tag=giftr-api-w20:latest .

This will build a Docker Image containing a minimal Linux environment with Node v12 preinstalled (the node:12-slim base image is Debian based) and then copy in your project files as defined in the Dockerfile. It then runs npm install in the project root folder (inside the image) to ensure that all required dependencies are correctly installed.

When this process is complete you will have a new Docker Image that you can use to create a completely isolated runtime of your Express web service application. You can run it locally to test it.

Here is an updated docker-compose.yml file that will spin up your new container on your local machine for testing. First, make sure that you stop any other Express server that you might have running.

version: '3.1'
services:
  mongo:
    image: mongo:bionic
    container_name: mongo
    environment:
      MONGO_INITDB_ROOT_USERNAME: madadmin
      MONGO_INITDB_ROOT_PASSWORD: supersecretpassword
    ports:
      - 27017:27017
    restart: always
    volumes:
      - ./data/mongo:/data/db

  express:
    image: giftr-api-w20
    container_name: giftr-api-w20
    depends_on:
      - mongo
    environment:
      API_JWTKEY: keur0uhwg802fkzh6e72lw0m69g3xv
      API_PORT: 80
      NODE_ENV: 'stage'
    ports:
      - 3030:80
    command: node server.js

Now run docker-compose up -d. If everything is configured correctly, your new API server container will spin up and try to authenticate with the MongoDB Atlas server that you set up earlier.

Try sending some test requests from Postman to localhost:3030. Then check the MongoDB Atlas database with MongoDB Compass to visually verify the requests went to the correct database.
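
If you would rather script a quick smoke test instead of (or in addition to) Postman, a small Node script along these lines will hit the health check route through the published port. The quick-check.js file name is just for illustration:

// quick-check.js (sketch) -- GET the health check route exposed on port 3030
'use strict'
const http = require('http')

http
  .get('http://localhost:3030/', res => {
    let body = ''
    res.on('data', chunk => (body += chunk))
    res.on('end', () => console.log(res.statusCode, body))
  })
  .on('error', err => console.error('Request failed:', err.message))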

If everything looks good we can publish the Docker Image on Docker Hub.

Tag the image

Before you can push the image up to Docker Hub, you need to tag the local image with your Docker Hub username prefix. Replace <username> with your Docker Hub username.

docker tag giftr-api-w20 <username>/giftr-api-w20

Push the image to Docker Hub

Make sure that you are logged into Docker Hub. This command will prompt you for your Docker Hub username and password.

docker login docker.io

Now you can push it. Replace <username> with your Docker Hub username.

docker push <username>/giftr-api-w20

Congratulations!

You have successfully published your first Docker Image.

Deploying to AWS

Now it is time to set up the hosting environment on AWS for your Docker container to run.

AWS Classroom Account

By now you should have received an email invitation to join our AWS Classroom and sign up for a free AWS Educate - Student Account. There are many benefits attached to the free student account, which you can use to continue your learning over the summer.

Once you have accepted the invitation and logged into the AWS Educate portal, find the link to "My Classrooms". You should see one listed for "Mobile API Development". Click the blue "Go to classroom" button on the right.

You will now be prompted to agree to the terms and conditions for using the service.

Your browser should be redirected to the Vocareum Dashboard. Click on the card for Mobile API Development. This will open a summary page for your AWS Educate Classroom Account. It has some helpful FAQs, and you can see your remaining credit balance for AWS services. You have been given $50 for this classroom, which is more than enough to cover what we will do and will leave you some credits to play with over the summer.

Click on the AWS Console button and you will be automatically logged into the AWS Console with your AWS Educate Classroom account.

WARNING

This may trigger the "pop-up blocker" in your browser. You will need to grant permission for this site. Look for the warning notice in the URL bar.

Now you know how to get logged in. We can start doing real work!

Configure the Elastic Container Service

1. Go to the ECS console

Type ECS into the Find Services search box on the AWS Console and then select the Elastic Container Service.

2. Start the Setup Wizard

On the next screen you should see a Get Started button. Click that to start the "First Run Wizard". This is a more automated path to provision your container service cluster.

3. Define AWS ECS Container/Task/Service

Now define the runtime parameters for your container service. There are several related parts and they fit together like Russian nesting dolls.

  • A container definition links to a Docker image.
  • A task sets the resources for a container definition.
  • A service may include one or more related tasks.
  • A cluster may host one or more services.

Start with the innermost box and work outwards.
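
To see how the pieces nest in one place, here is a rough sketch expressed as a plain JavaScript object. The field names mirror the ECS task definition format, but treat the whole thing as illustrative only; the First Run Wizard below generates the real definitions for you.

// Illustrative sketch only -- the wizard builds the real definitions
const taskDefinition = {
  family: 'giftr-api-task',                     // the task definition
  requiresCompatibilities: ['FARGATE'],
  networkMode: 'awsvpc',
  containerDefinitions: [                       // one or more container definitions
    {
      name: 'giftr-api-container',
      image: '<Docker-Hub-username>/giftr-api-w20',
      memoryReservation: 500,                   // soft memory limit (MiB)
      portMappings: [{containerPort: 80}],
      environment: [{name: 'NODE_ENV', value: 'stage'}]
    }
  ]
}
// A service runs one or more copies (tasks) of a task definition,
// and a cluster hosts one or more services.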

Container Definition

Choose the custom configuration card and click the configure button.

Set these settings in the Standard section of the configuration form:

  • Container name: giftr-api-container
  • Image: <Docker-Hub-username>/giftr-api-w20
  • Soft limit: 500
  • Port mapping: 80

Expand the Advanced container configuration section of the form and scroll down to the Environment Variables heading. Add the following key:value pairs:

  • NODE_ENV = stage
  • API_PORT = 80
  • API_JWTKEY = <your-secret-random-key>

TIP

Use the genKey.js script from week 11 to generate a new random key.
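
If you no longer have that script handy, a minimal sketch using Node's built-in crypto module will do the same job (it may not match the week 11 version exactly):

// genKey.js (sketch) -- prints a 64-character hex string suitable for API_JWTKEY
'use strict'
const crypto = require('crypto')

console.log(crypto.randomBytes(32).toString('hex'))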

Then scroll all the way to the bottom and check the Auto-configure CloudWatch Logs option next to Log Configuration. Then click the blue Update button in the bottom right corner.

Task Definition

The slide over dialog will close and you will be back to the main ECS Configuration Wizard. Scroll down and click the Edit button on the Task Definition line. Change the task definition name to giftr-api-task and click the Save button.

Now click the Next button in the bottom right corner.

Service Definition

The service defaults to no load balancer. Change the selection in the middle of the page to Application Load Balancer. This will set it for port 80 HTTP.

Next click the Edit button on the Define your service line. Set the number of desired tasks to be 2 and then click the blue Save button.

Verify the settings on the main ECS configuration wizard screen, then click the blue Next button.

Cluster Definition

Set the cluster name to mad9124, and click the blue Next button.

Take one last look at all of the settings on the Review page.

Then click the blue Create button, and watch the progress spinners for 5 to 10 minutes.

WOOO HOOOO! You have launched your first AWS CloudFormation Stack!

When all 10 provisioning steps have a green check mark, you can click the View Service button to go back to the ECS Dashboard. This will take you to the detail view for your newly provisioned service.

4. Test the Load Balancer address with Postman

So, now we should run some Postman tests. But what is the URL?

In the ECS service definition, we created an application load balancer. This will be the "front door" URL for your API service cluster.

To find the load balancer's public DNS name, we need to go to the EC2 Console section of AWS.

Click on the Services top menu drop-down and then select EC2 from the top of the Compute service list.

This will take you to the main EC2 Dashboard. Select Load Balancers from the Resource options in the middle of the page.

You will now see the list of active load balancers on your AWS Classroom account. There should only be one at this point. Click the check box next to the name of your load balancer to see the details at the bottom of the screen.

Copy the DNS name and use that as the hostname for your Postman tests.

In a real app, you could now use this URL in the fetch code of your frontend client application.
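
Here is a sketch of what that client-side call could look like. The hostname below is only a placeholder; substitute your own load balancer's DNS name:

// Sketch only -- replace the placeholder with your load balancer's DNS name
const API_BASE = 'http://your-load-balancer-dns-name.us-east-1.elb.amazonaws.com'

// Hit the health check route; your real auth/api routes use the same base URL
fetch(`${API_BASE}/`)
  .then(res => res.json())
  .then(payload => console.log(payload.data.healthStatus)) // "UP"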

You would also want to turn on HTTPS for the load balancer, or your browser will block access to "mixed content requests". At the moment, the AWS Classroom accounts don't have access to the AWS Certificate Manager Service, so we cannot do this right now.

Congratulations!

You have successfully deployed a load-balanced, redundant API web service container cluster connected to a separate redundant database cluster -- all hosted on Amazon Web Services.

For additional resources

The procedure above is an excellent introduction to containers and AWS. This can be a great talking point in a job interview, but there is more to learn. Please review these additional online resources.
