Dockerize your Rails app from development to production on AWS ECS

Ly Channa
Jan 2, 2020 · 10 min read


Docker is a great tool to isolate your environment from the host machine. You can think of Docker as a virtual machine like VMware or VirtualBox, but much lighter, friendlier, and more flexible.

This blog is dedicated to Ruby on Rails apps, but you might find it useful for other technologies as well, since Docker and AWS ECS are technology independent.

So why Docker for your app?

Isolated environment: This is very challenging without Docker. Imagine your computer is a MacBook Pro, but your Ruby on Rails app has to run on Ubuntu Server. Without Docker you need to install libraries on the host OS, every macOS update might break your app's dependencies, and time spent fixing things on the host may not carry over to the Ubuntu Server on AWS. Docker frees us from the host OS, so we are free to change anything on our host.

Works everywhere: We will use Docker in local development and in production. If it works locally, it will very likely work on the production server as well, since it is the same isolated image.

Team collaboration: Once you build a Docker image, it can be maintained and shared across a team of developers. No more installation issues where the application breaks on some versions of macOS or Ubuntu desktop. Your team will have more time to work on your product rather than spending hours or days on installation for a particular host OS.

Big ecosystem: Almost everything, if not everything, can be found as a Docker image. Many good images are available to kick-start your work, and some of them are very well crafted and open-sourced, ready for you to use.

Easy deployment: I mean so much easier, not just for developers but also for cloud providers. Have you ever used Go, Elixir/Phoenix, or another technology that your PaaS of choice, like Heroku or Amazon Elastic Beanstalk, did not support yet? Some PaaS vendors only support certain languages of their choosing; Heroku, for example, started with Ruby on Rails and added other languages later. With Docker we no longer need to wait for them to support our favorite tech; it runs seamlessly as long as the platform supports Docker.

Provisioning resources: Docker takes care of the resources given to containers and, with restart policies, can restart them automatically when they exit. With Docker you often don't need a separate process or resource monitoring tool like Monit or God; Docker handles it for you.

Better resource management: Because containers are lightweight, multiple Docker containers can run on one host machine while still giving you a high level of isolation, making better use of the host's resources and keeping your infrastructure cost-effective.

Get started with Dockerfile

In this article, a Ruby on Rails app is used as the showcase. However, Docker itself is platform and language agnostic, so if you don't develop in Ruby on Rails you can still follow along to learn how to Dockerize an app in general and, especially, how to use a Docker image with AWS ECS in production.

At the root of your Rails project, create a plain text file and name it Dockerfile.

If you don’t have Docker installed yet, you can download and then install it on your machine from here https://docs.docker.com/v17.09/engine/installation/#supported-platforms
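Once Docker is installed, you can quickly confirm that the client and daemon are working from a terminal:

# print the installed Docker version and basic daemon info
docker --version
docker info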

#Dockerfile

FROM ruby:2.3
LABEL maintainer="Channa <channa.info@gmail.com>"
# Updating nodejs version
RUN curl -sL https://deb.nodesource.com/setup_10.x | bash
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -
RUN echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
# Install dependencies
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y \
    build-essential mysql-client nodejs yarn && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
WORKDIR /app
COPY Gemfile /app/Gemfile
COPY Gemfile.lock /app/Gemfile.lock
RUN bundle install
# Install the application
COPY . /app

FROM: Here I use the ruby:2.3 image. You can find a Ruby image for your specific version in the Docker repository at https://hub.docker.com/.

RUN: runs a command inside the OS image. In this case, ruby:2.3 is a Debian-based image.

RUN can be thought of as: you SSH into the OS instance and run the command apt-get update … as the root user to install packages and dependencies inside the instance.

WORKDIR: navigate to the /app directory (change directory). If the directory does not exist, it will be created. /app is where we store our project.

WORKDIR can be thought of as: you SSH into the instance, check whether /app exists, create it if it does not, and then cd /app.

COPY: copies a local file into the Docker image. Here we copy two files into the /app directory inside our Docker image.

In short, the Dockerfile above pulls the ruby:2.3 image from Docker Hub and installs the OS dependencies for our Rails project (MySQL client, yarn for react-rails, Node.js for the asset pipeline, …). After the dependencies are installed, we create the /app directory to hold our Rails project, copy Gemfile and Gemfile.lock into it, and install all the gems the project requires. If everything is ok, the last step is to copy the Rails project itself into /app.
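Since COPY . /app copies the whole project, it is also common to add a .dockerignore file at the project root so logs, temp files, and local artifacts stay out of the image. The entries below are typical Rails suggestions, not part of the original setup:

# .dockerignore (suggested entries)
.git
log/*
tmp/*
node_modules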

Build a docker image:

docker build -t app_dev:1.0.0 .
Build a docker image for a rails project

Here we build our image and tag it (-t) as app_dev with version 1.0.0. If the version is omitted, the default tag latest is used.
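For example, building without an explicit version:

# no version given, so the image is tagged app_dev:latest
docker build -t app_dev .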

After finishing the build successfully, you can validate the existence of the image with the following command:

> docker image ls
# output
REPOSITORY    TAG      IMAGE ID       CREATED         SIZE
app_dev       1.0.0    409520d85c3b   3 minutes ago   1.74GB

You might notice the value in the IMAGE ID column. Every image has a unique image id, and we will use it a lot to reference the image in later commands.

As you can see, the app_dev name specified after the -t option is listed under the REPOSITORY column. In reality, this is the repository address for your Docker image. When pushing to Docker Hub or an Amazon ECR repository, the tag has to be the repository address you get from Docker Hub or ECR. For local use you can name it anything, since you don't yet need to push to a remote repository; once you do have a remote repository, you can create a new tag from the existing image so it can be pushed.

# to create a new tag
docker tag app_dev:1.0.0 new-repo-uri:v1.0.0

# docker image ls now shows both tags
REPOSITORY     TAG      IMAGE ID       CREATED        SIZE
app_dev        1.0.0    409520d85c3b   34 hours ago   1.74GB
new-repo-uri   v1.0.0   409520d85c3b   34 hours ago   1.74GB

Behind the scenes, Docker creates a new tag that links to the existing image (same image id: 409520d85c3b). This happens instantly and there is no double storage penalty.
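Once the tag matches your remote repository URI, pushing is a single command. new-repo-uri is just a placeholder here; for Amazon ECR you would first authenticate, for example with aws ecr get-login-password on a recent AWS CLI:

# log in to the registry (ECR example), then push the tagged image
aws ecr get-login-password --region <region> | docker login --username AWS --password-stdin <aws_account_id>.dkr.ecr.<region>.amazonaws.com
docker push new-repo-uri:v1.0.0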

Run the image (create a container)

After building the image, we can use the image for our development. We can launch the image with the following command:

docker run -it -p 3000:3000 409520d85c3b bundle exec rails s -b 0.0.0.0 -p 3000

docker run: runs a command in a new container. A Docker container is a running instance, a snapshot, of a Docker image. The docker run command always launches a new container from the image. If you already have a container for your image and you want to run a command in it instead of launching a new one, use docker exec. docker exec takes mostly the same arguments as docker run, except that it receives a container id where docker run receives an image id.

-it: flags for an interactive console (interactive mode with a TTY). You will see the log output in the console.

-p: maps an external port (outside the container) to an internal port (inside the container). We want to reach the web app from our browser on port 3000, which maps to port 3000 of rails s in the container.

bundle exec rails s -b 0.0.0.0 -p 3000, which comes after the image id, is the command to run in the container. This can be anything.

In short, the command above can be described as:

  • Launch a container from the image 409520d85c3b
  • “SSH” into the container
  • Run the command bundle exec rails s -b 0.0.0.0 -p 3000 with an interactive console
  • Expose port 3000 of the container as port 3000 on the host machine.
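Nothing forces the two ports in -p to match; for instance, this sketch serves the same Rails app on host port 8080 instead:

# host port 8080 forwards to the container's port 3000
docker run -it -p 8080:3000 409520d85c3b bundle exec rails s -b 0.0.0.0 -p 3000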

In many cases you might want to get a shell in an existing or a new container to run some commands for debugging purposes or to import some data. For that you can use something like this:

docker run -it 409520d85c3b bash

to launch a bash shell and run the appropriate commands afterward.

Validate existing container

Now you can validate the existence of the docker container with the following command in the host console:

docker container ls
# or
docker ps

Some examples of interaction between host and container

Example 1: if you want to “SSH” into the container to access its terminal (bash here, because you need to tell docker exec which command to run):

docker exec -it container_id bash

Example 2: if you want to run the Rails console inside the container:

# ssh to the container
docker exec -it container_id bash
# Now already in the container
cd /app # cd to your rails project root
bundle exec rails c # launch rails c

Or you can also do this in short

# ssh in the container and run: bundle exec rails c
docker exec -it container_id bundle exec rails c

Example 3: you might need to import a database dump into the container:

# copy file app.sql from the host into the container and name it tmp_app.sql
docker cp app.sql container_id:/tmp_app.sql
# ssh to the container
docker exec -it container_id bash
# run mysql to import the data into the app_development database
mysql -u root -p app_development < /tmp_app.sql

Pretty straightforward.

A container is just a running instance of an image, a snapshot of the image. Once it terminates, your data is gone. If you launch a new container with docker run, it is a fresh snapshot with no relation to the previous one. To make changes in a container persistent, you need to mount a volume, which is covered in the next section.
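For development, a common pattern is a bind mount of the project directory; the sketch below assumes the /app working directory from the Dockerfile above:

# mount the current directory into /app so code edits on the host are
# visible inside the container and survive container restarts
docker run -it -p 3000:3000 -v "$(pwd)":/app 409520d85c3b bundle exec rails s -b 0.0.0.0 -p 3000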

Microservices: persistence and separation of services in Docker

Now that you understand how to use a Dockerfile to build an image for your application, you might wonder about the MySQL database, Memcached, Redis, Sidekiq, and so on. An easy solution is to install those services inside the same image via the Dockerfile; however, Docker provides a better solution called Docker Compose. Docker Compose has its own configuration file, separate from the Dockerfile, to manage and coordinate these dependencies.

To start with docker-compose, create a docker-compose.yml file at the root of your project.
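A docker-compose.yml along these lines covers the setup described below; the image versions, volume names, and database name are assumptions, so adapt them to your project:

version: '3'
services:
  memcached:
    image: memcached:1.5-alpine
    volumes:
      - memcached_data:/data
  db:
    image: mysql:5.7                      # assumed MySQL version
    environment:
      MYSQL_ALLOW_EMPTY_PASSWORD: 'yes'
    volumes:
      - db_data:/var/lib/mysql
  redis:
    image: redis:4.0-alpine               # assumed Redis version
    volumes:
      - redis_data:/data
  web:
    build:
      context: .
      dockerfile: Dockerfile
    command: bundle exec rails s -b 0.0.0.0 -p 3000
    ports:
      - '3000:3000'
    depends_on:
      - db
      - redis
      - memcached
    environment:
      RAILS_ENV: 'development'
      REDIS_URL: 'redis://redis:6379/12'
      DATABASE_HOST: 'db'
      DATABASE_NAME_DEVELOPMENT: 'avocado_docker_dev'
      MEMCACHED_SERVER: 'memcached:11211'
    volumes:
      - .:/app
  sidekiq:
    build:
      context: .
      dockerfile: Dockerfile
    command: bundle exec sidekiq
    depends_on:
      - db
      - redis
    environment:
      RAILS_ENV: 'development'
      REDIS_URL: 'redis://redis:6379/12'
      DATABASE_HOST: 'db'
      DATABASE_NAME_DEVELOPMENT: 'avocado_docker_dev'
volumes:
  memcached_data:
  db_data:
  redis_data: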

From the docker-compose.yml above, we define five services with the following names:

  • memcached
  • db
  • redis
  • web
  • sidekiq

Note

As these services live in separate containers, Docker needs a mechanism for them to communicate. A traditional approach would be to use IP addresses; however, Docker provides a better alternative: using the service name directly. For example:

On a single machine you might refer to the MySQL host as ‘localhost’;

however, in docker-compose you just use the MySQL service name, ‘db’.

Let’s walk through some of the services

In the memcached section we use memcached:1.5-alpine from Docker Hub and create a volume for it at /data under the volumes section.

Volumes are the preferred mechanism for persisting data generated by and used by Docker containers. You can learn more from here.

In the db section we set an environment variable to allow an empty password. Think of environment variables as predefined configuration knobs that let you customize a Docker image: when people build an image, they usually expose customization through environment variables, so that is the first place to look when you need to tweak an image's behavior.

There are many interesting configurations under the web section. In the build section there is dockerfile: Dockerfile. A service normally uses an image hosted in a remote repository (Docker Hub), but that is not always the case; this section lets you build the image from your own Dockerfile and use it for the service.

The command node lets you run a command when the web service starts. This is where you boot your app.

In the ports node, we expose container port 3000 (the default Rails server port) to the outside world, again as port 3000. Nothing you run in a container is reachable from outside over the network unless you expose it in the ports section.

Under the depends_on node, we list all the dependencies we need. Here it says that before the web service can start, the db, redis, and memcached services must be started first.

Let's dig into the environment settings of the web service:

RAILS_ENV: 'development'
REDIS_URL: 'redis://redis:6379/12'
DATABASE_HOST: 'db'
DATABASE_NAME_DEVELOPMENT: 'avocado_docker_dev'
MEMCACHED_SERVER: 'memcached:11211'

As mentioned earlier, to connect to the redis service from our web service we just refer to it by its service name: redis:6379 (redis as the host and 6379 as the port).

The same goes for the MySQL and Memcached servers: ‘db’ and ‘memcached:11211’ respectively.
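On the Rails side, a config/database.yml along these lines would pick those values up; the exact keys and defaults here are assumptions based on the variable names above:

# config/database.yml (sketch)
development:
  adapter: mysql2
  host: <%= ENV.fetch('DATABASE_HOST', 'localhost') %>
  database: <%= ENV.fetch('DATABASE_NAME_DEVELOPMENT', 'app_development') %>
  username: root
  password: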

Start docker-compose:

docker-compose is nothing more than defining and running multiple containers for your app with one command. Its building blocks are still just individual Docker containers.

# To start your services. This command will build your images (if needed) and run them
docker-compose up
# You can also run the build step on its own
docker-compose build
# To see all the containers
docker ps
# You can ssh into your docker containers as usual
docker exec -it container_id bash
...
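Once the stack is up, one-off Rails tasks can also be run against the web service. This is a sketch assuming the web service name from the docker-compose.yml above (use rake instead of rails on older Rails versions):

# create and migrate the development database inside the running web service
docker-compose exec web bundle exec rails db:create db:migrate
# or in a throwaway container if the stack is not running yet
docker-compose run --rm web bundle exec rails db:create db:migrate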

What’s next

Now you have the app running on Docker on your local machine. In the next post, I will show how to use AWS ECS, with both the EC2 and Fargate launch types, to deploy your application in the cloud, along with a free auto-renewing SSL certificate from AWS ACM, an Elastic Load Balancer, and auto-scaling.

Let me know your thoughts!

Written by Ly Channa

Highly skilled in REST APIs, OAuth2, OpenID Connect, SSO, TDD, Ruby on Rails, CI/CD, Infrastructure as Code, and AWS.
