Shrink the size of an AMI

I know what you're thinking. "I wish I could create my own AMIs (Amazon Machine Images). Not the traditional way - spawn an EC2 instance, make changes, then hit the EC2 API to use that instance to create a new AMI from it - but actually build a filesystem from scratch in an empty volume, and use that volume to create a bootable AMI."

I mean, who HASN'T thought that before.

On a serious note, there are a few reasons why the traditional process for creating AMIs may not be suitable:

  1. Shrinking AMI size - since the base AMIs provided by Amazon are 8GB in size, the traditional process can only create AMIs of 8GB or larger - you can increase the size of the new AMI, but not shrink it. The problem with large AMIs is that they take a long time to spawn volumes from, and those volumes take a long time to turn into a new AMI.
  2. Using a new OS or distribution - if you want to create an AMI for an operating system or Linux distribution that AWS does not provide a base AMI for, you have to build one from scratch.
  3. Fewer filesystem image "layers" - every AMI can be created from another one, and can itself be used to create other AMIs. However, Amazon does not create a brand new image for each one; rather, it works out the changes between parent and child, and the child stores only the 'changes' from the parent. Eventually, to work out the actual data in your AMI that will become a new volume, Amazon has to trudge through all of its parents for the 'unchanged' bits that any AMI in the chain inherited and did not change. Creating a brand new AMI from an empty volume avoids this chain, but of course it will cost more, as you are no longer paying to store just the changes.

Shrinking an AMI might be a useful exercise for some, but has a few really annoying and obscure steps. One way to do it is described below.

Create a New EC2 Instance, Making Sure It Is Reachable By SSH

Create a new basic EC2 instance based on any Linux distribution you want (like Ubuntu). It will only be used to SSH into and copy files from one place to another.

Make ABSOLUTELY SURE you create it in a subnet in the first availability zone, ending in the letter 'a'. Do NOT leave the 'Subnet' option set to 'No preference'. Also make sure you check the 'Auto-Assign Public IP' box. All volumes you create will have to be in the same availability zone as the instance, so choosing the first one, ending in the letter 'a', makes sure of this.

Ensure you choose a suitable SSH key when asked, so that you can SSH to it. Also, create a new security group (or choose an existing one) that allows you SSH access from your location (it is OK to use a wide-open source such as 0.0.0.0/0, as this instance will not be around for very long).
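If you prefer the command line, the same instance can be created with the AWS CLI. This is only a sketch, assuming the CLI is configured with credentials - the AMI, key, security group and subnet IDs are placeholders you would substitute with your own:

```shell
# Hypothetical IDs - substitute your own. The subnet must be the one in
# the availability zone ending in 'a' (e.g. eu-west-1a).
aws ec2 run-instances \
  --image-id ami-xxxxxxxx \
  --instance-type t2.micro \
  --key-name my-ssh-key \
  --security-group-ids sg-xxxxxxxx \
  --subnet-id subnet-xxxxxxxx \
  --associate-public-ip-address
```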

Create a New Volume From the AMI You Want To Shrink

We want a volume to copy all our files from, so we will create it from an existing AMI.
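With the AWS CLI, this means finding the EBS snapshot backing the AMI's root device, then creating a volume from it. A sketch, with placeholder IDs, and assuming the root device is the first block device mapping:

```shell
# Find the snapshot backing the AMI's root device (AMI ID is a placeholder;
# check the mappings if your AMI has more than one device)
SNAPSHOT_ID=$(aws ec2 describe-images \
  --image-ids ami-xxxxxxxx \
  --query 'Images[0].BlockDeviceMappings[0].Ebs.SnapshotId' \
  --output text)

# Create the source volume in the same availability zone as the instance
aws ec2 create-volume \
  --snapshot-id "$SNAPSHOT_ID" \
  --availability-zone eu-west-1a
```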

Create a New Empty Volume That Will Become Your New Shrunken AMI
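From the AWS CLI, this is a single command. A sketch, assuming a 2 GiB target is big enough for your files - adjust --size as needed:

```shell
# Create an empty 2 GiB volume in the same availability zone as the instance
aws ec2 create-volume --size 2 --availability-zone eu-west-1a
```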

Copy Files From the Source Volume to the Target Volume Using the EC2 Instance

Repeat the above steps for the 'target-volume', but this time change Device to /dev/sdg .
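The attachments can also be done from the AWS CLI. A sketch with placeholder volume and instance IDs:

```shell
# Attach the source volume as /dev/sdf and the target volume as /dev/sdg
# (IDs are placeholders - substitute your own)
aws ec2 attach-volume --volume-id vol-aaaaaaaa \
  --instance-id i-xxxxxxxx --device /dev/sdf   # source-volume
aws ec2 attach-volume --volume-id vol-bbbbbbbb \
  --instance-id i-xxxxxxxx --device /dev/sdg   # target-volume
```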

Then, SSH into your instance, and mount both volumes:

$ sudo mkdir -p /mnt/source /mnt/target
$ sudo mount /dev/xvdf1 /mnt/source
$ sudo parted -s -a optimal /dev/xvdg mktable msdos mkpart primary 0% 100% toggle 1 boot # Partition target volume
$ sudo mkfs.ext4 /dev/xvdg1 # Make filesystem on target volume
Creating filesystem with 524032 4k blocks and 131072 inodes
Filesystem UUID: 49c66eaa-2def-4689-bf18-4b8426ee6cb6
$ sudo e2label /dev/xvdg1 cloudimg-rootfs
$ sudo mount /dev/xvdg1 /mnt/target

BE SURE TO TAKE NOTE OF THE FILESYSTEM UUID OF THE NEW PARTITION ON THE TARGET VOLUME, as printed out by mkfs.ext4! It is very important and we will need it later.

Copy Files Across To The New Volume

Copy across all files from the source volume to the target volume. Obviously, the target volume should be of sufficient size to accommodate all the files! If not, you will have to create a new one that does have space.

sudo tar -C /mnt/source -c . | sudo tar -C /mnt/target -xv

Now, we need to change the filesystem UUID in the copied files to the new one. Find the old one by inspecting the old filesystem, and then use sed to change that old one to the new one in /boot/grub/grub.cfg:

$ sudo blkid -s UUID -o value /dev/xvdf1
567ab888-a3b5-43d4-a92a-f594e8653924

# Note we use the new UUID here, as output by mkfs.ext4 above
$ sudo sed -i -e 's/567ab888-a3b5-43d4-a92a-f594e8653924/49c66eaa-2def-4689-bf18-4b8426ee6cb6/g' /mnt/target/boot/grub/grub.cfg
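Rather than copying the UUIDs around by hand, both can be read with blkid and substituted in one go. A sketch, assuming the volumes are still attached as /dev/xvdf and /dev/xvdg:

```shell
# Read both filesystem UUIDs, then rewrite grub.cfg on the target
OLD_UUID=$(sudo blkid -s UUID -o value /dev/xvdf1)   # source filesystem
NEW_UUID=$(sudo blkid -s UUID -o value /dev/xvdg1)   # target filesystem
sudo sed -i -e "s/${OLD_UUID}/${NEW_UUID}/g" /mnt/target/boot/grub/grub.cfg
```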

Now we run 'grub-install' to install the boot loader on the volume:

$ sudo grub-install --root-directory=/mnt/target /dev/xvdg

Now we should remove the volumes from the instance. First we unmount them:

$ sudo umount /mnt/source /mnt/target

Then we detach them both from the instance on the Volumes page of the AWS console.
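The same can be done with the AWS CLI, again with placeholder volume IDs:

```shell
# Detach both volumes from the instance (IDs are placeholders)
aws ec2 detach-volume --volume-id vol-aaaaaaaa   # source-volume
aws ec2 detach-volume --volume-id vol-bbbbbbbb   # target-volume
```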

Create a New AMI From The Smaller Volume

Create a snapshot from the target volume. Select it on the Volumes page, and choose Actions -> Create Snapshot. Enter 'smaller-ami' for Name, and leave Description blank. Click the snapshot ID link that is then shown.

Once the snapshot finishes creating (the spinner next to it stops spinning), select the snapshot, and choose Actions -> Create Image. Choose 'Hardware-assisted virtualization' for 'Virtualization type', and fill in a name of your choice (such as 'smaller-ami' - we can reuse the name of the snapshot). Leave all other options as the defaults, and click 'Create'. The AMI ID should be displayed; clicking it will take you to the AMI page showing its creation progress.
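The snapshot-and-register steps can be scripted too. A sketch with placeholder IDs, assuming an x86_64 image whose root device is /dev/sda1:

```shell
# Snapshot the target volume (volume ID is a placeholder)
SNAPSHOT_ID=$(aws ec2 create-snapshot --volume-id vol-bbbbbbbb \
  --description smaller-ami --query SnapshotId --output text)

# Wait until the snapshot completes
aws ec2 wait snapshot-completed --snapshot-ids "$SNAPSHOT_ID"

# Register an HVM AMI from the snapshot
aws ec2 register-image \
  --name smaller-ami \
  --architecture x86_64 \
  --virtualization-type hvm \
  --root-device-name /dev/sda1 \
  --block-device-mappings "DeviceName=/dev/sda1,Ebs={SnapshotId=$SNAPSHOT_ID}"
```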

Now, if you create a new instance using that AMI, it should boot up just like a normal instance, but with a smaller virtual disk attached to it by default!


These instructions are quite complex, covering a range of topics from AWS volume and snapshot management to Linux partition management, and quite frankly they are best automated, as they are tricky to carry out by hand. But sometimes there really is a need to shrink volumes, and hopefully you will find them helpful if you come across that need.

If you were following along to test the process out, please remember to delete all instances, AMIs, volumes, and snapshots created, as otherwise Amazon will continue to bill you.

A Tale of Two Deployments Part 2 - Why Agile Infrastructure Decisions Matter

We already found out, from Part 1 of this story, about all the problems that can go wrong during a live deployment, and how an immutable-server based deployment pipeline can help deal with them. Now, we will find out why it is common to end up with an unstable infrastructure, where code is pushed to manually managed servers, and how to avoid getting there without breaking the bank.

Read More

Using `eval` Directly on the Results of Command Substitution Considered Harmful

I use the following pattern quite a lot in my Bash scripts:


set -e
eval $(command_with_shell_output)
# act on new environment variable

Here, the output of a command gives us some shell statements (like possibly exporting some environment variables) which the shell evaluates in the context of the script. ssh-agent is one example of a command that follows this pattern.

set -e tells Bash to stop running the script if any command fails. This is good. But the problem is that when command_with_shell_output fails, the script keeps running! The way around this is to break out the command substitution and evaluation into separate statements:


set -e
output=$(command_with_shell_output)
eval "${output}"
# act on new environment variable

This will successfully stop the script from carrying on if the command substitution fails.

Init Scripts for Web Apps on Linux, and Why You Should Be Using Them

I have seen too many servers running production applications, where if the server has to be rebooted, someone has to log into it and restart the app. Everyone knows of these magical arcane things called 'init scripts' which start various services when a server boots up, but for some reason they are often not used by developers - perhaps due to fear of complexity. While they were fairly complex a while back, things have gotten much easier with the advent of newer technologies on the scene.

Although it could get very complex in the past to write these, it is not actually that complex these days, as we have modern 'init daemons' to help us. An 'init daemon' is what controls how the system starts up and shuts down, and init scripts are the scripts we write to tell it how to start up or shut down our apps.

Upstart (Older Ubuntus, Amazon Linux, Old Red Hat Enterprise Linux)

'Upstart' is an init daemon written and popularized by Ubuntu. Newer Ubuntu versions have replaced it with systemd but, for compatibility's sake, still let you use Upstart scripts.

Here is a sample script that starts a Rails app called 'app' from the directory '/srv/app'. It starts it as the 'appuser' user, sets some environment variables, tells it to handle up to 4 requests concurrently, and does some super-cool restart-if-abnormally-terminated magic. Note: the puma web server is NOT told to start in the background with the '--daemon' flag - Upstart will take care of that.

# /etc/init/app.conf
description "My App Server"

start on runlevel [2345]
stop on runlevel [016]

setuid appuser
chdir /srv/app

# restart the service if it abnormally terminates...
respawn
# ...but quit trying if it fails 5 times in 60 seconds
respawn limit 5 60

env RAILS_ENV=production
env SECRET_KEY_BASE=foobar

exec bin/bundle exec puma -w4 -e production --preload -b tcp://localhost:3000/

The app can be controlled with:

sudo service app start/stop/restart/status

Annoyingly, Amazon Linux, as used on AWS, wishes to stay compatible with the ancient Red Hat Enterprise Linux 6, and ships an ancient version of Upstart that does not support the 'setuid' stanza. Thus we are forced to use 'su' to do the user changing for us, like so:

description "My App Server" 

start on runlevel [2345]
stop on runlevel [016] 

respawn
respawn limit 5 60

env RAILS_ENV=production
env SECRET_KEY_BASE=foobar

exec su -s /bin/bash -c 'cd /srv/app && bin/bundle exec puma -w4 -e production --preload -b tcp://localhost:3000/' appuser

In this case, the app is controlled with:

sudo initctl start/stop/restart/status app

Find out more about Upstart in the Upstart Cookbook, its official documentation.

Systemd (Almost Every Current Linux Distribution)

'Systemd' is an init daemon that has gradually replaced Upstart. It is actually more than an init daemon - but let's ignore that for now, as we do not want to get into a flame war.

Again, here is a sample script that starts a Rails app called 'app' from the directory '/srv/app'. It starts it as the 'appuser' user, sets some environment variables, tells it to handle up to 4 requests concurrently, and does its own restart-if-abnormally-terminated magic. Note: as above, the puma web server is NOT told to start in the background with the '--daemon' flag - Systemd will take care of that.

# /etc/systemd/system/app.service
[Unit]
Description=My App Server
# Run after these
After=network.target

[Service]
User=appuser
WorkingDirectory=/srv/app
Environment="RAILS_ENV=production" "SECRET_KEY_BASE=foobar"
# Always restart on abnormal termination
Restart=always
# Note that first argument must be an absolute path, rest are arguments to it
ExecStart=/srv/app/bin/bundle exec puma -w4 -e production --preload -b tcp://localhost:3000/
# Startup/shutdown grace period
TimeoutSec=30

[Install]
# Run before this
WantedBy=multi-user.target

The app can be controlled with:

sudo systemctl start/stop/restart/status app

The status sub-command is particularly cool, showing the app's status, standard output and process tree. Passing it the '-l' flag adds even more information to the output.

However, to actually make it start on boot, you need to 'enable' the service with:

sudo systemctl enable app

More in the systemd.service and systemd.unit manual pages.

A Note on Restarting Apps on Deployment

Since these are system services, and you are not doing a deploy as root (I hope!), issuing a restart during a deploy will require you to use 'sudo'. You could always just give the user complete sudo access (as is the default if you are using EC2 instances, and using the default user), or you could limit the user to only being able to run certain commands as root using sudo - this is the topic for another blog post.


There you have it. It really is simple, once you know how, to create an init script for whatever flavour of Linux you are running on. No more having to log into the server to restart the application just because writing init scripts is too much hassle.