
Blue/Green Deploys with Kubernetes and Amazon ELB

At Octoblu, we deploy very frequently and we’re tired of our users seeing the occasional blip when a new version is put into production.

Though we use Amazon OpsWorks to manage our infrastructure more easily, our updates can take a while: dependencies must be installed before the service restarts. Not a great experience.

Enter Kubernetes.

We knew that moving to an immutable infrastructure approach would let us deploy our apps (which range from extremely simple web services to complex near-real-time messaging systems) more quickly and easily.

Containerization is the future of app deployment, but managing and scaling a fleet of Docker containers, along with all their port mappings, is not a simple proposition.

Kubernetes simplified that part of our deployment strategy. However, we still had a problem: while Kubernetes spins up new versions of our Docker instances, we can enter a state where old and new versions are serving traffic at the same time. And if we shut down the old version before bringing up the new one, we get a brief (sometimes not so brief) period of downtime.

Blue/Green Deploys

I first read about blue/green deploys in Martin Fowler’s excellent article BlueGreenDeployment; it’s a simple but powerful concept. We started building a way to do this in Kubernetes. After some complicated attempts, we came up with a simple idea: use an Amazon ELB as the router. Kubernetes handles the complexity of routing a request to the appropriate minion by listening on a given port on every minion, which makes ELB load balancing a piece of cake: have the ELB listen on ports 80 and 443, then route requests to the Kubernetes port on all minions.

Blue or Green?

The next problem was figuring out whether blue or green is currently active. Another simple idea: store a blue port and a green port as tags on the ELB, and inspect the ELB’s current listener configuration to see which one is live. There is no need to store the value somewhere that could drift out of date.
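That comparison can be sketched as a small shell helper (a hypothetical function for illustration; the deploy script further down inlines the same logic):

```shell
#!/bin/bash
# next_color: given the instance port the ELB currently forwards to
# and the port stored in the ELB's "blue" tag, print the color that
# should receive the next deploy. (Hypothetical helper; the real
# script inlines this comparison.)
next_color() {
  local old_port=$1 blue_port=$2
  if [ "$old_port" = "$blue_port" ]; then
    echo green   # blue is live, so deploy to green
  else
    echo blue    # green is live, so deploy to blue
  fi
}

next_color 3001 3001   # → green
next_color 3002 3001   # → blue
```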

Putting it all together

We currently use a combination of Travis CI and Amazon CodeDeploy to kick off the blue/green deploy process.

The following is part of a script that runs on our Trigger Service deploy. You can check out the code on GitHub if you want to see how it all works together.

I’ve added some annotations to help explain what is happening.


SCRIPT_DIR=`dirname $0`

export PATH=/usr/local/bin:$PATH
export AWS_DEFAULT_REGION=us-west-2

# Query ELB to get the blue port label
BLUE_PORT=`aws elb describe-tags --load-balancer-name triggers-octoblu-com | jq '.TagDescriptions[0].Tags[] | select(.Key == "blue") | .Value | tonumber'`

# Query ELB to get the green port label
GREEN_PORT=`aws elb describe-tags --load-balancer-name triggers-octoblu-com | jq '.TagDescriptions[0].Tags[] | select(.Key == "green") | .Value | tonumber'`

# Query ELB to figure out the current port
OLD_PORT=`aws elb describe-load-balancers --load-balancer-name triggers-octoblu-com | jq '.LoadBalancerDescriptions[0].ListenerDescriptions[0].Listener.InstancePort'`

# figure out if the new color is blue or green
if [ "${OLD_PORT}" == "${BLUE_PORT}" ]; then
  NEW_COLOR=green
  NEW_PORT=${GREEN_PORT}
else
  NEW_COLOR=blue
  NEW_PORT=${BLUE_PORT}
fi

# crazy template stuff, don't ask.
# Some people, when confronted with a problem,
# think "I know, I'll use regular expressions."
# Now they have two problems.
# -- jwz
perl -pe $REPLACE_REGEX $SCRIPT_DIR/triggers-service-blue-service.yaml.tmpl > $SCRIPT_DIR/triggers-service-blue-service.yaml
perl -pe $REPLACE_REGEX $SCRIPT_DIR/triggers-service-green-service.yaml.tmpl > $SCRIPT_DIR/triggers-service-green-service.yaml

# Recreate the service for the new color
kubectl delete -f $SCRIPT_DIR/triggers-service-${NEW_COLOR}-service.yaml
kubectl create -f $SCRIPT_DIR/triggers-service-${NEW_COLOR}-service.yaml

# destroy the old version of the new color
kubectl stop rc -lname=triggers-service-${NEW_COLOR}
kubectl delete rc -lname=triggers-service-${NEW_COLOR}
kubectl delete pods -lname=triggers-service-${NEW_COLOR}
kubectl create -f $SCRIPT_DIR/triggers-service-${NEW_COLOR}-controller.yaml

# wait for Kubernetes to bring up the instances properly
x=0
while [ "$x" -lt 20 -a -z "$KUBE_STATUS" ]; do
   sleep 10
   x=$((x + 1))
   echo "Checking kubectl status, attempt ${x}..."
   KUBE_STATUS=`kubectl get pod -o json -lname=triggers-service-${NEW_COLOR} | jq ".items[][\"triggers-service-${NEW_COLOR}\"].ready" | uniq | grep true`
done

if [ -z "$KUBE_STATUS" ]; then
  echo "triggers-service-${NEW_COLOR} is not ready, giving up."
  exit 1
fi

# remove the port mappings on the ELB
aws elb delete-load-balancer-listeners --load-balancer-name triggers-octoblu-com --load-balancer-ports 80
aws elb delete-load-balancer-listeners --load-balancer-name triggers-octoblu-com --load-balancer-ports 443

# create new port mappings
aws elb create-load-balancer-listeners --load-balancer-name triggers-octoblu-com --listeners Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=${NEW_PORT}
aws elb create-load-balancer-listeners --load-balancer-name triggers-octoblu-com --listeners Protocol=HTTPS,LoadBalancerPort=443,InstanceProtocol=HTTP,InstancePort=${NEW_PORT},SSLCertificateId=arn:aws:iam::822069890720:server-certificate/

# reconfigure the health check
aws elb configure-health-check --load-balancer-name triggers-octoblu-com --health-check Target=HTTP:${NEW_PORT}/healthcheck,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=2

Oops happens!

Sometimes Peter makes a mistake, and we have to roll back to a prior version quickly. If the old version is still running on the off cluster (the color not receiving traffic), rollback is as simple as re-mapping the ELB to forward to the old ports. But sometimes Peter tries to fix his mistake with a new deploy, and then we have a real mess.

Because this happened more than once, we created oops. Oops lets us instantly roll back to the off cluster by executing oops-rollback, or quickly re-deploy a previous version with oops-deploy git-commit.
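Under the hood, that rollback is just the listener re-mapping from the deploy script, pointed back at the old port. Here is a dry-run sketch (a hypothetical helper that prints the aws CLI calls instead of running them; the real oops tool executes them and handles the 443 listener and health check too):

```shell
#!/bin/bash
# rollback_commands: print the aws CLI calls that would re-point an
# ELB's HTTP listener at a previous instance port. (Hypothetical
# dry-run sketch; oops itself runs these for real.)
rollback_commands() {
  local elb=$1 old_port=$2
  # drop the current listener...
  echo "aws elb delete-load-balancer-listeners --load-balancer-name ${elb} --load-balancer-ports 80"
  # ...and recreate it against the old color's port
  echo "aws elb create-load-balancer-listeners --load-balancer-name ${elb} --listeners Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=${old_port}"
}

rollback_commands triggers-octoblu-com 3001
```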

We add an .oopsrc to all our apps that looks something like this:

{
  "elb-name": "triggers-octoblu-com",
  "application-name": "triggers-service",
  "deployment-group": "master",
  "s3-bucket": "octoblu-deploy"
}

oops list will show us all available deployments.

We are always looking for ways to get better results; if you have suggestions, let us know.

iTunes Connect: Invalid Binary

Invalid Binary.

Gee, thanks, Apple, for that insightful, descriptive message. Surely, with all your advanced binary scanning, static analysis, Application Loader, and so on, all you can give us is a most unhelpful “Invalid Binary”?

If you are suffering from “Invalid Binary” issues, and have done everything short of sacrificing small farm animals, try this trick.

If your Entitlements.plist file was generated with a version of Xcode prior to Xcode 3.2.3, remove Entitlements.plist and regenerate it using Xcode 3.2.3. You don’t need to change any of the options in the new Entitlements.plist file; just recompile and submit again. Hopefully this helps someone.

How To Build iPhone 3.0 and iOS4 Apps On The Same Machine

It’s actually really easy, here’s how I’ve set it up:

Install Xcode 3.2.3 with iOS 4 in /Developer

Install Xcode 3.2.2 with iPhone OS 3.2 in /Developer322

Install the latest Xcode 4.0 developer preview in /DeveloperBeta

This makes it trivially easy to support the older SDKs and toolsets for handling your legacy iPhone applications.
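With the side-by-side installs above, picking a toolset is just a matter of calling the right copy of xcodebuild (each install carries its own at <install dir>/usr/bin/xcodebuild). A hypothetical helper mapping an SDK name to the install directory from my layout:

```shell
#!/bin/bash
# toolchain_for_sdk: map an SDK name to the Xcode install directory
# that provides it, per the side-by-side layout described above.
# (Hypothetical helper; the directory names are from my own setup.)
toolchain_for_sdk() {
  case "$1" in
    iphoneos3*) echo /Developer322 ;;   # legacy iPhone OS 3.x builds
    iphoneos4*) echo /Developer ;;      # current iOS 4 builds
    *)          echo /DeveloperBeta ;;  # Xcode 4.0 preview
  esac
}

toolchain_for_sdk iphoneos3.2   # → /Developer322
```

A legacy build then becomes something like `$(toolchain_for_sdk iphoneos3.2)/usr/bin/xcodebuild -sdk iphoneos3.2` without disturbing the default /Developer install.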

A Brave New World

No, not a dystopian novel about eugenics, this is my re-attempt to commit to a blogging world.

Hello all faithful followers (yes, Mom, I’m talking to you). Today is a new day, a new dawn, a fresh start, what have you. I have decided that I need to make a new commitment to blogging. I need to force myself to practice and dramatically improve my writing and communication skills. I had previously maintained a blog over at, but blogged very infrequently and unprofessionally.

Times have changed. I have been stockpiling a lot of topics over the last couple of years as I have worked in and on my business (Integrum Technologies), helped to create a revolutionary movement (Gangplank), continued to faithfully serve (SanTan Christian Center), and have been experimenting with new technologies and dealing with human and technical challenges.

I am steeling my discipline to blog once per week on any one of the above topics. We’ll see how it goes…

Note: Considering it took me 2 weeks to even get this posted up, I’m not feeling very optimistic :-/