Hello, I'm Alan

Wandong Winter Wonderer Jun 21

I’d been sitting there thinking, leading up to this ride, that given it has a 100km and a 150km option, maybe this was the one on which to attempt my first 150km ride. In the end I decided, sure, let’s give it a spin; maybe I’ll fail, but I won’t know unless I try.

Up extra early for the drive up to Wandong; with a predicted low of -2, it was definitely going to be fresh. Very eerie driving under (not through, but under!) some fog just out of Craigieburn. I arrive, get set up, register and pass my light check.

Off we start with a decent crowd of riders for the middle of winter; very still, but the sun is out and shining nicely. I take a glance after I’ve been riding for about 10 mins, and the bike computer says it’s -4.3!

Warmed up a bit from -4.3 when I stopped to take this photo

Frosty paddocks

I get into a good rhythm and look at the frost over the landscape and the day just starting to wake up. Quickly through Broadford, then I start climbing the short steep hill up Sugarloaf Creek Rd, push down on the pedals, and have my back wheel spinning on a sealed road!

Warmed up enough for fingerless gloves

Rolling hills toward Seymour, then on to Pyalong-Seymour Rd, crossing the route of another randonnée from earlier in the year. The first control at Seymour had some hot soup waiting for me, guarded by a retired tank! It hit the spot quickly, then the fairly flat road through Mangalore to Avenel, to the newsagency for a shop control before backtracking to Seymour.

Obligatory bike photo

It was about midday at this stage, and seeing the Mangalore pub put the VB jingle in my head. Seeing a sign for Thales Explosive Ordnance Services with some of the font in Comic Sans was slightly disconcerting.

Seeing this started playing the jingle to a VB ad

Explosives and the use of Comic Sans do nothing to inspire confidence, but it does show me up as a bit of a font snob

Back to the same control in Seymour for extra soup and some fruit cake for my lunch stop. By this stage I was the firm holder of the lanterne rouge for the 150km riders, not having seen anyone else since Avenel.

Warm Soup at the Seymour control

A bit slower out of Seymour, taking the other route back to Broadford along a mostly single-lane road. Back into Broadford, one of the 200km riders overtook me while I was stopped for a feed out of my bag of goodies. 125km down at this point, and I seriously considered pulling the pin as I’d already put in a good day and was getting tired, but I still had about 3 hours, so I decided to have about 5 mins more off the bike and then slowly proceed on toward the last control at Kilmore.

Between Broadford and Kilmore were three separate hills, with my thighs screaming they’d had enough climbing, so the stops picked up in frequency with the willpower definitely waning. Into the last control at a service station, where I’d been hanging out for a curried egg sandwich, but there was nothing to be had, so a muffin was eaten while sitting outside the door with some confused looks from drivers filling up, but I was beyond caring at this point.

My first ever ride where I started with long shadows in the morning and saw them again in the afternoon

Slowly back on the road toward Kilmore East and then Wandong, going even slower, watching the shadows getting long and some very beautiful afternoon light across the paddocks and trees, with one last climb to get back to Wandong. Running on fumes, into the final control for some more soup and party pies, and a bit of a chat with some more 200km riders who’d caught up to me.

Very happy I did it, but it feels like my bike fit may be a little off, so I’ll probably do a few more 100km rides before attempting something longer again.

Stripped down Rails Mar 17

It still seems like plenty of people don’t know you can rip whole bits out of Rails, which is great when making an API app.

The rails-api gem is great, but if you depend on Rails, it drags in things you may have no use for in an API app, such as sprockets.

My Gemfile

source 'https://rubygems.org'

gem 'railties', '4.2.0'
gem 'actionpack', '4.2.0'
gem 'activemodel', '4.2.0'
gem 'activerecord', '4.2.0'
gem 'activesupport', '4.2.0'

gem 'pg'

gem 'active_model_serializers', '~> 0.8.3'
gem 'unicorn'
gem 'jwt'
gem 'rack-cors', require: 'rack/cors'
gem 'rails-api'

group :development, :test do
  gem 'rspec-rails'
end

Also, in config/application.rb you probably have a line require 'rails/all', but you can get away with less Rails than that. I only include the bits I need:

require 'active_model/railtie'
require 'active_record/railtie'
require 'action_controller/railtie'
require 'rails/test_unit/railtie'

I should point out that I need test_unit there even though I use rspec, as rspec depends on it.

Gems included by the bundle:
  * actionpack (4.2.0)
  * actionview (4.2.0)
  * active_model_serializers (0.8.3)
  * activemodel (4.2.0)
  * activerecord (4.2.0)
  * activesupport (4.2.0)
  * arel (6.0.0)
  * builder (3.2.2)
  * bundler (1.8.5)
  * diff-lcs (1.2.5)
  * erubis (2.7.0)
  * i18n (0.7.0)
  * json (1.8.2)
  * jwt (1.4.1)
  * kgio (2.9.3)
  * loofah (2.0.1)
  * mini_portile (0.6.2)
  * minitest (5.5.1)
  * nokogiri (
  * pg (0.18.1)
  * rack (1.6.0)
  * rack-cors (0.3.1)
  * rack-test (0.6.3)
  * rails-api (0.4.0)
  * rails-deprecated_sanitizer (1.0.3)
  * rails-dom-testing (1.0.5)
  * rails-html-sanitizer (1.0.2)
  * railties (4.2.0)
  * raindrops (0.13.0)
  * rake (10.4.2)
  * rspec-core (3.2.2)
  * rspec-expectations (3.2.0)
  * rspec-mocks (3.2.1)
  * rspec-rails (3.2.1)
  * rspec-support (3.2.2)
  * thor (0.19.1)
  * thread_safe (0.3.5)
  * tzinfo (1.2.2)
  * unicorn (4.8.3)

I’m still trying to get rid of the dependency on nokogiri, but that’s a mission for another day.

Testing Rails within a shipping Docker Container Mar 12

I really like having solid artefacts that are known good. I’ve seen some people use different containers for testing than for deployment, which doesn’t quite make sense to me, and is one more difference between test and deployment.

I do use a few techniques to build my containers quicker, but even without them my testing technique still works.

Build script that builds container and runs tests


#!/bin/bash
set -e
set -x

SHA1=`git rev-parse --short HEAD | tr -d "\n"`
CONTAINER_URL=my-registry/app # assumption: where the app image gets tagged and pushed
DB_CONTAINER=app-test-db      # assumption: name for the throwaway test database

trap onexit EXIT

function onexit {
  docker rm -f $DB_CONTAINER > /dev/null || true
}

echo "--- Starting database"
docker rm -f $DB_CONTAINER || true # make sure the container doesn't exist to start with
docker run -d -i -t --name $DB_CONTAINER postgres:9.4.0

echo "--- Building container"
docker build -t $CONTAINER_URL:$SHA1 .

echo "--- Running tests"
docker run --rm -i -t --link=$DB_CONTAINER:postgres $CONTAINER_URL:$SHA1 ops/docker_tests

Contents of ops/docker_tests


#!/bin/bash
# This script is run by build, within a docker container

set -e
set -x

export DATABASE_URL=postgresql://postgres@$POSTGRES_PORT_5432_TCP_ADDR/app_test
bundle exec rake db:setup
bundle exec rake

The only downside is that I need to include the test gems in the project, but for the peace of mind I get, I’m happy to do this.

Docker speedup tips Feb 27

There are two different parts of speeding up a container, the building and then the pushing of bytes across the network.

With building, Docker caches commands as they are run, so that only the first modified line of a Dockerfile (and any subsequent lines) will be re-run.

COPY is also cached, so a COPY will only be re-run if the contents of the file change. Usually I have two main sections in a Dockerfile: the parts that set up a container and rarely change (installing any requisite packages and such), and then the commands that build a version of the container for an application deployment.
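A minimal sketch of that two-section layout (the base image, package names and paths here are illustrative assumptions, not from my actual Dockerfile):

```
# Section 1: rarely changes, so these layers stay cached between builds
FROM ruby:2.1.5
RUN apt-get update && apt-get install -y build-essential libpq-dev

# Section 2: changes with every application deployment
COPY Gemfile Gemfile.lock /app/
WORKDIR /app
RUN bundle install --deployment
COPY . /app
```

The ordering matters: anything above the first changed line is served from cache, so the slow package installs only re-run when section 1 itself changes.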

I also make sure to set up a .dockerignore to exclude the .git directory (because you shouldn’t need it in a deployment).

Ruby Speedups

First trick I tried is

COPY Gemfile /gems/
COPY Gemfile.lock /gems/
WORKDIR /gems
RUN bundle install --deployment --path /gems

ADD . /app

This way bundle install only gets run if you change your gems, which is a great speedup, but if you have a lot of gems, this still takes a very long time to re-run from scratch.

I now run a script that installs gems into a directory mounted from the local filesystem, and then COPY that directory into the container being built.

mkdir -p $GEM_PATH
cp -p Gemfile* $GEM_PATH
#chown because some things set 0600
docker run --rm -i -t -v $GEM_PATH:/gems -w /gems ruby:2.1.5 /bin/sh -c "bundle install --deployment --path /gems && chown -R `id -u` /gems/*"

I then just have a line in my Dockerfile to COPY the gems in

COPY .gems /gems
RUN bundle install --deployment --path /gems

The bundle install here just sets some files so my Rails app runs correctly; it’s very quick because it realises it already has all the files it expects.

Try to keep the amount of stuff in your container small, and the number of steps that change often small (so a deployment has a smaller diffset). Go delete those unused files!

Rails + JWT Feb 19

I’ve been experimenting with JWT for some client side apps (mainly using React), and I am now in the process of setting up centralised accounts at work.

The important part is that only one app is authoritative for a user, and then there are multiple apps which serve API requests for their respective clients.

JWT allows a user to be cryptographically verified without any communication required between an API server and our user store. Most of the time it appears JWT is used with a single shared secret and SHA-256 HMAC, which works quite well for a single server. (When using a shared secret, jwt.io is a handy resource for verifying everything is OK.)
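For the curious, the shared-secret (HS256) scheme is simple enough to sketch with nothing but the Ruby standard library. This is an illustration of the token format, not the jwt gem’s API; the secret and claims are made up:

```ruby
require 'openssl'
require 'base64'
require 'json'

# base64url without padding, as the JWT spec requires
def b64url(data)
  Base64.urlsafe_encode64(data).delete('=')
end

secret  = 'shared-secret'
header  = b64url(JSON.generate(typ: 'JWT', alg: 'HS256'))
payload = b64url(JSON.generate(sub: '42', exp: 1_900_000_000))

# the signature covers "header.payload"
signing_input = "#{header}.#{payload}"
signature = b64url(OpenSSL::HMAC.digest('SHA256', secret, signing_input))

token = "#{signing_input}.#{signature}"
puts token
```

Anyone holding the secret can recompute the HMAC over the first two segments and compare it to the third, which is exactly why a shared secret is a problem once more than one service holds it.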

In our case, however, a shared secret would mean that any of our apps could generate a valid JWT, which leaves us with a larger security issue.

It’s not quite clearly documented, but the ruby jwt gem supports RSA public/private key pairs, so we can verify that the JWT in question came from the correct source without extending our trust to more services. Also, because we store the keys in environment variables, I have base64 encoded them so we don’t have escaping issues.

private_key = OpenSSL::PKey::RSA.new(Base64.decode64(ENV['JWT_PRIVATE_KEY']))
JWT.encode({ sub: id.to_s, exp: 1.day.from_now.to_i }, private_key, 'RS256')

A JWT can be any JSON object, but the fields sub and exp are reserved. The sub field, if used, has to be a string identifying the user, and exp is the date the JWT expires (in seconds-since-epoch format).

On the client side we can use the public key to verify that our private key signed this JWT:

jwt_public_key = OpenSSL::PKey::RSA.new(Base64.decode64(ENV['JWT_PUBLIC_KEY']))
payload, header = JWT.decode(jwt_string, jwt_public_key)

Lastly, the spec says that a client should send an Authorization header of the form ‘Bearer <token>’ to authenticate. I have a before_filter in Rails to authenticate these requests:

def login_required
  authorization = request.headers['HTTP_AUTHORIZATION']
  if authorization.blank?
    render status: :forbidden, text: 'No authorization header set'
    return
  end

  jwt_string = authorization.split(' ')[1]

  payload, header = JWT.decode(jwt_string, jwt_public_key)

  @current_user = User.find_by!(user_store_id: payload['sub'])
rescue JWT::DecodeError
  render status: :forbidden, text: 'Invalid header set'
end

Docker + Elastic Beanstalk Feb 12

Some of you may have seen my talks around Melbourne about some things I’ve used Docker for. No, it’s not for everything. It’s really not for things that store state locally (eg. databases).

Late 2014, I started a new position and became responsible for (amongst other things) looking after ops. The existing solution was giving us some issues, and I’d been wanting to experiment with using Docker for running apps in production, not just for running CI tasks.

For a while CoreOS had been on my list to experiment with, but although it seemed great for service-worker type applications, I couldn’t see a good way of deploying web applications with zero downtime. After this I just went with straight Ubuntu instances and installed Docker on them.

From here I hooked up Buildkite to run a script every time someone pushes to GitHub. When the repository is pushed to, a script builds the container, setting up any library dependencies needed, pushes the container to the Docker registry, then SSHes to each server in staging and runs another script that pulls the container down and restarts it. It runs serially to make sure each node is up and serving requests again before the next one gets pulled down to update and restart. Overall it worked reasonably well, but it wasn’t very cloud-like: any time I added or removed an instance, I had to tell Buildkite what hosts to now deploy to.

Recently I discovered that Elastic Beanstalk can also deploy Docker containers, taking care of setting up ELB, RDS and managing zero-downtime deployments.

It’s been a bit of futzing around to make all the parts work together. Firstly, set up a new IAM user with this policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1420763850000",
            "Effect": "Allow",
            "Action": [ ... ],
            "Resource": [ ... ]
        },
        {
            "Sid": "Stmt1420691017000",
            "Effect": "Allow",
            "Action": [ ... ],
            "Resource": [ ... ]
        }
    ]
}

Most permissions are straightforward. For S3, you’ll need a bucket you are using for deploys that the deploy user can write to.

You’ll also need access to a bucket that AWS manages whose name starts with ‘elasticbeanstalk’, otherwise you get some odd errors.

When a build is good, you’ll need to create a new application version on Elastic Beanstalk, which you can then deploy to an Elastic Beanstalk Environment

Create an application version (once your docker container is pushed)

CONTAINER_URL is where the docker container has been pushed to, APP_NAME is the application name in Elastic Beanstalk, and DEPLOY_FILE is the name of the zip file to upload.

SHA1=`git rev-parse --short HEAD | tr -d "\n"`


cat ops/Dockerrun.aws.json.template | sed "s#<TAG>#${SHA1}#" | sed "s#<CONTAINER_URL>#${CONTAINER_URL}#" > Dockerrun.aws.json
zip -r $DEPLOY_FILE Dockerrun.aws.json

aws s3 cp $DEPLOY_FILE s3://application-deploy/$APP_NAME/$DEPLOY_FILE

aws elasticbeanstalk create-application-version --application-name $APP_NAME --version-label $SHA1 --source-bundle S3Bucket=application-deploy,S3Key=$APP_NAME/$DEPLOY_FILE
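For reference, ops/Dockerrun.aws.json.template with its <TAG> and <CONTAINER_URL> placeholders would look something like the single-container v1 format below (the container port is an assumption about the app, not from my actual template):

```json
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "<CONTAINER_URL>:<TAG>",
    "Update": "true"
  },
  "Ports": [
    { "ContainerPort": "3000" }
  ]
}
```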

When you want to deploy an application version to an environment

aws elasticbeanstalk update-environment --environment-name $APP_ENV --version-label ${SHA1}

It returns quickly; you’ll have to watch Beanstalk to see that the deploy has gone OK.

Logstash Fun Aug 29

I’ve been hearing about Logstash for quite a while, but just never got around to it. I finally have it installed and running in a way I’m generally happy with, even though at the moment it’s just nginx & rails getting sent to it.

Logstash + Rails

I know people have mentioned lograge, but I didn’t like how it swallowed the normal Rails log, so I extracted the parts I wanted and now log to a second logfile specifically for logstash. I’ve also added a method to our application controller that returns a hash of events for logstash, so it’s as easy as

logstash_payload[:search_terms] = %w[array of search terms]

and it will get sent up to elasticsearch.
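The helper described above could be sketched roughly like this; the class and method names here are hypothetical stand-ins for what lives in my application controller, using only the standard library:

```ruby
require 'json'
require 'time'
require 'stringio'

# Sketch: controllers accumulate extra fields in a hash via `payload`,
# and one JSON event per request is written to a dedicated logstash
# logfile when the request finishes.
class LogstashLogger
  def initialize(io)
    @io = io
    @payload = {}
  end

  # the equivalent of `logstash_payload` in the application controller
  def payload
    @payload
  end

  # write a single JSON line combining request info with the payload
  def flush(controller:, action:)
    event = {
      '@timestamp' => Time.now.utc.iso8601,
      'controller' => controller,
      'action'     => action,
    }.merge(@payload)
    @io.puts(JSON.generate(event))
    @payload = {}
    event
  end
end

log = LogstashLogger.new($stdout)
log.payload[:search_terms] = %w[array of search terms]
log.flush(controller: 'SearchesController', action: 'index')
```

Each line in the logfile is then a self-contained JSON event that the logstash json codec can ship straight to elasticsearch.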


Elasticsearch

Elasticsearch actually looks rather cool, and I keep hearing good things about it. I installed it via the official cookbook, and it was going fine for about a day until we ran out of file handles due to a bug in the cookbook version. Once the limit was increased, we had a red health status and some indices that just couldn’t be brought back, so I deleted them and got back to green.

The next problem was chewing through a good chunk of space very quickly and filling the partition ES was on. This time around there were just some unallocated shards, which I cleaned up by rerouting.

Almost OK, until I saw the cluster status back to yellow. This time it was because we are only running a single ES node, and a default somewhere sets the number of replicas to 1, which is impossible with a single node.

I created a template for all logstash indices to have 0 replicas by sending a PUT to _template/logstash with:

{
    "template": "logstash-*",
    "settings": {
        "index.number_of_replicas": 0
    }
}
I’m using the Sense extension to manage my ES node. For security I can’t connect directly to ES from my local computer, so I tunnel port 9200 on my ES node back to my computer using SSH (ssh -L 9200:localhost:9200).


Kibana

I love Kibana, but it’s still under very rapid development, and I’ve found it’s not so unusual to break it, so at the moment I’m pinning a specific known-good commit when I deploy it.

Chef Gotchas Jul 16

I’ve been running chef now for about two years, and I guess things were obvious to me until I spoke with some of my colleagues and realised there are still some gotchas out there. Here are the bigger ones I’ve seen.

Deleting stuff from the filesystem doesn’t make it go away

Most people use git to manage their chef artefacts and a script to upload them to the chef server, so they get some source control. Keep in mind that if you delete a file from git, there is nothing to tell chef to remove that artefact.

My workaround here for a data bag is to commit an empty hash; for a role, to keep the role’s name and description with no run list or attributes. (Read on for why I’m getting away from using roles altogether.)

Chef only manages what you tell it

Using, say, sphinx for your search engine?

package 'sphinx' do
  version '2.0.8'
end

service 'sphinx' do
  action [:enable, :start]
end

Now you’ve gone and switched to elasticsearch, added the appropriate cookbook, and removed the one for sphinx. Just a reminder that sphinx is still installed and running. In these cases you should reverse the service and installation (stop and disable the service, remove the package); on nodes without the package or service it’s a very quick no-op.

Roles aren’t versioned

You can lock your environments down to use specified cookbooks, and freeze your cookbooks at a specified version so you can safely have unverified cookbooks uploaded to your chef server, but you cannot do the same with roles. This is one of the reasons I’m a big fan of the wrapper cookbook concept I first saw mentioned by Jamie Winsor at Riot Games (producers of Berkshelf).

The execute method needs a check condition

I don’t yet have an LWRP for stuff installed via npm, so I’m just using the execute resource at the moment.

execute "npm install grunt-cli" do
  command "npm install grunt-cli"
end

The only problem is this gets run on every single chef run, which is less than efficient. I probably only want to install it if it is not already installed.

execute "npm install grunt-cli" do
  command "npm install grunt-cli"
  not_if { File.exist?("/usr/bin/grunt-cli") }
end

not_if will evaluate the block passed to it.

Using chef to compile packages from source

I see this used a lot, and I used to do it myself. The problem is the number of hacks involved and the slow initial convergence. We have package management; let’s use it.

I’d played with fpm, but still found it a lot of work until I was pointed on Twitter to fpm-cookery (if you’ve used homebrew you’ll get the hang of it quick smart). I’ve now set up a simple apt repository, and when I need packages that aren’t in the distro I use (such as Ruby 2.0.0p247 and Go 1.1.1), I package them up, test that they are correct, and then make them available to apt-get install on any machine. It also makes writing your cookbooks much, much simpler!

Big ball of mud repos

To be honest I’m not sure how much this still happens, but once upon a time it was common to see people keep all of their cookbooks in the same big repo as the rest of their chef artefacts.

(Admittedly this was easier with the tools at the time.) Now there are projects such as Berkshelf and Librarian (I much prefer Berkshelf), which resolve dependencies and will do uploads for you. Berkshelf also has a Vagrant plugin so you can more easily fire up a vagrant instance for a cookbook under development and test.


Cron defaults

Unless you specify the minute, hour, day and weekday, they default to “*”, so using cron like:

cron "update_lists" do
  command "rake lists:update"
  hour "3"
end

when your intention was to run the job at 3am, you’ll actually end up running it 60 times between 3am and 4am, so don’t forget all the fields you intended to include!
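To actually run once at 3am, pin the minute as well; a corrected version of the resource above (Chef DSL, assuming a once-at-3am schedule was the intent):

```ruby
cron "update_lists" do
  command "rake lists:update"
  minute  "0"
  hour    "3"
end
```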

Postfix Outbound SMTP auth Jun 30

I’ve recently started moving some of my mail hosting back to machines I control. Outbound SMTP can be a hassle for spam reasons, so I’m just using Mandrill.

I added the following to my main.cf:
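
A typical set of main.cf settings for relaying through an authenticated smarthost looks something like the following; the map path matches the file below, but treat the rest as a sketch of the usual settings rather than my exact config:

```
relayhost = smtp.mandrillapp.com
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl-passwords
smtp_sasl_security_options = noanonymous
smtp_use_tls = yes
```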


and in /etc/postfix/sasl-passwords

smtp.mandrillapp.com username:password

Just run “postmap /etc/postfix/sasl-passwords” after editing.

Ruby OpenSSL issues Feb 20

Everybody has issues with OpenSSL trying to use verified certificates (for the record, any time someone whips out VERIFY_NONE, a kitty dies).

I found the magic piece of documentation: if you look at File.dirname(OpenSSL::Config::DEFAULT_CONFIG_FILE), you can drop your certs.pem there, and all will be well.
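A quick way to see where that directory is on your system (the exact path varies per Ruby/OpenSSL build):

```ruby
require 'openssl'

# Directory OpenSSL was compiled to look in for its config; a certs.pem
# dropped here lets certificate verification succeed without VERIFY_NONE.
dir = File.dirname(OpenSSL::Config::DEFAULT_CONFIG_FILE)
puts dir
```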

Error 65 FreeBSD Jul 25


When I got this, it meant ‘you are trying to set an invalid shell’.

Install Android headless Jan 9

You want to install android headless?

Try this:

android update sdk -u -t android-8

Myki Sucks Nov 16

As will come as no surprise, I quite like my technology, and I was looking forward to using Myki when I moved to Melbourne. Having used the Go Card in Brisbane, I was interested to see they’d made improvements, such as having periodical tickets stored on cards (I see Brisbane will soon have a similar concept: basically you pay no more after your 10th trip for the week).

Unfortunately, it’s been nothing like a positive experience all the way along.

The readers really struggle with my Myki in my wallet, so I’ve had to take it out to actually use it.

If you set up automatic credit card topup and it fails just once, for any reason, your Myki is blocked, and can’t be fixed without you posting your Myki back (at your expense), which can take ‘up to two weeks’. In the meantime? Too bad, buy another ticket.

The real source of this rant is today’s experience. I’ve been buying 31 day passes for the last couple of months, needlessly touching on and off (I’m not leaving the zones my pass is purchased for, so there is no real reason to have to do this), and have been running into trouble with the old style Metcard barriers (or frankenbarriers, as I’ve heard them referred to). Once again today it wouldn’t let me out, so, as is easy to do at my 194 cm height, I simply stepped over the barrier, where I was stopped by a ticket inspector who wanted to check my ticket (fair enough), and to inform me I had committed an offence (interfering with a barrier) and could be subject to a $180 fine.

After a brief discussion, I was told that I’d be let go with a warning, and that I should get my Myki replaced as it may be defective.

Later that day I went to Southern Cross to get a replacement, where I was told I’d have to go without my Myki for two weeks while a replacement was organised. I was offered a 'courtesy Myki’, but until I got the replacement in the mail I’d have to purchase another pass or use short term tickets, or possibly just buy another 31 day pass as the cheapest option.

Hold on? I haven’t done anything wrong, and I’m the one that has to fork out more money and wear the inconvenience.

I picked a different option: a $9.80 admin fee to cancel my Myki, have my current pass refunded, plus the one I’d purchased but hadn’t yet started, plus the $20-ish credit I had on my Myki, buy a monthly Metcard, and be on my merry way home.

Over the last 12 years of using Metcard, I only remember having a few small issues (mainly leaving my ticket in my jeans pocket and it stopping working electronically), but as they were tickets that would be rubbish within at most a month, there simply wasn’t this inconvenience factor.

Fix it:

I’m not just going to rant, I’m going to make some suggestions.

Make sure station staff at any premium station can replace a myki and its passes and any stored value on the spot. It should only take a couple of minutes, and shouldn’t need any forms.

In the short term? If I had been handed a Myki with a two week pass on it, I probably would have put up with it.

Secondly, a failed credit card topup shouldn’t disable a card; it just shouldn’t have any credit added. Hat tip to the Go Card people: when the same problem happened, they rang me, sorted out the problem over the phone and told me it would be fixed overnight (I assume this is because the buses & ferries aren’t connected to the network on the go; still annoying, but far more reasonable).

Mobile Sites Oct 26

I am currently working on a mobile site for a customer. Initially we decided to go with jQuery Mobile, but in the end we’ve gone with Zepto instead. The main reasons are that it’s still jQuery-like, except it actually works on older, slower devices, meaning more happy customers for little pain, and it’s also much more ‘web like’.

The second part was making sure no URL changes were required, and that if a page didn’t have a mobile view, it would fall back to the standard view, which ruled out the 'mobile’ format hack I’ve seen in multiple places.

In the end, if it’s determined the user wants the mobile version of a page, I just add the mobile_views directory to the view paths, and everything after that just works.

def set_mobile_view
  if is_mobile_site?
    prepend_view_path (Rails.root + 'app' + 'mobile_views').to_s
  end
end

Empty Views on Android Oct 20


One thing when I was working on TuneLocator for Android was that I couldn’t figure out how the empty view for a list view was supposed to work. Turns out in the XML they need to be at the same level:

<ListView android:id="@android:id/list" />
<TextView android:id="@android:id/empty"
          android:text="Finding nearby stations..."/>