The Heisel Test: Five questions for professional happiness

I was recently asked to list out the values I look for in a person (when hiring), or a team or company (when looking for a job). Since The Joel Test (and its several updates) is a thing, I am tongue-firmly-in-cheek calling this The Heisel Test.


  1. Do you put customer value and experience first?
  2. Do you move responsibly fast?
  3. Are you genuinely curious and open to new information?
  4. Do you empower your teams and teammates?
  5. Do you respect and have empathy for people?

Do you put customer value and experience first?

Why is this important? Well no matter what you do, you have (or hope to soon have) customers. You won’t be in business long without them.

There’s a good chance you have a really interesting mix of external customers who pay you in dollars, and internal customers who pay you in good will and cooperation.

You’re relentlessly focused on solving a real customer problem to add customer value. But you don’t stop there! You want your customer’s experience using your product and engaging your services to be as positive, fast, and frictionless as possible.


Posted in Kanban, Management

Take notes in your 1:1 and share them

The most important meetings I have every week are my one-on-ones with my engineering managers and the engineers on their teams.

The agenda is the same every week – at least 15 minutes to talk about whatever they want to talk about, and up to 15 minutes for me to talk about whatever I want with them. The best ones are usually 20-30 minutes without me saying much at all.

They’re about relationship building, they’re about gemba, they’re about family, friends, beer, bands, pets.



I’ve almost always taken notes during them, almost always with pen and paper so I can keep my eyes focused on the other person.

I’ve been spotty about what I do with the notes. The todo items, if any, would always end up in Things. But the subjects we talked about, the feedback I got, and the feedback I gave would end up lost — either to my illegible handwriting or scanned into a deep dark Evernote archive. Some would get typed up for posterity and review season, but a lot wouldn’t, because time and attention are finite resources.

Until recently, that is! A couple of folks in the Rands Leadership Slack mentioned that they type up their notes AND share them back with the other person.

Since then I’ve started making it a habit to always type up my notes into a shared Google Doc per person – direct report, skip-level, peer, even my 1:1s with my boss – with a heading for the date, followed by the subjects we talked about, any questions I asked and the answers I heard, and any feedback given or received.

It’s a beautiful thing because now I’ve got two great things I didn’t have before:

  • A feedback loop with the other person — they see exactly what I took away from our discussion and have a chance to correct anything I mistook
  • Instant accountability for myself — now the folks I’m meeting with know whether I actually typed up my notes, so they tend to get typed up same or next day.

So try this one weird trick after your next 1:1 – type up the notes and share a link back to the other person. It’s easy with Google Docs or Evernote, but even something as universal as an e-mail would do the trick.


Posted in Management

Docker standards at Kabbage

I also posted this over at our Kabbage Tech Blog

In the five months my team’s been using Docker we’ve stolen, er, adopted some standards to make our lives easier.

1. Build your own critical base images

Our application images have their own FROM inheritance chain. The application image depends on a Python Web application base image.

That web app image depends on an official Python image, which in turn depends on a Debian official image.

Those images are subject to change at the whim of their GitHub committers. Having dependencies change versions on you without notice is not cool.


So we cloned the official Dockerfiles into our own git repo. We build the images and store them in our own Docker registry.

Every time we build our base or application images we know that nothing critical has changed out from underneath us.
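The mechanics are nothing fancy. Here’s a sketch of the build-and-push loop (the registry host and directory names are placeholders, not our real ones, and the docker commands are echoed so you can dry-run it first):

```shell
#!/bin/sh
# Build base images from our cloned copies of the official Dockerfiles,
# then push them into our own registry.
# registry.example.com and the dockerfiles/ paths are placeholders.
set -e

REGISTRY=registry.example.com

for name in debian-wheezy python-2.7 python-webapp; do
    tag="$REGISTRY/$name:latest"
    # echo makes this a dry run; drop it to really build and push
    echo docker build -t "$tag" "dockerfiles/$name"
    echo docker push "$tag"
done
```

Run it from a cron job or CI task and your base images only ever change when you decide they should.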

2. Container expectations

Stop. Go read Shopify’s post on their container standards. The next section will now seem eerily similar, because we stole, er, adopted a bunch of their recommendations.


container/files

We copy everything in ./container/files over the root filesystem. This lets you add or override just about any system config file that your application needs.


container/test

We expect this script to test your application, duh. Ours are shell scripts that run the unit, integration, or complexity tests based on arguments.

Testing your app becomes a simple command:

docker-compose run web container/test [unit|pep8|ui|complexity]
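Our scripts aren’t public, but the shape of a container/test dispatcher is simple. Here’s a sketch — the per-suite commands are placeholders, not our exact ones:

```shell
#!/bin/sh
# container/test -- run one test suite, chosen by the first argument.
# The per-suite commands below are illustrative placeholders.
set -e

suite="${1:-unit}"
case "$suite" in
    unit)       cmd="/venv/bin/python manage.py test" ;;
    pep8)       cmd="/venv/bin/pep8 /app" ;;
    ui)         cmd="/venv/bin/python manage.py test ui" ;;
    complexity) cmd="/venv/bin/radon cc -nc /app" ;;
    *)          echo "unknown suite: $suite" >&2; exit 1 ;;
esac

echo "running $suite suite: $cmd"
# exec $cmd   # the real script execs the suite here
```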


container/compile

We run this script as the last step before the CMD gets run.

This is what ours looks like:

echo "$(git rev-parse --abbrev-ref HEAD)--$(git rev-parse --short HEAD)" > /app/REVISION
echo "Bower install"
node node_modules/bower/bin/bower install

echo "Big Gulp build - minification"
node node_modules/gulp/bin/gulp.js build

/venv/bin/python /app/manage.py collectstatic --noinput

3. Docker optimization

ADD, install, ADD

We run docker build a lot. Every developer’s push to a branch kicks off a docker build / test cycle on our CI server. So making docker build as fast as possible is critical to a short feedback loop.

Pulling in libraries via pip and npm can be slow. So we use the ADD, install, ADD method:

# Add and install reqs
ADD ./requirements.txt /app/
RUN /venv/bin/pip install -r /app/requirements.txt
ADD . /app

By adding and then installing requirements.txt, Docker can cache that step. You’ll only have to endure a re-install when you change something in your requirements.txt.

If you go the simpler route like below, you’d suffer a pip install every time you change YOUR code:

# Don't do this
ADD . /app
RUN /venv/bin/pip install -r /app/requirements.txt
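The same caching trick works for node modules. Assuming your package.json lives at the project root, the pattern is identical:

```dockerfile
# Add and install node deps before adding the rest of the code,
# so Docker can cache the npm install layer
ADD ./package.json /app/
RUN cd /app && npm install
ADD . /app
```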

Install & cleanup in a layer

We also deploy a lot. After every merge to master, an image gets built and deployed to our staging environment. Then our UI tests run and yell at us if we broke something.

Sometimes you need to install packages to compile your application’s dependencies. The naive approach to this looks like this:

RUN apt-get update -y
RUN apt-get install -y libglib2.0-dev
RUN pip install -r requirements.txt # has something that depends on libglib
RUN apt-get remove -y libglib2.0-dev
RUN apt-get autoremove -y

The problem with that approach is that each command creates a new layer in your docker image. So the layer that adds libglib will always be a contributor to your image’s size, even when you remove the lib a few commands later.

Each instruction in your Dockerfile will only ever increase the size of your image.

Instead, move add-then-install-then-delete steps into a script you call from your Dockerfile. Ours looks something like this:

ADD ./container/files/usr/local/bin/ /usr/local/bin/
RUN /usr/local/bin/install-deps.sh

install-deps.sh (name it whatever you like) does the install and cleanup in one layer:

set -e # fail if any of these steps fail
apt-get -y update
apt-get -y install build-essential ... ... ...
#... do some stuff ...
apt-get remove -y build-essential ...
apt-get autoremove -y
rm -rf /var/lib/apt/lists/*

For more Docker image optimization tips check out CenturyLink Labs’ great article.

4. Volumes locally, baked in for deployment

While working on our top-of-the-line laptops, we use docker-compose to mount our code into a running container.

But deployment is a different story.

Our CI server bundles our source code, system dependencies, libraries and config files into one authoritative image.


That image is what’s running on our QA, staging and production servers. If we have an issue, we can pull an exact copy of what’s live from the registry to diagnose on our laptops.

5. One purpose (not process) per container

Some folks are strict, die-hard purists who insist you run only one process in a container. One container for nginx, one container for uwsgi, one container for syslog, etc.

We take a more pragmatic approach of one purpose per container. Our web application containers run nginx and uwsgi and syslog. Their purpose is to serve our Web application.

One container runs our Redis cache; its purpose is to serve our Redis cache. Another container serves our Redis sentinel instance. Another serves our OpenLDAP instances. And so on…

I’d rather take a moderate increase in image size (by adding processes related to the purpose) than have to orchestrate a bunch more containers to serve a single purpose.

6. No Single Points of Failure


Docker makes it super-easy to deploy everything to a single host and hook them up via Docker links.

But then you’re a power-cycle away from disaster.

Docker is an amazing tool that makes a lot of things way easier. But you still need to put thought and effort into what containers you deploy onto what hosts. You’ll need to plan a load balancing strategy for your apps, and failover or cluster strategy for your master databases, etc.

Future standards

Docker is ready for prime time production usage, but that doesn’t mean it or its ecosystem is stagnant. There are a couple of things to consider going forward.

Docker 1.6 logging/syslog

Docker 1.6 introduces the concept of a per-host (not per-container) logging driver. In theory this would let us remove syslog from our base images. Instead we’d send logs from the containers, via the Docker daemon, to syslog installed on the host itself.

Docker Swarm

Docker swarm is a clustering system. As of this writing it’s at version 0.2.0 so it’s still early access.

Its promise is to take a bunch of Docker hosts and to treat them as if they’re one giant host. You tell Docker swarm “Here’s a container, get it running. I don’t need to know where!”

There are features planned but not yet implemented that would let you use it without creating the aforementioned single point of failure.

Posted in Uncategorized

Docker orchestration with maestro-ng at Kabbage

I also posted this over at our Kabbage Tech Blog

At Kabbage, my team loves using Docker! We get a ton of parity between our development, testing and production environments.

We package up our code, configuration and system dependencies into a Docker image. That image becomes our immutable deployment unit.

I’ll cover how we build and package repeatable Docker images in another post. For now let’s talk about how we deploy and run these images.

Too many cooks, er, options

You have many options for managing the deployment and operation of your docker images. Early into our first Docker project, I assumed we’d use Shipyard for orchestration.

It had a nice GUI and an API. I’d planned to script Shipyard’s API to get the images and containers onto the hosts.

I found out the hard way that Shipyard can’t pull images onto remote Docker hosts! I thought for a hot minute about scripting something to handle that part. But that seemed more complicated than it was worth.

So I started running down the list with not much time left to get a working solution…

One contender — Had a GUI and an API but seemed way more complex than what we needed.

Fig/docker-compose — We were already using fig for our local development environments. Managing remote docker hosts isn’t its strong suit. It’s possible but slow because you deploy to each host in sequence.

Centurion — Looked promising. It was fig, but for remote systems. New Relic wrote it, so it’s got some real-world usage. But the first thing I ran into when using it was a Ruby traceback. I could’ve spent my time diagnosing it, but I had one more tool to try out.

maestro-ng — Looked a lot like Centurion and fig. It could pull images onto remote Docker hosts, check! It’s written in Python, so if I ran into a problem I had a better chance of fixing it quickly.

Maestro-ng’s the winner

Maestro is a lot like fig. You configure your container — which image, environment variables, volumes, links, etc. — in a YAML file. You also configure the remote docker hosts, or “ships.”


Plus, under the hood the YAML files are treated as Jinja2 templates. You can keep your configuration DRY with a base template for an application. In the per-environment YAML files, you change only what’s needed!
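To give a flavor of the format, here’s a made-up service definition (not our real config; see the maestro-ng README for the full schema):

```yaml
name: webapp
ships:
  web1:
    ip: 10.0.1.10
services:
  web:
    image: registry.example.com/webapp:latest
    env:
      ENVIRONMENT: staging
    instances:
      web-1:
        ship: web1
        ports:
          http: 80
```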


Deployment is a breeze. We use a Blue/Green deployment strategy so we can safely stop the running containers on our hosts. Here’s what our deploy script looks like:

# pull new image onto box
maestro -f $maestro_file pull $service

# stop the running service
maestro -f $maestro_file stop $service

# clean out old containers
maestro -f $maestro_file clean $service

# start the new containers with the new image
maestro -f $maestro_file start $service
Posted in Uncategorized

Get Docker running on AWS OpsWorks

I’ve spent the past couple of weeks at my new job doing a couple of things: hiring kick-ass Python and UI engineers and getting some build-and-deploy infrastructure set up so the team can hit the ground running.

Long story short: I wanted a way to deploy pre-built Docker images from any repository to hosts running in OpsWorks.

I chose Docker because it would let me get a repeatable, consistent environment locally and on various non-production and production environments. And I’d get there a lot quicker than writing Puppet or Chef recipes and using Vagrant.

When it came time to get a non-local environment spun up, I turned to AWS due to some networking and security issues around my team’s first project.

Time was of the essence, so I first turned to Beanstalk but found its Docker support problematic. Amazon announced but hasn’t yet released their Elastic Container Service. I ended up picking OpsWorks.

I couldn’t find a lot of advice on the 21st century version of man pages, so I’m writing this up in the hope it helps others, and that wiser folks tell me what I can do better!

Brief OpsWorks primer

OpsWorks is an engine for running Chef recipes based on lifecycle events in the course of a machine’s life.

You start by defining a layer, which is a group of machines that do similar tasks like serve your Web app, run memcache, or host Celery workers.

Then for that layer you define which recipes fire whenever a machine is set up, an app is deployed to it, it’s shut down, and so on.

AWS OpsWorks and Docker deployment strategy

The best strategy I could find was on an AWS blog post.

Chris Barclay sets up a layer with recipes that install Docker. Application deployments require the OpsWorks instance to pull your code, including its Dockerfile, from a git repo and build the image locally before running it.

I didn’t like building the Docker images locally from git sources. It ruled out using pre-built community images and opened the door to random build issues on a subset of the boxen.

What I wanted was a way to deploy pre-built Docker images from any repository to hosts running in OpsWorks.

Improved OpsWorks and Docker deployment

I took the code from Chris Barclay and adapted it. You set some key environment variables in your OpsWorks application definition, and those tell the Chef recipe which registry, image and tag to pull and, optionally, the registry username and password to authenticate with.

Here are the instructions and source to get up and running:

  1. Set up a new stack in OpsWorks. Under Advanced set the following:
    • Chef version: 11.10
    • Use custom Chef cookbooks: the HTTPS git URL of a repo containing the recipes
    • Manage Berkshelf: Yes
    • Berkshelf version: 3.1.3
  2. Add a layer
    • Type: Other
    • Recipes
      • Setup: owdocker::install
      • Deploy: owdocker::docker-image-deploy
  3. Add an App
    • Type: Other
    • Repository type: Other
    • Environment variables:
      • registry_image: The path portion of a docker pull command ala: docker pull $registry_image
      • registry_tag: The tag of the image that should be pulled from the registry, ala $registry_tag
      • layer: The shortname of the layer the image should be deployed to
      • service_port: The port on the HOST that will be connected to the container
      • container_port: The port on the CONTAINER that will be connected to the service port
      • registry_username: OPTIONAL username to login to the registry
      • registry_password: OPTIONAL password to login to the registry
      • registry_url: OPTIONAL url to a registry other than the official Docker Hub

Posted in Programming, Python, Technology

DotCloud: Try ALL THE PaaSes

For fun, I’m writing a series of blog posts breaking out what it takes to deploy this app to a variety of Platforms as a service. All of my sanitized config files are on GitHub.

Today I’ll cover deploying twitter-dedupe to DotCloud


0. General thoughts

DotCloud, like Heroku, is easy to grok if you’re familiar with the 12 factor app pattern.

I didn’t find the documentation easy to navigate. I spent more time looking for what I needed than I did with Heroku.

DotCloud stores configuration in a JSON file on your container rather than exporting it as environment variables. That required a minor script wrapped around my daemon code.

I was surprised that DotCloud didn’t offer a way to run or test your application locally. This is the company that brought us Docker so I figured I’d get to use it locally to set up my image.

As you’ll see below, it’s surprisingly not easy to run a staging and production version of your app in DotCloud.

1. Provision redis

Adding Redis to my application was super easy. I added two lines to my dotcloud.yml file (a name for the service, cache here, and its type) and I had a redis stack.

cache:
    type: redis

2. Deploy the daemon

  1. You configure what to run using a Supervisord config file. The one I used for twitter-dedupe was pretty simple.
  2. You deploy your code using DotCloud’s command line tool:
    dotcloud push
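For reference, a minimal supervisord.conf for a daemon like this looks something like the following (the program name and paths are illustrative, not my exact file):

```ini
[program:dedupe]
command = /home/dotcloud/env/bin/python daemon.py
autorestart = true
```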

DotCloud has git and hg integrations but I couldn’t tell from the documentation if I could select which branch gets pushed to DotCloud each time I invoke dotcloud push.

3. Access the logs

During development and for live troubleshooting there’s a handy command to tail the logs live:

dotcloud logs

There weren’t any built-in connections between DotCloud and Loggly.

That meant diving in and configuring syslog on my DotCloud container and wiring it up to Loggly’s syslog endpoint, or wiring Loggly into my application itself. Neither seemed appealing so I skipped it.

4. Do it all again for a staging environment

I couldn’t find any documentation or best practices for running multiple copies of the same application on DotCloud.

Each folder on my computer could be tied to one, and from what I can tell only one, DotCloud application.

So to duplicate my application and have a staging environment, I followed all the steps to set up my application again in a different folder.

I ended up with something like this:

├── slateliteprd
│   ├──
│   ├── dotcloud.yml
│   ├── requirements.txt
│   └── supervisord.conf
└── slatelitetst
    ├── dotcloud.yml
    ├── requirements.txt
    └── supervisord.conf
Posted in Python

Heroku: Try ALL THE PaaSes

For fun, I’m writing a series of blog posts breaking out what it takes to deploy twitter-dedupe to a variety of Platforms as a service. All of my sanitized config files are on GitHub.

Today I’ll cover deploying twitter-dedupe to Heroku


0. General thoughts

Heroku is easy to grok if you’re familiar with the 12 factor app pattern.

I had that in mind when I was writing twitter-dedupe, so it wasn’t surprising that I picked it to power @slatemaglite.

The Heroku docs had answers for all the questions I had.

The Heroku toolbelt provides nice tools like .env files and foreman to manage and run your app in your local environment.

1. Provision redis

Provisioning redis was super easy. I added the Redis To Go add-on to my account.

I added some code to my app to look for the REDISTOGO_URL environment variable set to a redis:// URL and I was off to the races.

I was a little frustrated by the need to put a relatively proprietary environment variable name into my code. Other Redis add-on providers use similar patterns for their names. I don’t know why REDIS_URL wouldn’t suffice for them all.

Update: Folks at Heroku agree this should be changed and are working on it.

2. Deploy the daemon

Deployment was a three step process: configure, deploy and then scale.

  1. You configure what to run using a Procfile. The one I used for twitter-dedupe was very simple.
  2. You deploy your code to Heroku using a Git-based workflow:
    git push heroku
  3. Somewhat confusingly on a new project, you need to scale from 0 to 1 or more instances after your deploy:
    heroku ps:scale daemon=1

3. Access the logs

During development and for live troubleshooting there’s a handy command to tail the logs live:

heroku logs --tail

There are a lot of logging add-ons as well. I decided I wanted to try Loggly on this project.

Heroku has the concept of Syslog drains, which will send your log output to any syslog-capable system.

Loggly has an easy integration with Heroku. It’ll give you the exact command to add the appropriate drain. It looks something like this:

heroku drains:add https://{{a url here}} --app {{ your app name here }} 

4. Do it all again for a staging environment

Heroku has the concept of forking applications.

So once I had my initial app up and running the way I wanted, I ran:

heroku fork -a myfirstapp mysecondapp

That copied all my add-ons and configuration. Then I did some git setup so I could push to both:

git remote add test git@heroku.com:mysecondapp.git # Heroku's git@heroku.com:<app>.git convention
git push test master # Deploy
git remote rename heroku prod

After a deploy I needed to scale up test:

heroku ps:scale daemon=1 --app mysecondapp

And I had a running test environment. Deploying to it, testing and then deploying to prod looks like this:

git push test master
heroku logs --tail --app mysecondapp
# Do some verification
git push prod master
Posted in Programming, Python, Technology

Try ALL THE PaaSes

I chose to deploy twitter-dedupe to Heroku to power @slatemaglite.

For fun, I’m writing a series of blog posts breaking out what it takes to deploy this app to a variety of Platforms as a service. I intend to keep my (sanitized) config files on GitHub and probably some raw notes of what it took to get things set up.

For each service, my goal is to:

  1. Provision redis
  2. Deploy the daemon
  3. Access the logs
  4. Do it all again for a staging environment

I’ll be trying out these services:

  1. Heroku
  2. Dotcloud/Cloudcontrol
  3. Elastic Beanstalk
  4. Google Compute Engine
  5. Anything else someone recommends to me 🙂
Posted in Programming, Python

RACI for new leaders

Understanding roles is a perennial issue, and it becomes more and more of one as a company scales and small-group communication breaks down…

  • Responsible. The people who do the actual work.
  • Accountable. The one person on the hook. ‘The Decider’ as Bush II puts it.
  • Consulted. Opinion contributors.
  • Informed. One way updates.

(Via RACI — Just about anything)

First off, I love the RACI mnemonic and I’ve used it for years with my teams. In my experience mentoring new leaders, there are a couple of things to watch out for specific to RACI:

  1. Make sure the person you’re making Accountable understands that they aren’t automatically assumed to be Responsible as well. A lot of new leaders take on the assignment and don’t enlist their team for help performing the work.
  2. Ask them to think thoroughly about who should be Consulted. Enthusiasm for a new assignment can lead them to run off with their team and leave out some key stakeholder. Ask them up front who needs to be Consulted, and provide some guidance based on your knowledge of the organization if they’re far afield.
Posted in Management

OKRs: Adopting Objectives and Key Results

I’ve been looking for a way to up my and my team’s game as competitive pressures, deadlines and customer demand increase.

OKRs got their start at Intel and made their leap to Google, LinkedIn and other valley companies. In a nutshell: an OKR is a qualitative objective that’s inspirational, paired with some challenging quantitative key results that measure your progress against that objective.

It’s a simple concept, but there’s a lot of nuance to make it something that drives your team forward. Otherwise it’s just another way of doing MBOs.

Here are my highlights thus far:

  • Objectives should be qualitative and inspirational, time-bound (a quarter, a year, etc.), and achievable independently by the team.
  • Key results quantify the inspirational language; they are measurable.
  • Key results should be hard — the sweet spot is when the team is 50% confident they can achieve them. More confident and you’re not driving growth; less confident and your team will give up.
  • Key results should be something that happened because of what you did, not what you did. Good: “Customer satisfaction score increases by 10%.” Bad: “Meet with customers, devise features that will please them, implement them.”
  • OKRs should result in failure — if the team is achieving all the key results, then they were set too low. Most companies and teams are extremely failure-averse. If you adopt OKRs, you have to set the stage culturally, with your team and with your management, that you are going to try hard things and miss.
  • OKRs are not a performance review. The quickest way to keep people from aiming high is to punish them for missing.
  • OKRs shouldn’t change during the Objective’s time box. OKRs should help focus the team.
  • Start small: One OKR per company with supporting OKRs per team
  • “OKRs are not the only thing you do, they are the one thing you must do.  Expect people to keep the ship running.”
  • OKRs at every level should be available publicly.

Here are some sources to learn more about OKRs.

Posted in Management