Category Archives: Productivity

Continuous Delivery, not Continuous Deployment

Engineering teams like Etsy’s have popularized the idea of continuous deployment: infrastructure that automatically rolls out newly minted code to production in a safe and gradual manner. In Lean Startup and Web Operations, Eric Ries explains the rationale behind continuous deployment: making a safe habit out of shipping product. I loved the motive, but it was clear that the practice as described required heavy operations infrastructure:

  • A continuous deployment server for automatic deploys after successful continuous integration runs
  • Live monitoring to discern if a code change negatively affected business metrics
  • Automatic rollback if business metrics are negatively affected
  • Incremental rollouts to production servers so as not to deploy to all servers at once
  • Code architecture that allows for both old and new code to run in production simultaneously while code rollouts are in-progress
  • Feature switches
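Of the items above, feature switches are the cheapest to adopt even without the rest of the stack. A minimal sketch in Ruby (the class and flag names are my own illustration, not any team’s actual tooling; a real system would back the flags with a database or config service):

```ruby
# A minimal feature-switch sketch. Flags live in a hash here;
# in production they would come from a database or config service.
class FeatureSwitch
  FLAGS = { new_checkout: false, redesigned_nav: true }

  # Unknown flags default to off, so forgetting to register one is safe.
  def self.enabled?(name)
    FLAGS.fetch(name, false)
  end
end

# Old and new code paths can then coexist in production:
if FeatureSwitch.enabled?(:redesigned_nav)
  # render the new navigation
else
  # render the old navigation
end
```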

While leading a team at LivingSocial, I set out to achieve the goal of safe code shipping as a habit but without the complicated and time-costly infrastructure. We were successful by incorporating good software engineering and deployment practices–all of which were generally good for us and didn’t require as much dedicated tooling or time. Later we discovered others outside the company were starting to do the same under the label “continuous delivery.” We have been even more successful with continuous delivery at LearnZillion, where I am today.

Unfortunately, the cost of continuous deployment infrastructure can discourage engineering teams from investing time in their development and deployment process because they don’t realize the lower-cost alternative, continuous delivery, is also a viable option. I want to share how we do continuous delivery at LearnZillion, so that others can achieve similar results without the overhead of extra infrastructure.

0. Assumptions

I am going to assume the year is 2015, or even 2010 or 2006, and that you have a deployment script or tool like Capistrano to automate the basic deployment steps for your application. I’m also going to assume your team or organization wants to do continuous delivery. If either of these is missing, start there.
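For reference, a minimal Capistrano-style deploy configuration from that era might look like the following sketch (the application name, repository, and server names are illustrative, not any real project’s settings):

```ruby
# config/deploy.rb -- a minimal Capistrano 2-era sketch
set :application, "myapp"
set :repository,  "git@example.com:myapp.git"
set :deploy_to,   "/var/www/myapp"

role :web, "app1.example.com", "app2.example.com"

namespace :deploy do
  # Passenger-style restart: touching this file reloads the app
  task :restart do
    run "touch #{current_path}/tmp/restart.txt"
  end
end
```

With something like this in place, `cap deploy` is the entire manual deployment process, which is all the automation the rest of this article assumes.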

1. Individual responsibility

Although we work as a team, individuals are responsible for carrying work forward to completion and doing that well. Staff are responsible for taking features from initial definition to production shipment. Along the way, they collaborate with and incorporate input from the broader team and company. (See Multipliers and Drive for reasons to give employees meaningful responsibility in the workplace.)

With these responsibilities come expectations:

Do not break the site. Do not break features. Do not break the test suite. Do not commit code you did not write (this is a smell of a bad development database on your machine, a failed merge, etc.). Run the tests regularly–especially before merging into the master branch. If master changes between your test suite run and your attempt to merge, run the tests again once you have caught up.

Unfortunately, I have found that in many organizations, lack of trust is the default. A tech lead or manager is responsible for scrutinizing code from all team members, merging, deploying, and ensuring the application won’t break. This may make sense for new team members until they understand and are comfortable with the team conventions and have demonstrated that they are capable engineers. Otherwise, it should not be the norm.

2. Smallest overlap of responsibilities

We often pair a product designer (design, UX, HTML/CSS) with a full-stack engineer (SQL, Rails, Ruby, JavaScript, HTML/CSS) to work on a feature. However, we avoid assigning multiple engineers the same feature. We try to keep engineers working on “orthogonal capabilities.” (See “The Three Musketeers” and The Mythical Man Month for the rationale behind this approach.)

3. The master branch is sacred

We deploy to production from our master branch. Developers can depend on master as a reliable foundation to fork, merge, and rebase from. Features are developed, reviewed, and QA-ed in separate branches, so if you have test failures there, it’s most likely your code. Feature branches are only merged into master immediately before deployment, and it is the responsibility of the feature owner to make sure the branch is reasonably current with master before merging. There are loads of articles online describing a “simple git workflow.” git and GitHub make this paradigm easy to follow.

4. Follow “The Twelve-Factor App” methodology

I will let the methodology speak for itself. See factor X, dev/prod parity, in particular. The biggest continuous delivery benefit is no surprises during deployment.

At LivingSocial, my team ensured the application development environment behaved like production, except where Rails intentionally separates the two. Truth be told, we didn’t have a reliable staging environment at our disposal, so we went straight from development to production. Believe it or not, because of our practices, this still worked quite well.

At LearnZillion, we take this further by using similar SaltStack configurations for production, staging, and a Vagrant-powered development environment. In development, the Ruby process and gems for the app are still installed on the host operating system but everything else runs inside VirtualBox. It has the side benefit of speeding-up the on-boarding process for new engineers.
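Because Vagrantfiles are plain Ruby, wiring a SaltStack provisioner into the development environment takes only a few lines. A minimal sketch (the box name and Salt paths are illustrative, not LearnZillion’s actual configuration):

```ruby
# Vagrantfile -- development VM provisioned with the same Salt states
# used in staging and production (paths are illustrative)
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"

  # Share the Salt states with the VM, then apply them on boot
  config.vm.synced_folder "saltstack/salt", "/srv/salt"
  config.vm.provision :salt do |salt|
    salt.minion_config = "saltstack/minion"
    salt.run_highstate = true
  end
end
```

Reusing the same states across all three environments is what makes “works on my machine” and “works in production” converge.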

5. A test suite

At both LivingSocial and LearnZillion, we used Ruby on Rails, which strongly encourages use of a unit testing framework. Engineers make certain to have a passing test suite before merging a branch into master and a passing test suite on master post merge; a failure on the master branch takes priority over everything except a live site outage.

At LearnZillion we take this further by integrating CircleCI with GitHub to minimize the execution burden on engineers.

6. An automated QA test suite

At LearnZillion, we have a QA team. They naturally have the potential to be a bottleneck for getting features out. Since quality is their main objective, you want them to be gatekeepers. What you don’t want is for their review and gatekeeping processes to be cumbersome or inefficient. The most powerful lever you can pull within your QA team for continuous delivery is automating their testing. Our team has an extensive QA test suite, which QA engineers can run against any branch, at any time, on a staging server. Automated tests are usually written soon after deployment to production, but sometimes are completed before then. Manual QA of emerging features still takes place, of course.

7. Look at your dashboards

It doesn’t take much effort to have a short list of links to Google Analytics, Mixpanel, or your error reporting service like Bugsnag or Honeybadger. An engineer can inspect them after deploy to see if something broke. Engineers and product designers should be doing this anyway to see how users are responding to changes or new features.

Bonus 1: Manual QA in a different time zone

When an engineer’s code has passed peer review and the automated QA test suite, it is sent along to QA for manual inspection. Test results are back by the next business morning because some of our QA team members are located in India. They test our work while we sleep.

Bonus 2: Continuous QA

At LearnZillion, we’ve integrated a GitHub pull request web hook that deploys a branch to a staging server and runs the QA test suite against it. This means that a branch has been regression tested before it gets to the QA team and usually before it gets to peer review. If you want to read more about our automated QA process, see Kevin Bell’s article about us over at CircleCI.

In Summary

With the good engineering and deployment practices of continuous delivery, you can achieve the same benefit of continuous deployment: safe, consistent delivery of product as a habit. You don’t have to build-out a dedicated infrastructure, and you can build a better engineering team and environment in the process.

Looking for your next gig?

If this sort of engineering environment is appealing to you, and you are interested in being a Senior Software Engineer or Senior Product Designer at LearnZillion, please apply. We would love to meet you.

[Thanks to my team for reviewing this post and recommending improvements to it.]


The 10x Engineer and Delegated Responsibility

Whenever I do an introductory phone call with an engineering candidate, I make sure to explain my management style and how my approach directs our team’s process. Our process is agile, but it is decidedly not a formal Agile methodology. It’s not Agile Scrum; it’s not Extreme Programming; it’s not Kanban. Instead, it’s delegated responsibility in a culture of continuous deployment. I delegate the responsibility of something important to an employee–usually in the form of a significant feature–and let them take it from concept through implementation to deployment.

One of our co-founders serves as our product manager, and we have an experience design team that translates spoken words into diagrams and pictures. However, I make it very clear to my team that any text or visual content they receive are merely representations of product vision. We need them to guide us from here to there. The people on the front lines–the ones doing the actual building of code and product–are the ones most equipped with information. They face the real constraints of the problem domain and existing code base; they have the best insights into how we can be most economical with their time; and they have the capacity to see all the options before us. I’m there to help them sift through that information, when needed, and to be that supportive coach, but my goal is for them to be carrying us forward. I manage, but I aim to lead, not micromanage.

Delegated responsibility is a very common and efficient practice in the business world. In the software industry, however, it has largely been displaced by practices and processes that shift responsibility onto a team of replaceable cogs. The team is expected to churn through a backlog of dozens of insignificantly small bits of larger features, which often lacks foresight into the constraints that will be discovered and the interdependencies between smaller bits that result in developer deadlock. On top of this, a generalized backlog of small pieces creates room for misinterpretation by omitting full context around features, or it results in excessive communication overhead (see The Mythical Man Month).

We are most definitely inspired by Agile. We build a minimum viable product iteratively. We build-measure-learn, pair program when needed, collaborate, peer review each step of the way, and let our QA engineer find our leaky parts. However, my team members are individually responsible for their work and ship whenever they have something ready to show the world.

Some candidates would much rather be working on a team with equally-shared responsibility, collective code ownership, and continuous pair programming. I realize some people need this model, which is why I always discuss it with potential hires. However, others thrive with delegated responsibility. They take ownership, require little to no management or direction, make the right decisions, take pride in what they have built with their own two hands, and are extremely productive. Not surprisingly, others understand their code. It integrates well with the code base. They avoid the dangers that formal methodologies try to curtail. Often they are, or are becoming, that 10x developer. They are liberated, thrilled, and at their best working in this environment. It’s a joy to provide it to them.

If this sort of environment sounds exciting to you, please check out our careers page at LearnZillion.

Pivotal Tracker Dashboard

UPDATE: The script I wrote is no longer needed for the latest release of Pivotal Tracker, as pinned panes persist between browser refreshes. The script below is for the last release of classic Pivotal Tracker on June 20, 2013.

I like Pivotal Tracker. It’s a step-up from the cumbersome ticket tracking systems I’ve used in the past. As a manager though, it’s too cumbersome to see how my individual team members are doing and get an overall picture of how the team is doing at the same time. I’m only given a single backlog, which intermingles everyone’s tickets. I can see all tickets for a single engineer by using search, and I can pin each search results panel to get what I want. But if I reload the page, I lose all my efforts to build a usable dashboard. This is what I’m aiming for but without the hassle: a view of our current sprint (column one), our backlog (column two), our icebox (column three), and each engineer’s backlog (the remaining columns).

Yeah, the screenshot is a bit small here, but I’m not allowed to show you what we’re working on. With a little TamperMonkey grease, you too can have a comprehensive and persistent dashboard. (GreaseMonkey if you’re still using Firefox.) Here is the script to pull it off. All you have to do is customize the project number and the list of engineer name abbreviations.

// ==UserScript==
// @name       Pivotal Tracker Dashboard
// @namespace  https://www.pivotaltracker.com/s/projects/
// @version    1.0
// @description  Show Pivotal Tracker panels for each engineer
// @match      https://www.pivotaltracker.com/s/projects/453961
// @copyright  2012, Ian Lotinsky
// ==/UserScript==
function main() {
  // Wait for Pivotal Tracker's own JavaScript to finish loading
  setTimeout(function() {
    var devs = 'mms, ay, hkb, bh, js, jw, np'.split(', ');
    for (var i = 0; i < devs.length; i++) {
      // Search for each engineer's work and pin the results panel
      $('.search .std').val('mywork:' + devs[i]);
      $('#search_button').click();
      $('.pin_search').last().click();
    }
    $('.search .std').val(''); // clear the search box when done
  }, 2000);
}

// Source: http://stackoverflow.com/questions/2303147/injecting-js-functions-into-the-page-from-a-greasemonkey-script-on-chrome
var script = document.createElement('script');
script.appendChild(document.createTextNode('('+ main +')();'));
(document.body || document.head || document.documentElement).appendChild(script);

Now, this is a bit of a hack, so not everything is roses. You still have to move tickets around in the standard backlog and icebox; you can’t move tickets around inside the developer backlogs, since they’re just pinned search-result panels; and you need to refresh your browser from time to time to refresh the developer backlogs. It’s not as smooth as a first-class feature, but it gets the job done.

Team Debt

I’m currently having a blast leading the technical team behind the LivingSocial Takeout & Delivery web site. One of the challenges of a growing team is maintaining appropriate amounts of communication. You want everyone to know everything that’s important, but not everything. Otherwise, you end up being a case study in The Mythical Man Month.

Although our team did not follow this plan when it was ramping up, hindsight reveals the need for a team debt management strategy as it grows. After mulling it over for a while, I’m fairly sure that if I lead a new team in the future, we will follow this path:

First engineer to join the team

  1. Sets-up the source code repository
  2. Writes a starter project README
  3. Provisions the application and team notification email addresses
  4. Wires-up application notification email(s)

Second engineer

  1. Sets-up the continuous integration (CI) server
  2. Provisions the CI notification email address(es)
  3. Wires-up CI notification emails

Third engineer

  1. Sets-up the team’s Campfire
  2. Wires-up commit and deployment notifications (Campfire and/or email)

Fourth engineer

  1. Sets-up a scrubbed production database dump that engineers can use for local development
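The scrubbing itself can be as simple as rewriting identifiable columns with deterministic placeholders before handing the dump to developers. A hypothetical Ruby sketch, assuming a users table exported to CSV (the column names and helper are illustrative, not a real scrubbing pipeline):

```ruby
require 'csv'

# Replace personally identifiable fields with placeholders derived
# from the row id, so the scrubbed data stays internally consistent.
def scrub_row(row)
  row['email'] = "user#{row['id']}@example.com"
  row['name']  = "User #{row['id']}"
  row
end

# In practice this would run over every table containing user data, e.g.:
# CSV.foreach('users_export.csv', headers: true) { |row| puts scrub_row(row) }
```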

What tech team debt tools do you typically employ, and when do you employ them?

The Art of Innovation by Tom Kelley and Jonathan Littman

Several years ago, my dad, Bo Lotinsky, showed me the famous 60 Minutes special on IDEO–the mecca of innovation. After watching it, I couldn’t help but buy their book The Art of Innovation. I finally got around to reading it, and boy is it good. As always, a bulleted list of ideas and quotes doesn’t do the book justice; it’s more for me to remember what I read and for you to be intrigued enough to read it yourself. Enjoy!

Chapter 3: Innovation Begins with an Eye

  • Keep a list of what bugs you in products.
  • Ask “why/why not?” to understand and challenge what has already been done.
  • Observe the action–not what people say.
  • “Think of products in terms of verbs rather than nouns…as animated devices that people integrate into their lives–and you’ll become more attuned to how people use products, spaces, services–whatever you’re trying to improve.”

Chapter 4: The Perfect Brainstorm

  • Stick to one hour (one and a half max).
  • “Start with a well-honed statement of the problem…at just the right level of specificity…open-ended.”
  • Play: “go for quantity,” “encourage wild ideas,” and “be visual.”
  • Number each idea. Aim for 100 per hour.
  • Build on ideas with variations. Jump to other trains of thought when the current thread has died.
  • Use giant whiteboards, Post-It notes, or butcher paper. The brain is wired for spatial memory, so move around the room to write and to revisit topics.
  • Start with a mental warm-up exercise if people seem to be elsewhere. One great exercise is to survey products in the same category you’re trying to brainstorm in.
  • Sketch, mind-map, diagram; don’t just write words.
  • Spend much more time brainstorming than writing. You want to stay on the creative side of the brain.
  • Everybody is on the same level. No one is the boss, expert, or auditor.
  • No idea is to be critiqued. Just write it down and continue.
  • Don’t make up any other rules.

Chapter 5: A Cool Company Needs Hot Groups

  • Even the world’s best historical innovators worked in teams. Loners don’t succeed.
  • Build teams around problems to be solved, not a team role.
  • Bring in people from all roles and experiences.
  • Look outside the group for ideas, solutions, and feedback.
  • Team leaders pitch potential project members. No one “owns” people. (Note: movie studios, Google, and Netflix do the same thing.)
  • Don’t mandate attire or business hours.
  • Provide snacks.
  • Have a geek club where people can show off the latest technology or demo something they have built.
  • Play hooky as a team and go on a field trip.

Chapter 6: Prototyping Is the Shorthand of Innovation

  • “A playful, iterative approach to problems is one of the foundations of our culture of prototyping.”
  • “A prototype is almost like a spokesperson for a particular point of view.”
  • “A prototype is worth a thousand pictures.”

Chapter 8: Expect the Unexpected

  • “History teaches that innovation does not come about by central planning. If it did, Silicon Valley would be nearer to Moscow than San Francisco.”
  • It’s almost impossible to predict how the market is going to use a product.
  • Spend time absorbing what’s going on around the world. IDEO has subscriptions to at least 100 magazines.
  • Observe people in the wild accomplishing tasks.
  • Hold an open house to solicit feedback and ideas from people.

Chapter 9: Barrier Jumping

  • Organizational checklist: merit-based, employee autonomy, familiarity among staff, messy offices, lots of tinkering.

Chapter 10: Creating Experiences for Fun and Profit

  • “As you step through the innovative process, try thinking of verbs not nouns.”

Chapter 11: Coloring Outside the Lines

  • “The person who toils endlessly at his desk is not likely the person who is going to hatch a great innovation.”

Best Kept Secrets of Peer Code Review by Jason Cohen

It has been three years since I was last under the oppressive finger of the waterfall software engineering process, but that is still what comes to mind when I hear the words “peer review.” Corporate software outfits typically require programmers to present their code to other engineers in hopes of finding and fixing bugs before they become a problem. Usually it involves some form of screen-sharing and a walk-through of code snippets developed in the past few days. In my five years of experience doing this sort of review, reviewers rarely found bugs in code–no matter how poor the engineer or how great the reviewer.

I know book editing is a great practice, and I know that no software engineer is perfect, but my experience had prevented me from seeing benefit in the concept. Best Kept Secrets of Peer Code Review by Jason Cohen renewed my perspective on peer reviews by presenting a more effective way of doing them. The book’s purpose is two-fold: (1) present compelling arguments and practices for effective code review and (2) sell a peer review tracking product called Code Collaborator. Thankfully, the book reserves the sales pitch for the last chapter and does a great job presenting the facts and arguments that ultimately led Smart Bear to build a peer review product. Here are some of my notes, but, like the reviews before this one, I must encourage you to snag your own copy.

Chapter Four: Brand New Information

  • Time is the one factor a reviewer can control that will affect his ability to identify bugs. The reviewer cannot control the language, the algorithmic complexity, or the experience of the developer who wrote the code.
  • 60 minutes of peer review is the sweet spot. Spend more or less time than that and you will statistically either miss bugs or waste time looking for bugs that don’t exist. (On that note: do not spend more than 90 minutes…ever.)
  • The more time a reviewer spends during his first pass over the code, the faster he will be at spotting bugs during his second pass. In other words, go slower the first time around.
  • Private inspection of code produces better results than being guided by the developer in a presentation format. Often reviewers will raise questions that are about how something works, not whether or not a particular piece of code is correct. (More proof that meetings usually waste people’s time and companies’ money.)
  • The hardest bugs to find are code omissions, so arm reviewers with checklists. The list might include items like “should this page require SSL?” or “is allocated memory properly destroyed?” Have your team keep a running list of common bugs to look out for.

Chapter Five: Code Review at Cisco Systems

  • Formal peer review meetings don’t uncover more bugs but do add to the cost of software engineering. (Yes, this is a repeat comment–just driving it home!)
  • The one benefit of a formal meeting is that it motivates the presenter (developer) to be more careful and produce higher quality material (code). Knowing that someone else is going to formally inspect your code will compel you to write better code.
  • Only 200-400 lines of code should be reviewed at any one time. (Recall our 60 minute sweet spot from above? 200 lines / 90 minutes = 2.22 lines per minute. 400 lines / 60 minutes = 6.66 lines per minute. That’s a lot of time.)

Chapter Seven: Questions for a Review Process

  • Keep your peer review checklist to between 10 and 20 items. Any more, and the reviewer’s effectiveness drops.
  • If necessary, create different flavors of checklists to cover more than 20 items: logic bugs, security, design, etc. Have different reviewers use different checklists.

The book is also full of really nice references, case studies, techniques for measuring team effectiveness, and points for team leads. It’s worth the read, so check it out.

Importing a Contacts CSV into Gmail and BitPim Correctly

Snowmageddon was the perfect excuse for me to initiate my migration over to Google Gmail and play with Google Voice.

I had to start by exporting and merging my contacts out of an LG EnV via BitPim, Microsoft Windows XP Address Book, and Microsoft Hotmail Contacts. The export to comma-separated values (CSV) in each application was straightforward. The merging I did manually in Microsoft Excel because Address Book and Hotmail stick to the human-readable Mobile Phone, Home Phone, Office Phone, but BitPim spits out primary, secondary, and tertiary phone numbers as numbers_number, numbers_type three times over. As for email addresses and email types, Address Book and Hotmail do the primary, secondary, tertiary dance, while BitPim limits it to primary and secondary email addresses and types.

After having a cleaned-up contacts list, I was ready to charge forward. However, to my dismay, the Gmail Contacts CSV importer prefers a different format. It wants primary, secondary, and tertiary phone number and phone number type columns and primary and secondary email and email type columns, but the type column data is different depending on what sort of data a record has. So, while Home may work for one record, another record requires * Home. Mind-blowing.

With inspiration from others who have walked a similar path, I decided to write a simple Ruby script to convert contacts from a human-readable CSV format into something Google Contacts likes.

The format of the input CSV is as follows (you might as well throw these column headers into your input CSV as well):

First Name,Last Name,Home Email,Work Email,Mobile Phone,Home Phone,Work Phone

The output CSV matches what Google likes.

And here is the script:

# Simple CSV to Gmail Contacts CSV
# by Ian Lotinsky (http://ianlotinsky.com/)
# Released under the MIT License (http://en.wikipedia.org/wiki/MIT_License)

# Run like
#   ruby gmailcsv.rb my_contacts.csv > import_into_gmail.csv

require 'rubygems'
require 'fastercsv'
if ARGV.size != 1
  puts "USAGE: #{__FILE__} <original csv>"
  puts "  First Name,Last Name,Home Email,Work Email,Mobile Phone,Home Phone,Work Phone"
  exit
end

puts "Name,Given Name,Additional Name,Family Name,Yomi Name,Given Name Yomi,Additional Name Yomi,Family Name Yomi,Name Prefix,Name Suffix,Initials,Nickname,Short Name,Maiden Name,Birthday,Gender,Location,Billing Information,Directory Server,Mileage,Occupation,Hobby,Sensitivity,Priority,Subject,Notes,Group Membership,E-mail 1 - Type,E-mail 1 - Value,E-mail 2 - Type,E-mail 2 - Value,Phone 1 - Type,Phone 1 - Value,Phone 2 - Type,Phone 2 - Value,Phone 3 - Type,Phone 3 - Value"

FasterCSV.foreach(ARGV[0], :headers => :first_row) do |row|
  full_name = "#{row.field("First Name")} #{row.field("Last Name")}".strip
  puts "#{full_name},#{row.field("First Name")},,#{row.field("Last Name")},,,,,,,,,,,,,,,,,,,,,,,* My Contacts,* Home,#{row.field("Home Email")},* Work,#{row.field("Work Email")},* Mobile,#{row.field("Mobile Phone")},Home,#{row.field("Home Phone")},Work,#{row.field("Work Phone")},"
end

And because I wanted to be able to sync my contacts between Google and my phone in the future, I tried to write a script to transform a Google Contacts output CSV into a CSV that BitPim likes. But, alas, Google’s export CSV is inconsistent at best, so I wrote a script to go again from human-readable but this time to BitPim CSV:

# Simple CSV to BitPim Contacts CSV
# by Ian Lotinsky (http://ianlotinsky.com/)
# Released under the MIT License (http://en.wikipedia.org/wiki/MIT_License)

# Run like
#   ruby bitpimcsv.rb my_contacts.csv > import_into_bitpim.csv

require 'rubygems'
require 'fastercsv'
if ARGV.size != 1
  puts "USAGE: #{__FILE__} <original csv>"
  puts "  First Name,Last Name,Home Email,Work Email,Mobile Phone,Home Phone,Work Phone"
  exit
end

puts "names_title,names_first,names_middle,names_last,names_full,names_nickname,numbers_number,numbers_type,numbers_speeddial,numbers_number,numbers_type,numbers_speeddial,numbers_number,numbers_type,numbers_speeddial,emails_email,emails_type,emails_email,emails_type,categories_category"

FasterCSV.foreach(ARGV[0], :headers => :first_row) do |row|
  full_name = "#{row.field("First Name")} #{row.field("Last Name")}".strip
  puts ",#{row.field("First Name")},,#{row.field("Last Name")},#{full_name},,#{row.field("Mobile Phone")},#{"cell" unless row.field("Mobile Phone").nil?},,#{row.field("Home Phone")},#{"home" unless row.field("Home Phone").nil?},,#{row.field("Work Phone")},#{"office" unless row.field("Work Phone").nil?},,#{row.field("Home Email")},,#{row.field("Work Email")},,"
end

And of course, after I got this all working, I remembered that there is a Google Contacts API that I should have tried playing around with.

I hope this saves people from the frustration I experienced and the time I wasted because of bad CSV importers and exporters. If you have any suggestions for improvement, please leave a comment.