I’m currently having a blast leading the technical team behind the LivingSocial Takeout & Delivery web site. One of the challenges of a growing team is maintaining appropriate amounts of communication. You want everyone to know everything that’s important, but not everything. Otherwise, you end up being a case study in The Mythical Man Month.
Although our team did not follow this plan while it was ramping up, hindsight reveals the need for a team debt management strategy as it grows. After mulling it over for a while, I’m fairly sure that if I lead a new team in the future, we will follow this path:
First engineer to join the team
- Sets up the source code repository
- Writes a starter project README
- Provisions the application and team notification email addresses
- Wires up application notification email(s)
- Sets up the continuous integration (CI) server
- Provisions the CI notification email address(es)
- Wires up CI notification emails
- Sets up the team’s Campfire room
- Wires up commit and deployment notifications (Campfire and/or email)
- Sets up a scrubbed production database dump that engineers can use for local development
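That last item is worth a sketch. Below is a hypothetical illustration of the scrubbing step, written as a plain Ruby filter over a SQL dump. It only anonymizes email addresses, and the pattern, method name, and sample data are my own assumptions, not a real production setup; a real scrubber would also handle names, phone numbers, tokens, and so on.

```ruby
# Replace every email address in a line of a SQL dump with a fake,
# numbered address so the dump is safe to hand to other engineers.
EMAIL_PATTERN = /[\w.+-]+@[\w-]+(\.[\w-]+)+/

def scrub(sql_line)
  counter = 0
  sql_line.gsub(EMAIL_PATTERN) do
    counter += 1
    "user#{counter}@example.com"
  end
end

dump_line = "INSERT INTO users VALUES (1, 'alice@gmail.com', 'Alice');"
scrub(dump_line)
# => "INSERT INTO users VALUES (1, 'user1@example.com', 'Alice');"
```

In practice you would pipe the real dump through a script like this (or run per-column UPDATE statements on a restored copy) before publishing the file to the team.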
What tech team debt tools do you typically employ, and when do you employ them?
It has been three years since I have been under the oppressive finger of the waterfall software engineering process, but that is still what comes to mind when I hear the words “peer review.” Corporate software outfits typically require programmers to present their code to other engineers in hopes of finding bugs and fixing them before they become a problem. Usually it involves some form of screen-sharing and a walk-through of code written in the past few days. In my five years of experience doing this sort of review, reviewers rarely found bugs in code, no matter how poor the engineer or how skilled the reviewer.
I know that editing is a great practice in book publishing, and I know that no software engineer is perfect, but my experience had kept me from seeing benefit in the concept. Best Kept Secrets of Code Review by Jason Cohen renewed my perspective on peer reviews by presenting a more effective way of doing them. The book’s purpose is two-fold: (1) present compelling arguments and practices for effective code review and (2) sell a peer review tracking product called Code Collaborator. Thankfully, the book reserves the sales pitch for the last chapter and does a great job presenting the facts and arguments that ultimately led Smart Bear to build a peer review product. Here are some of my notes, but, like the reviews before this one, I must encourage you to snag your own copy.
Chapter Four: Brand New Information
- Time is the one factor a reviewer can control that will affect their ability to identify bugs. The reviewer cannot control the language, the algorithmic complexity, or the experience of the developer who wrote the code.
- 60 minutes of peer review is the sweet spot. Spend more or less time than that and you will statistically either miss bugs or waste time looking for bugs that don’t exist. (On that note: do not spend more than 90 minutes…ever.)
- The more time a reviewer spends during his first pass over the code, the faster he will be at spotting bugs during his second pass. In other words, go slower the first time around.
- Private inspection of code produces better results than being guided by the developer in a presentation format. Often reviewers will raise questions that are about how something works, not whether or not a particular piece of code is correct. (More proof that meetings usually waste people’s time and companies’ money.)
- The hardest bugs to find are code omissions, so arm reviewers with checklists. This list might include items like “should this page require SSL?” or “is allocated memory properly destroyed?” Have your team keep a running list of common bugs to look out for.
Chapter Five: Code Review at Cisco Systems
- Formal peer review meetings don’t uncover more bugs than lightweight reviews, but they do add to the cost of software engineering. (Yes, this is a repeat point, just driving it home!)
- The one benefit of a formal meeting is that it motivates the presenter (developer) to be more careful and produce higher quality material (code). Knowing that someone else is going to formally inspect your code will compel you to write better code.
- Only 200-400 lines of code should be reviewed at any one time. (Recall the 60-minute sweet spot from above? 200 lines / 90 minutes ≈ 2.2 lines per minute; 400 lines / 60 minutes ≈ 6.7 lines per minute. That’s a slow, deliberate reading pace.)
Chapter Seven: Questions for a Review Process
- Keep your peer review checklist between 10 and 20 items. Too many, and the reviewer’s effectiveness drops.
- If necessary, create different flavors of checklists to cover more than 20 items: logic bugs, security, design, etc. Have different reviewers use different checklists.
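To make the “flavors of checklists” idea concrete, here is a small hypothetical Ruby sketch that keeps each checklist short and rotates flavors across reviewers. The checklist items and names are illustrative, not taken from the book.

```ruby
# Short, focused checklists grouped by flavor. Each stays well under
# the 10-20 item ceiling; together they cover more ground.
CHECKLISTS = {
  security:  ["Does this page require SSL?", "Is user input escaped?"],
  resources: ["Is allocated memory freed?", "Are file handles closed?"],
  logic:     ["Are boundary conditions handled?", "Any missing error paths?"],
}

# Assign a different flavor to each reviewer, wrapping around if there
# are more reviewers than flavors.
def assign_checklists(reviewers)
  flavors = CHECKLISTS.keys.cycle
  reviewers.map { |reviewer| [reviewer, flavors.next] }.to_h
end

assign_checklists(%w[alice bob carol])
# => {"alice"=>:security, "bob"=>:resources, "carol"=>:logic}
```

The point of the rotation is that a single review stays focused (and fast), while the team as a whole still exercises every checklist over time.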
The book is also full of really nice references, case studies, techniques for measuring team effectiveness, and points for team leads. It’s worth the read, so check it out.
While researching user experience design techniques, I stumbled upon some nifty whiteboard magnets for prototyping called GuiMags, as well as a companion book called The Unplugged.
GuiMags look like the nicest way to prototype something before going to HTML and CSS. Labor-intensive forms of prototyping don’t seem to add much value, and paper and traditional whiteboard prototyping only work until you’ve changed your mind about something and have to throw your work in the trash or erase half the board.
Although I decided to postpone a magnet purchase until I am doing design again, I was able to get my hands on the book. Its premise: we limit ourselves by the technologies we use. Instead of thinking outside the box, we’re often thinking and functioning in it. A large part of this thinking inside the box is how we develop software.
Although everyone interested in the topic should pick up the book, here are a few of my takeaways:
- Every major form of art that involves technology (music, film, video games, graphic design) starts outside technology. Artists do not limit themselves by their technology but by the limits of their own minds. As a software engineer, you often limit yourself by the technology you use day-to-day.
- Spend as much time as you can iterating on concept and design before going to implementation.
- Design the software front-end first, not the back-end.
- Just like there are code freezes, freeze the product when it has passed the design phase.
- It is often wise to outsource the implementation.
  - This serves as a peer review of the design before it goes to implementation. Software developers traditionally think about the back-end first.
  - Different cultures have different strengths: “England and Western Europe are great at design, Ukraine and Macedonia have amazing and prompt developers who can think for themselves, the Netherlands always emails back the same day, India is extremely polite, etc.”
  - Work can be done while you are sleeping. “This can cut the development time in half.”
  - Because you already know what you want and won’t be constantly changing the design, contractors will want to work with you even if you pay less.
  - Only be satisfied with five-star developers.
  - Pay more than you agree to pay.
  - Do one-week sprints. Longer sprints end up getting delayed, with excuses.
With the last (sub)point in mind, I think this methodology is well-suited for an agile development process.
There is a lot to gain from reading the book, so make sure to grab a copy for yourself.
I’ve been using Ruby on Rails exclusively for over a year now, but I have used other web frameworks for longer periods of time (classic ASP, ASP.NET, and J2EE). Rails is unique in many ways, and if you look hard enough online, you’ll find its qualities spelled out for you.
Deprecation is one quality that isn’t spoken of much, but it’s one of my favorites. The Rails core team is adamant about doing things the best way possible. When it sees a better way of doing things, it immediately starts removing parts of the API that don’t meet that standard of perfection. Developers offload deprecated functionality into plug-ins that give teams time to migrate aging code, but the deprecated code doesn’t stay in the main code base for long.
The other frameworks I’ve used have kept deprecated API calls around for extremely long periods of time; some, I suspect, will never disappear. These APIs let developers keep using inefficient, poorly designed, and generally bad code for however long they want.
Rails isn’t simply doing things the best way possible, it trains its developers to do the same. I like that.
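Rails implements this discipline through ActiveSupport::Deprecation. The dependency-free Ruby sketch below shows the underlying pattern with hypothetical class and method names: the old method keeps working for now, but it warns loudly and delegates to its replacement, so callers have a migration window before the old API disappears.

```ruby
# Minimal stand-in for ActiveSupport::Deprecation: print a loud
# warning on stderr whenever a deprecated call is made.
module Deprecation
  def self.warn(message)
    Kernel.warn("DEPRECATION WARNING: #{message}")
  end
end

class Invoice
  def initialize(line_item_cents)
    @line_item_cents = line_item_cents
  end

  # New, preferred API.
  def total_in_cents
    @line_item_cents.sum
  end

  # Old API, kept alive temporarily: it warns, then delegates to the
  # replacement. In a later release it would be removed entirely.
  def total
    Deprecation.warn("Invoice#total is deprecated; use #total_in_cents")
    total_in_cents
  end
end

Invoice.new([100, 250]).total  # warns on stderr, returns 350
```

Because the old method only delegates, both APIs stay in sync during the migration window, and deleting the deprecated method later is a one-line change.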
I used to subscribe to Yahoo Music Unlimited. It let me download as much music as I wanted without paying per track or album. It was much cheaper for me than Apple iTunes. Then they decided to shut the service down and migrate users over to Rhapsody Unlimited. Although Rhapsody is a tad pricier, I was okay with the change. I was still getting a better deal.
The Rhapsody installer worked fine and actually migrated about a third of my Yahoo Music downloads and all of my ripped CDs. Although a little disappointed about having to manually download the rest, I was satisfied.
It took me a little over a month after that first install to get around to the “big download.” A newer version of Rhapsody had since been released, so I dutifully applied the update. Then I opened up My Documents/My Music/Yahoo Unlimited to see which old Yahoo tracks I needed to manually download into Rhapsody. To my dismay, the new version of Rhapsody was deleting all my Yahoo tracks (whether previously imported or not). It was systematically removing the WMA files one by one from my computer but leaving all the folders and any tracks I had ripped from CD. I could see it happening before my eyes. All along, Rhapsody didn’t say a word; it just acted normal and then dropped all the deleted tracks out of My Library when it was done.
If Rhapsody is inspecting each Yahoo Music file to decide whether to delete it, shouldn’t it kindly find the same track in its own catalog and download it for me? Or at least give me a list of the tracks it just removed so I could go find them myself? I’m not quite sure who made this call, but it has to be one of the worst usability decisions ever. It’s almost like switching over to Blu-ray and having someone sneak in and take all your DVDs back to Best Buy.
The good news is that before uninstalling the Yahoo Music Player, I had used Marc Abramowitz’s Export2Excel plug-in to dump a list of all my tracks just in case. I’m glad I did. Thanks, Marc!
I’m now getting carpal tunnel while rebuilding My Library.
I just got back from my parents’ house, where we got sucked into a TV special on a live exhibition Dale Chihuly coordinated at the Museum of Glass. The artists he pulled together blew and sculpted some amazing pieces of glass art. We’ve all seen it done on Mr. Rogers’ Neighborhood and Reading Rainbow, but this was something else.
Our family noticed something interesting as we watched: the artists loved to praise each other publicly, often to the point of flattery. There was no critique or constructive criticism. Everything everyone did was “beautiful,” even if the person didn’t do what was planned. The only thing that came remotely close was when someone dropped a finished piece and everyone gasped aloud. Everyone was disappointed, but it was a communal loss. No one was pointing fingers.
That got me thinking about other occupations, particularly mine, where a gathering of peers often ends in a verbal blood bath. Read technical books and blogs and you’ll come across plenty of critique and criticism, constructive and otherwise. Opinions are strong, and believers get passionate. There’s a lot of debate about who’s right, who’s wrong, and who’s to blame as the root cause of a failure. I’m certainly guilty.
Of course, the contrast I’m drawing between artists and engineers isn’t always as stark as I just described. But we could change, at least those of us inclined to critique. Instead of celebrating another company’s addition to the TechCrunch deadpool and analyzing why its business plan was doomed from day one, maybe we should look at some of the things they did right. We should try to find some beauty in the work they did, even if the finished product ends up lying in pieces on the floor.
I just witnessed a beautiful example of web application architecture. After periodically making adjustments to my schedule in Google Calendar over the past several hours, I watched the user interface change, in real time, without even refreshing my browser.
For as long as I can remember, any time I would click and drag on my calendar to create a new appointment, a block of time would be selected in gray until I released my mouse button. The time would then be reserved with a solid blue block and I would fill in details about the appointment I just made. Just last week I was thinking, “it would sure be nice if they did a better job indicating the time frame you’re actually blocking off so I don’t have to eyeball it.”
Well, while making one final appointment on my calendar tonight, I noticed that the typical gray selection had been replaced with a transparent blue block with the time frame printed inside it. The block’s time frame and size still adjusted as I moved my cursor down my schedule, and when I released the mouse button, things worked as they always have. As if the new interaction weren’t impressive enough, I realized I hadn’t refreshed my browser at all. The new feature just appeared.
And that is one of the wonderful benefits of running an application on the web. Although I’m not sure exactly how they do it, Google has engineered Google Calendar so that it can roll out changes to the user interface to users in real time. A live connection to Google’s servers through Ajax is all that’s needed: no bulky Microsoft-Office-style update to download and install, not even a browser refresh.