Category Archives: Web Development

Bucking the Microservices Fad

In the middle of 2013, I was hired to lead the engineering team at LearnZillion–a digital curriculum for K-12 Math and English subjects composed of videos, slides, documents, and images. At the time, there were several applications in support of the business:

  • 2 Ruby on Rails web applications: the content authoring platform for a select group of users and the content consumption platform most teachers and students used
  • 2 native mobile apps: 1 iOS and 1 Android for students only
  • 1 API inside the content consumption Rails app, which served data to the 2 mobile apps and the web application

After a few months, momentum led us to build a publishing API so that the 2 Rails apps could talk with each other. (Before this second API, we were painstakingly moving data between the two apps via CSV and file exports and imports.)

Having recently left a company where we were migrating from one monolithic application into dozens of microservices, the momentum felt right. It was the in thing to do. Microservices were hot. However, as time moved forward, it became increasingly clear at LearnZillion that a microservices architecture came with a non-negligible overhead cost: the cost of keeping not 1 but 2 Rails apps up-to-date with dependency upgrades, authoring 2 internal APIs and 3 API clients, and often changing all 5 when adding or removing a feature, all to pass data around. On top of the software management overhead, there was the overhead of hosting each app and bending over backwards at times to make sure that APIs never called themselves. The benefits of having separated apps and APIs were dragged down by these costs.

We didn’t want the over-complexity and overhead, so over an 18-month period, we did a 180. We brought content authoring into the content consumption app, killing-off 1 Rails app, an API, and an API client. We also retired our native iOS and Android apps in favor of a cross-platform Ionic app that WebView-ed our main Rails app on both platforms, killing-off an API and 2 API clients.

We now have 1 hosted app, built the Rails way, with a median response time of 75ms, and 2 hybrid mobile apps that look and function identically. Whenever a feature is deployed to our main web application, it is immediately available inside the mobile apps. No extra moving parts, no coordinated deploys. Most importantly, this setup allows us to scale the impact of each member of the Engineering & Design team by keeping everyone focused on features, not on a microservices architecture.


Of course, there are plenty of valid reasons to have microservices, purely native mobile apps, etc., but those reasons don’t apply everywhere.

I’m merely warning against jumping on the microservices bandwagon because it’s the in-thing. This is a concrete example of microservices hurting a business, not helping it. Thankfully, it wasn’t too late to reverse course. And I don’t think our team could be happier with the results.

As one of my colleagues says, “you have to use your brain.”

Google Analytics Crash Course Notes

Thinking that you will adequately learn Google Analytics by clicking around the product, even over years, is a foolish concept. You will only understand a subset of its features and how they work together. You need to do your homework.

I cannot improve upon Google Analytics’ (GA) own crash course, titled Google Analytics IQ Lessons. It covers just about all the material in the paid Analytics courses (101, 201, and 301) at just the right level–not too high, and not too deep.

Here are my notes of the key gotchas and items to configure for your GA Web Properties, along with links to other helpful learning resources. As is my standard practice, this is mostly for my reference down-the-road, so it’s not comprehensive. However, I figure others can benefit from them as well.

Gotchas

  • Incognito mode and other browser privacy sessions count as new Visitors, Visits, and Page Views, as if the user had cleared his cookies. Not a huge surprise to most, I’m sure. (Although other trackers can still track you.)
  • Visits are separated by exits from the site or a 30-minute cookie timeout while on the site. Advertising Campaign attribution expires after a 6-month cookie timeout. Both are customizable.
  • Time on Exit Pages is not tracked because time is calculated between page loads on the same site. This also means that Bounce Page time is not tracked either. This has serious implications for some genres of sites, like blogs, where much of the traffic goes into and out of a single article. Know how to track Exit Page times and Bounce Pages if you need to (one rough approach is sketched after this list).
  • A Visitor can only trigger a Goal conversion once during a Visit, but can trigger an E-commerce Goal multiple times in a Visit.
  • Filters are applied between raw data capture and the Account’s Profile where the data is ultimately stored. Even if you change a Filter that sits in-between the raw feed and Profile, you cannot recover historical data. Try accomplishing the same filtering with Advanced Segments instead, which don’t run the risk of losing data. At the least, you should use Advanced Segments or other features to test concepts before creating a real Filter for them.
  • Domains and subdomains can break tracking in many glorious ways–especially E-commerce Goal tracking.
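
For example, here is one rough way to approximate time on Exit and Bounce Pages with the legacy async ga.js syntax. This is my own sketch, not something from the course, and events fired during unload are not guaranteed to be delivered:

    <script type="text/javascript">
      // Assumes the standard ga.js async snippet is already installed.
      // Record when the page loaded, then report elapsed seconds as the
      // visitor leaves. The final "true" marks the event non-interaction,
      // so it won't skew bounce rate.
      var _pageLoadedAt = new Date().getTime();
      window.onbeforeunload = function () {
        var seconds = Math.round((new Date().getTime() - _pageLoadedAt) / 1000);
        _gaq.push(['_trackEvent', 'Engagement', 'Time on Page', location.pathname, seconds, true]);
      };
    </script>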

Basic Checklist

  • Always have a raw Profile that has no Filters, Advanced Segments, etc.
  • Have a Profile that excludes internal IP addresses so you’re not tracking yourself and your staff as they click around your site.
  • Have a Profile that exclusively tracks internal IP addresses for debugging Google Analytics code on your site.
  • Use the Google Analytics Debugger Chrome Extension for your own debugging and analysis of competitors’ tracking.
  • Enable Auto-Tagging between Google AdWords and Analytics if you are using both products.
  • If you create an AdWords Profile, set up two Filters to focus-in on AdWords traffic (Campaign Source: google, Campaign Medium: cpc).
  • Set up E-commerce tracking.
  • Set up Goal tracking.
  • Set up Internal Site Search tracking. (It’s much easier than you think.)
  • Utilize _addIgnoredOrganic to attribute Organic Search Visits for your web site’s address (e.g. someone searching for “example.com”) to a Direct Visit instead.
  • Set up appropriate Custom Variables to track additional information about Visitors, Visits, and Page Views. (These last two items are sketched below.)
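
Here is a minimal sketch of those last two items in the legacy async ga.js syntax (the ga.js loader snippet is omitted; the account ID, ignored keyword, and Custom Variable names are placeholders):

    <script type="text/javascript">
      var _gaq = _gaq || [];
      _gaq.push(['_setAccount', 'UA-XXXXX-X']);
      // Count searches for our own address as Direct Visits, not Organic.
      _gaq.push(['_addIgnoredOrganic', 'example.com']);
      // Custom Variable in slot 1, visitor-scoped (scope 1).
      _gaq.push(['_setCustomVar', 1, 'MemberType', 'Teacher', 1]);
      // Customizations must be pushed before the pageview is tracked.
      _gaq.push(['_trackPageview']);
    </script>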

Hopefully, all these will help us do a better job optimizing our average customer lifetime value (LTV).

JavaScript Style Sheets

One of the privileges I have experienced in my professional life is having worked with Jeremy Keith of Clearleft. I learned a lot from the author and web developer through our interactions and by studying his code.

Jeremy is a fan of graceful degradation and ensures that a site will function just fine without JavaScript browser support. However, instead of mingling JS and non-JS styles together, he employs a particularly clever technique. He puts the standard styles in the standard CSS file. But just as his sites provide print and mobile stylesheets for printers and mobile devices, he also provides one for JS-enabled browsers. This has two nice side-effects: (1) better CSS organization and (2) CSS that is applied immediately, not after JS has had an effect on the DOM.

One example is a thumbnails carousel he designed. If the browser does not have JS support, the thumbnails are arranged in a grid. If the client does support JS, he wraps-up the thumbnails in a single row inside a horizontally-scrolling carousel, so they can be browsed left to right. The carousel presentation of the thumbnails requires a different set of CSS rules, which override the vanilla styles.

Here’s how he pulls off the wiring-up of a JavaScript CSS style sheet:

  <script type="text/javascript">document.write('<link rel="stylesheet" href="/stylesheets/javascript.css" media="screen,projection">');</script>
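
The same wiring can be done without document.write. Here is a minimal sketch (not Jeremy’s code; the file name and media list are assumed from the example above) that builds the link element directly:

    <script type="text/javascript">
      // Runs only in JS-capable browsers, so the stylesheet is applied
      // only when scripting is available.
      var jsStyles = document.createElement('link');
      jsStyles.rel = 'stylesheet';
      jsStyles.href = '/stylesheets/javascript.css';
      jsStyles.media = 'screen,projection';
      document.getElementsByTagName('head')[0].appendChild(jsStyles);
    </script>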

Any site I’m involved in will be following suit.

Thanks, Jeremy!

Specificity, Eames. Specificity?!

Way-back-when, I suggested that CSS authors use classes as much as possible. It was a foggy idea at the time but, thankfully, someone important stumbled-upon the same concept and codified it. Around the same time, it started to become popular for developers to consolidate CSS and JavaScript into two files for their sites. This technique saves HTTP requests, speeding-up users’ experiences of your site.

Now that I have had a few years to use these techniques, I’m happy to report they work beautifully; that is, until you have large CSS and JavaScript files that take a long time to download. Long downloads risk new visitors’ first impressions. You want them to be wowed. A slow introduction doesn’t wow.

Large CSS and JavaScript files are often symptomatic of defining a heterogeneous set of styles or behaviors for very particular elements on very particular pages. In other words, styles and behaviors that aren’t re-used; ones that don’t scale. Here are some straightforward techniques to lighten those two global files:

  1. Page-specific styles. I see CSS as a way to define general styles that affect large portions of a site (using classes). If you want to do something really, really special on a particular page, just put inline styles on the handful of elements that want to be different. (ID selectors are a dying breed.) There’s no need to define rules in a global style sheet if they’re not really global. If the page has a lot of inline styles and they are making the markup unmanageable, you can always extract them into a page-specific stylesheet. One extra HTTP request on one page (or a handful of pages) won’t kill you or your users.
  2. Page-specific JavaScript. Users may never even get to some pages; why slow down their first request to your site to grab JavaScript for them? Don’t be ashamed to slap-in a JavaScript script block at the bottom of your page’s markup. (Both techniques are sketched below.)

There is a continuum between two consolidated files for all your CSS and JavaScript and obtrusive CSS and JavaScript. As with many other things in life, you’ve gotta’ find the healthy balance.
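
Here is a minimal sketch of both techniques on a single hypothetical page (all names are illustrative):

    <!-- One-off presentation stays on the page as inline styles -->
    <div id="signup-banner" style="margin-top: 4em; background: #ffc;">
      Sign up today!
    </div>

    <!-- Page-specific behavior slapped-in at the bottom of the markup,
         rather than bloating the global JavaScript file -->
    <script type="text/javascript">
      document.getElementById('signup-banner').className += ' highlighted';
    </script>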

Document Domain

Every few years I run into an issue with JavaScript-based rich text editors and spellcheckers when they spawn pop-ups. The pop-ups open but don’t function.

If I open my Firebug console in the pop-up, I see something like:

Permission denied for <http://assets2.mysitedomain.com> (document.domain has not been set) to get property Window.tinymce from <http://www.mysitedomain.com> (document.domain has not been set).

Chrome’s console shows a similar error:

Unsafe JavaScript attempt to access frame with URL http://www.mysitedomain.com/mypage from frame with URL http://assets2.mysitedomain.com/javascripts/lib/tiny_mce/themes/advanced/source_editor.htm. Domains, protocols and ports must match.

In this case, TinyMCE’s HTML plugin is running-up against JavaScript’s same origin policy because I’m serving assets (and therefore TinyMCE pop-ups) from a different fully-qualified domain name than the page TinyMCE is embedded in. When not explicitly set, my site’s pages will default to a document domain of www.mysitedomain.com, while TinyMCE’s pop-ups will default to assets2.mysitedomain.com.

The simple fix is to bump up the document domain on my site’s pages to just mysitedomain.com. I do this in my global JavaScript file. I do the same thing to TinyMCE’s tiny_mce_popup.js file.

    document.domain = 'mysitedomain.com';

(You might also know that cookies need a similar bump when trying to read and write to them across subdomains.)

Although this works, there is a problem for those developing locally: there’s a good chance they’re not developing at mysitedomain.com but something like localhost. A page at localhost certainly isn’t allowed to claim its document domain is mysitedomain.com.

To handle both cases, we can instead set the document domain smartly, by putting this in both our global JavaScript file and in tiny_mce_popup.js:

    // Keep the last two labels of the hostname (e.g. "mysitedomain.com"),
    // or the whole hostname for single-label hosts like "localhost".
    document.domain = /(\w+)(\.\w+)?$/.exec(location.hostname)[0];
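
For illustration, here is what that pattern extracts for a few hypothetical hostnames:

    var extractDomain = function (hostname) {
      return /(\w+)(\.\w+)?$/.exec(hostname)[0];
    };
    extractDomain('www.mysitedomain.com');     // "mysitedomain.com"
    extractDomain('assets2.mysitedomain.com'); // "mysitedomain.com"
    extractDomain('localhost');                // "localhost"

Note that this naive pattern keeps only the last two labels, so it misbehaves on hostnames under two-label public suffixes (for www.example.co.uk it yields co.uk, which browsers may refuse as a document domain).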

Gift Cards Getting Declined?

Several months back, I was testing an application’s ability to charge credit cards before going live with it. The application allowed people to make one-time purchases, purchases using a previously saved credit card, and subscription purchases wherein people’s credit card information was saved for future use. Like many sites, we used a third party to charge and store the credit card information securely. (In our case we used Authorize.Net Customer Information Manager–also known as CIM.)

As part of my testing, I used VISA, MasterCard, and American Express gift cards to make sure they worked too. We didn’t want to miss-out on any potential purchases. However, we ran into a curious situation when using gift cards as saved cards: the first transaction would be approved, but any subsequent attempts would always be declined.

I spent the majority of a day talking with our merchant gateway (Authorize.Net), our payment processor, and the gift card issuing banks to try to understand what in the world was going on. Just like any perfectly-executed Department of Defense project, not one person knew or could explain the big picture. Each person could only tell me what they knew about my declined test transactions and point fingers at someone else. After speaking with everyone, though, I was able to extract what was going on.

To make a transaction with a credit card online or in the real world, you must provide a piece of verifying information. There are two options: (1) the 3-digit security code on the back of the card or (2) your physical address.

The catch about using your physical address for a gift card purchase is that you must first register the gift card with the issuing bank so that they have your address on file to verify it. The problem with address verification is that virtually no one registers their gift cards. (I didn’t even know it was possible.)

The problem with security codes is that no one other than the issuing bank is allowed to store them. The merchant gateway might capture and send along the code when making the initial purchase and even when storing the card. However, the code will never be available for subsequent transactions. This is why our gift cards were being declined when used as saved credit cards: we were sending the customer’s address to the payment gateway, but the issuing bank didn’t have an address on file to verify against. And we couldn’t send the security code because Authorize.Net hadn’t stored it.

So, what could be done? Well, we couldn’t just reject gift cards on our site because there is no way to determine whether a card is a credit card or a gift card. (The only parties who know this are the card holder and the issuing bank; processors and gateways are agnostic.) We could require the customer to provide a security code on every saved-card or subscription transaction, but that inconvenience to all of our customers would defeat the purpose of storing credit cards for automatic processing. The only viable option was to tell customers to register their gift cards with their issuing bank if they wish to use a gift card as a saved card or for subscriptions.

Hindsight is 20/20, so this makes perfect sense. However, it was quite confusing before the epiphany. Hopefully this will save some others a day of investigation.

Rails path and url helpers

Sometimes I forget the simple differences between Rails’ helpers. This mini post is so I don’t forget.

*_path
Generates relative URLs: /users
Used in views by link_to, form_for, etc. (per DHH)
The browser maps relative URLs to absolute URLs based on the current page’s protocol and host (/users on the page http://domain.com/new translates to http://domain.com/users)

*_url
Generates absolute URLs: http://domain.com/users
Used in controllers by redirect_to (per DHH) because RFC 2616 states that redirects must be absolute URLs. That’s true, but modern browsers can handle relative URL redirects now too.
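
That browser-side resolution is easy to observe from JavaScript (using the same hypothetical domain). An anchor element resolves a relative href against the current page’s protocol and host:

    // On a page at http://domain.com/new:
    var a = document.createElement('a');
    a.href = '/users';   // a relative URL, as a *_path helper would emit
    console.log(a.href); // "http://domain.com/users"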