What’s a Parent of Younger Kids to Do About the Internet?

Despite my role at LearnZillion, leading the construction of one of the more popular sites for students, I’m not convinced the Internet is a safe place for kids without protection.

Incredibly, the Internet has become this global city—a warehouse of the world’s information, an international marketplace, our long-term memory organizer and storage unit, and an endless educational playground. And yet, at the same time, it has become a cesspool of the most vile and disgusting ideas, words, media, and communities humankind has ever concocted—all conveniently streamed into our houses! It’s my personal and professional opinion that it is generally unsafe for kids. It begs for guardrails—badly.

Until the powers that be fix the situation, what’s a parent of young kids to do about the Internet? Our solution as a family until recently was to keep our kids off it. We got them an Apple iPad Mini, locked it down, and installed a handful of Lotinsky-approved apps for when we were traveling. We picked up an NES and an SNES Classic for our video game fix. But then, last year, I learned my oldest, who was in 2nd grade, had access to the Internet at school during class, including Google Search. We needed to get ahead on the home front.

Most of the options for Internet parental controls are either a pain for parents to configure or a frustration for kids to use. For two decades now, I’ve followed and experimented with the options available. Even though my own access was practically unrestricted growing up (my parents relied on their kids’ consciences), I knew my kids weren’t going to get that same level of freedom. I knew what the Internet was back in the ’90s, and I foresaw its inevitable future. I wanted to be ready when the time came. We’ve always planned on being active parents rather than handing over parental responsibilities like open and frank conversations with our kids, teaching discernment, and warning of the dangers of certain content and people. That said, we have always been determined to protect our kids with technology as best we can.

We’re introducing our third grader and kindergartener to the Internet through a Chromebook and Google Family Link. We’re providing access to a selection of educational sites their teachers recommended, a few we’ve selected, and Google Docs and Drawings (not Search) for creative expression—nothing more.

After a little bit of setup to designate my wife as another parent in our Google Family Group, Family Link was very easy to configure and manage for the kids. We start with everything blocked. If the kids reach a web domain we haven’t whitelisted yet—either one they type in by hand or one they reach through a link on a whitelisted site—Chrome blocks the page and gives them the option to request access from their parents. Requesting access sends a prompt to my iPhone and my wife’s, where we can allow the domain or dismiss the request. If we allow a domain, any content on it is allowed. This is extremely convenient because most other products require approving a series of domains or IP addresses before a site will work, since most pages pull in content and code from other web domains. With Family Link, we can rest assured the whole site will function, and the all-or-nothing grant forces us to decide whether an entire site is worthy or not. That convenience and that forcing factor mean we’ll actually use the parental controls rather than throw our hands up, tear it all down, or write off technology for the kids altogether. The only slight annoyance is that many Google products are hosted on google.com proper—no subdomain. God knows it’s going to be a few years before I whitelist that root domain!

We’ve been using this setup for months now and it’s holding up great. I’m sincerely hoping that Apple takes a cue from Google Chrome OS and Family Link. We prefer Apple’s ecosystem, but this works great for now. Thank you, Google!

I hope this is helpful for other parents trying to navigate the World Wide Web Wild West. Stay strong, parents!

P.S. While I’m on this tangent of what works and what doesn’t, we love that Netflix has a kids’ experience—although a short lockout code for leaving the kids’ profile for an adult one would be nice. Don’t even get me started on Amazon Prime Video, though! There are shows on Prime Video we would love for the kids to watch, like Reading Rainbow, but we throw our hands up and avoid it. Even I don’t want to see all the suggestive B- and C-grade movies they’ve managed to amass.

P.P.S. Wait, what about Circle? You’ve probably seen ads for it or heard about it. I love the concept, but I’m not a fan of the technical implementation. It relies on a hacking technique (ARP spoofing) that has a good chance of slowing down your home network and overall connection to the Internet. Good luck letting your kids get into online gaming with something that can add latency to your connection. I already have enough trouble trying to get my Verizon Fios Quantum Gateway to clock in above 50 Mbps, which is half of what we pay for—100 Mbps. I simply don’t want the hassle.


Thank you, Robert Voit, creator of JASC Paint Shop Pro

Robert —

In the early-to-mid ’90s, my hobby was making video games of various sorts with my friend Jesse. Our tools were Recreational Software Designs’ Game-Maker 2.0 and your creation, JASC Paint Shop Pro (PSP). I discovered both in a software mail-order catalog and purchased them with my lawn-mowing cash. We used PSP mostly to design our title screens.

I had purchased the shareware version of PSP, which came with a 30-day trial period. I discovered that if I simply uninstalled and reinstalled PSP, we would get another 30 days of free use. But my conscience didn’t feel good about my discovery. It was stealing, plain and simple. I sent JASC a brief letter explaining how we were using PSP and asking for permission to continue using it. I figured the worst that could happen was that you would say “no” and I would have to save up to buy the full version. The best that could happen was that you would say “yes” and we would be back in business.

To my shock and surprise, you sent me the following reply:

[Image: the reply letter from Robert Voit of JASC Paint Shop Pro]

You included a boxed version of PSP with your letter. I couldn’t have been more surprised or excited. It’s still one of my favorite childhood memories to this day.

Only a year or two later, Jesse and I stopped making our games. The limitations of Game-Maker were quite real, and something bigger had arrived: The World Wide Web.

I quickly taught myself HTML and continued to use PSP as my image editor–later even paying for upgraded versions. (Version 6 was my favorite!) I made websites on top of lawn mowing to earn cash throughout high school and college, which helped pay for school. The experience set me up for a career building websites and web apps (see PC World, September 2006, page 37). I couldn’t have done it without PSP. And I always liked it better than Adobe Photoshop, which I got to use during a few summer jobs.

Your kindness taught me that honesty and hard work are rewarding–both spiritually and materially–and that it’s fun to surprise, delight, and help those whom you can. It was no surprise to me years later to learn that JASC was acquired by Corel. Your life’s hard work, honesty, and kindness rewarded you, and I couldn’t have been happier for you.

Thank you for teaching me some valuable lessons and your gift that summer. I’m grateful.

Although Jesse and I never officially shipped a game to market, I dug up one of them for old time’s sake and to finally follow through on my end of the deal. Here’s a video of Xylon.

Thanks again,

Ian

P.S. For folks interested in the Paint Shop Pro story, read this great Motherboard article.

You Don’t Need to Attend a Prestigious School to Network Well

I spent my first two years in higher ed at our local community college, followed by two more at the University of Maryland (UMD), where I earned a bachelor’s in computer science. Both schools ranked okay nationally–especially for public schools–but I got grief from certain life advisers at the time for not attending a more prestigious school, like Carnegie Mellon or MIT.

I valued my family and friends too much at the time to move away, and I didn’t want to accumulate crippling debt. I was inclined to stay local. One thing I weighed giving up was the chance to rub shoulders with future leaders. Being a commuter student who lived an hour off campus meant I would spend most of my campus time in class, at the library, or in the computer lab. My classmates and I weren’t the best at networking. It simply wasn’t in the undergraduate CS culture at UMD.

I appreciate the connections I made while there, while also regretting not taking more advantage of the time I had with them or seeking out even more connections. I have learned since then that I didn’t need a prestigious school to network well. I met several of my former classmates and schoolmates for the first time in industry, years after graduation. We compared notes, figuratively, and realized we had been in many of the same classes and even had vague recollections of each other. They have been great coworkers, advisers, and friends over the years. They are leaders at significant organizations. And I’ve had the fun of seeing them run circles around people who attended more recognized schools. Here’s just a smattering of examples:

  • Director of Engineering at Stitch Fix
  • Staff Engineer at VMware and later Senior Software Engineer at Microsoft
  • Senior Consultant at Carbon Black
  • Senior Consultant at Microsoft
  • Engineering Manager at Capital One
  • Engineer at InstaCart
  • Principal Product Designer at Main Street Hub
  • Chief Product Officer at GlobalGiving
  • A co-founder at LivingSocial
  • And another I reconnected with at a private event at Skywalker Ranch

I have a wealth of connections and friendships that I’m extremely grateful for–all of whom attended the same public school I did.

I’m not belittling prestigious schools in the least. I’m simply encouraging us all to get past the stereotype that to make significant professional connections you have to attend a prestigious and expensive school. Don’t limit your opportunities like I did those two years; you’re probably just a few feet from greatness.

My Personal Process

Over the years I’ve blogged about personal productivity–it’s one of the dimensions I enjoy optimizing to get more done at work, and one that people think I’m good at. This post is what I wish I’d had at my disposal when I started my journey into productivity.

A real challenge with reading advice from the literature is figuring out how to put it into practice in the real world. I always appreciate it when experts describe or show examples, especially of tools and techniques. This is my attempt at doing just that.

In addition to following tried and true advice, like inbox zero, saying no, unsubscribing from worthless email lists, the two-minute rule, trash whatever you can, and the like, here are the tools and techniques I use today:

Individual Tasks

First, I must point out that these are for tracking my own items–not items that should be visible to a broader group. I put those in formal systems like Asana, Clubhouse, etc. That disclaimer aside, here is how I track my personal tasks:

  • Urgent tasks I schedule on Google Calendar as a time-bound event. Stephen Covey would be proud.
  • Future-urgent or top-of-list important items I put in my Google Calendar Tasks because I can place them on specific dates I suspect I will complete them on. Yes, when I’m busy there’s some drag-and-drop from day to day or week to week.
  • I manage private text files for “_Me”, other people (e.g. “Jim”), or departments (e.g. “QA”) to track open action items, whether committed work or possible future work. I use indentation or bullets to nest subtasks (a made-up sample follows this list). I find the relative lack of structure much easier to manage than something like Asana or other task managers. Depending on my connectivity needs, these have been text files on my computer, Google Docs, or Evernote notes. I only ever use one tool during a season of life; I never try to use multiple text-editing platforms for tasks.
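
To make that concrete, here’s a made-up snippet of what one of these per-person files might look like (the name and every item are invented):

```
Jim
- Waiting on reply: Q3 contract renewal (committed)
- Ask about conference budget
    - Get an estimate from the team first
- Possible future work: pair on the reporting rewrite
```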

Email

  • I BCC followupthen.com (e.g. a delay address like 3days@followupthen.com) whenever sending an email I need a timely response to and yet don’t trust the recipients to reply by a specific date.
  • I immediately star sent emails that require a reply but that are not time-sensitive. If someone responds later and the thread is effectively closed, I un-star it before archiving. I sift through these starred messages periodically to unstar items that are no longer relevant or were in fact resolved. If an item is still unresolved, I re-email people, nudging them to respond.
  • If I receive an email I don’t want to deal with at the moment, but may be interested in reading several weeks or months later, I will forward it to followupthen.com and archive it immediately.
  • I set up Gmail filters to label and skip the inbox for any system-generated messages from apps like GitHub, Clubhouse, or Slack (e.g. from:notifications@github.com → skip inbox, apply label “GitHub”). I then view them in batches at particular points in the day. I additionally have Gmail automatically archive and mark as read emails that do not necessitate an action but may be a helpful reference in the future, like receipts from vendors.
  • I schedule a 60-minute recurring calendar event to get to inbox and Slack zero at a time that makes sense for my schedule (9-10 am). Others can’t schedule meetings during this time. This recurring event ensures I have space to get to inbox zero, or as close as I can.

Slack/Group Chat

  • I keep up with it as much as I can, and I ensure others are following Slack etiquette so that we’re not suffocating our own company. Sometimes it’s more appropriate to send an email or hop on a voice or video call.
  • Because I keep up with Slack, I can disable most of its notification settings to focus on the work at hand.
  • I ensure my channel list only shows ones with unread messages for easier catch-up and use my keyboard to jump between channels.
  • I leave any channels I am not deriving value from or contributing value to. I trust someone will @ mention me if I need to rejoin. I left and rejoined some channels a few times in a single day this year. The mute feature is too aggressive; it doesn’t notify me if someone mentions me again in the channel.
  • When I need to get deep work done, I ignore or quit the app. (I can ignore it since I turned off notifications.)

I hope this is helpful for others! Please comment if you need clarification on any of these tactics or have your own to share.

Sony a6000 Thinks It’s 12/31/69!

I just solved a terrifying and frustrating issue with family photos taken on our Sony a6000. I found no solutions online, so I’m posting the issue as well as my solution to spare others the same headache.

As I was looking at photos on our Apple Mac Mini computer tonight, I noticed that all of them had a Created date of 12/31/69 7:00 PM. What?! That date is the tell-tale sign of a zeroed-out timestamp: 7:00 PM Eastern on December 31, 1969 is midnight UTC on January 1, 1970, the Unix epoch. Was this OS X’s fault or the a6000’s? I did some googling to no avail. I only saw mentions of leap year bugs in other hardware and software products.

On a whim, I turned on our a6000 to see if it still had copies of the photos. It didn’t, but it still had an image index of all the photos going back to when we first purchased the camera, properly dated. However, each thumbnail displayed as a “?” question mark with an “Unable to display” message when I tried to view it on the camera. I had previously moved all the photo files to our computer while the camera was connected with its USB cable, which is why they weren’t on the camera anymore. As I started deleting really old photo thumbnails off the camera using the camera controls, I got messages like “Recovering Data” and “Writing to the memory card was not completed correctly,” and the camera would occasionally reboot in utter confusion with no real way of escape. I had stumbled upon a real mess.

This mess got me thinking, “maybe the Sony a6000 software engineers didn’t do a good job engineering for the case where photo files were moved from the camera to a computer via USB Mode and OS X Finder.” I haven’t ever run into an issue like this one with other devices, but maybe I needed another way to copy the files to our computer to extract the right creation date.

As an experiment, I copied the files for the photos from our Mac Mini back over to our a6000 via the file system–essentially putting them back where they came from. The photo index then had a few real thumbnails–the photos still missing thumbnails were ones I had deleted on the computer months ago. Next, I started OS X’s Preview app and clicked File > Import > NO NAME. (“NO NAME” is the volume name for the Sony a6000 memory card when connected in USB Mode.) From there I imported the files to the Mac, and, voilà, the creation dates were correct.

The best guess I have is that the Sony a6000 writes 12/31/1969 (i.e., a zero or null Unix timestamp) as the file-system created date on its memory card. I’m also guessing that the OS X Preview app extracts the created dates not from the memory-card file system but from the photos’ EXIF metadata, and that it uses those values to set the created dates on the Mac’s hard drive when importing the photo files.
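
If you’d rather repair the dates on already-copied files in a batch instead of re-importing everything, the same metadata-over-file-system idea can be scripted. Here’s a rough Ruby sketch, assuming the exifr gem and JPEG files; note that it sets each file’s modification time rather than the Finder’s Created date, so treat it as an illustration of the idea, not a drop-in fix.

```ruby
# Rough sketch: stamp each photo's file times from its EXIF capture date.
# Assumes the exifr gem (gem install exifr) and JPEGs; adjust the glob path.
require "exifr/jpeg"

Dir.glob("/path/to/photos/*.JPG") do |path|
  shot_at = EXIFR::JPEG.new(path).date_time_original
  next unless shot_at # skip files with no EXIF capture date
  # Set the file's access and modification times to the capture time.
  File.utime(shot_at, shot_at, path)
  puts "#{path} -> #{shot_at}"
end
```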

After successfully importing all the files from the a6000, I used the camera’s format feature to wipe the memory card clean. From now on, I will be using Preview, Photos, or some other OS X photo app to import our photo files, not dragging and dropping them from the NO NAME volume to our computer directly.

I hope this helps someone else someday. Drop me a note if it does.

 

Note: if you are trying to recover from the same issue, but you have already wiped your memory card or deleted the thumbnails for the files you are trying to recover, I do not know how to resolve your issue. I’m sorry!

Bucking the Microservices Fad

In the middle of 2013, I was hired to lead the engineering team at LearnZillion–a digital curriculum for K-12 Math and English subjects composed of videos, slides, documents, and images. At the time, there were several applications in support of the business:

  • 2 Ruby on Rails web applications: the content authoring platform for a select group of users and the content consumption platform most teachers and students used
  • 2 native mobile apps: 1 iOS and 1 Android for students only
  • 1 API inside the content consumption Rails app, which served data to the 2 mobile apps and the web application

After a few months, momentum led us to build a publishing API so that the 2 Rails apps could talk with each other. (Before this second API, we were painstakingly moving data between the two apps via CSV and file exports and imports.)

Having recently left a company where we were migrating from one monolithic application into dozens of microservices, the momentum felt right. It was the in thing to do. Microservices were hot. However, as time moved forward, it became increasingly clear at LearnZillion that a microservices architecture came with a non-negligible overhead cost: the cost of keeping not 1 but 2 Rails apps up-to-date with dependency upgrades, authoring 2 internal APIs and 3 API clients, and often changing all 5 when adding or removing a feature, all to pass data around. On top of the software management overhead, there was the overhead of hosting each app and bending over backwards at times to make sure that APIs never called themselves. The benefits of having separated apps and APIs were dragged down by the cost.

We didn’t want the over-complexity and overhead, so over an 18-month period, we did a 180. We brought content authoring into the content consumption app, killing off 1 Rails app, an API, and an API client. We also retired our iOS and Android native apps in favor of a cross-platform Ionic app that wraps our main Rails app in a WebView on both iOS and Android, killing off an API and 2 API clients.

We now have 1 hosted app, built the Rails way, with a median response time of 75 ms, plus 2 hybrid mobile apps that look and function identically. Whenever a feature is deployed to our main web application, it is immediately available inside the mobile apps. No extra moving parts, no coordinated deploys. Most importantly, this setup allows us to scale the impact of each member of the Engineering & Design team by keeping each one focused on features, not on a microservices architecture.


Of course, there are plenty of valid reasons to have microservices, purely native mobile apps, etc., but those reasons don’t apply in every case.

I’m merely warning against jumping on the microservices bandwagon because it’s the in thing. And this is a concrete example of microservices hurting a business, not helping it. Thankfully, it wasn’t too late to reverse course. And I don’t think our team could be happier with the results.

As one of my colleagues says, “you have to use your brain.”

Continuous Delivery, not Continuous Deployment

Engineering teams like Etsy’s have popularized the idea of continuous deployment: infrastructure that automatically rolls out newly minted code to production in a safe and gradual manner. In The Lean Startup and Web Operations, Eric Ries explains the rationale behind continuous deployment: making a safe habit out of shipping product. I loved the motive, but it was clear that the practice as described required heavy operations infrastructure:

  • A continuous deployment server for automatic deploys after successful continuous integration runs
  • Live monitoring to discern if a code change negatively affected business metrics
  • Automatic rollback if business metrics are negatively affected
  • Incremental rollouts to production servers so as not to deploy to all servers at once
  • Code architecture that allows for both old and new code to run in production simultaneously while code rollouts are in-progress
  • Feature switches (sketched below for anyone unfamiliar with the pattern)
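
Since feature switches are the piece teams can usually picture least concretely, here’s a minimal Ruby sketch of the pattern. The FeatureSwitch class and its switches are hypothetical, not Etsy’s implementation:

```ruby
# Minimal feature-switch sketch: a hard-coded registry with percentage
# rollouts. Real systems back this with a datastore or config pushes.
class FeatureSwitch
  SWITCHES = {
    new_checkout: { enabled: true,  percentage: 10 }, # ramping up to 10%
    old_search:   { enabled: false, percentage: 0 },  # fully off
  }.freeze

  def self.on?(name, user_id)
    switch = SWITCHES[name]
    return false unless switch && switch[:enabled]
    # Bucket users deterministically so each user always sees the same side.
    (user_id % 100) < switch[:percentage]
  end
end

# Old and new code paths then coexist in production during a rollout:
#   if FeatureSwitch.on?(:new_checkout, current_user.id)
#     render "checkout/new_flow"
#   else
#     render "checkout/classic"
#   end
```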

While leading a team at LivingSocial, I set out to achieve the goal of safe code shipping as a habit but without the complicated and time-costly infrastructure. We were successful by incorporating good software engineering and deployment practices–all of which were generally good for us and didn’t require as much dedicated tooling or time. Later we discovered others outside the company were starting to do the same under the label “continuous delivery.” We have been even more successful with continuous delivery at LearnZillion, where I am today.

Unfortunately, the cost of continuous deployment infrastructure can discourage engineering teams from investing time in their development and deployment process because they don’t realize the lower-cost alternative, continuous delivery, is also a viable option. I want to share how we do continuous delivery at LearnZillion, so that others can achieve similar results without the overhead of extra infrastructure.

0. Assumptions

I am going to assume the year is 2015, or even 2010 or 2006, and that you have a deployment script or tool like Capistrano to automate the basic deployment steps for your application. As well, I’m going to assume your team or organization wants to do continuous delivery. If either of these is missing, start there.
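
For reference, the Capistrano side can be as small as the following sketch; the application name, repository URL, and server are placeholders, not any real setup of ours:

```ruby
# config/deploy.rb -- minimal Capistrano 3 configuration (placeholder values)
lock "~> 3.0"                       # pin the Capistrano major version

set :application, "myapp"
set :repo_url, "git@github.com:example/myapp.git"
set :branch, "master"               # production deploys come from master
set :deploy_to, "/var/www/myapp"
set :keep_releases, 5               # keep old releases for quick rollback

# config/deploy/production.rb -- the target server(s) and their roles
server "app1.example.com", user: "deploy", roles: %w[app web db]
```

With that in place, `cap production deploy` runs the standard steps, and `cap production deploy:rollback` steps back one release.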

1. Individual responsibility

Although we work as a team, individuals are responsible for carrying work forward to completion and doing that well. Staff are responsible for taking features from initial definition to production shipment. Along the way, they collaborate with and incorporate input from the broader team and company. (See Multipliers and Drive for reasons to give employees meaningful responsibility in the workplace.)

With these responsibilities come expectations:

Do not break the site. Do not break features. Do not break the test suite. Do not commit code you did not write (committing unknown code is a smell of a bad development database on your machine, a failed merge, etc.). Run the tests regularly–especially before merging into the master branch. If the master branch changes between your test-suite run and your merge, clean up and run the tests again, as appropriate.

Unfortunately, I have found that in many organizations, lack of trust is the default. A tech lead or manager is responsible for scrutinizing code from all team members, merging, deploying, and ensuring the application won’t break. This may make sense for new team members until they understand and are comfortable with the team conventions and have demonstrated that they are capable engineers. Otherwise, it should not be the norm.

2. Smallest overlap of responsibilities

We often pair a product designer (design, UX, HTML/CSS) with a full-stack engineer (SQL, Rails, Ruby, JavaScript, HTML/CSS) to work on a feature. However, we avoid assigning multiple engineers the same feature. We try to keep engineers working on “orthogonal capabilities.” (See “The Three Musketeers” and The Mythical Man Month for the rationale behind this approach.)

3. The master branch is sacred

We deploy to production from our master branch. Developers can depend on master as a reliable foundation to fork, merge, and rebase from. Features are developed, reviewed, and QA-ed in separate branches. If you have test failures, it’s most likely your code. Feature branches are only merged into master immediately before deployment, and it is the responsibility of the feature owner to make sure the branch is reasonably current with master before it is merged. There are loads of articles on the “simple git workflow” online, like this one. git and GitHub make this paradigm easy to follow; a condensed pass through it looks like the sketch below.
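
Here’s one pass through that workflow at the command line (the branch name is an example):

```sh
git checkout master
git pull origin master           # start from a current master
git checkout -b search-filters   # develop, review, and QA on a feature branch
# ...commit work, push, collect review and QA sign-off...
git fetch origin
git rebase origin/master         # keep the branch reasonably current
git checkout master
git merge search-filters         # merge immediately before deploying
```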

4. Follow “The Twelve-Factor App” methodology

I will let the methodology speak for itself. See factor X, “Dev/prod parity,” in particular. The biggest continuous delivery benefit is no surprises during deployment.

At LivingSocial, my team ensured the application development environment behaved like production, except where Rails intentionally separates the two. Truth be told, we didn’t have a reliable staging environment at our disposal, so we went straight from development to production. Believe it or not, because of our practices, this still worked quite well.

At LearnZillion, we take this further by using similar SaltStack configurations for production, staging, and a Vagrant-powered development environment. In development, the Ruby process and gems for the app are still installed on the host operating system, but everything else runs inside VirtualBox. It has the side benefit of speeding up the onboarding process for new engineers.

5. A test suite

At both LivingSocial and LearnZillion, we used Ruby on Rails, which strongly encourages use of a unit testing framework. Engineers make certain the test suite passes before merging a branch into master and that it still passes on master after the merge; a failure on the master branch takes top priority–second only to a live site outage.

At LearnZillion, we take this further by integrating CircleCI with GitHub to minimize the execution burden on engineers.

6. An automated QA test suite

At LearnZillion, we have a QA team. They naturally have the potential to be a bottleneck for getting features out. Since quality is their main objective, you want them to be gatekeepers. What you don’t want is for their review and gatekeeping processes to be cumbersome or inefficient. The most powerful lever you can maneuver within your QA team for continuous delivery is to automate their testing. Our team has an extensive QA test suite, which QA engineers can run against any branch, at any time, on a staging server. Automated tests are usually written soon after deployment to production, but sometimes are completed before then. Manual QA of emerging features still takes place, of course.

7. Look at your dashboards

It doesn’t take much effort to keep a short list of links to Google Analytics, Mixpanel, or your error-reporting service, like Bugsnag or Honeybadger. An engineer can inspect them after a deploy to see if something broke. Engineers and product designers should be doing this anyway to see how users are responding to changes or new features.

Bonus 1: Manual QA in a different time zone

When an engineer’s code has passed peer review and the automated QA test suite, it is sent along to QA for manual inspection. Test results are back by the next business morning because some of our QA team members are located in India. They test our work while we sleep.

Bonus 2: Continuous QA

At LearnZillion, we’ve integrated a GitHub pull request web hook that deploys a branch to a staging server and runs the QA test suite against it. This means that a branch has been regression tested before it gets to the QA team and usually before it gets to peer review. If you want to read more about our automated QA process, see Kevin Bell’s article about us over at CircleCI.
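
To make the wiring concrete, here’s a rough sketch of the shape such a hook can take: a hand-rolled Sinatra endpoint with hypothetical helper commands. Our real integration runs through CircleCI, as the article describes, not through a script like this.

```ruby
# Rough sketch of a GitHub pull request webhook receiver (illustrative only).
require "sinatra"
require "json"

# Deploy the branch to a staging server (placeholder Capistrano invocation).
def deploy_to_staging(branch)
  system("cap", "staging", "deploy", "BRANCH=#{branch}")
end

# Kick off the automated QA suite against staging (placeholder task name).
def run_qa_suite(branch)
  system("cap", "staging", "qa:run", "BRANCH=#{branch}")
end

post "/webhooks/github" do
  event = JSON.parse(request.body.read)
  # React when a pull request is opened or receives new commits.
  if %w[opened synchronize].include?(event["action"])
    branch = event.dig("pull_request", "head", "ref")
    deploy_to_staging(branch)
    run_qa_suite(branch)
  end
  status 204
end
```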

In Summary

With the good engineering and deployment practices of continuous delivery, you can achieve the same benefit as continuous deployment: safe, consistent delivery of product as a habit. You don’t have to build out dedicated infrastructure, and you can build a better engineering team and environment in the process.

Looking for your next gig?

If this sort of engineering environment is appealing to you, and you are interested in being a Senior Software Engineer or Senior Product Designer at LearnZillion, please apply. We would love to meet you.

[Thanks to my team for reviewing this post and recommending improvements to it.]