Category Archives: Testing

Sony a6000 Thinks It’s 12/31/69!

I just solved a terrifying and frustrating issue with family photos taken on our Sony a6000. I found no solutions online, so I’m posting the issue as well as my solution to spare others the same headache.

As I was looking at photos on our Apple Mac Mini computer tonight, I noticed that all of them had a Created date of 12/31/69 7:00 PM. What?! Was this OS X’s fault or the a6000’s? I did some googling to no avail. I only saw mentions of leap year bugs in other hardware and software products.

On a whim, I turned on our a6000 to see if it still had copies of the photos. It didn’t, but it still had an image index of all the photos going back to when we first purchased the camera, properly dated. However, each thumbnail displayed as a “?” with “Unable to display” when I tried to view it on the camera. I had previously moved all the photo files to our computer while the camera was connected with its USB cable, which is why they weren’t on the camera anymore. When I started deleting really old photo thumbnails using the camera controls, I got messages like “Recovering Data” and “Writing to the memory card was not completed correctly,” and the camera would occasionally reboot in utter confusion with no real way of escape. I had stumbled upon a real mess.

This mess got me thinking, “Maybe the Sony a6000 software engineers didn’t do a good job engineering for the case where photo files were moved from the camera to a computer via USB Mode and OS X Finder.” I had never run into an issue like this with other devices, but maybe I needed another way to copy the files to our computer to extract the right creation date.

As an experiment, I copied the photo files from our Mac Mini back over to the a6000 through the file system, essentially putting them back where they came from. The photo index then showed a few real thumbnails; the ones still missing thumbnails were photos I had deleted on the computer months ago. Next I started OS X’s Preview app and clicked File > Import > NO NAME. (“NO NAME” is the volume name for the Sony a6000 memory card when connected in USB Mode.) From there I imported the files to the Mac, and, voilà, the creation date was now correct.

The best guess I have is that the Sony a6000 writes 12/31/1969 (or, more precisely, a zero timestamp) as the file system created date on its memory card. Notice that 12/31/69 7:00 PM Eastern is exactly the Unix epoch, midnight UTC on 1/1/1970, which is what a timestamp of zero renders as on a Mac in the Eastern time zone. I’m also guessing that the OS X Preview app extracts the created dates not from the memory card’s file system but from the photos’ EXIF metadata, and that it then uses those values to set the created dates on the Mac’s hard drive when importing the photo files.
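If you would rather script the fix than re-import everything through Preview, a sketch like the one below might work. It is only a sketch: it assumes the exiftool utility and the mini_exiftool Ruby gem are installed, that your photos carry an EXIF DateTimeOriginal tag, and the photo path is a placeholder. Note that it sets the files’ modification times, not the OS X “created” (birth) times, so the Preview import is still the more thorough fix.

  require 'mini_exiftool'

  # For each photo, read the capture time from its EXIF metadata and
  # stamp that time back onto the file so it no longer shows the epoch.
  Dir.glob('/path/to/photos/*.JPG').each do |path|
    taken = MiniExiftool.new(path).date_time_original
    next unless taken                # skip files without an EXIF date
    File.utime(taken, taken, path)   # sets atime and mtime, not birthtime
  end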

After successfully importing all the files from the a6000, I used the camera’s format feature to wipe the memory card clean. From now on, I will be using Preview, Photos, or some other OS X photo app to import our photo files, not dragging and dropping them from the NO NAME volume to our computer directly.

I hope this helps someone else someday. Drop me a note if it does.


Note: if you are trying to recover from the same issue, but you have already wiped your memory card or deleted the thumbnails for the files you are trying to recover, I do not know how to resolve your issue. I’m sorry!

Best Kept Secrets of Peer Code Review by Jason Cohen

It has been three years since I was last under the oppressive finger of the waterfall software engineering process, but that is still what comes to mind when I hear the words “peer review.” Corporate software outfits typically require programmers to present their code to other engineers in hopes of finding and fixing bugs before they become a problem. Usually it involves some form of screen-sharing and walk-through of code snippets developed in the past few days. In my five years of experience with this sort of review, reviewers rarely found bugs in the code, no matter how poor the software engineer or how great the reviewer.

I know book-editing is a great practice, and I know that no software engineer is perfect, but my experience had prevented me from seeing benefit in the concept. Best Kept Secrets of Peer Code Review by Jason Cohen renewed my perspective on peer reviews by presenting a more effective way of doing them. The book’s purpose is two-fold: (1) present compelling arguments and practices for effective code review and (2) sell a peer review tracking product called Code Collaborator. Thankfully, the book reserves the sales pitch for the last chapter and does a great job presenting the facts and arguments that ultimately led Smart Bear to build a peer review product. Here are some of my notes, but, like the reviews before this one, I must encourage you to snag your own copy.

Chapter Four: Brand New Information

  • Time is the one factor that a reviewer can control that will affect their ability to identify bugs. The reviewer cannot control the language, algorithmic complexity, or the experience of the developer who wrote the code.
  • 60 minutes of peer review is the sweet spot. Spend more or less time than that and you will statistically either miss bugs or waste time looking for bugs that don’t exist. (On that note: do not spend more than 90 minutes…ever.)
  • The more time a reviewer spends during his first pass over the code, the faster he will be at spotting bugs during his second pass. In other words, go slower the first time around.
  • Private inspection of code produces better results than being guided by the developer in a presentation format. Often reviewers will raise questions that are about how something works, not whether or not a particular piece of code is correct. (More proof that meetings usually waste people’s time and companies’ money.)
  • The hardest bugs to find are code omissions, so arm reviewers with checklists. This list might include items like “should this page require SSL?” or “is allocated memory properly destroyed?” Have your team keep a running list of common bugs to look out for.

Chapter Five: Code Review at Cisco Systems

  • Formal peer review meetings don’t uncover more bugs but do add to the cost of software engineering. (Yes, this is a repeat comment–just driving it home!)
  • The one benefit of a formal meeting is that it motivates the presenter (developer) to be more careful and produce higher quality material (code). Knowing that someone else is going to formally inspect your code will compel you to write better code.
  • Only 200-400 lines of code should be reviewed at any one time. (Recall the 60-minute sweet spot from above? 200 lines / 90 minutes ≈ 2.2 lines per minute; 400 lines / 60 minutes ≈ 6.7 lines per minute. That is a deliberately slow reading pace; see the quick check below.)
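A quick sanity check of that arithmetic in Ruby (my numbers, not the book’s):

  # Review rate at the edges of the recommended ranges:
  # 200-400 lines of code over 60-90 minutes.
  [[200, 90], [400, 60]].each do |lines, minutes|
    printf("%d lines / %d minutes = %.2f lines per minute\n",
           lines, minutes, lines.to_f / minutes)
  end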

Chapter Seven: Questions for a Review Process

  • Keep your peer review checklist between 10 and 20 items. Too many, and the reviewer’s effectiveness drops.
  • If necessary, create different flavors of checklists to cover more than 20 items: logic bugs, security, design, etc. Have different reviewers use different checklists.

The book is also full of really nice references, case studies, techniques for measuring team effectiveness, and points for team leads. It’s worth the read, so check it out.

Testing Rails Exception Notification in Production

Every Ruby on Rails project I have been involved in has used the Exception Notification gem. It sends you an email with very helpful debugging information whenever your application breaks in the real world. However, I’ve seen people fail to ensure that it actually works in their production environments: not when they first launch their site, but weeks and months afterward when it really matters. If your production environment changes, your application may fail, but you might not get an email about it, because whatever broke may have broken email sending too. And you’ll think everything is hunky-dory. Make sure you have something like this in your application and that you test it periodically to confirm you’re getting exception notification emails.

In config/routes.rb:

  map.connect 'test_exception_notifier', :controller => 'application', :action => 'test_exception_notifier'

In app/controllers/application_controller.rb:

  # Visiting /test_exception_notifier raises on purpose, which should
  # trigger a notification email if the gem is wired up correctly.
  def test_exception_notifier
    raise 'This is a test. This is only a test.'
  end

I also recommend adding this to a periodic test script and your deploy script so you don’t have to remember to test it.
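For the periodic piece, a minimal sketch of such a check might look like this (the host name is a placeholder, and this only triggers the exception; you still need to confirm the notification email actually arrives):

  require 'net/http'

  # Hit the test action: the app should answer with a 500, and, more
  # importantly, Exception Notification should send its email.
  uri = URI.parse('http://www.example.com/test_exception_notifier')
  response = Net::HTTP.get_response(uri)
  puts "Triggered test exception, server answered #{response.code}"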

Cutting Out the Middle (Wo)Men

Our first two customers are South Street Steaks and Aqui Brazilian Coffee. As you can see, the sites will eventually need new designs; however, both establishments helped us develop a solid system, have been great beta-testers, and, most importantly, they love SandwichBoard.

Today I walked Carminha Simmons of Aqui through SandwichBoard. She added a news article, an event, and a web page herself during the training. While she used the system, I took notes on anything she didn’t understand, things she got stuck on, and features that broke. When I got back to my home office in the afternoon, I went through my list and fixed the majority of the issues and UI flaws before dinner.

I had direct contact with the customer and saw her every mouse click and facial expression. I was able to discuss with her how to fix the things she didn’t understand. I didn’t have to go through a committee or get permission to fix what we thought was broken. All we have to do now is run a command, and our live system is updated in a matter of minutes. Try doing that in an organization divided into job functions and weighed down by heavy processes.

Can You Hear Me Now?: Real Testing

For about a year and a half, I owned a Motorola E815 mobile phone. I loved the thing. It worked flawlessly until the Bluetooth feature decided to stop working one day and I could no longer pair a headset with it. I called Verizon Wireless, which agreed there was a physical malfunction and offered to replace it with a refurbished unit. I took them up on their offer and received a replacement unit within three days.

Along with the replacement unit came a two-page printout of very cryptic test results. From what I could tell, they had hooked the refurbished unit up to a computer and run a battery of unit tests on the phone to prove to me and to themselves that I would receive a functioning unit. The tests came in two flavors:

  1. Happy Path
    “A well-defined test case that uses known input, that executes without exception and that produces an expected output” (http://en.wikipedia.org/wiki/Happy_path). In other words, the computer testing my phone made phone calls, used the built-in contact list, and exercised other common functionality in ordinary ways.
  2. Boundary Condition
    Read any of the Pragmatic Unit Testing books (available in both Java and C# flavors) and you will learn that software often fails on unexpected input and boundary conditions: really large numbers, really large negative numbers, zero, null values, full hard disks, or anything else the developer wasn’t expecting while writing the code. (A small sketch of both flavors in code follows this list.)
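To make the two flavors concrete, here is a minimal, hypothetical sketch in Ruby using minitest; the Phone class and its dial method are invented stand-ins for whatever Verizon’s test rig actually exercises:

  require 'minitest/autorun'

  # A toy stand-in for the device under test, invented for illustration.
  class Phone
    def dial(number)
      return :error unless number.is_a?(String)
      return :error if number.empty? || number.length > 20
      :connected
    end
  end

  class PhoneTest < Minitest::Test
    # Happy path: known input, no exceptions, expected output.
    def test_dialing_a_valid_number_connects
      assert_equal :connected, Phone.new.dial('555-0100')
    end

    # Boundary conditions: empty, nil, and absurdly long input
    # should fail cleanly instead of crashing the phone.
    def test_dialing_bad_input_fails_cleanly
      phone = Phone.new
      assert_equal :error, phone.dial('')
      assert_equal :error, phone.dial(nil)
      assert_equal :error, phone.dial('9' * 10_000)
    end
  end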

I clearly remember thinking “Wow, yet another reason to like Verizon Wireless. They really tested this replacement phone.”

The funny thing was that the number two (2) button on the phone didn’t work all the time. After trying to live with the inconvenience of a fickle button, I called Verizon to get another replacement. Again I received a refurbished phone along with the same two-page printout of slightly different but successful test results. All the buttons worked this time, but the speaker buzzed like it was overdriving whenever someone talked to me, even with the volume at its lowest setting. After trying to live with that inconvenience, I again called for a replacement. Another refurbished phone arrived with its accompanying test results, and this time one out of every three attempts to flip the phone open reset the phone’s power.

And then it dawned on me: Verizon (or Motorola, I’m not quite sure which) probably spends a great deal of time, effort, and money creating well-thought-out, automated happy path and boundary condition tests to run on phones before shipping them out. However, I have a high degree of confidence that a human never actually tried to make a phone call with any of the phones I received. All three replacements revealed their defects during the first calls I tried to make with them. All that time, effort, and money was wasted (in my situation, at least). Once I realized the testing process for refurbished units was broken, I decided to cough up the money and buy a totally new phone. (Which I just dropped the other day, shattering the external screen. We’ll see how long I can live with that nuisance.)

The moral of this long story is not to bash Verizon. (Their network truly is everything it’s hyped up to be.) The moral is that real testing needs to be done: Verizon should be making real phone calls with real humans, or at least with a robotic device that simulates a human’s interaction with its phones.

Integrated test suites that know the guts of an implementation and execute at lightning speed are great; let’s not discount those. But we must also ensure that real testing takes place, from the deepest parts of the system all the way out to the point of human touch. Obviously, having humans test every part of a product by hand is inhumane and grossly cost-inefficient. (This is particularly true for multiple iterations of regression testing. Don’t laugh; I’ve seen it happen.) Testers should strike a balance: automated but realistic, simulated-interaction tests of software, web sites, and product interfaces. They should use application test suites that actually click software buttons, and tools like Sahi, Selenium, or Watir to click web-based hyperlinks and check checkboxes (see the small Watir sketch below). This type of testing provides a nice balance of automation and human-interaction simulation.
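For instance, here is a minimal Watir sketch (using the modern Watir API; the URL and element names are made up for illustration):

  require 'watir'

  # Drive a real browser the way a human would: follow a link,
  # tick a checkbox, submit, and check what actually renders.
  browser = Watir::Browser.new :chrome
  browser.goto 'http://www.example.com/signup'
  browser.link(text: 'Create an account').click
  browser.checkbox(name: 'accept_terms').set
  browser.button(name: 'submit').click
  puts 'Saw confirmation' if browser.text.include?('Thank you')
  browser.close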

In short, testing should involve traditional, automated happy path and boundary condition tests; automated human-touch simulations; and, finally, real human-touch. The order of importance will depend on what exactly is being tested; just make sure all three happen on your project or else I might be blogging about you too.