Testing is the process of executing a program or system with the intent of finding errors.

— Glenford Myers, The Art of Software Testing, 1979

 

Testing is any activity aimed at evaluating an attribute of a program or system. Testing is the measurement of software quality.

— Bill Hetzel, The Complete Guide to Software Testing, 1983

Testing is a concurrent life cycle process of engineering, using and maintaining testware in order to measure and improve the quality of the software being tested.

— R. Craig and S. Jaskiel, Systematic Software Testing, 2002

 

Here at Fugue, we’re building a system that automates and enforces the operation of your cloud infrastructure. It’s powerful, resilient, and easy to use. Under the hood, this system is made up of a collection of services engaging in sophisticated interactions. It became obvious early on that testing manually would only get us so far. In this post, I’d like to share some of the insights that have proven useful during a crucial time in our team’s development—when we transitioned from a mostly manual work mode to a mostly automated one.

 

Let’s define some terms. For the purposes of this blog post, we'll define manual testing as testing that doesn't enjoy the benefits of any automation, and we'll define automated testing as any sort of testing enhanced by pre-defined programmatic subroutines. What about the mention of mostly, i.e., mostly manual to mostly automated? How do we define that? It could mean that "most" of our work cycles and effort are being invested into one of these two modes, but, in this instance, I mean that "most" of our coverage is being executed in one of these two modes. I’m referencing output—the number of executed test cases.

 

It’s worth pausing for a moment to think about when and if such a QA transition makes sense. Much depends on your tolerance for risk. If there’s a chance that your product’s code base will change often, or if there’s a possibility that your product’s behavior will change as a function of load (i.e., number of users, number of concurrent operations, cumulative number of operations, etc.), you will either move a lot of your testing into automation or learn to cope with risk assessment blind spots that prevent you from having an accurate and up-to-date view of your product’s quality. Since Fugue continuously operates our customers’ changing cloud infrastructure, the expectation of quality demands automated testing.

 

Weighing benefits and challenges is generally part of any decision to transition as well. Consider the following points.

 

Benefits of automated tests over manual tests.

 

At least three major benefits of automation are evident:

 

  1. Preservation of the QA engineer’s precise intent and the elimination of variance in test steps. The more a test case is automated, the more reliable the test’s steps will be. A human testing manually will occasionally fat-finger a button, enter a typo, or absent-mindedly repeat a command. Machine precision is hard to simulate. It should be noted, though, that this inherent unpredictability of manual testing is often a good aid to uncovering defects, and so it has its rightful place in the testing process, as discussed below.
  2. Protection against quality regressions. Product developer unit tests provide a strong guard against this issue at the module level. However, in any modular system where components talk to each other and integration testing is required, further automation will be needed. Manually testing against regressions in any system with even a moderate degree of sophistication does not scale well. It is entirely possible that a small bug fix that correctly solves one issue may cause another defect in a different part of the product, perhaps a part that unwittingly relied on the buggy flow that was fixed. In this scenario, verifying that the defect has been fixed is not enough; retesting the entire product would be the appropriate response. This being the case, having automated tests that can guarantee certain flows still work according to spec is highly valuable (see the sketch after this list).
  3. Long-term monitoring over extended deployments and load testing. For software that is expected to run a large number of processes for lengthy intervals unsupervised, manual testing is a poor choice. Rather, for this task it’s preferable to have machine patience and indefatigability—repeating tasks over and over with slightly different variables, automatically storing and parsing all data germane to the system while testing its robustness over time.
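
To make the regression point concrete, here is a minimal sketch of what one such automated flow check might look like. It assumes a pytest-style test against a hypothetical HTTP service; the URL, endpoints, and payloads are placeholders for illustration, not Fugue’s actual interfaces.

    # test_resource_flow.py -- hypothetical regression check for one service flow.
    # The endpoints and payloads below are illustrative placeholders.
    import requests

    BASE_URL = "http://localhost:8080"  # assumed local test deployment

    def test_create_then_fetch_resource():
        # Replay a flow first verified manually: create a resource,
        # read it back, and confirm the fields we care about survived.
        created = requests.post(f"{BASE_URL}/resources", json={"name": "example"})
        assert created.status_code == 201

        resource_id = created.json()["id"]
        fetched = requests.get(f"{BASE_URL}/resources/{resource_id}")
        assert fetched.status_code == 200
        assert fetched.json()["name"] == "example"

A check like this can run on every build, so a fix in one module that breaks an unrelated flow is caught without re-running the entire manual test plan.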

 

Challenges of building an automated test framework.

 

The challenges can be significant. They include:

 

  1. Maintenance in the face of a changing subject codebase. Depending on how you design your automation and what strategy you use to implement it, you may spend a lot of time keeping your code up-to-date in order to match the evolving interface of the product you are testing (one way to contain this is sketched after this list).
  2. Flexibility to add new test cases in keeping with an evolving risk matrix. Once again, depending on your automation design and the strategy you use to implement it, you may have a hard time accommodating new tests in an organized, clean, and internally coherent way that allows you to keep track of what sort of coverage you actually have in any given moment.
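
To illustrate the maintenance concern, one way to keep interface churn confined to the smallest possible module of code (a criterion we return to below) is to route every call to the product through a thin client wrapper, so the tests never talk to the product directly. A sketch with hypothetical names:

    # product_client.py -- hypothetical thin wrapper around the product's API.
    # When the product's interface changes, only this module needs updating;
    # the tests that import it stay the same.
    import requests

    class ProductClient:
        def __init__(self, base_url):
            self.base_url = base_url

        def create_environment(self, name):
            # The single place that knows the current endpoint and payload shape.
            response = requests.post(f"{self.base_url}/environments", json={"name": name})
            response.raise_for_status()
            return response.json()["id"]

        def delete_environment(self, env_id):
            response = requests.delete(f"{self.base_url}/environments/{env_id}")
            response.raise_for_status()

Tests written against a wrapper like this keep working as long as the wrapper is updated to track the product.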

 

How to transition.

 

One of the biggest concerns we had in transitioning from mostly manual test coverage to mostly automated test coverage was creating the necessary breathing room for the transition. While manual testing is ongoing, it’s still necessary to invest time and resources into putting together a fully automated testing framework.

 

Despite considerable buy-in from stakeholders, our attempts to increase the share of automated testing on early versions of the software fell short of the mark, and we were forced to spend a lot of time running tests manually to keep up with our established QA obligations and deadlines. We noticed during this period that the incentives of manual testing and fully automated testing aren’t aligned in the short term, and this was a source of friction. Manual testing provides instantly quantifiable value by determining whether or not a feature is working correctly. Fully automated testing is a longer-term investment, the exact value of which might not be obvious until completion. Even then, as with any other software product, there’s no guarantee that the benefits will live up to the original vision. Furthermore, the more ambitious the automation project, the harder it is to project the capabilities and virtues of the finished product.

 

We needed an automation-building strategy that met several criteria. Automation had to be built in tandem with manual testing. Any interference with concurrently verifying the product’s quality had to be offset by increased productivity in the short term (i.e., in the same sprint). The benefits of any automation we built had to be verifiable sooner rather than later. Maintenance costs needed to be as low as possible and confined to the smallest possible module of code.

 

The strategy we decided upon that matched these criteria was to build small scripts that improved the efficiency of manual testing. This approach felt akin to creating hammers and screwdrivers before attempting to build an actual abode. Also, like a society beginning to create its first tools, it felt like a more organic and natural progression of things. Actions that one repeated often, such as setting up or tearing down a particular test environment, begged to become shell scripts. Indeed such scripts existed in raw, distributed form across our organization. The sea change began when we started to share, iterate, and improve our little shell tools as we would any other piece of software.
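
As an illustration, a setup/tear-down helper of the kind described above might look like the following. Ours began as shell scripts; this is a Python sketch with placeholder names and a hypothetical environment API, meant only to show the shape of such a tool.

    # reset_test_env.py -- hypothetical example of a small helper that grew out
    # of repeated manual setup and tear-down steps. The URL and endpoints are
    # placeholders, not real Fugue infrastructure.
    import argparse
    import requests

    BASE_URL = "http://localhost:8080"  # assumed local test deployment

    def tear_down(name):
        # Best-effort cleanup; a missing environment from a previous run is fine.
        requests.delete(f"{BASE_URL}/environments/{name}")

    def set_up(name):
        # Create a fresh environment with a known starting state.
        response = requests.post(f"{BASE_URL}/environments", json={"name": name})
        response.raise_for_status()
        return response.json()

    if __name__ == "__main__":
        parser = argparse.ArgumentParser(description="Tear down and recreate a test environment.")
        parser.add_argument("name", help="environment to reset")
        args = parser.parse_args()
        tear_down(args.name)
        print(set_up(args.name))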

 

Once we moved forward with this strategy, we noticed several things. First, building simple tools that reduced the time spent on manual testing helped bridge the incentive gap. Manual testing became much more efficient, so we were able to fulfill our concurrent testing obligations. At the same time, we were advancing our mid-to-long-term goals of automating as much of our test plan as possible. Another thing that improved through this process was our ability to maintain our gains. Small, modular tools that were built to work with a certain version of the product were much easier to modify and update. As we automated more of our work, our steps to reproduce certain defects became more reliable.

 

Which tools do you use first?

 

Setup and diagnostic tools have proven to yield a solid rate of return. Additionally, such tools can prove useful to product development teams who might in turn contribute and improve the testware. Conversely, it’s entirely possible that someone from another team has written something like this already, in which case it is your team that can improve upon the existing tools. Generally speaking, any tools that you can use to make manual testing faster and more efficient and that also can be called from fully automated test code are what I suggest using first. That way, even if you get stuck for a period in a hybrid manual/scripting test situation, you will preserve your gains in testware infrastructure.
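
For example, a diagnostic helper written as an importable function with a thin command-line entry point can be run by hand during a manual session today and called from fully automated test code later. The service names and health endpoint below are hypothetical stand-ins:

    # check_services.py -- hypothetical diagnostic tool usable two ways: run it
    # by hand during manual testing, or import check_health() from automated
    # test code. Service names and the URL are placeholders.
    import sys
    import requests

    BASE_URL = "http://localhost:8080"  # assumed local test deployment
    SERVICES = ["api", "scheduler", "worker"]

    def check_health(service):
        # Returns True if the service reports healthy; callable from test code.
        try:
            response = requests.get(f"{BASE_URL}/health/{service}", timeout=5)
            return response.status_code == 200
        except requests.RequestException:
            return False

    if __name__ == "__main__":
        # Manual use: print one status line per service and exit nonzero on failure.
        failures = [s for s in SERVICES if not check_health(s)]
        for service in SERVICES:
            print(service, "DOWN" if service in failures else "ok")
        sys.exit(1 if failures else 0)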

 

Where does manual testing still come into play?

 

Getting newcomers acquainted with the system or learning to use a new feature will involve manual testing. It has been my experience that a QA engineer’s first contact with a new product or feature should have as few filters as possible. An automated test must preserve the intent of a test that has already been confirmed to work manually. If an engineer only sees the test case through the prism of testware, it’s hard to determine if that testware obscures any issues. In the case of a failure, it’s hard to determine whether the fault lies with the product and its features or with the testware itself.

 

Manual testing also remains in play during exploratory testing. Something about ongoing organic contact with the product can reveal a wealth of test cases (and actual productive use cases) that might not come up during the original draft of a test plan. It makes perfect sense that a QA engineer would be the person in the organization with the most hands-on experience using the product. Just remember to stay organized and keep track of any flows that crop up through this approach but weren’t accounted for in the original test plan. The opposite of forgetting is writing it down.

 

This post began with a few concise explanations of testing from respected thinkers in our field. Let me humbly add an observation. When possible, manual tests eventually should be replaced by automation. It’s not always clear which steps a team must take. Discussed here is one tactic that has been useful for us. Hopefully this illustrates a potential path for others in our community.

 
