By Titus Fortner, Senior Solutions Architect, Sauce Labs
There's something about testing that seems to attract analogies. Everywhere you go, every presentation you watch, someone is talking about how similar testing is to this, or how much testing reminds them of that. Now, if I say that with a twinge of negativity in my voice, it's because the most popular of those analogies also happens to be my least favorite: the mountain top.
It isn't even just testing; chances are you've seen or heard someone talk about how some task is analogous to climbing a mountain, with imagery of needing to cross a hazardous chasm and overcome obstacles before eventually reaching the mountain top and achieving victory.
Here's the problem with that analogy, though, and it's kind of a big problem: there's often no mountain top, especially not in testing. If you're doing it right, testing is a continuous activity. There's no end state, no ultimate achievement that renders your work complete. There's nothing you can do or accomplish today that can't conceivably be undone by something released tomorrow.
Remember, effective automated testing is essentially about avoiding the risky and the unpredictable. In that sense, the mountain top isn't just the wrong analogy for testing; it's the antithesis of what testers should strive to achieve. Let's agree to ditch the mountain once and for all and instead focus on finding what I call the Valley of Success. (Credit where credit is due: Rico Mariani first coined a similar term - The Pit of Success - when discussing the development of low-level software code in the early 2000s.)
The Valley of Success is about sustainability, flexibility, and repeatability. It's the place where testers have the reliability and equilibrium they need to make the best possible decisions: the ones that deliver the most possible progress at the least possible cost. The Valley of Success is where effective automated testing happens. Let's examine four key strategies testers can implement to help them get there.
Favor long-term flexibility over short-term success
Whatever it is you hope to achieve with
your test implementation, there's almost certainly more than one path to
getting there. With that being the case, there's no reason to sacrifice
flexibility and resilience in the name of short-term success. When you focus
entirely on short-term success, you inevitably wind up with an implementation
that's brittle and prone to long-term failure. Rather than thinking about what will make your project successful right now, think instead about designing an implementation that provides the best insulation against long-term issues. Remember, short-term success is the easy part. Staying in the Valley of Success, however, requires an implementation that's built to last.
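To make the contrast concrete, here's a minimal sketch in Python using Selenium (a natural fit for browser tests, though the same idea applies to any stack). The page URL, locators, and timeout are hypothetical; the point is that the short-term shortcuts shown in the comments may pass today, while the explicit wait and stable locator are what keep the test passing after the page layout or its timing changes.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/login")  # hypothetical page

    # Short-term thinking: a hard-coded pause plus a locator tied to today's layout.
    # time.sleep(5)
    # driver.find_element(By.XPATH, "/html/body/div[3]/div[2]/form/input[1]")

    # Long-term thinking: wait for an explicit condition and locate by a stable
    # attribute, so the test survives timing changes and cosmetic restructuring.
    username = WebDriverWait(driver, 10).until(
        EC.visibility_of_element_located((By.ID, "username"))
    )
    username.send_keys("tester")
finally:
    driver.quit()
```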
Don't get too clever
Let's start with
a basic premise about most testers: we love a challenge. We wouldn't be testers
if we didn't. The thrill of a challenge is often more alluring to us than
something simple and straightforward. Combine that with the reality that you can make just about any process "work" when designing a test implementation, and you wind up with a world in which test engineers often create extremely complicated approaches.
Now, with the
requisite resources and expertise, these complex approaches can be entirely
successful (and leave said tester extremely satisfied). But just because you can do something doesn't mean you
should. Designing a test implementation that works right now is not the same as
designing one that's built to last. When it comes to finding the reliability
and repeatability of the Valley of Success, the simpler and more
straightforward implementation is almost always the preferred path.
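As an illustration of that difference, consider the following sketch in Python with pytest; `format_price` and the test names are invented stand-ins for real application code. Both halves achieve the same coverage, but the metaprogrammed version is the kind of cleverness that satisfies its author and confuses everyone else.

```python
import pytest

def format_price(amount, currency):
    """Hypothetical function under test; stands in for real application code."""
    return f"{amount:.2f} {currency}"

# A deliberately "clever" approach: test functions generated by metaprogramming.
# It runs, but failure reports show synthesized names, and a reader searching
# the file for a failing test finds nothing to read.
for _currency in ("USD", "EUR", "JPY"):
    def _make_test(currency):
        def _test():
            assert currency in format_price(100, currency)
        return _test
    globals()[f"test_generated_price_{_currency.lower()}"] = _make_test(_currency)

# The straightforward version: the same coverage, written so anyone can read it,
# run a single case in isolation, and see exactly which input failed.
@pytest.mark.parametrize("currency", ["USD", "EUR", "JPY"])
def test_price_includes_currency(currency):
    assert currency in format_price(100, currency)
```

The parametrized version produces readable failure output and can be run case by case, which is usually worth far more than the elegance of generating tests on the fly.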
Consider other implementers as well as end-users
One of the
biggest reasons to eschew the complex in favor of the simple is that the
code's author is unlikely to be the only person who will ever have to maintain
it. Framework design choices built around conventions that make sense only to
someone with the author's history and frame of reference are bound to fail when
another maintainer inevitably takes the reins.
Instead of
promoting continuity and consistency - hallmarks of the Valley of Success -
this personalized approach leads subsequent users to find creative but
ultimately unsustainable workarounds rather than using the framework as
designed. If and when the framework author leaves the company, all their hard
work is scrapped in favor of a new approach that fits the new author's personal
frame of reference, thus perpetuating the cycle of wastefulness and
inefficiency. Don't just think about yourself when creating a framework. Think about the next user instead.
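For example - and this is a hypothetical sketch in Python with Selenium-style page objects, with invented names and locators - compare a framework convention only its author can decode with one the next maintainer can read at a glance.

```python
from selenium.webdriver.common.by import By

# Author-specific shorthand: obvious to the person who wrote it, opaque to the
# next maintainer ("what does 'go' do, and why is the locator called 'u_fld'?").
class LgnPg:
    u_fld = (By.ID, "username")

    def go(self, d):
        d.get("https://example.com/login")

# Conventional, descriptive names: the next implementer can guess what each
# piece does without reverse-engineering the original author's habits.
class LoginPage:
    """Page object for the (hypothetical) login page."""
    USERNAME_FIELD = (By.ID, "username")
    PASSWORD_FIELD = (By.ID, "password")

    def __init__(self, driver):
        self.driver = driver

    def load(self):
        self.driver.get("https://example.com/login")

    def log_in(self, username, password):
        self.driver.find_element(*self.USERNAME_FIELD).send_keys(username)
        self.driver.find_element(*self.PASSWORD_FIELD).send_keys(password)
```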
Don't follow prescribed object-oriented design principles
Overdeveloping your code in the name of following preordained
design principles is often worse than having "bad" development practices in the
first place. Consultants
make a lot of money convincing people that they aren't following the correct
rules (as decided by various luminaries such as themselves, of course). But
it's much easier to fix underdeveloped code than overdeveloped code, and in my
experience, too many developers wind up focused on strictly following predefined rules rather than understanding the purpose behind them and applying them thoughtfully to their specific circumstances.
A good design principle is one
that makes the code you're writing easier to work with and easier to maintain.
What that ultimately looks like is much more context-dependent than any preset
rules can account for. That doesn't mean you shouldn't understand the various
design principles and why they exist, but following them blindly and without
considering context tends to result in more harm than good.
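As a hedged illustration (in Python, with invented class names), here's what this often looks like in a test framework: an abstraction layered in because a rule says to depend on interfaces, next to the plain function that the current context actually calls for.

```python
from abc import ABC, abstractmethod
from selenium import webdriver

# Overdeveloped: an interface and factory hierarchy added because "the rules"
# say to program to abstractions, even though only one browser is ever used.
class DriverFactory(ABC):
    @abstractmethod
    def create(self): ...

class ChromeDriverFactory(DriverFactory):
    def create(self):
        return webdriver.Chrome()

class DriverFactoryProvider:
    def get_factory(self, name="chrome"):
        return {"chrome": ChromeDriverFactory()}[name]

# Context-appropriate: a plain function that is easy to read, easy to change,
# and easy to delete. Extra layers can be added later if the context demands it.
def create_driver():
    return webdriver.Chrome()
```

Neither shape is universally right; the point is that extra layers should earn their place from the context, not from the rulebook.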
Finding your valley
One of the unique things about
testing is that it requires us to make decisions every single day, and those
decisions are what ultimately determine our fate. It's up to us to block out
the noise and focus on test implementations that are both simple and
sustainable. We need to resist the urge to start climbing mountains and stick
to the Valley of Success.
## About the Author
Titus Fortner, Senior Solutions Architect,
Sauce Labs
Titus Fortner is a senior solutions
architect at Sauce Labs, where he works with customers and the community to
facilitate testing best practices. He is also a core contributor to
the Selenium project and the maintainer of the Ruby bindings. Titus spends a
significant amount of time writing open source testing software built on top of
Selenium. He is the project lead for Watir and is active in supporting these
projects on Stack Overflow, message boards, and in the Selenium Slack and IRC.