Re: Agile Methodology
- From: "Michael Bolton" <google@xxxxxxxxxxxxxxxxx>
- Date: 3 Sep 2005 10:17:42 -0700
> >> > * Manual testing is not useful, and testing team should 100%
> >> > automated testing?
> >> Any manual testing is a waste of time that should be converted to
> >> automated
> >> testing.
> > It is patently absurd to make these statements
> If you can't _easily_ write a new automated test for _anything_, then you
> have a bigger problem than the bug you are trying to capture with a test.
Machines can't do everything that humans can. I don't see that as a
big problem, but rather a fact of our existence. It comes down to the
appropriate choice of tools, techniques, and approaches for the task at
hand.
One feature by which some Agilistas fail to distinguish themselves from
the Factory School (and it pains me to say so) is that they make
absolutist statements and promote relentless silver bulletism. There
are some fantastic things about the Agile philosophy, but it's not like
Agile processes can cure cancer. More critical thinking would serve
some members of the community well.
For example: "easily" is a context-dependent term: easily /compared
to what/? Running an automated test is usually cheap; developing one
is often cheap--but cheap is not the same as free. It takes more
time, effort, and imagination to construct a complex test,
whether it's performed manually or automatically. We dishonour
programmers, testers, and tests by thinking that programming a test of
"anything" is trivial. It takes time to craft the expression of the
idea into code, and there are contexts in which it's less difficult,
less expensive, and more valuable to put a trained brain, turned on, to
the task. For exactly the same reason, Agile teams don't write
everything down in a functional spec; they talk, sketch diagrams, write
stuff on 3x5 cards, use whiteboards, write email. And yet when some
Agilistas talk about testing, they talk about automated tests as though
they were the only tests worth performing. Ridiculous.
There are some things that manual tests can almost always do more
easily or appropriately than automated tests (note my use of "/almost/
always"; I believe it's a hallmark of good thinking that we entertain
the possibility of uncertainty). Here are but a few examples:
- spelling and grammar checking
- more generally, checks for almost all kinds of semantic correctness
and many kinds of syntactic correctness
- tests that compare the operation of a program that you've developed
with a program that someone else has developed
- comparison of written documentation and a software product
- checks to ensure that the product successfully models workflows for
several, diverse kinds of users
- reasoning associated with things that involve paradox,
contradiction, or ambiguity
- tests involving the manipulation of hardware in a way that only
humans would do, e.g. leaving a book on the keyboard or unplugging a
cable
- tests to discover show-stopping bugs
- recognition of the graphical characters used to help thwart spammers'
bots (in this case, a successfully automated test would be a disaster)
- tests of human-style behaviour, such as changing of mind, radical
alteration of expected workflows, hacking, cheating, anticipating, and
the like
- tests to find a workaround for a complicating factor in an existing
product, where writing and deploying new code would be more expensive,
troublesome, or risky than supplying the workaround
- comparison of a product that you're developing with an older version
of that product for which no automated tests exist
- perception of appropriateness to purpose, esthetic judgement, etc.
- evaluation of audio and video quality
- subjective or qualitative assessments
Are these the only things that can be tested, or the only approaches
that we should take to testing? Of course not. Would tests like these
expose important bugs? They might well.
If you believe in automating all tests, why stop there? Why not
automate all programming? The answer is that programming computers is
usually a deeply intellectual and creative art. Some foolish people
used to believe that computers could be trained to write their own
code, and that we would, within a few years, have programs that would
interpret human requirements and write them automatically. That sort
of talk eventually went away. Testing is also a deeply intellectual
and creative art that also, sometimes, involves programming computers.
> > ...the original is an absurd claim for the ThoughtWorkers to
> > make...
> To start the coaching process, it is a rallying point. "Get ready for this
> mode. Get mentally ready, now. Tests are more important than anything else,
> because they are the only things that gate delivery."
Again, this is an absolute statement that simply doesn't hold up in
every situation. The need to revise requirements for changing market
conditions can gate delivery. The cancellation or suspension of a
project for budgetary reasons can gate delivery. Lack of resources or
staff can gate delivery. Natural disasters can gate delivery.
From the opposite perspective, at some point, we decide that we've
asked enough questions (and answered them with testing) to have
sufficient confidence that we can ship. We could ask more questions
about the product, but we ship anyway. If the business wants to ship
the product, and the programmers and testers would like to do more
testing, who wins? The business might consult the programmers and
testers (and would be wise to do so), but ultimately, it's the
business's call.
In this sense, many tests don't gate delivery at all.
> Agile teams practice "Daily Deployment", which means every day. Here's a
> great blog by Gunjan Doshi on the topic:
> "I have been helping this client since March of this year. Yesterday, they
> achieved a major milestone - production launch. I think it is ok to classify
> a system going live on budget, time and within scope as a success story.
> "We all know about the last minute jobs and the ensuing stress, when the
> system goes live for the first time. However, this one had a very different
> feel to it - there was no anxiety at all. The system went live as per
> schedule and there were no hiccups. It just seemed like any other day at
> the office.
> "How did that happen? Well, for the past few weeks, we have been regularly
> deploying the system into a simulated production environment and testing the
> hell out of it. The production manager was so confident of the success that
> during the release, she was in a meeting for a totally unrelated project."
> The team deployed daily to a production simulation. That means absolutely no
> repeated manual tests gated the deployment. They would have slowed down the
> process, and would have made delivery day much more nervous.
> In terms of "non-Agile" projects, what project lifecycle could possibly
> sustain such bad practices as infrequent deployment and have such high odds
> of success?
The answer is: every product before the Agile Manifesto existed. The
Agilistas that frame questions this way are like teenagers who've just
discovered something the rest of the world has known about for years.
There are contexts in which it's impractical or infeasible to deploy
every day. Embedded systems in which hardware and software are being
developed at the same time can't do it. You'll find very few people
who run systems in high-risk contexts that are enthusiastic about daily
deployment. Also, when developers talk about deployment, that's one
thing. IT managers, vendors of mass-market software, and makers of
mass-market products with embedded software mean something different by
the word.
I like many, perhaps most, of the Agile processes and practices. For
many contexts, they're a better way to develop software than other,
more traditional methods. It's just that they're not the only way to
develop software, and there is much about development that the Agile
community (just like all the others) is learning.
> Manually test all the time. The need to do the same manual test twice is a process smell.
So is coffee. Look at the list above; some tests in those categories
will have to be performed manually more than once. Oh well.
> > I have been working in an Agile project for some months. The products
> > are well and thoughtfully developed. The programmers are highly
> > capable and professional. They write unit tests, and they co-operate
> > with testers by quickly and responsively writing and troubleshooting
> > Fitnesse fixtures. Bugs still leak out.
> Why? What practices do they follow?
Pair programming. Communication. Courage. Test-driven development.
Unit tests. Acceptance tests, written in Fitnesse. 40-hour work week.
Sustainable pace. Refactoring. Retrospectives. Short iterations.
On-site customer. The works.
They also deal with a passel of problems that are common to all
software development projects. Requirements change. Experienced
people leave. New people arrive. Software is developed using
imperfect tools on imperfect platforms. Manual tests take time and
effort. Automated tests take time and effort. People speak and write
ambiguously. People make mistakes. People interact with software in a
way that developers and designers don't anticipate. Automated tests,
as wonderful as they are, don't solve these problems. Neither do
manual tests. But both classes help to mitigate risk.
> > I've found a bunch of them,
> > and I've found the majority of them in manual testing.
> You started this entire slant by assuming "we" meant never manually operate
> the program to see what it does.
No, I didn't. I started this entire slant by reacting to a couple of
statements:
"Manual testing is not useful, and testing team should 100% automated
testing?"
"Any manual testing is a waste of time that should be converted to
automated testing."
Had these been reframed as follows, I wouldn't have said anything at
all:
"At every opportunity, seek to use and program the computer to perform
rote tasks that you'll want to run often, tests that involve large
amounts of data, tests that can reveal information more quickly, more
broadly or more accurately than a human can. At every opportunity, use
the time that you've gained to perform manual tests that require human
observation, judgement, skill, evaluation. Balance the cost and value
of each kind of test with the other. Make pragmatic choices."
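That reframed advice lends itself to a small sketch. The function
under check (normalize_phone) and its data are invented for
illustration; the point is the shape of the thing: a rote,
data-driven check that a machine can run over hundreds of cases in
milliseconds, freeing a human to do the observation and judgement
that the machine can't.

```python
# A minimal, hypothetical sketch of automating a rote check.
# normalize_phone and the CASES table are stand-ins for any routine
# transformation a human would find tedious to verify by hand.

def normalize_phone(raw: str) -> str:
    """Strip everything but digits from a phone number."""
    return "".join(ch for ch in raw if ch.isdigit())

# In practice, pairs like these might be generated, or loaded from a
# file with thousands of rows; the machine checks them all at once.
CASES = [
    ("(416) 555-0199", "4165550199"),
    ("416.555.0199", "4165550199"),
    ("+1 416 555 0199", "14165550199"),
]

def run_rote_checks(cases):
    """Return (input, expected, actual) tuples for every failing case."""
    failures = []
    for raw, expected in cases:
        actual = normalize_phone(raw)
        if actual != expected:
            failures.append((raw, expected, actual))
    return failures

if __name__ == "__main__":
    failures = run_rote_checks(CASES)
    assert not failures, failures
    print("all rote checks passed")
```

A check like this is cheap to run often, but notice what it doesn't
do: it can't tell you whether the normalized numbers look right to a
user, or whether stripping the "+" was a good design choice. Those
questions still want a trained brain, turned on.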
> Tests propel development by providing an alternate platform for such
> experiments. Your project is experiencing some problem in this area; don't
> blame ThoughtWorks!
I don't blame ThoughtWorks for the fact that /all/ projects, even Agile
ones, experience some problems, and I don't deny that Agile processes
help to solve a great many of them. I do blame at least one
ThoughtWorker for making a fatuous remark that, in my view, bespeaks a
very shallow understanding of what testing means.