I'm going to try to tie together some of the things that we've
been talking about in TDD, and I'm going to try to convince you
that even those of you for whom TDD appears to be a new thing
actually already know how to do it. You've been doing all of the
same things already; you've just been doing them really
ineffectively. Before I do that, let me just quickly go through a
set of other testing terms you might hear.
Testing has been around for a long time, and different
vocabulary, concepts, and terms have come up besides the ones we
commonly use. Mutation testing is a really interesting idea: if I
introduce a deliberate error into the app, does some test fail?
If not, there's probably a hole in your test coverage somewhere.
In fact, as a preview, this is essentially how we're going to
grade one of the homeworks. You're going to write your own
Cucumber scenarios, and we have a version of the app where we can
insert bugs, so we can make sure you have some scenario steps
that fail because of those bugs. Fuzz testing,
sometimes called monkey testing, means generating random inputs
for your code: you try to exercise the code in ways it wasn't
designed to be used. For a command-line program you can just type
random input at the command line; for a SaaS application, it's
things like submitting forms with random values, or values that
are way too long, or values that contain illegal characters.
Microsoft says they found about 20% of their bugs this way, and
there are fuzz tests that can crash up to 25% of the popular Unix
command-line utilities. This is a powerful technique once you've
done a basic level of testing; if you do it before that, you'll
despair, because your app will still be brittle enough that
almost any random input breaks it.
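Here's a minimal sketch of the idea in plain Ruby. The method under test, `parse_zip_code`, and its validation rule are made-up examples of mine, not anything from the lecture; the point is the shape of the loop, which distinguishes graceful rejection of bad input from an outright crash:

```ruby
# Hypothetical code under test: rejects anything but a 5-digit string.
def parse_zip_code(input)
  raise ArgumentError, "not a zip code" unless input =~ /\A\d{5}\z/
  input.to_i
end

chars = (32..126).map(&:chr)   # printable ASCII, legal and illegal alike
crashes = []
1000.times do
  # random length (including "way too long"), random characters
  fuzz_input = Array.new(rand(0..200)) { chars.sample }.join
  begin
    parse_zip_code(fuzz_input)
  rescue ArgumentError
    # fine: the code rejected bad input gracefully
  rescue StandardError => e
    crashes << [fuzz_input, e]  # an unexpected crash the fuzzer surfaced
  end
end
puts "#{crashes.length} unexpected crashes in 1000 random inputs"
```

A real fuzzer would also record each crashing input so the failure can be replayed as a regression test.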
DU-coverage, short for "define and use," is another variant of
how you measure test coverage. Look at every place where some
variable is defined, meaning a value gets assigned to it, and
every place where, sometime later, somebody consumes that value.
The number of such pairs can be multiplicative in the number of
references, because if I define a variable and there are three
different code paths that could consume the value I set, that's
three different DU pairs.
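As a hypothetical illustration (mine, not the lecture's), here is one definition of `discount` followed by three code paths that each consume it, which gives three DU pairs:

```ruby
# One definition (D) of `discount`, three uses (U1, U2, U3):
# the DU pairs are (D,U1), (D,U2), and (D,U3).
def price_in_cents(base_cents, status)
  discount = 10                                                      # D: percent off
  case status
  when :student then base_cents - base_cents * discount / 100        # U1: full discount
  when :senior  then base_cents - base_cents * (discount * 2) / 100  # U2: double discount
  else               base_cents - base_cents * discount / 200        # U3: half discount
  end
end
```

Full DU-coverage requires exercising all three pairs, so covering the line that assigns `discount` isn't enough; you need at least one test per branch that reads it.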
Another way that you can look at coverage, then, is: of all
possible DU pairs, what fraction of them are exercised somewhere
in my test suite? Lastly, you'll probably hear the terms black-box testing
versus white-box testing, or sometimes people say "glass box"
rather than "white box." The idea is that a black-box test is
what you write when you either don't have much information about
the implementation or you're really just concerned with testing
against an external spec, as opposed to a white-box test, where
you know something about the implementation and your test cases
are designed to stress the things you know. The difference is
just in what you're trying to test. An easy example: if you're
coding a hash table, the black-box definition of a hash table is
"given a key, can you get back the right value?" But if you know
something about what hash function is used inside the hash table,
you might create a glass-box or white-box test that deliberately
tries to create hash collisions, because you know how it works.
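To make the contrast concrete, here's a toy hash table of my own (not from the lecture) whose "implementation detail" is that it buckets keys by `key.length % 7`. A black-box test only checks that a stored key comes back with the right value; a glass-box test deliberately inserts two same-length keys, because we know those must collide in the same bucket:

```ruby
class TinyHash
  def initialize
    @buckets = Array.new(7) { [] }       # 7 buckets, chained with arrays
  end

  def []=(key, value)
    bucket = @buckets[key.length % 7]    # the "known" hash function
    bucket.reject! { |k, _| k == key }   # overwrite any existing entry
    bucket << [key, value]
  end

  def [](key)
    pair = @buckets[key.length % 7].find { |k, _| k == key }
    pair && pair.last
  end
end

h = TinyHash.new
h["alice"] = 1   # black-box check: key in, right value out
h["carol"] = 2   # glass-box check: same length as "alice", so it
                 # MUST collide in bucket 5; both entries should survive
```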
So if you hear the terms black box or white box, that's what
they're referring to.

Now, this is where I try to convince
you that you already know how to do TDD; you've just been doing
it in the wrong order. This is how I used to write code. I have
been saved, and I hope you too can be saved. I used to write a
bunch of lines which I was sure were correct; I would run them;
of course there'd be a bug; and: "Ah, that's a stupid bug. That
wasn't me, that was just stupidity." No problem, I'll break out
the debugger. I'll look at the values of things. I'll put in some
printfs to make sure I know where the code is. Once I've stopped
in the debugger I can tweak or set other variables and make sure
that the right thing happens. "Okay, continue. Oh, it didn't
work." Once I've fixed it, I'm sure I've fixed it, and now that's
the last bug ever, and of course it never is. All of these same
steps occur in TDD, just in a different
order. You write a few lines, but because you write the test
first, you'll know immediately when you've written something
stupid. Instead of writing a bunch of lines and then discovering
"oh, I made a spelling error," you figure it out now, instead of
later when your brain is someplace else. Instead of having to put
printfs everywhere, mock or stub the things you care about and
use an expectation to say: at this point, this variable ought to
have this value. Use something like should equal, or
should_receive when you're talking about a method being called.
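The should equal and should_receive syntax is RSpec's; as a rough sketch of the same idea using only Ruby's bundled minitest library, with a hypothetical payment gateway standing in for the collaborator:

```ruby
require 'minitest/mock'   # minitest ships with Ruby

# Hypothetical code under test: a checkout that is supposed to
# call the gateway exactly once with the right amount.
def checkout(gateway, amount)
  gateway.charge(amount)
end

gateway = Minitest::Mock.new
gateway.expect(:charge, true, [25])   # "you should receive :charge with 25"
checkout(gateway, 25)
gateway.verify   # raises MockExpectationError if the call never happened
puts "expectation met"
```

Instead of print statements scattered through the code, the expectation states up front exactly what interaction must occur, and the test fails loudly if it doesn't.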
What about the part where I stopped in an interactive debugger
and very carefully set up the world so that I knew which code
path was going to come next? That's what mocks and stubs are for:
we can return canned values from method calls through seams, so
that we know which code path execution is going to move down.
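For instance (a hypothetical example of mine, in plain Ruby rather than RSpec's stubbing helpers), a stub with a canned return value steers execution down a chosen branch, deterministically:

```ruby
class Forecast
  def rainy?
    raise "would call a real weather service"   # slow, nondeterministic part
  end

  def advice
    rainy? ? "bring an umbrella" : "leave the umbrella home"
  end
end

f = Forecast.new
# The seam: override rainy? on just this object with a canned value,
# so we know exactly which branch of advice will run.
f.define_singleton_method(:rainy?) { true }
puts f.advice
```

No debugger session required: the canned value replaces the careful "set up the world by hand" step, and it's repeatable on every run.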
Finally, when I get to the part where I was sure that I'd fixed
it but it turns out I was wrong: if you've been doing this with
tests, you just rerun the test, or have autotest rerun it for
you, and you keep making tweaks until you get it right. You've
actually already done all of the things that TDD requires.
Really, this is just about doing them in a different order; it's
all the same skill set. My view is that these are the same skills
you've already developed, but you
put them to work in a way that makes them more productive.

The second lesson, which is less obvious, is that writing tests
before the code does feel like it takes more time up front, and
it does take more time up front, but only because you don't
realize you're getting a free benefit: it forces you to think
about how the code will be used. It is remarkably common, and I
guarantee the following is going to happen to you at least once:
you write a method that you know you're going to need; you know
what arguments it takes and what it's supposed to return; you
code it up, and you're proud of yourself. Now you go back to the
part of the code that was going to call that method, and you
realize that you actually weren't quite right about how you were
going to call it. There's an additional argument you didn't
realize you needed to pass, or the return value isn't quite what
you wanted and it would be more convenient if it were different.
When you're doing TDD, it forces you to think about that stuff up
front, and that's why it takes longer: you're doing little bits
of design as you go. By the time you get to backfilling in the
method, like I said, the hard part has been done. You've figured
out the structure of the code and what you need to test about
it. Actually writing
the code, given those constraints, is the much easier job.

Here's what I believe we've learned about TDD. You've learned the
idea of red-green-refactor: make sure the test fails for the
correct reason, then backfill the code, and always look for
opportunities to beautify. Your goal is to spend almost all of
your time in green. You have to go red when you're adding a new
test, and as soon as you add it, write the minimum amount of code
to get back to green. Think of it as always trying to return to
the stable state.
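One turn of that loop, in miniature; `pluralize` and the one-line checker are made-up stand-ins for what you'd actually write as RSpec examples:

```ruby
# Tiny assertion helper, standing in for an RSpec expectation.
def check_equal(expected, actual)
  raise "expected #{expected.inspect}, got #{actual.inspect}" unless expected == actual
end

# RED: the checks below are written first. Run them before pluralize
# exists and they fail for the correct reason (NoMethodError),
# not because of a typo in the test itself.
# GREEN: backfill the minimum code that makes them pass.
def pluralize(word, count)
  count == 1 ? word : word + "s"
end

check_equal "tests", pluralize("test", 2)
check_equal "bug",   pluralize("bug", 1)
# REFACTOR: with everything green, beautify freely; rerun after each tweak.
```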
Test one behavior at a time, and we've seen how you can use seams
to intimately control what happens in each test case. Use
placeholders: we saw some examples, when we first started
developing these, that even just saying "it should do
such-and-such" without filling in the test code at least serves
as a placeholder to remind you that you've got to come back to
that test. You can even use the pending keyword, so that you'll
see a little message printed out in yellow every time you run
that example through RSpec.
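In RSpec that looks something like the following spec-file fragment (the movie-list example is hypothetical); a bodiless `it`, or one marked `pending`, shows up in the runner's output in yellow instead of failing:

```ruby
describe "the movie list" do
  it "sorts movies by release date"   # no body yet: reported as pending

  it "filters movies by rating" do
    pending "waiting until ratings are implemented"
    raise "not written yet"           # pending examples are allowed to fail
  end
end
```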
Look at the coverage reports. We will have you run SimpleCov as
part of your overall test coverage. It's worth using those
reports, not to get religious about needing 100% coverage, but to
point out the parts of the code that are clearly under-tested.
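Turning SimpleCov on is a two-line change at the very top of your test helper (spec/spec_helper.rb in an RSpec setup); it has to run before your application code is loaded, or those files won't be instrumented:

```ruby
# first lines of spec/spec_helper.rb
require 'simplecov'
SimpleCov.start
```

After the next test run, `coverage/index.html` shows per-file percentages and highlights the lines no test ever executed.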
Hand in hand with that: don't rely too heavily on any one kind of
test. There's a reason that many different kinds of tests have
evolved over time, and one of those reasons is that different
types of tests catch different kinds of errors. Here's a quick
question about testing. Which of these non-obvious statements is
false? (1) Even 100% test coverage, by whichever metric you want
to pick, is not a guarantee of bug freedom. (2) If you can
simulate a bug-causing condition using a debugger, then you can
capture it in a test. (3) Testing eliminates the use of the
debugger, assuming you've done the second thing. (4) When you
change your code, you need to change your tests as well. Which of
these four is false?