The first reason, I think, is that the developers of the commercial software systems
aren't doing it, or at least not doing it enough. I think it's pretty clear that the kinds of bugs found,
for example, in the fuzzing papers or in Charlie Miller's talk wouldn't have been there in the software
if the developers of Adobe Acrobat or the Unix utilities had
had a reasonably aggressive random testing program, and remember that
some of those bugs Charlie Miller was talking about were security vulnerabilities.
These are things that they really don't want in their software, and so what I'd argue is this:
software development efforts that don't make proper use of random testing are flawed,
and the reason these efforts are flawed is that
modern software systems are so large and so complicated
that test cases produced by nonrandom processes are simply unlikely to find
the kind of bugs that are lurking in these software systems.
What that leaves us with is a question: what should they have done?
How is random testing supposed to work? Let me give some ideas about that.
What I'm going to do here is show a rough software development timeline,
with releasing the software over here and the early development stages over here.
What we've looked at mainly so far in this course is random unit testing.
We're developing these software units, and what we're trying to do is make sure that they're robust enough
that, when we start composing them together later, they'll be a solid foundation.
We looked at several examples of this: for a bounded queue, for instance,
we looked at fuzzing the interfaces that it provides.
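As a concrete illustration, here's a minimal sketch, in Python, of what random unit testing of a bounded queue's interface can look like. The `BoundedQueue` class and its `enqueue`/`dequeue` methods are hypothetical stand-ins for the queue discussed in the lecture, not the course's actual code; the tester drives the interface with random operations and checks the queue against a simple reference model:

```python
import random

class BoundedQueue:
    """A minimal bounded FIFO queue (hypothetical stand-in for the
    queue module discussed in the lecture)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []

    def enqueue(self, x):
        """Return True on success, False if the queue is full."""
        if len(self.items) >= self.capacity:
            return False
        self.items.append(x)
        return True

    def dequeue(self):
        """Return the oldest item, or None if the queue is empty."""
        if not self.items:
            return None
        return self.items.pop(0)

def fuzz_queue(trials=10000, capacity=8, seed=0):
    """Fuzz the queue's public interface with random operations,
    checking every result against a reference model (a plain list)."""
    rng = random.Random(seed)
    q = BoundedQueue(capacity)
    model = []
    for _ in range(trials):
        if rng.random() < 0.5:
            v = rng.randint(0, 99)
            ok = q.enqueue(v)
            # enqueue must succeed exactly when the model isn't full
            assert ok == (len(model) < capacity)
            if ok:
                model.append(v)
        else:
            got = q.dequeue()
            expected = model.pop(0) if model else None
            assert got == expected
    return True
```

The point is that the random tester only touches the interface the queue *provides*; a reference model this simple is often enough to catch real ordering and boundary bugs.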
We also looked at an example of random fault injection, and that was for the read_all function.
If you remember, that was the function that was supposed
to cope with the fact that the Unix read system call can exhibit partial success.
There we were doing fault injection by fuzzing the interface used
by the read_all call, not the interface that it provides.
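To make that distinction concrete, here's a minimal Python sketch of that kind of fault injection. The names `make_flaky_read` and this `read_all` are illustrative, not the course's actual code; the fake read randomly returns short reads, which is exactly the partial-success behavior read_all must cope with:

```python
import random

def make_flaky_read(data, rng):
    """Return a fake 'read' that exhibits partial success: each call
    returns a random-length prefix of the remaining bytes, as the
    Unix read() system call is permitted to do."""
    state = {"pos": 0}
    def flaky_read(nbytes):
        remaining = len(data) - state["pos"]
        if remaining == 0:
            return b""                               # end of file
        n = rng.randint(1, min(nbytes, remaining))   # partial success
        chunk = data[state["pos"]:state["pos"] + n]
        state["pos"] += n
        return chunk
    return flaky_read

def read_all(read_fn, nbytes):
    """Keep calling read_fn until nbytes have arrived or EOF is hit --
    the property a read_all function is supposed to guarantee."""
    parts = []
    got = 0
    while got < nbytes:
        chunk = read_fn(nbytes - got)
        if chunk == b"":
            break
        parts.append(chunk)
        got += len(chunk)
    return b"".join(parts)

# Fuzz the interface read_all *uses*: however the injected reads are
# chopped up, read_all must still reassemble the full byte string.
rng = random.Random(1)
data = bytes(rng.randrange(256) for _ in range(1000))
for _ in range(200):
    assert read_all(make_flaky_read(data, rng), len(data)) == data
```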
And so, like I said, what we want to do is ensure, as we're developing the modules,
that we're creating robust pieces of software whose interfaces we understand
and that are going to be a solid foundation for future work.
Then, as we start developing more elaborate software stacks,
it's going to be the case that some of our random testers become useless.
For example, if we have a queue instantiated here that is used by some
more sophisticated piece of software, we're no longer interested in the ability to randomly test
the interface provided by the queue, because it's simply being used by the rest of the software.
On the other hand, other kinds of random testers such as those that come in at the top level
and those that perform system-level fault injection are absolutely still useful.
In fact, fault injection of things like erroneous responses to system calls
is a really important way to test larger pieces of software,
because typically, those kinds of errors can result in failures propagating
all the way through our software stack and we'd really like to understand
how our system responds to that sort of thing.
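Here's a small sketch of that idea in Python. All the names are illustrative: a fault injector makes a low-level read sometimes fail the way a real system call can, and the random tester checks that the injected error never propagates uncaught out of the top of a toy three-layer stack:

```python
import errno
import random

def make_faulty_read(real_bytes, rng, fail_prob=0.3):
    """Simulate a low-level read that sometimes fails the way a real
    system call can, by raising OSError(EIO)."""
    def read():
        if rng.random() < fail_prob:
            raise OSError(errno.EIO, "injected I/O error")
        return real_bytes
    return read

def load_config(read):
    """Middle layer: parses 'key=value' lines from the low-level read."""
    text = read().decode("utf-8")
    return dict(line.split("=", 1) for line in text.splitlines() if "=" in line)

def start_app(read):
    """Top layer: must turn any failure injected below it into a clean
    fallback instead of letting it escape."""
    try:
        return load_config(read)
    except OSError:
        return {}   # degrade gracefully to defaults

# System-level check: whatever the fault injector does, the top of the
# stack must never crash, and must return one of the two sane results.
rng = random.Random(42)
read = make_faulty_read(b"mode=fast\nretries=3\n", rng)
for _ in range(1000):
    cfg = start_app(read)
    assert cfg == {} or cfg == {"mode": "fast", "retries": "3"}
```

Running the same test with the `except OSError` handler removed demonstrates exactly the failure propagation the lecture is warning about.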
Once the modules are part of something that's more of a complete product,
our focus should be on the external interfaces it provides, so this is going to be things like
file I/O and the graphical user interface. And if you recall those fuzzing papers,
they were fuzzing exactly these sorts of things.
There, they were delivering random bits to the file interface, and they were delivering
random GUI events to applications and knocking over
a pretty large proportion of the applications that they tested.
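A toy Python version of that style of fuzzing might look like this. The `parse_records` file format is hypothetical, invented for illustration; the fuzzer delivers random bytes to the parser's external interface and treats anything other than a clean rejection as a bug:

```python
import random

def parse_records(blob):
    """Toy file-format parser (hypothetical): each record is a 1-byte
    length followed by that many payload bytes. A robust parser should
    reject malformed input rather than crash."""
    records = []
    i = 0
    while i < len(blob):
        n = blob[i]
        payload = blob[i + 1:i + 1 + n]
        if len(payload) < n:
            raise ValueError("truncated record")
        records.append(payload)
        i += 1 + n
    return records

def fuzz_parser(trials=2000, seed=7):
    """Deliver random bytes to the parser's external (file) interface,
    in the spirit of the fuzzing papers: any exception other than the
    parser's own ValueError counts as a bug."""
    rng = random.Random(seed)
    for _ in range(trials):
        blob = bytes(rng.randrange(256) for _ in range(rng.randrange(64)))
        try:
            parse_records(blob)
        except ValueError:
            pass            # a well-formed rejection of bad input is fine
    return True
```

This is the same shape of test whether the external interface is a file format, a network protocol, or a GUI event stream: generate random inputs, and define "bug" as any crash or unhandled exception.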