WHITTAKER: So, I'm James Whittaker. I am an engineering director here at Google, not here
at Google but in Kirkland in Seattle. And engineering directors at Google, our responsibility
is to take credit for the work of people who are smarter than we are, and I excel at this.
And, generally, I don't even talk about those people who I take credit for their work because
they're never in the audience. But today one is and so I just want to, you know, kudos
to my friend, Joel Hynoski. Joel, wave to everybody. Besides being my bodyguard, Joel also works very hard to see that I get credit for all client testing at Google. That's Chrome, Chrome Operating System, you know, small things like that. And the only thing I do with my time is take credit for people who write cool tools. We call it engineering tools, and I have a couple of teams that do IDE work, dev work that developers consume. And I have another team that does ITE work, integrated test environments, which you're going to see
a little bit today. So, since this is the closing talk, I would like to say this has been a really good experience. The keynotes, Bob and Jeff, you guys were awesome. I thought it was really, really cool the way you had an academic and a consultant, whose major customer I think is Microsoft right now, come to this conference. It was a good change of pace. I think that
GTAC is special if--for a lot of reasons. But one reason, this is the only conference
I've ever been to in 25 years of going to conferences--okay, am I really that old? Okay,
let's say 15, let's call it 15--where people stay until the end. I mean, have you ever been to a conference where you're at the end of day two and there are actually more people here than were here yesterday? I think that's pretty amazing. So CJ, the committee, you guys have
done a great job and cheers to you all. And, I have to admit, I have to thank the host
country of India because I didn't actually jump with joy when I found out GTAC was going to be in India, I'm just going to be honest. I did not know what to expect on my first
visit to this country. And I can say now, truthfully, that I was not disappointed. And some cool things have happened these two
days. You know, we had a demand from our Turkish friend for building good tools. And it was the right time to ask that of us, because you've got Microsoft tool builders here, you've got Mozilla tool builders and you've got Google tool builders. So, if we don't listen
to you, then try again next year. So, I'm here to turn quality on its head because I've
sat here through all of these talks, politely, mind you, listening to all of this kind of testability love, and before I turn that on its head, I'd like to just talk about some common ground. I've got my--I was a professor for 10 years; I've got my chalkboard with me today. A little bit of common ground that we can all agree on: just by the fact that we're all here at a test conference, we are all implicitly agreeing, explicitly agreeing, that software just isn't
that good, there's bugs in software, all software. Coding is complicated. Coding is error prone.
No one gets it right, except Tracy and Russ. And I'm going to call you Tracy and Russ because
everybody else calls you Russ and Tracy and I just want to be different. And so, instead,
really we should just be trying to, you know, duplicate Tracy and Russ as opposed to testing
software because I submit this is an endless endeavor especially the way you guys want
to do it. You guys want to take something that we all admit is fundamentally flawed,
the process of writing software, really complicated, incredibly error prone because humans are
doing it. And, stick with me here, you want to solve it by writing more code. Interesting.
So then, we're going to have another GTAC--GTAC, the Google Test Automation Conference, for test automation--because the test automation we write is going to be buggy. This is an
infinite regress. There is a fundamental problem with what we're doing here with the theme
of this conference. But let's look at the facts first. Let's figure this out. Let's
have a contest between dev and test writing code and testing code and see what we end
up with. Now, at Google, testers are sorely outnumbered. Some teams, you know, 4, 5 to
1; some teams 10, 12 to 1; some teams, who knows, 15, 20, that all depends. Now how do
we get by with this because this sounds atrocious? I know there's a lot of Microsofties in the
audience and somebody had a quote--I think it was Jeff--from Bill Gates that said, you know, "We have a developer for every tester and a tester for every developer." How do
we get by with this at Google? We get by with this by having our developers do lots and
lots of testing. In fact, the general rule is an SET, a Software Engineer in Test, will write a framework. But the test cases for that framework would be written by the dev team. So, we have these huge dev teams, and we have these little bitty test teams, but we've got the
devs doing a lot of the upfront work, right? They're writing unit tests. They're doing
TDD. They're writing test cases for the automated frameworks. This is a lot of code that's going
in to test. I call this early cycle testing. On the other hand, we have test engineers
and some contractors who do late cycle testing, some scripting, some manual testing, but this
is when the product is more mature later in the cycle. So, I want to just kind of, let's
have a little contest. Right now, it's dev-test nil-nil and let's see where we're going to
invest. Now, I'm not going to say the winner gets all the money and we got to stop doing
the other, that's not what I'm doing here. This is Google; we've got some change in our couch cushions. If I'm going to make an additional investment in testing, where is it going to
be, early cycle or late cycle? That's what I want to talk about. So we all know that
you can't test quality in, right? You put lipstick on the pig, it's still a pig. Only
thing you're going to do is turn it into bacon. So, the whole idea of, you know, testing--test and test and test and test and test. Pick your worst product ever. Pick a product that you hate the most. Pick the product that you cuss at the most. More testing isn't going to help it, right? Refactoring, redevelopment, throwing it away and starting over, you know?
It's development that fixes problems. Test doesn't fix problems. So, if you want better
software, you got to write better code and this is a development task. One-nil, dev.
There is nothing we can do about this. Nothing we can do in late cycle that will really fundamentally
improve a broken product. We can find some bugs but the devs are going to have to fix
them. We can totally prove that it's correct, and it'll just be crap. One-nil, dev. Sorry,
test. Now, someone else produced a slide earlier today that showed how much bugs cost. They hardly cost anything if, as you're writing the bug, you realize it's a bug and you just fix it right there--no cost, right? There's no bug report.
There's no time lost in debugging. There's no rebuilding involved. It's free. And as
you go, I think the number that shocked everybody was like 500x, right? Five hundred times
the cost in late cycle testing. We've done some things to make that cheaper, right? We
were all looking for--it's funny, when we were doing the exercise, we were looking for
the bug on our little badge things. I was sitting next to Ted and Ted's looking at it
online. He's like, "There's no bug. There's no bug. This is perfect, right?" The bug had
been fixed online, but not on the printed version, right? We don't do much printing of software and distributing it that way anymore; we do it in the Cloud. So, the cost has come down. But
certainly there's more cost late cycle, nothing you can do about it. Two-nil, dev. Because
even in the late cycle, even if there's no additional cost, you can just fix it really
quickly in the datacenter and pump it out to all users for free, there's still a loss
of reputation. There is, you know, that bug has to be debugged, and it could be that the developer who wrote that bug wrote it weeks ago. And weeks in developer years is like, you know, a century. They've forgotten about it and they've got to get it back inside
their head. Lots of time, lots of money, two-nil, dev. Sorry, test, can't help you. This is
the thing that really, really makes me mad at testers. They create throw-away stuff. They do, all the time. You write a test plan--how many of you are right now working on a test plan that is an exact representation of what you're doing and an exact representation of the product that's being built? That's
what I thought. I walk around the halls of Google and I see dead test plans. Remember
this little boy from The Sixth Sense, right? I see dead people. I see dead test plans.
They're all dead. It's like a graveyard at Google. You write a test plan, why do you
write a test plan? Because you're told to write a test plan. You're not doing it to
solve any real fundamental issue. Are you really planning test strategy in that test
plan? Are you really designing all your automation and figuring out all your manual test cases,
really? It looks to me like a Google Docs document, or a Word document for you, Sam. It looks
to me like a document that describes kind of some touchy feely stuff about, "Our products
got some stuff in it and we're going to do some stuff to it and here's a big long list
of it," and you write these things hoping that they'll fool people. Hoping that you
can write it, and give it, and show it around to people, and the people say, "Yeah, all
right, you wrote the test plan. Now go off and start testing." And then you go off and
start testing, what happens? Do you use a test plan? Do I walk around and see testers
with two screens? We all have two screens at Google. Some of us have four, right? Two
screens, where the test plan's on one and the app they're testing is on the other and it's guiding you every step? Only in my wildest dreams. I have these dreams and I wake up in a cold
sweat. I'm like, "Oh, that was great. If I could only go back to sleep and retrieve it."
People writing usable test plans and using them. Sorry, three-nil, dev. Sorry, test, you're down. Three-nil. It's half time and you're getting your *** kicked. "Oh, my god, I was
starting to agree with you people now. Yes, let's write a bunch of test code upfront,
that's the way to do it." Is there any hope for test--the creating of dead documents, the finding of bugs long after the code has left the dev's head? Too expensive. What are
we going to do with these people? Fire them all. Now, why do I even have hope for late
cycle testing? What is it that makes me think that this is something that we should do at
all, that we should invest in at all? Because I have dev directors who I partner with, right?
I'm a test director for Chrome Operating System, there's a dev director for Chrome Operating
System. If I walked up to him and said, "Linus, you've got two choices: we can delete all of our automation and stop writing automated test cases, or I can
fire all my manual testers? You pick." I know in my heart of hearts that he would fire the
automation. I know it because he's told me he would because I asked him. I asked them
all. They cherish manual testing, the devs. Now, they're not proud about this, right?
And Linus is going to throw me under the bus with--we'll watch this on YouTube. They're
not proud about it. When they ask for more manual testers, they whisper. Don't they, Joel? Right? They go, "Joel, can you get us ten more manual testers," right? And Joel is like, "Hey, James, guess what Linus wants?" And then, of course, we like to broadcast this out. Joel sends out emails, you know, dev-wide: "Hey, man, we're looking for
10 more manual testers for Linus." They don't like to admit it but they know in their heart
of hearts, there is some very valuable stuff going on there, right? There are recall-class bugs being found by people sitting down using the stuff, people with minds, people with
fingers and people with eyes. So there is something there, what is it? So this is the
way I think about it. I think about software as a forest. I like to hike. I'd go out in
the woods and I think about ideas to write and all my books have started in the woods.
Now, in these woods, the developers grow trees, right. They have a slice of the product and
there's nobody on the team that understands that slice of the product better than the
dev, right? All the way from the UI, all the way down to the bowels of the system. Nobody
who understands it better than the dev, and there's no question that the person most qualified
to test that code is the dev. They understand it better than anybody. Why would you hire
a tester to test the code that the dev just wrote? Stupid. Devs grow trees. Now, if I
want to make an investment in early cycle testing maybe I could get the tester to help
water those trees. But this doesn't make any sense. Testers don't want to water trees.
Testers want to chop them down, right? This is why. I mean, we don't hire testers because
they can't write code. Our testers can write code. We hire testers because they want to
break stuff. They want to write code that breaks stuff. They want to use their fingers
to break stuff. This is their passion. We don't want them to help water the trees. What
I need them to do is scenarios like these. And I'm going to score a goal for my tester
here because this is where a developer's ability to test completely falls down. They're great
at watering their own tree and caring for it, but when you start to put other trees
up next to it that they have to interface with, all of a sudden, there's a lot of integration
scenarios that the developer doesn't see. That's code they don't understand over there
written by that other dev. And you don't want the other dev to test it either. In fact,
I don't even think you want the two devs together testing it. They're just going to confuse
each other because they don't want to do that. We don't hire devs because they have a passion
for tests. We hire devs because they want to sit at their desks and write code. Bless
their hearts, I'm glad somebody does. This is what we need testers for. The testers are
going to see these scenarios. They're going to be bringing independent mind, independent
thought and they are going to find the bugs that neither of those two devs will find, and then you have 3 and 4 and 5 and 6 and 20 and 115 and it gets more and more complicated and you
need someone who can see the forest for the trees. Three-one, test--three-one, devs, sorry.
Now, the devs often build these big beautiful trees. They love their trees. They stand back to admire their trees. "This is a beautiful tree that I just built." You built it right
in the middle of
a user path, right? You built it upside down. You built something that the user can't use.
You built something that's too hard to use. You built something that's full of bugs and
you can't see them because you're just admiring the foliage. Devs often build the wrong code.
They often put it in the wrong place. They often build code that ends up getting integrated
and you find out that there's really not a great user path through it. Devs don't think
like users. They think like devs, that's why we call them devs. We need somebody to think
like the user. We need somebody that has to look at these forests and say, "There's no
path that I like. There's a path here that goes around in a circle. There's a path here
that's supposed to go through this boulder. Really? And then there's a path here where
I just fall off the end of the Earth." The world is flat in your forest. So, three-two, test. I'm going to score a goal for test for the ability to think like the user. So
now I'm beginning to understand this demand from my dev directors for manual testers. Could
we have an equalizer? Of course, we're going to have an equalizer. What is this? This is
a forest. This is a product that's complete. Is it bug-free? No. It's not bug-free--it's not bug-free for a number of reasons. It's not bug-free because we've got fallible human
beings building it that are guaranteed to make errors. So there will be latent bugs
and there are going to be bugs in the test code that is trying to find those bugs. So
there's going to be bugs for that reason too and there's also going to be bugs because
it's now a complete forest and complete forests get used in different ways. Just ask Steve Jobs. The iPhone 4 worked great in the lab, and then you get it in its case and in the hands of a left-handed, fat-thumbed user and it breaks, right? There are some bugs that simply don't
appear until software is built. And let me show you some just in case you don't believe
me. I heard Alberto wanted to go to England so I was planning a route for him. This is,
by the way, a walking route. Was the user Jesus? I don't know. But you're going to need
his help to navigate it. This is a walking route from Cambridge to Hull. Now, find this
with an automated test. Find this with a unit test. You can't. Not until you get the data
on the map and the software installed and the user selecting a very specific route can
you find this bug. You've got to have the forest. Three-three, test. That's right. We have spot A and spot B. Start, finish; 150 yards apart, and Google Maps tells you this is the shortest
path. Find this with a unit test. Test the CLIs. Was it "test the CLIs"? Test the CLIs.
Test the CLIs. I'm only going to pronounce test the CLIs that way because if I change
it a little, it might be good. Test--what was it, test the virus? Test the virus, okay.
Find it, right? See, there are some bugs we want to find in the code while the software is soft and malleable. The problem is there are certain bugs that
aren't there until it's hard and brittle. And we've got to be able to find these bugs
these are the bugs that the user will see. This is why my dev directors want manual testing.
This is why dev directors want smart human beings doing scripting and manual testing
and other late cycle test engineering. Go on automation, find the bug. See, some of
them are even hard for manual testers to find. Do you see it? Do you see it? Do you see where
it says Arlington National Cemetery? Look just above Arlington National Cemetery, it's
a restaurant. It--what are they serving? Catch of the day: cadaver? I don't know, but it's
not nice to do this. You can't find these with automation. This is a late cycle test
engineer problem. And so is this. Do you see it? Let me blow this one up. Copenhagen International
Airport is a bridle way. That's a little horse. What are they doing? I mean, I'm assuming
Denmark is fairly developed, decent infrastructure. What, have they got horses pulling their 747s down the--pulling them down the runway? Find this early cycle; you can't. Sometimes it's
the test code itself that really makes the show. This one says, "An error has occurred
while displaying the error that has occurred while creating an error report." This is a
bug in test code at Microsoft. This was a Microsoft bug. It was found while I was an
employee of Microsoft. What did we do when we found this bug? We got t-shirts printed
with it on there. So, I've got to give test a tie here; it's a draw. So, what are we going
to do about this? It's three-three and I really can't give another goal to either dev or test,
I've run out and we're in extra time. We're in penalty kicks and everybody is making their
penalties. Somebody's got to miss. What's going to give? So, let's think about it a
little bit differently then. If it's a draw, let's really put some thought into where we
spend our money. What could we do to just fundamentally make early cycle test engineering better? I don't know. This is really a hard one. We've been talking about it for two days,
haven't we? And there are some ideas, they're hard ideas. Everything we've talked about
is hard. You know, we talked about it. The very first talk was about writing test code
and there was a concern from the audience that you've increased the attack surface of the application, right? So, you know, for every solution there seems to be another problem every time we look really hard at early cycle test engineering. And there's another problem
here. How many people are really good at this, Tracy and Russ? I can name, actually, I can
name off the top of my head, I can name at least 50 Googlers who are really good at this.
Many of them work for me. And Joel, you and I are thinking of the same names, aren't we?
Now, the problem with these really ultra-talented people, like we've got one on Chrome OS, man,
he is an absolute Jedi early cycle test engineer. If I ask him to go work on maps he'd be, "No,
no, no, no, no, dude. This is me, right? I live in the kernel." These early cycle testing
engineering skills aren't particularly transferable. They can't really be moved easily from one
project to another so I'm really hesitant to place an investment there. And then the
skills are very dev-like, very dev-like. In the last six months, two of my SETs, software
engineers in test, have converted to SWEs, software engineers, right, from test developers to feature developers. I have six more, out of whatever I have, 80-90
people in my tree, who are going through the process now. So it's the very skills that
we need like for early cycle test engineering lead people into feature development. And
that's not a bad thing. In fact, I think it's a good thing. I encourage it. But right now,
"Man, this is being recorded, this is so hard." I'm going to say it anyhow, open kimono. Right
now, the best dev in my org is a SWE that was a tester before. He was a tester for 12
years. He's been a SWE for three. Best dev in the org. All of the SWEs say the same thing,
right? He writes code a little slower than everybody else. But people, he just breezes
through the code review process. Hardly any regressions, never breaks the build, his stuff
just works. So this is something we want to encourage. I love test skills moving to dev,
far more than I love dev skills moving to test. I'm always hesitant when a SWE wants
to become a SET. You've got to really want to do it. You really got to tell me you have
a passion for breaking things. So I'm worried about this. Not only are these people that
can do this very rare, they tend to move into dev. Yeah, I'm not so sure about an investment
there so let's think about the other half. What's the other half? Well, hire more manual
testers. Get 20,000 people from Mozilla's and uTest's crowds to help us do this. I mean,
I do like that. I like the fact that we've got a solution for this numbers game now with
crowd sourcing. This makes me quite happy. I'm going to talk about it in a few minutes.
But this isn't enough, something technological has to happen. Manual testers are expensive,
and they're really not used through the entire process. Until that code gets sort of hard
and brittle there's just a lot of prep work for them to do. There's not a lot of value to
be added. We really need to be flexible with how we use these, move these resources, these
late cycle test engineering resources, around our various products. So this is a problem.
How do we solve this problem? If I had a solution for that, if I could take a tester and make
them sort of hyper-productive somehow, right--that little helmet that Anakin Skywalker wears in The Phantom Menace, you know, driving little pods around. If I could get one of those helmets and just put it on, and it had, you know, like heads-up displays inside, and it could turn them into a Jedi, that would be cool. But if I don't have that tool I'm not sure
I can place an investment in late cycle testing either. So, what do we do? We either give
up, stop having GTAC and all these other silly nonsense conferences, and just realize we're
going to be producing crappy software for decades. Users are already used to it anyhow,
right? I mean, really, are you surprised when something breaks? No. I expect it to. I'm
thinking, "Wow, I've been using this software for five minutes. Five whole minutes, this
is great." Well, I am going to make an investment. I am going to put that helmet on that tester
and I'm going to see if this works. Data's still coming in, but I want to show you how I'm making
my investment. How I'm spending Google's hard earned dollars on this problem. I'm going
to put the helmet on my tester and I'm going to turn my tester into a bunch of testers.
I'm going to turn a single good tester into a really hyper-productive good tester. That's
what I'm going to do. Notice I'm not promising to turn the mediocre tester into a good tester.
I'm going to take a good tester, which I believe is a plentiful resource, and turn them into
a hyper-productive tester. And here's how I'm going to do it. First of all, I'm not
going to ask them to do stuff that's throw-away work. No more dead test plans. No more dead--in
fact, I'm not even going to write test plans. Test plans *** me off. Instead, I'm going
to do test planning. But I'm not going to write a test plan. I'm not going to write
test cases. I'm going to do testing. In fact, I'm going to try my best to write the minimal
amount of artifacts I can. In fact, what I'd really like to do is just start in the middle.
Start testing and have the artifacts be documented while I go, I'd love to do that. But that's
version two. Version one requires some upfront work and I'm going to show you what that upfront
work is. Now, for this to really be effective, I need to solve the problem for all late cycle
test engineering not just manual testing or a person sitting at their desk. So, I've got
tester and what I'd like to be able to do is I'd like to assign a number to that tester,
right? We completed X number of testing, you know? We took it from 100% undone down to
90% undone. And then I'd like to give it to the crowd and have them not just start testing
but continue testing to be able to pick up where the internal testers left off so that
when you say, "All right, how much of this problem have we solved?" You know, the number
goes down. Now we're down to 60% left to do. And then, give it to the dogfooders to continue the testing, not just to start it all over again. I mean, you can't. Come on, you guys have all run dogfood programs, what happens? Every single bug you've got in your bug database,
that's what you spend the first two weeks re-finding, just a bunch of bug duplicates
and a lot of noise and a bunch of developers taken offline to have to do this stuff. And
then finally it's a beta. And the beta should, again, be able to do the continuation. Wouldn't
it be nice if all this test infrastructure is attached to the beta and we can still continue
to monitor coverage and we can still continue to monitor new paths that have been hit, new
test cases that have been "written," all right? So this is where I want to place my investment.
And if it pays off, I got something. If it doesn't, then I'm going to back to my three-three
draw and think it through again. Are you with me? All right, so let's see what we're going
to do about this. We need a way to think about software. If we're not going to do or write
a test plan, we need a way to do test planning. And so here's our process, we have a three-step
process. Adjectives and adverbs are first followed by nouns followed by verbs. We're
going to describe our system in those three terms. We're going to discover all the ways
that you can talk about our system with an adjective or an adverb and then we're going
to call those things attributes. Just like a human body has attributes. A human body
is responsive, alive, intelligent. Now let me just give you a little peek ahead of what
I'm trying to do. I want to be able to attach tests that test for these attributes just
like you would with a human being. We have tests for a responsive human being. Is the patient
responsive? Well, let's see, right? We do the little reflex test here. We've got the
little light test on the pupils to see if they're dilating properly. We can take a pulse.
We have tests to see if those attributes are there. So attributes are first. Second are
the nouns. The parts of the body if you will; the hand, the brain and the arm, the things
that make the attributes come alive. What are the attributes of software? Well, they're the components--I'm sorry, the nouns of software are the components. So, now we've got attributes
that describe our system, components that define our system and we're left with the
verbs, the actions. What does our software do? We call these capabilities. So we end
up with ACC, Attribute, Component, Capabilities. We should be able to test that capabilities
are correct and that the collection of capabilities that make up a component are correct and that
they describe the attribute.
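To make that concrete, here is a minimal sketch of what an ACC breakdown might look like as plain data. It's only an illustration, assuming a simple TypeScript model; the names, risk numbers and test IDs are invented, not the real Chrome or Chrome OS breakdown.

```typescript
// Hypothetical ACC model: attributes are the adjectives/adverbs, components are
// the nouns, and capabilities are the verbs that tie the two together.
interface Capability {
  description: string;   // what the software does, e.g. "play Flash content"
  attribute: string;     // the adjective/adverb it supports, e.g. "Fast"
  component: string;     // the noun that owns it, e.g. "Renderer"
  risk: number;          // 1 (low) to 10 (high), guessed first and refined later
  testCaseIds: string[]; // tests that exercise this capability
}

// Illustrative entries only; a real product would have hundreds of these.
const capabilities: Capability[] = [
  { description: "Render a page in under a second", attribute: "Fast",
    component: "Renderer", risk: 9, testCaseIds: ["perf-001"] },
  { description: "Lock the screen after inactivity", attribute: "Secure",
    component: "Session Manager", risk: 10, testCaseIds: [] },
  { description: "Play Flash content", attribute: "Compatible",
    component: "Plugins", risk: 6, testCaseIds: ["flash-click", "flash-fullscreen"] },
];

// "Show me the test cases behind attribute X" becomes a one-line query.
const fastTests = capabilities
  .filter(c => c.attribute === "Fast")
  .flatMap(c => c.testCaseIds);
```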
Wouldn't it be horrible if you had something like Chrome with an attribute of fast, all right? When you think of Chrome, you think of speed. Wouldn't it be horrible if you went into Joel Hynoski's test team and asked the testers, "All right, Chrome is supposed
to be fast, right? Show me the test cases that indicate that that's either true or false."
That should happen. We should be able to do it. So let's show you how we do this. There's
a product that we have--the codename is Testify. I'm still--I don't do live demos anymore because I've been burned by them too much, so these are screen snaps I scraped before we changed the thing. It turned out Testify is trademarked by some legal organization.
I mean, this is definitely not a trademark you want to break. It's going to be called Google Test Analytics. Step number one, tell the world about your product. This is
Google, every product needs to be--if I can look at the source code of any of my other teams, I should be able to look at the test cases of any of my other teams. So the first thing to do is define your project, and it's world-readable by default; everybody can look at your test cases. So let's say we have two related products--you know, apps have lots of related web applications. One web app could actually copy the test cases of another web
app--wouldn't that be nice? The second step is the attributes. This is a small sample of the attributes for the Chrome Operating System. It has about 14 total and they're listed on another screen. And we came up with these on our own. We figured this out, right? How
did we figure out the attributes? The best way to figure out the attributes is to watch
a dev manager or a test manager or a sales person demo the product and write down all
the adverbs that they say. This takes minutes. Now for each attribute we attach components.
Which components make it fast? Which components make it secure? And we write all of these
down. Now some of these are actually automatically available. So we have, like, bug databases and other databases existing within Google that we can just pull these from and auto-populate this. We're not doing that auto-population yet, but we want to do
this manually to see how painful it is. But it is possible. And for Chrome OS, there's,
I don't know, 20 or so of these. And the third step is to add capabilities, right? What does the
software do? Write them all down. Now, when I first started telling people I want to do
this--I want to write down all the capabilities of your application, every single one of them--they were like, "There are too many, we can't do this. It's going to take too long." And so, I'm in charge, and so they did it anyhow. What a wonderful thing that is. And we ended up with 304 for the Chrome Operating System because, you know, a capability like "play Flash content" is pretty broad. So, I'm not asking you to boil the ocean here; I'm asking you to come up with statements about what this software does, and then we're going to test them all--as many of them as we can, in as many different ways as we can--and we're going to know how many test cases cover "play Flash content" and which component that's in and which attribute it describes. And we created all those. When we went to the dev team and said there are 304, they were aghast. So they reviewed it with us and it turns out we were wrong; there were actually about 314, I think, and then they believed us. Now, we end up with the numbers
here and this is just a small section of it while it was still--I've scraped all of these
out while it was still in development. The next thing we do is we prioritize this. So,
we've got 314 capabilities, which ones are more important? Now coming up with risk numbers
is really not scientific. How do you do it? What are the most risky parts? Well certainly,
any capability that kept it from performing one of its key attributes, that would be really
high risk. So if you have a capability that has to be fast, that's got to have the high
risk number. If you have a capability that has to be secure, right, like, you know, Chrome
Operating System, the capability would be that we time out after inactivity, right?
If that's broken, that's really high risk. So we just guessed. What is this, Oz's rule from Usenix? If you don't know the answer, take a guess to a bunch of experts and they'll tell you you're wrong. But if you go to them and ask them for the answer, they'll say, "I don't know." So we guess. We come up with a bunch of risk numbers and then we take them
to the devs and say, "Hey, this is the order in which we're going to test your stuff."
And they get real interested all of a sudden, because devs have a different view of risk
than we do. If you ask a single dev, they're going to tell you, "The stuff I work on is the riskiest part, please test it the most." If you ask a sales guy what's the riskiest part, they're going to tell you, "The part that I have to demo to customers is the riskiest
part, please test that the most." So we don't know how to do this. So we guess as testers
and that guess forces the hands of all of our partners. And we put in numbers and we
average them and we get something. Something hopefully is better than nothing. And we won't
know if we're wrong for many, many months. We've got to do this project after project
after project after project. We need to do this so eventually we'll understand, "Okay, we
were wrong and we were wrong this way and we can fix this." So maybe in March, if I
gave another talk, I will have some data about that. So those are the risks. And I just color-coded them red, yellow, green.
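Since the risk numbers are guesses, the only machinery behind them is an average and a couple of buckets. A minimal sketch, assuming hypothetical stakeholders and thresholds:

```typescript
// Each stakeholder (test, dev, sales) guesses a risk score from 1 to 10.
type StakeholderGuesses = Record<string, number>;

function averageRisk(guesses: StakeholderGuesses): number {
  const values = Object.values(guesses);
  const mean = values.reduce((sum, v) => sum + v, 0) / values.length;
  return Math.round(mean * 10) / 10;
}

// Buckets for the red/yellow/green color coding; the cutoffs are arbitrary.
function riskColor(risk: number): "red" | "yellow" | "green" {
  if (risk >= 7) return "red";
  if (risk >= 4) return "yellow";
  return "green";
}

// The tester's guess forces everyone else to weigh in with their own number.
const timeoutAfterInactivity = averageRisk({ test: 9, dev: 7, sales: 10 }); // 8.7
riskColor(timeoutAfterInactivity); // "red"
```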
So here's how it all stacks up. This is a visualization
that the tool provides. This is basically for all of the--and this is--I think this is all the attributes that we have, or all the--no, no, no, this is a subset of the components.
This will show you the--basically, the test volume of each of those components. This is
how much testing has to be done--the number of capabilities that need to be tested for a component. So then you can do resource planning. You'll figure out, okay, this is going to need more testers than that, and you can do some preparation. This is the part
of the application that requires testing and so mountains require more testing than the
valleys. If you don't like to look at it that way, you can flip it upside down and you can
think of pouring test cases into these holes until they're full. Or you can look at it
this way as a spiral where the highest risk is the center of the graph, lowest risk is
the outside of the graph, and you can see which of these circles you have, how many
test cases does it take to push one of these circles from the inside to the outside. If you want to know what one of the circles is, you can click on it and then it will tell you
which capabilities, how many test cases are associated with it, et cetera. You can also
pivot on a smaller number of points, if you're only interested in a few of the attributes or a few of the components. And those arrows indicate the direction that you want to go.
So these are time-lapse arrows. This is where we started. We did some testing and this is where we are now. Our risk is going down. Are you with me? Okay, so this is the setup.
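The mountains-and-valleys picture is just a rollup of that same data: how many capabilities, weighted by risk, each component carries. A rough sketch, reusing the hypothetical Capability shape from the earlier example:

```typescript
// Roll capabilities up by component; bigger totals are the mountains that need
// more testers, smaller totals are the valleys.
function testVolumeByComponent(caps: Capability[]): Map<string, number> {
  const volume = new Map<string, number>();
  for (const cap of caps) {
    volume.set(cap.component, (volume.get(cap.component) ?? 0) + cap.risk);
  }
  return volume;
}

// Sort descending to decide where the late cycle testing effort goes first.
const ranked = [...testVolumeByComponent(capabilities).entries()]
  .sort((a, b) => b[1] - a[1]);
```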
This is what we want to do. This is what I'm investing in. I think this is a good idea.
I end up with an application called Google Test Analytics that contains all of my test planning information. There are no test plans in there. But everything I need to test is
described in this tool and I can maintain it. See, what we need to do, devs have a huge
advantage, they have no choice. They cannot have dead code. The code is always alive.
The code is always up-to-date. In test, we have this ability to just let our stuff die,
because really the code is the only thing we ship. I'd like us to maintain this. I'd
like to make it as easy as possible. I'd like to automate as much of it as possible. I'd
like to drag a lot of this stuff in from the environment if I can. We're working on it.
But this is compelling to me and it's working for me. Now, how do I go about testing? The
thing--once I figured this out and we started doing it and it started working on client
and YouTube and some of the other places within Google that are using it, it became clear
that we still weren't really helping testers be better. We were just helping them be more
organized. So that's the second part of this. How do we help testers be better and do a
better job of testing? How do we take that good tester and amplify their effort and amplify
their productivity. So I take an inspiration from gaming here because I think the gaming
world has it right. Think about what a tester has to do. The tester deals with a lot of information. The tester has to consume it all to do their testing. There's a bug database, there's a source database, there's a code review database that testers check all
the time to see if these things are actually worth testing. There's test automation, there's
test case databases, right, test case management system. There's all of this information that
a tester has to run around and figure out, you know, how much time the--I started this--I
actually started asking every single tester in my org, "How much time do you actually
spend fingers on the keyboard testing, or fingers on the keyboard writing the scripts that will do the testing; writing test cases, whether they're automated, scripted or manual. How much time
do you actually spend doing that?" After I got the fifth person, I was so depressed that
I decided I was going to cut my survey short. My boss likes data like that. Data like that
depresses me. It was clear to me that testers aren't spending enough time actually testing
software. They're spending a lot of time doing all this other stuff. Just stopping testing,
turning around, going through and finding your bug report, writing the bug report, making
sure the bug report's really good, really accurate--what version am I running, what operating system do I have, what version of the browser. This takes time, and chances are you've just produced
the bug report that someone else has another bug report for because we've got 18 duplicates
in our bug database for this. Not only has this taken a lot of time, it's all been a
waste of time. So what can we do for that? Gamers got it all figured out for me. I didn't
even have to invent this one. What do gamers do? Do gamers spend a lot of time trying to
figure stuff out? Do I have to read the game user manual to figure it out? No. All that
stuff is displayed for them. It's called the heads-up display. Does a gamer--you know,
you're in Halo when you're going through, do you have to figure out what weapons you
have in storage? If you're in World of Warcraft and you're a wizard, do you have to figure out,
"Oh, let's see, do I have fireball or was that magic missile, I don't remember." Pause
the game and go check. No. It's all there on your heads-up display; every spell, every
weapon, every capability. There's a mini map in the upper right hand corner to tell you
where you are. There's all of this help. You can hover over an enemy to see what their
health and power is so you know whether to attack them or not. That's what I want. I
want you to spend--the gamers spend all their time gaming, and the testers are running around looking at all this other stuff. So, now I want to take the bug database and I want to surface it on the screen in a heads-up display. I want to take the source code and I want to be able to surface it anytime you need it. I want to take every single piece of information that takes you away from testing your application and I want to overlay it, just like this heads-up display. That's what I want. If I could do that, I think we would be very impactful.
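As a sketch of what surfacing all of that could look like, the HUD is essentially one aggregator that pulls from the places a tester normally has to go hunting and keeps them on screen. Everything below is hypothetical--the endpoints and field names are invented, not real Google APIs:

```typescript
// Hypothetical heads-up-display model: one background fetch per data source,
// so the tester never has to leave the app under test to go look things up.
interface HudModel {
  openBugs: { id: string; title: string; component: string; status: string }[];
  recentChanges: { changeId: string; component: string; author: string }[];
}

async function loadHud(component: string): Promise<HudModel> {
  // Placeholder URLs for a bug database and a code review history.
  const [openBugs, recentChanges] = await Promise.all([
    fetch(`/api/bugs?component=${encodeURIComponent(component)}&state=open`)
      .then(r => r.json()),
    fetch(`/api/changes?component=${encodeURIComponent(component)}&days=7`)
      .then(r => r.json()),
  ]);
  return { openBugs, recentChanges };
}
```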
Now, there's one thing that gamers do, there's one thing that they have to look at, there's one thing
that stops the gamer in their tracks. And I know this because I have a 13-year-old son.
Xbox is upstairs, he'll come whipping down the stairs, flick on the Mac and start looking
for game cheats. He's like, "I've got to figure out how to play this level," right? And there's
two things he's looking for, he goes to YouTube to look for videos of other people who've
beat the level and he could just watch them. He's like, "Oh, that's how you beat that hobgoblin,"
right? He whips back upstairs. He's like, "Don't turn off the Mac." "Yes, son." And
then he's up there and then, I don't--I figured, you know, I could tell him that it's actually
all internet enabled up there. He could--but I just kind of like that he'd get the exercise.
So, now, you don't have to do that. You got the cheats. I hope you can see this well,
I've got another one. But the upper left-hand corner is a look ahead radar. It's a cheat,
right? And it's in the same vein as everything else. It's pulling information, except this
time from the game illegally, because all that information is cached on your Xbox. So
it pulls it out, right? See the little red squares? Those are bad guys waiting to be
spawned as soon as you go in those rooms, because it's all cached right on your--in your memory
waiting, right? And so you can do some really cool things. The blue are shields around people
on my team so I don't actually shoot them. The red are bad guys. This is a hack we did
while I was a Microsoft employee. It was live on Xbox for a while. We were trying to figure out how people were cheating us. So what we did, instead of trying to figure that out, was we just wrote better cheats than them, and then figured out how to stop those, right? So why debug their
cheats when it was really fun to write our own. And my son was a tester on this. The
day I came home and said, "I've left Microsoft. I'm going to Google." The poor little lad
cried, right? He said, "What about my Xbox cheats?" It's a simple world when you're that
young. All right, so this is--can I get this too? Can I cheat? Can I make myself more powerful
than just a regular user by surfacing information from the application itself? Cheats are all over the place, right? This is such a useful, common thing that it's implemented in almost every game you get, right--heads-up displays and cheats. So maybe we can learn
something from these video game guys. Maybe there's something here, there's something
to this. All right, are you with me? I mean is this worth investing in? Is it worth pursuing?
Or should we just stop right now and call it a draw? Proceed. Okay, thank you. So let's
proceed. Whoa. So here it is. It's not quite as sexy as the video game. But I'm testing
Chrome Extension's gallery and that's my app. The heads-up display is in the upper right-hand
corner. And you'll see it's got journal, bugs, flux, which I'll talk about in a minute, and
a map. The map will crawl the code and build a map of your application for you. It knows
about the components, maps those components to the components in the ACC. All is well.
The journal is your log, right? It's your mission log. You're going to do some testing
and we're going to record it. This is another thing that really--maybe I should have taken a goal away from the testers, because this pisses me off too. We just got through a review
cycle, right? Arkin and I were sitting around having a beer last night and complaining about
this very scenario, right? "How do you judge a tester?" This is what we're trying to figure
out over beers. And he's like, you know, "What do I have? I don't know a lot of these people.
I don't know their work. All I can do is sort of look at what code they wrote and look at
what bugs they found." And it's very unsatisfying to judge somebody on that. This person may
have written, you know, a thousand really good test cases that validated really complex
scenarios and showed that there are no bugs in them. Really valuable work. But all we can give credit for is the bugs. I hate this. So I want to be able to take credit for those
thousand really good scenarios so I'm going to record everything. And I'll show you a
little bit of what we record later so that not only can we reuse these test cases, we
can take credit for them as well. So what I'm going to do here, you see in the heads-up
display, hopefully it's big enough. I'll just interpret for you. It says you're going to
perform a bad neighborhood tour. So this is part of the cheating. Just like my son goes down to
YouTube and watches someone else better than him try to finish a level and then learn from
that and go upstairs to the Xbox and do it himself, we've done the same thing. We've
taken test wisdom. We call these things tours. It's in the fourth book, Exploratory Software Testing. Damn! I just proved Alberto right--I'm here to sell books. It's wisdom that we've learned over the ages, right? So the bad neighborhood tour is the piece of wisdom that says the bugs tend to hang out together. If you found one bug in a component, there's probably another one, right? We've known this for decades. We don't know why, but it's very wise to test there. And so we do.
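That piece of wisdom is easy enough to mechanize: rank the components by their open and historical bug counts and send the tester to the worst neighborhoods first. A sketch with invented numbers and an arbitrary weighting:

```typescript
// Hypothetical bad-neighborhood ranking: components with the most open and
// historical bugs float to the top of the tour.
interface ComponentBugHistory { component: string; openBugs: number; totalBugs: number; }

function badNeighborhoods(history: ComponentBugHistory[], top = 3): string[] {
  return [...history]
    .sort((a, b) => (b.openBugs * 2 + b.totalBugs) - (a.openBugs * 2 + a.totalBugs))
    .slice(0, top)
    .map(h => h.component);
}

badNeighborhoods([
  { component: "Extensions", openBugs: 3, totalBugs: 41 },
  { component: "Most Popular", openBugs: 1, totalBugs: 12 },
  { component: "Most Recent Results", openBugs: 0, totalBugs: 5 },
]); // => ["Extensions", "Most Popular", "Most Recent Results"]
```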
So a bad neighborhood tour goes through and says, "Okay, we've got all these components: extensions, most recent results, most popular, et cetera. And there's
three of them that are marked red, meaning they have had the most bugs and they have
outstanding bugs right now." And so all you do is click on extensions and you're taken to the extensions. You don't have to navigate there. The tour will take you there. It has already figured out the structure of this application. And now, you click that and it takes you there
and you see that there are three bugs here; two red bugs and one yellow bug. You also
see those bugs are overlaid. They have a little, a little cellophane overlay to them. The
red bugs are bugs that have been filed but the dev hasn't fixed them yet. How do we know
that? Because I told you, we just drew all this information out of the bug database.
We are totally aware, completely aware of every bug against this app in the bug database.
And we show you where they are. So now if you look at one of those red ones, you think,
"Well, there's already been a bug file there." Certainly, you're not going to file that bug
again because you can hover over that and you can get information about that bug. So
if you have found another bug there, you can make sure it's distinct. The yellow
one, you might want to go ahead and validate that. That means it's been filed, it's been
fixed, a fix has been checked in. It's in the build, ready to test, and there's no test
cases that have run through that area yet. And so you could do that, or you could file another bug--some stuff we stole from Feedback.
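A rough sketch of how that overlay could behave as a content script: color a control by the state of its known bugs and put the details on hover, so nobody files a duplicate. The selectors, fields and colors are invented for illustration:

```typescript
// Red = filed but not yet fixed; yellow/gold = fix checked in, waiting for a
// tester to verify it. Hovering shows the bug so you don't file it again.
type BugState = "open" | "fixed-unverified";

interface OverlayBug { id: string; title: string; selector: string; state: BugState; }

function overlayBugs(bugs: OverlayBug[]): void {
  for (const bug of bugs) {
    const el = document.querySelector<HTMLElement>(bug.selector);
    if (!el) continue; // the UI may have changed since the bug was filed
    el.style.outline = `3px solid ${bug.state === "open" ? "red" : "gold"}`;
    el.title = `${bug.id}: ${bug.title} (${bug.state})`; // hover tooltip
  }
}
```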
So you found a bug there. You've got this
little cross hairs and you identify it, you click on it and say, "Okay, I want to file
a bug." Now the tool pulls up the information about that control to help you identify it.
"This is the control. Is that the bug you want to file a bug against this control?"
"Yes, I do." Okay. Now, you can do two things here, you can type in information about the
bug or you can let the tool do it. And the tool knows where it's been and knows
how it got there. It's recorded everything you do; it'll be happy to put in there the recording that we've been doing in the background. "Do you want to attach that? Yes or no." Screenshots
of everywhere you've been, from the time you started the application to the time you found the bug, so you can actually replay it. The developer can just click the link in the
bug report and have it replayed for them automatically. And then a DOM dump. It turns out that the series of changes to the DOM--we're quite interested in this. The series of changes in the DOM is the way that a lot of developers debug web apps. They look at the DOM. They look at what happened to the DOM as they ran the test case that you did. So we're trying to figure out whether maybe there are some patterns there that we can just file bugs on automatically when we see those patterns occur. That's a little 20% project that me and one of our guys are working on.
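Here is a minimal sketch of the recording half, assuming a content script: log every interaction plus the DOM changes it caused, so the bug report can carry its own reproduction. MutationObserver is the real browser API; the journal shape around it is made up.

```typescript
// Hypothetical journal entry: what the tester did, plus the DOM changes it caused.
interface JournalEntry {
  timestamp: number;
  action: string;         // e.g. "click button#install"
  domMutations: string[]; // summarized DOM changes, the raw material for a "DOM dump"
}

const journal: JournalEntry[] = [];
const pendingMutations: string[] = [];

// Watch DOM changes in the background and fold them into the next journal entry.
const observer = new MutationObserver(records => {
  for (const r of records) {
    pendingMutations.push(`${r.type} on <${r.target.nodeName.toLowerCase()}>`);
  }
});
observer.observe(document.body, { childList: true, subtree: true, attributes: true });

document.addEventListener("click", e => {
  const target = e.target as HTMLElement;
  journal.push({
    timestamp: Date.now(),
    action: `click ${target.tagName.toLowerCase()}${target.id ? "#" + target.id : ""}`,
    domMutations: pendingMutations.splice(0), // take everything recorded so far
  });
}, true);
```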
So now we've done that. It automatically detects what versions of all the relevant software you're running, so you don't have to guess the way users do,
and we've turned that control red to let other testers know. All testers are going to see
the same heads-up display when they're testing the app. And then the journal will tell you
what you've done, where you've been, how long you've done it, and all that stuff is in storage. So it can either be replayed or you can take credit for it. And then you can go back
to your map and you can click on something else and continue on your merry way. Are you
with me? There are a few other capabilities of this tour--of this tool. There's a bunch
of tours that we've implemented. Tours are basically collections of test cases of similar
form, and they'll guide you through it. The journal is there, and then you've seen some of those. If you hover over those squares you get information about the bugs; you can click and get more information if you like. And then there's a historical record of the testing
that you've done on this application. This is what the scripts look like. This is--we run these every build. Some of them break because the UI changes. We throw them away and re-record them. If somebody's build comes out in an hour, it will play the script until it breaks, and then it'll give you a little editor and you can try to fix it, or you can just go on and start the script from there so you don't have to repair it all, or re-record it and create a new version of it. That's what we're going to do. That's what we're doing.
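The replay side can be sketched the same way: run the recorded steps until one of them no longer matches the UI, then hand control back so the tester can fix or re-record from that point instead of throwing the whole script away. The step shape and runner are hypothetical.

```typescript
// A recorded step and a replay loop that stops at the first breakage.
interface RecordedStep { selector: string; event: "click" | "input"; value?: string; }

function replay(steps: RecordedStep[]): number {
  for (let i = 0; i < steps.length; i++) {
    const el = document.querySelector<HTMLElement>(steps[i].selector);
    if (!el) return i; // UI changed here: open the editor, fix or re-record from step i
    if (steps[i].event === "click") el.click();
    else (el as HTMLInputElement).value = steps[i].value ?? "";
  }
  return steps.length; // the whole script still replays cleanly on this build
}
```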
This tool has been--the current form of this tool is that we have it. We're dog-fooding it with, I think, we're up to about eight projects now. And uTest has it and uTest has done one
project. What we did with uTest is we gave it to ten testers. We hired 20 testers from
the crowd to test the same thing, ten with the HUD, ten without the HUD. And we're still
collecting the data to see how we've done. So now, if this works, if this is a worthy investment, then we've got four-three, test. And we're going to have to have people do
this. So how do we get people to do this? Well, we're committing to open sourcing this.
Open sourcing as much of it as we can. You realize that as an internal Googler, I've
got access to infrastructure that is very hard to export. So to the extent that we can
open source this, we will, right? So for right now it runs as a Chrome Extension and the
Chromium project is completely open source. So as long as you're testing within Chrome and
using those bug databases et cetera, you're totally fine. My new friend Matt from Mozilla
and I spent today together at the Fort. And we have a gentleman's agreement to share our
source and figure out how we can get this in Firefox. And my old friend Sam from Microsoft
is--we have a gentleman's agreement to get Internet Explorer to look hard at this too
so that the three browsers can share source code and try to get this implementation in
all three. So we are very near to the point where we're going to announce this and give
it out to others, so stay tuned. The Google Testing Blog--googletesting.blogspot.com--is where we will announce it. When will we announce it? Hopefully, before
the end of this year. Okay, so if we succeed here what we've done is we've got a tool that
stores the record of our testing. Now, what I want to do with this tool is I want it to
maintain itself through manual testing inside of Google or in your company and then when
we hire the Mozillas and uTest crowds or Sethian crowds or whoever else is doing crowdsourcing,
it would continue on, right? The tool would understand. Okay, there's more testing going
on. This is accumulative effort. Same thing with the dog-fooders. I would like my dog-fooders
to see those little overlays and know where the bugs are so they don't bug me with new
ones. Finally, into beta and then the HUD can be stripped by simply taking away the
Chrome extension. It's that simple. I'm not introducing any new code into the app at all.
parts that don't work and that we throw away. We're going to be really open and honest about
researching cool stuff we want to do, maybe you could help. And we're going to release
it as soon as we think it will do more good than harm. And that's it. I hope I have succeeded
in turning quality on its head. And now, I know everybody is expecting some statement
about Alberto. This is my wish, my wish is that next year at GTAC he gets invited to
do a keynote and I get invited to do a video. That is my wish. I'm James Whittaker. I'm
from Google and I'm done. Thank you. Questions please, that I can answer preferably.
>> Thank you, James. That was a fantastic talk and that's why I'm standing up.
>> WHITTAKER: And I like you already. Okay. Does anybody else have a question?
>> So, is it me? >> WHITTAKER: Yeah, it's you.
>> So, three things that I noticed in the presentation. The first thing was about test planning. I agree with what you said about empowering the testers to be hyper-productive and providing them some sort of helmet. I think that helmet, Testify, as far as I understood, is again planning around risk, and that will take the same amount of time as creating a test plan document or a Word document. Isn't it the same thing as planning around the risk--that time the tester will not spend on testing, he'll be planning around risks? He'll be providing some numbers through test scenarios at whatever level we find comfortable. So is Testify an early life cycle tool or a late life cycle tool, or is it a tool used throughout the product life cycle? That's the first thing. Second thing, is it a new sort of test planning with Testify?
>> WHITTAKER: Okay. So Testify is our internal codename, we're going to be calling this Google
Test Analytics. Now, so the first question: is this a late cycle or early cycle testing tool?
So here's how we do it now. So on Chrome OS, we've actually been using this in the early
stages when, according to Alberto, the software is still "soft and malleable." And it's hard,
right, because it's difficult to keep it up-to-date. It's the same exact problem as people have
with test plans. I'll be perfectly honest with you. However, it really gives you a running
start to the point where the software does begin to dry out and become crispy. And, you
know, so for example, a lot of my products in client will get to a point of complete
code freeze. So if I wait until a code freeze to start doing this, I've waited too long.
So I need to be test planning and executing while the software is still soft and malleable.
But at the point of code freeze, I want my ACC to be complete and I want to just go all
out sprints on doing testing. So we use it through the whole process, you know, it's
it to Turning Quality on its Head. So I see it as both.
up, dude? >> Well, you got a bug.
works like an American investment bank and that's not Turkey.
>> WHITTAKER: There was a Turkey in there somewhere, wasn't there?
>> WHITTAKER: Oh, okay. All right. So, my Greek friend.
Greek. I will send you a T-shirt. You want this one? It's--thanks for that correction.
>> So with this tool, have you observed like the Heisenberg effect that just because you're
>> WHITTAKER: Oh, I don't even think I can spell that. You're--performance isn't an issue
with the tool, right? It's lightning fast. It runs as a Chrome extension and it does
a lot of stuff in the background in advance so that, you know, while you're running test
like when you're in the bugs tab, we're really only interested in the chromium.org bug database
at that point. So it has not been a problem in our trials as of yet. You know, YouTube
has probably been the--well, I mean all of the client stuff is way further along, but YouTube has been, there's a lot of test cases there and, you know, we throw away the ones that break. We record new ones easily. The recording process is really fast. So we haven't noticed
>> Hey, James. Hey. I'm over here. >> WHITTAKER: Oh, hello, Marcus.
>> I actually think the examples you brought with Google Maps are, for me, a classic example of something that should have been found by a small test in the development cycle. But that's a different point. The point is, I see that we need to test and I know that we need to make a huge effort, but I think the real effort should go into getting our products to a level where they actually don't need testing anymore at all. So I'm thinking of building a house, which doesn't need complicated equipment to test the house. There are no house testing conferences. How do you think we can actually get rid of the need for testing in the first place?
>> WHITTAKER: So I would, first of all, argue that building a house and writing software are very different things. Somebody wrote a famous, and I think brilliant, paper many years ago arguing that software engineering is harder than anything, right, even putting the space shuttle in the air, and he makes a very convincing argument for it. So, but houses break all the time, man. I've built four of them. And, you know, people have been trying. I mean really smart people have been trying. Harlan Mills, Cleanroom Software Engineering, back in 1984, right? That was probably the best attempt. The reason he called it cleanroom, right? Cleanroom. You design hardware in a clean room where bugs never exist. You know, the only bug-free software will be software that never has a bug to begin with. This is a really hard problem. We've been working on it for a really long time. I don't see us making a lot of progress. So maybe this is a stopgap; maybe there's some future in which we don't have testing conferences anymore,
but I don't think I'll live to see it. >> Okay. Let me put it differently. So we've seen basically two kinds of talks today and yesterday. We've seen half of the talks telling us how to make software better and half of the talks telling us how to test software better. Now, you're in a position where you're actually allocating resources to one of the two things. What do you think is the right tradeoff?
>> WHITTAKER: I think the right tradeoff is the status quo on building software better, because I tried to make the argument that progress there is going to be limited because of the number of people who are capable of doing it, Tracy and Russ, and that there is a lot of benefit to be gained in late cycle testing, because I disagree with you. Those Maps bugs require data to be found, right? And that data changes on a monthly basis. It's sucked into the maps constantly. So there are a certain number of bugs that will be introduced after the software is built, by definition. So I--we're going to be talking--we're going to be having test conferences for probably decades, certainly decades, probably centuries. And in fact, I'm not sure it will ever be solved. Humans are fallible, man.
>> They are. Thanks. >> WHITTAKER: You can't take them out of the
equation. >> Hi, James. This is your new test buddy,
right? >> WHITTAKER: Hey, man.
>> Hi. >> WHITTAKER: Good old man. Man, you take
some good pictures. >> Oh, glad you like them. How does this tool handle, like, a session? Is there a pause and resume on this tool so that, you know, you can just go home and resume later when you come back the next day?
>> WHITTAKER: So a session meaning a contiguous part where you're just testing?
>> Yeah. >> WHITTAKER: I mean it's like there is no pause. You know, you can just pick up completely, just like a video game. You stop playing, you pick up. You can either run a new mission--you know, save your old mission, run a new one--or you can continue where you left off.
>> Okay. >> WHITTAKER: Now, hopefully, you know, you're
doing this as teams. We like to--the funniest part about the HUD particularly, the heads-up
display, is doing this in teams and watching, you know, four or five testers working on
machines and having this stuff updated in real time. It's actually pretty cool. So in
that sense, once somebody leaves, the mission is over. Just like an Xbox game, if somebody has got--some kid's got to go to bed, you know, the game is over because you just lost the player. >> Right. Great, thanks.
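(A hypothetical sketch of the save-a-mission, pick-it-back-up idea he describes; this is not the actual HUD implementation, just an illustration of persisting a tester's mission state like a game save. The file name and fields are made up.)

import json, time

def save_mission(path, mission):
    # Persist the current mission so it can be resumed later, like a game save.
    mission["saved_at"] = time.time()
    with open(path, "w") as f:
        json.dump(mission, f)

def resume_mission(path):
    # Load a previously saved mission and carry on where the tester left off.
    with open(path) as f:
        return json.load(f)

# Hypothetical usage: stop for the day, resume tomorrow.
save_mission("landmark_tour.json", {
    "tour": "landmark",
    "visited": ["settings", "bookmarks"],
    "remaining": ["extensions", "history"],
    "bugs_filed": 2,
})
print(resume_mission("landmark_tour.json")["remaining"])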
>> James, this side. >> WHITTAKER: I'm glad you all raise your
hands because it seems like someone from above is talking to me.
>> Okay. So my question is around the noun, adjective, and adverb technique. Beyond that, there was some research done on extensions to those techniques, which gives us questions. And then, to organize those questions in the form of shareable test ideas across teams and organizations, there is something called questioning patterns, or Q-patterns, which were pioneered by a different community. >> WHITTAKER: Yeah, Q-patterns, goal-question-metric--I mean there's lots of stuff out there on that.
>> Yeah. So are similar measures taken at Google to share test ideas, by stripping off the domain part, making them generic, and then sharing them across teams? What kind of initiatives do you take there? >> WHITTAKER: Well, for us, within my team--so,
you know, this is all pretty new stuff. And sharing this, you know, every single test director, a peer of mine, is ready to use this but they want to see it work, right? So we're all kind of starting small. Tours are really our mechanism for sharing test wisdom. We don't call them patterns. We call them tours because that's what I call them in the book. It seemed to make sense, right? You're visiting a city and you want to make good progress and that's how you organize your thoughts. We could have called them missions. I know mission is kind of cool. It's more military. But, you know, we try to group it in something that makes sense. So here's how I knew: you know, back when I worked at Microsoft, I was walking down the hall and there were two people around the corner and they were talking about
the landmark tour. And I thought, "Hey, that's cool. I invented the landmark tour." Here
are two people. I'm going to walk right in between them. You know, just kind of sashay
in between them and then they're going to go, "Oh, wow. There's the creator." And I
walked--I sashayed in between them and they just kind of stepped aside and looked at me
like I was rude and then continued to talk about the landmark tour applied to visual
studio. They didn't even know who I was. That was really--to me, I walked away with a big
smile on my face because I thought, I've succeeded, right? We've taken this out and it's sort of virally gone through the company. Microsoft has tours now in their courseware that Alan Page runs for the--on behalf of the SDETs in the company. So that's kind of the idea, right? It's the--because the tours do make sense. A landmark tour, once you learn it, you can tell a tester to run the landmark tour and they will know the exact set of test cases to run no matter what the application is. It's a way to talk about testing. And it's in my fourth book.
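(To make the "same tour, any application" point concrete, here is a small hypothetical sketch; the function and the feature lists are invented for illustration and are not from any tool mentioned in the talk.)

def landmark_tour(landmarks):
    # The tour is application-agnostic guidance; only the landmark list changes per product.
    return [f"Visit '{name}' and exercise its main workflow" for name in landmarks]

# The same tour applied to two different (hypothetical) applications:
print(landmark_tour(["search box", "directions", "street view"]))   # a maps-like app
print(landmark_tour(["compose", "inbox", "labels", "settings"]))    # a mail-like app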
>> The root cause you cited at the beginning was code. Code is complex, and we hire devs to write code and testers to write more code. If the problem is code, perhaps an alternative solution is to hire people whose explicit job it is to delete code and to clean code up and reduce…
>> WHITTAKER: Hey, you sound like Simon Stewart. >> Have you…
>> WHITTAKER: Simon is on the way over in the airplane and he comes over to me and sits
down and he says, "I want to talk to you about deleting code," right? "I want to figure out
a system where we can pay people to just rip code out of Google 3." It's weird stuff,
right? And, you know, so it's his big grand idea and it's a great idea. I'd love to do
it. I think we ought to use our--milk our peer bonus system for this, right? You don't
get paid for writing code. You get paid for deleting code. I love it, Simon. Let's do
it. >> Okay. Thanks.
>> WHITTAKER: But really the problem gets back to complexity. I mean Marcus' idea that
we can build this is wonderful. But software is hard and human beings, fallible human beings,
are doing it. And we continue to do the same stupid things, you know? I've worked for a
lot of companies, as a consultant, as a full time employee. I see the same stupid things
in the halls of all these companies. Until we get to the point where we stop doing those
stupid things, we're going to continue to be talking about this. And even then, it's still
hard and we're still fallible human beings. So I mean we either start, you know, collecting
DNA from Tracy and Russ or we continue to struggle. But I tell you what, Marcus, you
figure it out, turn it into home building, I'm buying you--I'm not buying you a beer.
I'm buying you a keg. >> Hey, James.
>> WHITTAKER: Yes? >> Here, James. Back.
>> WHITTAKER: Oh, I see a bright light. >> And here…
>> WHITTAKER: Yes, sir. >> So you mentioned that, you know, you're
making this helmet for a smart tester, right? What if it becomes overhead, right? You know,
so… >> WHITTAKER: I know a helmet is kind of over
the head, sort of. >> Yeah. So, you know, there's somebody who's advising the guy, you know. There's somebody who's keeping an audit trail of all the activity that he is doing. And somebody who's expecting that smart engineer to follow the advice, you know, based on some scores or metrics which he may like to question, or, you know, he may like to take a different path. So I don't--do you think that this is kind of an information overload, or is it the kind of thing that, you know, complements his style of work? >> WHITTAKER: Okay. So there are two parts
of this answer. The first part is we're not controlling the tester. The tester is going
where they want to go, right? They can ask the HUD, "Take me to this component so I can
start testing," or they can go there themselves, right? So there's no fences that they have
to climb in order to go where they want to go. Testers are completely and totally unleashed.
The second part of it is, is you just turn it off, right? If there's a part of the HUD
you don't like, just like in the video game, if you walk into, you know, one of these internet
cafes and everybody is playing the same game, all their HUDs are going to be different.
They're going to customize it for themselves based on what kind of help that they need.
So no shackles, no fences and it's fully, fully customizable.
>> Okay. So I just wanted to point out that we are 50 minutes over and we are eating into
our break. So, if there are more questions, we can continue on and get a chance to have
James address them or we could take a break now and you can find him later on.
>> WHITTAKER: Let's take a break. But before we take a break, did you all like the chalkboard?
That was--so that's in Presently. That's Google Presently, believe it or not. And some of
the people that worked on this product were going, "That's Presently?" So a guy on my
team, Joel Maharsky, and his wife, Terry Maharsky, are the artists behind this. So I just want
to shout out to them. >> All right. I just want to…
>> WHITTAKER: Thank you. >> I wanted to thank James for all the excitement that he brought. >> WHITTAKER: Dude, I loved it. >> And Alberto, who tagged along with him. >> WHITTAKER: And if you invite me back to India, I'm going to have a smile on my face when I say, "Yes, I'll come." >> That is a great change of heart. So we would expect that to continue. Thank you,