FEMALE SPEAKER: I'd like to welcome
you to meet Ivan Janicijevic.
He's a senior software engineer in Test,
and he's passionate about building
Android development and test infrastructure.
He joined Google in 2006 and has worked
on a few teams, including the Google Checkout team.
He also helped the Google Wallet Android Engineering team
by building a few kernel development
and build release systems.
He lived in San Francisco and just got back to New York
about three weeks ago.
And today he's working on mobile infrastructure for ads
here in New York.
So let's give him a warm welcome.
[APPLAUSE]
IVAN JANICIJEVIC: Thank you.
Thanks.
Hello, everybody.
My name is Ivan Janicijevic, and I'm a senior SET here
at the New York office.
First, I want to welcome you to this amazing office.
There's so many cool projects here in this office.
There's so many cool people you can work with.
There's so many cool things to play with.
And the food is just amazing.
OK, so let me ask you a question.
How many of you are Android developers?
How many of you actually test your applications?
Do you find it painful?
Good.
So my team is responsible for making Android testing easier
at Google.
We developed the tools and infrastructure
to make building Android, developing Android, and testing
Android easier at Google.
And today's talk is going to be focused on Android test
automation.
And we're going to be talking about how
to break the manual testing matrix.
To set the stage for my presentation,
I'm going to play this dramatic video.
[VIDEO PLAYBACK]
-I imagine that right now you're feeling
a bit like Alice tumbling down the rabbit hole, hmm?
-You could say that.
-I can see it in your eyes.
You have the look of a man who is executing
all these tests by hand now.
Ironically, that's not far from the truth.
Do you believe in manual testing, Neo?
-No.
-Why not?
-Because I don't like the idea that people
should do the same thing over and over.
-I know exactly what you mean.
[THUNDER RUMBLING]
-Let me tell you why you're here.
You're here because you know something.
What you know you can't explain, but you feel it.
You've felt it since 2006, that there's
something wrong with Mobile.
You don't know what it is, but it's there
like a splinter in your mind, driving you mad.
It's this feeling that has brought you to me.
Do you know what I'm talking about?
-The manual testing matrix?
-Do you want to know what it is?
[THUNDER RUMBLING]
-The testing matrix covers everything.
It means testing on all phones on every API level.
Now it even means testing on your glasses.
Soon you'll have to test on your watch.
You already need to test on TV.
You need to test on cars.
You need to test on different networks.
It's the spreadsheet that has been pulled over your eyes
to blind you from the truth.
-What truth?
[THUNDER RUMBLING]
-That you are a slave, Neo.
Like everyone else, your manual tests
are catching bugs that could have been caught
with automation or emulation immediately--
a prison for your mind.
[THUNDER RUMBLING]
[SIGH]
-Your life is being wasted installing your app
on 100 different phones.
You know you'll never prevent a regression.
[THUNDER RUMBLING]
-This is your last chance.
After this, there is no turning back.
You take the blue tablet, the story ends.
You go home, test however you want.
You take the red tablet, you stay at gTech
and I show you how deep the automation goes.
[THUNDER RUMBLING]
[END VIDEO PLAYBACK]
[APPLAUSE]
IVAN JANICIJEVIC: Hope you enjoyed the video.
So I'm going to ask you to make a choice.
Would you like to take the blue tablet or the red tablet?
AUDIENCE: Red.
AUDIENCE: Red.
IVAN JANICIJEVIC: Good.
I have no slides for the blue tablet.
[LAUGHTER]
IVAN JANICIJEVIC: So why automate?
Let me put this in terms that everybody can understand.
Just look at this function.
This could actually be a Google interview question.
Can somebody tell me what is the computational complexity
of this function?
AUDIENCE: [INAUDIBLE].
IVAN JANICIJEVIC: [INAUDIBLE] yeah.
Yes.
So let's look into this.
We have tests running on different devices.
These devices could be the Nexus One, Nexus S, Nexus 7, Nexus 10,
the Droid Razr, many different devices,
and they keep adding new ones every day.
Now, these devices might have different API levels.
By "API level" I mean different Android OS.
It can be Froyo, Gingerbread, Gingerbread MR1, ICS,
Jelly Bean, and so on.
So you can see there's a combinatorial explosion here.
We have a full matrix of different devices running
different API levels.
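To make the combinatorics concrete, here is a minimal Java sketch; the device and version names are just the examples from above, and the test-suite call is a hypothetical placeholder:

```java
public class TestMatrix {
    // Every test suite must run on every (device, API level) pair,
    // so the total cost is O(devices * apiLevels) per suite -- and
    // both dimensions keep growing.
    public static void main(String[] args) {
        String[] devices = {"Nexus One", "Nexus S", "Nexus 7", "Nexus 10", "Droid Razr"};
        String[] apiLevels = {"Froyo", "Gingerbread", "Gingerbread MR1", "ICS", "Jelly Bean"};
        for (String device : devices) {
            for (String api : apiLevels) {
                // Placeholder for "run the full suite on this combination".
                System.out.println("Run all tests on " + device + " / " + api);
            }
        }
    }
}
```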
This is what an Android developer's desktop
might look like.
As you can see, there's a bunch of phones lying around,
lots of cables, generating lots of heat.
But wait.
There's something wrong with this picture.
For each of the phones, we need to have different API levels.
So for Nexus 7 we need to have one
which runs Gingerbread, Froyo, Jelly Bean, ICS.
So this pile is actually even bigger.
Luckily, a typical developer uses only one or two phones
on their desk, and they develop the tests on those.
And at some point they
need to test on a different phone which they don't have.
Then they're just going to scramble around the office
and scream, does anybody have a Droid Razr?
Does anybody have a Nexus 10 so I can
test my feature on a different device?
So automation is actually easy to parallelize.
You just have the same tests running on different phones
with different API levels.
But why aren't we automating?
That's the problem.
Well, we have several problems to deal with.
First, you need to have a place where
you're going to run your tests.
We also need to have any device easily accessible,
right at your fingertips.
You might need to test it on a tablet
or on a phone which has different density,
and you need to have it easily accessible.
And also we need to make it fun and easy to write tests.
Otherwise, you're going to meet lots of resistance.
People are just going to give up and not write any tests.
So this is written on developer.android.com.
And it states, when building a mobile application,
it is important that you always test
your application on a real device
before releasing it to users.
This somehow gets twisted and turned around,
and gets boiled down to this.
We only do testing on real devices.
Otherwise, we don't catch any bugs.
Well, this is not necessarily true.
So to do testing, we need to have a device lab.
You need to have a place to run tests.
So we have a choice.
We can choose data center hardware,
which is gray and boring.
But reliable, stable.
It's standardized.
It has published data about mean time between failures,
so you know exactly when to replace your hardware.
And you have good vendor support, 24/7.
On the other hand, you have consumer hardware.
You have this Hello Kitty phone, which
is very cute, very desirable.
But it's not really reliable, especially when
you run it continuously, 24/7.
You bombard it with lots of tests, nonstop.
And it's just going to give up at some point.
It's meant to be used for about six months
and then replaced with a newer model.
And the support for it?
It's just the little kiosk in your local mall
where you actually bought the phone,
so the support is not very good.
So you would think that Google has amazing device
labs, maybe something that looks like this.
So I'll show you the actual real lab
which I encountered about a couple years ago.
So here you see a bunch of Nexus S phones.
They're actually glued to the wall with Velcro.
And they use USB cables to connect to the USB hub, which
is in turn connected to a desktop workstation.
So this was great.
It actually ran tests.
It was actually mesmerizing.
Sometimes I encountered people sitting there for hours,
looking at tests running.
But then problems started happening.
The USB cable gave out.
The hub was having problems.
The desktop just rebooted sometimes.
And then in about a week or two, because we
were continuously running tests, the CPU heated up and melted
the glue, and the phones started falling down, one by one.
Suddenly you get test failures.
People start pulling their hair.
What's going on?
It keeps failing.
AUDIENCE: All kinds of problems.
IVAN JANICIJEVIC: Yeah, it's really hard to manage this lab.
You fight against gravity here.
It's really tough.
AUDIENCE: Put them on the table down there, right?
IVAN JANICIJEVIC: Well, the lesson
is, put them on the table, yes.
So managing the physical lab has lots of problems.
I mean, there's security: somebody can just
waltz in and steal your phone.
There's maintenance.
You have to replace the phones.
There's new phones being issued.
The phones die.
All kinds of things happen to them.
Then you have to upgrade the software on the phones.
So you have hundreds of phones sitting there.
How are you going to upgrade them, and how are you
going to manage them?
How are you going to connect them to the proper hub?
The hardware is also consumer-grade hardware.
It's not server hardware, which is very stable.
You have to rack them.
You have to arrange them in small spaces.
And they're all different sizes and shapes.
Then they give off lots of heat.
And we had lots of problems with signal interference.
When you have the Wi-Fi radios of 100 phones sitting
in a small space, they keep disconnecting all the time,
and you get these false failures.
So is automation dead on arrival?
How can we even test when we don't
have anywhere to run our tests?
Well, why are we so hung up on the real phones?
What are the most common bugs that you encounter?
The classic bugs that you mostly encounter
are concurrency bugs, buffer overflows,
and off-by-one errors.
Do they care which device they run on?
No, these bugs are going to be reproducible on any device
they run on.
In addition to that, if you replicate the OS version,
the screen size, the cache, and the memory constraints,
we'll probably catch 99% of the bugs
when we have the full matrix set up.
There's still a chance there's a small set of bugs that
can slip through, and they're usually very hardware-specific,
like Bluetooth or secure element.
But we can work around it by isolating the code around it
and then testing it in isolation.
This is actually a real Google data center.
It's quite pretty.
And we can run emulators in data centers.
That means they run on the cloud.
We can scale without any limits.
Well, there are some limits, but they're quite large.
These servers can handle lots of emulators.
Can somebody tell me how many seconds there are
in the month of March?
There are about 2.6 million seconds in the month of March,
and we managed to run about 82 million tests in March.
So we can scale pretty well.
We also have reproducible test environments.
We start fresh emulators which are in a clean state,
so there's nothing to contaminate them
from the previous run.
And we have a fully controlled test environment.
But it's ridiculous.
You're going to test on emulators?
That's not the real device.
Actually, it's just another Android device.
It emulates the real phone hardware,
and the system images are almost identical.
This is what the Android system architecture looks like.
I'm not going to go deep into this.
I just want to point out what is device-specific.
Can somebody tell me, where do you usually
find bugs in this diagram?
AUDIENCE: [INAUDIBLE].
IVAN JANICIJEVIC: Well, you don't find any bugs
in this diagram.
You find bugs in your application, actually.
So we optimized the emulator.
We created specific device specifications
for all the emulators for all the devices
we're going to try to emulate.
We took the RAM, the screen resolution, the screen density.
And we also took the operating system.
Android emulators come with a rooted system
so you can easily change the product model
to reflect the real device name.
And we install some test-friendly services.
These services allow you to create an account,
take screenshots, dismiss the ANRs and system pop-ups that
are usually causing the flakiness in UI testing.
And we log everything, not just the logcat.
We do a system dump.
We log everything that is loggable.
That makes it easier when you debug the test failure.
You can just drill down, look at the logs,
see the whole state of the emulator,
and find the root cause of the bug.
So how do I find the device that I want to use?
I'm developing something.
I want to test my feature.
I have to dig down in this pile and connect the right cable
to the right phone.
Is there a better way to do this?
Well, it looks something like this.
These are actually the emulators I was talking about.
We developed a tool that lets you launch any emulator you
want by just specifying on the command line:
launch a Nexus 10 running Jelly Bean.
And boom, in like three seconds you would get them there.
You can install your APK.
You can install your test.
You can run tests.
You can do whatever you want with your emulator.
[LAUGHTER]
The emulator's just too slow.
Well, this is the Android boot sequence.
I remember the first time I installed Android SDK.
I started running the emulator.
It was blinking and said it's booting.
I lost my patience after a couple minutes,
and I just killed the process.
I tried it again, and the same thing happened.
After several minutes, I got my Android running.
Oh, cool.
It took some time.
This is not very friendly.
So if you look at this boot sequence,
it's very similar to a desktop's.
And on desktop we have something called hibernate.
On Android we have something similar called Snapshot.
So what we did, we just took a snapshot,
and then we used that to boot.
So we boot directly into Android,
and that takes about three to four seconds.
So what happens under the hood?
On the left side you have the phone.
On the right side you have the emulator.
The emulator runs on the host hardware.
Can somebody tell me what is the bottleneck here?
Well, somebody drew a yellow rectangle to help you out.
AUDIENCE: [INAUDIBLE].
IVAN JANICIJEVIC: So in this case,
in the case [INAUDIBLE] demo, it runs on host hardware.
And we are emulating the ARM CPU.
The CPU instructions are interpreted,
and that's really slow.
It's a software emulation.
So what we did, we removed that bottleneck.
We compiled Android for x86, and we are running that
directly on the host hardware.
On the host hardware, you need to turn on virtual machine
support to be able to run Android directly
on the host CPU through the emulator.
And this runs much faster.
The host hardware is a much beefier machine
than any Android device.
So the problem is solved.
We reduced the boot time by using snapshots,
and we sped things up by using the virtual machine.
So I created a little video that demonstrates
the race between the real device and the emulator running
the same set of tests.
This might be a little bit loud.
OK, sync problem.
[VIDEO PLAYBACK]
-Gentlemen, start your engines!
[ENGINES REVVING]
[END VIDEO PLAYBACK]
IVAN JANICIJEVIC: And we have a winner.
So we solved several pieces of the puzzle.
We have a place to run tests.
We can bring up any device,
right at your fingertips, at any time
you need it for development and testing.
Cool.
There's lots of work.
You're done.
You can go home, have a beer.
It's only this.
Don't be this guy.
Listen to Jackie Chan.
He's a wise guy.
So unit testing on Android is different than typical web
testing.
Now, web testing, we have it down pretty well.
We use design patterns.
These are model-view-controller or model-view-presenter.
We put most of the business logic into the presenter,
and we test it in isolation.
The view doesn't have much logic.
It's pretty lightweight.
In Android, we have something called Activity.
Is Activity a view or a presenter?
Well, it's actually both.
Activity contains both view logic and the business logic
of the presenter.
So we're at a bit of a disadvantage there for testing.
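To make that concrete, here is one common way to pull presenter logic out of an Activity so it can be unit-tested off-device; every name in this sketch is hypothetical:

```java
// The presenter holds the business logic and talks to the Activity
// only through a small interface, so it can be tested with a fake.
public class LoginPresenter {
    public interface Display {
        void showError(String message);
        void goToHome();
    }

    private final Display display;

    public LoginPresenter(Display display) {
        this.display = display;
    }

    public void onLoginClicked(String user, String password) {
        if (user.isEmpty() || password.isEmpty()) {
            display.showError("Missing credentials");
        } else {
            display.goToHome();
        }
    }
}
// The Activity implements Display and just forwards clicks; the
// presenter itself needs no emulator or device to be tested.
```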
Android's SDK provides a really nice tool
called Android instrumentation.
And it's a nice layer on top of Android
that allows you to do all kinds of wonderful things
like clicking on a screen in any location, sending key events,
sending text, and also inspecting
the state of your application and verifying
what's going on on the screen.
However, it's very complicated to use.
The API is not very friendly.
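For flavor, here is roughly what driving the UI through the plain SDK test classes of the time looked like; MainActivity and the R.id names are hypothetical:

```java
import android.test.ActivityInstrumentationTestCase2;
import android.test.TouchUtils;
import android.widget.Button;
import android.widget.TextView;

// A sketch using the SDK's ActivityInstrumentationTestCase2.
public class MainActivityTest extends ActivityInstrumentationTestCase2<MainActivity> {
    public MainActivityTest() {
        super(MainActivity.class);
    }

    public void testClickUpdatesStatus() {
        MainActivity activity = getActivity();
        Button send = (Button) activity.findViewById(R.id.send);
        TouchUtils.clickView(this, send);  // synthesizes the down/up motion events
        TextView status = (TextView) activity.findViewById(R.id.status);
        assertEquals("Sent", status.getText().toString());
    }
}
```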
So there are some efforts outside of Google, open source
efforts-- I have experience with Robotium--
that try to solve this problem by providing a simple API.
We started using that and started
running tests using Robotium.
But after a while, our tests became very flaky,
and they looked something like this.
Our continuous build, which runs the tests continuously,
started to look something like this-- fail, fail, fail, pass,
fail, pass.
And these failures are not genuine failures.
Those are the fake failures.
They're failures because of the flakiness.
Matrix wins?
Your testing is too hard.
Let's just give up and go to the beach.
That's it.
Why are the tests so flaky?
Well, Android has a UI thread, which runs the business logic.
And then we have a test thread, from which
we request a click, which translates
to a motion down in the UI thread, and then a motion up.
Some side effect happens in the meantime.
And if you don't assert at the right time, we get a failure.
We solve this by putting in a sleep.
OK, that works.
But then in the future, something changes in the code.
And now our side effect happens at a different point,
and then we again have a failure.
So that's causing flakiness.
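In code, the sleep "fix" looks like this (continuing the hypothetical test class sketched earlier), and it is exactly what breaks when the timing shifts:

```java
// The anti-pattern described: guess how long the UI thread needs.
public void testClickShowsResult_flaky() throws InterruptedException {
    Button send = (Button) getActivity().findViewById(R.id.send);
    TouchUtils.clickView(this, send);
    Thread.sleep(2000);  // hope the side effect has landed by now -- it may not have
    TextView status = (TextView) getActivity().findViewById(R.id.status);
    assertEquals("Sent", status.getText().toString());
}
```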
So what if we could put all of this concurrency handling
and synchronization under the hood
and come up with something like this to help us run testing?
We can then only focus on procedural testing.
We can say, click on this view and then
verify if this text appeared.
Well, earlier this year my team got together,
and we did a deep dive into Android OS.
And we came up with a framework to give us something like this.
We call it Espresso.
Unfortunately, it's not yet open-sourced.
And we are working on that, so hopefully it's
going to become available sometime soon.
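For reference, the style of test this enables looks like the following. Espresso was open-sourced after this talk; these package names are from that later release, the method sits in an ordinary instrumentation test class, and the resource ids are hypothetical:

```java
import static android.support.test.espresso.Espresso.onView;
import static android.support.test.espresso.action.ViewActions.click;
import static android.support.test.espresso.assertion.ViewAssertions.matches;
import static android.support.test.espresso.matcher.ViewMatchers.withId;
import static android.support.test.espresso.matcher.ViewMatchers.withText;

// No sleeps: the framework waits until the UI thread is idle before
// each action and assertion, which removes the timing guesswork.
public void testClickUpdatesStatus() {
    onView(withId(R.id.send)).perform(click());
    onView(withId(R.id.status)).check(matches(withText("Sent")));
}
```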
So I would like to show you what automated testing
looks like for us.
[LAUGHTER]
IVAN JANICIJEVIC: Oh, of course.
That's right.
[VIDEO PLAYBACK]
[PHONE RINGING]
[MUSIC - RAGE AGAINST THE MACHINE, "WAKE UP"]
[END VIDEO PLAYBACK]
IVAN JANICIJEVIC: So we're not going to go into Q&A yet.
I want to show one more video later on.
And remember, earlier we mentioned there's some set
of bugs that we cannot catch with emulators.
And I got a chance to work on a Google Wallet project
where we had a secure element which was stored on a phone.
And one of the main features of Google Wallet
is to be able to bring the phone down onto the reader,
into the reader's range, tap and pay,
and then move away.
So that was really hard to emulate.
And I was thinking how to solve this problem.
I came up with the idea to go and buy a little robot arm.
I thought it was cool, so I did it
in my free time, kind of thing.
I went to a local store.
I bought this arm, which was about $35,
and it had a USB connector.
I wrote a little driver for it.
And it was very simple.
It was just saying, turn this motor on for one second
and turn this other motor for two seconds, and that was it.
And it was great.
I mean, I can solve this problem.
So I tried it out.
It was going down and up, and it was actually working.
But if I did it like 20 times, it went out of alignment,
so it started hitting the desk.
I'm like, crap.
It was a great idea.
It's not going to work.
I went home.
I was taking a shower.
I was like, hey, the phone actually has a gyroscope.
It's a very precise instrument.
I can use that to tell me the angle of the phone.
So I was like, cool, I'm going to try it out.
So I wrote a little app to tell me my angle.
And then I hooked it up, and I made it work.
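A guess at what that little app amounts to (a sketch, not the actual code): read the accelerometer and derive the tilt angle from gravity.

```java
import android.app.Activity;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;
import android.os.Bundle;

public class AngleActivity extends Activity implements SensorEventListener {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        SensorManager sm = (SensorManager) getSystemService(SENSOR_SERVICE);
        Sensor accel = sm.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
        sm.registerListener(this, accel, SensorManager.SENSOR_DELAY_UI);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        // Angle between the device's Y axis and gravity, in degrees.
        double angle = Math.toDegrees(Math.atan2(event.values[2], event.values[1]));
        setTitle(String.format("Tilt: %.1f degrees", angle));
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) {}
}
```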
So I'm going to show you the video of that.
[VIDEO PLAYBACK]
[BEEPING]
[WHIRRING]
[MUSIC - KRAFTWERK, "THE ROBOTS"]
-That's duct tape right there.
[MUSIC - KRAFTWERK, "THE ROBOTS"]
-We're going to skip all this.
It's very long.
This is the angle.
This is what the phone sees.
[END VIDEO PLAYBACK]
IVAN JANICIJEVIC: Do you want more?
[LAUGHTER]
IVAN JANICIJEVIC: OK, now we can move into questions.
Yes?
AUDIENCE: You showed they're all there [INAUDIBLE].
Right?
IVAN JANICIJEVIC: So we are testing only on the Google
Experience phones, and we cannot get
the manufacturer-specific ROMs.
But again, I think most of the bugs
should be covered by testing Google Experience phones,
unless there's something drastically
strange in the manufacturer ROMs.
It's possible.
Yes?
AUDIENCE: Are those going to also be available on Eclipse?
Or is it only specific for Google?
IVAN JANICIJEVIC: So those emulators
are available on Eclipse.
Those are the same emulators that we use, the same images.
You have x86 images that you can use from Eclipse.
What we did, we did some additional modifications
to the emulators to make them more friendly for Google developers.
Things like installing apps, setting up accounts,
matching the RAM size and screen density, that's all available.
You can just go and do it yourself.
AUDIENCE: Hey, how's it going?
So we actually moved away from emulators back
in 15 because we saw a problem with them crashing.
We would try to run our tests, and then they would crash.
So this became a problem.
How do you guys deal with that sort of flakiness?
Or has that stopped being a problem?
Because we never actually used snapshots.
IVAN JANICIJEVIC: So we actually optimized our emulators,
and we made sure that they're not crashing.
We didn't have any problems with crashing.
Do you know of any reason why they were crashing?
AUDIENCE: We have no idea.
IVAN JANICIJEVIC: Have you been using x86 emulators?
AUDIENCE: Yeah, we were using x86 on Ubuntu boxes.
So not sure why.
AUDIENCE: [INAUDIBLE].
AUDIENCE: Yeah.
And also [INAUDIBLE] is not available at this time,
so they were really slow.
IVAN JANICIJEVIC: So maybe it's the hardware
that you're running on.
I'm not sure.
Try using different hardware.
AUDIENCE: Try different hardware?
Sounds good.
IVAN JANICIJEVIC: Yes.
It's a very specific question.
It's hard to tell what is the reason for the crash.
AUDIENCE: So I'm actually curious.
So how much manual testing do you guys do, if any?
IVAN JANICIJEVIC: That's a good question.
I think manual testers are very precious,
and we should use them preciously.
We should do most of our testing using emulators,
using unit testing, and try to automate as much as possible.
And just let our manual testers do
real testing, exploratory testing, creative testing,
versus trying to go and execute a big matrix
of a list of things and checking the boxes.
Because they're going to be testing it
as the real users, and that adds real value then.
Could you walk up to the microphone, please?
AUDIENCE: From one of your remarks, it
sounds like you expect the developers to do the testing themselves.
Is that how you work?
Or do you have a separate QA group or a separate testing
group?
IVAN JANICIJEVIC: That's a good question.
At Google, part of your job as a developer is to write tests.
So it's your job.
It's test-driven development, so you
have to write tests before submitting.
You shouldn't be submitting your code into the repository
without writing any tests.
And these tests have to pass before submitting it in.
That way we ensure we have quality at HEAD revision.
We have somebody in the back there.
AUDIENCE: So give us an idea of what a typical test suite would
look like for any arbitrary project or piece of code
that you're about to test.
What's the life cycle of developing the tests and then
the effort that actually is involved, given the support
[INAUDIBLE]?
IVAN JANICIJEVIC: Good question.
We have a philosophy at Google of distributing tests
into small, medium, and large.
And the rule is about 70/20/10.
So 70% of your tests should be unit tests.
They're very, very small tests which actually go really fast.
And they're not dependent on any UI or anything large.
They shouldn't be talking to the network.
They should be focusing only on your function.
Then you have medium tests, which
test the interaction between different components.
And you have large tests, which are end-to-end tests which
exercise the whole system.
So that's pretty much the philosophy
that we go by when we design tests.
Most of them are very small, and then only 10% should be large.
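The Android test runner of that era already had annotations for this split, so a suite can be filtered by size; the method bodies here are hypothetical placeholders:

```java
import android.test.InstrumentationTestCase;
import android.test.suitebuilder.annotation.LargeTest;
import android.test.suitebuilder.annotation.MediumTest;
import android.test.suitebuilder.annotation.SmallTest;

public class SizedTests extends InstrumentationTestCase {
    @SmallTest
    public void testParseAmount() { /* pure logic: no UI, no network */ }

    @MediumTest
    public void testTwoComponentsTogether() { /* interaction between components */ }

    @LargeTest
    public void testEndToEndCheckout() { /* whole system, end to end */ }
}
```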
AUDIENCE: The complexity hasn't changed, though.
We still multiply the number of tests
by the number of platforms to get the full coverage, right?
IVAN JANICIJEVIC: Yes.
AUDIENCE: And yes, you can get more tests running
at the same time on a host than you
can on a consumer electronics device.
IVAN JANICIJEVIC: Yes.
AUDIENCE: OK.
But as you keep adding more devices
and you keep adding more tests, how rapidly
are the respective sizes growing,
the amount of work you have to do, and the power
you have to do the tests?
IVAN JANICIJEVIC: Yes, so we can scale very well in our data
centers.
So right now there's no concern about running out
of processor time or space in our data centers.
But you're right.
The matrix is expanding.
It's growing.
But we also look at the usage of different, older operating
systems.
For example, Donut is not really used that much anymore,
so we might decide at one point to drop it, and reduce
the tests that way.
Yes, of course.
AUDIENCE: So it's my own perception
that you don't expect progress to be as fast on Android
as the other platforms.
I don't know if you agree or not.
But it's an expectation, so you try to test as much as possible
on Android for performance.
So you didn't mention anything about that.
For you, the test is failing if the program
does something unexpected, correct?
IVAN JANICIJEVIC: Yes.
AUDIENCE: But the perceptual parts of it
are not really part of the discussion.
But to the testers that is a problem,
especially if you're doing something really intense,
graphically.
But even if it's a business application,
if it's jerking around, you're behind your competition,
essentially, and you don't know it
until you test on that device.
That brings back the problem of testing on an emulator.
Sometimes it's faster than the real device.
Sometimes it's slower.
You can't predict this.
Well, OK.
But suppose you do say, OK, I want to test on my emulators
only.
But still, how do you test the performance
aspect of that [INAUDIBLE]?
IVAN JANICIJEVIC: That's a very good question.
Performance is-- you mostly test the logic on emulators.
That's the primary goal.
Performance, you can put in some kind of instrumentation
and try to test the performance and optimize it on the emulator
itself.
But it's not going to guarantee that it's
going to run the same way on the device,
and it's a whole different set of problems
that you have to deal with when it comes to the performance
testing.
I guess you can still improve your application using
emulators and make sure it's optimized there,
and then it has a good chance of running better on the real device.
But it doesn't really help you much, running on an emulator.
It's not going to give you the real information.
That's true.
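One simple way to "put in some kind of instrumentation," as a sketch (the helper, the label, and the advice to track trends rather than absolute numbers are assumptions layered on the answer):

```java
import android.os.SystemClock;
import android.util.Log;

public final class Perf {
    private Perf() {}

    // Time a critical section and log it. On an emulator, only the
    // trend across runs is meaningful, not the absolute number.
    public static void timed(String label, Runnable section) {
        long start = SystemClock.uptimeMillis();
        section.run();
        long elapsed = SystemClock.uptimeMillis() - start;
        Log.i("perf", label + " took " + elapsed + " ms");
    }
}
```

Wrapped around app startup or a render pass, run-to-run regressions still show up, even if the absolute timings differ from real hardware.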
AUDIENCE: Do you guys have plans to provide testing
infrastructure outside, something like a cloud service, maybe, in the future?
IVAN JANICIJEVIC: We definitely play with this idea.
Honestly, it's lots of work.
It might be possible in the future.
We're discussing this.
I can't tell you it's going to happen anytime soon,
but it's possible.
I think it's a wonderful idea.
AUDIENCE: The application that we're
testing uses embedded YouTube videos.
And in the past, the emulator hasn't done so well
with playing video in automation.
Has it improved?
And how do you handle video testing in-house?
IVAN JANICIJEVIC: Yes, very good question.
We did enable OpenGL emulation on emulators.
So if you have a host machine which has a GPU,
it's possible to run OpenGL and actually have
the GPU acceleration.
In that case, the videos would work fine.
If in your data center, you have a set
of servers which don't have GPU, then you're
going to have troubles.
But you can still get around it by using Mesa drivers, which
provide a software OpenGL implementation.
It's going to be slower, but it's still
going to achieve the effect of testing your video.
Yes?
AUDIENCE: Have you tried emulating
[INAUDIBLE] or other emulation [INAUDIBLE]?
IVAN JANICIJEVIC: So think of the emulator
as being a real device; it actually
operates as a real device.
Do you have any specifics on which--
AUDIENCE: Multiple background [INAUDIBLE].
IVAN JANICIJEVIC: Oh yeah, you can run anything
that you can run on real devices.
As far as the OS is concerned, it's the same.
It's running on a device.
It doesn't know it's running on an emulator.
AUDIENCE: But you guys use constraints.
Should that have [INAUDIBLE]?
IVAN JANICIJEVIC: Oh.
We're just trying to pretend that we
are that specific device.
Let's say the Nexus S has-- I don't know the exact number,
but it's a 64K cache, and it has 512 megabytes of memory.
So we make sure that our emulator has the same RAM
constraints.
If your application runs out of memory in the Nexus S,
it will also run out of memory on the emulator.
AUDIENCE: [INAUDIBLE]?
IVAN JANICIJEVIC: Yeah, that's a good question.
That probably falls in the category of this 1%
that we can't test on emulators.
The way to get around it is to take your code which interfaces
with sensors, isolate it, and then test it thoroughly
in isolation, and then probably have
some kind of manual validation at the end of the day.
But most of the bugs you're going
to catch in the software emulation,
so you're going to have to do minimal work
with the real hardware at that point.
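The isolation pattern he describes might look like this in miniature; every name here is hypothetical:

```java
// Hide the hardware behind a small interface...
interface SecureElement {
    boolean tapAndPay();
}

// ...give tests a deterministic fake that runs anywhere...
class FakeSecureElement implements SecureElement {
    boolean nextResult = true;

    @Override
    public boolean tapAndPay() {
        return nextResult;
    }
}

// ...and make the business logic depend only on the interface, so
// only the thin real implementation ever needs real hardware.
class PaymentFlow {
    private final SecureElement element;

    PaymentFlow(SecureElement element) {
        this.element = element;
    }

    String pay() {
        return element.tapAndPay() ? "Payment sent" : "Tap failed";
    }
}
```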
AUDIENCE: [INAUDIBLE]?
IVAN JANICIJEVIC: Yes.
One of the features of our test service
is we also added the ability to change the orientation, which
has the effect of redrawing the application.
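Through the public SDK, the equivalent move in a test is a forced rotation, which destroys and recreates (redraws) the Activity; the class and Activity names here are hypothetical:

```java
import android.content.pm.ActivityInfo;
import android.test.ActivityInstrumentationTestCase2;

public class RotationTest extends ActivityInstrumentationTestCase2<MainActivity> {
    public RotationTest() {
        super(MainActivity.class);
    }

    public void testStateSurvivesRotation() {
        MainActivity activity = getActivity();
        activity.setRequestedOrientation(ActivityInfo.SCREEN_ORIENTATION_LANDSCAPE);
        getInstrumentation().waitForIdleSync();
        // Assert here that the UI state survived the configuration change.
    }
}
```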
Yes.
AUDIENCE: Does your testing cover third-party
modifications like Sense UI and [INAUDIBLE]?
IVAN JANICIJEVIC: Yes.
Because you're not at a microphone,
I'm going to repeat the question.
Do the emulators reflect the Sense UI,
the third-party-- actually, the ROM producers' modifications?
No, we just use the Google Experience phones.
So we don't.
And there's a slight possibility that something might not
work as expected there because we don't have the images.
They're proprietary.
AUDIENCE: How do you approach testing
by the browser across the panoply of [INAUDIBLE]?
I've always sort of wondered, how do you actually
get a handle on testing when all you really
have are three or four phones, and there's actually
hundreds out there?
IVAN JANICIJEVIC: Yes.
Again, you should use emulators.
So there's something called WebDriver that you commonly
use to test web applications.
And there's a WebDriver implementation
for the Android native browser that happens
to be the same as the WebView.
The WebView uses the same Android native driver implementation.
And also other browsers for Android, like Chrome
and, I don't know, maybe Opera,
should have their own implementations of WebDriver.
They might not, but it's their responsibility
to implement the WebDriver spec.
And then you can use the WebDriver API
to drive your testing of the web page on Android.
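As a sketch, driving the Android browser through the Selenium 2.x AndroidDriver of that era looked roughly like this; the URL and element id are made up:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.android.AndroidDriver;

public class BrowserSmokeTest {
    public static void main(String[] args) {
        WebDriver driver = new AndroidDriver();  // talks to the WebDriver app on the device
        driver.get("http://www.example.com/");
        driver.findElement(By.id("search")).sendKeys("android testing");
        driver.quit();
    }
}
```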
Yes?
AUDIENCE: I have another question.
So we actually used Robotium in our project a little while ago,
and we had similar experiences in terms of the flakiness.
And we recently switched to UIAutomator, which was
released by Google with Jelly Bean.
Do you have any comments about it?
Because you mentioned that you are working on your own gig,
Espresso.
So is there any particular reason?
IVAN JANICIJEVIC: So it's a little bit different.
UIAutomator has tried to approach the problem
from a different angle.
They're using the accessibility framework,
and they don't actually have access
to the context of the application.
So they can just click around, but you don't actually
know what happened underneath.
You can't query.
You can't change the state of the application
the way you can with instrumentation.
The approach that we took is to use the instrumentation
to drive the testing.
I mean, there's overlap.
But again, you can use both to solve different problems.
And it's less flaky, definitely.
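For contrast, a UiAutomator test of that era drives the whole device through the accessibility layer, with no handle on the app's internal state; the class name here is hypothetical:

```java
import com.android.uiautomator.core.UiObject;
import com.android.uiautomator.core.UiSelector;
import com.android.uiautomator.testrunner.UiAutomatorTestCase;

public class CrossAppTest extends UiAutomatorTestCase {
    public void testOpenSettingsFromHome() throws Exception {
        getUiDevice().pressHome();
        // Finds the view by its on-screen text; no access to the app's state.
        new UiObject(new UiSelector().text("Settings")).clickAndWaitForNewWindow();
    }
}
```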
Good.
More questions?
OK.
Well, thank you.
[APPLAUSE]
[MUSIC PLAYING]