[ Video playing ]
>>>October 28, 2012, an undisclosed Google testing lab.
[ Sirens ] >>>Test failure in section Q71A2.
Alert. Alert. [ Video ends ]
>>>Oh, jeez. This battery is leaking everywhere. It’s all over the entire tray. Goddamn. I’m
going to have to dispose of this. What in the world are you people doing here
without HazMat suits on? Don’t you know that this is a mobile testing environment? You
guys could get yourselves killed. It’s seriously toxic here.
Oh, brother. Iris, we’re going to need a decontamination
team deployed to section 31415. We have multiple human infections with mobile devices. Ah,
jeez. You guys think this is a joke? The mobile development environment is toxic. We do things
on devices that are not reliable. Our tests are not reproducible. Making these tests reproducible takes forever. When they’re running, jeez, they have no control over the device
that they’re running on. How are we going to simulate what we need to simulate? Scaling?
Jeez, it takes a developer one or two days to set up his desktop workstation. Getting devices to work, if you’re going to set up some sort of cloud testing lab, what, a quarter? Two?
Jeez. And debugging that? How many things are going to go wrong in this setup? You’re
going to spend your life debugging test failures. Who wants to do that?
Oh, man. I can’t believe you guys walked in here on me.
Look, we’re going to get you cleaned up. Don’t worry. Don’t tell anyone; okay? I want you
to focus on all the areas that we’re talking about here. We’re going to make your environment
consistent, stable, reliable, fast, totally under your control, and debuggable.
The decontamination team has kind of identified four areas where there’s contamination. The
devices you’re running your tests on, ADB, your applications, and the test harness that
orchestrates the whole thing. So pay attention. Oh, thank God, Stefan, you’re here. Our first
decontamination team member.>>Stefan Ramsauer: Hey, Tom. What are all
those people doing here? There is really nothing to see. Keep moving, keep moving.
[ Laughter ] I have to focus on my work. I have to clean
up devices. Look, they are not consistent. We cannot scale. They are not reliable. And
they’re really slow. So, oh, my God, a physical device lab. Oh,
my God. Cables. Cables everywhere! Corroded connectors. Look, a leaking battery. This
device is hanging in a boot loop. That device does not even boot. Man, we are in 2012.
The mobile revolution just started and we want to use this to scale? This will never
work. If we really want to solve the problem we
have to switch to virtual devices. Hmm, what about this Android emulator thingy?
I found this picture on the Internet. It must be true.
[ Laughter ] So let’s try it out.
Okay. Let’s start the Android emulator. So our idea is to get as close as possible to a real device. To do so, we create an AVD configuration. We pass in some command-line flags, some underlying QEMU flags. We have some environment variables. So it looks
something like this for a Galaxy Nexus. And if you mess up one parameter, guess what?
The emulator will not boot. And it’s really hard to remember all those flags in the right
way. So our solution is, we have a simple Python script with a clear interface, and
it’s really, really simple. We have bash completion, tab completion, so you type something like virtual device. Tab, tab. Nexus 6. Tab, tab. 21, and you get a device. So let’s go ahead
and launch the thing and run a test. Oh, what happened? My test failed. Wait! The
thing is booting. But when is it ready? It’s a little bit like
baking a cake the first time. You better check several times.
So let’s do the same and block until the emulator is fully launched.
So we extend our script and check several states, like “is system server running”,
or “network configured”, et cetera. So now we have a nice configuration. Nexus
5. Let’s start it, and start installing our APKs.
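The launch-and-wait wrapper just described might be sketched like this. This is a minimal illustration, not Google's actual script: the flag values and the exact set of readiness checks are assumptions, though `sys.boot_completed` and `init.svc.bootanim` are commonly used boot signals.

```python
import time

def emulator_command(avd_name):
    """Build an emulator invocation for a named AVD configuration.

    -no-window/-no-audio keep it headless, which is what a cloud lab wants.
    """
    return ["emulator", "-avd", avd_name, "-no-window", "-no-audio"]

def is_fully_booted(props):
    """Decide readiness from system properties, mirroring the
    'check several states' idea from the talk."""
    return (props.get("sys.boot_completed") == "1"
            and props.get("init.svc.bootanim") == "stopped")

def wait_for_boot(read_props, attempts=120, delay=1.0):
    """Block until the emulator reports ready -- like checking the cake
    several times instead of guessing when it's done."""
    for _ in range(attempts):
        if is_fully_booted(read_props()):
            return True
        time.sleep(delay)
    return False
```

In practice `read_props` would shell out to `adb shell getprop`; the talk's script checks more states than these two.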
What? We’ve run out of disk space on a virtual device? Seriously?
I mean, this should be easy to fix. There must be a command-line flag. Oh, here, I found
it. Oh, it doesn’t work across all API levels. What a bummer.
Okay. What’s the next solution? Hmm, let’s go ahead and buy a new hard disk.
On the emulator side, we have two partitions, system and data. They are represented by two
files. And I just go ahead and increase the size to 2GB each.
Now we should have enough space to even install Google Play services.
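One way to grow those two image files is with `qemu-img`, the standard QEMU disk-image tool. Whether the team used this exact tool is an assumption, and the file names vary by emulator version; this just illustrates the "increase the size to 2GB each" step.

```python
def grow_partition_commands(image_paths, new_size="2G"):
    """Build qemu-img invocations that grow each partition image file.

    Growing the raw image this way still needs the filesystem inside
    it resized (e.g. on the next boot) before the space is usable.
    """
    return [["qemu-img", "resize", path, new_size] for path in image_paths]
```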
[ Laughter ] Okay. So we are launching the device, installing
our APKs. Maybe we add WireMock or something, now that we have enough space. And now we’re starting
to launch our Espresso test. The UI test is working and — bummer. Android,
what have I done that you treat me like that? Have you guys seen this in a UI test? Okay.
I give you guys a little bit of background. So we run a lot of tests. And once in a while the system decides to pop up a system dialog which has nothing to do with our test. So if we are a manual tester, we can easily dismiss the dialog and keep going. But we run in
the cloud. Who is pressing the thing? So we invested a little bit and it turns out
there is a really nifty service. It’s called activity controller inside of Android. Unfortunately,
it’s hidden. But has this ever stopped us? We control the
device. So let’s write our own activity controller, install the script, launch it inside the thing
and keep going. Let’s do contamination check. So we have a
consistent device. It’s scalable. It’s reliable. I think it’s safe to remove the shield.
(coughing). Oh, my God! I totally forgot about speed.
It’s still super toxic. It’s red. But you guys, if you have seen GTAC last year,
you know there are known solutions for it. Use KVM and use snapshots.
Oh, hey, what happened over there? Reloading of a snapshot file failed. Oh, well, let’s
investigate this outage. Oh, it turned out that migrating snapshot
files across different machines does not work. And at Google, we have a lot of machines.
So say good-bye to snapshots. And let’s think of an alternative solution.
Let’s step back and question why we were using snapshots in the first place.
They helped us to boot faster. So let’s play a little bit with this Android device and
see what we can do. So it turned out the second boot is always
faster than the first boot. Maybe the system is configuring some stuff. So let’s go ahead
and cache that first boot and use it every time we start the device.
And the second thing, let’s just go a little bit deeper. I mean, we had a feeling that
the emulator is doing something. By reviewing the source code, we figure out it’s doing
a lot of I/O operations. I mean a lot. It’s copying system images two times. And just
by removing those unnecessary I/O operations, we are saving five to ten seconds per start.
And in our environment, this means we are saving 10,000 CPU hours a day.
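The first-boot caching idea might be sketched like this: boot once, shut down cleanly, then snapshot the disk images so every later start reuses the already-initialized state. The image file names here are illustrative assumptions; actual names vary by emulator version.

```python
import pathlib
import shutil

def cache_first_boot(image_dir, cache_dir,
                     images=("system.img", "userdata-qemu.img", "cache.img")):
    """Copy a cleanly-shut-down emulator's disk images into a cache so
    subsequent launches skip first-boot initialization.

    Returns the list of image names actually cached."""
    cache = pathlib.Path(cache_dir)
    cache.mkdir(parents=True, exist_ok=True)
    copied = []
    for name in images:
        src = pathlib.Path(image_dir) / name
        if src.exists():  # not every AVD has every image file
            shutil.copy2(src, cache / name)
            copied.append(name)
    return copied
```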
Okay. Now I think it’s time for a little demo.
[ Video starts ]>>>Okay. Let’s start a Galaxy Nexus API 16.
The emulator is running. Now it’s polling. And the emulator is started in under 18 seconds.
[ Video ends ]>>Stefan Ramsauer: Let’s do contamination
check again. We cleaned up consistency, scalability, reliability, and speed. Let’s check if I can
pull off the helmet. Okay. Easy to breathe.
>>Thomas Knych: Oh, man. Thank you, Stefan. It’s seriously hot in here. I appreciate being
able to breathe. Now we have all those devices, man. What are we going to do with them? What
are we going to do? We want to control them; right? We want to get over there. Who is going to cross the bridge? Come on. It’s only mostly lethal.
[ Laughter ] For those of you that don’t know, the Android
debug bridge is pretty much the only way you get to do anything with Android.
Okay. Fine. Except it’s extremely unreliable. If anyone has written a script with ADB, here’s
what you do. You execute your command. You wait 60 seconds to see if it comes back. If
it doesn’t come back, you kill the command, you kill ADB. You reconnect everything, and
you issue it again. And you do that three or four times and hopefully it worked.
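That folk recipe, as a generic sketch. This is a hypothetical helper, not Google code; for real ADB, the recovery step would be something like `adb kill-server` followed by `adb start-server`.

```python
import subprocess

def run_with_retry(cmd, timeout_s=60, retries=3, recover=None):
    """Run a flaky command: wait up to timeout_s for it, and on a hang or
    failure run a recovery step (e.g. restarting the ADB server), then retry."""
    for _ in range(retries):
        try:
            return subprocess.run(cmd, capture_output=True, text=True,
                                  timeout=timeout_s, check=True)
        except (subprocess.TimeoutExpired, subprocess.CalledProcessError):
            if recover is not None:
                recover()
    raise RuntimeError("command kept failing: %r" % (cmd,))
```

With a device attached one might call `run_with_retry(["adb", "shell", "ls"], recover=lambda: subprocess.run(["adb", "kill-server"]))`.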
On top of that, it’s not consistent. You can do adb shell ls — sometimes 50 milliseconds, sometimes two seconds, a totally random distribution. And speed, man. I have one process, another
process. I want to transfer 30 megs of data on my computer. 90 seconds? Why?
How am I going to build anything if I have to cross this bridge every time I want to
interact with my device? All right. Let’s dig down. What’s going on
here? When you type adb shell ls, it opens a network connection to the ADB server running on your machine, which opens a network connection to the emulator, which takes its packets and routes all of them through SLIRP, which is this user-level networking tool thing from the ’90s that was invented so Unix could do dial-up networking. It lives in the emulator today, pretty much. Magically, those packets appear on a port inside the emulator. ADBD picks them up, executes the command, and then shoves everything back. So, eh, this seems fine. Jeez, well, all right. It’s not fine. Let’s
get rid of SLIRP. Let’s bypass the virtual networking stack.
Now, a couple of years ago — I think it was somebody’s Noogler project, I’m not sure — basically an option was added to the emulator to bypass the networking stack using this thing, ADBD over QEMUD. Maybe that made things faster? No, pretty much the same. Also, it
was pretty untested and unreliable and really hard to use. So like every version of Android,
we would have issues with this and we’d have to fix it up.
So — oh, and snapshots and this, man, were not friends.
So didn’t really do much here. Why are we having all this trouble?
Well, it’s the ADB protocol itself; right? The ADB server kind of speaks in four-kilobyte packets to ADBD. So if we’re thinking, like, a 30-megabyte APK, that’s about 7,000 four-kilobyte packets. Oh, and each one of those packets needs to be ACKed, and that’s 14,000 packets, and that’s crossing so many boundaries. What are we going to do with this, man?
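The arithmetic, worked exactly: ceiling division of 30 MB into 4 KB packets gives 7,680 data packets, or 15,360 once each ACK is counted — the same ballpark as the rounded figures above.

```python
def adb_packet_cost(payload_bytes, packet_bytes=4 * 1024):
    """Count the 4 KB data packets for a transfer, plus one ACK per packet,
    matching the back-of-the-envelope estimate above."""
    data_packets = -(-payload_bytes // packet_bytes)  # ceiling division
    return data_packets, 2 * data_packets  # (data only, data + ACKs)
```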
Let’s put those love birds together. If they’re both running on the emulator together, at
least they’re not crossing so many boundaries and they can exchange their little messages
to each other, and we can deal in slightly bigger packets.
And this was great. We did this using the qemu-pipe, which is kind of the same technology
that lets the Android emulator do its OpenGL graphics. So it’s much more tested. They make
sure those work. We get 30 megabits per second over this. But wait. When we’re pushing
data and we do top on the emulator, we see ADB server and ADBD using 80% of the CPU.
What? Come on. This is an I/O operation. Why are you using CPU?
What are you guys doing? And it’s still freezing. We didn’t fix that. We just kind of rejiggered
things a little bit. Let’s get rid of it. Come on. I just need
to execute a command and push some bytes back and forth and maybe tunnel the network
connection. That’s a perfect job for Go. We kind of rewrote
ADB in Go and optimized it for the emulator and we used a nice qemu-pipe interface. It
was awesome. It was a drop-in replacement for ADB because
we have lots of scripts that fork out to it.
And we didn’t want to go fix them. So this is great. With this, we have a brand-new
bridge. Look at it. Taxpayer dollar money. [ Laughter ]
All right. And it just works. It’s very consistent. We’ve
got 20 millisecond overhead. And it’s fast, man. I can get data onto the device in, like,
three seconds. Unbelievable. Can we roll the video?
Don’t take my word for all this. Like, let’s actually see a demo.
[ VIDEO ] So let’s do ADB shell a few times to see this
slowness and inconsistency I’m talking about. It’s all over the place. How about we try
it with ADB (indiscernible) shell (indiscernible). Very fast and very consistent. Let’s install
a 20-megabyte APK. Real ADB is going to take quite a while to do this and it will peg the
CPU while it’s transferring the data. Now it has installed it. It took about 15
seconds. Let’s try it again with ADB turbo.
The data is on disk in 87 milliseconds. And it’s installed in seven seconds.
Let’s try something else. Let’s pull the data partition off this emulator.
It’s done in a second. Not going to try that with real ADB, because it takes about five
minutes. [ Laughter ]
[ VIDEO concludes ]>>>Man, I’m going to take these off, because
I think we really cleaned up the consistency area. We’re green here.
And we’ve done our part to improve reliability and speed. We made things kind of faster with
ADB turbo. But we have a lot to go. And I see Vishal, our next decontamination team
member.>>Vishal Sethia: I think we seem to be making
good progress. So as application developers or testers, you
want to make sure that you are able to control the device from the application itself. Like,
consider one of these device settings. Let me just take an example here of entering or
exiting airplane mode. That seems like a simple device setting. So all we’ll probably do is
make an API call and call it a day; right? That should just work.
But wait. We’re going to get a permission denial. Anybody know the reason why?
No. It’s probably because of the way the Android system is designed. For any API that you call, you need to make sure that your AndroidManifest.xml has permissions for it. So that seems simple. Just go and add that permission to our AndroidManifest.xml and run the test, and then everything, you know, just works fine. Why is that a problem? This happened with one of the developers that I know. For testing, they added a permission to the app’s AndroidManifest.xml and launched it to production; right? And we really don’t want to do that.
So what are the problems with this particular approach? The test APK runs in the same process as your app under test. It inherits all the permissions that the app under test’s AndroidManifest.xml has. And every time you — you know, you want to make a new device setting change, you add a new permission. That’s just going to convolute your manifest. And your users are going to get confused as to why these permissions are required even though the app does not need them. How do we solve this at Google?
At Google, we get help from good guy Greg. Does anybody know who good guy Greg is?
No? Good guy Greg is a person who has access to all the resources that he’s legally entitled to access, but your app may not have access to. Does anybody get the drift here of what I’m talking about? Probably not. All right. Kidding aside, you know, our solution
is very simple. All we do is create a buddy APK that encompasses all the permissions that
it needs in its own manifest.xml. Now that it has permissions, it can make those device
settings. It exposes all of these things as services. Then we create a testable class
that sends out intents to these particular services so that it can make those device
(indiscernible) changes. You then — all the test then has to do is
make these static method calls to the special library.
Let’s take a look at the sample test code. You have a UI test that extends from (indiscernible)
instrumentation test case 2. All the test needs to do is make calls to the library, and that’s about it. It does not need to convolute the app under test’s AndroidManifest.xml, the users
are never going to get confused as to why additional permissions are required, and your
test code is really clean. With that, I think, you know, it’s safe to
take off our gloves, just because, you know, we have good control over the application,
good control over the device itself from the application. And I see another member from
the decontamination team to make our environment completely toxic-free.
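From the host side, asking a buddy APK like the one just described to flip a setting might look like this. The package, service, and action names are invented for illustration; `am startservice`, `-n`, `-a`, and `--ez` are real `am` command syntax.

```python
def buddy_intent_command(action, bool_extras=None,
                         package="com.example.testbuddy"):
    """Build an 'adb shell am startservice' call that asks the buddy APK's
    settings service to make a device-setting change on the test's behalf.

    The buddy APK holds the permissions, so the app under test's
    manifest stays clean."""
    cmd = ["adb", "shell", "am", "startservice",
           "-n", package + "/.SettingsService",
           "-a", action]
    for key, value in (bool_extras or {}).items():
        cmd += ["--ez", key, "true" if value else "false"]  # boolean extras
    return cmd
```

For example, entering airplane mode could be `buddy_intent_command("com.example.testbuddy.AIRPLANE_MODE", {"enable": True})`.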
>>>Thanks, Vishal. Yeah, that will work.
Yeah, good guy Greg, he’s a pretty useful resource. We’re actually going to be hanging
out later today. But let’s do a quick overview here. Let’s
see. We’re consistent. We’ve got that going for us.
But we’re consistently unreliable still. We still have issues with scaling and speed and
control, and debuggability. I mean, this is just a toxic wasteland.
But before we dive into the cleanup stage, let’s see what we’re dealing with. What exactly
is a test harness? So I explain this with a story. I’m walking
to work. I work in San Francisco. And every day, I commute. And as I walk to work, often,
I get stopped on the streets of San Francisco by somebody who says, hey, man, run all these
test cases for me. [ Laughter ]
Like, if that happens to you guys, what are you going to tell this guy? I mean, I tell
him to get lost and do it himself. But people in Seattle are much nicer, so you
guys are probably going to accommodate them; right?
Well, the first thing you’re going to ask for is, well, if you’re going to ask me to
do all this work, why don’t you give me some test devices. I really like those Nexus 6s,
man, they fit so well into the palm of your hand.
[ Laughter ] Okay, the guy says, here you go, man, have
100 Nexus 6s, they’re all going to be virtual, by the way, but, you know, that will do. We
know it works. So you have all these devices, and the next
stage, you may want to install your APKs and maybe push some data onto the device that
you — may be useful for your testing. Now you’re finally ready to execute all your
test scripts. You do that, and at the end, you report the results. And the guy says thank
you. So that’s pretty straightforward. You know,
we all know about this process; right? But, man, all along this path here, we have
toxic waste. We — I don’t know, we can’t work like this.
Let’s begin the cleanup. So what are we dealing with? We have this
huge Python script — at least it’s Python. That’s not too bad. We can deal with that.
And this Python script bundles everything together by actually making sure that it can
launch a device. It has a bunch of code for that. And running your tests. And why is that
a problem? Well, what happens if launching a device fails?
All of a sudden your test gets blamed. That’s not fair. I mean, I wrote a good test. The
device fails to launch. That’s not my problem. And the second part is, oh, what do you do
if you want to run the test locally versus in the cloud, where it runs by default?
Hmm. Looks like I have to comment out some code here. I have to find the ADB port of
the device, paste it into here. Man, like, no developer is ever going to have patience
to do that. They’re just going to give up. So what’s the solution? Like every problem
in computer science, we solve it with another layer of indirection.
[ Laughter ] The device broker interface is super simple.
You can have a device, but when you’re done with it, you give it back. And so what this
gives us is, if launching a device fails from the test harness, the test harness, you know,
can actually mark it as an infrastructure failure, and it can even retry it. And because
this is an interface, guess what? We can implement a cloud device broker, and we can
implement the local one. So we have now better reliability and better control.
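The broker interface and its retry behavior might be sketched as follows. The names are hypothetical; the real implementation is Google-internal.

```python
class DeviceBroker:
    """The layer of indirection: hand out a device, take it back when
    the test is done. Cloud and local brokers implement the same API."""
    def lease(self):
        raise NotImplementedError
    def release(self, device):
        raise NotImplementedError

def lease_with_retry(broker, attempts=3):
    """Treat launch failures as infrastructure failures and retry them,
    instead of blaming the test."""
    last_error = None
    for _ in range(attempts):
        try:
            return broker.lease()
        except RuntimeError as error:  # device failed to launch
            last_error = error
    raise RuntimeError("infrastructure failure: no device available") from last_error
```

Because both brokers share one interface, the same test runs unchanged locally or in the cloud.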
All right. Moving on. So the next stage is staging test fixtures.
And in the Java world, in the standard Java world, we don’t even think about this. We
are running on the same machine as where the data lives. Like, for example, you may have
a build script with some test arguments. It’s all in one machine, so not a problem.
But this is Android. And on Android, we run our tests on devices. Not on the host machine.
Okay, the solution here is pretty simple. We now have this awesome ADB turbo. Let’s
use it to push the data onto the device consistently, reliably, and quickly. And good guy Greg lives
on that device, too, man. So we’re going to get help from good guy Greg and access all
of that data without polluting our manifest with toxic permissions.
It’s all beginning to come together; right? Oh, my goodness, but what is this?
Looks like somebody was trying to run Android instrumentation tests. I don’t know.
Well, what’s the problem here? If you do a simple Google search on how to
run Android tests — and, actually, Michael showed this today — inevitably, you’re going
to be led to the instrumentation test runner documentation page. And the very first option
on that documentation page is going to be how to launch all of your tests in one grand
sweeping invocation. And I believe Michael mentioned that actually
Spoon does this. Now, why is this an issue? Oh, there are many
issues here. First of all, because it doesn’t know exactly
what kind of test cases it has to run, it has to do a class path scan. That’s pretty
reasonable; right? Well, on Android, this takes forever. And that’s the first problem,
speed. And a very, you know, kind of convoluted problem,
even more important, is reliability. The class path scan will actually initialize all of
the static code not only in your test APK, but also in your app under test. So you may
find this bug, and you’re going to get all excited and run to the developer who created
the code and say, hey, man, I found the bug in your code. And the developer is going to
say, okay, let me try to repro that on my device. No, no repro, sorry. Why is that?
Well, we have just mutated the app under test. It behaves differently under test than it
does in the real world. That is a really bad problem and it’s actually really hard to debug
and fix. But let’s say that our test actually finds
a legitimate issue. That’s great. Let’s say our test finds a crash in the app. And what
happens then? Well, you know, that test ran. That’s great. Oh, but all of these other tests,
they’re not going to get to run. Remember, we are running in one process with the app.
If we crash the app, everything goes down. We don’t get that extra test coverage. And
that’s really bad. And, finally, shared state. Our test engineer’s
favorite friend; right? Again, we’re running all on the same process.
And as we’re running — and hopefully, we have lots of tests. We’re kind of going to
be likely creating some shared state. We’re going to be building it up as we run. And
Android actually makes this problem even worse. Android is kind of stateful in many ways.
For example, if one of your tests launches an activity that fails to shut down cleanly,
your subsequent test may well fail because of that. So all three of these are huge reliability
and speed problems. So how do we solve this?
If there’s anything that you take away from my part of the presentation today, it’s that
you should run one test per instrumentation call. And in order to do that, you first need
to extract all of the test methods from your test APK. And we do that by using a utility
called dexdump. We do that on the host machine so it’s much faster and doesn’t affect what’s
happening on the device. And that solves, like, 90% of our problems or even more.
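One invocation per test method might be sketched like this. The method names would come from scanning the test APK with dexdump on the host; the runner class shown is the stock Android one, as an assumption about the setup, and `-e class Cls#method` is real `am instrument` syntax.

```python
def instrument_commands(test_methods, test_package, runner):
    """Build one 'am instrument' call per test method, so a crash in one
    test cannot take the rest of the suite down with it."""
    return [
        ["adb", "shell", "am", "instrument", "-w",
         "-e", "class", method,  # e.g. "com.foo.BarTest#testBaz"
         "%s/%s" % (test_package, runner)]
        for method in test_methods
    ]
```

Between runs, `adb shell pm clear <package>` is the bigger hammer mentioned above for wiping persistent package state.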
Sometimes there is shared state that’s persistent and we have to use a bigger hammer, so we
also give an API to our test to be able to clear package data between test runs. It’s
not always necessary, but sometimes it is. So with that, we’ve cleaned up probably what’s
the biggest mess in the test harness, and that is the instrumentation and running tests.
Finally, result reporting. So let’s see what our huge Python script gives
us here. Oh, it’s logcat. That’s wonderful. And one of my tests failed. Let me go and debug that. Let me search this logcat. This has info from maybe, like, 100 other tests. So I kind of have to navigate this giant string buffer here. Oh, but I can’t find the info for my test. What happened? Well, logcat is a circular buffer. So sorry, if you
have a lot of tests, that data may be gone already. And not only that, but our device
is also gone. Our device has gone to a better place. It lives in the cloud.
[ Laughter ] So that’s not going to work. And we’ve already
seen some solutions for that. You know, we just collect all the things. So let’s see
what that actually looks like at Google. Please roll the video.
[ VIDEO ]>>>At Google, we have a system called Sponge
capable of absorbing vast amounts of data from tests and presenting it in a Web front-end.
Here’s what the experience looks like for an Android test. We see here that in this
particular test target, two of the tests failed. The history of the target shows that it’s
unlikely to be test flakiness. It’s pretty stable.
Clicking back, we are presented with the exception from the test. We can see that the test is expecting
the string 1236 to appear on the screen. Scrolling down, we can look at the screenshot
and confirm that, indeed, the string 1236 is not on the screen.
But what happened? Let’s look at the video.
It goes by really quickly, so we can actually drag it here and go frame by frame.
>>>Dragging.>>>It looks like there were some clicks.
We tried to pick a date. We were expecting the date to be presented to the user, but
it’s not showing up. We can also look at logcat.
And here, I would just like to note that it’s pretty short. That’s because only the logs
from this particular test run show up. Let’s go and chase down the person that introduced
the regression. [ Video concludes ]
>>>Who is that person, by the way? You guys know him? It’s a trick question.
It was me. [ Laughter ]
So with that are we green? I think we’re super green, man.
So we have devices, well, virtual devices, that are consistent and reliable and fast.
And most importantly, they help us scale. And I’m going to spend a little bit of time
here, because I think it’s important, to note that at Google, we ran some numbers a couple
months ago. We run 200 million tests per month. That averages out to be about 70 tests per
second. So in the time that it just took me to say that, we just ran about 1,000 tests.
Oh, and here’s another 1,000 tests. So scalability is absolutely critical for
us. And we have it with the emulator. With ADB, we are — again, we built a new
ADB. And our bridge is now both earthquake proof and it has wider lanes. And you guys
saw a demo for that. Our applications are free of toxic waste from
toxic permissions and our test harness just orchestrates everything together and makes
sure that it plays beautifully and it doesn’t affect the reliability again and consistency
and gives our developers good control. So with that, Iris, I report the contamination
of the mobile test environment has been cleaned up. Decontamination team, please report to
staging area. But wait.
So we kind of finished with our act now. We wanted to make sure that, you know, it’s 5:00
p.m., we did something exciting for you guys. And we also wanted to sort of follow up on
our presentation from last year. Last year at GTAC, we presented this grand
vision of a battle against the manual testing matrix. And at Google, we have won this battle.
But let me do a little poll here. Who here wants to take the slide deck and is really
excited at 6:00 p.m. on a Wednesday night to go back to their desk and start reimplementing
all of this? Anyone?
Okay. I see a hand there. Please send us your resume.
[ Laughter ] You’re crazy, but we need people like you
on our team. Yeah, I mean, obviously, in reality, guys,
I think it’s — you’ve seen here, you know, we wanted to give you the technical details,
you know, just to prove that it exists. And developers at Google have an environment that’s
green, that allows them to actually start, you know, writing test cases. And, actually,
this is also a very important point. Somebody mentioned — I don’t remember which talk — that
frameworks don’t really matter. Frameworks do matter. Even if you have this green environment,
you still can have flaky tests. And flaky tests have been the theme of this conference.
They’re really bad. So frameworks do matter. And at Google, we pretty much use Espresso.
If you’re doing Android development, you’re going to be using Espresso.
So with that, you know, I think that we’ve proven that this environment does exist. We’ve
already released Espresso. And all we can say at this point is that we would really
like to give this environment to all of you. So please stay tuned.
>>>But wait. Hold on. We just, like, got this brand-new clean world. Can’t we do something
more fun with it?>>>I don’t know —
Wait, wait. Are you — Don’t say anything, man. That’s, like, a Google X project.
>>>No, no, no. The other thing.>>>You mean, like, self-flying cars?
>>>No, no, no, no. The other thing.>>Valera Zakharov: The other thing. Okay.
>>Thomas Knych: Yeah. The video. Let’s roll the video.
[ Video playing ]>>>Imagine if getting to a new device was
as easy as opening a new tab, in Android in the (indiscernible) makes that imagination
a reality. Simply select a device that’s running in one of our data centers and connect to
it over Chrome remote desktop. These devices have been optimized and are fully under your
control. Now, maybe I want to change the network so
the device behaves as if it’s on the edge network. Well, that’s just a few clicks away.
I can also do things like playing with the battery levels of the device. So I can simulate
a low-power situation and make sure that my app responds properly.
The tab is yours. [ Video ends ]
>>Valera Zakharov: Thank you, man. That made me cry. I was going to cheer up, man.
[ Applause ]>>Yusuke Tsutsumi: Thank you, Mobile Ninjas team. That was the best presentation.>>Valera Zakharov: And by the way, Ashwin
is picking up the tab. When we said the tab is yours, we really meant….
[ Laughter ]>>Valera Zakharov: Any questions?
>>Sonal Shah: Any questions?>>Valera Zakharov: Well, it’s really fast
at Google to do anything.>>Thomas Knych: It’s really fast to do things.
It’s really hard to get it out the door. This presentation was pretty hard to do, for multiple
reasons. And, yeah, but we’re working on it.
>>Alan Myrvold: So we have a question from the moderator. I can haz ADB turbo?
>>Thomas Knych: Yeah, totally. That’s just another open source release away.
>>Valera Zakharov: Oh, yeah, man. So the nice thing about ADB turbo, and Tom can speak to
it more because he actually developed the whole thing as his go/readability project,
I believe –>>Thomas Knych: Still working on that.
>>Valera Zakharov: — but I believe it’s fairly stand-alone; right? And doesn’t have a lot
of dependencies? You can talk more to that.>>Thomas Knych: It’s purposely produce (indiscernible).
It kind of sits on top of our other launching infrastructure because you need to set up
a little environment like Stefan was referring to. But, yeah, for the emulator, we can make
that happen. As long as you’re using Linux.
>>Alan Myrvold: Another one from the moderator. What about Espresso? And any progress moving
to AOSP? Android Open Source Project?>>Valera Zakharov: We hoped that somebody
would ask that question. Well, since we were preparing for this presentation,
we weren’t working on Espresso. But no, actually, I was even doing some coding today on it.
I think we’re kind of putting the last engineering touches on this project.
For example, just today, I was trying to enable WebView support on API level 18 and higher.
So that is all coming. There’s a lot of good stuff in there. I’m really sorry for the wait.
It’s just — you know, it’s been kind of hard to work on it. But now, the whole team is
working on it and I think it’s just weeks away.
So….>>Thomas Knych: And once it finally lands in AOSP, you guys are going to kind of see a released version of Espresso and the head version that we’ve always been using. So we’re going to do a lot more work in the open for
that.>>>Yeah, we really hope that this is our
last big release and we just want to have a regular release cadence where we just push
it out very frequently.>>>So you made a strategic decision to automatically
boot the device to ensure reliability and consistency in your harness. Do you guys have
data to back up that that has given more quality to the tests that were run since you’re no
longer simulating the user environment by always booting clean and double booting?
>>Thomas Knych: No, actually, we do do a full boot. So the emulator has these, like, disk
image files; right? We push it through a boot cycle. We shut it down nicely, we preserve
its disk images. So it’s like you were turning off your phone, and then we make that like
a little bundle that when some test finally needs to run that emulator in that bundle
–>>>I think what you’re getting at is you’re
losing some test coverage, maybe, because you’re not polluting enough and it’s too clean.
>>>I think it’s really important to layer your testing. So I think that, you know, at
Google, we have this 70/20/10 rule. So 70% of your tests should be very, very small tests. We actually recommend now using Robolectric for that, and a lot of our developers love it. 20% of your tests should probably be UI tests that run on Espresso and may be isolated from the network. The last ten percent would be these very big end-to-end tests. And even after
that, I think when all of that is green, you can do alpha testing. And I think that that
may be a better place to catch those bugs. You know, we kind of — at our last presentation
at GTAC we alluded to the fact that the majority of bugs you are going to find are going to
be things like off-by-one errors, and you can find those either with standard Robolectric
tests or with your UI tests. So I think that’s the approach I would recommend for that.
>>>Are you able to take a dogfooder’s image and put it in an emulator if someone wants
to reproduce that specific environment that exposed a bug?
>>Thomas Knych: Yes. Like all these tests when they’re running internally, Stefan can
do something and say, hey, Tom, I’m having a problem. I can totally reproduce his whole
setup and environment. So….>>Sonal Shah: Last call.
All right. Thank you very much, mobile ninjas.
[ Applause ]