[MUSIC PLAYING]
PAUL SAXMAN: Hello and welcome to a special pre-I/O episode
of Google Maps Developers Live.
This week Brendan, Mano, and I have been joined by a team,
Instrument, who's working on a map dive exhibit for Google
I/O. They're going to give us a demo and introduce
themselves.
And then we'll talk a little bit about the technology.
So with that, Instrument team, do you guys mind letting us
know what you're working on and who you are?
MARTIN LINDE: I'm Martin Linde.
I'm the creative director on the project, in the
labs group here at Instrument.
BEN PURDY: And I'm Ben Purdy, and I am a technical lead and
have been doing a bunch of the development on this thing.
MARTIN LINDE: So just a little background about Instrument.
We're a small- to medium-sized creative agency
in Portland, Oregon.
And we actually started out as basically purely a dev shop
with just a little bit of design farmed out.
And it's grown to about 80 people now doing everything
from website development to motion graphics and content
production.
And it's a cool opportunity to be working on this project,
particularly since that's exactly the kind of work the
labs group within Instrument was formed to do.
So we'd love to show you the demo of Map Dive.
So Ben will go outside, and I will try to
narrate what he's doing.
So what you're seeing right now on the seven big screens
here is the attract mode.
And Ben will raise his arms to sort of attract Pegman to fly
towards him.
And now he will raise his arms so that the Kinect or the Asus
device will recognize him.
It will do like a pirouette and then fly over a countdown
above the clouds.
And that's just to orient the user as to what's going on.
And let them get a feel for flying.
And now the level starts.
So he's diving down towards a zoomable Google Map.
And the objective of the game is to hit
gates and bonus stars.
And the way that it works is if you have your arms
stretched out, you're planing out and can sort of move
vertically in the air--
or horizontally, sorry.
And then when you move your arms closer to your
body, you will dive.
And so that's what he's doing right here.
And he does a little loop once he hits a gate and a barrel
roll when he hits a star.
And every so often, the map will reload to get you closer
to the ground, so to speak.
And it's super responsive and fun to watch, even, and even
more fun to play.
So we'll just let Ben show his skill.
And he purposefully missed an object in space there.
People at I/O will be able to see the bonus rounds.
There are 13 base maps that take place on a normal Google Map.
And then there's about five bonus modes that are just fun
takes, and more intense levels that will give you bonus
multipliers and more points.
So we look forward to showcasing those.
And so you see in the distance there sort
of a pulsing graphic.
And that's kind of the drop zone.
And so you're diving over 13 locations in the world.
And right now we're over the Statue of Liberty in New York.
And so he has just managed to hit the drop zone, and we'll
initiate the celebration sequence right here.
And so that's kind of a cinematic cut sequence.
And he will circle the Statue of Liberty, and after a couple
of seconds, he basically will take off up
into the sky again.
And that's the basic gameplay.
You can miss, and then a parachute will deploy, just to
show you that you weren't quite as good.
And then quickly you can get back to it and try again.
And so, without further ado, here is Ben describing the
magic behind this thing.
BEN PURDY: All right.
So as you can see, this is a multi-screen experience.
I'll just go through, really quickly, what is driving the
actual display and game.
So each one of these seven monitors is being run off of a
separate PC.
And every single one of those is just an instance of Chrome
running in full screen.
This whole thing is built on JavaScript and web
technologies.
All of the communication is handled through WebSockets
just routing messages through an instance of Node.js.
So there's no native custom apps running except for the
body tracking, which is an openFrameworks app using
OpenNI and a 3D camera, sort of like a Kinect.
So that's what's driving this thing.
Each of the view ports is basically just showing the
current game state.
And then we have another PC that is not connected to any
of the game displays.
There's a podium that you can see in the scene there that is
running the game logic.
And then game state is sent out to all seven of the
displays at about 60 hertz, which is what we try to keep
the frame rate at.
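The setup Ben describes, with one game-logic machine broadcasting state to all the display clients, could be sketched in plain JavaScript. The `Hub` class and the client shape here are hypothetical stand-ins; in the real installation the clients would be WebSocket connections managed by Node.js.

```javascript
// Minimal sketch of a message hub: the game-logic PC broadcasts
// state, and the hub fans it out to every registered display client.
// Hub and addClient are hypothetical names, not from the real project.
class Hub {
  constructor() {
    this.clients = [];
  }
  addClient(client) {
    this.clients.push(client); // client must expose a send(message) method
  }
  broadcast(state) {
    const message = JSON.stringify(state);
    for (const client of this.clients) {
      client.send(message);
    }
  }
}

// At ~60 Hz the game-logic machine would call broadcast with the
// current game state, e.g. via setInterval(() => ..., 1000 / 60).
const hub = new Hub();
const received = [];
hub.addClient({ send: (m) => received.push(m) }); // fake display client
hub.broadcast({ player: { x: 1, y: 2 }, tick: 0 });
```

Because every display just renders whatever state arrives, the hub itself stays a dumb router, which matches the "no native custom apps" design.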
So it's pretty ambitious for something running
completely in Chrome.
But it's been a breeze to work with.
And the synchronization between the displays has been
not much of a problem at all.
So that's basically what's driving it.
We use the three.js WebGL library to do all of the 3D
objects in the foreground.
And the map plane is actually an HTML
Google Map that's live.
There's no trickery going on.
It's just being transformed using 3D CSS to stay in sync
with the WebGL camera.
So what you're seeing is an actual live Google Map.
If I hadn't turned off the user input, you could actually
drag and zoom and pan on the thing.
Even though that would distort the game experience, it really
is there, really is running.
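The core of keeping a CSS 3D-transformed element in step with a WebGL camera is pushing the same 4x4 matrix into both renderers. This is a simplified sketch, not the project's actual code: `toCssMatrix` is a hypothetical helper, and a real sync (as in three.js's CSS3DRenderer) also has to handle the perspective origin and CSS's y-down axis.

```javascript
// Take a 4x4 matrix as 16 numbers in column-major order (the layout
// three.js uses for Matrix4.elements) and emit the equivalent CSS
// matrix3d() string. CSS matrix3d() is also column-major, so the
// values can be passed straight through.
function toCssMatrix(elements) {
  if (elements.length !== 16) {
    throw new Error('expected a 4x4 matrix as 16 elements');
  }
  return `matrix3d(${elements.map((n) => n.toFixed(6)).join(',')})`;
}

// An identity camera transform leaves the map plane untouched.
const identity = [1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1];
const css = toCssMatrix(identity);
// Each frame, css would be assigned to mapElement.style.transform.
```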
MARTIN LINDE: [INAUDIBLE].
BEN PURDY: Yeah.
So that's been a fun experience trying to marry the
3D CSS with WebGL.
Additionally, we had to do some interesting trickery to
get the view synchronized.
So what we ended up doing is having a dummy object in 3D
space that we attached the actual camera to, and then we
just offset the rotation of that camera depending on the
view index.
And then as long as each viewport is rendering the same
scene, and since they're all synchronized over the network,
the scene stays in sync.
Then once you put the view side by side, everything
stitches together.
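The per-viewport rotation offset Ben describes amounts to rotating each screen's camera by a fixed yaw based on its position in the row. This is a sketch with hypothetical names (`viewYawOffset`, `horizontalFovDeg`); the actual offset in the installation would depend on the physical screen geometry.

```javascript
// Each of the seven displays renders the same synchronized scene,
// but its camera is rotated by a fixed yaw depending on which
// screen it is, so the views stitch together side by side.
function viewYawOffset(viewIndex, viewCount, horizontalFovDeg) {
  const center = (viewCount - 1) / 2; // index of the middle screen
  return (viewIndex - center) * horizontalFovDeg;
}

// With seven screens the middle one (index 3) gets no offset, and
// the rest fan out symmetrically, here at 20 degrees per screen.
const offsets = [0, 1, 2, 3, 4, 5, 6].map((i) => viewYawOffset(i, 7, 20));
```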
So what I was also going to talk about a little bit was
the tools that we had to make to build this thing.
And what we ended up doing is building the tool to create
levels, which, again, just runs in Chrome.
And then we used Google Maps extensively
to create this tool.
The whole game space is actually tied to real world
coordinates.
And so we just have a translation between actual
latitude and longitude and our game space.
And so that makes it really easy to design these courses
which are laid out at various points over the globe.
So we actually have a Google Map where we can just drop
items and kind of practice courses and
that sort of thing.
And so I can walk through the editor here a little bit.
You're going to have to forgive my sort of UX-less
developer console styling here.
And another caveat I must say is that I designed this to
work on the touch screen podium.
And so these big fat buttons are a result
of my big fat fingers.
So I'll just run through this really quick.
Again, this is behind the scenes; nobody should have to
see this poor UI.
But what I have here is both the admin console that runs
the game, and then the editor actually runs on top of this.
So you can switch into edit mode at any time, and it sort
of takes over the game play.
So I've got an instance of the game running on my local
machine here.
So you can see that my little Pegman is not falling because
I'm in edit mode right now.
And so that lets me fly around magically so that I can
position items and see the layout better.
And so in this admin console, I've got a map on the left,
which is the overview of the game space.
And I've got markers for the player.
And I've got markers for gates and items you can pick up and
landmarks and that sort of thing.
There's a lot of pieces that I've added to make building
levels easier.
So I can center on items.
You can select items from this list that are all of the
entities in the world.
Double-clicking on one of those will
center it in the view.
All of these items are interactive.
One kind of cool thing.
At first, I was building all the levels just using
this top down view.
And that turned out to be really cumbersome because it's
really easy to make a level that's unbeatable if you're
just doing it from a top down view.
And so eventually I merged the editor, like I said, into the
actual game engine.
And so with this new system I'm actually able to--
try to get this on the screen here.
All of the changes that I'm making are actually sent live
to the viewports.
So I'm not sure how well the frame rate will translate to
this screen share, but you can see that as I'm dragging one
of the objects in the map view, the 3D object in the
game space is also picking up those changes.
And the same thing goes for creating new entities and that
sort of thing.
So it's been really great to not have to code all of this
map drag and drop.
I use things like checking to see if a
location is within bounds.
I have some other niceties to make sure I don't make
unbeatable levels.
If I drag the start point for the map around, I get this
nice ring that shows me how far the player needs to be
from the drop zone to be able to hit it in time.
So I don't end up starting the player too
far away or too close.
And then I'm even using geocoding, so that when we're
designing a level, I can just type in the name of the
location, and it'll prepopulate some search
results text that we end up putting on the screen.
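Extracting display text from a geocoding response could look like the sketch below. The `results` array is shaped like a Google Maps Geocoding response, where each result carries a `formatted_address`; `firstAddress` is a hypothetical helper, not part of the Maps API.

```javascript
// Pull the display label out of a geocoder-style results array,
// e.g. the kind of response you'd get back from
// geocoder.geocode({ address: 'Statue of Liberty' }, callback).
function firstAddress(results) {
  if (!results || results.length === 0) {
    return null; // no match: leave the on-screen text alone
  }
  return results[0].formatted_address;
}

const label = firstAddress([
  { formatted_address: 'Statue of Liberty, New York, NY 10004, USA' },
]);
```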
So it's been really great, like I said, to not have to
reinvent all of these tools to build an
application like this.
We've been able to just focus on things like the game play
and the graphics.
PAUL SAXMAN: Very cool.
So you can actually run all this on your laptop then?
I mean you can actually play the game and everything?
BEN PURDY: Yeah, oh yeah.
I mean, before the seven displays showed up, we had the
game up and running.
I can set the viewport dynamically through a little
config file.
So I would just literally open seven little, tiny, tall,
skinny instances of Chrome and line them up across the top of
my larger desk monitor.
And then I could simulate what it would be like when the
monitors showed up.
And then when they showed up, we just got Chrome running on
them and fired it up.
And it was basically painless to get this thing working on
the actual distributed installation versus just on a
single computer.
PAUL SAXMAN: So it sounds like it's pretty much built all on
open technologies.
BEN PURDY: Like I said, we're using--
so it's all running in Chrome.
All of the game logic is JavaScript running in Chrome.
And then Node.js is running the communications.
And it literally is just a hub so that the game logic can
broadcast the game messages out to the
seven display clients.
And then Node also handles routing the control
information from the body tracking software, which comes
in over a UDP connection to Node.js.
The body tracking data is basically the angle of your
torso compared to the ground, and then each arm compared
to the torso.
So we just get three angles that are being sent to the
control node at as fast a frame rate as
the thing can capture.
And so then the control node integrates that.
I've got a way of normalizing input so I can play off of the
keyboard or the body tracking.
Or I even have a way to simulate body tracking using
the mouse cursor.
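The control path Ben describes (three angles over UDP, normalized so that the keyboard or mouse can drive the same controls) could be sketched like this. The payload format isn't specified in the talk, so this assumes three comma-separated degree values; `parseAngles`, `normalizeControls`, and the angle thresholds are all hypothetical.

```javascript
// Parse a UDP datagram carrying three angles: torso vs. ground,
// and each arm vs. the torso (assumed comma-separated degrees).
function parseAngles(datagram) {
  const [torso, leftArm, rightArm] = datagram
    .toString('utf8')
    .split(',')
    .map(Number);
  return { torso, leftArm, rightArm };
}

// Normalize any input source into the same control shape: lean
// (steering) from the torso angle, dive amount from how far the
// arms are pulled in toward the body.
function normalizeControls({ torso, leftArm, rightArm }) {
  const lean = Math.max(-1, Math.min(1, torso / 45)); // 45° lean = full turn
  const armsIn = 1 - (leftArm + rightArm) / 180;      // 90° each = planing out
  return { lean, dive: Math.max(0, Math.min(1, armsIn)) };
}

// Upright player with arms stretched out: no lean, no dive.
const angles = parseAngles(Buffer.from('0,90,90'));
const controls = normalizeControls(angles);
```

Keyboard or mouse input would just feed the same `normalizeControls` shape directly, which is what makes the sources interchangeable.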
But that's built on openFrameworks using OpenNI to
do the body tracking.
Which was pretty fun to do some debugging and
playing with that.
We had a lot of fun trying to work out some of the kinks of
having multiple people in the scene at once.
We did a lot of running around in circles with lots of people
in the view.
MARTIN LINDE: We should show that.
BEN PURDY: Yeah.
[LAUGHTER]
BEN PURDY: Play that back at like five times speed with a
funny soundtrack.
MANO: The most important question is really,
who's the best player right now?
BEN PURDY: Oh gosh.
I would not want to take on our technical director.
He's pretty competitive.
And every time I would come around the corner to try to
test out some change, I would have to kick him off because
he'd be there trying to get it perfect.
PAUL SAXMAN: Yeah, I heard when we were chatting with you
guys earlier, there's some talk about having a
leaderboard?
That would be--
BEN PURDY: Yeah.
We're working on adding some capabilities to
do competitive play.
Just in terms of ability of levels and being able to
one-up your buddies.
PAUL SAXMAN: If not for I/O, then maybe for the open source
release that we're going to potentially do.
BRENDAN KENNY: So you guys--
so this is obviously going to be at I/O. And you guys are
going to be at I/O so that you can talk to people about this.
So I'm sure a number of developers are going to be
very interested in the tech stack.
But can you talk a little bit about your
plans for this afterwards?
BEN PURDY: Well, the plan right now is that we would
like to release the code.
It's going to need a little bit of
cleanup before that happens.
The code base right now is very much tuned to some of the
challenges of working on a distributed display.
It does work on a single machine.
And I don't foresee a lot of integration and refactoring
effort required to get this all working within a single
instance of Chrome.
All I need to do is pull out the code that would be talking
to Node.js and just basically have like a
dummy message router.
There's nothing that would stop it from being like a
single player in a single browser
experience at this point.
BRENDAN KENNY: Oh great.
MARTIN LINDE: We're also hoping to release some of the
level design tools that Ben created, as well as a tool
that our other developer, [INAUDIBLE], wrote to let us
design tricks for little Pegman, like barrel rolls and
flips and glides.
They'll be the last things we add, but we'd like to open
source those as well.
BEN PURDY: There have been a lot of fun little tools and
tricks along the way.
We actually have full body motion capture that was
driving an articulated Pegman at one point.
We've got a lot of pretty neat little things that we've had
to overcome and play with for this.
BRENDAN KENNY: So you're going from the openFrameworks
body tracking to actual motion capture and mapping it to Pegman?
BEN PURDY: Yeah, because we can get all of the actual
joint positions.
And so for the game play, we're just tracking your arms
and that sort of thing.
But to create assets, rather than hand animating, we were
going to do like full body motion capture
of like little Pegmans.
PAUL SAXMAN: That's awesome.
Cool.
We don't want to give too much away because we'll probably
have some follow-up shows.
Maybe after Google I/O we can do a technical deep dive with
you guys to talk a little more.
Maybe look at--
BRENDAN KENNY: Specific APIs.
MANO: Once it is released.
PAUL SAXMAN: So for those of you tuning in, if you're going
to be at I/O, we definitely hope that you'll have a chance
to kind of play the Map Dive.
And if not, if you're going to be joining us actually via GDL
or I/O Extended, we're definitely going to have some
more shows.
And so hopefully you can join us in spirit.
So thanks for joining us, Instrument team.
This was really awesome.
MANO: See you guys in San Francisco.
BEN PURDY: Sounds good.
PAUL SAXMAN: And viewers, thanks for joining us.
And hope to see you soon.
Buh-bye.
[MUSIC PLAYING]