MALE SPEAKER: Hello, everyone.
Welcome back.
We have two more sessions left today.
The first one here is Understanding Your Players
Using Near Real Time Game Analytics.
And this is going to be Michael Manoochehri from
Google, that search company you've
probably never heard of.
And then we also have, from Staq, Luca joining us to talk
about how they've used Google BigQuery to build their game
analytics solution.
So they're going to be starting here in just a
second, but then the last session that starts at 4:30 is
going to be How EA Builds Mobile Game Servers On Google
App Engine.
So hope you can all stick around for both of these, and
welcome, guys.
LUCA MARTINETTI: Hello.
Can you hear me?
Hi.
I am Luca Martinetti.
I'm the CTO of Staq.
Staq is a brand new startup that started in TechStars.
Today, it's a three-man army.
It's me and these two guys here, Massimo and Francesco.
We have a big vision.
We're trying to build a game management platform, and I'll
try to explain a little bit better what we
believe this means.
Today, I'll be talking about how to understand your players
using analytics, and how to do that in real time.
And I have to say thank you, Google, for having us here.
That's because we use a big chunk of their analytics
offering, BigQuery.
So you are a game developer, so you'll probably want to
build a game.
You're here to build a game.
And what I'll say is no.
No more.
No more games.
You're not shipping boxes anymore.
The market has changed so much.
What you're actually doing is you're building a service.
Building a service is very different
from shipping a product.
It has a completely different set of challenges and requires
a different state of mind and different procedures.
Building a service means that it's always on, 24/7.
Somebody's always interacting with your game and generating
events, generating data.
At a certain point, if you're successful--
and that's a problem you want to have-- you understand that
you're sitting on a giant data fire hose.
Spoiler--
in this fire hose, there are so many interesting things you
want to learn, you want to try to grasp.
In order to do that, we believe you should set up a
process made of three steps.
Measure something, so collect the data; try to understand
something you didn't know before; and react.
React in a way that can make your game better or your
monetization stream better.
So, actually, your life better.
Yeah, the three guys there are us, looking at the big screen,
and this is the flow we want to enable at Staq.
So starting from an event, which could be an in-app purchase,
try to understand something new through analytics and data
analysis, and then react somehow with a promotion or
with a discount, with a game change, and so on.
So today, I'll try to explain in a little more detail how
to collect data in the right way, and how important it is
to have a way of running queries very efficiently.
Because these are, at the end, the two important steps that
will enable all the rest of the process.
So collecting data, what you want to collect, what you want
to measure.
It depends.
It depends.
There are so many options.
Some of those are game-specific, some are common
to different kind of games.
What is important is that you should try to collect a piece
of information tied to a certain point in time: an event.
That's the JSON schema that we use.
What's happening here is that at this timestamp, in this
[INAUDIBLE] game, there was this warrior at level seven who,
with these two items and a big sword, was raging and hitting
somebody and creating some damage.
So this is a typical event you want to collect from a game.
And using Staq, you're not forced into any specific
schema, and we'll talk about that in more detail later.
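For illustration, an event like that might look something like this; the field names here are made up, not Staq's actual schema:

    # A hypothetical event payload; names are illustrative.
    event = {
        "ts": 1364313600000000,   # microsecond Unix timestamp
        "game": "dungeon_demo",   # made-up game ID
        "event": "attack",
        "actor": {
            "class": "warrior",
            "level": 7,
            "weapon": "big_sword",
        },
        "damage": 42,
    }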
So you start collecting these guys, and you figure out that
you have tons of data.
What does it mean?
It depends.
It depends greatly on the traffic you have and on how much
you instrument your game.
But if you're successful, if your player base is
significant, at a certain point, you'll have to manage
gigabytes per hour.
This means that many of the classical solutions, an
architecture with a MySQL database, are not good anymore.
Not good anymore for that amount of
data and that velocity.
And there are some caveats, some very important things you
have to consider.
And this is the biggest one: you cannot pre-aggregate.
It means that you cannot just take some pieces of
information and increment counters.
You need to keep the raw data, all of it.
Why?
Because at a certain point, you'll come up with new
questions that you didn't have beforehand, or you'll get new
data points that completely change the story
of a player, for example.
Say you're using an advertising network, and at a
certain point in time, you receive the list of the
players you bought from that specific channel.
So you need to reconcile this data with
your history of events.
So you need a system that can handle tons of data in the raw
form because that will allow you to ask new questions over
time and integrate the new data points when they come.
Now I'll talk about what we're building to
offer as our analytics platform, and the
tech solution we chose.
We're so proud of what we're building that we chose this
double buzzword: real-real-time.
That means two times real time.
Why that?
Because we ended up choosing to use two different data
stores for answering the same questions.
We're building a system that allows you to run the very
same queries on two different databases with different
performance profiles.
That lets you always have the freshest data available and,
at the same time, run analyses, the same queries, on your
whole historical data set.
So if your game has been around for one year, you want
to be able to answer a question about what's
happening now and how this trend
developed in the last year.
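A minimal sketch of that routing idea, with illustrative names, since the talk doesn't show Staq's internals: the same query text goes to the in-memory store when it only touches the fresh buffer, and to BigQuery when it reaches further back.

    import time

    FRESH_WINDOW_SECONDS = 24 * 3600  # size of the in-memory buffer, e.g. one day

    def run_query(sql, oldest_timestamp, memsql, bigquery):
        # Queries confined to the fresh buffer hit the in-memory store;
        # anything reaching further back goes to BigQuery. Same SQL either way.
        if time.time() - oldest_timestamp <= FRESH_WINDOW_SECONDS:
            return memsql.execute(sql)
        return bigquery.execute(sql)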
We're building on top of two technologies
that are very different.
One of them is a database called MemSQL, which is an
in-memory version of MySQL.
It's a startup as well, here in San Francisco, a YC startup.
It's pretty cool, because it keeps all the data in
memory and allows you to do very fast aggregation.
At the same time, we're using this excellent product from
Google called BigQuery.
It's a database as a service, you could say, for analytics,
which allows you to run queries really fast on gigabytes of
data, or terabytes, without having to manage the whole
infrastructure.
Our design goal was to be awesomely fast on the latest
buffer of data, so depending on the size of your game, that
could be the last day or the last week, and to be really,
really fast on all the historical analysis.
What we love about BigQuery is that, as I was saying, it's
offered as a service.
If you're a three-man company like us, not having
to manage a big cluster, like a Hadoop
cluster, for example, even one that runs on a cloud
provider, is a big win, because you just take the service
and you know it will be working at night, at 3:00 AM.
It allows you to do fast ad hoc queries on large data
sets, and it has a very nice feature called nested
fields that allows us to manage the documents that we
associate with every event.
So the data definition is a little bit broader than a
row of a table in a database.
It's so fast that sometimes you don't realize how much
data you're touching, and the pricing model is actually
based on the amount of data your queries touch.
So I was running a demo a few months ago,
and that's what happened.
I ran a query on two terabytes of data, and I said, what?
Because that's what you see in the bill.
So it's so fast that you really don't understand how
much data you're touching in a single query.
And so we love it, and when the bill
came, I was like this.
Let me introduce Michael, and then we'll talk more about
BigQuery, and I'll continue later.
MICHAEL MANOOCHEHRI: Awesome.
Thank you, Luca.
Can you guys all hear me?
Yeah?
That sounds like a yes.
Yeah, so I'm going to do my best "Futurama" Fry impression
and say I don't know if I'm happy to see grumpy cat or sad
that it was used in the same sentence as BigQuery.
But anyway, I'll talk a little bit about how to integrate the
BigQuery API into your application.
For those of you who don't know, BigQuery is an API that
lets you ask questions about large data
sets, your own data.
I'm not going to get too deep into the technical details.
I'm actually going to show you how it works.
BigQuery is an API in which you can send messages to it in
JSON format and then retrieve query results in JSON as well,
which makes it really easy to incorporate into your existing
applications.
And what I really like about the Staq story is that they've
got their existing application, which I think they build
mostly in the cloud, and it's easy to integrate BigQuery
with that system.
So they really understand the kind of modern data pipeline
where you have a particular technology for collecting data
in real time very quickly, and being able to ask questions
about that real time data that's coming in, and then
asking quick questions about historical data or aggregate
data that takes a different tool.
And so what Staq's done really well is integrate two
different tools on top of their web stack.
The best way to show you, if you've never seen BigQuery
before, is to walk through how it works and how to integrate it.
I have an example here.
I tried to build the simplest-- let me see if I can
make this a little bit bigger--
the simplest application that I could using
the BigQuery API.
All of the Google APIs that are built on our modern stack
have client libraries in many languages.
We've got Ruby, and Java, and JavaScript, and Python, PHP,
just about everything you need.
And we have open source projects of people building
other things because it's just a RESTful API.
It's easy to integrate.
So this is an example of some code I wrote just really
quickly using BigQuery with the JavaScript API.
And what I'm doing here-- and I'm just showing you sort of
what it looks like, how little code you need to run a query
on a very massive data set.
In this case, I have a client ID and a project ID that I got
from the Google Developers Console.
This is something you can kind of read about on
the BigQuery website.
I don't have a lot of time to get into that part.
And I've integrated it into this application.
I've chosen a scope for authorization of the API.
And simply, what I'm doing is I'm just running a query, and
that query will come from a box.
And I'll show you this application in a second.
And then visualizing that data, the response from that
query, with a chart.
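For reference, the same flow in Python is only a few lines as well; this is a rough sketch, assuming you already have OAuth 2.0 credentials and a project ID from the Developers Console:

    from googleapiclient.discovery import build

    def run_query(credentials, project_id, sql):
        # Build a client for the BigQuery REST API (v2) and run a synchronous query.
        service = build("bigquery", "v2", credentials=credentials)
        response = service.jobs().query(
            projectId=project_id, body={"query": sql}
        ).execute()
        # Each row comes back as JSON: {"f": [{"v": value}, ...]}.
        return [[cell["v"] for cell in row["f"]] for row in response.get("rows", [])]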
So let me show you how this works.
Oh, are we offline?
We might actually be offline.
LUCA MARTINETTI: Offline demo?
Let's try again.
MICHAEL MANOOCHEHRI: One moment, please, while we make
sure the network's right.
LUCA MARTINETTI: What happened?
We can try just switch to Wi-Fi.
What do you think?
MICHAEL MANOOCHEHRI: All right, let's give that a shot.
Great.
Yeah, it looks like we had a network [? conflict. ?]
OK.
So this is what the app looks like.
It's really simple.
It has an authorization button.
I've actually run through this.
It's doing an OAuth 2.0 authorization, but as you saw,
the client library's taking care of a lot of the
complexity.
And now here's a query.
I'm going to use a sample that we have in
our public data samples.
When you sign up for BigQuery, we have a collection of really
large data sets for you to try it out.
And so what I'm doing here is sending it a SQL-like
query, and what's great about BigQuery is that it combines a
lot of the best technologies from big data applications.
So when you're dealing with gigabytes and terabytes of
data, often you turn to MapReduce-based tools like
Hadoop, data warehousing tools like Hive, or you're using a
NoSQL data source, something like Mongo.
BigQuery doesn't use a MapReduce-based paradigm.
We actually have a different kind of execution model.
Basically we store your data in a columnar format, and then
we do most of the actual aggregation in memory across a
very large cluster.
So actually what you're getting with BigQuery is our
own infrastructure.
But what's cool about it is you can ask questions, not by
writing MapReduce functions, but in a SQL-like language, so
it makes it very easy to iterate.
So in this example, in this application that you just saw
the code for, I'm going to ask a question
about our GitHub timeline.
The GitHub timeline data, by the way, is a public data set
that GitHub provides of any repository that's public.
So here, I'm asking what are the top five languages that
get the most events?
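The query is along these lines, in legacy BigQuery SQL against the public sample table; the exact text on screen isn't captured in the transcript:

    # Top five languages by event count in the public GitHub timeline sample.
    query = """
    SELECT repository_language, COUNT(repository_language) AS events
    FROM [publicdata:samples.github_timeline]
    WHERE repository_language IS NOT NULL
    GROUP BY repository_language
    ORDER BY events DESC
    LIMIT 5
    """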
So let's just run that.
Hopefully everything will work fine.
In fact, I'm going to pull up in the developer console so
you can see what the response looks like.
So here's an example.
That was a very quick query.
This is actually a fairly large data set.
There's 30 million records in here.
And almost instantly, it returned this table, and this
is what the response looks like on the JSON side.
I don't know if you can see that, but it's basically just
a JSON representation of the table you see.
So it's very easy to integrate.
You saw that small amount of code.
I'm able to build a dashboard that's querying this really
interesting data set to show you how you can integrate this
into your applications.
But let's talk about doing this with games.
What kind of queries are really good for game things?
I was thinking about some queries that might appeal to
some people coming to GDC, and when you talk about games,
you're often talking about cohort analysis.
So here's another example of a public data set that we have.
Hopefully you can see this.
I'll make it a little bit bigger so you can see it.
We're going to look at the Wikipedia revision
history data sets.
So this is a very large data set that we have.
It's 35 gigs.
It's got 300 million records.
And let me see if I can actually zoom in
a little bit here.
Is that a little bit better?
Can you see that, everyone?
So basically, there's 35 gigs.
It's pretty big.
It's got 300 million records, and the data kind
of looks like this.
It's got a title.
It has a timestamp.
And a contributor ID, like who revised the article, what
their contributor's name was.
So I was thinking about some queries we could run to kind
of demonstrate what a cohort analysis would look like, and
so I have some examples here.
The first thing, I was thinking about people's names,
and I realized people like to name themselves things like
"Wikipedia Mage" and "Wikipedia Wizard," much like some
of the gamers that you have on social media.
So the first query I thought of was this cohort analysis.
I wanted to see what usernames contained these strings.
So I wrote us a very simple SQL-like query here, and it
should go pretty quickly.
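Reconstructed, the query looks something like this; the slide isn't captured, so the column names follow the public Wikipedia sample:

    # Usernames containing "mage" or "wizard"; note CONTAINS also matches "image".
    query = """
    SELECT contributor_username, COUNT(*) AS revisions
    FROM [publicdata:samples.wikipedia]
    WHERE LOWER(contributor_username) CONTAINS 'mage'
       OR LOWER(contributor_username) CONTAINS 'wizard'
    GROUP BY contributor_username
    """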
So in seven seconds, it did a full table scan of all 300
million records, and it found some matches that look a lot
like "mage." And you can see "image" results in there.
So that was very quick.
There's 283,000 records.
I wanted to do an ad hoc analysis.
I wanted to iterate quickly.
Now if I was doing this with MapReduce, I'd have to write a
new MapReduce function.
I'd have to do a new workflow.
It takes time.
What I want to do is just ask these quick questions on this
huge data set very quickly.
The first thing I want to do is get rid
of those image results.
So I'm going to do a regular expression match, which
BigQuery supports.
So what I'm doing here is I'm running the same query--
and actually, let me take this out.
I'll do this in a second.
So I'm running the same query, but what I'm doing here is
saying, instead of the CONTAINS string match, I'll do a regular
expression match on "wizard" or "mage."
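Something like this, with a leading word boundary so "image" no longer matches:

    # REGEXP_MATCH replaces CONTAINS; \b(wizard|mage) rejects "image"
    # because there is no word boundary between "i" and "mage".
    query = r"""
    SELECT contributor_username, COUNT(*) AS revisions
    FROM [publicdata:samples.wikipedia]
    WHERE REGEXP_MATCH(LOWER(contributor_username), r'\b(wizard|mage)')
    GROUP BY contributor_username
    """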
So I'm going to run this again, and hopefully what will
happen is that I'll see the real user names that don't
have the word "image." So we have Pharaoh of the Wizards,
and Wizard191.
So now I'm getting somewhere, like I'm building a cohort
analysis where I'm actually looking at a
particular type of user.
Now what I want to do is bucket my results by time.
I want to see who's doing what on a particular day.
Again, this is going over 35 gigs, and imagine doing this
on something like a MySQL data store or
something like that.
It would take a long time, but I'm doing these queries in
four seconds.
So I'll do another quick query.
Let's see, I'll run another sample that's similar.
Now let's add a time bucket, and I'll just run through
these really quick.
What I'm going to do here is I'm going to say, use the
function call UTC_USEC_TO_DAY, which means it'll take any
timestamp it sees and bucket it into a 24-hour period.
Basically what I'm doing is I'm saying, give me events in
a particular day.
Segment my data like that.
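Sketched out, assuming the sample's timestamps are in microseconds, as the demo suggests:

    # Bucket matching revisions into days with UTC_USEC_TO_DAY.
    query = r"""
    SELECT UTC_USEC_TO_DAY(timestamp) AS day,
           contributor_username,
           COUNT(*) AS revisions
    FROM [publicdata:samples.wikipedia]
    WHERE REGEXP_MATCH(LOWER(contributor_username), r'\b(wizard|mage)')
    GROUP BY day, contributor_username
    """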
So let's run this again.
What you're going to see here is something that looks like
the same data, but events happening per day, Wikipedia
revisions happening per day.
So that, again, took just six seconds, and now you can see
Wizardman had done a revision on that day.
By the way, these are Unix timestamps.
That's why they're these big integers
in microsecond format.
Wizardman did something on this day.
Pharaoh of the Wizards did something to this.
And now I'm getting somewhere.
I'm seeing what activity's happening per day.
And now I want to do something a little bit different.
I want to do something where I can actually segment them as
are they wizards or are they mages?
So I've added a conditional.
I don't know if you can see that there, but I've added a
conditional statement that actually segments by what type
of player or what type of user they are.
Are they the wizard player or the mage player?
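The conditional can be a simple IF over the username, roughly:

    # Tag each matching user as a wizard or a mage.
    query = r"""
    SELECT IF(LOWER(contributor_username) CONTAINS 'wizard',
              'wizard', 'mage') AS player_type,
           UTC_USEC_TO_DAY(timestamp) AS day,
           COUNT(*) AS revisions
    FROM [publicdata:samples.wikipedia]
    WHERE REGEXP_MATCH(LOWER(contributor_username), r'\b(wizard|mage)')
    GROUP BY player_type, day
    """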
So in just a few seconds, I should get that as well, and
basically what this is going to do is break that up by day.
And it's going to say how many wizards are doing something,
how many mages are doing something.
So here you can see, a wizard interacted this day,
interacted that way, and I'm able to break them up into
different mages, wizards, warriors, what have you, based
just on their username.
And finally, let's do a final, wrap-it-up kind of query.
We're going to put this all together.
We're going to group things by wizards and mages.
We're going to add a time bucket.
We're going to format the timestamp.
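Put together, the wrap-up query is roughly:

    # Segment by player type, bucket by day, and format the day for reading.
    query = r"""
    SELECT FORMAT_UTC_USEC(UTC_USEC_TO_DAY(timestamp)) AS day,
           IF(LOWER(contributor_username) CONTAINS 'wizard',
              'wizard', 'mage') AS player_type,
           COUNT(*) AS revisions
    FROM [publicdata:samples.wikipedia]
    WHERE REGEXP_MATCH(LOWER(contributor_username), r'\b(wizard|mage)')
    GROUP BY day, player_type
    ORDER BY day
    """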
So in just a few queries--
you can write these very quickly and
iterate very quickly--
I've done a really interesting cohort analysis where I've
said, break it down by day.
Mages are revising only 15 times a day, while people with
"wizard" in their username are revising 185, and
so on, and so forth.
You can imagine taking this data and
really breaking it down.
You could see what type of players you have, what they're
doing, what their behavior is.
Real quick, we've listened to our developers, and we really
care a lot about what guys like Luca are doing at Staq.
One thing that they've asked us for is more features to
help them do things on larger data sets.
Traditionally, the way that BigQuery was designed, it
really didn't do big joins between large data sets.
You could join a large data set to a lookup table.
But just a few weeks ago, we released something we call Big
JOIN, which does allow you to do these
very, very large joins.
We've joined terabyte data sets to gigabyte data sets.
It's very useful for a year's worth of activity joined with
data coming from somewhere else, like purchases.
And I asked the Staq team to generate a fake data
set, something like the data that you saw Luca
present, which is kind of what their events look like.
So let me show you a little bit about that.
Their events data looks something like this, where you
might have a particular user ID, which is just some hash,
a timestamp, like you saw, and then the event, what they did.
So in this case, this data says that a player started at
a certain time, started playing the game.
Another event might be an in-app purchase.
So what if I want to ask a question-- this particular
data set, by the way, is 6 gigabytes.
It's about, let's see, 30 million records, right?
30 million events.
And I'm going to join this data with another data set
that's even bigger, something like 10
gigs, 47 million records.
I'm going to show a join, a big join, on that, and see how
fast it can go.
And what I'm going to do is I'm going to look for players
who started a session yesterday or the day before,
and then bought something, had an in-app purchase today or
the next day.
So this is a large kind of join.
It looks a lot like a join you would see in a relational
database, and this is joining data from 10
gigabytes to 6 gigabytes.
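The shape of the join is roughly this; the table and column names are made up, since the fake data set isn't public, but JOIN EACH is the legacy BigQuery syntax behind Big JOIN:

    # Players who started a session one or two days before an in-app purchase.
    # 86400000000 microseconds = one day.
    query = """
    SELECT p.user_id AS user_id,
           COUNT(p.amount) AS purchases,
           SUM(p.amount) AS revenue
    FROM [game.purchases] AS p
    JOIN EACH [game.sessions] AS s ON p.user_id = s.user_id
    WHERE UTC_USEC_TO_DAY(p.ts) - UTC_USEC_TO_DAY(s.ts)
          BETWEEN 86400000000 AND 172800000000
    GROUP EACH BY user_id
    """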
And let's see how long this takes to run.
In here, as I mentioned, I'm looking for in-app purchase
events that happen in a certain day.
I'm looking for the count and the total amount of money.
And that just took seven seconds.
So I've just done this huge join, something that you would
normally do as a MapReduce job, in just seven seconds,
which is pretty fantastic.
And here, you've got the aggregate queries, the count
of purchases, the amount of money, and which user it was.
So this is the kind of cohort analysis you can do, and you
can integrate this into your own applications using the
BigQuery API.
I'll leave you with one more thing before I hand it back to
Luca, and this is something you can try
right now on your laptops.
To show this off without having you actually develop
something or log into our console and start
writing code yourself, you can play with this using something
we call the BigQuery Tour, and this is sort of a cartoon
exploration of two even bigger data sets than the ones I've
shown you today.
One is data from weather stations all around the world
since 1929.
It's a pretty large data set.
And an even bigger data set is a Wikipedia page views data
set, which is a really interesting one.
Let's look at what the top page views were for last year.
Today's the 26th, so let's see what happened March 26.
Any guesses what the top-viewed Wikipedia
page was a year ago?
Think about it.
I'll show you.
So I'm going to run this.
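Under the hood it's a query along these lines; the pageviews table name here is hypothetical, since the tour's data set isn't named in the talk:

    # Top Wikipedia pages by views on a given day (hypothetical table name).
    query = """
    SELECT title, SUM(views) AS total_views
    FROM [publicdata:samples.wikipedia_pageviews]
    WHERE day = '2012-03-26'
    GROUP BY title
    ORDER BY total_views DESC
    LIMIT 10
    """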
This is doing the same thing.
This is an app built on the API, no caching.
It's hitting BigQuery's API live.
It's crunching, and crunching.
That's actually what BigQuery looks like inside of Google.
I don't know if you know that.
We do have ping pong balls.
So in 10 seconds we've analyzed 83 gigs of data.
Very quick.
Let's see what the results are.
Looks like "Hunger Games!" I guess "Hunger Games" had just
come out, so everything is "Hunger Games." Except for
number four, Ludwig Mies van der Rohe.
Does anyone know why he might have had one of the fourth
largest Wikipedia page views?
It was his birthday, but we also had a Google Doodle, and
whenever there's a Google Doodle, the Wikipedia page
views are enormous on that day.
So that's actually what happened.
So there you go.
You could also look at the queries through this.
It's really great.
You can get to this by going to cloud.google.com/bigquery,
and you can just play with it yourself.
Or just Google search for "BigQuery Tour." It's probably
going to be the first hit.
So that's it.
That's what BigQuery does.
I'm going to pass it back to Luca, and he can show you how
to integrate it into a real app.
LUCA MARTINETTI: All right.
OK.
Yeah, the cat.
So after collecting these events, the second step is to try
to understand something that you don't know, and this
analysis can be of very different kinds.
There are the standard metrics that every
single game app needs: daily active users, monthly active
users, revenue per user and per paying user, and retention
rates, so how many of my players are still with me after
1 day, 7 days, and 30 days.
And even these, let's say, easy metrics, these standard
metrics, have some challenges when we're talking about
monetization.
For example, when you're dealing with in-app purchases, one
of the biggest problems we're facing is fake
transactions.
What does it mean?
It means that, for very large studios, if you don't do
server-side verification of the receipts, so checking that
the purchase the game is reporting to you is real, you might
see just 1 in 100 transactions that is an
actual, real transaction.
And it's not so bad that you're giving away virtual
goods, because you're not paying for them, but your
metrics will be barely usable with all that noise.
Why does this happen?
Because the devices are not secure, and if your user base
is large enough, somebody will spend time, whole nights,
hacking the game to get that great sword for free, and
you'll start getting fake events.
The solution is server-side receipt verification.
The Apple App Store and Google Play, for example, expose an
API for this, so at this point you're not only running the
game on the device.
You need a server-side part.
And that's something that Staq provides out of the box.
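For iOS, a minimal sketch of that server-side check: Apple's verifyReceipt endpoint is real and returns status 0 for a genuine receipt; everything else here is illustrative.

    import requests

    def receipt_is_valid(receipt_b64):
        # POST the base64-encoded receipt to Apple for verification.
        resp = requests.post(
            "https://buy.itunes.apple.com/verifyReceipt",
            json={"receipt-data": receipt_b64},
            timeout=10,
        )
        # Apple returns {"status": 0, ...} when the receipt is genuine.
        return resp.json().get("status") == 0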
What I want to show you is a demo of these basic metrics.
And obviously [LAUGHS]
I hope everything goes well.
I don't always test my code, so I will do it on stage now.
So live coding.
This is the dashboard of Staq, and what I'll do now is create
a new application that I've called GDC_demo01.
By creating this application, I get an app ID.
All right, let's switch to that.
I should see some nice zeroes.
And what I do is take this app ID and just paste it into this
small Python script, a simulator that calls our
REST API, simulating some events: some users logging in
and some purchases.
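The simulator is roughly this shape; the endpoint and payload here are made up, since Staq's REST API isn't shown in the talk:

    import random
    import time
    import requests

    APP_ID = "..."  # pasted from the Staq dashboard
    ENDPOINT = "https://api.staq.io/events"  # hypothetical URL

    while True:
        # Fire a random login or purchase event for a random fake user.
        requests.post(ENDPOINT, json={
            "app_id": APP_ID,
            "ts": int(time.time() * 1e6),
            "user": "user-%d" % random.randint(1, 500),
            "event": random.choice(["session_start", "purchase"]),
            "value": round(random.uniform(0.99, 9.99), 2),
        })
        time.sleep(random.random())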
So let's start it.
We have a connection.
It's not very popular yet.
It doesn't have any users.
OK.
I can see real-time users coming in, and they're
generating some events.
I see some events flowing in in real time, so some sessions
have started.
If these guys--
yeah, somebody bought something.
I see the revenue's going up, and I see the basic metrics in
real-real time.
So that's what I was talking about before.
The last buffer is all in-memory, so we're able to
have all these metrics, and also custom metrics that I'll
show later, really, really fast on your data.
And this was really quick to set up: if you have a game,
you just plug in our SDK.
We have, as I was saying, REST APIs, and we also provide
Unity, JavaScript, and iOS clients, and an Android one will
be out very, very soon.
And this is part of the story: understanding how much
money you're making in a reliable way.
That's not enough.
You need to ask new questions.
You need to understand something that is
specific to your game.
As Michael was saying, are mages spending more than
warriors, if you're doing an RPG, for example?
Or are mages that reach level seven and use this specific
sword performing better than this other class of players?
And this can be very arbitrary.
So why are they not buying the dead bird?
Why?
So cute.
It's this item for "Team Fortress 2." It's only $12.
Why are they not buying that?
I think they're actually buying it.
So there are so many questions that are really specific to
your game, and you need a quick and effective way to go
against the raw data.
And I want to show something that is even more difficult
than what I did before.
So again, demo gods, please help us.
I have this small HTML5 game that I found online from a guy
in Belgium.
It is a beat box.
[PERCUSSION SOUNDS]
LUCA MARTINETTI: And what I'm going to do now is just plug
the Staq API in that and start tracking some events.
So let me load it up.
It's just HTML5, so just a few JavaScript files and CSS.
It runs on the canvas.
What I did is just add the Staq JavaScript client for
the REST API, which is a few lines long.
And what I'm doing now is finding where the event of
somebody hitting the button goes.
Let me paste some code that I have ready.
Where is that?
Here.
So I have an integer that is the ID.
Can you read it, or is it too small?
How is the size of the font?
Is it too small?
AUDIENCE: [INAUDIBLE]
fine.
LUCA MARTINETTI: That's fine?
OK.
So I have an integer that is the tile I've been
clicking on, and I just made a small map of the colors.
Copy that--
so what I'm doing is creating an event, and the
event looks like this.
It has a timestamp, and it's a custom event that I'm calling
"beat." Maybe I can zoom in a little bit.
It's an event that I'm calling "beat."
As a value, it takes the color of the tile I'm clicking on,
and I'm just adding two metadata fields.
One is the tile ID itself, and I'm setting a flag that says
"green" if the ID is 12 or 15.
So that's how the IDs of the different tiles map.
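The JavaScript he pastes builds an event shaped roughly like this, shown here in Python with illustrative names:

    import time

    GREEN_TILES = (12, 15)  # the two tile IDs flagged as green in the demo

    def beat_event(tile_id, color):
        return {
            "ts": int(time.time() * 1e6),
            "event": "beat",      # the custom event name
            "value": color,       # the color of the tile that was hit
            "meta": {
                "tile_id": tile_id,
                "green": tile_id in GREEN_TILES,
            },
        }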
What I need to do is just create the event, and I have to
put the application ID here.
Oops, where was that?
Let me create a new application, call it
GDC_beat01.
Now get an app ID, copy that, switch to that.
Oops, other side.
And I paste it.
OK, here we go.
What I'll do is git commit, gods help me--
and I'm pushing this small demo.
Do a little dance, do a little dance.
I never test my code.
Yeah, if you have iPhones and iPads with you, it would be
nice if you could go to this URL that I'll
give you in one second.
That's live publishing--
come on, go there.
MICHAEL MANOOCHEHRI: It works on laptops, too.
LUCA MARTINETTI: Demo gods, are you helping me today?
OK.
In the meantime, I'll give you the address.
Cleaning up, installing.
So it's happening.
Sorry for that.
I haven't found a faster way of deploying my application.
OK, here we go.
So if we go to this URL here, beat.staq.io, please do that.
It's not [INAUDIBLE] beat.
Do you have iPhones or iPads with you?
I have an iPad, maybe.
And just load the page.
OK, seven.
Seven of you guys are on the page now.
I can see that.
Are you sending beats?
[INAUDIBLE]?
Are you playing?
I can't hear you very well.
Turn up the volume.
MICHAEL MANOOCHEHRI: I want to hear some more, too.
LUCA MARTINETTI: Want to hear some more.
MICHAEL MANOOCHEHRI: We need big data.
LUCA MARTINETTI: Is it loading?
This should be working on iPhones and Androids as well.
But not--
[PERCUSSION SOUNDS]
LUCA MARTINETTI: Is it working?
OK.
So you see the events flowing in real time, and that's kind
of interesting, but not super interesting.
So what I want to do, while you bang on it, is show you
that we can do custom queries.
So with this SQL syntax, which, as I was saying, will be the
same for real-time exploration and for all the historical
data, I can just say, SELECT COUNT of the DISTINCT event IDs
from my table where the event is "beat." And I want to do
that over the last few minutes.
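The query is along these lines, with illustrative table and column names; on the fresh in-memory buffer this is plain MySQL-style SQL:

    # Distinct beat events over the last minute of data.
    query = """
    SELECT COUNT(DISTINCT event_id)
    FROM events
    WHERE event = 'beat'
      AND ts > NOW() - INTERVAL 1 MINUTE
    """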
Yeah, you banged on it 2,000--
oh, almost 3,000 times.
OK, nice.
And since I'm storing the raw data, and I'm going against
the raw data, I can go down to the second.
come on, guy.
Yeah.
That's very fresh data.
That's what's happening in real time.
So if you stop playing-- stop playing, guys, stop playing.
[LAUGHTER]
LUCA MARTINETTI: Stop it.
You'll miss my demo.
This will go down.
Here we go.
And at the same time, there's that value I put with each
single beat, a string that was the color, and I can say
group by that string.
Here we go.
All this mess here is the different colors that you're
clicking on in real time.
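The breakdown just adds a GROUP BY on that string, something like:

    # Same window, broken down by the color sent with each beat.
    query = """
    SELECT value AS color, COUNT(*) AS beats
    FROM events
    WHERE event = 'beat'
      AND ts > NOW() - INTERVAL 1 MINUTE
    GROUP BY value
    """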
So let's go green, everybody.
Let's start hitting only the green ones.
[PERCUSSION SOUNDS]
LUCA MARTINETTI: We should see only the greens going up.
So imagine this is some real data, and I'm showing, for
example, wins versus losses for two specific classes.
What I'm doing is slicing data in real time.
And this is quite nice because I can go to different
resolutions in terms of time.
This is down to the second, so it's more interactive.
But what is useful is that I can see this data in my
dashboard, like all the rest.
So I can just pin this query, save it as "beats by color,"
and add it to my summary.
So when I go to the Summary page on my dashboard, I'll have
it there like any of the other built-in metrics.
We'll enable more data exploration without coding, for
non-technical users, with a query builder, but the idea is
that since we're building our product on top of two very
powerful big data technologies, we can go down
to the single event every single time.
So adding features and slicing will be very,
very easy in the future.
I was imagining this would be louder than that, and I had
a "Harlem Shake" here.
["HARLEM SHAKE" PLAYING]
LUCA MARTINETTI: Having the beat in real time on top
of that, but you're so shy, guys.
I was imagining everybody banging on their iPads.
But that's fine.
So this was asking new questions, custom questions, of
your data and trying to understand something.
Credit for the game goes to this guy here.
Thank you.
The last step is to change something, having
the option to react.
Once you understand something, change it.
It can be a game design change, because you understand your
players are stuck at a certain level or spending too much
time on a certain puzzle, or it can be something like
reengaging players in a specific cohort, or testing a
hypothesis, like running a test against a specific subset
and seeing how it goes.
This is something that I will not show you today.
We're launching our beta, so if you're interested in trying
Staq, please come talk to me.
Thanks again to Google for having us, and if you have any
questions, we're here to answer.
Thank you.
[APPLAUSE]
LUCA MARTINETTI: Any questions?
MICHAEL MANOOCHEHRI: If you guys have any questions,
please come up to the microphone so we
can hear your question.
LUCA MARTINETTI: No questions?
OK.
Was it clear?
MICHAEL MANOOCHEHRI: Yeah, I hope so.
So great.
So like you said, if you want more information about either
of these products, staq.io and developers.google.com/bigquery.
And it's easy to reach us there as well, at least on the
Google side.
Join our Google+ page, the Cloud
Platform Developers page.
And we were asked to tell you guys, please fill out your
mobile survey feedback.
GDC is really interested in that, as well.
Great.
We'll be up here until the next session, so
thank you very much.
LUCA MARTINETTI: Just standing and staring at you.
No pressure.