MARC COHEN: OK.
Why don't we get started?
Welcome.
This is a session about Google Cloud Storage.
We're going to try to make this a regular
Office Hours session.
And so the idea here will be to share a quick tech talk,
which I'll present in a moment, and to give you guys a
chance to ask any questions you might have.
We've got a Moderator list of questions that people have
already submitted to.
And we also have a Chat window in the Hangout.
So if you see that little Chat button, you can open that up
to submit your questions as well.
I want to introduce Ville Aikas.
He's on the Hangout with us.
Ville is the technical lead for the Cloud Storage
Engineering Team.
VILLE AIKAS: Hi, guys.
Welcome.
MARC COHEN: If you have questions and you want to send
them in the Chat window, Ville will be taking a look at that
as well as helping me out with any live questions you have.
OK.
Let's start.
Let me share my screen here.
OK, is everyone seeing the slide that I have here that
says Google Cloud Storage, a title slide?
OK.
So as I said, the agenda is 30 minutes on quick tech talk and
then Q&A. The tech talk topic will be CNAME Redirection and
Website Configuration.
Those are really two different sub-features, but they go
together nicely.
So we'll try to cover both of those.
Start with a little bit of terminology.
So Google Cloud Storage is Google's way of making our
internal storage infrastructure available for
application developers.
A lot of people often confuse it with Google Drive, which is
more of a consumer cloud storage capability.
So there are two different products.
And we're not talking about Google Drive today.
We're talking about Google Cloud Storage.
Within Google Cloud Storage, we divide up the world into
buckets and objects.
So buckets are object containers.
They're essentially somewhat like folders or directories,
if you're familiar with those concepts.
And objects themselves are the contents.
So they're kind of like files in a file system.
And the URLs shown here are depicting the syntax you would
use to specify a bucket or an object,
respectively, using a URL.
And one of the nice features of Cloud Storage is that when
you create these resources, you can programmatically
access them.
But you can also access them with HTTP by pasting a URL
into your standard web browser.
And this is giving an example of what the URLs would look
like for a bucket and an object.
I have this notion of directories.
And it's a little bit tricky because, technically, the
storage subsystem is a two-level hierarchy,
simply buckets and objects.
And the notion of a directory in this
context is purely abstract.
So there's not an actual formal directory or folder in
the system.
But rather, if you created an object with a slash-delimited
name, sometimes we think of those intermediate nodes in
the name as directories because they look like file
system directories.
They're technically not.
So if you look at the second URL there,
storage.googleapis.com/bucket/directory/object,
that's actually the name of an object whose name is
directory/object.
That compound name is the object's name, and then bucket
is the container.
Next we have the CNAMEs.
And so CNAMEs are a special type of
record in the DNS system.
And CNAMEs are used to alias one name to another.
So when you do DNS lookup, if the name you're looking up has
an alias, you'll get back, depending on the
configuration, the CNAME alias.
And then you can resolve the alias name to find the actual
destination.
So it's basically a way of redirecting
from one name to another.
We'll talk about why you might want to do that in a second.
The last definition here is static websites.
And these are websites that have no
active server-side logic.
So if you're used to thinking about PHP or Python or Perl
or Java doing the active runtime logic
of templatizing or doing database lookups, or something
active when you're fetching pages, static web pages are
those that don't actually do anything on the server side.
It doesn't mean that they're totally static in the sense
that they're not really doing anything.
A lot of websites can do client-side dynamic things
that are quite elaborate.
And we'll see an example of a page like that or really a
site like that in a few minutes.
So CNAME redirection, why would you want to do that?
It enables you to associate names with your storage
objects that are more natural and more meaningful.
So by default, you can create resources in Google Cloud
Storage which are accessible through this URI--
storage.googleapis.com.
And so you can qualify your bucket to be meaningful to
your particular company or your application, or however
you want to name the bucket.
But at the end of the day, you would end up giving out these
URLs that are vectored off of storage.googleapis.com.
And you might prefer to have the resources be named with
your own company's domain name.
And that's exactly what CNAME redirection lets you do,
create your URL like the first example there, like
my.company.com/whatever-resource, instead of
having to use the googleapis.com domain as the
root of your path.
And by the way, if anybody has questions as we're going
along, feel free to just jump in.
And we have a small enough group here that
that should be fine.
In a nutshell, the procedure for using this CNAME
redirection feature is to obtain a domain name if you
don't already have one.
You need to go through a few steps to verify ownership of
the domain.
So that basically entails proving to Google that you
actually have the rights to that domain name.
And the reason there is, obviously we can't let anybody
have ownership of buckets in the name space of somebody
else who might actually be the rightful owner
of that domain name.
And so there are three different
ways you can do that.
I won't go into the details here.
But the easiest one entails serving a file with a
particular value on the website associated with the
domain name in question.
And that proves that you actually have ownership,
because you were able to save that file at that URL.
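For example, the HTML-file verification method has you serve a small file from the root of your site. The exact filename and token are issued by Google's verification tool; the values below are placeholders, not real tokens:

```
# Served at http://mydomain.com/google<token>.html
google-site-verification: google<token>.html
```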
And then after you verify ownership of the domain name,
you would configure the DNS system to
have a CNAME record, which aliases your domain name
to this pattern, this c.storage.googleapis.com.
So that's a special URL that gets routed into Google that's
specifically there to do the right thing with mapping the
CNAME into the resource relative to
Google's name space.
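In a DNS zone file, that CNAME record looks something like this (the domain name is a placeholder; the bucket would be named to match it):

```
; Alias the bucket's domain name to Google Cloud Storage
travel.mydomain.com.  IN  CNAME  c.storage.googleapis.com.
```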
With that, users can then use your company names, like I
mentioned earlier.
And you can advertise your resources in a way that's
really natural.
It looks like it's in your own website rather than in
Google's website space.
One caveat to know about is that CNAME redirection is
supported only for HTTP, not HTTPS.
So question?
We're up to 16 viewers, which is great.
I'll just repeat.
If anybody on the call wants to add questions, feel free to
in the Chat window.
And also if you're live streaming via
YouTube, there is a--
actually, one thing I didn't mention, if you look at the
very first slide, these slides are available right now
through tinyurl.com/gcs-office-hours.
So if you'd like to follow along with the slides, you're
welcome to.
And we also have a moderated question queue,
which is down here.
So the page that you came in on, either to livestream the
video or to join the Hangout, if you scroll down to the
bottom, you'll see a moderated set of questions.
So feel free to post there as well.
And we'll try to get to as many of those as we can in the
second half of the talk.
Any questions before I pick up again?
Let's zip ahead here.
So we covered CNAME redirection.
And I'm actually going to show you an
example of that shortly.
But now I would like to switch gears to website
configuration.
And so what is website configuration?
It's basically a feature that we've added recently that
gives you the ability to serve an entire website directly
from Google Cloud Storage.
Why is this useful?
Why would you care about this?
There are a few reasons.
One is that dynamic logic in web applications is
increasingly client-side based.
And so you can do some very cool things purely in
JavaScript without having to do much work at all, having
essentially a static server-side component.
And in fact, it's a nice pattern for building your web
applications because it scales very well, because you're
harnessing the computing capacity of the clients rather
than having to multiply capacity to serve a number of
users on a small set of servers.
Some websites just happen to be very simple.
If you're an App Engine user, for example, and you've ever
wanted to build a small app that just serves up some
relatively simple content and you wondered why do I have to
build a whole application just to do this?
Well, you don't have to anymore.
And this feature I'm talking about is a perfect alternative
for that type of task.
It's ideal for caching of read-mostly websites.
So if you have something like a blog where you might post an
article once a day or something once a few hours or
something on that order, this is a great way
to cache that site.
So anytime the content changes, you can post a copy
of the entire blog or the website or just the changed
components to Google Cloud Storage and then serve that up
really nicely without having--
I'll explain what I mean by why it's great to serve that
up so well in the next bullet.
But before I move to that, I just want to mention that
there are WordPress plug-ins out there which make this sort
of thing easy for caching blogs to offsite readers
basically or offsite storage servers.
And if anyone out there is interested, I think this would
be a great project for building an app on top of
Google Cloud Storage to extend one of those plug-ins or to
create a new one to push WordPress content over to
Google Cloud Storage.
And the reason why it's useful to push your content, your
blog content-- you might be wondering, if I have a blog
and it's serving up content every day, why do I care?
Why would I want to push cached copies
of it on Cloud Storage?
And the reason is scalability.
If your blog is successful, if you ever get slash-dotted or
post something that attracts a lot of interest, your usage is
going to spike, probably very suddenly.
And what typically happens is people who deploy blogs on
relatively small footprints where you might be paying per
server or per virtual server, you can't tolerate the spike
in traffic sometimes.
And that's really bad because when you most need the
capacity, at the point in time when your blog is suddenly the
most popular it's ever been, it falls over and it fails to
respond appropriately.
And so it's really nice to be able to just dynamically
absorb all that traffic and perform equally well under
very high load as well as the norm.
And that's what you get with deploying not just a blog but
any static website on Google Cloud Storage.
You get virtually unlimited scaling.
You get incredibly high performance.
And you get a global footprint where people can read this
content, access your website all around the world thanks to
the fact that Google Cloud Storage is deployed on
Google's global worldwide network.
So you get all these benefits that you wouldn't have if you
were in a more constrained web-hosting or blog-serving
type environment.
So a few words--
is there a question?
A little bit about how website configuration works.
So we've added two new metadata properties.
And these are applied on a per-bucket basis.
So you'll typically define a bucket, and that will be the
root of your website.
And you'll associate with that bucket a MainPageSuffix and a
NotFoundPage property.
When you've set those, if someone does a GET Object
either on the bucket itself or on what we refer to as a
directory, kind of a virtual directory underneath that
bucket, we will look for an object underneath that path
with the specified name that you've set for MainPageSuffix.
And if it exists, we'll deliver it up automatically.
So the nice thing about this is you don't have to actually
distribute your URLs with the form pathname/index.html or
whatever the name is of the file that contains the home
page or the default page.
It works kind of naturally if you're used to Apache or any
other popular web server.
You get a default page that gets served up automatically.
And you can specify which page it is as well.
So it doesn't have to be index.html.
It can be any name you like.
And equivalently, you can do the same thing with this
NotFoundPage metadata.
And that, as you might guess from the name, is giving you
the ability to serve up a page that is providing a response
whenever a resource that's requested is not available,
essentially the 404 capability.
Let's see.
The other thing is this metadata defines names.
But it doesn't define content.
So you actually upload objects into those names in order to
have the content of the page delivered.
And I'll show you how that all works in a second.
So the procedure for doing this is to upload content into
a bucket, you have to make sure the ACLs are set
according to your intended access permission.
So if you're, for example, doing, again, a blog where you
want everybody in the world to be able to access the content,
you'd set the public read access control so that
everybody can see your content anywhere.
You want to create and upload default pages for the bucket
and optionally for any directories you care about. And
then you may, in addition, want to create a custom 404
page as well if you're interested in that.
And then you'll set one or both of the
following metadata values.
And I already covered that.
You can do that either via the XML API or
with the gsutil tool.
The gsutil tool makes it very easy.
And I'll show how that works, in a minute.
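As a sketch, the configuration step with gsutil looks something like the following. The command names match the tool demonstrated in this session, but treat the exact flags as an assumption and check the tool's help output for your version:

```
# Set the website configuration on a bucket
# (see `gsutil help setwebcfg` for exact syntax).
gsutil setwebcfg -m index.html -e 404.html gs://www.example.com

# Read the configuration back.
gsutil getwebcfg gs://www.example.com
```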
This is a pictorial diagram of the logic that's happening
behind the scenes when you actually submit a request.
And I'm not going to walk through this.
We can if people want to drill down into it.
But this picture comes from our online documentation.
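That request-routing logic can be approximated in a few lines. This is a simplified model for illustration only, not the actual server code; the function name and signature are invented:

```python
def resolve_request(path, objects, main_page_suffix="index.html",
                    not_found_page="404.html"):
    """Simplified model of website-configuration routing for one bucket.

    `objects` is the set of object names in the bucket; `path` is the
    request path with the leading slash stripped.
    """
    # Direct hit: the named object exists.
    if path in objects:
        return 200, path
    # Bucket root or "directory" request: try the main page suffix.
    candidate = path.rstrip("/") + "/" + main_page_suffix if path else main_page_suffix
    if candidate in objects:
        return 200, candidate
    # Nothing matched: serve the configured not-found page.
    return 404, not_found_page

objects = {"index.html", "404.html", "pool/index.html"}
print(resolve_request("", objects))      # (200, 'index.html')
print(resolve_request("pool", objects))  # (200, 'pool/index.html')
print(resolve_request("foo", objects))   # (404, '404.html')
```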
And you'll see at the end of the talk a list of resources
you can use to read more about this feature.
But I'd like to jump into the demo because I think it's a
little bit easier to explain something interactively.
So let me switch over here.
Are you guys still seeing my screen?
You should be seeing a shell window.
AUDIENCE: It's delayed.
MARC COHEN: OK.
So as part of this talk, I created a new domain name.
And it's called "Cloud and Clear." I thought that would
be a cute name for a blog at some point.
So I just created this domain name a couple of days ago.
And as you can see, I was able to create a bucket for it.
So I named the bucket after the domain name.
It's cloud-and-clear.com.
I'm using gsutil.
For those who haven't seen it, it's a command-line utility
that lets me do all sorts of things.
If I do Help, it'll show you all the kinds of things you
can do here.
But it makes it very easy to upload, download, list,
modify, set access permissions, et cetera, on my
Google Cloud Storage resources.
So what I've done is I have created this bucket.
And in order to create this bucket, I had to prove that I
owned the domain name.
So I went through the procedure doing that, and
we'll do minus b here.
There's information about the bucket.
So I went through the process of registering a domain name,
proving that I owned it.
And then I created the domain name.
But it's virtually empty right now.
And so if I go to a web browser and I enter that
name-- oh, and I did set the CNAME in the
DNS system as well.
So when you enter cloud-and-clear.com, it's
being redirected automatically to Google Cloud Storage.
But the problem is there's nothing in that directory.
And one more thing.
I'll show you how you get and set the metadata for
website configuration; it's done through a special command
in gsutil called getwebcfg.
So if I enter that command, cloud-and-clear.com, here's
the configuration I've set on this bucket.
So by default, I want the MainPageSuffix to be
index.html.
So if there's an object index.html, I want that served
up any time someone does a GET on the bucket.
And if you can't find the requested object underneath
that bucket, I want Google Cloud Storage to serve up the
contents of 404.html.
Now, the problem is neither of those objects have been
uploaded to this bucket.
And so when I access the bucket right now, I've got the
configuration set that says by default if you don't specify a
particular object, serve up index.html.
There is no index.html.
There's no such key in that bucket.
And so I'm getting an error message.
So what I'm going to do now is I'm going to upload
index.html.
And it should make my problem go away.
Now, the way I'm going to upload it is I have a script I
call upload_home, which
uploads really two files.
Since I showed you the configuration is enabling both
the MainPageSuffix and the NotFoundPage, I'm going to
upload both in one shot.
So this script is copying home.html and 404.html, home
to index, and 404 to 404.
And then it sets the ACLs on both to public read.
So if I run upload_home, it's kind of my home environment.
It's setting both index.html and 404.
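Based on the description above, the upload_home script can be reconstructed roughly as follows. The filenames match the demo, but the ACL command syntax is an assumption about the gsutil version in use:

```
#!/bin/sh
# Upload the home page and the custom 404 page, then make both public.
gsutil cp home.html gs://cloud-and-clear.com/index.html
gsutil cp 404.html  gs://cloud-and-clear.com/404.html
gsutil setacl public-read gs://cloud-and-clear.com/index.html \
                          gs://cloud-and-clear.com/404.html
```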
So now, if I refresh, I get the contents.
I should've shown you what the contents of that file was.
So index.html is a very simple page with a header and one
link to a sub-domain, /pool.
So as you see, it's now serving that up.
And I didn't have to say, cloud-and-clear/index.html.
I could have done that.
But I didn't need to because it picked up the default
setting here.
If I instead ask for something that doesn't
exist, I get a 404 page.
That's my 404 page, actually.
That's not Google's.
And if I show you the 404, it looks like this.
I copied this from Google's Broken Robot page that some of
you have seen before.
But I actually crafted this myself,
uploaded it into 404.html.
And that's what I'm getting when I request something that
doesn't exist, like foo.
So let's go back to the top.
And I have this link on the index.html page to Pool Demo.
And if I hover over that, hopefully you can see at the
bottom of the window-- let me raise it a little bit, just in
case it's cutting off.
If I hover on that, you see at the bottom of the window it's
showing you the link underneath there is
cloud-and-clear.com/pool.
Now, I didn't upload Pool.
So if I do that, I'm getting my error page that I already
showed you.
So what I'd like to do now is upload Pool, which is
essentially a static website.
So I've got a script called upload_pool.
And all that does is it uses gsutil to copy, "-r" means
recursively, from the current directory to
cloud-and-clear.com/pool.
So it's creating a subdirectory with the
recursive contents of this website that you'll see what
it does in a minute.
And then it sets the ACL again to public read, because I want
everybody in the world to be able to access this.
So I'm going to run upload_pool.
And it's a significant-sized website.
But the interesting thing about this website, it's
completely static on the server side.
There's no server-side logic.
But you see it's quite dynamic in terms of their client.
So now that I've set this, I should be able to
click on Pool Demo.
And I probably need to refresh the page.
The previous result was cached.
And that's what I get.
And I'll show you there's more to that.
But let me go back to the main URL and show
that that just works.
Now, I can either click on that link and get to the Pool
thing or I can specify Pool explicitly
myself and get to it.
Now, what's happening is that Pool is a subdirectory
underneath the cloud-and-clear.com bucket.
And in that subdir is an index.html file.
If that didn't exist, then when I went to the /pool URL,
I would be getting the 404 NotFound.
And I can just show you that very quickly by displaying the
contents of cloud-and-clear.com/pool/index.html.
And there it is.
So what this site is actually doing is downloading an app
that's got some pretty JavaScript with audio and
visual stuff to it.
I can click the Spacebar, and I approach the pool ball.
And then I can hold down the Spacebar.
And it actually has audio files attached to it, so it
sort of simulates the sound effects of the pool.
If I click on Space, it brings me to the cue ball.
I can rearrange the table to my liking.
I'm not a very good pool player, so I won't
do this very well.
But then I can do the same thing, hold down the Spacebar
and blast away.
And it would be nice if I actually got a ball into a
pocket here, because it makes a cool sound.
But I'm probably not a good enough pool player to do that.
I'll give it one more try, and then we'll move on.
People are probably laughing at--
pool players right now are laughing at me, I'm sure.
Yay!
OK.
Anyway, you're welcome to try that.
And one cute thing about these slides--
let me go back to the start presentation thing.
I'm sorry, it keeps bringing me back to the beginning.
But one cute thing about this slide is
that this is clickable.
This image on the slide is clickable.
So if you're watching at home, walking through this
presentation right now, you can click on the image of the
pool table.
And it should take you right to the demo.
And you can try it out for yourself.
It's kind of fun.
So that was it.
Let's just kind of recap what we did there, because we did a
lot of things.
We set up CNAME redirection so that this cloud-and-clear.com
imaginary domain name property that I created was serving
resources directly from Google Cloud Storage with the names I
like instead of something that was forced on me.
Then I configured the website settings for the bucket that I
created, modeling the domain name.
And then I uploaded content.
And I'm now able to serve websites directly from that
bucket as well as subdirectories like that /pool
path from that bucket.
So I effectively have a little website distribution system
here that's pretty much infinitely scalable and high
performance and global, and all that good stuff.
A quick list of resources here for people who
want to read more.
There's a link here for the console.
The Developers Console is the entry point for all of our
cloud products.
So if you want to get access to Cloud Storage as well as
anything else--
App Engine, Compute Engine, all our different products--
that's kind of the entry point as well as where you
administer and set up your projects.
There's a Product Description site.
And our documentation is quite good.
So I'd recommend that third link there if you want to
learn more specifically about Cloud Storage.
The feature I've covered today is on this link here, Website
Configuration.
And lastly, you see a link to the demo that you can use
again to try yourself.
That's pretty much it, right on the half-hour.
So what I wanted to do next is just to open
things up to your questions.
Either we can take questions from the moderated list or the
Chat window.
Or ideally, maybe we'll first give people a crack who are in
the Hangout right now.
So let me undo the screensharing here.
And by the way, we're open and actually quite interested in
not just questions, but if you have feedback, if something's
not working right for you, you think we should be doing
something differently, please let us know.
We're interested in that stuff quite a bit as well.
So yeah, why don't I open it up to the Hangout first.
And don't all jump at once, please.
Anything from the Hangout?
Going once.
OK.
If you guys get interested later, just jump right in at
any time, please.
I'm going to switch to the Moderator queue.
The top rated one, I think, is a question about--
well, no.
Actually it's fair. "Is there a way to back up Google Docs
to Google Storage directly?"
There's no direct way in the sense of having a button to
press kind of thing.
The way to do this would probably be to write an
application to do it.
We had a request on the discussion group for Cloud
Storage recently.
That was in the space of how to move data from a Google
Drive spreadsheet into Google Cloud Storage.
And in answering that question I learned
about some Apps Script.
If you're familiar with that capability, it's a way to
write JavaScript-like code that
manipulates Google Docs content.
So there's a way to use Apps Script to move
data between the two.
But it still boils down to writing an app to do that.
There's no direct pass-through type capability
that I'm aware of.
Let's keep going.
"Can there be a multistep verification process enforced
on Google Storage before allowing files
on GCS to be removed?
Ville, want to take a crack at that?
You can say no.
VILLE AIKAS: Sorry.
I just had to get unmuted there.
So yeah, right now there is no way to do this.
But we're always looking at ways of improving that.
I assume it would be some kind of a two-factor
authentication.
Maybe the person is not here.
But I assume that's what they mean.
So we have to go ahead and see how much interest there is.
And hopefully in the future, we can
investigate adding that option.
But right now, there's not.
MARC COHEN: Right.
And there's a group that people can join if you're
interested in getting more detail on what you envision
this to look like.
I guess the other thing you could do is just add to this.
Whoever posted this--
Brandon from Sacramento--
if you'd care to just add to this question with a little
bit more information about how you saw this working, that
would help us figure out how it might look, how it might be
implemented.
"What are some tools you would recommend non-developers use
to interact with GCS?"
So the first comment I would make is that Google Cloud
Storage is primarily focusing on application developers.
So that's really the target audience
for the product initially.
But there are some ways to interact with it without
having to write code, for example.
And the two tools that I would mention are one is the one
I've already shown you guys with the gsutil command.
And the other one would be that
there's a web user interface.
The name of the tool is the Google Cloud Storage Manager.
And you can find it in the documentation page
I pointed you to.
So that gives you the ability to do things like drag and
drop files from your desktop to Cloud Storage, and list
buckets and objects, and so on.
Let's see.
Comment?
OK.
"How experimental is logging?
What are some best practices.
And could you recommend some third-party services?"
I think logging is pretty well established at this point.
I presume Dan is referring to our logging feature.
And so just to explain for anybody who's not aware of it,
we have a feature called Access Logs, which give you
the ability to get information about who's--
kind of low-level tracking information about every access
to your buckets and your objects.
And that information gets posted periodically to a
separate bucket that you define.
And it gives you fine-grain ability to monitor what the
activity is on your resources.
I'm not sure about the experimental status, where
that is and when that's supposed to be changing.
Do you know anything about that, Ville?
VILLE AIKAS: I'm sorry.
I was actually chatting with Hal there about their logs.
I was spacing for a second.
Can you just read the question?
MARC COHEN: How experimental is logging?
VILLE AIKAS: It should be pretty well established.
We are always taking feature requests.
But we feel like it's pretty well in there.
Once again, it would be nice to know if there are some
particular issues or feature enhancements that people are
looking for.
MARC COHEN: Right.
And then, I don't have any particular recommendations for
third-party services in the logging space.
But as Ville mentioned, if you would care to share a little
bit more information about what you're trying to do with
the logging feature, we can definitely try to
explain that better.
"How to start Cloud Storage use in Google App Engine."
So there is actually an API in App Engine, integrated right
into App Engine for interacting with
Google Cloud Storage.
And I think if you search on Google Cloud Storage App
Engine, you'll very likely find it in one of the
top-returned hits.
That's probably the best way to interact with Google Cloud
Storage that's supported in both Java and Python.
You may find cases where you need to do something that is
outside of the scope of what's supported by that API.
And there are ways to work around the API or circumvent
it and go directly to the programming interface we
support for Cloud Storage.
Without knowing more about what you're trying to do, the
best way to start would be to look at the published API
that's built right into App Engine and see if that does
what you need.
The next one is a Google Drive question.
Someone's asking when there will be a Linux version of
Google Drive.
And as I mentioned earlier, that's outside the
scope of this talk.
So unfortunately for Adita, I'm going to have
to skip that one.
Sorry.
"Is CNAME redirection with HTTPS on the roadmap?"
That's a pretty relevant question to what I've been
talking about, and I don't know the answer to that.
In general, we don't reveal future plans.
We're not allowed to talk about what we're going to do
down the road.
But we've heard of this before.
We're aware that this is an important capability.
So we're taking it into consideration
for our future planning.
Is there anything else you want to say on that, Ville?
VILLE AIKAS: Nope.
That's pretty much it.
MARC COHEN: So it's a good point.
And we agree.
It seems like a very useful thing to be able to do.
"Do Google Cloud Storage domain name CNAMES need to be
associated with a Google App account?"
MARC COHEN: I think the answer to that is no.
I'm not sure, I guess, what Adam means exactly by a Google
App account, whether you mean like a Google App for Domains
type account or just a Gmail account, a Google account.
I guess I'm going to assume it's a Google account proper,
like a Gmail ID or something like that.
The CNAME capability is independent of the Gmail or
the Google IDs.
Basically, you'll create a domain name.
And you'll have to prove ownership of that.
I'm trying to think if you need a Google
account in order to--
you might need a Google account in order to actually
do the verification steps.
Because the way that works is you go to
Google Webmaster Tools.
And you go through an interactive wizard on that web
page that steps you through the different ways to verify
your domain.
So that might be one place where you
need a Google account.
Like some of these other questions, if there's
something you're trying to do that makes this a problem, let
us know and we'll try to see if there's a workaround.
Let's see.
Any questions on the Hangout before I keep going here?
And I see lots of activity in the Chat window.
I won't try to recap that.
But thanks, Ville, for helping out over there.
"Are there any plans to allow any edge-cached copies of GCS
objects to be cleared programmatically, overriding
the original cache control header?"
Not that I'm aware of.
I think your only option at the moment is to use
conservative values in terms of the life of the object and
wait for it to time out.
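As an aside, a conservative lifetime can be set per object via its Cache-Control metadata. A hedged sketch with gsutil follows; the header and flag syntax are assumptions, so check the tool's setmeta help for your version:

```
# Illustrative: give a frequently-changing object a short cache lifetime
# so edge caches re-check it sooner (see `gsutil help setmeta`).
gsutil setmeta -h "Cache-Control:public, max-age=300" \
    gs://example-bucket/volatile-page.html
```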
I guess I'm curious to know whether there's a problem
there or what the application need is that
would motivate that.
But as of now, the edge caching--
AUDIENCE: That was my question.
And the problem is when people make mistakes.
So if you have some content that's supposed to be served
up with a long lifetime, and you like the edge caching and
everything, but then you realize, oops, that was wrong,
and you're not using strongly named objects or something,
then it would be nice.
It's not a must-have feature.
We can make do without it.
But it would be nice to be able to say, actually, guys,
we didn't mean that.
Can you clear the edge cache so we can serve up the new
thing under that name that we meant in the first place?
VILLE AIKAS: Right.
Right.
So yeah, so right now it's a little difficult.
With the caching obviously, especially when it's just
really caching over the internet,
it's very, very difficult.
We could certainly go ahead and maybe do a
little bit on our end.
But even then, you would still have the intermediary
caches along the way.
AUDIENCE: Yeah, understood.
I mean, even if it were best effort, it would still be a
useful feature for us.
VILLE AIKAS: Yep.
Yep, yep, yep.
AUDIENCE: But I do understand that it's hard.
MARC COHEN: Is there a way to modify the cache by changing
the lifetime to a very small number, the header that
dictates how long the cache element should be retained?
AUDIENCE: Once you've set it to a big number and it's been
cached, the problem is that that cache then knows its age.
It doesn't need to check back with the actual canonical
storage of the object.
MARC COHEN: That makes sense.
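[Editor's note: for context, the conservative-lifetime approach discussed above can be applied with gsutil. The bucket and object names below are placeholders, and the flags reflect gsutil of roughly this era, so check `gsutil help` for your version.]

```
# Upload with a modest lifetime so edge and intermediary caches
# expire the object quickly:
gsutil -h "Cache-Control:public, max-age=300" cp photo.jpg gs://example-bucket/

# Shorten the lifetime on an object that is already uploaded.
# This only affects future fetches; copies already sitting in
# caches still run out their original max-age.
gsutil setmeta -h "Cache-Control:public, max-age=60" gs://example-bucket/photo.jpg
```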
VILLE AIKAS: It's the same problem as with the DNS--
oops!
MARC COHEN: Right.
AUDIENCE: Yeah.
And we've made that mistake, too.
MARC COHEN: I didn't mention that.
That's actually a good point.
I meant to point that out.
One of the biggest gotchas that people run into--
I was talking through the CNAME redirection business
early on in the tech talk portion of this Hangout.
And I wanted to mention that when you're doing that, be
liberal about how much time you wait for things to change,
for the world to reconfigure.
What will happen is typically--
especially when, for example, you're setting up the CNAME
record in DNS, you'll go to your name registrar and
administratively add that TXT record, for example, or the
CNAME record.
And then you'll go try it, and it won't work.
And it's just because of the nature of DNS.
It takes a while for things to propagate.
So use liberal test intervals when you're doing that stuff.
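[Editor's note: as a concrete sketch of the CNAME setup discussed earlier, the zone-file entry below points a custom domain at Cloud Storage. The domain name is a placeholder, and the bucket must be named to match the domain; `c.storage.googleapis.com` is the documented CNAME target.]

```
; Zone-file entry (domain is a placeholder; bucket name must match it):
travel.example.com.   3600   IN   CNAME   c.storage.googleapis.com.
```

You can watch propagation from your machine with `dig +short CNAME travel.example.com`, repeating at liberal intervals as suggested above.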
OK.
Let's see what else we have here.
Any other questions from the Hangout?
AUDIENCE: I joined the Hangout late.
I was watching the video.
So you might have said this already.
But I think you handled a question from the Moderator
earlier about an extra step of verification when you're
deleting objects through the web
interface for Cloud Storage.
And I think what the person was getting at was just an
"Are you sure?
Yes or no," if you click the Delete button on the web
interface rather than anything to do with two-step
verification, which is what I think you might have thought.
MARC COHEN: OK.
Yeah.
Thank you for correcting that.
I think you're right.
AUDIENCE: Because also I would like that feature.
It sounds like a really small thing.
But the web interface is really convenient.
But at the moment, I sometimes daren't use it because I'm
worried about hitting that button with no
"Are you sure?" step.
VILLE AIKAS: Ah, OK, OK.
AUDIENCE: I'm even thinking of writing a Chrome extension
just to give me an "Are you sure?" button.
MARC COHEN: Yeah.
Yeah.
And I think that as the scope of deletion increases, that
becomes even more critical.
And then you can take that idea even further and imagine
a trash-can type model, where the stuff that you delete
doesn't actually get deleted.
It goes somewhere else outside of your normal view.
And then later on, if you decide you made a mistake, you
can restore it.
I think those are all good ideas.
And I personally would love to see more of that.
So like I said earlier, I can't give any specifics.
But that's a great point.
And thank you for clarifying that question.
I think it was misinterpreted.
VILLE AIKAS: And just to extend a little bit, we did
talk at I/O about versioning
coming very, very soon.
So you can basically go inside of buckets and
have versions.
So we keep copies for you.
AUDIENCE: Yeah.
That sounds useful.
VILLE AIKAS: So if you're interested in being a
participant when it becomes available, just go and send a
note to gs-team@google.com, and we can
go ahead and put you on it when it
becomes available.
MARC COHEN: Thanks, Ville.
Next question is "has anyone worked on porting the OAuth 2
Boto plug-in to Boto itself?
And how would you pass in the access token during runtime in
non-Google App Engine projects, for example, a
vanilla Django project?"
So the second part of the question, how would you
pass the access token during runtime?
There are ways to get--
I assume this is a Python programmer, because I see the
references to Boto.
There are ways to get access to the inner OAuth goodies--
AUDIENCE: OAuth 2.
MARC COHEN: Sorry?
I thought maybe we got lucky again and had the questioner
in the Hangout.
So there are ways to get access to the refresh token
and the access token and extract it from the underlying
library wrappers that make the OAuth 2 stuff easier to use,
both in Boto and also using the Google
Python Client Library.
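[Editor's note: one hedged sketch of the "pass the token at runtime" part. gsutil's OAuth 2 plug-in caches the refresh token in the .boto config file under a `[Credentials]` section as `gs_oauth2_refresh_token`; the section and option names here follow gsutil's config format as I understand it. A non-App Engine app, such as a plain Django project, could read the token back and hand it to its own OAuth 2 flow. The sketch uses Python 3's configparser, although the Boto tooling of this era was Python 2.]

```python
# Sketch: read the OAuth 2 refresh token that gsutil's OAuth 2
# plug-in caches in the .boto config file, so another application
# can reuse it for its own token exchange.
import configparser
import os


def read_gs_refresh_token(boto_path=None):
    """Return the cached OAuth 2 refresh token from a .boto config file."""
    if boto_path is None:
        boto_path = os.path.expanduser('~/.boto')
    config = configparser.ConfigParser()
    config.read(boto_path)
    # Section and option names follow gsutil's config format.
    return config.get('Credentials', 'gs_oauth2_refresh_token')
```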
As far as porting the OAuth 2 Boto plug-in to Boto itself,
what that's about in case anybody's wondering what
that's talking about is the tool that I showed you
earlier, gsutil, is a Python application that we've
open-sourced.
And it's layered on top of Boto, which is a popular
third-party, also open-source Python programming library.
The purpose of Boto is to make it easy to develop cloud
computing apps using Python.
And it supports Amazon stuff as well as Google Cloud
Storage and a few other things.
And gsutil, the tool I was using earlier, uses Boto.
In order to make it work with Google Cloud Storage, we did
some work in Boto-- and other people did, as well-- to make
Boto understand how to interact with
Google Cloud Storage.
One of the things that we had to sort of teach Boto how to
do was to get OAuth 2 credentials.
And that part of the puzzle was done in the
gsutil command itself.
It's not baked into the lower-level Boto library.
And the person's asking, has any work been done to move
that functionality down into Boto?
And it's been something we've thought about.
And also it makes sense to us.
But we haven't actually worked on that.
And Boto and gsutil both being open-source projects, we would
totally be happy to invite and review any work that anybody
wanted to do in that space if you are enthusiastic about
helping in that area.
It doesn't mean we won't do it or somebody else won't do it.
It's just we haven't gotten to it up to this point.
Let's see.
Down to two more questions here.
"Integration with PageSpeed, can't get it to work."
So PageSpeed is a Google service that speeds up--
I guess I don't know too much about it.
But my understanding is it analyzes your website content
and helps you optimize the download speed by doing things
like minimizing JavaScript syntax and other techniques.
Maybe somebody on the Hangout knows more.
Feel free to jump in if you do.
I assume this person's--
go ahead.
It sounded like we had a comment there.
No?
Sorry if I'm interrupting anybody.
I think there's a bit of delay on this line.
I assume the questioner is saying specifically PageSpeed
with respect to the website configuration feature I've
been talking about.
So I'd like to know more about what didn't work for you.
But I will reach out to the questioner after the call and
try to find out about that because it sounds like an
interesting area that I haven't tried myself.
"Has client software of Windows been launched?"
JEFF SILVERMAN: I've got an answer for that one.
MARC COHEN: OK.
Let me get you on camera here.
JEFF SILVERMAN: Hi.
I'm Jeff Silverman.
I'm a development support specialist.
And I got the gsutil utility working quite nicely under
Windows and Cygwin.
MARC COHEN: Thanks, Jeff.
So that's another option you have.
gsutil, the tool I was using earlier, can be run
on Windows as well.
In fact, it can be run without Cygwin.
But I think it's a little bit easier--
JEFF SILVERMAN: It works a lot better with Cygwin.
MARC COHEN: With Cygwin.
That's the end of the question list.
I'm just checking to see if a couple more might have popped
up in the meantime.
"Any alternatives to Google Storage Manager?"
So that's the web user interface.
And there are commercial products that are available
and support Google Cloud Storage.
So there are few third-party alternatives out there.
And if you Google "Google Cloud Storage
web UI," something along those lines, you should be able to
find a few.
But there's no alternative that's officially supported as
of now by Google.
And let's see.
I think that pretty much covers
the Moderator questions.
So we're coming up towards the end of the hour.
We have about five minutes left.
I'll just open it up one more time for any
questions in the Hangout.
And then I think we'll close up shop here.
Any other questions live?
OK.
Well, thank you guys all for joining us today.
Hope you got something out of it.
And we'll try to do more of these in the future.
VILLE AIKAS: Thanks, guys.
MARC COHEN: Thank you.