Veme: So this talk is aimed at a 201 level.
That means we're expecting
that you've got some experience with App Engine,
you've probably written some code
and that you probably also have written some code
using the task queue system.
If you haven't, that's okay.
You should be able to keep up
with what I'm going to say today,
get the gist of what's going on.
So let's just go back and look at App Engine history.
So App Engine is a great system
for writing your web apps.
You can manage them, scale them automatically.
You can write them very easily,
you go straight out of the box,
start up for free if you like.
One of the limitations that we have
is that user face requests are limited to 30 seconds.
So the task queue was initially introduced
as a method of overcoming some of this limitation
and also offloading some of the work
that users really shouldn't have to wait for,
so things like datastore updates or URL fetches,
things that just do some cleanup tasks in the background.
So here's a mental model for you of what a task queue is,
how we'd like you to view it.
The tasks here are indicated as the colored boxes
and the queue here is sorted in order of increasing ETA.
That's the earliest time at which we'll attempt
to push the task to your application.
So I'm showing you here a model for pushing tasks.
So tasks here, HTTP requests,
you can see on the right hand side,
there's an instance of your app
that's adding a task to the queue
and that blue box is going somewhere in the back there.
And you can also see that the task queue service
is taking some tasks from the queue,
and pushing those tasks out
to other instances of your app for execution.
So this is the push model.
You have control over how many tasks
can be running concurrently.
You have control over the rate at which they're executed,
and the task queue service handles everything else for you.
So the way that you define these queues is quite simple.
You've got queue.yaml
or if you're using Java, queue.xml,
in which you can specify the name and the rate
and other parameters like the retry parameters.
The tasks are gonna be added to the queue using an API,
called taskqueue.add,
very simple one for both Python and Java.
You can see that in the Python case,
you refer just to a URL equals foo,
use a bit more type information in Java.
The task queue system will push those tasks,
those URLs to your app at the rate that you specified
and the instances are scaled
automatically by App Engine itself.
So any failed task, that is, any task
whose return code is not in the 200 range,
is automatically retried,
and you also can exert some control
over how those retries are managed,
what the back offs are, how the back offs should grow,
what number of retries are allowed
before the task would fail permanently.
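As a concrete sketch, a push queue definition along these lines sets the rate and the retry behavior just described; the queue name and the values here are illustrative, not taken from the talk:

```yaml
queue:
- name: background-work
  rate: 10/s                    # how fast tasks are pushed to your app
  retry_parameters:
    task_retry_limit: 7         # permanent failure after this many attempts
    min_backoff_seconds: 1      # first retry delay
    max_backoff_seconds: 300    # cap on how far the backoff grows
```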
So here's another bit of history.
This is the way that our product has grown
in the last couple of years.
So you can see that after the App Engine launch,
it was a gap of about a year, and then we were able to launch
task queues as a labs feature, it was called at the time,
that's experimental.
And Java support was added a quarter after that,
but initially, we had limitations,
maximum QPS was initially 20 QPS across your entire app.
That was relaxed later to maximum of 50 QPS per app,
and then maximum of 50 QPS per queue.
So at that time, you could have ten queues.
That was a big increase.
Finally, we were out of labs at the end of last year
and we had maximum task running length
increased from the standard 30 seconds up to 10 minutes.
Then we increased the maximum allowed QPS per queue to 100.
And today, we'd like to tell you
that you're allowed 500 QPS per queue.
So today, we really want to focus your attention
on the first really large new feature
in the task queue system.
And this is a new model for the queues.
It's called pull queues.
So these are new, as I said.
They have been mentioned
in a couple of the other talks already.
For example, if you were here for the backends talk,
you might have heard them mentioned.
So here's the basics.
In push queues, the tasks are actually HTTP requests.
In pull queues, tasks are just data.
So the meaning of that data is completely up to you
as the programmer who creates the tasks
and will also be doing some work in response to the tasks.
So workers are leasing tasks from the queue.
That's in contrast to the push model
in which the task queue system just calls
those HTTP requests on your application.
So instead of the task queue system,
deleting tasks because an HTTP request
returned a return code in the 200 range,
it's up to a worker in a pull queue
to delete the task manually.
And scaling is also your responsibility
when you're using the pull model.
So in the case of push queues, App Engine itself
will scale the number of instances for you
depending on load.
In this particular case of pull queues, it's up to you
to configure the number of workers that you need.
So I said that workers lease tasks
and there's a reason for that,
and that's that a worker might crash
while it's working on a task.
And it would be a bad thing if the task was lost
and no one else was able to complete the work.
So if your worker crashes, the lease will expire
and at some time after that, another worker will be able
to lease the task and complete the work.
But, if the task is actually the cause of the crash,
and will be causing crashes indefinitely,
you can actually limit that.
You can say no more than this many tries
at leasing the task
before the task is considered to have failed permanently.
That prevents infinite crash loops.
So let's just take a quick look at the API.
First I'll cover Python and then I'll cover Java.
But I'd also like to give a plug to Go.
Although I don't have a slide on it,
we have an implementation in Go.
So in Python, the big change is that a queue in queue.yaml
can now be specified with mode pull.
If you don't specify mode at all,
it defaults to push.
So pull and push are the two valid values for mode.
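So a minimal pull queue definition might look like this (the queue name is illustrative; as just described, omitting mode gives you a push queue):

```yaml
queue:
- name: votes
  mode: pull   # omit mode, or say push, for an ordinary push queue
```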
When you're adding tasks, to tell the system
that you're actually adding one of these data-only tasks,
you say, method equals pull.
So here you can see, you know, task add,
payload equals hello, method equals pull.
Now, leasing tasks,
we return a list of task objects back to you.
You can lease more than one at once.
So here we're just trying to lease three tasks
for 30 seconds.
The first parameter there is the length of the lease.
And finally, with that list of tasks,
you can use that also as the argument to delete tasks
once you've finished doing your work.
So q.delete tasks.
And in Java, things are very, very similar.
There's just a few more things
that are a bit more idiomatic for Java there.
So here we use our builder pattern for task options
to add a task, and we use an object
that specifies that this is a pull task.
Leasing tasks returns a list of task handles.
Task handles are a slightly different abstraction in Java
that represent tasks in storage vs. task options
which represent the tasks before they have actually
been created in storage.
And we see lease tasks again.
This time, we can specify the lease length using a TimeUnit.
And finally, deleting tasks is done in a loop,
but a very simple loop, as you can see there.
So I want to give you a mental picture
of what's happening as you are leasing tasks
so that you can see that in this particular case,
time is going to be flowing.
The tasks are always in the queue until deleted.
So I'm indicating again tasks as the colored boxes.
And I'm also indicating with that now pointer
where the current time is.
And then everything to the left of that pointer
is available for lease.
Everything to the right of that pointer
is considered already under lease
and therefore, you can't lease those.
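To make that picture concrete, here's a toy in-memory sketch of those lease semantics. This is my illustration of the mental model, not the real task queue implementation: leasing moves a task's ETA past the now pointer by the lease length, so it's invisible to other workers until the lease expires or the task is deleted.

```python
class ToyPullQueue:
    """Toy in-memory model of a pull queue: tasks sorted by ETA, where
    everything with an ETA at or before 'now' is available for lease."""

    def __init__(self):
        self.tasks = []  # mutable [eta, payload] pairs

    def add(self, payload, eta=0.0):
        self.tasks.append([eta, payload])

    def lease(self, now, num_tasks, lease_seconds):
        # Tasks left of the now pointer are available; leasing moves them
        # lease_seconds past the pointer, hiding them from other workers.
        available = sorted(t for t in self.tasks if t[0] <= now)
        leased = available[:num_tasks]
        for task in leased:
            task[0] = now + lease_seconds
        return [task[1] for task in leased]

    def delete(self, payloads):
        # Tasks stay in the queue until a worker deletes them explicitly.
        self.tasks = [t for t in self.tasks if t[1] not in payloads]
```

If a worker leases tasks and then crashes, simply advancing now past the lease expiry makes the same tasks leasable again, which is exactly the crash-recovery behavior described above.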
So here is a worker attempting to lease three tasks
for 30 seconds.
We can see highlighted there
the three tasks that are available.
So in this slide,
we see them having been delivered to the worker,
but also moved past the now pointer
into the leased section of the queue
and they actually moved to a point that's 30 seconds
beyond the now pointer as it exists.
So we see the three tasks come down.
Now the worker is going to work on those tasks,
is gonna do whatever work was encoded in them,
presumably the same programmer
wrote the code for inserting the tasks
as is writing the code for the worker.
So the interpretation is very much up to you.
So now it's up to the worker to delete the tasks,
so it deletes those three colored rectangles
and we see them gone.
But you also see that the now pointer has moved on
and the next lease will possibly return three tasks as well.
So I'd like to give you a demonstration here
of pull queues in action. So this is one of the models
that is pretty good for pull queues.
And this is when your tasks are considered to be
very small items of work rather than large items of work.
And we want to lease multiple of them at once
so that we can batch the work together
and gain some efficiencies on datastore and other APIs.
So in this particular case, the tasks are votes.
And we're going to be tallying the votes
and storing them periodically back to datastore.
But we don't want to be doing a datastore operation per vote
because that may well use up all of our quota very quickly.
So we're going to reduce by accumulating votes first.
So let me just move across to the voting app.
And this is something that all of us have opinions on.
It's all very tongue in cheek,
so I don't hate any of these languages, actually.
So selecting one of them is considered a vote.
And you'll notice that the tally hasn't updated immediately,
however, if I refresh the task queue,
I can see that some of you
have already been putting votes in as well.
And these are the votes currently in the queue.
Now, if I do another refresh,
we should see the tallies changing.
There's somebody here who's decided to add a vote for Java
and another one in for Perl.
And you can vote as often as you'd like.
So someone may decide just to run curl in a loop.
No, it can't connect. That's all right.
Right, so let's just move on-- to some of the code.
So firstly, we have our boilerplate, app.yaml.
Everything happens in main.py.
It's a very simple application.
And queue.yaml, you can see defines a queue
and it's called votes and it's in pull mode.
And our handler at the root level
is just the vote handler.
On the post, you register a vote;
on the get, you simply display the form for voting
and the current tallies.
There's a tally handler as well,
and that's called only by the tasks.
Sorry, it's called from cron, but it's not meant for users.
This is the one that actually is going to do the work
of pulling the votes from the queue and updating tallies.
So just a quick look at vote handler.
You can see there in the post method,
that I'm adding the tasks to the queue with method pull.
Now, the workers,
well, we've got lots of choices for workers now.
Just in the previous talk, you heard about backends.
Backends could be workers.
Also, cron tasks can be workers.
Also, long running tasks in a push queue would be workers.
It's really up to you.
So in this case, for this example,
we're gonna specify it in cron.
So you see there, every minute, the tally handler runs.
So the data model is pretty simple.
It's just a count,
the key name is going to be the language name.
And there's a convenience method there
that actually will update the count.
For storing, we run that convenience method
in a transaction.
And finally, the bit that you've been waiting for,
which is the loop in which we are going to be leasing tasks,
doing our work, and then deleting them.
So it's as simple as you could imagine.
First, we get hold of the queue.
Then, in a while-true loop, we just say lease tasks,
try to get 1,000 of them at once.
1,000 is the maximum that we're allowed
and we'll lease them for five minutes.
If there's any kind of failure, five minutes from now,
those should be available for another worker to lease.
So if you didn't get anything, you might as well just return,
and the next invocation of this handler, via cron,
will pick up anything that's arrived.
And as you see there, we're gonna add them into a map
and then use that store tallies function to go through the map
and do the datastore updates.
Finally, delete the tasks.
So a failure at any point in this
is going to result in either a retry,
if you were using long-running tasks,
or the next invocation of cron
picking up any of the tasks that were actually lost
or could have been lost.
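Here's a stripped-down, SDK-free sketch of that loop. The real handler uses the taskqueue API; in this sketch the lease, delete, and store operations are injected as plain functions, and all the names are mine, for illustration:

```python
def run_tally(lease_tasks, delete_tasks, store_tallies):
    """One cron invocation of the tally worker: lease up to 1,000 vote
    tasks for five minutes, reduce them to per-language counts, write
    each tally once, then delete the leased tasks."""
    while True:
        votes = lease_tasks(300, 1000)   # (lease_seconds, max_tasks)
        if not votes:
            return  # queue drained; the next cron run picks up new votes
        counts = {}
        for language in votes:
            counts[language] = counts.get(language, 0) + 1
        store_tallies(counts)            # one datastore write per language
        delete_tasks(votes)              # only now are the tasks gone
```

Because the delete comes last, a crash anywhere in the loop leaves the leased tasks to expire and be re-leased, at the cost of possibly counting some votes twice.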
So the main advantage of an approach like this
is to really limit your load on the datastore.
If we were doing one transaction per vote,
we could potentially be doing
1,000 times as many datastore operations
as we're doing now with this approach.
And with that, I'd like to hand over to Vivek
who is gonna tell you all about how workers can run
outside of Google App Engine.
Sahasranaman: Can we switch?
Veme: Yes.
Sahasranaman: So we're gonna talk
about something slightly different,
and it's probably like the first API
that really exists on App Engine
that allows App Engine to be accessed from outside.
And we think that for task queues,
it's a very common use case that there are things
you want to do that, you know,
currently may not be doable on App Engine,
or it might be easier for you to do them outside.
You know, image processing is a good example.
OCR is another example.
For all of these things, you probably have binaries
that you downloaded from the Internet
and things that you can't run easily in App Engine.
So what we provide for these things is an API,
so that you can access the pull queue over REST.
And this allows you to run your workers
anywhere on the Internet.
So they can be on VMs.
They can be on hosted machines.
They can be, you know, anywhere.
And given that these workers can run anywhere,
they can actually run custom binaries,
they can run image processing packages.
So for people who aren't familiar with REST,
REST stands for Representational State Transfer,
and it's a very common model
that a lot of Google APIs are moving towards.
If people were here at the Google Storage
for Developers talk in the morning,
that also has a REST API.
And REST basically has a model
of collections and resources,
and for the task queues,
we have queues and tasks as our collections.
And we allow workers to lease and delete tasks
using the REST API.
And we still expect that most of the insert calls
and the thing that actually feeds into the queue
is happening from inside App Engine.
The first question you would ask
when an API like this is made available is,
how do you authenticate?
How do you make sure that, you know,
only authorized callers can actually call this API?
And the way we set that up is
that you can actually specify an ACL.
And we'll see an example of how that is set up.
You can specify an ACL in queue.yaml
when you're uploading your App Engine app.
And the API then makes sure that only the guys
who are specified in the ACL can actually access the API.
It uses OAuth,
which is a standard authentication mechanism
that a lot of Google APIs are also moving towards,
and which avoids, you know,
keeping passwords around to do authentication.
And we will see examples of how this works.
So to get started with this,
there's an extra bit that you need to specify in queue.yaml
or queue.xml if you're doing Java.
It's called ACL.
And we'll see a very specific example,
but basically you specify a list of users there,
and all of those users
are then authorized to access your task queues.
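As a sketch of that configuration, a pull queue with an ACL might look like this; the acl field with user_email entries is how the queue.yaml reference spells it, as best I recall, and the queue name and addresses here are illustrative:

```yaml
queue:
- name: imageproc
  mode: pull
  acl:
  - user_email: worker-one@example.com
  - user_email: worker-two@example.com
```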
And then you access the REST API which is at that URL.
And Google is moving to REST in a big way,
and a lot of APIs are available,
and there's an open source API client library
that you can use to talk to these APIs.
And we've built a couple of samples on top of these
that are available open source from that link,
which are also at the end of the slides.
And specifically, there are two of them.
There's one which lets you run a single command
against your queue,
just to make sure that everything is working.
There's another one which kind of runs in an infinite loop,
continuously pulls tasks from your queue,
executes an arbitrary binary for each of your tasks,
and then deletes the task.
This, we think, is another very common pattern
that a lot of people will want to use
when they're using pull queues.
And both of these examples are available.
We'll also use these examples in the demos
that we're going to do.
So the first example I have is probably the simplest example
that you can build,
and something that you can't do in App Engine today,
which is very simple image modification.
So App Engine has a very simple image API,
which lets you do small degrees of transformation,
but we'll try to do something that's just a little bit beyond
what App Engine can do today.
And the model is very simple.
The task that we will create
will contain a single photo in the payload
which will be uploaded from an App Engine app.
And then workers running in VMs will actually execute
an image processing binary on these tasks
and put it back into App Engine.
I'll show you the demo first.
And then come back to how this thing is built.
So--
I'll pick an image--
and upload it.
What I also have is a machine.
This is just a standard machine.
It doesn't really matter where this machine is running.
And on this machine,
I'm gonna basically be running something
that can access the API or REST.
So in this case, I'll run this thing called the puller
which will continuously access the queue
and will actually execute the binary,
convert -annotate, on the payload
and push it back into App Engine.
And we will notice that there's a processed image
that's come back.
So to kind of go into a little bit more detail
into how this app is constructed,
I'll kind of just lay it out pictorially first for everyone.
So there's basically an App Engine app,
and there's a worker machine pool
which is outside App Engine
that's trying to access the pull queues.
So what happens is you first insert your image
via a user request handler,
which then writes the image into the datastore
and then also queues up a task into the pull queue
which also contains this image as the payload.
You've got a worker machine pool
which is continuously polling this pull queue
and it will eventually notice that there's a task available.
It'll lease this task over the REST API,
execute its custom work, whatever it needs to do.
In this particular case, I've chosen to use
OAuth authenticated upload
to return the converted image back into App Engine.
And there's a handler that exists inside App Engine
that eventually writes it back into datastore.
The OAuth authenticated upload
ensures that only authenticated users
can actually access the worker upload handler.
So the App Engine side of things
looks very similar to Nick's demo.
So I'm gonna kind of skip over that part
and I'm gonna focus more on how to set up the thing
that's outside App Engine.
So as mentioned before,
there's this ACL bit that you need to set.
In this particular case, I've chosen me and Nick
as the consumers of this queue.
And we just specify those,
and then you write an App Engine.
You write your App Engine code to insert stuff
into the queue as before.
And then, you have to write some code on the other side
to actually pull it out of this queue.
As I mentioned before,
there's the API client library which lets you
access all APIs that are available over REST
in a very systematic way.
And this is available in many languages.
It's available in Python and Java.
In this specific example, we choose Python,
because of the ability of Python to generate code on the fly.
The API client can actually call a URL,
get a description of what the API looks like,
and generate an object for you
on which you can make a function call,
which eventually translates into an API call on the interface.
So the way we do this is import a build library
that's available from API client
and give it the API that we want to build.
And this will give us an object which we can then use
to make an API call.
And it will just look like a function call in Python.
The next thing we want to try out
is to make sure that this whole thing works.
And here, I get an opportunity
to demonstrate how the OAuth system works,
and how the thing that I'm running outside App Engine
is actually getting a token from you,
and using that token to authenticate you
back into the API.
So I will kill this...
and run this utility
which basically is trying to lease one task from the queue.
If you notice, the first thing that it does
is that it actually asks you to go to this URL and authorize.
So what this is trying to do is it's doing standard OAuth
and asking for authorization for--
it's saying that there's this command line utility,
Google API client by task queue command line,
which is trying to access an API on behalf of you
and you need to grant access.
So if you do this, it will give you a verification token
which I paste back into my command line
and then I notice that the API call goes through.
And just for simplicity, this app is written
so that it actually caches the token in a file
so you don't have to do this over and over again
which is why my puller was actually working without the--
without this step in my previous iteration of the demo.
And then finally, to the worker code,
so once you've got authentication working,
what you need to do is you need to take this object
that you created using the API client libraries
that I described two slides ago,
and just make an API call on it and as I promised,
the API call looks like a function call.
So, like, every verb that you expose over the API
translates into a function call in Python
and it looks quite nice.
It's like, all you have to do is create a request, which says
taskapi.lease, and you give it a few arguments
and just execute this request.
This results in a REST API call
with an OAuth token happening to the backend.
And a response comes back
which is [indistinct] representation of a task,
or of a series of tasks in this case,
because lease can actually give you more than one task.
And then, like, for people who are familiar with Python,
all you do is you just iterate away your tasks.
For each of your tasks, I chose to execute
convert -annotate and post the output
back into my app, and then eventually delete.
So things work out very nicely in Python.
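The worker's control flow can be sketched like this, with the REST lease, the convert invocation, the authenticated upload, and the delete all factored out as injected functions; the names are mine for illustration, since the real sample calls the task API over REST and runs the binary via a subprocess:

```python
def run_worker(lease, execute, post_result, delete, batches=1):
    """Skeleton of the external worker: lease tasks over the REST API,
    run the custom binary on each payload, post the output back to the
    App Engine app, and delete each finished task."""
    for _ in range(batches):
        for task_id, payload in lease():
            output = execute(payload)      # e.g. convert -annotate ...
            post_result(task_id, output)   # authenticated upload to the app
            delete(task_id)                # on a crash, the lease just expires
```

The real puller runs this loop indefinitely; capping the batches here just keeps the sketch finite.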
In Java, it's a little bit more complicated,
because Java reflection only works up to a point;
you can't do this code generation kind of phase
in Java, so you have to actually compile
the library into your binary
rather than doing it all at runtime.
We will also see another example which is slightly different.
So the second question, after OAuth,
that people would ask about an API is,
how much can I pull from this API?
The current answer to that is about 100K.
So you can have tasks up to 100K in size,
and you can pull a maximum of 100K out.
So the next question would be,
how do I do something bigger than that?
So the standard computer science answer to that
is to use a level of indirection.
So what we will do is we will actually use another app here
which demonstrates that.
And you can use any of several storage systems
to kind of do this level of indirection.
In this particular case,
we choose to use Google storage for developers,
which is also kind of talked about in the morning today.
And it's an authenticated system
where you can either store an authenticated token
or use a predefined secret that is shared
between Google storage and yourself
to kind of authenticate.
And what we've done is we have the same token
stored both inside App Engine
and in the workers outside App Engine,
so both of them can basically access the same buckets.
And just for fun, what we're trying to do here
is photo-stitching, which has now kind of become commodity
and works on phones, but just to make it
a little bit more fun, I also chose some examples.
These are images that are taken from the Street View project.
And what we will do is we will ask the worker here
to stitch a few images for us.
So like before, I run a binary on my worker
which has now noticed that there's a task in the queue
and got busy with it,
and will eventually finish stitching these images.
This application is also built so that it
continues to show you what the app in the background is doing.
And if you notice what it's doing is
it's actually writing all output to bigstore
or to Google storage for developers,
and both the App Engine app and the worker outside
are able to access it.
And here's potentially an interesting place
that a lot of people here would probably know about.
It's kind of close to here.
And just for fun, I'll do another one,
which is a little more complicated.
I might stretch you guys a little bit more.
And you notice the worker picks it up again and starts off,
and once this--
And once this one finishes,
I'll give people an opportunity to guess
where these images were taken.
I suppose you can only guess once the stitching is done.
Okay.
And here we go. Any ideas?
The guys who have seen this demo before
are not allowed to answer, and for the other guys,
there's a very small hint
of a very iconic monument.
Try again.
Someone said Sydney. This is actually a view
from very close to the Google Sydney office.
If you notice, that's the Sydney Harbor Bridge
that you can see.
And this is an image that was taken from
the Google Street View project very close to the Google office.
So coming back to this app, just to kind of show
this app pictorially again, basically,
it's very similar to the previous app
except that it uses external storage system,
and uses a shared secret
between App Engine and the worker pool
to access the storage authenticated.
And in this case,
we happen to use Google storage for developers.
You could use lots of others.
There are several other similar storage utilities available.
So to kind of recap, there's basically a few things
that you need to worry about when you use pull queues,
especially for users of push queues
who are kind of used to a degree of convenience.
Scaling is a problem
that you need to worry about yourself with pull queues.
The benefit that it gives you is that you can run these outside,
as well as do things like
reduce load on the datastore,
and so on and so forth, like Nick demonstrated.
But the number of workers that you run is your problem.
The queue is a passive entity
in this case,
and you have to figure out how many workers to run.
There's an operation on the REST API
in case your workers are outside App Engine,
which gives you statistics
on the number of tasks in the queue,
so how many tasks are there in the queue,
and how many have been executed in the last hour,
and so on and so forth.
And that can potentially give you some hints about,
you know, how many workers you should be running.
If your queue is backing up,
you will start seeing that count rise
and maybe that will give you a hint
that you should be running more workers.
If your queue size is at zero for a long time,
then maybe you should, you know, spin some workers down.
The second thing to talk about is about choosing a lease.
So as Nick discussed, if your tasks overrun the lease,
they're available for leasing again
which means that another worker will be able to pull them
and execute them,
which means that you will get a lot of, you know,
duplication of work happening
if you choose a lease that is too short.
So to start with, we suggest that people choose a lease
close to the worst-case time a worker can take.
So in that case, you kind of reduce
the amount of duplication of work.
At the cost of a little bit of overhead
if your worker crashes, but that should be okay
because you're probably doing offline processing here anyway.
Another thing to think about
is about what your worker is really doing.
Even with choosing a very conservative lease,
it's possible that sometimes your workers
might overrun the lease.
In that case, the same task will get done more than once.
Although this, you know, depending on how
you've got your things set up,
it might be relatively rare or not.
But, you know, we suggest thinking about
whether you want your tasks to be idempotent,
so that if they execute more than once,
they don't produce effects
that you can't handle in your application.
If a repeat of a task
were to produce exactly the same effect,
then you would be basically insured
against a task executing more than once.
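One way to get that insurance, as a sketch: record an identifier per task (the task name or tag, say) and make the worker's effect conditional on not having seen it. Here an in-memory set stands in for whatever persistent store you'd really use, and the names are mine for illustration:

```python
def apply_once(counter, applied, task_id, amount):
    """Apply a task's effect only if its id hasn't been recorded yet,
    so a re-delivered task is a no-op. 'applied' stands in for a
    persistent record of finished task ids (a set, in this sketch)."""
    if task_id in applied:
        return counter          # duplicate delivery: same state as before
    applied.add(task_id)
    return counter + amount
```

In a real app you'd record the id in the same datastore transaction as the effect itself, so the check and the update can't come apart.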
And finally, posting back to App Engine from workers.
On the getting the task out of App Engine side,
the REST API helps you.
It's authenticated.
On posting data back to App Engine from workers,
you need to worry about OAuth yourself,
and we saw two examples of how that could be done.
App Engine inherently supports OAuth on all its handlers.
So you can actually use authenticated uploads
into App Engine, which is what we did in our first example.
And you can also use an external storage
kind of system with a shared secret
to kind of protect data transfer.
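As an illustration of the shared-secret idea (my sketch, not the code from the demo), both sides can hold the same key and use an HMAC signature to verify that a payload came from the other secret-holder:

```python
import hashlib
import hmac

# Illustrative shared secret: the same value would be configured both in
# the App Engine app and on the external workers.
SECRET = b"example-shared-secret"

def sign(payload: bytes) -> str:
    """Signature either side can compute over a payload it sends."""
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, signature: str) -> bool:
    """Check, in constant time, that the payload came from a secret-holder."""
    return hmac.compare_digest(sign(payload), signature)
```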
And that's it.
So we've got a bunch of links
and a lot of this code is already available.
Both the samples that we used in the demos
are actually up on the App Engine samples code site.
And the REST API samples are also available for you
to pull and use.
And there's documentation.
One of the things I'd like to show people
is this thing called the API explorer
which came out from the API team which also kind of indicates
how you can use like a description of the API
to actually kind of render and actually run an API
from a browser.
So in this case, the task API,
it's doing all this by actually querying
what the API looks like,
and getting a description of the API
and it knows exactly how to make the call.
So, like, for example, for the task API,
there are all these calls that exist.
And in this case, if I choose to do the lease call,
it knows that these are the four parameters to fill.
Most of the calls that we have on our API
are authenticated calls, so they're not public calls,
so it allows you to go into this mode
which allows you to do private access, which did not work.
Oh, okay, well, actually, okay, it did work.
So it-- and it automatically knows
that there's a scope on this API
which is a task queue scope,
so the only thing you can do on this API call
is access a task queue.
And now I can fill up arguments very similar to--
and try to lease one task out of that queue for 30 seconds
and you will actually see a request go through
and response come back.
In this case, my queue has no items.
So the lease returns nothing but it's kind of like
a fun way of, you know,
actually trying out an API from a browser
and figuring out whether things work.
Questions?
Please come over to the microphones
to ask your questions,
so that they can get recorded on the video.
man: How many task queues can I have now?
Veme: You're allowed up to 100
if you have enabled billing on your app.
Otherwise, you can have 10.
man: The other is, are tasks--
when App Engine goes into maintenance mode,
are task queues still available?
Veme: Yes, they should be available.
man: I think you mentioned the word "reduce,"
I think you said it was with a small "r,"
not a capital "r."
Veme: Yes.
man: You've gone through 41 slides
without mentioning MapReduce.
Veme: Yes, that's right,
but MapReduce is still under development
and we're hoping that pull queues themselves
may actually form part of the solution for reducing.
man: Okay, thanks.
Sahasranaman: Just wanted to add
that MapReduce and pull queues are actually
for slightly different things:
a pull queue is something that kind of gives you
like a workflow kind of system,
whereas MapReduce is more like bulk processing.
man: Could I add to the queue
in a transaction like I can with the push queue?
Veme: Yes.
Actually, all of those operations
for adding transactional adds are still supported.
Yes.
man: Also, can I relinquish a lease once I've taken it?
Veme: The way that you would do that
is simply to let it expire.
So stop working on it, it expires,
someone else can get it.
woman: I may have missed something, so I'll ask anyway.
I have two questions.
One is, do you see any value in workers
registering with REST and being notified of tasks,
so that they can pull them down, just to reduce the overhead?
And the second one is, I know it's called queues,
but I'm wondering is there any potential
for a JavaSpaces-like mechanism
so workers can actually request by template
rather than just for the first task in the queue?
Because right now, it looks like you'd end up
having to set up separate queues for each kind of task.
Sahasranaman: I think I'll let you answer the second part
and I'll answer the first part.
Veme: Yes. So we do have several requests
on our issue tracker
for just the kinds of things that you've mentioned.
We don't support them just at the moment,
and we're certainly working on a design,
but I really can't give you more than that. But, yes.
woman: So does that mean that's a yes for both of them?
Notifications?
Sahasranaman: No so I'll answer the notification question.
woman: Oh, okay.
Sahasranaman: So the notifications question
is that basically, this is something that our API team
is actually currently kind of starting to work on.
None of our APIs currently have notifications.
Like, Calendar doesn't have notifications.
So there's a lot of APIs that you could imagine--
and notifications and API
are really two sides of the same coin.
woman: Right.
Sahasranaman: So yeah, so we are probably gonna have them
but they don't exist right now.
woman: Thanks.
Veme: All right. Well, there's no more questions.
Thanks, everybody, for coming.
Sahasranaman: Thank you.