[sound]. Folks welcome to CS10 Lecture 2 everybody. Woooh! [clapping]
Alright, I am happy to say that SOPA and PIPA got killed. These were bills that were
introduced to reduce [crowd cheering] uh, online piracy. It's a big deal, folks. Those
of you who are following it know this is a really big deal. They were well-meaning
bills, and there is still a tension: there really is still an issue that piracy is
happening, and the goal was to try to
curb online piracy. And that's a real thing, and I really support the, the
intent of the bills but the letter of the law was actually quite poorly scripted
and could allow for a lot of censorship. So that was obviously why the folks at
Google and Wikipedia and other folks stepped up and said, got to kill these
bills because they are worthy of being killed. They are not written well. So
maybe version two or version three will be okay, but this version one was terrible
and I'm glad, I personally, Dan Garcia, that these bills were put to sleep.
That's great. Alright. Today is one of our
demystification lectures in which we're going to demystify 3D computer graphics
so it's really awesome. Um, if you look at computer science from ten miles up,
computer graphics is one of its many sub-fields.
Others are AI and human-computer interfaces and databases and computational
biology, and a lot of theory. I'll talk more about those later as the course
and the semester goes on, but this is one of the biggest, and for me it's the
coolest. Why? Why? Because of images like that. Because you can do
such creativity, because the artistic side really can be blended with the
technical foundation of 3D computer graphics. You can really have an
artistic side to that. So, our graphics group is ranked in the top ten. If you
have any interest in this field and want to do more, there are two
opportunities I'll talk about later. One is to do recreational graphics through a
local group called UCBUGG, and two is to talk to local people
about some research project in graphics. People often know about 2D graphics,
and 2D graphics sometimes gets confused with 3D graphics. 2D graphics is really
illustration, you know, like print media. Today is really about 3D graphics, which is very
different. So, 3D graphics, as you probably know, is used in many different
fields. You see it all the time. If you're going to watch a Super Bowl you'll see tons
of 3D graphics. If you watch any big, blockbuster films, there's tons of 3D
graphics involved there. One of the big ideas is that there are two different sides
to it. There is the kind of graphics where you can lovingly take lots of time,
because you're going to be building up frames that end up being shown at the
Super Bowl or in a movie. And so those are
kind of the film, television, and print markets, where you might have hours per frame.
There, to render, which means to make the picture, might
literally take a whole day for one frame of a 30-second film. And that's a lot of
time to make a 30-second short commercial. The other side is the video
game market and the video game market is one in which you have to have real time
play, in which the number of seconds per frame is not one, not five, not an hour,
but a fraction: you need 30 frames a second, so it has to be
one-thirtieth of a second per frame. What's incredible is how much work has been done
by the group of folks putting together graphics cards. Have you seen the big graphics
cards in your computer? Who knows about the graphic cards in your computer? Okay,
so these are big cards that often cost a couple of hundred dollars that you add on
to your machine (most big desktop systems come with one as part
of the system), and they have the capability to render millions of polygons,
which are little teeny triangles, per second, yielding unbelievable realism and
unbelievable real time visualization of images. So you see the picture from Gran
Turismo which is a gorgeous picture of this photo-real video game, in which
you're seeing thirty frames happening a second. Contrasting that with Avatar,
which might have been thirty *hours* per frame. So there really are two different
worlds. On one side, it's lovingly taken care of: I don't care how long it takes,
just give me the most beautiful photo-realistic image you can. Or, in
the Pixar sense, the DreamWorks sense, where you aren't doing plate
matching: just give me the best-looking image you can. You're not
doing photorealism; you're doing caricature. Versus the video game industry,
which says: give me the best you
can do. I don't care if you cheat physics at all. I don't care if you cheat
anything. Just give me a hack so that you can get that great
performance for that game, okay? It might not be totally realistic, but it works
and it's fast. However, the line is often blurred between video games and film. And
this is a competition called scene from a movie, in which people try to take a real
movie, and make a real time video game that looks as good as that. And so, this
is a scene from Blade Runner. Who has ever seen Blade
Runner? Blade Runner is an incredible movie, uh, a
science fiction movie from, I think, the 80s. Some love? 80s?
There's a nod [confirming it was the 80s]. Before most of you were born,
I think. Uh, in it there is this rain scene and it's incredible. And
this is a real time video game experience. So people spend a lot of time trying to do
that, trying to kind of blend the two: getting movie
quality in a real-time setting. It's quite
impressive what the people who are building graphics cards can do. Okay.
This is what we call the 3D graphics or 3D animation pipeline. Simplified into
CS10 terms, it is a four-stage pipeline: you have
modeling, then animation, then lighting and shading, where you're adding lights to
the scene, and then you have the rendering stage. Each of those
four pieces I will talk about in a second, okay? So this is the four-stage animation
pipeline. So modeling. And by the way, if you ever see on the slide a
picture of a movie, it's time for a little movie. So we're going to do a lot
of that today, hopefully. So modeling means you need to model what the geometry
of the world is for the system. And so that is, that could come from several
sources. That could come from, as the slide says, scanners, like a laser range
scanner, very much what happens in a
Kinect™. You can have the Kinect actually feeding a geometry modeling system:
you put your hands out there and all of a sudden it gets all the fingers
and the body. A Kinect could do that work for you. That's a 3D scanner. Interactive
modeling which is actually having an animator go and move the pieces, we'll
show you that. Libraries, where people have built a lot of these objects and you're
just downloading like clip art, but for 3D. Also, procedural techniques where
you write a program to generate geometry and that's really rich and really
powerful. And that image you see of the Menger Cube is actually a case of
procedural geometry: people wrote code to generate that beautiful cube. Isn't that
amazing? Modeling also involves
something called rigging, which is attaching a skeleton to whatever you
build so that when I move the skeleton, the whole thing will move around. Does
that make sense? It's kind of like attaching connections to where the joints
would be on me. If you're building me, you might model me, but if you don't rig me I'm
just still, motionless. You have to then rig it to add that, so rigging is part of
modeling here. See, I don't usually mention rigging, so write this down, because rigging is on this
slide. The key is there's a lot of math behind it. So if you like math and you
like thinking about models (wow, that's really fun), you may think of going into
modeling. Okay, so now let's go to a film and show you some examples of what
modeling could be. All right.
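As an aside before the film: the procedural-geometry idea, writing code to generate a model like the Menger Cube, can be sketched in a few lines of Python. This is a toy version (the function name and the cube representation are mine, not what any production modeling tool actually uses):

```python
def menger(depth, x=0.0, y=0.0, z=0.0, size=1.0):
    """Recursively generate the solid sub-cubes of a Menger sponge.

    Returns a list of (x, y, z, size) tuples, geometry a modeling tool
    could turn into polygons. Each level splits a cube into 27 thirds
    and keeps the 20 that are not the center or a face center.
    """
    if depth == 0:
        return [(x, y, z, size)]
    s = size / 3.0
    cubes = []
    for i in range(3):
        for j in range(3):
            for k in range(3):
                # The holes: positions where two or more indices are 1,
                # i.e. the middle slice of that axis.
                if (i, j, k).count(1) >= 2:
                    continue
                cubes += menger(depth - 1, x + i * s, y + j * s, z + k * s, s)
    return cubes

print(len(menger(1)))  # 20
print(len(menger(2)))  # 400
```

Each subdivision multiplies the cube count by 20, which is why a few lines of code yield such intricate geometry.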
[music]
[sound]. So, I showed you that video, because, A: it's a fun video and it
expresses why animation and computer graphics are so wonderful, but also
because it highlighted some of the tools that actual modelers are using. One of the
tools took a couple vertices and merged them into one. That's a tool that we use
in Maya™, in modeling software. There's a tool that grabbed one of the polygon faces
and extruded it out. That's another tool they use. At the last step, the
character becomes smooth, and that's using subdivision surfaces based on a polygonal
model. So those are the kinds of tools that people use in the modeling process, in
addition to kind of entertaining you for a while. So that's modeling. The next field
is animation and animation is huge. Animation is where a lot of wonderful,
traditional artists have found their home, in that pipeline. You might have more
technical folks working the lighting end or the rigging end, but the
artists, the real folks who are deep, deep, deep right-brained people, have found
a wonderful home in animation. And that is really where things come to life. That
really is where the bang for the buck would come: if I'm Pixar, I'd put a lot of
money into making sure my animators are amazing, because that's where things
come to life. To animate is to bring something to life; that's the definition
of animation. So that can come from someone interactively keyframing a position, where
they move a 3D object that is rigged nicely, like this, and then taking a
snapshot, like this, and the computer will go zzzzz between those two frames. So the
computer can automatically interpolate (write that one down)
between two key poses set by interactive techniques. You could also
have procedural motion. You could have, maybe, some flat plane move like a wave,
move like an ocean. And oceans: you sure don't wanna have an artist hand-animate
an ocean. That would be done with some kind of wave technique, with maybe
different octaves of motion, big waves and small waves all combined together.
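Both of those ideas, the computer interpolating between two key poses and summing octaves of waves, come down to a little arithmetic. A toy sketch (function names are mine, not from any animation package):

```python
import math

def lerp(pose_a, pose_b, t):
    """Interpolate between two key poses; t runs from 0 (pose A) to 1 (pose B)."""
    return pose_a + (pose_b - pose_a) * t

def ocean_height(x, t, octaves=4):
    """Procedural wave motion: sum several octaves of sine waves.

    Each octave doubles the frequency and halves the amplitude, so big
    slow swells and small fast ripples get combined together.
    """
    height, freq, amp = 0.0, 1.0, 1.0
    for _ in range(octaves):
        height += amp * math.sin(freq * (x + t))
        freq *= 2.0
        amp *= 0.5
    return height

print(lerp(0.0, 10.0, 0.5))  # 5.0, the in-between frame the computer fills in
```

In a real system the poses would be whole skeletons and the wave spectrum far richer, but the principle, interpolation plus layered periodic functions, is the same.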
That's actually how they do oceans. And that would be procedural motion. You also
have motion capture as an idea. And motion capture, um, you may have seen the making
of Lord of the Rings, the making of Avatar; some of those films
use it. And I have a little bit of a clip for that and I'll show that
right now. [sound]. >> The very thing [inaudible]. You knew
this would happen? >> Because of the nature of this film, you
know, with, uh, with this alien clan, this alien culture, you know, we had a choice.
We could do it with make-up like it's always been done. You know, rubber
appliance make-up. It would've looked horrible and it would've been boring and
stupid and, you know kind of blue actors running around in the rain forest in their
underwear, you know, and a bunch of blue body paint. It would look terrible. Uh,
and I wasn't interested in that, you know. If I was going to do
this I wanted to do it this way, which is with performance capture.
>> But before he could do that Cameron first had to make sure his technology could
cross what's known in robotics and animation as the uncanny valley. [sound].
Let's say this is an absolute human. Uh, and this is uh, you know kind of a talking
moose you know? As you approach human our attraction to the character goes down.
And then at the last second just when you get to a true human look it goes back up.
Well we needed to get on the far side of that dip in the response curve which is
called the uncanny valley. And we needed to get to the opposite side where we
believe. We don't have to necessarily believe that it's 100% photo real
and we don't have to necessarily, necessarily believe that they actually
exist but we have to believe in them as emotional creatures and so we
came up with the, the head rig, we call it the head rig. It's basically just a kind
of helmet, very tight conformal kind of skullcap which is based on a life cast of
the actor's head and a laser scan. So it fits very tightly and smoothly and
comfortably and then, you know there's a carbon fiber boom that comes out with a
camera on the front of it. And that camera shoots the face in a dulled out closeup
so even though, though the actor's moving all around, running, jumping,
yelling, screaming, jumping off stuff, jumping over logs, you know running flat
out, whatever. We're getting that facial performance absolutely locked off. >> And
from that, they were able to record every facial movement, from the actors' lips to
their eyes. That proved to be exactly the kind of Holy Grail approach to
how to do uh, CG faces. Not the stuff that they've done before which was what
we call marker based. We're now uh, image based. We got the best animators in the
world to take all this data that was coming out of our performance captures.
And then we limited their options to things that were value added, like the
ears and the tails. So they took a human performance with no diminishment
whatsoever, and then added to it. So when people ask me, you know sort of, what
percentage of the actor's performance came through in the final character, I say
110%. Because you actually had an
increase in the sense of whatever the emotionality of the moment was.
>> For Discovery News, I am Jorge Ribas. >> So that was a great, uh, explanation of
how they're adding the artistic side which was just to add on the tails and the
ears which didn't appear in the actual facial uh, data that came through to
climb that uncanny valley. If you've watched other videos, other early attempts at
motion capture: um, Lord of the Rings was quite good, but some of them were quite
bad. Polar Express might be one where the face kind of floats and the eyes kind of
float. People have basically said that one falls right at the bottom of the uncanny
valley; me and many other folks have said that. But Avatar climbed that edge, and it was
a great description by James Cameron of the uncanny valley and actually getting
above it. The connection with the audience is much, much
stronger with that. You could also generate animation through physics:
if I want to have a waterfall, I might have a water
simulation and say go, and water starts flowing, and
physics just determines how things move. There are also evolution and rule-based systems,
and I'll show you some of that in a second. Um, the critical piece of
animation is that emotions really are conveyed
from the heart: in some sense the heart of the author says, I want the
audience to feel this emotion, and it travels through the images into the heart of the
audience, and making that connection is the difficult part. Procedural-based
motion, which I think is some of the coolest work in procedural motion you'll
ever see. So here is, um, Brian Mirtich, who was a PhD student along with me at Berkeley
at the same time; we were friends. And he had a thesis system called Impulse, in which he
tried to build a very physically realistic simulator. But to show
his system off what he did was he showed this wonderful little cart that kind of
used a rule based system. When I say rules, it followed this very simple logic
you see on the right part of the slide. The
goal of the cart is to push these little Weebles off the edge of
this little square floating in space. And it says: if I see a Weeble in front of me,
go forward at it. If it's behind me, then
turn until I see one in front of me, and keep doing that, okay? If I happen to be
rolled up on top of one, then back off. Very simple logic, okay, used for that. And it's rule
driven motion. Look at the amazing animation he gets without scripting what
these actors are going to do. He just put those rules together. All he did was build
that five-node state machine (it looks like a baseball diamond) with
these arrows saying what to do. Now watch the video that results from that,
okay, and I'll narrate on top. Okay. So what you're seeing is, the cart finds one
of the Weebles, and rolls toward it and it's now to the side and the Weeble tips
over and then, each Weeble has its own logic up, and he pushes it off and the
Weeble now falls into, I call it the vat of food. I think of it as not being
killed, but actually being sent to where the food is; I'm
just trying to spin it positively. So it turns. Now it's on top of it. It's kinda
trapping it. And his system was quite impressive in handling all the physics of
those interactions. And now it backs up, and now it zooms forward. And
he's trying to run away, it's so sad. And he pushes it, and it falls off on its own.
It spins its wheels. All of this was automatic. He never once said, turn right,
turn left, go forward, go back. Those five nodes created all the animation. It's
pretty incredible stuff. And he's kind of trapping it, and his system,
for the time, for '96, was quite impressive in handling all of the
interactions really right: the friction of the
wheels, the virtual mass of the wheel, the
virtual mass of the cart, and all the other guys. And the other guys are all now
going, ahhh, run away, run away, and the wheels are kind of wobbly, and it backs up
and tips it. Watch how it backs up and tips; all of that is really wonderful
motion and a testament to his system in terms of the realism of the
motion. But the real testament is that those five simple node states
easily create incredible motion. So, that's procedurally driven, or rule-driven, motion. Quite
impressive. Lights, if you would. Alright. So, what I haven't shown you is genetic
algorithms. And genetic algorithms are another incredibly powerful, incredibly
exciting field of animation. Automatic animation. Here's the idea.
Ready? I'm going to breed these robots in the system. And they're going to breed based
on whether they win kind of Olympic challenges, like swimming fast or getting to a particular
goal, and as the winners win they get to breed and have mutations of
themselves, and all their children now compete in the Olympics, and get this:
they keep competing in the Olympics and those winners get to make other children.
It's kind of like survival of the fittest, if you will, but virtually,
okay? This is called a genetic algorithm. This is Karl Sims' work from 1994, edging
up on a 20-year anniversary, and even today it's incredible, incredible work.
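The breeding loop just described fits in a few lines. Here is a minimal sketch in Python; the population size, mutation rate, and the toy fitness function are illustrative guesses of mine, not Sims' actual parameters:

```python
import random

def evolve(fitness, genome_len=8, pop_size=20, generations=60, seed=1):
    """Minimal genetic algorithm in the spirit of Sims' creatures.

    A genome is a list of floats ("body parameters"). Each generation,
    the winners of the "Olympics" (highest fitness) survive and breed
    mutated copies of themselves: survival of the fittest, virtually.
    """
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 4]               # the Olympic winners
        pop = [[g + rng.gauss(0, 0.1) for g in rng.choice(survivors)]
               for _ in range(pop_size)]              # mutated children
        pop[:len(survivors)] = survivors              # parents stay in the game
    return max(pop, key=fitness)

def fitness(genome):
    """Toy 'Olympic event': the fittest creature has every parameter near 0.5."""
    return -sum((x - 0.5) ** 2 for x in genome)

best = evolve(fitness)
print(len(best))  # 8
```

Notice that nowhere does the code say how to win; only the fitness (the event) is specified, and selection plus mutation does the rest.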
Let me show you some of Karl Sims' work. It's called Evolved Virtual Creatures;
let's see if this can work here. Come on.
>> This demonstration shows virtual creatures that were evolved to perform
specific tasks in simulated physical environments. Swimming speed was used to
determine survival. Most of the creatures are results from independent evolutions.
Some developed strategies similar to those in real life. Once they're evolved,
multiple copies of these creatures can be made and simulated together in the same
environment. The next group of creatures were evolved for their ability to move on
a simulated land environment with gravity and friction. Some simple solutions with
just two parts were found. Some seem like they could use some assistance, while
others were fairly efficient, such as this rowing-like behavior. Here is an odd
cousin of the previous. A mutation caused him to tumble. Some creatures evolved to
incorporate contact sensors in their control systems. Here is another inchworm
like creature that tends to go in circles. This was actually a creature first evolved
for its ability to swim in water, then later put on land, and evolved further. A
successful sidewinding ability resulted. Here is one with a hopping style. The
protrusions on its arms seem to help prevent it from tipping over. This was the
fastest, with a successful galloping-like stride. This group was evolved for their
jumping ability. This group was evolved for their ability to adaptively follow a
red light source. The resulting creatures are now being interacted with. A user is
moving the light source around as the creature behaves. This one seems to flail
randomly. But somehow, still manages to approach the light. Perhaps it is mean to
move the goal away just as it arrives. Here is one that has propeller-like
fins, which are tilted, depending on the direction of the light. It could
adaptively swim up or down very well. This final group of creatures was evolved with
their ability to compete for control of a green cube. The creature closest to the
cube at the end of the simulation is the winner. Here a strategy first arose for
simply tumbling towards the cube. Then one learned to block out his opponent. But then
later, one learned to overcome the obstacle by climbing over it. Some pinned down
their opponents. Some covered the cube with protective arms, others simply
unfolded onto the cube. The success of this strategy is often highly dependent on
the opponent. Here is a hockey-playing creature, which takes the cube away and
wins by a large margin. Here are two similar hockey strategies, battling it out with
appropriate gestures. This crab-like creature walks well, but often continues
past the cube and instead seems to prefer beating up on his opponent. Against the
arm, the crab seems to simply walk away. A successful strategy is this two arm
technique, that swipes quickly in from the side and moves the cube over to a second
arm. These are the final rounds of competition amongst the overall best.
Finally the seeker arm goes against the side-swiper, but the cube is just out of
reach. >> Awesome stuff right? 20 years old is that work, incredible. Alright, so
that was genetic algorithms, along the line of automatic animation. Nowhere did he say
move this way; he just set up the system so that it would genetically evolve a
control system to animate and do the right thing to achieve those goals,
those Olympic tasks. So that's the first two: that's
modeling and animation. Now we are up to lighting and shading, the third stage. And
this is just like a movie, so if anybody's worked in film before, even
films at home: what we do virtually is exactly what
you do in real film. You set up a camera; we set up a virtual camera. You set up
lights; we set up virtual lights. It's exactly the same thing. The lights can be
area light sources, like the light sources in many classrooms, or they
can be point light sources like a very bright LED, or they can be a
directional light source like the sun. You can have all these things in the virtual
side, just as you have that creativity on the real side. Teams of artists
also apply things called shaders. And shaders are what make
things have anything more than a grey plastic look
to them. By default, the computer will give everything a grey plastic look.
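A shader in this sense is just a function from a surface point to a color. Here is a toy sketch (the names and interface are mine, not Maya's or RenderMan's API) that replaces the default grey plastic with a striped, Lambert-lit look:

```python
def stripe_shader(u, v, normal, light_dir):
    """Toy procedural shader: colored stripes lit by a Lambertian term.

    u, v are surface coordinates in [0, 1]; normal and light_dir are
    unit vectors given as (x, y, z) tuples. Returns an (r, g, b) color.
    """
    # Procedural pattern: alternate red and white bands across u,
    # instead of the default grey plastic.
    base = (1.0, 0.2, 0.2) if int(u * 10) % 2 == 0 else (1.0, 1.0, 1.0)
    # Lambert's law: surfaces facing the light are brighter.
    n_dot_l = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    return tuple(c * n_dot_l for c in base)

print(stripe_shader(0.05, 0.5, (0, 0, 1), (0, 0, 1)))  # (1.0, 0.2, 0.2)
```

A production shader is the same idea scaled up: a program run at every visible surface point, with far more elaborate patterns and lighting terms.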
But every time you see an amazing photograph, as I show in these slides, an
amazing image, a computer graphics image, a CG image, you're seeing leaf texture,
and you're seeing banana skin and human skin, and hair, and shirt texture. And
different kinds of shirts. Shiny versus rough. All of those things are things that
people can program. You might say, well, that's easy, Dan:
you take a picture of a leaf, and that's what it is. That's what they did
20 years ago. Now they're actually coming up with procedural shaders
that involve a lot of programming to get the look of an exact leaf. The
many different layers of how light bounces around are quite complex,
and programmers are now writing shaders for that, which is quite incredible. At the end,
rather than saying action, you say render. Render just means make the picture rather
than action, which means you're capturing on film. How does rendering work? This is
the hardest part, probably, of the lecture. Everything so far wasn't really hard for
you; this is the only math we have in the slides. And here, what I'm showing you is
similar triangles. The eye is at the bottom of the image. That
horizontal line is the image plane. That blue square is a cube out in the world. And
what do you draw on the screen? What you draw on that image plane basically
comes from a very simple set of equations, which are similar triangles:
the ratio of the x distance from the eye to where it hits on the image plane
(the little yellow x) over the x distance from the eye to the actual
cube is equal to the z distance (the depth) from the eye to the image plane, which is known, over
the z distance from the eye to the actual cube. You get that? So you know all the
values and you're trying to solve for where that x distance is. Does that make sense?
'Cause you know where the cube is in three dimensions, that's easy. And you know
where the image plane is, so you know that depth, but you don't know where
the x is: the x value, called B of x in this equation. And
you just solve for B of x, which is B of z times A of x over A of z.
Literally, that's all the mathematics that's really going on at the
deepest level of a graphics card to generate pictures on this plane. Is
that cool? And in fact, I have a demo, if I can squeeze in here, to show it to you.
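That similar-triangles equation can also be written down directly as code. A sketch (the eye sits at the origin; the function and variable names are mine):

```python
def project(ax, ay, az, plane_z=1.0):
    """Perspective projection by similar triangles.

    A = (ax, ay, az) is a point out in the world (say, a cube corner),
    the eye is at the origin, and the image plane sits at depth plane_z.
    Similar triangles give B_x / A_x = B_z / A_z, so
    B_x = B_z * A_x / A_z, and the same for y.
    """
    return plane_z * ax / az, plane_z * ay / az

# A corner twice as far away lands half as far from the center of the image:
print(project(1.0, 1.0, 2.0))  # (0.5, 0.5)
print(project(1.0, 1.0, 4.0))  # (0.25, 0.25)
```

The parallel projection in the demo is even simpler: with the eye infinitely far away, the divide by depth disappears and B of x is just A of x.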
Look at this guy. It's called PPCabinet.rot. It's a piece of software
called Rotator which is going to show you: there's a cube. Look
at this. Three dimensions. See that cube in three dimensions? I'm going to spin it.
There's the cube in three dimensions and it projects through those lines. Those
blue lines are my projection lines and you end up getting that green picture on the
image plane. See how that works? Don't worry about the math. The math is just
those similar triangles. But essentially, I take all the corner points, and I draw
lines as if the lines are pointing towards the sun. It's like the reverse picture. Or
towards the viewer actually. Okay? So I'm now the viewer. All the lines, when those blue
lines light up like this, guess what? That's the picture you get, see, cause
that's the view you get from the eye infinitely far away. This is called
parallel projection and it makes the math a little easier than perspective
projection. But thinking of me being infinitely far away, watch that, that's
exactly the picture. That's what you get, see? You just draw lines from
all the corner points of the cube through that purple plane, giving you
that green projection. Isn't that cool? That's all it is. That's the math. If you
all said, how does that stuff work? That's the picture. That's how it works. It's
pretty cool. All right. Back to lights and back to the video. Alright, so that is
projection, and that is the rendering process and how the math of projection
actually works. There's lots of algorithms to do this and what we use as a metric is
what we call the cost. If I said ooh, you can't use that algorithm, because it's too
expensive. Does that mean the licensing cost? No, it means how much time it will
take to generate the picture. It might take a full day, and that says I can't do
it, because I have a deadline to produce the movie by some time. So I can't
use the algorithm that's so remarkably photorealistic but is going to take a full
day per frame; if I work out how many frames I need, I can't make my deadline.
So when I say expensive in this context, it means, takes too long. Okay? So there
is another important thing to understand, which is global illumination. And global
illumination is part of the rendering model, the algorithm for
rendering. You don't even know what an algorithm is yet; you'll see that pretty
soon. But in the process of making the picture, people now realize that global
illumination, which means trying to simulate where all the photons of a scene
are actually bouncing and landing, is the right way to do it. If you don't use
global illumination, you'll see the results: a non-global illumination
renderer is called direct
illumination, and it's not as good. Let's look at a picture. One of the ways you measure
not as good is something called the Cornell Box. And the Cornell Box is, let's
take an actual photograph and let's take a rendering, meaning we have a virtual cube
and a virtual big rectangular solid and a virtual green wall and a virtual red wall
and let's take an actual green wall and an actual red wall and let's see if our image,
our rendering, our computer graphic simulation of that is close to the real
thing. That's a really important idea: how do you know whether you're good enough?
The people at Cornell who built the Cornell Box said, we won't know until we
have the real thing and our simulation, and we compare those and see if
they match up exactly, pixel for pixel. And they did, and it was incredible. So
that said a lot about the quality of their rendering algorithm being photorealistic
and that was very important, to be able to close the loop in terms of getting their
feedback of whether something actually works or not. This is a direct illumination
image. And look at, take a look at that, and see what it looks like. And you say,
oh, that's pretty good, right? That's a mirrored ball and a glass ball, but you're
thinking, wait. Light doesn't work like that. That glass ball has no light going
through it, because the rendering model can't support that. Watch this. Bam. Look
at that quality. You get a caustic where the light goes through the, the glass ball
and forms a highlight like a magnifying glass. You get real lighting effects. And
so by having a global illumination model, you get much closer to photorealism than
with a direct illumination model. But you don't get something for free: it costs
more, and what it costs is time, the time to simulate all the photons
in the scene. So, if you're interested in this, the way to learn more is something
called UCBUGG. UCBUGG is an undergraduate group in computer graphics, and they will take
you from zero, from Jump Street, from never having seen computer graphics, never
knowing anything more than CS10, to making an animated short film. And this
is a class taught by students as a local DeCal. I want to encourage you all to
think about that. CS24 is also a course here at Berkeley about graphics. In
summary, I can't think of anything that summarizes the beauty and joy of computing
more than 3D computer graphics, and being able to
lift the hood and explain it to you has really been a privilege for me,
because I love this stuff. I've been doing it for the last 20 years. I love it.
It has transformed film, print, video, YouTube. It's been transformative in terms
of being able to have simulations and, you know, robots taking over the world, all
of those. The imagination of artists, film artists, has really exploded thanks to the
advent of really highly successful 3D computer graphics. There are four stages:
modeling (which also includes rigging), animation, lighting and shading, and
rendering, and it allows people to exercise the right side of their brains. 3:59: I
just ended on time. Thanks, folks, that's the end of class two. We'll start again on
Wednesday talking about video games! Alright.