As a kid I was really interested in music. I played a lot of instruments, starting with French horn, actually.
It's quite unusual. I played that for 12 years. I went on to start playing guitar and bass, piano, drums,
a bit of this and that. I was really into producing shows and standing on stage myself, to be honest.
I had a bit of a career in that when I was 12 and I was really enjoying the world of sort of performance.
And then later I got, I wouldn't say serious, but I would say I got more interested in technology and art
and architecture and so on. I studied in the natural science program in high school and sort of took that scientific route, in a way. I still had the music and the art and architecture in the background, but
I became an engineer in media technology and engineering. I studied things like how does Photoshop work
in the background and how does a video game work in the background. How do you create photorealistic environments
and how do you model things up in 3-D. I was really into that and made lots of animations and things
and I finished that and basically didn't really know what I would do with it, to be honest. I still had this music
in the background and all the architectural side of things. And I sort of found this program called
Architecture Lighting Design and I started to study that for a year and got interested in lighting, and I realized
that lighting and music and sculptural form and color, that actually there is a world for that and I went
to Berlin. I did a couple of internships. I lived in Austria and kept on doing this and then eventually
I started as an intern here at UVA, and right now all these things which I was interested in from my childhood
up to today are actually what I work with every day. My role here at UVA is somewhere between being a creator
and being an engineer/technician in a way. I work a lot with AutoCAD technical drawings, with 3-D worlds. I work a lot
with our own software called D3, and I set up projects for either our own shows, pieces or
for other companies' products. So I would say I'm a D3 expert in a way. It sort of covers the technical sides
of things but also having an aesthetic mind, I think, and I think that's my general role. I'm not a pure
content creator at all. I'm not a pure engineer. But I'm somewhere in between. I'm sort of flowing a bit
here and there. UVA was created in 2002 with the first Massive Attack tour - that was a bit before my time, obviously - but coding and computer systems and custom-made programs have always been involved in the projects. For the Massive Attack tour, they wrote some code to drive that show, which took real-time data into the system and immediately displayed it on the screen. These computer-based installations have always been around. D3 was really born when we were approached in 2005 by Willie Williams, the show director for U2's Vertigo tour, because we were commissioned as content creators to make content for
these quite unusual sculptural settings. We had circular, sort of
elliptical screens in the floor and we had curtains in a three-dimensional sort of layer system. We were
quite concerned about how our content would look in this environment, so that's really when D3 was born.
It's a three-dimensional simulator for applying content onto LED walls, or productions or even lasers
and DMX lights. So it's a three-dimensional simulator. You can walk around and rotate around in this world like a video game, really. You can import your audio track and sequence your visuals exactly to the beat.
And once you've done that, you can also use it as a playback device. So it's a production tool, which covers
the entire process in a show. I wouldn't call it a media server as some people do. It's just something else.
A media server as I see it is more a two-dimensional playback device which has certain sequence capabilities.
This is a communication tool, which can bring the entire production together. On the engineering side, you can show your technical management that something actually doesn't fit: your 2-D drawings look fine, but once they go into 3-D, a mistake shows up. So it sort of uncovers mistakes. And you can show the client how your content will look to the beat of the music. And it's just something you can
work in before anything has been built. Then, once it's built, you just basically plug it in and there you go.
You don't swap systems throughout the process. You actually use the same system consistently, which is quite
unique, I think. Just to give you a brief overview of D3, what it does... As I said before, it's a 3-dimensional
simulator. It's actually not a visualizer, it's a simulator, because all the pixels you see here in this
virtual world actually represent true pixels in reality. So you can now walk into this world. You can rotate
around, zoom in and out as in any other sort of video game. You can see how the content would look from
different angles, which obviously concerns people like TV producers. We can create some camera fly-throughs
and really actually give the TV producer ideas of how they should pan their cameras to the music.
So they get a quick idea of how the track will proceed. So what we have here is the timeline, where we can import
an audio track. And this is the actual audio track. It starts there and ends there quite logically. You can now
separate this timeline into logical sections such as intro, verse one, verse two, chorus,
bridge and so on. You can also write notes if there is a certain sort of thing going on in the lyrics or in the music.
What you see here - these lines here - they represent the actual beat of the track.
If I hit "Play" now, you can see how it goes to the beat like that, so that makes it really simple for us
as content creators to sequence visuals directly to the beat, and I can in fact demonstrate that really quickly.
Let me do something really quickly here.
Right now it's just completely black, so I'm going to apply color, just a white color, which flashes to the beat.
Super simple example. So, when I right-click here in the timeline, I'm going to say new layer, and we have a bunch of different sort of plug-ins or effects. Some are based on video, some are based on JPEGs, bitmaps, whatever.
You can do weird things with them. And some other modules are pure mathematical generative-based visuals.
Now I'm just going to pick color here, poof, and I'm currently applying this color to all these screens.
But I can also, actually I just want to apply it to the LED 01, on the floor and so on. I can combine screens
in very different ways. I can say this is now one screen, the floor. I can also say that the floor and the middle
LED wall here is another screen, or these two are another screen, so I can combine screens
into different canvases. So for now I'm just going to apply this to the floor and the ceiling, this color.
Just to see how easy it is to animate the brightness of this color to the beat,
just to repeat here, these lines represent the beat. These lines up here in the module are the same line.
They're all the same. So that means I can just simply animate the brightness
and snap that to the beat.
And I can just repeat that for the end of this color layer. So let's have a look at that.
That's how easy it is. And I can obviously do that twice as fast, to the half-beat.
It's almost like playing drums, in a way, when you're doing visuals. You can do really rhythmical stuff
really easily without having to go back and forth and back and forth as you have to do in After Effects.
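The beat-snapping idea above can be sketched in a few lines. This is a hypothetical illustration of the concept, not D3's actual (proprietary) code, and it assumes a fixed-tempo track; all function names and values are invented:

```python
# Sketch of beat-grid snapping for a fixed-tempo track (illustration only).

def beat_times(bpm: float, duration_s: float) -> list[float]:
    """Times (in seconds) of every beat line on the timeline."""
    interval = 60.0 / bpm
    n = int(duration_s / interval) + 1
    return [i * interval for i in range(n)]

def snap_to_beat(t: float, bpm: float) -> float:
    """Snap an arbitrary keyframe time to the nearest beat line."""
    interval = 60.0 / bpm
    return round(t / interval) * interval

def brightness(t: float, bpm: float) -> float:
    """Flash-to-the-beat envelope: full white exactly on the beat,
    then a quick linear fade over the first half of each beat."""
    interval = 60.0 / bpm
    phase = (t % interval) / interval     # 0.0 exactly on the beat
    return max(0.0, 1.0 - 2.0 * phase)
```

Snapping keyframes to a precomputed grid like this is what makes the "playing drums" feel possible: you never type in timecodes, you just drop keys and they land on the beat.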
Part of the sequencing as a content creator is purely done in D3. In terms of how we apply content
into this stage, there are different ways of doing it. You can either take a video file which you've made and just put it directly onto a screen - squash it in there, if you like - or you can make sure that
the content fits the aspect ratio of your LED wall or your production. There are other ways of doing it.
Just to give an example here,
let's say we have this little video file which we've created in After Effects.
How do you now apply it onto a really unusually shaped screen like this? Because this floor is obviously
not a rectangle. It's quite unusual. It's made of these MyStrip products, lots of them, hundreds or perhaps even
thousands of them. Well, we just do it like this. Poof.
We have a square image, which we put the video file on,
and everything which happens to be behind this ghost image
is simply going to get projected into the scene,
so that means that if I now change the rotation on this image, it's going to update the mapping, the content
on the actual screens. See, both on the floor and
on the main LED wall. So this is one way of applying content in D3.
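The "ghost image" projection can be sketched roughly like this. It is not D3's implementation, just a toy orthographic version of the idea; the image's centre, size, and rotation are made-up parameters:

```python
import math

# Toy sketch: a flat image floats in the scene, and every screen pixel
# behind it samples whichever texel the image projects onto it
# (an orthographic projection - illustration only, not D3 code).

def pixel_to_uv(px, py, cx, cy, w, h, rot_deg):
    """Inverse-transform a pixel's world XY into the image's UV space,
    or return None if the pixel falls outside the projected quad."""
    dx, dy = px - cx, py - cy
    r = math.radians(-rot_deg)            # undo the image's rotation
    lx = dx * math.cos(r) - dy * math.sin(r)
    ly = dx * math.sin(r) + dy * math.cos(r)
    u, v = lx / w + 0.5, ly / h + 0.5     # local coords -> [0, 1] UV
    if 0.0 <= u <= 1.0 and 0.0 <= v <= 1.0:
        return (u, v)
    return None
```

Rotating the image changes every pixel's UV at once, which is why spinning the quad re-maps the floor and the main LED wall together.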
It's really simple for us to put something on a very organic looking sculptural LED screen. It can also be
used as the actual playback device. So once you've done the entire show, you go on site and you use D3 as a
playback machine. So what we are sending out to the processors is this for the floor.
This is the content. Hold on, I'm just going to fix this background. This is our second head, 1920x1080,
and that's what we're now sending out to the floor. So your content, which looked like that from the beginning,
has now been projected onto the stage. And once it's done that, it's going to look like that. The reason it looks
like that is because of texture mapping. When I created this screen, I didn't do it in D3,
I did it in 3-D Studio Max, and I texture mapped it in an appropriate way. So this floor here is actually
one object, which I can move up and down.
I can set another resolution on the screen, both X and Y. I can change the scale of the screen, like that and so on.
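The texture-mapping idea - many physical strips sampling one shared canvas via baked-in UV coordinates - can be sketched like this. An illustration only, not the 3-D Studio Max / D3 pipeline; all names and layouts are invented:

```python
# Sketch: every LED carries a (u, v) coordinate baked into the screen
# model, so hundreds of physical strips behave as one logical screen
# sampling one shared content frame (illustration only).

def strip_uvs(num_strips, leds_per_strip):
    """Lay the strips out as rows of one big virtual canvas."""
    return [((i + 0.5) / leds_per_strip, (s + 0.5) / num_strips)
            for s in range(num_strips) for i in range(leds_per_strip)]

def sample(frame, u, v):
    """Nearest-neighbour sample; frame is a list of rows of pixel values."""
    h, w = len(frame), len(frame[0])
    return frame[min(int(v * h), h - 1)][min(int(u * w), w - 1)]

def render(frame, uvs):
    """One content frame in, one brightness value per physical LED out."""
    return [sample(frame, u, v) for u, v in uvs]
```

Because the mapping lives in the model rather than in the content, the same frame drives any arrangement of strips without re-rendering anything.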
So all these strips here are not individual screens, they're actually one single screen. What D3 does is really take care of this entire production pipeline. You start with D3 in the early sketches. You invite your
clients with some initial treatment, initial visuals to the beat of the track, and you can immediately get the client's response. You can make changes on the fly next to them, if you like. This show is way, way more complex
than that because what we have here is a... I'm just going to zoom into this crazy crazy screen... what we have
here is actually a cylindrical screen, an elliptical, egg-shaped screen, which is a problem in itself,
because how do you deal with the end and start of a video file? The second thing, which is a way bigger problem,
is that... you can see how slowly it opens. And so on... It opens all the way down there.
So I've got some camera fly-throughs, which take over my... Okay. So, that's the problem.
As a content creator, again, or as a video engineer, how do you deal with a screen that expands
but at the same time make sure that your content keeps the format, the aspect ratio? So, obviously,
we don't want to have Bono's face stretched out just because the screen is stretched out.
That's a big challenge. That was a big challenge for us. So we had to write... We used D3 as it was, but we added
some features to it. I can go back to another sort of demo track of the same product here.
So what we have here is - 4 live cameras, which are being output onto the screens. So, these pictures of Bono
now represent these live camera inputs. They are all color-coded. So this is a PAL input,
this is another one which is blue, green and yellow. They're all being composed - sort of fed into what we call a compose module. So these are the sources: source 1, 2, 3 and 4. I can now decide to
actually take one source away. Now I've got three sources, but the aspect ratio is still maintained.
So how do we do that? Let's see. I take away another one, and the aspect ratio is maintained. Bring them back
and they're there. Now, what we do is we do something I showed you in the previous product, but with
a cylindrical version of that. We're actually applying content onto the cylinder just like that.
We're not applying it directly onto the screen, we're applying it to the cylinder. And everything, which happens
to be behind and within the cylinder is going to get illuminated by the current pixels. So that means
when I now open the screen
it is just shooting it to whatever it hits
and thereby the aspect ratio can be maintained.
And that's the thing we added into this release of D3, which is about one and a half years old now.
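Why projecting onto the cylinder preserves the aspect ratio can be sketched with some back-of-the-envelope numbers. All values here are invented for illustration, not the real screen dimensions:

```python
import math

# Sketch: content is laid onto a virtual cylinder at a fixed texel
# density, so an expanding screen intersects MORE rows instead of
# stretching the existing ones (invented numbers, illustration only).

def cylinder_raster(radius_m, height_m, texels_per_m):
    """Texel columns wrap the circumference; rows follow the height."""
    cols = round(2 * math.pi * radius_m * texels_per_m)
    rows = round(height_m * texels_per_m)
    return cols, rows

closed = cylinder_raster(radius_m=5.0, height_m=2.0, texels_per_m=10)
opened = cylinder_raster(radius_m=5.0, height_m=8.0, texels_per_m=10)
# Column count is identical in both states, so horizontal texel size
# never changes; only the number of visible rows grows as it opens.
```

That fixed texel density is the whole trick: the screen geometry changes, the content's scale does not, so a face on the screen is never stretched.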
And there are other things we can do with this compose module. I'll go back to the closed... I can now change
the rotation speed for it. I can go the other way around. All these things here on the timeline are animatable
to the beat if you like. All the parameters like source 1 are animatable, so I can start with having them all on,
then after two bars - this represents one bar and this is another bar. So let's say, after two bars,
I'm going to have only two sources. So now let's go back and hit "Play".
In fact, let me animate the rotation speed as well. Why not?
Up there it's going to go to a little peak.
It's going to start in that direction, then here it's going to go in the other direction
and then it's going to go back again. At the same time, these two guys are going to fall off.
So let's play. Okay.
So, in all these modules we have in D3, we have the parameters stretching out along the entire timeline.
It's really important to point out that they are all animatable. That's key in D3,
everything is rendered in real time. As a content creator, there's no need to do all the details in After Effects - do that in D3. You can keep it quite rough in After Effects, and all the synchronization, the timing, the color, saturation and all that stuff you just do in D3.
I'm just going to go back in here. And we can also change stuff like the border. I can make it a really sharp border.
I don't know if you can see this really, but now the border is really sharp.
I can now slowly blur that out to a soft border.
That means there's no front and back on the screen. That's, sort of, how the product works.
And just to show you how the... what we're sending out to the LED processors for this show.
That's all we're sending for this entire show. It can fit within a 1024x768 output - the entire show.
And people get quite "wow, really?" But, yeah! And this is how the content looks when the screen is closed.
But let's just slowly open the screen up. We can now see how the content gets squashed together.
And that's just in order to compensate for that geometrical expansion. It may look stupid here, but
in reality it's going to look as it should. We've been doing Massive Attack tours ever since 2002, and there have been a lot of text-based tracks in there, and they're still on tour with a lot of text-based stuff.
What's happening is that they go from city to city and we just update the text files. Because some tracks are
based on pure gossip or headlines from the very same day. Let's say you go to Paris or to Moscow. Why not Moscow?
We would go to Moscow, and then we'd get the equivalent of some gossip crap magazine and just copy and paste into text files, which D3 reads and directly displays on the screen. And this makes a big impact on the audience, like "wow, how did they do that?", so there are no video files involved in a Massive Attack
product. I'll just show you. So all this text here is actually coming from a text file, which I can just,
as I said, insert from whatever. I can actually do it now, on the fly. I'm not going to do that. And it's being
directly output to the screen for this show. What happens is that every character in the alphabet is being linked
to a little font, which happens to be a little bitmap file, really.
So every time I write an "S" in this text file, it's just going to immediately link to a bitmap "S"
and then be displayed on the screen. So that's basically the concept of this.
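The character-to-bitmap link can be sketched like this. It is a hypothetical illustration with invented folder and file names, not the actual show setup; the point is that a show update is just a text-file edit, with no video rendering involved:

```python
# Sketch of a character-to-glyph-bitmap lookup (invented names, not the
# real Massive Attack show files).

GLYPH_DIR = "glyphs"   # assumed folder of per-character bitmap files

def glyph_path(ch: str) -> str:
    """Resolve one character to its glyph bitmap, e.g. 'S' -> glyphs/S.png."""
    name = "space" if ch == " " else ch.upper()
    return f"{GLYPH_DIR}/{name}.png"

def layout_line(text: str) -> list[str]:
    """A headline becomes an ordered list of bitmaps to blit on screen."""
    return [glyph_path(ch) for ch in text]
```

With a lookup like this, swapping a headline for a new city is a copy-and-paste into the text file; the screen output follows immediately.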
So, you know, this gives a lot of flexibility for us.
Instead of just rendering, going to After Effects, rendering again and realizing actually
that the next show is going to be a lot of rendering again and stuff - it simply wouldn't work like that.
We can also have sort of interactive live inputs from the Internet - RSS feeds, data that comes in directly.
For the first 2002 show there was a lot of that kind of stuff going on.
Actual real video files could also be automatically mapped into zeros and ones, which was quite cool back in 2002. Yeah, so they're still on tour with this show and doing really well.