I'd like to talk about the main approaches to creating interactive installations.
On the screen behind me you can see the logos of the most popular software packages used for this purpose.
By the way, in the exhibition hall right next to us the Decode exhibition is taking place,
and most of the works in the show were created with such software in one way or another.
Maybe some of you have questions about what software
was used to create a particular work there.
Recently the tools for creating interactive installations and generative graphics have become more
and more accessible to a wide audience. And this is great, because more and more people
are getting involved in this kind of artistic practice.
For many amateurs it becomes a hobby, and this boosts the movement further.
Originally, generative graphics evolved in the so-called demoscene subculture,
which came into existence with the development of computers in the 70s and 80s,
especially with the appearance of personal computers.
Demosceners were creating graphics and visuals not for a practical purpose,
such as computer games, but as an art per se, producing all kinds of special effects.
Here on the screen is a photo of a demoscene gathering in the 1990s in the US.
And here is its Russian counterpart, taken at the Chaos Constructions festival in Saint Petersburg.
The demos they make - something like this - are similar to games, but with very strict scenarios.
They are generated by a computer program, so the element of art here applies not only
to the visuals produced as the result, but also to the source code used in the process.
I'd like to mention that what we see now is a video recording, of course,
but the source file itself is just a few kilobytes, a few dozen kilobytes at most,
and all the visuals are rendered in real time.
That's the whole point: it's all created by code and can be perceived
like a computer game, rendering frame by frame.
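To make the "rendered by code, frame by frame" idea concrete, here is a minimal sketch in Python (the demoscene itself mostly used assembly and C): a few lines of code, with no stored assets, produce a different frame for every value of time, which is why a demo's source file can stay tiny.

```python
import math

# A toy "demo effect": every frame is computed purely from code and the
# current time t, so nothing needs to be stored on disk.
# (Illustrative sketch, not taken from any real demo.)
WIDTH, HEIGHT = 32, 16

def plasma_frame(t):
    """Return one frame as rows of brightness values in [0, 255]."""
    frame = []
    for y in range(HEIGHT):
        row = []
        for x in range(WIDTH):
            # Classic "plasma" formula: a sum of sines over space and time.
            v = (math.sin(x * 0.3 + t)
                 + math.sin(y * 0.5 - t * 0.7)
                 + math.sin((x + y) * 0.25 + t * 1.3))
            row.append(int((v + 3) / 6 * 255))  # map [-3, 3] -> [0, 255]
        frame.append(row)
    return frame

# Render a few frames, as a real-time loop would do ~60 times per second.
frames = [plasma_frame(n / 60.0) for n in range(3)]
```

A real demo runs this kind of loop continuously and draws each frame to the screen as soon as it is computed.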
Yes, back in those "old times" when the demoscene movement peaked,
there was an ongoing competition between coders and artists to make the file size as small as possible, and so on.
Actually, there was no real necessity to do it with code.
It was all about fun. Thus, the demoscene is a very specialized computer subculture,
something appealing to geeks and code maniacs.
It didn't really involve artists, and these applications were not interactive.
And yet that's really the point of generating everything with pure code:
you can relate certain parameters of the generated graphics to certain events
and make it happen in real time, in the present moment.
The tipping point was the appearance of the OpenCV library in 1999 - CV stands for Computer Vision.
For the first time all the instruments for image analysis were brought together.
Before that, every programmer had to reinvent the wheel, as there was no common platform.
The library keeps evolving, and nowadays most of the interactive installations
you could see at the Decode exhibition employ it in one way or another.
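To give an idea of the kind of image analysis OpenCV bundles, here is the core of frame differencing - the simplest motion detector behind many installations - sketched in plain Python. This only illustrates the idea; OpenCV's actual API and data types are different.

```python
# Frame differencing: motion shows up wherever two consecutive frames
# disagree by more than a threshold. Frames are plain lists of brightness
# values here, standing in for real camera images.

def motion_mask(prev, curr, threshold=30):
    """Mark pixels whose brightness changed by more than `threshold`."""
    return [[1 if abs(c - p) > threshold else 0
             for p, c in zip(prev_row, curr_row)]
            for prev_row, curr_row in zip(prev, curr)]

# Two tiny 4x4 grayscale "frames": a bright blob moves one pixel right.
frame_a = [[0, 200, 0, 0],
           [0, 200, 0, 0],
           [0,   0, 0, 0],
           [0,   0, 0, 0]]
frame_b = [[0, 0, 200, 0],
           [0, 0, 200, 0],
           [0, 0,   0, 0],
           [0, 0,   0, 0]]

mask = motion_mask(frame_a, frame_b)
moved_pixels = sum(sum(row) for row in mask)  # pixels the blob left or entered
```

An installation would run this on live camera frames and use the changed region to decide where the spectator is moving.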
So, that happened in 1999. The second push for the popularization of generative graphics
and interactive installations came
with the development of Processing - an open-source programming language and environment built on Java.
What is peculiar about Processing is the big community of people
interested in graphics around it - not only geeks, but artists as well.
Currently Processing's following is the biggest, and this programming environment is the best documented.
Here you can see some of the books available; one of them is even called Processing for Visual Artists, and so on.
As you see, the generative graphics and interactive installations movement came out of the digital underground,
and for many people working with it now the beauty of the source code is irrelevant;
what counts is the beauty of the graphics generated.
Here you can see what can be done with Processing. This installation is present at the Decode show.
This kind of thing. Or, let's take as an example a brilliant contemporary dance project
created by our friends, in which moving-object tracking was used.
Where is the tracking...
Here you can see it.
You can see clearly that some program was used - you just switch it on
and it takes care of the rest automatically, more or less.
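Moving-object tracking at its most basic can look like this: threshold the frame and follow the centre of the bright pixels from frame to frame. This is a hypothetical minimal sketch in Python, not the dance project's actual code.

```python
# Centroid tracking: find the average position of all pixels brighter
# than a threshold. Running this once per frame gives a point that
# follows a bright moving object (e.g. a lit dancer on a dark stage).

def centroid(frame, threshold=128):
    """Centre of the pixels brighter than `threshold`, or None if none."""
    xs, ys = [], []
    for y, row in enumerate(frame):
        for x, value in enumerate(row):
            if value > threshold:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# A bright 2x2 "performer" centred between pixels (1,1) and (2,2).
frame = [[0,   0,   0,   0],
         [0, 255, 255,   0],
         [0, 255, 255,   0],
         [0,   0,   0,   0]]

position = centroid(frame)
```

Real trackers add smoothing and handle multiple objects, but the principle is the same.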
So, Processing became the first of an array of development-environment tools built on
a particular language that make an artist's life easier
and help someone who doesn't want to dig deep into the code.
The second such product was openFrameworks. It uses the same principles as Processing,
but is built on a more advanced and performant language,
C++. It is now widely used, and this work is an example.
The camera is installed on the billboard and films passers-by in the street.
It adds a hand to the image, which tickles or picks at them, occasionally lifting people out of the frame; a divine hand, in a sense.
What is really amazing about this kind of installation is not that it is done so well;
plenty of things exist that are done well.
My point is that it's amazing this happened at all:
that someone commissioned such a work for a public space and installed it in a square in a city center.
I have not seen such a daring benefactor in our country yet.
What makes openFrameworks different from Processing is that it is more performant.
It has some disadvantages though, such as a steeper learning curve and a smaller community behind it.
This installation is also part of the Decode show, and it also uses openFrameworks.
It was made by Mehmet Akten, a brilliant London-based artist.
He has done a lot for the development of openFrameworks too, for instance
by creating a fluid simulation library, which is used here.
Unfortunately, the video projector used at this exhibition is not very powerful,
so half the joy is missing from this installation.
We saw this work before at the Yota Space festival, and the impression it left was much stronger.
This work is called Body Paint. It is a work of a very high skill level.
In an installation like this a person stands in front of the screen. He is filmed from behind or from above,
and then his actions are reflected in some environment.
Perhaps at the end of our talk I will show you how Mehmet remade this installation for Toyota,
so you can get an idea of how a creative work can be used for a commercial purpose.
This installation was at the Yota Space festival too. Here we see a mini ecosystem,
represented by cube houses, which can be moved around.
Some of the cubes are used as obstacles for the light, and one cube is used as a light source.
When you move it, you move the position of the sun, and the shadows grow or shrink accordingly.
I would also like to talk about a class of so-called node-based tools for content creation.
They look like this, visualizing the data and the algorithms used,
with the nodes connected by wires that transmit the data.
Speaking about this class in general, it can be represented by Max/MSP,
which was originally created for musicians and has been around for quite a while now.
Later, Jitter was released as an addition to it that allows working with graphics.
Generally, all those who are put off by working with code find this an easier
and more pleasant way to work. But in fact, if you are dealing with a complicated algorithm or logic,
it can turn into something that looks like this - especially if you don't plan it properly.
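The dataflow idea behind all these node-based patches can be sketched in a few lines of Python: nodes transform values, wires carry them between nodes, and the environment pulls data through the graph. This is a toy illustration of the principle, not any real tool's engine.

```python
# A node graph as a dictionary: each node has an operation and a list of
# source nodes (its incoming "wires"). Evaluation pulls values backward
# through the wires until it reaches the inputs.

def evaluate(graph, node, inputs):
    """Recursively evaluate `node`, pulling values along incoming wires."""
    op, sources = graph[node]
    args = [inputs[s] if s in inputs else evaluate(graph, s, inputs)
            for s in sources]
    return op(*args)

# A tiny patch of three "boxes": two oscillator inputs are summed,
# then the sum is scaled down.
patch = {
    "sum":   (lambda a, b: a + b, ["osc1", "osc2"]),
    "scale": (lambda v: v * 0.5,  ["sum"]),
}

out = evaluate(patch, "scale", {"osc1": 0.8, "osc2": 0.4})
```

With a dozen nodes this stays readable; with hundreds, unplanned wiring produces exactly the spaghetti shown on the slide.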
VVVV also belongs to this class of programming environments and is adored by many, as it's free and runs on Windows.
Max/MSP, on the contrary, costs money and was originally developed for the Macintosh platform.
VVVV is widely used for video mapping. It is used by projects such as AntiVJ.
It is loved by many in Russia too.
A tool like TouchDesigner also deserves a mention. It is special in a way,
because it derives from Houdini - software used for creating 3D graphics.
For many of those who already have some 3D graphics skills it will be easier to master,
because it's very similar to Houdini: it uses the same object classes, et cetera.
It also has a very attractive interface. Alva Noto, for instance, used it to visualize Ryiuchi Ikeda's music
and used it for other concerts. Each note can be visualized with a node in this interface.
If you take an audio signal out, for example, it will affect a wave's fluctuation, or something else will happen.
It is possible to visualize things in millions of ways. You will have your screen filled with such a graph,
and in each of the modules something happens along with the music.
It looks very nice indeed, and as the result you get visuals combining all of this,
with the whole thing rendering in the background.
It looks very beautiful, and the fact that this is an interactive Houdini mod is also a big plus.
- And how convenient is this approach? I just see a bunch of blocks here,
and I guess the person who sets them all up can easily get lost in this too.
For instance, if you model a box in 3D Max, you can model another box inside that box
and so on - making layers, in a sense. Here, on this graph…
- Here we have the same boxes, or let's call them contexts. One context may contain geometry,
another an animation or a channel operator, and so on. It can all be structured in a certain way,
but I'd say that if you construct this graph yourself, you will know which node contains what.
- Architects, for instance, also integrate generative tools like Grasshopper into their practice these days,
and you may see a similar picture. A kind of artistic approach,
like working with a brush - drop a program here, another one there, and then try to figure out what is what.
- One should be very careful with this approach, because it's easy to get confused over a long process,
of course, but what really makes TouchDesigner stand out is that it is a 3D solution
specifically designed for interactive rendering. I had been waiting for this for a long time.
- Actually, it's not purely a 3D solution.
You can use it for all VJ-ing needs - you can do compositing in it in real time, for example.
It also gives you the ability to work with 3D space, and not in a limited mode:
you can use it to the fullest - do modeling, set coordinates and so forth.
In other words, with it you can do things other software is hardly capable of.
- So, creating graphics itself - by code or by other means, such as these "boxes" in Max/MSP
or VVVV - is no easy feat. This kind of problem is well addressed in 3D programs,
so it's great that real-time solutions like this are appearing. Here, for instance,
is an example of how it is used by Richie Hawtin, a famous techno musician and producer.
It is, of course, not very complicated technically - just an attractive equalizer.
Nevertheless, it's known that this musician often uses TouchDesigner in his tour visuals.
It is surprising, actually, that there are not that many examples
of well-executed works done in TouchDesigner around.
I would like to continue by saying that a very promising direction of development lies
in using game engines for creating interactive installations, because the set of tasks is similar.
The first interactive applications ever were games, with a keyboard, joystick or mouse as the interface.
Interactive installations normally involve a spectator or the spectator's image,
and many of the common tasks have already been solved in game engines.
It's great to use these solutions. For instance, if we use some kind of character,
it may already be animated. Here we see an installation made
with Unity and openFrameworks featuring an interactive character.
It reacts to movements and to the speed of movement…
That's what we have under the hood. A camera is installed above. It determines a spectator's position
and transmits the data to the game engine, and the animated character follows it. Here we have technologies combined.
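The "character follows the spectator" behaviour usually comes down to easing toward the tracked position a little each frame, rather than jumping to it, so the motion looks alive. Here is a hedged Python sketch of that general technique; the easing factor is an arbitrary illustrative value, not anything from this installation.

```python
# Per-frame easing (linear interpolation): every frame the character moves
# a fixed fraction of the remaining distance toward the tracked target,
# so it smoothly decelerates as it approaches.

def follow(character, target, ease=0.2):
    """Move `character` a fraction `ease` of the way toward `target`."""
    cx, cy = character
    tx, ty = target
    return (cx + (tx - cx) * ease, cy + (ty - cy) * ease)

# Spectator stands at (10, 0); the character starts at the origin
# and converges toward them over successive frames.
pos = (0.0, 0.0)
trail = []
for _ in range(5):
    pos = follow(pos, (10.0, 0.0))
    trail.append(pos[0])
```

In the real setup the target would be updated every frame from the overhead camera's tracking data.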
I would also like to add that the Xbox Kinect is rapidly gaining popularity.
Kinect is a depth-sensing camera produced for the Xbox game console, and it already has development tools for PC and Macintosh.
So many interactive and media designers nowadays are trying to understand how it works.
What is special about it is that it is a camera that can measure depth:
each dot's position can be plotted not only on the x and y axes, but on the z axis too.
Apart from that, it can recognize a skeleton. For instance, you can come close to the camera
and see your own reflection represented by an imaginary character that precisely mirrors the original.
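How a pixel gets its z coordinate in practice: given the measured depth, a pixel can be back-projected into a 3D point using the pinhole camera model. A sketch in Python; the focal length and image centre below are made-up illustrative values, not Kinect's actual intrinsics.

```python
# Pinhole back-projection: pixel (u, v) plus its depth reading becomes a
# point (x, y, z) in camera space. fx/fy are the focal lengths in pixels,
# (cx, cy) is the image centre -- all invented values for illustration.

def depth_to_point(u, v, depth_mm, fx=500.0, fy=500.0, cx=160.0, cy=120.0):
    """Back-project pixel (u, v) with depth in mm to a 3D point (x, y, z)."""
    x = (u - cx) * depth_mm / fx
    y = (v - cy) * depth_mm / fy
    return (x, y, depth_mm)

# A pixel 50 px right of the image centre, measured 2 metres away:
point = depth_to_point(210, 120, 2000.0)
```

Doing this for every pixel of the depth image yields the 3D point cloud that installations and skeleton trackers work with.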
What we wanted to say is that with the appearance of such a simple tool
as Kinect, we expect lots of interactive works to appear in museums, on the streets…
Interactive displays and all kinds of screen manipulators are based on this technology too.
So, this year and the next, interactive technologies will develop very fast.
- We declare it the Year of Kinect. - Yes, so it is.
Interactive art today is on the rise; it is only just reaching a wide audience.
So the software and technical means needed are not fully developed yet,
and even the visitor or viewer is not used to these works yet and doesn't know how to interact with them.
So for now the simplest possible tools for interaction with a viewer or the environment are used.
For now we are waving hands, pressing buttons and so on. But this is an action per se; there is no story
or quest behind it - for the time being at least - and, again, it's not widely accessible.
The Decode exhibition raises some questions, as there are quite a few things in it that are rather dated.
In my opinion half of the works simply shouldn't be there. But the other half is very interesting.
In any case, all interaction for the spectator is limited to being present
or absent in the space around the installation.
I think in the future, when the guest or visitor becomes self-educated in a sense -
and this is a laborious process - artists will be providing
more complex ways of interaction for the audience.
- May I ask you a question? What programs would you recommend for someone
who has practically zero knowledge but would like to do video mapping and produce interactive works?
- Actually, for video mapping it's not necessary to do any programming at all,
because the kind of content you see in architectural mapping, for instance,
can be produced in programs such as Adobe After Effects or similar ones.
The only piece of software needed beyond that is one to finely adjust the picture in place.
Normally some kind of media server is used. Maybe Yan would like to add something...
- In the context of what we were talking about, for home video mapping Modul8, Granul8 or Resolume
- any of them - perfectly fits the bill. If this is a case where you have, let's say,
four surfaces - two side walls, a back wall and a ceiling or a floor -
with Resolume or Modul8 it can be achieved without any problems at all.
We are talking about indoor mapping now, of course, because for outdoors you would normally need
the kind of projectors one wouldn't be using just for fun, as they are pricey.
If we are talking about tools for interactive installations,
I would advise you to start by learning Processing,
because it has the biggest community around it and is documented best at the moment.
Along with it you can learn any node-based system, such as VVVV or Max/MSP,
because in them you can quickly make sketches that are easy to understand.
I would say that any system you choose, even if it looks easy to understand at first glance,
has a learning curve. So I suggest: choose one system and concentrate on it,
but learn everything there is to learn about it. If it's VVVV - so be it, but make the most of it.
The community is enormous, there are plenty of examples, and it's all available for free download.
Everything is also very well documented. Max/MSP has such a useful built-in help section
that you can get inside any object and see plenty of case-by-case usage examples.
Everything is well explained. Honestly, if you want to learn, you can.
I just don't see the point of learning three different programs,
because more or less they can all help you achieve similar results.
Maybe some of them have a more optimal workflow,
but in general I think that learning both Max/MSP and VVVV is pointless.
- Thank you. Now, regarding the equipment - what kind of projector and other things do I need simply
to kick things off, to start experimenting? Which camera with infrared sensors should I choose?
- Advice on choosing a projector for a large-scale work should be given by a professional,
as it depends. For home video mapping you can use any home projector that "opens" wide.
So if you have a small room, it's better to choose one with a wide-angle lens.
From my experience, for indoor events, especially if you turn off the lights, 5,000 lm will do.
If you have 10,000 lm, it's perfect. Outdoors - 20,000 lm and more; 30,000 lm ones are…
Oh, sorry, we are talking about home experiments, aren't we?
To make it simple, everybody starts out doing something with a home projector, and that's fine.
Especially if you switch off all the lights.
- A popular choice of camera for interactive installations is the camera from the Sony PS3 game console.
Sometimes people hack it by chopping off the lens
and putting in another one that doesn't filter out infrared light.
What is special about this camera is the speed: you can get 60 frames per second
with it, versus the 24 you'd get normally.
- Am I correct in saying that these programs can interpret any kind of signal?
Let's say I have sensors measuring something - can I use the measured values as source data to feed into them
and transform into some kind of image, for instance?
- Yes, you are absolutely right. - Anything I want, any existing sensors can be used?
- That's correct. Maybe we should talk about The Soul of Flowers?
- Yes, let's. I will tell you about one installation in the making… It's a very slow process, actually.
The Soul of Flowers is based on the idea that you can approach flowers,
start telepathic communication with them and exchange emotions.
So, we send a mental signal to a plant in front of us,
then collect some data from the plant - electric signals - which are later processed in a computer
and visualized with Processing or C++. The outcome depends on the data from the plant we feed into it.
- So you are reading data from plants, not from humans?
- It may sound like we have gone nuts, on the one hand, but on the other, there are Cliff Baxter's experiments.
He allegedly proved that plants are hypersensitive and react to our intentions.
- Cliff Baxter, by the way, is the inventor of the lie detector.
- So, is this an experiment, or do you know exactly what you are doing?
- No, at the moment we are just experimenting. Moreover, progress has been very slow.
Plants don't talk to us, to be honest.
- Where are you going to present this research?
- …But we are not losing hope. Baxter wrote about all kinds of things they did to plants.
For instance, they would think bad thoughts about a plant, like imagining setting it on fire.
That particular plant was connected to a lie detector, or polygraph,
and it registered active signals as the plant got worried.
When everything was fine, there was no signal at all. I'd love to see it with my own eyes.
The point of telling you this is that the question was about sensors.
So - yes, you can collect data from any source possible.
Everything that can be measured can be arranged in a matrix of data,
and this data matrix can be processed with the programs we were talking about today.
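As a small illustration of feeding arbitrary sensors into such programs, the usual first step is normalizing each raw reading into a common 0..1 range so it can drive any visual parameter. The sensor names and ranges below are invented for the example, not from the actual installation.

```python
# Turning raw sensor readings into a row of a "data matrix": each value is
# mapped from its expected physical range into 0..1 and clamped, so any
# downstream sketch can use it without knowing the units.

def normalize(value, lo, hi):
    """Map `value` from the range [lo, hi] into [0, 1], clamped."""
    t = (value - lo) / (hi - lo)
    return max(0.0, min(1.0, t))

# Raw readings with their expected ranges (made up for illustration).
sensors = {
    "plant_voltage_mV": (3.2, 0.0, 10.0),
    "light_lux":        (450.0, 0.0, 1000.0),
    "temperature_C":    (26.0, 15.0, 35.0),
}

# One normalized row, ready to drive parameters in a Processing/C++ sketch.
row = {name: normalize(v, lo, hi) for name, (v, lo, hi) in sensors.items()}
```

A visual program would then map each of these 0..1 values onto colour, size, speed, or any other parameter of the generated graphics.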
- Thank you very much for this very engaging lecture.
- I would also like to say that we work in close collaboration with the Audiovisual Academy.
They are our friends who provide online educational services and teach people things similar to what we do.
Over there their program is described, and below is their web address.
It's free, it's online - so please check it out.
Everyone can find something useful there: for artists there are tips from professionals
in content creation for VJ-ing, mapping, all this kind of thing.
And for technicians and all people who like to mess around with hardware there are some lessons too.
So different programs are available there.
- Soon there will be some lessons on interactive technologies too.