interaction lab @ wdka

Archive for November, 2008

Schedule for the next 6 (9) weeks: Puppet show

Week 1: Finish the video labyrinth = create the structure for the videos.

Week 2: Create the Arduino connection

Week 3: Create the Arduino connection + video labyrinth (= prototype)

Week 4: Make the setup (test)

Week 5: Make the setup (test with real videos and puppets)

Week 6: Make the FINAL setup (final)

1 comment

VideoManager Class Tutorial

Here it is - 55 minutes - 99MB!
I put a red overlay showing where the mistake in the code was.

Hope it is helpful!

1 comment

Links from class - cool stuff

For Peter
Golan Levin’s optoisolator -


For Vinesh and Lodewijk
Tine Papendick - Digital Puppetry -


For Marel and Alex:
SimplexNoise class for super cool smooth randomness:
You will have to add the two files to your project and #include “simplexnoise1234.h” in your testApp.h

Try passing in a slowly changing value based on ofGetElapsedTimef() - it should give you back a value between -1 and 1.
float myVal = SimplexNoise1234::noise( ofGetElapsedTimef() * 0.04 );
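To drive something visible with it, that value in [-1, 1] usually needs remapping to screen space first. A minimal plain-C++ sketch (the function name is my own placeholder, and it assumes the noise call above really returns [-1, 1]):

```cpp
// Map a noise value n in [-1, 1] to [outMin, outMax], e.g. to drive a
// smooth x position across the screen. (Sketch only - SimplexNoise1234
// is assumed to return values in [-1, 1].)
float noiseToRange(float n, float outMin, float outMax) {
    float t = (n + 1.0f) * 0.5f;           // [-1, 1] -> [0, 1]
    return outMin + t * (outMax - outMin); // [0, 1] -> [outMin, outMax]
}
```

So `noiseToRange(myVal, 0, ofGetWidth())` would give a smoothly wandering x coordinate.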


No comments

Vinesh; Now teaming up with Lodewijk

The idea, as previously mentioned, is the Katamari Kitchen (a person acting as a magnet moving through a kitchen).
Lodewijk’s and my planning is as follows:

Week 1: Make a test (see how movements and shaking are defined).
(Use the ofAmsterdam particle code: placing objects in space.)
Homework - Write the attraction-target code for the objects.
Week 2: Look for a bounding box around a person and send particle targets to that.
Also work on pixel-counting frame difference to detect the shaking. (Make tests to see if shaking is detected.)
Homework - Inside the particle code: see if a person is shaking and write the function for shaking.
Week 3: Look at randomness and position (where will the objects stick to the person); distinguish between left and right clipping.
Homework - Work on animation that’ll include gravity. / fine-tune
Week 4: Look at detecting blobs over time, in case 2 persons are detected walking past each other.
Homework - Work on animation that’ll include gravity. / fine-tune / extra
Week 5: Work on graphics, sounds and atmosphere.
Homework - Fine-tune graphics / sounds / test
Week 6: Combine all objects and test the prototype.
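The week-2 shake test could start from something as small as this plain-C++ sketch (the function name and thresholds are my own placeholders, not our actual code):

```cpp
#include <cstdlib>
#include <vector>

// Pixel-counting frame difference: count how many grayscale pixels
// changed by more than pixelThresh between two frames; if more than
// countThresh pixels changed, call it shaking.
bool isShaking(const std::vector<unsigned char>& prevFrame,
               const std::vector<unsigned char>& curFrame,
               int pixelThresh, int countThresh) {
    int changed = 0;
    for (size_t i = 0; i < curFrame.size() && i < prevFrame.size(); i++) {
        if (std::abs((int)curFrame[i] - (int)prevFrame[i]) > pixelThresh) {
            changed++;
        }
    }
    return changed > countThresh;
}
```

In OF the two frames would just be the current and previous grayscale camera pixels.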

Oh, and we indeed haven’t assigned the tasks to individuals yet. This will happen as we go, as we want to make sure we won’t be waiting on each other in the process.

Also an update on the sketch: the camera is now placed at the beamer position above the person (at a 45-degree angle).

No comments


Bram Kuiper – Plan Uitwerking


Themeworld (Example Jungle Book)


My idea is to create a theme world with characters walking around. The characters are controlled by the people. When you stand in front of the camera the computer places a character on you.

The characters are made more lively with some extras: Z-space (when you move away from the camera your character gets smaller), talking (when you open your mouth your character opens his mouth), when you move, the character moves but the body tilts a bit to make it more lively.
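The Z-space scaling could be as simple as comparing the tracked blob’s height to a reference height measured up close. A hedged sketch (names are my own placeholders, not Bram’s code):

```cpp
// Scale the character by how tall the tracked person appears in the
// camera image, relative to a reference height measured up close:
// half the blob height -> half the character size.
float characterScale(float blobHeightPx, float referenceHeightPx) {
    if (referenceHeightPx <= 0.0f) return 1.0f; // avoid divide-by-zero
    return blobHeightPx / referenceHeightPx;
}
```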


I’m going to stick with my Jungle Book example. Once I’ve written the code for it, I can easily drop other themes into the code.



Week 1: Figure out what I’m going to do. Think about the code and what existing code to find and reuse. Background and character selection. Sound playback.


Week 2: Open mouth


Week 3: Tilt


Week 4: Scale


Week 5: Prototype


Week 6: Finished

1 comment

Shark Bait 2, Alex and Marèl


Phase 1
Walking affects the rippling of the water.
* Putting your foot down makes circular ripples.
* Gliding leaves a trail in the opposite direction.
Do hands make ripples too?

Beamer -> constant projection of water.

Phase 2
Enter the danger zone / the shark goes hunting.
* A shadow follows in your trail,
but it doesn’t catch you, because it dives under when it’s near.

Phase 3
If the person stands still in the really-danger danger zone, the shadow starts circling around the person.
After 4 rounds the shark attacks: it dives under and comes up to the surface under your feet. When you’re attacked, a picture of the shark appears, you stand in its mouth, and the water turns red.
If the person steps out of the circle without touching the shadow, you get some time before it attacks again.
If the person gets out of the really-danger danger zone, phase 2 or 1 starts again.

To do:
-Water animation
-Person tracking (blob id, directional tracking) keep in mind: hands
- Water rippling / trail
- Shadow of shark
- Music activated by position of person.

No comments

Nice vision based particle system


Programmed in C++ with openFrameworks/openCV/openGL running realtime at 1080 HD, 30fps with 20K particles, optical flow, contour analysis, fluid dynamics, FBOs and VBOs.

1 comment

SMS Projection

My concept for the SMS Projection (okay, it still needs a project title) is that people can send their SMS to the screen and play with it, together with messages from previous users. You can twist, rotate, etc. the messages by hand.

This is a visual explanation of the concept:

No comments

SMS Project

I still don’t have a really clear idea about my concept, but I really want to make something that is activated/manipulated by SMS messages. Almost everybody has a mobile phone, so almost everybody is able to participate in the project.

As said before: I really like the SwimmingMessageSystem (see favorite projects Jurjen)

Now, on the internet I found this app:

It is an application for PC to receive SMS text messages. I would like to grab the data and display/edit the text via OF.

Concrete concept + sketches are coming up soon…

No comments


My idea is to make an interactive WakeUp-”system”.
The way it is supposed to work in the end, is something like this:

The night before, you enter a time on which you want to wake up, let’s say 08:00am.

At that given time, the next morning, alarm bells, sirens and other sounds will fill the room.
Above the bed hangs a webcam.
Once the alarms are going off, you have to look straight into the webcam,
which will detect your face.
The webcam will check if your eyes are open, and stay open for at least 5-10 seconds. Then the alarm sounds will die out.

This procedure will repeat itself every 7-10 minutes, until you get out of bed to switch the system off.

Maybe trying to switch off the system will require you to solve a puzzle or answer a difficult question, just to test if you’re really awake.

Technical part.

There are some things I really need to take a look at.
Especially the “Eye-recognition”.
I’ll be using the HaarFinder to recognize the face, but then I’m not really sure
how I’m going to check if the eyes are open or closed.

The build of the project can be split into three parts:

One: “Eye-recognition” - Check if the eyes are open or closed.
Two: “Set the Alarm” - Need an inputField where you can specify the time the alarm goes off.
Three: “Switch off the Alarm” - There should be a question or a puzzle to solve to make you shut down the system.
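Parts One and Two could start from something as small as this plain-C++ sketch (all names and numbers are my own guesses, not the final code):

```cpp
#include <cctype>
#include <string>

// "Set the Alarm": turn an input like "08:00" into minutes since
// midnight (480); returns -1 for input it can't read.
int parseAlarmMinutes(const std::string& hhmm) {
    if (hhmm.size() != 5 || hhmm[2] != ':') return -1;
    if (!std::isdigit((unsigned char)hhmm[0]) || !std::isdigit((unsigned char)hhmm[1]) ||
        !std::isdigit((unsigned char)hhmm[3]) || !std::isdigit((unsigned char)hhmm[4])) return -1;
    int h = (hhmm[0] - '0') * 10 + (hhmm[1] - '0');
    int m = (hhmm[3] - '0') * 10 + (hhmm[4] - '0');
    if (h > 23 || m > 59) return -1;
    return h * 60 + m;
}

// "Eye-recognition": track how long the eyes have stayed open; the
// alarm dies out once they were open for requiredSeconds in a row.
struct EyeTimer {
    float openFor = 0.0f;
    bool update(bool eyesOpen, float dt, float requiredSeconds) {
        openFor = eyesOpen ? openFor + dt : 0.0f;  // a blink resets the timer
        return openFor >= requiredSeconds;
    }
};
```

The hard part - deciding `eyesOpen` from the camera image - is exactly the open question above.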

I’ve got 6 weeks to finish the project.

My basic schedule will be:

Week 1: Present my idea.
Week 2: Start detecting those eyes.
Week 3: Continue eye detection + inputField for “AlarmTime”.
Week 4: Finish eye detection + AlarmTime + Question/Puzzle.
Week 5: Last tweaks + test run of the WakeUp-System.
Week 6: Definitive run of the WakeUp-System.

As a little extra, Theo sent me a video of an awesome Alarm Clock, called:

Rise and Shine:

No comments

Links from class

Making things move blog:

Code from ofAmsterdam workshop - vector fields, particle animation etc

Zach Booth Simpson (butterflies / shadows)

Wonders down under (footprints / ripples from top down tracking)

Outdoor installation turns your shadow into a giant monster (for alex)

No comments

Vinesh’s idea update - Idea #2

I have made the above sketch to visualize the real life katamari kitchen idea:
1st image: Pans, the oven, knives and spoons stick to you as you pass by.
2nd image: Shaking will cause all objects to fall, making a huge sound.

It may only work when you are standing in front of the screen. It would look really ugly if the camera would detect you from 20 meters away. But how to detect a max distance from the screen? I thought of this:

One camera to capture the person and one camera to detect the distance from the screen by looking at a minimum x position for the center of mass. But I could be terribly wrong. Any thoughts?

No comments

Metal Gear Solid - Trailer graphics

This is one of the trailers made for the PS3 game Metal Gear Solid 4. Notice the overall ambience with the graphics in the background. Really well done, subtle but powerful. Oh, and see the HD version here.

- Vinesh

No comments

John Whitney tribute animation

No comments

Pre-pre-pre-concept Peter & Jurjen

- projection vs. physical
- (interactive) table surface
- human body
- sounds
- music
- hands
- SMS messages
- special room?

Above are some keywords which we want to work with.
We don’t have a concrete concept yet, though we want some projections working together with physical elements.

No comments

Johanne, racism in the digital world

I have an idea of creating a kind of racist force in openFrameworks that does something with a video input image. I am thinking the racist force prefers 1 rather than 0, or in color prefers numbers that can be divided by 2 (working with the histogram)…

I have no pictures of the idea, but I imagine some kind of graphical project related to my 1st favourite project.

No comments

Mathijs, sketches.

By placing your hands in the shape of a butterfly against a certain piece of the wall, and whispering into it, you create a butterfly! The butterfly has colors related to what you said: the pitch, speed, and volume. It will fly away and live for a few days. When someone touches your butterfly, its message is heard. The butterflies fly freely over and out of the room, even over the building and right into the sky. They are alive!

Technically, by placing your hands against ground glass, they are captured from behind and remembered as the sound is recorded. This is then projected by multiple projectors.

The second idea is a video wall on which every movement is captured, as with a very long shutter speed. Traces left behind by people all have their own colors and add up to each other. The colors of multiple traces add up and form new colors, etcetera.

No comments

Jurjen - favorite projects

1. SwimmingMessageSystem by Nanika

What about it interests you?

I find this project very interesting because it makes static text messages more dynamic/flexible.
Also the depth of the screens: one very long wall, one line fully covered by hi-res screens, which means that the “swimming text message” has all the space to swim in.

What technology does it use? List both hardware and software.

  • 24 (HD?) screens
  • SMS receiver
  • C++ script?

How does it work?

You can send an SMS message to the screen. The system will break your message up into separate words and make them into the shape of a fish, swimming through the water.
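The word-splitting step is probably something like this (pure guesswork, since Nanika’s code isn’t public):

```cpp
#include <sstream>
#include <string>
#include <vector>

// Break an incoming SMS into separate words, one per "fish".
std::vector<std::string> splitMessage(const std::string& sms) {
    std::istringstream in(sms);
    std::vector<std::string> words;
    std::string w;
    while (in >> w) words.push_back(w);  // whitespace-separated tokens
    return words;
}
```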

How could it be improved?

Perhaps by visualizing the text message more:
analyzing what it says and visualizing that.

No comments

Marèl - Sketch 1

No comments

Marel 2

No comments

Marel Flying for video

You flap big wings to make a video and music play for someone who’s lying underneath you between your legs. The faster you flap your wings, the more images appear and the dreamier it gets; the slower you flap them, the more you come back to reality. It’s a dreamflyer.

No comments

Marèl Shark Bait or walk on water

The idea is that you walk into a dark room where you can walk upon the water. You can see the water rippling under your feet and see strokes when you slide; it’s very peaceful. But when you come near the center of the room, the peaceful atmosphere slowly changes… you become hunted. In the strokes of water you leave, you see a shadow coming near and diving under, etc. When you leave the center you’ll be back in this happy I CAN WALK ON WATER mood.

No comments

Alex - Project Ideas

The first idea I had was some kind of a godzilla thing.

A person’s shadow would be given a tail, spikes and whatever, and rampage around in a city for a bit. The more I thought of this idea, the more I started disliking it, so I haven’t put much time in making a fancy sketch, just horrible sketches on paper (will be scanned later, scanner down).

After just thinking in class, maybe a more GTA 2 style stuff could be pretty awesome. Quick drawing:

After more thinking I decided to go with a technically easier project, and take time to make it look good. Most of my projects end up technically okay, but looking a bit meh, so I’d like to spend at least two or three weeks on the visual stuff this time, instead of spending all the time on getting stuff working.

The concept:
I want to bring people together by using the relation between their centers of mass to manipulate sounds. The further away the blobs (persons) are from each other, the more annoying the sound will be. I also want to make some visuals connecting the people: linking them with green visuals when they’re close enough, turning red when they move too far apart. The sounds are easily created by using single notes and altering the playback speed for different tones.
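The core mapping could be as small as this sketch (plain C++; the function name and `maxDist` are my own placeholder choices, not Alex’s code):

```cpp
#include <cmath>

// Squash the distance between two people's centers of mass to a 0..1
// "togetherness" value: 1 = touching, 0 = too far apart. That one value
// could drive both the sound (more annoying when low) and the
// green-to-red connection color.
float togetherness(float x1, float y1, float x2, float y2, float maxDist) {
    float d = std::sqrt((x2 - x1) * (x2 - x1) + (y2 - y1) * (y2 - y1));
    if (d >= maxDist) return 0.0f;
    return 1.0f - d / maxDist;
}
```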

Here’s a little (bad) sketch I did:



No comments

Johanne, The-portable-tourist-portrait-making-machine

The idea is to create a portable device that can create postcards or other kinds of tourist-related things… like an alternative to the drawn portraits you can buy everywhere at tourist attractions.

I haven’t fully developed the idea, but this is the first draft…

No comments

Johanne, The puppet theatre

This is a project about interactive storytelling.

It is formed as a puppet theatre with 4 puppets made in the jumping-jack style, a frame creating the stage and a monitor in the background. When you pull one of the puppets it triggers a part of a story, made with sound and moving images on the monitor. As you pull different puppets in different orders you create different stories.

No comments

Johanne, my 3rd favourite project

This is an installation called Playing the Building. It is made by David Byrne, a member of Talking Heads - great band.

I just recently came to know about this project and I just love it.

The idea of the project is that you sit and play a piano, and instead of the piano playing regular piano music, every key triggers a sound somewhere in the building.

In the movie Byrne kind of describes how the installation works, with different machines that are placed around the building and get switched on and off by playing the piano.

I really love the aesthetic and the idea; it has a kind of old-building, ghost-story feel to it. It would be cool if it were portable in some way and you could go and give pirate concerts in different buildings such as this one:

No comments

Computer Vision Test Videos

Almost forgot to post my archive of computer vision test videos - useful for developing installations without having the right environment to test with.

No comments

Two clever games

Developed by -


PS3 Augmented Reality demo

Pretty slick!

Interestingly enough, it seems that the PS3 Eye has four mic inputs, which can be used to determine the 3D position of a sound relative to the camera.
See -

No comments

PS3 Eye Camera - decent cheap camera for os x

It also works on Windows, but it is not working so great with OF at the moment (on Windows).

On the mac side of things - you can do awesome stuff like - control the fps, the shutter and the gain - with up to almost 60fps capture (Working towards 120fps). The camera is pretty cheap (about 40 - 50 euros).

For updates on the PS3 Eye and OF.

For Windows users I currently recommend the Philips SPC1000NC. It is super wide-angle and has good manual controls on Windows.

No comments

3 Favourite books

Learning OpenCV - Amazon
The Pocket Handbook of Image Processing Algorithms in C - Amazon
OpenGL Programming Guide - Amazon

Also - not in the picture.
The C Programming Language - Amazon - PDF

No comments

Printing + openCV

Nice project from rAndom International and Chris O’Shea.

Temporary Printing machine - uses light reactive ink and a row of leds to draw pixels onto a surface.

More here:

No comments

Nice piece from Rockwell Group

Made with openFrameworks too :)

Best pictures here:

No comments

Peter - Three Favorite Projects

The three projects I’ve chosen are:

1) “Wooden Mirror” (Daniel Rozin, 1999)
Link: Wooden Mirror

I’ve known this project of Daniel Rozin’s for quite a while already.
What attracts me about this mirror is the paradoxical way of using a non-reflective material, like wood, for a mirror.
It’s not only the visual part which is awesome; the sound also adds to the experience.

Hardware-wise this project contains:
830 square pieces of wood.
830 servo motors.
Control electronics.
Wooden frame.
Video camera.

The 830 pieces of wood are placed in a wooden frame; all pieces can be tilted vertically.
A camera simply records who/what is in front of it.
Above the frame he placed a lamp, which shines downwards.
Little square bundles of pixels from the camera correspond with one “wooden pixel”.
Now, the “wooden pixels” which should have a higher luminance are tilted towards the lamp,
which makes them brighter; the dark ones are tilted downwards, so they get darker.
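The luminance-to-tilt mapping might look roughly like this (my guess at the numbers, not Rozin’s actual control code):

```cpp
// The average luminance of a small pixel block (0..255) picks a tilt
// angle for one wooden pixel, from maxDownDeg (darkest, tilted away
// from the lamp) to maxUpDeg (brightest, tilted towards it).
float luminanceToTilt(float luminance, float maxDownDeg, float maxUpDeg) {
    float t = luminance / 255.0f;                     // 0..1
    return maxDownDeg + t * (maxUpDeg - maxDownDeg);  // linear interpolation
}
```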

Maybe one thing which could be improved is the lack of color.
If the “wooden pixels” were able to tilt vertically and horizontally,
you could place three (R, G, B) lamps above the frame, spaced out over it,
meaning that if a “wooden pixel” were tilted 45 degrees horizontally, it would get green/blue light on it.

2) “ReacTable” ( Music Technology Group)
Link: [Basic Demo 1] [Basic Demo 2] [Free Improvisation]

There are of course a lot of interactive sound projects, but this one is different.
Though it doesn’t use your whole body, the ReacTable looks good, sounds good and works well.
I think almost everybody is able to use it, even without any musical background,
but if you know your way around synthesizers and music, this is one heck of an instrument!
Using the old synthesizer components in this high-tech table is really awesome.

The ReacTable hardware is based on a translucent, round multi-touch surface.
A camera situated beneath the table continuously analyzes the surface, tracking the players’ fingertips
and the position and orientation of physical objects that are placed on its surface.
These objects represent the components of a classic synthesizer;
the players interact by moving these objects, changing their distance, orientation and relation to each other.
These actions directly control the structure and parameters of the sound synthesizer.
A projector, also from underneath the table, draws dynamic animations on its surface.

3) “PixelRoller” (rAndom International)
Link: PixelRoller

I’ve chosen rAndom International’s PixelRoller because it’s something I call UsefulArt.
It’s not only a wonderful machine, it’s also something you’d really use to paint some nice imagery on your wall, ceiling or floor.
In the end, compared to a lot of interactive art, you really have something, which will be there for a long time.
Not just a memory of a cool installation.

The PixelRoller is a paint roller that paints pixels,
designed as a rapid response printing tool specifically to print digital information,
such as imagery or text onto a great range of surfaces.
The content is applied in continuous strokes by the user.
PixelRoller can be seen as a handheld “printer”, based around the ergonomics of a paintroller, that lets you create the images by your own hand.

Hardware-wise I’m not really sure how it’s made.
The software used is the well-known Processing.

No comments

Vinesh’s concept - sticky, sneaky objects

The above diagram is my vision of how to realize the following concept:
What I’d like to do is have many objects flying around on the screen (such as little balls). When a person passes by and is detected by the camera, OpenCV will smoothly move the objects to the detected body contours, and they stick to them ‘katamari’ style. However, when you look at the screen directly, the camera will detect your face and release the objects from your body immediately (whooosh!). The effect will be as if the little objects are alive: when you are not looking, they follow you and stick to you, but when you look at ‘them’ they fly off.

Please see the full idea here

No comments

Lodewijk Luijts 2 projects

While I was in Rome I visited the Forum and the Colosseum and thought: why should we always have to imagine stuff that we really cannot imagine at all? I want a set of glasses to show all those lions eating people, but more importantly, how it would feel to walk through all the markets and see the way people dressed. But since there is such a big difference in what people want (visualization or imagination), I would like to research both and put them into one installation.

To make things more realistic and closer to everyday life, I imagine the screen being an actual window, letting people see the everyday world instead of synthesized environments. In this world I’ll be telling a story that has coherence with the shown images but lets your fantasy do all the animation of the supernatural. Unless of course you want to interact and see things on the screen - that’s okay too. So let’s make this a touch screen: if someone presses a certain area of effect (which is found by listening to the story), the foretold animation will be shown for about 5 seconds before it fades away.

Now, to make the window believable, we need a context for it to exist. I personally would like to create a wall that gives the impression that you are in one of the locations in the story. (Say the story was about a crippled child that imagines all kinds of stuff when looking outside of his house, which happens to have a view of the elementary school playground. He has many things he thinks about when looking at that; he even goes through stages that conform to a psychological train of thought: disappointment, anger, relativization, acceptance. If this is all too much we will scale it down.)

Note that this window will interact with the person in a realistic way; this can be done with the head tracking invented by Johnny Lee. Of course we have to decorate around the window in order to make it even more realistic and coherent. Oh, and only one person can watch this in order to make the head tracking work, so a wheelchair would suggest this, even though that’s still not definitive. All of this could result in a very immersive experience, I think.

PROJECT 2: While riding the train I listened to music and got the notion that spicing up a train ride with amazing visuals and music would make the ride a lot of fun. Visuals and music have a big coherence; together they have a lot of impact on you. And the train is iconic for traveling from one place to another, which helps when you want to make an installation that shows multiple locations, or movement at all. The thing is that if you switch music, the effect stays the same; it doesn’t matter which music you play, only the beats per minute count. These will be determined by markers (songs will be provided by us), and the visuals will sometimes be coming at you at the exact rate of those markers. So switching music doesn’t change the installation, just the atmosphere.

So to be exact the 2 projects are:

A window that shows things passing by like a regular window would, only this one is a touchscreen and has many hidden animations in it. The spoken story is an integral part of this installation.

A train window, which is a beamer on a screen covered by a large glass window, with video images and music to create a futuristic journey across the earth, space or even the microcosmos.


I just talked to Mathijs about my post and he had some very nice feedback: he thought of a device that measures the speed of recurring things outside the train and picks a track with a similar beat… kinda cool, huh!

Thanks to Mathijs aka Joke for this idea :)

No comments

Interactive installation with fire interface

A nice project that generates flames and fireworks when you light a match. I would guess that it is using an IR camera.

1 comment

Homework Correction - Diagram Size

Looks like this Wordpress theme can’t handle images that are 600 pixels wide - so for your diagrams please make the images 500 pixels wide max. You can link to a larger image if you need to. But for the blog make sure any image is max 500 pixels wide.

I changed the settings so now the medium image wordpress generates for you when you upload a file will be 500 pixels wide. So you can select Medium as the type to embed and it should look good.

To test -

No comments

Johanne, my 2nd favourite project

Augmented sculpture by Pablo Valbuena -

I love this installation: its aesthetic, the sound, the story.

I’m not sure how it works. Maybe this explains something:


I am also not sure about the technology, but maybe some kind of 3D program and a projector.

I guess it is static; it kind of works like a play or a movie, so interactivity would be an improvement - using the idea of the aesthetic, or making the atmosphere react to the viewer… It has development possibilities!

1 comment

Lodewijk’s 3 favourite projects

  1. I found out about this programmer a long time ago, and this invention of his is very cool -

- What I like about this: the fact that it’s a new way of utilizing a screen; it gives a new dimension to it. And the realism: it suggests you are looking through a real window. With some proper placement it will really look like a normal window.

- What technology does it use, both hardware- and software-wise: this setup uses a screen, a Wiimote as IR data receiver and analyzer, an IR source (mounted on e.g. the head or hands), and of course a computer.

- How does it work: the (head-)mounted IR emitter tells the person’s position to the computer via the Wii remote. This way the computer knows where to place the image in relation to the person’s sight. Now you can look at the picture from many different angles.
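The basic parallax idea can be sketched in a couple of lines (the factor is a made-up tuning constant, not Johnny Lee’s actual code):

```cpp
// The further the viewer's head moves off-center, the further the scene
// is shifted the opposite way, which sells the "window" illusion.
float sceneOffset(float headX, float screenCenterX, float parallax) {
    return -(headX - screenCenterX) * parallax;  // opposite to head motion
}
```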

- How could it be improved: I would like to apply this principle to a Wiimote-less setup by means of the bounding box. This head tracking requires only two things: first, only one person can be using this technology at a time; second, the position of that person has to be determined, which is what I’m using the bounding box for. Afterwards I can put a visual in it that uses the fact that as you move your head, the image shifts. So the fun part will be finding a concept that works with this motion and the extra depth it provides when looking at an image.

  2. I’m posting this Vimeo link because it’s exactly what I don’t want to do (as of now, though; this is just a current feeling). You see a lot of these installations where people get to interact with the screen and see what’s happening in an unrelated way, namely on a screen somewhere else. I want to take it back to a more realistic setup and stop playing a new type of cultural video game.

To make things more realistic and closer to everyday life, I imagine the screen being an actual window, letting people see the everyday world instead of synthesized environments. In this world I’ll be telling a story that has coherence with the shown images but lets your fantasy do all the animation of the supernatural.

Unless of course you want to interact and see things on the screen - that’s okay too. So let’s make this a touch screen, and if someone presses a certain area of effect (found by listening to the story), the foretold animation will be shown for about 5 seconds before it fades away.

Now, to make the window believable, we need a context for it to exist. I’m personally thinking about a train window; a train can move instead of being stationary. Or it can be a movie screen, if the train doesn’t permit the footlooseness the project requires.

  3. Another approach I really like is the abstract one. I love being sucked into these cyber realms because of the sheer space and grandness of the location. Being able to look around more and having influence on the visuals would be really cool as well. - is a nice example of what you can do with input.

- What I like about this: the spatial grandness; I have had dreams where I woke up in terror at the sheer size of the location I was in.

- What technology does it use: a Java draw applet?

- How does it work: it draws a line from point A to point B, both defined by a click on a certain spot of a grid. The lines drawn between these coordinates are then shown as continuously animated strings.

- How can it be improved: let it influence sound composition.

  4. Cuts and Layers, a work by Edwin van der Hijden - it uses spatial sounds of the surroundings it is presented in. A user can compose with sounds from different locations around and in the room where the console is located. This is cool because it’s in direct relation with the space.
No comments

Homework - Brainstorming projects

For next week come up with two projects that you want to make.
The projects should relate to computer vision in some way, but can incorporate other techniques as well.

Please keep in mind that this is an 8 week class - so try to have big concepts that are not too hard for you to make. A good example of a nice project that is not too hard is Kyle McDonald’s - I eat beats.

1 - Title of post should be - Your Name - Project Ideas
2 - Post the projects on the blog with a short description text for each one.
3 - Post a clear diagram(s) that illustrates how the project would work.


Mischan - 3 favourite projects

Life Writer Machine

– What about it interests you?

A nice aesthetic decision to mix an old typewriter with artificial life… a conscious decision to use the typewriter as a writer-metaphor… the idea that you generate life while writing.

– What technology does it use? List both hardware and software.

  • an old typewriter
  • piece of paper
  • video projector
  • no idea which software

– How does it work?

I guess either the typewriter keys or the type bars are picked up and sent to a computer; immediately after that, the letter becomes part of the animation that is projected onto the paper. When a linefeed is done, that is picked up as well, to adjust to the paper movement. Software-wise I imagine one of the many a-life simulations…

– How could it be improved?

I don’t know - the whole installation space is used perfectly and the work doesn’t smell like duct tape in any way… maybe when you really try it out, you could find improvements for the actual typing/animation interaction?


– What about it interests you?

I am a complete sucker for robotics, especially when it is not as “clean” as it is represented most of the time… the use of meat on classic robotics hardware is awesome, especially since there is a historic connection between anatomy, robotics, drawing studies and flesh.

– What technology does it use? List both hardware and software.

  • servo motors
  • a microcontroller because it looks like there is no external computer used
  • lots of wires as a supporting structure
  • silicone (for/or meat)

– How does it work?

the servo motors move the “meat”

– How could it be improved?

The webpage doesn’t show a lot of the actual interaction, or of what you can do to the meat… so I like it mostly for conceptual reasons.

Hello, world!

– What about it interests you?

Looks stunning, perfect scaling. I am generally interested in works that give basic computational or computer-architectural components more space and extract the aesthetic value of the process. Especially using sound in space to bring up the concept of memory is totally awesome.

– What technology does it use? List both hardware and software.

  • computer
  • speaker
  •  246 meters of copper tube 
  • a microphone

– How does it work?

Except for the obvious description of the work… no idea.

– How could it be improved?

No idea… maybe remove the computer from the casing, but that is probably intentional.

Wiki Notes

No comments

Danny - 3 favorite projects.

1. Waves to waves to waves.

– What about it interests you?

I like the idea of visualizing the electromagnetic waves radiated by man-made electronic devices: taking something invisible that is present in our everyday lives, and making it visible.

– What technology does it use?
It uses specialized sensing devices that pick up electromagnetic waves. Software??

– How does it work?

Their equipment is sensitive to changes in electromagnetic fields: first they convert the detected changes into electrical signals, then they use that data to draw animations.
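
That signal chain, sketched under my own assumptions (the input range and parameter names are made up; the map function mimics Processing’s map() / oF’s ofMap()):

```python
# A guess at the mapping step: a sensed field change (an arbitrary
# voltage-like value) is normalized and mapped to animation parameters.

def map_range(value, in_min, in_max, out_min, out_max):
    """Linear remap with clamping, like Processing's map() + constrain()."""
    t = (value - in_min) / (in_max - in_min)
    t = max(0.0, min(1.0, t))  # clamp so outliers stay in range
    return out_min + t * (out_max - out_min)

def field_to_visual(field_change):
    # Stronger field changes -> bigger, brighter animated strokes.
    return {
        "stroke_weight": map_range(field_change, 0.0, 5.0, 1.0, 12.0),
        "brightness": map_range(field_change, 0.0, 5.0, 40.0, 255.0),
    }

print(field_to_visual(2.5))  # a mid-range signal gives mid-range visuals
```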

– How could it be improved?

I think it could be improved by making clearer what kind of traffic is detected: not only converting changes in the electromagnetic fields into electrical signals, but also other types of detection (receiving radio, or decoding a wifi signal).

2. ex-isles

– What about it interests you?

I like the idea of projecting onto water instead of just a plain wall.

– What technology does it use? 

It uses a projector, a webcam, an internet connection, some kind of motion detection, and a water pump.

– How does it work?

Two light islands are situated at each side of the pool; one of them is on the ground and represents the “real”, the other, in the water, is a “virtual” one. When a visitor enters the light of the island representing the physical space, he produces a natural shadow, which is projected in the water as a luminous shadow crossing the pool by swimming. This silhouette leaves a trace, a line drawn from one island to the other. The line drawn by the presence of a visitor corresponds to the trajectory taken by the luminous shadow in the water.

In the exhibition space, a webcam broadcasts the pool and the two islands. On the website, internet visitors have the possibility to project themselves into the water of the installation and leave their trace going from the virtual island in the water to the real island. Links are thus created between the physical visitors of the exhibition space and the net surfers, who can interact from anywhere in the world.

– How could it be improved?

It could be improved by having visitors actually cross over the water in some way, making the water, and also the web visitors, feel less distant.

3. Polygon-playground.

– What about it interests you?

I like the idea of projecting on something physical that visitors can actually walk on, and then animating that projection to distort their senses.

– What technology does it use? /– How does it work?

The installation features a software-aided 3D surface projection system that covers the object with a seamless 360-degree projection mapping. An additional sensor system detects people’s positions and proximity.
When a person is detected, the visual appearance of the “Polygon Playground” changes.
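
The proximity logic might be sketched like this (the radius, falloff and names are all my own assumptions, not from the actual installation):

```python
# Sketch: each detected person has a floor position; a surface vertex gets
# "activated" by the nearest person, with linear falloff by distance.

import math

def activation(vertex, people, radius=2.0):
    """0..1 activation of a surface vertex from nearby people."""
    best = 0.0
    for px, py in people:
        d = math.hypot(vertex[0] - px, vertex[1] - py)
        best = max(best, max(0.0, 1.0 - d / radius))
    return best

# A person standing right on a vertex fully activates it; a vertex exactly
# one radius away is not activated at all.
print(activation((0, 0), [(0, 0)]))  # -> 1.0
print(activation((2, 0), [(0, 0)]))  # -> 0.0
```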

– How could it be improved?

I guess it could be improved by having the object itself change from time to time.

Wiki notes

No comments

Alex - 3 Favourite Projects

Nothing Happens:

What about it interests you?

It’s an interesting attempt to link internet activity with real world activity.

What technology does it use? List both hardware and software.

Software: Flash, HTML and probably PHP.

Hardware: Computer collecting internet data, a motor to pull the object, and a device to convert the digital data to analog, to control the motor.

How does it work?

Every click made on the page, in a Flash app showing the current image, is registered and sent directly to the installation. Every click is a very small motor motion. A high number of clicks will eventually make an object fall over.
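
The click-to-motor mechanism, as I understand it from that description, could be sketched like this (the step count and class name are my own invention):

```python
# Hypothetical version of the described mechanism: every web click winds
# the motor one tiny step; after enough accumulated steps the object tips.

class NothingHappens:
    def __init__(self, steps_to_fall=100000):
        self.steps = 0
        self.steps_to_fall = steps_to_fall

    def click(self):
        self.steps += 1          # one click = one small motor motion
        return self.fallen()

    def fallen(self):
        return self.steps >= self.steps_to_fall

rig = NothingHappens(steps_to_fall=3)
print([rig.click() for _ in range(3)])  # [False, False, True]
```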

How could it be improved?

The actual setup could be a bit fancier; right now it looks rather plain, especially since the motor is very visible. Also, it’s a project that looks very lame without the explanation of the process.

Pneumatic Sound Field:

What about it interests you?

It’s a pretty unique way to create sound while involving the audience. The sounds created by the pneumatic valves sound huge.

What technology does it use? List both hardware and software.

It’s not quite clear what exactly is being used, but as far as I can see: 42 pneumatic valves. There’s also some interface involved, but the specific page for this project on the interface builder’s site is down…

How does it work?

It uses various theories on perception of sounds, the wind, and other already available elements to create a field of noise around the audience. The sound isn’t created by speakers, but by the pneumatic valves.

How could it be improved?

I’ve seen this thing in person, and it completely trips out if too many people are underneath the installation. Surely a problem that could use some work, since it’s often presented at the entrance of exhibitions.

Sorting Daemon:

What about it interests you?

This project takes a recent public concern and tries to turn it into an artsy product. It also involves people in the work of art without getting them directly involved.

What technology does it use? List both hardware and software.

Software: Probably Max/MSP, since Rokeby has his own video and camera tracking and processing software for Max/MSP.
Hardware: A computer, a camera with zooming and panning options, and an LCD screen to present the result.

How does it work?

The installation actively screens the street outside for everything it recognises as a person. Once it thinks it has found a person, the person is captured and subtracted from the background. The snap of the person is then placed in a database, where all the different colours and hues are taken apart and orderly placed over the screen.
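
The colour-sorting step might look something like this rough sketch (nothing here comes from Rokeby’s actual software; it just orders pixels by hue, which is what the described result suggests):

```python
# Sketch of the sorting step: pixels from captured people are ordered by
# hue, so similar colours end up next to each other on the screen.
# Python's colorsys returns hue in the range 0..1.

import colorsys

def sort_by_hue(pixels):
    """Order (r, g, b) pixels (0..255 per channel) by hue."""
    def hue(p):
        r, g, b = (c / 255.0 for c in p)
        return colorsys.rgb_to_hsv(r, g, b)[0]
    return sorted(pixels, key=hue)

snap = [(0, 0, 255), (255, 0, 0), (0, 255, 0)]  # blue, red, green
print(sort_by_hue(snap))  # red (hue 0), then green (1/3), then blue (2/3)
```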

How could it be improved?

This is a nice project, but the end result is kind of lame: it just comes out as a coloured screen, with faces where the fleshy colours are collected. The machine should be able to do some more interesting compositing. Also, it’s only presented inside the gallery where it is placed, which is a bit of a downer; a huge screen outside might work better.

Comments are off for this post

Vinesh 3 Fav projects

Moving elements on the screen towards your hand position and even off the screen -

What about it interests me?
The unexpected awesomeness of elements actually getting out of your screen because of what you do.


  • OpenCV
  • Beamer
  • Monitor

A (multi)touch display made with OpenCV, a camera, a beamer and a computer monitor (I think?). The camera detects your (hand) position on the display and OpenCV moves objects towards it. If the hand is kept on the display longer, the particles move towards the biggest blob, this time via the beamer, starting from your hand position.

When the hand leaves the defined position, the creature animation is set back to the boundaries of the fake display.
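
The attraction behaviour can be sketched without any OpenCV, as simple per-frame easing towards a target (the speed value is an arbitrary choice of mine):

```python
# Sketch of particles steered towards a tracked hand position: each frame,
# a particle moves a fixed fraction of the remaining distance to the target.

def step_towards(particle, target, speed=0.1):
    """Move a particle a fraction of the way to the target each frame."""
    x, y = particle
    tx, ty = target
    return (x + (tx - x) * speed, y + (ty - y) * speed)

p = (0.0, 0.0)
hand = (100.0, 0.0)
for _ in range(3):
    p = step_towards(p, hand)
print(round(p[0], 2))  # -> 27.1 after three frames, closer every frame
```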

The objects move nicely, but I think it’s a bit unexciting in terms of colors.

It would be nice to add an actual function to it. It’s fun now, but you can only use it once. For example, layer information like so: a text layer with decoration becomes visible if you move objects away or even get them off the screen! (Or something.)

Creating shapes by making contours with hands. The shapes include gravity.

What about it interests me?
A very close connection between a person and the app. It totally feels like you are doing this.


  • Camera
  • OpenCV / OF
  • Beamer

I imagine this is done with a form of background subtraction. The app looks for ‘white’ pixels which are surrounded by black pixels; if that is the case, it becomes a new object with certain attributes (color, animation). So it scans the image vertically and horizontally for the black pixels that start and end a run in an x or y pixel row… pretty impressive.
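
That row-scanning idea might look like this minimal sketch (binary pixels as plain lists; everything here is my own simplification of the guess above):

```python
# Walk one pixel row and report runs of foreground ("white") pixels
# bounded by background pixels - the 1D building block of the scan.

def runs_in_row(row, fg=1):
    """Return (start, end) index pairs of contiguous fg runs in one row."""
    runs, start = [], None
    for x, px in enumerate(row):
        if px == fg and start is None:
            start = x                      # a run begins here
        elif px != fg and start is not None:
            runs.append((start, x - 1))    # the run just ended
            start = None
    if start is not None:
        runs.append((start, len(row) - 1))  # run reaching the row's edge
    return runs

row = [0, 1, 1, 0, 0, 1, 0]
print(runs_in_row(row))  # [(1, 2), (5, 5)]
```

Doing this for every row and column, and grouping overlapping runs, gives you the contour-shaped objects described above.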

Don’t make the shapes disappear; leave them at the bottom of the screen until you somehow choose to get rid of them. With these particles you could make a nice object on your multitouch display! (Even though it’s sort of a different idea.)

Using motion tracking to convert live motion into 3D shapes. They actually sell their designs as products.

What about it interests me?
The ability to draw in 3D in an easy way while directly seeing the scale.


  • 2 infrared cameras
  • Sufficient space
  • A monitor to see what you do
  • OpenCV

Since this uses 3D, meaning width, depth and height, it needs to capture 3D. That could be done with one camera at the front capturing width + height and one from the side capturing depth. The drawing is done by the hand, which is the point farthest from the center of mass. So you could say: if the distance from the first x of the bounding box is higher (or lower) than some amount of pixels, start drawing… This seems very complicated.

Does anyone have any thoughts about how this could work?
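
One possible answer, as a very rough sketch: if a front camera gives (x, y) and a side camera gives (depth, y), the two blob positions could be merged into one 3D point (camera calibration is ignored entirely here, and all names are my own):

```python
# Merge the 2D blob positions from two perpendicular cameras into a
# single 3D point. Front camera sees (x, y); side camera sees (z, y).

def merge_views(front_xy, side_zy):
    """Front camera: (x, y). Side camera: (z, y). -> (x, y, z)."""
    x, y_front = front_xy
    z, y_side = side_zy
    # Both cameras observe the same height; average to smooth out noise.
    return (x, (y_front + y_side) / 2.0, z)

print(merge_views((120, 80), (45, 82)))  # -> (120, 81.0, 45)
```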

You probably cannot control the thickness of the lines, which may be very important when designing the details. Some additional options such as those would be useful to apply while you are drawing, to have a more complete real-life toolkit.
My notes on the wiki.

No comments

Marèl 3 projects

1. Messa di Voce (performance) by Golan Levin and Zachary Lieberman with Jaap Blonk and Joan La Barbara

What about it interests you?
The cooperation between the performer and the software and visuals.
What technology does it use? List both hardware and software.

Video Equipment:

  • 2 XVGA video projectors, at least 2000 Lumens
  • 2 projection screens, at least 4mx3m
  • 2 VGA Distribution Amplifiers (two-way video splitters)
  • 2 VGA cables, 15 meters
  • 2 coax video cables, 15 meters, 75-ohm

Audio Equipment:

  • Stereo house PA sound support
  • 16-channel Mackie or equivalent audio mixer
  • 2 Sennheiser Evolution RX/TX wireless units (for mics)
  • 2 sets wireless in-ear monitors (e.g. Shure)
  • assorted audio cable & connectors

Stage Equipment:

  • 6 500-watt theater lights, with
  • electronic dimmers,
  • adjustable barn door enclosures,
  • multiple sheets of Dark Red lighting gel,
  • free-standing fixtures/stands

Computer Equipment:

  • live Internet connection (ISDN+)
  • 4 LCD or CRT computer screens, XVGA
  • 1 8-port Network hub


  • 2 standard desks
  • 3 chairs

FMOD, libsamplerate, libSnd, OpenCV, OpenGL, OpenSteer, portAudio, and many other open-source toolkits and components.
How does it work?
Custom software transforms every vocal nuance into correspondingly complex, subtly differentiated and highly expressive graphics. Some parts of the performance also seem to react to the movement and position of the performer.
How could it be improved?
Maybe, during the first part (with the balls), the balls could really come from his mouth; they now seem to come from somewhat above his shadow.

2. Cristobal Mendoza
What about it interests you?
The visuals give me a really mysterious, dark vibe. I can imagine that when you stand in front of a wall looking at this, and this is what is left of yourself, it could really give you the feeling of staring into another world.
What technology does it use? List both hardware and software.
Mirror is a Processing-based program that reads incoming video from a camera and triggers particles from points of the video frame that have changed by a certain amount. The result is an image that is fairly representative, yet disappears within moments. When no one stands in front of the “mirror”, it’s blank/white.
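
The frame-differencing step described there could look something like this minimal sketch (plain Python lists instead of a real video feed; the threshold and names are my own assumptions):

```python
# Compare the current frame to the previous one and emit particle spawn
# points where the brightness change exceeds a threshold. Frames here are
# plain grayscale lists-of-lists.

def changed_points(prev, curr, threshold=30):
    """Pixel coordinates whose brightness changed by more than threshold."""
    points = []
    for y, (row_a, row_b) in enumerate(zip(prev, curr)):
        for x, (a, b) in enumerate(zip(row_a, row_b)):
            if abs(a - b) > threshold:
                points.append((x, y))
    return points

prev = [[0, 0], [0, 0]]
curr = [[0, 200], [10, 0]]  # only one pixel changed a lot
print(changed_points(prev, curr))  # [(1, 0)]
```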

How does it work?
See the question above.

How could it be improved?
Maybe draw a frame around it.

3. Snout by Golan Levin with Lawrence Hayhurst, Steven Benders and Fannie White
What about it interests you?
It’s a robot arm that seems to react and look at you!
What technology does it use? List both hardware and software.

How does it work?

How could it be improved?

my notes page


Mathijs - 3 favorite projects.

Christopher Bauder + Markus Lerner - Polygon Playground

[Youtube link]

The interesting thing about this project, in my opinion, is that it uses an already existing technique in the same way almost everyone does, but by changing the lines when movement occurs it actually looks like the whole installation is deformed. This creates an interesting effect for people walking over it.

The project most likely uses a top-mounted beamer with mapped projections. It also uses a top-down IR camera to detect movement and ignore its own projection. And of course the actual blocks to project on. The projections themselves can be made with any interactive visual software (OF/Processing/vvvv/etc.).

There’s however one thing that bugs me: it’s still so solid. The actual ’shape’ will never change, and this is rather limiting. The amount of different visualizations is also a bit messy, and looks more like the creators having fun than an actual installation.

rAndom International + Chris O’Shea - Audience

[Vimeo link]

Audience is a very simple idea that is very effective. When the mirrors catch your attention, they follow you around, creating a very aesthetically pleasing flock of mirrored images. The flow of movement creates an interesting effect as well, as mirrors on the front row move less than those at the back. The fact that it uses no monitors or any conventional computer-related interfaces makes it really refreshing.

It most likely uses a video feed with motion history, allowing it to see in which direction someone is walking, and it keeps track of the person being followed instead of jumping directly to anyone walking by. It calculates a line from the real position of each mirror to the moving subject and passes the angle of this line on to the motor, so it can move to that angle. The closer you get to the installation, the more tilting occurs. When no subject is available, they randomly twist and turn.
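
The mirror-to-subject angle calculation guessed at there can be sketched with atan2 (2D only; positions and units are made up):

```python
# From each mirror's position, compute the pan angle towards the tracked
# subject; this angle would then be sent to that mirror's motor.

import math

def pan_angle(mirror_pos, subject_pos):
    """Angle (degrees) a mirror must face to point at the subject."""
    dx = subject_pos[0] - mirror_pos[0]
    dy = subject_pos[1] - mirror_pos[1]
    return math.degrees(math.atan2(dy, dx))

print(pan_angle((0, 0), (1, 1)))   # subject up-right of the mirror: ~45
print(pan_angle((0, 0), (-1, 0)))  # subject directly behind: ~180
```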

In its current form I don’t see any need for improvements. In a new related project, though, it might be interesting to make the mirrors mobile, allowing them to move away when a subject gets too close, giving them the character of a group of shy animals.

Takeo Igarashi - Teddy modeler

[.Avi file link]

Teddy uses a very simple interface to solve a complicated task. It’s not accurate or perfect, but by giving it a sketch-like look it uses this as its charm, turning what some would consider a flaw into a key element. Creating 3D objects becomes easy as the program tries to create shapes out of the lines you draw.

Technically, this is all software and programming; there’s not much to talk about.

It’s buggy and there are still some buttons to be pressed. It would be interesting to try to convert it to a pure movement/gesture-based interface instead.

[Link to notes page @ the wiki]

No comments

My favourite project

1: The growing poster -

  The name is one I have given it myself. It is a poster made with a diffusion-limited aggregation simulation technique borrowed from Golan Levin’s Dendron project (2000). The technique is implemented in Processing and has a kind of “power” to it. I really like the idea of making graphics using tools where you sometimes lose control. I guess the project isn’t a plug-in for the Adobe programs or similar programs, but just a one-time “performance”; turning it into one would definitely improve the project!
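
For reference, the diffusion-limited aggregation idea can be sketched in a few lines (Python here rather than Processing; the grid size, walker count and seed are arbitrary choices of mine):

```python
# Tiny DLA sketch: random walkers wander a grid and freeze when they
# touch the growing cluster, producing the branching "growth" look.

import random

def grow_dla(size=21, walkers=30, seed=1):
    rng = random.Random(seed)
    stuck = {(size // 2, size // 2)}  # seed particle in the center
    for _ in range(walkers):
        x, y = rng.randrange(size), rng.randrange(size)
        while True:
            if any((x + dx, y + dy) in stuck
                   for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))):
                stuck.add((x, y))  # touched the cluster: freeze here
                break
            # otherwise take one random step, staying on the grid
            dx, dy = rng.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
            x = min(size - 1, max(0, x + dx))
            y = min(size - 1, max(0, y + dy))
    return stuck

cluster = grow_dla()
print(len(cluster))  # the seed plus the frozen walkers
```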

No comments

In class assignment - 3 favorite projects

1 - Make a post on this blog - with the title: Your Name - 3 favorite projects.
2 - Pick your 3 favorite projects from the projects you researched in class.
3 - For each project do the following:
3.1 - Post the name of the project - then the link to it
3.2 - Answer the following questions:
What about it interests you?
What technology does it use? List both hardware and software.
How does it work?
How could it be improved?

4 - At the bottom add a link to your notes page.

No comments

Links for inspiration

A list of interactive media related websites. Some of the blogs have a lot of content so dig back through the old pages for some gems.

Artists / Small Companies

Interaction Companies


Education / Institutions


No comments