The final Max patch is complete and works, with some compromises. I think I have explored the original plan in most ways. Seeing people moving in front of the cameras and viewing themselves and the other space was an interesting moment. I’d spent so much time looking at myself whilst working on it that I’d almost forgotten about the different ways people interpret and react to video works and interaction. People are much more unpredictable than first expected when encountering this installation. The two people who helped me test the piece were very expressive and explored the movement of their bodies like crazed dancers; I assume this may be because they knew each other, which made it much easier to ‘play’ in front of the camera. I would like to further explore how two strangers would explore the video projections. I’m assuming this will equally show me how inventive people can be when thrown into a surveillance video scenario.
The scale of the video will be very important to how people immerse themselves in the piece. In my video I have used two computer monitors, but in reality these would have to be wall-sized projections.
At the moment I am not too pleased with the way the video switches and blends between the two feeds. I’ve used a pipe object to hold the bangs coming in from the detection, but I don’t think this is working correctly, and delay gave equal issues. I need to find a better way of making the video change without flickering, so the user has any chance of understanding their own involvement in the work. The saturation level does change when movement is detected; this was much faster than I’d expected and did give some instant feedback for the viewer.
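As a non-Max sketch of the behaviour I’m after: rather than delaying the bangs with pipe, holding the switch for a refractory period would stop the rapid re-triggering that causes the flicker. This Python sketch is an assumption about the logic, not the patch itself:

```python
import time

class Debounce:
    """Pass a trigger through only if a minimum interval has elapsed
    since the last accepted trigger (a refractory period, gating the
    bangs rather than delaying them)."""
    def __init__(self, hold_seconds):
        self.hold = hold_seconds
        self.last = float("-inf")

    def trigger(self, now=None):
        now = time.monotonic() if now is None else now
        if now - self.last >= self.hold:
            self.last = now
            return True   # accept: switch the video
        return False      # reject: still holding the previous state

gate = Debounce(hold_seconds=1.0)
print(gate.trigger(0.0))   # True  - first bang switches the video
print(gate.trigger(0.3))   # False - within the hold window, no flicker
print(gate.trigger(1.5))   # True  - hold elapsed, switch again
```

The one-second hold is a placeholder; it would need tuning so the change still feels like a response to movement.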
I had to make some changes towards the end because of networking issues. Although I was able to get the maxhole object to send numbers over the network, I had problems with sending video. This didn’t turn out to be the biggest issue, as I was able to route all the data by running everything on the one computer. So the final patch runs off one computer, and the position can be changed depending on how the second display is set up. I would like to work this out in the future, as sending video over a network to other rooms and spaces in a gallery would be essential for a piece like this.
I’m in two minds whether adding microphones to the spaces would be useful. The first reason I question this is simplicity. Do I want to overload the spaces, or should I allow the people to be able to hear each other? If I include both video and audio, there are two channels open for the spaces to discuss and engage with the work, but I don’t want either to dominate too much. In fact I think I do: I want the video to be primary and the audio to be secondary. I’ve thought about adding a contact mic to the floor of each space so a person’s footsteps can be heard within the other space, but all communication would be muffled and distorted. Going forward in future iterations of the work, I’d be interested in exploring the sound distortion between spaces, allowing the audio each space picks up to affect the other and be heard in both.
Originally I thought I could just switch between the two video streams by using the toggle to send a message to one or the other, but this had two issues. The first was that the video change was too abrupt and didn’t feel like a decent transition. I will need to find a better way of switching the videos over, maybe using the jit.slide object. The second problem is that, when using one camera per screen, using one input for both the video feed and the motion detection, and using the same data to switch the videos, creates a recursive loop where nothing bangs. The simplest fix is to use two cameras per installation. I’m sure this will also prevent other problems that might come up from using just one, and it will allow me to consider placing one camera somewhere other than directly in front of the user.
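The reason jit.slide might help: as I understand the object, it smooths each pixel towards its new value rather than jumping, using the recurrence y[n] = y[n-1] + (x[n] - y[n-1]) / slide. A quick Python sketch of that per-pixel behaviour, with placeholder values:

```python
def slide(prev, target, slide_factor):
    """One step of the per-pixel smoothing jit.slide applies:
    y[n] = y[n-1] + (x[n] - y[n-1]) / slide_factor.
    A larger slide_factor gives a slower, softer transition."""
    return prev + (target - prev) / slide_factor

# Easing a single pixel value from 0 towards 255 with slide 4:
value = 0.0
for _ in range(5):
    value = slide(value, 255.0, 4.0)
print(round(value, 1))   # 194.5 - most of the way there, no hard cut
```

With slide 1 the switch is instant (the abrupt cut I have now); raising it would turn the switch into a crossfade-like drift.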
I think it would be interesting to bring in the chair from my other installation, chair=chair, or a row of chairs. This would make each space changeable over time and would signify that there has been a presence in the space when no one is in there. I wonder if people would avoid the chairs and choose to stand around? And if the chairs were placed in different locations, maybe facing a wall, would this exaggerate the effect?
Initial thoughts on the setup I’ll need and how I might create the Max patch
Elements needed for the Max patch:
If I needed to show the installation in two different spaces which are not on the same network, I would need to look at IP addresses and ports. I might also have a problem with dropped frame rates, although I’m not looking for a sharp HD image; the distortion in the images would be more interesting.
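As far as I know, maxhole moves its numbers over UDP, so the underlying transport looks roughly like this Python sketch. The loopback address and port 7474 are placeholders, not the installation’s actual settings:

```python
import socket

PORT = 7474  # placeholder port, not a real maxhole setting

# Receiver (standing in for the patch in the other space)
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", PORT))

# Sender (standing in for the patch doing the motion detection)
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"1", ("127.0.0.1", PORT))   # e.g. a motion-detected flag

data, _ = recv.recvfrom(64)
print(data.decode())   # prints "1" - the number arrives in the other space
recv.close()
send.close()
```

UDP suits control numbers like these (small, frequent, loss-tolerant), which is also why full video streams are a much heavier problem than the bangs and flags I was sending.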
I’m thinking of keeping this simple enough so that once someone moves in front of the screen there is a basic change, rather than creating a screen which is gridded off where it’s the corresponding co-ordinates which change. A simple way could be to use a microphone to pick up audio in the space; when the sound reaches a certain level, this changes the video. I could use the different sounds coming in to change different parts of the video, maybe the colour or brightness. The second simple way is to use camera tracking: a simple amount of movement could change what happens in the other space. Stay simple!
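The camera-tracking route amounts to frame differencing: compare successive frames and fire when the average change passes a threshold. A minimal Python sketch of the idea (the frames and threshold value here are invented examples to be tuned per space and lighting):

```python
def motion_amount(frame_a, frame_b):
    """Mean absolute difference between two grayscale frames
    (flat lists of pixel values 0-255). Large values mean
    something moved between the frames."""
    diffs = [abs(a - b) for a, b in zip(frame_a, frame_b)]
    return sum(diffs) / len(diffs)

THRESHOLD = 10  # assumed value; tune per space and lighting

still = [100] * 8                                  # nothing changed
moved = [100, 180, 180, 100, 100, 180, 100, 100]   # some pixels shifted

print(motion_amount(still, still) > THRESHOLD)   # False - no switch
print(motion_amount(still, moved) > THRESHOLD)   # True  - trigger the change
```

The same threshold test works for the audio route: replace pixel differences with microphone amplitude and keep the rest of the logic identical.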
I made this sketch to help me try to explain my idea in a more diagrammatic way. I realised whilst drawing it how, once you start repeating the same things on the other side, it can become confusing. The looped videos will be coming and going a lot; I wonder if I’ll be able to send that much data over a router? I’m intending to use the maxhole object to send data between computers.
The project aims to explore the way humans relate to screens, webcams and the underlying technology, and to highlight the social and technological pressures we are subjected to.
Split into two parts: two identical spaces, each with a projection on the far wall. Space one will project the video and audio feed from Space two, and in turn Space two will project the corresponding data back to Space one, creating a loop between the spaces. When users enter Space one, their movement will be detected, and instead of their movement changing anything on their own screen, it will only interfere with Space two’s video projection. The secondary state will be the video feed from each space’s own room. Some of the questions I’m interested in are: does the user understand their own involvement in the opposite space’s video, or do they think they have changed their own video? If the user becomes aware of their involvement, do they socially co-operate with the other user in the corresponding room? Can information be passed between the two spaces?
© 2013 Jonathan Munro - All Rights Reserved