Interactive

Spacelife

VR environment / installation
2022

Spacelife is an abstract audiovisual environment that can be explored with virtual reality technology.
It is composed of a series of objects created with particle systems. Participants can immerse themselves in a visually rich, dynamic world and discover perceptually unique points of view.
In the installation version, the images coming from the VR headset are modified in real time and then projected onto multiple screens.

Slabs

Audiovisual installation. Plexiglas slabs with polished steel boxes, sensors, audio players and magnetostrictive devices
2010-13

Sounds by Stefanie L. Ku
Realized with the co-operation of Enrico Pellegrini

Photos by Armando Rebatto
Video by Studio Vertov 

This audiovisual installation is composed of six vibrating plexiglas slabs with which the audience can interact. The slabs, suspended by steel cables, are printed on one side and laser-engraved on the other. They are lit both by spotlights and by LED light strips placed on top of each slab and hidden by a polished steel box, so that the engraved parts are highlighted.

Inside each box, two devices transform an audio signal into vibrations by means of a principle known as magnetostriction. The whole plexiglas surface thus becomes a musical instrument and contributes to the aural environment, making the installation a space defined by real audiovisual objects. The sound emitted becomes more intense as the viewer approaches each slab: in this way visitors control its volume, changing the overall musical balance.
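As a rough illustration (not the installation’s actual control code), the proximity-to-volume mapping could look like the following Python sketch; the sensor range and the linear gain curve are assumptions:

    def slab_gain(distance_m, max_distance_m=3.0):
        """Map a viewer's distance from a slab to a playback gain in [0, 1].

        Closer means louder; beyond max_distance_m the slab falls silent.
        The linear curve and the 3 m range are illustrative assumptions.
        """
        distance_m = min(max(distance_m, 0.0), max_distance_m)
        return 1.0 - distance_m / max_distance_m

    # A viewer standing 0.5 m from a slab hears it at about 83% volume.
    print(slab_gain(0.5))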

Technical sheet 1 – Technical sheet 2

Read about it on Ba3ylon.

KL

Interactive installation. Light table and digital prints on plexiglas
180 x 50 cm.
2009

X

Audiovisual game
Variable length
Quadraphonic, 44 kHz, 16 bit
Color, XGA and WXGA, variable fps, 32 bit
2005

Music by Matteo Franceschini
Soprano Silvia Spruzzola, Violin Barbara Pinna
Production AGON

The interactive installation X was commissioned by AGON for the “Festival Iannis Xenakis”, held by Milano Musica at the Milan Triennale, October 27–30, 2005.

X is inspired by the work and thought of the Greek composer Iannis Xenakis. Among the concepts and ideas introduced by Xenakis, we chose one that seemed absolutely contemporary, especially in the digital domain: the idea of a game/composition.

Videogames are played everywhere, as everybody knows. Often, though, they offer little cultural value. We like to think instead that it is possible to create interesting visual and musical architectures starting from videogame technology.

Among all the possible games, we chose “memory”, because it is well known and easy to play.

In this game the user has to find eight pairs of images distributed randomly on a grid. Each pair found generates an audiovisual event, shown on a big screen and heard through quadraphonic speakers. When all pairs are found the composition is complete, but it is built each time in a different order, according to which pairs are found first. The user can then continue to the next level; there are three levels, each with a different work.
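As a hedged sketch of this mechanic (the names and data structures are hypothetical, not taken from the actual software), the pairing logic can be reduced to a few lines of Python:

    import random

    def play_memory(pairs=8):
        """Minimal sketch of the memory mechanic behind X: each matched
        pair fires an audiovisual event, so the order in which pairs are
        found determines the order of the composition."""
        grid = list(range(pairs)) * 2            # two copies of each image id
        random.shuffle(grid)
        found, composition = set(), []
        while len(found) < pairs:
            a, b = random.sample(range(len(grid)), 2)   # the player flips two cards
            if grid[a] == grid[b] and grid[a] not in found:
                found.add(grid[a])
                composition.append(f"event_{grid[a]}")  # an audiovisual event fires
        return composition                       # a different ordering every game

    print(play_memory())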

Another side of Xenakis’ work that has greatly influenced X is the creation of granular events, that is, extremely short events produced in huge quantities to form clouds of singularities, managed with statistical methods.

The idea derives from the physicist Dennis Gabor, who received the Nobel Prize for the invention of holography and who in the late 1940s demonstrated experimentally that a continuous sound can be generated from many discrete micro-sounds, i.e. micro aural “frames”. The idea was then brought to music by Xenakis, and transposed into the digital music domain by Curtis Roads and into computer graphics by Bill Reeves.

The images of X are made entirely with visual granular synthesis, that is, by assembling many particles to create irregular, changing shapes. The particles are generated in real time.
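A minimal Python sketch of the principle, assuming a toy 2D particle model (the real-time graphics of X were of course far richer):

    import random

    class Particle:
        """One visual 'grain': a short-lived point with its own motion."""
        def __init__(self):
            self.x, self.y = random.uniform(-1, 1), random.uniform(-1, 1)
            self.vx, self.vy = random.uniform(-0.05, 0.05), random.uniform(-0.05, 0.05)
            self.life = random.randint(10, 60)      # lifetime in frames

        def step(self):
            self.x += self.vx
            self.y += self.vy
            self.life -= 1

    def step_cloud(particles, births_per_frame=50):
        """Advance the cloud one frame: age every particle, cull the dead,
        and emit a fresh batch, so the shape constantly renews itself."""
        for p in particles:
            p.step()
        particles[:] = [p for p in particles if p.life > 0]
        particles.extend(Particle() for _ in range(births_per_frame))

    cloud = []
    for frame in range(100):
        step_cloud(cloud)
    print(len(cloud), "particles alive after 100 frames")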

This composition technique, together with others introduced by Xenakis, such as glissandi, is also used in the musical part.

n-grains

Interactive environment
Stereo
Black & White, XGA, variable fps
2005

n-grains is a follow-up to the previous work flussi. The goal is similar: to create an interactive audiovisual work where the user controls the amount of audio and video information received, in order to create a balanced audiovisual experience.

The user manages the flow of information with hand movements, which are detected by two infrared sensors and transmitted to the software, which in turn produces the appropriate images and sounds.

There are two major differences compared to the previous work. First, flussi used predefined QuickTime video and audio movies, whereas in n-grains images and sounds are generated in real time, even though certain parameters are predefined: the visual part is created with a particle system and the music is produced with granular synthesis.

Second, in flussi hand movements triggered the sequences in a discrete manner, whereas in n-grains there is a continuous flow of events.

Clearly, the solution offered by n-grains is more flexible and more appealing. It’s a different solution to the same problem.

The first step in this new direction was to understand how to generate a flow of visual particles and audio grains in real time; it was also necessary to control that flow by means of sensors.

After extensive research, the best solution turned out to be a particle system built within Max/MSP/Jitter, coupled with one of the available granular synthesis patches. The sensors are the same ones used in flussi.

The patch is a modified version of the particle_primitives example included in the Max/MSP package, extended to handle a PICT file with an alpha channel. A QuickTime movie file, again with an alpha channel, can also be used. These files define the single particle and have to be crafted carefully to produce a reasonable result. In this way the particle generator can create more appealing visuals than plain dots.

A grain object by Nathan Wolek was also added to generate audio grains. As with the particle system, a sample file is used to generate the stream of grains.
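In spirit, though not in Max/MSP terms, generating such a grain stream from a stored sample looks roughly like the following Python sketch; the Hanning window, grain size and densities are illustrative assumptions:

    import numpy as np

    def grain_stream(sample, sr=44100, grain_ms=40, n_grains=200, length_s=2.0):
        """Sketch of granular synthesis: scatter short, windowed slices of a
        source sample across an output buffer to form a continuous texture."""
        out = np.zeros(int(length_s * sr))
        glen = int(grain_ms / 1000 * sr)
        window = np.hanning(glen)                       # smooth each grain's edges
        rng = np.random.default_rng()
        for _ in range(n_grains):
            src = rng.integers(0, len(sample) - glen)   # where to read in the sample
            dst = rng.integers(0, len(out) - glen)      # where to place it in time
            out[dst:dst + glen] += sample[src:src + glen] * window
        return out

    # Example: granulate one second of a 220 Hz sine "sample".
    t = np.linspace(0, 1, 44100, endpoint=False)
    texture = grain_stream(np.sin(2 * np.pi * 220 * t))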

In dealing with particle systems and granular synthesis, there are a number of parameters to control. n-grains uses the sensors to control only the number of particles/grains, since that is its goal; all other parameters are changed with traditional devices.
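That single sensor-driven parameter can be pictured as a plain mapping; the 8-bit sensor range and the 500-unit ceiling below are assumptions, not values from the actual patch:

    def density_from_sensor(reading, reading_max=255, max_count=500):
        """Map a raw infrared sensor reading to a particle/grain count.

        A closer hand gives a higher reading, hence a denser stream;
        the 8-bit range and the 500-unit ceiling are illustrative assumptions.
        """
        reading = min(max(reading, 0), reading_max)
        return round(reading / reading_max * max_count)

    print(density_from_sensor(128))  # a mid-range hand position -> ~251 particles/grains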

flussi

Interactive audiovisual environment
Stereo, 44 kHz, 16 bit
Black & White, XGA, 30 fps, 8 bit, Animation compression
2002

flussi was created in the summer of 2002 at C&CRS, Loughborough University, UK (now the Creativity and Cognition Studios at the University of Technology, Sydney).

Audiovisual works often suffer from a common problem: attention spontaneously centers on the images more than on the music. To create a more balanced work, the stream of events has to be reorganized so that the music gets as much importance as the images. The perception of audiovisual events, however, is ultimately personal. flussi therefore offers the user the chance to control this balance, using one hand for audio and the other for video. The resulting images are video-projected, while the sound is played through conventional stereo speakers.

In flussi, QuickTime movies are coupled with QuickTime sounds, and the interaction is controlled by the user’s hands, whose distance is detected by two infrared sensors. By moving each hand, the user controls the rate of each stream, video and audio. The whole environment is managed by a Max/MSP/Jitter patch.

flussi is made of a series of sequences, or modules. After the first module is done, the next one is executed and so on.

Each module contains 9 pre-made sounds and 4 pre-made videos. This is what happens:

  • The audio and video movies are read in a specified order.
  • The audio and video movies are stored preserving the correct order. The variable n is set to 0.
  • The sensor values are read. Each sensor outputs one of 5 values: -2, -1, 0 (normal), 1, 2. For each sensor value there is an array of 9 values for sound and 4 for video; the array values can be either 0 or 1.
  • The correct array is chosen.
  • The n-th value of the chosen array is read. If it is 0, the length of that event is read and nothing happens until that time has elapsed. If it is 1, that event is executed.
  • The variable n is incremented by 1.
  • If n has not reached the array length, steps 3 to 6 are executed again; otherwise the next module begins, and so on until the end of the piece (a sketch of this logic follows the example below).

For instance, a module can last 15 seconds, with sound durations of 3, 1, 2, 1, 3, 1, 1, 2, 1 seconds and video durations of 4, 3, 5, 3 seconds.

If the sensor value is 2, 9 sounds are enabled; if it is 1, 7 sounds; if it is 0, 5; if it is -1, 3; and if it is -2, only 1. For videos the mapping is 2 → 4, 1 → 3, 0 → 2, -1 → 1, -2 → 0. In the “worst” case there is only 1 sound and no video, in the “best” case all 9 sounds and 4 videos, with all the combinations in between.
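Putting the rules above together, one module can be sketched in Python as follows; the durations and counts come from the example, while the function names are invented and the per-value arrays are generated randomly here, which is an assumption (the original arrays were presumably fixed):

    import random, time

    # Example from the text: a 15-second module with these event durations.
    SOUND_LENGTHS = [3, 1, 2, 1, 3, 1, 1, 2, 1]   # 9 sounds, in seconds
    VIDEO_LENGTHS = [4, 3, 5, 3]                  # 4 videos, in seconds

    # How many events each sensor value (-2 .. 2) enables, from the text.
    SOUNDS_ON = {2: 9, 1: 7, 0: 5, -1: 3, -2: 1}
    VIDEOS_ON = {2: 4, 1: 3, 0: 2, -1: 1, -2: 0}

    def make_masks(total, on_table):
        """Build one fixed 0/1 array per sensor value; which particular
        slots are switched on is an assumption of this sketch."""
        masks = {}
        for value, on in on_table.items():
            bits = [1] * on + [0] * (total - on)
            random.shuffle(bits)
            masks[value] = bits
        return masks

    def run_stream(lengths, on_table, read_sensor, label):
        """Steps 3-7 of the module loop, for one stream (audio or video)."""
        masks = make_masks(len(lengths), on_table)
        for n, duration in enumerate(lengths):
            bits = masks[read_sensor()]     # the sensor value picks the array
            if bits[n]:                     # 1: play the event; 0: just wait
                print(f"play {label} {n} ({duration}s)")
            time.sleep(duration)            # the slot elapses either way

    # One module, with a hand held at mid distance (sensor value 0) throughout.
    run_stream(SOUND_LENGTHS, SOUNDS_ON, lambda: 0, "sound")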

colori

Interactive environment
6.5 MB, variable length
Stereo, 44 kHz, 16 bit, Shockwave compression, 64 kbit/s
Color, SVGA, 25 fps, 24 bit
2002

colori features an interactive 3D environment the user can explore. The work was conceived just after oggetti.

The world of colori is made of seven objects and seven sounds. Each sound is related to one object: as the user moves around, the sound changes accordingly, including stereo panning (Macintosh version only).

This work is related to the world of games: the interaction is based on a gamepad, a device normally used in video games. This way anybody can use colori at home or while traveling.

At first the objects are mostly black and white and partially transparent. Moving to the very center of the inner object triggers a new view of the world: it becomes colorful and all sounds are produced together.

By clicking the mouse button while in color mode, the user can switch to another set of colors. Another click will switch back to the original set.

oggetti

Interactive 3D environment
1.6 GB, variable length
Stereo, 44 kHz, 16 bit
Color, SVGA, 15/30 fps, 24 bit, Animation compression
2002

oggetti features an interactive 3D environment composed of 18 abstract objects. Navigation and scene manipulation allow users to create their own views: they can dolly, pan and rotate the camera, as well as rotate and move each object. Clicking on an object triggers events chosen from a set of 36 animated sequences and 7 sounds. These events are not interactive. Some images taken from the sequences can be seen in the stills section of this website.

The 3D objects appear shaded, wireframed, or rendered as points, according to their type: shaded objects trigger animated sequences that represent the artist’s view of that object; wireframed objects trigger totally abstract animations; point objects represent sounds.

This work is inspired by flight simulators and is a first attempt to use video game technology within an artistic framework.

artevideo.com

Interactive website
2000

On this website the user can experience three interactive video installations and three QuickTime VR panoramas.

Genetic Art

Interactive software
1997-99

This piece of software creates images based on the genetic concepts of crossover and mutation. Users can save images into a repository and then reuse them as parents.

Software written by Marco Stefani
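As a hedged illustration of the two operations (the genome layout and the rates are invented for the example, not taken from Marco Stefani’s implementation), crossover and mutation over a list of image parameters look like this in Python:

    import random

    def crossover(parent_a, parent_b):
        """Splice two parent genomes (lists of image parameters) at a random point."""
        cut = random.randrange(1, len(parent_a))
        return parent_a[:cut] + parent_b[cut:]

    def mutate(genome, rate=0.1):
        """Randomly perturb each gene with a small probability."""
        return [g + random.gauss(0, 0.2) if random.random() < rate else g
                for g in genome]

    # A repository of saved images, each reduced here to a parameter genome.
    repository = [[random.random() for _ in range(8)] for _ in range(4)]
    parent_a, parent_b = random.sample(repository, 2)
    child = mutate(crossover(parent_a, parent_b))
    repository.append(child)   # saved back, ready to serve as a parent in turn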

Clouds

Interactive environment
Stereo
1989

In the interactive environment Clouds, a DataGlove was used to control audiovisual events previously made with my software AV, running on a Macintosh II. It was possible to navigate an audiovisual hyperspace with one hand, moving through several different audiovisual cells, each showing sequences from the animation Isole, and thus creating a sort of interactive abstract film.

The DataGlove, developed by VPL Research Inc., was considered a very flexible interaction tool, and in my opinion an ideal instrument for controlling audiovisual events. In fact, the DataGlove was neither a musical instrument, like a keyboard, nor a pictorial tool, like a tablet: it was a neutral device.

The hyperspace could be imagined as a 2-meter-wide cube, located in the real world and made of little cells. Each cell, positioned along the three axes x, y, z, “contained” a micro-sequence (0.5 to 4 seconds) of audiovisual events: animations and synthetic sounds. Adjacent cells were related to one another; without spatial coherence among cells the navigation would have been meaningless.
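The cell lookup can be sketched in Python as a simple quantization of the glove position inside the cube; the 8 × 8 × 8 resolution is an assumption, since the text only gives the cube’s size:

    def cell_index(x, y, z, cube_side=2.0, cells_per_axis=8):
        """Map a hand position (meters, cube-local coordinates) to a cell.

        The cube is divided into cells_per_axis^3 little cells; the 8x8x8
        resolution is an illustrative assumption. Each cell holds a
        0.5-4 s micro-sequence of animation and synthetic sound.
        """
        def axis(v):
            i = int(v / cube_side * cells_per_axis)
            return min(max(i, 0), cells_per_axis - 1)   # clamp to the cube
        return axis(x), axis(y), axis(z)

    # A hand near the cube's center selects the middle cell.
    print(cell_index(1.0, 1.0, 1.0))  # (4, 4, 4)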