Blip Shaper, Creating Patterns and Shaping Sounds

In this build I am creating drum patterns and shaping the individual sounds that make up those patterns. In the process of doing this I am also recording the audio to a buffer and tweaking the buffer with cut-up, granular and other effects.

It has been a while since I posted progress on the Subcycle project, so I created a walk-through to get readers caught up on some of the new features. I have shifted to Max for Live on this iteration and have begun using Eclipse to organize the Processing/Java code. The transition took a long time, but it should make it easier to update functionality going forward.

For the drum sounds I have Drumaxx running for synthesized sounds and Battery running for sampled sounds. These run in parallel, so for each voice there is a separate patch in each VST. The parameters are modified independently with the touchscreen, but in all cases a single touch gesture on the x-axis will crossfade between the sampled version of the sound and the synthesized version. I love this because I have never seen it done before, and I can never decide which technique I like better: the synthesized drums are more malleable and have more interesting parameters to play with, but the sampled sounds seem more substantial. I will post a detailed list of parameters and gestures in the future.
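
To give a rough idea of the crossfade half of this, the gesture boils down to an equal-power fade driven by the normalized x position. This is only a sketch in Processing/Java syntax; sendCC and the CC constants are placeholders for however the control data actually reaches the two VSTs:

    // Equal-power crossfade between the sampled and synthesized
    // versions of a voice, driven by the touch x position.
    void crossfade(float touchX, float screenWidth) {
      float t = constrain(touchX / screenWidth, 0, 1);
      // The equal-power curve keeps perceived loudness steady mid-fade.
      float sampledGain = cos(t * HALF_PI);
      float synthGain   = sin(t * HALF_PI);
      sendCC(CC_SAMPLED_LEVEL, round(sampledGain * 127));  // to Battery
      sendCC(CC_SYNTH_LEVEL,   round(synthGain * 127));    // to Drumaxx
    }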


Shaping a clap event with a single touch point. Delay effect is applied.


Shaping a sub bass event with dual touch controlling pitch and distortion.


Adjusting delay on a distorted clap with a three-fingered gesture.


Shaping a bass drum.


Navigating the audio buffer of recently recorded events.

I really like the idea of capturing the sampled events into a buffer and applying cut-up effects to that buffer. In the past I have always relied on drum loops as the basis for a cut-up technique. This is much cooler because the provenance of the sample is completely irrelevant and the result seems more authentic. In my mind this sort of validates the technique.
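
The capture side of this can be as simple as an always-on circular buffer; the cut-up and granular effects then read slices from arbitrary offsets within it. A bare-bones sketch of the idea (the buffer length and the names are mine, not the actual implementation):

    // Always-on circular recorder: the last ~8 seconds of drum
    // events stay available for cut-up playback at any time.
    float[] ring = new float[44100 * 8];
    int writeHead = 0;

    void record(float sample) {
      ring[writeHead] = sample;
      writeHead = (writeHead + 1) % ring.length;
    }

    // Read a sample from some point in the recent past; cut-up
    // effects retrigger and reorder slices built from reads like this.
    float readPast(int samplesAgo) {
      return ring[Math.floorMod(writeHead - samplesAgo, ring.length)];
    }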


Windowing a granular effect on the buffer.


Pulled-back view of the pattern view and overhead visualization.

autopilot and recording phrase-based gestures

The most important thing about this build is the ability to record the movements of the performer in a looped cycle, freeing hands and fingers for additional layering.

I went into this phase of the project imagining that I would add auto-play functionality and call it a day. I definitely went a lot deeper than I expected. I changed a lot visually for this sketch, mainly because I was getting bored with the saturated additive look of the previous sketches and wanted to try something different.

autopilot – subcycle labs from christian bannister on Vimeo.

I have created a toggle on my mixer that activates autopilot. Once autopilot is activated, the instrument records the performer's movements, similar to mixer automation, in a looped sequence of about four bars. When the cycle loops back, the recorded movements play on their own. Two really nice things about this: a) the performer can multitask and do more complex things, including playing another instrument; b) the rhythms created in Subcycle can become more like phrases instead of one-off glitch gestures. I am planning to add more autopilot functionality as I come up with more features.
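
Conceptually, autopilot is automation recording applied to touch data: store each gesture against the bar clock, then replay it when the cycle wraps. A stripped-down version might look like this (the event structure, loop length and applyGesture are assumptions based on the description above):

    // Record touch gestures against the bar clock, then replay
    // them every time the ~4-bar cycle comes back around.
    class GestureEvent {
      float beat, x, y;   // position in the loop, normalized touch point
      GestureEvent(float beat, float x, float y) {
        this.beat = beat; this.x = x; this.y = y;
      }
    }

    ArrayList<GestureEvent> loop = new ArrayList<GestureEvent>();
    float loopBeats = 16;  // 4 bars of 4/4

    void onTouch(float beatClock, float x, float y) {
      loop.add(new GestureEvent(beatClock % loopBeats, x, y));
    }

    // Called every frame with the current slice of the loop; fires
    // any recorded gestures that fall inside it.
    void playback(float fromBeat, float toBeat) {
      for (GestureEvent e : loop) {
        if (e.beat >= fromBeat && e.beat < toBeat) {
          applyGesture(e.x, e.y);  // hypothetical re-dispatch of the touch
        }
      }
    }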

The landscape sound texture evolved out of some earlier sketches, and I am still honing that aspect of the project. I like the diversity and the spatial experience, but I have seen a lot of landscape-based projects. I am applying distortions and warps to the 3D landscape in something like a signal chain for audio. Right now they are in a fixed order; in other words, distortion is always first, delay second, and so on. I would like to make this visual signal chain reconfigurable just like the audio, but that will have to happen another day.
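
Making the visual chain reconfigurable is mostly a data-structure problem: if each distortion or warp implements a common interface, the chain becomes an ordered list that can be rearranged at runtime. A sketch of that direction (all of these names are illustrative):

    // Visual effects as a reorderable signal chain: each stage
    // transforms the landscape mesh in place, in list order.
    interface VisualEffect {
      void apply(PVector[] mesh);
    }

    ArrayList<VisualEffect> chain = new ArrayList<VisualEffect>();

    void renderChain(PVector[] mesh) {
      for (VisualEffect fx : chain) fx.apply(mesh);  // order = patching
    }

    // Re-patching is then just a list operation, e.g. moving the
    // delay visual ahead of the distortion visual.
    void swapStages(int a, int b) {
      java.util.Collections.swap(chain, a, b);
    }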

Notable features:

  • The recording of phrase-based gestures.
  • A landscape sound texture visualization with visual effects signal chain.
  • JLC Mini Desk mixer integration.
  • New visual design direction.


Autopilot rhythm navigation allows the performer to apply effects.


Whole-note delay with lots of feedback – visual effect.


Landscape texture represents sound texture.


Ambient landscape texture.


16th-note delay.


Two aspects of the bitcrush distortion (bandwidth and resolution) visualized.


Scrubbing the sound.


Granular effect that does not look like bats!


Filter effect in conjunction with scrub tool (lethal combo).

I pulled out my JLC Mini Desk to allow me to navigate the song sections. I was using a Kaoss Pad before, only because I needed the 8 buttons along the top. There are over 100 buttons on the Mini Desk, and while they are very small, I am liking the experience. I am going to pick up two SpaceNavigator 3D joystick/mouse controllers and work with those in the next build. The joystick array is giving me problems with my USB bus, and I want more precise control in the next round of updates.

It has been a while since I last posted, mainly because I have been getting some things sorted in the studio. I put together a new computer with an i7 and a GTX 275 graphics card, which allows me to push the visuals quite a bit. My old card was a 9800 GT, so the difference is pretty huge. The new configuration is also small form factor, so I can transport it easily. I was looking at laptops but decided to go with a miniature desktop (micro-ATX) because I really need the full-size graphics card. I am running the OS and presentation software from an SSD, which speeds things up as well. I also upgraded to Snow Leopard, and it was a little rough.

steeplechase mode / integrated presentation

The most important new development in this build is the ability to switch between presentation modes. All of the sketches that I have developed to this point can now be performed in the same session.

It has been a long time since I have posted anything here. I was working pretty hard through the end of the year and didn't have a lot of time for this project. I did manage to get some time in January to seriously clean up my code. This more formal approach will make it possible to become a lot more prolific. It also makes it possible to switch between presentation modes (formerly known as sketches).

I am calling the new presentation mode steeplechase because the behavior of the notes reminds me of a steeplechase event. This is basically a spatialized step sequencer. When the notes come into the foreground and strike the line in the center of the screen, a sound is triggered. As the various characteristics of these notes' sounds change, so does the appearance of the note entities as they fly through space. For instance, when the FM pulse width is modified, the edges of the note become wider and rougher.
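
The trigger logic behind those flying notes amounts to a depth test every frame: once a note entity crosses the line, it fires exactly once. Something like this, where triggerSpark is a stand-in for the actual note-on into Reaktor Spark:

    // Each sequenced note flies toward the viewer; crossing the
    // center line (z = 0 here) triggers its sound exactly once.
    class NoteEntity {
      float z = -2000;        // starts deep in the scene
      int pitch;
      boolean fired = false;
    }

    void updateNotes(ArrayList<NoteEntity> notes, float speed) {
      for (NoteEntity n : notes) {
        n.z += speed;
        if (!n.fired && n.z >= 0) {
          n.fired = true;
          triggerSpark(n.pitch);  // hypothetical note-on to Spark
        }
      }
    }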

steeplechase – subcycle labs from christian bannister on Vimeo.

I should say that this new mode uses Reaktor Spark as its sound source. I have been using Spark for plucked sounds in a lot of my compositions, and I wanted to dedicate this visualization to that instrument. I built the step sequencer in Max and went from there. I am still not using Live, but I have the feeling that embedding multiple VST instruments would be a whole lot easier with a Max/Live combination. I bought a license for Live, but I am still not ready to dedicate the months of time it will take to port everything over.

The basic functionality of steeplechase mode (a rough sketch of the touch-count routing follows the list):

  • Single touch: the y-axis controls pulse amount, the x-axis controls sine amount
  • Dual touch adjusts the 8-pole filter center frequency and left-right offset
  • The Clear option allows the performer to remove rows of notes by swiping their available hand over the note diagram on the left of the screen
  • The Draw option allows the performer to add notes by using their available hand to select a location and release the note directly into the visualization
  • The Pause option pauses the steeplechase visualization and the resulting sound
  • Three touch points control the cabinet/distortion effect in Spark
  • Four touch points control the reverb size
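
Since every one of these mappings keys off the number of active touch points, the routing can reduce to a single dispatch on that count. Here is a simplified sketch of the idea; setParam and the parameter names are placeholders, not the actual Max/Spark targets:

    // Route touch data by the number of active points, mirroring
    // the mapping list above. setParam() is a hypothetical bridge
    // to the Spark parameters.
    void routeTouches(ArrayList<PVector> pts) {
      switch (pts.size()) {
        case 1:  // y = pulse amount, x = sine amount
          setParam("pulse", pts.get(0).y);
          setParam("sine",  pts.get(0).x);
          break;
        case 2: {  // filter center frequency and left-right offset
          float center = (pts.get(0).x + pts.get(1).x) / 2;
          setParam("filterCenter", center);
          setParam("offset", abs(pts.get(0).x - pts.get(1).x));
          break;
        }
        case 3: setParam("cabinet",    pts.get(0).y); break;
        case 4: setParam("reverbSize", pts.get(0).y); break;
      }
    }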


I am working on a project documentation page that will include walkthrough videos and better descriptions of what I am doing. Hopefully I can get this wrapped up this week.

multi-touch the sound storm

Things are starting to sound more song-like and I can really appreciate that. In previous builds everything sounded more like an experiment or a demo. Now I have something more akin to an experimental song.

I am finding it more and more difficult to separate my music from what I am doing with the instrument. I guess they are going to be presented/performed together for now.

Sometime soon I would like to do a long series of remixes and see how diverse I can make these interpretations of a single song. It would be really cool to explode a song into hundreds of different non-linear interpretations.

multi-touch the storm – interactive sound visuals – subcycle labs from christian bannister on Vimeo.

Many additions in this build:

  • 3D sound visuals for the texture of the sound.
  • Switching between song segments with a controller (old Kaoss Pad in front of the screen), and no, I am not using Live (yet)
  • Random color when new sound is selected
  • Integration of the joystick array (my Costa Rica project 2007.12.28)
  • Real-time effect controls (distortion, bitcrush, delay/feedback, timestretch FX)
  • Zoom and rate control of the sound
  • One shot effects on USB keypad next to the controller

I have not been able to take the time to create a comprehensive description of the things that I am doing here, but I will eventually get to that. I realize a lot of people will look at this project and have no idea what is going on. For the moment I just want to stay motivated and keep pushing things. In the next few months I will be working on some proposals that will require me to sit down and write that comprehensive description anyway.

All of the seriously cool titles that are used to describe things like sound waves, touch and storms are taken. If anyone reading this has a good title for this project, please suggest one.


I have a new projection film! This is great stuff… Digiline Contrast, and still worth it after the 100-euro shipping charge. The image is hugely improved. If you look at the older videos, they seriously look crappy compared to the new and improved screen.

scrub it, slice it, granular synthesize it

Pet patch, new features… Way too many features and now the UX is completely messed up. I guess this is a good place to be.

Seriously this project is at the point where I need to start solving some problems. This newest build has multiple modes and it is starting to get a little unwieldy. I am going to be adding many new features and I need to start coming up with more gestural ways to switch modes.

Touch Loop Navigator from christian bannister on Vimeo.

As it stands right now:

  • single touch – loop navigator
  • forward/rewind – two fingers upper half of screen
  • scrub – two fingers lower half of the screen
  • four fingers – granular synthesis
  • first nav toggle – beat slicer
  • second nav toggle – reverse

I am really concentrating on getting all of these features implemented right now. Once everything is available, I can start to explore more musical and gestural ways of controlling things.

touch loop navigator

The essence of a new instrument is simply an elegant and effective method for manipulating sound.

In this really early sketch I am trying to find the right gesture for changing the offset and loop duration of a sample. I am finding that this has the potential to be extremely visual. By touching the region of the sound that you want to play next, playback jumps to that point. Moving your touch higher on the y-axis increases the playback rate. I tried various approaches, including the multi-touch zoom technique, but these seemed too disconnected and slow. A touch may not play immediately because playback always remains on time to a certain extent; it is more or less quantized. How this differs from pulling up an audio editor and setting loop points on the fly, or triggering audio samples, should be clear if you have tried those approaches.
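
The "more or less quantized" part means a touch never retriggers playback instantly; it schedules the jump for the next quantize boundary so everything stays on time. A stripped-down version of that logic (the grid, setRate and seekLoop are assumptions for illustration):

    // Touching a region schedules a jump to that offset; the jump
    // only happens on the next quantize boundary.
    float pendingOffset = -1;  // normalized loop position, < 0 = none

    void onTouch(float normX, float normY) {
      pendingOffset = normX;               // play from here next
      setRate(map(normY, 1, 0, 0.5, 2));   // higher touch = faster playback
    }

    // Called by the clock on every quantize boundary (e.g. each beat).
    void onQuantizeBoundary() {
      if (pendingOffset >= 0) {
        seekLoop(pendingOffset);           // hypothetical jump into the loop
        pendingOffset = -1;
      }
    }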

multi-touch rhythm navigation (subcycle labs) from christian bannister on Vimeo.

Next steps here are to extend these gestures to include reverse, fast forward, beat slicing and granular synthesis.

low frequency entity

What would the bass look like? What would it be like to touch it and manipulate it directly and visually in real-time? These are some of the things I am trying to get at in this sketch.

I really wanted the form to be about the characteristics of the sound, rather than something that only responds to the audio itself, like the now-standard FFT-based sound visualizer. For example, the filter is being modulated here, and when the rate of the modulation increases, so does the rate of vibration of certain aspects of the form. Easy to see… hard to explain.

I have created a spline-based 3D form in Processing to represent the bass frequency and put it in motion. The form is more about movement than anything else. It is constantly in flux.

 

low frequency entity – subcycle labs from christian bannister on Vimeo.

I am not analyzing the audio here at all. I am only using the settings on the bass synth to drive the visuals. I broke out certain "effective" parameters from the NI Massive bass patch and worked with those to define the visual characteristics of the entity. I am looking at an LFO modulation of the filter, a bitcrush effect, a sample-and-hold effect, feedback, an oscillator phase parameter, and an FM synthesis control. This gives me a huge palette to work with. I should say I am also sending MIDI note information to Processing and having that vibrate the form as well.
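
On the Processing side, parameter and note data like this typically arrives over MIDI; The MidiBus library is one common way to receive it. The device index and CC assignments below are assumptions for illustration, not the actual patch:

    import themidibus.*;  // The MidiBus library for Processing

    MidiBus bus;
    float wobbleRate, crush, vibration;

    void setup() {
      bus = new MidiBus(this, 0, -1);  // first MIDI input, no output (assumed)
    }

    // CC numbers stand in for the broken-out Massive parameters.
    void controllerChange(int channel, int number, int value) {
      if (number == 1) wobbleRate = value / 127.0f;
      if (number == 2) crush = value / 127.0f;
    }

    void noteOn(int channel, int pitch, int velocity) {
      vibration = velocity / 127.0f;  // notes kick the form's vibration
    }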

Once the visual starts making sense then I can begin interacting with it and finding musical ways to engage it or play it like an instrument.

A single touch here simply rotates the form in 3D. This seemed really important to me for some reason, and that could easily change down the road, but for the moment it seems like a really good idea to be able to navigate the form in 3D. It feels like it brings a kind of clarity to the whole thing that is hard to find otherwise.

When there are two touch points registered, the distance between the two points controls the LFO modulation rate. In other words, when the fingers get closer together the wobble speeds up, and when the fingers spread apart the wobble slows. After playing with this a lot of different ways… this just felt right. In this two-finger mode I have also defined bitcrush and sample-and-hold effects. When the left touch point moves toward the bottom of the screen, the bitcrush effect is turned up.

When the right touch point moves toward the lower part of the screen, the sample-and-hold effect is turned up. The relationship between these y positions also affects the feedback of the sound. These effects are also breaking up the visual and adding randomness to the elements that make it up. I found it necessary to sort the points horizontally so that the leftmost point is always distortion, even when the touch points cross. This is kind of hard to explain, but it is important because otherwise you are constantly trying to figure out which finger is doing what.
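
That horizontal sort is a small detail that matters a lot in practice. It comes down to a few lines (setParam and the value ranges are illustrative):

    // Sort the two touch points by x so the leftmost always drives
    // bitcrush and the rightmost always drives sample-and-hold,
    // even when the fingers physically cross.
    void twoTouch(PVector a, PVector b) {
      PVector left  = (a.x <= b.x) ? a : b;
      PVector right = (a.x <= b.x) ? b : a;

      // Finger spread -> LFO rate: closer together = faster wobble.
      float spread = PVector.dist(left, right);
      setParam("lfoRate", map(spread, 0, width, 2.0, 0.1));

      // Lower on screen = more effect (y grows downward in Processing).
      setParam("bitcrush",      left.y  / height);
      setParam("sampleAndHold", right.y / height);

      // The relationship of the two y positions drives the feedback.
      setParam("feedback", abs(left.y - right.y) / height);
    }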

This is a great moment for me. I really like the huge variety of sounds and visuals that I am getting with the interface in this mode. There is also this cool dramatic moment when both touch points slide to the bottom of the screen and things really melt down.

When there are three touch points I begin affecting oscillator phase and FM synthesis, as well as feedback. This is still a little rough and not completely under control at this point. It is still a lot of fun to hear these effects, and I need to figure out a more clever way to work with three touch points.

I love this sketch because it is all about manipulating a sound in real time, visually. It is brutally simple in some respects but completely cool at the same time. I am not actually manipulating the notes here, and I think that would be an improvement. I want to try to combine what is going on here with a touchscreen step sequencer in a later iteration, so that you will be manipulating the notes as well as the other characteristics of the sound. I am thinking about this as a split presentation, but that could change also.