Pad Player is an audio sampler written in Java using Processing. Though simple at first glance, the Pad Player concept gives the user unique options for deeper customization of the sampler's behavior.

Inspired by other grid-based musical interfaces like the iconic MPC sampler, the Monome controller, and Ableton Live’s Session View “Clip” paradigm, I wanted to create a sampling interface that would allow the user to script actions when different pads were triggered.

A pad can be pressed and play a sound that the user loads, like any other sampler. But the user can also designate a “next pad” action that will trigger another pad on the next beat. The tempo, in beats per minute, is set by the user via a slider. The “next pad” functionality is accessed via an Edit Mode screen.

A pad, after being hit, can trigger another pad on the next beat in the following ways: up, down, left, right, specific pad number, random pad number, stop (no trigger).
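A minimal sketch of how those “next pad” actions might resolve on a 4x4 grid. This is not the actual Pad Player source; the action names, the wrap-around behavior, and the class name are assumptions for illustration.

```java
import java.util.Random;

public class NextPadDemo {
    static final int COLS = 4, ROWS = 4;

    // Resolve which pad index (0-15) fires on the next beat, or -1 to stop.
    // Directional moves wrap around the grid edges (an assumption here).
    static int nextPad(int current, String action, Random rng) {
        int col = current % COLS, row = current / COLS;
        switch (action) {
            case "up":     return ((row + ROWS - 1) % ROWS) * COLS + col;
            case "down":   return ((row + 1) % ROWS) * COLS + col;
            case "left":   return row * COLS + (col + COLS - 1) % COLS;
            case "right":  return row * COLS + (col + 1) % COLS;
            case "random": return rng.nextInt(COLS * ROWS);
            case "stop":   return -1; // no trigger: the chain ends here
            default:       return Integer.parseInt(action); // specific pad number
        }
    }

    public static void main(String[] args) {
        Random rng = new Random();
        System.out.println(nextPad(0, "right", rng)); // 1
        System.out.println(nextPad(5, "down", rng));  // 9
        System.out.println(nextPad(15, "0", rng));    // 0: last pad loops back
        System.out.println(nextPad(3, "stop", rng));  // -1
    }
}
```

Chaining these lookups once per beat is all it takes to get the looping-sequence behavior described below.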

This “next pad” functionality, combined with creative use of the beats-per-minute slider and specific sample selection, can create a variety of usage scenarios.


A pad that is triggered to play can have its own trigger, which can cause a chain reaction. Pressing pad 1 plays its sample and can trigger pad 2 on the next beat, and so on, stepping through to the last pad, which can then trigger pad 1 again in a loop. Load piano or instrument samples to sequence a phrase, or load selections of a loop to “slice” it up among pads and play it in order.

Half and Half

Build a sequence of 8 pads using the method above, but reserve the other 8 pads for “stop” actions. The 8-step sequence can play and loop, while the 8 other samples can be “one shot” for live performance. Pad-by-pad selection of the triggers allows any layout of this concept: top/bottom, left/right, or any random combination.

Fast Layers

High beats-per-minute settings are allowed. Configure a chain of triggering pads that cascade within a second. Small snippets of sound can be loaded and played in a rapid sequence that mimics an arpeggiator on a synthesizer. Longer samples can seemingly “layer” on top of each other: multiple pads triggered so quickly that they appear to play simultaneously.


One sample sliced up and distributed among multiple pads, all assigned to random triggering at high speed, could produce results similar to granular synthesis. Soft, in-tune synth samples, all set to random triggering at a very slow tempo, could create a randomized ambient soundtrack. Or have pads at the top cascade down in sequence, where hitting the last row triggers a random pad, which cascades down again.

Certain choices made at the concept phase guided development decisions. I wanted the option to build for multiple platforms, have the end product be easily customizable by the user, and have customized sessions be able to be saved and loaded.

The Processing library for Java provided the ability to target Windows, OSX, Linux, Android, and JavaScript via the Processing.js mode.

I wanted to avoid hard-coding features or layouts: the number of pads can be raised or lowered easily via a variable. Pad “Resting,” “Hit,” and “Trigger” colors can be customized on a pad-by-pad basis by the user. The object-oriented nature of the program allows the “pad” to be drawn in any way: circles, triangles, images, etc. The entire state of the program, including pad x/y position and sample loading (via a simple string filepath), is stored in an XML file that users can save, load, or edit by hand if they wish.
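A session file in this scheme could look something like the fragment below. The element and attribute names here are hypothetical, chosen to illustrate the idea, not the actual Pad Player schema.

```xml
<!-- Hypothetical session file: names are illustrative, not Pad Player's schema. -->
<session bpm="120" cols="4" rows="4">
  <pad id="0" x="20" y="20" sample="samples/kick.wav"
       restColor="#444444" hitColor="#FF8800" triggerColor="#FFFFFF"
       next="right"/>
  <pad id="1" x="120" y="20" sample="samples/snare.wav"
       next="stop"/>
</session>
```

Because everything the renderer needs lives in one flat file, hand-editing a session is as viable as using the in-app Edit Mode.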

The XML “state” concept combined with the pad object architecture allows for a great amount of flexibility when considering other features in the future. Though not yet implemented, a fairly robust “WYSIWYG” layout editor could give the user fuller control over pad position, size, and shape. The rendering loop of the program simply re-draws the state of the loaded XML.

Reviewing and selecting Java sound libraries suitable for performance made me skeptical of being able to port my end results to Android and JavaScript. Given the sound libraries available for Android at the time, and the platform's general audio issues in terms of musical performance, I decided to avoid it. I chose JavaScript as my ‘alternate platform’ to Windows, OSX, and Linux. In addition, if I could make Pad Player work in JavaScript, there was ultimately the possibility I could make it work on Android.

JavaScript mode, despite its specific quirks, turned out to be a success. I was happy to complete a feature-matched JS version of the Java app. The Java-to-JS conversion provided by Processing worked well, as long as I kept the inherent limits of JS compared to Java in mind when designing.

I was happy to go a little bit deeper into Processing than the standard ‘sketch’ attitude the platform might initially guide a programmer towards. Getting to flex my OO muscles was good for my programming habits, and generally pushing an idea towards something more complete was satisfying. While there are still some quirks in terms of bugs, coding decisions, and general usability concepts, I was happy to design a tool that could produce creative results now while also having a strong foundation to expand feature-wise if there was enough interest.

A combination of approaches can be applied for fun and surprising results.

Cell Sound is a prototype interactive concept that involves physics and sound, programmed in C++ using the openFrameworks library, the Box2D library, and the Tonic Audio library. I wanted to get to know openFrameworks, and had an interest in creating an abstract interactive experience where physics-controlled entities triggered sounds. While familiarizing myself with Box2D, I wound up using circles as my initial testing shapes. Playing with the gravity and speed of the simulation to produce optimal results from the triggered sound, I was inspired to choose a “cellular” visual metaphor: circles, bouncing and floating in a space whilst being analyzed.

Small red circular cells and smaller blue rectangular cells float and bounce together in a scene. When crossing the horizontal middle of the screen, red colors trigger certain melodic sounds and blue colors trigger noisy percussive sounds. The x position of the red entities determines the notes, which go from lower to higher as the x position increases. The x position of the blue entities affects certain left/right panning and delay processing. Each entity can be clicked and dragged by the user to orchestrate sound, and the spacebar pushes all stage entities to the center of the screen. This triggers its own dull, filtered noise sound and a red “flush” in the background of the scene, perhaps suggesting the beating of a heart, or the rush of blood through the veins that carry these cells we observe.
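The x-to-sound mappings could be sketched as below. This is shown in plain Java for illustration (the actual project is C++ with Tonic); the pentatonic scale, base pitch, and function names are assumptions, not taken from Cell Sound.

```java
public class CellPitchDemo {
    // Semitone offsets of a minor pentatonic scale: an assumed choice that
    // keeps randomly triggered notes consonant with each other.
    static final int[] SCALE = {0, 3, 5, 7, 10};

    // Map x in [0, width) to a note frequency: lower notes on the left,
    // higher on the right, spanning three octaves from A2 (110 Hz).
    static double xToFreq(double x, double width) {
        int steps = SCALE.length * 3;
        int i = (int) (x / width * steps);
        if (i >= steps) i = steps - 1;
        int semitones = (i / SCALE.length) * 12 + SCALE[i % SCALE.length];
        return 110.0 * Math.pow(2.0, semitones / 12.0);
    }

    // Map x to a stereo pan position in [-1, 1] for the percussive entities.
    static double xToPan(double x, double width) {
        return 2.0 * (x / width) - 1.0;
    }

    public static void main(String[] args) {
        System.out.println(xToFreq(0, 640));  // 110.0: leftmost = lowest note
        System.out.println(xToPan(320, 640)); // 0.0: horizontal center
    }
}
```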

Cell Sound

Demonstration of physics

Having had some exposure to openFrameworks (oFX) in a PC environment, I was happy to develop on Mac OSX in order to familiarize myself with Xcode. oFX offers certain speed and processing advantages over Java and Processing, but not without getting used to the differences of C++. Combined with wanting to incorporate two fun-looking libraries, I was excited to change up my routine in terms of environment, IDE, and language. oFX is also a desirable library due to its ability to create iOS apps, though I haven’t taken advantage of that compile target yet.

Box2D provided a robust physics engine that was deep but also easy to use. I knew that it was widely used, but having the pleasure to simply play with the options available truly convinced me of its utility. The choice of audio library took a little more searching. I wanted something that offered robust options and control over sound synthesis. Tonic showed itself to have a great conceptual paradigm, close to the methods of actual synth design and production, as well as creative potential for creating unique voices and processing effects.

As the simulation runs, a horizontal one-pixel row is continuously scanned for its color content. I could have achieved a similar result using collision detection in Box2D, but my initial concepts and experiments involved webcams and image analysis, and I wanted to keep that conceptual door open for future variations on this prototype. Certain color thresholds at certain points in the pixel array trigger the appropriate synth voices with correlated effects. This encourages the user to ‘sweep’ along the field, furthering the potential for collision and bouncing. oFX’s C++ advantages were on display: constant analysis of the pixel array, multiple physics bodies interacting with each other, and live sound synthesis are all handled at solid frame rates on my machine.
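The color-threshold test on the scanned row might look like the following. Again this is a Java sketch for illustration (the project itself uses C++/openFrameworks pixel access); the packed-ARGB format, threshold values, and class name are assumptions.

```java
public class ScanLineDemo {
    // Classify each pixel in the scanned row: 'R' for red-dominant
    // (melodic trigger), 'B' for blue-dominant (percussive trigger),
    // '.' for background. Thresholds are illustrative assumptions.
    static char[] classify(int[] row) {
        char[] out = new char[row.length];
        for (int i = 0; i < row.length; i++) {
            int r = (row[i] >> 16) & 0xFF; // packed 0xAARRGGBB
            int b = row[i] & 0xFF;
            if (r > 180 && r > b + 60)      out[i] = 'R';
            else if (b > 180 && b > r + 60) out[i] = 'B';
            else                            out[i] = '.';
        }
        return out;
    }

    public static void main(String[] args) {
        // A red cell, dark background, and a blue cell along the scan line.
        int[] row = {0xFFCC2020, 0xFF101010, 0xFF2020CC};
        System.out.println(new String(classify(row))); // "R.B"
    }
}
```

Each 'R' or 'B' hit would then fire the matching synth voice, with the pixel's index in the row supplying the x position for pitch or panning.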

Being able to get practical experience with C++ via oFX was good. But having chosen two libraries that made variation and experimentation so easy and fun really made this experiment satisfying to work on. Not only did I leave myself open to lots of potential ‘riffs’ off of this concept, I couldn’t help but consider them mid-development, which in turn guided certain artistic decisions. This is the kind of workflow I desire, where the creation and programming of these applications can be as improvisational and ‘live’ as other traditional artforms. A certain level of programming fluency will need to be reached to achieve that reality, but knowing your tools and picking the right ones helps a lot too.