Weng's World

Purpose

This site is dedicated to ideas and implementations generated by me whilst taking the computer audio and music programming course (CAMP07) at UC Irvine.

Project: Melody Synthesis Package

For the project I developed a package of Java Max objects (mxj) to facilitate melodic synthesis within the Max environment. The basic premise is that melodies are very structured sequences of pitches that can be synthesized/described by performing various mathematical operations on them. This package hopefully provides a sufficiently rich set of objects to manipulate melodies in ways that might be useful to a composer.
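To illustrate the premise, here is a minimal sketch (in plain Java, not the actual API of the "melody" package — class and method names here are my own) of treating a melody as an integer pitch sequence and applying two of the classic mathematical operations:

```java
import java.util.Arrays;

public class MelodyOps {
    // Shift every pitch by a fixed interval (transposition).
    public static int[] transpose(int[] pitches, int semitones) {
        int[] out = new int[pitches.length];
        for (int i = 0; i < pitches.length; i++) out[i] = pitches[i] + semitones;
        return out;
    }

    // Mirror every pitch around a center pitch (inversion).
    public static int[] invert(int[] pitches, int center) {
        int[] out = new int[pitches.length];
        for (int i = 0; i < pitches.length; i++) out[i] = 2 * center - pitches[i];
        return out;
    }

    public static void main(String[] args) {
        int[] motif = {60, 62, 64, 65};                           // MIDI: C D E F
        System.out.println(Arrays.toString(transpose(motif, 7))); // up a fifth
        System.out.println(Arrays.toString(invert(motif, 60)));   // mirrored around C
    }
}
```

Composing such operations (transpose an inversion, invert a transposition, and so on) is the kind of manipulation the package's objects expose inside Max.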

The source code is available here. The manual for the package is available here. To install the package, first compile the Java sources within the "melody" directory. This requires the max.jar library from the "Cycling '74\java\lib" directory. Then move "melody" (now containing both class and Java files) into "Cycling '74\java\classes". Note that the containing directory "melody" is required during compilation and installation of the classes because all the classes are part of the package "melody".

The following examples demonstrate how the package can be used to generate melodies.

The following test patch shows all currently available objects.

Max Assignment 5 (Mar 8) Contrast

For this assignment I implemented a sound filter (as an mxj~ MSP object) that tries to alter the "contrast" of input sound signals. For each vector of samples that enters the filter, the filter determines a constant amplitude envelope for the vector (its peak absolute value), normalizes the signal to the [-1, 1] amplitude band, and raises the signal values to some power (the contrast value). The signal is then rescaled in such a way that the RMS of the vector amplitudes remains unchanged, so that the perceived loudness of the input signal does not change. For a contrast value f, the intention is that when f > 1, samples close to the envelope stand out more than samples further away, and when 0 <= f < 1, samples tend to bunch up at the envelopes, producing an indistinct and muffled signal. The Java source and class files can be found here.
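The per-vector transform can be sketched as follows (this is a standalone reconstruction from the description above, not the actual source; I preserve each sample's sign when raising to the power):

```java
public class Contrast {
    // Apply the contrast transform to one vector of samples, in place.
    public static void process(double[] v, double f) {
        double peak = 0, rmsIn = 0;
        for (double s : v) { peak = Math.max(peak, Math.abs(s)); rmsIn += s * s; }
        if (peak == 0) return;                 // silent vector: nothing to do
        rmsIn = Math.sqrt(rmsIn / v.length);

        double rmsOut = 0;
        for (int i = 0; i < v.length; i++) {
            double x = v[i] / peak;            // normalize to [-1, 1]
            v[i] = Math.signum(x) * Math.pow(Math.abs(x), f); // contrast curve
            rmsOut += v[i] * v[i];
        }
        rmsOut = Math.sqrt(rmsOut / v.length);

        double gain = rmsIn / rmsOut;          // restore the original RMS
        for (int i = 0; i < v.length; i++) v[i] *= gain;
    }
}
```

The final rescaling step is what keeps the perceived loudness constant while the relative shape of the samples changes.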

Max Assignment 4 (Feb 27) Thunderstorm

For this assignment I used the reson~ and biquad~ filters to simulate various sound effects for a thunderstorm. The first effect used a cascade of reson~ filters and modulations of pink noise to create the whooshing sound of strong winds. The second effect tries to recreate the sound of rain beating against window panes and rattling them. This consisted of a cascade of reson~ filters and a biquad~ filter. The last effect is a biquad~ filter simulating the sound of surf breaking on the beach.
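For readers unfamiliar with reson~, the sketch below shows the kind of two-pole resonator behind it (this is a common textbook recipe, not the Cycling '74 source): a bandpass that rings at a center frequency, so feeding it noise, as in the wind effect, emphasizes energy around that frequency.

```java
public class Reson {
    private final double b1, b2, g;
    private double y1, y2;  // previous two output samples

    public Reson(double centerHz, double bandwidthHz, double sampleRate) {
        double r = Math.exp(-Math.PI * bandwidthHz / sampleRate);  // pole radius
        b1 = 2 * r * Math.cos(2 * Math.PI * centerHz / sampleRate);
        b2 = -r * r;
        g = 1 - r * r;  // rough gain normalization
    }

    public double process(double x) {
        double y = g * x + b1 * y1 + b2 * y2;
        y2 = y1;
        y1 = y;
        return y;
    }
}
```

Cascading several of these at different center frequencies, with the centers and bandwidths modulated over time, is what produces the whooshing and rattling textures described above.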

Max Assignment 3 (Jan 30) An Acoustic Sunrise

Note: In the zip file the main Max file is "assign3.txt" and the rest of the files (music data) have to be on the Max path.

For this assignment I implemented a program that plays acoustic guitar. The program reads in five chords (C, F, G, Am and Em), a sequence for playing the chords, a string-picking pattern for playing each chord, and the underlying melody for the song. Each chord comprises five notes: a bass note and four treble notes. Another part of the program randomly introduces "free notes" amongst the underlying melody.

The song comprises three sections. In the first section we fade in the song as the sun rises. In the middle section the song increases in intensity as more and more free notes get played. The intensity is controlled by a quadratic function. This paints a picture of more and more things awakening and coming to life. In the last section the sun has fully risen and there is a warm glow across the land. This section is faded out. Throughout the song, the bass note in each measure is hit twice in quick succession to produce a "heartbeat" effect. This seems to work particularly well in the last section where things suddenly quiet down and the heartbeat becomes conspicuous.
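The quadratic intensity control in the middle section amounts to something like the following (parameter names are mine, not from the patch): the probability of inserting a free note grows quadratically from zero to a maximum over the section.

```java
public class Intensity {
    // t in [0, 1] is the position within the middle section;
    // max is the peak free-note probability reached at the section's end.
    public static double freeNoteProbability(double t, double max) {
        return max * t * t;
    }
}
```

Because the curve starts shallow and steepens, the free notes arrive slowly at first and then crowd in, which is what gives the "more and more things awakening" effect.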

Max Assignment 2 (Jan 23) Signal Control With Delays and Exponential Amplitude Decays

For this assignment on control signals I experimented with signal delays to recreate real world effects of echo and stereo perception of sound sources.

The program takes an input audio file and adds echo and stereo effects to the signal before playing it. With the echo effect the user can specify the echo delay in milliseconds (0 to 20000) and the rate of decay of the echoed signals (0.0 to less than 1.0). For the stereo effect a pictslider object is provided as a graphical representation of a square room with the user standing in the center of the room. The user can click on any position in the room to move the sound source and hear the change in direction and amplitude of the audio signal. The user can change both the size of the room (in cm) and the sound attenuation constant (which determines the decrease in signal amplitude as the sound source moves further away from the listener). For best results, use headphones or earphones to hear the output.

The code shown above for adding echo has been extracted to a subpatch. The way it works is to recursively extract the signal, delay it, decrease the amplitude and then add it back to the original signal. The amplitudes of the echoes fall away exponentially since the recursion keeps decaying each successive echo by a constant factor.
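The same feedback structure can be sketched in Java rather than as a patch (this is my own reconstruction of the idea, not the subpatch itself): a circular delay buffer whose write-back is scaled by the decay factor, so each echo is itself echoed at reduced amplitude.

```java
public class Echo {
    private final double[] buf;   // circular delay line
    private final double decay;   // 0.0 <= decay < 1.0
    private int pos;

    public Echo(int delaySamples, double decay) {
        this.buf = new double[delaySamples];
        this.decay = decay;
    }

    // Process one sample: add the delayed echo to the dry signal,
    // then feed the result back into the delay line, attenuated.
    public double process(double x) {
        double delayed = buf[pos];     // signal from delaySamples ago
        double y = x + delayed;        // add echoes back to the original
        buf[pos] = y * decay;          // recursion: this becomes a future echo
        pos = (pos + 1) % buf.length;
        return y;
    }
}
```

Feeding in a single impulse produces copies at multiples of the delay time with amplitudes decay, decay squared, and so on — the exponential fall-off described above.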

The bulk of the code shown above is for producing stereo effects. The way it works is that when the user specifies a sound source position in the room, distances are computed from the source to the listener's left and right ears. These distances are used in two ways. Firstly, we compute the decrease in amplitude of the audio signal when it reaches each ear. We assume that this attenuation of the signal follows an exponential law with respect to distance from the sound source. Secondly, we compute the difference in delay times for the signal to reach each ear. It is this delay difference that accounts for our perception of the directionality of the sound source. The original signal is split into two channels (one for each ear) and the changes in amplitude and delay times are applied to each channel.

In implementing the stereo effect I did not apply the actual delays to each channel. Instead I subtracted the minimum from each delay so that one channel gets delayed by the delay difference while the other is not delayed. This was done to accommodate the fact that the delay object has a maximum specifiable delay value and the fact that we allow the user to specify an arbitrarily large room.
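The geometry and the minimum-subtraction trick can be sketched as follows (constants and names are my own, not those used in the patch): given a source position, compute per-ear attenuation and relative delay, with the smaller delay subtracted so that only one channel is ever delayed, regardless of room size.

```java
public class StereoParams {
    static final double SPEED_OF_SOUND_CM_PER_MS = 34.3;

    public final double leftGain, rightGain;        // exponential attenuation
    public final double leftDelayMs, rightDelayMs;  // one of these is always 0

    // Listener at the origin, ears offset along the x axis; distances in cm.
    public StereoParams(double srcX, double srcY, double earSeparationCm,
                        double attenuation) {
        double dl = Math.hypot(srcX + earSeparationCm / 2, srcY);
        double dr = Math.hypot(srcX - earSeparationCm / 2, srcY);
        leftGain = Math.exp(-attenuation * dl);     // exponential law
        rightGain = Math.exp(-attenuation * dr);
        double tl = dl / SPEED_OF_SOUND_CM_PER_MS;
        double tr = dr / SPEED_OF_SOUND_CM_PER_MS;
        double min = Math.min(tl, tr);              // subtract the common delay
        leftDelayMs = tl - min;
        rightDelayMs = tr - min;
    }
}
```

Only the delay difference carries directional information, so dropping the common component changes nothing perceptually while keeping the delays within the range the delay object accepts.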

I used a system timer comprising a metronome that regularly sends bangs to various objects to ensure that they keep updating their results. For example, a "+" object does not compute a result if only the right inlet gets updated, so to make sure it updates, bangs are sent into the left inlet. I know this seems inefficient, but trying to do everything asynchronously (with the result of one object triggering others) led to some very subtle bugs. It seemed simpler to just implement a clocked system (the same way that most computers operate).

Max Assignment 1 (Jan 18) Krazy Keyboard

For this introductory assignment on Max/MSP I chose to implement a program that plays a keyboard randomly. The various parameters that get randomized are playing rate, pitch, velocity and note duration. Metronomes are used to control how often these parameters get changed. Metronome speeds themselves can be altered by the user.
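A sketch of the per-tick parameter randomization (ranges and names here are my own illustration, not the patch's actual values): each metronome tick draws a fresh pitch, velocity and duration within MIDI-style ranges.

```java
import java.util.Random;

public class KrazyNote {
    public final int pitch, velocity, durationMs;

    public KrazyNote(Random rng) {
        pitch = 36 + rng.nextInt(61);        // MIDI C2..C7
        velocity = 1 + rng.nextInt(127);     // 1..127, never silent
        durationMs = 50 + rng.nextInt(950);  // 50 ms to just under 1 s
    }
}
```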

The results of this simple program were in my opinion not altogether displeasing even though no musical principles were used to constrain the random playing. The next evolution of the program could involve using statistical information gathered from actual musical pieces to guide the random generation of notes (probably via Markov chains of some feasible order) and hopefully obtain something resembling conventional music. I'm hoping to turn this into a term project.