Wednesday, 19 September 2007

CC1 - Semester 2 - Week 8

Performance Sequencing

What I did was to create 3 different loops using Propellerheads’ Reason: a drum loop, a bass-line loop and a guitar loop. I pitch-shifted the guitar loop in real time.
After inserting the drum loop into Ableton Live, I warped it and introduced a different loop.
I also added a delay effect to the guitar line and controlled it in real time as well.
In total, my sequence followed this order:
Original Drum Loop -> Modified Drum Loop -> Bass Loop -> Guitar Loop (with the delay amount controlled and varied throughout the tune) -> Bass Loop -> Modified Drum -> Original Drum.
The result can be downloaded here:
The interesting part of this week’s exercise for me was that Ableton Live actually makes it much easier to fuse different genres of music. Controlling rhythm and its tempo (as well as its groove) gives the user lots of options to play around with different riffs and themes, and at the same time it makes it possible to mix this diverse material together precisely and accurately. The reason I didn’t use my recordings from last semester is that my project didn’t involve drums, and I needed rhythm for this exercise.
I think parts of this exercise of mine sounded like early experimental drum’n’bass tunes (e.g. early Aphex Twin).
Here is a very useful video on warping in Live. I know no one does, but please have a look.

References:
- Christian Haines 'Creative Computing 1.2' Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 12/09/2007
- Ableton Live Users. Wikipedia. (http://en.wikipedia.org/wiki/Category:Ableton_Live_users) [Accessed 19/09/07]
- Ableton Live. Johns Hopkins University Digital Media Centre. (http://digitalmedia.jhu.edu/learning/documentation/live) [Accessed 19/09/2007]

Monday, 17 September 2007

AA1 - Semester 2 - Week 8

Amplitude and Frequency Modulation (AM and FM)

Plogue Bidule is a total disaster, mainly because I still have not mastered it.
However, after ages of trying to create two patches, one AM and one FM, I finally figured out the very simple and logical procedure behind them. In amplitude modulation, the amplitude input of the oscillator should receive a signal from another oscillator. In frequency modulation, the frequency input should be the receiver!
Despite the simple algorithm of FM and AM, simulating a natural or man-made sound was pretty hard.
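To make the two patches concrete, here is a minimal numpy sketch of the routing described above; the frequencies and modulation depths are arbitrary placeholders, not the values from my Bidule patches:

```python
import numpy as np

SR = 44100                       # sample rate in Hz
t = np.arange(2 * SR) / SR       # two seconds of time

carrier_hz = 440.0               # arbitrary carrier frequency
mod_hz = 6.0                     # arbitrary modulator frequency
modulator = np.sin(2 * np.pi * mod_hz * t)

# AM: the modulator feeds the carrier's amplitude input.
am = (1.0 + 0.5 * modulator) * np.sin(2 * np.pi * carrier_hz * t)

# FM: the modulator feeds the carrier's frequency input.
# Integrating the instantaneous frequency gives the phase.
inst_freq = carrier_hz + 100.0 * modulator      # 100 Hz deviation, arbitrary
phase = 2 * np.pi * np.cumsum(inst_freq) / SR
fm = np.sin(phase)

# When the carrier and the modulator are very close in frequency,
# a slow beating in the tone appears (the effect mentioned below).
```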
For the AM, I simulated the time period in which an 8-cylinder car (Chevrolet came to my mind) starts its engine, accelerates and stops. I like what I came up with!
The MP3 can be downloaded from here:
For FM, I simulated the sound of an ambulance siren (or whatever it is called in English).
The MP3 can be downloaded from here:
The interesting occurrence in FM was the slight change of tone when the carrier and the modulator have frequencies very close to each other. To hear the effect, get the Plogue file from here and have a listen.
Here is the list of the AM and FM Plogue files I have, and also some more stuff.
AM for the above MP3 (Chevrolet): (Click the Download button)
FM for the above MP3 (ambulance): (Click the Download button)
As is apparent in the picture, I have modulated the signal with ITSELF. Nice sound...
Here is the picture of my Pro Tools session for the whole thing:
Amplitude Modulation on YouTube...

References:
- Christian Haines 'Audio Arts 1.2' Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 10/09/2007
- Amplitude Modulation. WhatIs.com . (http://searchsmb.techtarget.com/sDefinition/0,,sid44_gci214073,00.html) [Accessed 15/09/07]
- Amplitude Modulation. University of California, Berkeley, United States of America. (http://robotics.eecs.berkeley.edu/~sastry/ee20/modulation/node3.html) [Accessed 15/09/2007]

Thursday, 6 September 2007

Forum - Semester 2 - Week 7

CC1 - Semester 2 - Week 7

Ableton live

What I did was to sequence a track using the NIN samples provided with the software Ableton Live. I did the sequencing (or rather mixing) live and in real time.
Arguably, Live has a good interface which, compared to many others, is more user-friendly. Most of the features that a typical user needs at one time are on the screen and there is not much need to browse the menus. Like much other software, however, there is a need to "flip" sides of the interface and go from the editing section to the sequencing section; this could possibly be problematic. The other good feature of Live is that it recognises the beginning of the sample (or rather the "beat") pretty precisely and does not always go off-time (it should be mentioned that the whole synchronisation process depends heavily on the initial loop of the sample).
The restrictions encountered while using Live can also push the user to be more "creative". As an example, since the demo version of Live did not allow me to save the file, I had to come up with an intelligent approach to playing and recording in real time using two different programs (in this case, Live and Pro Tools).
Here is my final result in MP3 format:
...and here is a video of the duo "Telefon Tel Aviv". It demonstrates how these guys utilise Live to get their jobs done:


References:
- Christian Haines 'Creative Computing 1.2' Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 06/09/2007
- Ableton (www.ableton.com) [Accessed: 06/09/2007]
- Ableton Live, Wikipedia (http://en.wikipedia.org/wiki/Ableton_Live) [Accessed: 06/09/07]

AA1 - Semester 2 - Week 7

Analogue / Digital synthesis; Juno 6

I experimented with a few synthesisers, including the Roland Juno 6 and the Jupiter. For this week, I played with the Juno 6 and recorded the result.
Obviously, the result of around one hour of mucking around with the synthesiser gave me a whole lot of different sound patterns. Not many of them, however, would be considered simulations of sounds in the "real" world.
In the end, I merged two parts of my final result; the first one sounds like the activity of a volcano (bubbling?) and the second one sounds to me like wind blowing.
Here is the MP3 containing these two parts:
After all, "additive synthesis" (the way in which electronic synthesisers and most software work) provides many options for creating various sounds. In addition to that, the most enjoyable part of the additive story for me is that, since the processes are done step by step, it is easier to keep track of what is being done along the way.
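As a rough illustration of the additive idea (building a tone by summing sine partials step by step), here is a small numpy sketch; the partial amplitudes are made-up values and have nothing to do with the Juno 6 itself:

```python
import numpy as np

SR = 44100
t = np.arange(SR) / SR            # one second of time

def additive_tone(f0, partial_amps):
    """Sum harmonically related sine partials over a fundamental f0."""
    tone = np.zeros_like(t)
    for n, amp in enumerate(partial_amps, start=1):
        tone += amp * np.sin(2 * np.pi * n * f0 * t)
    return tone / max(abs(tone).max(), 1e-9)   # normalise to +/-1

# Made-up amplitudes: each extra partial is one explicit step,
# so it is easy to keep track of what every addition contributes.
rumble = additive_tone(55.0, [1.0, 0.6, 0.3, 0.15])
```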
Below is a video of the Roland SH-3A synthesiser; it is a short demo and a good example of how additive (...and to me also addictive!) synthesis works:

There is a good lecture on additive synthesis from Princeton University, New Jersey, US. I've put the link in the references.

References:
- Christian Haines 'Audio Arts 1.2' Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 03/09/2007
- Roland Juno6. Vintage Synth Explorer. (http://www.vintagesynth.com/index2.html) [Accessed 07/09/07]
- Additive Synthesis. Princeton University (http://soundlab.cs.princeton.edu/learning/tutorials/SoundVoice/add1.htm) [Accessed 07/09/2007]

Forum - Semester 2 - Week 6

CC1 - Semester 2 - Week 6

AA1 - Semester 2 - Week 6

Interactive sound design

I chose my external hard drive, a Maxtor OneTouch III, and analysed its sound design.
There are two different sounds (beeps) that come out of it. When it is connected to a computer via a USB cable, three things can happen:
1- Everything is fine! And there is no sound.
2- The drive is connected but there is not enough power; could be caused by the cable or the USB port of the computer. In this case, there would be a constant tone, which won’t finish unless the device is disconnected.
3- The drive is connected but it’s busy being recognised (or rather configured) by the computer; in this case there would be an on-off beep indicating that nothing should be touched.
In my opinion, the sound design for this particular product is not bad:
a) It has a more or less simple structure, and it is not hard for the user to work out what is going on; it also uses just one tone, which is easy to notice and recognise.
b) The constant sound the device makes when there is a problem is so annoying that the first thing that comes to the user’s mind is to disconnect the drive, and that is exactly what the design intends.
c) The on-off beep gives a sense of not being sure, and the user typically waits to see what happens next, which again is exactly what is needed. While the user is confused, the device takes its time and sorts everything out by itself!
Here is a link to a project on interactive sound design by students of the University of Melbourne: http://www.sounddesign.unimelb.edu.au/web/biogs/P000565b.htm
...and some good stuff (obviously related to interactive sound design) from Kristianstad University, Sweden: http://www.fb1.uni-lueneburg.de/fb1/austausch/kristian.htm

References:
- Christian Haines 'Audio Arts 1.2' Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 03/09/2007
-Interactive Sound Environment - Australian Sound Design Project Work, The University of Melbourne (http://www.sounddesign.unimelb.edu.au/web/biogs/P000565b.htm)[Accessed 07/09/2007]
- Interactive Sound Design, Kristianstad University (http://lab.tec.hkr.se/moodle/) [Accessed 07/09/2007]

CC1 - Semester 2 - Week 5

Naturalisation

For this week, I took a MIDI file for the song "Sultans of Swing" by Dire Straits and naturalised it. Most of my work was to process the guitar part (particularly the last solo of the tune). I mostly modified the velocities, durations, and starting points (groove) of the notes.
Here is the ORIGINAL MIDI part (also available in .cpr format for Cubase)...
and HERE is my modified version of the tune. (and its .cpr file)

Naturalisation might come in handy in many cases, firstly to add the spice of "human mistakes" to the artificial output of machines simulating musical pieces. Nevertheless, many MIDI files these days were originally played in by a person and already carry human error within their structure. Still, many corrections are needed to make the MIDI file sound as close as possible to the original song.
Besides, many electronic compositions need a strong sense of "groove", which can only be achieved by skilfully applying such techniques during production if the final result is to sound natural to an ordinary audience.
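As a sketch of the kind of processing I did by hand in Cubase, here is a hypothetical Python snippet that jitters the velocity, start time and duration of note events; the jitter ranges are illustrative only, not the values I actually used:

```python
import random

def naturalise(notes, vel_jitter=8, time_jitter=0.01, dur_jitter=0.03):
    """Add small random 'human' deviations to quantised note events.

    Each note is a dict with 'start' and 'duration' in seconds,
    'velocity' in 1-127 and 'pitch'; the jitter ranges are examples only.
    """
    humanised = []
    for n in notes:
        humanised.append({
            "pitch": n["pitch"],
            "start": max(0.0, n["start"] + random.uniform(-time_jitter, time_jitter)),
            "duration": max(0.01, n["duration"] * (1 + random.uniform(-dur_jitter, dur_jitter))),
            "velocity": min(127, max(1, n["velocity"] + random.randint(-vel_jitter, vel_jitter))),
        })
    return humanised

# A perfectly quantised bar of four notes, then its 'naturalised' version.
bar = [{"pitch": 60 + i, "start": i * 0.5, "duration": 0.45, "velocity": 100}
       for i in range(4)]
loose_bar = naturalise(bar)
```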

References:
- Christian Haines 'Creative Computing 1.2' Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 21/08/2007
- Dire Straits, Wikipedia (http://en.wikipedia.org/wiki/Dire_Straits) [Accessed: 06/09/2007]
- MIDI search engine, Music Robot (http://www.musicrobot.com/) [Accessed: 06/09/07]

Forum - Semester 2 - Week 4

Forum - Semester 2 - Week 3

This week’s exercises were to generate square waves and manipulate the sound using various devices (potentiometers, resistors, etc.).
The first exercise is pretty simple:
The square wave is generated using a 4093 chip, a capacitor and a resistor, as you can see in the video below (there is a rough frequency calculation at the end of this post):


I changed the resistor and observed the effect:



In the next one, I used an LDR (Light-Dependent Resistor) and experimented with it:



In this one, using a potentiometer (also called a pot), I varied the resistance and checked the effect:



This time, I used a potentiometer AND an LDR; nice stuff...:



Finally, I used TWO potentiometers, one serving as the generator and the other as the modulator. I think I needed pots of different values to get a more noticeable effect:
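For anyone curious how the component values relate to the pitch heard in these videos, here is a rough back-of-the-envelope sketch. A Schmitt-trigger relaxation oscillator built around a 4093 runs at very roughly f ≈ 1/(k·R·C), where the constant k depends on the chip's thresholds and supply voltage, so treat these numbers as ballpark figures only; the component values below are made up, not the ones I used:

```python
# Rough frequency estimate for a Schmitt-trigger (4093) RC oscillator.
# K depends on the chip's actual thresholds and supply voltage;
# ~1.4 is only a ballpark figure.
K = 1.4

def osc_freq(r_ohms, c_farads, k=K):
    """Approximate oscillation frequency in Hz."""
    return 1.0 / (k * r_ohms * c_farads)

C = 100e-9                                    # 100 nF capacitor (example value)
for r in (10e3, 47e3, 100e3, 470e3):          # fixed resistor, pot or LDR settings
    print(f"R = {r / 1e3:5.0f} kOhm  ->  ~{osc_freq(r, C):7.1f} Hz")

# A smaller resistance (bright light on the LDR, pot turned down) means
# a higher frequency, i.e. a higher-pitched square wave.
```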

Monday, 27 August 2007

AA1 - Semester 2 - Week 5

Sound Installation

As always, my VERY first step in the research journey was Wikipedia, and I came across THIS, which I strongly recommend everyone have a look at; however, I ended up analysing the work of an artist who was not mentioned on that page. Blame it on YouTube.
Lionel Marchetti (as mentioned in the first Google search result for his name) is “…a composer of musique concrète. First self-taught, he discovered the catalogue of Musique Concrète with Xavier Garcia. He has composed in the CFMI of Lyon 2 University between 1989 and 2002, where he still organizes workshops focused on the loudspeaker, the recorded sound and Musique Concrète, both on practical and theoretical levels. He has built his own recording studio, and composes also in the Groupe de Recherches Musicales (GRM) in Paris since 1993…”
I analysed one of his sound installation projects which I found on Youtube; you can check it out below:

Use of music technology:
Apparently Marchetti is famous for his work on and with loudspeakers. This particular video shows his idea of transforming electrical energy into the kinetic energy of a loudspeaker, passing that energy on to some stones, and finalising the act with the natural sounds of the stones.

Sound Mediums:
According to my understanding, the main mediums through which Marchetti sends the sound(s) are stones and soil. The video is not very clear, but I think something is going on with the plants and their leaves as well; nevertheless, the main sound-makers are the stones, which are collaborating with the loudspeakers!

Type of artist:
Apparently Marchetti is best known as a “Concrète Musician” (I just came up with this term; I hope it exists in the glossary of music technology), but the work I have analysed is, to me, an example of sound installation. Many other installations that I watched, however, mainly used sound and light. Yet this work could be categorised the same way. Marchetti is also a poet.

References:
- Christian Haines 'Audio Arts 1.2' Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 20/08/2007
- Lionel Marchetti, Wikipedia (translated by Google from French to English) [Accessed 26/08/2007]
- Lionel Marchetti, ElectroCD [Accessed 26/08/2007]

Tuesday, 21 August 2007

CC1 - Semester 2 - Week 4

Cubase

The first interesting quirk of Cubase (unique as far as I know) is its way of dealing with MIDI. Usually, and typically, software just needs inputs and outputs set to the proper ports and channels; then the different plug-ins, VIs, VSTs, etc. are applied. In Cubase, you have to send the MIDI signal THROUGH the VI or VST that you want to use.
The other interesting fact I came across in Cubase is that all your different devices, instruments and effects are organised in various folders and are always accessible. I can’t say whether this is good or bad, but it sure is different. The positive point is that you always know your MATERIAL; the downside is that you have to divide your mind in two, one half to manage your actual sound and the other to keep an eye on the effects (or VSTs) folder.

Here is the MP3 of my final result:



Here is the Cubase file:

References:
- Christian Haines 'Creative Computing 1.2' Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 15/08/2007
- Cubase, Steinberg media (www.steinberg.net) [Accessed: 20/08/2007]
- VST - Virtual Studio Technology, Wikipedia (en.wikipedia.org/wiki/Virtual_Studio_Technology) [Accessed: 20/08/07]

Monday, 20 August 2007

AA1 - Semester 2 Week 4

Carlton Draught

In this ad, the famous piece “Carmina Burana” by Carl Orff is used to support video scenes of two groups of people running into each other on a big field.
In general, most of the techniques are conceptual. The lyrics have been changed and (in a funny way) repeat the massiveness and sophistication of the advertisement.
However, the process of synchronising the music and the video is notable. When the music is soft and relatively quiet, the two groups of people move slowly; they walk. When the music gets loud and the choir singers begin to raise their voices, the two groups run towards each other. At the end of the music, the camera zooms out to cover the entire scene, and it becomes clear that one group was acting as the beer while the participants in the other group were playing the role of the consumer.

The ad mostly uses diegetic sounds and overdubs for the actors’ voices.


Sony.

Sony has used the Filipino band Rivermaya for its ad. The ad promotes the quality of sound and vision in Sony products. Throughout the short film, there are many uses of hyper-real and ambience sounds, like tearing the tape around the products’ boxes, the sound of boxes being moved, etc. As in many other ads, we hear a short narration at the end.
One of the notable characteristics of this video is that the sounds build up gradually, and small sounds (of packing and unpacking the product) lead to the main point, which is the actual band playing the music at the highest quality possible. The evolution of these sounds is interesting to me.


References:
- Christian Haines 'Audio Arts 1.2' Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 13/08/2007
- Carmina Burana, Carl Orff, Wikipedia (http://en.wikipedia.org/wiki/Carmina_Burana_%28Orff%29) [Accessed: 19/08/2007]
- Rivermaya, Wikipedia (http://en.wikipedia.org/wiki/Rivermaya) [Accessed: 19/08/07]

Sunday, 12 August 2007

AA1 - Semester 2 Week 3

Requiem for a Dream

The exercise for this week is to analyse the soundscape of a Hollywood movie.
I have taken 4 minutes of the film “Requiem for a Dream”, from 29’ 30” to 33’ 30”.
Unfortunately, mostly as a result of copyright issues, I am unable to put the clip up here, but below is the film’s trailer, taken from YouTube:


Analysis:

Music/Non-diegetic sounds:
- In the robbery scene, the background music gradually builds up and adds elements; however, when Jared Leto and Jennifer Connelly talk, it is heard as just a rhythm.
- When Harry’s mom (Ellen Burstyn) is experiencing the effects of the ecstasy pill and is dancing, Balkan-sounding music plays in the background.

Hyper-Real sounds:
- The robbery scenes have the sound of a cash-register chime when its drawer is opened. The sound is similar to the sounds Pink Floyd used in the track Money (The Dark Side of the Moon, 1973, Harvest/Capitol).
- A security alarm beep is heard in the background of the robbery scenes.
- When the main actors are taking magic mushrooms and cocaine, the sound of their pupils dilating is heard. It sounds very artificial to me and, in fact, I don’t think a pupil makes THAT sound when it rapidly dilates. (It would be great if anyone could tell me whether it makes any sound at all.) The same issue exists with the sound of blood, loaded with drugs, running through the veins.
- The sound of cocaine being sniffed is a reversed version of some other artificial sound; it might not be the real sound of the action, but in my opinion it makes sense in the context of the film.
- The electrical noise of the lamps in the robbery scene is good; as the design requires, it is extremely exaggerated.
- When Leto and Connelly kiss each other, there is just ONE kissing sound, which is weird. (The positions are different; I don’t think all the kisses sound the same; they surely don’t feel the same, though!)
- When Connelly is making (what looks like) wallpaper, some reversed sound serves as the effect, and it doesn’t make sense to me at all; however, it might, again, in context.
- When Burstyn is taking her pills, we again hear an ultra-exaggerated popping sound!

Diegetic sounds
- Doors are slammed a few times during these 4 minutes; again, the issue is that they sound more or less the same, despite being different doors.
- One of the main quotes of the film “Purple in the morning, blue in the afternoon, orange at night..” is said by Harry’s mom when she is organising her timetable of taking Ecstasy pills.

References:
- Christian Haines 'Audio Arts 1.2' Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 06/08/2007
- Requiem for a Dream (2000, film), IMDb (http://www.imdb.com/title/tt0180093/) [Accessed: 12/08/2007]
- MDMA (Ecstasy), Erowid (http://erowid.org/chemicals/mdma/mdma.shtml) [Accessed 12/08/2007]

Wednesday, 8 August 2007

CC1 - Semester 2 - Week 2

Another Bidule project;

This time the circumstances were pretty different from last week. Having experimented a bit with the software, and after reading an instructive write-up on the net, life was easier.
I came up with this result:

...whose Bidule file looked like this:
...and it (the Bidule file) is HERE to download.

There were several issues that I encountered, but overall, since Christian explained the system through which such "open-ended" software works, the bugs became more understandable. Still, I have to experiment more to actually find out how to generate "what I want". Unfortunately, the software gives zillions of options from which you can choose your sound (or rather choose the "way" you want to generate your sound). My aim, however, is to come up with a sound which I can possibly use somewhere else, in collaboration with other software. I didn't really get into the ReWire side of the software, but I think it is very possible to work with Bidule while it is cooperating with other programs.

References:
- Christian Haines 'Creative Computing 1.2' Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 02/08/2007
- Martin, "Bidule Tutorial", Front and Back [Accessed 06/08/2007]

AA1 - Semester 2 Week 2

Wednesday, 1 August 2007

Forum - Semester 2 Week 1

The Victorian Synth, as I understand it (and of course as a result of my readings), is about building the components of a synthesiser out of several electronic and/or mechanical devices.
This week’s forum (or rather, Music Technology Workshop) gave us a basic understanding of the use, application and coordination of such devices. The aim was obviously to create a musical pattern. (I strongly believe that there is NOTHING such as noise in the sense of an unpleasant sound. Music IS subjective.)
We initially broke simple computer speakers apart and used their elements for our study of Victorian Synth.
The process was not sophisticated in the first place. Taking the electric current from a 9V battery, we momentarily connected and disconnected wires (attached to the two terminals of the battery) to a speaker. This produced an electric spark and a sound (the same sound that small sparks make! I don’t know if there is a particular name for it in English).
The experiment was to place various things on top of the speaker, in the path of the electric current, on the surface where the connections were made, and so forth. We also used objects such as glasses to dampen the sound and/or change the environment of the sound source, and hence the quality of the sound.
As is obvious in the 3 different videos of the Victorian Synth that I have put here (all taken from YouTube), the whole process’s requirements are simple. It just needs a bit of creativity to think of new sounds and new ways of making those sounds.








The piece I have put here consists of 4 parts. The 1st part is just momentary connections of two electrodes; the second part is what is called “electric feedback”. The short distance between the centre of the speaker and the point where the electrodes connect creates a magnetic field that affects the speaker again, and the result sounds like what you hear! The 3rd part is having the momentary electric pulses while pressing the surface of the speaker in a very sentimental manner! (It would break, for f***’s sake.) The effect is a change of pitch, and the reason for that is the change in pressure behind the speaker’s diaphragm. The fourth one is simply scratching one of the electrodes against a lighter flint (is that what it is called in English?) and checking the result.
Cool, huh?

References:
- Christian Haines, Stephen Whittington 'Music Technology Forum' Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 26/07/2007
- Bowers, John. Suborderly (http://suborderly.blogspot.com/2007/03/suborderly-music-victorian-synthesizer.html) [Accessed 1/08/2007]
- "The Victorian Synthesizer". Field Muzick. (http://www.fieldmuzick.net/17.237.0.0.1.0.phtml) [Accessed 1/08/2007]

Monday, 30 July 2007

CC 1 - Semester 2 - Week 1

Plogue Bidule

I’d rather put my understanding of this software in this way: Plogue Bidule is one of the ways of making “pure” computerised music.
The process I undertook was basically to start by giving the computer a bunch of frequencies, receive a bunch of random pitches, and then apply different changes, modulations and effects to them using various devices.

However, not all of my project was derived from MIDI. I also used one wave file and sent it through some VST plug-ins, effects and modulators.
The result was a mixture of patterns and “sound colours” which gave me a sense of electro-ambient music.
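Since ring modulation is one of the modulations referenced below, here is a minimal numpy sketch of the idea; the frequencies are arbitrary and this is not a reconstruction of my actual Bidule layout:

```python
import numpy as np

SR = 44100
t = np.arange(SR) / SR

# Ring modulation is simply multiplying two signals; the output contains
# the sum and difference frequencies of the inputs.
a = np.sin(2 * np.pi * 300.0 * t)    # an arbitrary "random pitch"
b = np.sin(2 * np.pi * 170.0 * t)    # an arbitrary modulator
ring = a * b                         # energy appears at 470 Hz and 130 Hz
```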

The final result:
The session file in Bidule format:




References:
- Christian Haines 'Creative Computing 1.2' Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 26/07/2007
- Beaulieu, Sebastien; Trussart, Vincent; Viens, David 2007, Bidule v0.92 user manual, Plogue [Accessed 30/07/2007]
- "Ring Modulation". Wikipedia. [Accessed 30/07/2007]

AA1 - Semester 2 - Week 1 (I think)

Sound design

I have put two examples of sounds designed for a) a movie and b) a game.

2001: A Space Odyssey, by Stanley Kubrick.
Actually, it is Richard Strauss who composed the opening theme of this film, and it was “chosen” rather than designed. Kubrick, with the help of the film’s sound supervisor A. W. Watkins, decided to use Strauss’s piece Also sprach Zarathustra (“Thus Spoke Zarathustra” in English) for the opening.

The tune, however, is tremendously significant for its role in the movie. Many people are reminded of the movie rather than of Strauss when they hear the first bars.
In my opinion, the track carries the same story as the movie itself. The element of evolution, which plays a big role in the film, is well represented in the track as well. Also, like the film, the track finishes with (technically) a major cadence, a sign of victory. Likewise, the movie ends with the story of humanity’s victory.

Duke Nukem 3D; difficulty level: “Come Get Some”!
One of the main sound themes for the game Duke Nukem 3D is called “Come Get Some”, by the band Megadeth. The phrase is also the title of one of the game’s levels. In addition, the line is “said” by the main character of the game whenever he damages the enemy.
This phrase is one example of the many different sound effects, sentences and noises used in this game. (I think) to give the player a good sense of playing, “Come Get Some” is delivered in a way and with an intonation that ridicules the enemy.
Unfortunately, embedding a notable example of this game here was impossible, so I had to put the direct link to the YouTube video HERE.
However, HERE is a homemade trailer for Duke Nukem 3D accompanied by Megadeth's tune. (Parental Advisory)

References:
- Christian Haines 'Audio Arts 1.2' Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 23/07/2007
- 2001: A Space Odyssey (film), Wikipedia (http://en.wikipedia.org/wiki/2001:_A_Space_Odyssey_%28film%29) [Accessed: 30/07/2007]
- Duke Nukem 3D Credits, 3DRealms website (http://www.3drealms.com/other/index.html) [Accessed 30/07/2007]

Tuesday, 26 June 2007

Project - CC1

For this project I used just three sounds: a door being shut, the noise of a mobile phone signal (that annoying one you always hear coming out of electronic devices) and the voice of one of my friends.
There were several parts to this composition of mine. My main intention was to change the sounds pretty dramatically and observe the effects of different software, plug-ins and devices. In some cases I also changed the parameters of particular devices in real time. For example, I stretched the door sound from 1 second to around 50 seconds, applied a phaser to it, and recorded the result while changing the LFO amount on the phaser.
I also used panning, muting, reversing and other treatments like these quite frequently in my tune.
The tune is about a dream; it is called “Pleasant Nightmare”. It starts with the door being shut, goes on with the sound of the door being delayed (not consistently; I changed the delay time and recorded it in real time), and has other sounds on top of it.
The track finishes with a lady saying something (!) about good news, the door being shut again, and the sound of an answering machine.
All in all, it was a really interesting project, especially when I lost my entire Pro Tools session and had to redo it.

Nice semester ending.

HERE is my final MP3 track...

And here is the score for this dude.. :

Project - AA1

My final project was the recording of a song by the band “Anathema” from their 1999 album Judgement. The song is called “Parisienne Moonlight”.
The song has no drum tracks. Amy Sincock did the vocals for me, Nathan Shea played bass, and Douglas Loudon played guitars and piano.
For vocals, I used two microphones, a Neumann U 87 condenser and a Sennheiser MD 421 dynamic. The interesting thing was that the U 87 picked up a lot of reverb from the recording venue (Studio 5), which I actually found quite useful, because the reverb plug-ins for Pro Tools are not satisfying enough (or at least I don’t know how to get a good sound out of them yet). The MD 421, however, picked up a much drier sound compared to the U 87.
I recorded the bass through a DI. Using an Avalon pre-amp, I got a relatively decent, fat bass sound.
For the acoustic guitar, I used two Rode NT5 microphones and got a wide, fat sound.
The electric guitar was recorded using a Shure SM58 and a Sennheiser MD 421. In general, I found the MD 421 to be one of the handiest microphones around.
For the piano I used two NT5s and two U 87s. The U 87s were set up using the mid-side (M/S) technique, which gave me a very wide and good stereo sound.

The production process was a long story. In most cases, to get a wide sound, I doubled a particular mono track and then panned the two copies right and left. It is also notable that I inverted the polarity of one of the two channels in order to get that wide, stereo-like sound.
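To show what these two tricks do to the left and right channels, here is a small numpy sketch of mid-side decoding and of the double-and-invert widening described above; the signals are placeholders, not my actual recordings:

```python
import numpy as np

def ms_decode(mid, side, width=1.0):
    """Mid-side decoding: sum and difference of the mid (front-facing)
    and side (figure-8) signals give the left/right pair."""
    left = mid + width * side
    right = mid - width * side
    return left, right

def double_and_invert(mono):
    """The widening trick described above: duplicate a mono track, pan the
    copies hard left/right and flip the polarity of one copy.
    (If the two sides are summed back to mono they cancel completely.)"""
    return mono, -mono

# Placeholder signals standing in for the piano M/S pair and a doubled track.
mono = np.sin(2 * np.pi * 220.0 * np.arange(44100) / 44100)
piano_l, piano_r = ms_decode(mono, 0.3 * mono)
wide_l, wide_r = double_and_invert(mono)
```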
Unfortunately, as a result of my limited recording experience, I faced many difficulties, such as parts of my recordings distorting or not being able to apply the proper effect at the appropriate time, and so on. But the good side of the story is that I learned a lot during recording and production, which will be very handy later.

HERE is my final MP3 track...

These are the documents for this project:
- Pre Production
- Production
- Recording

Saturday, 2 June 2007

Week 12 - CC1

This week doesn't have an exercise!
Here is my Pre-Production sheet for the project of this semester.
To see the enlarged picture, click on it!

Reference:
- Christian Haines 'Creative Computing I' Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 31/05/2007.

Week 12 - AA1

This week’s exercise was to re-mix the Eskimo Joe track “New York” again, this time with reverb, compressors and a mastering plug-in.
To me, the important part of the exercise was how these effects were supposed to be chained. The order in which the effects are placed is important, according to Christian. I personally use the compressor as the last effect on a track, because earlier devices might cause unwanted peaks and the compressor keeps them all under control.
I also used delay and reverb, though not for all tracks. I used delay on one of the guitars and reverb on the vocal tracks. In the end, a compressor was used on a group of tracks and also on the final master track.
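As a toy illustration of why the order matters, here is a sketch that treats each insert as a function applied in series; the "effects" are trivial stand-ins, not Pro Tools plug-ins:

```python
import numpy as np

def delay(x, time_s=0.25, feedback=0.3, sr=44100):
    """Very crude feedback delay (a stand-in, not a real plug-in)."""
    d = int(time_s * sr)
    y = x.copy()
    for i in range(d, len(y)):
        y[i] += feedback * y[i - d]
    return y

def reverb(x):
    """Toy 'reverb': just a short dense delay."""
    return delay(x, time_s=0.05, feedback=0.5)

def compress(x, threshold=0.8):
    """Toy compressor: hard-limit anything over the threshold."""
    return np.clip(x, -threshold, threshold)

x = np.random.randn(44100) * 0.1
x[1000] = 1.5                      # an unwanted peak somewhere in the track

# Compressor last: whatever peaks the earlier effects create are caught.
master = compress(reverb(delay(x)))

# Compressor first: the later effects can push levels back up past it.
other = reverb(delay(compress(x)))
```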
30 seconds of the final result could be found HERE.

Another interesting exercise this week was to GROUP several tracks in Pro Tools and apply effects and mix changes to them simultaneously.
As shown in the picture above, it is relatively easy to do this and to determine whether the grouped tracks are changed together in terms of the effects applied to them, the mix, or both.

My references are the same as last week’s! I still think the material in these references is valid for this week’s exercise as well.

References:
- Halbersberg, Elianne ‘The underdogs’, Mixing Magazine (Accessed [20.05.2007]), (http://mixonline.com/recording/mixing/audio_underdogs/)
- Steven Fieldhouse 'Audio Arts I' Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 29/05/2007.
- Drury, Brandon: ‘Music Mixing Technique’, Recording Review (Accessed [20.05.2007]), (http://www.recordingreview.com/articles/articles/59/1/Music-Mixing-Trick-Run-Everything-Through-A-Delay/Delay-Effects-Can-Bring-Excitement-To-Your-Music.html)

Week 11 - Forum

Construction and destruction in music: that was the topic of this week’s forum; however, many other (and in a few cases irrelevant) issues popped up and were discussed in the session.
Simon Whitelock (the dude that accused me of not having any understanding of mainstream Western music) was the first to present his ideas. He, as a DJ, has enough experience, and therefore the qualifications, to comment on others’ “cover” works and the way they manipulate songs originally composed, arranged and performed by someone else.
He said that in many cases the result is good, but the way they, again, manipulate others’ tunes is not particularly justified (or, as he put it, it’s just “jacking” stuff!). I can agree with him on the point that many musicians gain their reputation and success “because” of something they haven’t really been involved in. In other words, many icons (for example Crazy Frog) owe all their success to another person; they have just been wise enough to use the best material and give their best shot at the right time and in the right place, but...
I also believe that the intelligence needed for such an act can be appreciated; at least I appreciate it. In the session, I took the example of an anonymous musician who becomes popular just because some famous person takes his or her song and introduces it to the public (what Madonna did with Mirwais), and I think both parties are happy and the whole thing is a “win-win” game. However, thanks to freedom of speech and all that, everyone can have his or her own opinion, especially on something as highly subjective as music.
Nathan Shea went second and expressed his opinions on a “death-metal” composer who was trying hard NOT to come up with something mainstream, pop-sounding and commercial. In his view, numerous works by musicians in death metal, black metal and other “prefix”-metal genres are examples of deconstructing rules and the mainstream picture. I almost completely agree with him, but I think a few other examples could be added: many of the musicians who basically “started” a new genre and/or eventually became icons of that genre (Jimi Hendrix, Kurt Cobain, Ozzy, etc.) were breaking the rules and ordinary settings of popular music in the first place anyway. I think the value and benefit of “deconstructing what is supposed to be the rule” is not exclusive to black or death metal.
John Delaney went third and talked about the deconstruction (or maybe construction!) of music in order to produce ambient (or, particularly in the case of the music he played, “scary”) themes for movies. He introduced and talked about the use of non-ordinary sounds in the context of a sound theme. In my opinion he was focusing on the impact of the whole idea of construction and deconstruction in music and its result, which is not necessarily bad, but is in fact listenable and enjoyable in many cases.
Although I am a big fan of ambient music, I don’t think I am qualified enough to comment much on this idea. I do agree with John that making music while deconstructing rules, or sounds, can result in a valuable and creative (and also good-sounding) tune, but I can also think of many cases in which experiments in destruction, mainly in ambient music, have resulted in tunes that don’t really sound good to me!

References:
Whitelock, S. "Forum week 11", at University of Adelaide, 24/05/2007
Shea, N."Forum week 11", at University of Adelaide, 24/05/2007
Delaney, J. "Forum week 11", at University of Adelaide, 24/05/2007

Friday, 1 June 2007

Week 11 - CC1

This week’s exercise was to continue working with the software Metasynth.

Apparently the algorithm by which the software works is rather CPU-hungry; I say that because of the numerous times the computer crashed. Apple computers have a good reputation for NOT crashing, so I can’t blame the computer; it’s the software’s fault.
I don’t really know if the following exercise is the fifth or sixth time I actually did the job (refer to the crashing issue).

My 1st step was to take a sound sample and import it into Metasynth. I used a vocal sample from the Audio Arts week 11 recording. I normalised it and got rid of the empty spaces (rests) using the software Audacity.
Having put the result into the “Spectrum Synth” room of Metasynth, I rendered it and saved it as an AIFF file to use in the sequencer section of the software.
For sequencing, I needed a few sound files, so I chopped the result of the 2nd step into 5 different files and used the “Sampler” of the “Sequencer” room of Metasynth.
This is what the sampler looks like. As apparent in the picture, I have 5 different files, which are automatically recognised by the software; I just needed to point it to the folder in which the files were saved.
The last part of my job was to use the tools available in the “Montage” room of Metasynth. The whole idea seemed like musique concrète to me. I don’t really know how relevant that concept is to this exercise, but I had an extremely hard time with Metasynth and I don’t really want to think about the relation between my ideas and the software.
You can hear the final result HERE.

I strongly advise everyone to avoid using this software unless its bugs are fixed.

References:
- "Metasynth" software official website (Accessed [31.05.2007]), (www.metasynth.com)
- Christian Haines 'Creative Computing I' Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 24/05/2007.
- "Audacity" the software's official website (Accessed [31.05.2007]), (http://audacity.sourceforge.net)

Monday, 28 May 2007

Week 11 - AA1

This week’s exercise was to mix down the song “New York” by Eskimo Joe in 3 different forms:
For the first sample, I chose to mix the song totally dry (no effects), just using the main audio tracks of the Pro Tools session provided to us.
In addition to being dry, this sample is a mono mix; therefore no panning is used in it.



My 2nd sample is another dry mix, but stereo and including some panning (of the vocal track, guitars, keyboards, etc.).



The only difference between the 3rd sample and the other two is the use of an equaliser. I didn’t equalise all the tracks, though. I EQed the drums (kick, overheads, etc.), the vocals, the bass, the guitars and the main intro piano.




Personally, I found the 2nd sample more “natural” whereas the equalised one sounded a little “artificial” to me; however, I think I needed some reverb to be satisfied with the sound.

References:
- Halbersberg, Elianne ‘The underdogs’, Mixing Magazine (Accessed [20.05.2007]), (http://mixonline.com/recording/mixing/audio_underdogs/)
- Steven Fieldhouse 'Audio Arts I' Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 22/05/2007.
- Drury, Brandon: ‘Music Mixing Technique’, Recording Review (Accessed [20.05.2007]), (http://www.recordingreview.com/articles/articles/59/1/Music-Mixing-Trick-Run-Everything-Through-A-Delay/Delay-Effects-Can-Bring-Excitement-To-Your-Music.html)

Wednesday, 23 May 2007

Week 10 - CC1

This week’s exercise was to continue working with the software Metasynth and experiment with the different effects it applies to the sound.
For the initial sound, I used the quote from last week (“We become what we…”) and imported it into Metasynth.
As apparent in the picture below, the software has six different “rooms”, and each of them works with different effects and changes the sound in a different way.
I used the “Effects” room and applied the effects “Pitch and Time”, “Echo”, “Stereo Echo”, “Reverb”, “Resonator”, “Harmonics”, “Stretch”, “Harmonise”, “Phaser” and “Flanger”. Some of these effects’ functions are obvious, like “Pitch and Time”, which changes the pitch and (as a result of the change in frequency) the duration of the sample. “Stretch” changes the duration of the sample according to the user’s settings. (I could not observe its function properly because I had already affected my initial sample with other effects prior to Stretch, and it was a bit hard to hear this particular effect’s effects precisely.)
“Harmonics” adds harmonics (with regard to the key indicated and controlled by the user in the software) to the basic pitches (or rather frequencies).
One of the specialities of this software is the use of curves and manually-set diagrams to indicate the way the effect manipulates the sound.
I randomly drew curves and checked the results out.
After finishing my work in the Effects room, I recorded the result using Pro Tools. (Later I found out that I could have saved the file in AIFF format using Metasynth itself.)
“Image Synth” is another room in Metasynth, where the information for generating sound is taken from an image. Again, I came up with a random image and checked the result. I figured out that the vertical axis is used for frequency, i.e. the higher up the axis you go, the higher the frequency you get. The horizontal axis is for time. Because I applied a filter (another pre-set picture used as a filter) to my image, the beginning of the sample contains fewer frequencies and therefore sounds like a fade-in (which is quite logical!).
I think because I used almost the entire frequency range, my final waveform looked like a rectangle; again pretty logical! I recorded this sample using Pro Tools as well, as shown in the picture below.
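A rough way to picture what the Image Synth room is doing (my own guess at the mapping, not necessarily Metasynth’s actual algorithm): each row of the picture gets a frequency that rises as you go up, each column is a slice of time, and pixel brightness sets the amplitude of that partial in that slice. A small numpy sketch:

```python
import numpy as np

SR = 22050
COL_DUR = 0.05                    # seconds of sound per image column (arbitrary)

def image_to_sound(image, f_low=100.0, f_high=5000.0):
    """image: 2D array of 0..1 brightness, rows ordered bottom-to-top."""
    n_rows, n_cols = image.shape
    freqs = np.linspace(f_low, f_high, n_rows)   # higher row -> higher pitch
    n = int(COL_DUR * SR)
    t = np.arange(n) / SR
    slices = []
    for col in range(n_cols):                    # left to right = time
        s = np.zeros(n)
        for row in range(n_rows):
            if image[row, col] > 0:
                s += image[row, col] * np.sin(2 * np.pi * freqs[row] * t)
        slices.append(s)
    sound = np.concatenate(slices)
    return sound / max(abs(sound).max(), 1e-9)

# A mostly-filled 'random image' like the one I drew: nearly the whole
# frequency range sounds at once, so the waveform ends up block-like.
img = (np.random.rand(32, 40) > 0.2).astype(float)
audio = image_to_sound(img)
```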

References:
- "Metasynth" software official website (Accessed [23.05.2007]), (www.metasynth.com)
- Christian Haines 'Creative Computing I' Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 17/05/2007.
- Krabye, Helge : ‘A short presentation of Metasynth 4’, Krabye.com (Accessed [23.05.2007]), (http://helge.krabye.com/metasynth.php)

Monday, 21 May 2007

Week 10 - AA1

This week’s exercise was to record drums, which compared to other instruments is a total pain where the sun never shines. Since the instrument has several parts, and the importance of each part’s sound differs depending on the music, there are many ways in which drums can be recorded. However, the two main categories of microphone setup are “minimal” and (I think) “not-minimal!”
There are a few points to note: the tuning of the drum heads, the comfort of the player, the positions of the different parts, etc. play a huge role in the final result. Personally, as a sort-of funk drummer, I DO like a nasty ring on the snare and toms. (According to Steve, this opinion is not shared by all sound engineers and producers.)
I did two different recording sessions. (Because I am the only drummer in the batch, I had to play as well as record!)
The first one was with Edward Kelly.
I used a Shure Beta 52A for the kick (bass) drum, a Shure Beta 56A for the snare, 2 Neumann KM84 microphones for the overheads (panned left and right) and 2 Neumann U 87 condensers in front of the drum set as a so-called mid-side (M/S) pair.
The Pro Tools session is shown below:
The final MP3 result is here as well:
I used a 7-band equaliser and a compressor on my bass drum track, a 1-band EQ and the same compressor (with different parameter settings) on my snare, a 4-band EQ and a compressor on each of my overheads, 1-band EQs on my M/S mics, and finally a 7-band EQ and a compressor on my master drum sound.
This was an experience of a minimal microphone setup. I purposely did not use any noise gate since, firstly, a minimal setup is not my priority and, secondly, I intended to use the original reverb of the recording space (which again is not my first choice).

My second recording used more microphones and was done with the help of Bradley Leffer and Darren Slynn.
As is quite obvious in the Pro Tools picture, I again used a Beta 52A for the kick, a Shure SM57 for the snare, a Neumann KM84 just for the hi-hat, another Beta 52A for the floor tom, two Sennheiser MD 421s for the tom-toms, two NT5s for the overheads, and again two U 87s for the M/S pair.
One of the main differences between this session and the first one was that I used a noise gate as well as the compressor and the equaliser on the master track. The final result was closer to my personal “taste” in drum sound; pretty funky... and... what a good drummer!

References:
- Knave, Bryan: ‘Capturing the kit’, Electronic Musician (Accessed [20.05.2007]), (http://emusician.com/mag/emusic_capturing_kit/)
- Steven Fieldhouse 'Audio Arts I' Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 15/05/2007.
- Shambro, Joe: ‘How to record the perfect kick drum sound’, Home Recording – About.com (Accessed [20.05.2007]), (http://homerecording.about.com/od/recordingtutorials/ht/perfectkick.htm)