Tuesday 26 June 2007

Project - CC1

For this project I used just three sounds: a door being shut, the noise of mobile-phone interference (that annoying one you always hear coming out of electronic devices) and the voice of one of my friends.
There were several parts to this composition of mine. My main intention was to change the sounds quite dramatically and observe the effects of different software, plug-ins and devices. In some cases I also changed device parameters in real time: for example, I stretched the door sound from 1 second to around 50 seconds, applied a phaser to it, and recorded the result while changing the phaser's LFO amount (see the sketch below).
I also used panning, muting, reversing and other treatments like these quite frequently in the tune.
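For anyone curious, here is a rough modern approximation of that door treatment in Python; not the exact chain I used, just an extreme time stretch followed by a phaser whose LFO rate is swept during processing. The file names and parameter values are made up, and it assumes the librosa and pedalboard libraries:

```python
# Sketch: extreme time stretch, then a phaser whose LFO rate is swept
# chunk by chunk, mimicking a real-time parameter change while recording.
import numpy as np
import librosa
import soundfile as sf
from pedalboard import Phaser

y, sr = librosa.load("door_shut.wav", sr=None, mono=True)  # hypothetical file

# Stretch roughly 1 second out to ~50 seconds (rate < 1 slows the sound down).
stretched = librosa.effects.time_stretch(y, rate=1.0 / 50.0)

phaser = Phaser(rate_hz=0.1, depth=0.8, mix=0.7)
chunk = sr  # process in one-second chunks
out = []
for i in range(0, len(stretched), chunk):
    # Ramp the LFO rate from 0.1 Hz up to ~2 Hz across the file;
    # reset=False keeps the filter state continuous between chunks.
    phaser.rate_hz = 0.1 + 1.9 * (i / max(len(stretched) - 1, 1))
    out.append(phaser.process(stretched[i:i + chunk], sr, reset=False))

sf.write("door_stretched_phased.wav", np.concatenate(out), sr)
```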
The tune is about a dream; it is called “Pleasant nightmare”. It starts with the door being shut and goes on with the sound of the door being delayed (not at a fixed setting; I changed the delay time while recording) with other sounds layered on top of it.
The track finishes with a lady saying something (!) about good news, the door being shut again and the sound of an answering machine.
All in all, this has been the most interesting project I have done so far, especially since I lost my entire Pro Tools session and had to redo it from scratch.

Nice semester ending.

HERE is my final MP3 track...

And here is the score for this dude:

Project - AA1

My final project was the recording of a song by the band “Anathema” from their 1999 album Judgement. The song is called “Parisienne Moonlight”.
The song has no drum tracks; Amy Sincock sang the vocals for me, Nathan Shea played bass and Douglas Loudon played guitars and piano.
For vocals, I used two microphones, a Neumann U 87 condenser and a Sennheiser MD 421 dynamic. The interesting thing was that the U 87 picked up a lot of room reverb from the recording venue (Studio 5), which I actually found quite useful, because the reverb plug-ins for Pro Tools are not satisfying (or at least I don’t know how to get a good sound out of them yet). The MD 421, by contrast, picked up a much drier sound than the U 87.
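In practice that means the roomy U 87 take can be blended against the drier MD 421 take instead of reaching for a reverb plug-in. A minimal Python sketch of that kind of blend (the file names and blend ratio are made up; the two takes are assumed to be time-aligned recordings of the same performance):

```python
# Sketch: blend a dry close mic with a roomy mic as a "natural reverb".
import numpy as np
import soundfile as sf

dry, sr = sf.read("vocal_md421.wav")   # hypothetical mono files
roomy, _ = sf.read("vocal_u87.wav")

n = min(len(dry), len(roomy))
mix = 0.8 * dry[:n] + 0.2 * roomy[:n]  # mostly dry, a bit of room
sf.write("vocal_blend.wav", mix / max(np.abs(mix).max(), 1.0), sr)
```

One caveat: because the two microphones sit at different distances, their signals arrive at slightly different times, so the combined sound is worth checking for phase cancellation before committing to a blend.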
I recorded the bass through a DI; using an Avalon pre-amp, I got a relatively decent, fat bass sound.
For the acoustic guitar, I used two Rode NT5 microphones and got a wide, fat sound.
The electric guitar was recorded with a Shure SM58 and a Sennheiser MD 421. In general, I found the MD 421 one of the handiest microphones around.
For the piano I used two NT5s and two U 87s. The U 87s were arranged in a mid-side configuration, which gave me a very wide, good-sounding stereo image.
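For the record, the decoding that turns a mid-side pair into left/right stereo is just sum and difference. A minimal Python sketch (the file names and side gain are illustrative; it assumes the mid and side signals were bounced to separate mono files):

```python
# Sketch: standard M/S decode, left = M + S, right = M - S.
import numpy as np
import soundfile as sf

mid, sr = sf.read("piano_mid.wav")    # cardioid, facing the piano
side, _ = sf.read("piano_side.wav")   # figure-8, facing sideways

side_gain = 0.7                       # controls how wide the image gets
left = mid + side_gain * side
right = mid - side_gain * side

stereo = np.stack([left, right], axis=1)
stereo /= max(np.abs(stereo).max(), 1.0)  # avoid clipping
sf.write("piano_ms_decoded.wav", stereo, sr)
```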

The production process was a long story. In most cases, to get a wide sound, I doubled a particular mono track and panned the two copies hard left and right. Notably, I also inverted the polarity of one of the two channels to get that wide, stereo-sounding result.
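That trick is simple enough to show in a few lines. A minimal sketch of it (the file name is made up):

```python
# Sketch: pseudo-stereo widening by duplicating a mono track and
# flipping the polarity of one side.
import numpy as np
import soundfile as sf

x, sr = sf.read("guitar_mono.wav")

wide = np.stack([x, -x], axis=1)  # left = original, right = inverted
sf.write("guitar_wide.wav", wide, sr)
```

Worth noting: if such a track is ever summed back to mono, the two sides cancel completely and the sound disappears, so it pays to blend some of the un-flipped mono signal back in.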
Unfortunately, because I have little recording experience, I ran into many difficulties: parts of my recordings were distorted, I wasn't able to apply the proper effect at the appropriate time, and so on. But the good side of the story is that I learned a lot during the process of recording and production, which will be very handy later.

HERE is my final MP3 track...

These are the documents for this project:
- Pre Production
- Production
- Recording

Saturday 2 June 2007

Week 12 - CC1

This week doesn't have an exercise!
Here is my Pre-Production sheet for this semester's project.
To see the enlarged picture, click on it!

Reference:
- Christian Haines 'Creative Computing I' Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 31/05/2007.

Week 12 - AA1

This week’s exercise was to re-mix the Eskimo Joe track “New York” again, this time with reverb, compressors and a mastering plug-in.
To me, the important part of the exercise was how these effects should be chained. According to Christian, the order in which the effects are inserted matters. I personally put the compressor last on a track, because the devices before it can introduce unwanted peaks, and a compressor at the end of the chain keeps all of them under control.
I also used delay and reverb, though not on every track: delay on one of the guitars and reverb on the vocal tracks. Last came compression, applied to a group of tracks and also to the final master track.
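As an illustration of that ordering (outside Pro Tools), here is a minimal sketch using the Python pedalboard library; the file name and parameter values are made up, and the point is only the order of the chain:

```python
# Sketch: delay, then reverb, then a compressor last to catch
# whatever peaks the earlier effects add.
import soundfile as sf
from pedalboard import Pedalboard, Delay, Reverb, Compressor

audio, sr = sf.read("guitar_bus.wav")  # hypothetical bounced group

board = Pedalboard([
    Delay(delay_seconds=0.25, feedback=0.3, mix=0.2),
    Reverb(room_size=0.4, wet_level=0.25),
    Compressor(threshold_db=-18, ratio=4),  # last in the chain
])

sf.write("guitar_bus_fx.wav", board(audio, sr), sr)
```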
Thirty seconds of the final result can be found HERE.

Another interesting exercise this week was to GROUP several tracks in Pro Tools and apply effect and mix changes to them simultaneously. As shown in the picture above, it is relatively easy to set this up and to determine whether the grouped tracks change together in terms of the effects applied to them, the mix, or both.

My references are the same as last week's! I still think the material covered in them is valid for this week's exercise as well.

References:
- Halbersberg, Elianne ‘The Underdogs’, Mix magazine (Accessed [20.05.2007]), (http://mixonline.com/recording/mixing/audio_underdogs/)
- Steven Fieldhouse 'Audio Arts I' Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 29/05/2007.
- Drury, Brandon ‘Music Mixing Technique’, Recording Review (Accessed [20.05.2007]), (http://www.recordingreview.com/articles/articles/59/1/Music-Mixing-Trick-Run-Everything-Through-A-Delay/Delay-Effects-Can-Bring-Excitement-To-Your-Music.html)

Week 11 - Forum

Construction and destruction in music: that was the topic of this week’s forum; however, many other (and in a few cases irrelevant) issues popped up and were discussed in the session.
Simon Whitelock (the dude that accused me of not having any understanding of mainstream western music) was the first to present his ideas. As a DJ, he has enough experience (and therefore the qualifications) to comment on others’ “cover” works and the way they manipulate songs originally composed, arranged and performed by someone else.
He said that in many cases the result is good, but the way they manipulate others’ tunes is not particularly justified (or, as he put it, it’s just “jacking” stuff!). I can agree with him that many musicians gain their reputation and success “because” of something they haven’t really been involved in. In other words, many icons (for example, Crazy Frog) owe all their success to another person; they have just been wise enough to take the best material and give it their best shot at the right time and in the right place. But...
I also believe that the intelligence needed for such an act can be appreciated; at least I appreciate it. In the session, I gave the example of an anonymous musician who becomes popular because some famous person takes his or her song and introduces it to the public (what Madonna did with Mirwais), and I think both parties are happy: it is a “win-win” game. However, thanks to freedom of speech and all that, everyone can have their own opinion, especially on an issue as subjective as music.
Nathan Shea went second and spoke about a “death-metal” composer who was putting serious effort into NOT coming up with something mainstream, pop-sounding and commercial. In his view, numerous works by musicians in death metal, black metal and other “prefix”-metal genres are examples of deconstructing the rules and the mainstream template. I almost completely agree with him, but I think a few other examples could be added: many of the musicians who basically “started” a new genre and/or eventually became icons of it (Jimi Hendrix, Kurt Cobain, Ozzy, etc.) were breaking the rules and conventions of popular music in the first place anyway. I think the value of “deconstructing what is supposed to be the rule” is not exclusive to black or death metal.
John Delaney went third and talked about the deconstruction (or perhaps construction!) of music to produce ambient (or, in the case of the music he played, “scary”) themes for movies. He introduced and discussed the use of unusual sounds in the context of a sound theme. In my opinion, he was focusing on the impact of the whole idea of construction and deconstruction in music, and on the fact that its results are not necessarily bad: in many cases they are listenable and enjoyable.
Although I am a big fan of ambient music, I don’t think I am qualified enough to comment much further on this idea. I do agree with John that making music while deconstructing rules or sounds can result in a valuable, creative (and good-sounding) tune, but I can also think of many cases in which such destructive experiments, mainly in ambient music, have produced tunes that don’t really sound good to me!

References:
Whitelock, S. "Forum week 11", University of Adelaide, 24/05/2007
Shea, N. "Forum week 11", University of Adelaide, 24/05/2007
Delaney, J. "Forum week 11", University of Adelaide, 24/05/2007

Friday 1 June 2007

Week 11 - CC1

This week’s exercise was to continue working with the software Metasynth.

Apparently the algorithms the software uses are quite CPU-intensive; at least, that would explain the numerous times the computer crashed. Apple computers have a good reputation for NOT crashing, so I can’t blame the computer: it’s the software’s fault.
I don’t really know whether the exercise below is the fifth or the sixth time I actually did the job (see the crashing issue above).

My first step was to take a sound sample and import it into Metasynth. I used a vocal sample from the Audio Arts week 11 recording. I normalised it and removed the empty spaces (rests) using the software Audacity. Having put the result into the “Spectrum Synth” room of Metasynth, I rendered it and saved it as an AIFF file, to be used in the sequencer section of the software.
For sequencing, I needed a few sound files, so I chopped the result of the previous step into five separate files (see the sketch below) and used the “Sampler” of the “Sequencer” room of Metasynth.
This is how the sampler looks. As is apparent in the picture, there are five files, which the software recognised automatically; I just needed to point it at the folder in which the files were saved.
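As an aside, the preparation steps (normalise, strip the rests, chop into five files) are easy to reproduce outside Audacity. A minimal Python sketch, with made-up file names and thresholds:

```python
# Sketch: peak-normalise a vocal sample, drop the silent gaps,
# and chop the result into five AIFF files for a sampler.
import os
import numpy as np
import librosa
import soundfile as sf

y, sr = librosa.load("vocal_take.wav", sr=None, mono=True)
y = y / max(np.abs(y).max(), 1e-9)  # peak-normalise

# Keep only the non-silent regions (removes the rests between phrases).
intervals = librosa.effects.split(y, top_db=40)
trimmed = np.concatenate([y[start:end] for start, end in intervals])

# Chop into five roughly equal pieces for the sampler to pick up.
os.makedirs("sampler", exist_ok=True)
for i, piece in enumerate(np.array_split(trimmed, 5), start=1):
    sf.write(f"sampler/chunk_{i}.aiff", piece, sr)
```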
The last part of my job was to use the tools available in the “Montage” room of Metasynth. The whole idea looked like musique concrète to me. I don’t really know how relevant that concept is to this exercise, but I had an extremely hard time with Metasynth and don’t really want to think any further about the relationship between my ideas and the software.
You can hear the final result HERE.

I strongly advise everyone to avoid using this software until its bugs are fixed.

References:
- "Metasynth" software official website (Accessed [31.05.2007]), (www.metasynth.com)
- Christian Haines 'Creative Computing I' Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 24/05/2007.
- "Audacity" the software's official website (Accessed [31.05.2007]), (http://audacity.sourceforge.net)