Wednesday 14 November 2007

AA1 - Semester 2 - Project

This project is a recreation of the sonic environment of a bus stop. There are various sounds: people walking, the pedestrian crossing signal, cars passing, mobile phones beeping for text messages and ringing, people speaking on the phone, and so on. None of the sounds used (except for the person speaking Dutch on the phone) was recorded from its original source; i.e. the sounds of the cars are built out of other sounds and noises. In some cases (such as the ring of the mobile phone) MIDI technology has been used. There are also other sounds (particularly the car engines) which were made using Plogue Bidule. In reality, however, there are many more sounds audible to a person standing at a bus stop.
A number of the sounds I used in this project actually come out of my mouth! In these cases, I recorded the sounds and modified them using Protools. Using various effects, time stretching and other processes in Protools, in some cases I changed the original sound dramatically and simulated a real-world sound.
There are also a number of sounds for which I used Plogue; in particular, digital (or rather artificial) sounds such as the mobile phone beeps were made in Plogue.
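Just to illustrate the kind of processing involved, here is a minimal Python sketch of the sort of extreme time stretch I applied to the mouth recordings; it assumes the librosa and soundfile libraries are available, and the file name is made up:

```python
import numpy as np
import librosa
import soundfile as sf

# "mouth_click.wav" is a made-up file name standing in for one of my mouth recordings.
y, sr = librosa.load("mouth_click.wav", sr=None)   # keep the original sample rate

# rate < 1 slows the sound down; 0.05 turns roughly 1 s of audio into about 20 s
stretched = librosa.effects.time_stretch(y, rate=0.05)

# A short fade-in/out so the stretched sound sits better under other layers
fade = int(0.5 * sr)
stretched[:fade] *= np.linspace(0.0, 1.0, fade)
stretched[-fade:] *= np.linspace(1.0, 0.0, fade)

sf.write("mouth_click_stretched.wav", stretched, sr)
```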
My approach was to get as close as possible to the real sonic situation of a bus stop. Although the result is not the best simulation of that particular situation, I have come very close to some of my aims.
Most of the issues I had to deal with were the ordinary errors and problems of working with digital media such as computers.
The software that I had to use to compile my project (Cubase) was/is not the most reliable, and it occasionally slowed me down considerably.
In general I experimented with new ways of manipulating various sounds and noises. The most important point was that every single element of this project should imitate some existing sound and should lead the listeners’ imagination! This required a new approach to observing the sound scene.
The MP3 of the final result is here:



The documentation for this project can be downloaded too:

Monday 12 November 2007

CC 1 - Semester 2 - Final Project

Project: Sounds Without Origins.

My project experiments with the effects of different devices on sound. The original sound is an electric guitar, and using the software Plogue Bidule, various effect devices process that sound.
At first it is the sound of the guitar that fills the final mix, but after a while a rhythm enters and the guitar gets delayed, reverbed, distorted and so on, until the original sound fades completely out of the mix and only the “aftermaths” of the sound-generating process remain.
It should be noted that the patch I have provided contains two ReWire setups; it is already rewired with Reason and Live. In the example (blogged) I have used the rhythm of a Reason Dr. Rex device.
The idea behind this project of mine is to examine the existence of several electronic and digital effect processors without the presence of the original sound; hearing something and not knowing where it comes from or how it is generated.
In addition to this, the patch also adds harmonics to the original signal (a pitch-shifter is one of the elements used in the patch).
Most importantly, the effects and their amounts in the final mix can be controlled via an external controller (in my example a Novation 61 SL); therefore, in the final mix there are different levels for different sounds throughout the duration of the piece.
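The core idea, the dry source fading out while its processed “aftermaths” keep ringing, can be sketched in a few lines of Python; this is only an illustration with a placeholder signal, not the actual patch:

```python
import numpy as np

sr = 44100
n = int(sr * 8.0)                       # eight seconds of output
t = np.arange(n) / sr

# Placeholder "guitar": a decaying 220 Hz tone that only plays for the first 2 seconds
dry = np.sin(2 * np.pi * 220 * t) * np.exp(-3 * t) * (t < 2.0)

# Feedback delay: the effect device that keeps ringing after the source stops
delay = int(0.375 * sr)
wet = np.zeros(n)
for i in range(n):
    fed_back = wet[i - delay] if i >= delay else 0.0
    wet[i] = dry[i] + 0.6 * fed_back    # 0.6 = feedback amount

wet = np.tanh(3.0 * wet)                # gentle distortion on the wet path only

# Fade the dry signal out of the final mix over the first 4 seconds
dry_gain = np.clip(1.0 - t / 4.0, 0.0, 1.0)
mix = dry * dry_gain + 0.8 * wet
mix /= np.max(np.abs(mix))              # normalise before playback or export
```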
Issues that could possibly occur mostly concern the interconnection of the different applications. It is essential to set Plogue Bidule as the master of the ReWire process; it is equally essential NOT to start another application that could be rewired to Plogue while the tune is playing.
This technology, and the sound coming out of this patch, would in my opinion be reasonably useful for a soundtrack or, generally, a “background theme”. Like many other tunes in the ambient genre, the final mix is not made up of too many sounds, and the listener would not have much difficulty distinguishing them. Mostly for this reason, the patch suits a film score.

Here is a piece that I did using this patch:



You can download the documentation needed for this project from here: (ZIP file.)

Friday 12 October 2007

CC 1 - Semester 2 - Week 10

Integrated stuff

For my project, and with regard to the new information on an “Integrated Setup” of several devices, I will, as said before, utilise Plogue Bidule, Reason and Ableton Live.
There is one more device that I just realised would be great to use: a control surface.
What I will do follows a simple algorithm: the signal, most probably a guitar, comes into Plogue while Ableton is rewired to it, and additional effects from Reason affect the overall result. Ableton Live will most probably provide me with a rhythm, and I will control Reason’s effects with the control surface!
The setup is not as sophisticated as what I initially intended it to be, but after experimenting for a while I reached a point where I realised I should not overuse what I have.

Here is a test of this setup; the only difference is that I sequenced a riff and looped it, then started controlling the effects in Reason. (Unfortunately it’s around 4 minutes!)
cc1sem2week10.mp3

References:
- Christian Haines 'Creative Computing 1.2' Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 11/10/2007
- Plogue Bidule. Official website. (www.plogue.com) [Accessed 12/10/2007]

AA1 - Semester 2 - Week 10

Additive Synthesis

For this week, I provided a Plogue patch which has a very simple structure and function.
The whole story is to ADD two different waveforms together (which is of course why it is called ADDitive synthesis!) and see the result.
I simulated a sound that I will actually need for my final project: the “beep” of the bus indicator when it gets close to a stop.
I grouped the different devices I used to construct the patch and provided a controller for it. In this controller, you can define the frequencies and the waveforms of the signals which you intend to add together.
Everything I have explained is apparent in the picture I have of the patch:
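For what it is worth, the same idea can be expressed in a few lines of Python; the partial frequencies and the beep rate here are only guesses at what the patch uses:

```python
import numpy as np

sr = 44100
t = np.arange(int(sr * 2.0)) / sr            # two seconds of output

f1, f2 = 880.0, 1320.0                        # the two partials being added together
tone = 0.5 * np.sin(2 * np.pi * f1 * t) + 0.5 * np.sin(2 * np.pi * f2 * t)

# Gate the tone on and off so it beeps rather than drones (two beeps per second)
gate = (np.floor(t * 4) % 2 == 0).astype(float)
beep = tone * gate
```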
The sonic result of this patch is here to listen to:
aa1sem2week10.mp3

References:
- Christian Haines 'Audio Arts 1.2' Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 09/10/2007
- Additive (Fourier) Synthesis. Princeton University (http://soundlab.cs.princeton.edu/learning/tutorials/SoundVoice/add1.htm) [Accessed 12/10/07]
- Additive Synthesis. Wikipedia. (http://en.wikipedia.org/wiki/Additive_synthesis) [Accessed 12/10/07]

Monday 8 October 2007

CC 1 - Semester 2 - Week 9

Integrated Setup

What I did for this week was not very complicated. In Plogue, I simply assigned two sine waves to control two pan positions on the mixer. My Plogue patch contained a delay and a reverb (both being panned, but at different rates).
On the other side, I had Ableton Live adding more effects to the final sound, and Reason rewired into the whole setup.
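The sine-wave-as-pan-controller idea looks roughly like this as a Python sketch; the two effect returns are just stand-ins and the LFO rates are arbitrary:

```python
import numpy as np

sr = 44100
t = np.arange(int(sr * 10)) / sr

# Placeholders for the two effect returns (any two mono signals would do here)
delay_return = np.sin(2 * np.pi * 220 * t)
reverb_return = np.sin(2 * np.pi * 330 * t)

def autopan(mono, rate_hz):
    """Equal-power panning driven by a sine LFO; returns an (n, 2) stereo array."""
    pan = 0.5 * (1 + np.sin(2 * np.pi * rate_hz * t))   # 0 = hard left, 1 = hard right
    left = mono * np.cos(pan * np.pi / 2)
    right = mono * np.sin(pan * np.pi / 2)
    return np.stack([left, right], axis=1)

# Both returns are panned, but at different LFO rates, as in the patch
stereo = 0.5 * autopan(delay_return, 0.25) + 0.5 * autopan(reverb_return, 0.7)
```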
I played the keyboard using a very typical retro ’80s sound, and Erik played guitar.
The result is here; I think everyone knows how to use this DJ MP3 player below...
cc1sem2week9final....

Setting up a bunch of integrated programs (in this case software) is interesting when “appropriate use of each device’s capabilities” is taken into consideration. By that I mean NOT using “more” or “less” than needed; for example:
Both Ableton and Plogue have a reverb effect (and so does Reason), but I wanted the “reverb” to be part of what is being “maximised” within Ableton Live. Therefore I reverbed the incoming signal IN PLOGUE, and not in Live.
I plan to use more controllers in my final project and less “note playing”. This time I was basically providing the session with some sort of solo; in the project I would rather change the characteristics of the add-on material, particularly the effects, in REAL time.
Having noted that, my setup would probably be: Guitar -> Plogue -> Reason -> Live -> speakers!
On the other hand, I will set up a surround (5.1) sound system for my final project and assign sine waves to it in such a way that the sound goes AROUND the room! Nice, hey?

References:
- Christian Haines 'Creative Computing 1.2' Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 04/10/2007
- Performing Expressive Rhythms with Billaboop Voice-Driven Drum Generator. Institut Universitari de l'Audiovisual (IUA), Universitat Pompeu Fabra, Barcelona, Spain. (PDF format) (www.iua.upf.edu/mtg/publications/b5fb70-dafx05-ahazan.pdf)
- Ableton Live Users. Wikipedia. (http://en.wikipedia.org/wiki/Category:Ableton_Live_users) [Accessed 08/10/07]
- Surround Sound. Wikipedia. (http://en.wikipedia.org/wiki/Surround_sound) [Accessed 08/10/2007]

Sunday 7 October 2007

AA1 - Semester 2 - Week 9

FM

Apparently I had done more than enough last week; there was no obligation to provide an FM patch. Anyway, -oops- I did it again! And came up with some new stuff…
As you can see in the picture, my first FM patch uses an oscillator as the carrier, which is modulated around a constant value:
The modulation rate speeds up and slows down according to the amplitude of the first oscillator (oscillator_3).
The other constant value in this patch (labelled “Constant value (filter!!!)”) practically acts as an LP filter!
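My best guess at what that patch is doing, written as a Python sketch rather than the Bidule patch itself (all the numbers are illustrative):

```python
import numpy as np

sr = 44100
t = np.arange(int(sr * 6)) / sr

# A very slow oscillator whose amplitude drives the modulation rate (the "oscillator_3" role)
sweep = 0.5 * (1 + np.sin(2 * np.pi * 0.2 * t))        # runs from 0 up to 1 and back
mod_rate = 2.0 + 38.0 * sweep                           # modulator speeds up to 40 Hz and slows to 2 Hz

# Integrate the time-varying modulator frequency to get its phase
mod_phase = 2 * np.pi * np.cumsum(mod_rate) / sr
modulator = np.sin(mod_phase)

carrier_freq = 300.0
deviation = 80.0                                        # depth of the frequency swing, in Hz
inst_freq = carrier_freq + deviation * modulator
out = np.sin(2 * np.pi * np.cumsum(inst_freq) / sr)
```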

There are two more final results, and I have put them here. The picture of my Pro Tools session is also below, to visualise what I was doing.
This one below is again some sort of car engine sound! For some reason, I always come up with such things.
This one could probably be used for my final project: a person walking!
And here we go, my regioned Pro Tools session:
References:
- Christian Haines 'Audio Arts 1.2' Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 02/10/2007
- Frequency Modulation. University of California, Berkeley, Robotics and Intelligent Machines Lab (http://robotics.eecs.berkeley.edu/~sastry/ee20/modulation/node4.html) [Accessed 07/10/07]
- Frequency Modulation. Federation of American Scientists. (http://www.fas.org/man/dod-101/navy/docs/es310/FM.htm) [Accessed 07/10/2007]

Tuesday 2 October 2007

Forum - Semester 2 - Instrument

Electronic musical instrument of mine:

In building this instrument, I had three initial principles:

1- Physical computing (Arduino)
2- Tone generating (Square wave generating)
3- Victorian Synth.

What my instrument does (as apparent in the video below) is take a signal from the computer (via the Arduino), use it to control a 4093 chip which generates a square wave, and send that signal to a speaker (through an amplifier); and then there is the Victorian synth.


This picture kind-of explains the principles by which the instrument works. (Frankly, I have forgotten what I wanted to call this invention of mine, if it is an invention at all.)
It should be mentioned that I had to add one extra component, the amplifier, in order to actually get a reasonably loud sound out of this dude; nevertheless, I don't think that part of the job counts for much.

It was not easy putting this instrument together; a s***load of soldering and whatnot was needed.

Wednesday 19 September 2007

CC1 - Semester 2 - Week 8

Performance Sequencing

What I did was create 3 different loops using Propellerheads' Reason: a drum loop, a bass line loop and a guitar loop. The guitar loop I pitch-shifted in real time.
After inserting the drum loop into Ableton Live, I warped it and introduced a different loop.
I also added a delay effect to the guitar line and controlled it real-time as well.
In total, my sequence followed this order:
Original Drum Loop -> Modified Drum Loop -> Bass Loop -> Guitar Loop (with delay amount controlled and varied throughout the tune)-> Bass Loop -> Modified Drum -> Original Drum.
The result can be downloaded here:
The interesting part of this week’s exercise for me was that Ableton Live actually makes it much easier to fuse different genres of music. Controlling the rhythm and its tempo (as well as its groove) gives the user lots of options to play around with different riffs and themes, and at the same time makes it possible to mix this diverse material together precisely and accurately. The reason I didn’t use my recordings from last semester was that my project didn’t involve drums, and I needed a rhythm to do this.
I think parts of this exercise of mine sounded like early experimental drum n bass tunes (i.e. early Aphex Twin).
Here is a very useful video of warping in Live. I know no one does, but please have a look.
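For the record, the real-time pitch shift on the guitar loop could be approximated offline with something like the following; it assumes librosa and soundfile are available, and the file name and shift amount are made up:

```python
import librosa
import soundfile as sf

# "guitar_loop.wav" is a placeholder for the exported Reason guitar loop.
y, sr = librosa.load("guitar_loop.wav", sr=None)
shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=-2)   # shift down a whole tone
sf.write("guitar_loop_down2.wav", shifted, sr)
```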

References:
- Christian Haines 'Creative Computing 1.2' Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 12/09/2007
- Ableton Live Users. Wikipedia. (http://en.wikipedia.org/wiki/Category:Ableton_Live_users) [Accessed 19/09/07]
- Ableton Live. Johns Hopkins University Digital Media Center. (http://digitalmedia.jhu.edu/learning/documentation/live) [Accessed 19/09/2007]

Monday 17 September 2007

AA1 - Semester 2 - Week 8

Amplitude and Frequency Modulation (AM and FM)

Plogue Bidule is a total disaster; that is because I still have not mastered it.
However, after ages of trying to create the two patches of AM and FM, I finally figured out the very simple and logical procedure behind them. In amplitude modulation, the amplitude input of the oscillator should be receiving a signal from another oscillator; in frequency modulation, the frequency input should be the receiver!
Despite the simple algorithm of FM and AM, simulating a natural or man-made sound was pretty hard.
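In code, the routing difference described above is literally one line each; a Python sketch with plain sine oscillators and arbitrary frequencies:

```python
import numpy as np

sr = 44100
t = np.arange(int(sr * 3)) / sr

carrier_freq = 440.0
mod_freq = 30.0
modulator = np.sin(2 * np.pi * mod_freq * t)

# Amplitude modulation: the modulator feeds the carrier's amplitude input
am = (1 + 0.8 * modulator) * np.sin(2 * np.pi * carrier_freq * t)

# Frequency modulation: the modulator feeds the carrier's frequency input
deviation = 100.0                                   # how far the frequency swings, in Hz
fm = np.sin(2 * np.pi * carrier_freq * t + (deviation / mod_freq) * modulator)
```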
For the AM, I simulated the period in which an 8-cylinder car (a Chevrolet came to my mind) starts its engine, accelerates and stops. I like what I came up with!
The MP3 can be downloaded from here:
For FM, I simulated the sound of an ambulance siren (or whatever it is called in English).
The MP3 can be downloaded from here:
The interesting occurrence in FM was the slight change of tone when the carrier and the modulator have frequencies very close to each other. To hear the effect, get the Plogue file from here and have a listen.
Here is the list of the AM and FM Plogue files I have and also some more stuff.
AM for the above MP3 (Chevrolet): (Click the Download button)
FM for the above MP3 (ambulance): (Click the Download button)
As is apparent in the picture, I have modulated the signal with ITSELF. Nice sound...
Here is the picture of my Protools session for the whole thing:
Amplitude Modulation on YouTube...

References:
- Christian Haines 'Audio Arts 1.2' Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 10/09/2007
- Amplitude Modulation. WhatIs.com . (http://searchsmb.techtarget.com/sDefinition/0,,sid44_gci214073,00.html) [Accessed 15/09/07]
- Amplitude Modulation. University of California, Berkeley. (http://robotics.eecs.berkeley.edu/~sastry/ee20/modulation/node3.html) [Accessed 15/09/2007]

Thursday 6 September 2007

Forum - Semester 2 - Week 7

CC1 - Semester 2 - Week 7

Ableton live

What I did was sequence a track using the NIN samples provided with the software Ableton Live. I did the sequencing (or rather mixing) live and in real time.
Arguably, Live has a good interface which, compared to many others, is more user-friendly. Most of the features that a typical user needs at any one time are on the screen, and there is not much need to browse the menus. Like many other programs, however, there is a need to "flip" sides of the interface and go from the editing section to the sequencing section; this could possibly be problematic. The other good feature of Live is that it recognises the beginning of a sample (or rather the "beat") pretty precisely and does not always drift off-time (it should be mentioned that the whole synchronisation process depends highly on the initial loop of the sample).
The restrictive practices experienced while using Live can actually push the user to be more "creative". As an example, since the demo version of Live did not allow me to save the file, I had to come up with an intelligent approach to playing and recording in real time using two different programs (in this case, Live and Protools).
Here is my final result in MP3 format:
...and here is a video of the duo Telefon Tel Aviv. It demonstrates how these guys use Live to get their job done:


References:
- Christian Haines 'Creative Computing 1.2' Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 06/09/2007
- Ableton (www.ableton.com) [Accessed: 06/09/2007]
- Ableton Live, Wikipedia (http://en.wikipedia.org/wiki/Ableton_Live) [Accessed: 06/09/07]

AA1 - Semester 2 - Week 7

Analogue / Digital synthesis; Juno 6

I experimented with a few synthesisers, including the Roland Juno 6 and Jupiter. For this week, I played with the Juno 6 and recorded the result.
Obviously, the result of around an hour of mucking around with the synthesiser gave me a whole lot of different sound patterns. Not many of them, however, would be considered simulations of sounds in the "real" world.
In the end, I merged two parts of my final result; the first one sounds like volcanic activity (bubbling?) and the second one sounds to me like wind blowing.
Here is the MP3 containing these two parts:
After all, "additive synthesis" (the way electronic synthesisers and most of the software work) provides many options by which it is possible to create various sounds. In addition to that, the most enjoyable part of the additive story for me is that, since the processes are done step by step, it is easier to keep track of what is being done all the way through.
Below is a video of the Roland SH-3A synthesiser; it is a short demo and a good example of how additive (...and, to me, also addictive!) synthesis works:

There is a good lecture on additive synthesis from Princeton University, New Jersey, US. I've put the link in the references.

References:
- Christian Haines 'Audio Arts 1.2' Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 03/09/2007
- Roland Juno 6. Vintage Synth Explorer. (http://www.vintagesynth.com/index2.html) [Accessed 07/09/07]
- Additive Synthesis. Princeton University (http://soundlab.cs.princeton.edu/learning/tutorials/SoundVoice/add1.htm) [Accessed 07/09/2007]

Forum - Semester 2 - Week 6

CC1 - Semester 2 - Week 6

AA1 - Semester 2 - Week 6

Interactive sound design

I chose my external hard drive, a Maxtor OneTouch III, to analyse its sound design.
There are two different sounds (beeps) that come out of it. When it is connected to a computer via a USB cable, there are three things that can happen:
1- Everything is fine! And there is no sound.
2- The drive is connected but there is not enough power; could be caused by the cable or the USB port of the computer. In this case, there would be a constant tone, which won’t finish unless the device is disconnected.
3- The drive is connected but it’s busy being recognised (or rather configured) by the computer; in this case there would be an on-off beep indicating that nothing should be touched.
The job of designing the sound for this particular product is, in my opinion, not bad.
a) It has a more or less simple structure, and it is not hard for the user to realise what is going on; it also uses just one tone, which is easy to notice and recognise.
b) The constant sound the device makes when there is a problem is quite annoying; the first thing that comes to the user's mind is to disconnect the drive, and that is intentionally what is expected to be done.
c) The on-off beep gives a sense of not-being-sure, and typically the user waits to see what happens next; that is also exactly what needs to be done. While the user is confused, the device takes its time and sorts everything out by itself!
Here is a link to a project on interactive sound design by students of the University of Melbourne: http://www.sounddesign.unimelb.edu.au/web/biogs/P000565b.htm
...and some good stuff (obviously related to interactive sound design) from Kristianstad University, Sweden: http://www.fb1.uni-lueneburg.de/fb1/austausch/kristian.htm
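Just to illustrate the two indicator behaviours described above, here is a tiny Python sketch; the tone frequency and the beep rate are guesses:

```python
import numpy as np

sr = 44100
t = np.arange(int(sr * 3)) / sr
tone = np.sin(2 * np.pi * 2000 * t)                 # a single indicator tone

power_fault = tone                                   # case 2: constant tone until the drive is unplugged
busy = tone * (np.floor(t * 2) % 2 == 0)             # case 3: on-off beep, half a second on, half a second off
```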

References:
- Christian Haines 'Audio Arts 1.2' Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 03/09/2007
-Interactive Sound Environment - Australian Sound Design Project Work, The University of Melbourne (http://www.sounddesign.unimelb.edu.au/web/biogs/P000565b.htm)[Accessed 07/09/2007]
- Interactive Sound Design, Kristianstad University (http://lab.tec.hkr.se/moodle/) [Accessed 07/09/2007]

CC1 - Semester 2 - Week 5

Naturalisation

For this week, I took a MIDI file of the song "Sultans of Swing" by Dire Straits and naturalised it. Most of my work was processing the guitar part (particularly the last solo of the tune). I mostly modified the velocities, durations, and starting points (groove) of the notes.
Here is the ORIGINAL MIDI part (also available in .cpr format for Cubase)...
and HERE is my modified version of the tune. (and its .cpr file)

Naturalisation might come in handy in many cases, firstly to add the spice of "human mistakes" to the artificial result of machines simulating musical pieces. Nevertheless, many MIDI files these days have already been played in by a person, so they do originally have human error within their structure. Still, many corrections are needed to make a MIDI file sound as close as possible to the original song.
Besides, many electronic compositions badly need "groove", which can only be achieved by skilfully applying certain processes while producing the tune, so that the final result sounds natural to an ordinary audience.
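The kind of edits I made by hand could be caricatured in a few lines of Python; this is only a toy version working on tuples, not on the actual Cubase/MIDI data:

```python
import random

def naturalise(notes, time_jitter=0.02, vel_jitter=12, dur_jitter=0.05):
    """notes: list of (start_sec, dur_sec, pitch, velocity) tuples, rigidly quantised."""
    out = []
    for start, dur, pitch, vel in notes:
        start = max(0.0, start + random.uniform(-time_jitter, time_jitter))   # push/pull the groove
        dur = max(0.05, dur * (1 + random.uniform(-dur_jitter, dur_jitter)))  # vary note lengths
        vel = min(127, max(1, vel + random.randint(-vel_jitter, vel_jitter))) # vary velocities
        out.append((start, dur, pitch, vel))
    return out

# A mechanical bar of eighth notes, all at velocity 100, before and after
mechanical = [(i * 0.25, 0.25, 64, 100) for i in range(8)]
print(naturalise(mechanical))
```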

References:
- Christian Haines 'Creative Computing 1.2' Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 21/08/2007
- Dire Straits, Wikipedia (http://en.wikipedia.org/wiki/Dire_Straits) [Accessed: 06/09/2007]
- MIDI search engine, Music Robot (http://www.musicrobot.com/) [Accessed: 06/09/07]

Forum - Semester 2 - Week 4

Forum - Semester 2 - Week 3

This week’s exercises were to generate square waves and manipulate the sound using various devices (potentiometers, resistors, etc.).
The first exercise is pretty simple:
The square wave is generated using a 4093 chip, a capacitor and a resistor, as you can see in the video below:


I changed the resistor and observed the effect:



In the next one, I used an LDR (Light Dependent Resistor) and experimented with it:



In this one, using a potentiometer (also called a POT), I varied the resistance and checked the effect:



This time, I used a potentiometer AND an LDR; nice stuff...:



Lastly, I used TWO potentiometers, one serving the device as the generator and the other as the modulator. I think I needed two different POTs to get a more apparent sound; a software stand-in for the whole exercise is sketched below:
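Since the breadboard itself can't be embedded here, this Python sketch stands in for it: a square wave whose frequency is swept over time, much as turning the pot or shading the LDR changes the oscillator's pitch (the actual resistance-to-frequency behaviour of the 4093 is not modelled):

```python
import numpy as np

sr = 44100
t = np.arange(int(sr * 4)) / sr

freq = 200.0 * 2 ** (2 * t / 4)                # sweep from 200 Hz up two octaves over four seconds
phase = 2 * np.pi * np.cumsum(freq) / sr
square = np.sign(np.sin(phase))                # the square wave itself
```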

Monday 27 August 2007

AA1 - Semester 2 - Week 5

Sound Installation

As always, my VERY first step in the research journey was Wikipedia, and I came across THIS, which I strongly recommend everyone have a look at; however, I ended up analysing someone’s art that was not mentioned on that page. Blame it on YouTube.
Lionel Marchetti (according to the first Google search result for his name) is “…a composer of musique concrète. First self-taught, he discovered the catalogue of musique concrète with Xavier Garcia. He composed at the CFMI of Lyon 2 University between 1989 and 2002, where he still organizes workshops focused on the loudspeaker, recorded sound and musique concrète, on both practical and theoretical levels. He has built his own recording studio, and has also composed at the Groupe de Recherches Musicales (GRM) in Paris since 1993…”
I analysed one of his sound installation projects which I found on Youtube; you can check it out below:

Use of music technology:
Apparently Marchetti is famous for his work on and with loudspeakers. This particular video shows the application of his idea of transforming electrical energy into the kinetic energy of a loudspeaker, delivering that in turn to some stones, and finishing the act with the natural sounds of the stones.

Sound Mediums:
From my understanding, the main mediums through which Marchetti is sending the sound(s) are stones and soil. The video is not very clear, but I think there is something going on with the plants and their leaves as well; nevertheless, the main sound-makers are the stones, collaborating with the loudspeakers!

Type of artist:
Apparently Marchetti is best known as a “concrète musician” (I just made this term up; I hope it exists in the glossary of music technology), but the work I have analysed is, to me, an example of sound installation. Many other installations that I watched, however, had mainly used sound and light; yet this work could be categorised the same way. Marchetti is also a poet.

References:
- Christian Haines 'Audio Arts 1.2' Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 20/08/2007
- Lionel Marchetti, Wikipedia (translated by Google from French to English) [Accessed 26/08/2007]
- Lionel Marchetti, ElectroCD [Accessed 26/08/2007]

Tuesday 21 August 2007

CC1 - Semester 2 - Week 4

Cubase

The first interesting quirk of Cubase, unique as far as I know, is its way of dealing with MIDI. Usually, and typically, software just needs inputs and outputs set to the proper ports and channels; then the different plug-ins, VIs, VSTs, etc. are applied. In Cubase, you have to send the MIDI signal THROUGH the VI or VST that you want to utilise.
The other interesting thing I came across in Cubase was that you have all your different devices, instruments and effects organised in various folders, and they are always accessible. I can’t say whether this is good or bad, but it sure is different. Its positive point could be that you always know your MATERIAL, and the bad side could be that you have to divide your mind into two: one part to manage your actual sound and the other to keep an eye on the effects’ (or VSTs’) folder.

Here is the MP3 of my final result:



Here is the Cubase file:

References:
- Christian Haines 'Creative Computing 1.2' Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 15/08/2007
- Cubase, Steinberg media (www.steinberg.net) [Accessed: 20/08/2007]
- VST - Virtual Studio Technology, Wikipedia (en.wikipedia.org/wiki/Virtual_Studio_Technology) [Accessed: 20/08/07]

Monday 20 August 2007

AA1 - Semester 2 - Week 4

Carlton Draught

In this ad, the famous music piece “Carmina Burana” by Carl Orff is used to support the video scenes of two groups of people running into each other in a big field.
In general, most of the techniques are conceptual. The lyrics have been changed and (in a funny way) keep repeating how massive and sophisticated the advertisement is.
However, the process of synchronising the music and the video is notable. When the music is soft and relatively quiet, the two groups of people move slowly; they walk. When the music gets loud and the choir singers begin to raise their voices, the two groups run towards each other. By the end of the music, the camera zooms back to cover the entire scene, and it becomes clear that one group was acting as the beer and the participants in the other group were playing the role of the consumer.

The ad is mostly utilising diegetic sounds and overdubs for the actors’ voices.


Sony.

Sony has used the Filipino band Rivermaya for its ad. The ad promotes the quality of sound and vision in Sony products. Throughout the short film there are many uses of hyper-real and ambience sounds, like the tape being torn off the product boxes, the sound of boxes being moved, etc. As in many other ads, we hear a short narration at the end.
One of the notable characteristics of this video is that the sounds build up gradually, and the small sounds (of packing and unpacking the product) lead into the main point, which is the actual band playing the music at the highest quality possible. The evolution of these sounds is interesting to me.


References:
- Christian Haines 'Audio Arts 1.2' Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 13/08/2007
- Carmina Burana, Carl Orff, Wikipedia (http://en.wikipedia.org/wiki/Carmina_Burana_%28Orff%29) [Accessed: 19/08/2007]
- Rivermaya, Wikipedia (http://en.wikipedia.org/wiki/Rivermaya) [Accessed: 19/08/07]

Sunday 12 August 2007

AA1 - Semester 2 - Week 3

Requiem for a Dream

The exercise for this week is to analyse the soundscape of a Hollywood movie.
I have taken 4 minutes of the film “Requiem for a Dream”, from 29’ 30” to 33’ 30”.
Unfortunately, mostly as a result of copyright issues, I am unable to put the clip up here, but below is the film’s trailer, taken from YouTube:


Analysis:

Music/Non-diegetic sounds:
- In the robbery scene, the background music gradually takes off and adds to its elements; however, when Jared Leto and Jennifer Connelly talk, it is heard as just a rhythm.
- When Harry’s mom (Ellen Burstyn) is experiencing the effects of an ecstasy pill and is dancing, Balkan music plays in the background.

Hyper-Real sounds:
- The robbery scenes have the chime of a cash register when it is opened. The sound is similar to the ones Pink Floyd used in the track Money (The Dark Side of the Moon, 1973, Harvest/Capitol).
- A security alarm beep is heard in the background of the robbery scenes.
- When the main actors are taking magic mushrooms and cocaine, the sound of their pupils dilating is heard. It sounds very artificial to me, and in fact I don’t think a pupil makes THAT sound when it rapidly dilates. (It would be great if anyone could tell me whether it makes any sound at all.) The same issue exists with the sound of the blood, loaded with drugs, running through the veins.
- The sound of cocaine being sniffed is a reversal of some other artificial sound; it might not be the real sound of the action, but in my opinion it makes sense in the context of the film.
- The electrical buzz of the lamps in the robbery scene is good; as the design requires, it is extremely exaggerated.
- When Leto and Connelly kiss each other, there is just ONE kissing sound, which is weird. (The positions are different; I don’t think all the kisses sound the same. They sure don’t feel the same, though!)
- When Connelly is making (what looks like) wallpaper, some reversed sound serves as the effect, and it totally doesn’t make sense to me; then again, it might, in context.
- When Burstyn is taking her pills, we again hear an ultra-exaggerated popping sound!

Diegetic sounds
- Doors are slammed a few times during these 4 minutes; again, the issue is that they sound more or less the same, despite being different doors.
- One of the main quotes of the film, “Purple in the morning, blue in the afternoon, orange at night...”, is said by Harry’s mom when she is organising her pill-taking timetable.

References:
- Christian Haines 'Audio Arts 1.2' Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 06/08/2007
- Requiem for a Dream (2000, film), IMDB (http://www.imdb.com/title/tt0180093/) [Accessed: 12/08/2007]
- MDMA (Ecstasy), Erowid (http://erowid.org/chemicals/mdma/mdma.shtml) [Accessed 12/08/2007]

Wednesday 8 August 2007

CC1 - Semester 2 - Week 2

Another Bidule project:

This time the circumstances were pretty different from last week. Having experimented a bit with the software, and also after reading an instructive write-up on the net, life was easier.
I came up with this result:

...and its Bidule file looked like this:
...and it (the Bidule file) is HERE to download.

There were several issues that I encountered, but overall, since Christian explained the system through which such "open-ended" software works, the bugs became more understandable. Still, I have to experiment more to actually find out how to generate "what I want". Unfortunately, the software gives zillions of options from which you can choose your sound (or rather choose the "way" you want to generate your sound). My aim, however, is to come up with a sound which I can possibly use somewhere else, in collaboration with other software. I didn't really get into the ReWire side of the software, but I think it is very possible to work with Bidule while it cooperates with other programs.

References:
- Christian Haines 'Creative Computing 1.2' Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 02/08/2007
- Martin, "Bidule Tutorial", Front and Back [Accessed 06/08/2007].

AA1 - Semester 2 - Week 2

Wednesday 1 August 2007

Forum - Semester 2 - Week 1

The Victorian Synth, as I understand it (and of course as a result of my reading), is about building the components of a synthesiser out of various electronic and/or mechanical devices.
This week's forum (or rather, Music Technology Workshop) gave us a basic understanding of the use, application and coordination of such devices. The aim was obviously to create a musical pattern. (I strongly believe that there is NO such thing as noise in the sense of an unpleasant sound; music IS subjective.)
We initially broke simple computer speakers apart and used their elements for our study of Victorian Synth.
The process was not sophisticated in the first place. With the electric current coming from a 9V battery, we momentarily connected and disconnected our wires (attached to the two terminals of the battery) to a speaker. This produced an electric spark and a sound (the same sound that small sparks make! I don't know if there is a particular name for it in English).
The experiment was to place various things on top of the speaker, in the path of the electric current, on the surface on which the connections were made, and so forth. We also used objects such as glasses to dampen the sound and/or change the environment of the sound source, and hence the quality of the sound.
As is obvious in the 3 different videos of the Victorian Synth that I have put here (all taken from YouTube), the whole process's requirements are simple. It just needs a bit of creativity to think of new sounds and ways of making them.








The piece I have put here consists of 4 parts. The 1st part is just momentary connections of the two electrodes; the second part is what is called “electric feedback”. The short distance between the centre of the speaker and the point where the electrodes connect creates a magnetic field which affects the speaker again, and the result sounds like what you hear! The 3rd part is making the momentary electric pulses while pressing the surface of the speaker in a very sentimental manner! (It would break, for f***'s sake.) The effect is a change of pitch, and the reason for that is the change of pressure behind the speaker's diaphragm. The fourth part is simply scratching one of the electrodes against a lighter flint (is that what it is called in English?) and checking the result.
Cool, huh?

References:
- Christian Haines, Stephen Whittington 'Music Technology Forum' Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 26/07/2007
- Bowers, John. Suborderly (http://suborderly.blogspot.com/2007/03/suborderly-music-victorian-synthesizer.html) [Accessed 1/08/2007]
- "The Victorian Synthesizer". Field Muzick. (http://www.fieldmuzick.net/17.237.0.0.1.0.phtml) [Accessed 1/08/2007]

Monday 30 July 2007

CC 1 - Semester 2 - Week 1

Plogue Bidule

I’d rather put my understanding of this software this way: Plogue Bidule is one of the ways of making “pure” computerised music.
The process I undertook basically started with giving the computer a bunch of frequencies and receiving a bunch of random pitches, then applying different changes, modulations and effects to them using various devices.

However, not all of my project was derived from MIDI. I also used one wave file and sent it through some VST plug-ins, effects and modulators.
The result was a mixture of patterns and “sound colours” which gave me a sense of electro-ambient music.

The final result:
The session file in Bidule format:




References:
- Christian Haines 'Creative Computing 1.2' Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 26/07/2007
- Beaulieu, Sebastien; Trussart, Vincent; Viens, David 2007, Bidule v0.92 user manual, Plogue [Accessed 30/07/2007].
- "Ring Modulation". Wikipedia. [Accessed 30/07/2007]

AA1 - Semester 2 - Week 1 (I think)

Sound design

I have put two examples of sounds designed for a) a movie and b) a game.

2001: A Space Odyssey, by Stanley Kubrick.
Actually it is Richard Strauss who composed the opening theme of this film, and it was “chosen” rather than designed. Kubrick, with the help of the film’s sound supervisor A.W. Watkins, decided to use Strauss’s piece Also sprach Zarathustra (“Thus Spoke Zarathustra” in English) for the opening.

The tune, however, is tremendously significant for its role in the movie. Many people are reminded of the movie rather than of Strauss when they hear the first bars.
In my opinion, the track carries the same story as the movie itself. The element of evolution, which plays a big role in the film, is put to good use in the track as well. Also, like the film, the track finishes with (technically) a major cadence, a sign of victory; likewise, the movie ends with the story of humanity’s victory.

Duke Nukem 3D; difficulty level: “Come Get Some”!
One of the main sound themes for the game Duke Nukem 3D is called “Come Get Some”, by the band Megadeth. The phrase is also the title of one of the game's levels. In addition, the statement is “said” by the main character of the game whenever he damages an enemy.
This phrase is one example of the many different sound effects, sentences and noises used in this game. (I think) to give the player a good sense of playing, “Come Get Some” is said in a way and with an intonation that ridicules the enemy.
Unfortunately, embedding a notable example of this game in here was impossible, therefore I had to put the direct link to the video on YouTube HERE.
However, HERE is a homemade trailer for Duke Nukem 3D accompanied by Megadeth's tune. (Parental Advisory)

References:
- Christian Haines 'Audio Arts 1.2' Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 23/07/2007
- 2001: A Space Odyssey (film), Wikipedia (http://en.wikipedia.org/wiki/2001:_A_Space_Odyssey_%28film%29) [Accessed: 30/07/2007]
- Duke Nukem 3D Credits, 3DRealms website (http://www.3drealms.com/other/index.html) [Accessed 30/07/2007]

Tuesday 26 June 2007

Project - CC1

For this project I used just three sounds: a door being shut, the noise of a mobile phone signal (that annoying one you always hear coming out of electronic devices) and the voice of one of my friends.
There were several parts to this composition of mine. My main intention was to change the sounds quite dramatically and observe the effects of different programs, plug-ins and devices. In some cases I also changed particular devices' parameters in real time. For example, I stretched the door sound from 1 second to around 50 seconds, applied a phaser to it and recorded the result while changing the LFO amount on the phaser.
I also used panning, muting, reversing and other treatments like these quite frequently in the tune.
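That LFO-swept phaser could be sketched roughly like this in Python: a few first-order allpass stages whose corner frequency is moved by a slow LFO, mixed back with the dry signal. This is only my rough software stand-in, not the actual plug-in:

```python
import numpy as np

def phaser(x, sr, lfo_rate=0.3, f_lo=300.0, f_hi=3000.0, stages=4):
    """Sweep a chain of first-order allpass filters with a sine LFO and mix with the dry signal."""
    n = len(x)
    t = np.arange(n) / sr
    fc = f_lo + (f_hi - f_lo) * 0.5 * (1 + np.sin(2 * np.pi * lfo_rate * t))  # swept corner frequency
    d = np.tan(np.pi * fc / sr)
    a = (1 - d) / (1 + d)                 # per-sample allpass coefficient

    y = np.zeros(n)
    x_prev = np.zeros(stages)             # previous input of each allpass stage
    y_prev = np.zeros(stages)             # previous output of each allpass stage
    for i in range(n):
        s = x[i]
        for k in range(stages):
            out = a[i] * s + x_prev[k] - a[i] * y_prev[k]
            x_prev[k], y_prev[k] = s, out
            s = out
        y[i] = 0.5 * x[i] + 0.5 * s       # dry/wet mix creates the moving notches
    return y

# Example: phase five seconds of decaying noise (a stand-in for the stretched door sound)
sr = 44100
door = np.random.randn(5 * sr) * np.exp(-np.arange(5 * sr) / sr)
wet = phaser(door, sr)
```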
The tune is about a dream; it is called “Pleasant Nightmare”. It starts with the door being shut, goes on with the door sound being delayed (not consistently; I changed the delay time and recorded it in real time) and other sounds on top of it.
The track finishes with a lady saying something (!) about good news, the door being shut again and the sound of an answering machine.
All in all, it was the most interesting project I have done so far, especially since I lost my entire Pro Tools session and had to redo it.

Nice semester ending.

HERE is my final MP3 track...

And here is the score for this dude.. :

Project - AA1

My final project was the recording of a song by the band “Anathema” from their 1999 album Judgement. The song is called “Parisienne Moonlight”.
The song has no drum tracks; Amy Sincock did the vocals for me, Nathan Shea played bass and Douglas Loudon played guitars and piano.
For the vocals, I used two microphones, a Neumann U 87 condenser and a Sennheiser MD 421 dynamic. The interesting thing was that the U 87 picked up a lot of reverb from the recording venue (Studio 5), which I actually found quite useful, because the reverb plug-ins for Pro Tools are not satisfying enough (or at least I don't know how to get a good sound out of them yet). The MD 421, by comparison, picked up a much drier sound than the U 87.
I recorded the bass through a DI. Using an Avalon preamp, I got a relatively decent, fat bass sound.
For the acoustic guitar, I used two Rode NT5 microphones and got a wide, fat sound.
The electric guitar was recorded using a Shure SM58 and a Sennheiser MD 421. In general, I found the MD 421 one of the handiest microphones around.
For the piano I used two NT5s and two U 87s. The U 87s were set up in a mid-side arrangement, which gave me a very wide and good stereo sound.

The production process was a long story. In most cases, to get a wide sound, I doubled a particular mono track and then panned the two copies right and left. It is also notable that I inverted the phase of one of the two channels in order to get that wide, stereo-like sound.
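In code, the trick looks something like the sketch below. (A caveat from my side: a polarity-flipped copy cancels itself if the mix is folded back to mono, which is why a short delay on one side is a common alternative.)

```python
import numpy as np

def fake_stereo(mono):
    """Duplicate a mono track, hard-pan the copies and flip the polarity of one side."""
    left = mono
    right = -mono                          # polarity-inverted copy
    return np.stack([left, right], axis=1)

def fake_stereo_delayed(mono, sr, delay_ms=12.0):
    """A more mono-friendly variant: delay one side slightly instead of inverting it."""
    d = int(sr * delay_ms / 1000.0)
    right = np.concatenate([np.zeros(d), mono[:-d]]) if d > 0 else mono
    return np.stack([mono, right], axis=1)
```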
Unfortunately, as a result of my limited recording experience, I faced many difficulties, such as parts of my recordings distorting or not being able to apply the proper effect at the appropriate time, and so on. But the good side of the story is that I learned a lot during the recording and production process, which will be very handy later.

HERE is my final MP3 track...

These are the documents for this project:
- Pre Production
- Production
- Recording

Saturday 2 June 2007

Week 12 - CC1

This week doesn't have an exercise!
Here is my Pre-Production sheet for the project of this semester.
To see the enlarged picture, click on it!

Reference:
- Christian Haines 'Creative Computing I' Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 31/05/2007.

Week 12 - AA1

This week’s exercise was to re-mix the Eskimo Joe track “New York” again, this time with reverb, compressors and a mastering plug-in.
To me, the important part of the exercise was how these effects were supposed to be chained. The order in which the effects are placed is important, according to Christian. I personally use a compressor as the last effect on tracks, because there might be unwanted peaks caused by some device, and the compressor keeps them all under control.
I also used delay and reverb, though not on all tracks. I used delays on one of the guitars and reverbs on the vocal tracks. Finally, a compressor was used on a group of tracks and also on the final master track.
30 seconds of the final result could be found HERE.

Another interesting exercise this week was to GROUP several tracks in Pro Tools and apply effects and mix changes to them simultaneously. As shown in the picture above, it is relatively easy to do this and to determine whether the tracks change together in terms of the effects applied to them, the mix, or both.

My references are the same as last week's! I still think the material in those references is valid for this week's exercise as well.

References:
- Halbersberg, Elianne ‘The underdogs’, Mixing Magazine (Accessed [20.05.2007]), (http://mixonline.com/recording/mixing/audio_underdogs/)
- Steven Fieldhouse 'Audio Arts I' Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 29/05/2007.
- Drury, Brandon: ‘Music Mixing Technique’, Recording Review (Accessed [20.05.2007]), (http://www.recordingreview.com/articles/articles/59/1/Music-Mixing-Trick-Run-Everything-Through-A-Delay/Delay-Effects-Can-Bring-Excitement-To-Your-Music.html)

Week 11 - Forum

Construction and destruction in music: that was the topic of this week’s forum; however, many other (and in a few cases irrelevant) issues popped up and were discussed in the session.
Simon Whitelock (the dude who accused me of not having any understanding of mainstream Western music) was the first to present his ideas. He, as a DJ, has enough experience, and therefore the qualifications, to comment on others’ “cover” works and the way they manipulate songs initially composed, arranged and performed by someone else.
He said that in many cases the result is good, but the way they manipulate others’ tunes is not particularly justified (or, as he put it, it’s just “jacking” stuff!). I can agree with him on the point that many musicians gain their reputation and success “because” of something they haven’t really been involved in. In other words, many icons (for example Crazy Frog) owe all their success to another person; they have just been wise enough to use the best thing and give their best shot at the right time and in the right place, but...
I also believe that the intelligence needed for such an act can be appreciated; at least I appreciate it. In the session, I took the example of an anonymous musician who becomes popular only because some famous person takes his or her song and introduces it to the public (what Madonna did with Mirwais), and I think both parties are happy and the whole thing is a “win-win” game. However, thanks to freedom of speech and all that stuff, everyone can have their own opinion, especially on such a highly subjective matter as music.
Nathan Shea went second and expressed his opinions on a “death-metal” composer who was putting a lot of effort into NOT coming up with something mainstream, pop-sounding and commercial. In his view, numerous works by musicians working in death metal, black metal and other “prefix”-metal genres are examples of deconstructing the rules and the mainstream template. I almost completely agree with him, but I think a few other examples could be added: many of the musicians who basically “started” a new genre and/or eventually became icons of that genre (Jimi Hendrix, Kurt Cobain, Ozzy, etc.) were breaking the rules and ordinary settings of popular music in the first place anyway. I think the value and benefit of “deconstructing what is supposed to be the rule” is not exclusive to black or death metal.
John Delaney went third and talked about the deconstruction (or maybe construction!) of music in order to produce ambient (or, particularly in the case of the music he played, “scary”) themes for movies. He introduced and talked about the use of non-ordinary sounds in the context of a sound theme. In my opinion he was focusing on the impact of the whole idea of construction and deconstruction in music, and on the fact that its result is not necessarily bad, but can also be listenable and enjoyable in many cases.
Although I am a big fan of ambient music, I don’t think I am qualified enough to comment on this idea that much. I do agree with John that making music while deconstructing rules or sounds can result in a valuable, creative (and good-sounding) tune, but I can also think of many cases in which experiments in destruction, mainly in ambient music, have resulted in tunes that don’t really sound good to me!

References:
Whitelock, S. "Forum week 11", at University of Adelaide, 24/05/2007
Shea, N."Forum week 11", at University of Adelaide, 24/05/2007
Delaney, J. "Forum week 11", at University of Adelaide, 24/05/2007

Friday 1 June 2007

Week 11 - CC1

This week’s exercise was to continue working with the software Metasynth.

Apparently the algorithm by which the software works is rather CPU-hungry; the reason I say that is the numerous times the computer crashed. Apple computers have a good reputation for NOT crashing, so I can't blame the computer; it's the software's fault.
I don't really know if the following exercise is the fifth or the sixth time I actually did the job (refer to the issue of crashing).

My 1st step was to take a sound sample and bring it into Metasynth. I used a vocal sample from the Audio Arts week 11 recording. I normalised it and got rid of the empty spaces (rests) using the software Audacity. Having put the result into the “Spectrum Synth” room of Metasynth, I rendered it and saved it as an AIFF file to use in the sequencer section of the software.
For sequencing, I needed a few sound files, so I chopped the result of the previous step into 5 different files and used the “Sampler” of Metasynth's “Sequencer” room.
This is what the sampler looks like. As apparent in the picture, there are 5 different files which are automatically recognised by the software; I just needed to point it at the folder in which the files were saved.
The last part of my job was to use the tools available in the “Montage” room of Metasynth. The whole idea looked like musique concrète to me. I don't really know how relevant that concept is to this exercise, but I had an extremely hard time with Metasynth and I don't really want to think about the relationship between my ideas and the software.
You can hear the final result HERE.

I strongly advise everyone to avoid using this software until its bugs are fixed.

References:
- "Metasynth" software official website (Accessed [31.05.2007]), (www.metasynth.com)
- Christian Haines 'Creative Computing I' Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 24/05/2007.
- "Audacity" the software's official website (Accessed [31.05.2007]), (http://audacity.sourceforge.net)