Monday 28 May 2007

Week 11 - AA1

This week’s exercise was to mix down the song “New York” by Eskimo Joe in three different forms.
For the first sample, I chose to mix the song totally dry (no effects), using only the main audio tracks of the Pro Tools session provided to us.
In addition to being dry, this sample is a mono mix, so there is no panning in it.



My 2nd sample is another dry mix, but in stereo and with some panning (of the vocal track, guitars, keyboards, etc.).



The only difference between the 3rd sample and the previous two is the use of an equaliser. I didn't equalise all the tracks, though: I EQed the drums (kick, overheads, etc.), the vocals, the bass, the guitars and the main intro piano.




Personally, I found the 2nd sample more “natural”, whereas the equalised one sounded a little “artificial” to me; either way, I think I would have needed some reverb to be fully satisfied with the sound.

References:
- Halbersberg, Elianne: ‘The underdogs’, Mixing Magazine (Accessed [20.05.2007]), (http://mixonline.com/recording/mixing/audio_underdogs/)
- Steven Fieldhouse 'Audio Arts I' Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 22/05/2007.
- Drury, Brandon: ‘Music Mixing Technique’, Recording Review (Accessed [20.05.2007]), (http://www.recordingreview.com/articles/articles/59/1/Music-Mixing-Trick-Run-Everything-Through-A-Delay/Delay-Effects-Can-Bring-Excitement-To-Your-Music.html)

Wednesday 23 May 2007

Week 10 - CC1

This week’s exercise was to work with the software Metasynth and experiment with the different effects it can apply to a sound.
For the initial sound, I used last week’s quote, “We become what we…”, and imported it into Metasynth.
As is apparent in the picture below, the software has six different “rooms”, each of which works with different effects and changes the sound in its own way. I used the “Effects room” and applied the effects “Pitch and Time”, “Echo”, “Stereo Echo”, “Reverb”, “Resonator”, “Harmonics”, “Stretch”, “Harmonise”, “Phaser” and “Flanger”.
Some of these effects’ functions are obvious; “Pitch and Time”, for example, changes the pitch and, as a result of the change in frequency, the duration of the sample. “Stretch” changes the duration of the sample according to the user’s settings. (I could not observe its behaviour properly because I had already processed my initial sample with other effects before applying Stretch, which made it hard to pinpoint exactly what this particular effect was doing.)
“Harmonics” adds harmonics (according to the key set by the user in the software) to the basic pitches (or rather frequencies).
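To convince myself of the pitch/duration relationship that “Pitch and Time” exposes, here is a minimal sketch (in Python, outside Metasynth; the file name and shift amount are made up) of pitch shifting by plain resampling, where every frequency rises by the same ratio that the duration shrinks by:

    # Pitch shift by plain resampling: the pitch goes up and, as a side effect,
    # the duration goes down by the same ratio, much like "Pitch and Time".
    import librosa
    import soundfile as sf

    y, sr = librosa.load("we_become.aif", sr=None)       # hypothetical input file

    semitones = 5                                        # shift up by a fourth
    ratio = 2 ** (semitones / 12)                        # frequency ratio

    # Resample to fewer samples but keep playing back at the original rate:
    # every frequency is multiplied by `ratio`, the duration divided by it.
    shifted = librosa.resample(y, orig_sr=sr, target_sr=int(sr / ratio))

    print(f"original: {len(y) / sr:.2f} s, shifted: {len(shifted) / sr:.2f} s")
    sf.write("we_become_up5.wav", shifted, sr)
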
One of the specialities of this software is the use of curves and manually drawn diagrams to control the way each effect manipulates the sound.
I randomly drew curves and checked the results out.
After finishing my work in the “Effects room”, I recorded the result using Pro Tools. (Later I found out that I could have saved the file in AIFF format from Metasynth itself.)
“Image Synth” is another room in Metasynth, where the information for generating sound is taken from an image. Again I came up with a random image and checked the result. I figured out that the vertical axis is used for frequency (i.e. the higher up the axis you go, the higher the frequency you get) and the horizontal axis is for time. Because I applied a filter (another preset picture used as a filter) to my image, the beginning of the sample contains fewer frequencies and therefore sounds like a fade-in, which is quite logical!
I think because I used almost the entire available frequency range, my final waveform looked like a rectangle; again, pretty logical! I recorded this sample in Pro Tools as well, as shown in the picture below.
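As a rough illustration of this row/column mapping (not Metasynth’s actual synthesis engine; the image, frequency range and file name below are all made up), a bank of sine oscillators whose amplitudes are read column by column from a greyscale picture behaves the same way, with higher rows giving higher frequencies and columns advancing through time:

    # Image-to-sound sketch: rows = oscillator frequencies, columns = time steps,
    # pixel brightness = oscillator amplitude.
    import numpy as np
    import soundfile as sf

    sr = 44100
    col_dur = 0.05                                   # seconds of audio per column
    height, width = 64, 200                          # a small random "image"
    image = np.random.default_rng(0).random((height, width))

    freqs = np.linspace(100.0, 5000.0, height)       # low row index = low pitch
    samples_per_col = int(sr * col_dur)
    t = np.arange(samples_per_col) / sr
    out = np.zeros(width * samples_per_col)
    phase = np.zeros(height)                         # keep phases continuous

    for col in range(width):
        chunk = np.zeros(samples_per_col)
        for row in range(height):
            chunk += image[row, col] * np.sin(2 * np.pi * freqs[row] * t + phase[row])
        phase += 2 * np.pi * freqs * col_dur
        out[col * samples_per_col:(col + 1) * samples_per_col] = chunk

    out /= np.max(np.abs(out))                       # normalise before writing
    sf.write("image_synth_sketch.wav", out, sr)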

References:
- "Metasynth" software official website (Accessed [23.05.2007]), (www.metasynth.com)
- Christian Haines 'Creative Computing I' Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 17/05/2007.
- Krabye, Helge: ‘A short presentation of Metasynth 4’, Krabye.com (Accessed [23.05.2007]), (http://helge.krabye.com/metasynth.php)

Monday 21 May 2007

Week 10 - AA1

This week’s exercise was to record drums, which, compared to the other instruments, is a total pain where the sun never shines. Since the instrument has several parts, and the importance of each part’s sound differs depending on the music, there are many ways in which drums can be recorded. The two main categories of microphone setup, however, are “minimal” and (I think) “not-minimal!”
There are a few points to note: the tuning of the drum heads, the comfort of the player, the positioning of the different parts, etc. all play a huge role in the final result. Personally, as a sort-of funk drummer, I DO like a nasty ring on the snare and toms. (According to Steve, this taste is not shared by all sound engineers and producers.) I did two different recording sessions, because I am the only drummer in the batch and had to play as well as record!
The first one was with Edward Kelly.
I used a Shure Beta 52A for the kick (bass) drum, a Shure Beta 56A for the snare, two Neumann KM84 microphones for the overheads (panned left and right) and two Neumann U87 condensers in front of the drum set as a so-called mid-side (MS) pair.
The Pro Tools session is shown below, and the final MP3 result is here as well. I used a 7-band equaliser and a compressor on my bass-drum track, a 1-band EQ and the same compressor (with different settings) on my snare, a 4-band EQ and a compressor on each of my overheads, 1-band EQs on my MS mics and, finally, a 7-band EQ and a compressor on my master drum sound.
This was an exercise in a minimal microphone setup. I purposely did not use any noise gate, firstly because a minimal setup is not my priority, and secondly because I intended to keep the natural reverb of the recording space (which, again, is not my first choice).
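For reference, here is a minimal sketch of the kind of gain reduction those compressors apply, assuming a simple peak-following design; the threshold, ratio, time constants and file name are all just illustrative:

    # Minimal compressor: envelope follower + threshold/ratio gain computer.
    import numpy as np
    import soundfile as sf

    def compress(x, sr, threshold_db=-18.0, ratio=4.0, attack_ms=5.0, release_ms=80.0):
        atk = np.exp(-1.0 / (sr * attack_ms / 1000.0))
        rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
        env, out = 0.0, np.zeros_like(x)
        for n, s in enumerate(x):
            level = abs(s)
            coeff = atk if level > env else rel          # fast attack, slow release
            env = coeff * env + (1.0 - coeff) * level
            env_db = 20.0 * np.log10(max(env, 1e-9))
            over = env_db - threshold_db
            gain_db = -over * (1.0 - 1.0 / ratio) if over > 0.0 else 0.0
            out[n] = s * 10.0 ** (gain_db / 20.0)
        return out

    drums, sr = sf.read("drum_bus.wav")                  # hypothetical mono bounce
    sf.write("drum_bus_compressed.wav", compress(drums, sr), sr)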

My second recording used more microphones and was done with the help of Bradley Leffer and Darren Slynn. As is quite obvious in the Pro Tools picture, I again used a Beta 52A on the kick, a Shure SM57 on the snare, a Neumann KM84 just for the hi-hat, a Shure Beta 52A on the floor tom, two Sennheiser MD 421s on the tom-toms, two NT5s for the overheads and, again, two Neumann U87s for the MS pair.
One of the main differences between this session and the first one was that I used a noise gate as well as the compressor and the equaliser on the master track. The final result was closer to my personal “taste” in drum sound: pretty funky... and... what a good drummer!
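Since both sessions used a mid-side pair, here is a minimal sketch of how MS is decoded to left/right afterwards (standard sum-and-difference decoding; the file names and width value are placeholders):

    # Mid-side decoding: L = mid + side, R = mid - side.
    import numpy as np
    import soundfile as sf

    mid, sr = sf.read("ms_mid.wav")                  # forward-facing mic
    side, _ = sf.read("ms_side.wav")                 # sideways figure-8 mic

    n = min(len(mid), len(side))
    width = 1.0                                      # >1 widens, <1 narrows the image
    left = mid[:n] + width * side[:n]
    right = mid[:n] - width * side[:n]

    stereo = np.column_stack([left, right])
    stereo /= np.max(np.abs(stereo)) + 1e-9          # avoid clipping after the sum
    sf.write("drums_ms_decoded.wav", stereo, sr)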

References:
- Knave, Bryan: ‘Capturing the kit’, Electronic Musician (Accessed [20.05.2007]), (http://emusician.com/mag/emusic_capturing_kit/)
- Steven Fieldhouse 'Audio Arts I' Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 15/05/2007.
- Shambro, Joe: ‘How to record the perfect kick drum sound’, Home Recording – About.com (Accessed [20.05.2007]), (http://homerecording.about.com/od/recordingtutorials/ht/perfectkick.htm)

Tuesday 15 May 2007

Week 9 - CC1

Part 1

This week, we continued sampling and processing at a more advanced level.
For the sampling exercise, I used Doug’s voice and the “We become…” sentence, the same as in the previous two weeks.
HERE is the basic quote:

Having recorded the voice to a Pro Tools track, I reversed it as shown in the picture, which resulted in this sample:
I applied “delay” to this sample, then re-reversed it, applied “reverb” and normalised it (a rough sketch of this chain appears after the sample links below). The samples from these steps are here, in order:
- delayed and re-reversed sample:

- delayed, re-reversed, reverbed and normalised sample:
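Here is that rough sketch of the chain (reverse, delay, re-reverse, reverb, normalise), using a simple feedback delay and a noise-burst convolution as stand-ins for the Pro Tools plug-ins; file names and settings are made up:

    # Reverse -> delay -> re-reverse -> reverb -> normalise, as a plain-Python sketch.
    import numpy as np
    import soundfile as sf
    from scipy.signal import fftconvolve

    def delay(x, sr, time_s=0.35, feedback=0.45, mix=0.5):
        d = int(sr * time_s)
        wet = np.zeros(len(x) + 4 * d)               # room for the echo tail
        wet[:len(x)] = x
        for n in range(d, len(wet)):
            wet[n] += feedback * wet[n - d]          # feedback comb
        dry = np.pad(x, (0, len(wet) - len(x)))
        return (1 - mix) * dry + mix * wet

    def reverb(x, sr, decay_s=1.5, mix=0.3):
        n = int(sr * decay_s)
        ir = np.random.default_rng(1).standard_normal(n) * np.exp(-np.linspace(0, 8, n))
        wet = fftconvolve(x, ir)[:len(x)]
        wet /= np.max(np.abs(wet)) + 1e-9
        return (1 - mix) * x + mix * wet

    y, sr = sf.read("we_become.wav")                 # hypothetical mono source
    processed = y[::-1]                              # reverse
    processed = delay(processed, sr)                 # delay on the reversed audio
    processed = processed[::-1]                      # re-reverse
    processed = reverb(processed, sr)                # add reverb
    processed /= np.max(np.abs(processed))           # normalise to full scale
    sf.write("we_become_processed.wav", processed, sr)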

The next step was to create regions in the sample using Pro Tools’ own “choices”. In other words, I let Pro Tools choose the beginnings and endings of the various regions, using the Tab key and the “apple” + “E” shortcut. In this process, Pro Tools “decides” how, and at which points, to chop the sample into different regions.
The next step was to consolidate each region, as shown in the picture.
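I can’t reproduce Pro Tools’ actual transient detection, but the idea can be sketched with an off-the-shelf onset detector standing in for Tab-to-transient; the input file name is hypothetical:

    # Slice a file into regions at detected transients (a stand-in for
    # Pro Tools' Tab-to-transient region boundaries, not its real algorithm).
    import librosa
    import soundfile as sf

    y, sr = librosa.load("we_become_processed.wav", sr=None)    # hypothetical file

    onsets = librosa.onset.onset_detect(y=y, sr=sr, units="samples")
    boundaries = [0] + list(onsets) + [len(y)]

    for i in range(len(boundaries) - 1):
        region = y[boundaries[i]:boundaries[i + 1]]
        if len(region):
            sf.write(f"region_{i + 1}.wav", region, sr)         # one file per region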

At the end of this part of the exercise, I had seven samples, which can be found below:
Sample 1
Sample 2
Sample 3
Sample 4
Sample 5
Sample 6
Sample 7
As in the last two weeks’ exercises, I mapped the samples to octaves 0 to 7 of the keyboard using the NN19 device in Reason.
I also mapped the controller knobs of the Novation keyboard to the controllers of NN19 (I could have mapped the faders as well). To do this, I simply asked Reason to “Learn” which knob I wanted to assign to a particular controller and moved that knob while Reason was “Learning”. I set three different knobs on the keyboard to control the panning, the samples’ amplitude and the filter of the device.
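A toy version of that key mapping, assuming one octave-wide zone per sample and plain resampling for repitching (the file names, zone boundaries and root notes are placeholders, not NN19’s internals):

    # Seven samples, each owning one octave of MIDI keys, repitched from the root.
    import librosa
    import soundfile as sf

    zones = [
        {"sample": f"sample_{i + 1}.wav",            # placeholder file names
         "low": 12 + 12 * i, "high": 23 + 12 * i, "root": 12 + 12 * i}
        for i in range(7)
    ]

    def play_note(midi_note):
        for zone in zones:
            if zone["low"] <= midi_note <= zone["high"]:
                y, sr = librosa.load(zone["sample"], sr=None)
                ratio = 2 ** ((midi_note - zone["root"]) / 12)   # semitone offset
                # Repitch by resampling, like a simple sampler (duration changes too)
                return librosa.resample(y, orig_sr=sr, target_sr=int(sr / ratio)), sr
        return None, None

    audio, sr = play_note(60)                        # middle C falls in the fifth zone
    if audio is not None:
        sf.write("note_60.wav", audio, sr)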

Notably, the “resonance” I set for the filter was NOT zero, so I could get a sort of wah effect out of the filter.
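The wah comes from the resonant peak a high-Q low-pass puts at its cutoff; here is a sketch of a resonant low-pass being swept across a sample (standard RBJ biquad formulas; the Q value, sweep range and file names are illustrative):

    # Swept resonant low-pass: Q well above 0.707 gives the peak that sounds like a wah.
    import numpy as np
    import soundfile as sf
    from scipy.signal import lfilter

    def rbj_lowpass(f0, q, sr):
        w0 = 2 * np.pi * f0 / sr
        alpha = np.sin(w0) / (2 * q)
        b = np.array([(1 - np.cos(w0)) / 2, 1 - np.cos(w0), (1 - np.cos(w0)) / 2])
        a = np.array([1 + alpha, -2 * np.cos(w0), 1 - alpha])
        return b / a[0], a / a[0]

    y, sr = sf.read("sample_3.wav")                  # hypothetical mono sample
    block = 512
    out = np.zeros_like(y)
    zi = np.zeros(2)                                 # filter state carried across blocks
    cutoffs = np.linspace(300.0, 3000.0, len(y) // block + 1)   # the sweep

    for i in range(0, len(y), block):
        b, a = rbj_lowpass(cutoffs[i // block], q=6.0, sr=sr)
        out[i:i + block], zi = lfilter(b, a, y[i:i + block], zi=zi)

    out /= np.max(np.abs(out)) + 1e-9
    sf.write("sample_3_wah.wav", out, sr)
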
...and as the final task, I played and recorded a track! It can be found HERE.

Part 2:

This part involved applying the Gate effect. The gate on one track determines when ANOTHER track is allowed to play. In other words, whether track 1's signal gets through is controlled by the Gate effect keyed from track 2.
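A minimal sketch of that keyed gate, assuming an instant-attack envelope follower on the key track; the file names, release time and threshold are placeholders:

    # Keyed (side-chained) gate: track 1 only passes while track 2's envelope
    # is above the threshold.
    import numpy as np
    import soundfile as sf

    voice, sr = sf.read("we_become_processed.wav")   # track 1: the gated material
    key, _ = sf.read("claps_key.wav")                # track 2: the key / trigger

    n = min(len(voice), len(key))
    voice, key = voice[:n], key[:n]

    release = np.exp(-1.0 / (sr * 0.05))             # ~50 ms release
    env = np.zeros(n)                                # envelope follower on the key
    for i in range(1, n):
        level = abs(key[i])
        env[i] = level if level > env[i - 1] else release * env[i - 1]

    threshold = 0.1                                  # gate opens above this level
    gated = np.where(env > threshold, voice, 0.0)
    sf.write("gated_voice.wav", gated, sr)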

The original track I wanted to gate was again the "We become..." sentence with some slight changes (reversed, delayed, re-reversed and reverbed); it can be heard HERE.
The KEY (the signal that triggers the Gate) was the claps sample Christian had provided for us, processed with a phaser and the "Amplitube" effect (which sounds like a distortion to me).

I purposely changed the rhythm of Christian's initial sample and came up with THIS TRACK.
Applying this technique, I got a final result which can be heard HERE.

References:
- "Soundhack" software official website (Accessed [15.05.2007]), (http://www.soundhack.com/)
- Christian Haines 'Creative Computing I' Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 10/05/2007.
- Bruce-Douglas, Ian: ‘Propellerhead Reason Review’, Audiomidi.com (Accessed [15.05.2007]), (http://www.audiomidi.com/aboutus/reviews/douglas_reason2.cfm)

Friday 11 May 2007

Week 9 - AA1

This week’s exercise was to record 5 samples of bass guitar.
For the 1st sample, I used an AKG 414 condenser microphone. I did not use any other devices (such as the Avalon preamp) and recorded the line using only the mic and the Laney bass amplifier. As you can see, I set up three microphones pointing at the amp in different positions.

The bass player did not have his “tone” control turned up to the max, so I was not receiving the full range of treble frequencies.




This gave me a relatively clear sound; I found the AKG 414 a good microphone for recording bass guitar. By pointing the microphone towards the edge of the speaker, I also ended up with a reasonable amount of bass frequencies in my sample.

My 2nd sample used a Shure Beta 56A dynamic microphone. I sent its signal through the Avalon preamp and recorded the output from the Avalon.




The Avalon noticeably boosted the strength of the signal coming from the mic.
The result (compared to the AKG) was a fatter, bassier sound.

The 3rd sample used a Yamaha MZ-204 dynamic microphone. The process was the same as for the previous sample, and I used the Avalon again. Note that for both samples 2 and 3 the microphones were pointing at the centre of the speaker.




The MZ-204 did not sound hugely different to me (compared to the Shure). Both of these microphones are often used to record kick drums, and presumably their characteristics are more or less the same.

For the next sample, I used the DI, going directly into Pro Tools and recording with no other devices in the chain. Although the bass's tone control was not fully open, the DI picked up more of the treble frequencies.




Note: the DI and the AKG (sample 1) were recorded at the same time. I used the “Link” output of the DI box, sending one signal directly to Pro Tools and another to the amp, so I could record two different tracks simultaneously.

The 5th sample is a stereo recording with the AKG 414 panned to the right channel and the DI to the left.




Of all of them, this one sounds the best to me personally. Considering that the bass is the backbone (or base?) instrument of many songs, this approach seems well suited to the job in my opinion.
Here is the picture of my Pro Tools session:
References:
- White, Paul: ‘THE LOW DOWN, Recording Bass Guitar’, Sound On Sound (Accessed [11.05.2007]), (http://www.soundonsound.com/sos/mar99/articles/recordingbass.htm)
- Steven Fieldhouse 'Audio Arts I' Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 08/05/2007.
- Shambro, Joe: ‘Getting The Perfect Low End, Recording Bass Guitar’, Home Recording – About.com (Accessed [11.05.2007]), (http://homerecording.about.com/od/recordingtutorials/a/bassguitar.htm)

Tuesday 8 May 2007

Week 8 - CC1

Our task this week was to continue “sampling”; the same principles as the previous week applied, and in addition we tested the software Soundhack.
For the first sample, I used Soundhack to process sample 1 from last week. I changed the duration scale of the file by a ratio of 6 and came up with a new sample.
Again using the NN19 device in Reason, I mapped this sample from octave -2 to octave 0 of the sampler key map.
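The duration scaling above comes from Soundhack’s Phase Vocoder; as a stand-in sketch (not Soundhack itself), librosa’s phase-vocoder-based time stretch gives the same kind of six-times-longer, same-pitch result (file names are placeholders):

    # Phase-vocoder time stretch: six times longer, pitch unchanged.
    import librosa
    import soundfile as sf

    y, sr = librosa.load("week7_sample1.wav", sr=None)           # hypothetical input

    stretched = librosa.effects.time_stretch(y, rate=1.0 / 6.0)  # rate < 1 lengthens

    print(f"{len(y) / sr:.1f} s -> {len(stretched) / sr:.1f} s")
    sf.write("week7_sample1_x6.wav", stretched, sr)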

For the second sample, I used the “Phase Vocoder” effect again and changed the window type from “Hamming” to “Rectangle”.
It is notable that many of the characteristics of the file, including its format, can be changed using Soundhack (as shown in the picture). I mapped this sample to the key range from C#0 to C1.
The third sample is the result of applying the “Spectral Rate Extractor” effect to the 3rd sample from the previous week. Having changed the Transients and Stable Sounds settings of the file, I got this result.
I mapped this sample from C#1 to C2. The fourth sample uses the Phase Vocoder again, with a different duration scale. I applied this effect to sample 4 of the previous week’s exercise.
Before mapping this sample to a key zone in Reason, I changed the looping region of the file using Peak. I nudged the loop backwards to hear the effect; it is clearly recognisable in Reason after mapping. I mapped it from C#2 to C3.
For the fifth sample, I took the 5th sample from last week, applied the Phase Vocoder to it, then applied the “Spectral Rate Extractor” to the result and finally came up with my new sample.
I mapped this sample from C#3 to C5. Using the same principles as last week, I recorded my final tune in Reason using my new samples, applying a delay effect, varying the attack and release of my tones and playing with the panning, filter, pitch-bend and modulation controllers. This is my final result: click HERE.

References:
- "Soundhack" software official website (Accessed [08.05.2007]), (http://www.soundhack.com/)
- Christian Haines 'Creative Computing I' Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 03/05/2007.
- McLean, Steve: ‘Berklee Shares: an overview of mixing; Propellerhead Reason and Cakewalk Sonar’, Berklee Shares (Accessed [29.04.2007]), (http://www.berkleeshares.com/production__technology/mixing_reason_sonar)

Friday 4 May 2007

Week 8 - AA1

Recording electric guitars:

I used a Shure SM57 dynamic microphone, a Sennheiser MD-421 dynamic and a Rode NT3 condenser.
1st sample: I used the SM57 and positioned it pointing at the edge of the speaker to get a "bassier" sound.
I got an '80s "full-of-mid-range-and-fat" distorted guitar sound.

Sample 2: The same microphone, but pointing towards the amp's centre:
I liked the sound for an unplugged event. Not exactly what I prefer guitars to sound like, though.

Sample 3: Keeping the SM57 in position, I added an MD-421 and panned them right and left respectively.
I used an equaliser on this track. I sent the recorded sound not to the analogue channels of Pro Tools but to a stereo bus, then set another stereo track to take its input from that bus and record the sample.
The result of this technique was a wide, fat sound, which is normally a good thing!

4th sample: I used the room's reverberation as well as the amp. I positioned the MD-421 a little further from the amp to capture the room's reverb. Since I had a delay and a compressor in the chain, I again applied the same bus-channel technique.
It was a "trippy" guitar sound!

5th sample: the SM57 and the NT3 in an XY pattern, again using the same stereo technique. Although the NT3 is not the best microphone to record an electric guitar with, I tried my best to get a good result out of this combination.


The result was a wide sound, but I don't think it's always useful; the NT3 isn't the best choice for recording guitars. Well, the more I record and the more experience I gain, the more accurate my comments will become.

To edit my final samples and apply fade-ins and fade-outs, I used Audacity.












...And my final Pro Tools session:


References:
- McAvinchey, Dan: ‘Tone to tape: Tips for recording electric guitar’, Guitar Nine Records (Accessed [04.05.2007]), (http://www.guitar9.com/studionine4.html)
- Steven Fieldhouse 'Audio Arts I' Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 01/05/2007.
- Levine, Mike: ‘Electric-Guitar recording’, Electronic Musician (Accessed [04.05.2007]), (http://emusician.com/editorspicks/Electric_Guitar_Recording)