Monday, November 1, 2010

Week of 11.01.10: Giants win the World Series

We began class by discussing some examples of complex tape-editing concepts found in Beatles recordings. First, in the song “Rain,” the guitar, bass, and drum tracks were recorded in a higher key, at a faster tempo, and at a faster tape speed than what is heard on the record. In order to match the intended range for the vocal part, the tape was slowed down for the tracking of the vocals, yielding a slower song in a lower key with a thick, fat snare and drum sound and an unnatural guitar tone. The other song was “When I’m Sixty-Four,” which was recorded in a lower key, at a slower tempo, and at a slower tape speed. The vocals were also recorded this way, so that when the song was sped up during playback they sounded higher and more youthful. We also touched on a very significant tape-edit moment in “Strawberry Fields Forever,” when the recording with the string orchestra and the recording with the band were spliced together successfully even though they were at different tempos and in different keys. By putting his thumb strategically on the playback reel of the full-band version, George Martin was able to accomplish this editing feat of a lifetime.
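As a side note on the arithmetic behind these varispeed tricks (my own illustration, not something from the lecture), the pitch shift in semitones follows directly from the ratio of playback speed to recording speed:

```python
import math

def semitone_shift(playback_speed, recording_speed):
    """Pitch shift, in semitones, when tape recorded at one speed
    is played back at another (positive = higher pitch)."""
    return 12 * math.log2(playback_speed / recording_speed)

# Hypothetical numbers: playing a tape back at 90% of the speed at
# which it was recorded drops the pitch (and the tempo) noticeably.
print(semitone_shift(0.9, 1.0))  # about -1.8 semitones
```

Slowing the tape down lowers both pitch and tempo together, which is exactly why the backing track of “Rain” ends up lower and slower than it was performed.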
We went on to discuss the transistor and what it did for music and for technology in general. The transistor offered a solid-state option for amplifying a voltage signal that was more technologically efficient than the vacuum tube. The transistor is smaller, lighter, easier and cheaper to make, lasts up to fifty times longer, is significantly more durable and less susceptible to the elements, and is much more power-efficient. While transistor sound may still be debated against the vacuum tube for character and tone, the efficiency and affordability of transistors opened up a great deal of opportunity for electronic instruments and the development of practical synthesizers.
Early synthesizers began with the Olson-Belar “electronic music composing machine,” which was like an early computer dedicated to the production of audio. It was based on Helmholtz’s concept of the overtone series: every note contains a fundamental pitch combined with a series of additional frequencies sounding along with that fundamental, which build and sculpt the timbre and character of the sound. The computer read punch cards encoding the harmonic overtone series, which told it what sound to generate. This was a laborious and somewhat impractical approach to sound generation, and yet it was a critical first step in the computer synthesis of audio. This concept led to the development of the RCA Mark I synthesizer, produced in 1955. The original design of the Mark I was based on a bank of twelve tuning-fork oscillators that produced sine waves, and it could output both to loudspeakers and to a record lathe that cut discs which could be pressed into vinyl. In 1958 RCA released the Mark II, which was seven feet tall and twenty feet long, weighed three tons, and contained 1,700 vacuum tubes. The original oscillator bank was joined by a noise generator and two variable-pitch tube oscillators with a range of 8 kHz to 16 kHz. This synth could produce not only sine waves but also triangle waves, sawtooth waves, and white noise. The Mark II also had a frequency shifter and a built-in reverb unit. While these synthesizers were innovative and groundbreaking, they were also bulky, high-maintenance, and had extremely unmusical interfaces. In 1965 Don Buchla took over as the leading synthesizer designer, thanks to the musical perspective he brought to the design of his instruments as well as his use of high-quality Ampex tape decks.
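To make Helmholtz’s overtone idea concrete, here is a minimal additive-synthesis sketch in Python (my own illustration, not the Olson-Belar design): a tone is built by summing sine partials at integer multiples of a fundamental, with the relative strengths of the harmonics shaping the timbre.

```python
import math

SAMPLE_RATE = 44100  # samples per second (an assumed modern rate)

def additive_tone(fundamental_hz, harmonic_amps, duration_s):
    """Build a tone by summing sine partials at integer multiples
    of the fundamental; harmonic_amps[n] scales the (n+1)th partial."""
    n_samples = int(SAMPLE_RATE * duration_s)
    samples = []
    for i in range(n_samples):
        t = i / SAMPLE_RATE
        value = sum(
            amp * math.sin(2 * math.pi * fundamental_hz * (n + 1) * t)
            for n, amp in enumerate(harmonic_amps)
        )
        samples.append(value)
    return samples

# Hypothetical recipe: a 220 Hz tone whose upper harmonics fall off,
# giving a rounder timbre than a bare sine wave would have.
tone = additive_tone(220.0, [1.0, 0.5, 0.25, 0.125], 0.5)
```

Changing only the list of harmonic amplitudes changes the character of the sound while the pitch stays the same, which is the essence of what the punch cards were specifying.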
We also discussed Cage and his philosophical views on electronic music composition. Cage treated his compositions like scientific experiments, attempting to remove the human emotional element from the compositional process so that he could emancipate his music from the conventions of Western music. Cage broke electronic music sound down into five basic elements. First, frequency: the rate of vibration (measured in Hz) that creates sound, pitch, and tone. Second, amplitude: the displacement of molecules by the generated sound, or more simply, volume (measured in dB). Third, timbre: the character and quality of a sound, or how it is perceived by the listener. Fourth, duration: the length of time a sound lasts before coming to an end. Fifth and finally, envelope: the attack, decay, sustain, and release of the notes and sounds in the composition (sketched in code below).

Undoubtedly influenced by these concepts, composers ushered in the age of electro-acoustic music: compositions and recordings that combine natural, organic sound sources with synthesized ones. This was the beginning of the integration of the analogue and digital worlds of music, and such hybrids have made up the majority of popular music ever since. The availability of professional, usable recording, editing, and processing technologies allowed for broader exposure and experimentation in the field of electronic music composition and production. Signal processors also became more widely used and accepted by audiences. These include echo, a discrete reflection of a sound heard after the original; reverb, the persistence of a sound in a space after the source has ceased; and delay, the playback of stored audio some time after it was originally played.
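To make Cage’s fifth element concrete, here is a minimal ADSR envelope sketch (my own illustration, not anything presented in class): a function mapping time to a gain between 0 and 1, with hypothetical attack, decay, sustain, and release settings.

```python
def adsr_gain(t, attack, decay, sustain_level, release, note_length):
    """Gain (0.0-1.0) at time t for a note held for note_length
    seconds, shaped by attack/decay/release times and a sustain level."""
    if t < attack:                        # ramp up from silence
        return t / attack
    if t < attack + decay:                # fall to the sustain level
        frac = (t - attack) / decay
        return 1.0 - frac * (1.0 - sustain_level)
    if t < note_length:                   # hold while the note is on
        return sustain_level
    if t < note_length + release:         # fade out after release
        frac = (t - note_length) / release
        return sustain_level * (1.0 - frac)
    return 0.0                            # note is fully over

# Hypothetical settings: fast attack, short decay, moderate sustain.
print(adsr_gain(0.3, attack=0.05, decay=0.1, sustain_level=0.6,
                release=0.4, note_length=1.0))  # 0.6 (sustain phase)
```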
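Likewise, the delay effect described above can be sketched as a simple feedback delay line (again my own illustration, with hypothetical parameter values): each output sample mixes the input with a stored copy from a fixed number of samples in the past.

```python
def feedback_delay(samples, delay_samples, feedback=0.5, mix=0.5):
    """Mix each input sample with a delayed copy; the feedback
    amount controls how many repeats trail off afterwards."""
    buffer = [0.0] * delay_samples  # circular buffer of past signal
    out = []
    for i, x in enumerate(samples):
        delayed = buffer[i % delay_samples]
        out.append(x + mix * delayed)
        buffer[i % delay_samples] = x + feedback * delayed
    return out

# A single impulse produces a decaying train of echoes.
impulse = [1.0] + [0.0] * 9
print(feedback_delay(impulse, delay_samples=3))
# [1.0, 0.0, 0.0, 0.5, 0.0, 0.0, 0.25, 0.0, 0.0, 0.125]
```

Echo and reverb can both be understood as variations on this structure: a single long delay tap gives a discrete echo, while many short, dense taps blur into reverb.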
