Friday, November 19, 2010

Week of 11.15.10

We began this week by finishing up the presentations. The first presentation was on turntablism. Turntablism, most simply, is when the turntable is used as a musical instrument or to significantly shape the sound. The early innovators of this style were Hindemith with “Trick Music,” Toch with “Spoken Music,” and Cage with his piece “Imaginary Landscape No. 1.” In these pieces, samples of instruments, voices, and foley are manipulated in pitch, duration, and playback speed and direction via the turntable. Turntablism evolved into the hip hop scene in the 1970s, and to this day the majority of turntable performance is in beat mixing, beat matching, scratching, and cross sampling music live for theatrical, exhibition, and DJ purposes. Players will have made marks on their records to tell them where a sample is so that they can mix between samples of drum grooves, prerecorded tracks, individual hits, and any other kind of sample with extreme precision and accuracy. Some DJs set it up such that they have a master turntable, which contains full-song samples and lead lines, and a slave turntable that has drum grooves and drum and bass grooves at different tempos and patterns that can be mixed into the other samples. An even more modern interpretation of turntablism is video turntablism, where videos are digitally synced with the playback from the turntable and can as such scrub back and forth with the same accuracy as the audio on the vinyl. Turntablism is a popular and familiar genre of electronic music to this day and will always be rooted in the human manipulation and exploitation of a playback medium.
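To get a rough feel for what that kind of manipulation does to a recording, here is a small sketch in Python with NumPy (the varispeed function is just my own illustration, not taken from any real DJ software): playing a sample back at a different rate shifts its pitch and its duration together, and a negative rate plays it backwards, much like dragging the record the other way under the needle.

import numpy as np

def varispeed(sample, rate):
    # Play a recorded sample back at a different speed, the way a turntable
    # does: pitch and duration change together, and a negative rate plays
    # the sample backwards.
    if rate < 0:
        sample, rate = np.asarray(sample)[::-1], -rate
    positions = np.arange(int(len(sample) / rate)) * rate   # fractional read positions
    return np.interp(positions, np.arange(len(sample)), sample)

tone = np.sin(2 * np.pi * 440 * np.arange(44100) / 44100)   # one second of A440
octave_down = varispeed(tone, 0.5)      # twice as long, heard an octave lower
backwards = varispeed(tone, -1.0)       # the same second of audio in reverse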
The next presentation was on drum machines. This presentation consisted mostly of outlining and defining the progression of individual drum units and as such was informative but somewhat dry. One of the first drum machines was the Wurlitzer Sideman in 1959. The Sideman was mostly used as the drum and rhythm machine in Wurlitzer organs but was also available as a stand-alone unit. Its operation was based on a wheel with an arm that would spin over lanes of triggers, each lane representing a different drum sound, the tempo determined by the speed at which the arm turned. The Sideman had 10 preset patterns and also allowed original patterns. In 1964 the R1 Rhythm Ace came out from Ace Tone, the company whose founder would later start Roland. The R1 had 10 preset patterns and looping, as well as original pattern composition. Then we were introduced to a series of machines developed by Roger Linn (1955-), who began with the LM-1. The LM-1 played digital samples at a 28 kHz sample rate and had 18 drum sounds but no cymbals, due to the length of those samples. In 1982 an upgrade of the LM-1 called the LinnDrum came out that included cymbal samples. In 1984 the Linn 9000 was released, which was a MIDI trigger machine. It had 18 touch-sensitive pads and a mixer and would play high-resolution samples, including cymbals. In 2001 the AdrenaLinn series began, which is a series of drum machine and effect processing pedals for guitarists. The most current model is the AdrenaLinn III, which has 200 presets and 40 different sounds.
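The Sideman's spinning arm over lanes of triggers is essentially what we would now call a step sequencer, and as a thought experiment that idea fits in a few lines of Python (the pattern and play function below are purely my own illustration, not a model of the actual machine): each lane is one drum sound, each step is one position of the arm, and the tempo is simply how quickly you sweep across the steps.

import time

# each lane is a drum sound, each column is one step of the rotating arm
pattern = {
    "kick":  [1, 0, 0, 0, 1, 0, 0, 0],
    "snare": [0, 0, 1, 0, 0, 0, 1, 0],
    "hat":   [1, 1, 1, 1, 1, 1, 1, 1],
}

def play(pattern, bpm=120, bars=2):
    steps = len(next(iter(pattern.values())))
    step_seconds = 60.0 / bpm / 2                 # eighth-note steps
    for _ in range(bars):
        for step in range(steps):
            hits = [name for name, lane in pattern.items() if lane[step]]
            print("step", step, ":", " + ".join(hits) if hits else "-")
            time.sleep(step_seconds)              # the "speed of the arm"

play(pattern, bpm=100)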
The next presentation was on Ray Kurzweil (1948-). Kurzweil is a technological prophet and designer who has developed serious technology and has written many books on the interaction between humans and technology, books that include many technological predictions that have since come true. He developed a computer that can recognize and read musical notes. He developed software that can read printed text aloud. He developed a synthesizer with Stevie Wonder and was part of a program that developed accurate generation of a natural overtone series, which led to his digital piano patches that sound at times indistinguishable from the real thing. Kurzweil seems to be the genuine mad scientist of the field in some senses, given his radical nature and somewhat haunting predictions. Altogether a fascinating character.
The next presentation was on the company Korg, which has made some of the best synthesizers, keyboards, and keyboard workstations on the market for decades. The company was founded by Tsutomu Katoh and Tadashi Osanai, Katoh being the investor and businessman and Osanai the innovator. They wanted to make better drum machines, and in 1966 they came out with the DB 66, which was an upgraded and improved version of the Wurlitzer Sideman. In 1973 they produced the Korg 700, a one-oscillator synthesizer, and later the Korg 700S, which had two oscillators. These synths had multiple built-in FX buses, which gave them a flexibility that most other synths at the time did not have. This was followed by the 900PS, a monophonic synth with multiple presets. In 1975 Korg introduced the WT-10, the first handheld portable tuner, which revolutionized the entire music industry in the convenience and ease of tuning an instrument. In 1976 Korg introduced the PE-1000, a polyphonic synth with presets, FX buses, and improved keys. The MS-10 was a monophonic modular synth with a front-panel patch interface. The Polysix, which came out in 1981, had 32 patches and a cassette backup. By 1986 keyboards had grown into workstations that included multitrack recorders, sampling, FX bussing, and touch-screen interfaces.
The next presentation was on an electronic music artist named Amon Tobin. Tobin is a DJ, a composer, a film and game score writer, and a producer of his own music. He uses vinyl sampling and drum machines to create remixes and re-imagined music out of preexisting recordings. His work reflected the time and precision with which he has to know his material and his concepts. Whether he was using samples of full songs or mixing together foley sounds, he demonstrated a clearly advanced and well-rounded technical skill at his craft. I will be interested to check out more of his material.
The next presentation was on the early developments in film sound. From 1890 to 1920 there was no sound in film at all, and in theaters the films would be accompanied by live music, not always in conjunction with the content of the film but rather to ease the tension many viewers had when seeing film for the first time, and to mask the sound of the projector. Around 1910 the Kinetophone was developed, which would synchronize music and film playback, but it was still two separate mediums and the film itself had no sound. Lee De Forest developed Phonofilm in 1919, which was the first successful composite medium of sound and picture. It used gradations of light and dark printed on the film itself to store and read back the audio signal. In 1927 Sunrise, one of the last great silent films, was made, with sweeping camera movements thanks to the light and agile cameras of the time. In the same year The Jazz Singer came out as the first feature film with synchronized sound, and it had all static camera shots due to the bulk of the cameras needed for the new medium. The system was the Vitaphone, developed in 1926 by Bell Labs, which cut the audio straight to disc. The modern era of film sound was ushered in by Fantasia, Disney’s third film, released in 1940. The film included a multitracked audio accompaniment that was mixed down to a stereo track, with a third track that carried volume automation. Knowing that the medium offered a limited amount of dynamic range, the music was composed to accommodate the limitation so that it would sound full for the duration of the film.
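As a toy illustration of that Phonofilm idea of storing sound as shades of light and dark, here is a short Python sketch (my own simplification, not De Forest's actual process): each audio sample becomes a brightness value printed along the edge of the film, and playback just reads those brightness values back and re-centers them.

import numpy as np

def encode_optical(audio):
    # audio in [-1, 1] becomes a strip of 0-255 "density" values on the film
    return np.round((audio + 1.0) * 127.5).astype(np.uint8)

def decode_optical(strip):
    # reading the densities back and re-centering them recovers the waveform
    return strip.astype(np.float64) / 127.5 - 1.0

fs = 8000                                          # toy sample rate
t = np.arange(fs) / fs
tone = 0.5 * np.sin(2 * np.pi * 440 * t)
recovered = decode_optical(encode_optical(tone))   # the tone plus a tiny quantization error
print("max error:", np.max(np.abs(recovered - tone)))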
The last presentation was on the comparison between tube technology and transistor technology. The first diode grew out of work by Thomas Edison and was further developed by John Ambrose Fleming. Lee De Forest developed the triode, which is the driver for all modern tubes. Tube technology relies on the emission and attraction of electrons in a vacuum, and while tubes are very competent and extremely good sounding, they are fragile and highly temperamental. Transistors are a silicon-based alternative that are solid state and considerably more reliable. When transistors were developed, tube technology went extinct in almost every industry except pro audio in the music industry, where the auditory properties of tubes are still desired. There is an undeniable sound that tubes impart on an audio signal that is in most situations highly beneficial. When tubes distort a signal it is harmonic-based distortion that yields a buildup of even-order harmonics via compression; solid-state transistors yield odd-order harmonics and distort in direct linear relation to the volume of the input signal. To this day, compressors, preamps, guitar amps, and bass amps are all available with tube technology. There are advantages and disadvantages to both systems. Tube compressors are warm and organic sounding, but can't handle the speed of a drum set like solid-state compressors can. The tube versus transistor debate will always be a blend of opinion and taste as well as practical functionality.
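That difference in distortion character is easy to demonstrate with a quick experiment. The sketch below, in Python with NumPy, is a deliberately crude stand-in for real tube and transistor circuits: it pushes a sine wave through an asymmetric soft-clipping curve and through a symmetric hard-clipping curve, then measures the harmonics. The asymmetric curve adds even-order harmonics, the way tubes are described as doing, while the symmetric curve adds almost exclusively odd-order harmonics.

import numpy as np

fs, f0 = 48000, 1000                          # sample rate and test-tone frequency
t = np.arange(fs) / fs
x = 0.8 * np.sin(2 * np.pi * f0 * t)

tube_like = np.tanh(x + 0.3) - np.tanh(0.3)   # asymmetric transfer curve -> even harmonics appear
solid_like = np.clip(x, -0.5, 0.5)            # symmetric clipping -> odd harmonics only

def harmonic_levels(sig, n=5):
    # level of harmonics 2..n in dB relative to the fundamental
    spec = np.abs(np.fft.rfft(sig * np.hanning(len(sig))))
    return [round(20 * np.log10(spec[k * f0] / spec[f0] + 1e-12), 1) for k in range(2, n + 1)]

print("asymmetric:", harmonic_levels(tube_like))    # 2nd and 4th clearly present
print("symmetric: ", harmonic_levels(solid_like))   # 2nd and 4th vanishingly low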
The second component of this week was a documentary on Bob Moog, instrument builder and great technological innovator of the 20th and 21st centuries. The film opened with a very close-up shot of the inside of one of Moog’s synthesizers, and as Moog gives a monologue about how he can feel the signal in his instruments and how that organic understanding is part of his inspiration, the camera follows the signal path as it goes through the many electronic components of the synth. This is followed by a short cartoon with a rather fascinating visual representation of a synth sound: three colored band waves that would change as the synth sound changed, displaying a perfect overlapping representation of how synth sounds are built from multiple modulating tones at once. Moog defines his synths as analogue instruments, for while they are based in electric current, it is electronic components that generate the sound out of current and differences in voltage, with no numbers or digital computing involved.
He began by building Theremins as a kid, which he eventually began to sell. This got him going to trade shows and demonstrations, which turned him on to electronic music. This in turn exposed him to the synthesizers of that time, which he soon began to design on his own. These designs and models got him noticed, and before long he was the designer of some of the most sought-after devices in electronic music. His early synths were popular with the commercial production houses in New York and were bought to replace musicians in the studio, which of course never works but nonetheless got the sound out there. As synth sounds were used more and more in commercials and on the radio, the public became more used to these types of sounds, making room for the synth as a popular music instrument and as a sound people would want to hear in music.
By Moog’s definition, synthesizers produce sound that is built up, synthesized, from elements like oscillators, but all the same synths are real instruments that produce real sound. His synths soon became modular, meaning that an array of different components such as oscillators, modulation oscillators, modulators, envelopes, and effects buses are all individual modules that can be patched together by the user to create and shape the sound of the instrument. Moog also discussed the interaction between the human and the instrument. He has always designed his instruments with the intention of live performance, such that the interface yields live performance possibilities and the player can really play it like an instrument rather than work it like a machine. The interaction between the human and the instrument is personal, and it can be what inspires aspects of performance and composition, so this interaction is something he encourages and nurtures when he conceptualizes an instrument layout.
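To make that modular idea concrete, here is a toy "patch" sketched in Python with NumPy (entirely my own illustration, and digital where Moog's instruments are analogue): each module is just a function, and patching means routing one module's output into another module's input, here an LFO into the oscillator's pitch and an envelope onto its loudness.

import numpy as np

fs = 44100                                   # sample rate
t = np.arange(int(fs * 2.0)) / fs            # two seconds of time

def lfo(rate_hz, depth, t):                  # modulation oscillator (vibrato source)
    return depth * np.sin(2 * np.pi * rate_hz * t)

def vco(freq_hz, fs):                        # audio oscillator driven by a frequency signal
    phase = np.cumsum(freq_hz) / fs          # integrate frequency to get phase
    return 2.0 * (phase % 1.0) - 1.0         # sawtooth wave

def envelope(t, attack=0.05, release=1.2):   # simple attack/decay loudness contour
    rise = np.minimum(t / attack, 1.0)
    fall = np.exp(-np.maximum(t - attack, 0.0) / release)
    return rise * fall

# the "patch": the LFO modulates the oscillator's pitch, the envelope shapes its loudness
pitch = 220.0 * (1.0 + lfo(rate_hz=5, depth=0.01, t=t))   # gentle vibrato around 220 Hz
voice = vco(pitch, fs) * envelope(t)                      # the finished mono signal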
I found the documentary to be rather enlightening. The humanity and the personal compassion with which Moog approaches his whole occupation is so humbling that it really makes me rethink how we relate to our gear, even the less personal gear. I found it quite fascinating and I would very much like to play a Moog synth sometime.
