Friday, November 19, 2010
Week of 11.15.10
We began this week by finishing up the presentations. The first presentation was on Turntablism. Turntablism, most simply, is the use of the turntable as a musical instrument, or to somehow significantly influence the sound. The early innovators of this style were Hindemith with “Trick Music” and Toch with “Spoken Music,” as well as Cage with his piece “Imaginary Landscape No. 1.” In these pieces, samples of instruments, voices, and foley are manipulated in pitch, duration, and playback speed and direction via the turntable. Turntablism evolved into the hip hop scene in the 1970s, and to this day the majority of turntable performance is in beat mixing, beat matching, scratching, and cross sampling music live for theatrical, exhibition, and DJ purposes. Players will have made marks on their records to tell them where a sample is so that they can mix between samples of drum grooves, prerecorded tracks, individual hits, and any other kind of sample with extreme precision and accuracy. Some DJs set up a master turntable, which contains full song samples and lead lines, and a slave turntable with drum grooves and drum and bass grooves at different tempos and patterns that can be mixed into the other samples. An even more modern interpretation of Turntablism is Video Turntablism, where videos are digitally synced with the playback from the turntable and can as such be scrubbed back and forth with the same accuracy as the audio on the vinyl. Turntablism is a popular and familiar genre of electronic music to this day and will always be rooted in the human manipulation and exploitation of a playback medium.
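Beat matching comes down to simple arithmetic: the pitch fader changes the platter speed by a percentage, and to lock two records together that percentage has to equal the ratio of their tempos. Here is a minimal sketch of that calculation (the BPM figures are invented for illustration):

```python
def pitch_fader_percent(source_bpm: float, target_bpm: float) -> float:
    """Speed adjustment (as a percent) needed to match one record's tempo to another's."""
    return (target_bpm / source_bpm - 1.0) * 100.0

# A drum break pressed at 98 BPM, mixed against a 100 BPM track,
# needs the fader pushed up about +2%:
print(pitch_fader_percent(98.0, 100.0))  # ~2.04
```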
The next presentation was on Drum Machines. This presentation consisted mostly of outlining and defining the progression of individual drum units, and as such was informative but somewhat stale. One of the first drum machines was the Wurlitzer Sideman in 1959. The Sideman was mostly used as the drum and rhythm machine in the Wurlitzer organs but was also available as a stand-alone unit. Its operation was based on a wheel with an arm that would spin over lanes of triggers, each lane representing a different drum sound, with the tempo determined by the speed at which the arm turned. The Sideman had 10 preset patterns and original pattern capabilities. In 1964 the R1 Rhythm Ace came out from Ace Tone, the company that preceded Roland. The R1 had 10 preset patterns and looping, as well as original pattern composition. Then we were introduced to a series of machines developed by Roger Linn (1955-), who began with the LM-1. The LM-1 had digital samples at a 28 kHz sample rate and 18 drum sounds, but no cymbals due to the length of the samples. In 1982 an upgrade of the LM-1 called the LinnDrum came out that included cymbal samples. In 1984 the Linn 9000 was released, a MIDI trigger machine. It had 18 touch-sensitive pads and a mixer and would play high-resolution samples, including cymbals. In 2001 the AdrenaLinn series began, a line of drum machine and effect processing pedals for guitarists. The most current model is the AdrenaLinn III, which has 200 presets and 40 different sounds.
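The Sideman’s spinning arm is really just a mechanical step sequencer: each lane is a ring of contacts, and the arm’s rotation speed sets the tempo. A toy software version of the same idea might look like this (the pattern and sound names are made up for illustration):

```python
import time

# Each lane is one ring of triggers under the arm; True means the arm
# makes contact and fires that sound on that step.
pattern = {
    "brush": [True, False, True, False, True, False, True, False],
    "block": [False, False, False, False, True, False, False, False],
}

def play(pattern: dict, bpm: float, bars: int = 2) -> None:
    steps = len(next(iter(pattern.values())))
    step_seconds = 60.0 / bpm / 2          # eighth-note steps; a faster arm = a faster tempo
    for i in range(steps * bars):
        step = i % steps                   # the arm coming back around the wheel
        hits = [name for name, lane in pattern.items() if lane[step]]
        print(f"step {step}: {' + '.join(hits) or '-'}")
        time.sleep(step_seconds)

play(pattern, bpm=120)
```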
The next presentation was on Ray Kurzweil (1948-). Kurzweil is a technological prophet and designer who has designed and developed serious technology and has written many books on the interaction between humans and technology, books that have included many technological predictions that have come true. He developed a computer that can recognize and read musical notes. He developed software that can read and speak from text. He developed a synthesizer with Stevie Wonder and was part of a program that developed accurate generation of a natural overtone series, which led to his digital piano patches that sound at times indistinguishable from the real thing. Kurzweil seems to be the genuine mad scientist of the field in some senses, given his radical nature and somewhat haunting predictions. Altogether a fascinating character.
The next presentation was on the company Korg, which has made some of the best synthesizers, keyboards, and keyboard workstations on the market for decades. The company was founded by Tsutomu Katoh and Tadashi Osanai, Katoh being the investor and businessman and Osanai the innovator. They wanted to make better drum machines, and in 1966 they came out with the DB 66, an upgraded and improved version of the Wurlitzer Sideman. In 1973 they produced the Korg 700, a one-oscillator synthesizer, and later the Korg 700s, which had two oscillators. These synths had multiple built-in FX busses, which gave them a flexibility that most other synths at the time did not have. This was followed by the 900PS, a monophonic synth with multiple presets. In 1975 Korg introduced the WT-10, the first handheld portable tuner, which revolutionized the entire music industry in the convenience and ease of tuning an instrument. In 1976 Korg introduced the PE-1000, a polyphonic synth with presets, FX busses, and improved keys. The MS-10 was a monophonic modular synth with a front patch interface. The Poly 6, which came out in 1981, had 32 patches and a cassette backup. In 1986 keyboards became keyboard workstations that included multitrack recorders, sampling, FX bussing, and touch screen interfaces.
The next presentation was on an electronic music artist named Amon Tobin. Tobin is a DJ, a composer, a film and game score writer, and a producer of his own music. He uses vinyl sampling and drum machines to create remixes and re-imagined music out of preexisting recordings. His work displayed and reflected the time and precision with which he has to know his music and his concepts. Whether he was using samples of full songs or mixing together foley sounds, he demonstrated a clearly advanced and well-rounded technical skill at his craft. I will be interested to check out more of his material.
The next presentation was on the early developments in film sound. From 1890 to 1920 there was no sound in film at all, and in theaters the films would be accompanied by live music, not always in conjunction with the content of the film but rather to ease the tension many viewers had when seeing film for the first time, and to mask the sound of the projector. Around 1910 the Kinetophone was developed, which would synchronize music and film playback, but this was still two separate media and the film itself had no sound. Lee De Forest developed Phonofilm in 1919, the first successful composite medium of sound and video. It used scales of light and dark to read back the audio signal through the film. In 1927 Sunrise, one of the last great silent films, was made, with sweeping camera movements made possible by the light and agile cameras of the time. In the same year The Jazz Singer came out as the first film with sound, and had all static camera shots due to the bulk of the cameras needed for the new medium. The device was the Vitaphone, developed in 1926 by Bell Labs, which cut the audio straight to disc. The modern era of film sound was ushered in by Fantasia, Disney’s third film, released in 1940. The film included a multitracked audio accompaniment that was mixed down to a stereo track with a third track that carried volume automation. Knowing that the medium offered a limited amount of dynamic range, the music was composed to accommodate the limitation so that the music would sound full for the duration of the film.
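As I understand the optical principle, the audio waveform is printed as varying brightness along the film, and a photocell turns the passing light back into voltage. A crude digital analogy of that round trip (entirely my own illustration, not De Forest’s actual variable-density process):

```python
import numpy as np

def encode_optical(samples: np.ndarray) -> np.ndarray:
    """Map audio in [-1, 1] onto film-track brightness values 0-255."""
    return np.round((samples + 1.0) * 127.5).astype(np.uint8)

def decode_optical(track: np.ndarray) -> np.ndarray:
    """The 'photocell': read brightness back into a signal in [-1, 1]."""
    return track.astype(np.float64) / 127.5 - 1.0

t = np.linspace(0, 1, 8000, endpoint=False)
tone = 0.8 * np.sin(2 * np.pi * 440 * t)        # a test tone to 'print' on the film
recovered = decode_optical(encode_optical(tone))
print(np.max(np.abs(tone - recovered)))          # only a small quantization error remains
```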
The last presentation was on the comparison between tube technology and transistor technology. The first diode grew out of an effect observed by Thomas Edison and was further developed by John Ambrose Fleming. Lee De Forest developed the triode, which is the driver for all modern tubes. Tube technology relies on the emission and attraction of electrons in a vacuum, and while tubes are very competent and extremely good sounding, they are very fragile and highly temperamental. Transistors are a silicon-based alternative that are solid state and considerably more reliable. When transistors were developed, tube technology went extinct in almost every industry except for pro audio in the music industry, where the auditory properties of tubes are still desired. There is an undeniable sound that tubes impart on an audio signal that is in most situations highly beneficial. When tubes distort a signal it is harmonic-based distortion that yields a buildup of even-order harmonics along with gentle compression; solid-state transistors yield odd-order harmonics and distort in direct linear relation to the volume of the input signal. To this day compressors, preamps, guitar amps, and bass amps are all available with tube technology. There are advantages and disadvantages to both systems. Tube compressors are warm and organic sounding, but can’t handle the speed of a drum set like solid-state compressors can. The tube versus transistor debate will always be a blend of opinion and taste as well as practical functionality.
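The even-versus-odd harmonic point has a neat mathematical basis: a symmetric (odd) transfer curve like hard clipping can only generate odd harmonics, while any asymmetry in the curve introduces even ones. Here is a small numpy sketch along those lines; the two transfer functions are my own stand-ins, not models of any particular circuit:

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t)                    # one second of a 440 Hz sine

# "Tube-like": an asymmetric soft clipper (the offset breaks the symmetry,
# which is what puts energy into the even harmonics).
tube = np.tanh(1.5 * x + 0.3) - np.tanh(0.3)

# "Transistor-like": symmetric hard clipping, an odd function,
# so only odd harmonics appear.
solid_state = np.clip(2.0 * x, -1.0, 1.0)

def harmonic_level(y: np.ndarray, n: int, f0: float = 440.0) -> float:
    """Magnitude of the n-th harmonic, read from a single DFT bin."""
    return abs(np.exp(-2j * np.pi * n * f0 * t) @ y) / len(y)

for n in (2, 3):
    print(f"H{n}: tube={harmonic_level(tube, n):.4f}  "
          f"solid state={harmonic_level(solid_state, n):.4f}")
# The asymmetric curve shows a clear 2nd harmonic; the clipper shows essentially none.
```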
The second component of this week was a documentary on Bob Moog, instrument builder and great technological innovator of the 20th and 21st centuries. The film opened with a very close-up shot of the inside of one of Moog’s synthesizers, and as Moog gives a monologue about how he can feel the signal in his instruments and how that organic understanding is part of his inspiration, the camera follows the signal path as it goes through the many electronic components of the synth. This is followed by a short cartoon with a rather fascinating visual representation of a synth sound signal as three colored band waves that would change as the synth sound changed, displaying a perfect overlapping representation of how synth sounds are built from multiple modulating tones at once. Moog defines his synths as analogue instruments, for while they are based in electric current, it is electronic components that generate the sound out of current and differences in voltage, with no numbers or digital computing involved.
He began by building Theremins as a kid, which he eventually began to sell. This got him going to trade shows and demonstrations, which turned him on to electronic music. This in turn exposed him to the synthesizers of that time, which he soon began to design on his own. These designs and models got him noticed, and before long he was the designer of some of the most sought after devices in electronic music. His early synths were popular with the commercial production houses in New York and were bought to replace musicians in the studio, which of course never works but nonetheless got the sound out there. As synth sounds were used more and more in commercials and on the radio, the public became more used to these types of sounds, making room for the synth as a popular music instrument and as a sound people would want to hear in music.
By Moog’s definition, synthesizers produce real sound that is built up by synthesizing elements like oscillators, and all the same synths are real instruments that produce real sound. His synths soon became modular, meaning that an array of different components such as oscillators, modulation oscillators, modulators, envelopes, and effects busses are all individual components that can be patched together by the user to create and influence the sound of the instrument. Moog also discussed the interaction between the human and the instrument. He has always designed his instruments with the intention of live performance, such that the interface can yield live performance possibilities and the player can really play it like an instrument rather than operate it like a machine. The interaction between the human and the instrument is personal, and can be what inspires aspects of performance and composition, so this interaction should be encouraged and nurtured when he conceptualizes an instrument layout.
I found the documentary to be rather enlightening. The humanity and the personal compassion with which Moog approaches his whole occupation is so humbling that it really makes me rethink how we relate to our gear, even the less personal gear. I found it quite fascinating and I would very much like to play a Moog synth sometime.
Friday, November 12, 2010
Week of 11.08.10
This week’s class was dedicated to the first of the partner research projects. The first presentation was on the MIDI protocol. The Musical Instrument Digital Interface is a binary-based code that digital devices that create sound can use to play back music. It was created in 1981, demonstrated in 1983, and publicly available by 1984. In 1991 the MIDI protocol was standardized so that all manufacturers of hardware and software would have compatible interfacing. While conceived for musical purposes, MIDI can be used as the control protocol for almost any software, and is commonly used as the control protocol for advanced and intelligent stage lighting. In music, the main function of MIDI is as an interface protocol between hardware and software. For instance, MIDI files on a computer can be linked via MIDI cables to an outboard sound-generating device like a synth, which will generate sound based on the information in the MIDI file, including pitches and durations, and play back that information with a user-settable sound. The MIDI file itself is a tiny code file that contains no sound on its own, but can instruct a sound-generating device where, when, and what to play. MIDI files can also be converted into traditional scores and sheet music via most notation software.
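Since the protocol really is just a stream of bytes, the core of it fits in a few lines. A sketch of how a note-on message is assembled, per the published MIDI 1.0 spec (a detail the presentation didn’t go into):

```python
def note_on(channel: int, note: int, velocity: int) -> bytes:
    """Status byte 0x90 ORed with the channel (0-15), then two 7-bit data bytes."""
    return bytes([0x90 | channel, note & 0x7F, velocity & 0x7F])

def note_off(channel: int, note: int) -> bytes:
    return bytes([0x80 | channel, note & 0x7F, 0])

# Middle C (note number 60) at medium velocity on channel 1:
print(note_on(0, 60, 64).hex())   # -> "903c40", three bytes down the MIDI cable
```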
The second presentation was on the early electronic music studios that Schaeffer, Stockhausen, and Cage used in their time. Schaeffer did a great deal of work at the RTF studio starting in 1943. This studio had four turntables, a four-channel mixer, a reverb chamber, filters, a portable recording unit, a disc cutting lathe, and a sound effects library. During his time here he worked on a lot of foley turntable recording and on editing found-sound projects. He later moved on to the GRM studio, which was the first electronic music studio with equipment dedicated to the craft. GRM had a three-track tape recorder, a ten-head tape machine, a keyboard-operated tape machine with varispeed, and an elaborate loudspeaker system. It was at this studio that Stockhausen began his experiments with electronic music composition before he moved to his studio in Cologne, where he composed and recorded Studie I and Studie II. In Cologne he had multiple oscillators, varispeed tape decks, ring modulators, filters, and a white noise generator. Across the pond, Cage was working in the Barrons’ studio, which was one of the most advanced in the United States at the time. This studio included multiple tape recorders, custom loudspeakers, oscillators that produced sine, sawtooth, and square waves, filters, a spring reverb unit, and a sound effects library.
The next presentation was on piezo pickup technology. The original concept was discovered in 1880 by the Curie brothers. The piezoelectric principle is that there are 20 crystal classes, including quartz, that can turn vibrations and physical stimulus into a readable electric current. This means that, when harnessed correctly, a piezo pickup will pick up the vibrations of sound on a surface and transmit them as an electric signal that can be manipulated like a microphone signal. When first developed, this technology was used for the measurement of explosions and combustion engines, but by the 1960s it was a common alternative to the open diaphragm microphone. Piezo pickups became practical for recording any sound with a vibrating surface, or for utilizing a surface in close proximity to a sound source. These pickups are commonly found built into or added to string instruments such as violins, guitars, and harps, as well as pianos. They are also great for stage boundary recording, or as room mics for drum sets.
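The underlying physics reduces to a tidy proportionality: the crystal develops a charge proportional to the force on it, and that charge appears as a voltage across the element’s own capacitance. A back-of-the-envelope sketch using a textbook coefficient for quartz (the tap force and capacitance are invented numbers):

```python
D_QUARTZ = 2.3e-12   # piezoelectric coefficient of quartz, roughly 2.3 pC/N

def piezo_voltage(force_newtons: float, capacitance_farads: float) -> float:
    """Charge Q = d * F develops across the element's capacitance: V = Q / C."""
    charge = D_QUARTZ * force_newtons
    return charge / capacitance_farads

# A light 0.5 N tap on an element with 100 pF of capacitance:
print(piezo_voltage(0.5, 100e-12))   # ~0.0115 V, a healthy pickup-level signal
```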
Cynthia Salazar and I gave the last presentation of the day, on magnetic tape as a recording medium. I think overall the presentation went well. I was a little self-conscious about my rambling and my tendency to curse like a sailor when I get excited about what I’m talking about, but I hope such vulgarities can be forgiven in light of solid content. I felt good about it.
On Friday we had John Vanderslice come out to give a master class lecture about his experiences as a studio owner and musician. Vanderslice opened Tiny Telephone Studio in San Francisco in 1997. When he started out it was a rehearsal space that he eventually developed into a recording studio. He funded the studio for the first seven years by waiting tables in restaurants while engineers he hired worked the days. His starting rate was 100 dollars a day, and is now 350 plus 200 for the engineer, which is very competitive in the Bay Area scene. John has a degree in economics, which I’m sure was a great advantage when he was trying to manage money and budgets while they were getting the studio off the ground. Over time he upgraded his equipment, environment, and acoustic treatment to where he now feels he has a completely capable studio setup. He started with a Mackie board and made his way up to a sweet Neve setup. He runs his studio with a network of eleven engineers who take the different clients and rotate based on the schedule, all working at the same fixed rate. When he can, he tries to match up his engineers with the clients such that their workflow, style, and approach complement each other for better and more mutually productive sessions. He is very set in his system; the price will never change based on the client, and now, with a deposit system, if a day is booked then that is it, the date will not get moved to accommodate another session.

Throughout his discussion he would give anecdotes that would lead to a solid, salient point of knowledge or advice. For instance, he uses tape and provides tape for his clients who want to record to it, and maintains that investing in good analogue gear and having it around even when not used adds to the aesthetic of the environment and can be stimulating for clients. He also countered this by saying that when economics are tight it is important to be aware of what gear is being used, what gear is not, and what gear is needed, so that decisions can be made about selling unused gear to get something needed. He also discussed how trying to make it in this business can be like war, and that to truly get ahead one must “game the system” in ways that get your work noticed. For instance, his angle with tape is great; providing free tape to clients who want to record on it, and having a fully operational tape setup with all the necessary gear, greatly reduces the number of studios that can compete with his service. The point he ultimately made from this was that in order to survive you have to find a niche and provide something that few if any others can, so as to make what you bring to the table unique and sought after. He also discussed working his way into endorsements and advertisements with equipment companies. He pushed Millennia and Josephson into endorsing him by basically creating the advertisements himself, giving them to the manufacturers, and asking them to use them. Now his ads run in Mix and Sound on Sound magazines.
I found his presentation to be rather inspirational and refreshing. It is always a little discouraging to hear just how brutally difficult it is to work in this industry, but at the same time it is reassuring to hear from someone who has gone through it and to see how it was done. I will definitely try to get in contact with him for a tour of the studio, and who knows.
Friday, November 5, 2010
For the Alex Vittum portion of my blog I would like to submit a copy of my Master Class paper on the presentation:
On Friday, November 5, 2010 I went to a lecture demonstration at the MPA Music Hall at CSUMB. The presenter was Alex Vittum, who is a drummer, composer, instrument builder, recording engineer, producer, and extremely free-thinking musician. The venue could hold about two hundred people, and there were about fifty in attendance.
Vittum began by giving a little background. He began his musical studies as a drummer in New York, where he met Dr. Waters playing in a workshop band that would practice and work out contemporary compositions. As his career and tastes developed he sought out more of the technical side of music creation as a new outlet for composition. Vittum came to California to study at Mills College for his postgraduate work, where he began working in Berkeley for Don Buchla, the instrument builder and synth designer.
As part of his experimentation with bridging the gap between drumming and engineering, he developed a software instrument project called Prism. The setup for this software: his drum set has Audix drum mics on the snare and kick, which go through his Metric Halo 2882 interface into his computer, which uses the MIO software to interface with Max/MSP, which houses the Prism instrument; this is then routed back out of the computer via MIO to a pair of Mackie SRM450 powered two-way loudspeakers that sit just behind the drum set. Max/MSP is also connected via MIDI to a one-octave MalletKAT trigger surface that allows him to trigger different assigned parameters in the Prism program. This setup allows him to use his drum set and other percussion instruments to generate different sounds and loops via Prism that play back through the speakers, which adds the further element of controlled feedback from the speaker-to-microphone relationship.
The Prism functionality was inspired by some of the base concepts of the Buchla synths: the manipulation and control of timbre, amplitude, and frequency. These parameters can be manipulated in a number of ways via Prism, but the most direct, and the ones used for this demo, were frequency shifters, effects routing via a complex matrix, and granular synthesis. For some compositions he would split the two input signals into four signals for extended sampling ability and more complex loop harmonics. The program also includes envelopes, minimal compression, and cross-effect routing for feedback.
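A frequency shifter, as I understand it, is not a pitch shifter: it slides every component up or down by the same number of Hz, wrecking the harmonic ratios, which is exactly what gives it that clangorous character. One standard way to build one is single-sideband modulation via the analytic signal; a sketch of that idea (not Prism’s actual implementation, which I haven’t seen):

```python
import numpy as np
from scipy.signal import hilbert

def frequency_shift(x: np.ndarray, shift_hz: float, fs: float) -> np.ndarray:
    """Shift every spectral component of x by shift_hz (single-sideband modulation)."""
    analytic = hilbert(x)                  # x plus j times its Hilbert transform
    t = np.arange(len(x)) / fs
    # Rotating the analytic signal slides the whole spectrum; the real part
    # is the shifted audio. Harmonics at f, 2f, 3f become f+s, 2f+s, 3f+s.
    return np.real(analytic * np.exp(2j * np.pi * shift_hz * t))
```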
The first piece he performed used the drum kit, a saw blade, and bells, and in Prism used frequency shifters and reverb. As he played the bells and the blade he would trigger, via the MIDI controller, different ranges of frequency shift which, when bussed to the reverb, created these complex harmonic tones that, while unruly and unlike the natural sound, still felt organic and authentic. I found the composition to be an incredible exposition of his instrument and what it can be capable of. The second piece he played utilized the granular synthesis concept, which involves sampling four different segments of time from his playing and then playing back, in predetermined or randomized order, grains from within those samples, each with its own independent window shape. This means that sections of his tracked loops are played back and affected to create a delay style unlike any other delay available. The third piece he played was in many ways like Alvin Lucier’s I Am Sitting in a Room, in that what he did was play a snare roll (which was insanely executed; it was like he could just turn his snare roll on and off, he had such good technique) that was fed into reverbs and sent back out the speakers. He continued the snare roll until resonant feedback from the mics began to occur. At first the feedback was from the snare mic, but after a while the floor tom, which was also miked, began to resonate at its frequency, adding a low drone that built up with the snare sound until a cacophonous roar had built up to the point where he could stop playing the snare and let this ominous feedback buildup continue, stimulating the snare and tom to resonate continuously. The buildup was so long and so steady that the gradations in sound could not be heard but rather felt at intervals; it was very, very impressive.
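To make the granular idea concrete: you chop a recorded buffer into short, windowed grains and scatter them back out, in order or at random. A bare-bones sketch of that process (my own toy version, not Prism’s):

```python
import numpy as np

def granulate(buffer: np.ndarray, n_grains: int, grain_len: int,
              out_len: int, seed: int = 0) -> np.ndarray:
    """Scatter Hann-windowed grains, read from random spots in the buffer,
    onto random spots in the output."""
    rng = np.random.default_rng(seed)
    window = np.hanning(grain_len)          # the grain's amplitude "window"
    out = np.zeros(out_len)
    for _ in range(n_grains):
        src = rng.integers(0, len(buffer) - grain_len)
        dst = rng.integers(0, out_len - grain_len)
        out[dst:dst + grain_len] += buffer[src:src + grain_len] * window
    return out
```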
When asked about the role his instrument design plays in his composition, he said it was a “give and take” relationship. While the concept for a piece might be birthed from a concept created by Prism, elements of Prism were also inspired by compositional concepts. This is a unique and interesting synthesis between the production concept and the compositional process.
Vittum finished his presentation with a short demonstration of one of Buchla’s hybrid analogue/digital modular synthesizers. While the interfacing, the routing, and the components themselves are all analogue and traditional, the inner workings are all interpreted digitally with a computer. This enables preset and recall capabilities that normal analogue synths do not have. The synth can also interface with controllers via MIDI, which we set up with one of the standard M-Audio 61-key Keystations. The synth contains three oscillators, each with a primary and a modulation oscillator. These can be routed in mono, or in poly, where one can have three distinct voices, or two voices with two mods, or two voices with one mod. These oscillators also had mod controls for timbre, amplitude, and pitch, and linear sine wave shape potentiometers. There are also four envelopes with attack and release controls, which can also be configured as two envelopes with attack, decay, sustain, and release. These envelopes also have alternative inputs so that one can create internal feedback loops for increased harmonic creation and control. It can also be controlled via a touch plate that is sensitive to pressure, location of the touch, and the velocity or intensity of the hit. This allows for unending possibilities for sound creation and synthesis, and it is definitely one of the most impressive and comprehensive pieces of equipment I have ever seen.
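The envelope math itself is simple enough to sketch: four segments of an amplitude contour, and a four-stage ADSR collapses to a two-stage attack/release by zeroing the decay and setting sustain to full. A minimal version (the times and levels are arbitrary examples):

```python
import numpy as np

def adsr(attack: float, decay: float, sustain: float, release: float,
         hold: float, fs: int = 48000) -> np.ndarray:
    """Amplitude contour: attack/decay/release in seconds, sustain as a 0-1
    level, hold as how long the note stays down after the decay."""
    a = np.linspace(0.0, 1.0, int(attack * fs), endpoint=False)
    d = np.linspace(1.0, sustain, int(decay * fs), endpoint=False)
    s = np.full(int(hold * fs), sustain)
    r = np.linspace(sustain, 0.0, int(release * fs))
    return np.concatenate([a, d, s, r])

env = adsr(0.01, 0.1, 0.6, 0.5, hold=0.3)       # a plucky four-stage shape
ar_env = adsr(0.01, 0.0, 1.0, 0.5, hold=0.3)    # the same function as a simple attack/release
```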
This presentation was very impressive and highly inspirational. As with much of the related curriculum in MPA 334, this presentation has further opened my mind to the endless possibilities in composition, engineering, production, and the use of an instrument.
Monday, November 1, 2010
Week of 11.01.10: Giants win the World Series
We began class by discussing some examples of complex tape editing concepts found in Beatles recordings. First, in the song “Rain,” the guitar, bass, and drum tracks were recorded in a higher key, at a faster tempo, and at a faster tape speed than what is heard on the record. In order to match the intended range for the vocal part, the tape was slowed down for the tracking of the vocals, yielding a slower song in a lower key with a thick, fat snare and drum sound and an unnatural guitar tone. The other song was “When I’m Sixty-Four,” which was recorded in a lower key, at a slower tempo, and at a slower tape speed. The vocals were also recorded this way, so that when the song was sped up during playback they sounded higher and more youthful. We also touched on a very significant tape edit moment in “Strawberry Fields Forever,” when the recording with the string orchestra and the recording with the band were spliced together successfully even though they were at different tempos and in different keys. By putting his thumb strategically on the playback reel of the full band version, George Martin was able to accomplish this editing feat of a lifetime.
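The arithmetic behind these tricks is the same in every case: changing tape speed by some ratio scales tempo by that ratio and shifts pitch by 12·log2(ratio) semitones, both at once. A quick sketch (the specific ratio here is illustrative, not the actual figure from the sessions):

```python
import math

def semitone_shift(speed_ratio: float) -> float:
    """Playing tape at `speed_ratio` times its recording speed shifts
    pitch by 12 * log2(ratio) semitones; tempo scales by the same ratio."""
    return 12.0 * math.log2(speed_ratio)

# Slowing playback to ~94% of the recording speed drops the whole track
# about one semitone while thickening the drums:
print(semitone_shift(0.944))   # ~ -1.0 semitone
```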
We went on to discuss the transistor and what it did for music and technology in general. The transistor offered a solid-state option for amplifying a voltage signal that was more technologically efficient than the vacuum tube. The transistor is smaller, lighter, easier and cheaper to make, lasts up to fifty times longer, is significantly more durable and less susceptible to the elements, and is much more power efficient. While their sound may still be debated against the vacuum tube for character and tone, the efficiency and affordability of transistors opened up a great deal of opportunity for electronic instruments and the development of practical synthesizers.
Early synthesizers began with the Olson-Belar “electronic music composing machine,” which was like an early computer dedicated to the production of audio sound. It was based on Helmholtz’s concept of the overtone series: that every note contains a fundamental pitch combined with a series of additional frequencies sounding along with that fundamental, which build and sculpt the timbre and characteristics of the sound. The computer would read punch cards with the harmonic overtone series punched in, telling the machine what sound to generate. This was a laborious and somewhat impractical approach to sound generation, and yet it was a critical first step in computer synthesis of audio. This concept led to the early development of the RCA Mark I synthesizer, produced in 1955. The original design of the Mark I was based on a bank of 12 tuning-fork oscillators that produced sine waves, and it could output both to loudspeakers and to a record lathe, cutting discs that could be made into vinyl. In 1958 RCA released the Mark II, which was 7 ft tall and 20 ft long, weighed three tons, and contained 1,700 vacuum tubes. The original oscillator bank was joined by a noise generator and two variable-pitch tube oscillators with a range of 8 kHz to 16 kHz. This synth could produce not only sine waves but also triangle waves, sawtooth waves, and white noise. The Mark II also had a frequency shifter and a built-in reverb unit. While these synthesizers were innovative and groundbreaking, they were also bulky, high maintenance, and had extremely unmusical interfaces. In 1965 Don Buchla took over as the leading synthesizer designer, which had to do with his musical perspective in the design of the instrument as well as his use of high quality Ampex tape decks.
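Helmholtz’s idea translates directly into code: sum a fundamental with weighted overtones and the weights become the timbre. A miniature additive-synthesis sketch (the amplitude recipes are made up, just to show two different “instruments” on the same pitch):

```python
import numpy as np

def additive_tone(f0: float, partial_amps: list,
                  dur: float, fs: int = 48000) -> np.ndarray:
    """Sum the fundamental f0 and its overtones, weighted by partial_amps."""
    t = np.arange(int(dur * fs)) / fs
    return sum(amp * np.sin(2 * np.pi * f0 * (n + 1) * t)
               for n, amp in enumerate(partial_amps))

# Same fundamental, different overtone weights, noticeably different timbre:
hollow = additive_tone(220, [1.0, 0.0, 0.5, 0.0, 0.3], 1.0)   # odd partials only
bright = additive_tone(220, [1.0, 0.7, 0.5, 0.35, 0.25], 1.0)
```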
We also discussed Cage and his philosophical views on electronic music composition. Cage treated his compositions like scientific experiments, in an attempt to remove all human emotional elements from the compositional process, such that he could emancipate his music from the human element of Western music. Cage broke electronic music sound down into five basic elements. First, frequency: the vibrations (Hz) that build up to create sound, pitch, and tone. Second, amplitude: the molecules displaced by the generation of the sound, or more simply, volume (dB). Third, timbre: the characteristics and quality of what the sound sounds like, or how it is perceived by the listener. Fourth, duration: the amount of time a sound lasts before coming to an end. Fifth and finally, envelope: the attack, decay, sustain, and release of the notes and sounds made in the composition. Undoubtedly influenced by these concepts, the age of Electro-Acoustic Music was born. These are compositions and recordings that have both natural, organic sound sources and unnatural, synthesized sound sources as well. This was the beginning of the integration of the analogue and digital worlds of music, and such music has made up the majority of popular music since its conception. The availability of professional and usable recording, editing, and processing technologies allowed for more generalized exposure and experimentation in the field of electronic music composition and production. Signal processors also became more widely used and accepted by audiences. These include echo, which is the direct reflection of a sound after it is heard; reverb, which is the perpetuation and persistence of a sound after it has ceased playing; and delay, which is the playing back of stored audio after it has been played.
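Of the three processors, delay is the easiest to see in code: store the audio, read it back a fixed number of samples later, and feed a bit of the output back in so each echo decays into the next. A minimal sketch of that circular-buffer idea:

```python
import numpy as np

def feedback_delay(x: np.ndarray, delay_samples: int,
                   feedback: float = 0.5, mix: float = 0.5) -> np.ndarray:
    """Replay stored audio delay_samples later; feedback < 1 makes each
    successive echo quieter than the last."""
    out = np.zeros(len(x))
    buf = np.zeros(delay_samples)           # the "tape loop" of stored audio
    for i in range(len(x)):
        delayed = buf[i % delay_samples]    # what went in delay_samples ago
        buf[i % delay_samples] = x[i] + feedback * delayed
        out[i] = (1.0 - mix) * x[i] + mix * delayed
    return out
```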
Wednesday, October 27, 2010
Week of 10.25.10:
We began class this week by presenting our research topic proposals to the class. Mine is as follows:
We would like to present our research topic on magnetic tape as a recording medium, both physically and philosophically. Tape is, after all, the recording medium that the modern recording industry is based on, in production style, process, and concept. Tape is one of the best representations of linear time that man has invented, and the flexibility of the medium has been the groundwork for editing, mixing, playback, and recording concepts since its introduction to the world in 1928.
We will begin by discussing the history of the medium, going over inventor Fritz Pfleumer, how he invented the medium and incorporated iron oxide based on the magnetic wire recording of his time, and what tape offered that no other medium of the era could. From here we will move forward in time through significant technological developments of the medium, including the different sizes and styles and the playback machines, and touch on its competitors as they arise later in the 20th century. This includes 1/4 in, 1/2 in, 1 in, and 2 in tape widths, as well as different players and heads, including 4-track, 8-track, 16-track, and 24-track heads and tape, and how they were incorporated into studios as time, techniques, and technology developed.
We will also discuss what tape did for music. Besides being a high quality recording medium, tape offered editing abilities beyond anything that had yet been invented for audio recording. The physical manipulation of the medium to achieve different editing styles, sound manipulated by changing speed, and multi-track overdubbing are all production concepts birthed by the tape medium that have outlived the tape era into the modern age of recording. We will discuss different early works in the curriculum that pioneered these techniques, such as Cage’s Williams Mix as well as Stockhausen’s Studie I and Studie II. We will talk about how these early mixes demonstrate the capabilities of tape as a medium, and how they have changed the editing process.
As with all technologies, there are downsides to tape, which we will also discuss. Such downsides include the longevity of the material, the continuous maintenance required for tape machines, and the real-time nature of all tape-related processes. This will lead us into a discussion of digital recording and how it is a technological advancement of the same production concept, only without the physical element of the tape. While tape versus digital is an entire discussion unto itself, we will talk about comparisons between the two mediums and why one could be preferred over the other for personal, tonal, and technological reasons.
While tape is no longer the dominant recording medium of today, it is the foundation for all recording, both in concept and in practice. A better knowledge of tape gives us as engineers a better understanding of what recording is as an art rather than as a trade, and of how this art came to be and why we record the way we do.
We moved on to continue discussing Electronic Music as the defined Third Stage of Aesthetics for Music. H. H. Stuckenschmidt gives seven traits that define electronic music. First, that Electronic Music has unlimited available sound sources. A composer can invent sounds or use and manipulate natural sounds until they no longer sound natural. Second, that Electronic Music can expand the perception of tonality. Electronic Music often explores microtonality, and all sounds and tones are given equal importance. Third, that Electronic Music exists in a state of actualization. Since Electronic Music is composed for the recording, and only exists once it is made, it can only be in actualized form, rather than in an abstract state such as a written score. Fourth, that Electronic Music has a special relationship with the temporal state of music, meaning that all aspects of the sound can be captured over time. Fifth, that in Electronic Music the sound itself becomes the material of the composition, and is what is written and created rather than interpreted and performed. Sixth, that Electronic Music does not breathe: there is no human element in Electronic Music, and it is exact and precise every time it is played. Seventh and finally, that Electronic Music lacks a comparison to the natural world, in the sense that the sounds heard are not organic and require an active listening intellect and imagination in order to interpret the sound and derive meaning.
We also discussed a lot of the information we will be going over in my research topic: tape composition and its impact. Recording techniques are to this day based on the linear tape model. Even the transport bar in Pro Tools is a model of a tape interface. This is because tape is a perfect interpretation and representation of time and linear function, which makes it very easy to understand and manipulate. Tape embodies the relationship between space and time. Tape enables specific time edits, as well as playback options such as reverse, speed adjustment, and depth. Duration, pitch, and color all become interchangeable variables that can be manipulated in a tape studio.
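That space-time equivalence is easy to demonstrate: if the tape is an array of samples, every classic tape edit is an array operation. A toy sketch of the three basic moves (reverse, varispeed, splice):

```python
import numpy as np

def reverse(tape: np.ndarray) -> np.ndarray:
    return tape[::-1]                      # flip the reel: play it backwards

def varispeed(tape: np.ndarray, ratio: float) -> np.ndarray:
    """Resample: ratio > 1 plays faster and higher, ratio < 1 slower and lower,
    so duration and pitch trade off against each other."""
    positions = np.arange(0, len(tape) - 1, ratio)
    return np.interp(positions, np.arange(len(tape)), tape)

def splice(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    return np.concatenate([a, b])          # the razor-blade edit
```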
We began class this week by presenting our research topic proposals to the class. Mine is as follows:
We would like to present our research topic on magnetic tape as a recording medium, both physically and philosophically. Tape is after all, the recording medium that the modern recording industry is based off of both in production style, process, and concept. Tape one of the best representations of linear time that man has invented, and the flexibility of the medium has been the ground work for editing, mixing, playback, and recording concepts since its introduction to the world in 1928.
We will begin by discussing the history of the medium, going over inventor Fritz Plfeumer and how in invented the medium and the incorporation of iron oxide based off of the magnetic wire recording medium of his time, and what tape offered that no other medium of the time could. From here we will move forward in time through significant technological developments of the medium, including the different sizes and styles, the playback machines, and touch on it’s competitors as they arise later in the 20th century. This includes 1/4in, 1/2in, 1in, and 2in tape types, as well as different players and heads including 4-track, 8-track, 16-track, and 24-track heads and tape, and how they are incorporated into studios as time and techniques and technology develops.
We will also discuss what tape did for music. Besides being a high-quality recording medium, tape offered editing abilities beyond anything that had yet been invented for audio recording. Physical manipulation of the medium to achieve different edits, speed-based manipulation of sound, and multi-track overdubbing are all production concepts born of the tape medium that have outlived the tape era into the modern age of recording. We will discuss early works in the curriculum that pioneered these techniques, such as Cage's Williams Mix and Stockhausen's Studie I and Studie II, and talk about how these early pieces demonstrate the capabilities of tape as a medium and how they changed the editing process.
As with all technologies, there are downsides to tape, which we will also discuss. These include the longevity of the material, the continuous maintenance tape machines require, and the real-time nature of every tape-related process. This will lead us into a discussion of digital recording and how it is a technological advancement of the same production concept, only without the physical element of the tape. While tape versus digital is an entire discussion unto itself, we will compare the two mediums and consider why one might be preferred over the other for personal, tonal, or technological reasons.
While tape is no longer the dominant recording medium, it is the foundation for all recording, both in concept and in practice. A better knowledge of tape gives us, as engineers, a better understanding of recording as an art rather than a trade, of how this art came to be, and of why we record the way we do.
We moved on to continue discussing Electronic Music as the defined Third Stage of Aesthetic for Music. H. H. Stuckenschmidt names seven traits that define electronic music. First, Electronic Music has unlimited available sound sources: a composer can invent sounds, or use and manipulate natural sounds until they no longer sound natural. Second, Electronic Music can expand the perception of tonality; it often explores microtonality, and all sounds and tones are given equal importance. Third, Electronic Music exists in a state of actualization: since it is composed for the recording and only exists once it is made, it can only exist in actualized form, rather than in an abstract state such as a written score. Fourth, Electronic Music has a special relationship with the temporal state of music, meaning that all aspects of the sound can be captured over time. Fifth, in Electronic Music the sound itself becomes the material of the composition; it is written and created rather than interpreted and performed. Sixth, Electronic Music does not breathe: there is no human element, and it is exact and precise every time it is played. Seventh and finally, Electronic Music lacks a comparison to the natural world, in the sense that the sounds heard are not organic and require an active listening intellect and imagination to interpret and derive meaning.
We also discussed a lot of the information we will be covering in my research topic: tape composition and its impact. Recording techniques to this day are based on the linear tape model; even the transport bar in Pro Tools is modeled on a tape interface. This is because tape is a perfect interpretation and representation of time as a linear function, which makes it very easy to understand and manipulate. Tape embodies the relationship between space and time. It enables precise edits in time, as well as playback options such as reverse, speed adjustment, and depth. Duration, pitch, and color all become interchangeable variables that can be manipulated in a tape studio.
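To make that last point concrete, here is a minimal sketch (not from class, just an illustration using numpy, with invented names and values) of the varispeed relationship described above: reading the same "tape" faster or slower changes pitch and duration together, which is exactly why they become interchangeable variables in a tape studio.

```python
# A toy illustration of tape varispeed: resampling a signal at a new
# playback rate changes pitch and duration together, as a tape machine does.
import numpy as np

SR = 44100  # sample rate, standing in for tape speed at 1.0x

def varispeed(signal: np.ndarray, speed: float) -> np.ndarray:
    """Play back `signal` at `speed` times the original rate.

    speed=2.0 -> half the duration, pitch up one octave
    speed=0.5 -> twice the duration, pitch down one octave
    speed<0  -> crude reverse playback (the array is flipped)
    """
    if speed < 0:
        signal = signal[::-1]
        speed = -speed
    # Read positions through the original signal at the new rate.
    positions = np.arange(0, len(signal) - 1, speed)
    return np.interp(positions, np.arange(len(signal)), signal)

# One second of A440 "recorded to tape"...
t = np.arange(SR) / SR
tape = np.sin(2 * np.pi * 440 * t)

# ...played at double speed: ~0.5 s long, now sounding A880.
fast = varispeed(tape, 2.0)
print(len(tape) / SR, "s ->", round(len(fast) / SR, 3), "s")
```

The same trade-off held for every splice and every transfer in a tape studio: there was no way to change one of pitch or duration without the other.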
Wednesday, October 13, 2010
Week of 10.11.10:
The Mellotron and the Chamberlin are sophisticated electromechanical instruments with a complicated history. Harry Chamberlin and David Nixon developed the Chamberlin as a parlor instrument intended to reproduce the sounds of a full orchestra in one's living room. Chamberlin devised a way to achieve this using magnetic tape recordings of sounds. The idea was that when a key is played, it triggers a tape to start playing, giving the user eight seconds of sound. Each key's tape had eight tracks, so any of eight sounds could be triggered by the keys depending on what the user wanted. In order for these tapes to work, Chamberlin had to track master versions of them, so he hired the Lawrence Welk Orchestra to come in and sustain perfectly tuned notes for nine seconds. Chamberlin was effectively sampling each note of every instrument he wanted available, so that he could have recordings of different pitches for the different keys. The Chamberlin master tapes were very high quality, recorded with a Neumann U47 into an Ampex valve tape deck. Some keys would trigger not sustained notes but percussion and drum loops; these tapes could loop once playing, but would snap back to their starting point when the key was no longer depressed. This made it possible to have rhythm loops in addition to sustained notes.
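The mechanism is easier to picture as a tiny model. Here is a hypothetical sketch (all class and field names are my own, not from any real Chamberlin documentation) of one key as described above: a strip of tape with eight parallel tracks that plays from its start while the key is held, for at most eight seconds, and snaps back to the beginning on release.

```python
# A toy model of one Chamberlin key: an eight-track tape strip that plays
# from its start while the key is held and rewinds instantly on release.
class ChamberlinKey:
    TRACKS = 8            # eight parallel sounds recorded on each strip
    MAX_SECONDS = 8.0     # length of playable tape behind each key

    def __init__(self, sounds):
        assert len(sounds) == self.TRACKS
        self.sounds = sounds      # one recording per track
        self.track = 0            # which track the head is switched to
        self.position = 0.0       # seconds from the start of the strip
        self.held = False

    def select_track(self, track: int):
        # The player chooses one of the eight sounds for the whole keyboard.
        self.track = track % self.TRACKS

    def key_down(self):
        self.held = True          # the tape starts moving past the head

    def advance(self, dt: float):
        # Called as time passes; sound runs out after MAX_SECONDS.
        if self.held and self.position < self.MAX_SECONDS:
            self.position += dt   # sounds[self.track] plays at this position

    def key_up(self):
        self.held = False
        self.position = 0.0       # the spring snaps the tape back to its start
```

A rhythm key would behave the same way, except that its track contains a drum loop rather than a sustained note, so holding the key plays the loop and releasing it resets the loop to bar one.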
The first Chamberlin came out in 1948, and the Chamberlin Company was founded in 1956. Chamberlin's main intention, and therefore the selling point for the instrument, was that it would be a "rich man's toy," a parlor instrument used for the entertainment of the wealthy in their homes. The Chamberlin was marketed through upper-end novelty stores, piano dealers, and magazines for the wealthy all across America in the 1950s.
Bill Franson was one of the Chamberlin Company's best salesmen when one day he disappeared. Franson stole two Chamberlin 600s and went over to Europe, thinking he could improve on the design and market a better product. He went to England, where he put an ad in the paper that connected him with three engineering brothers named Bradley, who owned Bradmatic. The initial design of the Mellotron MKII, the first production version and the one compared against the Chamberlin 600, was very similar; the key differences were that the Chamberlin used a third-party home stereo amplifier and had lever controls, while the MKII had a proprietary amplifier designed by Bradmatic and was operated with buttons. This was the beginning of the Mellotron Company. While Chamberlin was still working out of garages and small workplaces like a "mom and pop" business, the Mellotron Company employed a large group of workers from the post-World War Two generation who had all received military training in the fabrication and assembly of electronics. Mellotron also had to record its own master tapes, and did so at IBC Studios in London. The general consensus is that the Chamberlin tapes were much higher quality and sounded much more realistic than the Mellotron tapes, most likely due to the gear used to track them and the quality of the musicians. By the 1970 models, the Chamberlin M1 and the Mellotron M400, unique aspects of the two designs became more apparent. The Chamberlin had a fixed cartridge of tapes that could not be changed out by the user, but offered 120 different high-quality sounds. The M400 had fewer sounds at any given time, but its tapes could be swapped out for others with different sounds, and sets of tapes were sold by instrument or by theme. Swapping tapes could be exploited creatively as well: one artist had each key trigger four measures of a piece at a time, so that if a chromatic scale were played with a note every four bars, an entire piece could be heard.
Shortly after the Mellotron Company was off the ground, it went to the NAMM show in America and ran into the Chamberlin Company. Ultimately the Mellotron Company ended up having to pay royalties to the Chamberlin Company, and the two agreed that Mellotron would stay in the U.K. while Chamberlin stayed in the United States. As music progressed through the sixties and early seventies, the Mellotron Company had more success. Chamberlin stuck to the business model of the parlor instrument, as did Mellotron at first, but the Mellotron was gaining notoriety as a rock instrument and was being sought after by a different crowd. While the intention of these instruments was to emulate the sounds of a real orchestra, the reality was that they did not sound nearly as good as the real thing; instead they had unique and intriguing qualities of their own that made them attractive. Unfortunately, these instruments were very temperamental and fragile. They were extremely sensitive to temperature and environment, so touring with them was highly impractical and difficult. It was not long before advancements in synthesizers and other keyboard technology made the Mellotron less of a necessity, since similar sounds could be achieved through easier means. The companies were not making a profit, ended up in debt to their electrical component suppliers, and had to fold. Other instruments were developed to try to improve upon the designs of the Chamberlin and Mellotron, but none met great success. The Opticon was a tape-based drum machine that could loop drum samples, and the Birotron was an adaptation of the Mellotron that was supposed to be lighter, cheaper, and better suited for travel.
These instruments fell out of style until the late 80s, when certain vintage sounds began to be sought after again. Since then the Mellotron has come back into the world of relevant rock instruments, heard on recordings by popular artists like Radiohead, Opeth, Porcupine Tree, Bigelf, Kanye West, and other progressive and texturally experimental rock and pop acts. In 1993 Mellotron Archives was founded, and now the Mk VI is available for purchase; it is much more usable than the older models but maintains the authenticity of the sound and operation. Developments in softsynths and samplers have made Mellotron sounds available as plug-ins for DAWs and as patches on professional-grade keyboards like Nords. While the instrument might not be around forever, at least its unique sounds and tones will always be available.
Friday, October 8, 2010
Week of 10.04.10:
John Cage (1912-1992) was in many ways the Stockhausen of American electronic music. He was an innovator not only in the realm of electronic composition, but in performance, compositional philosophy, and the technology of music production. Cage was born into an Episcopalian family in Los Angeles. His father was an inventor who told him "that if someone says 'can't' that shows you what to do." [1] When his need to create finally found its outlet in composition, he began to take lessons in composition and arrangement. His lack of confidence in his traditional skills as a composer, combined with his experimentations with prepared instruments, eventually led him to think outwardly about what composition is, what performance is, and how art is really made.
Chance became a central focus of his compositional style. He could set up scenarios where certain elements of the composition were controlled, while others were left up to a designed element of chance. The element of chance separated the content of the music from the concepts of the composer. In this sense a composition is born from a production concept rather than from a finite, note-for-note, written score. Through these experimentations Cage was able to place himself outside of conventional thought, and, in line with many of the electronic composers of the time, was opened up to the world of unconventional sounds and operations.
Tape, of course, was one of the first mediums Cage used for these experimentations. He worked with Louise (1920-1989) and Bebe (1927-) Barron, who were exceptionally innovative inventors and composers of electronic music. They designed and modified gear so that it would do whatever the compositional process required. After Cage and the Barrons' first collaborative tape effort, Imaginary Landscape No. 5 (1952), which used material from phonograph records, Cage became focused on the tape-editing portion of composition and began to develop compositional tools that took advantage of these opportunities. Their next effort, Williams Mix (1953), was a huge undertaking of tracking and editing. First the Barrons collected hundreds of tape-recorded sounds, which were then organized into a 192-page score whose systems were built on the eight tracks of the tape. Cage then developed chance parameters that would determine where and how the tapes were spliced together, and the process was so laborious that it took nine months. Cage would invite all kinds of people to help with the edits, and their different interpretations and skills became a component of the chance element of the composition.
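The split between designed parameters and chance-determined content can be shown in miniature. This is emphatically not Cage's actual procedure, only a hypothetical toy loosely in the spirit of Williams Mix (the sound categories and length ranges are invented): the composer fixes the pool of sources, the number of splices, and the allowed segment lengths, while chance decides which sound, where it is cut, and how long each piece of tape is.

```python
# A toy "chance operations" splicer: the composer designs the parameters,
# randomness supplies the content of each individual splice.
import random

SOUND_POOL = ["city", "wind", "country", "voice", "electronic", "percussion"]

def chance_splice(n_segments: int, min_len: float, max_len: float, seed=None):
    """Return a splice list of (source sound, start time, segment length)."""
    rng = random.Random(seed)
    score = []
    for _ in range(n_segments):
        source = rng.choice(SOUND_POOL)          # which tape to cut from
        start = rng.uniform(0.0, 60.0)           # where in that tape to cut
        length = rng.uniform(min_len, max_len)   # how much tape to keep
        score.append((source, round(start, 2), round(length, 2)))
    return score

# Two runs with the same parameters yield different "performances":
for splice in chance_splice(5, 0.1, 2.0):
    print(splice)
```

Every run of the function produces a different piece from the same design, which is the point: the composer authors the system, not the result.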
In 1965-1966, a group of engineers and composers from Bell Labs put on the Variation Series in New York, a complex, multi-performance concert series in the Armory that showcased electronic compositions. For this series, John Cage created Variations VII, performed in October 1966. This was a huge display of Cage's chance operations in action. There was no tape involved; all the sounds used during the performance were being made right then and there. To begin with, the Armory is a ridiculously large, empty concrete venue with six seconds of natural reverb. Normally this would deter any performer, but Cage saw it as an extension of the performance: the sympathetic and tuned frequencies of the space were as much a part of the composition and performance as any other aspect. In the room there was a platform with tables full of the instruments being used, and a control room built for the performance. The tables held a plethora of appliances and noise-making instruments such as blenders, radios (which had FM and could pick up non-domestic signals), fans, juicers, and oscillators, each with contact microphones and patch-bay equivalents. In the performance notes, one of the tables was referred to as "David's Own" and was designated for whatever tools, instruments, and devices David Tudor wanted to incorporate. In addition to these sound sources, telephone lines were specially installed for the piece, leading to phones all across the city: some hanging outside in public areas, one in the kitchen of a popular restaurant, one next to a turtle tank, an aviary, the New York Times press room, and the sanitation department, with the signals from these phones patched into the performance. Photo-optic sensors supplied yet another source: high-output lights were set up underneath the tables on stage, and the shadows of the performers as they walked around the tables would change and affect the sensors' signals, which were routed to the control room. One of the engineers even had sensors on his head designed to pick up brainwave patterns (an idea borrowed from Alvin Lucier), which were then patched into the performance. The patch bay for this performance was so huge that at one point during preproduction everybody had to stop and make patch cables so that there would be enough. As the performance developed, Cage was open to anything happening. At one point members of the audience began to walk up and stand next to the tables to watch what was happening; Cage got into the idea and invited the crowd up the next night. When an engineer had to run on stage to fix something, Cage simply said "you are part of the performance," and was only excited when his pants started to catch fire from the lights under the table.
What was his role in this performance? Like a god, he created a world and an environment within which he let loose free agents who could do whatever they wanted with what they were given. He designed the parameters of the performance but not its content, and in his mind whatever happened, happened: that was the performance. In many ways this can be seen as the embodiment of Cage's chance-operations concept, designing an event to transpire but not its content.
John Cage successfully separated the composer from the music, and this changed the way many people have thought about music since. Both in his compositional style and in his technological innovations, Cage redefined what it meant to experiment with music. Experimentation was not limited to the notes played, but could be explored in every aspect of music, from conception to performance.
[1] http://www.biographybase.com/biography/Cage_John.html