Friday, November 19, 2010

Week of 11.15.10

We began this week by finishing up the presentations. The first presentation was on Turntablism. Turntablism, most simply, is when the turntable is used as a musical instrument or to somehow significantly influence the sound. The early innovators of this style were Hindemith with "Trick Music" and Toch with "Spoken Music," as well as Cage with his piece "Imaginary Landscape No. 1." In these pieces samples of instruments, voices, and foley are manipulated in pitch, duration, and playback speed and direction via the turntable. Turntablism evolved into the hip hop scene in the 1970s, and to this day the majority of turntable performance is in beat mixing, beat matching, scratching, and cross sampling music live for theatrical, exhibition, and DJ purposes. Players will have made marks on the records to tell them where a sample is so that they can mix between samples of drum grooves, prerecorded tracks, individual hits, and any other kind of sample with extreme precision and accuracy. Some DJs set it up such that they have a master turntable that contains full song samples and lead lines, and a slave turntable that has drum grooves and drum and bass grooves at different tempos and patterns that can be mixed into the other samples. An even more modern interpretation of Turntablism is Video Turntablism, where videos are digitally synced with the playback from the turntable and can as such be scrubbed back and forth with the same accuracy as the audio on the vinyl. Turntablism is a popular and familiar genre of electronic music to this day and will always be rooted in the human manipulation and exploitation of a playback medium.
The next presentation was on Drum Machines. This presentation consisted mostly of outlining and defining the progression of individual drum units and as such was informative but somewhat stale. One of the first drum machines was the Wurlitzer Sideman in 1959. The Sideman was mostly used as the drum and rhythm machine in Wurlitzer organs but was also available as a stand-alone unit. Its operation was based on a wheel with an arm that would spin over lanes of triggers, each lane representing a different drum sound, the tempo determined by the speed at which the arm turned. The Sideman had 10 preset patterns and had original pattern capabilities. In 1964 the R1 Rhythm Ace came out from Ace Tone, the company Ikutaro Kakehashi ran before founding Roland. The R1 had 10 preset patterns and looping, as well as original pattern composition. Then we were introduced to a series of machines developed by Roger Linn (1955-), who began with the LM-1. The LM-1 had digital samples at a 28kHz resolution and had 18 drum sounds but no cymbals due to the length of the samples. In 1982 an upgrade of the LM-1 called the LinnDrum came out that included cymbal samples. In 1984 the Linn 9000 was released, which was a MIDI trigger machine. It had 18 touch-sensitive pads and a mixer and would play high resolution samples including cymbals. In 2001, the AdrenaLinn series began, which is a series of drum machine and effect processing pedals for guitarists. The most current model is the AdrenaLinn III, which has 200 presets and 40 different sounds.
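The Sideman's spinning arm is basically a mechanical step sequencer. If I were to sketch the same idea in Python (the sound names and the pattern here are made up for illustration, not taken from the actual unit):

```python
import time

# each lane is one drum sound; each slot is one position the arm sweeps past
PATTERN = {
    "bass drum": [1, 0, 0, 0, 1, 0, 0, 0],
    "snare":     [0, 0, 1, 0, 0, 0, 1, 0],
    "woodblock": [0, 1, 1, 0, 0, 1, 1, 0],
}

def play(pattern, bpm, steps=8):
    # the arm's rotation speed sets the tempo: one full sweep = one bar
    seconds_per_step = 60.0 / bpm / 2     # eighth-note steps
    for step in range(steps):
        hits = [name for name, lane in pattern.items() if lane[step]]
        print(f"step {step}: {', '.join(hits) or '-'}")
        time.sleep(seconds_per_step)

play(PATTERN, bpm=120)
```

The sleep between steps standing in for motor speed is the same idea as the Sideman's variable-speed wheel: change the rotation rate and the whole pattern scales in tempo.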
The next presentation was on Ray Kurzweil (1948-). Kurzweil is a technological prophet and designer who has designed and developed serious technology and has written many books on the interaction between humans and technology, which have included many technological predictions that have come true. He developed a computer that can recognize and read musical notes. He developed software that can read and speak from text. He developed a synthesizer with Stevie Wonder and was part of a program that developed accurate generation of a natural overtone series, which led to his digital piano patches that sound at times indistinguishable from the real thing. Kurzweil seems to be the genuine mad scientist of the field in some senses, given his radical nature and somewhat haunting predictions. Altogether a fascinating character.
The next presentation was on the company Korg, which has made some of the best synthesizers, keyboards, and keyboard workstations on the market for decades. The company was founded by Tsutomu Katoh and Tadashi Osanai, Katoh being the investor and businessman and Osanai the innovator. They wanted to make better drum machines, and in 1966 they came out with the DB 66, which was an upgraded and improved version of the Wurlitzer Sideman. In 1973 they produced the Korg 700, which was a one-oscillator synthesizer, and later the Korg 700s, which had two oscillators. These synths had multiple built-in fx busses, which gave them a flexibility that most other synths at the time did not have. This was followed by the 900PS, which was a monophonic synth with multiple presets. In 1975 Korg introduced the WT-10, the first handheld portable tuner, which revolutionized the entire music industry in the convenience and ease of tuning an instrument. In 1976 Korg introduced the PE-1000, which was a polyphonic synth with presets, fx busses, and improved keys. The MS-10 was a monophonic semi-modular synth with a front patch interface. The Poly 6, which came out in 1981, had 32 patches and a cassette backup. In 1986 keyboards became workstations that included multitrack recorders, sampling, fx bussing, and touch screen interfaces.
The next presentation was on an Electronic Music artist named Amon Tobin. Tobin is a DJ, a composer, a film and game score writer, and a producer of his own music. He uses vinyl sampling and drum machines to create remixes and re-imagined music out of preexisting recordings. His work displayed and reflected the time and precision with which he has come to know his music and his concepts. Whether he was using samples of full songs or mixing together foley sounds, he demonstrated a clearly advanced and rounded technical skill at his craft. I will be interested to check out more of his material.
The next presentation was on the early developments in film sound. From 1890 to 1920 there was no sound in film at all, and in theaters the films would be accompanied by live music, not always in conjunction with the content of the film but rather to ease the tension many viewers had when seeing film for the first time, and to mask the sound of the projector. In 1910 the Kinetophone was developed, which would synchronize music and film playback, but these were still two separate mediums and the film itself had no sound. Lee De Forest developed Phonofilm in 1919, which was the first successful composite medium of sound and video. It used scales of light and dark to read back the audio signals through the film. In 1927 Sunrise was one of the last great silent films and had sweeping camera movements thanks to the light and agile cameras of the time. In the same year The Jazz Singer came out as the first feature film with synchronized sound, and had all static camera shots due to the bulk of the cameras needed for the new medium. The device was a Vitaphone, which was developed in 1926 by Bell Labs and cut the audio straight to disc. The modern era of film sound was ushered in by Fantasia, Disney's third animated feature, released in 1940. The film included a multitracked audio accompaniment that was mixed down to a stereo track with a third track that carried volume automation. Knowing that the medium offered a limited amount of dynamic range, the music was composed to accommodate the limitation so that the music would sound full for the duration of the film.
The last presentation was on the comparison between tube technology and transistor technology. The first diode was developed by Thomas Edison and was further developed by John Ambrose Fleming. Lee De Forest developed the triode, which is the driver for all modern tubes. Tube technology relies on the emission and attraction of electrons in a vacuum, and while very competent and extremely good sounding, tubes are very fragile and highly temperamental. Transistors are a silicon-based alternative that are solid state and considerably more reliable. When transistors were developed, tube technology went extinct in almost every industry except pro audio in the music industry, where the auditory properties of tubes are still desired. There is an undeniable sound that tubes impart on an audio signal that is in most situations highly beneficial. When tubes distort a signal it is harmonic-based distortion that yields a buildup of even-order harmonics via compression, while solid-state transistors yield odd-order harmonics and distort in direct linear relation to the volume of the input signal. To this day compressors and preamps and guitar amps and bass amps are all available with tube technology. There are advantages and disadvantages to both systems. Tube compressors are warm and organic sounding, but can't handle the speed of a drum set like solid state compressors can. The tube versus transistor debate will always be a blend of opinion and taste as well as practical functionality.
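To see why the two kinds of distortion sound different, here is a minimal numpy sketch (an idealized model, not a circuit simulation): a symmetric transfer curve like tanh adds only odd-order harmonics to a sine, while giving the curve a tube-style asymmetry (a bias inside the nonlinearity) brings in the even orders too.

```python
import numpy as np

fs, f0 = 48000, 1000                     # sample rate, test tone frequency
t = np.arange(fs) / fs                   # one second of time
x = 0.8 * np.sin(2 * np.pi * f0 * t)     # clean sine wave

symmetric  = np.tanh(3 * x)              # transistor-ish: odd orders only
asymmetric = np.tanh(3 * x + 0.5)        # tube-ish bias: adds even orders

def harmonics(y, n=5):
    # magnitude of the first n harmonics (1 Hz bins, so bin k*f0 = harmonic k)
    spec = np.abs(np.fft.rfft(y)) / len(y)
    return [round(float(spec[k * f0]), 4) for k in range(1, n + 1)]

print("symmetric :", harmonics(symmetric))   # 2nd and 4th are near zero
print("asymmetric:", harmonics(asymmetric))  # even orders now present
```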
The second component of this week was a documentary on Bob Moog, instrument builder and great technological innovator of the 20th and 21st centuries. The film opened with a very close up shot of the inside of one of Moog's synthesizers, and as Moog gives a monologue about how he can feel the signal in his instruments and how that organic understanding is part of his inspiration, the camera follows the signal path as it goes through the many electronic components of the synth. This is followed by a short cartoon with a rather fascinating visual representation of a synth sound signal, with three colored band waves that would change as the synth sound changed, displaying a perfect overlapping representation of how synth sounds are built from multiple modulating tones at once. Moog defines his synths as analogue instruments, for while they are based in electric current, it is electronic components that generate the sound out of current and differences in voltage, with no numbers or digital computing involved.
He began by building Theremins as a kid, which he eventually began to sell. This got him going to trade shows and demonstrations, which turned him on to electronic music. This in turn exposed him to the synthesizers of that time, which he soon began to design on his own. These designs and models got him noticed, and before long he was the designer of some of the most sought after devices in electronic music. His early synths were popular with the commercial production houses in New York and were bought to replace musicians in the studio, which of course never works but nonetheless got the sound out there. As synth sounds were used more and more in commercials and on the radio, the public became more used to these types of sounds, making room for the synth as a popular music instrument and as a sound people would want to hear in music.
By Moog's definition, synthesizers produce real sound that is made up of synthesizing elements like oscillators, but all the same, synths are real instruments that produce real sound. His synths soon became modular, meaning that an array of different components such as oscillators, modulation oscillators, modulators, envelopes, and effects busses are all individual components that can be patched together by the user to create and influence the sound of the instrument. Moog also discussed the interaction between the human and the instrument. He has always designed his instruments with the intention of live performance, such that the interface can yield live performance possibilities and the player can really play it like an instrument rather than operate it like a machine. The interaction between the human and the instrument is personal, and can be what inspires aspects of performance and composition, so this interaction is something he encourages and nurtures when he conceptualizes an instrument layout.
I found the documentary to be rather enlightening. The humanity and the personal passion with which Moog approaches his whole occupation is so humbling that it really makes me rethink how we relate to our gear, even the less personal gear. I found it quite fascinating and I would very much like to play a Moog synth sometime.

Friday, November 12, 2010

Week of 11.08.10

This week's class was dedicated to the first of the partner research projects. The first presentation was on the MIDI protocol. The Musical Instrument Digital Interface is a binary-based code that digital devices that create sound can use to play back music. It was created in 1981, demonstrated in 1983, and publicly available by 1984. In 1991, the MIDI protocol was standardized so that all manufacturers of hardware and software would have compatible interfacing. While conceived for musical purposes, MIDI can be used as the control protocol for almost any device, and is commonly used as the control protocol for advanced and intelligent stage lighting. In music, the main function of MIDI is as an interface protocol between hardware and software. For instance, MIDI files on a computer can be linked via MIDI cables to an outboard sound generating device like a synth that will generate sound based on the information in the MIDI file, including pitches and durations, and play back that information with a user-settable sound. The MIDI file itself is a tiny code file that contains no sound on its own, but can instruct a sound generating device where, when, and what to play. MIDI files can also be converted into traditional scores and sheet music via most notation software.
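Since the protocol really is just bytes, a note can be sketched directly. A minimal Python example of the raw three-byte Note On / Note Off messages (the port and hardware details are omitted; this only builds the bytes):

```python
# a Note On / Note Off pair as raw MIDI bytes
NOTE_ON, NOTE_OFF = 0x90, 0x80   # message type in the high nibble

def note_on(channel, pitch, velocity):
    # status byte packs message type (high nibble) and channel (low nibble)
    return bytes([NOTE_ON | channel, pitch, velocity])

def note_off(channel, pitch):
    return bytes([NOTE_OFF | channel, pitch, 0])

# middle C (MIDI note 60) on channel 1 at moderate velocity
print(note_on(0, 60, 96).hex())   # '903c60'
print(note_off(0, 60).hex())      # '803c00'
```

Three bytes say which note, how hard, and on which channel; the receiving synth supplies the actual sound, which is exactly why the files are so tiny.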
The second presentation was on the early electronic music studios that Schaeffer, Stockhausen, and Cage used in their time. Schaeffer did a great deal of work at the RTF studio starting in 1943. This studio had four turntables, a four channel mixer, a reverb chamber, filters, a portable recording unit, a disc cutting lathe, and a sound effects library. During his time here he worked on a lot of foley turntable recording and editing found sound projects. He later moved on to the GRM studio, which was the first electronic music studio with equipment dedicated to the craft. GRM had a three-track tape recorder, a ten-head tape machine, a keyboard operated tape machine with Varispeed, and an elaborate loudspeaker system. It was at this studio that Stockhausen began his experiments with electronic music composition before he moved to his studio in Cologne, where he composed and recorded Studie I and Studie II. In Cologne he had multiple oscillators, Varispeed tape decks, ring modulators, filters, and a white noise generator. Across the pond, Cage was working in the Barrons' studio, which was one of the most advanced in the United States at the time. This studio included multiple tape recorders, custom loudspeakers, oscillators that produced sine, sawtooth, and square waves, filters, a spring reverb unit, and a sound effects library.
The next presentation was on Piezo pickup technology. The original concept was discovered in 1880 by the Curie brothers. The piezoelectric principle is that there are 20 classes of crystal, like quartz, that can turn vibrations and physical stimulus into a readable electric current. This means that when harnessed correctly, a Piezo pickup will pick up the vibrations of sound on a surface and transmit them as an electric current that can be manipulated like a microphone signal. When first developed, this technology was used for measurement of explosions and combustion engines, but by the 1960s it was a common alternative to the open diaphragm microphone. Piezo pickups became practical for recording any sound with a vibrating surface, or utilizing a surface in close proximity to a sound source. These pickups are commonly found built into or added to string instruments such as violins, guitars, and harps, as well as pianos. They are also great for stage boundary recording, or as room mics for drum sets.
Cynthia Salazar and I gave the last presentation of the day on Magnetic Tape as a recording medium. I think overall the presentation went well. I was a little self-conscious about my rambling and tendency to curse like a sailor when I get excited about what I'm talking about, but I hope such vulgarities can be written off by solid content. I felt good about it.

On Friday we had John Vanderslice come out to give a master class lecture about his experiences as a studio owner and musician. Vanderslice opened Tiny Telephone Studio in San Francisco in 1997. When he started out it was a rehearsal space that he eventually developed into a recording studio. He funded the studio for the first seven years by waiting tables in restaurants while engineers he hired worked the days. His starting rate was 100 dollars a day, and is now 350 plus 200 for the engineer, which is very competitive in the Bay Area scene. John has a degree in Economics, which I'm sure was a great advantage when he was trying to manage money and budget while they were getting the studio off the ground. Over time he upgraded his equipment and environment and acoustic treatment to where he now feels he has a completely capable studio setup. He started with a Mackie board and made his way up to a sweet Neve setup. He runs his studio as a network of eleven engineers that take the different clients and rotate out based on the schedule, all working at the same fixed rate. When he can, he tries to match up his engineers with the clients such that their workflow, style, and approach complement each other for better and more mutually productive sessions. He is very set in his system; the price will never change based on the client, and now with a deposit system, if a day is booked then that is it, the date will not get moved to accommodate another session. Basically, throughout his discussion he would give anecdotes that would lead to a solid, salient point of knowledge or advice. For instance, he uses tape and provides tape for his clients that want to record to it, and maintains that investing in good analogue gear and having it around even when not used adds to the aesthetic of the environment and can be stimulating for clients. He also countered this by saying that when economics are tight it is important to be aware of what gear is being used, what gear is not, and what gear is needed so that decisions can be made about selling unused gear to get something needed. He also discussed how trying to make it in this business can be like war, and that to truly get ahead one must "game the system" in ways that get your work noticed. For instance, his angle with tape is great; providing free tape to clients that want to record on it and having a fully operational tape setup with all the necessary gear greatly reduces the number of studios that can compete with his service. The point he ultimately made from this was that in order to survive you have to find a niche and provide something that few if any others can, so as to make what you bring to the table unique and sought after. He also discussed working his way into endorsements and advertisements with equipment companies. He pushed Millennia and Josephson into endorsing him by basically creating the advertisements himself, giving them to the manufacturers, and asking them to use them. Now his ads run in Mix and Sound on Sound magazine.
I found his presentation to be rather inspirational and refreshing. It is always a little discouraging to hear just how brutally difficult it is to work in this industry, but at the same time it is reassuring to hear from someone who has gone through it and seen how it was done. I will definitely try to get in contact with him for a tour of the studio, and who knows.

Friday, November 5, 2010

For the Alex Vittum portion of my blog I would like to submit a copy of my Master Class paper on the presentation:

On Friday, November 5, 2010 I went to a lecture demonstration at the MPA Music Hall at CSUMB. The presenter was Alex Vittum, who is a drummer, composer, instrument builder, recording engineer, producer, and extremely free-thinking musician. The venue could hold about two hundred people, and there were about fifty in attendance.
Vittum began by giving a little background. He began his musical studies as a drummer in New York, where he met Dr. Waters playing in a workshop band that would practice and work out contemporary compositions. As his career and tastes developed he sought out more of the technical side of music creation as a new outlet for composition. Vittum came to California to study at Mills College for his postgraduate work, where he began working in Berkeley for Don Buchla, the instrument builder and synth designer.
As part of his experimentation with bridging the gap between drumming and engineering, he developed a software instrument project called Prism. The setup for this software: his drum set has Audix drum mics on the snare and kick, which go through his Metric Halo 2882 interface into his computer, which uses the MIO software to interface with MaxMSP, which houses the Prism instrument, which is then routed back out of the computer via MIO to a pair of Mackie SRM 450 powered 2-way loudspeakers that sit just behind the drum set. MaxMSP is also connected via MIDI to a one octave MalletKAT trigger surface that allows him to trigger different assigned parameters in the Prism program. This setup allows him to use his drum set and other percussion instruments to generate different sounds and loops via Prism that play back through the speakers, which adds the further element of controlled feedback from the speaker to microphone relationship.
The Prism functionality was inspired by some of the base concepts of the Buchla synths: the manipulation and control of Timbre, Amplitude, and Frequency. These parameters can be manipulated in a number of ways via Prism, but the most direct, and the ones used for this demo, were frequency shifters, effects routing via a complex matrix, and granular synthesis. For some compositions he would split the two input signals into four signals for extended sampling ability and more complex loop harmonics. The program also includes envelopes, minimal compression, and cross-effect routing for feedback.
The first piece he performed used the drum kit, a saw blade, and bells, and in Prism used frequency shifters and reverb. As he played the bells and the blade he would trigger, via the MIDI controller, different ranges of frequency shift which, when bussed to the reverb, created these complex harmonic tones that while unruly and unlike the natural sound still felt organic and authentic. I found the composition to be an incredible exposition of his instrument and what it can be capable of. The second piece he played utilized the granular synthesis concept, which involves the sampling of four different segments of time of his playing and the predetermined or randomized playback of grains within those samples, each with an independent "window" shape. This means that sections of his tracked loops are played back and affected to create a delay style unlike any other delay available. The third piece he played was in many ways like Alvin Lucier's I Am Sitting in a Room in that what he did was play a snare roll (which was insanely executed, it was like he could just turn his snare roll on and off he had such good technique) that was fed into reverbs and sent back out the speakers. He continued the snare roll until resonant feedback from the mics began to occur. At first the feedback was from the snare mic, but after a while the floor tom, which was also mic'd, began to resonate at its frequency to add a low drone that built up with the snare sound until a cacophonous roar had built up to the point where he could stop playing the snare and let this ominous feedback buildup continue and stimulate the snare and tom to resonate continuously. The buildup was so long and so steady that the gradations in sound could not be heard but rather felt at intervals; it was very, very impressive.
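I don't know how Prism implements its granular engine, but the core idea is simple enough to sketch in Python: chop a source recording into short windowed grains and overlap-add them back at new, partly randomized positions. Every parameter below (and the sine-wave stand-in for a drum recording) is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 44100
src = np.sin(2 * np.pi * 220 * np.arange(fs) / fs)    # stand-in for a recording

def granulate(src, grain_len=2048, n_grains=400, spread=0.25):
    out = np.zeros(len(src))
    window = np.hanning(grain_len)                    # the grain's "window" shape
    for _ in range(n_grains):
        read = rng.integers(0, len(src) - grain_len)  # where the grain is sampled
        write = read + int(rng.normal(0, spread * fs))  # scatter it in time
        write = int(np.clip(write, 0, len(out) - grain_len))
        out[write:write + grain_len] += window * src[read:read + grain_len]
    return out / np.max(np.abs(out))                  # normalize the pileup

texture = granulate(src)
```

The window keeps each grain from clicking at its edges; the random write offsets are what smear a loop into the strange not-quite-delay texture he demonstrated.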
When asked about the role his instrument design plays in his composition he said it was a “give and take” relationship. While the concept for a piece might be birthed from a concept created by Prism, elements of Prism were also inspired by composition concepts. This is a very unique and interesting synthesis between the production concept and the compositional process.
Vittum finished his presentation with a short demonstration of one of Buchla's hybrid analogue-digital modular synthesizers. While the interfacing, the routing, and the components themselves are all analogue and traditional, the inner workings are all interpreted digitally with a computer. This enables presets and recall capabilities that normal analogue synths do not have. The synth can also interface with controllers via MIDI, which we set up with one of the standard M-Audio 61 key KeyStations. The synth contains three oscillators, each with a primary and modulation oscillator. These can be routed in mono, or in poly where one can have three distinct voices, or two voices with two mods, or two voices with one mod. These oscillators also had mod controls for Timbre, Amplitude, and Pitch, and linear sine wave shape potentiometers. There are also four envelopes with Attack and Release controls, which can also be configured as two envelopes with Attack, Decay, Sustain, and Release. These envelopes also have alternative inputs so that one can create internal feedback loops for increased harmonic creation and control. It can also be controlled via a touch plate that is sensitive to pressure, location of the touch, and the velocity or intensity of the hit. This allows for unending possibilities for sound creation and synthesis and is definitely one of the most impressive and comprehensive pieces of equipment I have ever seen.
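The envelope section is easy to make concrete. A sketch of a basic linear ADSR in numpy (the times and levels are arbitrary illustration values, not the Buchla's):

```python
import numpy as np

def adsr(attack, decay, sustain_level, sustain, release, fs=44100):
    # piecewise-linear envelope: rise to 1, fall to the sustain level,
    # hold it, then fall back to zero
    return np.concatenate([
        np.linspace(0, 1, int(attack * fs), endpoint=False),
        np.linspace(1, sustain_level, int(decay * fs), endpoint=False),
        np.full(int(sustain * fs), sustain_level),
        np.linspace(sustain_level, 0, int(release * fs)),
    ])

fs = 44100
env = adsr(0.01, 0.1, 0.6, 0.5, 0.3, fs)
t = np.arange(len(env)) / fs
tone = env * np.sin(2 * np.pi * 440 * t)   # envelope shaping a 440 Hz sine
```

Collapsing the four stages down to just Attack and Release, as the Buchla can, is the same structure with the decay and sustain segments removed.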
This presentation was very impressive and highly inspirational. As with much of the related curriculum in MPA 334, this presentation has further opened my mind to the endless possibilities in composition, engineering, instrument use, and production.

Monday, November 1, 2010

Week of 11.01.10: Giants win the World Series

We began class by discussing some examples found in Beatles recordings of complex tape editing concepts. First, in the song "Rain" the guitar, bass, and drum tracks were recorded in a higher key at a faster tempo with a faster tape speed than what is heard on the record. In order to match the intended range for the vocal part, the tape was slowed down for the tracking of the vocals, yielding a slower song in a lower key with a thick, fat snare and drum sound and an unnatural guitar tone. The other song was "When I'm Sixty-Four," which was recorded in a lower key, slower tempo, and slower tape speed. The vocals were also recorded this way, so that when the song was sped up during playback they sounded higher and more youthful. We also touched on a very significant tape edit moment during "Strawberry Fields Forever," when the recording with the string orchestra and the recording with the band were spliced together successfully even though they were at different tempos and in different keys. By putting his thumb strategically on the playback reel of the full band version, George Martin was able to accomplish this editing feat of a lifetime.
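The arithmetic behind these varispeed tricks is just the speed ratio: playing tape at r times the recording speed multiplies every frequency by r, which works out to 12 * log2(r) semitones. A quick Python check (the ratios are illustrative, not the actual session values):

```python
import math

def semitone_shift(speed_ratio):
    # doubling the speed (ratio 2.0) raises pitch exactly one octave
    return 12 * math.log2(speed_ratio)

for ratio in (0.5, 0.89, 1.0, 1.12, 2.0):
    print(f"speed x{ratio}: {semitone_shift(ratio):+.2f} semitones")
```

This is why the trick changes pitch and tempo together: on tape they are the same variable, which is exactly what made the two "Strawberry Fields" takes joinable once one was slowed into agreement with the other.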
We went on to discuss the Transistor and what it did for music and general technology. The transistor offered a solid state option for amplifying a voltage signal that was more technologically efficient than the vacuum tube. The transistor is smaller, lighter, easier and cheaper to make, lasts up to fifty times longer, is significantly more durable and less susceptible to the elements, and is much more power efficient. While their sound may still be debated against the vacuum tube for character and tone, the efficiency and affordability of transistors opened up a great deal of opportunity for electronic instruments and the development of practical synthesizers.
Early synthesizers began with the Olson-Belar "electronic music composing machine," which was like an early computer dedicated to the production of audio sound. It was based on Helmholtz's concept of the overtone series: that every note contains a fundamental pitch combined with a series of additional frequencies sounding with that fundamental pitch that build and sculpt the timbre and characteristics of the sound. The computer would read punch cards that had the harmonic overtone series punched in, telling the computer what sound to generate. This was a laborious and somewhat impractical approach to sound generation, and yet it was a critical first step in computer synthesis of audio sound. This concept led to the early development of the RCA Mark I synthesizer, produced in 1955. The original design of the Mark I was based on a 12-tuning-fork oscillator that produced sine waves, and could output both to loudspeakers and to a record lathe to cut records that could be made into vinyl. In 1958 RCA released the Mark II, which was 7ft tall and 20ft long, weighed three tons, and contained 1700 vacuum tubes. The original oscillator was joined with a noise generator and two tube oscillators with variable pitch with a range of 8kHz-16kHz. This synth could produce not only sine waves but also triangle waves, sawtooth waves, and white noise. The Mark II also had a frequency shifter and a built-in reverb unit. While these synthesizers were innovative and groundbreaking, they were also bulky, high maintenance, and had extremely unmusical interfaces. In 1965 Don Buchla took over as the leading synthesizer designer, which had to do with his musical perspective in the design of the instrument as well as his use of high quality Ampex tape decks.
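Helmholtz's idea maps directly onto what we would now call additive synthesis: a timbre is a fundamental plus weighted overtones. A minimal numpy sketch (the amplitude recipes are invented for illustration, not taken from the RCA punch cards):

```python
import numpy as np

def additive(f0, amplitudes, seconds=1.0, fs=44100):
    t = np.arange(int(seconds * fs)) / fs
    # sum the fundamental and its overtones, each with its own weight
    return sum(a * np.sin(2 * np.pi * f0 * (k + 1) * t)
               for k, a in enumerate(amplitudes))

bright = additive(220, [1.0, 0.5, 0.33, 0.25, 0.2])   # sawtooth-ish recipe
mellow = additive(220, [1.0, 0.0, 0.11, 0.0, 0.04])   # odd harmonics only
```

The punch cards were essentially encoding the amplitude list: same fundamental, different overtone weights, different instrument.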
We also discussed Cage and his philosophical views on electronic music composition. Cage treated his compositions like scientific experiments, in an attempt to remove all human emotional element from the compositional process such that he could emancipate his music from the human element of Western Music. Cage broke down electronic music sound into five basic elements. First, frequency: the vibrations (Hz) that build up to create sound, pitch, and tone. Second, amplitude: the molecules displaced by the generation of the sound, or more simply, volume (dB). Third, timbre: the characteristics and quality of what the sound sounds like, or how it is perceived by the listener. Fourth, duration: the amount of time a sound lasts before coming to an end. Fifth and finally, envelope: the attack, decay, sustain, and release of the notes and sounds made in the composition. Undoubtedly influenced by these concepts, the age of Electro-Acoustic Music was born. These are compositions and recordings that have both natural, organic sound sources and unnatural synthesized sound sources as well. This is the beginning of the integration of the analogue and digital worlds of music, and this blend has made up the majority of popular music since its conception. The availability of professional and usable recording, editing, and processing technologies allowed for more generalized exposure and experimentation in the field of electronic music composition and production. Signal processors also became more widely used and accepted by audiences. These include Echo, which is the direct reflection of a sound after it is heard; Reverb, which is the perpetuation and persistence of a sound after it has ceased playing; and Delay, which is the playing back of stored audio after it has been played.
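Of the three processors, delay is the easiest to state exactly: the output is the input plus an attenuated copy shifted later in time. A single-tap sketch in numpy (delay time and gain are arbitrary example values):

```python
import numpy as np

def delay(x, delay_seconds, gain=0.5, fs=44100):
    d = int(delay_seconds * fs)
    y = np.zeros(len(x) + d)
    y[:len(x)] += x              # dry signal
    y[d:] += gain * x            # attenuated copy, d samples late
    return y

fs = 44100
click = np.zeros(fs); click[0] = 1.0          # a single impulse
echoed = delay(click, 0.25, gain=0.4, fs=fs)  # repeats 250 ms later
```

Echo and reverb can be thought of as stacks of these taps: one distinct tap is an echo, thousands of smeared-together taps are a reverb tail.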

Wednesday, October 27, 2010

Week of 10.25.10:

We began class this week by presenting our research topic proposals to the class. Mine is as follows:

We would like to present our research topic on magnetic tape as a recording medium, both physically and philosophically. Tape is, after all, the recording medium that the modern recording industry is based on, in production style, process, and concept. Tape is one of the best representations of linear time that man has invented, and the flexibility of the medium has been the groundwork for editing, mixing, playback, and recording concepts since its introduction to the world in 1928.
We will begin by discussing the history of the medium, going over inventor Fritz Pfleumer, how he invented the medium and incorporated iron oxide based on the magnetic wire recording medium of his time, and what tape offered that no other medium of the time could. From here we will move forward in time through significant technological developments of the medium, including the different sizes and styles, the playback machines, and touch on its competitors as they arise later in the 20th century. This includes 1/4in, 1/2in, 1in, and 2in tape types, as well as different players and heads including 4-track, 8-track, 16-track, and 24-track heads and tape, and how they were incorporated into studios as time, techniques, and technology developed.
We will also discuss what tape did for music. Besides being a high quality recording medium, tape offered editing abilities beyond anything that had yet been invented for audio recording. The physical manipulation of the medium to achieve different editing styles, sound manipulated based on speed, and multi-track overdubbing are all production concepts birthed by the tape medium that have outlasted the tape era into the modern age of recording. We will discuss different early works that pioneered these techniques and are in the curriculum, such as Cage's Williams Mix as well as Stockhausen's Studie I and Studie II. We will talk about how these early mixes demonstrate the capabilities of tape as a medium, and how they have changed the editing process.
As with all technologies, there are downsides to tape, which we will also discuss. Such downsides include the longevity of the material, the continuous maintenance required for tape machines, and the real-time nature of all tape-related processes. This will lead us into a discussion about digital recording and how it is a technological advancement of the same production concept, only without the physical element of the tape. While tape v. digital is an entire discussion unto itself, we will talk about comparisons between the two mediums and why one could be preferred over the other for personal, tonal, and technological reasons.
While tape is no longer the popular recording medium of today, it is the foundation for all recording both in concept and in practice. A better knowledge of tape gives us as engineers a better understanding of what recording is as an art rather than as a trade, how this art came to be, and why we record the way we do.

We moved on to continue to discuss Electronic Music as the defined Third Stage of Aesthetics for Music. H. H. Stuckenschmidt names seven traits that define electronic music. First, that Electronic Music has unlimited available sound sources. A composer can invent sounds or use and manipulate natural sounds until they no longer sound natural. Second, that Electronic Music can expand the perception of tonality. Electronic Music often explores microtonality, and all sounds and tones are given equal importance. Third, that Electronic Music exists in a state of actualization. Since Electronic Music is composed for the recording, and only exists once it is made, it can only be in actualized form, rather than in an abstract state such as a written score. Fourth, that Electronic Music has a special relationship with the temporal state of music, meaning that all aspects of the sound can be captured over time. Fifth, that in Electronic Music the sound itself becomes the material of the composition, and is what is written and created rather than interpreted and performed. Sixth, that Electronic Music does not breathe; there is no human element in Electronic Music, and it is exact and precise every time it is played. Seventh and finally, that Electronic Music lacks a comparison to the natural world, in the sense that the sounds heard are not organic, and require an active listening intellect and imagination in order to interpret the sound and derive meaning.
We also discussed a lot of the information we will be going over in my research topic: tape composition and impact. Recording techniques are to this day based on the linear tape model. Even the transport bar in ProTools is a model of a tape interface. This is because tape is a perfect interpretation and representation of time and linear function, which makes it very easy to understand and manipulate. Tape embodies the relationship between space and time. Tape enables specific time edits, as well as playback options such as reverse, speed adjustment, and depth. Duration, pitch, and color all become interchangeable variables, manipulable in a tape studio.
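That "tape as a line of time" model is exactly how a digital editor still treats audio. A sketch of the classic tape moves on a plain sample buffer, with naive nearest-neighbor resampling standing in for the varispeed motor (the 440 Hz test signal is just a placeholder):

```python
import numpy as np

fs = 44100
tape = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)   # one second of "tape"

reversed_tape = tape[::-1]                 # play the reel backwards

def varispeed(tape, ratio):
    # reading samples faster or slower changes pitch and duration together,
    # just like changing the motor speed on a tape machine
    idx = (np.arange(int(len(tape) / ratio)) * ratio).astype(int)
    return tape[idx]

half_speed   = varispeed(tape, 0.5)   # octave down, twice as long
double_speed = varispeed(tape, 2.0)   # octave up, half as long

splice = np.concatenate([tape[:fs // 2], reversed_tape[:fs // 2]])  # razor edit
```

Reverse is an index flip, a splice is a concatenation, and varispeed is a change in read rate; every one of these is a physical action on real tape.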

Wednesday, October 13, 2010

Week of 10.11.10:

The Mellotron and the Chamberlin are very sophisticated mechanical electronic instruments with a complicated history. Harry Chamberlin and David Nixon developed the Chamberlin as a parlor instrument that was intended to be able to reproduce the sounds of a full orchestra in one's living room. Chamberlin devised a way to achieve this using magnetic tape recordings of sounds. The idea was that when a key is played it triggers a tape to start playing, giving the user eight seconds of sound. Each key's tape had eight tracks, so any of eight sounds could be triggered by the keys depending on what the user wanted. In order for these tapes to work, Chamberlin had to track master versions of these tapes, so he hired the Lawrence Welk Orchestra to come in and sustain perfectly tuned notes for nine seconds. At this point Chamberlin was effectively making samples of each note of every instrument he wanted available, so that he could have recordings of different pitches for the different keys. The Chamberlin master tapes were very high quality, recorded with a Neumann U47 into an Ampex valve tape deck. Some keys would not trigger sustained notes but percussion and drum loops. These tapes could loop once placed but would snap back to their starting point when the key was no longer depressed. This made it possible to have rhythm loops in addition to sustained notes.
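A toy model of that mechanism, assuming the details above are right: each key owns a tape strip that plays from its start while held, runs out after eight seconds, and snaps back on release. The class and its numbers are purely illustrative:

```python
class MellotronKey:
    """Toy model of one key's tape strip: play while held, snap back on release."""
    MAX_SECONDS = 8.0           # length of sound on each strip

    def __init__(self, note):
        self.note = note
        self.position = 0.0     # seconds of tape already pulled past the head

    def hold(self, seconds):
        # the tape sounds only until the strip runs out
        playable = min(seconds, self.MAX_SECONDS - self.position)
        self.position += playable
        return f"{self.note}: sounds for {playable:.1f}s"

    def release(self):
        self.position = 0.0     # spring snaps the strip back to the start
        return f"{self.note}: rewound"

key = MellotronKey("C4")
print(key.hold(5.0))    # C4: sounds for 5.0s
print(key.hold(5.0))    # C4: sounds for 3.0s  (strip ran out)
print(key.release())    # C4: rewound
```

The snap-back is why a note held past eight seconds simply dies, and why re-striking a key always starts the sample from its beginning.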
The first Chamberlin came out in 1948, and the Chamberlin Company was founded in 1956. Chamberlin's main intention, and therefore selling point, for the instrument was that it would be a "rich man's toy" or parlor instrument used for the entertainment of the wealthy in their homes. The Chamberlin was marketed to upper-end novelty stores and piano dealers, and in magazines for the wealthy, all across America in the 1950s.
Bill Franson was one of the Chamberlin Company's best salesmen when one day he disappeared. Franson stole two Chamberlin 600s and went over to Europe thinking he could make improvements on the design and market a better product. He went to England, where he put an ad in the paper that connected him with three engineering brothers named Bradley who owned Bradmatic. While the initial design of the Mellotron MKII, which was the first production version and was compared to the Chamberlin 600, was very similar, the Chamberlin used a third party home stereo amplifier and had lever controls, while the MKII had a proprietary amplifier designed by Bradmatic and was operated with buttons. This was the beginning of the Mellotron Company. While Chamberlin was still working out of garages and small workplaces like a "mom and pop" business, the Mellotron Company employed a large group of workers from the post-World War Two generation who had all received military training in the fabrication and assembly of electronics. Mellotron also had to record their own master tapes and did so at IBC Studios in London. The general consensus is that the Chamberlin tapes were much higher quality and sounded much more realistic than the Mellotron tapes, most likely due to the gear used to track and the quality of the musicians used. By the 1970 models, the Chamberlin M1 and the Mellotron M400, unique aspects of the different designs became more apparent. The Chamberlin had a fixed cartridge of tapes that could not be changed out by the user but had 120 different high quality sounds. The M400 had fewer sounds at any given point, but the tapes could be changed out for other tapes with different sounds, and sets of tapes were sold by instrument or by theme. Changing out the tapes can be exploited as well; one artist used it by having each key trigger four measures of a piece at a time, so that if a chromatic scale were played with a note every four bars an entire piece could be heard.
Shortly after the Mellotron Company was off the ground they went to the NAMM show in America and ran into the Chamberlin Company. Ultimately the Mellotron Company ended up having to pay royalties to the Chamberlin Company as well as stay in the U.K. while Chamberlin would stay in the United States. As music progressed through the sixties and early seventies the Mellotron Company had more success. Chamberlin stuck to the business model of the parlor instrument, as did Mellotron, but the Mellotron was gaining more notoriety as a rock instrument and was being sought after by a different crowd. While the intention of these instruments was to emulate the sounds of a real orchestra, the reality was that they did not sound nearly as good as the real thing, but rather had unique and intriguing qualities of their own that made them attractive. Unfortunately, these instruments were very temperamental and fragile. They were extremely sensitive to temperature and environment, so touring with them was highly impractical and difficult. It was not long before the advancements in synth and other keyboard technology made the unique necessity of the Mellotron less critical, since the same sounds could be achieved through different and easier means. The companies were not making a profit and ended up in debt to their electrical component suppliers and had to fold. Other instruments were developed to try to improve upon the designs of the Chamberlin and Mellotron but none met great success. The Opticon was a tape based drum machine that could loop drum samples, and the Birotron was an adaptation of the Mellotron that was supposed to be lighter and better fit for travel, as well as cheaper.
These instruments fell out of style until the late 80s, when certain vintage sounds began to be sought after again. Since then the Mellotron has come back into the world of relevant rock instruments, being heard on recordings by popular artists like Radiohead, Opeth, Porcupine Tree, Bigelf, Kanye West, and other progressive and texturally experimental rock and pop groups. In 1993 Mellotron Archives was founded, and now the Mk VI is available for purchase and is much more usable than the older models but maintains the authenticity of the sound and operation. Developments in softsynths and samplers have made it such that Mellotron sounds are available as plug-ins for DAWs and as sounds on professional grade keyboards like Nords. While the instrument might not be around forever, at least its unique sounds and tones will always be available.

Friday, October 8, 2010

Week of 10.04.10:

John Cage (1912-1992) was in many ways the Stockhausen of American electronic music. He was an innovator not only in the realm of electronic composition, but in performance, compositional philosophy, and the technology of music production. Cage was born into an Episcopalian family in Los Angeles. His father was an inventor who told him "that if someone says 'can't' that shows you what to do." [1] When his need to create was finally facilitated by composition, he began to take lessons in composition and arrangement. His lack of confidence in his traditional skills as a composer, combined with his experimentations with prepared instruments, eventually led him to think outwardly about what the limitations of composition are, what performance is, and how art is really made.
Chance became a central focus of his composition style. He would set up scenarios where certain elements of the composition were controlled, while others were left up to a designed element of chance. The element of chance separated the content of the music from the concepts of the composer. In this sense a composition is birthed from a production concept rather than from a finite, note for note, written composition. Through these experimentations Cage was able to place himself outside of conventional thought, and in line with many of the electronic composers at the time, was opened up to the world of unconventional sounds and operations.
Of course tape was one of the first mediums Cage used for these experimentations. He worked with Louis (1920-1989) and Bebe (1927-) Barron, who were exceptionally innovative inventors and composers of electronic music. They designed and modified gear so that it would do whatever was required of the compositional process. After Cage and the Barrons' first collaborative tape effort, Imaginary Landscape No. 5 (1952), which used material from phonograph records, Cage became focused on the tape editing portion of the composition, and began to develop compositional tools that took advantage of these opportunities. Their next effort, Williams Mix (1953), was a huge undertaking of tracking and editing. First the Barrons collected hundreds of tape recorded sounds, which were then organized into a 192 page score, the systems of which were built on the eight tracks of the tape. Cage then developed chance parameters that would determine where and how the tapes were spliced together, and the process was so laborious that it took nine months. Cage would invite all kinds of people to help with the edits, and their different interpretations and skills would be a component of the chance element of the composition.
In 1965-1966, a group of engineers and composers from Bell Labs put on the Variations series in New York, which was a complex, multiple performance concert series in the Armory that showcased electronic compositions. For this series, John Cage created Variations VII, which was performed in October 1966. This was a huge display of Cage's chance operations in action. There was no tape involved; all the sounds used during the performance were being made right then and there. To begin with, the Armory is a ridiculously large, empty concrete venue that has six seconds of natural reverb. Normally this would deter any performer, but Cage saw this as an extension of the performance, that the sympathetic and tuned frequencies of the space were as much a part of the composition and performance as any other aspect. In the room there was a platform with tables full of the instruments that were being used, and there was a control room that was built for the performance. The tables held a plethora of appliances and noise making instruments such as blenders, radios (which had FM and could pick up non domestic signals), fans, juicers, and oscillators, each with contact microphones and patch bay equivalents. In the performance notes, one of the tables was referred to as "David's Own" and was designated for whatever tools, instruments, and devices David Tudor wanted to incorporate. In addition to these sound sources, telephone lines were specially installed for this piece that led to phones all across the city, some hanging outside in public areas, one in the kitchen of a popular restaurant, one next to a turtle tank, an aviary, the New York Times press room, and the sanitation department. The signals from these phones were processed by photo-optic sensors. High output lights were set up underneath the tables on stage, and the shadows of the performers as they walked around the tables would change and affect these signals that were routed to the control room. One of the engineers even had sensors on his head designed to pick up brainwave patterns (borrowed from Alvin Lucier), which were then patched into the performance. The patch bay for this performance was so huge that at one point during preproduction everybody had to stop and make patch cables so that there would be enough. As the performance developed, Cage was open to anything happening. At one point members of the audience began to walk up and stand next to the tables and watch what was happening. Cage got into the idea and invited the crowd up the next night. When an engineer had to run on stage to fix something, Cage simply said "you are part of the performance," and he was only excited when his pants started to catch on fire from the lights under the table.
What was his role in this performance? Like a god, he created a world and an environment within which he let loose free agents that could do whatever they wanted with what they were given. He designed the parameters of the performance but not its content, and in his mind whatever happened, happened, and that was the performance. In many ways this can be seen as the embodiment of Cage's chance operations concept: designing an event to transpire but not its content.
John Cage was successfully able to separate the composer from the music, and this changed the way a lot of people since then think about music. Both in his compositional style and in his technological innovations, Cage redefined what it meant to experiment with music. Experimentation was not limited to the notes played, but could be explored in every aspect of music from conception to performance.

[1] http://www.biographybase.com/biography/Cage_John.html

Friday, October 1, 2010

Week of 09.27.10

In the new world where every sound is trackable and usable in the context of a musical piece, one must set delimitations to focus the process of the composition. Pierre Schaeffer (1910-1995) set himself four delimitations for sounds to record that would then be unconventionally used in his pieces. These delimitations were that he could only record living elements (like animal sounds), noises (like found sound), modified and prepared instruments, and conventional instruments. Within the confines of these parameters, Schaeffer was able to compose electronic music that led him to the development of seven values that apply to all sounds: Mass, which is the organization of sound in a spectral dimension; Dynamics, which are measurable amplitude values of the sound; Timbre, which is the tonal quality of the sound; the Melodic Profile, which is the temporal evolution of the sound in reference to the sound spectrum; the Profile of Mass, the temporal evolution of the spectrum in reference to highs and lows; Grain, which is the analysis of the irregularities in surface and texture; and Pace, which is the analysis of the dynamic and amplitude irregularities. With these parameters of sound in mind, Schaeffer developed plans that would facilitate the compositional process of electronic music. These plans included a Harmonic Plan, which encompassed the material in all spectrums; a Dynamic Plan, which determined the envelope of the sound (Attack, Decay, Sustain, Release); and a Melodic Plan, which is the development of pitch and tone over time.
While this was going on in France, there was a different kind of electronic music being developed over in Germany. While French electronic compositions were more organic, German compositions were much more methodical in that they were based on serialism and 12 tone music. 12 tone music was the beginning of the serialist movement; it was developed by Arnold Schoenberg (1874-1951) and is based on the concept of a tone row. A tone row consists of all twelve notes in a specific order, with no one note having any more or less significance than the others. No note can be repeated until all the other notes in the row have been played. The order can be reversed and inverted, and the piece is to avoid having a tonal center or any type of formal cadence.
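The row operations are mechanical enough to sketch in a few lines of Python: a row is a permutation of the twelve pitch classes, and the retrograde and inversion are derived from it (the random row here is just for demonstration):

```python
import random

NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def make_row():
    row = list(range(12))        # every pitch class exactly once
    random.shuffle(row)
    return row

def retrograde(row):
    return row[::-1]             # the row played backwards

def inversion(row):
    # mirror every interval around the row's first note
    return [(2 * row[0] - p) % 12 for p in row]

row = make_row()
for name, r in [("prime", row), ("retrograde", retrograde(row)),
                ("inversion", inversion(row))]:
    print(f"{name:>10}: {' '.join(NOTES[p] for p in r)}")
```

Each transform is still a permutation of all twelve notes, which is the whole point: no note gains significance no matter which form of the row is used.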
From these influences came the German composer Karlheinz Stockhausen (1928-2007), who is considered by some to be one of the most influential composers of the twentieth century. In 1952 Stockhausen began experimenting with tape in Pierre Schaeffer's studio. From here he began to develop tape music, creating loops and using tape as a linear time editing device. Because tape is one of the most accurate physical manifestations of the way humans think of time, Stockhausen was able to do early Varispeed editing with his array of tape machines. This allowed him to speed up and slow down sounds to alter their pitch, timbre, and duration. By using sine wave tone generators and tape, Stockhausen was able to create serialized compositions that were birthed from the mathematical analysis of tones applied to the shape and editing of the sounds. Stockhausen was also able to develop his own method of notating these types of compositions, one that very much resembles the modern ProTools MIDI editor: two graphs, one above the other, where the top graph shows frequency (pitch) vertically, the bottom graph shows the attack or velocity of the tone and its release, and time runs horizontally. This notation first appeared in his 1954 composition Studie II, which was the first electronic composition for sine waves and the first to have a score based on pitch, duration, and attack. Stockhausen developed principles for his process of electronic composition. These are, first, a Unified Time Structure, meaning the modification of tone, dynamics, frequency, and timbre via tape. Second, Splitting the Sound: one must have the ability to edit and manipulate the smaller elements of the synthesized sound. Third, Multi-Layered Composition, which can be understood as the necessity of controlling the sound during performance, conversely not relying on the human element of performance. And fourth, the Equality of Tone and Noise, which in Stockhausen's words means "any noise is musical" but "you can't just use any tone in any interval," meaning that one must put in more constructive thought than just the recording or generating of sound for the purpose of listening. Stockhausen's work forever changed the way music can be approached. He successfully worked toward liberating music from the confines of western music. His Helicopter String Quartet is a phenomenal undertaking of performance and compositional blend, and his fresh perspectives on the characteristics of sounds are inspirational if not musically, at least compositionally.

Friday, September 24, 2010

Week of 09.20.10

After we took a quick assessment, we began the week by discussing recording mediums before tape. These included wire recordings, phonofilm recordings, and disc recordings. Phonofilm was developed by Lee De Forest, and is a film based medium that takes snapshots of the audio and reads them for playback with an optic sensor. At the time, disc recording offered higher quality and increased longevity compared to other mediums. This technology led to the development of turntablism, which was introduced into the classical music scene in the 1920s. Because the playback on a turntable is so manipulable, composers like Hindemith, Varese, and Cage began to compose music for the purpose of recording, which could then be manipulated alongside a live orchestra. This is an instance of exploiting the weakness in a technology; the turntable was not meant to be used to alter the pitch and speed of the playback, but this was often how it was used in those early experimentations.
Paul Hindemith would go on to contribute to a new genre of music called Musique Concrete, which came to fruition around 1949. Musique Concrete is music based on the found sound concept, made up of manipulated recordings of everyday sounds like that of a train. The concept behind this style of composition is to reveal the musicality in everyday sounds, so that people may better appreciate the beauty of the world we live in. The two brilliant conceptual minds behind Musique Concrete were Pierre Schaeffer (1910-1995), who was a broadcaster, and Pierre Henry (1927-), who was a percussionist and composer. They collaborated to develop this genre and form the philosophy behind it. Schaeffer was a technological innovator who could apply the technology to Henry's musical composition and sensibility. They would play real world sounds back at different speeds and tempos, reversed, and edited in all kinds of rhythmic ways without instruments or human interface. Schaeffer described it as the use of any and all sounds except traditional instruments, unless it's the warped sound of recorded instruments. The breakdown of harmony, melody, and traditional music theory served to re-conceptualize the abstraction of music notation. Since traditionally music exists abstractly as notes on paper that are then interpreted by musicians and performed, Musique Concrete conversely is purely recorded and manipulated sound that inherently is the music, rather than realized music. This would be achieved through looping, sampling, and splicing audio. Musique Concrete is known as the second era of electronic music, and is composing through technological means, using organic, non-traditional music sounds that can be replayed identically each time, and can be performed without human involvement.

Friday, September 17, 2010

Week of 09.13.10 Leon Theremin

Professor Leon Theremin was a Russian inventor and man of electronic music exploration. His inventions paved the way for the modern era of electronic music. He was born in Russia in 1896 and died in Russia in 1993, having lived a long and turbulent life. Theremin began working with electronics from an early age, and continued studying electronics through higher education. In the military he attended Military Electronic School and Graduate Electronic School for officers, which landed him a radio oversight position for the Russian military during the First World War and the Russian Civil War.
In 1920 Theremin invented the Theremin, an electronic instrument that utilizes the electrical capacitance of the human body within generated fields. The result is an instrument that does not need to be touched to be played. The Theremin generates a tone through the creation of electromagnetic fields surrounding a vertical and a horizontal post. The vertical post controls the pitch while the horizontal post controls the volume. This was a whole new kind of instrument unlike any invented before, and it inspired the imaginations of millions. Theremin demonstrated it to Vladimir Lenin, who began to learn the instrument himself. By touring demonstrations of this new instrument, Theremin eventually ended up in the United States, where he had it patented in 1928. It was here that he stayed and opened a laboratory in New York. During this time he met and worked with individuals like Nicolas Slonimsky, Albert Einstein, Joseph Schillinger, and Clara Rockmore, and invented and perfected numerous other electronic instruments including the Rhythmicon and the Theremin Cello. The Rhythmicon was one of the earliest drum machines and could play multiple rhythmic patterns triggered by a keyboard. By establishing a fundamental pitch, different rhythms were generated based on the addition of notes from that pitch's series of harmonic overtones. Another invention was the Theremin Cello, which was based on a lot of the same concepts as the Theremin but was played on a cello-like instrument with only one string. It had one ribbon running the length of the neck that produced a tone when touched, while the volume was controlled by a lever. Other inventions developed at this time involved motion sensor technology based in principle on the Theremin design of electromagnetic fields. After the Lindbergh baby ordeal, one application of this technology was generating fields in cribs so that if someone tried to reach into one an alarm would go off. This also led to the first motion sensors for storefronts in New York City.
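Out of curiosity, here is a minimal sketch in Python (my own illustration, not Theremin's actual design) of the Rhythmicon idea described above: the key for the nth harmonic sounds a pitch at n times the fundamental and strikes n times per base cycle, so the rhythms mirror the overtone series.

```python
# One held Rhythmicon key: the nth harmonic plays n evenly spaced hits
# per base cycle at n times the fundamental frequency. The fundamental
# and cycle length are illustrative assumptions.
BASE_CYCLE_SECONDS = 2.0

def rhythmicon_voice(harmonic: int, fundamental_hz: float = 110.0):
    """Return (pitch_hz, onset_times) for the given harmonic's key."""
    pitch = harmonic * fundamental_hz
    onsets = [i * BASE_CYCLE_SECONDS / harmonic for i in range(harmonic)]
    return pitch, onsets

# Holding the 2nd and 3rd harmonic keys together gives a 2-against-3 polyrhythm:
for h in (2, 3):
    print(h, rhythmicon_voice(h))
```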
During this time in New York Theremin was involved in numerous public performances of the Theremin. At one point an orchestra of ten Theremins performed at Carnegie Hall. Among them was Theremin's star player Clara Rockmore, who became as much of a Theremin rock star as there has ever been. The Theremin became popular, but in many ways as a novelty. It was popularized as the instrument without touch, where music was "pulled out of thin air," which was probably more of the attraction than the performances and musical content. The Theremin was used to play classical music like violin concertos, which it was not as good at playing as the violin. The instrument would have had more impact had there been more music composed specifically for it, to utilize the unique aspects of its sound. The Theremin was also being used in conjunction with ballet, and there was a whole Theremin ballet troupe. It was here that Theremin met his second wife, Lavinia Williams, who was a black ballet dancer from the group. This union was of course bold and controversial in 1930s America, and provides some insight into the kind of man Theremin was.
In the mid-thirties Theremin left the United States and went back to Russia. The reasons behind this are not all that clear in the eyes of history. Most of the people in his lab, including Rockmore, did not know he was going to leave, and had no idea why. Witnesses claimed he was taken away by men with guns and theorized that he was kidnapped by the Russian government; other theories claim that he was totally broke and was forced to go home. Theremin was imprisoned in Russia and made to work on developing military technology for the KGB. From this came The Thing, or the "bug," which is a small spy tool for remotely listening to and recording audio. A small microphone transmitter can be hidden in a room and transmit audio to the listening source. He was also made to restore this audio, and the audio of all kinds of spy tapes, using filters and EQs. He lived most of the rest of his life in Russia, but did make it back to the States in his later years, after the Cold War.
Theremin's influence on the electronic world cannot be measured. He was a huge inspiration for Robert Moog, the inventor of the modular synth and one of the most significant electronic instrument innovators of the mid-20th century. Moog built Theremins as a kid and was even designing his own by his late high school years. Theremin's inventions paved the way for Moog and his synthesizers, and consequently the bulk of modern-day electronic instruments. His influence on Moog alone changed the musical world forever.
It is very difficult to imagine what I could have done differently as Theremin. I think one of the things that would have benefited his career most would be to push more new, original compositions for the Theremin. This would better establish the Theremin as its own instrument, rather than an electronic imitation of other instruments. Had the Theremin had more of an identity when it was popular, it could have become a more prominent and powerful instrument in 20th century music. If I could work with him for three months, it would definitely be during his time in the laboratory in New York. To be in an environment of such stimulation, creation, and innovation would be phenomenal. Working with him to help develop his instruments and inventions, or to help with the Theremin performance production, would be really exciting.

Saturday, September 11, 2010

Week of 09.05.10

We did not meet for class this week due to the Labor Day holiday, so my blog is going to be an abstract of my notes on the readings I've done in the second chapter.
The chapter begins with a discussion of the time period of many of these events for reference and context. World War II brought about a rise in free-thinking and exploration that opened listeners up to new sounds and new forms of the arts. This social mindset played a critical role in the development and expansion of the electronic music genre. Increased interest and notoriety, combined with further advancements in technology led to further experimentations not only with the instruments themselves, but the methods and practices of composition.
Advancements in technology were seeing their way further into established mediums of musical performance. Contemporary classical music began to see the use of turntables in compositions such as Respighi's The Pines of Rome (awesome piece) and in the works of Paul Hindemith and Ernst Toch (1887-1964). Grammophonmusik, later turntablism, was a genre birthed from the advancements in recorded playback mediums such as the phonograph, with its cylinders, and the gramophone, with its shellac discs. These machines could record and play back material that could be played along with an orchestra. This changed composition because it meant that one could compose for the purpose of recording, and consequently be recording for the purpose of performance. Intense concept.
In France, Pierre Schaeffer (b. 1910), a radio broadcaster, and Pierre Henry (b. 1927), a composer, were making breakthroughs in a genre that would become known as musique concrete. This is really the first genre of music composed with the purpose of being recorded. This led to concepts like Abraham Moles's Sound Object: the idea that music is a "sequence of sound objects," meaning the materials used for producing sound for music are not limited to things commonly considered melodic or harmonic. Moles determined that a sound object is made up of amplitude, frequency, and time/duration, and that these can be further dissected into attack, sustain, decay, and release. Because sound could now be easily recorded and played back, the possibilities for examining and experimenting with sound became endless, and a whole genre of music populated by unconventional, basic "found sounds" emerged. Alongside the technology and its products, new methods of score writing, compositional principles, and performance venues had to be explored, though these were all generally based in conventional musical styles.
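As an aside, here is a minimal sketch in Python (my own illustration, using the conventional attack-decay-sustain-release ordering; every parameter value is an assumption) of how Moles's four stages can be expressed as an amplitude envelope shaping a tone.

```python
# Build a simple linear ADSR amplitude envelope and shape a sine tone with it.
import numpy as np

RATE = 44100  # samples per second

def adsr_envelope(attack, decay, sustain_level, sustain, release):
    """Return an amplitude envelope (0..1); stage lengths are in seconds."""
    a = np.linspace(0.0, 1.0, int(attack * RATE), endpoint=False)
    d = np.linspace(1.0, sustain_level, int(decay * RATE), endpoint=False)
    s = np.full(int(sustain * RATE), sustain_level)
    r = np.linspace(sustain_level, 0.0, int(release * RATE))
    return np.concatenate([a, d, s, r])

# A 440 Hz sine "sound object" shaped by the envelope:
env = adsr_envelope(attack=0.01, decay=0.1, sustain_level=0.6, sustain=0.5, release=0.3)
t = np.arange(env.size) / RATE
tone = env * np.sin(2 * np.pi * 440.0 * t)
```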
The Germans were getting into atonal music and serialism. These are systems of music based on a 12-tone sequence, determined by the composer, called a tone row. Out of this genre came a true developer in the evolution of electronic music, Karlheinz Stockhausen. Stockhausen was a composer who began composing in recording studios in the 1950s. Using complex multi-tape machines, oscillators, and speakers, Stockhausen delved deep into the world of electronic music composition and realization and formulated principles of composition and technological ability. The first is a Unified Time Structure, meaning the modification of tone, dynamics, frequency, and timbre via tape. The second is Splitting the Sound: one must have the ability to edit and manipulate the smaller elements of the synthesized sound. The third, Multi-Layered Composition, can be understood as the necessity of controlling the sound during performance rather than relying on the human element in performance. And the fourth is the Equality of Tone and Noise, which in Stockhausen's words means "any noise is musical" but "you can't just use any tone in any interval," meaning that one must put more constructive thought into the work than just recording or generating sound for the purpose of listening.
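For my own notes, here is a minimal sketch in Python (my own illustration; the row is an arbitrary example, not one any of these composers used) of what a tone row is and the standard transformations serial composers apply to it.

```python
# A tone row is an ordering of all 12 pitch classes (0 = C, 1 = C#, ...).
ROW = [0, 11, 7, 8, 3, 1, 2, 10, 6, 5, 4, 9]  # arbitrary example row

def transpose(row, n):
    """Shift every pitch class up by n semitones (mod 12)."""
    return [(p + n) % 12 for p in row]

def inversion(row):
    """Mirror each interval around the row's first pitch class."""
    return [(2 * row[0] - p) % 12 for p in row]

def retrograde(row):
    """Play the row backwards."""
    return list(reversed(row))

assert sorted(ROW) == list(range(12))  # a valid row uses each pitch class once
print(transpose(ROW, 2), inversion(ROW), retrograde(ROW))
```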

Friday, September 3, 2010

Week of 08.30.10:

This week saw the introduction of this class and the world of the electronic instrument. We began by discussing some of the philosophical concepts behind electronic music. Among these ideas is the statement that the marriage of technology and music is inescapable but not always perfect. This refers to the inevitability of the incorporation of electronic instruments in music, but also acknowledges that in the discovery of this genre and in the development of these instruments there will be failures. The history of invention leads to the creation of new instruments, which will not always be the most musical in nature. In the early days, the gap between inventors and composers bred unsuccessful instruments, unmusical instruments, and new instruments that would forever change compositional possibilities.
Edgard Varese (1883-1965) was an early innovator, composer, and inventor of electronic music devices and compositions. His goal was to emancipate the composer from the human element in a performance. In order to achieve such a feat he used tapes, oscillators, and mics to record and generate electronic tones that could be played back on command, consistently. This gave him and other composers a musical instrument with consistent and unchanging tone and performance. It was Varese's championing of the collaboration of inventor and composer that helped ignite the electronic music era.
Elisha Gray (1835-1901) was an inventor who created a telegraph that could transmit different tones from one place to another. The tones were generated, and their frequencies changed, via electromagnets, which gave the instrument a two-octave range.
Hermann von Helmholtz (1821-1895) wrote On the Sensations of Tone as a Physiological Basis for the Theory of Music. This work illustrated and outlined the scientific understanding of sound that would become the basis for electronic music synthesis. He was also the inventor of the Helmholtz resonator, a tuned cavity that isolates and reinforces a single frequency and can be used to analyze the component tones of a sound. Helmholtz was a huge influence on Thaddeus Cahill.
Thaddeus Cahill (1867-1934) was an inventor and a visionary in the early days of electronic music synthesis. He invented the Telharmonium, which was a building-sized synthesizer that used pitch shafts and tone wheels to generate sound and was played with a touch-sensitive polyphonic keyboard. The goal was to create a machine that would allow one individual to create and control an entire orchestra of sounds. This early synth occupied an entire floor in Manhattan and piped music to local customers and businesses via power and telephone lines. It was in operation from 1906 to 1908, but was shut down due to astronomical power consumption, relentless maintenance needs, and, to some extent, a lack of paying customers. Even in its two-year run, the Telharmonium was one of the most ambitious and extravagant endeavors in the history of electronic music.
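As a quick aside, here is a minimal sketch in Python (my own illustration; the numbers are made up for the example) of the tone-wheel principle the Telharmonium used: a spinning wheel with evenly spaced teeth induces one cycle per tooth in a pickup, so the pitch is the tooth count times the shaft's revolutions per second.

```python
# Tone-wheel pitch: frequency = teeth on the wheel * shaft revolutions per second.
def tone_wheel_frequency(teeth: int, shaft_rps: float) -> float:
    """Frequency in Hz produced by a spinning tone wheel."""
    return teeth * shaft_rps

# e.g. a 44-tooth wheel spinning 10 times per second gives 440 Hz, concert A
# (illustrative values, not Cahill's actual gearing):
print(tone_wheel_frequency(44, 10.0))  # 440.0
```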
Busoni (1866-1924) wrote Sketch of a New Aesthetic of Music, and his ideas fed into the Futurist movement. The Futurists were a sect of artists capitalizing on the incorporation of new sound-generating technology into their compositions. This included experiments with composition, microtonality, and new ways to create sound and music.
Russolo (1885-1947) published The Art of Noises. This manifesto outlined Futurist ideals and created definitions of sounds. He divided sounds into six categories: first, roars, thunderings, and explosions; second, whistling, hissing, and puffing; third, whispers, murmurs, and mumbling; fourth, screeching and creaking; fifth, percussive sounds; and sixth, the voices of humans and animals. He composed the Gran Concerto Futuristico.
Lee De Forest (1873-1961) was a huge contributor to the music technology scene with his invention of the vacuum tube. The tube is a method of amplifying an electrical signal inside an evacuated glass envelope: a small voltage applied to a control grid regulates a much larger current of electrons flowing from a heated cathode to a plate, so a weak input signal is reproduced as a stronger one. Tubes soon became widely used in all types of equipment, including amplifiers, radio broadcasting, television, and all kinds of musical equipment.
Leon Theremin was a Russian inventor who created the Rhythmicon, an early drum machine, and the Theremin, which is a monophonic signal generator that uses capacitance to operate. By generating electromagnetic fields around two posts, the Theremin uses the human body as a capacitor, the movement of which relative to the posts changes the fields and consequently the pitch and amplitude. While a hugely fantastic and creative instrument, its uses are limited, and its popularity was short-lived. This was partly due to the fact that this brand-new instrument was used to play classical material, which it is unable to do as well as conventional instruments. A better approach would have been to compose new music with the capabilities and sounds of the instrument in mind.
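To pin down how a body's tiny capacitance change becomes an audible sweep, here is a minimal sketch in Python of the heterodyne principle the Theremin is built on (the oscillator frequencies are illustrative assumptions): two radio-frequency oscillators are mixed, the hand detunes the variable one slightly, and the audible tone is the difference between the two.

```python
# Heterodyne pitch: the audible tone is the difference between a fixed
# and a variable radio-frequency oscillator. All values are illustrative.
FIXED_OSC_HZ = 170_000.0

def audible_pitch(variable_osc_hz: float) -> float:
    """Beat (difference) frequency heard when the two oscillators mix."""
    return abs(FIXED_OSC_HZ - variable_osc_hz)

# Detuning the variable oscillator by well under 1% sweeps whole octaves:
for f in (170_000.0, 169_560.0, 168_240.0):
    print(audible_pitch(f))  # 0.0, 440.0, 1760.0 Hz
```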
We also touched on a few other notables, such as John Cage (1912-1992), who was a composer of advanced electronic music; Hammond (1895-1973), who invented the portable electronic organ; Thomas Edison (1847-1931), who invented the phonograph; and Pfleumer, who invented celluloid and iron oxide tape.
In class we took some time to look at the Ondes Martenot, which is one of the coolest instruments of all time. Based on technology similar to the Theremin's, the Martenot has a linear pitch ring controller superimposed over a keyboard. Sound is created when a touch-sensitive key is depressed, which controls velocity and amplitude. The ring can slide up and down the keyboard, controlling pitch. The keyboard can also be played conventionally, with vibrato. Super awesome instrument and I really want one.