Although Mixing Secrets For The Small Studio has been designed to teach the subject of mixing pretty much from scratch, it nevertheless relies on the reader having a certain basic level of background knowledge about audio and music production in general. This on-line resource page is designed to clarify what you need to know to get the best out of the book, and also to provide a swift introduction/refresher for newbies. I've tried to be as concise as humanly possible, while at the same time including lots of carefully selected links which can supply more detail if required.
Health Warning! If you have a strong allergic reaction to sweeping generalisations, then top up your medication now...
SOUND IN DIFFERENT FORMS
Sound travels through the air as a series of pressure waves moving at the speed of sound (roughly 343 metres/second). When these pressure waves reach our ear drums, they cause them to vibrate, triggering our hearing sensation. Pressure waves in air can also be picked up by a microphone, which converts them into an electrical signal that represents the vibrations as voltage fluctuations. Feeding this electrical signal to a loudspeaker recreates the original pressure waves by causing the speaker's driver unit to vibrate the air. Once the sound wave is represented as an electrical waveform, it becomes possible to process it using electronic circuitry and to store/replay it using an analogue recording device such as a tape recorder. The electrical signal can also be fed on to an analogue-to-digital converter, which transforms it into a stream of numbers which can be stored as a digital recording and be processed with DSP techniques. A digital-to-analogue converter can then be used to convert the digital data back into an analogue voltage waveform.
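If you fancy seeing that last idea in miniature, here's a little Python sketch of what an analogue-to-digital converter does conceptually: sampling a waveform at regular time intervals and rounding each sample to a whole number. (The sample rate and bit depth here are arbitrary toy values I've chosen for illustration, not recommendations, and real converters are considerably more sophisticated than this!)

```python
import math

SAMPLE_RATE = 8000   # samples per second (a toy value for illustration)
BIT_DEPTH = 8        # quantise to 8-bit signed integers (-128 to 127)

def sample_and_quantise(freq_hz, duration_s=0.001):
    """Sample an 'analogue' sinewave voltage at regular intervals and
    quantise each sample to a whole number, as an A-D converter does."""
    n_samples = int(SAMPLE_RATE * duration_s)
    max_code = 2 ** (BIT_DEPTH - 1) - 1   # 127 for 8-bit
    digital = []
    for n in range(n_samples):
        t = n / SAMPLE_RATE                            # time of this sample
        voltage = math.sin(2 * math.pi * freq_hz * t)  # the analogue waveform
        digital.append(round(voltage * max_code))      # the quantised sample
    return digital

codes = sample_and_quantise(1000)   # one millisecond of a 1kHz tone
print(codes)   # [0, 90, 127, 90, 0, -90, -127, -90]
```

That stream of numbers is all a digital recording is; a digital-to-analogue converter simply turns the numbers back into a smoothly varying voltage.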
SINEWAVES & AUDIO FREQUENCIES
If a sound wave is chaotic, what you hear will appear noisy, whereas repeating waveform patterns are heard as pitched events such as musical notes or pitched overtones. However, before I say any more about complex sounds, it's useful first to understand the simplest repeating sound wave: a sinewave tone (Ex0.01:WAV/MP3). Its pitch is determined by the number of times it repeats per second, referred to as its frequency and measured in Hertz (Hz). Roughly speaking, the human ear can detect sinewave tones across a 20Hz-20kHz frequency range. Low-frequency tones are perceived as low-pitched, while high-frequency tones are perceived as high-pitched. Although a sinewave tone isn't exactly thrilling to listen to on its own, it turns out that all the more interesting musical sounds can actually be broken down into a collection of different sinewave tones. The mixture of different sinewave components within a given complex sound determines its timbre: for example, dull sounds have more low-frequency sinewave components (or 'low frequencies'), while bright sounds have more high-frequency sinewave components (or 'high frequencies').
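To make the idea of frequency a little more concrete, here's a minimal Python sketch which generates the sample values of a sinewave tone. (The 48kHz sample rate is just a common studio value I've picked for illustration.)

```python
import math

def sine_samples(freq_hz, sample_rate=48000, n_samples=480):
    """Generate the sample values of a sinewave tone at the given frequency."""
    return [math.sin(2 * math.pi * freq_hz * n / sample_rate)
            for n in range(n_samples)]

# At a 48kHz sample rate, a 100Hz tone completes one full cycle every
# 480 samples (48000/100), while a 200Hz tone completes one every 240:
low_tone  = sine_samples(100)
high_tone = sine_samples(200)
```

Doubling the frequency halves the repetition period, and (as discussed below) our ears hear that doubling as a pitch rise of one octave.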
One way to examine the sinewave components which make up any sound is to use a spectrum analyser, a real-time display of a sound's distribution across the frequency spectrum. On a spectrum analyser, a simple sinewave tone shows up as a single narrow peak, while real-world signals create a complex undulating plot. Narrow peaks in a complex spectrum-analyser display indicate pitched components within the signal, while the distribution of energy across the frequency display determines the timbre of the sound. Here's a short video of a spectrum analyser in action, to illustrate what I mean:
Although a single sinewave tone will be perceived as a pitched note, almost all real-world musical notes are actually made up of a harmonic series of sinewaves. The lowest-frequency of these, the fundamental, determines the perceived pitch, while a series of harmonics at whole-number multiples of the fundamental's frequency determine the note's timbre according to their relative levels.
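Here's a little Python sketch which ties those two ideas together: it builds a toy 'note' from a fundamental plus a quieter second harmonic, then analyses it with a naive discrete Fourier transform, which is my own bare-bones stand-in for what a spectrum analyser does (real analysers use the much faster FFT algorithm to get the same information).

```python
import math

def dft_magnitudes(samples):
    """Naive discrete Fourier transform: the magnitude of each frequency
    bin. A real spectrum analyser uses the much faster FFT instead."""
    N = len(samples)
    mags = []
    for k in range(N // 2):   # bins up to half the sample rate
        re = sum(samples[n] * math.cos(2 * math.pi * k * n / N) for n in range(N))
        im = sum(samples[n] * math.sin(2 * math.pi * k * n / N) for n in range(N))
        mags.append(math.hypot(re, im))
    return mags

# A toy 'note': a fundamental completing exactly 5 cycles in the analysis
# window, plus a second harmonic at double the frequency and half the level.
N = 64
note = [math.sin(2 * math.pi * 5 * n / N) + 0.5 * math.sin(2 * math.pi * 10 * n / N)
        for n in range(N)]

spectrum = dft_magnitudes(note)
loud_bins = [k for k, mag in enumerate(spectrum) if mag > 1.0]
print(loud_bins)   # [5, 10] -- narrow peaks at the fundamental and its harmonic
```

All the signal's energy lands in two narrow peaks, just as a sinewave-based note does on a spectrum analyser's display.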
LOGARITHMIC SCALES FOR LEVEL & PITCH
Our hearing perceives both level and pitch in a roughly logarithmic way -- in other words, we compare levels and pitches in terms of ratios. For example, the perceived pitch interval between a note with its fundamental at 100Hz and a note with its fundamental at 200Hz is the same as that between notes with fundamentals at 200Hz and 400Hz. Similarly, when dealing with sound in electrical form, the perceived volume difference on playback between signal levels of 100mV and 200mV is roughly the same as that between signal levels of 200mV and 400mV. In recognition of this, both pitch and signal-level measurements are frequently made using a logarithmic scale. In the case of pitch, this is done in terms of traditional musical intervals: eg. 100Hz to 200Hz is an octave, 200Hz to 400Hz is an octave, and so on. (More examples here.) In the case of sound level this is done using decibels (dB): eg. 200mV is 6dB larger than 100mV, 400mV is 6dB larger than 200mV, and so on.
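Both of those logarithmic scales boil down to a single logarithm each, so here's a quick Python sketch if you want to check the numbers for yourself. (The '6dB' figure in the text is actually 20 x log10(2), which is roughly 6.02dB, rounded for convenience.)

```python
import math

def level_ratio_db(v2, v1):
    """Express a voltage ratio in decibels: 20 * log10(v2 / v1)."""
    return 20 * math.log10(v2 / v1)

def pitch_interval_octaves(f2, f1):
    """Express a frequency ratio in octaves: log2(f2 / f1)."""
    return math.log2(f2 / f1)

print(round(level_ratio_db(200, 100), 1))   # 6.0 -- doubling the level = ~6dB
print(round(level_ratio_db(400, 200), 1))   # 6.0 -- same ratio, same dB value
print(pitch_interval_octaves(200, 100))     # 1.0 -- doubling the frequency = one octave
print(pitch_interval_octaves(400, 100))     # 2.0 -- quadrupling = two octaves
```

Notice that equal ratios always give equal dB or octave values, which is exactly why these scales match the way we hear.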
Simple decibel values can be used to indicate changes in signal level, which is why they are often used to calibrate audio gain controls such as faders. However, it's important to remember that decibels (just like musical intervals) are always relative. In other words, it's meaningless to say that a signal level is "4.75dB", because that immediately raises the question: "4.75dB larger than what?" Therefore, if you want to state absolute level values in terms of decibels, you need to express them relative to an agreed reference level, indicating this using a suffix. Common reference levels used for studio purposes include dBSPL, dBu, dBV, and dBFS. For more information on signal levels, and how they are displayed using audio meters, check out this article.
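To show how the suffix changes the number, here's a small Python sketch using two of those voltage references -- I'm assuming the standard reference values of 0.775V RMS for dBu and 1V RMS for dBV (dBSPL and dBFS work the same way, but reference a sound pressure and a converter's full-scale level respectively, so they don't fit a simple voltage table):

```python
import math

# Standard reference voltages (RMS) for two common dB suffixes:
DB_REFERENCES_V = {
    "dBu": 0.775,   # strictly sqrt(0.6) ~= 0.7746V
    "dBV": 1.0,
}

def absolute_level_db(voltage, suffix):
    """Express an absolute voltage in decibels relative to a named reference."""
    return 20 * math.log10(voltage / DB_REFERENCES_V[suffix])

# The same 1.228V signal expressed against the two references:
print(round(absolute_level_db(1.228, "dBu"), 1))   # 4.0 -- ie. +4dBu
print(round(absolute_level_db(1.228, "dBV"), 1))   # 1.8 -- ie. +1.8dBV
```

Same voltage, two different numbers -- which is exactly why the suffix matters. (That 1.228V example is no accident: +4dBu is a common nominal operating level for professional studio gear.)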
NOISE, HEADROOM, & FREQUENCY RESPONSE
In order to get the best performance out of any piece of audio equipment, you should ideally present it with an audio signal level which is as far as possible above the equipment's inherent noise floor, but without triggering distortion by exceeding the equipment's available headroom. Because different devices have different noise-floor and headroom characteristics, this means that you often need to apply gain at various points in a signal processing/recording chain in order to get the best out of it. For more details, check out this article on gain structure.
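As a worked example (with figures I've made up purely for illustration -- real devices vary widely), the arithmetic of noise floor versus headroom looks like this:

```python
# Illustrative figures only -- real equipment specs vary widely:
noise_floor_dbu = -95.0   # the device's inherent noise level
clip_point_dbu  = 22.0    # the level at which distortion sets in
signal_dbu      = 4.0     # the nominal operating level of our signal

headroom_db        = clip_point_dbu - signal_dbu    # safety margin before distortion
signal_to_noise_db = signal_dbu - noise_floor_dbu   # how far the signal sits above the noise

print(headroom_db)          # 18.0 -- 18dB of headroom for peaks
print(signal_to_noise_db)   # 99.0 -- 99dB between signal and noise floor
```

Gain structure is simply the craft of keeping both of those margins healthy at every point in the chain.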
Even with perfect gain structure, however, any studio device may still alter the nature of sound passing through it, most noticeably in terms of the sound's frequency balance. A common way to express this frequency alteration is in terms of a frequency response graph, which shows the gain applied by the device across the frequency range. A device that leaves the frequency balance completely unchanged will show a straight horizontal frequency-response plot at the 0dB (or 'unity gain') level. However, most real-world equipment deviates somewhat from this ideal 'flat response' -- indeed, some devices (such as filters and equalisers) deliberately warp their frequency-response curves for creative purposes. That said, a device's frequency response isn't its only important sonic characteristic -- its inherent distortion characteristics and phase response are also important factors. For some real-world examples of this, check out this in-depth article.
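To illustrate what a deliberately non-flat response looks like in numbers, here's a Python sketch of the textbook gain formula for an ideal first-order low-pass filter, |H(f)| = 1/sqrt(1 + (f/fc)^2) -- a deliberately simplified model, not any particular piece of gear:

```python
import math

def lowpass_response_db(freq_hz, cutoff_hz):
    """Gain in dB of an ideal first-order low-pass filter at a given
    frequency, using the textbook magnitude formula 1/sqrt(1+(f/fc)^2)."""
    magnitude = 1 / math.sqrt(1 + (freq_hz / cutoff_hz) ** 2)
    return 20 * math.log10(magnitude)

# A filter rolling off above 1kHz: nearly flat well below the cutoff
# frequency, 3dB down at the cutoff, and falling steadily above it.
for f in (100, 1000, 10000):
    print(f, round(lowpass_response_db(f, 1000), 2))   # -0.04, -3.01, -20.04
```

Plot those gain values against frequency and you've drawn the filter's frequency-response graph.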
THE MULTITRACK RECORDING STUDIO
Modern studio production centres around the concept of multitrack recording, whereby you can record (or 'track') one set of sounds first, and subsequently overdub new sounds alongside those. Multitrack recorders can be analogue or digital, and allow you to store each newly recorded sound on a separate track so that you can still edit and process all the parts independently as you build up your production. When the recording has been completed, all the tracks are then mixed together (the 'mixdown stage' of the production) to create a final stereo recording.
In order to manage all the audio signals which need to go into and out of a multitrack recorder, you need a mixer, whether hardware or software. At its most basic, a mixer simply provides facilities for taking a set of input signals and mixing them together into a single output signal. However, what quickly makes things complicated is that real-world mixers may need to provide a wide variety of different mixes for different purposes: recombining the outputs of a multitrack recorder to create a stereo master mix; sending mixes of different input signals for further processing during tracking and mixdown; and creating tailored headphone mixes for the benefit of overdubbing musicians, to mention but three common examples. In addition, many mixers include audio processing functions such as equalisation and compression, and/or the ability to insert external audio processors into their signal paths. For more details, check out these two articles which give a general overview of mixer functionality: the anatomy of a hardware mixer; using a software mixer.
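At its heart, the 'mixing together' a mixer does is nothing more mysterious than sample-by-sample addition, with each input scaled by its fader gain first. Here's a bare-bones Python sketch of that idea (the function names and sample values are my own inventions for illustration, not taken from any particular mixer software):

```python
def db_to_gain(db):
    """Convert a fader setting in dB to a linear gain multiplier."""
    return 10 ** (db / 20)

def mix(tracks, fader_dbs):
    """Mix several tracks (equal-length lists of samples) into one output,
    scaling each track by its fader level in dB before summing."""
    gains = [db_to_gain(db) for db in fader_dbs]
    return [sum(gain * track[n] for gain, track in zip(gains, tracks))
            for n in range(len(tracks[0]))]

# Two tiny 'tracks' of audio samples, mixed with the bass faded down 6dB:
drums = [0.5, -0.5, 0.25, -0.25]
bass  = [0.2, 0.2, -0.2, -0.2]
master = mix([drums, bass], [0.0, -6.0])
```

Everything else a mixer offers -- aux sends, monitor mixes, EQ, inserts -- is essentially routing and processing bolted on around that simple summing operation.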
Need more tips? Get in touch!