Best Sound Design Tricks


It can be deceptively easy to mix and master a track, but what happens to the sound when it leaves the speakers?

Our friends at Get That Pro Sound give a rundown on sound design techniques that will seduce your listeners’ ears into hearing exactly what you want.


With just a basic knowledge of psychoacoustic principles, you can find creative ways to give your listeners a more powerful, clear, and ‘larger than life’ experience. By understanding how the hearing system interprets sounds, we can creatively and artificially recreate its responses to particular audio phenomena, using tools such as EQ, compression, and reverb.

For example, if you build the ear’s natural reflex into the designed dynamics of a sound (a very loud hit followed by a sudden drop in level), the brain will perceive the sound as “loud” even when it’s played back relatively quietly.

You’ve fooled the brain into thinking that the ear has closed down slightly in response to ‘loudness’. The result? The experience of loudness is quite distinct from actual physical loudness.


1. The Haas Effect
Named after Dr. Helmut Haas (who first described it in 1949), this principle can be used to create an illusion of spacious stereo width…starting with just a single mono source.

unprocessed vs delayed sound

Haas was studying how the ears interpret the relationship between an originating sound and its ‘early reflections’ within a space. His conclusion: as long as an early reflection or identical copy of the original sound arrives less than 35ms after it (and at a level no more than 10dB louder than the original), the two sounds will be interpreted as a single one.

The perceived direction of the original sound is essentially preserved, but because of the subtle phase difference, the early reflection/delayed copy adds extra spatial presence to the perceived sound.

Haas effect in recording
 
The Haas Effect in Practice
In a musical context, a good trick for thickening and/or spreading out distorted guitars (or any other mono sound source) is to duplicate the part, pan the original hard to one side, and pan the copy hard to the other.

Then delay the copy by somewhere between 10 and 35ms (every application calls for a slightly different amount), either by shifting the part back on the DAW timeline or by inserting a basic delay plugin on the copy’s channel with the appropriate delay time dialed in. This tricks the brain into perceiving greater width and space while leaving the center wide open for other instruments.
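As a rough sketch of what’s happening under the hood, here’s the duplicate-and-delay step in plain Python. The 20ms delay, sample rate, and 220Hz test tone are illustrative assumptions, not fixed rules:

```python
import math

SR = 44100  # assumed sample rate in Hz

def haas_widen(mono, delay_ms=20.0, sr=SR):
    """Split a mono signal into a stereo pair: the left channel is the
    dry signal, the right channel an identical copy delayed by delay_ms
    (within the 10-35 ms Haas window)."""
    delay = int(sr * delay_ms / 1000.0)
    left = list(mono) + [0.0] * delay   # pad the dry side so lengths match
    right = [0.0] * delay + list(mono)  # delayed copy, panned to the other side
    return left, right

# A short 220 Hz tone stands in for the mono source
tone = [math.sin(2 * math.pi * 220 * n / SR) for n in range(SR // 10)]
L, R = haas_widen(tone, delay_ms=20.0)
```

Hard-panning `L` and `R` reproduces the trick: the 20ms offset is short enough that the two channels fuse into one wide sound rather than a distinct echo.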

You can also use this technique to pan a mono signal away from the busy center in order to avoid masking from other instruments. At the same time, you don’t want to unbalance the mix by only panning to one side or the other. The answer lies in “Haasing it up” and panning your mono signal both ways.

Consider Using Slight Delays
Of course, there’s nothing stopping you from slightly delaying one side of a real stereo sound. For example, you might want to spread your ethereal synth pad to epic proportions. Just be aware, however, that you’ll also be making it that much more ‘unfocused’ as well. For pads and background guitars though, this is often entirely appropriate.

As you play with the delay time, you’ll notice that too-short delays produce a pretty nasty out-of-phase sound, while too-long delays break the illusion and you’ll start to hear two distinct, separate sounds. You’re looking for something in between: a setting that sounds just right and gives you the sense of space you want.

Find the Right Balance
Remember: The shorter the delay time, the more susceptible the sound is to unwanted comb filtering when the channels are summed to mono. This is something to consider if you’re making music primarily for clubs, radio, or other mono playback environments.

You’ll probably also want to tweak the relative levels of the two sides, both to keep the part sitting right in the mix and to maintain the desired left-right balance across the stereo spectrum.
You can also apply additional effects to one or both sides, such as subtle LFO-controlled modulation or filtering on the delayed side.

A word of caution: Don’t overdo it. In a full mix, use the Haas Effect on one or two instruments, maximum. This helps you avoid unfocusing the stereo spread and being left with phasey mush.

2. Frequency Theory: How Masking Works
There are limits to how well our ears can differentiate between sounds that occupy similar frequency ranges.

Masking occurs when two or more sounds occupy the same frequency range. Generally, the louder of the two will partially or completely obscure the other, which then seems to ‘disappear’ from the mix.

frequencies of human hearing

Obviously, this is a pretty undesirable ‘phenomenon,’ and it’s one of the main things to be aware of throughout the whole writing, recording, and mixing process. It’s also one of the main reasons EQ was developed, which can be used to carve away masking frequencies during the mixing stage.

Our audio trick? Avoid masking problems during the writing and arranging stages by using notes and instruments that occupy their own frequency ranges.

Even if you’ve taken precautions, masking will still sometimes occur in the mix, and it can be difficult to determine why certain elements sound different solo than they do in the full mix.

Although a sound’s root notes and dominant frequencies may have the space they need, its harmonics (which also contribute to the overall timbre) appear at other frequencies. These may still be masked, which is where EQ can come to the rescue.
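One way to spot masking candidates before reaching for EQ is to compare how much energy two parts carry in the same band. This is a deliberately naive sketch; the sample rate, band edges, and test signals are made up for illustration:

```python
import math

SR = 8000   # illustrative sample rate
N = 512     # analysis window length

def band_energy(signal, lo_hz, hi_hz, sr=SR):
    """Energy of the signal between lo_hz and hi_hz, via a naive DFT
    (fine for illustration-sized buffers, far too slow for real use)."""
    n = len(signal)
    total = 0.0
    for k in range(n // 2):
        f = k * sr / n
        if lo_hz <= f < hi_hz:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            total += re * re + im * im
    return total

# A 'bass' part at 500 Hz and a 'pad' whose fundamental sits at the
# same 500 Hz: heavy shared energy in the 400-600 Hz band flags a
# likely masking problem
bass = [math.sin(2 * math.pi * 500 * t / SR) for t in range(N)]
pad = [0.8 * math.sin(2 * math.pi * 500 * t / SR)
       + 0.5 * math.sin(2 * math.pi * 2000 * t / SR) for t in range(N)]
overlap = min(band_energy(bass, 400, 600), band_energy(pad, 400, 600))
```

If both parts show significant energy in the shared band, one of them is a candidate for a cut there during mixing.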

3. The Ear’s Acoustic Reflex
As mentioned in the introduction, when confronted with a high-intensity stimulus, the middle ear muscles involuntarily contract. This reduces the amount of vibrational energy transferred to the sensitive cochlea, which converts sonic vibrations into electrical impulses for processing by the brain. Essentially, the muscles close to protect the more delicate structures of the ear.

The brain recognizes the dynamic signature of these reduced-loudness sounds: an initial loud transient followed by an immediate drop in level as the ear muscles respond. The result? It still registers ‘loud, sustained noise’.

This principle is often used in cinematic sound design techniques and is particularly useful for simulating the physiological impact of massive explosions and high-intensity gunfire (without inducing hearing-damage lawsuits).

The ear’s reflex to loud sounds can be simulated by carefully shaping a sound’s fine dynamics. You can make an explosion appear very loud by artificially pulling the level down right after the initial transient; the brain will perceive it as louder and more intense than it actually is. This also works well for booms, impacts, and even drops in a club or electronic track.
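A minimal sketch of the idea in Python (the timings and depth are illustrative starting points, not magic numbers): let the first few milliseconds through at full level, then ramp the gain down quickly, the way the ear itself would.

```python
import math

SR = 44100  # assumed sample rate

def reflex_envelope(signal, transient_ms=15.0, duck_db=-9.0, fade_ms=30.0, sr=SR):
    """Pass the initial transient at full level, then ramp the gain
    down to duck_db, mimicking the middle ear's protective contraction."""
    duck = 10 ** (duck_db / 20.0)
    t_end = int(sr * transient_ms / 1000.0)
    f_end = t_end + int(sr * fade_ms / 1000.0)
    out = []
    for n, x in enumerate(signal):
        if n < t_end:
            g = 1.0                                  # transient passes untouched
        elif n < f_end:
            g = 1.0 + (n - t_end) / (f_end - t_end) * (duck - 1.0)  # linear ramp
        else:
            g = duck                                 # 'contracted' steady state
        out.append(x * g)
    return out

# Shape a raw low 'boom' so the brain reads it as a huge explosion
boom = [math.sin(2 * math.pi * 60 * n / SR) for n in range(SR // 2)]
shaped = reflex_envelope(boom)
```

In practice you’d do this with volume automation or an envelope shaper rather than raw code, but the dynamic shape is the same: loud hit, fast duck, reduced sustain.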

4. Create Power and Loudness – Even at Low Listening Levels
If you take only one thing away from this article, hear this: The ears’ natural frequency response is non-linear. More specifically, our ears are more sensitive to mid-range sounds than frequencies at the extreme high and low ends of the spectrum. We generally don’t notice this, as we’ve always heard sound this way and our brains take the mid-range bias into account. It does, however, become more apparent during mixing, where relative levels of instruments (at different frequencies) change depending on the overall volume you’re listening at.

Even though your own ears are an obstacle to achieving a perfect mix, there are simple workarounds to this phenomenon. You can also manipulate the ears’ non-linear response to different frequencies and volumes in order to create an enhanced impression of loudness and punch in a mix – even when the actual listening level is low.

The Fletcher-Munson Phenomenon
This nonlinear hearing phenomenon was first written about in 1933 by researchers Harvey Fletcher and Wilden A. Munson, and although the data and graphs they produced have since been improved upon, they were close enough that ‘Fletcher-Munson’ is still used as shorthand for everything related to ‘equal loudness contours’.

Generally, you’ll do your best balancing at low volumes (which also saves your ears from unnecessary fatigue). Loud volumes are generally poor for judging balance because, per Fletcher-Munson, everything seems closer than it really is.

Think About Your Audience
In certain situations (like mixing sound for films), it’s better to mix at the same level, and in a similar environment, to where the film will eventually be heard.

This is why film dubbing theaters look like actual cinemas and are designed to essentially sound like them too.

The best mixes result from taking the end listener and their environment into account, not necessarily mixing something that only sounds great in a $1 million studio.

Cinema-scale mixing at Skywalker Sound
 
So, how does our ears’ sensitivity to the mid-range manifest itself on a practical level? Try playing back any piece of music at a low level, then gradually turn it up. As the level increases, you might notice that the ‘mid-boost’ bias of your hearing system has less of an effect, with the high- and low-frequency sounds seeming proportionally louder (and closer, which we’ll go into in the next tip).

Given that extremely high and low frequencies stand out more when we listen at loud levels, we can create the impression of loudness at lower listening levels by attenuating the mid-range and/or boosting the high and low ends of the spectrum. On a graphic EQ, the resulting curve looks like a smiley face, which is why producers talk about ‘scooping the mid-range’ to add weight and power to a mix.
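As a sketch, here’s one way to express that smiley-face target curve: a broad mid cut (a Gaussian bell in log-frequency space) that leaves the extremes untouched, so lows and highs read as boosted relative to the mids. The center frequency, depth, and width here are illustrative assumptions, not fixed rules:

```python
import math

def smiley_gain_db(freq_hz, scoop_db=-6.0, center_hz=800.0, width_oct=2.0):
    """Target gain (in dB) at freq_hz for a mid-scoop 'smiley' curve:
    deepest cut at center_hz, fading back to 0 dB a few octaves away."""
    octaves = math.log2(freq_hz / center_hz)   # distance from center in octaves
    sigma = width_oct / 2.0
    return scoop_db * math.exp(-(octaves ** 2) / (2 * sigma ** 2))

# Deepest cut at 800 Hz, near-flat at the spectral extremes
for f in (50, 200, 800, 3200, 12000):
    print(f, round(smiley_gain_db(f), 2))
```

Dialing a curve like this into a graphic EQ gives the classic scooped shape; how deep to go depends entirely on the material.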
