Wave Ears

In-ear Monitors in Worship

Kent Margraves, 11/14

Many worship venues have made the transition from “wedges” to “ears” for stage monitoring, and often find that this can be a surprisingly tricky process. Essentially, wedges are loudspeakers laid sideways and angled up at the performers. The sound content or “mix” in each monitor, or group of monitors, is customized for the performers’ needs, and each often has a much different balance than the house sound mix.

In-ear monitor systems have become popular in recent years. These systems place earphones in the performer’s ears, mostly sealing the ear (both generic-fit and custom-molded models are common). They may be mixed from the front-of-house console, a dedicated monitor console, a personal on-stage mixer, or a wireless computer or tablet. Regardless of the delivery or mixing system, monitoring earphones (either wired or wireless) usually replace the monitor wedge(s).

There are a number of advantages to in-ear monitoring, including: lower stage volume (no open wedges blaring, better house sound, greater gain-before-feedback), artist mobility (no sweet spot to stand in as with most wedges), hopefully lower monitoring levels for increased hearing safety, the ability to listen deeper into the monitor mix details, a custom mix and loudness for each user, a discreet path for talkback to the user’s ears, system portability (wedges weigh a whole lot more than earphones), aesthetics, and acoustic isolation. Those last two points may be argued as disadvantages too, but most users agree that earphones are visually less distracting than wedge monitors. Read on for more regarding the acoustic isolation.

Worship leaders and techs transitioning to IEMs should plan for increased communication and expect to spend more time building the right monitor mix. The acoustic isolation (occlusion effect) of in-ear monitoring offers extreme control, but also requires more attention to detail.  First, a bit about human hearing and stage monitoring. Consider this scenario:

A worship leader is downstage center with a single wedge monitor and his guitar and vocal mic. The tech mixes both signals into the wedge plus any other signals the WL requests, which might include other instruments. He monitors comfortably. Does he monitor in mono or stereo?

It’s true that a single wedge monitor reproduces a mono audio signal, but is that all he hears? No. He not only hears the sound from the wedge, but also the sounds from all around him including other performers, audience sounds, room reverberation, and more. He hears all these things with a true sense of space and dimension. Humans hear binaurally. Next, consider this scenario:

A church buys its first wireless personal monitoring system to replace the worship leader’s wedge. The sound tech removes the wedge and routes the regular monitor signal to the in-ear monitor system, whether wireless or wired (mixed from FOH). At sound check the WL puts his new earphones in and soon says, “My mix is different!” The tech responds, “Nope, it’s the same mix you’ve always had.” Who’s right?

They both are. With the wedge, the WL heard the monitor signal plus his acoustic surroundings as an integrated listening experience, with both ears open. Now that his ears are essentially plugged by earphones, he hears only the monitor signal provided in them, and does not hear his acoustic surroundings. He relies nearly 100 percent on the mix he receives. So the WL is correct that he’s hearing a different balance than before, and the audio operator is correct in the sense that he’s feeding the same old monitor mix to the WL. The difference is the delivery method, with its DRAMATICALLY different acoustic experience. For this reason, the transition can be tricky and potentially frustrating for new in-ear monitor users.

Professional earphones for stage monitoring are designed to seal the ear, acoustically isolating the user from nearby sounds. When people with normal hearing close off the opening to the ear canal, things change big time. Occlusion/isolation is a big deal, but so is the way the user hears their own voice via bone conduction (vibration of the bones in the head). Vocalists typically report a sudden change in the tone of their own voice when using earphones for the first time. Our WL mentioned earlier would certainly have noticed this: due to occlusion, he no longer heard much of his voice through air conduction. Before the vocal microphone is added to the IEM mix, most of what a vocalist hears is the bassy, muffled sound of their own voice via bone conduction.

For this reason, vocalists often have the toughest time adjusting to wireless personal monitors. A vocalist using in-ear monitors certainly needs to hear others on the stage (such as the band or orchestra) in their monitor mix, but usually needs a lot more of their own vocal. This helps overcome the occlusion. The vocal will often need to be, by far, the most prominent element in their mix. If vocalists do not hear sufficient level of their own voice, the bone-conducted tone of their voice predominates and they are uncomfortable. Ever heard a vocalist trying IEMs for the first time say, “I sound really weird!”? This is probably why. Instrumentalists using professional, sealed earphones also experience isolation, but they do not have the challenge of their instrument being mounted in their skull :>), and hence don’t have to deal with tonal distortion due to bone conduction.

So, because of the isolation provided by proper earphones, artists no longer hear sounds naturally as they traditionally have with wedges. If there is something they want or need to hear, it must be deliberately routed to their monitor mix. It becomes critical that the monitor mixer auditions the mix with earphones, preferably of the same type the artists use. And mix adjustments that required four or five knob “clicks” with a wedge might need only two or three “clicks” (or fewer) in great earphones. The details are simply much more obvious.

Consider a worship leader with a choir behind him: in a wedge application, he may hear plenty of the choir naturally, without any choir being in his wedge. But with “ears,” he will need the choir up in his mix if he intends to hear them at all.  So the acoustic isolation offers wonderful control, but requires increased attention and effort.

Full Vs. Partial Mixes

A straight-up full IEM mix might sound much like the front-of-house mix, a commercial CD mix, or similar, with every element processed and blended at the proper “finished product” balance. But a typical IEM mix intentionally omits non-essential elements (for that particular user!) so that the remaining elements may be monitored clearly, without unnecessary “clouding” from a busy mix. Clouding is sometimes also loosely referred to as “crowding.”

For instance, a bass player’s IEM mix in a modern worship band setting will certainly have his bass, the kick drum, the basics of the rhythm section, the lead vocal, and maybe a few other things he may request. But it might omit (or at least keep low) the choir mics, orchestra sounds, background singers, playback devices, “talking heads,” or other elements that are not essential to the pitch and time cues he must focus on.

It is often helpful to remind each other (musicians and techs alike) to contrast the terms “listening” and “monitoring,” and to remember the purpose of stage monitoring. Monitoring can sometimes become a race for the perfect full mix in a user’s wireless personal monitor, when usually that’s really not the point at all. (If it were, we could all save a ton of money and effort by routing the main PA mix to all IEMs, but that would be a disaster.)

Downward Mixing
“Downward” or “subtractive” mixing describes the idea of “less is more” in monitor mixing, and this technique applies very well to both wedges and earphones. So when an artist continually asks for more and more level from various sources in their ears, especially their own signal, we should instead turn other elements down. The artist still gets the balance adjustment they desire, but without an overall volume increase. This is also a better approach when it comes to proper gain structure in our mixing consoles and monitor gear (wired or wireless).
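The arithmetic behind downward mixing is easy to check. The sketch below uses a hypothetical four-source mix (the dB figures are illustrative, not from this article) to compare boosting the “me” signal by 3 dB against cutting everything else by 3 dB: the relative balance comes out identical, but the subtractive approach leaves the summed level about 3 dB lower, preserving headroom.

```python
import math

def total_power_db(gains_db):
    """Summed level (dB) of uncorrelated sources, adding powers."""
    return 10.0 * math.log10(sum(10 ** (g / 10.0) for g in gains_db))

# Hypothetical mix: "me" plus three other sources, all starting at 0 dB
me = 0.0
others = [0.0, 0.0, 0.0]

# Additive: boost "me" 3 dB.  Subtractive: cut everyone else 3 dB.
# Either way, "me" ends up +3 dB relative to the rest...
additive_balance = (me + 3.0) - others[0]
subtractive_balance = me - (others[0] - 3.0)

# ...but the summed level differs by about 3 dB in favor of the
# subtractive (downward) approach, which keeps gain structure healthy.
additive_total = total_power_db([me + 3.0] + others)
subtractive_total = total_power_db([me] + [g - 3.0 for g in others])
```

The function name and mix values here are assumptions for illustration; the point is only that the artist hears the same balance change either way, without the overall level creeping up.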

Mono Vs. Stereo Mixes
Mono in-ear mixes can be made to work. But those who use their systems in stereo soon discover a world of increased monitoring flexibility, and humans aren’t designed for mono. A mono IEM mix means that everything is heard “dead center”: in the virtual middle of the sound image inside the listener’s head, or “phantom center.” A stereo mix allows sources to be panned across the stereo space in the listener’s head.

Stagger Panning
Here, various sources are intentionally panned to different places across the stereo image for the purpose of “un-mixing” them for monitoring. It is interesting to see that musicians can (whether consciously or not) train themselves to “point” their listening in different directions in their head, depending on what sound they want to focus on at any moment. It is important that the user’s own “me” signal stays prominent and in the center of their head, or “up the middle.” Say a musician has his acoustic guitar and the worship leader vocal both placed center in his head (good), an electric guitar panned to 10 o’clock, another guitar panned to 2 o’clock, stereo keyboards spread wide (or not), and other sources panned to 11 o’clock, 4 o’clock, and so on. While we would usually not do this for a mix intended for an audience, this “un-mixing” by stagger panning can be very effective for stage monitoring. One excellent worship bassist stated:

“When running an IEM system in mono, I hear the mix dead center. That is a problem when I need to hear kick, snare, overhead drum mics, bass, acoustic guitar, electric guitars, click, percussion, back-ground vocals, the worship leader, a choir, loops, etc… I have to choose three to five things to monitor and everything else takes the back seat…”

(he just described clouding from a full mix)

“…when I use IEMs in stereo, I have a much larger sound field to use. I’ll pan background vocals slightly left, the worship leader slightly right, acoustic around 30 percent right, piano around 30 percent left, kick and bass dead center, overhead drum mics around 50 percent right, and so on…”

(sounds like his version of stagger panning)

“With a stereo mix, things don’t compete as much… In mono, the only way to get more room is to increase the gain, which takes my mix louder, whereas a stereo mix allows me to take my mix wider. In fact, I am able to use 25-35 percent less volume with a stereo mix.” – Andrew Catron, Worship Leader

And here is a quote on this topic from a veteran professional monitor mixer:

“…I’ve found that creating a stereo mix with slight spread of sources with the artist’s own voice or instrument dead center allows me to keep levels under control. I also get a lot less of the ‘more me’ requests with this approach.” – Scott Fahy, lead audio engineer/monitor engineer

While Catron has a good working audio knowledge, he is a musician first, and it’s interesting that he sorted out the thoughts above while transitioning from a mono to a stereo in-ear mix. Fahy is not a musician but a very skilled and experienced audio engineer, and usually provides several dozen monitor mixes at a time on a dedicated console in a complex worship environment. Both, from very different approaches, are convinced that stereo (vs. mono) in-ear monitoring makes for easier monitoring, happier users, and lower volume. Again, humans are built for stereo.
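Stagger panning maps naturally onto a standard equal-power (constant-power) pan law. The sketch below is generic, not any particular console’s implementation, and the “clock position” mapping is my own rough assumption for illustration.

```python
import math

def equal_power_pan(pos):
    """Return (left_gain, right_gain) for pan position pos in [-1, +1].

    -1 = hard left, 0 = center, +1 = hard right.  The equal-power law
    keeps left^2 + right^2 == 1, so a source holds constant perceived
    power as it moves across the stereo image.
    """
    angle = (pos + 1.0) * math.pi / 4.0   # map [-1, +1] onto [0, pi/2]
    return math.cos(angle), math.sin(angle)

# "Me" and the worship leader dead center: ~0.707 (-3 dB) per side,
# the familiar phantom center
me_l, me_r = equal_power_pan(0.0)

# Electric guitar leaning left ("10 o'clock"), another leaning right
# ("2 o'clock"), each still at full combined power
gtr1 = equal_power_pan(-0.33)
gtr2 = equal_power_pan(+0.33)
```

Because combined power is constant at every position, spreading sources this way “un-mixes” them spatially without changing how loud each one is, which is exactly why a wider stereo mix can run at a lower overall level.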

One Or Two Ears?
Yep, I went there… it’s important. Here goes: Have you ever seen an artist remove one earphone on stage? Why is that? One common reason is that they “can’t hear” or are uncomfortable with their mix when wearing both earphones. In my experience this is most common with vocalists, probably largely due to the occlusion and bone conduction effects discussed above. They are certainly most comfortable with their raw and open ear(s), which they’ve been using reliably their entire lives. But if the monitor mix is really needed and is suitably delivered to the earphones, they should wear them both. One good way to achieve this is to work toward a proper and comfortable IEM mix so that the artist is not tempted to remove either earphone. That may include ambient miking and mixing, and we’ll get to that shortly…

Using one earphone on the live stage usually brings an accompanying increase in monitoring volume, ultimately controlled by the user at his/her bodypack receiver. This author has witnessed this in enough scenarios to note a trend: a single earphone (all else equal) tends to be run at least 10-12 dB louder than two earphones! One reason is simple: the open ear is not sealed and hears the array of surrounding stage sounds, which are often fairly loud, so the single earphone in the other ear must be turned up to compete clearly. Also, when one earphone is removed, “binaural summation” is defeated, at a cost of another 3-6 dB. This is a psychoacoustic phenomenon that very positively affects listener perception of loudness, but it only works with two ears working together. Yikes! So on top of the already increased monitoring volume, the loss of binaural summation causes even higher listening levels to be needed.

It’s easy to believe, then, that a single earphone may be run well beyond twice as loud as two earphones. In the interest of safety, anything we can do to minimize sound pressure exposure for all users (IEMs, wedges, or any other application) is the right thing to do. Avoiding the single-ear method for extended use is highly recommended. The better move is to work toward a proper two-ear mix. It is worth the effort.
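The “well beyond twice as loud” estimate follows from rough loudness arithmetic. Assuming the commonly cited rule of thumb that a +10 dB level change sounds roughly twice as loud, and taking the low end of the figures above (a 10 dB single-ear increase plus 3 dB lost to binaural summation):

```python
def loudness_ratio(delta_db):
    """Approximate perceived loudness ratio for a level change,
    using the rule of thumb that +10 dB sounds about twice as loud."""
    return 2.0 ** (delta_db / 10.0)

single_ear_increase = 10.0  # dB, low end of the observed 10-12 dB trend
summation_loss = 3.0        # dB, low end of the 3-6 dB binaural summation

total_db = single_ear_increase + summation_loss   # 13 dB total
ratio = loudness_ratio(total_db)                  # roughly 2.5x as loud
# At the high end (12 + 6 = 18 dB), the ratio exceeds 3x.
```

The 10 dB-per-doubling rule is an approximation (true loudness perception varies with frequency and level), but even this conservative sketch lands above a doubling of perceived loudness for the single-ear user.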

Ambience/Audience Response
Once the balance of sources is mixed well in an IEM mix, the hard part is done. And for some users, we’re finished. But others feel the need to overcome the isolation, and we need to find a way to help them. After all, performers on stage want to feel like they’re still in the venue with the worshippers, not in a tight iso-booth at the local recording studio. This means hearing the audience sounds and room ambience. Some of this “space” happens naturally through leakage, but sometimes we must deliberately provide it. Consider this:

A worship tech sets up a new wireless IEM for his worship leader and he knows that isolation is part of the game. So, he sets up a stereo pair of cardioid condenser mics in an X-Y configuration (Figure 1), front and center, facing the audience. This simple stereo technique provides a good image of the audience sounds and some room ambience. He pans the mics hard left and right (for the worship leader’s perspective) and blends them into the IEM.

[Figure 1: X-Y stereo pair of ambient mics facing the audience]

This can work very well when blended just right. When the WL faces forward, it’s simple…  If, say, a sound comes from an audience member on the worship leader’s left (house right) it will be heard and seen on the WL’s left. So, his eyes and ears agree, and the brain likes that a lot.

…That is, as long as he remains facing forward, and center stage. But suppose he moves and turns to face a stage-left guitar player during a musical moment, with his right ear now facing downstage. What has happened? That same audience sound is still heard just as easily as before, but now there is a localization error. What is heard on the worship leader’s left side is seen on his right. His eyes and ears disagree. The brain dislikes this. With stereo IEMs, this “stationary ambience” issue may be a problem for stage performers: his head orientation moved, but his artificial ears (the ambience mics) did not. Our eyes and ears like to perceive sources from coincident directions, and when they don’t agree, it’s a problem. In some cases, it’s just annoying. In other cases, it can be completely disorienting.

One approach would be to have a monitor operator updating the pan pots of the ambient mics on the fly, following the artist in real time by watching their movements and updating the directional cues. Yeah, right! Not a very reliable or repeatable solution.

Or maybe a GPS-enabled pan-tracker gizmo in the future. Yeah, right. I feel a plugin coming on. So in most worship environments, we live with stationary ambience. It’s manageable, and it’s still far better than no ambience at all. Also, because the “aesthetics police” are often present, that X-Y mic pair often gets removed from the center downstage location for sight lines. The mics typically wind up one on each end of the stage, crossed toward the back of the audience, or flown over the audience. That’s OK; it’s a compromise that can still provide a usable spacious image.

But what if we were to mount subminiature ambient mics (which essentially serve as artificial ears in this application) on either side of the head, or on the outside of the earphones themselves? Then, no matter where the user moves, the directional cues always work because the mics move with the user. Nifty. There are a few technologies emerging on the market that integrate some type of binaural miking with in-ear monitors. Another market trend is the inclusion of an ambient mic on a personal, on-stage monitor mixer, or even clipped onto a user’s lapel. These are great for communication (especially during rehearsals) and a little ambient sound, but do not provide accurate directional cues or a stereo sound field.

Potential Timing Issues With Ambient Mics
Sometimes sound engineers will place ambient mics further back in the audience area, attempting to minimize sound leakage from the stage and PA into these mics. While that may decrease the leakage, it creates latency: there is still some leakage, and it now takes a while for that sound to travel from the stage and PA to the mics. The further the mics are from the PA, the longer it takes. When mics located this way are combined into an in-ear monitor mix, the timing offset can be problematic for musicians attempting to play tightly together, as they hear slightly out-of-time musical leakage and sometimes degraded fidelity due to comb filtering. These mic placements may be more useful for recording or broadcast applications, where they can be carefully used to help convey venue size to the audience. But in such applications, no musician is relying on those mixes for critical performance monitoring. So keeping any ambient mics that may be mixed into IEMs close to the PA (from a time perspective) is a wise move. After all, we’re talking about LIVE sound, not LATE sound.
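The delay itself is easy to estimate from the speed of sound, roughly 343 m/s in air at room temperature. A quick sketch (the distances are illustrative, not from the article):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C

def mic_delay_ms(distance_m):
    """Time (ms) for stage/PA leakage to reach an ambient mic
    placed distance_m away from the source."""
    return distance_m / SPEED_OF_SOUND * 1000.0

# A mic 10 m out in the room hears stage leakage ~29 ms late;
# at 20 m it is ~58 ms late, enough to smear timing noticeably
# for a musician relying on that mic in an IEM mix.
delay_10m = mic_delay_ms(10.0)
delay_20m = mic_delay_ms(20.0)
```

As a rule of thumb, every meter of extra distance adds about 3 ms of delay, which is why keeping ambient mics time-close to the PA matters.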

Kent Margraves began with a B.S. in Music Business and soon migrated to the other end of the spectrum with a serious passion for audio engineering. Over the past 25 years he has spent time as a staff audio director at two mega churches, worked as worship applications specialist at Sennheiser and Digidesign, and toured the world as a concert front of house engineer. Margraves currently serves the worship technology market at WAVE (stage.ardentcreative.com/wave2016) and continues to mix heavily in several notable worship environments including his home church, Elevation Church, in Charlotte, NC. His mission is simply to lead ministries in achieving their best and most un-distracted worship experience through technical excellence. His specialties are mixing techniques, teaching, and RF system optimization.