In the world of audio production, mastering is the final and critical step that ensures a track sounds its best across various playback systems. However, music producers are often challenged with tracks that are less than perfect. Whether it’s due to earlier recording issues, environmental noise, or mixing inconsistencies, these problematic tracks demand a specialized approach in mastering - a process we can call 'Mastering Restoration and Repair'.
This process is not just about applying a finishing polish but requires a deep dive into the very anatomy of the track. It's about identifying and rectifying issues that might have been overlooked or introduced during the recording and mixing phases. From clipping and unwanted noise to imbalances in frequency and dynamics, a range of problems can mar an otherwise great piece of music.
As we delve into this critical aspect of music production, it's crucial to understand that the tools and techniques employed in restoration and repair are both an art and a science. They require not only a keen ear but also a mastery of sophisticated audio repair tools. In this journey, products from Mastering The Mix emerge as invaluable allies. With their precise capabilities and intuitive interfaces, these tools empower music producers to transform problematic tracks into polished, professional-sounding pieces.
In this comprehensive guide, we’ll explore various techniques and tools integral to mastering restoration and repair. We’ll discuss how to address common audio issues, delve into advanced restoration techniques, and utilize EQ and compression not just as creative tools but as surgical instruments for audio repair. Additionally, we’ll walk through a real-world case study, demonstrating these techniques in action.
Mastering restoration and repair is about giving a new lease of life to your tracks. It’s about ensuring that your music not only meets industry standards but also resonates with your artistic vision. So, let’s embark on this journey to transform your tracks from flawed to flawless.
In the realm of music production, the path to a perfect track is often littered with a variety of audio problems. Understanding these issues is the first step in mastering restoration and repair. Let's delve into some of the most common audio problems that can plague a track, turning a potentially great mix into a problematic one.
Clipping occurs when the audio signal exceeds the maximum level that a device or software can handle, resulting in distortion. This often happens during recording or mixing when levels are pushed too high. In the mastering phase, it’s crucial to identify and rectify clipping, as it can cause harshness and ruin the listening experience.
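As a rough illustration, clipping is easy to spot programmatically: in a float buffer normalized to ±1.0, clipped audio shows runs of samples stuck at full scale. The sketch below is a simplified detector with an assumed 0.999 threshold, not how any particular metering plugin works:

```python
import numpy as np

def count_clipped(samples, threshold=0.999):
    """Count samples at or above digital full scale (|x| close to 1.0)."""
    return int(np.sum(np.abs(samples) >= threshold))

# A sine pushed ~3 dB past full scale, then hard-clipped by the converter:
t = np.linspace(0, 1, 44100, endpoint=False)
hot = 1.41 * np.sin(2 * np.pi * 100 * t)
clipped = np.clip(hot, -1.0, 1.0)

print(count_clipped(clipped) > 0)                         # the hot take is flagged
print(count_clipped(0.5 * np.sin(2 * np.pi * 100 * t)))   # a safe take: 0
```

A run of identical full-scale values like this is exactly the flattened waveform shape a spectral editor makes visible.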
Noise can come in various forms – hums from electrical equipment, hiss from recording gear, or ambient sounds captured during recording. Removing or reducing these noises is essential for a clean and professional sound. Noise reduction techniques, however, must be carefully applied to avoid losing vital aspects of the track.
A tonal imbalance can leave a track sounding either too bass-heavy or too treble-focused, or it may lack clarity in the mid-range. This imbalance can make the track sound off on different playback systems. Mastering involves correcting these imbalances for a well-rounded and consistent sound across all devices.
Tracks with uneven dynamics can have parts that are too loud or too soft, affecting the overall impact and emotion of the music. Mastering restoration seeks to even out these dynamics without squashing the life out of the track.
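One rough, hedged way to quantify "squashed" dynamics is the crest factor: the gap in dB between a signal's peak and its RMS level. The signals below are synthetic and the numbers illustrative, not industry thresholds:

```python
import numpy as np

def crest_factor_db(x):
    """Peak-to-RMS ratio in dB; lower values suggest heavier limiting."""
    peak = np.max(np.abs(x))
    rms = np.sqrt(np.mean(x ** 2))
    return 20 * np.log10(peak / rms)

t = np.linspace(0, 1, 44100, endpoint=False)
sine = np.sin(2 * np.pi * 100 * t)        # pure tone: ~3 dB crest factor
squashed = np.clip(3 * sine, -1.0, 1.0)   # hard-limited: well under 1 dB

print(round(crest_factor_db(sine), 1))                     # → 3.0
print(crest_factor_db(squashed) < crest_factor_db(sine))   # → True
```

The hard-limited version carries almost no peak-to-average contrast, which is what "squashing the life out of the track" looks like numerically.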
Mastering The Mix offers tools that are particularly adept at addressing these issues. For instance, the LEVELS plugin can help identify clipping and dynamic range problems, while REFERENCE can be instrumental in pinpointing tonal imbalances by comparing your track with professionally mastered music.
Addressing these common audio problems is not just about fixing mistakes; it’s about enhancing and refining the track to meet its full potential. It's a delicate balance between art and technical precision, where each decision can have a profound impact on the final output.
In the following sections, we will explore how to tackle these issues using advanced techniques and specialized tools, ensuring that your music not only sounds good but resonates with the quality and clarity it deserves.
Mastering the art of audio restoration and repair is akin to being a surgeon in the operating room of music production. It's not just about identifying the issues but also applying precise and sophisticated techniques to rectify them without compromising the track's integrity. Let’s explore some of these advanced techniques that can turn a problematic track into a masterpiece.
One of the most powerful tools in audio restoration is spectral repair. This technique involves analyzing the audio in a spectral view, which provides a visual representation of the sound across frequency and time. Spectral repair allows you to pinpoint specific unwanted sounds, such as a click, a pop, or an isolated noise, and surgically remove or reduce them. This technique is invaluable for cleaning up recordings where isolated incidents of noise occur.
De-noising is a staple in the audio repair toolkit. Advanced de-noising tools allow you to reduce or eliminate background noise, such as hiss, hum, or ambient noise, without affecting the quality of the main audio. The key here is subtlety – removing just enough noise to clean up the track without stripping away its character or introducing artifacts.
Sometimes, issues with a track can be related to timing or pitch. Advanced tools can correct slight timing issues or pitch variations in a way that sounds natural and seamless. This is particularly useful in vocal tracks where maintaining the natural quality of the voice is paramount.
Beyond basic EQ, mastering restoration might require more intricate equalization techniques. Dynamic EQ, for example, can be used to address problems that occur only at certain moments in the track, applying EQ changes dynamically based on the input signal.
Mastering The Mix provides a suite of tools that are perfect for these tasks. For instance, the REFERENCE plugin can be invaluable for comparing your track to a high-quality reference, ensuring your spectral balance is on point. The MIXROOM and BASSROOM plugins offer intelligent suggestions for EQ adjustments, helping to tackle tonal imbalances and ensuring your lows and highs are perfectly tuned.
Advanced audio restoration is a blend of technical skill and artistic judgment. It requires an understanding of the tools at your disposal and a keen ear for what makes a track not just technically sound, but emotionally resonant. In the next sections, we will dive deeper into using EQ and compression for restoration, followed by a practical case study showcasing these techniques in action.
Mastering can be complex when it comes to adjusting individual instruments, as the adjustments you make often affect the entire mix. For instance, if you're enhancing the snare with a 5kHz EQ boost, this will simultaneously brighten the vocal track. If the vocal was well-balanced before, this change could make it excessively bright.
To navigate these intricate situations, focusing your EQ adjustments on the mid, side, left, or right channels is essential.
This approach requires pinpointing the stereo field location of the specific issue. Imagine, for example, that you aim to enrich a thin-sounding vocal sitting in the mid channel.
This method is also effective for modifying specific instruments. The extent of your adjustment depends on how dominant other instruments are within the same frequency range. For example, you could augment the mid-channel between 500Hz-3kHz to boost the vocals, provided the snare and synths in that range aren't too dominant. Typically, you can adjust the vocal level in a master by about ±3dB before it begins to sound unnatural. Use a reference track to ensure you don't overdo it.
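Under the hood, mid/side processing is simple arithmetic. This sketch shows the standard M = (L+R)/2, S = (L-R)/2 routing with a broadband +3 dB mid gain; a real mastering move would confine the boost to roughly the 500Hz-3kHz vocal band, which this illustration deliberately omits:

```python
import numpy as np

def ms_encode(left, right):
    """Split a stereo pair into mid (center) and side (width) channels."""
    return 0.5 * (left + right), 0.5 * (left - right)

def ms_decode(mid, side):
    """Fold mid/side back to left/right."""
    return mid + side, mid - side

left = np.array([0.2, 0.4, -0.1])
right = np.array([0.1, 0.3, -0.3])

mid, side = ms_encode(left, right)
gain = 10 ** (3 / 20)   # +3 dB, around the safe limit noted above
new_left, new_right = ms_decode(mid * gain, side)
```

Because the encode/decode pair is lossless, only the centered content is lifted; anything panned hard left or right (which lives mostly in the side channel) is left alone.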
Hi-hats often present unique challenges in mastering, but they can also be more straightforward to adjust: their energy is concentrated in a narrow, high-frequency band that few other elements occupy.
With these characteristics, a multiband compressor can effectively isolate and address any issues with hi-hats.
These techniques demonstrate the finesse required in mastering, where precise adjustments can significantly enhance the overall sound quality.
Mastering restoration and repair often involves tackling tracks that come with a unique set of challenges. In this case study, we’ll walk through a real-world example of transforming a problematic track into a polished piece, highlighting the application of mastering techniques and the use of Mastering The Mix tools.
Let's imagine a track with the following issues: clipping on the loudest peaks, an uneven dynamic range, and tonal imbalances across the spectrum.
Initial Assessment: The first step is to analyze the track using Mastering The Mix’s LEVELS plugin to identify technical issues like clipping and dynamic range problems.
Spectral Repair: Using spectral editing tools, specific instances of clipping can be visually identified and surgically reduced, preserving the integrity of the audio.
Dynamic Range Correction: Compression can be applied, but very subtly. A low ratio with a high threshold can be used to gently even out the dynamic range, making sure the track’s energy remains intact.
EQ Adjustments: The EQ process can begin with broad adjustments using BASSROOM to balance the low end. Then, MIXROOM can be employed for more precise adjustments in the mids and highs, ensuring clarity and presence.
Final Touches: After the primary issues are addressed, you can use LIMITER to ensure optimal loudness without introducing any new clipping.
Reference and Finalization: Finally, compare the track to a high-quality reference track using REFERENCE, ensuring the restored track meets industry standards.
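The "low ratio with a high threshold" advice from the dynamic-range step can be pictured as a static gain curve. The threshold and ratio values below are assumptions chosen purely for illustration, not a prescription:

```python
def gentle_gain_reduction_db(level_db, threshold_db=-6.0, ratio=1.5):
    """Static compressor curve: dB of gain reduction for a given input level."""
    if level_db <= threshold_db:
        return 0.0                    # below threshold: signal is untouched
    over = level_db - threshold_db
    return over / ratio - over        # negative number = reduction applied

print(gentle_gain_reduction_db(-3.0))    # → -1.0 (3 dB over at 1.5:1)
print(gentle_gain_reduction_db(-12.0))   # → 0.0 (well below threshold)
```

With settings this gentle, only the hottest peaks are trimmed by a decibel or so, which is why the track's energy remains intact.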
This case study demonstrates the transformative power of mastering restoration and repair. By meticulously applying the right tools and techniques, a track fraught with issues can be revived, showcasing the expertise and precision that goes into professional mastering.
Mastering restoration and repair is a nuanced process that requires a fine balance between technical proficiency and creative intuition. To ensure your tracks not only sound professional but also retain their artistic essence, keep a few best practices in mind: make subtle moves, check your work against reference tracks, preserve the track's dynamics, and test your masters on multiple playback systems.
Mastering restoration and repair is as much about preserving the soul of the music as it is about technical fixes. By adhering to these best practices, you ensure that your tracks are not just technically sound but also emotionally resonant and true to your artistic vision.
Mastering restoration and repair is an intricate dance between the technical and the creative, a process where precision meets artistry. As we conclude this guide, it’s important to remember that the journey of mastering is as much about preserving the essence of the music as it is about enhancing its quality.
Mastering, especially in the context of restoration and repair, is not just about applying filters and effects. It’s about understanding the soul of the music, its intended impact, and how best to bring out its innate beauty. Each track comes with its unique set of challenges, and as a mastering engineer, your role is to navigate these challenges with a blend of skill, patience, and creativity.
Tools like those offered by Mastering The Mix are not just utilities but are extensions of your creative intent. They empower you to make informed decisions, bring precision to your work, and achieve results that resonate with listeners across the spectrum.
The reward of mastering restoration and repair is more than just a well-balanced track. It’s the satisfaction of breathing new life into music, of transforming a problematic recording into a piece that artists are proud to share, and audiences are moved to listen to. It’s about leaving your mark on the music in the most subtle yet profound way.
As you continue on your mastering journey, remember that each project is an opportunity to hone your skills, to experiment, to listen, and to learn. The path to mastering excellence is a continuous one, filled with discovery, challenges, and immense gratification.
For today's music producers, the digital age has brought a plethora of mastering plugins, each offering unique features and capabilities.
However, with so many options, the challenge often lies in choosing the right ones and using them effectively.
In this comprehensive guide, we'll delve deep into the world of mastering plugins. We'll focus on how to build an effective mastering chain, highlighting when and how to use specific plugins to achieve the desired sonic quality.
Mastering a track is akin to sculpting a piece of art. Just as a sculptor needs a clean block of marble to start, a music producer requires the right amount of headroom in a track for effective mastering.
Headroom in audio mastering refers to the space between the highest peak in an audio signal and the maximum level of the digital system, often denoted as 0 dBFS (decibels relative to full scale). This space is crucial: it leaves room for the gain added by EQ, compression, and limiting later in the mastering chain.
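In code, headroom is just the distance between the loudest sample and full scale. A minimal sketch, assuming float audio normalized to ±1.0:

```python
import numpy as np

def headroom_db(samples):
    """Gap in dB between the sample peak and 0 dBFS (full scale)."""
    peak_dbfs = 20 * np.log10(np.max(np.abs(samples)))
    return -peak_dbfs                 # positive result = room to spare

t = np.linspace(0, 1, 44100, endpoint=False)
mix = 0.5 * np.sin(2 * np.pi * 220 * t)   # peaks at roughly -6 dBFS

print(round(headroom_db(mix), 1))         # → 6.0
```

A mix peaking around -6 dBFS, as here, leaves comfortable room for the processing that follows.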
Adjusting headroom starts with analyzing the peaks of your track. The goal is to ensure the peaks do not hit 0 dBFS, leaving enough room for subsequent processing.
LEVELS by Mastering The Mix is a powerful tool for this task. It provides a clear visual representation of your track’s headroom, along with other essential mastering metrics. With its intuitive interface, you can easily identify parts of the track that need adjustment and ensure your headroom is set correctly before proceeding with further mastering steps.
Setting the right headroom is the foundation of a successful mastering process. It’s the first critical step in transforming your track from a rough mix to a polished masterpiece. With careful attention to headroom and the aid of tools like LEVELS, you're setting the stage for a master that stands out in clarity, dynamics, and overall sonic quality.
After establishing the right headroom, the next vital step in the mastering chain is to address and eliminate resonances. Resonances are prolonged or exaggerated frequencies that can muddy your mix, making it sound unbalanced or harsh. This is where RESO by Mastering The Mix becomes an indispensable tool in your mastering arsenal.
RESO is designed to pinpoint and deal with problematic resonances efficiently.
While RESO is a powerful tool for tackling resonances, it's worth noting alternatives like Soothe 2. Soothe 2 is another dynamic resonance suppressor, known for its frequency smoothing. Here’s a quick comparison:
Both RESO and Soothe are resonance suppression plugins, but they do it differently. RESO focuses on the individual resonances rather than broadly applying reduction. This means you ONLY get rid of the issues and leave the rest of the sound unprocessed, which gives a very transparent result. This makes RESO completely different from anything else currently available. RESO also has a number of unique features, such as Q optimization, automatic resonance detection, super-high quality filters, and frequency sweeping. If you have a few specific unpleasant resonances in your audio (and there often are), RESO is the tool you want.
Soothe 2 identifies problematic resonances on the fly and applies broad matching reduction automatically. This results in a smoother, more balanced sound and saves you from having to notch out frequencies by hand.
Mastering with a focus on removing resonances is like polishing a gemstone - it removes the rough edges, revealing the true shine of your music. With tools like RESO, you can ensure that your track sounds its best, free from distracting and unwanted frequencies.
Achieving a well-balanced tonal spectrum is critical in mastering. It ensures that all frequency ranges are properly represented, making your track sound harmonious and cohesive. Mastering The Mix’s MIXROOM and BASSROOM plugins, along with FabFilter's Pro Q 3 and other EQs, are excellent tools for shaping the tonal balance of your track.
Tonal balance refers to how various frequencies are distributed in your track. A balanced tonal spectrum ensures that no particular frequency range dominates or is underrepresented, leading to a more professional and pleasing sound.
MIXROOM specializes in shaping the mid and high frequencies, providing a clear and intuitive interface for making precise EQ adjustments.
BASSROOM is designed for perfecting the low end of your mix.
Pro Q 3 is renowned for its surgical precision in EQ adjustments. It's particularly useful for tackling specific frequency issues or for adding finishing touches.
Achieving the perfect tonal balance is an art. It requires a combination of visual tools like those offered by MIXROOM and BASSROOM, along with your auditory judgment. FabFilter's Pro Q 3 complements this process by providing additional precision and the dynamic option where needed.
Mastering tonal balance is about creating a track that sounds great on any system, from club speakers to earbuds. By using these plugins judiciously, you ensure your track not only meets professional standards but resonates well with your audience.
Stereo imaging in mastering is a subtle yet powerful process. It involves adjusting the stereo width of your track to ensure a spacious yet coherent sound. Mastering The Mix’s REFERENCE and iZotope’s Ozone Imager are excellent tools for this task, each serving a unique purpose in the stereo imaging process.
REFERENCE by Mastering The Mix is a powerful tool for comparing your track’s stereo image with that of professionally mastered tracks.
iZotope Ozone Imager provides the controls to make precise stereo width adjustments.
Stereo imaging is not about making the widest possible mix but about finding the right balance that complements the track. A/B testing with REFERENCE ensures that your adjustments are in line with professional standards. Meanwhile, Ozone Imager provides the necessary tools to fine-tune these adjustments for an optimal stereo image.
Mastering stereo imaging is a crucial step towards a sonically rich and professional-sounding track. With the right tools and techniques, you can create a master that not only sounds great on all systems but also provides an engaging listening experience.
The final stage of the mastering chain is often the application of a limiter. This crucial step is about controlling the loudness and ensuring that your track meets industry standards while retaining its dynamic quality. LIMITER by Mastering The Mix is an exceptional tool for this purpose, providing both precision and ease of use.
LIMITER’s user-friendly interface and advanced algorithm make it a top choice for the final mastering stage.
While LIMITER is an excellent tool, it’s worth mentioning alternatives like FabFilter’s Pro L2 and Sonible’s Smart Limit, which offer different features and controls.
Each limiter has its unique strengths, and the choice largely depends on the specific requirements of your track and your personal preference as a mastering engineer.
In mastering, the limiter is not just a tool; it’s the final gatekeeper ensuring that your track sounds its best across all listening environments. With LIMITER by Mastering The Mix, you get the control you need to finalize your track confidently, ready for the world to hear.
Mastering is not just about using the right tools; it’s about applying them effectively to enhance the musicality of a track. In this section, we will explore practical applications and examples of how to use mastering plugins, providing insights into the thought process and techniques of professional mastering.
As we conclude our journey through the meticulous process of using mastering plugins effectively, it's clear that mastering is not just a final step in music production, but a form of artistry in itself. It requires a balance of technical skill, critical listening, and creative decision-making to bring out the best in a track.
Throughout this guide, we've explored various aspects of mastering, from setting the right headroom and removing resonances to achieving a balanced tonal spectrum and precise stereo imaging. Each plugin, whether it's LEVELS, RESO, MIXROOM, BASSROOM, REFERENCE, iZotope Ozone Imager, or LIMITER, has played a pivotal role in shaping the final sound.
For music producers, understanding the nuances of mastering for different mediums is crucial. It's not just about making a track louder or adding a shiny veneer; it's about ensuring that your music communicates its intended emotion and energy, regardless of where it’s played. In the age where music consumption is more varied than ever, producers face the challenge of creating masters that stand out and maintain their integrity across streaming services, clubs, CDs, and vinyl records.
This comprehensive guide aims to equip you with actionable techniques and insights for mastering across different mediums. We'll delve into the specific considerations and limitations each medium presents, from the digital realm of streaming and CDs to the physical constraints of vinyl.
We'll take a look at how to use LEVELS and EXPOSE 2 to measure technical details like loudness (LUFS), true peak, and dynamic range, ensuring your tracks meet the requisite standards and qualities of each format.
Join us on this sonic journey as we uncover the secrets of mastering for diverse listening experiences. Prepare to elevate your mastering skills and ensure your tracks not only sound professional but also captivate listeners, no matter the medium.
Loudness Normalization: Most streaming services use loudness normalization, which adjusts the playback volume of tracks for a consistent listening experience. This is measured in LUFS (Loudness Units relative to Full Scale).
Target Loudness Levels: Platforms like Spotify, YouTube, and Tidal typically target a loudness level around -14 LUFS, while Apple Music normalizes closer to -16 LUFS. Mastering your track much louder than these targets can lead to unwanted volume reduction or dynamic changes when streamed.
Optimizing Loudness: Aim for a loudness level that aligns with your target platform’s standards. Use a LUFS meter, like the one in LEVELS, to gauge your track's loudness.
Managing Dynamics: Preserve the dynamic range. Over-compression can make a track sound lifeless, especially after loudness normalization.
True Peak Control: Ensure your track’s true peak level doesn't exceed the threshold set by streaming services (usually -1 dBTP). This prevents digital clipping and distortion.
EQ Adjustments: Apply subtle EQ adjustments to enhance clarity and balance. Avoid extreme boosts or cuts that might lead to issues post-normalization.
LEVELS: Use LEVELS to monitor loudness (aim for around -14 LUFS integrated), true peak (keep it below -1 dBTP), stereo field, and dynamic range. Its visual feedback is crucial for making informed decisions.
EXPOSE 2: After bouncing your audio, use EXPOSE 2 to analyze your final track. It highlights any technical issues like loudness inconsistencies or true peak problems, ensuring your track meets the requirements of streaming platforms. Select 'Spotify' as your preset and EXPOSE will warn you if you have any technical issues. Click the 'F' (feedback) button for actionable insights on how you can fix the problems.
Reference Tracks: Compare your track with well-mastered songs on streaming platforms. This provides a benchmark for loudness and tonal balance.
Avoid the Loudness War: Don't sacrifice dynamics for loudness. A well-balanced and dynamic master will sound better on streaming services.
Test on Multiple Devices: Listen to your mastered track on different devices to ensure it translates well across various listening environments.
By mastering these techniques and utilizing tools like LEVELS and EXPOSE 2, you can ensure that your music not only meets the technical standards of streaming services but also retains its intended emotional impact and sonic quality.
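The distinction between sample peaks and true peaks is worth seeing concretely: inter-sample peaks hide between samples, and only oversampling reveals them. This is a simplified FFT-based sketch; real -1 dBTP meters follow the ITU-R BS.1770 measurement chain, which this does not implement:

```python
import numpy as np

def oversample(x, factor=4):
    """Naive band-limited upsampling by zero-padding the spectrum."""
    n = len(x)
    spectrum = np.fft.rfft(x)
    padded = np.zeros(n * factor // 2 + 1, dtype=complex)
    padded[:len(spectrum)] = spectrum
    return np.fft.irfft(padded, n * factor) * factor

fs = 44100
t = np.arange(fs) / fs
# A tone at fs/4 whose samples all miss the waveform's crest:
x = 0.99 * np.cos(2 * np.pi * (fs / 4) * t + np.pi / 4)

sample_peak = np.max(np.abs(x))             # ~0.70, looks "safe"
true_peak = np.max(np.abs(oversample(x)))   # ~0.99, exceeds -1 dBTP
print(sample_peak < 0.71, true_peak > 0.98)   # → True True
```

A sample-peak meter would read this signal near -3 dBFS, yet the reconstructed waveform swings to nearly full scale, which is exactly the clipping risk the -1 dBTP ceiling guards against.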
Mastering for club play is a unique challenge. The goal is to ensure that the track not only sounds great but also resonates with energy and impact in a club environment, characterized by large sound systems and an atmosphere that demands a physical response from the audience. This section explores the key considerations and techniques for mastering music that thrills club-goers.
Bass Emphasis: Club systems are designed to deliver powerful low-end frequencies, making bass management crucial in mastering.
High Volume: Tracks are played at high volumes, so mastering must ensure clarity and prevent distortion at loud levels.
Spatial Dynamics: The sense of space and stereo imaging must be carefully managed to maintain energy and engagement on the dance floor.
Bass Management: Ensure the bass is powerful yet tight. Avoid muddiness that can overwhelm the mix in a club setting.
Mid-High Frequency Clarity: Clarity in mid and high frequencies is essential, as they cut through the ambient noise of the club.
Loudness and Dynamic Range: Aim for a louder master to compete with other club tracks but maintain enough dynamic range to keep the energy and punch.
Stereo Imaging: Optimize stereo width for a spacious feel, but avoid extreme stereo effects that might cause phase issues on club systems.
LEVELS Plugin: Use LEVELS for precise bass management and to monitor loudness and stereo imaging, ensuring your track performs well on club systems.
Pro Tip: Use BASS SPACE in LEVELS to ensure that only your kick and bass channels are outputting low-end energy. This will help keep your low-end sounding clear, defined and powerful.
EXPOSE 2: After mastering, analyze the track with EXPOSE 2 to check for issues like clipping, over-compression, or stereo phase problems.
Reference Club Tracks: Compare your mastering with tracks known to perform well in clubs. This helps in matching the energy and sonic qualities.
Avoid Over-Compression: While loudness is important, avoid squashing the life out of your track. The dynamics are what make a club track engaging.
Mono Compatibility: Ensure the track sounds good in mono, as some club systems might sum your stereo mix to mono.
Mastering for club play requires a balance between power and finesse. By focusing on these techniques and leveraging tools like REFERENCE, LEVELS and EXPOSE 2, you can craft a master that not only sounds good in the studio but also electrifies the club environment.
Mastering for CDs requires a distinct approach, focused on delivering a high-quality audio experience that complements the medium's physicality and longevity. Unlike the transient nature of digital formats, a CD encapsulates your music in a tangible form, making the mastering process even more crucial for ensuring lasting audio fidelity.
Red Book Standard: CDs follow the Red Book standard, which dictates specific technical requirements such as a 44.1 kHz sampling rate and 16-bit depth.
Headroom and Dynamic Range: Adequate headroom and a wider dynamic range are preferred for CDs to maintain audio integrity and prevent clipping.
Bit Depth and Sampling Rate: Ensure your final master is in the 16-bit/44.1 kHz format. Higher resolution files should be properly downsampled and dithered.
Loudness and Headroom: While CDs can handle higher loudness levels, avoid excessive loudness that compromises dynamic range and introduces distortion.
Stereo Image and EQ: A balanced stereo image and well-managed EQ are vital for a pleasing listening experience on various playback systems.
Consistency Across Tracks: For albums or EPs, ensure consistency in tonal balance and loudness across all tracks.
EXPOSE 2 can help you ensure all your bounced tracks have a comparable tonal balance and loudness. This will ensure you give your listener a great experience when they consume your album as a complete body of work. (This is pretty crucial for all mastering mediums).
Dithering: Apply dithering when reducing bit depth to prevent quantization errors and preserve audio quality.
Track Spacing and Order: Consider the spacing and order of tracks on the CD. Transitions should be smooth and enhance the listening experience.
Test on Various Systems: Listen to your mastered tracks on different CD players and systems to ensure consistent quality.
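The dithering step above can be sketched in a few lines. This is a hedged illustration of plain TPDF dither at the 16-bit boundary; real mastering dithers often add noise shaping, which is omitted here:

```python
import numpy as np

rng = np.random.default_rng(seed=7)

def to_int16_tpdf(x):
    """Quantize float audio to 16-bit with triangular (TPDF) dither."""
    lsb = 1.0 / 32767.0
    # Sum of two uniforms gives triangular noise spanning +/-1 LSB:
    dither = (rng.uniform(-0.5, 0.5, x.shape) +
              rng.uniform(-0.5, 0.5, x.shape)) * lsb
    q = np.round((x + dither) * 32767.0)
    return np.clip(q, -32768, 32767).astype(np.int16)

t = np.linspace(0, 1, 44100, endpoint=False)
quiet = 1e-4 * np.sin(2 * np.pi * 440 * t)   # a very quiet fade tail
out = to_int16_tpdf(quiet)
print(out.dtype)   # → int16
```

On a signal this quiet, undithered truncation would turn the sine into a harsh stepped square; the added noise randomizes the rounding so the quantization error stays uncorrelated with the music.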
Mastering for CD is both a technical and artistic process that requires a nuanced approach to preserve the essence of your music. The medium's enduring nature demands a master that withstands the test of time and technology.
Mastering for vinyl is a craft that combines technical precision with an appreciation for the medium's unique qualities. Vinyl records have made a significant resurgence, prized for their warm sound and physical presence. Mastering for this format requires a different approach, considering its analog nature and physical limitations.
Physical Limitations: Vinyl's physical grooves impose constraints on loudness, bass frequencies, and stereo imaging.
Bass Management: Excessive low frequencies can cause the needle to skip. Therefore, managing and often mono-izing low frequencies is crucial.
High Frequency Sibilance: Excessive high frequencies can create distortion. Careful de-essing and treble management are essential.
Bass Considerations: Keep bass frequencies centered and under control. Stereo bass can cause phase issues and groove distortion.
Dynamics and Loudness: Vinyl cannot handle the same loudness levels as digital formats. Preserve dynamics and avoid over-compression; aiming for a maximum of around -9 LUFS short-term is ideal.
High-Frequency Management: Tame harsh sibilance and high frequencies to prevent distortion and ensure a smooth playback experience.
Track Spacing and Length: Longer tracks mean tighter grooves and potentially lower sound quality. Plan your tracklist accordingly.
Test Pressings: Always get a test pressing of your vinyl record. This is the best way to ensure that the mastering translates well to the format.
Mono Compatibility: Ensure your mix is mono-compatible, particularly in the low end, to avoid playback issues.
Respect the Medium: Remember, vinyl has a character of its own. Embrace its warmth and analog nuances in your mastering process.
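The "centered bass" advice above amounts to high-pass filtering the side channel. This sketch uses a crude one-pole filter and an assumed 150 Hz crossover purely for illustration; cutting engineers use dedicated elliptical EQs with much steeper slopes:

```python
import numpy as np

def onepole_lowpass(x, fc, fs):
    """Simplest possible low-pass: exponential smoothing at cutoff fc."""
    a = np.exp(-2 * np.pi * fc / fs)
    out = np.empty_like(x)
    acc = 0.0
    for i, s in enumerate(x):
        acc = a * acc + (1 - a) * s
        out[i] = acc
    return out

def mono_below(left, right, fc=150.0, fs=44100.0):
    """Collapse stereo content below roughly fc to mono via the side channel."""
    mid = 0.5 * (left + right)
    side = 0.5 * (left - right)
    side = side - onepole_lowpass(side, fc, fs)   # strip lows from side
    return mid + side, mid - side

t = np.linspace(0, 1, 44100, endpoint=False)
sub = np.sin(2 * np.pi * 40 * t)
l_out, r_out = mono_below(sub, -sub)   # worst case: fully out-of-phase sub
```

Out-of-phase 40 Hz content, which could throw a cutting stylus, is strongly attenuated, while in-phase material passes through untouched.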
Mastering for vinyl requires a delicate balance between the artistic and technical aspects of audio production. It's about respecting the medium's limitations while bringing out the best in your music, ensuring a listening experience that vinyl enthusiasts cherish.
As we conclude this comprehensive journey through mastering music for different mediums, it's crucial to reflect on the key insights and techniques that tie these diverse processes together. Each medium - streaming, club, CD, and vinyl - presents its unique set of challenges and opportunities, demanding specific mastering approaches. However, the core principles of balance, clarity, and fidelity remain constant, guiding the mastering process regardless of the final format.
Streaming: Focus on achieving the right loudness without sacrificing dynamic range, adhering to the loudness norms of streaming platforms.
Club Play: Master for high volume and energy, ensuring bass management and clarity to make the tracks stand out in a live setting.
CD: Maintain high fidelity with careful attention to dynamic range, ensuring consistency across tracks for a seamless listening experience.
Vinyl: Respect the physical limitations of the medium, focusing on mono compatibility in low frequencies and avoiding excessive sibilance and high-frequency content.
The ability to adapt your mastering approach to suit the medium is a testament to your skills as a music producer and mastering engineer. Stay updated with the evolving standards and technologies in audio mastering to keep your skills sharp and relevant. Mastering tools like LEVELS and EXPOSE 2 are invaluable in achieving the desired results, providing visual feedback and technical insights.
Mastering is not just a final step in music production but a crucial phase that shapes the listener's experience. Whether it's a digital stream, a pulsating track in a club, a physical CD, or the warmth of vinyl, your mastering choices directly impact how the audience connects with your music. Embrace these challenges, and let your creativity and technical expertise shine through in every mastered track.
In this deep dive, we'll explore how mastering approaches vary widely between these genres. We'll focus on the fine art of tailoring EQ, managing dynamics, and strategizing loudness to suit the unique characteristics of each musical style. Whether it's the punchy bass of electronic music or the dynamic range of a classical piece, mastering these elements is key to ensuring your track not only sounds great but also resonates with its intended audience.
One indispensable tool in this process is REFERENCE by Mastering The Mix. This plugin stands as a beacon for quality comparison, allowing you to measure your mixes against industry standards and fine-tune them to perfection. We'll dedicate a special section to show you how to make the most out of REFERENCE, ensuring your tracks stand shoulder to shoulder with the best in the business.
Join us on this sonic journey as we unravel the secrets behind mastering different genres, providing you with actionable instructions and insights to elevate your mastering game. Get ready to engage with a post that's not just informative, but also a pleasure to read from start to finish.
In the intricate world of music production, mastering varies significantly across genres. Each genre - Pop, Rock, Electronic, Classical, Jazz, and Hip Hop - has its distinct sound and audience expectations. Let's explore how mastering differs across these genres, focusing on their unique requirements and characteristics.
Pop music is characterized by its broad appeal and polished sound. Mastering in pop focuses on enhancing clarity and brightness, ensuring vocals and key melodies shine through. The high frequencies are sculpted to add sparkle, and the mid-range is carefully managed for vocal clarity. The low end is tightened for a clean, radio-ready sound.
Rock music demands a mastering approach that preserves its raw energy and dynamics. The focus is on the low-mids, which carry the weight of the guitars and the punch of the drums. Balancing these elements is crucial to maintain the driving force and edginess typical of rock music.
Electronic music is all about the beat and bass. Mastering in this genre aims to enhance danceability and energy. The sub-bass requires precise control for power and balance, while the mid and high frequencies need detailing to bring out the textures. Loudness is a critical factor for making an impact in club settings.
Classical music mastering emphasizes the preservation of natural dynamics and tonal balance. The process strives for transparency, avoiding any coloration. A wide dynamic range is maintained to capture the expressiveness and emotional depth of classical compositions, with special attention to spatial qualities for an immersive experience.
Jazz mastering is a fine art of maintaining the genre's inherent warmth and organic feel. The mid-range is key for highlighting the nuances of brass and woodwind instruments, while controlling the low end to provide a solid yet unobtrusive foundation. The high end is usually more subdued than in other genres.
Hip Hop mastering revolves around the rhythm and groove, with a focus on low-end punch and vocal clarity. The bass and kick drum are critical elements, requiring precise EQ and compression to ensure they hit hard without muddying the mix. Vocals must cut through clearly, often requiring detailed attention to mid-range frequencies. The overall loudness is also crucial in Hip Hop to ensure the track stands out in a playlist or club environment.
Mastering is a genre-specific craft, where understanding the distinct sonic characteristics and audience expectations is key. Whether it's the polished sound of Pop, the raw energy of Rock, the beat-driven nature of Electronic, the dynamic expressiveness of Classical, the nuanced warmth of Jazz, or the rhythm-centric focus of Hip Hop, mastering plays a pivotal role in bringing out the best in each genre.
Equalization (EQ) is a critical tool in mastering, serving to balance frequencies and enhance the character of a track. However, the approach to EQ varies significantly across genres. Here, we delve into how EQ strategies differ for Pop, Rock, Electronic, Classical, Jazz, and Hip Hop.
In pop music, EQ focuses on clarity and vocal presence. High frequencies are often boosted slightly to add 'air' and sparkle, making the track sound more 'alive'. Attention is paid to the mid-range, particularly around the vocal frequencies, to ensure that the vocals cut through the mix clearly without being harsh.
Rock music EQ is centered around the midrange frequencies to bring out the energy of electric guitars and the punch of the drums. Cutting some of the low-mids can help reduce muddiness, while a slight boost in the upper mids can enhance the presence of guitars and vocals.
Electronic music often requires a strong low-end and crisp high frequencies. Sub-bass frequencies are carefully managed to provide power without overwhelming the mix. High frequencies might be boosted for that characteristic brightness, giving life to synths and percussion.
In classical music, EQ is used sparingly and aims for transparency. The goal is to maintain the natural tonal balance of the instruments. Gentle adjustments might be made to correct any room or recording anomalies without affecting the inherent sound of the orchestration.
Jazz mastering involves balancing warmth with instrumental clarity. A slight boost in the lower-mid frequencies can add warmth, particularly to acoustic bass and piano. Care is taken to ensure the higher frequencies of brass and woodwind instruments are clear without becoming piercing.
Hip Hop music mastering focuses on the low-end punch and vocal clarity. The bass and kick drum are often given a boost in the lower frequencies for impact, while ensuring they don't overpower the mix. The vocal frequencies are carefully sculpted to ensure they sit prominently in the mix without clashing with other elements.
Each genre demands a unique approach to EQ, reflecting its distinct sonic characteristics. Mastering these subtleties in EQ settings is essential for a master that truly captures and elevates the essence of the music.
Our plugin BASSROOM is an EQ designed to fix the low-frequency balance of your overall track. BASSROOM analyses your production and shows you the EQ settings needed to get a powerful and well-balanced sound.
Load up BASSROOM on your master channel before your limiter. Select a preset or import your own reference track. Match the bands to the EQ target suggestions, then tweak to suit your preference.
The preset and reference analysis targets are 100% unique to your music. BASSROOM analyses your audio to determine the EQ changes needed to match the tonal balance goal. It uses a revolutionary algorithm based on how the human ear perceives sonic energy. The unique filters in BASSROOM are optimized to shape low frequencies during mastering. You won’t find another plugin that can adjust bass more transparently, retaining the punch and clarity of your music.
When adjusting the EQ on the master channel, you would generally want to go for broad Q bandwidths and EQ adjustments no greater than around 4 dB. If the adjustments are greater than this then you may get more transparent results by going back to fix things in the original mix before mastering. Rules can always be broken and it’s the results that count. If a master needs to have a 6 dB high-end boost to sound great then go for it.
Our plugin MIXROOM is perfect for dialling in a great-sounding master across the mid and high frequencies. It works in a similar way to BASSROOM, where you can select a preset or load in your own reference track to generate tonal balance targets. You can also adjust the mid-side settings for each band, giving you total control over how you shape the frequencies across the stereo field.
When making EQ adjustments, be sure to adjust the output gain of the EQ to try and match the output loudness to the input loudness. This will help you objectively decide whether your EQ decisions are improving or deteriorating your sound when you bypass the plugin. Both MIXROOM and BASSROOM have a level match pointer on the output gain to help you do this instantly.
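This level-matching advice can be sketched numerically. The snippet below is a minimal stand-in that uses plain RMS rather than the perceptual loudness metering MIXROOM and BASSROOM actually use, and the function names are hypothetical:

```python
import math

def rms_dbfs(samples):
    """RMS level of a block of float samples (full scale = 1.0), in dBFS."""
    if not samples:
        return float("-inf")
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms) if rms > 0 else float("-inf")

def level_match_gain_db(pre_eq, post_eq):
    """Output-gain offset (dB) that brings the processed signal back to
    the input loudness, so bypass comparisons aren't biased by level."""
    return rms_dbfs(pre_eq) - rms_dbfs(post_eq)

# A uniform 3 dB boost should call for roughly -3 dB of output gain.
dry = [0.5, -0.5, 0.25, -0.25]
wet = [s * 10 ** (3 / 20) for s in dry]
print(round(level_match_gain_db(dry, wet), 2))  # → -3.0
```

The point of the calculation is simply that louder almost always sounds "better" to our ears, so any honest A/B comparison has to cancel the level difference first.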
Dynamics and loudness are crucial elements in mastering, but their treatment can vary greatly across different genres. In this section, we explore how mastering professionals approach dynamics and loudness in Pop, Rock, Electronic, Classical, Jazz, and Hip Hop music.
Pop music often aims for a polished and consistently loud sound without sacrificing too much dynamic range. Compression is used to control dynamics, making the track sound full and vibrant at various listening levels. The goal is achieving a balanced loudness that can compete in the commercial market, ensuring the track stands out on radios and streaming platforms. Most pop tracks on Spotify have a maximum short-term loudness of around -6 LUFS.
The essence of rock music lies in its raw power and dynamic shifts. Mastering should preserve these dynamics, ensuring that the energy of the drums and guitars is not lost. Compression is applied more cautiously, maintaining the punch and allowing for natural crescendos and decrescendos that are characteristic of the genre. A loud target for rock is -7 LUFS short-term max.
In electronic music, mastering often pushes towards higher loudness levels due to the genre's prevalence in clubs and dance settings. Heavy compression and limiting are common to enhance the beat's impact. However, it's crucial to maintain enough dynamic range to keep the track from sounding flat and lifeless. Most Electronic tracks on Spotify have a maximum short-term loudness of around -6 LUFS.
Classical music requires a much more delicate approach. The dynamic range is wide, capturing the softest and loudest parts accurately. Compression is used sparingly, if at all, to preserve the natural dynamics that are fundamental to classical music's emotional impact. A good target for Classical music would be to not exceed -9 LUFS short-term.
Jazz music thrives on its dynamic subtleties and nuanced performances. Mastering should respect these qualities, using minimal compression to retain the music's natural feel and expression. Loudness is not typically pushed to the extremes, focusing instead on preserving the dynamics that bring out the best in jazz's improvisational nature. A good target for Jazz would be to not exceed -9 LUFS short-term.
Hip Hop mastering is often a balance between achieving a punchy, loud sound and preserving the groove's dynamics. Emphasis is on the low end, ensuring the bass and kick hit hard. Compression is used to bring vocals to the forefront and to maintain a competitive loudness level without squashing the track's dynamic feel. Most Hip Hop tracks on Spotify have a maximum short-term loudness of around -6 LUFS.
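The short-term loudness ceilings quoted in this section can be gathered into a quick sanity check. This is just a lookup of the figures above, not an official preset from any tool:

```python
# Short-term-max loudness ceilings per genre, taken from the figures above.
SHORT_TERM_MAX_LUFS = {
    "pop": -6.0, "rock": -7.0, "electronic": -6.0,
    "classical": -9.0, "jazz": -9.0, "hip hop": -6.0,
}

def exceeds_genre_target(genre, measured_short_term_max_lufs):
    """True if the loudest 3-second window is hotter than the genre's ceiling."""
    return measured_short_term_max_lufs > SHORT_TERM_MAX_LUFS[genre]

print(exceeds_genre_target("classical", -7.5))   # louder than -9 → True
print(exceeds_genre_target("electronic", -6.5))  # quieter than -6 → False
```

Treat these numbers as starting points rather than rules; as noted throughout, it's the results that count.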
Each genre presents unique challenges in mastering dynamics and loudness. A successful mastering engineer not only understands these genre-specific nuances but also how to apply them to bring out the best in each track.
In the world of music production, ensuring your track stands up against well-established tracks in your genre is crucial. This is where REFERENCE by Mastering The Mix becomes an indispensable tool in the mastering process. REFERENCE helps you compare your mix to professionally produced tracks, providing clear insights into how your track measures up in terms of EQ balance, stereo width, and perceived loudness.
REFERENCE is designed to make the mastering process more accurate and efficient. By allowing you to directly compare your mix to selected reference tracks within your DAW, it provides real-time feedback on how closely your mix matches the tonal balance and stereo width of professional tracks in your genre. This immediate comparison is invaluable in making informed decisions during the mastering process.
Loading Reference Tracks: Begin by loading a few well-mastered tracks in the same genre as your mix. These tracks should represent the quality and sound you're aiming to achieve.
Matching Loudness: REFERENCE will automatically adjust the loudness of your reference tracks to match your mix. This is crucial for a fair comparison, as differences in loudness can deceive our ears into thinking one track sounds fuller or clearer than the other.
Analyzing Tonal Balance: Use the Trinity Display to visually compare the EQ balance of your mix to your reference tracks. This helps identify areas in your mix that might need more attention, such as insufficient bass or overly harsh high frequencies.
Stereo Width Comparison: REFERENCE also allows you to compare the stereo width of your mix with your reference tracks. This can highlight if your mix is too narrow or wide in certain frequency ranges, ensuring your track feels spatially on par with professional productions.
Dynamic Range Analysis: Finally, observe the dynamic range of your reference tracks compared to your mix. This can inform decisions on compression and limiting to either enhance or control dynamics.
Using REFERENCE effectively can significantly improve the quality of your mastering. It provides an objective lens through which to view your mix, helping you make decisions based on industry standards rather than guesswork. This can be especially helpful when mastering across different genres, as each genre has its unique sonic characteristics.
The key to mastering, as with many aspects of music production, lies in critical listening and informed decision-making. REFERENCE by Mastering The Mix aids in both, ensuring that your final master not only sounds great in isolation but also holds its own against the best in the business.
When examining Grammy-winning pop songs, the mastering often plays a pivotal role in their success. Take, for instance, a track like "Uptown Funk" by Mark Ronson featuring Bruno Mars, which won Record of the Year at the 58th Grammy Awards. The mastering of this track is a testament to precision and balance, contributing significantly to its widespread acclaim.
One of the standout characteristics of its mastering is the impeccable balance between clarity and warmth, ensuring that each element, from Bruno Mars' dynamic vocals to the track's vibrant horns and rhythmic bass, sits perfectly in the mix.
The high-end is crisp without being piercing, allowing the track's infectious energy to shine through, while the low-end is punchy and well-defined, providing a solid groove that is essential in pop music.
This sonic quality, achieved through meticulous mastering, played a crucial role in the song's ability to resonate with a broad audience and ultimately helped it secure its Grammy win.
As we journey through the intricate process of mastering across various musical landscapes - from the polished sheen of Pop to the raw energy of Rock, the pulsating beats of Electronic, the emotive dynamics of Classical, the sophisticated nuances of Jazz, and the rhythmic prowess of Hip Hop - one thing becomes clear: mastering is an art form that demands both precision and creativity.
The mastery of EQ, dynamics, and loudness plays a critical role in transforming a good mix into a great master. Each genre presents its own set of challenges and requires a tailored approach to truly shine. By understanding these unique requirements, mastering engineers breathe life into the music, ensuring it not only sounds its best but also resonates with its intended audience.
Moreover, tools like REFERENCE by Mastering The Mix are invaluable in this process, providing the necessary perspective to compare and refine our work against established standards. They enable us to achieve a balance where our music can stand proudly next to the giants in our genre.
In conclusion, mastering is more than just a final step in music production; it's a crucial bridge between the artist's vision and the listener's experience. It's a blend of science and soul, where technology meets art, ensuring every genre of music reaches its full potential. Whether you are a budding music producer or a seasoned mastering engineer, remember that the heart of mastering lies in understanding and respecting the diversity of music, and in using your skills to bring out the best in every track.
In the realm of music production, the mastery of limiting and loudness management is not just a skill—it's an art. It’s what transforms a good mix into a sonic masterpiece, ready to captivate listeners in any environment. At the core of this process lies the subtle yet powerful technique of limiting, a tool for crafting the final sound of a track with precision and creativity. More than just adjusting volume, it's about sculpting a sound that's both powerful and true to the music's dynamic spirit.
In our digital era, the understanding of loudness units (LUFS) has become crucial. These units measure not just sound, but its impact, aligning with the standards of today's streaming platforms and broadcast norms. But the question remains: how do you balance raw loudness with the intricate dynamics that give a track its soul?
This is where Mastering The Mix's LIMITER and LEVELS come into play. LIMITER brings an intuitive, yet advanced approach to maximizing loudness without distortion, while LEVELS offers essential real-time feedback on loudness and dynamics. Together, they form the backbone of a mastering approach that ensures your tracks not only meet industry standards but resonate with emotional depth and clarity.
This guide is your deep dive into the nuanced world of limiting and loudness management. Whether you’re starting out or are a seasoned pro, these insights into LIMITER and LEVELS will revolutionize your approach, helping your tracks stand out in the competitive landscape of modern music. Get ready to unlock the full potential of your music and master the art of loudness and dynamics.
At the heart of mastering audio lies the limiter, a dynamic range processor that plays a crucial role in shaping the final sound of a track. A limiter is essentially a type of compressor, but with a sterner approach. It strictly prevents audio signals from exceeding a specified output level, thus avoiding distortion caused by clipping. However, limiters are not just tools for preventing peaks; they are sculptors of sound, enhancing the overall loudness while preserving the track's sonic integrity.
When used skillfully, limiters can add density and cohesiveness to a mix, making it sound fuller and more polished. The key is to use them judiciously – too much limiting can lead to a lifeless, over-compressed track, while too little may leave your music sounding quieter and less impactful compared to professionally mastered tracks.
Loudness Units Full Scale (LUFS) have become the standard unit of measurement for perceived loudness in audio production. Unlike traditional peak meters, LUFS meters consider how the human ear perceives loudness across different frequencies, offering a more accurate representation of how loud a track feels to listeners.
Understanding LUFS is critical in the current landscape where streaming services like Spotify, Apple Music, and YouTube use loudness normalization. This process adjusts tracks to a consistent perceived loudness level, meaning mastering your track to the right LUFS target is crucial for ensuring it sounds its best across all platforms.
Integrated LUFS (Loudness Units Full Scale) measures the perceived loudness of an entire audio track, providing an average level over time. It's essential for ensuring consistent loudness in music production and broadcasting, aligning with industry standards.
Short-term LUFS, on the other hand, measures loudness over short periods (usually 3 seconds). This metric helps in identifying and adjusting momentary loudness changes within a track, useful for balancing dynamic variations and understanding how loud the loudest part of an audio file is.
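The integrated/short-term distinction can be illustrated with a toy meter. Real LUFS measurement (ITU-R BS.1770) adds K-weighting filters and gating, which are omitted here; the sketch substitutes plain mean-square level, so its numbers are a proxy for LUFS, not the real thing:

```python
import math

def rms_loudness_db(samples):
    """Unweighted mean-square level in dB, a rough stand-in for LUFS.
    Real LUFS (ITU-R BS.1770) adds K-weighting and gating."""
    mean_sq = sum(s * s for s in samples) / len(samples)
    return 10 * math.log10(mean_sq) if mean_sq > 0 else float("-inf")

def integrated(samples):
    """One number for the whole file: the average level over its length."""
    return rms_loudness_db(samples)

def short_term_max(samples, rate, window_s=3.0, hop_s=1.0):
    """Loudest 3-second window, i.e. how hot the loudest section gets."""
    win, hop = int(rate * window_s), int(rate * hop_s)
    return max(rms_loudness_db(samples[i:i + win])
               for i in range(0, max(1, len(samples) - win + 1), hop))

rate = 1000                      # toy sample rate, for illustration only
quiet = [0.05] * (6 * rate)      # 6 s verse at a low level
loud = [0.5] * (3 * rate)        # 3 s chorus, 20 dB hotter
track = quiet + loud
print(integrated(track) < short_term_max(track, rate))  # → True
```

The comparison at the end captures the practical difference: integrated loudness averages the quiet verse into the number, while short-term max reports only the loudest stretch.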
The primary challenge in using a limiter is achieving the right balance between loudness and dynamic range. The dynamic range is the difference between the loudest and quietest parts of a track, and it's a vital component of music that conveys emotion and excitement. Over-limiting can squash these dynamics, making a track sound flat and lifeless. On the other hand, under-limiting may leave the track too dynamic for commercial standards, resulting in a lower perceived loudness on streaming platforms.
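One simple number for that loudest-to-quietest relationship is the crest factor, the peak-to-RMS ratio. It is a rough proxy, not the exact dynamic range metric any particular plugin reports, but it shows the direction: heavy limiting pushes it down.

```python
import math

def crest_factor_db(samples):
    """Peak-to-RMS ratio in dB. Heavily limited masters have low crest
    factors; untouched dynamic recordings have high ones."""
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(peak / rms)

# A square wave never rises above its RMS: 0 dB crest factor.
square = [1.0, -1.0] * 100
print(round(crest_factor_db(square), 2))  # → 0.0

# A sine of the same peak level: ~3 dB (peak vs. peak/√2).
sine = [math.sin(2 * math.pi * n / 100) for n in range(1000)]
print(round(crest_factor_db(sine), 2))  # → 3.01
```

A square wave is, in this sense, what an "infinitely limited" sine looks like, which is why over-limited masters start to sound flat and fatiguing.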
The goal is to find a sweet spot where the track is competitively loud and maintains its dynamic vitality. This is where understanding the nuances of your limiter’s settings becomes crucial – from setting the right attack and release times to fine-tuning the output ceiling, each parameter plays a role in shaping the final sound.
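The attack/release behaviour described here can be sketched as a toy gain computer. This is a hand-rolled illustration of the concept, not how LIMITER is implemented; real limiters add lookahead and far more sophisticated envelope shaping:

```python
import math

def limit(samples, ceiling=0.98, release_ms=50.0, rate=44100):
    """Naive peak limiter: gain drops instantly when a sample would exceed
    the ceiling (an effectively zero attack) and recovers exponentially
    towards unity at the release rate."""
    release_coeff = math.exp(-1.0 / (rate * release_ms / 1000.0))
    gain, out = 1.0, []
    for s in samples:
        peak = abs(s)
        if peak * gain > ceiling:            # attack: clamp immediately
            gain = ceiling / peak
        else:                                # release: drift back toward 1.0
            gain = 1.0 - (1.0 - gain) * release_coeff
        out.append(s * gain)
    return out

hot_mix = [1.5, -1.2, 0.4, 0.2]
limited = limit(hot_mix)
print(all(abs(s) <= 0.98 + 1e-9 for s in limited))  # → True
```

Even in this crude form you can see the trade-off from the text: a longer release holds the gain down through quieter material after a peak, which is exactly how over-limiting flattens dynamics.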
In the following sections, we'll delve into how to effectively use Mastering The Mix’s LIMITER to achieve optimal loudness without sacrificing dynamics. We'll also explore how LEVELS can be a vital ally in monitoring and ensuring your track hits the sweet spot in loudness and dynamic range.
Mastering The Mix's LIMITER stands out as an indispensable tool for achieving professional-sounding loudness without compromising the quality of your music. Let's explore how to effectively use this powerful plugin.
Attack and Release: These settings control how quickly the limiter responds to the audio signal. A faster attack clamps down on transients swiftly, while a slower attack allows more of the transient through. The release setting dictates how quickly the limiter stops acting after the signal falls below the threshold. (we suggest leaving Release on auto for the most musical results)
Ceiling: This parameter sets the maximum output level of the audio signal, ensuring it doesn't clip. A well-set ceiling prevents digital distortion and maintains audio integrity.
Input Gain: Adjusting the input gain helps you drive the signal into the limiter, essentially controlling how hard the limiter works.
Start with a Moderate Setting: Begin with moderate settings for the input gain. This approach allows you to incrementally increase the loudness while preserving the dynamics.
Use the Analyze Feature: LIMITER's analyze feature is a game-changer. It suggests optimal settings tailored to your specific audio, helping you achieve a balanced sound.
Compare and Contrast: A/B test your limited signal with the original. This comparison is crucial for maintaining the integrity and energy of your track. LIMITER's bypass feature has built-in level matching to help you make informed decisions.
Genre Considerations: Different genres demand different limiting approaches. For instance, a heavy rock track might benefit from a more aggressive limiter setting compared to a dynamic jazz piece.
Watch the Meters: Always keep an eye on the plugin's meters. They provide invaluable visual feedback on how your settings are affecting the audio.
In today's digital age, it's essential to master tracks with streaming services in mind. Platforms like Spotify and Apple Music have specific loudness targets (around -14 LUFS), and mastering your track to align with these standards ensures consistent playback volume across different songs and platforms.
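The arithmetic behind loudness normalization is simple: the platform applies the difference between its playback target and your master's integrated loudness. A sketch, assuming a -14 LUFS target (exact policies vary by platform and can change):

```python
def normalization_gain_db(integrated_lufs, platform_target_lufs=-14.0):
    """Playback gain a loudness-normalizing platform would apply: a hot
    master gets turned down; a quiet one may be turned up, depending on
    the platform's policy."""
    return platform_target_lufs - integrated_lufs

print(normalization_gain_db(-9.0))   # → -5.0 (turned down on playback)
print(normalization_gain_db(-16.0))  # → 2.0 (turned up, platform permitting)
```

This is why "mastering louder" buys nothing on normalized platforms: a -9 LUFS master is simply played back 5 dB quieter, keeping only the dynamic damage done to reach that level.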
You may wish to master your audio loud so it sounds comparable to other club releases. If this is the case, you would probably want to shoot for -6 LUFS short-term as a maximum. It’s harder to get a great sound at this loudness. If you struggle, check out our post on How To Reduce Distortion When Mastering.
While LIMITER helps you shape the loudness, LEVELS by Mastering The Mix is the perfect companion for monitoring it. This powerful plugin provides real-time visual feedback on LUFS (alongside a number of other important technical details you need when mastering audio), helping you ensure your track meets the loudness standards of various platforms. It’s a critical tool for making informed decisions about your limiter settings and overall mastering process.
Mastering the art of limiting is a balance of technical knowledge and creative intuition. With tools like LIMITER and LEVELS, you have the power to bring out the best in your music, ensuring it sounds great across all listening environments. In the next section, we'll delve deeper into balancing loudness and dynamics, guiding you through a practical application of these concepts.
Mastering the delicate interplay between loudness and dynamic range is essential for producing a compelling and sonically rich track. This section focuses on utilizing Mastering The Mix's LEVELS plugin to monitor and balance loudness, complementing the use of LIMITER for optimal mastering results.
Achieving the right loudness in a track is not just about hitting a numerical target. It's about preserving the natural ebb and flow of the music, the dynamic contrasts that make a track engaging and emotive. Overemphasis on loudness can lead to a flattened dynamic range, robbing the music of its vitality. Conversely, underemphasis may render the track weak and unimpressive when played alongside others. The key lies in finding that sweet spot where the track is loud enough to compete commercially yet retains its dynamic expression.
LEVELS by Mastering The Mix provides a comprehensive, real-time visual analysis of your track's loudness and dynamics. Its intuitive interface and accurate readings make it an indispensable tool for mastering, especially in a landscape governed by streaming platform standards.
LUFS Metering: The LUFS meter provides an instant readout of your track’s perceived loudness, helping you align with industry standards.
True Peak Metering: This ensures your audio doesn't clip during digital-to-analog conversion or lossy encoding, maintaining quality across all playback systems.
Dynamic Range Indicator: It visually represents the dynamic content of your track, highlighting areas that may be over-compressed or too dynamic.
Stereo Field Analysis: This feature helps you gauge the stereo spread of your track, ensuring a balanced and immersive listening experience.
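The reason true-peak metering exists is that a D/A converter reconstructs a continuous waveform that can swing above the highest sample value. The toy estimator below hunts for those inter-sample peaks by upsampling with a hand-rolled windowed-sinc kernel; BS.1770 specifies a particular 4x polyphase filter for this, so treat this as an approximation, not a compliant meter:

```python
import math

def true_peak_estimate(samples, oversample=4, taps=16):
    """Rough inter-sample peak estimate via windowed-sinc interpolation
    between samples. Approximates, but does not match, BS.1770 true peak."""
    peak = max(abs(s) for s in samples)
    n = len(samples)
    for i in range(n):
        for k in range(1, oversample):
            t = i + k / oversample          # fractional sample position
            acc = 0.0
            for j in range(int(t) - taps, int(t) + taps + 1):
                if 0 <= j < n:
                    x = t - j
                    if abs(x) < taps:
                        # Hann-windowed sinc interpolation kernel
                        w = 0.5 * (1 + math.cos(math.pi * x / taps))
                        acc += samples[j] * w * math.sin(math.pi * x) / (math.pi * x)
            peak = max(peak, abs(acc))
    return peak

# Classic inter-sample-peak case: a sine at fs/4, phase-offset so every
# sample lands at ~71% of the waveform's continuous peak of 0.9.
samples = [0.9 * math.sin(math.pi / 2 * n + math.pi / 4) for n in range(64)]
sample_peak = max(abs(s) for s in samples)
print(true_peak_estimate(samples) > sample_peak)  # → True
```

The example signal is the textbook case where a plain sample-peak meter under-reads: every stored sample sits well below full scale, yet the reconstructed waveform peaks noticeably higher.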
Set Your Loudness Targets: Begin by determining the ideal loudness level for your track, considering the genre and intended distribution platforms.
Monitor with LEVELS: As you apply limiting, continuously monitor the LUFS and dynamic range indicators in LEVELS. Adjust your limiter settings if the track is too loud or dynamic range is too compressed.
True Peak Caution: Ensure the true peak metering stays within safe limits to avoid clipping in the analog domain.
Dynamic Range Adjustments: If the dynamic range indicator shows excessive compression, revisit your limiter settings, particularly the input gain and attack time.
Stereo Field Checks: Use LEVELS to ensure a well-balanced stereo image, particularly important for ensuring consistent playback on various systems.
A common issue when striving for loudness is unintentional distortion. LEVELS helps identify areas where the signal might be too hot, allowing you to pull back and maintain clarity. This alignment between LEVELS and LIMITER ensures that you achieve a loud, dynamic, and distortion-free master.
Mastering a track is both an art and a science. In this section, we'll walk through a practical example of using Mastering The Mix’s LIMITER and LEVELS to achieve a master that's not only loud and competitive but also dynamically rich and distortion-free.
Begin by loading your mix into your Digital Audio Workstation (DAW). Insert LIMITER on your master channel as the final plugin in your chain. Follow it with LEVELS to continuously monitor the loudness and dynamics.
Monitor the loudest section of your mix, select your preset within LIMITER (like Club or Streaming), then click ‘Analyze’. This will give you a gain range showing you the loudest and quietest settings to keep you in the right ballpark. You’ll also get target suggestions for attack, link and ceiling to optimize them to suit your music.
Adjust the input gain to drive the signal into the limiter appropriately depending on your loudness goal. Fine-tune the attack settings if you wish, considering the genre and desired transients. Keep the ceiling just below 0 dBFS (e.g., -0.1 dB) to avoid clipping.
As you adjust LIMITER, keep an eye on LEVELS. Aim for your target LUFS reading without compromising the dynamic range. Adjust the limiter’s input gain to achieve the desired balance.
Continuously monitor the true peak meter in LEVELS to ensure no clipping occurs. Check the stereo field analysis to verify a well-balanced stereo spread.
After setting your limiter, bypass and re-engage it to compare the processed and unprocessed signals. Click the triangle icon to hear the delta, which plays back exactly what’s being changed in your audio. Make incremental adjustments based on your observations, and use A/B testing with a reference track to ensure your master competes well in terms of loudness and quality.
Listen to the entire track with LIMITER and LEVELS engaged. Pay attention to how the dynamics feel throughout the song. You might find that a particular section gets pushed too loud and you need to pull back on the overall loudness. Make any final tweaks to ensure a consistent and impactful listening experience.
Once you’re satisfied, bounce out the final master. Ensure to select the appropriate bit depth and sample rate, avoiding any normalization.
Trust Your Ears: While meters and visual feedback are essential, your ears are the ultimate judge of the quality of your master.
Genre Considerations: Be mindful of the genre-specific loudness norms and tailor your settings accordingly.
Rest Your Ears: Take regular breaks to prevent ear fatigue, ensuring objective listening throughout the mastering process.
Mastering with LIMITER and LEVELS can significantly uplift the quality of your music, making it ready for commercial release. This walkthrough provides a framework, but remember, every track is unique. Experiment with different settings, trust your instincts, and use these tools to bring out the best in your music.
The journey of transforming a good mix into a professional-sounding master is nuanced and intricate, requiring a blend of technical precision and creative intuition. Through this comprehensive exploration of limiting and loudness management, we've delved into the art of using Mastering The Mix's LIMITER and LEVELS, uncovering the secrets to achieving a master that stands tall in both loudness and dynamic expression.
Remember, mastering is not just about adhering to standards or reaching numerical targets. It's about enhancing the emotional impact of your music, ensuring it resonates with its audience regardless of the listening environment. The tools and techniques discussed here are more than just means to an end; they are your creative allies, empowering you to sculpt a sound that is uniquely yours.
As you embark on your mastering endeavors, keep these key takeaways in mind:
Balance is Key: Strive for a master that is loud yet dynamically rich, avoiding over-compression and loss of musicality.
Trust Your Tools and Your Ears: Utilize LIMITER and LEVELS to their fullest, but always let your ears make the final judgment.
Continuous Learning: The landscape of audio production is ever-evolving. Stay curious and keep learning to stay ahead of the curve.
Whether you're a budding producer or an experienced engineer, mastering these skills will significantly enhance the quality of your productions. We encourage you to experiment with LIMITER and LEVELS by downloading the free trials, discover your unique sound, and elevate your music to the professional standards it deserves.
Remember, every track tells a story. It's your role as a producer to ensure that story is heard loud and clear.
A key aspect of this process is Mid/Side (M/S) processing – a technique that allows you to treat the center and sides of a stereo image independently. This powerful tool can transform a flat, lifeless mix into a dynamic, three-dimensional soundscape. However, it requires a deft touch and a deep understanding of how to manipulate stereo signals without losing the essence of the track.
Another focus will be on leveraging stereo imaging tools and techniques to enhance width. With products like Mastering The Mix's REFERENCE, we'll demonstrate how to compare the stereo width of your production to industry-standard tracks, ensuring your mastering choices are informed and strategic.
Crucially, we'll also address the importance of mono compatibility. In our ever-evolving digital world, where music is often played on mono devices like smartphones and smart speakers, ensuring your track translates well in mono is not just a technical necessity but a commercial imperative. We'll delve into how to use our mixing and metering plugin LEVELS to assess and adjust the mono compatibility of your bass and overall mix, checking for phase issues and left-right balance - essential steps for maintaining the integrity of your sound across various playback systems.
As we journey through these advanced techniques, remember, mastering is as much an art as it is a science. The tools and methods we discuss are powerful, but they're most effective when applied with a creative ear and a deep understanding of the emotional impact you want your music to have.
So, let's embark on this exciting journey, equipped with the knowledge and tools to elevate your mastering skills to new heights. Stay tuned as we unfold the secrets of mastering stereo width and imaging, and prepare to bring a new dimension of depth and clarity to your music.
Stereo width in mastering isn't just about making a track sound 'bigger'; it's about crafting a soundstage that brings the listener into the heart of the music. This aspect of mastering involves manipulating how sound is perceived in the stereo field – the imaginary space between the left and right speakers or headphones. The goal is to create a sense of depth, space, and dimension, allowing each element in the mix to occupy its own distinct place.
The stereo width can make or break a track. A well-mastered track with optimal stereo width will sound full and immersive, drawing the listener in. Conversely, a track with poor stereo width management might sound cluttered or disjointed, with elements competing for space, or it might feel too narrow, lacking life and energy. The right balance can give a track a professional sheen, making it stand out on any playlist.
One common challenge in mastering stereo width is the temptation to overdo it. Wider isn't always better. Overextending the stereo field can lead to phase issues, where the mix sounds great on headphones but falls apart on mono systems. Another misconception is that stereo width is only about panning. While panning is a tool for creating width, true mastery comes from nuanced EQ adjustments, subtle reverb, and strategic use of stereo enhancement tools.
Stereo imaging should always serve the song and the emotion it conveys. For instance, a densely layered orchestral piece might benefit from a wide stereo image, allowing each instrument group to shine. Conversely, a minimalist acoustic recording might need a narrower, more intimate stereo field. Understanding the genre, the arrangement, and the artistic intent is crucial in making informed decisions about stereo width.
Mid/Side (M/S) processing is a revolutionary technique in the mastering arsenal that provides unparalleled control over the stereo field. It works by dividing the stereo signal into two components: the Mid (center content) and the Side (stereo content). This separation allows for independent manipulation of the center and the edges of the mix, offering a new dimension of creative possibilities.
In mastering, M/S processing can be a game-changer. By adjusting the Mid, you can emphasize or de-emphasize elements like vocals, bass, kick, or snare, which are typically panned center. Tweaking the Side lets you enhance or reduce the ambience, reverb, or the stereo spread of instruments like guitars, keyboards, or spatial effects. This precise control can create a more focused center, an expansive stereo image, or anything in between.
To effectively utilize M/S processing, start by critically listening to your mix and identifying what needs adjustment. Does the center need more focus? Could the sides use a little more air or width? Use M/S EQ to subtly enhance or cut frequencies in the Mid or Side channels. Remember, small changes can have a significant impact.
For instance, boosting high frequencies in the Side channel can add airiness and space, making the mix breathe. Conversely, cutting low frequencies in the Side can tighten the mix and improve mono compatibility.
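The math behind M/S processing is refreshingly simple: the Mid is the sum of left and right, and the Side is their difference. Here's a minimal sketch in plain Python (function names and the width-adjustment helper are illustrative, not taken from any plugin):

```python
def ms_encode(left, right):
    """Split a stereo pair into Mid (center) and Side (stereo) components."""
    mid = [(l + r) * 0.5 for l, r in zip(left, right)]
    side = [(l - r) * 0.5 for l, r in zip(left, right)]
    return mid, side

def ms_decode(mid, side):
    """Reconstruct left/right from Mid/Side; with no processing in
    between, this round-trips the original stereo signal exactly."""
    left = [m + s for m, s in zip(mid, side)]
    right = [m - s for m, s in zip(mid, side)]
    return left, right

def adjust_width(left, right, side_gain):
    """Gain on the Side channel widens (>1.0) or narrows (<1.0) the
    image while the Mid (vocals, bass, kick) is left untouched."""
    mid, side = ms_encode(left, right)
    side = [s * side_gain for s in side]
    return ms_decode(mid, side)
```

Note that `side_gain=0.0` collapses the track to mono, which is exactly why aggressive side boosts carry mono-compatibility risks: anything living only in the Side channel vanishes when the channels are summed.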
Let’s consider a practical example. Imagine a track where the vocals feel buried in the mix. Using M/S processing, you can gently boost the upper midrange in the Mid channel to bring the vocals forward. Alternatively, if the mix feels too narrow, a slight boost in the high frequencies on the Side channel can open up the stereo field, giving a sense of expansiveness without altering the core elements of the mix.
When working with M/S processing, tools like Mastering The Mix's plugins can be incredibly valuable. They provide visual feedback and intuitive controls, making it easier to make precise adjustments and achieve the desired stereo image. For instance, using an M/S capable EQ plugin like MIXROOM can help you fine-tune the balance between the Mid and Side channels, ensuring a well-defined and captivating stereo image.
Mastering is not just about balancing levels and EQ; it’s also about creating a compelling stereo image. To achieve this, a range of stereo imaging tools are available, each designed to enhance the stereo width in different ways. From simple panning to advanced stereo wideners, these tools can be used to shape the spatial characteristics of your track.
Mastering The Mix offers several tools that can be invaluable in this process. One such tool is REFERENCE, which is designed to help you compare the stereo width of your production to professionally mastered tracks. This comparison is crucial because it provides a benchmark to aim for, ensuring your track won’t sound out of place when played alongside others.
Using REFERENCE is straightforward but requires a thoughtful approach. First, load it as the final plugin on your master channel. Next, drag and drop a professionally mastered song that you admire into REFERENCE. This track should ideally be in a similar genre to your own. As you play back your track, toggle between your mix and the reference track, paying close attention to how the stereo image compares.
In the Trinity Display, the Stereo Width view shows you how the different frequencies compare in width. The thicker the blue display, the wider those frequencies are.
Once you have a clear idea of how your track stacks up against the reference, you can start making adjustments to enhance its stereo width. Here are a few techniques:
Strategic EQ: Apply subtle EQ boosts and cuts to the side channels to enhance certain frequencies, making the mix sound wider.
Stereo Widening Tools: Use stereo wideners sparingly to increase the perception of space. Be careful not to overdo it as this can lead to phase issues.
Don’t use the Haas Effect when mastering: The Haas effect delays one channel by a few milliseconds to create a sense of width. Applied to the whole master, this technique can cause phase problems in mono and disorient the listener.
Enhancing stereo width is a delicate process. Overuse of stereo widening can lead to a loss of focus in the center and potential phase issues, which can be detrimental in mono playback systems. Always check your mix in mono to ensure that key elements, like vocals and bass, remain clear and prominent.
In the pursuit of a captivating stereo image, it's crucial not to overlook mono compatibility. In today's diverse listening environment, your track might be played on anything from high-end stereo systems to mono smart speakers. Ensuring your track maintains its integrity in mono is not just a technical consideration; it's a key aspect of making your music universally enjoyable.
Mono compatibility is about making sure the essential elements of your mix, like vocals, bass, and kick, are clear and present when summed to mono. Here are some techniques to ensure your track performs well in both stereo and mono:
Regular Mono Checks: Frequently switch your mix to mono during the mastering process. This helps identify phase issues or elements that disappear or diminish in mono. (LEVELS has a mono button so you can quickly check whilst also keeping an eye on all the important technical details of your mix).
Balanced EQ and Panning: Ensure your EQ and panning decisions don’t overly favor one channel (left / right), which can cause elements to vanish in mono.
Careful Use of Stereo Enhancers: While stereo enhancers can add width, they can also create phase issues. Use them judiciously.
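The mono checks above can also be reasoned about numerically: sum the channels to mono and compare the level to the stereo original. A significant drop means phase cancellation. Here's a rough sketch (the function and its thresholds are illustrative, not how any particular meter works internally):

```python
import math

def rms(samples):
    """Root-mean-square level of a block of samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def mono_loss_db(left, right):
    """How much level (in dB) is lost when the stereo pair is summed
    to mono. Near 0 dB is safe; a large negative number means phase
    cancellation is eating your mix on mono speakers."""
    stereo_rms = (rms(left) + rms(right)) * 0.5
    mono = [(l + r) * 0.5 for l, r in zip(left, right)]
    m = rms(mono)
    if m == 0:
        return float('-inf')  # total cancellation
    return 20 * math.log10(m / stereo_rms)
```

Identical channels lose nothing in mono; perfectly out-of-phase channels cancel completely, which is the worst case this check is designed to catch.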
Mastering The Mix’s LEVELS plugin is a powerful tool for checking mono compatibility. Here’s how to use it effectively:
Low-End Mono Check: Use the LEVELS filter to focus on the low frequencies. Low-end elements should be mostly mono, ensuring they translate well on all systems. If you engage the filter and see red in the display, your low-end might be too wide. (Note that a touch of low-end width is OK and won't cause issues).
Correlation Meter: Keep an eye on the correlation meter in LEVELS. If it frequently dips below zero, this indicates phase issues, which can be a red flag for mono compatibility.
Left-Right Balance: LEVELS can also help check for left-right balance, ensuring that the mix is not biased towards one side, which can affect mono playback.
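To build intuition for what a correlation meter is reporting, here is a minimal sketch of the underlying calculation: a normalized cross-correlation of the two channels over a window (a simplification of what real meters do, which typically apply windowing and smoothing over time):

```python
import math

def stereo_correlation(left, right):
    """+1: channels identical (fully mono-compatible),
        0: fully decorrelated,
       -1: out of phase (cancels completely in mono)."""
    num = sum(l * r for l, r in zip(left, right))
    den = math.sqrt(sum(l * l for l in left) * sum(r * r for r in right))
    return num / den if den else 0.0
```

A reading that hovers near +1 is very mono-safe; sustained readings below zero are the red flag the text above describes.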
The key to mastering stereo width lies in finding the right balance. Your track should sound wide and immersive in stereo without losing coherence in mono. This balance often requires making compromises – for example, slightly narrowing the stereo image to ensure the kick and bass remain powerful in mono.
For those with more experience, consider experimenting with mid-side processing to adjust elements in the stereo field without affecting their mono presence. Also, try using dynamic EQ or multiband compression to address phase issues in specific frequency ranges.
Advanced stereo imaging techniques go beyond basic wideness; they involve creative and thoughtful manipulation of the stereo field to enhance the emotional impact of your music. This section explores innovative ways to use stereo width to bring out the best in different genres and how to make the final adjustments that elevate your track from good to exceptional.
Each genre presents unique opportunities and challenges for stereo imaging. For example:
Electronic and Dance Music: These genres often benefit from a wider stereo image to create an immersive, club-like experience. Experiment with side-chain effects on pads and synths to dynamically enhance the stereo field.
Rock and Pop: Here, a more conservative approach might be necessary. Focus on creating depth and space around the central elements like vocals and drums, using panning and subtle stereo delays.
Acoustic and Jazz: In these genres, realism and intimacy are key. A narrower, more natural stereo image often works best, with gentle panning to replicate a live stage setup.
Harmonic Exciters and Stereo Enhancers: Use these tools to add subtle width to specific frequency ranges without affecting the overall balance.
Automated Panning: Automating the panning of certain elements can add movement and interest, especially in more experimental or ambient tracks.
Frequency-Specific M/S Processing: Apply M/S processing to specific frequency ranges to widen the high-end while keeping the low-end tight and centered.
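The "keep the low end tight and centered" idea can be sketched as an elliptical-style filter: remove (or attenuate) low frequencies from the Side channel only, leaving the highs wide. This toy version stands in a crude one-pole smoother for a real crossover filter; the coefficient `a` and the function itself are purely illustrative:

```python
def elliptical_side_filter(side, a, low_side_gain=0.0):
    """Attenuate low frequencies in the Side channel so the bass stays
    mono while the highs keep their width. `a` in (0, 1) sets the
    crude crossover point (higher = more of the spectrum treated as low)."""
    out, low = [], 0.0
    for s in side:
        low += a * (s - low)              # running low-frequency estimate
        high = s - low                    # what's left is the high band
        out.append(low_side_gain * low + high)
    return out
```

With `low_side_gain=0.0`, any sustained (low-frequency) content in the Side channel decays away, while fast transient detail passes through, which is the behavior you want before widening the top end.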
The last step in mastering stereo imaging is the final listen and adjustment phase. Here are some tips:
Reference Listening: Revisit your reference tracks. Ensure your track holds up in terms of stereo width and overall spatial quality.
Final Mono Check: Make one last check in mono to confirm that all critical elements are still present and coherent.
A/B Testing: Compare the before and after of your stereo imaging work. This can reveal if any changes have been too drastic or not effective enough.
Mastering stereo imaging is an art form that requires practice and experimentation. Don’t be afraid to try bold moves, but always be ready to step back if they don’t serve the song. Remember, the best stereo image is the one that enhances the listener's emotional connection to the music.
We’ve explored the complexities of Mid/Side processing, delved into the tools and techniques for enhancing stereo width, and underscored the importance of maintaining mono compatibility. Along the way, we've seen how genre-specific approaches can greatly influence our stereo imaging decisions and how creative applications can elevate a mix.
Stereo width is about balance: Not just making a track wider, but creating a spatial experience that complements the music.
Mid/Side processing is a powerful tool: It offers nuanced control over the stereo field, allowing for targeted adjustments.
Mono compatibility remains crucial: Even in a stereo-dominated world, ensuring your track translates well in mono is vital.
Creative approaches differ by genre: Understanding the stylistic nuances of your genre is key to effective stereo imaging.
Final adjustments are critical: Always refer back to professional tracks, check in mono, and be open to fine-tuning.
Mastering stereo width and imaging is as much an art as it is a science. It requires not only technical knowledge but also a creative ear and an understanding of how your decisions impact the listener's experience. Practice, experimentation, and critical listening are your best tools in developing these skills.
Throughout this journey, products like REFERENCE and LEVELS from Mastering The Mix have been invaluable in providing visual feedback and practical benchmarks for our mastering efforts. These tools are designed to help you achieve professional results, whether you're a seasoned pro or just starting out. Grab the free trial and take them for a spin in your next production.
Mastering is a delicate balance of technical skill and creative intuition. Among the various tools in a mastering engineer's arsenal, the compressor is particularly crucial. It not only controls volume but also contributes to the overall feel and character of the music. However, compression can be a double-edged sword. When misapplied, it risks dulling a track's dynamics and emotional impact, underscoring the importance of a mastering engineer's expertise.
In this blog post, we embark on a journey to explore the world of compression in mastering. We'll delve into different compressor types, their unique sonic signatures, and critical settings such as threshold, ratio, attack, release, and knee. Our aim is to demystify this powerful tool, offering insights and practical guidance for mastering engineers of all levels. Whether you're beginning your mastering journey or looking to refine your techniques, this guide will illuminate the nuanced art of mastering compression.
Compression, in its most fundamental form, is the art of dynamic range control. It's a process that moderates the disparity between the loudest and quietest parts of an audio track. In mastering, this translates to a more consistent overall level, ensuring that a song can stand alongside others without drastic volume differences.
A compressor works by reducing the level of audio signals that exceed a certain threshold. This threshold is set by the engineer and is the point at which the compressor starts to act. Once the signal crosses this threshold, the compressor reduces its level based on a set ratio. For example, a 4:1 ratio means that for every 4 dB the signal goes over the threshold, it's reduced to 1 dB over the threshold. This process effectively narrows the dynamic range, making the loud parts quieter and allowing the overall level of the track to be increased without peaking.
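The threshold/ratio arithmetic described above can be written down directly. This sketch computes the static (hard-knee) output level for a given input level in dB; it's the textbook relationship, not any specific compressor's implementation:

```python
def compress_db(level_db, threshold_db, ratio):
    """Hard-knee static curve: levels below the threshold pass through
    unchanged; the excess above the threshold is divided by the ratio."""
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio
```

For example, with a -10 dB threshold and a 4:1 ratio, a -2 dB input (8 dB over the threshold) comes out at -8 dB (2 dB over the threshold), exactly as the 4:1 example in the text describes.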
However, compressors do more than just level management; they influence the tone and feel of the music. Fast attack times can emphasize the rhythm and add punch, while slower attack times preserve the natural transients, maintaining the life and energy of the music. The release time, or how quickly the compressor stops reducing the signal after it falls below the threshold, also plays a crucial role. It can either smooth out the sound or create a more pronounced rhythmic effect.
The key in mastering is to use compression subtly. It’s not about squashing the life out of a track but gently taming the dynamics to create a cohesive, listenable, and engaging final product that translates well across various playback systems.
One of the most common misconceptions about compression in mastering is that it's primarily used to make tracks louder. While compression does help in increasing perceived loudness, its primary role is to balance and enhance the track's dynamics without compromising its natural feel. Overcompression, a frequent pitfall, occurs when the dynamics are excessively restricted, leading to a lifeless and flat sound, often described as “squashed.” This not only robs the music of its emotional impact but can also lead to listener fatigue.
Another misunderstanding is the idea that one-size-fits-all when it comes to compressor settings. Each track has its unique dynamic characteristics, requiring tailored compression settings. What works for a rock anthem might not suit a delicate acoustic piece. It’s about understanding the nuances of the track and applying compression in a way that enhances rather than detracts from the music.
Lastly, there's a tendency to overlook the type of compressor used. Different compressors – VCA, Opto, FET, and more – each impart their character to the sound. Choosing the right compressor for the track’s style and desired outcome is as crucial as the settings themselves.
In mastering, compression is a tool of finesse and balance. It requires a deep understanding of both the technical aspects and the musical context. When used judiciously, it can elevate a track from good to great, ensuring that it not only sounds polished but retains the dynamic expression that gives the music its life and energy.
In mastering, the choice of compressor is critical, as each type imparts its unique character to the sound. Understanding these differences is key to selecting the right compressor for a given track.
VCA compressors are known for their precision and versatility. They offer a wide range of attack and release settings, making them suitable for a variety of audio material. VCA compressors are often favored for their transparency and ability to handle complex material without significantly coloring the sound. They are adept at providing a controlled, even compression, making them a popular choice for mastering engineers who require precise dynamic control.
Opto compressors are characterized by their smooth and musical compression. They use a light-sensitive element to control the gain reduction, resulting in a slower response compared to VCA compressors. This slower response lends itself to a more natural and gentle compression, often described as "musical." Opto compressors are excellent for adding warmth and glue to a mix without being overly aggressive, making them ideal for mastering applications where subtlety and musicality are paramount.
FET compressors are celebrated for their aggressive and punchy character. They emulate the sound of classic tube compressors, offering a colorful and lively compression that can add a sense of excitement and energy to a track. However, their aggressive nature and distinct coloration can make them less suitable for some mastering applications, where transparency and subtlety are desired.
In mastering, Opto compressors occupy a special place. Their hallmark is a slower, more gradual gain reduction, which closely follows the natural dynamics of the music. This characteristic creates a gentle 'pumping' effect, a rhythmic expansion and contraction of the audio signal that can add a pleasing, organic feel to the music.
Opto compressors are particularly effective in genres where preserving the natural dynamics and feel of the music is essential, such as acoustic, jazz, or classical. They add a subtle thickness and warmth to the track, enhancing the overall sonic texture without imposing an aggressive compression signature.
Furthermore, the gentle nature of Opto compression makes it a go-to choice for tracks that require a delicate touch. It can unify the elements of a mix, creating a cohesive and polished final product without sacrificing the track's dynamic expression.
In summary, the choice of compressor in mastering is a critical decision that can significantly impact the final sound. VCA, Opto, and FET compressors each offer distinct characteristics, and understanding these can guide mastering engineers in choosing the right tool for the track at hand. Opto compressors, in particular, stand out for their ability to impart a musical, organic feel to the music, making them a favored choice for a wide range of mastering applications.
Mastering compression is a delicate balance of art and science. The correct settings can enhance a track’s dynamics, clarity, and energy, while incorrect settings can compromise its integrity. Key to mastering this tool is understanding its fundamental settings: threshold, ratio, attack, release, and knee.
The threshold in a compressor determines the level at which compression begins to take effect. It is the point at which the compressor starts to reduce the gain of any signal exceeding it. Setting the threshold correctly is crucial for effective compression.
In mastering, a threshold set too high might result in negligible compression, failing to bring the desired consistency in dynamics. Conversely, a threshold set too low can lead to over-compression, squashing the track’s natural dynamics.
For example, in a track where the peaks are sporadic but significantly louder than the rest of the material, a lower threshold may be necessary to ensure these peaks are controlled. However, in a more consistently loud track, a higher threshold allows for the maintenance of natural dynamics while still taming the overall loudness.
The ratio determines how much compression is applied once the signal exceeds the threshold. It is expressed as a ratio of input level to output level (e.g., 4:1, meaning that for every 4 dB above the threshold, the output is reduced to 1 dB above the threshold).
In mastering, a high ratio can be too aggressive, leading to an unnatural sound. A ratio of 2:1 might be suitable for light compression, gently reducing dynamic range while maintaining a natural sound. For more pronounced compression, a ratio of 4:1 can be used, but it should be applied cautiously to avoid over-compression.
For instance, in a rock song with a wide dynamic range, a moderate ratio (around 3:1) might be used to achieve a punchier sound without losing the track's raw energy. In contrast, a classical piece, which typically requires minimal intervention, might only need a light ratio (such as 1.5:1) to subtly control the dynamics.
The attack time determines how quickly the compressor reacts to a signal exceeding the threshold. A fast attack time (measured in milliseconds) can clamp down on transients quickly, which might be desirable in a track with sharp, aggressive transients. However, too fast an attack can flatten the dynamics, stripping the life from the music. A slower attack, on the other hand, allows more of the transient to pass through uncompressed, maintaining the track's natural dynamics and feel.
For example, a fast attack (around 1-5 ms) might be used in a dance track to control sharp percussive elements, while a slower attack (perhaps 30-50 ms) could be better for a vocal-centric pop song to maintain the naturalness of the vocals.
Release time, conversely, is how quickly the compressor stops reducing the gain after the signal falls below the threshold. A fast release can restore dynamics quickly but might cause the compression to sound more apparent or ‘pumpy.’ A slow release smooths out the gain reduction but can also make the compression sound more sustained and less natural.
In mastering, setting the release time often involves finding a balance that complements the tempo and rhythm of the track. For example, a faster release might be suitable for a fast-paced electronic track to maintain its energetic feel, while a slower release could be more appropriate for a ballad to smoothly handle its dynamic ebbs and flows.
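Under the hood, attack and release behavior comes from smoothing the detected signal level with two different time constants: a fast one on the way up, a slower one on the way down. This envelope-follower sketch uses the standard one-pole exponential coefficient design (illustrative, not any particular compressor's detector):

```python
import math

def envelope_follower(samples, sample_rate, attack_ms, release_ms):
    """Track the signal level, rising at the attack rate and falling
    at the release rate. Shorter attack = tighter transient clamping;
    shorter release = gain recovers faster (and sounds more 'pumpy')."""
    atk = math.exp(-1.0 / (sample_rate * attack_ms / 1000.0))
    rel = math.exp(-1.0 / (sample_rate * release_ms / 1000.0))
    env, out = 0.0, []
    for x in samples:
        level = abs(x)
        coeff = atk if level > env else rel  # pick per direction
        env = coeff * env + (1.0 - coeff) * level
        out.append(env)
    return out
```

Feeding a step signal through this follower shows the trade-off directly: a 1 ms attack clamps onto the level almost immediately, while a 50 ms attack lets far more of the transient through before the gain reduction catches up.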
The knee setting controls how the compressor transitions from no compression to the set ratio as the signal approaches the threshold. A hard knee means that compression is applied abruptly as soon as the signal exceeds the threshold. This can result in a more aggressive and noticeable compression effect. A soft knee, however, introduces compression more gradually, starting slightly below the threshold and becoming more aggressive as the signal increases. This results in a more subtle and transparent compression.
In mastering, a soft knee is often preferred as it can provide a more natural-sounding compression. For instance, when mastering a jazz track, a soft knee would smoothly manage the dynamic range without imposing an overly processed sound. However, in genres like EDM or rock, where a more aggressive compression might be desired, a harder knee could be used to add punch and presence.
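The hard/soft knee distinction has a neat mathematical form: a soft knee is a quadratic blend between the unity line and the full-ratio line over a knee region of width W dB centered on the threshold. This is the common textbook formulation, sketched here for illustration:

```python
def gain_curve_db(x, threshold, ratio, knee_width):
    """Static compressor curve with a soft knee (all values in dB).
    Below the knee: unity. Above it: full ratio.
    Inside the knee: a smooth quadratic transition between the two."""
    over = x - threshold
    if 2 * over < -knee_width:
        return x                            # no compression yet
    if 2 * over > knee_width:
        return threshold + over / ratio     # full compression
    # quadratic interpolation across the knee region
    return x + (1.0 / ratio - 1.0) * (over + knee_width / 2) ** 2 / (2 * knee_width)
```

One consequence worth noticing: with a soft knee, gentle compression begins slightly *below* the threshold, which is exactly what makes the transition sound gradual rather than abrupt.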
In conclusion, understanding and correctly setting the threshold, ratio, attack, release, and knee is crucial in mastering compression. Each setting plays a vital role in shaping the dynamics and overall feel of the track. Mastering engineers must carefully consider these settings, often experimenting and adjusting to find the perfect balance for each individual track. With skillful application, compression can enhance a track’s impact, ensuring it resonates with its intended audience.
Mastering often involves working with tracks that have large dynamic variations. The goal is to achieve a balance that allows quieter sections to be heard clearly without letting louder sections dominate excessively.
Compression in mastering can serve two primary purposes: adding character to the sound or controlling dynamics. The approach varies significantly based on the objective.
Adding Character: Choose a compressor known for its sonic character (like a vintage tube or FET compressor). Use moderate to high ratio settings and adjust the attack and release to shape the track’s transient response and tone.
Controlling Dynamics: Opt for a transparent compressor (like a VCA or a digital model). Set a lower ratio and a threshold that catches only the peaks. The aim here is not to color the sound but to subtly even out the dynamics.
Different sections of a song often require different compression approaches due to their varying dynamics and energy levels.
Verses vs. Choruses: Automate the compressor’s settings to suit the dynamic range of these sections. Choruses generally demand more dynamic control than verses. Using REFERENCE, compare your track to well-mastered songs to gauge appropriate compression levels for each section.
Breakdowns and Bridges: These sections might require lighter compression or even bypassing the compressor to preserve their distinct dynamic range and emotional impact.
Over-compression is a common pitfall in mastering that can lead to a loss of dynamic interest and emotional impact in a track.
Watch the Gain Reduction Meter: Regularly check the compressor’s gain reduction meter. As a general rule, seeing more than 3-4 dB of gain reduction can indicate over-compression.
A/B Testing: Frequently bypass the compressor to compare the processed and unprocessed signals. This helps in assessing whether the compression is enhancing or detracting from the track.
Use Your Ears: Trust your ears more than the meters. Listen for signs of over-compression like pumping, a squashed sound, or a lack of punch.
Dynamic Range Meter: Use LEVELS as a dynamic range meter to objectively view the compression’s effect on the overall dynamics.
Reference Tracks: Utilize REFERENCE for A/B comparisons with professionally mastered tracks in your genre to set appropriate compression levels.
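One crude dynamic-range number you can compute yourself is the crest factor: the gap between peak level and RMS level. Dedicated meters like LEVELS use more sophisticated measurements, so treat this as a rough sketch of the concept only:

```python
import math

def crest_factor_db(samples):
    """Peak-to-RMS ratio in dB. Heavily compressed or limited material
    has a low crest factor; dynamic material has a high one."""
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(peak / rms)
```

A pure sine wave has a crest factor of about 3 dB, a square wave 0 dB; watching this number shrink pass after pass is a numerical hint of the "squashed" sound the text warns about.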
Mastering engineers often employ advanced and creative compression techniques to elevate a track’s quality. These strategies require a nuanced understanding of compression and how it interacts with different musical elements.
Side-Chain Compression: This technique uses the signal from one element (like a kick drum) to trigger compression on another (like a bassline), creating a rhythmic pumping effect. It’s particularly effective in electronic music for enhancing groove and space.
Mid-Side Compression: By compressing the mid and side signals separately, engineers can add depth and width to a mix. This approach can highlight the center elements, like vocals, while giving a sense of expansiveness to stereo components.
Serial Compression: Layering multiple compressors with subtle settings can yield a more complex and textured sound. This method allows for finer control over dynamics without over-processing.
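The side-chain idea can be sketched in a few lines: the detected level of a trigger signal (say, the kick) drives gain reduction on a target (say, the bass). This toy version works sample-by-sample with no attack/release smoothing, so it's purely illustrative of the signal flow, not a usable ducker:

```python
import math

def sidechain_duck(target, trigger, threshold, ratio):
    """Duck `target` whenever `trigger` exceeds the threshold.
    The louder the trigger, the harder the target is pulled down."""
    out = []
    for t, trig in zip(target, trigger):
        level = abs(trig)
        if level > threshold:
            # dB by which the trigger exceeds the threshold
            over_db = 20 * math.log10(level / threshold)
            # the ratio dictates how much of that excess becomes reduction
            reduction_db = over_db * (1 - 1 / ratio)
            t *= 10 ** (-reduction_db / 20)
        out.append(t)
    return out
```

While the trigger is quiet the target passes through untouched; when the trigger hits, the target dips, which is the rhythmic "pumping" groove that makes this technique so popular in electronic music.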
Rock and Pop: Often benefit from punchy compression to enhance the energy, especially on drums and bass. A medium attack and fast release can add excitement.
Jazz and Classical: Typically require minimal and transparent compression. A gentle ratio and a soft knee setting help to maintain the natural dynamics and expression.
Hip-Hop and EDM: These genres can handle more aggressive compression to emphasize beats and basslines. Experiment with side-chain and mid-side techniques for a dynamic and immersive sound.
In summary, advanced compression techniques allow mastering engineers to creatively shape the sound and feel of a track. These methods, when applied judiciously, can significantly enhance the listening experience, making the track not just sonically appealing but also emotionally engaging.
Throughout this exploration of compression in mastering, we've delved into the nuances of various compressor types, deciphered critical settings like threshold, ratio, attack, release, and knee, and navigated through practical techniques to enhance a track's dynamics and character. From balancing dynamics in diverse sections to creative and genre-specific strategies, compression emerges as a multifaceted tool, crucial in the art of mastering.
As you integrate these insights into your mastering process, remember that experimentation is key. Each track presents a unique canvas for your skills — don't hesitate to try different approaches, tweak settings, and explore the full potential of compression. Your artistic intuition, guided by these principles, will lead you to discover the perfect balance that resonates with your musical vision.
One of the most crucial tools when mastering is the equalizer, or EQ. Often perceived as the alchemist’s wand of audio, EQ in mastering is all about finesse and balance, gently shaping the overall tonal quality of your track. But what exactly does that entail?
Mastering EQ isn't just about adjusting frequencies; it's about understanding the complex relationship between every element of your mix. Imagine this: as you sweep the high frequencies to add sparkle and emphasize the hi-hats, you inadvertently set off a chain reaction—your low end starts to feel weaker in comparison. It's a balancing act where every move has a consequence, a sonic game of chess that demands strategic foresight and acute auditory perception.
But mastering isn't just about reacting to what you hear; it's a proactive process of constant reference and refinement, where EQ adjustments are made with a surgeon’s precision—after all, a mere 1dB change can sometimes mean the difference between good and great.
And then there's the challenge of the low end, a frequent stumbling block in home studios where monitoring accuracy and room acoustics often fall short. Too much bass and your track sounds muddy; too little and it lacks power. That's where tools like our BASSROOM plugin come into play, offering tailored EQ settings to bring balance to your bass without losing clarity or punch.
As we delve deeper into this world, we’ll explore how Mastering The Mix products like BASSROOM and MIXROOM can be instrumental in achieving a well-balanced spectrum, guide you through the advantages of dynamic EQs and mid-side processing, and help you decide between the character of analogue emulation and the precision of digital EQ. Prepare to unlock the full potential of your music as we master the art of EQ in mastering.
Mastering EQ isn't just about tweaking frequencies; it's a pivotal process that shapes the final tonal balance of a track. Unlike in mixing, where EQ changes can be more localized and specific, mastering EQ moves ripple across the entire soundscape, because we’re no longer dealing with individual instruments but with the composite mix. A boost in the highs to add sheen to the hi-hats, for instance, will inadvertently brighten everything that shares that frequency space – from vocals to the sparkle on the cymbals.
The subtlety of mastering EQ cannot be overstated. Tiny nudges can mean the difference between a flat-sounding mix and one that breathes with clarity and depth. It’s a game of nuances, where a 1dB increase can open up the highs, or a slight dip in the mids can prevent the track from feeling crowded.
Making these decisions isn't just about what sounds 'better' but rather what serves the track in its intended context. The end goal is always to enhance the overall listening experience while respecting the mix’s original character. This is where visual aids can be invaluable.
The white EQ line in Mastering The Mix's REFERENCE plugin provides a clear visual benchmark, showcasing the exact EQ adjustments necessary to align your track with your chosen reference. It's about getting that pro sound efficiently, with the flexibility to stray from the beaten path if the music demands it.
In mastering, EQ is much more than a tool; it's the paintbrush that defines the final hue of the musical canvas. Each stroke must be intentional, deliberate, and well-informed. With the help of tools like REFERENCE, the mastering engineer has both the precision and the guidance to make these crucial decisions confidently.
Mastering EQ is not just about making a track louder or adding a shiny top-end; it's a strategic endeavor aimed at ensuring the track feels balanced and dynamic. This involves careful listening and often focusing on the most dynamic parts of the track—where the full frequency range comes into play. This section of your mix will reveal how well your EQ settings work when the track is at its most complex.
One of the most effective tools in this process is the use of reference tracks. By selecting a professionally mastered track that resonates with the sonic goals for your own mix, you can set a benchmark for tonal balance. Reference tracks guide you towards the desired outcome, showing you what’s possible within the genre's expectations.
The visual feedback from tools like REFERENCE can be invaluable here. The white EQ line in REFERENCE offers real-time visual feedback, showing you precisely how your track compares to your chosen reference track in terms of EQ balance. It helps you to see the differences in the EQ curve and make informed adjustments accordingly.
Each production has different instrumentation and sounds, so it would be impossible to get the level line to sit at exactly 0dB. If the level line is within ±3dB, your track has a very similar tonal balance to the reference. Anything beyond ±6dB indicates a considerable difference in tonal balance. Use your ears and discretion here to determine whether you want to get your music sounding more like your reference track, or whether you want it to have a heavier bass or perhaps a brighter feel. The level line in REFERENCE can still help keep you in an acceptable tonal balance range while giving you the freedom to shape the sound as you feel best suits the song.
REFERENCE also automatically level matches your references to the track you’re working on, which is critical for making unbiased decisions: we humans perceive louder sounds to be richer in the low-end and clearer in the high frequencies, even when this is not the case.
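The comparison a tool like this draws can be approximated in code. The sketch below is a simplification for illustration, not Mastering The Mix's actual algorithm: it measures per-band energy in dB so two tracks can be compared band by band. The band edges and helper names are my own assumptions.

```python
import numpy as np

def band_levels_db(signal, sr, bands):
    """Per-band energy in dB for a mono signal, measured from its spectrum."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), 1 / sr)
    levels = []
    for lo, hi in bands:
        mask = (freqs >= lo) & (freqs < hi)
        energy = spectrum[mask].sum() / len(signal)
        levels.append(10 * np.log10(energy + 1e-12))
    return np.array(levels)

# Illustrative sanity check: halving a signal's amplitude drops every band
# by the same ~6dB, so the band-by-band difference is flat
sr = 44100
track = np.random.default_rng(0).standard_normal(sr)
bands = [(20, 250), (250, 2000), (2000, 20000)]
diff = band_levels_db(track, sr, bands) - band_levels_db(0.5 * track, sr, bands)
print(np.round(diff, 1))  # each band differs by about 6dB
```

The point of the sketch is only that "tonal balance" reduces to comparing band energies in dB; a real-time plugin does this perceptually and continuously.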
Navigating the low-end during mastering can feel like walking a tightrope—too much, and your track sounds boomy and uncontrolled; too little, and it lacks power and presence. Home studios typically suffer from less-than-perfect room acoustics, making it challenging to accurately judge bass levels. For instance, room modes can either exaggerate or hide certain low frequencies, misleading you to make poor decisions.
Our plugin BASSROOM is an EQ designed to fix the low-frequency balance when mastering. BASSROOM analyses your production and shows you the EQ settings needed to get a powerful and well-balanced sound. Load up BASSROOM on your master channel, select a preset or import your own reference track. Match the bands to the EQ target suggestions, then tweak to suit your preference.
The preset and reference analysis targets are 100% unique to your music. BASSROOM analyses your audio to determine the EQ changes needed to match the tonal balance goal. It uses a revolutionary algorithm that is based on how the human ear perceives sonic energy. The unique filters in BASSROOM are optimized to shape low-frequencies during mastering. You won’t find another plugin that can adjust bass more transparently, retaining the punch and clarity of your music.
When adjusting the EQ on the master channel, you would generally want to go for broad Q bandwidths and EQ adjustments no greater than around 4dB. If the adjustments are greater than this then you may get more transparent results by going back to fix things in the original mix before mastering. Rules can always be broken and it’s the results that count. If a master needs to have a 6dB high-end boost to sound great then go for it.
Our plugin MIXROOM is perfect for dialing in a great-sounding master in the mid and high frequencies. It works in a similar way to BASSROOM, where you can select a preset or load in your own reference track to generate tonal balance targets. You can also adjust the mid-side settings for each band, giving you total control over how you shape the frequencies across the stereo field.
Before and after making any EQ adjustments, it's essential to match the output gain to avoid being misled by changes in loudness. Our ears naturally perceive louder music as better, which can cause us to favor the louder signal when comparing two versions of a track. To make critical EQ decisions, level match the post-EQ signal to the pre-EQ level. This allows you to truly judge the impact of your EQ changes.
For example, after applying EQ, if you've boosted the overall level of the track, use the output gain control to bring it back down to the original level for a fair A/B comparison. This practice ensures that you're making decisions based on the quality of the EQ adjustments, not on volume differences. Both MIXROOM and BASSROOM have a level match pointer on the output gain to help you do this instantly.
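As a rough illustration of the idea (not MIXROOM's or BASSROOM's actual implementation), the output gain that restores the pre-EQ loudness can be computed from the RMS of the two signals. The test tone and the 2dB figure below are made up.

```python
import numpy as np

def level_match_gain(pre, post):
    """Linear gain that brings `post` back to the RMS level of `pre`."""
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    return rms(pre) / rms(post)

# Illustrative: pretend an EQ boost raised the overall level by 2dB
t = np.arange(44100) / 44100
pre = np.sin(2 * np.pi * 440 * t)
post = pre * 10 ** (2 / 20)              # +2dB hotter after the 'EQ'
gain_db = 20 * np.log10(level_match_gain(pre, post))
print(round(gain_db, 2))  # -2.0: pull the output down 2dB for a fair A/B
```

With the levels matched, any preference you hear in the A/B comparison comes from the EQ curve itself, not from loudness bias.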
Constant referencing is key to not losing sight of your mastering objectives. It's like having a GPS in unfamiliar territory; it helps you stay the course. By frequently A/B testing your master against a reference track, you ensure your track is competitive within its genre.
For instance, if you're mastering a rock song, you might use a reference track from a successful album within that genre to gauge your kick drum's punch or your guitars' brightness. By analyzing the frequency spectrum of the reference, you can identify areas where your track may be lacking or where it may be excessive.
(Note: I rarely close REFERENCE when mastering… it’s so effective at keeping me productive. I’m constantly jumping back and forth between the track I’m mastering and the reference track. Try this in your next mastering session and see how you get on.)
All the EQ techniques and approaches I've shared can also be applied when using dynamic EQ.
Dynamic EQs let the EQ curve move in response to the input audio, giving you greater control over the tonal balance.
For example, you can create an EQ boost that ducks dynamically when the audio signal gets louder. This allows you to boost the brightness whilst avoiding harshness when the mix itself gets brighter, such as on the introduction of a hi-hat or a brighter synth.
You can also use a dynamic EQ to increase or decrease the dynamic range of certain frequencies. You might keep the EQ node at 0dB and adjust the dynamic band only. Boosting in the low-end can give you a punchier kick and a looser bass. Boosting in the high-end can give you more transient hi-hats and breathe some extra life into the performance. Attenuating the dynamic range of the low-end can help control a kick with too much punch, or even glue the low-end together.
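To make the ducking behaviour concrete, here is a minimal, hypothetical sketch of the gain computation inside one dynamic EQ band: a static boost that ducks towards 0dB as the band's level rises past the threshold. A real dynamic EQ would first isolate the band with a filter and apply the gain as an actual EQ curve; every parameter value here is illustrative.

```python
import numpy as np

def dynamic_band_gain(band, sr, threshold_db, max_boost_db,
                      attack_ms=5.0, release_ms=80.0):
    """Per-sample gain (dB) for one hypothetical dynamic EQ band."""
    a_att = np.exp(-1.0 / (sr * attack_ms / 1000))
    a_rel = np.exp(-1.0 / (sr * release_ms / 1000))
    env = 0.0
    gains = np.empty(len(band))
    for i, x in enumerate(band):
        level = abs(x)
        # One-pole envelope follower: fast when rising, slow when falling
        coeff = a_att if level > env else a_rel
        env = coeff * env + (1 - coeff) * level
        level_db = 20 * np.log10(env + 1e-9)
        over = max(0.0, level_db - threshold_db)
        # Duck the static boost by however far the band exceeds threshold
        gains[i] = max(0.0, max_boost_db - over)
    return gains

# A quiet band keeps the full +6dB boost; a hot band ducks it to nothing
quiet = dynamic_band_gain(np.full(1000, 1e-4), 44100, -20, 6)
loud = dynamic_band_gain(np.ones(1000), 44100, -20, 6)
print(round(quiet[-1], 1), round(loud[-1], 1))  # 6.0 0.0
```

The attack and release constants control how quickly the boost ducks and recovers, which is exactly what shapes (or preserves) the punch of the master.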
It’s a very versatile tool and you should consider it every time you make an EQ move.
Not all dynamic EQs give you options to adjust the threshold, attack, and release, so you need to be careful about how the dynamic elements affect the punch of your master.
Unlike traditional mono or stereo processing, which affects the entire signal, mid/side processing allows you to treat the center and sides of a stereo channel independently. By isolating these signals, you can shape the balance of your stereo image, giving you more control over your mix. When used correctly, mid/side processing has the power to make your masters sound wider, deeper, and more focused.
When stereo mastering a song, you won’t have access to the original channels, or any way to rebalance the mix. But with a mid/side EQ, you can attempt to isolate the thump of the kick drum, the bite of the guitars, or the sizzle of the cymbals and give that frequency a boost or cut in the mix, depending on what is required to improve the sound.
One of the most common applications of mid/side processing when mastering is to adjust the volume of the lead vocal. The vocal sits in the mid channel and, depending on the singer, covers a frequency range somewhere between 300Hz and 7kHz. You can use a mid-focused EQ to adjust this range without affecting the audio information in the side channels in the same range. If you were boosting this range in stereo, you might make the synths in the side channels harsher; if you were attenuating it in stereo, the audio in the side channels might lack presence.
Mid/side processing can be used for a plethora of applications in mastering, too many to list. But the principle is always the same: with every EQ move, you need to clearly identify the goal, such as ‘make the vocal brighter’, ‘lower the kick’, or ‘make the synth sound wider’. Once you clarify the objective, you can determine where in the stereo field the element lives and apply the processing to either the mid or the side channel in that range. As with all these processes, a quick before-and-after test of the adjustment is always necessary.
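The encode/decode behind all of this is simple arithmetic. The sketch below shows the standard mid/side transform; a real mastering EQ would restrict the gain change to the vocal range with filters rather than apply it broadband, and the 1dB figure is illustrative.

```python
import numpy as np

def to_mid_side(left, right):
    """Mid = what both channels share; side = what differs between them."""
    return (left + right) / 2, (left - right) / 2

def from_mid_side(mid, side):
    return mid + side, mid - side

left = np.array([0.5, 0.3, -0.2])
right = np.array([0.4, 0.1, -0.3])
mid, side = to_mid_side(left, right)
mid = mid * 10 ** (1 / 20)            # e.g. lift the centred vocal by 1dB
new_left, new_right = from_mid_side(mid, side)
# The stereo difference (the side information) is untouched by the mid boost
print(np.allclose(new_left - new_right, left - right))  # True
```

Because the transform is perfectly invertible, whatever you do to the mid channel never leaks into the side information, and vice versa.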
Beginner producers make the mistake of not being subtle enough here. Mid-side processing is powerful enough to butcher a great sounding mix and totally destroy a well-crafted stereo and tonal balance. Your reference tracks will guide you here and keep you on the right path.
Should you use an Analogue Emulation or Digital EQ? There are thousands of different EQs available for you to shape your audio. Ultimately, you can use any EQ you like to adjust the tonal balance. You might choose to use a super clean parametric EQ that cleanly changes your sound without introducing extra vibe or character. Alternatively, there might be an analogue emulation plugin that you feel introduces a warmth that you can’t get with other EQs. This warmth, character, vibe or whatever you want to call it is often just harmonic distortion and specific filter traits.
Everyone has a different opinion on this; my personal experience is the following. When I first started, I thought buying a ton of analogue emulation EQs would dramatically improve my sound. In reality, I was able to achieve my sonic goals faster with a digital EQ combined with a versatile harmonic distortion plugin. Mastering is a passion but also a business for me, so it pays to be quick. For that reason, I haven’t used an analogue emulation plugin on a master for years, as it can take time searching for the one that works best with your audio. Nonetheless, I’m aware it’s a super-fun process, so if you’re feeling creative and want to explore different sounds then go for it.
This journey through the nuanced world of EQ in mastering has covered everything from the foundational strategies to the specific tools that can make or break the final product. It underscores the importance of subtlety and context, the reliance on professional tools to compensate for less-than-ideal listening environments, and the power of referencing and level matching to make informed decisions.
As with any art, mastering is a blend of science and intuition, requiring both discipline and creative daring. Experimentation, guided by an educated ear, remains a cornerstone of mastering excellence. Trust your ears, seek feedback, and remember that the goal of mastering is to ensure that your music resonates as intended, on any system.
Welcome to the definitive guide for all home studio enthusiasts looking to demystify and master the critical elements of monitoring and room acoustics for mastering. The final stage of music production, mastering, is where your creative impulses, technical skills, and acoustic environment coalesce to polish and prepare your track for the world's ears. However, without accurate monitoring and a well-treated room, even the most skilled mastering engineer can be led astray.
For those crafting their sonic masterpieces at home, the challenge often lies in optimizing limited resources to create a professional-sounding master. This is where the subtleties of sound play their part — the intricate dance of frequencies within your room, the honest reproduction of sound through your speakers, and the fine line between an acoustically untreated bedroom and a mastering suite.
In this blog post, we will delve into the why and how of setting up your space for the ultimate listening and mastering experience. We’ll explore how speaker selection can influence your perception, the importance of room setup for critical listening, the magic of acoustic treatment in transforming any ordinary room, and the nuances of monitoring calibration to ensure what you hear is what you get. Additionally, we'll look at how mastering with headphones can be a viable alternative or complement to traditional studio monitors.
Whether you are just starting or looking to refine your home mastering setup, this post aims to arm you with the knowledge to make informed decisions and apply effective solutions. So, let's embark on this journey to turn your home studio into a mastering sanctuary, where every note and nuance can be captured, evaluated, and celebrated.
To fully grasp the significance of room acoustics in mastering, we must first understand what 'room acoustics' entails. In the simplest terms, room acoustics refers to how sound waves interact within a given space. When sound emanates from your monitors, it's the beginning of a complex journey. These waves will travel outward, reflecting off surfaces, bouncing around corners, and either being absorbed or diffused by materials within your room. It's these interactions that determine the accuracy of what you hear and, by extension, the quality of your mastering efforts.
Home studios are often synonymous with acoustic challenges. Unlike professional studios, designed with sound in mind, home spaces must contend with less than ideal dimensions, resulting in common issues like standing waves, which cause certain frequencies to resonate unnaturally. Flutter echoes, another frequent troublemaker, introduce a rapid series of echoes that color the sound. Both phenomena can lead to a misleading representation of your mix and prove detrimental to the mastering process.
The shape and size of your room heavily dictate its acoustic properties. A square room, for example, is notorious for poor acoustics due to equal dimensions causing particular frequencies to build up at the same points. Larger rooms, while potentially offering a better frequency response, might not be practical or available for everyone. Thus, understanding your room's unique acoustic fingerprint is crucial for implementing effective solutions.
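The frequencies at which a dimension resonates are easy to estimate: the axial (single-dimension) mode series is f(n) = n·c/2L, with c the speed of sound. The 4m square room below is a made-up example showing why equal dimensions stack the same modes on top of each other.

```python
def axial_modes(length_m, c=343.0, count=3):
    """First few axial standing-wave frequencies (Hz) for one room dimension."""
    return [round(n * c / (2 * length_m), 1) for n in range(1, count + 1)]

# A hypothetical 4m x 4m square room: both dimensions resonate at exactly
# the same frequencies, so those modes reinforce each other
print(axial_modes(4.0))  # [42.9, 85.8, 128.6]
```

In a rectangular room with unequal dimensions the three mode series land at different frequencies and spread the energy out, which is one reason square rooms are singled out as acoustically difficult.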
Why does all this matter for mastering? Because mastering is about balance and translatability. It's the final polish, the last check to ensure your track sounds good on all playback systems, from hi-fi equipment to smartphone speakers. If your room is giving you a skewed perspective, you're not mastering the music — you're mastering the room. That's why a solid understanding of room acoustics is not just helpful; it's essential.
As you embark on your mastering journey, remember that the goal is a neutral, reliable listening environment. Your room doesn't need to be perfect — few ever are — but it needs to be understood. With the right knowledge you can tame your room's acoustics and ensure that what you're hearing is as close to the true sound as possible, allowing your mastering process to be as effective and accurate as it can be.
Selecting the right speakers, or studio monitors as they’re professionally known, is like choosing a canvas for a painter. It's the medium on which the art will come to life, and for mastering engineers, that canvas is the sonic landscape painted by their studio monitors. The clarity, flat frequency response, and detail that studio monitors provide are paramount for making precise adjustments during the mastering process.
When it comes to mastering, not all speakers are created equal. Monitors suitable for mastering should ideally have a flat frequency response, ensuring that no frequencies are unduly emphasized or suppressed. This flatness doesn't mean the sound will be unexciting; rather, it ensures that you can hear the mix as it truly is, warts and all. Additionally, look for monitors with low distortion and high headroom, allowing you to assess the dynamics of your track without the speakers adding their coloration or limitations to the sound.
Home studio owners often have to balance cost with quality. While top-tier monitors might be out of reach, there are many affordable options that still provide a trustworthy frequency response. It's about finding the best monitors within your budget that provide the most neutral sound. Online reviews, forums, and user feedback can be invaluable in making this decision. Remember, the goal is to know your monitors and their sound well, almost like a trusting relationship between musician and instrument.
The true test of mastering is how well the final product translates across various listening environments. Your studio monitors should be a reliable reference point for this translatability. The ultimate aim is for the music to sound consistent, whether it’s played through a car stereo, a pair of earbuds, or a club’s sound system. Monitors that reveal every detail, from the deepest bass to the highest treble, will give you the best chance at creating a master that translates well across all these systems.
In summary, the selection and understanding of your speakers are pivotal to the mastering process. They are the lenses through which you view your work, and their accuracy is paramount to the success of your final product. By choosing the right monitors for your space and budget, and using analytical tools to understand their performance in your unique environment, you're laying the groundwork for a mastering process that leads to professional, polished, and well-translated tracks.
Mastering is a critical listening process, and the room in which you master needs to facilitate absolute precision. For home studio users, creating an ideal mastering environment involves strategic room setup and speaker placement. While the perfect room may not always be achievable, understanding the principles of sound behavior in a physical space can help you get the most out of what you have.
The cornerstone of a good mastering room setup is speaker placement. Ideally, speakers should be positioned to form an equilateral triangle with the listening position. This means that the speakers are as far apart from each other as they are from you. Note that you cannot sit at the terminal point of the triangle, because you don’t have one big ear in the centre of your forehead; you have two ears spaced roughly six to seven inches apart. For this reason, the triangle should terminate 12-18” behind the head of the listener. This aligns the speakers so that the high frequencies strike the ears directly rather than glancing off them.
You should align the tweeters at ear height only if you’re using point-source monitors, like the Genelec Ones for example. Any other design will have the tweeter, the mid-range and, in a three-way design, the bass driver in separate physical locations. To ensure an equal time of flight between drivers, you must align your ears with the acoustical axis of the speaker, not the tweeter. If you align ear height with the tweeter, and the midrange sits a few inches below it, then the time of flight for the midrange will be greater than that of the high end, putting the drivers out of phase with each other. Monitors publish where their acoustical axis is in their documentation; in most two-way designs it sits midway between the mid driver and the tweeter.
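A quick, illustrative calculation shows why the acoustical axis matters: the extra path length to an off-axis driver arrives late, which translates into a phase offset at the crossover frequency. The 10cm driver offset, 1.5m listening distance, and 2.5kHz crossover below are assumed numbers, not specs from any particular monitor.

```python
import math

def driver_phase_offset(distance_m, offset_m, freq_hz, c=343.0):
    """Extra path length (m) to a driver offset from the listening axis,
    and the resulting phase offset (degrees) at the given frequency."""
    extra = math.hypot(distance_m, offset_m) - distance_m
    return extra, 360.0 * freq_hz * extra / c

extra, phase = driver_phase_offset(1.5, 0.10, 2500)
print(f"{extra * 1000:.1f}mm extra path, {phase:.1f} degrees at crossover")
```

The error grows quickly as listening distance shrinks or the driver offset grows, which is the reasoning behind aligning your ears with the acoustical axis rather than the tweeter.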
Moreover, avoid placing monitors too close to walls, especially corners, where bass frequencies can build up and cloud your judgement. Around a foot away from the back wall is a good starting point, but this will vary based on your room’s acoustics and the type of speakers you're using. If you must put a monitor in a corner due to space limitations, then use the built-in shelving filters or a big low shelf attenuation using room EQ.
The one boundary you want to make friends with is your front wall. If you gap your monitors away from your front wall, you create a big issue that you cannot solve downstream. You make space for SBIR / boundary nulling due to the bass reflection from the front wall rejoining the direct signal out of phase. This causes a big peak and null in your frequency response making it impossible to get a high quality bass frequency response in your room.
Most home studios will not have been built with acoustic perfection in mind. Non-ideal room dimensions can introduce standing waves and modal issues, where certain frequencies are unnaturally prominent or absent. In these scenarios, symmetry becomes your friend. Centering your setup in the room creates a more predictable and even stereo image, allowing for a more balanced listening experience.
To counteract less than ideal dimensions, you can adjust speaker placement to minimize issues. For example, if you have a particularly resonant frequency at your mix position, moving your speakers slightly can change the modal distribution and potentially alleviate the problem.
Once your speakers are positioned optimally, sit in your listening position and play some audio. Move around slightly in your chair; if the sound changes drastically in different positions, further adjustments may be needed. This could be in the form of speaker angle, where toeing them in—angling them slightly towards the listening position—can help focus the sound and provide a more precise stereo image.
Remember that the goal is to create a balanced, symmetrical, and focused listening environment where you can trust your ears. It's about crafting a space that doesn't lie to you, so when you make a decision during mastering, it's based on reality, not on the quirks of your room and speaker setup.
Ultimately, the interaction between your speakers and your room defines your mastering environment. It's a delicate dance, one where each component influences the other. By following these guiding principles, you'll create a room setup that complements your speakers and fosters a true reflection of the sound within your tracks. This harmony is crucial for mastering at home, as it allows you to make confident, accurate adjustments that translate well outside of your personal space.
Mastering precision in a home studio environment often requires dealing with less-than-ideal room acoustics. The goal of acoustic treatment is to minimize the room's influence on the sound, enabling you to hear the music as it is rather than as your room interprets it. For mastering, this clarity is vital—it's about finesse and the subtle adjustments that make a track not just good, but great.
Before slapping foam on every wall, take the time to identify what specific issues your room has. Common problems include excessive reverb, standing waves, bass buildup, and flutter echoes—all of which can mislead your mastering decisions. A methodical approach involves using test tones and sweeps to find problematic frequencies and positions in the room where these issues are most pronounced.
Once the problems are identified, strategic treatment can commence. Absorption panels are the first line of defense against problematic reflections. They are most effective when placed at first reflection points—locations on your walls where sound from the speakers directly reflects towards your ears. However, bass traps are essential for controlling low-frequency buildups, particularly in corners where these frequencies tend to congregate.
Diffusers are another key element of acoustic treatment. While absorbers 'remove' energy from the room, diffusers scatter it, creating a more even and natural sound field. This is especially helpful in smaller rooms, where the distance sound travels before reaching your ears is relatively short, leading to an unnatural sense of space in the sound.
It's crucial to strike a balance between absorption and reflection. Over-treating a room can make it unnaturally dead, which is as misleading as too much echo. The aim is to create a space that still feels 'alive' but doesn't color the sound. In mastering, the room should be transparent, allowing you to make decisions based on the music and not the room's characteristics.
For the home studio user, professional acoustic treatment might seem out of reach, but there are affordable and practical solutions. DIY bass traps and absorption panels can be made with common materials. Additionally, everyday items like bookshelves filled with books and strategically placed furniture can serve as rudimentary diffusers and absorbers.
When you treat your room for mastering, you are essentially calibrating your most significant piece of gear—your workspace. A well-treated room allows you to hear with precision and make informed decisions, providing confidence that your masters will translate well in the real world. In your home studio, the power of proper acoustic treatment cannot be overstated—it is the unsung hero of many a polished master.
Monitoring calibration is the often overlooked but essential process for achieving a consistent and reliable mastering output. The goal is to ensure that your monitors are providing a true representation of the sound, free from distortion and imbalances that can lead to poor judgement calls during mastering.
Calibration involves setting up your monitors to a reference level that allows for both dynamic range and nuance detection. This doesn't mean always mixing at high volumes; in fact, mastering at moderately low levels can often provide a clearer perspective on balance and stereo imaging. A standardized level also helps in maintaining consistency throughout the mastering process, so your ears aren't fooled by loudness when comparing different tracks or making critical adjustments. Check out our full step-by-step video below on how to calibrate your studio monitors:
The "sweet spot" in volume for mastering is often around 73-76 dB SPL, but this may not always be practical or safe over long sessions, especially in a home studio environment. Instead, the key is finding a comfortable level where the full frequency spectrum is audible and you can listen for extended periods without fatigue. Keep in mind that we hear frequencies differently at various volume levels: we hear less bass when it's quieter, and more bass when it's louder.
In some cases, the room's acoustics may be beyond what passive acoustic treatment can address. This is where room correction software and hardware come into play. These systems use microphones to measure the room's response and then apply corrective EQ curves to the monitor output. While not a replacement for acoustic treatment, they can certainly complement it, especially in troublesome or untreated spaces.
Calibrating your monitors is not just about the speakers themselves; it's also about ensuring consistency across various listening devices. This includes checking your masters on different playback systems to verify that your calibration translates well. Making adjustments back in the calibrated environment can help iron out any kinks that reveal themselves on other systems.
Calibration is not a set-and-forget process. It should be revisited periodically, especially if the room changes or new equipment is added to the setup. Keeping notes on your calibration settings can help in maintaining a benchmark and making the process more efficient over time.
Proper monitoring calibration sets the foundation for making confident mastering decisions. It ensures that what you're hearing is accurate and that the adjustments you're making will enhance the music, not just the sound in your room. By taking the time to calibrate your monitoring setup, you're investing in the trustworthiness of your mastering environment, and consequently, in the quality of your final product.
While speakers are the primary tool for mastering engineers, headphones play a complementary role, especially in a home studio setup. They offer an intimate listening experience and can reveal details that might be missed on speakers, particularly in untreated or imperfectly treated rooms.
Mastering requires headphones that offer a flat, neutral frequency response to accurately assess the mix without coloration. Open-back designs are generally preferred for mastering purposes due to their wider soundstage and more natural listening experience. However, they also allow sound to escape and enter, which could be an issue in noisy environments.
Just as with monitor speakers, headphones can also benefit from calibration. Software solutions are available that apply correction curves tailored to specific headphone models, aiming to bring their frequency response as close to flat as possible. Mastering The Mix's reference tools can be utilized here to ensure your headphone mixes translate well to speakers.
Despite their benefits, headphones do have limitations. They can exaggerate the stereo field and provide no sense of crossfeed (the natural interplay between left and right channels in a room), which can lead to mixes that don't translate well to speakers. Additionally, the lack of physical impact from low frequencies can mislead decisions about bass levels.
Take frequent breaks to prevent ear fatigue and preserve critical listening abilities.
Use crossfeed plugins to simulate speaker listening and achieve a more natural stereo image.
Compare your headphone master against speaker masters to ensure consistency and translation.
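Crossfeed itself is straightforward to sketch. The minimal version below feeds an attenuated, slightly delayed copy of each channel into the other; real crossfeed plugins also low-pass the bled signal to mimic head shadowing, and the -6dB / 0.3ms values are illustrative, not a recommendation.

```python
import numpy as np

def simple_crossfeed(left, right, sr, amount_db=-6.0, delay_ms=0.3):
    """Bleed each channel into the other, attenuated and delayed,
    crudely mimicking how both ears hear both speakers in a room."""
    d = int(sr * delay_ms / 1000)
    g = 10 ** (amount_db / 20)
    bleed_l = np.concatenate([np.zeros(d), right[:len(right) - d]])
    bleed_r = np.concatenate([np.zeros(d), left[:len(left) - d]])
    return left + g * bleed_l, right + g * bleed_r
```

With this in place, a hard-panned element no longer sits in one ear only; it also appears quieter and slightly later in the opposite channel, much closer to what speakers in a room would deliver.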
While they cannot replace speakers, especially in a professional mastering context, headphones are an essential tool in the mastering toolkit, offering a valuable second reference that can provide insight and depth to the mastering process in a home studio environment.
P.S. My personal favourite is the Sennheiser HD800s.
Mastering is the final polish on the musical gem, and achieving professional results in a home studio is both an art and a science. Through careful speaker selection, room setup, acoustic treatment, monitoring calibration, and the strategic use of headphones, home studio users can approach the precision and clarity required for mastering at a high level.
Speaker Selection lays the foundation, with monitors that bring out every nuance of your mix.
Room Setup and Speaker Placement dictate how sound travels and interacts with the listener.
Acoustic Treatment ensures your room tells you the truth about what you’re hearing.
Monitoring Calibration establishes a reliable reference, so you can trust your ears and your gear.
Headphones offer an alternative perspective, revealing subtleties that may be obscured in a room.
While the limitations of a home studio are real, they are not insurmountable. With the right knowledge, tools, and critical listening skills, a mastering engineer can transform a home setup into a space where professional mastering isn't just possible—it thrives.
Mastering The Mix’s range of products can serve as valuable allies in this journey, providing the analytical and corrective assistance needed to refine your environment and your ears. Tools like REFERENCE and LEVELS can guide you towards better monitoring conditions, provide insights into acoustic treatment effectiveness, and help calibrate both your monitors and headphones for the utmost accuracy.
In the end, the quest for the perfect master is a quest for transparency—ensuring that the music translates well in any environment, touches the listener as the artist intended, and stands the test of time. By mastering the art of mastering in a home studio, you can ensure your music does just that.
Mastering is the alchemy that transforms a mix into a final product — the bridge between creation and consumption. It’s an art form where technical precision meets nuanced artistry, ensuring that a piece of music sounds its best on any system, anywhere. Historically, mastering was the final check before mass-producing vinyl records, but today it's a critical step in digital music distribution, offering the last chance to optimize and unify sound for global audiences.
But mastering is not mixing rehashed. While mixing balances the elements within a song, mastering fine-tunes this balance and brings uniformity to an entire album. It's about achieving the right loudness without sacrificing dynamics, enhancing stereo width without losing focus, and striking the perfect EQ balance for a consistent playback experience.
In a world where music is consumed on a myriad of devices, from high-end sound systems to smartphone speakers, mastering is your music's passport to clarity, relevance, and resonance. Let's unpack this pivotal process, guiding you through loudness, dynamics, stereo width, EQ balance, and the basic mastering signal chain — essential knowledge for the modern music producer.
If you're a music producer, understanding the line that separates mastering from mixing is pivotal. Here’s the lowdown: Mixing is like assembling the pieces of a puzzle to see the full picture, while mastering is like putting a frame around that picture to display it.
Mixing is your creative playground. It’s where you balance levels, pan audio, apply effects, and sculpt the sound to create the final mix. A mix engineer focuses on the minutiae — the relationship between the kick and bass, the reverb on a snare, the EQ on a vocal. Each track has its place, and it's the mixer's job to ensure they all play nicely together. Actionable tip: Use reference tracks to guide your mix. A/B your work with tracks you admire to benchmark your sound.
Mastering takes a macro view. A mastering engineer steps in after the mix is complete, acting as a fresh set of ears. They apply broad strokes, using EQ, compression, limiting, and stereo enhancement to create a cohesive sound across an entire album or single. Their goal? Ensure that your music translates well whether it’s played on a club system, car stereo, or earbuds. Actionable tip: Before you start mastering, take a break after mixing. Fresh ears will give you a new perspective on what the track truly needs.
Mastering isn't just about making music loud; it's about making it lively. It’s where technicality and creativity collide to serve a multitude of goals, ensuring that your track isn’t just heard — it's felt.
The first goal of mastering is to bring out the clarity and ensure cohesion. Whether you’re polishing a single or an entire album, each track should sound crisp and part of a unified collection. Actionable tip: Apply subtle EQ adjustments to clear up muddiness and enhance brightness, ensuring each track has its own space while still belonging to the same family.
Your track should bang on every system — from a festival PA to a pair of AirPods. Mastering fine-tunes the mix for consistent playback across all listening environments.
Actionable tip: Check your masters on different sound systems and streaming platforms to verify consistency. Online services can simulate how a master will sound on various platforms.
Yes, volume is vital — it's a loudness-centric world out there. But a master that's loud yet lifeless is no master at all. Strive for a robust volume that retains dynamic vitality. Actionable tip: Target optimal loudness levels (like -14 LUFS for streaming) without over-compressing, keeping an eye on the meters, and an ear on the dynamics.
Mastering shapes the listener's perception. It’s about creating an immediate impact and a lasting impression. The final master should enhance the emotional response the mix is already primed to deliver. Actionable tip: Use reference tracks to gauge the impact of your master. If the emotional punch matches or exceeds that of the reference, you're on the right track.
Lastly, mastering ensures your music is ready for any format — from vinyl to streaming. The nuances of each require a tailored approach to preserve the integrity of your sound. Actionable tip: Understand the requirements for each format. For instance, vinyl demands attention to low-end management and stereo width.
By hitting these targets, you solidify your music's identity, ensuring it not only stands up to the competition but also stands out. Mastering is your last line of defense and your first step into the listener’s world. Make it count.
In the mastering realm, loudness isn't just about volume — it's about perception, presence, and power. Loudness gives your track the strength to stand alongside others and the subtlety to stand apart.
Perceived Loudness: This is how your listeners interpret the strength of your track. It's affected by the frequency content, dynamics, and duration of a sound, and is most accurately measured in LUFS.
Actual Loudness: This is measurable. It's the sound pressure level (SPL) or the electrical signal level.
Actionable tip: Focus on perceived loudness by using EQ to enhance critical listening bands (like presence and brilliance) without cranking the overall level too high.
The industry uses Loudness Units relative to Full Scale (LUFS) to measure loudness. This standard helps in maintaining consistency across various listening platforms.
Actionable tip: Familiarize yourself with the target LUFS for your distribution platform (e.g. Spotify targets around -14 LUFS, and if you want to create a loud club master, aiming for a -6 LUFS short-term maximum is ideal).
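Because LUFS is a dB-like scale, working out how far a master is from a platform target is simple arithmetic. A minimal sketch (the function name is ours, purely for illustration):

```python
def gain_to_target(measured_lufs, target_lufs):
    """Return the dB gain and the linear factor needed to move a
    master from its measured integrated loudness to a target.
    LUFS differences behave like dB, so the offset is a subtraction."""
    gain_db = target_lufs - measured_lufs
    linear = 10 ** (gain_db / 20)
    return gain_db, linear

# e.g. a -9 LUFS club master turned down for a -14 LUFS streaming target:
db, lin = gain_to_target(-9.0, -14.0)  # -5 dB, linear factor ~0.56
```

This is also roughly what streaming platforms do when they normalize playback: a master louder than the target simply gets turned down.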
Our loudness meter LEVELS can help you keep all your technical details in check without taking you out of your creative flow.
Learn more about how LEVELS can help you create a technically excellent master.
The loudness war has pushed music to be louder at the expense of dynamic range. The casualty? Emotional impact. Listeners crave dynamics — the ebb and flow of energy in a track.
Actionable tip: Strike a balance. Use a combination of peak limiting and volume automation to maintain dynamic interest while achieving competitive loudness.
Maximizing volume should not sacrifice clarity. When a track is overly compressed, it can become fatiguing to the ear and lose its punch.
Actionable tip: Use a limiter judiciously. Set the threshold so that it only engages during the loudest parts, preserving the track's dynamics.
Loudness is not one-size-fits-all. A jazz piece will have a vastly different loudness profile compared to a heavy metal track.
Actionable tip: Use genre-appropriate reference tracks to inform your loudness decisions. Aim to match the overall energy while respecting the genre's dynamic characteristics.
Mastering with loudness in mind is a delicate dance between science and art. By understanding the nuances of loudness and applying these actionable tips, you’re equipped to master tracks that command attention while respecting the listener's experience.
Mastering dynamics is the art of shaping the ebb and flow of a track's energy. It's what makes a song breathe, pulse, and come alive. Dynamics determine the impact of quiet intros, explosive drops, and every nuance in between.
Dynamics: The range between the quietest and loudest parts of a track. It's the contrast that creates tension and release, an essential element of music that moves us.
Dynamic Range: This is the difference in volume between the softest and loudest sounds. In mastering, it's about managing this range to preserve the track's expressiveness without getting lost in translation.
Actionable tip: Use dynamic range meters during mastering to visualize the impact of your processing. Aim to preserve the track's natural fluctuations.
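If you want a quick numeric proxy for how dynamic a signal is, the crest factor (peak-to-RMS ratio in dB) is easy to compute. A plain-Python sketch; real dynamic range meters use more sophisticated measurements (such as PLR or PSR), so treat this only as an illustration:

```python
import math

def crest_factor_db(samples):
    """Peak-to-RMS ratio in dB: a rough proxy for dynamics.
    Heavily limited masters tend toward low crest factors."""
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(peak / rms)

# A square wave has a 0 dB crest factor (peak equals RMS);
# a pure sine sits at about 3 dB.
square = [1.0, -1.0] * 100
sine = [math.sin(2 * math.pi * n / 100) for n in range(100)]
```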
Compression in mastering is a balancing act. It's used to tame peaks that stick out and to bring up quieter sections, adding body and consistency to the track.
Compressors reduce the dynamic range by attenuating the volume when it exceeds a certain threshold.
Actionable tip: Set a compressor with a gentle ratio and a slow attack time to subtly even out the dynamic range without squashing the life out of your track.
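The static behaviour of a downward compressor (before attack and release smoothing is layered on top) reduces to a simple gain curve. A minimal sketch with illustrative threshold and ratio values:

```python
def compressor_gain_db(level_db, threshold_db=-18.0, ratio=2.0):
    """Static gain curve of a downward compressor: levels above the
    threshold are scaled down by the ratio; below it, no gain change.
    Returns the gain change in dB (negative = gain reduction)."""
    if level_db <= threshold_db:
        return 0.0
    over = level_db - threshold_db
    return over / ratio - over

# A -12 dB signal is 6 dB over an -18 dB threshold; at 2:1 only
# 3 dB of that overage survives, i.e. 3 dB of gain reduction.
```

The "gentle ratio" in the tip above corresponds to keeping `ratio` close to 1, so the curve bends only slightly above the threshold.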
Limiters are the last line of defense against clipping and the tool for achieving competitive loudness. They're similar to compressors but with a higher ratio and are designed to stop the audio from peaking above a set threshold.
Limiters ensure that the final level of the track doesn't exceed digital limits, preventing distortion.
Actionable tip: Use a limiter to cap the peaks while carefully listening for any signs of over-compression, such as pumping or a loss of punchiness.
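To make the ceiling guarantee concrete, here is the crudest possible "limiter": a per-sample clamp. Note that this is really just clipping; actual mastering limiters use look-ahead and release envelopes to bring peaks down transparently. Purely illustrative:

```python
def hard_limit(samples, ceiling=0.98):
    """Brick-wall behaviour at its crudest: any sample beyond the
    ceiling is clamped. Real limiters reduce gain smoothly around
    each peak instead of flattening the waveform like this."""
    return [max(-ceiling, min(ceiling, s)) for s in samples]

peaks = [0.5, 1.2, -1.5, 0.9]
limited = hard_limit(peaks)
```

What a good limiter buys you over this clamp is exactly the transparency discussed above: the same ceiling guarantee without the audible distortion of squared-off peaks.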
The pursuit of loudness can often lead to a reduced dynamic range. However, a track without dynamics can sound lifeless and fail to engage the listener.
Loudness vs. Dynamics: It's crucial to find the sweet spot where the track feels powerful yet dynamic and expressive. The following image shows what an over-compressed master looks like; this is what to avoid.
Actionable tip: Adjust the threshold and make-up gain on your compressor and limiter to achieve a loud master that still retains dynamic moments of impact. Increasing the attack of the limiter can also help to keep the processing transparent.
Different genres demand different dynamic considerations. For example, a classical piece may require a wide dynamic range, while a pop song might aim for a more consistent level throughout. Being aware of genre-specific dynamics can guide your mastering process effectively.
Actionable tip: Listen to well-mastered tracks in the genre you're working with and note the dynamic range they maintain.
Mastering with dynamics in mind is about preserving the emotional intent of the music while preparing it for the practical realities of playback across various systems and formats. By controlling dynamics thoughtfully, you ensure that the music’s heartbeat remains strong and resonant with every listener.
Stereo width in mastering is essential for creating an immersive listening experience. It's the element that gives a track its sense of space, allowing individual elements to breathe and find their own place in the sonic landscape.
Stereo Width: The perceived space between the left and right channels of a track. A wider stereo image can provide a more engaging and live-sounding master.
Actionable tip: Use our plugin 'REFERENCE' to compare your track’s stereo width with that of your chosen reference tracks to ensure you’re on the right path.
Manipulating the stereo image can be achieved through a variety of tools and techniques:
M/S Processing: Mid/Side processing allows you to separately adjust the mono (Mid) content and the stereo (Side) content, providing a fine-tuned control over the stereo image.
Stereo Enhancers: These can artificially widen the stereo image by manipulating phase and EQ.
Actionable tip: Utilize a plugin like Mastering The Mix’s 'ANIMATE' to push the boundaries of your stereo image in a controlled manner. Its 'Expand' module can increase the stereo image with a focus on transparency and avoiding phase issues.
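The Mid/Side math itself is compact enough to sketch. This toy function (the name and gain value are illustrative, not any plugin's algorithm) encodes L/R into mid and side, scales the side component, and decodes back:

```python
def widen(left, right, side_gain=1.5):
    """Mid/Side widening: mid carries the mono content, side carries
    the stereo difference. side_gain > 1 widens, < 1 narrows,
    and 0 collapses the image to mono."""
    out_l, out_r = [], []
    for l, r in zip(left, right):
        mid = (l + r) / 2
        side = (l - r) / 2 * side_gain
        out_l.append(mid + side)
        out_r.append(mid - side)
    return out_l, out_r

# side_gain=0 folds the signal to mono (both channels become identical):
mono_l, mono_r = widen([0.8, 0.2], [0.2, 0.8], side_gain=0.0)
```

A side gain of exactly 1 reconstructs the original L/R signal, which is why M/S processing is lossless when you leave the balance untouched.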
While widening the stereo field can add a sense of grandeur to a master, it's not without its potential pitfalls:
Phase Correlation: Poorly managed stereo widening can lead to phase issues, which can cause elements of the mix to disappear or sound thin when played back in mono.
Actionable tip: Always monitor the phase correlation when adjusting the stereo width. Mastering The Mix’s 'LEVELS' includes a phase correlation meter to help keep an eye on this crucial aspect.
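A phase correlation meter essentially computes the normalized correlation between the two channels. A minimal sketch of that calculation in plain Python (not the algorithm of any specific meter):

```python
import math

def phase_correlation(left, right):
    """Correlation between channels, from +1 (identical, fully
    mono-compatible) through 0 (uncorrelated) to -1 (out of phase,
    which cancels completely in mono)."""
    energy = math.sqrt(sum(l * l for l in left) * sum(r * r for r in right))
    if energy == 0:
        return 0.0
    return sum(l * r for l, r in zip(left, right)) / energy

mono = [0.3, -0.5, 0.7]
flipped = [-s for s in mono]  # polarity-inverted copy
```

Readings that dip toward or below zero after widening are the warning sign: that energy is what vanishes on a mono playback system.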
By mastering the art of stereo width and imaging, you ensure that your tracks have the dimensional depth to envelop listeners in an auditory experience that extends beyond the confines of their speakers or headphones. With the right tools and a careful ear, the stereo field becomes a dynamic playground that can enhance the emotional impact of any track.
Equalization in mastering is the decisive step that ensures every frequency has its rightful place, contributing to a cohesive and balanced sound. It's the fine-tuning that can make a master sound polished and professional across all playback systems.
The process of adjusting the balance between frequency components within an audio signal. In mastering, EQ is used to enhance the overall mix, ensuring clarity and balance without altering the mix's fundamental character.
Actionable tip: Employ a plugin like Mastering The Mix’s 'BASSROOM' for finalizing the low-end balance, ensuring your master has a solid and well-defined bass.
Mastering is your last chance to correct any minor frequency imbalances that can affect the translation of the mix. Problem areas can often be found in the muddy low-mids, harsh high-mids, or brittle highs.
Actionable tip: Use our 'REFERENCE' plugin to identify how your track stacks up against industry standards and spot frequency imbalances. This will help you make informed decisions on where to apply EQ adjustments.
A balanced EQ spectrum ensures that no frequency band is overpowering or underrepresented, providing a sense of depth and space:
The goal is to achieve a smooth, even distribution of frequencies, which translates well on any playback system, from a club PA to smartphone speakers.
Actionable tip: Use Mastering The Mix’s 'MIXROOM' for intelligent and genre-specific EQ recommendations, helping you to achieve a balanced spectrum more quickly and effectively.
In mastering, EQ is less about sweeping changes and more about subtle shifts that respect the mix's integrity while ensuring it translates universally. Mastering The Mix's suite of tools can guide you through these subtleties, ensuring that your master not only sounds great on your studio monitors but everywhere else, too.
The mastering signal chain is a sequence of processes that your audio undergoes during mastering. Each component in this chain plays a pivotal role in enhancing the track's overall sound quality.
Signal Flow: This is the order in which the audio signal is processed by various mastering tools. A typical chain starts with EQ, followed by compression, stereo enhancement, and finally, limiting. But there are many different variations on this and you can add other processes in the signal flow too.
Actionable tip: Insert our plugin RESO as the first plugin in your mastering chain to fix any problematic resonances. Then proceed to dynamic and stereo enhancements before applying broader EQ strokes and limiting.
EQ: The first step often involves EQ to remove any unwanted frequencies and resonances, creating headroom for the subsequent stages.
Compressor: A compressor then smooths out the dynamics, ensuring a consistent listening experience.
Stereo Imaging: Tools like 'ANIMATE' can then be employed to enhance the stereo width and add depth to the mix.
Optional EQ: Following stereo imaging, further EQ can be used to sweeten the track, subtly enhancing certain frequency ranges.
Optional Harmonic Distortion: If the track needs more energy, harmonic distortion can be used to thicken up the sound.
Limiter: Finally, a limiter, such as the one in 'LEVELS,' ensures the track's loudness is optimized without clipping or distorting.
Actionable tip: Our plugin LIMITER analyses your audio and helps you get the perfect loudness, whether you’re optimizing for streaming or creating a super-loud master. It also suggests the best attack, link, and ceiling settings based on your song’s unique characteristics.
The order in which you apply these processes can significantly affect the final sound. For instance, applying EQ before compression can lead to a more natural sound, as the compressor reacts to the already balanced signal.
The limiter must be the final plugin as it controls the peaks of your audio. Inserting an EQ with boosts after the limiter would lead to clipping and distortion (which we want to avoid at this stage).
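The chain described above can be pictured as an ordered list of processing stages, each feeding the next, with the limiter last. The stage functions below are hypothetical one-line stand-ins (the gain values are arbitrary); the point is the ordering, not the DSP:

```python
# Hypothetical stand-ins for each mastering stage, applied in order.
def corrective_eq(s):  return s            # cut resonances first
def compress(s):       return s * 0.9      # tame dynamics
def widen_stereo(s):   return s            # stereo enhancement
def sweetening_eq(s):  return s * 1.1      # optional tonal lift
def limit(s):          return max(-1.0, min(1.0, s))  # always last

chain = [corrective_eq, compress, widen_stereo, sweetening_eq, limit]

def master(sample, chain=chain):
    """Run one sample through the chain in order; because the
    limiter is last, the output can never exceed the ceiling."""
    for stage in chain:
        sample = stage(sample)
    return sample
```

Reordering the list (say, moving `sweetening_eq` after `limit`) would break the ceiling guarantee, which is exactly the rule stated above.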
Actionable tip: Be methodical with your signal chain order. Use Mastering The Mix’s 'REFERENCE' to switch between your track and reference tracks quickly, ensuring that each step in your chain moves the master closer to your target sound.
Understanding and strategically setting up your mastering signal chain is crucial to achieve a polished and professional sound. Each element of the chain serves a unique purpose, and the cumulative effect of these processes is a master that’s ready to stand out in today's competitive music landscape.
Mastering is not a one-size-fits-all process, especially when considering the various formats and environments in which music is consumed. Understanding how to tailor your master for different mediums is essential for optimal playback quality.
Dynamic Range and Bass Considerations: Vinyl requires careful handling of low-end frequencies and dynamic range due to the physical limitations of the medium.
Actionable tip: Use the 'BASSROOM' plugin to ensure that your bass frequencies are well-balanced and mono-compatible to prevent playback issues on vinyl.
High-Resolution: CDs can accommodate higher resolution audio, so full dynamic range and frequency response should be maintained.
Actionable tip: Maintain the highest possible resolution and dynamic range until the final bounce, checking the final master with 'LEVELS' to ensure no peaks are clipping.
Loudness Normalization: Streaming services implement loudness normalization, so hitting a specific LUFS target is crucial.
Actionable tip: Utilize 'LEVELS' to check the integrated LUFS to make sure your master meets the loudness criteria of different streaming platforms.
Consistency Across Systems: A master should translate well, whether it’s played through headphones, car speakers, or club sound systems.
Actionable tip: Reference your master on different sound systems, and use 'REFERENCE' to ensure your track competes with the best in your genre.
By mastering with the format in mind, you ensure that your music not only sounds great but also respects the technicalities and nuances of each medium, providing your listeners with the best possible experience.
Throughout this article, we've explored the essentials of mastering, from understanding its purpose to diving deep into the technical aspects that make a great master. We’ve uncovered the significance of loudness, dynamics, stereo width, EQ balance, and the mastering signal chain, and how to tailor your sound for different formats.
Mastering is the final polish, the last step in presenting your music to the world. It’s about balance and finesse, ensuring that your track not only stands up to the competition but also resonates with the listener regardless of where or how they’re tuning in.
Remember, the tools and tips provided are designed to help you achieve technical excellence without stifling creativity. However, mastering is as much an art as it is a science, and it requires a critical ear, a touch of finesse, and an understanding of the emotional impact of music.
We encourage all producers, whether novice or seasoned, to continue honing their mastering skills. It’s a journey of constant learning and experimentation, with each track offering a new opportunity to refine your craft.
By keeping the principles we’ve discussed in mind and staying up-to-date with the latest tools and techniques, you’re well on your way to creating masters that not only sound exceptional on every system but also carry the emotional weight and intention of your music into the ears and hearts of listeners around the world.
The integration of artificial intelligence (AI) in various sectors has been nothing short of a revolutionary whirlwind, altering landscapes of industries from healthcare to finance. But nowhere does this integration feel more like a dramatic symphony than in the world of music production. AI's foray into this creative domain isn't just a story of technological evolution; it's a fascinating tale of how the very essence of music creation is being re-examined and redefined.
AI, initially perceived as a cluster of complex algorithms suitable for computational tasks, has gradually morphed into a versatile artist in the music industry. From the days when music production was synonymous with tangible instruments and recording studios lined with analog equipment, we've entered an era where digital technologies, and specifically AI, have become integral to how music is created, produced, and even consumed.
Interest in AI Music peaked in April 2023 but remains high.
The integration of AI in music production sparks a debate as old as innovation itself: Do new technologies enhance the artist's toolkit, or do they threaten the very soul of artistic authenticity? In this blog post, we'll explore this intricate dance between AI and music producers. Is AI stepping on the toes of traditional composers, or is it waltzing seamlessly with them, leading to a new era of collaborative creation?
As we delve into this rhythm of revolution, expect to uncover:
So, whether you're a music producer, an AI enthusiast, or just a curious bystander in this symphonic interplay of technology and creativity, this post promises insights, debates, and maybe a new perspective on the role of AI in music production.
Tune your senses to this exploration, and let's find out if AI in music production is a threat looming on the horizon or a companion walking beside us into a new era of musical innovation.
The journey from vinyl to virtual reality in music production has been a blend of breakthroughs and bold ventures, but the entrance of AI into this arena could very well be the crescendo we didn't see coming.
Let's rewind the tape a bit. The history of music production is a testament to constant innovation. From the invention of the phonograph in the late 19th century to the synthesizers of the 1960s, each technological advancement redefined what music could be. The digital age accelerated this evolution exponentially, with computers, software synthesizers, and digital audio workstations (DAWs) transforming mere sounds into rich, complex musical compositions. The digital revolution set the stage for something even bigger, paving the way for AI's debut in music production.
The introduction of AI in music isn't just a new chapter; it's almost like a new genre. Initially, AI's role was confined to algorithms that could recommend music based on listener preferences or master tracks with subtle precision. However, the turn of the millennium saw AI stepping directly into the shoes of a composer and producer.
Machine learning models, trained on vast datasets of existing music, began to understand and replicate complex musical structures, creating compositions that blurred the lines between human and computer-generated music. Tools like Google's Magenta, based on TensorFlow, started not just mimicking existing music but generating novel compositions, opening a Pandora's box of creative and philosophical questions about the nature of art and creation.
In today's studios, AI's presence is both nuanced and notable. From AI-driven plugins that enhance mixing and mastering to sophisticated algorithms writing entire pieces, the spectrum of AI's involvement is vast. Tools like IBM's Watson Beat, AIVA (Artificial Intelligence Virtual Artist), and Amper Music are not only aiding composers in fleshing out ideas but also generating complete tracks that challenge our traditional notions of composition and creativity.
The statistics paint a clear picture: AI in music production isn't just a fleeting trend; it's reshaping the landscape in ways we're just beginning to comprehend. As AI continues to harmonize with traditional music production methods, it's essential to ask: Is AI's role in music merely about enhancing human creativity, or is it overshadowing the human element altogether?
In the next section, we'll tune into this debate, examining AI not only as a tool but as a potential titan in the arena of musical composition.
The notion of AI as a potential rival to human composers isn't just a speculative thought; it's a discussion echoing through the halls of recording studios and music academies alike. But how real is this threat?
The fear that AI might render human composers obsolete isn't unfounded. With AI systems like OpenAI's Jukebox composing songs that mimic styles ranging from Elvis Presley to Katy Perry, the lines are blurring. Albums like "I AM AI," produced by YouTube star Taryn Southern with the aid of Amper Music's AI, further fuel the debate. These aren't just algorithmic experiments but commercially viable tracks challenging the notion of authorship and creativity in music.
To understand the potential threat of AI, we must dissect what AI can realistically achieve. AI excels at pattern recognition, learning from existing musical structures, and creating compositions based on these datasets. However, AI's ability to evoke deep emotional resonance, a hallmark of great music, remains debatable.
Music, at its core, is an expression of human emotion and experience. While AI can replicate styles and structures, the jury is still out on whether it can truly capture the spontaneous burst of human feelings and the storytelling aspect that human composers bring to their music.
The debate brings us to a crucial juncture: Is the fear of AI replacing human composers justified, or are we witnessing a new kind of partnership in the making? Perhaps, the real question is not about replacement but about transformation and evolution.
In the upcoming section, we'll shift the lens to view AI not just as a rival but as a potential collaborator in the creative process. We'll explore how this technology, often seen as a competitor, could in fact be a powerful ally for musical innovation.
Rather than viewing AI solely as a threat to traditional composition, it's worth exploring its role as a dynamic ally in the creative process. How is this technological marvel reshaping music production, not by replacing, but by augmenting human creativity?
The advent of AI in music production is less about the robot apocalypse and more about a renaissance in creativity. AI tools provide a new palette for composers to experiment with sounds and structures that might be beyond human reach.
The real magic happens when human expertise meets AI efficiency. Music producers increasingly view AI as a collaborator, an entity that brings a new dimension to their creative arsenal.
AI, in these scenarios, isn't a soloist taking center stage but more of a section player in an orchestra, adding depth and dimension while being guided by the human conductor.
The evolution from seeing AI as a competitor to a collaborative partner involves a paradigm shift. It’s about recognizing AI’s role as a tool in the artist’s kit - one that can open doors to uncharted musical territories.
Embracing AI in music production can be empowering rather than threatening. It offers an opportunity to break free from traditional confines, pushing the boundaries of creativity and innovation. The key lies in using AI to do what it does best - managing data, identifying patterns, and executing repetitive tasks - thereby freeing the human artist to focus on the heart and soul of music: emotion and storytelling.
As we move forward, the partnership between AI and human creativity in music seems not just inevitable but potentially fruitful. In the next section, we'll delve deeper into the ethical and artistic implications of this relationship, exploring the balance between technological advancement and artistic integrity.
The rise of AI in music production isn't just a technological and creative shift; it also brings forth a myriad of ethical and artistic implications that warrant a closer look.
One of the most pressing concerns in the realm of AI-generated music revolves around originality and copyright. Who owns a piece of music when it's composed by AI based on learning from thousands of existing works?
Copyright Challenges: The legal system is still catching up with the complexities introduced by AI in music. Determining the copyright of AI-created music, especially when it's influenced by existing human compositions, remains a grey area.
Beyond the legalities, there's an artistic debate simmering beneath the surface. Can music created with the help of AI be considered 'authentic'? This debate taps into deeper questions about art and technology.
Artistic Merit: Discussions around whether AI-composed music can be considered art, or if it's merely a technological output, are ongoing.
The Human Touch: Many argue that the soul of music lies in its imperfections and human nuances, something an AI might not be able to replicate.
How ethical is the use of AI in music production? This question covers not only the creation of the music but also the data used to train these AI systems.
Data Transparency: Understanding where and how the training data for AI systems is sourced is crucial. There are concerns about consent and acknowledgment when it comes to using existing tracks for training purposes.
Bias and Diversity: AI systems are only as unbiased as the data they're fed. There’s a risk of perpetuating existing biases in genre, style, or cultural representation in music if AI isn't trained on diverse datasets.
Navigating these ethical and artistic complexities is crucial as we integrate AI more deeply into the music production process. Recognizing and addressing these concerns is not just about safeguarding human creativity but also about ensuring fairness, originality, and diversity in the music industry.
In the next section, we'll conclude by looking towards the future - pondering over how to best prepare for an era where AI is an undeniable part of the music production landscape, ensuring a harmonious balance between technological innovation and the timeless human essence of music.
As we stand on the cusp of a new era in music production, the integration of AI isn't just inevitable; it's already happening. The challenge now is not in resisting this wave but in learning how to surf it with skill and ethical consideration.
Embracing AI in music production requires a blend of openness, awareness, and strategic thinking. For the contemporary music producer, staying informed and adaptable is key.
Continuous Learning: Keeping up with the latest AI advancements and tools is essential. Understanding the mechanics and capabilities of AI in music can empower producers to use these tools effectively and creatively.
Skill Diversification: Encouraging a broader skill set that goes beyond traditional music production, including AI programming and data analytics, can offer producers a significant edge.
Looking ahead, the role of AI in music is set to expand, bringing both challenges and opportunities. Predictions about AI's future impact range from complete automation of certain production processes to new forms of interactive and adaptive music experiences.
Interactive Music Experiences: AI might lead to more immersive and interactive music experiences, adapting to listeners' environments or emotional states.
Automation in Production: Certain aspects of music production, particularly technical tasks like mixing and mastering, are likely to see increased automation.
As AI becomes more entrenched in the music industry, maintaining an ethical and artistic balance is paramount. This involves conscious decision-making about how AI is used and an ongoing dialogue about its implications in the industry.
Ethical Frameworks: Developing and adhering to ethical guidelines around AI use in music production, including issues like data privacy, copyright, and representation.
Celebrating Human Element: Ensuring that the unique qualities of human-created music are valued and preserved alongside AI-generated compositions.
In conclusion, the rise of AI in music production isn't a zero-sum game where the entry of AI spells the exit of human creativity. Instead, it's an evolving symphony where human ingenuity harmonizes with technological advancement, leading to new heights of creative expression.
As we navigate this new terrain, the focus should be on leveraging AI's strengths to enhance and expand the human creative spirit, rather than replacing it.
As we've journeyed through the evolving landscape of AI in music production, it's clear that this technological advancement is neither a panacea nor a peril but a powerful tool that, when used wisely, can enrich the music industry in unprecedented ways.
AI as a Catalyst, Not a Replacement: AI should be seen as a catalyst that can spur creativity, efficiency, and innovation in music production, rather than a replacement for the human composer.
Balancing Act: The true artistry in the age of AI will lie in balancing the technical prowess of AI with the irreplaceable depth, emotion, and intuition of human creativity.
Ethical Considerations are Crucial: Addressing the ethical implications, from copyright issues to ensuring diversity and bias mitigation in AI-generated music, is essential for a responsible evolution in the industry.
The future of music production, shaped by AI, offers exciting possibilities. From enhanced creative processes and new sounds to more efficient workflows, the potential is enormous. However, this new era also demands a more nuanced understanding and a thoughtful approach.
Collaboration Over Competition: Viewing AI as a collaborator and not a competitor can lead to more innovative and groundbreaking music production.
Continued Learning and Adaptation: Staying abreast of technological developments and continuously adapting one's skills will be key to thriving in this new landscape.
Preserving the Human Essence: Amidst technological advancements, preserving the human essence in music — the raw emotion, storytelling, and connection — remains paramount.
AI in music production is not about ushering in a cold, robotic future; it's about opening doors to new creative realms. It's an invitation to rethink and expand our artistic boundaries, where the combination of human and machine intelligence can lead to musical expressions yet unheard and unimagined. As we embrace this partnership, we may find that the greatest compositions lie not in human or machine alone, but in the harmony of their collaborative symphony.
But what if I told you that the path to crystal-clear masters is paved with understanding a few fundamental mistakes? Dive into this guide and discover how to transform your cluttered compositions into sleek, harmonious anthems.
From the pitfalls of volume imbalances to the art of EQ-ing and the mysteries of frequency masking, we'll decode them all. Let's embark on this journey together, turning messy mixes into masterpieces.
One of the most common challenges faced by producers is the enigma of the 'messy mix'. At the heart of this issue often lies a trifecta of foundational errors.
First and foremost, a bad volume balance between the elements in a mix can create a chaotic soundscape where certain instruments or vocals either overpower or are lost entirely. This absence of hierarchy and space can leave the listener overwhelmed or unsatisfied.
Secondly, improper EQ settings can lead to an unnatural tonal balance, making some frequencies feel either too harsh or too subdued. A kick drum that lacks punch or vocals that seem distant can be the outcome.
Lastly, the menace of masking is another major culprit. When two elements in your mix occupy the same frequency range, they compete for attention, causing a muffled sound and diminishing clarity. This phenomenon, known as frequency masking, can blur the distinct character of instruments and make your mix sound crowded.
Addressing these challenges head-on is the first step to turning a messy mix into a clean, polished production.
Modern music compositions often boast a myriad of layers. Each layer has its role, and for the symphony to come alive, all the layers need to be in perfect balance. If the piano is too overpowering, it might drown out the delicate notes of a violin. This balancing act is vital not just for artistic reasons but for auditory ones as well.
Loudness perception is a unique trait of human hearing, best described by the equal-loudness contours of ISO 226:2003. At low volumes, our ears pick up mid-frequencies (like vocals) far more readily than very low or very high ones.
That’s why, at minimal volumes, the bass tends to vanish, leaving vocals still audible. As the volume rises, the ear starts picking up on the extremes, resulting in a more balanced hearing experience.
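This frequency-dependent sensitivity can be approximated numerically. The sketch below uses the standard A-weighting curve (IEC 61672), which roughly follows the 40-phon equal-loudness contour; it's an illustration of why bass "vanishes" at low volumes, not part of any Mastering The Mix tool:

```python
import math

def a_weighting_db(f: float) -> float:
    """Approximate A-weighting gain in dB (IEC 61672). Large negative
    values mean the ear is less sensitive at that frequency at quiet
    listening levels. Normalized so 1 kHz is ~0 dB."""
    f2 = f * f
    ra = (12194.0**2 * f2**2) / (
        (f2 + 20.6**2)
        * math.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194.0**2)
    )
    return 20.0 * math.log10(ra) + 2.0

# At quiet volumes, a 50 Hz bass note is discounted by roughly 30 dB
# relative to a 1 kHz vocal fundamental:
print(round(a_weighting_db(50), 1))    # around -30.3
print(round(a_weighting_db(1000), 1))  # around 0.0
```

This is why a mix balanced only at low monitoring levels often ends up with too much bass when played loud.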
The challenge for mixers, however, is that they can't control the volume at which the listener plays the music. What they can control is the loudness of each element within the track.
Balancing the channels correctly not only creates a harmonious listening experience but also simplifies the mixing process. By establishing a strong foundation at the start, you won’t have to rely heavily on plugins to correct fundamental issues.
When in doubt, look to the pros. One of the simplest ways to achieve balance is by using reference tracks. Our plugin ‘REFERENCE’ offers a streamlined way to compare your mix to industry favorites. By bringing in your reference tracks, you can see how your mix stands in comparison and adjust accordingly. A crucial tip to remember here is the need to level-match, ensuring that perceived loudness remains consistent.
Focus on one element at a time as you jump back and forth, and do your best to get it sitting at a comparable relative loudness in the context of your mix as it does in your reference track. Skipping this simple step will mean you’ll waste a lot of time later in the mix trying to solve a simple problem with complex EQ and compression tweaks.
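REFERENCE handles level-matching for you, but the underlying idea can be sketched in a few lines. This is a deliberately simplified RMS match (true perceived-loudness matching would use LUFS-style weighting); the signals and numbers below are purely illustrative:

```python
import math

def rms(samples):
    """Root-mean-square level of a block of samples (floats in -1..1)."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def match_gain(mix, reference):
    """Linear gain to apply to `mix` so its RMS matches `reference` --
    a crude stand-in for the level-matching a reference plugin does."""
    return rms(reference) / rms(mix)

# A quiet mix (peak 0.25) matched to a reference at peak 0.5:
mix = [0.25 * math.sin(2 * math.pi * 440 * n / 44100) for n in range(4410)]
ref = [0.50 * math.sin(2 * math.pi * 440 * n / 44100) for n in range(4410)]
gain = match_gain(mix, ref)
print(round(gain, 2))  # 2.0 -- i.e. boost the mix by ~6 dB
```

Without this step, the louder of the two tracks will almost always sound "better," which defeats the purpose of referencing.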
Drums: Often considered the heartbeat of a track, drums set the pace. Starting with the snare as the foundation, the other elements of the drum kit are woven around it. The balance here should be such that each element supports the other rather than overshadowing it. Panning is a tool you must master to create a wide and immersive drum experience. A handy plugin, ‘LEVELS,’ can assist you in frequently checking your mix in mono to ensure compatibility on smaller speakers.
Bass: The bridge between the rhythm and the melody, the bass needs careful attention. Too loud, and it drowns the rest; too soft, and the track loses its depth. Regularly referring to your reference track ensures you're on the right track.
Vocals: Often the show-stealer, vocals must shine through. Introducing them early in the mix ensures they don't feel like an afterthought. Although they need to be the loudest, a careful balance ensures they don’t feel isolated from the other elements. Listening in both stereo and mono and jumping between monitors and headphones can help you dial in the right loudness quickly.
Instruments: Bringing in each instrument in the order of its importance ensures that every element finds its space in the mix. Panning, once again, plays a crucial role in providing each instrument its unique space.
Every genre has its own nuances. Whether it’s the roaring guitars of rock, the thumping kicks of hip-hop, the rhythmic bass of EDM, or the all-encompassing vocals in pop – understanding what takes precedence is essential.
Start with balance before you dive deep into the sea of EQ, compression, and effects. That initial investment in time paves the way for a smoother mixing process. Always remember, the right reference can guide those ears from the first note to the final master.
If you follow the channel-balancing guidance above, your mix will already sound pretty solid.
EQ's golden rule? Make only the changes that are needed to get your mix sounding great.
A wise first move is to cut away the unnecessary fluff. Sometimes, vocals or certain instruments can add too much low-end, crowding your mix. This excessive bass can overshadow the beats and main tones in your mix. When you clear out this excess using a high-pass filter, your mix feels airier and more refined.
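For illustration, here is a minimal first-order high-pass filter in Python. Real EQ high-pass filters use steeper slopes (typically 12-24 dB/octave), so treat this as a sketch of the principle rather than a production tool; the cutoff and test frequencies are arbitrary:

```python
import math

def high_pass(samples, cutoff_hz, sample_rate=44100.0):
    """First-order RC high-pass: attenuates content below cutoff_hz.
    y[n] = alpha * (y[n-1] + x[n] - x[n-1])"""
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = rc / (rc + dt)
    out = [samples[0]]
    for n in range(1, len(samples)):
        out.append(alpha * (out[-1] + samples[n] - samples[n - 1]))
    return out

# A 50 Hz rumble is heavily attenuated by a 120 Hz high-pass,
# while a 1 kHz tone passes almost untouched:
sr = 44100
rumble = [math.sin(2 * math.pi * 50 * n / sr) for n in range(sr)]
tone = [math.sin(2 * math.pi * 1000 * n / sr) for n in range(sr)]
def peak(x): return max(abs(v) for v in x[sr // 2:])  # steady-state peak
print(round(peak(high_pass(rumble, 120)), 2))  # well below 1.0
print(round(peak(high_pass(tone, 120)), 2))    # close to 1.0
```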
Let’s look at EQing individual elements of your mix for a great tonal balance:
Step 1: Open REFERENCE. Your chosen comparison song should be set, with the chorus looped.
Step 2: Using REFERENCE, toggle between your mix and the comparison track. Observe the sonic nuances. Start with key elements, like vocals, and set your mix's tone from there. Ask yourself: How crisp are the vocals? How resonant is the mid-range? Adjust the EQ of individual elements until each part is sonically comparable to your reference track.
Note: Within REFERENCE, the 'level line' becomes an invaluable guide. It provides a visual cue on how to EQ your track to match your comparison. You might need to temper the mids or emphasize the high-end. The aim is to bring the level line near 0dB (within ±3dB is good!), suggesting your mix aligns well with the comparison.
Step 3: Apply this method across your mix, ensuring each element aligns with the comparison.
Does this process risk making your mix mimic others? Not at all! Your mix retains its unique flavor through its melody, arrangement, and style. No one could pinpoint your comparison track. This approach simply harnesses tried-and-true methods to elevate your mix's professional feel.
Our plugin MIXROOM allows you to import audio to create a sonic profile. It then analyzes your audio and shows the EQ changes needed to get your audio sounding just like the audio you’ve imported.
This isn’t your standard EQ ‘peak’ matching that you find in other EQs. This is our proprietary and unique tonal balance matching algorithm developed over years and now used by some of the most accomplished engineers in the industry. The sonic analysis identifies how the human ear perceives the tonal quality and works with incredible precision.
If you can get your hands on stems that are a great representation of the sound you’re looking for, MIXROOM will take your productions to another level.
To create the presets, we analyzed a plethora of professionally recorded stems from hit songs, such as Ed Sheeran's vocals which gave us our Pop Male Vocals preset in MIXROOM. The acoustic guitar stem in Oasis’ Wonderwall gave us our acoustic guitar preset. The list goes on.
Here’s a simple guide on how to finesse your sound with a reference stem using MIXROOM:
Step 1: Seek the stem that resonates with your desired sound. Perhaps it's tucked away in your previous tracks, or maybe it's up for grabs in a remix challenge. Don't shy away from seeking help from fellow artists.
Step 2: Initiate MIXROOM on the desired channel—say, the vocals. Hit the 'target' emblem in the bottom left corner, ushering in your chosen stem. Loop a section that epitomizes your ideal sound and click 'Create Targets'. No stems? No problem, delve into our rich preset collection.
Step 3: With MIXROOM diligently analyzing the imported reference, watch out for the indicators—first flashing in red, and then settling into green target lines.
Step 4: Navigate to the 'Add Smart Bands' within the options. Activate the EQ bands and behold—the tonal balance of your reference stem now infuses your audio.
Refine the settings in MIXROOM if you feel it’s needed.
You’ve laid a great foundation, but you might still have channels fighting for space in your mix, and it can be a challenge trying to fix this.
Sidechaining is an elegant technique to breathe life into a densely-packed mix. Imagine two sound elements—perhaps a kick and bass, or vocals paired with a piano. Both vying for attention within the same frequency spectrum.
With sidechaining, you give one priority over the other. You might use our plugin FUSER on your bass and sidechain the kick so the bass ‘ducks’ out of the way to make room for your kick allowing it to punch through the mix.
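The ducking behaviour itself can be modelled with a simple envelope follower. The sketch below is a toy broadband ducker, not FUSER's algorithm (which works on frequency-specific mid-side conflicts); the threshold, depth, and smoothing coefficients are arbitrary illustration values:

```python
def duck(signal, key, threshold=0.3, depth=0.8, attack=0.9, release=0.95):
    """Sidechain ducking sketch: when the `key` (e.g. kick) envelope
    exceeds `threshold`, the `signal` (e.g. bass) is turned down by
    `depth`. Fast attack, slower release."""
    env, out = 0.0, []
    for s, k in zip(signal, key):
        level = abs(k)
        # one-pole envelope follower on the sidechain key
        coeff = attack if level > env else release
        env = coeff * env + (1.0 - coeff) * level
        gain = 1.0 - depth if env > threshold else 1.0
        out.append(s * gain)
    return out

bass = [0.5] * 100              # steady bass note
kick = [1.0] * 20 + [0.0] * 80  # a kick hit, then silence
ducked = duck(bass, kick)
print(round(ducked[10], 3), ducked[99])  # ~0.1 while the kick plays, 0.5 after it decays
```

Short attack and release times keep the ducking tight to the kick; longer ones turn it into the familiar "pumping" effect.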
FUSER isn't just for rhythm. With it, instruments like guitars or synths graciously step aside, letting the vocals stand front and center, ensuring they're the heart and soul of your track.
Once FUSER's sidechain is set up, you’ll see frequency-specific mid-side channel conflicts glowing in the display.
Tap 'Resolve Conflicts'. This command gracefully manages these overlaps, ensuring clarity without overcompensation. The intelligent processing remains fully tweakable so you can adjust it to suit your preferences.
For those wanting hands-on control, create a node with a double-click on the frequency view. Pulling the node downward amplifies its effect. Fine-tune with options like mid/side, Q, attack, and release.
A little goes a long way with FUSER. Subtly integrating it across your mix in areas of key conflict will bring serious clarity and focus.
Mixing music is akin to painting on a sonic canvas. Every stroke, every shade matters. As artists, we've all encountered the chaotic disarray of a messy mix, but as we've explored today, refining that cacophony into a symphony isn't out of reach.
Embracing balance, mastering EQ, and employing strategic tools like FUSER and MIXROOM can convert the mayhem into magic. Every instrument, every voice, and every beat deserves its rightful place in your track. Through mindful consideration of volume, space, and tonality, we've detailed how to usher every element into its perfect slot, crafting a mix that is both clear and captivating.
Remember, the journey from a muddy mix to a pristine production is a learning curve, punctuated with trials and triumphs. But with the right techniques in your toolkit, you're well on your way to creating sonic masterpieces that resonate deeply with listeners. Here's to crafting mixes that shine with clarity, character, and charisma!
If the answer is yes — to any of these questions — your vocal track is probably being affected by unwanted resonances.
That's where our RESO dynamic resonance suppressor plug-in comes in. It offers you an easy, powerful way to remove unwanted resonances from your vocal tracks.
Resonances, at least in the context of your mix, are perceived as a buildup of a specific frequency or set of frequencies. They can present themselves as an unwelcome hum-like sound in the lower portion of your track's frequency spectrum or as a high-frequency harshness or ringing in the 10–15kHz range.
Low-frequency resonances will steal headroom from your mix, while high-frequency resonances will cause near-instant listening fatigue in anyone who listens to your music.
RESO offers two methods for analyzing your vocal track: Frequency Sweep and Calculate Targets.
Loop a particularly resonant section of your vocal and instantiate RESO as the first plugin in the vocal chain. (Setting RESO first works well because the audio is cleaner before it flows through other processes such as tuning or compression.)
To take advantage of the Frequency Sweep function, first locate disproportionately high peaks using RESO's easy-to-read frequency analyzer and peak scale, the latter of which is located on the plug-in's right-hand side. After that, hold down your computer's Control key and move your mouse around until you find a frequency that sounds unpleasant.
Lastly, create a Target Node by double-clicking your mouse whenever you identify an unwanted resonance. This will initiate dynamic reduction of the problem frequency.
Be sure not to add so much processing that the vocal begins to sound thin. In particular, over-reduction of esses can make them sound more like a ‘th’.
Usually, around 4–8 well-positioned nodes will reduce unwanted resonances without making your vocal sound too processed.
To deploy the Calculate Targets function, simply instantiate RESO on your vocal track, start playback, and then click the Calculate Targets button.
RESO will then automatically identify problem frequencies by sampling the input spectrum multiple times per second, thereby building a record of maximum frequency magnitudes that are statistically higher than their neighbouring frequencies.
RESO then automatically creates Target Nodes, showing you how much reduction is required for transparent resonance suppression. Finally, click the Engage Targets button to initiate processing.
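The "statistically higher than their neighbouring frequencies" idea can be illustrated with a toy peak-picker. This is not RESO's actual algorithm, just a sketch of the concept; the window size and dB margin are made-up values:

```python
def find_resonances(magnitudes_db, window=4, margin_db=8.0):
    """Flag spectrum bins that sit well above their neighbours.
    `magnitudes_db` is a per-bin magnitude spectrum in dB; a bin counts
    as a resonance if it exceeds the local median by `margin_db`."""
    peaks = []
    for i in range(window, len(magnitudes_db) - window):
        neighbours = sorted(
            magnitudes_db[i - window:i] + magnitudes_db[i + 1:i + window + 1]
        )
        median = neighbours[len(neighbours) // 2]
        if magnitudes_db[i] > median + margin_db:
            peaks.append(i)
    return peaks

# A mostly flat spectrum with one 12 dB spike at bin 10:
spectrum = [-40.0] * 20
spectrum[10] = -28.0
print(find_resonances(spectrum))  # [10]
```

Comparing against a local median rather than an absolute level is what lets this kind of detector ignore the track's overall tonal tilt and flag only the narrow buildups.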
Frequency Sweep and Calculate Targets aren't mutually exclusive, either. Once you get comfortable with the plug-in, you'll likely combine the two functions. Many users like to sweep and destroy audible resonances with the Frequency Sweep function, then deploy Calculate Targets to locate and remove the harder-to-find ones.
If something doesn't sound right after you deploy RESO's automatic processing, you're not limited to the plug-in's decisions — after all, there’s no substitute for your ears! Therefore, RESO enables you to tweak the Target Nodes manually to best suit your track and overall mix.
At their most basic, RESO's Target Nodes are threshold controls, much like those on a standard dynamic compressor. Thus, when a specific frequency within your vocal track exceeds this threshold, RESO suppresses it.
Creating Target Nodes is easy. Just position your computer’s mouse at the peak level at which you want resonance suppression to commence, then double-click.
You can move Target Nodes anywhere within the frequency spectrum. When you want to lock a frequency, simply hold down your computer's Control key before moving the Target Node to retain the set frequency and vertically adjust the amount of dynamic reduction.
Be sure to jump between monitoring your vocal in solo and hearing it in the context of the whole mix when setting the resonance reduction. Soloing allows you to zero in on the issues and set the best possible frequency and Q, while listening in the context of the mix helps you find the sweet spot for the threshold and amount of reduction.
Want to lock the peak reduction and adjust the frequency? No problem — simply hold down your computer's Shift key before moving the Target Node horizontally.
You can also click and drag to select multiple Target Nodes and adjust them simultaneously. Moreover, hovering over a Target Node will spontaneously reveal Solo, Bypass, and Delete controls.
RESO removes vocal resonances quickly, easily, and — most importantly — transparently.
Why does transparency matter? Simply put, using blunt force to remove resonances will have a detrimental effect on the sound of your vocal track as well as on the overall sound of your mix.
RESO will help you identify problem frequencies while also giving you an accurate projection of how much reduction you need to resolve the resonance without rendering your track deflated, thin, and lifeless.
So, what's RESO's secret? For starters, it employs mastering-grade filters that boast a near-linear response with negligible phase distortion when compared to equivalent minimum-phase filters.
In fact, RESO's filters produce significantly less pre- and post-ringing of transients at higher Q values than comparable market-leading filters.
Whenever you create a Target Node in RESO, the Q is automatically optimized for your track for the most musical result. This means that whenever you move one of the plug-in's Target Nodes around its interface, it automatically updates the Q (you can also scroll while hovering over a Target Node to adjust the Q manually).
Moreover, RESO's lower frequencies possess a smaller range, while its higher frequencies have a larger range. This gives you clean, transparent-sounding results.
Whenever you add a Target Node in RESO, it's created at a specific peak level that relates to your input volume. If you change the input gain on your vocal track by adding plug-ins or other processors, then RESO's processing will increase accordingly.
To compensate for this, RESO has included a Gain Link button that automatically adjusts your output gain if your input gain changes. This ensures that you can maintain the exact same volume, no matter what type of processing you apply to your vocal track.
You can also deploy RESO's gain link feature as a wet/dry slider. Begin by clicking the Gain Link button, then push the Input Gain slider up for more resonance reduction or down for less resonance reduction.
Vocal resonances are infuriating; they cause listening fatigue, make your effects processors go haywire, and rob your mixes of precious headroom. Quite simply, they make your vocal track — and your entire mix — sound bad.
That's why cleaning up resonances is essential for creating great-sounding vocal tracks and, ultimately, great-sounding mixes. Failing to remove vocal resonances will render your mixes weak, fatiguing, and hollow.
With our RESO dynamic resonance suppressor plug-in in your studio's arsenal, however, you'll be well on your way to achieving the loud, punchy, cohesive mixes we all strive for.
As with most aspects of the recording arts, experimentation will uncover a great many unique sounds. That said, there are certain tips and tricks that almost every engineer or producer makes use of.
In this post, we'll explore seven concepts that you can employ — right now — to add ambience and atmosphere to your tracks.
Especially if you're working on a busy mix, there's only so much space to go around — the sonic spotlight is only so big. Thus, if everything is up front and center, then nothing is up front and center — everything is just louder.
That's why it's important to decide which elements will be the focus of your mix and which elements will be the supporting elements in your mix. Choose what you want to highlight, then move everything else out of the way.
This concept applies to delay, reverb, and other spatial effects as well as to the tracks themselves. If you try to make everything sound huge and ambient, then nothing will sound huge and ambient — just really far away.
In many cases, lead vocals and instruments, bass, and drums will be the focal points of your mix; therefore, you'll want to keep those particular elements up front and in your face. Supporting instruments, such as synth pads and auxiliary percussion, are excellent candidates for heavy reverb processing.
Moreover, don't neglect the power of dynamics — everything doesn't have to be running on all cylinders all the time. Without volume variations (a chorus or drop with more energy than the rest of the song, for example), everything will sound the same, with nothing to grab your audience's ear.
The same also applies to reverb and ambient effects. Without dry elements to contrast with the reverb-drenched elements, everything will have an identical sound, with nothing standing out to attract your audience's attention.
If you're working with a dry-sounding arrangement, and none of its primary elements sound right with heavy reverb processing, try adding short ornamental elements to your mix. Then apply heavy reverb to those elements.
For example, add a reverb-drenched tambourine to a sparse vocal/piano arrangement. Or add a delay-soaked orchestra hit to an electronic arrangement.
A variation on this theme is to use your DAW's automation to apply heavy ambient processing to short, periodic sections of a track (for example, add delay or reverb to a single word in a vocal line), then let the effect's tail fade out. This will create a beautiful sense of space without burying the entire track in echo.
Reverb and delay affect the front-to-back portion of the sound field, and that's what we tend to focus on when trying to add ambience and atmosphere to our tracks. That said, stereo width — the left-to-right portion of the sound field — is equally as important if you want to create huge-sounding productions.
You generally don't want to spread your mix's primary elements too wide, as this can lessen their impact and intelligibility. Pads, drones, and textures, on the other hand, stand up to the ultra-wide treatment beautifully.
Want a truly massive sound? Try crafting three textural layers and having them play the same part in unison. Then, leave one panned center while you pan the remaining two tracks wide.
Need more movement? Try autopanning the second two tracks from left to right, synchronized to the tempo of your session.
Searching for a way to add ambient depth to a drum loop without washing it out? If so, try placing a secondary, rhythmically contrasting loop underneath your main drum loop.
Keep your main drum loop dry, submerge the secondary track with a 100% wet reverb, then balance the two as you would with a reverb's wet/dry mix control. Finally, instantiate a compressor plug-in on the reverb channel, set its sidechain input to the primary drum loop, then dial in the compressor until you achieve a musical-sounding ducking effect.
Want to take it even further? Try printing the reverb to an audio track for creative slicing and dicing!
Adding sustained textural elements — vinyl crackle, tape hiss, or other non-pitched sounds — to your drum loops is a great way to make them "pop." That said, using a static texture can often sound more like unwanted background noise than like an interactive part of your track.
To rectify this and really make your drum loop stand out, use your drum loop (or your grouped drum tracks) as a sidechain for a compressor plug-in instantiated on the textural sound track. This will cause the volume of the textural sound to mirror the rise and fall of the drums, thereby gluing them together in a synergistic way.
A fun variation of this technique is to chop the sustained textural sound in time with your drum loop, leaving gaps during the silences between beats. You could also create a complementary groove shadow from your edited region as well.
If you're working with a static, dull-sounding pad, drone, or sustaining synth texture, try livening it up with a modulation or pitch effect. It's amazing how much width, movement, and dimension modulation effects like chorus, flanging, and phasing can lend to an otherwise unremarkable texture.
If you really want to add width to a track, a detune effect can really make a difference. First clone your track. Then use a pitch-shift plug-in to bump one of the tracks up 10 cents and the other down 10 cents.
Finally, pan one track 100% left and the other 100% right. Doing this will add an almost unnatural width to your track.
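The maths behind the ±10-cent shift is worth seeing: a detune in cents maps to a pitch ratio of 2^(cents/1200), so 10 cents is a shift of roughly 0.6%. A quick Python check:

```python
def cents_to_ratio(cents: float) -> float:
    """Pitch-shift ratio for a detune in cents (100 cents = 1 semitone)."""
    return 2.0 ** (cents / 1200.0)

# +/-10 cents is far too small to hear as 'out of tune',
# but enough phase/pitch difference between channels to widen the image:
print(round(cents_to_ratio(10), 5))   # 1.00579
print(round(cents_to_ratio(-10), 5))  # 0.99424
```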
If you're processing a background element, you can get pretty creative without affecting the vibe or character of your entire mix. Don't be afraid to get heavy handed if it sounds good!
Another great way to add ear-grabbing ambience to your tracks is to combine multiple delays and/or reverbs. This enables you to add long and short, as well as bright and dull, spatial effects to your tracks.
It's also extremely easy to tailor each effect's level and panning location using auxiliary sends and returns in your DAW (which can also be controlled using your DAW's automation), giving you a massive palette of ambient textures to work with.
A very effective way to add space and ambience to a track is to run it through multiple reverbs with increasingly longer pre-delay settings, and with each reverb sitting in a different spatial location.
For example, run your center-panned lead vocal track through two reverbs, one with its pre-delay synchronized to 1/8 of your project's tempo and another to 1/4. Pan the 1/8-set reverb aux track 100% left and the 1/4-set reverb aux 100% right.
If you're mixing in Dolby Atmos or Spatial Audio, you can also try placing the 1/8-set reverb in front (or above you) and placing the 1/4-set reverb behind you.
You could also set up two delays in parallel (i.e., the delay effects aren't feeding into one another), with one synchronized to 1/8 of your project's tempo and another to 1/4, and with one panned 100% left and the other 100% right.
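Tempo-synced pre-delay and delay times like these are easy to compute by hand. A minimal sketch, assuming 4/4 time so that a quarter note equals one beat:

```python
def note_ms(bpm: float, division: int) -> float:
    """Length of a 1/`division` note in milliseconds at a given tempo
    (assumes 4/4, so a quarter note = one beat)."""
    quarter_ms = 60000.0 / bpm
    return quarter_ms * 4.0 / division

# At 120 BPM, the 1/8- and 1/4-note settings suggested above come out to:
print(note_ms(120, 8))  # 250.0 ms
print(note_ms(120, 4))  # 500.0 ms
```

Most DAW delay and reverb plug-ins offer tempo sync directly, but knowing the millisecond values is handy when a plug-in only accepts absolute times.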
Not only will multiple ambient effects lend ear-grabbing spatiality to your mix, since the effects are panned away from your center-panned track, you can get things pretty wet without robbing your track of clarity or intelligibility.
Nobody wants their mixes to sound flat and one-dimensional. That being said, you don't want your tracks to sound like they were recorded in an echo chamber, either.
That's why knowing where — and when — to place ambient effects within your mixes is so important. Keep following our blog for more ideas to help you level up your production skills.
Many producers try to fast track their careers by blatantly copying other successful producers. While this may give you a brief viral moment (your proverbial "15 minutes of fame"), it's not an effective way to build a thriving, long-lasting future in the industry.
You'll never be as good as the original, and you should never settle for being the off-brand version of someone else. Instead, make your audience stand up, take notice, and exclaim, "I've never heard anything like that before."
Best of all, once you find a formula that's successful for you, you can apply it to future productions. You'll want every new song you create to remind your audience of your old song.
The trick, however, is to strike the perfect balance of being completely the same and of being completely different each time you create a new production.
In this post, we'll explore several ways you can find your own sound, as well as how you can apply this winning formula across all your productions.
There's a perfect word to describe plug-in presets: generic. It doesn't matter how awesome your celebrity-endorsed plug-in's celebrity-programmed presets are, they'll sound exactly the same on your productions as they do on everybody else's who uses the same preset.
Worse yet, the individual who programmed the preset hasn't even heard your music. So, how are they supposed to know what settings will sound best for your productions?
It doesn't matter if you're using an equalizer, a compressor, a modulation effect, reverb, delay, or something else entirely. Listen to your music, trust your ears, and create your own settings that work for your music.
Even better, if you find settings that really work for your style of music, save them as a custom user preset. That way, you can use the same settings as a starting point for similar situations on future productions.
Once you really nail the settings on your plug-ins, you'll have other producers wanting to steal your presets!
Stock synth patches are another cause of generic-sounding productions. After all, if everybody is using the same patches, on the same soft synths, in exactly the same way, then listeners are going to get really tired of hearing the same sounds over and over again.
The last thing you want is for your music to sound exactly like the music your audience is already sick of hearing. But, if you treat them to something they've never heard before, they'll keep coming back because they'll want to hear what you come up with next!
Learning to create synth patches can be daunting, but you don't need a PhD to learn your way around oscillators and modulators. And with Syntorial and other similar training programs out there, teaching yourself how to program a synthesizer is easier than ever.
You'll be shocked at how unique your productions will sound when you build them out of sounds that nobody else has access to!
Over-reliance on popular commercial sample libraries is a surefire path to mediocrity. Everybody seems to gravitate towards the same samples, and everybody is tired of hearing them over and over again — including your audience.
That's why you should throw up a microphone (or two), play an instrument, make some noise, and create your own samples.
You don't need to be a virtuoso, either. If you can look up how to play a chord on a chart, you have everything you need to create a stellar-sounding one-shot.
Don't be afraid to step outside of your studio, either. Whether you look in a natural environment or in an urban setting, there are ear-grabbing sounds everywhere.
You can also create interesting samples using foley-like techniques: drop flatware, scrape metal objects together, stomp your feet, creak a door, ding a fork on a drinking glass — there's no shortage of interesting sounds out there.
Once you've created a sizable library of original samples, place them in your favorites folder for fast and easy access. And since every sample in the folder is uniquely yours, nobody else will have access to anything like them.
Simply put, your custom samples will sound like you!
Producers who've been at it a while tend to use the same signal chains over and over again, i.e., their signature vocal chain, their signature drum bus chain, and their signature snare drum chain. Truth be told, most seasoned producers and mix engineers rely on the same relatively small batch of plug-ins, with similar settings, even on large, complex projects.
Once you find something that works, there's no reason to reinvent the wheel every time you encounter the same situation. Not only will this save you time (and prevent option anxiety), but it will also form the bedrock of your signature sound as a producer.
Some producers even create DAW templates that automatically call up their most oft-used plug-ins and settings at the start of every new project.
You'll likely need to tweak the plug-ins in your template for each individual song; after all, every project is different. That said, reusing signal chains and settings that have worked in the past will still provide you with a sensible starting point.
And it will ensure that every project you work on exhibits the signature sound your audience has come to expect from your productions.
During the earliest years of recording, the goal was always to capture the most true-to-life sound possible. As such, most of the standard production repertoire we rely on today — sculpting better-than-real-world sounds with an equalizer, pushing a compressor to the point of distortion, adding artificial ambience to tracks, etc. — was unheard of.
But, thanks to maverick pioneers like Joe Meek, Bill Putnam, and Sir George Martin, the recording studio became an instrument unto itself, allowing producers to craft music in new, creative ways.
What's more, many of the coolest-sounding moments in recording history were happy accidents. Distorted guitars? Gated snares? Both of these now-commonplace sounds were discovered by chance.
With all the technology afforded to us in a modern DAW environment, there are ample opportunities for experimentation. If you really want to create a signature sound for yourself, don't be afraid to try techniques that contradict the popular notion of "the right way to do things."
Try using the "wrong" microphone for your application. Use settings that seem completely nonsensical (but sound really cool in your hands).
When (not if) you uncover something truly interesting and unique, you'll have other producers asking you how you did it. And, of course, whether or not you reveal your bizarre (yet eminently brilliant) secret will be entirely up to you.
The first step to finding your own unique voice as a producer is to understand — and to a limited degree emulate — what established producers are doing. The trick is to put your own spin on it.
For example, you could cop Pharrell Williams' trademark percussion and synth arrangements, but substitute your own samples and synth patches to give your productions a different sound. Or borrow Timbaland's farty bass textures but insert them into arrangements that would be outside Timbaland's funk-centric stock in trade.
You can also do a deep dive into vintage recordings to gain inspiration, even from genres completely dissimilar to yours. Who says you can't blend bebop with hip-hop? Or rockabilly with progressive metal? Or Motown with EDM?
Remember, you make the rules. If it sounds good, it is good.
Audio production is a creative art without limits. The only person who can place limits on it is you!
So, create music that you'd be excited to listen to. Considering the countless music aficionados out there, odds are there are plenty of people who share your tastes, and they'll be just as engaged and inspired by your productions as you are.
Keep following our blog for more tips, tricks, and philosophical musings about mixing, mastering, and audio production.
As audio engineers, we’re constantly looking for ways to level up our mixes: buying new hardware or plugins, learning new production techniques, improving our listening environment.
All of these things can help, and they are all important at various times in your development as an engineer. However, one essential area for improvement is often overlooked: the ability of an engineer to listen accurately.
One of the major differences between amateur and professional audio engineers is the ability of professionals to analyse what they are hearing accurately.
When a pro listens to a mix, they immediately know which EQs need adjusting, and where on the frequency spectrum those adjustments need to be made. They can hear if a sound is affected by distortion, they can hear if there is too much or too little compression, they can hear if the stereo field is unbalanced and understand how to fix it.
Once upon a time (when I started on my journey as an engineer!) the only way you could improve your listening skills was to get in the studio and work on music. Of course, this is a legitimate way to improve these skills – but it’s no longer the quickest way to do it.
Audio Ear Training is specially designed to improve your listening skills quickly in a learning environment that is fun, friendly, competitive and personalised. Gamified ear training exercises improve the accuracy of your listening without it feeling like hard work.
The major benefits are that you will be able to work faster and more accurately when you mix – your skill set will become more professional. Imagine being able to instantly home in on problem areas of the mix, like resonant frequencies or distortion artifacts, and fix them quickly and intuitively.
Ear training can also improve your ability to detect subtle changes in audio. For example, amateur engineers often struggle to hear how a compressor is shaping a sound. There are ear training exercises specifically designed to get you hearing even subtle changes in compressor settings.
Yet another benefit of ear training is that it can teach you to listen to other people’s mixes more accurately; a vital skill if you want to learn from the best.
Listen to top-end mixes in your genre and accurately identify the different techniques that have been used while understanding your mix tools well enough to be able to recreate the sounds that you hear.
There are numerous types of ear training exercises available, and below, we run through a few of them. Each of these exercises is available on SoundGym, an advanced ear training website that is free to join.
We’ll explain how each exercise works, and how it helps you to improve your mixing skills.
Each exercise starts relatively easy and gets more challenging over time as you reach higher levels where more and more accuracy is required. This way of working helps you to gradually develop your skillset as you progress through each exercise.
Being able to identify specific frequency ranges is fundamental to audio engineering. With this skill, you can quickly remove unwanted resonances or boost underrepresented frequency ranges in a mix.
You’ll also be able to identify if different elements in your mixes are masking each other, and figure out where the problematic frequency range is.
These are some of the foundational skills of mixing and you can hone them in a game called Peak Master. In this exercise, a peaking filter is being used to boost certain frequencies of a sound source, and you need to find the boosted frequency. Flip the EQ on and off and try to figure out where the boost is occurring.
Being able to hear the balance between different mix elements accurately is vital if you’re trying to make sure that your track conforms to genre norms, trying to get multiple tracks to sit together comfortably on an album or EP, or if you’re trying to recreate the balance used in a reference track.
Being able to listen to another mix and recreate the balance that you hear is useful in all these situations and plenty more. Balance Memory is a game that can help you supercharge this skill.
You are presented with some music, and a set of faders. Your task is to memorise the balance that you hear, and recreate it by bringing each of the faders up to the correct level.
Compression is one of the most important tools in an engineer’s arsenal. Yet, for many new engineers it can be challenging to hear exactly what effect a compressor is having on a signal.
Dr. Compressor is a game that can train you to hear even subtle compression by asking you to compare two signals and decide which one is the most compressed. Attack and release times are kept short to make recognition easier, but as you progress up the levels the differences become harder to identify and your ears become more and more attuned to the sound of compression.
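If you want a rough objective check to compare against what your ears are telling you, one crude proxy for "how compressed" a signal is its crest factor (peak-to-RMS ratio), which compression shrinks. This is only a stand-in for trained listening, not how SoundGym scores the game, and the tanh saturation below is just a stand-in for heavy compression:

```python
import numpy as np

def crest_factor_db(x):
    """Peak-to-RMS ratio in dB; heavy compression shrinks it."""
    peak = np.max(np.abs(x))
    rms = np.sqrt(np.mean(x ** 2))
    return 20 * np.log10(peak / rms)

t = np.linspace(0, 1, 48000, endpoint=False)
take = np.sin(2 * np.pi * 220 * t)
squashed = np.tanh(4 * take) / np.tanh(4)   # crude stand-in for heavy compression

# The "more compressed" signal has the smaller crest factor:
print(crest_factor_db(take) > crest_factor_db(squashed))  # True
```

A pure sine has a crest factor of about 3 dB; squashing its peaks pushes the RMS up toward the peak, so the ratio falls.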
Being able to accurately analyse a stereo image is helpful for all kinds of reasons. As with level matching and balance, it can be an invaluable tool when you are analysing somebody else’s mixes; hearing something accurately is the first step towards reproducing something accurately.
It can also speed you up when mixing – rather than looking at where each of your individual tracks are panned, you’ll be able to quickly identify areas in a mix that need to be de-cluttered, or find gaps in the stereo image that you can plug with a new musical part.
Stereohead is a simple but incredibly effective game that helps you to develop this skill; you listen to two sources panned to opposite sides of a stereo mix and are asked to identify how wide the stereo image is.
As engineers we can work with all kinds of audio material; studio recordings, live recordings, samples, field recordings... the list goes on! We need to be able to trust our ears; can we recognise when one of these recordings is distorted in some way?
It’s the same when we put audio through effects, or when we’re routing audio in a DAW. Can we recognise the (sometimes subtle) distortion that can be caused by these processes? Sure, we frequently want distortion on recordings, but we need to be comfortable that we are in control of things; we decide when to apply distortion and how much we apply.
Distorted Reality is an exercise that can train your ears to hear distortion; you compare two sounds and identify which is the most distorted. Once again, a simple concept, but this game will really help tune you in to the sound of distortion.
If you’re serious about improving as an engineer then you’re probably already hard at work levelling up your skill set, learning new techniques and upgrading your studio equipment.
Don’t forget that your ears are your most important asset! Training them is a real shortcut to finishing mixes faster and to a higher standard.
A well-crafted vocal will sound clean and intelligible without becoming shrill or fatiguing. In this post, we'll go over tried-and-true methods for getting pro-quality vocal tracks with maximum clarity and zero harshness.
An EQ plug-in isn't the first step to achieving a pro-level vocal track; rather, it begins at the tracking stage. Regardless of vocalist or genre, you should always strive to combine a stellar vocal performance with correct mic technique and proper gain-staging.
A microphone is the channel through which an artist connects with their audience. That's why utilizing proper mic technique is so important.
After all, even if a vocalist delivers a once-in-a-lifetime performance, listeners won't want to stick around for very long if the recording is harsh and unpleasant. To capture a vocalist effectively, you'll want to ensure that their microphone is placed at a proper distance, especially if you're using a cardioid microphone (easily the most common type of studio vocal mic).
If you position the mic too close to the vocalist's mouth, then its proximity effect will overwhelm their vocal delivery with an excess of low-end frequencies. If you position the mic too far away, then their voice will sound thin and distant.
The microphone should be close enough to produce a full, present sound without any unnatural-sounding low end.
A seasoned vocalist will know how to regulate the volume of their voice and how to adjust their proximity to the microphone to compensate for expressive level variations.
If a vocalist is whispering, they should lean in. If a vocalist is going to belt, they should back off the mic.
Sometimes vocals are replete with plosives — the bursts of air created by p and b sounds that cause loud, low-frequency pops and thumps when they reach a microphone diaphragm. Solving this issue is as easy as placing a pop filter between the vocalist and the microphone.
Gain-staging is equally important. If your recording levels are too low, then you’re going to experience increased noise; if your levels are too high, then you might experience clipping.
Best practice dictates that peaks should hit between -12dBFS and -10dBFS while you're recording, so keep an eye on your meters!
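As a rough sketch of what those meters report, the dBFS peak level is just the logarithm of the largest absolute sample value, assuming float audio normalized to ±1.0. The 440 Hz test tone below is purely illustrative:

```python
import numpy as np

def peak_dbfs(samples):
    """Peak level in dBFS for float samples where full scale is +/-1.0."""
    return 20 * np.log10(np.max(np.abs(samples)))

# A test tone peaking at 0.3 of full scale sits inside the -12 to -10 dBFS window:
t = np.linspace(0, 1, 48000, endpoint=False)
signal = 0.3 * np.sin(2 * np.pi * 440 * t)
print(round(peak_dbfs(signal), 1))  # -10.5
```

A peak of 0.3 of full scale corresponds to about -10.5 dBFS, which is why keeping raw levels modest leaves comfortable room above the loudest syllables.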
Trying to use EQ to push the vocal's clarity above the brightness of other elements is a fast way to introduce harshness into your mix.
Our plugin FUSER offers a strategic solution for enhancing the clarity and prominence of vocals in your mix. When dealing with a mix that's dense with competing audio elements, achieving a balanced sound can be challenging. FUSER's primary goal is to alleviate this challenge and create a sonic environment where vocals can shine. Here’s a step-by-step approach to creating space for vocals in your mix:
In essence, FUSER streamlines the process of carving out space for vocals within a mix. By addressing frequency conflicts and intelligently adjusting the audio, it empowers music producers to enhance the clarity, presence, and impact of vocals, making them the focal point of the composition.
Once you have a great-sounding vocal track to work with, it's time to break out your EQ plug-in so you can carve out a place for it in your mix.
Our MIXROOM plug-in is an excellent choice for sculpting vocal tracks. Not only does it offer top-level clarity and transparency via its state-of-the-art filters, but it also saves you time with application-specific presets and target frequencies.
Using MIXROOM is pretty straightforward. To start, instantiate the plug-in on an insert of your vocal track.
Next, choose an applicable channel preset from the Vocals section. MIXROOM's presets are far-reaching, with male- and female-specific selections along with a wide spectrum of genres to choose from.
You can also create custom target values to help you get closer to the sound of your favorite songs. You do this by clicking the target icon in the bottom left corner and importing reference tracks.
After that, choose a loud section of your vocal track, such as during the drop or chorus. MIXROOM's Target EQ Curve permits you to skip the guesswork and dial in a pro sound in mere seconds.
Moreover, you get an Add Smart Bands button that instantly loads EQ bands that match the Target EQ Curve, providing you with an intelligent starting point for your tone shaping.
It's vital that you try not to be heavy-handed with your EQ maneuvers. You'll be stunned at how much presence a gentle, narrow boost at 5kHz can add, or how much body comes from a subtle 500Hz bump.
You should also try to avoid soloing your vocal when EQing for the most part. While there will be times that you'll want to solo the track (i.e., to correct a hard-to-hear artifact or anomaly), it's more effective to EQ your vocals within the context of your entire mix.
Most of the time, perfecting a vocal track in isolation won't translate to a vocal track that sounds great in a mix. Quite the contrary, a track that sounds awful in isolation could be exactly what your mix needs!
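For the curious, a gentle boost like the 5kHz presence bump mentioned above can be sketched with the standard peaking biquad from Robert Bristow-Johnson's Audio EQ Cookbook (the +3dB/Q=1.5 numbers here are just an illustration, not a recommended vocal setting):

```python
import numpy as np

def peaking_biquad(fs, f0, gain_db, q):
    """Peaking-EQ biquad coefficients (RBJ Audio EQ Cookbook formulas)."""
    a = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a, -2 * np.cos(w0), 1 - alpha * a])
    den = np.array([1 + alpha / a, -2 * np.cos(w0), 1 - alpha / a])
    return b / den[0], den / den[0]

def biquad_filter(x, b, a):
    """Direct Form I, one sample at a time (slow but explicit)."""
    y = np.zeros_like(x)
    x1 = x2 = y1 = y2 = 0.0
    for i, xn in enumerate(x):
        yn = b[0] * xn + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x1, x2, y1, y2 = xn, x1, yn, y1
        y[i] = yn
    return y

fs = 48000
b, a = peaking_biquad(fs, 5000, gain_db=3.0, q=1.5)
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 5000 * t)   # probe at the boost's center frequency
out = biquad_filter(tone, b, a)
steady = out[fs // 2:]                # skip the filter's settling time
print(round(20 * np.log10(np.sqrt(np.mean(steady ** 2)) / np.sqrt(0.5)), 1))  # ~3.0
```

Probing the filter with a tone at its center frequency confirms the boost is exactly the gain you asked for, which is why even a "gentle" 3dB bump is very audible.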
Does your vocal track sound shrill? You could try to tame it with a high-frequency shelving curve on a standard static EQ, but this will likely not solve the problem, and making a drastic high-frequency cut will suck all the "air" out of your track.
So, what's the problem, then? The answer: unwanted acoustic resonances.
Resonances take place when one of the frequencies in your mix interacts with the natural frequency of something else within your mix. This will sound like an out-of-control vibration that you'll perceive as a buildup of a particular frequency or group of frequencies.
Unwanted resonances are the bane of every engineer's existence. They'll impart a harsh or muffled character to your vocal track by way of strident or buried upper mids and out-of-control lower frequencies.
Moreover, these unbridled frequencies will rob your vocal track of its dynamics, and masked frequencies will create a seriously off-kilter mix.
Solving unwanted resonances used to be a time-consuming endeavor. But, thanks to our RESO plug-in, locating and dialing out resonances is easier than ever.
RESO is a highly advanced dynamic resonance suppressor that not only locates these sonic gremlins but also eliminates problematic resonances — automatically.
To deploy RESO, load it onto an insert of your vocal track, then click the Calculate Targets button. The plug-in will then automatically provide you with Target Nodes, along with sensible setting suggestions.
Removing resonances from your vocal tracks will make them sound cleaner and clearer, plus it will permit you to add flattering EQ curves to your track without amplifying unwanted sonic artifacts.
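RESO's detection algorithm is its own, but the underlying idea of locating a resonance, finding where energy piles up in the spectrum, can be sketched with a simple FFT peak pick. The 310 Hz buildup below is a made-up example:

```python
import numpy as np

def find_resonance(samples, sample_rate):
    """Return the frequency (Hz) with the most energy -- a crude resonance detector."""
    spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
    freqs = np.fft.rfftfreq(len(samples), 1 / sample_rate)
    return freqs[np.argmax(spectrum)]

sr = 48000
t = np.arange(sr) / sr
# material with a strong buildup around 310 Hz plus some quieter content
audio = np.sin(2 * np.pi * 310 * t) + 0.2 * np.sin(2 * np.pi * 1200 * t)
print(round(find_resonance(audio, sr)))  # 310
```

A real suppressor then tracks such peaks dynamically and attenuates them only while they ring, rather than cutting the band permanently.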
Another phenomenon that can spoil your vocal tracks is sibilance. Sibilance refers to s, z, sh, ch, and similar sounds that contain lots of high-frequency information.
When sibilance is hyped or overemphasized, such as with high-frequency EQ boosts or dynamic compression, the result sounds very unpleasant.
The easiest way to eliminate sibilance is with a de-esser. A de-esser is essentially a high-frequency limiter that automatically engages when sibilance rears its ugly head.
This will allow you to boost your vocal track's upper-mid and high frequencies, which will really help it rise above your mix without accentuating every "s" an artist sings.
You can use a de-esser either during the tracking stage or during the mixing stage. A word of caution, however: use de-essers in moderation, or you'll cause your vocalist to sound like they have a speech impediment.
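The "high-frequency limiter" behaviour described above can be sketched in a few lines: measure the level of the band above a sibilance threshold and attenuate it only when it gets hot. Real de-essers work on short overlapping blocks with smooth attack and release; this single-block sketch (with hypothetical 6kHz/-8dB settings) just shows the split-detect-attenuate idea:

```python
import numpy as np

def deess_block(block, sample_rate, sibilance_hz=6000.0, threshold_rms=0.05, reduction_db=-8.0):
    """Process one short block: if the band above sibilance_hz is hot, turn it down."""
    spectrum = np.fft.rfft(block)
    freqs = np.fft.rfftfreq(len(block), 1 / sample_rate)
    high = freqs >= sibilance_hz
    hi_only = np.where(high, spectrum, 0)          # isolate the high band...
    hi_rms = np.sqrt(np.mean(np.fft.irfft(hi_only, n=len(block)) ** 2))
    if hi_rms > threshold_rms:                     # ...and duck it only when it's loud
        spectrum = np.where(high, spectrum * 10 ** (reduction_db / 20), spectrum)
    return np.fft.irfft(spectrum, n=len(block))

sr = 48000
t = np.arange(4800) / sr                           # one 100 ms block
harsh_ess = 0.5 * np.sin(2 * np.pi * 8000 * t)     # loud sibilant-range content
tamed = deess_block(harsh_ess, sr)
print(np.max(np.abs(tamed)) < np.max(np.abs(harsh_ess)))  # True
```

Because the gain reduction only engages above the threshold, quieter consonants pass through untouched, which is exactly why a de-esser beats a static high-shelf cut.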
Vocals are the most important part of any modern song. Thus, the quality of your lead vocal track will make — or break — your mix.
But, by recording great-sounding vocals, EQ-ing them properly, and eliminating acoustic resonances and sibilance, you'll be able to build ear-grabbing mixes that can stand toe-to-toe with any pro-level release out there.
Keep following our blog for more expert recording, mixing, and mastering tips and techniques!
In this post, we'll explore several tips and tricks for achieving clean low end in your masters, involving effective acoustic treatment and monitoring, proper gain staging, and well-placed EQ tweaks.
No matter what type of music you're working on, your studio's acoustics will always impact your productions. Even if you use an in-the-box workflow, mixing and mastering in an acoustically inferior room will affect your production decisions and lead to unsatisfactory results.
When it comes to low frequencies, the most common acoustic culprit you'll encounter in an untreated production space is standing waves. Standing waves are to blame if, when you play back a mix in your space, some frequencies (most notably lower ones) sound louder than they actually are while others sound quieter.
If you move around your room while a recording is playing, you'll really hear how standing waves distort your frequency perception. Depending on where you are, you'll notice that particular frequencies become louder or weaker.
Moreover, bass frequencies tend to collect in corners. This creates a perceived boominess in the room, which will cause your mixes and masters to sound balanced in your space but bass-deficient everywhere else.
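Standing waves build up at predictable frequencies: the axial room modes, where a whole number of half-wavelengths fits between two parallel surfaces, given by f_n = n·c/(2L). The 4-metre room dimension below is a hypothetical example:

```python
SPEED_OF_SOUND = 343.0  # m/s at room temperature

def axial_modes(dimension_m, count=4):
    """Axial standing-wave (room mode) frequencies for one room dimension:
    f_n = n * c / (2 * L)."""
    return [n * SPEED_OF_SOUND / (2 * dimension_m) for n in range(1, count + 1)]

# A hypothetical 4 m room dimension piles up energy near these frequencies:
print([round(f, 1) for f in axial_modes(4.0)])  # [42.9, 85.8, 128.6, 171.5]
```

Note that the strongest modes of typical rooms land squarely in the bass region, which is why low frequencies suffer the most in untreated spaces.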
To solve this issue, you'll want to invest in a set of bass traps, which are specially tailored low-frequency absorbers that you place in the corners of your room. After installing bass traps, your mixes and masters will translate much better, and you'll notice that your room's low-frequency response is considerably more accurate.
If you feel that most commercially available bass traps are out of your budget, Foam Factory offers a wide range of bass and broadband absorbers at a wallet-friendly price.
Accurate monitoring is one of the most important elements of audio production, second only to proper acoustics. After all, if you can't hear a mix correctly, how are you supposed to master it?
Your studio monitors should give you an accurate, uncolored picture of your mix or master. It's this accuracy that will help your projects translate properly on headphones, car stereos, earbuds, club systems, Bluetooth speakers, and anywhere else that folks will listen to your music.
When it comes to monitor size, bigger isn't necessarily better. Rather, the size of your speakers should be appropriate for the size of your production space.
Studio monitors with 5- or 6-inch woofers are usually sufficient for small, bedroom-sized spaces.
Worried that you can't hear low frequencies with a smaller studio monitor? Don't. Most well-designed speakers — even smaller ones — are more than capable of reproducing low frequencies.
If your speakers appear to lack bass, they may be placed too close to the wall behind them, which can produce low-frequency dips, notches, and nulls. Try moving your desk and speakers further away from the wall and see if this improves the frequency balance of your studio monitors.
Finally, if your studio monitors cause your desk or speaker stands to vibrate or resonate, try isolating them with acoustic decouplers, such as Auralex MoPADs, Primacoustic Recoil Stabilizers, or RAB Audio ProJax.
It's important to leave yourself 3dB to 6dB of headroom when you're mastering, meaning your peaks land between -6dBFS and -3dBFS. This enables you to add EQ boosts and other forms of additive processing to your master without driving the subsequent plug-ins in your mastering chain too hard.
So, what's headroom? Simply put, headroom is the available level above your audio's peaks before clipping occurs.
In a modern DAW, clipping occurs at 0dBFS (decibels full scale). Once audio hits 0dB, you'll experience shrill, nasty-sounding digital distortion.
If your master suffers from clipping, it will be fatiguing and unpleasant to listen to. Placing a brickwall limiter at the end of your mastering signal chain will safeguard against clipping; however, if you drive a limiter too hard, it will lend a lifeless, overcompressed sound to your master and, at extreme settings, create distortion.
If you're dealing with a super-hot mix, the easiest way to cool it down is with a gain plug-in. Place the gain plug-in first in your mastering effects chain, then dial it in so that the peaks of your audio are hitting somewhere between -6dBFS and -3dBFS.
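That gain trim is just a linear scaling of the whole mix so its peak lands at a target dBFS value. A minimal sketch, assuming float audio with full scale at ±1.0:

```python
import numpy as np

def trim_to_headroom(mix, target_peak_dbfs=-6.0):
    """Scale a float mix so its loudest peak lands at the target dBFS value."""
    current_peak = np.max(np.abs(mix))
    target_linear = 10 ** (target_peak_dbfs / 20)
    return mix * (target_linear / current_peak)

hot_mix = np.array([0.99, -0.7, 0.4])   # peaks just shy of clipping
cooled = trim_to_headroom(hot_mix, -6.0)
print(round(20 * np.log10(np.max(np.abs(cooled))), 1))  # -6.0
```

Because every sample is multiplied by the same factor, the mix's internal balance and dynamics are untouched; only the absolute level changes.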
An EQ is the primary tone-sculpting tool mastering engineers use to attain a super-clean low end. Our BASSROOM plug-in is tailor-made for this purpose; it not only sounds amazing, but it also suggests genre-specific EQ adjustments a mastering engineer in a world-class studio would make.
Using BASSROOM is simple. Instantiate it in front of your brickwall limiter in your master chain, choose a preset, or import a reference track, and the plug-in will handle it from there, giving you precise EQ target recommendations.
All you need to do at that point is match the bands to BASSROOM's EQ suggestions, then tweak them until you're satisfied with the sound.
BASSROOM doesn't employ generic presets either. Its intelligent algorithm actually listens to your music in much the same way that the human ear perceives sound.
This means that BASSROOM's suggestions are 100% unique to your music.
To top it off, BASSROOM's state-of-the-art filters are top shelf all the way; they're optimized specifically for low-frequency sound shaping, yielding hyper-transparent results with maximum punch and clarity.
Once you get BASSROOM up and running with the perfect preset or reference track, play back a bass-heavy section of the song you're working on. Targets will move to suggested EQ positions, intelligently selected by BASSROOM's algorithm based on the tonal balance of your mix.
Matching BASSROOM's EQ bands to the target suggestions will give your master a clear, well-defined low end that not only sounds great in isolation but also sounds balanced within the context of your entire production. Use the targets as a starting point, then use your ears to carry you the rest of the way.
BASSROOM includes five static frequency bands: 0–20Hz, 20–40Hz, 40–80Hz, 80–160Hz, and 160–320Hz. If you click on a band's gain adjustment readout, you'll be able to adjust it manually.
BASSROOM also allows you to adjust Q bandwidth by dragging the sides of each band. Dragging the Q bandwidth control towards the center of the box gives you a thinner Q; dragging the control towards the sides gives you a wider Q.
The first manual tweak many mastering engineers make with BASSROOM is to make a gentle 20Hz or 40Hz cut with a relatively wide Q setting. This will free up headroom, allowing you to tighten up the rest of your master's bass frequencies.
You can also make small 160Hz or 320Hz cuts to minimize boxiness, thereby bringing more clarity to your mix.
Dealing with a weak, thin-sounding mix? If so, a 320Hz boost will work wonders. Boost cautiously, however; the 160–320Hz region is notorious for adding muddiness to a mix.
You can also add warmth to a clinical-sounding mix with a gentle boost at 80Hz with a wide Q setting. Again, tread lightly; too big a boost at 80Hz will eat up your headroom quickly and can drive your limiter into distortion.
Every mastering engineer struggles with low end — even seasoned professionals. It's easily the most challenging aspect of mastering.
But, with effective acoustic treatment and monitoring, proper gain staging, and well-placed EQ tweaks (with BASSROOM in your plug-in arsenal), you'll be well on your way to achieving pro-level masters.
Keep following our blog for expert production tips, tricks, and techniques.
But while recording a guitar might seem straightforward, ensuring it sits perfectly in a mix can be a delicate art. Especially when dealing with multiple guitar layers, it's easy to run into problems such as muddiness, frequency clashes, and a lack of definition.
In today’s digital age, with infinite tracks at our disposal, the temptation to layer multiple guitar parts can sometimes lead to a cluttered and incoherent mix. How do you ensure that your rhythm guitar doesn't drown out your lead? How can you mix two guitars so that they complement rather than compete with each other? And how do you maintain clarity when introducing effects like reverb, delay, or distortion?
These are just a few of the challenges that we'll be tackling in this blog post. We will dive deep into the best practices, techniques, and tools that will help you navigate the intricate world of guitar layering. By the end of this guide, you'll have a clear roadmap to ensure that every strum, pluck, and riff not only sounds great on its own but also fits seamlessly within your overall mix.
So, whether you're a seasoned producer, an aspiring sound engineer, or a guitarist looking to polish your recordings, join us on this sonic journey and discover the secrets to achieving guitar-layering perfection.
Ensuring that your rhythm guitar doesn't overshadow your lead is crucial for maintaining clarity and balance in a mix.
Firstly, it's essential to consider the frequency spectrum; rhythm guitars typically occupy the mid-range frequencies, and it's wise to carve out some of this space to let the lead shine through.
Subtly ducking the rhythm guitar's mid-range whilst the lead is playing can provide a pocket for the lead guitar to sit comfortably. With our plugin FUSER, it’s straightforward to do.
The node's vertical position sets the threshold. The lower it is, the more dynamic ducking you’ll get. A good starting point is to set the threshold so the sidechain is triggering the ducking, but keep it subtle.
Next, set the mid / side position of the node. This can be done using the knob in the control panel. Use the conflict display to help you set this.
Remember, red = mid and yellow = side. It’s best to focus the node where the majority of the conflict is occurring to remove the frequency clash without removing unnecessary audio.
Once that’s done, it’s time to address the attack and release parameters. FUSER has a trick up its sleeve to make this process easy.
Click the Delta (Δ) button in the top right corner whilst soloing FUSER's channel and the sidechain channel. Now that you’re hearing the audio that’s being removed as well as the sidechain, you can adjust the attack and release to perfectly align with the transient and tail of your sidechained signal. This will give you incredibly transparent ducking.
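FUSER's detector is proprietary, but the threshold/attack/release behaviour described above is classic sidechain ducking, which can be sketched with an envelope follower driving a gain reduction. The depth and time constants below are illustrative defaults, not FUSER's actual values:

```python
import numpy as np

def duck(main, sidechain, sample_rate, depth=0.5, attack_ms=5.0, release_ms=80.0):
    """Lower `main`'s gain while `sidechain` is active, with smoothed attack/release."""
    atk = np.exp(-1.0 / (attack_ms * 0.001 * sample_rate))
    rel = np.exp(-1.0 / (release_ms * 0.001 * sample_rate))
    env = 0.0
    gain = np.empty(len(main))
    for i, s in enumerate(np.abs(sidechain)):
        coeff = atk if s > env else rel         # rise fast, fall back slowly
        env = coeff * env + (1 - coeff) * s
        gain[i] = 1.0 - depth * min(env, 1.0)   # louder sidechain = deeper duck
    return main * gain

sr = 48000
pad = np.ones(sr)        # sustained rhythm-guitar stand-in
lead = np.zeros(sr)
lead[sr // 2:] = 1.0     # lead enters halfway through
ducked = duck(pad, lead, sr)
print(ducked[0], round(ducked[-1], 2))  # 1.0 0.5
```

The attack coefficient controls how quickly the duck follows the lead's transient, and the release controls how gently the rhythm swells back as the lead's tail decays, exactly the two parameters you tune by ear in Delta mode.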
Watch This In Action Below:
Volume balancing is fundamental when layering guitars. In music production, if everything is loud, then nothing is loud. It’s all relative. With this in mind, your guitar layers should be stacked with the most important guitars being the loudest, and the supporting guitars tucked in behind them.
FUSER can help you here again. It analyses the perceived volume of the channel that FUSER is on and of the sidechain, and gives you a gain match pointer to help you match them. The algorithm is tuned to how the human ear determines loudness. This means that regardless of the tone, distortion, or any other effects, you’ll be able to set the perfect balance.
This feature is a guideline for balancing two channels equally. In this circumstance, where we want the rhythm guitar to sit behind the lead guitar, we’d need to set the gain slightly lower than the pointer. Anything around 1-3dB lower than the pointer will be a good supporting gain range. You certainly don’t want the gain to be above the pointer, as this means the rhythm is louder than the lead and will drown it out.
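The idea behind the pointer can be sketched numerically: compute how many dB of gain would make one channel as loud as the other, then deliberately sit below that figure. FUSER uses a perceptual loudness model; plain RMS below is only a rough stand-in, and the test tones are illustrative:

```python
import numpy as np

def gain_match_db(channel, reference):
    """dB offset that would make `channel` as loud as `reference`, by plain RMS.
    (A perceptual loudness model would weight frequencies like the ear does.)"""
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    return 20 * np.log10(rms(reference) / rms(channel))

sr = 48000
t = np.arange(sr) / sr
lead = 0.5 * np.sin(2 * np.pi * 440 * t)
rhythm = 0.25 * np.sin(2 * np.pi * 220 * t)
pointer = gain_match_db(rhythm, lead)   # +6 dB would match the lead exactly...
supporting_gain = pointer - 2.0         # ...so tuck the rhythm 1-3 dB behind it
print(round(pointer, 1), round(supporting_gain, 1))  # 6.0 4.0
```

Setting the rhythm's gain a couple of dB under the matched value keeps it supportive; going above the pointer would flip the hierarchy and bury the lead.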
Crafting an impeccable mix with rhythm and lead guitars requires nuanced adjustments, especially when using effects such as reverb, distortion, and delay.
For Rhythm Guitars:
For Lead Guitars:
A way to bring incredible clarity to your mix is to set up your spatial effects like reverbs and delay on a bus, then add FUSER to the bus and sidechain the original channel. Going through the process again to resolve the conflicts will mean that the effects duck out of the way of the original signal, giving you a tight and focused sound.
This approach ensures that both rhythm and lead guitars, even with their respective effects, work harmoniously in a mix without frequency clashes.
Phase alignment is a foundational concept in audio recording and mixing. At its core, phase refers to the timing differences between waveforms, which can arise from various sources such as multiple microphones, layering similar sounds, or even time-based effects.
When two sounds are out of phase, they can cancel each other out, leading to a hollow or thin sound. Conversely, when sounds are in phase, they reinforce each other, resulting in a fuller and clearer audio output.
Proper phase alignment ensures that all elements in a mix work together harmoniously, preserving the clarity, punch, and depth of the original recordings. Without careful attention to phase issues, even the most expertly recorded tracks can lose their impact and definition.
With FUSER you can tweak the phase alignment of your guitar tracks within a range of ±180°. This becomes crucial in minimizing phase conflicts, particularly where guitar tones intersect, as when layering rhythm and lead parts.
The cutting-edge Phase Analyze function focuses on the integrated signal from both the primary and layered guitar tracks, examining every phase rotation from -180 to +180°.
It intuitively detects and establishes the best phase rotation, guaranteeing the least phase interference when harmonizing similar guitar tones. This works really well on individual and group channels.
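Mastering The Mix hasn’t published how Phase Analyze works internally, but the basic idea of scanning every rotation and keeping the one with the least cancellation can be sketched in a few lines. The FFT-based rotation and summed-energy score below are assumptions for illustration, not the plugin’s actual method.

```python
import numpy as np

def phase_rotate(x, degrees):
    """Frequency-independent phase rotation, applied in the spectrum.
    An illustrative stand-in for a plugin's phase-rotation control."""
    X = np.fft.rfft(x)
    return np.fft.irfft(X * np.exp(1j * np.radians(degrees)), n=len(x))

def best_rotation(main, layer, step=1):
    """Try every rotation from -180 to +180 degrees and keep the one that
    leaves the most energy in the combined signal, i.e. least cancellation."""
    return max(range(-180, 181, step),
               key=lambda d: np.sum((main + phase_rotate(layer, d)) ** 2))

sr = 44100
t = np.arange(sr) / sr
main = np.sin(2 * np.pi * 100 * t)
layer = np.sin(2 * np.pi * 100 * t + np.pi)  # layered tone, fully out of phase
# best_rotation(main, layer) lands at +/-180, flipping the layer back in phase
```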
Watch This In Action Below:
Layering guitars demands a blend of intuition and technical skill. With FUSER as a tool in your arsenal, you're equipped to ensure each layer harmoniously fits its place, free from phase interference. So, as you set out on your next project, let FUSER guide your way to sonic excellence. 🎸🎵
In the world of music production, crafting the perfect kick sound is like assembling a delectable burger – a symphony of flavors that blend to create a satisfying experience. However, the art of layering kicks, like stacking ingredients, demands a delicate balance.
Aspiring music producers, much like culinary artists, often find themselves grappling with a phenomenon known as phase cancellation. This intricate dance of sound waves can either elevate a track to auditory excellence or unwittingly lead to its sonic downfall.
In this blog post, we venture into the heart of phase cancellation as it pertains to layering kicks. We’ll give you a simple solution that will make your layered kicks shine like never before.
Think of each kick as a different ingredient in your burger. Each kick has its own unique taste, or in our case, sound. When you layer kicks, you're essentially stacking these ingredients together to create a more flavorful and powerful kick sound.
Phase is like the timing or alignment of the sound waves that make up audio. Sound is made up of waves, sort of like ripples on water. When these waves line up perfectly, they reinforce each other and make the sound louder and stronger. But if they're not lined up well, they can actually cancel each other out, like when two ripples collide and disappear.
When you layer kicks, you're playing multiple sounds at the same time. If the waves of these kicks are not perfectly aligned, they can interfere with each other in a way that makes certain parts of the sound weaker or even disappear. This is called phase cancellation. It's like taking a bite of your burger and suddenly some of the flavors vanish because the ingredients didn't mix right.
Phase cancellation can occur when two kick waves meet and their peaks (high points) align with the troughs (low points) of each other. When this happens, they cancel each other out, just like two opposing ripples in water that meet and flatten each other.
When you layer kicks, you're trying to create a powerful, deep, and rich sound. But if you're not careful, the kicks' waves might collide in a way that weakens the overall impact of your sound. Instead of a beefy burger, you might end up with a limp one.
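The peaks-and-troughs picture is easy to verify numerically. Here’s a minimal sketch with two identical 60 Hz “kick” tones, one pair perfectly aligned and one pair polarity-flipped:

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr  # one second of time
kick_a = np.sin(2 * np.pi * 60 * t)                  # a 60 Hz "kick" tone
kick_b_aligned = np.sin(2 * np.pi * 60 * t)          # perfectly in phase
kick_b_flipped = np.sin(2 * np.pi * 60 * t + np.pi)  # 180 degrees out

reinforced = kick_a + kick_b_aligned  # waves add up: twice the level
cancelled = kick_a + kick_b_flipped   # peaks meet troughs: near silence
```

The aligned pair peaks at roughly double the level of either kick alone, while the flipped pair collapses to essentially nothing — exactly the “flavors vanishing” problem described above.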
Automatic phase optimization is a powerful tool within FUSER by Mastering The Mix that serves as a game-changer in resolving phase cancellation challenges. When working with layered kicks or any complex mix, this feature offers a streamlined way to ensure that the phase relationships between different audio elements are optimized for maximum impact and clarity.
Here's how to effectively use the automatic phase optimization feature:
Plugin Integration: Start by adding FUSER to your audio track that contains one of the layered kicks you want to optimize.
Activate Automatic Phase Optimization: Within FUSER, click the automatic phase optimization button.
Real-Time Analysis: Once activated, FUSER will perform real-time analysis on the audio signal, specifically focusing on phase relationships. It will identify problematic frequencies and analyze how they interact with each other. This should all take about 2-5 seconds.
Filters With Visual Feedback: You can use the phase filters to focus the phase adjustments to only process a specific range. The filter visual shows the most significant areas with phase conflicts as a red glow to help you set the range more efficiently.
Adjustment and Monitoring: As FUSER performs its analysis, it will automatically jump to the optimized phase rotation. Keep monitoring the audio to gauge how these adjustments impact the mix. You also have the option to make manual tweaks if needed.
A/B Testing: As FUSER optimizes the phase relationship, toggle the bypass on and off to hear the difference it's making. A/B testing allows you to directly compare the improved sound quality.
Iterative Process: Don't hesitate to repeat the process across other kick layers if you're dealing with complex audio arrangements. Each adjustment could lead to further improvements in sound quality.
Once you're satisfied with the results, save your project with the optimized phase relationships. You can now confidently move forward, knowing that your layered kicks are working harmoniously without the risk of phase cancellation.
In essence, the automatic phase optimization feature in FUSER simplifies the process of addressing phase cancellation issues. By relying on real-time analysis and intelligent adjustments, it empowers music producers to achieve clearer and more impactful layered kick sounds while avoiding the pitfalls of phase interference.
Play the video below to hear an audio example of how this sounds:
When it comes to layering kicks and avoiding the potential pitfall of phase cancellation, taking proactive measures can be your best defense. A key strategy lies in selecting kicks that naturally complement each other. Pay close attention to the frequency content and characteristics of each kick. Opt for kicks that occupy different frequency ranges to minimize the chance of interference. Furthermore, consider adjusting the pitch of one or more kicks to ensure better alignment of their waveforms. This can significantly reduce the likelihood of phase cancellation, allowing the kicks to work together harmoniously.
Moreover, keep in mind that the attack and release characteristics of your kicks play a role in phase interactions. Experiment with altering these attributes to find a balance that maintains the kicks' individuality while also helping them gel together effectively. The goal is to prevent any clashing peaks and troughs that might lead to phase issues. By incorporating these preventative measures into your kick layering process, you're not just avoiding phase cancellation – you're also setting the foundation for a well-balanced and impactful mix.
In the intricate dance of music production, monitoring emerges as an unsung hero, especially when dealing with phase cancellation. As you meticulously craft your layered kicks, the quality of your monitoring setup can make all the difference. Whether you're working with top-notch studio monitors or a trusted pair of headphones, the accuracy of what you hear directly influences your ability to identify phase issues. Listening through a reliable playback system ensures that you can catch subtle phase cancellations that might otherwise slip under the radar.
Additionally, switching between mono and stereo monitoring modes can be enlightening. Many phase problems become pronounced in mono, as any disparities in wave alignment become more apparent without the spatial information of stereo playback. Regularly toggling between these modes can serve as a reality check, guiding you to make necessary adjustments that maintain the integrity of your kicks in various listening environments.
Beyond the equipment, cultivating your critical listening skills is equally vital. Train your ears to detect when kicks are cancelling each other out or lacking impact due to phase issues. The ability to identify these nuances can lead to more refined mixing decisions and overall sound quality. As you venture into the world of layering kicks and addressing phase cancellation, remember that attentive monitoring isn't just a technical requirement – it's a gateway to elevating your production prowess and creating a mix that resonates powerfully across any playback system.
By integrating these preventative measures and acknowledging the paramount importance of monitoring, you're setting yourself up for success in the realm of layered kicks. These practices not only safeguard your mix from phase cancellation issues but also equip you with the tools needed to sculpt a sonic landscape that captivates listeners and stands the test of time.
Layering kicks is like cooking up a sonic masterpiece. But if you're not aware of the phase of those kicks, you might accidentally create a bland or even weak sound. Just like a chef balances flavors, a music producer balances phase to create a delicious kick that stands out and hits hard.
Remember, it's all about finding the right ingredients (kicks) that work together harmoniously and aligning their waves correctly to avoid phase cancellation and create a powerful, punchy sound that makes people want to groove!
But that’s exactly what you get when you have conflicting channels fighting for space in your session.
You may have experienced the low frequencies of your bass overlapping with the kick, making it sound weak and thin whilst robbing the overall mix of precious headroom.
Or perhaps you’ve mixed a song where the pianos, guitars and synths dominate the mid range making the vocals unintelligible. And using EQ to make the vocals cut through just made it sound harsh…
You’re not alone! These are problems that every artist, producer, and mix engineer has to deal with in every session. In this post I’m going to explain an incredibly effective and BRAND NEW way of dealing with conflicting channels.
With this information, you’ll make the cleanest mixes of your entire career, clearing up channel conflict with more precision than ever before.
If you’re a professional, this is for you because it will save you an insane amount of your precious time, whilst also giving you total control.
If you’re a beginner, this is for you because it’s immediately intuitive and you’ll get great results with ease.
The clash between two channels in a mix occurs due to a phenomenon called phase cancellation. When two sound waves with similar frequencies meet but are slightly out of phase (not perfectly aligned), they can interfere with each other. This interference causes certain frequencies to cancel each other out, resulting in a loss of volume and clarity in those frequency ranges.
Imagine two waves—one peaking while the other is troughing—meeting at a point. The crests of one wave align with the troughs of the other, leading to cancellation. This phenomenon is most prominent when the frequencies of the waves are close to each other.
Phase cancellation and clashes between channels can occur in various situations during music production and audio mixing:
Overlapping Frequencies: When two or more instruments or sounds in the mix have similar frequency content, such as competing basslines or vocal harmonies.
Mono Compatibility: When a mix is played in mono (a single channel), phase issues can become more pronounced, leading to cancellations that are less noticeable in stereo.
Microphone Placement: In multi-microphone setups, if microphones are placed too close to each other or not properly phased, phase cancellation can occur when the signals are mixed.
Parallel Processing: Using multiple instances of processing effects (like compression or EQ) on the same signal can sometimes introduce phase issues if not carefully managed.
Room Acoustics: In recording environments with poor acoustic treatment, reflections and resonances can lead to phase interactions between the direct sound and reflections.
Sample Layering: When layering multiple samples or sounds, their phase relationships might not align perfectly, causing certain frequencies to cancel each other out.
Stereo Imaging: If elements panned to opposite sides of the stereo field have similar frequencies, they can interact and create phase issues, especially when summed to mono.
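The mono-compatibility point above is easy to check numerically: fold the stereo pair to mono and compare levels. This sketch uses RMS and illustrative signals; a big drop in the mono fold-down flags cancellation between the channels.

```python
import numpy as np

def mono_drop_db(left, right):
    """Level change (in dB) when a stereo pair is folded to mono.
    Values far below 0 dB flag phase cancellation between channels."""
    stereo_rms = np.sqrt((np.mean(left ** 2) + np.mean(right ** 2)) / 2)
    mono_rms = np.sqrt(np.mean(((left + right) / 2) ** 2))
    return 20 * np.log10(mono_rms / stereo_rms)

sr = 44100
t = np.arange(sr) / sr
pad = np.sin(2 * np.pi * 220 * t)

ok = mono_drop_db(pad, pad)           # identical channels: 0 dB change
bad = mono_drop_db(pad, -0.99 * pad)  # near polarity flip: massive drop
```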
The image below shows the overlapping frequencies and stereo position of the hit song ‘Uptown Funk’. Notice how the kick overlaps the bass, and how the vocals are set on top of the big synth.
A wise first step when fixing clashes is to address the volume balance between the two channels.
If the (supposedly) submissive channel is louder than the dominant channel, then you’re off to a bad start.
Our plugin FUSER helps you analyze the relative volume of two channels so you can quickly tweak it to suit your goals.
Step 1: Load FUSER on the submissive channel.
Step 2: Sidechain the dominant channel.
Step 3: Align the input gain knob to the level match pointer. This will set the submissive channel to the same loudness as the dominant channel.
Step 4: You might decide to set the volume of your submissive channel a little quieter. Or move on to side-chain ducking to create more space.
Creating space in a mix through sidechaining is straightforward. You identify two channels in a similar frequency range fighting for space in your mix, such as your kick and bass, or vocals and piano.
With sidechaining, you give one priority over the other. You might use our plugin FUSER on your bass and sidechain the kick so the bass ‘ducks’ out of the way to make room for your kick allowing it to punch through the mix.
FUSER is also effective on musical elements such as guitars, pianos and synths to carve out space for your vocals so they can be the focal point for your song.
Once FUSER’s sidechain is set up, you’ll see frequency-specific channel conflicts glowing in the display.
Click the 'Resolve Conflicts' button to automatically initiate processing that transparently resolves channel conflict and frequency masking without overdoing it. The smart node is fully tweak-able, putting you in control.
To manually create a node, double-click the frequency display. Drag the node down to increase the processing amount. Adjust mid/side, Q, attack, and release individually.
Click the small triangle button in the top right to listen to the Delta signal. It will assist you in setting the perfect attack and release settings.
In the digital era, mid/side processing plug-ins work by converting a stereo track into two signals. The Mid signal contains the information that shows up in both speakers, while the Side signal contains the information that is unique to the left and right speakers.
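The mid/side conversion itself is just sum and difference. Here’s a minimal sketch of the encode/decode pair described above:

```python
import numpy as np

def encode_ms(left, right):
    """Mid = what both speakers share; Side = what differs between them."""
    return (left + right) / 2, (left - right) / 2

def decode_ms(mid, side):
    """Perfect reconstruction of the original stereo pair."""
    return mid + side, mid - side

rng = np.random.default_rng(1)
left = rng.standard_normal(1024)
right = rng.standard_normal(1024)

mid, side = encode_ms(left, right)
left_out, right_out = decode_ms(mid, side)
# decode(encode(L, R)) returns the original channels exactly,
# and a mono source (L == R) produces an all-zero Side signal.
```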
Conflict becomes most prominent when clashing frequencies emerge in the same mid-side position within the mix.
On the contrary, frequencies that overlap across different mid-side positions might not disrupt each other significantly.
Keeping this notion in mind, we can precisely place side-chain ducking in the mid-side position where the clash occurs, while leaving the parts of the stereo image that aren't experiencing conflicts unprocessed.
FUSER makes identifying mid-side specific clashing incredibly easy. The mid clashes are displayed as a red glow. The side clashes are shown as a yellow glow. It’s visual clarity for your mix.
With this information, you’re empowered to set each node to a mid-side position that alleviates the conflict without unnecessarily disrupting other parts of the sound.
The journey from a cluttered and confused mix to a polished masterpiece begins with understanding the intricacies of channel conflicts.
Armed with this newfound knowledge and the innovative techniques presented here, you have the means to reclaim control over your mix.
Your mixes will breathe with newfound clarity, letting each element shine harmoniously in its rightful place.
Embrace this innovative approach, and elevate your audio production to new heights.
The same goes for VEH3 Synths Root C 238 stabs, lo-fi trumpet samples, KSHMR Short Fill No. 2, and anything that sounds like a high-pitched "woooh." Truth be told, any commercially available sample library becomes passé once everybody starts using it.
The easiest solution to this problem is to create your own custom samples. And one of the most inspiring ways to make this happen is to step outside your studio.
In this post, we'll explore how to capture field recordings, as well as creative ways for you to use them in your own productions.
Before we dive into capturing field recordings, we should first define the term. In essence, a field recording is any audio that's captured outside of your studio.
Field recording is a vital part of both sound design and foley work (foley = the addition of recorded sound effects after the shooting of a film), and it's a great way to add character and authenticity to your productions. Best of all, every sound you capture will be uniquely yours — nobody else will have access to them.
The first thing you need to do when capturing a field recording is to decide on a location. You'll want to find a place that has interesting acoustics, with minimal background noise, and that’s safely accessible.
Natural environments, such as the woods or the beach are a treasure trove of interesting sounds. Thunderstorms are another great source of organic sonic textures and ambiences.
Urban settings, such as street scenes, crowded markets, and train stations are replete with attention-grabbing ear candy. Construction sites are especially cool, enabling you to craft the most authentic-sounding industrial music possible.
You can also find all sorts of compelling sounds around your own house. Drop flatware on a table, scrape metal objects together, stomp your feet, creak a door, ping on a drinking glass — there's an endless array of sounds in every environment.
If you want to capture high-quality field recordings, you need a high-quality signal chain, just as you would in your studio.
The most complex — and arguably effective — solution would be to employ a portable audio interface, a compatible laptop or smart device, and an appropriate selection of microphones.
Your audio interface will ideally be bus- or battery-powered, enabling you to roam freely without worrying about plugging into an AC outlet. As for microphones, you'll probably want a shotgun mic for distance miking, as well as a pair of small-diaphragm condensers for capturing stereo sources.
An alternative to a full-blown audio interface/computer setup is to use a portable field recorder, like Zoom's series of Handy Recorders. These devices capture sound with a quality that rivals a studio setup — directly to an SD card — and they often include built-in microphones and other features field recordists will find incredibly useful.
One of the biggest sonic hurdles when capturing outdoor recordings is getting rid of unwanted wind noise. Therefore, you'll want to invest in a foam windscreen and/or a "dead cat" windjammer to ensure clean-sounding audio.
You'll also need a high-quality set of closed-back headphones, so you can monitor your recordings in real-time. This will ensure that the sound you capture sounds the same as what you're hearing in the real world.
Once you've accumulated a cool-sounding library of unique sounds, then the real fun begins: the sound designing. During this stage, you'll use all the processing tools at your disposal to augment, sculpt, and mangle your samples into never-before-heard masterpieces.
There are many creative ways to process your samples:
Compression and Gating will allow you to sculpt the envelope — the attack and release — of your sound. Whether you want a sound to be punchy and sustaining or short and rhythmic, a compressor and a gate are the right tools for the job.
Time stretching will allow you to take any sound and make it either sustaining and ambient or choppy and percussive.
Pitch shifting will allow you to completely alter the feel of a sound. This can lend an otherworldly quality to a sample, and you can even build melodies out of it.
Layering more than one sample together is a great way to build unique, one-of-a-kind sounds that nobody else has thought of. A croaking frog layered with a jackhammer? Hey, why not!
EQ and filters allow you to isolate certain sounds in your recording by filtering out frequencies. You can also highlight specific frequencies to create interesting sounds that couldn't possibly exist in the real world.
Reverb and delay add depth and dimension to an otherwise two-dimensional sound. The sky's the limit — don't be afraid to experiment with densities, reflections, and other settings you wouldn't normally use.
Distorting a sound is another great way to mangle it beyond recognition, while also thickening it, adding character, and rounding off sharp transients.
Slicing a sound into bits and placing the pieces in a nonsensical order is an effective way to inject it with an ear-grabbing unpredictability.
Modulation, such as chorus, flanging, phasing, or an LFO is a terrific way to add movement to your sound.
Auto pan effects can add movement and stereo width to your sounds. You can also pan random hits to create a mind-bending effect.
Sidechain sustained sounds like pads and drones to clean up your mix and minimize frequency conflicts.
Reversing a sound is a great way to transform it from ordinary to "how did they do that?". Reversing a sound and combining the result with other effects will really take it over the top.
Moreover, apply automation in your DAW to add movement, control effects, and introduce variation into your sounds.
After you've manipulated and mangled your field recordings into interesting samples, it's time to apply the samples to your productions. Don't be afraid to think outside the box — your samples can be used in a zillion unorthodox ways.
For example, any short sample can be pitched downward and used as a kick drum. Likewise, any blast of white noise can function as a snare.
A long, sustaining sample can be repurposed as a synth pad. Or you can layer your sounds with existing synth patches to give them new textures.
You can also import your created sounds into a sampler instrument, such as Native Instruments Kontakt, UVI Falcon, AIR Structure, Apple Logic EXS 24 mkII, Ableton Sampler, or an MPC or other hardware unit, and play them like musical instruments.
If you've captured an interesting atmosphere or ambience, transform it into a drone or soundscape at a low volume to add fullness to your production and fill in gaps in your arrangement.
Add real-world percussive elements to your beats. For example, you could accent your rhythm section with stomps, handclaps, and industrial machinery to really make it pop.
You could also use sounds as transition effects. For example, reverse a sample of an airplane taking off, add an upward fade, then process it with effects and use it as a riser.
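The riser trick above — reverse, then fade upward — is only a few lines of DSP. A sketch with white noise standing in for the airplane recording (the effects processing afterwards is up to you):

```python
import numpy as np

def make_riser(sample):
    """Reverse a sample and apply an upward fade, per the transition
    trick described above. Effects (reverb, delay) would follow."""
    reversed_ = sample[::-1]
    fade = np.linspace(0.0, 1.0, len(reversed_))  # rise from silence
    return reversed_ * fade

sr = 44100
noise = np.random.default_rng(2).standard_normal(sr)  # stand-in recording
riser = make_riser(noise)  # starts silent, ends at full level
```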
At the end of the day, you want your productions to stand out from the rest of the pack. And you can't do that if your music sounds exactly like everyone else's.
Keep following our blog, and we’ll keep providing you with creative and inspiring ideas that are guaranteed to help you level up your production skills.
Many music producers don’t actually play an instrument. They might know their way around a keyboard and get a feel for what sounds good, but can’t knock out a performance on cue.
Whilst that shouldn't hold you back, it can put you at an obvious disadvantage when it comes to programming an instrument when compared to an expert recording.
You might not know this about me, but I have a first class honours degree in Drum Performance, and I have fronted bands singing and playing guitar. I mention this as it has dramatically impacted how I experience music and how I approach mastering in a musical way.
I want to draw back the curtain on some ideas that I think can give musicians an edge over non-musicians. Hopefully you can use these ideas to make your tracks sound more authentic and musical.
If you’re programming drums, bass, piano, or any other instrument, listen to great performances and see what tangible ideas and approaches you can extract.
People think that reference tracks are just for mixing and mastering, but this is absolutely not the case. Reference tracks can be extremely useful when choosing samples, synth sounds, and also when actually programming the performances. In fact, it makes the mixing and mastering much easier later on.
Try this with your next production: fire up your reference tracks early on in the process. At this stage they don’t have to be a rigid sonic representation of the sound you’re going for. Focus on finding a track that brilliantly mirrors the vibe you’re after.
At this early stage, your production will just be the bare bones with hardly any processing, whereas any reference track will be a full mix and master. This can make the comparison challenging, as the dynamics and density of the productions will be very different. This makes it all the more important to accurately level-match your reference track and your production. Our plugin REFERENCE does this for you automatically and updates in real-time so the comparison is always fair.
If you skip this step, you might succumb to the bias of ‘louder equals better’. Louder audio sounds richer in the low end and clearer in the high end. If your track is quieter, you might feel it’s necessary to add a low-end and high-end boost… but those moves might just ruin your audio. Until you level-match, it’s impossible to tell. So, respect your time and level-match early on.
During your analysis, zero in on the nuances of the various musical performances.
Pay attention to the details. Small details like timing, phrasing, and accents can make a big difference in the authenticity of a production, just as they do in a musician's performance.
Understanding how a musician uses structure, volume and articulation can help you create more authentic and expressive productions.
Music takes the listener on a journey, builds suspense and then releases at just the right moment for maximum satisfaction. You don’t want your musical performance to go full virtuoso immediately. A simple example of this is that most choruses have vocals that are higher in pitch and louder than the verse, as this is more exciting.
Warm up your listener, reel them in, then hit them with the good stuff when the time is right. Use rests (or ‘silence’) to break up your melodies and rhythms… Trumpeter Miles Davis, one of the greatest Jazz musicians of all time, famously said "It's not the notes you play, it's the notes you don't play." This wisdom applies to any instrument in any genre. Keep it in mind, and you’ll find yourself writing more interesting and more credible music.
All the elements of your song should work together harmoniously.
When programming an instrument, think about the style, vibe, complexity and tone of the other instruments. When the vocalist enters, the other instruments don’t compete, they support. It would be inappropriate for a drum solo to outshine the vocalists' emotional middle-8 for example.
You can use the table below to help you identify what instruments would complement your arrangement. For example, let’s say you started with a piano with the frequencies focussed in the high mids, a basic rhythm, warm timbre, laid-back energy, mostly centered stereo width and a medium volume (these attributes have been highlighted in blue on the graph below). To complement your piano you might choose to add an element that doesn't conflict with the majority of these characteristics. You might choose a bass with the frequencies focussed in the low end, a complex rhythm, dark timbre, upbeat energy, mono stereo width and medium volume (these attributes have been highlighted in green on the graph below). If you have multiple elements in your mix that are very similar in all 6 categories your music will sound cluttered.
If you’re programming a bass line, listen to a few tracks that have an awesome bass player laying some cool and unique ideas down. Pino Palladino is regarded as one of the best bass players of our time; if you’re wondering who to study, start with him.
The role of the bass player is to set the foundational tone of the track, often playing the root note of the chords with occasional embellishments.
If the melody lines of other instruments are relatively complex, your bass line might be simpler in comparison. A sparser melody line, or just a chord progression, might lend itself better to a busier bass line.
Laying down a groove is one of the most rewarding parts of creating a song. You start to feel the energy and drive of your song, shaping how the production will evolve into the finished masterpiece.
The drummer has a plethora of choices to make to accompany the other musical elements. Should the playing style be aggressive or soft? What is an appropriate and tasty ‘drum fill’ to introduce the chorus? What note division (quarter/eighths/sixteenths notes) will work best and will syncopation (when you accent or emphasize the offbeats) work well or be distracting? The list of possibilities is a long one, and in most cases you can go with your gut and what you’ve heard in the past for a good starting point.
With modern DAWs, loops and sample packs, you can lay down a beat very quickly. But a loop playing all the way through the track will sound unnatural and monotonous. Rather than relying heavily on loops, think about progressing the cymbals used, from hi-hats in the verse, to open hi-hats in the pre-chorus, and crashes or a ride for the chorus.
Similarly, evolving the snare sound through the track gives it a more exciting dynamic range. A more damped, less aggressive backbeat or even a cross-stick sound might work well for the verse, with the snare ramping up into a full rim-shot for the chorus. The snare sets the vibe for the song, so it’s important to get this right!
To level up your drum programming, steal a few cool ideas from great drummers (or well-programmed drums). Steve Gadd is regarded as one of the best drummers of all time. Check out a few YouTube videos of him playing and see if there’s anything you can work into one of your songs.
The piano is a complex instrument, so I'll focus on the one technique that will elevate your piano programming: inversions.
Sometimes it can feel like your chords make too much of a leap from one chord to the next, making your progression feel jumpy and giving your track an awkward and unmusical vibe. An ‘inversion’ is where you change the octave of some of the notes within a chord to make the note jumps as small as possible. If there are ‘common notes’ between two chords, try to match them up in the same octave. Don’t forget to check the relationship of the first and last chord in the progression.
Here is an example of how you can improve the flow of a i / III / IV / VII chord progression in the Dorian Mode using inversions.
Dorian D progression built from chord degrees. Feels jumpy and lacks flow.
Same Dorian D progression with the notes transposed to flow better.
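To make the idea concrete, here is a rough Python sketch of the same voice-leading principle. The function names, the root-position MIDI voicings, and the octave-shift strategy are my own illustrative choices, not how any particular DAW implements this: each chord is re-voiced using whichever inversion moves the least from the previous chord.

```python
# Voice-leading sketch: pick the inversion of each chord whose notes
# move the least from the previous chord's voicing.
# Chords are lists of MIDI note numbers (60 = middle C).

def inversions(chord):
    """Yield each inversion of a chord (lowest note raised an octave each time)."""
    notes = sorted(chord)
    for _ in range(len(notes)):
        yield list(notes)
        notes = sorted(notes[1:] + [notes[0] + 12])

def movement(a, b):
    """Total semitone distance between two equally sized voicings."""
    return sum(abs(x - y) for x, y in zip(sorted(a), sorted(b)))

def smooth_progression(chords):
    """Re-voice a progression so each chord stays close to the previous one."""
    voiced = [sorted(chords[0])]
    for chord in chords[1:]:
        # Try every inversion, plus each inversion dropped an octave,
        # and keep whichever candidate is closest to the last voicing.
        candidates = []
        for inv in inversions(chord):
            candidates.append(inv)
            candidates.append([n - 12 for n in inv])
        voiced.append(min(candidates, key=lambda c: movement(voiced[-1], c)))
    return voiced

# i / III / IV / VII in D Dorian: Dm, F, G, C as root-position triads
progression = [[62, 65, 69], [65, 69, 72], [67, 71, 74], [72, 76, 79]]
for voicing in smooth_progression(progression):
    print(voicing)
```

Running this, the F chord gets re-voiced so its common notes with Dm (F and A) stay put, which is exactly the "match common notes in the same octave" tip above.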
The quantise function is a great time saver when it comes to making music, but too much quantisation will suck the emotion out of your music.
Humans don’t play instruments or sing perfectly in time, so we find it harder to connect with music that has robotic, machine-perfect timing. Once you have a MIDI part programmed into your DAW, you can manipulate it to achieve a well-timed yet organic performance.
In most DAWs you can set the percentage of quantisation, so rather than making everything land perfectly on the beat, you can quantise to around 70-80% to keep things a little loose. Listen to how this sounds in the context of your entire mix, then quantise any individual notes that are too loose to 90-100%.
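Under the hood, percentage quantisation is just an interpolation between where the note was played and the nearest grid line. Here is a minimal sketch (the function name, grid size, and default strength are illustrative, not any DAW's actual API):

```python
# Percentage quantisation sketch: move each note a fraction of the way
# toward the nearest grid line instead of snapping it fully.

def quantise(time_beats, grid=0.25, strength=0.75):
    """Pull a note time `strength` (0-1) of the way to the nearest grid point."""
    nearest = round(time_beats / grid) * grid
    return time_beats + strength * (nearest - time_beats)

# A slightly rushed sixteenth at 0.27 beats, quantised at 75%:
print(quantise(0.27))                # lands near, but not exactly on, 0.25
print(quantise(0.27, strength=1.0))  # full quantisation snaps it to the grid
```

At 75% strength the note keeps a quarter of its original human offset, which is what keeps the part feeling loose.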
The velocity of the MIDI notes is also very important. When a pianist plays, their fingers don’t press each key with equal pressure, and they don’t land on or lift from the keys at exactly the same time either.
Add some velocity and note length variation between the notes of your chord progressions to emulate the performance of a pianist. It can make all the difference.
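A tiny randomisation pass is one way to fake this. The sketch below is a simplified model, with made-up note tuples and spread values chosen purely for illustration, that nudges velocity and note length per note:

```python
import random

# Humanisation sketch: vary velocity and note length slightly so a block
# chord doesn't land like a machine. Notes are (pitch, velocity, length_beats).

def humanise(notes, vel_spread=8, len_spread=0.03, seed=None):
    rng = random.Random(seed)
    out = []
    for pitch, velocity, dur in notes:
        # Keep velocity inside the valid MIDI range of 1-127.
        v = max(1, min(127, velocity + rng.randint(-vel_spread, vel_spread)))
        # Vary the length by up to +/-3%, never shorter than a tiny floor.
        d = max(0.05, dur * (1 + rng.uniform(-len_spread, len_spread)))
        out.append((pitch, v, d))
    return out

chord = [(60, 90, 1.0), (64, 90, 1.0), (67, 90, 1.0)]
print(humanise(chord, seed=42))
```

In practice you would do this by ear in your DAW's piano roll, but the principle is the same: small, uneven offsets, not uniform ones.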
The claim that an expressive musical performance has a greater emotional effect than a static one has been scientifically demonstrated in a paper titled ‘Dynamic Emotional and Neural Responses to Music Depend on Performance Expression and Listener Experience’. 29 people were analyzed using fMRI (functional magnetic resonance imaging) technology and written examinations as they listened to two different versions of Frédéric Chopin's Etude in E major, Op. 10, No. 3. One version was played expressively by a talented pianist; the other had the note lengths quantised to exactly match the original musical notation, with the velocity set constant throughout the performance. Below is a quote from the findings of this study.
“These results provide evidence that temporal dynamics of expressive rhythmic performance increase emotion-related neural activations, and that musically experienced listeners are more sensitive in this regard.”
Source: Chapin H, Jantzen K, Scott Kelso JA, Steinberg F, Large E (2010) Dynamic Emotional and Neural Responses to Music Depend on Performance Expression and Listener Experience. PLoS ONE 5(12): e13812. https://doi.org/10.1371/journal.pone.0013812
It’s unlikely that you’ll be an expert on every instrument, and that’s ok! Using the approaches set out in this blog post should help you create better performances and make your music sound more authentic. Keep referencing, and keep learning from great performances.
Gaining an understanding of how to make your music sound like your favorite mixes is a challenging endeavour. That's where our REFERENCE plug-in comes in.
REFERENCE puts state-of-the-art tools and real-world insights right at your fingertips, enabling you to easily match the sound of your favorite music. In this blog post, we'll explore the first five things you should do when you start using REFERENCE.
REFERENCE permits you to compare your mixes and masters to up to 12 reference tracks. So, which tracks should you use as references?
Narrowing down your options is easier than you'd think.
The most obvious starting point is to choose tracks with a sound you'd like to emulate. These would typically be well-produced, well-mixed songs that have generated commercial success and/or critical acclaim within their genre.
Speaking of genres, your reference tracks will be more beneficial if they're within the same genre as the mix you're working on. Music with similar instrumentation to your mix will also be the most effective.
Moreover, use high-quality music files. The last thing you'd want to do is try to replicate the sound of a harsh-sounding, low-bitrate MP3.
Still not sure which tracks to use? Check out our blog post, Best Reference Track For All Genres.
Once you've collected a nice cache of reference tracks, you'll be ready to load up an instance of the REFERENCE plug-in. Start by instantiating REFERENCE on your project's master bus after any metering plug-ins, such as LEVELS, but before any headphone or speaker correction plug-ins you may be using.
To load reference tracks into your session, either click the ADD REFERENCE TRACKS button or drag and drop your audio files onto the DROP FILE HERE area of the Wave Transport.
Next, decide whether you want to employ MIRROR mode or FREE mode, which are selected using their corresponding buttons on the bottom left-hand corner of the Wave Transport.
In FREE mode, you can navigate to any part of your reference track simply by clicking on its waveform. FREE mode also permits you to create loops by selecting and dragging over the part of the waveform you want to loop.
Loop endpoints are automatically shifted to the nearest beat, and you can create as many loops as you'd like. Activating a loop is as easy as clicking on it, and adjusting it is as easy as dragging the loop lines.
You can also delete a loop by clicking the cross in the top right corner of the loop.
In MIRROR mode, the plug-in will match the playback location of your reference track to your DAW's transport. This is ideal for applications involving different versions of the same track, enabling you to jump between versions at the same point in the song.
Because professional mixes have been subjected to the mastering process, they'll most likely sound louder than your unmastered mix. This is a problem, as louder is perceived by our ears as sounding better, with punchier low frequencies and clearer high frequencies.
Clicking the LEVEL MATCH button takes loudness disparities completely out of the equation. This function balances all your reference tracks so they have the same perceived loudness, enabling you to make informed mixing decisions free from perception biases.
You get three level-matching options: Match To Original, Match To Quietest, and All To -14 Short-Term LUFS.
Match To Original, which is the default setting, matches the volume of your reference tracks to the volume of your mix. Match To Quietest matches every track, including your mix, to the quietest track to ensure safe headroom.
All To -14 Short-Term LUFS matches all tracks, including your mix, to -14 LUFS, thereby simulating the normalization algorithm used by most music streaming services. REFERENCE also gives you the ability to match the loudness of individual tracks or all tracks.
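The arithmetic behind loudness matching is straightforward, and it's worth seeing once: the required gain is simply the difference in LUFS, converted to a linear multiplier. This is a generic sketch of the principle (the function name is mine), not REFERENCE's internal code:

```python
# Loudness-matching sketch: the gain needed to bring a track to a target
# loudness is the LUFS difference, applied as a linear amplitude factor.

def match_gain(measured_lufs, target_lufs=-14.0):
    """Return (gain_db, linear_factor) to move a track to the target loudness."""
    gain_db = target_lufs - measured_lufs
    return gain_db, 10 ** (gain_db / 20)

# A master measured at -9 LUFS needs -5dB of gain to hit -14 LUFS:
gain_db, factor = match_gain(-9.0)
print(gain_db, factor)
```

This is also why streaming normalization turns a loudness war "win" into a plain turn-down: the louder your master measures, the more negative the applied gain.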
There's no reason for you to start back at square one every time you fire up REFERENCE. And, thanks to its robust preset system, you won't have to.
Once you've got REFERENCE loaded with your chosen reference tracks and tweaked to your preferred settings, you can create a user preset using the plug-in wrapper in your DAW, then recall it at a later time.
REFERENCE's presets are all-encompassing, retaining your reference tracks, loops, LEVEL MATCH settings, and more.
We've found that creating genre-specific presets is a huge time saver, enabling you to pull up your favorite time-tested reference tracks, loops, and settings every time you encounter a similar project.
REFERENCE is one of the easiest ways to elevate your mixes to professional standards. Keeping REFERENCE open during your sessions is a surefire path to creating pro-sounding music.
REFERENCE's indispensable insights make matching the true peak, loudness, EQ balance, punch, and stereo width of your reference tracks a piece of cake.
With REFERENCE in your plug-in arsenal, you'll no longer need to second-guess whether your vocals are too hot, whether your kick drum is too loud, or whether your bass is too quiet.
REFERENCE takes all the guesswork out of getting the wet/dry mix of your reverb, delays, and other special effects right. It also provides an effective reality check, enabling you to achieve a tonal balance and stereo image comparable to commercially successful recordings within your genre.
REFERENCE allows you to:
Reference tracks aren't a crutch; they're used by producers and engineers of all levels of experience, from bedroom music makers to first-call pros in world-class production facilities. It's one of the few tried-and-true methods of ensuring that your projects can compete with professional commercial releases.
Keep in mind that reference tracks are meant to be used as guides, not as a hard-and-fast rule book. You should never feel pressured to make choices that you don't want to make.
While reference tracks are a useful tool for training — and retraining — your ears, your ears should always be the final decision-maker when mixing and mastering.
That said, reference tracks are by far the most trustworthy of advisers. So, when your ears aren't sure which direction you should go, fire up REFERENCE for a reality check.
If you really want to level up your mixes, be sure to check out our entire stable of professional plug-ins and keep following our blog for expert, studio-tested production tips, tricks, and techniques.
In this blog post I'm going to explain some incredible approaches that can dramatically improve your master in situations where you thought all hope was lost.
The beauty of these techniques is that you might find yourself employing them with great success even when you could address the issues in the mix. Either way, these ideas will 100% level up your knowledge and authority when you come to master music, and hopefully get you out of some sticky situations.
The main reason why fixing things in the master is tricky is that the processes you implement will affect all the channels in the range you’re adjusting. So, if you want to add an EQ boost to the snare in the 5kHz range, you’re also going to hear the boost in the vocal. If the vocal was already sounding good, it’s now going to sound too bright.
Mid, side, left, or right focused EQ adjustments are the key to fixing problems with individual instruments.
You need to first identify where in the stereo field the issue is. For this example, let’s say we want to add more body to a weak sounding vocal.
Step 1: Create a ‘mid’ EQ band.
Step 2: Solo the band and adjust the Q and frequency to identify where the low-end of the vocal is.
Step 3: Boost the EQ whilst listening to the whole mix. Compare and match the body of the vocal to a reference track using our plugin REFERENCE for extra marks.
You can use this technique to boost or attenuate specific instruments too. The amount you can adjust by depends on the relative balance of instruments occupying the same range. For example, you could boost the mid channel in the 500Hz-3kHz range to bring up the vocal, as long as the snare and synths in that range aren’t too overpowering. I’ve found that I can usually adjust the vocal level in a master fairly transparently by around ±3dB before it starts sounding weird.
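If mid/side processing is new to you, the encode/decode matrix is simple enough to sketch. This toy example (function names and the full-band mid boost are mine; a real mastering EQ would restrict the boost to an EQ band) shows how left/right becomes mid/side and back:

```python
# Mid/side sketch: encode L/R into mid and side, gain the mid channel,
# then decode back to L/R. For clarity the whole mid channel is scaled;
# a real mastering EQ would band-limit the boost (e.g. to the vocal's body).

def ms_encode(left, right):
    mid = [(l + r) / 2 for l, r in zip(left, right)]
    side = [(l - r) / 2 for l, r in zip(left, right)]
    return mid, side

def ms_decode(mid, side):
    left = [m + s for m, s in zip(mid, side)]
    right = [m - s for m, s in zip(mid, side)]
    return left, right

def boost_mid(left, right, gain_db=3.0):
    factor = 10 ** (gain_db / 20)
    mid, side = ms_encode(left, right)
    return ms_decode([m * factor for m in mid], side)

L, R = [0.5, -0.2], [0.1, -0.2]
print(boost_mid(L, R))
```

Notice that anything panned dead center (identical in L and R) lives entirely in the mid channel, which is why a mid-focused boost reaches the vocal without touching wide elements.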
Hi-hat levels are a common issue, and can sometimes be very easily resolved. There are three reasons for this. Firstly, there are rarely any competing instruments in the hi-hat range. Secondly, the hi-hats are usually punchy and short, meaning you can dynamically control them without affecting the tonality or tail of any sounds in that range. And finally, they are often panned either in the mid or side channels.
Put all of these attributes together and you’ll find that you can use a multiband compressor to isolate the problem and fix it very transparently. Below are two different scenarios with equally effective solutions.
Scenario 1: Hi-hats are too loud and sit wide in the mix. The solution is to apply multiband compression focused on the hi-hat range in the side channel, with the compressor returning to zero reduction quickly after each hit.
Scenario 2: Hi-hats are too quiet and sit in the center of the mix. The solution is to apply upward expansion in the mid channel to lift the hi-hats out of the mix.
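Both fixes come down to the same static gain curve applied in opposite directions. The sketch below shows only the gain computer (the thresholds and ratios are arbitrary illustrative values, and a real multiband tool would also split the audio into bands and mid/side, with attack/release smoothing):

```python
# Simplified gain computers behind the two scenarios above:
# downward compression reduces level above a threshold,
# upward expansion raises level below one.

def compress_db(level_db, threshold_db=-20.0, ratio=4.0):
    """Downward compression: attenuate anything above the threshold."""
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

def upward_expand_db(level_db, threshold_db=-40.0, ratio=2.0):
    """Upward expansion: lift anything below the threshold toward it."""
    if level_db >= threshold_db:
        return level_db
    return threshold_db - (threshold_db - level_db) / ratio

print(compress_db(-8.0))        # a loud hi-hat hit gets pulled down
print(upward_expand_db(-52.0))  # a quiet hi-hat hit gets lifted up
```

The fast-release requirement in Scenario 1 corresponds to how quickly the real compressor lets the gain return to 0dB between hits, which this static sketch leaves out.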
When you’re mastering, try soloing the sides to see what's happening there. You can do that by clicking the ‘side’ button in LEVELS. Then head to the stereo field section and use the filter. Taking 10 seconds to do this will give you a really good understanding (both audibly and visually) of the mix you’re working with.
If you listen to the master and you get a sense that the reverb is overwhelming the mix, you’ll want to confirm the reverb's stereo position. The reverb might be in mono, in which case, sadly, there’s not much you can do. When you solo the sides like the example above, if you hear a lot of reverb relative to the other details in the side channel, then you’ll be able to reduce it.
Step 1: Create a side EQ band.
Step 2: Solo the band and adjust the Q and frequency to identify where the focus of the overwhelming reverb is.
Step 3: Reduce the EQ band whilst listening to the whole mix. Compare and match the reverb level to a reference track for extra marks.
Resonances plague your audio with unpleasant and irritating whistle-like noises. Removing them dynamically is the only way to do it transparently.
Resonances come and go in the mix. If you use a static EQ and cut the resonances then you’ll leave ‘non-resonant’ sections sounding hollow and weird.
Our plugin RESO automatically identifies the most resonant frequencies, then suggests an appropriate amount of dynamic reduction to solve the problem without making your mix sound weak or thin.
Transients poking through the mix too much? It can happen! It’s often a symptom of someone monitoring too quietly while mixing (they boost the punch to compensate).
Using a generic compressor to clamp down on these transients is a quick way to get your mix pumping up and down in volume in an unnatural way; though this might work if you just want to take the edge off your transients by 1dB or so.
I have two suggested approaches here.
Firstly, you could isolate the issue using a multiband compressor (like we did for the hi-hats earlier in the blog).
Alternatively you could use a specific transient tool, like Spiff or Split EQ to bring down the transients of your master. These tools are expensive but pay for themselves once they’ve saved you once or twice.
Introducing more punch into your mix is a little easier than controlling transients.
A simple solution is to use the PCH (punch) slider in our plugin LIMITER. It analyzes the transients in your mix and uses a musical and tasteful algorithm to enhance them in your mix.
For a more focused approach, you can use our plugin ANIMATE to focus the punch on a specific frequency range in the mix, whilst setting the mid-side adjustments as well as other transient related parameters.
Noise and hiss can be cool. They can add a lovely character and grit that gives the music charm and sets the tone.
But when it’s too much, it’s just annoying. Thankfully, it’s an easy fix, IF you have a snippet of the hiss isolated. This is either going to be at the beginning or end of the audio file when all other instruments have faded out. If you hear it, you’re good to go.
Step 1: Load a noise reduction plugin on your master channel.
Step 2: Monitor the isolated hiss and click ‘learn’ on your plugin.
Step 3: Adjust the reduction to taste.
You rarely need to completely remove it. Often just a slight reduction can really help.
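To show the learn-then-reduce idea in miniature, here is a deliberately crude sketch. It works on broadband level only, with made-up function names and thresholds; real noise reduction plugins learn and attenuate a full noise spectrum per frequency band, not a single level:

```python
# Learn-then-reduce sketch: measure the noise floor from the isolated hiss,
# then gently attenuate the signal only where it sits near that floor.
# Real plugins do this per frequency band via an FFT, not broadband.

def learn_noise_floor(hiss_samples):
    """'Learn' step: peak level of the isolated hiss snippet."""
    return max(abs(s) for s in hiss_samples)

def reduce_noise(samples, noise_floor, reduction=0.5, margin=2.0):
    out = []
    for s in samples:
        if abs(s) <= noise_floor * margin:   # likely just hiss
            out.append(s * reduction)        # slight reduction, not removal
        else:
            out.append(s)                    # leave the music untouched
    return out

hiss = [0.01, -0.008, 0.009]                 # isolated hiss from the fade-out
audio = [0.012, 0.5, -0.4, 0.005]            # quiet hiss plus loud music
floor = learn_noise_floor(hiss)
print(reduce_noise(audio, floor))
```

Note the `reduction=0.5` default mirrors the advice above: pull the hiss down, don't erase it, or the result starts sounding processed.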
This expert approach will surely knock your clients off their chairs…
If you have conflicting channels (kick and bass, or vocal and music), you might be able to solve the problem using AI stem separation (I use this one, which is the best I've tried so far).
Once you have the drums, bass, vocals and other elements separated in your session, you can process them individually.
Note that, when recombined, these stems will sound exactly the same as the mix you uploaded for stem separation. So even if the channels sound a bit weird individually, any processing you do should still sound natural when you combine them all together.
This opens up a world of possibilities outside the scope of this blog post, but I wanted to include it so it can be a solution to an otherwise impossible problem that you might face.
Some recording issues are irreversible and burnt into your track. You might need to embrace the sound and make it part of the character of the song. You wouldn't be the first.
Clipped audio is a common one. I sometimes see a vocal take during the chorus where the vocals just push past the limit and distort. It’s such a shame as it really does stick out like a sore thumb in the context of a mix.
Making a feature of it is a realistic solution. This means distorting the hell out of a phrase where the distortion is overloading so it’s hidden behind the effect. I digress, this is a mix fix rather than a mastering fix.
Whilst I’m digressing, another issue can be the vocalist being too far from the mic. This just makes the vocals sound distant. One way to make this seem purposeful is to drench the vocal in reverb so it becomes more atmospheric.
It may be possible to fix tuning and timing issues using the AI stemming approach. Without it, there’s not much chance of resolving them during mastering.
The mid-side adjustments I’ve suggested earlier in the blog have their limits. There comes a point where the adjustments are just too much and no longer transparent. There might be occasions when pushing past ±3dB works. I’ve mastered tracks for clients where I’ve needed to push the low-end up by around 10dB, and it works!
It’s only possible if the kick and bass have a good balance relative to each other, but together are quiet or loud in the context of the whole mix. If the kick was a good volume but the bass was too quiet, then pushing up the low-end would result in the kick being boosted too heavily.
It’s always best to have the high-quality WAV or AIFF files. There’s no reversing the information lost to a lossy file type such as MP3, AAC, or Ogg Vorbis.
I’ve mastered thousands of songs for my clients, and I feel like I’ve seen it all. I’ve faced such a huge variation of different issues and I’ve always found it a fun challenge to try and fix them.
It’s kind of like a gym workout for your mastering skills. The greater your ability to identify and resolve tricky problems, the easier it is when it comes to the more straightforward stuff.
If you find yourself in a bind in the future, search for ‘Difficult Mastering Problems And How To Solve Them’ on Google and this blog post will show up.
Gather a bunch of recording engineers together and ask them which is better: analog or digital. The pandemonium that follows is sure to be entertaining.
Needless to say, both analog and digital technologies have their own unique pros and cons. In this post, we'll dive into the advantages and disadvantages of each.
Prior to the mass adoption of digital recording technology, recording engineers relied on high-quality magnetic tape to capture sound. While analog technology gave us what are arguably the most coveted recordings ever created, the engineers of the day were in a constant struggle against limited dynamic range, hiss, dropouts, and wow and flutter.
When digital audio recording first became viable in the 1970s, its proponents extolled it as "perfect." Indeed, digital technology boasted amazing clarity and then-unprecedented dynamic range, plus it was devoid of the hiss, dropouts, and wow and flutter inherent in analog technology.
That said, while early adopters celebrated digital recording technology for its pristine, hiss-free sound, the format came with its own set of complications, such as aliasing, quantization distortion, jitter, and shrill-sounding AD/DA conversion. In fact, many engineers and audiophiles found — and still find — these issues more objectionable than the shortcomings of analog technology.
Analog, literally speaking, is an analog — a replica or representation — of something. In professional studios, magnetic tape is the highest-quality method for analog recording and playback.
With a tape machine, continuous changes in voltage function as an analog to changes in amplitude, which are stored as magnetic charges on the tape.
During recording, electrical audio signals are routed through a coiled wire surrounding a magnet, which is in close proximity to a reel of magnetic tape. This coil of wire surrounding the magnet is called the record head.
As the tape passes through the magnetic field created by the record head, the particles along the tape are magnetically charged. The magnitude of the magnetic charges along the tape is analogous to the changes in the amplitude of the audio signal sent through the coil of wire.
When the tape is played back, the reverse happens. The charges on the magnetized tape create continuous changes in voltage on the playback head, which can then be routed to an amplifier and played through a transducer.
Since analog systems rely on continuous physical measurements, their accuracy of representation is limited by how precisely your machine is calibrated, as well as by the characteristics of your storage media and playback system.
Unlike analog, digital recordings don't make use of continuous amplitude measurements. Rather, digital technology relies on samples, or static pictures of a sound at a specific point in time.
A single sample doesn't indicate changes in a waveform. Therefore, digital technology creates an approximation of a real-world sound by stringing multiple samples together.
During recording, a real-world sound is routed through an A/D converter, which converts analog voltage into digital information. This digital information can then be stored and manipulated in the same manner as any other computer data.
When played back, the digital signal is routed through a D/A converter, which converts the digital information back into analog voltage, which can then be sent to an amplifier and played back through a transducer.
Since digital recordings rely on momentary snapshots, their accuracy of representation is limited by the number of bits, as well as how many samples, that are employed.
Many analog adherents are quick to point out the theoretical shortcomings of digital audio while failing to recognize the physical limitations of analog audio.
To start with, the rate at which magnetic tape passes the record head affects the quality of the recording. For example, a faster tape speed (the pro standard is a brisk 30 ips) yields greater frequency response, less hiss, and shorter dropouts than a slower speed.
Beyond that, the width of the tape affects the quality of your recording. Wider tape allows for a higher-quality recording with fewer dropouts.
One of the biggest drawbacks to analog recording is its inherent self-noise, which results in an audible noise floor. What's more, exceeding the intended dynamic range of analog tape (which can be a rather nebulous number) results in a type of harmonic distortion called saturation.
It should be noted that although tape saturation detracts from a recording's accuracy and transparency, it creates an ear-pleasing sound that's actually desirable in some instances. Many digital recording systems include emulations of tape saturation, just to capture this sound.
Analog playback systems are also prone to mechanical imperfections resulting in unintended changes in frequency. This is known as wow and flutter.
The maximum dynamic range of a professional magnetic tape machine is typically less than 80dB, and while analog has no hard theoretical frequency ceiling, its upper limit reaches roughly 30kHz on really high-end systems in real-world applications.
Digital recording technologies have just as many sonic problems as analog. That said, the problems inherent in digital formats are much more predictable than with analog.
The dynamic range of a digital system is determined by its bit depth, with each bit representing roughly 6dB. Thus, a 16-bit system will yield 96dB of dynamic range and a 24-bit system will yield 144dB of dynamic range.
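The "6dB per bit" rule of thumb comes straight from the math: each extra bit doubles the number of amplitude steps, and a doubling in amplitude is 20·log10(2) ≈ 6.02dB. A quick sketch (function name is mine, for illustration):

```python
import math

# Each bit doubles the amplitude resolution; 20*log10(2) ~= 6.02dB,
# so dynamic range scales linearly with bit depth.

def dynamic_range_db(bits):
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))  # ~96.3dB for CD-quality audio
print(round(dynamic_range_db(24), 1))  # ~144.5dB for 24-bit recording
```

This is why the commonly quoted 96dB and 144dB figures are slight round-downs of the exact values.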
While digital systems boast a greater dynamic range than their analog counterparts, exceeding this range doesn't result in a pleasant harmonic saturation. Rather, you end up with hard digital clipping, which causes an ear-splitting, very undesirable form of distortion.
In other words, analog systems are still usable if you exceed their intended dynamic range, while digital systems are NOT usable if you exceed their intended dynamic range.
Digital systems are also limited by their sampling rate, which is governed by the Nyquist Theorem. The Nyquist Theorem states that the maximum frequency range of a digital system is one-half its sample rate.
Therefore, a digital system with a 44.1kHz sampling rate will be limited to a roughly 22kHz maximum frequency, a 48kHz system will be limited to a 24kHz maximum frequency, and a 96kHz system will be limited to a 48kHz maximum frequency.
Exceeding these upper frequencies results in what's known as aliasing, where content above the Nyquist frequency folds back down and appears as false tones within the audible range.
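The folding behavior is easy to compute. This small sketch (function name is mine; the folding formula is the standard one for real-valued signals) shows where an out-of-band tone lands:

```python
# Aliasing sketch: a tone above the Nyquist limit (half the sample rate)
# reappears mirrored back into the band below Nyquist.

def alias_frequency(freq_hz, sample_rate_hz):
    nyquist = sample_rate_hz / 2
    f = freq_hz % sample_rate_hz          # wrap into one sampling period
    return sample_rate_hz - f if f > nyquist else f

# A 30kHz tone recorded at 44.1kHz folds down to an audible 14.1kHz:
print(alias_frequency(30_000, 44_100))
```

An in-band tone passes through unchanged, which is exactly why the anti-aliasing filters described next only need to act above Nyquist.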
To prevent aliasing, digital systems employ anti-aliasing filters in their A/D stage that prevent audio above the Nyquist frequency from entering the system. This, of course, means that digital systems purposely exclude some real-world audio information (albeit outside the realm of normal human hearing) during the recording stage.
Lastly, digital systems are prone to jitter, which refers to an instability in the clock speed of sample playback. This can result in unwanted clicks, pops, and other errors.
Most pro-level systems employ high-quality digital clocks that effectively safeguard against jitter.
In real-life studio applications, it's nearly impossible to argue against the superiority of modern digital recording.
Analog tape records audio in a linear fashion. Thus, if you want to locate a specific point in a recording, you need to rewind and/or fast-forward to that specific spot on the physical medium.
Likewise, if you want to make edits to an analog recording, you need to physically splice the tape with a razor blade and reassemble it. Worse yet, if you make a mistake, there are no easy solutions to reverse it.
In a modern DAW, life is much easier. You can copy, cut, and paste at will with a simple keyboard command, and if you make a mistake, you can hit the "undo" button — everything is non-destructive.
Moreover, digital processors can accomplish feats that engineers in the analog age never dreamed of. For example, our MIXROOM plug-in analyzes your audio, then provides you with a logical starting point from which you can tweak your track.
Back in the day, you had to suffer a lot of trial and error with a hardware equalizer to get to the same logical starting point.
Likewise, our RESO plug-in identifies problematic resonant frequencies, then guides you on how much reduction is necessary to dynamically resolve them. This is so much faster and easier than the old-school "sweep and destroy" method, it barely merits a comparison.
The power of the latest DAWs and plug-ins, as well as their ability to perform functions that we couldn't even imagine in decades past, is undeniable. Beyond that, plug-ins fit into your laptop, allow dozens of instantiations, and are instantly recallable.
Make no mistake, analog hardware sounds great. It has an almost magical warmth that's difficult to achieve with digital processing.
Digital plug-ins don't include real transistors, photocells, capacitors, or transformers; they can only mimic them. That's why running your mix through a couple of hardware Pultecs — even with the controls at zero — makes everything sound better.
Doing the same thing with software doesn't quite deliver the same effect, as plug-ins don’t contain the same magic fairy dust as hardware (it’s not really magic — like we noted previously, it’s transistors, photocells, capacitors, and transformers).
Plus, there's something to be said for the visceral feeling of pushing real buttons and turning real knobs on a piece of analog gear.
But, what about analog-modeled plug-ins? Do they sound like real hardware?
That’s difficult to answer definitively. After all, your real-world hardware 1176 probably doesn't sound the same as everybody else's hardware 1176s.
For the most part, each individual hardware unit has its own unique — albeit subtly unique — sound, even if they came off the same assembly line. Beyond that, as analog components age, they tend to sound different over time.
So, which sounds better? The truth is one doesn't sound better than the other, and a quick look into any professional engineer's studio demonstrates that pros use both.
World-class engineers use what works for them, so we recommend that you use what works for you.
Everything ages, but not everything ages gracefully. Case in point: software.
When you update your computer's OS, your plug-ins can stop working. Then you update your plug-ins to work with your DAW. Then the plug-ins stop working with your hardware.
The only way to avoid this is to use an old DAW and old plug-ins on an ancient computer.
The great thing about analog hardware is that it does the same thing today that it will do 20 years from now. Indeed, microphones, XLR cables, and hardware preamps and compressors still do what they were designed to do.
That said, while analog hardware may be future-proof, it's not guaranteed to work forever — it still needs maintenance. After all, transistors blow, resistors fry, and capacitors dry out.
And there's no guarantee that these worn-out components will still be available after your gear craps out on you. This means you'll need to replace them with their "modern equivalents," which may or may not sound the same as the vintage parts.
It's a great time to be a recording engineer. Analog and digital technologies have never been as powerful — and as affordable — as they are right now.
Keep following our blog for more tips, tricks, and musings about recording, mixing, and mastering!
Are you tired of spending countless hours in the studio perfecting your mix, only to end up with lacklustre results after mastering?
Do you want to take your music to the next level and achieve that professional, polished sound that you’ve been striving for? Look no further!
In this comprehensive guide, we’ll show you the essential steps you need to take to prepare your music for mastering. Whether you’re a seasoned pro or just starting out, this guide will give you the tools and knowledge you need to achieve the best results possible.
You’ll see some of our plugins get mentioned; they’ve helped countless music producers get better sounding masters, and I know they can do the same for you too. Download the free trials for Mac / Windows and follow along with the instructions in this guide.
Mastering is the process of taking a single file of a final mix and preparing it for release. The primary task is to adjust the overall loudness of the song and correct any tonal balance imperfections. It’s also common to apply additional processing to enhance the sound as much as possible.
As an artist about to release a song, you have a number of options for getting your track mastered. You can do it yourself (see our complete guide to mastering your own music here), hire a pro from a range of great engineers on Soundbetter, or use an automated online mastering service.
All options involve trade-offs between price, speed, and quality. If you’re an experienced producer, learning to master yourself is all part of the process. If you’re serious about progressing your music career and have the budget, then hiring a pro can give you reliably great results. If you need a master fast, have a limited budget, and don’t feel confident doing it yourself, then automated mastering would be ideal for you.
With any kind of mastering, the following rule applies: garbage in, garbage out. Mastering applies broad, general processing to your music, so the better your mix sounds the better the master will sound.
The advice in this eBook will help you elevate your mix and improve your chances of getting a great sounding master. Before moving forward, your mix should already be 90% ready for release. You can use this checklist to make sure you’re ready to move forward:
Simple Final Mix Checklist (tick each item off as you go):

- Happy with songwriting, arrangement and sound choices.
- The song is interesting and engaging from start to finish.
- The arrangement is uncluttered and musical ideas are clear.
- Channel volumes are balanced well.
- No timing issues.
- No tuning issues.
- All clicks, pops and unwanted noise eliminated from recordings.
- Fades added where needed.
- Low-cut filters applied to remove unwanted low-end (e.g. on vocals).
- Masking minimized using EQ and sidechain ducking.
- Kick and bass low-end mono between 0-100Hz.
- Lead vocal in the center with stereo effects.
- Snare in the center.
- Reverb and delay levels checked on headphones and monitors.
- Any channel that doesn't improve the overall sound deleted.
- You can enjoy the music without hearing anything that pulls you back into 'mix engineer' mindset.
One of the most effective and reliable ways to elevate the sound of your music is to compare it to reference tracks that you love the sound of. When you A/B between your mix and the reference track you can identify any differences and make adjustments so your mix sounds comparable.
Whether you have high-end studio monitors with acoustic treatment or just a pair of headphones, if your mix sounds comparable to the reference track, it should sound good wherever it’s heard.
It can be frustrating not knowing exactly how to get your music comparable to your favorite mixes. So, Mastering The Mix created REFERENCE to help you compare the original production you’re working on in your DAW to your chosen reference tracks. If your mixes don’t sound as professional as commercially successful songs, REFERENCE will give you the tools and guidance to resolve this.
Load REFERENCE as the final plugin on your master channel. You can drag and drop your favourite songs into the UI, then jump back and forth between your original and your reference. We recommend that you loop the chorus as a starting point.
Before making any mix decisions when using reference tracks, it’s crucial that you match the perceived loudness of your reference track to your mix. Your mix will be much quieter than the mastered reference tracks. The way we humans perceive sound gives us the illusion that louder music sounds richer in the low-end with more clarity in the high end. This makes it virtually impossible to fairly compare the tonal balance of two songs at different volumes. Skipping this step can lead to disastrous EQ curves that butcher the sound of the music.
Our plugin REFERENCE automatically matches the perceived loudness of your production to your reference track in real time, allowing you to make informed decisions when comparing the sound.
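To make the principle concrete, here is a minimal sketch of loudness matching. It is an assumption-laden simplification: it uses plain RMS level as the loudness estimate rather than the K-weighted LUFS measurement a plugin like REFERENCE would use, and it treats audio as lists of float samples in the -1 to 1 range.

```python
import math

def rms_db(samples):
    """Approximate loudness as the RMS level in dB (floats in -1..1).
    Real loudness meters use K-weighted LUFS; RMS is a rough stand-in."""
    mean_square = sum(s * s for s in samples) / len(samples)
    return 10 * math.log10(mean_square) if mean_square > 0 else float("-inf")

def match_loudness(reference, mix):
    """Gain-adjust the reference so its level matches the quieter mix,
    allowing a fair A/B comparison of tonal balance."""
    gain_db = rms_db(mix) - rms_db(reference)
    gain = 10 ** (gain_db / 20)  # convert dB offset to a linear multiplier
    return [s * gain for s in reference]
```

The key point the sketch illustrates: the reference is turned down to the mix's level (not the other way around), so you compare tone rather than volume.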
When choosing a reference track, try to choose a track in the same genre you’re working in. Also try to find a track with similar instrumentation and sounds to your production. There will be differences of course, but choosing a reference track that is a great representation of your sonic goal will help you make the best decisions for your track. Click here to see our extensive list of reference track suggestions for various genres.
Once you’ve selected a great reference track, you can use the Trinity Display in REFERENCE to see how your tonal balance, stereo spread and punch compare to your reference track. The white Level Line shows you the exact EQ adjustment to apply to your mix to get your music sounding like your reference track. This takes out the guesswork and helps you dial in a pro sound. If the Level Line is within 3dB, the two tracks have a very similar tonal balance; anything above 6dB indicates a considerable difference in tonal balance.
The Level Line is an absolute game changer when it comes to elevating your mix, for a more in-depth guide on how to use this tool, check out our guide: The Most Powerful Feature In REFERENCE.
The stereo width display shows you how wide the audio is across the entire frequency range of either the original track you’re working on or your reference. Understanding the stereo spread of both your original track and your reference will help you get closer to the sound you want. You can use panning in your DAW or a stereo tool to reduce or increase the width of a specific frequency range in your track. When using REFERENCE, adjust the stereo width of your mix until the stereo width visual is similar when you A/B your mix and reference track.
The punch dots in the Trinity Display will tell you how the short-term dynamic range of your track compares to the short-term dynamic range of your reference. The punch dots will move towards the 0dB line if that frequency range is more compressed in your original than in the corresponding frequency range of your reference.
The punch dots will move away from the 0dB Line if that frequency range is less compressed in your original than in the corresponding frequency range of your reference. The more opaque the dots are, the larger the difference between the punch of your track and the punch of the reference. Fine-tuning the amount of punch will give your track the right energy without it sounding weak.
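A simple way to reason about "punch" in code is the crest factor: the ratio of peak level to average (RMS) level. This is a hedged simplification of what a short-term dynamic range meter does, not how the punch dots are actually computed, but the intuition carries over: heavily compressed audio has a low crest factor.

```python
import math

def crest_factor_db(samples):
    """Peak-to-RMS ratio in dB: a rough proxy for punch.
    Squashed, over-compressed audio scores low; transient-rich audio scores high."""
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(peak / rms)
```

A full-scale square wave (maximally compressed shape) measures 0 dB, while a sine wave measures about 3 dB; real drum transients sit far higher.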
Where possible, I would suggest making the tonal balance, stereo and compression adjustments to individual channels rather than the master channel. This gives you more control over the sound and greater flexibility if adjustments are needed further down the line.
By this point, you should be really happy with how your music is comparing to your reference tracks. Your mix should have a great tonal balance, a great stereo spread, and it should have the right amount of energy.
You get into ‘the zone’ when mixing a track, and it’s not always practical to be constantly thinking about the technical details. So, before bouncing out the final mix, it’s worth spending some time to make the necessary adjustments to prepare the audio for mastering. Mastering The Mix created a metering plugin called LEVELS that not only gives you the technical details of your audio but also identifies problems with your music's peaks, loudness, dynamic range and stereo spread.
We’ll start with the peak, which in the context of mixing relates to ‘headroom’. The headroom is the space between the highest peak of the audio and 0dBFS (decibels full scale). Leaving headroom during mixing is a good practice as it keeps the dynamics and transients of the audio unimpaired.
When mastering, the headroom gives you the flexibility to adjust the EQ without potentially pushing the audio too loud into the following plugins. Open up your metering plugin, then preview the loudest section of the mix and use a gain plugin (as an insert on your master channel) to adjust the peak of the audio to anywhere between -3dB and -6dB on your peak meter.
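The arithmetic behind that gain adjustment is straightforward, and a minimal sketch makes it explicit. The helper names below are illustrative, not from any particular plugin; samples are assumed to be floats in the -1 to 1 range.

```python
import math

def peak_dbfs(samples):
    """Highest absolute sample value expressed in dBFS (0 dBFS = full scale)."""
    peak = max(abs(s) for s in samples)
    return 20 * math.log10(peak) if peak > 0 else float("-inf")

def gain_for_target_peak(samples, target_dbfs=-4.0):
    """dB of gain to dial into a gain plugin so the loudest peak
    lands at the target level (pick a target between -3 and -6 dBFS)."""
    return target_dbfs - peak_dbfs(samples)
```

For example, a mix peaking at -6 dBFS needs about +2 dB of gain to sit at -4 dBFS, and a mix peaking at -1 dBFS needs -3 dB to reach the same target.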
The next thing to consider is loudness. There should be no heavy compression or limiting on your master channel. These irreversible processes will reduce the dynamic range and will reduce the flexibility during mastering. If you have a compressor adding some subtle glue to your mix that you feel is integral to the sound of your song, then it can remain. But be sure that it’s not clamping down too hard on your transients. LEVELS uses LUFS (Loudness Units relative to Full Scale) to measure loudness. Keeping your mix quieter than -16LUFS will ensure that you’ve retained the dynamic and loudness range of your music.
The dynamic range section of LEVELS gives you a visualization of the punch of your music. If your music is punchy, the oscilloscope will glow green. If you’ve over compressed your music, it will turn red.
A small dynamic range can come about from over compression and limiting. This can suck the life out of your music. It can also reduce the clarity and punch of the transients in your music. So to get a great final mix, you'll need to make sure that the dynamic range is on point.
To get a more dynamic mix, look at the compressors and limiters in your session and make sure they aren't working too hard. I usually don't compress any material with a ratio exceeding 4:1. If parts of your mix sound too quiet, automate the levels instead and then apply lighter compression. Mastering can lower the dynamic range of your track, so it's better for your mix to be too dynamic than not dynamic enough.
Next up is checking the technical details relating to stereo spread. The vectorscope in LEVELS displays your audio's placement in the stereo field. When the image is spread out across the circle, your mix is wide. When the image is simply one thin line down the middle, your mix is mono.
The L-R [Left Right] meter at the bottom of the central display area shows how even the left and right outputs are. If the pointer is central, your music is evenly balanced between the two speakers. If the pointer hovers too far to the left or right, it will glow red, informing you that the output of your mix is unbalanced. You need to balance the stereo field evenly to have a solid mix that translates well in all environments. Have a look at the core elements of your track and make sure the panning isn't dominating one side over the other.
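Under the hood, a balance check like this boils down to comparing the average level of the two channels. Here is a minimal sketch of that idea (illustrative helper names, RMS-based, floats in -1 to 1), not the actual metering algorithm in LEVELS.

```python
import math

def rms(samples):
    """Root-mean-square level of one channel."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def lr_balance_db(left, right):
    """Level difference between channels in dB:
    negative = left-heavy, positive = right-heavy, 0 = evenly balanced."""
    return 20 * math.log10(rms(right) / rms(left))
```

A reading within a dB or so of zero is usually nothing to worry about; a persistent offset of several dB suggests panning that favors one side.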
I would suggest using a stereo placement plugin to place the low frequencies of your kick and bass in mono up to at least 80Hz and possibly as high as 300Hz. Play around with these settings to see what gives you a solid sound without making the stereo image too narrow.
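Conceptually, mono-ing the low end means splitting each channel into a low band and a high band, summing the two low bands to mono, and leaving the highs stereo. The sketch below is a toy illustration of that signal flow using a one-pole low-pass filter; a real stereo placement plugin would use a proper phase-coherent crossover, so treat this as a teaching aid, not a mastering tool.

```python
import math

def lowpass(samples, cutoff_hz, sample_rate=44100):
    """Simple one-pole low-pass filter: a rough band split,
    not a mastering-grade crossover."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = dt / (rc + dt)
    out, y = [], 0.0
    for s in samples:
        y += alpha * (s - y)
        out.append(y)
    return out

def mono_bass(left, right, cutoff_hz=100, sample_rate=44100):
    """Collapse content below the cutoff to mono while leaving highs stereo."""
    low_l = lowpass(left, cutoff_hz, sample_rate)
    low_r = lowpass(right, cutoff_hz, sample_rate)
    out_l, out_r = [], []
    for l, r, ll, lr in zip(left, right, low_l, low_r):
        low_mono = (ll + lr) / 2           # sum the low bands to mono
        out_l.append((l - ll) + low_mono)  # stereo highs + mono lows
        out_r.append((r - lr) + low_mono)
    return out_l, out_r
```

Note what this does to out-of-phase bass: low-frequency content that differs between left and right cancels toward silence when summed, which is exactly why wide, phasey sub-bass sounds weak on mono club systems.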
These are the core technical details that must be addressed when creating a final bounce for mastering. Click here to go more in-depth and learn about phase and loudness range.
The final step is to export your audio so you have your new mix file ready for mastering. You’ve worked so hard up until this point; it’s important not to undo your work by bouncing your audio incorrectly.
If you’re sending your music to a mastering engineer, be sure to leave a few seconds of silence at the beginning and end. This allows them to create the fade-in and fade-out. If you’re uploading your track for automated online mastering, you’ll need to apply your own fade-in and fade-out exactly as you’d like them on the final master.
Select Wav or Aiff as the option for a high quality, lossless bounce.
The sample rate should remain the same as your original project session. There are no benefits to increasing the sample rate if the audio was not originally recorded and produced at a higher rate.
The bit depth/resolution refers to the dynamic range scale of the audio. 24-bit is the most widely accepted resolution for online distribution. 16-bit is the format needed for CD and some online platforms that have not yet adopted 24-bit. If your DAW offers 32-bit, this will give your audio file a more precise dynamic range, but otherwise 24-bit is great.
Dithering is a technical topic, but in a nutshell, it’s only necessary to dither the audio if you’re reducing the bit depth during mastering. So if your mix file was 24bit, and you want to create a 16bit master, you would need to dither. If your mix was 24bit and you’re bouncing a 24bit master, you don’t need to dither. In either case, leave dithering off when exporting your mix.
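For the curious, here is what dithering actually does in code: before rounding float samples down to 16-bit integers, a tiny amount of triangular (TPDF) noise is added so that quantization error becomes benign hiss instead of correlated distortion. This is a hedged, minimal sketch of the standard technique, not any particular DAW's dither algorithm, and it ignores noise shaping.

```python
import random

def dither_to_16bit(samples, seed=0):
    """Quantize float samples (-1..1) to 16-bit integers with TPDF dither.
    The triangular noise is the difference of two uniform random values,
    spanning roughly +/- 1 least-significant bit."""
    rng = random.Random(seed)  # seeded here only so the sketch is reproducible
    out = []
    for s in samples:
        noise = rng.random() - rng.random()     # triangular PDF
        q = round(s * 32767 + noise)            # scale and round to integer
        out.append(max(-32768, min(32767, q)))  # clamp to the 16-bit range
    return out
```

If you were keeping the audio at 24-bit, you would skip this step entirely, which is exactly the "leave dithering off when exporting your mix" advice above.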
Normalization raises the volume so your audio is peaking at 0.0dB. You’ve already controlled the peak level using the gain plugin on your master channel so you don’t want to apply any normalization.
Offline bouncing when mixing is usually fine and more time-efficient. Real-time can be useful if you want to have another final listen to your mix.
Once you’ve got your mix bounced out, listen to the mix in multiple locations. Check it on as many pairs of speakers and headphones as you can get your hands on. The car test is always great, as the acoustics in a car are usually pretty good, many people listen to music in cars, and you will have a good idea of how music sounds in your own car. Listening through smaller speakers such as your laptop and phone is also a great way to check your bass doesn’t get lost on smaller speakers.
Do your best to compare your mix to other tracks (especially your reference track). Level matching is not easy in this context, but try to balance out the volumes between your mix and the reference tracks as best as you can.
Make notes if you feel any adjustments are needed, and make the changes in your mix session. Many producers repeat these listening tests a number of times before settling on a final mix, so don’t be disheartened if your mix isn’t perfect the first time. Rinse and repeat until you’re happy; it will be worth it.
When you're ready, send the mix to your mastering engineer, start the master yourself, or upload to your chosen online mastering service.
Once you’ve got your mastered file back, you’ll want to check that you’re happy with the sound. There will be subtle (or sometimes not-so-subtle) changes that happen during mastering. Hopefully the changes are an improvement on the sound, but it’s important that you check.
Mastering The Mix created an audio quality control application called EXPOSE 2 that helps you identify issues with your music. So many tracks get distributed with a poor EQ balance, true peak clipping, over-compression, phase issues, and more. We want you to catch these issues before it's too late.
Open EXPOSE 2 (it’s a standalone app, so you won't need a DAW) then drag and drop your mastered audio file into it. Select the preset based on your distribution method (streaming, club, CD, etc). Any identified issues will be highlighted in red.
The analysis feedback gives you specific guidance on how to resolve the issues and optimize your music for release within a DAW. This empowers you to give the necessary feedback to whoever mastered your track. Automated online mastering is less flexible, though you may have the option to choose a different preset.
A great way to understand how the mastering has affected your track is to load both your mix and your master into EXPOSE 2. You can click level match to make them both the same perceived loudness, then jump between to compare the sound. You’ll also see the tonal balance differences in the EQ compare section.
You can also do the same with your reference track. A/B test the sound and check the tonal balance using EQ compare. Anything within 3dB would show a very similar tonal balance, while a difference above 6dB represents a fairly different tonal balance. For a complete guide on using the tonal balance tool in EXPOSE, check out our guide: Compare EQ - The Ultimate Tonal Balance Tool.
Repeating the listening test in the chapter ‘Checking Your Final Mix’ with the master file is a great way to be sure that you’re totally happy with how your music sounds before committing to releasing your music.
By following these steps, you’ll have a bulletproof approach to getting your track mastered and having total confidence in the music you’re releasing to the world!
Max Out Your Computer's Specs
If your analog front end is the heart of your studio, then your computer is its brain. Therefore, it's important that you keep it running as quickly and smoothly as possible.
At a minimum, you'll want to make sure that your computer's specs adhere to — or better yet exceed — the operating requirements of the software you're installing on it.
Beyond that, you'll want to ensure that your computer's processor is fast enough to meet the burden you're placing upon it. Since the processor is responsible for executing pretty much every function of your computer, we recommend that you get the fastest processor you can get your hands on, with as many cores as possible.
You'll also want to pack your computer with as much RAM as it can hold. Because RAM acts as a go-between between your processor and hard drive, it enables your computer to access information more quickly.
What's more, since many virtual instruments store samples in your system's RAM when you're playing them, producers who employ lots of soft synths will absolutely need to max out their computer's RAM.
Finally, since audio files require a massive amount of storage, you'll want to get your hands on as much hard drive space as possible.
For optimal performance, we recommend solid-state drives. For starters, SSDs are much faster than their mechanical counterparts, and since they don't contain moving parts, they're much better able to withstand the technical demands of media production.
As for the Mac vs. PC debate — it really doesn't matter. Use the computer that best matches your needs, workflow, and software requirements.
Don't Skimp on Your Audio Interface
If your computer is your studio's brain, then your audio interface is its circulatory system — it's the conduit that allows audio into and out of your entire recording rig. Therefore, you'll want to ensure that your interface truly fits your needs.
For starters, you'll want an interface with a channel count that can accommodate the number of individual tracks you'll want to record and play back simultaneously. You'll also need to ensure that it has enough built-in mic preamps for your needs, as well as enough headphone outputs.
You'll also need to make sure that the interface's connection type (i.e., USB 3, Thunderbolt) is compatible with your computer. And, if you're planning to grow your studio over time, be sure that your interface has one or more ADAT ports for expanding its I/O.
Finally, if you're interested in specific features, such as advanced monitoring capabilities, built-in DSP or modeling, or Dolby Atmos compatibility, you'll want to upgrade to an audio interface with those features.
Use the Best DAW for Your Needs
If you're trying to craft professional-level recordings with the feature-limited recording software that came free with your audio interface, you'll eventually hit a brick wall. And, since your DAW is smack-dab at the center of your studio, it's important that you find the one that best fits your goals and workflow.
Avid Pro Tools
If you're searching for a DAW that harkens back to an old-school analog studio, that's Avid Pro Tools in a nutshell. Pro Tools has long dominated high-end professional recording studios, as well as post-production houses, and its decades-long track record speaks for itself.
Pro Tools is available in three subscription-only tiers aimed at recording musicians, project studios, and professional studios, respectively, and it comes with all the features you'd expect in a top-tier DAW. If you record a lot of live audio and/or you need lock-tight compatibility with professional studios, Pro Tools is a great way to go.
Steinberg Cubase/Nuendo
Originally released as a MIDI-only sequencer in the late 1980s, Steinberg Cubase is one of the industry's oldest DAWs. Today, Cubase is extremely popular among composers, including the legendary Hans Zimmer, although its advanced tools are a great fit for any genre.
Steinberg Nuendo is similar to Cubase, although it's optimized for post-production environments. Thus, Nuendo is frequently employed by film mixers and editors, as well as video game sound designers.
Cubase and Nuendo also include built-in Dolby Atmos renderers, making them great all-in-one solutions for working with immersive audio.
Apple Logic Pro
Another longstanding player in the DAW game is Apple Logic Pro. The perfect step up if you've been using GarageBand on your Mac, Logic Pro is arguably the most all-inclusive DAW software available, thanks to its robust collection of included plug-ins, virtual instruments, loops, and samples.
Logic Pro is also, at least at the time of this writing, the only way to mix in Apple's proprietary Spatial Audio format, which is how most everybody with an iPhone, Apple Music, and a set of AirPods is going to hear your music.
MOTU Digital Performer
MOTU Digital Performer is another mature DAW with a rich history. Digital Performer's refined design makes it very stable and CPU efficient, and its built-in instruments, effects, and notation capabilities are truly top notch.
While its audio features are second to none, Digital Performer's MIDI features are what really set it apart — especially if you use MIDI-equipped hardware. If you're a hardware synth geek, Digital Performer is the DAW for you.
PreSonus Studio One
PreSonus Studio One is one of the newer DAWs on the market, but it has quickly become one of the most popular. Studio One is frequently lauded for its streamlined, all-in-one approach to music production, which endeavors to tackle recording, producing, mixing, mastering, and performing with equal aplomb.
Studio One is also praised for its easy-to-use drag-and-drop workflow. Studio One is available in three tiers, including a subscription-based offering, along with a professional version and a lighter version aimed towards home recordists.
Cockos REAPER
If you're searching for a highly customizable DAW with an extremely robust online community, you'll be duly impressed by Cockos REAPER. REAPER boasts an unprecedented level of functionality; thus, it can take a while to wrap your head around it.
That said, its features, audio engine, and all-out performance are top shelf all the way. REAPER is also one of the most cost-effective DAWs out there, with a heavily discounted, yet fully functional, option for home recordists and non-commercial studios.
Image-Line FL Studio
There are many reasons why hip-hop and EDM producers flock to Image-Line FL Studio. For starters, its handling of loops is unmatched, thanks to its ultra-powerful pitch correction and time manipulation.
What's more, its included soft synths, effects, and other audio tools are truly second to none. To top it off, all four tiers of this DAW come with a lifetime license — no subscription, no paid upgrades — your one-time purchase gives you the software and all future upgrades forever!
Ableton Live
Musicians who want to use their DAW in a live-performance setting will find everything they're searching for in Ableton Live. Ableton Live is among the fastest, most fluid, most flexible DAWs out there, and it comes with all the effects, instruments, and sounds you could ask for.
You can use Ableton Live as a traditional linear DAW, but its real power is in its ability to function nonlinearly, in real-time. This allows you to move effortlessly between musical elements and play with ideas without stopping — a must have for live performances.
When Your DAW's Stock Plug-ins Aren't Enough
The plug-ins included with every major DAW give you a quality cache of bread-and-butter effects — you can get lots of mileage out of them. In fact, you'll find that most stock plug-ins will cover your basic needs quite well.
That said, if you really want to take your productions to the next level, you'll want to get your hands on a good batch of high-quality third-party plug-ins with greater flexibility and advanced features.
EQ
EQ is arguably the most important item in a music producer's toolkit; it's what you use to make the disparate elements of your mix fit together cohesively. While your stock plug-ins will easily tackle all your basic tone-shaping needs, EQ is definitely an area that you don't want to skimp on.
For an excellent, go-to workhorse EQ, our MIXROOM plug-in checks all the right boxes. This intelligent processor analyzes your audio and shows you which frequencies to adjust, providing you with a solid starting point for your tracks.
What's more, our BASSROOM plug-in helps you nail your mix's low end by providing you with genre-specific EQ adjustments based on techniques employed by professional engineers.
You'll also find our RESO plug-in to be a bona fide mix saver. This dynamic EQ identifies problematic resonant frequencies and helps you resolve them — automatically — with absolute transparency.
Compression
Compressors are oft-used — and absolutely essential — studio tools, second only to EQ in importance. These processors are designed to rein in overly dynamic sources and to add punch and loudness to your tracks.
While your DAW's stock plug-ins will give you transparent dynamics control, to give your mixes a big-studio sound, you'll want to get your hands on analog models of the tried-and-true hardware compressors that pros rely on.
1176
The UREI 1176 is among the most oft-used hardware compressors in professional studios. Technically a FET limiting amplifier, the 1176 offers a lightning-fast attack, unmistakable sonic coloration, and its infamous "all-buttons-in" mode can be heard on drum buses in countless professional mixes.
There are many 1176 plug-ins out there, with offerings from Universal Audio, Waves, IK Multimedia, and Slate Digital being among the most popular. And, for the ultimate in all-buttons-in-mode cacophony, it's hard to top the Pulsar Audio Smasher.
LA-2A
The Teletronix LA-2A is another time-tested studio staple. Described as a leveling amplifier, the LA-2A employs an opto-compressor circuit to deliver gentle dynamics control while adding tube-fueled girth to your tracks.
Like the 1176, the LA-2A has been modeled to death. And, like the 1176, the most popular emulations are from Universal Audio, Waves, IK Multimedia, and Slate Digital.
Limiting
Technically speaking, a limiter is a compressor with an infinite ratio. These processors are placed at the end of your master bus signal chain to add volume while safeguarding against digital clipping, which is guaranteed to completely destroy your mix.
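That "infinite ratio" definition can be shown directly with a static gain curve. The sketch below is a simplified gain computer (no attack/release smoothing, no lookahead, illustrative names), but it demonstrates that as a compressor's ratio grows without bound, its curve converges on a hard limiter's ceiling.

```python
def compressor_gain_db(level_db, threshold_db, ratio):
    """Static compressor curve: below the threshold, no gain change;
    above it, the output rises only 1 dB for every `ratio` dB of input."""
    if level_db <= threshold_db:
        return 0.0
    return (threshold_db - level_db) * (1 - 1 / ratio)

def limiter_gain_db(level_db, ceiling_db):
    """A limiter is the ratio -> infinity case: nothing exceeds the ceiling."""
    return min(0.0, ceiling_db - level_db)
```

For example, a signal 6 dB over a -12 dB threshold through a 4:1 compressor gets -4.5 dB of gain reduction (ending up 1.5 dB over the threshold), while the limiter pulls it all the way down to the ceiling.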
When you want to pump up your mix, you'll have to look long and hard to surpass our LIMITER plug-in. Not only does it deliver world-class results, but it also actually analyzes your audio and gives you suggestions to make your music sound its best.
Reverb/Delay
Unless you're shooting for a dry-as-toast mix, you're going to want at least one high-quality reverb plug-in. Deploying a top-quality reverb like Valhalla Plate or FabFilter Pro-R is an effective way to add a pro-level sheen to your mixes.
It's also a good idea to keep a convolution reverb on hand, which is designed to replicate real-world spaces with spot-on authenticity. Want to place your drum kit in a virtual re-creation of Galaxy Studios? With a convolution reverb, you can.
Although it's relatively expensive, Audioease Altiverb has been the industry-standard convolution reverb for about two decades.
Don't Forget These Indispensable Utilities
Besides our outstanding effects plug-ins, we've also developed software tools that are guaranteed to become an indispensable part of your creative workflow.
Our REFERENCE plug-in gives you a dead-simple way to compare your mixes with your favorite professional mixes. REFERENCE's robust features will give you powerful insight into your mix, enabling you to get closer to the sound of your favorite music than you ever thought possible.
When you're mixing, effective metering is essential. That's where LEVELS comes in. LEVELS provides you with presets for CD, Club, Streaming, YouTube, and more, ensuring that your music sounds its best no matter where and how your audience hears it.
There's a lot of amateur-sounding music out there. Our EXPOSE 2 application ensures that you won't be part of that crowd. EXPOSE 2 will help you avoid poor EQ balance, true-peak clipping, over-compression, phase issues, and all the other problems that often plague home-studio productions.
Conclusion
Following the suggestions in this post is a guaranteed way to move your humble bedroom studio closer to a professional level. Keep following our blog, and we'll continue to show you new ways to make your musical goals a reality.
That said, home studios do present challenges that professionals don't typically have to wrestle with. Fortunately, these issues can be effectively dealt with and taken out of the equation.
And it’s not as difficult as you’d think — a home recordist can craft studio-quality projects simply by tuning their room, choosing their gear wisely, and employing proper recording techniques.
In this post, we'll explore a handful of ways in which a recording musician can transform their bedroom-style studio into a recording space capable of producing professional-level recordings.
Level Up Your Room's Acoustics
The first thing you need to do is take an honest look at the acoustic environment you’re recording in. Even if you prefer an in-the-box workflow with lots of virtual instruments, an acoustically inferior room will still affect your production decisions.
That's why acoustic treatment is a must have for creating an effective recording, mixing, and production space. After all, when you put expensive, high-end gear in a bad-sounding space, it just highlights the worst aspects of your room — expensive gear can actually make your recordings sound worse!
The two most common acoustic issues you'll encounter in an untreated production space are flutter echoes and standing waves. These issues can be especially problematic in the typical square and rectangular rooms found in most residential dwellings.
Flutter Echo
Flutter echo is a form of slapback that occurs in square and rectangular rooms as a result of their parallel surfaces. If you clap your hands and hear a high-pitched, reverberant ring, this is flutter echo.
Flutter echo lends a hollow, tube-like character to your tracks. And if you combine several affected tracks, you'll end up with an especially bad-sounding mix.
Standing Waves
Standing waves occur at frequencies where the distance between two parallel surfaces equals half the sound's wavelength (or a whole-number multiple of it). If, when you play back a mix in your room, you notice that some frequencies (most notably lower frequencies) sound louder than they really are, and that others sound quieter than they really are, this is a result of standing waves.
You'll really notice how standing waves skew frequency perception if you walk around your room during playback. You'll observe that different frequencies get louder or softer, depending on where you're standing.
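You can even predict where these problem frequencies will sit. The axial mode frequencies between two parallel walls follow f_n = n * c / (2L), where c is the speed of sound and L is the wall spacing; the sketch below computes the first few (assuming c = 343 m/s, i.e. room temperature).

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly room temperature

def axial_modes(room_length_m, count=3):
    """First few axial standing-wave frequencies (in Hz) between two
    parallel walls spaced room_length_m apart: f_n = n * c / (2 * L)."""
    return [n * SPEED_OF_SOUND / (2 * room_length_m) for n in range(1, count + 1)]
```

For a 3.43 m wall spacing, the modes land at 50 Hz, 100 Hz, and 150 Hz — squarely in the bass range, which is why small rooms exaggerate or swallow low notes at specific pitches.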
So, what do you do about it? The answer is acoustic treatment. Most rooms require a mixture of three types of acoustic treatment: absorption, diffusion, and bass traps.
Absorption
Absorption is designed to eliminate flutter echoes and slapback. These panels, which are typically made of foam or fabric, will alleviate your room of unwanted reverberant and ringing artifacts, and you'll also observe a distinct improvement in the clarity of your studio monitors, as well as better stereo imaging.
Diffusion
Diffusion minimizes sound-wave buildup, smoothing out artificially boosted and attenuated frequencies. These block-like panels also disrupt standing waves and flutter echo; but rather than eliminating errant frequencies altogether like absorption does, diffusion simply scatters them about.
As a bonus, diffusion panels are a great way to make a small, bedroom-sized space sound like a much larger room.
Bass Traps
Low frequencies are difficult to tame, especially in a small, rectangular room, where bass tends to collect in corners. If you've noticed an artificial boominess in your studio, and your mixes sound balanced in your mix space but bass-deficient everywhere else, you need to invest in some bass traps.
Bass traps are specially tailored low-frequency absorbers that you place in the corners of your room. After installing bass traps, you'll find that your room's low-frequency response is much better, and your mixes will translate much easier.
Wondering where to begin? Auralex's free room analysis offers an excellent starting point.
Bring Your Monitoring Up to Snuff
Effective monitoring is one of the most important elements of audio production. After all, if you can't hear it, you can't mix it.
Start with studio monitors that give you an accurate, uncolored picture of what you're hearing. This will help ensure that your mixes translate well to the outside world — headphones, car stereos, earbuds, club systems, Bluetooth speakers, etc.
The size of your studio monitors should correspond to the size of your room. Small, bedroom-sized spaces usually pair well with speakers boasting 5- or 6-inch woofers.
Even smaller studio monitors are capable of effectively reproducing low frequencies. If yours seem to be bass deficient, they may be placed too close to the wall behind them, resulting in boundary proximity problems.
Boundary proximity issues arise when sound waves reflect off the wall behind your studio monitors, resulting in nasty-sounding low-frequency dips, notches, or nulls. Thus, if you think your speakers aren't kicking out enough low end, it may be their positioning rather than their design that’s causing the problem.
You'll typically obtain better results by placing your studio monitors along a short wall to maximize the distance acoustic reflections travel to get back to your ears. This will decrease the volume of the reflections, thereby giving you greater accuracy.
You should also ensure that sound waves aren't reflecting off nearby hard surfaces. You don't want anything to affect what happens between the studio monitors and your ears.
Also, place your speakers in the shape of an equilateral triangle (where the length of each side of the triangle is the same). The speakers form two corners of the triangle, while your head forms the third.
In other words, the distance between the two speakers should be the same as the distance between each speaker and your head.
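If you like to measure before you move furniture, the apex of that equilateral triangle is easy to compute. Here's a quick Python sketch (the `listening_position` helper is hypothetical, purely for illustration):

```python
import math

def listening_position(speaker_spacing_m: float) -> float:
    """Distance from the line between the speakers to the listening
    position, for an equilateral-triangle setup.

    Each side of the triangle equals the speaker spacing, so your head
    sits at the apex: spacing * sqrt(3) / 2 back from the speakers.
    """
    return speaker_spacing_m * math.sqrt(3) / 2

# With speakers 1.2 m apart, sit about 1.04 m back from the speaker line.
print(round(listening_position(1.2), 2))  # 1.04
```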
As far as height goes, your speakers' tweeters should be at or around ear level. Since high frequencies are highly directional, this will allow you to hear your mixes accurately and consistently.
If your studio monitors are causing your desk or speaker stands to vibrate or resonate, invest in acoustic decouplers, such as Auralex MoPADs, Primacoustic Recoil Stabilizers, or RAB Audio ProJax.
You should also make use of your studio monitors' built-in adjustments (if they have them) to fine-tune their sound. Some models feature simple EQs, while others include full-blown room correction.
Want to learn more about how to maximize your monitoring? Check out our article, How to Mix Better on Budget Speakers and in Bad Rooms.
Optimize Your Analog Front End
The easiest way to maximize the quality of your productions is to try for the best sound possible right from the beginning of your signal chain. "Fixing it in the mix" is more often than not a one-way ticket to a bad-sounding mix.
To start with, keep your instruments in tune. Even if you record an out-of-tune instrument with a world-class microphone, it will still be out of tune.
So, set your guitars up properly and tune them between takes. Tune your drums and dampen unwanted creaks and rattles.
The same goes for your studio gear — proper maintenance is essential. So, replace your janky tubes and crackly cables, fix those scratchy pots, and lubricate anything that squeaks.
When it comes to gear, you don't have to spend a bundle to get a solid-sounding analog front end.
Start with one or two do-it-all workhorse microphones. The time-tested Shure SM57 is an excellent starter mic that's capable of tackling just about anything with respectable results.
Once you're up and running, you'll want a large-diaphragm condenser, widely considered the centerpiece of most studios. If you only have one, make sure it's as sonically flexible as possible.
A versatile large-diaphragm condenser like the Audio-Technica AT4050 sounds excellent on everything, and you'll get tons of mileage out of its multipattern design. Regardless of which model you choose, look for flexibility-enhancing features like multiple polar patterns and built-in pads and filters.
If you record lots of acoustic instruments, you'll probably want a small-diaphragm condenser (or a pair of them). The Rode NT5 has long been popular for studios on a budget, thanks to its pro-level sound and relatively modest price tag.
As for other outboard gear like preamps, compressors, and equalizers, thanks to budget-friendly companies like Warm Audio, Golden Age Project, Lindell Audio, and Klark Teknik, you can get your hands on respectable re-creations of many high-dollar studio classics for a tiny fraction of the price of their vintage counterparts.
When tracking, maintaining proper levels is essential. If your levels are too low, you'll get unwanted noise; if your levels are too high, you'll get clipping.
Ideally, you should adjust your settings so that peaks hit between -12dBFS and -10dBFS while you're recording.
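Those targets are easy to check numerically. Here's a minimal Python sketch that computes the peak level of a float audio buffer in dBFS (the `peak_dbfs` helper is a hypothetical illustration; full scale is taken as 1.0):

```python
import math

def peak_dbfs(samples):
    """Peak level of a float audio buffer (full scale = 1.0), in dBFS."""
    peak = max(abs(s) for s in samples)
    return 20 * math.log10(peak) if peak > 0 else float("-inf")

# A buffer peaking at 0.3 of full scale sits around -10.5 dBFS,
# inside the suggested -12 to -10 dBFS tracking window.
buffer = [0.1, -0.3, 0.25, 0.05]
print(round(peak_dbfs(buffer), 1))  # -10.5
```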
Proper microphone technique is equally essential. If you position a cardioid mic too close to your source, its proximity effect will create low-frequency mud; if you position it too far away, your source will sound thin and distant.
Above all, tell everybody (and yourself) to chill out! A relaxed, comfortable artist almost always delivers a better performance than a nervous, self-conscious one.
Conclusion
Needless to say, this post is merely a primer — there's a lot more information out there when it comes to audio production. Whether you're a beginning home recordist or a seasoned pro, keep following our blog, and you'll be well on your way to achieving your musical goals.
What is Automation and Why Should You Use It?
Automation is a fancy word for letting a computer (in this case, your DAW) perform tweaks or adjustments for you during playback. This means that you can program your DAW to alter its parameters when, how, and where you want it to.
The concept of automation is a holdover from the old analog days, when engineers had to perform every knob, fader, and switch movement by hand. Indeed, it would often take several sets of hands to "perform" a mix by making pre-planned, on-the-fly adjustments during playback, while a separate 2-track reel-to-reel machine recorded the result.
For example, one individual might be responsible for bumping up the lead vocal fader during a song's chorus and lowering the piano during the song's bridge. Another individual might be charged with lowering the reverb mix when it became overwhelming and increasing it when sources became too dry.
A third individual might be tasked with riding the bass fader to make it sit right. Then, of course, someone needed to take command of the master fader during the song's final fadeout.
When technology became more advanced, high-end analog consoles began incorporating computerized automation, which enabled an engineer to program the console's motorized faders and other controls to perform these types of actions automatically during playback. This allowed a single individual to craft complex mixes.
Today's DAWs include a variation of this sort of automation. What’s more, modern DAWs can perform automation feats that even expensive, world-class consoles couldn't come close to in decades past.
So, why would you want to use automation? Simply put, it empowers you to create dynamic mixes, with laser-like timing and precision, quickly and easily.
Best of all, you can do this without relying on a roomful of people. And, unlike said roomful of people, your DAW's automation will do exactly what you tell it to.
After all, humans make mistakes!
What Kinds of Things Can You Automate?
Modern DAWs allow you to automate just about everything. Not only can you automatically adjust most of your DAW's built-in parameters like volume, pan, sends, faders, and the like, but you can also automate changes inside of your plug-ins.
This allows you to engage or bypass a plug-in during certain portions of a song.
Want to add more depth to your song's chorus or add an effect to a single word or syllable on your vocal track? Automation can do all that and more.
You can automate individual plug-in parameters as well, allowing you to create filter sweeps with an EQ or alter the feedback on your delay plug-in for a specific section of your song.
You can automate a VCA or bus to control the level of an entire group of tracks simultaneously.
To top it off, you no longer need to rely on someone to create a fadeout at the end of your song. Automation works on your master fader, too, and with way more precision than a human.
Read, Touch, Latch, Write…What?
There are different ways to record automation. You can draw it in manually on a track like you're writing with a pencil, or you can record it into your DAW in real-time using an on-screen fader or an external controller.
When it comes to real-time entry, most DAWs employ four automation modes: Read, Touch, Latch, and Write, typically set on a per-track basis. It's vital that you understand the differences between these modes so you'll know when to use each one.
Read Mode
When your track is set to Read mode, your DAW will perform all preexisting automation during playback. Thus, if you've recorded volume automation, the corresponding fader will move as you play the track back.
In Read mode, these moving controls won't create any permanent changes in your track, and no new automation data will be recorded. This mode is aptly named; it simply reads existing information.
Touch Mode
In Touch mode, "touching" a fader, knob, or button lets you record new automation. As soon as you release the control, the automation returns to its previous level or position.
This is great for making small tweaks to existing automation data.
For example, say you've already automated the volume on a vocal track but want to go back and adjust individual words and phrases. Push the fader up or down at the part you want to tweak, hold it at the new level until you want it to revert to its previous volume, then let it go.
Latch Mode
Latch mode works like Touch mode, except that instead of reverting when you release the fader, knob, or button, it maintains the last value after release.
This is great for making changes to existing automation that you want to maintain for extended periods.
For example, say you have a solo part you want to make louder from the last chorus until the end of your song. Touch mode would be inefficient for this purpose, as you'd need to keep your finger on the fader, knob, or button until the end of the song.
In Latch mode, you can make your change, then release your finger. As the mode's name suggests, it "latches" onto the last value and continues recording automation at that value for the remainder of playback.
Write Mode
Write mode is the big kahuna; it erases any preexisting automation in its path as soon as you hit the play button. Thus, it's intended for situations where you want to either record new automation from scratch, or for when you want to erase and redo an entire track of automation.
If your track is in Write mode, it will record new automation and delete any preexisting automation data as soon as the playhead passes over it, even if you don't touch a single fader, knob, or button.
Pro Tip: Always set your track back to Read mode when you've finished writing automation!
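If a toy model helps, the four modes can be sketched in a few lines of Python. This is an illustrative simulation, not any real DAW's API; the hypothetical `run_automation` helper processes one playback pass over a list of automation values (for simplicity, Write mode records untouched steps at a default value of 0.0):

```python
def run_automation(mode, existing, moves):
    """Toy model of DAW automation modes.

    existing -- preexisting automation values, one per playback step
    moves    -- dict mapping step index -> fader value while "touched"
    Returns the automation data after one playback pass.
    """
    out = list(existing)
    latched = None  # last value held by Latch mode after release
    for step in range(len(existing)):
        if mode == "read":
            pass                              # play back existing data, write nothing
        elif mode == "write":
            out[step] = moves.get(step, 0.0)  # overwrite everything in its path
        elif mode == "touch":
            if step in moves:                 # record only while the fader is touched
                out[step] = moves[step]
        elif mode == "latch":
            if step in moves:
                latched = moves[step]
            if latched is not None:           # keep writing the last value after release
                out[step] = latched
    return out

existing = [0.5, 0.5, 0.5, 0.5, 0.5]
moves = {1: 0.8, 2: 0.8}   # fader pushed to 0.8 on steps 1-2, then released

print(run_automation("touch", existing, moves))  # [0.5, 0.8, 0.8, 0.5, 0.5]
print(run_automation("latch", existing, moves))  # [0.5, 0.8, 0.8, 0.8, 0.8]
```

Note how Touch reverts after the release while Latch holds 0.8 to the end, exactly the difference described above.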
How Do You Use Automation?
Even though you can use automation to control pretty much every parameter in your DAW, volume is the most common. Volume automation gives you precise control over the levels of your tracks at every point in your song.
That said, in addition to controlling volume, you can use automation to control effects plug-in parameters in useful and creative ways.
Here are some common ways to deploy automation:
#1 — Automate Your Master Bus
Automating the volume on your master bus isn't limited to end-of-song fadeouts. Rather, you can use it to change the intensity of any section of your song.
For example, if your chorus doesn't sound more exciting than your verses, try bumping up your master bus a couple dB. Don't get heavy-handed with this; you want it to sound natural.
Done well, meticulously tweaked master bus automation will convey the impression that the performers upped their intensity during the more exciting parts of your song.
#2 — Automate Individual Tracks
Got a cool drum fill, guitar lick, or synth line that's buried in your mix, but increasing the track volume throws your levels off kilter? If so, automating your track's volume can make this brief section pop without disturbing the balance of your mix.
Put your track in Touch mode, increase the track's level by 1–3dB during the spot in question, then release the fader when the spot is finished. Doing this should make this brief moment of ear candy pop out of the mix without making the track perceptually louder.
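For reference, that 1–3dB push corresponds to a modest change in linear gain. A quick Python sketch (both helpers are hypothetical, purely for illustration):

```python
def db_to_gain(db):
    """Convert a decibel change to a linear gain factor."""
    return 10 ** (db / 20)

def boost_region(samples, start, end, db):
    """Apply a dB boost to samples[start:end], like a held Touch-mode push."""
    gain = db_to_gain(db)
    return [s * gain if start <= i < end else s
            for i, s in enumerate(samples)]

# A 2 dB push multiplies amplitude by roughly 1.26 (about a 26% increase).
print(round(db_to_gain(2), 2))  # 1.26
```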
#3 — Automate Programmed Parts
Programmed parts can sound…programmed. Volume automation is a great way to give them a more human-like flavor.
You can add dynamics similar to what a human musician would play, such as putting accents on downbeats or adding more volume to intense sections. You can also simply add enough subtle — and sporadic — volume variations to give your track a human touch.
#4 — Use More Automation, Less Compression
While compression is a great way to rein in overly dynamic sources, it's not intended to take the place of your faders. Abusing compressors in this way will give your tracks an artificial, lifeless quality that ultimately lowers the quality of your mix.
Fixing runaway dynamics with volume automation is a piece of cake. Start with 1–2dB boosts and cuts. If your track gets too quiet, boost; if the track gets too loud, cut.
Aim to get your tracks 90% of the way there with automation. Then you'll be able to apply compression in a more natural way.
Overcompression should be a stylistic choice, not a corrective measure.
#5 — Fix Plosives with Automation
Popped consonants, especially "p" and "b" sounds, are bona fide track wreckers. Luckily, they can be attenuated with volume automation.
Start by zooming in until you see the plosive in your track's waveform. Next, draw in a fast volume automation fade under the offending plosive.
With practice, you'll be able to pencil in the perfect fade angle in one attempt.
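Conceptually, that drawn-in fade is just a quick gain dip applied to the samples under the plosive. A rough Python sketch, assuming float samples and a V-shaped fade down to a given floor (the `fade_under_plosive` helper is hypothetical):

```python
def fade_under_plosive(samples, start, end, floor_db=-12.0):
    """Duck samples[start:end] with a quick fade down and back up.

    Gain ramps from 0 dB down to floor_db at the midpoint of the
    region, then back to 0 dB, approximating a drawn-in V-shaped
    volume automation fade under the plosive.
    """
    out = list(samples)
    n = end - start
    for i in range(n):
        # Triangle shape: 0 -> 1 -> 0 across the region
        t = 1 - abs((i / max(n - 1, 1)) * 2 - 1)
        gain_db = floor_db * t
        out[start + i] *= 10 ** (gain_db / 20)
    return out
```

The edges of the region pass through untouched, while the center of the plosive gets pulled down by the full floor amount.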
#6 — Automate Your Effects Sends and Returns
You can use automation to make effects more intense on specific tracks during specific parts of your song. For example, you can add more reverb to your lead vocal during the song's chorus.
Do this by automating the effect send level on your vocal track: a higher amount during the choruses and a lower amount during the verses. You can also automate the effects return's pan control to create a wider reverb in selected song sections for extra ear-grabbing drama.
#7 — Automate Your EQs and Filters
You can bring a track forward in your mix by increasing its high-frequency content with an EQ. For example, you can add a touch of sizzle to your lead vocals by automating a small high-end boost during your song's chorus, while letting it sit back during the verses.
You can also add a brief telephone effect to a vocal during a specific part of a song by automating low- and high-frequency cuts, then returning them to their original settings.
#8 — Create Tempo-synced Auto-pan Effects
Most DAWs allow you to draw in automation using different waveshapes, geometric shapes, or repeating patterns. If you create pan automation on a stereo track, drawing in shapes constrained to the rhythmic values of your song's master tempo can create some mind-bending auto-pan effects.
Pro Tip: Set your project to "grid mode," and set your track so it snaps to the rhythmic value you want before drawing in the automation.
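To see what such a tempo-synced shape looks like as data, here's a Python sketch that generates a sine-shaped pan curve on a sixteenth-note grid (the `autopan_curve` helper is an illustration, not a real DAW API):

```python
import math

def autopan_curve(bpm, beats_per_cycle, num_steps, steps_per_beat=4):
    """Generate tempo-synced pan automation points (-1 = left, +1 = right).

    Returns (time_in_seconds, pan) pairs sampled on a sixteenth-note
    grid by default, tracing one sine-shaped left-right sweep every
    `beats_per_cycle` beats.
    """
    seconds_per_beat = 60.0 / bpm
    points = []
    for step in range(num_steps):
        beat = step / steps_per_beat
        phase = (beat / beats_per_cycle) * 2 * math.pi
        points.append((round(beat * seconds_per_beat, 3),
                       round(math.sin(phase), 3)))
    return points

# One full left-right sweep per bar (4 beats) at 120 BPM.
curve = autopan_curve(bpm=120, beats_per_cycle=4, num_steps=16)
print(curve[4])   # hard right one beat in: (0.5, 1.0)
```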
#9 — You Can Automate Anything!
Because you can automate almost any of your plug-ins' parameters, you have near-unlimited creative freedom.
You can shorten your analog delay plug-in's delay time and crank its feedback control for ear-splitting self-oscillation, then bring it back under control when things get too overwhelming. You can bury a track in constantly evolving cacophony by continually fluctuating a chorus plug-in's depth, rate, and mix controls.
The moral of the story is: don't be afraid to experiment. This isn't a test, and there are no wrong answers!
The Three Commandments of Editing Automation
While it's common to use your DAW’s faders for writing initial automation, you'll most likely perform most of your fine-tuning by editing breakpoints manually (breakpoints are the precise spots at which your automation's values change).
Breakpoints can be cumbersome to work with, as editing one section can accidentally affect the levels on another section of your song. Likewise, deleting a single breakpoint can cause the rest of the song's automation to collapse to zero.
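Under the hood, drawn-in automation is usually little more than a sorted list of (time, value) pairs, with the DAW interpolating between them during playback. A simplified Python sketch of that idea (the `automation_value` helper is hypothetical):

```python
def automation_value(breakpoints, t):
    """Linearly interpolate an automation value at time t.

    breakpoints -- sorted list of (time, value) pairs, a simplified
    stand-in for how a DAW stores breakpoint automation internally.
    """
    if not breakpoints:
        raise ValueError("no breakpoints")
    if t <= breakpoints[0][0]:
        return breakpoints[0][1]     # hold first value before the data starts
    if t >= breakpoints[-1][0]:
        return breakpoints[-1][1]    # hold last value after the data ends
    for (t0, v0), (t1, v1) in zip(breakpoints, breakpoints[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

# A fade from 0 dB down to -6 dB between seconds 2 and 4.
bps = [(0.0, 0.0), (2.0, 0.0), (4.0, -6.0)]
print(automation_value(bps, 3.0))  # -3.0
```

This also explains the pitfalls above: delete a breakpoint and every moment that used to interpolate toward it now interpolates toward its neighbors instead.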
#1 — Learn How Breakpoint Editing Works in Your DAW
Every DAW handles automation editing differently. Besides just drawing lines, you need to learn how to raise the level of a selected section of automation without affecting the rest, add and delete breakpoints, and switch between drawing tools.
#2 — Don't Be Afraid of the Freehand Drawing Tool
Your DAW’s straight line tool makes nice-and-neat fades; however, they can be too nice and neat — perfection simply doesn't sound natural. You'll typically get better-sounding fadeouts using a curve.
#3 — Keep an Eye Out for Level Changes
When you're editing automation manually, accidental level changes are inevitable. That said, they're not always obvious (at least not until you play your song back and it sounds wrong).
The further you're zoomed out, and the shorter the track height, the less likely you are to notice them. Therefore, be sure to verify that a track that's supposed to start and end at the same value actually does.
Luckily, most DAWs display a numeric value when you select a breakpoint that you can use to check that both ends of the automation are correct.
Conclusion
When you learn to wield automation effectively, it will save you time, lend greater precision to your tweaks, and allow you to create effects you would otherwise be unable to create.
Keep following our blog for more mixing and mastering tips, tricks, and techniques.