Audio mastering engineers have felt increasing pressure over the years to master recordings at ever-higher loudness levels than other contemporary releases, by way of dynamic range compression, peak limiting, and hard clipping.

This pursuit of loudness adds distortion, reduces fidelity and ultimately impairs audio quality.

When radio stations play this already-compromised contemporary audio through their typical FM processing chains, the combined degradation causes serious audio quality problems on air.

Besides causing listener fatigue, this degradation can even be mistaken for poor reception, ultimately prompting listeners to tune out.

When pre-distorted contemporary recordings are encoded through typical perceptual codecs, the codecs waste bits faithfully encoding the distortion components at the expense of the original audio.

This paper examines what music typically endures when broadcast on FM, and how that understanding led to the invention of the “undo” algorithm. Applicable to both the broadcasting and recording industries, the algorithm automatically repairs some or most of the damage caused by these mastering techniques by adaptively de-clipping and de-compressing the mastered recordings.



For the past three decades, FM broadcasters have been engaged in what have become known as the “loudness wars”, using ever-advancing FM audio processors to make their stations sound louder than the competition on the dial while still (presumably) operating within legal modulation limits.

During an A/B comparison between two different audio sources, listeners tend to prefer the louder of the two, even if audio quality is somewhat reduced.

Even loudness differences as small as 0.1 dB, though not consciously perceived as a change in level, can still alter the apparent fullness, warmth, and clarity of the sound, which listeners in turn perceive as more favorable.

FM modulation imposes physical and electrical constraints that cap peak waveform excursion.

If additional loudness is desired when the signal is already at its maximum, something has to give. In the case of an FM signal, what gives is the top of the waveform, which is quite literally sheared off in a process known as “clipping”.

Depending upon the particular processor employed, clipping can provide additional loudness without excessively degrading the audio, or it can increase loudness at the cost of audible artifacts and distortion.
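The trade-off described above can be illustrated with a minimal sketch of hard clipping (this is a generic illustration, not the processing used by any particular FM processor): shearing off the peaks lowers the peak level far more than the average level, so the clipped signal can be turned up louder within the same peak ceiling, at the cost of distortion.

```python
import numpy as np

def hard_clip(x, ceiling=0.5):
    """Shear off waveform excursions beyond the ceiling (hard clipping)."""
    return np.clip(x, -ceiling, ceiling)

# A full-scale 1 kHz sine at a 48 kHz sample rate, clipped to half its peak:
t = np.linspace(0, 1, 48000, endpoint=False)
sine = np.sin(2 * np.pi * 1000 * t)
clipped = hard_clip(sine, ceiling=0.5)

# The peak drops by 6 dB, but the RMS (average) level drops far less,
# so the clipped waveform can be boosted back toward full scale and
# sound louder -- the added harmonics are the audible distortion.
peak_to_rms_sine = np.max(np.abs(sine)) / np.sqrt(np.mean(sine ** 2))
peak_to_rms_clipped = np.max(np.abs(clipped)) / np.sqrt(np.mean(clipped ** 2))
```

A well-designed clipper adds filtering and distortion masking around this basic operation; the sketch shows only the core mechanism.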

To a certain extent, then, one can see where the FM loudness wars – at least in their infancy – were somewhat understandable: Stations wanted to sound clear and powerful to make an impression on listeners scanning up and down the dial.

As is so often the case with such things, this advantage was quickly abused as stations piled on ever more processing, chasing one another in an effort to be the absolute loudest signal in the market with no regard for audio quality.


Like an FM modulated signal, other analog and digital media have similar limitations that dictate peak waveform excursion.

During the process of recording, mixing, or mastering records, various techniques are used to reduce a signal's peak-to-average ratio, including dynamic range compression, limiting, overdriving analog-to-digital converters, and hard digital clipping.
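The peak-to-average ratio (crest factor) these techniques reduce can be measured directly. A minimal sketch, using random noise as a stand-in for dynamic program material and simple clipping as a stand-in for the limiting stage (both assumptions for illustration):

```python
import numpy as np

def crest_factor_db(x):
    """Peak-to-average ratio of a signal, in dB."""
    peak = np.max(np.abs(x))
    rms = np.sqrt(np.mean(x ** 2))
    return 20 * np.log10(peak / rms)

rng = np.random.default_rng(0)
dynamic = rng.normal(0, 0.1, 48000)       # stand-in for dynamic material
limited = np.clip(dynamic, -0.15, 0.15)   # crude peak limiting / clipping

# Limiting lowers the crest factor: the same average energy now fits
# under a lower peak, so the whole track can be turned up louder.
```

Dynamic acoustic music commonly has a crest factor well above 12 dB; heavily mastered popular recordings can approach the few-dB region, which is the "brick-shaped waveform" described below.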

While these techniques do not cause significant problems when used judiciously, the loudness wars have crept into the studio over the past 10 to 15 years, particularly in the recording of popular music, which now has virtually no dynamic range.

Instead of going from quiet to loud, the music goes from clean to distorted, leaving a brick-shaped waveform.

Such recordings have a busy, flat, and lifeless sound right out of the studio, a situation that is only made worse when the content passes through a typical FM air chain where it undergoes further dynamic range compression and clipping.

The result is nearly unlistenable audio that invites listeners to tune out.