Over on Macintouch, there’s a lot of ongoing discussion about music encoding: Which is better, AAC or MP3? What bit rate is required for sufficient quality? How does the LAME encoder compare to the encoder used in iTunes 4?
Folks are expressing some fairly strong opinions. One popular post is someone’s personal comparison of music samples encoded with MP3 and AAC over a variety of bit rates. Sound quality is described using terms like “brighter”, “fuller”, “wider”, “clearer”, “deeper” and “more dynamic range”.
In my humble opinion, such reviews should be taken with a large grain of salt.
Some years ago, I worked as a student in the audio laboratory of Dr. Marshall Leach. During that time, I was amazed at the number of folks visiting the lab, claiming superior audio quality from things like gold capacitors, “high-end” cabling, and expensive speakers. What was even more amazing was that many of these folks continued to firmly hold their opinions in spite of laboratory demonstrations to the contrary.
A good example is loudspeakers. It’s actually not too difficult to build a near-optimal loudspeaker system — i.e. optimal in the sense that it produces a flat frequency response given a spectrally flat input (noise). Such speakers, which almost perfectly reproduce their input signals, are consistently rated poorly by audiophiles.
Anyway, back to encoding…
Today, I compared a music sample encoded on the Mac with the LAME MP3 encoder (VBR, high quality, average target bit rate of 128 kbps) against the same sample encoded as standard 128-kbps AAC using iTunes 4.
I listened to these two clips, played back to back in QuickTime Player 6.2, and simply could not hear a difference. On disk, the AAC file was 3.5 MB, compared to 4.6 MB for the MP3.
I would love to find a more technical comparison between these formats. I suppose such a comparison could be done through a frequency/spectral-power analysis of the data composing the song.
Has anyone seen anything like this available?
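In the meantime, here’s a rough sketch of what such an analysis might look like in Python with NumPy. This is just an illustration, not a rigorous test: the signals below are synthetic stand-ins, since in practice you’d first decode each encoded clip back to raw PCM and load those samples instead. The Welch-style averaging and the 4096-point window are arbitrary choices on my part.

```python
import numpy as np

def power_spectrum_db(x, fs, nfft=4096):
    """Average power spectrum in dB over overlapping Hann windows (Welch-style)."""
    hop = nfft // 2
    win = np.hanning(nfft)
    segs = [x[i:i + nfft] * win for i in range(0, len(x) - nfft + 1, hop)]
    psd = np.mean([np.abs(np.fft.rfft(s)) ** 2 for s in segs], axis=0)
    freqs = np.fft.rfftfreq(nfft, d=1.0 / fs)
    return freqs, 10 * np.log10(psd + 1e-12)  # small floor avoids log(0)

# Synthetic stand-ins: a 1 kHz tone, and the same tone with a bit of
# added noise playing the role of encoder artifacts.
fs = 44100
t = np.arange(fs) / fs
original = np.sin(2 * np.pi * 1000 * t)
lossy = original + 0.001 * np.random.randn(fs)

f, p_orig = power_spectrum_db(original, fs)
_, p_lossy = power_spectrum_db(lossy, fs)

# Largest spectral deviation between the two versions, in dB.
max_dev_db = np.max(np.abs(p_orig - p_lossy))
```

Comparing `p_orig` against the spectrum of a decoded MP3 or AAC clip would show, objectively, where each codec discards energy (typically the highest frequencies first at 128 kbps), which seems more useful than adjectives like “brighter” or “fuller”.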