

MP3 Alert: "Urgh!"
May 08, 2006 03:18PM
Get it here:
[lostturntable.blogspot.com]

Finally! I can replace my moldy old cassette.
Re: Compression Alert: "Urgh!"
May 09, 2006 07:48PM
Aside from these files, I've noticed that legions of kids apparently think that compressing to 256k somehow improves the sound. This has become a problem in file sharing.

It does not improve the sound; in some cases the sound is worse. The MP3 format was tested to be used ideally at 128k. Anything larger than 160k is redundant and space-consuming.

There are even a large number of files showing up on Soulseek at 320k. The point of compression is....
Re: Compression Alert: "Urgh!"
May 09, 2006 08:44PM
No way, 128 isn't enough. I can def. tell the difference between 128 and 190. I think 190 is fine, but you're right, getting up to 320 is pointless.
Re: Compression Stuff
May 10, 2006 12:49PM
Here's what I'm thinking::

Or maybe it's not the compression rate causing the difference you hear. What you're hearing (compression artifacts or something else?) could be equipment-oriented, or on the decompression end (at home - not counting downloaded files over which you have no codec control). As with all digital file usage, CoDecs vary (not all players use the Fraunhofer code*) and several standards are in use (but using 192k versus 128k is not a workaround for this problem).
But if you download a 192k file, how do you know it would sound better than if that person had traded a 128k file? Using lab standards, it wouldn't. Doing the math, to hear a difference a human would need hearing sensitivity 1.5 times the ideal.

The MP3 standard was developed to give no discernible change in quality at a compression factor of 12x from CDA to MP3. Testing has shown that, all else being equal, a human can hear the jump from 96k to 128k but cannot hear the difference from 133k to 192k. You can match a CD perfectly at about 117k, the closest standard bitrate being 128k. Above that (117k), you're requiring your gear to decompress at a higher rate during playback. The human threshold for quality loss is at 128k (no audible quality change coupled with maximum compression), but the digital threshold (no change in quality with some degree of compression) doesn't occur until 256k.

(I know there's endless talk about lossy formats, and at a professional level, data lost is data lost. If a library were to be archived with no loss, files would best remain as WAV or CDA with no compression, but since that is not the issue, I'll go with the developed and proven MP3 standards.)

However, at the consumer end, the ideal for the format is to aim for the standard 128k. There are music files online that were ripped at 128k, then decompressed and recompressed at 192k, with the associated loss in quality carried from 128 to 192. Once the data is lost, it doesn't magically reappear. Keeping that example in mind, your decompression unit doesn't play an MP3 file exactly the same way twice on playback. Therefore, the argument can be made that occasionally it will render a 128k file more accurately than a 192k file (if one is compressed at a 12x factor and the other at 8x, there's loss either way, but you cannot control the loss).
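The "once the data is lost, it doesn't magically reappear" point can be sketched with a toy example. This is plain uniform quantization, not real MP3 psychoacoustic coding; the step sizes are arbitrary stand-ins for a coarse (128k-style) and a fine (192k-style) encode:

```python
# Toy illustration: detail discarded by a coarse quantizer cannot be
# recovered by re-quantizing the result at a finer step.
import random

random.seed(0)
signal = [random.uniform(-1, 1) for _ in range(1000)]

def quantize(xs, step):
    """Round each sample to the nearest multiple of `step` (lossy)."""
    return [round(x / step) * step for x in xs]

coarse = quantize(signal, 0.25)        # stand-in for the original 128k rip
reencoded = quantize(coarse, 0.0625)   # "upgraded" to a finer step afterwards
direct = quantize(signal, 0.0625)      # encoded at the finer step from the start

def err(xs):
    """Mean absolute error versus the original signal."""
    return sum(abs(a - b) for a, b in zip(signal, xs)) / len(xs)

print(err(reencoded) > err(direct))    # True: re-encoding kept the coarse loss
```

The re-encoded file carries the full error of the coarse pass even though it is stored at the finer resolution, which is exactly the 128k-to-192k recompression trap described above.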

*Several MP3 players on the market over-emphasize high frequencies to account for highs that other coders neglect. This can cause a screechy sound but, again, is not the fault of the compression rate alone. Also take into account that the person on the other end doing the compressing may not understand that you don't normalize to 100% when doing a rip. A disconcerting majority of files have been normalized to 100% - something I didn't see several years back. The Xing-based codec gets bad results at anything lower than 320k, but that's not the fault of the compression side of the equation.

And don't get me started on variable bit rate!
Tested algorithms are off by 10-15%, making a VBR file averaging 192k sound worse than a 112k file in many instances - no standard psychoacoustic model has been applied to VBR. Then, on the decompression end, manufacturers and software writers have had to apply workarounds for the CoDecs in place of a standard.

So...
What we're seeing online now is a "more is better" philosophy that defeats the point of compression and presumes to rethink a developed lab standard with no control over codec standards.
In conclusion, it's a waste of space/time to post files for trade above 128k. Interested in a counterpoint.



reference math::
A purchased CD contains files in the CDA format encoded at 16 bits per sample × 44,100 samples per second × 2 channels. The German standard for MP3 uses a psychoacoustic model allowing compression by a factor of 12x with no discernible loss in quality. 16 × 44,100 × 2 = 1,411.2 kbps; divided by 12 = 117.6 kbps. With equipment limitations, this led to the 128k standard. Tests have shown that a listener keen enough to tell the MP3 from the CDA detects the same artifacts whether compressed at 128k or at 256k.
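The arithmetic above can be checked in a few lines (the 12x factor is the post's own figure, not something the code derives):

```python
# Reference math: CD bitrate and the 12x MP3 compression factor cited above.
bits_per_sample = 16
sample_rate = 44100   # samples per second
channels = 2

# Uncompressed CD audio bitrate in kbps.
cd_bitrate_kbps = bits_per_sample * sample_rate * channels / 1000
print(cd_bitrate_kbps)               # 1411.2

# Applying the claimed "no discernible loss" factor of 12x.
compression_factor = 12
mp3_bitrate_kbps = cd_bitrate_kbps / compression_factor
print(round(mp3_bitrate_kbps, 1))    # 117.6, rounded up in practice to 128k
```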

A couple links::
Fraunhofer site
CoDec comparisons
