Discussion in 'Audio, Video & Home Theater' started by twilyth, Jun 20, 2011.
I will again ask another golden ear to take a double blind listening test.
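For anyone who does take one up on that: the standard way to score a double-blind ABX test is a one-sided binomial test against guessing. A minimal sketch (my own illustration, not from any particular test site):

```python
# Sketch: scoring a double-blind ABX listening test.
# In ABX, the listener hears A, B, and an unknown X (randomly A or B each
# trial) and must identify X. A pure guesser is right 50% of the time, so
# the question is: how likely is a score this good by chance alone?
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided binomial p-value: P(>= correct successes | p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 12 right out of 16 trials comes out around p = 0.038,
# conventionally significant; 9/16 is nowhere close.
print(round(abx_p_value(12, 16), 3))
```

The usual convention is 16 trials with p < 0.05 required before a claimed audible difference counts as demonstrated.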
I see this turning into a trainwreck real soon.
First try this:
Result was easily noticeable on my setup. Took 1 play to spot it, a second play to confirm it was indeed the right answer. If you can't hear the difference on that, don't bother arguing that a 24-bit/192 kHz version OF THE SAME RECORDING is going to sound different from 16-bit/44.1 kHz, because I can't hear any difference. Most of the time the 24-bit release is just better mastered than the 16-bit version.
The thing about HDMI is that most high-end DACs don't accept HDMI input. It works in a home-theater situation, but a high-end 2-channel system built for music listening won't benefit from it; you'd lose out on the higher-quality DACs and have to settle for a receiver.
First, neither the lossy codec used nor its bitrate has anything to do with digital vs analog. If your source is crap, it's gonna be crap no matter what.
Second, that "test" is BS because both "clips" are the same:
The difference is nil.
You, good sir, have been a victim of confirmation bias.
Cute, but Klipsch isn't the company it once was.
I've a question,
I am going to replace my AV Receiver within the next month or two and I MUST get one that supports uncompressed LPCM 5.1 audio!
Only thing is, I have no idea how to figure out if a receiver supports it or not!
How do I go about figuring this out????
For example... does this support it?
Would this also support it?
If they do, how can you tell? And if they don't, how do I find the ones that do?
I want to find them as cheap as I possibly can too!
PS3 I take it?
Do you wanna bet a million dollars on your ears?
The source never lies. They sounded exactly the same which is why I got suspicious and checked the source.
Did it occur to you that you just don't hear any better than anyone else? Secondly, I believe the experiment fundamentally proves something else. But I encourage you to track down Ethan Winer over at RealTraps and tell him what you claim to hear.
I'll be over here with a bag of popcorn.
Did it occur to you that the files were both 128kbps?
The bitrate here wasn't what Ford argued - he simply stated they sounded the same and proved it by showing that both examples link to the same file
Yes, and it was a 128Kbps file
The person on the website wasn't actually comparing
128 Kbps vs 320 Kbps; they lied.
I can do a CD rip right now, make a .rar/.zip, and include
128 Kbps, 160 Kbps, 320 Kbps, 768 Kbps, and lossless versions,
and you'll hear the difference at each level.
Sorry but no, they aren't, you can download them yourself if you want:
What FordGT90Concept found on the source of the page isn't *exactly* what is being played by each player.
IMO, on this piece of audio the difference is not noticeable, since it doesn't have many high frequency sounds. Although on some other music, for instance house music MP3s, I can say that I do note the difference between the two.
They both refer to clip2
aka. What streams to you is Clip 2
If you don't believe me or the page's author, debug the webpage, or better yet look at your firewall logs, and you'll see that both clips are played in the correct order (the audio player isn't even made by the page's author).
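There's also a way to settle "are the two clips the same file?" that doesn't involve ears or trust at all: download both and checksum them. A minimal sketch; the filenames `clip1.mp3` / `clip2.mp3` are placeholders for whatever the page actually serves:

```python
# Sketch: prove (or disprove) that two audio clips are byte-identical.
# If the hashes match, any difference you "hear" is bias, full stop.
import hashlib

def file_digest(path: str) -> str:
    """SHA-256 of a file, read in chunks so large files are fine."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Placeholder filenames -- substitute whatever the page serves:
# if file_digest("clip1.mp3") == file_digest("clip2.mp3"):
#     print("byte-identical files: any heard difference is bias")
```

Note this only tells you whether the *files* are identical; two different files can still decode to audibly identical audio.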
So if you're using digital connectors there's no sound-quality benefit to having a sound card? Just OpenAL offloading for games (assuming Creative-based)? Op-amps are pointless now? What's the solution for headphones then? I don't know of any HDMI headsets...
There's been all kinds of snake oil over the years regarding audio. What most people are actually arguing about, unknowingly, is the way they prefer sound to be colored by a particular piece of gear. Which is fine, but misrepresenting that preference as an improvement one way or the other is dishonest and ignorant.
Well, this isn't a good test in reality. First off, almost every human can hear the encoding loss on MP3-format music at up to about 256 kbps, IF they know what to listen for; you also lose dynamic headroom.
But what I am saying is that arguing the difference between the gold-standard 16-bit/44.1 kHz and MP3 is ridiculous. What you cannot hear the difference between is 16-bit/44.1 kHz versus 24-bit/96 kHz and 24-bit/192 kHz. Where you get improvements in those formats is, as I mentioned, more smearing, which lends a more analog quality to the sound, and more dynamic headroom, specifically at the master bus.
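On the headroom point, the theoretical numbers are easy to compute. The standard rule of thumb for an ideal quantizer is SNR ≈ 6.02 × bits + 1.76 dB; a quick sketch:

```python
# Sketch: theoretical dynamic range of an ideal N-bit quantizer,
# using the standard rule of thumb SNR ~= 6.02 * N + 1.76 dB.
def dynamic_range_db(bits: int) -> float:
    return 6.02 * bits + 1.76

print(f"16-bit: {dynamic_range_db(16):.1f} dB")  # ~98 dB
print(f"24-bit: {dynamic_range_db(24):.1f} dB")  # ~146 dB
```

So 24-bit buys roughly 48 dB of extra theoretical headroom, which matters at the master bus during production; whether it survives as an audible difference on playback, with real-world noise floors far above -98 dBFS, is exactly what's in dispute here.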
I took the initiative, grabbed my newest CD (best recording equipment and least damage to the disc), which has seen an optical drive only once before now, and used WMP12 to rip a lossless WAV (1,411.2 kbps), an MP3 @ 320 kbps, and an MP3 @ 128 kbps (all 3 directly from the source disc). The HT Omega was at 100% volume, propagating the channels from 2 (stereo) to 6 (stereo surround), with no DSP effects enabled. The Klipsch speakers were adding an additional 40 dB boost.
Audacity showed pretty obvious differences between the 3 files, but in terms of how they sound, they all matched the lossless version to the ear, which is all that actually matters.
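What a visual comparison like that measures can be put in numbers with a "null test": subtract one version from the other sample-by-sample and look at the residual level. A minimal sketch; since decoding MP3 needs an external decoder, the lossy copy here is simulated by rounding a float signal to the 16-bit grid (an assumption, not the actual ripped files):

```python
# Sketch of a null test: difference two versions of the same audio and
# report the residual level in dB relative to full scale (1.0).
import math

def residual_db(a: list[float], b: list[float]) -> float:
    """RMS level of (a - b) in dBFS; -inf means byte-identical audio."""
    rms = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))
    return 20 * math.log10(rms) if rms > 0 else float("-inf")

# Simulated "lossy" copy: a 440 Hz sine rounded onto the 16-bit grid.
sine = [math.sin(2 * math.pi * 440 * t / 44100) for t in range(44100)]
quantized = [round(x * 32767) / 32767 for x in sine]
print(f"{residual_db(sine, quantized):.0f} dBFS")  # around -100 dB
```

A residual around -100 dBFS is far below anything audible on playback gear; a real 128 kbps MP3 nulled against its lossless source leaves a much louder residual, which is the difference a waveform view makes look dramatic.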
I know you can tell the difference between 64 kbps and 128 kbps MP3, but it would seem that anything higher than 128 kbps is moot unless you plan on converting it over and over again (each time losing a little more quality).
I clearly hear the encoding artifacts in the upper bands from 4 kHz to 10 kHz. Drives me batshit. Above 256 kbps it's not really problematic, but below that I can't stand it; it's like a flickering fluorescent light.
There might not have been any sounds that high in the song I was using.
I'd like to add my two cents on the human perceptible compression discussion.
In my experience I've seen files that could compress very well and sound virtually identical to their (far larger) lossless counterparts. On the other hand I've encountered files that did not respond well to compression and a great deal of effort went into finding settings which allowed for almost no noticeable deterioration in the signal. Feel free to simulate various compression settings in MATLAB, Sage, etc. and you'll see that there are times when high levels of compression lead to very little information loss and times when high levels of compression lead to massive signal corruption.
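The experiment described above can be sketched in plain Python rather than MATLAB. This is my own toy illustration of the principle, not a real codec: "compress" a signal by keeping only its k largest Fourier coefficients, then measure how much of the signal survives. A smooth tone compresses almost losslessly; noise does not:

```python
# Sketch: lossy compression as keeping the k largest DFT coefficients.
# Naive O(n^2) DFT is fine for a toy signal of 128 samples.
import cmath, math, random

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

def compress_error(x, keep):
    """Relative L2 error after zeroing all but the `keep` largest bins."""
    X = dft(x)
    kept = set(sorted(range(len(X)), key=lambda k: -abs(X[k]))[:keep])
    y = idft([X[k] if k in kept else 0 for k in range(len(X))])
    num = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))
    den = math.sqrt(sum(a * a for a in x))
    return num / den

n = 128
tone = [math.sin(2 * math.pi * 5 * t / n) for t in range(n)]
random.seed(0)
noise = [random.uniform(-1, 1) for _ in range(n)]
print(compress_error(tone, 4))   # tiny: the tone lives in 2 bins
print(compress_error(noise, 4))  # large: energy spread over all bins
```

Which is exactly the point made above: how badly a given file degrades at a given setting depends on how concentrated its content is, so some material tolerates aggressive compression and some falls apart.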
If I get around to it I'll post some examples but this thread is going to backlog my MATLAB work
Huh? Digital is just a method of storage. You're not sending digital to the speakers; the speakers still get analog (from the DAC in your receiver, then through an amp). A digital decoder should be able to take any input signal and send it to the DAC with the same reliability that your computer computes pi (so roughly 1 error for every 4E20E20 computations*).**
Obviously then you can debate the quality of the DAC and amplifiers, which is an open topic IMO. After a certain level of sophistication (and proper tuning) it just gets stupid and no one can tell the difference without some instrument (eg. SPL meter) assistance.
(IIRC) Lots of modern receivers will ADC any analog input signal, so you're still going to get sound that is only as good as the (in order). . .
-Analog input signal
*The number is fluff obviously, but it should be really big anyway.
**You may also perform some DSP on the decoded signal prior to sending it to the DAC.
I listened to the 2 clips on 2 systems. One has a Yamaha HTR-5063 with Polk Monitor 40s and 30s plus RM8s for center and rear. The other has a Yamaha RX-V765 with NHT 3-ways front and center and NHT Zeros and Sub Zeros for the back and side.
I could tell the difference on the first system right away. It was subtle, but easy to pick the better sound.
The second system has better specs and the NHT 3-way speakers are incredibly clean and flat. I had a very hard time distinguishing on the second system even though it was technically better. I even tried listening in just stereo rather than enhanced 7.1 and it was still a toss up.