Quote:
Originally Posted by jfs
And most sources are 8 bit precision only.
The point is to improve quality and compression by minimizing the quantization error. I still don't get the maths behind it, but apparently it can work wonders on at least some sources.
So, 10 bit lets the encoder make a more accurate representation of the source material in fewer bits.
Someone can correct me if I'm totally off base, but the way I think of it, there's a much larger "target space", which lets the optimization algorithms work more effectively. You're packing the same information into a 10-bit box instead of an 8-bit one, so there's more wiggle room, and that extra headroom lets quantization, trellis quantization, and/or CABAC entropy coding at encode time be more efficient.
That's why, even though the source is only 8 bit, upsampling it to 10 bit before compressing is almost always more efficient, at least in theory.
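A rough way to see the quantization-error argument is with a toy round-trip: run an 8-bit source through some processing step, quantize the intermediate result to either 8 or 10 bits, invert the processing, and compare the damage. This is just a sketch, not x264's actual math; the `roundtrip` function and the `0.5 * x + 0.1` step are made-up stand-ins for whatever filtering/prediction an encoder does internally.

```python
# Toy model (NOT x264's real pipeline): how much error an 8-bit source
# accumulates when an intermediate value is quantized to `bits` precision.
def roundtrip(values, bits):
    """Process 8-bit samples, quantizing the intermediate to `bits` bits."""
    scale = (1 << bits) - 1
    out = []
    for v in values:
        x = v / 255.0                 # normalize the 8-bit source sample
        x = 0.5 * x + 0.1             # hypothetical processing step (a stand-in)
        q = round(x * scale) / scale  # quantize the intermediate result
        y = (q - 0.1) / 0.5           # invert the processing step
        out.append(round(y * 255))    # store back as 8-bit
    return out

src = list(range(256))  # every possible 8-bit value
err8 = sum(abs(a - b) for a, b in zip(src, roundtrip(src, 8)))
err10 = sum(abs(a - b) for a, b in zip(src, roundtrip(src, 10)))
print(err8, err10)  # the 10-bit intermediate loses noticeably less
```

The intuition: inverting the processing step doubles the quantization error, and at 8-bit precision that doubled error can exceed half a code value and flip the final rounded result, while the 10-bit grid is fine enough that the round trip survives intact.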