2011-07-11, 12:35   Link #957
Dark Shikari
x264 Developer
Join Date: Feb 2008
Quote:
Originally Posted by cyberbeing
Correct, encoding with 10-bit x264 means you won't introduce banding beyond what was already in the source, and it opens the door to dithering applied in pre-processing actually surviving the encode without an insane bitrate. I wasn't replying to you directly, just to the thread in general, with something I had posted in the playback forum. Below are the preliminary results Dark Shikari posted, at which point 10-bit x264 output was still being decoded to 8-bit without dithering. Things have likely improved with all the 10-bit related commits since then.

Source
Denoise/Deband -> 8-bit x264
Denoise/Deband -> 10-bit x264

The technicalities of why this all works go over my head, but the basic idea is that moving from 8-bit to 10-bit eliminates something like 95% of the error x264 accumulates while encoding, at which point higher bit depths (>= 11-bit) show insignificant gains.

Edit: Why does 10-bit save bandwidth (even when content is 8-bit)? [pdf link]
Each extra bit of intermediate precision halves the error caused by intermediate rounding.

So 10-bit (two extra bits) removes 75% of the loss from intermediate rounding compared to 8-bit: each bit halves the error, so two bits quarter it, leaving 25% of the original loss.
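A minimal sketch (not x264 code) of that arithmetic: quantize real-valued intermediates with and without two extra bits of fractional precision and compare the average rounding error. The `mean_abs_error` helper and the uniform test values are assumptions purely for illustration.

```python
# Minimal sketch (not x264 code): round real-valued intermediates to a
# given number of extra fractional bits and measure the rounding error.
import random

random.seed(0)
values = [random.uniform(0.0, 255.0) for _ in range(100_000)]

def mean_abs_error(extra_bits):
    """Mean |error| after rounding to `extra_bits` bits below the 8-bit LSB."""
    scale = 1 << extra_bits
    return sum(abs(v - round(v * scale) / scale) for v in values) / len(values)

e8 = mean_abs_error(0)    # 8-bit intermediates: no extra precision
e10 = mean_abs_error(2)   # 10-bit intermediates: two extra bits
print(f"error ratio 10-bit/8-bit: {e10 / e8:.3f}")  # ~0.25
```

The ratio comes out around 0.25, i.e. roughly 75% of the rounding loss is gone, matching the figure above.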

Higher bit depth would be better, of course, but going above 10 means it's no longer possible to use 16-bit intermediates in the motion compensation filter. Since 30-40%+ of decoding time is spent there, halving the speed of motion compensation would massively increase decoding CPU time, and since we've already eliminated 75% of the loss, it's just not worth it.
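As a rough back-of-the-envelope illustration of the 16-bit limit (again, not x264's actual code): H.264's half-pel luma interpolation uses the 6-tap filter (1, -5, 20, 20, -5, 1), and the worst-case unnormalized intermediate grows with sample bit depth. This is deliberately simplified to the 1D pass and ignores the two-pass 2D case and rounding offsets:

```python
# Illustrative sketch: worst-case span of the unnormalized 1D intermediate
# in H.264's 6-tap half-pel luma filter, as a function of input bit depth.
# (Simplified: the 2D second pass and rounding offsets are ignored.)
TAPS = [1, -5, 20, 20, -5, 1]

def intermediate_range(bit_depth):
    vmax = (1 << bit_depth) - 1
    hi = sum(t for t in TAPS if t > 0) * vmax   # positive taps at max input
    lo = sum(t for t in TAPS if t < 0) * vmax   # negative taps at max input
    return lo, hi

for depth in (8, 10, 11):
    lo, hi = intermediate_range(depth)
    fits16 = (hi - lo) < (1 << 16)
    print(f"{depth}-bit input: range [{lo}, {hi}], fits in 16 bits: {fits16}")
```

Even in this simplified form, 10-bit input keeps the intermediate span under 2^16 while 11-bit does not, which is the flavor of constraint that makes 10 the sweet spot for 16-bit SIMD.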

Last edited by Dark Shikari; 2011-07-11 at 12:49.