Old 2007-07-25, 00:24   Link #92
peter the llama
Junior Member
 
Join Date: Jul 2007
Quote:
Originally Posted by Starks View Post
I've never quite understood the use of anamorphic resolutions for anime... Shouldn't we be aiming for square pixels?
No, you should be aiming to keep as much information as possible from the source, at as low a bitrate as possible. Upscaling dilutes the information from the source among more pixels. Downscaling obviously loses information. The scaling process itself is also lossy, since it can't be reversed exactly. (Scaling up and down repeatedly would blur the image.)
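To see why scaling can't be reversed exactly, here's a minimal sketch: downscale a 1-D "row of pixels" with linear interpolation, upscale it back, and compare. The resampler below is a toy illustration for this post, not the algorithm any real scaler or x264 uses.

```python
# Toy linear resampler: illustrates that down-then-up scaling loses detail.
# (Illustrative only -- real scalers use fancier kernels, but are still lossy.)

def resample(samples, new_len):
    """Linearly interpolate `samples` to `new_len` values."""
    if new_len == 1:
        return [samples[0]]
    out = []
    scale = (len(samples) - 1) / (new_len - 1)
    for i in range(new_len):
        pos = i * scale
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

row = [0, 255, 0, 255, 0, 255, 0, 255]   # high-frequency detail
small = resample(row, 4)                  # downscale: detail is averaged away
back = resample(small, 8)                 # upscale: the detail does not return
print(row)
print([round(v) for v in back])           # a smooth ramp, not the original
```

The alternating black/white pattern comes back as a smooth gradient: the high-frequency detail is gone for good.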

I'm using "information" in the information-theory sense, as in Shannon information. If computer science isn't your cup of tea, the explanation works just as well if I said "detail" instead of "information".

Yes, square pixels are needed eventually to display correctly (assuming a display with square pixels[1]). The question is whether the upscaling happens before or after compressing with x264. If you do it before, x264 has to compress more pixels. If you do it after (anamorphic), it's actually the player doing it. Then there's also only one video scaling step, straight from 720x576 to e.g. 1680x1050. Scaling is a lossy process, but that's negligible compared to the advantage of compressing only 7/10 the number of pixels. (Also think of the speed gain: compressing probably takes ~7/10 the time it would after upscaling.)
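A quick sketch of where the ~7/10 figure comes from, assuming a 16:9 PAL DVD source: stored anamorphically it is 720x576; upscaled to square pixels at the same height it becomes 1024x576 (576 * 16/9 = 1024).

```python
# Pixel-count comparison: anamorphic 720x576 vs square-pixel 1024x576.
anamorphic = 720 * 576           # pixels x264 compresses if left anamorphic
square = 1024 * 576              # pixels if you upscale to square pixels first
ratio = anamorphic / square
print(f"{anamorphic} vs {square} pixels, ratio = {ratio:.3f}")  # ratio = 0.703
```

720/1024 works out to 0.703, i.e. roughly 7/10 of the pixels to compress.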

[1] Computer monitors almost always have square pixels. Exceptions I can think of: with some video cards, you can run a 720x480 video mode displayed on a 4:3 NTSC TV. I have a widescreen LCD, and when you use the analog input it always stretches the image to full width. So a 1280x1024 video mode stretched across a 16:10 screen means the image pixels aren't square. This is mostly relevant for games or other 3D applications, which are the only time you'd want to use a non-native resolution.
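A quick check of that last case, as a worked example: the pixel aspect ratio is the display aspect divided by the storage aspect, and anything other than 1.0 means non-square pixels.

```python
# Pixel aspect ratio of a 1280x1024 mode stretched across a 16:10 panel.
display_aspect = 16 / 10             # physical shape of the screen
storage_aspect = 1280 / 1024         # shape implied by the pixel grid (5:4)
pixel_aspect = display_aspect / storage_aspect
print(f"pixel aspect ratio = {pixel_aspect:.2f}")  # 1.28: wider than tall
```

So in that setup each pixel is drawn 1.28x wider than it is tall, which is why everything looks slightly stretched.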