What the hell?

First off, those of you who voted True should gb2/doom9. I always thought that knowing sharpness affects compressibility was one of the more basic aspects of encoding. When you delve into the whys and wherefores it can get a little complicated, but knowing that an image becomes less compressible when you sharpen it is not rocket science. Hell, you can even test it in something like Photoshop.

In fact, let's do an unscientific test. I say unscientific because, first off, there are colourspace conversions going on here (e.g. YV12 → RGB), and also, to exaggerate the differences, I saved the images at what would be quality level 100 in Photoshop's JPEG compressor. Obviously for video encoding it's not going to be that high quality, so the difference won't be as great. Also, for the first test I was lazy and used Photoshop's internal blur/sharpen filters, so they may exaggerate things too, but judging by some encodes I've seen before, that may not be so far-fetched.

Anyway, here is a frame I pulled from a DVD; it hasn't been altered in any way. At Q100 JPEG it's 219.6KB. I then lightly sharpened it using the unsharp mask (which is pretty nice because you get a good amount of control over it); it came out at 249.4KB. Then I blurred it with Photoshop's built-in blur filter, which is pretty heavy-handed; it came out at 184KB.
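
If you want to repeat this yourself without Photoshop, a few lines of Python with Pillow do much the same job. (Sketch only: the filter strengths won't match Photoshop's, and the filename is made up, so expect the same trend rather than the same numbers.)
Code:
# Rough repeat of the Photoshop test: sharpen and blur a frame,
# save all three at JPEG quality 100, compare the file sizes.
import os
from PIL import Image, ImageFilter

src = Image.open("frame.png").convert("RGB")   # any frame grab (hypothetical filename)

variants = {
    "original":  src,
    "sharpened": src.filter(ImageFilter.UnsharpMask(radius=2, percent=150)),
    "blurred":   src.filter(ImageFilter.GaussianBlur(radius=2)),
}

for name, img in variants.items():
    out = name + ".jpg"
    img.save(out, "JPEG", quality=100)
    print(f"{name:10s} {os.path.getsize(out) / 1024:.1f} KB")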


(Left to right: Original, Sharpened, Blurred)


So already you can see the difference, even if it is exaggerated for the sake of it. Now, the thing I am getting at: why sharpening lowers compressibility comes down to the transform/quantisation, as jfs said earlier. I'll go into that in more detail later. What's also happening is that when you blur, pixels get averaged, which reduces the amount of detail to be coded. So the blurred image is smaller on two counts: one, it has fewer high-frequency coefficients; and two, there is less detail, and fewer unique values to code, because everything gets averaged. That's similar to how quantisation works, which reduces the number of unique values to code by rounding. When you end up with many identical values, they can be variable-length coded by Huffman or whatever, but now I'm going off on another tangent.
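
To put a rough number on that averaging point, here's a back-of-envelope check with numpy/scipy on the left-neighbour prediction residual, which is roughly what a predictive coder has to transmit. (The frame is synthetic, a gradient plus grain, so it's purely illustrative.)
Code:
# Box-blur a synthetic frame and compare the unique values and
# zeroth-order entropy of the prediction residual before and after.
import numpy as np
from scipy.ndimage import uniform_filter

def entropy_bits(a):
    """Zeroth-order entropy in bits per sample."""
    _, counts = np.unique(a, return_counts=True)
    p = counts / a.size
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(1)
# Synthetic 8-bit luma plane: smooth gradient plus grain
frame = np.linspace(0, 200, 256)[None, :] + rng.normal(0, 10, (256, 256))
frame = frame.clip(0, 255).astype(np.uint8)

blurred = uniform_filter(frame.astype(float), size=3)
blurred = blurred.round().clip(0, 255).astype(np.uint8)

for name, a in [("original", frame), ("blurred", blurred)]:
    diff = np.diff(a.astype(int), axis=1)   # left-neighbour prediction residual
    print(f"{name:9s} unique residuals: {len(np.unique(diff)):3d}   "
          f"residual entropy: {entropy_bits(diff):.2f} bits/px")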

While I was in Photoshop, I thought, "To hell with it, I may as well do the job right", so I made a more appropriate comparison where I noted the filesizes of the image resized using different filters in AVISynth. Again, this is not 100% accurate because of colour conversions, but for the sake of this it's good enough. I tested with bilinear, bicubic, lanczos and lanczos4; again, I took a screen cap of the preview window so you know I'm not making up the results.


(Left to right: Bilinear, Bicubic, Lanczos, Lanczos4)

The script used was:
Code:
ImageSource("1.bmp")
Crop(8, 0, -8, 0)        # trim 8px off each side
# Swap in each resizer in turn: BilinearResize, BicubicResize,
# LanczosResize, Lanczos4Resize
BilinearResize(640, 360)
Crop(0, 4, 0, -4)        # trim 4px off top and bottom -> 640x352
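
If you'd rather poke at this in Python than AVISynth, Pillow gets you most of the way there. (Rough sketch: Pillow has no Lanczos4 and its kernels aren't tap-for-tap identical to AVISynth's, so expect the same ordering rather than the same numbers.)
Code:
# Resize with different filters, save at JPEG quality 100 and
# compare file sizes, same idea as the AviSynth test above.
import os
from PIL import Image

src = Image.open("1.bmp").convert("RGB")
src = src.crop((8, 0, src.width - 8, src.height))       # Crop(8,0,-8,0)

filters = {
    "bilinear": Image.BILINEAR,
    "bicubic":  Image.BICUBIC,
    "lanczos":  Image.LANCZOS,   # no Lanczos4 in Pillow
}

for name, f in filters.items():
    img = src.resize((640, 360), f).crop((0, 4, 640, 356))  # -> 640x352
    img.save(name + ".jpg", "JPEG", quality=100)
    print(f"{name:9s} {os.path.getsize(name + '.jpg') / 1024:.1f} KB")
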
Also, while I'm here, I thought I'd show you the power of mod8/mod16 (mod16 doesn't apply to still images like JPEG, but mod8 still applies due to the DCT). The first image is 640x352 and is 155KB; the second is 640x351 and is 155.4KB. Same image, just one line shorter, yet the filesize is larger.


(Left to right: Lanczos4 640x352, Lanczos4 640x351)

OK, so it's negligible, but I wanted to share anyway (in most scenarios you'd be cropping 360 → 352, which should be a better saving). When we're talking about encoding video, though, it's good practice to make sure your resolution is divisible by 16: a) to avoid decoder problems (sometimes you get replicated lines or tearing), and b) because it's that little bit smaller in filesize (when you're dealing with motion vectors and as many as 33,000 frames, it's worth doing even if only for the sake of a non-borked encode).
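
The block arithmetic behind that is easy to sanity-check. (Plain tiling maths, nothing codec-specific; partial blocks get padded out by the encoder.)
Code:
# Codecs tile the frame into fixed-size blocks; anything that doesn't
# divide evenly gets padded, and the padding still costs bits.
from math import ceil

def blocks(w, h, size):
    """Number of size x size blocks needed to tile a w x h image."""
    return ceil(w / size) * ceil(h / size)

for h in (360, 352, 351):
    print(f"640x{h}: {blocks(640, h, 8):4d} 8x8 DCT blocks, "
          f"{blocks(640, h, 16):3d} 16x16 macroblocks, "
          f"mod16: {h % 16 == 0}")

# 640x351 needs just as many blocks as 640x352: you pay for the
# padded bottom edge without getting a real line of picture for it.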

Well, back to the original point. The image is transformed and quantised. A sharp image will have more high-frequency coefficients than an unsharpened one; this leads to more unique values, which bloats the filesize in the first place, and then doesn't compress as well during VLC because it's less statistically redundant (as opposed to strings of the same value). It can also cause ringing, the Gibbs effect (a character flaw of the transform AFAIK), on top of the halos you put in through sharpening.
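
You can watch that happen on a single 8x8 block. (Sketch with numpy/scipy: the block is synthetic and the sharpen is a textbook unsharp mask, so it only shows the direction of the effect, both in how many coefficients survive quantisation and how big they are.)
Code:
# DCT an 8x8 block containing an edge, plus blurred and sharpened
# versions of it, and see what the quantiser would have to keep.
import numpy as np
from scipy.fft import dctn
from scipy.ndimage import gaussian_filter

# 8x8 block with a vertical edge down the middle
block = np.where(np.arange(8) < 4, 80.0, 160.0) * np.ones((8, 8))

blurred = gaussian_filter(block, sigma=1)
sharp = block + 1.5 * (block - blurred)      # textbook unsharp mask

for name, b in [("blurred", blurred), ("original", block), ("sharp", sharp)]:
    c = dctn(b - 128.0, norm="ortho")        # 2-D DCT-II, as in JPEG
    # coefficients surviving a coarse quantiser with step 64
    # (anything under 32 rounds to zero)
    surviving = int((np.abs(c) >= 32).sum())
    ac_magnitude = np.abs(c).sum() - abs(c[0, 0])
    print(f"{name:8s} surviving coeffs: {surviving}   "
          f"AC magnitude: {ac_magnitude:6.0f}")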

Also, if you study any of the low-bitrate quantisation matrices, you will notice a lot of large values. Depending on how coarse the matrix is, you might get a whole cluster of 255s. The idea is to zero as many of the high-frequency coefficients as possible. When you do this, it's also effective to Huffman code, because a string of identical values can be represented as a single codeword. The result of zeroing high-frequency coefficients is a softer image. A good example of this is MPEG-4 ASP, where you have the H.263 quantisation type; it's often considered the best (out of that, MPEG, and custom matrices), but it gives a softer image than MPEG.
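
Here's the zeroing in action with a toy matrix. (The matrix below is invented for illustration; real MPEG/H.263 tables are shaped the same way, coarse in the high-frequency corner, but the actual numbers differ.)
Code:
# Quantise DCT coefficients with a matrix that's brutal on high
# frequencies and see how many coefficients are left to code.
import numpy as np
from scipy.fft import dctn

rng = np.random.default_rng(0)
block = rng.integers(0, 256, (8, 8)).astype(float)   # noisy worst case

coeffs = dctn(block - 128, norm="ortho")

i, j = np.indices((8, 8))
# Toy quant matrix: gentle near DC, a cluster of 255s in the HF corner
Q = np.where(i + j >= 8, 255, 16 + 8 * (i + j))

quantised = np.round(coeffs / Q).astype(int)

print("non-zero coefficients before:", int((np.abs(coeffs) > 0.5).sum()))
print("non-zero coefficients after: ", int((quantised != 0).sum()))
# The zeroed high-frequency corner becomes long runs of zeros, which
# run-length + Huffman coding handles very cheaply.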

Quote:
Originally Posted by Quarkboy
What's more important to ask is: Does sharpening the video hurt compressibility enough that the more compression artifacts are noticeable compared to the increase of the visual appeal of the sharpened video.
Well, like most things in encoding, as you suggest, it's all about tradeoffs. If you were to sharpen a video, you'd need to consider the knock-on effects, if any. Will it cause an encode at "X" filesize to start to block? Does it create halos/the Gibbs effect? If it does block, how much bigger does the file need to be, and is that acceptable?

One such tradeoff is still being pondered by the scene: ASP or H.264? Do you want good quality and low playback requirements, or better quality and moderate-to-high playback requirements? The other tradeoff is the ever-popular filesize. Do you want large filesizes and excellent quality, or low filesizes and good quality?

Obviously most people choose with moderation, but you do get the other ends of the scale, where you can find 80MB RM fansubs or 300-400MB H.264. It all comes down to your preference and what you think your userbase wants, which is true of anything, be it filesize, sharpening, etc.

Quote:
Originally Posted by Saber Cherry
Does that matter? No, not at all, because blurring a picture to the point that it can be stored as 3 bytes would be retarded - it's no longer the same image. In other words, sharpness is THE primary goal when encoding, since the human eye uses lines to define images. So it may not make sense to think of sharpness as a parameter that affects compressibility, but rather, compression as a process that changes the encoding while maintaining sharpness.
It's also no longer the same image if you sharpen the hell out of it. Come to think of it, it's not the same image even if you simply compress it, since all frames other than I-frames are basically recycling parts of other frames. Hey, even the I-frames aren't the same, because they undergo quantisation and the high frequencies (read: sharpness and detail) get thrown out!

However, I disagree. THE primary goal is a sensible trade-off, which is perfectly fine if it's weighted either way, just as long as it isn't insane. If sharpness is THE primary goal, then I expect you to never use block-based transforms and lossy codecs again. Ever.

You see, the whole point of how these codecs work is exploiting the human visual system: it doesn't notice finer detail that easily, so that detail can be got rid of to free up space for encoding the parts of the image the eye does notice.

Quote:
Originally Posted by Saber Cherry
The still-image (keyframe and block-level) compression is affected in the exact same ways by the exact same things as non-video still-image compression. Compression techniques specific to video, such as motion vectors, are not particularly affected by things that affect still-image compression - you can reduce the contrast, reduce the number of colors, and smooth the image, which reduce the still-picture bitrate, but the numbers for motion vectors will remain constant... unless you make such drastic changes that motion-search fails.
Motion vectors are one thing; interframe compression is another. Changes you make to one frame might affect how well the next frame matches it. If that doesn't work in your favour, it results in more residual to code, which may bloat the filesize somewhat after 33,000 frames.
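
You can see that on synthetic frames too. (Sketch: the two "frames" are the same scene shifted one pixel, plus independent grain; the motion compensation here is perfect by construction, so anything left in the residual is what sharpening did to the grain.)
Code:
# Even with a perfect motion match, sharpening amplifies the parts
# of a frame that DON'T repeat between frames (grain, ringing), so
# the residual left to code after motion compensation grows.
import numpy as np
from scipy.ndimage import gaussian_filter

def sharpen(f, amount=1.5, sigma=1.0):
    return f + amount * (f - gaussian_filter(f, sigma))

rng = np.random.default_rng(2)
scene = gaussian_filter(rng.normal(128, 40, (128, 128)), 3)  # static detail

# Two frames: same scene shifted one pixel, plus independent grain
f1 = scene + rng.normal(0, 3, scene.shape)
f2 = np.roll(scene, 1, axis=1) + rng.normal(0, 3, scene.shape)

for name, a, b in [("plain", f1, f2),
                   ("sharpened", sharpen(f1), sharpen(f2))]:
    prediction = np.roll(a, 1, axis=1)    # ideal motion compensation
    residual = b - prediction
    print(f"{name:9s} mean |residual|: {np.abs(residual).mean():.2f}")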


The bottom line is that sharpness does affect compressibility, and it can be, and has been, proven.