2009-05-04, 04:07 | Link #161
Two bit encoder
Fansubber
Join Date: Jan 2006
Location: Chesterfield, UK
Age: 39
Ah, the age-old debate. Two key words: diminishing returns and quantization.
Diminishing returns means exactly that. You can have a 175MB encode that looks OK. Increase the filesize (say by 50MB) or lower the quantizer (say by 2) and you will notice an improvement. Increase it again by the same amount and you will notice another improvement, but the difference between the first and second encodes will be bigger than the difference between the second and third. You add the same amount of data each time, but the quality does not increase by the same amount.

Without going into ye olde long poste mode, the basics of this are high-frequency coefficients: the sharp lines and fine detail within an image that get transformed. The sharper an image is, or the more detail it has, the more high-frequency coefficients it produces, and these typically require more data to code. At a high quantizer, many of these high-frequency coefficients are zeroed out. Lower the quantizer and the encoder retains more of them, meaning one encode will have more detail than the other. This can go on and on, but there gradually comes a point where the extra high frequencies being retained make a difference that is hardly noticeable to the human eye under normal conditions (i.e. not zooming in or studying a still image).

So you have diminishing returns on two levels: one in filesize vs detail retained, and the other in detail retained vs detail appreciated by the viewer. Obviously some people have better eyes or better setups than others, so it's subjective, as are most aspects of encoding. Loosely related: http://forums.animesuki.com/showthre...567#post905567

As I go on to explain there, this is why you get different quantization matrices. Some are tuned for low bitrate and as such discard more high frequencies after quantization; others are designed for quality and retain more of them.
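If you want to actually see the zeroing-out happen, here's a minimal toy sketch (this is not any real encoder's quantizer; the block contents and step sizes are made up purely for illustration):

[code]
# Toy scalar quantization of an 8x8 image block's DCT: divide by a step
# size, round to integer levels, and count survivors. Coarser steps zero
# out more of the small high-frequency coefficients that carry detail.
import numpy as np
from scipy.fft import dctn

rng = np.random.default_rng(0)
block = np.full((8, 8), 40.0)      # dark flat area...
block[:, 4:] = 200.0               # ...with a sharp vertical edge
block += rng.normal(0, 2, (8, 8))  # plus a little texture

coeffs = dctn(block, norm="ortho")

for step in (2, 8, 32):
    levels = np.round(coeffs / step)
    print(f"step={step:2d}: {np.count_nonzero(levels)}/64 coefficients survive")
[/code]

The survivor count drops fast as the step grows, and the first coefficients to vanish are exactly the small high-frequency ones.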
2009-05-04, 05:05 | Link #162
x264 Developer
Join Date: Feb 2008
In particular, the DCT of an edge is a Laplacian coefficient distribution from low to high frequency, so naturally all coefficients beyond a certain point get zeroed. Additionally, my experience shows that the high-frequency coefficients contribute less to sharpness than one would think. Visual sharpness depends not only on the actual sharpness of the boundary between dark and light, but also on the contrast.

As a potential psy optimization a few weeks back, I tried implementing the basic concept of CSF masking. The basic idea is that larger DCT coefficients tend to require less precision, so you can get away with rounding them down, since the relative error is lower (the relative difference between 9 and 8 is much smaller than between 2 and 1). So, in trellis, I weighted the value of various DCT coefficients (larger values were weighted lower).

... the results were worse. In particular, the darkness of the edges was reduced, making them seem more blurred even though the actual boundaries of the edges might have been sharper. This makes me suspect that the logic of "high frequencies mean sharpness" is not always sound.
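To put the relative-error idea in concrete terms, here's a toy version of a magnitude-weighted rounding decision (this is not x264's trellis code; the weighting and rate model are made-up stand-ins for the real thing):

[code]
# For each coefficient, pick between the nearest quantization level and
# the one below it, trading a crude rate proxy against distortion that
# is down-weighted for large coefficients (9->8 is ~11% relative error,
# 2->1 is 50%).
def weighted_cost(coef, level, step, lam=0.5):
    recon = level * step
    weight = 1.0 / max(abs(coef), step)  # larger coef -> lower weight
    distortion = weight * (coef - recon) ** 2
    rate = abs(level)                    # bigger levels cost more bits
    return distortion + lam * rate

step = 4.0
for coef in (3.0, 9.0, 40.0):
    nearest = round(coef / step)
    best = min({nearest, max(nearest - 1, 0)},
               key=lambda lv: weighted_cost(coef, lv, step))
    print(f"coef={coef:5.1f}  nearest={nearest}  chosen={best}")
[/code]

Only the large coefficient gets rounded down while the small ones keep their nearest level -- and per the results above, pulling down those large edge coefficients is what washed out the dark side of the edges.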
2009-06-30, 22:02 | Link #163
Junior Member
Join Date: Jun 2009
Location: 20 minutes from the beach
As I am not one who is actively involved in the subbing side of fansubbing, can anyone tell me the reasoning, if there is one, behind releasing a 12-episode series that is 1MB too large to fit onto a single DVD?
I have recently run into numerous series that were all just a tiny bit too large (100KB - 40MB) to fit onto the deceptively named 4.7GB standard DVD-R -_-;;;

Tangential mini-rant: someone should track down the person responsible for the decision to let data storage media manufacturers round down to the nearest base-10 number and use the binary name on the packaging, then slap him/her around with a large trout. Every 1GB of advertised storage space is short by slightly more than 70MB... End result: a "4.7GB" DVD = 4,595,776 KB =/= 4.7GB -_-;;;;;;;;

Back to my thread-topic-related question (^^;: is it possible for a layman like myself to use any of the standard fansubbing software out there to reverse engineer an .mkv and reduce the filesize slightly on my own? Recently, I have been experimenting with Aegisub to correct typos and grammatical errors in fansubs for my own use, and I was planning to teach myself how to time subs as well. I am not averse to learning new software in order to satisfy my compulsive organization and collecting/archiving tendencies. XD
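If you want to verify the math behind my rant, here's a quick sanity check (rough numbers; exact usable capacity varies from disc to disc):

[code]
# "4.7 GB" on the box means 4.7 * 10^9 bytes; the OS of the era reports
# sizes in binary units (2^10 KB, 2^20 MB, 2^30 GB).
advertised = 4.7e9
print(f"{advertised / 2**30:.3f} GB as the OS reports it")  # ~4.377
print(f"{advertised / 2**10:,.0f} KB")                      # ~4,589,844
print(f"each advertised GB is ~{(2**30 - 1e9) / 2**20:.0f} MB short")  # ~70
[/code]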
2009-06-30, 22:53 | Link #164
Excessively jovial fellow
Join Date: Dec 2005
Location: ISDB-T
Age: 37
You probably don't want to re-encode the video, as it is kind of a pain and takes a long-ass time to do.
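If all you need is to claw back a megabyte or two, remuxing is the painless route. A rough sketch using MKVToolNix's mkvmerge (assumes it's installed and on your PATH; the filenames here are placeholders):

[code]
# Remux an .mkv without touching the video: drop the attached fonts and
# images. --no-subtitles / --no-audio would strip whole tracks instead.
# Note: dropping fonts can break styled subtitle rendering.
import subprocess

subprocess.run(
    ["mkvmerge", "-o", "episode_slim.mkv", "--no-attachments", "episode.mkv"],
    check=True,
)
[/code]

How much this actually saves depends on what's muxed in; if the file is only a megabyte over, the attachments alone will often cover it.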
2009-07-01, 14:39 | Link #165
Senior Member
Join Date: Dec 2005
Location: Le Mans, France
If the OS were to use GB the way it should, nobody would be complaining about the size difference.
2009-07-01, 15:51 | Link #166
Saizen
Fansubber
Join Date: Jun 2004
Age: 39
Well, this is kind of off topic, but to be somewhat fair, the prefixes for binary multiples (kibi-, mebi-, gibi-) were only introduced after the regular kilo-style prefixes had become the standard, and no one has felt much like adopting new prefixes the average consumer isn't already familiar with. As confusing as this has become, neither party is technically "in error".
2009-07-31, 22:56 | Link #167
Junior Member
Join Date: Jul 2009
Location: Egypt
Age: 34
I think 340-500MB is a perfectly reasonable size for 720p; anything lower than that just looks like crap to me.
I archive the shows I download (can't buy them), and 340-500MB per episode suits my taste just fine.
2009-08-01, 08:57 | Link #170
uwu
Fansubber
Join Date: Dec 2005
I think you should all quit making comments like that and actually do it. (Same goes for the people in the karaoke thread who talk about Flash and whatever.) If we went through a whole phase where everyone encoded retardedly huge files, it would make the whiners appreciate the old (now current) file sizes more.
2009-08-01, 09:37 | Link #171
Senior Member
Oohh, karaoke thread... I should go look for this fail thread. You actually made me come out of hiding and reply, dammit. And that's a very serious thing to do!
2009-08-03, 14:43 | Link #177
uwu
Fansubber
Join Date: Dec 2005
Tears of JOY. ;_____; (Although I think the CRC should have been brute-forced to all 8's in honor of ENDLESS EIGHT.)

Last edited by Schneizel; 2009-08-03 at 14:55.
2009-08-07, 21:00 | Link #178
Senior Member
Author
Join Date: Jul 2007
Location: Virginia Tech
http://i25.tinypic.com/5ytfdd.png
http://i30.tinypic.com/zn6emp.png
http://i29.tinypic.com/bijvc0.png
http://i25.tinypic.com/28s92l5.png

Since I forgot to resize, here's another sample difference between the same screen at 350 MB and 175 MB. [grumble]At least now this failed encode has some value[/grumble]
2009-08-08, 10:35 | Link #180
but the kid is not my son
Remember when single anime eps were only 175MB? Those were the days.
Leaving that aside, I think 340MB is fine. But 400MB?! I mean, it's nice that they created dual-layer DVDs for archivefags like me, but... help me out a bit here, encoderman ;_;
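For what it's worth, the arithmetic backs me up (treating the quoted sizes as binary MB, like releases of the era, and ignoring filesystem overhead):

[code]
# 12 episodes vs single-layer (4.7 GB) and dual-layer (8.5 GB) DVD-R,
# with disc capacities converted to binary MB for comparison.
SINGLE = 4.7e9 / 2**20  # ~4482 MB
DUAL = 8.5e9 / 2**20    # ~8106 MB
for ep in (175, 340, 400):
    total = 12 * ep
    fits = ("single layer" if total <= SINGLE
            else "dual layer only" if total <= DUAL else "neither")
    print(f"12 x {ep} MB = {total} MB -> fits {fits}")
[/code]

So 340MB per episode squeaks onto a single-layer disc, and 400MB is exactly where you get pushed onto dual layer.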