Old 2011-02-05, 12:51   Link #66
tyranuus
Team Spice and Wolf UK
Join Date: Feb 2010
Location: England
Age: 36
Quote:
Originally Posted by TheFluff View Post
I kept trying to read this part of your post but you go on about how you think using hardware accelerated video filtering is a good idea and using the word "whilst" so I can't really take it seriously, sorry
I'm sorry, but if you're that hung up on the word 'whilst'...it does render this discussion a little pointless.
At the end of the day, my personal conviction is that if something does the job in hardware with reduced power consumption and heat, acceptably for most purposes, then it's a step forward. Everything with PCs has always been a trade-off between heat, power and noise (usually in that order). This is simply another step along that road.

There are always going to be exceptions where something different is needed, or something a little bit more...flexible, but I'm surprised you don't seem to see benefits from being able to offload decoding and filtering to hardware even on a normal PC. Most people I know want a quiet (and cheap) PC that performs reasonably well for the tasks at hand, regardless of what's going on. Hardware decoding works towards that goal, which is why everyone seems to be adding it. It makes a bigger difference on phones/ULV laptops/netbooks/AIO boxes, but there is still a fair place for it in a standard PC.


Quote:
I have no idea about ATI (since lol ATI and lol ATI drivers) but nVidia have exactly three (3) different decoder chips that support H.264, across their entire range of products. See appendix A in the VDPAU documentation.

A quick look at ATI's UVD documentation hints that the situation is similar there as well; there seem to be three different feature sets (i.e. three different generations of decoder chips).
If you only meant between card generations, then yes, you're correct; that's about where things stand. From the way you'd phrased your response previously, I'd thought you meant in general, rather than purely in terms of the GPU's add-on decoder ASICs.
I believe ATI's UVD may have undergone a couple of sub-revisions, suggesting revised ASICs, but either way that's not really that important. In terms of drivers they're both pretty lol at points now; Nvidia certainly seem to have nosedived a bit. Then again, I am a little biased after massive Nvidia driver-caused DPC latency issues on a recent platform I used, which were an absolute nightmare.

Quote:
Where are they claiming that? It's bullshit. What they do support (with certain driver versions) is a bigger DPB, so you can use up to 16 reference frames at 1080p (nVidia supports this too), but just like nVidia they have a hard limit of max 2048 pixels in either direction and max 8192 macroblocks for the entire image. There is also a hard limit on the decoding speed (more specifically ~60fps at 1080p).
Try as an example:
http://blogs.amd.com/play/2010/04/28...E2%80%99s-new/
Multiple players (at the least, I recall it mentioned in release notes for MPC-HC and VLC) support ATI's acceleration.
Note also the specific claim that they are no longer bound by a hard limit of 2048, but support 4kx2k.

Now, whether it works properly I don't know (I've already said that), but it does appear to contradict your statement.
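To make the limits in question concrete, here's a small sketch (my own illustration, not from either post; the function name is hypothetical) of the hard caps described for the older decoder ASICs: max 2048 pixels in either dimension and max 8192 macroblocks (16x16 pixel blocks) per frame. It shows why 1080p squeaks under the macroblock cap while "4kx2k" material needs a newer decoder generation:

```python
import math

# Hard limits described for older nVidia/ATI decoder ASICs (see the quote
# above): max 2048 pixels in either direction, max 8192 macroblocks/frame.
def fits_legacy_decoder(width, height, max_dim=2048, max_macroblocks=8192):
    # H.264 macroblocks are 16x16; partial blocks at the edges still count.
    macroblocks = math.ceil(width / 16) * math.ceil(height / 16)
    return width <= max_dim and height <= max_dim and macroblocks <= max_macroblocks

print(fits_legacy_decoder(1920, 1080))  # True: 120 * 68 = 8160 macroblocks
print(fits_legacy_decoder(4096, 2160))  # False: over both the 2048px and macroblock caps
```

So 1920x1080 fits with only 32 macroblocks to spare, which is presumably why the cap sits exactly where it does.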

Quote:
You seem very confused. "HD audio" is audio with a bitdepth over 16 bits per sample and/or a sampling rate over 48kHz, much like HD video is video with a resolution higher than 720x576 (or so). It does not require any specific codec. Using DTS-HD or Dolby's TrueHD is generally a bad idea anyway since they compress considerably worse than FLAC, and all three are lossless anyway so it's not like converting between them makes a difference in quality. I don't see what this has to do with the discussion about DXVA, however.
HD audio can refer to both. In audiophile/music production terms, yes, it does refer to the likes of 24-bit or 96kHz/192kHz and so on, but in general usage it also refers to the mass-market lossless audio codecs (DTS-HD MA/TrueHD, and occasionally it's applied to raw PCM as well). This is not my choice of naming scheme, but simply the one in general use right now.
The difference in file size between a FLAC encode and a TrueHD encode often actually isn't that big, but FLAC is usually a fair bit more efficient than DTS-HD MA. Still, with space at a lower premium than it used to be, time/ease does sometimes win out.
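For a sense of why lossless compression matters at all here, a quick back-of-envelope sketch (my own numbers, not from the post) of raw PCM bitrates: sample rate x bit depth x channels. The jump from plain 16-bit/48kHz stereo to a 24-bit/96kHz 5.1 "HD" track is roughly 9x:

```python
# Raw (uncompressed) PCM bitrate in kbit/s; helper name is my own.
def pcm_bitrate_kbps(sample_rate, bit_depth, channels):
    return sample_rate * bit_depth * channels / 1000

print(pcm_bitrate_kbps(48000, 16, 2))  # 1536.0 kbps: ordinary stereo track
print(pcm_bitrate_kbps(96000, 24, 6))  # 13824.0 kbps: 24-bit/96kHz 5.1
```

FLAC, TrueHD and DTS-HD MA all shave a large chunk off those raw figures losslessly, which is the whole point of the comparison above.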

You're right, though: although audio format is still relevant to playback in general terms (especially for those who want to bitstream), this section all sprang up from a misunderstanding of which section of Haali's you were referring to, rather than DXVA, so apologies for the sidetrack!

Quote:
That sure is a lot of words, but if you cannot decode 1080p h264 in software on a Core2Duo (no it doesn't matter which Core2Duo) you are doing something wrong. Even my old Athlon X2 that I bought in ~2005 or so could play 1080p with software decoding. In fact any multicore CPU bought after early 2007 or so should be able to decode 1080p in software (toy CPU's like Intel Atom's not included).
The sheer number of people out there I've seen complaining about audio desyncs, stuttering, frame drops or other general problems on X2s, early lower-end C2 chips or laptop chips (Atom aside) suggests that's not always the case, especially given the vast variation in bitrates, encode quality and settings out there.

You've got more efficient decoders out there like CoreAVC that will run most stuff on just about anything modern, but the most common software decoder is going to be ffdshow, and that does sometimes struggle to decode decent-quality rips at 1080p on older CPUs. CoreAVC'll do it, but again, in the scheme of things not many people have that.
Most people want to whack a codec pack/ffdshow on and use WMP, or perhaps MPC-HC, and have it work fine from the off. Most machines will have some background tasks running, be it MSN, AV, whatever, meaning that 100% of the CPU won't always be available. You could argue that not keeping an exceptionally clean PC with the fastest software decoder is doing it wrong, but it is a reasonable representation of the average PC.

If people are doing things like running subs or even karaoke subs, increasing CPU usage elsewhere, then that's even more apparent (and given we're talking on an anime forum here, that IS relevant: it's not the video decoding itself, but it's part and parcel of the viewing experience). In this sort of average-joe situation, with his average PC, offloading the decoding makes perfect sense. Most videos out there will play properly, it removes a lot of the dependency on the host system, and it frees up resources for whatever other tasks the system may or may not be handling.
As an example, despite the fact that my machines are MORE than capable of software decoding (well, bar my download box), I tend to use hardware acceleration for the aforementioned heat and power reasons, and also because it means I can watch a video whilst doing whatever else I want at the time with virtually no impact. It's kinda nice to be able to watch a video and file-convert or run an encode in the background with virtually no impact on the time taken. Makes waiting a lot more tolerable.


Quote:
Also, you keep using the word "whilst". Just for your information, using that in a forum post makes you look like a pretentious faggot. Hope this helps.

Heh, not the intention, just trying to have a realistic discussion on the subject. I wouldn't say using the word TWICE is massive overuse, though; it's not like I used it every sentence!
__________________
Total Anime watched= Enough. What can I say? I'm a convert...
***
PRAY FOR SPICE AND WOLF III and faster Yenpress novel releases!
Reading: None at the moment

Last edited by tyranuus; 2011-02-05 at 15:47.