Old 2009-07-03, 19:06   Link #474
escimo
Double posting... Sorry. Wanted to have a separate post for this tutorial.


Camera RAW


Here's a quick look at the benefits of the RAW format. Basically all DSLRs and quite a few other digital cameras give you the option of capturing RAW images, but why would you want to fill your card with these huge images that you can't even display without first converting them into another format? There are two big benefits.

Noise Reduction
Let's start with the smaller of the two. In most cameras, very little in-camera processing is done to RAW images. Generally all you get is some curve and color space adjustment, but, for example, no noise reduction.

Why is this a good thing? A digital camera nowadays has about as much processing power as a basic smartphone, so in-camera noise reduction generally isn't the most sophisticated; some quality has been sacrificed on the altar of speed. With a modern computer you can do much, much more without making the processing painfully slow. Cameras also give you few or no options for how it's done: at best a few strength levels, and that's it. Although it's a somewhat inaccurate definition, you could say that digital capture has two kinds of noise: chromatic and luminance noise.

To better understand noise, it's best to start with its cause. The image sensor in a camera works a bit like a solar panel, but rather than creating electricity from light, it converts differences in the amount of light into differences in voltage. For each pixel in the final picture there are, roughly speaking, four light-sensitive sub-pixels on the sensor, each with a colored filter: usually two green sub-pixels and one each for red and blue. Each sub-pixel reacts to the light hitting it and produces a corresponding voltage, and when the image is exposed this voltage information is read from the sensor.

Now this creates a bit of a problem. The pixels on the sensor are mostly sensitive to light, but they're also slightly sensitive to temperature: in fact just enough that the heat created by the sensor itself gets recorded by it. ISO sensitivity in digital cameras works by increasing the amplification when reading data from the sensor, so the amount of noise grows as sensitivity is increased, because the ratio of actual recorded data to "false data" decreases. There's also a little noise caused by residual voltage, but that's a less important and rather complicated matter.
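To make the sub-pixel layout concrete, here's a little Python sketch (my own simplification, not any camera's actual pipeline) that bins each 2x2 red-green-green-blue block of a mosaic into one RGB pixel. Real converters interpolate a full-resolution color for every pixel instead of binning, but the idea of combining four filtered samples is the same.

```python
# Simplified sketch, not a real demosaic algorithm: turn each 2x2
# RGGB block of a Bayer-style mosaic into one RGB pixel by binning.
# Real converters interpolate a full color value for every pixel.

def bin_rggb(mosaic):
    """mosaic: 2D list of raw sensor values laid out as
       R G R G
       G B G B  (RGGB pattern, repeating)."""
    h, w = len(mosaic), len(mosaic[0])
    rgb = []
    for y in range(0, h, 2):
        row = []
        for x in range(0, w, 2):
            r = mosaic[y][x]
            g = (mosaic[y][x + 1] + mosaic[y + 1][x]) / 2  # two green sub-pixels
            b = mosaic[y + 1][x + 1]
            row.append((r, g, b))
        rgb.append(row)
    return rgb

raw = [
    [100, 200, 102, 198],
    [196,  50, 202,  52],
    [ 98, 204, 104, 200],
    [198,  48, 196,  54],
]
print(bin_rggb(raw))  # a 2x2 grid of (r, g, b) tuples
```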

A crude example: you have two images with identical exposure, but one taken at two stops higher sensitivity with a quarter of the light.
In the first case, at for example ISO 100, a pixel creates a signal of strength 11X (an arbitrary unit used just for this example): 10X of actual data and 1X of noise.
When you reduce the light to a quarter of the original and compensate by increasing the sensitivity to ISO 400, the sensor still creates 1X of noise but only 2.5X of actual data. The camera simply interprets the 3.5X signal as equivalent to the 11X signal in the first case. It's a bit more complicated than that, but that's the basic principle.
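You can run the same back-of-envelope arithmetic in a few lines of Python (the units are as arbitrary as in the text above):

```python
# Back-of-envelope signal-to-noise comparison: raising ISO amplifies
# data and noise together, so cutting the light worsens the ratio.

def snr(signal, noise):
    return signal / noise

iso100_signal, iso100_noise = 10.0, 1.0  # 10X data + 1X noise
iso400_signal = iso100_signal / 4        # a quarter of the light
iso400_noise = iso100_noise              # sensor heat noise unchanged

print(snr(iso100_signal, iso100_noise))  # 10.0
print(snr(iso400_signal, iso400_noise))  # 2.5
```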

The reason you generally get more noise in the dark areas of the picture is that, unlike the human eye, the sensor records light linearly: the darker the detail, the smaller the differences in the recorded voltage, so the noise makes up a proportionally larger share of the signal. Extending the exposure time also increases noise, because the sensor creates more heat the longer it's active, since it needs a constant electric current. Pixel density is a factor too: if you increase the number of pixels on the sensor without increasing its physical size, the absolute amount of light hitting each pixel decreases. This is one of the reasons cameras with smaller sensors tend to have higher noise levels, as it's usually impossible to increase the actual sensitivity of the recording process enough to compensate.

Now why deal with the noise yourself rather than letting the camera do all the work? This again comes down to marketing. There's fairly constant competition between camera manufacturers over who makes the cameras with the lowest noise. Low noise, however, is usually achieved not by making the sensors more sensitive but by more and more aggressive noise reduction. The camera creates a color for each pixel by interpolating the data from the sub-pixels; noise reduction is done in the same way but over larger areas of the picture, and in camera it's generally applied to both chromatic and luminance data. This inevitably causes some loss of detail.

In general, no noise reduction is applied to RAW images; some cameras don't even compose the final pixels from the sub-pixels. Either way, the data from the sensor is in effect recorded pretty much as it is. This allows you, for example, to interpolate only the chromatic data: the software reads each pixel's absolute luminance and its color, keeps the luminance untouched, and smooths only the color. The rainbow-colored noise is reduced or even removed while pretty much the original level of detail is kept. You'll end up with some monochromatic noise, but that's usually of fairly little concern.
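Here's a toy Python version of that chromatic-only approach. It's a deliberate simplification: it averages the color across one small flat patch, while real software works on a neighborhood around every pixel, but it shows the key property, namely that each pixel keeps its measured luminance while the color speckle averages out.

```python
# Sketch of chroma-only noise reduction on a flat patch: keep each
# pixel's luminance, replace its color with the patch average.

def luma(p):
    """Perceived brightness of an (r, g, b) pixel (Rec. 601 weights)."""
    r, g, b = p
    return 0.299 * r + 0.587 * g + 0.114 * b

def chroma_denoise(pixels):
    """pixels: list of (r, g, b) noisy samples of one flat patch."""
    n = len(pixels)
    avg = tuple(sum(p[i] for p in pixels) / n for i in range(3))  # mean color
    out = []
    for p in pixels:
        scale = luma(p) / luma(avg)            # preserve original luminance
        out.append(tuple(c * scale for c in avg))
    return out

# Three samples of what should be the same gray, with color speckle:
patch = [(120, 100, 110), (90, 115, 105), (105, 95, 120)]
print(chroma_denoise(patch))  # same brightnesses, one consistent hue
```

After the call, every output pixel has the same hue (they're all scaled copies of the average color), but the brightness variation, i.e. the luminance detail, is exactly what was measured.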

Spoiler for Fairly large example picture:


Larger Bit Depth
Now this really makes a difference. Cameras create fairly standard JPEG files. Usually you can crank the compression fairly low, but it's still basically a compressed 24-bit image: 8 bits for each of the red, green and blue channels. (JPEG is often mistaken for a 32-bit format; in its basic form it isn't one, due to the lack of an alpha channel.) The camera sensor, however, is usually capable of quite a bit more than that: DSLRs generally record RAW images at 12 to 14 bits per channel. Each extra bit doubles the number of shades that can be recorded, so a 12-bit-per-channel image contains 16 times more shades per channel than an 8-bit one, and a 14-bit image 64 times more. Well... in theory. In practice the response is almost never perfectly linear, nor do the images hold as much information as the bit depth would suggest, but there's still a great difference.
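The shade math is easy to check:

```python
# Shades per channel double with every extra bit (in theory).
def shades(bits):
    return 2 ** bits

print(shades(8))                # 256
print(shades(12) // shades(8))  # 16 times more
print(shades(14) // shades(8))  # 64 times more
```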

So what? Well, if the pictures are perfect straight out of the camera, this doesn't matter. But if you want to make exposure corrections, especially if you want to apply those adjustments to only part of the image, it becomes a fairly big deal. In the optimal case, a perfectly exposed 14-bit RAW gives you about two stops of headroom: you could increase or decrease the exposure by one full stop without any great loss of detail in the final image. This of course depends on the contents of the picture and the actual performance of your camera, but plus or minus a stop can generally be expected. Trying something like this on a JPEG will almost always clip dark shades to black or light shades to white, and even when it doesn't, you'll still lose a fair bit of shade data, since the original and final images share the same bit depth while the shade scale has been altered.
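A quick way to see that headroom in numbers: push a perfectly smooth gradient up by one stop and count how many distinct shades survive in an 8-bit result. The quantization here is my own simplification, but the effect is the point: pushing 8-bit data leaves gaps between shades (banding), while 14-bit data still fills every output level.

```python
# Push exposure +1 stop (double every value, clip at the ceiling),
# then count the distinct shades left in an 8-bit output.

def shades_after_push(source_bits, out_bits=8):
    max_in, max_out = 2 ** source_bits - 1, 2 ** out_bits - 1
    levels = set()
    for v in range(max_in + 1):
        pushed = min(v * 2, max_in)              # +1 stop, clipped
        levels.add(round(pushed / max_in * max_out))
    return len(levels)

print(shades_after_push(8))   # 129: half the shades gone, visible banding
print(shades_after_push(14))  # 256: every 8-bit output shade still filled
```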

Adobe's Camera Raw, and for example Lightroom, which uses it for RAW processing, has a couple of neat tools that are usable precisely thanks to this headroom: digital fill light and recovery. Recovery lets you fine-tune the exposure of the highlight areas, and fill light does the same for the shadows. Recovery in particular is a useful tool for finding a little extra definition in the highlights; clouds, for example, tend to respond very well to careful use of it.
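Adobe doesn't publish the actual formulas behind these sliders, but tools like these apply tone curves shaped roughly like the toy functions below (values normalized to 0.0-1.0; the exact curve shapes are made up for this sketch, not Camera Raw's).

```python
# Toy tone curves: illustrative only, not Adobe's actual formulas.
# Inputs and outputs are brightness values normalized to 0.0-1.0.

def fill_light(x, amount):
    """Lift shadows: strongest effect in dark tones, while pure
    black (0.0) and pure white (1.0) stay fixed."""
    return x + amount * x * (1 - x) ** 2

def recovery(x, amount):
    """Pull highlights down: barely touches shadows and midtones,
    strongest effect near pure white."""
    return x - amount * x ** 4

print(fill_light(0.15, 0.5))  # a shadow tone, lifted
print(recovery(0.95, 0.3))    # a highlight tone, pulled back
```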

Spoiler for Fairly large example picture:


The "Cost"
Too good to be true? It sort of is. The biggest problem with RAW images is that there's no standard for them at all. RAW images aren't images in the traditional sense: as a simplification, you could say that RAW records how the image was captured, rather than processing that data and recording what was in fact captured. Each image sensor and image processor combination produces a unique type of RAW file, which inevitably causes compatibility issues. Even the most common RAW processing programs might not support the RAW files of new cameras immediately, not even from the most common models; if I remember correctly, it took Adobe about a month to add support for the EOS 500D's (EOS Rebel T1i) RAW format to Camera Raw. Manufacturers usually ship their own processing software, but it generally isn't very good in terms of usability.

There's a non-standard alternative, though. Adobe is campaigning heavily for its DNG (Digital Negative) format, although camera manufacturers have yet to catch on. DNG is a fairly good format because once you have software that can convert a camera's RAW format to DNG (usually a lossless conversion), the files are compatible with any software that supports DNG, regardless of whether it supports the original RAW format. Adobe has DNG support in all of its major image manipulation software, as do a few of its competitors.

As great as DNG is, though, it's not without limitations. One major one is that its adjustable white balance color temperature scale only runs from 2000K to 50000K. Usually that's not a problem, but it poses issues in some specialty cases, especially at the lower end of the scale. There's also a fair chance the format will need upgrading at some point, which could render some older software unusable, especially third-party software. To my knowledge this hasn't happened yet, but that doesn't remove the possibility, as the technology continues to advance at a dizzying rate. One more, and in fact larger, issue is that there aren't really any cameras that capture images in DNG format; at least I have yet to come across one. So DNG pretty much always requires conversion from the original format. This might change in the future if camera manufacturers actually decide to support the format. Remains to be seen.

Final Words
Time for a conclusion. While RAW as a format has its limitations, not least that it always needs conversion to another format before use, in my opinion a few huge pros trump all of its cons. If you have the equipment, it's definitely worth looking into. There are a few very good programs out there for processing RAW files, for example Adobe Lightroom and Apple Aperture, both of which are fairly affordable. At least I think putting half the price of an entry-level DSLR into software that lets you have twice the fun with it is a good deal. And if you can actually use the camera manufacturer's software without having a nervous breakdown, it usually comes free with the camera.
Last edited by escimo; 2009-07-04 at 10:42.