One of the most recurring situations I encounter in wildlife photography is hitting the buffer limit on my DSLRs when I squeeze off a series of photos. I shoot with a variety of Nikon bodies and have a varied collection of SD memory cards, and regardless of the camera and card, I occasionally fill the camera’s buffer and have to wait briefly for the images to finish writing to the memory card. That pause in the action can mean missing a shot I might otherwise have gotten at the end of a long burst.
Here’s a little trick I use when shooting wildlife to increase the number of shots I can get in a long continuous burst of photos. I set my camera to record 12 bit lossless compressed RAW files instead of 14 bit lossless compressed RAW files.
What? Doesn’t a 14 bit image file contain more detail than a 12 bit file? Why would I shoot in 12 bit mode?
One has to look at the technical requirements of the photographic situation to determine the optimal camera configuration. In wildlife photography, for me, getting a good photo of the subject is paramount. When I push the shutter button, I want the camera to keep taking photos until I tell it to stop. By reducing the bit depth to 12 bit RAW files, I can get up to 50% more shots in the buffer before the camera slows down to write the images to the memory card. Simply put, a 12 bit RAW file has a noticeably smaller file size, so the total number of images I can record to the buffer increases. Each image also writes to the memory card faster, because I’m sending less data for each individual photograph.
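As a back-of-envelope sketch of where the savings come from (the 24 MP sensor is a hypothetical figure, and real NEF files add compression and metadata, so treat these numbers as illustrative rather than Nikon’s specs):

```python
# Back-of-envelope comparison of 12 bit vs 14 bit RAW data per pixel.
# Illustrative only: real RAW files add lossless compression, metadata,
# and camera-specific overhead, so actual buffer depth varies by body.

for bits in (12, 14):
    levels = 2 ** bits                      # distinct tonal values per pixel
    print(f"{bits} bit: {levels:,} levels per pixel")

mp = 24_000_000                             # hypothetical 24 MP sensor
raw12 = mp * 12 / 8 / 1e6                   # uncompressed payload in MB
raw14 = mp * 14 / 8 / 1e6
print(f"uncompressed payload: {raw12:.0f} MB vs {raw14:.0f} MB "
      f"({(1 - raw12 / raw14) * 100:.0f}% smaller)")
```

The raw payload gap alone is only about 14%, but 12 bit data tends to compress harder under the camera’s lossless compression, which is how the real-world buffer gain can be larger than the bit count alone suggests.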
For example, on the Nikon D750, shooting 14 bit RAW files, I can normally get about 16-18 shots before the buffer fills and the frame rate drops to a crawl. The D500 has more buffer room, as do the D850 and D810, but eventually those buffers fill and the frame rate drops dramatically until the files are written.
What about image quality?
Aren’t 12 bit RAW files going to produce a lower quality image than 14 bit RAW files?
Yes and no. A smaller file means less data, and less data means less information in the final image. Simple logic. The question then becomes: is there a difference that makes a difference? From my observations, usually not. 99% of the time, I can’t see a difference between a 12 bit file and a 14 bit file when I post process my images. Same detail, same resolution, same noise levels, same color response, same post processing requirements, and the image looks the same, 99% of the time (maybe more).
The 1% or fewer of images where I do see a difference are generally taken at such high ISO or in such bad light that some minor deviation becomes noticeable: maybe a little less highlight detail, maybe a slightly different white balance or color tint, in extremely low or extreme light conditions or in images that were poorly exposed to begin with. The minuscule difference always shows up in images I’d never use anyway, and having a 14 bit RAW file of a poor scene doesn’t give me that little extra to make a difference at final output, either on a monitor or in print.
One reason you can’t see a difference is that your computer monitor can’t show it to you. Everything you see on your monitor is reconstructed into a 10 bit or 8 bit color space anyway, and the human eye can’t detect the difference either. No printer I’ve ever used shows a noticeable difference between a 12 bit and a 14 bit file.
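Here’s a quick sketch of why the gap collapses at 8 bit output. Modeling the bit-depth reduction as simple truncation (a deliberate simplification; real raw converters apply tone curves and white balance first), every possible 14 bit sensor value lands on exactly the same 8 bit display level whether or not it passes through 12 bits on the way:

```python
# For every possible 14 bit sensor value, compare two paths to an
# 8 bit display value using bit truncation:
#   path A: 14 bit -> 8 bit directly       (drop the low 6 bits)
#   path B: 14 bit -> 12 bit -> 8 bit      (drop 2 bits, then 4 more)
# Truncation is a simplified model; real raw processing applies
# curves and white balance before quantizing for display.

mismatches = sum(
    (v >> 6) != ((v >> 2) >> 4)            # path A vs path B
    for v in range(2 ** 14)
)
print(f"14 bit values where the 8 bit result differs: {mismatches}")  # -> 0
```

In this simplified model the two bit depths are literally indistinguishable once the image hits an 8 bit screen; real-world tone mapping and noise make the comparison fuzzier, which lines up with the “99% of the time” observation above.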
It’s psychological. We want to believe there is a difference because more is better, bigger is better. But more isn’t better if it isn’t visible, and bigger isn’t better if it doesn’t produce a visibly better result. Output devices (printers and monitors) can’t show you the difference, and it’s that final conversion for output that makes this a viable approach to getting more bang for your buck. Couple that with the reality that most of what you do with your photos will require converting to 8 bit jpg files for output: you’ll throw away far more visual data in that conversion, and you still won’t see a significant difference in image quality between an sRGB jpg and the master RAW file. Yes, a RAW file contains more accuracy than a jpg and gives you more editing latitude in post processing, but the difference between a 12 bit RAW file and a 14 bit RAW file is far, far smaller than the difference between a RAW file and its jpg conversion.
I could place a 12 bit image next to a 14 bit image on the table and ask you to tell me which was which, and you’d never be able to answer correctly beyond the level of statistical chance. Why? Your eye can’t see the difference. You could only measure it by examining the actual RAW data.
For the skeptical among us, here’s some additional technical analysis if you care to read it. It’s an older article, but the facts haven’t changed.
If your camera has an option to shoot RAW files in 12 bit mode, don’t be afraid to do it. It may give you the photos you’ve been missing due to the camera buffers filling up too quickly.
You could even go a step further and just set your camera to shoot jpg images. Your camera’s buffer capacity will go up dramatically shooting jpgs, but you’ll be giving up a lot more editing latitude by doing so.
Here’s a photo of a Cackling Goose taken using a 12 bit lossless compressed RAW file converted to JPG.
Here’s a photo of a Cackling Goose taken using a 14 bit lossless compressed RAW file converted to JPG.
Both photos were taken of the same goose in the same light using the same camera and same lens. The only difference is the RAW file setting and the movement of the goose in between photos.
I’ve never had a customer say to me, “Oh, that looks like a 12 bit image.”
If your camera can’t be set to record a 12 bit raw file, never mind.