The Future of HDR and its Use within the Camera

A Guest post by Dave Ware from Whalebone Photography.

This note is a quick discussion of High Dynamic Range (HDR) and possible future enhancements to it.

What is High Dynamic Range?

High Dynamic Range is a digital processing technique used in photography to combine a number of images taken at different exposures into a single picture that is well exposed across the entire frame. This extends the range of luminance (the amount of light) that can be represented in an image.

Why is it required?

The amount of colour and luminance a camera can record is governed by the sensor’s capability and the dynamic range of the camera’s electronics. For example, the Canon EOS 40D uses a 14-bit analogue-to-digital converter (ADC) to digitise the analogue signals received from the sensor. Fourteen bits allow 16,384 distinct tonal levels to be recorded per colour channel.
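As a rough illustration of how bit depth translates into recordable levels, here is a minimal Python sketch. The numbers assume an ideal ADC with no noise, and the function name is purely illustrative:

```python
# Tonal levels recordable for a given ADC bit depth (idealised: no noise).
def tonal_levels(bits: int) -> int:
    """Number of discrete levels an ADC with `bits` of resolution can output."""
    return 2 ** bits

print(tonal_levels(8))        # per channel in a JPEG: 256
print(tonal_levels(14))       # per channel in 14-bit RAW: 16384
print(tonal_levels(14) ** 3)  # distinct RGB colours: roughly 4.4 trillion
```

As the comments below this article point out, the bit depth is per colour channel, so the number of distinct colours is the per-channel level count cubed.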

Looking at a histogram, the horizontal axis shows the luminance level of an image, and the vertical axis represents how much of the image contains that level of light. For example, a histogram with a single line at the left-hand edge shows that the image is purely black; likewise, a single line at the right-hand edge represents an image which is purely white. The amount of data which may be compressed within the histogram is limited by the dynamic range of the camera: a very low dynamic range places the limits of the horizontal axis close together, while a high dynamic range places them far apart.
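The histogram behaviour described above can be sketched with NumPy. This is an illustrative example assuming an 8-bit grayscale image stored as an array:

```python
import numpy as np

# Minimal luminance histogram for an 8-bit grayscale image.
def luminance_histogram(image: np.ndarray, bins: int = 256) -> np.ndarray:
    counts, _ = np.histogram(image, bins=bins, range=(0, 255))
    return counts

# A purely black frame puts every pixel in the leftmost bin;
# a purely white frame puts every pixel in the rightmost bin.
black = np.zeros((10, 10), dtype=np.uint8)
white = np.full((10, 10), 255, dtype=np.uint8)
print(luminance_histogram(black)[0])    # 100 — single spike at the left edge
print(luminance_histogram(white)[-1])   # 100 — single spike at the right edge
```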


Here, the exposure of the camera has been set for the balloons – they were the subject of the image, and the trees in this case were used to ‘frame’ them. The spike on the left of the histogram represents the trees, and the data on the right represents the balloons and sky. If the photographer wanted both the balloons and the trees correctly exposed, a compromise would be required, leaving the balloons slightly over-exposed and the trees only slightly under-exposed.


The above image shows the traditional compromise – the sky has lost some of its colour saturation, but the trees have retained some detail. Notice also that the histogram shows a slightly narrower spike at the right-hand edge (the balloons are now slightly over-exposed), and the left-hand edge indicates that more detail is present (the trees are no longer a complete silhouette).

So, to overcome this, the photographer may take one photo exposed for the background and another exposed for the foreground, usually with a few more taken at exposures between these two.

When combining each image, a visually pleasing picture is created and the effects can be quite dramatic. This is the basis of digital HDR. A quick Google search will provide some more examples.
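The combining step can be sketched in Python/NumPy. This is a simplified stand-in for real HDR merging and tone mapping, assuming the bracketed frames are already aligned and normalised to [0, 1]; the weighting scheme (favouring pixels near mid-grey) is one common heuristic, not any specific tool's algorithm:

```python
import numpy as np

# Toy exposure blending: each pixel is weighted by how close it sits to
# mid-grey (0.5), so well-exposed regions of each frame dominate the blend.
def blend_exposures(frames):
    frames = [np.asarray(f, dtype=float) for f in frames]
    weights = [1.0 - 2.0 * np.abs(f - 0.5) + 1e-6 for f in frames]
    total = sum(weights)
    return sum(f * w for f, w in zip(frames, weights)) / total

under = np.array([[0.10, 0.05]])  # frame exposed for the highlights
over = np.array([[0.90, 0.60]])   # frame exposed for the shadows
print(blend_exposures([under, over]))
```

Real tools additionally align the frames, handle moving subjects (“ghosting”), and tone-map the result back into a displayable range.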

The Future of HDR

Currently, HDR is a post-processing technique, but as cameras advance, it’s possible that this is an area manufacturers may really improve.

The dynamic range of the camera is likely to improve. The 14-bit ADC mentioned above allows 16,384 tonal levels to be recorded per channel. 24-bit ADCs have been in manufacture for many years and would allow just under 17 million levels per channel. The sensor would have to be capable of matching this dynamic range, and the camera’s internal processor would have to be capable of handling the extra data. That processing capability already exists, as is evident in home computers, which have operated at 32 bits for years and are now up to 64-bit processing. Whether the sensor is capable of this is another matter, and the additional processing required would increase the time taken to write data to the memory card. This may limit the number of full-speed frames taken before the buffer is full and the camera writes the images out. These drawbacks are perhaps what is impeding the development of increased in-camera dynamic range; as with many advantages, there is often a trade-off.

Another ‘in camera’ technique may be to use multiple sensors within the camera. If one sensor and its accompanying electronics are capable of a certain dynamic range, then two sensors may be used to increase the overall range: one exposing for the highlights and the other for the shadows. Sensors can be made incredibly small – just look at the size of phones containing multi-megapixel cameras – so squeezing two sensors (or more!) into a single camera would probably pose no problem. However, as sensor size decreases, the noise in the recorded image (its ‘graininess’) increases. Once again, this is a trade-off between dynamic range, image quality and size.

Another method could be to change the tone-curve algorithm currently applied to images within the camera. When a photo is taken, signals from the sensor are turned into digital bits and sent to the camera’s processor, which converts the data into something meaningful. This conversion is a form of tone curve. Normally a single ‘average’ curve is applied across the entire image, but modern techniques can apply an individual tone curve to every single pixel. This can render an image exposed in a manner similar to that seen by the human eye (i.e. with a higher dynamic range). It will inevitably increase the processing time within the camera, although compared with the current method of taking numerous photos at different exposures, the extra processing for one single image is probably still a huge time saver.
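The difference between a global tone curve and a per-pixel curve can be sketched as follows. This is a hypothetical illustration, not any manufacturer's actual algorithm; it assumes luminance values in [0, 1] and crudely mimics local tone mapping by lifting shadows with a stronger gamma than highlights:

```python
import numpy as np

# One gamma curve applied identically to every pixel ("average" approach).
def global_tone_curve(image, gamma=2.2):
    return np.power(image, 1.0 / gamma)

# A different gamma per pixel, chosen from that pixel's own brightness:
# darker pixels get a stronger, shadow-lifting curve.
def local_tone_curve(image, shadow_gamma=3.0, highlight_gamma=1.5):
    gamma = highlight_gamma + (shadow_gamma - highlight_gamma) * (1.0 - image)
    return np.power(image, 1.0 / gamma)

scene = np.array([0.02, 0.5, 0.9])  # deep shadow, mid-tone, highlight
print(global_tone_curve(scene))
print(local_tone_curve(scene))
```

Comparing the two outputs for the shadow value shows the per-pixel curve recovering more shadow detail while leaving the highlight largely alone.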

This new tone-curve method is being advanced commercially, and Samsung has recently purchased a licence to use the technology.

Perhaps other manufacturers have an alternative method, or do not consider high dynamic range of high importance in their cameras, or are just biding their time. This technology is still developing and is an exciting area of camera design, especially as the megapixel battle is becoming old news.

High Dynamic Range techniques can be overused, and images can easily be made to look unnatural. They look unnatural because they extend beyond the range perceivable by the human eye. It would be sad if technology removed the authenticity of photography, which separates this art from the art of painting (where both composition and exposure are limited only by imagination). If, however, technology were able to replicate images as seen by the human eye, then perhaps that would be an acceptable milestone.

Check out more of Dave’s work at Whalebone Photography.



Some Older Comments

  • rob brydon February 27, 2010 08:10 pm

    I'm using the HDR in my Sony A550 which BTW offers several levels of adjustment in camera. It takes 2 images and blends them perfectly, hand held. The trick is not to over do it. ;-)..It's great.

  • Dreamer of Pictures February 20, 2010 08:16 am

    Solve the problem at the time you shoot. In the example shown by the author, expose as was done in the first photo, for decent sky color, and simply fill-flash the foreground elements to bring out color and detail. Admittedly this won't work in every situation, because sometimes darker elements of a scene may be beyond the range of the flash, but it works very frequently.

  • Karen Stuebing February 14, 2010 06:14 am


    Here is one review of the Sony A550's in camera HDR. The reviewer admits he's not a big fan of HDR.

    There are some sample photos on this page.

    I think that when HDR is done well, it can produce some amazing photos.

    The question is whether you think the in camera processing works as well as post processing. It seems to me, as in all things with digital cameras, you're better off in control rather than using the default camera functions and then post processing to get what you want.

    That is a generalization, of course.

    To be honest, for $800, it wouldn't be worth it for me. But then again, I think shadows can be intriguing and I also pick the time of day for the effect I want in a photo. Sometimes you want a lot of contrast. Sometimes you don't.

  • yusran February 13, 2010 01:44 pm

    i think not many here know much about sony dslr. talking about in-camera hdr, sony has made a great achievement in that regard. you can look at the reviews of the Sony a550, sony a500 and the latest sony a450. This new range of sony entry-level cameras has tons of features, especially the a550.

    these cameras are capable of producing in-camera auto hdr effectively and you don't need a tripod to get it done. Sony achieves this with some remarkable in-camera processing that aligns two images pixel to pixel.

  • Richard Crowe February 13, 2010 05:10 am

    I wish that the successors to the 50D and 7D level of Canon cameras would have a five stop auto exposure compensation which could be accessed with burst mode. In other words, burst would shoot five shots +2, +1, 0, -1, -2 stops. This would facilitate HDRI shooting to a great degree. I believe that some Nikon models have the five stop capability but I don't know if it can be accessed by burst mode.

  • Walt Bobrowski February 12, 2010 11:44 am

    Here is an excellent technical paper on HDR microscopy which explains the underlying concept, examples of the excellent tonal range achievable in a single image from multiple exposures, and various software approaches to processing HDR images, with recommendations. All is applicable to routine digital photography. I have no affiliation with the author.

  • jake February 12, 2010 02:18 am

    The problem with such large amount of color information is that monitors cannot display all of these colors, nor can printers print them. It's wonderful to be able to record them, but technology on the other end has yet to catch up with today's cameras, let alone cameras of the future.

  • Mike February 11, 2010 08:51 am

    I just found out that Sony's newest DSLR, the a500 I believe, has an HDR function on that will take HDR images rather than stitching them together in Photoshop or Photomatix. I am hesitant to spend $800.00 on another camera just for that feature, however...

  • Dave Wilson February 10, 2010 12:21 pm


    You are right that 8 bit-per-pixel image can contain a maximum of 256 distinct colours. What we are talking about here, however, is not the number of bits per pixel but the number of bits per colour component. A JPEG image, for example, has 8 bits per component which adds up to a total of 24 bits of information per pixel (8 red, 8 blue and 8 green). If you store 14 bits per component, this equates to 42 bits per pixel (14 red, 14 blue and 14 green). HDR image file formats typically store either 16 or 32 bits per component meaning 48 to 96 bits per pixel.

    The number of distinct colours that a pixel can represent is given by 2 to the power of the number of bits used to describe the pixel.

  • Lorian February 10, 2010 10:48 am

    I'm baffled that there is a disagreement on how many colors are in an 8-, 12- or 14-bit image. This isn't some theoretical number; it's well documented. 8-bit color has (2^8) 256 colors TOTAL. If you don't believe me, look it up. All the information above is accurate.

    Regarding the computer analogy, it is quite appropriate. If you read the article he never said that 32- or 64-bit processing had ANYTHING to do with the number of colors on the screen. He was making a correlation between how far we have come with technology and what is possible on the horizon.

    I love the articles on this site but most of the comments are just terrible. Let's use a little constructive criticism and fewer subjective opinions.

  • Dave Wilson February 10, 2010 08:45 am

    I'm with Josh on the number of colours/channels definition. Remember that each of the colour components (R,G,B) are read and digitised at separate sites on the sensor. You, thus, have 14 bits (or, more normally 12 these days as far as I can tell) for each of red, green and blue resulting in a total possible number of actual colours equal to 2 ** ( 3 * 14) or about 4.4 trillion.

    In the real world, of course, this number is going to be a fair bit lower since noise will steal several bits of resolution.

  • Jordan February 10, 2010 08:23 am

    Two things:
    Keeping the dynamic range, the difference in power between the bright and dark, of the sensor the same and shrinking the step size would not help the problem. One would really need to increase the actual dynamic range of the sensor, which in digital means making the charge well deeper. For reasons that go beyond the scope of this forum, this has many technical problems.
    HDR allows the photographer creativity in expressing the lights and darks in interesting ways. If all one wants to do is compress the dynamic range of an image to avoid losing shadows or blowing out highlights, there already exist photographic tricks, but the results of these are not always pleasing. HDR images can look unnatural, not because they create dynamic range greater than the eye can see in nature, but because it takes that range and compresses it to fit in 8 bits. Automatic systems for dynamic compression will always suffer from this problem.

  • Whalebone February 10, 2010 07:07 am

    Hi Josh,

    Firstly I should just clarify that I wrote the article.

    For a JPEG image (or processed image), the number of bits is per channel. Directly from the sensor, the signals are digitised as 14 bits. So yes, you are correct that the resulting image is 8 bits (or so) per channel, but before this stage, each discrete signal must first be digitised into 14 bits.

  • Remington February 10, 2010 07:04 am

    Most HDR images I see are way overcooked. I think it's a process that is overused and abused. The best HDR images I have seen were ones I didn't know were HDR until I was told...and that's how it should be in my opinion. Perhaps we could do with less HDR post processing and create HDR images the old fashioned way; properly metering the scene and getting creative with a variety of Split Grad ND filters.

    Just my two cents, though.


  • dcclark February 10, 2010 03:01 am

    Arg, this whole color depth thing is really misleading. You need to be much clearer that the "8 bit" aspect of JPEGs is really "8 bits PER CHANNEL" -- meaning, 8 bits (256 shades) of red, 8 bits for green, and 8 bits for blue. That totals up to 24 bits, or millions of colors, possible for each picture.

  • Ken February 10, 2010 02:27 am

    The dynamic range of a digital image sensor is only partially related to the number of bits in which each pixel value is quantized - in reality, it's a good deal more complicated.

    Here's a good intro to the topic:

  • AlainP February 9, 2010 09:37 pm

    The number of bits of an image is per channel.. 8bit = 256 colors of Red, 256 of Blue and 256 green...

  • Travis February 9, 2010 12:50 pm

    Also, our current monitors and desktop printers can only display 8 bit images. Until higher bit depth display devices are available, we are left to "tone map" our HDR images so they look right.
    It is a great technology that I use quite often.

  • jay burlage February 9, 2010 10:32 am

    The next step will be video which is insanely exciting to me....

  • Whalebone February 9, 2010 08:47 am


    You're quite correct that the addressable range of a processor has nothing to do with HDR/colour etc. What I was referring to was the fact that as the dynamic range is increased, there is more data to process, and the advantage of a wider addressable range will make this more efficient and accurate (especially in embedded processing within the camera). Sorry for the confusion, and thanks for raising the issue.

  • sp February 9, 2010 08:38 am

    For in-camera HDR that relies on sequentially taken images to be really viable, it would have to solve the problem of aligning them perfectly. The image used in the article is a good example to illustrate the challenge - a portion of the image (the balloons) may be moving rapidly while other portions (the trees) may be relatively static. Even if a tripod was used, there's enough "movement" to make alignment challenging.

    Sensors with better dynamic range is probably the better way to go.

  • Yael February 9, 2010 08:30 am

    @Randy Aldrich

    What you say is true about the memory, but it is also true of the amount of color a system/computer can manage. For example, an 8-bit color is 256 colors, 16 bit is 16k colors... You can look it up here :

  • Jack Larson February 9, 2010 08:00 am

    People's feelings about HDR cover the map. Personally I use it to deal with a range of light that exceeds the capture capacity of my camera. However, I think that people should use it any way they desire. Setting boundaries around what is acceptable I find unacceptable.

  • Danferno February 9, 2010 07:53 am

    The alternative tone curve thing is already available in Olympus cameras (and probably other brands too), though the results aren't brilliant.

  • Adam February 9, 2010 07:45 am

    My Pentax K-x has in-camera HDR. When it's enabled, one press of the shutter results in three photos followed by a considerable period of time processing.
    I've used it a few times and have not really been impressed with the results. It's probably my technique (not every composition will look good in HDR), but there are definitely drawbacks to doing this in camera.
    When doing it manually afterwards it is possible to correct for mis-alignment in the images, and to adjust the strength of the HDR (the Pentax does have strong and weak HDR) while viewing the result.
    Using this sort of in-camera HDR is a little like in-camera selective colouring. It's sort of handy to be able to do in-camera, but doing it in post gives so much more control.

  • Randy Aldrich February 9, 2010 07:27 am

    uhhhh... Good article regarding HDR but the comment about 32bit & 64bit Computer systems is entirely wrong.

    32 & 64 bit have nothing to do with color representation. 32bits refers to the amount of memory the machine can address.

    In the simplest terms I can think of:

    A 32bit Operating System/Machine can only have memory addresses which are 32 bits long.
    A 64bit Operating System/Machine can have memory addresses which are 64 bits long.

    Essentially it just increases the amount of memory the system can support. It has "zero" to do with colors/hdr/brightness/etc.

  • Josh Trumbull February 9, 2010 07:11 am

    I have a question regarding the amount of colors you state can be recorded. I have been under the impression that the number of bits is per channel. In order to get the number of colors that your camera can record you have to multiply the RGB channels. So, this means that an 8 bit image, which contains 3 channels can actually contain approx 16.7 million colors because each channel contains 256 (2^8) luminance values and 256^3 is approximately 16.7 million. Therefore your statement, "The 14 bit ADC mentioned above allows 16,386 colours to be recorded," should be allows 16,386 luminance values per channel to be recorded which would be approximately 4.3 trillion (16,386^3) colors. Please let me know if I have gone awry in my thinking.