Lumariver HDR User Manual

What is HDR?

The (in)famous "grunge" HDR look, which to many has become synonymous with "HDR", even though HDR techniques can produce perfectly natural-looking results. Lumariver HDR is not designed for this type of tonemapping.
The same image tone-mapped with Lumariver HDR. We strive for a natural photographic look. The image before tonemapping is shown in the bottom right. Note that the contrast in the sky and ground is kept true to the original while halo effects are avoided, which is a key factor in achieving a natural "true-to-the-scene" look.

HDR means High Dynamic Range, and in photography it refers to techniques for capturing and producing images of scenes with a very large span between highlights and shadows (= high dynamic range), such as a sunset or a nighttime cityscape.

At image capture the photographer shoots multiple frames in succession with different shutter speeds, to record both highlights and shadows with high quality.

HDR software does two things: merging and tonemapping. The first step is merging, which combines the multiple exposures from the camera into one image that contains the full recorded dynamic range. Since prints and screens are very limited in dynamic range, the range in the HDR picture must be compressed. If this is done simply by lowering the contrast until it fits, the result will be flat and dull. The solution is to make various local adjustments that reduce global contrast while retaining the local contrast. The dynamic range is then compressed, but the image still has high contrast in it. This is tonemapping.

Modern digital cameras have quite good dynamic range, so if the light conditions are only moderately challenging one can usually skip the merging step and shoot a single picture, which is then tonemapped.

One can say that traditional "dodge and burn" is a sort of manual tonemapping, but HDR software usually uses more complex algorithms. Early in the history of HDR software it was found that feeding these algorithms extreme parameter values produced a special "grunge look", which has become so popular that among many laymen it is synonymous with "HDR". This has also given HDR a somewhat bad reputation among serious photographers who are not interested in a gimmicky look. That is a bit unfortunate, since HDR techniques are just as valid when you want to make natural-looking images in difficult light.

HDR techniques can for example be used as a modern alternative or complement to traditional graduated neutral density filters used for controlling bright skies in landscape photography.

Why yet another HDR software?

We made Lumariver HDR because we needed it for our own landscape photography. There are indeed many HDR programs on the market where you can produce the well-known "grunge HDR look" with a few clicks, but few are effective tools for artifact-free merging and natural-looking tonemapping that at the same time let the photographer stay in control. This software intends to fill that gap.

It has the following focus:

While the software is good for quick automatic results, that is not what sets it apart from the mainstream. We see the typical user as a photographer who is really picky about image quality and wants total artistic control over the result. While the software can be used stand-alone, it is likely to be used together with other post-processing software such as raw converters and photo editors (Adobe's Lightroom and Photoshop being among the popular choices).

Lumariver HDR also provides the possibility to merge raw files into raw output (DNG) which is a rare feature. This allows both for more exact merging and a quick and smooth HDR work-flow.

The software uses "stops" as the unit in the parameter settings, i.e. if you set the gradient filter to 2 stops it applies exactly the same strength as a real two-stop graduated neutral density filter would out in the field. We prefer this unit as it is what a photographer is used to working with, and it maintains an understanding of the light conditions in the original scene. For this to hold, the software must render the image in a truly neutral fashion, which it does, unlike most image processing software.
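For reference, a stop is simply a factor of two in linear light, so a value given in stops maps directly to a multiplication factor. A minimal sketch of that conversion (our own illustration; the function names are not part of Lumariver HDR):

    from math import log2

    def stops_to_factor(stops):
        # One stop = a factor of 2 in linear light; darkening by
        # "stops" multiplies the linear value by 2^-stops.
        return 2.0 ** -stops

    def factor_to_stops(factor):
        return -log2(factor)

    print(stops_to_factor(2.0))   # 0.25 -- a 2-stop gradient darkens by a factor of 4
    print(factor_to_stops(0.25))  # 2.0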

Work-flows

Lumariver HDR is designed to work well together with other photo software. It's natural to involve your favorite raw converter, but you can also include an advanced image editor like Photoshop in the work-flow if you like.

Lumariver HDR can load and process camera JPEGs directly, but since much of the dynamic range is lost in the camera's conversion to JPEG we recommend shooting to raw format.

The basic work-flow is as follows:

  1. Shoot in the field to raw format.
  2. Use your favorite raw converter to develop neutral 16 bit TIFF files, which are then imported into Lumariver HDR.
  3. Merge and/or tonemap the image(s) in Lumariver HDR and output a new 16 bit TIFF file.
  4. Import that TIFF file into your raw converter and make final adjustments.

Lumariver HDR also supports raw format directly, so you can import raw files, merge/tonemap in the raw domain and output a normal image file or a DNG raw file. The raw work-flow is discussed in more detail in a separate section.

Lumariver HDR's merging and tonemapping algorithms mimic how an infinitely patient human would manually merge and tonemap the images, but as such they cannot guarantee perfect results for all possible images. If you are a perfectionist and make images for fine art prints you will want to make some manual adjustments from time to time.

Some simpler adjustments can be made in the Lumariver HDR software itself, but if you want to make the kind of pixel-peeping adjustments some of us like to do, it is better to export the masks and layers to a multipage document and edit them in your favorite image editor, such as Photoshop or Gimp. Masks and layers can be imported back into Lumariver HDR and rerun if you want to. If your photo editor can only handle 8 bits (like the current Gimp) we recommend importing the result back into Lumariver HDR, while in full-featured 16 bit capable software you can wrap things up there.

So a work-flow with an image editor looks like this:

  1. Shoot raw.
  2. Develop to TIFF in the raw converter.
  3. Merge and/or tonemap in Lumariver HDR, possibly with some manual adjustments.
  4. Export to multipage TIFF or PSD.
  5. Import to the photo editor, adjust and finalize.

Extra steps if you have an image editor that supports only 8 bits:

  1. Export adjusted layers to 8 bit TIFF and rerun those in Lumariver HDR to get 16 bit output.
    • Lumariver will deposterize 8 bit inputs to guarantee smooth toning.
  2. Import TIFF to the raw converter and finalize.

Lumariver HDR can do merging and tonemapping in a single run, or you can do the steps separately. A merged but not yet tonemapped image should preferably be saved to an HDR format such as floating point TIFF or OpenEXR so no precision is lost, but in practice a 16 bit TIFF stores adequate range for most uses thanks to gamma coding.
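To see why a gamma-coded 16 bit integer TIFF covers a surprisingly large range, one can count the stops between the smallest non-zero code value and white once the gamma curve is inverted. A rough back-of-the-envelope sketch, assuming a simple power-law gamma of 2.2 (real ICC transfer curves differ in the details):

    from math import log2

    max_code = 2**16 - 1
    gamma = 2.2

    # Linear value represented by the smallest non-zero code value.
    smallest_linear = (1 / max_code) ** gamma

    # Stops between the darkest non-zero tone and white.
    print(log2(1 / smallest_linear))   # roughly 35 stops

Precision in the deepest shadows is of course coarser than with floating point samples, which is why a true HDR format is still the preferred choice when no precision loss is acceptable.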

The merged file can then be opened and tonemapped. This means that Lumariver HDR can tonemap files merged in other HDR software, or you can merge files in Lumariver HDR and tonemap the output in some other software.

Raw work-flow

Lumariver HDR can read raw files and also write to the standardized DNG raw format, which allows for a raw-input-to-raw-output work-flow. When raw files have been read, Lumariver HDR can write its output to a normal file format such as TIFF. This can be convenient and produces fine results, but we do not have the same broad feature set in raw conversion as the best dedicated raw converters, so when you need more flexibility you should write to DNG (meaning that no raw conversion is made), open that in your raw converter, and generate TIFF (or other format) output there.

Apart from convenience, the key advantage of providing Lumariver HDR with raw input is that HDR merging becomes more exact. Raw files contain the unprocessed sensor data, i.e. no color conversions, no tone curve, no highlight compression or reconstruction, all of which raw converters introduce even if you try to develop a neutral file. With true raw input, multiple exposures can be combined without the need to detect and undo the raw converter's "distortions", so an exact result can be had. We do recommend merging from raw input if you need the best quality. (The tonemapping step is not as sensitive since it looks at only one file.)

Typical raw work-flow for merging and tonemapping:

  1. Shoot raw, multiple exposures for HDR merging.
  2. Open and merge raw files in Lumariver HDR.
  3. Tonemap in Lumariver HDR, exactly to the look you want or coarsely for further adjustments in the raw converter.
  4. Export to DNG.
  5. Import to the raw converter, adjust and finalize.

If you want, you can skip the tonemapping in Lumariver HDR and export a merged-only DNG, which you then tonemap with highlight/shadow adjustments in the raw converter. This typically works well when you don't need strong compression. However, raw converters cannot compress that much (with good-looking results) and the DNG file format has somewhat limited dynamic range, so if you need more compression it is best to tonemap inside Lumariver HDR first; you can then tune it further in the raw converter if desired.

As you can import and export both before and after tonemapping, there are many work-flow variants. As an example, you could merge to DNG, develop to TIFF in the raw converter and then import that back into Lumariver HDR and tonemap. This work-flow fits well if you want to export to multipage TIFF or PSD for further editing in a photo editor, while using the demosaicer and highlight reconstruction of your favorite raw converter.

In addition to advanced multi-step work-flows, Lumariver HDR can in raw mode also be used for very quick and convenient HDR merging. In some cases you may not intend to use any HDR tonemapping software at all, i.e. only the raw converter, and in that case you can conveniently and automatically convert an HDR series of raw files to a merged raw DNG. You can then see Lumariver HDR as a DNG converter with HDR capability. So if you are out in the field and faced with difficult lighting, you no longer need to feel intimidated by the prospect of lots of post-processing work if you shoot a bracketed series. Just shoot, then auto-merge to DNG with Lumariver HDR and process that DNG in your raw converter just as if it was a single shot, only with noise-free shadows.

If you have a technical camera and need to do flatfield correction (often called "lens cast calibration", LCC, in the tech camera world) you can do so in the import step and thus get a calibrated output DNG, which can be very helpful as many raw converters don't have smooth LCC work-flows.
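Flatfield correction itself is conceptually simple: each pixel is divided by the corresponding pixel of the LCC reference frame, normalized so that the overall exposure is preserved. A minimal sketch of the idea (our own illustration, not Lumariver HDR's actual implementation):

    import numpy as np

    def flatfield_correct(image, lcc_frame, eps=1e-6):
        # image and lcc_frame: linear raw data as float arrays of the same shape.
        gain = lcc_frame / max(lcc_frame.max(), eps)   # relative falloff and cast, 0..1
        return image / np.maximum(gain, eps)           # divide out the falloff and color cast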

Merging

Many HDR merging algorithms mix in a little of each image in every pixel. In theory this is the best approach since it gives you the most information per pixel. However, only in ideal cases is there absolutely zero difference in the scene between two exposures. For still life photography in the studio this is generally achievable, but in outdoor photography there are nearly always some minor differences: wind blowing, clouds moving and so on. When there are differences, mixing the images together leads to blurring and ghosting artifacts.

Another aspect is that there is little to gain from mixing pixels in practice. If a pixel is well exposed in one image its noise is already insignificant and there is no need to mix it with another.

For these reasons Lumariver HDR's merging algorithm is based on stitching, i.e. it avoids mixing pixels if possible and instead picks bright areas from the dark exposures and dark areas from the bright exposures, and makes sure the seams become invisible. One can say that it mimics how a patient human would manually merge the images, and as such the output is well-suited for further manual fine-tuning. Here is the algorithm's strategy outlined:

After the algorithm is complete you can review the result, see where the seams are and (through export/import) make manual adjustments if desired.

Merging options

The merger can be configured with three levels that guide how the algorithm should interpret the status of a pixel: whether it is clipped, perfectly okay, a bit noisy or severely noisy (underexposed). The unit is "stops from saturation", i.e. 0.0 represents the maximum possible value, 1.0 is one stop down, and so on.

The range between Noisy and Clipping (3.75 stops by default) is the "noise-free" range, which is the preferred pixel status; the merger strives to include only those pixels. The range from Noisy down to Underexposed (4 to 6 stops by default) is considered noisy but usable; rather than falling back to blending, for example, a little bit of noise can be accepted. Below the Underexposed limit, pixels are considered so noisy that they are barely usable. They are treated as slightly less bad than clipped areas, but are still avoided at almost all costs.
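Expressed in code, the classification of a single linear pixel value against these three levels could look roughly like this (a simplified sketch of the principle, not the actual implementation):

    from math import log2

    def classify(value, saturation, clipping=0.25, noisy=4.0, underexposed=6.0):
        # Thresholds are given in stops below saturation, matching the
        # merger's Clipping / Noisy / Underexposed settings.
        if value <= 0:
            return "underexposed"
        stops_down = log2(saturation / value)
        if stops_down < clipping:
            return "clipped"        # avoided if at all possible
        if stops_down < noisy:
            return "noise-free"     # the preferred range
        if stops_down < underexposed:
            return "noisy"          # usable, some noise accepted
        return "underexposed"       # barely usable

    print(classify(9000, saturation=16383))   # about 0.9 stops down -> "noise-free"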

The default noise levels are made to match modern cameras but are a bit on the conservative side, that is, you may want to set the levels farther away from saturation. Acceptable noise is a matter of taste, so you can run tests with your camera to decide what levels you find suitable. With the highest dynamic range cameras of today, some will probably increase those settings to 5 and 7 stops, or even 6 and 8.

The "Clipping" setting indicates at what level the merger should consider a pixel to be clipped. Clipped areas are the worst (since there is no information) and are the least likely to be included in a merge. The default value is 0.25, a quarter stop from saturation. You may wonder why this value is not 0.0, i.e. why clipping does not begin only when the pixels reach the maximum possible value. If we look at actual raw pixels directly out of the camera this is the case, but due to white balance conversion, demosaicing, highlight reconstruction and color space conversion, clipped areas may not reach the maximum value at all. In some cases, when there has been a lot of highlight reconstruction in the input images, you may want to set clipping as high as 1.0 or 1.25 stops from saturation (if the merge seems to include clipped areas, try this). The value will of course not affect the base exposure, as there is no darker exposure to bring information in from.

For clipping, the merger looks at each color channel and distinguishes between all-channels-clipped (no information) and partially clipped (some color information left). Partially clipped areas are considered less bad to include in the merge than completely clipped areas (which are the worst, since there is no information at all).

The merger has the following algorithm settings:

All except the "Repro merge" use the same merging algorithm as outlined previously, but with different options for how it should treat clipped areas and ghosting. Through the settings, ghosting and the use of areas with clipped highlights can be disabled completely ("without ghosting" and "without clipping", or both), which can be worth testing when you are not really satisfied with a merge. However, those options increase the risk that all the brighter exposures are excluded from the merge, as the merger may not find a solution that avoids bad seams without including some clipped areas or hiding a bad seam through wide blending (which may introduce ghosting). Note that the high-similarity blending close to seams is always active, as it makes no sense to disable it. With the "without ghosting" setting only the blending of low-similarity areas (i.e. areas that don't look the same between images) is disabled.

Finally, you can activate a completely different merging algorithm called "Repro merge", which is very straightforward. It assumes there is zero movement in the scene and that the brightest pixels which are not clipped are the best exposed, and picks those. There is no seam optimization, since with no scene movement it would be unnecessary. This algorithm should therefore generally not be used for live landscape scenes, but it is very suitable for reproduction photography, for example when shooting paintings or slide film on a light table, where you can guarantee that there is no movement or change in light between shots.
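The idea behind the repro merge is easy to express: for every pixel, take it from the brightest exposure in which it is not clipped, and scale for the exposure difference. A minimal sketch of that selection logic (assuming linear input tagged with exposure values; our own illustration, not the actual implementation):

    import numpy as np

    def repro_merge(images, exposure_stops, clip_level=0.98):
        # images: list of linear float arrays of the same shape, 1.0 = clipping.
        # exposure_stops: exposure of each image relative to the base exposure.
        merged = np.zeros_like(images[0])
        filled = np.zeros(images[0].shape, dtype=bool)
        # Walk from the brightest exposure down to the darkest.
        for i in sorted(range(len(images)), key=lambda i: exposure_stops[i], reverse=True):
            scaled = images[i] * 2.0 ** -exposure_stops[i]   # normalize to the base exposure
            take = (images[i] < clip_level) & ~filled
            merged[take] = scaled[take]
            filled |= take
        # Pixels clipped in every exposure are taken from the darkest frame.
        darkest = min(range(len(images)), key=lambda i: exposure_stops[i])
        merged[~filled] = (images[darkest] * 2.0 ** -exposure_stops[darkest])[~filled]
        return merged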

Tonemapping

Lumariver HDR's tonemapper takes a different approach than traditional tonemappers. One can say that, like the merging algorithm, it mimics how a human operator with all the time in the world would work manually to achieve a natural-looking result. The idea behind this is that the output should have a neutral, timeless look and also be understandable and open to further manual edits if desired. Traditional tonemappers make all sorts of non-linear changes that affect local contrast and saturation in ways that are hard to overview and tune further, and this we want to avoid.

Important note: we're not kidding when we say Lumariver HDR is "HDR for those that don't like HDR" -- the tonemapper is designed for natural-looking results true to the original scene rather than a typical "HDR look"; in fact, the grunge look is not possible to achieve at all. If you want that, you can still use Lumariver HDR's merging engine and export to another HDR software that has that type of algorithm.

The tonemapper identifies which bright parts of the picture need darkening ("burning") and generates a gray-scale layer which, when multiplied with the image (= multiply blending mode), forms the final result. If you want this layer (actually layers, there can be several) it can be exported to Photoshop and similar applications for further work.
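Reproducing the multiply blend outside the program is straightforward as long as you work in linear light: the exported gray-scale layer simply scales every pixel. A minimal sketch (our own illustration, assuming linear data in numpy arrays):

    import numpy as np

    def apply_multiply_layer(image_linear, layer):
        # image_linear: H x W x 3 linear float image.
        # layer: H x W gray-scale multiply layer; 1.0 = untouched,
        # 0.5 = one stop darker, 0.25 = two stops darker, and so on.
        return image_linear * layer[..., np.newaxis]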

The challenge lies in bringing down the bright areas while keeping a natural look. Lumariver HDR's tonemapper does it in the following way:

Most scenes will do well with only the standard compression and possibly a gradient. Further tuning can be achieved by adding some soft compression, and for images with well-defined bright areas the sharp compression can do magic.

You will notice that even for difficult pictures, when the auto mode makes a "wrong" decision the result will most often still look quite okay. It is only when the actual multiply blend layers are inspected that you get a clear visualization of exactly what the algorithm has done, and then you can make your own artistic decisions about it. We think this is one of the strong points of Lumariver HDR: what is done is visualized and can be tuned further. We want the photographer to be in full control throughout the whole artistic process.

The main competitors to HDR programs that strive for a natural look are today not other HDR software, but rather the shadow/highlight sliders in raw converters, which over the years have become better at compressing fairly large ranges to a natural-looking result. When one wants a quick result for less complex scenes, the raw converter is often sufficient. However, for important images and/or really difficult lighting the Lumariver HDR tonemapper provides strong value; it can compress a larger dynamic range and keep a natural look, it doesn't affect saturation as a side effect, and you get to see the actual multiply maps and can export them for further fine-tuning and editing. As Lumariver HDR's algorithms are not real-time, we have also allowed ourselves to focus solely on image quality rather than taking shortcuts to achieve real-time rendering.

Lumariver HDR's output also combines well with a raw converter's shadow/highlight pushing. One way to work is to do the major part of the compression using Lumariver HDR's tonemapper and some mild fine-tuning compression in the raw converter.

Tonemapping tips

Example showing two different ways of tonemapping the same file, and the advantage of mixing several compression methods instead of using just the standard compressor. The original image (not tonemapped) is shown at the top; the foreground is very dark and in obvious need of brightening. The first tonemapped image shows the result if the standard compressor does it all; it's quite okay, but we have lost some of the realism and it looks rather painterly. The reason is that local contrast is flattened too much, the dark clouds are pushed too heavily and the brighter parts are darkened too much. In the second tonemapped image we have mixed all four compression methods; there is still standard compression, but only 1.2 instead of 3.0 stops, which has then been reinforced by a light gradient and some soft and sharp compression. We then retain the true natural high contrast of the sky with dark clouds and the sharp transition to the distant forest, while still having a bright foreground.

First let the program make its automatic tonemapping and choose the parameters, then fine-tune the result if needed or desired. Here are some tips:

Creative dodge-and-burn

Creative dodge and burn is often a part of the process of making a fine-art print. This is (currently) outside the scope of the tonemapper, so if you need to do it we recommend exporting the tonemapper's result for further editing in an image editor or a raw converter with support for local adjustments.

The sharp compression connects all bright zones together and darkens them by an equal amount. For most scenes this is exactly what you want, but in some cases you might want to darken one area more than another. Currently this is not supported directly inside Lumariver HDR. The way to do it is to darken all areas by the same amount, then import the resulting multiply layer (the sharp compression layer) into Photoshop or a similar application and there duplicate, split and brighten/darken.

Indirect shadow pushing

The tonemapper user interface is oriented towards reducing the dynamic range by darkening the bright areas, i.e. you don't provide any parameters for shadow pushing. However, the effect is just the same: the total dynamic range is reduced, and shadows and highlights are brought closer together.

The thinking is the same as for graduated neutral density filters, i.e. to brighten a dark area you reduce the brightness of the brightest area, so that you can increase the total exposure and thus get brighter shadows. The tonemapper will automatically increase exposure after the dynamic range has been compressed.
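A small worked example of this indirect shadow push: say the sky sits 4 stops above the foreground and the tonemapper darkens the sky by 2 stops. Total exposure can then be raised 2 stops without clipping the sky, so the foreground ends up 2 stops brighter while the sky keeps its original output level (the numbers below are purely illustrative):

    sky        = 0.0    # sky relative to clipping, in stops
    foreground = -4.0   # foreground is 4 stops darker

    compression = 2.0   # darken the sky by 2 stops
    sky_after = sky - compression        # -2.0; the foreground is untouched

    exposure_push = -sky_after           # raise exposure back to clipping: +2 stops
    print(sky_after + exposure_push)     # 0.0, sky back at its original level
    print(foreground + exposure_push)    # -2.0, foreground 2 stops brighter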

Guiding the sharp compressor

The automatic masks for the dappled zones (red) and the transition zone (green). The bright area is the sky (the image shows the result after the sky has been darkened and thus brought closer to the water reflection). These masks can be adjusted manually. In this case we may want to make the transition zone (which exists due to the low-contrast transition between sky and mountain in the background) a little bit smaller. The dappled zone looks perfectly alright though; it has identified the areas where light is shining through the tree branches. The tonemapper makes sharp transitions in dappled areas too; the difference is that somewhat lower thresholds are used, which in some cases can lead to less than perfect results on normal edges.

Compared to the other compression components, the "sharp compressor" is a bit less all-round and requires more user input, as it often needs some fine-tuning for best results. For images it is well-suited to, it can be worth the effort, as it provides a very natural tonemapping with zero changes to local contrast. The typical example is a scene with a bright sky and a dark ground; you can then bring down the sky to a suitable level, the whole sky darkened by the same amount (= no local contrast changes) with a sharp edge towards the ground. One can see it as a razor-sharp gradient filter (i.e. without the gradient) where the edge perfectly matches the scene.

To make best use of the sharp compression component your image should be critically sharp, so that all edges in the image are sharp, which is required for making artifact-free sharp transitions. A fuzzy edge will never look good with a sharp transition! This means shooting from a stable tripod (using a cable release, mirror up), focusing well with a small aperture and so on, i.e. standard landscape shooting technique. If you use lenses with strong chromatic aberration it is recommended to correct it in the raw converter before feeding the image to the Lumariver tonemapper, as edges will otherwise appear fuzzy.

The sharp compressor works automatically, but you can also guide it. In some cases you do this just for fine-tuning; in other cases it is required to get a good result. The sharp compressor is a sharp tool, and the auto-parameterization can make mistakes; when it does, they can become very visible, i.e. edges being put in the wrong places.

The work-flow is to always let the tonemapper make a first automatic run and then make adjustments to that.

Guiding masks

The sharp compressor has three special masks which can be edited in the GUI:

The masks are painted in the GUI and are simple coarse bit-masks (either white or black), no feathering is required.

When you make adjustments to the guiding masks, the best work-flow is generally to first adjust the dappled mask, press re-generate and see how the new bright mask turned out, then possibly adjust that and re-generate to see how the transition mask turned out, and then possibly adjust that and re-generate a third time.

When you increase the sharp compressor strength, some edges that were previously sharp may become inverted, and the transition mask is then made larger (the opposite happens when reducing strength). If you want to "lock it down" so that it is not changed anymore, you can choose to edit it and just press Done directly (i.e. make no changes); it will then be considered a custom mask and will not be auto-generated. If you want to revert to auto-generation, choose "Revert" in the menu you get when right-clicking on the mask.

The dappled mask can generally be quite coarsely painted; it can for sure extend well into the bright area. However, if you have a problematic area in the dark part that is not dappled (say a lichen-covered light gray stone against a gray sky, where the tonemapper could confuse the gray stone with the sky) you should make sure that part is not included in the dappled mask. Inside the dappled mask the thresholds between light and dark are smaller than for the other areas, so it can make mistakes in border cases where light and dark are very close to each other.

If you think the tonemapper has included too little in the bright mask, paint those areas with the dappled mask and re-generate. Everything within the dappled mask is evaluated with lower threshold values, so there is a greater chance that an area on the borderline between bright and dark will be considered bright.

When the "re-generate" button is pressed, the transition mask can either be used as is (no changes made to it), or be auto-expanded to a suitable width to avoid or minimize halo effects. If you have created a mask from scratch and just casually masked some edges, you probably want to auto-expand your mask. However, if you have for example reduced the size of the auto-generated transition zones, you will not want them expanded again, so then you make sure the "Expand Transition Mask" check-box is unchecked (which is the default).

Without transition mask auto-expansion the tonemapper will restrict the transition to the areas in the mask, so you should then make them wide enough. A small transition zone leads to a short, fast transition which may cause a visible halo. After rendering you can inspect the multiply blend map (i.e. the "sharp compression" layer) to see if the result turned out as you intended.

When you edit the mask you can choose to reset the brush size to the "reference size". The radius of this brush is the shortest distance a transition zone needs to extend into the bright area in order to fade out the edge completely. Auto-expansion will always fulfill this. However, for advanced manual editing you may want to experiment with a tighter transition zone to keep some of the sharp edge. How far a transition zone extends into the dark area controls how wide the blend into the dark area will be. For stronger compression you may want a wider zone to avoid visible haloing.

The bright bitmap shows which pixels the sharp compressor considers bright and wants to darken. After the initial generate there are the following use cases:

If the first run is way off, i.e. the bright mask includes more or less the whole image or has missed an edge, you may consider not using the sharp compressor at all for that image and only use the more robust and all-round standard compression. Sharp compression requires some conditions to be fulfilled to work well. Typical issues if the bright mask is off:

The strength and inverted edges

The sharp compressor strives to avoid inverted edges, i.e. when the bright area after tonemapping becomes darker than the original dark area. The example above shows a crop of the original image to the left, and in the middle what happens if we darken the sky considerably and keep a sharp edge. The sky is then darker than the mountain, so the mountain looks unnaturally bright and there is a dark micro-halo at the edge. The relation between the sky and the mountain has been inverted. While you can extract this result from the multipage output or force the tonemapper to do it through guiding masks, in its automatic mode it will instead do as the image to the right shows: when inversion is detected it makes a wide transition. The wide transition avoids the unnaturally bright-looking mountain and the micro-halo. If you still want a sharp transition you can reduce the strength (i.e. darken the sky less) so that inversion doesn't occur.

In the first automatic run the sharp compression algorithm sets a suitable strength (in stops). The stronger the compression applied, the less likely it is to make sharp transitions and the more likely it is to make wide transition zones instead. The automatic run tries to find a balance.

When the sharp compressor finds an edge between bright and dark, it tests whether the bright area is still brighter after darkening. Say the bright area is 1 stop brighter and the strength parameter is set to 2; it will then be 1 stop darker after tonemapping, which would lead to an inverted edge, and that looks unnatural and draws attention to itself. To avoid inverted edges the tonemapper instead makes wide gradient transitions at those places, i.e. transition zones that will show up in the transition mask.
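In other words the test is a simple comparison in stops: if the brightness difference across the edge is smaller than the darkening applied to the bright side, the edge would invert. A sketch of that check (illustrative only):

    def edge_inverts(edge_difference_stops, strength_stops):
        # edge_difference_stops: how much brighter the bright side is, in stops.
        # strength_stops: how much the sharp compressor darkens the bright side.
        return edge_difference_stops - strength_stops < 0

    print(edge_inverts(1.0, 2.0))  # True  -> wide gradient transition instead of a sharp edge
    print(edge_inverts(3.0, 2.0))  # False -> a sharp transition is safe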

Thus, if you get more soft transitions and fewer sharp edge transitions than you want, you can try reducing the strength and re-generating. Sometimes you may think that the strength you can achieve without inverted edges is a bit low, but this means that it is the limit for that particular scene, and you need to increase the strength of the other components to get more compression. The tonemapper is optimized to produce a natural photographic result, so it is hard to push it past that. In many cases as little as 0.5 stops of sharp compression can still offload the other components valuably and yield a more natural-looking result.

Image input and output

Lumariver HDR currently supports the following file formats:

Preparing camera raw files

As Lumariver HDR supports camera raw files you can load them directly into the software, and that is often the best way. However, in some cases you may need or want to use your favorite raw converter to produce input to Lumariver HDR, and this section describes how to do that.

We recommend converting the camera raw files to 16 bit TIFF. The general guidelines are as follows:

The tonemapper works best if the input does not have an unnecessarily large dynamic range, so if the input has more highlight range than you want in the final print, it is best to choose your preferred level of clipping in the raw converter.

Reconstructing highlights on anything other than the base exposure is usually a waste of time. If the stack is difficult to merge, so that clipped highlights from a file higher up in the stack end up in the final output, it can be worthwhile to re-render that file with recovered highlights, but this should rarely happen.

If you do recover highlights, strive to make them pleasing to the eye (which may mean that you keep some clipping) and don't overdo it.

In the following sub-sections there are specific step-by-step descriptions of how to achieve this in a few popular raw converters.

Note that most raw converters are unable to make fully neutral renders, i.e. there is always some tone curve applied even when all settings are set to neutral, especially when there is clipping (blown highlights) in the image. If you are only going to tonemap a single image this is not a problem. However, an HDR merging algorithm prefers truly neutral input files, i.e. if there were two stops between exposures when shot, all tones should also be separated by two stops in the files. With most raw converters this is not possible to achieve in full, i.e. if you darken the brighter file by two stops it will not match the darker file. This is however handled by Lumariver HDR, which has a tone-curve matching algorithm, meaning that as files are loaded for merging their tone curves are adjusted to match the base exposure.

Some raw converters also silently modify colors and saturation (so different exposures will have different color), and this makes correct merging very hard. If you have such a raw converter we recommend using a raw work-flow instead, or using a different raw converter when you render input for HDR merging.

When you import a TIFF image into a raw converter the neutral settings are truly neutral though (i.e. no changes are made to the picture). Thus you can use the raw converter for final adjustments of the TIFF output from Lumariver HDR if you want to.

Adobe Lightroom 4

Lightroom 4, here with the base exposure in an HDR series.
Lightroom 4 rendering engine 2010 vs 2012 on an image with clipped highlights, i.e. a brighter image in an HDR series. The 2010 engine with -0.5 stop exposure is close to a true neutral rendering; note the white areas in the sky, the clipped areas, which will be brought in from a darker exposure when merging. The 2012 rendering engine, despite neutral settings, strongly brightens and desaturates the image to make highlight clipping less conspicuous, which indeed works but makes the resulting image hard to use for an HDR merger. Thus the 2010 rendering engine should be used when developing images for HDR merging.

Lightroom 4 has an advanced highlight compression and reconstruction algorithm which tries to make a good-looking, film-like result regardless of how clipped or underexposed the input image is. This is hidden from the user and cannot be controlled, so it is in effect even with "neutral" settings applied. Although this is user-friendly, it is not good for rendering neutral output for an HDR merger. The key problem is that Lightroom lets the status of the highlights affect the whole image. Lightroom is not alone in this; many raw converters work in a similar way, as do out-of-camera JPEGs. The overall goal of this approach is to hide the linear, straight-off clipping behavior of digital camera sensors and instead mimic the non-linear behavior of film, i.e. a smooth compressing transition into blown highlights.

For example, if there are a lot of clipped highlights (which there will be in the brighter images of an HDR series) Lightroom will desaturate and brighten the whole image to make a less conspicuous blend into the clipped areas. The result is similar to the look of overexposed film, which for general use is a good thing. However, this means that if you have a series of exposures from dark to bright, the brighter ones will have a different color than the darker ones. This makes it very hard for a stitching-based HDR merger (such as Lumariver HDR's) to make a good-looking mix.

Fortunately there is a workaround. This more automatic highlight behavior was introduced with the Lightroom 4 rendering engine (2012), while the older engines (2010 and 2003) are capable of more neutral results and can still be used inside Lightroom 4.

  1. Import the image.
  2. Select the 2010 rendering engine, as it is capable of more neutral renderings than the 2012 engine.
  3. Select the "Adobe neutral" profile under camera calibration.
  4. Move all "Basic" sliders to 0 and select the linear tone curve; everything should now be neutral.
  5. Neutrality of highlights can be further improved by reducing exposure by 0.5 stops. This is not strictly necessary though.
  6. If you want highlight reconstruction, increase the Basic/recovery slider until you get a pleasing look.
  7. Adjust white balance if desired.
  8. Apply lens corrections if desired.
  9. Export to 16 bit TIFF with a large color space such as Adobe RGB or Prophoto RGB. You may disable output sharpening.

If you make the settings under the 2010 engine and then change to the 2012 engine, Lightroom will mimic the settings as closely as possible with the new engine. This means -1.0 stop darker than the 2010 setting, -33 contrast and +25 blacks, plus a tone curve with slightly pushed shadows. That is as neutral as the 2012 engine can get, but it will still present problems with highlight compression and color shifts, so if you are developing for HDR merging you really should use the 2010 engine.

If you are developing only a single file for the HDR tonemapper you can use the 2012 engine though, as the color shift issues have no impact when only one file is involved. It is still recommended to make a fairly neutral rendering, by using -1.0 stop as the base exposure setting (reduce it further if you want highlight reconstruction), -33 on contrast and +25 on blacks. You can leave the tone curve linear.

If you shoot with a technical camera and need LCC correction (flatfield correction), Adobe nowadays provides a Lightroom plugin for this. You then need to convert your files to DNG before processing.

Phase One Capture One 7

Capture One 7, exposure has been reduced to restore highlights in the bright cloud. The image was shot with a technical camera, so a corresponding LCC has been applied too.

Capture One 7 is a popular choice among medium format photographers, but it also supports most consumer camera models. It is often not capable of producing truly neutral results, so it may be problematic to produce input for HDR merging (it may work; results will vary depending on camera model and image material), but for tonemapper input it is always fine. It can read DNG, so for HDR merging it may be better to use a raw work-flow in Lumariver HDR and generate a merged DNG, which is then loaded into Capture One for adjustments.

  1. Import the image.
  2. In color tab, for base characteristics: select a neutral ICC Profile and Curve.
    • Some cameras have many choices, others only one or a few. Consult the documentation to find out which one represents the most neutral rendition. If no documentation can be found, use the ICC Profile with the least saturated colors and the Curve with the lowest contrast; this is most likely the most neutral setting.
    • Some cameras have obvious choices like "Linear" or "Linear scientific".
  3. Correct the white balance if desired.
  4. If you want highlight reconstruction, go to exposure tab, show the exposure warning, and if there are red areas (= over-exposure) reduce exposure until the red areas disappear, but no further. The exposure warning is a bit conservative, so one can let some red remain. Moving the pointer over a highlight shows the pixel values. Clipping is at 255; after the exposure has been reduced the highest values should preferably be at clipping, or at least 250.
  5. In lens tab: apply lens corrections and LCC if applicable.
  6. In details tab: you may keep the default settings, or reduce them to zero if you wish (sharpening, noise reduction), but if you have an image with hot pixels (typical for longer exposures) enable Single Pixel noise reduction to remove them. Hot pixels are very bright and can confuse the merging and tonemapping algorithms and should therefore be removed.
  7. Export variant: 16 bit TIFF, use a large color space such as Prophoto RGB. You may in this step disable sharpening if not done earlier.

Capture One 7.1 can embed the camera profile and a transfer function in the TIFF output. This is intended for camera profiling and should theoretically be able to produce neutral output (you still need a close-to-neutral curve though, or else highlights will generally be too compressed to recover in full). At the time of writing Lumariver HDR does not support reversing the embedded transfer function, so we recommend using as linear a curve as possible and a normal color space for output, such as Prophoto RGB.

RawTherapee 4

When you develop the file in the raw converter for input to Lumariver HDR, it should be with neutral settings, i.e. no shadow pushing. An image in need of tonemapping will thus look very dark, as in the example here.

RawTherapee is a free open-source raw converter. It is in most aspects at least as competent as the commercial programs, but is slower and not as user-friendly. It is capable of rendering truly neutral output, which makes it a good choice when rendering output for further processing, especially for HDR merging, which depends on neutral input.

  1. Open the file.
  2. Select "neutral" processing profile.
  3. In color tab: select the Prophoto "Working Profile" and a large color space for "Output Profile", such as RT_Large_gsRGB.
    • Changing to a large color space avoids clipping the camera's colors.
  4. In color tab: apply your own custom camera input profile if applicable.
    • RawTherapee comes with camera color profiles for some cameras; if so, you can use "auto-matched camera-specific color profile".
  5. In color tab: adjust white balance if needed (must be same on all images!).
    • If you want to you can postpone white balance adjustment though.
  6. For highlight reconstruction (optional), in exposure tab:
    • First enable "clipped highlight indication"; if no clipped highlights are shown after the color space change there is no clipping and no need to reconstruct highlights.
    • You can enable the "raw histogram" to see how much is actually clipped in the raw file.
    1. Enable "Luminance recovery" highlight reconstruction.
    2. Reduce the exposure slider just until the highlight clipping almost disappears (usually -0.2 to -0.8 stops). This gives the highlight reconstruction algorithm room to work. Instead of the exposure slider one can use the "Highlight recovery amount" slider to achieve a similar effect, but that will be non-linear, and Lumariver HDR prefers linear files (although it can work with non-linear ones).
    3. Test all available highlight reconstruction algorithms to see which one you think looks best, and pick that one. "Blend" is usually a good choice. "Luminance recovery" is the most "true" one since it does not make up any information, but recovered highlights will then be monochrome, which is rarely good-looking.
    4. If there is no clipping indication at all, raise the exposure a bit until some returns - the Lumariver HDR merger prefers that (clipped) highlights go up and touch the top.
    5. For some cameras RawTherapee's raw format interpreter may have too high a clip level (usually called white level or white point), e.g. a camera may produce raw values up to 16383 but everything above 15000 is just noise. If RawTherapee thinks the clip level is 16383 for that camera, highlights may get a pink or magenta tint. If you see that, you need to apply a linear correction factor in the raw tab: "White Point: Linear Corr. Factor". In this example it would be 16383/15000 = 1.09. You don't need to know the exact value; just increase it slightly until the highlights render without the pink tint (the required value is likely in the range 1.01 to 1.10).
  7. For technical cameras: enable flat field correction and apply LCC shot.
    • You may need to apply a linear correction factor to avoid clipping after the LCC has been applied, i.e. reduce it. If you get problems with highlights getting a pink/magenta tint you have reduced it too much though.
  8. Enable hot pixel removal, and chromatic aberration correction if required.
    • Chromatic aberration makes edge detection more difficult so it should be removed if it is visible in the file.
    • The auto correction in the raw tab usually works very well.
  9. Save to 16 bit TIFF. Due to the neutral profile the file will likely look a bit dark and dull, but that is okay since it is taken care of later in the work-flow.

When you develop for merging you may want to use only "Camera standard" as the input profile in the color tab. In that case only color matrices are used for color correction, which means somewhat less correct colors but a linear result that is easier for the merger to merge. When you later open the merged (and possibly tonemapped) TIFF again in RawTherapee, you can change the input profile to a custom profile and thus apply the non-linear LUT-based color corrections after merging. This is only for the perfectionist though; it is rare that the color corrections cause visible issues for the merger.

Image appearance in the GUI

When you view images in the GUI they may appear dark, and for true HDR images very much so, i.e. almost all black except for the highlights. The effect is further strengthened if you enable raw highlight reconstruction, since that adds some extra highlight range on top.

The reason for this is that Lumariver HDR shows you the true linear range, from the highlights and down. To be able to show the true color of the brightest highlight the image may be further darkened to avoid color space clipping. The merger won't scale for under-exposure either; it shows what is in the file. If the darkest image is under-exposed the result may be very dark indeed (if you export the image the under-exposure is corrected for though).

If the image is dark you can still inspect it before you tonemap it by using the exposure setting in the GUI, temporarily brightening the image suitably to view the area of interest. You can also use the highlight brightness setting to compress the highlights in the way most raw converters do automatically.

When you tonemap the image the output from the merger will be corrected for underexposure (you get an "adjusted input" layer).

Color management

If the input files have ICC profiles Lumariver HDR tunnels them through and writes the same profile to the output (if written to a format that supports ICC profiles, like TIFF). For merging, the ICC profile of the first file is taken into account and the following ones are ignored; the files are assumed to be in the same color space as the base file.

The software expects ICC profiles of the basic type, that is, profiles that specify the colors of the R, G and B channels and the gamma coding. This is the case for all normal ICC profiles such as sRGB, Adobe RGB and Prophoto RGB. I.e. if you export a TIFF file from any imaging software to import into Lumariver HDR you don't need to worry.

The current version doesn't support changing the color space, so if you need to convert from, say, ProPhoto RGB input to sRGB output, you keep the whole Lumariver HDR work-flow in Prophoto RGB and then make a final conversion step in your raw converter or photo editor.

We do recommend that you use the same color space throughout the whole work-flow (only possibly changing it for the final output) and that it is large enough to fit the camera's colors. Prophoto RGB is the safe choice but it is a bit of an overkill in size, so one may prefer to use Adobe RGB or some other wide-gamut color space instead.

The "default" color space sRGB is smaller than a modern camera's color space, so we recommend avoiding it in the main work-flow. Reduction in quantization precision (e.g. 16 bit to 8 bit), resolution and color space is best left to the final output step. This way, new output with possibly different color space needs can be generated from a source that contains the full information.

The OpenEXR format doesn't support ICC profiles, but stores the corresponding information in a chromaticities attribute. Lumariver HDR will convert between this attribute and ICC profiles transparently.

Raw file color management

Camera raw files can have DNG camera profiles (.DCP) applied. You can either load a custom file, or select one of the .DCP files already installed on your system via Adobe Camera Raw. We therefore recommend that you have Adobe Camera Raw installed (via Adobe Lightroom, Photoshop or the free Adobe DNG Converter, which can be downloaded from Adobe), as Lumariver HDR can make use of the DCPs. If you export to DNG the chosen .DCP and white balance will not matter though, as you can adjust these later.

Camera raw files are in a camera-specific color space without white balancing and need to be converted when written to a regular file format such as TIFF. Lumariver HDR will then convert to the Prophoto RGB color space using the built-in color matrices and the white balance setting from the camera. If a DCP file is available (either embedded in the DNG file, if that is the raw format, or via Adobe's DNG Converter) the color matrices from that file are used. Matrix conversions are linear and provide reasonably accurate colors. For even better color accuracy the DCP can have look-up tables with specific hue and saturation adjustments; these are applied too (except internally in the merging stage, as non-linear color changes would reduce merging precision, but all visible images in the GUI are color-corrected). The color space is Prophoto RGB throughout the process.
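A matrix-based conversion of this kind is conceptually just two linear operations: per-channel white balance scaling followed by a 3x3 matrix multiplication into the output color space. A minimal sketch of that pipeline (our own illustration; the parameters are placeholders, not real camera data):

    import numpy as np

    def camera_to_prophoto(raw_rgb, wb_gains, cam_to_prophoto):
        # raw_rgb: H x W x 3 linear demosaiced camera RGB.
        # wb_gains: per-channel white balance multipliers (e.g. from the as-shot setting).
        # cam_to_prophoto: 3x3 matrix, typically derived from the DCP color matrices.
        balanced = raw_rgb * wb_gains                        # white balance, still camera RGB
        flat = balanced.reshape(-1, 3)
        converted = flat @ np.asarray(cam_to_prophoto).T     # linear 3x3 color space conversion
        return converted.reshape(raw_rgb.shape)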

If the output file format supports ICC profiles (such as TIFF), a generated Prophoto ICC profile is embedded in the output. If the output is DNG (i.e. raw) no color conversion is made, and the DCP file available in the base exposure (or via the DNG Converter) is embedded in the output.

If you do not want to use the "as shot" white balance for the raw input, you can specify one separately with color temperature and tint. This setting will then be applied to all files. If you write to DNG output the white balance setting doesn't matter, unless gamma encoding or the desaturate highlight reconstruction is put to use.

At the time of writing many raw converters don't implement the color rendering of DNGs correctly. Adobe's own Lightroom behaves correctly of course (as the DNG format is Adobe's creation), and there you can also choose among many other DCP profiles ("Camera Calibration"; try Neutral for example to get a neutral starting point). However, programs like Aperture 5 and Capture One 7 may show somewhat over-saturated colors and may not behave well with other DCPs than "Adobe Standard". Your mileage may vary, and in some cases you may want to employ a TIFF work-flow.

Aligning images for merging

We hope you are using a stable tripod when shooting images for HDR merging. We also recommend shooting with the mirror up (if applicable) and using the camera's built-in bracketing mode if available, so you don't need to touch the camera between shots and there is minimal vibration when shooting. This way the images will not need aligning.

Should there be a minor misalignment of a few pixels because the tripod was disturbed a little, the merger can usually do a good job anyway without alignment, provided there are enough low-contrast regions to put the seams in.

Should you shoot hand-held or happen to disturb the tripod a lot between shots, you will need to align the images. This feature has not yet been implemented natively in Lumariver HDR, but it can use the command line tool "align_image_stack" from the open source Hugin project. All you need to do is install Hugin and Lumariver HDR will find the tool and use it for image alignment.

Hugin is free software, and packages can be downloaded and installed from the Hugin project downloads page. Mac and Windows versions are readily available. Most Linux systems provide Hugin via the standard software installer, so on Linux you do not need to download and install from the Hugin site.

Raw output is disabled if you have aligned the images, as the raw files' color filter arrays will then no longer be aligned between images.

If you use raw input and write to a normal output format like TIFF you can do alignment. However, the current implementation will not track highlight clipping of raw files as well after alignment, which may lead to less efficient merges. Thus, if you have raw input it is preferable that alignment is not necessary.

Notes on 32 bit TIFF input/output

TIFF with floating point samples is a de-facto HDR format; however, there is some variation between different software in how the samples are interpreted. Lumariver HDR produces files with samples in the range 0.0 (black level) to 1.0 (white level) and no gamma. If an ICC profile is provided it is rewritten to contain a linear gamma.

Should the input ICC profile have a gamma it is ignored; a 32 bit floating point TIFF file is always assumed to have linear gamma, as it makes no sense to gamma-code floating point samples. Should the input file have a range larger than 0.0 to 1.0 it is scaled to fit.
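If you prepare floating point TIFF input yourself, the expectations above boil down to: linear samples, nominally 0.0 to 1.0, with anything larger rescaled on load. A small sketch of that normalization, using the third-party numpy and tifffile Python packages (our choice for the illustration, not something Lumariver HDR requires):

    import numpy as np
    import tifffile

    data = tifffile.imread("merged_hdr.tif").astype(np.float32)

    # Linear samples expected; values above 1.0 are scaled to fit.
    peak = data.max()
    if peak > 1.0:
        data = data / peak

    tifffile.imwrite("merged_hdr_normalized.tif", data)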

These TIFF files have been tested to be compatible with Adobe Photoshop, Adobe Lightroom and Photomatix.

Raw input/output

Supported cameras

Currently Lumariver HDR supports cameras that have a Bayer filter array with red, green and blue channels on the sensor, i.e. most cameras. There are a few exceptions though, such as Sigma's Foveon cameras. For those models you will have to use the normal work-flow where you develop to 16 bit TIFF in a raw converter compatible with your camera.

Most camera manufacturers have unfortunately not yet started to use the standardized raw format DNG, which means that a separate parser, with tweaks for each model, must be implemented for each manufacturer. Like many others we use dcraw as the back-end for reading raw data and should thus support most cameras. As a small independent software development group we have limited ability to test them all, but our intention is that if your camera has a Bayer array sensor it should work with Lumariver HDR. So if you run into problems with your camera's raw files, let us know. We will then want a raw file with both dark shadows and clipped highlights (more than one file is also ok) so we can test the code and establish proper white and black levels.

Raw files don't contain information about the clipping level (i.e. white levels and black levels), which is one of the many things about raw formats that make life hard for software developers (yes, we too would love it if the manufacturers adopted DNG, which would mean more time for coding tonemapping algorithms and less for tweaking raw support). Due to this, a quite common issue is that a file loads fine, but you might see pink highlights even with "desaturate highlights" enabled. This means that the clipping level is set too high for that camera. Let us know and give us an example file with clipping in it, and we'll fix the levels to properly support your camera.

White balancing

If you use raw input and write to raw output no color conversion is made, meaning that white balancing is left to the raw converter. If you instead, for example, develop to a TIFF file, the file will be white balanced according to the white balance setting used on the camera when the image was shot, or alternatively a white balance you provide.

The custom white balance can be given as daylight temperatures from about 1500K to 15000K; 5500K is midday sunlight on a clear day, 6500K is overcast. A tint can also be provided: 0 is neutral, and the useful range is typically between -150 (green cast) and +150 (magenta cast). How white balance is modeled is not standardized, so it differs between programs, i.e. 5500K in one program may not look exactly the same as 5500K in another. In Lumariver HDR most of the white balance code comes from the DNG reference implementation, so the settings will match what you find in Adobe's products well.

Highlight reconstruction

The three modes of highlight reconstruction demonstrated on a cloud which has blown highlights (clipped raw channels). The images show the view in Lumariver HDR before tonemapping. With no highlight reconstruction the cloud becomes pink/magenta as the green channel is the most sensitive and clipped while most red and blue is left (red+blue=magenta). The default reconstruction mode (blend-to-whitepoint) solves this by blending all clipped highlights towards the white-point while keeping the known luminance. It's robust but the highlight may not keep it's natural color and it's generally best to brighten and compress them (lower contrast of highlights with an S-shaped contrast curve for example) in post-processing so the transition into white becomes smoother than here before tonemapping. Raw highlight reconstruction makes a full interpolation of all channels so there's no clipping left, which will add some extra range on top which makes the image slightly darker before tonemapping as seen here. As much of the red and blue channels are intact and the clouds have fairly uniform color the clipped data can successfully be reconstructed and restore the clouds with full detail. Raw reconstruction works best when there's some information left in at least one channel and highlight color is fairly predictable, if not the result can become rather ugly; it's a less robust algorithm than the blend-to-whitepoint option, but for pictures where it works, like here, it does magic. Finally a typical in-camera JPEG conversion is shown (it's also similar to default raw conversion): it clips conservatively and reduces contrast in the highlight range, the result is very stable but a considerable amount of highlight information recorded in the raw file is lost and linearity is distorted.
Blend-to-whitepoint vs raw reconstruction on a sunset. The raw-reconstructed sun looks dark and dull before tonemapping, as the algorithm has reconstructed the sun's center with a color and does not know how bright a sun really is, but it's still the brightest feature in the image, so when tonemapped and post-processed you'll have a bright shining sun which has retained its yellow color also in the center.
In this example the water reflection is so strong that all channels are clipped and the edges of the clipped areas do not have a well-defined uniform color. In this case raw reconstruction fails to create a good-looking highlight and you will instead see discoloration. However, the end result will still be fine as long as you make sure to push this discolored range past the whitepoint as shown in the example above. You may think you lose some highlight range when doing so, and you do, but not compared to other reconstruction algorithms: what you remove is just the parts so heavily clipped that no sane reconstruction could be made, and there may still be well-reconstructed highlights left elsewhere in the image. This means that you can use raw reconstruction also on heavily clipped images if you want, but be prepared to see discolored highlights before tonemapping and contrast adjustment.
One of the brighter images from an HDR set, with large areas of clipped highlights. The left image shows how it looks with blend-to-whitepoint highlight reconstruction enabled (default), and the right without any reconstruction at all. As the green channel is the most sensitive on most cameras and therefore clips first, a mix of red and blue is left, which forms a pink/magenta color. The highlight reconstruction desaturates clipped areas to neutral gray, but to keep the true linearity of the file (to maximize merging precision) they are not brightened to 100% white, which means the result inside the program may look a bit flat and dull (as the left image illustrates). In the final merged, tonemapped and contrast-adjusted output this disappears, so you should not worry; the small image in the middle shows this final result.

Lumariver HDR has two highlight reconstruction algorithms. When loading a raw file you choose which one to use:

Being familiar with how a digital camera clips highlights (ie straight off, one color channel at a time, typically green first) helps you understand Lumariver HDR's reconstruction algorithms better. So if you are new to the concept it's recommended to read the section "highlight clipping" first.

The "blend to whitepoint" algorithm makes sure all recorded luminance information is being used, ie if all but one channel is clipped this is still rendered with the correct luminance (keeps linearity of the file which maximizes merging precision). However as there is no color information left the color is set to the the white balance neutral, and to make it look good there is a gradual blend, ie desaturation, towards the brightest spot. This will often look good, but in some cases with large clipped areas you may get a dull gray. Standard conversions as seen in in-camera JPEGs also blends towards the whitepoint but clips the signal lower so you have full color information at the clipping point, this gives less dynamic range and less highlight information, but in return a very stable result. We have chosen to exclude that algorithm, as we think a user of this type of software can make manual evaluations of highlight quality and compress or push "bad" highlights out of the displayable range if necessary.

When you open a raw file which has clipped highlights you will see the result of the highlight reconstruction in the GUI, and also in the output file if you export to a non-raw format such as TIFF. If the highlights are only partially clipped (ie typically green channel clipped while red and blue are not) and are located in neutral white clouds, the result will usually look very good. However, if a highlight is heavily clipped and not of a neutral color, the highlight reconstruction may produce a flat, dull gray look. A 100% clipped area will be neutral gray but not necessarily the brightest white, which may seem odd, but this happens if the highlight is not neutral white: the blend-to-whitepoint reconstruction algorithm then keeps the luminance of the last known color so as not to alter the known linearity. In other words, the highlights you see during merging should be considered more as diagnostics (showing the true information content in the file) than a final result.

The "raw reconstruction" algorithm reconstructs all clipped channels in the raw file before demosaicing such that there are no clipped highlights left. Sounds too good to be true? Well, to some extent it is. Naturally there is some extent of interpolation involved (or guessing if you like), and the heavier clipped files you have the larger the likelihood that the end result will not look good. First the all-channels-clipped areas are approximated as a soft rounded increase, then the remaining areas which has at least one channel unclipped is interpolated with the assumption that the color inside the clipped area is the same as at its border.

Raw reconstruction usually works very well for mildly clipped clouds and sunsets. In sunsets the center of the sun stays yellow/red as it was, rather than white as in most raw conversions. If you adjust exposure and contrast in post-processing you can then control how much of the reconstructed color to keep; when you push exposure the reconstructed areas are pushed outside the range and blended to white. This means that even if the raw reconstruction result looks a little ugly, the result after tonemapping and possible post-processing can still be very good, and better than if other reconstruction algorithms were used.

If you merge files it's generally wise to only apply raw reconstruction on the base file (the darkest). It makes no sense to raw reconstruct the brighter ones, as merging will bring in highlights from the darkest anyway, and since the brighter files are heavily clipped raw reconstruction will not yield a good result on them. To do this you need to open the base file separately and apply the raw reconstruction option, and then open the brighter files and apply the default blend-to-whitepoint option.

You should consider blend-to-whitepoint the "safe default" and raw reconstruction something that can work magic on some images and fail on others. Note that if you use raw reconstruction and export to raw (DNG) there will be no clipping in that file, which means that when you open it in a raw converter you will not employ any highlight reconstruction algorithm that the converter may have. In some cases this is good, as raw converters typically use more stable algorithms with less exciting results than Lumariver HDR can produce; in other cases you may prefer to use the raw converter's algorithm, and then you should use "blend-to-whitepoint" in Lumariver HDR, which does not affect the raw content, with one small exception concerning tonemapped output, as described below.

If you use raw input, tonemap it and write to raw output, and there are clipped highlights in areas that are darkened by the tonemapping, Lumariver HDR reconstructs those highlights. The reason for this is that a raw converter only knows about the global clip level in a raw file, so if clipped samples are darkened the raw converter will no longer consider them clipped and will render them as-is, which for clipped highlights can look very ugly, usually as flat magenta/pink areas. The reconstruction algorithm chosen at raw input will be used at export.

If you use flatfield correction, raw reconstruction will always be used in the background to extend up to the original clipping level, but not beyond it. For typical flatfield corrections the reconstruction only needs to extend a fraction of a stop, so you will not notice that it's being used, but it makes any raw export nice to work with for a raw converter, as the clipping level stays well-defined.

Highlight reconstruction can be disabled altogether, which means that the file will be rendered as-is with clipped channels. This usually leads to pink/magenta highlights (as the green channel usually clips first). If your base exposure does not have any clipped highlights at all you can make slightly more use of the brighter exposures by disabling highlight reconstruction (as blend-to-whitepoint reconstruction also affects almost-but-not-quite-clipped areas, which are then excluded from the merge). Disabling highlight reconstruction can also be used as a diagnostic, as it will be very obvious in the view where one or more channels are clipped.

Flatfield correction

Example of sensor dust spot removal using flatfield correction (100% crops shown). The reference shot records imperfections such as vignetting, sensor color cast (if any) and dust spots, which are then completely eliminated through flatfield correction. Dust spots are most visible in flat skies as in this example, but removal also works in detailed areas without distorting the underlying texture.
Example of a flatfield correction work-flow using a graduated ND filter at capture. The graduated filter is used in order to capture the original scene in one shot with reasonably good exposure over the whole sensor. The flatfield reference shot is made with the filter still on (hence the darker top) so it can be cancelled out. Note the slight vertical streaks of magenta and cyan in the flatfield shot caused by the technical wide-angle lens. The flatfield-corrected shot is free from vignetting, color cast and dust spots, and the brightened foreground is now dark again as the graduated filter is cancelled out, but the floating point processing ensures that the low noise level is kept intact. The last image shows the final result after tonemapping in Lumariver HDR, exporting to DNG, and final grading and cropping in a raw converter.

Lumariver HDR supports flatfield correction of raw files. A reference image is shot through a white diffuse card, which in a perfect system would result in a uniform white image; in practice the effects of vignetting, dust spots on the sensor surface and uneven sensor response are recorded. The reference image is then used when loading a real image to cancel out the system's imperfections; this is called flatfield correction. In the digital medium format world it is often called "lens cast calibration" or "lens cast correction" instead.
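In essence, flatfield correction amounts to dividing the image by the normalized reference frame. A minimal Python sketch of that idea follows; Lumariver HDR's actual processing is more involved (per-channel black levels, optional dust handling etc), so treat the function below as an illustration only:

    import numpy as np

    def flatfield_correct(raw, reference, black_level=0.0):
        """Cancel vignetting, color cast and dust recorded in a flatfield reference.
        raw, reference: float arrays of the same shape."""
        img = raw - black_level
        flat = np.maximum(reference - black_level, 1e-9)
        gain = flat.mean() / flat          # per-pixel correction factor, ~1.0 on average
        return img * gain + black_level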

Flatfield correction can be used for the following:

In the basic case a reference image is shot once and re-used for all images shot with the given lens. If you want to cancel out dust spots (optional) you need to shoot a reference image in the field, so that you map the dust you actually had on the sensor at the time the real image was shot. If your reference image was shot on another occasion you should choose not to cancel out dust, as it will not match. In that case the dust in the reference shot is removed before the correction is applied, so if you have a library of reference shots it's not absolutely necessary that they are perfectly free from dust (although it's recommended).

Lens color cast (LCC) correction is generally only applicable to technical digital medium format cameras, which use lenses with extremely short flange distances, but a color cast can also appear on standard mirrorless cameras when using adapted film-era wide-angle lenses.

Users of digital medium format technical cameras are very familiar with flatfield correction, or LCC as it's typically called there, as every wide-angle lens causes more or less color cast, and since the cast changes with shift and tilt settings a reference image is generally shot to be paired with every real image. It may sound cumbersome, but as the typical shooting process involves carefully framed images shot from a tripod, the few extra seconds for a reference shot do not add much time to the overall process. An important bonus is that any sensor dust will be precisely cleaned automatically.

If you don't have any lens color cast issues (ie all standard cameras) there's no need to do flatfield correction. Cancelling out the vignetting does make life a bit easier for the tonemapper though, as it restores the correct relative luminance levels within the image, but as vignetting is normally small it's generally not a problem.

A special use-case applicable to HDR photography is to use graduated filters in the field (to get better exposure of the ground if the sky is bright), shoot a reference shot with the graduated filter on so you can cancel it out using flatfield correction, and then do all tonemapping in software. The reason for doing this is to be able to capture a complete scene with good exposure in a single shot rather than an HDR series (for artistic and/or practical reasons).

Note that with this method you can employ stronger/sharper graduated filters and coarser placement, as you are just using them as tools for increasing exposure. This method means that dark areas will get more exposure, ie more photons captured so you get less shot noise, which can lead to better colors compared to when relying solely on the low read noise of a modern sensor. Objects that reach up above the horizon (such as a nearby tree) will be darkened by the filter of course and will in those areas be as noisy as without a graduated filter, but often these objects are textured and/or dark which hides noise, so the overall result is still generally very good.

DNG dynamic range

DNG has had floating point support since the 1.4 release in September 2012 and is thus a good format for HDR, but most raw converters today are adapted to the 16 bit DNG files you get from cameras. Many do not even load a floating point DNG (Lumariver HDR can of course both load and save floating point DNGs). The dynamic range of a 16 bit DNG is of course a bit limited (if it had been gamma-encoded it would be much better, but it is not). So what you can get after merging and saving to a standard 16 bit DNG is an output corresponding to a perfectly exposed, noise-free 16 bit camera. For much work this is certainly adequate, which we discuss in more detail separately.

Linear encoding, as in 16 bit DNGs, means that the brightest stop occupies half the integer range, the stop below a quarter, the next 1/8th and so on. This leads to the somewhat limited dynamic range. By applying an exponential tone curve (generally called a gamma curve in graphics) one can split the stops more evenly over the available integer range and get a large increase in dynamic range; this is done in the TIFF format for example. If you are familiar with the DNG format you may know that it supports LUT encoding (a lookup table for non-linear encoding) which makes gamma encoding possible, but unfortunately only with 16 bit precision, which means that dynamic range cannot be extended past 16 bit linear anyway (the DNG LUT encoding is only designed for expanding 8-10 bit non-linear camera encodings).
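The effect is easy to see by counting how many of the 65536 integer codes each stop below saturation receives with linear versus gamma 2.2 encoding. A small illustrative Python snippet (not tied to any particular file format details):

    def levels_in_stop(stop, gamma=1.0, max_code=65535):
        """Number of integer codes covering the range from 'stop' to 'stop'+1 stops below saturation."""
        hi = max_code * (2.0 ** -stop) ** (1.0 / gamma)
        lo = max_code * (2.0 ** -(stop + 1)) ** (1.0 / gamma)
        return round(hi - lo)

    for stop in (0, 1, 5, 10, 15):
        print(stop, levels_in_stop(stop), levels_in_stop(stop, gamma=2.2))
    # Linear:    32768, 16384, 1024, 32, 1 codes per stop.
    # Gamma 2.2: roughly 17700, 12900, 3700, 760, 160 codes per stop.

Linear encoding starves the deep stops, while gamma encoding keeps hundreds of codes even 15 stops down, which is why gamma-encoded formats hold so much more usable dynamic range.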

Tone-curve setting in the raw converter to cancel out a 1/2.2 gamma encoding of the DNG file. Example taken from Lightroom. This closely resembles a 2.2 gamma curve, but it is not important to have an exact match or even to cancel it out at all; just develop the file to your liking.

If you do need more dynamic range from your standard DNG than 16 bit linear encoding provides, you can enable a workaround in Lumariver HDR's DNG export: by applying a gamma curve to the raw samples the dynamic range is increased to match a gamma-encoded 16 bit TIFF. There is a drawback with this workaround though: as the color channels are affected directly, the gamma must be applied with the desired white balance setting to avoid color shifts. If you don't choose a specific white balance the embedded "as shot" camera white balance will be used. When this type of file is handled in the raw converter the colors will be normal if the white balance is kept, and also if it is changed using auto or the color picker, but if a white balance preset (cloudy, overcast etc) is used the colors will be a bit shifted.

The gamma-encoded DNG will look bright with low contrast when first opened (due to the gamma curve), but this is easily restored with the raw converter's tone curve feature. We apply a 1/2.2 gamma and the image shows how to set the tone curve in the raw converter to cancel it out (mathematically you want to mimic a y=x^2.2 function in the 0 to 1 range). This gamma is the same as is typically used in 16 bit TIFF files, but for that format the programs convert automatically so you never see the gamma version of the image.
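Mathematically the round trip is just a power function and its inverse; a sketch in Python (the gamma value as described above, the function names are our own):

    def encode_gamma(linear, gamma=2.2):
        """Applied at DNG export: values (normalized 0..1) are raised to 1/gamma,
        which brightens the file and gives the shadows more integer codes."""
        return linear ** (1.0 / gamma)

    def decode_gamma(encoded, gamma=2.2):
        """What the raw converter's tone curve should approximate: y = x^2.2."""
        return encoded ** gamma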

A side-effect advantage of the gamma coding is that the limited tonemapping abilities of raw converters often work better and produce more pleasing results when the input image is compressed and brightened through the gamma coding. Without gamma coding a merged HDR image is often very dark and a raw converter may not be able to tonemap it properly (although this can often be worked around by applying an inverse tone curve, which in effect applies the gamma coding).

Lumariver HDR supports the following DNG sample formats:

How large a dynamic range fits into a standard 16 bit DNG?

Example of the dynamic range in a plain 16 bit DNG file. To the left, the shadows of the darkest image in an HDR set pushed 8 stops. As seen, the camera's noise is at an unacceptably high level even though it is a "14 stop" camera. In the middle, a 4 stops brighter image in the HDR set, pushed 4 stops to show the same shadow at the same brightness level. Some camera noise is visible, but acceptable if the shadow is kept reasonably dark. To the right, the merged DNG, which to fit the range is as dark as the darkest image in the set and is thus pushed 8 stops. Some quantization noise appears, but still at a low enough level that the image is not much noisier than the brighter image in the set. Conclusion: the plain 16 bit DNG can store a 4-5 stop HDR set span for a "14 stop" camera. Note that the result may vary depending on the raw converter's demosaicing precision. This example is made with RawTherapee, which does demosaicing in integer space; the result becomes a little better if it is done in floating point space as Lumariver HDR does.

Of the various DNG raw formats discussed here it is obvious that the easiest to work with is the standard 16 bit linear encoding. Unfortunately this is also the DNG that has the least dynamic range, much less than a 16 bit gamma-encoded TIFF.

The question to ask is then exactly how useful this format is: can we merge several camera raw files into this encoding and keep the information, or is information lost due to the limited dynamic range? To answer this we need to compare noise levels. Assuming a completely noise-free image, the signal-to-noise ratio of a 16 bit linear encoding is 96.3 dB, limited by the quantization noise. At the time of writing the best cameras concerning dynamic range (Nikon D800 etc) have about 81 dB signal-to-noise ratio, which leaves 15 dB or about 2.5 stops of headroom in the DNG. That is a quite small amount. However, the characteristics of quantization noise and camera sensor noise are very different. Quantization noise is random and smooth while sensor noise is blotchy and can have streaks, ie one can accept higher levels of quantization noise than sensor noise. This makes the comparison subjective rather than a mathematical exercise.
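The figures are simple to verify, assuming the usual 20·log10 definition of signal-to-noise ratio and about 6.02 dB per stop:

    import math

    snr_16bit = 20 * math.log10(2 ** 16)                  # ~96.3 dB, quantization limited
    snr_camera = 81.0                                      # best cameras at the time of writing
    headroom_db = snr_16bit - snr_camera                   # ~15 dB
    headroom_stops = headroom_db / (20 * math.log10(2))    # ~2.5 stops
    print(round(snr_16bit, 1), round(headroom_stops, 1))   # 96.3 2.5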

Comparing actual test images, the headroom grows to about 5 stops. That is, when you shoot HDR the brightest image can have a 5 stops longer shutter speed than the darkest, and those images can then be merged into a standard DNG without losing any significant information in the deep shadows of the brightest image. And this is for the cameras with the highest dynamic range.

A rule of thumb is to look at what dynamic range your camera has at DxOMark and take the headroom as the difference up to 19. So if your camera has as much as 14 stops of dynamic range according to DxOMark, you get 19-14 = 5 stops of headroom, ie your HDR series can span 5 stops. Typical cameras are rated at 11-12 stops, and thus you get 7-8 stops of headroom in a merged DNG.

5-7 stops between the darkest and brightest image in an HDR series is not a huge span, but for landscape photography this or slightly less is a very typical range. Two shots spaced 3-4 stops, or three shots spaced 2 stops, is what you typically do in difficult back-lit scenes, and thus the standard DNG supports this well even for the cameras with the highest dynamic range. It should also be put in relation to how good raw converters are at tonemapping: few if any of them can effectively tonemap files exceeding this dynamic range. Note that you can tonemap the file in Lumariver HDR before exporting to DNG if you want, and thus reduce the dynamic range requirement both for the DNG file format and for the raw converter handling it.

When you need a larger span than this you need to either use the gamma encoding, which expands the headroom in the DNG by about 7 stops (ie from 5 to 12 for a "14 stop" camera), or use a real HDR format.

As discussed in the section "dynamic range in raw converters", some raw converters lose precision in the conversion, meaning that you won't get the full potential from the DNG file. If you have such a raw converter you will probably always need to use gamma encoding or tonemap the file before you open it in the raw converter.

Dynamic range in raw converters

Most raw converters use floating point math internally and are thus theoretically capable of HDR. However, the conversion to floating point can take place at a late stage, dropping precision along the way. Some problems are:

Raw converters traditionally expect low-bit input directly from cameras, so it is understandable that shortcuts have been taken concerning precision. Unfortunately, integer math on 16 bit files (TIFF and DNG) can, due to truncation in the calculations, reduce precision also in plain 16 bit raw DNG files, ie you do not get the full dynamic range the format is capable of. That demosaicing is done in 16 bit might seem irrelevant when you have 16 bit integer input, but in the dark shadows it can make a difference, since the interpolations made by a demosaicer make use of fractional quantization steps; thus the darkest shadows may not be rendered as smoothly as they could be. It is certainly a minor problem though. In raw conversion most precision seems to be lost in color space conversions and various hidden tone curve and color correction processing. Some raw converters do this reasonably well, others lose a significant amount of precision, ie 2-3 stops of shadow space.

Lumariver HDR uses floating point math throughout, even in the demosaicing. Great care has been taken to maximize the potential of all supported formats. If you suspect that your raw converter loses precision you can make a conversion in Lumariver HDR as a reference and compare, using the exposure slider to push the file.

How to relate to these precision issues?

All raw converters can do the traditional TIFF-based work-flow: 1) convert from raw to 16 bit TIFF in the raw converter, 2) merge and tonemap in Lumariver HDR and save to a 16 bit or floating point TIFF (if the raw converter supports floating point, that is safer from a precision standpoint), 3) finalize in the raw converter. Problems may only arise when you feed the raw converter with merged but not tonemapped input, such as a merged DNG.

Deposterization of 8 bit files

The program has an advanced de-posterizing algorithm, so when an 8 bit file is up-sampled to the internal 32 bit floating point format, single-step posterization is completely removed. This means that you can edit your multiply blend maps in 8 bit software and still not worry about banding.

However, while this approach is perfectly ok for the blend maps, we strongly discourage 8 bit files for image input, since they represent a coarser quantization than a better camera provides, and details and color information will thus be lost.

Multipage (layered) output

Layered output for a three image HDR merge. The exposure layers have an alpha channel so when layered on top they will form the same result as the output layer.
Layered output for a tonemapping using all compression methods (ie more layers in the output).

Multipage output can be produced from both merging and tonemapping. The pages get names to explain what they contain. When opened in a photo editor like Photoshop or Gimp each page becomes a layer.

A merged multipage file has the following layers:

A tonemapped multipage file has the following layers:

The idea of splitting the tonemapping components into several layers is to make it easier to edit and make your own combinations. For example, one may want to use more of the sharp compression edge layer and reduce the transition zones. Or duplicate the sharp compression layer, split it into several parts and apply different amounts of compression to the different parts.

The Lumariver HDR user interface allows for adjustments though, so the compression will generally already be as you want it when the file is opened in Photoshop/Gimp etc.

The mathematically correct way to use the gray-scale tonemap layers is as multiply layers (multiply blend mode); that is, if the "adjusted input" layer is multiply-blended with the "combined compression" layer the result is the same as the "output" layer (but a bit darker, more on this below).

The advantage of multiply blend is that it makes a linear, predictable, easy-to-understand conversion and no clipping can occur. This is an excellent blending mode for the tonemapper. The disadvantage is that you can only make things darker, not brighter, so the result is a darker image. In Photoshop you would solve this by stacking a levels/curves adjustment layer on top to restore the brightness to your liking. Since Photoshop uses floating point conversions you don't need to worry about losing precision when going from a dark to a bright picture.
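The relationship is simply a per-pixel multiplication; a small Python illustration with made-up layer names and values normalized to 0..1:

    import numpy as np

    def multiply_blend(adjusted_input, combined_compression):
        """Multiply-blending the gray-scale tonemap layer onto the input reproduces
        the output layer, only darker; a levels/curves adjustment restores brightness."""
        return adjusted_input * combined_compression

    # A mid-gray pixel under a 0.6 compression value becomes darker (0.3),
    # which is why a brightening adjustment layer is stacked on top afterwards.
    print(multiply_blend(np.array([0.5]), np.array([0.6])))   # [0.3]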

For simpler 8 bit photo editors (like Gimp) and editors without adjustment layers we recommend using the output layer directly, or, rather than "multiply" blend mode, trying "overlay", "soft light" or "hard light". Those modes can brighten the image, and although the result is not as predictable and exact, it often comes very close to the contrast-adjusted multiply blend version. You may need to adjust the brightness of the tonemap layers to get a good result, depending on which image editor you use. Unfortunately "overlay" and similar modes are not standardized between photo editors and can work a bit differently depending on the application used.

Image output filter

To enhance the look of the image you can set the following in the GUI:

All of these will be exported to all formats except HDR (OpenEXR) and DNG, as with DNG you're expected to further tune the raw in a raw converter. The settings can still be useful as a preview though, as a freshly tonemapped image usually looks a bit flat without additional contrast applied. When you first open a raw image these settings are auto-set to produce a result similar to an in-camera JPEG.

Exposure is truly linear, ie there is no hidden compression or desaturation going on. Increasing exposure is typically used to clip away unnecessary or ugly highlights (caused by artifacts from a difficult highlight reconstruction). Decreasing exposure does not generally make sense as it will put the clip point below maximum. If the image is too bright it's generally better to adjust contrast or highlight brightness instead or reduce tonemapping strength.

Highlight brightness will apply a tone-curve that compresses (brightens) the highlights, and will thus as a side effect expand the mid-tones and shadows so you get more contrast in the picture. It's often worthwhile to brighten the highlights a little in a tonemapped image to get a more natural look.

The contrast setting applies the typical S-shaped tone curve, ie compresses highlights and shadows to increase mid-tone contrast. The tone curve used for contrast and highlight brightness is a DNG-type curve, which has a film-like effect in that saturation increases slightly when contrast is increased. With both the contrast and highlight brightness sliders set to 50% the resulting tone curve is shaped like a typical default curve used in camera JPEGs and raw converters.

Tips and tricks

Using Lumariver HDR as a file format converter

While not designed for the purpose, Lumariver HDR can be used to convert from one file format to another: simply open a single file, skip the tonemapping and save it to an output file in your desired format.

HDR for repro and "zero noise HDR"

If you photograph paintings or slide film it may be important to register noise-free dark colors but otherwise produce exactly the same image as a single shot would (ie no tonemapping and no contrast adjustment). Lumariver HDR works well in this use case.

Shoot two raw pictures, one optimally ETTR and one 4-6 stops brighter. Make sure to have a stable rig so there is absolutely no movement between the frames.

Choose either a raw to DNG work-flow, or develop the files to 16 bit TIFF (with no highlight reconstruction) and merge. Use the repro merge option. Since there is no scene movement, no light changes and no rig movement the merging will be without problems.

Since a 16 bit TIFF file, thanks to gamma encoding, can store about 14-16 noise-free stops (a "14 stop" camera has only about 6-7 virtually noise-free stops), it is usually all you need for the final merged output in this use case as well. The advantage of 16 bit TIFF compared to a real HDR format is the wider application support.

If you merge to DNG you will probably want to use the default linear 16-bit format as it is safest for archiving. The dynamic range is not as good as gamma-encoded TIFF but adequate for many repro applications.

If you choose to develop to TIFF instead of using raw input you should avoid any non-linear scaling in the raw conversion. This means that highlight recovery should be disabled and a completely neutral rendering should be made. Note that not all raw converters are capable of this, as described in the raw conversion section.

In addition to using the repro merge option (which assumes zero movement and picks only the best-exposed pixels), you should probably also disable automatic contrast adjustment, so that your output image looks exactly like the base exposure, the optimal ETTR image, but with noise-free shadows (if you write to DNG there is no contrast adjustment).

HDR shooting tips

When it comes to shooting an HDR series, most photographers shoot too many pictures with too closely spaced shutter speeds. In almost all cases you only need two or three pictures, spaced 2 to 4 stops. If you shoot two pictures a spacing of 3 stops is typically a good choice, and with three pictures 2 stops is usually best. If the spacing between pictures is too large there will be too little well-exposed overlap between adjacent pictures, which means that Lumariver HDR will not be able to identify the tone curve. If this happens you can try to increase the noisy and underexposed values so the merger will look at more pixels.

If something is moving in your pictures, try to make one exposure well-balanced for all moving parts (typically the middle exposure of three). This allows the merging algorithm to pick that exposure for those parts to form a consistent ghost-free image.

Don't overdo it! Don't capture (significantly) more highlights than you need for a natural-looking image. If your eyes cannot see a highlight, ie the center of a light bulb, it's generally not a good idea to capture it in full. In indoor architecture photos with bright daylight shining in from the windows, the most natural-looking images have quite some overexposure in them. By not overdoing it you will notice that those two or three exposures are enough.

This software is tuned for natural-looking imagery, so it works best with moderately high dynamic range input. If there are more than 6 stops between the brightest and darkest exposure it will be hard to get a natural-looking result.

Always shoot in raw format (to keep full undistorted dynamic range the camera captures).

Preferably use a tripod and shoot sharp pictures. This makes it easier both for the merger and the tonemapper.

Appendix

Highlight clipping in digital exposures

All current digital cameras use linear sensors that count the amount of light (number of photons) that hits each pixel (or "sensel"). A sensel has a limit to how many photons it can store, and when that limit is reached the rest is thrown away. That is, the sensel has the same linear sensitivity all the way up to saturation, where the signal is clipped straight off. This is a key difference from film, which has a non-linear response where highlights are compressed.

Compressing highlights rather than clipping them straight off leads to nicer-looking images, so out-of-camera JPEGs and most raw converters try to mimic film-like behavior. This means that if the raw image has large blown areas the renderer will strongly brighten, lower the contrast of and desaturate the rest of the image so it blends nicely into the white blown areas. This leads to an image that looks similar to over-exposed film, which generally is a good thing, but it is far from the "truth", ie what has actually been recorded on the sensor. Many raw converters also add "highlight reconstruction/recovery" to this, ie they make an educated guess of what data would have been in the clipped zones and fabricate some highlight detail from it.

Another factor is that digital sensors generally split colors into red, green and blue, and these channels have different sensitivities. This means that when an area is overexposed it is possible that only one or two of the channels are clipped. A naive conversion of the raw data would thus give strong color shifts in blown areas (usually towards pink/magenta). Out-of-camera JPEG rendering is generally conservative and considers all areas with at least one channel blown as completely blown, rendering them as white, while many raw converters use various algorithms to guess the values of the clipped channels and thus display more highlight detail.

Many photographers are not aware of how camera JPEGs and raw converters silently do this type of conversion and therefore believe that digital sensors are non-linear just like film. Most cameras show histograms based on the rendered JPEG rather than the raw data, but many don't realize this and think they are seeing a true raw histogram, so it looks as if the camera sensor is indeed non-linear. Today many of the major raw converters don't even make it possible to render a true-to-the-raw-data image; you may need to turn to more specialized software if you want to see what has actually been recorded.

When you shoot single images it is generally an advantage that film-like behavior is mimicked, as it renders more pleasing images. However, when you shoot an HDR series to merge, the merger will perform best if the over-exposed images have not been "distorted" by the raw converter but rather keep the original contrast and color and let highlights look ugly and clipped, as these will be brought in from a darker exposure anyway. So in this case you should either use a raw converter capable of neutral renders or feed the merger with the raw data directly.

Converting 0-255 pixel level to stops from saturation

If you use an image viewer that displays 8 bit pixel values in a scale from 0-255 and you want to convert such a value to the photographic stops-from-saturation value as used by Lumariver HDR, the formula is as follows: -LOG2((PIXEL_VALUE / 255)^2.2). That is, the gamma conversion needs to be made (2.2 in this example, used by sRGB and Adobe RGB color spaces, note that Prophoto RGB uses 1.8).

A few example conversions with gamma 2.2: 236 becomes 0.25 (default clip level), 72 becomes 4.0 (default noisy level), 38 becomes 6.0 (default underexposed level) and 1 becomes as far as 17.6 stops from saturation, i e a good bit past the noise limit of the best cameras today so it is true that even an 8 bit JPEG can dig into the noise of a digital camera's output.
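The conversion is easy to script; here is a small Python helper that reproduces the example values above (gamma 2.2 as in the formula, 1.8 for Prophoto RGB):

    import math

    def stops_from_saturation(pixel_value, gamma=2.2):
        """Convert an 8 bit pixel value (0-255) to photographic stops below saturation."""
        return -math.log2((pixel_value / 255.0) ** gamma)

    for v in (236, 72, 38, 1):
        print(v, round(stops_from_saturation(v), 2))
    # 236 -> 0.25, 72 -> 4.01, 38 -> 6.04, 1 -> 17.59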

Understanding dynamic range

Camera dynamic range

It is often claimed that modern digital cameras have 12 to 14 stops of dynamic range, and that this is such a large range that shooting HDR series is nowadays obsolete. While it's true that the dynamic range of cameras has improved and you can capture scenes in a single shot that was not possible a few years ago, the "14 stops of dynamic range" refers to engineering dynamic range and not to what is useful to a photographer, which is much less.

In engineering terms dynamic range is defined as the number of stops down from saturation where the noise is as strong as the signal. A photographic image with as much noise as signal is obviously not pleasing to look at. So how many useful stops do we have? This cannot be easily answered, due to a number of factors:

If you do not push the shadows in post-processing at all (ie no tonemapping or tone curve compression) you don't need much dynamic range at all; a camera having 11 stops of engineering dynamic range or even less will do just fine. The reason for this is that dark noisy areas will be kept dark so the noise does not become visible.

However, if you apply tonemapping or otherwise push the shadows, the dark areas will be brightened and noise becomes visible. If you pick the best "14 stop" digital camera of today, one can say that it has about 6-7 "virtually noise free" stops; below that noise starts becoming visible, and below 10 stops the file falls apart.

Another, more subtle aspect than noise is that color reproduction gets worse as noise increases. Some consider this a worse effect of underexposure than the noise itself.

How much you will want to push a specific scene depends on that scene and on your artistic intention. How much noise you will accept is a matter of your personal view on image quality. It's thus hard to say anything in general about when shooting an HDR series is required.

One way to relate to this is to always shoot 2-4 stop bracketed shots when possible in tough scenes, and then decide later in post-processing whether to go with one shot or merge to HDR and tonemap.

It should also be said that 1/3 to 1 stop of dynamic range is typically lost simply by not exposing perfectly "ETTR" (Expose To The Right), ie keeping only as much highlight as you need and putting it at the edge of saturation. Most cameras don't provide the proper tools to make a "perfect" ETTR exposure, ie histograms show in-camera JPEG renderings rather than the real raw histogram.

16 bit TIFF has more DR than you may think!

How much dynamic range is there in a 16 bit TIFF file? The same as in a 16 bit camera raw file? No. It's considerably more.

Let us start with how a camera records data. As photographers we think of light in exponential terms: +1 stop means doubling, +2 stops is doubling twice, that is 4 times the light, and so on. A camera sensor however just records the number of photons in each pixel, and this signal is linear. So if it has 16 bit quantization it records signal from 0 to 65535, and the topmost stop will occupy half of this, 32768 levels, the stop below 16384, then 8192 etc down to only 1 for the bottom 16th stop. Obviously the bottom stops don't contain much information.

While it may seem much smarter to allocate 65536/16 = 4096 levels to each stop instead, it would be pointless because the lower stops are noisy (noise from the electronics etc) and contain fewer photons, so we would not gain any more information. It would also make the hardware more complex to implement.

However, if we instead look at the post-processing step and a 16 bit TIFF file, the 0-65535 range there is gamma-encoded, meaning that the bottom stops get more levels compared to linear encoding and the top stops a bit fewer. With a typical gamma of 2.2 (used by the sRGB and Adobe RGB color spaces) the bottom level sits at -log2((1/65536)^2.2) = 35 stops below saturation! That is, in digital camera dynamic range terms, a 16 bit TIFF file can store the information delivered by a camera capable of 35 stops.

However, in HDR terms we are only interested in noise-free stops, which means that the bottom stop must have a sufficient number of levels to contain a virtually noise-free image, or more specifically, the information we store there should not become significantly noisier than it was in the first place. In tests with actual images, described in the section about DNG dynamic range, a camera file from a "14 stop" camera can be darkened as much as 12 stops without significantly increasing the noise. Considering such a camera has 6-7 "virtually noise free" stops, the total practical range of a gamma-encoded 16 bit TIFF is 18-19 stops.

The possibility to separate the darkest and brightest image in an HDR series by as much as 12 stops is more than needed in most photography use cases, so using floating point TIFF would generally be redundant. However, as discussed in the section about raw converter dynamic range, some software still makes the gamma conversion of 16 bit TIFF files in 16 bit integer space and thus effectively destroys the dynamic range gained by gamma encoding. When using that kind of software it is safer to work with floating point TIFFs.

License notes

Lumariver HDR uses the open-source DCB demosaicing algorithm, which is covered by the following BSD license:

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Lumariver HDR uses the open-source OpenEXR library, which is covered by the following BSD license:

Copyright (c) 2002-2011, Industrial Light & Magic, a division of Lucasfilm Entertainment Company Ltd. All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Lumariver HDR also uses the open-source JPEG library made by the Independent JPEG Group, Adobe's DNG library, and Dave Coffin's DCraw (with restricted code removed).

©2012-2016 Xarepo AB