Understanding Digital Exposure Theory

The +2/3rds EC isn't a specific component of using HWM (highlight-weighted metering); it's just a consequence of how I use it, a compensation setting I often seem to end up with to improve shadow areas.
The way you are using the camera, the EC is simply adjusting the ISO within the invariance range... it's not actually gaining you anything, nor improving shadow information. But it does run the risk of the ISO causing ADC clipping should the metering not reduce the exposure enough. The metering is also adjusting the ISO within the invariance range, and the +EC is not allowing it to do what it is meant to.
If the +EC does not cause clipping it's not a negative either... it's just a risk for no real benefit (unless editing is an issue).
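If it helps, the clipping-risk-for-no-shadow-benefit point can be sketched numerically. Here is a toy model of an ISO-invariant sensor (read noise placed ahead of the gain; every number is made up for illustration):

import numpy as np

rng = np.random.default_rng(0)
ADC_MAX = 4095.0   # assumed 12-bit ADC clipping point
READ_E = 2.0       # assumed read noise (electrons), ahead of the gain (invariant model)

def shoot(mean_photons, gain):
    # shot noise + read noise, then the ISO gain, then the ADC clip
    e = rng.poisson(mean_photons, 100_000) + rng.normal(0.0, READ_E, 100_000)
    return np.clip(e * gain, 0.0, ADC_MAX)

# a deep shadow: +4 stops of ISO in camera vs +4 stops pushed in post
shadow_hi = shoot(25, gain=16.0)
shadow_lo = shoot(25, gain=1.0) * 16.0
print("shadow SNR, high ISO:", shadow_hi.mean() / shadow_hi.std())
print("shadow SNR, pushed  :", shadow_lo.mean() / shadow_lo.std())

# a bright highlight: the same ISO boost runs the ADC out of room
highlight = shoot(1000, gain=16.0)
print("fraction of clipped highlight pixels:", (highlight >= ADC_MAX).mean())

The shadow SNR comes out the same either way; the only thing the extra gain buys is clipped highlights.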
 
In your scenario, which exposure variable is being adjusted? Which one should be, and why? In which situation, with which camera? Is adding the exposure compensation actually beneficial, or is it a potential negative? Are you losing recorded dynamic range, shifting the recorded dynamic range, or extending the recorded dynamic range? And do you have a choice in that?
And most importantly, is there actually any point/potential benefit? If not, then why bother?

I know the answers to all of those questions, and I've done my best to explain them in the pdf. If you know all of the answers then great; and if you don't care, that's fine too.
I'm glad we can agree on that. (y)

It's plain we have nothing else in common so please put me on your ignore list. :)
 
As ISO in the digital domain is actually the photosite or channel amplifier gain, increased ISO can increase the noise, as the amplifier noise figure is usually gain dependent.
I'm not exactly sure what you mean.
Analog ISO increases the sensor-generated signal and noise simultaneously/equally.
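To put that in symbols (my paraphrase, not from the pdf): with sensor-side noise N_s and amplifier noise N_a(g) at gain g, the output SNR = g*S / sqrt((g*N_s)^2 + N_a(g)^2). If N_a is negligible, or simply scales with g, the gain cancels and the SNR is untouched; that's the sense in which analog ISO raises signal and noise together. A gain-dependent noise figure is exactly the case where it doesn't cancel.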
 
The way you are using the camera, the EC is simply adjusting the ISO within the invariance range... it's not actually gaining you anything, nor improving shadow information.

I understand what you say, but I've ended up here empirically: I've taken images without EC, then decided to add some EC, and realised with direct comparison that I get better shadow noise/detail when I add a bit of EC.

But I will have another look and do some more testing.

Thanks,
 
Sounds good to me... I don't have a D500. You would need a static scene with constant lighting to directly compare the results. And I think the software/method of recovery could make a difference. But the only thing that actually matters is what works best for you.
 

Actually, I find it difficult to confidently draw any conclusions on what works! Lots of variables at play with these things.
 
This all sounds a bit imponderable to me. But not quite - I'm slightly hooked.

What the fcuk is HWM though, Graham (I asked myself) - but I just found the answer - and it's n/a, as they say, on my camera. But I do the same thing in a rough and ready way by visually assessing the scene and relating how I see the light to my experience of how my film / sensor are able to record it.

I remain puzzled about all the arguments for keeping the ISO setting low and effectively underexposing, since the darkening will have to be recovered in the raw processing software if it hasn't been in camera. So what gives?

And yes to
I obviously want to have more insight into "how" the camera sees the world so I can better match what I "think" I want, with what the camera is going to give me.


I'm tempted to reassert my standard motto that you peg your exposure setting to the highlights that you don't want blown - pegging meaning that those highlights are your reference for the whole exposure - not that you expose to make them a mid-tone!

How I'd do this for a moving subject, though, is beyond my ken.
 

Doing this with a moving subject (at least roughly) was the point of me using HWM. Assigning it to the Pv function key means I can easily press the BBF (back-button focus) button, the Pv button (which is now the HWM button), and the release button all at the same time and instantly bias the exposure to the highlights. It's still matrix metering, but with the algorithm changed from a mid-tone bias to a highlight bias.

I should let Stephen answer the "effectively underexposing" question. But my general understanding is that for cameras with ISO-invariant sensors, noise quality is the same throughout the ISO-invariant range, and it doesn't matter whether you brighten your image in-camera (increasing ISO) or in processing.

So there is no penalty in "underexposing" but there is a gain in making it less likely you blow out the highlights.

If you are "hooked", I found this useful when it came out. and it adds background to Stephens's paper.

https://photographylife.com/iso-invariance-explained#does-a-high-iso-actually-decrease-noise
 
I should let Stephen answer the "effectively underexposing" question. But my general understanding is that for cameras with ISO-invariant sensors, noise quality is the same throughout the ISO-invariant range, and it doesn't matter whether you brighten your image in-camera (increasing ISO) or in processing.
Exactly right...
I remain puzzled about all the arguments for keeping the ISO setting low and effectively underexposing, since the darkening will have to be recovered in the raw processing software if it hasn't been in camera. So what gives?
This very much depends on the camera being used... with a non-invariant camera raising the ISO increases the SNR and results in a less noisy image overall. The image will look better than one taken at a lower ISO and recovered in post.
With an ISO-invariant camera, raising the ISO does not increase the SNR; the relative noise level is scene dependent (and depends on how much light is recorded). These images will look essentially identical when recovered.

I'm tempted to reassert my standard motto that you peg your exposure setting to the highlights that you don't want blown - pegging meaning that those highlights are your reference for the whole exposure
This is fine, but it usually means "underexposing" the rest of the scene/subject... it is pretty important that you do not shift the exposure left (save highlights) by reducing the sensor exposure (SS/Ap). It should be done by reducing the ISO; and what that means for the rest of the image/result depends on the ISO invariance characteristics of the camera. With an invariant camera it increases the recorded DR; with a non-invariant camera it is a shift of recorded dynamic range... shadow details/information are lost to noise in order to save the highlight detail/information.
Most cameras can be set to adjust the ISO first (auto ISO); but there are some modes where it won't (S/Tv), and it's more difficult/slow in full manual (manual ISO).

If you save the highlights by reducing sensor exposure (SS/Ap) with any camera it causes a shift of the recorded dynamic range. But this is typically worse than lowering ISO with a non-invariant camera because it decreases the sensor generated SNR.
Because shifting the recorded DR is a loss of information in the darker tones, it really should be weighed against the actual importance/potential benefit overall.
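To restate that with toy numbers (a back-of-envelope sketch of the paragraphs above, not from the pdf):

# Toy bookkeeping: a sensor recording ~12 stops at base ISO, and we want
# to save 2 stops of highlights. All numbers invented for illustration.
base_dr = 12

# 1) lower the ISO on an invariant camera: the top extends, the noise
#    floor stays put -> the recorded DR grows
print("invariant, -2 stops ISO    :", base_dr + 2, "stops")

# 2) lower the ISO on a non-invariant camera: the top extends but the
#    effective noise floor rises by about as much -> a shift, not a gain
print("non-invariant, -2 stops ISO:", base_dr, "stops (shifted up)")

# 3) cut sensor exposure (SS/Ap) on any camera: every tone collects
#    2 stops less light -> shifted, and the shot-noise SNR drops too
print("-2 stops of SS/Ap          :", base_dr, "stops (shifted, lower SNR)")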
 
If you save the highlights by reducing sensor exposure (SS/Ap) with any camera it causes a shift of the recorded dynamic range.
Because shifting the recorded DR is a loss of information in the darker tones, it really should be weighed against the actual importance/potential benefit overall.
Thanks Stephen - material to digest. But I do find that highlight areas are so often key to the success of an image, maybe especially in colour work. Maybe I need an ISO-invariant camera!
 
I updated the pdf today with a small addition to the last section on pixel size vs sensor size:

"Just as it is the sensor size that matters in terms of the amount of light received/collected, it is also the physical output size that determines how it looks. I.e. how much enlargement of the sensor area that must occur. The M4/3 sensor will have to be enlarged 4x as much in area compared to the FF 35mm sensor in order to create the same output image size. Pixel dimensions only matter in terms of digital display; and what that actually means is dependent on the display device/resolution used."
I simply can't work out how that works. When printing I have a number of pixels (it's the number of pixels I enter, not a size in inches or cm) that turns into a number of dots. The size I'm able to print is determined by the number of DPI I require, so I can't see how the physical size of any 24MP sensor is relevant to the output, since it's merely the size of a file that should be the same for any 24MP sensor regardless of its physical size.
 
The size I'm able to print is determined by the number of DPI I require, so I can't see how the physical size of any 24MP sensor is relevant to the output, since it's merely the size of a file that should be the same for any 24MP sensor regardless of its physical size.
The PPI is mostly irrelevant. If you take the m4/3 image and print it at the same physical size/PPI as the FF image it will look about 2 stops worse; because it contains 2 stops less light/information and the sensor area has been enlarged 4x as much.
Just think of it in terms of film... if you want to make a big print, starting with a bigger negative is much better. There is no concept of PPI with film, but the basic principles are exactly the same.

PPI with printing is only really relevant when you get to the point where the printer is printing the individual pixel's square shape, and you view it at a distance where you can see that.
It's also relevant to the detail/resolution of the print, where if they are both actually 24MP recorded they are the same. But smaller sensors with smaller sensels/pixels make it much harder to actually record the same image resolution... a smaller sensor almost always means less is actually recorded, even if it has a higher MP count.
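For concreteness, the enlargement arithmetic with nominal sensor dimensions (my own worked numbers):

# Linear enlargement needed to reach a ~45cm-wide print from each format
ff = (36.0, 24.0)      # nominal FF dimensions, mm
m43 = (17.3, 13.0)     # nominal 4/3 dimensions, mm
print_width = 450.0    # mm

print("FF linear enlargement :", print_width / ff[0])    # ~12.5x
print("4/3 linear enlargement:", print_width / m43[0])   # ~26x (aspect ratios differ slightly)
print("area ratio, FF vs 4/3 :", (ff[0] * ff[1]) / (m43[0] * m43[1]))  # ~3.8x, the "4x" above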
 
The PPI is mostly irrelevant. If you take the m4/3 image and print it at the same physical size/PPI as the FF image it will look about 2 stops worse; because it contains 2 stops less light/information and the sensor area has been enlarged 4x as much.
I understand the side issue in relation to recording the signal, both in terms of signal/noise (determining the amount of amplification you can apply before the noise becomes unbearable) and the relation between lens and pixel pitch in regards to system resolution. But after the signal has left the sensor there is no physical size, only 1's and 0's representing each photosite/pixel on the sensor, and they do not carry any metric info.
Just think of it in terms of film... if you want to make a big print, starting with a bigger negative is much better. There is no concept of PPI with film, but the basic principles are exactly the same.
Yes, but with film you do an optical enlargement of a physical negative, where besides the number of photosites (silver grains), which is directly proportional to the size/area of the negative and pretty important to the resolving power of the film, you also have to deal with the space between the grains, which when over-enlarged (especially with 24x36mm) leads to the halftone effect, and with the resolving power of the enlarging lens and the paper.
PPI with printing is only really relevant when you get to the point where the printer is printing the individual pixel's square shape, and you view it at a distance where you can see that.
It's also relevant to the detail/resolution of the print, where if they are both actually 24MP recorded they are the same. But smaller sensors with smaller sensels/pixels make it much harder to actually record the same image resolution... a smaller sensor almost always means less is actually recorded, even if it has a higher MP count.
But isn't that due to the lens/sensor relationship? I don't see a relation between M4/3 and 24x36 sensors where you get double the resolution from the 24x36mm sensor.
To me the sensor size is relevant in regards to the signal entering it, and the problems occurring are due to low signal/noise and issues with the resolving power of the lenses.
Man, I'm stupid, I don't get it.
 
But after the signal has left the sensor there is no physical size, only 1's and 0's representing each photosite/pixel on the sensor, and they do not carry any metric info.
Yes, but those 1's and 0's represent the physical area they came from... they contain the light/data from that original area, and that light/data will be used to recreate the same physical area in the image.

Yes, but with film you do an optical enlargement of a physical negative, where besides the number of photosites (silver grains), which is directly proportional to the size/area of the negative...
I think you are getting stuck on the dpi/ppi of inkjet printing. Instead consider sending a digital file to have a photographic print made... that system will use lasers or LEDs with lenses to project the image onto the photographic paper. Or consider using a digital projector to display the image, also using an optical system to reproduce the image. Both are very much like the film enlarger, and the original sensor size has the same impact/relevance. You can consider the size and spacing of the pixels as being equivalent to the size/spacing of the halide crystals in the film negative (just much more regular). A digital file is still very much a "digital negative."

To me the sensor size is relevant in regards to the signal entering it, and the problems occurring are due to low signal/noise and issues with the resolving power of the lenses.
Man, I'm stupid, I don't get it.
I think you do get it... there's just one small part that isn't clicking.
If you take the same 24MP low-light image with a 4/3 camera and a FF camera (same settings, different FL) and then display them both at 100% on your monitor, the m4/3 image will look worse: noisier, because it got less light (the lack of equivalence). Right?
But how can you make the 4/3 image look better on your monitor? By displaying it smaller, right? And when it reaches the 4/3 relative size (25% of the area, i.e. 50% linear magnification), the relative quality of the two images will be equal (i.e. noise), just at different sizes... because they originated from different sizes. Or similarly, print it at a smaller size.

There are certainly other things that can play into it... different lenses between the systems, AA filter or not, etc. But the primary factor is the sensor size, and larger wins because it actually gets more light. And when you record the same image/composition the details are also projected larger onto the larger sensor, which makes them easier for the sensor to resolve.
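That display-size argument is easy to check numerically; a quick shot-noise-only sketch (photon counts invented, 2x2 averaging standing in for viewing at half the linear size):

import numpy as np

rng = np.random.default_rng(1)
snr = lambda a: a.mean() / a.std()

ff_pix = rng.poisson(400.0, (1000, 1000)).astype(float)    # FF pixels: 4x the photons
m43_pix = rng.poisson(100.0, (1000, 1000)).astype(float)   # same scene, 1/4 the light

print("per-pixel SNR, FF :", snr(ff_pix))     # ~20
print("per-pixel SNR, 4/3:", snr(m43_pix))    # ~10 -> looks worse at 100%

# view the 4/3 image at half the linear size (a quarter of the area):
# each output pixel now averages a 2x2 block, i.e. collects 4x the photons
m43_small = m43_pix.reshape(500, 2, 500, 2).mean(axis=(1, 3))
print("downsized 4/3 SNR :", snr(m43_small))  # ~20 again, matching the FF image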
 
If I sat down and read all that in the article before taking a photo I would have fallen asleep or the subject matter would be miles away. :sleep::D
 
How long did it take you to learn manual exposure theory? If you did, do you think it's helpful/important? And if so, why wouldn't you want to learn better exposure theory?
 

I don't do theories. In theory man could reach the moon in 5 minutes, in practice that is impossible.
 
Yes, but those 1's and 0's represent the physical area they came from... they contain the light/data from that original area, and that light/data will be used to recreate the same physical area in the image.
So what you're really saying is that due to the size of the sensor the files from m4/3 contain 4x as much noise as the 24x36 ones?
 

Hmm, technology never stands still and sensors benefit from technological advances over time, which makes it possible to gather light more efficiently. In deciding to change from my Canon FF 1DX-2 to an Olympus E-M1X I naturally did some research about FF vs m4/3. Regarding sensor resolution, it should be mentioned that the E-M1X has no anti-alias filter installed, so that it can capture all the detail its sensor resolves. Also, pixel density plays a part.

However, all this quickly becomes somewhat academic (not that academic papers should not be written!) when shooting mirrorless with the histogram displayed in the viewfinder and Topaz DeNoise AI, which intelligently cleans up the image in the first stage of post-processing.

There are pros and cons to both FF and m4/3 systems and we each have individually different photographic needs. There is no such thing as a perfect camera.
 
The two main reasons why larger sensors will always deliver higher image quality.

Firstly, comparing full frame against M4/3 (as the maths is easier), full frame has 4x the sensor area. That equates to two stops of extra light/photons; or put another way, exposing full frame at ISO 400 delivers the same total light/photons as exposing M4/3 at ISO 100, since two stops less exposure over four times the area nets out.
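In numbers (nominal sensor areas; the ISO figures just express the two-stop difference):

import math
area_ff = 36.0 * 24.0    # mm^2, nominal
area_m43 = 17.3 * 13.0   # mm^2, nominal
ratio = area_ff / area_m43
print(ratio)             # ~3.84x the total light at the same exposure
print(math.log2(ratio))  # ~1.9 stops, the ISO 400 vs ISO 100 gap above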

The second factor is how lens sharpness reduces as resolution increases, and M4/3 demands double the lens resolution to deliver the same level of detail as full frame, regardless of pixel count. The more you ask, the harder it gets - a bit like a car that accelerates from 0-60 in 6 seconds but takes far longer to go from 60-120. What we call 'sharpness' has two components: resolution and contrast. The relationship between resolution (the fineness of detail) and contrast (how clearly those details are shown) is measured by MTF lens testing (Modulation Transfer Function - those wiggly graphs that manufacturers publish).

Have a look here https://www.lensrentals.com/blog/2019/09/mtf-tests-of-the-sigma-35mm-f1-2-dg-art/ and scroll down to the wiggly graph where it's immediately obvious how contrast drops as resolution goes up. Compare the brown line at 20-lpmm with light blue at 40-lpmm, which is the difference in lens resolution demands between full frame and M4/3 sensors - contrast drops about 20%. That's entirely typical and is a significant reduction given that the difference between a good lens and a really good lens is typically 5-10%.
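The doubling is just the sensor-height ratio; a quick sketch (the target detail figure is arbitrary):

# Required on-sensor lens resolution for the same detail in the final image:
# (line pairs per picture height) / (sensor height in mm) = lp/mm demanded
target_lpph = 1300.0   # arbitrary target, line pairs per picture height
for name, height_mm in [("FF", 24.0), ("M4/3", 13.0)]:
    print(name, round(target_lpph / height_mm), "lp/mm")
# -> ~54 lp/mm vs ~100 lp/mm: the same ~2x demand as the 20-lpmm vs
#    40-lpmm comparison on the MTF chart above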
 
So what you're really saying is that due to the size of the sensor the files from m4/3 contain 4x as much noise as the 24x36 ones?
Yes, but IMO it is better stated that the 24x36 contains 4x as much light/data.
It's the inverse square law that causes this. And it is the use of a longer FL at the same F# (a larger entrance pupil/aperture diameter) that allows it.

The noise component is more scene dependent (photon shot noise).
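For example, with focal lengths chosen for the same field of view (50mm FF vs 25mm M4/3, f/2.8 on both; my numbers):

# Entrance pupil diameter = focal length / f-number
fl_ff, fl_m43, f_number = 50.0, 25.0, 2.8
d_ff = fl_ff / f_number     # ~17.9 mm
d_m43 = fl_m43 / f_number   # ~8.9 mm
print(d_ff, d_m43)
print((d_ff / d_m43) ** 2)  # 4.0 -> 4x the pupil area, the ~2 stops of total light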
 
Hmm, technology never stands still and sensors benefit from technological advances over time, which makes it possible to gather light more efficiently.
The problem is that there are very few gains to be made anymore with CMOS technology... fill efficiency has been over 95% for the last decade. Even the move to BSI didn't provide very significant gains (but it all counts).

Regarding sensor resolution, it should be mentioned that the E-M1X has no anti-alias filter installed, so that it can capture all the detail its sensor resolves.
It can capture all the details the lens resolves... generally much less than the sensor could resolve. It requires a theoretically perfect lens used at f/4 to resolve 20MP on a 4/3 sensor (RGB); that same lens/Ap could resolve 74MP on a FF sensor... it's pretty simple: the larger format has more room to record more dots/details/MP.

But we don't have perfect lenses, so it usually requires a very good lens used at an even wider aperture... if that's even possible; and it's often not. Good thing we don't often actually need anything near 20MP of recorded resolution.
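Taking the f/4 figure above as a baseline, the FF number is just area scaling (the 20MP premise is from the post; the arithmetic is mine):

# At a fixed f-number the diffraction (Airy) spot size is fixed, so the
# diffraction-limited pixel count scales with sensor area
mp_43_at_f4 = 20.0                      # baseline from the post above
area_43 = 17.3 * 13.0                   # mm^2, nominal
area_ff = 36.0 * 24.0                   # mm^2, nominal
print(mp_43_at_f4 * area_ff / area_43)  # ~77 MP (the post says 74; the exact
                                        # value depends on the sensor dimensions used)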
 
I have updated the PDF with a few pages talking about exposure settings, the marginal standard for image sharpness (standard CoC), and the standard for VR ratings (an even weaker standard).
 