ISO - Waste of time when shooting RAW?

With all the arguments going on, I'm wondering: as most cameras these days have Auto ISO, if your skill level is so low that you cannot expose properly, why don't you just use that option?
Auto-bracketing comes to mind too.
 
Basically, there are no downsides to ISO-invariance, but significant upsides (in extended dynamic range) if you care to exploit them.
I think there is a common conflict/confusion between DR in an image and the DR of the sensor...
The max DR of an image always goes from white to black. If the image is a print it may have a 100:1 contrast ratio max. If it is on a display screen it may be 1000:1 max. And on the camera sensor it may be 16,384:1.

That last one might look like an odd number, but that's what 14 stops of DR is in terms of contrast (min/max). And 1000:1 is very close to 10 stops of DR (1024:1). And 100:1 is also close to the static capacity of the human eye which is ~ 7 stops, 128:1 (fixed pupil size/photopic vision).
Remember the histogram? 5 stops discernible, 7 stops total.
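
For anyone who wants to check the arithmetic, here it is as a couple of lines of Python (nothing camera-specific, just 2 raised to the number of stops):

```python
# N stops of dynamic range corresponds to a 2**N : 1 contrast ratio.
for label, stops in [("print", 7), ("display", 10), ("sensor", 14)]:
    print(f"{label:>7}: {stops} stops = {2**stops:,}:1")
# print:    7 stops = 128:1
# display: 10 stops = 1,024:1
# sensor:  14 stops = 16,384:1
```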

So I have a sensor that can record 14 stops but I'm displaying it on a screen with 10 stops of capability... By default the monitor simply clips the end(s) that are out of its range, but the data is still there ("recoverable"). I can do whatever I want with those 14 stops... I can compress them (selective recovery). I can discard what doesn't fit (default). I can shift them to fit a different portion (exposure/global adjustments). Or I can choose not to record them initially.
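
Here's a toy sketch of those three options in Python - purely illustrative, not how any real raw converter works; the 'stops' are just indices:

```python
import numpy as np

sensor_stops = np.arange(14)     # 14 recorded stops, 0 = darkest
display = 10                     # stops the output device can show

clipped    = sensor_stops[sensor_stops < display]       # default: excess is clipped, not erased
shifted    = sensor_stops[4:] - 4                        # global exposure shift: show the top 10 instead
compressed = sensor_stops * (display - 1) / (len(sensor_stops) - 1)  # tone-compress all 14 into 10

print(clipped)      # stops 0-9 shown, 10-13 out of range (still recoverable from the raw)
print(shifted)      # stops 4-13 shown, 0-3 now out of range
print(compressed)   # every stop kept, but with reduced spacing between tones
```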

Now, some are probably thinking that there is a huge difference between 7 stops in a print and the 14 stops the sensor is capable of... but there really isn't. Generally, the usable DR of a camera is approximately/at least 2 stops less than the technical DR (as reported by DXO). What is the real/significant difference between absolute black, damn near absolute black, and really really REALLY dark blue in an image (or similarly on the bright end)? And what difference does it make if the scene being photographed only has 4 stops of DR in it to start with (even lighting/reflectivity)?

Remember that exposure is always logarithmic, while the output scale is fixed: it always goes from 0-255 (RGB) or 0-100 (LAB), and I can't change the step/increment... i.e. I can't stuff more into the middle of the scale. The only way to truly extend DR is to keep dividing the minimum and moving 0 farther left.
 
Basically, there are no downsides to ISO-invariance, but significant upsides (in extended dynamic range) if you care to exploit them.

I'm not saying there is a downside to the technology; clearly it is great. I'm saying there is a downside to the technique being suggested in this thread, and in the other forum thread that was linked to, of always shooting at base ISO in every situation.

I thought I made that very clear, perhaps not.
 
I use Auto ISO almost exclusively...
damn, I suck....
Me too. I figure that ISO is the setting that has the least effect on the image I am trying to capture. By putting it in Auto, I can concentrate on shutter-speed and aperture, which do make a difference to the image. Sure, in low light, to minimise grain, I'll be conscious of the ISO and not use a SS higher than I need, nor an aperture smaller than I need. (That the RAW image data are independent of the ISO setting is an interesting concept.)
 
(That the RAW image data are independent of the ISO setting is an interesting concept.)
It's a wrong concept... which started this whole thing. Raw data is modified by the ISO setting used.

It's an interesting concept in that it *could be* separate from the raw data for a truly invariant sensor, but I'm not sure if/how that could work... the problem is that sensors do not perform completely linearly/smoothly in terms of sensitivity/response to light, so a simple multiplier (exposure factor) can't work. The software used to compile/convert the raw data would have to be highly tailored to that specific sensor design, and I have no idea how individual variations (tolerances) would affect the process. I suppose you could limit the DR/capacity of the sensor to the more linear (darker) portion of its response, but that would be kind of counterproductive for photography.
 
People who think it's okay to ignore ISO should be burnt at the stake.

Would appreciate if someone could recommend a suitable aperture/shutter/ISO combo for bonfire photography. Assuming, of course, that ISO is important.
 
This is not nearly as simple as you would like...
First, what we are actually concerned with is light sensitivity (DR) and light gathered by the sensor, not the output. And at the pixel level, at best I would have 1 R/G/B pixel each for a 3 pixel sample (Bayer type). Secondly, we don't know where the minimum sensitivity/usability is. From those numbers the only thing I can tell is that none of them are at the minimum... so I can't tell you how far the exposure can be reduced before some register as zero (black).

And "black" (or white) in the output does *NOT* mean that no usable data was recorded there... it only means that we cannot discern/see it in the output... that's what "recoverability" means. Another way to think of it is; how many times can you divide a number in half before reaching zero? It's pretty much infinite depending on how small of a unit/result you can use...
Ok, let's skip any actual numbers. The fact remains that if you underexpose by 5 stops you will lose 5 stops of detail at the bottom end. Is this correct?
Some may say "who cares?", but that is important, and it is a difference when compared to a correct exposure at a higher ISO.
 
This is not nearly as simple as you would like...
Can you recover black, i.e. 0,0,0?
 
The fact remains that if you underexpose by 5 stops you will lose 5 stops of detail at the bottom end. Is this correct?
No, not necessarily...
Perhaps there is nothing in the scene that is very dark and there are a lot of bright areas that exceed the sensor's/pixel's capacity - maybe a boating scene in bright light with water reflections and white boats/sails/etc. Underexposing that scene will extend the DR recorded.
Perhaps a scene does have dark areas as well as overly bright areas that exceed capacity, but the dark areas are not so dark that the recorded signal is lost in the system noise. Again, underexposing that scene will extend the DR recorded.
Or perhaps the scene has both light and dark areas that exceed capacity/capability. Underexposing that scene will shift the DR.

You will reduce the recorded DR *if* no pixels saturate to FWC... you lose it from the top, not the bottom (i.e. you make brighter areas darker). Yes, this does mean dark areas of the scene may drop below the minimum, but zero is still zero. The loss of DR is because nothing recorded at 255.

Do not confuse the 5-12 stops of DR the output (print/display) is capable of with the 12-14 stops of DR the sensor may be capable of. And realize that even if your monitor is capable of 12 stops, your processing/raw conversion may be reducing it further (i.e. high contrast images). Just because you are not seeing it doesn't mean it wasn't recorded.

Most of this probably sounds like an endorsement for underexposing/using invariance. But it really isn't.
The problem with recording a lack of light is not only a possible reduction of DR. It is also a lack of information and accuracy (color/etc). Let's say it takes 10 generated electrons to be distinguishable. In a lack-of-light scenario, when some pixels reach 10 the bottom of the recorded DR is established, but due to photon shot noise others will still be below 10.
And if we follow that out, every stop of gradation requires 2x as much light as the previous (10, 20, 40, 80, 160). It's still a difference of 1 stop, but the sampling frequency/accuracy within the higher stops is greater.
Most of this doesn't matter until you start editing/pushing colors/exposures around. If you take a really dark image (i.e. black on black) and properly expose it so that it records in the dark end, you record very little data, with very little separating the tones (plus shot noise). If you attempt to edit such an image to any significant degree you will quickly run into banding and noise issues that cannot be resolved, due to this lack of data/accuracy. And have you noticed the color noise that exists in dark areas? That's because a pixel receives enough light to register while surrounding ones don't, so the pixel outputs its native color or an errant color.
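
To put rough numbers on that, here's a small simulation (the 10-electron floor is the hypothetical above, and numpy's Poisson draw stands in for shot noise):

```python
import numpy as np
rng = np.random.default_rng(0)

# Each stop doubles the mean electron count; shot noise grows only as
# sqrt(mean), so the relative accuracy (SNR) improves stop by stop.
for mean_e in (10, 20, 40, 80, 160):
    counts = rng.poisson(mean_e, 100_000)
    print(f"mean {mean_e:3d} e-: SNR ~ {mean_e / counts.std():.1f}, "
          f"{(counts < 10).mean():.0%} of pixels still below the 10 e- floor")
```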

This might sound like an endorsement for ETTR... and technically it is. But in practice many/most situations don't allow for such refinement, and the risk of causing/extending high-end clipping is greater... and if a scene isn't heavily biased to the dark end, there isn't a lot of benefit anyway. My preference is to preserve highlights and ignore ETTR *unless* the scene is very dark.
What's funny is that this is what automated exposure will do for you if you let it: if it is a very bright scene it will underexpose (save highlights), and if a scene is particularly dark it will overexpose (ETTR). But a "correct" manual exposure will happily clip either end...

Can you recover black i.e. 0,0,0 ?

0,0,0 in the raw file, no.
0,0,0 anywhere else, probably (w/ issues noted above).
 
No, not necessarily...

0,0,0 in the raw file, no.
0,0,0 anywhere else, probably (w/ issues noted above).

I think I'll read "No, not necessarily" as "Yes, but not always". Do correct me if that isn't right.

Since we are talking about post processing to recover data, it is a RAW digital file, hence 0,0,0 multiplied by anything is 0,0,0.

I'll pose a real life scenario for you to contemplate. I photograph fungi. I also do time lapse of fungi. Some of these fungi glow in the dark. I have found that the best time lapse of luminous fungi also shows a little of the background. Since the light from the fungus is rarely enough to light the background, I add a little light, but not much, as I don't want it to dominate the light of the fungus. So I have a (relatively) bright fungus plus a dark background. I am using a Sony A7R3.
Do I
A) Use ISO800 at f18 at 60 sec exposure, which is aimed at -1 to -2 exposure for the fungus when it is at its brightest
B) Use ISO100 at f18 and 60 sec exposure, which is aimed at -4 to -5 exposure for the fungus when it is at its brightest
OR - it doesn't make any difference.

With time lapse you delve into that grey area of predicting the future, which you are always liable to get wrong, so you have to allow the maximum leeway for correcting mis-predictions. I want the maximum possible options for bringing the background up so that it looks realistic enough for anyone who wants to use it in a film.

I think that I would be very silly to use option B as I would lose dark detail. Would you agree?
 
I'll pose a real life scenario for you to contemplate... I think that I would be very silly to use option B as I would lose dark detail. Would you agree?


Steve (please bear with me on this :)) - if you take a 'correctly' exposed image of a black piece of card at ISO 1200, with the card filling the frame, does the picture show a completely black piece of card or does it have 'noise'?
 
Steve (please bear with me on this :)) - if you take a 'correctly' exposed image of a black piece of card at ISO 1200, with the card filling the frame, does the picture show a completely black piece of card or does it have 'noise'?
It has noise
 
I'll pose a real life scenario for you to contemplate. I photograph fungi. I also do time lapse of fungi...

You sound like a really fungi to be around.
 
I think I'll read "No, not necessarily" as "Yes, but not always". Do correct me if that isn't right.
Another way of saying the same thing.
Do I
A) Use ISO800 at f18 at 60 sec exposure, which is aimed at -1 to -2 exposure for the fungus when it is at its brightest
B) Use ISO100 at f18 and 60 sec exposure, which is aimed at -4 to -5 exposure for the fungus when it is at its brightest
OR - it doesn't make any difference.
Depends entirely on the characteristics of the sensor... If it is ISO invariant it doesn't matter: if exposed at -2 you can recover 2 stops, and if exposed at -5 you can recover 5. The only valid variable in the ISO invariant scenario is how much light is actually recorded. In a non-invariant scenario, using a higher ISO will produce better results.

It is important to note that a sensor may be ISO invariant at a given light level while being less so at another light level.
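
For what it's worth, here's a toy model of that distinction in Python. Every number is invented: 'upstream' noise is added before the analogue gain, 'downstream' noise after it:

```python
import numpy as np
rng = np.random.default_rng(1)

def expose(signal_e, gain, push, upstream=3.0, downstream=4.0, n=200_000):
    """Return simulated output values: photon shot noise and upstream read
    noise go through the analogue gain (ISO); downstream noise is added
    after it; 'push' is the digital multiply done in post."""
    e = rng.poisson(signal_e, n) + rng.normal(0, upstream, n)
    return (e * gain + rng.normal(0, downstream, n)) * push

in_camera = expose(signal_e=20, gain=8, push=1)   # higher ISO in camera
in_post   = expose(signal_e=20, gain=1, push=8)   # same light at base ISO, pushed in post

print(in_camera.std(), in_post.std())  # the pushed file is noisier when downstream > 0
# Set downstream=0 and the two match: that is what "ISO invariant" means here.
```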
 
Another way of saying the same thing.

Thank you. You've managed to admit once that ISO invariance doesn't necessarily mean that you lose no data when shooting underexposed at a lower ISO.

In my example your answer is wrong. It is A that is best.
To use your own words "Or perhaps the scene has both light and dark areas that exceed capacity/capability. Underexposing that scene will shift the DR."
My example has light areas and some very dark areas, including large areas that are black even when not underexposed. I do this frequently and it is very apparent, even with a Sony A7R2, that a lot of dark detail is lost when underexposing an image. It is not that it has more or less noise; it just isn't there. If I use a Canon 5D2 then I also lose data AND the data that I can get is noisier, as predicted, so I prefer the A7R2 for this type of work.

Here's an example of a typical photo. How much data do you think you would lose by underexposing this?
[attached image: 2010_2013148-helicon-X2.jpg]
 
It has noise

OK Steve, thanks for the reply. This scene which we have photographed should be completely black but it isn't - all the pixel data should have an RGB value of 0,0,0, but the pixels that are not (the noise) have a different RGB value, so there must be an 'error' in the system that has recorded this data; do you agree?
 
I've now spent countless hours reading up on this matter and the consensus on ISO invariant sensors is as follows:

(1) With a scene in front of the camera, photons are emitted from that scene; light areas of the scene emit a higher volume of photons with less randomness, while dark areas emit far fewer photons in a much more random pattern. It is this 'randomness' that causes photon noise, and it is why dark areas of an image appear 'more noisy'. This cannot be changed and there is nothing the sensor can do about it.

(2) The 'front end noise', or as you call it the read noise, is the noise first induced into the system by the sensor and is caused by numerous things, some of which are thermal noise from the sensor (dark current) and the ability of the camera to 'reset' the sensor pixels. This noise is induced as photons hit the sensor's photosites and induce a charge; one way of reducing 'dark current' noise is to cool the sensor as it heats up with the charge build-up, or to average the dark current noise over the exposure time and subtract it before the analogue conversion.

(3) All the articles I have read state that at this point the charge from the sensor goes through an analogue conversion which produces a voltage; this voltage is then subject to analogue amplification (gain) which depends directly on the ISO you have set (gain control). This information is then 'burnt' into the file and cannot be changed at a later date. When you set a higher ISO value you are telling the analogue amplifier to add more 'gain', which is the precursor to capturing a 'brighter' image.

(4) In respect of all of the above, sensor design/performance hasn't changed that much in recent years, and noise levels haven't improved drastically up to the point of the analogue conversion/amplification.

(5) Any noise introduced into the system after this point (back end or write) does not go through any analogue amplification, which is good.

(6) The last stage of the process is the analogue to digital conversion, where the back end (write) noise arises. This conversion takes the voltage produced by the analogue amplification and converts it into a digital format to be displayed (a list of binary numbers containing colour and brightness information from each pixel on the sensor); the amount of information stored at this point is the bit depth. This is where there have been major recent advances: the back end (write) noise is now so low that it almost doesn't matter where you move the brightness in the editing software (which simulates quite closely the digital conversion that takes place in the camera), because the noise added at this last stage is so minimal it can 'almost' be neglected. It is this low back end noise that makes the sensor 'ISO invariant'; a true ISO invariant sensor has NO noise added in the analogue to digital conversion.

This then gives us the following workflow:

(Photon induced Noise + Front End or read noise) > Analogue Conversion (Voltage obtained from electron charge CCD) > Analogue Amplification (Induced by ISO set on camera) > (Back end write noise induced by Digital conversion) = Output RAW file.
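
That chain can be sketched as a simple noise budget (RMS sums, made-up illustrative units - not any real camera's figures):

```python
import math

def output_noise(photon, front_end, analog_gain, back_end, digital_push=1.0):
    """Everything before the analogue amp is multiplied by the ISO gain;
    back end (write) noise is added after it; a digital push in post then
    multiplies whatever is already in the file."""
    upstream = math.hypot(photon, front_end)              # RMS sum of upstream noise
    total = math.hypot(upstream * analog_gain, back_end)  # add back end noise after the gain
    return total * digital_push

# Same final brightness two ways (8x total amplification):
print(output_noise(5, 3, analog_gain=8, back_end=4))                   # high ISO in camera
print(output_noise(5, 3, analog_gain=1, back_end=4, digital_push=8))   # base ISO, pushed in post
# With back_end=0 both are equal: the ISO invariant case described above.
```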

From the above, if a high ISO value is used (increased analogue amplification) we are 'multiplying' the noise already induced into the system (photon noise + front end read noise + analogue conversion noise), and at a later stage this cannot be effectively lowered in post production. If we originally use a low ISO value then this multiplication in the analogue amplification is lower before we go into the digital conversion.

In the instance of the photon noise, we know that the randomness of photons emitted from dark objects already causes noise in the dark areas of our images, so we would want to keep this multiplication as low as possible to keep the noise in the dark areas of the photograph as low as possible.

With the ISO invariant sensor we can then use the digital amplification (the slider in Lightroom) to selectively brighten the image, keeping the noise in the shadows as low as possible while exposing 'correctly' for our highlights; as the back end/write noise is so low in an ISO invariant sensor, the noise produced by doing this is almost negligible. The resultant image will have dark/shadow areas with the minimum noise possible, and the highlights will also be exposed correctly with an imperceptible increase in noise in these areas (the photon noise from brighter areas of the scene is at its lowest, since the 'randomness' is lower).

Going back to the microphone analogy - if the gain is turned way down and someone speaks quietly into the mic we can hardly hear them; if you turn the gain up (higher ISO) you can then hear them, but everything else (background sounds) is amplified as well. If you could keep the gain on the mic low (low ISO) so that the background stayed quiet, but separate out their voice and digitally amplify just that (ISO invariance), it would be the ideal solution.
You can't have a practically realisable, zero-noise ADC. It always adds quantisation noise (the difference between the incoming voltage and the digital level representing it - you'd need infinite bit depth to truly represent the input).

Similarly, analogue amps amplify the signal and noise and add more noise. The amount added is the Noise Figure.
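
Quantisation noise is easy to demonstrate: digitise a random full-scale signal at a few bit depths and measure the RMS error (theory predicts step/sqrt(12)):

```python
import numpy as np
rng = np.random.default_rng(2)

signal = rng.uniform(0.0, 1.0, 1_000_000)    # normalised analogue input
for bits in (8, 12, 14):
    step = 1.0 / 2**bits                     # size of one digital level
    error = np.round(signal / step) * step - signal
    print(f"{bits:2d}-bit ADC: rms quantisation error {error.std():.2e} "
          f"(theory {step / 12**0.5:.2e})")
```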
 
7 stops? According to the latest literature I've seen, within 50ms you can see more than 14. The new HDR TVs are all 14+.
 
No. It would be impossible to get a truly black card.


Steve, don't jump ahead :) Do we accept an error has been induced into the system because the black card image is no longer black but has 'irregularities' in it?
 
Thank you. You've managed to admit once that ISO invariance doesn't necessarily mean that you lose no data when shooting underexposed at a lower ISO.
I don't think I ever said you lose "no data." But I think you are missing the point: ISO is not "exposure" and it does not "generate data" in the manner that collecting light does; it just "amplifies" the data. The only difference is that a non-invariant sensor/system benefits more from precise/tailored signal amplification (more accurately, it is probably noise filtering) when the signal/data is at low levels.

Here's an example of a typical photo. How much data do you think you would lose by underexposing this?
I believe you did underexpose everything (else) in the image in order to retain the glowing fungi.

EDIT: I cannot really estimate how much data would be lost if you underexposed the image further. It does not appear that anything is highlight clipped, so underexposure would be a loss at the high end, probably at a 1:1 rate.
 
7 stops? According to the latest literature I've seen, within 50ms you can see more than 14. The new HDR TVs are all 14+.
I assume this is in response to where I posted that the human eye has approximately a 7 stop range? I did specify that was for photopic vision (not dark adapted) and w/o adjustments of the pupil.
I *think* that if you look at an image as a whole, the eye adjusts to the overall light level being received, and not to each portion within the total (i.e. when not scanning a larger scene).

I fail to see how the 14 stop DR of a HDR TV is relevant if you are editing on an LCD/LED monitor limited to 10 stops of DR or less.
 
I don't think I ever said you lose "no data." But I think you are missing the point: ISO is not "exposure"...

I believe you did underexpose everything (else) in the image in order to retain the glowing fungi.
I do understand the point that the data gathered is the same irrespective of the ISO, but the data recorded is different. There are times when this does make a difference.
 
Steve, don't jump ahead :) Do we accept an error has been induced into the system because the black card image is no longer black but has 'irregularities' in it?
So we really have a lens-cap-on system. OK, the system has noise or irregularities.
 
I do understand the point that the data gathered is the same irrespective of the ISO, but the data recorded is different. There are times when this does make a difference.
Yes, when ISO causes data to be clipped (highlights, all cameras), and in low light/signal situations for non-invariant cameras (most of them).
 
Yes, when ISO causes data to be clipped (highlights, all cameras), and in low light/signal situations for non-invariant cameras (most of them).
So, a digital signal at -12 and ISO100 should be the same as a signal at -2 and ISO102400? I have to admit that this is beyond what I would ever push things to, but it is a question. I realise now that I rarely use ISO to adjust camera sensitivity; I use shutter speed, as most of what I do is tripod work. This conversation has been useful in that it has made me think about the whole idea of invariance, and it will possibly change some ways that I take photos. Thank you. That is to say that you are (apart from the small data loss due to AD conversion) correct.

There have been some comments that this ISO invariance is made unusable by the fact that many cameras cannot display an image unless the ISO is set to near optimum. This is not true for all cameras. With the Sony A7R3 (and R2) there is a setting called "Live view display". If this is set to "setting effect OFF", the camera displays a fully lit scene. It gives the effect of night vision goggles when used with the viewfinder. This is one of the hidden advantages of a mirrorless camera. A DSLR cannot do this, though there is nothing stopping it with the live view screen.
 
So, a digital signal at -12 and ISO100 should be the same as a signal at -2 and ISO102400?
The effect of an ISO invariant sensor is that the image will look the same whether ISO is applied in camera or the shot is underexposed at base ISO and pushed the same amount in post. I.e. with my D810 my noise tolerance is ~ISO1600. I can use ISO100 and push it 4 stops in post, I can use ISO800 and push it 1 stop, or I can use ISO1600... they will all look the same. So if ISO1600 would cause particularly bright highlights to clip (or backlighting/sky), I'm better off underexposing for the highlights and pulling up the shadows in post.

The benefit isn't that the data in the raw file (digital signal?) is the same in all instances; it's not. If you allow highlights to clip you can't recover them. The benefit is that there is no penalty for underexposing *by reducing ISO* (any other form of underexposure is problematic). There are very few cameras that are currently ISO invariant. My D810 is one of them, at least up to a point... I don't believe it is actually entirely invariant across the entire range of available ISOs, but it is with any ISO I might use.
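
The equivalence above is just multiplication - total amplification is (ISO / base ISO) x 2^(stops pushed in post). A quick sanity check:

```python
base = 100
for iso, push_stops in [(100, 4), (800, 1), (1600, 0)]:
    total = (iso / base) * 2**push_stops
    print(f"ISO {iso:4d} + {push_stops} stop push -> {total:.0f}x total gain")
# All three give 16x; on an (effectively) invariant body they look the same.
```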

A DSLR cannot do this, though there is nothing stopping it with the live view screen.
Most DSLRs have the option for exposure preview when using live view. But that is different from jpeg review after capture, which will be underexposed according to the settings used in all cases.
 
So we really have a lens cap on system. Ok, the system has noise or irregularities

The lens cap is a good example Steve :) ... but we do accept there are errors, as the image should be pure black and it isn't, so something has gone wrong: our camera isn't putting out an RGB value of 0,0,0 for a percentage of the pixels.
In fact we also know that 'noise' is more prevalent in the darker areas of any image, so something is 'going wrong' in the workflow from pressing the shutter button to the image we see on the screen. To produce the most accurate representation of the scene we are photographing, we need to find where in the system the biggest areas of error are and try to reduce them to a minimum. In order to do this we need to 'reverse engineer' the whole process to analyse what happens - are you with me so far?
 
In fact we also know that 'noise' is more prevalent in the darker areas of any image... are you with me so far?
Do we know that noise is more prevalent in the darker areas? Surely the noise is the same in all areas, but in the light areas the signal is greater, so the signal/noise ratio is greater.
 
The effect of an ISO invariant sensor is that the image will look the same whether ISO is applied in camera or the shot is underexposed at base ISO and pushed the same amount in post...

Most DSLRs have the option for exposure preview when using live view...

Surely an ISO invariant camera will give the same RAW file at +0 and ISO102400 as it will at -10 and ISO100? Or does "ISO invariant" mean "ISO invariant only up to 4 stops"?

Will the D810 show a clear display on live view of an image taken at -5 or even -7 or -10? Or will it show a black image? The Sony camera can show a correctly exposed image even though the recorded image will be almost black.
 
Do we know that noise is more prevalent in the darker areas? Surely the noise is the same in all areas, but in the light areas the signal is greater, so the signal/noise ratio is greater.

Hi Steve, yes we do know there are more errors in the darker part of the image and I will explain in a bit how we know that.

Anyway, as the image is captured by the camera there are three distinct types of noise (errors) added to that image and they are:

(1) Photon Noise
(2) Front end read noise (Upstream)
(3) Back end read noise (Downstream)

So, lets look at No.3 first (Back End Read Noise):

There is an analogue signal that reaches the analogue-to-digital converter (ADC) of the camera as a voltage. The ADC takes this voltage and outputs a string of binary numbers; the length of this binary number is referred to as the bit depth of the converter, and the higher the bit depth, the more information is contained.
This digital output from the camera is really important, as it contains all the information from each pixel. This binary number is subject to digital amplification. To brighten an image in Lightroom you move the exposure slider (sometimes referred to as the ISO adjustment) around - this is pretty much exactly the same as applying digital amplification, and it is applied to the whole of the image at the same 'rate'.
Camera manufacturers/sensor designers have made such big gains in the back end read noise that it now adds very little noise to the overall scene.

Imagine the below scenario, which simply represents the back end read noise:

You have a printed photograph on a desk in front of you and you place a piece of opaque glass on top of it - you can't see the photograph underneath, just the opaque glass on top. In this glass there are some irregularities (red and blue dots) spread across the whole surface, but because the glass is so opaque you can't really see them. As you turn a control potentiometer (brightness slider) the glass proportionately clears and you can then see the photograph under it. The irregularities (red and blue dots) are still there and are also increased as you move the potentiometer (digital amplification); these dots become more/less visible in the photograph/glass composite as you move the slider.
In a true ISO invariant sensor there are no irregularities, so it doesn't matter where we place the brightness control - it adds no more irregularities to our image. (We are not at this stage yet.)
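
The analogy translates to numbers directly (all values invented): the 'photograph' already carries the photon and front end noise, the 'dots' are the back end noise, and the slider multiplies both:

```python
import numpy as np
rng = np.random.default_rng(3)

photograph = rng.poisson(12, 100_000).astype(float)  # dim scene incl. photon noise
dots = rng.normal(0, 3.0, 100_000)                   # back end read noise ("red & blue dots")

for push in (1, 4, 8):
    composite = (photograph + dots) * push           # the brightness slider
    print(f"{push}x push: back end noise contributes {dots.std() * push:.1f} "
          f"of {composite.std():.1f} total output noise")
# With dots = 0 (a perfect 'glass') the push costs nothing extra: ISO invariance.
```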

If you understand this we will go on to cover the other types of noise that gets added and why.

Thanks for your patience.
 
Call me stupid, but I don't understand what these red and blue dots are.
 
Red and blue dots = Back end read noise?

They wouldn't exist in a true ISO invariant sensor situation?

Perfect understanding :)

Call me stupid

OK Stupid :) (sorry, couldn't resist but it is a joke!)

I'll try again. Let's look at our example of the image (let's call it a photograph) on the table covered by our opaque piece of special glass. This is a composite image consisting of the photograph on the bottom (which includes our (1) photon noise & (2) front end read noise). (3) The back end read noise, which includes the ADC and digital amplifier, is represented by the opaque piece of glass lying on top of our photograph, covering it completely.

We know that the ADC adds errors (imperfections/noise) to our final image. The 'red & blue' dots represent these imperfections, or what is referred to as 'back end read noise'. As we move a potentiometer on this opaque piece of glass it turns clear but still contains the imperfections (red & blue dots). The potentiometer is just like the exposure slider (ISO slider) in Lightroom - it is the digital amplifier part of the back end read noise. As the opaque glass goes clear we can start to see the photograph under it, but the imperfections in the glass also become more visible as they are 'amplified' (increased).

This back end read noise is applied to the whole of our composite image, as it lies on top of it and we have to 'look through' it. With a non-ISO-invariant sensor this degrades the resultant image quality.

If we can make the opaque glass (back end read noise) perfect, with no imperfections (red & blue dots), we have a true ISO invariant sensor that doesn't care what digital amplification is applied, as it has no imperfections anyway.

Does this help?
 