Beginner: Dynamic range

So here's my question or questions.
1. What is dynamic range?
2. How does it impact a photograph?
3. Does it matter more in certain styles of photography?
4. Why are some cameras/sensors better than others, even from model to model?
 
I would liken it to your own eyes and how they handle looking at a scene with both lowlights and highlights, balancing the two to make sense of it. Our own eyes are far better at this than any camera sensor, and when sensors try to do it they struggle, so depending on the type of shot you are taking it will influence the result; hence the use of filters to help with the balancing, particularly with landscape shots. More advanced sensors handle it better than older, less advanced ones. As for how this varies from model to model, I'm unsure how that would work??
 
1. Depends on who you listen to, but in layman's terms it's the range between the maximum and minimum brightness values you can portray (there's a small worked example after this list).

2. SMPTE say:
"Human vision has a wide latitude for scene brightness, and has multiple adaptation mechanisms that provide an automatic 'gain' to the visual system. The brightness range that people can see is much greater than the available simultaneous contrast range of current displays. HDR systems are intended to present more perceptible details in shadows and highlights thus better matching human visual system capabilities under the several image viewing conditions typically found in consumer environments. In particular, HDR allows distinguishing bright details in highlights that are often compressed in traditional video systems, including allowing separation of color details in diffuse near-white colors and in strongly chromatic parts of the image"

3. Not really, provided there's a large enough dynamic range in the scene

4. Pixel size, processing...
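Since "stops" come up throughout this thread, here's a minimal worked example of that definition (a Python sketch with made-up numbers, not measurements from any real camera): dynamic range in stops is just log base 2 of the ratio between the brightest and darkest usable values.

Code:
import math

def dynamic_range_stops(max_signal, min_signal):
    # Dynamic range in stops = log2 of the contrast ratio.
    return math.log2(max_signal / min_signal)

# Hypothetical sensor: clips at a full well of 60,000 electrons,
# with a read-noise floor of around 4 electrons.
print(dynamic_range_stops(60000, 4))   # ~13.9 stops

Each extra stop doubles the ratio the sensor can hold between its clipping point and its noise floor.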
 
So here's my question or questions.
1. What is dynamic range?
2. How does it impact a photograph?
3. Does it matter more in certain styles of photography?
4. Why are some cameras/sensors better than others, even from model to model?
1. As already mentioned, dynamic range is the range between the brightest and darkest values that a camera can record. A wider dynamic range means you get better gradation from light to dark.

2. A good example of dynamic range is a scene where you are looking at someone who has strong light behind them. With the naked eye you can make out some of their facial features, but when you take a picture exposed for the whole scene the person is almost completely silhouetted. This is because your eyes have far more DR than the camera. Applying this to how it impacts a photograph: the more dynamic range you have, the more detail you will get in highlights and shadows.

3. I would disagree with st599 here; there are certainly scenes which are affected more, or rather there are scenes that have more dynamic range than others. Classically, landscapes (especially on bright sunny days) have a lot of tonal range, so dynamic range is important. Often the camera cannot capture the full dynamic range, and this is where HDR photography comes into play: you take several shots at different exposures to capture the full range of tones and then merge them together (there's a small sketch of the merge step at the end of this post). Often people overcook HDR and it starts to look unnatural.

4. Technology/processing more than anything, I think. I'm not sure how much pixel size plays a part these days, as there are cameras such as the Nikon D7200 that have smaller pixels than cameras such as the Canon 5DIII yet have considerably more DR. Yes, pixel size helps, but the tech behind the sensor and processor seems to play as much of a part, if not more.
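On the bracketing-and-merging point in 3 above, here's a very small sketch of the merge step, assuming Python with OpenCV installed; the file names are placeholders for your own bracketed frames. Mertens exposure fusion is one way to blend brackets that tends to avoid the overcooked tone-mapped look.

Code:
import cv2
import numpy as np

# Under-, normally- and over-exposed frames of the same scene (hypothetical names).
files = ["under.jpg", "normal.jpg", "over.jpg"]
images = [cv2.imread(f) for f in files]

# Exposure fusion blends the best-exposed parts of each frame;
# no exposure times or tone mapping are needed.
fused = cv2.createMergeMertens().process(images)

# The result is float32 in roughly [0, 1]; scale back to 8-bit to save it.
cv2.imwrite("fused.jpg", np.clip(fused * 255, 0, 255).astype("uint8"))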
 
So here's my question or questions.
1. What is dynamic range?
Your eyes do have a dynamic range… and a more extended one than any chip at that. There is an upper comfortable limit (before light blinds you) and a lower threshold where you can't see anymore. These are the extent of the dynamic range of your eyes. The same goes for the chip, but with a narrower range… since it has no pupil.

2. How does it impact a photograph?
If poorly metered, the exposure may be too high for the chip, as if blinding it. If the exposure is too low, the chip does not get the time and/or the right amount of light to see anything.

3. Does it matter more in certain styles of photography?
I don't think so. A good exposure or a bad one will render good RAW data or bad RAW data… whatever the kind of photography.
4. Why are some cameras/sensors better than others, even from model to model?
There are indeed different qualities of sensors, and different prices as well. Certain sensors have a lower pixel count for the same image area. This means that each pixel is bigger and more avid when it comes to catching those photons that found their way into the body.
The one thing not mentioned here is the software operating between the chip and the memory card. This software basically serves to extend the only native property of the chip: sensitivity. Aperture and time are ways to bring the quantity of light to levels that are easier to handle within the limited range of the software-supported sensor.
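To put the aperture-and-time point in numbers: each combination of f-number and shutter speed corresponds to an exposure value, and changing either by one step shifts the scene's brightness up or down the sensor's usable range by one stop. A small sketch using the standard EV formula, with illustrative settings:

Code:
import math

def exposure_value(f_number, shutter_seconds):
    # EV (at ISO 100) = log2(N^2 / t)
    return math.log2(f_number ** 2 / shutter_seconds)

print(exposure_value(8, 1/125))   # ~13.0 EV
print(exposure_value(8, 1/250))   # ~14.0 EV - one stop less light reaches the chip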
•••••••••
I would like to say more but I hate typing and I have now reached my limit…
I hope this short and not too technical explanation helps you!
 
You've done better than me, kodiak Qc. It would have taken hours to type that lot.
 
It would have taken hours to type that lot.

When I started, no one had yet answered your questions.
By the time I stopped, four had answered before me!

Was my effort making sense to you? Did it help?
 
Yes, it helped. I now have a greater understanding of what it is and how it works.
 
I think the difference with the human eye is that you are generally only looking at one part of the scene at a time but a camera has to take in the whole scene at once.
 
I think the difference with the human eye is that you are generally only looking at one part of the scene at a time but a camera has to take in the whole scene at once.


Does it not foveate? The main issue is the HDR to SDR conversion that people do. HDR on an HDR screen is much nicer.
 
I think the difference with the human eye is that you are generally only looking at one part of the scene at a time but a camera has to take in the whole scene at once.
Just done a bit of research on this and, according to Cambridge in Colour, the DR of the human eye isn't any more than the best cameras, but because we instinctively focus on different parts of the scene the eyes adjust without us noticing, so we perceive a much bigger DR. Contradicts what I've read before, but there you go ;)
 
according to Cambridge in Colour

Please, don't stop there… seek other sources!
My analogy to the eye is one equation. Now the eye vs the camera is another equation.
 

Please, don't stop there… seek other sources!
My analogy to the eye is one equation. Now the eye vs the camera is another equation.
Well I have read other sources ;) The consensus is the DR of the human eye is 10-14 stops.
 

Sorry, I trust you did! ;-)
Lol, I just quoted them because they explained why we perceive the eye as having a greater range, but it does read as though that's the only source I read doesn't it ;)
 
doesn't it

Well, it caught me anyway!

The assumption they make is confusing; another point in this direction I was making, as I was a member there.
 
Well I have read other sources ;) The consensus is the DR of the human eye is 10-14 stops.

Nope. That may be the instantaneous DR without adaptation, but the scotopic and photopic range is over 40 stops.
 
Nope. That may be the instantaneous DR without adaptation, but the scotopic and photopic range is over 40 stops.
What's that mean in English? ;)
 
I notice dynamic range in portraits - particularly fine art b&w conversions with delicate skin tones. In this context, digital irritates the hell out of me compared to medium format film, as I've always found it too harsh and contrasty.
 
The eye can detect light over a huge range using different adaptation states. Far more than the 14 stops you mentioned.

From ~10^-6 to ~10^8 cd/m^2.
Thanks. Yeah, that's what I said (or hoped I'd said): the eye adapts so DR is much wider, but without the ability to adapt, the DR is no better than a camera.
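For anyone wanting to see where "over 40 stops" comes from, a quick back-of-the-envelope check, taking the ~10^-6 to ~10^8 cd/m^2 span quoted above at face value:

Code:
import math

total_ratio = 1e8 / 1e-6        # brightest to dimmest level the adapted eye can work at
print(math.log2(total_ratio))   # ~46.5 stops

That figure only applies across adaptation states, not to what the eye takes in at a single glance.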
 
…no better than a camera.

Right… now I understand where the discomfort is:
From the beginning, I spoke about eye vs sensor… only!
 
Thanks. Yeah, that's what I said (or hoped I'd said): the eye adapts so DR is much wider, but without the ability to adapt, the DR is no better than a camera.

I can think of one camera with a similar DR to the instantaneous eye. It costs about £40k.

Most cameras are way below that. (Although there is an issue that there is no standardised measurement technique for stills cameras - so manufacturers choose one that aids their sensor).
 
Thanks. Yeah, that's what I said (or hoped I'd said): the eye adapts so DR is much wider, but without the ability to adapt, the DR is no better than a camera.

I've finally found the source I was looking for*. It says that the instantaneous dynamic range of the human visual system is 10^3.73, but if the stimulus is viewed for longer than half a second it is at least 10^4.7 (the maximum of the screen they were conducting the tests on).

Off the top of my head, 10^3.73 is a range of about 5,000:1 - roughly 12-13 stops; 10^4.7 is about 50,000:1 - roughly 15-16 stops.

*Kunkel and Reinhard, "A Reassessment of the Simultaneous Dynamic Range of the Human Visual System", Proceedings of the 7th Symposium on Applied Perception in Graphics and Visualization. July 2010.
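In case anyone wants to redo the arithmetic, here's the same conversion written out (just log base 2 of each contrast ratio from the paper):

Code:
import math

for exponent in (3.73, 4.7):
    ratio = 10 ** exponent
    print(f"10^{exponent} = {ratio:,.0f}:1 = {math.log2(ratio):.1f} stops")

# 10^3.73 ~  5,370:1 ~ 12.4 stops
# 10^4.7  ~ 50,119:1 ~ 15.6 stops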
 
I don't know if anyone has (or can) measured the sensitivity range of optic cells in the way we think of sensor DR (with a single exposure time)... I'm not even sure they have measured it with the variable aperture (iris/pupil) disabled. So I think any direct comparison of sensor DR to retinal DR is probably a bit misleading, especially if you include night-adapted viewing, which cannot exist alongside daylight viewing.

However, as a system they are very similar. And "the sensors" share similarities in that they must absorb a minimum level in order to show a detail/color, they can only absorb up to a max limit before "clipping," and the R/G/B photosites are not really "RGB"; they are better understood as Short/Medium/Long.
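To make the "minimum level" and "clipping" limits above a bit more concrete, a toy illustration with purely made-up numbers (no real sensor behaves exactly like this):

Code:
import numpy as np

noise_floor = 4.0      # hypothetical read-noise level, in electrons
full_well = 60000.0    # hypothetical saturation level, in electrons

def record(photons):
    # Highlights clip at full well; anything under the noise floor is lost.
    signal = np.clip(photons, 0, full_well)
    return np.where(signal < noise_floor, 0.0, signal)

scene = np.array([1.0, 100.0, 10000.0, 1e6])  # deep shadow to bright highlight
print(record(scene))   # -> 0, 100, 10000, 60000: both ends of the scene's range are gone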

 