Variable ISO sensor plane

A good friend who I introduced to photography some time ago came up with a really good question the other day:

Why do camera manufacturers not develop a sensor that can have different ISO sensitivities across its surface?

To try to explain: cameras have multi-segment matrix metering, so why can't the sensor's pixels or areas have individual ISO settings, so that bright areas of the scene can be allocated a low ISO and dark areas of the scene given a higher ISO?

Thoughts?
 
Hmmm ... could you possibly end up with an 18% grey overall image? ;)

OK, I'll concede an 18% coloured mélange

Please note - tongue FIRMLY planted in one's cheek
 
How would the sensor/camera know what you want dark or light?
 
The human eye does this quite successfully but we have the advantage of massive processing of the image. I doubt that there is any technical problem with the idea - it would be an entirely software thing. If you could set the two ISO limits, it would be ideal for dark church interiors.
 
The human eye does this quite successfully but we have the advantage of massive processing of the image. I doubt that there is any technical problem with the idea - it would be an entirely software thing. If you could set the two ISO limits, it would be ideal for dark church interiors.

Plus the human eye (all chordate species' eyes, to differing degrees?) has in effect two sensors, i.e. the rods and cones: the rods are receptive in very low light but only in B&W, while the cones work at higher light levels and in colour.
 
Because ISO has nothing to do with "sensitivity" on a digital camera... It is just signal amplification after the sensor, kind of like the brightness control for your monitor.

What you are wanting is basically what an ISO invariant sensor allows for (selective lightening w/o penalty).
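
Very roughly, in code terms (a toy sketch of an idealised ISO-invariant sensor with made-up numbers, not any real camera's pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

def expose(mean_photons, pixels=100_000, read_noise_e=1.5):
    """Toy sensor read-out: photon shot noise plus a little read noise (electrons)."""
    signal = rng.poisson(mean_photons, size=pixels).astype(float)
    return signal + rng.normal(0.0, read_noise_e, size=pixels)

raw = expose(20.0)      # one dim exposure captured at base ISO

in_camera = raw * 16    # "ISO 1600": the camera multiplies the data by 16 before saving
in_post   = raw * 16    # or keep the dark base-ISO file and brighten it 16x in software

# On an idealised ISO-invariant sensor the two results are identical - the
# "sensitivity" never changed, only where the multiplication happened. (Real
# sensors add a little electronics noise after the gain stage, which is why
# raising ISO in camera can still help slightly on less invariant designs.)
print(np.allclose(in_camera, in_post))  # True
```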
 
How would the sensor/camera know what you want dark or light?

By using the matrix/evaluative metering probably.

Steven - the ISO-invariant sensor still needs editing/post-processing though?

What got me thinking again was a picture taken in our workshop by a camera phone - the workshop was dark, but the picture of the car inside it was perfectly exposed, yet the scene outside the shutter door, which was in bright daylight, was also perfectly exposed! The camera phone 'spat' this image out without any post-processing?
 
...
What got me thinking again was a picture taken in our workshop by a camera phone - the workshop was dark, but the picture of the car inside it was perfectly exposed, yet the scene outside the shutter door, which was in bright daylight, was also perfectly exposed! The camera phone 'spat' this image out without any post-processing?

It didn’t!

It saw the scene as having a high dynamic range and either took two exposures and merged them (the iPhone does this IIRC) or it simply did it in processing.

And just like the ‘portrait mode’ on a phone: it works great for camera phones because they’re not examined critically. Have a really close look at the image with your photographer’s eyes though and you’ll soon see it’s a s***ty blunt tool that bears no resemblance to a ‘photograph’.

The issue I have is that people are now so used to seeing overprocessed, Instagram-filtered images that they don’t even hesitate when they see this kind of s***. I’m beginning to feel like a right miserable old fart when people are saying images are amazing and all I see is plastic, overprocessed, unreal garbage.
 
A good friend who I introduced to photography some time ago came up with a really good question the other day:

Why do camera manufacturers not develop a sensor that can have different ISO sensitivities across its surface?

To try to explain: cameras have multi-segment matrix metering, so why can't the sensor's pixels or areas have individual ISO settings, so that bright areas of the scene can be allocated a low ISO and dark areas of the scene given a higher ISO?

Thoughts?

Because we don't want that - it would potentially end up with everything being the same tone, something like Chuckles described. And if you're wanting ISO to be individually adjusted at every pixel, as it would need to be, I'd imagine the processing power required would be impossibly huge.

What we actually want is highlights that are held just short of blowing, and (usually) shadows lifted so detail can be seen, but still look natural. There are other ways of achieving that (eg automated HDR, or ISO-invariant sensor) which are already in common use.
 
And if you're wanting ISO to be individually adjusted at every pixel, as it would need to be, I'd imagine the processing power required would be impossibly huge.

Even if each individual pixel could be processed in the camera, it would make no difference at all, because pixels with low light levels would have to be amplified more than pixels with high light levels, so you would end up with noise levels the same as they are now.
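
To put some rough numbers on that (a Poisson toy model with made-up values, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# A dark pixel collecting ~25 photons has shot noise of about sqrt(25) = 5,
# i.e. a signal-to-noise ratio of roughly 5.
dark = rng.poisson(25, size=100_000).astype(float)
print(round(dark.mean() / dark.std(), 2))        # ~5

# Give just those dark pixels 8x more gain (a "higher ISO" for that area):
# signal and noise are scaled identically, so the SNR does not improve.
boosted = dark * 8
print(round(boosted.mean() / boosted.std(), 2))  # still ~5
```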

And there are some programs now available which do adjust individual pixels - AI Gigapixel is one and Aurora HDR is another - but both require processing power far in excess of that available in any camera and even then take a very large amount of time to process just one image - would you really want to wait for minutes after a single shot before you could take the next one?
 
- would you really want to wait for minutes after a single shot before you could take the next one?
My standard burst rate is around 1 frame per ten minutes. Composing, analysing light, writing notes; all take time. I then spend more time at home processing HDRs. A couple of minutes spent unable to use a camera I am not about to use yet seems a bargain.
 
It didn’t!

It saw the scene as having a high dynamic range and either took two exposures and merged them (the iPhone does this IIRC) or it simply did it in processing.

And just like the ‘portrait mode’ on a phone: it works great for camera phones because they’re not examined critically. Have a really close look at the image with your photographer’s eyes though and you’ll soon see it’s a s***ty blunt tool that bears no resemblance to a ‘photograph’.

The issue I have is that people are now so used to seeing overprocessed, Instagram-filtered images that they don’t even hesitate when they see this kind of s***. I’m beginning to feel like a right miserable old fart when people are saying images are amazing and all I see is plastic, overprocessed, unreal garbage.

I looked at some photos from the iPhone XS and at first glance they look acceptable - as long as you don't want large prints and just view them on a smartphone etc.

https://www.dpreview.com/articles/5...s-a-leap-forward-in-computational-photography

Much of the ability to take photos which look sharp and well exposed is because of the large apertures used and the use of very low ISO - down to 16 ISO - both giving acceptable results because the very small sensors mean lenses of extremely short focal lengths have to be used.

This gives the ability to get sharp results from very close to infinity with no need to focus - perfect for phones.
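
To put rough numbers on that, here's the standard hyperfocal distance formula with assumed figures for a typical phone main camera versus a full-frame 50 mm (the exact values vary by model):

```python
def hyperfocal_mm(focal_mm, f_number, coc_mm):
    """Hyperfocal distance: focus here and everything from roughly half this
    distance out to infinity is acceptably sharp."""
    return focal_mm ** 2 / (f_number * coc_mm) + focal_mm

# Assumed figures: ~4.25 mm f/1.8 phone lens (tiny circle of confusion) vs
# a 50 mm f/1.8 on full frame.
phone = hyperfocal_mm(4.25, 1.8, 0.004)
full_frame = hyperfocal_mm(50.0, 1.8, 0.03)

print(f"Phone:      ~{phone / 1000:.1f} m")       # ~2.5 m, so sharp from ~1.3 m to infinity
print(f"Full frame: ~{full_frame / 1000:.0f} m")  # ~46 m
```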

But look closely at photos taken at night, like this one:

https://1.img-dpreview.com/files/p/sample_galleries/7780769194/6961356444.jpg

And in the sky you will see very high noise levels at 800 ISO - at the same ISO my Canon 1D MkIII has barely any noise - because of the much larger pixels.

But let's face it - these phones really are a miracle in being able to take photos which probably 90% of people will be very happy with.

And the technology which makes that possible will help to make our "real" cameras even better.
 
My standard burst rate is around 1 frame per ten minutes. Composing, analysing light, writing notes; all take time. I then spend more time at home processing HDRs. A couple of minutes spent unable to use a camera I am not about to use yet seems a bargain.

I can imagine Kipax disagreeing with that :LOL:
 
Just for info, this is the photo (it's downloaded from our Facebook page so has lost quite a lot of quality)

Lotus Exige by Fraser White, on Flickr

Does anyone else think that, with developments in software, this may well become the future? It certainly saves any masking/post-processing etc.
 
<snip>

Does anyone else think that, with developments in software, this may well become the future? It certainly saves any masking/post-processing etc.

No doubt about it, and as you've just demonstrated, the future is already here in smartphones where it's filtering its way up.

We're a conservative bunch though and don't take so readily to the idea of the camera automatically 'distorting reality' too much. Maybe for us there will be more emphasis on the post-processing side, as mentioned above?
 
No doubt about it, and as you've just demonstrated, the future is already here in smartphones where it's filtering its way up.

We're a conservative bunch though and don't take so readily to the idea of the camera automatically 'distorting reality' too much. Maybe for us there will be more emphasis on the post-processing side, as mentioned above?

Mobile phones have the disadvantage that they are small, imposing size limits on both lens and sensor alike.
They have the advantage that they have a lot of processing power relative to size, and it's pretty easy in the form factor to add in an additional lens or two (or more..)
Their main focus for development now isn't trying to pack in a single bigger sensor, or a single bigger lens (they tried and failed), it's all about progressing computational photography, and enhancing the software.

That really shines when there are multiple data sources to take image data from; we'd have to see a significant change to the form factor of 'proper' cameras for that to take hold there, I think.
 
Still doesn't answer the question: how does the camera know which areas you want dark and which you don't?

Based on your friend's suggestion, the below two examples would be brightened up in areas I intend to be dark. How does the camera know when to do what in whatever areas of the image?

13933739050_aa3610b8a3_b.jpg


25706203796_10083ac9fe_b.jpg
 
The issue I have is that people are now so used to seeing overprocessed, Instagram-filtered images that they don’t even hesitate when they see this kind of s***. I’m beginning to feel like a right miserable old fart when people are saying images are amazing and all I see is plastic, overprocessed, unreal garbage.

I'm not the only one then.
 
Still doesn't answer the question: how does the camera know which areas you want dark and which you don't?

Based on your friend's suggestion, the below two examples would be brightened up in areas I intend to be dark. How does the camera know when to do what in whatever areas of the image?

13933739050_aa3610b8a3_b.jpg


25706203796_10083ac9fe_b.jpg
The camera would 'know' in the same way it 'knows' what you want to be in focus - you tell it. We already have very many automated systems in our cameras and we can turn them on or off, and when on we can finesse shutter speeds, apertures, ISO, focus, exposure and quite a lot else. I cannot see why this idea would need to be different.
 
The camera would 'know' in the same way it 'knows' what you want to be in focus - you tell it. We already have very many automated systems in our cameras and we can turn them on or off, and when on we can finesse shutter speeds, apertures, ISO, focus, exposure and quite a lot else. I cannot see why this idea would need to be different.

The camera focusses on a single point that I tell it to, OR on a subject that I have told it is moving and that it should try to keep in focus.
Of course, if you leave the camera in wide-area focus it guesses what is best (normally the closest object to the camera) to be in focus. But it is still focussing on a *single* point.

This is completely different in that the OP's friend wants a different ISO at various points in the scene, or possibly for every single photodiode. The camera does not know which parts I want dark and which parts light unless I tell it, so I would have to tell it what to do for many different points, which does not make much sense since it would take as long as post-processing the image later (which is what the OP is trying to avoid in the first place!).
 
Still doesn't answer the question: how does the camera know which areas you want dark and which you don't?

Based on your friend's suggestion, the below two examples would be brightened up in areas I intend to be dark. How does the camera know when to do what in whatever areas of the image?

13933739050_aa3610b8a3_b.jpg


25706203796_10083ac9fe_b.jpg


I think the goal would be to record the image in the same way the human eye 'sees' the image.

I find this a biggie for people coming to photography in that they expect the image to look like how they 'saw' it and not how a camera records it.
 
The camera focusses on a single point that I tell it to, OR on a subject that I have told it is moving and that it should try to keep in focus.
Of course, if you leave the camera in wide-area focus it guesses what is best (normally the closest object to the camera) to be in focus. But it is still focussing on a *single* point.

This is completely different in that the OP's friend wants a different ISO at various points in the scene, or possibly for every single photodiode. The camera does not know which parts I want dark and which parts light unless I tell it, so I would have to tell it what to do for many different points, which does not make much sense since it would take as long as post-processing the image later (which is what the OP is trying to avoid in the first place!).
You can have autofocus on or off and leave it at that - the camera will do quite a good job of guessing. Or you can set the focus mode, choose one or several focus points and use back-button focus to have much more control.

You can have auto exposure on or off and leave it at that - the camera will do quite a good job of guessing. Or you can set the aperture and let the camera do the shutter and ISO. Or you can use full manual and set everything. Or you can control the exposure in conjunction with the camera in a myriad ways.

Why could variable ISO not work in the same manner: on/off, some control/no control, used in these zones/not used in those zones, etc./not etc.?
 
Just for info, this is the photo (it's downloaded from our Facebook page so has lost quite a lot of quality)

Lotus Exige by Fraser White, on Flickr

Does anyone else think that, with developments in software, this may well become the future? It certainly saves any masking/post-processing etc.

The phone basically takes three (or more) images - one for highlights, one for midtones and one for shadows - then merges them to form an image with quite amazing dynamic range. As others have said, if you go pixel-peeping at smartphone images then they look awful, but for viewing on a phone screen or a small print they look amazing. There is a huge amount of post-processing going on; it just happens behind the scenes and you have no control over it.
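
If anyone's curious what the merging step boils down to, here's a bare-bones sketch of exposure fusion by weighted averaging (real phone pipelines also do alignment, ghost removal, noise reduction and tone mapping, so treat this purely as an illustration):

```python
import numpy as np

def fuse_exposures(frames, sigma=0.2):
    """Naive exposure fusion: weight each pixel by how close it sits to
    mid-grey in its own frame, then take the weighted average.
    `frames` are images of the same scene, normalised to 0..1."""
    frames = [np.clip(f, 0.0, 1.0) for f in frames]
    weights = [np.exp(-((f - 0.5) ** 2) / (2 * sigma ** 2)) for f in frames]
    total = np.maximum(sum(weights), 1e-6)
    return sum(w * f for w, f in zip(weights, frames)) / total

# Fake bracketed frames of a high-contrast scene (under, normal and over exposed).
scene = np.linspace(0.0, 4.0, 64 * 64).reshape(64, 64)
frames = [np.clip(scene * gain, 0.0, 1.0) for gain in (0.25, 1.0, 4.0)]

merged = fuse_exposures(frames)
print(merged.min(), merged.max())  # a single 0..1 image combining all three brackets
```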
 
The phone basically takes three (or more) images - one for highlights, one for midtones and one for shadows - then merges them to form an image with quite amazing dynamic range. As others have said, if you go pixel-peeping at smartphone images then they look awful, but for viewing on a phone screen or a small print they look amazing. There is a huge amount of post-processing going on; it just happens behind the scenes and you have no control over it.

That's just HDR; many cameras can do that too but will give you JPEG output. I suppose the OP wants the same effect using ISO and the dynamic range of the sensor, and to get a nice RAW output.

OK, great, we are on the same page now - certainly an interesting option and one I'd be interested in trying out, if it's at all possible and anyone cares to implement it :)
 
I get you now - yes, it could. It would be interesting to discuss how feasible it is.
There would be a lot of software development involved, which would have to be paid for by increased camera sales. It might pay better to spend the money on the sensor to give us better dynamic range. I am probably one of the few for whom this would be a godsend.

I have a vague recollection from many years ago when I tried Magic Lantern. Did they do something vaguely similar?
 
I think the goal would be to record the image in the same way the human eye 'sees' the image.

I find this a biggie for people coming to photography in that they expect the image to look like how they 'saw' it and not how a camera records it.

The problem is that the way we normally look at a scene, and how the camera sees it, are fundamentally different.

Our eyes (apart from being 3D) take in a very wide view up to around 180 degrees. Then we scan the scene in small sections and build up a detailed mental picture of the areas of interest (often ignoring other areas) but at any given moment we're restricted to a very small detail zone of just 2 degrees, individually adjusted for sharp focus and exposure.

A modest amount of HDR technique (basically just brightening shadows) often looks closer to our mental recollection but the photographer can use both exposure and focus to restrict and emphasise what we're allowed to see. Nandbytes photos above are excellent examples of how we can create a different take on things, that is probably some way from how our eyes/brain originally saw it. And different to how a pre-programmed camera would probably record it based on broad brush assumptions.

Personally, I wouldn't want a camera that automatically took away some of that control. I just want a well exposed image file of flexible Raw data that can be adjusted to taste in post-processing.
 
Firstly - I would like to thank everyone for their comments so far and how nice it is to see a thread not descend into a slanging match (y)

The problem is that the way we normally look at a scene, and how the camera sees it, are fundamentally different.

Our eyes (apart from being 3D) take in a very wide view up to around 180 degrees. Then we scan the scene in small sections and build up a detailed mental picture of the areas of interest (often ignoring other areas) but at any given moment we're restricted to a very small detail zone of just 2 degrees, individually adjusted for sharp focus and exposure.

A modest amount of HDR technique (basically just brightening shadows) often looks closer to our mental recollection but the photographer can use both exposure and focus to restrict and emphasise what we're allowed to see. Nandbytes photos above are excellent examples of how we can create a different take on things, that is probably some way from how our eyes/brain originally saw it. And different to how a pre-programmed camera would probably record it based on broad brush assumptions.

Personally, I wouldn't want a camera that automatically took away some of that control. I just want a well exposed image file of flexible Raw data that can be adjusted to taste in post-processing.

Hi Hoppy - my digital camera has lots of functions I don't use and I turn them off - hopefully this could be turned into a 'mode' that you wouldn't have to use.

Anyway - why have we got an ISO dial? (I believe it is an amplifier stage?). Wouldn't it be nice to not have an ISO control?

Matrix metering must be able to 'read a scene' and evaluate light from dark areas, so why can't the pattern it is reading be applied to different areas of the sensor to maximise the dynamic range of that area?

Therefore, unlike the camera phone taking several images of the same scene and combining them, the camera would only take one image.
 
The problem is that the way we normally look at a scene, and how the camera sees it, are fundamentally different.

Our eyes (apart from being 3D) take in a very wide view up to around 180 degrees. Then we scan the scene in small sections and build up a detailed mental picture of the areas of interest (often ignoring other areas) but at any given moment we're restricted to a very small detail zone of just 2 degrees, individually adjusted for sharp focus and exposure.

A modest amount of HDR technique (basically just brightening shadows) often looks closer to our mental recollection but the photographer can use both exposure and focus to restrict and emphasise what we're allowed to see. Nandbytes photos above are excellent examples of how we can create a different take on things, that is probably some way from how our eyes/brain originally saw it. And different to how a pre-programmed camera would probably record it based on broad brush assumptions.

Personally, I wouldn't want a camera that automatically took away some of that control. I just want a well exposed image file of flexible Raw data that can be adjusted to taste in post-processing.

To add to that, the detail we see is actually a composite of hundreds of the very small detailed views combined (the eye is always moving) with the blurred motion while moving ignored.
 
To add to that, the detail we see is actually a composite of hundreds of the very small detailed views combined (the eye is always moving) with the blurred motion while moving ignored.

Well - we need to develop a sensor and software that does exactly the same. A processor can do sums quicker than my thick brain, so it can also match my eye.

Chop chop - a couple of weeks' work should be enough!
 
As I think has already been said, sometimes we don't want to record the scene as our eyes see it; sometimes we want something different, so I hope there'll be an Off/On switch for any super-duper 'as your eye sees it' mode.
 
That's just HDR; many cameras can do that too but will give you JPEG output. I suppose the OP wants the same effect using ISO and the dynamic range of the sensor, and to get a nice RAW output.

Then, suddenly, it's no longer RAW
 
The way a human's vision works is very complex, way beyond simple exposure. But in terms of exposure I think having all images be recorded "as a normal person would see it" is much the same issue as "snapshots" have...

I think what is being questioned is the ability for a camera to evaluate a scene and break it into zones/areas of proper exposure and then adjust for that with ISO (rather than per pixel). That is certainly possible and it will probably be implemented at some point (the ISO would be digital gain rather than analog).
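
As a sketch of what that zone-based digital gain might look like (entirely hypothetical - the zone counts, target level and gain cap are all made up):

```python
import numpy as np

def zone_gain(raw, zones=(4, 4), target=0.4, max_gain=16.0):
    """Split the frame into a grid of zones and apply a different digital gain
    to each, pushing every zone's mean towards a target brightness.
    `raw` is a linear image normalised to 0..1."""
    out = raw.astype(float).copy()
    zh, zw = out.shape[0] // zones[0], out.shape[1] // zones[1]
    for i in range(zones[0]):
        for j in range(zones[1]):
            block = out[i * zh:(i + 1) * zh, j * zw:(j + 1) * zw]
            gain = target / max(block.mean(), 1e-6)
            block *= min(gain, max_gain)        # cap the gain, like a maximum ISO
    return np.clip(out, 0.0, 1.0)

# A scene with a dark left half and a bright right half.
scene = np.concatenate([np.full((64, 32), 0.05), np.full((64, 32), 0.8)], axis=1)
balanced = zone_gain(scene)
print(balanced[:, :32].mean(), balanced[:, 32:].mean())  # both halves pulled to ~0.4
```

It also shows the earlier objection in action: push every zone towards the same target and the whole frame drifts towards one mid tone, so any real implementation would need the gains constrained or user-controllable.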

It occurs to me that there is an idea that arises from this that could have real benefits... if each pixel switched off after reaching optimal saturation rather than using a global shutter. Theoretically, this is certainly possible. The resulting "image" would be near pure white w/o processing, but it would contain the maximum amount of data/accuracy at every pixel. But besides requiring processing to see/generate the desired image, it would also result in the SS being determined by the pixel receiving the least amount of light... all exposures would become "long exposures." This would basically be the opposite of underexposing an ISO invariant sensor so that all pixels record at least the minimum required.

Edit: it would even be possible for the resulting image display to be normalized simply by biasing the per pixel display luminance by the per pixel exposure time... i.e. the pixel that took the longest to reach saturation is displayed the darkest by default.
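
A rough simulation of that per-pixel time-to-saturation idea (purely hypothetical numbers, ignoring noise and readout practicalities):

```python
import numpy as np

def time_to_saturation(photon_rate, full_well=10_000.0, max_time=10.0):
    """Hypothetical sensor: each pixel integrates until it hits full well and
    records the time that took. Pixels too dark to saturate are read out at
    max_time with whatever charge they have collected."""
    t = np.minimum(full_well / np.maximum(photon_rate, 1e-9), max_time)
    charge = photon_rate * t
    return t, charge

# Pixel radiances spanning a 10,000:1 range (photons per second).
rates = np.array([10.0, 1_000.0, 100_000.0])
t, charge = time_to_saturation(rates)

# Reconstruct relative brightness as charge/time; display can then be
# normalised so the pixel that took longest to saturate shows darkest.
print(t)           # [10.  10.   0.1] - the dimmest pixel sets the total exposure time
print(charge / t)  # the original rates recovered
```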
 
Great reply @sk66 ! This is what I'm imagining! Except my description is much poorer than your explanation! :)
 
Great reply @sk66 ! This is what I'm imagining! Except my description is much poorer than your explanation! :)

But why do you want that? Check Steven's third paragraph and note "it would also result in the SS being determined by the pixel receiving the least amount of light... all exposures would become "long exposures." " Even if that was possible, which I doubt from a sheer data processing perspective, where's the upside to balance the numerous downsides?

We already have ways of controlling exposure and manipulating dynamic range that are quite good enough, eg ISO-invariant or ISO-less sensors, most notably in some full-frame Sony and Nikon cameras, with most other new models not far behind. That's been debated before a few times. And you'd still need some kind of 'brightening' control (or ISO adjustment, as that's basically all it is) to generate an LCD image to work from or it'd usually be far too dark and mostly useless.
 
But why do you want that?
I think he was probably referencing the first part of breaking an image into zones and exposing them differently (digital gain)... I think some cameras already do this for the automated HDR feature (rather than combining multiple actual exposures).

Even if that was possible, which I doubt from a sheer data processing perspective
IDK why you would think that; pixels are already individually switched/read out/processed... they have to remain separate "data packages." And the processing capabilities are already amazing... Nikon's matrix metering scene recognition system processes the data through a database of over 30,000 images in the process of determining the optimal exposure.
 