What is resolution?

I had thought that resolution could easily be defined by a number and that any complexities in the concept were probably not my concern, but I am starting to wonder. I have been experimenting with pixel shift and it does seem to make a noticeable difference to the resolution in most cases, but not all.

I had this hope that I could somehow improve the resolution of diffraction-limited pictures. Wouldn't that be cool if you could effectively increase the depth of field by using higher f-stops and pixel shift? But it would seem that in practice I get little to no improvement at high f-stops, whereas I get quite a lot of improvement (of an already good image) at low f-stops. At f8 I get lots of improvement, but at f32 I can't see any.

This started me thinking about what resolution actually means. I have vague memories of what randomness is and the many forms it comes in, but nobody really talks about it in photography, at least not to me. Maybe it is way too complex to be used in any scientific way. I do know that diffraction applies one type of randomness to a photo, and that it can vary depending on the content of the photo. The lens applies several other types of randomness, and the camera itself applies others again. All of this can be learnt as an art, but I suspect not as a science. Still, it would be nice to understand a little more.
 
At the risk of accusing you of 'overthinking', a look at any lens MTF chart would have told you that diffraction is a killer of resolution.

And a study of how lenses are tested would show that resolution is already well documented scientifically rather than as an art.

In fact as a rule, forums are far more inclined towards the scientific than artistic measures of images.
 
In its most basic form, resolution is defined in lp/mm, line pairs per millimetre.
Mathematically it is quite simple to calculate what a sensor's maximum theoretical resolution should be. Unfortunately, things like diffraction and lens resolving power play a large part.
As you seem to have already discovered, your lenses perform better stopped down a bit, and then diffraction limitation comes into effect as you go past about f11.

I don't have sufficient education in this field to go into any real depth, and therefore base my camera settings on empirical results rather than scientific ones.
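To put rough numbers on the "maximum theoretical resolution" point above, here is a back-of-the-envelope sketch (my own illustration, not from any lens test; the 4.5 µm pixel pitch is just an assumed example value). It compares a sensor's Nyquist limit in lp/mm with the diffraction cut-off of an ideal lens at a few apertures:

# Back-of-the-envelope sketch with assumed example values, not measured data:
# compare a sensor's sampling limit with an ideal lens's diffraction cut-off.

WAVELENGTH_MM = 0.00055  # ~550 nm green light, expressed in millimetres

def sensor_nyquist_lpmm(pixel_pitch_um):
    # One line pair needs at least two pixels, so the limit is 1 / (2 * pitch).
    return 1.0 / (2.0 * pixel_pitch_um / 1000.0)

def diffraction_cutoff_lpmm(f_number):
    # Spatial frequency where a perfect lens's MTF reaches zero: 1 / (lambda * N).
    return 1.0 / (WAVELENGTH_MM * f_number)

print(f"Sensor Nyquist at 4.5 um pitch: {sensor_nyquist_lpmm(4.5):.0f} lp/mm")
for n in (8, 11, 16, 22, 32):
    print(f"f/{n}: diffraction cut-off about {diffraction_cutoff_lpmm(n):.0f} lp/mm")

With those assumptions it prints roughly 111 lp/mm for the sensor against about 227 lp/mm at f/8 but only about 57 lp/mm at f/32, which is the crossover being described: somewhere past f/11 the lens, not the sensor, becomes the limit.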
 
At the risk of accusing you of 'overthinking', a look at any lens MTF chart would have told you that diffraction is a killer of resolution.

Perhaps you are right, but it is curious that I don't seem to be able to improve the resolution of diffraction limited images with pixel shift. There is virtually no improvement and I would have expected there to be some. There is a lot of improvement when there is only a small diffraction effect and the improvement seems to increase as the diffraction drops. Perhaps this has something to do with our perception of resolution? Is our perception of resolution different from the actual, scientific, resolution?
 



As a measurement, resolution may apply to audio as well as to 2D graphic representation; in the imaging case it means a measure of the amount of detail in an image.

Resolution, to be of any significance, must be well thought out and preserved throughout the workflow… there are so many ways to affect the outcome negatively.

As Phil V mentioned, diffraction is one of the negative points; choosing to shoot at the sweet spot of a given lens is the better option. Often this will mean focus stacking to increase DoF in preference to stopping down further.
 
Pixel shift can't find detail where there is none to find. Once the image projected onto the sensor is blurred, the shifted pixel will only be shifted to another blurry pixel. You say there is "virtually no improvement" in the diffraction-limited images, but there's probably as much improvement on them as there is on the wider-aperture images; it's just that a marginally "sharper" blurry image is harder to tell apart from the original than a similarly improved (at pixel level) sharper pair is.
 
Indeed, pixel shift does not increase sensor resolution; it increases the amount of detail recorded... the size of the Airy disc is still going to be the same, and possibly (no experience) using pixel shift more of the Airy disc will affect the final image.

Although there is a lot of talk about cameras being diffraction limited, it is important to remember that diffraction is an optical property, not a property of the sensor. The only part the sensor really plays is its ability, through the density of its photosites, to record the Airy disk, which will be the same size on a densely populated sensor as on a sparsely populated one (given everything else is equal).

So I am not really surprised that you are not seeing any great improvement....
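For a rough sense of scale, here is a sketch using the standard 2.44 × wavelength × f-number approximation for the Airy disc, with an assumed 4.5 µm photosite pitch purely as an example:

# Rough illustration with assumed values: Airy disc diameter vs photosite pitch.
WAVELENGTH_UM = 0.55   # green light, in microns
PIXEL_PITCH_UM = 4.5   # assumed example photosite pitch

def airy_disc_diameter_um(f_number):
    # Diameter to the first minimum of the Airy pattern: 2.44 * lambda * N.
    return 2.44 * WAVELENGTH_UM * f_number

for n in (8, 11, 16, 22, 32):
    d = airy_disc_diameter_um(n)
    print(f"f/{n}: Airy disc ~{d:.1f} um across, ~{d / PIXEL_PITCH_UM:.1f} photosites wide")

At f/8 the disc is only a couple of photosites across, but by f/32 it is smeared over roughly ten of them, so shifting the sensor by one photosite can't uncover anything new.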
 
Been trying to think how to explain my reasoning without being face to face and can only come up with the below...

Diffracted/blurry with pixel shift
.....ooooo.....
....ooooo....
gives (about!)
....oooooo....
or
......oooo......
still blurry

while a sharper source image

......oo......
.....oo.......
will give (again, about!)
......o......

Almost certainly a huge oversimplification but the best I can come up with before my 2nd cup of coffee!
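The same idea in toy numbers, for what it's worth (nothing camera-specific, just a 1D sketch): shift a wide blob and a tight spot by one sample and see how much of the signal actually changes.

import numpy as np

# Toy 1D sketch of the diagram above: a point spread over many samples (blurry)
# barely changes when shifted by one sample; a tight spot changes completely.
def change_after_one_sample_shift(profile):
    shifted = np.roll(profile, 1)
    return np.abs(profile - shifted).sum() / profile.sum()

blurry = np.array([1, 2, 3, 4, 5, 4, 3, 2, 1], dtype=float)  # wide, Airy-like blob
sharp = np.array([0, 0, 0, 0, 10, 0, 0, 0, 0], dtype=float)  # tight spot

print(f"wide blob: {change_after_one_sample_shift(blurry):.2f}")   # ~0.32
print(f"tight spot: {change_after_one_sample_shift(sharp):.2f}")   # 2.00

Hardly rigorous, but it matches the o-diagram: the shifted samples of the blurry blob look almost identical, so there is little extra for the merge to work with.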
 
It is curious that I don't seem to be able to improve the resolution of diffraction limited images with pixel shift. There is virtually no improvement and I would have expected there to be some.
I’m not surprised at all, you can’t manipulate data that doesn’t exist.

And I can’t speak for your perception of what resolution is, your findings fall in line with my perception.
 
Going through this one myself at the moment. I sent my Sigma 150-600 Sport back to be looked at as it was playing up a bit, and I was told yesterday it has to go back to Japan as it has resolution issues. So I'm without my main lens for up to 8 weeks.
 
Pixel shift can't find detail where there is none to find.
I thought of that, but why do the images at, say, f16 appear to be higher res?
Almost certainly a huge oversimplification but the best I can come up with before my 2nd cup of coffee!
Seems more rational than most. So if the f8 image has all this "hidden" resolution and f16 has less and f32 has none at all, how do I access all this wonderful hidden resolution?
 
Indeed, pixel shift does not increase sensor resolution; it increases the amount of detail recorded...
Starting to make it clearer. I'll have to think about that. Can you say what the difference is between resolution and the amount of detail recorded?
 
I think you're worrying unduly.

Just enjoy these new super resolution sensors and pair them with good glass, use a tripod and above all enjoy your photography.

It's not all about how sharp an image is, in fact I sometimes defocus an image in post to make it a better image.
 
I don't seem to be able to improve the resolution of diffraction limited images with pixel shift.
If you think of diffraction in terms of a blur circle measured in millimetres or microns, then it becomes obvious why pixel shift can't make a difference.
 
If you think of diffraction in terms of a blur circle measured in millimetres or microns, then it becomes obvious why pixel shift can't make a difference.
You are quite right. The measured pixel moves from the centre of the Airy disc to the centre of the Airy disc, thus absolutely no change in clarity when the Airy disc dominates. Thank you.
So, pixel shift can only correct the failings of the sensor, not the failings of the lens or other things.
 
So, pixel shift can only correct the failings of the sensor, not the failings of the lens or other things.

Pixel shift can record more detail when used under normal circumstances; it isn't really correcting failings in the sensor. Diffraction is an optical effect where the light nearest the edge of the aperture is bent (diffracted), creating the Airy disk. As the aperture decreases in size the Airy disk increases in size, hence the image becomes more 'blurred' as you stop down. Pixel shift cannot help to increase the apparent resolving power of the lens (not resolution), as the Airy disk is still there and is still the same size.

In fact there is an argument for using a less densely packed sensor: the more photosites packed into the sensor area, the more of them fall within the area of the Airy disk (one reason cameras attached to microscopes are not usually high resolution). We used to find that a 12MP FF sensor had better diffraction performance than higher-resolution cameras... The Airy disk at the same magnification and aperture etc. will still be the same size, just that fewer photosites record it...
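Putting assumed numbers on that comparison (roughly 8.4 µm pitch for a 12MP full-frame sensor versus 4.5 µm for a 42MP one, example values only):

import math

# Assumed example pitches: ~8.4 um (12MP FF) vs ~4.5 um (42MP FF).
WAVELENGTH_UM = 0.55

def photosites_inside_airy_disc(f_number, pitch_um):
    disc_um = 2.44 * WAVELENGTH_UM * f_number       # Airy disc diameter
    disc_area_um2 = math.pi * (disc_um / 2.0) ** 2  # treat the disc as a circle
    return disc_area_um2 / (pitch_um ** 2)

for pitch in (8.4, 4.5):
    count = photosites_inside_airy_disc(16, pitch)
    print(f"f/16, {pitch} um pitch: ~{count:.0f} photosites inside the disc")

Same disc, but it covers roughly 5 photosites on the coarse sensor and nearly 18 on the dense one, which is the "more photosites fall within the Airy disk" point in numbers.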
 
Thanks to all who helped explain this to me. I now understand what pixel shift can, or more importantly, can't do. Resolution - well that still has questions, but I don't know how to frame them so they'll have to wait.
 
"Resolution" is all a matter of "matching dots"...
A detail is a point of light/shadow and it is projected as a dot by the lens. The size of this dot (the Airy disk) varies with the aperture of the lens... the smaller the aperture, the larger the dot. This dot is then recorded by dots on the sensor (diodes), converted into output dots (image pixels), and then displayed as dots (screen pixels/printer dots/etc.)...

At whatever point in this system the dots are largest, that is the limit of resolution. For instance, a (theoretical) perfect lens set to f/11 can project a maximum of 16MP worth of dots onto a 35mm sensor (yellow-green wavelength, w/o AA filter). So 16MP is the maximum you can record, and the maximum you can reproduce, regardless of how many "dots" are on the sensor/monitor/print. At f/32 that goes down to ~2MP!

The reason you don't see any benefit with large amounts of diffraction is because the sensor dot (pixel/diode) is shifting *within* a single dot (Airy disk)... they never shift far enough to record a different image; the information is always the same.

The way the A7rIII uses pixel shift doesn't increase resolution, nor potential resolution... what it does is collect more information in order to better define/determine the resolution that is there.
It is somewhat misleading to think of a sensor diode/pixel as an image pixel... In the end an image pixel/dot will *represent* a sensor diode/pixel, but the sensor diode only records a small portion of the information used in determining what that image pixel will be (current RGB designs). The rest of the information comes from what is gathered by other pixels and an overall algorithm/database/etc... pixel shift is just another form of "oversampling."
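For anyone curious where figures like 16MP at f/11 can come from, here is one way to reproduce them: a sketch using the Rayleigh spot-size approximation at ~550 nm on a 36 × 24 mm sensor. Different assumptions shift the exact numbers, so treat them as ballpark only.

# Ballpark sketch of diffraction-limited "dot counts" on a full-frame sensor,
# using the Rayleigh-resolvable spot size (1.22 * lambda * N) at ~550 nm.
WAVELENGTH_UM = 0.55
SENSOR_AREA_UM2 = 36_000 * 24_000  # 36 x 24 mm in square microns

def diffraction_limited_megapixels(f_number):
    spot_um = 1.22 * WAVELENGTH_UM * f_number
    return SENSOR_AREA_UM2 / (spot_um ** 2) / 1e6

for n in (5.6, 8, 11, 16, 32):
    print(f"f/{n}: ~{diffraction_limited_megapixels(n):.0f} MP of distinguishable dots")

With those assumptions it lands on roughly 16MP at f/11 and about 2MP at f/32, matching the figures above; f/8 works out to around 30MP, so a 42MP sensor is already oversampling even a perfect lens there.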
 
"Resolution" is all a matter of "matching dots"...
A detail is a point of light/shadow and it is projected as a dot by the lens. The size of this dot (airy disk) varies with the aperture of the lens... the smaller the aperture, the larger the dot. This dot is then recorded by dots on the sensor (diodes). It is converted into output dots (image pixels). and then displayed as dots (screen pixels/printer dots/etc)...

At whatever point in this system the dots are largest, that is the limit of resolution. For instance, a (theoretical) perfect lens set to f/11 can project a miximum of 16MP worth of dots onto a 35mm sensor (yellow-green wavelength, w/o AA filter). So 16MP is the maximum you can record, and the maximum you can reproduce, regardless of how many "dots" are on the sensor/monitor/print. At f/32 that goes down to ~ 2MP!

The reason you don't see any benefit with large amounts of diffraction is because the sensor dot (pixel/diode) is shifting *within* a single dot (airy disk)... they never shift far enough to record a different image, the information is always the same.

The way the A7rIII uses pixel shift doesn't increase resolution, nor potential resolution... what it does is collect more information in order to better define/determine the resolution that is there.
It is somewhat misleading to think of a sensor diode/pixel as an image pixel... In the end an image pixel/dot will *represent* a sensor diode/pixel, but the sensor diode only records a small portion of the information used in determining what that image pixel will be (current rgb designs). The rest of the information comes from what is gathered by other pixels and an overall algorithm/database/et-all... pixel shift is just another form of "oversampling."
Thanks Steven.
Perhaps the normal image does some undersampling, specifically of the colour information. Maybe the pixel shift just corrects that, but there seems to be no oversampling.
Would there be any advantage in moving the pixels by 1/2 instead of 1? This would mean that you would need to process 8 images instead of 4 (or would that be 9 images?).
 
Perhaps the normal image does some undersampling, specifically of the colour information. Maybe the pixel shift just corrects that, but there seems to be no oversampling.
One advantage of a 1-pixel shift just occurred to me. It reduces the impact of the Bayer filter. Instead of each pixel on the sensor only seeing green *or* blue *or* red, a 1-pixel shift would allow each pixel to see (green and blue) or (green and red). It seems to me that ought to permit more accurate colour rendering.
 
One advantage of a 1-pixel shift just occurred to me. It reduces the impact of the Bayer filter. Instead of each pixel on the sensor only seeing green *or* blue *or* red, a 1-pixel shift would allow each pixel to see (green and blue) or (green and red). It seems to me that ought to permit more accurate colour rendering.
Yes, that's the only thing it does. The A7rIII uses 4 images with a 1px shift in a square pattern, so each image point is sampled by each diode in a 4-pixel RGB quad. This gives the output much more information/accuracy per pixel, not only in what that pixel should be (color accuracy), but where the transition point truly is (apparent sharpness/detail/contrast). And because it's a combination of 4 exposures it can extract more from shadows/darks (increased DR).
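As a toy illustration of what that combination amounts to (a sketch of the principle only, not Sony's actual processing): with four exposures shifted around a one-pixel square, every photosite location ends up sampled through one red, two green and one blue filter, so no colour has to be borrowed from neighbours.

# Toy sketch of the principle only (not Sony's pipeline): four exposures shifted
# around a 1-pixel square let every location see all the filters of an RGGB quad.
BAYER = [["R", "G"],
         ["G", "B"]]

def filters_seen_at(row, col):
    # Colour filter that covers this location in each of the four shifted frames.
    shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]
    return [BAYER[(row + dr) % 2][(col + dc) % 2] for dr, dc in shifts]

for r in range(2):
    for c in range(2):
        print(f"site ({r},{c}) sees filters: {filters_seen_at(r, c)}")
# Every site sees R, G, G and B across the four frames, so colour is measured
# rather than interpolated (demosaiced) from neighbouring photosites.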
 
Perhaps the normal image does some undersampling, specifically of the colour information. Maybe the pixel shift just corrects that, but there seems to be no oversampling.
Would there be any advantage in moving the pixels by 1/2 instead of 1?
"Oversampling" simply means to take more samples and output it at a lower resolution. You could say that the normal image is under-sampled in that it doesn't have enough accuracy. But really the pixel shift is oversampling as it is using ~170MP recorded output as 42.4MP.
This is the same thing as taking a high resolution image and displaying it at half size. That's effectively 2x oversampling and it causes a reduction of image noise and an increase in apparent sharpness/DR.
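A quick way to see the noise half of that claim (a toy sketch, nothing to do with any particular camera): average 2x2 blocks of pure noise and the standard deviation roughly halves.

import numpy as np

# Toy sketch of the noise benefit of oversampling: averaging 2x2 blocks of pure
# noise roughly halves its standard deviation (four samples per output pixel).
rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, size=(1000, 1000))

half_size = noise.reshape(500, 2, 500, 2).mean(axis=(1, 3))

print(f"full-size noise std: {noise.std():.2f}")      # ~1.00
print(f"half-size noise std: {half_size.std():.2f}")  # ~0.50

That, plus the extra positional information, is a reasonable mental model for why the oversampled result looks cleaner and apparently sharper even though the optics haven't changed.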

Half-stepping the shift would simulate a pixel of 1/2 the dimensions (1/4 the area) and could theoretically provide benefits, but only if there is something to differentiate within that step size... highly unlikely IMO. In order to get the maximum your sensor is already capable of, you need to be at an aperture no smaller than f/5 with a lens that is diffraction limited at f/5 (i.e. doesn't get sharper when stopped down further). The reality is that everything you are doing (pixel shift/focus stack/etc.) is an attempt to actually output 42MP worth of detail, which would be amazing.

The reality is that due to technique/lens/lighting/subject/etc we are often struggling to record even 12MP worth of actual detail (which would be a sharp/detailed image). And it is important to understand that this has nothing to do with image pixel dimensions; the only thing important is the amount of detail w/in an output image's physical size (i.e. A4) and the physical size it was originally recorded at (sensor size)....

This is how it usually works in reality, I'm going to use a real life scenario based on my use of 35mm (FF) sensors.
I have the best lens I can afford, and due to the subject requirements (higher SS, smaller apertures, handheld or unlocked tripod, lower light) I struggle to output 12MP worth of actual resolution using the D3 (12MP sensor w/ AA filter; it's not an actual possibility)... Maybe I'm only getting about 6-8MP output resolution (marginal "OK" images).
So I buy the D4 (16MP w/ AA filter). This does not jump me up to 16MP of resolution. Instead what I am doing is using a 4MP oversample compared to the D3, and I get maybe 10-12MP of actual detail (sharp images).
Then I upgrade to the D5 (21MP w/ AA filter) and now I'm getting 12-14MP... really detailed images (on a good day).
Or I use my D810 instead (36MP w/o AA filter) and now, if I'm really lucky I get maybe 20MP of detail actually recorded... Amazingly detailed/sharp; such an image has so much detail that it cannot even be seen unless it is viewed at a magnified level (enlarged).

What I have actually done throughout this process is increase the sensor resolution in order to "oversample" the 35mm area. The reason I am calling it oversampling is because the limit to recorded resolution is not primarily due to the sensor, it is elsewhere. And because the limit is elsewhere, oversampling never gives me a 1:1 increase in actual recorded detail. So what has occurred is that I have managed to incrementally increase the recorded detail that exists in the 35mm sensor area; and what now matters is how much I am going to enlarge it (i.e. to 297mm/A4 print), and how close I will view it (normal or enlarged/magnified).
This last factor is why larger formats/sensors always generate better results... they need less enlargement for a given output size.
 