
I don't think that is the case... Strobes are quite cost effective compared to most constant lights that are actually of a high CRI across the spectrum. LEDs are also extremely low powered compared to any flash.
But yes, in a situation that is not color critical and where long shutter speeds are suitable, just about any constant light source can work and be easier to position/adjust.


I find this rather hard to believe... at f/16 you're getting a max of ~ 14MP resolution from the A7RII. Now, if the lens/technique in use is already significantly limiting the resolution recorded to near that level then yeah, I can see where it would make little difference.

How is this application so colour critical? I would have thought that the colour could change quite a lot depending on the angle of incident light, so why get too hung up about strict accuracy when nobody is going to notice anyway?

As for f16 only giving a max of 14MP resolution. You haven't tried it, have you? I use it all the time and it shows no significant loss of resolution. Maybe I'll figure out a way to upload a photo and show you.
 

Attachments: DSC07638y-helicon.jpg, DSC07638x-helicon.jpg
Here's another example. This one is shot with an A7R2 but is a single frame from a time lapse. I use f20 for this as focus stacking is effectively impossible with time lapse and I will often compress a little for the 4k frame. The two inserts show the full sized image plus a 100% crop. There is some minor loss of resolution, but it is still very good. I think that it is always worth trying things out to see how they work in reality rather than just following theory. In this case theory applies to only one aspect of the system, and the whole system can produce some unexpected results. That's not to say that the theory is wrong (though no credible theory says you will cut a 42MP res down to 14MP by using f16), just that it needs to be quantified by doing it.
 

Attachments: DSC09030x.jpg, DSC09030y.jpg
I would have thought that the colour could change quite a lot depending on the angle of incident light,
NO, NEVER!
Light is light and is not affected by the AoI, but keys will be.
so why get too hung up about strict accuracy when nobody is going to notice anyway?
Wow… that is a very wrong approach for someone coming from film… like me, where colours had to be right at capture on film!

Many people have experience with given subjects and have memorized pretty well the possible colour range of these. Others may have direct references under the eye, etc.

If your final image's goal is not natural rendition but only artistic intent… then there is no limit or restriction.
 
I started photography with digital and never really got interested in film. Consequently I have never had the restraints that film places on the learning process. Colour is something that we see and it is only the seeing that counts. All the measurements in the world are only guides to the final objective, which is seeing.
Oddly enough my photography is very concerned with accurate depiction of fungi, as it is often used in scientific publications, but it is also very concerned with art. The science conveys information and if the information is essentially wrong then it is of little value. The art brings in the audience and if there is no audience then it is also of little value. Of course there are compromises, but I try to understand the critical features so I know to keep them true. Colour is one of them.
The final image is part of a recent timelapse and is reasonably true to colour, but I have learnt that colour can be a very variable thing. I use natural light with field photography and I know that colour can vary dramatically depending on the light. Our perception of colour is dependent on many things and is rarely simple. Angle of incidence does affect colour in nature because surfaces can reflect different incident angles in different ways. Structural colours give radically different colours at different angles (butterfly wings, etc) and can never be captured accurately with a camera. We do our best, but what a camera can reproduce is not always what we originally saw.

Do you think I managed to exceed 14MP with my f16 and f20 images?
 
How is this application so colour critical? I would have thought that the colour could change quite a lot depending on the angle of incident light, so why get too hung up about strict accuracy when nobody is going to notice anyway?
I didn't say it was color critical... I would say it's not because the subject is basically greyscale. FWIW, I've been messing with using DIY high CRI LED sources myself. But for equivalent money, strobes/speedlights are going to give a lot more power w/o the color limitations... i.e. greater versatility.

As for f16 only giving a max of 14MP resolution. You haven't tried it, have you? I use it all the time and it shows no significant loss of resolution. Maybe I'll figure out a way to upload a photo and show you.
It's a simple matter of physics that dictates you cannot possibly be recording more than 14MP of resolution at f/16 on a 35mm sensor... and that's for blue wavelengths. It actually averages as a max of somewhere around 8MP for RGB wavelengths combined (G is 2x more important as there are 2x as many G-filtered pixels... it's also the color used for luminance information).
This article explains the physics of it. And this table from the article shows the maximum resolution recordable per aperture/sensor size in B/G/R wavelengths (the Bayer filter array)... for FF at 35mm it's 14/7/5 MP respectively.
TABLA3.jpg
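If anyone wants to sanity-check numbers like those, here's a rough sketch (my own assumptions, not from the article): treat the smallest resolvable spot as the Rayleigh radius 1.22 x wavelength x f-number and count one sample per spot across a 36x24mm frame. With wavelengths of roughly 0.40/0.55/0.70 um for blue/green/red it lands close to the 14/7/5 MP figures in the table.

```python
# Rough sanity check of the "maximum recordable MP per aperture" idea.
# Assumptions (mine): resolvable spot = Rayleigh radius 1.22 * wavelength * N,
# one sample per spot, 36x24mm sensor. Wavelengths chosen to match the table.

SENSOR_W_UM, SENSOR_H_UM = 36_000, 24_000   # full-frame dimensions in microns

def diffraction_limited_mp(f_number, wavelength_um):
    spot_um = 1.22 * wavelength_um * f_number      # Rayleigh spot radius in microns
    return (SENSOR_W_UM / spot_um) * (SENSOR_H_UM / spot_um) / 1e6

for name, wl in [("blue", 0.40), ("green", 0.55), ("red", 0.70)]:
    print(f"f/16, {name}: {diffraction_limited_mp(16, wl):.1f} MP")
# prints roughly 14 / 7.5 / 4.6 MP, in the same ballpark as the table
```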


Do you think I managed to exceed 14MP with my f16 and f20 images?
No, not per image/slice at least. But even if you had it wouldn't be apparent in these small/compressed images...
In fact, a human cannot see more than ~12-14MP in any image with standard viewing (i.e. the image occupies ~45° FOV and can be viewed as a whole). That's not to say you need to record more than 14MP; you almost certainly don't. And if there isn't more than 14MP of detail in the scene, you couldn't record more regardless.
You do increase resolution by stacking images, but it isn't nearly 1:1.

In order to get the most resolution possible you need lighting appropriate to make the details visible, you need a very good stable setup/technique, you need a lens that is extremely sharp at a wide aperture, you need to use a wide aperture and record many more slices for the stack, you need a process to combine all of those slices without errors, and you need to display the image at a very large size in order to make it visible to a human with close/critical viewing.
Like this.
 
It's a simple matter of physics that dictates you cannot possibly be recording more than 14MP of resolution at f/16 on a 35mm sensor... <snip>
This is an interesting question. On the one hand your table appears to show that a 42MP ff camera when used with a lens at f16 would be no better than a 14MP camera, yet this is demonstrably not so.

Why not? I think there may be several reasons which include:
1. Overlapping small pixels, while not as good as non-overlapping small pixels, can still be better than large pixels for resolution. Your table does not allow for this.
2. There are many other factors that impact resolution, including lens, sensor, electronics, and use.

I spent most of my working life in the computer industry, where my greatest passion was working on large system performance. Theory was crucial in understanding how a system would work, but practical measurement was always the gold standard. I guess I have continued with that line with photography. We had a saying "Another beautiful theory shot down by a cold, hard fact". That's not to say that the theory is wrong in its pure form, just that by itself it doesn't explain the system we are concerned with.
 
To follow on, this question about diffraction got me quite curious. Most learned sites seem to say the same, yet my experience is quite different. So I did a test. I took photos of a CD cover from a distance:
1. f8 at 42MP and sharpened. Then cropped and upsized by 1200%
2. f16 at 42MP and sharpened. Then cropped and upsized by 1200%
3. f8 downsized to 14MP then upsized to 42MP. (This was just using photoshop, so the result would not be perfect, but make of it what you will). Finally cropped and upsized by 1200%

Observations:
a) The resolution at f16 is lower than at f8, but not by a huge amount. The main difference is a drop in contrast. Since high contrast is often the bane of good macro photography, this isn't a major problem.
b) The resolution at 14MP is noticeably less, to the extent that most of the writing on the CD is unreadable.

My conclusion is that, for practical photography, diffraction effects are overstated. I don't expect many photographers will take any notice of this conclusion, but for my purposes a 42MP camera with a good lens used at f16 is far superior to a 14MP camera (in spite of sites which I generally trust saying that's not true). I'll probably do some more experimenting to see if f14 or even f11 has any advantage over f16, but the trade-off of having to use more slices in the stack will probably outweigh any advantages.
 

Attachments: DSC06893.jpg, DSC06894.jpg, DSC06892.jpg
This is an interesting question. On the one hand your table appears to show that a 42MP ff camera when used with a lens at f16 would be no better than a 14MP camera, yet this is demonstrably not so. <snip>
There are a couple things that aren't made clear in the purely theoretical explanations.

One is the randomness of details in a scene... it is typically said that it requires 2 pixels to resolve/define a detail (i.e. line/edge). But if that detail is so small (or pixel so large) that it falls completely w/in a single pixel, then it basically requires 4 smaller pixels to define/see that edge (but it is still just one detail). This is why there is (almost) always some small increase in recorded resolution with an increase in MP's (greater sampling frequency).

Additionally, based on the theory one might assume that a 14MP sensor will resolve all of the 14MP of detail available in the same situation... that is unlikely. But the 42MP sensor would, or at least more nearly.
 
To follow on, this question about diffraction got me quite curious. Most learned sites seem to say the same, yet my experience is quite different. So I did a test. I took photos of a CD cover from a distance:
....
3. f8 downsized to 14MP then upsized to 42MP. (This was just using photoshop, so the result would not be perfect, but make of it what you will). Finally cropped and upsized by 1200%
Definitely introducing a lot of mathematical errors by scaling that hard and in both directions...
My conclusion is that, for practical photography, diffraction effects are overstated.

I would tend to agree with that. Especially with macro photography... often the gains in overall image sharpness (DOF) are more significant than the loss of very fine details. Especially if the image is going to be viewed at a size that would make those fine details invisible anyway.


Edit: But you might find this interesting (I believe it's your setup). The 90/2.8 is rated to deliver 42MP on the A7RII at f/2.8. At somewhere around f/16 it's delivering ~ 12MP, and at f/22 it's down to only 6MP.
(also note that they stop the scale at 12MP for the reasons I mentioned).

Screen Shot 2017-10-29 at 5.56.06 PM.jpg
 
The Sony 90mm f2.8 is a very good macro lens and it is what I mostly use. It would seem that what I do is impossible, but that doesn't worry me a lot. In fact it doesn't worry me at all. If I am really using a 12MP camera/lens system, I am somehow getting a vastly superior result to what I was getting with a 12MP or a 22MP camera. It does make me wonder some more about how we see things as I am sure that the mathematics is right in its own way. It's just that its way is not the way we see things. And in the end, that is all that matters.
 
This is an interesting question. On the one hand your table appears to show that a 42MP ff camera when used with a lens at f16 would be no better than a 14MP camera, yet this is demonstrably not so.
<snip>

True. Those diffraction-limited theories are irritating. The science reads well enough, as far as it goes, but is incomplete - because the fact remains that a higher MP camera will always deliver more detail from the same lens, at all apertures. You sometimes might have to look closely, but it's there.

<snip> there is (almost) always some small increase in recorded resolution with an increase in MP's (greater sampling frequency).

Additionally, based on the theory one might assume that a 14MP sensor will resolve all of the 14MP of detail available in the same situation... that is unlikely. But the 42MP sensor would, or at least more nearly.

And that's the reason, sampling frequency and cascading MTFs.

Edit: I have some Imatest MTF graphs that show this, and a few other things, if anyone's interested :)
 
The Sony 90mm f2.8 is a very good macro lens and it is what I mostly use. It would seem that what I do is impossible, but that doesn't worry me a lot. <snip>
You are (almost) certainly getting more resolution from the A7RII. Put the same lens on the A7R (36MP) and f/16 it's only getting around 6MP instead of the 12MP on the A7RII... it's just not 36MP vs 42MP.

When we get an image that records around 12MP actual we perceive it to be extremely sharp/detailed, because it is. But here's where what we see and actual recorded detail fail to correlate. When we enlarge that 12MP image we expect it to remain "sharp," but for some reason we do not really expect additional details that we could not see before to become visible. However, if there is more recorded resolution, then as the image is enlarged new details should become visible (like examining something with a magnifying glass).
What's hard to wrap our head around is the idea that most of our struggles/efforts are just to get 12-14MP recorded, regardless of the equipment used. A 12MP camera will almost never do it, a 22MP camera can in more cases, and a 42MP camera should be able to in most cases. But there is always the potential for other limiting factors to be introduced into the system.
 
You are (almost) certainly getting more resolution from the A7RII. Put the same lens on the A7R (36MP) and f/16 it's only getting around 6MP instead of the 12MP on the A7RII... it's just not 36MP vs 42MP. <snip>
The thing is that perceptually there is a small difference between the resolution of a photo taken at f16 and one taken at f8 and no difference between one taken at f8 and f2.8. Theory would lead us to think that there is a huge difference.
 
The thing is that perceptually there is a small difference between the resolution of a photo taken at f16 and one taken at f8 and no difference between one taken at f8 and f2.8. Theory would lead us to think that there is a huge difference.
Actually, with macro I would think the f/8 image would be perceived to have more resolution/detail due to DOF (it probably would actually have more detail). And keep in mind that if viewed at the same size a lot of the resolution advantage is lost/compressed. Think of it as two letter "A"s... one is made with a stamp and the other is printed by an inkjet printer. Viewed at the same size they look exactly the same, but the inkjet letter is made up of a lot of tiny dots (details) that you just can't see yet.
To put numbers to it, consider the f/16 image is probably in the 10-11MP range (based on DXO), the f/8 image is in the 14+MP range which is (near) the limit for visibility as a whole image, and the f/2.8 could be 42MP but you can't see it yet (flat focus plane subject). Or, the subject may only have ~14MP of details in it, so it's not possible to record more regardless.

But yeah, I wouldn't get too wrapped up about it...
 
Doesn't make sense. If the detail is there, you can expand it to see that detail - and you can't. It sometimes needs to be pointed out that the emperor actually has no clothes.
 
Doesn't make sense. If the detail is there, you can expand it to see that detail - and you can't. It sometimes needs to be pointed out that the emperor actually has no clothes.
Or, the subject may only have ~14MP of details in it, so it's not possible to record more regardless.

How much detail is there in a completely blurred image? (0MP) How much detail is in a picture of a blank white wall ? (0MP) How many MP to reproduce a checker board? (.000064MP)
I can't quantify how much 14MP of detail in a scene looks like, not to mention 42MP. I'm not even sure I can comprehend that because it is well beyond the limits of human vision...

Consider this, a high resolution computer monitor (i.e. retina) has a display with pixels ~100x100um in size. That's ~ 650 pixels on your camera's sensor, or ~ 9px/mm compared to 220px/mm. A 1:1 FF image of such a computer screen would have a max of ~ .07MP of detail (324px X 216px) regardless of what was displayed on it or how many pixels were used to record it.
I think the reality is that there just isn't normally that much actual detail in a scene to be resolved. Increasing the magnification doesn't increase the amount of detail visible/recordable either really... it just changes what is recordable (increases sizes bringing out details and simultaneously eliminates detail from the FOV/DOF).
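To put the monitor example above into numbers, here's a quick sketch of the arithmetic using the approximate figures from this post (~100um display pixels, ~220 px/mm on the sensor, 36x24mm frame at 1:1):

```python
# The "detail actually in the scene" example, in numbers.
# Approximate figures from the post: ~100um display pixels (~9 px/mm),
# ~220 px/mm on the sensor, 36x24mm frame photographed at 1:1.

sensor_w_mm, sensor_h_mm = 36, 24
screen_px_per_mm = 9        # detail available in the subject (the display)
sensor_px_per_mm = 220      # pixels used to record it

scene_mp  = (sensor_w_mm * screen_px_per_mm) * (sensor_h_mm * screen_px_per_mm) / 1e6
sensor_mp = (sensor_w_mm * sensor_px_per_mm) * (sensor_h_mm * sensor_px_per_mm) / 1e6

print(f"detail in the framed scene: {scene_mp:.2f} MP")   # ~0.07 MP (324 x 216)
print(f"pixels recording it:        {sensor_mp:.0f} MP")  # ~42 MP
```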
 
To say that something resolves to 42MP and something else to 14MP says that the first has 3x the areal resolution of the second (or 1.73x the linear resolution). That is a lot and is clearly not what we get in the real world.
 
To say that something resolves to 42MP and something else to 14MP says that the first has 3x the areal resolution of the second (or 1.73x the linear resolution). That is a lot and is clearly not what we get in the real world.
Like I said, I firmly believe most of our efforts are just to record/display an actual 12-14MP that we can see... the difference between 12 and 14 may be indistinguishable w/o perfect vision. The difference between 8 and 12 noticeable but not huge, and below 6 is starting to get marginal. FWIW, the "standard" for acceptable sharpness in an image requires less than 2MP (but w/ digital we tend to be a lot more critical).

To resolve an actual 42MP would require a flat field subject (and a flat field lens) with a checkerboard of lines at 110LP/mm at 100% contrast, that doesn't exist in the real world AFAIK... well, it may, I'll just never see it...
 
I think that what is happening is that the effect of improving lens resolution as you stop down, due to lens effects, is being counteracted by decreasing resolution, due to diffraction effects. The net effect is improvement up to a certain fstop (lens dependent) and then a decline. The decline is nothing like the amount that would be due to diffraction alone. I.e. we do not have a one variable system, so any theory that only considers one variable is going to be very dodgy. Those MTF graphs are an attempt to describe lens effects, but I have never seen any that extend to f16. I would hope that macro lenses are designed to get their best resolution at high fstops.
 
Always interested if I can learn something new.

Oh bugga, I thought I'd got away with that ;) I've got some hand drawn graphs I did a while ago, somewhere...

I think that what is happening is that the effect of improving lens resolution as you stop down, due to lens effects, is being counteracted by decreasing resolution, due to diffraction effects. <snip>

That's exactly what happens. Raising the f/number improves sharpness as this reduces most aberrations, with the notable exception of diffraction which gets worse. When the f/number gets to a point where improvements from stopping down are overtaken by diffraction getting stronger, then that's where the lens peaks* and sharpness declines at higher f/numbers. But all lenses are affected in the same way, by the same amount (on the same camera/sensor) and usually from around f/11 or so onwards on full-frame, diffraction is the limiting factor regardless of lens quality.

*sharpness peak is usually around f/5.6 these days, though I've seen some of the very best lenses peaking at f/2.8, eg Canon 35/1.4 Mk2, Zeiss Otus 85/1.4.
 
That's exactly what happens. Raising the f/number improves sharpness as this reduces most aberrations, with the notable exception of diffraction which gets worse. <snip>
Thank you. It all starts to make sense and it fits well with my method of - read the theory as a first approximation, but try it to see what works best for me. In the end, reality trumps theory.
 
I think that what is happening is that the effect of improving lens resolution as you stop down, due to lens effects, is being counteracted by decreasing resolution, due to diffraction effects
Yes. And I think that very often the increased resolution of larger details (increased DOF) more than offsets the reduced resolution of fine details (diffraction).

I would hope that macro lenses are designed to get their best resolution at high fstops.
That is impossible. Light itself is "sharpest" when it is not interfered with...
A perfectly sharp (theoretical) lens with no inherent optical errors will be at its sharpest at its widest aperture (i.e. its sharpness is only limited by diffraction due to aperture restriction... a "diffraction limited lens"). What normally happens is that there are optical errors which are minimized by stopping down the aperture, which also introduces diffraction... i.e. the sharpest aperture is typically a compromise where sharpness is increased *more* by correcting optical errors than sharpness is reduced by increasing diffraction.
There is an aperture where diffraction causes a point of light to spread to the size of 2 pixels on the sensor... this is the point of "diffraction limiting" sensor resolution (as Hoppy mentioned). Prior to that there is some loss of contrast due to a point of light covering more than 1 pixel, but it is still discernible as separate to some degree. And at even smaller apertures the point of light covers more than 2 pixels and the recorded resolution decreases (fine details are lost).

The "2 pixel limit" is due to the Nyquist limit; it takes 2 pixels to identify 1 detail. I.e. the Nyquist limit for your sensor with 220px/mm is 110LP/mm resolution; if the scene has that much detail and the rest of the system (lens, AA filter, etc) can deliver it.

try it to see what works best for me
Yes. Find what works best for what you have/what you're doing... that's all we can do.
Going back to the beginning... the reality is that you are not getting anywhere near 42MP of detail per image. But that's OK. You don't need that much detail and you're getting *more* than you were before... it's "what works best" for you. But if the subject does have that much detail and you needed to record it, then you would have to do things differently.
 
Always interested if I can learn something new.

Okay then, geek alert :D I did these tests a few years ago, hence not the best cameras available today but more than good enough for this. I use Imatest MTF http://www.imatest.com/

This is one lens, Sigma 35/1.4 Art, tested on three cameras with pixel counts of 21mp (Canon 5D2), 46mp (Canon 7D, full-frame equivalent), and 74mp (Nikon V1, FF equiv, via lens adapter). Test images are all from the centre of the frame, and are output as % MTF contrast (vertical axis) at resolutions of 30 lines-per-mm on the top three traces, and 48 lpmm on the bottom three. The difference between 30 and 48 is 1.6x crop factor.

The graph shows a number of things, but in relation to this thread and diffraction, it clearly shows that sharpness improves with more pixels, throughout the aperture range to f/16. It also shows how diffraction hurts smaller formats more - the bottom three traces decline at a faster rate after the peak. If this is taken further, much further with say a smartphone's minuscule sensor/pixels, then not only does diffraction hit harder, but much sooner too. Correspondingly, some smartphones and small-sensor compacts use ND filters to control brightness rather than lens aperture as the diffraction effects even at f/2.8 are unacceptable.

Other points of interest, by testing at a set resolution (much better and more relevant than fixed contrast MTF50 that most lens testers use) it's easily possible to adjust by the crop factor and show exactly how the same lens performs on different formats. Note how the third trace down at 21mp on full-frame, is substantially better than anything on 1.6x crop format, even on a sensor with almost 4x the number of pixels (fourth trace down). For sharpness, bigger formats always win - because of the higher contrast delivered at the lower magnification they work at.

The law of diminishing returns - there's a big difference between 21mp and 46mp, but the jump up to 74mp yields much less improvement. While that's certainly true, the exact difference may be hidden/reduced by the AA filters which all these cameras have (of unknown strength). This really needs a retest on cameras without AA filters that are available now, but reading between the lines (haha) there may be evidence that the AA filter on the Canon 7D is capping peak sharpness slightly.

This Sigma 35/1.4 Art is a very sharp lens, peaking around f/2.8 to f/4 and diffraction-limited after that. Less sharp lenses peak later (maybe at f/5.6-8 in the centre, typically at higher f/numbers towards the edges that are more prone to aberrations) and at a lower level, but once diffraction cuts in they're all the same by f/11-16.

Ignore the dogleg at f/11 on the 7D. Pretty sure that's just a software sampling error that has bumped the percentage contrast by about 5% but by the time I realised this I'd packed everything away and found out what I needed to know. And that was, how many pixels do you need to max out the best lenses, before throwing any more at it makes no noticeable difference? With the caveat re AA filters, I'd reckon this is probably around 80mp (on FF), and that's in highly controlled and optimised tests like this. But I'll also offer this observation - a difference of 5% MTF is quite hard to discern, even at high magnification - you need more like 10% before it gets noticeable.

Now the Imatest MTF process uses quite small target areas dotted over the frame, but only needs one (a dark square on a light background, about the size of a playing card) to generate data. So you can cut out just one and position it in a scene, or even pin it to a moving subject and, given suitable lighting conditions, take a very accurate MTF sharpness reading directly from real world images. And that's where you get a shock - with all the variables of actual picture taking (eg focusing errors, camera-shake, subject movement, pixel counts, high ISO, viewing limitations etc etc) you can get pictures that look very sharp indeed but check out at maybe 20% below the potential maximum.

In the field, maxing out a really good camera/lens is bluddy hard, and frankly you rarely need to for excellent results. So bringing this full circle, diffraction is real and can really hurt peak sharpness, but if you stay away from the highest f/numbers you'll probably be fine, and paying closer attention to other aspects of technique is more important. Diffraction varies with format and FWIW, I try not to go above f/11 on full frame though f/16 is usually okay. On APS-C crop format, that translates to f/8-11, and on M4/3 f/5.6-8.

untitled-014.jpg
 
Okay then, geek alert :D I did these tests a few years ago, hence not the best cameras available today but more than good enough for this. I use Imatest MTF http://www.imatest.com/
I sometimes wish I had Imatest... but then I think I'm better off just not knowing ;)

You used test targets with fixed resolutions? 48LP/mm is only 60% of the theoretical max of the 21MP 5D2 (and ~30% of the V1), and would theoretically only require 7MP to resolve.
The chart shows pretty much what I would expect, but I'm not sure how you came to "maybe 20% below the potential maximum" from the results... unless I'm misunderstanding something.
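Here's my own back-of-envelope version of those numbers (nominal 36x24mm frame, two pixels per line pair); it comes out at roughly 8 MP for 48 lp/mm, in the same ballpark as the ~7MP I mentioned:

```python
# Nyquist frequency of a sensor from its horizontal pixel count, and the
# pixel count needed to just sample a given spatial frequency.
# Assumptions: nominal 36x24mm frame, two pixels per line pair.

def nyquist_lp_mm(h_pixels, sensor_w_mm=36):
    return (h_pixels / sensor_w_mm) / 2

def mp_to_sample(lp_mm, w_mm=36, h_mm=24):
    px_per_mm = 2 * lp_mm
    return (px_per_mm * w_mm) * (px_per_mm * h_mm) / 1e6

print(f"5D2 (5616 px wide) Nyquist:   {nyquist_lp_mm(5616):.0f} lp/mm")   # ~78 lp/mm
print(f"48 lp/mm relative to that:    {48 / nyquist_lp_mm(5616):.0%}")    # ~60%
print(f"MP needed to sample 48 lp/mm: {mp_to_sample(48):.1f} MP")         # ~8 MP
```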

As a side note, there doesn't appear to be a way to directly correlate a given resolution at a higher contrast to a higher resolution at a lower contrast (say a 10% minimum contrast requirement)... I was kind of hoping it would be "cleaner."
And I have no idea how to correlate/apply the eye's contrast response curve to MTF (smaller details require more contrast to separate them). I suppose it's irrelevant to the empirical data/testing, but not to the understanding of how an image will be perceived. I don't know why it never occurred to me before but, as recorded resolution increases the contrast decreases, which is exactly the opposite of what we need. Hmmm...

Edit: DXO takes the tested resolution, applies the response curve, and calculates the number of pixels required to generate that MTF... I wonder if the use of the response curve is why (just about) every lens is reported as being sharpest and with its max resolution being at its max aperture, when I don't believe that would be the case using more traditional methods (i.e. MTF 50).
I don't really like their use of the term "Perceptual MPs" either as it ignores display size/distance as a factor... i.e. there's no way you will ever see anything near 42MP of detail in an 8x10 print viewed from 12"... you might with a microscope if the display method could replicate the detail at that size (but it can't).
 
I sometimes wish I had Imatest... but then I think I'm better off just not knowing ;)

You'd love Imatest Steven, right up your street :) I use it for all sorts of testing - lenses obviously, but also for things like speedlite field coverage in the vignetting module, or just recently testing tripods for resistance to wind-buffet and mirror/shutter slap. That was quite scary, even with the wind machine turned down low (Beaufort Scale 2). There is no such thing as a rock solid tripod, and weight is your friend ;)

You used test targets with fixed resolutions? 48LP/mm is only 60% of the theoretical max of the 21MP 5D2 (and ~30% of the V1), and would theoretically only require 7MP to resolve.
The chart shows pretty much what I would expect, but I'm not sure how you came to "maybe 20% below the potential maximum" from the results... unless I'm misunderstanding something.

There's no resolution limit. You input whatever figure you want and read out % contrast at a fixed resolution, or just swap things around and read resolution at a fixed MTF. Most other internet lens tests quote resolution at MTF50, which is the wrong way around IMHO. The former method is both more relevant (because it emphasises contrast) and also makes it easy to directly compare the same lens on different formats simply by changing resolution by the crop factor, among other benefits. Camera manufacturers' MTF graphs do it the right way around, as does Roger Cicala on his Lens Rentals blog.

The standard Imatest target is a pattern of dark squares over the image area, on a lighter background (more reliable than straight black on white). They're tilted at a slight angle to avoid spurious resolution (aliasing). The system reads edge sharpness and I measure all four sides and average them - left and right usually read lower than top and bottom due to chromatic aberration that can be measured separately. I use my own custom targets though, made from scalpel-cut vinyl. The edges are cleaner, so I'm reading the lens and not the print quality of the target. I have multiple targets of different sizes that can be positioned anywhere in the frame, and they're manually focused individually with the aid of a wedge (like the Spyder LensCal). This is the only way to guarantee optimum focus, it eliminates field curvature effects and I can scale things up to test wide-angles at realistic shooting distances (my wide-angle target is the equivalent of 20ft wide).

This is what the standard Imatest test chart looks like
http://store.imatest.com/test-chart...ble-charts/sfrplus-chart-inkjet-on-paper.html

and if you scroll down here and look at the bicycle images, you'll see one of my custom targets attached to the crossbar
https://www.dpreview.com/reviews/up...n-100-400mm-f4-5-5-6l-is-usm-mark-ii-review/5

As a side note, there doesn't appear to be a way to directly correlate a given resolution at a higher contrast to a higher resolution at a lower contrast (say a 10% minimum contrast requirement)... I was kind of hoping it would be "cleaner."

As above, with Imatest you just input whatever test parameters you like and run the analysis, then rerun it with different parameters. But if you specified only 10% MTF contrast you'd get sky high resolution figures bumping up to Nyquist without much differentiation. The figures I use for lens tests are 24 lpmm on full-frame, 36 lpmm on APS-C, and 48 lpmm on M4/3, as I think the most significant and relevant sharpness data is the % contrast at those levels, and lens manufacturers would seem to agree. That gives a very good 'snapshot' of lens performance, though a full analysis would require many more traces at different parameters, dozens of runs. However, I've done a lot of crosschecking like that and always found that a sharp lens is a sharp lens, whichever way you test it. It's like a car that accelerates fast will also have a high top speed, and vice versa.

And I have no idea how to correlate/apply the eye's contrast response curve to MTF (smaller details require more contrast to separate them). I suppose it's irrelevant to the empirical data/testing, but not to the understanding of how an image will be perceived. I don't know why it never occurred to me before but, as recorded resolution increases the contrast decreases, which is exactly the opposite of what we need. Hmmm...

Absolutely. This is the basic concept of MTF testing and fundamental to understanding our visual perception of sharpness, that generally has more to do with high contrast at relatively low resolution than it does with spiralling pixel counts. We get excited when a new camera comes out with a few more megapixels, though given the levels we're at today it makes very little visible difference, if any, even with a high quality lens. But move from APS-C to full-frame, contrast automatically increases and the sharpness of all your lenses jumps up as if by magic.

Edit: there are some fundamental issues with DxO's methodology.
 
You'd love Imatest Steven, right up your street :) <snip>
Ha!

No, there's no resolution limit.
Got it.
So where do you think they would plot if you tested at the 80LP/mm the 5D2 should theoretically be capable of? Based upon the way the 5D2 separated at 48LP/mm I'm guessing it would flatline at 0 or close to it. I suggested 10% as that correlates to the Rayleigh criterion for the minimum required contrast to perceive a detail, and would be a good indicator of the maximum resolution the system is capable of (possibly pushing near Nyquist as you said).

The figures I use for lens tests are 24 lpmm on full-frame 36 lpmm on APS-C and 48 lpmm on M4/3 as I think the most significant and relevant sharpness data is the % contrast at those levels, and lens manufacturers would seem to agree.
I always assumed lens manufacturers use 10 and 30 LP/mm as it makes the results look better... ;)

But move from APS-C to full-frame, contrast automatically increases and the sharpness of all your lenses jumps up as if by magic.
Yes, I've always correlated it to sensor size... not really to resolution directly. I guess for a given sensor size the reduced contrast (per pixel) at higher resolutions kind of negates itself for any given display size, much like per pixel noise does.
 
That is impossible. Light itself is "sharpest" when it is not interfered with... <snip>
I probably should rephrase that. Since all lenses are a compromise the designers will always focus on some design aspect to the relative detriment of some others. I would hope that macro lens designers would focus on the small aperture performance of the lens to the relative detriment of large aperture performance. I have noticed that this is true with some lenses.
To diverge slightly, can anybody tell me, what is the equivalent quality of the MP-E 65mm at different magnifications? I understand that at 1:1 f16 is true, but at 5:1 f16 is really f64. I use f16 at all times, but the quality degrades noticeably at greater magnification than 3:1. Taking that as a starting point, what fstop should I be using at 4:1 or 5:1?
 
So where do you think they would plot if you tested at the 80LP/mm the 5D2 should theoretically be capable of? <snip>

I always assumed lens manufacturers use 10 and 30 LP/mm as it makes the results look better... ;)

30 lpmm is a pretty representative standard I think. I don't see much point in 10 lpmm TBH and would rather see maybe 50 lpmm or something but it wouldn't change much. If you had maybe six lenses and tested them all every which way, I'd put money on them all finishing every test in the same order of merit. With Imatest, not only do you get the numerical data but also a graphic representation from 100% to zero contrast and any resolution parameters selected, so you can see at a glance what's going on at other frequencies - see attached.

The lens is a Canon 100-400mm Mk2, but this is just a random example and pretty meaningless on its own - a snapshot of a snapshot of a snapshot - but you can see how MTF contrast and resolution move in tandem down a fairly predictable line, closely tracking the diffraction limit at f/16 here (bottom graphic). When it gets down to 10% MTF and below approaching Nyquist, there are often little bumps and wiggles as the software tries to make sense of an image that's hardly got more definition than a bowl of porridge, not to mention any AA filter and spurious resolution artifacts. Results may be unreliable, but I would say mostly irrelevant too ;)

untitled-5858_YA26_02_sfr.png
 
I don't think it's possible to design lens elements to be "better" at smaller apertures than at larger apertures... You can either make elements better where it takes less stopping down to correct, or you can make them worse where smaller apertures deliver notably better performance. The main design criteria for a macro lens are close focus capability and a flat focus field.

All macro lenses lose light when used in close focus/macro mode but they usually don't report it; it's called "bellows factor." When you start going into macro/close focus the projected image circle expands (providing the "magnification") and when you spread the light out farther, there is less light per area. So when they say it's f/64 at 5:1 that's relating to the amount of light the sensor receives... but it's also expanding the size of the dots projected by the aperture and therefore the effect of diffraction (basically, the cause is different but the result the same).
2 F-stops doubles the minimum size of the airy disk (projected dot size/diffraction) as does doubling the magnification... so if f/16 is true at 1:1, 2:1 is f/32 (2 stops), 4:1 is f/64 (4 stops), and 5:1 would be about 1/3 stop smaller yet (f/75?). You can work this backwards from f/16 to get f/4 for 4:1, or f/3.5 for 5:1 for equivalent diffraction effect and light density.
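If it helps, here's a small calculator following the rule of thumb above (f/16 "true" at 1:1, with the diffraction/light-density equivalent scaling roughly with magnification). Note the more common textbook statement of bellows factor is N_eff = N_set x (1 + magnification); this sketch just follows the scaling described in this post.

```python
# Equivalent-aperture calculator following the rule of thumb above:
# if f/16 is "true" at 1:1, the diffraction/light-density equivalent
# scales roughly with magnification. (The usual textbook bellows formula
# is N_eff = N_set * (1 + m); this just mirrors the post's scaling.)

def equivalent_aperture(f_at_1to1, magnification):
    return f_at_1to1 * magnification

def aperture_to_set(target_equivalent, magnification):
    return target_equivalent / magnification

for m in (1, 2, 4, 5):
    print(f"{m}:1 at marked f/16 -> ~f/{equivalent_aperture(16, m):.0f} equivalent")
# 1:1 -> f/16, 2:1 -> f/32, 4:1 -> f/64, 5:1 -> f/80

print(f"for f/16-equivalent diffraction at 5:1, set about f/{aperture_to_set(16, 5):.1f}")
# ~f/3.2, close to the f/3.5 mentioned above
```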
 
When it gets down to 10% MTF and below approaching Nyquist, there are often little bumps and wiggles as the software tries to make sense of an image that's hardly got more definition than a bowl of porridge, not to mention any AA filter and spurious resolution artifacts. Results may be unreliable, but I would say mostly irrelevant too ;)
It's what I would expect to see from a lens that cannot resolve to the level of the sensor (as tested)... i.e. if the lens was tested at a wider aperture and could resolve to a finer level, then I would expect the curve to shallow out at some higher contrast level.
I'm just thinking out loud here but, it would seem to me that lower contrast levels may be relevant when you consider that contrast/localized contrast are things that are easy to adjust in post with digital images, whereas actual detail/sharpness can not be added in post...

BTW, Cy/mm ='s lines/mm? So approx 28lp/mm at MTF 50?
 
I probably should rephrase that. Since all lenses are a compromise the designers will always focus on some design aspect to the relative detriment of some others. I would hope that macro lens designers would focus on the small aperture performance of the lens to the relative detriment of large aperture performance. I have noticed that this is true with some lenses.

Yes, and lens design can be optimised for close working, but designers can do nothing about diffraction.

To diverge slightly, can anybody tell me, what is the equivalent quality of the MP-E 65mm at different magnifications? I understand that at 1:1 f16 is true, but at 5:1 f16 is really f64. I use f16 at all times, but the quality degrades noticeably at greater magnification than 3:1. Taking that as a starting point, what fstop should I be using at 4:1 or 5:1?

Both lens aperture and stated focal length are only true at infinity focus, and both change at closer distances - very significantly at macro range: at 1:1 an aperture set to f/16 is actually two stops slower, at an effective f/32 (bellows factor). Nikon reports this correctly in the viewfinder, Canon does not.

It's very hard to test macro lenses at macro magnifications. Test slides exist, but are very expensive. If you need to know what kind of image quality you're getting at different settings, and how much depth of field etc, I would simply take some very careful test shots and compare. This will be much more telling than a bunch of data that would be hard to interpret and impossible to compare to anything else. I wonder if the issues you're seeing at 3:1 are actually the lens, or movement blur (camera shake) or focusing? Either way, that Canon MP-E 65 is unique and something of an odd-ball optically, but widely regarded as the best available for magnifications greater than 1:1 :)
 
<snip> I wonder if the issues you're seeing at 3:1 are actually the lens, or movement blur (camera shake) or focusing? Either way, that Canon MP-E 65 is unique and something of an odd-ball optically, but widely regarded as the best available for magnifications greater than 1:1 :)
It's actually a very good lens and if it wasn't for the difficulties using it, I would use it much more. I avoid using 5:1 as I know I need to use less than f16, but that makes focus stacking very hard. I would need to use a powered rail like Stackshot and take dozens of slices, but that means taking about an hour just for one photo (including setup time). I don't think that is worth it. For up to 3:1 and sometimes 4:1 I can take stacks using f16 and a manual focus rail. If the subject is a spider on a leaf, or worse in a web, I have to use flash and hand held and usually no more than 2:1. Focus stacking becomes problematic then but it can work at times.
 
It's what I would expect to see from a lens that cannot resolve to the level of the sensor (as tested)... <snip>

BTW, Cy/mm ='s lines/mm? So approx 28lp/mm at MTF 50?

While the sample graph above of the Canon 100-400 shows a fairly straight line, that varies - it can bow outwards or inwards as the optical characteristics tend towards either higher contrast at lower frequencies, or max resolution, and compensates for sharpness across the frame, too. So I'm not saying all lenses have the same shape graphs, but similar in like-for-like comparisons - that's what I've observed after testing around 120 lenses and it would be unusual for one lens to be notably better than another when tested in one way, and then for positions to reverse when tested in a different way. It could also be argued that this is because all lens designers today are basically striving for the same goals and come up with a similar set of compromises for best overall performance.

Software is very clever these days and some of the images I've seen from smartphones appear to defy physics. They can't of course, but sometimes they 'look' like they do and at the end of the day, that's what matters. Sharpening for example, is an optical illusion, but who cares when it looks so good? I apply zero sharpening for lens testing but if I forget to do that (and Lightroom applies modest sharpening by default) then the MTF figures jump up substantially with a tell-tale bump in the graph so it's easy to spot.

Cycles per mm = line pairs per mm.
 
...I'm not saying all lenses have the same shape graphs, but similar in like-for-like comparisons - that's what I've observed after testing around 120 lenses and it would be unusual for one lens to be notably better than another when tested in one way, and then for positions to reverse when tested in a different way. ...
It occurred to me that this might be due to the minimum size of a detail that can be projected onto a sensor due to diffraction. I.e. at f/2.8 the airy disk for a point of light (green wavelength) is ~ 4um optimally... so it doesn't matter how many pixels are used to record it, it's not going to get smaller (higher resolution). Even a perfect (non-existent) lens would put it at ~2um at f/1.4, which would equate to a min source detail size of ~1.6um at 1:1 reproduction (I think). It may seem contradictory but, a smaller source detail can be recorded if projected at a smaller size/lower magnification...
And we tend to forget that lens manufacture is a mechanical operation... i.e. at the basic level it isn't harder or cost more to create a better lens element. So unless there is a manufacturing error, it is reasonable that most lenses of the same type/construction will perform similarly.

We tend to talk in terms of lines; line pairs/mm. But we're really talking about Airy disks and as such it takes a minimum of 5 pixels to define the whole disk in a linear pattern. And then when you consider that with the Bayer array the sampling area used is not just the individual pixel, it is a 4 pixel area comprised of 2 green, 1 blue, and 1 red pixel each; so more optimally 4 pixels/Airy disk (w/o an AA filter).

We also tend to forget that a MP increase needs to be considered linearly in the standard sense (LP/mm). I.e. the 12 MP increase from the D750 to the D810 is really only a 4MP increase horizontally, and Nyquist limits that to a 2MP gain at best (rough math).
We also tend to forget that Nyquist isn't the maximum that can be resolved, it is the maximum that can be *minimally resolved,* i.e. increased sampling gives increased resolution of the same information. 80MP on a FF sensor would likely provide some benefits, but with "equal negatives" as well.

Getting back to diffraction... I don't worry about it unless I'm willing to focus stack, and I'm usually not. I choose aperture solely based upon optical errors and the required DOF.
 