How much do we need to understand of the technology?

Thanks for all the replies. I won't try to respond to all of them as that would be unmanageable, so I'll try to explain a little more of where I'm coming from.
As with most things, there is the science/factual/theoretical (i.e. pixel-level diffraction, at f/8), there is the effective result of that (output/viewed diffraction, at f/16), and there are the practical implications for what you are doing specifically (the loss of some very fine detail outweighed by greater DOF, fewer images/errors).
I just wanted to know if there were any complexities that would affect my work.
There shouldn't be; certainly nothing that would make it harder or more complex.
 
People hear that an image can become diffraction limited as they stop down, and they believe that the image will be ruined if they stop down any further.

Diffraction and depth of field are two phenomena that are not actually related, except that they both increase as you stop down.
Once you pass the point of a diffraction-limited image, there is nothing you can do to make it resolve more detail.
However, depth of field continues to increase as you stop down further. The maximum sharpness cannot exceed the level defined by diffraction,
but overall, more of the image will appear in focus.

Diffraction will set the maximum sharpness over the entire depth of field, and the entire image will be acceptable. Provided, that is, it is still acceptable at the plane of focus.
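
If it helps to see that in numbers, here is a rough Python sketch (the 50mm lens, 3m focus distance, 0.03mm blur criterion and 550nm wavelength are my illustrative assumptions, not figures from this thread) tabulating total depth of field against the Airy-disk diameter as the lens is stopped down:

```python
# Illustration: depth of field keeps growing as you stop down, while the
# Airy disk (the diffraction blur that caps sharpness) grows too.
# All values below are assumed for illustration only.

f_mm      = 50.0      # focal length
s_mm      = 3000.0    # focus distance
coc_mm    = 0.03      # blur spot treated as "acceptably sharp" (full frame)
lambda_mm = 550e-6    # green light, 550 nm expressed in mm

for N in (4, 5.6, 8, 11, 16, 22):
    H = f_mm**2 / (N * coc_mm) + f_mm                 # hyperfocal distance
    near = H * s_mm / (H + (s_mm - f_mm))             # near limit of DOF
    far = H * s_mm / (H - (s_mm - f_mm)) if s_mm < H else float("inf")
    airy_um = 2.44 * lambda_mm * N * 1000             # Airy disk diameter, microns
    print(f"f/{N:<4}  DOF ≈ {(far - near) / 1000:5.2f} m   Airy disk ≈ {airy_um:4.1f} µm")
```

By f/16-f/22 the Airy disk is approaching the 0.03mm criterion itself, which is the sense in which diffraction sets the ceiling while the depth of field carries on growing.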
 
Diffraction is something that I've never really worried too much about. The theory may be that it'll make pictures less good than they could be, but what matters to me, and maybe what matters to others too, is how the final picture looks. If the theory says that diffraction could be an issue but the final picture looks OK... then who cares about diffraction :D

I've taken pictures at apertures of f/16 and smaller with all of my cameras and I can't say that anyone has ever said "Oh er. Diffraction there mate!"

And on the general subject of how much we need to know.

Once upon a time I used to fix stuff... sometimes we'd be sent to fix a thing we might not have seen before and we just had to get on with it. I worked with a guy who had a different approach to me: he'd try to understand the thing and how it worked, in the belief that understanding would help him to fix it. I had a different approach: I didn't care how it worked or if I understood it or not, I only cared why it didn't work and what I had to do to get it working again. Occasionally back then I had to engage my brain and work things out, but most often a deep understanding of the finer points just wasn't required, and I do believe that the same is mostly true in photography. I do think that we need to be vaguely aware so that, if we need to, we can do some deep thinking or go away and research and learn, but mostly I think that photography is technically pretty simple. See something worth taking a picture of... aperture, shutter, ISO (there's a contentious one :D), perspective, framing and composition... Isn't that enough a lot of the time?
 
Diffraction is something that I've never really worried too much about. The theory may be that it'll make pictures less good than they could be, but what matters to me, and maybe what matters to others too, is how the final picture looks. If the theory says that diffraction could be an issue but the final picture looks OK... then who cares about diffraction :D

I've taken pictures at apertures of f/16 and smaller with all of my cameras and I can't say that anyone has ever said "Oh er. Diffraction there mate!"

And on the general subject of how much we need to know.

Once upon a time I used to fix stuff... sometimes we'd be sent to fix a thing we might not have seen before and we just had to get on with it. I worked with a guy who had a different approach to me: he'd try to understand the thing and how it worked, in the belief that understanding would help him to fix it. I had a different approach: I didn't care how it worked or if I understood it or not, I only cared why it didn't work and what I had to do to get it working again. Occasionally back then I had to engage my brain and work things out, but most often a deep understanding of the finer points just wasn't required, and I do believe that the same is mostly true in photography. I do think that we need to be vaguely aware so that, if we need to, we can do some deep thinking or go away and research and learn, but mostly I think that photography is technically pretty simple. See something worth taking a picture of... aperture, shutter, ISO (there's a contentious one :D), perspective, framing and composition... Isn't that enough a lot of the time?

I guess that with diffraction, the effects really do matter with my photography. I take macro time-lapse, which eliminates the possibility of focus stacking, so I am left with using higher f-stops to get a good DOF. Also, the subjects can grow in unpredictable ways, so the subject can move relative to the best focus, which makes a deep zone of focus useful. On the other hand, I use 4K video output, so the native resolution (20MP or 40MP) can often be reduced - but not always, as I do not know what crop will be best in the end. It is very much a balancing act and will vary with the particular subject. I will use up to f/22 with my older Canon cameras and up to f/20 with the Sony cameras. The problem with using too high an f-stop is that the result looks quite harsh - not a good look on screen. Another problem is that with increased f-stop you get decreased light, which then generates its own set of problems. I find that having a good understanding of the way things practically vary is good. Sometimes understanding the theory can help, but often it is of academic interest only.
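
On the 4K point: downsizing does buy back some tolerance. A back-of-envelope sketch (the APS-C sensor width, the 20MP pixel count and the "two output pixels of blur" rule of thumb are all my assumptions, not figures from the post above):

```python
# Crude rule of thumb: blur starts to show once it spans roughly two output
# pixels, mapped back onto the sensor. Sensor width and pixel counts are
# assumed values for illustration.

def tolerable_blur_um(sensor_width_mm, output_width_px, blur_px=2):
    """Blur diameter on the sensor (microns) that a given output width hides."""
    return blur_px * sensor_width_mm / output_width_px * 1000

sensor_w_mm = 23.5  # assumed APS-C sensor width
for label, width_px in [("20 MP still (5472 px wide)", 5472),
                        ("4K video frame (3840 px wide)", 3840)]:
    print(f"{label}: ~{tolerable_blur_um(sensor_w_mm, width_px):.1f} µm of blur tolerated")
```

Roughly a 40% larger blur spot goes unnoticed at 4K than at the full still resolution, which is part of why small apertures that look harsh at 100% can still be fine on screen.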
 
Listening to the radio yesterday I was reminded of the Dunning-Kruger effect and it seemed relevant to this thread. Their original paper was titled "Unskilled and Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated Self-Assessments" and is about how the more you know, the more you realize that there is much that you don't know, or, simply put, people with little knowledge of a subject tend to over-estimate their abilities. I think we've probably all been there, looking at a new thing and thinking "how hard can it be".
 
LMO and Diffraction

Many cameras are capable of Lens Modulation Optimization.
This includes the ability to compensate for loss of sharpness due to diffraction, and also for peripheral loss of sharpness.

Fuji have done this since the introduction of the X100S and X20 fixed-lens cameras, and now include it in all XF lens firmware,
but not XC or Touit lenses. However, it seems LMO can be applied in camera only, either to JPEGs or using the in-camera raw processor.

It is unclear if any third-party raw processors have yet included this ability.
But they are thought to be working towards it, as the necessary information is included in the EXIF data attached to the raw file.
 
Two weeks ago I took the following image using my XE2 and 18-55 lens, set at f/16 and at 30.5mm focal length (45mm FF equivalent).
I am not sure where I set the point of focus, as I do these things instinctively, but it would have been somewhere on the mast.
It is unusual for me to use such a small aperture on a crop sensor. However, the massive depth of field required it.

The image must be diffraction limited; although it appears sharp, it must have lost appreciable micro detail, but it still retains considerable edge sharpness. In some subjects, like macro photography, this would have lost important subtle detail and appeared harsh, but in brightly lit conditions like this example it is normally perfectly acceptable.

Diffraction is of concern to those engaged in technically challenging photography, but rarely to the majority of us in shots like this.

_DSF4021X by Terry Andrews, on Flickr
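
For a rough sense of scale: the Airy-disk diameter depends only on the f-number and the wavelength, so a quick calculation (550nm green light assumed, and an approximate ~4.8µm pixel pitch for a 16MP APS-C sensor, both my assumptions) shows why f/16 on a crop sensor must be diffraction limited even when the image still looks sharp:

```python
# Airy disk diameter (first dark ring to first dark ring) is about
# 2.44 * wavelength * f-number, independent of the lens design.
wavelength_nm = 550          # assumed mid-green light
pixel_pitch_um = 4.8         # approximate pitch of a 16 MP APS-C sensor (assumed)

for N in (5.6, 8, 11, 16):
    airy_um = 2.44 * wavelength_nm * 1e-3 * N   # convert nm to microns
    print(f"f/{N:<4}: Airy disk ≈ {airy_um:4.1f} µm, "
          f"≈ {airy_um / pixel_pitch_um:.1f} pixel widths")
```

At f/16 the blur spot spans several pixels, so fine per-pixel detail is gone; whether that matters in the final output is a different question, as the post says.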
 
Many cameras are capable of Lens Modulation Optimization.
This includes the ability to compensate for loss of sharpness due to diffraction, and also for peripheral loss of sharpness.
You can't actually "restore" sharpness lost due to diffraction, or any other cause. LMO is no different from Lens Correction Profiles, except that it apparently includes selective sharpening (contrast).

Diffraction, sharpness, and DOF (acceptable sharpness) are all directly linked; they all have to do with how large a point (blur spot/Airy disk) is recorded and then rendered/viewed. And diffraction/DOF counter each other.
However, we really do not need much detail/resolution at all for an image to be "acceptably sharp." The standard requires less than 2MP in an image. Even if you tightened the standard to its limits it would require no more than ~14MP in an image.
And sharpness is really hard to judge... you cannot know what was lost to errors/diffraction that you are not seeing in the image unless you have some reference to compare against. So it's not until things get really bad that it starts to appear degraded/unsharp to a viewer, even though it may be well short of the maximum potential.
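
For anyone wondering where figures like that come from, a common convention is to call a blur spot "acceptably sharp" if it is no larger than the sensor diagonal divided by 1500; turning that into a pixel count is a one-liner (the 1/1500 and 1/3000 divisors are standard conventions, but the arithmetic here is mine, not the poster's):

```python
# Pixel count implied by a circle-of-confusion "acceptably sharp" standard.
# The divisor (1500 is the common convention) sets how strict the standard is.
import math

def megapixels_for_standard(width_mm, height_mm, divisor=1500):
    coc = math.hypot(width_mm, height_mm) / divisor   # allowed blur spot, mm
    return (width_mm / coc) * (height_mm / coc) / 1e6

print(f"Full frame, diagonal/1500: {megapixels_for_standard(36, 24):.1f} MP")
print(f"Full frame, diagonal/3000: {megapixels_for_standard(36, 24, 3000):.1f} MP")
```

That lands around 1 MP for the conventional standard and about 4 MP for a criterion twice as strict; a divisor in the region of 5500 would be needed to reach the ~14 MP figure, so the exact numbers depend entirely on where you choose to draw the "sharp enough" line.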
 
As with so many questions like this, the answer is - it depends. As Bollygum said earlier, things change as standards go up and what was previously acceptable/invisible becomes unacceptable. The main variable with anything sharpness-related is the size of output/print, and these days that can be anything from a tiny smartphone image to an exhibition-size print.

Mostly we view on a computer screen and that is pretty undemanding, so when we get a 24in print made of a favourite image it can be disappointing. This is where the knowledge bit comes in, when you need to know exactly what has caused this loss of sharpness and what can be done about it. It could be all sorts of things - camera-shake, subject movement, lens quality, focusing accuracy, depth of field, camera/sensor, or the settings used like high ISO or a high f/number that's running into diffraction.

Diffraction is always present, and increases as the f/number is raised whereas most other lens aberrations reduce. Peak sharpness is where the loss of sharpness from diffraction crosses over with the sharpness gains from raising the f/number. The peak can be as low as f/4 with the best lenses on full-frame, and by f/16 sharpness will have dropped by maybe 20%, at which point all lenses will be affected equally regardless of optical quality. That sounds like a lot to lose, but it may actually pass unnoticed in normal outputs, particularly when off-set against other very significant and obvious gains like deeper depth of field, or big improvements to sharpness around the edges, which often lag a long way behind the centre and reach their peak at higher f/numbers.

Where you draw the line is very subjective, but it's worth noting that digital photography is essentially an analogue process at the moment of capture. Things like sharpness don't suddenly go from good to bad when a certain limit is reached, it's a gradual change.

Diffraction also varies by format, in line with the crop factor. E.g. if f/16 is the acceptable limit on full-frame, then don't go higher than f/11 on a crop-format camera, or f/8 on M4/3. Diffraction is severe with very small formats like compacts and smartphones, which often use an ND filter in bright conditions to avoid having to push the f/number higher than f/5.6-ish.
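
That last rule of thumb is easy to tabulate. A tiny sketch (the f/16 full-frame limit comes from the paragraph above; the crop factors are the usual nominal values):

```python
# Scale an "acceptable" full-frame f-number limit by crop factor:
# the same blur relative to the frame is reached at a proportionally lower f-number.
ff_limit = 16.0
crop_factors = {"Full frame": 1.0, "APS-C": 1.5, "Micro 4/3": 2.0, "1in compact": 2.7}

for fmt, crop in crop_factors.items():
    print(f"{fmt:12s} stay below roughly f/{ff_limit / crop:.1f}")
```

Which reproduces the f/16 to f/11 to f/8 progression above, and lands around f/5.9 for a 1in compact, in line with the f/5.6-ish figure quoted for small formats.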
 
You can't actually "restore" sharpness lost due to diffraction, or any other cause. LMO is no different from Lens Correction Profiles, except that it apparently includes selective sharpening (contrast).

Diffraction, sharpness, and DOF (acceptable sharpness) are all directly linked; they all have to do with how large a point (blur spot/Airy disk) is recorded and then rendered/viewed. And diffraction/DOF counter each other.
However, we really do not need much detail/resolution at all for an image to be "acceptably sharp." The standard requires less than 2MP in an image. Even if you tightened the standard to its limits it would require no more than ~14MP in an image.
And sharpness is really hard to judge... you cannot know what was lost to errors/diffraction that you are not seeing in the image unless you have some reference to compare against. So it's not until things get really bad that it starts to appear degraded/unsharp to a viewer, even though it may be well short of the maximum potential.


That is not totally true any longer. Lens Modulation Optimizers now include deconvolution algorithms that actually reverse the effect of diffraction to a significant extent, and increase the detail captured as well as improving contrast.

Diffraction and depth of field are not directly linked. They are entirely independent phenomena, though both increase as a lens is stopped down. Depth of field increases because the cone of rays becomes narrower and more points appear sharp.
Diffraction is a wavefront phenomenon, caused when the wavefront is obstructed by an object or restricted by an aperture (a circular opening); it becomes worse at smaller apertures because a greater proportion of the wavefront is affected. Airy disks are an effect, not a cause. Diffraction and DOF do not counter each other in any way.

Effectively, diffraction sets the maximum sharpness of a system, beyond which no further detail will be apparent, and it is almost solely responsible for the fall-off seen in lens charts as apertures are reduced.
 
That is not totally true any longer. Lens Modulation Optimizers now include deconvolution algorithms that actually reverse the effect of diffraction to a significant extent, and increase the detail captured as well as improving contrast.
There's nothing new about deconvolution sharpening... it is an option in many editing programs, and I believe ACR/LR/PS use it as well (although undocumented). The fact is, it is still a digital manipulation in an attempt to counter losses after the image has been recorded by the lens/sensor. I'm not saying it's a bad thing... sharpening is almost always required.
Diffraction and Depth of field are not directly linked.
We'll have to disagree, although I think the difference is POV.
In terms of aperture, the rays become narrower because they are originating from a smaller area of the objective element. And the aperture that is restricting the light to that smaller area is also the obstruction that causes more light rays to bend/diffract around its edges. I see that as a direct link and relationship.

*I do fully acknowledge that aperture is not the only factor affecting DOF (or even the most significant).
 
That is not totally true any longer. Lens Modulation Optimizers now include deconvolution algorithms that actually reverse the effect of diffraction to a significant extent, and increase the detail captured as well as improving contrast...
<snip>

Don't believe the hype. As Steven says, Fuji's LMO is just localised sharpening applied to in-camera JPEGs. Use of the word 'deconvolution' is a marketing abuse.

It may work pretty well, but it's not magic nor anything you can't do yourself to any image.
 
Don't believe the hype. As Steven says, Fuji's LMO is just localised sharpening applied to in-camera JPEGs. Use of the word 'deconvolution' is a marketing abuse.

It may work pretty well, but it's not magic nor anything you can't do yourself to any image.

Deconvolution is indeed not magic, and it has been used for some years to bring out detail in all forms of scientific image making. It can bring out details not available in any other way.
What is new is being able to include the algorithms in firmware.
It is not sharpening. Sharpening does not increase the detail in the visible image; it only increases its contrast in some way.
Deconvolution-type algorithms are used in MRI scans.
You should think of deconvolution as reverse engineering fuzziness back to a viable image. To do that you need to know why it is fuzzy in the first place.
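
For anyone curious what "reverse engineering the fuzziness" looks like in code, here is a minimal Richardson-Lucy deconvolution sketch in Python using NumPy/SciPy. It is not Fuji's algorithm - just the textbook iteration, with a synthetic Gaussian blur standing in for a real diffraction PSF:

```python
# Minimal Richardson-Lucy deconvolution: given a blurred image and the
# point-spread function (PSF) that blurred it, iteratively estimate the
# sharp image. A small Gaussian PSF stands in for a real diffraction PSF.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, iterations=30):
    estimate = np.full_like(blurred, 0.5)               # flat starting guess
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = blurred / np.maximum(reblurred, 1e-12)   # avoid divide-by-zero
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# Toy demonstration: blur a synthetic scene with a known PSF, then recover it.
sharp = np.zeros((64, 64))
sharp[20:30, 20:30] = 1.0                                # a bright square
x = np.arange(-3, 4, dtype=float)
psf = np.exp(-(x[:, None]**2 + x[None, :]**2) / 2.0)
psf /= psf.sum()
blurred = fftconvolve(sharp, psf, mode="same")
restored = richardson_lucy(blurred, psf)
print("mean error, blurred: ", round(float(np.abs(blurred - sharp).mean()), 4))
print("mean error, restored:", round(float(np.abs(restored - sharp).mean()), 4))
```

The catch, as the replies above point out, is that you need a good model of the PSF for the specific lens, aperture and position in the frame - which is precisely the data a manufacturer has and a third-party raw converter has to guess at.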
 
What deconvolution means in the Fuji LMO application is that the system knows the characteristics of Fuji lenses and where sharpness is reduced - at what focal lengths and apertures, in which areas of the frame, and by how much. It then applies selective sharpening to JPEGs. The same data is used to correct distortion, vignetting and CA. All pretty useful, but not revolutionary and nothing you can't do to any image yourself.

I wouldn't call this deconvolution as Fuji does, as it simply applies sharpening to what is already there and nothing is 'put back' into the image, but that's just semantics.
 
What deconvolution means in the Fuji LMO application is that the system knows the characteristics of Fuji lenses and where sharpness is reduced - at what focal lengths and apertures, in which areas of the frame, and by how much. It then applies selective sharpening to JPEGs. The same data is used to correct distortion, vignetting and CA. All pretty useful, but not revolutionary and nothing you can't do to any image yourself.

I wouldn't call this deconvolution as Fuji does, as it simply applies sharpening to what is already there and nothing is 'put back' into the image, but that's just semantics.

Sorry, you are confusing LMO (Lens Modulation Optimisation) with LCP (Lens Correction Profiles).
LCPs do those things that you suggest, on both Fuji JPEGs and raw files, and are automatically applied in the Adobe raw converter.

LMOs are applied only to JPEGs, and by the in-camera raw processor, in Fuji X cameras.
Adobe cannot yet apply them via the raw processor.
LMOs are concerned with deconvolution, not basic lens corrections as you suggest.
 
Sorry, you are confusing LMO (Lens Modulation Optimisation) with LCP (Lens Correction Profiles).
LCPs do those things that you suggest, on both Fuji JPEGs and raw files, and are automatically applied in the Adobe raw converter.

LMOs are applied only to JPEGs, and by the in-camera raw processor, in Fuji X cameras.
Adobe cannot yet apply them via the raw processor.
LMOs are concerned with deconvolution, not basic lens corrections as you suggest.

I'm not confusing anything. LMO works on the same principle as other lens corrections, using pretty basic data to apply selective sharpening where the system knows it's most needed, but it only enhances what's already there and doesn't recreate details that have been completely lost through diffraction. If you want to call it deconvolution as Fuji does, then that's fine but I think that's over-egging it. A lot. Proper deconvolution in this context would be massively complicated and unreliable at best; I'm not sure it's feasible.

Adobe could introduce a version of LMO easily enough, just as they create their own lens profiles for all the other stuff. Maybe they will, it could be very handy if it's sufficiently intelligent, but there doesn't seem to be much demand for it and the Fuji system has hardly raised a ripple. I think most people apply selective sharpening when and where it's needed with the local adjustment brush, which is highly adjustable and takes seconds.
 
I don't have a Fuji X-Trans to mess with. But from what I've heard, many prefer to use the raw files and processing software with advanced sharpening algorithms such as Iridient (specifically the R-L deconvolution). That would seem to indicate that the built-in JPEG processing with automatic LMO applied is nothing particularly special.
It seems that most of the X-Trans marketing is just hyperbole and an attempt to differentiate themselves in the market, which seems even more evident given that they reverted to a Bayer array for the medium-format GFX 50 (where one would think IQ considerations would be paramount).
Not that I particularly care, or that Fuji is any worse/different in this respect... I just don't believe there is anything particularly special in all of it.
 
I don't think I'll be losing any sleep pining for deconvolution software. Anyway, provided I keep the raw files, I can apply any sophisticated software in future years. I suspect that there is little for me to gain in that area currently, but I will keep an eye on it anyway. Who knows what the future might bring.
 
Whatever deconvolution means to photographers, it is a nice word, like combobulation. I suspect we will collect more words to describe different ways of making sense of, or ways to scramble things up. There is a whole world of randomness out there waiting to be described.
 
Whatever deconvolution means to photographers, it is a nice word, like combobulation. I suspect we will collect more words to describe different ways of making sense of, or ways to scramble things up. There is a whole world of randomness out there waiting to be described.

Deconvolution is not like unscrambling eggs; there is real science behind it.
Diffraction is not random. It follows strict wavefront rules, hence the formation of Airy discs. Its magnitude and form can be accurately calculated, albeit with complex algorithms.
Deconvolution is using these algorithms to reverse engineer the result of diffraction back to its starting point. Imperfect, certainly, but much better than random guesswork.
 
Deconvolution is not like unscrambling eggs; there is real science behind it.
Diffraction is not random. It follows strict wavefront rules, hence the formation of Airy discs. Its magnitude and form can be accurately calculated, albeit with complex algorithms.
Deconvolution is using these algorithms to reverse engineer the result of diffraction back to its starting point. Imperfect, certainly, but much better than random guesswork.
Oh, but I do understand that. Diffraction is random, but like many forms of randomness, certain outcomes have different probabilities than others. While we can't predict a single event, we can predict the overall distribution.

But none of this needs to be done in camera. It seems to be quite important in 3D microscopy, as diffraction is likely to be the limiting factor there.
 
Oh, but I do understand that. Diffraction is random, but like many forms of randomness, certain outcomes have different probabilities than others. While we can't predict a single event, we can predict the overall distribution.

But none of this needs to be done in camera. It seems to be quite important in 3D microscopy, as diffraction is likely to be the limiting factor there.

I suspect the reason it is done in camera is that the manufacturer has complete knowledge of the factors involved in its sensors and lenses, whereas stand-alone software might struggle to acquire the necessary data set. However, what I do not understand is that Fuji do attach that data to the raw file, yet the raw processor software engineers have not yet incorporated the necessary algorithms to make use of it. Seems very lackadaisical to me. Even if it were only a one-stop equivalent improvement, it would be more than worthwhile.
 
I suspect the reason it is done in camera is that the manufacturer has complete knowledge of the factors involved in its sensors and lenses, whereas stand-alone software might struggle to acquire the necessary data set. However, what I do not understand is that Fuji do attach that data to the raw file, yet the raw processor software engineers have not yet incorporated the necessary algorithms to make use of it. Seems very lackadaisical to me. Even if it were only a one-stop equivalent improvement, it would be more than worthwhile.
I would guess that we will need more of this type of thing as pixels get smaller. There is quite a lot that can be done with software both in camera and in post. Does your camera have this feature?
 
I would guess that we will need more of this type of thing as pixels get smaller. There is quite a lot that can be done with software both in camera and in post. Does your camera have this feature?

Yes, Fuji include it in all their X cameras, but only apply it to JPEGs, or if you process your raws in camera... neither of which I do. But they do attach the necessary data to the raw file.
However, I will not get the benefit until Adobe incorporate the algorithms into their raw processor; until then it is academic. We have been waiting nearly five years so far, since the first of their cameras to have this feature, which were the X100S and the little X20. I have not been holding my breath.

As the Adobe raw processor recently improved by an order of magnitude when processing Fuji files, they may possibly have incorporated it, but I would have thought that they would have said something. But then they never said that they had made the Fuji processing improvements either. Maybe they were embarrassed that it had taken them so long to get the hang of them.
 
It all sounds a bit like there isn't a lot to gain currently.

It could be that the software companies cannot see where an investment in deconvolution will give them a payback, so they will only do it when someone brings it to market first,
so as to maintain market share, not as an aid to photographers or camera manufacturers.
 
It could be that the software companies cannot see where an investment in deconvolution will give them a payback, so they will only do it when someone brings it to market first,
so as to maintain market share, not as an aid to photographers or camera manufacturers.
Perhaps. But there is software that will do it. Some photographers must have used that software and if there was a big advantage, then there would be a demand for it. As far as I can ascertain, the only demand currently is for microscopy. As you say, even Fuji are very half-hearted about using it, so even they can't see it as a real advantage.
 
If deconvolution is to come to photography, I would guess it'd come to smartphones first. They're the cameras that are most challenged for lost detail, they suffer severely from diffraction, limited dynamic range and poor low light performance too. That's where the need is most apparent, and that's where the big bucks are to be won.

Or maybe it's already here :D
https://www.independent.co.uk/life-...otographer-fail-zaring-missouri-a8160771.html
 
If deconvolution is to come to photography, I would guess it'd come to smartphones first. They're the cameras that are most challenged for lost detail, they suffer severely from diffraction, limited dynamic range and poor low light performance too. That's where the need is most apparent, and that's where the big bucks are to be won.

Or maybe it's already here :D
https://www.independent.co.uk/life-...otographer-fail-zaring-missouri-a8160771.html
Good point. I read that Sony have included a space for third-party code in their phone chip hardware. That's a lot of work if you don't expect anyone to use it.
 
The prospect of being able to shoot at any aperture, approaching diffraction free, added to the ability to shoot ISO free, would be getting near to the point of an ideal camera,
when shutter speed and aperture could be chosen for stopping power and depth of field alone.

The advantages in use and in marketing terms are so great that I cannot see manufacturers not working to achieve it.
 
The prospect of being able to shoot at any aperture, approaching diffraction free, added to the ability to shoot ISO free, would be getting near to the point of an ideal camera,
when shutter speed and aperture could be chosen for stopping power and depth of field alone.

The advantages in use and in marketing terms are so great that I cannot see manufacturers not working to achieve it.
I think diffraction can be very slightly reduced, but eliminated - no.
 
That does represent progress of a kind though; I often find myself seeing nothing but faults in my photos these days.

Perhaps because you are seeing them through the eyes of photography rather than as images, or as other people see them.
 
A photograph can be technically competent but devoid of artistic merit. What might the priority be?
 
Simon (post #72) brings up an interesting point. Back in film days, few people saw their shots printed much bigger than 10x8 inches unless they saw their slides projected. At 10x8, linear enlargement is about 8 1/2 times, so the massive enlargements pixel peepers look at nowadays are way beyond what was usual; slides, viewed from a relatively great distance, were accepted as being "sharp enough". We also now tend to look at pretty much every image hyper-critically - where we used to take maybe 2 or 3 in the expectation that one of them might be spot on, we may well take 10 or more and peer at them to check if they're sharp enough (at 100%?) rather than look at the 3 enprints side by side to choose which went in the album or got blown up.

However, this is diverging rather from the original question of how much of the technology we need to understand rather than know how to use.
 
One doesn't.
All that is required, IMVHO, is a seeing eye; one is born with it, or certain aspects of it can be learned, like the rule of thirds.
Some people can 'see' a picture; others never will.
However, if both can be combined, so much the better.
 