I have been thinking recently about what we actually see when we examine digital images.
This was brought to my attention when a recent online article used images enlarged to 300% to show the differences between various digital cameras' output.
Now this immediately got me thinking back to the days when I used to make giant three-metre-square enlargements for window displays. It did not matter what size of film was used; we enlarged from anything between 13x18 cm cut film and 6x6 cm roll film, and at whatever negative size, the results were all grossly over-enlarged. Yet from outside the window they all looked very good indeed. Examined close up, though, it was easy to see the effect of the grain structure, which changed according to the emulsion and developer used. The sharpest grain was produced by Adox R14 developed in Neofin Blue, where even the edge effect of the compensating developer was apparent, while 100 ASA cut film developed in a D76-type developer produced a much softer granular effect.
But in all cases, the only actual artefacts were the high-contrast edge effects. At such magnifications the detail simply dissolved, slowly but surely, into nothingness.
This is quite different to looking at an over-enlarged digital image, which is made up entirely of false detail.
Now that claim needs explaining.
The image produced by a sensor is a complex mixture of light-and-shade (luminance) data provided by each pixel, and colour information provided by a pattern of individually filtered pixels. The pattern is usually either the Bayer pattern or, more recently, the more complex Fuji X-Trans pattern; the exception is the Foveon sensor, which stacks its colour-sensitive layers.
For this reason, the missing colour information at each pixel site has to be approximated by algorithms, using the data obtained from the neighbouring pixels that did receive the relevant filtered red, green, or blue light. This is in itself complex, because there are twice as many green pixels in a Bayer array as either red or blue ones.
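To make that concrete, here is a minimal sketch of bilinear demosaicing, written in Python with NumPy and assuming a standard RGGB Bayer layout. The function name and layout are my own illustration, not any camera maker's actual pipeline, and real raw converters use far more sophisticated gradient-aware methods; but even in this simple form, two thirds of the colour values in the output are calculated rather than measured.

```python
import numpy as np

def demosaic_bilinear(raw):
    """Fill in the two missing colours at every photosite of an RGGB
    Bayer mosaic by averaging the nearest sites that actually measured
    that colour. Edge handling is simplified (np.roll wraps around)."""
    h, w = raw.shape
    rgb = np.zeros((h, w, 3))

    # Which sites actually measured each colour, in an RGGB layout.
    r_mask = np.zeros((h, w), dtype=bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), dtype=bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)   # green fills the remaining half of sites

    for ch, mask in enumerate((r_mask, g_mask, b_mask)):
        measured = np.where(mask, raw, 0.0)
        count = mask.astype(float)
        total = np.zeros_like(measured)
        hits = np.zeros_like(count)
        for dy in (-1, 0, 1):          # sum each 3x3 neighbourhood,
            for dx in (-1, 0, 1):      # counting only measured sites
                total += np.roll(np.roll(measured, dy, 0), dx, 1)
                hits += np.roll(np.roll(count, dy, 0), dx, 1)
        # Keep the real measurement where one exists; interpolate elsewhere.
        rgb[..., ch] = np.where(mask, raw, total / np.maximum(hits, 1.0))

    return rgb
```

Run on any two-dimensional array standing in for raw sensor data, this returns a full RGB image in which only one channel value per site ever came off the sensor.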
Other factors also come into play: there are inevitably gaps between the individual pixels that block light and cannot record it; there are microlenses that help gather light falling obliquely onto the pixels; and very often there is a low-pass filter that spreads the light over more than one pixel in an attempt to reduce moiré patterns.
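The low-pass filter in particular is easy to picture numerically. Below is a rough stand-in: a real optical low-pass filter splits the beam optically with birefringent plates, which this 2x2 box average only imitates, but it shows the trade directly: detail one pixel wide can never reach the sensor at full contrast, in exchange for moiré suppression, before demosaicing even begins.

```python
import numpy as np

def optical_lowpass_approx(img):
    """Numerical stand-in for a 4-spot optical low-pass filter: every
    photosite receives the average of a 2x2 patch of the scene.
    (A real OLPF does this optically, not in software.)"""
    padded = np.pad(img, ((0, 1), (0, 1)), mode="edge")
    return (padded[:-1, :-1] + padded[:-1, 1:] +
            padded[1:, :-1] + padded[1:, 1:]) / 4.0
```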
It is true to say that even a raw file, straight from the camera, does not show us the actual information recorded by any single pixel. The image we get to see has been produced entirely by algorithms. It has been produced entirely from artefacts, if you define an artefact as the result of “interpolated”, calculated data. This is not surprising when one considers that the only information available is noisy, comes from only partial coverage of the sensor surface, and consists of defocused, filtered data obtained from a pattern of four or more pixels.
At best, examined at 100%, digital images contain far more approximated and false information than direct information. At 100%, fine detail is made up mostly of fine artefacts. At 300%, we simply magnify those faults and approximations without gaining any further detail.
When we produce large prints from our digital images, we introduce a further pattern of dots and a further set of algorithms that actually diffuse and disguise many fine artefacts. In this regard, prints often look rather better than the same file seen on our screens, other things being equal.
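As an illustration of why a printed dot pattern can hide pixel-level faults, here is a minimal error-diffusion sketch in the style of Floyd-Steinberg dithering, assuming a greyscale image with values between 0 and 1. Actual printer drivers use their own proprietary screening, but the principle is the same: each dot's error is scattered over its neighbours, so single-pixel artefacts are absorbed into the dot structure rather than reproduced.

```python
import numpy as np

def floyd_steinberg(gray):
    """Reduce a greyscale image (floats in 0..1) to pure black/white
    dots, diffusing each pixel's quantisation error onto its
    neighbours with the standard Floyd-Steinberg weights."""
    img = gray.astype(np.float64).copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            img[y, x] = new
            err = old - new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return img
```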
I would suggest that most files look their best, and can only look their best, on screen at around 25%, and that at that size even gross artefacts are largely invisible.
I would also suggest that 100% is only useful during processing, and 300% only during operations like drawing a path or making a selection.
At less than around 50%, artefacts such as those that produce waxy skin, and other false-detail effects, become virtually invisible.
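The arithmetic behind those percentages is simple: at 25% zoom each screen pixel averages roughly a 4x4 block of image pixels, so a one-pixel artefact keeps only about a sixteenth of its contrast. A crude box-filter sketch shows it (real viewers use better resampling, but the averaging principle is the same):

```python
import numpy as np

def view_at(img, percent):
    """Crude preview at a given zoom: each screen pixel is the mean of
    an f x f block, where f = 100 / percent (assumed whole here)."""
    f = round(100 / percent)                    # 25% -> 4x4 blocks
    h, w = (img.shape[0] // f) * f, (img.shape[1] // f) * f
    blocks = img[:h, :w].reshape(h // f, f, w // f, f)
    return blocks.mean(axis=(1, 3))

flat = np.full((8, 8), 0.5)
flat[3, 3] = 1.0                 # a single-pixel "artefact", +0.5 contrast
print(view_at(flat, 25))         # the spike survives as only ~0.53 vs 0.50
```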
It is true that as the algorithms used in raw processors improve, so does the “realism” of the artefacts.
But this “realism” will never be a true representation.
We tend to judge camera output at 100%. I maintain that this is wrong: at that magnification we are seeing the quality of the raw or JPEG processing, rather than the potential differences between cameras and sensors.
It is interesting that when old raw files from early digital cameras are reprocessed in the latest versions of Photoshop or Lightroom, the images can be greatly improved compared with the earlier processing. Old cameras are often better than we thought they were.