Bridge, MFT, achromats, dSLR, primes - a journey of exploration

I have read references to using video frames to photostack shots.
Can someone point me to where I can learn how this is done ?
For example, the camera used, lens, etc..
 

The easiest way is to use a Panasonic camera that has Panasonic's Post Focus function. This captures a video while the focus is racked from the front to the back of a scene. You can then extract JPEGs from the video and stack them. You can use a macro lens or a non-macro lens, with or without extension tubes, close-up lenses etc, as long as the camera can control the lens focusing. You may find this video useful.
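One way to extract the JPEG frames from such a video is with ffmpeg. A minimal sketch in Python (the video file name is hypothetical; `-qscale:v 2` asks for high-quality JPEG output):

```python
import subprocess

# Hypothetical input video from a Panasonic Post Focus capture.
video = "post_focus_clip.mp4"

# ffmpeg writes one JPEG per video frame, numbered frame_0001.jpg etc.
cmd = ["ffmpeg", "-i", video, "-qscale:v", "2", "frame_%04d.jpg"]

# Uncomment to actually extract the frames for stacking:
# subprocess.run(cmd, check=True)
```

The resulting JPEGs can then be loaded into any focus stacking tool.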

You can also use any camera that does video, using fixed focus to capture a video while you move the camera towards (or away from) the scene. This is more difficult.

I use Panasonic's Post Focus function for close-ups of flowers etc. It can also be used for smaller subjects, insects etc, but as with all close-up/macro techniques, it gets harder as the magnification goes up. Achim Kluck produces wonderful images of insects etc using this technique. (You have to look at the descriptions to find ones which used post focus as Achim uses other techniques too.)
 
Been pondering lighting options for macro use - whether to start with an arm to mount a diffused speedlight off the hotshoe (I already have an SB-700, which may be a little heavy for handheld work), a small Godox, or possibly one of the KX800s. Then I came across this, which looks interesting:

https://adaptalux.com/studio/flash/

Looks promising, but it's a shame they won't be available until July, and it'll be interesting to see what diffusion options they produce (if any).
 

Thanks. It does look interesting. I've had a look at the specifications, watched a couple of their videos and had a chat with them. For my purposes I can see several issues.

I could be wrong about this, but it looks to me as though the light comes out of a rather small area compared to other flashes. If so, diffusing it to provide an overall smooth(ish) light like I get with a KX800 might be tricky, or at least trickier than with a larger source area for the light. (You can see the diffusion arrangement I use with a KX800 here.)

The flashes are powered from a fixed battery inside the "pod" to which the arms connect. If it runs out of power you either have to have another one to replace it with, or wait until it is recharged, or run it from mains or a USB power supply. I think only the first of these would be practical from my point of view.

The 1/200 sec flash duration at full power is a bit slow. Whether one would need to use it at full power depends on the scene and the diffusion. Each head has a guide number of 32, so that might be enough to allow less than full power, depending on diffusion, distance to subject, ISO etc. (I almost always use my KX800s at 1/2 power or less to keep the recharging time down to a couple of seconds at most; any slower than that doesn't work for me.) However, although the amount of power may be ok, there are only five power levels on the Adaptalux from full to minimum (vs a more usual eight), and that might prove a bit crude. The range from full power down to 1/100 power is somewhere between six and seven stops, and covering it with five levels means only four jumps between successive levels, so each jump is well over a stop and a half, compared with seven one-stop jumps for a typical 1 to 1/128 power setup.
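The arithmetic behind that comparison can be sketched as follows (a minimal check, assuming evenly spaced levels and the quoted 1 to 1/100 power range):

```python
import math

def stops_per_jump(power_ratio: float, levels: int) -> float:
    """Total stops across the power range divided by the number of
    jumps between adjacent levels (levels - 1)."""
    total_stops = math.log2(power_ratio)  # e.g. log2(100) ~ 6.64 stops
    return total_stops / (levels - 1)

# Adaptalux (as quoted): full power down to 1/100, five levels
adaptalux_jump = stops_per_jump(100, 5)   # ~1.66 stops per jump

# Typical speedlight: full power down to 1/128, eight levels
typical_jump = stops_per_jump(128, 8)     # exactly 1.0 stop per jump
```

So each Adaptalux step would be roughly a stop and two thirds, against the usual one-stop steps.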

It could turn out to be a bit expensive, especially if a second (or third?) pod were used (I occasionally capture over 1,000 images in a session, and they quote 400 exposures per charge with multiple - I forgot to ask how many - arms), along with two flash arms and perhaps also a white light arm for use as a focusing light.

On the plus side, the flexible arms look more robust than those on the KX800, and if there is a problem with one of them it can be replaced, whereas with the KX800 a broken arm has to be repaired. (I have managed to do that so far, twice, on one of my KX800s, but it has made me more reluctant to flex the arms on any of the three I have - I'm wondering if the plastic of the loc-line gets brittle with age.) With the camera/lens setups I use with the KX800 it turns out that once I have got the arms aligned just right I can leave them in place. In general though, for example if using macro lenses, that would not be the case, especially if there is a large difference between the minimum and maximum magnifications being used. The Adaptalux arms look better in this respect, but I wonder if they may be a bit short, especially if working with longer lens arrangements like the MPE-65 at higher magnifications.

All that said, I can see me trying this later in the year. Thanks for the heads-up.
 

Interesting, thanks for your thoughts. The extra attraction of the Adaptalux stuff is the continuous lighting side of things - it could be good to have a play with, especially with the current weather and at this time of year, when there is very little (or no) insect life around.

I was almost sold on the KX800, but I suspect for my purposes and subjects I'd need to constantly adjust the setup of the heads and it really doesn't sound as though the arms would be robust enough. That said, there don't appear to be too many other reasonable options around...
 
The other solution that might work for me is the Nikon R1 system - anyone on here using it for insect macro work?
 
When I first checked out the Adaptalux system I was impressed with its flexibility, but horrified at the price!
 

It is pretty expensive, but then that does tend to be the way when it comes to lighting. I daresay volumes are relatively small, which won't help with prices. It does look very well thought through and made...
 
A Nikon R1 kit came up used so I've opted for that. Also purchased the diffusers for it from Macrodiffuser.com. I've yet to try it out (waiting for batteries), but hoping it'll do the job for me.
 
Part 1 - Introduction

Over the past couple of weeks I have been experimenting with alternative workflows for processing invertebrate images. It began as a simple question as to whether or not to make greater use of Topaz DeNoise AI now that version 2 includes batch processing, but it ended up with me deciding to make some major changes to my processing workflow. The following posts describe what I did and where it led me.

Towards the end of the exercise I worked on 60 images. I have posted the versions I ended up with for 24 of them in this thread in the forum. They are taken from this album at Flickr, which contains final versions of all 60 of the images.

The following posts are structured as follows:

 
Part 2 - Current editing products and workflow

For some time I have been using the following products: Fast Picture Viewer, DXO PhotoLab 3 Elite, Silkypix 9 Pro, Lightroom Classic, Photoshop CC, DeNoise AI, Faststone Image Viewer, XnViewMP

I use these on raw files from 12 megapixel bridge cameras with small (1/2.3") sensors, 16 and 20 megapixel micro four thirds cameras, a 20 megapixel APS-C camera and (only for testing thus far) a 24 megapixel full frame camera.

The tests described below were done on a Windows 10 PC with an Intel Core i7 processor with 32 GB RAM, with operating system and temporary files on a 1TB SSD, and an NVIDIA GeForce RTX 2080.


Processing workflow

The processing part of my current workflow involves:

  • Raw conversion in PhotoLab, along with DXO's raw-only PRIME noise reduction, camera-/lens-specific sharpening, a mild application of PhotoLab's Smart Lighting to bring the overall lightness and the highlights and shadows to reasonable levels, a small boost to microcontrast and a mild application of PhotoLab's ClearView Plus to improve clarity. Export to DNG.
  • In Silkypix, a strong application of highlight reduction to the DNG file. Silkypix seems to me to be exceptionally good at this, retaining/revealing colours and detail in bright areas better than any of my other applications. We will see an example of this later. Export to TIF.
  • In Lightroom, image-specific adjustments to the TIF file, including some or all of Cropping, White balance, global adjustments to lightness ("Exposure"), contrast, highlights, clarity etc and local adjustments of lightness, contrast etc using radial and graduated filters or a smart adjustment brush, and simple cloning. Very occasionally a round trip from Lightroom to Photoshop and back for something I can't do in Lightroom. Export to JPEG in the target display size of 1300 pixels tall.
  • Noise reduction and sharpening of the display size JPEG using DeNoise AI.

You may wonder why I need noise reduction at this stage having used DXO's PRIME noise reduction at the start of the processing. We'll come back to this in more detail later, but basically I sometimes raise shadows a lot in images that are not very bright because they were exposed to protect highlights, and do this for images that were captured with small sensor cameras (smartphone size sensors) that have noisy sensors. On top of that I sometimes crop almost down to 100%, which is a big ask, especially for a small sensor camera even if exposed nicely.

Overall workflow

Starting typically with a folder containing a few hundred raw files from a session, I create a longlist by going through them very quickly in Fast Picture Viewer and marking those I want in the longlist. I then run a Fast Picture Viewer macro which creates subfolders for the ISOs that have been used (100-160, 200-320 etc) and moves the raw files into the appropriate folder.
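The macro's behaviour can be sketched in Python. This is a hedged illustration, not Fast Picture Viewer's actual macro: the band boundaries are assumed from the 100-160, 200-320 pattern (bands doubling from ISO 100), and reading the ISO from each file's EXIF is left to some other step.

```python
import math
import shutil
from pathlib import Path

def iso_band(iso: int) -> str:
    """Map an ISO value to a third-stop band name such as '100-160',
    assuming bands double from a base of ISO 100."""
    base = 100 * 2 ** int(math.floor(math.log2(iso / 100)))
    return f"{base}-{round(base * 1.6)}"

def sort_by_iso(folder: Path, files_with_iso: dict) -> None:
    """Move each raw file into a subfolder named for its ISO band.
    files_with_iso maps file name -> ISO value read from EXIF elsewhere."""
    for name, iso in files_with_iso.items():
        dest = folder / iso_band(iso)
        dest.mkdir(exist_ok=True)
        shutil.move(str(folder / name), str(dest / name))
```

For example, `iso_band(250)` gives `"200-320"`, so a raw file shot at ISO 250 would land in a `200-320` subfolder.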

In PhotoLab, go to each folder in turn, select all the images in the folder and apply a preset appropriate for the ISO and the camera. (I use the same ISO-specific preset for all cameras with a given sensor size: 1/2.3", MFT, APS-C, FF.) Export to DNG.

Gather all the DNG files together in a single folder, and all the raw files in another folder.

In Silkypix, go to the DNG files folder and select all the DNG files. Apply a preset (same one used in all cases) and output to TIFF.

Import the TIFF files into Lightroom. Apply image-specific adjustments (cropping, brightness, shadows, clarity etc) globally for an image and/or locally. The longlist contains all the captures that didn't instantly appear to be unusable as I went through the initial trawl very quickly. Now in Lightroom I will typically have a lot of fairly similar images of some subjects if they stuck around long enough. I work on one of the images and make use of Lightroom's synchronisation feature to copy the adjustments to similar images. I then choose which one(s) to proceed with. I typically go backwards and forwards a bit while honing the selection, tweaking the adjustments, and ending up with a shortlist of processed images.

Export to 1300 pixel high JPEGs. Apply DeNoise AI to the JPEGs using very low settings. This smooths out noisy backgrounds and sharpens details/makes details more visible.

Use Faststone Image Viewer to go through the shortlist and pick out the "keepers" and arrange the order the images will be presented in. Where I notice problems with prospective keepers go back and adjust the processing, or reject the image.

When the list of keepers has stabilised and I think the processing is as complete as I can be bothered to make it, go through the images multiple times using XnViewMP to see how they look, individually and in sequence. If I spot problems, go back to an earlier stage or reject a problematic image.

Then various administrative tasks including organising the images into an album at Flickr, making backups of keeper JPEGs and their raw files locally and in the cloud, documenting album names etc.

Current workflow advantages and issues

Although this workflow involves a lot of steps, the two batch processing stages are easy and quick to set up and deliver files that are easy to work with almost whatever state they were in originally, and some of them are in a rather poor state to begin with. It turns out that this workflow lets me deal with large numbers of images without feeling overburdened with the effort involved.

On the other hand, using a standardised approach for the PhotoLab and Silkypix stages can obviously have disadvantages when dealing with varied source images. This is not as bad as it might be because of the power and flexibility Lightroom gives me to make strong additional or somewhat reversing adjustments. However:
  • Lightroom gets to operate on TIFF files, and the white balance is somewhat "baked in". You can only change the Temp relative to what it has been fixed at, using a scale from -100 to +100. You cannot set a particular Temp in degrees and you cannot use presets such as Daylight and Flash. Tint can also be adjusted on a scale from -100 to +100. These constraints can make it difficult to adjust to a suitable white balance sometimes.
  • The way I use Silkypix tends to "flatten" images in terms of tonality. Although Lightroom does provide for strong and flexible tonality adjustments I sometimes find it difficult to create tonality that I am comfortable with from a Silkypix "flattened" image.
  • The Silkypix operation I use can sometimes produce wide, darkish halos along high contrast boundaries, halos which I find more or less impossible to fix. The only alternative sometimes is to bypass Silkypix and hand the PhotoLab DNG file directly to Lightroom. Highlight management may then sometimes be problematic in relation to what I'm trying to achieve.
  • DeNoise AI sometimes produces narrow sharpening halos even when used with its settings minimised. (Even when the sharpening slider is set to zero it seems that some sharpening is still done.) In such cases I have to use the display size JPEG as produced by Lightroom, possibly creating another version of it with Lightroom's output sharpening turned up, although this doesn't seem to me to be as good as DeNoise AI at revealing/enhancing/making details more visible. The real problem here though is not the sharpness, as the Lightroom version is generally good enough even if not quite up to the DeNoise AI standard. The problem is with the noise reduction, which I find DeNoise AI does really well for my images. If I can't use DeNoise AI because it is over-sharpening an image, then I may have to put up with a background that is "crunchier" than I'm comfortable with. I do have a Lightroom technique with which I can denoise almost any background no matter how noisy it is, but depending on the image that can get quite time-consuming (and I have a fairly low tolerance for how much time I will give an individual image before I throw it out).
I didn't have any of these issues in mind when I started out; I was just trying to work out how best to use DeNoise AI. As it eventually turned out though I may have stumbled upon a solution for all of them.
 
Part 3 - Using DeNoise AI as the raw converter

I could use DeNoise AI at one or more of three stages in the processing chain:
  • At the start, doing the raw conversion
  • From within Lightroom, while doing image-specific adjustments
  • At the end, used on the display size JPEG, as I have been doing up to now.
For best results, Topaz Labs recommends using DeNoise AI at the start of the processing, doing the raw conversion and applying noise reduction and sharpening, so I decided to try this.

I selected a raw file (from a 12 megapixel small sensor bridge camera) and made a reference image using my current workflow as described above. I then loaded the raw file into DeNoise AI and used it with default settings to produce a DNG file. Unfortunately the DNG file was not visible within PhotoLab, and was unusable within Silkypix because of an extremely strong magenta cast which remained unchanged on setting the white balance to Flash in Silkypix. The DNG file loaded ok into Lightroom, but the white balance was not adjustable in absolute measures (there are no settings for Daylight, Cloudy, Flash etc, and Temp can be set only from -100 to +100, not in degrees). I decided I would not want to use DeNoise AI DNG files under these constraints.

I tried exporting TIFF rather than DNG from DeNoise AI. PhotoLab saw the TIFF file but would not display it, instead giving a "Loading error" message. With both DNG and TIFF ruled out, this meant that none of PhotoLab's functions would be available unless I first converted to JPEG or PNG, which did not seem like a sensible approach.

Silkypix and Lightroom loaded the TIFF file ok. As with the DNG file, in Lightroom the white balance was only available with relative rather than absolute adjustments. I decided to proceed anyway, using my usual Silkypix preset and then doing image-specific adjustments in Lightroom. Comparing the two versions I liked the alternative, DeNoise AI etc version, and I still liked it better after trying to adjust the reference version to improve it.

Here we have the reference version on the left and the alternative version on the right, shown at my target 1300 pixel tall display size if viewed full size at Flickr. (Here I am using a downsized version with a maximum side length of 1024 pixels, as advised for this site.)


1619 10 DNAI as raw processor vs reference version
by gardenersassistant, on Flickr

I prefer the alternative version: the colours and the tonality look more natural to me. (For example, looking at the wasp's face, the reference version's hues look slightly "forced"/"overcooked" even after having been toned down, and similarly for the light distribution on the face and on the back behind the head, where the highlights seemed better controlled in the alternative version. I got a similar impression of preferable tonality and hues when comparing the whole images.) Despite not having used PhotoLab's lens-specific sharpening, details such as the fine hairs look as good to me in the alternative version as in the reference version. The background looked smoother in the alternative version.

Despite the limitations on workflow options, this was promising, so I decided to do more tests of this approach.

I picked another five bridge camera raw files and, as above, compared reference and alternative versions. In each case I preferred the alternative versions. I decided to do a larger scale comparison which, as well as comparing my impressions of image quality and attractiveness, would give me a better idea about the practicality of the alternative approach for the larger numbers of images I typically process from a session.

I selected 40 raw files from seven cameras (small sensor, micro four thirds, APS-C and full frame) and created reference versions.

I then ran into problems with DeNoise AI. I loaded the 40 raw files into it and batch processed them using the DeNoise AI method with default parameters (DeNoise AI, the product, has two methods, one named DeNoise AI and another, legacy, method called AI Clear). When looking at the images in Lightroom after running the Silkypix preset on the TIFF files produced by DeNoise AI, it appeared that DeNoise AI had not applied any noise reduction. (DeNoise AI also hung at one point and I had to restart my PC to get it to run after that. There have been a number of complaints from the user community about DeNoise AI 2.0 being released in a buggy state. For example, it can use graphics cards to speed up operations - which can be quite slow - but on my PC, with its modern, powerful graphics card with up-to-date drivers, I have to turn off the graphics card option in order to get DeNoise AI to work. Other people have had the same problem, and various other issues.)

I decided to try the legacy AI Clear method. That worked. However, with raw files from Panasonic G3, G5, G80 and G9, Sony A7 II and Canon 70D cameras (all except the Panasonic bridge camera images I had used for the previous tests) I saw serious white balance issues. Here is an example. On the left is a DNG file produced by PhotoLab from a raw file. On the right is a TIF file produced from the same raw file by DeNoise AI. (I got the same effect from the DeNoise AI method, and the same effect with other output file types.)


1619 06 Odd colour and tonality from DNAI raw conversion
by gardenersassistant, on Flickr

The PhotoLab version looks credible and is usable, either as is or with a minor white balance modification. No matter how hard I tried I could not adjust the DeNoise AI version to give me credible colours. You might think (I certainly did) that it should simply be a matter of adjusting the white balance. However, I suspect that only having relative adjustments available was limiting where I could take the colours. In comparison, the availability of absolute white balance settings, as for example with DNG files produced by PhotoLab, makes setting the white balance for flash-based images like these a one click operation. Since DeNoise AI's DNG files also only have relative white balance adjustments I could not produce a usable version of this image when using DeNoise AI as the raw converter.

I decided that using DeNoise AI as the raw converter was not an option I wanted to pursue.
 
Part 4 - Testing batch processing options

Having decided not to pursue DeNoise AI as the raw converter, I was left with two options as far as DeNoise AI was concerned: one was to carry on using it on the display size JPEGs, the other was to use it from within Lightroom. I decided to prepare a set of images on which to use and compare these two options.

I selected 60 raw files to work on. I picked them so as to give some variety in terms of sensor sizes (1/2.3", MFT, APS-C and FF) and also in terms of the quality of the raw files. I deliberately picked some that I knew from previous experience were difficult to handle given what I wanted from them.

I started out thinking about this in terms of my current workflow, so I did my usual PhotoLab and Silkypix batch operations on the 60 raw files and I started using Lightroom as usual to adjust the TIF files produced by Silkypix. While still in Lightroom I did a round trip to DeNoise AI to smooth out the backgrounds and reveal details in the subject.

This approach seemed to work quite well, apart from a bug which made using DeNoise AI a bit frustrating. I could invoke DeNoise AI from Lightroom and use it on the current image with no problem. When I returned to Lightroom there was a new (TIF) copy of the image, but it looked exactly the same as the original - the changes I had made in DeNoise AI had not been applied to it. Well, sometimes they were, and sometimes they were not. I eventually discovered that when they had not been applied I could make them appear by invoking DeNoise AI again, this time on the new (but unaltered) copy of the image rather than on the original. I would then be taken to DeNoise AI. If I then closed that down without doing anything, on return to Lightroom there would be a third copy of the image, this one containing the changes that I had made to the original version. And perhaps the second copy would now have the changes too. All very odd but, once I had worked out what was going on, it seemed a tiresome quirk I could live with (perhaps it will be fixed in a future version).

About half way through the 60 images I came across one in which the PhotoLab/Silkypix processing had produced a halo. Not a sharpening halo, a wider, dark halo along a bright edge. I have seen this before; Silkypix sometimes produces this effect when you pull highlights down a long way (which is what my Silkypix preset does). So for that image I used Lightroom on the DNG file produced by PhotoLab before Silkypix had been used. That solved the problem of the halo, but I still didn't like the look of the image. I wondered what would happen if I used Lightroom on the original raw file. I wouldn't be able to use PhotoLab's PRIME noise reduction, or its camera/lens-specific sharpening, but I would be using DeNoise AI's noise reduction and sharpening; perhaps that would be enough, so I tried it.

I liked the look of what I got by simply using Lightroom and the DeNoiseAI plugin, and when I tried it on the previous image and compared it to what I had got with PhotoLab+Silkypix+Lightroom+DeNoise AI I much preferred the look of the Lightroom+DeNoise AI version. I worked my way back through the 30 or so images I had already processed and in all but one case I preferred the Lightroom+DeNoise AI version.

What I liked about the Lightroom+DeNoise AI versions was that they looked more natural to me. Also, not being flattened as much, they had a stronger overall contrast that I found more pleasing to look at. This meant that with the highlights at about the same level in the two versions, the shadows tended to be darker in the Lightroom+DeNoise AI versions and might therefore reveal less detail. On the other hand (and I suppose this sounds strange given the relative flatness of the PhotoLab+Silkypix etc versions), in comparison with the Lightroom+DeNoise AI versions, a lot of the PhotoLab+Silkypix etc versions had something of a garish, overstated look to them. Arguably I should be able to fix this by working harder on the PhotoLab etc versions (although when I tried I wasn't really convinced by the results), but why would I want to do this when I could get results I preferred in a very quick and straightforward way with Lightroom+DeNoise AI?

Having got all the way back to the first example, I had another go at the one image for which I preferred the PhotoLab+Silkypix etc version. This was image #1.2 in the top post. I worked hard with Lightroom+DeNoise AI, using local adjustments as well as global adjustments, but there was an area of highlight on the body between the shell and the head which I could not get under control like it was in the PhotoLab+Silkypix etc version. I then outputted a TIF file from Lightroom, picked it up in Silkypix, adjusted the highlights, outputted a TIF file and picked that up again in Lightroom (there is no Lightroom plugin version of Silkypix). That worked to my satisfaction.

I was comfortable that I had found a way forward for images I couldn't fully handle with Lightroom+DeNoise AI. As well as looping out externally to Silkypix via TIF export/import, I assumed I could do the same for PhotoLab, since you can invoke PhotoLab from within Lightroom. It turned out later that I had assumed wrong: PhotoLab gave a "Loading error" message when invoked from Lightroom. Unfortunately it did the same when I exported a TIF from Lightroom, which means that, unlike Silkypix, PhotoLab is not accessible as part of a workflow based on Lightroom+DeNoise AI.

Before discovering this I used Lightroom+DeNoise AI for the other 30 or so images. I was content enough with what I got with all of them without wanting to invoke Silkypix or PhotoLab. I didn't regard a few of them as being of usable quality, but I felt the intrinsic difficulties presented by the raw files concerned would have prevented me getting an acceptable to my eye result no matter what processing I used. (This was based on experience; I didn't test it with alternative processing.) I did invoke Photoshop for one of the images for some cloning I couldn't do in Lightroom, but that would have also been the case had I been using Silkypix and/or PhotoLab.

At that point I started drafting these posts. However, the next day, when I looked back through all 60 comparisons rather carefully in order to better describe what I liked about the Lightroom+DeNoise AI versions, I was shocked to find that now my preferences were split about 50/50 between the reference versions and the Lightroom+DeNoise AI versions. Perhaps this was because the previous day I was concentrating too much on just one aspect of the images, or perhaps I had fallen into a trap of "I like the ones I just processed".

Anyway, this was getting tiresome.

Working out what to do next

I decided to be more methodical. Before I started the next stage I would work out what I was going to do and how I would assess the results. Using the same 60 raw files as before, I decided to do four raw conversions for each file and decide which conversion I wanted to post process.

The four versions would be prepared as follows and loaded into Lightroom to do the comparisons:
  • Lightroom, using Lightroom defaults
  • PhotoLab, using my usual presets
  • Silkypix, using Silkypix defaults
  • PhotoLab then Silkypix, using my usual presets for both
I would compare the four versions in terms of:
  • Tonality, which for this exercise I took to mean "Overall subjective quality of tones -- i.e. the overall and local contrast, the highlight and shadow qualities" (simplified version of a definition at http://www.idigitalphoto.com/dictionary/tonality)
  • Clarity and "dimensionality". The extent to which the image looked clean/clear, and the extent to which the subject stood out and/or had a somewhat 3D look to it.
  • Colour, including hues (credibility, whether pleasing to my eye, and how far retained in highlights) and saturation (not too much, not too little).
  • Detail, in terms of the amount of detail that was present, and its visibility. I would look at the whole images in Lightroom, almost as large as my intended display size (I mostly stuck to that and didn't zoom in, but I did go to 1:1 a handful of times to check on issues I thought I was seeing but could barely make out.)
Most of these factors can be adjusted in post processing. I would be looking for the version that I thought would most easily produce a result that I would like the look of.

I would compare pairs of images by switching between them rather than having them side by side on the screen as I find switching makes differences stand out much more clearly.

Having chosen a preferred conversion of each image I would then post process it with image-specific adjustments in Lightroom.

I would then use both DeNoise AI approaches for each image, and produce 1300 pixel high display size versions of both of them and of the version on which they were based. I would compare the three versions to see if DeNoise AI was providing any useful improvement, and if so which of the two DeNoise AI approaches I preferred the look of.
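
As an aside on the mechanics of producing fixed-height display versions: it is just aspect-ratio arithmetic, which Lightroom's export handles internally. A sketch of the calculation only, with an assumed example frame size:

```python
def scale_to_height(width, height, target_height=1300):
    """Compute (new_width, new_height) for a fixed-height export,
    preserving the aspect ratio."""
    if height <= 0:
        raise ValueError("height must be positive")
    return round(width * target_height / height), target_height

# A hypothetical 4000x3000 frame exported at 1300 px high:
print(scale_to_height(4000, 3000))  # (1733, 1300)
```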

Well, that was the plan.....

Comparing the conversions

In general PhotoLab or PhotoLab+Silkypix produced the conversions I wanted to take forward.

PhotoLab and PhotoLab+Silkypix did much better with images which needed a significant boost to lightness. I think this is because of the mild application of PhotoLab's Smart Lighting in my PhotoLab presets. Here is an example with the Lightroom and Silkypix versions on the left and the PhotoLab and PhotoLab+Silkypix versions on the right.


1619 07 globby LR,SP defaults vs PL,PL-SP GA presets
by gardenersassistant, on Flickr

Lightroom has an Auto Tone function and Silkypix has an Auto Adjust function, but for my purposes I don't feel these produce good results. Here is the same example with the Lightroom Auto Tone and Silkypix Auto Adjust versions on the left and the PhotoLab and PhotoLab+Silkypix versions on the right.


1619 08 globby LR,SP auto vs PL,PL-SP GA presets
by gardenersassistant, on Flickr

My invertebrate presets include a modest application of PhotoLab's ClearView function. Sometimes this can produce an unnatural looking harsh contrast on petals (for flowers I use different presets which have ClearView turned off). This occurred in one of the 60 I was working on. For that one I re-ran the PhotoLab processing with ClearView turned off and reapplied the Silkypix preset, after which I was content with the look of the petals in the PhotoLab and PhotoLab+Silkypix versions.

The PhotoLab + Silkypix versions were generally flatter looking than the PhotoLab versions, and sometimes I chose the more appealing PhotoLab version to use more or less "as is" in terms of tonality rather than using the PhotoLab+Silkypix version and "deflattening" it. However, in many cases Silkypix made a significant (and to my eye beneficial) difference to highlights. Most often this effect was localised in small areas, but sometimes it had widespread effects across the image. Here is one example of this, with the Lightroom version at the top left, Silkypix version bottom left, PhotoLab version top right, and PhotoLab+Silkypix version bottom right.


1619 11 PL+SP image-wide highlight reduction
by gardenersassistant, on Flickr

Highlight reduction is, for me, Silkypix's major strength, providing something very useful to me that my other applications cannot. Or so I thought.

In one of the images the subject had a small area of what looked like blown highlights. The big highlight drawdown done by Silkypix in the PhotoLab+Silkypix version had not cured the problem. I assumed nothing else would, but out of curiosity I tried using a PhotoLab local adjustment in that area. I started again with the raw file, set up the preset, and before running it applied both global highlight reduction across the whole image and a local highlight adjustment in the problem area. To my surprise, this produced (to my eye) a better result than Silkypix had achieved, as illustrated below on the right.


1619 04 Silkypix vs PhotoLab local highlight reduction
by gardenersassistant, on Flickr

This immediately opened up another possibility in my mind. If PhotoLab could give me better results than Silkypix for problem highlights, why continue using Silkypix? (Highlight management was the only reason I was using Silkypix.) Taking Silkypix out of the workflow would also remove the "Silkypix flattening" problem. I wouldn't be using batch mode any more, so the processing might take longer (but perhaps not, see below). But if I could get better image quality by using PhotoLab local corrections then it might be worth it. I played with the PhotoLab local corrections a bit; it was not just a matter of highlight management. The selection method in PhotoLab seemed rather good for some of the things I wanted to do, and there was a good range of adjustments that could be made on the selections.

I decided to go through all 60 images yet again.
 
Last edited:
Part 5 - Moving away from batch processing

I started again with the raw files in PhotoLab and applied my normal presets. I then went through the images applying PhotoLab local adjustments where it seemed appropriate, exported them all as DNG and picked them up in Lightroom. It turned out that I did not need to do much to many of the images other than cropping them to the same crops I had used for the other versions, so the extra time spent in PhotoLab was offset by a reduction in the time needed in Lightroom. In practice, dropping batch processing did not feel as burdensome as I had feared.

Here are some of the image issues I had to deal with whichever method I used, and for some of which PhotoLab local adjustments proved useful.

The overall lightness of some images had to be raised quite a lot, including images captured with a noisy small sensor like the one in the upper example below. In others I wanted to raise the background a lot, like in the lower example below, which is also from a small sensor bridge camera.


1619 01 Dark small sensor images
by gardenersassistant, on Flickr

In some cases the subject was relatively dark against a brighter background. I did not have any strong examples of this in the test images, but the two test images below illustrate the issue in a more moderate form. (It is with some flower etc images, shot contre-jour, that the issue can become acute.)


1619 02 Subject dark against lighter background
by gardenersassistant, on Flickr

It turned out that in terms of highlight control, PhotoLab local adjustments sometimes lagged behind Silkypix in what they could achieve, as illustrated below.


1619 03 Silkypix vs PhotoLab highlight reduction
by gardenersassistant, on Flickr

And then there was the amount of cropping I was doing, which interacted with some of the other issues. Hopefully you will see the following illustration switch between two versions every five seconds. One version shows the maximum crop that I tested and the other shows the crops that I ended up using.


1619 05 Biggest-crops-tested-and-used
by gardenersassistant, on Flickr

Given that the target display size was 1300 pixels high, the top two examples would have been 100% crops if I had used the maximum crop that I tested. This is a very big ask of a small sensor camera. In fact I did use that crop for the one on the right, but eased off the crops for the others. It is a matter of taste, judgement and usage context as to which, if any, of these large crops produced an acceptable image. Whatever one's views on that, in case anyone is interested I have included the 100% crop versions as Images 9.1 to 9.4 in this Flickr album, to illustrate that in the testing I did push some of the images very hard to see how the various workflows coped.
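
For concreteness, the "100% crop" arithmetic works like this: a crop whose pixel height equals the display height is shown at 1:1 (one image pixel per display pixel), and taller crops are downsampled. A small sketch of that calculation (the pixel heights below are illustrative assumptions, not taken from the test images):

```python
def display_magnification(crop_height_px, display_height_px=1300):
    """Percentage of 1:1 viewing when a crop is resized to the display
    height: 100 means one image pixel per display pixel (a 100% crop)."""
    if crop_height_px <= 0:
        raise ValueError("crop height must be positive")
    return 100 * display_height_px / crop_height_px

print(display_magnification(1300))  # 100.0 -> a true 100% crop
print(display_magnification(2600))  # 50.0  -> downsampled to half size
```

The smaller the percentage, the more the downsampling hides noise and softness, which is why easing off the crop makes small-sensor images look better at display size.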

In terms of using PhotoLab for image-specific adjustments as well as raw conversion prior to using Lightroom, I felt the workflow handled the various difficulties as well as anything else I had tried.

I now had images that I felt were in a suitable condition for testing the two DeNoise AI alternatives.
 
Last edited:
Part 6 - DeNoise AI - plugin or final stage?

Getting back to the plan, I now had a converted and post processed version of each of the 60 images. I produced three 1300 pixel tall versions of each: one as it looked before applying DeNoise AI, one with DeNoise AI applied with Auto settings from within Lightroom, and one with DeNoise AI applied to a JPEG exported from Lightroom without DeNoise AI. I initially used Auto settings for the latter, but replaced them with versions using the zero setting for Sharpening in DeNoise AI because I saw some sharpening halos with the Auto settings.

In terms of noise reduction I was content with what DeNoise AI achieved on my sometimes crunchy backgrounds, whether applied from inside Lightroom or applied to the display size JPEG.

In terms of sharpening I had to look very carefully when comparing the images because the differences were at most rather subtle, and in some cases pretty much indiscernible. However, despite being subtle, I felt the differences were significant, and beneficial. In every case where I noticed a difference I felt that one or other of the DeNoised versions was best (i.e. never the one to which DeNoise AI had not been applied). For 48 of the images I decided to use the version that used DeNoise AI from within Lightroom, using the JPEG-denoised version for the other 12. In some cases it could have been either.

The 60 images produced using this workflow are in this album at Flickr.
 
Last edited:
Part 7 - The new workflow

The way I do my photography means that I have to deal with large numbers of images (typically hundreds per session, occasionally over 1,000, often for several consecutive days, sometimes more than one session in a day). I therefore need to balance efficiency in handling the images with effectiveness in terms of producing images that I find pleasing. Recently I have been slowing down a bit when doing the processing, paying greater attention. In that context the idea of moving to a workflow that may be slower does not feel problematic. And in any case, it remains to be seen whether it really is so much slower.

I am going to try for a while using the PhotoLab (preset modified by image-specific adjustments) then Lightroom (image-specific adjustments), calling DeNoise AI from within Lightroom. Subject to DeNoise AI not falling over if I try to do too many at once, the application of DeNoise AI can be done on a batch basis from within Lightroom. I can select a number of the images and send them to DeNoise AI with a couple of mouse clicks, set them all to Auto settings (one click does all of them) and set them all processing (one click). In due course the adjusted versions turn up in Lightroom. (I'm puzzled as to why this seemed to work ok when previously it didn't. That is something I need to check more carefully.)

I like the way PhotoLab and Lightroom (differently) offer functionality that is very powerful (at least for what I want to do), particularly their different approaches to local adjustments. I have the feeling of a lot of power at my fingertips which can be deployed very flexibly with a fair degree of subtlety. One minor (or perhaps sometimes not so minor) element of that is the fact that PhotoLab DNGs do not pin down the white balance so in Lightroom I still have absolute white balance adjustments (in degrees, and with flash etc presets). As now, for some creative tweaks that I can't do in either Lightroom or PhotoLab, I can round trip seamlessly to Photoshop or, with not too much extra effort, round trip (not so seamlessly) to Silkypix should that be necessary.

I will try the same workflow for my other main subject areas: flowers and other botanical subjects (for the stills element of this subject area, and adjusted for the focus stacking element); common birds in flight; skyscapes and sunsets.

For now, that is the direction I am headed. It is anyone's guess as to how long it will be before I get diverted (yet again) in some other direction.
 
Last edited:
Very interesting, I've started reading through the posts and will continue when I've got a little more time.. I've not used any of the apps you've been using except Lightroom/PS.

P.S. I did wonder what you were up to when I saw all of the comparison images start to appear on Flickr! :)
 
Interesting Nick about DeNoise AI. I use Sharpen AI mainly for old nature/wildlife photos that aren’t as sharp as I can get now; it’s really good, and has meant I’ve been able to get some nice prints made that previously weren’t very good
 
I’ve also just purchased a Canon 6D 2 for macro work
Mainly for increased quality from full frame but also the tilting rear screen appeals for low down work
Also the live view autofocus is apparently very good I’m hoping it’s going to make photographing damselflies easier
 
Interesting Nick about DeNoise AI. I use Sharpen AI mainly for old nature/wildlife photos that aren’t as sharp as I can get now; it’s really good, and has meant I’ve been able to get some nice prints made that previously weren’t very good

DeNoise AI does sharpening as well as noise reduction, but it's the noise reduction I'm finding most useful. It handles my sometimes crunchy backgrounds really nicely. The extra sharpening is fine, but rather subtle. I could turn it up, of course, but once or twice when I've done that I've seen halos. In any case I think I'm moving towards the view that too much sharpness (or perhaps it is really microcontrast) can make an image look overcooked and (for me) slightly uncomfortable to look at, and I'm quite comfortable with DeNoise AI's defaults.

I’ve also just purchased a Canon 6D 2 for macro work
Mainly for increased quality from full frame but also the tilting rear screen appeals for low down work
Also the live view autofocus is apparently very good I’m hoping it’s going to make photographing damselflies easier

I hope it goes well.

I found the live view autofocus on the 70D acceptable, although that was with close-up lenses on a telezoom. With a macro lens I didn't like the autofocus hunting I encountered, with or without extension tubes/teleconverter. With the Raynox 150 or 250 on an EF-S 55-250 STM, autofocus was slightly slower and less reliable than on my Panasonics. The single area focus box was rather large too, which reduced the precision with which I could place the centre of focus compared to the Panasonics with their very small focus area option. But perhaps one or both of these issues have been improved on the 6D 2.
 
I hope it goes well.

I found the live view autofocus on the 70D acceptable, although that was with close-up lenses on a telezoom. With a macro lens I didn't like the autofocus hunting I encountered, with or without extension tubes/teleconverter. With the Raynox 150 or 250 on an EF-S 55-250 STM, autofocus was slightly slower and less reliable than on my Panasonics. The single area focus box was rather large too, which reduced the precision with which I could place the centre of focus compared to the Panasonics with their very small focus area option. But perhaps one or both of these issues have been improved on the 6D 2.

Thanks it’s my first full frame camera
Hopefully the live view focus works well for macro
It’s apparently good for normal subjects like people etc but I haven’t been able to find out what it’s like for macro
 
Last edited:
Calibrating diffusers

A discussion at another site led me to think more carefully about my diffusers, and specifically about whether they are producing a problematic colour cast. It turns out that they are, and it varies a lot between diffuser setups. It also turns out to be fairly straightforward to deal with, with (for my images at least, and to my eye) clear benefits in terms of image quality.

There are several factors that could change the colour of light falling on the scene from a flash setup:
  • The colour of the light the flash heads are emitting, and this may vary between instances of a particular model
  • The materials the diffusers are made from
  • Reflections of the flash light from surrounding surfaces
  • Reflections of natural light if the flash light is not dominating the illumination.
The eventual colours seen in an image can also be impacted by:
  • The close-up lens(es), if used
  • The camera lens
  • The camera sensor
  • Any in-camera conversion/processing including any in-camera auto-setting or manual setting of white balance
  • Any out-of-camera conversion/processing
I started by testing eight diffuser arrangements, keeping as many of these factors constant as I could, by using:
  • The same camera and close-up lens
  • The same flash unit
  • Dominant flash illumination
  • The camera and subject in the same position
  • The same subject (the large 18% grey area of a ColorChecker Passport)
  • Shooting raw
  • Using the same conversion/post processing (Lightroom, with default settings).
I first tested the 8 setups shown below. All used a Panasonic FZ330 bridge camera and a Raynox 150 close-up lens. From 1.1 to 1.8 the setups were as follows (All diffusion layers are separated with air gaps, of varying depth):
  1. Metz 58 AF-2 with 4 layers of expanded polystyrene (two pairs of two different sorts of expanded polystyrene)
  2. Yongnuo YN24EX with 2 layers of expanded polystyrene
  3. KX800 with two layers of plastic paper on each flash head and a wide layer of plastic paper in front
  4. Second KX800 with no diffusion
  5. Second KX800 with three layers of plastic paper on each flash head
  6. Second KX800 with one layer of expanded polystyrene and two layers of plastic paper on each flash head
  7. Third KX800 with one layer of expanded polystyrene and two layers of plastic paper on each flash head, and a wide layer of plastic paper in front (This is my currently used setup, with the diffusion layers that have been in place for quite a long time and possibly discoloured with age)
  8. As previous, but with all three plastic paper layers renewed with fresh material that has been stored, boxed, out of the light.

1625 Illustration 1 - Diffuser White Balance shifts
by gardenersassistant, on Flickr

The next eight images in the above illustration, 2.01 to 2.08, show the colour produced by each setup. The variations in colour between the different setups are huge.

The final two colours, 2.09 and 2.10, use the eighth setup (my currently used setup with mainly renewed diffusion material) on different cameras: an FZ200 bridge camera and a Panasonic G9 micro four thirds camera. Compared to the large differences between the different flash/diffuser setups, the differences between the three cameras using the same setup appear quite mild.

I next loaded the raw files in turn into PhotoLab, Silkypix and Lightroom and used their white balance pickers to measure the white balance setting needed to turn the colour to neutral grey. For each of the three applications the numbers varied somewhat from spot to spot. The table below shows typical values.


1625 Illustration 2 - White balance measurements for tested camera,diffuser setups
by gardenersassistant, on Flickr

The table shows that for any particular setup the measurements given by the three applications varied. Looking through the setups, the differences between the three applications' assessments do not vary in any consistent way that I can detect. These variations suggested to me that the best approach is to use white balance values specific to the camera, lens, flash unit, diffusion setup and application used to convert the raw file. I imagine the most reliable approach would be to take a grey card capture during a session and use that to set the white balance, using the white balance picker in the application used for the raw conversion.
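
What a white balance picker does on a grey patch amounts to neutral-point scaling: derive per-channel gains that make the measured patch neutral. The sketch below shows only the principle; it is not the actual algorithm of PhotoLab, Silkypix or Lightroom, which express the correction in Temp/Tint terms, and the patch values are assumptions:

```python
def grey_card_gains(r, g, b):
    """Per-channel gains that neutralise a measured grey-patch RGB,
    normalised to the green channel (a common raw-converter convention)."""
    if min(r, g, b) <= 0:
        raise ValueError("channel values must be positive")
    return g / r, 1.0, g / b

def apply_gains(rgb, gains):
    """Multiply each channel by its gain."""
    return tuple(c * k for c, k in zip(rgb, gains))

# A warm cast: the grey card reads (220, 200, 180); after correction
# all three channels equal the green value, 200.
gains = grey_card_gains(220, 200, 180)
print(apply_gains((220, 200, 180), gains))
```

This also illustrates why a grey card shot per session is so effective: one measurement captures the combined effect of flash, diffuser and lens in a single set of gains.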

Not only do the three applications differ as to the white balance numbers they give for a particular image, but as illustrated below they also use different numbers for their flash preset, and in fact Silkypix offers three different flash presets.


1625 Illustration 3 - Application white balance settings
by gardenersassistant, on Flickr

That was all interesting in demonstrating that diffusers can have a big impact on colours, with different diffusers having different effects and different applications using different flash settings. The thing that proved really useful from the practical point of view, though, was to set the camera white balance so that it produced neutral grey from the 18% grey card. I did this for an FZ200 with the flash unit and diffuser setup that I have been using with the Raynox 150 and 250, setting it using the diffusers with the renewed plastic paper layers. I started with the white balance numbers for that combination of kit taken from the measurement table above, probably (I don't remember) the PhotoLab numbers. I then captured a test image of the 18% grey card and used the colour picker in Lightroom to get the Temp and Tint numbers (this was a mistake; I should have done this in PhotoLab as I would be using that for raw conversion). I then compared those numbers to the ones I got when I applied the white balance picker. After two or three iterations I got the "out of the camera" Temp and Tint as close as I could to the white balance picker values. (It was the wrong white balance picker, but as it turned out this was not a serious mistake, and I will correct it for future use.)
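
The two-or-three-iteration convergence described above is a simple feedback loop: set the camera's Temp, photograph the grey card, read the residual correction the picker still wants, add it to the setting, repeat. A toy one-dimensional simulation of that loop (the linear 80%-response camera model and all the numbers are pure assumptions for illustration; real Temp/Tint behaviour is two-dimensional and non-linear):

```python
def calibrate(initial_temp, picker_residual, tolerance=50, max_iters=10):
    """Nudge a camera Temp setting until the picker's remaining
    correction is within tolerance of zero."""
    temp = initial_temp
    for _ in range(max_iters):
        residual = picker_residual(temp)
        if abs(residual) <= tolerance:
            break
        temp += residual  # apply the correction the picker reports
    return temp

# Toy camera that only realises 80% of the requested Temp shift,
# with an effective neutral point at 5600 K.
residual = lambda temp: 5600 - 0.8 * temp
final = calibrate(5000, residual)
print(final, residual(final))
```

The point of the sketch is just that repeatedly applying "measure and add the difference" converges quickly when the response is roughly proportional, which matches the two or three iterations reported above.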

To test whether this change would make any difference I incorporated the new camera Temp and Tint values into my flash preset on the FZ200 and went into the garden to try and find some subjects. Fortunately I did find a handful, enough for the test to be meaningful, especially since they were all on two choisya bushes that have long given me colour problems. They have leaves which vary greatly in hue, saturation and lightness, from quite bright yellows through to quite strong greens, ranging from highly saturated to highly unsaturated and also varying in lightness. This is not just variation as the season progresses. It is variation from leaf to leaf, and even across a single leaf. I have had a lot of trouble getting the yellows to what seemed a credible state, this being especially difficult to judge because of the large natural colour variations in the foliage.

Aside: If shooting raw, which I do, why does it matter what white balance I set on the camera? It is easy enough to alter during post processing. In my experience it is certainly easy to change the white balance from raw, but I find it can be, and with these particular bushes it too often has been, difficult to get to colours that I am fully comfortable with; and occasionally more than difficult - there have been occasions when I have given up and left the yellows in a state I didn't at all like. An approach that gives me credible colours "out of the camera" (as far as there is such a thing with raw) without me having to struggle with it would be very welcome.

I used my (currently) normal workflow:
  • Raw into PhotoLab and use a preset specific to the camera type and ISO, this time using "As shot" for the white balance. Output DNG.
  • DNG into Silkypix and use a common (all cameras, ISOs etc) preset, with "As shot" white balance and output TIFF.
  • TIFF into Lightroom, with "As shot" white balance, and apply image-specific adjustments. This did not include any white balance adjustments. Round trip from Lightroom to DeNoise AI using "Auto" settings and back to Lightroom. Output to 1300 pixel high JPEGs using Lightroom output sharpening at the "Low" setting.
The images from this test session are in this album at Flickr; the invertebrate images #1 to #17 and #27 to #31 used the flash setup. It seemed to me that the new arrangement worked well. Simply using the "As shot" white balance produced colours that looked credible to me (even though I was using not quite the right numbers). This is an approach that I will be using in future.

I have posted eight of the images here in the forum for those who don't like to click away from the site.
 
Last edited:
Calibrating diffusers

A discussion at another site led me to think more carefully about my diffusers, and specifically about whether they are producing a problematic colour cast. It turns out that they are, and it varies a lot between diffuser setups. It also turns out to be fairly straightforward to deal with, with (for my images at least, and to my eye) clear benefits in terms of image quality.

There are several factors that could change the colour of light falling on the scene from a flash setup:
  • The colour of the light the flash heads are emitting, and this may vary between instances of a particular model
  • The materials the diffusers are made from
  • Reflections of the flash light from surrounding surfaces
  • Reflections of natural light if the flash light is not dominating the illumination.
The eventual colours seen in an image can also be impacted by:
  • The close-up lens(es), if used
  • The camera lens
  • The camera sensor
  • Any in-camera conversion/processing including any in-camera auto-setting or manual setting of white balance
  • Any out-of-camera conversion/processing
I started by testing eight diffuser arrangements, keeping as many of these factors constant as I could, by using:
  • The same camera and close-up lens
  • The same flash unit
  • Dominant flash illumination
  • The camera and subject in the same position
  • The same subject (the large 18% grey area of a ColorChecker Passport)
  • Shooting raw
  • Using the same conversion/post processing (Lightroom, with default settings).
I first tested the 8 setups shown below. All used a Panasonic FZ330 bridge camera and a Raynox 150 close-up lens. From 1.1 to 1.8 the setups were as follows (All diffusion layers are separated with air gaps, of varying depth):
  1. Metz 58 AF-2 with 4 layers of expanded polystyrene, two pairs of two different sorts of expanded polystyrene
  2. Yongnuo YN24EX with 2 layers of expanded polystyrene
  3. KX800 with two layers of plastic paper on each flash head and a wide layer of plastic paper in front
  4. Second KX800 with no diffusion
  5. Second KX800 with three layers of plastic paper on each flash head
  6. Second KX800 with one layer of expanded polystyrene and two layers of plastic paper on each flash head
  7. Third KX800 with one layer of expanded polystyrene and two layers of plastic paper on each flash head, and a wide layer of plastic paper in front (This is my currently used setup, with the diffusion layers that have been in place for quite a long time and possibly discoloured with age)
  8. As previous, but with all three plastic paper layers renewed with fresh material that has been stored, boxed, out of the light.

1625 Illustration 1 - Diffuser White Balance shifts
by gardenersassistant, on Flickr

The next eight images in the above illustration, 2.01 to 2.08, show the colour produced by each setup. These show huge variations in colour produced by the different setups.

The final two colours, 2.09 and 2.10 use the eighth setup (my currently used setup with mainly renewed diffusion material) on different cameras, an FZ200 bridge camera and a Panasonic G9 micro four thirds camera. Compared to the large differences between the different flash/diffuser setups, the differences between the three cameras using the same setup appear quite mild.

I next loaded the raw files in turn into PhotoLab, Silkypix and Lightroom and used their white balance pickers to measure the white balance setting needed to turn the colour to neutral grey. For each of the three applications the numbers varied somewhat from spot to spot. The table below shows typical values.


1625 Illustration 2 - White balance measurements for tested camera,diffuser setups
by gardenersassistant, on Flickr

The table shows that for any particular setup the measurements given by the three applications varied. Looking through the setups the differences between the three applications' assessments do not vary in any consistent way that I can detect. These variations suggested to me that the best way to handle this would be to use white balance values which are specific to the camera, lens, flash unit, diffusion setup and application used to convert the raw file. I imagine the most reliable approach would be to take a grey card capture during a session and use that to set the white balance, using the white balance picker in the application used for the raw conversion.

Not only do the three applications differ as to the white balance numbers they give for a particular image, but as illustrated below they also use different numbers for their flash preset, and in fact Silkypix offers three different flash presets.


1625 Illustration 3 - Application white balance settings
by gardenersassistant, on Flickr

That was all interesting in demonstrating that diffusers could have a big impact on colours, with different diffusers have different effects, and different applications using different flash settings, but the thing that proved really useful from the practical point of view was to set the camera white balance so that it produced neutral grey from the 18% grey card. I did this for an FZ200 with the flash unit and diffuser setup that I have been using with the Raynox 150 and 250, setting it using the diffusers with the renewed plastic paper layers. I started with the white balance numbers for that combination of kit taken from the measurement table above, probably (I don't remember) the PhotoLab numbers. I then captured a test image of the 18% grey card, used the colour picker in Lightroom to get the Temp and Tint numbers (this was a mistake, I should have done this in PhotoLab as I would be using that for raw conversion). I then compared those numbers to the ones I got when I applied the white balance picker. After two or three iterations I got the "out of the camera" Temp and Tint as close as I could to the white balance picker values. (It was the wrong white balance picker, but as it turned out this was not a serious mistake, and I will correct it for future use.)

To test whether this change would make any difference I incorporated the new camera Temp and Tint values into my flash preset on the FZ200 and went into the garden to try and find some subjects. Fortunately I did find a handful, enough for the test to be meaningful, especially since they were all on two choisya bushes that have long given me colour problems. They have leaves which vary greatly in hue, saturation and lightness, from quite bright yellows through to quite strong greens, ranging from highly saturated to highly unsaturated and also varying in lightness. This is not just variation as the season progresses. It is variation from leaf to leaf, and even across a single leaf. I have had a lot of trouble getting the yellows to what seemed a credible state, this being especially difficult to judge because of the large natural colour variations in the foliage.

Aside: If shooting raw, which I do, why does it matter what white balance I set on the camera? It is easy enough to alter during post processing. In my experience it is certainly easy to change the white balance from raw, but I find it can be, and with these particular bushes it too often has been, difficult to get to colours that I am fully comfortable with - and occasionally worse than difficult: there have been occasions when I have given up and left the yellows in a state I didn't like at all. An approach that gives me credible colours "out of the camera" (as far as there is such a thing with raw) without a struggle would be very welcome.

I used my (currently) normal workflow:
  • Raw into PhotoLab and use a preset specific to the camera type and ISO, this time using "As shot" for the white balance. Output DNG.
  • DNG into Silkypix and use a common (all cameras, ISOs etc) preset, with "As shot" white balance and output TIFF.
  • TIFF into Lightroom, with "As shot" white balance, and apply image-specific adjustments. This did not include any white balance adjustments. Round trip from Lightroom to DeNoise AI using "Auto" settings and back to Lightroom. Output to 1300 pixel high JPEGs using Lightroom output sharpening at the "Low" setting.
The images from this test session are in this album at Flickr and the invertebrate images from #1 to #17 and #27 to #31 used the flash setup. It seemed to me that the new arrangement worked well. Simply using the "As shot" white balance produced colours that looked credible to me (even though I was using not quite the right numbers). This is an approach that I will be using in future.

I have posted eight of the images here in the forum for those who don't like to click away from the site.

This is interesting and very thorough as usual. :) I did wonder if the current diffuser setup you linked to in another thread would cause any discernible colour cast with the front layer being an off-white colour, but it seems from what you've said this has discoloured in the light over time and you've replaced it which makes sense. I'm intending to make something to provide better diffusion for my Nikon flash and am going to try using some softbox fabric for the front layer - I'll report back on how I get on. It might actually happen now I'm stuck at home for the next couple of weeks!
 
This is interesting and very thorough as usual. :) I did wonder if the current diffuser setup you linked to in another thread would cause any discernible colour cast with the front layer being an off-white colour, but it seems from what you've said this has discoloured in the light over time and you've replaced it which makes sense.

The replacement material is still producing a cast. But now I know that, and I know what white balance is needed to neutralise it. Definite progress.

I'm intending to make something to provide better diffusion for my Nikon flash and am going to try using some softbox fabric for the front layer - I'll report back on how I get on.

I would be interested to see a white balance test to see how neutral (or otherwise) it is.

It might actually happen now I'm stuck at home for the next couple of weeks!

I think that may be the case for lots of us - things that we have never quite got round to getting done. Are you self isolating because you might have the virus?
 
The replacement material is still producing a cast. But now I know that, and I know what white balance is needed to neutralise it. Definite progress.

Definite progress as you say, and not something you'd necessarily think of, given that the light produced by flash units is supposed to be pretty neutral (although not necessarily calibrated between examples of the same unit) and all of the other factors that seemingly come into play.

I would be interested to see a white balance test to see how neutral (or otherwise) it is.

Happy to do that assuming the project gets off the ground. This is the softbox material I bought:

https://www.amazon.co.uk/Seamless-D...s=neewer+softbox+fabric&qid=1584799525&sr=8-2

Are you self isolating because you might have the virus?

One of the kids started showing possible symptoms last week so we took the decision to self-isolate - we're all well so far and better safe than sorry really. It's tough going TBH!
 
Happy to do that assuming the project gets off the ground. This is the softbox material I bought:

https://www.amazon.co.uk/Seamless-D...s=neewer+softbox+fabric&qid=1584799525&sr=8-2

Oh, I've never seen that before. Nice find! I just ordered some.

One of the kids started showing possible symptoms last week so we took the decision to self-isolate - we're all well so far and better safe than sorry really. It's tough going TBH!

Tough going, I can imagine. But great that you are all well so far.
 
I’ve recently bought Affinity Photo for my desktop and laptop.
First impression is that it’s really good.
The raw conversion stage is really good - there are a lot of adjustments you can do on the raw before you convert to TIFF.
I’m still learning how to use it and am hoping to get some subjects to photograph soon to try it out.
I was using Adobe CC, which is excellent, but I’ve recently retired and now can’t justify paying monthly to use CC.
Affinity is half price at the moment, which clinched it for me, and it’s a one-off payment - future updates are included.
 
There are a few things that Capture One can’t do that Affinity will, and at the current price it seems daft not to get it. The process to move an image from C1 to Affinity and back again seems a lot simpler than using ON1, which I also have. Just weighing up how much I would actually use it before purchasing.
 
I’ve recently bought Affinity Photo for my desktop and laptop.
First impression is that it’s really good.
The raw conversion stage is really good - there are a lot of adjustments you can do on the raw before you convert to TIFF.
I’m still learning how to use it and am hoping to get some subjects to photograph soon to try it out.
I was using Adobe CC, which is excellent, but I’ve recently retired and now can’t justify paying monthly to use CC.
Affinity is half price at the moment, which clinched it for me, and it’s a one-off payment - future updates are included.

Hope it goes well. (I have Affinity Photo but have not spent much time with it. Similarly with Luminar.)
 
Hope it goes well. (I have Affinity Photo but have not spent much time with it. Similarly with Luminar.)
Thanks, yes - I’ve not done a lot yet, just followed a couple of YouTube guides on raw conversion.
I’ve not found any subjects yet but should do in the next few weeks when the butterflies are out in numbers.
 
The white balance story continues ....

Despite my having calibrated my most used diffuser I got renewed complaints about my flash white balance from the same person who had complained previously. It was very tempting to just ignore this because of his extremely unpleasant manner: hectoring, dismissive, disparaging, belittling and demeaning, and demanding evidence and then when evidence was produced refusing to engage with it. However, being unpleasant and being wrong are two different things; it was after all as a result of his earlier complaints that I had calibrated my diffusers, and that seemed to have been a very good move. So I decided to have a more thorough look at my flash white balances.

I went out into the garden and looked for insects etc to photograph. All I found were a few drab coloured flies and bees - not so good for testing white balance, although I did find some on different coloured foliage, so that helped. Given the difficulty in finding invertebrate subjects, I also photographed some flowers. I almost never use flash for flowers because I much prefer natural light. And I was using my FZ200 bridge camera; although that is my favourite tool for medium sized invertebrates, I much prefer my interchangeable lens cameras for flowers and other botanical subjects. Still, the flowers offered a wider variety of colours, which was useful in the context of looking at white balances.

As described previously I had calibrated the diffuser and so knew what white balance settings to use in post processing. However, I decided to double-check this. For each scene, after photographing the subject I captured an X-rite ColorChecker Passport reference shot. This gave me a neutral grey patch for setting the white balance for the scene. It also let me make a DCP camera profile, and I did this, using the X-rite software, for each scene (there are posts about using a ColorChecker Passport and making DCP camera profiles earlier in this thread. The first one is here.)

The 16 scenes that I ended up using are in this album at Flickr.

I started the processing in almost my usual way: first processing in DXO PhotoLab and then Silkypix, and then image-specific adjustments in Lightroom with a round trip out to and back from Topaz DeNoise AI before exporting a 1300 pixel high JPEG from Lightroom. The difference was that instead of using PhotoLab's camera profile I used the DCP profile for the scene, and rather than using the "As shot" white balance in PhotoLab, I used the ColorChecker grey patch for the scene to set the white balance. I then proceeded as normal.

However, after processing half a dozen images I was not comfortable with what I was seeing. It was not necessarily the white balance that was troubling me, but the overall look of the images. They didn't look natural to me, somehow "overcooked". Not so much the colours being too bright or garish, more that the microcontrast looked unreal. Detail was certainly standing out clearly, but too much so, uncomfortably so. (I think that, goaded by the criticisms, I was looking at what I was doing much more carefully than usual, with a good helping of self-doubt thrown in.)

I don't now recall the exact line of thinking, but I decided to try using just Lightroom and DeNoise AI. This seemed unlikely to be the best approach as I was using PhotoLab and Silkypix for functions they were particularly good at: lens-specific sharpening and noise reduction applied to raw files with PhotoLab; and highlight colour retention with Silkypix. Losing those functions might seriously damage image quality. But I tried anyway.

I used the DCP profiles in Lightroom and used the ColorChecker Passport grey patches to set the white balance, and did my usual type of image-specific adjustments. As before, I did a round trip to and back from DeNoise AI and output to JPEG from Lightroom.

I reprocessed the first half dozen scenes using this simplified approach. I was a bit surprised at how much I preferred the look of them compared to the versions I had done with the PhotoLab, Silkypix, Lightroom and DeNoise approach, which looked very crude in comparison. What really surprised me though was that I seemed to be able to get as much fine detail visible as I could with the more complicated approach. I processed the rest of the images using the simplified approach. Along the way it struck me that the highlights looked no worse (and in at least one case considerably better) than with the more complicated approach. I have posted the 16 images that I kept from the exercise in a thread in the forum.

As far as the DCP profiles were concerned, in general they seemed very similar, in the sense that if instead of using the profile created for a scene I used a different one, I generally saw little or no difference. This suggests that a single profile may be all I need for a particular camera. (I haven't checked, but I think that was a conclusion that I came to last time round with camera profiles.) There were two cases where the DCP profiles produced a wrong-looking result. In these cases I got results that looked ok to me by using (any) one of the other DCP profiles from the session or using the generic "Adobe Color" profile.

Setting the white balance from the ColorChecker Passport grey patch for a particular scene proved unnecessary. The Temp and Tint values I got were pretty much the same for all the scenes, and were nearly identical to the values I had previously derived by calibrating the diffuser. This might of course not be the case when the flash is not dominating the illumination (as it was for most of the exercise). This suggests capturing a grey card image when I slow the shutter speed down to increase the natural light component of the illumination.

For now I will see how it goes with the simpler Lightroom + DeNoise approach, using occasional grey card captures, and using one of the DCP profiles that I created for this exercise.
 
The white balance story continues ....

Despite my having calibrated my most used diffuser I got renewed complaints about my flash white balance from the same person who had complained previously. It was very tempting to just ignore this because of his extremely unpleasant manner: hectoring, dismissive, disparaging, belittling and demeaning, and demanding evidence and then when evidence was produced refusing to engage with it. However, being unpleasant and being wrong are two different things; it was after all as a result of his earlier complaints that I had calibrated my diffusers, and that seemed to have been a very good move. So I decided to have a more thorough look at my flash white balances.
That’s not good that someone is like that - I would be tempted to ignore them if that’s possible.
I haven’t got experience of balancing flash white balance as I don’t use flash, but I do sometimes have problems getting white balance right, especially when viewing a finished shot on different devices.
I have been calibrating my Windows PC and MacBook laptop and now colour is spot on when viewing on either, but when I view on my iPad sometimes there’s a colour cast.
But it’s OK - my iPad is pretty old anyway.
 
Affinity Photo does seem to have a lot of colour adjustments possible on raws.
 
That’s not good that someone is like that - I would be tempted to ignore them if that’s possible.

It was very tempting, but I'm glad I didn't.

I haven’t got experience of balancing flash white balance as I don’t use flash, but I do sometimes have problems getting white balance right, especially when viewing a finished shot on different devices.
I have been calibrating my Windows PC and MacBook laptop and now colour is spot on when viewing on either, but when I view on my iPad sometimes there’s a colour cast.
But it’s OK - my iPad is pretty old anyway.

Yes, calibration, where it is possible, is a very good thing to do.
 
I'm not seeing anything in Affinity that I don't see in my other editing applications. Am I missing something perhaps?
Yes, you’re right there - I’ve not used many different editing programs, so what I found in Affinity is probably standard with most programs.
 
For now I will see how it goes with the simpler Lightroom + DeNoise approach, using occasional grey card captures, and using one of the DCP profiles that I created for this exercise.

Well that didn't last long! While processing the first set of images after that I discovered that, at least for those images, using PhotoLab and Silkypix at the start of the processing did make a significant difference, both to the visibility of details and to tonality. So I'm back to using PhotoLab > Silkypix > Lightroom > DeNoise AI > Lightroom.

What is a bit different now though is using a DCP profile for the FZ200. I had made DCP profiles using the X-rite software, but I remembered that previously I had been told about how to make DCP profiles using Adobe DNG Profile Editor and had found that I preferred the look I got with them compared to the X-rite DCP profiles, which made the images look slightly garish. (I discussed this previously starting with this post.) So I have been using an Adobe DCP profile for the FZ200 rather than the X-rite version. I don't recall why I previously stopped using DCP profiles quite quickly. Just now it seems like a good idea, at least for the FZ200 and invertebrates with flash.

For a post at dpreview I processed a dozen FZ200 images from 2015 and 2016 using this processing workflow in order to test it, comparing the results with my previous versions of the images. I much preferred the new versions. I have posted them here in this forum.
 
It looks as though I may be in for a major change of kit for photographing small invertebrates, and possibly larger ones too. What makes this potential change particularly interesting to me is that it brings me back close to where I was at the start of this thread six years ago. At that time I had been using bridge cameras and close-up lenses for several years to photograph invertebrates. In order to improve my image quality I purchased a Canon 70D dSLR and a 100mm L macro and an MPE-65 1-5X macro lens. To my surprise and disappointment this new, specialised and high quality kit did not produce higher quality images of invertebrates. (The 70D was good for flowers, but even that was with a telezoom lens with and without a mild close-up lens, so I quickly returned the 100mm macro and MPE-65.) Since then I have periodically revisited macro lenses and associated kit and have accumulated various lenses (including repurchasing an MPE-65), extension tubes, teleconverters, reversing rings, flash units, brackets etc. But for invertebrates I have kept coming back to close-up lenses on telezoom lenses, either on fixed lens small sensor bridge cameras or on larger sensor APS-C and, more recently, micro four thirds cameras.

The core issue turned out to be depth of field. I like to photograph insects using whole body shots and "environmental" shots from further out. Although I like looking at other people's closer-in shots of flies' eyes etc, I don't often do that sort of shot myself. Especially for the whole body shots, I like to have as much of the subject in focus as I can. Since I work out of doors with active subjects stacking is not a suitable general purpose approach, although I have used it occasionally for invertebrates (and I do use it a lot for flowers). I therefore generally use single shots for invertebrates and in order to get as much depth of field as I can I use very small apertures. These small apertures cause a big loss of fine detail because of diffraction, and this brings high quality kit down to much the same level as lesser kit. In fact with the various kit I have tried over the years the results have been pretty much indistinguishable irrespective of sensor size, pixel count or lens quality.
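The diffraction penalty described above can be made concrete with the standard Airy-disk estimate (a rough model; real lenses and sensors add their own effects, and the pixel-pitch comparison here is illustrative):

```python
# Airy disk diameter (to the first minimum) for green light:
# d = 2.44 * wavelength * N_effective, in micrometres.
# Compare with a typical ~4 micron pixel pitch on small-sensor cameras.

WAVELENGTH_UM = 0.55  # green light, micrometres

def airy_diameter_um(f_number):
    return 2.44 * WAVELENGTH_UM * f_number

for n in (8, 22, 45):
    print(f"f/{n}: blur spot ~{airy_diameter_um(n):.1f} um")

# At f/45 (full frame equivalent) the blur spot is tens of microns across -
# many pixels wide - which is why sensor size, pixel count and lens quality
# stop making much difference, as the post observes.
```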

This new branch in the story started a week or two ago. My wife and I are both on the highly vulnerable list and so have been in full lockdown for many weeks now. My only photographic opportunities are in our garden, and while the flowers have been good this year the number and variety of invertebrates has been even smaller than usual (they have been declining throughout the decade and more we have lived here).


1646 Illustration 1 Pond
by gardenersassistant, on Flickr

We have a tiny (4' x 3') pond and it recently came alive with two sorts of flies that so far have continued to turn up every year. One sort is long-legged flies, which are smallish and highly active, only settling - at most - for a few seconds at a time on the lily pads or surrounding stones and foliage. They have highly reflective bodies and in the sunny conditions in which they appear the water on and around the lily pads produces very strong highlights. They are difficult subjects, in a difficult environment, especially as I often have to reach out over the water to get at them, which can make it a surprisingly strenuous activity, especially on the hot days in which they are most in evidence.

The other flies are small, around 3mm from head to the tip of their wings when folded up on their backs. They are not quite as hyperactive as the long-legged flies, but they do tend to wander around unless you can catch them supping on something, typically the carcass of a similarly sized dead fly. They don't have particularly reflective bodies, just a white patch on top of their heads which is prone to blow out. However, depth of field is an issue. Unlike with larger flies such as the long-legged flies, or even larger flies such as the hoverflies which occasionally turn up at the pond, I have not been able to get sufficient depth of field to produce images that I am comfortable with. As well as the overly limited depth of field, the transition between in-focus and out of focus areas seems very ugly to me.

Given that these flies were about the only invertebrate subjects I had available to me, I decided to take my time and try harder with them and in particular to see if I could work out how to get more acceptable results with the small ones. I tried over and over again with my bridge camera and close-up kit. No matter how hard I tried, it simply wasn't working; there was not enough depth of field and no matter how hard I tried in terms of technique I couldn't get around that.

I had a rethink. I knew I could get greater depth of field with macro lenses, reversed lenses, extension tubes etc, but I had only dabbled with that approach, for a couple of reasons. One was that I was already using f/45 full frame equivalent. With some fairly heavy processing I can produce results that I find acceptable with f/45 as long as I keep the output size down to around 1300 pixels high. That seemed to be about the outer edge of the envelope, and my indoor testing had shown that decreasing the aperture further continued to worsen the diffraction softening.
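For readers unfamiliar with "full frame equivalent" apertures, a minimal sketch of the arithmetic, assuming the commonly quoted ~5.6x crop factor for a 1/2.3" sensor such as the FZ200's (the f/8 setting is illustrative):

```python
# "Full frame equivalent" aperture for depth of field purposes is the set
# f-number multiplied by the sensor's crop factor. The 5.6x crop factor is
# the usual figure quoted for a 1/2.3" sensor; treat it as approximate.

def ff_equivalent(f_number, crop_factor):
    return f_number * crop_factor

print(ff_equivalent(8, 5.6))  # f/8 on a 1/2.3" sensor behaves like ~f/45 on full frame
```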

The other reason was the handling characteristics of the macro/reversed/extension tube setups I had tried. I found these setups extremely difficult to use. I was used to having setups that were not overly heavy, which let me zoom in and out easily and quickly to find and frame subjects, and which let me use accurate and fast (enough), non-hunting autofocus at all the magnifications I used. In contrast the macro etc setups were mostly uncomfortably heavy and, whether or not heavy, all of them made locating and framing small subjects frustratingly difficult and all had to be used with manual focus, which I found slower and less accurate than close-up lens autofocus. Manual focus was also a lot harder on my eyes. With autofocus I simply had to put a small focus box over the place I wanted the centre of focus to fall, and for that I only needed to be aware of the outline of the subject. With manual focus I needed to look closely to see when the fine detail came into focus. This was hard work when looking at the whole scene on the LCD screen (the particular implementations of the magnified screen options were not suitable for my hand-held, quick-response purposes). I had to use the LCD screen because a lot of the angles I work at make the EVF impractical. And in the bright conditions in which these flies appeared reflections from the water made it even more difficult to see what was going on. (Reflections on the LCD screen were not too much of an issue as I use fairly deep LCD hoods.)

I decided to have a more thorough look at using ultra-small apertures. My thinking on the diffraction blurring issue was that it was just possible that I might get acceptable results using a couple of Topaz products that I have been using recently - DeNoise AI and Sharpen AI, which between them have five methods which might be helpful, and which have sometimes seemed to have remarkable success with rescuing apparently unusable images. In terms of the weight and awkwardness of the setups, the weight was not going to change but it was possible that with more experience and practice these setups might become significantly more usable. That has been the case with other technologies which have taken me a while to get used to, develop suitable techniques, short cuts, muscle memory etc.

I did some initial handling tests with various combinations of kit involving a full frame Sony A7ii, an APS-C Canon 70D and a micro four thirds G series camera, using EF mount macro lenses with them - Canon MPE-65 1-5X, Laowa 25mm 2.5X-5X macro, Meike 1.5X macro and Sigma 105mm 1X macro. I also tried 1.4X and 2X teleconverters. With an Olympus 60mm macro on micro four thirds I also tried extension tubes.

I did some measurements and calculations for the most likely looking combinations.


1646 Illustration 2 Measurements and calculations
by gardenersassistant, on Flickr
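As one example of the kind of calculation involved, here is a thin-lens approximation for what an extension tube adds to magnification. It is a rough rule of thumb that ignores internal-focus designs, and the 16mm tube length is illustrative rather than taken from the post:

```python
# Thin-lens approximation (illustrative only): the extra magnification from
# an extension tube is roughly tube_length / focal_length, added to the
# lens's native maximum magnification.

def magnification_with_tube(native_mag, tube_mm, focal_mm):
    return native_mag + tube_mm / focal_mm

# e.g. an Olympus 60mm macro (1:1 native) with a hypothetical 16mm tube:
m = magnification_with_tube(1.0, 16, 60)
print(f"~{m:.2f}x")
```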

I then went out into the garden to try the combinations which looked most suitable from the measurements and calculations. On the pond I had the small flies and the long-legged flies as subjects. There was also a cucumber spider that had made a nest in a Choisya bush. It is difficult to be sure, but I think the spider was around 6mm in size. It spent most of the time in the not very photogenic position shown below on the right.


1646 Illustration 3 Choisya bush and spider
by gardenersassistant, on Flickr

I started with the pond. The first setup to show promise was the A7ii and Laowa 25mm macro. Because of the shape of the lens I could not attach a front-mounted flash and so I used a top-mounted KX800 twin flash. The Laowa and KX800 are shown here on the 70D. This KX800 setup that I used with the A7ii is a bit simplified compared to the one I generally use with my FZ bridge cameras (shown on the left below). The simplified version (shown in the middle below) has smaller diffuser boxes on the flash heads and no full width diffuser layer in front of the diffuser boxes. The illustration also shows a torch mounted on the side of the KX800 that I added after doing the test shots to help in future with finding subjects and focusing on them. (The torch is fixed with velcro so the direction it points in can be changed if needed to suit a different magnification.)


1646 Illustration 4 Laowa flash setup
by gardenersassistant, on Flickr

I had a short session with the A7ii and Laowa and then a short session with the usual kit I would use for this size of subject, an FZ330 and Raynox 250, using the flash setup shown on the left above. After processing the images from both these sessions I compared them and overall I preferred the look of the A7ii images. However, with such active subjects it is impossible to produce strictly like for like image pairs for comparison, so I was cautious about drawing too strong a conclusion about pairwise comparisons of somewhat similarly framed subjects. I thought it possible that the sharpest areas in the FZ330 images were sharper than the sharpest areas in the A7ii images. That would be reasonable given the additional diffraction blurring that the A7ii images must have been suffering from because of their smaller effective aperture. However, the A7ii images often appeared to have greater depth of field, and I could more often get the subjects a bit larger in the frame at a just about acceptable image quality. Finding and framing subjects was slower with the A7ii and failed more often than with the FZ330, but once I had subjects in the frame my failure rate with focusing seemed relatively low with the A7ii compared to my close-up lens kits when used for this size of subject. I found that surprising given that I was using manual focus, in a hurry, and couldn't see the LCD image very clearly because of the ambient conditions.

There was however one image which pushed my doubts to one side and motivated me to continue with some more extremely small aperture experiments: the one below. Admittedly I was lucky to have the opportunity and lucky with the exact timing of the shot, and the in-focus areas might not be as sharp as I could get with a larger aperture, with the FZ330 or anything else (including of course the A7ii). But I have never achieved such DOF coverage and subject separation with these small subjects this large in the frame, so I found this a very appealing image, and one that I was fairly confident I could not have achieved with my bridge camera setups, or with anything else operating at f/45 full frame equivalent.


1646 Illustration 5 1642 13 2020_06_02 DSC02305_PLab3 SP9LR 1300h-DNAIc-DNAI-PS-AISh2
by gardenersassistant, on Flickr

This made me curious to see what would happen if I used this same tiny-aperture approach with larger subjects. I couldn't do this with the Laowa because its magnification range starts too high for full body shots of larger subjects. After some more experimentation I decided to try two setups: a Sigma 105 macro with a 2X teleconverter on the A7ii, and a Canon MPE-65 on the 70D.

With the 2X teleconverter, the 105 macro on the A7ii gave me a maximum magnification of 2X, which would complement the 2.5-5X of the Laowa 25. Because of the 2X teleconverter the minimum nominal aperture was f/45, compared to the Sigma 105's native f/22. At the maximum 2X magnification the minimum effective aperture would be f/132, and at 1:1 it would be f/90. This would let me use effective apertures similar to those I had been using with the Laowa 25 macro at around 3X magnification.
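For anyone wanting to check figures like these for their own kit, the arithmetic can be sketched as below. This is a minimal illustration using the common bellows-factor approximation N_eff = N × (1 + m), assuming a pupil magnification of 1; the helper function name is mine, and the values a camera actually reports will differ slightly because of f-stop rounding (f/44 displayed as f/45, for example).

```python
def effective_aperture(nominal_f, magnification):
    """Approximate effective aperture via the bellows-factor rule
    N_eff = N * (1 + m). Assumes pupil magnification of 1."""
    return nominal_f * (1 + magnification)

# Sigma 105's native minimum aperture is f/22; a 2X teleconverter
# doubles the nominal aperture (to ~f/44, displayed as f/45) and
# doubles the magnification (lens at 1:1 becomes 2:1 overall).
nominal_with_tc = 22 * 2  # ~f/44

print(effective_aperture(nominal_with_tc, 2.0))  # 132.0 -> ~f/132 at 2:1
print(effective_aperture(nominal_with_tc, 1.0))  # 88.0  -> ~f/90 at 1:1
```

These match the f/132 and f/90 figures above to within the usual display rounding.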

Here is what the setup looked like, with the front mounted Yongnuo YN24EX flash units.

1646 Illustration 6 Sigma flash setup by gardenersassistant, on Flickr

The peculiar shape of the diffusers is the result of some modifications I made to try to stop the diffusers dipping into the pond water so much, which, being front-mounted, they were much more prone to do than the top-mounted diffusers I had used with the Laowa 25 (which doesn't have a front thread and so can't use a front-mounted flash).

This setup was big and heavy, but it turned out to be surprisingly usable. Supporting the lens with my left hand it seemed quite nicely balanced, and the focus ring has a short travel: a bit less than 180 degrees from infinity to 1:1, and around 90 degrees from 1:2 to 1:1, the range I needed for subject acquisition and framing (which with the 2X teleconverter was actually 1:1 to 2:1).

I also used the Yongnuo YN24EX in front-mounted mode with the MPE-65 on the 70D. I made a mistake with this setup: I omitted to use a 1.4X teleconverter to get to the same small apertures, so in terms of depth of field the MPE-65 results were not comparable with the results from the Laowa 25 or Sigma 105. I will therefore only refer here to the results with the A7ii, 2X teleconverter and Sigma 105 macro.

As with the Laowa 25, I had mixed feelings about the image quality of the test shots I captured with the Sigma 105 and 2X on the A7ii. I was seeing greater depth of field than I was used to. I also preferred the look of the A7ii images compared to similar scenes captured with my FZ cameras, although I might have difficulty demonstrating this. I think it had to do with the clarity and colour rendering with the A7ii; that Choisya bush has given me a lot of colour problems over the years, but the A7ii seemed to handle that side of things very well, producing results that looked rather natural to me. There were questions in my mind though about loss of (even more) detail from using these especially small apertures. For example, was the next image really any better than I can get with my bridge cameras?


1646 Illustration 7 1644 01 2020_06_07 A7ii+2X+105 DSC02819_PLab3 SP9LR 1300hDNAI-AIS
by gardenersassistant, on Flickr

I think so. The often problematic leaf colour looks good to me, and looking at the legs on both sides of the body it looks to me as though the depth of field is significantly larger than I can normally achieve. And the detail? I'm not sure. It looks borderline to me. Does that spoil the image for me? Not as much as when, with other setups, those leaves look a horrible colour that I can't seem to correct. And I do find the DOF coverage appealing. I think (today) that I can live with that balance of benefits and issues. Tomorrow I might think differently. I had a similar reaction to a lot of the test images.

I found a few of the images of the spider and prey a bit more convincing. For example, as with the fly image above, the spider image on the left below looks to me like it has more of the subject covered by the DOF than I am used to seeing when the subject is such a large proportion of the frame, especially when, as in this case, the image has not been cropped vertically. And there is enough detail (albeit not very fine detail) to keep my eyes from feeling uncomfortable.


1646 Illustration 8 Spider DOF and PP raising detail
by gardenersassistant, on Flickr

On the right is what an out-of-camera JPEG would have looked like (it is the JPEG embedded in the raw file). On the left is a processed version. It is not very subtle and may well come across as overcooked, but be that as it may, I think it demonstrates that post processing can have a significant impact on the look of an image and the amount of detail it appears to have, not least for an image that is very soft because of extensive diffraction softening.

I think I will need to do more experiments to find the (to my eye) optimal balance. Perhaps I would find a slight increase in apparent detail at the cost of a slight reduction in DOF more comfortable to look at.

This album at Flickr contains some test shots (mainly of the spider and prey) captured with the A7ii, 2X teleconverter and Sigma 105 macro. This album at Flickr contains some test shots captured with the A7ii and Laowa 25 macro. This album at Flickr contains some test shots captured with a 70D and MPE-65, with an A7ii with Laowa 25mm macro and with an FZ330 and Raynox 250.

These images were captured in the context of quick tests of multiple options, and so there was a lot of lens changing; I did not attempt to keep the sensors clean. The very small apertures I was using make sensor dust stand out. I have only cleaned up the dust spots in some of the images, and even in those I may well have missed some of them. You are therefore likely to see dust spots in some of the linked images.

The raw files were put through my normal 4- to 7-product processing pipeline:
• DXO PhotoLab (preset)
• Silkypix (preset)
• Lightroom (image-specific adjustments)
• Topaz DeNoise AI (default settings)
• [for some] Topaz AI Clear (default settings)
• [for some] Topaz AI Sharpen (default settings)
• [for some] Photoshop (image-specific adjustments)

I have posted some of these images in this forum thread.

This is work in progress. :)
 
Have you tried anything to get more insects? Or do you not have the space? Apple trees are very good - whilst in bloom. Bramble is great as well, full of insects.
My garden is quite small, and when neither the apple trees nor the ivy are flowering I don't get very many insects.
I have planted some hanging baskets with bee friendly plants (in May) but despite them flowering nothing has come to them yet.
Obviously letting grass grow and leaving some small pieces of wood around are great ways to find things. I probably would have a lot less things to photograph without that.
All of this advice can cause marital problems - you follow it at your own risk.
 