*Long post warning* (apologies in advance)
I have a perhaps boring question for the more technically minded in this forum if I may.
A bit of background first. I am what people nowadays call 'a hybrid film shooter': essentially someone who has nothing against digital photography, but dislikes expensive/complicated DSLRs, loves film, film cameras, grain, etc., and wants a scan as the end product, as opposed to a darkroom print. I like film cameras, I like the beauty of scanned black and white film, and I like the process that gets me there. I do not have, nor am I interested in setting up, a darkroom, and I do my film development using a dark bag, which suits me fine.
From the above it follows that I share with many film photographers who discuss the medium and the art online an interest in a) exposure/getting great exposures and b) developing/processing my film well, BUT NOT c) darkroom printing, as my negatives need to be optimised for one purpose only: the best scan I can get.
Now to (slowly) get to the point of my post: whereas I have found that most advice on negative exposure and negative development applies equally regardless of the end product (whether a scan, a print, a slide projection, etc.), I'm not quite so sure this holds for one critical technical variable - what the Zone System aficionados call 'effective film speed' or 'effective sensitivity'.
A quick Google search of the forums for any film stock currently in production will return endless threads on how 'it should be exposed so that it prints on grade 2 paper'. So most advice on film concerns optimising its characteristic curve to fit the signal on the negative to an extremely non-linear, technically inferior (relative to the negative), but creatively interesting medium: photographic paper. Cue endless posts on home-made densitometric tests, discussions of true film speed, metering for the shadows and then placing any interesting shadows in Zone III, and so on and so forth.
The key thing here is that, almost inevitably, the outcome of these discussions is that the 'real' film speed is a fraction of the one declared by the manufacturer on the box (with a few exceptions - some films seem to be 'truer' to box speed). It is not uncommon to read that an ISO 400 film 'should be exposed at 160' and that an ISO 100 film 'is an EI 50, or EI 25 at most'.
Now to my question, which is very simple really. The theory behind the practice of finding the true, much lower, film speed is steeped in densitometry. I believe the end result sought is to get enough light onto the film to retain shadow detail above base+fog. One outcome of this is that the negative will be 'easier to print on normal contrast paper'. I have seen some prints, or scans of prints, obtained using this method, and they seem to focus on readable detail even in deep shadow areas (undergrowth in shade, a rock in shade, etc.).
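For a sense of scale, the 400-rated-at-160 example above amounts to a fixed shift of the whole scene along the log-exposure axis. A minimal sketch of that arithmetic (Python, numbers purely illustrative):

```python
import math

# How much extra exposure a lower personal EI gives, and how far that shifts
# everything along the log-H axis of the characteristic curve.
# (Illustrative only -- box_speed and personal_ei are just the example figures.)
box_speed = 400
personal_ei = 160

extra_exposure = box_speed / personal_ei      # 2.5x more light on the film
extra_stops = math.log2(extra_exposure)       # ~1.3 stops
log_h_shift = math.log10(extra_exposure)      # ~0.4 log10 exposure units

print(f"{extra_exposure:.1f}x exposure = {extra_stops:.2f} stops "
      f"= {log_h_shift:.2f} shift along the log-H axis")
```

Every tone in the scene - shadows, midtones and highlights alike - moves up the curve by that same amount.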
But what does the above do to the highlights?
We know that exposure mostly affects shadow density, whereas we fine-tune highlights and contrast via development. However, we also know most films have a characteristic curve which is linear, if at all, only in a middle segment, becoming non-linear at its extremities (the 'toe' and the 'shoulder').
I have noticed that many prints, or scans of prints, shared online and produced according to densitometric 'true film speed' principles seem to feature weirdly compressed, mangled highlights, and I have been wondering why that might be the case. Could it be that a densitometrically 'correct' film speed, which lifts the shadows 'above the toe' (pushing them into the linear section), also pushes the highlights beyond the linear section of the curve and into the shoulder - even if we reduce development considerably?
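To make the geometry concrete, here is a toy model of an S-shaped characteristic curve (again Python, with invented numbers - not any real film's datasheet). It compares a contrasty 9-stop scene placed with the shadows just above the toe against the same scene lifted by roughly 1.3 stops; the local slope at the highlight end is a rough proxy for how much highlight separation survives:

```python
import math

# Toy S-shaped characteristic curve: density as a function of log10 exposure.
# D_MIN ~ base+fog, D_MAX ~ maximum density, GAMMA ~ mid-curve slope, MID ~ curve centre.
D_MIN, D_MAX, GAMMA, MID = 0.20, 1.90, 0.60, -1.0

def density(log_h):
    span = D_MAX - D_MIN
    k = 4 * GAMMA / span                      # logistic steepness giving the chosen mid-slope
    return D_MIN + span / (1 + math.exp(-k * (log_h - MID)))

def local_slope(log_h, eps=0.01):
    """Local gradient of the curve; smaller values mean more compressed tonal separation."""
    return (density(log_h + eps) - density(log_h - eps)) / (2 * eps)

scene_range = 9 * math.log10(2)               # a contrasty 9-stop scene ~ 2.7 log-H units
placements = [("box speed placement", -3.0),
              ("lifted ~1.3 stops (tested EI)", -3.0 + 0.4)]

for label, shadow in placements:
    highlight = shadow + scene_range
    print(f"{label}: shadow D={density(shadow):.2f}, "
          f"highlight D={density(highlight):.2f}, "
          f"highlight slope={local_slope(highlight):.2f}")
```

With these made-up numbers the lifted placement buys a bit more shadow density, but the slope at the highlight end drops noticeably - which is exactly the kind of compression I think I am seeing.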
So my question is whether, by trying to save the shadows to make the negative easy to print, we might (for contrasty scenes) end up pushing the rest of the histogram so far up the curve that we get a suboptimal mapping of the highlights (not a massive concern for wet printing, I understand, but it might result in not exploiting the film's tonal range to the fullest).
So, essentially: is a different mindset in order when shooting film that will be scanned? A scanner sensor, unlike an enlarger head plus paper, is an almost completely linear device (excluding non-linearities introduced by thermal noise, I'd assume), so a scanner has no problem digging into 'thinner' shadows that would make wet printing difficult. As a consequence, if my reasoning is correct, placing the histogram lower on the curve (i.e. exposing at box speed, or at a higher speed than what results from densitometric tests) might mean we can map those highlights before the shoulder kicks in (depending, of course, on the film used), thus getting a truer representation (read: a more linear mapping) of the negative relative to what could have been achieved via a print.
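On the scanner side, here is why a thin negative is much less of a problem for a linear capture device: the sensor sees transmittance, and low densities transmit most of the light, so shadow detail on the negative sits in the well-sampled upper part of the raw signal. A minimal sketch, again with illustrative densities (real scanners add their own noise and tone curves):

```python
# The scanner records transmittance, not density: T = 10^(-D).
# Thin (low-density) shadow areas of a negative therefore give a strong raw signal,
# while the dense highlights are the part closest to the sensor's noise floor.
def transmittance(density):
    return 10 ** (-density)

for label, d in [("base+fog", 0.20), ("thin shadow", 0.30),
                 ("midtone", 0.90), ("dense highlight", 1.60)]:
    raw = transmittance(d)        # fraction of the light source reaching the sensor
    print(f"{label:15s} D={d:.2f} -> raw signal {raw:.3f} of full scale")
```

If anything, it is the extra density that generous exposure adds to the highlights that ends up nearest the noisy end of the sensor's range - the opposite end from the problem the densitometric tests are designed to solve for the darkroom.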
Any thoughts, comments or corrections appreciated.