It seems obvious that a line has to be drawn between images which accurately capture the reality of the moment the shutter button was pressed, and those which have been manipulated. How do we judge the realism of an image? With our eyes. Our eyes, which do a phenomenal amount of image processing in the brain before presenting the image we "see". They do so much excellent white balancing that it has taken decades of research to get the auto white balancing of digital cameras anywhere close. The brain fills in the blind spot. It concatenates the rapid flicker of numerous saccades with prior expectations to erase the difference between the high-resolution colour vision of the central spot and the greyer, much diminished resolution of the rest. It adjusts perspective so that minor perspective distortions are rectified. In high-contrast scenes, the saccades as the eye scans through dark and light areas work as a kind of HDR, improving the "as seen" dynamic range.
I could go on, but I hope it's clear that our eyes and brain do something very far from presenting a simple snapshot of a moment to our seeing consciousness. Yet what we judge the realism of a photograph by is comparison with what we "see" after all this concatenation and rectification of multiple images in the brain. The camera, by contrast, creates its image from one instant, shot on a uniform high-resolution sensor, and what's more does so with more detail resolution and dynamic range than many of us are capable of seeing. Hence the need, in producing the in-camera JPEG, to do some kind of white balancing, to choose how to truncate and compress the dynamic range from the sensor, and so on. The camera maker gives the user a variety of modes which adjust the parameters of this internal conversion: contrast, saturation, sharpening; landscape, portrait, night, sunset, and so on. The camera maker also supplies a RAW file converter which can apply to the RAW file exactly the same adjustments as the JPEG converter in the camera did.
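To make those two conversion steps concrete, here is a minimal sketch, in Python with NumPy, of one classic approach: "gray world" white balancing followed by a gamma curve to compress the sensor's dynamic range. The function names and the tiny test image are my own illustration, not any camera maker's actual pipeline.

```python
import numpy as np

def gray_world_white_balance(img):
    """Scale each colour channel so its mean matches the overall mean:
    the classic 'gray world' assumption (the scene averages to neutral)."""
    means = img.reshape(-1, 3).mean(axis=0)   # per-channel means
    gains = means.mean() / means              # gain needed per channel
    return np.clip(img * gains, 0.0, 1.0)

def compress_dynamic_range(img, gamma=1 / 2.2):
    """A simple gamma curve: maps linear sensor values in [0, 1] to a
    display-referred range, lifting shadows and compressing highlights."""
    return np.clip(img, 0.0, 1.0) ** gamma

# A tiny 'raw' capture: two pixels with a strong reddish cast,
# linear values in [0, 1].
raw = np.array([[[0.40, 0.20, 0.10],
                 [0.80, 0.40, 0.20]]])
balanced = gray_world_white_balance(raw)      # cast removed
jpeg_like = compress_dynamic_range(balanced)  # tone curve applied
```

Real in-camera converters use far more sophisticated illuminant estimation and tone curves, but every setting the camera maker exposes (contrast, saturation, scene modes) is ultimately a parameter of steps like these.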
So if you're a believer in the image purity and realism of "getting it right in the camera", just using the unprocessed ex-camera JPEG as Reuters seem to have decided to do, what is the logic of accepting the ex-camera JPEG while rejecting exactly equivalent processing done later in a computer to produce an identical image? The only possible justification for insisting on the ex-camera JPEG is that it limits the scope for processing to the simple global adjustments that used to be done routinely in darkroom print making. I have a camera, however, whose in-camera JPEGs go further than that. They correct chromatic aberration and geometric distortion in the camera maker's own lenses. They can do automatic shadow lifting in high-contrast images. The camera can even do in-camera HDR, combining several images at different exposures to extend the dynamic range. If I forget my wide angle lens it can stitch several images together into a panorama, using the cylindrical perspective projection our eyes prefer for such wide angle views, but which as far as I know no lens offers. Until very recently these image-combining and lens-correcting features were not available in-camera. We can look forward to future cameras able to do, as automatic settings for ex-camera JPEGs, the kinds of vertical alignment and two point perspective preferred by architectural photographers. I know that some of these ex-camera JPEG features are considered by some image realism purists to be going too far. In other words, drawing the realism line between ex-camera JPEGs and computer post processing won't hold, because in-camera computers and algorithms are becoming ever more sophisticated.
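The cylindrical projection those panorama modes use is simple to state. Here is a minimal sketch (Python; the function and its parameters are my own illustration) of mapping a point on a flat, rectilinear image plane onto cylindrical panorama coordinates: horizontal position becomes an angle around the cylinder, which is what stops wide views stretching absurdly at the edges.

```python
import math

def rectilinear_to_cylindrical(x, y, f):
    """Map a point (x, y) on a flat image plane, with focal length f
    (all in pixel units, origin at the image centre), onto cylindrical
    panorama coordinates. Horizontal position becomes an angle; the
    vertical coordinate is scaled so vertical lines stay vertical."""
    theta = math.atan2(x, f)       # horizontal angle around the cylinder
    h = y / math.hypot(x, f)       # normalised height on the cylinder
    return f * theta, f * h        # scale back to pixel-like units

# A point straight ahead maps to the centre of the panorama:
u, v = rectilinear_to_cylindrical(0.0, 0.0, f=800.0)
```

A stitching mode warps each source frame through a mapping like this (plus alignment and blending) before joining them, which is why stitched panoramas show the gently curved horizontals no physical lens produces.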
Before the invention of photography we had to rely on artists to capture images for us. Of course artists can draw imagined images, but if asked nicely can produce realistic, accurate drawings. Drawings? In which an object is separated from its background by drawing a line round its edge? A line which exists neither in reality nor in the image processing of the visual cortex of our brains. It seems we accept as a realistic image something which accurately represents certain important features of what we see, even though we see nothing like the drawn monochrome outlines and hatching of a pencil sketch. What's more, it shouldn't be long before our cameras start offering, as one of the in-camera creative modes, a "pencil sketch" mode, producing ex-camera JPEGs which look just like good realistic pencil drawings.
I sympathise a lot with those who think photography should preserve realism and eschew too much image processing. I attended a famous photographic society's exhibition a few years ago which shocked me. More than half the images weren't what I consider photographs at all. They were exercises in creative photoshoppery. Many of them combined images taken on quite different days in quite different places. Nothing wrong with that as an art form, but I personally don't consider it should be called photography.
I agree that a line must be drawn between photographic realism and creative photoshoppery. But to try to draw that line between ex-camera JPEGs and computer post processing is to draw it in the tidal, shifting sands of image processing technology. To try to define what constitutes "too much" image processing is to fail to understand just what an extraordinary amount of image processing is done in the brain, and what kind of thing the mind is doing when it judges a pencil drawing to be an accurate and realistic representation.
We need to develop a proper theory of realistic representation, and derive from it what constitutes unrealistic image manipulation. One attempt at doing that has been made by the excellent and expensive image processing house which some of the top news photographers use to process their images. They claim that they never add a pixel that wasn't in the original image, nor remove one that was; they only change the tonal relationships between pixels. In other words they don't add any person or thing to an image, such as a person missing from a group shot, or remove anything, such as an annoying telephone wire.
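That restriction is easy to illustrate. A classic contrast-boosting S-curve remaps every pixel's value in place: nothing is added, removed, or moved, only the tonal relationships change. A minimal sketch in Python with NumPy (the function and its parameters are my own illustration, not the processing house's actual method):

```python
import numpy as np

def s_curve(x, strength=5.0):
    """Remap tones through a sigmoid 'S' curve: shadows pushed down,
    highlights pushed up, midtone contrast raised. Each output value
    depends only on the same pixel's input value -- no pixel is added
    or removed, only tonal relationships change."""
    x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
    s = 1.0 / (1.0 + np.exp(-strength * (x - 0.5)))
    # Rescale so that 0 still maps to 0 and 1 still maps to 1.
    s0 = 1.0 / (1.0 + np.exp(strength * 0.5))
    s1 = 1.0 / (1.0 + np.exp(-strength * 0.5))
    return (s - s0) / (s1 - s0)

# Black, midtone, and white are preserved; quarter-tones move apart.
tones = s_curve(np.array([0.0, 0.25, 0.5, 0.75, 1.0]))
```

Every darkroom-style global adjustment, from contrast to dodging and burning, is some variation on a per-pixel remapping like this.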
That's a good start, but we see more in an image than what is actually depicted within it. For example, shadows tell us where the sun was, although the sun is not in the image. I recall a notorious, much disputed image of a grieving funeral crowd in a dark narrow alley. People objected to the unrealistic photoshoppery. The photographer said they were mistaken: all the people in that image were exactly there, at that time, in that place; he'd added nothing, subtracted nothing, just adjusted light and shade to bring out some darkly shaded faces. But in doing so he'd created the effect of a crowd very oddly lit by a multitude of external light sources, some of them exactly angled to pick out individual faces. So changing the tonal relationships between pixels in a way that implies a light source not originally lighting the scene should also be forbidden.
Does that mean that shadow lifting of dark facial shadows, which my camera can do automatically as a JPEG setting, should be outlawed? What about achieving much the same effect by means of a slight fill from on-camera flash? That's something most wouldn't notice as an extra light source added by the photographer, but the eagle-eyed forensic photographer would spot it from the catch lights in the eyes.
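For illustration, one crude way such an automatic shadow lift might work; this is a sketch of my own, not my camera's actual algorithm. Blending each pixel toward its square root boosts dark values proportionally far more than bright ones, so deep shadows open up while highlights barely move:

```python
import numpy as np

def lift_shadows(x, amount=0.5):
    """Crude automatic shadow lift: blend each pixel value in [0, 1]
    toward its square root. Dark values get a large relative boost,
    bright values a tiny one. No light source is added to the scene --
    yet the result can look as if one was."""
    x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
    return (1.0 - amount) * x + amount * np.sqrt(x)

deep_shadow = lift_shadows(0.04)   # tripled: a large relative boost
highlight = lift_shadows(0.81)     # barely moves
```

It is exactly this asymmetry, brightening faces that the actual light never reached, that makes a purely tonal adjustment capable of implying a phantom light source.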
I wish I could think of an answer to what properly constitutes photographic realism. I fear it is a shape-shifting chimera, defined, like what counts as an unrealistically wide angle of view, as much by fashion, taste, and visual education as by technology.