Beginner Resizing - better quality when reduced.

I've been resizing a few pictures to upload, and in the process noticed that the smaller ones (reduced by about half) look better when viewed at 100%. They're sharper and better defined. The same is true of pictures I've taken as reduced-size JPEGs in camera. Why is that? Is it to do with my monitor and how I'm viewing them, rather than the actual pictures themselves?
 
There are lots of factors that affect the apparent sharpness of an image. It's possible to make quite a fuzzy image look sharp if the size is reduced enough.

Also, if the image is bigger in pixels than the monitor you are using and you are viewing the whole image (for instance, a 3000-pixel-wide image on a 2400-pixel-wide screen), there will always be a degree of interpolation going on that will inevitably reduce the apparent sharpness.

This, I think, is often why smaller images appear sharper on the screen: they are now at "full resolution" on the screen (for instance, a 1500-pixel-wide image on a 2400-pixel-wide screen).
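If you want to take the monitor's on-the-fly scaling out of the equation, you can do the downscale yourself, once, with a decent filter. A minimal sketch using Python/Pillow; the filename and the 2400-pixel screen width are just assumptions:

```python
from PIL import Image

SCREEN_WIDTH = 2400  # assumed monitor width in pixels

img = Image.open("photo.jpg")  # hypothetical source file
if img.width > SCREEN_WIDTH:
    # One high-quality resize so the screen can show the image 1:1,
    # instead of the viewer interpolating it down on the fly.
    new_height = round(img.height * SCREEN_WIDTH / img.width)
    img = img.resize((SCREEN_WIDTH, new_height), Image.LANCZOS)
img.save("photo_for_screen.jpg", quality=92)
```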

HTH

David
 
Thanks, I was guessing it was something like that. It makes sense, as the smaller size looks better on the laptop with the lower-resolution screen. I haven't noticed the difference on my desktop HD monitor.
 
It's all to do with magnification: by reducing the resolution (i.e. reducing the magnification) you make things look sharper, with less noise.

You can think of it in reverse: the more you magnify something, the more flaws you notice; reduce the magnification and the flaws go away (a bit like getting too close to a heavily made-up Hollywood star!).
 
So in the case of cropping, would it work in reverse? You're magnifying, aren't you? Or does that somehow make things appear sharper?
 
Cropping is magnifying, so you are making image quality worse, which is pretty intuitive if you imagine cropping out 90% of the image.
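To put rough numbers on it (an illustrative sketch with Pillow; the filename is hypothetical):

```python
from PIL import Image

img = Image.open("photo.jpg")  # hypothetical file
w, h = img.size

# Keep only the central quarter of the pixels (a 50% crop each way).
crop = img.crop((w // 4, h // 4, 3 * w // 4, 3 * h // 4))

# Viewed at the same output size as the original, the crop is
# effectively shown at 2x magnification, so any softness and noise
# are twice as visible.
print(f"original: {img.size}, crop: {crop.size} -> 2x magnification")
```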
 
So if I have images at 4500 pixels wide, what resolution monitor would I want to get a true representation of that image viewed at 100%? The same or higher? Or am I not getting it?!
 
So if I have images at 4500 pixels wide, what resolution monitor would I want to get a true representation of that image viewed at 100%? The same or higher? Or am I not getting it?!
Not getting it.

An image from a typical modern DSLR viewed at 100% from 18" away is like viewing a 20" wide print from the same distance.

For a fair viewing on your screen, just resize the image to the size you'll print, then zoom accordingly and view from the correct distance.
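Rough arithmetic for the 4500-pixel question above; the 96 ppi screen and 300 ppi print figures are just typical assumed values:

```python
image_width_px = 4500   # from the question above
screen_ppi = 96         # assumed typical desktop monitor
print_ppi = 300         # assumed typical print resolution

screen_inches = image_width_px / screen_ppi   # ~46.9" wide at 100% on screen
print_inches = image_width_px / print_ppi     # ~15" wide as a print

print(f"100% on screen: {screen_inches:.1f} in wide")
print(f"printed at {print_ppi} ppi: {print_inches:.1f} in wide")
```

Which is the point: at 100% you're effectively nose-up against a print far bigger than you'd ever make.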

Rushed, I'm sure I could write it better, but as a generalisation: start printing stuff if you want to know what it looks like. Most photographers have taken to looking too close and have no idea what they're actually looking at.

I've got an A3 print from a 300D shot at ISO 1600 and it's perfectly acceptable.

And people tell me that they wouldn't shoot with a 7D over ISO 800 :(

Most of them have never printed anything large, never shot fast film, and frankly have no understanding beyond what they've read on the internet.
 
Well, all that considered, Phil, I'm going to stop overthinking it and just make sure I'm shooting good, sharp images and not worry about it; just do some prints and see how they turn out.
 
I've got an A3 print from a 300D shot at ISO 1600 and it's perfectly acceptable.

And people tell me that they wouldn't shoot with a 7D over ISO 800 :(

Most of them have never printed anything large, never shot fast film, and frankly have no understanding beyond what they've read on the internet.

I have 300D prints too, but the most shocking (in a good way) is a print I did from a Medion compact, which is the second-worst digital camera I've ever had (the worst being a keychain camera from years ago that hardly qualified as a camera)... anyway, I cropped the Medion shot and filled an A4 sheet with it, borderless, and honestly you'd think it was taken with a decent camera :D
 
Thing about digital IS that it is digital... it's paint-by-numbers... the camera doesn't capture an image, it captures a data-table that a computer can 'paint-by-numbers' a picture from.
Whatever you are looking at is a reproduction: a computer has looked at the data-table and painted the picture you are looking at, filling in the numbered squares with what IT thinks the colour should be... not necessarily what the camera saw.
The subject of sharpness is a toe-curler, especially in digital, where a lot of what is 'perceived' as sharpness isn't actual focus resolution...
Imagine pointing a torch at a wall in a dark room... you get a circle of light, and towards the edges it 'fringes' into light shadow before disappearing into dark shadow.
In the analogue world of film, the film probably sees and captures all of that fuzzy gradation around the bright spot... in digital, computers work on 'on' or 'off', not shades in the middle, so digital processing will inevitably 'clip' the shading where it thinks the bright spot ends, brightening up the darker bits and darkening the lighter bits depending on where its 'threshold' levels for any shade happen to be.
The result is that the more 'clinical' digital reproduction isn't necessarily 'sharper' or better resolved, but the digital clipping has simplified the shading and produced something with greater contrast that has a higher 'perceived' sharpness... and the topic descends into a lot of highly technical debate... but I hope you get the idea.
NOW, the camera just records a data-table... when you load that data-table into a viewer, the computer reads the data and paints by numbers what it thinks the scene should look like, applying this sort of digital threshold clipping to the numbers to put something on the screen... but it doesn't do that directly... it hands the result to the display driver, which looks at the 'new' data-table made by the display package and re-interpolates that second-generation 'paint-by-numbers' to light up the actual LCD pixels on the screen...
Which brings me to the bit of your query that hasn't been touched on so much, which was that an 'original' digi-file looks OK on screen but re-sized can look 'better'... to which there is a pretty simple explanation... see above on the difference between actual and perceived sharpness, and the perceived-sharpness enhancement introduced by 'paint-by-numbers' threshold clipping...
If you re-size an image... the original data-table is interpolated, a display file created and sent to the display adapter... so you are looking at a third-generation, three-times (or more) clipped and processed interpretation of the image.
If you re-size that image again, you apply another layer of processing, where the computer again interpolates and clips thresholds with even more 'simplification', boosting contrast and perceived sharpness...

IE: it's not the smaller number of pixels that has increased the perceived sharpness, it's the extra 'paint-by-numbers' processes applied, and the greater degree of digital threshold clipping, simplifying the image, artificially removing 'zones of confusion' and creating greater contrast; not 'sharpness', it just adds to the perception of sharpness.
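You can get a feel for that contrast-at-the-edges effect with a very crude measurement. A sketch with Pillow and NumPy; the edge-filter average is only a rough stand-in for 'perceived sharpness', and the filename is hypothetical:

```python
import numpy as np
from PIL import Image, ImageFilter

def edge_strength(img):
    # Mean response to a simple edge-finding kernel, per pixel:
    # a crude proxy for how crisp the edges look at 100%.
    edges = img.convert("L").filter(ImageFilter.FIND_EDGES)
    return np.asarray(edges, dtype=float).mean()

img = Image.open("photo.jpg")  # hypothetical file
half = img.resize((img.width // 2, img.height // 2), Image.LANCZOS)

print("full size:", edge_strength(img))
print("half size:", edge_strength(half))  # often higher per pixel
```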

But, bottom line is "Does it look OK?"
 