Thing about digital IS that it is digital... it's paint-by-numbers... the camera doesn't capture an image, it captures a data-table that a computer can 'paint-by-numbers' a picture from.
Whatever you are looking at is a reproduction, after a computer has looked at the data-table and painted the picture you are looking at, filling in the numbered squares with what IT thinks the colour should be... not necessarily what the camera saw.
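To make the 'paint-by-numbers' idea concrete, here's a toy sketch in Python. The data-table, the palettes and the `paint` function are all hypothetical illustrations, not anything a real camera or viewer uses (real pipelines involve demosaicing and colour profiles), but they show the point: the same table of numbers can be painted into two different pictures depending on the viewer's choices.

```python
# A minimal sketch: the camera file is just a table of numbers;
# the viewing software decides what colour each number becomes.

data_table = [
    [0, 1, 1, 0],
    [1, 2, 2, 1],
    [1, 2, 2, 1],
    [0, 1, 1, 0],
]

# Two different "viewers" choosing different colours for the same numbers.
# These palettes are made-up for illustration.
palette_a = {0: (0, 0, 0), 1: (100, 100, 100), 2: (255, 255, 255)}
palette_b = {0: (0, 0, 0), 1: (140, 140, 140), 2: (255, 255, 255)}

def paint(table, palette):
    """Fill in each numbered square with the palette's colour (RGB)."""
    return [[palette[n] for n in row] for row in table]

picture_a = paint(data_table, palette_a)
picture_b = paint(data_table, palette_b)

# Same data-table, two different pictures:
print(picture_a[0][1])  # (100, 100, 100)
print(picture_b[0][1])  # (140, 140, 140)
```

Same numbers in, different greys out... which is all "not necessarily what the camera saw" means in practice.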
Subject of sharpness is a toe-curler, especially in digital, where a lot of what is 'perceived' as sharpness isn't actual focus resolution....
Imagine pointing a torch at a wall in a dark room... you get a circle of light, and towards the edges it 'fringes' into light shadow, before disappearing into dark shadow.
In the analogue world of film, the film probably sees and captures all of that fuzzy gradation around the bright spot... in digital, the processing works in discrete steps, not a continuous run of shades, so it will inevitably 'clip' the shading where it thinks the bright spot ends, brightening the darker bits and darkening the lighter bits depending on where its 'threshold' levels for any given shade happen to be.
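The clipping idea above can be sketched numerically. This is a deliberately crude illustration, assuming a made-up 4-level quantiser (real pipelines use hundreds or thousands of levels, but snap values to thresholds in the same way): a smooth torch-spot falloff goes in, and far fewer distinct shades come out.

```python
# Toy illustration of threshold clipping: a smooth brightness falloff
# (the fringe of the torch spot) snapped to a handful of discrete levels.
# The 4-level depth is an assumption for clarity, not a real pipeline.

def smooth_falloff(x, width=10.0):
    """Brightness fading gradually from 1.0 to 0.0 across the fringe."""
    return max(0.0, min(1.0, 1.0 - x / width))

def quantise(value, levels=4):
    """Snap a continuous brightness to the nearest of `levels` steps."""
    step = 1.0 / (levels - 1)
    return round(value / step) * step

positions = range(12)
analogue = [smooth_falloff(x) for x in positions]   # the 'film' view
digital = [quantise(v) for v in analogue]           # the 'clipped' view

# The smooth curve has 11 distinct shades across the fringe;
# quantising collapses them to 4 -- the gradation has been 'clipped'.
print(len(set(analogue)), "shades before,", len(set(digital)), "after")
```

Every in-between shade gets pushed up or down to the nearest threshold... which is exactly the "brightening the darker bits and darkening the lighter bits" described above.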
Result is that the more 'clinical' digital reproduction isn't necessarily 'sharper' or better resolved, but the digital clipping has simplified the shading and produced something with greater contrast, which has a higher 'perceived' sharpness... and there the topic descends into a lot of highly technical debate... but I hope you get the idea.
NOW, the camera just records a data-table..... when you load that data-table into a viewer, the computer reads the data, paints-by-numbers what it thinks the scene should look like, and applies this sort of digital threshold clipping to the numbers to put something on the screen..... except it doesn't put it there directly.... it hands it to the display driver, which looks at the 'new' data-table made by the display package, and that then re-interpolates the second-generation 'paint-by-numbers' to light up the actual LCD pixels on the screen....
Which brings me to the bit of your query that hasn't been touched on so much, which was that an 'original' digi-file looks OK on screen, but re-sized can look 'better'.... to which there is a pretty simple explanation.... see above on the difference between actual and perceived sharpness, and the perceived-sharpness boost introduced by 'paint-by-numbers' threshold clipping....
If you view an image... the original data-table is interpolated, a display file created and sent to the display adapter.... so you are looking at a third-generation, three-times (or more) clipped and processed interpretation of the image.
If you re-size that image, you apply another layer of processing, where the computer again interpolates and clips thresholds with even more 'simplification', boosting contrast and perceived sharpness...
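A rough sketch of why the re-sized version can look 'sharper'. The numbers and the nearest-neighbour resize below are assumptions for illustration (real resizers filter the pixels rather than just dropping them), but the effect is the same: an edge that ramped over several pixels in the original now jumps in fewer, bigger steps, and that bigger per-pixel jump is what the eye reads as sharpness.

```python
# Toy brightness values (0..10) across a soft edge in the original image.
original = [0, 0, 2, 4, 6, 8, 10, 10]

def downsample(pixels, factor=2):
    """Crude nearest-neighbour resize: keep every `factor`-th pixel."""
    return pixels[::factor]

resized = downsample(original)  # [0, 2, 6, 10]

def max_step(pixels):
    """Largest brightness jump between neighbouring pixels."""
    return max(abs(b - a) for a, b in zip(pixels, pixels[1:]))

# The edge covers fewer pixels after resizing, so the jump per pixel
# doubles -- higher local contrast, higher 'perceived' sharpness.
print(max_step(original))  # 2
print(max_step(resized))   # 4
```

Nothing in the scene got better focused... the transition just got steeper per pixel.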
IE: it's not the smaller number of pixels that has increased perceived sharpness, it's the extra 'paint-by-numbers' processes applied and the greater degree of digital threshold clipping, simplifying the image, artificially removing 'zones of confusion' and creating greater contrast... not 'sharpness', it just adds to the perception of sharpness.
But, bottom line is "Does it look OK?"