Scrap your lenses! They won't work on the new breed of cameras!!!

Wonder how long it would take for the police to come running if I were to use that mofo for taking pics of the Houses of Parliament or something!

The police would have the image of you firing that beast burned onto their retinas before being vaporised within fractions of a second, mwha ha ha ha! :LOL:
 
What a load of complete and utter ****. Correcting for optical errors in camera is a total waste of time. If you understood how linear and ray optics worked, you'd realise that chromatic and spherical aberration, etc. etc. can all be (or at least damned near) eliminated by conventional lens design.

Improving an image in software is always going to be destructive, so stating what you did in the OP is simply wrong.

If you really understood what it is you are defending, you'd also know what negative refractive index (optical metamaterial) lenses were. These are at the very early stages of research at the moment, and this so far is the ONLY thing that threatens to completely change the way in which lenses are designed.

A shed load of money will continue to be pumped into optical design, and saying that A) software correction is better than physical correction, and B) current lenses are useless, is just wrong.

I ask you to please understand what you are talking about before flipping out over an article that has no fact in it whatsoever!
 
I do not know why so many have been down on Hoppy?

He has introduced a topic that has provoked thought and discussion and diverse opinion.

Are our cameras instantly out of date? Of course not.

Are there any cost/quality/size/complexity advantages in using software/firmware in lens/image solutions? Again, of course there are.

Is anyone being forced to believe it? No ... our prejudices are our own affair.

In the highest quality scientific image solutions, firmware has an established position in removing residual defects.

Will this be used on normal photographic high-end lenses? Certainly.

To reposition light targets by computation is no less valid than using the refraction of various glasses for the same purpose.

It is the cost and image-quality results that will decide the issue, not prejudice.
 
I do not know why so many have been down on Hoppy?

He has introduced a topic that has provoked thought and discussion and diverse opinion.

There are ways of getting a point across and starting a sensible discussion. His way was wrong. His responses, even worse.

Gary.
 
Thanks Terry :)
 
I am just waiting for the time when someone will design a lens
that is little more than a lump of magnetic, crystalline, totally transparent and non-reflective gel; one that can have its shape, refractive index, focal length and transmission characteristics changed electronically, so as to vary in response to the received image on the sensor.

This must be formulated so as to result in an image that is sharp in every part regardless of distance or the focal length set. At the same time, depth of field must be user-definable on demand.

I put no limitations on the use of firmware, software, or new technology.

The sole criterion is the ability to produce controlled, highest-quality images at any focal length and at negligible unit cost. :)
 
I don't see why that process should not continue, especially if at the same time you remove a few barriers that have previously held things back. That's how we get a sudden big jump. Your comment is not so much cynical, more a case of not taking everything into account.

Well, this way of "removing barriers" is like pretending they never existed in the first place, not really removing them.

I must say that the more I read this thread, the more I am convinced that puddleduck was right - you are the marketing man's dream customer...

The problem is I don't think you have a clear understanding of what those lens faults really are and how they can be fixed by software (maybe you do, but I can't see it judging from this thread).

For example, lens distortion can't be transparently fixed with a snap of the fingers. Correcting it alters the frame perimeter by applying geometric transformations to the image, and in almost all cases this will require you to throw some pixels away by cropping back to a rectangular frame. That in turn almost always wastes some sensor pixels. In addition (and this is more important for me personally), it may stretch some parts of the image, which effectively blurs them. You can try to fix that with sharpening, but that only works for smaller amounts of stretch, not the larger ones. The conclusion here is that no software in the world can fix lens distortion and give you a better-quality image than the one you took (assuming the capture was perfect in respect of sharpness). It is not physically possible to create one either - you can only approximate it, and that has its drawbacks.
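
To make that concrete, here is a rough sketch (Python with OpenCV) of the kind of remap-and-crop being described. The one-coefficient radial model, the k1 value and the crop margin are all illustrative, not any real lens profile.

# Rough sketch of radial (barrel/pincushion) distortion correction by remapping.
# The one-coefficient model and k1 value are illustrative, not a real lens profile.
import numpy as np
import cv2

def undistort_radial(img, k1=-0.05):
    """Remap with a simple radial model, then crop back to a rectangular frame."""
    h, w = img.shape[:2]
    cx, cy = w / 2.0, h / 2.0
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    xn = (xs - cx) / w                      # normalised coordinates about the centre
    yn = (ys - cy) / h
    factor = 1.0 + k1 * (xn**2 + yn**2)     # how far out each output pixel samples
    map_x = (xn * factor * w + cx).astype(np.float32)
    map_y = (yn * factor * h + cy).astype(np.float32)
    # Interpolated resampling: this is the step that can soften stretched regions
    corrected = cv2.remap(img, map_x, map_y, interpolation=cv2.INTER_LINEAR)
    margin = int(0.02 * min(h, w))          # cropping discards some edge pixels
    return corrected[margin:h - margin, margin:w - margin]

The remap and the crop are exactly the two costs described above: resampled (slightly softened) pixels and discarded ones.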

Then there is CA - that is an interesting and possibly the most difficult case. Have you ever wondered why Lightroom and other RAW conversion software give you an option to manually correct CA (they do auto correction on the RAW, yet give you a manual option as well)? The reason is that no formula exists that will allow you to fix it fully in post-processing. CA occurs when the three colour planes (R, G and B, as separated by the sensor's filters) are slightly misaligned. You as a lens manufacturer may know how far they are misaligned, but the problem is that it only manifests and needs correction at edges. And that is 3D information which is lost in the 2D projection when the sensor captures the image. To fix this in an ideal and perfect way, without sacrificing image quality, you would need to know all those edges with absolute certainty and also be able to distinguish CA from the real colour occurring at those edges. This is why we have manual options for CA correction in most RAW converters.
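
And a rough idea of what the manual lateral-CA sliders in a raw converter are doing under the hood: a radial rescale of the red and blue channels relative to green. The scale factors below are invented for illustration.

# Sketch of manual lateral-CA correction: radially rescale red and blue against green.
# The scale factors are invented for illustration only.
import numpy as np
import cv2

def correct_lateral_ca(img, red_scale=1.0008, blue_scale=0.9994):
    h, w = img.shape[:2]
    cx, cy = w / 2.0, h / 2.0
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32), np.arange(h, dtype=np.float32))

    def rescale(channel, s):
        # Sample each output pixel from a slightly scaled position about the centre
        map_x = ((xs - cx) / s + cx).astype(np.float32)
        map_y = ((ys - cy) / s + cy).astype(np.float32)
        return cv2.remap(channel, map_x, map_y, interpolation=cv2.INTER_LINEAR)

    b, g, r = cv2.split(img)                # OpenCV stores colour images as BGR
    return cv2.merge([rescale(b, blue_scale), g, rescale(r, red_scale)])

A global rescale like this knows nothing about edges or about which colour is real, which is exactly why the sliders are manual and why it can never be perfect.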

The only imperfection you can fix relatively reliably is vignetting, and even that is flawed, since brightening the corners amplifies the noise there along with the signal.
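
The vignetting fix is essentially just a radial gain map, which is why the corner noise comes up with it. A tiny sketch, assuming an 8-bit three-channel image and an invented falloff model:

# Tiny sketch of vignetting correction as a radial gain map.
# The falloff model and strength are illustrative; real profiles come from measurement.
import numpy as np

def devignette(img, strength=0.6):
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    r = np.hypot(xs - w / 2.0, ys - h / 2.0) / np.hypot(w / 2.0, h / 2.0)
    gain = 1.0 / (1.0 - strength * r**2)    # brighten progressively towards the corners
    # The gain amplifies the noise recorded in the darker corners along with the signal
    return np.clip(img.astype(np.float32) * gain[..., None], 0, 255).astype(np.uint8)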

With the logic you are using, it may as well be that we don't need zoom lenses - we can always use digital zoom, right?...

Software and maths, as good as they are, cannot really create information out of nothing, so they have their limitations.
 
idiots
 
I am just waiting for the time when someone will design a lens
that is little more than a lump of magnetic, crystalline, totally transparent and non-reflective gel; one that can have its shape, refractive index, focal length and transmission characteristics changed electronically, so as to vary in response to the received image on the sensor.

What is a magnetic crystalline gel? And how does a magnetic field alter the path of light? Or is that not what you are getting at?

Also, for something to be totally transparent, there has to be nothing there in the first place.

Upon encountering a change in refractive index, there will always be a reflection, according to the Fresnel reflection equations. For there to be no reflection, there must be no change in the index of refraction between the two media. And where there is no change, the object ceases to be a lens.

As for changing the index of refraction of a material, this might actually be possible, though I have never heard of it being developed. Remember that the index of refraction is intrinsically linked to the relative permittivity, that being how a material responds to an external electric field, and so changing it becomes difficult.
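
To put rough numbers on the two points above: at normal incidence the Fresnel reflectance is R = ((n1 - n2) / (n1 + n2))^2, so it only vanishes when the indices match, and for an ordinary non-magnetic material n is about the square root of the relative permittivity. A quick illustrative check:

# Quick numeric illustration (normal incidence only; values are textbook examples).
import math

def fresnel_reflectance_normal(n1, n2):
    """R = ((n1 - n2) / (n1 + n2))**2 at normal incidence."""
    return ((n1 - n2) / (n1 + n2)) ** 2

def index_from_permittivity(eps_r, mu_r=1.0):
    """n = sqrt(eps_r * mu_r); mu_r is ~1 for ordinary optical materials."""
    return math.sqrt(eps_r * mu_r)

print(fresnel_reflectance_normal(1.0, 1.5))   # air/glass: about 0.04 (4% per surface)
print(fresnel_reflectance_normal(1.5, 1.5))   # matched indices: 0.0, no reflection
print(index_from_permittivity(2.25))          # eps_r = 2.25 gives n = 1.5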
 
What a load of complete and utter ****.
Improving an image in software is always going to be destructive,

All digital imaging, with or without a lens, necessarily discards information.
No digital image can be made without firmware/software manipulation.

If you really understood what it is you are defending, you'd also know what negative refractive index (optical metamaterial) lenses were. These are at the very early stages of research at the moment, and this so far is the ONLY thing that threatens to completely change the way in which lenses are designed.

This is still rather more fanciful than software correction, which is a practical and well-used scientific technique.

A shed load of money will continue to be pumped into optical design, and saying that A) software correction is better than physical correction, and B) current lenses are useless, is just wrong.

A) Software correction can be better in practical/cost-effective terms, even in the highest-quality scientific lenses.

B) You have fallen for debating hyperbole, used to provoke discussion.

I ask you to please understand what you are talking about before flipping out over an article that has no fact in it whatsoever!

Indeed it has a factual base; however, its conclusions are somewhat suspect.
However, software has been used for lens corrections for several years; that it will have, and already has, specific uses in high-end cameras is not open to dispute.

That it is becoming a serious tool in the redesign of quality lenses is also a fact.

It will of course have no effect on existing lenses, except perhaps an earlier demise than might have been expected.

Few of us would actually wish to cart around heavy multi-element lenses if there were an equal-quality alternative.
 
What is a magnetic crystalline gel? And how does a magnetic field alter the path of light? Or is that not what you are getting at?

I am afraid you need a humour/wishful-thinking transplant.

However, the eye does contain a crystalline gel .. the rest is of course nonsense. :)
 
All digital imaging, with or without a lens, necessarily discards information.

Granted. However, the argument originally suggests (not states, admittedly) that good-quality lenses are a thing of the past. This I refuse to believe.

Software correction can be better in practical/cost-effective terms, even in the highest-quality scientific lenses.

Granted. However, it is not experimental-grade optics under discussion in the OP, so software editing will struggle to produce the same quality of image from a poor lens as an unedited image from a good optic.

B) You have fallen for debating hyperbole, used to provoke discussion.

Biting was too easy :D
 
Well, this way of "removing barriers" is like pretending they never existed in the first place, not really removing them.

The main barrier to lens design that I'm talking about is the reflex mirror and its effect on back focus. The lens designer has one arm tied behind their back before even starting. This barrier is real and big.

The problem is I don't think you have a clear understanding of what those lens faults really are and how they can be fixed by software (maybe you do, but I can't see it judging from this thread).

Maybe I do...

For example, lens distortion can't be transparently fixed with a snap of the fingers. Correcting it alters the frame perimeter by applying geometric transformations to the image, and in almost all cases this will require you to throw some pixels away by cropping back to a rectangular frame. That in turn almost always wastes some sensor pixels. In addition (and this is more important for me personally), it may stretch some parts of the image, which effectively blurs them. You can try to fix that with sharpening, but that only works for smaller amounts of stretch, not the larger ones. The conclusion here is that no software in the world can fix lens distortion and give you a better-quality image than the one you took (assuming the capture was perfect in respect of sharpness). It is not physically possible to create one either - you can only approximate it, and that has its drawbacks.

Distortion is really easy to fix. I've been doing it for ages, undetectably. I also use the same principles to correct converging verticals, and to a much greater degree of correction. There is so much stretch in a good digital image, so much sharpness and so many pixels, that it can be done invisibly. What you say is true, but it really isn't a significant concern in practice given the small levels of correction we're talking about here; it's just a few per cent. Where the bespoke distortion programs really make a difference is with the moustache type of distortion common to wide zooms, which is hard to correct manually.
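
For what it's worth, the converging-verticals correction is essentially a perspective warp. A bare-bones sketch (Python/OpenCV); the corner points are placeholders you would pick by eye:

# Bare-bones sketch of correcting converging verticals with a perspective warp.
# src_corners are four picked corners of a leaning facade, clockwise from top-left.
import numpy as np
import cv2

def straighten_verticals(img, src_corners, dst_size):
    w, h = dst_size
    dst_corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    H = cv2.getPerspectiveTransform(np.float32(src_corners), dst_corners)
    # Like distortion correction, this stretches some regions and crops others
    return cv2.warpPerspective(img, H, (w, h))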

Then there is CA - that is an interesting and possibly the most difficult case. Have you ever wondered why Lightroom and other RAW conversion software give you an option to manually correct CA (they do auto correction on the RAW, yet give you a manual option as well)? The reason is that no formula exists that will allow you to fix it fully in post-processing. CA occurs when the three colour planes (R, G and B, as separated by the sensor's filters) are slightly misaligned. You as a lens manufacturer may know how far they are misaligned, but the problem is that it only manifests and needs correction at edges. And that is 3D information which is lost in the 2D projection when the sensor captures the image. To fix this in an ideal and perfect way, without sacrificing image quality, you would need to know all those edges with absolute certainty and also be able to distinguish CA from the real colour occurring at those edges. This is why we have manual options for CA correction in most RAW converters.

Have you tried the custom lens-specific correction for CA found in Canon's DPP Raw software? It works very well and makes a dramatic difference to the CA-rich images I shot with an EF-S 17-85mm, at the wide end, max aperture. Transforms them no less; it really is that good. And that only uses the 2D data you speak of (I assume it does) but I don't think it's really necessary to go further than that with this feature.

However, the 3D data is also known, and if it is known, it can be corrected. It may take a lot of processing power but if the camera knows the performance characteristics at all times - all focal lengths, all apertures, all focusing distances - then in theory algorithms can be devised to manage them. I don't think we need to go this far, but the potential is there and processing power is not a problem these days.

The only imperfection you can fix relatively reliably is vignetting, and even that is flawed, since brightening the corners amplifies the noise there along with the signal.

Thanks for that guarded concession. You will know that vignetting is a major impediment to wide-aperture lenses and that anything which can be done to correct it will have big benefits.



There are always problems in forming an image and whether they are resolved optically or by digital processing, both methods have their drawbacks. But I think the overall result is better if both methods are used in tandem.

Speaking of optical hurdles, how much better things would be if we had a concave sensor instead of a flat one? I'm not aware that this is remotely practical, but in combination with digital processing it would make a huge difference to lens design. Just a thought :)

Edit: a further comment on vignetting etc. This is already being heavily corrected in some compacts, as are CA and distortion. They can do this without the user even knowing what's going on, because the image is always seen on the LCD after processing. What the eye doesn't see, etc. It cannot be done acceptably in a DSLR, because the optical viewfinder shows the uncorrected view through the lens, which would look bad and give the game away. When this is replaced with an EVF, DSLR-style lenses can benefit too.
 
Speaking of optical hurdles, how much better things would be if we had a concave sensor instead of a flat one? I'm not aware that this is remotely practical, but in combination with digital processing it would make a huge difference to lens design. Just a thought :)

I read a report on a paper covering just that subject a few months ago.
It seems silicon is not flexible, but they have had some success using small sections that are joined. Seems like a major difficulty to me, so I won't hold my breath.
Though why silicon wafers can't be formed by molecular deposition, like lens coatings, I don't know; then they could be any shape required. But I am sure there is a very good scientific reason why not.
 
The question of CA is an interesting one. When lenses are tested, as in "AP", they use the ISO 12233 slanted knife-edge chart at key focal lengths.

CA is constant for any given focal length/aperture/focus. However, this does not mean you can always see it; mostly it simply degrades the image sharpness.

I always correct for CA in Photoshop during raw processing. I use a set of adjustments I have found visually for my 17-55 f/2.8 lens and plotted as a graph.
Admittedly this makes no allowance for aperture or focus, but it gives a sharper result than not using it.
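
In effect the graph is just a lookup table, so the same idea can be applied programmatically by interpolating between the plotted points. The focal lengths and adjustment values below are invented placeholders, not the actual figures for the 17-55:

# Sketch of looking up a CA correction amount from a few hand-found data points.
# These numbers are invented placeholders, not real measurements.
import numpy as np

FOCAL_LENGTHS_MM = np.array([17.0, 24.0, 35.0, 55.0])
FRINGE_ADJUSTMENT = np.array([12.0, 6.0, 2.0, -3.0])    # arbitrary slider units

def ca_adjustment(focal_length_mm):
    """Linearly interpolate the correction between the plotted focal lengths."""
    return float(np.interp(focal_length_mm, FOCAL_LENGTHS_MM, FRINGE_ADJUSTMENT))

print(ca_adjustment(28.0))   # falls between the 24mm and 35mm values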

That it works I have no doubt, as since using it CA has never been a problem.

That is not to say I have never had any fringing, as sometimes I do see inconsistent colour fringes in shots taken directly into the light. But those few occasions are easily dealt with by desaturating them. I suspect this is more a sensor micro-lens problem than a simple lens CA one.

My pocket Minolta G600 never shows CA or fringing. I doubt it is fitted with the mythical perfect lens, though it must be a very good one.
I can only suppose it suffers from CA like any other lens, but is wonderfully well corrected in firmware. Interestingly, though, there is no attempt at correcting for pin/barrel distortion.
 
I do not understand why anyone should somehow fear the correction of residual lens distortions by software.
Anyone into panoramic photography (mosaic or 360x180) will understand that, as part of the stitching process, every image is digitally reshaped to take its place on a sphere defined by the focal length, and further reshaped to remove any lens-based distortion such as pin/barrel/moustache. In this form the images can be matched together perfectly.
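
For anyone wondering what "reshaped to take its place on a sphere" looks like in practice, here is a rough sketch of the rectilinear-to-equirectangular remap driven by the focal length (Python/OpenCV; the function is purely illustrative, the projection maths is the standard pinhole model):

# Rough sketch of remapping a rectilinear frame onto spherical (equirectangular)
# coordinates using the focal length, as done when stitching panoramas.
import numpy as np
import cv2

def to_equirectangular(img, focal_px):
    h, w = img.shape[:2]
    cx, cy = w / 2.0, h / 2.0
    lon = (np.arange(w) - cx) / focal_px            # radians left/right of centre
    lat = (np.arange(h) - cy) / focal_px            # radians above/below centre
    lon, lat = np.meshgrid(lon, lat)
    # Pinhole projection: each spherical direction maps back to a source pixel
    map_x = (focal_px * np.tan(lon) + cx).astype(np.float32)
    map_y = (focal_px * np.tan(lat) / np.cos(lon) + cy).astype(np.float32)
    # Directions outside the source frame's field of view come out black
    return cv2.remap(img, map_x, map_y, interpolation=cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_CONSTANT)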

In the case of a partial panorama or mosaic it is rendered so as to be viewed and printed on a flat surface, using one of a number of possible projections.

All that said, any given pixel may have been stretched, moved and blended to a very considerable extent. However, the resultant final image should be sharp, well corrected and of excellent quality.

Such manipulations, carried out invisibly to the user, are massive compared to the lens corrections suggested in the OP.
 
I do not understand why anyone should somehow fear the correction of residual lens distortions by software.


This entire thread seems to be moving off on a weird tangent. I think the only issue people have, and it's rather simple, is Hoppy telling them their kit is a load of old poo and that said kit will be useless in a year or two. He also tried forcing his opinion that the new Panasonic "super camera" signals the end of the DSLR market and all associated kit.

It was silly, and it got the deserved response. IMO.

Gary.
 
I don't mind change, but I'm sure that we don't know what the future is likely to be or how it will affect 'photography' yet - we can only look back and see where it came from.

I'd expect the ever-increasing use of software code to manipulate images to be part of it though, and whether that happens 'in camera' or in a 'digital darkroom', I can't see that it is really worth worrying about either.
 
This entire thread seems to be moving off on a weird tangent. I think the only issue people have, and it's rather simple, is Hoppy telling them their kit is a load of old poo and that said kit will be useless in a year or two. He also tried forcing his opinion that the new Panasonic "super camera" signals the end of the DSLR market and all associated kit.

It was silly, and it got the deserved response. IMO.

Gary.

It is gratifying that such expensive POO can take such excellent photographs. :D

I think the thread has moved more in the direction that was intended by the OP.

No one has seriously contended that our existing kit is poo.

The real point is how lenses will be designed and manufactured in the future and the effect software manipulation will have on effectiveness, cost and quality.

Those who believe that only refractive media can be used in the creation of images are only partially correct. The best option is to use optical elements for the things they can do most effectively, and firmware to remap the image for the final high-quality result.

The question of short back-focus lenses, as required on a Leica M camera or a Micro 4/3 camera, is a special case of the general principle. And because of the extreme angles of incidence it is as much a sensor problem as a lens one.

This is far from being a silly thread.

The thinking involved in the new generation of lens designs will eventually see existing lenses replaced with ones that are higher quality, lighter, smaller and less expensive.

Any changes, if any, to the cameras they fit on will be down to equally new thinking.
 
This is far from being a silly thread.

It's an extremely silly thread due to the way in which it was started. At best it needs one hell of a mop-up to stop people who stumble across it from thinking we are all a bunch of infighting numpties. These here forums used to be full of helpful and well-thought-out threads - an excellent resource for all. This thread is an example of how crap it can be, though. A discussion exists which you are clearly trying to rescue (good on you too), but IMO any discussion is pointless in a thread which has four pages of marketing crap being forced upon the world. A new thread should be started, or this one should be mopped up.

Gary.
 
I think the thread has moved more in the direction that was intended by the OP.

At last :)

I read a report on a paper covering just that subject a few months ago.
It seems silicon is not flexible, but they have had some success using small sections that are joined. Seems like a major difficulty to me, so I won't hold my breath.
Though why silicon wafers can't be formed by molecular deposition, like lens coatings, I don't know; then they could be any shape required. But I am sure there is a very good scientific reason why not.

The problem with this one is that while a concave surface is really good for wide-angle lenses, and can sort out lots of imaging problems at a stroke, it becomes less desirable as focal length increases, to the point where it is undesirable in long lenses.

A variable-curvature sensor would be needed for our purposes, and even if that were possible, I think it is more easily sorted by the lens.
 
I am just waiting for the time when someone will design a lens
that is little more than a lump of magnetic, crystalline, totally transparent and non-reflective gel; one that can have its shape, refractive index, focal length and transmission characteristics changed electronically, so as to vary in response to the received image on the sensor.

Well, I believe that people are working on similar kinds of things:

Link
 
Well, I dunno. A year or two ago I switched back to film, mostly, although I have, and for some things still use, a perfectly nice D200.

There were a few reasons, but one of them was that I felt a little bit exploited by the 'must have latest upgrade' thing. Sure a D700 would give me marginally better digital image quality and a bit of improved low-light capability, but it seems to me that manufacturers expecting us to spend a few grand every couple of years to 'stay current' are basically taking the ****.

It might be different if it were a completely new capability, of the sort you get from going say 35mm to medium or large format, or SLR to rangefinder. Not for a relatively marginal improvement though.

So frankly, I'm not too worried about the scenario described unless a) Ilford, Kodak et al all go out of business or b) somebody makes a digital Leica equivalent taking M-mount lenses, with roughly D700 imaging capability, that I can actually afford.
 
Well, I dunno. A year or two ago I switched back to film, mostly, although I have, and for some things still use, a perfectly nice D200.

Apart from a Canon G3 and G6, I did not move to digital for serious work till the 40D. It had a quality of image and a selection of features at a price that seemed very reasonable - no more expensive, in fact, than the G3 had been.
I have used virtually every format and film stock available professionally since the late '40s.
Nothing has been so convenient, or made it so easy to obtain the quality I require, with the possible exception of a Rolleiflex.
I can see no need to upgrade, as it is called, any time soon.

I would certainly not move back to film simply because the advances in digital were moving faster than my wish to buy new kit.
At the moment there is nothing on the market that would tempt me.

Now, if a DSLR or equivalent came out the size of my Olympus OM1n, with equivalent or even smaller lenses, then I might well be tempted.

DSLRs with full-frame or APS-size sensors, a decent pentaprism and professional features are just too large and heavy. Size does not seem to be something Canon and Nikon are concerned about.

However, it is a feature that could loosen my purse strings. If it takes lenses with digital correction, so be it.
 
Yep, that's sort of why an M-mount, Leica-simple body with D700 image quality would tempt me far more than a breeze-block sized body containing several computers that are smarter than the ones used to put men on the moon.

I was quite tempted by the D700, but for the situations in which I'd want the high-ISO capabilities, I'd also want something small and simple to use.
 
Yep, that's sort of why an M-mount, Leica-simple body with D700 image quality would tempt me far more than a breeze-block sized body containing several computers that are smarter than the ones used to put men on the moon.
I wonder if Cosina/Voigtländer are up for it? :D
 
I wonder if Cosina/Voigtländer are up for it? :D

Well, Zeiss have pretty much indicated that they'll probably produce (i.e. get Cosina to make to their design) a digital Ikon one day, but not until they can make it work without compromise and the technology is 'mature' enough, which sounded to me like 'won't be obsolete in a couple of years'. I'll have a dig around and see if I can find the interview ...
 