Well, this way of "removing barriers" is more like pretending they never existed in the first place, not really removing them.
The main barrier to lens design that I'm talking about is the reflex mirror and its effect on back focus. The lens designer has one arm tied behind their back before even starting. This barrier is real and big.
The problem is I don't think you have a clear understanding of what those lens faults really are and how they can be fixed in software (maybe you do, but I can't see it judging from this thread).
Maybe I do...
For example, lens distortion can't be transparently fixed with a snap of the fingers. Correcting it alters the frame perimeter by applying geometric transformations to the image, and in almost all cases this requires you to throw some pixels away by cropping back to a rectilinear frame. That in turn almost always wastes some sensor pixels. In addition (and this is more important to me personally), it may stretch some parts of the image, which effectively blurs them. You can try to fix that by sharpening, but sharpening only copes well with small amounts of stretch, not large ones. The conclusion here is that no software in the world that fixes lens distortion will give you a better-quality image than the one you took (assuming the capture was perfect with respect to sharpness). It is not physically possible to create one either - you can only approximate it, and the approximation has drawbacks.
Distortion is really easy to fix. I've been doing it for ages, undetectably. I also use the same principles to correct converging verticals, and to a much greater degree of correction. There is so much stretch in a good digital image, so much sharpness and so many pixels, that it can be done invisibly. What you say is true, but it really isn't a significant concern in practice given the small levels of correction we're talking about here - it's just a few per cent. Where the bespoke distortion programs really make a difference is with the moustache-type distortion common to wide zooms, which is hard to correct manually.
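To make the mechanics concrete, here is a minimal numpy sketch of the kind of radial remap being discussed. It is illustrative only: a single-coefficient model with nearest-neighbour sampling (real correction software uses multi-term models and proper interpolation), and the function name and `k1` parameter are my own invention, not any particular product's API.

```python
import numpy as np

def undistort_radial(img, k1):
    """Correct simple barrel/pincushion distortion with one radial term.

    For each output pixel, look up where it came from in the distorted
    source: r_src = r * (1 + k1 * r^2), with r normalised to half the
    image diagonal. Nearest-neighbour sampling keeps the sketch short.
    """
    h, w = img.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    norm = np.hypot(cx, cy)                      # half-diagonal radius
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    dx, dy = (xs - cx) / norm, (ys - cy) / norm
    r2 = dx * dx + dy * dy
    scale = 1.0 + k1 * r2                        # radial source displacement
    src_x = np.clip(np.rint(cx + dx * scale * norm), 0, w - 1).astype(int)
    src_y = np.clip(np.rint(cy + dy * scale * norm), 0, h - 1).astype(int)
    return img[src_y, src_x]
```

Note how the `clip` calls embody the trade-off in the quoted post: pixels pulled in from outside the frame don't exist, so a real pipeline either crops or loses edge data, and the non-uniform `scale` is exactly the stretch that locally softens the image.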
Then there is CA - an interesting and possibly the most difficult case. Have you ever wondered why Lightroom and other RAW conversion software give you an option to manually correct CA (they do auto correction on a RAW, yet give you a manual option as well)? The reason is that no formula exists that will fully fix it in post-processing. CA occurs when the three optical planes (R, G and B, as filtered by the sensor filters) are slightly misaligned. As a lens manufacturer you may know how far they are misaligned, but the problem is that it only manifests, and only needs correction, at edges. And that is 3D information that is lost in the 2D projection when the sensor captures the image. To fix this in an ideal, perfect way without sacrificing image quality, you would need to know all those edges with absolute certainty, and also be able to distinguish CA from real colour occurring at those edges. This is why we have manual options for CA correction in most RAW converters.
Have you tried the custom lens-specific correction for CA found in Canon's DPP Raw software? It works very well and makes a dramatic difference to the CA-rich images I shot with an EF-S 17-85mm, at the wide end, max aperture. Transforms them no less; it really is that good. And that only uses the 2D data you speak of (I assume it does) but I don't think it's really necessary to go further than that with this feature.
However, the 3D data is also known, and if it is known, it can be corrected. It may take a lot of processing power but if the camera knows the performance characteristics at all times - all focal lengths, all apertures, all focusing distances - then in theory algorithms can be devised to manage them. I don't think we need to go this far, but the potential is there and processing power is not a problem these days.
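For the 2D part of this (lateral CA), the usual approach is a tiny radial magnification change applied to the red and blue channels so they register with green. A hedged numpy sketch of that idea, with invented function names and nearest-neighbour sampling for brevity - not how DPP or Lightroom actually implement it:

```python
import numpy as np

def _rescale_channel(ch, s):
    """Resample one channel at radially scaled coordinates (nearest-neighbour)."""
    h, w = ch.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    src_x = np.clip(np.rint(cx + (xs - cx) * s), 0, w - 1).astype(int)
    src_y = np.clip(np.rint(cy + (ys - cy) * s), 0, h - 1).astype(int)
    return ch[src_y, src_x]

def correct_lateral_ca(rgb, scale_r=1.0, scale_b=1.0):
    """Align R and B to G by tiny radial magnification changes.

    scale_r / scale_b sit very close to 1.0 (e.g. 1.0005); green is
    left untouched as the reference plane.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return np.stack([_rescale_channel(r, scale_r), g,
                     _rescale_channel(b, scale_b)], axis=-1)
```

This only handles the component that is a pure function of radius, which is why it works from 2D data alone; the residual, edge-dependent fringing the quoted post describes is what the manual sliders and more elaborate algorithms are left to deal with.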
The only imperfection you can fix relatively reliably is vignetting, and even that is flawed, since the correction degrades the signal-to-noise ratio in the corners.
Thanks for that guarded concession. You will know that vignetting is a major impediment to wide-aperture lenses and that anything which can be done to correct it will have big benefits.
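The S/N point is easy to see in code: vignetting correction is just a radial gain map, and whatever gain brightens the corners multiplies the noise by the same factor. A minimal sketch, assuming a simple quadratic falloff model (the function name and `falloff` parameter are illustrative, not any camera maker's actual model):

```python
import numpy as np

def correct_vignetting(img, falloff):
    """Compensate corner light falloff with a radial gain map.

    falloff is the fraction of light lost at the extreme corners
    (0.5 = one stop). Gain = 1 / (1 - falloff * r^2), with r
    normalised so r = 1 at the corners. The same gain amplifies
    the noise, so corner S/N drops by exactly the correction factor.
    """
    h, w = img.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    r2 = ((xs - cx) ** 2 + (ys - cy) ** 2) / (cx ** 2 + cy ** 2)
    gain = 1.0 / (1.0 - falloff * r2)
    return img * gain[..., None] if img.ndim == 3 else img * gain
```

For half-stop-level falloff the corner gain is small, which is why this is the correction cameras apply most freely; the noise cost only becomes visible at the aggressive corrections fast wide-angle lenses would need.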
There are always problems in forming an image and whether they are resolved optically or by digital processing, both methods have their drawbacks. But I think the overall result is better if both methods are used in tandem.
Speaking of optical hurdles, how much better would things be if we had a concave sensor instead of a flat one? I'm not aware that this is remotely practical, but in combination with digital processing it would make a huge difference to lens design. Just a thought.
Edit: a further comment on vignetting etc. This is already being heavily corrected in some compacts, as is CA and distortion. They can do this without the user even knowing what's going on because the image is always seen on the LCD, after processing. What the eye doesn't see etc. It cannot be done acceptably in a DSLR because of the optical viewing through the lens which would look bad, and would give the game away. When this is replaced with an EVF, DSLR-style lenses can benefit, too.