Archived posting to the Leica Users Group, 2005/04/10
At 11:50 AM 4/10/2005, Christopher Williams wrote:
>I've read a few reviews from professionals who are now saying that
>a FF 16mp chip now surpasses lens quality. Meaning they are now seeing the
>limits in 35mm designed lenses.

It is impossible to make pixels small enough to surpass the MTF of good 35mm film lenses. Low pass filters, built into sensors intended for "film lens" use, reduce the MTF of the lens to match the MTF requirement of five to nine micron pixels. Good 35mm film lenses can resolve far finer detail than that five to nine micron spacing, so Mr. Nyquist rears his ugly head.

Digital lenses are designed to meet the reduced MTF requirement of digital sensors. Film lenses are designed to get all they can out of the glass, since film, being a homogeneous dispersion, imposes no MTF requirement of its own.

Superb digital images are "created" by the camera firmware and subsequent host software, not the sensor itself. That is why certain software packages produce better pictures than others. A sensor is a grid, and film lenses are not designed to lay down an image with resolution matched to the grid spacing. In fact, a lens has to be designed to deliver roughly a quarter of the grid's MTF in order not to produce ugly artifacts. Digital lenses are designed this way; film lenses require low pass filters at the sensor, and camera firmware and host software clean up any leftover artifacts.

There's a lot of shuckin' and jivin' (interpolation) going on behind the scenes that digital camera makers don't tell you about. There are many decades of software expertise in adjusting digital images to produce outstanding results. NASA, JPL, and other users have been dealing with digital sensors in space, medicine, etc., for a very long while. The magic you see in digital imaging is from the firmware/software. Yes, good lenses and sensors produce better pictures.
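As a back-of-the-envelope check (my own arithmetic, not from the post), the Nyquist limit implied by the five to nine micron pixel pitches mentioned above can be sketched like this. A line pair needs at least two pixels, so the highest resolvable spatial frequency is 1 / (2 x pitch):

```python
# Sketch: Nyquist limit for the 5-9 micron pixel pitches discussed above.
# The formula is standard sampling theory; the pitches are the post's figures.

def nyquist_lp_per_mm(pitch_um: float) -> float:
    """Nyquist limit in line pairs per mm for a given pixel pitch.

    One line pair (a light/dark cycle) needs at least two samples,
    so the limit is 1 / (2 * pitch), with pitch in mm.
    """
    pitch_mm = pitch_um / 1000.0
    return 1.0 / (2.0 * pitch_mm)

for pitch in (5.0, 9.0):
    print(f"{pitch} um pixels -> Nyquist ~ {nyquist_lp_per_mm(pitch):.1f} lp/mm")
```

This gives about 100 lp/mm at 5 microns and about 56 lp/mm at 9 microns; since good 35mm film lenses can exceed those figures, detail above the limit would alias without an optical low-pass filter, which is the point being made.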
But it's still the firmware/software that make them better. A true case of making a silk purse out of a sow's ear. Truly raw, pixel-by-pixel images are ugly! When raw is selected on a camera, it actually means raw after the first level of firmware has corrected each pixel for level, interpolated data into dead pixels, and put the whole image into the correct color space. Only then do you get the so-called raw image for PS or whatever to massage. Host software knows neither the idiosyncrasies of each pixel nor the overall color balance of the array; internal camera firmware does. If a camera simply produced a pixel data stream straight off the sensor, you would spend a lifetime trying to correct it just so that what you see looks somewhat like what you photographed.

The image processors in cameras are VERY powerful - RISC processors running at 200-300 MHz. As I said, there's a heap of shuckin' and jivin' goin' on before any pixel sees the outside world.

JB
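A minimal sketch of one of the firmware steps described above, interpolating data into dead pixels, might look like the following. The dead-pixel map and the sample frame are invented for illustration; real camera firmware uses calibrated per-sensor maps and more sophisticated filters:

```python
# Sketch (illustrative only): replace each known dead pixel with the
# mean of its live in-bounds 8-neighborhood, as camera firmware does
# before the "raw" file ever leaves the camera.

def patch_dead_pixels(image, dead):
    """Return a copy of `image` (a list of rows of numbers) with each
    dead pixel (row, col) replaced by the mean of its live neighbors."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for r, c in dead:
        vals = [image[rr][cc]
                for rr in range(max(0, r - 1), min(h, r + 2))
                for cc in range(max(0, c - 1), min(w, c + 2))
                if (rr, cc) != (r, c) and (rr, cc) not in dead]
        out[r][c] = sum(vals) / len(vals)
    return out

frame = [[10, 10, 10],
         [10, 255, 10],   # stuck pixel in the middle
         [10, 10, 10]]
fixed = patch_dead_pixels(frame, {(1, 1)})
print(fixed[1][1])  # 10.0
```

The point of the sketch is that this correction needs the sensor's own defect map, which only the camera firmware has, exactly as the post argues.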