Computational Imaging: How Much Imaging – How Much Computation?
This talk will discuss, from the viewpoint of a physicist with a background in optical engineering and information, how much imaging and how much computing is (or should be) in computational imaging, aiming for high information efficiency.
For an opticist, the order of the keywords "computational imaging" is inverted: imaging is the first operation in the sequence, as the optics performs the encoding for redundancy reduction, which must necessarily be done before electronic noise is added in the channel. Before computers emerged, decoding was performed solely by optical "hardware" and the observer. Today, it appears as if digital image processing has evolved to a new quality: computational imaging, with much more imaging involved than in the times of digital image processing. Optics and computational imaging are no longer in different faculties. Computational imaging will definitely have a great future, and it has a history: opticists have been doing computational imaging for many years without even knowing the term, as the author of this abstract had to notice to his own astonishment.
As paradigms, we will discuss a few 3D "cameras" which exploit different levels of complexity in "imaging = encoding" and "computation = decoding". Among others: deflectometry, with a dynamic range of up to 10^6; SEM-like images by pure optics, with large depth of field and very low noise; and the 3D motion picture camera, with full 3D information in each camera frame.