Automation driven by cognitive computer vision is already advancing industry, space exploration, health care, and infrastructure inspection. Yet many problems remain hard: noisy and scarce data challenge neural systems to infer reliable, useful signals at scale. Our lab explores multi-task, self-supervised, and weakly supervised learning formulations to improve machine autonomy and human-computer interfaces in real-world settings.

We study how motion estimation and concurrent tasks influence video restoration performance. We develop tailored neural architectures that efficiently address multiple vision tasks, such as video denoising, deblurring, stabilization, and segmentation.
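A common pattern behind such multi-task architectures is a shared feature encoder feeding lightweight task-specific heads. The toy sketch below illustrates this idea only; the class and parameter names are hypothetical, it uses per-pixel linear maps in place of real convolutions, and it is not the lab's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1x1(x, w):
    # 1x1 "convolution": per-pixel linear map across channels
    return np.einsum("hwc,cd->hwd", x, w)

# Hypothetical toy multi-task model: one shared encoder,
# one lightweight head per task (all names are illustrative).
class MultiTaskNet:
    def __init__(self, in_ch=3, feat_ch=8, n_classes=4):
        self.w_enc = rng.normal(scale=0.1, size=(in_ch, feat_ch))
        self.w_denoise = rng.normal(scale=0.1, size=(feat_ch, in_ch))
        self.w_seg = rng.normal(scale=0.1, size=(feat_ch, n_classes))

    def __call__(self, frame):
        feat = np.maximum(conv1x1(frame, self.w_enc), 0.0)  # shared features (ReLU)
        denoised = conv1x1(feat, self.w_denoise)            # restoration head
        seg_logits = conv1x1(feat, self.w_seg)              # segmentation head
        return denoised, seg_logits

net = MultiTaskNet()
frame = rng.normal(size=(16, 16, 3))   # H x W x C video frame
denoised, seg = net(frame)
print(denoised.shape, seg.shape)       # (16, 16, 3) (16, 16, 4)
```

Sharing the encoder lets the tasks regularize one another, which is one reason concurrent tasks can improve restoration quality.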

Our research is firmly grounded in applications. We develop single- and multi-camera systems that leverage passive and active vision as well as additional modalities. In particular, we are interested in enabling microcameras, whose small optics and sensors compromise image and video quality, to see like regular cameras. We also study multi-camera cooperation for robust robotic vision.