AI-Driven Dim-Light Adaptive Camera (DimCam) for Lunar Robots

Ran Duan, Bo Wu, Long Chen, Hao Zhou, Qichen Fan

Research output: Journal article publication › Conference article › Academic research › peer-review

Abstract

The past decade has seen a boom in lunar exploration. China, India, Japan and other countries have successfully landed landers or rovers on the lunar surface (Wu et al., 2014, 2018, 2020; Prasad et al., 2023). Future missions to explore the Moon are focusing on the lunar south pole (Peña-Asensio et al., 2024). The solar altitude angle at the lunar south pole is extremely low, resulting in low solar irradiance and large areas that are often in dim light or shadow. The permanently shadowed regions (PSRs) at the lunar south pole are also likely to contain substantial amounts of water ice (Li et al., 2018). Future lunar robots exploring the lunar south pole will need to operate in low-light or shadowed regions, making camera sensors that remain sensitive in dim-light environments necessary for these robots. Common night-vision sensors usually use near-infrared cameras. However, sensors based on passive infrared technology have their image resolution limited by several factors, including the intensity of infrared radiation emitted by the object, the sensitivity of the camera, and the performance of the optical system. For instance, thermal imagers typically have a resolution of only 388 × 284 pixels. We have developed an AI-driven dim-light adaptive camera (DimCam) that is ultra-sensitive under varying illumination conditions and achieves high-definition imaging of 1080p or above, for future lunar robots operating in shadowed or dim-light regions. The DimCam integrates two starlight-level ultra-sensitive imaging sensors connected by a rigid base to provide stereo vision in low-illumination environments. An AI edge-computing unit embedded inside the DimCam adaptively denoises and enhances image quality. The AI module uses an end-to-end image-denoising network that identifies and removes noise in the images more accurately by utilizing depth information from the stereo sensors.
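The depth-assisted superposition idea can be illustrated with a minimal sketch. This is not the authors' network, only an assumed toy version of the underlying geometry: for rectified stereo pairs, the right image is warped into the left view using a per-pixel disparity map, the two views are averaged to raise the SNR (the "delayed exposure" superposition), and pixels whose post-alignment residual is large are treated as noise and replaced by the warped value. The function name, threshold, and integer-disparity assumption are all illustrative.

```python
import numpy as np

def stereo_superpose_denoise(left, right, disparity, thresh=0.1):
    """Fuse a noisy rectified stereo pair in the left view.

    left, right : (H, W) images with values in [0, 1]
    disparity   : (H, W) per-pixel disparity of the left view (assumed integer)
    thresh      : residual above which a left pixel is flagged as noise
    """
    h, w = left.shape
    rows = np.arange(h)[:, None]
    # For rectified stereo, left pixel (r, c) corresponds to right pixel (r, c - d).
    src_cols = np.clip(np.arange(w)[None, :] - disparity.astype(int), 0, w - 1)
    warped = right[rows, src_cols]

    # Residual analysis of the aligned images flags likely noise.
    noise_mask = np.abs(left - warped) > thresh

    # Superposition of the overlapping views acts like a delayed exposure,
    # averaging down zero-mean sensor noise.
    fused = 0.5 * (left + warped)

    # Pixels obscured by noise are replaced using the depth-aligned view.
    fused[noise_mask] = warped[noise_mask]
    return fused, noise_mask
```

In practice the residual test and replacement would be learned end-to-end rather than thresholded, and sub-pixel disparities would require interpolation; the sketch only shows why depth alignment makes noise separable from scene content.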
Compared with traditional monocular denoising algorithms, the stereo-vision-based denoising network can significantly improve denoising quality and efficiency by enhancing the signal-to-noise ratio of the input data at the front end. The superposition of the overlapping scenes can be regarded as a delayed exposure, and the residual analysis of the aligned images aids noise identification. In addition, for pixels obscured by noise, more accurate pixel values can be restored through interpolation or replacement using depth information obtained from the stereo sensors. Subsequently, a pre-trained lightweight deep network modified from Zero-DCE (Guo et al., 2020) enhances image brightness and contrast, providing high-quality images even in low-light environments for subsequent applications such as robot positioning and navigation, 3D mapping of the surrounding environment, and autonomous driving. We have tested the DimCam in a simulated environment in the laboratory, and the results show that it has promising performance and great potential for various applications.
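The enhancement stage builds on the light-enhancement curve at the core of Zero-DCE (Guo et al., 2020): LE(x) = x + α·x·(1 − x), applied iteratively per pixel. In the full method, α is a per-pixel, per-iteration map predicted by a small CNN; the scalar α and iteration count below are simplifications for illustration only, not the DimCam implementation.

```python
import numpy as np

def zero_dce_enhance(img, alpha=0.6, n_iter=8):
    """Apply the Zero-DCE light-enhancement curve with a constant alpha.

    img    : array with values in [0, 1]
    alpha  : curve parameter in [-1, 1]; positive values brighten mid-tones
    n_iter : number of curve iterations (8 in the original paper)

    The curve LE(x) = x + alpha * x * (1 - x) keeps 0 and 1 fixed, stays
    within [0, 1], and is monotonic, so it raises brightness and contrast
    without clipping.
    """
    x = img.astype(np.float64)
    for _ in range(n_iter):
        x = x + alpha * x * (1.0 - x)
    return x
```

Because 0 and 1 are fixed points of the curve, repeated application brightens dark regions aggressively while leaving already-saturated pixels untouched, which is what makes the mapping suitable for low-light inputs.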

Original language: English
Pages (from-to): 141-146
Number of pages: 6
Journal: International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences - ISPRS Archives
Volume: 48
Issue number: 1
DOIs
Publication status: Published - 11 May 2024
Event: ISPRS Technical Commission I Midterm Symposium on Intelligent Sensing and Remote Sensing Application - Changsha, China
Duration: 13 May 2024 - 17 May 2024

Keywords

  • Deep Learning
  • Dim Light
  • DimCam
  • Lunar Robots
  • Lunar South Pole

ASJC Scopus subject areas

  • Information Systems
  • Geography, Planning and Development
