Researchers at the European Space Research and Technology Centre (ESTEC) have developed a new framework for onboard hyperspectral image processing that uses deep learning to analyze massive volumes of spectral data in real time. Their review highlights lightweight neural networks, generative models, and hardware accelerators as key technologies shaping the next generation of spaceborne Earth observation.
Spectroscopy from Space: A New Frontier
A new study published in the IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing (1) reviews the rapid evolution of deep learning methodologies in hyperspectral image (HSI) analysis for spaceborne remote sensing. Authored by Nafiseh Ghasemi, Jon Alvarez Justo, Marco Celesti, Laurent Despoisse, and Jens Nieke of the European Space Research and Technology Centre (ESTEC) in Noordwijk, the Netherlands, the paper, titled “Onboard Processing of Hyperspectral Imagery: Deep Learning Advancements, Methodologies, Challenges, and Emerging Trends,” explores how artificial intelligence (AI) can transform spectroscopic imaging from raw data collection to real-time interpretation in orbit.
Imaging spectroscopy, which captures hundreds of contiguous spectral bands across the visible to shortwave infrared (VSWIR) range, enables scientists to detect chemical and physical properties of Earth’s surface through subtle variations in reflected light. Each pixel in a hyperspectral data cube acts as a distinct spectral fingerprint, revealing detailed compositional information about land, ocean, and atmospheric features (1,2).
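The cube structure described above can be illustrated with a minimal NumPy sketch (the array dimensions here are hypothetical, chosen for illustration; real sensor formats vary):

```python
import numpy as np

# Hypothetical hyperspectral data cube: 100 x 100 spatial pixels, 224 VSWIR bands.
rng = np.random.default_rng(0)
cube = rng.random((100, 100, 224))

# Each spatial pixel holds a full spectrum -- its "spectral fingerprint".
spectrum = cube[42, 17, :]   # reflectance across all 224 bands
print(spectrum.shape)        # (224,)

# A single band, by contrast, is a 2-D grayscale image of the whole scene.
band_image = cube[:, :, 100]
print(band_image.shape)      # (100, 100)
```

Slicing along the spectral axis yields per-pixel spectra for material identification, while slicing along the spatial axes yields conventional single-band imagery.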
Artist’s AI rendition of HSI calibration for field analysis © arozzmer-chronicles-stock.adobe.com

Deep Learning Architectures for Hyperspectral Processing
The authors provide an extensive analysis of neural network (NN) architectures, including convolutional neural networks (CNNs), autoencoders, deep belief networks (DBNs), generative adversarial networks (GANs), and recurrent neural networks (RNNs), assessing each architecture's suitability for hyperspectral data processing. CNNs in particular show strong promise for both spectral and spatial analysis, enabling efficient pixel-wise classification and target detection directly from orbital platforms (1).
Unlike traditional machine learning (ML) methods such as support vector machines (SVMs) or k-nearest neighbor (KNN) algorithms, deep learning can automatically extract nonlinear spectral features without manual preprocessing. This automation allows the detection of materials or environmental anomalies that might otherwise be invisible to human analysts.
A key finding from Ghasemi and coauthors is the demonstrated efficacy of lightweight CNNs and one-dimensional CNN (1D-CNN) models for onboard processing. These architectures require fewer computational resources, which is critical in satellite environments where energy and memory are severely limited. The study notes that the Phi-Sat-1 mission has already demonstrated the feasibility of deploying compact neural networks to detect cloud cover in real time under constrained conditions (1).
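To see why 1D-CNNs are so inexpensive, consider that their filters slide only along the spectral axis of a single pixel. The hand-rolled forward pass below (not the architecture from the paper; filter sizes and counts are illustrative assumptions) shows a convolutional layer whose entire weight budget is 56 numbers:

```python
import numpy as np

def conv1d(spectrum, kernels, stride=1):
    """Valid-mode 1-D convolution of one spectrum with a bank of kernels,
    followed by a ReLU nonlinearity."""
    k = kernels.shape[1]
    n_out = (spectrum.size - k) // stride + 1
    out = np.empty((kernels.shape[0], n_out))
    for i in range(n_out):
        window = spectrum[i * stride : i * stride + k]
        out[:, i] = kernels @ window      # one dot product per filter
    return np.maximum(out, 0.0)           # ReLU

rng = np.random.default_rng(1)
spectrum = rng.random(224)                # one pixel, 224 spectral bands
kernels = rng.standard_normal((8, 7))     # 8 filters of width 7: only 56 weights

features = conv1d(spectrum, kernels)
print(features.shape)                     # (8, 218)
```

Because the filters never touch the spatial dimensions, memory and compute scale with the number of bands rather than the full image, which is what makes such models attractive for power-constrained onboard hardware.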
Spectral Data Compression and Hardware Acceleration
Hyperspectral missions, such as ESA’s upcoming Copernicus Hyperspectral Imaging Mission for the Environment (CHIME), will generate data at rates exceeding 5 Gb/s, or roughly one terabyte per orbit (1). To manage this data volume, onboard data compression and selective analysis are essential.
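A quick back-of-the-envelope calculation (assuming decimal units and a typical ~90-minute low-Earth orbit, neither of which is stated in the paper) shows how fast data accumulates at that rate:

```python
# Time to accumulate 1 TB at the quoted CHIME-class data rate.
rate_gbps = 5.0            # gigabits per second (from the review)
tb_in_gb = 8_000.0         # 1 terabyte = 8,000 gigabits (decimal units assumed)

seconds = tb_in_gb / rate_gbps
print(seconds)             # 1600.0 s
print(seconds / 60)        # ~26.7 minutes
```

In other words, roughly a terabyte results from only about a third of a ~90-minute orbit spent actively imaging, underscoring why onboard compression and selective analysis are unavoidable.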
The review discusses the use of GANs for both noise reduction and data augmentation—two strategies that improve the reliability of hyperspectral calibration models when limited training data are available. Hardware acceleration is another focus area, with field-programmable gate arrays (FPGAs) and custom low-power processors emerging as tools to execute deep learning inference in orbit (1).
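A full GAN is beyond a short sketch, but the augmentation idea it serves can be shown with the simplest possible baseline: expanding a small labeled set with noise-perturbed copies. This additive-Gaussian stand-in (function name, noise level, and array sizes are hypothetical) only perturbs real samples, whereas a trained GAN would synthesize entirely new spectra:

```python
import numpy as np

def augment_spectra(spectra, n_copies=4, noise_sd=0.01, seed=0):
    """Expand a small set of labeled spectra with noise-perturbed copies.

    Additive Gaussian noise is the simplest augmentation baseline; a GAN
    generalizes this by learning the spectral distribution itself.
    """
    rng = np.random.default_rng(seed)
    copies = [spectra]
    for _ in range(n_copies):
        copies.append(spectra + rng.normal(0.0, noise_sd, spectra.shape))
    return np.concatenate(copies, axis=0)

train = np.random.default_rng(2).random((10, 224))  # 10 labeled spectra
augmented = augment_spectra(train)
print(augmented.shape)                              # (50, 224)
```

Each perturbed copy inherits the label of its source spectrum, so the training set grows fivefold here without any new ground-truth collection.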
By codesigning neural networks with specialized hardware, researchers can achieve high-throughput spectral analysis while reducing energy consumption. This approach supports onboard applications such as real-time classification, anomaly detection, and the identification of marine debris through reflectance-based spectral signatures.
Future Directions With Smarter Satellites and Sustainable Operations
According to the authors, the integration of deep learning into future hyperspectral missions could make satellites far more autonomous. For example, ESA’s CHIME mission, operating over the 400–2500 nm visible to shortwave infrared (VSWIR) range with ≤10 nm spectral resolution, will support monitoring of biodiversity, soil health, and agricultural sustainability, using onboard analysis to optimize data transmission (1).
However, several challenges remain. Limited labeled datasets, noisy atmospheric data, and the need for rigorous in-orbit validation continue to hinder the full adoption of deep learning models. The paper recommends expanding research into self-supervised and reinforcement learning to improve adaptability and robustness under operational conditions.
The authors conclude that lightweight architectures, when paired with onboard AI hardware, will define the next generation of spectroscopic imaging systems. Such advancements promise to extend the analytical reach of remote sensing beyond Earth orbit—potentially paving the way for autonomous planetary exploration missions.
References
(1) Ghasemi, N.; Justo, J. A.; Celesti, M.; Despoisse, L.; Nieke, J. Onboard Processing of Hyperspectral Imagery: Deep Learning Advancements, Methodologies, Challenges, and Emerging Trends. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2025, 18, 4780–4790. https://ieeexplore.ieee.org/document/10834581 (accessed 2025-10-21).
(2) Bourriz, M.; Hajji, H.; Laamrani, A.; Elbouanani, N.; Abdelali, H. A.; Bourzeix, F.; El-Battay, A.; Amazirh, A.; Chehbouni, A. Integration of Hyperspectral Imaging and AI Techniques for Crop Type Mapping: Present Status, Trends, and Challenges. Remote Sens. 2025, 17, 1574. DOI: 10.3390/rs17091574