Lucas van den Bosch, winter semester 2020/21


Hyper- and multispectral imaging capture a scene in both spatial (position in space) and spectral (wavelength) dimensions: each sampled spatial position contains multiple intensity values corresponding to different spectral bands. Going from multispectral to hyperspectral imaging (HSI), the spectral sampling density increases considerably, enabling the reconstruction of continuous spectra. This higher-dimensional data allows materials or objects to be identified from their spectra and located using the spatially resolved image plane, which usually has two spatial dimensions. The technology has found applications in numerous fields, while also requiring a large amount of data processing and careful selection of lighting sources. With hyperspectral cameras getting smaller, less expensive and featuring integrated data processing, this imaging field is growing.[1][2][3]

HSI is also known as imaging spectroscopy or 3D spectroscopy, as conventional spectroscopy analyses the continuous spectrum of a point source only, which still enables the identification of materials. Conventional imaging using grayscale or RGB (three-color) sensors, on the other hand, is limited in its ability to identify materials but usually offers higher spatial resolution instead.[4][5]

Spectral imaging

Hyper- and multispectral imaging capture a wider region of the electromagnetic spectrum. Their range covers ultraviolet light (UV, wavelength 200–400 nm) to the visible spectrum (VIS, 400–780 nm), near-infrared (NIR, 780–2500 nm) up to mid-infrared (MIR, 2500–25 000 nm), far surpassing the spectral band offered by conventional camera systems.[6]

Spectral imaging systems sample the radiation intensity entering the camera at different wavelengths. The total interval from lowest to highest covered wavelength is the spectral range, usually sectioned into multiple spectral bands of equal bandwidth. Each of these is simplified as having a defined wavelength window to respond to, but due to material characteristics, this response is not uniform. It follows a complex sensitivity distribution which can spread across a wider wavelength range and into adjacent bands[7].

These sampled spectral bands can be separated with gaps or placed directly following each other. The spectral resolution describes the limit of separating adjacent spectral bands[6] and is not directly related to the separation or overlap between them.
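As an illustration of this band-wise sampling, the following Python sketch integrates a toy continuous spectrum against Gaussian band responses. All values (band centres, widths, the spectrum itself) are made up for illustration; as noted above, real response curves are more complex than Gaussians.

```python
import numpy as np

# Toy continuous reflectance spectrum over the visible range (nm).
wavelengths = np.linspace(400, 780, 381)
spectrum = np.exp(-((wavelengths - 550) / 60) ** 2)

def band_response(centre_nm, fwhm_nm):
    """Idealized Gaussian spectral response for one band (illustrative)."""
    sigma = fwhm_nm / 2.355  # convert FWHM to standard deviation
    return np.exp(-0.5 * ((wavelengths - centre_nm) / sigma) ** 2)

# Nine adjacent bands; their Gaussian tails overlap into neighbouring bands.
centres = np.arange(420, 780, 40)
samples = [float((spectrum * band_response(c, 40)).sum()) for c in centres]
# One intensity value per spectral band; the band nearest the spectral
# peak (centre 540 nm) records the highest value.
```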

Definition of different imaging approaches based on spectral resolution. With RGB imaging, the spectral sensitivities for each color are visualized in grey. Source: Lucas van den Bosch, https://commons.wikimedia.org/wiki/File:Spectral_sampling_RGB_multispectral_hyperspectral_imaging.svg, CC BY-SA 4.0

A camera integrates incoming radiation intensity over a specific spectral response curve which covers a limited wavelength range[6]. By themselves, camera sensors can only measure one such intensity channel and naturally produce grayscale images (single-band), barring specialty sensor technologies. Therefore, any spectral discrimination has to take place before the light is captured by the sensor pixels.

With conventional RGB (color) cameras, this is done using a Bayer filter array, where the available pixels are grouped into 2 × 2 clusters, each consisting of two green, one red and one blue spectral filter. Spatial resolution is reduced to obtain spectral resolution and data interpolation reconstructs the missing color values[7]. Alternatively, the native spatial resolution can be kept by separating incoming light into three spectral bands using beam splitters and capturing each with a separate sensor[2].
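The Bayer sampling trade-off can be sketched in a few lines of NumPy. The RGGB layout and the crude reconstruction below are illustrative simplifications; real demosaicing interpolates each missing value from its neighbours.

```python
import numpy as np

# Toy full-colour scene; in a real camera this information is never
# available directly -- each pixel measures only one channel.
rgb = np.random.rand(4, 4, 3)

# Bayer mosaic with an RGGB 2x2 cell: two green, one red, one blue.
mosaic = np.zeros((4, 4))
mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red pixels
mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green pixels
mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green pixels
mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue pixels

# Naive "demosaicing": fill missing red values with the mean of the
# measured red pixels (a stand-in for neighbour interpolation).
red_estimate = np.full((4, 4), mosaic[0::2, 0::2].mean())
red_estimate[0::2, 0::2] = mosaic[0::2, 0::2]  # keep measured values
```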

Multi- and hyperspectral

Multi- and hyperspectral imaging systems are subject to the same design considerations, but capture more than three spectral bands in the final image. Multispectral cameras sample 4 to 12 spectral bands, usually in the visible to infrared range, separated by gaps[5][3][8][29]. They cannot provide a real spectrum in every pixel; the discrete and sparse data is unsuitable for profiling materials[2][5].

Hyperspectral imaging systems capture from tens up to hundreds of spectral bands depending on camera technology, with specialty cases going into the thousands. They usually cover more than one wavelength sub-range, such as VIS and NIR. Together with a high sampling density, this enables creating continuous spectral profiles for characterization and identification, but increases the data processing complexity considerably.[9][5][3][8][10] Further distinctions such as ultraspectral exist, but offer no objective and clear separation from the hyperspectral label.[2]

Acquisition techniques

Multi-/hyperspectral imaging technologies face one fundamental design problem: a 2D sensor needs to capture a 3D data structure containing one spectral and two spatial dimensions. This is solved by either spreading spatial information over time by scanning in one or two directions, or spreading the spectral information into multiple components before it reaches the sensor array. Both require data processing to assemble the spatio-spectral data cube.[11][12] Additionally, there are mixed approaches combining aspects of the two, or indirect capture relying on computational processing to fill in information. Lastly, approaches of higher dimensionality for 3D and video capture are outlined.

Acquisition techniques for multispectral imaging. Spatial axes x and y, spectral axis λ (wavelength). Source: Lucas van den Bosch, https://commons.wikimedia.org/wiki/File:Multispectral_acquisition_techniques.svg, CC BY-SA 4.0

Classical / direct approaches

Point scanning

Point scanning (also known as whiskbroom scanning) captures the full spectrum at a single location and uses either translational or rotational scanning to reconstruct the whole scene one-by-one. This results in a slow operation and requires precise positioning, while allowing no live image to be displayed. Motion artifacts are possible. It does allow a very high spectral resolution for detailed material analysis (hyperspectral), usually in microscopic scanning.[1][12][3][6][11][13]

Line scanning

Line scanning (push broom scanning) captures the scene along one spatial dimension and uses the perpendicular axis for spectral discrimination. The missing second spatial dimension is obtained by translational or rotational scanning, the former usually done by moving the object in front of the camera or moving the camera across the object. Thus, one spatial dimension is spread over time, which can result in motion artifacts. This provides no live image and requires relative movement, but is very fast.[3][14][12][11][6]
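The assembly of a data cube from successive line exposures can be sketched as follows. The `capture_line` function and all dimensions are hypothetical stand-ins for a real sensor readout.

```python
import numpy as np

# Illustrative sizes: 100 scan positions, 640 pixels per line, 50 bands.
n_lines, n_x, n_bands = 100, 640, 50

def capture_line(t):
    """Stand-in for one push-broom readout at scan position t:
    a (spatial x, spectral) frame from the 2D sensor."""
    return np.random.rand(n_x, n_bands)

# Stacking the line frames over time restores the second spatial axis.
cube = np.stack([capture_line(t) for t in range(n_lines)], axis=0)
# cube axes: (y from scanning motion, x, wavelength)
```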

Spectral scanning

Spectral scanning (plane scanning, staring, staredown, sequential 2D imaging, image-frame sensors) captures the whole scene at once in a fixed setup, sampling both spatial dimensions. It limits the recorded spectral band for each capture, spreading the spectral sampling over multiple exposures and assembling the full data structure digitally. Spectral discrimination is performed by electronic or manual filters in front of the image sensor or by using a band-limited illumination source. This approach is not suitable for moving environments and typically has lower spectral resolution than point/line scans but offers high spatial resolution and a live image. An inexpensive solution involves placing different color filters in front of an RGB color camera and reconstructing the spectral information in software[7].[3][14][13]

Spatio-spectral scanning

Spatio-spectral scanning (tilted sampling) captures the whole 2D spatial information of the scene but spectrally discriminates it along one spatial direction simultaneously. Typically this uses a variable filter in front of the image sensor, which can be of higher quality than the filters used in plane scanning approaches. Multiple exposures with a shifted camera, taken at a sufficiently high frame rate, capture the missing spatial information. Data processing reassembles it to obtain the full spatio-spectral data cube but heightens the storage requirements. This approach captures the scene from different positions in space, and thus is commonly used to reconstruct spatial 3D data, covered in a later section.[15][4]

Snapshot imaging

Snapshot (single-shot, multi-point spectrometry) acquisition captures the full spectral and spatial data at once, thus providing the complete data cube without multiple exposures or scanning. This is implemented by a combination of image-division and dispersion (wavelength-discriminating) optical elements in front of the sensor and data processing. This allows for higher frame rates enabling spectral video and typically has a high light throughput. On the other hand, both spatial and spectral resolution are reduced compared to most other techniques. The data acquisition is still not fully instantaneous, as the sensor exposure time and frame readout rate can lead to motion blur and artifacts.[5][16][13][12][15]

Computational / indirect approaches

The previous four acquisition techniques represent the classical approaches to multi-/hyperspectral imaging, using optical elements; all data processing reconstructs the data cube from fully available information. The following techniques deviate from this by either interpolating data or using multiple sensors. They reduce the number of measurements or employ lower-cost solutions, sometimes in a trade-off with increased computational processing requirements.

Characterized RGB cameras (indirect spectral imaging) use readily available, low cost RGB color image sensors in smartphones or photography equipment to reconstruct hyperspectral data. These capture in three wide, overlapping spectral bands in specific spectral response functions and require a spectral and radiometric calibration. Some approaches use multiple cameras, possibly of different manufacturers to extend the spectral range. These solutions provide very high spatial resolution but often limited radiometric resolution, sampling each intensity value at only 8 bit per channel.[7][15][17][18]
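The reconstruction idea can be sketched as a linear least-squares fit from RGB values to spectra. Everything here is synthetic: the response matrix stands in for measured camera response functions, and real methods use radiometrically calibrated training targets rather than random data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_bands = 200, 31          # e.g. 31 bands across the VIS range

# Synthetic training set: known spectra and a stand-in for the camera's
# three spectral response functions (in practice these are measured).
spectra = rng.random((n_train, n_bands))
response = rng.random((n_bands, 3))
rgb = spectra @ response            # simulated camera measurements

# Fit a linear map from 3 channels to n_bands via least squares.
W, *_ = np.linalg.lstsq(rgb, spectra, rcond=None)
reconstructed = rgb @ W             # (n_train, n_bands) spectral estimate
```

Because only three channels are measured, such a rank-3 mapping can never recover arbitrary spectra exactly; published approaches therefore add priors or training-based models on top of this basic idea.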

Instead of using RGB cameras, multi-camera imaging systems can also capture band-limited projections of a scene, e.g. using filters in front of each camera. As the cameras are placed next to each other, their data is also spatially shifted, and data processing assembles both for correct spectral and spatial correspondence. This is usually limited to 4 to 9 spectral bands.[15] When using only one (multi-/hyperspectral) camera and moving it into a different position for each capture for later data reconstruction, this is called step-and-stare[9].

Sparse sampling uses optical fibers to project sections of the scene onto an image sensor and reconstructs the missing data digitally.[19][20]

With compressive sampling, a digital micromirror device (DMD) projects the spatially and/or spectrally discriminated scene onto a linear sensor, saving cost compared to area sensors especially for measuring outside the visible range using exotic sensor materials.[21][22]

Higher dimensionality

For capturing a scene not only in two spatial dimensions but three (3D), while also retaining spectral information, multiple imaging approaches are used. These are sometimes called 4D imaging because of the fourth, spectral dimension. The first group captures only shallow, almost flat structures within the depth of field (DOF) of the optical system. The parallax effect inherent in spatio-spectral scanning can be used; brush strokes on the surface of an oil painting, for example, were digitized using a modified multi-band flatbed scanner. Alternatively, transverse field detection uses specialty sensor technologies.[15][23][12]

For deeper, unrestricted 3D surfaces, spatio-spectral scanning combined with a moving camera can be used. Standard stereoscopic 3D imaging using two cameras next to each other provides much higher spatial resolution, and computer vision algorithms can reconstruct the surface from its data.[15][4][9]

Hyperspectral video is usually implemented using single-shot / snapshot imaging systems capturing at high frame rates[2]. The inverse can also be used, by capturing video with spectral information and reconstructing a missing spatial dimension by a slow camera movement[24].

Lighting

Lighting plays an important role in multi-/hyperspectral imaging as both the spectral characteristics of the used light source and the interaction of the light with the object to be captured are crucial for accurate measurements. Each material shows a different response to light reflection, absorption, scattering and excitation across the wavelength spectrum.[1][13]

Measurement modes

There are three main measurement modes describing the interaction of the lighting with the object:

  • Reflectance measurement illuminates the object from the same side as the camera, usually in a chosen angle or polarization setup to avoid direct/specular reflections, which carry no information about the chemical composition of the sample. Additionally, a diffuser such as an integrating sphere can be involved. This detects external characteristics such as color, size, shape and surface defects, is easy to use and provides a high light intensity.[1][2][25][3]
  • Transmittance measurements place the light source on the opposite side of the sample and camera, shining through the object. While the light level is considerably lower, this light carries information about the internal composition of the sample, including defects and concentration differences.[1][2]
  • Interactance (fluorescence) measurements represent a combination of reflectance and transmittance modes, placing the light source on the same side as the camera. This is done in such a way as to reduce direct reflections as much as possible and analyses liquids, semisolids and solids. The sample reacts to the absorbed light and emits radiation of its own.[1][3][2]

Light sources

Light sources can be broadly categorized into broadband for object illumination and narrowband sources for excitation. For the spectral scanning acquisition technique, they can be spectrally filtered (tuneable light) to achieve a wavelength discrimination as an alternative to filtering inside the camera.[1][14][2]

  • Daylight provides a broad spectrum but requires intensity calibration and changes over time with clouds, time of day and other factors. Halogen lamps are similarly broadband with a smooth, continuous spectrum without peaks up into the NIR range, but suffer from a short lifetime, high heat output and changes in their spectrum with temperature.
  • LEDs (light emitting diodes) typically exhibit a non-uniform spectrum, but recent advancements increase their light quality considerably. They can be used as broadband white light or narrowband colored light, or pulsed for excitation applications. They are low cost, small, provide a fast response and a long lifetime with low heat output and energy consumption.
  • Lasers are a high coherence, high intensity monochromatic light with high directionality, usually employed for excitation applications.
  • Fluorescent lights show a complicated spectral power distribution with peaks; xenon or mercury discharge lamps are also used.

[7][2][13][3]

Technical implementations

The type of acquisition of a multi-/hyperspectral imaging system (point, line, spectral scan or snapshot) results in fundamentally different camera designs to achieve the spectral and spatial discrimination and data capture. The choice of sensor type and acquisition system is highly dependent on the application.[13]

Approaches for multispectral imaging of a plane onto a linear or 2D detector array. Source: Lucas van den Bosch, https://commons.wikimedia.org/wiki/File:Multispectral_imaging_approaches.svg, CC BY-SA 4.0

Point scanning systems

With point scanning, incoming light through a point aperture is projected onto a dispersive element (usually a prism). This spreads the light into its spectral components along one direction, to be captured by a linear detector. Light passes through the same path every time, making calibration easier. Common implementations include interferometer setups and confocal Raman microscopy (CRM). Not shown here is the scanning motion required to capture multiple points, realized with linear motors or galvo mirrors.[14][11][16][1][12]

Line scanning systems

Line scanning systems usually involve a narrow slit aperture and disperse the light with a prism or other optical element into its spectral components while still keeping the spatial sampling along one axis. The dispersed light hits a 2D sensor array, and capturing a complete area requires a scanning motion in a translational or rotational setup. Some applications involve rotating an endoscope head (medical field) or the camera itself to achieve this. The usual scenario is an industrial conveyor belt with a stationary camera, or an airborne system with a camera moving across the scene.[11][16][24][1][12]

Snapshot systems

Snapshot imaging systems exist in a wide variety of technical implementations regarding optical components and data processing. Hagen and Kudenov provide a comprehensive review including the history and detailed technical considerations along with an analysis of the theoretical utilization of the detector arrays[12]. Common implementations include:

  • Integral field techniques split the image into multiple parts, either by a slicer mirror assembly, a fiber bundle or a lenslet array, to be rearranged, dispersed and projected onto the sensor.
  • Multispectral beamsplitting (MSBS) relies on beamsplitting prisms, a filter array, a tilted filter stack or a volume hologram element to produce multiple spatially identical but spectrally different images, captured by separate sensor arrays.
  • Computed tomography imaging spectrometry (CTIS) uses a transmissive disperser element mixing spectral and spatial separation onto the sensor, requiring extensive data processing.
  • Multi-aperture filtered cameras (MAFC) separate incoming light using a lenslet array in combination with spectral filters. Similarly, spectrally resolving detector arrays (SRDA), also called mosaic filter-on-chip cameras, place an array of color filters directly on the sensor pixels to discriminate a larger number of spectral bands than regular, Bayer-filtered RGB cameras, although the principle and lens setup are identical.
  • A tuneable echelle imager (TEI) employs a Fabry-Pérot etalon in the light path, which spectrally samples at periodic intervals by wavelength-dependent transmission. A subsequent diffraction grating spreads these samples apart to be captured by a 2D sensor for digital post-processing.
  • Image-replicating imaging spectrometers (IRIS) use an array of waveplates and Wollaston polarizers to produce distinct, spectrally separated images on a sensor, each of them spatially fully preserved.
  • Coded aperture snapshot spectral imagers (CASSI) use a coded aperture mask in the light path, producing many gaps in the image field. The space saved by the gaps is used to disperse the remaining image areas for digital reconstruction.
  • Image mapping spectrometry (IMS) slices the image into interleaved lines, where each separated light path represents the scene sampled at a regular interval. Prism and lenslet arrays disperse the light onto a sensor for later data reassembly. This involves less computational load and therefore enables real-time operation.
  • Snapshot hyperspectral imaging Fourier transform spectrometers (SHIFT) appear similar to an IRIS, but produce Fourier-transformed sub-images on the sensor. A multispectral Sagnac interferometer (MSI) also produces a Fourier-transformed image, using an optical setup in which two beams run in opposite directions. Both require an inverse Fourier transform for image data recovery.

[12][26][15][27][11]

Optical elements

As outlined previously, multispectral cameras employ a selection of optical elements to project a scene onto a sensor with the needed spectral separation. Many implementations use regular objective lenses, as available for other camera systems, which differ in their focal length (and therefore Field of View), sensor size, magnification ratio, maximum resolution, and wavelength transmission characteristics.[3]

Image mapping

Image mappers redistribute areas of the light path into another angle or position. For this, slicing/faceted mirrors, digital micromirror devices (DMD) or fiber bundles are used.[22][12]

Dispersion and filtering

For dispersing light into different wavelength bands, dispersion elements are used: prisms, diffraction gratings or a prism-grating-prism assembly (PGP). Alternatively, filters only transmit a certain portion of the incoming light. Simple filters can be mounted inside a filter wheel, placed directly on top of individual pixels of a sensor array or elsewhere in the light path, or used to control the spectral characteristics of the light illuminating the scene. Also possible are bandpass beamsplitters. Among the tuneable filters, the most prevalent are acousto-optic tuneable filters (AOTF), enabling very fast switching, and liquid crystal tuneable filters (LCTF), providing better image quality. Another approach are linear variable filters (LVF), also called wedge filters, which show a variable spectral transmission along one axis.[3][6][1][12]

The following characteristics can quantify the quality of an optical multi- or hyperspectral system:

  • light throughput/transmission in %
  • geometric distortion in %
  • spatial resolution in µm/px
  • number of spectral channels
  • spectral resolution in nm
  • spectral accuracy in %
  • spectral bandwidth in nm
  • dispersion in px/nm

Most spectral characteristics are measured by their full width at half of the maximum value (FWHM) of the spectral response curve.[6][22][27][26]
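The FWHM of a measured response curve can be estimated numerically by finding where the curve crosses half of its maximum. The Gaussian curve below is a stand-in for a measured response.

```python
import numpy as np

# Synthetic spectral response curve sampled on a fine wavelength grid.
wavelengths = np.linspace(500, 600, 1001)   # nm, 0.1 nm steps
sigma = 5.0
response = np.exp(-0.5 * ((wavelengths - 550) / sigma) ** 2)

# Wavelengths where the response is at least half of its peak value.
above = wavelengths[response >= response.max() / 2]
fwhm = above[-1] - above[0]                 # width in nm
# For a Gaussian, FWHM = 2*sqrt(2*ln 2)*sigma ≈ 2.355*sigma;
# the grid-based estimate is close up to the sampling step.
```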

Sensor technology

Image sensors (focal plane array: FPA) capture the incoming light and transform the intensity into an electrical signal to be used in data processing[6]. The choice of image sensor depends on a number of factors, the most important being the wavelength range to be captured, as different sensor materials exhibit their own spectral bands they are best suited for. The two most common technologies, CCD (charge-coupled device) and CMOS (complementary metal-oxide-semiconductor), use silicon and are best used in the range of 400 to 1000 nm. For NIR, InGaAs (Indium Gallium Arsenide, 900–1700 nm) or MCT (Mercury Cadmium Telluride, HgCdTe, 900–2500 nm) sensors are preferred.[5][2][1]

Individual pixels exhibit the following technical characteristics:

  • pixel size in µm
  • noise level
  • saturation limit
  • signal-to-noise-ratio (SNR)
  • dynamic range in dB
  • bit depth in bit
  • quantum efficiency
  • linearity
  • wavelength sensitivity

For the whole sensor assembly, the frame readout time in s, frame rate (sampling rate) in Hz and the integration time (exposure time) in s together determine how movement is rendered in the final image.[26][12][27][25] For example, if a low light level necessitates a long exposure time, movement during the exposure results in motion blur. If the frame readout time is considerably long relative to movement across the sensor, rolling shutter artifacts can become visible as the sensor is not read out instantaneously, but sequentially line-by-line. A global shutter solves this.[12][15]

Data processing

Data representation

Image data captured by HSI is stored inside a data cube (hyper cube, data volume, spectral cube/volume), which can be thought of as a collection of grayscale images at different wavelengths or multiple spectra at their respective spatial locations. The data points within are called voxels or vector pixels, each representing the captured radiation intensity at one spatial location in a specific spectral band.
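In software, such a cube is naturally held as a 3D array, and the two views described above correspond to slicing along different axes. The dimensions below are illustrative.

```python
import numpy as np

# Illustrative data cube: 512 x 512 spatial pixels, 31 spectral bands.
cube = np.random.rand(512, 512, 31)

# Slicing along the spectral axis gives one grayscale band image;
# slicing at a spatial location gives the spectrum of that pixel.
band_image = cube[:, :, 10]        # image at a single wavelength band
pixel_spectrum = cube[200, 300, :] # full spectrum at one location
```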

Typical dimensions (expressed in x × y × λ, 2D spatial × 1D spectral) are:

  • 1600 × 1300 × 5 with filtered separate cameras onboard a DJI quadrocopter
  • 285 × 285 × 60 with IMS
  • 512 × 512 × 31 to 1392 × 1040 × 251 with LCTF
  • 512 × 512 × 510 with CRM
  • 1 × 307 × 191 to 1 × 1004 × 826 with line scanning systems, where the scanned spatial axis is not inherently limited by the sensor array.

For scientific applications and professional camera systems, each pixel is sampled at 16 bit resolution, whereas inexpensive RGB cameras usually only provide 8 bit.[3][2][27][14][13][15]

Data storage format

The data storage format depends on the acquisition technique, as each pixel, line or spectrum is written into memory directly after capturing. Point scans produce a band-interleaved-by-pixel (BIP) structure, where each pixel contains all spectral bands before the neighboring pixel is saved. Line scanning is band-interleaved-by-line (BIL): each pixel line contains the full spectrum before the next line is saved. Band-sequential (BSQ) data lends itself to plane scanning methods, where all available spatial information is contained in one image, and the next 2D image covers another spectral band. For snapshot systems, the specific storage format depends on the camera setup and data processing, and after completion the full data cube is saved at once.[5][2][12]
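The three interleave orders differ only in which axis varies fastest in memory, so converting between them is an axis reordering. The sketch below uses simplified (band, line, pixel) array conventions; actual file formats additionally define headers, byte order and data types.

```python
import numpy as np

lines, pixels, bands = 4, 5, 3

# BSQ: one complete image per band, bands stored sequentially.
bsq = np.arange(bands * lines * pixels).reshape(bands, lines, pixels)

# BIL: for each line, all bands of that line before the next line.
bil = bsq.transpose(1, 0, 2)   # (line, band, pixel)

# BIP: for each pixel, all bands of that pixel before the next pixel.
bip = bsq.transpose(1, 2, 0)   # (line, pixel, band)
```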

Pre-processing

Hyperspectral data often necessitates a multi-step data processing chain to obtain useful information from a large number of images, which contain noise and redundancies inherent in imaging systems and the spectral responses of materials. After image acquisition and storage, pre-processing steps are done to calibrate the captured data, remove artifacts and noise and apply geometric and intensity corrections to enhance quality.[28][14]

Spectral unmixing

Afterwards, the main analysis often involves spectral unmixing. The recorded spectra can represent a combination of different materials mixed together, either linearly or non-linearly. Unmixing algorithms separate the spectral responses and their relative intensities, for direct analysis and matching with existing spectral libraries.[6][28][14]
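For the linear case, unmixing can be sketched as solving a least-squares system: the measured spectrum is modelled as a weighted sum of known endmember spectra. The data here is synthetic, and real pipelines typically add non-negativity and sum-to-one constraints on the abundances.

```python
import numpy as np

rng = np.random.default_rng(1)
n_bands = 20

# Three synthetic "pure material" spectra (endmembers), one per column.
endmembers = rng.random((n_bands, 3))
true_abundances = np.array([0.5, 0.3, 0.2])

# Ideal noiseless linear mixture measured at one pixel.
measured = endmembers @ true_abundances

# Least squares recovers the relative contributions of each material.
abundances, *_ = np.linalg.lstsq(endmembers, measured, rcond=None)
print(np.round(abundances, 3))  # → [0.5 0.3 0.2]
```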

Dimensionality reduction

Dimensionality reduction improves models and reduces computational load. Hyperspectral data typically contains only minor differences in spectral responses in adjacent bands, resulting in a high amount of redundancy. Simple algorithms such as skipping bands lead to information loss, whereas more involved processes consider all available bands. This is an essential step before feature extraction to reduce overfitting and collinearity. To obtain the most relevant information and represent it in a lower dimensionality, supervised or unsupervised feature extraction algorithms are used. These often also rely on prior knowledge and statistical techniques.[29][6][14][28]
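A common unsupervised example is principal component analysis via the singular value decomposition. The synthetic data below is constructed so that 100 "bands" derive from only 5 underlying signals, mimicking the spectral redundancy described above.

```python
import numpy as np

rng = np.random.default_rng(2)

# 1000 pixels whose 100 bands are linear mixtures of 5 hidden signals,
# i.e. the bands are highly redundant by construction.
base = rng.random((1000, 5))
mixing = rng.random((5, 100))
pixels = base @ mixing

# PCA: centre the data, take the SVD, keep the top-k components.
centered = pixels - pixels.mean(axis=0)
_, s, vt = np.linalg.svd(centered, full_matrices=False)
k = 5
reduced = centered @ vt[:k].T                # pixels in a 5-D feature space
explained = (s[:k] ** 2).sum() / (s ** 2).sum()
# reduced has shape (1000, 5); explained variance is ≈ 1.0 here,
# since the data truly has only 5 degrees of freedom.
```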

Compression

Due to storage requirements and limited transmission bandwidth in remote sensing applications, data compression is a central part of many hyperspectral processing chains. Lossless compression keeps all data intact, albeit with limited compression performance, whereas lossy compression achieves much higher compression rates by discarding parts of the data. Other approaches include principal component analysis (PCA), multivariate curve resolution (MCR), multivariate image regression and video compression codecs.[30][25][31]
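One simple idea behind lossless spectral compression is band differencing: because adjacent bands are similar, their differences are small values that entropy coders compress well, and the transform inverts exactly. The data below is a synthetic cube made spectrally smooth on purpose; real codecs combine several such techniques.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic 64 x 64 cube with 50 bands; cumulative sums along the
# spectral axis make adjacent bands differ only slightly.
cube = np.cumsum(rng.integers(-2, 3, size=(64, 64, 50)), axis=2)

# Forward transform: keep the first band, store band-to-band deltas.
deltas = np.diff(cube, axis=2)
first_band = cube[:, :, :1]

# Inverse transform: cumulative sums restore every band exactly.
restored = np.concatenate(
    [first_band, first_band + np.cumsum(deltas, axis=2)], axis=2)
assert np.array_equal(restored, cube)  # lossless round trip
```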

Software

There are various software solutions for multi- and hyperspectral image processing and analysis.[1]

Visualization

Visualization techniques can be categorized into image-based for spatial relationships often employing false color, plot-based for spectral characteristics such as scatter plots and principal component analysis (PCA), or feature-based for grouping in 2D or 3D representations of clusters.[14]
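A minimal sketch of image-based false-color visualization: three chosen spectral bands are mapped to the display's R, G and B channels, e.g. placing a NIR band on red to make vegetation stand out. The cube and the band indices below are arbitrary placeholders.

```python
import numpy as np

# Illustrative data cube with 60 spectral bands.
cube = np.random.rand(128, 128, 60)

# Hypothetical band choices, e.g. NIR, red and green bands.
r_band, g_band, b_band = 45, 20, 5

# Map the three selected band images onto the display channels.
false_color = np.stack([cube[:, :, r_band],
                        cube[:, :, g_band],
                        cube[:, :, b_band]], axis=-1)
false_color /= false_color.max()   # normalise into [0, 1] for display
```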

Applications

In additive manufacturing, multispectral cameras are used to implement more accurate thermal imaging or predict part quality using neural networks[32][33]. Recycling is improved by accurate sorting of plastic parts, minerals or glass[34]. The condition and authenticity of artwork can be checked[26].

In the medical and biomedical field, it provides surgical guidance and disease diagnosis, including cancer detection.[6][13][26]

In food processing and research, quality assurance is improved as hyperspectral images can reveal foreign particles, defects and analyze the chemical composition.[7][34][26]

In geological applications, minerals are detected and ground cover homogeneity is measured. In forestry, infected trees are identified, their status is assessed and clearing operations are planned, along with general vegetation identification; environmental parameters can also be monitored. For agriculture, plant health, pest control and ripeness are analyzed, along with crop stress location and crop productivity.[28][7][26][5]

Comparison of single band, multi- and hyperspectral images using an airborne system and the resulting spectral resolution. The different spectral response curves allow identification of land areas. Source: Lucas van den Bosch, https://commons.wikimedia.org/wiki/File:Mono,_Multi_and_Hyperspectral_Cube_and_corresponding_Spectral_Signatures_modified.svg, CC BY-SA 4.0

In land management, land-use changes such as urban growth, settlements and population movements, as well as fire and flood risks and water quality, can be monitored for public safety. The military uses hyperspectral cameras to detect explosives, adversarial targets and land mines.[28][7][26]

Airborne applications include monitoring of oil spills and water quality at sea and along coasts. In the atmosphere, pollutants and changes in environmental conditions are measured. In space, the thermodynamics and kinematics of stars and galaxies, as well as their chemical composition, are analyzed using spectra.[28][26][4][7]

Abbreviations

General

2D Two-dimensional
3D Three-dimensional
FWHM Full Width at Half Maximum
HSI Hyperspectral imaging (system)
IR Infrared
MIR Mid Infrared (2500–25 000 nm)
NIR Near infrared (780–2500 nm)
RGB Red, green, blue
UV Ultraviolet (200–400 nm)
VIS Visible range (400–780 nm)
SNR Signal-to-noise ratio

Technologies and materials

CCD Charge-coupled device
CMOS Complementary metal-oxide-semiconductor
InGaAs Indium Gallium Arsenide
Laser Light amplification by stimulated emission of radiation
LED Light emitting diode
MCT Mercury Cadmium Telluride

Camera systems

CASSI Coded aperture snapshot spectral imager
CRM Confocal Raman microscopy
CTIS Computed tomographic imaging spectrometer
DOF Depth of Field
FOV Field of view
FPA Focal plane array (imaging sensor)
FTIR Fourier transform infrared imaging
IFS-F Integral field spectrometer with fiber arrays
IFS-L Integral field spectrometer with lenslet arrays
IFS-M Integral field spectrometer with faceted mirrors
IMS Image mapping spectrometer
IRIS Image-replicating imaging spectrometer
MAFC Multiaperture filtered camera
MSBS Multispectral beamsplitting
SHIFT Snapshot hyperspectral imaging Fourier transform spectrometer
SRDA Spectrally resolving detector array
TEI Tuneable echelle imager
MSI Multispectral Sagnac interferometer

Optical components

AOTF Acousto-optic tuneable filter
DMD Digital micromirror device
FP Fabry-Pérot interference filter
LCTF Liquid crystal tuneable filter
PGP Prism-grating-prism
LVF Linear variable filter

Data handling

BIL Band-interleaved-by-line
BIP Band-interleaved-by-pixel
BSQ Band-sequential
PCA Principal component analysis
MCR Multivariate curve resolution

Literature

  1. Kamruzzaman M, Sun D-W (2016) Introduction to Hyperspectral Imaging Technology, in: Computer Vision Technology for Food Quality Evaluation, pp. 111–139: Elsevier.
  2. Wu D, Sun D-W (2013) Advanced applications of hyperspectral imaging technology for food quality and safety analysis and assessment: A review — Part I: Fundamentals. Innovative Food Science & Emerging Technologies 19, 1–14. 10.1016/j.ifset.2013.04.014.
  3. Lodhi V, Chakravarty D, Mitra P (2019) Hyperspectral Imaging System: Development Aspects and Recent Trends. Sens Imaging 20(1). 10.1007/s11220-019-0257-8.
  4. Grusche S (2014) Basic slit spectroscope reveals three-dimensional scenes through diagonal slices of hyperspectral cubes. Applied optics 53(20), 4594–4603. 10.1364/AO.53.004594.
  5. Adão T, Hruška J, Pádua L, Bessa J, Peres E, Morais R, Sousa J (2017) Hyperspectral Imaging: A Review on UAV-Based Sensors, Data Processing and Applications for Agriculture and Forestry. Remote Sensing 9(11), 1110. 10.3390/rs9111110.
  6. Lu G, Fei B (2014) Medical hyperspectral imaging: a review. Journal of biomedical optics 19(1), 10901. 10.1117/1.JBO.19.1.010901.
  7. Nieves JL (2020) Hyperspectral Imaging, in: Shamey R (ed.) Encyclopedia of Color Science and Technology, pp. 1–9. Berlin, Heidelberg: Springer.
  8. Shippert P (2003) Introduction to Hyperspectral Image Analysis. Online Journal of Space Communication.
  9. Pust O, Fabricius H (2018) Continuously variable bandpass filters aid optics and HSI. Photonics Spectra 52(6), 51–55.
  10. Kale KV, Solankar MM, Nalawade DB, Dhumal RK, Gite HR (2017) A Research Review on Hyperspectral Data Processing and Analysis Algorithms. Proc. Natl. Acad. Sci., India, Sect. A Phys. Sci. 87(4), 541–555. 10.1007/s40010-017-0433-y.
  11. Willett RM, Duarte MF, Davenport MA, Baraniuk RG (2014) Sparsity and Structure in Hyperspectral Imaging Sensing, Reconstruction, and Target Detection. IEEE Signal Process. Mag. 31(1), 116–126. 10.1109/MSP.2013.2279507.
  12. Hagen N, Kudenov MW (2013) Review of snapshot spectral imaging technologies. Opt. Eng 52(9), 90901. 10.1117/1.OE.52.9.090901.
  13. Halicek M, Fabelo H, Ortega S, Callico GM, Fei B (2019) In-Vivo and Ex-Vivo Tissue Analysis through Hyperspectral Imaging Techniques: Revealing the Invisible Features of Cancer. Cancers 11(6). 10.3390/cancers11060756.
  14. Labitzke B (2013) Visualization and Analysis of Multispectral Image Data. Dissertation, University of Siegen.
  15. Aasen H, Honkavaara E, Lucieer A, Zarco-Tejada P (2018) Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows. Remote Sensing 10(7), 1091. 10.3390/rs10071091.
  16. Wang YW, Reder NP, Kang S, Glaser AK, Liu JTC (2017) Multiplexed Optical Imaging of Tumor-Directed Nanoparticles: A Review of Imaging Systems and Approaches. Nanotheranostics 1(4), 369–388. 10.7150/ntno.21136.
  17. He Q, Wang R (2020) Hyperspectral imaging enabled by an unmodified smartphone for analyzing skin morphological features and monitoring hemodynamics. Biomedical optics express 11(2), 895–910. 10.1364/BOE.378470.
  18. Oh SW, Brown MS, Pollefeys M, Kim SJ (2016) Do It Yourself Hyperspectral Imaging With Everyday Digital Cameras. IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 10.1109/CVPR.2016.270.
  19. Bershady MA (2009) 3D Spectroscopic Instrumentation.
  20. Geiger AC, Ulcickas JR, Liu Y, Witinski MF, Blanchard R, Simpson GJ (2019) Sparse-sampling methods for hyperspectral infrared microscopy, in: Dhar NK, Dutta AK, Babu SR (eds.) Image Sensing Technologies: Materials, Devices, Systems, and Applications VI, p. 46: SPIE.
  21. Fowler J (2014) Compressive pushbroom and whiskbroom sensing. IEEE International Conference on Image Processing.
  22. Bedard N, Hagen N, Gao L, Tkaczyk TS (2012) Image mapping spectrometry: calibration and characterization. Opt. Eng 51(11). 10.1117/1.OE.51.11.111711.
  23. Tominaga S, Nakamoto S, Horiuchi T (2014) Estimation of surface properties for art paintings using a six- band scanner. Journal of the International Colour Association 2014.
  24. Bachmann C, Eon R, Lapszynski C, Badura G, Vodacek A, Hoffman M, Mckeown D, Kremens R, Richardson M, Bauch T, Foote M (2019) A Low-Rate Video Approach to Hyperspectral Imaging of Dynamic Scenes. J. Imaging 5(1), 6. 10.3390/jimaging5010006.
  25. Boldrini B, Kessler W, Rebner K, Kessler RW (2012) Hyperspectral imaging: a review of best practice, performance and pitfalls for in-line and on-line applications. Journal of Near Infrared Spectroscopy 20(5). 10.1255/jnirs.1003.
  26. Pawlowski ME, Dwight JG, Nguyen T-U, Tkaczyk TS (2019) High performance image mapping spectrometer (IMS) for snapshot hyperspectral imaging applications. Optics express 27(2), 1597–1612. 10.1364/OE.27.001597.
  27. Gao L, Kester RT, Hagen N, Tkaczyk TS (2010) Snapshot Image Mapping Spectrometer (IMS) with high sampling density for hyperspectral microscopy. Optics express 18(14), 14330–14344. 10.1364/OE.18.014330.
  28. Camps-Valls G (2014) Hyperspectral Image Processing. València, Spain.
  29. Sowmya V, Soman KP, Hassaballah M (2019) Hyperspectral Image: Fundamentals and Advances, in: Hassaballah M, Hosny KM (eds.) Recent Advances in Computer Vision, pp. 401–424. Cham: Springer International Publishing.
  30. Dua Y, Kumar V, Singh RS (2020) Comprehensive review of hyperspectral image compression algorithms. Opt. Eng 59(09). 10.1117/1.OE.59.9.090902.
  31. Babu KS, Ramachandran V, Thyagharajan KK, Santhosh G (2015) Hyperspectral Image Compression Algorithms—A Review, in: Suresh LP, Dash SS, Panigrahi BK (eds.) Artificial Intelligence and Evolutionary Algorithms in Engineering Systems, pp. 127–138. New Delhi: Springer India.
  32. Murphy RD (2016) A Review of In-situ Temperature Measurements for Additive Manufacturing Technologies.
  33. Gerdes N, Hoff C, Hermsdorf J, Kaierle S, Overmeyer L (2020) Snapshot hyperspectral imaging for quality assurance in Laser Powder Bed Fusion. Procedia CIRP 94, 25–28. 10.1016/j.procir.2020.09.006.
  34. Beyerer J, Puente León F, Frese C (2012) Automatische Sichtprüfung. Berlin, Heidelberg: Springer.