Gottfried Langhans, Wintersemester 2018/19
An infrared camera is used to detect and measure the temperature or temperature differences of objects whose temperature is higher than 0 Kelvin (-273.15 °C). There are several different types of cameras on the market with different properties and areas of application. One criterion for classifying infrared cameras is the spectral range they work in; others are the type of detector or the frame size. This article deals with these different types of detectors, their features and the physics behind the detection method, and closes with some information about adjustments and calibration.
The most important component of an infrared camera is the detector or detector array, which is discussed in the following section. Besides that, the cooling system of quantum-detector infrared cameras is an important component as well. Other components like the optics and the housing are essentially the same as for cameras working in the visible spectral range and are not discussed in more detail in this article.
In general, an infrared camera behaves like a normal camera, just working in a different spectral range. In industrial applications, infrared cameras are used to measure and detect temperatures at one sensor point (pixel) or in a 2D area using an infrared-sensitive pixel array of a specified height and width. Typical array sizes are 512 x 512, 256 x 256, 128 x 128 or other formats like 640 x 512 or even HD resolution. For continuous applications, for example examining objects on a conveyor or inspecting weld seams, line scanners are used. These have only one line of pixels, and the measuring frequency is matched to the speed of the conveyor or the laser so that a continuous picture can be recorded. 2D sensor arrays are called focal plane arrays (FPA) and are placed behind the lenses of the camera in the focal plane, evaluating every pixel at the same time. [1][2]
The infrared spectrum with wavelengths from 0.75 µm to 1000 µm, as shown in Electromagnetic waves, is usually subdivided into three spectral ranges: near infrared (NIR) with wavelengths from 0.75 µm to 1.5 µm, mid-infrared (MIR) with wavelengths from 1.5 µm to 20 µm and far infrared (FIR) with wavelengths from 20 µm to 1000 µm, as shown in the following table.[3]
| Name | Abbreviation | Spectral range in µm |
| --- | --- | --- |
| Near infrared | NIR | 0.75 - 1.5 |
| Mid-infrared | MIR | 1.5 - 20 |
| Far infrared | FIR | 20 - 1000 |
Table 1: Division of the infrared spectrum into three spectral ranges
The temperature and the elemental composition of the atmosphere also influence the measurements. There are two bands of good transmission through the atmosphere in the infrared spectral range, also called atmospheric windows. Infrared cameras are usually designed to work in one of these two bands, because the transmittance in between is poor, as shown in the picture on the right. The short-wave infrared band (SWIR) ranges from 2 µm to 5 µm and the long-wave infrared band (LWIR) ranges from 8 µm to 14 µm. Cameras working in the band between SWIR and LWIR are called MWIR cameras (mid-wave infrared band).[4]
Atmospheric transmittance at wavelengths between 0 and 15 µm. Source: https://commons.wikimedia.org/wiki/File:Atmosfaerisk_spredning.png, retrieved on 07.02.19. License: Not protected by copyright https://creativecommons.org/publicdomain/mark/1.0/deed.en
The field of view (FOV) is a parameter that determines the spatial resolution of a camera for a given distance between the camera and the object. For a given FPA with a horizontal width b_h and a vertical height b_v, the FOV depends on the installed optics of the camera. The FOV is therefore limited by the rays that pass through a (thin) lens and hit the pixels at the edges of the FPA, as illustrated in the picture on the right.
To calculate the FOV, the following formulas for the horizontal FOV (HFOV) and vertical FOV (VFOV) can be used for a camera lens with a focal length f and the FPA dimensions b_h and b_v:
HFOV=2\arctan \frac{b_h}{2f}
VFOV=2\arctan \frac{b_v}{2f}
b_h: Width of the FPA
b_v: Height of the FPA
f: Focal length of the lens
The horizontal length H and the vertical length V of the FOV for a given distance D between the lens and an object can be calculated by the following formulas:
H=2D\tan \frac{HFOV}{2}
V=2D\tan \frac{VFOV}{2}
D: Distance between the lens and an object
Of course the field of view extends with increasing distance D, but as a consequence the spatial resolution decreases.[5]
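To make the relations above concrete, the following short Python sketch evaluates the HFOV/VFOV formulas and the resulting field of view H x V; the numerical values (pixel pitch, array size, focal length, distance) are illustrative assumptions, not data of a specific camera.

```python
import math

def fov_angles(b_h, b_v, f):
    """Horizontal and vertical field of view (radians) for an FPA of
    width b_h and height b_v behind a thin lens of focal length f."""
    hfov = 2 * math.atan(b_h / (2 * f))
    vfov = 2 * math.atan(b_v / (2 * f))
    return hfov, vfov

def fov_extent(hfov, vfov, D):
    """Horizontal length H and vertical length V of the field of view
    at object distance D."""
    H = 2 * D * math.tan(hfov / 2)
    V = 2 * D * math.tan(vfov / 2)
    return H, V

# Illustrative example: 640 x 512 pixels with an assumed 15 µm pitch,
# a 25 mm lens and an object at 2 m distance.
pitch = 15e-6                      # m, assumed pixel pitch
b_h, b_v = 640 * pitch, 512 * pitch
f = 25e-3                          # m, focal length
D = 2.0                            # m, object distance

hfov, vfov = fov_angles(b_h, b_v, f)
H, V = fov_extent(hfov, vfov, D)
print(f"HFOV = {math.degrees(hfov):.1f} deg, VFOV = {math.degrees(vfov):.1f} deg")
print(f"At D = {D} m: H = {H:.3f} m, V = {V:.3f} m, "
      f"footprint per pixel = {H / 640 * 1000:.2f} mm")
```

The footprint H/640 projected onto the object is the spatial resolution mentioned above; doubling D doubles H and V and therefore doubles the footprint per pixel.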
Besides the definition of the FOV as written above, there is also an angular expression of the FOV, which is called the angular field of view (AFOV).
Field of view (red frame) and the important lengths to calculate the FOV
Optical aberrations exist in every optical system and are therefore not infrared-specific. Lenses refract the infrared radiation and spread the object's radiance distribution, which leads to a blurring effect at the detector array; this effect is called aberration.
Assuming an object has such a shape, size and distance from the camera that it would be depicted in exactly one detector pixel, the blurring due to optical aberrations leads to a blurred image of the object at the focal plane. Therefore, neighboring detector pixels are also irradiated by radiation from the object. This leads to wrong measurement results: the neighboring detector pixels record a higher radiation intensity, while the “object pixel” records a lower value.[5]
In a more detailed treatment of optical aberrations, a distinction is made between two classes: monochromatic aberrations (for example defocus, spherical aberration, field curvature and more) and chromatic aberrations; both are covered in more detail in standard literature on optical systems.
To register the radiation, or – as in most cases – the change in radiation, from an object as a thermogram, the detector needs to convert the incident electromagnetic radiation into an electrical signal. This can be realized via two different working principles. Those two different types of detectors are explained in more detail in the following sections.
Thermal detectors measure temperature changes by detecting the change of properties in an electric circuit. As they need neither cooling nor optomechanical scanning, they are generally cheaper than quantum detectors, which are discussed in the following section, and are nowadays widely used for industrial applications. Three different ways to measure these temperature changes are explained here: bolometers use the change of an electrical resistance to detect a temperature difference and are the most commonly produced and used type of thermal detector array; thermopiles convert the temperature difference into a change of voltage; and pyroelectric detectors work with a change of electric charge. [1][6]
A bolometer consists of two identical thermistors, i.e. temperature-sensitive electrical resistors, connected in a bridge circuit. One of them is exposed to the incident radiation, the other is shielded from it. If radiation reaches the detector, the resistance of the exposed thermistor and therefore the voltage in the bridge circuit changes. If the ambient temperature changes, both thermistors change their resistance by the same amount, and the bridge circuit is designed such that no change in voltage occurs.[1]
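As a rough numerical illustration of this principle, the sketch below evaluates a simple Wheatstone bridge in which the exposed thermistor changes its resistance while the shielded one does not; the supply voltage, resistance values and temperature coefficient are assumed, purely illustrative numbers.

```python
def bridge_output(V_s, R_exposed, R_shielded, R_fixed):
    """Output voltage of a simple Wheatstone bridge: one branch contains
    the exposed thermistor, the other the shielded reference thermistor."""
    return V_s * (R_fixed / (R_fixed + R_exposed)
                  - R_fixed / (R_fixed + R_shielded))

# Assumed values: 100 kOhm thermistors, temperature coefficient -2 %/K,
# bridge supply voltage 2 V.
V_s, R0, alpha = 2.0, 100e3, -0.02

dT = 0.05                                # 50 mK heating of the exposed thermistor
R_exposed  = R0 * (1 + alpha * dT)       # exposed thermistor warms up slightly
R_shielded = R0                          # shielded thermistor stays at ambient
print(f"Bridge output: {bridge_output(V_s, R_exposed, R_shielded, R0) * 1e6:.1f} µV")

# An ambient temperature change affects both thermistors equally,
# so the bridge stays balanced and the output remains (ideally) zero.
dT_amb = 1.0
R_both = R0 * (1 + alpha * dT_amb)
print(f"Ambient drift:  {bridge_output(V_s, R_both, R_both, R0) * 1e6:.1f} µV")
```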
A thermopile is a series connection of thermocouples, which exploit the junction of two different metals: if the temperature changes, a voltage is generated at the ends of these metals. A common material combination is antimony and bismuth (Sb-Bi). Each pair of metals of a thermocouple needs to be placed with one end at the absorbing detector chip and the other at a reference plane which is not exposed to the incident radiation. The temperature difference between those two planes produces a measurable voltage.[1]
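A hedged back-of-the-envelope sketch of the resulting signal: the output voltage is roughly the number of thermocouples times the difference of the Seebeck coefficients of the two metals times the temperature difference between absorber and reference plane. The Seebeck coefficients and the number of couples used below are only approximate, assumed values.

```python
def thermopile_voltage(n_couples, seebeck_a, seebeck_b, delta_T):
    """Open-circuit voltage of a thermopile: n thermocouples in series,
    each contributing (S_A - S_B) * delta_T."""
    return n_couples * (seebeck_a - seebeck_b) * delta_T

# Approximate Seebeck coefficients (assumed, order-of-magnitude values):
# antimony ~ +47 µV/K, bismuth ~ -72 µV/K.
S_Sb, S_Bi = 47e-6, -72e-6      # V/K
n = 72                          # assumed number of thermocouples in series
delta_T = 0.01                  # 10 mK between absorber and reference plane

U = thermopile_voltage(n, S_Sb, S_Bi, delta_T)
print(f"Thermopile output: {U * 1e6:.1f} µV")
```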
Pyroelectric detectors use a crystal with a highly asymmetric structure. This asymmetry results in a dipole behavior, so that the crystal is permanently electrically polarized. If the temperature changes, the polarization changes as well. This can be detected by electrodes in the form of a change of the electrical charge.[1]
Quantum detectors, also called photonic detectors, count incident photons by detecting changes in atomic states and in the free-electron density inside a semiconductor caused by the absorbed energy of the photons. If an incident photon with a sufficient amount of energy hits the detector surface, an electron is released and increases the density of free electrons in the detector. This event changes the conductivity of a photoconductive detector and hence the output voltage. The change in conductivity inside photoconductive detectors is measured with electrodes attached to the detector material. If a low-resistance material is used, the series load resistance is large compared to the sample resistance; the detector is then operated in a constant-current circuit and the signal is detected as a change of the voltage generated across the sample. For high-resistance photoconductors, a constant-voltage circuit is preferred and the signal is taken as a change of the current in the bias circuit. Common photonic detectors work slightly differently in that they do not measure a change in conductivity: they use an abrupt p-n junction prepared in the semiconductor, where incident photons create electron-hole pairs which generate a photocurrent. This photocurrent shifts the current-voltage characteristic of the detector, and thus the number of incident photons can be detected. Typical materials used in photonic detectors are Si, InAs, InSb and HgCdTe. [2][6]
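As a simplified illustration of the p-n junction (photovoltaic) mode described above, the sketch below converts an incident radiant power into a photon flux and an ideal photocurrent. The quantum efficiency and the numerical values are assumptions for illustration, and effects such as dark current are ignored.

```python
# Simplified photocurrent estimate for a photodiode-type quantum detector.
h = 6.626e-34      # Planck constant, J s
c = 2.998e8        # speed of light, m/s
q = 1.602e-19      # elementary charge, C

def photocurrent(power_W, wavelength_m, quantum_efficiency):
    """Ideal photocurrent: each absorbed photon of energy h*c/lambda
    contributes one electron-hole pair with probability eta."""
    photon_energy = h * c / wavelength_m
    photon_flux = power_W / photon_energy          # photons per second
    return quantum_efficiency * q * photon_flux    # amperes

# Illustrative example: 1 nW of 10 µm radiation on one pixel, eta = 0.7 (assumed).
I = photocurrent(1e-9, 10e-6, 0.7)
print(f"Photocurrent ≈ {I * 1e9:.2f} nA")
```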
Cooling of quantum detectors
The fact that any body with a temperature higher than 0 Kelvin emits thermal radiation means that a detector at room temperature also emits thermal radiation. As quantum detectors count photons, this self-emitted radiation and the thermally generated charge carriers in the semiconductor would otherwise mask the signal, so it is necessary to cool the detectors down to temperatures between 77 and 200 Kelvin. [1]
Depending on the field of application, different cooling systems are installed in infrared cameras. Cryogenic dewars filled with liquid nitrogen were used in the past in laboratories where liquid nitrogen was available; for most other applications today, pour-filled cryogenic dewars are quite impractical. Other cooling methods are Joule-Thomson coolers, which are able to cool down very fast, Peltier coolers, which are less expensive, and the most commonly used Stirling cycle coolers, which are preferred because they are efficient and able to cool down to 80 K in about 4 minutes.[7]
Both types of detectors have their advantages and disadvantages, as they use different principles to detect incident radiation. Quantum detectors count the incident photons and do not depend on a time-dependent temperature change. Therefore, the response time is much faster and the achievable frame rate is much higher than for thermal detectors. Besides that, a very high detectivity close to the theoretical limit and very good NETD values can be reached with cooled infrared cameras (for the definition of the detectivity and the NETD see the following sections). One disadvantage of a quantum detector is that the cooling needs additional space in the housing, additional power (except for quantum detectors cooled with liquid nitrogen) and in some cases cooling liquids, as described in the previous subsection. Another big disadvantage is the high cost of a quantum detector, which starts at around 60 000 € and goes up to more than 120 000 €. In contrast, good thermal detectors with prices around 10 000 € are much cheaper; the cheapest ones already start at around 300 €. Also, there is no need for cooling, which, besides the extra costs, simplifies the handling and reduces noise. As uncooled infrared cameras have no moving parts, the risk of failure is reduced. Besides the lower achievable frame rate of thermal detectors mentioned above, other disadvantages compared to quantum detectors are a lower detectivity (only about half as good as for quantum detectors), a smaller range of observable temperatures and a restricted life span. In the end it depends strongly on the application which type of infrared camera is the best choice. [2]
Infrared detectors are classified and compared by several specific parameters. A few of them are explained in the following section.
The responsivity is defined as the ratio between the fundamental (most influential) component of the electrical output signal, which is the voltage U or the current I, and the fundamental component of the incident radiation power \Phi. This leads to the following formula for the responsivity:
R_D=\frac{U}{\Phi} \quad \text{or} \quad R_D=\frac{I}{\Phi}
U: Fundamental voltage component of the electrical output signal
I: Fundamental current component of the electrical output signal
\Phi: Fundamental component of the incident radiation power
It provides information on how sensitive the detector is to incident radiation and is usually a function of the bias voltage or current, the operating electrical frequency and the wavelength. The wavelength of the incident radiation has an especially strong influence on the responsivity of quantum detectors. Furthermore, there is a typical dependence between the size of the detector array and its responsivity. [1][7]
The NEP (noise equivalent power) value describes the amount of noise in the thermal image. It is the incident radiation power that is necessary to generate a voltage at the detector output equal to the voltage the noise alone would produce. As the NEP is a power, its unit is the watt, and the smaller the value, the less noise there is in the image. Values down to 10^{-17} W can already be reached.
The detectivity of a detector is the reciprocal of the NEP, and the higher its value, the better the detector. Naturally, the detectivity of cooled detectors is much higher than that of uncooled ones. Both the NEP and the detectivity depend on the detector area. This is important because taking this dependence into account makes comparisons between detectors with different area sizes possible. [1][7]
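To connect these parameters, a minimal sketch: given an (assumed) responsivity measurement and an (assumed) rms noise voltage, the NEP follows as the noise voltage divided by the responsivity, and the detectivity as its reciprocal. All numbers are illustrative.

```python
# Linking responsivity, NEP and detectivity with illustrative numbers.
U_signal = 2.0e-3     # V, fundamental component of the output signal (assumed)
Phi      = 1.0e-7     # W, fundamental component of the incident power (assumed)
U_noise  = 5.0e-8     # V, rms noise voltage at the detector output (assumed)

R_D = U_signal / Phi      # responsivity in V/W
NEP = U_noise / R_D       # radiation power producing a signal equal to the noise
D   = 1.0 / NEP           # detectivity in 1/W

print(f"Responsivity R_D = {R_D:.3g} V/W")
print(f"NEP = {NEP:.3g} W")
print(f"Detectivity D = {D:.3g} 1/W")
```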
The noise equivalent temperature difference (NETD) is the figure of merit for FPAs. It is the minimum temperature difference of a blackbody object in the FOV of an FPA that brings the root mean square (rms) of the detector signal to the same value as the rms of the noise in the signal. It depends on the camera optics, the frequency band, the detector area, the detectivity of the detector and the spectral blackbody emittance. For example, reducing the frequency bandwidth of the camera decreases the NETD value, which corresponds to a better detectivity of the detector. To determine the NETD value experimentally, the camera is pointed at a temperature-stabilized blackbody emitter and the fluctuations of the measured temperature yield the NETD value. Typical NETD values for infrared cameras are below 50 mK for quantum detectors and around 100 mK and above for thermal detectors. [4][5][6][7]
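The experimental procedure can be sketched as follows: the camera stares at a temperature-stabilized blackbody, a stack of frames is recorded, and the temporal fluctuation of each pixel's temperature reading is taken as its noise; the median over all pixels is reported as a simplified NETD estimate. The frame data below are synthetic, and the assumed noise level of 40 mK is only illustrative.

```python
import numpy as np

def estimate_netd(frames):
    """Simplified NETD estimate: temporal standard deviation of each pixel's
    temperature reading while viewing a stabilized blackbody, summarized by
    the median over all pixels."""
    temporal_std = frames.std(axis=0)        # per-pixel fluctuation in K
    return float(np.median(temporal_std))

# Synthetic example: 200 frames of a 128 x 128 FPA viewing a 30 °C blackbody,
# with an assumed temporal noise of 40 mK rms.
rng = np.random.default_rng(0)
frames = 303.15 + rng.normal(0.0, 0.040, size=(200, 128, 128))

print(f"Estimated NETD ≈ {estimate_netd(frames) * 1000:.1f} mK")
```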
The fact that each pixel is evaluated separately leads to a different electrical response signal for each pixel (for example because of manufacturing-related differences), even though exactly the same incident radiation irradiates the entire detector. The sensitivity to radiation of each detector pixel therefore has to be adjusted so that a uniform object in the FOV yields a uniform picture; this adjustment is called nonuniformity correction (NUC). It consists of three steps. First, a signal offset correction for a given object temperature is performed to bring all detector signals into the dynamic range of the detector electronics. After that, the signal slope for different radiant powers of a given object is corrected, and in the final step the offsets of the signals are synchronized again. In practice, the NUC is performed by placing an object with a rough, non-reflecting surface (a body with an emissivity close to 1), for example a piece of paper at a homogeneous temperature, directly in front of the detector so that every pixel should register the same signal. The correction of the offsets and slopes is then performed electronically by the camera software.[5]
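A much simplified version of this procedure is the classical two-point correction: two flat-field recordings of a uniform source at two different radiant powers give a per-pixel gain and offset, which are then applied to every raw frame. The sketch below uses synthetic data and omits the final offset resynchronization step described above, so it is only an illustration of the idea.

```python
import numpy as np

def two_point_nuc(flat_low, flat_high):
    """Per-pixel gain and offset from two flat-field frames of a uniform
    source, mapping every corrected pixel onto the mean response."""
    gain = (flat_high.mean() - flat_low.mean()) / (flat_high - flat_low)
    offset = flat_low.mean() - gain * flat_low
    return gain, offset

def apply_nuc(raw, gain, offset):
    return gain * raw + offset

# Synthetic example: a 64 x 64 FPA with pixel-to-pixel gain and offset spread.
rng = np.random.default_rng(1)
true_gain = 1.0 + 0.05 * rng.standard_normal((64, 64))
true_offset = 100.0 + 5.0 * rng.standard_normal((64, 64))

signal = lambda radiance: true_gain * radiance + true_offset   # detector model
flat_low, flat_high = signal(1000.0), signal(2000.0)           # two uniform scenes

gain, offset = two_point_nuc(flat_low, flat_high)
corrected = apply_nuc(signal(1500.0), gain, offset)
print(f"Residual nonuniformity: {corrected.std():.3f} (raw: {signal(1500.0).std():.3f})")
```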
Almost every detector array contains so-called “bad pixels”, i.e. detector pixels that are not working or cannot be corrected by a NUC. Very good manufacturers state, for example, that their detectors have only 0.01 % bad pixels. For an FPA with 640 x 512 pixels this would be a maximum of 32 bad pixels out of the 327680 pixels. To generate a full-size picture, manufacturers implement a bad pixel replacement (BPR). Usually the bad pixels are corrected by replacing the signal of the bad pixel with the weighted average of the neighboring pixels. Problems arise if bad pixels form clusters, because temperatures measured in this area of the detector array are not accurate. [5]
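The replacement by an average of the neighbors can be sketched with a simple mask-based approach. The neighborhood chosen here is a 3 x 3 window with equal weights over the valid neighbors, which is an assumption about the weighting, and the image data are synthetic.

```python
import numpy as np

def replace_bad_pixels(image, bad_mask):
    """Replace each bad pixel by the mean of its valid neighbors in a
    3 x 3 window (equal weights; clusters of bad pixels remain problematic)."""
    corrected = image.copy()
    rows, cols = np.nonzero(bad_mask)
    for r, c in zip(rows, cols):
        r0, r1 = max(r - 1, 0), min(r + 2, image.shape[0])
        c0, c1 = max(c - 1, 0), min(c + 2, image.shape[1])
        window = image[r0:r1, c0:c1]
        valid = ~bad_mask[r0:r1, c0:c1]
        if valid.any():
            corrected[r, c] = window[valid].mean()
    return corrected

# Synthetic example: a nearly uniform scene with a handful of dead pixels.
rng = np.random.default_rng(2)
image = 300.0 + rng.normal(0.0, 0.05, size=(64, 64))
bad_mask = np.zeros_like(image, dtype=bool)
bad_mask[rng.integers(0, 64, 5), rng.integers(0, 64, 5)] = True
image[bad_mask] = 0.0                       # dead pixels read zero

fixed = replace_bad_pixels(image, bad_mask)
print(f"Bad pixel values after replacement: {fixed[bad_mask]}")
```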
As mentioned in Infrared Thermography, each body with a temperature higher than 0 Kelvin emits radiation and has a material- and surface-dependent emissivity with a value between 0 and 1. Only a theoretical perfect black body has an emissivity of 1. To calibrate an infrared camera for measuring absolute temperatures, the typical standard is to use an almost perfect black body. This can be a heat-pipe cavity type black body, which can reach an emissivity higher than 0.9996, or another cavity-shaped black body design with a very high emissivity. This black body is heated to a known temperature and the camera is pointed with its FOV at the black body in such a way that every pixel is exposed to the same radiation. Then the output temperature signal is set to the temperature of the black body and the camera is calibrated. To measure the real temperature of an object, its emissivity (which is in general temperature-dependent) has to be known. This emissivity-temperature curve can then be entered into the camera software to measure the absolute temperatures of the object. [5]
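As a final, simplified illustration of how the emissivity enters the absolute temperature measurement, the sketch below uses a broadband (Stefan-Boltzmann-like) approximation: the camera's apparent blackbody temperature is converted into the true object temperature using the object emissivity and the reflected ambient temperature. Real camera software works with band-limited radiance and the stored calibration curve, so this is only a rough sketch with assumed values.

```python
def true_temperature(T_apparent_K, emissivity, T_ambient_K):
    """Broadband approximation: the measured (apparent blackbody) radiance is
    eps * L(T_obj) + (1 - eps) * L(T_amb), with L ~ T^4 (Stefan-Boltzmann)."""
    T4 = (T_apparent_K**4 - (1.0 - emissivity) * T_ambient_K**4) / emissivity
    return T4 ** 0.25

# Illustrative example: apparent reading 52 °C, assumed emissivity 0.85,
# ambient (reflected) temperature 20 °C.
T_obj = true_temperature(52 + 273.15, 0.85, 20 + 273.15)
print(f"True object temperature ≈ {T_obj - 273.15:.1f} °C")
```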