The images we see from space contain colorful arrays of galaxies, nebulas, planetary surfaces, and other mesmerizing anomalies that truly make us wonder what else is out there. Most of these images come from satellites and radio telescopes, and while they always contain an intricate array of colors, how they come about is totally different from conventional photographs. A photograph captures focused light on a light-sensitive surface, such as film or the charge-coupled device in a digital camera, whereas a satellite image is created by combining intensity measurements of light at specific wavelengths, both visible and invisible.
Some of the wavelengths satellites measure are invisible to the human eye, since their instruments collect visual, chemical, and physical information about Earth, all of which is reflected in these images. Remote sensing scientists keep developing new satellite instruments to broaden the range of information they can collect. Some instruments are active, bouncing light or radio waves off Earth and measuring the returned energy; this is how light detection and ranging (LiDAR) and traditional radar work. Most instruments, however, are passive, meaning they record light that the planet’s surface reflects or emits.
The measurements are turned into data-based maps (images) of details ranging from plant growth and cloudiness to temperature. How the measurements are converted also determines whether a natural-color or a false-color image is produced. Images are formed from measurements of light at different wavelengths, and a wave’s length determines how much energy it carries: shorter wavelengths carry more energy, while longer wavelengths carry less.
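For a concrete sense of that relationship, the Planck relation E = hc/λ ties a photon’s energy to its wavelength. The short Python sketch below, with wavelengths chosen purely for illustration, compares a visible-blue photon with a thermal-infrared one:

```python
# Planck relation: a photon's energy is inversely proportional to its
# wavelength, E = h * c / wavelength. Shorter wavelength, more energy.

PLANCK = 6.626e-34      # Planck constant, in joule-seconds
LIGHT_SPEED = 2.998e8   # speed of light, in meters per second

def photon_energy_ev(wavelength_m):
    """Energy of a single photon of the given wavelength, in electronvolts."""
    joules = PLANCK * LIGHT_SPEED / wavelength_m
    return joules / 1.602e-19  # convert joules to electronvolts

print(f"Visible blue (450 nm):    {photon_energy_ev(450e-9):.2f} eV")  # ~2.76 eV
print(f"Thermal infrared (11 um): {photon_energy_ev(11e-6):.3f} eV")   # ~0.113 eV
```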
Since most of the electromagnetic radiation that Earth-observing satellites detect originates from the sun, one of the determinants of whether light is absorbed, emitted, or reflected is the object’s chemical makeup. These characteristic response patterns to different wavelengths are called spectral signatures. Temperature also factors into what satellite instruments measure directly: warmer objects emit more of their energy at shorter wavelengths, while colder objects emit at longer ones. Each wavelength band a sensor tunes into is initially recorded as a grayscale image. The brightest spots in these grayscale images represent the areas that reflect or emit the most light in that band, while the darker spots indicate the least.
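Here is a minimal sketch of that grayscale step, using NumPy and a made-up 2×2 array in place of real sensor data; the function name and values are illustrative only:

```python
import numpy as np

def band_to_grayscale(band):
    """Stretch raw per-pixel intensity measurements to 0-255 grayscale."""
    band = band.astype(np.float64)
    scaled = 255 * (band - band.min()) / (band.max() - band.min())
    return scaled.astype(np.uint8)

# Toy 2x2 "band" standing in for real measurements: the pixel that reflected
# or emitted the most light in this band ends up brightest (255).
raw = np.array([[120.0, 480.0],
                [ 60.0, 240.0]])
print(band_to_grayscale(raw))
# [[ 36 255]
#  [  0 109]]
```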
To turn these grayscale images into color, researchers choose three bands and assign each to red, green, or blue. The three are then combined into a single full-color image, since red, green, and blue light can be mixed to produce most visible colors. A false-color image assigns at least one invisible wavelength to one of these visible colors, which is why the final colors can differ from what you would expect to see with your own eyes.
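A minimal sketch of the compositing step, again with NumPy and invented values: assigning a near-infrared band to the red channel, as in the vegetation combination described next, yields a false-color result.

```python
import numpy as np

def stretch(band):
    """Scale raw band values to 0-255, as in the grayscale sketch above."""
    band = band.astype(np.float64)
    return (255 * (band - band.min()) / (band.max() - band.min())).astype(np.uint8)

# Three toy 2x2 bands with invented values. In a natural-color composite the
# red, green, and blue bands would map to the matching channels; here the
# near-infrared band takes the red channel, making this a false-color image.
nir   = np.array([[900.0, 100.0], [850.0, 120.0]])
red   = np.array([[200.0, 300.0], [180.0, 310.0]])
green = np.array([[250.0, 280.0], [240.0, 290.0]])

# Stack the three chosen bands into (rows, cols, RGB) order.
rgb = np.dstack([stretch(nir), stretch(red), stretch(green)])
print(rgb.shape)  # (2, 2, 3)
```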
NASA normally uses one of four wavelength band combinations, depending on what the image is meant to depict. Changes in plant health, for instance, are shown using NIR (displayed as red), red (green), and green (blue). Floods and recently burned land use SWIR (red), NIR (green), and green (blue). Distinctions among snow, ice, and clouds are drawn using blue (red) along with two different SWIR bands (green and blue), while thermal infrared depicts temperature, usually in shades of gray.
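Written out as a simple lookup table, those four combinations might look like the sketch below; the band labels are generic placeholders, since real pipelines map them to instrument-specific band numbers (Landsat and MODIS, for example, number their bands differently).

```python
# The four combinations above as channel assignments. Band labels are generic
# placeholders, not actual instrument band numbers.
BAND_COMBINATIONS = {
    "plant_health":       {"red": "NIR",  "green": "red",    "blue": "green"},
    "floods_burned_land": {"red": "SWIR", "green": "NIR",    "blue": "green"},
    "snow_ice_clouds":    {"red": "blue", "green": "SWIR-1", "blue": "SWIR-2"},
    "temperature":        "thermal infrared, usually rendered in grayscale",
}
```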
The entire false-color imaging process is far more involved than the “gist” given here, but this should at least give you an appreciation for the process that brings these breathtaking images of outer space to life.