Hyperspectral Imagery


03/10/2016

This page is a brief introduction to hyperspectral imagery and my current graduate school research. It’s specifically written with a general audience in mind, so I hope you find it useful and informative, and maybe even interesting!

So, what is a hyperspectral image?

Most of us are familiar with the idea of an image being composed of pixels (picture elements). Each of these pixels has its own color, determined by the spectrum of the material present within that pixel. Spectrum is really just a fancy word for rainbow, and every material has a unique spectrum associated with it, just like all humans have unique fingerprints. The spectrum records how much light the material reflects at each wavelength of the electromagnetic spectrum (Figure 1).


Figure 1: Sample spectra of dry and healthy grass through the visible and infrared portions of the electromagnetic spectrum. (Source: Dr. John Kerekes, RIT)

In Figure 1 we see the spectra for both dry and healthy grass. Their reflectances vary drastically from ~4-6 microns. If we could capture this information in an image, then it would theoretically be possible to determine where dry and healthy grass are located. If the image was taken from an airborne sensor and showed a wide swath of land, this could be enormously helpful for farmers in monitoring crop health. Remember, these differences are not just present for dry and healthy grass. Every material has a unique spectrum.

Now let’s take a step back from thinking about the full spectrum of materials. The pictures we take with our phones and consumer cameras are composed of three components: a red, green, and blue layer, captured through the use of different filters. These three layers combine to give us the full-color images we are used to seeing (Figure 2).


Figure 2: Red, green, and blue layers combine to form a true-color image.

Now, in Figure 2, and in the information our cameras collect, we don’t get the material reflectance at every single wavelength. We get the reflectance of all the red wavelengths combined, all the green wavelengths combined, and all the blue wavelengths combined, which join to form a true-color image. So while we can make pretty pictures out of these three bands, they don’t tell us much about the materials in the scene.

This is where hyperspectral imaging comes in. Instead of broadband filters (all red, all green, all blue), we use a different instrument called a spectrograph to collect the full spectrum of every pixel in an image (Figure 3). The prefix hyper typically means we are collecting reflectance values at hundreds of narrow, contiguous wavelengths; multi means tens of broader bands, and filters come back into play.


Figure 3: Hyperspectral image cube

In Figure 3 we can start to visualize some of this data in three dimensions. The top of the cube in Figure 3 shows a true-color image formed by three combined wavelengths of the collected information (red, green, and blue). This gives us an idea of the scene we are looking at, the intersection of a grassy park and a large lake. The columns down the side of the cube represent the spectrum associated with each pixel. So if you removed a slice from the middle of this image stack (like a Jenga block), you would be looking at the image represented by one wavelength. Since you only have one wavelength (one color) of information, the image will only vary in intensity. Information about each pixel is represented by its brightness. For instance, if we go back to look at Figure 1 and imagine a hyperspectral image of a farm, we could extract the wavelength layer corresponding to 5 microns. In the resulting reflectance image, dry grass would appear extremely bright (close to white), and healthy grass would be very dark (closer to black).
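If you like code, here is a tiny NumPy sketch of that cube structure. The array sizes and values are made up (a real cube would be loaded from a data file), but the indexing is the same: one axis for rows, one for columns, and one for wavelength.

```python
import numpy as np

# A hyperspectral cube is just a 3-D array: rows x columns x wavelengths.
# The numbers here are placeholders standing in for a real dataset.
rows, cols, bands = 200, 200, 224
cube = np.random.rand(rows, cols, bands)

# The spectrum of a single pixel: one reflectance value per wavelength.
pixel_spectrum = cube[50, 75, :]      # shape (224,)

# A single "slice" of the cube: the grayscale image at one wavelength,
# where each pixel's brightness is its reflectance at that wavelength.
band_image = cube[:, :, 120]          # shape (200, 200)

print(pixel_spectrum.shape, band_image.shape)
```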

Looking back at Figure 3, you’ll notice that a couple of the wavelength layers are almost black. These layers may correspond to bad bands (wavelengths) that are missing due to a malfunction in the instrument or, more commonly, because of atmospheric effects between the sensor and the ground. At certain wavelengths, the atmosphere is totally opaque, and no useful information about the ground materials reaches the sensor. Every little bit of useless information we can remove helps, because these datasets are absolutely massive: the images I worked with for this project had over ten million values.
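Dropping those useless layers amounts to masking out the affected bands. In this sketch the bad-band indices are invented for illustration; in practice they come from the sensor documentation or from inspecting the data.

```python
import numpy as np

# Toy cube: 200 x 200 pixels, 224 bands (see the previous sketch).
cube = np.random.rand(200, 200, 224)

# Suppose bands 104-113 and 148-167 fall inside opaque atmospheric
# absorption regions (these indices are made up for illustration).
bad_bands = list(range(104, 114)) + list(range(148, 168))
keep = np.ones(cube.shape[2], dtype=bool)
keep[bad_bands] = False

# Keep only the good bands.
clean_cube = cube[:, :, keep]
print(cube.shape, "->", clean_cube.shape)
```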

 

What is hyperspectral imagery good for?

Continuing with the comparison of dry and healthy grass, this is an example of classification. We can write computer code that will inspect the spectral information of each pixel and assign it to a class. Now, the computer isn’t that smart; the classes it produces are technically just clusters of similar pixels. It’s up to the image analyst to go back and assign labels to each class (i.e. grass, trees, dirt, roads, water, etc.). See Figure 4 for an example and more details.


Figure 4: A hyperspectral image classified into 6 pixel clusters. On the left is the original hyperspectral image shown in its true-color, three-band representation. The user predefines regions of similar material (middle). For instance, in the middle image I selected water as red, pavement as green, dirt as blue, and the last three colors as different types of vegetation (healthy grass, dry grass, and trees). Once these classes have been indicated, we can program a computer to determine the average spectrum of each material and compare every other pixel to those averages. Through an iterative process, the program assigns each pixel to its nearest class (right). Although this requires some user input, the computer does the heavy lifting, so the process is considerably faster than labeling pixels by hand.
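For the curious, here is a minimal, single-pass sketch of the nearest-mean idea described in the caption (the real process repeats the assign-and-update steps; the names, shapes, and data below are just placeholders):

```python
import numpy as np

def nearest_mean_classify(cube, training_masks):
    """Assign each pixel to the class whose mean training spectrum is closest.

    cube           : (rows, cols, bands) reflectance array
    training_masks : dict mapping class name -> boolean (rows, cols) mask of
                     the user-selected example pixels for that class
    """
    rows, cols, bands = cube.shape
    pixels = cube.reshape(-1, bands)

    names = list(training_masks)
    # Mean spectrum of each class, computed from its training pixels.
    means = [cube[training_masks[name]].mean(axis=0) for name in names]

    # Euclidean distance from every pixel to every class mean spectrum.
    dists = np.stack([np.linalg.norm(pixels - m, axis=1) for m in means], axis=1)
    labels = dists.argmin(axis=1).reshape(rows, cols)   # index into names
    return names, labels

# Toy usage: a random "cube" and two hand-drawn training regions.
cube = np.random.rand(100, 100, 50)
masks = {"water": np.zeros((100, 100), bool), "grass": np.zeros((100, 100), bool)}
masks["water"][:10, :10] = True
masks["grass"][-10:, -10:] = True
names, labels = nearest_mean_classify(cube, masks)
```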

Another possibility with hyperspectral imagery is anomaly detection (the current focus of my work). This resembles the classification problem, except that there are only two classes: “normal” pixels, and “anomalous” pixels. Basically, you can compute the average spectrum of the entire image, and compare each pixel spectrum to that average. Large differences between the image pixel and the average spectrum indicate that the pixel is likely anomalous.
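That simplest version, scoring each pixel by its distance from the image-wide mean spectrum, fits in a few lines of NumPy. (Practical detectors such as the RX algorithm also account for the image covariance, but the basic idea is the same; the data below is a stand-in.)

```python
import numpy as np

def anomaly_scores(cube):
    """Score each pixel by its Euclidean distance from the image-wide
    mean spectrum; larger scores suggest more anomalous pixels."""
    rows, cols, bands = cube.shape
    pixels = cube.reshape(-1, bands)
    mean_spectrum = pixels.mean(axis=0)
    scores = np.linalg.norm(pixels - mean_spectrum, axis=1)
    return scores.reshape(rows, cols)

# Example: flag the 1% of pixels farthest from the mean as anomalies.
cube = np.random.rand(100, 100, 50)
scores = anomaly_scores(cube)
anomaly_mask = scores > np.percentile(scores, 99)
```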

A classic example of anomaly detection is an image of pure vegetation with camouflage tents and tarps mixed in. As with Figure 2, an RGB image wouldn’t provide much information, and the tarps would easily blend into the vegetation. However, the spectra of plastic and of vegetation differ significantly, so this becomes an easy detection problem. An example of anomaly detection is shown in Figure 5.


Figure 5: A subset of a hyperspectral urban image (left). In the most basic anomaly detection algorithm, the average spectrum of the original image is computed, and every pixel is compared to it. Larger differences between a pixel and the average spectrum indicate a possible anomaly. The middle image shows a score map corresponding to the image on the left, where darker pixels indicate a greater distance from the average image spectrum (more anomalous). Lighter pixels are similar to the expected mean of the image. Notice that the wall in the upper right of the image is highlighted as an anomaly. On the right is a histogram of the pixel distances, or scores, from the average. Higher scores (the far right end of the distribution) correspond to more anomalous (dark) pixels.

Anomaly detection is the case where we don’t have any information about what we’re looking for; we just want to find things in the image that aren’t quite right. This leads us to consider what exactly counts as an anomaly. For instance, picture an image of a lake with a small wooden dock sticking out over the water. Humans would see this and think “okay, that makes sense,” whereas a computer sees it and returns “ANOMALY ANOMALY!!” The task of giving computers a human context to work with is enormously difficult.

The next step after anomaly detection is target detection. In this case, we know what we are looking for, or what material we are interested in. Now, instead of comparing each image pixel to the average image spectrum, we basically compare each image pixel to the known spectrum of the material of interest. The material spectrum is normally measured in a lab, which means atmospheric effects are not accounted for and have to be corrected before the comparison. As an example, consider an image of the ocean in contested territory. It would be of interest to monitor any incoming or outgoing vessels, and the spectrum of a ship’s materials would differ significantly from that of the ocean. Since in this case we know exactly what we are looking for, the problem becomes one of target detection. See Figure 6 for an example of this work.
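One common way to score that pixel-versus-target comparison is the spectral angle: treat each pixel spectrum and the target spectrum as vectors and measure the angle between them. This sketch is not the class-conditioned method used for Figure 6, just the basic comparison, with random data standing in for a real cube and a lab-measured spectrum.

```python
import numpy as np

def spectral_angle_map(cube, target_spectrum):
    """Spectral angle between each pixel and a known target spectrum.
    Smaller angles mean the pixel looks more like the target."""
    rows, cols, bands = cube.shape
    pixels = cube.reshape(-1, bands)
    dot = pixels @ target_spectrum
    norms = np.linalg.norm(pixels, axis=1) * np.linalg.norm(target_spectrum)
    angles = np.arccos(np.clip(dot / norms, -1.0, 1.0))
    return angles.reshape(rows, cols)

# Example: keep pixels within a small angle of the target spectrum.
cube = np.random.rand(100, 100, 50)
target = np.random.rand(50)           # stand-in for a lab-measured spectrum
angles = spectral_angle_map(cube, target)
detections = angles < 0.1             # threshold chosen for illustration
```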


Figure 6: This example of target detection is a bit more complex because it’s one of subpixel detection, meaning the pixels are not entirely composed of a single material. For instance, in this image (left), there are small, green-painted wooden blocks scattered through the grass (yellow ellipse), but a single pixel might include grass and a wooden block. So the pixel spectrum won’t exactly match our target spectrum. To improve my results, I classified the image first, and then used those statistics for the target detection. Remember, the basic idea is comparing our known target signature with every pixel in the image. What I’ve left out is an extra parameter used to describe the image. By defining this parameter based on pixel class, I can increase the detection rate. The 5-class breakdown of my image is shown in the middle, and the results are shown on the right. Brighter pixels indicate a higher likelihood of target detection, and we can see the targets appear exactly where we expected.

The possibilities are endless, but to wrap this up I’ll mention one more application of hyperspectral imaging: change detection. This requires multiple hyperspectral images of the same area, but by comparing the spectra of pixels from one day to another, it becomes easy to highlight where changes are occurring. This could be breaking ground for a new building, forests or crops suffering from a drought, or monitoring deforestation, among many other things. In fact, the previous example of ships in contested waters also applies to change detection.
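In its simplest form, change detection is just a per-pixel spectral difference between two co-registered images of the same scene (assuming the two acquisitions are calibrated consistently; the data here is again a toy stand-in).

```python
import numpy as np

def change_map(cube_day1, cube_day2):
    """Per-pixel magnitude of spectral change between two co-registered
    cubes of the same scene; larger values suggest the surface changed."""
    diff = cube_day1 - cube_day2
    return np.linalg.norm(diff, axis=2)   # (rows, cols) change magnitude

# Example with toy data standing in for two acquisition dates.
day1 = np.random.rand(100, 100, 50)
day2 = np.random.rand(100, 100, 50)
changes = change_map(day1, day2)
changed_pixels = changes > np.percentile(changes, 95)
```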

Of course, the above examples are all done with very basic data used for in-class examples and practice. It’s up to you to use your imagination and figure out everything else that could be accomplished with these techniques. Figure 7 shows one practical example implemented by the Rochester Institute of Technology’s Center for Imaging Science.


Figure 7: Following the 2010 earthquake in Haiti, RIT became part of a disaster response team organized to assist in the relief efforts on Haitian soil. One important information element for the relief effort was the location of Internally Displaced Persons (IDPs), necessary because damaged structures forced a large number of residents to flee their homes. Questions arose as to the location and dynamics of refugee camps, e.g., assessing the growth of informal settlements and the routes to reach these locations. Using the knowledge that IDPs typically set up camp with blue tarps, a target detection algorithm was employed to locate blue tarps in aerial, multispectral imagery taken over Haiti. The results were output as a Google Earth file that attached each tarp detection to geospatial coordinates, allowing emergency response teams to quickly determine where IDPs were sheltered and the best way to reach them. (RIT source paper)

 

When and where is hyperspectral imagery acquired?

I had to continue with the who/what/when/where/why theme, at least somewhat.

Hyperspectral imagery can be acquired in many ways. For remote sensing applications, the two most common platforms are airborne vehicles, which fly at low altitudes and provide high ground resolution, and satellites, which operate above the Earth’s atmosphere (although almost all imaging satellites are multispectral).


Figure 8: Artist’s rendering of NASA’s Earth Observing-1 satellite, which carried the Hyperion hyperspectral imager. (Wikimedia Commons)

While airborne vehicles can gather higher-resolution imagery over specific locations, they require a lot more time to plan flight paths and schedule flyovers. Satellite imagery, on the other hand, comes on a regular basis. Unfortunately, satellite repeat visits take too long for many new applications and ideas. Even for applications that would only require satellite imagery once a week, pesky clouds have a habit of disrupting observations, especially in Rochester!

 

Finally, why am I studying hyperspectral imagery?

We’ve already covered this somewhat. One reason is that multispectral or RGB images just don’t provide enough information to make useful observations. We need continuous spectra covering hundreds of wavelengths, but humans simply can’t compare that much information. Remember the camouflage tarp vs. grass/trees example. We would fail miserably trying to separate out tarps from vegetation if we just had an RGB image. Computers do the hard work for us, comparing hundreds of values against hundreds of other values to find similarities and differences, which we can then convert back into an easy-to-comprehend format.

Second, even if humans were capable of this work, the huge factor is time. We can acquire enormous datasets, and there simply are not enough people to pore over all the data. Especially in fields where information is needed fast, using computers to analyze hyperspectral imagery is imperative.

So for me, hyperspectral imagery is attractive because I get to work with computers and write code, I get to automate tasks which will save time, and I can see the impacts of my work in real life.

 

If you’re still reading, WOW, I’m impressed. I find this stuff exciting and fascinating, but it’s hard to tell how other people would feel about it when I live in a little academic bubble. I hope this was informative and you learned a lot; feel free to leave a comment with any questions or suggestions!