COLOR INFORMATION CORRECTION OF IMAGES USING LIGHTWEIGHT CAMERA SYSTEMS

Abstract: This paper presents a method by which lightweight camera systems can be extended to obtain more accurate color meta-information for still images and for frames taken from streamed videos. This meta-information describes the lighting conditions and the colors of objects in the picture. With more accurate colors in the picture, many typical in situ and post-process visual tasks can be performed with greater reliability. The extension could raise the color identification capability of images taken by low-budget camera systems toward that of measurement devices.


Introduction
In the last few years, the research group at the University of Pécs, Faculty of Engineering and Information Technology has been involved in several projects connected to drones, more specifically quadcopters. These devices are well suited to certain aerial projects, since they are rather easy to control and are able to stand still and hover at any position. One purpose of the work is, of course, to let students implement and test control algorithms in an interesting way [1]. The general aim for these drones is to create fully and semi-autonomous systems which can navigate indoors and outdoors, avoid collisions, and recognize and follow objects. The Parrot AR.Drone 2.0 drones were used because they are lightweight and have a built-in 720p wide-angle camera.

Color information of images
Digital images store color information in a numeric system, where each point (pixel) in the image is represented by three subpixels. These subpixels carry three independent primary colors, which mix additively to reproduce the resultant color. The primary colors are normally (but not necessarily) red (R), green (G) and blue (B). Depending on the bit-width in which one primary color is stored, the primary-color intensities can change in given steps; 8 bits per subpixel is a common setup. Recombining the three subpixel values (3*8 = 24 bits), a pixel can take 2^24 different color values. This is often referred to as true color (16.7 million colors).
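As a quick illustration of this packing (a hypothetical Python sketch, not part of the original work), a 24-bit pixel value can be assembled from and split back into its three 8-bit subpixels:

```python
def pack_rgb(r, g, b):
    """Pack three 8-bit subpixel values into one 24-bit pixel value."""
    return (r << 16) | (g << 8) | b

def unpack_rgb(pixel):
    """Split a 24-bit pixel value back into its R, G, B subpixels."""
    return (pixel >> 16) & 0xFF, (pixel >> 8) & 0xFF, pixel & 0xFF

# 8 bits per subpixel gives 2**24 = 16,777,216 distinct colors ("true color").
assert 2 ** 24 == 16_777_216
print(hex(pack_rgb(255, 0, 0)))  # pure red
```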
The colors of the image are sensed by a Charge Coupled Device (CCD) or Complementary Metal-Oxide Semiconductor (CMOS) sensor. Although these two types of sensors work in different ways, their basis is common: the incoming photons generate a current through the photodiodes, which can be measured and transformed into pixel intensities. Since this method measures intensities only, the image is grayscale at this point. If color filters (e.g. a Bayer filter) are applied, the R, G, B filtered intensities for one (bigger) pixel can be sensed.
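The Bayer-filter step can be sketched in Python as a naive 2x2 binning demosaic (an illustrative simplification with assumed function names; real cameras interpolate the mosaic rather than binning it):

```python
def demosaic_rggb(raw):
    """Naive demosaic of an RGGB Bayer mosaic: each 2x2 block of raw
    intensities (R at top-left, G at top-right and bottom-left, B at
    bottom-right) is binned into one full-color output pixel."""
    h, w = len(raw), len(raw[0])
    out = []
    for y in range(0, h, 2):
        row = []
        for x in range(0, w, 2):
            r = raw[y][x]
            g = (raw[y][x + 1] + raw[y + 1][x]) / 2  # average the two greens
            b = raw[y + 1][x + 1]
            row.append((r, g, b))
        out.append(row)
    return out

# One 2x2 mosaic block becomes a single RGB pixel.
mosaic = [[200, 120],
          [130, 60]]
print(demosaic_rggb(mosaic))  # [[(200, 125.0, 60)]]
```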
The incoming radiation can be direct or reflected from object surfaces. The color of objects depends on the illumination and on the spectral reflectance properties of the surface itself. These steps show, in a nutshell, the sensing part of digital imaging. However, at this point the data is raw and needs some camera-specific processing to become an image. The final steps of this processing are the optional white balance and exposure settings (burned into JPG images; post-processable with RAW formats). After these steps, the camera-specific Red, Green, Blue (RGB) values of the pixels are converted into some general RGB space (sRGB, Adobe RGB). This process is, however, less and less standardized when small, cheap camera systems are used.

White balance of images
Human perception is quite good at compensating for dominant colors in the illumination: the visual system reconfigures itself to be less sensitive to dominant colors and more sensitive to less dominant ones. In other words, objects of the same color will appear the same under different illuminations.
Digital cameras have to use a compensation mechanism as well, to be able to depict the real colors of the captured scene. This is the White Balance (WB) setting of the image. Most cameras have commonly used presets for it, like daylight, cloudy, shaded, incandescent, flash, and so on. A preset has to be applied before taking pictures, and this static correction is burned into the images if the JPG format is used as output. Modern digital cameras have electronics capable of measuring the illumination conditions rather quickly, using clever algorithms based on sensor data (read as a RAW image) just before the real picture is taken. This is called the Automatic White Balance (AWB) setting. These algorithms are quite smart, but they can be fooled as well. Just imagine a painting having only blue shades under normal illumination: how can the algorithm decide whether it is a blue painting under 'white' illumination, or a gray-shaded one under bluish illumination? And what happens in scenes with multiple illuminants [7]? A further point is that the AWB algorithms of digital cameras tend to tune image colors to be pleasant rather than accurate.
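One classic AWB heuristic is the gray-world assumption: scale each channel so that the whole scene averages to neutral gray. The sketch below is only illustrative (the actual in-camera algorithms are proprietary), and it fails in exactly the blue-painting case described above:

```python
def gray_world_awb(pixels):
    """Gray-world automatic white balance over a list of (R, G, B) pixels:
    assume the scene averages to neutral gray and scale each channel so
    its mean matches the overall mean. (Assumes no channel mean is zero.)"""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(means) / 3
    gains = [gray / m for m in means]  # per-channel correction gains
    return [tuple(min(255.0, p[c] * gains[c]) for c in range(3))
            for p in pixels]
```

Applied to a scene with a bluish cast, the blue channel is attenuated and the red and green channels are boosted until their means coincide.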
The color of an ideal blackbody radiator, which does not reflect any radiation, depends only on its temperature. Therefore, its color can be characterized by its temperature. This is called the Color Temperature (CT) and is measured in kelvins (K). The colors of non-blackbody radiators can be described by the temperature of the blackbody radiator with the closest color. This is called the Correlated Color Temperature (CCT). The color balance of an image can be characterized by its CCT as well.
An image with ideal white balance has a neutral white CT (around 4000-5000 K). Below 4000 K colors appear warmer (reddish), and above 5000 K colors appear colder (bluish).
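The CCT of a measured color can be approximated numerically; a widely used formula is McCamy's cubic approximation over CIE 1931 (x, y) chromaticity. The Python sketch below is an illustration, not part of the measurement setup described later:

```python
def mccamy_cct(x, y):
    """Approximate the correlated color temperature (CCT, in kelvin) of a
    CIE 1931 (x, y) chromaticity using McCamy's cubic approximation."""
    n = (x - 0.3320) / (0.1858 - y)
    return 449.0 * n**3 + 3525.0 * n**2 + 6823.3 * n + 5520.33

# D65 daylight white (x=0.3127, y=0.3290) comes out near its nominal 6504 K.
print(round(mccamy_cct(0.3127, 0.3290)))
```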

The motivation -color based projects
As the mentioned drones were partly used for color-based projects, it is very important to measure the actual colors of objects and to be able to monitor and track them based on those colors. It is common that small, built-in camera systems do not offer many settings, and they operate mostly in automatic mode regarding exposure, white balance and other parameters. It is also common that these types of cameras put little, if any, information into the extra metadata field of the pictures, called the Exchangeable Image File Format (EXIF). Video streams, and arbitrary frames taken from them, have no embedded EXIF information either.
G. VÁRADY, Pollack Periodica 14, 2019, 1
One project was to monitor the growth and evolution of a special sedge colony. The task was to fly over the sedge colony and take some pictures. Among other parameters, the task was to monitor the color changes of the sedge leaves during their growth. The growth in focus has a time frame of several weeks, and images should be taken and compared monthly (daily?). The problem was that the illumination conditions changed between these images: clear and overcast skies with different illuminance levels all occurred. The images had to be taken over swampy ground, and walking into the sedge was not a good idea either, because that would have damaged some members of the colony. The first idea of a drone-mounted color reference arose during this project.
Another project is to recognize and follow objects. The recognition methods work on different form and tag detection, but color recognition would also be interesting. Changing illumination occurs even more frequently during indoor use. Flying in front of windows in sunlight, then along a corridor with fluorescent lamps, then Light Emitting Diode (LED) lamps, and then shadow and sunlight again means constantly changing object colors in the images. The AWB settings try to compensate, but there are still big differences in the RGB values of the interesting object colors. The AWB algorithms are also not exactly known for the built-in, small and mostly cheap camera systems. Even the other automatic settings are not stored in the images, so post-process corrections are hard without references. The drone-mounted color reference could solve this problem as well.

Color reference based image corrections
Color reference based image correction is a well-known and widely used process among digital photographers and pre-print experts. The basic method is to use standard color samples as reference points in the pictures. Knowing the color properties of the reference samples, it can be seen how large the differences on these color samples are in the picture. It is even possible to fine-tune the image colors so that the standard color samples get more accurate colors. If these changes are saved as profiles, the profiles can be applied to any other image taken with the very same settings and illumination, to get more realistic colors in the picture.
The first standard color sample set was introduced in 1976 [6] by the MacBeth company, hence the name MacBeth ColorChecker. After several mergers and acquisitions, the name changed to Gretag-MacBeth ColorChecker and finally to X-Rite ColorChecker. The 24 color samples were chosen to contain the most frequently used color tones for portrait imaging (Fig. 1).
The correction procedure is not standardized, but most algorithms build a transform model based on the color differences of the corresponding sample pairs. This transformation is then applied to every pixel of the source image to get the color-corrected image. One result can be seen in Fig. 2, where the upper, shadowy image was corrected and more accurate colors were achieved (look at the rightmost colors).
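One common (though not the only) choice of transform model is a 3x3 matrix fitted by least squares over the sample pairs. The sketch below is an illustrative Python implementation with assumed function names, not the exact algorithm behind Fig. 2:

```python
def solve3(A, b):
    """Solve a 3x3 linear system A x = b by Cramer's rule."""
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(A)
    xs = []
    for col in range(3):
        Ai = [row[:] for row in A]
        for r in range(3):
            Ai[r][col] = b[r]
        xs.append(det(Ai) / d)
    return xs

def fit_color_matrix(measured, reference):
    """Fit a 3x3 matrix M (least squares over the sample pairs) so that
    reference ≈ M @ measured, via the normal equations per output channel."""
    StS = [[sum(s[i] * s[j] for s in measured) for j in range(3)]
           for i in range(3)]
    M = []
    for c in range(3):
        Str = [sum(s[i] * r[c] for s, r in zip(measured, reference))
               for i in range(3)]
        M.append(solve3(StS, Str))
    return M

def apply_matrix(M, rgb):
    """Apply the fitted transform to one pixel."""
    return tuple(sum(M[c][i] * rgb[i] for i in range(3)) for c in range(3))
```

With the matrix fitted on the reference samples, `apply_matrix` is run on every pixel of the source image, which is exactly the "applied to every pixel" step described above.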

The need for an alternative method
According to the previous section, color correction is an easy procedure. However, there are circumstances where no colored reference samples can be placed in the pictures, as already mentioned in section 5. It is often problematic to have a reference sample set at an arbitrary position in the picture, especially when measurements are done with drones. Mounting a full-size ColorChecker onto the drone is not practical, both for balance reasons and because it would occupy too much of the image area. It would also be a problem that the reference samples should be recorded next to the object, under the same illumination.
Another issue is that built-in or mountable cheap drone cameras use automatisms which are not, or not well, documented for AWB and other manipulations of the image data. This means that it cannot really be followed up what happened to the colors in the pictures taken. The solution would again be a reference for that.
To solve this latter problem, an illumination-independent color reference should be used in the images. That could be a self-illuminated source, e.g. one LED or several LEDs. They are rather small and lightweight, with low power consumption, and simple electronics can drive them, so this solution does not noticeably affect the battery time of drones. Since these color samples are self-luminous, their colors are not affected by the illumination. What are they good for then? The change in the camera's automatic settings can be monitored by observing the change in the colors of the LEDs. With this knowledge, it is possible to track the dynamically changing settings across photo shots or video streams.

What are some LED references good for?
To check the concept of using only a few color samples, two setups were prepared using some LEDs in the corners of the images. The breadboard version of the model used only one LED at a time, in the upper right corner. Several images with different LED colors (red, green, blue, white) were taken with the same setup. This actually imitates the same setup as if all four LEDs were in the four corners of the image.
The first series was taken with a Logitech c905 webcam, a low-budget 2 MP camera with a resolution of 1600x1200. The camera was attached to a Personal Computer (PC) and the pictures were taken with the included Logitech software. The software gives only a few rough data about the actual settings, and the images taken have no EXIF data at all. These circumstances are similar to those of cheap, built-in cameras. One shot of the setup can be seen in Fig. 3.
During the first series, images with high and low CCT setups were taken. The software did not give any numeric values for these. All eight images, with low and high CCT settings and with the four LED colors in the upper right corner (one at a time), were taken. The illumination and all other settings of the camera were the same. Since the LEDs positioned close to the lens caused some flare artifacts at first, a simple white sheet was later used as a diffusor just in front of the LEDs.
The question in this first series was whether it is possible to transform the colors of the low color temperature image into the colors of the high color temperature image using only the LEDs as reference. Fig. 4 shows the images with all three LEDs.
As the next step, the changes in the colors of the four LED pairs (low and high temperature images) were examined, and a transformation was derived from these differences. The transformation was defined simply: a global (white) difference was calculated from the R, G, B differences, and these values were used as coefficients for the R, G, B channels of the pixels. See these ratios below (Table I). Using this transformation, all the pixel colors in the high color temperature image were changed, resulting in a corrected image. That image should be close to the low color temperature image. 2x4 pictures were investigated: four were taken with low correlated color temperature settings, four with high CCT settings. The cropped RGB LED image parts (R1, R2, G1, G2, B1, B2) are shown below in Fig. 5. The white LED was not included; the reason is explained later. The R1, G1, B1 image parts are from the low CCT settings, the R2, G2, B2 parts are from the high CCT settings. It can be seen in Fig. 4 and Fig. 5 that the color appearances of the LEDs are different, although the LEDs themselves were the same. The different WB settings changed the weights of the received R, G, B filtered radiation, trying to compensate for the assumed illumination (WB). With low CCT settings, low CCT illumination is assumed, so the algorithm adds more weight to the blue components, which here results in a bluish skew of the colors.
The weighted means of the pixel RGB values were taken from all the cropped LED images. Then the ratios of the RGB components of the low CCT and high CCT LED images were calculated, giving the corresponding k_R, k_G, k_B values. The values are shown in Table I.
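The ratio computation can be sketched as follows (a hypothetical Python fragment; the patch data layout and function names are assumptions, not the code actually used in the experiments):

```python
def patch_mean(patch):
    """Mean R, G, B of a cropped LED patch (a list of (R, G, B) pixels)."""
    n = len(patch)
    return tuple(sum(p[c] for p in patch) / n for c in range(3))

def channel_ratios(low_patch, high_patch):
    """Per-channel ratios k_R, k_G, k_B between the mean colors of one
    LED's low-CCT and high-CCT crops, as listed in Table I."""
    low = patch_mean(low_patch)
    high = patch_mean(high_patch)
    return tuple(l / h for l, h in zip(low, high))
```

Running `channel_ratios` on each of the R, G, B LED pairs yields one (k_R, k_G, k_B) triple per pair.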
The ratios show how the mainly red, green and blue color patches differ. The white LED used was an RGGB LED cluster, and the cluster members did not mix sufficiently. As a result, it also appeared as R, G, G, B patches carrying the same information as the separate RGB LEDs, so no calculation was made with it.
As a fast approximation, the RGB values of the corresponding R1, G1, B1 and R2, G2, B2 LEDs were mixed in a ratio of 1:1:1, creating a virtual white LED from each set. The RGB component ratios of these two virtual LEDs were calculated; the resulting data is shown in Table II. The assumption was that by modifying the pixel colors of the high CCT setting image with these ratios, the colors would get closer to those of the low CCT setting image.
Since the work was done with JPG pictures, the gamma correction of the pictures had to be taken into account. Gamma correction adds a type of nonlinearity to the linear 0-255 scale of RGB values. Since human vision can differentiate smaller steps in darker shades, it is worth using more code values in the lower part of the 0-255 scale and fewer in the higher part. The common gamma value for the sRGB and Adobe RGB systems is 1/2.2.
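For reference, the exact sRGB transfer curve is piecewise (a small linear toe plus a power segment with exponent 2.4) rather than a pure 1/2.2 power law; the two are close in practice. A standard Python sketch of both directions:

```python
def srgb_to_linear(v):
    """Decode an 8-bit sRGB value to linear light (0..1). The piecewise
    sRGB curve approximates a 2.2 gamma power law."""
    c = v / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(l):
    """Encode linear light (0..1) back to an 8-bit sRGB value."""
    c = 12.92 * l if l <= 0.0031308 else 1.055 * l ** (1 / 2.4) - 0.055
    return round(255.0 * c)
```

Decoding to linear light, operating there, and re-encoding is the safe way to apply multiplicative corrections to JPG data.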
After the gamma correction, the ratios in Table II were applied to every pixel of the high CCT setting image, resulting in the modified image shown in Fig. 6. Although the method is rather simple, the modified high CCT image got closer to the low CCT image. To check the changes, the red, green, blue and white blocks of the Lego Duplo color matrix were cropped from all three images (low CCT setting, high CCT setting, modified high CCT setting) and their mean RGB values and differences were compared. The cropped images are shown in Table III, the data in Table IV. As Table III and Table IV show, the modified high CCT colors are closer to the low CCT ones. This shows that the LED-based reference method can correct colors even with a simple algorithm.
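The per-pixel correction step can be sketched as follows (illustrative Python using an assumed simple 2.2 power gamma; `k` holds per-channel ratios such as those in Table II):

```python
def correct_pixel(rgb, k, gamma=2.2):
    """Apply per-channel correction ratios (k_R, k_G, k_B) to one JPG
    pixel: decode the ~2.2 gamma, scale each channel in linear light,
    clip, and re-encode."""
    out = []
    for v, ratio in zip(rgb, k):
        lin = (v / 255.0) ** gamma          # undo gamma encoding
        lin = min(1.0, lin * ratio)         # scale in linear light, clip
        out.append(round(255.0 * lin ** (1.0 / gamma)))
    return tuple(out)

def correct_image(pixels, k):
    """Apply the same ratios to every pixel of the image."""
    return [correct_pixel(p, k) for p in pixels]
```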
The above example reflects conditions similar to a video stream started with a fixed WB setup whose illumination changes during the recording. It is also similar to working with a fixed WB setting while the illumination changes between photo shots.
In the second trial, the question was how useful the AWB function of the c905 camera is. Two images were taken with AWB settings under two different illuminations. One was a simple fluorescent lamp (fl. lamp), typical in offices, with a CCT of 3500 K (natural white). The second was diffuse, indirect daylight (dif. day), which is close to a CCT of 6200 K (shadow). Due to the AWB setting, the camera compensated for the different WB illumination conditions (Fig. 7). The calculation method was the same as in the first series: the LED differences were used to modify the pixels of the image taken under the fluorescent lamp. The red, green, blue and white blocks in the pictures were compared, as shown in Table V. As can be seen, even with AWB settings, the method could correct the colors of three of the four patches.

Conclusion
Based on the above measurements, the method of using on-board mounted LED color references for color correction seems beneficial. There are several use cases where conventional methods (e.g. a ColorChecker) are not feasible. Typically, outdoor color-based tasks with drones need some sort of color correction method, both for in situ tasks and for post-processing the image color data. The LED references are a good basis for that. Since even rather simple calculation methods work and give good results in color correction, further investigation of more sophisticated transformation algorithms is warranted.