Wednesday, August 26, 2009

Activity 14 | Pattern Recognition

Sorting or classifying objects from visual information depends on the features of the objects. The human brain can extract, process, and compare these features with those from previous encounters to identify objects. Computer vision implements the same ability, although not yet at the level of the human brain.

In this activity, we implemented pattern recognition using images of different groups of objects, extracting sets of features that describe each group and can be used to recognize new objects. The groups used in this activity are 10 coins, 10 fasteners, and 10 leaves, shown in Figures 1-3. The features obtained are the area, the perimeter, and the sum of the RGB values of each object image.


Figure 1. Object group 1: coins.


Figure 2. Object group 2: fasteners.


Figure 3. Object group 3: leaves.

The first five objects of each group are used as the training set, from which we derive the basis for pattern recognition. The last five objects of each group are used as the test set, which serves to evaluate the basis features. The same features are obtained for the test set and are used to predict the group to which each test object belongs.

The features were obtained using basic image processing techniques: segmentation by thresholding, followed by counting the pixels of the segmented image to get the area. The perimeter was obtained from the length of the contour of the segmented image, while the RGB sum is simply the sum of the RGB values over the object's pixels. The basis for pattern recognition is the mean feature vector of the five training objects of each group.
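The feature extraction described above can be sketched in Python (the original implementation was likely in Scilab; the helper below, its threshold parameter, and the boundary-pixel perimeter estimate are illustrative assumptions, not the original code):

```python
import numpy as np

def extract_features(rgb, threshold=0.5):
    """Extract (area, perimeter, RGB sum) from an RGB image.

    rgb: H x W x 3 float array in [0, 1]. Assumes the object is
    brighter than the background after graying; a hypothetical sketch.
    """
    gray = rgb.mean(axis=2)
    mask = gray > threshold              # segmentation by thresholding
    area = mask.sum()                    # pixel count of segmented region
    # perimeter proxy: object pixels with fewer than four object
    # 4-neighbors, i.e. pixels on the contour of the segmented region
    padded = np.pad(mask, 1, constant_values=False)
    neighbors = (padded[:-2, 1:-1].astype(int) + padded[2:, 1:-1] +
                 padded[1:-1, :-2] + padded[1:-1, 2:])
    perimeter = np.logical_and(mask, neighbors < 4).sum()
    rgb_sum = rgb[mask].sum()            # sum of RGB values over the object
    return np.array([area, perimeter, rgb_sum])
```

The mean of these feature vectors over the five training objects of a group gives that group's basis vector.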

An object is classified into a group by computing the Euclidean distance of its feature vector from each group's basis (mean) feature vector. The test object is then assigned to the group with the smallest distance. The figures below show the test objects and the Euclidean distances for each feature; the highlighted cells mark the group with the minimum distance, and thus the group into which the test object is classified.
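The minimum-distance rule itself is a one-liner; a minimal sketch (variable names are illustrative, not from the original code):

```python
import numpy as np

def classify(x, group_means):
    """Minimum Euclidean distance classification.

    x: feature vector of a test object.
    group_means: rows are the mean (basis) feature vectors of each group.
    Returns the index of the group whose mean is closest to x.
    """
    distances = np.linalg.norm(group_means - x, axis=1)
    return int(np.argmin(distances))
```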


Figure 4. Euclidean Distances for test objects composed of coins.


Figure 5. Euclidean Distances for test objects composed of fasteners.


Figure 6. Euclidean Distances for test objects composed of leaves.

All in all, the accuracy of the pattern recognition is 100%. I give myself a grade of 10 for this activity for successfully implementing Minimum Euclidean Distance classification of objects.

Thursday, August 6, 2009

Activity 12 | Color Image Segmentation

Isolating a region of interest is a common problem in image processing, and it is often more than a matter of separating the region from the background or applying a threshold, especially when the image contains a great variety of colors. In this activity the region of interest is isolated by segmenting the image using color, in such a way that differing brightness levels are not a problem, by converting the RGB color space of the image to the Normalized Chromaticity Coordinates (NCC) shown in Figure 1.
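The RGB-to-NCC conversion divides each channel by the per-pixel intensity I = R + G + B, so r = R/I and g = G/I (and b = 1 - r - g carries no extra information). A minimal Python sketch, not the original code:

```python
import numpy as np

def to_ncc(rgb):
    """Convert an RGB image (H x W x 3) to normalized chromaticity r, g.

    Dividing by the per-pixel intensity I = R + G + B factors out
    brightness, leaving pure chromaticity.
    """
    I = rgb.sum(axis=2, keepdims=True)
    I[I == 0] = 1                     # avoid division by zero on black pixels
    ncc = rgb / I
    return ncc[..., 0], ncc[..., 1]   # the r and g channels
```

Two pixels of the same color but different brightness map to the same (r, g) point, which is exactly why differing brightness levels stop being a problem.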


Figure 1. Normalized Chromaticity Coordinates

Image segmentation using color can be done in two ways: parametric and non-parametric segmentation. Both start from an image patch taken from the color of the ROI. In the non-parametric approach, the histogram of the patch is used directly: the ROI is isolated by backprojecting the patch histogram onto the image. Figure 2 shows the original image used for this activity, while Figure 3 shows the patches used and the resulting images after applying histogram backprojection using the histogram of each color patch.
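Histogram backprojection can be sketched as follows (an illustrative Python version, not the original code; the bin count is an assumption): build a 2D r-g histogram of the patch, then give each image pixel the histogram value of its own (r, g) bin.

```python
import numpy as np

def backproject(rgb, patch, bins=32):
    """Non-parametric color segmentation by histogram backprojection.

    Pixels whose chromaticity is common in the patch get values near 1;
    pixels with chromaticity absent from the patch get 0.
    """
    def rg(im):
        I = im.sum(axis=2, keepdims=True)
        I[I == 0] = 1
        n = im / I
        return n[..., 0], n[..., 1]

    # 2D r-g histogram of the ROI patch, normalized to [0, 1]
    pr, pg = rg(patch)
    hist, _, _ = np.histogram2d(pr.ravel(), pg.ravel(),
                                bins=bins, range=[[0, 1], [0, 1]])
    hist /= hist.max()

    # backproject: look up each image pixel's (r, g) bin in the histogram
    r, g = rg(rgb)
    ri = np.clip((r * bins).astype(int), 0, bins - 1)
    gi = np.clip((g * bins).astype(int), 0, bins - 1)
    return hist[ri, gi]
```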


Figure 2. Original image. Hibiscus.

Activity 11 | Color Camera Processing

One of the major factors in judging the quality of a digital camera is its ability to capture colors satisfactorily. Most cameras nowadays offer a set of options for the lighting conditions under which digital images are captured. This set of options is often called the White Balance setting, and each option corresponds to a set of white balancing constants appropriate for a given condition.

In this activity, the digital camera of a Sony Ericsson phone was used to take pictures of objects using the different white balance options, which are bulb, cloudy, daylight, and fluorescent. Figure 1 below shows the images captured.





Figure 1. Different white balancing settings for differently colored objects (left) and objects of the same hue (right) like objects that have the color green. The white balancing settings are (top to bottom) bulb, cloudy, daylight, and fluorescent.

From the images obtained, it is observed that the bulb and fluorescent settings produced bluish images, with the bulb images more bluish and the fluorescent images dimmer. The cloudy setting produced slightly yellowish images, while the daylight setting produced brighter colors and less yellowish images than the cloudy setting.

Now we apply two different white balancing algorithms to these images, namely the White Patch Algorithm and the Gray World Algorithm. In the White Patch Algorithm, the RGB channels of the original image are divided by the corresponding RGB values of a white patch in the original image. Figure 2 shows the white patches used for each image in Figure 1, and Figure 3 shows the resulting images after applying the White Patch Algorithm.
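The channel-wise division above can be sketched in Python (an illustrative version, not the original code; the patch is assumed to be a crop of a known-white region):

```python
import numpy as np

def white_patch(rgb, patch):
    """White Patch Algorithm: divide each channel of the image by the
    per-channel mean of a known-white patch, then clip to [0, 1]."""
    balance = patch.reshape(-1, 3).mean(axis=0)   # per-channel white estimate
    balance[balance == 0] = 1e-6                  # guard against division by zero
    return np.clip(rgb / balance, 0.0, 1.0)
```

Note the guard: if a channel of the patch is nearly zero, the division blows up, which is exactly the failure mode discussed below for the 'Bulb' images.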





Figure 2. White patches used for White Patch Algorithm for the respective images in Figure 1.





Figure 3. White Balanced images using White Patch Algorithm.

As observed from the first row of images in Figure 3, the white balancing seems to have gone wrong. Examining the RGB channels (see Figure 4) of the original image behind one of the wrongly white balanced images, we see that there are areas where the R values are very small, and these include the white patch used for the White Patch Algorithm. Balancing the red channel with such a small constant results in very high values for areas with high red values and very low values for areas with very low red values, producing the first row of images in Figure 3. It can also be noted that these images were captured with the white balance setting set to 'Bulb', which may be the source of error in the resulting images.



Figure 4. Respective R, G, and B channels of the original image for the resulting image (first row) in Figure 3.


Now we examine the second white balancing algorithm, the Gray World Algorithm. In this algorithm the balancing constants are the means of the respective RGB channels of the whole image, rather than the values of a white patch. Figure 5 shows the resulting white balanced versions of the original images in Figure 1.
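A minimal sketch of the Gray World Algorithm (illustrative, not the original code): the image is assumed to average out to gray, so the per-channel means of the whole image serve as the balancing constants.

```python
import numpy as np

def gray_world(rgb):
    """Gray World Algorithm: divide each channel by its mean over the
    whole image (assumed to average to gray), then clip to [0, 1]."""
    balance = rgb.reshape(-1, 3).mean(axis=0)    # per-channel image means
    balance[balance == 0] = 1e-6                 # guard against division by zero
    return np.clip(rgb / balance, 0.0, 1.0)
```

When one color dominates the scene, the gray-world assumption fails and that dominant color is washed out, which is the weakness observed in the results below.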





Figure 5. White balanced images using the Gray World Algorithm.


Summary of Images











From the white balanced images obtained, we can see that the White Patch Algorithm gives better results, but it is at a disadvantage when particular areas of the original image have extreme RGB values, whether very low or very high. The Gray World Algorithm did not give good white balancing results because it depends on the most dominant color in the image, although extreme RGB values are not a problem for it.

I give myself a 9 for this activity since I was able to correctly implement the two white balancing algorithms and to identify possible sources of error, although I was not able to correct the unsatisfactory results obtained for the images captured with the 'Bulb' white balance setting.

I thank Kaye Vergel for lending me her Sony Ericsson phone with digital camera, Gilbert Gubatan for the red umbrella, Cherry Palomero for the pink clear folder and yellow hand fan, Luis Buno for the blue logbook, Shamandura Cabato for modeling her green shirt, Winsome Rara for the green eyeglass case, Jica Monsanto for the green box and the white piece of paper, NIP for the green floor paint and green colored chalk, and lastly Miguel Sison for the comments and recommendations on the unsatisfactory results for the white balancing of the images with 'Bulb' white balance setting.