Introduction

This package contains a set of tools to classify the pixels of digital images into colour categories arbitrarily defined by the user. It contains functions to import the images into R, visualize the distribution of their pixels in colour space, define classification rules, classify the pixels, and export the results.

It is a simple version of the multivariate technique known as Support Vector Machine (Cortes and Vapnik, 1995; Bennett and Campbell, 2000), adapted to this particular use.

The procedure

The basic steps of the procedure are the following:

  • One or more digital images in JPEG or TIFF format are imported into R. The categories to identify must be represented in this set of images (the test set).
  • The values of the three colour variables (or bands) that compose each image (R, G and B) are transformed into proportions (r, g and b).
  • The pixels of the image are plotted in the plane defined by two of the transformed variables (the user can select them arbitrarily) and, hopefully, they will form separate clusters (pixel categories).
  • The user then traces straight lines that separate the pixel clusters. Using the mathematical expressions of these lines (the classification rules) and the rgb values of the pixels, each pixel can be tested for membership in each category (see below).
  • Recording the results of the tests as 1 or 0 (pass/fail), an incidence matrix is built for that rule. This is the result of the procedure, which can be submitted to further analysis or used to create a new version of the original image showing the category of each pixel.

The second step simplifies the problem because it makes one of the variables dependent on the other two (as r + g + b = 1). Moreover, the transformation eliminates colour variations due to differences in illumination.
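
For example, the following toy calculation (plain R, not package code) shows that two pixels differing only in illumination map to the same rgb proportions:

bright <- c(R = 100, G = 150, B = 50)
dark <- bright / 2                 # the same colour under half the illumination
round(bright / sum(bright), 3)
#>     R     G     B
#> 0.333 0.500 0.167
round(dark / sum(dark), 3)
#>     R     G     B
#> 0.333 0.500 0.167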

The expressions for classification rules are the same as the expression for a straight line, but using one of the comparison operators \(<\), \(\leq\), \(>\) or \(\geq\). For example: \(r \geq a g + c\), where \(a\) and \(c\) are the slope and intercept of the line, and \(r\) and \(g\) are the colour variables selected for the classification. A single line can produce two classification rules.
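
For illustration, such a rule can be evaluated directly with a vectorised comparison in R (a toy example with made-up values, not package code):

a <- -1.5; c0 <- 0.8           # slope and intercept of an illustrative line
r <- c(0.40, 0.10, 0.35)       # r values of three pixels
g <- c(0.30, 0.45, 0.30)       # g values of the same pixels
as.integer(r >= a * g + c0)    # incidence vector: 1 = pass, 0 = fail
#> [1] 1 0 1

The third pixel lies exactly on the line and passes because the operator is \(\geq\).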

Using several rules per category

When there are more than two categories, or when the cluster of points has a complex shape, a single rule is not enough. In these cases the procedure has additional steps:

  • several rules are defined for each category,
  • incidence matrices are created for each rule,
  • the incidence matrices are combined with the & operator to obtain the category incidence matrix.

The last step is equivalent to computing the intersection of the incidence matrices, i.e. \(\mathbf{M} = \mathbf{M}_{1} \cap \mathbf{M}_{2} \cap \ldots \cap \mathbf{M}_{p}\), where \(p\) is the number of rules.
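
As a toy illustration (plain logical matrices, not the package's internal objects):

M1 <- matrix(c(TRUE, TRUE, FALSE, TRUE), nrow = 2)    # incidence matrix of rule 1
M2 <- matrix(c(TRUE, FALSE, FALSE, TRUE), nrow = 2)   # incidence matrix of rule 2
M1 & M2   # category incidence matrix: pixels that pass both rules
#>       [,1]  [,2]
#> [1,]  TRUE FALSE
#> [2,] FALSE  TRUE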

Concave category shapes

A limitation of the method is that the rules must delimit a convex polygon for the individual rule results to combine correctly (in a convex polygon, the line joining any two internal points is contained in the polygon). Not all clusters have a convex shape. In these cases, the cluster must be divided into convex sub-polygons (subcategories), for which rules are defined as before. The incidence matrices of the subcategories are combined using the | operator, i.e. \(\mathbf{M} = \mathbf{M}_{1} \cup \mathbf{M}_{2} \cup \ldots \cup \mathbf{M}_{s}\), where \(s\) is the number of subcategories. Note that any polygon, convex or not, can be subdivided into triangles and, as triangles are convex polygons, it is always possible to solve this problem. Note that the goal is to obtain a minimal set of convex polygons, not a complete triangulation. The example presented below is one such case.
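
Continuing the toy illustration above, the subcategory matrices are combined with |, so a pixel belongs to the category when it falls inside any of the convex sub-polygons:

S1 <- matrix(c(TRUE, FALSE, FALSE, FALSE), nrow = 2)   # subcategory 1
S2 <- matrix(c(FALSE, FALSE, TRUE, FALSE), nrow = 2)   # subcategory 2
S1 | S2   # category incidence matrix: union of the subcategories
#>       [,1]  [,2]
#> [1,]  TRUE  TRUE
#> [2,] FALSE FALSE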

The session

What follows is a sample session illustrating both the method and the use of the package functions. It uses an example image and a test set created by cutting small areas out of the example image. It is not a good test set (see below), but it is enough to show how the method works and its problems.

Loading the functions

The package is loaded in the usual way:

library(pixelclasser)

Image loading and transforming

Figure 1 shows the example images included in the package. The goal of this example session is to classify the pixels of the example image into dead, oak and ivy categories. The small images are fragments of the main image and are the test set, i.e. representatives of each class. In a real case, a more extensive test set should be used to represent the whole variation of the categories.

Figure 1. The example image and the test images.

As the images are included in the package as external (non-R) data, they are loaded with the following code:

ivy_oak_rgb <- read_image(system.file("extdata", "IvyOak400x300.JPG", package = "pixelclasser"))
test_ivy_rgb <- read_image(system.file("extdata", "TestIvy.JPG", package = "pixelclasser"))
test_oak_rgb <- read_image(system.file("extdata", "TestOak.JPG", package = "pixelclasser"))
test_dead_rgb <- read_image(system.file("extdata", "TestDeadLeaves.JPG", package = "pixelclasser"))

The function read_image() performs the first step of the procedure. It stores the image as an array of rgb values, which are the proportions of each colour variable (i.e. r = R/(R+G+B), and so on). It uses functions from the jpeg or tiff packages, and the extension of the file name to identify which one to use.
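
As an illustration, the transformation amounts to the following sketch (assuming a JPEG read with jpeg::readJPEG(), which returns values in [0, 1]; this is not the actual implementation, and pure black pixels, where R + G + B = 0, would need special handling):

img <- jpeg::readJPEG(system.file("extdata", "IvyOak400x300.JPG", package = "pixelclasser"))
total <- img[, , 1] + img[, , 2] + img[, , 3]   # R + G + B for every pixel
rgb_prop <- img
for (band in 1:3) rgb_prop[, , band] <- img[, , band] / total
# each pixel's three values now sum to 1 (r + g + b = 1)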

Pixel distributions in rgb space

Before plotting pixels and lines, it is convenient to define a set of colours to use throughout the session:

transparent_black <- "#00000008"
brown <- "#c86432ff"
yellow <- "#ffcd0eff"
blue <- "#5536ffff"
green <- "#559800ff"

The next step is to visualize the distribution of the pixels in rgb space, for which only two variables are needed. Any pair of variables would do, but a particular combination might produce a better display of the clusters. It is a matter of trying the three possible combinations and selecting the most convenient one.

Plotting the pixels is a two-step procedure (Fig. 2): an empty plot is drawn (using plot_rgb_plane()) and then the pixels are added to the plot (the use of a transparent black colour, #00000008, creates a “density plot” effect):

plot_rgb_plane("r", "b", main = "Image: ivy and oak")
plot_pixels(ivy_oak_rgb, "r", "b", col = transparent_black)
Figure 2. The plane of variables r and b, with the pixels of the example image.

The coloured lines are an aid to interpret the graph: no pixel can be found outside the blue lines, and the red lines, which converge at the barycentre of the triangle, (r, g, b) = (1/3, 1/3, 1/3), delimit the areas where one colour variable is dominant. Note that graphical parameters (here, main to set the title of Fig. 2) can be passed to the function to change the final appearance of the graph. All the auxiliary lines are optional. Figure 3 omits them and uses different colour variables to plot the pixels.

plot_rgb_plane("r", "g", plot_limits = FALSE, plot_guides = FALSE, plot_grid = FALSE)
plot_pixels(ivy_oak_rgb, "r", "g", col = transparent_black)
Figure 3. Another representation of the pixels of the example figure. The auxiliary lines were omitted and the colour variables changed.

There are two clear pixel clusters and a small, but noticeable, quantity of pixels in between. Also visible are linear patterns, which are artefacts created because the RGB data are discrete variables (eight-bit in the most common cases). These are more noticeable in the following graphs, which are restricted to the area occupied by the pixels. In the following examples, g and b will be used as variables x and y for plotting and pixel classification.

Plotting the pixels of the test images

The following code plots the pixels of the example image on the gb plane and then adds the pixels of the test images, using arbitrary colours (Fig. 4). Usually, only the pixels of the test images are plotted before placing the rules, but here both the “problem” and the test images were plotted (note that real studies involve many problem images, not just one). This was done to assess the quality of the test images (see below). To create the figure, the graphical parameters xlim and ylim were used to limit the extent of the plot to the area occupied by the pixels:

plot_rgb_plane("g", "b", xlim = c(0.2, 0.6), ylim = c(0.1, 0.33))
plot_pixels(ivy_oak_rgb, "g", "b", col = transparent_black)
plot_pixels(test_oak_rgb, "g", "b", col = green)
plot_pixels(test_ivy_rgb, "g", "b", col = blue)
plot_pixels(test_dead_rgb, "g", "b", col = brown)
Figure 4. The pixels of the dead (brown), ivy (blue) and oak (green) test images overlaid on the pixels of the example image (black). Only the area occupied by the pixels is represented.

Figure 4 shows that the clusters of pixels in the ivy_oak_rgb image correspond to dead leaves (on the left), and oak and ivy (on the right).

The small areas taken as test images were not representative of the whole pixel set in the image, as they do not cover the same area as the black pixels. This is not a surprise, given that a single sample was collected for each type of pixel. In a real study, the set of test images must be selected to be representative of the classes. This is, obviously, a requirement for any method that uses a test set, not just for pixelclasser.

Warning: plotting several million points in an R graph is a slow process. Be patient or use images as small as possible. Using a nice smartphone with a petapixel camera sensor to capture images is good for artistic purposes, but not always for efficient scientific work.
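
A possible workaround is to plot only a random sample of the pixels. The following sketch assumes that the transformed image is an array whose third dimension holds the r, g and b proportions (bands 1 to 3) and that the plot accepts base-graphics annotation:

set.seed(42)
n_pixels <- prod(dim(ivy_oak_rgb)[1:2])
keep <- sample(n_pixels, min(n_pixels, 50000))   # at most 50,000 pixels
plot_rgb_plane("g", "b")
points(ivy_oak_rgb[, , 2][keep],                 # g values (assumed band 2)
       ivy_oak_rgb[, , 3][keep],                 # b values (assumed band 3)
       col = transparent_black, pch = ".")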

Defining the rules

Defining the rules that classify the pixels is a matter of tracing straight lines that separate the clusters. In this example, a single line more or less equidistant from both clusters should suffice to separate them. The intermediate points will be arbitrarily ascribed to one of the categories.

The rules are defined by setting the name of the rule, the colour variables to use, the coordinates of two points in the plane and a comparison operator. The exact placement of the line is an arbitrary decision, as the method does not include any mechanism to place it automatically.

There are two methods to create a rule. The first uses the function pixel_rule(), which receives a list with the coordinates of two points defining a line in the selected subspace, and returns a pixel_rule object. In the following example, the points with coordinates (g, b) = (0.345, 1/3) and (g, b) = (0.40, 0.10) define the position of the first line; they were selected by trial and error. The appropriate comparison operator must be included in the rule definition:

rule_01 <- pixel_rule("rule_01", "g", "b", list(c(0.345, 1/3), c(0.40, 0.10)), "<")
rule_02 <- pixel_rule("rule_02", "g", "b", list(c(0.345, 1/3), c(0.40, 0.10)), ">=")

Both rules are described by the same line but use different comparison operators. rule_01 includes the pixels to the left of (under) the line, and rule_02 those to the right (over) of the line and on it, i.e. the dead leaves and the fresh leaves, respectively. Each line can generate two rules, but beware: if > and < define the rules, the points on the line will not belong to any category, and if >= and <= are used, the points on the line will belong to two categories simultaneously. The function that classifies the pixels can identify the second type of error, but if there are legitimate unclassified points, errors of the first type can pass unnoticed.
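
A quick way to verify that two rules derived from the same line are complementary is to test them on arbitrary points (plain R, using the slope and intercept implied by the two points given above):

a <- (0.10 - 1/3) / (0.40 - 0.345)     # slope of the line, ~ -4.24
c0 <- 1/3 - a * 0.345                  # intercept, ~ 1.80
g <- runif(1000); b <- runif(1000)     # 1000 random test points
rule_01_pass <- b < a * g + c0         # the "<" rule
rule_02_pass <- b >= a * g + c0        # the ">=" rule
all(xor(rule_01_pass, rule_02_pass))   # TRUE: each point passes exactly one rule
#> [1] TRUE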

The second method uses place_rule(), which is a wrapper for graphics::locator() that allows the user to select the two points by clicking in the rgb plot with the mouse:

rp01 <- place_rule("g", "b")
rp02 <- place_rule("g", "b", "v")
rule_07 <- pixel_rule("rule_07", "g", "b", rp02, ">=")

The function returns an object of class pixel_rule_points that can then be passed to pixel_rule() in the parameter line_points. The second example produces a vertical line (use “h” for horizontal lines). When using these options, the function adjusts the coordinates, as it would be difficult to create true vertical or horizontal lines with the mouse. To make the code run automatically, pixel_rule() is used with explicit coordinates in this vignette, but using place_rule() is the easiest way to define the rules. It is even easier to place the call to place_rule() inside the call to pixel_rule() to avoid creating the intermediate pixel_rule_points object:

rule_07 <- pixel_rule("rule_07", "g", "b", place_rule("g", "b"), "<")

Note that both pixel_rule() and the pixel_rule_points object must use the same colour variables as axes. pixel_rule() throws an error if this condition does not hold.

The rule objects store the values passed as parameters, the parameters of the equation of the line (a and c), and a textual representation of the equation which will be evaluated by the classification function. To check the correctness of the rules, the lines can be added to the plot, as in Fig. 5:

plot_rgb_plane("g", "b", xlim = c(0.2, 0.6), ylim = c(0.1, 0.33))
plot_pixels(ivy_oak_rgb, "g", "b", col = transparent_black)
plot_pixels(test_oak_rgb, "g", "b", col = green)
plot_pixels(test_ivy_rgb, "g", "b", col = blue)
plot_pixels(test_dead_rgb, "g", "b", col = brown)
plot_rule(rule_01, lty = 2, col = brown)
Figure 5. Same as Fig. 4, but with the line separating dead and living leaves (defining rule_01 and rule_02) added.

In order to classify the fresh leaves into ivy and oak categories, more rules are needed. The pixels of the oak test image were plotted and used to define additional rules (again by trial and error):

rule_03 <- pixel_rule("rule_03","g", "b", list(c(0.35, 0.30), c(0.565, 0.10)), "<")
rule_04 <- pixel_rule("rule_04","g", "b", list(c(0.35, 0.25), c(0.5, 0.25)), "<")

Figure 6 shows the pixels of the oak test image and the lines that limit this class. Line type and colour were set using the graphical parameters lty and col (see graphics::par), which are passed through the ... argument of plot_rule():

plot_rgb_plane("g", "b", xlim = c(0.2, 0.6), ylim = c(0.1, 0.33), plot_limits = F, plot_guides = F)
plot_pixels(test_oak_rgb, "g", "b", col = green)
plot_rule(rule_01, lty = 2, col = green)
plot_rule(rule_03, lty = 2, col = green)
plot_rule(rule_04, lty = 2, col = green)
Figure 6. The pixels of the oak test image and the lines that limit this category. Note that the line separating the dead leaves is used again.

Then the graph was redrawn (Fig. 7), this time plotting the ivy pixels instead of the oak pixels, to check whether the rules are adequate and sufficient to classify this class. Labels identifying the lines and their associated rules were added to the plot, using the parameter shift to place them conveniently:

plot_rgb_plane("g", "b", xlim = c(0.2,0.6), ylim=c(0.1,0.33), plot_limits = F, plot_guides = F)
plot_pixels(test_ivy_rgb, "g", "b", col = blue)

plot_rule(rule_02, lty = 1, col = green)
label_rule(rule_02, label = expression('L'[1]*' (R'[1]*',R'[2]*')'), shift = c(0.035, -0.004), col = green)

plot_rule(rule_03, lty = 1, col = green)
label_rule(rule_03, label = expression('L'[2]*' (R'[3]*',R'[5]*')'), shift = c(0.20, -0.15), col = green)

plot_rule(rule_04, lty = 1, col = green)
label_rule(rule_04, label = expression('L'[3]*' (R'[4]*',R'[6]*')'), shift = c(0.19, 0.0), col = green)
Figure 7. The pixels of the ivy test image and the lines defining the rules to classify the oak pixels (not shown). The lines can be used to define the ivy category, but the overlap between the two classes prevents a complete separation: some of the ivy pixels trespass into the oak area.

The graph shows two problems: a) part of the ivy pixels lie inside the area delimited by the oak rules, i.e. the two categories overlap, so some ivy pixels will be misclassified; and b) the shape of the ivy cluster is not convex.

To solve the second problem, two subcategories must be defined as explained before. There was no need to place additional lines in this case. The first subcategory is delimited by L1 and L3, and the second by L2 and L3. To do this, two new rules are needed:

rule_05 <- pixel_rule("rule_05", "g", "b", list(c(0.35, 0.30), c(0.565, 0.16)), ">=")
rule_06 <- pixel_rule("rule_06", "g", "b", list(c(0.35, 0.25), c(0.5, 0.25)), ">=")

There is an intentional error in the coordinates of the second point of rule_05, which should be the same as in rule_03. It is left here to show later how the internal checks of the classification function can detect such errors.

Note that because no points can be found outside the blue triangle, its borders are implicit rules that close the polygons defined by the explicit rules.

Creating the classifier objects

After the rules have been defined, they must be included in classifier objects, which will be used later by classify_pixels(). This function receives a list of objects of class pixel_category, each containing the information needed to identify the pixels belonging to a particular category. These objects contain a list of pixel_subcategory objects, each in turn containing a list of pixel_rule objects. This nested structure always has three levels (rule, subcategory and category), even when no subcategories are needed for the classification. In these simple cases, a subcategory object (named S0) containing the rules is added internally to the category object. This consistency in the structure of the objects simplifies the code of classify_pixels().

Creating the classifiers is simple once the rules have been defined. The following code defines a classifier that identifies the dead leaves:

cat_dead_leaves <- pixel_category("dead_leaves", blue, rule_01)

pixel_category() needs a label for the category, a colour to identify the pixels if an image file is generated, and a list of rules that define the category. Here, the list contains a single rule. This is a simple case where no subcategories are needed and a list of rules suffices to classify the pixels. See below for a more complex case. The corresponding classifier for the living leaves is:

cat_living_leaves <- pixel_category("living_leaves", yellow, rule_02)

In these examples pixel_category() detects that the list contains only rules, not subcategories, and wraps them in an object of type pixel_subcategory.

A classifier object for oak pixels needs three rules:

cat_oak_leaves <- pixel_category("oak_leaves", green, rule_02, rule_03, rule_04)

Finally, the classifier for ivy pixels is the most complex, as it is composed of two subcategory objects that must be defined explicitly and then included in the class classifier:

subcat_ivy01 <- pixel_subcategory("ivy01", rule_02, rule_06)
subcat_ivy02 <- pixel_subcategory("ivy02", rule_04, rule_05)

cat_ivy_leaves <- pixel_category("ivy_leaves", yellow, subcat_ivy01, subcat_ivy02)

Note that rules and subcategories cannot be mixed in the list passed to pixel_category(), so sometimes a subcategory object containing a single rule must be created by the user to include it in the category object along with other subcategories. pixel_category() checks the type of the objects in the list and complains if they are not adequate or are a mix of classes.

Classifying the pixels

Function classify_pixels() uses a list of categories to classify the pixels. As a preliminary example, the example image will be classified into dead and living leaves. The parameters are the object to classify (of class pixel_transformed_image) and the list of pixel_category objects:

dead_live_classified <- classify_pixels(ivy_oak_rgb, cat_dead_leaves, cat_living_leaves)
#> Pixels in category "dead_leaves": 80718
#> Pixels in category "living_leaves": 39282
#> Pixels in category "unclassified": 0
#> Duplicate pixels: 0
#> Total number of pixels: 120000

Note that a category named unclassified is automatically added to the categories defined by the user. With the rule set used in this example, the unclassified category must contain zero pixels. The function outputs the counts of pixels in each category, which are useful to verify the consistency of the rules. The function also detects duplicate pixels, i.e. those counted in more than one category (in which case the sum of the pixels in all categories is larger than the total number of pixels). These messages can be suppressed by setting verbose = FALSE in the function call.

The result can be saved as a JPEG (or TIFF) file:

save_classif_image(dead_live_classified, "DeadLiveClassified.JPG", quality = 1)

The type of the file is automatically selected from the file name (only JPEG or TIFF files allowed). Note the use of the quality parameter, which is passed to the underlying function, to set the quality of the JPEG file produced to its maximum value.

The final classification includes the three categories:

ivy_oak_classified <- classify_pixels(ivy_oak_rgb, cat_dead_leaves, cat_ivy_leaves, cat_oak_leaves)
#> Pixels in category "dead_leaves": 80718
#> Pixels in category "ivy_leaves": 12077
#> Pixels in category "oak_leaves": 18428
#> Pixels in category "unclassified": 8777
#> Duplicate pixels: 0
#> Total number of pixels: 120000

The function reports that several pixels were left unclassified. This is the consequence of the error in the definition of rule_05 noted above. If the error is corrected and the image is classified again:

rule_05 <- pixel_rule("rule_05", "g", "b", list(c(0.35, 0.30), c(0.565, 0.10)), ">=")
subcat_ivy02 <- pixel_subcategory("ivy02", rule_04, rule_05)
cat_ivy_leaves <- pixel_category("ivy_leaves", yellow, subcat_ivy01, subcat_ivy02)
ivy_oak_classified <- classify_pixels(ivy_oak_rgb, cat_dead_leaves, cat_ivy_leaves, cat_oak_leaves)
#> Pixels in category "dead_leaves": 80718
#> Pixels in category "ivy_leaves": 20854
#> Pixels in category "oak_leaves": 18428
#> Pixels in category "unclassified": 0
#> Duplicate pixels: 0
#> Total number of pixels: 120000

no unclassified pixels remain. It can now be saved, in this case as a TIFF file:

save_classif_image(ivy_oak_classified, "IvyOakClassified.TIFF")

Figure 8 shows the original image and the results of the two classifications.

Figure 8. From left to right: the original example image, the image produced by the first classification (dead vs. living leaves) and the image produced by the second classification (dead/ivy/oak leaves).

Dead and fresh leaves were correctly differentiated in the first classification. The second classification was accurate for dead and oak pixels but, as expected, some ivy pixels were misclassified as oak pixels because of the overlap between these two categories.

Measuring the quality of the classification

A qualitative assessment of the results of the classification can be done by inspecting the classified image, but a quantitative measure of the errors is necessary in many cases. A simple method for error estimation is to apply the classification rules to the images in the test set. If the rules are well chosen and the classes are separable, no errors would be expected. Function classify_pixels() generates a report on the classification of the pixels that is useful for this purpose:

test_dead <- classify_pixels(test_dead_rgb, cat_dead_leaves, cat_ivy_leaves, cat_oak_leaves)
#> Pixels in category "dead_leaves": 23088
#> Pixels in category "ivy_leaves": 0
#> Pixels in category "oak_leaves": 0
#> Pixels in category "unclassified": 0
#> Duplicate pixels: 0
#> Total number of pixels: 23088
test_ivy <- classify_pixels(test_ivy_rgb, cat_dead_leaves, cat_ivy_leaves, cat_oak_leaves)
#> Pixels in category "dead_leaves": 0
#> Pixels in category "ivy_leaves": 11108
#> Pixels in category "oak_leaves": 709
#> Pixels in category "unclassified": 0
#> Duplicate pixels: 0
#> Total number of pixels: 11817
test_oak <- classify_pixels(test_oak_rgb, cat_dead_leaves, cat_ivy_leaves, cat_oak_leaves)
#> Pixels in category "dead_leaves": 2
#> Pixels in category "ivy_leaves": 328
#> Pixels in category "oak_leaves": 8525
#> Pixels in category "unclassified": 0
#> Duplicate pixels: 0
#> Total number of pixels: 8855

In these examples no errors were produced when classifying the dead leaves pixels, but 709 of the 11817 ivy pixels (6.0 %) were erroneously classified as oak pixels, and 328 of the 8855 oak pixels (3.7 %) were classified as ivy pixels.
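
These proportions follow directly from the counts reported above:

round(709 / 11817, 3)   # ivy pixels misclassified as oak
#> [1] 0.06
round(328 / 8855, 3)    # oak pixels misclassified as ivy
#> [1] 0.037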

Comparing these proportions with the results of the classification of the whole image indicates that these values underestimate the errors, especially for the ivy pixels. This was caused by the deficiencies of the test set in this simplified example. To see a real example of the use of pixelclasser, with a better test set and an assessment of the classification errors in the context of the experimental error of the whole study, see Varela et al. (2021).

References

Bennett, K. P. and C. Campbell (2000). Support vector machines: hype or hallelujah? SIGKDD Explorations 2, 1–11.

Cortes, C. and V. Vapnik (1995). Support-vector networks. Machine Learning 20, 273–297.

Varela, Z., C. Real, C. Branquinho, T. Afonso do Paço and R. Cruz de Carvalho (2021). Optimising artificial moss growth for environmental studies in the Mediterranean area. Plants 10, 2523.