Poster Paper Proc. of Int. Joint Colloquium on Emerging Technologies in Computer Electrical and Mechanical 2011

Conversion between Colour Spaces

Hemanth Yaji¹, Brian Stimpson²

¹Department of Electronics and Communication Engineering, Kurunji Venkatramana Gowda College of Engineering, Kurunjibhag, Sullia, India. Email: hemanthyaji@yahoo.com
²Department of Electronic and Electrical Engineering, The University of Strathclyde, 204 George Street, Glasgow, United Kingdom. Email: b.stimpson@eee.strath.ac.uk

Abstract: Function approximation by neural networks is an efficient method for conversion between colour spaces. The performance of neural networks is evaluated for converting a Red, Green, Blue (RGB) colour space to a Hue, Saturation, Value (HSV) colour space and vice versa. The set of equations given by Smith is used for converting a RGB colour space to a HSV colour space and vice versa. A suitable architecture for the neural networks is selected. We have found that the neural networks fail to approximate the function in the region of ambiguity. The regions of the colour space where large errors occur during approximation by the neural networks are identified. Suitable algorithms are developed to post-process the approximation results, reducing the errors and improving the measured accuracy. We found that the computational efficiency of conversion between colour spaces by neural networks is better than that of conversion using Matlab or any other method only if the neural network has a small number of neurons in the hidden layer, which can be achieved if there is no ambiguity in the target colour space. Matlab is used for programming.

A MLP neural network with one hidden layer of 30 neurons, created using newff, is used for converting a RGB colour space to a HSV colour space and vice versa. Newff is the neural network toolbox function in Matlab for creating a feed-forward back propagation network. The number of neurons in the hidden layer is chosen empirically. A data set of 24,389 uniformly spaced samples is used for training the MLP neural network to convert a RGB colour space to a HSV colour space; any further increase in the number of samples leads to memory problems. The network is trained using trainlm, the Matlab function for the Levenberg-Marquardt (LM) optimisation algorithm [5]. The LM algorithm is designed specifically for minimising a sum of squared errors, and is used here to reduce the Mean Square Error (MSE) between the target colour space and the approximated colour space. Trainlm can train the network as long as the weight, net input and transfer functions have derivative functions. Back propagation weight and bias learning is performed by the Matlab function learngdm, the default, which is the gradient descent with momentum weight and bias learning function: it produces the weight change according to gradient descent for a given learning rate and momentum constant [6]. The learning rate and the momentum constant are taken as 0.01 and 0.9 respectively, the default values in Matlab. The logsig squashing transfer function activates the neurons in the hidden layer and the output layer. The MSE goal is taken to be 0.000004, the equivalent of the quantisation error in a 256³ (16,777,216)-colour RGB colour space. Early stopping is applied by specifying the number of epochs, arbitrarily taken as 500.
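The uniformly spaced training set described above (24,389 = 29³ samples) can be sketched as follows. This is a hypothetical Python reconstruction: `make_training_set` is not from the paper, and the standard-library `colorsys` stands in for Matlab's rgb2hsv when generating the HSV targets.

```python
import colorsys

# Hypothetical sketch: build the uniformly spaced training set the paper
# describes, with 29 levels per channel (29**3 = 24,389 samples), pairing
# each RGB input with its HSV target. colorsys stands in for Matlab's rgb2hsv.
def make_training_set(levels=29):
    inputs, targets = [], []
    for i in range(levels):
        for j in range(levels):
            for k in range(levels):
                rgb = (i / (levels - 1), j / (levels - 1), k / (levels - 1))
                inputs.append(rgb)
                targets.append(colorsys.rgb_to_hsv(*rgb))
    return inputs, targets

inputs, targets = make_training_set()
print(len(inputs))  # 24,389 samples
```

With 30 hidden neurons and three outputs, such a grid gives roughly 200 samples per trainable parameter, which is consistent with the paper's observation that larger grids run into memory limits rather than data scarcity.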
The time taken to train the neural network to approximate a RGB colour space to a HSV colour space is measured. The correction algorithm for Hue as a circular measure, explained below, is applied before computing the errors. Hue traverses from zero to 360 degrees, or zero to 2π radians, so it is a circular measure and the usual way of computing errors does not apply. Assuming that the neural network learns the conversion of a RGB colour space to a HSV colour space well, so that the true error in Hue is never more than 0.5, the following correction algorithm for absolute errors greater than 0.5 in Hue computes the real error in Hue during the mapping.
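The circular-Hue correction can be sketched as a small helper. This is an illustrative function (`hue_error` is not named in the paper): Hue lives on a circle of circumference 1, so a raw error beyond ±0.5 is folded back into [−0.5, 0.5].

```python
# Sketch of the circular-Hue error correction described above: a raw error
# near +/-1 is really a small wrap-around error on the Hue circle, so
# errors beyond +/-0.5 are folded back into [-0.5, 0.5].
def hue_error(approx_hue, target_hue):
    err = approx_hue - target_hue
    if err > 0.5:        # Corrected Hue = Hue - 1
        err -= 1.0
    elif err < -0.5:     # Corrected Hue = Hue + 1
        err += 1.0
    return err

# A network output of 0.98 against a target of 0.02 is only 0.04 away
# around the circle, not 0.96.
print(hue_error(0.98, 0.02))
```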

Index Terms: Colour Spaces, Neural Networks

I. INTRODUCTION
Neural networks require very little computation for function approximation [1]. For mapping continuous functions, multi layer perceptron (MLP) neural networks provide more accurate and efficient approximation than other approximation techniques [2]. This paper evaluates the performance of MLP neural networks for converting one colour space to another. Here a RGB colour space is converted to a HSV colour space and vice versa, using MLP neural networks. The set of equations given by Smith [3] has been used for converting a RGB colour space to a HSV colour space and vice versa. The same equations are used in the Matlab toolbox functions rgb2hsv and hsv2rgb; the code revised by P. Gravel for faster execution and lower memory use is provided by Matlab in the function rgb2hsv.

II. TRAINING OF THE NEURAL NETWORKS
Hudson and Postma [4] have provided guidelines for selecting the right architecture for neural networks. Comparisons are obtained by the analysis of general features of neural networks.

© 2011 ACEEE DOI: 02.CEM.2011.01.549
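The hexcone RGB-to-HSV mapping in the spirit of Smith's equations can be sketched in Python as below. This is an illustrative re-implementation, not the paper's Matlab code; the standard-library `colorsys` implements the same transform and is a convenient cross-check.

```python
# A sketch of the hexcone RGB-to-HSV mapping in the spirit of Smith's
# transform pairs; all channels and outputs are in [0, 1].
def rgb_to_hsv(r, g, b):
    v = max(r, g, b)                 # Value: maximum of R, G, B
    delta = v - min(r, g, b)
    s = 0.0 if v == 0 else delta / v # Saturation: chroma relative to Value
    if delta == 0:                   # grey axis: Hue is undefined, take 0
        return 0.0, s, v
    if v == r:
        h = ((g - b) / delta) % 6    # sector between magenta and yellow
    elif v == g:
        h = (b - r) / delta + 2      # sector between yellow and cyan
    else:
        h = (r - g) / delta + 4      # sector between cyan and magenta
    return h / 6.0, s, v             # Hue normalised to [0, 1)

print(rgb_to_hsv(1.0, 0.0, 0.0))  # pure red
```

Note the `delta == 0` branch: on the grey axis Hue is genuinely undefined, which is exactly the ambiguity that later sections show the neural networks failing to approximate.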

69

TABLE I. MEASUREMENT OF WORST CASE ERROR FOR A NEURAL NETWORK TO APPROXIMATE A RGB COLOUR SPACE TO A HSV COLOUR SPACE, TRAINED FOR A RGB COLOUR SPACE OF SIZE 24,389


If the error in Hue > 0.5: Corrected Hue = Hue − 1
If the error in Hue < −0.5: Corrected Hue = Hue + 1

III. RGB TO HSV CONVERSION
A neural network configured as described above is used to convert a RGB colour space of 24,389 samples to a HSV colour space. Table 1 shows the worst case error, or Maximum Absolute Error (MAE), during generalisation by the neural network when approximating the conversion of a RGB colour space to a HSV colour space. Memorisation is the performance of the neural network on the data set used to train it, a RGB colour space of 24,389 samples. Generalisation performance is measured as the ability of the neural network to approximate a RGB colour space of size 1,331,000 to a HSV colour space; any further increase in the size of the RGB colour space leads to memory overflow problems. The correction algorithm is applied to Hue when evaluating the generalisation performance. If the worst case error in Saturation were large for the same colour that also had a sufficiently large error in Hue, the combined error would be large; thus the location in the initial colour space where the errors occur during approximation should be considered when evaluating performance. The worst case errors of 0.2676 in Hue, 0.9583 in Saturation and 0.0764 in Value occur when the RGB colour corresponding to Black [0.0000 0.0000 0.0000] is approximated by the trained neural network. The target HSV value corresponding to Black in a RGB colour space is [0.0000 0.0000 0.0000], but the neural network approximates it as [0.2676 0.9583 0.0764], leading to the errors. A similar explanation applies to the worst case error in trial number 2, where the worst case errors also occur at the Black corner of the RGB colour space. Fig. 1 shows the histogram of large errors in Hue and Saturation during training for the neural network trained in trial number 1. Large errors occur infrequently: large errors in Saturation occur only a few times, and those over 0.4 occur only once. The errors in Value for trial number 1, for the trained data, are less than 8% of the actual value, as seen in Fig. 2. Value is a simple measure along the vertical axis of a HSV colour space: it is simply the maximum of Red, Green and Blue. Because of this, any of Red, Green and Blue in a RGB colour space may be assigned to Value, and the neural network approximates Value well without exhibiting huge approximation errors.

Figure 1. Histogram of Errors in Hue and Saturation for the trained data

Figure 2. Histogram of Errors in Value for the trained data

In a cylindrical coordinate system, the distance δS between any two neighbouring points with coordinates [r, θ, z] and [r + dr, θ + dθ, z + dz], placed a very small distance apart, is given by

δS = √( dr² + ((r + (r + dr))/2)² dθ² + dz² )   (1)

where dr, dθ and dz are very small quantities. In a HSV colour space, Hue and Saturation define the position of any point on the Hue circle by its angle and radial distance respectively, and Value defines the height of the point located jointly by Hue and Saturation. The approximation errors computed using (1) during the conversion of a RGB colour space to a HSV colour space, in terms of the geometrical distance between the approximated HSV colour space and the target HSV colour space for the neural network trained in trial number 1, are shown as a histogram in Fig. 3.
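The geometric error of Eq. (1) can be sketched as below. This is an illustrative function (`hsv_distance` is not named in the paper): an HSV point is treated as a cylindrical coordinate with r = Saturation, θ = 2π·Hue and z = Value, and the arc term uses the mean radius, as in Eq. (1). For large Hue differences the circular correction described earlier should be applied to dθ first.

```python
import math

# Sketch of Eq. (1): distance between an approximated and a target HSV
# point, treating HSV as cylindrical coordinates [r, theta, z] with
# r = Saturation, theta = 2*pi*Hue, z = Value. The arc term uses the mean
# radius (r + (r + dr))/2 from Eq. (1).
def hsv_distance(hsv_a, hsv_b):
    h1, s1, v1 = hsv_a
    h2, s2, v2 = hsv_b
    dr = s2 - s1
    dtheta = 2 * math.pi * (h2 - h1)   # wrap hue difference first if needed
    dz = v2 - v1
    r_mean = (s1 + s2) / 2.0           # ((r + r + dr)/2) in Eq. (1)
    return math.sqrt(dr ** 2 + (r_mean * dtheta) ** 2 + dz ** 2)
```

For two points on the grey axis (Saturation 0) the arc term vanishes and the distance reduces to the difference in Value, which matches the intuition that Hue carries no geometric weight there.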

Figure 3. Error as a measure of Euclidean Distance for RGB to HSV approximation


The initial RGB colour at [0.0000 0.0000 0.0000] is approximated to the HSV colour [0.2676 0.9583 0.0764], while the target HSV colour is [0.0000 0.0000 0.0000], giving a geometric error of 0.9699. The second largest geometric distance error occurs when the RGB colour at [0.0357 0.0357 0.0357] is approximated to the HSV colour [0.2505 0.8004 0.0924], while the target HSV colour is [0.0000 0.0000 0.0357], giving a geometric error of 0.8086. The values of the RGB colour space for geometric errors greater than 0.3 in the HSV colour space are given in Table 2.

TABLE II. RGB COLOUR SPACE FOR GEOMETRIC ERROR GREATER THAN 30% OF THE ACTUAL VALUE OF HSV COLOUR SPACE DURING APPROXIMATION


The computed Regression Value (R) is 0.995; the greater the Regression Value, the better the correlation between the approximated and the target values. A Regression Value of unity means perfect correlation. Here the best linear fit is calculated to be

Approximated HSV (Ahsv) = 0.995 × Target HSV (Thsv) + 0.0036   (2)

In (2), 0.995 is the slope of the linear regression and 0.0036 is its Y intercept. For an ideal approximation the Y intercept should be zero and the slope should be one. The error in terms of the geometric distance is contributed largely by the error in Saturation. The geometric error matters only if the error is perceived, which occurs for Value greater than 50% of its maximum magnitude; but for Value greater than 50%, the error in Saturation is considerably lower. Moreover, the geometric error on the Grey Axis or elsewhere is largely due to Saturation, as the error in Hue matters only if the corresponding value of Saturation is at least 25% of its maximum magnitude. It is observed that, during the approximation of a RGB colour space to a HSV colour space, large errors occur, i.e. the neural network fails to approximate well, when all three coordinates of the RGB colour space are equal (on the grey axis), or when at least two of the RGB coordinates are equal or nearly equal to each other. We have found that the approximation errors occur mainly where this ambiguity occurs; in other words, the non-smoothness of the function leads to errors during approximation by the neural networks. The approximation errors are only of concern if the magnitudes of Saturation and Value are at least 25% and 50% respectively of their maxima at the point of error. If the error occurs for Saturation and Value below these magnitudes, in the region called close to aqua [7], the error is not perceived and hence is of no interest.
Neural networks are also used to convert a HSV colour space to a RGB colour space [8]. Huge errors do not occur in this approximation, as the function that defines this conversion is well defined and smooth. Importance sampling [9] is used to sample the data densely in the region of ambiguity, and this data set is used to train the neural network [10]. However, the large errors at the region of ambiguity persist, although errors of very small magnitude no longer occur.
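The importance-sampling idea can be sketched as a simple rejection sampler. This is a hypothetical illustration (`sample_near_ambiguity` and its tolerance are not from the paper): candidates are kept only when at least two RGB channels nearly coincide, i.e. in the region of ambiguity identified above.

```python
import random
import colorsys

# Hypothetical sketch of importance sampling in the region of ambiguity:
# keep only RGB candidates where at least two channels nearly coincide,
# so the training set is dense where the RGB-to-HSV function is non-smooth.
def sample_near_ambiguity(n, tol=0.05, seed=0):
    rng = random.Random(seed)
    samples = []
    while len(samples) < n:
        r, g, b = rng.random(), rng.random(), rng.random()
        if min(abs(r - g), abs(g - b), abs(b - r)) < tol:
            samples.append(((r, g, b), colorsys.rgb_to_hsv(r, g, b)))
    return samples

data = sample_near_ambiguity(1000)
print(len(data))
```

A practical training set would mix these samples with the uniform grid, so the network still sees the smooth interior of the colour space.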


The error histograms have been plotted using the Matlab function hist with 100 bins. Two separate single row vectors for the approximated HSV colour space and the target HSV colour space have been obtained. Fig. 4 presents the linear regression between the network response and the target using the Matlab function postreg, which also computes the correlation coefficient between the network response and the target.
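What postreg computes can be sketched in a few lines. This is a hypothetical stand-in (`postreg_like` is not Matlab's implementation): it fits the least-squares line A = m·T + c through (target, approximated) pairs and returns the correlation coefficient R alongside the slope and intercept, the quantities reported in Eq. (2).

```python
import math

# Sketch of a postreg-style analysis: least-squares line A = m*T + c
# through (target, approximated) pairs, plus the correlation coefficient R.
def postreg_like(approx, target):
    n = len(target)
    mt = sum(target) / n
    ma = sum(approx) / n
    sxx = sum((t - mt) ** 2 for t in target)
    sxy = sum((t - mt) * (a - ma) for t, a in zip(target, approx))
    syy = sum((a - ma) ** 2 for a in approx)
    m = sxy / sxx                    # slope of the best linear fit
    c = ma - m * mt                  # Y intercept
    r = sxy / math.sqrt(sxx * syy)   # correlation coefficient R
    return m, c, r
```

Feeding it pairs generated exactly by Eq. (2) recovers the slope 0.995 and intercept 0.0036 with R = 1, since the relation is then perfectly linear.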

IV. CONCLUSIONS AND FUTURE WORK
The neural networks have failed to approximate a RGB colour space to a HSV colour space in the region of ambiguity. However, the neural networks approximate the conversion of a HSV colour space to a RGB colour space very well because of its very low ambiguity. In other words, the non-smoothness of the function causes the errors during approximation by the neural networks. A uniform colour space that has no ambiguity has yet to be devised. We found that the computational efficiency of converting a RGB colour space to a HSV colour space by neural networks is better than that of conversion using Matlab or any other type of conversion if the number of neurons in the hidden layer is small, which can be achieved if the target colour space has no ambiguities. A training function has to be found that can train the neural networks in the presence of ambiguity.

Figure 4. The approximated HSV Colour Space as a function of the target HSV Colour Space

The symbols Ahsv and Thsv represent the approximated HSV colour space and the target HSV colour space respectively.



REFERENCES
[1] Christopher M. Bishop, Neural Networks for Pattern Recognition, Oxford University Press, USA, ISBN 0198538499, 1996, 504 p.
[2] K. A. Osman and A. M. Higginson, Output Coding in Function Approximation Applications: An Empirical Study of RBF and MLP Performance, Intelligent Engineering Systems Through Artificial Neural Networks (5): Fuzzy Logic and Evolutionary Programming, Proceedings of the Artificial Neural Networks in Engineering Conference (ANNIE'95), 12-15 November 1995, St Louis, Missouri, p. 215-20.
[3] Alvy Ray Smith, Color Gamut Transform Pairs, Computer Graphics, SIGGRAPH 78 Conference Proceedings, August 1978, 12(3), p. 12-19.
[4] P. T. W. Hudson and E. O. Postma, Choosing and using a neural net, Artificial Neural Networks: An Introduction to ANN Theory and Practice, 1995, p. 273-87.
[5] J. J. Moré, The Levenberg-Marquardt algorithm: implementation and theory, Lecture Notes in Mathematics 630, Proceedings of the Biennial Conference on Numerical Analysis, Springer Verlag, 1978, p. 105-116.
[6] N. Kamiyama, N. Iijima, A. Taguchi, H. Mitsui, Y. Yoshida and M. Sone, Tuning of learning rate and momentum on back-propagation, Communications on the Move, ICCS/ISITA '92 (Cat. No.92TH0479-6), 16-20 November 1992, Singapore, 2, p. 528-32.
[7] Darrin Cardani, Adventures in HSV Space, [online]. Available: http://visl.technion.ac.il/labs/anat/hsvspace.pdf
[8] Hemanth Yaji, Conversion between Colour Spaces, Master of Science Thesis, University of Strathclyde, Glasgow, United Kingdom, February 2005, 90 p.
[9] I. Beichl and F. Sullivan, The importance of importance sampling, Computing in Science and Engineering, March/April 1999, 1(2), p. 71-73.
[10] Byung Chun Yu, Conversion between Colour Spaces by Neural Networks, Master of Science Thesis, University of Strathclyde, Glasgow, United Kingdom, September 2006, 111 p.

