Image Segmentation Via Color Clustering

Kaveh Heidary
Department of Electrical Engineering and Computer Science, Alabama A&M University, 4900 Meridian Street, Huntsville, AL 35810 USA
Abstract—This paper develops a computationally efficient process for segmentation of color images. The input image is partitioned into a set of output images in accordance with the color characteristics of its various regions. The algorithm is based on random sampling of the input image and fuzzy clustering of the training data, followed by crisp classification of the input image. The user prescribes the number of randomly selected pixels comprising the trainer set and the number of color classes characterizing the image compartments. The algorithm developed here constitutes an effective preprocessing technique with various applications in machine vision systems. Spectral segmentation of the sensor image can potentially lead to enhanced performance of the object detection, classification, recognition, authentication, and tracking modules of an autonomous vision system.

Keywords—Clustering; Classification; Image Segmentation; Machine Vision
I. INTRODUCTION
Background removal and image segmentation constitute fundamental components of many autonomous vision systems. Segmentation is utilized to separate regions or entities of potential interest from each other and from inconsequential image background for further processing. This paper presents an operationally robust and computationally efficient algorithm for segmentation of the input image based on color. The complex image at the sensor output, which is presented to the autonomous vision system, is partitioned into multiple less complicated images prior to further processing [1-5]. Following the segmentation phase, pertinent members of the resultant image set are processed by the corresponding target classification, recognition, identification, and authentication layers of the machine vision system. At lower levels, the image segmentation process entails ascribing the appropriate class label to each pixel, while at higher levels segmentation involves utilization of lower level information for associating salient parts of the image with known objects of interest [6-10]. Target detection, classification, recognition, and authentication procedures based on two-dimensional spatial signatures, acquired with various modalities including infrared and electro-optical imagery, involve utilization of spatial filters [11-15]. The spatial filters may be applied directly to the image at the sensor output or to the resultant images following the segmentation stage.

This paper provides the formulation and implementation of an efficient lower level image segmentation algorithm. Image pixels are classified in accordance with their color attributes, regardless of spatial relationships. The color classifier is computed using a set of randomly selected pixels, obtained from the input image, which are partitioned using a fuzzy clustering procedure. A set of prototype color vectors is computed from the resultant fuzzy sets and is subsequently utilized to segment the input image.

II. BACKGROUND
Image segmentation is used to partition the input image into its salient components for further processing. Segmentation is utilized in various machine vision applications such as object recognition and tracking, as well as image compression, editing, and retrieval. Segmentation involves clustering image feature vectors such as pixel intensity levels and colors [16-18]. In top-down image segmentation the input image is partitioned in accordance with the relationship between the image content and the images of various objects in a database, including object shapes, contours, textures, and colors. Bottom-up image segmentation, on the other hand, utilizes intensity, color, texture, and region boundaries to break up the image into its more basic components. Despite the impressive results of recently reported bottom-up segmentation algorithms, they often fail to capture fundamental relationships among image elements. The inherent difficulty encountered by low-level image based segmentation algorithms is due to potentially sharp intensity and color variations within object boundaries. High-level segmentation algorithms rely on image features such as contours and shapes as segmentation primitives in order to reduce computational complexity. Detection of edges and contours in the input image is achieved by convolving the grayscale image with local derivative filter operators [19-20]. Distinct regions that are circumscribed by closed contours are subsequently recognized as the respective image segments. This paper presents an unsupervised learning algorithm for segmentation of color images.

III. CLUSTERING ALGORITHM
Given a set of N data points in M-dimensional space and a user-specified integer Q representing the number of clusters (classes), the algorithm described here computes a set of Q prototypes and a Q × N membership matrix. Each prototype is a vector in M-space and is the optimal representation of the corresponding class. Each element of the membership matrix represents the degree of membership (association) of a data point in the respective cluster.

$$X = \{X_n : 1 \le n \le N\} \quad (1)$$
$$X_n = [x_{mn} : 1 \le m \le M]; \quad 1 \le n \le N \quad (2)$$
$$Y = \{Y_q : 1 \le q \le Q\} \quad (3)$$
$$Y_q = [y_{mq} : 1 \le m \le M]; \quad 1 \le q \le Q \quad (4)$$
$$x_{mn},\; y_{mq} \in \mathbb{R} \quad (5)$$

where X and Y represent, respectively, the set of data points and the set of prototype vectors in M-space, and \(\mathbb{R}\) is the set of real numbers. Our objective is to utilize the data points in Eq. (1) to partition M-space into Q distinct regions, with each region represented by a prototype vector Y_q. In the operation phase, an unlabeled vector is classified based on its distance with respect to the prototypes. In crisp classification, for example, the input vector is assigned uniquely to the class whose prototype is closest to the input vector. In fuzzy classification, on the other hand, the input vector is assigned to all classes with varying degrees of association.

Each original data point in the trainer set is linked to all Q regions (classes) with varying degrees of association determined by the elements of the membership matrix. The initial membership matrix is generated by assigning to each matrix element random numbers drawn from independent and identically distributed uniform probability functions. We describe below an iterative algorithm for computation of the prototype vectors. The prototype vectors are then used to make hard decisions with regard to new input data points; a new data point is associated with the prototype (class) to which it is closest in accordance with some predefined distance metric.

The process starts with generating a random membership matrix, called the zero-order membership matrix S^(0). Matrix elements are chosen from a uniform probability distribution on [0,1]. The matrix is then normalized by setting the sum of each column to one. The randomly generated membership matrix is then utilized to compute Q zero-order prototype vectors, one for each cluster. A particular prototype vector is computed as the weighted sum of the entire set of data points, where each data point is weighted in accordance with its association to (membership in) the respective cluster.

$$S^{(0)} = [\,s^{(0)}_{qn}\,]; \quad 1 \le q \le Q,\; 1 \le n \le N \quad (6)$$
$$a_{qn} \in [0,1]; \quad 1 \le q \le Q,\; 1 \le n \le N \quad (7)$$
$$s^{(0)}_{qn} = \frac{a_{qn}}{\sum_{p=1}^{Q} a_{pn}} \quad (8)$$
$$\sum_{q=1}^{Q} s^{(0)}_{qn} = 1; \quad 1 \le n \le N \quad (9)$$
$$Y^{(0)} = \{Y^{(0)}_q : 1 \le q \le Q\} \quad (10)$$
$$Y^{(0)}_q = \frac{\sum_{n=1}^{N} \left(s^{(0)}_{qn}\right)^{u} X_n}{\sum_{n=1}^{N} \left(s^{(0)}_{qn}\right)^{u}}; \quad 1 \le q \le Q \quad (11)$$

where Y^(0)_q represents the zero-order prototype vector associated with cluster-q, X_n is the n-th data vector denoting a typical trainer, a_qn are independent random numbers drawn from a uniform distribution on [0,1], s^(0)_qn (1 ≤ q ≤ Q, 1 ≤ n ≤ N) are the elements of the randomly generated zero-order membership matrix, and u is the user-specified exponent parameter. The zero-order prototype vectors are then utilized to compute the first-order membership matrix as shown below.

$$S^{(1)} = [\,s^{(1)}_{qn}\,]; \quad 1 \le q \le Q,\; 1 \le n \le N \quad (12)$$
$$s^{(1)}_{qn} = \frac{1}{\sum_{p=1}^{Q}\left(d^{(0)}_{qn}/d^{(0)}_{pn}\right)^{\frac{2}{u-1}}} \quad (13)$$
$$d^{(0)}_{pn} = \left\| Y^{(0)}_p - X_n \right\| \quad (14)$$

where S^(1) is the first-order membership (association) matrix, s^(1)_qn denotes the degree to which data point-n is associated with (is a member of) cluster-q, and d^(0)_qn is the distance between data point-n and prototype-q. Here, the Euclidean distance is used as the measure of distance between vectors in M-space. The exponent parameter u ∈ [1, ∞] is user-specified and determines the fuzziness of the clustering process. It is noted from Eq. (8) that the membership matrix is normalized such that the sum of each column is equal to one. When u = ∞, each data point belongs to all clusters uniformly and s_qn = 1/Q for 1 ≤ n ≤ N, 1 ≤ q ≤ Q. When u = 1, however, clustering is not fuzzy and each data point is associated with a unique cluster. For crisp (hard) clustering, the elements of the membership matrix are given as follows: s_qn = 1 if d_qn < d_pn for all p ≠ q, and s_qn = 0 otherwise. In hard clustering (u = 1), each column of S contains a single one and the rest of the entries of that column are zero. The value of u affects the rate of convergence of the algorithm. In experiments conducted on diverse sets of RGB images, we have found that setting u = 2.5 generally leads to fast convergence and accurate results.

$$G^{(1)} = [\,g^{(1)}_{qn}\,] \quad (15)$$
$$g^{(1)}_{qn} = \left| s^{(1)}_{qn} - s^{(0)}_{qn} \right| \quad (16)$$
$$\delta^{(1)} = \max_{q,n}\; g^{(1)}_{qn} \quad (17)$$

where S^(1) and G^(1) denote, respectively, the first-order membership and gradient matrices, and δ^(1) is the first-order gradient. Next, the computed first-order membership matrix is used to compute the first-order prototype vectors using Eq. (11), with the superscript 0 replaced by 1. Subsequently, the computed first-order prototype vectors are utilized to compute the second-order membership matrix and the gradient as shown in Eq. (13) and Eq. (17), respectively. The iterative process described above continues until a user-prescribed stopping criterion is met. The stopping criterion may be the maximum number of iterations (orders), in which case the process terminates when that number of iterations is reached. One may also use the gradient value, or the relative change of the gradient between two consecutive iterations, as the stopping criterion. For the experiments in this paper the iteration process terminates when the gradient falls below the user-prescribed threshold, i.e., δ^(r) < T = 0.001.
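To make Eqs. (6)-(17) concrete, the following is a minimal NumPy sketch of the iteration described above. It is our own illustrative code, not the authors' implementation; the function name fuzzy_cluster and all variable names are assumptions introduced here.

```python
import numpy as np

def fuzzy_cluster(X, Q, u=2.5, T=1e-3, max_iter=100, rng=None):
    """Fuzzy clustering of N points in M-space into Q classes.

    X : (N, M) array of data points.
    Returns (Y, S): prototypes (Q, M) and membership matrix (Q, N).
    """
    rng = np.random.default_rng(rng)
    N = X.shape[0]

    # Zero-order membership, Eqs. (6)-(9): i.i.d. uniform entries,
    # columns normalized so that each column sums to one.
    S = rng.uniform(size=(Q, N))
    S /= S.sum(axis=0, keepdims=True)

    for _ in range(max_iter):
        # Prototype update, Eq. (11): weighted mean with weights s_qn^u.
        W = S ** u
        Y = (W @ X) / W.sum(axis=1, keepdims=True)

        # Euclidean distances d_qn between prototypes and data points, Eq. (14).
        D = np.linalg.norm(Y[:, None, :] - X[None, :, :], axis=2)
        D = np.maximum(D, 1e-12)          # guard against division by zero

        # Membership update, Eq. (13); the columns automatically sum to one.
        ratio = (D[:, None, :] / D[None, :, :]) ** (2.0 / (u - 1.0))
        S_new = 1.0 / ratio.sum(axis=1)

        # Gradient and stopping test, Eqs. (15)-(17).
        delta = np.max(np.abs(S_new - S))
        S = S_new
        if delta < T:
            break

    return Y, S
```

Crisp classification of a new vector then amounts to picking the class whose prototype is nearest in Euclidean distance, e.g. by taking the argmin over the rows of the distance matrix.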
IV. TESTS WITH SIMULATED DATA

The clustering algorithm described above was used to partition various synthetically generated data sets into classes, the numbers of which were prescribed by the user. The algorithm computes a prototype for each class and the membership matrix for the entire data set. Each trainer may then be assigned to a unique class by binarizing the computed membership matrix. Likewise, new unlabeled input data are classified based on the distance between the data point and the computed prototypes.

In the example of Figure 1, the input data set is comprised of points in the xy-plane of the Cartesian coordinate system. The data points were generated by a pair of 2D Gaussian distributions with means at (1,3) and (-2,-1) and equal standard deviations set to one. The x and y components of each data point were generated by independent distributions. Fifty points were randomly selected from each distribution and were combined to form the unlabeled set of one-hundred input data points. Parameters of the clustering algorithm were set to Q = 2, u = 2, T = 0.001, and the iteration process converged after ten rounds. Figure 1 shows the evolution of the two prototypes. Both prototype vectors started very close to each other, in the proximity of the center of gravity of the entire data set. As the iterations proceed, the prototypes move toward the centers of the respective distributions; the final values of the computed prototypes are shown as triangles.

Fig. 1. Evolution of prototype vectors in a two-class problem.

In Figure 1, circles and stars denote the data points associated with the two classes for illustration purposes only; the algorithm is entirely oblivious to the class dispositions of the input data points. Despite this complete lack of knowledge about the origins of the data set members in Figure 1, it is seen that the algorithm finds appropriate prototypes for the two classes. Figure 2 shows the degree of association of the various data points (1-100) to each prototype (class), and is an illustration of the computed membership matrix for the fuzzy classifier. It is seen that the first fifty data points are more strongly associated with the first prototype (q=1), whereas the last fifty points have higher association to the second prototype (q=2), as expected. Fuzzy classification may be utilized for assignment of classes to new unlabeled input data, where each input vector is assigned probabilities of membership in the respective classes. In some applications, crisp classification of the input data may be desired, where a typical input vector is assigned exclusively to the class whose prototype is closest, based on Euclidean distance, to the input vector.

Fig. 2. Trainer data association factor.

In the example of Figure 3, the input data set comprises 125 points in the xy-plane, randomly chosen from three Gaussian distributions with means at (1,3), (-2,-6), (3,-2), and different variances along the two axes. The clustering algorithm was tasked to partition the above unlabeled set of data using parameters Q = 3, u = 2, T = 0.001. As expected, all three prototype vectors are initially very close to each other and are situated virtually at the center of gravity of the entire input data set. As the iteration process proceeds, the prototypes traverse the xy-plane toward their final destinations, denoted as triangles. It is noted that the computed prototypes in this example are not equal to the mean vectors of the respective classes. This is a byproduct of the dataset composition and does not affect the ability of the computed prototype vectors to accurately classify new and unlabeled input data.

Fig. 3. Prototype evolutions in a three-class problem.
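As an illustration, the two-class experiment of Figure 1 can be approximated with a short script such as the one below. It is a sketch under our own assumptions, reusing the hypothetical fuzzy_cluster routine sketched in Section III rather than the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fifty points from each 2D Gaussian: means (1, 3) and (-2, -1), unit std.
class_a = rng.normal(loc=(1.0, 3.0), scale=1.0, size=(50, 2))
class_b = rng.normal(loc=(-2.0, -1.0), scale=1.0, size=(50, 2))
trainers = np.vstack([class_a, class_b])      # unlabeled 100-point trainer set

# Cluster into Q = 2 classes with fuzziness u = 2 and threshold T = 0.001.
Y, S = fuzzy_cluster(trainers, Q=2, u=2.0, T=1e-3, rng=0)

# Binarize the membership matrix to assign each trainer to a unique class.
hard_labels = np.argmax(S, axis=0)
print("Prototypes:\n", Y)
```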
In order to obtain quantitative performance results for the fuzzy clustering algorithm, the following tests were conducted. We started with two Gaussian distributions in 2D-space, where the x and y components of each sample point were obtained from independent distributions with equal variances. In the first example, classes A and B were obtained from two 2D distributions with equal standard deviations, σ_A = σ_B. Equal numbers of randomly generated trainers from each class were combined and were subsequently utilized as the unlabeled trainer set. Fuzzy clustering was applied to the above set of unlabeled trainers in order to evolve two prototypes. A large number of test points were then generated from the distribution functions described above. Binary classification was utilized to label all the test vectors in accordance with their Euclidean distances with respect to the two computed prototypes. Figure 4 shows the classification error rate as a function of the separation factor, with the number of unlabeled trainers utilized from each class as a parameter.

$$SF = \frac{\left\| \mu_A - \mu_B \right\|}{\sqrt{\sigma_A^{2} + \sigma_B^{2}}} \quad (18)$$

where SF represents the separation factor between the two distributions, and μ, σ denote, respectively, the mean vector and the standard deviation of the particular data set. For each test case, the number of trainers was fixed and the separation factor was varied from 0.25 to 4. The error rate is the percentage of input test vectors that are misclassified. As expected, the classifier performance improves as the separation factor increases. It is noted that the number of trainers has virtually no effect on classifier performance. In the example of Figure 5, the two Gaussian distributions have unequal standard deviations such that σ_B = 2σ_A, and all other parameters are the same as before. It is seen that the number of trainers in this case has a slightly more pronounced effect on the classifier performance.

Fig. 4. Effect of SF on classification error rate.
Fig. 5. Effect of SF on classification error rate.
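The quantitative tests above can be approximated with a script like the following. It is a sketch under our own assumptions (Gaussian test sets, nearest-prototype labeling) and again reuses the hypothetical fuzzy_cluster routine from Section III; it is not the authors' experimental code.

```python
import numpy as np

def separation_factor(mu_a, mu_b, sigma_a, sigma_b):
    # Eq. (18): distance between class means over the combined spread.
    diff = np.asarray(mu_a, float) - np.asarray(mu_b, float)
    return np.linalg.norm(diff) / np.sqrt(sigma_a**2 + sigma_b**2)

def error_rate(mu_a, mu_b, sigma_a, sigma_b, n_train=50, n_test=5000, seed=0):
    rng = np.random.default_rng(seed)
    train = np.vstack([rng.normal(mu_a, sigma_a, size=(n_train, 2)),
                       rng.normal(mu_b, sigma_b, size=(n_train, 2))])
    Y, _ = fuzzy_cluster(train, Q=2, u=2.0, T=1e-3, rng=seed)

    test = np.vstack([rng.normal(mu_a, sigma_a, size=(n_test, 2)),
                      rng.normal(mu_b, sigma_b, size=(n_test, 2))])
    truth = np.repeat([0, 1], n_test)

    # Crisp labels: nearest prototype in Euclidean distance.
    d = np.linalg.norm(test[:, None, :] - Y[None, :, :], axis=2)
    labels = np.argmin(d, axis=1)

    # The two prototypes are unordered, so score the better labeling.
    err = np.mean(labels != truth)
    return min(err, 1.0 - err) * 100.0     # percent misclassified
```

Sweeping the class means so that SF runs from 0.25 to 4, as in the text, produces error-rate curves of the kind shown in Figures 4 and 5.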
V. EXPERIMENTS WITH REAL DATA
In this section the fuzzy clustering algorithm described above is applied to the task of segmentation of color images. The set of RGB vectors associated with a group of randomly selected pixels of the input color image constitutes the trainer set. The computed prototypes comprise a set of color vectors which are subsequently utilized to partition the input image.

The example of Figure 6 shows the input image (upper-left) and the result of color segmentation. In this example one-hundred pixels (N=100) were randomly selected from the input image, comprising the unlabeled trainers which were used as the input of the fuzzy clustering process. The algorithm was tasked to partition the training set into three classes (Q=3). Figures 7 and 8 show, respectively, the trainers and the evolution of the class prototypes in RGB-space. It is noted that all three prototypes are initially very close to each other and are proximate to the centroid of the training set. The prototypes migrate toward their factual positions and the true prototypes evolve as shown in Figure 8. The initial and final values of the RGB coordinates of the prototype vectors for the three classes are listed in Table 1. In this example it took twenty iterations for all three prototypes to reach their final destinations. The computed prototypes were then utilized for crisp classification of all the input image pixels; one of three possible labels was assigned to each pixel of the input image. The images of Figure 6 show the results of the filtering process.

Fig. 6. Original input image (upper-left), and color-segmented images: class-one (upper-right), class-two (lower-left), and class-three (lower-right).
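A possible implementation of this segmentation step is sketched below, again assuming the hypothetical fuzzy_cluster routine from Section III. The sampling size, class count, and the white fill for out-of-class pixels follow the description in the text, while the function name and I/O conventions are our own assumptions.

```python
import numpy as np

def segment_by_color(image, Q=3, n_samples=100, u=2.5, T=1e-3, rng=0):
    """Partition an RGB image (H, W, 3), values in [0, 1], into Q color classes."""
    rng_ = np.random.default_rng(rng)
    h, w, _ = image.shape
    pixels = image.reshape(-1, 3)

    # Trainer set: RGB vectors of randomly selected pixels.
    idx = rng_.choice(h * w, size=n_samples, replace=False)
    prototypes, _ = fuzzy_cluster(pixels[idx], Q=Q, u=u, T=T, rng=rng)

    # Crisp classification of every pixel: nearest prototype in RGB-space.
    d = np.linalg.norm(pixels[:, None, :] - prototypes[None, :, :], axis=2)
    labels = np.argmin(d, axis=1).reshape(h, w)

    # One output image per class; pixels of the other classes are set to white.
    outputs = []
    for q in range(Q):
        out = np.ones_like(image)            # white background
        mask = labels == q
        out[mask] = image[mask]
        outputs.append(out)
    return labels, outputs
```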
Fig. 7. Training set comprised of one-hundred randomly selected pixels from the original input image.

Fig. 8. Evolutions of prototypes for classes-one through three. Initial estimates of prototypes are denoted as circles and final estimates are triangles.

TABLE I. INITIAL AND FINAL PROTOTYPE VECTORS.

Class          Initial Prototypes (R, G, B)     Final Prototypes (R, G, B)
Class-one      0.853  0.678  0.492              0.566  0.255  0.233
Class-two      0.861  0.687  0.492              0.946  0.716  0.345
Class-three    0.880  0.700  0.507              0.885  0.757  0.623

In the next example the trainer set consisted of one-hundred pixels randomly selected from the Mondrian painting of Figure 9. Fuzzy clustering was used to partition the trainer set into four classes, and the respective RGB prototype vectors were computed. All pixels of the input image were subsequently classified crisply in accordance with the prototype having the smallest Euclidean distance with respect to the corresponding pixel. The images of Figure 10 show the result of the input image segmentation, where pixels of the corresponding class are turned on while pixels of all other classes are set to white.

Fig. 9. Input image.

Fig. 10. Output images.

Figures 11 and 12 show the set of one-hundred training vectors and the evolution of the four prototype vectors for the Mondrian of Figure 9. As before, all four prototypes are initially close to the center of mass of the training set. In this experiment, the process converged after six iterations. Circles and triangles in Figure 12 denote, respectively, the initial and final values of the prototype vectors. The four computed prototype RGB vectors were used to assign each pixel of the input Mondrian to one exclusive class, characterized by the prototype with the smallest Euclidean distance with respect to the RGB vector of the pixel. Each of the filtered images in Figure 10 shows the pixels of the respective class with all other pixels set to white.

Fig. 11. Training set comprised of one-hundred randomly selected pixels of the Mondrian.
Fig. 12. Evolutions of prototypes for classes-one through four. Initial estimates of prototypes are denoted as circles and final estimates are triangles.

The images of Figure 13 show an input image and the resultant filtered images, which are produced by the prototype vectors computed from fuzzy clustering of the trainer set into three groups. The one-hundred element trainer set, shown in Figure 14, was obtained by random sampling of the input image. Figure 15 shows the evolution of the prototype vectors.

Fig. 13. Upper left shows the input image. The input image is filtered using a three-class filter.

Fig. 14. Training set comprised of one-hundred randomly selected pixels of the input image.

Fig. 15. Evolutions of prototypes for classes-one through three. Triangles denote final prototypes.

In the next example the input image is partitioned in a hierarchical manner. First, the image is sampled randomly and the samples are grouped into two classes using the fuzzy clustering algorithm. This leads to the computation of two prototypes, which are used to carry out crisp segmentation of the input image. This process produces two images, each comprised of the input image pixels that belong to the respective class, with all other pixels set to white. Each of the two generated images is treated as a new input image and is partitioned into two classes, resulting in four new images. The process continues for a user-specified number of partition rounds, as sketched below.

The original set of training pixels was selected randomly from the input image of Figure 16 and was partitioned into two classes using fuzzy clustering. The images of Figure 17 show the result of this two-class segmentation process. The class-one image, consisting of the leaf and bug only, which constitute the foreground in the original image, was then sampled randomly to form a new set of trainers which were partitioned using fuzzy clustering, resulting in the computation of two new prototypes. The class-one image was subsequently partitioned using the computed prototypes. The images of Figure 18 show the segmentation results.

Fig. 16. Input image.

Fig. 17. The result of segmentation of the input image into two classes, foreground and background.

Fig. 18. The result of segmentation of the foreground into two classes.
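The hierarchical scheme can be expressed as a short recursive routine. The sketch below is our own illustration (hypothetical helper names, reusing the segment_by_color sketch given earlier with Q = 2), not the authors' implementation; for simplicity it samples every pixel of each intermediate image, whereas in practice one would restrict the sampling to the non-white, in-class pixels as described in the text.

```python
def hierarchical_segment(image, rounds, n_samples=100, u=2.5, T=1e-3):
    """Recursively split an RGB image into 2**rounds color classes.

    At each round every current segment is sampled, clustered into two
    classes, and crisply split; out-of-class pixels are set to white.
    """
    segments = [image]
    for _ in range(rounds):
        next_segments = []
        for seg in segments:
            # Two-class split of the current segment.
            _, parts = segment_by_color(seg, Q=2, n_samples=n_samples, u=u, T=T)
            next_segments.extend(parts)
        segments = next_segments
    return segments
```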
VI. CONCLUSIONS
This paper provides a computationally efficient and operationally robust algorithm for segmentation of color images. Tests using synthetically generated data sets as well as real RGB images have demonstrated the efficacy of the image segmentation procedure developed here. The algorithm has practical applications in machine vision systems, where partitioning the sensor images in accordance with the color characteristics of various image regions can precede higher level processing layers such as recognition and tracking of targets. Future work will include utilization of different distance measures, such as the Mahalanobis distance, and applications to multi-spectral image segmentation.
VII. ACKNOWLEDGEMENTS

Partial sponsorship for this work was provided by the Department of Defense through US Army RDECOM W911NF-13-1-0136.

REFERENCES
[1] Carson, C., Belongie, S., Greenspan, H., Malik, J.: Blobworld: image segmentation using expectation-maximization and its application to image querying. IEEE Trans. Pattern Anal. Mach. Intell. 24 (8), 2002, pp. 1026-1038.
[2] Cheng, H.D., Jiang, X.H., Sun, Y., Wang, J.: Color image segmentation: advances and prospects. Pattern Recog. 34 (12), 2001, pp. 2259-2281.
[3] Heidary, K., Caulfield, J.: Color classification using margin-setting with ellipsoids. Signal, Image and Video Processing, 2012, DOI 10.1007/s11760-012-0349-6.
[4] Heidary, K., Caulfield, J.: Presmoothing effects in Artificial Color image segmentation. Computer Vision and Image Understanding 117, 2013, pp. 195-201.
[5] Heidary, K., Caulfield, J.: Discrimination among similar looking noisy color patches using Margin Setting. Optics Express 15 (11), 2007, p. 6275.
[6] Batchelor, B.G. (Ed.): Machine Vision Handbook. Springer, 2012.
[7] Steger, C., Ulrich, M., Wiedemann, C.: Machine Vision Algorithms and Applications. Wiley-VCH, 2007.
[8] Davies, E.R.: Computer and Machine Vision: Theory, Algorithms, Practicalities. Academic Press, 2012.
[9] Acharya, T., Ray, A.K.: Image Processing - Principles and Applications. Wiley, 2006.
[10] Zhu, H., Zheng, J., Cai, J., Thalmann, N.M.: Object-level image segmentation using low level cues. IEEE Transactions on Image Processing 22 (10), 2013, pp. 4019-4027.
[11] Belongie, S., Malik, J., Puzicha, J.: Shape matching and object recognition using shape contexts. IEEE Transactions on Pattern Analysis and Machine Intelligence 24 (4), 2002, pp. 509-522.
[12] Bhanu, B.: Automatic target recognition: State of the art survey. IEEE Transactions on Aerospace and Electronic Systems 22 (4), 1986, pp. 364-379.
[13] Heidary, K.: Distortion tolerant correlation filter design. Applied Optics 52 (12), 2013, pp. 2570-2576.
[14] Heidary, K.: Synthetic template: effective tool for target classification and machine vision. International Journal of Advanced Computer Science and Applications 4 (10), 2013, pp. 22-31.
[15] Heidary, K., Caulfield, H.J.: Needles in a haystack: fast spatial search for targets in similar-looking backgrounds. Journal of the Franklin Institute 349, 2012, pp. 2935-2955.
[16] Martin, D., Fowlkes, C., Malik, J.: Learning to detect natural image boundaries using local brightness, color, and texture cues. IEEE Transactions on Pattern Analysis and Machine Intelligence 26 (5), 2004, pp. 530-549.
[17] Yu, S., Shi, J.: Multiclass spectral clustering. Proceedings of the International Conference on Computer Vision, 2003.
[18] Lezoray, O., Charrier, C.: Color image segmentation using morphological clustering and fusion with automatic scale selection. Pattern Recognition Letters 30, 2009, pp. 397-406.
[19] Marr, D.C., Hildreth, E.: Theory of edge detection. Proceedings of the Royal Society of London, 1980.
[20] Arbelaez, P., Maire, M., Fowlkes, C., Malik, J.: Contour detection and hierarchical image segmentation. IEEE Trans. Pattern Analysis and Machine Intelligence 33 (5), 2011, pp. 898-916.