Image Emotional Semantic Query Based On Color Semantic Description
Wei-Ning Wang, Ying-Lin Yu
Department of Electronic and Information Engineering, South China University of Technology, Guangzhou, P.R. China
Proceedings of the Fourth International Conference on Machine Learning and Cybernetics, Guangzhou, 18-21 August 2005
Outline
• Introduction
• Color image segmentation
• Semantic Descriptions of Colors
• Image query using semantic descriptions of colors
• Result
Introduction
• Content-based image retrieval (CBIR) systems support image searches based on perceptual features such as color, texture, and shape.
• Users prefer conducting searches with keywords at the semantic level rather than with low-level features (semantic-based retrieval).
• Color is one of the main visual cues. Given the strong relationship between colors and human emotions, an emotional semantic query model based on color semantic description is proposed in this study.
Introduction
Our method
• Segment images using color clustering in L*a*b* space.
• Generate semantic terms using a fuzzy clustering algorithm and describe each image region and the whole image with those terms.
• Present an image query scheme based on the color semantic description, which allows users to query images with emotional semantic words.
Color image segmentation
• In this paper, we propose an effective image segmentation method, which involves three stages:
1. Image preprocessing. Edges are removed and the images are smoothed with a Gaussian kernel.
2. Color space conversion. The color space is converted from RGB to L*a*b*.
3. Color clustering in L*a*b* space.
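The first two stages can be sketched in pure Python; the Gaussian kernel parameters and the D65 reference white are assumptions not given on the slides:

```python
import math

def gaussian_kernel_1d(sigma=1.0, radius=2):
    """Sampled 1-D Gaussian, normalized to sum to 1 (separable smoothing)."""
    k = [math.exp(-(i * i) / (2 * sigma * sigma)) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def rgb_to_lab(r, g, b):
    """sRGB (0-255) -> CIE L*a*b*, D65 reference white."""
    def lin(c):                      # undo sRGB gamma
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = lin(r), lin(g), lin(b)
    # sRGB -> XYZ matrix (linear light)
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    def f(t):                        # CIE cube-root nonlinearity
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)
```

Smoothing each channel with the 1-D kernel along rows and then columns gives the Gaussian-smoothed image; each smoothed RGB pixel is then mapped to L*a*b* for clustering.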
Color segmentation
1. Initialize clustering centroids: choose K(1) initial centroids μ(1)j and empty cluster sets Q(1)j (j = 1, 2, …, K(1)).
2. In the L*a*b* space, in the i-th iteration, for each pixel Pr(L, a, b) in the image, find the cluster j* that satisfies
   j* = argmin_j || Pr - μ(i)j ||
and put Pr into cluster Q(i)j*.
Color segmentation
3. Update the cluster centroids as follows:
   μ(i+1)j = (1/Nj) Σ{Pr ∈ Q(i)j} Pr
where Nj is the number of pixels in cluster Q(i)j.
4. For each pair of clusters Q(i)j1 and Q(i)j2 whose centroids satisfy
   || μ(i)j1 - μ(i)j2 || < ε (a merging threshold)
merge Q(i)j1 and Q(i)j2 into Q(i+1)j, with the centroid updated as the size-weighted mean
   μ(i+1)j = (Nj1 μ(i)j1 + Nj2 μ(i)j2) / (Nj1 + Nj2)
and K(i+1) = K(i) - 1.
Color segmentation
5. Repeat stages 2 to 4 until all the clusters converge.
After image segmentation, the image is divided into K color regions.
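A compact sketch of the clustering loop (stages 2-5); the merge threshold value and the dropping of empty clusters are assumptions, since the slide's merge condition was an equation image:

```python
import math

def segment_colors(pixels, centroids, merge_thresh=10.0, max_iter=50):
    """Assign pixels to the nearest centroid, update means, merge
    near-duplicate clusters, and repeat until the centroid set is stable."""
    centroids = [tuple(map(float, c)) for c in centroids]
    for _ in range(max_iter):
        # Stage 2: nearest-centroid assignment
        clusters = [[] for _ in centroids]
        for p in pixels:
            j = min(range(len(centroids)),
                    key=lambda k: math.dist(p, centroids[k]))
            clusters[j].append(p)
        # Stage 3: recompute each centroid as the mean of its cluster
        new, sizes = [], []
        for cl in clusters:
            if cl:  # empty clusters are dropped (an assumption)
                new.append(tuple(sum(v) / len(cl) for v in zip(*cl)))
                sizes.append(len(cl))
        # Stage 4: merge centroid pairs closer than merge_thresh (weighted mean)
        i = 0
        while i < len(new):
            j = i + 1
            while j < len(new):
                if math.dist(new[i], new[j]) < merge_thresh:
                    n1, n2 = sizes[i], sizes[j]
                    new[i] = tuple((n1 * u + n2 * v) / (n1 + n2)
                                   for u, v in zip(new[i], new[j]))
                    sizes[i] = n1 + n2
                    del new[j], sizes[j]
                else:
                    j += 1
            i += 1
        # Stage 5: stop when the centroid set no longer changes
        if new == centroids:
            break
        centroids = new
    return centroids
```

Each surviving centroid corresponds to one color region of the segmented image.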
Semantic Descriptions of Colors
• We propose a color description model which can automatically generate the semantic terms of image segments and of the whole image through a fuzzy clustering algorithm.
 - LCH space conversion
 - Fuzzy Clustering
 - Regional Semantic Description of Colors
 - Global Semantic Description of Images
LCH space conversion
• L*C*h* space is selected because its definitions and measurements are well suited to the psychology of visual perception.
- L: lightness
- C: color saturation (chroma)
- H: hue
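The conversion from L*a*b* to L*C*h* is a polar-coordinate change in the a*b* plane; a minimal sketch:

```python
import math

def lab_to_lch(L, a, b):
    """CIE L*a*b* -> L*C*h*: C* is the chroma (distance from the neutral
    axis), h is the hue angle in degrees in [0, 360)."""
    C = math.hypot(a, b)
    h = math.degrees(math.atan2(b, a)) % 360.0
    return L, C, h
```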
Fuzzy Clustering
• Following the findings of color-naming research, we develop color semantic terms to name and describe colors.
Fuzzy Clustering
Input the data sequence {x1, x2, …, xn}, where xi denotes the feature L* or C* of the i-th region and n is the number of color regions.
1. Initialize the 5 membership functions:
   c0 = min{x1, x2, …, xn}, c6 = max{x1, x2, …, xn}
   and compute c1, c2, …, c5 as
   cj = c0 + j(c6 - c0)/6
Fuzzy Clustering
2. For each xi, compute μij using the following rules:
   Rule 1: if xi <= c1, then μi,1 = 1 and μi,k = 0 for all k ≠ 1
   Rule 2: if xi >= c5, then μi,5 = 1 and μi,k = 0 for all k ≠ 5
   Rule 3: if cj < xi <= cj+1, then μij = (cj+1 - xi)/(cj+1 - cj), μi,j+1 = 1 - μij, and μi,k = 0 for all k ≠ j, j+1
where μij is the membership value of the i-th pattern in the j-th semantic term, 1 <= i <= n, 1 <= j <= 5.
Example: for c2 < xi <= c3, μi,2 = (c3 - xi)/(c3 - c2) and μi,3 = 1 - μi,2.
Fuzzy Clustering
3. Update the class centroids c1, c2, …, c5 as the membership-weighted means
   cj = Σi μij xi / Σi μij
4. Repeat steps 2 and 3 until the cj are unchanged.
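The membership rules and centroid update above can be sketched as follows (the stopping test follows step 4; the iteration cap is an assumption):

```python
def memberships(x, c):
    """Membership of x in the 5 terms, given centers c = [c1..c5] (Rules 1-3)."""
    m = [0.0] * 5
    if x <= c[0]:
        m[0] = 1.0                        # Rule 1
    elif x >= c[4]:
        m[4] = 1.0                        # Rule 2
    else:
        for j in range(4):                # Rule 3: linear interpolation
            if c[j] < x <= c[j + 1]:
                m[j] = (c[j + 1] - x) / (c[j + 1] - c[j])
                m[j + 1] = 1.0 - m[j]
                break
    return m

def fuzzy_cluster(xs, iters=100):
    """Step 1 initializes the centers evenly between min and max;
    steps 2-3 then alternate until the centers stop changing."""
    c0, c6 = min(xs), max(xs)
    c = [c0 + j * (c6 - c0) / 6 for j in range(1, 6)]
    for _ in range(iters):
        M = [memberships(x, c) for x in xs]
        new = []
        for j in range(5):                # weighted-mean centroid update
            den = sum(M[i][j] for i in range(len(xs)))
            num = sum(M[i][j] * xs[i] for i in range(len(xs)))
            new.append(num / den if den else c[j])
        if new == c:
            break
        c = new
    return c
```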
Fuzzy Clustering
Membership functions of the lightness
Membership functions of the saturation
Fuzzy Clustering
Membership functions of the hue
Fuzzy Clustering
• According to the above membership functions, all the semantic terms (hue, lightness, saturation) can be automatically generated from the L*C*h* values.
• Colors can be described by a three-dimensional semantic vector, each vector containing a hue term, a saturation term and a lightness term.
• There are 150 (5 × 5 × 6) vectors in total.
• The semantic vector Qn = [q1, q2, q3], n = 1, 2, …, 150, q1 = 1, 2, …, 5, q2 = 1, 2, …, 5, q3 = 1, 2, 3, …, 6. E.g., [521] = very light, weak red.
Fuzzy Clustering
• When the saturation term is "colorless", hue carries no information, so the color is described by its lightness alone, using the terms "black", "dark grey", "medium grey", "light grey" and "white". The 30 colorless vectors thus collapse into these 5 grey terms, giving 125 (4 × 5 × 6 + 5) semantic vectors in total.
• For a given color Si, we can compute its membership in each of the 125 semantic vectors.
• The vector with the largest membership is selected as the semantic description of color Si.
• Following this method, we can get semantic terms for every color.
Regional Semantic Description of Colors
• Suppose a segment is composed of K pixels, with Si, i = 1, 2, …, K, the color of the i-th pixel. The membership of each pixel Si in the 125 combinations, μQn(Si), n = 1, 2, …, 125, can be computed.
• Summing over all the pixels, we obtain the membership of the whole region in each of the 125 combinations, i.e. its histogram.
• The combination with the largest membership is selected as the regional semantic description.
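A minimal sketch of the regional description. The slides do not state how the hue, lightness and saturation memberships are combined into a single vector membership; the min (fuzzy AND) used here is an assumption:

```python
def vector_memberships(mu_h, mu_l, mu_s):
    """Membership of one pixel in each (hue, lightness, saturation) vector,
    combining the three component memberships with min (an assumption)."""
    return {(h, l, s): min(mh, ml, ms)
            for h, mh in mu_h.items()
            for l, ml in mu_l.items()
            for s, ms in mu_s.items()}

def regional_description(pixel_vectors):
    """Sum per-pixel memberships into the region histogram; the argmax
    combination is the regional semantic description."""
    hist = {}
    for pv in pixel_vectors:
        for v, mu in pv.items():
            hist[v] = hist.get(v, 0.0) + mu
    return max(hist, key=hist.get), hist
```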
Global Semantic Description of Images
• In order to describe the global character of an image, the average lightness, average saturation and average color contrast of the image are defined as follows.
• Suppose the image I has n pixels.
Global lightness: L_avg = (1/n) Σi Li
Global saturation: C_avg = (1/n) Σi Ci
Global contrast (the spread of the pixel colors in the a*b* plane): K = sqrt( (1/n) Σi [ (ai - a_avg)^2 + (bi - b_avg)^2 ] ), where a_avg and b_avg are the mean a* and b* values
where Li, Ci, ai, bi denote the L*, C*, a*, and b* values of the i-th pixel in the image, respectively.
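The three global descriptors can be computed in one pass; the contrast formula below (standard deviation of the pixels in the a*b* plane) is a reconstruction, since the slide's equation was an image:

```python
import math

def global_descriptors(pixels):
    """pixels: list of (L, C, a, b) tuples, one per pixel.
    Returns (global lightness, global saturation, global contrast)."""
    n = len(pixels)
    L_avg = sum(p[0] for p in pixels) / n
    C_avg = sum(p[1] for p in pixels) / n
    a_avg = sum(p[2] for p in pixels) / n
    b_avg = sum(p[3] for p in pixels) / n
    # Contrast as the spread of (a, b) around the mean chromaticity
    contrast = math.sqrt(sum((p[2] - a_avg) ** 2 + (p[3] - b_avg) ** 2
                             for p in pixels) / n)
    return L_avg, C_avg, contrast
```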
Image Query using Semantic Descriptions of Colors
• <expression> ::= <global exp> | <regional exp> | <regional exp> or <global exp> | <regional exp> and <global exp>.
• <global exp> ::= global <global exp>.
• <global exp> ::= lightness <comparison> <light_val> | saturation <comparison> <saturation_val> | contrast <comparison> <contrast_val> | not <global exp> | <global exp> or <global exp> | <global exp> and <global exp>.
• <regional exp> ::= <region attributes> <comparison> <number> | not <regional exp> | <regional exp> and <regional exp> | <regional exp> or <regional exp>.
• <region attributes> ::= <region size>.
• <region size> ::= the percentage of the <region>.
• <region> ::= hue equal to <hue_val> | lightness <comparison> <light_val> | lightness <comparison> <grey light_val> | saturation <comparison> <saturation_val> | not <region> | <region> or <region> | <region> and <region>.
• <hue_val> ::= red | orange | yellow | green | blue | purple.
• <comparison> ::= is greater than | is equal to | is less than | is greater than or equal to | is less than or equal to.
• <light_val> ::= very dark | dark | medium | light | very light.
• <saturation_val> ::= colorless | weak | moderate | strong | very strong.
• <contrast_val> ::= very low | low | medium | high | very high.
• <grey light_val> ::= black | dark grey | middle grey | light grey | white.
Result - sad
• (the percentage of the ((light is less than middle light and hue is equal to blue) or (light is less than middle light grey)) is greater than 40%) or (global average lightness is less than dark)
Result - warm
• (the percentage of the ((light is greater than dark and saturation is greater than weak) and (hue equal to red or hue equal to orange or hue equal to yellow)) is greater than 20%) and (global light is greater than very dark).
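Once the regional percentages and global terms are precomputed, the "warm" query above reduces to a boolean predicate; a hypothetical sketch (the input names and the ordering of the lightness terms are assumptions):

```python
def warm_query(pct_warm, global_light_term):
    """pct_warm: fraction of the image belonging to bright-enough,
    saturated-enough red/orange/yellow regions (assumed precomputed by the
    earlier stages); global_light_term: the image's global lightness term."""
    scale = ['very dark', 'dark', 'medium', 'light', 'very light']
    # "greater than 20%" and "global light is greater than very dark"
    return pct_warm > 0.20 and scale.index(global_light_term) > scale.index('very dark')
```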