
Tagging like Humans: Diverse and Distinct Image Annotation

Baoyuan Wu^1, Weidong Chen^1, Peng Sun^1, Wei Liu^1, Bernard Ghanem^2, and Siwei Lyu^3

^1 Tencent AI Lab   ^2 KAUST   ^3 University at Albany, SUNY

[email protected], [email protected], [email protected], [email protected],

[email protected], [email protected]

Abstract

In this work we propose a new automatic image annotation model, dubbed diverse and distinct image annotation (D2IA). The generative model D2IA is inspired by the ensemble of human annotations, which creates semantically relevant, yet distinct and diverse tags. In D2IA, we generate a relevant and distinct tag subset, in which the tags are relevant to the image contents and semantically distinct from each other, using sequential sampling from a determinantal point process (DPP) model. Multiple such tag subsets, covering diverse semantic aspects or diverse semantic levels of the image contents, are generated by randomly perturbing the DPP sampling process. We leverage a generative adversarial network (GAN) model to train D2IA. Extensive experiments, including quantitative and qualitative comparisons as well as human subject studies, on two benchmark datasets demonstrate that the proposed model can produce more diverse and distinct tags than state-of-the-art methods.

1. Introduction

Image annotation is one of the fundamental tasks of computer vision, with many applications in image retrieval, caption generation, and visual recognition. Given an input image, an image annotator outputs a set of keywords (tags) that are relevant to the content of the image. Although impressive progress has been made by current image annotation algorithms, to date most of them [31, 26, 12] focus on the relevance of the obtained tags to the image, with little consideration of their inter-dependencies. As a result, algorithmically generated tags for an image are relevant but at the same time less informative, with redundancy among the obtained tags; e.g., one state-of-the-art image annotation algorithm, ML-MG [26], generates the tautology 'people' and 'person' for the image in Fig. 1(f).

This is different from how human annotators work. We illustrate this using an annotation task involving three human annotators (identified as A1, A2 and A3). Each annotator was asked to independently annotate the first 1,000 test images in the IAPRTC-12 dataset [8] with the requirement of "describing the main contents of one image using as few tags as possible". One example of the annotation results is presented in Fig. 1. Note that individual human annotators tend to use semantically distinct tags (see Fig. 1(b)-(d)), and the semantic redundancy among their tags is lower than that among the tags generated by the annotation algorithm ML-MG [26] (see Fig. 1(f)). Improving the semantic distinctiveness of generated tags has been studied in recent work [23], which uses a determinantal point process (DPP) model [10] to produce tags with less semantic redundancy. The annotation result of running this algorithm on the example image is shown in Fig. 1(g).

However, such results still fall short in one aspect when compared with the annotations from the ensemble of human annotators (see Fig. 1(e)). The collective annotations from human annotators also tend to be diverse, consisting of tags that cover more semantic elements of the image. For instance, different human annotators tend to use tags across different abstraction levels, such as 'church' vs. 'building', to describe the image. Furthermore, different human annotators usually focus on different parts or elements of the image. For example, A1 describes the scene as 'square', A2 notices the 'yellow' color of the building, while A3 finds the 'camera' worn on the chest of people.

In this work, we propose a novel image annotation model, namely diverse and distinct image annotation (D2IA), which aims to improve the diversity and distinctiveness of the tags for an image by learning a generative model of tags from multiple human annotators. The distinctiveness enforces the semantic redundancy among the tags in the same subset to be small, while the diversity encourages different tag subsets to cover different aspects or different semantic levels of the image contents.

Figure 1. An example illustrating diversity and distinctiveness in image annotation. The image (a) is from IAPRTC-12 [8]. We present the tagging results from 3 independent human annotators (b)-(d), identified as A1, A2, A3, respectively, as well as their ensemble result (e). We also present the results of some automatic annotation methods. ML-MG [26] (f) is a standard annotation method that requires the relevant tags. DIA (ensemble) [23] (g) indicates that we repeat the sampling of DIA 3 times, with the requirement that each subset includes at most 5 tags, and then combine these 3 subsets into one ensemble subset. Similarly, we obtain the ensemble subset of our method (h). In each graph, nodes are candidate tags and the arrows connect parent and child tags in the semantic hierarchy. This figure is better viewed in color.

Figure 2. A schematic illustration of the structure of the proposed D2IA-GAN model. $S_{DD-I}$ indicates the ground-truth set of diverse and distinct tag subsets for the image $I$, which will be defined in Section 3.

Specifically, this generative model first maps the concatenation of the image feature vector and a random noise vector to a posterior probability with respect to all candidate tags, and then incorporates it into a determinantal point process (DPP) model [10] to generate a distinct tag subset by sequential sampling. Utilizing multiple random noise vectors for the same image, multiple diverse tag subsets are sampled.

We train D2IA as the generator in a generative adversarial network (GAN) model [7] given a large amount of human annotation data; the resulting model is subsequently referred to as D2IA-GAN. The discriminator of D2IA-GAN is a neural network measuring the relevance between the image feature and the tag subset, which aims to distinguish the generated tag subsets from the ground-truth tag subsets provided by human annotators. The general structure of the D2IA-GAN model is shown in Fig. 2. D2IA-GAN is trained by alternately optimizing the generator and the discriminator, fixing one while updating the other, until convergence.

One characteristic of the D2IA-GAN model is that its generator includes a sampling step, which is not easy to optimize directly using gradient-based optimization methods. Inspired by reinforcement learning algorithms, we develop a method based on the policy gradient (PG) algorithm, where we model the discrete sampling with a differentiable policy function (a neural network), and devise a reward that encourages the generated tag subset to match the image content as closely as possible. Incorporating the policy gradient algorithm into the training of D2IA-GAN, we can effectively obtain the generative model for tags conditioned on the image. As shown in Fig. 1(h), the trained generator of D2IA-GAN can produce diverse and distinct tags that are closer to those generated from the ensemble of multiple human annotators (Fig. 1(e)).

The main contributions of this work are four-fold. (1) We develop a new image annotation method, namely diverse and distinct image annotation (D2IA), to create relevant, yet distinct and diverse annotations for an image, which are more similar to tags provided by different human annotators for the same image; (2) we formulate the problem as learning a probabilistic generative model of tags conditioned on the image content, which exploits a DPP model to ensure distinctiveness and applies random perturbations to improve the diversity of the generated tags; (3) the generative model is adversarially trained using a specially designed GAN model that we term D2IA-GAN; (4) in the training of D2IA-GAN we use the policy gradient algorithm to handle the discrete sampling process in the generative model. We perform experimental evaluations on the ESP Game [20] and IAPRTC-12 [8] image annotation datasets, as well as subject studies based on human annotators for the quality of the generated tags. The evaluation results show that the tag sets produced by D2IA-GAN are more diverse and distinct than those generated by state-of-the-art methods.

2. Related Work

Existing image annotation methods fall into two general categories: they either generate all tags simultaneously using multi-label learning, or predict tags sequentially using sequence generation. The majority of existing image annotation methods are in the first category. They mainly differ in designing different loss functions or exploring different class dependencies. Typical loss functions include the square loss [31, 22], ranking loss [6, 12], and cross-entropy loss [33]. Commonly used class dependencies include class co-occurrence [25, 28, 13], mutual exclusion [2, 29], class cardinality [27], sparse and low-rank structure [24], and semantic hierarchy [26]. Besides, some multi-label learning methods consider different learning settings, such as multi-label learning with missing labels [25, 28], label propagation in semi-supervised learning [15, 5, 4], and transfer learning [18]. A thorough review of multi-label learning based image annotation methods can be found in [32].

Our method falls into the second category, which generates tags in a sequential manner. This can better exploit the inter-dependencies of the tags. Many methods in this category are built on sequential models, such as recurrent neural networks (RNNs), which work in coordination with convolutional neural networks (CNNs) to exploit their representation power for images. The main difference between these works lies in the interface between the CNN and the RNN. In [9], features extracted by a CNN model were used as the hidden states of an RNN. In [21], the CNN features were integrated with the output of an RNN. In [14], the predictions of a CNN were used as the hidden states of an RNN, and the ground-truth tags of images were used to supervise the training of the CNN. Rather than directly using the output layer of an RNN, the work in [11] utilized the Fisher vector derived from the gradient of the RNN as the feature representation.

Although the RNN is a suitable model for the sequential image annotation task, due to its ability to implicitly encode the dependencies among tags, it is not easy to explicitly embed prior knowledge about tag dependencies, such as the semantic hierarchy [26] or mutual exclusion [2], into the RNN model. To remedy this issue, the recent work of DIA [23] formulated the sequential prediction as a sampling process based on a determinantal point process (DPP) [10]. DIA encodes the class co-occurrence into the learning process, and incorporates the semantic hierarchy into the sampling process. Another important difference between DIA and the RNN-based methods is that the former explicitly embeds the negative correlations among tags, i.e., avoiding the use of semantically similar tags for the same image, while RNN-based methods typically ignore such negative correlations. The main reason is that the objective of DIA is to describe an image with a few diverse and relevant tags, while most other methods tend to predict the most relevant tags.

Our proposed model D2IA-GAN is inspired by DIA, and both are developed based on observations of human annotations. Yet, there are several significant differences between them. The most important difference is in their objectives. DIA aims to simulate a single human annotator who uses semantically distinct tags for an image, while D2IA-GAN aims to simulate multiple human annotators simultaneously, to capture the diversity among human annotators. They are also different in the training process, which will be reviewed in Section 4. Besides, in DIA [23], 'diverse/diversity' refers to the semantic difference between tags in the same tag subset, for which we use the word 'distinct/distinctiveness' in this work. We use 'diverse/diversity' to indicate the semantic difference between multiple tag subsets for the same image.

3. Background

Weighted semantic paths. Weighted semantic paths [23] are constructed based on the semantic hierarchy and synonyms [26] among all candidate tags. To construct a weighted semantic path, we treat each tag as a node, and synonyms are merged into one node. Then, starting from each leaf node in the semantic hierarchy, we connect its direct parent node and repeat this connection process until the root node is reached. All tags visited in this process form the weighted semantic path of the leaf tag. The weight of each tag in the semantic path is computed to be inversely proportional to the node layer (the layer number starts from 0 at leaf nodes) and the number of descendants of the node. As such, the weight of a tag carrying more specific information will be larger. A brief example of weighted semantic paths is shown in Fig. 3. We use $SP_{\mathcal{T}}$ to denote the semantic paths of the set $\mathcal{T}$ of all candidate tags, $SP_T$ the semantic paths of the tag subset $T$, and $SP_I$ the weighted semantic paths of all ground-truth tags of image $I$.
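As a concrete illustration of the path construction described above, the following sketch builds leaf-to-root paths from a parent map and attaches a weight to each node. The exact weighting formula is not given in this section; the 1 / ((layer + 1) * (n_descendants + 1)) form below is only an illustrative assumption that is inversely proportional to both the layer and the number of descendants.

```python
# Sketch: construct weighted semantic paths from a tag hierarchy.
# The weighting formula is an assumption, not the one used in [23].
from collections import defaultdict

def build_weighted_paths(parent):
    """parent: dict mapping each tag to its direct parent (roots map to None)."""
    children = defaultdict(list)
    for tag, par in parent.items():
        if par is not None:
            children[par].append(tag)

    def n_descendants(tag):
        return sum(1 + n_descendants(c) for c in children[tag])

    leaves = [t for t in parent if not children[t]]
    paths = {}
    for leaf in leaves:
        path, node, layer = [], leaf, 0
        while node is not None:
            weight = 1.0 / ((layer + 1) * (n_descendants(node) + 1))  # assumed form
            path.append((node, layer, n_descendants(node), weight))
            node, layer = parent[node], layer + 1
        paths[leaf] = path
    return paths

# Example mirroring Fig. 3: lady -> woman -> person, cactus -> plant, cat -> animal.
parent = {"lady": "woman", "woman": "person", "person": None,
          "cactus": "plant", "plant": None, "cat": "animal", "animal": None}
print(build_weighted_paths(parent)["lady"])
```

With this assumed formula the leaf tag 'lady' receives the largest weight on its path, matching the intent that more specific tags carry more weight.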

Figure 3. A brief example of weighted semantic paths. Each box denotes a tag, and an arrow a → b indicates that tag a is the semantic parent of tag b. The bracket next to each box gives the corresponding (node layer, number of descendants, tag weight). Boxes connected by arrows form a semantic path.

Diverse and distinct tag subsets. Given an image $I$ and its ground-truth semantic paths $SP_I$, a tag subset is distinct if no two of its tags are sampled from the same semantic path. An example is shown in Fig. 3: $SP_I$ = {lady → woman → (people, person), cactus → plant, cat → animal} includes 3 semantic paths with 7 tags, and distinct tag subsets sampled from it include, e.g., {lady, plant} or {woman, cactus}. A tag set is diverse if it includes multiple distinct tag subsets. These subsets cover different contents of the image, for two possible reasons: 1) they describe different contents of the image, or 2) they describe the same content but at different semantic levels. As shown in Fig. 3, we can construct a diverse set of distinct tag subsets such as {{lady, cactus}, {plant, cat}, {woman, plant, animal}}. Furthermore, we can construct all possible distinct tag subsets (ignoring subsets with a single tag) to obtain the complete diverse set of distinct tag subsets, referred to as $S_{DD-I}$. Specifically, for subsets with 2 tags, we pick 2 paths out of 3 and sample one tag from each picked path, obtaining 16 distinct subsets in total. For subsets with 3 tags, we sample one tag from each semantic path, leading to 12 distinct subsets. $S_{DD-I}$ will be used as the ground truth to train the proposed model.
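The enumeration of $S_{DD-I}$ described above can be written compactly as below; the helper name is ours, and the path contents follow the Fig. 3 example (synonyms such as 'people'/'person' merged into one node). The printed counts reproduce the 16 two-tag and 12 three-tag subsets mentioned in the text.

```python
# Sketch: enumerate the complete diverse set of distinct tag subsets S_DD-I
# from an image's ground-truth semantic paths (single-tag subsets ignored).
from itertools import combinations, product

def enumerate_distinct_subsets(paths):
    """paths: list of semantic paths, each a list of tags (synonyms merged)."""
    subsets = []
    for r in range(2, len(paths) + 1):           # at least 2 tags per subset
        for chosen in combinations(paths, r):    # pick r different paths
            for tags in product(*chosen):        # one tag from each picked path
                subsets.append(frozenset(tags))
    return subsets

paths = [["lady", "woman", "person"], ["cactus", "plant"], ["cat", "animal"]]
s_dd = enumerate_distinct_subsets(paths)
print(sum(len(s) == 2 for s in s_dd), sum(len(s) == 3 for s in s_dd))  # 16 12
```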

Conditional DPP. We use a conditional determinantal point process (DPP) model to measure the probability of a tag subset $T$, drawn from the ground set $\mathcal{T}$, given a feature $x$ of the image $I$. The DPP model is formulated as
$$\mathcal{P}(T \mid I) = \frac{\det\big(L_T(I)\big)}{\det\big(L_{\mathcal{T}}(I) + I\big)}, \qquad (1)$$
where $L_{\mathcal{T}}(I) \in \mathbb{R}^{|\mathcal{T}| \times |\mathcal{T}|}$ is a positive semi-definite kernel matrix and $I$ denotes the identity matrix. For clarity, the parameters of $L_{\mathcal{T}}(I)$ and (1) have been omitted. The sub-matrix $L_T(I) \in \mathbb{R}^{|T| \times |T|}$ is constructed by extracting the rows and columns corresponding to the tag indexes in $T$. For example, assuming $L_{\mathcal{T}}(I) = [a_{ij}]_{i,j=1,2,3,4}$ and $T = \{2, 4\}$, then $L_T(I) = [a_{22}, a_{24}; a_{42}, a_{44}]$. The determinant $\det(L_T(I))$ encodes the negative correlations among the tags in the subset $T$.

Learning the kernel matrix $L_{\mathcal{T}}(I)$ directly is often difficult, especially when $|\mathcal{T}|$ is large. To alleviate this problem, we decompose it as $L_{\mathcal{T}}(i, j) = v_i \phi_i^{\top} \phi_j v_j$, where the scalar $v_i$ indicates the individual score with respect to tag $i$, and $v_{\mathcal{T}} = [v_1, \ldots, v_i, \ldots, v_{|\mathcal{T}|}]$. The vector $\phi_i \in \mathbb{R}^{d'}$ corresponds to the direction of tag $i$, with $\|\phi_i\| = 1$, and can be used to construct the semantic similarity matrix $S_{\mathcal{T}} \in \mathbb{R}^{|\mathcal{T}| \times |\mathcal{T}|}$ with $S_{\mathcal{T}}(i, j) = \phi_i^{\top} \phi_j$. With this decomposition, we can learn $v_{\mathcal{T}}$ and $S_{\mathcal{T}}$ separately. More details of DPPs can be found in [10]. In this work, $S_{\mathcal{T}}$ is pre-computed as
$$S_{\mathcal{T}}(i, j) = \frac{1}{2} + \frac{\langle t_i, t_j \rangle}{2\,\|t_i\|_2 \|t_j\|_2} \in [0, 1] \quad \forall\, i, j \in \mathcal{T}, \qquad (2)$$
where the tag representation $t_i \in \mathbb{R}^{50}$ is derived from the GloVe algorithm [17], $\langle \cdot, \cdot \rangle$ denotes the inner product of two vectors, and $\|\cdot\|_2$ denotes the $\ell_2$ norm of a vector.
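A minimal numerical sketch of Eqs. (1)-(2) is given below. It uses the quality/similarity decomposition $L(i,j) = v_i\, S(i,j)\, v_j$; variable names are ours, and random vectors stand in for the 50-d GloVe embeddings.

```python
# Sketch: conditional DPP probability of a tag subset (Eq. 1) with the
# similarity matrix built from tag embeddings (Eq. 2).
import numpy as np

def similarity_matrix(tag_vecs):
    """Eq. (2): S(i, j) = 1/2 + <t_i, t_j> / (2 ||t_i|| ||t_j||), in [0, 1]."""
    norms = np.linalg.norm(tag_vecs, axis=1, keepdims=True)
    cos = (tag_vecs @ tag_vecs.T) / (norms * norms.T)
    return 0.5 + 0.5 * cos

def dpp_prob(subset, quality, S):
    """Eq. (1): P(T | I) = det(L_T) / det(L + I), with L = diag(v) S diag(v)."""
    L = np.diag(quality) @ S @ np.diag(quality)
    idx = np.asarray(subset)
    L_T = L[np.ix_(idx, idx)]
    return np.linalg.det(L_T) / np.linalg.det(L + np.eye(len(quality)))

rng = np.random.default_rng(0)
S = similarity_matrix(rng.normal(size=(5, 50)))   # 5 candidate tags, 50-d embeddings
v = rng.uniform(0.1, 1.0, size=5)                  # per-tag quality scores
print(dpp_prob([1, 3], v, S))
```

Because two semantically similar tags have nearly parallel rows in $S$, a subset containing both yields a nearly singular $L_T$ and hence a small determinant, which is how the DPP discourages redundant tags.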

k-DPP sampling with weighted semantic paths. k-DPP sampling [10] is a sequential sampling process that obtains a tag subset $T$ with at most $k$ tags, according to the distribution (1) and the weighted semantic paths $SP_{\mathcal{T}}$. It is denoted as $\mathcal{S}_{k\text{-DPP}, SP_{\mathcal{T}}}(v_{\mathcal{T}}, S_{\mathcal{T}})$ subsequently. Specifically, in each sampling step, the newly sampled tag is checked against the previously sampled tags: if it does not lie on the same semantic path as any of them, it is included in the tag subset; otherwise it is abandoned and we go on sampling the next tag, until $k$ tags are obtained. The whole sampling process is repeated multiple times to obtain different tag subsets, and the subset with the largest sum of tag weights is picked as the final output. Note that a larger weight sum indicates more semantic information. Since the tag weights are pre-defined when introducing the weighted semantic paths, this is an objective criterion for picking the subset.
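The following is a much-simplified stand-in for this sampling procedure: tags are drawn one at a time (here simply in proportion to their quality scores rather than by the exact conditional DPP probabilities of [10, 23]), a draw is rejected if it shares a semantic path with an already-accepted tag, and among several repetitions the subset with the largest total tag weight is returned. All names are ours.

```python
# Simplified sketch of sequential sampling with the semantic-path constraint.
import numpy as np

def sample_distinct_subset(quality, path_of_tag, k, rng):
    chosen, used_paths = [], set()
    p = quality / quality.sum()
    while len(chosen) < k:
        t = rng.choice(len(quality), p=p)
        if path_of_tag[t] not in used_paths:
            chosen.append(t)
            used_paths.add(path_of_tag[t])
        if len(used_paths) == len(set(path_of_tag)):   # no unused paths remain
            break
    return chosen

def pick_best_subset(quality, path_of_tag, tag_weight, k, n_repeats=10, seed=0):
    rng = np.random.default_rng(seed)
    candidates = [sample_distinct_subset(quality, path_of_tag, k, rng)
                  for _ in range(n_repeats)]
    return max(candidates, key=lambda T: sum(tag_weight[t] for t in T))

q = np.array([0.9, 0.7, 0.6, 0.8, 0.3])   # per-tag quality scores
paths = [0, 0, 1, 1, 2]                    # tag index -> semantic-path id
w = np.array([0.5, 0.3, 0.6, 0.4, 1.0])    # per-tag weights
print(pick_best_subset(q, paths, w, k=3))
```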

4. D2IA-GAN Model

Given an image $I$, we aim to generate a diverse tag set including multiple distinct tag subsets relevant to the image content, as well as an ensemble tag subset of these distinct subsets, which provides a comprehensive description of $I$. These tags are sampled from a generative model conditioned on the image, and we use a conditional GAN (CGAN) [16, 30, 1] to train it, with the generator $G$ being our model and $D$ a discriminator, as shown in Fig. 2. Specifically, conditioned on $I$, $G$ projects one noise vector $z$ to one distinct tag subset $T$, and uses different noise vectors to obtain diverse tag subsets. $D$ serves as an adversary of $G$, aiming to distinguish the tag subsets generated by $G$ from the ground-truth ones in $S_{DD-I}$.

4.1. Generator

The tag subset $T \subset \mathcal{T} = \{1, 2, \ldots, m\}$ with $|T| \leq k$ is generated by the generator $G_{\theta}(I, z)$, according to the input image $I$ and a noise vector $z$, as follows:
$$G_{\theta}(I, z; S_{\mathcal{T}}, SP_{\mathcal{T}}, k) \sim \mathcal{S}_{k\text{-DPP}, SP_{\mathcal{T}}}\Big(\sqrt{q_{\mathcal{T}}(I, z)},\; S_{\mathcal{T}}\Big). \qquad (3)$$
The above generator is a composite function with two parts. The inner part $q_{\mathcal{T}}(I, z) = \sigma\big(W^{\top}[f_G(I); z] + b_G\big) \in [0, 1]^{|\mathcal{T}|}$ is a CNN-based soft classifier, where $f_G(I)$ represents the output vector of the fully-connected layer of a CNN model, $[a_1; a_2]$ denotes the concatenation of two vectors $a_1$ and $a_2$, $\sigma(a) = \frac{1}{1 + \exp(-a)}$ is the sigmoid function, and $\sqrt{a}$ indicates the element-wise square root of a vector $a$. The parameter matrix $W = [w_1, \ldots, w_i, \ldots, w_m]$ and the bias parameter $b_G$ map the feature vector $[f_G(I); z] \in \mathbb{R}^{d}$ to the logit vector. The trainable parameter $\theta$ includes $W$, $b_G$ and the parameters of $f_G$. The noise vector $z$ is sampled from the uniform distribution $U[-1, 1]$. The outer part $\mathcal{S}_{k\text{-DPP}, SP_{\mathcal{T}}}\big(\sqrt{q_{\mathcal{T}}(I, z)}, S_{\mathcal{T}}\big)$ is the k-DPP sampling with weighted semantic paths $SP_{\mathcal{T}}$ (see Section 3). Using $\sqrt{q_{\mathcal{T}}(I, z)}$ as the quality term together with the pre-defined similarity matrix $S_{\mathcal{T}}$, a conditional DPP model can be constructed as described in Section 3.
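The inner part of Eq. (3) amounts to one fully-connected layer over the concatenated image feature and noise, as sketched below. The noise dimension of 100 and the toy inputs are assumptions; the tag count 268 and feature dimension 597 follow the ESP Game statistics given in Section 5.

```python
# Sketch: the generator's quality term, sqrt(sigmoid(W^T [f_G(I); z] + b_G)).
import numpy as np

def generator_quality(f_img, W, b, rng, noise_dim=100):
    z = rng.uniform(-1.0, 1.0, size=noise_dim)        # random perturbation z ~ U[-1, 1]
    x = np.concatenate([f_img, z])                    # [f_G(I); z] in R^d
    q = 1.0 / (1.0 + np.exp(-(W.T @ x + b)))          # per-tag posteriors in [0, 1]^m
    return np.sqrt(q), z                              # quality term for k-DPP sampling

rng = np.random.default_rng(0)
m, feat_dim, noise_dim = 268, 597, 100                # e.g. ESP Game sizes; noise_dim assumed
W = rng.normal(scale=0.01, size=(feat_dim + noise_dim, m))
b = np.zeros(m)
quality, z = generator_quality(rng.normal(size=feat_dim), W, b, rng, noise_dim)
```

Feeding a different noise vector $z$ produces a different quality vector and hence a different DPP distribution, which is precisely what yields diverse tag subsets for the same image.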

4.2. Discriminator

$D_{\eta}(I, T)$ evaluates the relevance between image $I$ and tag subset $T$: it outputs a value in $[0, 1]$, with 1 meaning the highest relevance and 0 the lowest. Specifically, $D_{\eta}$ is constructed as follows. First, as described in Section 3, each tag $i \in T$ is represented by a vector $t_i \in \mathbb{R}^{50}$ derived from the GloVe algorithm [17]. Then, we formulate $D_{\eta}(I, T)$ as
$$D_{\eta}(I, T) = \frac{1}{|T|} \sum_{i \in T} \sigma\big(w_D^{\top}[f_D(I); t_i] + b_D\big), \qquad (4)$$
where $f_D(I)$ denotes the output vector of the fully-connected layer of a CNN model (different from that used in the generator). The parameter $\eta$ includes $w_D \in \mathbb{R}^{|f_D(I)| + 50}$, $b_D \in \mathbb{R}$ and the parameters of $f_D(I)$ in the CNN model.
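Eq. (4) can be mirrored directly in code: each tag embedding is concatenated with the image feature, scored by a linear layer plus sigmoid, and the scores are averaged over the subset. Dimensions and the random stand-ins for GloVe vectors below are illustrative only.

```python
# Sketch: the discriminator of Eq. (4), averaging per-tag relevance scores.
import numpy as np

def discriminator(f_img, tag_vecs, subset, w_D, b_D):
    scores = []
    for i in subset:
        x = np.concatenate([f_img, tag_vecs[i]])      # [f_D(I); t_i]
        scores.append(1.0 / (1.0 + np.exp(-(w_D @ x + b_D))))
    return float(np.mean(scores))                     # relevance in [0, 1]

rng = np.random.default_rng(0)
feat_dim, n_tags = 597, 268
tag_vecs = rng.normal(size=(n_tags, 50))              # stand-in for GloVe vectors
w_D = rng.normal(scale=0.01, size=feat_dim + 50)
print(discriminator(rng.normal(size=feat_dim), tag_vecs, [3, 17, 42], w_D, 0.0))
```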

4.3. Conditional GAN

Following the general training procedure, we learn D2IA-GAN by iterating two steps until convergence: (1) fixing the discriminator $D_{\eta}$ and optimizing the generator $G_{\theta}$ using (5), as shown in Section 4.3.1; (2) fixing $G_{\theta}$ and optimizing $D_{\eta}$ using (8), as shown in Section 4.3.2.
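Schematically, the alternating optimization looks as follows; update_generator and update_discriminator are placeholders for the two steps detailed in Sections 4.3.1 and 4.3.2.

```python
# Minimal sketch of the alternating optimization of D2IA-GAN.
def train_d2ia_gan(images, n_epochs, update_generator, update_discriminator):
    for _ in range(n_epochs):
        for I in images:
            update_generator(I)        # optimize G_theta with D_eta fixed, Eq. (5)
            update_discriminator(I)    # optimize D_eta with G_theta fixed, Eq. (8)
```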

4.3.1 Optimizing Gθ

Given $D_{\eta}$, we learn $G_{\theta}$ by
$$\min_{\theta}\; \mathbb{E}_{z \sim U[-1,1]}\Big[\log\big(1 - D_{\eta}\big(I, G_{\theta}(I, z)\big)\big)\Big]. \qquad (5)$$
For clarity, we only show the case with one training image $I$ in the above formulation. Due to the discrete sampling process $\mathcal{S}(v_{\mathcal{T}}, S_{\mathcal{T}})$ in $G_{\theta}(I, z)$, we cannot optimize (5) using any existing continuous optimization algorithm. To address this issue, we view the sequential generation of tags as being controlled by a continuous policy function, which weighs different choices of the next tag based on the image and the tags already generated. As such, we can use the policy gradient (PG) algorithm from reinforcement learning for its optimization. Given a tag subset $T_G$ sampled from $\mathcal{S}(v_{\mathcal{T}}, S_{\mathcal{T}})$, the original objective function (5) is approximated by a continuous function. Specifically, we denote $T_G = \{y_{[1]}, y_{[2]}, \ldots, y_{[k]}\}$, where $[i]$ indicates the sampling order, and its subset $T_{G-i} = \{y_{[1]}, \ldots, y_{[i]}\}$, $i \leq k$, includes the first $i$ tags in $T_G$. Then, with an instantiated $z$ sampled from $U[-1, 1]$, the approximated function is formulated as
$$J_{\theta}(T_G) = \sum_{i=1}^{k} R(I, T_{G-i}) \log\Big(\prod_{t_1 \in T_{G-i}} q^{1}_{t_1} \prod_{t_2 \in \mathcal{T} \setminus T_{G-i}} q^{0}_{t_2}\Big), \qquad (6)$$
where $\mathcal{T} \setminus T_{G-i}$ denotes the relative complement of $T_{G-i}$ with respect to $\mathcal{T}$, $q^{1}_{t} = \sigma\big(w_t^{\top}[f_G(I); z] + b_G(t)\big)$ indicates the posterior probability, and $q^{0}_{t} = 1 - q^{1}_{t}$. The reward function $R(I, T_G)$ encourages the content of $I$ and the tags $T_G$ to be consistent, and is defined as
$$R(I, T_G) = -\log\big(1 - D_{\eta}(I, T_G)\big). \qquad (7)$$
Compared to a full PG objective function, in (6) we have replaced the return with the immediate reward, and the policy probability with the decomposed likelihood $\prod_{t_1 \in T_{G-i}} q^{1}_{t_1} \prod_{t_2 \in \mathcal{T} \setminus T_{G-i}} q^{0}_{t_2}$. Consequently, it is easy to compute the gradient $\frac{\partial J_{\theta}(T_G)}{\partial \theta}$, which is used in the stochastic gradient ascent algorithm with back-propagation [19] to update $\theta$.

When generating $T_G$ during training, we repeat the sampling process multiple times to obtain different subsets. Then, as the ground-truth set $S_{DD-I}$ for each training image is available, the semantic $F_{1\text{-}sp}$ score (see Section 5) of each generated subset can be computed, and the one with the largest $F_{1\text{-}sp}$ score is used to update the parameters. This process encourages the model to generate tag subsets more consistent with the evaluation metric.
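The surrogate objective in Eqs. (6)-(7) can be evaluated as sketched below. This is written in plain numpy for clarity; in practice the gradient with respect to $\theta$ flows through the posteriors $q$ via back-propagation (e.g. in TensorFlow). T_G lists the sampled tags in sampling order, q holds $q^1_t$ for every candidate tag, and d_values stands for the discriminator outputs $D_\eta(I, T_{G-i})$, all assumed to be precomputed.

```python
# Sketch: the policy-gradient surrogate objective of Eqs. (6)-(7).
import numpy as np

def reward(d_value):
    """Eq. (7): R(I, T_G) = -log(1 - D_eta(I, T_G))."""
    return -np.log(1.0 - d_value)

def surrogate_objective(T_G, q, d_values):
    """Eq. (6): sum_i R(I, T_{G-i}) * log( prod_{t in T_{G-i}} q^1_t
                                          * prod_{t not in T_{G-i}} q^0_t )."""
    J = 0.0
    for i in range(1, len(T_G) + 1):
        prefix = set(T_G[:i])                       # T_{G-i}: first i sampled tags
        log_lik = sum(np.log(q[t]) if t in prefix else np.log(1.0 - q[t])
                      for t in range(len(q)))
        J += reward(d_values[i - 1]) * log_lik
    return J

q = np.array([0.8, 0.1, 0.7, 0.2, 0.6])    # posteriors q^1_t for 5 candidate tags
T_G = [0, 2, 4]                             # sampled subset, in sampling order
d_values = [0.4, 0.55, 0.7]                 # D_eta(I, T_{G-i}) for i = 1, 2, 3
print(surrogate_objective(T_G, q, d_values))
```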

4.3.2 Optimizing Dη

Utilizing the generated tag subset $T_G$ from the fixed generator $G_{\theta}(I, z)$, we learn $D_{\eta}$ by
$$\max_{\eta}\; \frac{1}{|S_{DD-I}|} \sum_{T \in S_{DD-I}} \Big[\beta \log D_{\eta}(I, T) - (1 - \beta)\big(D_{\eta}(I, T) - F_{1\text{-}sp}(I, T)\big)^2\Big] + \beta \log\big(1 - D_{\eta}(I, T_G)\big) - (1 - \beta)\big(D_{\eta}(I, T_G) - F_{1\text{-}sp}(I, T_G)\big)^2, \qquad (8)$$
where the semantic score $F_{1\text{-}sp}(I, T)$ measures the relevance between the tag subset $T$ and the content of $I$. If we set the trade-off parameter $\beta = 1$, then (8) is equivalent to the objective used in the standard GAN model. For $\beta \in (0, 1)$, (8) also encourages the updated $D_{\eta}$ to stay close to the semantic score $F_{1\text{-}sp}(I, T)$. We can then compute the gradient of (8) with respect to $\eta$, and use the stochastic gradient ascent algorithm with back-propagation [19] to update $\eta$.
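For a single image, the objective in Eq. (8) can be evaluated as below. The discriminator outputs d_real/d_fake and the F1-sp scores are assumed to be precomputed; the function only shows how the two terms are combined and traded off by beta.

```python
# Sketch: discriminator objective of Eq. (8) for one training image.
import numpy as np

def discriminator_objective(d_real, f1_real, d_fake, f1_fake, beta=0.5):
    """d_real/f1_real: values for the ground-truth subsets in S_DD-I;
       d_fake/f1_fake: values for the generated subset T_G."""
    d_real, f1_real = np.asarray(d_real), np.asarray(f1_real)
    real_term = np.mean(beta * np.log(d_real)
                        - (1.0 - beta) * (d_real - f1_real) ** 2)
    fake_term = (beta * np.log(1.0 - d_fake)
                 - (1.0 - beta) * (d_fake - f1_fake) ** 2)
    return real_term + fake_term     # maximized with respect to eta

print(discriminator_objective(d_real=[0.8, 0.6], f1_real=[0.7, 0.5],
                              d_fake=0.3, f1_fake=0.4))
```

Setting beta=1.0 recovers the standard GAN discriminator objective, while smaller beta pulls the discriminator output towards the semantic F1-sp score, as stated above.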

5. Experiments

5.1. Experimental Settings

Datasets. We adopt two benchmark datasets, ESP Game [20] and IAPRTC-12 [8], for evaluation. One important reason for choosing these two datasets is that they come with the complete weighted semantic paths of all candidate tags $SP_{\mathcal{T}}$, the ground-truth weighted semantic paths of each image $SP_I$, the image features, and the trained DIA model, provided by the authors of [23] and available on GitHub (https://github.com/wubaoyuan/DIA). Since the weighted semantic paths are important to our method, these two datasets facilitate the evaluation. Specifically, ESP Game has 18,689 training images, 2,081 test images, 268 candidate classes, 106 semantic paths covering all candidate tags, and a feature dimension of 597; IAPRTC-12 has 17,495 training images, 1,957 test images, 291 candidate classes, 139 semantic paths covering all candidate tags, and a feature dimension of 536.

Model training. We first fix the CNN models in both $G_{\theta}$ and $D_{\eta}$ to the VGG-F model pre-trained on ImageNet [3] (downloaded from http://www.vlfeat.org/matconvnet/pretrained/). Then we initialize the columns of the fully-connected parameter matrix $W$ (see Eq. (3)) that correspond to the image feature $f_G(I)$ using the trained DIA model, while the columns corresponding to the noise vector $z$ and the bias parameter $b_G$ are randomly initialized. We pre-train $D_{\eta}$ by setting $\beta = 0$ in Eq. (8), i.e., only using the $F_{1\text{-}sp}$ scores of the ground-truth subsets $S_{DD-I}$ and of the fake subsets generated by the initialized $G_{\theta}$ with $z$ being the zero vector. The corresponding pre-training parameters are: batch size = 256, epochs = 20, learning rate = 1, $\ell_2$ weight decay = 0.0001. With the initialized $G_{\theta}$ and the pre-trained $D_{\eta}$, we fine-tune the D2IA-GAN model with the following parameters: batch size = 256, epochs = 50, the learning rates of $W$ and $\eta$ set to 0.0001 and 0.00005 respectively, both decayed by 0.1 every 10 epochs, $\ell_2$ weight decay = 0.0001, and $\beta = 0.5$. Besides, if there are a few long paths (i.e., many tags in a semantic path) in $SP_I$, the number of ground-truth subsets, i.e., $|S_{DD-I}|$, can be very large. In ESP Game and IAPRTC-12, the largest $|S_{DD-I}|$ reaches 4000, though the $|S_{DD-I}|$ values of most images are smaller than 30. If $|S_{DD-I}|$ is too large, the training of the discriminator $D_{\eta}$ (see Eq. (8)) becomes slow. Thus, we set an upper bound of 10 on $|S_{DD-I}|$ during training: if $|S_{DD-I}| > 10$, we randomly choose 10 subsets from $S_{DD-I}$ to update $D_{\eta}$. The implementation adopts TensorFlow 1.2.0 and Python 2.7.

Evaluation metrics. To evaluate the distinctiveness and relevance of the predicted tag subset, three semantic metrics, namely semantic precision, recall and F1, are proposed in [23] based on the weighted semantic paths. They are denoted as $P_{sp}$, $R_{sp}$ and $F_{1\text{-}sp}$, respectively. Specifically, given a predicted subset $T$, the corresponding semantic paths $SP_T$ and the ground-truth semantic paths $SP_I$, $P_{sp}$ computes the proportion of true semantic paths in $SP_T$, $R_{sp}$ computes the proportion of the true semantic paths in $SP_I$ that are also included in $SP_T$, and $F_{1\text{-}sp} = 2(P_{sp} \cdot R_{sp}) / (P_{sp} + R_{sp})$. The tag weights in each path are also taken into account when computing these proportions. Please refer to [23] for the detailed definitions.
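A simplified sketch of these metrics is given below; it ignores the per-tag weights used in the full definitions of [23] and treats paths as plain ids, so it only illustrates the precision/recall/F1 structure over semantic paths.

```python
# Simplified sketch of P_sp, R_sp and F1_sp over semantic paths (unweighted).
def semantic_scores(pred_paths, gt_paths):
    pred_paths, gt_paths = set(pred_paths), set(gt_paths)
    hit = len(pred_paths & gt_paths)
    p_sp = hit / len(pred_paths) if pred_paths else 0.0
    r_sp = hit / len(gt_paths) if gt_paths else 0.0
    f1_sp = 2 * p_sp * r_sp / (p_sp + r_sp) if (p_sp + r_sp) > 0 else 0.0
    return p_sp, r_sp, f1_sp

print(semantic_scores(pred_paths=[0, 2, 5], gt_paths=[0, 1, 2]))  # (0.667, 0.667, 0.667)
```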

Comparisons. We compare with two state-of-the-art image annotation methods, ML-MG [26] (code: https://sites.google.com/site/baoyuanwu2015/home) and DIA [23] (code: https://github.com/wubaoyuan/DIA). The reason we compare with them is that both they and our method utilize the semantic hierarchy and the weighted semantic paths, though in different ways. We also compare with another state-of-the-art multi-label learning method, LEML [31] (code: http://www.cs.utexas.edu/~rofuyu/), which does not utilize the semantic hierarchy. Since neither ML-MG nor LEML considers the semantic distinctiveness among tags, their predicted tag subsets are likely to include semantic redundancies. As reported in [23], the scores of the ML-MG and LEML predictions under the semantic metrics (i.e., $P_{sp}$, $R_{sp}$ and $F_{1\text{-}sp}$) are much lower than those of DIA, so it is not meaningful to compare with the original results of ML-MG and LEML. Instead, we combine the predictions of ML-MG and LEML with the DPP sampling that is also used in DIA and our method. Specifically, the square roots of the posterior probabilities over all candidate tags produced by ML-MG are used as the quality vector (see Section 3); as there are negative scores in the predictions of LEML, we normalize all predicted scores to $[0, 1]$ to obtain posterior probabilities. Then, combined with the similarity matrix $S$, two DPP distributions are constructed to sample distinct tag subsets. The resulting methods are denoted as MLMG-DPP and LEML-DPP, respectively.

5.2. Quantitative Results

As all compared methods (MLMG-DPP, LEML-DPP and DIA) and the proposed D2IA-GAN sample from DPP distributions to generate tag subsets, we can generate multiple tag subsets with each method for each image. Specifically, MLMG-DPP and DIA generate 10 random tag subsets for each image. The weight of each tag subset is computed by summing the weights of all tags in the subset. Then, we construct two outputs: the single subset, which picks the subset with the largest weight among these 10 subsets; and the ensemble subset, which merges the 5 tag subsets with the top-5 largest weights among the 10 subsets into one unique tag subset. The evaluation of the single subset reflects the distinctiveness performance of the compared methods. The evaluation of the ensemble subset measures the performance in terms of both diversity and distinctiveness: larger distinctiveness of the ensemble subset indicates higher diversity among its constituent subsets. Moreover, we present two cases, limiting the size of each tag subset to 3 and 5 tags, respectively.

Table 1. Results (%) evaluated by semantic metrics on ESP Game. The higher value indicates the better performance, and the best result in each column is highlighted in bold.

                                  3 tags                    5 tags
  target     method               Psp    Rsp    F1-sp       Psp    Rsp    F1-sp
  single     LEML-DPP [31]        34.64  25.21  27.76       29.24  35.05  30.29
  subset     MLMG-DPP [26]        37.18  27.71  30.05       33.85  38.91  34.30
             DIA [23]             41.44  31.00  33.61       34.99  40.92  35.78
             D2IA-GAN             42.96  32.34  34.93       35.04  41.50  36.06
  ensemble   LEML-DPP [31]        34.62  38.09  34.32       29.04  46.61  34.02
  subset     MLMG-DPP [26]        30.44  34.88  30.70       28.99  43.46  33.05
             DIA [23]             35.73  33.53  32.39       32.62  40.86  34.31
             D2IA-GAN             36.73  42.44  36.71       31.28  48.74  35.82

Table 2. Results (%) evaluated by semantic metrics on IAPRTC-12. The higher value indicates the better performance, and the best result in each column is highlighted in bold.

                                  3 tags                    5 tags
  target     method               Psp    Rsp    F1-sp       Psp    Rsp    F1-sp
  single     LEML-DPP [31]        41.42  24.39  29.00       37.06  32.86  32.98
  subset     MLMG-DPP [26]        40.93  24.29  28.61       37.06  33.68  33.29
             DIA [23]             42.65  25.07  29.87       37.83  34.62  34.11
             D2IA-GAN             43.57  26.22  31.04       37.31  35.35  34.41
  ensemble   LEML-DPP [31]        35.22  32.75  31.86       32.28  39.89  33.74
  subset     MLMG-DPP [26]        33.71  32.00  30.64       31.91  40.11  33.49
             DIA [23]             35.73  33.53  32.39       32.62  40.86  34.31
             D2IA-GAN             35.49  39.06  34.44       32.50  44.98  35.34
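The construction of the single and ensemble outputs described above can be summarized as follows; subsets and tag weights are assumed to be given.

```python
# Sketch: form the single subset (largest total tag weight) and the ensemble
# subset (union of the top-5 subsets by total tag weight) from 10 samples.
def single_and_ensemble(subsets, tag_weight):
    scored = sorted(subsets, key=lambda T: sum(tag_weight[t] for t in T), reverse=True)
    single = scored[0]
    ensemble = set().union(*scored[:5])
    return single, ensemble

tag_weight = {0: 1.0, 1: 0.5, 2: 0.8, 3: 0.3, 4: 0.6, 5: 0.9}
subsets = [{0, 2}, {1, 3}, {0, 5}, {2, 4}, {1, 5}, {3, 4}, {0, 4}, {2, 5}, {1, 2}, {0, 3}]
print(single_and_ensemble(subsets, tag_weight))
```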

The quantitative results on ESP Game are shown in Table 1. For the evaluation of single subsets, D2IA-GAN shows the best performance under all metrics for both 3 and 5 tags, while MLMG-DPP and LEML-DPP perform worst in all cases. The reason is that the learning of ML-MG/LEML and the DPP sampling are independent. ML-MG enforces ancestor tags to be ranked before their descendant tags, while distinctiveness is not considered; there is thus much semantic redundancy in the top-k tags of ML-MG, which are likely to cover fewer semantic paths than those of DIA and D2IA-GAN. Hence, although DPP sampling can produce a distinct tag subset from the top-k candidate tags, it covers fewer semantic concepts (recall that one semantic path represents one semantic concept) than DIA and D2IA-GAN. LEML treats each tag equally during training, totally ignoring semantic distinctiveness, so it is not surprising that LEML-DPP also covers fewer semantic concepts than DIA and D2IA-GAN. In contrast, both DIA and D2IA-GAN take the semantic distinctiveness into account during learning. However, there are several significant differences between their training processes. First, DPP sampling is independent of the model training in DIA, while the subset generated by DPP sampling is used to update the model parameters in D2IA-GAN. Second, DIA learns from the ground-truth complete tag list, and the semantic distinctiveness is only indirectly embedded into the learning process through the similarity matrix $S$; in contrast, D2IA-GAN learns from the ground-truth distinct tag subsets. Last, the model training of DIA is independent of the evaluation metric $F_{1\text{-}sp}$, which plays an important role in the training of D2IA-GAN. These differences are the reasons why D2IA-GAN produces more semantically distinct tag subsets than DIA. Specifically, in the case of 3 tags, the relative improvements of D2IA-GAN over DIA are 3.67%, 4.32% and 3.93% in $P_{sp}$, $R_{sp}$ and $F_{1\text{-}sp}$, respectively, and 0.14%, 3.86% and 0.78% in the case of 5 tags. In addition, the improvement decreases as the size limit of the tag subset increases. The reason is that D2IA-GAN may include more irrelevant tags, as the random noise combined with the image feature brings in not only diversity but also uncertainty. Note that, due to the randomness of sampling, the single-subset results of DIA presented here are slightly different from those reported in [23].

In terms of the evaluation of ensemble subsets, the improvements of D2IA-GAN over the three compared methods are more significant. This is because all three compared methods sample multiple tag subsets from a fixed DPP distribution, while D2IA-GAN generates multiple tag subsets from different, randomly perturbed DPP distributions. As such, the diversity among the tag subsets generated by D2IA-GAN is expected to be higher than that of the three compared methods. Consequently, the ensemble subset of D2IA-GAN is likely to cover more relevant semantic paths than those of the other methods. This is supported by the comparison under $R_{sp}$: the relative improvement of D2IA-GAN over DIA is 26.57% in the case of 3 tags and 19.29% in the case of 5 tags. It is encouraging that the $P_{sp}$ scores of D2IA-GAN are also comparable with those of DIA. This demonstrates that training with the GAN reduces the likelihood of including irrelevant semantic paths due to the uncertainty of the noise vector $z$, because the GAN encourages the generated tag subsets to be close to the ground-truth diverse and distinct tag subsets. Specifically, in the case of 3 tags, the relative improvements of D2IA-GAN over DIA are 2.80%, 26.57% and 11.77% in $P_{sp}$, $R_{sp}$ and $F_{1\text{-}sp}$, respectively; the corresponding improvements are −4.11%, 19.29% and 4.40% in the case of 5 tags.

The results on IAPRTC-12 are summarized in Table 2. In the case of the single subset with 3 tags, the relative improvements of D2IA-GAN over DIA are 2.16%, 4.59% and 3.92% in $P_{sp}$, $R_{sp}$ and $F_{1\text{-}sp}$, respectively. In the case of the single subset with 5 tags, the corresponding improvements are −1.37%, 2.11% and 0.88%. In the case of the ensemble subset with 3 tags, the corresponding improvements are −0.67%, 16.49% and 6.33%; with 5 tags, they are −0.37%, 10.08% and 3.0%. The comparisons on the above two benchmark datasets verify that D2IA-GAN produces more semantically relevant, yet diverse and distinct tag subsets than the compared MLMG-DPP, LEML-DPP and DIA methods. Some qualitative results are presented in the supplementary material.

5.3. Subject Study

Since diversity and distinctiveness are subjective concepts, we also conduct human subject studies to compare the results of DIA and D2IA-GAN on these two criteria. Specifically, for each test image, we run DIA 10 times to obtain 10 tag subsets, and the set of the 3 subsets with the largest weights is taken as the final output. For D2IA-GAN, we first generate 10 random noise vectors $z$. With each noise vector, we conduct the DPP sampling in $G_{\theta}$ 10 times to obtain 10 subsets, out of which we pick the one with the largest weight as the tag subset corresponding to this noise vector. Then, from the obtained 10 subsets, we again pick the 3 subsets with the largest weights to form the output set of D2IA-GAN. For each test image, we present these two sets of tag subsets, together with the corresponding image, to 5 human evaluators. The only instruction given to the evaluators is to determine "which set describes this image more comprehensively". Besides, we notice that if two tag sets are very similar, or if both are irrelevant to the image contents, human evaluators may pick one randomly. To reduce such randomness, we filter the test images using the following criterion: first we combine the subsets in each set into an ensemble subset; if the $F_{1\text{-}sp}$ scores of both ensemble subsets are larger than 0.2, and the gap between these two scores is larger than 0.15, then the image is used in the subject studies. As a result, the numbers of test images used in the subject studies are: ESP Game, 375 in the case of 3 tags and 324 in the case of 5 tags; IAPRTC-12, 342 in the case of 3 tags and 306 in the case of 5 tags. We also report the comparison results when using $F_{1\text{-}sp}$ to evaluate the two compared ensemble subsets, and compute the consistency between the $F_{1\text{-}sp}$ evaluation and the human evaluation. Subject study results on ESP Game are summarized in Table 3. Under human evaluation, D2IA-GAN is judged better than DIA on 240/375 = 64% of all evaluated images in the case of 3 tags, and on 204/324 = 62.96% in the case of 5 tags. Under the $F_{1\text{-}sp}$ evaluation, D2IA-GAN outperforms DIA on 250/375 = 66.67% in the case of 3 tags, and on 212/324 = 65.43% in the case of 5 tags. Both evaluations thus indicate an improvement of D2IA-GAN over DIA. Besides, the two evaluations are consistent (i.e., their decisions on which set is better agree) on 239/375 = 63.73% of all evaluated images in the case of 3 tags, and on 222/324 = 68.52% in the case of 5 tags. This demonstrates that the evaluation using $F_{1\text{-}sp}$ is relatively reliable. The same trend is observed for the results obtained on the IAPRTC-12 dataset (see Table 4).

Moreover, in the supplementary material we present a detailed analysis of human annotations conducted on part of the IAPRTC-12 images. It not only shows that D2IA-GAN produces more human-like tags than DIA, but also discusses the difference between D2IA-GAN and human annotators, as well as how to shrink that difference.

Table 3. Subject study results on ESP Game. The entry '62' in the row 'consistency' and the column 'DIA wins' indicates that both the human evaluation and the F1-sp evaluation decide that the predicted tags of DIA are better than those of D2IA-GAN on 62 images. Similarly, the human evaluation and the F1-sp evaluation agree that the results of D2IA-GAN are better than those of DIA on 177 images. Hence, the two evaluations make the same decision (i.e., are consistent) on 62 + 177 = 239 images, and the consistency rate over all evaluated images is 239/375 = 63.73%.

                        3 tags                            5 tags
  metric                DIA wins  D2IA-GAN wins  total    DIA wins  D2IA-GAN wins  total
  human evaluation      135       240            375      120       204            324
  F1-sp                 125       250            375      112       212            324
  consistency           62        177            63.73%   65        157            68.52%

Table 4. Subject study results on IAPRTC-12.

                        3 tags                            5 tags
  metric                DIA wins  D2IA-GAN wins  total    DIA wins  D2IA-GAN wins  total
  human evaluation      129       213            342      123       183            306
  F1-sp                 141       201            342      123       183            306
  consistency           82        154            69.01%   58        118            57.52%

6. Conclusion

In this work, we have proposed a new image annotation method, called diverse and distinct image annotation (D2IA), to simulate the diversity and distinctiveness of the tags generated by human annotators. D2IA is formulated as a sequential generative model, in which the image feature is first incorporated into a determinantal point process (DPP) model that also encodes the weighted semantic paths, and from which a sequence of distinct tags is generated by sampling. The diversity among the generated tag subsets is ensured by sampling the DPP model with random noise perturbations of the image feature. In addition, we adopt the generative adversarial network (GAN) framework to train the generative model D2IA, and employ the policy gradient algorithm to handle the training difficulty caused by the discrete DPP sampling in D2IA. Experimental results and human subject studies on two benchmark datasets demonstrate that the relevant, yet diverse and distinct tag subsets generated by the proposed method D2IA-GAN provide more comprehensive descriptions of the image contents than those generated by state-of-the-art methods.

Acknowledgements: This work is supported by Tencent AI Lab. The participation of Bernard Ghanem is supported by the King Abdullah University of Science and Technology (KAUST) Office of Sponsored Research. The participation of Siwei Lyu is partially supported by the National Science Foundation National Robotics Initiative (NRI) Grant (IIS-1537257) and National Science Foundation of China Project Number 61771341.


References

[1] Y. Bai, Y. Zhang, M. Ding, and B. Ghanem. Finding tiny faces in the wild with generative adversarial network. In CVPR. IEEE, 2018.
[2] X. Chen, X.-T. Yuan, Q. Chen, S. Yan, and T.-S. Chua. Multi-label visual classification with label exclusive context. In ICCV, pages 834-841, 2011.
[3] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, pages 248-255. IEEE, 2009.
[4] C. Gong, D. Tao, W. Liu, L. Liu, and J. Yang. Label propagation via teaching-to-learn and learning-to-teach. IEEE Transactions on Neural Networks and Learning Systems, 28(6):1452-1465, 2017.
[5] C. Gong, D. Tao, J. Yang, and W. Liu. Teaching-to-learn and learning-to-teach for multi-label propagation. In AAAI, pages 1610-1616, 2016.
[6] Y. Gong, Y. Jia, T. Leung, A. Toshev, and S. Ioffe. Deep convolutional ranking for multilabel image annotation. arXiv preprint arXiv:1312.4894, 2013.
[7] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In NIPS, pages 2672-2680, 2014.
[8] M. Grubinger, P. Clough, H. Muller, and T. Deselaers. The IAPR TC-12 benchmark: A new evaluation resource for visual information systems. In International Workshop OntoImage, pages 13-23, 2006.
[9] J. Jin and H. Nakayama. Annotation order matters: Recurrent image annotator for arbitrary length image tagging. In ICPR, pages 2452-2457. IEEE, 2016.
[10] A. Kulesza and B. Taskar. Determinantal point processes for machine learning. arXiv preprint arXiv:1207.6083, 2012.
[11] G. Lev, G. Sadeh, B. Klein, and L. Wolf. RNN Fisher vectors for action recognition and image annotation. In ECCV, pages 833-850. Springer, 2016.
[12] Y. Li, Y. Song, and J. Luo. Improving pairwise ranking for multi-label image classification. In CVPR, 2017.
[13] Y. Li, B. Wu, B. Ghanem, Y. Zhao, H. Yao, and Q. Ji. Facial action unit recognition under incomplete data based on multi-label learning with missing labels. Pattern Recognition, 60:890-900, 2016.
[14] F. Liu, T. Xiang, T. M. Hospedales, W. Yang, and C. Sun. Semantic regularisation for recurrent image annotation. In CVPR, 2017.
[15] W. Liu, J. He, and S.-F. Chang. Large graph construction for scalable semi-supervised learning. In ICML, 2010.
[16] M. Mirza and S. Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.
[17] J. Pennington, R. Socher, and C. D. Manning. GloVe: Global vectors for word representation. In EMNLP, volume 14, pages 1532-1543, 2014.
[18] G.-J. Qi, W. Liu, C. Aggarwal, and T. Huang. Joint intermodal and intramodal label transfers for extremely rare or unseen classes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(7):1360-1373, 2017.
[19] D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning representations by back-propagating errors. Cognitive Modeling, 5(3):1, 1988.
[20] L. Von Ahn and L. Dabbish. Labeling images with a computer game. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 319-326. ACM, 2004.
[21] J. Wang, Y. Yang, J. Mao, Z. Huang, C. Huang, and W. Xu. CNN-RNN: A unified framework for multi-label image classification. In CVPR, pages 2285-2294, 2016.
[22] Z. Wang, T. Chen, G. Li, R. Xu, and L. Lin. Multi-label image recognition by recurrently discovering attentional regions. In CVPR, pages 464-472, 2017.
[23] B. Wu, F. Jia, W. Liu, and B. Ghanem. Diverse image annotation. In CVPR, 2017.
[24] B. Wu, F. Jia, W. Liu, B. Ghanem, and S. Lyu. Multi-label learning with missing labels using mixed dependency graphs. International Journal of Computer Vision, 2018.
[25] B. Wu, Z. Liu, S. Wang, B.-G. Hu, and Q. Ji. Multi-label learning with missing labels. In ICPR, 2014.
[26] B. Wu, S. Lyu, and B. Ghanem. ML-MG: Multi-label learning with missing labels using a mixed graph. In ICCV, pages 4157-4165, 2015.
[27] B. Wu, S. Lyu, and B. Ghanem. Constrained submodular minimization for missing labels and class imbalance in multi-label learning. In AAAI, pages 2229-2236, 2016.
[28] B. Wu, S. Lyu, B.-G. Hu, and Q. Ji. Multi-label learning with missing labels for image annotation and facial action unit recognition. Pattern Recognition, 48(7):2279-2289, 2015.
[29] P. Xie, R. Salakhutdinov, L. Mou, and E. P. Xing. Deep determinantal point process for large-scale multi-label classification. In CVPR, pages 473-482, 2017.
[30] W. Xiong, W. Luo, L. Ma, W. Liu, and J. Luo. Learning to generate time-lapse videos using multi-stage dynamic generative adversarial networks. In CVPR, 2018.
[31] H.-F. Yu, P. Jain, P. Kar, and I. Dhillon. Large-scale multi-label learning with missing labels. In ICML, pages 593-601, 2014.
[32] D. Zhang, M. M. Islam, and G. Lu. A review on automatic image annotation techniques. Pattern Recognition, 45(1):346-362, 2012.
[33] F. Zhu, H. Li, W. Ouyang, N. Yu, and X. Wang. Learning spatial regularization with image-level supervisions for multi-label image classification. In CVPR, 2017.
