Recent Paper of Md. Akmal Haidar


Transcript of Recent Paper of Md. Akmal Haidar

Page 1

Recent Paper of Md. Akmal Haidar

Meeting before ICASSP 2013

Presenter: 郝柏翰, 2013/05/23

Page 2

Outline

• “Novel Weighting Scheme for Unsupervised Language Model Adaptation Using Latent Dirichlet Allocation”, 2010

• “Unsupervised Language Model Adaptation Using Latent Dirichlet Allocation and Dynamic Marginals”, 2011

• “Topic N-gram Count Language Model Adaptation for Speech Recognition”, 2012

• “Comparison of a Bigram PLSA and a Novel Context-Based PLSA Language Model for Speech Recognition”, 2013

Page 3

Novel Weighting Scheme for Unsupervised Language Model Adaptation Using Latent Dirichlet Allocation

Md. Akmal Haidar and Douglas O’Shaughnessy

INTERSPEECH 2010

Page 4

Introduction

• Adaptation is required when the styles, domains, or topics of the test data are mismatched with the training data. It is also important because natural language is highly variable and topic information is highly non-stationary.

• The idea of unsupervised LM adaptation is to extract latent topics from the training set, build an adapted topic model by mixing the topic-specific LMs with proper mixture weights, and finally interpolate it with the generic n-gram LM.

• In this paper, we propose generating the weights of the topic models from the word counts of the topic clusters formed by a hard-clustering method.

Page 5

Proposed Method

• We used the MATLAB topic modeling toolbox to obtain the word-topic matrix, WP, and the document-topic matrix, DP, using LDA (see the sketch at the end of this page).

• Hard-clustering: for training, topic clusters are formed by assigning a topic to each document as:
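The assignment rule did not survive the transcript; a reconstruction of the standard hard-assignment step, with P(z | d) taken from the DP matrix (notation assumed):

\[ z^{*}(d) = \arg\max_{z} P(z \mid d) \]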

• Adaptation: we can create a dynamically adapted topic model using a mixture of LMs from the different topics as:
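The mixture formula is likewise missing; plausibly, with \gamma_j the weight of topic j and P_j(w \mid h) the LM of topic cluster j:

\[ P_{\text{topic}}(w \mid h) = \sum_{j=1}^{K} \gamma_j \, P_j(w \mid h) \]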
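A rough end-to-end sketch of this page's pipeline. The authors used the MATLAB topic modeling toolbox; gensim (Python) is substituted here purely for illustration, so the names below (docs, WP, DP, clusters) are assumptions, not the authors' code:

from gensim import corpora, models

# Toy tokenized training documents (stand-in for the real training set)
docs = [["language", "model", "adaptation"],
        ["topic", "model", "lda"],
        ["speech", "recognition", "adaptation"]]

dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]

# Train LDA (gensim substituted for the MATLAB toolbox; illustration only)
lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, passes=10)

# Analogue of the WP (word-topic) matrix: shape (num_topics, vocab_size)
WP = lda.get_topics()

# Analogue of the DP (document-topic) matrix: P(z | d) per document
DP = [lda.get_document_topics(bow, minimum_probability=0.0) for bow in corpus]

# Hard clustering: assign each document to its most probable topic z*(d)
clusters = [max(dp, key=lambda t: t[1])[0] for dp in DP]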

Page 6

Proposed Method

• To find the topic mixture weights, we propose a scheme based on the unigram counts of the topics:
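The weight formula itself is missing; a plausible reconstruction from the definitions below, weighting each topic by how much of the document's unigram mass it accounts for:

\[ \gamma_j = \frac{\sum_{w} TF_j(w)\, c_d(w)}{\sum_{k=1}^{K} \sum_{w} TF_k(w)\, c_d(w)} \]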

• The adapted topic model is then interpolated with the generic LM as:
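Presumably the usual linear interpolation, with \lambda the interpolation weight:

\[ P_{\text{adapted}}(w \mid h) = \lambda\, P_{\text{generic}}(w \mid h) + (1-\lambda)\, P_{\text{topic}}(w \mid h) \]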

• TF_j(w) represents the number of times word w is drawn from topic j, and c_d(w) is the frequency of word w in document d.

Page 7

Experiment Setup

• We evaluated the LM adaptation approach using the Brown corpus and the transcription text data of the WSJ1 corpus.

Page 8

Experiments

Page 9

Unsupervised Language Model Adaptation Using Latent Dirichlet Allocation and Dynamic Marginals

Md. Akmal Haidar and Douglas O’Shaughnessy

INTERSPEECH 2011

Page 10

Introduction

• To overcome the mismatch problem, we introduce an unsupervised language model adaptation approach using latent Dirichlet allocation (LDA) and dynamic marginals: locally estimated (smoothed) unigram probabilities from in-domain text data.

• We extend our previous work to find an adapted model using minimum discriminant information (MDI) adaptation, which uses the KL divergence as the distance measure between probability distributions.

• The final adapted model is formed by minimizing the KL divergence between the final adapted model and the background model, subject to unigram marginal constraints obtained from the LDA adapted topic model.

Page 11

Proposed Method

• Topic clustering

• LDA adapted topic mixture model generation

Page 12

Proposed Method

• Adaptation using dynamic marginals: the adapted model is obtained by minimizing the KL divergence between the adapted model and the background model, subject to the marginalization constraint for each word w in the vocabulary:
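The constraint equation is not shown in the transcript; plausibly, with P_{\text{LDA}}(w) the dynamic unigram marginal estimated from the LDA adapted topic model and P(h) the history prior:

\[ \sum_{h} P(h)\, P_{\text{adapt}}(w \mid h) = P_{\text{LDA}}(w) \qquad \forall\, w \in V \]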

– The constrained optimization problem has a close connection to the maximum entropy approach, which shows that the adapted model is a rescaled version of the background model:
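A sketch of the standard MDI rescaling solution, with \beta a scaling exponent (notation assumed):

\[ P_{\text{adapt}}(w \mid h) = \frac{\alpha(w)}{Z(h)}\, P_B(w \mid h), \qquad \alpha(w) = \left( \frac{P_{\text{LDA}}(w)}{P_B(w)} \right)^{\beta}, \qquad Z(h) = \sum_{w'} \alpha(w')\, P_B(w' \mid h) \]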

Page 13

Proposed Method

• Since the background model and the LDA adapted topic model have a standard back-off structure and satisfy the above constraint, the adapted LM has the following recursive formula:
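The formula is missing from the transcript; a sketch of the usual recursive form for back-off LMs under this kind of rescaling (notation assumed):

\[ P_{\text{adapt}}(w \mid h) = \begin{cases} \dfrac{\alpha(w)}{Z(h)}\, P_B(w \mid h) & \text{if } (h,w) \text{ is in the model} \\ \gamma(h)\, P_{\text{adapt}}(w \mid \bar{h}) & \text{otherwise} \end{cases} \]

where \bar{h} is the shortened history and \gamma(h) is the back-off weight that keeps the distribution normalized.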

Page 14

Experiments

Page 15

Topic N-gram Count Language Model Adaptation for Speech Recognition

Md. Akmal Haidar and Douglas O’Shaughnessy

IEEE SLT 2012

Page 16

Introduction

• In this paper, we propose a new LM adaptation approach by considering the features of the LDA model.

• Here, we compute the topic mixture weights of each n-gram in the training set using the probability of a word given a topic, P(w | z), as a confidence measure for the words of the n-gram under the different LDA latent topics.

• The normalized topic mixture weights are then multiplied by the global count of the n-gram to determine the topic n-gram counts for the respective topics (see the sketch after this list).

• The topic n-gram count LMs are then created from the respective topic n-gram counts and adapted using topic mixture weights obtained by averaging the confidence measures over the seen words of a development test set.
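No formula survives here; a sketch of the count-splitting step from the bullets above, with g = w_1 \ldots w_n an n-gram, C(g) its global training count, and \psi_j(g) its normalized topic mixture weight (notation assumed):

\[ C_j(g) = \psi_j(g)\, C(g), \qquad \sum_{j=1}^{K} \psi_j(g) = 1 \]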

Page 17

Proposed Method

• The probability of word w under LDA latent topic z_j is computed as:
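The equation is missing; assuming the Gibbs-sampling estimate used by the MATLAB topic modeling toolbox, with C^{WT} the word-topic count matrix (WP), \beta the symmetric Dirichlet prior, and V the vocabulary size:

\[ P(w \mid z_j) = \frac{C^{WT}_{wj} + \beta}{\sum_{w'} C^{WT}_{w'j} + V\beta} \]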

• Using these features of the LDA model, we propose two confidence measures to compute the topic mixture weights for each n-gram:
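Both formulas are missing from the transcript. One plausible reconstruction, assuming the two measures combine the per-word probabilities of the n-gram by averaging and by product (an assumption, not confirmed by the slide), normalized over topics:

\[ \psi^{(1)}_j(w_1^n) \propto \frac{1}{n} \sum_{i=1}^{n} P(w_i \mid z_j), \qquad \psi^{(2)}_j(w_1^n) \propto \prod_{i=1}^{n} P(w_i \mid z_j) \]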

• The topic n-gram language models are then generated from the topic n-gram counts; we refer to them as TNCLM (topic n-gram count LMs).

Page 18

Proposed Method

• In the LDA model, a document is generated by a mixture of topics. So, for a test document d, the dynamically adapted topic model is computed as a mixture of the LMs of the different topics:
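The formula is missing; plausibly, with \delta_j the adapted mixture weight of topic j obtained from the averaged confidence measures:

\[ P_{\text{ANCLM}}(w \mid h) = \sum_{j=1}^{K} \delta_j\, P_j^{\text{TNCLM}}(w \mid h) \]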

• The ANCLM (adapted n-gram count LM) is then interpolated with the background n-gram model to capture the local constraints, using linear interpolation:
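Presumably the standard linear interpolation, with \mu the interpolation weight and P_B the background model:

\[ P_{\text{final}}(w \mid h) = \mu\, P_B(w \mid h) + (1-\mu)\, P_{\text{ANCLM}}(w \mid h) \]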

Page 19

Experiments