Music Information Retrieval: Information Universe
Seongmin Ok ([email protected])
Dept. of Industrial Engineering, Seoul National University
Contents
Brief History of MIR and State of Research

- Cross-media retrieval supporting natural language queries
  - e.g., queries about mood or melody information
  - Contains semantic information taken from community databases
  - "A Music Search Engine Built upon Audio-based and Web-based Similarity Measures"
- Query by example
  - The example query has the same representation as the items in the database
  - For music search: humming, or audio recorded by cell phones or microphones
  - "Music Structure Based Vector Space Retrieval"
Stages of the First Paper: "A Music Search Engine Built upon Audio-based and Web-based Similarity Measures"
Stage 1: Preprocessing the Collection
- Use the information in the ID3 tag: artist, album, title
- All duplicate tracks are excluded to avoid redundancies
- Live versions and instrumentals of the same song are removed
Stage 2: Adding Web-Based Features
- Search the web for:
  - "artist" music
  - "artist" "album" music review
  - "artist" "title" music review -lyrics
Stage 2: Adding Web-Based Features (2)
- Every term is weighted according to the term frequency × inverse document frequency (tf×idf) function
- w(t, m) denotes the weight of term t for music piece m; N is the total number of documents
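The weighting above can be sketched as follows. This is the standard tf×idf form; the paper's exact normalization of w(t, m) may differ, so treat the formula as an illustration of the idea rather than the paper's definition.

```python
import math

def tfidf_weight(tf, df, n_docs):
    """Standard tf x idf weight: term frequency times the log of the
    inverse document frequency. Sketch only; the paper's exact
    variant of w(t, m) may normalize differently."""
    if tf == 0 or df == 0:
        return 0.0
    return tf * math.log(n_docs / df)

# Example: a term occurring 3 times in a page set, appearing in
# 10 of 1000 documents overall (made-up numbers)
w = tfidf_weight(tf=3, df=10, n_docs=1000)
```

Rare terms (small df) get boosted, while terms occurring in every document get weight near zero.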
Stage 3: Audio-Based Similarity Measures
- For each audio track, Mel-Frequency Cepstral Coefficients (MFCCs) are computed on short-time audio segments (called frames)
- Each song is represented as a Gaussian Mixture Model (GMM) of the distribution of its MFCCs
- The Kullback-Leibler divergence between two tracks can be calculated from the GMM means and covariance matrices
- Based on this measure, a ranked list of similar tracks is produced for each track
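The divergence computation from means and covariances can be illustrated with the closed-form KL divergence between two single Gaussians; note the paper models each song as a GMM, for which the KL divergence has no closed form and is usually approximated, so this is a simplified sketch.

```python
import numpy as np

def gaussian_kl(mu_p, cov_p, mu_q, cov_q):
    """Closed-form KL(p || q) between two multivariate Gaussians,
    computed from their means and covariance matrices. A simplified
    single-Gaussian sketch; for the GMMs in the paper, KL divergence
    must be approximated."""
    k = len(mu_p)
    cov_q_inv = np.linalg.inv(cov_q)
    diff = np.asarray(mu_q, dtype=float) - np.asarray(mu_p, dtype=float)
    return 0.5 * (np.trace(cov_q_inv @ cov_p)
                  + diff @ cov_q_inv @ diff
                  - k
                  + np.log(np.linalg.det(cov_q) / np.linalg.det(cov_p)))

# The divergence of a distribution from itself is zero
mu, cov = np.zeros(2), np.eye(2)
self_kl = gaussian_kl(mu, cov, mu, cov)
```

Smaller divergence means more similar MFCC distributions, which is what the ranking step uses.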
GMM (Gaussian Mixture Model)
- A probabilistic model for representing the presence of sub-populations within an overall population
- The mixture distribution represents the probability distribution of observations in the overall population
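A minimal illustration of the two bullets above: the mixture density is a weighted sum of component densities, and sampling first picks a sub-population by its weight. The parameters here are made up purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D mixture of two Gaussian sub-populations
weights = np.array([0.3, 0.7])
means = np.array([-2.0, 3.0])
stds = np.array([0.5, 1.0])

def gmm_pdf(x):
    """Mixture density: weighted sum of the component densities."""
    comp = np.exp(-0.5 * ((x - means) / stds) ** 2) / (stds * np.sqrt(2 * np.pi))
    return float(np.sum(weights * comp))

def gmm_sample(n):
    """Sample: pick a component by its weight, then draw from it."""
    ks = rng.choice(len(weights), size=n, p=weights)
    return rng.normal(means[ks], stds[ks])

samples = gmm_sample(10_000)
```

The sample mean approaches the mixture mean 0.3·(−2) + 0.7·3 = 1.5, and the density is highest near the heavier component.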
Stage 4: Dimensionality Reduction
- A chi-square test is used to select the most discriminative terms, using the audio similarities
- A: the number of documents in s which contain t
- B: the number of documents in d which contain t
- C: the number of documents in s without t
- D: the number of documents in d without t
- N: the total number of examined documents
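Given the counts A, B, C, D defined above, the statistic can be computed as below. The slide omits the formula itself, so the standard 2×2 contingency form χ² = N(AD − BC)² / ((A+B)(A+C)(B+D)(C+D)) is assumed here.

```python
def chi_square(a, b, c, d):
    """Chi-square statistic for term selection from the 2x2
    contingency counts A, B, C, D (N = A + B + C + D).
    Standard formula, assumed since the slide does not state it."""
    n = a + b + c + d
    denom = (a + b) * (a + c) * (b + d) * (c + d)
    if denom == 0:
        return 0.0
    return n * (a * d - b * c) ** 2 / denom

# A term strongly associated with the similar set s scores high;
# a term spread evenly over s and d scores zero (made-up counts)
score = chi_square(a=40, b=5, c=10, d=45)
```

Terms with the highest scores are kept, shrinking the feature space to the most discriminative dimensions.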
Stage 5: Vector Adaptation
- Smoothing for tracks for which no related web information is available
Querying the Music Search Engine
- A method to find the tracks that are most similar to a natural language query
- Queries are extended with the word "music" and sent to Google
- A query vector is constructed in the feature space from the top 10 retrieved pages
- Euclidean distances to the collection tracks are calculated, yielding a relevance ranking
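The last step can be sketched as follows. The track and query vectors here are tiny made-up stand-ins for real term vectors in the feature space.

```python
import numpy as np

# Hypothetical 3-dimensional term vectors (real ones span the
# reduced term space from Stage 4)
track_vectors = {
    "track_a": np.array([0.9, 0.10, 0.00]),
    "track_b": np.array([0.1, 0.80, 0.30]),
    "track_c": np.array([0.2, 0.70, 0.40]),
}
query_vector = np.array([0.2, 0.75, 0.35])

def rank_tracks(query, tracks):
    """Rank tracks by ascending Euclidean distance to the query
    vector, as in the retrieval step described above."""
    dists = {name: float(np.linalg.norm(query - vec))
             for name, vec in tracks.items()}
    return sorted(dists, key=dists.get)

ranking = rank_tracks(query_vector, track_vectors)
```

The closest track appears first in the relevance ranking.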
Evaluating the System
- To evaluate on "real-world" queries, a source of phrases that people actually use to describe music is needed
- Tags provided by AudioScrobbler are used as the ground truth
- 227 tags are used as test queries
Goals of the Evaluation
- Effect of the dimensionality of the feature space
- Retrieving relevant information
- Effect of re-weighting the term vectors
- Effect of query expansion

Metric used: precision values at various recall levels
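The metric can be sketched as below. The interpolation scheme (precision at a recall level = maximum precision at any recall ≥ that level) is the standard one for precision-recall curves and is assumed here, since the slide does not specify it.

```python
def precision_at_recall_levels(ranked_relevance, total_relevant, levels):
    """Interpolated precision at the given recall levels, from a
    ranked result list of booleans (True = relevant). Minimal sketch
    of the metric; the interpolation style is assumed."""
    points = []  # (recall, precision) at each relevant hit
    hits = 0
    for rank, rel in enumerate(ranked_relevance, start=1):
        if rel:
            hits += 1
            points.append((hits / total_relevant, hits / rank))
    out = []
    for level in levels:
        ps = [p for r, p in points if r >= level]
        out.append(max(ps) if ps else 0.0)
    return out

# 2 of the 3 relevant tracks retrieved, at ranks 1 and 3
curve = precision_at_recall_levels([True, False, True, False],
                                   total_relevant=3,
                                   levels=[0.0, 0.5, 1.0])
```

Averaging such curves over all 227 tag queries gives the plots reported in the evaluation.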
Performance Evaluation I
- Audio-based term selection has a very positive impact on retrieval
- The 2/50 setting yields the best results
Performance Evaluation II
- Effect of re-weighting, using various re-weighting techniques
- The impact of audio-based vector re-weighting is only marginal
Performance Evaluation III (Other Metrics)
Examples
System Design of the Second Paper: "Music Structure Based Vector Space Retrieval"
Music Layout: The Pyramid
Stage 1: Music Information Modeling
- Music segmentation by smallest note length
- Chord modeling
- Music region content modeling
Stage 2: Music Indexing and Retrieval
- Harmony events and acoustic events
  - Each song's chord and music region information is represented as a Gaussian Mixture Model (GMM) of the distribution of MFCCs
- n-gram vectors
  - The harmony and acoustic decoders serve as tokenizers for the music signal
  - Each event is represented in a text-like format
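Once the decoders emit text-like event tokens, building an n-gram vector is ordinary text processing, as sketched below. The chord-token names are made up for illustration; the actual event vocabulary comes from the decoders.

```python
from collections import Counter

def ngram_counts(events, n=2):
    """Count n-grams over a tokenized event sequence. Sketch of
    turning decoder output (text-like event tokens) into an n-gram
    vector; token names here are hypothetical."""
    grams = zip(*(events[i:] for i in range(n)))
    return Counter(" ".join(g) for g in grams)

# Hypothetical harmony-event tokens emitted by a chord decoder
events = ["Cmaj", "Gmaj", "Amin", "Cmaj", "Gmaj"]
vec = ngram_counts(events, n=2)
```

The resulting counts form a vector over the n-gram vocabulary, on which standard vector space retrieval applies.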
Stage 3: Music Information Retrieval
Summary
- Natural language queries vs. query by example
- Information from the web and from audio
- Audio frame segmentation
- KL divergence vs. vector space modeling
- Analyzing audio features
- The data itself vs. metadata
- Domain knowledge of music