VizTree


Transcript of VizTree

Page 1: VizTree

[Figure: a shuffled sequence of the integers 1-30, and a value axis running from 0.2 to 0.9]

Page 2: VizTree

10001000101001000101010100001010100010101110111101011010010111010010101001110101010100101001010101110101010010101010110101010010110010111011110100011100001010000100111010100011100001010101100101110101

01011001011110011010010000100010100110110101110000101010111011111000110110110111111010011001001000110100011110011011010001011110001011010011011001101000000100110001001110000011101001100101100001010010

Here are two sets of bit strings. Which set is generated by a human and which one is generated by a computer?


Page 3: VizTree

10001000101001000101010100001010100010101110111101011010010111010010101001110101010100101001010101110101010010101010110101010010110010111011110100011100001010000100111010100011100001010101100101110101

01011001011110011010010000100010100110110101110000101010111011111000110110110111111010011001001000110100011110011011010001011110001011010011011001101000000100110001001110000011101001100101100001010010

“humans usually try to fake randomness by alternating patterns”

Let's put the sequences into a depth-limited suffix tree, such that the frequencies of all triplets are encoded in the thickness of branches…
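The triplet-frequency idea is easy to sketch: every overlapping length-3 substring is counted, and those counts are what a depth-3 suffix tree would encode as branch thickness. A minimal Python illustration (the two short bit strings here are illustrative stand-ins for the full sequences above):

```python
from collections import Counter

def triplet_frequencies(bits):
    """Count every overlapping length-3 substring; these counts are what
    a depth-3 suffix tree encodes as branch thickness."""
    return Counter(bits[i:i + 3] for i in range(len(bits) - 2))

human = "101010100101101010010110"      # alternating-heavy, human-like
computer = "100010001010010001010101"   # stand-in for a truly random run

print(triplet_frequencies(human).most_common(3))
print(triplet_frequencies(computer).most_common(3))
```

A human-generated sequence tends to concentrate its mass on the alternating triplets "101" and "010", which shows up as a few very thick branches in the tree.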

Page 4: VizTree

Motivating example

You go to the doctor because of chest pains. Your ECG looks strange…

Your doctor wants to search a database to find similar ECGs, in the hope that they will offer clues about your condition...

Two questions:

• How do we define similar?

• How do we search quickly?

ECG

Page 5: VizTree

Indexing Time Series

We have seen techniques for assessing the similarity of two time series.

However we have not addressed the problem of finding the best match to a query in a large database (other than the lower bounding trick)

The obvious solution, to retrieve and examine every item (sequential scanning), simply does not scale to large datasets.

We need some way to index the data...

[Figure: a database C of 10 time series, numbered 1-10, and a query Q]

Page 6: VizTree

We can project a time series of length n into n-dimensional space.

The first value in C is the X-axis, the second value in C is the Y-axis etc.

One advantage of doing this is that we have abstracted away the details of “time series”, now all query processing can be imagined as finding points in space...
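Under this view, sequential scanning is just a nearest-point computation. A minimal illustration (the data here is random and purely for demonstration):

```python
import numpy as np

# Treat each length-n time series as a point in n-dimensional space;
# query processing then becomes nearest-point search.
rng = np.random.default_rng(0)
database = rng.standard_normal((10, 4))   # 10 series, each of length n = 4
query = rng.standard_normal(4)

dists = np.linalg.norm(database - query, axis=1)  # Euclidean distance to each point
best = int(np.argmin(dists))
print(best, dists[best])
```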

Page 7: VizTree

[Figure: data nodes containing points, grouped into MBRs R1-R9, which are nested inside MBRs R10, R11, R12]

…these nested MBRs are organized as a tree (called a spatial access tree or a multidimensional tree). Examples include the R-tree, Hybrid-Tree etc.

Page 8: VizTree

[Figure: the same nested MBRs R1-R12 and data nodes containing points]

We can use MINDIST(point, MBR) to do fast search:

MINDIST(Q, R10) = 0
MINDIST(Q, R11) = 10
MINDIST(Q, R12) = 17
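MINDIST has a simple closed form: in each dimension take the distance from the point to the box's interval (zero if the point is inside it), then combine the per-dimension gaps as a Euclidean distance. A sketch:

```python
import numpy as np

def mindist(point, lo, hi):
    """Smallest possible Euclidean distance from a point to any point
    inside the axis-aligned MBR [lo, hi]; 0 if the point is inside."""
    point, lo, hi = map(np.asarray, (point, lo, hi))
    gap = np.maximum(lo - point, 0) + np.maximum(point - hi, 0)
    return float(np.sqrt(np.sum(gap ** 2)))

print(mindist([1, 1], [0, 0], [2, 2]))   # a point inside the box: 0.0
print(mindist([5, 6], [0, 0], [2, 2]))   # 3 right and 4 above a corner: 5.0
```

Because MINDIST never overestimates the distance to any point in the box, an MBR whose MINDIST exceeds the best match found so far can be pruned without being opened.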

Page 9: VizTree

If we project a query into n-dimensional space, how many additional (nonempty) MBRs must we examine before we are guaranteed to find the best match?

For the one-dimensional case, the answer is clearly 2...

For the two-dimensional case, the answer is 8...

For the three-dimensional case, the answer is 26...

More generally, in n-dimensional space we must examine 3^n − 1 MBRs.

This is known as the curse of dimensionality.

n = 21 → 10,460,353,202 MBRs
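The growth is easy to see directly (a quick illustration of the 3^n − 1 count, not from the slides):

```python
def mbr_neighbors(n):
    """Worst-case number of additional MBRs to examine in n dimensions:
    the 3**n cells of the query cell's 3 x 3 x ... neighborhood, minus
    the query's own cell."""
    return 3 ** n - 1

for n in (1, 2, 3, 21):
    print(n, mbr_neighbors(n))   # 2, 8, 26, then over ten billion at n = 21
```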

Page 10: VizTree

[Figure: the raw data column (0.4995, 0.5264, 0.5523, …) and a two-number approximation of it (1.5698, 1.0485), labeled "Magic LB data"]

Page 11: VizTree

Motivating example revisited…

You go to the doctor because of chest pains. Your ECG looks strange…

Your doctor wants to search a database to find similar ECGs, in the hope that they will offer clues about your condition...

Two questions:

• How do we define similar?

• How do we search quickly?

ECG

Page 12: VizTree

The Generic Data Mining Algorithm

• Create an approximation of the data, which will fit in main memory, yet retains the essential features of interest

• Approximately solve the problem at hand in main memory

• Make (hopefully very few) accesses to the original data on disk to confirm the solution obtained in Step 2, or to modify the solution so it agrees with the solution we would have obtained on the original data

But which approximation should we use?
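The three steps amount to a filter-and-refine loop. A toy Python sketch (the prefix-truncation "approximation" and all names here are illustrative; a production version would keep examining candidates until the next approximate distance exceeds the best confirmed true distance):

```python
import numpy as np

def best_match(query, approx_db, raw_on_disk, approx_fn, k_candidates=3):
    """Filter-and-refine sketch of the generic algorithm: rank by the cheap
    in-memory approximation, then confirm only a few candidates on 'disk'.
    Correctness relies on approximate distances lower-bounding true ones."""
    qa = approx_fn(query)
    approx_d = np.linalg.norm(approx_db - qa, axis=1)     # step 2: solve approximately
    candidates = np.argsort(approx_d)[:k_candidates]      # a few promising items
    true_d = [np.linalg.norm(raw_on_disk[i] - query) for i in candidates]  # step 3
    return int(candidates[int(np.argmin(true_d))])

# Toy setup: the "approximation" keeps the first 2 of 8 values,
# which can only shrink Euclidean distances (a valid lower bound).
approx_fn = lambda x: x[:2]
rng = np.random.default_rng(1)
data = rng.standard_normal((50, 8))
db_approx = np.array([approx_fn(x) for x in data])
print(best_match(data[7], db_approx, data, approx_fn))
```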

Page 13: VizTree

Time Series Representations

• Data Adaptive
  - Sorted Coefficients
  - Piecewise Polynomial
    - Piecewise Linear Approximation (Interpolation, Regression)
    - Adaptive Piecewise Constant Approximation
  - Singular Value Approximation
  - Symbolic
    - Natural Language
    - Strings (Symbolic Aggregate Approximation; Non-Lower Bounding: Value Based, Slope Based)
  - Trees
• Non Data Adaptive
  - Wavelets
    - Orthonormal (Haar, Daubechies db_n, n > 1)
    - Bi-Orthonormal (Coiflets, Symlets)
  - Random Mappings
  - Spectral (Discrete Fourier Transform, Discrete Cosine Transform, Chebyshev Polynomials)
  - Piecewise Aggregate Approximation
• Data Dictated
  - Grid
  - Clipped Data
• Model Based
  - Hidden Markov Models
  - Statistical Models

Page 14: VizTree

The Generic Data Mining Algorithm (revisited)

• Create an approximation of the data, which will fit in main memory, yet retains the essential features of interest

• Approximately solve the problem at hand in main memory

• Make (hopefully very few) accesses to the original data on disk to confirm the solution obtained in Step 2, or to modify the solution so it agrees with the solution we would have obtained on the original data

This only works if the approximation allows lower bounding

Page 15: VizTree

What is Lower Bounding?

• Recall that we have seen lower bounding for distance measures (DTW and uniform scaling). Lower bounding for representations is a similar idea…

Raw Data (Q, S):

D(Q,S) = sqrt( Σ_{i=1}^{n} (q_i − s_i)² )

Approximation or "Representation" (Q', S'):

DLB(Q',S') = sqrt( Σ_{i=1}^{M} (sr_i − sr_{i−1})(qv_i − sv_i)² )

Lower bounding means that for all Q and S, we have: DLB(Q',S') ≤ D(Q,S)

Page 16: VizTree

In a seminal* paper in SIGMOD 93, I showed that lower bounding of a representation is a necessary and sufficient condition to allow time series indexing, with the guarantee of no false dismissals

Christos' work was originally with indexing time series with the Fourier representation. Since then, there have been hundreds of follow-up papers on other data types, tasks and representations

Page 17: VizTree

An Example of a Dimensionality Reduction Technique I

[Figure: a time series C of 128 points, plotted over 0-140]

0.4995 0.5264 0.5523 0.5761 0.5973 0.6153 0.6301 0.6420 0.6515 0.6596 0.6672 0.6751 0.6843 0.6954 0.7086 0.7240 0.7412 0.7595 0.7780 0.7956 0.8115 0.8247 0.8345 0.8407 0.8431 0.8423 0.8387

… …

RawData

The graphic shows a time series with 128 points.

The raw data used to produce the graphic is also reproduced as a column of numbers (just the first 30 or so points are shown).

n = 128

Page 18: VizTree

An Example of a Dimensionality Reduction Technique II

[Figure: the time series C, plotted with the first few sine waves of its decomposition]

1.5698 1.0485 0.7160 0.8406 0.3709 0.4670 0.2667 0.1928 0.1635 0.1602 0.0992 0.1282 0.1438 0.1416 0.1400 0.1412 0.1530 0.0795 0.1013 0.1150 0.1801 0.1082 0.0812 0.0347 0.0052 0.0017 0.0002

… …

FourierCoefficients

0.4995 0.5264 0.5523 0.5761 0.5973 0.6153 0.6301 0.6420 0.6515 0.6596 0.6672 0.6751 0.6843 0.6954 0.7086 0.7240 0.7412 0.7595 0.7780 0.7956 0.8115 0.8247 0.8345 0.8407 0.8431 0.8423 0.8387 … …

RawData

We can decompose the data into 64 pure sine waves using the Discrete Fourier Transform (just the first few sine waves are shown).

The Fourier Coefficients are reproduced as a column of numbers (just the first 30 or so coefficients are shown).

Note that at this stage we have not done dimensionality reduction, we have merely changed the representation...

Page 19: VizTree

An Example of a Dimensionality Reduction Technique III

[Figure: the time series C, plotted over 0-140]

1.5698 1.0485 0.7160 0.8406 0.3709 0.4670 0.2667 0.1928

TruncatedFourierCoefficients

C'

We have discarded 15/16 of the data.

1.5698 1.0485 0.7160 0.8406 0.3709 0.4670 0.2667 0.1928 0.1635 0.1602 0.0992 0.1282 0.1438 0.1416 0.1400 0.1412 0.1530 0.0795 0.1013 0.1150 0.1801 0.1082 0.0812 0.0347 0.0052 0.0017 0.0002

… …

FourierCoefficients

0.4995 0.5264 0.5523 0.5761 0.5973 0.6153 0.6301 0.6420 0.6515 0.6596 0.6672 0.6751 0.6843 0.6954 0.7086 0.7240 0.7412 0.7595 0.7780 0.7956 0.8115 0.8247 0.8345 0.8407 0.8431 0.8423 0.8387 … …

RawData

… however, note that the first few sine waves tend to be the largest (equivalently, the magnitude of the Fourier coefficients tend to decrease as you move down the column).

We can therefore truncate most of the small coefficients with little effect.

n = 128, N = 8, compression ratio = 1/16

Page 20: VizTree

[Figure: the raw data column (0.4995, 0.5264, …) and its truncated Fourier coefficients (1.5698, 1.0485)]

Page 21: VizTree

An Example of a Dimensionality Reduction Technique IV

1.5698 1.0485 0.7160 0.8406 0.3709 0.4670 0.2667 0.1928

TruncatedFourierCoefficients 1

0.4995 0.5264 0.5523 0.5761 0.5973 0.6153 0.6301 0.6420 0.6515 0.6596 0.6672 0.6751 0.6843 0.6954 0.7086 0.7240 0.7412 0.7595 0.7780 0.7956 0.8115 0.8247 0.8345 0.8407 0.8431 0.8423 0.8387 … …

RawData 1

0.7412 0.7595 0.7780 0.7956 0.8115 0.8247 0.8345 0.8407 0.8431 0.8423 0.8387 0.4995 0.5264 0.5523 0.5761 0.5973 0.6153 0.6301 0.6420 0.6515 0.6596 0.6672 0.6751 0.6843 0.6954 0.7086 0.7240

… …

RawData 2


1.1198 1.4322 1.0100 0.4326 0.5609 0.8770 0.1557 0.4528

TruncatedFourierCoefficients 2


The Euclidean distance between the two truncated Fourier coefficient vectors is always less than or equal to the Euclidean distance between the two raw data vectors*.

So DFT allows lower bounding!

*Parseval's Theorem

D(Q,C) = sqrt( Σ_{i=1}^{n} (q_i − c_i)² )
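This is easy to verify numerically with an orthonormal DFT. A quick illustration (numpy's `norm="ortho"` option makes the transform distance preserving, so a truncated-coefficient distance can only shrink):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(128)
y = rng.standard_normal(128)

# The orthonormal DFT preserves Euclidean distance exactly (Parseval),
# so distance over a truncated coefficient prefix can only be smaller.
X = np.fft.fft(x, norm="ortho")
Y = np.fft.fft(y, norm="ortho")

d_raw = np.linalg.norm(x - y)
d_full = np.linalg.norm(X - Y)           # equals d_raw up to float error
d_trunc = np.linalg.norm(X[:4] - Y[:4])  # keep only the first 4 coefficients

print(d_trunc, d_full, d_raw)
```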

Page 22: VizTree

1.5698 1.0485 0.7160 0.8406 0.3709 0.4670 0.2667 0.1928

TruncatedFourierCoefficients 1

0.4995 0.5264 0.5523 0.5761 0.5973 0.6153 0.6301 0.6420 0.6515 0.6596 0.6672 0.6751 0.6843 0.6954 0.7086 0.7240 0.7412 0.7595 0.7780 0.7956

RawData 1

0.7412 0.7595 0.7780 0.7956 0.8115 0.8247 0.8345 0.8407 0.8431 0.8423 0.8387 0.4995 0.5264 0.5523 0.5761 0.5973 0.6153 0.6301 0.6420 0.6515

RawData 2

1.1198 1.4322 1.0100 0.4326 0.5609 0.8770 0.1557 0.4528

TruncatedFourierCoefficients 2

Mini Review for the Generic Data Mining Algorithm

0.8115 0.8247 0.8345 0.8407 0.8431 0.8423 0.8387 0.4995 0.7412 0.7595 0.7780 0.7956 0.5264 0.5523 0.5761 0.5973 0.6153 0.6301 0.6420 0.6515

RawData n

1.3434 1.4343 1.4643 0.7635 0.5448 0.4464 0.7932 0.2126

TruncatedFourierCoefficients n

We cannot fit all that raw data in main memory. We can fit the dimensionally reduced data in main memory.

So we will solve the problem at hand on the dimensionally reduced data, making a few accesses to the raw data where necessary, and, if we are careful, the lower bounding property will ensure that we get the right answer!

Disk

Main Memory

Page 23: VizTree

[Figure: small time series plots, one per representation]

DFT: Agrawal, Faloutsos & Swami. FODO 1993; Faloutsos, Ranganathan & Manolopoulos. SIGMOD 1994
DWT: Chan & Fu. ICDE 1999
SVD: Korn, Jagadish & Faloutsos. SIGMOD 1997
APCA: Keogh, Chakrabarti, Pazzani & Mehrotra. SIGMOD 2001
PAA: Keogh, Chakrabarti, Pazzani & Mehrotra. KAIS 2000; Yi & Faloutsos. VLDB 2000
PLA: Morinaka, Yoshikawa, Amagasa & Uemura. PAKDD 2001
SAX ("aabbbccb"): Lin, Keogh, Lonardi & Chiu. DMKD 2003

Page 24: VizTree

Discrete Fourier Transform I

Jean Fourier
1768-1830

[Figure: a time series X, its approximation X', and the first ten sine/cosine basis waves, numbered 0-9]

Excellent free Fourier Primer

Hagit Shatkay, "The Fourier Transform - a Primer", Technical Report CS-95-37, Department of Computer Science, Brown University, 1995.

http://www.ncbi.nlm.nih.gov/CBBresearch/Postdocs/Shatkay/

Basic Idea: Represent the time series as a linear combination of sines and cosines, but keep only the first n/2 coefficients.

Why n/2 coefficients? Because each sine wave requires 2 numbers, for the phase (w) and amplitude (A, B).

C(t) = Σ_{k=1}^{n} ( A_k cos(2π w_k t) + B_k sin(2π w_k t) )

Page 25: VizTree

Discrete Fourier Transform II

Pros and Cons of DFT as a time series representation.

• Good ability to compress most natural signals.
• Fast, off-the-shelf DFT algorithms exist. O(n log n).
• (Weakly) able to support time-warped queries.

• Difficult to deal with sequences of different lengths.
• Cannot support weighted distance measures.

[Figure: the time series X, its approximation X', and basis waves 0-9]

Note: The related transform DCT uses only cosine basis functions. It does not seem to offer any particular advantages over DFT.

Page 26: VizTree

Discrete Wavelet Transform I

Alfred Haar
1885-1933

[Figure: the time series X, its DWT approximation X', and the Haar basis functions Haar 0 - Haar 7]

Excellent free Wavelets Primer

Stollnitz, E., DeRose, T., & Salesin, D. (1995). Wavelets for computer graphics, a primer. IEEE Computer Graphics and Applications.

Basic Idea: Represent the time series as a linear combination of Wavelet basis functions, but keep only the first N coefficients.

Although there are many different types of wavelets, researchers in time series mining/indexing generally use Haar wavelets.

Haar wavelets seem to be as powerful as the other wavelets for most problems and are very easy to code.
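They really are easy to code. A sketch of the full Haar decomposition by repeated pairwise averaging and differencing, following the convention used in the Stollnitz et al. primer (averages and semi-differences; an orthonormal, distance-preserving variant would scale by 1/√2 instead of 1/2):

```python
import numpy as np

def haar(x):
    """Full Haar decomposition by repeated pairwise averaging and
    differencing. Input length must be a power of two; the output is
    the overall average followed by detail coefficients, coarse to fine."""
    x = np.asarray(x, dtype=float)
    coeffs = []
    while len(x) > 1:
        avg = (x[0::2] + x[1::2]) / 2
        diff = (x[0::2] - x[1::2]) / 2
        coeffs.append(diff)   # details at this resolution
        x = avg               # recurse on the averages
    coeffs.append(x)          # overall average
    return np.concatenate(coeffs[::-1])

print(haar([9, 7, 3, 5]))     # the primer's worked example: [6, 2, 1, -1]
```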

Page 27: VizTree

Discrete Wavelet Transform II

Ingrid Daubechies
1954-

[Figure: the time series X, its approximation X', and Haar 0 / Haar 1]

We have only considered one type of wavelet; there are many others. Are the other wavelets better for indexing?

YES: I. Popivanov, R. Miller. Similarity Search Over Time Series Data Using Wavelets. ICDE 2002.

NO: K. Chan and A. Fu. Efficient Time Series Matching by Wavelets. ICDE 1999.

Later in this tutorial I will answer this question.

Page 28: VizTree

Discrete Wavelet Transform III

[Figure: the time series X, its approximation X', and Haar 0 - Haar 7]

Pros and Cons of Wavelets as a time series representation.

• Good ability to compress stationary signals.
• Fast linear time algorithms for DWT exist.
• Able to support some interesting non-Euclidean similarity measures.

• Signals must have a length n = 2^some_integer.
• Works best if N is 2^some_integer; otherwise wavelets approximate the left side of the signal at the expense of the right side.
• Cannot support weighted distance measures.

Page 29: VizTree

Singular Value Decomposition I

Eugenio Beltrami 1835-1899, Camille Jordan 1838-1921, James Joseph Sylvester 1814-1897

[Figure: the time series X, its SVD approximation X', and eigenwave 0 - eigenwave 7]

Basic Idea: Represent the time series as a linear combination of eigenwaves, but keep only the first N coefficients.

SVD is similar to the Fourier and Wavelet approaches in that we represent the data in terms of a linear combination of shapes (in this case eigenwaves).

SVD differs in that the eigenwaves are data dependent.

SVD has been successfully used in the text processing community (where it is known as Latent Semantic Indexing) for many years.

Good free SVD Primer

Singular Value Decomposition - A Primer. Sonia Leach

Page 30: VizTree

Singular Value Decomposition II

[Figure: the time series X, its SVD approximation X', and eigenwave 0 - eigenwave 7]

How do we create the eigenwaves?

We have previously seen that we can regard time series as points in high dimensional space.

We can rotate the axes such that axis 1 is aligned with the direction of maximum variance, axis 2 is aligned with the direction of maximum variance orthogonal to axis 1, etc.

Since the first few eigenwaves contain most of the variance of the signal, the rest can be truncated with little loss.

A = U Σ V^T

This process can be achieved by factoring an M by n matrix of time series into 3 other matrices, and truncating the new matrices at size N.
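A numpy sketch of this factor-and-truncate step (random data, illustrative sizes):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((20, 64))        # M = 20 time series of length n = 64

# A = U @ diag(s) @ Vt; the rows of Vt are the data-dependent "eigenwaves",
# ordered by how much variance they capture.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

N = 8                                    # keep only the first N eigenwaves
A_reduced = U[:, :N] * s[:N]             # N coefficients per series
A_approx = A_reduced @ Vt[:N]            # reconstruction from N coefficients

err = np.linalg.norm(A - A_approx)
print(A_reduced.shape, err)
```

Truncating at N gives the best possible rank-N approximation in the Euclidean sense, which is the "optimal linear dimensionality reduction" claim on the next slide.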

Page 31: VizTree

Singular Value Decomposition III

[Figure: the time series X, its SVD approximation X', and eigenwave 0 - eigenwave 7]

Pros and Cons of SVD as a time series representation.

• Optimal linear dimensionality reduction technique.
• The eigenvalues tell us something about the underlying structure of the data.

• Computationally very expensive: Time O(Mn²), Space O(Mn).
• An insertion into the database requires recomputing the SVD.
• Cannot support weighted distance measures or non-Euclidean measures.

Note: There has been some promising research into mitigating SVD's time and space complexity.

Page 32: VizTree

Chebyshev Polynomials

Pafnuty Chebyshev
1821-1894

[Figure: the time series X and its Chebyshev approximation X']

Basic Idea: Represent the time series as a linear combination of Chebyshev Polynomials:

T_0(x) = 1
T_1(x) = x
T_2(x) = 2x² − 1
T_3(x) = 4x³ − 3x
T_4(x) = 8x⁴ − 8x² + 1
T_5(x) = 16x⁵ − 20x³ + 5x
T_6(x) = 32x⁶ − 48x⁴ + 18x² − 1
T_7(x) = 64x⁷ − 112x⁵ + 56x³ − 7x
T_8(x) = 128x⁸ − 256x⁶ + 160x⁴ − 32x² + 1

Pros and Cons of Chebyshev Polynomials as a time series representation.

• Time series can be of arbitrary length.
• Only O(n) time complexity.
• Is able to support multi-dimensional time series*.

• The time axis must be rescaled to the interval [−1, 1].

Page 33: VizTree

Chebyshev Polynomials

[Figure: the time series X and its Chebyshev approximation X']

In 2006, Dr. Ng published a "note of caution" on his webpage, noting that the results in the paper are not reliable:

"…Thus, it is clear that the C++ version contained a bug. We apologize for any inconvenience caused…"

Both Dr. Keogh and Dr. Michail Vlachos independently found Chebyshev Polynomials are slightly worse than other methods on more than 80 different datasets.

Page 34: VizTree

Piecewise Linear Approximation I

Karl Friedrich Gauss
1777-1855

[Figure: the time series X and its piecewise linear approximation X']

Basic Idea: Represent the time series as a sequence of straight lines.

If lines are connected, we are allowed N/2 lines; each line segment has a length and a left_height (the right_height can be inferred by looking at the next segment).

If lines are disconnected, we are allowed only N/3 lines; each line segment has a length, a left_height and a right_height.

Personal experience on dozens of datasets suggests disconnected is better. Also, only disconnected allows a lower bounding Euclidean approximation.

Page 35: VizTree

Piecewise Linear Approximation II

[Figure: the time series X and its piecewise linear approximation X']

How do we obtain the Piecewise Linear Approximation?

The optimal solution is O(n²N), which is too slow for data mining.

A vast body of work on faster heuristic solutions to the problem can be classified into the following classes:
• Top-Down
• Bottom-Up
• Sliding Window
• Other (genetic algorithms, randomized algorithms, B-spline wavelets, MDL etc.)

Extensive empirical evaluation* of all approaches suggests that Bottom-Up is the best approach overall.
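A compact sketch of the Bottom-Up idea: start with the finest segmentation and repeatedly merge the adjacent pair whose merged segment has the smallest least-squares residual. (Illustrative only; real implementations cache and incrementally update merge costs instead of recomputing them each pass.)

```python
import numpy as np

def segment_error(x, lo, hi):
    """Squared residual of the best least-squares line over x[lo:hi+1]."""
    t = np.arange(lo, hi + 1)
    y = x[lo:hi + 1]
    if len(y) < 3:
        return 0.0                       # two points always fit a line exactly
    coef = np.polyfit(t, y, 1)
    return float(np.sum((np.polyval(coef, t) - y) ** 2))

def bottom_up(x, max_segments):
    """Bottom-Up PLA: begin with length-2 segments, then greedily merge
    the cheapest adjacent pair until only max_segments remain."""
    segs = [[i, i + 1] for i in range(0, len(x) - 1, 2)]
    while len(segs) > max_segments:
        costs = [segment_error(x, segs[i][0], segs[i + 1][1])
                 for i in range(len(segs) - 1)]
        i = int(np.argmin(costs))
        segs[i][1] = segs[i + 1][1]      # absorb the right neighbor
        del segs[i + 1]
    return segs

# A triangle wave splits cleanly into its two linear halves.
x = np.concatenate([np.linspace(0, 1, 16), np.linspace(1, 0, 16)])
print(bottom_up(x, 2))
```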

Page 36: VizTree

Piecewise Linear Approximation III

[Figure: the time series X and its piecewise linear approximation X']

Pros and Cons of PLA as a time series representation.

• Good ability to compress natural signals.
• Fast linear time algorithms for PLA exist.
• Able to support some interesting non-Euclidean similarity measures, including weighted measures, relevance feedback, fuzzy queries…
• Already widely accepted in some communities (e.g., biomedical).

• Not (currently) indexable by any data structure (but does allow fast sequential scanning).

Page 37: VizTree

Piecewise Aggregate Approximation I

[Figure: the time series X and its PAA approximation X', with segment means x̄_1 … x̄_8]

Basic Idea: Represent the time series as a sequence of box basis functions. Note that each box is the same length.

x̄_i = (N/n) Σ_{j=(n/N)(i−1)+1}^{(n/N)i} x_j

Given the reduced dimensionality representation we can calculate the approximate Euclidean distance as:

DR(X̄,Ȳ) = sqrt(n/N) · sqrt( Σ_{i=1}^{N} (x̄_i − ȳ_i)² )

This measure is provably lower bounding.

Independently introduced by two authors:

• Keogh, Chakrabarti, Pazzani & Mehrotra, KAIS (2000) / Keogh & Pazzani, PAKDD April 2000

• Byoung-Kee Yi, Christos Faloutsos, VLDB September 2000
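The PAA transform and its lower-bounding distance take only a few lines. A sketch (assuming n is divisible by N):

```python
import numpy as np

def paa(x, N):
    """Piecewise Aggregate Approximation: the mean of each of N
    equal-length frames (n must be divisible by N)."""
    return np.asarray(x, dtype=float).reshape(N, -1).mean(axis=1)

def paa_dist(xb, yb, n):
    """The DR distance above: Euclidean distance over the frame means,
    scaled by sqrt(n/N) so it never exceeds the true distance."""
    N = len(xb)
    return float(np.sqrt(n / N) * np.linalg.norm(np.asarray(xb) - np.asarray(yb)))

rng = np.random.default_rng(4)
x, y = rng.standard_normal(128), rng.standard_normal(128)
xb, yb = paa(x, 8), paa(y, 8)

print(paa_dist(xb, yb, 128), float(np.linalg.norm(x - y)))
```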

Page 38: VizTree

Piecewise Aggregate Approximation II

[Figure: the time series X and its PAA approximation X', with segments X̄_1 … X̄_8]

Pros and Cons of PAA as a time series representation.

• Extremely fast to calculate.
• As efficient as other approaches (empirically).
• Supports queries of arbitrary lengths.
• Can support any Minkowski metric@.
• Supports non-Euclidean measures.
• Supports weighted Euclidean distance.
• Can be used to allow indexing of DTW and uniform scaling*.
• Simple! Intuitive!

• If visualized directly, looks aesthetically unpleasing.

Page 39: VizTree

Adaptive Piecewise Constant Approximation I

[Figure: the time series X and its APCA approximation, with segments <cv1,cr1> … <cv4,cr4>]

Basic Idea: Generalize PAA to allow the piecewise constant segments to have arbitrary lengths. Note that we now need 2 coefficients to represent each segment: its value and its length.

The intuition is this: many signals have little detail in some places and high detail in other places. APCA can adaptively fit itself to the data, achieving a better approximation.

[Figure: 250 points of raw electrocardiogram data and three reconstructions — Adaptive Representation (APCA), Reconstruction Error 2.61; Haar Wavelet or PAA, Reconstruction Error 3.27; DFT, Reconstruction Error 3.11]

Page 40: VizTree

Adaptive Piecewise Constant Approximation II

[Figure: the time series X and its APCA approximation, with segments <cv1,cr1> … <cv4,cr4>]

The high quality of APCA had been noted by many researchers. However, it was believed that the representation could not be indexed because some coefficients represent values and some represent lengths.

However, an indexing method was discovered! (SIGMOD 2001 best paper award)

Unfortunately, it is non-trivial to understand and implement, and thus has only been reimplemented once or twice (in contrast, more than 50 people have reimplemented PAA).

Page 41: VizTree

Adaptive Piecewise Constant Approximation III

[Figure: the time series X and its APCA approximation, with segments <cv1,cr1> … <cv4,cr4>]

Pros and Cons of APCA as a time series representation.

• Fast to calculate: O(n).
• More efficient than other approaches (on some datasets).
• Supports queries of arbitrary lengths.
• Supports non-Euclidean measures.
• Supports weighted Euclidean distance.
• Supports fast exact queries, and even faster approximate queries, on the same data structure.

• Somewhat complex implementation.
• If visualized directly, looks aesthetically unpleasing.

Page 42: VizTree

Clipped Data

Tony Bagnall

[Figure: the time series X and its clipped version X']

Basic Idea: Find the mean of a time series, convert values above the line to "1" and values below the line to "0".

00000000000000000000000000000000000000000000111111111111111111111110000110000001111111111111111111111111111111111111111111111111

Runs of "1"s and "0"s can be further compressed with run-length encoding if desired:

44 Zeros | 23 Ones | 4 Zeros | 2 Ones | 6 Zeros | 49 Ones

This representation does allow lower bounding!

Pros and Cons of Clipped as a time series representation.

• Ultra-compact representation which may be particularly useful for specialized hardware.
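Both steps — clipping about the mean and run-length encoding — fit in a few lines. A sketch (toy data, illustrative names):

```python
import numpy as np
from itertools import groupby

def clip(x):
    """Convert values above the series mean to 1 and the rest to 0."""
    x = np.asarray(x, dtype=float)
    return (x > x.mean()).astype(int)

def run_length_encode(bits):
    """Compress runs of identical bits into (bit, run-length) pairs."""
    return [(int(k), len(list(g))) for k, g in groupby(bits)]

x = [1.0, 2.0, 9.0, 8.0, 9.0, 1.0, 0.0, 2.0]   # mean is 4.0
bits = clip(x)
print(list(bits), run_length_encode(bits))
```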

Page 43: VizTree

Natural Language

[Figure: the time series X, annotated "rise, plateau, followed by a rounded peak"]

Pros and Cons of natural language as a time series representation.

• The most intuitive representation!
• Potentially a good representation for low bandwidth devices like text-messengers.

• Difficult to evaluate.

To the best of my knowledge only one group is working seriously on this representation. They are the University of Aberdeen SUMTIME group, headed by Prof. Jim Hunter.

Page 44: VizTree

Symbolic Approximation I

[Figure: the time series X and its symbolic version X', discretized to the string a a b b b c c b over a value axis 0-7]

Basic Idea: Convert the time series into an alphabet of discrete symbols. Use string indexing techniques to manage the data.

Potentially an interesting idea, but all work thus far is very ad hoc.

Pros and Cons of Symbolic Approximation as a time series representation.

• Potentially, we could take advantage of a wealth of techniques from the very mature fields of string processing and bioinformatics.

• It is not clear how we should discretize the time series (discretize the values, the slope, shapes? How big an alphabet? etc.).

• There are more than 210 different variants of this, at least 35 in data mining conferences.

Page 45: VizTree

SAX: Symbolic Aggregate approXimation

Jessica Lin
1976-

[Figure: the time series X discretized, with alphabet d c b a, to the string aaaaaabbbbbccccccbbccccdddddddd; a second series discretized to baabccbc]

SAX allows (for the first time) a symbolic representation that allows:

• Lower bounding of Euclidean distance

• Dimensionality Reduction

• Numerosity Reduction
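A minimal SAX sketch (alphabet size 4; the breakpoints are the standard-normal quartiles so that, after z-normalization, each symbol is roughly equiprobable — exact conventions vary by implementation):

```python
import numpy as np

# Gaussian breakpoints for alphabet size 4: the N(0,1) quartiles.
BREAKPOINTS = np.array([-0.6745, 0.0, 0.6745])

def sax(x, N, alphabet="abcd"):
    """SAX sketch: z-normalize, reduce with PAA, then map each PAA mean
    to a symbol via the equiprobable Gaussian breakpoints."""
    x = np.asarray(x, dtype=float)
    x = (x - x.mean()) / x.std()                 # z-normalize
    means = x.reshape(N, -1).mean(axis=1)        # PAA (n divisible by N)
    idx = np.searchsorted(BREAKPOINTS, means)    # which of the 4 bins
    return "".join(alphabet[i] for i in idx)

t = np.linspace(0, 2 * np.pi, 128)
print(sax(np.sin(t), 8))                         # a sine becomes a short word
```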

Page 46: VizTree

Comparison of all Dimensionality Reduction Techniques

• We have already compared features (Does representation X allow weighted queries, queries of arbitrary lengths, is it simple to implement…?)

• We can compare the indexing efficiency: how long does it take to find the best answer to our query?

• It turns out that the fairest way to measure this is to count the number of times we have to retrieve an item from disk.

Page 47: VizTree

Data Bias

Definition: Data bias is the conscious or unconscious use of a particular set of testing data to confirm a desired finding.

Example: Suppose you are comparing Wavelets to Fourier methods, the following datasets will produce drastically different results…

[Figure: two time series of 600 points — one good for wavelets, bad for Fourier; the other good for Fourier, bad for wavelets]

Page 48: VizTree

Example of Data Bias: Whom to Believe?

For the task of indexing time series for similarity search, which representation is best, the Discrete Fourier Transform (DFT), or the Discrete Wavelet Transform (Haar)?

• "Several wavelets outperform the DFT."

• "DFT-based and DWT-based techniques yield comparable results."

• "Haar wavelets perform slightly better than DFT."

• "DFT filtering performance is superior to DWT."

Page 49: VizTree

Example of Data Bias: Whom to Believe II?

To find out whom to believe (if anyone) we performed an extraordinarily careful and comprehensive set of experiments. For example…

• We used a quantum mechanical device to generate random numbers.

• We averaged results over 100,000 experiments!

• For fairness, we used the same (randomly chosen) subsequences for both approaches.

Page 50: VizTree

[Figure: bar chart of pruning power (0-1) for DFT vs. HAAR on the Powerplant, Infrasound and Attas (Aerospace) datasets]

"I tested on the Powerplant, Infrasound and Attas datasets, and I know DFT outperforms the Haar wavelet"

Page 51: VizTree

[Figure: bar chart of pruning power (0-0.8) for DFT vs. HAAR on the Network, ERP data and Fetal EEG datasets]

"Stupid Flanders! I tested on the Network, ERP data and Fetal EEG datasets and I know that there is no real difference between DFT and Haar"

Page 52: VizTree

[Figure: bar chart of pruning power (0-0.5) for DFT vs. HAAR on the Chaotic, Earthquake and Wind (3) datasets]

"Those two clowns are both wrong! I tested on the Chaotic, Earthquake and Wind datasets, and I am sure that the Haar wavelet outperforms the DFT"

Page 53: VizTree

The Bottom Line

Any claims about the relative performance of a time series indexing scheme that are empirically demonstrated on only 2 or 3 datasets are worthless.

[Figure: the three pruning-power charts from the previous slides, shown side by side]

Page 54: VizTree

So which is really the best technique?

I experimented with all the techniques (DFT, DCT, Chebyshev, PAA, PLA, PQA, APCA, DWT (most wavelet types), SVD) on 65 datasets, and as a sanity check, Michail Vlachos independently implemented and tested them on the same 65 datasets.

On average, they are all about the same. In particular, on 80% of the datasets they are all within 10% of each other. If you want to pick a representation, don't do so based on the reconstruction error; do so based on the features the representation has.

On bursty datasets* APCA can be significantly better.

Page 55: VizTree

Let's take a tour of other time series problems

• Before we do, let us briefly revisit SAX, since it has some implications for the other problems…

Page 56: VizTree

• One central theme of this tutorial is that lower bounding is a very useful property (recall the lower bounds of DTW/uniform scaling, and the importance of lower bounding dimensionality reduction techniques). • Another central theme is that dimensionality reduction is very important. That's why we spent so long discussing DFT, DWT, SVD, PAA, etc.

• Until last year there was no lower bounding, dimensionality reducing representation of time series. In the next slide, let us think about what it means to have such a representation…

Exploiting Symbolic Representations of Time Series

Page 57: VizTree

• If we had a lowerbounding, dimensionality reducing representation of time series, we could…

• Use data structures that are only defined for discrete data, such as suffix trees.

• Use algorithms that are only defined for discrete data, such as hashing, association rules, etc.

• Use definitions that are only defined for discrete data, such as Markov models, probability theory

• More generally, we could utilize the vast body of research in text processing and bioinformatics

Exploiting Symbolic Representations of Time Series

Page 58: VizTree

Exploiting Symbolic Representations of Time Series

[Figure: a time series approximated by DFT, PLA, Haar, and APCA, and discretized by SAX into symbols a–f over the range −3 to 3]

There is now a lower bounding dimensionality reducing time series representation! It is called SAX (Symbolic Aggregate ApproXimation)

SAX has had a major impact on time series data mining in the last few years…

ffffffeeeddcbaabceedcbaaaaacddee
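A SAX word like the one above is produced by exactly this pipeline: z-normalize, reduce with PAA, then map each segment mean to a symbol via breakpoints chosen under a Gaussian assumption. A minimal sketch (the function name `sax` and the hard-coded breakpoint table are illustrative, not the reference implementation):

```python
import numpy as np

# Standard-normal breakpoints for small alphabets (illustrative subset).
BREAKPOINTS = {3: [-0.43, 0.43], 4: [-0.67, 0.0, 0.67]}

def sax(ts, n_segments, alphabet_size=4):
    """Convert a time series to a SAX word.
    Assumes len(ts) is divisible by n_segments and ts is not constant."""
    ts = np.asarray(ts, dtype=float)
    ts = (ts - ts.mean()) / ts.std()              # z-normalize
    paa = ts.reshape(n_segments, -1).mean(axis=1)  # PAA: frame means
    cuts = BREAKPOINTS[alphabet_size]
    symbols = "abcdefgh"
    # Each PAA value falls between two breakpoints; searchsorted finds its bin.
    return "".join(symbols[np.searchsorted(cuts, v)] for v in paa)
```

For example, a series that jumps from a low level to a high level maps to a word whose first half is early in the alphabet and whose second half is late.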

Page 59: VizTree

Time Series Motif Discovery (finding repeated patterns)

Winding Dataset (the angular speed of reel 2)

[Figure: the time series, x-axis 0–2500]

Are there any repeated patterns, of about this length, in the above time series?

Page 60: VizTree

Time Series Motif Discovery (finding repeated patterns)

Winding Dataset (the angular speed of reel 2)

[Figure: the time series (x-axis 0–2500) with three repeated patterns A, B, C, each about 140 points long]

Page 61: VizTree

· Mining association rules in time series requires the discovery of motifs. These are referred to as primitive shapes and frequent patterns.

· Several time series classification algorithms work by constructing typical prototypes of each class. These prototypes may be considered motifs.

· Many time series anomaly/interestingness detection algorithms essentially consist of modeling normal behavior with a set of typical shapes (which we see as motifs), and detecting future patterns that are dissimilar to all typical shapes.

· In robotics, Oates et al., have introduced a method to allow an autonomous agent to generalize from a set of qualitatively different experiences gleaned from sensors. We see these “experiences” as motifs.

· In medical data mining, Caraca-Valente and Lopez-Chavarrias have introduced a method for characterizing a physiotherapy patient’s recovery based on the discovery of similar patterns. Once again, we see these “similar patterns” as motifs.

• Animation and video capture… (Tanaka and Uehara, Zordan and Celly)

Why Find Motifs?

Page 62: VizTree

Definition 1. Match: Given a positive real number R (called range) and a time series T containing a subsequence C beginning at position p and a subsequence M beginning at q, if D(C, M) ≤ R, then M is called a matching subsequence of C.

Definition 2. Trivial Match: Given a time series T, containing a subsequence C beginning at position p and a matching subsequence M beginning at q, we say that M is a trivial match to C if either p = q or there does not exist a subsequence M’ beginning at q’ such that D(C, M’) > R, and either q < q’< p or p < q’< q.

Definition 3. K-Motif(n,R): Given a time series T, a subsequence length n and a range R, the most significant motif in T (hereafter called the 1-Motif(n,R)) is the subsequence C1 that has the highest count of non-trivial matches (ties are broken by choosing the motif whose matches have the lower variance). The Kth most significant motif in T (hereafter called the K-Motif(n,R)) is the subsequence CK that has the highest count of non-trivial matches, and satisfies D(CK, Ci) > 2R, for all 1 ≤ i < K.

[Figure: Space Shuttle STS-57 telemetry (inertial sensor), x-axis 0–1000, showing a subsequence C of T and its trivial matches]
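The definitions above imply an obvious brute-force algorithm: slide C over T, count matches within range R, and exclude trivial ones. A minimal sketch, simplifying Definition 2 by treating any overlapping match as trivial (function names are hypothetical):

```python
import numpy as np

def count_matches(T, p, n, R):
    """Count simplified non-trivial matches to the length-n subsequence
    starting at p. Simplification of Definition 2: any match whose start
    is within n points of p (i.e. any overlap) is treated as trivial."""
    C = T[p:p + n]
    count = 0
    for q in range(len(T) - n + 1):
        if abs(q - p) < n:                      # skip self and overlapping matches
            continue
        if np.linalg.norm(C - T[q:q + n]) <= R:  # Euclidean distance test
            count += 1
    return count

def one_motif(T, n, R):
    """Brute-force 1-Motif(n, R): the subsequence with the highest match
    count. O(m^2) distance computations, hence far too slow in practice."""
    counts = [count_matches(T, p, n, R) for p in range(len(T) - n + 1)]
    return int(np.argmax(counts)), max(counts)
```

Running this on even a modest series makes the quadratic cost obvious, which motivates the random projection algorithm that follows.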

Page 63: VizTree

OK, we can define motifs, but how do we find them?

The obvious brute force search algorithm is just too slow…

The most referenced algorithm is based on a hot idea from bioinformatics, random projection*, and the fact that SAX allows us to lower bound discrete representations of time series.

* J Buhler and M Tompa. Finding motifs using random projections. In RECOMB'01. 2001.
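The projection idea can be sketched directly: repeatedly pick a random mask of SAX-word columns, bucket the words by their masked symbols, and increment a collision count for every pair of positions that share a bucket. A toy sketch under those assumptions (names and structure are illustrative, not the paper's implementation):

```python
import random
from collections import defaultdict
from itertools import combinations

def random_projection(words, n_masks=2, mask_size=2, seed=0):
    """Sketch of the random projection step over SAX words.
    words: dict mapping subsequence start position -> SAX word.
    Returns a sparse collision matrix as a dict keyed by position pairs."""
    rng = random.Random(seed)
    w = len(next(iter(words.values())))          # word length
    collisions = defaultdict(int)
    for _ in range(n_masks):
        cols = sorted(rng.sample(range(w), mask_size))  # random mask, e.g. {1, 2}
        buckets = defaultdict(list)
        for pos, word in words.items():
            key = "".join(word[c] for c in cols)        # project onto masked columns
            buckets[key].append(pos)
        for bucket in buckets.values():                 # all pairs in a bucket collide
            for i, j in combinations(sorted(bucket), 2):
                collisions[(i, j)] += 1
    return dict(collisions)
```

Positions whose words agree except for a few noisy symbols collide under many masks, so their collision counts grow large while unrelated pairs stay near the expected background level.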

Page 64: VizTree

A simple worked example of the motif discovery algorithm (the next 4 slides)

Assume that we have a time series T of length 1,000 (m = 1000), and a motif of length 16, which occurs twice, at time T1 and time T58.

[Figure: each subsequence C1 of T is converted to a SAX word (a = 3 {a,b,c}, n = 16, w = 4); the words (abab, cccc, bacc, abac, acba, …) are stacked in a matrix S^, one row per starting position 1, 2, …, 58, …, 985]

Page 65: VizTree

A mask {1,2} was randomly chosen, so the values in columns {1,2} were used to project the matrix into buckets.

Collisions are recorded by incrementing the appropriate location in the collision matrix.

[Figure: the buckets under mask {1,2} and the collision matrix over positions 1, 2, 58, 985 after the first increment]

Page 66: VizTree

A mask {2,4} was randomly chosen, so the values in columns {2,4} were used to project the matrix into buckets.

Once again, collisions are recorded by incrementing the appropriate location in the collision matrix.

[Figure: the buckets under mask {2,4} and the updated collision matrix over positions 1, 2, 58, 985]

Page 67: VizTree

We can calculate the expected values in the matrix, assuming there are NO patterns…

[Figure: the observed collision matrix over positions 1, 2, 58, 985, with one large entry (27), shown alongside the formula E(k, a, w, d, t) for the expected collision counts under the no-pattern assumption]

The Generic Data Mining Algorithm

• Create an approximation of the data, which will fit in main memory, yet retains the essential features of interest

• Approximately solve the problem at hand in main memory

• Make (hopefully very few) accesses to the original data on disk to confirm the solution obtained in Step 2, or to modify the solution so it agrees with the solution we would have obtained on the original data

But which approximation should we use?

This is a good example of the Generic Data Mining Algorithm…

Page 68: VizTree

A Simple Experiment

Let us embed two motifs into a random walk time series, and see if we can recover them

[Figure: a random walk of length 1200 with four highlighted subsequences A, B, C, D, each about 120 points long]

Page 69: VizTree

Planted Motifs

[Figure: the planted subsequences A, B, C, D overlaid in pairs]

Page 70: VizTree

“Real” Motifs

[Figure: the two recovered motifs, each about 120 points long]

Page 71: VizTree

Some Examples of Real Motifs

[Figure: motifs discovered in Motor 1 (DC current), x-axis 0–2000, and in astrophysics data (photon count), x-axis 2500–6500]

Page 72: VizTree
Page 73: VizTree

Winding Dataset (the angular speed of reel 2)

[Figure: the time series (x-axis 0–2500) with the three motifs A, B, C marked]

Finding these 3 motifs requires about 6,250,000 calls to the Euclidean distance function

Motif Discovery Challenges

How can we find motifs…
• Without having to specify the length/other parameters
• In massive datasets
• While ignoring “background” motifs (ECG example)
• Under time warping, or uniform scaling
• While assessing their significance

Page 74: VizTree

TGGCCGTGCTAGGCCCCACCCCTACCTTGCAGTCCCCGCAAGCTCATCTGCGCGAACCAGAACGCCCACCACCCTTGGGTTGAAATTAAGGAGGCGGTTGGCAGCTTCCCAGGCGCACGTACCTGCGAATAAATAACTGTCCGCACAAGGAGCCCGACGATAGTCGACCCTCTCTAGTCACGACCTACACACAGAACCTGTGCTAGACGCCATGAGATAAGCTAACACAAAAACATTTCCCACTACTGCTGCCCGCGGGCTACCGGCCACCCCTGGCTCAGCCTGGCGAAGCCGCCCTTCA

The DNA of two species…

CCGTGCTAGGGCCACCTACCTTGGTCCGCCGCAAGCTCATCTGCGCGAACCAGAACGCCACCACCTTGGGTTGAAATTAAGGAGGCGGTTGGCAGCTTCCAGGCGCACGTACCTGCGAATAAATAACTGTCCGCACAAGGAGCCGACGATAAAGAAGAGAGTCGACCTCTCTAGTCACGACCTACACACAGAACCTGTGCTAGACGCCATGAGATAAGCTAACA

Page 75: VizTree

[Figure: the second species' DNA string mapped into a quadrant grid (C T / A G, applied recursively), with cell frequencies shown on a color scale from 0.20 to 0.30]

Page 76: VizTree

[Figure: the DNA string mapped into the recursive quadrant grid: level 1 labels the quadrants C, T, A, G; level 2 subdivides them into CC, CT, TC, TT, CA, CG, TA, TG, AC, AT, GC, GT, AA, AG, GA, GG; level 3 subdivides again into CCC, CCT, CTC, CCA, CCG, CTA, CAC, CAT, CAA, …]

Page 77: VizTree

[Figure: the first species' DNA string mapped into the quadrant grid; each cell is colored by the frequency of its subword (values such as 0.02, 0.03, 0.04, 0.07, 0.09, 0.11), using a color scale from 0 to 1]
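The coloring above is driven by nothing more than normalized subword frequencies. A minimal sketch of that counting step (the recursive quadrant layout itself is omitted; `dna_bitmap` is a hypothetical name):

```python
from itertools import product

def dna_bitmap(seq, k=2):
    """Count all k-mers over {A, C, G, T} and normalize to [0, 1],
    so each count can be rendered as a pixel color in the bitmap."""
    counts = {"".join(p): 0 for p in product("ACGT", repeat=k)}
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        if kmer in counts:          # skip k-mers containing other characters
            counts[kmer] += 1
    top = max(counts.values()) or 1  # avoid division by zero on empty input
    return {kmer: c / top for kmer, c in counts.items()}
```

Two sequences from related species yield similar frequency tables, and hence visually similar bitmaps, which is the point of the comparison on these slides.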

Page 78: VizTree


OK. Given any DNA string I can make a colored bitmap, so what?

Page 79: VizTree
Page 80: VizTree

Two Questions

• Can we do something similar for time series?

• Would it be useful?

Page 81: VizTree

Can we make bitmaps for time series?

Yes, with SAX!

[Figure: Time Series Bitmap: the SAX string accbabcdbcabdbcadbacbdbdcadbaacb… mapped into a recursive quadrant grid labeled a, b, c, d at level 1; aa, ab, ba, bb, ac, ad, bc, bd, ca, cb, da, db, cc, cd, dc, dd at level 2; aaa, aab, aba, aac, aad, abc, aca, acb, acc, … at level 3]

Page 82: VizTree

Time Series Bitmaps

Imagine we have the following SAX strings…

abcdba bdbadb cbabca

There are 5 “a”, 7 “b”, 3 “c”, and 3 “d”. We can paint the pixels based on the frequencies:

a: 5  b: 7
c: 3  d: 3
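The level-1 counting on this slide is easy to reproduce. A sketch that rebuilds the slide's 2×2 bitmap from the three SAX strings (names are illustrative):

```python
from collections import Counter

def bitmap_level1(sax_strings):
    """Level-1 time series bitmap: raw symbol counts, arranged as on
    the slide (a b on the top row, c d on the bottom row)."""
    counts = Counter("".join(sax_strings))
    return [[counts["a"], counts["b"]],
            [counts["c"], counts["d"]]]
```

Deeper bitmap levels just repeat this with 2-grams, 3-grams, and so on, exactly as in the DNA case.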

Page 83: VizTree

While they are all examples of EEGs, example_a.dat is from a normal trace, whereas the others contain examples of spike-wave discharges.

Page 84: VizTree

We can further enhance the time series bitmaps by arranging the thumbnails by “cluster”, instead of arranging by date, size, name, etc.

We can achieve this with MDS.

[Figure: One Year of Italian Power Demand; twelve monthly bitmap thumbnails (Jan.txt through Dec.txt) arranged by cluster via MDS, with January, August, and December called out]
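Classical MDS needs only a matrix of pairwise distances between the bitmaps to place the thumbnails in 2-D. A minimal numpy sketch of classical (Torgerson) MDS, assuming the bitmap distances D have already been computed:

```python
import numpy as np

def classical_mds(D, dims=2):
    """Classical MDS: embed items in `dims` dimensions from a pairwise
    distance matrix D, so similar bitmaps land near each other."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered squared distances
    vals, vecs = np.linalg.eigh(B)           # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:dims]      # keep the largest eigenvalues
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))
```

The returned coordinates are defined only up to rotation and reflection, which is fine for arranging thumbnails on screen.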

Page 85: VizTree

A well-known dataset, Kalpakis_ECG, allegedly contains 70 ECGs.

If we view them as time series bitmaps, a handful stand out…

[Figure: bitmap thumbnails for normal1.txt through normal18.txt]

Page 86: VizTree

Some of the data are not heartbeats! They are the action potentials of a normal pacemaker cell.

[Figure: an action potential trace (x-axis 0–500) annotated with its phases: ventricular depolarization, initial rapid repolarization, the “plateau” stage, and the repolarization/recovery phase, shown next to the bitmap thumbnails normal1.txt through normal18.txt]

Page 87: VizTree

We can test how much useful information is retained in the bitmaps by using only the bitmaps for clustering/classification/anomaly detection

Page 88: VizTree

[Figure: a dendrogram built from the bitmaps alone; the 20 leaves group into the four clusters 1–5, 11–15, 6–10, and 16–20]


Data Key

Cluster 1 (datasets 1–5): BIDMC Congestive Heart Failure Database (chfdb), record chf02; start times at 0, 82, 150, 200, 250, respectively

Cluster 2 (datasets 6–10): BIDMC Congestive Heart Failure Database (chfdb), record chf15; start times at 0, 82, 150, 200, 250, respectively

Cluster 3 (datasets 11–15): Long Term ST Database (ltstdb), record 20021; start times at 0, 50, 100, 150, 200, respectively

Cluster 4 (datasets 16–20): MIT-BIH Noise Stress Test Database (nstdb), record 118e6; start times at 0, 50, 100, 150, 200, respectively

Page 89: VizTree

[Figure: a dendrogram over 30 datasets built from the bitmaps alone]

Page 90: VizTree

Here the bitmaps are almost the same.

Here the bitmaps are very different. This is the most unusual section of the time series, and it coincides with the PVC.

Here is a Premature Ventricular Contraction (PVC)
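The PVC stands out because the symbol frequencies in its window differ from those of a normal window. A hedged sketch of one plausible score, the Euclidean distance between the level-1 bitmaps of two SAX words (not necessarily the exact scoring used on the slide):

```python
import numpy as np
from collections import Counter

def bitmap_distance(word_a, word_b, alphabet="abcd"):
    """Dissimilarity between two SAX words: Euclidean distance between
    their normalized symbol-frequency vectors (level-1 bitmaps)."""
    ca, cb = Counter(word_a), Counter(word_b)
    va = np.array([ca[s] / len(word_a) for s in alphabet])
    vb = np.array([cb[s] / len(word_b) for s in alphabet])
    return float(np.linalg.norm(va - vb))
```

Sliding two adjacent windows along the ECG and plotting this distance yields a score that spikes at the PVC while staying near zero over normal beats.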

Page 91: VizTree

[Figure: ECG with annotations by a cardiologist: two premature ventricular contractions and a supraventricular escape beat]

Page 92: VizTree

The Last Word

The sun is setting on all other symbolic representations of time series; SAX is the only way to go.