Page 1

Tensorflow Tutorial

IST 597

Kaixuan Zhang

Page 2

Overview

• Principal Component Analysis (PCA)

• Support Vector Machine (SVM)

• Softmax Regression

Page 3

Reference

• http://www.ccs.neu.edu/home/vip/teach/MLcourse/5_features_dimensions/lecture_notes/PCA/PCA.pdf (PCA)

• http://cs229.stanford.edu/notes/cs229-notes3.pdf (SVM)

• http://www.sfs.uni-tuebingen.de/~ddekok/dl4nlp/softmax-regression.pdf (softmax regression)

Page 4

PCA

Unsupervised Learning, Dimensionality Reduction, Manifold Learning

Page 5

PCA algorithm

• Data Normalization
• Covariance Matrix
• Eigenvalue and Eigenvector
• Eigenvector Selection
• Dimensionality Reduction

$$X' = \frac{X - \mu}{\sigma}$$

$$\Sigma = \frac{1}{n}\, X'^{T} X'$$

$$U = \operatorname{eigenvectors}(\Sigma)$$

$$X_{ld} = X' U_{\text{reduce}}$$
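The same five steps can be written directly in NumPy; the TensorFlow version follows on the next slides. A minimal sketch (the names pca_reduce, k, and U_reduce are illustrative, not from the slides):

import numpy as np

def pca_reduce(X, k):
    """Project X (n x d) onto its top-k principal components."""
    # 1. Data normalization: zero mean, unit variance per feature
    X_norm = (X - X.mean(axis=0)) / X.std(axis=0)
    n = X_norm.shape[0]
    # 2. Covariance matrix
    sigma = X_norm.T @ X_norm / n
    # 3. Eigenvalues and eigenvectors (eigh returns eigenvalues in ascending order)
    eigvals, eigvecs = np.linalg.eigh(sigma)
    # 4. Eigenvector selection: keep the k eigenvectors with the largest eigenvalues
    U_reduce = eigvecs[:, -k:]
    # 5. Dimensionality reduction
    return X_norm @ U_reduce

X = np.random.uniform(-5, 3, size=(10, 3))
X_ld = pca_reduce(X, k=2)   # shape (10, 2)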

Page 6

[x_mean, x_varia] = tf.nn.moments(X, axes = [0])

X_norm = tf.nn.batch_normalization(X, x_mean, x_varia, offset = 0, scale = 1, variance_epsilon = 0)

$$f(X, \mu, \sigma^2, \beta, \gamma, \varepsilon) = \gamma\,\frac{X - \mu}{\sqrt{\sigma^2 + \varepsilon}} + \beta$$

γ : scale
β : offset
ε : variance_epsilon
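As a quick sanity check (an addition, not from the slides): with scale = 1, offset = 0, and variance_epsilon = 0 the call should reduce to plain standardization (X − µ)/σ.

import numpy as np
import tensorflow as tf

X = tf.constant(np.random.randn(10, 3), dtype=tf.float32)
x_mean, x_varia = tf.nn.moments(X, axes=[0])

# With scale=1, offset=0, variance_epsilon=0 this is exactly (X - mu) / sigma
X_norm = tf.nn.batch_normalization(X, x_mean, x_varia,
                                   offset=0, scale=1, variance_epsilon=0)
X_manual = (X - x_mean) / tf.sqrt(x_varia)

with tf.Session() as sess:
    a, b = sess.run([X_norm, X_manual])
    print(np.allclose(a, b, atol=1e-5))   # expected: True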

Page 7

import tensorflow as tf

K = 2
# 10 samples with 3 features, drawn uniformly from [-5, 3)
X = tf.Variable(tf.random_uniform([10, 3], -5, 3), dtype=tf.float32)

# Data normalization
[x_mean, x_varia] = tf.nn.moments(X, axes=[0])
X_norm = tf.nn.batch_normalization(X, x_mean, x_varia, offset=0, scale=1, variance_epsilon=0)

# Covariance matrix (n = 10 samples)
X_sigma = tf.matmul(X_norm, X_norm, transpose_a=True) / 10

# Eigen-decomposition; eigenvalues come back in ascending order
[eigenvalues, eigenvectors] = tf.self_adjoint_eig(X_sigma)

# Keep the K eigenvectors with the largest eigenvalues
eigenvectors_sel = eigenvectors[:, 3-K:3]

# Dimensionality reduction and reconstruction
X_prime = tf.matmul(X_norm, eigenvectors_sel)
X_new = tf.matmul(X_prime, eigenvectors_sel, transpose_b=True)

with tf.Session() as sess:
    tf.global_variables_initializer().run()
    print(sess.run(X_prime))

Page 8

PCA in Sklearn

from sklearn.decomposition import PCA

pca = PCA(n_components = 1)

pca.fit(X)

X_pca = pca.transform(X)

X_new = pca.inverse_transform(X_pca)
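For comparison with the TensorFlow version, a minimal end-to-end run on random data (the X below is illustrative, not from the slides):

import numpy as np
from sklearn.decomposition import PCA

X = np.random.uniform(-5, 3, size=(10, 3))

pca = PCA(n_components=1)
pca.fit(X)
X_pca = pca.transform(X)               # shape (10, 1): projected data
X_new = pca.inverse_transform(X_pca)   # shape (10, 3): reconstruction
print(pca.explained_variance_ratio_)

Note that sklearn's PCA centers the data but does not divide by the standard deviation, so its output differs from the normalized TensorFlow version unless the features are standardized first.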

Page 9

SVM

Supervised Learning, Kernel Method

Hinge Loss Function

$$\max\left(0,\; 1 - y_i(\vec{w} \cdot \vec{x}_i - b)\right)$$

Loss Function

$$\frac{1}{n}\sum_{i=1}^{n} \max\left(0,\; 1 - y_i(\vec{w} \cdot \vec{x}_i - b)\right) + \alpha\,\|\vec{w}\|^2$$
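This is the loss the TensorFlow code on the following slides builds; a small standalone sketch of the same expression (the names x, y, w, b, alpha are illustrative, and the shapes match the Iris example):

import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 2])   # features
y = tf.placeholder(tf.float32, shape=[None, 1])   # labels in {-1, +1}
w = tf.Variable(tf.random_normal([2, 1]))
b = tf.Variable(tf.random_normal([1, 1]))
alpha = 0.01

margin = tf.matmul(x, w) - b
# Hinge loss: max(0, 1 - y * (w.x - b)), averaged over the batch
hinge = tf.reduce_mean(tf.maximum(0., 1. - y * margin))
# Add the L2 regularization term alpha * ||w||^2
svm_loss = hinge + alpha * tf.reduce_sum(tf.square(w))

The Build Model slide below expresses exactly this with classification_term and l2_norm.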

Page 10

Load Data

import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from sklearn import datasets

sess = tf.Session()

# iris.data = [(Sepal Length, Sepal Width, Petal Length, Petal Width)]
iris = datasets.load_iris()
x_vals = np.array([[x[0], x[3]] for x in iris.data])            # sepal length, petal width
y_vals = np.array([1 if y == 0 else -1 for y in iris.target])   # +1 for setosa, -1 otherwise

Page 11

Data Preprocessing

# 80/20 train/test split
train_indices = np.random.choice(len(x_vals), round(len(x_vals)*0.8), replace=False)
test_indices = np.array(list(set(range(len(x_vals))) - set(train_indices)))
x_vals_train = x_vals[train_indices]
x_vals_test = x_vals[test_indices]
y_vals_train = y_vals[train_indices]
y_vals_test = y_vals[test_indices]

batch_size = 100
x_data = tf.placeholder(shape=[None, 2], dtype=tf.float32)
y_target = tf.placeholder(shape=[None, 1], dtype=tf.float32)

Page 12

Build Model

A = tf.Variable(tf.random_normal(shape=[2, 1]))
b = tf.Variable(tf.random_normal(shape=[1, 1]))
model_output = tf.subtract(tf.matmul(x_data, A), b)

# Regularization term: ||A||^2
l2_norm = tf.reduce_sum(tf.square(A))
alpha = tf.constant([0.01])

# Hinge loss: mean over the batch of max(0, 1 - y * (x.A - b))
classification_term = tf.reduce_mean(tf.maximum(0., tf.subtract(1., tf.multiply(model_output, y_target))))
loss = tf.add(classification_term, tf.multiply(alpha, l2_norm))

prediction = tf.sign(model_output)
accuracy = tf.reduce_mean(tf.cast(tf.equal(prediction, y_target), tf.float32))

train_step = tf.train.GradientDescentOptimizer(0.01).minimize(loss)
init = tf.global_variables_initializer()
sess.run(init)

Page 13

Model Training

for i in range(1000):
    # Sample a random mini-batch from the training set
    rand_index = np.random.choice(len(x_vals_train), size=batch_size)
    rand_x = x_vals_train[rand_index]
    rand_y = np.transpose([y_vals_train[rand_index]])
    sess.run(train_step, feed_dict={x_data: rand_x, y_target: rand_y})

    temp_loss = sess.run(loss, feed_dict={x_data: rand_x, y_target: rand_y})
    train_acc_temp = sess.run(accuracy, feed_dict={x_data: x_vals_train,
                                                   y_target: np.transpose([y_vals_train])})
    test_acc_temp = sess.run(accuracy, feed_dict={x_data: x_vals_test,
                                                  y_target: np.transpose([y_vals_test])})
    if (i+1) % 50 == 0:
        print('Acc = ', test_acc_temp)

Page 14

Visualization

[[a1], [a2]] = sess.run(A)
[[b]] = sess.run(b)

# Decision boundary: a1*SepalLength + a2*PetalWidth - b = 0,
# solved for sepal length as a function of petal width
slope = -a2/a1
y_intercept = b/a1

best_fit = []
x1_vals = [d[1] for d in x_vals]
for i in x1_vals:
    best_fit.append(slope*i + y_intercept)

Page 15

setosa_x = [d[1] for i, d in enumerate(x_vals) if y_vals[i] == 1]
setosa_y = [d[0] for i, d in enumerate(x_vals) if y_vals[i] == 1]
not_setosa_x = [d[1] for i, d in enumerate(x_vals) if y_vals[i] == -1]
not_setosa_y = [d[0] for i, d in enumerate(x_vals) if y_vals[i] == -1]

plt.plot(setosa_x, setosa_y, 'o', label='I. setosa')
plt.plot(not_setosa_x, not_setosa_y, 'x', label='Non-setosa')
plt.plot(x1_vals, best_fit, 'r-', label='Linear Separator', linewidth=3)
plt.ylim([0, 10])
plt.legend(loc='lower right')
plt.title('Sepal Length vs Petal Width')
plt.xlabel('Petal Width')
plt.ylabel('Sepal Length')
plt.show()

Page 16

Modified National Institute of Standards and Technology (MNIST) dataset

• Large database of handwritten digits
• A training set of 60,000 examples and a test set of 10,000 examples
• Normalized to fit into a 28×28 pixel bounding box

https://www.lri.fr/~marc/Master2/MNIST_doc.pdf
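A minimal sketch of what the arrays look like once loaded with the same tensorflow.examples.tutorials.mnist reader used on the next slides (an illustration, not from the slides):

from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
print(mnist.train.images.shape)   # (55000, 784): 28*28 pixels flattened; the loader
                                  # holds out 5,000 of the 60,000 training images for validation
print(mnist.train.labels.shape)   # (55000, 10): one-hot digit labels
print(mnist.test.images.shape)    # (10000, 784)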

Page 17

Softmax Regression

Generalization of Logistic Regression

Softmax Function

$$h_\theta(x) =
\begin{bmatrix}
P(y = 1 \mid x; \theta) \\
P(y = 2 \mid x; \theta) \\
\vdots \\
P(y = K \mid x; \theta)
\end{bmatrix}
= \frac{1}{\sum_{j=1}^{K} \exp(\theta^{(j)T} x)}
\begin{bmatrix}
\exp(\theta^{(1)T} x) \\
\exp(\theta^{(2)T} x) \\
\vdots \\
\exp(\theta^{(K)T} x)
\end{bmatrix}$$
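A short NumPy sketch of the same function; the max-subtraction for numerical stability is an addition for illustration, not part of the slide's formula:

import numpy as np

def softmax(theta, x):
    """h_theta(x): vector of P(y = k | x; theta) for k = 1..K.

    theta: (K, d) matrix whose rows are theta^(k); x: (d,) input vector.
    """
    logits = theta @ x
    logits -= logits.max()   # numerical stability; does not change the result
    exp = np.exp(logits)
    return exp / exp.sum()

theta = np.random.randn(10, 784)
x = np.random.randn(784)
p = softmax(theta, x)
print(p.sum())   # 1.0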

Page 18

import tensorflow as tf
import numpy as np
from tensorflow.examples.tutorials.mnist import input_data

input_dim = 784    # 28*28 pixels, flattened
output_dim = 10    # digit classes 0-9
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
trX, trY, teX, teY = mnist.train.images, mnist.train.labels, mnist.test.images, mnist.test.labels

X = tf.placeholder("float", [None, input_dim])
Y = tf.placeholder("float", [None, output_dim])

w = tf.Variable(tf.random_normal([input_dim, output_dim], stddev=0.01))
b = tf.Variable(tf.zeros([output_dim]) + 0.1)
py_x = tf.matmul(X, w) + b   # logits; the softmax is applied inside the loss

Page 19

loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=py_x, labels=Y))
train_op = tf.train.GradientDescentOptimizer(0.05).minimize(loss)
predict_op = tf.argmax(py_x, 1)

with tf.Session() as sess:
    tf.global_variables_initializer().run()
    for i in range(100):
        # Mini-batches of 128 examples
        for start, end in zip(range(0, len(trX), 128), range(128, len(trX)+1, 128)):
            sess.run(train_op, feed_dict={X: trX[start:end], Y: trY[start:end]})
        # Test-set accuracy after each epoch
        print(i, np.mean(np.argmax(teY, axis=1) == sess.run(predict_op, feed_dict={X: teX})))

Page 20