Diagnosis of defects in analog circuits using neural networks


SEMINAR ON DATA MINING AND MACHINE LEARNING APPROACHES FOR SEMICONDUCTOR TEST AND DIAGNOSIS

SUMMER TERM 2014

PRIYADHARSHINI UVARAJ, M.Sc. INFOTECH

Supervised by Dr. RAFAL BARANOWSKI

Examined by Prof. Dr. rer. nat. habil. Hans-Joachim Wunderlich

Seminar talk given on 14.05.2014


TABLE OF CONTENTS

1. Introduction
2. Analog circuit diagnosis
   2.1 Fault models
3. Fault classification with neural networks
   3.1 Modular diagnosis
4. Pre-processing and feature selection
   4.1 Wavelet transform
   4.2 Principal component analysis and normalization
   4.3 Feature selection
5. Neural network synthesis
   5.1 Construction
   5.2 Training phase
6. Experimental evaluation
   6.1 Experimental setup
   6.2 Diagnostic accuracy
   6.3 Comparison of traditional and modular approach
7. Conclusion


1. Introduction

Research in fault diagnosis of analog circuits is motivated by the fact that analog circuits are difficult to diagnose due to poor fault models, component tolerances, and non-linear effects. Because of this complexity, a direct relation between a fault and its consequence cannot easily be drawn; establishing one requires an engineer with detailed knowledge of the circuit's operational characteristics and experience in developing diagnostic strategies. Unlike digital circuits, which have stuck-at-one or stuck-at-zero fault models, analog circuits lack good fault models because of non-linear effects. If, for example, the parameter value of a component in a circuit changes by some factor, the circuit's output does not necessarily change by the same factor; the relationship between component characteristics and circuit response is non-linear. This again underlines the need for an engineer's detailed knowledge. As a result, analog fault detection, identification, and diagnosis are still time-consuming processes. [1]

Diagnosis has been carried out on different bases, which can be classified as rule-based, fault-model-based, and behavioural-model-based. Rule-based systems require the experience of skilled diagnosticians to form rules of the form "IF symptoms, THEN faults" [8]. Their disadvantage is that they can only locate the faulty block in a larger system; they cannot diagnose faults down to the component level. Behavioural-model-based techniques rely on generating an approximate behavioural model of the circuit. During fault diagnosis, this reference model is repeatedly sampled until its response matches the faulty response of the circuit; the faulty component is identified once the responses match. A major disadvantage of this method is that the search for a match can be computationally intensive. The fault-model technique is based on previously generated fault hypotheses: a fault dictionary is prepared that contains fault-measurement pairs (a fault and its output response), generated by sequentially simulating the circuit while injecting a single fault at a time. During diagnosis, the fault dictionary is used to locate faults. [8]

The application of machine learning techniques such as neural networks to the diagnosis of faults in analog circuits is appealing because it requires neither an explicit model nor a comprehensive examination of these faults. Neural networks can be trained to diagnose single or multiple faults in linear or non-linear circuits as long as the features associated with these faults are separable from each other. Neural networks overcome many difficulties associated with analog fault diagnosis and offer a very promising approach to future advances in this area.

The backpropagation algorithm is used to train the neural network: the artificial neurons are organized in layers and send their signals "forward", and the errors are then propagated backwards. Once the network is trained, online fault detection and diagnosis are fast. [4]


The report is organized as follows. The overall methodology for analog circuit diagnosis and the fault model are described in Section 2. Fault classification using neural networks and modular diagnosis are described in Section 3. Pre-processing techniques are described in Section 4. Neural network construction and the training phase are described in Section 5. The experimental evaluation and diagnostic accuracy are discussed in Section 6.

2. Analog circuit diagnosis

The objective of this study is to diagnose defects in analog circuits. This is challenging due to poor fault models, non-linear effects, and the tolerances of the components in the analog circuit. These difficulties are overcome by a machine learning approach, which gives the system the ability to learn and acquire knowledge, so that the diagnosis can be made without a comprehensive model. The diagnosis is done by training the machine learning algorithm with pre-determined data. Through this training, the algorithm learns the dependencies or patterns that exist between input and output, and it uses this knowledge to predict the output for new input values. The diagnosis targets parametric faults in the analog circuit. A parametric fault is, in general, a deviation of a component value from its nominal value that changes the output of the circuit. The fault model is obtained by injecting a single fault at a time into the circuit and measuring the response; it identifies the target faults.

The diagnosis is performed by applying an impulse to the circuit under test and measuring the response. Depending upon the fault induced in the circuit, the impulse response of the Circuit under Test (CUT) changes. The different faulty components form the fault classes, and the impulse response of the CUT forms the features. The classifier is trained using these data.

The steps in the diagnosis of an analog circuit are given in Fig. 1:

[Figure: block diagram. An impulse signal is applied to the Circuit Under Test; the measured features are pre-processed and passed to a classifier, which outputs the fault class.]

Fig 1. Diagnosis of analog circuit. [9]

1) An input stimulus (signal) is applied to the circuit under test, and the output response of the circuit is measured; this response constitutes the features.

2) This process is repeated for various faulty and fault-free conditions. The output response can be node voltages, the transient response, or the frequency response of the circuit.


The choice of which output to extract from the circuit depends on how uniquely that output reflects the faults in the circuit; along with uniqueness, the computational overhead has to be kept to a minimum.

3) Once the features are extracted from the analog circuit, they are pre-processed to reduce the input space.

4) With the input features and the output (the corresponding fault class) defined in the previous steps, a fault model is constructed and a classifier is trained on it.

5) The classifier is trained in such a way that, during the testing phase, it can identify the fault from the test data; a minimal code sketch of this flow is given after the list.
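The following Python sketch illustrates this flow under stated assumptions: the impulse responses and fault-class labels are placeholders generated randomly, and scikit-learn's MLPClassifier stands in for the backpropagation-trained network described later; it is not the exact setup of the referenced papers.

```python
# Minimal sketch of the diagnosis flow (placeholder data, illustrative only).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder data: 200 measured impulse responses (128 samples each)
# and their fault-class labels, e.g. "NF", "R2_high", "C1_low", ...
responses = rng.normal(size=(200, 128))            # stand-in for measured CUT outputs
labels = rng.choice(["NF", "R2_high", "C1_low"], size=200)

# Pre-processing would normally go here (wavelet transform, PCA, normalization);
# for brevity this sketch feeds the raw responses as features.
X_train, X_test, y_train, y_test = train_test_split(responses, labels, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(6,), activation="logistic", max_iter=2000)
clf.fit(X_train, y_train)                              # training phase
print("diagnosed classes:", clf.predict(X_test[:5]))   # testing phase
print("accuracy:", clf.score(X_test, y_test))
```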

The classifier can be realized with many methods. One of them is a neural network, which is used here because of advantages such as robustness and strong learning ability. This learning ability lets the neural network acquire knowledge, which is then used to predict the fault during the testing phase. Backpropagation is used to train the neural network that serves as the classifier; the backpropagation neural network is the most popular and simplest approach for detecting patterns. From the pre-determined features and fault classes, the neural network learns a pattern with which it can identify the fault class for new inputs.

2.1 Fault models

A fault model identifies the target faults in the analog circuit. Faults in analog circuits can be categorized into two types: catastrophic faults and parametric faults. Open nodes, shorts between nodes, and other topological changes in a circuit are catastrophic faults. Parametric faults are deviations of parameters from their nominal values beyond their tolerance limits; these are soft faults and are hard to test for. The fault model is obtained as follows. The first step is to inject a fault into the circuit under test: each component of the circuit is varied in turn while the other parameters are kept within their tolerance ranges, and the circuit is simulated with an impulse signal. This maps each fault to the circuit level, and the behaviour of the good and faulty circuit is derived from the simulation results.
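A hedged sketch of how such a fault model (fault dictionary) could be built is shown below. The simulator call simulate_impulse_response is hypothetical (in practice this would be a SPICE run), and the nominal values are illustrative only; the single ±50% deviation and the tolerance sampling mirror the setup described later in Section 6.1.

```python
# Sketch of fault-dictionary generation: one injected fault at a time,
# remaining components drawn within their tolerances (hypothetical simulator below).
import numpy as np

rng = np.random.default_rng(1)
nominal = {"R2": 1e3, "R3": 2e3, "C1": 5e-9, "C2": 5e-9}   # illustrative values only
tolerance = {"R2": 0.05, "R3": 0.05, "C1": 0.10, "C2": 0.10}

def simulate_impulse_response(values):
    """Hypothetical stand-in for a SPICE impulse-response simulation."""
    return rng.normal(size=128)

def sample_within_tolerance(name):
    tol = tolerance[name]
    return nominal[name] * rng.uniform(1 - tol, 1 + tol)

dictionary = []   # list of (fault_class, impulse_response) pairs
for comp in nominal:
    for direction, factor in (("high", 1.5), ("low", 0.5)):
        values = {c: sample_within_tolerance(c) for c in nominal}
        values[comp] = nominal[comp] * factor          # single injected parametric fault
        dictionary.append((f"{comp}_{direction}", simulate_impulse_response(values)))

# Fault-free (NF) samples: all components within tolerance.
dictionary.append(("NF", simulate_impulse_response(
    {c: sample_within_tolerance(c) for c in nominal})))
```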

3. Fault classification with neural networks

The classifier in the fault diagnosis is realised with a neural network. An artificial neural network is a machine learning approach that can detect and extract complex patterns after being trained with pre-determined samples. It consists of interconnected neurons arranged in layers: an input layer, hidden layers, and an output layer. The number of layers depends on the complexity of the system, and the layers are interconnected via weights. The neural network is characterised by these interconnections and by the training phase through which it acquires knowledge.


Fault classification with a neural network in analog fault diagnosis is done by building a network whose input layer is formed by the features extracted from the CUT and whose output layer is formed by the fault classes. During training, the input features are applied to the neurons in the input layer and propagated towards the output fault classes.

A neural network operates in two phases. In the training phase, the network learns the fault model from the pre-determined pairs of input features and output fault classes that constitute the training set; through this phase it learns the pattern between features and fault classes. In the testing phase, when new input features are given, the network can identify the correct fault class using its acquired knowledge.

[Figure: a neural network mapping input features (Feature 1 to Feature 4) to output fault classes (Fault 1 to Fault 5)]

Fig 2. Artificial neural network

Fault diagnosis in an analog circuit using a neural network proceeds as follows. First, an impulse signal is applied to the circuit under test and the output of the circuit is recorded. The output differs depending on the parametric faults present in the circuit; these outputs are called features. The features, together with their associated fault classes, are used to train the backpropagation neural network. The features obtained from the CUT may not be optimal, as they can be noisy, redundant, and span a large dynamic range. They are therefore pre-processed before being used to train the neural network.

3.1 Modular diagnosis

The problems with fault diagnosis using the traditional approach are as follows. When the electronic circuit is not divided into modules, the neural network tries to identify the faulty component in the entire circuit, so if the size of the circuit or the number of faults increases, the size of the neural network increases as well. A further problem is that there are sometimes similar faults that produce nearly the same features at the input layer of the neural network. It is important to distinguish such similar faults even when their features overlap to some extent; with the traditional approach this distinction between fault classes is sometimes impossible to achieve, which makes the training of the neural network fail and the diagnosis impossible. Moreover, once the fault is identified, the next step is to replace the faulty component, and in a large circuit this is rather difficult.


The difficulties stated above can be overcome with a modular approach, in which the large circuit is subdivided into several modules. If there is a fault in the circuit, the faulty component resides in exactly one of these modules, so the problem of identifying the fault in the entire circuit is reduced to searching for the faulty module. A neural network is trained to identify the faulty module; once it is found, the other modules are eliminated. The faulty module is then further subdivided into sub-modules, and another neural network is trained to identify the faulty sub-module. This process continues until the faulty component is identified. The number of neural networks required is proportional to the number of divisions made at the various levels.

In this way, the size of each neural network is reduced, which makes the training phase easier. In general, there are always similar faults whose features overlap extensively, and the problem is to distinguish between them. This is overcome by distributing the similar faults among different sub-modules through the division, so that similar faults do not end up in the same module and training becomes easier. Similar faults can be identified using statistical measures such as the mean and standard deviation of the features.

[Figure: two-level modular diagnosis. At level 1 a neural network is trained to identify the faulty module (Module 1 or Module 2); at level 2 a neural network per module is trained to identify the faulty sub-module (Sub-module 1.1/1.2 or 2.1/2.2), and the process continues until the faulty component is identified.]

Fig 3. Modular diagnosis of analog circuits [3]


A two-level example of the modular approach to neural network training is shown in Fig. 3. At level 1, the circuit is divided into two modules, which in turn are subdivided into two sub-modules each at level 2. At level 1, a neural network is trained to identify the faulty module. Once the faulty module is identified, it is subdivided into sub-modules at level 2, and another neural network is trained to identify which sub-module contains the fault. This is repeated down to the circuit component level.
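The sketch below illustrates this two-level scheme under simple assumptions: the module and component labels of the training samples are taken as given, and scikit-learn MLPClassifiers stand in for the level-1 and level-2 networks. It illustrates the hierarchy only, not the authors' implementation.

```python
# Two-level modular diagnosis sketch: one classifier per level/module (assumed labels).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 16))                       # pre-processed features (placeholder)
module = rng.choice(["M1", "M2"], size=300)          # which module the fault lies in
component = np.where(module == "M1",
                     rng.choice(["R1", "C1"], size=300),
                     rng.choice(["R2", "C2"], size=300))

# Level 1: identify the faulty module over the whole circuit.
level1 = MLPClassifier(hidden_layer_sizes=(6,), max_iter=2000).fit(X, module)

# Level 2: one smaller network per module, trained only on that module's faults.
level2 = {m: MLPClassifier(hidden_layer_sizes=(4,), max_iter=2000)
             .fit(X[module == m], component[module == m])
          for m in ("M1", "M2")}

def diagnose(x):
    m = level1.predict(x.reshape(1, -1))[0]          # find the faulty module first
    return level2[m].predict(x.reshape(1, -1))[0]    # then the faulty component

print(diagnose(X[0]))
```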

4. Preprocessing and feature selection

The features obtained from the output of the circuit under test for parametric faults may not be optimal for training the neural network: they may span a large dynamic range, not all of them discriminate between fault classes, and there may be far more candidate features than are needed. Pre-processing solves these problems by selecting the optimal features and reducing the number of inputs to the neural network; it is an effective way to simplify the network architecture and to minimize training and processing time. Three techniques are employed:

1) The wavelet transform is used to reduce the number of inputs drastically.

2) Principal component analysis is used to reduce the dimension of the input space; it can also select input features.

3) Normalization removes the large variations in the values of the input space, which would otherwise swamp the relevant data fed to the neural network.

4.1 Wavelet transform

The features obtained directly from the response might not be optimal; they must be distinct for different fault classes. In order to extract distinctive features from the impulse response, a wavelet transform is performed. The wavelet transform is a signal processing method that represents a signal in both the time and the frequency domain. The discrete wavelet transform (DWT) decomposes the signal into a series of wavelets. Wavelets are functions defined over a finite interval with an average value of zero. They are obtained from a single prototype wavelet, called the mother wavelet, by dilations or contractions (scaling) and translations (shifts). The mother wavelet is a prototype that provides windows for analysing the signal: a window function is non-zero only over a specific interval in which the signal is analysed and carries its energy, and zero outside that interval. Translation corresponds to the time information, and scaling to the frequency information. The decomposition of a signal yields approximation and detail coefficients: the approximation coefficients represent the high-scale, low-frequency components of the signal, while the detail coefficients represent the low-scale, high-frequency components.


The selection of the mother wavelet is critical for the analysis of the impulse response of the CUT. The mother wavelet is selected based on its similarity to the impulse response of the CUT; here the Haar wavelet is chosen because of several advantageous properties. The two wavelet properties relevant to selecting a suitable mother wavelet are support and regularity: support is a measure of the wavelet's duration in the time domain, and regularity is a measure of its discontinuity. Since the Haar wavelet has very compact support and is a highly discontinuous function, it is well suited for extracting features from the impulse response of the CUT.

The discrete wavelet transform can be implemented as an iterated filter bank: the transform can be viewed as recursively passing the signal through a high-pass and a low-pass filter. Mathematically, filtering is the convolution of the filter's impulse response with the signal. At level 1, the approximation and detail coefficients are obtained by passing the samples through a low-pass filter g and a high-pass filter h, respectively, as shown in Fig. 4. Each filter output has as many samples as its input, so it is down-sampled after every filtering stage to remove the redundant data. Down-sampling reduces the number of wavelet coefficients by a factor of two at each level of filtering while preserving the information in the signal.

[Figure: three-level filter bank. At each level the signal is passed through the high-pass filter h and the low-pass filter g, each followed by down-sampling by 2, yielding detail coefficients at levels 1, 2, and 3 and approximation coefficients at levels 1, 2, and 3; the level-3 approximation is the final approximation.]

Fig 4. Hierarchical decomposition of a signal into approximation and details. [1]
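A minimal NumPy sketch of one level of this Haar filter bank is given below; the pairwise sums and differences are equivalent to convolving with the two-tap Haar filters and keeping every second sample. Boundary handling and normalization conventions vary between wavelet libraries, so this is illustrative rather than a drop-in replacement for a library DWT.

```python
# One level of the Haar filter bank: low-pass/high-pass filtering + down-sampling by 2.
import numpy as np

def haar_step(x):
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass (g) then keep every 2nd sample
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass (h) then keep every 2nd sample
    return approx, detail

signal = np.arange(8, dtype=float)              # toy signal of length 8
cA1, cD1 = haar_step(signal)                    # level 1: 4 approximation + 4 detail coeffs
cA2, cD2 = haar_step(cA1)                       # level 2: decompose the approximation again
print(len(cA1), len(cD1), len(cA2), len(cD2))   # -> 4 4 2 2, halved at each level
```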


At level 1, the time resolution is halved, since half of the samples are removed, while the frequency resolution is increased; this procedure is called subband coding. At level 2, the approximation coefficients are further decomposed into detail and approximation coefficients, which increases the frequency resolution again. By continuing this process, the desired frequency or time resolution can be reached. The level of decomposition depends on the requirements of the application; in this study it is determined by how distinct the features of the impulse response need to be for training the neural network. After analysing the detail and approximation coefficients and determining which of them give distinct features, the optimal features are selected.

Mathematically, the analysis uses shifted and scaled versions of the original (mother) wavelet:

\psi_{a,b}(x) = \frac{1}{\sqrt{a}}\, \psi\!\left(\frac{x-b}{a}\right)   (1)

In this equation, a and b define the degree of scaling and shifting of the mother wavelet, respectively. For the DWT, a and b are chosen as powers of 2, which is known as dyadic analysis. The expansion coefficients C(a, b) for a particular signal can be expressed as

C(a,b) = \int I(x)\, \psi_{a,b}(x)\, dx = \frac{1}{\sqrt{a}} \int I(x)\, \psi\!\left(\frac{x-b}{a}\right) dx   (2)

These coefficients give a measure of how closely correlated the shifted and scaled mother wavelet is with the input signal. In equation (2), I(x) represents the impulse response of the CUT, and the integration is performed over all values of x, where x is time. Wavelet analysis in its discrete (dyadic) form assumes a = 2^j and b = k \cdot 2^j = ka, where j is the level of decomposition and k is the shift index [10].

The main benefit of pre-processing the output of the CUT with the wavelet transform is that it reduces the number of inputs to the neural network by removing redundant data, which is accomplished by the down-sampling.

4.2 Principal component analysis and normalization

Principal component analysis (PCA) is mainly used to reduce the complexity of the neural network employed in the fault classification problem. It is a dimensionality reduction technique that converts high-dimensional data into low-dimensional data without losing essential information, exploiting the dependencies between the variables. It takes x vectors in a d-dimensional space and summarizes them by projecting them down to z vectors in an M-dimensional space with M < d.


PCA is one of the most robust ways of performing dimensionality reduction. It amounts to a useful change of basis, with the new basis suppressing noise and revealing the underlying structure of the features. The summary is the projection of the original vectors onto the subspace spanned by the principal components.

Normalisation is applied to bring the features closer to what the learning algorithm expects, or at least to pre-process the data so as to ease the neural network's job. Building a neural network between the extracted features and their associated fault classes is easier when the range of values is compact, so the features are linearly scaled into a specific range to avoid a large dynamic range. PCA and normalization are used to pre-process input features that differ by several orders of magnitude.
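A hedged scikit-learn sketch of this step is shown below; the number of retained components and the use of standard scaling are illustrative assumptions, not values taken from the referenced work.

```python
# Normalization followed by PCA on wavelet-derived features (illustrative settings).
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
features = rng.normal(size=(200, 32)) * rng.uniform(1e-3, 1e3, size=32)  # wide dynamic range

scaled = StandardScaler().fit_transform(features)   # zero mean, unit variance per feature
pca = PCA(n_components=5)                           # keep a small number of components
reduced = pca.fit_transform(scaled)                 # compact input space for the network

print(reduced.shape)                                # (200, 5)
print(pca.explained_variance_ratio_.round(3))       # variance captured by each component
```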

4.3 Feature selection

The wavelet coefficients obtained from the wavelet transform of the impulse response of the CUT are used as features for training the neural network. Feature selection plays a vital role in the diagnosis by eliminating redundant or irrelevant features: it selects the wavelet coefficients that best distinguish the fault classes, which makes the training effective and improves the interpretability of the neural network. Selecting approximation coefficients as features also makes the training and testing data immune to noise introduced by the fault diagnostic system; noise in the circuit output occurs mainly at high frequencies, is relatively complicated to eliminate directly, and is consequently blocked by the low-pass filtering effect associated with the approximation coefficients. To select candidate features from the approximation coefficients, their distinctiveness across fault classes is examined by comparing the means and standard deviations they take in every class. Wavelet coefficients that take similar values across fault classes are eliminated, and those that remain distinct across two or more fault classes are kept as candidate features. PCA then further eliminates the features that do not show significant variation across fault classes, leading to a set of optimal features for training and testing the neural network. The following guidelines are used for selecting wavelet coefficients as features to train the neural network. [1]

1) The signal is recursively decomposed into sufficiently high levels of approximation and detail to expose the low- and high-frequency features of the signal, respectively. PCA is then used to reduce the number of features and enhance the differences among fault classes.

2) The approximation and detail reflect the low- and high-frequency contents of a signal, respectively. The low-frequency contents usually give a signal its basic structure,


while the high-frequency contents provide its details. As a result, the approximation coefficients are selected as features. [1]

According to these guidelines, the optimal features for training a neural network are obtained by first selecting candidate features from the wavelet coefficients. This is achieved by examining the approximation and detail coefficients that expose the low- and high-frequency contents of the signal at every level of decomposition; the wavelet coefficients associated with these levels form all the possible features for diagnosis. Generally speaking, approximation coefficients are appropriate features for analog fault diagnosis, since they represent the low-frequency contents, the basic structure of a signal, and are immune to noise. Detail coefficients capture the high-frequency contents of a signal and are not appropriate for representing its main features.
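The sketch below follows these guidelines under stated assumptions: PyWavelets (pywt) provides the multilevel Haar decomposition, the decomposition level and the selection threshold are arbitrary illustrative choices, and the impulse responses and labels are placeholders.

```python
# Candidate-feature selection from Haar approximation coefficients (illustrative thresholds).
import numpy as np
import pywt

rng = np.random.default_rng(4)
responses = rng.normal(size=(200, 128))                 # placeholder impulse responses
labels = rng.choice(["NF", "R2_high", "C2_low"], size=200)

def approx_coeffs(x, level=3):
    # wavedec returns [cA_level, cD_level, ..., cD_1]; keep the approximation part.
    return pywt.wavedec(x, "haar", level=level)[0]

features = np.array([approx_coeffs(r) for r in responses])

# Keep coefficients whose class means are well separated relative to their spread.
classes = np.unique(labels)
means = np.array([features[labels == c].mean(axis=0) for c in classes])
stds = np.array([features[labels == c].std(axis=0) for c in classes])
separation = means.max(axis=0) - means.min(axis=0)      # spread of class means per coefficient
keep = separation > 1.0 * stds.mean(axis=0)             # arbitrary illustrative threshold
candidates = features[:, keep]
print("kept", keep.sum(), "of", features.shape[1], "approximation coefficients")
```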

5. Neural network synthesis

Several neural network architectures are available for classification problems. Among these, the most intuitive and reliable is an architecture whose outputs estimate the probabilities that the input features belong to the different fault classes. A two-layer feed-forward network is used for this purpose, with the number of output-layer neurons set equal to the number of fault classes, as shown in Fig. 5.

5.1 Construction

A two-layer feed-forward neural network, with as many outputs as there are fault classes, is used for this purpose. A feed-forward network is a neural network in which the layers are connected in the forward direction only, from input to output, as shown in Fig. 5. The input layer receives the features extracted from the circuit under test by applying an impulse signal and pre-processed with the wavelet transform, PCA, and normalization. The output layer, as mentioned above, represents the fault classes. In addition to these two layers, the two-layer feed-forward network has a hidden layer, so called because its neurons are not directly accessible. The function of the hidden layer is to extract the relevant information from the input neurons. The hidden-layer neurons use a sigmoid activation function, chosen because the activation function should be non-decreasing and differentiable.


[Figure: two-layer feed-forward network with an input layer (Feature 1 to Feature 4), a hidden layer, and an output layer (Fault Class 1 to Fault Class 3)]

Fig 5. Architecture of a two-layer feed-forward network. [2]
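A minimal NumPy sketch of a forward pass through such a network is given below; the layer sizes and random weights are placeholders, and the use of a sigmoid at the output layer (to produce class-probability-like values) is an assumption made for illustration.

```python
# Forward pass of a two-layer feed-forward network with sigmoid units (placeholder weights).
import numpy as np

rng = np.random.default_rng(5)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_features, n_hidden, n_classes = 4, 6, 3
W1 = rng.normal(scale=0.5, size=(n_features, n_hidden))   # input -> hidden weights
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_hidden, n_classes))    # hidden -> output weights
b2 = np.zeros(n_classes)

x = rng.normal(size=n_features)                           # one pre-processed feature vector
hidden = sigmoid(x @ W1 + b1)                             # hidden-layer activations
output = sigmoid(hidden @ W2 + b2)                        # scores per fault class
print("predicted fault class:", int(np.argmax(output)))
```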

5.2 Training phase

Training a neural network means making it acquire knowledge through a learning process by providing a training set of pre-determined sample data. Training is done by updating the weights and biases of the network based on the comparison between the neural network output and the desired output (the fault class). There are two types of learning, supervised and unsupervised: in supervised training the neural network is trained with pre-determined training data, while unsupervised training uses no such labelled data.

The training data consist of input-output pairs {(x_n, t_n)}, n = 1, ..., N, where

x_n = input features presented to the neural network,
t_n = desired fault-class output,
N = number of sample pairs fed to the neural network.
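As a small illustration of such input-output pairs, the sketch below encodes fault-class labels as one-hot target vectors t_n, one output per fault class; the label set is a placeholder.

```python
# Building (x_n, t_n) training pairs: one-hot targets, one output neuron per fault class.
import numpy as np

classes = ["NF", "R2_high", "R2_low", "C1_high", "C1_low"]   # placeholder fault classes
labels = ["C1_low", "NF", "R2_high"]                         # labels of three samples

targets = np.zeros((len(labels), len(classes)))
for n, label in enumerate(labels):
    targets[n, classes.index(label)] = 1.0                   # t_n: 1 for the true class

print(targets)
```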

The neural network used for fault diagnosis in the analog circuit is trained with the backpropagation algorithm, which involves two stages.


Forward stage: The weights of the neural network are held fixed, the features propagate through the network, and the error between the network output and the desired output is calculated.

Backward stage: The error is propagated back through the network, which gives the algorithm its name. During this stage the weights are adjusted to minimize the error between the actual and the desired output fault class.

Mathematically, this is done by applying gradient descent to a sum-of-squares error function:

E = \frac{1}{2} \sum_{n=1}^{N} \sum_{k=1}^{S_2} \left\{ y_k(x_n) - t_k(x_n) \right\}^2   (3)

where S_2 is the number of output neurons (fault classes), y_k is the network output for fault class k, and t_k is the corresponding desired output. The error value is calculated from this equation, and the weights of the neural network are adjusted to minimize it by moving them in the direction of the negative gradient (slope) of E.

The steps involved in training the neural network with the backpropagation algorithm are:

1) The features from the training set are propagated through the neural network.

2) The output is calculated from the weighted sum of the inputs and the sigmoid activation function of the hidden layer (the activation function defines the output for a given input):

y = \frac{1}{1 + e^{-z}}   (4)

z = \sum_i w_i x_i   (5)

where the w_i are the weights associated with the inputs x_i.

3) The output fault class is compared with the desired output and the error is calculated.

4) The derivative of the error with respect to each weight is calculated with the chain rule:

\frac{\partial E}{\partial w_i} = \frac{\partial E}{\partial y} \cdot \frac{\partial y}{\partial z} \cdot \frac{\partial z}{\partial w_i}   (6)

where \partial E / \partial y is the change in error with respect to the output fault class, \partial y / \partial z is the change in output with respect to the weighted sum,


and \partial z / \partial w_i is the change in the weighted sum with respect to the individual weight.

From (5):

\frac{\partial z}{\partial w_i} = x_i   (7)

From (4):

\frac{\partial y}{\partial z} = y\,(1-y)   (8)

From (3):

\frac{\partial E}{\partial y} = y - t   (9)

From (3), (4), and (5):

\frac{\partial E}{\partial w_i} = (y - t) \cdot y\,(1-y) \cdot x_i

5) The change in each weight is calculated from

\Delta w_i = -\eta\, \frac{\partial E}{\partial w_i}   (10)

where \eta is the learning rate, a value between 0 and 1; it is roughly the fraction of the error that is corrected per update. Choosing the learning rate plays an important role: if it is too low, learning the weights takes too long; if it is too high, the algorithm tends to oscillate.

6) The output is recalculated with the updated weights, the error is computed again, and the weights are adjusted once more to reduce it.

7) This process is continued until the neural network output fault class matches the desired fault class, which means the neural network is trained. A worked NumPy sketch of one such update is given after this list.
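The following NumPy sketch works through equations (3) to (10) for a single sigmoid output neuron; the feature values, target, initial weights, and learning rate are placeholder numbers chosen only to make the arithmetic concrete.

```python
# One backpropagation update for a single sigmoid neuron, following eqs (3)-(10).
import numpy as np

x = np.array([0.5, -0.2, 0.1])      # input features (placeholder values)
t = 1.0                             # desired output for this fault class
w = np.array([0.1, 0.4, -0.3])      # initial weights (placeholder values)
eta = 0.5                           # learning rate, between 0 and 1

z = np.dot(w, x)                    # eq (5): weighted sum of the inputs
y = 1.0 / (1.0 + np.exp(-z))        # eq (4): sigmoid activation
E = 0.5 * (y - t) ** 2              # eq (3): sum-of-squares error (single output)

dE_dy = y - t                       # eq (9)
dy_dz = y * (1.0 - y)               # eq (8)
dz_dw = x                           # eq (7)
grad = dE_dy * dy_dz * dz_dw        # eq (6): chain rule
w_new = w + (-eta * grad)           # eq (10): gradient-descent weight update

print("error before:", E)
print("updated weights:", w_new)
```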

The advantage of the backpropagation algorithm is that it is simple and easy to implement, although the procedure is slow. Once the training phase is complete, the network is fed with test data, that is, with new feature values, and the trained neural network classifies the output fault class for the applied features.

6. Experimental evaluation

The experimental evaluation considers the classification accuracy per fault class and the neural network size, since the latter reflects the complexity of the whole diagnostic process. The ability to distinguish between the different fault classes based on the output response of the circuit is considered a major factor in evaluating this approach.

6.1 Experimental setup

The Sallen–Key band-pass filter is used to validate the approach described above. Using the neural network architecture discussed in the previous section, the test inputs are assigned to the fault class with the highest probability as measured by the neural network outputs. The nine fault classes (eight parametric fault classes and the no-fault class) associated


with the Sallen–Key band-pass filter require a neural network with four inputs, six first-layer neurons, and eight output-layer neurons. The nominal values of the components, which result in a centre frequency of 25 kHz, are shown in Fig. 6. The resistors and capacitors are assumed to have tolerances of 5% and 10%, respectively.

Fig 6. 25 kHz Sallen–Key band-pass filter [1, 2, 3]

The sample circuits are excited with an impulse and the responses are fed, after pre-processing, to the neural network. The approximation coefficients from levels 1 through 5 are selected as features to train the neural network. Impulse responses of the circuit with R3, C2, R2, and C1 varying within their tolerances belong to the no-fault class (NF) and are fed to the preprocessors for feature selection. When any of the four components is 50% higher or lower than its nominal value, with the other three components varying within their tolerances, faulty impulse responses are obtained. These faulty impulse responses are similarly fed to the preprocessors for feature selection and form the fault classes R3↑, R3↓, C2↑, C2↓, NF, R2↑, R2↓, C1↑, and C1↓, where ↑ and ↓ stand for high and low, respectively. For instance, the R3↑ fault class corresponds to R3 = 3 kΩ, with C2, R2, and C1 allowed to vary within their tolerances. [1]
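As a hedged illustration of how such impulse-response data can be generated, the sketch below uses a generic second-order band-pass transfer function with a 25 kHz centre frequency in place of the actual Sallen–Key circuit of Fig. 6 (whose component values are not reproduced here); the quality factor and the ±50% shift of the centre frequency are illustrative stand-ins for a parametric fault.

```python
# Impulse responses of a generic 2nd-order band-pass (stand-in for the Sallen-Key CUT).
import numpy as np
from scipy import signal

def bandpass_impulse(f0_hz, q=2.0, n=512, t_end=1e-3):
    """Impulse response of H(s) = (w0/Q) s / (s^2 + (w0/Q) s + w0^2)."""
    w0 = 2 * np.pi * f0_hz
    sys = signal.TransferFunction([w0 / q, 0.0], [1.0, w0 / q, w0 ** 2])
    t = np.linspace(0.0, t_end, n)
    _, y = signal.impulse(sys, T=t)
    return t, y

t, nominal = bandpass_impulse(25e3)          # fault-free (NF-like) response
_, faulty = bandpass_impulse(25e3 * 1.5)     # centre frequency shifted by +50% (parametric fault)

print("peak nominal:", nominal.max(), "peak faulty:", faulty.max())
```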

6.2 Diagnostic accuracy

The experimental evaluation indicates that, with pre-processing, the features are close to optimal and provide distinction across the fault classes. Each graph in Fig. 7 corresponds to one feature, plotted for the nine fault classes in the order C1↑, C1↓, C2↑, C2↓, NF, R2↑, R2↓, R3↑, R3↓.


Fig 7. Features and fault classes associated with the Sallen–Key band-pass filter [2]

The range of values of the five features for the nine fault classes is plotted in terms of mean and standard deviation. The mean and standard deviation are obtained for each feature and fault class from SPICE simulation and from the actual circuit output; the blue points show the SPICE simulation results and the black points the actual circuit output. The graphs show that the features clearly distinguish between most fault classes; for example, the fifth graph in Fig. 7 clearly separates C2↑ and C2↓.

The graphs also indicate that the features cannot distinguish between the NF and R2↑ fault classes. This becomes evident during the testing phase, when the neural network outputs corresponding to these two classes take similar values for features belonging to either class. These two fault classes are therefore combined into one ambiguity group, and eight output neurons are used accordingly. With this grouping, the neural network correctly classifies 97% of the test data. The reason NF and R2↑ appear similar is that they produce similar outputs from the CUT: an examination of the circuit reveals that the transfer functions of these two fault classes differ in corresponding terms by at most 8.5% assuming nominal values, and in an actual circuit with component tolerances of 5% to 10% the two transfer functions become practically identical. As a result, NF and R2↑ cannot be separated based on the extracted features. [2]
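A small sketch of how such an ambiguity group can be handled in code is given below: the two indistinguishable labels are simply merged into one class before training, so the classifier has eight outputs instead of nine. The labels and data are placeholders.

```python
# Merging indistinguishable fault classes (NF and R2_high) into one ambiguity group.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(6)
X = rng.normal(size=(300, 5))                                   # five selected features
labels = rng.choice(["NF", "R2_high", "R2_low", "R3_high",
                     "R3_low", "C1_high", "C1_low", "C2_high", "C2_low"], size=300)

ambiguity_group = {"NF", "R2_high"}
merged = np.array(["NF_or_R2_high" if l in ambiguity_group else l for l in labels])

clf = MLPClassifier(hidden_layer_sizes=(6,), max_iter=2000).fit(X, merged)
print("number of output classes:", len(clf.classes_))           # 8 after merging
```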


6.3 Comparison of traditional and modular approach

The comparison clearly indicates that the modular approach requires networks that are many times smaller than those of the traditional method. This significant reduction in size leads to more effective training and improved performance, as the accuracies below show. Although the modular approach is advantageous, there is a trade-off between smaller networks, efficient training, and reliable performance on the one hand, and the need to train several neural networks, which is time consuming, on the other.

Table 1: Comparison of traditional and modular approach [3]

                            Traditional    Modular
Classification accuracy     97%            100%
Network size                86             43
Size reduction              -              200%

7. Conclusion

The diagnosis of analog circuits is challenging due to poor fault models, non-linear effects, and tolerance variations. These difficulties are overcome by machine learning, which requires no comprehensive fault model. A neural network is used because of its robustness: once trained, it makes the diagnostic process for analog circuits much easier. Pre-processing the circuit output reduces both the input space and the size of the network architecture. The trained neural network is capable of robust fault diagnosis and correctly classifies almost 95% of the test data associated with the sample circuits described in [2]. The advantage of the modular approach is its ability to break a classification problem down into several simpler problems solved by small neural networks, which significantly enhances the efficiency of the training phase and the performance of the fault diagnostic system.


References

[1] M. Aminian and F. Aminian, "Neural-network based analog-circuit fault diagnosis using wavelet transform as preprocessor," IEEE Trans. Circuits Syst. II, vol. 47, pp. 151-156, Feb. 2000.

[2] F. Aminian, M. Aminian, and B. Collins, "Analog fault diagnosis of actual circuits using neural networks," IEEE Trans. Instrum. Meas., vol. 51, no. 3, pp. 544-550, Jun. 2002.

[3] F. Aminian and M. Aminian, "A modular fault-diagnostic system for analog electronic circuits using neural networks with wavelet transform as a preprocessor," IEEE Trans. Instrum. Meas., vol. 56, pp. 1546-1554, Oct. 2007.

[4] R. Spina and S. Upadhyaya, "Linear circuit fault diagnosis using neuromorphic analysers," IEEE Trans. Circuits Syst. II, vol. 44, Mar. 1997.

[5] J. W. Bandler and A. E. Salama, "Fault diagnosis of analog circuits," Proc. IEEE, vol. 73, pp. 1279-1325, Aug. 1985.

[6] C. M. Bishop, Neural Networks for Pattern Recognition. New York: Oxford Univ. Press, 1995.

[7] J. van Leeuwen, "Approaches in Machine Learning," Institute of Information and Computing Sciences, Utrecht University.

[8] K. Huang, H.-G. Stratigopoulos, and S. Mir, "Fault diagnosis of analog circuits based on machine learning," in Proc. Design, Automation & Test in Europe (DATE), pp. 1761-1766, Mar. 2010.

[9] A. Kumar and A. P. Singh, "Neural network based fault diagnosis in analog electronic circuit using polynomial curve fitting," International Journal of Computer Applications (0975-8887), vol. 61, no. 16, Jan. 2013.

[10] G. Strang and T. Nguyen, Wavelets and Filter Banks. Wellesley, MA: Wellesley-Cambridge Press, 1996.