CHAPTER 3 - Shodhganga (ietd.inflibnet.ac.in/jspui/bitstream/10603/908/15/15_chapter...)



CHAPTER 3

GRAFICS AND ITS COMPARISON WITH OTHER

WELL-KNOWN ALGORITHMS*

3.1 INTRODUCTION

The survey of machine-component cell formation methods (Chapter 2) indicates that most of them deal with a 0-1 matrix and block-diagonalization of the matrix to obtain machine cells and part families. The survey also indicates that there is scope for further research in the area of nonhierarchical clustering. In this chapter, an introduction to the nonhierarchical clustering technique is given first. Then, two nonhierarchical clustering algorithms, namely ZODIAC (Chandrasekharan and Rajagopalan 1987) and GRAFICS (Srinivasan and Narendran 1991), are explained in detail with suitable examples. Then, by sequentially reviewing the published GT literature, it is shown that GRAFICS can perform better than many well-known algorithms. Then, the similarities and dissimilarities of the algorithms, namely ZODIAC and GRAFICS, are reported. Finally, the need for improving the GRAFICS algorithm is highlighted.

*A paper entitled 'Nonhierarchical clustering of machine-component cells using seeds from an Efficient Seed Generation Algorithm (ESGA)', based on this part of the research work, was published in the Industrial Engineering Journal, Vol. XXIV, No. 11 & No. 12, 1995.

3.2 NOTATION

a_ik   is the machine-component incidence matrix entry for the ith machine and the kth component,

a_jk   is the machine-component incidence matrix entry for the jth machine and the kth component,

B      is the number of voids (zeros) within the diagonal blocks of the machine-component incidence matrix in an iteration (B0 represents the number of voids in the previous iteration),

C_j    is the component vector corresponding to component type j of the machine-component incidence matrix,

d_ij   is the dissimilarity (or distance) between two machines, namely machine i and machine j,

E      is the number of exceptional elements in the machine-component incidence matrix in an iteration (E0 represents the number of exceptional elements in the previous iteration),

k      is the component index,

M_i    is the ith machine group,

S_i    is the ith machine seed,

S_ij   is the similarity between two machines i and j of the machine-component incidence matrix,

X_ij   is a variable which is equal to 1 if machine i is grouped with machine j, and 0 otherwise.


3.3 NONHIERARCHICAL CLUSTERING

The identification of machine and part groups is similar to the identification of "clusters" in a scattered data space (scatter of data points). Researchers have applied cluster analysis in its varied forms to the problem of forming machine cells and component families. Cluster analysis seeks to group data into clusters such that the elements within a cluster are closely related while the clusters themselves have little or no relationship amongst them. The major classes of cluster analysis are hierarchical clustering and nonhierarchical clustering.

A hierarchical clustering method first computes the similarity or dissimilarity between each pair of parts or machines. Some methods use an agglomerative philosophy while others use a divisive philosophy for clustering hierarchically. The hierarchical clustering algorithms generate a hierarchy of feasible solutions, each with a particular value of a performance measure. The analyst chooses the best feasible solution corresponding to the best value of the performance measure.

Nonhierarchical clustering algorithms start with an initial set of machine seeds and result in a set of machine-component cells with an optimum or near-optimum value of the performance measure.

3.3.1 Advantage of Nonhierarchical Clustering Methods over Hierarchical Clustering Methods

The main drawback of hierarchical methods (Anderberg 1973) is that when two points (row vectors or column vectors) are grouped together at some stage of the algorithm, there is no way to retrace the step even if it leads to suboptimal (or unnatural) clustering at the end. At every stage of clustering, those points which have formed some sort of groups face the rest of the data with a fait accompli that severely limits further possibilities. In nonhierarchical clustering, the choice is rather free, and the natural clusters emerge from the given data without permanently binding any data unit due to the linking done in the initial stages of execution.

Nonhierarchical clustering is capable of identifying the natural

groups in a data-set. Only three algorithms are available for formation

of machine groups and part families using the nonhierarchical

clustering technique. The three nonhierarchical clustering algorithms

are given below:

a. Ideal Seed Nonhierarchical Clustering Algorithm

(Chandrasekharan and Rajagopalan 1986a).

b. ZODIAC (Chandrasekharan and Rajagopalan 1987).

c. GRAFICS (Srinivasan and Narendran 1991).

Hence, the development of algorithms based on nonhierarchical clustering methods needs to be explored further.

Two nonhierarchical clustering algorithms namely ZODIAC and

GRAFICS are explained in the following sections.

3.3.2 Introduction to ZODIAC (Chandrasekharan and Rajagopalan 1987)

The algorithm ZODIAC (Zero-One Data: Ideal seed Algorithm for Clustering) was proposed by Chandrasekharan and Rajagopalan (1987). It is a nonhierarchical clustering algorithm. Here each machine vector (row) is treated as a point in a higher-dimensional zero-one space and clustered around some fixed seed points, which may be among these points themselves. In this section the natural seed clustering version of the algorithm, followed by the ideal seed clustering algorithm, is explained. The ZODIAC algorithm is given in Appendix 1 along with the corresponding flowchart (Figure A1.1).

The natural seed algorithm attempts to find the most natural machine cells from the data. It uses the Jaccard (Sokal and Sneath 1963) similarity matrix to create initial seed points around which the machines are to be grouped. The seed points should be as far away from each other, that is, as dissimilar as possible, so that the points clustered around them are similar and the clusters (groups) themselves are dissimilar.

The Jaccard similarity coefficient is given by

    S_ij = ( Σ_k a_ik a_jk ) / ( Σ_k y_k )

where

S_ij is the similarity between two machines i and j of the machine-component incidence matrix,

a_ik is the machine-component incidence matrix entry for the ith machine and the kth component,

a_jk is the machine-component incidence matrix entry for the jth machine and the kth component,

k is the component index, and

    y_k = 1, if a_ik = a_jk = 1 (or) if a_ik ≠ a_jk
    y_k = 0, if a_ik = a_jk = 0.

The numerator Σ_k a_ik a_jk is the number of components which visit both machine i and machine j. In words,

    S_ij (Jaccard coefficient) = (number of components which visit both machine i and machine j) ÷ (number of components which visit one or both of the machines).
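The coefficient can be sketched in Python. The incidence matrix used here is the one reconstructed from Table 3.1 and the seed vectors quoted later in the text, so the numbers are illustrative rather than authoritative:

```python
import numpy as np

def jaccard_similarity_matrix(a):
    """Jaccard similarity S_ij between machine rows of a 0-1
    machine-component incidence matrix (machines x components)."""
    a = np.asarray(a, dtype=bool)
    m = a.shape[0]
    s = np.zeros((m, m))
    for i in range(m):
        for j in range(m):
            both = (a[i] & a[j]).sum()     # components visiting both machines
            either = (a[i] | a[j]).sum()   # components visiting at least one
            s[i, j] = both / either if either else 0.0
    return s

# Incidence matrix of Figure 3.1 (machines M1..M6 x components C1..C6),
# reconstructed from the process sequences in Table 3.1.
A = [[1,1,0,0,1,0],
     [0,0,0,0,1,1],
     [0,1,0,0,0,0],
     [0,0,0,0,0,1],
     [1,0,1,1,0,0],
     [0,0,1,1,0,0]]
S = jaccard_similarity_matrix(A)
print(S[0, 3])   # 0.0 -- machines 1 and 4 share no components
```

The zero similarity between machines 1 and 4 is exactly why they are chosen as the first two seed points below.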

The ZODIAC algorithm is explained using a sample problem. The process sequences of this problem are given in Table 3.1. The corresponding initial 0-1 machine-component incidence matrix of the problem is shown in Figure 3.1. The Jaccard similarity matrix corresponding to Figure 3.1 is shown in Table 3.2. The average of the Jaccard similarity coefficients for machines from Table 3.2 is found to be 0.1078. Since the matrix is symmetric, only the upper triangular values are considered. The matrix is scanned to obtain a pair of machines whose similarity is less than the average. Machine 1 and machine 4 have a similarity value of zero and hence these machines are chosen as the first two machine seed points. Machine 6 is chosen as the third seed point since its similarity with the existing seeds, namely machine 1 and machine 4, is less than the average similarity value of 0.1078. It is to be noted that any machine that has a Jaccard similarity value more than the average Jaccard similarity value even with one existing seed (already chosen seeds) does not qualify to be a seed.
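The natural-seed selection rule just described (accept a machine as a seed only if its similarity with every already-chosen seed is below the average) can be sketched as follows. The incidence matrix is the reconstruction used earlier, so the computed average may differ from the 0.1078 quoted in the text, which could not be reproduced exactly from the transcript; the selected seeds, however, match the text:

```python
import numpy as np

# Reconstructed Figure 3.1 matrix (machines M1..M6 x components C1..C6).
A = np.array([[1,1,0,0,1,0],
              [0,0,0,0,1,1],
              [0,1,0,0,0,0],
              [0,0,0,0,0,1],
              [1,0,1,1,0,0],
              [0,0,1,1,0,0]], dtype=bool)

m = len(A)
S = np.array([[(A[i] & A[j]).sum() / (A[i] | A[j]).sum()
               for j in range(m)] for i in range(m)])

avg = S[np.triu_indices(m, k=1)].mean()   # average over upper-triangular values

# Scan machines in order; a machine becomes a seed only if its similarity
# with every already-chosen seed is below the average.
seeds = []
for i in range(m):
    if all(S[i, s] < avg for s in seeds):
        seeds.append(i)

print([s + 1 for s in seeds])   # [1, 4, 6] -- machines 1, 4 and 6, as in the text
```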

Table 3.1 Process sequences of components of the sample problem used to demonstrate the ZODIAC algorithm

Component code   Total number of operations   Process sequence
C1               2                            M5 - M1
C2               2                            M1 - M3
C3               2                            M6 - M5
C4               2                            M5 - M6
C5               2                            M2 - M1
C6               2                            M4 - M2

Note: C1, C2, ..., C6 represent component codes corresponding to six different types of components. M1, M2, ..., M6 represent machine codes corresponding to six different types of machines.

              Component k
              C1  C2  C3  C4  C5  C6
Machine i M1   1   1   0   0   1   0
          M2   0   0   0   0   1   1
          M3   0   1   0   0   0   0
          M4   0   0   0   0   0   1
          M5   1   0   1   1   0   0
          M6   0   0   1   1   0   0

Figure 3.1 Initial machine-component incidence matrix used to demonstrate the ZODIAC algorithm

The Jaccard similarity matrix corresponding to Figure 3.1 is shown in Table 3.2.

Table 3.2 Jaccard similarity matrix of machines corresponding to Figure 3.1 used to demonstrate the ZODIAC algorithm

[The numerical entries of Table 3.2 could not be recovered from the transcript; the text reports their average as 0.1078.]

The machines are clustered around these seed points. The three seed points are given below:

1. Seed 1: (1 1 0 0 1 0) ..... (machine 1).
2. Seed 2: (0 0 0 0 0 1) ..... (machine 4).
3. Seed 3: (0 0 1 1 0 0) ..... (machine 6).

Each machine vector (machine row) is assigned to the machine seed with which the distance (the dissimilarity measure given below) is minimum. The dissimilarity (or distance) between two machines i and j is given by the following formula:

    d_ij = [ Σ_k ( a_ik − a_jk )² ]^(1/2)          (3.3)

where

a_ik is the machine-component incidence matrix entry for the ith machine and the kth component,

a_jk is the machine-component incidence matrix entry for the jth machine and the kth component,

d_ij is the dissimilarity (or distance) between two machines, namely machine i and machine j, and

k is the component index.
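Assignment of machine rows to the nearest seed under equation 3.3 can be sketched as below. Note that machine 3 is equidistant from the machine-1 and machine-4 seeds; the text resolves this tie in favour of the machine-4 group, while this sketch simply takes the first seed, so tied rows may land differently:

```python
import numpy as np

def assign_to_seeds(rows, seed_rows):
    """Assign each 0-1 row vector to the nearest seed under the Euclidean
    distance of equation 3.3; ties go to the first seed encountered
    (the text does not state its tie-breaking rule)."""
    groups = {s: [] for s in range(len(seed_rows))}
    for i, v in enumerate(rows):
        d = [np.sqrt(((v - s) ** 2).sum()) for s in seed_rows]
        groups[int(np.argmin(d))].append(i + 1)   # 1-based machine numbers
    return groups

A = np.array([[1,1,0,0,1,0],   # M1  (reconstructed Figure 3.1 matrix)
              [0,0,0,0,1,1],   # M2
              [0,1,0,0,0,0],   # M3
              [0,0,0,0,0,1],   # M4
              [1,0,1,1,0,0],   # M5
              [0,0,1,1,0,0]])  # M6

seeds = A[[0, 3, 5]]           # natural seeds: machines 1, 4 and 6
print(assign_to_seeds(A, seeds))
```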

The machine groups thus obtained are given below:

1. Machine group 1: (1) .... singleton cluster.
2. Machine group 2: (4,2,3).
3. Machine group 3: (6,5).

The above machine groups obtained using natural seeds (namely machine 1, machine 4 and machine 6) will be used to create seed points for ideal seed clustering later. A similar procedure is performed for the components using the transpose of the matrix in Figure 3.1. The transpose of Figure 3.1 is given below in Figure 3.2.

                Machine k
                M1  M2  M3  M4  M5  M6
Component i C1   1   0   0   0   1   0
            C2   1   0   1   0   0   0
            C3   0   0   0   0   1   1
            C4   0   0   0   0   1   1
            C5   1   1   0   0   0   0
            C6   0   1   0   1   0   0

Figure 3.2 Transpose of Figure 3.1 (component-machine incidence matrix)

The Jaccard similarity matrix corresponding to Figure 3.2 is shown in Table 3.3. The average of the Jaccard similarity coefficients for components from Table 3.3 is found to be 0.1656. Since the matrix (Table 3.3) is symmetric, only the upper triangular values are considered. The matrix (Table 3.3) is scanned to obtain a pair of components whose similarity is less than the average. Component 1 and component 6 have a similarity value of zero and hence these components are chosen as the first two component seed points. Since the remaining components' (namely components 2, 3, 4 and 5) Jaccard similarity values with the existing component seeds (namely component 1 and component 6) are more than the average Jaccard similarity value of 0.1656 (of Table 3.3), they do not qualify to be a seed.

Table 3.3 Jaccard similarity matrix of components arrived at from Figure 3.2

[The numerical entries of Table 3.3 could not be recovered from the transcript.]

The components are clustered around these seed points. The two component seed points are given below:

1. Seed 1: (1 0 0 0 1 0) ..... (component 1).
2. Seed 2: (0 1 0 1 0 0) ..... (component 6).

Each component vector (component row) is assigned to the component seed with which the distance (the dissimilarity measure given in equation 3.3) is minimum. The component groups thus obtained are given below:

1. Component group 1: (1,2,3,4).
2. Component group 2: (6,5).

Intermediate solutions from the ZODIAC algorithm may yield unequal numbers of machine groups and part groups. In such cases the solutions are not evaluated. The numbers of seed points for machines and parts are made equal by eliminating (discarding) a few seed points (corresponding to small or singleton groups) and then the iterations are continued.

In this case the numbers of machine groups and part groups obtained using natural seed clustering are unequal. Hence this solution is not evaluated. The numbers of seed points for machines and parts are made equal by eliminating (discarding) singleton machine group 1, with machine 1 as its constituent.

Hence the following machine groups and part groups are considered as the output of natural seed clustering and the iterations continued.

Machine groups:

1. Machine group 1: (4,2,3).
2. Machine group 2: (6,5).
...(machine 1 discarded)

Component groups:

1. Component group 1: (1,2,3,4).
2. Component group 2: (6,5).

This solution is improved using the ideal seed algorithm. Two component seeds are obtained from the two component groups. They are given below:

1. Component seed 1: (1 1 1 1 0 0).
2. Component seed 2: (0 0 0 0 1 1).

Component seed 1 has been created from part family 1 (component group 1) and has 1's in locations (1, 2, 3 and 4) representing the parts grouped in part family 1. The second component seed is created similarly. Each component seed is a vector with 6 elements. The machine vectors (machine rows from Figure 3.1) are clustered using these component seed points and the distance measure given in equation 3.3. The machine groups thus clustered are given below:

1. Machine group 1: (5,6).
2. Machine group 2: (1,2,3,4).

From the machine-component groups obtained using natural seeds earlier, machine seed points are created to cluster components. The machine group (4,2,3) yields machine seed 1 with a 1 in positions 4, 2 and 3. The machine seeds thus obtained from the two machine groups are given below:

1. Machine seed 1: (0 1 1 1 0 0).
2. Machine seed 2: (0 0 0 0 1 1).

Each component is attached to the machine seed with which its distance is minimum. The component vectors (component rows from Figure 3.2) are clustered using these machine seed points and the distance measure given in equation 3.3. The component groups thus clustered are given below:

1. Component group 1: (2,5,6).
2. Component group 2: (1,3,4).

Finally, part groups are assigned to machine groups using a procedure called diagonalisation. Each family can be assigned to one machine cell. The number of 1's is computed for each of the 4 possible pairs of machine cells and part families. The combinations with the maximum number of 1's are paired together. This procedure is continued until each machine cell has a part family assigned to it. In the present example, the assignments are given below:

1. Machine group 1 assigned to part family 2 (component group 2).
2. Machine group 2 assigned to part family 1 (component group 1).
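The diagonalisation step (pair the machine cell and part family with the most common 1's first) can be sketched as follows, again using the reconstructed incidence matrix:

```python
import numpy as np

A = np.array([[1,1,0,0,1,0],
              [0,0,0,0,1,1],
              [0,1,0,0,0,0],
              [0,0,0,0,0,1],
              [1,0,1,1,0,0],
              [0,0,1,1,0,0]])   # reconstructed Figure 3.1 matrix

def diagonalise(a, machine_cells, part_families):
    """Greedily pair machine cells with part families, taking the
    (cell, family) combination holding the most 1's first."""
    counts = {(c, f): int(a[np.ix_([m - 1 for m in mc],
                                   [p - 1 for p in pf])].sum())
              for c, mc in enumerate(machine_cells)
              for f, pf in enumerate(part_families)}
    pairing, used_families = {}, set()
    for (c, f), _ in sorted(counts.items(), key=lambda kv: -kv[1]):
        if c not in pairing and f not in used_families:
            pairing[c] = f
            used_families.add(f)
    return pairing

cells = [(5, 6), (1, 2, 3, 4)]
families = [(2, 5, 6), (1, 3, 4)]
# Machine cell (5,6) is paired with family (1,3,4), cell (1,2,3,4)
# with family (2,5,6), matching the assignments listed in the text.
print(diagonalise(A, cells, families))
```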

The results are summarised in Table 3.4. The block-diagonalised machine-component incidence matrix is given in Figure 3.3.

Table 3.4 Intermediate machine-component cells

Machine-component cell number   Machine group   Part family
1                               5,6             1,3,4
2                               1,2,3,4         2,5,6

The solution shown in Figure 3.3 has 1 intercell move. The goodness of the solution is given by a measure called grouping efficiency (Chandrasekharan and Rajagopalan 1987). The grouping efficiency corresponding to the present solution shown in Figure 3.3 is equal to 77.77%. Since our objective is to find a solution with maximum grouping efficiency, we continue the ideal seeding procedure.
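The quoted efficiency can be reproduced from the block assignment. The weighting q = 0.5 between the two densities is the customary choice for this measure and an assumption here:

```python
import numpy as np

def grouping_efficiency(a, machine_cells, part_families, q=0.5):
    """eta = q*eta1 + (1-q)*eta2, where eta1 is the density of 1's
    inside the diagonal blocks and eta2 the density of 0's outside."""
    a = np.asarray(a)
    inside = np.zeros(a.shape, dtype=bool)
    for mc, pf in zip(machine_cells, part_families):
        inside[np.ix_([m - 1 for m in mc], [p - 1 for p in pf])] = True
    eta1 = a[inside].sum() / inside.sum()            # 1's density in blocks
    eta2 = (1 - a[~inside]).sum() / (~inside).sum()  # 0's density outside
    return q * eta1 + (1 - q) * eta2

A = [[1,1,0,0,1,0],
     [0,0,0,0,1,1],
     [0,1,0,0,0,0],
     [0,0,0,0,0,1],
     [1,0,1,1,0,0],
     [0,0,1,1,0,0]]          # reconstructed Figure 3.1 matrix

eta = grouping_efficiency(A, [(5, 6), (1, 2, 3, 4)], [(1, 3, 4), (2, 5, 6)])
print(round(100 * eta, 2))   # 77.78, matching the quoted 77.77% up to rounding
```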

              C1  C3  C4   C2  C5  C6
          M5   1   1   1    0   0   0
          M6   0   1   1    0   0   0
          M1   1   0   0    1   1   0
          M2   0   0   0    0   1   1
          M3   0   0   0    1   0   0
          M4   0   0   0    0   0   1

Figure 3.3 Block-diagonalised machine-component incidence matrix

It is found out whether the solution can be improved using the ideal seed algorithm. Two component seeds are obtained from the two component groups shown in Table 3.4. They are given below:

1. Component seed 1: (1 0 1 1 0 0).
2. Component seed 2: (0 1 0 0 1 1).

Component seed 1 has been created from part family 1 (component group 1) and has 1's in locations (1, 3 and 4) representing the parts grouped in part family 1. The second component seed is created similarly. Each component seed is a vector with 6 elements. The machine vectors (machine rows from Figure 3.1) are clustered using these component seed points and the distance measure given in equation 3.3. The machine groups thus clustered are given below:

1. Machine group 1: (5,6).
2. Machine group 2: (1,2,3,4).

From the machine-component groups obtained using ideal seed clustering earlier (shown in Table 3.4), machine seed points are created to cluster components. The machine group (5,6) yields machine seed 1 with a 1 in positions 5 and 6. The machine seeds thus obtained from the two machine groups are given below:

1. Machine seed 1: (0 0 0 0 1 1).
2. Machine seed 2: (1 1 1 1 0 0).

Each component is attached to the machine seed with which its distance is minimum. The component vectors (component rows from Figure 3.2) are clustered using these machine seed points and the distance measure given in equation 3.3. The component groups thus clustered are given below:

1. Component group 1: (1,3,4).
2. Component group 2: (2,5,6).

Finally, part groups are assigned to machine groups using the procedure called diagonalisation, which was explained earlier. In the present example (iteration), the assignments are given below:

1. Machine group 1 assigned to part family 1 (component group 1).
2. Machine group 2 assigned to part family 2 (component group 2).

The results are summarised in Table 3.5.

Table 3.5 Intermediate machine-component cells

Machine-component cell number   Machine group   Part family
1                               5,6             1,3,4
2                               1,2,3,4         2,5,6

This solution is the same as the solution obtained in the previous iteration, with one intercell move and a grouping efficiency of 77.77%. Finding that further iterations do not improve the grouping efficiency, the ZODIAC algorithm stops here. Intermediate solutions from the ZODIAC algorithm may yield unequal numbers of machine groups and component groups. In such cases the solutions are not evaluated. In such situations the numbers of seed points for machines and parts are made equal by eliminating (discarding) a few seed points (corresponding to small or singleton groups) and the iterations continued. The machine-component incidence matrix corresponding to the final solution obtained using the ZODIAC algorithm is shown in Figure 3.4. Since the objective is to maximize grouping efficiency, intercell moves are not considered for choosing the best solution among the solutions generated by ZODIAC.

              C1  C3  C4   C2  C5  C6
          M5   1   1   1    0   0   0
          M6   0   1   1    0   0   0
          M1   1   0   0    1   1   0
          M2   0   0   0    0   1   1
          M3   0   0   0    1   0   0
          M4   0   0   0    0   0   1

Figure 3.4 Final solution obtained using ZODIAC for the sample problem
(Grouping efficiency = 77.77% and number of exceptional elements = 1)

3.3.3 Introduction to GRAFICS Algorithm

GRAFICS (Srinivasan and Narendran 1991) is a nonhierarchical clustering algorithm in the area of cellular manufacturing systems. GRAFICS is an acronym for "Grouping using Assignment method For Initial Cluster Seeds". The GRAFICS algorithm is given in Appendix 1 along with the corresponding flowchart (Figure A1.2).

GRAFICS has two phases. In the first phase, the machine similarity matrix is given as input to the assignment method [Hungarian method (Budnick et al. 1988)] to generate an initial set of machine groups such that the cumulative sum of the similarity values between the machines within the initial machine groups is maximized. Subtours (Bellmore and Nemhauser 1968) are identified from the assignment solution and are used to determine the initial set of machine seeds to cluster components.

SEED:

Consider the problem given in Figure 3.5. Let the solution for this problem using the assignment method [Hungarian method (Budnick et al. 1988)] be as follows: X12 = X25 = X51 = X34 = X46 = X63 = 1, where X_ij is equal to 1 if machine i is grouped with machine j; it is equal to 0 otherwise. From this solution we get two subtours, 1-2-5-1 and 3-4-6-3. The machines in each subtour are treated as a machine group. The two machine groups are as follows: M1 = (1,2,5) and M2 = (3,4,6). These two machine groups are used to generate the following two machine seeds, which are in turn used to cluster components in the next stage: S1 = (110010) and S2 = (001101). Each seed consists of entries 1 or 0. In a given machine seed, the entry positions represent machine numbers in ascending order. The entry 1 represents the presence of the corresponding machine in that seed and the entry 0 represents the absence of the corresponding machine in that seed. The formation of the above seeds, namely S1 and S2, is explained in Table 3.6 and Table 3.7 respectively.
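The subtour extraction in the first phase can be sketched as follows, starting from the assignment solution quoted above rather than solving the Hungarian method itself:

```python
def subtours(assignment):
    """Split an assignment solution {i: j} (meaning X_ij = 1) into its
    subtours (cycles); each cycle becomes one machine group."""
    groups, seen = [], set()
    for start in sorted(assignment):
        if start in seen:
            continue
        cycle, node = [], start
        while node not in seen:
            seen.add(node)
            cycle.append(node)
            node = assignment[node]
        groups.append(cycle)
    return groups

def seed_vector(group, n_machines):
    """0-1 machine seed: entry m is 1 iff machine m is in the group."""
    return [1 if m in group else 0 for m in range(1, n_machines + 1)]

# Assignment solution from the text: X12 = X25 = X51 = X34 = X46 = X63 = 1.
X = {1: 2, 2: 5, 5: 1, 3: 4, 4: 6, 6: 3}
groups = subtours(X)
print(groups)                      # [[1, 2, 5], [3, 4, 6]]
print(seed_vector(groups[0], 6))   # [1, 1, 0, 0, 1, 0] -- seed S1
```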

(To improve readability, Figure 1.2 is reproduced below.)

[Only fragments of the matrix survive in the transcript: the machine vector corresponding to machine i = 1 is (1010) and the component vector corresponding to component j = 4 is (001101).]

Figure 3.5 Machine-component incidence matrix (contains the data shown in Figure 1.2)

The second phase of GRAFICS is the formation of the machine-component cells. In this phase, clustering is done based on the maximum density rule (Srinivasan and Narendran 1991). The maximum density rule is given below:

Maximum Density Rule:

Assign a component vector (machine vector) from the machine-component incidence matrix to the machine seed (component seed) with which it has the maximum common 1's, and break ties by assigning to the machine seed (component seed) which has the smallest number of 1's among the contending machine seeds (component seeds). [Component vector and machine vector are pictorially represented in Figure 3.5.]

Table 3.6 Tabular explanation for the formation of machine seed, S1: (110010)

Machine type, i   Included in machine group M1? (YES or NO)   Corresponding entry of machine seed S1
1                 YES                                          1
2                 YES                                          1
3                 NO                                           0
4                 NO                                           0
5                 YES                                          1
6                 NO                                           0

Table 3.7 Tabular explanation for the formation of machine seed, S2: (001101)

Machine type, i   Included in machine group M2? (YES or NO)   Corresponding entry of machine seed S2
1                 NO                                           0
2                 NO                                           0
3                 YES                                          1
4                 YES                                          1
5                 NO                                           0
6                 YES                                          1
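The rule can be sketched in Python, using the component vector C1 and the two machine seeds from the worked example given later in this section (Figure 3.6 and Table 3.9):

```python
def max_density_assign(vector, seeds):
    """Maximum density rule: choose the seed sharing the most common 1's
    with `vector`; break ties toward the seed with the fewest 1's."""
    def common(seed):
        return sum(v & s for v, s in zip(vector, seed))
    # max over (common 1's, -seed size): more common 1's wins, and on a
    # tie the smaller seed (larger -size) wins.
    return max(range(len(seeds)),
               key=lambda i: (common(seeds[i]), -sum(seeds[i])))

c1    = (1, 1, 0, 0, 0, 1, 0)   # component vector C1
seed1 = (1, 1, 0, 1, 0, 1, 1)   # machine group (1, 2, 4, 6, 7)
seed2 = (0, 0, 1, 0, 1, 0, 0)   # machine group (3, 5)
print(max_density_assign(c1, [seed1, seed2]) + 1)   # 1: C1 joins machine group 1
```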

Given the set of machine seeds, component families are identified such that each component is assigned to the machine seed which has the maximum number of machines required by that component. Then the component seeds are constructed from the component families. Again, the machine cells are identified such that each machine is assigned to the component seed which requires it the most. If we start the above process with a good choice of the initial set of machine seeds and alternately form component seeds and machine seeds, it will finally give a feasible set of machine-component cells. A singleton cluster is eliminated by assigning it to the seed which has at least one member (machine or component) clustered around it. After updating this solution as the latest feasible solution, the procedure is continued until 1) the number of exceptional elements reaches zero (E = 0), or 2) the number of exceptional elements ceases to decrease (E > E0), or 3) two consecutive feasible solutions are identical (E0 = E and B0 = B).

3.3.3.1 Numerical Example for GRAFICS

In this section, the GRAFICS algorithm is demonstrated using a problem (Vohra et al. 1990) from the GT literature. The initial 0-1 machine-component incidence matrix of this problem is shown in Figure 3.6 (the process sequences of this problem are not reported by the authors). The machine similarity matrix of this problem is shown in Table 3.8. The similarity values shown in Table 3.8 are computed using the method followed by Kusiak (1987). Using the machine similarity matrix in Table 3.8 as input, the assignment method [Hungarian method (Budnick et al. 1988)] has generated 2 machine cells: (1,2,4,6,7) and (3,5). The corresponding initial machine seeds to cluster components based on these machine cells are shown in Table 3.9.

[The 7 × 7 incidence matrix could not be recovered from the transcript.]

Figure 3.6 Initial machine-component incidence matrix of the example problem (Vohra et al. 1990)

Table 3.8 Machine similarity matrix of the example problem (Vohra et al. 1990)

[The similarity values could not be recovered from the transcript.]

Note: The similarity values shown in Table 3.8 are computed using the method followed by Kusiak (1987). The formula has already been given on page 37 of Chapter 1.

Table 3.9 Initial machine seeds to cluster components of the example problem (Vohra et al. 1990) [output of Hungarian method]

Machine seed number   Machine j: 1  2  3  4  5  6  7
1                                1  1  0  1  0  1  1
2                                0  0  1  0  1  0  0

The next step of GRAFICS uses the data in Table 3.9 (the initial set of machine seeds) and clusters components based on the maximum density rule.

Application of Maximum Density Rule:

In this paragraph the concept of the maximum density rule is demonstrated using a component vector's assignment to a machine seed. Let us consider the component vector C1 corresponding to component type 1 from Figure 3.6 (initial machine-component incidence matrix):

Component vector, C1: (1 1 0 0 0 1 0)

This component vector C1 has to be assigned to one of the two machine seeds shown in Table 3.9:

Machine seed 1: (1 1 0 1 0 1 1)
Machine seed 2: (0 0 1 0 1 0 0)

Common 1's between C1 and machine seed 1 = 3
Common 1's between C1 and machine seed 2 = 0

Hence component vector C1 is assigned to machine seed 1, i.e. the component of type 1 is assigned to machine group 1: (1, 2, 4, 6, 7). Similarly, all the remaining components with identifications 2, 3, 4, 5, 6 and 7 are assigned to the initial machine groups based on the maximum density rule.

These component families are shown in Table 3.10. Since the number of machine cells generated by the Hungarian method and the number of component families shown in Table 3.10 are equal, a feasible set of machine-component cells is formed. The corresponding grouping efficiency and grouping efficacy are calculated. The value of grouping efficacy is equal to 53.33%. The value of grouping efficiency is equal to 75.09%. The corresponding machine-component cells are shown in Table 3.11.
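The quoted efficacy is consistent with the standard definition Γ = (e − E) / (e + B), where e is the total number of 1's in the incidence matrix. The value e = 17 used below is inferred from the reported E = 1, B = 13 and 53.33%; it is not stated in the text:

```python
def grouping_efficacy(ones, exceptional, voids):
    """Grouping efficacy = (e - E) / (e + B), with e = total 1's in the
    incidence matrix, E = exceptional elements, B = voids in the blocks."""
    return (ones - exceptional) / (ones + voids)

# e = 17 is an inference: with E = 1 and B = 13 it reproduces 53.33%.
print(round(100 * grouping_efficacy(17, 1, 13), 2))   # 53.33
```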

Table 3.10 Component families for the example problem (Vohra et al. 1990) using GRAFICS in Iteration 1

Component family number   Components
1                         1,2,3,4,7
2                         5,6

Table 3.11 Intermediate machine-component cells for the example problem (Vohra et al. 1990) using GRAFICS

Machine-component cell number   Machine cell   Component family
1                               1,2,4,6,7      1,2,3,4,7
2                               3,5            5,6

(E = 1, B = 13, Grouping efficacy = 53.33%, Grouping efficiency = 75.09%)

Iteration 2:

The component seeds corresponding to the component families in Table 3.10 are shown in Table 3.12. These component seeds are used to generate new machine cells based on the maximum density rule. The new machine cells are shown in Table 3.13.

Table 3.12 Component seeds to cluster machines of the example problem (Vohra et al. 1990)

Component seed number   Component j: 1  2  3  4  5  6  7
1                                    1  1  1  1  0  0  1
2                                    0  0  0  0  1  1  0

Table 3.13 Machine cells for the example problem (Vohra et al. 1990) using GRAFICS in Iteration 2

Machine cell number   Machines
1                     1,2,4,6,7
2                     3,5

Since the number of machine cells in Table 3.13 and the number of component families in Table 3.10 are equal, a feasible set of machine-component cells is formed. The corresponding grouping efficiency and grouping efficacy are calculated. The value of grouping efficacy is equal to 53.33%. The value of grouping efficiency is equal to 75.09%. Finally, the GRAFICS procedure stops, since the number of exceptional elements in the current feasible solution is the same as the number of exceptional elements in the previous feasible solution and the number of voids in the current feasible solution is the same as the number of voids in the previous feasible solution. The final machine-component cells and the corresponding block-diagonalised machine-component incidence matrix are shown in Table 3.14 and Figure 3.7 respectively.

Table 3.14 Final machine-component cells for the example problem (Vohra et al. 1990) using GRAFICS

Machine-component cell number   Machine cell   Component family
1                               1,2,4,6,7      1,2,3,4,7
2                               3,5            5,6

(E = 1, B = 13, Grouping efficacy = 53.33% and Grouping efficiency = 75.09%)

[The block-diagonalised incidence matrix could not be recovered from the transcript.]

Figure 3.7 Final block diagonal form of the example problem (Vohra et al. 1990) using GRAFICS

3.3.4 Comparison of GRAFICS with Other Well-Known Algorithms

In this section, it is shown that GRAFICS (Srinivasan and Narendran 1991) is better than many well-known algorithms by a sequential review of the GT literature. Miltenburg and Zhang (1991) have compared nine well-known algorithms. The details of these algorithms are given in Table 3.15.

They have concluded that the performance of the Ideal Seed Non-Hierarchical Clustering Algorithm (ISNC) developed by Chandrasekharan and Rajagopalan (1986a) is relatively better than that of the remaining eight algorithms. But ZODIAC (Chandrasekharan and Rajagopalan 1987) is an improved version of ISNC.

Kandiller (1994) selected a subset of six well-known cell formation techniques for a detailed analysis. The techniques selected for analysis and comparison are:

1. Lattice-theoretic combinatorial grouping (COMBGR) developed by Purcheck (1974).
2. Modified rank order clustering (MODROC) developed by Chandrasekharan and Rajagopalan (1986b).
3. Machine-component cell formation (MACE) developed by Waghodekar and Sahu (1984).
4. Within-cell utilization based clustering (WUBC) developed by Ballakur and Steudel (1987).
5. Cost analysis algorithm (CAA) developed by Kusiak and Chow (1987).
6. Zero-one data: ideal seed algorithm for clustering (ZODIAC) developed by Chandrasekharan and Rajagopalan (1987).

Table 3.15 Details of well-known algorithms

S.No.  Underlying Procedure             Algorithm*   Remarks
1      Rank Order Clustering            ROC/ROC      Machines as well as components are clustered using ROC.
2      Similarity Coefficient           SC/ROC       Machines are clustered using SC and components are clustered using ROC.
3      Similarity Coefficient           SC/SC        Both machines and components are clustered using SC.
4      Modified Similarity              MSC/ROC      Machines are clustered using MSC and components are clustered using ROC.
5      Modified Similarity              MSC/MSC      Both machines and components are clustered using MSC.
6      Modified Rank Order Clustering   MROC
7      Seed Clustering                  ISNC
8      Seed Clustering                  SC-Seed
9      Bond Energy                      BEA

*ROC - Rank Order Clustering Algorithm (King 1980).
SC - Similarity Coefficient Algorithm (Single Linkage Clustering Algorithm (McAuley 1972)).
MSC - Modified Similarity Coefficient Algorithm (Average Linkage Clustering Algorithm (Anderberg 1973)).
MROC - Modified Rank Order Clustering Algorithm (Chandrasekharan and Rajagopalan 1986b).
ISNC - Ideal Seed Non-Hierarchical Clustering Algorithm (Chandrasekharan and Rajagopalan 1986a).
SC-Seed - Modified ISNC (Miltenburg and Zhang 1991).
BEA - Bond Energy Clustering Algorithm (McCormick et al. 1972).

Kandiller (1994) carried out an extensive study of the six prominent algorithms and reported that ZODIAC (Chandrasekharan and Rajagopalan 1987) is one of the best well-known algorithms. Srinivasan and Narendran (1991) have shown that the performance of GRAFICS is better than that of ZODIAC. Hence, in this research work, GRAFICS is considered for further improvement.

3.3.5 Comparison of ZODIAC and GRAFICS

In this section, ZODIAC (Chandrasekharan and Rajagopalan 1987) and GRAFICS (Srinivasan and Narendran 1991) are compared and their similarities and dissimilarities are reported in Table 3.16.

Table 3.16 Comparison of ZODIAC (Chandrasekharan and Rajagopalan 1987) and GRAFICS (Srinivasan and Narendran 1991)

ZODIAC

1. A nonhierarchical clustering algorithm.
2. Input data is a 0-1 machine-component incidence matrix.
3. The Jaccard similarity measure is used to create initial machine seed points around which the machines are to be grouped.
4. Initial machine groups are formed using the initial machine seed points and a dissimilarity measure.
5. The Jaccard similarity measure is used to create initial component seed points around which the components are to be grouped.
6. Initial component groups are formed using the initial component seed points and "a dissimilarity measure".
7. No objective function is used in the formation of either the initial machine groups or the initial component groups, which are formed separately.
8. Machine groups and component groups are formed separately. Subsequently, each part family (component group) is assigned to a machine group based on a procedure called diagonalisation.
9. The initial solution obtained using natural seed clustering is improved using ideal seed clustering. In ideal seed clustering, machines are clustered using component seed points and a dissimilarity measure (a distance measure). Similarly, components are clustered using machine seed points and a dissimilarity measure (a distance measure).
10. The goodness of the solution (machine-component cell formation) is evaluated using a performance measure namely grouping efficiency (Chandrasekharan and Rajagopalan 1987).
11. When further iterations do not improve grouping efficiency, the ZODIAC algorithm stops.
12. The choice of the initial number of seeds and seed points needs careful consideration. Sometimes, natural seeds, which are generated based on the characteristics of the data set, can result in fewer seeds than required, thereby increasing the blanks within the diagonal blocks considerably.
13. The clustering criterion based on the "minimum value of distance measure" does not reflect the extent of processing required by components. Consider two seeds S1=(1100000000) and S2=(0000011111) and a vector V=(0000110000). The distance measure between V and S1 is 4 and between V and S2 is 5. If we cluster based on the "minimum value of distance measure", V should be clustered to S1 and not to S2, though V and S1 do not have a '1' in common. The above situation often arises while clustering ill-structured matrices. This is a very important drawback of the ZODIAC algorithm.
14. No constraint on machine group size or part family size.
15. No constraint on the number of machine groups or the number of part families.

GRAFICS

1. A nonhierarchical clustering algorithm.
2. Input data is a 0-1 machine-component incidence matrix.
3. The similarity measure used by Kusiak (1987) is used to generate the initial set of machine groups.
4. The initial set of machine groups is formed by giving the machine similarity matrix as input to the assignment method [Hungarian method (Budnick et al. 1988)].
5. Initial component groups are formed using the initial set of machine seeds generated using the assignment method [Hungarian method (Budnick et al. 1988)].
6. Initial component groups are formed using the initial set of machine seeds and the "maximum density rule".
7. The initial set of machine groups is formed by giving the machine similarity matrix as input to the assignment method [Hungarian method (Budnick et al. 1988)] with the objective of maximizing the cumulative sum of the similarity values between the machines within the initial set of machine groups.
8. Machine groups and component groups are formed concurrently using the "maximum density rule".
9. The initial set of machine groups obtained using the assignment method is used subsequently to form component groups using the "maximum density rule". Then machine groups and component groups are formed alternately using the "maximum density rule".
10. The goodness of the solution (machine-component cell formation) is evaluated primarily using a performance measure namely grouping efficacy (Kumar and Chandrasekharan 1990) and secondarily using a performance measure namely grouping efficiency (Chandrasekharan and Rajagopalan 1987).
11. The GRAFICS algorithm stops when any one of the following conditions occurs: (a) the number of exceptional elements reaches zero (E=0), or (b) the number of exceptional elements ceases to decrease (E>E0), or (c) two consecutive feasible solutions are identical (E0=E and B0=B).
12. If there exist multiple solutions for a given assignment problem [solved using the Hungarian method (Budnick et al. 1988)], the assignment method may give a single machine seed, or a very limited number of machine seeds, as the initial set of machine seeds, even though there is a possibility of obtaining a reasonably large number of subtours (i.e. a reasonably large number of machine seeds as the initial set of machine seeds) which would lead to a meaningful final solution (final machine-component cells). Such outcomes are undesirable.
13. The clustering criterion based on the "maximum density rule" does reflect the extent of processing required by components.
14. No constraint on machine group size or part family size.
15. No constraint on the number of machine groups or the number of part families.

3.3.6 Need for improving GRAFICS Algorithm

Limitation of Using Assignment Method for Generating Initial
Set of Machine Seeds:

If there exist multiple solutions for a given assignment
problem, there is a possibility of getting any one of the
following three types of assignment solution:

1. A solution which consists of a single tour.
2. A solution which consists of a very limited number
of subtours.
3. A solution which consists of a reasonably large
number of subtours.

Among these alternate solutions, if the assignment method
gives either of the first two types of solution (type 1 or
type 2), we will end up with a single machine seed or a
very limited number of machine seeds. In the above
situation, though there is a possibility of obtaining a
reasonably large number of subtours (type 3) which will
lead to a meaningful final solution (final machine-
component cells) in the next phase, the assignment method
may give a single machine seed or a very limited number
of machine seeds. This is the major limitation of using the
assignment method to generate the initial set of machine
seeds.

To overcome the above limitation of the assignment method,
an Efficient Seed Generation Algorithm (ESGA) is proposed in this
research work.
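The subtour behaviour described above can be sketched directly. An assignment solution over n machines is a permutation (machine i is assigned to machine perm[i]), and its cycles are the subtours; each cycle becomes one candidate machine seed. The helper below is a minimal illustration using hand-picked permutations, not output from an actual Hungarian-method solver:

```python
def subtours(perm):
    """Decompose an assignment solution (a permutation of 0..n-1)
    into its cycles; each cycle is one subtour / machine seed group."""
    seen, cycles = set(), []
    for start in range(len(perm)):
        if start in seen:
            continue
        cycle, i = [], start
        while i not in seen:       # follow assignments until the cycle closes
            seen.add(i)
            cycle.append(i)
            i = perm[i]
        cycles.append(cycle)
    return cycles

# Type 1: a single tour -> only one machine seed (undesirable)
print(subtours([1, 2, 3, 4, 0]))   # one cycle covering all five machines
# Type 3: several subtours -> a useful number of machine seeds
print(subtours([1, 0, 3, 2, 4]))   # three cycles: {0,1}, {2,3}, {4}
```

Both permutations can be optimal for the same assignment problem when ties exist in the similarity matrix, which is exactly why the Hungarian method may return the unhelpful single-tour solution.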

Limitation of Phase 2 of GRAFICS:

Consider a situation in which there exists a single
component with processing requirements on a set of
machines (a machine group) and that set of machines
is not required by any other component. In GRAFICS, this
type of component (single component cluster or singleton
cluster) is assigned to another machine seed (machine
group) with which that component has zero similarity.

Consider another situation in which a single machine
fully satisfies the processing requirements of a
set of components (a component family) and that
machine is not required by any other component. In
GRAFICS, this type of single machine (single machine
cluster or singleton cluster) will be assigned to another
component seed (component family) with which it has zero
similarity. To overcome the above problem, singleton
clusters (i.e. a single component cluster or a single machine
cluster) should be allowed in the machine-component cells,
which will improve the final solution. Hence, in this
research work, corresponding modifications are
incorporated in GRAFICS to allow singleton clusters in
the solution if necessary.
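The singleton problem can be seen in a small sketch of a density-style assignment step. The code below is our simplified reading of such a rule, not code from the GRAFICS paper: each component is assigned to the machine group with which its 0-1 vector has the most 1s in common. A component whose only machine belongs to no existing group is still forced into some group, landing in a cell with zero overlap.

```python
def assign_component(comp, machine_groups):
    """Assign a component (0-1 vector over machines) to the machine
    group with the largest number of matching '1' entries.
    A simplified stand-in for a maximum-density assignment step."""
    def overlap(group):
        return sum(comp[m] for m in group)
    best = max(machine_groups, key=overlap)  # ties: first group wins
    return best, overlap(best)

# Component 'c' needs only machine 4, which belongs to neither group below.
groups = [[0, 1], [2, 3]]
c = [0, 0, 0, 0, 1]
group, score = assign_component(c, groups)
print(group, score)  # a group is chosen even though the overlap is 0
```

Allowing the assignment step to open a singleton cluster whenever the best overlap is zero is the modification motivated above.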

To handle the above situations, two algorithms, namely
ALGORITHM 1 and ALGORITHM 2, are proposed in the first part of
this research work. The initial set of machine seeds obtained from the
seed generation algorithm namely ESGA is used in these algorithms.
Later, in the second part of this research work, an improved version of
ALGORITHM 2 namely the SA ALGORITHM is proposed.
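For reference, the two performance measures named in item 10 of Table 3.16 can be sketched as follows. This is our reading of the standard formulas (grouping efficacy = (e - e_o)/(e + e_v); grouping efficiency = q*eta1 + (1 - q)*eta2 with the customary weight q = 0.5), not code from either paper, and it assumes the solution has at least two cells:

```python
def cell_measures(matrix, m_cell, c_cell, q=0.5):
    """Grouping efficacy and grouping efficiency for a cell solution.
    matrix: 0-1 rows = machines, cols = components;
    m_cell[i] / c_cell[j]: cell index of machine i / component j."""
    rows, cols = len(matrix), len(matrix[0])
    e = sum(map(sum, matrix))                       # total number of 1s
    in_block = lambda i, j: m_cell[i] == c_cell[j]  # inside a diagonal block?
    ones_in = sum(matrix[i][j] for i in range(rows)
                  for j in range(cols) if in_block(i, j))
    block_area = sum(1 for i in range(rows)
                     for j in range(cols) if in_block(i, j))
    e_o = e - ones_in                               # exceptional elements
    e_v = block_area - ones_in                      # voids
    efficacy = (e - e_o) / (e + e_v)
    total = rows * cols
    eta1 = ones_in / block_area                     # 1s density inside blocks
    eta2 = (total - block_area - e_o) / (total - block_area)  # 0s density outside
    return efficacy, q * eta1 + (1 - q) * eta2

# Perfectly block-diagonal 4x4 example: both measures reach 1.0
M = [[1, 1, 0, 0],
     [1, 1, 0, 0],
     [0, 0, 1, 1],
     [0, 0, 1, 1]]
print(cell_measures(M, [0, 0, 1, 1], [0, 0, 1, 1]))  # (1.0, 1.0)
```

Efficacy falls as exceptional elements or voids accumulate, which is why GRAFICS uses it as the primary measure and efficiency only as a tiebreaker.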

3.4 SUMMARY

In this chapter, detailed explanations have been given about two
nonhierarchical clustering algorithms namely ZODIAC
(Chandrasekharan and Rajagopalan 1987) and GRAFICS (Srinivasan
and Narendran 1991). The similarities and dissimilarities of these
algorithms have been reported. The drawbacks of GRAFICS have been
highlighted. Also, it has been shown that GRAFICS is better than many
well-known algorithms by a sequential review of the published GT
literature.