
ISSN 2250 - 1975


INTERNATIONAL JOURNAL OF

ADVANCES IN COMPUTING AND MANAGEMENT

VOLUME-2 NUMBER-1 JANUARY-DECEMBER 2013


Prof. Rajesh Math

Padmashree Dr D Y Patil Institute of Master of Computer Applications, Pune, Maharashtra, India 9960866179 [email protected]

Dr. Vidyadhar Vedak 9881061388 [email protected]

Neha Sharma 9923602490 [email protected]

Nilam Upasani 9881300235 [email protected]

Prof. T. Ramayah, University Sains Malaysia, Malaysia [email protected]

Dr. Guneratne Wickremasing, Victoria University, Australia [email protected]

Dr. Juergen Seitz, Baden-Wuerttemberg Cooperative State University Heidenheim, Germany [email protected]

Dr. Cheickna Sylla, New Jersey Institute of Technology, USA [email protected]

Dr. Ngai M. Kwok , The University of New South Wales, Australia [email protected]

Dr. Anand, Ministry of Higher Education, Oman [email protected]

Dr. Dinesh Mital, University of Medicine and Dentistry of New Jersey, Newark [email protected]

Dr. Bimlesh Wadhwa, National University of Singapore, Singapore [email protected]

Dr. Raveesh Agarwal, India [email protected]

Dr. Renita Dubey, Amity University, India [email protected]

Muhammad Kashif Saeed, GIFT University, Pakistan [email protected]

Anupama R., Amity Business School, India [email protected]

Ting-Peng Liang, National Sun Yat-Sen University, Taiwan [email protected]

Mohd Helmy Abd Wahab, Universiti Tun Hussein Onn Malaysia, Malaysia [email protected]

Simone Silvestri, University of Rome, Italy simone.silvestri{AT}di.uniroma1.it

Dr. (Capt.) C.M. Chitale, University of Pune, India [email protected]

Dr. C. D. Balaji, IIT Madras, India [email protected]

Dr. Manoj Kamat, Goa University, India [email protected]

Dr. K.B.L. Shrivastava, IIT Kharagpur, India [email protected]

Dr. Arif Anjum , Editor, International Journal of Management Studies, India [email protected]

Vivek Mahajan, Govt. Polytechnic College, J&K, India [email protected]

Dr. Pradeep Kautish, India [email protected]

Adv. Swapneshwar Goutam, India [email protected]

K. Kishore, India [email protected]

Dr. Ajith Paninchukunnath , Fellow of the MDI, India [email protected]

Dr. Mehraj-Ud-Din Dar, University of Kashmir, India [email protected]

Editorial Board

Editors

Editor-In-Chief

Published By:

DR. D. Y. PATIL PRATISHTHAN’S PADMASHREE DR. D. Y. PATIL INSTITUTE OF MASTER OF COMPUTER APPLICATIONS

SEC. NO. 29, AKURDI, MAHARASHTRA, INDIA.

Email: - [email protected], [email protected]


INTERNATIONAL JOURNAL OF ADVANCES IN COMPUTING AND MANAGEMENT

Contents

A General Purpose Integrated Framework For Secure Web Based Application .... 1-5 Lukesh R. Kadu, Renuka Agrawal

IRIS Texture Analysis, Finger Printing and Signature Verification for Automatic Data Retrieval System using Biometric Pattern Recognition System .... 6-10 Suman Sengupta, Tutu Sengupta

Association Rule Mining with Hybrid Dimension .... 11-14 Priyanka Pawar, Vipul Dalal, Sachin Deshpande

Image Compression Using Wavelet Packet and SPIHT Encoding Scheme .... 15-18 Savita K. Borate, D. M. Yadav

Simulation Model of Ultrasonographic Imaging System using Ptolemy modelling tool and its Hardware implementation and Quantitative analysis of the results .... 19-25 Moresh Mukhedkar, Bhakti Mukhedkar

Application of Data Mining Algorithms for Customer Classification .... 26-28 Nilima Patil, Rekha Lathi

Software Requirement Inspection Tool for Requirement Review .... 29-33 Tejal Gandhi, Bhavesh Chauhan

The Asymptotic Analysis of Quick Sort and Quick Select .... 34-37 Riyazahmed A. Jamadar

Talent Acquisition and Retention .... 38-41 Meera Singh

The Impact of Knowledge Management Practices in Improving Student Learning Through E-Learning .... 42-46 Gopal S. Jahagirdar

ISSN 2250 - 1975 VOLUME 2 NUMBER 1 JANUARY-DECEMBER 2013


A General Purpose Integrated Framework For Secure Web Based Application

Lukesh R. Kadu1, Renuka Agrawal2

Abstract - To address the challenges in Web services security, we first analyze the threats facing Web services and the related security standards, present an integrated security framework based on authentication, authorization, confidentiality, and integrity mechanisms for Web services, and describe how these security mechanisms can be integrated and implemented to make Web services more resistant to attack. Because the performance of Web applications has also become critical, we propose a general purpose integrated framework for secure Web-based applications.
Keywords — Web services; security; authentication; authorization; confidentiality; role-based access control; secure web application project

I. INTRODUCTION

Due to the evolution of Internet technology and the popularization of applications, security and performance have become key issues in implementing Web-based applications. Presently, most Web-based application designs consider only the security issue and ignore the performance problem. To address both issues, we propose a new approach that integrates security and performance aspects for Web-based applications.

The advance of Web services technologies promises to have far-reaching effects on the Internet and enterprise networks. Web services based on the eXtensible Markup Language (XML), SOAP, and related open standards, and deployed in Service Oriented Architectures (SOA), allow data and applications to interact without human intervention through dynamic and ad hoc connections. Web services technology can be implemented in a wide variety of architectures, can co-exist with other technologies and software design approaches, and can be adopted in an evolutionary manner without requiring major transformations to support applications and databases. The security challenges [1] presented by the Web services approach are critical and unavoidable. Many of the features that make Web services attractive, including greater accessibility of data, dynamic application-to-application connections, and relative autonomy (lack of human intervention), are at odds with traditional security models and controls. Difficult issues and unsolved problems exist, such as protecting the following: 1) Confidentiality and integrity of data transmitted via Web services protocols in service-to-service transactions, including data that traverses intermediary services.

2) Functional integrity of the Web services that requires the establishment of trust between services on a transaction-by-transaction basis.

Current security mechanisms at the application layer are not enough to prevent attacks by hackers. An enterprise uses information technology (IT) to create competitiveness and value. Advanced IT changes business operations and lifestyles and allows information to be exchanged transparently via the Internet, an Intranet, or an Extranet. The Web-based applications of an enterprise let internal employees exchange and retrieve Web pages stored in different application systems on the Intranet; they also strengthen competitiveness and reduce cost. Since the Internet environment is open and global, attacks by hackers are omnipresent. Hence, the security problems of Web-based applications are becoming more and more important.

Though a firewall can efficiently prevent attacks on the Intranet, most intrusion attacks still occur in business operations or data management at the application layer. These events include illegal Web page access, theft of sensitive data, purchasing goods at artificially low prices, and stealing a user's cookie via JavaScript. Currently, most Web-based application designs consider only the security issue and ignore the performance problem, so performance must also be considered when designing Web-based applications. Therefore, the purpose of this paper is to propose a general purpose framework that resolves security events and improves the processing performance of Web-based applications on the Intranet.

Finally, our experimental results in Section VII indicate that the proposed general purpose integrated framework can close the related security holes, obtain better processing performance, and reduce development time and cost.

II. ATTACKS FACING WEB SERVICES AND RELATED SECURITY PARAMETERS

There is a wealth of security standards and technologies available for securing Web services, but they may not all be sufficient or necessary for a particular organization or an individual service. For that reason, it is important to understand the attacks that Web services face, so that organizations can determine which threats their Web services must be secured against. The common threats facing Web services are [2]:

1) Message alteration. An attacker inserts, removes, or modifies information within a message to deceive the receiver.

2) Loss of confidentiality. Information within a message is disclosed to an unauthorized individual.

3) Falsified messages. Fictitious messages that an attacker intends the receiver to believe are sent from a valid sender.

1Student of ME (Computer), Pillai Institute of Technology, New Panvel, Navi Mumbai, India

2Assistant Professor, Pillai Institute of Technology, New Panvel, Navi Mumbai, India [email protected] [email protected]

International Journal of Advances in Computing and Management, 2(1) January-December 2013, pp. 1-5 Copyright @ DYPIMCA, Akurdi, Pune (India) ISSN 2250 - 1975


4) Man in the middle. A third party sits between the sender and provider and forwards messages such that the two participants are unaware, allowing the attacker to view and modify all messages.

5) Principal spoofing. An attacker constructs and sends a message with credentials such that it appears to be from a different, authorized principal.

6) Denial of service. An attacker causes the system to allocate resources disproportionately such that valid requests cannot be met.

III. LITERATURE REVIEW

The solutions to security problems in designing Web-based applications include SWAP (Secure Web Applications Project), RBAC (Role-Based Access Control), language-based systems, commercial systems, and others. First, we examine the SWAP, RBAC, and language-based approaches. Researchers have proposed many kinds of security models, such as Discretionary Access Control (DAC) and Mandatory Access Control (MAC) [3]. These models apply to different applications and application environments and are usually user-group based. While DAC and MAC can hardly be enforced in open networks, RBAC has the advantage of simplifying management and is one of the more promising approaches to handling sets of access rights. The concept of RBAC began with the multi-user and multi-application on-line systems pioneered in the 1970s. The basic idea of the RBAC model is that permissions are assigned to roles rather than to users [4]; users acquire permissions by acting as members of roles, as shown in Fig. 1 (a minimal code sketch follows).
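To make the role-to-permission idea concrete, the following is a minimal, illustrative C# sketch, not taken from the paper: permissions are attached to roles, and a user is authorized only through the roles assigned to that user. All names (roles, users, permissions, the RbacDemo class) are hypothetical.

using System;
using System.Collections.Generic;

// Minimal RBAC sketch: permissions belong to roles, users hold roles,
// and an access check walks user -> roles -> permissions.
class RbacDemo
{
    static readonly Dictionary<string, HashSet<string>> RolePermissions =
        new Dictionary<string, HashSet<string>>
        {
            { "Clerk",   new HashSet<string> { "ViewOrder" } },
            { "Manager", new HashSet<string> { "ViewOrder", "ApproveOrder" } }
        };

    static readonly Dictionary<string, HashSet<string>> UserRoles =
        new Dictionary<string, HashSet<string>>
        {
            { "bob",   new HashSet<string> { "Clerk" } },
            { "alice", new HashSet<string> { "Manager" } }
        };

    // A user is authorized if any role assigned to the user carries the permission.
    static bool IsAuthorized(string user, string permission)
    {
        HashSet<string> roles;
        if (!UserRoles.TryGetValue(user, out roles)) return false;
        foreach (string role in roles)
        {
            HashSet<string> perms;
            if (RolePermissions.TryGetValue(role, out perms) && perms.Contains(permission))
                return true;
        }
        return false;
    }

    static void Main()
    {
        Console.WriteLine(IsAuthorized("bob", "ApproveOrder"));   // False
        Console.WriteLine(IsAuthorized("alice", "ApproveOrder")); // True
    }
}

Changing a role's permission set immediately changes what every member of that role may do, which is the management simplification the RBAC literature emphasizes.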

Fig.1. RBAC ARCHITECTURE IN WEB BASED APPLICATION

A typical Web-based application using the SWAP tool, as shown in Fig. 2, has source code written in three different languages, as well as some third-party component whose source code we assume is inaccessible. Spectre protects Language C or the third-party component of the application by dynamically transforming HTTP requests and responses in accordance with the refined SPDL security policy for this application. Spectre does not protect the parts of the application written in Languages A and B; instead, the secure IDL compiler folds the relevant parts of the SPDL policy directly into the source code. The IDL compiler supports multiple languages, allowing the SPDL to be converted into Language A and Language B [5].

In order to run untrusted code in the same process as trusted code, there must be a mechanism that allows dangerous calls to determine whether their caller is authorized to exercise the privilege of using the dangerous routine. Java systems have adopted a technique called stack inspection to address this concern. Stack inspection is an algorithm for preventing untrusted code from using sensitive system resources.

Fig.2. SWAP ARCHITECTURE IN WEB BASED APPLICATION

Some Web services and HTTP standards can protect against many of these threats:

1) W3C XML Encryption. Used by WS-Security to encrypt messages and provide confidentiality for part or all of a SOAP message.

2) W3C XML Signature. Used by WS-Security to digitally sign messages and provide message integrity and sender authentication (a signing sketch follows this subsection).

3) WS-Security Tokens. Allow messages to include credentials that help receivers determine whether the message sender is authorized to perform the requested action.

4) W3C WS-Addressing IDs. Allow the message sender to supply a unique identifier for the message.

5) IETF SSL/TLS. Secures the HTTP protocol over which SOAP messages are sent and received.

6) SSL/TLS with client authentication. Requires both the sender and receiver to authenticate with one another before securing the HTTP protocol.

The SWAP approach only protects interactions among Web pages, whereas the RBAC approach only restricts which users may access Web pages and does not protect interactions among them. In addition, the Web-based application development process is a multi-stage process consisting of design, coding, testing, and development. Securing a Web-based application is usually difficult, but the authors have proposed to solve application security using language-based systems such as the J2ME programming tool. Other researchers have proposed domain-specific modeling and service-oriented design frameworks for designing secure Web-based applications [6, 7]. So far, however, most work focuses only on designing secure Web-based applications; a full-fledged general purpose integrated framework that supports both security and performance for Web-based applications is still missing.
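As a hedged illustration of item 2 above (W3C XML Signature), the sketch below signs and verifies an XML message with the .NET System.Security.Cryptography.Xml classes. It is a minimal enveloped XML-DSig example, not the WS-Security implementation of any particular Web services stack, and the message content and key handling are simplified assumptions.

using System;
using System.Security.Cryptography;
using System.Security.Cryptography.Xml;
using System.Xml;

class XmlSignatureDemo
{
    static void Main()
    {
        RSA rsa = RSA.Create();   // signing key; in practice this would come from a certificate store

        XmlDocument doc = new XmlDocument { PreserveWhitespace = true };
        doc.LoadXml("<Envelope><Body>Transfer 100 units</Body></Envelope>");

        // Sign the whole document with an enveloped XML-DSig signature.
        SignedXml signer = new SignedXml(doc) { SigningKey = rsa };
        Reference reference = new Reference { Uri = "" };
        reference.AddTransform(new XmlDsigEnvelopedSignatureTransform());
        signer.AddReference(reference);
        signer.ComputeSignature();
        doc.DocumentElement.AppendChild(doc.ImportNode(signer.GetXml(), true));

        // Verify: any change to the Body after signing makes CheckSignature return false.
        SignedXml verifier = new SignedXml(doc);
        XmlNodeList signatures =
            doc.GetElementsByTagName("Signature", SignedXml.XmlDsigNamespaceUrl);
        verifier.LoadXml((XmlElement)signatures[0]);
        Console.WriteLine(verifier.CheckSignature(rsa));   // True for an unmodified message
    }
}

The same integrity property is what WS-Security relies on when a SOAP intermediary forwards a signed message: alteration of the signed content invalidates the signature.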

IV. ANALYSIS

In designing Web-based applications, there are some essential design issues for implementing the system. We discuss the following design issues according to two main aspects: security and performance.

Design issues for security

The rise of the Internet, with opportunities to connect computers anywhere in the world, has increased the threats to an organization's Web-based applications. Hence, Web-based applications must address the security of accessing and storing information. In particular, the following design issues address the security problems in a Web-based application:

(1) Access control: different users and roles can have different interfaces or permissions.

(2) SQL injection: crafted SQL statements are streamed to the application to attack the database or the records it holds.

(3) Session hijacking: an attacker grabs the session ID of a Web user and enters that user's session.

(4) Information disclosure: detailed internal information is shown to the user when an exception occurs.

(5) Hidden-field tampering: the contents of hidden fields in Web pages are modified by the client.

Design issues for performance

Besides security, the second most important characteristic of Web-based applications is performance. In general, the following performance factors are of concern when programming a Web-based application (a small timing sketch follows this list):

(1) Time for accessing a Web page: it is affected by the structure of the program.

(2) Time for manipulating the database:
- Data Query: time for selecting data from the database.
- Data Insert: time for inserting data into the database.
- Data Update: time for updating data in the database.
- Data Delete: time for deleting data from the database.

The two demands stated above drive the need for a better way of designing Web-based applications. Therefore, we develop a general purpose integrated framework to support security and performance features in Web-based applications.
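A hedged sketch of how such timings could be collected (the paper does not show its measurement harness): the operation to be timed, for example a page request or one of the database calls listed above, is passed in as a delegate and measured with a Stopwatch. The class and method names here are hypothetical.

using System;
using System.Diagnostics;

static class PerfProbe
{
    // Runs the given operation once and returns the elapsed wall-clock time in seconds.
    public static double MeasureSeconds(Action operation)
    {
        Stopwatch sw = Stopwatch.StartNew();
        operation();
        sw.Stop();
        return sw.Elapsed.TotalSeconds;
    }
}

// Example usage (RunDataQuery is a placeholder for one of the operations above):
//   double seconds = PerfProbe.MeasureSeconds(() => RunDataQuery());

Averaging over repeated runs would give steadier figures than a single measurement, which is worth keeping in mind when comparing values of the order of milliseconds.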

V. DESIGN

A. Algorithm of the General Purpose Integrated Framework

As already stated, the general purpose framework for secure Web-based applications combines the RBAC and SWAP methods while also enhancing performance in the design of Web-based applications. The user first accesses Web pages through RBAC; interactions between any two Web pages are then handled through SWAP, with performance tuning applied under the general purpose integrated framework.

Based on the design and performance issues above, the algorithm is as follows.

MAIN_FUNCTION ( )
{
  Input:
    Create a database WBA
    Create a table Tb_User (U_Id, U_Name, Gender)
    Create a table Tb_Login (R_Id, U_Id, Uid, Upswd)
    Create a table Tb_Role (R_Id, R_Name)
    Create a table Tb_Permission (Per_Id, Per_Name)
    Create a table Tb_AccessRight (A_Id, R_Id, Per_Id)
    Create any other tables the application will use
    Dim flag as int
    Dim uid as user input
    Dim upswd as user input
  Method:
  START
    If the user requests a Web page other than the login page
      Navigate to the 'login' page
    Else {
      Display the 'login' page
      User enters uid in the 'id' field
      User enters upswd in the 'password' field
      If the 'id' field or the 'password' field is null then
        Navigate to the 'login' page
      Else Do {
        Query all fields of table Tb_Login
        If the 'id' field equals the Tb_Login field Uid then
          If the 'upswd' field equals the Tb_Login field Upswd then {
            Set flag to '1'
            Access_Right(R_Id)
            Break
          }
      } while (not end of table)
      If flag is not equal to '1' then
        Navigate to the 'login' page
    }
  END
}

Access_Right (R_Id)
{
  Query table Tb_AccessRight for field Per_Id via R_Id
  Query table Tb_Permission for field Per_Name via Per_Id
  Display the user interface; the user then executes the permitted operations
  User_Operation( )
}

User_Operation ( )
{
  If the user wants to insert data into database WBA then {
    Input all field values
    Argument_Check (field1, field2, ..., fieldn)
  }



  If the user wants to update data in database WBA then {
    Input all field values
  }
}

The key performance tuning issues of the general purpose framework focus mainly on accessing Web pages on the Web server and manipulating tables within the database.

Fig.3.RBAC FLOWCHART IN GENERAL PURPOSE INTEGRATED FRAMEWORK

VI. PRACTICAL CASE IMPLEMENTATION

Actually, the implementation procedure is divided into security and performance instances.

PART I: Security Instances

(1) Access control: different users' logins get different interfaces based on their user-defined roles.

(2) SQL injection: the framework prevents data leakage through this attack vector (a hedged parameterized-query sketch follows this list).

(3) Coding for session hijacking:
Session.Add("id", "u" + uid.Text + "s");
Session.Add("pswd", "u" + upswd.Text + "u");

(4) Coding for information disclosure:
try
{
    myCommand.ExecuteNonQuery();
    Message.InnerHtml = "insertion O.K.<br>";
}
catch (SqlException e)
{
    if (e.Number == 2627)
        Message.InnerHtml = "The Same Index KEY!!";
    else
        Message.InnerHtml = "Is all data stuffing?";
    Message.Style["color"] = "red";
}
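Item (2) above names SQL injection but shows no code; the following is a hedged sketch, not taken from the paper, of how the login check from the Section V algorithm could be written with a parameterized ADO.NET command so that user input never gets concatenated into the SQL text. The table and column names follow the Tb_Login schema defined earlier; the connection string and the CheckLogin helper are assumptions for illustration.

using System.Data;
using System.Data.SqlClient;

static bool CheckLogin(string userId, string password, string connStr)
{
    using (SqlConnection conn = new SqlConnection(connStr))
    using (SqlCommand cmd = new SqlCommand(
               "SELECT R_Id FROM Tb_Login WHERE Uid = @Uid AND Upswd = @Upswd", conn))
    {
        // User input travels only as parameter values, never as SQL text.
        cmd.Parameters.Add(new SqlParameter("@Uid", SqlDbType.NVarChar, 20)).Value = userId;
        cmd.Parameters.Add(new SqlParameter("@Upswd", SqlDbType.NVarChar, 20)).Value = password;
        conn.Open();
        return cmd.ExecuteScalar() != null;   // a non-null R_Id means the credentials matched
    }
}

// Example: bool ok = CheckLogin(uid.Text, upswd.Text, connStr);  // then call Access_Right only if ok

Because the parameter values are sent separately from the statement, an input such as ' OR '1'='1 is treated as literal text rather than as part of the query.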

Fig.4.SWAP FLOWCHART IN GENERAL PURPOSE INTEGRATED FRAMEWORK

PART II: Performance Instances

(1) Time for accessing Web pages.

(2) Time for manipulating the database:

Data Query:
SqlDataAdapter myCommand = new SqlDataAdapter("select * from Tb_Users", myConnection);
DataSet ds = new DataSet();
myCommand.Fill(ds, "Tb_Users");
MyDataGrid.DataSource = ds.Tables["Tb_Users"].DefaultView;
MyDataGrid.DataBind();

Data Insert:
String insertCmd = "insert into Tb_Users (name, pho, addr) values (@Name, @Pho, @Addr)";
SqlCommand myCommand = new SqlCommand(insertCmd, myConnection);
myCommand.Parameters.Add(new SqlParameter("@Name", SqlDbType.NVarChar, 20));
myCommand.Parameters["@Name"].Value = name.Value;
...
myCommand.Connection.Open();
myCommand.ExecuteNonQuery();

Data Update:
String updateCmd = "UPDATE Tb_Users SET name = @Name, pho = @Pho, addr = @Addr where User_Id = @Id";
SqlCommand myCommand = new SqlCommand(updateCmd, myConnection);
myCommand.Parameters.Add(new SqlParameter("@Name", SqlDbType.NVarChar, 40));
...



myCommand.Parameters["@Id"].Value = MyDataGrid.DataKeys[(int)e.Item.ItemIndex];
String[] cols = { "@Name", "@Pho", "@Addr" };

Data Delete:
String deleteCmd = "DELETE from Tb_User where User_Id = @Id";
SqlCommand myCommand = new SqlCommand(deleteCmd, myConnection);
myCommand.Parameters.Add(new SqlParameter("@Id", SqlDbType.NVarChar, 11));
myCommand.Parameters["@Id"].Value = MyDataGrid.DataKeys[(int)e.Item.ItemIndex];
myCommand.Connection.Open();

VII. RESULT ANALYSIS

The models are validated by implementing a practical ERP system with both the proposed and the traditional approach and comparing their security level and processing performance. The comparison between the traditional and the integrated approach is shown in Fig. 5.

Fig. 5. COMPARISON BETWEEN TRADITIONAL FRAMEWORK AND GENERAL PURPOSE INTEGRATED FRAMEWORK

VIII. CONCLUSION

In this paper, we propose the general purpose integrated framework and design its algorithm. We also illustrate how to implement secure Web-based applications with performance tuning. We then utilize this framework to set up a practical case, an ERP system, perform simulations, and analyze the results. Our experimental results indicate that the proposed general purpose integrated framework can close the related security holes and also improve performance.

REFERENCES
[1] E. Pimenidis and C. K. Georgiadis, "Web services security evaluation considerations," International Journal of Electronic Security and Digital Forensics, vol. 2, pp. 239-252, Mar. 2009.
[2] J. Schwarz, B. Hartman, and A. Nadalin, "Security challenges, threats and countermeasures," White Paper, The Web Services-Interoperability Organization, May 2005.
[3] Guoping Zhang, "An Extended Role Based Access Control Model for the Internet of Things," International Conference on Information, Networking and Automation, June 2010.
[4] Min Wu, Jiaxun Chen, and Yongsheng Ding, "Study on Role-Based Access Control Model for Web Services and its Application."
[5] David Scott and Richard Sharp, "Developing Secure Web Applications," IEEE Internet Computing, vol. 6, no. 6, pp. 38-45, November 2002.
[6] Hiroshi Wada and Junichi Suzuki, "A Domain Specific Modeling Framework for Secure Network Applications," in Proceedings of the 30th Annual International Computer Software and Applications Conference (COMPSAC'06), pp. 353-355, September 2006.
[7] Hiroshi Wada, Junichi Suzuki, and Katsuya Oba, "A Service-Oriented Design Framework for Secure Network Applications," in Proceedings of the 30th Annual International Computer Software and Applications Conference (COMPSAC'06), pp. 359-368, September 2006.
[8] www.google.com

Data underlying Fig. 5 (for each feature: performance and security values under each approach):

Features            | Traditional Approaches      | General Purpose Integrated Framework
                    | performance    security     | performance    security
Login               | 8 * 10^-3      2            | 6 * 10^-3      1
Web page accessing  | 7 * 10^-3      2            | 5 * 10^-3      1
DA*-Data Query      | 9 * 10^-3      2            | 6 * 10^-3      1
DA*-Data Insert     | 11 * 10^-3     3            | 8 * 10^-3      2
DA*-Data Update     | 7 * 10^-3      3            | 4 * 10^-3      1
DA*-Data Delete     | 4 * 10^-3      3            | 2 * 10^-3      2



IRIS Texture Analysis, Finger Printing and Signature Verification for Automatic Data Retrieval System using Biometric Pattern Recognition System

Suman Sengupta1, Tutu Sengupta2

Abstract— Pattern recognition is the study of how machines can observe the environment, learn to distinguish patterns of interest from their background, and make sound and reasonable decisions about the categories of the patterns. In spite of almost 50 years of research, design of a general purpose machine to recognize pattern still remains a challenge. In most cases humans are able to recognize patterns to the best of their capabilities, yet we do not understand how humans recognize these patterns. Ross emphasizes the work of Nobel Laureate Herbert Simon whose central finding was that pattern recognition is critical in most human decision making tasks. The more relevant patterns at ones disposal, the better decisions will be. This is hopeful news to people involved in the study of artificial intelligence, since computers can surely be taught to recognize patterns. Pattern recognition finds tremendous potential in diagnosis of diseases, identification of criminals and the mapping of their brain, programs that help banks score credit applicants and help pilots land airplanes etc. These applications include data mining (identifying a pattern e.g., correlation, or an outlier in millions of multidimensional patterns), document classification (efficiently searching text documents over the internet), financial forecasting, crop forecasting, organization and retrieval of multimedia databases, and biometrics. In this paper we propose a new biometric-based combined smart card Iris feature extraction system, finger printing and signature verification system which can be successfully implemented for retrieval of individual’s data. The system involves a sensor which can automatically sense the iris and acquire the biometric data in numerical format (Iris Images). Similarly the method also involves swiping of the finger print on the swiping machine and comparing an individual’s signature for the purpose. This method of detection can be modified for face detection, detecting thumb and finger impression and detecting behavioral patterns and other non biometric characteristics like hand writing and signature verification. The sensor in this case is a highly sensitive camera and/or a swipe system and an electronic scribbling board for signature verification. Keywords— Biometric, eyeprint, IRIS, Unique Identification Code, Pattern Recognition

I. INTRODUCTION

Most children can recognize digits and letters by the time they are five years old. Small characters, large characters, handwritten, machine printed, or rotated - all are easily recognized by the young. Human beings as well as animals can recognize sound, touch, odor, and emotions. The fact that we recognize patterns is taken for granted until we face the uphill task of teaching a machine to understand and replicate them. Pattern recognition is the study of how machines can observe the environment, learn to distinguish patterns of interest from their background, and make sound and reasonable decisions about the categories of the patterns. In spite of almost 50 years of research, the design of a general purpose machine to recognize patterns still remains a challenge. In most cases humans are able to recognize patterns to the best of their capabilities, yet we do not understand how humans recognize these patterns. Nobel Laureate Herbert Simon's central finding was that pattern recognition is critical in most human decision making tasks: the more relevant patterns at one's disposal, the better the decisions will be. This is hopeful news to people involved in the study of artificial intelligence, since computers can surely be taught to recognize patterns. Pattern recognition finds tremendous potential in the diagnosis of diseases, the identification of criminals and the mapping of their brains, helping pilots land airplanes on airstrips, classifying documents (efficiently searching text documents), financial forecasting, crop forecasting, the organization and retrieval of multimedia databases, and biometrics.

A. Concept of pattern recognition

Automatic (machine) recognition, description, classification, and grouping of patterns are important problems in a variety of engineering and scientific disciplines such as biology, psychology, medicine, marketing, computer vision, artificial intelligence, and remote sensing. According to Watanabe, a pattern is defined as the opposite of chaos; it is an entity whose members share a similar shape or property, or both. For example, a pattern could be a fingerprint image, a handwritten cursive word, a human face, or a speech signal. Given a pattern, its recognition/classification may consist of one of the following two tasks:

supervised classification in which the input pattern is identified as a member of a predefined class, or unsupervised classification (e.g., clustering) in which the pattern is assigned to a hitherto unknown class.

In many recognition problems involving complex patterns, it is more appropriate to adopt a hierarchical perspective where a pattern is viewed as being composed of simple sub-patterns which are themselves built from yet simpler sub-patterns. The simplest (elementary) sub-patterns to be recognized are called primitives, and the given complex pattern is represented in terms of the

1 Department of Computer Application, Shri Ramdeobaba College of Engineering and Management, RTM Nagpur University, Nagpur as Assistant Professor. 2 Maharashtra Remote Sensing Application Centre, Nagpur, an autonomous body under Department of Planning, Government of Maharashtra as Scientist. [email protected] 2 [email protected]

International Journal of Advances in Computing and Management, 2(1) January-December 2013, pp. 6-10 Copyright @ DYPIMCA, Akurdi, Pune (India) ISSN 2250 - 1975


interrelationships between these primitives. In syntactic pattern recognition, a formal analogy is drawn between the structure of patterns and the syntax of a language. The patterns are viewed as sentences belonging to a language, primitives are viewed as the alphabet of the language, and the sentences are generated according to a grammar. Thus, a large collection of complex patterns can be described by a small number of primitives and grammatical rules. The grammar for each pattern class must be inferred from the available training samples.

B. Structural pattern recognition

Structural pattern recognition is important because, in addition to classification, this approach also provides a description of how the given pattern is constructed from the primitives. This paradigm has been used in situations where the patterns have a definite structure which can be captured in terms of a set of rules, such as EKG waveforms, textured images, and shape analysis of contours.

The implementation of a syntactic approach, however, leads to many difficulties which primarily have to do with the segmentation of noisy patterns (to detect the primitives) and the inference of the grammar from training data. Fu introduced the notion of attributed grammars which unifies syntactic and statistical pattern recognition. The syntactic approach may yield a combinatorial explosion of possibilities to be investigated, demanding large training sets and very large computational efforts.

C. Neural Networks

Neural networks are massively parallel computing systems consisting of an extremely large number of simple processors with many interconnections. Neural network models attempt to use some organizational principles (such as learning, generalization, adaptively, fault tolerance and distributed representation, and computation) in a network of weighted directed graphs in which the nodes are artificial neurons and directed edges (with weights) are connections between neuron outputs and neuron inputs. These in fact try to replicate the brain of human beings. The main characteristics of neural networks are that they have the ability to learn complex

nonlinear input-output relationships, use sequential training procedures, and adapt themselves to the data.

D. Relevance of pattern recognition problem

Impersonation is the greatest security threat of modern days, from the terrorist attack to the purchase of a simple consumer item, to the harrowing task of identifying the criminals to the identification of accident victims or unclaimed dead bodies, to the uphill task of preventing impersonation in the elections, every where there is a tremendous potential for pattern recognition. We give the example of a person who produces a debit or credit card for purchasing goods, transaction is done without verifying if the producer is lawful owner of the card or not. The verifier can be identified and authenticated by what he knows (password), by what he owns (document) or by who he is (Biometrics). Passwords can be hacked, documents can be stolen but biometrics measures are something very difficult to reproduce if not impossible. The current trend in the research world is headed towards Biometrics since the level of security is highly increased. The most popular biometric features are based on individual’s signature, retinal features, face recognition, iris characteristics, fingerprint, DNA matching and voice recognition.

II. METHODOLOGY

A. For IRIS recognition technique

As in any other automated recognition technique, the IRIS-recognition system has to compare a newly acquired IRIS pattern against existing patterns already stored in the system. The IRIS pattern can be extracted from eye images. The first stage would be an alignment process in order to eliminate variation in scale and rotation. The features that can be used could be based on local orientation, phase, or spatial frequency information. The properties of the iris that enhance its suitability for use in automatic identification include:

• It is protected from the external environment.
• It cannot be surgically modified without risk to vision.
• It shows a physiological response to light.
• Its image can easily be registered at some distance.

The human iris therefore holds great potential for the biometrics-based identification and verification research and development community. The iris is so unique that no two irises are alike, even among identical twins, in the entire human population. Automated biometrics-based personal identification systems can be classified based on two major tasks, viz. identification and verification.

TABLE I
APPLICATIONS OF PATTERN RECOGNITION - GENERAL

Problem                        | Application                                                      | Input Pattern            | Pattern Classes
Bioinformatics                 | Sequence analysis                                                | DNA/protein sequence     | Known types of genes/patterns
Document classification       | Internet search                                                  | Text document            | Document categories (e.g. business, sports, etc.)
Document image analysis       | Reading machine for the blind                                    | Image of document        | Alphanumeric characters/words
Industrial automation         | Manufacture, soldering and inspection of printed circuit boards | Intensity or range image | Defective/non-defective nature of product
Multimedia database retrieval | Internet search                                                  | Video clip               | Video genres (e.g. action, dialogues, etc.)
Biometric recognition         | Personal identification                                          | Face, iris, fingerprints | Authorized users for access control
Remote sensing                | Forecasting crop yield                                           | Multispectral image      | Land use categories, growth pattern of crops

B. Actual iris identification involves

1) The person stands in front of the iris identification system, generally between one and three feet distance and a wide angle camera calculates the position of his eye.

2) A second camera zooms in on the eye and takes a black and white image.

3) Once the system has the iris in focus, it overlays a circular grid (zones of analysis, or areas of interest) on the image of the iris and identifies where areas of light and dark fall. The purpose of overlaying the grid is so that the iris recognition system can recognize a pattern within the iris and turn the 'points' within the pattern into an 'eyeprint'.

4) The captured image or 'eyeprint' is checked against a previously stored 'reference template' in the database (a comparison sketch, under an assumed metric, follows this list).

In the iris alone, there are over 400 distinguishing characteristics, or Degrees of Freedom (DOF), that can be quantified and used to identify an individual (Daugman, J. & Williams, G. O. 1992), of which approximately 260 are used in practice, such as contraction furrows, striations, pits, collagenous fibers, filaments, crypts (darkened areas on the iris), serpentine vasculature, rings, and freckles.
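The paper does not specify the metric used when the captured eyeprint is checked against the stored reference template; one common choice in the literature (Daugman-style binary iris codes) is the normalized Hamming distance, and the sketch below illustrates that assumed approach. The class name, threshold, and binary-code representation are all assumptions.

using System;

static class IrisMatch
{
    // Fraction of positions at which the two binary codes disagree:
    // 0.0 means identical codes, values near 0.5 indicate unrelated irises.
    public static double NormalizedHammingDistance(bool[] probe, bool[] template)
    {
        if (probe.Length != template.Length)
            throw new ArgumentException("Iris codes must have the same length.");
        int differing = 0;
        for (int i = 0; i < probe.Length; i++)
            if (probe[i] != template[i]) differing++;
        return (double)differing / probe.Length;
    }
}

// Example: accept the identity claim when
//   IrisMatch.NormalizedHammingDistance(probe, template) < 0.32   (threshold is illustrative)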

It is said that the iris recognition system is more foolproof than fingerprint recognition.

C. For fingerprint matching

Similarly the swipe machine can be used to take impression of the thumb/finger print. Identification involves acquisition of data using very high resolution camera and then classifying the image to get the characteristics of the fingerprint using the well established methods of image enhancement, principal component analysis, edge enhancement and filtering etc.

The three basic patterns of fingerprint ridges are the arch, loop, and whorl. An arch is a pattern where the ridges enter from one side of the finger, rise in the center forming an arc, and then exit the other side of the finger. The loop is a pattern where the ridges enter from one side of a finger, form a curve, and tend to exit from the same side they entered. In the whorl pattern, ridges form circularly around a central point on the finger. The major minutia features of fingerprint ridges are: ridge ending, bifurcation, and short ridge (or dot). The ridge ending is the point at which a ridge terminates. Bifurcations are points at which a single ridge splits into two ridges. Short ridges (or dots) are ridges which are significantly shorter than the average ridge length on the fingerprint. Minutiae and patterns are very important in the analysis of fingerprints, since no two fingers have been shown to be identical.

A fingerprint sensor is an electronic device used to capture a digital image of the fingerprint pattern. The captured image is called a live scan. This live scan is digitally processed to create a biometric template (a collection of extracted features) which is stored and used for matching.

Optical fingerprint imaging involves capturing a digital image of the print using visible light. This type of sensor is, in essence, a specialized digital camera. The top layer of the sensor, where the finger is placed, is known as the touch surface. Beneath this layer is a light-emitting phosphor layer which illuminates the surface of the finger. The light reflected from the finger passes through the phosphor layer to an array of solid state pixels (a charge-coupled device) which captures a visual image of the fingerprint. A scratched or dirty touch surface can cause a bad image of the fingerprint. A disadvantage of this type of sensor is the fact that the imaging capabilities are affected by the quality of skin on the finger. For instance, a dirty or marked finger is difficult to image properly. Also, it is possible for an individual to erode the outer layer of skin on the fingertips to the point where the fingerprint is no longer visible. It can also be easily fooled by an image of a fingerprint if not coupled with a "live finger" detector. However, unlike capacitive sensors, this sensor technology is not susceptible to electrostatic discharge damage.

Pattern based algorithms compare the basic fingerprint patterns (arch, whorl, and loop) between a previously stored template and a candidate fingerprint. This requires that the images be aligned in the same orientation. To do this, the algorithm finds a central point in the fingerprint image and centers on that. In a pattern-based algorithm, the template contains the type, size, and orientation of patterns within the aligned fingerprint image. The candidate fingerprint image is graphically compared with the template to determine the degree to which they match.
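A hedged sketch of the graphical comparison step just described, assuming the template and candidate images have already been aligned on the central point and converted to equally sized grey-level arrays. The paper does not name a specific similarity measure, so normalized correlation is used here purely for illustration; the class and method names are made up.

using System;

static class PatternCompare
{
    // Normalized correlation between two equally sized, aligned, non-constant images:
    // values near 1.0 indicate a strong pattern match.
    public static double Similarity(double[,] template, double[,] candidate)
    {
        int rows = template.GetLength(0), cols = template.GetLength(1);
        double sumT = 0, sumC = 0, sumTT = 0, sumCC = 0, sumTC = 0;
        int n = rows * cols;
        for (int r = 0; r < rows; r++)
            for (int c = 0; c < cols; c++)
            {
                double t = template[r, c], x = candidate[r, c];
                sumT += t; sumC += x;
                sumTT += t * t; sumCC += x * x; sumTC += t * x;
            }
        double cov = sumTC - sumT * sumC / n;
        double varT = sumTT - sumT * sumT / n;
        double varC = sumCC - sumC * sumC / n;
        return cov / Math.Sqrt(varT * varC);
    }
}

The degree of match would then be compared against a decision threshold, exactly as described for the template comparison above.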

D. For signature verification

Signature verification is one of the most widely known among the non-vision based personal identification. Signature verification requires the use of electronic tablets or digitizers for on-line capturing and optical scanners for off-line conversion.

Given the subject's name and a new acquired signature, the system decides whether the signature corresponds to the subject's name. The decision is based on a measurement of the similarity between the acquired signature and a prototype of the subject's signature. The



prototype has been previously extracted from a training set of signatures.

The signature verification system is based upon the measurement of similarity between signatures. Therefore, this computation is the key element of the system. There are two different errors that characterize the performance of the algorithm. The Type I error (or False Rejection Rate (FRR)), measures the number of true signatures classified as forgeries as a function of the classification threshold. The Type II error (or False Acceptance Rate (FAR)), evaluates the number of false signatures classified as real ones as a function of the classification threshold. The performance of the system is provided by these two errors as a function of the classification threshold. Clearly, we can trade-off one type of error for the other type of error. As an extreme example, if we accept every signature, we will have a 0% of FRR and a 100% of FAR, and if we reject every signature, we will have a 100% of FRR and a 0% of FAR. The curve of FAR as a function of FRR, using the classification threshold as a parameter, is called the error trade-off curve. It provides the behavior of the algorithm for any operation regime and it is the best descriptor of the performance of the algorithm. In practice, this curve is often characterized by the equal error rate, i.e., the error rate at which the percentage of false accepts equal the percentage of false rejects. This equal error rate provides an estimate of the statistical performance of the algorithm, i.e., it provides an estimate of its generalization error.
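A small sketch of the trade-off computation just described, assuming similarity scores are already available for genuine signatures and for forgeries: FRR is the fraction of genuine scores rejected at a given threshold, FAR the fraction of forgery scores accepted, and sweeping the threshold traces the error trade-off curve from which the equal error rate is read off. The score arrays and class name are hypothetical.

using System;

static class SignatureErrors
{
    // Returns (FAR, FRR) for one classification threshold applied to similarity scores.
    public static Tuple<double, double> ErrorRates(double[] genuine, double[] forgery, double threshold)
    {
        int falseRejects = 0, falseAccepts = 0;
        foreach (double s in genuine) if (s < threshold) falseRejects++;   // true signature rejected (Type I)
        foreach (double s in forgery) if (s >= threshold) falseAccepts++;  // forgery accepted (Type II)
        return Tuple.Create((double)falseAccepts / forgery.Length,
                            (double)falseRejects / genuine.Length);
    }
}

// Sweeping the threshold from the lowest to the highest observed score and plotting the
// resulting (FRR, FAR) pairs gives the error trade-off curve; the equal error rate is the
// point where the two rates (approximately) coincide.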

The final step of the identification method involves quickly matching the information obtained through iris recognition, fingerprint matching, and signature verification against the templates already available in the huge repository.

III. RESULT AND CONCLUSION – OUTLOOK

Pattern recognition is a fast moving and expanding science. It is a frontier topic having a tremendous scope not only in the field of remote sensing for crop forecasting, automation in industries but for determination of causes and reasons for diseases, fast searching of documents on internet, personal identification including preparation of biometric database of citizens, criminals and terrorist, psychopaths, speech and image recognition for the blind etc.The Government of India has launched the envisioned "AADHAAR” wherein every individual is to be given a 12 digit “UNIQUE IDENTIFICATION NUMBER”. The Unique Identification Authority of India (UIDAI), formed under the aegis of Planning Commission of the government of India shall issue a unique number to all Indians but not smart cards. The authority would provide the database of residents containing very simple data in biometrics. It is envisaged that the Unique National ID s will help address the rigged state elections and the wide spread embezzlement that affects subsidies and poverty alleviation programs such as NREGA. It is believed to put a check to the to the illegal immigration and the terrorist threats, The no. is to be stored in a centralized database and linked to the basic demographics and biometric information like photograph, ten fingerprints and iris - of each individual. We are suggesting that if a smart card is

issued to each individual then identification and authentication can become easy and convenient. To facilitate the security personnel the same card can be swiped for purchasing railway, bus and air tickets, used as basic input for purchase of mobile phones and SIM cards, for ATMS, for issue of ration cards, for admission to schools, colleges and hospitals, purchase of vehicles, identification of voters in election with the option to swipe the card on electronic voting machine to prove identity, for land and property purchase, for vaccinations, use in blood banks etc. The more a person visualizes the better the applicability. This will prove to be a tremendous boon if lined to the Census information also to estimate the total male/female/child/ literate/ illiterate/ worker/ non worker categories. The processing time for creating and updating the information could be saved by just using mobile swipe machines to read the information on the smart card and then updating the relevant field in the linked database.

The technique is not new. For e.g. in the Orlando Disney land the tickets issued are with the biometric information to prevent fraudulent use. Similarly in the killing of Osama bin Laden, the DNA matching was done to confirm the identity.

The information technology and the mobile technology are progressing so fast that the same can be used as a new application for the retrieval of an individual’s data at any particular time with the biometric details. The future is full of opportunities, how well we are able to explore and implement shall be determined by the political willingness and the willpower of the masses.

REFERENCES
[1] Jiali Cui, Yunhong Wang, JunZhou Huang, Tieniu Tan and Zhenan Sun, "An Iris Image Synthesis Method Based on PCA and Super-resolution", IEEE CS Proceedings of the 17th International Conference on Pattern Recognition (ICPR'04).
[2] Hyung Gu Lee, Seungin Noh, Kwanghyuk Bae, Kang-Ryoung Park and Jaihie Kim, "Invariant biometric code extraction", IEEE Intelligent Signal Processing and Communication Systems (ISPACS 2004), Proceedings IEEE ISPACS, 18-19 Nov. 2004, pp. 181-184.
[3] IEEE Transactions on Systems, Man, and Cybernetics - Part C: Applications and Reviews, Vol. 35, No. 3, Aug. 2005, pp. 435-441.
[4] Kazuyuki Miyazawa, Koichi Ito, Takafumi Aoki, Koji Kobayashi and Hiroshi Nakajima, "An Efficient Iris Recognition Algorithm Using Phase-Based Image Matching", IEEE International Conference on Image Processing (ICIP 2005), 11-14 Sept. 2005, Vol. 2, pp. II-49-52.
[5] Pan Lili and Xie Mei, "The Algorithm of Iris Image Preprocessing", Fourth IEEE Workshop on Automatic Identification Advanced Technologies (AutoID'05), 17-18 Oct. 2005, pp. 134-138.
[6] Wang Jian-ming and Ding Run-tao, "Iris Image Denoising Algorithm Based on Phase Preserving", Sixth IEEE International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT 2005), 05-08 Dec. 2005, pp. 832-835.
[7] Weiki Yuan, Zhonghua Lin and Lu Xu, "A Rapid Iris Location Method Based on the Structure of Human Eyes", Proceedings of the 2005 IEEE Engineering in Medicine and Biology 27th Annual Conference, Shanghai, China, 1-4 September 2005, pp. 3020-3023.
[8] Debnath Bhattacharyya, Samir Kumar Bandyopadhyay and Poulami Das, "Handwritten Signature Verification System using Morphological Image Analysis", CATA-2007 International Conference, International Society for Computers and their Applications, Honolulu, Hawaii, USA, March 28-30, 2007, pp. 112-117.
[9] International Journal of Database Theory and Application.
[10] International Journal of Computing and Business Research, ISSN (Online): 2229-6166, Volume 2, Issue 2, May 2011.
[11] Deepak Sharma and Ashok Kumar, "Iris Recognition - An Effective Human Identification".
[12] Anil K. Jain, Robert P. W. Duin and Jianchang Mao, "Statistical Pattern Recognition: A Review", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 22, No. 1, January 2000.
[13] R. R. Navalgund, V. Jayaraman, A. S. Kiran Kumar, Tara Sharma et al., "Remote Sensing Data Acquisition, Platforms and Sensor Requirements", Photonirvachak, Vol. 24, No. 4, December 1996.
[14] Tutu Sengupta, Sanjeev Verma and A. K. Sinha, "Challenges of Pattern Recognition - An Overview", International Conference for Women in Science, NEERI, Nagpur, 2008.
[15] Deepak Sharma and Ashok Kumar, "Iris Recognition - An Effective Human Identification", International Journal of Computing and Business Research, ISSN (Online): 2229-6166, Volume 2, Issue 2, May 2011.
[16] Debnath Bhattacharyya, Poulami Das, Samir Kumar Bandyopadhyay and Tai-hoon Kim, "IRIS Texture Analysis and Feature Extraction for Biometric Pattern Recognition", International Journal of Database Theory and Application.



Association Rule Mining with Hybrid Dimension

Priyanka Pawar1, Vipul Dalal2, Sachin Deshpande3

Abstract — The hybrid dimension association rule mining algorithm satisfies a definite condition on the basis of a multidimensional transaction database. A Boolean Matrix based approach is employed to generate frequent item sets in multidimensional transaction databases. The algorithm scans the database only once and then generates the association rules; the Apriori property is used to prune the item sets. It is not necessary to scan the database again, because Boolean logical operations are used to generate the association rules. Keywords — Hybrid dimensional association rule, relational calculus, multidimensional transaction database.

I. INTRODUCTION

Until now, different approaches have been used in data mining for mining association rules in transactional or relational databases. The Apriori algorithm is costly when handling a huge number of candidate sets, and it requires multiple scans of the database, which is tedious. In situations with a large number of frequent patterns, long patterns, or quite low minimum support thresholds, an Apriori-like algorithm suffers from these problems, and it is used only for single-dimensional mining. Although an FP-tree is rather compact, its construction needs two scans of a transaction database, which may represent a nontrivial overhead [3].

Finding frequent patterns plays an important role in data mining and knowledge discovery techniques. An association rule describes a correlation between data items in large databases or datasets. The first algorithm to find frequent patterns was presented by R. Agrawal et al. in 1993; the frequent pattern tree approach was later presented for mining association rules without candidate generation. The candidate generation-and-test methodology, called the Apriori technique, was the first technique to compute frequent patterns based on the Apriori principle and the anti-monotone property. The Apriori technique finds the frequent patterns of length k from the set of already generated candidate patterns of length k-1. This algorithm requires multiple database scans and a large amount of memory to handle the candidate patterns when the number of potential frequent patterns is reasonably large. In the past two decades, a large number of research studies have been published presenting new algorithms or extending existing algorithms to solve the frequent pattern mining problem more effectively and efficiently. However, all the above-mentioned studies are suited to single-dimensional transactional databases.

II. ASSOCIATION RULES

Definition 1: Let I = {i1, i2, i3, ..., in} be a set of items. D is a database of transactions. Each transaction T is a set of items with an identifier called TID, and each T ⊆ I.

Definition 2: An association rule is an implication of the form A ⇒ B, where A and B are item sets satisfying A ⊆ I, B ⊆ I and A ∩ B = φ.

Definition 3: The strength of an association rule can be measured in terms of its support and confidence. The support supp(X) of an item set X is defined as the proportion of transactions in the data set which contain the item set. The confidence of a rule is defined as conf(X ⇒ Y) = supp(X ∪ Y) / supp(X).

Definition 4: A Boolean matrix is a matrix whose elements are '0' or '1'.

Definition 5: The Boolean AND operation is defined as follows: 0·0 = 0, 0·1 = 0, 1·0 = 0, 1·1 = 1, where logical conjunction is denoted by '·' or AND.

There are different methods for generating frequent item sets and mining association rules. Some of them are as follows:

A. Apriori Algorithm

The classical Apriori algorithm employs an iterative method to find all the frequent itemsets. First, the set of frequent 1-itemsets L1 is found according to the user-specified minimum support threshold; then L1 is used to find the frequent 2-itemsets L2, and so on, until no new frequent itemsets can be found. After finding all the frequent itemsets using Apriori, we can generate the corresponding association rules [5]. Apriori employs an iterative approach known as a level-wise search, where k-itemsets are used to explore (k+1)-itemsets. Apriori principle: if an itemset is frequent, then all of its subsets must also be frequent. It works in two steps (a small worked example on a Boolean matrix follows):
1. Join step: Ck is generated by joining Lk-1 with itself.
2. Prune step: any (k-1)-itemset that is not frequent cannot be a subset of a frequent k-itemset.
The Apriori algorithm is a simple single-dimensional mining algorithm.
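A small worked example of Definitions 3-5 and the level-wise idea (illustrative only; the items and transactions are made up): transactions are encoded as rows of a Boolean matrix, the support of an itemset is obtained by AND-ing the corresponding item columns and counting the rows that remain 1, and confidence follows from the ratio of supports.

using System;
using System.Linq;

class BooleanMatrixDemo
{
    // Rows are transactions, columns are items A(0), B(1), C(2).
    static readonly int[][] M =
    {
        new[] { 1, 1, 0 },
        new[] { 1, 1, 1 },
        new[] { 0, 1, 1 },
        new[] { 1, 0, 1 }
    };

    // supp(itemset) = fraction of rows whose Boolean AND over the chosen columns is 1.
    static double Support(params int[] items) =>
        (double)M.Count(row => items.All(i => row[i] == 1)) / M.Length;

    static void Main()
    {
        double suppA = Support(0);         // 3 of 4 transactions contain A  -> 0.75
        double suppAB = Support(0, 1);     // 2 of 4 contain both A and B    -> 0.50
        double confAtoB = suppAB / suppA;  // conf(A => B) = supp(A ∪ B) / supp(A) ≈ 0.67
        Console.WriteLine($"supp(A)={suppA}, supp(AB)={suppAB}, conf(A=>B)={confAtoB:F2}");
    }
}

Because each column AND only touches the in-memory matrix, supports for longer itemsets can be computed without rescanning the original database, which is the property the paper's Boolean-matrix approach relies on.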

B. Sampling Algorithm

The main idea of the sampling algorithm is to select a small sample, one that fits in main memory, from the database of transactions and to determine the frequent item sets from that sample. If those frequent itemsets form a superset of the frequent itemsets for the entire database, then we can determine the real frequent itemsets by scanning the remainder of the database in order to compute the exact support values for the superset itemsets. A superset of the frequent itemsets can usually be found by, for example, applying the Apriori algorithm with a lowered minimum support.

C. Partition Algorithm

In this algorithm, if we are given a database with a small number of potential large itemsets, say a few thousand, then the support for them can be tested in one scan by using a partitioning technique. Partitioning divides the database into non-overlapping subsets; these are individually considered as separate databases, and all large itemsets for a partition, called local frequent itemsets, are generated in one pass. The Apriori algorithm can then be used efficiently on each partition if it fits entirely in main memory. Partitions are chosen in such a way that each partition can be accommodated in main memory.

D. FP-growth Algorithm

The FP-growth algorithm is an efficient method of mining all frequent itemsets without candidate generation. The algorithm mines the frequent itemsets using a divide-and-conquer strategy as follows: FP-growth first compresses the database representing the frequent itemsets into a frequent-pattern tree, or FP-tree, which retains the itemset association information. The next step is to divide this compressed database into a set of conditional databases (a special kind of projected database), each associated with one frequent item, and finally to mine each such database separately. The construction of the FP-tree and the mining of the FP-tree are the main steps of the FP-growth algorithm.
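The compression step can be illustrated with a small Python sketch of FP-tree construction (only the tree-building stage; the recursive mining of conditional pattern bases is omitted). This is an illustrative simplification, not the authors' implementation, and the transactions and min_support_count are assumed values.

    from collections import Counter, defaultdict

    class FPNode:
        def __init__(self, item, parent):
            self.item, self.parent = item, parent
            self.count = 0
            self.children = {}

    def build_fp_tree(transactions, min_support_count):
        """Compress transactions into an FP-tree plus a header table of node links."""
        counts = Counter(i for t in transactions for i in t)
        frequent = {i for i, c in counts.items() if c >= min_support_count}
        root, header = FPNode(None, None), defaultdict(list)
        for t in transactions:
            # keep only frequent items, ordered by descending global frequency
            ordered = sorted((i for i in t if i in frequent),
                             key=lambda i: (-counts[i], i))
            node = root
            for item in ordered:
                if item not in node.children:
                    node.children[item] = FPNode(item, node)
                    header[item].append(node.children[item])
                node = node.children[item]
                node.count += 1
        return root, header

    trans = [['I1', 'I2', 'I5'], ['I2', 'I4'], ['I2', 'I3'], ['I1', 'I2', 'I4']]
    tree, header = build_fp_tree(trans, min_support_count=2)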

In reality, along with the items purchased, sales transactional databases store other related information such as the quantity purchased, the price and the branch location. Additional information regarding the customers who purchased the items, such as customer age, occupation, credit rating, income and address, is also stored in the database. Frequent itemsets together with this other relevant information are helpful in high-level decision-making. This leads to the challenging mining task of multilevel and multidimensional association rule mining. In recent years, there has been a lot of interest in mining databases with multidimensional data values.

III. CONDITIONAL HYBRID-DIMENSIONAL ASSOCIATION RULE MINING

This paper presents the mining of conditional hybrid-dimensional association rules. The items of the main attribute are marked during frequent 1-itemset generation; based on these markings, the algorithm performs either an intra-dimensional join or an inter-dimensional join.

To solve the problem of finding frequent itemsets, we propose this algorithm. It mines hybrid-dimension association rules not only from single-dimensional but also from multidimensional databases. It meets a definite condition to generate conditional hybrid-dimensional association rules from a multidimensional transactional database. It scans the database only once, which makes it easy to find large frequent patterns. It does not generate candidate itemsets as in the Apriori algorithm; rather, it uses Boolean vector "relational calculus" to generate frequent itemsets. A multidimensional dataset with five attributes is taken as input, and the hybrid-dimensional association rule algorithm is applied to generate association rules using a Boolean matrix. SQL Server is used as the back end and JDK 1.5 as the front end.

The methodology used in this work is:
A. Transform the multidimensional transaction database into two Boolean matrices, one for the subordinate attributes (Am*p) and one for the main attribute (Bm*q).
B. Generate the sets of frequent 1-itemsets LA1 (from the subordinate-attributes matrix) and LB1 (from the main-attribute matrix).
C. Prune the Boolean matrices.
D. Perform AND operations to generate 2-itemsets: join LA1 with LB1 and LA1 with LA1 for the inter-dimension join, and LB1 with LB1 for the intra-dimension join.
E. Repeat the process to generate the (k+1)-itemsets from Lk.
The overall flow is thus: transform the multidimensional transaction database into Boolean matrices, generate the frequent 1-itemsets L1, prune the Boolean matrix, and generate the sets of frequent k-itemsets Lk.

The generation of frequent itemsets is the core of all association rule mining algorithms. Previous studies on mining multi-dimensional association rules focused on finding non-repetitive predicate multi-dimensional rules. We integrate single-dimensional mining and non-repetitive predicate multi-dimensional mining, and present a method for mining hybrid-dimensional association rules using a Boolean matrix.

A. The join process

There are two steps in the generation of the frequent itemsets and frequent predicate sets: joining and pruning.
1. The join generating candidate 2-itemsets C2: We find the frequent 1-itemsets based on each attribute and, at the same time, mark the items belonging to each main attribute, so that the marked items are the items of the main attribute and the unmarked items are the subordinate items. When we search for C2, if both of the two joining items are marked items, we call the function for the intra-dimensional join between the items as well as the inter-dimensional join; on all other occasions we proceed only with the inter-dimensional join.

2. The join on other occasions: If we generated frequent itemsets directly according to the join mode of Apriori, both intra-dimensional and inter-dimensional joins would occur, but there are some restrictions on the generation of the intra-dimensional and inter-dimensional joins. Therefore we make the following modifications to the joining step of Apriori. We assume that the items within a transaction and an itemset are sorted in lexicographic order, and take two steps to find Lk:

• Distinguish the intra-dimensional join from the inter-dimensional join: if all the items within the two (k-1)-itemsets belong to the main attribute, we proceed with the intra-dimensional join, and with the inter-dimensional join on all other occasions.

• Implement the join of Lk-1 with Lk-1, choosing the corresponding joining condition according to the characteristic of the join (intra-dimensional or inter-dimensional).

B. The conditional restriction in hybrid-dimension association rules

First the frequent itemsets are obtained, and then the hybrid-dimension association rules are generated from them. Because, in the process of generating the frequent itemsets, we make both intra-dimensional and inter-dimensional joins subject to the conditional restrictions applied while joining, all of the frequent itemsets have the following character: the values within the main attribute field occur many times, while the values within the subordinate attribute fields occur only once. Thus, the rules generated by the algorithm may include many predicates, or include the same predicate more than once. In this way the hybrid-dimension association rules are formed [1].

IV. ALGORITHM

The algorithm consists of the following steps:
A. Transform the multidimensional transaction database into two Boolean matrices, one for the subordinate attributes (Am*p) and one for the main attribute (Bm*q).
B. Generate the sets of frequent 1-itemsets LA1 (from the subordinate-attributes matrix) and LB1 (from the main-attribute matrix).
C. Prune the Boolean matrices.
D. Perform AND operations to generate 2-itemsets: join LA1 with LB1 and LA1 with LA1 for the inter-dimension join, and LB1 with LB1 for the intra-dimension join.
E. Repeat the process to generate the (k+1)-itemsets from Lk.

• Transform the multidimensional transaction database into a Boolean matrix
• Generate the frequent 1-itemsets L1
• Prune the Boolean matrix
• Generate the sets of frequent k-itemsets Lk

We integrate single-dimensional mining and non-repetitive predicate multi-dimensional mining, and present a method for mining hybrid-dimensional association rules using a Boolean matrix. Consider a multi-dimensional transaction database Order, which includes two subordinate attributes, Age and Income, and one main attribute, Ordered_items, as given in Table I. In order to simplify the implementation, some attributes are pre-processed before the algorithm executes, as shown below in Table II and Table III.

The multidimensional transaction table Order is transformed into two Boolean matrices: Am*p as the subordinate-attributes matrix and Bm*q as the main-attribute matrix, which are as given below. Let the minimum support be 0.4, with m = 10 the number of transactions; therefore min_sup_num = 4. We compute the sum of the element values of each column in the Boolean matrices A10*5 and B10*5 and prune the columns whose sums are smaller than the minimum support number [7]. The resulting sets of frequent 1-itemsets are LA1 = {{y}, {m}, {h}, {l}} and LB1 = {{I1}, {I2}, {I3}, {I5}}.

TABLE I: ORDER
TABLE II: MAPPING AGE
TABLE III: MAPPING INCOME
TABLE IV: ORDER SETS

Now we perform the AND operation to join LA1 and LB1 (according to the type of join) to generate L2. The inter-dimensional join (LA1 with LB1, and LA1 with LA1) is performed by the AND operation among the columns of matrix Am*p with Bm*q and of Am*p with Am*p. The intra-dimensional join (LB1 with LB1) is performed by the AND operation among the columns of matrix Bm*q with Bm*q. The possible 2-itemsets from LA1 and LB1 are: (y,l), (m,h), (h,1), (h,2), (h,3), (h,5), (l,1), (l,2), (l,3), (l,5), (y,1), (y,2), (y,3), (y,5), (m,1), (m,2), (m,3), (m,5), (I1,I2), (I1,I3), (I1,I5), (I2,I3), (I2,I5), (I3,I5). After performing the AND operation to get the support numbers of these itemsets, the Boolean matrices A10*18 and B10*6 are generated. We again compute the sums of the columns of the matrices A10*18 and B10*6 and prune the columns of the 2-itemsets that are not frequent. The same process is repeated for the next higher itemsets.

TABLE V: FREQUENT ITEM-SETS
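The column-wise AND join used above can be sketched in Python with NumPy. The matrices, the column meanings and the min_sup_num value below are illustrative assumptions standing in for the Order example, not the paper's SQL Server/JDK 1.5 implementation.

    import numpy as np

    # toy Boolean matrices: rows = transactions, columns = frequent 1-itemsets
    # A: subordinate-attribute values, B: main-attribute items
    A = np.array([[1, 0, 1, 0],
                  [0, 1, 1, 0],
                  [1, 0, 0, 1],
                  [0, 1, 1, 0],
                  [1, 0, 1, 0]], dtype=bool)
    B = np.array([[1, 1, 0],
                  [1, 0, 1],
                  [0, 1, 1],
                  [1, 1, 0],
                  [1, 0, 1]], dtype=bool)
    min_sup_num = 2

    def and_join(M, N):
        """AND every column of M with every column of N and return the support counts."""
        support = {}
        for i in range(M.shape[1]):
            for j in range(N.shape[1]):
                support[(i, j)] = int(np.sum(M[:, i] & N[:, j]))
        return support

    # inter-dimensional join of A with B; prune pairs below the minimum support count
    inter = and_join(A, B)
    frequent_2 = {pair: s for pair, s in inter.items() if s >= min_sup_num}
    print(frequent_2)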



V. CONCLUSIONS

The proposed algorithm takes input datasets and meets a definite condition to generate conditional hybrid-dimensional association rules from a multidimensional transactional database. Its main features are: it scans the database only once, it does not generate candidate itemsets, and it uses "relational calculus" to generate frequent itemsets. It stores data in the form of bits, so it needs less memory space and can be applied to large relational databases.

REFERENCES
[1] R. Agrawal and R. Srikant, "Fast Algorithms for Mining Association Rules," Proc. Int'l Conf. Very Large Data Bases, pp. 487-499, Sept. 1994.
[2] S. Bashir and A. Rauf Baig, "HybridMiner: Mining Maximal Frequent Itemsets Using Hybrid Database Representation Approach," Proc. 10th IEEE INMIC Conference, Karachi, Pakistan, 2005.
[3] D. Burdick, M. Calimlim, and J. Gehrke, "MAFIA: A Maximal Frequent Itemset Algorithm for Transactional Databases," Proc. ICDE Conf., pp. 443-452, 2001.
[4] Proc. IEEE ICDM Workshop on Frequent Itemset Mining Implementations, B. Goethals and M. J. Zaki, eds., CEUR Workshop Proc., vol. 80, Nov. 2003.
[5] K. Gouda and M. J. Zaki, "Efficiently Mining Maximal Frequent Itemsets," Proc. ICDM, pp. 163-170, 2001.
[6] Hunbing Liu and Baishen Wang, "An Association Rule Mining Algorithm Based on a Boolean Matrix," Data Science Journal, vol. 6, Supplement 9, pp. S559-S563, September 2007.
[7] Jurgen M. Jams, "An Enhanced Apriori Algorithm for Mining Multidimensional Association Rules," Proc. 25th Int. Conf. Information Technology Interfaces (ITI), Cavtat, Croatia, 2003.
[8] R. Agrawal, H. Mannila, R. Srikant, H. Toivonen, and A. I. Verkamo, "Fast Discovery of Association Rules," in U. M. Fayyad, G. Piatetsky-Shapiro, P. Smyth, and R. Uthurusamy, eds., Advances in Knowledge Discovery and Data Mining, pp. 307-328, AAAI/MIT Press, 1996.
[9] H. Mannila, H. Toivonen, and A. Verkamo, "Efficient Algorithms for Discovering Association Rules," AAAI Workshop on Knowledge Discovery in Databases.
[10] Jiawei Han and Micheline Kamber, Data Mining: Concepts and Techniques, Higher Education Press, 2001.



Image Compression Using Wavelet Packet and SPIHT Encoding Scheme

Savita K.Borate1, D.M.Yadav2

Abstract — The need for image compression has grown continuously over the last decade. An image compression algorithm based on the wavelet packet transform is introduced. This paper presents an implementation of wavelet packet image compression combined with the SPIHT (Set Partitioning in Hierarchical Trees) compression scheme. Image compression comprises transformation of the image, quantization and encoding. The paper describes a new approach to constructing the best tree on the basis of Shannon entropy: the algorithm compares the entropy of the decomposed nodes (child nodes) with the entropy of the node that was decomposed (parent node) and decides whether to decompose a node. SPIHT is then used for efficient encoding of the wavelet packet transform coefficients. Results are compared in terms of PSNR, bit rate and quality with other state-of-the-art coders.

Keywords — Wavelet Packet, SPIHT, Entropy, HVS, PSNR, Quantization

I. INTRODUCTION

The fundamental goal of image compression is to obtain the best possible quality for a given storage or communication capacity. Wavelet-based image coding algorithms have become very popular in recent years because the wavelet transform provides an efficient multiresolution representation of the signal. In the wavelet transform, successive decomposition is performed only in the low-frequency region; if an image contains significant information in the mid- and high-frequency regions, the wavelet transform is not an optimal tool to represent the image. In such cases, the wavelet packet transform can be employed to represent the image.

The wavelet packet transform is a generalization of the standard pyramidal decomposition. Wavelet packet (WP) image decomposition adaptively selects a transform basis suited to the particular image. To achieve that, a criterion for best-basis selection is needed; in this implementation an entropy-based criterion is selected.

In wavelet decomposition, only the approximation component of the image is further decomposed, whereas in the wavelet packet the approximation as well as the detail components are decomposed. SPIHT is then used for efficient encoding of the wavelet packet transform coefficients.

SPIHT belongs to the next generation of wavelet encoders, employing more sophisticated coding. SPIHT exploits the properties of wavelet-transformed images to increase efficiency in terms of bit rate and quality (objective and subjective). The SPIHT process represents a very effective form of entropy coding.

This paper is organized as follows. Section 2 gives a brief review of image compression, Section 3 describes the discrete wavelet transform, Section 4 describes the wavelet packet tree, Section 5 presents the SPIHT coding scheme, Section 6 presents the implemented algorithm, Section 7 the coding results, and Section 8 the conclusion.

II. IMAGE COMPRESSION: BRIEF OVERVIEW

A common characteristic of most images is that neighbouring pixels are correlated; therefore the most important task is to find a less correlated representation of the image. The fundamental components of compression are the reduction of redundancy and irrelevancy. Redundancy reduction aims at removing duplication from the image, while irrelevancy reduction omits parts of the signal that will not be noticed by the signal receiver, namely the human visual system (HVS). Three types of redundancy can be identified: spatial, spectral and temporal. In still images, compression is achieved by removing spatial and spectral redundancy.

III. DISCRETE WAVELET TRANSFORM

Wavelets are functions that satisfy certain mathematical requirements and are used in representing data or other functions. The basic idea of the wavelet transform is to represent an arbitrary signal X as a superposition of a set of such wavelets or basis functions. These basis functions are obtained from a single prototype wavelet, called the mother wavelet, by dilation (scaling) and translation (shifts). The discrete wavelet transform for a two-dimensional signal can be defined as follows.
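A standard form of this two-dimensional transform, written with the symbols defined in the next sentence (the normalization convention is an assumption and may differ from the authors'), is

\[
w(a_1, a_2, b_1, b_2) = \frac{1}{\sqrt{a_1 a_2}} \int\!\!\int X(x, y)\, \Psi\!\left(\frac{x - b_1}{a_1}, \frac{y - b_2}{a_2}\right) dx\, dy .
\]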

The indices w(a1, a2, b1, b2) are called the wavelet coefficients of the signal X, where a1, a2 are dilations and b1, b2 are translations; Ψ is the transforming function, known as the mother wavelet. Low frequencies are examined with low temporal resolution, while high frequencies are examined with higher temporal resolution. A wavelet transform combines both low-pass and high-pass filtering in the spectral decomposition of signals.

In the case of the discrete wavelet transform, the image is decomposed into a discrete set of wavelet coefficients using an orthogonal set of basis functions. These sets are divided into four parts: approximation, horizontal details, vertical details and diagonal details. When the approximation is decomposed further, we again obtain four components, as shown in Figure 1.

Figure 1. Two level wavelet decomposition

IV. WAVELET PACKET AND WAVELET PACKET TREE

The idea of the wavelet packet is the same as that of the wavelet; the only difference is that the wavelet packet offers a more complex and flexible analysis, because in wavelet packet analysis the details as well as the approximation are split. Wavelet packet decomposition gives a large number of bases from which one can look for the best representation with respect to the design objectives. The wavelet packet tree for a 3-level decomposition is shown in Figure 2.

Figure 2. Wavelet Packet Tree Decomposition

In this paper the Shannon entropy criterion is used to construct the best tree. The Shannon entropy criterion measures the information content of a signal S.
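A commonly used form of this criterion, assuming the non-normalized Shannon entropy of the coefficients s_i of S as in standard best-basis selection, is

\[
E(S) = -\sum_i s_i^2 \log\left(s_i^2\right).
\]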

The information content of the decomposed components (approximation and details) may be greater than the information content of the component that was decomposed. In this paper the sum of the information of the decomposed components (child nodes) is compared with the information of the component that was decomposed (parent node); only if the sum of the information of the child nodes is less than that of the parent node is the parent node decomposed and the child nodes kept in the tree, otherwise the parent node is not decomposed. In this way we obtain the wavelet packet coefficients.

V. SPIHT CODING SCHEME

The SPIHT algorithm was introduced by Said and Pearlman and is an improved and extended version of the Embedded Zerotree Wavelet (EZW) coding algorithm introduced by Shapiro. Both algorithms work with a tree structure, called the Spatial Orientation Tree (SOT), that defines the spatial relationships among wavelet coefficients in different decomposition subbands. In this way, an efficient prediction of the significance of coefficients based on the significance of their "parent" coefficients is enabled. The SPIHT coder provides a gain in PSNR over EZW due to the introduction of a special symbol that indicates the significance of the child nodes of a significant parent, and the separation of child nodes (direct descendants) from second-generation descendants.

When the decomposed image is obtained, we try to find a way to code the wavelet packet coefficients into an efficient result, taking redundancy and storage space into consideration. SPIHT is one of the most advanced schemes available, even outperforming the state-of-the-art JPEG 2000 in some situations.

The basic principle is the same: progressive coding is applied, processing the image with respect to a decreasing threshold. The difference is in the concept of zerotrees (spatial orientation trees in SPIHT). This is an idea that takes the relationships between coefficients across subbands at different levels into consideration. The first idea is always the same: if a coefficient in the highest level of the transform in a particular subband is considered insignificant against a particular threshold, it is very probable that its descendants in lower levels will be insignificant too, so we can code quite a large group of coefficients with one symbol. SPIHT makes use of three lists: the List of Significant Pixels (LSP), the List of Insignificant Pixels (LIP) and the List of Insignificant Sets (LIS). These are coefficient-location lists that contain the coefficients' coordinates. After the initialization, the algorithm takes two stages for each level of the threshold: the sorting pass (in which the lists are organized) and the refinement pass (which does the actual progressive coding transmission). The result is in the form of a bitstream. A detailed scheme of the algorithm is presented in Figure 3.
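The threshold test at the heart of the sorting pass can be written, following Said and Pearlman's formulation, as the significance function of a coordinate set T at bit-plane n:

\[
S_n(\mathcal{T}) =
\begin{cases}
1, & \max_{(i,j)\in\mathcal{T}} |c_{i,j}| \ge 2^{n},\\
0, & \text{otherwise,}
\end{cases}
\]

where c_{i,j} are the transform coefficients.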

Figure 3. Block diagram of the wavelet packet image coder system: input grayscale image (.tif, .bmp) → wavelet packet decomposition (WPD) → wavelet packet coefficients → SPIHT algorithm (encoder and decoder) → wavelet packet reconstruction (IWPD) → compressed output image (.tif, .bmp), with performance analysis.

VI. THE IMPLEMENTED ALGORITHM

The implemented algorithm is described as follows:

A. Let level counter = 1

B. Let the current node = input image.

C. Decompose current node using wavelet packet tree.

D. Find the entropy of current node.

E. Find the entropy of the decomposed components, viz. CA1, CH1, CV1 and CD1.


F. Compare the entropy of the parent node with the sum of the entropies of its child nodes. If the sum of the entropies of the child nodes is less than that of the parent node, the decomposition is kept and steps C-E are repeated for each child node, taking it as the current node; otherwise the parent node is retained as a leaf node of the tree.

G. Encode the wavelet packet coefficients using the SPIHT coding technique.

H. Stop.
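A minimal Python sketch of the entropy-driven best-tree construction in steps A-F is given below, assuming the PyWavelets (pywt) and NumPy libraries and the biorthogonal 9/7 wavelet ('bior4.4'); the SPIHT encoding of step G is not shown, and the random image is a stand-in for a real grayscale input.

    import numpy as np
    import pywt

    def shannon_entropy(coeffs):
        """Non-normalized Shannon entropy of a coefficient array."""
        p = coeffs.astype(float) ** 2
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    def best_tree(node_data, wavelet='bior4.4', level=1, max_level=3):
        """Decompose a node only if the children's total entropy is lower than the parent's."""
        if level > max_level:
            return {'leaf': node_data}
        cA, (cH, cV, cD) = pywt.dwt2(node_data, wavelet)
        children = [cA, cH, cV, cD]
        child_entropy = sum(shannon_entropy(c) for c in children)
        if child_entropy < shannon_entropy(node_data):
            return {'children': [best_tree(c, wavelet, level + 1, max_level)
                                 for c in children]}
        return {'leaf': node_data}

    image = np.random.rand(512, 512)     # stand-in for an 8-bit grayscale image
    tree = best_tree(image, max_level=3)
    # the leaf coefficients of `tree` would then be passed to a SPIHT encoder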

VII. EXPERIMENTAL RESULTS

The WP-SPIHT algorithm has been tested on the 8-bit 512x512 images "Lena" and "Barbara", and on the 256x256 "Cameraman" image.
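For these 8-bit test images, the PSNR figures reported below follow the usual definition in terms of the mean squared error between the original image I and the reconstructed image Î:

\[
\mathrm{PSNR} = 10 \log_{10}\!\left(\frac{255^2}{\mathrm{MSE}}\right) \ \mathrm{dB}, \qquad \mathrm{MSE} = \frac{1}{MN}\sum_{x=1}^{M}\sum_{y=1}^{N}\big(I(x,y) - \hat{I}(x,y)\big)^2 .
\]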

A. Analysis for different images:

Figure 4: Best basis geometry. (a) Lena (b) Cameraman (c) Barbara

The biorthogonal 9/7 transform is used for the wavelet packet decomposition. Figure 4 shows examples of the best bases obtained in the experiments. The coding results are presented both visually and in terms of PSNR; WP-SPIHT provides better visual quality for the textured parts of these images.

Figure 5: PSNR at different bit rates. (a) Original image (b) bpp = 0.1, PSNR = 30.04 dB (c) bpp = 0.5, PSNR = 38 dB (d) bpp = 1.0, PSNR = 42.73 dB

Type of image        Name                     PSNR (dB)
Standard images      Lena (512 x 512)         42.73
                     Barbara (512 x 512)      37.15
                     Cameraman (256 x 256)    36.04
Biomedical image     Medical (5)              49.43
Natural images       i1                       44.29
Synthetic images     Syn62                    36.24
                     Syn61                    30.72

B. Influence of decomposition level:

Decomposition level    PSNR (dB), Lena    PSNR (dB), natural image
1                      23.99              24.11
2                      37.94              36.44
3                      41.85              36.98
4                      42.55              37.09
5                      42.69              37.12
6                      42.72              37.12
7                      42.73              37.12
8                      42.73              37.14

C. Comparison with various algorithms:

Rate (bits/pixel)    Algorithm                 PSNR (dB)
0.25                 WP-SPIHT (implemented)    34.21
                     EZW                       33.32
                     SPIHT                     34.12
                     WP                        34.12
                     JPEG2000                  34.01
0.5                  WP-SPIHT (implemented)    38.01
                     EZW                       36.46
                     SPIHT                     37.23
                     WP                        37.54
                     JPEG2000                  37.14
1                    WP-SPIHT (implemented)    42.73
                     EZW                       39.79
                     SPIHT                     40.34
                     WP                        40.80
                     JPEG2000                  40.05

D. Analysis using different types of wavelets:

Rate (bits/pixel)    Haar     CDF 9/7    sym4     db2
0.1                  27.16    30.04      27.06    21.84
0.3                  31.49    35.19      31.07    26.39
0.5                  34.27    38.01      33.04    29.16
0.7                  36.35    40.11      34.31    31.18
1                    38.95    42.73      35.35    34.05



VIII. CONCLUSION

An image compression algorithm based on the wavelet packet transform has been implemented. In this algorithm, the wavelet packet best tree is constructed using Shannon entropy: the algorithm compares the entropy of the decomposed nodes (child nodes) with the entropy of the node that was decomposed (parent node) and decides whether to decompose a node. The efficient SPIHT method is then used to encode the wavelet packet transform coefficients. The algorithm operates through SPIHT (Set Partitioning in Hierarchical Trees) and accomplishes completely embedded coding. SPIHT uses the principles of partial ordering by magnitude with respect to a sequence of octavely decreasing thresholds, ordered bit-plane transmission, and self-similarity across scales in the image wavelet transform.

The implemented algorithm (WP-SPIHT) gives good results for the Lena and Cameraman images, natural images and the medical image. Images with many edges, such as "fingerprint"-type synthetic images, give lower PSNR, but the representation of finely textured regions is noticeably better for the WP-SPIHT algorithm (VII-A). The algorithm has superior performance in displaying edge details and preserving shape at low bit rates. The decomposition depth influences the final image quality: three-level decomposition is usually enough for other DWT-based image compression methods, but SPIHT is most efficient when using level 5 and higher (VII-B). CDF 9/7 gave the best results among all tested wavelet families (VII-D).

The results are compared in terms of PSNR, bit rate and quality with other compression algorithms, namely EZW, SPIHT, Wavelet Packet and JPEG2000 (VII-C). Experimental results show that the implemented algorithm is suitable for images with textures, which is one of the most demanding tasks. The algorithm was tested on different images, and the results obtained by WP-SPIHT are consistently better in terms of PSNR, by about 2.7 dB over JPEG2000 and 2.4 dB over SPIHT.

REFERENCES
[1] G. K. Kharate, A. A. Ghatol, and P. P. Rege, "Image Compression Using Wavelet Packet Tree," ICGST-GVIP Journal, vol. 5, no. 7, July 2005.
[2] Deepti Gupta and Shital Mutha, "Image Compression Using Wavelet Packets," IEEE Trans. on Image Processing, vol. 2, no. 2, 2003.
[3] Rusen Öktem, Levent Öktem, and Karen Egiazarian, "A Wavelet Packet Transform Based Image Coding Algorithm," IEEE Trans. on Image Processing.
[4] Amir Said and William A. Pearlman, "A New Fast and Efficient Image Codec Based on Set Partitioning in Hierarchical Trees," IEEE Trans. Circuits and Systems for Video Technology, vol. 6, no. 3, June 1996.
[5] W. K. Pratt, Digital Image Processing, New York: Wiley, 1978.
[6] Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing, Prentice Hall.
[7] K. P. Soman and K. I. Ramachandran, Insight into Wavelets: From Theory to Practice.
[8] Rusen Öktem and Karen Egiazarian, "A Wavelet Packet Transform Based Image Coding Algorithm," Tampere University of Technology.
[9] R. Sudhakar and R. Karthiga, "Image Compression Using Coding of Wavelet Coefficients - A Survey," ICGST-GVIP Journal, vol. 5, no. 6, June 2005, PSG College of Technology, Tamilnadu, India.
[10] S. G. Mallat, "A Theory for Multiresolution Signal Decomposition: The Wavelet Representation," IEEE Trans. on PAMI, vol. 11, pp. 674-693, 1989.
[11] Nasir Rajpoot and Francois Meyer, "Progressive Wavelet Packet Image Coding Using Compatible Zerotree Quantization."
[12] J. Malý and P. Rajmic, "DWT-SPIHT Image Codec Implementation," Department of Telecommunications, Brno University of Technology, Brno, Czech Republic.



Simulation Model of Ultrasonographic Imaging System using Ptolemy Modelling Tool, its Hardware Implementation and Quantitative Analysis of the Results

Moresh Mukhedkar1, Bhakti Mukhedkar2

Abstract — This work is a first initiative towards developing an ultrasonographic imaging system itself, motivated by its high cost and heavy use. In this paper we perform a quantitative analysis of the simulation results generated using the Ptolemy modelling tool and of a hardware implementation on an FPGA board, mainly targeting medical applications. The work covers several modules, such as modelling of the system using the Ptolemy modelling tool and implementation on an FPGA kit. In the present work, a median filter algorithm for removing salt-and-pepper noise and an edge detection algorithm have been implemented using a reconfigurable architecture; the hardware model is implemented using the Xilinx ISE 7.01 tool on the XC2VP7 Virtex-II Pro embedded development board. The algorithms are tested on standard image processing benchmarks, and finally a quantitative analysis is carried out between the results obtained with the Ptolemy modelling tool and the FPGA hardware implementation.
Keywords — Ultrasonography Imaging System, Modelling, Ptolemy, Xilinx XC2VP7, Quantitative Analysis

I. INTRODUCTION

Ultrasonic imaging is the real-time display of images of anatomy [1] or flow information developed from the capture of reflected and attenuated high-frequency sound waves; in general, ultrasonic imaging is any technique of forming a visible display of intensity and, in some systems, phase distribution [4] in an acoustic field. Ultrasonics is the general subject of sound in the frequency range above the average audible range of human beings, and ultrasound is sound which is pitched too high for a human to hear. Medical ultrasound machines [5] are among the most sophisticated signal processing machines in widespread use today. As in any complex machine, there are many trade-offs in implementation due to performance requirements, physics and cost. It is interesting to consider that ultrasound is basically a radar or sonar system, but it operates at speeds that differ from these by orders of magnitude. The ultrasound system faces two main challenges: first, it is supposed to give an accurate representation of the internal organs of the human body, and second, through Doppler signal processing, it is to determine movement within the body (for example, blood flow). Because of the disadvantages associated with a full-custom ASIC design, such as its complexity and very high cost, we use FPGAs, which are programmable devices. They are also called reconfigurable devices: processors which can be programmed with a design, and whose design can be changed by reprogramming the device. Hardware design techniques such as parallelism and pipelining can be exploited on an FPGA [18], so FPGAs are an ideal choice for the implementation of real-time image processing algorithms. The algorithm is implemented on a Xilinx Virtex-II Pro embedded development board [21].

Here we design a synchronous dataflow (SDF) model for the ultrasonographic imaging system. This form of model takes advantage of the fact that real-time image processing applications are an integral part of embedded systems technology; modelling such applications using dataflow models can lead to useful formal properties, such as bounded memory requirements, and to efficient synthesis solutions. Studies on extensions of SDF provide for more flexible actor execution, including the handling of such synchronous execution capabilities, and so offer more flexibility. Ptolemy studies heterogeneous modelling, simulation and design of concurrent systems; it has a graphical user interface, and its focus is on hardware and software co-design. This paper aims at providing a system model and a hardware architecture for the best de-noising filter and edge detection algorithm, and a quantitative analysis shows the effectiveness of the system.

In this paper we first discuss the image processing algorithms for image enhancement and edge detection, then the Ptolemy modelling tool, then details of the FPGA [19] design flow, its structure and different computing boards, then the hardware implementation of the median filter and Sobel edge detection algorithms, and finally the results: the Ptolemy results, the hardware simulation and synthesis, and the quantitative analysis of the results.

II. OVERVIEW OF IMAGE PROCESSING ALGORITHMS

A two-dimensional representation of a scene is called an image [1]. Image processing is the science of manipulating a picture or image. It covers a broad scope of techniques that are present in numerous applications. These techniques can enhance or distort an image, highlight certain features of an image, create a new image from portions of other images, restore an image that has been degraded during or after acquisition, and so on. Image enhancement is the improvement of digital image quality without knowledge of the source of degradation [1]; it is an iconical process. In this paper we use two main image enhancement operations, edge detection and de-noising. Edges [5] characterise boundaries and are therefore a problem of fundamental importance in image processing, while de-noising is normally used to reduce noise in an image. Here we use the Sobel edge detector and the median filter because of their good results for ultrasonographic images.

A. The Sobel Edge Detection

The Sobel edge detector [6] uses a simple convolution kernel to create a series of gradient magnitudes. Applying a convolution kernel K to a pixel group p can be represented as

\[
N(x, y) = \sum_{k=-1}^{1} \sum_{j=-1}^{1} K(j, k)\, p(x - j,\, y - k).
\]

The Sobel edge detector uses two convolution [1] kernels, one to detect changes in vertical contrast (hx) and another to detect horizontal contrast (hy):

\[
h_x = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}, \qquad
h_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix}.
\]

The two gradients gx and gy computed using hx and hy can be regarded as the x and y components of a gradient vector g, so we obtain a gradient magnitude and direction:

\[
g = \begin{bmatrix} g_x \\ g_y \end{bmatrix}, \qquad
|g| = \sqrt{g_x^2 + g_y^2}, \qquad
\theta = \tan^{-1}\!\left(\frac{g_y}{g_x}\right),
\]

where g is the gradient vector, |g| is the gradient magnitude and θ is the gradient direction. When gx and gy are large (big changes in the vertical and horizontal orientation respectively), the gradient magnitude is also large. It is the gradient magnitudes that are finally plotted.
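A minimal Python/NumPy sketch of this operator is given below; it assumes SciPy's convolve2d for the convolution and a random array standing in for an ultrasound frame, and is an illustration only, not the paper's Ptolemy or FPGA implementation.

    import numpy as np
    from scipy.signal import convolve2d

    hx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)   # vertical-contrast kernel
    hy = hx.T                                   # horizontal-contrast kernel

    def sobel_magnitude(image):
        """Return the gradient-magnitude image |g| = sqrt(gx^2 + gy^2)."""
        gx = convolve2d(image, hx, mode='same', boundary='symm')
        gy = convolve2d(image, hy, mode='same', boundary='symm')
        return np.hypot(gx, gy)

    img = np.random.rand(256, 256)              # stand-in for an ultrasound frame
    edges = sobel_magnitude(img)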

B. Median Filter

The median filter is normally [1] used to reduce noise in an image while preserving useful detail. It considers each pixel in the image in turn and looks at its nearby neighbours to decide whether or not it is representative of its surroundings, replacing the pixel value with the median of the neighbouring values; the figure illustrates an example calculation.

In general, the median filter allows a great deal of high spatial frequency detail to pass while remaining very effective at removing noise on images where less than half of the pixels in a smoothing neighbourhood have been affected.

One of the major problems with the median filter is that it is relatively expensive and complex to compute. To find the median it is necessary to sort all the values in the neighbourhood into numerical order and this is relatively slow, even with fast sorting algorithms such as quick sort.
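A small Python sketch of a 3x3 median filter is shown below as an illustration of the principle; it is not the hardware sorting-network implementation described later, and the noisy input is a made-up stand-in.

    import numpy as np

    def median_filter_3x3(image):
        """Replace each pixel by the median of its 3x3 neighbourhood (edges padded by reflection)."""
        padded = np.pad(image, 1, mode='reflect')
        out = np.empty_like(image)
        rows, cols = image.shape
        for r in range(rows):
            for c in range(cols):
                window = padded[r:r + 3, c:c + 3]
                out[r, c] = np.median(window)
        return out

    noisy = np.random.rand(128, 128)   # stand-in for a salt-and-pepper corrupted image
    denoised = median_filter_3x3(noisy)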

III. PTOLEMY MODELLING TOOL

Ptolemy II is the current software infrastructure of the Ptolemy Project. Ptolemy II is first and foremost a laboratory for experimenting with design techniques. It is published freely and in open-source form. The advantage of an open architecture and open source is that it encourages researchers to build their own methods, leveraging and extending the core infrastructure provided by the software.

Ptolemy II has a sophisticated type system with type inference and data polymorphism (where components can be designed to operate on multiple data types), and a rich expression language.

A. Modelling and Design in Ptolemy

Ptolemy is used for heterogeneous modelling, simulation and design of concurrent systems. The focus is on hardware and software, and on systems that are complex in the sense that they mix widely different operations, such as networking, signal processing, feedback control, mode changes, sequential decision making and user interfaces.

Modelling is the act of representing a system or subsystem formally. A model might be mathematical, in which case it can be viewed as a set of assertions about properties of the system such as its functionality or physical dimensions. Constructive models are often used to describe behaviour of a system in response to stimulus from outside the system. Constructive models are also called executable models.

Design is the act of defining a system or subsystem. Usually this involves defining one or more models of the system and refining the models until the desired functionality is obtained within a set of constraints. Executable models are sometimes called simulations, an appropriate term when the executable model is clearly distinct from the system it models.

B. Capabilities of Ptolemy

Ptolemy is a third-generation system. Some of the major capabilities in Ptolemy that we believe to be new technology in modelling and design environments include higher-level concurrent design in Java; actor-oriented classes, subclasses and inheritance; a truly polymorphic type system; domain-polymorphic actors; extensible XML-based file formats; better modularization through the use of packages; branded, packaged configurations; complete separation of the abstract syntax from the semantics; improved heterogeneity via a well-defined abstract semantics; thread-safe concurrent execution; and a fully integrated expression language.

C. The General Idea of the Ultrasonographic Imaging System in Ptolemy

The figures below show the idea of the simulation model built in Ptolemy. In the block diagram, the image is read with the help of an Image Read actor; the image is then converted to a gray image with the help of a band-select actor, and the gray image is converted into double-matrix form. Once the image is in matrix form, it becomes easy to apply various image processing algorithms to it: salt-and-pepper noise is added to the original image, the median filter de-noising technique is applied, the image is reconstructed, and then the Sobel edge detection algorithm is applied. The actual simulation model in Ptolemy is also shown below, which gives a clear picture of the ultrasonographic imaging system. The input of this model is any image, and the outputs are also images: the gray image, the image with salt-and-pepper noise, the de-noised image, and different types of edge-detected images (Sobel, Prewitt, Roberts and Canny). The original image is also displayed.

Figure: The general idea of the Ultrasonographic Imaging System model

Figure: The actual simulation model of the ultrasonographic imaging system in Ptolemy

IV. HARDWARE IMPLEMENTATION OF THE ULTRASONOGRAPHIC IMAGING SYSTEM

The block diagram of the system that implements the image processing operations, namely the median filter and the Sobel edge detector, in hardware [19] is shown in the figure. The output of the first block is a vector of pixels along with the data-valid signal of the window. The data vector is then sorted in descending order using a sorting algorithm. The sorted output, along with the data-valid signal of the sorting stage, is given to the median filter block; the same block is used for performing the Sobel edge detection operation [18], and it gives an output data-valid signal to indicate the arrival of valid output. In addition, a row-and-column counter provides the row and column position to each block, and a select signal decides the window size.

Figure: Data flow in the hardware system



Various modules are used: two FIFOs [19] for the Gaussian convolution stage if a 3x3 convolution mask is used, and four FIFOs for a 5x5 mask; for the calculation of the gradient a 3x3 convolution mask is used, so two FIFOs are required; in the non-maximum suppression stage 256x18-bit FIFOs are needed, and in the thresholding stage a 256x1-bit FIFO array [21]; 3x3 windows for the 3x3 mask; the 3x3 median algorithm for median filtering; the sorting algorithm for the median filter and the horizontal and vertical Sobel gradients; and the row and column counter.

V. QUANTITATIVE ANALYSIS OF ULTRASONOGRAPHIC IMAGES

Brightness is an attribute of visual perception in which a source appears to emit a given amount of light; in other words, brightness is the perception elicited by the luminance of a visual target. RGB colour brightness can be thought of as the arithmetic mean µ of the red, green and blue colour coordinates (although some of the three components make the light seem brighter than others, which, again, may be compensated by some display systems automatically).

In visual perception, contrast is the difference in visual properties that makes an object (or its representation in an image) distinguishable from other objects and the background. In visual perception of the real world, contrast is determined by the difference in the colour and brightness of the light reflected or emitted by an object and by other objects within the same field of view. Contrast is calculated as (Lmax - Lmin) / (Lmax + Lmin), where Lmax is the maximum luminance and Lmin is the minimum luminance value.

The quantitative analysis performed on the ultrasonographic images is as follows. The accuracy of, and the amount of information about, the brightness depend on the value of N: when N is large, the analysis gives very good results and more information. The brightness is given by

\[
X_n = \frac{1}{N} \sum_{L=0}^{N} X_{mean},
\]

where N is the number of equal parts, X_mean is the mean value of each part and X_n is the level of brightness. Similarly, the contrast value is

\[
X_n = \frac{1}{N} \sum_{L=0}^{N} \left(X_{mean} - X_i\right)^2,
\]

where N is the number of equal parts, X_mean is the mean value of each part, X_n is the level of contrast and X_i is the total mean of the image.
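A small Python sketch of these two measures is given below; the division of the region of interest into N horizontal strips is an illustrative assumption, as is the random stand-in for the region.

    import numpy as np

    def brightness_and_contrast(region, n_parts=8):
        """Average of the part means (brightness) and average squared deviation of the
        part means from the total mean (contrast), following the formulas above."""
        parts = np.array_split(region.astype(float), n_parts, axis=0)
        part_means = np.array([p.mean() for p in parts])
        total_mean = region.mean()
        brightness = part_means.sum() / n_parts
        contrast = np.sum((part_means - total_mean) ** 2) / n_parts
        return brightness, contrast

    roi = np.random.randint(0, 256, (64, 64))   # stand-in for a region of interest of an ultrasound image
    print(brightness_and_contrast(roi))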

VI. EXPERIMENTAL RESULTS

In the previous sections we discussed the algorithms and their corresponding architectures. The algorithms were implemented using the Ptolemy tool, and the architectures and designs were also modelled for Xilinx hardware. The programs written in VHDL were simulated and tested by writing test benches. The designs were also synthesized for a Xilinx FPGA board [18].

The modelling is done in Ptolemy. Synthesis was done using Xilinx Synthesis Technology (XST). The FPGA used to target the design was the XC2V1000-6-fg256.

A. Results obtained from the Ptolemy Modelling Tool

The results below are obtained from the Ptolemy modelling and simulation tool. First we apply the various image processing algorithms, such as median filtering, which is used for reducing the salt-and-pepper noise and speckle noise in the ultrasound image; later we apply the various edge detection algorithms, such as Sobel, Prewitt, Roberts and Canny, to the ultrasound image.

We use eight ultrasound images. Table I contains the various ultrasound images, namely Abdomen, Fetal, Heart, Intestine, Kidney, Lung, Prostate and Pancreas. In the table, column 1 contains the original image, column 2 its gray image, column 3 the image with salt-and-pepper noise with probability 0.1, and column 4 the reconstructed image, which contains very little noise. For the reconstruction we apply the median filter algorithm with a square window of size 3x3.

Table II shows the edge detection operation applied to the ultrasound images. Here we apply four important types of edge detectors: Sobel, Prewitt, Roberts and Canny. By observing the edge-detected images we see that, in Ptolemy, Sobel gives very sharp and bright edges; the output of the Prewitt operator is somewhat less sharp than Sobel but better than that of the Roberts operator, which gives very thin and somewhat blurry edges that are very hard to see. Finally, the Canny operator gives bright edges which are better than Roberts but less sharp than Sobel and Prewitt.

So, from the above results it is seen that, in Ptolemy, the median filter and the Sobel edge detector give good results for feature extraction from ultrasound images.


TABLE I
DE-NOISING OPERATION RESULTS OF PTOLEMY
Columns: Original Image | Gray Image | Image with Salt-and-Pepper Noise | De-noising Using Median Filter
Rows (test images): Abdominal, Fetal, Heart, Intestine, Kidney, Lung, Prostate, Pancreas

TABLE II
EDGE DETECTION OPERATION RESULTS OF PTOLEMY
Columns: Sobel Operator | Prewitt Operator | Roberts Operator | Canny Operator
Rows (test images): Abdominal, Fetal, Heart, Intestine, Kidney, Lung, Prostate, Pancreas



TABLE III
DE-NOISING OPERATION RESULTS OF THE XILINX TOOL
Columns: Original Image | Noise Image | De-noising by Ptolemy | De-noising by Xilinx
Rows (test images): Abdominal, Fetal

TABLE IV
EDGE DETECTION OPERATION RESULTS OF THE XILINX TOOL
Columns: Original Image | Edge Detection by Ptolemy | Edge Detection by Xilinx
Rows (test images): Abdominal, Fetal

Brightness and contrast are found with respect to different ultrasonographic images, namely the Heart and Kidney images. Accordingly, the brightness and contrast values of the region of interest are calculated for the different types of edge detectors.

TABLE V
QUANTITATIVE ANALYSIS FOR THE HEART IMAGE
Columns: Ultrasonographic Image | Brightness Value of the Region of Interest | Contrast Value of the Region of Interest
Rows: Heart Original, Heart Sobel, Heart Prewitt, Heart Roberts, Heart Canny

TABLE VI
QUANTITATIVE ANALYSIS FOR THE KIDNEY IMAGE
Columns: Ultrasonographic Image | Brightness Value of the Region of Interest | Contrast Value of the Region of Interest
Rows: Kidney Original, Kidney Sobel, Kidney Prewitt, Kidney Roberts, Kidney Canny

VII. CONCLUSIONS

The modelling and simulation of the ultrasonographic imaging system is the first step towards the system design. With the Ptolemy modelling tool we successfully implemented different image enhancement algorithms for edge detection and de-noising, and applied the same algorithms to various sample ultrasonographic images. Out of the several algorithms, we found that the Sobel edge detector and the median filter give very good results for ultrasonographic images (Tables I and II). Finally, a quantitative analysis of the ultrasonographic images with respect to brightness and contrast was initiated (Tables V and VI); for brightness and contrast we found that the Xilinx hardware implementation gives good results. In future work, a more accurate and sophisticated system should be developed with the help of hardware, in order to obtain a good, affordable ultrasonographic imaging system.

REFERENCES
[1] A. K. Jain, Fundamentals of Digital Image Processing, NJ: Prentice-Hall.
[2] T. Loupas, W. N. McDicken, and P. L. Allan, "An adaptive weighted median filter for speckle suppression in medical ultrasonic images," IEEE Trans. Circuits Syst., vol. 36, pp. 129-135, Jan. 1989.
[3] X. Zong, A. F. Laine, and E. A. Geiser, "Speckle reduction and contrast enhancement of echocardiograms via multiscale nonlinear processing," IEEE Trans. Med. Imag., vol. 17, pp. 532-540, Aug. 1998.
[4] Alin Achim, Anastasios Bezerianos, and Panagiotis Tsakalides, "Wavelet-based ultrasound image denoising using an alpha-stable prior probability model."
[5] D. Marr and E. Hildreth, "Theory of edge detection," Proc. Roy. Soc., vol. B207, pp. 187-217, 1980.
[6] J. Canny, "A computational approach to edge detection," IEEE Trans. Pattern Anal. Machine Intell., vol. 8, pp. 679-698, June 1986.
[7] M. Petrou, "Optimal convolution filters and an algorithm for the detection of wide linear features," Proc. Inst. Elect. Eng., vol. 140, no. 5, pp. 331-339, Oct. 1993.
[8] A. C. Bovik, "On detecting edges in speckle imagery," IEEE Trans. Acoust., Speech, Signal Processing, vol. 36, pp. 1618-1627, Oct. 1988.
[9] R. A. Brooks and A. C. Bovik, "Robust techniques for edge detection in multiplicative Weibull image noise," Pattern Recognit., vol. 23.
[10] Yongjian Yu and Scott T. Acton, "Edge detection in ultrasound imagery using the instantaneous coefficient of variation," IEEE Trans. Image Processing, vol. 13, no. 12, December 2004.
[11] R. A. Pierson and G. P. Adams, "Computer-assisted image analysis, diagnostic ultrasonography and ovulation induction: Strange bedfellows," Theriogenology, vol. 43, pp. 105-112, 1995.
[12] J. Singh, R. A. Pierson, and G. P. Adams, "Ultrasound image attributes of the bovine corpus luteum: structural and functional correlates," J. Reproduction Fertility, vol. 109, pp. 35-44, 1997.
[13] G. E. Sarty, W. Liang, M. Sonka, and R. A. Pierson, "Semi-automated segmentation of ovarian follicular ultrasound images using a knowledge-based algorithm," Ultrasound in Med. Biol., vol. 24.
[14] J. Bouma et al., Medical Image Analysis, 1, pp. 363-377, 1997.
[15] Fei Mao, Jeremy Gill, Donal Downey, and Aaron Fenster, "Segmentation of carotid artery in ultrasound images," Proceedings of the 22nd Annual EMBS International Conference, July 23-28, 2000, Chicago, IL.
[16] Anthony Krivanek and Milan Sonka, "Ovarian ultrasound image analysis: Follicle segmentation," IEEE Trans. Medical Imaging, vol. 17, no. 6, December 1998.
[17] Fahad Alzahrani and Tom Chen, "Real-time high performance edge detector for computer vision applications," Proceedings of ASP-DAC '97, pp. 671-672.
[18] Muthukumar Venkatesan and Daggu Venkateshwar Rao, "Hardware acceleration of edge detection algorithm on FPGAs," Department of Electrical and Computer Engineering, University of Nevada Las Vegas, Las Vegas, NV 89154.
[19] F. G. Lorca, L. Kessal, and D. Demigny, "Efficient ASIC and FPGA implementation of IIR filters for real-time edge detection," International Conference on Image Processing (ICIP-97), vol. 2, Oct. 1997.
[20] V. Gemignani, M. Demi, M. Paterni, M. Giannoni, and A. Benassi, "DSP implementation of real-time edge detectors," Proc. of Speech and Image Processing, pp. 1721-1725, 2001.
[21] Miguel A. Vega-Rodriguez, Juan M. Sanchez-Perez, and Juan A. Gómez-Pulido, "An FPGA-based implementation for median filter meeting the real-time requirements of automated visual inspection systems," Proceedings of the 10th Mediterranean Conference on Control and Automation, MED 2002, Lisbon, Portugal, July 9-12, 2002.
[22] K. E. Batcher, "Sorting networks and their applications," AFIPS Proceedings of the Spring Joint Computer Conference, pp. 304-314, 1968.

Simulation Model of Ultrasonographic Imaging System using Ptolemy modelling tool And its Hardware implementation and Quantative analysis of the results 25


Application of Data Mining Algorithms for Customer Classification

Nilima Patil1, Rekha Lathi2

Abstract— With the development of computer technology and information technology (IT), human beings are entering the information era, in which new tools and technologies are needed to deal with such "mass" data. Data mining is a useful tool for discovering knowledge in large data sets, and it requires proper classification methods and algorithms. This paper uses two data mining classification algorithms, C5.0 and CART, to extract information useful for decision-making from customers' transaction behaviour. Enterprises first divide customers into different groups and then recommend the corresponding card to a customer who has similar characteristics. By this means, enterprises can provide special service to users of each card rank in order to attract more and more customers.

Keywords— Data mining, classification algorithm, decision tree

I. INTRODUCTION

In commercial operation, a membership card service is the best way to help businesses accumulate customer information. On the one hand, they obtain customers' basic information and can maintain long-term contact with them; on a special day, such as a holiday or the customer's birthday, they can deliver a warm greeting and promote customer satisfaction. On the other hand, through the customer's transaction information, such as purchase volume and purchase frequency, they can analyse the value of the customer to the enterprise and the characteristics of the customers of each kind of card, which helps the enterprise recommend a card to a new customer with a clear goal. Classification analysis examines the data in a sample database to make an accurate description, establish an accurate model, or mine classifying rules for each category, and then uses those rules to classify records in other databases.

II. DATA MINING TECHNOLOGY

Data mining is the extraction of hidden predictive information from large databases. It is a powerful new technology with great potential to help companies focus on the most important information in their data warehouses. Data mining tools predict future trends and behaviours, allowing businesses to make proactive, knowledge-driven decisions. The automated, prospective analyses offered by data mining move beyond the analyses of past events provided by the retrospective tools typical of decision support systems. Data mining tools can answer business questions that traditionally were too time-consuming to resolve. They scour databases for hidden patterns, finding predictive information that experts may miss because it lies outside their expectations.

III. CLASSIFICATION ALGORITHMS

The decision tree is an important model for classification. It originated in the learning system CLS built by Hunt et al. while researching human concept modelling in the early 1960s. In the late 1970s, J. Ross Quinlan put forward the ID3 algorithm [2]-[5], and in 1993 Quinlan developed the C4.5/C5.0 algorithm on the basis of ID3. Below is a detailed explanation of the C5.0 algorithm and the CART algorithm [8], which are used in this paper.

A. C5.0 Algorithm:

The C5.0 algorithm is an extension of the C4.5 algorithm. C5.0 is a classification algorithm that applies to big data sets and is better than C4.5 in efficiency and memory use.

The C5.0 model works by splitting the sample on the field that provides the maximum information gain. Each sample subset obtained from the former split is then split again, usually according to another field, and the process continues until the sample subsets cannot be split further. Finally, the lowest-level splits are examined, and those sample subsets that do not make a remarkable contribution to the model are rejected.

1) Information Gain:

Gain is computed to estimate the gain produced by a split over an attribute:

G(S, A) = E(S) - Σ_{i=1..m} f_S(A_i) · E(S(A_i))

where:
G(S, A) is the gain of the set S after a split over the attribute A;
E(S) is the information entropy of the set S;
m is the number of different values of the attribute A in S;
f_S(A_i) is the frequency (proportion) of the items possessing A_i as value for A in S;
A_i is the ith possible value of A;
S(A_i) is the subset of S containing all items where the value of A is A_i.

1 K.C. College of Engineering, Thane (E), Mumbai, Maharashtra, India. 2 Pillai Institute of Technology, New Panvel, Navi Mumbai, Maharashtra, India. [email protected] [email protected]

International Journal of Advances in Computing and Management, 2(1) January-December 2013, pp. 26-28 Copyright @ DYPIMCA, Akurdi, Pune (India) ISSN 2250 - 1975


Entropy is computed as

E(S) = - Σ_{j=1..n} f_S(j) · log2 f_S(j)

where:
E(S) is the information entropy of the set S;
n is the number of different values of the attribute in S;
f_S(j) is the frequency (proportion) of the value j in the set S;
log2 is the binary logarithm.
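As an illustration (not part of the original paper), the following short Python sketch computes the entropy E(S) and the gain G(S, A) defined above for a toy set of membership-card records; the attribute names and card ranks are made up for the example.

from collections import Counter
from math import log2

def entropy(labels):
    # E(S) = -sum_j f_S(j) * log2 f_S(j), where f_S(j) is the proportion of label j in S.
    total = len(labels)
    return -sum((c / total) * log2(c / total) for c in Counter(labels).values())

def information_gain(records, attribute, target):
    # G(S, A) = E(S) - sum_i f_S(A_i) * E(S(A_i)) for the split over attribute A.
    labels = [r[target] for r in records]
    base, total, remainder = entropy(labels), len(records), 0.0
    for value in {r[attribute] for r in records}:
        subset = [r[target] for r in records if r[attribute] == value]
        remainder += (len(subset) / total) * entropy(subset)
    return base - remainder

# Toy customer records (attributes and card ranks are invented for illustration).
data = [
    {"income": "high", "frequency": "high", "card": "gold"},
    {"income": "high", "frequency": "low",  "card": "silver"},
    {"income": "low",  "frequency": "high", "card": "bronze"},
    {"income": "low",  "frequency": "low",  "card": "normal"},
]
print(information_gain(data, "income", "card"))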

B. CART algorithm

Classification and Regression Trees (CART) is a flexible method to describe how the variable Y is distributed after conditioning on the forecast vector X. The model uses a binary tree to divide the forecast space into subsets on which the distribution of Y is homogeneous. The tree's leaf nodes correspond to the different division regions, which are determined by the splitting rules attached to each internal node. Moving from the tree root to a leaf node, a forecast sample is assigned to exactly one leaf node, and the distribution of Y on that node is thereby determined.

1) Splitting criteria:

CART uses the GINI index to determine on which attribute the branch should be generated. The strategy is to choose the attribute whose GINI index is minimum after splitting.

2) GINI index:

Assume the training set T includes n samples and the target property has m values, the ith of which appears in T with probability p_i. Then T's GINI index is

GINI(T) = 1 - Σ_{i=1..m} p_i²

Assume that property A divides T into q subsets {T_1, T_2, …, T_q}, where T_i contains n_i samples. The GINI index of the division according to property A is

GINI_A(T) = Σ_{i=1..q} (n_i / n) · GINI(T_i)

CART splits on the property that yields the minimum value after the division.
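A similar sketch (again with invented records) evaluates the GINI index of a split and picks the attribute CART would branch on, i.e. the one with the minimum index after splitting.

from collections import Counter

def gini(labels):
    # GINI(T) = 1 - sum_i p_i^2, where p_i is the proportion of class i in T.
    total = len(labels)
    return 1.0 - sum((c / total) ** 2 for c in Counter(labels).values())

def gini_split(records, attribute, target):
    # GINI_A(T) = sum_i (n_i / n) * GINI(T_i) over the subsets induced by attribute A.
    total, score = len(records), 0.0
    for value in {r[attribute] for r in records}:
        subset = [r[target] for r in records if r[attribute] == value]
        score += (len(subset) / total) * gini(subset)
    return score

# CART-style choice: pick the attribute with the smallest GINI index after splitting.
data = [
    {"income": "high", "frequency": "high", "card": "gold"},
    {"income": "high", "frequency": "low",  "card": "silver"},
    {"income": "low",  "frequency": "high", "card": "gold"},
    {"income": "low",  "frequency": "low",  "card": "normal"},
]
best = min(["income", "frequency"], key=lambda a: gini_split(data, a, "card"))
print(best, gini_split(data, best, "card"))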

IV. BUSINESS UNDERSTANDING

In commercial operation, a membership card service is the best way to help businesses accumulate customer information. The retail chain store implements a membership-card management system, which is helpful not only for accumulating customer information but also for offering the corresponding service to users of each card rank; in this way the store can enhance customer loyalty. Therefore, in order to recommend the appropriate card to a customer, senior managers want to learn the characteristics of the customers of each card rank and which factor most strongly affects a customer's choice of one kind of card over another. Various data mining classification algorithms are available for this purpose.

For this application we can use these data mining techniques to find rules from a large database. For the training data set, two well-known decision tree classification algorithms are used: C5.0 and CART. The block diagram is shown in Fig. 1. Before modelling, a big data set of customer information is stored in the database.

Fig 1. Block Diagram

Through the analysis of the attributes of the customer basic-information table, we add the Field Ops into the data stream and remove the unnecessary fields. Among the selected fields, we set the membership card type as the output option and the others as input options.

In the data understanding stage, by analysing the characteristics of the primary data, we gain further understanding of the data distribution and of the effect of various factors on the membership card type. Through this preliminary analysis, we find the statistics of the customer records in the database, namely the percentages of golden card, silver card, bronze card and normal card customers.

In the data source, after setting the Type and Filter options, we select the attribute items needed for classification, extracting 70% of the source data as the training set and the remaining 30% as the examination set. The C5.0 and CART algorithms are then applied separately to carry out the classification; a short sketch of this setup is given below.
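As a rough illustration of this setup, the following Python sketch uses scikit-learn's DecisionTreeClassifier as a stand-in: the "entropy" criterion approximates the information-gain splitting of C5.0 (scikit-learn does not implement C5.0 itself), while the "gini" criterion corresponds to CART. The file name customers.csv and the column card_type are hypothetical.

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Hypothetical customer table; "card_type" is the output field, the rest are inputs.
df = pd.read_csv("customers.csv")
X = pd.get_dummies(df.drop(columns=["card_type"]))
y = df["card_type"]

# 70% training set, 30% examination (test) set, as in the paper.
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.7, random_state=0)

for name, criterion in [("C5.0-like (entropy)", "entropy"), ("CART (gini)", "gini")]:
    tree = DecisionTreeClassifier(criterion=criterion, random_state=0)
    tree.fit(X_train, y_train)
    print(name, "accuracy:", accuracy_score(y_test, tree.predict(X_test)))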

V. COMPARATIVE STUDY

In the customer membership card model, both algorithms are applied and a comparative study of them is carried out.

A. C5.0 algorithm:

In the customer membership card model, the C5.0 algorithm is used to split the data set and produce the result in the form of a decision tree or rule set.

1) The splitting criterion used in the C5.0 algorithm is information gain: the C5.0 model splits samples on the field with the biggest information gain.


2) The test criterion: the decision tree may have any number of branches, not a fixed number of branches as in CART.

3) Pruning is performed after creating the decision tree, i.e. post-pruning in a single pass based on binomial confidence limits.

4) The C5.0 algorithm is significantly faster than C4.5 and more accurate.

B. CART algorithm:

CART is short for Classification And Regression Tree: when the value of the target attribute is continuous (ordered), the tree is called a regression tree; when the value is discrete, it is called a classification tree.

1) The selection criterion used in the CART algorithm is the Gini index (diversity index).

2) The decision tree generated by the CART algorithm is a simply structured binary tree.

3) Like C4.5, CART builds the tree first and prunes it afterwards, using a cross-validated cost-complexity model.

VI. TARGET KNOWLEDGE EXPECTED

Rule sets are formed from the decision trees of both algorithms; they show which factors mainly affect customer card classification. Those attributes are taken as the main standards to recommend a card to a future customer.

The following graph shows that the accuracy of C5.0 is greater than that of the CART algorithm. In the graph, the X axis shows the algorithms and the Y axis shows the percentage accuracy.

Fig 3: Comparison of algorithms (accuracy in %, C5.0 vs CART)


VII. CONCLUSION

Applying data mining classification algorithms in the customer membership card classification model helps us understand which factors affect the card ranks. Therefore, we are able to take those attributes as the main standards to recommend a card to a new customer. Moreover, we can refer to other attributes to help judge which card to recommend. On the basis of the resulting factors, the enterprise can recommend the corresponding membership card to the customer with a clear goal, provide special service for each kind of card user, and thereby support the enterprise's development.

REFERENCES

[1] Jiawei Han, Micheline Kamber, "Data Mining: Concepts and Techniques", Morgan Kaufmann Publishers, USA, 2001, pp. 70-181.

[2] Quinlan J. R., "Induction of Decision Trees", Machine Learning, 1986.

[3] Quinlan J. R., "C4.5: Programs for Machine Learning", San Mateo, CA: Morgan Kaufmann Publishers, 1993.

[4] Quinlan J. R., "Programs for Machine Learning", Morgan Kaufmann, 1992.

[5] Quinlan J. R., "Introduction to decision trees", Machine Learning, 1(1), 1986.

[6] Yue He, Jingsi Liu, Lirui Lin, "Research on application of data mining", School of Business and Administration, Sichuan University, May 2010.

[7] S. B. Kotsiantis, "Supervised Machine Learning: A Review of Classification Techniques", Informatica 31, 2007, pp. 249-268.

[8] Breiman L., Friedman J. H., Olshen R. A., et al., "Classification and Regression Trees", California: Wadsworth Publishing Company, 1984.

[9] Lin Zhang, Yan Chen, Yan Liang, "Application of Data Mining Classification Algorithm for Customer Membership Card Model", College of Transportation and Management, China.



Software Requirement Inspection Tool for Requirement Review

Tejal Gandhi1, Bhavesh Chauhan2

Abstract - Inspection is very important to reduce the rate of mistakes. Inspection in software engineering refers to the review of any work product by trained individuals who look for defects using a well-defined process. The Software Requirement Inspection Tool is used to inspect software requirement documents using different methods of defect detection and review. A group of experienced people is involved in the inspection process for inspecting the requirements of a project. Inside the Software Requirement Inspection Tool, a project is initiated to maintain a set of requirement documents which belong to the same project or come from the same customer. For each document an inspection is created, and a moderator is assigned to it who is responsible for creating, managing and monitoring the overall inspection process. The software maintains information about the activities of the entire requirement inspection process and reports the status accordingly: pending inspection processes, inspection planning, individual and group preparation logs, and change request management.

Keywords— Software, Requirement, Inspection, Moderator, Defect

I. INTRODUCTION

An inspection is one of the most common sorts of review practices found in software projects. The goal of an inspection is for all of the inspectors to reach consensus on a work product and approve it for use in the project. Commonly inspected work products include software requirements specifications and test plans. Inspections play an important role in a mature software development process, as they help to determine the quality of software products early on. Currently, inspections are typically carried out manually using printed versions of documents, models, etc.; however, research and practice have observed that carrying out the process in this way shows considerable variation in individual performance. In the area of software inspection research there exists comprehensive empirical evidence that paper-based software inspection is a beneficial technique to find defects in different artefacts created throughout the software development process. A major advantage of software inspections is that they can be applied to early life-cycle documents. This helps to lower the risk that critical defects affecting later activities and deliverables remain undetected, which would finally result in major rework costs.

The Software Requirement Inspection Tool (SRIT) concentrates on the inspection of requirements documents, which potentially offers the largest benefit to the project.

SRIT takes a requirement document as input. Many inspectors review this document and raise defects found in it. The document is then updated based on the discovered defects and given back to the inspectors to review the modifications. After the inspectors verify the modifications, SRIT produces a defect-free requirement document. Defects are thus identified and resolved in an early stage of the software development life cycle, reducing the rework cost.

A. Problem

If the requirements of a project are not understood properly, the project faces time and effort budget overruns. Good understanding of the requirements is a must for efficient time estimation of a project, and managing requirements is hard. There can be several kinds of errors in the requirements, such as lack of clarity or ambiguity, and sometimes misinterpretation ruins a requirement. If the understanding of a requirement is not correct then its implementation is also not correct; this leads to defect discovery in a later stage of the software development life cycle, where rework becomes very costly. The Software Requirement Inspection (Review) Tool is designed to detect ambiguities, redundancies and other defects in the requirements. Detecting requirement defects in requirement documents is not simple, so the tool has a repository that stores the data and provides an effective way of analysing, querying and tracing the inspection of a requirement document [2].

B. Contribution

The Software Requirement Inspection Tool helps in the inspection of requirements. It identifies defects or ambiguity in the requirements and makes the requirements clearer. Inspecting a requirement document through SRIT offers large benefits, as it removes defects in an early stage of the software development life cycle and thus reduces the rework cost in later stages. Using SRIT, the user discovers an error in the requirement document and initiates the rework process to resolve it. SRIT improves software quality and maintainability, reduces the time to deliver, and also reduces development cost. SRIT identifies errors, misunderstandings, clerical errors, and omissions in the requirements document.

II. GENESIS

A study of defect injection and defect detection results in the following chart:

1 Department of Computer Engineering, Marathwada Mitra Mandal College of Engineering, Pune University, Maharashtra, India. 2 Persistent Systems Ltd., Pune, India. [email protected] [email protected]

International Journal of Advances in Computing and Management, 2(1) January-December 2013, pp. 29-33 Copyright @ DYPIMCA, Akurdi, Pune (India) ISSN 2250 - 1975


Fig. 1: Example of Defect Injection and Defect Removal Curve

In Figure 1 the curve A from START to CODE represents the number of defects that were created or entered into the production processes during requirements analysis (REQ), high-level design (HLD), low-level design (LLD), and coding (CODE) in a software project. Curve A shows no pre-test defect removal activities. Consequently each test activity, e.g. Unit Test (UT), Integration Test (IT), and System Test (ST), as shown in curve B, finds and removes varying volumes of the generated or injected defects.

In this example let us assume that each test (UT, IT, ST) removes 50% of the defects present at entry to the test. In a well-defined, well-behaved series of tests, we would expect the number of defects removed to decline in each subsequent test activity. While this is a simplistic view, evidence to the contrary would suggest a quality problem for the product under test; in this example we assume that the tests are well behaved. The gap that remains at the end of system test (ST) represents latent defects that have the potential to be found by the users. Latent defects are defects remaining in the software product delivered to the customer at SHIP.

So far in this example we can see that:

A. Defects are generated in each life cycle production activity

B. Injected defects are removed in testing activities after code is completed

C. Not all defects are removed at SHIP

Had we tried to deliver the product at the conclusion of coding, we would most probably not have had a working product, due to the volume of defects. There is a cost to remove defects found during test and, as we will soon see, the cost to remove defects during test is always higher than in pre-test. Users of software products all too frequently encounter problems, and we can assume that these defects will usually require resolution and repair; they are therefore an added cost to the software project or organization. Both of these costs will vary for a number of reasons, such as the number of defects entering from each production activity, the types


and severities of defects, and the quality policy in the organization. The sum of these costs increases the cost of quality for the project, where the cost of quality is money spent due to poor quality in the product [15].

Our objective with inspections is to reduce the cost of quality by finding and removing defects earlier and at a lower cost.

While some testing will always be necessary, we can reduce the costs of test by reducing the volume of defects propagated to test. We want to use test to verify and validate functional correctness, not for defect removal and the associated rework costs; we want to use test to prove the correctness of the product without the high cost of defect removal normally seen in test. Additionally, we want to use test to avoid impacting the users with defective products.

So the problems that must be addressed are:

A. Defects are injected when software solutions are produced

B. Removal of defects in test is costly

C. Users are impacted by too many software defects in delivered products

With inspections we can manage and reduce the effect of these problems. In Figure 2 the same curve (A-Injected) is shown as in Figure 1. An additional curve (C-Detected with inspections) represents the defects remaining after removal by inspections, for the volume of defects injected and remaining during each production activity as shown in curve A. In this example the assumption is 50% defect removal effectiveness for each inspection and test activity, where effectiveness is the percentage of defects removed from the base of all the defects entering the defect removal activity. As is seen in Figure 2, the number of latent defects in the final product is reduced with inspections.

Fig. 2: Example of Defect Removal with Inspections

After inspections, residual defects are removed during testing, but typically not all injected defects are removed; there is still a gap of latent defects that the users could potentially find. In the scenario with inspections, the gap is smaller, due to the defects removed by inspections. This reduced gap represents a quality improvement in the product delivered to the users [15]. SRIT takes care of the defects present in requirement documents: it helps us to identify defects in the requirements provided by the customer in an early stage of the software development life cycle, thus improving product quality and reducing rework cost. A small numerical sketch of this defect-removal arithmetic is given below.
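The arithmetic behind Figures 1 and 2 can be made concrete with a short sketch. The per-phase injection counts below are invented for illustration; the 50% removal effectiveness per activity follows the example above.

# Illustrative injection counts per production phase (made-up numbers).
injected = {"REQ": 40, "HLD": 30, "LLD": 20, "CODE": 10}

def latent_defects(injected, inspect_phases=(), removal=0.5, tests=("UT", "IT", "ST")):
    # Carry defects through production and test, removing `removal`
    # (50% in the example) of the defects present at each removal activity.
    remaining = 0
    for phase, count in injected.items():
        remaining += count
        if phase in inspect_phases:           # pre-test inspection of this phase's output
            remaining *= (1 - removal)
    for _ in tests:                           # UT, IT, ST each remove 50% of what enters
        remaining *= (1 - removal)
    return remaining

print("latent defects without inspections:", latent_defects(injected))
print("latent defects with inspections   :", latent_defects(injected, inspect_phases=tuple(injected)))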


III. SOFTWARE REQUIREMENT INSPECTION FRAMEWORK

Fig 3 SRIT Framework

The software inspection process consists of several stages: entry, planning, preparation, defect collection, and repair and verification. In the entry phase the artifacts required for inspection are prepared and the planning of the inspection starts. The planning phase includes scheduling of the overview and inspection meetings, team composition, and assignment of roles and responsibilities. During preparation, individual inspectors read the artifact separately, attempting to detect defects. In some variations of the process, these individuals are assigned certain classes of defect to focus on during the inspection phase, but are not restricted to finding and reporting that class of defect; studies have shown that about 75% of defects are found by individuals in the preparation stage. The final stage, repair, requires the author to address all of the issues raised in the inspection report. The moderator then evaluates the rework to determine whether all of the reported issues have been sufficiently resolved. There is generally scope for re-inspection of the work if either the moderator is not happy with the quality of the rework, or the particular artefact or deliverable is critical to the project.

A. Requirement Inspection Planning

The moderator plans an inspection: he or she decides the source document to inspect, the review checklist, the team, and the time line for the inspection. The moderator can also modify an existing inspection plan, redesign the team structure, and update the time line. The aim of inspection planning is to make sure that the inspection process goes smoothly and is completed in time. Once the inspection is complete, the moderator can close it [6]. A small illustrative data model of such a plan is sketched below.
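As an illustration only (this is not SRIT's actual schema), an inspection plan of the kind described above could be modelled as follows.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class InspectionPlan:
    # Illustrative record of what the moderator fixes when planning an inspection.
    document: str                       # source requirement document to inspect
    checklist: str                      # review checklist used by the inspectors
    moderator: str
    inspectors: list = field(default_factory=list)
    deadline: date = None
    status: str = "open"

    def reassign_team(self, inspectors):
        self.inspectors = list(inspectors)

    def close(self):
        self.status = "closed"

plan = InspectionPlan("SRS-v1.doc", "requirements-checklist", "moderator-A",
                      ["inspector-1", "inspector-2"], date(2013, 6, 30))
plan.close()
print(plan.status)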

[Fig. 3 stages — Entry: inspection planning, initiation, inspection overview meeting, team composition, roles and responsibilities, templates, checklist, inspection training. Task: inspection execution meeting, individual inspector form, inspection log form. Observation finalization: record final observations in the inspection log. Verification & validation: inspection log closure, inspection log review, rework, verify the changes and validate their successful completion, approve the changes. Exit: close the review observations, sign off the work product.]


B. Individual Defect Discovery

The inspectors scrutinize the requirement document using a checklist provided by the moderator. The aim of each inspector is to find the maximum number of unique major potential defects. The defects are identified as objectively as possible according to the checklist and the understanding of the participants, and are recorded for further discussion and investigation [10].

C. Six Hat Thinking

Six thinking hats is a review technique and discipline designed to help individuals deliberately adopt a variety of perspectives on an issue that may be different from the one they would most naturally assume. During requirement review, every inspection team member or stakeholder sees the requirements from his or her own perspective; one stakeholder is not aware of the needs of the other team members. As a result, one stakeholder cannot fully anticipate the requirement aspects of another stakeholder.

D. Brainstorming

Its aim is to get to the root causes of the major issues logged and to generate suggestions for improving the software development process. Major issues are selected for analysis, and the issue with the highest severity is addressed first. For each potential defect selected, the root causes are brainstormed, and improvements which could be made to prevent that type of defect occurring again are suggested.

E. Change Request Management

Assign: The moderator assigns a change request on a document to the corresponding author of the document. Rework: The author picks up the change request assigned to him or her. If the change request concerns a misinterpretation, an ordinary comment or footnote may be added to prevent future misinterpretation; the author may also make other improvements or corrections to the document after confirmation from the moderator. Follow up: The moderator (reviewer) checks that satisfactory editorial action has been taken on the defects. A change request to correct defects in any document must have been sent to the author of the document. Follow up is completed when the rework for all the defects is verified. If the rework is not satisfactory, the change request is sent again to the author for further rework, with comments from the reviewer. If the modification done in the document is verified, the moderator (reviewer) can close the change request [11]. This assign-rework-follow-up cycle is sketched below.
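The cycle can be sketched as a tiny state machine; the class, state names and fields below are illustrative and not taken from SRIT.

class ChangeRequest:
    # Illustrative Assign -> Rework -> Follow-up cycle for one defect.
    def __init__(self, defect, author, moderator):
        self.defect, self.author, self.moderator = defect, author, moderator
        self.state = "assigned"
        self.comments = []

    def rework(self, note):
        # Author corrects the document (or adds a clarifying footnote).
        self.comments.append(("author", note))
        self.state = "reworked"

    def follow_up(self, satisfactory, note=""):
        # Moderator verifies the rework; unsatisfactory rework goes back to the author.
        if satisfactory:
            self.state = "closed"
        else:
            self.comments.append(("moderator", note))
            self.state = "assigned"

cr = ChangeRequest("ambiguous requirement R-12", "author-B", "moderator-A")
cr.rework("added footnote clarifying the intended unit of measurement")
cr.follow_up(satisfactory=True)
print(cr.state)   # -> closed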

IV. EXPERIMENT AND RESULT

The primary goal of the experiment is to produce accurate and complete requirements by eliciting, specifying, analysing and validating requirements involving all stakeholders, including business staff, analysts and delivery practitioners.

This model is applicable to small-scale as well as large-scale projects. The intent of a requirements inspection is to determine whether a given set of requirements is sufficiently stable, acceptable, and understood to support



system/software development. A major factor in this effort is 1) ensuring that the individual developing the requirements has a firm grasp of what is expected, 2) that potential designers can comprehend what is stated in the document, and 3) that potential designers find the requirements, as stated, technically feasible.

Fig. 4 Inspection Initiation Process

The Software Requirement Inspection Tool can be deployed on an IIS server, and users access the application using a web browser. SRIT contains different sections that can be used for managing the inspection process and the defect detection and removal process. Basically, the software contains a create-inspection screen that is used to create a new inspection process for inspecting a new project requirement work product.

Fig. 5 Individual preparation Log for Defect Detection

On completion of the individual preparation logs and the inspection process, an evaluation matrix is available to view all logged defects of a particular inspection and to calculate a summary of the inspection; a small sketch of such a summary is given below.

Fig. 6 Evaluation Matrix for analysis of Inspection
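A summary of the kind shown in the evaluation matrix could, for instance, be computed from the logged defects as follows; the inspector names and severities are invented for the example.

from collections import Counter

# Illustrative logged defects: (inspector, severity) pairs from the preparation logs.
log = [
    ("inspector-1", "major"), ("inspector-1", "minor"),
    ("inspector-2", "major"), ("inspector-2", "major"), ("inspector-2", "minor"),
]

by_inspector = Counter(inspector for inspector, _ in log)
by_severity = Counter(severity for _, severity in log)
print("defects per inspector:", dict(by_inspector))
print("defects per severity :", dict(by_severity))
print("total defects logged :", len(log))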

V. CONCLUSION AND FUTURE WORK

The Software Requirement Inspection Tool provides a process to inspect requirement documents for defects. Documents coming from different customers can be inspected in an isolated manner. Inside SRIT, a project is created for a set of documents which belong to the same project or customer. A single inspection is created for each document to be inspected, and a moderator is assigned to it.

The moderator has to manage and monitor the complete inspection cycle for that document. More than one inspector inspects a document. Defects identified during the inspection process are discussed using the six-hat-thinking or brainstorming techniques. These techniques prioritize the defects, identify their severity, and also comment on the workaround to resolve them. SRIT not only identifies defects in the requirement documents but also provides an automatic framework for rework and for verification of the rework. Change requests are generated based on the six hat thinking and brainstorming; these change requests are assigned to authors for rework, and the author needs to get the rework verified by the moderator.

SRIT also maintains a log of each inspection cycle for future reference. This helps in identifying the average time (or precise time) required to inspect different types of documents and thus helps managers to schedule requirement inspection tasks.

The scope of the project can be further extended to automate the inspection process in the architecture and design phases. The application can be enhanced by automating different requirement capture techniques, by automating document versioning to manage documents more efficiently (or by integrating a third-party tool for the same), and by automating other review processes, such as code review, which can help to improve the quality of the product.

REFERENCES

[1] Gilb T., Graham D., Software Inspection, Addison-Wesley, 1993.

Halling M., Grünbacher P., Biffl S., "Groupware Support for Software Requirements Inspection", Proc. of the Workshop on Inspection in Software Engineering (WISE'01), July 2001.

[2] Halling M., Grünbacher P., Biffl S., "Tailoring a COTS Group Support System for Software Requirements Inspection", Proceedings of the Automated Software Engineering Conference, IEEE Comp. Soc. Press, San Diego, Dec. 2001.

[3] Benjamin Cheng and Ross Jeffery, The University of New South Wales, Australia, "Comparing Inspection Strategies for Software Requirement Specification", 1996, IEEE Computer Society Press.

[4] Basili V. R., "The Experience Factory and its Relationship to other Improvement Paradigms", Software Engineering-ESEC'93, Proceedings, Springer Verlag, pp. 68-83, 1993.

[5] Basili V. R., Green S., Laitenberger O., Lanubile F., Shull F., Soerumgaard S., Zelkowitz M., "The Empirical Investigation of Perspective-Based Reading", Empirical Software Engineering: An International Journal 1, 2 (1996), pp. 133-164.

[6] "A Groupware-Supported Inspection Process for Active Inspection Management", Proceedings of the 28th Euromicro Conference (EUROMICRO'02), IEEE, 2002.

[7] "Requirements of Requirement Management Tools", Proceedings of the 12th IEEE International Requirements Engineering Conference, IEEE Computer Society Press, 2004.

[8] Biffl S., Halling M., "Software Product Improvement with Inspection", Proc. Euromicro 2000 Conference on Software Product and Process Improvement, Maastricht, IEEE Comp. Soc. Press, Sept. 2000.

[9] Biffl S., "Analysis of the Impact of Reading Technique and Inspector Capability on Individual Inspection Performance", Proc. of APSEC 2000 Asia-Pacific Software Engineering Conference, Singapore, IEEE Comp. Soc. Press, Dec. 2000.

[10] Biffl S., "Using Inspection Data for Defect Estimation", IEEE Software 17(6), special issue on recent project estimation methods, pp. 36-43, 2000.

[11] Biffl S., Gutjahr W., "Influence of Team Size and Defect Detection Methods on Inspection Effectiveness", Proc. of IEEE Int. Software Metrics Symposium, London, April 2001.

[12] Biffl S., Grossmann W. (2001), "Evaluating the Accuracy of Objective Estimation Models Based on Inspection Data From Multiple Inspection Cycles", Proc. of the ACM/IEEE International


Conference on Software Engineering, Toronto, Canada, IEEE Comp. Soc. Press, 2001, pp. 145-154.

[13] Freimut B., Laitenberger O., Biffl S., "Investigating the Impact of Reading Techniques on the Accuracy of Different Defect Content Estimation Techniques", Proc. of IEEE Int. Software Metrics Symposium, London, April 2001.

[14] Halling M., Biffl S., "Using Reading Techniques to Focus Inspection Performance", Proc. Euromicro 2001 Conference on Software Product and Process Improvement, Warsaw, IEEE Comp. Soc. Press, Sept. 2001.

[15] Laitenberger O., DeBaud J.-M., “An encompassing life cycle centric survey of software inspection”, Journal of Systems and Software, vol. 50, no. 1, 2000, pp. 5-31.

[16] MacDonald F., Miller J., “A Comparison of Computer Support Systems for Software Inspection”, Automated Software Engineering 6, 291-313, 1999.


The Asymptotic Analysis of Quick Sort and Quick Select

Riyazahmed A. Jamadar

Abstract— I revisit the classical Quick Sort and Quick Select algorithms, under a complexity model that fully takes into account the elementary comparisons between the symbols composing the records to be processed. My probabilistic models belong to a broad category of information sources that encompasses memoryless (i.e., independent-symbols) and Markov sources, as well as many unbounded-correlation sources. I establish that, under these conditions, the average-case complexity of Quick Sort is O(n log² n) [rather than O(n log n), classically], whereas that of Quick Select remains O(n). Explicit expressions for the implied constants are provided by combinatorial-analytic methods.

Keywords— Asymptotic, entropy, memoryless, Dirichlet series.

I. INTRODUCTION

Every student of a basic algorithms course is taught that, on average, the complexity of Quick Sort is O(n log n), that of binary search is O(log n), and that of radix-exchange sort is O(n log n); see for instance [13,16]. Such statements are based on specific assumptions, namely that the comparison of data items (for the first two) and the comparison of symbols (for the third one) have unit cost, and they have the obvious merit of offering an easy-to-grasp picture of the complexity landscape. However, as noted by Sedgewick, these simplifying assumptions suffer from limitations: they do not make possible a precise assessment of the relative merits of algorithms and data structures that resort to different methods (e.g., comparison-based versus radix-based sorting) in a way that would satisfy the requirements of either information theory or algorithms engineering. Indeed, computation is not reduced to its simplest terms, namely the manipulation of totally elementary symbols such as bits, bytes, or characters. Furthermore, such simplified analyses say little about a great many application contexts, in databases or natural language processing for instance, where information is highly "non-atomic", in the sense that it does not plainly reduce to a single machine word. First, I observe that, for commonly used data models, the mean costs Sn and Kn of any algorithm under the symbol-comparison and the key-comparison model, respectively, are connected by the universal relation

Sn = Kn · O(log n).

(This results from the fact that at most O(log n) symbols suffice, with high probability, to distinguish n keys; cf. the analysis of the height of digital trees, also known as "tries", in [3].) The surprise is that there are cases where this upper bound is tight, as in Quick Sort, and others where both costs are of the same order, as in Quick Select. In this work, I show that the expected cost of Quick Sort is O(n log² n), not O(n log n), when all elementary operations (symbol comparisons) are taken into account. By contrast, the cost of Quick Select turns out to be O(n) in both the old and the new world, albeit, of course, with different implied constants. My results constitute broad extensions of earlier ones by Fill and Janson (Quick Sort, [7]) and Fill and Nakama (Quick Select, [8]). The sources that I deal with include memoryless ("Bernoulli") sources with possibly non-uniform independent symbols and Markov sources, as well as many non-Markovian sources with unbounded memory from the past. A central idea is the modelling of the source via its fundamental probabilities, namely the probabilities that a word of the source begins with a given prefix. My approach otherwise relies on methods from analytic combinatorics [10], as well as on information theory and the theory of dynamical systems. It relates to earlier analyses of digital trees ("tries") by Clément, Flajolet, and Vallée [3].

Results: I consider a totally ordered alphabet. What I call a probabilistic source produces infinite words on this alphabet (see Definition 1). The set of keys (words, data items) is endowed with the strict lexicographic order. My main objects of study are two algorithms, applied to n keys assumed to be independently drawn from the same source S, namely the standard sorting algorithm QuickSort(n) and the algorithm QuickSelect(m, n), which selects the mth smallest element. In the latter case, I mostly focus on situations where the rank m is proportional to n, of the form m = ⌈αn⌉, so that the algorithm determines the α-quantile; it is then denoted QuickQuantα(n). I also consider the cases where the rank m equals 1 (QuickMin), equals n (QuickMax), or is randomly chosen in {1, . . . , n} (QuickRand). My main results involve constants that depend on the source S (and possibly on the real α); these are displayed in Figures 1-3 and described in Section 1.

They specialize nicely for a binary source (under which keys are compared via their binary expansions, with uniform independent bits), in which case they admit pleasant expressions that simplify and extend those of Fill and Nakama [8] and Grabner and Prodinger [11] and lend themselves to precise numerical evaluations (Figure 2). The conditions Λ-tamed, Π-tamed and "periodic" are made explicit in Subsection 2.1. Conditions of applicability to classical source models are discussed in Section 3. A small simulation sketch illustrating the symbol-comparison cost model appears below.
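As a purely illustrative simulation (not taken from the paper), the following Python sketch runs Quick Sort on n random words from the uniform binary source and counts both key comparisons and symbol comparisons, so that the two cost models can be compared empirically; truncating the words to 64 symbols stands in for the infinite words of the model.

import random

def symbol_cmp(u, v, counter):
    # Lexicographic comparison that counts the symbols actually examined.
    for a, b in zip(u, v):
        counter["symbols"] += 1
        if a != b:
            return -1 if a < b else 1
    return (len(u) > len(v)) - (len(u) < len(v))

def quicksort(keys, counter):
    if len(keys) <= 1:
        return keys
    pivot, rest = keys[0], keys[1:]
    smaller, larger = [], []
    for k in rest:
        counter["keys"] += 1            # one key comparison against the pivot
        (smaller if symbol_cmp(k, pivot, counter) < 0 else larger).append(k)
    return quicksort(smaller, counter) + [pivot] + quicksort(larger, counter)

# n random words from the uniform binary source Bern(1/2, 1/2), truncated to 64 symbols.
random.seed(0)
n = 2000
words = ["".join(random.choice("01") for _ in range(64)) for _ in range(n)]
counter = {"keys": 0, "symbols": 0}
quicksort(words, counter)
print("key comparisons   :", counter["keys"])
print("symbol comparisons:", counter["symbols"])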

AISSMS Institute of Information Technology, Pune, India

International Journal of Advances in Computing and Management, 2(1) January-December 2013, pp. 34-37 Copyright @ DYPIMCA, Akurdi, Pune (India) ISSN 2250 - 1975


Theorem 1. (i) For a Λ-tamed source S, the mean number Tn of symbol comparisons of QuickSort(n) involves the entropy h(S) of the source:

Tn = (1/h(S)) n log² n + a(S) n log n + b(S) n + o(n), for some a(S), b(S) ∈ R.   (1)

(ii) For a periodic source, a term n P(log n) is to be added to the estimate (1), where P(u) is a (computable) continuous periodic function.

Theorem 2. For a Π-tamed source S, one has the following, with δ = δ(S) > 0.
(i) The mean number of symbol comparisons Qn(α) of QuickQuantα(n) satisfies
Qn(α) = ρS(α) n + O(n^(1-δ)).   (2)
(ii) The mean numbers of symbol comparisons Mn(-) for QuickMin(n) and Mn(+) for QuickMax(n) satisfy, with ε = ±,
Mn(ε) = ρS(ε) n + O(n^(1-δ)), where ρS(+) = ρS(1) and ρS(-) = ρS(0).   (3)
(iii) The mean number Mn of symbol comparisons performed by QuickRand(n) satisfies
Mn = γS n + O(n^(1-δ)), with γS = ∫₀¹ ρS(α) dα.   (4)

These estimates are to be compared with the classical ones, relative to the number of key comparisons, in QuickQuantα.

General strategy: Here I operate under a general model of source, parametrized by the unit interval [0, 1]. My strategy comprises three main steps; the first two are essentially algebraic, while the last one relies on complex analysis.

Step (a): I first show (Proposition 1) that the mean number of symbol comparisons performed by a (general) algorithm applied to n words from a source can be expressed in terms of two objects: (i) the density of the algorithm, which uniquely depends on the algorithm and provides a measure of the mean number of key comparisons performed near a specific point; (ii) the family of fundamental triangles, which solely depends on the source and describes the location of pairs of words which share a common prefix.

Step (b): This step is devoted to computing the density of the algorithm. I first deal with the Poisson model, where the number of keys, instead of being fixed, follows a Poisson law of parameter Z. I provide an expression of the Poissonized density relative to each algorithm, from which the mean number of comparisons under this model is deduced. Simple algebra then yields the corresponding expressions for the model where the number n of keys is fixed.

Step (c): The exact representation of the mean cost is an alternating sum which involves two kinds of quantities: the size n of the set of data to be sorted (which tends to infinity), and the fundamental probabilities (which tend to 0). I approach the corresponding asymptotic analysis by means of complex integral representations of the Nörlund-Rice type. For each algorithm-source pair, a series of Dirichlet type encapsulates both the properties of the source and the characteristics of the algorithm; this is the mixed Dirichlet series, whose singularity structure in the complex plane is proved to condition the final asymptotic estimates.

A. Algebraic analysis

The developments of this section are essentially formal (algebraic) and exact; that is, no approximation is involved at this stage.

The function is plotted for three memoryless sources, Bern(1/2,1/2), Bern(1/3,2/3) and Bern(1/3,1/3,1/3), in the figures below.

Fig 1 Bern(1/2,1/2)

Fig 2. Bern(1/3,2/3)


Fig 3. Bern(1/3,1/3,1/3)

The curves illustrate the fractal character involved in Quick Select. A small simulation along these lines is sketched below.
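Along the same lines, a rough empirical estimate of the linear constant for QuickQuantα can be obtained by counting the symbol comparisons of Quick Select on words drawn from Bern(1/3, 2/3) and dividing by n; the sketch below is illustrative only and the truncation to 64 symbols is again a practical stand-in for infinite words.

import random

def symbol_cmp(u, v, counter):
    # Lexicographic comparison counting examined symbols.
    for a, b in zip(u, v):
        counter[0] += 1
        if a != b:
            return -1 if a < b else 1
    return 0

def quickselect(keys, m, counter):
    # Return the m-th smallest key (1-indexed), counting symbol comparisons.
    pivot, rest = keys[0], keys[1:]
    smaller, larger = [], []
    for k in rest:
        (smaller if symbol_cmp(k, pivot, counter) < 0 else larger).append(k)
    if m <= len(smaller):
        return quickselect(smaller, m, counter)
    if m == len(smaller) + 1:
        return pivot
    return quickselect(larger, m - len(smaller) - 1, counter)

def bern_word(p, length=64):
    # One word from the memoryless source Bern(p, 1-p) over {0, 1}, truncated.
    return "".join("0" if random.random() < p else "1" for _ in range(length))

random.seed(1)
n, alpha = 4000, 0.5
words = [bern_word(1/3) for _ in range(n)]
counter = [0]
quickselect(words, max(1, int(alpha * n)), counter)
print("symbol comparisons / n ~", counter[0] / n)   # empirical estimate of rho_S(alpha)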

B. Asymptotic analysis

For asymptotic purposes, the singular structure of the involved functions, most notably the mixed Dirichlet series, is essential.

1) Tamed sources: We now introduce important regularity properties of a source, via its Dirichlet series. Recall that Λ(s) always has a singularity at s = -1, and that the mixed Dirichlet series always has a singularity at s = 0.

General asymptotic estimates: The sequence of numerical values of the mixed series at the negative integers (-k) lifts into an analytic function whose singularities essentially determine the asymptotic behaviour of QuickSort and QuickSelect.

2) Application to the analysis of the three algorithms: I discuss the algorithms QuickSort and QuickVal first, then QuickQuant.

Analysis of QuickSort: In this case, the Dirichlet series of interest involves the factor Λ(s)/(s + 1)². For a Λ-tamed source, there is a triple pole at s = -1: a pole of order 1 arising from Λ(s) and a pole of order 2 brought by the factor 1/(s + 1)². For a periodic source, the other poles located on the line Re(s) = -1 provide the periodic term n P(log n), in the form of a Fourier series. This concludes the proof of Theorem 1.

Analysis of QuickValα: In this case, the analysis proceeds from the mixed Dirichlet series associated with QuickValα.

3) Analysis of :QuickQuantα

The main chain of arguments connecting the asymptotic behaviors of QuickValα (n) and QuickQuantα(n), (a) The algorithms are asymptotically “similar enough”: If X1, . . . ,Xn are n i.e. random variables uniform over [0, 1], then the α-quantile of set X is with high probability close to α. For instance, it is at distance at most (log2 n)/sqrt(n) from α with an exponentially small probability (about exp(−log2 n)). (b) The function α→ρS(α) Holder with exponent c = min(1/2, 1 − (1/γ)). (c) The error term in the expansion is uniform with respect to α. 3. Sources, QuickSort, and QuickSelect : We can now recapitulate and examine the situation of classical sources with respect to the tameness and periodicity conditions of Definition 3. All the sources listed below are ∏-tamed, so that Theorem 2 (for QuickQuant) is systematically applicable to them.The finer properties of being Λ-tamed , Λ-strongly tamed or periodic only intervene in Theorem 1, for Quick sort. Memoryless sources and Markov chains. The memoryless (or Bernoulli) sources correspond to an alphabet of some cardinality r >=2, where symbols are taken independently, with pj the probability of the jth symbol. The Dirichlet series is

Despite their simplicity, memoryless sources are never Λ-strongly tamed. They can be classified into three subcategories (Markov chain sources obey a similar trichotomy). (i) A memoryless source is said to be Diophantine if at least one ratio of the form (log pi)/(log pj) is irrational and poorly approximable by rationals. All Diophantine sources are Λ-tamed in the sense of Definition 3(b); i.e., the existence of a hyperbolic region (11) can be established. Note that, in a measure-theoretic sense, almost all memoryless sources are Diophantine. (ii) A memoryless source is periodic when all ratios (log pi)/(log p1) are rational (this includes the cases where p1 = · · · = pr, in particular the model of uniform random bits). The function Λ(s) then admits an imaginary period, and its poles are regularly spaced on the vertical line Re(s) = −1. This entails the presence of an additional periodic term in the asymptotic expansion of the cost of QuickSort, corresponding to the complex poles of Λ(s); this is case (ii) of Theorem 1. (A small numerical sketch of this classification is given after the discussion of dynamical sources below.)

(iii) Finally, Liouvillean sources are determined by the fact that the ratios are not all rational, but all the irrational ratios are very well approximated by rationals (i.e., have infinite irrationality measure); for their treatment, we can appeal to the Tauberian theorems of Delange [6].

Dynamical sources were described in Section 1. They are principally determined by a shift transformation T. As shown by Vallée ([3]), the Dirichlet series of fundamental intervals can be analysed by means of transfer operators, specifically the ones of secant type. I discuss here the much-researched case where T is a non-linear expanding map of the interval. (Such a source may be periodic only if the shift T is conjugate to a piecewise affine map of the type previously discussed.) One can then adapt deep results of Dolgopyat [4] to the secant operators. In this way, it appears that a dynamical source is strongly tamed as soon as the branches of the shift (i.e., the restrictions of T to its intervals) are not "too often" of the "same form"; we say that such sources are of type D1. Such is the case for continued fraction sources, which satisfy a "Riemann hypothesis" [1, 2]. The adaptation of other results of Dolgopyat [5] provides natural instances of tamed sources with a hyperbolic strip: this occurs as soon as the branches all have the same geometric form but not the same arithmetic features; we say that such sources are of type D2.
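
Referring back to the memoryless-source trichotomy above, the following small sketch (my own illustration, not from the paper) checks numerically whether the ratios log(pi)/log(p1) look rational with a small denominator. This is only a heuristic: rationality cannot truly be decided in floating point, and Liouvillean sources cannot be separated from Diophantine ones by such a test. The probability vectors, tolerance, and denominator bound are arbitrary choices.

# Heuristic illustration only: classify a memoryless source as (heuristically)
# periodic or aperiodic by testing whether every ratio log(p_i)/log(p_1)
# admits a small-denominator rational approximation.  This is not an exact
# test and cannot distinguish Diophantine from Liouvillean sources.
from fractions import Fraction
import math

def looks_rational(x, max_den=1000, tol=1e-9):
    # True if x is within tol of a rational with denominator <= max_den.
    approx = Fraction(x).limit_denominator(max_den)
    return abs(x - approx.numerator / approx.denominator) < tol

def classify_memoryless(probs):
    base = math.log(probs[0])
    ratios = [math.log(p) / base for p in probs[1:]]
    if all(looks_rational(r) for r in ratios):
        return "periodic (heuristically: every ratio looks rational)"
    return "aperiodic (Diophantine or Liouvillean)"

if __name__ == "__main__":
    for probs in [(0.5, 0.5), (1 / 3, 2 / 3), (1 / 3, 1 / 3, 1 / 3), (0.25, 0.75)]:
        print(probs, "->", classify_memoryless(probs))

On these inputs the sketch reports Bern(1/2, 1/2) and Bern(1/3, 1/3, 1/3) as periodic and the other two as aperiodic, matching the discussion above.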

II. ACKNOWLEDGMENT

I express my gratitude to Brigitte Vallée, Julien Clément, James Allen Fill, and Philippe Flajolet for providing the necessary technical advice; I also thank the authors of all the works I have referred to during my study. I thank one and all.

REFERENCES
[1] Baladi, V., and Vallée, B. Euclidean algorithms are Gaussian. Journal of Number Theory 110 (2005), 331–386.
[2] Baladi, V., and Vallée, B. Exponential decay of correlations for surface semiflows without finite Markov partitions. Proc. AMS 133, 3 (2005).
[3] Clément, J., Flajolet, P., and Vallée, B. Dynamical sources in information theory: A general analysis of trie structures. Algorithmica 29, 1/2 (2001), 307–369.
[4] Dolgopyat, D. On decay of correlations in Anosov flows. Annals of Mathematics 147 (1998), 357–390.
[5] Dolgopyat, D. Prevalence of rapid mixing (I). Ergodic Theory and Dynamical Systems 18 (1998), 1097–1114.
[6] Delange, H. Généralisation du théorème de Ikehara. Annales scientifiques de l'École Normale Supérieure, Sér. 3, 71, 3 (1954), 213–242.
[7] Fill, J. A., and Janson, S. The number of bit comparisons used by Quicksort: An average-case analysis. In Proceedings of the ACM-SIAM Symposium on Discrete Algorithms (SODA04) (2004), pp. 293–300.
[8] Fill, J. A., and Nakama, T. Analysis of the expected number of bit comparisons required by Quickselect. ArXiv:0706.2437. To appear in Algorithmica. Extended abstract in Proceedings ANALCO (2008), SIAM Press, 249–256.
[9] Flajolet, P., and Sedgewick, R. Mellin transforms and asymptotics: finite differences and Rice's integrals. Theoretical Comp. Sc. 144 (1995), 101–124.
[10] Flajolet, P., and Sedgewick, R. Analytic Combinatorics. Cambridge University Press, 2009. Available electronically from the authors' home pages.
[12] Grabner, P., and Prodinger, H. On a constant arising in the analysis of bit comparisons in Quickselect. Quaestiones Mathematicae 31 (2008), 303–306.
[13] External Links: www.gateguru.org/algorithms/quicksort_analysis.pdf, www.personal.kent.edu/~rmuhamma/.../Sorting/quickSort.htm, www.cise.ufl.edu/class/cot3100fa07/quicksort_analysis.pdf, www.cs.cmu.edu/afs/cs/academic/class/15451-s07/www/.../lect0123.pdf


Talent Acquisition and Retention

Meera Singh

Abstract— Changing trends clearly indicate a transformation of management from the routine recruitment process towards intangible capital management. In a knowledge-oriented society, human capital is a strategic resource for attaining competitive advantage. Talent Acquisition and Retention is attractive for numerous reasons, as it is the best remedy for achieving the overall organizational goal with a few employees who are the key players and best performers. Workforce development is necessary for upcoming challenges, such as the economy again being hit by recession; Talent Acquisition and Retention makes the organization more resilient, as the workers' potential grows manifold. The following points are major considerations in Talent Acquisition and Retention: the strategy of recruitment and retention, compensation, and assessment review. At times Talent Acquisition and Retention is handled by individual departments or by the HR Department, but top-level management is now participating fully in talent management. Finding the talent is not in itself a solution; placing it in the right position is the bigger challenge. Talent positioning means the right talent at the right time, at the right place, with the required skill set. Engagement and motivation are the keys to Talent Acquisition and Retention, and in motivating employees one important factor is the difference in preferred job characteristics between genders and among different age groups. The primary aim of Talent Acquisition and Retention is to attract and retain qualified persons in a company so that it can attain its full potential. Pay packages are carefully decided based on industry standards. Talent Acquisition and Retention not only scrutinizes the pay package but also finds out whether something similar can be offered along with other benefits.

Keywords— Human capital, Key players, Best performers, Bandwidth, Talent positioning.

I. INTRODUCTION

Talent Acquisition and Retention basically consists of five elements: attracting, selecting, engaging, developing, and retaining employees. It is generally concerned with identifying talent gaps, succession planning, and retaining talented employees through a variety of initiatives and strategies.

An employee's knowledge and skills are very important weapons that give a company competitive advantage in cut-throat competition. Organizations that manage talent strategically and purposefully follow the steps of attracting and selecting employees, developing them through training programmes, and retaining them through promotions and decent salary hikes. Attracting and retaining talent cannot be the sole responsibility of the HR Department; it should be done at all levels of the hierarchy in the organization. A few important things should be in place for Talent Acquisition and Retention: the support of the CEO and top-level management [2], mentoring by managers during the process, the accountability and support of the executive team, and the support of the HR head; the whole process of Talent Acquisition and Retention should be designed and followed with perfection. Needless to say, the demand for both the quantity and the quality of talented employees will grow worldwide. Companies that have fired employees in the past are already feeling the pinch, as they do not have enough bandwidth to execute. After the recession there has been a strong inclination towards investment in Talent Acquisition and Retention in different types of organizations. Talented employees have also shown an increased inclination towards public-sector jobs after the recession, as government jobs offer job security. In today's post-recession environment, companies have to think more strategically and creatively about how to increase output with less input; the same approach is applicable to managing talent. The main aim of employers is to attain their strategic goals with fewer talented persons, but employers face problems in filling critical positions that require people with specific skill sets and experience, as such people are in short supply. This leads to frustration for individuals as well as employers. The economic downturn has allowed companies to streamline their processes and focus on the acquisition and retention of employees. Motivation and engagement are the keys to Talent Acquisition and Retention [3], and in motivating employees one important factor is the difference in preferred job characteristics between genders and among different age groups. As an illustration, it is generally noticed that "equity at the workplace" ranks high on the priority list of female employees, whereas "high standards" matter more amongst males. Engaging talented employees requires positive leadership, which helps engage them and convinces them to stay. Engaged employees consistently report higher job satisfaction and have confidence in the future of the organization. Motivating talent after a recession is not an easy task, but top management can help talent grow by exposing them to new opportunities and channelling their potential towards the company's strategic goals.

II. CURRENT TREND

HR managers are busy finding innovative ways to hunt for key players and best performers and to retain them in the company. Earlier, whenever a recession hit, too many employees were retrenched; after the recession, the gap between workforce requirement and availability became wide.

G.H. Raisoni Institute of Engineering and Technology, Pune 411014, Maharashtra, India

[email protected]

International Journal of Advances in Computing and Management, 2(1) January-December 2013, pp. 38-41 Copyright @ DYPIMCA, Akurdi, Pune (India) ISSN 2250 - 1975


As there is a shortage of talent in the market, a war begins to fetch the best workers. This leads to long-term HR planning related to human capital management. Key players should be retained in spite of any kind of recession in the market.

The workforce is the greatest asset of any organisation [1]. It is the need of the hour to plan and implement Talent Acquisition and Retention in order to gain a competitive advantage over others. The customer is king, so employing an excellent workforce is essential; hence organizations should attract, motivate, nurture, and retain talent.

Fig. 1 Talent Acquisition and Retention Cycle

III. TALENT MANAGEMENT: A CHALLENGE

Talent Acquisition and Retention is about selecting a group of key employees with high potential to develop, learn, and grow in the organization; it therefore means having a strategic group of employees who can generate, design, and develop ideas that help the organization survive future recessions successfully. Top management has to empower these talents to change the organization and has to support their efforts in pushing new ways of doing business through the organization; the talents need this support from the top.

A. Cost

Due to cost cutting it is very difficult to recruit highly paid workers, but "if we pay peanuts, only monkeys can be hired." So, after going through the whole recruitment procedure, HR managers end up spending a great deal of time and energy, as well as the company's money, to hire talent. This investment is a sound decision only if the hires stay in the organisation long enough for the company to reap the benefits of their calibre.

B. Growth & Progress

In this globalized world, the initiative of an organization to hire, build, and nurture its talent will decide its existence and competitiveness in the economy. The role of talented employees in organizational growth is remarkable, and this whole network tends to expand if these key players are retained in the organization.

All the above evidence clearly signifies that Talent Acquisition and Retention is an obligation on current organizations if they are to sustain themselves in the market. Auditing human resources and building up human capital through talent acquisition and retention is a vital component of this.

C. Filling The Talent Pool

Intense cost pressure from the market and demanding customers are major factors behind the urgent need to fill an organization's talent pool. Outsourcing recruitment is a threat, because the consultants are not aware of the core skills that a technical person can judge; they merely search for the key skills in the few lines defined by the HR manager in a meeting with the consultant. These consultants generally try to find personnel on the Naukri portal by scrutinising the selective characteristics mentioned by the HR manager. Recruitment outsourcing companies are not even able to screen the workers properly, as they are not aware of the company's work culture and its actual requirements.

Proper screening is thus possible only if the in-house HR department sits with the technical department to scrutinize applicants, rather than relying on a third party. An intelligent brain is not the only requisite for filling a position; the talented person should also be committed and stable, as that is a very important aspect of retaining an employee.

IV. STRATEGIES FOR TALENT ACQUISITION AND RETENTION

Organizations have to form a bond with talented employees that goes beyond the traditional employer-employee and workplace relationship. Furthermore, the available talent in the organisation should be the basis for corporate strategies related to the acquisition and retention of human resources. The talent management and retention strategies are as follows:

A. Employee - Employer Relationship Management

The relationship between talented employees and the organisation should be amicable and friendly, so that the expression of talented employees is not suppressed by the traditional pattern of relations between employer and employees.

Talented employees always have a vision beyond the immediate, which pays off in the long term; for operational planning it may at times sound impossible, but with continuous monitoring and strategic planning it is always beneficial and gives a competitive advantage in the market.

B. Business Partner

Key players should be given the job title of "Business Partner" [4] so that they have a sense of belonging to the organization. This acts as a catalyst, because they can readily see that they are not servants but decision makers and partners in sharing the profit.

C. Power

Talented workers should be allowed to take part in board meetings and to express their opinions on decisions in any matter related to their area of expertise. The decisions taken by these talented employees should be treated as firm opinions, and top-level managers should implement them after discussion.

D. Company Representative

The organization should encourage talented employees to represent the company in various negotiations, committees, celebrations, and arguments.

E. Succession Planning

Companies should plan the careers of talented employees on a fast-track basis, and their jobs should be challenging and competitive. Succession planning should focus on the following points:

1) Identifying the key players.

2) Giving them responsibilities related to bigger roles in the near future, so that they are prepared for those responsibilities.

3) Ensuring that top-level managers focus on the overall development of these potential leaders.

4) Building a database that keeps a record of the key employees.

F. Recognition

Employees' contributions should be recognized, and they should be rewarded at various functions. Beyond felicitation, they should also be given fast-track promotions.

G. Creative Environment at Work Place

The organization should plan the jobs of talented employees so that they are given challenging, innovative, and creative work, in order to utilize their skills and potential to the fullest.

Talented employees always prefer change and should be kept away from monotony. Their grasping power is immense, and this boon at times tends to become a curse; at such times the organization should initiate, encourage, and introduce change, and also assign certain challenging projects with flexibility. This will enhance not only their performance but also organisational efficiency.

H. Liberty To Innovate And Implement

Talented employees should be given freedom with respect to their working hours, rules and regulations, workplace, working methods, and the nature of their work [5]. They can do wonders with their creativity, and this will ultimately boost sales and build goodwill in the market.

I. Training And Development

Key players should always be encouraged to learn new skills and techniques, and they should be guided and provided with excellent training programmes to groom their technical and presentation skills. They should also be encouraged to train team members and to impart their skills to them, as in doing so the talented employees will gain the abilities of a leader.

V. RESULTS

The combined effects of changing demographics and increasing globalisation are causing organisations to widen their talent pools. After a period of economic downturn, businesses should not impose rigorous retrenchment or layoffs, or introduce cutbacks in training and development. The war for talent will not stop, and once the downturn is over the effect will hit employers who have been overly cautious. It is a herculean task to attract talented workers. Forward-thinking employers will use the momentum of the downturn to ensure that they understand what makes talent stay and blossom in their organisation, and they should make specific plans to identify and develop these individuals.

Businesses must ensure that they are able to produce quickly, on an annual basis, a comprehensive and validated overview of the current and future talents (and gaps) in their organisation. They require a Talent Acquisition and Retention tool that measures both behaviour and personality, and they must ensure that the tool is fair, which means culturally validated, using multi-rater input, and with clear norms and benchmarks (at the individual and organisational level). Human capital management, i.e. Talent Acquisition and Retention, can be ensured only through proper auditing of the available talent and workforce of an organisation [6]. We all know that organisations with poor Talent Acquisition and Retention lost talented personnel through impulsive layoffs and retrenchment, whereas a few organisations benefitted because they held on to their talent during the recession. Organisations are now paying the price for a lack of focused talent management activities during the downturn.

The issue with many companies is that they put a lot of effort into attracting employees but spend very little time retaining and developing the talent. Talent management must be worked into the business strategy and implemented as a daily routine throughout the company as a whole. Acquiring and retaining talented personnel is not only the job of the human resource department; it must be practised at all levels of the organization. The business strategy must make line managers responsible for developing the skills of their immediate subordinates; similarly, senior managers should mentor the line managers, and top-level managers should retain their talented middle-level managers through succession plans and fast-track promotions. Talented employees should be trained to build up their potential so that it can be channelled towards attaining organizational goals.

The whole idea of talent acquisition is complete only when the talent is retained in the organization. Talent can be retained if it is noticed, recognized, and nurtured carefully. Once talented employees are retained, this asset becomes remarkable for the success of the organization.


VI. CONCLUSIONS

This paper is an overview of the past, present, and future of Talent Acquisition and Retention. It takes a wide view of talent management, as the biggest challenge before HR professionals in today's world is to retain talent and maintain a motivated and contented workforce. Cut-throat competition in the era of globalization, with its emphasis on customer care and paradigm shifts in information technology and IT-enabled services, has necessitated that organizations focus on the management of skilled employees, talented workers, and knowledge workers. Corporate and HR professionals are busy devising new and innovative management practices and ideas for keeping their personnel motivated, satisfied, and always available to give their best efforts towards achieving organizational goals and objectives.

Keeping the above in view, new ideas and practices have been developed and are in use in organizations: for example, fast-track promotions, the assignment of challenging tasks with special remuneration, and giving key players the responsibility of business partner, all in order to retain and motivate the best talents.

Top management has a very important role for the organization during a recession. Their communication is essential, and their trust in the talents is also very important. During a recession organizations do not hire new employees, but recognized talents are always welcome. Organizations that empower their talents have a better chance of coming through the recession successfully, with new leaders evolving. Talents know their own qualities; if they feel there is a better possibility of developing their skills and competencies elsewhere, they will look for the opportunity to show their potential in another organization. It is the task of Human Resources to support the role of Talent Acquisition and Retention and to demonstrate its value to the organization.

REFERENCES
[1] Thomas and S. Cantrell (2002). The mysterious art and science of knowledge-worker performance. MIT Sloan Management Review, 44(1), 23–30.
[2] Smith, R.C. (2009). Greed is Good. Wall Street Journal, p. W1.
[3] Lengnick-Hall, M.L. and Andrade, L.S. (2008). Talent staffing systems for effective knowledge management. In V. Vaiman and C.V. Vance (eds), Smart Talent Management: Building Knowledge Assets for Competitive Advantage, pp. 33–65. Elgar Publishing, UK.
[4] Jones, R. (2008). Social capital: bridging the link between talent management and knowledge management. In V. Vaiman and C.M. Vance (eds), Smart Talent Management: Building Knowledge Assets for Competitive Advantage, pp. 217–233. Elgar Publishing.
[5] Hills, A. (2009). Succession planning – or smart talent management? Industrial and Commercial Training, 41(1), 3–8.
[6] Fegley, S. (2006). 2006 Talent Management Survey Report. Alexandria, VA: Society for Human Resource Management.


The Impact of Knowledge Management Practices in Improving Student Learning Through E-Learning

Gopal S. Jahagirdar

Abstract— This paper is about knowledge management in education: first, how to create quality knowledge through the e-learning environment, which is positively related to students' perceptions of their learning outcomes; and secondly, how to develop communities of practice to ensure the effective transfer of tacit knowledge so as to improve student learning. An effective knowledge management system must address both the creation and the transfer of explicit as well as tacit knowledge. This paper proposes that tacit knowledge must be converted into high-quality explicit knowledge through the e-learning environment. Success in converting an educator's tacit knowledge into explicit knowledge, to be internalized by the learner as tacit knowledge, depends very much on information quality as the medium of the conversion process. Thus, in this paper, information quality is an essential concept to examine in the conversion process, to ensure that learners are able to derive quality tacit knowledge from this information. Information quality is always relative and depends on the individual or group of students evaluating it; any standardization of information quality therefore has to match the cognitive structures of a considerably large group of students. This paper provides an empirical investigation of the relationship between information quality and student learning outcomes.

Keywords— Knowledge Management (KM), e-Learning, tacit knowledge.

I. INTRODUCTION

Knowledge Management (KM) consists of a range of practices used in an organization to create, capture, collect, transfer, and apply what people in the organization know, and how they know it. It has been an established discipline since late 2000, with a body of university courses and both professional and academic journals dedicated to it (Stankosky, 2005). Knowledge management began in the corporate sector, and many large companies have adopted it.

Organizations committed to education, i.e., schools, colleges, and universities, are charged with achieving a number of educational objectives. One of these objectives is to transfer knowledge to students (through exchanges between students and educators, between students and books or other resources, and among students themselves). As organizations, educational institutions face challenges about how to share information and knowledge among the people within them. This is the central focus of the paper.

Knowledge management builds upon a human-centered approach that views organizations as complex systems springing from the unique organizational contexts in which they are developed. As knowledge management is still a budding organizational practice, there is no agreed-upon definition for it. It is therefore generally described as broadly as possible, such as in the following definition specified by Prusak (1997): knowledge management is any process or practice of creating, acquiring, capturing, sharing, and using knowledge, wherever it resides, to enhance learning and performance in organizations. Knowledge starts as data: raw facts and numbers. Everything outside the mind that can be manipulated in any way can be defined as 'data'. Information is data put into a context of relevance to the recipient; it is a collection of messages, readily captured in documents or in databases. When information is combined with experience, understanding, capability, judgment, and so on, it becomes knowledge. Knowledge can be highly subjective and hard to codify. Knowledge can be shared with others by exchanging information in appropriate contexts.

E-learning systems are computerized systems in which the learner's interactions with learning materials (e.g., assignments and exercises), instructors, and/or peers are mediated through technology. Owing to the promise of flexibility and of reduced downtime and travel expenses, there has been a recent influx of e-learning activities in corporations as well as in educational institutions. E-learning technology has been evolving separately from knowledge management technology. The distinction between e-learning and knowledge management technologies is that the former, as a system, delivers information and content to students using ICTs, whereas the latter focuses on the management and sharing of knowledge. There have been recent investigations into the integration of these technologies in the knowledge management direction (Barron, 2000; Allee, 2000). In this paper, the focus is on knowledge creation and transfer, which is of greater value than information delivery through the e-learning system.

II. NEED FOR THE PAPER

In this study, the contexts in which knowledge is shared between educators and students are the e-learning environment and communities of practice. Knowledge management in education can be thought of as a framework or an approach that enables people within an organization to develop a set of practices to collect information and share what they know, leading to action that improves services and outcomes. For educational institutions, the full promise of knowledge management lies in its opportunities for improving student outcomes. The ultimate benefit of this, of course, accrues to students, educators, and the education community as a whole.

IBMR Chinchwad, University of Pune,

Pune, Maharashtra, India

[email protected]

International Journal of Advances in Computing and Management, 2(1) January-December 2013, pp. 42-46 Copyright @ DYPIMCA, Akurdi, Pune (India) ISSN 2250 - 1975


Educational institutions are under tremendous pressure for increased accountability from external and internal sources. External pressures raised by stakeholders like employers, government agencies (e.g. DTE, AICTE), and parents for measurable improvements in educational institutions are mounting and demand for information about student learning outcomes is escalating.

Internally, educational institutions are asking themselves difficult questions about accountability: for example, how can we improve student learning outcomes? In this climate of external and internal demands for accountability and for improvements in student learning outcomes, schools, colleges, and universities, as organizations committed to educational missions, must ensure that students are learning by acquiring knowledge in the most efficient and effective way. Institutions must also have the ability to demonstrate enhancement of student learning and development. Thus, educational institutions may find it beneficial to adopt knowledge management programs to improve their performance and outcomes.

The crucial challenge for educational institutions is to improve student learning and outcomes: how effective are educational, academic, and other programs, inside and outside the classroom, at improving student learning? The practices of knowledge management are particularly helpful here. In this paper, we seek to address the question of how to create and transfer quality knowledge through the practices of knowledge management, and its possible impact on student learning outcomes.

Learning is a process by which students take in information and translate it into knowledge, understanding, or skills. A learning audit is necessary to measure the cognitive and behavioural changes, as well as the tangible improvements in results, during students' learning process (Garvin, 1993). Indeed, learning and academic assessment can be characterized as two sides of the same coin, in the sense that learning involves the detection and correction of errors (Argyris & Schon, 1978).

III. METHODOLOGY

Most knowledge management technologies focus on the actionable application of knowledge (Sallis & Jones, 2002). This notion of knowledge for action directly applies to curriculum development and assessment. The knowledge gained from assessment is used to create and improve upon the curriculum which is comprised of courses, topics, instructional materials, presentations, assignments, etc. The association between knowledge management and assessment is also evident in learning. A major goal in successful knowledge management is to achieve learning by the people in the institution and thus involves the necessity for assessment. By testing student performance and by periodically reviewing their own curricula, schools, colleges, and universities assess what they have learnt in their own institutions. This assessment hopefully prompts students to modify their study behaviour and faculty to refine the materials they present to the class. More importantly, assessment also motivates faculty and administrators to reconsider their policies and practices related to curriculum in order to improve student learning outcomes.

Knowledge management practices can also be applied to e-learning by creating quality learning materials and providing ongoing assessments. For example, every time students read a chapter of the e-learning materials and complete an interactive worksheet, the system in turn provides the educator with ongoing and trend assessment information about each student. This provides timely feedback for the student and the educator. Educators see that the real value is in the assessments that are integrated into the learning process, and in the information about patterns of student learning. They can find out which e-learning materials and assignments were most appropriate and which ones were most troubling for specific groups of students in achieving their learning outcomes. Once the educators are provided with this information, they can adapt their pedagogy and content in ways that make sense for their students to improve their learning outcomes. If they have access to a collaborative team discussing these issues school-wide, then the knowledge gained is shared amongst other educators, which allows them to determine ways to improve student learning outcomes.

Different learning and teaching strategies are effective to varying degrees for different groups of students. Knowledge management practices seek to help educators and faculty gather data and share information about which teaching approaches are most effective for different groups of students in specific environments. Making information available in a timely way to the people who need it means that important discussions among faculty can begin: Is it better to maintain consistent teaching styles and help all students perform within them? Should teaching styles be revised based on who is in the class? Teaching and learning styles are the behaviors or actions that educators and learners exhibit in the learning exchange. Teaching behaviors reflect the beliefs and values that educators hold about the learner's role in the exchange (Heimlich & Norland, 2002). Learners' behaviors provide insight into the ways learners perceive, interact with, and respond to the environment in which learning occurs (Ladd & Ruby, 1999). Thus, the information gained will inform educators to adopt or adapt appropriate teaching and learning strategies for different groups of students to improve their learning outcomes.

Given the information, educators and faculty can discuss these kinds of questions within their own organizational context, design a series of interventions or a revised curriculum based on the needs of their students, gather outcome information again, review the results, and share their results among a wider circle of colleagues. For educators and faculty as well as students, the knowledge management process promotes participation, interaction, and, most importantly, student learning.

IV. THE POSSIBLE IMPACT OF KNOWLEDGE MANAGEMENT PRACTICES ON LEARNING OUTCOMES

Learning outcomes are statements that specify what learners will know or be able to do as a result of a learning activity. Outcomes are usually expressed as knowledge, skills, or attitudes (Phillips, 1994). When questions are raised about academic standards, they are often associated with assessment practices, in particular student grading. Of course, the assurance of academic standards embraces a wide range of activities beyond the assessment of student learning. However, assessment and grading practices are perhaps the most important and visible safeguard. The role of assessment in assuring academic standards is likely to be further highlighted as tertiary institution entry pathways and the modes of student participation and engagement with learning resources diversify: the maintenance of standards through entry pre-requisites and 'time spent on task' are far less relevant mechanisms for ensuring standards than they once were. The measurement and reporting of student outcomes (their knowledge, skills, achievement, or performance) is now a major reference point for academic standards. The experience of academic staff directly involved in teaching and assessing student learning is also central to determining and monitoring standards. Ultimately, individual academic staff and their academic judgment define and protect standards through the ways in which they assess and grade the students they teach. Sound processes for defining and monitoring academic standards will directly support the quality of teaching and learning by making the goals and standards clearer: students who understand goals and standards, and who are encouraged to study towards them, are likely to have better learning outcomes.

V. CONCLUSIONS

From a knowledge management perspective, e-learning is a system for the generation, codification, and representation of knowledge. E-learning tools provide a context for individual and group learning. Educators construct, codify, and represent knowledge as learning materials and store them in the repositories of the e-learning system. Students access the e-learning system, and information is transferred to the cognitive structures of individual students or groups of students.

Knowledge generated and represented by educators at this stage must be able to correspond to the cognitive structures of a significantly large group of individuals. This is necessary so that students can acquire the requisite knowledge to achieve their learning outcomes.

In e-learning, the course material/content can consist of both printed and digital material. Thus the selection, production and adaptation of course content are of major importance to the quality of e-learning. Course content can be produced by publishers, individual educators or by a group of course developers. When dealing with complex digital media, a team of production experts is often needed. In some cases, learners have become the producers of their own learning material.

REFERENCES
[1] Alavi, M. (2000) 'Managing organizational knowledge' in Zmud, R.W. (ed.) Framing the domains of IT management. Cincinnati, OH: Pinnaflex Educational Resources, Inc., pp. 15-28.

[2] Alavi, M. and Tiwana, A. (2003) ‘Knowledge Management: The Information Technology Dimension’ , in Easterby-Smith, M. and Lyles, M. A. (eds.) The Blackwell handbook of organizational learning and knowledge management. Oxford: Blackwell Pub, pp. 105.

[3] Allee, V. (2000) ‘eLearning is Not Knowledge Management’ ,http://www.linezine.com/2.1/features/vaenkm.htm

[4] Allen, J. and Bresciani, M. J. (2003) ‘Public institutions, public challenges: On the transparency of assessment results’ , Change, 35, pp. 21-23.

[5] Ally, M. (2002) ‘Designing and managing successful online distance education courses’ , Workshop presented at the 2002 World Computer Congress, Montreal, Canada.

[6] Anderson, J. C. and Gerbing, D. W. (1982) ‘Some methods for respecifying measurement models to obtain unidimensional construct measurement’ , Journal of Marketing Research, 19, pp. 453-460.

[7] Angelo, T. A. (1999) ‘Doing assessment as if learning matters most’ , AAHE Bulletin, 51(9), pp. 3-6, http://www.aahea.org/bulletins/bulletins.htm

[8] Appleby, D. C. (2003) ‘The first step in student-centered assessment: Helping student understand our curriculum goals’ , American Psychological Association, http://www.apa.org/ed/helping_students.html

[9] Argyris, C. and Schon, D. (1978) Organisational Learning: A Theory of Action Perspective. Reading,: Addison Wesley.

[10] Aron, A. and Aron, E. J. (1999) Statistics for psychology. 2nd ed. Upper Saddle River, N.J.: Pearson Prentice Hall.

[11] Barr, R., McCabe, R. and Sifferlen, N. (2001) ‘Defining and Teaching Learning Outcomes’ http://www.league.org/league/projects/lcp/lcp3/Learning_Outcomes.htm

[12] Barron, T. (2000) ‘A Smarter Frankenstein: The Merging of E-Learning and Knowledge Management’ , ASTD Learning Circuits, http://www.astd.org/LC/2000/0800_barron.htm

[13] Bassi, L. and Ingram, P. (1999) ‘Harnessing the Power of Intellectual Capital’ in Cortada, J. and Woods, J. A. (eds.) The Knowledge Management Yearbook 1999 – 2000. Boston: Butterworth Heinemann, pp. 422 - 31.

[14] Berry, B., Johnson, D. and Montgomery, D. (2005) ‘The power of teacher leadership’ , Educational Leadership, 62(5), pp. 56.

[15] Brandt, R. M. (1958) ‘The accuracy of self estimates’ , Genetic Psychology Monographs, 58, pp. 55-99.

[16] Brown, J. S., (1998) ‘Internet technology in support of the concept of ‘communities-of practice’: the case of Xerox’ , Accounting, Management and Information Technologies, 8, pp.227-236.

[17] Brown, J. S. and Duguid, P. (1991) ‘Organisational learning and communities-of-practice: toward a unified view of working, learning and innovation’ , Organisation Science, 2(1), pp.40–57.

[18] Brown, J.S. and Gray, S. (1998) ‘The people are the Company, in Fast Company’ http://www.fastcompany.com/online/01/people.html

[19] Burgess, J. and Russell, J. (2003) ‘The effectiveness of distance learning initiatives in organizations’ , Journal of Vocational Behaviour, 63 (2), p. 289-303

[20] Buysse, V., Sparkman, K. L. and Wesley, P. W. (2003) ‘Communities of practice:Connecting what we know with what we do’ , Exceptional Children, 69(3), pp. 263–277.

[21] Carmines, E. G. and Zeller, R. A. (1979) Reliability and Validity Assessment (Quantitative Applications in the Social Sciences). Beverly Hills, Calif.: Sage.

[22] Carroll, J. B. (1963) ‘A model of school learning’ , Teachers College Record, 64, pp. 723-733.

[23] Chappuis, S. and Stiggins, R. J. (2002) ‘Classroom assessment for learning’ , EducationalLeadership, 60, pp. 40-43.

[24] Cohen, J., Cohen, P., West, S. G. and Aiken, L. S. (2003) Applied multiple regression / correlation analysis for the behavioral sciences (3rd ed.) Mahwah, NJ: Erblaum.

[25] Coladarci, A. P. (1959) ‘The Teacher as Hypothesis-Maker’ , California Journal for Instructional Improvement, summer, pp. 3-6.

[26] Converse, J. M. and Presser, S. (1989) Survey questions: Handcrafting the standardized questionnaire. Newbury Park, CA: Sage.

[27] Cooper, D. R. and Schindler, P. S. (2003) Business research methods. 8th ed. Boston: McGraw-Hill Irwin.

[28] Corrall, S. (1999) ‘Knowledge Management: Are we in the Knowledge Management Business?’ http://www.ariadne.ac.uk/issue18/knowledge-mgt/

[29] Crosby, P. B. (1996) Reflections on quality. New York: McGraw-Hill.


[30] Crow, S. D. (2000) ‘The changing role of accreditation’ [Letter to the editor], The Chronicle of Higher Education. http://chronicle.com/article/The-Changing-Role-of/27577

[31] Davenport, T.H. and Volpel, S. C. (2001) ‘The rise of knowledge towards attention management’ , Journal of Knowledge Management, 5(3), pp. 212-221.

[32] Davis, A. (2004) ‘Developing an infrastructure for online learning’ , Athabasca University. http://cde.athabascau.ca/online_book/ch4.html

[33] Ekwensi, F., Moranski, J. and Townsend-Sweet, M, (2006) ‘Instructional Strategies for Online Learning’ , E-Learning Concepts and Techniques, Bloomsburg University of Pennsylvania http://iit.bloomu.edu/Spring2006_eBook_files/chapter5.htm

[34] Ellis, K. (2001) ‘Sharing best practices globally’ , Training, 38(7), pp. 32.

[35] Garvin D. (1993) ‘Building a Learning Organization’ , Harvard Business Review July-August, pp. 78-91.

[36] Hair, J. F., Jr., Black, W. C., Babin, B. J. Anderson, R. E., Tatham, R. L. (2006) Multivariate data analysis. 6th ed. Upper Saddle River, N.J.Pearson Prentice Hall, Pearson international education.

[37] Halonen, J. S., Appleby, D. C., Brewer, C. L., Buskist, W., Gillem, A. R., Halpern, D., Lloyd,

[38] Heimlich, J. E., Norland, E. (2002) ‘Teaching Style: Where Are We Now?’ , New Directions for Adult and Continuing Education, 93, pp. 17-25.

[39] Hildreth, P. and Kimble, C. (2002) ‘The duality of knowledge’ , Information Research, 8(1). http://informationr.net/ir/8-1/paper142.html

[40] Hildreth, P., Kimble, C. and Wright, P. (2000) ‘Communities of Practice in the Distributed International Environment’ , Journal of Knowledge Management, 4(1), pp. 27-38.

[41] Hislop, D., Newell, S., Scarbrough, H. and Swan, J. (2000) ‘Networks, Knowledge and Power: Decision Making, Politics and the Process of Innovation’ , Technology Analysis and Strategic Management, 12(2), pp. 399-411.

[42] Howell, S. L., Williams, P. B. and Lindsay, N. K. (2003), ‘Thirty-two Trends Affecting Distance Education: An Informed Foundation for Strategic Planning’ , http://www.emich.edu/cfid/PDFs/32Trends.pdf

[43] Jackson, D. N. (1969) ‘Multimethod factor analysis in the evaluation of convergent and discriminant validity’ , Psychological Bulletin, 72, pp. 30-49.

[44] James-Gordon, Y., Young, A. and Bal, J., (2003), ‘External environment forces affecting eLearning provider’, Marketing Intelligence & Planning, 21(3), pp. 168-172.

[45] Johnson, C. (2001) ‘A survey of current research on online communities of practice , The Internet & Higher Education, 4, pp. 45-60.

[46] Joreskog, K. G. (1971) ‘Statistical analysis of sets of congeneric tests’ , Psychometrika, 36, pp. 109-133.

[47] Joy, E. H. and Garcia, F. E. (2000) ‘Measuring Learning Effectiveness: A New Look at No- Significant-Difference Findings’ , Journal of the Asynchronous Learning Network, 4(1).

[48] Kimble, C., Hildreth, P. and Wright, P. (2001) 'Communities of practice: going virtual' in Malhotra, Y. (ed.) Knowledge management and business model innovation. Hershey, Medford, NJ and London: Idea Group Publishing, pp. 220–234.
[50] Lave, J. and Wenger, E. (1991) Situated Learning: Legitimate Peripheral Participation. Cambridge: Cambridge University Press.
[51] Lee, Y. W., Strong, D. M., Kahn, B. K. and Wang, R. Y. (2002) 'AIMQ: A methodology for information quality assessment', Information and Management, 40(2), pp. 133-146.

[52] Lewin, C. (2005) ‘Elementary quantitative methods’ in Somekh, B. and Lewin, C. (eds.) Research methods in the social sciences. London: Sage Publications, pp. 215-225.

[53] Liedtka, J. (1999) ‘Linking Competitive Advantage with Communities of Practice’ , Journal of Management Inquiry, 8, pp. 5-16.

[54] Louis, K. S. and Marks, H. M. (1998) ‘Does professional learning community affect the classroom? Teachers’ work and student experiences in restructuring schools’ , American Journal of Education, 106(4), pp. 532–575.

[55] Lowman, R. L. and Williams, R. E. (1987) ‘Validity of self-ratings of abilities and competencies’, Journal of Vocational Behaviour, 31, pp. 1-13.

[56] Malhotra, N. (2006) Marketing Research: An Applied Orientation and SPSS 14.0 Student CD.5th ed. Upper Saddle River, NJ: Pearson/Prentice Hall.

[57] Marwick, A. D. (2001) ‘Knowledge management technology’ , IBM Systems Journal, 40(4), pp. 814-830.

[58] McDermott, R. (2001) ‘Knowing in Community: 10 Critical Success Factors in Building Communities of Practice’ , http://www.co-I-l.com/coil/knowledg garden/cop/knowing.shtml

[59] McDermott, R. (2002) ‘Measuring the impact of communities: How to draw meaning from measures of communities of practice’ , Knowledge Management Review, 5(2), pp. 26-29.

[60] McKenney, K. (2003) ‘The learning-centered institution: Key characteristics’ , Inquiry &Action, 1, pp. 5-6.

[61] Miller, E. and Findlay, M. (1996) ‘Australian Thesaurus of Education Descriptors’ , Australian Council for Educational Research, Melbourne.

[62] Mitchell, J., Wood, S. and Young, S. (2001) ‘Communities of Practice: Reshaping Professional Practice and Improving Organisational Productivity in the Vocational Education and Training (W ET) Sector’ , Australian National Training Authority. http://www.reframingthefuture.net/docs/2003/Publications/4CP_cop.pdf

[63] Murray, D. (2001) ‘E-Learning for the Workplace: Creating Canada's lifelong learners’, Conference Board of Canada, pp. 3. http://www.hrsdc.gc.ca/eng/hip/lld/olt/Skills_Development/OLTResearch/learn_e.pdf

[64] NATIONAL GOVERNORS ASSOCIATION (2002) ‘A Vision of E-Learning for American's Workforce’ , Commission on Technology & Adult Learning, pp. 4. Available from: http://www.nga.org/cda/files/ELEARNINGREPORT.pdf

[65] Newmann, F. M. (1993) ‘Beyond common sense in educational restructuring: the issues of content and linkage’ , Educational Researcher, 22, pp. 4-13.

[66] Nonaka, I. (1991) ‘The knowledge creating company’ , Harvard Business Review, 69, pp. 96-104.

[67] Nonaka, I. (1994) ‘A Dynamic Theory of Organizational Knowledge Creation’ , Organization Science, 5(1), pp. 14-37.

[68] Nonaka, I. and Nishiguchi, T. (2001) ‘Social, Technical, and Evolutionary Dimensions of Knowledge Creation’ , in Nonaka, I. and Nishiguchi, T. (eds.) Knowledge Emergence: Social,Technical, and Evolutionary Dimensions of Knowledge Creation. New York: OxfordUniversity Press, pp. 286-289.

[69] Nonaka, I. and Takeuchi, H. (1995) The knowledge creating company: how Japanese companies create the dynasties of innovation. Oxford: Oxford University Press.

[70] O'Dell, C. and Grayson, C. J. Jr. (1998) If Only We Knew What We Know. The Free Press.

[71] O’ Leary, D. (1998) ‘Using AI in Knowledge management: Knowledge Bases and Ontology’, IEEE Intelligent Systems, 13, pp. 34-39.

[72] Owen, R. and Aworuwa, B. (2003) 'Return on investment in traditional versus distributed learning', Proceedings of the 10th Annual Distance Education Conference.
Pace, C. R. (1985) The credibility of student self-reports. Los Angeles: University of California, The Center for the Study of Evaluation, Graduate School of Education.

[73] Palmoba, C. A. and Banta, T. W. (1999) Assessment essentials: Planning, implementing, and improving assessment in higher education. San Francisco: Jossey-Bass.

[74] Petrides, L. A. and Nodine, T. R. (2003) ‘Knowledge Management inEducation: Defining the landscape’ , Institute for the Study of Knowledge Management in Education. http://www.iskme.org/what-we-do/publications/km-in-education

[75] Phillips, J. (2003) ‘Powerful learning: Creating learning communities in urban school reform’, Journal of Curriculum and Supervision, 18(3), pp. 240–258.

[76] Polanyi, M. (1966) The Tacit Dimension. Doubleday.
[77] Prusak, L. (1997) Knowledge in Organizations. Oxford: Butterworth-Heinemann, Newton, MA, pp. 135-146.
[78] Ritchie, D. C. and Hoffman, B. (1997) 'Incorporating instructional design principles with the World Wide Web', in Khan, B. H. (ed.) Web-based instruction. Englewood Cliffs, NJ: Educational Technology Publications, pp. 135-138.
RMKB (Research Methods Knowledge Base) http://www.socialresearchmethods.net/kb/sampprob.php


[79] Russell, T. (1997) 'The no significant difference phenomenon'. Raleigh, NC: North Carolina State University, Office of Instructional Telecommunications.

[80] Sallis, E. and Jones, G. (2002) Knowledge Mangement in Education. London: Kogan Page.

[81] Schmitt, N. and Stults, D. M. (1986) ‘Methodology review: Analysis of multitraitmultimethod matrices’, Applied Psychological Measurement, 10, pp. 1-22.

[82] Stenmark (2002) ‘Information vs. Knowledge: The Role of intranets in Knowledge Management’, Proceedings of the 35th Hawaii International Conference on System Sciences.

[83] Stevens, J. (1992) Applied multivariate statistics for the social sciences. Hillsdale, N.J.: L. Erlbaum Associates.

[84] Szulanski, G. (1996) ‘Exploring internal stickiness: Impediments to the transfer of best practices within the firm’ , Strategic Management Journal, 17, pp.27-43.

[85] Taylor, J. (2001) ‘29 Steps to Heaven? Strategies for transforming University teaching and learning using multimedia and educational technology’ , The Open University, pp. 22.

[86] Vescio, V., Ross, D. and Adams, A. (2008) ‘A review of research on the impact of professional learning communities on teaching practice and student learning’ , Teaching and Teacher Education, 24 (1), pp. 80-91.

[87] Wang, R. Y., Storey, V. C. and Firth, C. P. (1995) 'A framework for analysis of data quality research', IEEE Transactions on Knowledge and Data Engineering, 7(4), pp. 623-640.

[88] Wang, R.Y. and Strong, D.M. (1996) ‘Beyond accuracy: what data quality means to data consumers’ , Journal of Management Information Systems, 12(4), pp. 5–34.

[89] Wellman, J. V. (2000) ‘Accreditors have to see past ‘learning outcomes’, The Chronicle of Higher Education. http://chronicle.com/weekly/v47/i04/04b02001.htm

[90] Wenger, E. (1998) Communities of Practice: Learning, Meaning and Identity. Cambridge University Press, New York, NY.

[91] Wenger, E., McDermott, R. and Synder, W. M. (2002) Cultivating Communities of Practice: A Guide to Managing Knowledge. Harvard Business Press, Cambridge, MA.

[92] Wetland, E. J. and Smith, K. W. (1993) Survey responses: An evaluation of their validity. New York: Academic Press.

[93] Wikipedia: Benefits of KM Systems. http://en.wikipedia.org/wiki/Knowledge_management_system

[94] Wilson, T.D. (2002) ‘The nonsense of 'knowledge management'’ Information Research, 8(1). http://InformationR.net/ir/8-1/paper144.html

[95] 'Based KM process that works', Knowledge Management Review, 10, pp. 12-15.

[96] Zander, U. and Kogut, B. (1995) ‘Knowledge and the speed of the transfer and imitation of organizational capabilities: an empirical test’, Organization Science, 6, pp. 76-92.

[97] Zumwalt, K. K. (1982) ‘Research on Teaching: Policy Implications for Teacher Education’ ,in Lieberman, A. and McLaughlin, M. W. (eds.) Eighty-first Yearbook of the National Society for the Study of Education, Part I, Chicago, Ill.: University of Chicago, pp. 215-248.

46 International Journal of Advances in Computing and Management


International Journal of Advances in Computing and Management

Subscription Form

To
The Editor, IJACM
Padmashree Dr D Y Patil Institute of Master of Computer Applications
Sector No. 29, Akurdi, Pune 411 044

Please accept my / our subscription for the year ______________________
Payment enclosed Rs. / $ _____________________

Demand Draft / Cheque Number: _______________________ Dated ___/___/_______

Please invoice me
Name: _____________________________________________________________________

Instt / Univ.: ________________________________________________________________

Address: ___________________________________________________________________

__________________________________________________________________

__________________________________________________________________

Pincode: ___________________________________________________________________

Email: ________________________________________ Cell No: _____________________

Signature ____________________________

Date ________________________________

Annual Subscription: Rs. 1000.00 / Rs. 1200.00* / $ 75.00*

(*Inclusive of Postage Charges)

Note: 1. Please make your Cheques / Demand Drafts payable to ‘Padmashree Dr. D. Y. Patil Institute of MCA’, payable at Pimpri, Pune-18.

2. For outstation cheques, please add Rs. 50.00 towards bank charges; for foreign cheques, please add $ 10.00.


Instructions to Authors

Paper Format:
1. All papers submitted for publication in the journal must be ‘original’ and in the English language.
2. The manuscript must be based on the theme or sub-theme decided for the specific issue of the journal.
3. The manuscript should start with Title, Author name(s), Authors' affiliations and email addresses, Abstract, Keywords, Main Body, Acknowledgements, and References.
4. Previous presentation at a conference, or publication in another language, should be disclosed.
5. The manuscript must strictly adhere to the following formatting style:
- Use double column style with single line spacing
- Title: 24 pt Times New Roman
- Authors' Names: 11 pt Times New Roman
- Affiliations: Italics, 11 pt Times New Roman
- Email Id: 9 pt Courier
- Keyword and Abstract headings: Bold, Italics, 9 pt Times New Roman
- Body of Keywords and Abstract: Bold, 9 pt Times New Roman
- Level 1 heading: 10 pt Times New Roman, capitals
- Level 2 heading: 10 pt Times New Roman, sentence case
- Body: 10 pt Times New Roman
- Table name: 8 pt Times New Roman, capitals, top and centre alignment
- Figure caption: 8 pt Times New Roman, sentence case, bottom and centre alignment

6. The paper should start with an abstract of approximately 150 words and end with a conclusion summarizing the findings of the paper.

7. Manuscripts should not exceed 8000 words / 6 pages with single line spacing. The word count includes everything: title, abstract, text, endnotes, references, tables, figures and appendices.

8. All illustrations should be suitable for printing in black and white, and should be numbered sequentially.

9. The use of capitalization should be kept to a minimum, except in the case of proper nouns, and should be applied consistently.

10. A short bio sketch must accompany each submission along with the author’s institutional affiliation. This must not exceed 50 words.

References:
11. References should appear in a separate bibliography at the end of the paper. Place references in the order of appearance. The bibliography should contain all the works referred to in the text. Web URLs should be given for website references, along with the accessed date.

How to submit:
12. Authors are invited to submit three hard copies of their manuscript, plus the file by e-mail in PDF format, to [email protected]. To defray the cost of mailing, manuscripts submitted for publication will not be returned to the authors.


Review and Evaluation Criteria:
13. Correspondence and proofs for correction, if required, will be sent to the first named author, unless otherwise indicated. Corrected proofs should be returned within a week.
14. Each submission will be reviewed by at least two members of the Program Committee. Submissions will be evaluated on the basis of originality, importance of contribution, soundness, evaluation, quality of presentation, and appropriate comparison to related work.
15. Submission of a paper will be taken to imply that the paper contains original work not previously published, nor being considered elsewhere for publication.
16. Upon acceptance of an article, copyright in the article shall be transferred to the publisher in order to ensure the widest possible dissemination of information.
18. The author will receive one copy of the Journal free of charge, soon after its publication.
19. Unaccepted manuscripts will not be returned. However, after the completion of the review process, if the manuscript is not selected, the author(s) is/are free to submit it elsewhere.

Page charge policy and Reprints:

20. Authors of all accepted manuscripts from outside India are required to pay a page charge of US$ 40 per printed page, beyond six printed Journal pages, to the Journal Publisher, either from their institutions or from their own research grants. Authors from India are required to pay INR 300 per printed page, beyond six printed Journal pages. For the purchase of reprints of their papers, authors should get in touch directly with the Journal Publisher.

About Publisher:

Padmashree Dr. D. Y. Patil Group has over 150 institutes across Mumbai, Pune and Kolhapur in Maharashtra, India, which are renowned for their educational excellence in the fields of Engineering, Computer Applications, Business Management and other disciplines at undergraduate and postgraduate levels. The group established Padmashree Dr. D. Y. Patil Institute of Master of Computer Applications in 2002. The institute is recognized by the All India Council for Technical Education (AICTE), Government of India, New Delhi, approved by the Directorate of Technical Education (DTE), Government of Maharashtra, Mumbai, and affiliated to the University of Pune, Maharashtra, India.


Dr. D. Y. Patil Pratishthan’s PADMASHREE DR. D. Y. PATIL INSTITUTE OF MASTER OF COMPUTER APPLICATIONS

Sector No. 29, Akurdi, Pune, Maharashtra, India.

Pin Code : 411 044

Tel. No. : +91 20 27640998, 25126793

Fax No. : +91 20 27652794

E-mail : [email protected], [email protected]

Website : www.dypimca.org

Concept and Design: Arati Patil, Padmashree Dr. D. Y. Patil Institute of Master of Computer Applications, India