CAPSTONE PROJECT REPORT (Project Term January-April, 2014)
IMAGE ENHANCEMENT TOOL
Submitted by
Zahid Mushtaq Beigh (Registration Number: 11006849)
Dharmendra Kumar (Registration Number: 11002022)
Akash Ashish (Registration Number: 11007077)
Kuldeep Chand (Registration Number: 11000169)
Project Group Number: CSERG-244
Course Code: CSE 445
Under the Guidance of
Manisha Sharma, Assistant Professor
School of Computer Science and Engineering
DECLARATION
We hereby declare that the project work entitled “Image Enhancement Tool” is an authentic record of
our own work carried out as requirements of Capstone Project for the award of B.Tech degree in
Computer Science Engineering from Lovely Professional University, Phagwara, under the guidance of
Mrs. Manisha Sharma, during January to April 2014. All the information furnished in this capstone
project report is based on our own intensive work and is genuine.
Project Group Number: CSERG-244
Name of Student 1: Zahid Mushtaq Beigh
Registration Number: 11006849
Name of Student 2: Dharmendra Kumar
Registration Number: 11002022
Name of Student 3: Akash Ashish
Registration Number: 11007077
Name of Student 4: Kuldeep Chand Khansuli
Registration Number: 11000169
(Signature of Student 1)
Date:
(Signature of Student 2)
Date:
(Signature of Student 3)
Date:
(Signature of Student 4)
Date:
CERTIFICATE
This is to certify that the declaration statement made by this group of students is correct to the
best of my knowledge and belief. They have completed this Capstone Project under my guidance
and supervision. The present work is the result of their original investigation, effort and study.
No part of the work has ever been submitted for any other degree at any University. The
Capstone Project is fit for submission and for the partial fulfilment of the conditions for the
award of the B.Tech degree in Computer Science Engineering from Lovely Professional University,
Phagwara.
Signature and Name of the Mentor
Designation
School of Computer Science and Engineering,
Lovely Professional University,
Phagwara, Punjab.
Date :
ACKNOWLEDGEMENT
Completing a task is never a one-man effort. It is, in fact, the result of the valuable
contributions of a number of individuals, made directly or indirectly, that help in shaping and
achieving an objective.
This project would not have taken shape but for the guidance provided by Mrs. Manisha Sharma,
my mentor, who guided the project, resolved all the technical problems, and helped me
understand the technical aspects of the work. I thank her profusely for the support she
provided to me.
I also express a deep sense of gratitude to Mr. Raman Kumar for the efforts he put into this
project, providing help and guidance at every stage. I am grateful to him for lending his
precious time.
Above all, I wish to express my heartfelt gratitude to my family, who have always been a
singular source of inspiration in all the ventures I have undertaken. Their support in this
endeavour has greatly boosted my self-confidence and will go a long way.
INDEX
1. Introduction…………………………………………………………………………………………………1
1.1 Technologies to be used………………………………………………………......................................2
2. Profile of the problem……………………………………………………………………………………....4
2.1. Aim………………………………………………………………………………………………….....5
2.2. Objective………………………………………………………………………………………………5
2.3. Functions provided in proposed system……………………………………………….........................5
2.3.1. Running The Application
2.3.2. Choosing The Image
2.3.3. Image Editing
2.3.4. Manage images
2.4. Input requirements of the system………………………………………………………………………5
2.5. Output requirements of the system…………………………………………………………………….5
2.6. Users of the systems…………………………………………………………........................................6
2.6.1. Designers
2.6.2. Photographers
2.6.3. Common Users
3. Existing system…………………………………………………………………………………………….6
3.1. Introduction…………………………………………………………………........................................7
3.2. Existing software……………………………………………………………........................................7
3.2.1. Picasa
3.2.2. Adobe Photoshop
3.2.3. Photo Editor
3.3 DFD for Existing Systems……………………………………………………………………………..9
3.4. What’s new in the system to be developed……………………………………………........................9
4. Problem analysis…………………………………………………………………………………………...9
4.1. Product definition…………………………………………………………….....................................10
4.2. Feasibility study…………………………………………………………………................................10
4.2.1. Economic feasibility
4.2.2. Technical feasibility
4.2.3. Operational feasibility
4.3. Project plan…………………………………………………………………………………………...13
5. Software requirement analysis………………………………………………….........................................13
5.1. Technologies to be used……………………………………………………………………………....13
5.1.1. Java
5.1.2. Windows Programming
5.2. General description………………………………………………………………………………….....19
5.2.1. Introduction to java
5.2.2. Why java?
5.2.2.1. Swings
5.2.2.2 Enriched Package for Image Processing
5.3. Specific requirements…………………………………………………………………………………..20
5.3.1. Software requirements
5.3.2. Hardware requirements
6. Design……………………………………………………………………………………..…………….21
6.1. System design.……………………………………………………………………….……………...21
6.2. User interface design.………………….. …………….…………………………………………….22
6.3. Flowcharts…..………………………………………….…………..…...………………………......23
6.3.1. Role of Effects
6.4 Data Flow Diagram Defined………………………………………………………………………...27
6.5 DFD for proposed System…………………………………………………………………………...27
6.5.1 Level 0 DFD
6.5.2 Level 1 DFD
6.5.3 Level 2 DFD
6.5.4 DFD for Distortion Module
6.5.5 DFD for Noise Module
6.6 Input Design…………………………………………………………………………………………30
6.6.1 Image Filters
6.6.2 Colour Filters
6.6.3 INVERT
6.6.4 GRAYSCALE
6.6.5 RGBScale
6.6.6 SHARPEN
6.6.7 EDGE DETECTION
6.6.8 Transform
6.6.9 CROP IMAGE
6.6.10 ZOOM
6.6.11 SHEAR
6.6.12 ROTATE
6.6.13 LOADING AND SAVING AN IMAGE
6.6.14 DISTORTING AN IMAGE
7. Testing …………………..……………………………………………………………………………...38
7.1 Black-box testing:
7.1.1 Functional testing
7.2 Structural testing
7.3 TYPES OF TESTING……………………………………………………………………………….40
7.3.1 Unit Testing
7.3.2 Integration testing
7.3.3 System testing
7.3.4 Acceptance Testing
7.4 Project Testing……………………………………………………………………………………….42
8. Implementation of the Project…………………...………………………………………………………42
8.1. Introduction………………………………………………………………………………………....42
8.1.1. About the java technology
8.1.2. Java standard addition
8.1.3. Java programming language
8.1.4. The java platform
8.1.5. Introduction to AWT
8.1.6. Introduction to Swing
8.1.7. Tools
8.1.7.1. Eclipse
8.2 Post Implementation and Maintenance…………………………………………………………......48
9. Project Legacy………………………………………………………………………………………..48
9.1 Current Status of the project
9.2 Remaining Areas of concern
9.3 Technical and Managerial Lessons learnt
10. Implementation……………………………………………………………………………………..49
10.1 Functions provided in the proposed system
10.1.1 Home Page
10.1.2 Main Frame
10.1.3 File menu
10.1.4 Edit
10.1.5 Effects
10.1.6 Transform
10.1.7 Help
10.1.8 Distortion
10.1.9 Noise
11. Snapshots…………………………………………………………………………………………54
11.1.1 HOME PAGE
11.1.2 MENU PAGE
11.1.3 FILE
11.1.4 OPEN
11.1.5 EDIT
11.1.6 EFFECTS MENU
11.1.7 GREY IMAGE
11.1.8 INVERT
11.1.9 EDGE DETECT
11.1.10 RED EYE VIEW
11.1.11 BRIGHTNESS (INCREASED)
11.1.12 BRIGHTNESS (DECREASED)
11.1.13 BLUE EYE VIEW
11.1.14 SHARPEN
11.1.15 CUSTOM
11.1.16 TRANSFORM MENU
11.1.17 HORIZONTAL STRETCH
11.1.18 HORIZONTAL MIRROR
11.1.19 VERTICAL STRETCH
11.1.20 VERTICAL MIRROR
11.1.21 ROTATE 180 DEGREES
11.1.22 ROTATE 45 DEGREES
11.1.23 SHEAR
11.1.24 ZOOM IN
11.1.25 ZOOM OUT
11.1.26 CENTRAL CROP
11.1.27 HELP MENU
11.1.28 SHORTCUTS
11.1.29 DISTORTION MENU
11.1.30 DIFFUSE
11.1.31 OFFSET
11.1.32 DISPLACE
11.1.33 NOISE MENU
11.1.34 REMOVE NOISE
1. Introduction
Image enhancement is the process of adjusting digital images so that the results are more suitable
for display or further image analysis. For example, we can remove noise, sharpen, or brighten an
image, making it easier to identify key features. An image enhancement tool encompasses the
processes of altering images. A raster graphics editor is used as the primary tool to edit a
digital image: manipulating, enhancing, and transforming it, applying effects and filters, and
converting between file formats. Owing to the popularity of digital cameras, image editing
programs are readily available.
Given enough spare time, one can spend hours enhancing photos: correcting colour and
sharpness, removing digital noise, and so on. For those who do not want to waste time and
wish to edit their photos quickly and efficiently, this image enhancement tool has been
developed. It is easy to use and fixes the most common problems of digital pictures in less
than a minute: dull colours, bad colour balance, digital noise, and poor sharpness.
Just open a picture in the Image Enhancement Tool and edit it using the various features
available in it. After editing, the image can be saved to any drive of the system.
Raster images are stored in a computer in the form of a grid of picture elements, or pixels. These
pixels contain the image's colour and brightness information. Image editors can change the pixels
to enhance the image in many ways. The pixels can be changed as a group, or individually, by
the sophisticated algorithms within the image editors. Camera and computer image editing
programs often offer basic automatic enhancement features that correct colour and brightness;
in addition to these, this tool can also distort an image and remove its noise.
This project aims at creating various effects for processing an image of any common format,
such as .jpg or .gif. Our objective is to give a clear outlook on the various operations and
effects that can be applied to an image to change its original look.
Image Processing is the art and science of manipulating digital images. It stands with one foot
firmly in mathematics and the other in aesthetics and is a critical component of graphical
computer systems. It is a genuinely useful standalone application of Java2D.The 2D API
introduces a straight forward image processing model to help developers manipulate these image
pixels. The image processing parts of Java are buried within the java.awt.image package.
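To make the pixel model above concrete, here is a minimal sketch (not the project's actual code; the class and method names are hypothetical) of reading and rewriting raster pixels with java.awt.image, using a simple invert effect:

```java
import java.awt.image.BufferedImage;

public class PixelDemo {
    // Invert every pixel: each colour channel c becomes 255 - c.
    static BufferedImage invert(BufferedImage src) {
        BufferedImage out = new BufferedImage(
                src.getWidth(), src.getHeight(), BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < src.getHeight(); y++) {
            for (int x = 0; x < src.getWidth(); x++) {
                int rgb = src.getRGB(x, y);
                int r = 255 - ((rgb >> 16) & 0xFF);
                int g = 255 - ((rgb >> 8) & 0xFF);
                int b = 255 - (rgb & 0xFF);
                out.setRGB(x, y, (r << 16) | (g << 8) | b);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // A tiny 2x2 test image with one white pixel.
        BufferedImage img = new BufferedImage(2, 2, BufferedImage.TYPE_INT_RGB);
        img.setRGB(0, 0, 0xFFFFFF);
        BufferedImage inv = invert(img);
        System.out.println(Integer.toHexString(inv.getRGB(0, 0) & 0xFFFFFF)); // prints "0"
    }
}
```

In a real tool, the source image would come from `javax.imageio.ImageIO.read` rather than being constructed in code.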
1.1 Technologies to be used
Java:
Java is a high-level programming language developed by Sun Microsystems. It was originally
called Oak and was designed for handheld devices and set-top boxes. Java is a
platform-independent, secure, object-oriented, scalable, and robust programming language.
It consists of two parts:
The JVM (Java Virtual Machine), the runtime environment that executes Java programs.
The Java API (Application Programming Interface), which consists of the built-in classes used
in Java programs.
JAVA EVENT DELEGATION MODEL:
The concept is quite simple: a source generates an event and sends it to one or more
listeners. In this scheme, the listener simply waits until it receives an event. Once
received, the listener processes the event and then returns.
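The source-and-listener scheme described above can be sketched in a few lines. This is an illustrative example only (the class name and the hand-delivered event are contrived so the sketch runs without a GUI; in a real application the source would be a component such as a Button):

```java
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;

public class EventDemo {
    static String lastCommand = "";

    public static void main(String[] args) {
        // The listener simply waits; once it receives an event, it processes it.
        ActionListener listener = e -> lastCommand = e.getActionCommand();

        // A source (normally a button or menu item) generates an event and
        // sends it to the listener. Here we hand-deliver one to stay headless.
        ActionEvent event = new ActionEvent(new Object(),
                ActionEvent.ACTION_PERFORMED, "clicked");
        listener.actionPerformed(event);

        System.out.println(lastCommand); // prints "clicked"
    }
}
```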
PLATFORM INDEPENDENCE: Java compilers do not produce native object code for a
particular platform but rather ‘byte code’ instructions for the Java Virtual Machine (JVM).
Making Java code work on a particular platform is then simply a matter of writing a byte code
interpreter to simulate a JVM. What this all means is that the same compiled byte code will run
unmodified on any platform that supports Java.
OBJECT ORIENTATION: Java is a pure object-oriented language. This means that
everything in a Java program is an object and everything is descended from a root object
class.
RICH STANDARD LIBRARY: One of Java’s most attractive features is its standard
library. The Java environment includes hundreds of classes and methods in six major
functional areas.
Language Support classes for advanced language features such as strings,
arrays, threads, and exception handling.
Utility classes like a random number generator, date and time functions,
and container classes.
Input/output classes to read and write data of many types to and from a
variety of sources.
Networking classes to allow inter-computer communications over a local
network or the Internet.
Abstract Window Toolkit for creating platform-independent GUI
applications.
Applet is a class that lets you create Java programs that can be
downloaded and run on a client browser.
APPLET INTERFACE: In addition to creating stand-alone applications, Java developers can
create programs that can be downloaded from a web page and run in a client browser.
FAMILIAR C++-LIKE SYNTAX: One of the factors enabling the rapid adoption of Java is the
similarity of its syntax to that of the popular C++ programming language.
GARBAGE COLLECTION: Java does not require programmers to explicitly free
dynamically allocated memory. This makes Java programs easier to write and less prone
to memory errors.
Windows Programming:
Swing:
Swing is a set of program components for Java programmers that provides the ability to create
graphical user interface (GUI) components, such as buttons and scroll bars, that are
independent of the windowing system of a specific operating system. Swing components are used
with the Java Foundation Classes (JFC). Swing is a rich set of components for building GUIs
and adding interactivity to Java applications. It includes all the components you would
expect from a modern GUI toolkit: table controls, list controls, tree controls, buttons, and
labels. The basic architecture of Swing is MVC, and the components are made entirely in Java.
AWT:
Abstract Window Toolkit (AWT) is a set of application program interfaces (APIs) used by Java
programmers to create graphical user interface (GUI) objects, such as buttons, scroll bars,
and windows. AWT is part of the Java Foundation Classes (JFC) from Sun Microsystems, the
company that originated Java. The JFC is a comprehensive set of GUI class libraries that
makes it easier to develop the user interface part of an application program. A more recent
set of GUI interfaces called Swing extends the AWT so that the programmer can create
generalized GUI objects that are independent of a specific operating system's windowing
system.
Types of containers:
The AWT provides four container classes. They are class Window and its two subtypes -- class
Frame and class Dialog -- as well as the Panel class. In addition to the containers provided by the
AWT, the Applet class is a container -- it is a subtype of the Panel class and can therefore hold
components. Brief descriptions of each container class provided by the AWT are provided
below.
Window: A top-level display surface (a window). An instance of the Window
class is not attached to nor embedded within another container. An instance of
the Window class has no border and no title.
Frame: A top-level display surface (a window) with a border and title. An
instance of the Frame class may have a menu bar. It is otherwise very much
like an instance of the Window class.
Dialog: A top-level display surface (a window) with a border and title. An
instance of the Dialog class cannot exist without an associated instance of the
Frame class.
Panel: A generic container for holding components. An instance of the Panel
class provides a container to which components can be added.
2. Profile of the Problem
2.1 Aim
Despite advances in digital photography that improve camera functionality, taking
good-quality pictures remains a challenge for casual photographers. Problems include
incorrect camera settings and poor lighting conditions. Photos may be improved using image
enhancement tools, but manually retouching every single photograph is infeasible. This
application can therefore be used to make imagery easier to interpret and understand
visually. The advantage of digital imagery is that it allows us to manipulate the digital
pixel values in an image.
A low-pass filter is designed to emphasize larger, homogeneous areas of similar tone and
reduce the smaller detail in an image. Thus, low-pass filters generally serve to smooth the
appearance of an image. The Gaussian blur technique is used to reduce the noise present in a
particular image. With this image enhancement tool we can also distort a particular image.
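The low-pass Gaussian blur described above can be sketched with the standard `java.awt.image.ConvolveOp` class. This is an illustrative example under stated assumptions (a fixed 3x3 kernel and hypothetical class names), not the tool's actual implementation:

```java
import java.awt.image.BufferedImage;
import java.awt.image.ConvolveOp;
import java.awt.image.Kernel;

public class BlurDemo {
    // A 3x3 Gaussian kernel (weights sum to 1): a low-pass filter that
    // smooths small detail while preserving larger, homogeneous areas.
    static final float[] GAUSSIAN = {
        1/16f, 2/16f, 1/16f,
        2/16f, 4/16f, 2/16f,
        1/16f, 2/16f, 1/16f
    };

    static BufferedImage blur(BufferedImage src) {
        // Interior pixels are convolved; edge pixels are copied unchanged
        // because of EDGE_NO_OP.
        ConvolveOp op = new ConvolveOp(new Kernel(3, 3, GAUSSIAN),
                ConvolveOp.EDGE_NO_OP, null);
        return op.filter(src, null);
    }

    public static void main(String[] args) {
        // A 3x3 image with a single bright centre pixel: after blurring,
        // the centre is averaged down toward its dark neighbours.
        BufferedImage img = new BufferedImage(3, 3, BufferedImage.TYPE_INT_RGB);
        img.setRGB(1, 1, 0xFFFFFF);
        BufferedImage out = blur(img);
        System.out.println(out.getRGB(1, 1) & 0xFF);
    }
}
```

A larger kernel (5x5 or 7x7) gives a stronger blur and correspondingly more noise suppression.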
2.2 Objective
The objective of the system would be to:
1. Edit the image using various effects, and transform the image vertically and
horizontally.
2. Remove the noise in the image.
3. Distort the image using various types of distortion, such as displace and offset.
4. Obtain a clearer image and then save it.
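As a sketch of the transformation objective above (illustrative only; the class and method names are hypothetical), a horizontal mirror can be built from the standard `AffineTransformOp`:

```java
import java.awt.geom.AffineTransform;
import java.awt.image.AffineTransformOp;
import java.awt.image.BufferedImage;

public class TransformDemo {
    // Mirror an image horizontally: scale x by -1, then shift the result
    // back into the frame by the image width.
    static BufferedImage mirrorHorizontal(BufferedImage src) {
        AffineTransform tx = AffineTransform.getScaleInstance(-1, 1);
        tx.translate(-src.getWidth(), 0);
        return new AffineTransformOp(tx,
                AffineTransformOp.TYPE_NEAREST_NEIGHBOR).filter(src, null);
    }

    public static void main(String[] args) {
        // A 2x1 image with a red pixel on the left; after mirroring,
        // the red pixel should sit on the right.
        BufferedImage img = new BufferedImage(2, 1, BufferedImage.TYPE_INT_RGB);
        img.setRGB(0, 0, 0xFF0000);
        BufferedImage out = mirrorHorizontal(img);
        System.out.println(Integer.toHexString(out.getRGB(1, 0) & 0xFFFFFF));
    }
}
```

Rotation, shear, and stretch differ only in the `AffineTransform` used (e.g. `getRotateInstance`, `getShearInstance`, `getScaleInstance`).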
2.3 Functions Provided in Proposed System
2.3.1 Running The Application: To use the application, the user needs to run it using
Eclipse or NetBeans.
2.3.2 Choosing The Image: Using this desktop application, we can choose any image stored
in any of the drives of the system.
2.3.3 Image Editing: This allows end users such as photographers and designers to edit
their images using the various effects available in this application.
2.3.4 Manage Images: This tool allows users to edit images and, after editing, save them
to different drives, such as the C or E drive. We can also save the image to the
desktop.
2.4 Input Requirements of the system
1. Run the application.
2. Choose the image.
3. Use the effects.
4. Save the image in any drive of the system.
2.5 Output Requirements of the system
1. Load the image for the editing.
2. Save the image.
2.6 Users of the System
The users of this system will mostly be end users who deal with images, such as photo
designers and photographers. When photographers take a picture, there is no guarantee that
the image will be clear. These photographers can use this tool to remove noise and to adjust
the brightness and sharpness of an image.
Following are the users:
2.6.1 Designers
This tool is very useful as it fulfils the requirements of designers. The best option
available in this tool, beyond the basic options found in other tools, is removing noise
from an image. In other words, if a particular image has unwanted colour variation, it can
be removed using the noise-removal option. The tool also has another function called
distortion, which provides various options to distort an image using the Java image
library.
2.6.2 Photographers
The Image Enhancement Tool is very useful for photographers; it acts as a weapon in their
hands. When there is a problem with the camera settings, or the background of the image is
not clear, this tool can be used to change the brightness and sharpness of the image.
Photographers can also remove noise from the image.
2.6.3 Common Users
Apart from expert photographers and designers, there are always curious common users who
like to try their hand at photography. This tool will prove to be an asset to them as
well: they can play with images and explore the different options available.
3. Study of Existing System:
Digital image enhancement and analysis have played, and will continue to play, an important
role in scientific, industrial, and military applications. In addition to these applications,
image enhancement and analysis are increasingly being used in consumer electronics. Internet
users, for instance, rely on built-in image processing protocols such as JPEG and
interpolation, and in the process have become image processing users equipped with powerful
yet inexpensive software such as Photoshop. Users not only retrieve digital images from the
Web but are now able to acquire their own, by use of digital cameras or through digitization
services for standard 35mm analog film. The end result is that consumers are beginning to use
home computers to enhance and manipulate their own digital pictures.
Image enhancement refers to processes seeking to improve the visual appearance of an image.
As an example, image enhancement might be used to emphasize the edges within the image.
This edge-enhanced image would be more visually pleasing to the naked eye, or perhaps could
serve as an input to a machine that would detect the edges and perhaps make measurements of
shape and size of the detected edges. Image enhancement is important because of its usefulness
in virtually all image processing applications.
The aim of the present image enhancement tool is to provide 'better' input for other
automated image processing techniques. It improves the quality (clarity) of images for the
human eye. Enhancement methods include adjusting brightness, sharpening, increasing contrast,
and revealing details. The new system should therefore be capable of reducing noise and
should also provide new options (distortion, etc.). Various algorithms achieve these effects,
revealing very high and very low intensities of the original image, and can adjust their
operation based on the image information (pixels) being processed.
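The brightness and contrast adjustments mentioned above map directly onto the standard `java.awt.image.RescaleOp` class. The following is a minimal sketch with hypothetical class names, not the tool's actual code:

```java
import java.awt.image.BufferedImage;
import java.awt.image.RescaleOp;

public class BrightnessDemo {
    // Rescale every colour sample: new = old * scale + offset.
    // scale > 1 increases contrast; a positive offset raises brightness.
    static BufferedImage brighten(BufferedImage src, float scale, float offset) {
        return new RescaleOp(scale, offset, null).filter(src, null);
    }

    public static void main(String[] args) {
        // A 1x1 mid-dark grey pixel (64, 64, 64), brightened by +64.
        BufferedImage img = new BufferedImage(1, 1, BufferedImage.TYPE_INT_RGB);
        img.setRGB(0, 0, 0x404040);
        BufferedImage out = brighten(img, 1.0f, 64f);
        System.out.println(Integer.toHexString(out.getRGB(0, 0) & 0xFFFFFF)); // prints "808080"
    }
}
```

Results are clamped to the 0..255 range, so repeated brightening saturates at white rather than wrapping around.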
3.2 Existing Software:
There is much existing image editing software. Some packages are described below:
3.2.1 Picasa:
Picasa is free photo management software that helps you instantly find, edit and share all
the photos on your PC. Picasa automatically locates all your photos (even ones you
forgot you had) and sorts them into visual folders organized by name, size, or date. You
can drag and drop to arrange your folders and make albums to create new groups. Picasa
makes sure your photos are always organized. Picasa also makes advanced editing simple
by putting one-click fixes and powerful effects at your fingertips. And Picasa makes it a
snap to share your photos: you can email, upload to an online album, print photos at
home, make gift CDs, and even post photos to Blogger.
3.2.2 Adobe Photoshop :
Adobe Photoshop is a professional image editing software package that can be used by
experts and novices alike. While this handout offers some very basic tips on using the
tools available in Photoshop, more comprehensive guidance can be accessed on the web
or in the help menu of your version of Photoshop. The version used for this tutorial is
Adobe Photoshop CS. The work area can be intimidating because of all the complex
functionality, but with a quick breakdown of the available features and their uses, you will
be ready to navigate the work area with ease. The work area in Photoshop has the following
basic functionality and features:
Menu Bar: where you can access most of the commands and features in Photoshop.
Drawing Palette: where the image being worked on will appear.
Options Bar: a context-sensitive display of tool options that changes as different
tools are selected (display using Window > Options, or click a tool in the toolbox).
Toolbox: for creating and editing images (display or hide using Window > Tools).
Palettes: to monitor and modify images (there are 5 palettes by default).
Palette Well: to organize palettes in the work area. Drag a palette's tab into the
palette well to store it; once there, click the palette's tab to use it.
3.2.3 Photo Editor :
Photo Editor is powerful multifunctional software offering a complete set of image editing
tools. It contains everything a digital camera owner might need to correct or enhance their
photos, with the convenience and professional approach provided by each of the tools. With
Photo Editor, we can remove red eye instantly, enhance the colour of an image, make funny
caricatures, add astonishing lighting effects, and straighten, resample, and crop images. We
will also appreciate the Make Up tool, which offers a complete set of retouching filters to
make the best of portrait photos.
3.3 DFD for existing system :
Fig 3.1 DFD for existing system
3.4 What’s new in the system to be developed:
Despite advances in digital photography that improve camera functionality, taking
good-quality pictures remains a challenge for casual photographers. Problems include
incorrect camera settings and poor lighting conditions. Photos may be improved using image
enhancement tools, but manually retouching every single photograph is infeasible. This Image
Enhancement Tool is useful because it has a feature that can reduce noise. In simple words,
noise can be defined as unwanted variation in brightness or colour; using this image
enhancement tool we can remove it and improve the image. The other feature available in this
tool is distortion, which is not present in the existing image editing tools.
4. Problem Analysis
4.1 Product Definition
A product is a good, idea, method, information, object, or service created as a result of a
process that serves a need or satisfies a want. It has a combination of tangible and
intangible attributes (benefits, features, functions, uses) that a seller offers a buyer for
purchase. For example, a seller of a toothbrush offers not only the physical product but
also the idea that the consumer will be improving the health of their teeth.
Our project is for editing images, which can be in JPG form. The tool not only provides
editing features such as effects (brightness, sharpness, grey image, etc.) but also contains
transformation features, including horizontal stretch, vertical stretch, and mirroring of
the image.
The two newest features present in the tool are noise removal and distortion.
Product Definition Process
Where are we in the market now?
This product is more efficient than existing products: previous systems offered only
certain features, whereas this one introduces features that can remove noise and also
distort the image. Photographers and designers who use our tool will benefit from it and
need not worry about distortion and noise in their images.
More flexible: Our product is more flexible, meaning that if we want to change its
interface it is very easy to do so. We only have to write the code for that change and
introduce it into the main program.
4.2 Feasibility Study
A feasibility study simply determines whether the product is worth developing or not. A
major but optional activity within systems analysis is feasibility analysis. A wise person
once said, "All things are possible, but not all things are profitable." Simply stated, this
quote addresses feasibility. Systems analysts are often called upon to assist with
feasibility analysis for proposed systems development projects. Depending on the results of
the initial investigation, the survey is expanded into a more detailed feasibility study. A
feasibility study is a test of a system proposal according to its workability, impact on the
developing body, ability to meet user needs, and effective use of resources. The objective
of a feasibility study is not to solve the problem but to acquire a sense of its scope.
During the study, the problem definition is crystallized and the aspects of the problem to
be included in the system are determined. Consequently, costs and benefits are estimated
with greater accuracy at this stage. The result of the feasibility study is the formal
proposal: a report, in the form of a formal document, detailing the nature and scope of the
proposed solution.
The proposal summarizes what is known and what is going to be done. It consists of the
following:
Statement of the problem.
Summary of findings and recommendations.
Details of findings.
Recommendations and conclusions.
Once it has been determined that a project is feasible, the analyst can go ahead and prepare the
project specification, which finalizes the project requirements. Generally, feasibility
studies are undertaken within tight time constraints and normally culminate in a written and
oral feasibility
report. The contents and recommendations of such a study will be used as a sound basis for
deciding whether to proceed, postpone or cancel the project. Thus, since the feasibility study
may lead to the commitment of large resources, it becomes necessary that it should be conducted
competently and that no fundamental errors of judgment are made.
The types of feasibility are as follows:
1. Economic Feasibility
2. Technical Feasibility
3. Operational Feasibility
4.2.1 Economic Feasibility:
The term economic feasibility is used to refer to the financial viability of a given business
venture. This is usually a very important study to carry out before starting any business
since the main aim of business is profitability. Economic analysis is the most frequently
used method for evaluating the effectiveness of the candidate system. More commonly
known as cost/benefit analysis, the procedure is to be determining the benefits and
savings that are expected from a candidate and compare them with costs. If benefits
outweigh costs, then the decision is made to design and implement the system. A systems
financial benefit must exceed the cost of developing that system. i.e. a new system being
developed should be a good investment for the organization. Economic feasibility
considers the following:
i. The cost to conduct a full system investigation.
ii. The cost of hardware and software for the class of application.
iii. The benefits in the form of reduced cost or fewer costly errors.
iv. The cost if nothing changes (i.e. the proposed system is not developed).
The proposed system ie, Image Enhancement Tool, is economically feasible because :
i. The application requires very little time to use.
ii. The application will be easy to use and there will be no cost required to run it.
iii. The application will have a GUI and no training cost is required to learn it.
iv. The application will not require expensive hardware to develop.
4.2.2. Technical Feasibility:
A large part of determining resources has to do with assessing technical feasibility. It
considers the technical requirements of the proposed project. The technical requirements
are then compared to the technical capability of the organization. The systems project is
considered technically feasible if the internal technical capability is sufficient to support
the project requirements. The analyst must find out whether current technical resources
can be upgraded or added to in a manner that fulfills the request under
consideration. This is where the expertise of system analysts is beneficial, since using
their own experience and their contact with vendors they will be able to answer the
question of technical feasibility.
The essential questions that help in testing the technical feasibility of a system include
the following:
Is the project feasible within the limits of current technology?
Does the technology exist at all?
Is it available within the given resource constraints?
Technical feasibility centres on the existing computer system (hardware, software, etc.) and
the extent to which it can support the proposed addition. For example, if the current
computer is operating at 80 percent capacity (an arbitrary ceiling), then running another
application could overload the system or require additional hardware. This involves
financial considerations to accommodate technical enhancements. If the budget is a serious
constraint, the project is judged not feasible. In this project, the programming should be
done in such a way that the application is fast and uses few resources.
4.2.3. Operational Feasibility
Operational feasibility is dependent on human resources available for the project and
involves projecting whether the system will be used if it is developed and implemented.
Operational feasibility is a measure of how well a proposed system solves the problems,
and takes advantage of the opportunities identified during scope definition and how it
satisfies the requirements identified in the requirements analysis phase of system
development. Operational feasibility reviews the willingness of the organization to
support the proposed system. This is probably the most difficult of the feasibilities to
gauge. In order to determine this feasibility, it is important to understand the management
commitment to the proposed project.
4.3 Project Plan
Aim:
To develop an image enhancement tool that increases the quality of an image by reducing
the noise in the image, and that provides new options such as inducing distortion in an image.
Output:
The main output of the product is that it can be used by photographers, designers and also
casual users to induce various effects in an image. These professionals can also reduce the
noise of an image, which is one of the common problems with images. Distortion can also be
induced in a particular image by using this product.
Quality Criteria :
Product outcomes define the quality of the product, for example:
Is it better than other products?
Does it provide efficient output?
Does it suit the market need?
Does it fulfil the requirements of an organisation?
Resources :
This defines what type of resources we need to design our product.
For designing our product:
Software designer
Software design tools
Required skills
5. Software Requirement Analysis
5.1 Technologies to be used
5.1.1 Java : Java is a platform-independent, secure, object-oriented, scalable, and
robust programming language. It consists of two parts.
JVM stands for Java Virtual Machine, which is the runtime environment used to
execute Java programs.
The Java API (Application Programming Interface) consists of the built-in
classes used in Java programs.
5.1.2 Windows Programming
Swings:
Swing is a set of program components for Java programmers that provides the ability to create
graphical user interface (GUI) components, such as buttons and scroll bars, that are
independent of the windowing system of a specific operating system. Swing components are
used with the Java Foundation Classes (JFC). Swing offers everything one would expect from
a modern GUI toolkit: table controls, list controls, tree controls, buttons and labels. The basic
architecture of Swing is MVC, and its components are made entirely in Java. Swing in Java is
a rich set of components for building GUIs and adding interactivity to Java applications.
JPANEL: is Swing's version of the AWT class Panel and uses the same default layout.
JFRAME: is Swing’s version of Frame and is descended directly from that class. The
components added to the frame are referred to as its contents; these are managed by the
contentPane. To add a component to a JFrame, we must use its contentPane instead.
JINTERNALFRAME: is confined to a visible area of a container it is placed in. It can be
iconified , maximized and layered.
JWINDOW: is Swing’s version of Window and is descended directly from that class.
Like Window, it uses BorderLayout by default.
JDIALOG: is Swing's version of Dialog and is descended directly from that class. Like
Dialog, it uses BorderLayout by default. Like JFrame and JWindow,
JDialog contains a rootPane hierarchy including a contentPane, and it allows layered and
glass panes.
All dialogs are modal, which means the current thread is blocked until user interaction
with them has been completed. The JDialog class is intended as the basis for creating
custom dialogs; however, some of the most common dialogs are provided through static
methods in the class JOptionPane.
JLABEL: descended from JComponent, is used to create text labels.
The abstract class AbstractButton extends class JComponent and provides a foundation
for a family of button classes, including JButton.
JTEXTFIELD: allows editing of a single line of text. New features include the ability to
justify the text left, right, or center, and to set the text’s font.
JPASSWORDFIELD: (a direct subclass of JTextField) suppresses the display of
input. Each character entered can be replaced by an echo character.
This allows confidential input, for passwords for example. By default, the echo character
is the asterisk.
JTEXTAREA: allows editing of multiple lines of text. JTextArea can be used in
conjunction with class JScrollPane to achieve scrolling. The underlying JScrollPane can
be forced to always or never have either the vertical or horizontal scrollbar;
JBUTTON: is a component the user clicks to trigger a specific action.
JRADIOBUTTON: is similar to JCheckbox, except for the default icon for each class. A
set of radio buttons can be associated as a group in which only
one button at a time can be selected.
JCHECKBOX: is not a member of a checkbox group. A checkbox can be selected and
deselected, and it also displays its current state.
JCOMBOBOX : is like a drop down box. You can click a drop-down arrow and select an
option from a list. For example, when the component has focus,
pressing a key that corresponds to the first character in some entry’s name selects that
entry. A vertical scrollbar is used for longer lists.
JLIST: provides a scrollable set of items from which one or more may be selected. JList
can be populated from an Array or Vector. JList does not support scrolling directly;
instead, the list must be associated with a scroll pane. The viewport used by the scroll
pane can also have a user-defined border. JList actions are handled using
ListSelectionListener.
JTABBEDPANE: contains a tab that can have a tool tip and a mnemonic, and it can
display both text and an image.
JTOOLBAR: contains a number of components whose type is usually some kind of
button which can also include separators to group related components
within the toolbar.
FLOWLAYOUT: when used arranges swing components from left to right until there’s
no more space available. Then it begins a new row below it and moves
from left to right again. Each component in a FlowLayout gets as much space as it needs
and no more.
BORDERLAYOUT: places swing components in the North, South, East, West and center
of a container. You can add horizontal and vertical gaps between
the areas.
GRIDLAYOUT: is a layout manager that lays out a container’s components in a
rectangular grid. The container is divided into equal-sized rectangles,
and one component is placed in each rectangle.
GRIDBAGLAYOUT: is a layout manager that lays out a container’s components in a
grid of cells with each component occupying one or more cells,
called its display area. The display area aligns components vertically and horizontally,
without requiring that the components be of the same size.
JMENUBAR: can contain several JMenus. Each JMenu can contain a series of
JMenuItems that you can select. Swing provides support for pull-down and popup
menus.
SCROLLABLE JPOPUPMENU: is a scrollable popup menu that can be used whenever a
popup menu has so many items that they exceed the visible height of the screen.
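To give a concrete feel for how a few of these Swing classes fit together, here is a minimal sketch; the class name, layout choice and label text are our own illustrative choices, not part of the project's actual code. JPanel, JLabel and JButton are lightweight components, so this runs even without a display:

```java
import javax.swing.JButton;
import javax.swing.JLabel;
import javax.swing.JPanel;
import java.awt.BorderLayout;

public class SwingSketch {
    // Build a panel using a BorderLayout holding a label and a button.
    static JPanel buildPanel() {
        JPanel panel = new JPanel(new BorderLayout());
        panel.add(new JLabel("Image Enhancement Tool"), BorderLayout.NORTH);
        panel.add(new JButton("Open"), BorderLayout.SOUTH);
        return panel;
    }

    public static void main(String[] args) {
        JPanel panel = buildPanel();
        System.out.println(panel.getComponentCount()); // 2
    }
}
```

In a full application this panel would be placed on a JFrame's contentPane, as described above.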
AWT:
Abstract Window Toolkit (AWT) is a set of application program interfaces (APIs) used by
Java programmers to create graphical user interface (GUI) objects, such as buttons, scroll bars,
and windows. AWT is part of the Java Foundation Classes (JFC) from Sun Microsystems, the
company that originated Java. The JFC are a comprehensive set of GUI class libraries that make
it easier to develop the user interface part of an application program. A more recent set of GUI
interfaces called Swing extends the AWT so that the programmer can create generalized GUI
objects that are independent of a specific operating system's windowing system.
Types of containers:
The AWT provides four container classes. They are class Window and its two subtypes -- class
Frame and class Dialog -- as well as the Panel class. In addition to the containers provided by the
AWT, the Applet class is a container -- it is a subtype of the Panel class and can therefore hold
components. Brief descriptions of each container class provided by the AWT are given
below.
Window: A top-level display surface (a window). An instance of the Window
class is not attached to nor embedded within another container. An instance of
the Window class has no border and no title.
Frame: A top-level display surface (a window) with a border and title. An
instance of the Frame class may have a menu bar. It is otherwise very much
like an instance of the Window class.
Dialog: A top-level display surface (a window) with a border and title. An
instance of the Dialog class cannot exist without an associated instance of the
Frame class.
Panel: A generic container for holding components. An instance of the Panel
class provides a container to which to add components.
Components of Java AWT are:
Labels : This is the simplest component of the Java Abstract Window Toolkit. This
component is generally used to show text or a string in your application, and a label
never performs any type of action. Syntax for defining a label, without and with
justification :
Label label_name = new Label ("This is the label text.");
The above code simply sets the text of the label.
Label label_name = new Label ("This is the label text.", Label.CENTER);
The justification of a label can be left, right or centered. The above declaration uses
center justification of the label via Label.CENTER.
Buttons : This is the component of Java Abstract Window Toolkit and is used to trigger
actions and other events required for your application. The syntax of defining the button
is as follows :
Button button_name = new Button ("This is the label of the button.");
We can change the Button's label, or get the label's text, by using
the Button.setLabel(String) and Button.getLabel() methods. Buttons are added to their
container using the add (button_name) method.
Check Boxes : This component of Java AWT allows you to create check boxes in your
applications. The syntax of the definition of Checkbox is as follows :
Checkbox checkbox_name = new Checkbox ("Optional check box 1", false);
The above code constructs an unchecked Checkbox by passing the boolean value
false along with the Checkbox label to the Checkbox() constructor. The defined
Checkbox is added to its container using the add (checkbox_name) method. You can
change and get the checkbox's label using the setLabel (String) and getLabel() methods.
Radio Button : This is a special case of the Checkbox component of the Java AWT
package. It is used as a group of checkboxes that share the same group name. Only one
Checkbox from a CheckboxGroup can be selected at a time. The syntax for creating radio
buttons is as follows :
CheckboxGroup chkgp = new CheckboxGroup();
add (new Checkbox ("One", chkgp, false));
add (new Checkbox ("Two", chkgp, false));
add (new Checkbox ("Three", chkgp, false));
In the above code we are making three check boxes with the labels "One", "Two"
and "Three". If you give the value true to more than one checkbox in the group, the
program takes the last true and shows that last check box as checked.
Text Area: This is the text container component of Java AWT package. The Text Area
contains plain text. TextArea can be declared as follows:
TextArea txtArea_name = new TextArea();
You can make the Text Area editable or not using the setEditable (boolean) method. If
you pass the boolean value false then the text area will be non-editable;
otherwise it will be editable. The text area is in editable mode by default. Text is set in
the text area using the setText(String) method of the TextArea class.
Text Field: This is also a text container component of the Java AWT package. This
component contains a single line of limited text. It is declared as follows :
TextField txtfield = new TextField(20);
5.2 General Description
5.2.1 Introduction To Java :
Java is a programming language originally developed by James Gosling at Sun
Microsystems (now part of Oracle Corporation) and released in 1995 as a core
component of Sun Microsystems' Java platform. The language derives much of its syntax
from C and C++ but has a simpler object model and fewer low-level facilities. Java
applications are typically compiled to bytecode (class file) that can run on any Java
Virtual Machine (JVM) regardless of computer architecture. Java is a general-purpose,
concurrent, class-based, object-oriented language that is specifically designed to have as
few implementation dependencies as possible. It is intended to let application developers
"write once, run anywhere." Java is currently one of the most popular programming
languages in use, particularly for client-server web applications.
5.2.2 WHY JAVA
5.2.2.1 SWING:
We use Java Swing libraries to program our GUI components. We chose Swing because:
1. Swing has an extensively rich set of user interface elements which are convenient to
use.
2. Swing depends far less on the underlying platform; it is therefore less prone to
platform-specific bugs.
3. Swing gives a consistent user experience across platforms.
5.2.2.2 Enriched Package for Image Processing :
The image processing parts of Java are buried within the java.awt.image package. The
package consists of three interfaces and eleven classes, two of which are abstract.
They are as follows:
The ImageObserver interface provides the single method necessary to support the
asynchronous loading of images. The interface implementers watch the production of an
image and can react when certain conditions arise.
The ImageConsumer and ImageProducer interfaces provide the means for low-level
image creation. The ImageProducer provides the source of the pixel data that is used by
the ImageConsumer to create an Image.
The PixelGrabber and ImageFilter classes, along with the
AreaAveragingScaleFilter, CropImageFilter, RGBImageFilter, and
ReplicateScaleFilter subclasses, provide the tools for working with images.
PixelGrabber consumes pixels from an Image into an array. The ImageFilter
classes modify an existing image to produce another Image instance.
CropImageFilter makes smaller images; RGBImageFilter alters pixel colors,
while AreaAveragingScaleFilter and ReplicateScaleFilter scale images up and
down using different algorithms. All of these classes implement ImageConsumer
because they take pixel data as input.
MemoryImageSource and FilteredImageSource produce new images.
MemoryImageSource takes an array and creates an image from it. FilteredImageSource
uses an ImageFilter to read and modify data from another image and produces the
new image based on the original. Both MemoryImageSource and
FilteredImageSource implement ImageProducer because they produce new pixel
data.
ColorModel and its subclasses, DirectColorModel and IndexColorModel,
provide the palette of colors available when creating an image, or tell you the
palette used when using PixelGrabber.
The classes in the java.awt.image package let you create Image objects at runtime.
These classes can be used to rotate images, make images transparent, create
image viewers for unsupported graphics formats, and more.
5.3 Specific Requirements
5.3.1 Software Requirements
1. JDK 7
NetBeans 7.1 or Eclipse
2. Operating System
Windows 7/8
5.3.2. Hardware Requirements
1. Intel P4/i3 processor with minimum 2.0 GHz speed
2. RAM: Minimum 512MB
3. Hard Disk: Minimum 20GB
6. Design
6.1 Systems design
The design phase is the "architectural" phase of system development. The flow of data
processing is developed into charts, and the project team determines the most logical
design and structure for data flow and storage. For the user interface, the project team
designs mock-up screen layouts that the developers use to write the code for the actual
interface. Based on the user requirements and the detailed analysis of a new system, the
new system must be designed. This is the phase of system designing, and it is the most
crucial phase in the development of a system.
The logical system design arrived at as a result of system analysis is converted into a
physical system design. In the design phase, the SDLC process continues to move from
the what questions of the analysis phase to the how. The logical design produced during
the analysis is turned into a physical design - a detailed description of what is needed to
solve the original problem. Input, output, databases, forms, codification schemes and
processing specifications are drawn up in detail.
In the design stage, the programming language and the hardware and software platform in
which the new system will run are also decided. The data structures, control processes,
equipment sources, workload and limitations of the system, interface, documentation,
training, procedures for using the system, backup procedures and staffing requirements
are decided at this stage.
There are several tools and techniques used for describing the system design. These tools
and techniques are: flowcharts, data flow diagrams (DFD), data dictionaries, structured
English, decision tables and decision trees, which are discussed in the following sections.
6.2 User Interface Design:
Fig. 6.1 User Interface Design (the main menu of the tool: Home, File, Edit, Project,
Transform, Effects, Distortion, Noise and Help)
6.3 Flowcharts:
Flowcharts can be a great way to organize data and represent a process. Software engineers use
flowcharts to visualize an information processing system. Web designers use flowcharts to
organize the infrastructure of a website. Textbook authors use flowcharts to help students
visualize intangible concepts. Using shape, type and line tools, you can quickly create
visually appealing flowchart designs.
Fig 6.2: Flow Chart of Image Enhancement Tool (an input image passes through various
processes to produce the transformed output image)
6.3.1. Role of Effects:
File Menu :
Fig 6.3 File Menu (File: Open, Save, Exit)
Edit Menu :
Fig 6.4 Edit Menu (Edit: Reset)
Help Menu :
Fig 6.5 Help Menu (Help: Shortcuts)
Distortion Menu :
Fig 6.6 Distortion Menu (Distortion: Diffuse, Offset, Displace)
Noise Menu :
Fig 6.7 Noise Menu (Noise: Remove Noise)
Transform Menu :
Fig 6.8 Transformation Menu (Transform: Horizontal Mirror, Vertical Mirror, Horizontal
Stretch, Vertical Stretch, Rotate 45 deg, Rotate 180 deg, Shear, Central Crop, Zoom In and
Zoom Out, with zoom levels of 50%, 100%, 150%, 200%, 250%, 300%, 350% and 400%)
Effects Menu :
Fig 6.9 Effects Menu (Effects: Invert, EdgeDetect, RedEyeView, BlueEyeView, Brightness,
GreyImage, Sharpen, Custom)
6.4 Data Flow Diagram:
A data flow diagram is a two-dimensional diagram that explains how data is processed and
transferred in a system. The graphical depiction identifies each source of data and how it
interacts with other data sources to reach a common output. Individuals seeking to draft a data
flow diagram must (1) identify external inputs and outputs, (2) determine how the inputs and
outputs relate to each other, and (3) explain with graphics how these connections relate and
what they result in. This type of diagram helps business development and design teams
visualize how data is processed and identify or improve certain aspects.
6.5 DFD FOR OUR PROJECT:
Level 0:
Level 1:
DFD for Distortion :
DFD for Noise :
6.6 INPUT DESIGN
Input design is a part of the overall system design. Collection of input data is the most expensive
part of the system, in terms of both the equipment used and the number of people involved. It is
the point of most contact for the user with the computer system and is prone to error. The input
design of the system has a number of objectives, such as to produce a cost-effective method of
input, to achieve the highest level of accuracy for the data, and to ensure that the input is
acceptable to and understood by the user.
Input design is the process of converting user-oriented inputs to a computer-based format. In the
system design phase, the expanded data flow diagrams identify logical data flow, data stores,
sources and destinations; input data are collected and organized into groups of similar data.
The system analyst decides the following input design details :
What data to input?
What medium should be used?
How should the data be arranged and coded?
The dialogues to guide users in providing inputs.
A field sequence that matches that of the source document.
Data items and transactions needing validation to detect errors.
Methods for performing input validation to detect errors.
The main effects and filters that we’ve used in this project are:
6.6.1 Image Filters
The Java 2D API provides a framework for filtering images. That is, a source image
enters a filter, the filter processes the image data in some way, and a new image emerges.
Source Image ==> Filter ==> Destination Image
The java.awt.image package includes several filter classes, and we can also create our
own. The filter classes implement the java.awt.image.BufferedImageOp interface. This
interface holds five methods, but the crucial one is:
public BufferedImage filter(BufferedImage sourceImg, BufferedImage destImg)
This method will act upon (but not change) the source image and will return the
processed version as the destination image. If the destImg argument is not null, then the
filter will use this image object to hold the processed image. If it is null, then the filter
will create a new image object. In some, but not all, filters the source and destination
images can be the same. The five filtering classes provided with the java.awt.image
package include:
ConvolveOp: a convolution filter that applies a given kernel operator to the image
data for effects such as edge detection, sharpening, and others.
AffineTransformOp: affine transforms, such as translation, scaling, flipping,
rotation, and shearing, map 2D structures in one space to another space while
maintaining straight lines and the parallelism of the original image.
LookupOp: instances of LookupTable are used to map source pixels to destination
pixels according to the pixel component values (cannot be used with indexed color
model images). Provides color transformation effects such as the inversion of gray
scales.
RescaleOp: applies a scaling factor to the color components so as to brighten or
dim an image.
ColorConvertOp: changes to a different color space, such as converting a color
image to a grey-scale image.
Image filtering allows us to apply various effects to photos. The type of image
filtering described here uses a 2D filter similar to the one included in Paint Shop
Pro as User Defined Filter and in Photoshop as Custom Filter.
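As a small illustration of the filter framework described above, RescaleOp, one of the five BufferedImageOp classes listed, can brighten an image through the filter method. The class name and the chosen scale factor here are our own illustrative choices:

```java
import java.awt.image.BufferedImage;
import java.awt.image.RescaleOp;

public class FilterDemo {
    // Brighten every colour component by a scale factor using RescaleOp,
    // one of the BufferedImageOp implementations in java.awt.image.
    static BufferedImage brighten(BufferedImage src, float scale) {
        RescaleOp op = new RescaleOp(scale, 0, null);
        return op.filter(src, null); // null destination: the op allocates a new image
    }

    public static void main(String[] args) {
        BufferedImage src = new BufferedImage(1, 1, BufferedImage.TYPE_INT_RGB);
        src.setRGB(0, 0, 0xFF404040); // mid-dark grey
        BufferedImage out = brighten(src, 2.0f);
        System.out.println(Integer.toHexString(out.getRGB(0, 0))); // ff808080
    }
}
```

This follows the Source Image ==> Filter ==> Destination Image pattern: the source is untouched and a new, brighter image emerges.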
6.6.2 Colour Filter
Colour filters are sometimes classified according to their type of spectral absorption:
short-wavelength pass, long-wavelength pass or band-pass; diffuse or sharp-cutting;
monochromatic or conversion. The short-wavelength pass transmits all wavelengths up to
the specified one and then absorbs. The long-wavelength pass is the opposite. Every filter
is a band-pass filter when considered generally. It is very simple - it just adds or subtracts
a value to each colour. The most useful thing to do with this filter is to set two colors to -
255 in order to strip them and see one color component of an image. For example, for red
filter, keep the red component as it is and just subtract 255 from the green component and
blue component.
The Color class is used to encapsulate colors in the default RGB color space, or colors in
arbitrary color spaces identified by a ColorSpace. Every color has an implicit alpha value
of 1.0, or an explicit one provided in the constructor. The alpha value defines the
transparency of a color and can be represented by a value in the range 0.0 - 1.0 or 0 -
255. An alpha value of 1.0 or 255 means that the color is completely opaque, and an
alpha value of 0.0 or 0 means that the color is completely transparent. When constructing
a Color with an explicit alpha, or getting the color/alpha components of a Color, the color
components are never premultiplied by the alpha component. The Color(int r, int g, int b)
constructor creates an opaque RGB color with the specified red, green, and blue values in
the range 0 - 255; the actual color used in rendering depends on finding the best match
given the color space available for a given output device. Alpha defaults to 255.
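A short sketch of these constructors, together with the red-filter idea described above (keep red, strip green and blue); the class name and the helper redOnly are illustrative, not part of the project's code:

```java
import java.awt.Color;

public class ColorDemo {
    // Keep the alpha and red components of a packed ARGB pixel and
    // zero out green and blue, i.e. a simple "red filter".
    static int redOnly(int argb) {
        return argb & 0xFFFF0000;
    }

    public static void main(String[] args) {
        Color opaque = new Color(255, 0, 0);           // alpha defaults to 255
        Color translucent = new Color(255, 0, 0, 128); // explicit alpha
        System.out.println(opaque.getAlpha());         // 255
        System.out.println(translucent.getAlpha());    // 128
        System.out.println(Integer.toHexString(redOnly(0xFF8040C0))); // ff800000
    }
}
```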
6.6.3 INVERT
This filter inverts all the pixels in an image, converting it into its photographic negative.
It is pretty much the simplest possible filter: to invert a pixel, we simply subtract each
color component from 255. There are no parameters to this filter. The invert filter takes
apart the red, green, and blue channels and then inverts them by subtracting each from
255. These inverted values are packed back into a pixel value and returned. A variant of
this filter inverts the alpha channel of an image instead; this is not normally terribly
useful, but has its uses on occasion.
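The subtraction described above can be sketched for a single packed ARGB pixel; the class and method names are illustrative only:

```java
public class InvertDemo {
    // Invert one packed ARGB pixel: subtract each colour component from 255,
    // leaving the alpha channel untouched.
    static int invert(int argb) {
        int a = argb & 0xFF000000;
        int r = 255 - ((argb >> 16) & 0xFF);
        int g = 255 - ((argb >> 8) & 0xFF);
        int b = 255 - (argb & 0xFF);
        return a | (r << 16) | (g << 8) | b;
    }

    public static void main(String[] args) {
        System.out.println(Integer.toHexString(invert(0xFF102030))); // ffefdfcf
    }
}
```

Note that inverting twice returns the original pixel, which is a handy sanity check for this filter.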
6.6.4 GRAYSCALE
A grayscale digital image is an image in which the value of each pixel is a
single sample; that is, it carries only intensity information. Images of this sort, also
known as black-and-white, are composed exclusively of shades of gray, varying from
black at the weakest intensity to white at the strongest. Grayscale images are distinct
from one-bit black-and-white images, which in the context of computer imaging are
images with only the two colors, black, and white (also called bi-level or binary images).
Grayscale images have many shades of gray in between. Grayscale images are also called
monochromatic, denoting the absence of any chromatic variation. Grayscale images are
often the result of measuring the intensity of light at each pixel in a single band of the
electromagnetic spectrum (e.g. infrared, visible light, ultraviolet, etc.), and in such cases
they are monochromatic proper when only a given frequency is captured. But also they
can be synthesized from a full color image; see the section about converting to grayscale.
The Grayscale filter is a subclass of RGBImageFilter, which means that Grayscale can
use itself as the ImageFilter parameter to FilteredImageSource's constructor. Then all it
needs to do is override filterRGB() to change the incoming color values. It takes the red,
green and blue values and computes the brightness of the pixel, using the NTSC
(National Television Standards Committee) color-to-brightness conversion factor. It then
simply returns a gray pixel that is the same brightness as the color source.
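The NTSC color-to-brightness conversion mentioned above can be sketched per pixel as follows; the weights 0.299/0.587/0.114 are the standard NTSC luminance factors, and the class name is illustrative:

```java
public class GrayscaleDemo {
    // NTSC luminance: brightness = 0.299 R + 0.587 G + 0.114 B.
    // The same value is written into all three colour channels.
    static int toGray(int argb) {
        int r = (argb >> 16) & 0xFF;
        int g = (argb >> 8) & 0xFF;
        int b = argb & 0xFF;
        int y = (int) Math.round(0.299 * r + 0.587 * g + 0.114 * b);
        return (argb & 0xFF000000) | (y << 16) | (y << 8) | y;
    }

    public static void main(String[] args) {
        System.out.println(Integer.toHexString(toGray(0xFFFFFFFF))); // ffffffff: white stays white
    }
}
```

In the actual filter this computation would live inside filterRGB(), which is called once per pixel by FilteredImageSource.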
6.6.5 RGBScale
The RGBScale is used to convert one image to another, pixel by pixel, transforming the
colors along the way. This filter could be used to brighten an image, to increase its
contrast, or even to convert it to grayscale.
6.6.6 SHARPEN
Sharpening an image is very similar to finding edges: add the original image and the
image after edge detection to each other, and the result will be a new image in which the
edges are enhanced, making it look sharper. Adding those two images is done by taking
the edge detection filter from the previous example and incrementing its center value by
1. Now the sum of the filter elements is 1 and the result will be an image with the same
brightness as the original, but sharper. ShapeFilter applies a "shape burst" gradient to an
image. It uses the alpha channel of the image to determine the shape and then shades
from the outside of the shape inwards. You can change the shape of the gradient between
linear, circle up, circle down and a smooth transition, and you can change the rate at
which the gradient changes. By default, the gradient will shade from black at the edges to
white in the centre of the shape, but you can also get the filter to invert this. This filter is
particularly useful for creating bump maps for the Light Filter. You can apply a color map
for the gradient, which can produce interesting effects, especially for bump maps. The
sharpen filter is also a subclass of Convolve and is the inverse of Blur. It runs through
every pixel in the source image array, imgpixels, and computes the average of the 3x3
box surrounding it, not counting the center. The corresponding output pixel in
newimgpixels has the difference between the center pixel and the surrounding average
added to it. This basically says that if a pixel is 30 brighter than its surroundings, make it
another 30 brighter.
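The rule just described (add to the centre pixel its difference from the surrounding average) can be sketched for one interior pixel of a single-channel image; the class and array names are illustrative, not the project's actual imgpixels code:

```java
public class SharpenDemo {
    // Sharpen one interior pixel: add to the centre the difference between
    // the centre and the average of its eight 3x3 neighbours, then clamp.
    static int sharpenAt(int[][] img, int x, int y) {
        int sum = 0;
        for (int dy = -1; dy <= 1; dy++)
            for (int dx = -1; dx <= 1; dx++)
                if (dx != 0 || dy != 0) sum += img[y + dy][x + dx];
        int avg = sum / 8;
        int v = img[y][x] + (img[y][x] - avg);
        return Math.max(0, Math.min(255, v)); // clamp to the valid 0-255 range
    }

    public static void main(String[] args) {
        int[][] img = {
            {100, 100, 100},
            {100, 130, 100},
            {100, 100, 100},
        };
        System.out.println(sharpenAt(img, 1, 1)); // 130 + (130 - 100) = 160
    }
}
```

A pixel 30 brighter than its surroundings becomes another 30 brighter, exactly as stated above, while a flat region is left unchanged.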
6.6.7 EDGE DETECTION
This filter detects the edges in an image. For each pixel, it looks at each channel, finds the
local gradient and replaces the channel by a value determined by the gradient. Edges
become white while flat areas become black. Edge detection filters work essentially by
looking for contrast in an image. This can be done in a number of different ways; the
convolution filters do it by applying a negative weight on one edge, and a positive one on
the other. This has the net effect of trending towards zero if the values are the same, and
trending upwards as contrast exists. This is precisely how our emboss filter worked, and
using an offset of 127 would again make these filters look similar to our previous
embossing filter. The following examples follow the different filter types in the same
order as the filters above. The images have a tooltip if you want to be sure which is
which. These three filters also allow specification of a threshold. Any value below this
threshold will be clamped to it. For the test we have kept the threshold at 0.
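The negative/positive-weight idea can be sketched for a single-channel image; the particular kernel below (negative weights around a positive centre) is one common edge-detection choice, not necessarily the one used in this project:

```java
public class EdgeDemo {
    // A 3x3 edge-detection kernel: negative weights around a positive centre,
    // so uniform regions sum to zero while contrast produces a nonzero response.
    static final int[][] KERNEL = {
        {-1, -1, -1},
        {-1,  8, -1},
        {-1, -1, -1},
    };

    // Convolve the kernel with the 3x3 neighbourhood of one interior pixel.
    static int responseAt(int[][] img, int x, int y) {
        int sum = 0;
        for (int dy = -1; dy <= 1; dy++)
            for (int dx = -1; dx <= 1; dx++)
                sum += KERNEL[dy + 1][dx + 1] * img[y + dy][x + dx];
        return sum;
    }

    public static void main(String[] args) {
        int[][] flat = {{50, 50, 50}, {50, 50, 50}, {50, 50, 50}};
        int[][] edge = {{0, 0, 0}, {0, 0, 0}, {255, 255, 255}};
        System.out.println(responseAt(flat, 1, 1)); // 0: no contrast, flat area
        System.out.println(responseAt(edge, 1, 1)); // nonzero: an edge is present
    }
}
```

A real filter would then take the magnitude of this response, apply the threshold, and clamp to 0-255, so edges come out white and flat areas black.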
6.6.8 Transform
The transform attribute in the Graphics2D context is used to move, rotate, scale, and shear
graphics primitives when they are rendered. The transform attribute is defined by an instance
of the AffineTransform class. An affine transform is a transformation such as a translation,
rotation, scale, or shear in which parallel lines remain parallel even after being transformed.
The Graphics2D class provides several methods for changing the transform attribute. You
can construct a new AffineTransform and change the Graphics2D transform attribute by
calling transform. AffineTransform defines the following factory methods to make it
easier to construct new transforms:
i) getRotateInstance
ii) getScaleInstance
iii) getShearInstance
iv) getTranslateInstance
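A brief sketch of two of these factory methods in use; the class name and the point values are arbitrary illustrative choices:

```java
import java.awt.geom.AffineTransform;
import java.awt.geom.Point2D;

public class TransformFactories {
    // Build a translation with getTranslateInstance and apply it to a point.
    static Point2D translatePoint(double x, double y, double dx, double dy) {
        AffineTransform t = AffineTransform.getTranslateInstance(dx, dy);
        return t.transform(new Point2D.Double(x, y), null);
    }

    public static void main(String[] args) {
        System.out.println(translatePoint(1, 1, 10, 5)); // the point moves to x=11, y=6

        // getScaleInstance works the same way: (3, 4) scaled by 2 becomes (6, 8).
        AffineTransform scale = AffineTransform.getScaleInstance(2, 2);
        System.out.println(scale.transform(new Point2D.Double(3, 4), null));
    }
}
```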
Alternatively we can use one of the Graphics2D transformation methods to modify the
current transform. When we call one of these convenience methods, the resulting
transform is concatenated with the current transform.
It is then applied during rendering:
i) rotate--to specify an angle of rotation in radians
ii) scale--to specify a scaling factor in the x and y directions
iii) shear--to specify a shearing factor in the x and y directions
iv) translate--to specify a translation offset in the x and y directions
We can also construct an AffineTransform object directly and concatenate it with the
current transform by calling the transform method. The drawImage method can also be
used to allow you to specify an AffineTransform that is applied to the image.
Specifying a transform when you call drawImage does not affect the Graphics2D
transform attribute. Never use the setTransform method to concatenate a coordinate
transform onto an existing transform. The setTransform method overwrites the real
Graphics2D object's current transform, which might be needed for other reasons, like
positioning Swing and lightweight components in a window. Use these steps to do
transformations:
1. Use the getTransform method to get the current transform.
2. Use transform, translate, scale, shear, or rotate to concatenate a transform.
3. Perform the rendering.
4. Restore the original transform using the setTransform method.
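The four steps above can be sketched as follows. This is a minimal illustration only; the class name TransformDemo and the rectangle it draws are ours, not the project's code:

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.geom.AffineTransform;
import java.awt.image.BufferedImage;

public class TransformDemo {
    // Draws a rectangle rotated 45 degrees about the image centre,
    // following the save/concatenate/render/restore pattern.
    public static BufferedImage render(int w, int h) {
        BufferedImage img = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = img.createGraphics();
        AffineTransform saved = g.getTransform();  // 1. get the current transform
        g.rotate(Math.PI / 4, w / 2.0, h / 2.0);   // 2. concatenate a rotation
        g.setColor(Color.WHITE);
        g.fillRect(w / 4, h / 4, w / 2, h / 2);    // 3. perform the rendering
        g.setTransform(saved);                     // 4. restore the original transform
        g.dispose();
        return img;
    }

    public static void main(String[] args) {
        BufferedImage img = render(100, 100);
        // The centre pixel lies inside the rotated rectangle.
        System.out.println(img.getRGB(50, 50) == Color.WHITE.getRGB()); // prints "true"
    }
}
```

Restoring the saved transform in step 4 is safe because setTransform here puts back exactly what getTransform returned, rather than concatenating a new coordinate transform.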
Affine transformations are among the least complicated operations that can be performed
in image processing. They are nevertheless very useful, because they are among the basic
transformations that can be applied to an image. Affine transformations do not affect
the values of the pixels; they simply change the pixels' positions or order, so the
brightness and contrast of the picture are unchanged. They do not change the colours of
the objects in the image, but rather their shapes. This kind of transformation is used on
pictures which display some distortion: if one or several objects of a picture appear
badly shaped on the image, then affine transformations can be used to make them look
better.
6.6.9 CROP IMAGE
Crop Image filters an image source to extract a rectangular region. One situation in which
this filter is valuable is where you want to use several small images from a single, larger
source image. Loading twenty 2K images takes much longer than loading a single 40K
image that has many frames of an animation tiled into it. If every sub-image is the same
size, then you can easily extract these images by using Crop Image to disassemble the
block once your applet starts. Cropping an image extracts a rectangular region of interest
from the original image. This focuses the viewer's attention on a specific portion of the
image and discards areas of the image that contain less useful information. Using image
cropping in conjunction with image magnification allows you to zoom in on a specific
portion of the image. Image cropping requires a pair of
(x, y) coordinates that define the corners of the new, cropped image. To cut out or trim
unneeded portions of an image or a page is to crop; cutting lines, known as crop marks,
may be indicated on a print-out of the image or page to show where to crop. One basic
way to modify images is to crop them, that is, to remove some part of the image.
Cropping changes the appearance of photographs and clip art in order to better fit the
layout, make a statement, or improve the overall appearance of the subject matter.
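A crop like this can be sketched with the standard library's BufferedImage.getSubimage; the class name CropDemo is ours and this is an illustration, not the project's actual code:

```java
import java.awt.image.BufferedImage;

public class CropDemo {
    // Extracts the rectangle (x, y, w, h) from the source image.
    // Note: getSubimage shares the original raster; copy the result
    // if the crop must be independent of the source image.
    public static BufferedImage crop(BufferedImage src, int x, int y, int w, int h) {
        return src.getSubimage(x, y, w, h);
    }

    public static void main(String[] args) {
        BufferedImage world = new BufferedImage(200, 100, BufferedImage.TYPE_INT_RGB);
        BufferedImage region = crop(world, 60, 20, 80, 50);
        System.out.println(region.getWidth() + "x" + region.getHeight()); // prints "80x50"
    }
}
```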
6.6.10 ZOOM
The modified (zoomed) image should be the same size as the original image; in a zoomed
image the specified portion of the original image now fills the entire image window.
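One way to achieve this is to stretch the chosen region over a destination image of the original size; a minimal sketch (ZoomDemo is our illustrative name, not the project's code):

```java
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class ZoomDemo {
    // Scales the region (x, y, w, h) of src up to the full image size,
    // so the zoomed result has the same dimensions as the original.
    public static BufferedImage zoom(BufferedImage src, int x, int y, int w, int h) {
        BufferedImage out = new BufferedImage(src.getWidth(), src.getHeight(),
                                              BufferedImage.TYPE_INT_RGB);
        Graphics2D g = out.createGraphics();
        // Map the source rectangle onto the whole destination image.
        g.drawImage(src, 0, 0, out.getWidth(), out.getHeight(),
                    x, y, x + w, y + h, null);
        g.dispose();
        return out;
    }

    public static void main(String[] args) {
        BufferedImage src = new BufferedImage(100, 80, BufferedImage.TYPE_INT_RGB);
        BufferedImage zoomed = zoom(src, 10, 10, 20, 20);
        System.out.println(zoomed.getWidth() + "x" + zoomed.getHeight()); // prints "100x80"
    }
}
```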
6.6.11 SHEAR
Shearing can be visualized by thinking of an image superimposed onto a flexible rubber
sheet. If you hold the sides of the sheet and move them up and down in opposite
directions, the image will undergo a spatial stretching known as shearing. The shear
operation shears an image either horizontally or vertically.
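A horizontal shear can be sketched with AffineTransform.getShearInstance and AffineTransformOp (ShearDemo is our illustrative name, not the project's code):

```java
import java.awt.geom.AffineTransform;
import java.awt.image.AffineTransformOp;
import java.awt.image.BufferedImage;

public class ShearDemo {
    // Applies a horizontal shear: each row is shifted sideways in
    // proportion to its y coordinate (x' = x + shx * y).
    public static BufferedImage shear(BufferedImage src, double shx) {
        AffineTransform tx = AffineTransform.getShearInstance(shx, 0);
        AffineTransformOp op = new AffineTransformOp(tx, AffineTransformOp.TYPE_BILINEAR);
        return op.filter(src, null);  // null destination: sized to the sheared bounds
    }

    public static void main(String[] args) {
        BufferedImage src = new BufferedImage(100, 50, BufferedImage.TYPE_INT_RGB);
        BufferedImage out = shear(src, 0.5);
        // The sheared bounds are wider than the source; the height is unchanged.
        System.out.println(out.getWidth() > 100 && out.getHeight() == 50);
    }
}
```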
6.6.12 ROTATE
The rotate operation rotates an image about a given point by a given angle. Specified x
and y values define the coordinate of the source image about which to rotate the image
and a rotation angle in radians defines the angle of rotation about the rotation point. If no
rotation point is specified, a default of (0, 0) is assumed. A negative rotation value rotates
the image counter-clockwise, while a positive rotation value rotates the image clockwise.
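The rotation about a point is exactly what AffineTransform.getRotateInstance(angle, x, y) builds; a small sketch of how a point moves under it (RotateDemo is our illustrative name):

```java
import java.awt.geom.AffineTransform;
import java.awt.geom.Point2D;

public class RotateDemo {
    // Rotates the point p about (cx, cy) by 'angle' radians; in screen
    // coordinates (y pointing down) a positive angle appears clockwise.
    public static Point2D rotate(Point2D p, double cx, double cy, double angle) {
        return AffineTransform.getRotateInstance(angle, cx, cy).transform(p, null);
    }

    public static void main(String[] args) {
        // Rotating (10, 0) by 90 degrees about the default point (0, 0)
        // carries it onto the positive y axis.
        Point2D q = rotate(new Point2D.Double(10, 0), 0, 0, Math.PI / 2);
        System.out.println(Math.round(q.getX()) + "," + Math.round(q.getY())); // prints "0,10"
    }
}
```

The same transform, wrapped in an AffineTransformOp, rotates a whole image rather than a single point.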
6.6.13 LOADING AND SAVING AN IMAGE
When you select either the Open or the Save option from the File menu, the corresponding
dialogue box appears. We can browse to the desired picture using this dialogue box. After
selecting the picture we can insert it into our work area by double-clicking the picture
or by clicking the Open button in the dialogue box. In the Save dialogue box, browse to
the folder where we want to save the file, name it, and click the Save button. The file
will then be saved in the corresponding folder.
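Underneath the dialogue boxes, loading and saving reduce to ImageIO calls; a minimal round-trip sketch (class and method names are ours, not the project's code):

```java
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class LoadSaveDemo {
    // Writes img to the given file as PNG, then reads it back.
    public static BufferedImage saveAndReload(BufferedImage img, File file)
            throws IOException {
        ImageIO.write(img, "png", file);  // informal format name, e.g. "png" or "jpg"
        return ImageIO.read(file);
    }

    // Round-trips a blank w x h image through a temporary file and
    // returns the reloaded width (-1 on I/O failure).
    public static int roundTripWidth(int w, int h) {
        try {
            File tmp = File.createTempFile("iet", ".png");
            tmp.deleteOnExit();
            return saveAndReload(new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB),
                                 tmp).getWidth();
        } catch (IOException e) {
            return -1;
        }
    }

    public static void main(String[] args) {
        System.out.println(roundTripWidth(8, 8)); // prints "8"
    }
}
```

In the application, the File chosen by the JFileChooser dialogue simply takes the place of the temporary file used here.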
6.6.14 DISTORTING AN IMAGE
Distorting an image involves moving its pixels from one place to another. Java comes
with one such filter, AffineTransformOp, which distorts images. We can also build our
own filters by making use of the available filters in Java and some other techniques.
Some basic distortion filters involved in this project are:
Diffuse Filter:
This filter creates a diffusing effect by randomly displacing pixels in an image. You can
change the radius of the diffusion and what to do with pixels which diffuse off the edges.
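The random-displacement idea can be sketched as follows; this is an assumption-laden illustration (DiffuseDemo is our name, and the real filter may handle edges differently):

```java
import java.awt.image.BufferedImage;
import java.util.Random;

public class DiffuseDemo {
    // For each destination pixel, samples a source pixel displaced by a
    // random amount of up to 'radius' in each direction, clamping
    // displacements that would fall off the edges.
    public static BufferedImage diffuse(BufferedImage src, int radius, long seed) {
        int w = src.getWidth(), h = src.getHeight();
        BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        Random rnd = new Random(seed);  // fixed seed keeps the result repeatable
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int sx = x + rnd.nextInt(2 * radius + 1) - radius;
                int sy = y + rnd.nextInt(2 * radius + 1) - radius;
                sx = Math.max(0, Math.min(w - 1, sx));  // clamp pixels that would
                sy = Math.max(0, Math.min(h - 1, sy));  // diffuse off the edges
                out.setRGB(x, y, src.getRGB(sx, sy));
            }
        }
        return out;
    }
}
```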
Offset Filter:
This filter translates an image by the amount you specify in the X and Y directions.
Pixels which go off the edge are wrapped back onto the opposite edge. The offsets can be
positive or negative. This is a handy filter for producing seamless tilings: offset the
image and then paint over the cracks.
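The wrap-around translation can be sketched with modulo arithmetic, which handles positive and negative offsets alike (OffsetDemo is our illustrative name, not the project's code):

```java
import java.awt.image.BufferedImage;

public class OffsetDemo {
    // Translates the image by (dx, dy); pixels pushed off one edge
    // reappear on the opposite edge. The ((v % n) + n) % n form keeps
    // the index in range even for negative offsets.
    public static BufferedImage offset(BufferedImage src, int dx, int dy) {
        int w = src.getWidth(), h = src.getHeight();
        BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int nx = ((x + dx) % w + w) % w;
                int ny = ((y + dy) % h + h) % h;
                out.setRGB(nx, ny, src.getRGB(x, y));
            }
        }
        return out;
    }

    // Shifts a 4x4 image, whose (3, 0) pixel is red, one column to the
    // right and returns the colour that wraps around to (0, 0).
    public static int wrappedCorner() {
        BufferedImage src = new BufferedImage(4, 4, BufferedImage.TYPE_INT_RGB);
        src.setRGB(3, 0, 0xFFFF0000);  // red pixel on the right edge
        return offset(src, 1, 0).getRGB(0, 0);
    }

    public static void main(String[] args) {
        System.out.println(Integer.toHexString(wrappedCorner())); // prints "ffff0000"
    }
}
```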
Displace Filter:
Displace Filter distorts an image using a displacement map. The displacement map is a
grayscale image whose gradient at each point is used to displace pixels in the source
image.
6.7 Advantages of DFD
A simple but powerful graphic technique which is easily understood.
Represents an information system from the viewpoint of data movements, including the
inputs and outputs, to which people can readily relate.
The ability to represent the system at different levels of detail gives an added
advantage.
Helps to define the boundaries of the system.
A useful tool to use during interviews.
Serves to identify the information services the users require, on the basis of which
the future information system will be constructed.
7. Testing
Testing is the stage of implementation aimed at ensuring that the system runs
accurately and efficiently. An error or anomaly in program code can remain undetected
indefinitely. To prevent this from happening, the code is tested at each level, so
testing is performed to ensure that the system as a whole is bug free. For each stage or
phase, a different technique is used to eliminate the errors that exist in that stage.
However, some requirement errors and design errors are likely to remain undetected.
Ultimately, these errors will be reflected in the code. Testing is usually associated with
the code and is used to detect the errors remaining from the earlier phases. Performance
factors like turnaround time, backup, file protection and human factors are some of the
performance criteria for system testing. Hence testing performs a critical role in quality
assurance and in ensuring the reliability of the software. At first we used artificial
data to store information.
During testing, the system is executed with a set of image inputs and the output is
verified. If the program fails to perform as expected, the conditions under which a
failure occurs are noted for debugging and correction. Testing only reveals the presence
of errors; its success in revealing errors in a system depends critically on the test
cases. Thus the preparation of test cases plays a vital role in system testing. After
preparing the test cases, the system was tested, and the errors found while testing were
corrected. Test cases were generated based upon the requirements of the user.
7.1 Black-box testing
Black-box testing is a method of software testing that examines the functionality of an
application (e.g. what the software does) without peering into its internal structures or
workings (see white-box testing).
This method of test can be applied to virtually every level of software testing: unit, integration,
system and acceptance. It typically comprises most if not all higher level testing.
Black box testing tends to find different kinds of errors than white box testing:
Missing functions
Usability problems
Performance problems
Concurrency and timing errors
Initialization and termination errors etc.
7.1.1 Functional testing
Functional testing is a quality assurance (QA) process and a type of black-box testing
that bases its test
cases on the specifications of the software component under test. Functions are tested by
feeding them input and examining the output, and internal program structure is rarely
considered (unlike white-box testing). Functional testing usually describes what the
system does.
Functional testing typically involves five steps:
1. The identification of functions that the software is expected to perform
2. The creation of input data based on the function's specifications
3. The determination of output based on the function's specifications
4. The execution of the test case
5. The comparison of actual and expected outputs
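The five steps above can be illustrated with a single functional test case. The invert function below is a hypothetical stand-in for the tool's Invert effect, not the project's actual implementation:

```java
public class InvertFunctionalTest {
    // Function under test: inverts each RGB channel of a packed ARGB
    // pixel (an illustrative stand-in for the tool's Invert effect).
    static int invert(int argb) {
        return argb ^ 0x00FFFFFF;  // flip the R, G and B bits, keep alpha
    }

    // One functional test case: known input (step 2), expected output
    // (step 3), execution (step 4), comparison (step 5).
    public static boolean runCase(int input, int expected) {
        int actual = invert(input);  // execute the test case
        return actual == expected;   // compare actual and expected outputs
    }

    public static void main(String[] args) {
        // Pure white must invert to pure black, and vice versa.
        System.out.println(runCase(0xFFFFFFFF, 0xFF000000)
                        && runCase(0xFF000000, 0xFFFFFFFF)); // prints "true"
    }
}
```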
7.2 Structural testing
White-box testing (also known as clear box testing, glass box testing, transparent box
testing, and structural testing) is a method of testing software that tests internal structures or
workings of an application, as opposed to its functionality (i.e. black-box testing). In white-box
testing an internal perspective of the system, as well as programming skills, are used to design
test cases. The tester chooses inputs to exercise paths through the code and determine the
appropriate outputs. This is analogous to testing nodes in a circuit, e.g. in-circuit testing (ICT).
While white-box testing can be applied at the unit, integration and system levels of the software
testing process, it is usually done at the unit level. It can test paths within a unit,
paths between units during integration, and paths between subsystems during a
system-level test.
7.3 TYPES OF TESTING
Unit testing
Integration testing
System testing
Acceptance testing
7.3.1 Unit Testing
The first level of testing is unit testing. In this, the smallest units of the software
design, the modules, are tested against the specifications produced during design. It
consists of a number of test runs, such as valid paths through the code and the
exception and error handling paths. Unit testing is essentially verification of the code
produced during the coding phase, and hence the goal is to test the internal logic of
the modules. Unit testing involved checking all pages for errors and omissions. We used
the unit testing plans prepared in the design phase of the system development as a
guide. The testing was carried out during coding itself. Each module of this project was
found to be working according to the expected output from the module.
7.3.2 Integration testing
Data can be lost across an interface; one module can have an adverse effect on another;
and sub-functions, when combined, may not produce the desired functions. Integration
testing is the systematic testing done to uncover errors within the interfaces. This
testing was done with simple data, and the developed system ran successfully with this
simple data. The need for integration testing is to find the overall system performance.
7.3.3 System testing
In this testing, the entire software system is tested. All the application programs are
grouped together for system testing, to test the whole system exhaustively, including
any additional housekeeping functions like file archiving. This is the developer's last
opportunity to check that the system works before asking the client to accept it. The
purpose of this testing is to verify that the software meets its requirements. It
verifies that all elements mesh properly and that overall system function and
performance are achieved. It also tests for discrepancies between the system and its
original objective, specification and system documentation. After this test, it was
found that our project "IMAGE ENHANCEMENT TOOL" works well as per the specified
requirements.
7.3.4 Acceptance Testing
User acceptance testing of the system is the key factor for the success of any system.
The system under consideration was tested for user acceptance by constantly keeping in
touch with prospective users at the time of development and making changes whenever
required. This was done with regard to the input screen design.
7.4 Project Testing
When we completed the design and coding of the project, we moved to the project testing
phase. What we did in this phase:
First of all we checked that the main frame of the project works properly and that an
image inserted into this frame is displayed properly. If the image inside the frame is
not displayed, we check the code of the main frame and also the properties of the image.
The second test covers the various features present in our project. First of all we
check whether the JMenu is working properly or not.
The third test checks the working of the effects: when we click on a particular effect,
the corresponding change should appear in the image.
The next test checks whether noise removal works properly: we load an image via FILE and
then OPEN, the image is loaded onto the screen, and then we click on Remove Noise.
The last part of testing covers the working of the Distortion option and its various
sub-types.
8. Implementation
Technology / Domain / Tool
8.1. Introduction
There are many technologies available in the market, like Java, .NET, PHP, web
technologies, cloud computing and many more. Different technologies have their own
advantages and disadvantages; we have chosen Java technology for achieving our goal in
the software domain.
8.1.1. About the Java Technology
Java technology is both a programming language and a platform.
8.1.2. Java Platform, Standard Edition (Java SE)
Java SE is designed to enable you to develop secure, portable, high-performance
applications for the widest range of computing platforms possible. By making
applications available across heterogeneous environments, businesses can boost end-user
productivity, communication, and collaboration—and dramatically reduce the cost of
ownership of both enterprise and consumer applications.
8.1.3. The Java Programming Language
The Java programming language is a high-level language. The characteristics of the Java
language are:
Simple
Object oriented
Distributed
Multithreaded
Dynamic
Architecture neutral
Portable
High performance
Robust
Secure
In the Java programming language, all source code is first written in plain text files
ending with the .java extension. Those source files are then compiled into .class files by
the javac compiler. A .class file does not contain code that is native to your processor;
it instead contains bytecodes, the machine language of the Java Virtual Machine (Java VM).
The java launcher tool then runs your application with an instance of the Java Virtual
Machine.
Fig 8.1 Java working
Because the Java VM is available on many different operating systems, the same .class
files are capable of running on Microsoft Windows, the Solaris Operating System
(Solaris OS), Linux, or Mac OS. Some virtual machines, such as the Java HotSpot VM,
perform additional steps at runtime to give your application a performance boost. This
includes various tasks such as finding performance bottlenecks and recompiling
frequently used sections of code to native code.
Fig 8.2 Java Platform Independence
Through the Java VM, the same application is capable of running on multiple platforms.
8.1.4. The Java Platform
A platform is the hardware or software environment in which a program runs. Popular
platforms include Microsoft Windows, Linux, Solaris OS, and Mac OS. Most platforms can
be described as a combination of the operating system and underlying
hardware. The Java platform differs from most other platforms in that it's a software-only
platform that runs on top of other hardware-based platforms.
The Java platform has two components:
The Java Virtual Machine
The Java Application Programming Interface (API)
You've already been introduced to the Java Virtual Machine; it's the base for the Java
platform and is ported onto various hardware-based platforms. The API is a large
collection of ready-made software components that provide many useful capabilities. It
is grouped into libraries of related classes and interfaces; these libraries are known
as packages. The next section, What Can Java Technology Do?, highlights some of the
functionality provided by the API.
Fig 8.3 Java API
The API and Java Virtual Machine insulate the program from the underlying hardware.
As a platform-independent environment, the Java platform can be a bit slower than native
code. However, advances in compiler and virtual machine technologies are bringing
performance close to that of native code without threatening portability.
8.1.5. Introduction to the AWT
The Java programming language class library provides a user interface toolkit called the
Abstract Windowing Toolkit, or the AWT. The AWT is both powerful and flexible.
Newcomers, however, often find that its power is veiled. The class and method
descriptions found in the distributed documentation provide little guidance for the new
programmer. Furthermore, the available examples often leave many important questions
unanswered. Of course, newcomers should expect some difficulty. Effective graphical
user interfaces are inherently challenging to design and implement, and the sometimes
complicated interactions between classes in the AWT only make this task more complex.
Components and containers
A graphical user interface is built of graphical elements called components. Typical
components include such items as buttons, scrollbars, and text fields. Components allow
the user to interact with the program and provide the user with visual feedback about the
state of the program. In the AWT, all user interface components are instances of class
Component or one of its subtypes. Components do not stand alone, but rather are found
within containers. Containers contain and control the layout of components. Containers
are themselves components, and can thus be placed inside other containers. In the AWT,
all containers are instances of class Container or one of its subtypes.
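The component/container nesting described above can be sketched as follows. The example uses Swing's JPanel and JButton, which subclass AWT's Container and Component, so it can be built without a display (the class name ContainerDemo is ours):

```java
import javax.swing.JButton;
import javax.swing.JPanel;

public class ContainerDemo {
    // Builds a small hierarchy: a panel (container) holding another
    // panel, which in turn holds a button (component). Containers are
    // themselves components, so they can be nested freely.
    public static JPanel build() {
        JPanel outer = new JPanel();
        JPanel inner = new JPanel();
        inner.add(new JButton("OK"));  // a component placed in a container
        outer.add(inner);              // a container placed in a container
        return outer;
    }

    public static void main(String[] args) {
        // The outer panel directly contains exactly one child: the inner panel.
        System.out.println(build().getComponentCount()); // prints "1"
    }
}
```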
8.1.6. Introduction to Swing
Swing GUI Components
The Swing toolkit includes a rich array of components: from basic components, such as
buttons and check boxes, to rich and complex components, such as tables and text. Even
deceptively simple components, such as text fields, offer sophisticated functionality, such
as formatted text input or password field behaviour. There are file browsers and dialogs
to suit most needs, and if not, customization is possible. If none of Swing's provided
components are exactly what we need, we can leverage the basic Swing component
functionality to create our own.
Java 2D API
To make our application stand out, convey information visually, or add figures, images,
or animation to the GUI, we'll want to use the Java 2D API. Because Swing is built on
the 2D package, it is trivial to make use of 2D within Swing components. Adding images,
drop shadows, compositing: it's easy with Java 2D.
Pluggable Look-and-Feel Support
Any program that uses Swing components has a choice of look and feel. The classes
shipped by Oracle provide a look and feel that matches that of the platform. The Synth
package allows you to create your own look and feel. The GTK+ look and feel makes
hundreds of existing look and feels available to Swing programs. A program can specify
the look and feel of the platform it is running on, or it can always use the Java look
and feel, and without recompiling, it will just work.
8.1.7. Tools :
For creating this project, we are using a tool called Eclipse. The Eclipse Foundation
(originally created by IBM) provides it as open-source software that is free to use. We
are using Eclipse Kepler 4.3. It is very helpful in creating GUI Java applications.
8.1.7.1 ECLIPSE
Eclipse is an integrated development environment (IDE). It contains a base
workspace and an extensible plug-in system for customizing the environment.
Written mostly in Java, Eclipse can be used to develop applications. By means of
various plug-ins, Eclipse may also be used to develop applications in other
programming languages: Ada, ABAP, C, C++, COBOL, Fortran, Haskell,
JavaScript, Lasso, Perl, PHP, Python, R, Ruby (including Ruby on Rails
framework), Scala, Clojure, Groovy, Scheme, and Erlang. It can also be used to
develop packages for the software Mathematica. Development environments
include the Eclipse Java development tools (JDT) for Java and Scala, Eclipse
CDT for C/C++ and Eclipse PDT for PHP, among others.
Best Support for Latest Java Technologies
It provides first-class comprehensive support for the newest Java technologies
and latest Java enhancements before other IDEs. With its constantly improving
Java Editor, many rich features and an extensive range of tools, templates and
samples, Eclipse IDE sets the standard for developing with cutting edge
technologies out of the box.
Fast & Smart Code Editing
An IDE is much more than a text editor. Eclipse indents lines, matches
words and brackets, and highlights source code syntactically and semantically. It
also provides code templates, coding tips, and refactoring tools. The editor
supports many languages from Java, C/C++, XML and HTML, to PHP, Groovy,
Javadoc, JavaScript and JSP. Because the editor is extensible, we can plug in
support for many other languages.
Easy & Efficient Project Management
Keeping a clear overview of large applications, with thousands of folders and
files, and millions of lines of code, is a daunting task. Eclipse provides different
views of our data, from multiple project windows to helpful tools for setting up
our applications. When new developers join our project, they can understand the
structure of our application because our code is well-organized.
Modeling platform
The Modeling project contains all the official projects of the Eclipse Foundation
focusing on model-based development technologies. They are all compatible with
the Eclipse Modeling Framework created by IBM. Those projects are separated in
several categories: Model Transformation, Model Development Tools, Concrete
Syntax Development, Abstract Syntax Development, Technology and Research,
and Amalgam.
8.2 Post Implementation and Maintenance
After the installation phase is completed and the user is adjusted to the changes created by the
candidate system, evaluation and maintenance begin. Like any system, there is an aging process
that requires periodic maintenance of hardware and software. If the new information is
inconsistent with the design specifications, then changes have to be made. Hardware also
requires periodic maintenance to keep in tune with design specifications. The importance of
maintenance is to continue to bring the new system to standards.
For maintenance, it has to be regularly checked whether all the devices are working
properly. If any of the devices on the network is not working, it has to be checked and
amended in time.
This tool, though, doesn't require a lot of hardware resources, and we can easily make
changes in the tool by writing new code and recompiling it. But it is important to keep
the performance of our image tool up to the level of other tools so that it stays
competitive and in demand.
9 Project Legacy
9.1 Current Status of the project
After performing the testing phase, we found a number of bugs in the application. We
made corrections and made the application fit to run. So the current status of our
project is that it is working fine and performs all its functions well, though this also
depends upon the memory and the operating system we use; these are the main requirements
of this application and need to be met. Our application is very simple, easy to use,
platform independent, portable and provides ease of access.
9.2 Remaining Areas of concern
Operating system
JDK Environment.
Memory requirement
9.3 Technical and Managerial Lessons learnt
From all we have done, we came to know many points that will really help in making
projects in the future:
A deep understanding of the Java language.
In testing, how to test a product so that we can make it more helpful for the client
and ensure it fulfils all the client's requirements.
Last but not least, the maintenance of a product: how we can make it durable for the
future.
10. Implementation
10.1 Functions provided in the proposed system
10.1.1 Home Page
When the user runs the program the home page given below opens.
Fig.10.1.1 Home Page
10.1.2 Main Frame
When the home page is double clicked the following main frame opens. It has all the options for
manipulating the image and inducing the effects into it.
Fig 10.1.2 Main Frame
10.1.3 File menu
The user can open an image file from his computer by using the "Open" option of the File
menu. He/she can also save the image to a desired location on the disk after making
alterations to it. The File menu also has the "Exit" option to close the application
after use.
Fig 10.1.3 File menu
10.1.4 Edit
Using the Edit option the user can undo the effect of the last action taken, i.e. he/she
can reset the image.
Fig 10.1.4 Edit
10.1.5 Effects
This menu has various effects like Grey Image, Invert, Edge Detect, Red Eye View, Blue
Eye View, Brightness, Sharpen and Custom. The user can choose any of these options to
induce the desired effect in the image.
Fig 10.1.5 Effects
10.1.6 Transform
This menu has options like Horizontal and Vertical Stretch, Horizontal and Vertical
Mirror, Rotation, Shear, Zoom In, Zoom Out and Central Crop. The user can apply these
transformation effects to the image of his choice.
Fig 10.1.6 Transform
10.1.7 Help
The help menu provides assistance with the keyboard shortcuts. One can also add notes
in the shortcuts window if one wants to.
Fig 10.1.7 Help
10.1.8 Distortion
Using this option one can distort the image. Three different options are available for
distortion, namely: Diffuse, Offset and Displace.
Diffuse : This filter creates a diffusing effect by randomly displacing pixels in an image.
Offset : This filter translates an image by the amount you specify in the X and Y directions.
Pixels which go off the edge are wrapped back onto the opposite edge.
Displace: This filter distorts an image using a displacement map. The displacement map is a
grayscale image whose gradient at each point is used to displace pixels in the source image.
Fig 10.1.8 Distort
10.1.9 Noise
This option is used to remove noise from an image. Noise in an image is the deviation of
pixels or colours from their original values. This option removes the unwanted noise
from a noisy image.
Fig 10.1.9 Noise
11. Snapshots
11.1.1 HOME PAGE
public void ii1() {
jPanel60 = new javax.swing.JPanel();
jButton60 = new javax.swing.JButton();
setDefaultCloseOperation(javax.swing.WindowConstants.EXIT_ON_CLOSE);
jPanel60.setBackground(new java.awt.Color(0,204,255));
jPanel60.setName("jPanel60"); // NOI18N
jButton60.setIcon(new javax.swing.ImageIcon("E:\\iet2.jpg")); // NOI18N
jButton60.setText("START");
jButton60.setName("jButton60"); // NOI18N
jButton60.addActionListener(new java.awt.event.ActionListener() {
public void actionPerformed(java.awt.event.ActionEvent evt) {
jButton60ActionPerformed(evt);
}});
javax.swing.GroupLayout jPanel60Layout = new javax.swing.GroupLayout(jPanel60);
jPanel60.setLayout(jPanel60Layout);
jPanel60Layout.setHorizontalGroup(
jPanel60Layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING)
.addComponent(jButton60, javax.swing.GroupLayout.PREFERRED_SIZE, 736,
Short.MAX_VALUE));
jPanel60Layout.setVerticalGroup(
jPanel60Layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING)
.addGroup(jPanel60Layout.createSequentialGroup()
.addGap(62, 62, 62)
.addComponent(jButton60, javax.swing.GroupLayout.PREFERRED_SIZE, 401,
javax.swing.GroupLayout.PREFERRED_SIZE)
.addContainerGap(62, Short.MAX_VALUE)));
javax.swing.GroupLayout layout = new javax.swing.GroupLayout(getContentPane());
getContentPane().setLayout(layout);
layout.setHorizontalGroup(
layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING)
.addComponent(jPanel60, javax.swing.GroupLayout.DEFAULT_SIZE,
javax.swing.GroupLayout.DEFAULT_SIZE, Short.MAX_VALUE));
layout.setVerticalGroup(
layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING)
.addComponent(jPanel60, javax.swing.GroupLayout.DEFAULT_SIZE,
javax.swing.GroupLayout.DEFAULT_SIZE, Short.MAX_VALUE));
pack();
//
}
11.1.2 MENU PAGE
setDefaultCloseOperation(javax.swing.WindowConstants.EXIT_ON_CLOSE);
setBackground(new java.awt.Color(255, 153, 255));
setForeground(new java.awt.Color(255, 153, 255));
jScrollPane1.setName("jScrollPane1");
jLabel1.setBackground(new java.awt.Color(0,204,255));
jLabel1.setName("jLabel1");
jScrollPane1.setViewportView(jLabel1);
jPanel1.setBackground(new java.awt.Color(0,204,255));
jPanel1.setForeground(new java.awt.Color(0,204,255));
jPanel1.setName("jPanel1");
jSlider1.setEnabled(false);
jSlider1.setName("jSlider1");
jSlider1.addChangeListener(new javax.swing.event.ChangeListener() {
public void stateChanged(javax.swing.event.ChangeEvent evt) {
jSlider1StateChanged(evt);
}});
jLabel2.setFont(new java.awt.Font("Times New Roman", 1, 12));
jLabel2.setText(" Adjust Value");
jLabel2.setName("jLabel2");
jButton1.setFont(new java.awt.Font("Times New Roman", 1, 12));
jButton1.setText("Apply");
jButton1.setName("jButton1");
jButton1.addActionListener(new java.awt.event.ActionListener() {
public void actionPerformed(java.awt.event.ActionEvent evt) {
jButton1ActionPerformed(evt);
}
});
11.1.3 FILE
jMenuBar1.setName("jMenuBar1");
jMenu1.setText("File");
jMenu1.setFont(new java.awt.Font("Segoe UI", 1, 12));
jMenu1.setName("jMenu1");
jMenuItem1.setAccelerator(javax.swing.KeyStroke.getKeyStroke(java.awt.event.KeyEvent.VK_O, java.awt.event.InputEvent.CTRL_MASK));
jMenuItem1.setText("Open");
jMenuItem1.setName("jMenuItem1");
jMenuItem1.addActionListener(new java.awt.event.ActionListener() {
public void actionPerformed(java.awt.event.ActionEvent evt) {
jMenuItem1ActionPerformed(evt);
}});
jMenu1.add(jMenuItem1);
jMenuItem2.setAccelerator(javax.swing.KeyStroke.getKeyStroke(java.awt.event.KeyEvent.VK_S, java.awt.event.InputEvent.CTRL_MASK));
jMenuItem2.setText("Save");
jMenuItem2.setName("jMenuItem2");
jMenuItem2.addActionListener(new java.awt.event.ActionListener() {
public void actionPerformed(java.awt.event.ActionEvent evt) {
jMenuItem2ActionPerformed(evt);
}});
jMenu1.add(jMenuItem2);
jMenuItem3.setAccelerator(javax.swing.KeyStroke.getKeyStroke(java.awt.event.KeyEvent.VK_X, java.awt.event.InputEvent.CTRL_MASK));
jMenuItem3.setText("Exit");
jMenuItem3.setName("jMenuItem3");
jMenuItem3.addActionListener(new java.awt.event.ActionListener() {
public void actionPerformed(java.awt.event.ActionEvent evt) {
jMenuItem3ActionPerformed(evt);
}});
11.1.4 OPEN
private void jMenuItem1ActionPerformed(java.awt.event.ActionEvent evt) {
JFileChooser jfcOpen = new JFileChooser("c:\\");
jfcOpen.showOpenDialog(this);
File input = jfcOpen.getSelectedFile();
imagefile = input;
try
{
image=ImageIO.read(input);
image1=ImageIO.read(input);
image2=ImageIO.read(input);
}
catch(IOException e1)
{
e1.printStackTrace();
}
jLabel1.setIcon(new ImageIcon(image.getScaledInstance( -1, -1,
BufferedImage.SCALE_DEFAULT)));
repaint();
jScrollPane1.getViewport().add(jLabel1);
repaint();
}
11.1.5 EDIT
jMenu1.add(jMenuItem3);
jMenuBar1.add(jMenu1);
jMenu2.setText("Edit");
jMenu2.setFont(new java.awt.Font("Segoe UI", 1, 12));
jMenu2.setName("jMenu2");
jMenuItem4.setAccelerator(javax.swing.KeyStroke.getKeyStroke(java.awt.event.KeyEvent.VK_Z, java.awt.event.InputEvent.CTRL_MASK));
jMenuItem4.setText("Reset");
jMenuItem4.setName("jMenuItem4");
jMenuItem4.addActionListener(new java.awt.event.ActionListener() {
public void actionPerformed(java.awt.event.ActionEvent evt) {
jMenuItem4ActionPerformed(evt); }});
11.1.6 EFFECTS MENU
jMenu2.add(jMenuItem4);
jMenuBar1.add(jMenu2);
jMenu3.setText("Effects");
jMenu3.setFont(new java.awt.Font("Segoe UI", 1, 12));
jMenu3.setName("jMenu3");
jMenu3.addActionListener(new java.awt.event.ActionListener() {
public void actionPerformed(java.awt.event.ActionEvent evt) {
jMenu3ActionPerformed(evt);
}});
jMenuItem9.setForeground(new java.awt.Color(51, 0, 51));
jMenuItem9.setText("GreyImage");
jMenuItem9.setName("jMenuItem9");
jMenuItem9.addActionListener(new java.awt.event.ActionListener() {
public void actionPerformed(java.awt.event.ActionEvent evt) {
jMenuItem9ActionPerformed(evt);}});
jMenu3.add(jMenuItem9);
jMenuItem5.setForeground(new java.awt.Color(51, 0, 51));
jMenuItem5.setText("Invert");
jMenuItem5.setName("jMenuItem5");
jMenuItem5.addActionListener(new java.awt.event.ActionListener() {
public void actionPerformed(java.awt.event.ActionEvent evt) {
jMenuItem5ActionPerformed(evt);
}});
jMenu3.add(jMenuItem5);
jMenuItem21.setForeground(new java.awt.Color(51, 0, 51));
jMenuItem21.setText("EdgeDetect");
jMenuItem21.setName("jMenuItem21");
jMenuItem21.addActionListener(new java.awt.event.ActionListener() {
public void actionPerformed(java.awt.event.ActionEvent evt) {
jMenuItem21ActionPerformed(evt);
}});
jMenu3.add(jMenuItem21);
jMenuItem14.setForeground(new java.awt.Color(51, 0, 51));
jMenuItem14.setText("RedEyeView");
jMenuItem14.setName("jMenuItem14");
jMenuItem14.addActionListener(new java.awt.event.ActionListener() {
public void actionPerformed(java.awt.event.ActionEvent evt) {
jMenuItem14ActionPerformed(evt);
}});
jMenu3.add(jMenuItem14);
jMenuItem19.setText("Brightness");
jMenuItem19.setName("jMenuItem19");
jMenuItem19.addActionListener(new java.awt.event.ActionListener() {
public void actionPerformed(java.awt.event.ActionEvent evt) {
jMenuItem19ActionPerformed(evt);
}});
jMenu3.add(jMenuItem19);
jMenuItem18.setText("BlueEyeView");
jMenuItem18.setName("jMenuItem18");
jMenuItem18.addActionListener(new java.awt.event.ActionListener() {
public void actionPerformed(java.awt.event.ActionEvent evt) {
jMenuItem18ActionPerformed(evt);
}});
jMenu3.add(jMenuItem18);
jMenuItem16.setText("Sharpen");
jMenuItem16.setName("jMenuItem16");
jMenuItem16.addActionListener(new java.awt.event.ActionListener() {
public void actionPerformed(java.awt.event.ActionEvent evt) {
jMenuItem16ActionPerformed(evt);
}});
jMenu3.add(jMenuItem16);
jMenuItem23.setText("Custom");
jMenuItem23.setName("jMenuItem23");
jMenuItem23.addActionListener(new java.awt.event.ActionListener() {
public void actionPerformed(java.awt.event.ActionEvent evt) {
jMenuItem23ActionPerformed(evt); }});
11.1.7 GREY IMAGE
private void jMenuItem9ActionPerformed(java.awt.event.ActionEvent evt) {
currentAction='I';
previewImage = grayImage(imageWorkQueue[nextImageIndex(0)]);
displayImage(previewImage);
image=previewImage;
}
public BufferedImage grayImage(BufferedImage img)
{
img = image;
ColorSpace cs = ColorSpace.getInstance(ColorSpace.CS_GRAY);
ColorConvertOp op = new ColorConvertOp(cs, null);
img = op.filter(img, null);
return img;
}
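ColorConvertOp performs a colorimetric RGB-to-grayscale conversion internally. As a rough mental model, each output pixel is a weighted average of the three channels. The sketch below (class name GrayDemo and the exact weights are ours, for illustration only) shows that approximation next to the same ColorConvertOp call used above:

```java
import java.awt.color.ColorSpace;
import java.awt.image.BufferedImage;
import java.awt.image.ColorConvertOp;

public class GrayDemo {
    // Approximate luminance; the library's CS_GRAY conversion is
    // colorimetric, but a weighted channel average is a close mental model.
    public static int toGray(int r, int g, int b) {
        return (int) Math.round(0.299 * r + 0.587 * g + 0.114 * b);
    }

    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(2, 2, BufferedImage.TYPE_INT_RGB);
        img.setRGB(0, 0, 0xFF0000); // pure red
        ColorConvertOp op = new ColorConvertOp(ColorSpace.getInstance(ColorSpace.CS_GRAY), null);
        BufferedImage gray = op.filter(img, null);
        // after conversion all three channels of a pixel are equal
        System.out.println(Integer.toHexString(gray.getRGB(0, 0)));
    }
}
```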
11.1.8 INVERT
private void jMenuItem5ActionPerformed(java.awt.event.ActionEvent evt) {
BufferedImage img = image;
Color col;
for (int x = 0; x < img.getWidth(); x++) { //width
for (int y = 0; y < img.getHeight(); y++) { //height
int RGBA = img.getRGB(x, y); //gets RGBA data for the specific pixel
col = new Color(RGBA, true); //get the color data of the specific pixel
col = new Color(Math.abs(col.getRed() - 255),
Math.abs(col.getGreen() - 255), Math.abs(col.getBlue() - 255)); // invert each channel
img.setRGB(x, y, col.getRGB()); //set the pixel to the altered colors
}
}
Graphics2D gg = img.createGraphics();
gg.drawImage(img, 0, 0, img.getWidth(null), img.getHeight(null), null);
jLabel1.setIcon(new ImageIcon(img.getScaledInstance( -1, -1,
BufferedImage.SCALE_DEFAULT)));
repaint();
}
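Since every channel is in 0..255, Math.abs(c - 255) is just 255 - c, which is a bitwise XOR with 0xFF. All three packed channels can therefore be inverted in one operation while leaving alpha untouched; a minimal sketch (class name InvertDemo is ours):

```java
public class InvertDemo {
    // Inverting a channel c via Math.abs(c - 255) equals 255 - c, i.e. c ^ 0xFF.
    // XOR-ing the low 24 bits inverts R, G and B at once and preserves alpha.
    public static int invertPixel(int argb) {
        return argb ^ 0x00FFFFFF;
    }

    public static void main(String[] args) {
        System.out.println(Integer.toHexString(invertPixel(0xFF102030)));
    }
}
```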
11.1.9 EDGE DETECT
public BufferedImage EdgeDetect(BufferedImage img)
{
img = image;
float data[] = { 1.0f, 0.0f, -1.0f, 1.0f, 0.0f, -1.0f, 1.0f, 0.0f,
-1.0f };
Kernel kernel = new Kernel(3, 3, data);
ConvolveOp convolve = new ConvolveOp(kernel, ConvolveOp.EDGE_NO_OP,
null);
convolve.filter(img, image2);
return image2;
}
private void jMenuItem21ActionPerformed(java.awt.event.ActionEvent evt) {
currentAction='b';
//setMenuItemEnabled(false);
previewImage = EdgeDetect(imageWorkQueue[nextImageIndex(0)]);
displayImage(previewImage);
image=previewImage;
image=image2;
}
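The 3x3 kernel above is a Prewitt-style horizontal-gradient mask: each output value is the left column of the neighbourhood minus the right column, so flat regions go to zero and vertical edges respond strongly. A hand-rolled check of that behaviour (class name EdgeKernelDemo is ours, for illustration):

```java
public class EdgeKernelDemo {
    // Same kernel as in EdgeDetect(): +1 on the left column, -1 on the right.
    static final float[] KERNEL = { 1, 0, -1, 1, 0, -1, 1, 0, -1 };

    // Dot product of the kernel with one 3x3 neighbourhood (row-major).
    public static float convolve3x3(float[] neighborhood) {
        float sum = 0;
        for (int i = 0; i < 9; i++) sum += KERNEL[i] * neighborhood[i];
        return sum;
    }

    public static void main(String[] args) {
        float[] flat = { 5, 5, 5, 5, 5, 5, 5, 5, 5 };
        System.out.println(convolve3x3(flat)); // flat region -> 0
    }
}
```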
11.1.10 RED EYE VIEW
private void jMenuItem14ActionPerformed(java.awt.event.ActionEvent evt) {
BufferedImage img = image;
Color col;
for (int x = 0; x < img.getWidth(); x++) { //width
for (int y = 0; y < img.getHeight(); y++) { //height
int RGBA = img.getRGB(x, y); //gets RGBA data for the specific pixel
col = new Color(RGBA, true); //get the color data of the specific pixel
col = new Color(Math.abs(col.getRed() - 255),1,1); // invert red; suppress green and blue
img.setRGB(x, y, col.getRGB()); //set the pixel to the altered colors
}
}
jLabel1.setIcon(new ImageIcon(img.getScaledInstance(-1, -1, BufferedImage.SCALE_DEFAULT)));
repaint();
}
11.1.11 BRIGHTNESS (INCREASED)
public BufferedImage brightenImage(BufferedImage img)
{
for (int x = 0; x < img.getWidth(); x++)
{ //width
for (int y = 0; y < img.getHeight(); y++)
{ //height
Color color = new Color(img.getRGB(x, y));
Color brighter = color.brighter();
img.setRGB(x, y, brighter.getRGB());
}
}
return img;
}
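Color.brighter() and Color.darker() scale each channel by a fixed factor (0.7 in the reference implementation), so repeated calls converge toward white or black rather than adding a constant. A small check of that behaviour (class name BrightnessDemo is ours):

```java
import java.awt.Color;

public class BrightnessDemo {
    // brighter() divides each channel by ~0.7 (clamped at 255);
    // darker() multiplies by ~0.7 (clamped at 0).
    public static int brightenRed(int r, int g, int b) {
        return new Color(r, g, b).brighter().getRed();
    }

    public static int darkenRed(int r, int g, int b) {
        return new Color(r, g, b).darker().getRed();
    }

    public static void main(String[] args) {
        System.out.println(brightenRed(100, 100, 100));
        System.out.println(darkenRed(100, 100, 100));
    }
}
```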
11.1.12 BRIGHTNESS (DECREASED)
public BufferedImage darkenImage(BufferedImage img)
{
for (int x = 0; x < img.getWidth(); x++)
{
for (int y = 0; y < img.getHeight(); y++)
{
Color color = new Color(img.getRGB(x, y));
Color brighter = color.darker();
img.setRGB(x, y, brighter.getRGB());
}
}
return img;
}
11.1.13 BLUE EYE VIEW
private void jMenuItem18ActionPerformed(java.awt.event.ActionEvent evt) {
BufferedImage img = image;
Color col;
for (int x = 0; x < img.getWidth(); x++) { //width
for (int y = 0; y < img.getHeight(); y++) { //height
int RGBA = img.getRGB(x, y); //gets RGBA data for the specific pixel
col = new Color(RGBA, true); //get the color data of the specific pixel
col = new Color(1,1,Math.abs(col.getBlue() - 255)); // invert blue; suppress red and green
img.setRGB(x, y, col.getRGB()); //set the pixel to the altered colors
}
}
jLabel1.setIcon(new ImageIcon(img.getScaledInstance(-1, -1, BufferedImage.SCALE_DEFAULT)));
repaint();
}
11.1.14 SHARPEN
public BufferedImage Sharpen(BufferedImage img)
{
img = image;
float data[] = { -1.0f, -1.0f, -1.0f, -1.0f, 9.0f, -1.0f, -1.0f, -1.0f,
-1.0f };
Kernel kernel = new Kernel(3, 3, data);
ConvolveOp convolve = new ConvolveOp(kernel, ConvolveOp.EDGE_NO_OP,
null);
convolve.filter(img, image2);
return image2;
}
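A convolution kernel whose weights sum to 1 preserves overall brightness: the sharpen kernel (centre 9, eight -1s) sums to 1, while the horizontal-gradient edge kernel sums to 0 and drives flat areas to black. A quick check of both sums (class name KernelSumDemo is ours):

```java
public class KernelSumDemo {
    // Sum of all kernel weights: 1 means brightness-preserving,
    // 0 means flat regions map to zero (as in edge detection).
    public static float sum(float[] kernel) {
        float s = 0;
        for (float v : kernel) s += v;
        return s;
    }

    public static void main(String[] args) {
        float[] sharpen = { -1, -1, -1, -1, 9, -1, -1, -1, -1 };
        System.out.println(sum(sharpen));
    }
}
```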
11.1.15 CUSTOM
public BufferedImage Custom(BufferedImage img)
{
img=image;
float data[] = { 0.5f, 0.5f, 0.5f, 0.5f,0.5f, 0.5f,0.5f, 0.5f, 0.5f };
Kernel kernel = new Kernel(3, 3, data);
ConvolveOp convolve = new ConvolveOp(kernel, ConvolveOp.EDGE_NO_OP,
null);
convolve.filter(img, image2);
return image2;
}
11.1.16 TRANSFORM MENU
jMenu3.add(jMenuItem23);
jMenuBar1.add(jMenu3);
jMenu4.setText("Transform");
jMenu4.setFont(new java.awt.Font("Segoe UI", 1, 12));
jMenu4.setName("jMenu4");
jMenu4.addActionListener(new java.awt.event.ActionListener() {
public void actionPerformed(java.awt.event.ActionEvent evt) {
jMenu4ActionPerformed(evt);
}});
jMenuItem13.setText("HorizontalStretch");
jMenuItem13.setName("jMenuItem13");
jMenuItem13.addActionListener(new java.awt.event.ActionListener() {
public void actionPerformed(java.awt.event.ActionEvent evt) {
jMenuItem13ActionPerformed(evt);
}});
jMenu4.add(jMenuItem13);
jMenuItem7.setText("HorizontalMirror");
jMenuItem7.setName("jMenuItem7");
jMenuItem7.addActionListener(new java.awt.event.ActionListener() {
public void actionPerformed(java.awt.event.ActionEvent evt) {
jMenuItem7ActionPerformed(evt);
}});
jMenu4.add(jMenuItem7);
jMenuItem15.setText("VerticalStretch");
jMenuItem15.setName("jMenuItem15");
jMenuItem15.addActionListener(new java.awt.event.ActionListener() {
public void actionPerformed(java.awt.event.ActionEvent evt) {
jMenuItem15ActionPerformed(evt);
}});
jMenu4.add(jMenuItem15);
jMenuItem6.setText("VerticalMirror");
jMenuItem6.setName("jMenuItem6");
jMenuItem6.addActionListener(new java.awt.event.ActionListener() {
public void actionPerformed(java.awt.event.ActionEvent evt) {
jMenuItem6ActionPerformed(evt);
}});
jMenu4.add(jMenuItem6);
jMenuItem11.setText("Rotate-180 deg");
jMenuItem11.setName("jMenuItem11");
jMenuItem11.addActionListener(new java.awt.event.ActionListener() {
public void actionPerformed(java.awt.event.ActionEvent evt) {
jMenuItem11ActionPerformed(evt);
}});
jMenu4.add(jMenuItem11);
jMenuItem24.setText("Rotate-45 deg");
jMenuItem24.setName("jMenuItem24");
jMenuItem24.addActionListener(new java.awt.event.ActionListener() {
public void actionPerformed(java.awt.event.ActionEvent evt) {
jMenuItem24ActionPerformed(evt);
}});
jMenu4.add(jMenuItem24);
jMenuItem26.setText("Shear");
jMenuItem26.setName("jMenuItem26");
jMenuItem26.addActionListener(new java.awt.event.ActionListener() {
public void actionPerformed(java.awt.event.ActionEvent evt) {
jMenuItem26ActionPerformed(evt);
}});
jMenu4.add(jMenuItem26);
jMenu7.setText("ZoomIn");
jMenu7.setName("jMenu7");
jMenuItem30.setText("250%");
jMenuItem30.setName("jMenuItem30");
jMenuItem30.addActionListener(new java.awt.event.ActionListener() {
public void actionPerformed(java.awt.event.ActionEvent evt) {
jMenuItem30ActionPerformed(evt);
}});
jMenu7.add(jMenuItem30);
jMenuItem31.setText("300%");
jMenuItem31.setName("jMenuItem31");
jMenuItem31.addActionListener(new java.awt.event.ActionListener() {
public void actionPerformed(java.awt.event.ActionEvent evt) {
jMenuItem31ActionPerformed(evt);
}});
jMenu7.add(jMenuItem31);
jMenuItem32.setText("350%");
jMenuItem32.setName("jMenuItem32");
jMenuItem32.addActionListener(new java.awt.event.ActionListener() {
public void actionPerformed(java.awt.event.ActionEvent evt) {
jMenuItem32ActionPerformed(evt);
}});
jMenu7.add(jMenuItem32);
jMenuItem33.setText("400%");
jMenuItem33.setName("jMenuItem33");
jMenuItem33.addActionListener(new java.awt.event.ActionListener() {
public void actionPerformed(java.awt.event.ActionEvent evt) {
jMenuItem33ActionPerformed(evt);
}});
jMenu7.add(jMenuItem33);
jMenu4.add(jMenu7);
jMenu6.setText("ZoomOut");
jMenu6.setName("jMenu6");
jMenuItem25.setText("200%");
jMenuItem25.setName("jMenuItem25");
jMenuItem25.addActionListener(new java.awt.event.ActionListener() {
public void actionPerformed(java.awt.event.ActionEvent evt) {
jMenuItem25ActionPerformed(evt);
}});
jMenu6.add(jMenuItem25);
jMenuItem27.setText("150%");
jMenuItem27.setName("jMenuItem27");
jMenuItem27.addActionListener(new java.awt.event.ActionListener() {
public void actionPerformed(java.awt.event.ActionEvent evt) {
jMenuItem27ActionPerformed(evt);
}});
jMenu6.add(jMenuItem27);
jMenuItem28.setText("100%");
jMenuItem28.setName("jMenuItem28");
jMenuItem28.addActionListener(new java.awt.event.ActionListener() {
public void actionPerformed(java.awt.event.ActionEvent evt) {
jMenuItem28ActionPerformed(evt);
}});
jMenu6.add(jMenuItem28);
jMenuItem29.setText("50%");
jMenuItem29.setName("jMenuItem29");
jMenuItem29.addActionListener(new java.awt.event.ActionListener() {
public void actionPerformed(java.awt.event.ActionEvent evt) {
jMenuItem29ActionPerformed(evt);
}});
jMenu6.add(jMenuItem29);
jMenu4.add(jMenu6);
jMenuItem34.setText("Central Crop");
jMenuItem34.setName("jMenuItem34");
jMenuItem34.addActionListener(new java.awt.event.ActionListener() {
public void actionPerformed(java.awt.event.ActionEvent evt) {
jMenuItem34ActionPerformed(evt);
}});
11.1.17 HORIZONTAL STRETCH
public BufferedImage HorizontalStrech(BufferedImage img)
{
img = image;
AffineTransform tx = AffineTransform.getScaleInstance(2, 1);
tx.translate(0, 0);
AffineTransformOp op = new AffineTransformOp(tx,
AffineTransformOp.TYPE_NEAREST_NEIGHBOR);
img = op.filter(img, null);
return img;
}
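When the destination passed to AffineTransformOp.filter is null, the op allocates an image sized to the transformed bounds of the source, so a 2x horizontal scale doubles the width while the height is unchanged. A standalone sketch of the same call (class name StretchDemo is ours):

```java
import java.awt.geom.AffineTransform;
import java.awt.image.AffineTransformOp;
import java.awt.image.BufferedImage;

public class StretchDemo {
    // Same transform as HorizontalStrech(): scale x by 2, leave y alone.
    public static BufferedImage stretchH(BufferedImage src) {
        AffineTransform tx = AffineTransform.getScaleInstance(2, 1);
        AffineTransformOp op = new AffineTransformOp(tx,
                AffineTransformOp.TYPE_NEAREST_NEIGHBOR);
        return op.filter(src, null); // null dst -> sized to transformed bounds
    }

    public static void main(String[] args) {
        BufferedImage out = stretchH(new BufferedImage(10, 8, BufferedImage.TYPE_INT_RGB));
        System.out.println(out.getWidth() + "x" + out.getHeight());
    }
}
```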
11.1.18 HORIZONTAL MIRROR
public BufferedImage flipHorizontal(BufferedImage img)
{
img = image;
AffineTransform tx = AffineTransform.getScaleInstance(-1, 1);
tx.translate(-img.getWidth(null), 0);
AffineTransformOp op = new AffineTransformOp(tx,
AffineTransformOp.TYPE_NEAREST_NEIGHBOR);
img = op.filter(img, null);
return img;
}
11.1.19 VERTICAL STRETCH
public BufferedImage VerticalStrech(BufferedImage img)
{
img = image;
AffineTransform tx = AffineTransform.getScaleInstance(1, 2);
tx.translate(0,0);
AffineTransformOp op = new AffineTransformOp(tx,
AffineTransformOp.TYPE_NEAREST_NEIGHBOR);
img = op.filter(img, null);
return img;
}
11.1.20 VERTICAL MIRROR
public BufferedImage flipVertical(BufferedImage img)
{
img = image;
AffineTransform tx = AffineTransform.getScaleInstance(1, -1);
tx.translate(0, -img.getHeight(null));
AffineTransformOp op = new AffineTransformOp(tx,
AffineTransformOp.TYPE_NEAREST_NEIGHBOR);
img = op.filter(img, null);
return img;
}
11.1.21 ROTATE 180 DEGREES
public BufferedImage flip180deg(BufferedImage img)
{
img=image;
AffineTransform tx = AffineTransform.getScaleInstance(-1, -1);
tx.translate(-img.getWidth(null),-img.getHeight(null));
AffineTransformOp op = new AffineTransformOp(tx,
AffineTransformOp.TYPE_NEAREST_NEIGHBOR);
img = op.filter(img, null);
return img;
}
11.1.22 ROTATE 45 DEGREES
public BufferedImage flip45deg(BufferedImage img)
{
img=image;
AffineTransform tx = AffineTransform.getRotateInstance(Math.toRadians(45));
tx.translate(80,10);
AffineTransformOp op = new AffineTransformOp(tx,
AffineTransformOp.TYPE_NEAREST_NEIGHBOR);
img = op.filter(img, null);
return img;
}
11.1.23 SHEAR
public BufferedImage Shear(BufferedImage img)
{
img=image;
AffineTransform tx = AffineTransform.getShearInstance(1,0.1);
tx.translate(80,10);
AffineTransformOp op = new AffineTransformOp(tx,
AffineTransformOp.TYPE_NEAREST_NEIGHBOR);
img = op.filter(img, null);
return img;
}
11.1.24 ZOOM IN
public BufferedImage zoomin(BufferedImage img)
{
img=image;
AffineTransform tx = AffineTransform.getScaleInstance(2, 2);
AffineTransformOp op = new AffineTransformOp(tx,
AffineTransformOp.TYPE_NEAREST_NEIGHBOR);
img = op.filter(img, null);
return img;
}
public BufferedImage pezoom400(BufferedImage img)
{
img=image;
AffineTransform tx = AffineTransform.getScaleInstance(1.2, 1.2);
tx.translate(1.2, 1.2);
AffineTransformOp op = new AffineTransformOp(tx,
AffineTransformOp.TYPE_NEAREST_NEIGHBOR);
img = op.filter(img, null);
return img;
}
11.1.25 ZOOM OUT
public BufferedImage nezoom50(BufferedImage img)
{
img=image;
AffineTransform tx = AffineTransform.getScaleInstance(0.2, 0.2);
tx.translate(0.2,0.2);
AffineTransformOp op = new AffineTransformOp(tx,
AffineTransformOp.TYPE_NEAREST_NEIGHBOR);
img = op.filter(img, null);
return img;
}
11.1.26 CENTRAL CROP
public BufferedImage crop(BufferedImage img, int scalex, int scaley)
{
img=image2;
AffineTransform tx = new AffineTransform();
tx.scale(1.5,1.5);
AffineTransformOp op = new AffineTransformOp(tx,
AffineTransformOp.TYPE_NEAREST_NEIGHBOR);
img = op.filter(img, null);
float data[] = {0.10f, 0.10f, 0.10f,0.10f, 0.20f, 0.10f,0.10f, 0.10f, 0.10f };
Kernel kernel = new Kernel(3, 3, data);
ConvolveOp convolve = new ConvolveOp(kernel, ConvolveOp.EDGE_NO_OP,
null);
convolve.filter(img, image2);
return image2;
}
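The crop() method above actually scales and then blurs the image rather than cropping it. A genuine central crop can be done with BufferedImage.getSubimage, which returns a view sharing the original raster; a minimal sketch (class name CropDemo and its parameters are ours, for illustration):

```java
import java.awt.image.BufferedImage;

public class CropDemo {
    // Take a w x h region from the centre of src. getSubimage returns a
    // view over the same raster, so no pixel data is copied.
    public static BufferedImage centralCrop(BufferedImage src, int w, int h) {
        int x = (src.getWidth() - w) / 2;
        int y = (src.getHeight() - h) / 2;
        return src.getSubimage(x, y, w, h);
    }

    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(10, 10, BufferedImage.TYPE_INT_RGB);
        BufferedImage c = centralCrop(img, 4, 4);
        System.out.println(c.getWidth() + "x" + c.getHeight());
    }
}
```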
11.1.27 HELP MENU
jMenu4.add(jMenuItem34);
jMenuBar1.add(jMenu4);
jMenu5.setText("Help");
jMenu5.setFont(new java.awt.Font("Segoe UI", 1, 12));
jMenu5.setName("jMenu5");
jMenuItem8.setText("Shortcuts");
jMenuItem8.setName("jMenuItem8");
jMenuItem8.addActionListener(new java.awt.event.ActionListener() {
public void actionPerformed(java.awt.event.ActionEvent evt) {
jMenuItem8ActionPerformed(evt);
}});
11.1.28 SHORTCUTS
private void hp()
{
jPanel50 = new javax.swing.JPanel();
jLabel50 = new javax.swing.JLabel();
jLabel51 = new javax.swing.JLabel();
jLabel52 = new javax.swing.JLabel();
jLabel53 = new javax.swing.JLabel();
jLabel54 = new javax.swing.JLabel();
jLabel56 = new javax.swing.JLabel();
jLabel57 = new javax.swing.JLabel();
jScrollPane1 = new javax.swing.JScrollPane();
jTextArea50 = new javax.swing.JTextArea();
jButton50 = new javax.swing.JButton();
setDefaultCloseOperation(javax.swing.WindowConstants.EXIT_ON_CLOSE);
jPanel50.setBackground(new java.awt.Color(51, 204, 255));
jPanel50.setName("jPanel50");
jLabel50.setIcon(new
javax.swing.ImageIcon("F:\\final\\Image007\\src\\img\\Details1.jpg"));
jLabel50.setText("jLabel1");
jLabel50.setName("jLabel50");
jLabel51.setFont(new java.awt.Font("Berlin Sans FB", 3, 11));
jLabel51.setText("Control ShortCuts");
jLabel51.setName("jLabel51");
jLabel52.setText("Loading an image : Ctrl+O");
jLabel52.setName("jLabel52");
jLabel53.setText("Exit : Ctrl+X");
jLabel53.setName("jLabel53");
jLabel54.setText("For saving : Ctrl+S");
jLabel54.setName("jLabel54");
jLabel57.setName("jLabel57");
jScrollPane1.setName("jScrollPane1");
jTextArea50.setColumns(20);
jTextArea50.setRows(5);
jTextArea50.setText("Notes Here");
jTextArea50.setName("jTextArea50");
jScrollPane1.setViewportView(jTextArea50);
jButton50.setText("BACK");
jButton50.setName("jButton50");
jButton50.addActionListener(new java.awt.event.ActionListener() {
public void actionPerformed(java.awt.event.ActionEvent evt) {
jButton50ActionPerformed(evt);
}});
javax.swing.GroupLayout jPanel50Layout = new javax.swing.GroupLayout(jPanel50);
jPanel50.setLayout(jPanel50Layout);
jPanel50Layout.setHorizontalGroup(
jPanel50Layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING)
.addGroup(jPanel50Layout.createSequentialGroup()
.addContainerGap()
.addGroup(jPanel50Layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING)
.addComponent(jLabel57)
.addComponent(jLabel56))
.addGap(949, 949, 949))
.addGroup(jPanel50Layout.createSequentialGroup()
.addContainerGap()
.addComponent(jScrollPane1, javax.swing.GroupLayout.PREFERRED_SIZE, 787,
javax.swing.GroupLayout.PREFERRED_SIZE)
.addContainerGap(252, Short.MAX_VALUE))
.addComponent(jLabel50, javax.swing.GroupLayout.PREFERRED_SIZE, 925,
javax.swing.GroupLayout.PREFERRED_SIZE)
.addGroup(javax.swing.GroupLayout.Alignment.TRAILING,
jPanel50Layout.createSequentialGroup()
.addGroup(jPanel50Layout.createParallelGroup(javax.swing.GroupLayout.Alignment.TRAILING)
.addComponent(jLabel53)
.addComponent(jLabel54)
.addGroup(jPanel50Layout.createSequentialGroup()
.addComponent(jButton50)
.addPreferredGap(javax.swing.LayoutStyle.ComponentPlacement.RELATED,
610, Short.MAX_VALUE)
.addComponent(jLabel51))
.addComponent(jLabel52))
.addGap(287, 287, 287)));
jPanel50Layout.setVerticalGroup(
jPanel50Layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING)
.addGroup(jPanel50Layout.createSequentialGroup()
.addComponent(jLabel50, javax.swing.GroupLayout.PREFERRED_SIZE, 80,
javax.swing.GroupLayout.PREFERRED_SIZE)
.addPreferredGap(javax.swing.LayoutStyle.ComponentPlacement.RELATED)
.addGroup(jPanel50Layout.createParallelGroup(javax.swing.GroupLayout.Alignment.TRAILING)
.addComponent(jLabel51)
.addComponent(jButton50))
.addGap(18, 18, 18)
.addComponent(jLabel52)
.addGap(18, 18, 18)
.addComponent(jLabel54)
.addGap(18, 18, 18)
.addComponent(jLabel53)
.addGap(34, 34, 34)
.addComponent(jScrollPane1, javax.swing.GroupLayout.PREFERRED_SIZE, 154,
javax.swing.GroupLayout.PREFERRED_SIZE)
.addPreferredGap(javax.swing.LayoutStyle.ComponentPlacement.RELATED, 34,
Short.MAX_VALUE)
.addComponent(jLabel56)
.addPreferredGap(javax.swing.LayoutStyle.ComponentPlacement.RELATED)
.addComponent(jLabel57)
.addGap(17, 17, 17)));
javax.swing.GroupLayout layout = new javax.swing.GroupLayout(getContentPane());
getContentPane().setLayout(layout);
layout.setHorizontalGroup(
layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING)
.addGroup(layout.createSequentialGroup()
.addComponent(jPanel50, javax.swing.GroupLayout.PREFERRED_SIZE, 925,
javax.swing.GroupLayout.PREFERRED_SIZE)
.addContainerGap(124, Short.MAX_VALUE)));
layout.setVerticalGroup(
layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING)
.addComponent(jPanel50, javax.swing.GroupLayout.DEFAULT_SIZE,
javax.swing.GroupLayout.DEFAULT_SIZE, Short.MAX_VALUE));
pack();
}
11.1.29 DISTORTION MENU
jMenu5.add(jMenuItem10);
jMenuBar1.add(jMenu5);
jMenu8.setText("Distortion");
Diffuse = new JMenuItem("Diffuse");
jMenu8.add(Diffuse);
Diffuse.addActionListener(this);
Offset = new JMenuItem("Offset");
jMenu8.add(Offset);
Offset.addActionListener(this);
Displace = new JMenuItem("Displace");
jMenu8.add(Displace);
Displace.addActionListener(this);
11.1.30 DIFFUSE
public void actionPerformed(ActionEvent e)
{
if(e.getSource()==Diffuse)
{
DiffuseFilter df = new DiffuseFilter();
df.setInterpolation(1);
df.setEdgeAction(1);
BufferedImage resultImage = df.filter(image1, bimage);
displayImage(resultImage);
}
import java.awt.*;
import java.awt.image.*;
/**
* This filter diffuses an image by moving its pixels in random directions.
*/
public class DiffuseFilter extends TransformFilter {
private float[] sinTable, cosTable;
private float scale = 4;
public DiffuseFilter() {
setEdgeAction(CLAMP);
}
/**
* Specifies the scale of the texture.
* @param scale the scale of the texture.
* @min-value 1
* @max-value 100+
* @see #getScale
*/
public void setScale(float scale) {
this.scale = scale;
}
/**
* Returns the scale of the texture.
* @return the scale of the texture.
* @see #setScale
*/
public float getScale() {
return scale;
}
protected void transformInverse(int x, int y, float[] out) {
int angle = (int)(Math.random() * 255);
float distance = (float)Math.random();
out[0] = x + distance * sinTable[angle];
out[1] = y + distance * cosTable[angle];
}
public BufferedImage filter( BufferedImage src, BufferedImage dst ) {
sinTable = new float[256];
cosTable = new float[256];
for (int i = 0; i < 256; i++) {
float angle = ImageMath.TWO_PI*i/256f;
sinTable[i] = (float)(scale*Math.sin(angle));
cosTable[i] = (float)(scale*Math.cos(angle));
}
return super.filter( src, dst );
}
public String toString() {
return "Distort/Diffuse...";
}
}
11.1.31 OFFSET
else if(e.getSource()==Offset)
{
OffsetFilter of = new OffsetFilter();
of.setXOffset(200);
of.setYOffset(300);
BufferedImage resultImage1 = of.filter(image1, bimage);
displayImage(resultImage1);
}
import java.awt.*;
import java.awt.image.*;
public class OffsetFilter extends TransformFilter {
private int width, height;
private int xOffset, yOffset;
private boolean wrap;
public OffsetFilter() {
this(0, 0, true);
}
public OffsetFilter(int xOffset, int yOffset, boolean wrap) {
this.xOffset = xOffset;
this.yOffset = yOffset;
this.wrap = wrap;
setEdgeAction( ZERO );
}
public void setXOffset(int xOffset) {
this.xOffset = xOffset;
}
public int getXOffset() {
return xOffset;
}
public void setYOffset(int yOffset) {
this.yOffset = yOffset;
}
public int getYOffset() {
return yOffset;
}
public void setWrap(boolean wrap) {
this.wrap = wrap;
}
public boolean getWrap() {
return wrap;
}
protected void transformInverse(int x, int y, float[] out) {
if ( wrap ) {
out[0] = (x+width-xOffset) % width;
out[1] = (y+height-yOffset) % height;
} else {
out[0] = x-xOffset;
out[1] = y-yOffset;
}
}
public BufferedImage filter( BufferedImage src, BufferedImage dst ) {
this.width = src.getWidth();
this.height = src.getHeight();
if ( wrap ) {
while (xOffset < 0)
xOffset += width;
while (yOffset < 0)
yOffset += height;
xOffset %= width;
yOffset %= height;
}
return super.filter( src, dst );
}
public String toString() {
return "Distort/Offset...";
}
}
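The interesting part of OffsetFilter is the wrap arithmetic: adding the width before the modulo keeps the result non-negative, so each destination pixel maps back to a valid source coordinate even for offsets near the image edge. Isolated as a one-liner (class name OffsetDemo is ours):

```java
public class OffsetDemo {
    // Same expression as OffsetFilter.transformInverse in wrap mode:
    // adding width before % avoids a negative result for any
    // xOffset already normalised into [0, width).
    public static int wrapX(int x, int width, int xOffset) {
        return (x + width - xOffset) % width;
    }

    public static void main(String[] args) {
        System.out.println(wrapX(0, 100, 30));
    }
}
```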
11.1.32 DISPLACE
else if(e.getSource()==Displace)
{
DisplaceFilter dpf = new DisplaceFilter();
dpf.setAmount(20.0f);
BufferedImage resultImage2 = dpf.filter(image1, bimage);
displayImage(resultImage2);
}
import java.awt.*;
import java.awt.image.*;
/*
* A filter which simulates the appearance of looking through glass. A separate grayscale
displacement image is provided and
* pixels in the source image are displaced according to the gradient of the displacement map.
*/
public class DisplaceFilter extends TransformFilter {
private float amount = 1;
private BufferedImage displacementMap = null;
private int[] xmap, ymap;
private int dw, dh;
public DisplaceFilter() {
}
/**
* Set the displacement map.
* @param displacementMap an image representing the displacement at each point
* @see #getDisplacementMap
*/
public void setDisplacementMap(BufferedImage displacementMap) {
this.displacementMap = displacementMap;
}
/**
* Get the displacement map.
* @return an image representing the displacement at each point
* @see #setDisplacementMap
*/
public BufferedImage getDisplacementMap() {
return displacementMap;
}
/**
* Set the amount of distortion.
* @param amount the amount
* @min-value 0
* @max-value 1
* @see #getAmount
*/
public void setAmount(float amount) {
this.amount = amount;
}
/**
* Get the amount of distortion.
* @return the amount
* @see #setAmount
*/
public float getAmount() {
return amount;
}
public BufferedImage filter( BufferedImage src, BufferedImage dst ) {
int w = src.getWidth();
int h = src.getHeight();
BufferedImage dm = displacementMap != null ? displacementMap : src;
dw = dm.getWidth();
dh = dm.getHeight();
int[] mapPixels = new int[dw*dh];
getRGB( dm, 0, 0, dw, dh, mapPixels );
xmap = new int[dw*dh];
ymap = new int[dw*dh];
int i = 0;
for ( int y = 0; y < dh; y++ ) {
for ( int x = 0; x < dw; x++ ) {
int rgb = mapPixels[i];
int r = (rgb >> 16) & 0xff;
int g = (rgb >> 8) & 0xff;
int b = rgb & 0xff;
mapPixels[i] = (r+g+b) / 8; // arbitrary scaling factor that gives a good range for "amount"
i++;
}
}
i = 0;
for ( int y = 0; y < dh; y++ ) {
int j1 = ((y+dh-1) % dh) * dw;
int j2 = y*dw;
int j3 = ((y+1) % dh) * dw;
for ( int x = 0; x < dw; x++ ) {
int k1 = (x+dw-1) % dw;
int k2 = x;
int k3 = (x+1) % dw;
xmap[i] = mapPixels[k1+j1] + mapPixels[k1+j2] +
mapPixels[k1+j3] - mapPixels[k3+j1] - mapPixels[k3+j2] - mapPixels[k3+j3];
ymap[i] = mapPixels[k1+j3] + mapPixels[k2+j3] +
mapPixels[k3+j3] - mapPixels[k1+j1] - mapPixels[k2+j1] - mapPixels[k3+j1];
i++;
}
}
mapPixels = null;
dst = super.filter( src, dst );
xmap = ymap = null;
return dst;
}
protected void transformInverse(int x, int y, float[] out) {
int i = (y % dh)*dw + x % dw;
out[0] = x + amount * xmap[i];
out[1] = y + amount * ymap[i];
}
public String toString() {
return "Distort/Displace...";
}
}
11.1.33 NOISE MENU
jMenu8.setFont(new java.awt.Font("Segoe UI", 1, 12));
jMenu8.setName("jMenu8");
jMenuBar1.add(jMenu8);
jMenu9.setText("Noise");
jMenu9.setFont(new java.awt.Font("Segoe UI", 1, 12));
jMenu9.setName("jMenu9");
jMenu9.add(jMenuItem20);
jMenuItem20.setText("Remove Noise");
jMenuItem20.setName("jMenuItem20");
jMenuItem20.addActionListener(new java.awt.event.ActionListener() {
public void actionPerformed(java.awt.event.ActionEvent evt) {
jMenuItem20ActionPerformed(evt);
}});
11.1.34 REMOVE NOISE
public BufferedImage Blur(BufferedImage img)
{
img=image;
float data[] = { 0.0625f, 0.125f, 0.0625f, 0.125f,0.25f, 0.125f,
0.0625f, 0.125f, 0.0625f };
Kernel kernel = new Kernel(3, 3, data);
ConvolveOp convolve = new ConvolveOp(kernel, ConvolveOp.EDGE_NO_OP,
null);
convolve.filter(img, image2);
return image2;
}
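The blur kernel used for noise removal is the weights 1-2-1 / 2-4-2 / 1-2-1 divided by 16, a small Gaussian approximation whose weights sum to 1. An isolated noise spike of value v surrounded by zeros is spread out, its centre reduced to v/4, which is why a blur doubles as noise removal. A quick sketch (class name BlurDemo is ours):

```java
public class BlurDemo {
    // Same kernel as the Blur() method: Gaussian-like, weights sum to 1.
    static final float[] KERNEL = { 0.0625f, 0.125f, 0.0625f,
                                    0.125f,  0.25f,  0.125f,
                                    0.0625f, 0.125f, 0.0625f };

    // Centre value after blurring a neighbourhood that is all zero
    // except for a single spike in the middle.
    public static float centerAfterBlur(float spike) {
        return KERNEL[4] * spike;
    }

    public static void main(String[] args) {
        System.out.println(centerAfterBlur(16f));
    }
}
```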
12. GANTT CHART
[The original chart plots each activity against working days from 11 Feb to 16 Apr; only the task breakdown is recoverable here.]
1. System Analysis
 1.1 System Planning
 1.2 Information Gathering
 1.3 Feasibility Study
 1.4 Requirement Analysis & Specification
2. System Design
 2.1 Structure Analysis
 2.2 Designing the Form
3. Coding for Basic Functions
 3.1 Main Frame
 3.2 Transforms
 3.3 Effects
 3.4 Help
 3.5 Distortion
 3.6 Noise
4. Coding for Settings
 4.1 Open Application
 4.2 Saving Application
 4.3 Reset Application
 4.4 Exit Application
5. Design for Image Enhancement Tool
 5.1 Design of Title & Front Pages
 5.2 Static Pages
 5.3 Dynamic Pages
6. Testing & Implementation
 6.1 Module-wise Testing
 6.2 Integration & System Testing
 6.3 Modification
 6.4 Implementation
7. Report
References
Websites:
1) http://www.jhlabs.com/ie/index.html
2) http://www.imageprocessingbasics.com
3) http://www.tutorialspoint.com/java_dip/index.htm
4) http://docs.oracle.com/javase/tutorial/2d/images.html
5) http://docs.oracle.com/javase/6/docs/api/java/awt/geom/AffineTransform.html
Books:
1) The Art of Image Processing with Java by Kenny A. Hunt
2) Java: The Complete Reference by Herbert Schildt
3) Introduction to Java Programming by Y. Daniel Liang