1
Object Oriented Programming, Lecture IX
Some notes on Java performance, with aspects on execution time and memory consumption
2
Last Lecture
• We talked about refactoring strategies
– refactoring methods
– refactoring by inheritance
– refactoring by delegation
• NetBeans and GUI forms for graphical user interfaces
3
Today’s Talk
• Improving performance in Java programs
– time consumption
– space consumption
• Reference to the material in today's lecture:
– Peter Sestoft, “Java performance: Reducing time and space consumption”
4
Java Compilers
• Java compilers cannot perform all the kinds of optimization we are used to from other languages
• Compared to a language such as C
– Java is semantically more restricted, and many things have to be done at run-time
• operations on objects are bound at run-time, data structures are dynamic (e.g. Vector), etc.
• It often cannot be predicted
– what the type of an object is, or
– which object is being accessed
• until the code is executed
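A small sketch of why this matters (the Shape/Circle/Square names are made up for illustration): the compiler only sees the static type Shape, so the JVM has to pick the actual area() method at run-time.

```java
// The compiler sees only the static type Shape; which area() runs
// is decided at run-time by dynamic dispatch on the actual class.
class Shape { double area() { return 0.0; } }

class Circle extends Shape {
    double r = 1.0;
    @Override double area() { return Math.PI * r * r; }
}

class Square extends Shape {
    double side = 2.0;
    @Override double area() { return side * side; }
}

public class DynamicDispatch {
    public static void main(String[] args) {
        Shape[] shapes = { new Circle(), new Square() };
        double total = 0.0;
        for (Shape s : shapes) {
            // s.area() cannot be bound at compile time: the JVM looks
            // up the method for the object's actual class on each call
            total += s.area();
        }
        System.out.println(total); // pi + 4
    }
}
```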
5
What can we do?
• With some knowledge about
– the semantics of the language
– the run-time mechanisms in the JVM
– how the classes in the API are implemented,
• we can in some cases avoid expensive run-time JVM operations by the way we write the code
6
Time consumption
7
Iterative computing on invariant expressions
• Loops and loop-invariant expressions
– avoid recomputing a loop-bound expression when we know its value does not change
• One example is the size() method
for(int i=0; i<size()+1; i++){
    ...
}
8
Iterative computing on invariant expressions
• Instead of calling size() every time the bound is checked:
– compute the loop bound once, and bind it to a local variable
for(int i=0,stop=size()+1; i<stop; i++){
…
}
9
Computing subexpressions
• Avoid recomputing the same subexpression more than once
• ”Bad” example:
if(shapes.elementAt(i).isCircle()) { ... }
if(shapes.elementAt(i).isSquare()) { ... }
10
Computing subexpressions
• Instead, compute the subexpression once and bind the value to a local variable
• ”Good” example:
Shape shape = shapes.elementAt(i);
if(shape.isCircle()) { ... }
if(shape.isSquare()) { ... }
11
Computing on Arrays in loop expressions
• For every access to an array element, an index (out-of-bounds) check has to be performed
double[] rowsum = new double[n];
for(int i=0; i<n; i++){
for(int j=0; j<m; j++){
rowsum[i]+= arr[i][j];
}
}
12
Computing on Arrays in loop expressions
• What we can do in this example is to look up the outer array row once
double[] rowsum = new double[n];
for(int i=0; i<n; i++){
    double[] arri = arr[i]; // copy reference to row i
    double sum = 0.0;
    for(int j=0; j<m; j++)
        sum += arri[j]; // avoids checking the i index
    rowsum[i] = sum;
}
13
Constant Fields and Variables
• Declare
– constant fields as final static
– variables as final
• The compiler can inline such code and precompute constant expressions
• Examples:
final static int x = 10;
final Object a;//assigned only once
14
Long If-Else chains
• Avoid constructing long if-else chains:
if(exp){
...
}else if(exp){
...
}else if(exp){
...
}
...
else{
...
}
• Replace it with a switch statement instead
switch(exp){
case a:
    statement 1
    break;
case b:
    statement 2
    break;
default:
    statement n
}
15
What about ”clever one-liners”
• Example:
int year = 0;
double sum = 200.0;
double[] balance = new double[100];
while((balance[year++] = sum *= 1.05) < 1000.0);
• Nothing is gained by such expressions, except that you get a piece of code which no one understands
16
Fields and Variables
• There is no run-time overhead for variables declared as local
– local variables also improve the clarity of the code
• In a method
– it is faster to access local variables and parameters than to access static fields or instance (class-global) variables
• In loops
– it can be more efficient to copy a global variable into a locally created variable, and use the local one inside the loop
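A minimal sketch of that last point (the class, field, and method names are invented for the example): the slow version reads the instance fields on every iteration, while the fast version copies them into locals once, before the loop.

```java
public class LocalCopy {
    private int[] data;          // instance fields, re-read on each
    private int threshold = 10;  // access in the slow version

    LocalCopy(int n) {
        data = new int[n];
        for (int i = 0; i < n; i++) data[i] = i; // sample data
    }

    // Reads the instance fields data and threshold on every iteration
    int countSlow() {
        int count = 0;
        for (int i = 0; i < data.length; i++)
            if (data[i] > threshold) count++;
        return count;
    }

    // Copies the fields into local variables once, before the loop
    int countFast() {
        int[] d = data;     // local copy of the array reference
        int t = threshold;  // local copy of the field value
        int count = 0;
        for (int i = 0; i < d.length; i++)
            if (d[i] > t) count++;
        return count;
    }

    public static void main(String[] args) {
        LocalCopy lc = new LocalCopy(100);
        // Both loops compute the same count; only the field accesses differ
        System.out.println(lc.countSlow() + " " + lc.countFast());
    }
}
```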
17
String Manipulation
• Avoid dynamically building strings using the concatenation (+) operator:
String s = "";
for(int i=0; i<n; i++){
    s += "#" + i;
}
• This loop has quadratic execution time and can easily cause heap fragmentation
18
String Manipulation
• Instead, use a StringBuilder object
StringBuilder sbuf = new StringBuilder();
for(int i=0; i<n; i++){
sbuf.append("#").append(i);
}
• However, for a single sequence of concatenations, using '+' is OK:
– String s = "(" + x + ", " + y + ")";
19
Initialized Array variables
• Local method variables are initialized upon method entry
– avoid declaring initialized arrays inside methods
• When initialized array variables or tables need to be used, declare them once as static fields
final static int[] arr = {31,28,31,30,...};
20
Methods
• Method calls can execute faster if the methods are declared private, final or static
• For example, accessor methods (like getSize) can often be declared final
– there is usually no need to override such methods in inheriting subclasses
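A small illustration of such declarations (the Rectangle class is a made-up example): the accessors are final, so no subclass can override them and the JVM may bind and inline the calls directly; the helper is static, so it is called without a receiver object and without virtual dispatch.

```java
public class Rectangle {
    private final int width;
    private final int height;

    public Rectangle(int width, int height) {
        this.width = width;
        this.height = height;
    }

    // final accessors: cannot be overridden, so the JVM can bind
    // the calls at load time and is free to inline them
    public final int getWidth()  { return width; }
    public final int getHeight() { return height; }

    // static helper: no receiver object, no virtual call
    public static int area(Rectangle r) {
        return r.getWidth() * r.getHeight();
    }

    public static void main(String[] args) {
        System.out.println(area(new Rectangle(3, 4))); // 12
    }
}
```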
21
Input and Output
• For program input and output (such as file access)
– use the buffered streams from java.io
• BufferedReader, BufferedWriter, BufferedInputStream, BufferedOutputStream
• Using buffered streams can speed up input/output by a factor of 20
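A typical usage sketch (the file names in.txt and out.txt are placeholders): wrapping the raw file streams in buffered ones means each readLine()/write() usually hits an in-memory buffer, while the underlying file is read and written in large blocks.

```java
import java.io.*;

public class CopyLines {
    public static void main(String[] args) throws IOException {
        // Wrap the raw file streams in buffered ones
        try (BufferedReader in  = new BufferedReader(new FileReader("in.txt"));
             BufferedWriter out = new BufferedWriter(new FileWriter("out.txt"))) {
            String line;
            while ((line = in.readLine()) != null) {
                // These calls normally touch only the in-memory buffer;
                // the OS is asked for data one large block at a time
                out.write(line);
                out.newLine();
            }
        } // try-with-resources flushes and closes both streams
    }
}
```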
22
Bulk Array Operations
• Bulk operations means performing an operation on a whole block, or a sub-block, of data
• For arrays, there are some special operations which are usually faster than hand-written for-loops
– one reason is that only a single bounds check is required
23
Bulk Array operations
• static void arraycopy(src, si, dst, di, n)
– in java.lang.System
• Many bulk methods in java.util.Arrays
– equality (compare elements)
– fill (initialize elements)
– searching
– sorting
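A short sketch of these bulk calls, using small made-up arrays:

```java
import java.util.Arrays;

public class BulkOps {
    public static void main(String[] args) {
        int[] src = {5, 3, 1, 4, 2};
        int[] dst = new int[5];

        // Copy the whole array with a single bulk call
        System.arraycopy(src, 0, dst, 0, src.length);
        System.out.println(Arrays.equals(src, dst)); // true

        // Initialize every element in one call
        int[] buf = new int[5];
        Arrays.fill(buf, 7); // {7, 7, 7, 7, 7}

        // Sort, then binary-search the sorted array
        Arrays.sort(dst); // {1, 2, 3, 4, 5}
        System.out.println(Arrays.binarySearch(dst, 3)); // 2
    }
}
```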
24
Collection Classes
• The collection classes in java.util are well designed and implemented and can improve the speed of a program
• When using lists (e.g. LinkedList, ArrayList, ...)
– avoid indexed position access, insertion or removal of elements on a linked list
• on a LinkedList those methods do a linear search through the list
– Example
• System.out.println(list.get(i));
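A sketch of the difference (the list contents are arbitrary): on a LinkedList, each get(i) restarts from the head of the list, while an iterator-based loop keeps its position and advances one element per step.

```java
import java.util.LinkedList;
import java.util.List;

public class IterateList {
    public static void main(String[] args) {
        List<String> list = new LinkedList<>();
        list.add("a");
        list.add("b");
        list.add("c");

        // Bad on a LinkedList: every get(i) walks the list from the
        // head, making the whole loop quadratic in the list length
        for (int i = 0; i < list.size(); i++) {
            System.out.println(list.get(i));
        }

        // Better: the for-each loop uses the list's iterator, which
        // remembers its position and moves one step per element
        for (String s : list) {
            System.out.println(s);
        }
    }
}
```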
25
Scientific Computing
• There are several open source libraries available for different fields of computing
• For scientific computing
– the Colt open source library
• linear algebra, matrix computation, statistics, random number generators, mathematical functions
• If you need special routines, it is always a good idea to look around for what is available before starting to implement your own
26
Reducing space consumption
27
JVM memory organization
• The stack
– method input and return parameters, local variables
• The heap
– for object instances, including strings and arrays
• Each thread has a separate stack
– each grows and shrinks with the depth of the method calls
• The heap is shared by all threads
– a garbage collector manages heap deallocation
28
Three important aspects of memory usage
• Allocation rate
– the rate at which a program creates new objects
• a high rate costs both time and space
• Retention
– the amount of live data on the heap, i.e. data reachable from the method stacks at any time
• high retention costs space (and garbage-collector time)
• Fragmentation
– creating small chunks of unusable memory
– allocation of dynamically growing objects may cause memory fragmentation
• costs time (to search for usable space) and space (wasted fragments)
29
Instance shared variables
• Use a static declaration for variables shared by all instances of a class
– only one field is created for all objects
• Instead of
public class Car{
    String brand = "Ferrari";
}
• we can do this
public class Car{
    final static String brand = "Ferrari";
}
30
Object allocation
• When it is not certain that an object will be used, we can apply lazy allocation
• Lazy allocation means that we postpone the allocation of an object until it is actually needed, and then allocate it only once
31
Without Lazy Allocation
public class Car{
    private JButton button = new JButton();

    public Car(){
        ... initialize button ...
    }

    public final JButton getButton(){
        return button;
    }
}
32
Lazy Allocation
public class Car{
    private JButton button = null;

    public Car(){ ... }

    public final JButton getButton(){
        if(button == null){
            button = new JButton();
            ... initialize button ...
        }
        return button;
    }
}
33
Other resources
• J. Noble and C. Weir, Small Memory Software, Addison-Wesley, 2001
– a number of design patterns for limited-memory systems