Grid and Cloud Computing Laboratory
VII Semester of B.Tech
As per the curriculum and syllabus
of
Bharath Institute of Higher Education & Research
(Grid and Cloud Computing Laboratory)
PREPARED BY
Dr. S. Chakravarthi
Mr. A.V. Allin Geo
NEW EDITION
SCHOOL OF COMPUTING
DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING
LAB MANUAL
SUBJECT NAME:
GRID AND CLOUD COMPUTING LABORATORY
SUBJECT CODE: BCS7L1
Regulation R2015
(2015-2016)
BCS7L1 GRID AND CLOUD COMPUTING LABORATORY (L T P C: 0 0 3 2)
Total Contact Hours - 30
Prerequisite –Distributed Computing, Operating Systems, Grid and Cloud Computing.
Lab Manual Designed by – Dept. of Computer Science and Engineering.
OBJECTIVES
Be exposed to toolkits for grid and cloud environments.
Be familiar with developing web services/applications in a grid framework.
Learn to run virtual machines of different configurations.
COURSE OUTCOMES (COs)
CO1 Use the grid and cloud tool kits.
CO2 Design and implement applications on the Grid.
CO3 Design and Implement applications on the Cloud.
CO4 Connect multiple systems using the Zonal Server and JVishwa.
CO5 Implement classification using the Naive Bayes approach.
CO6 Implement and manage virtual machines.
MAPPING BETWEEN COURSE OUTCOMES & PROGRAM OUTCOMES
(3/2/1 INDICATES STRENGTH OF CORRELATION) 3- High, 2- Medium, 1-Low
COs PO1 PO2 PO3 PO4 PO5 PO6 PO7 PO8 PO9 PO10 PO11 PO12 PSO1 PSO2 PSO3
CO1 3 2 2 2 3 3 3
CO2 3 2 2 3 3 3
CO3 2 3 3
CO4 2 2 3 3 3
CO5 2 2 2 3 3
CO6 2 2 3 2 2 3 3
Category Professional Core (PC)
Approval 37th Meeting of Academic Council, May 2015
LIST OF EXPERIMENTS
1. Connecting Zonal Server with JVishwa.
2. Find the prime numbers for the largest interval using grid computing.
3. Calculate matrix multiplication using grid computing.
4. Find the procedure to run virtual machines of different configurations; check how many virtual machines can be utilized at a particular time. Find the missing dataset using Hadoop and MapReduce.
5. Find the procedure to attach a virtual block to a virtual machine and check whether it holds its data even after the release of the virtual machine. Classification using the Naive Bayes approach.
6. Install a C++ compiler in the virtual machine and execute a sample program.
7. Show virtual machine migration from one node to another based on a certain condition.
8. Find the procedure to install a storage controller and interact with it.
9. Write a word count program to demonstrate the use of Map and Reduce tasks.
Ex NO:1 CONNECTING ZONAL SERVER WITH JVISHWA
AIM:
Construction of a uniform cluster in the grid using P2P middleware.
ALGORITHM:
1. Copy the JVishwa and ZonalServer jar files into the bin of the Java folder.
2. Open a command prompt window and change the directory to the bin of the Java folder.
3. Execute the following command on the server machine: "java -jar ZonalServer.jar"
4. Give the zonal server id as 0.
5. Now the adjacency table has been created, and clients are yet to be connected.
6. On the client machine, open a command prompt window and change to the same directory as on the server.
7. Find the IP address of the server machine and make a note of it.
8. Execute the following command on the client machine: "java -jar jVishwa.jar (IP Address of the server)"
9. Now the client is connected to the server and the adjacency table has been updated on the server machine.
10. Repeat the same process on the other three client machines.
11. Now the server and client machines are connected and ready for program execution.
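As a simplified illustration of what the zonal server maintains, the adjacency table above can be modelled as a growing list of node IPs. The class below is a local sketch for understanding only; it is not part of the JVishwa middleware, and the IP addresses are the example values from this experiment:

```java
import java.util.ArrayList;
import java.util.List;

public class AdjacencySketch {
    // Each row of the adjacency table records one grid node's IP.
    private final List<String> rows = new ArrayList<>();

    // Called when a client joins: a row is inserted in the list.
    public void join(String nodeIp) {
        rows.add(nodeIp);
    }

    // Called when a client leaves: its row is removed.
    public void leave(String nodeIp) {
        rows.remove(nodeIp);
    }

    public int rowCount() {
        return rows.size();
    }

    public static void main(String[] args) {
        AdjacencySketch table = new AdjacencySketch();
        table.join("192.168.1.120");   // server machine
        table.join("192.168.1.121");   // first client
        table.join("192.168.1.122");   // second client
        System.out.println("No of rows: " + table.rowCount());
    }
}
```

Each "Row is inserted in the list" message in the server transcript below corresponds to one such join.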
SAMPLE OUTPUT:
1. Message on Server
JVishwa.ZonalServer.messaging.AddToAdjacencyTable
Row is inserted in the list
Adj Table
192.168.1.120
Column..........0
No of rows......1
JVishwa.messaging.JoinMessage to ZonalServer
Error is here
Position is here
Reply send
JVishwa.ZonalServer.messaging.AddToAdjacencyTable
Row is inserted in the list
JVishwa.join.messaging.AdjacencyTable
192.168.1.122 192.168.1.121 192.168.1.120
Column..........2
192.168.1.122 192.168.1.121 192.168.1.120
Column..........2
192.168.1.122 192.168.1.121 192.168.1.120
Column..........2
192.168.1.122 192.168.1.121 192.168.1.120
Column..........2
Number of rows....3
JVishwa.ZonalServer.messaging.RemoveFromAdjacencyTable
JVishwa.messaging.RepToZonalServer
JVishwa.join.messaging.AdjacencyTableReq
JVishwa.join.messaging.AdjacencyTableReq
2. Grid Node Command
C:\Documents and Settings\Administrator> cd Desktop
C:\Documents and Settings\Administrator\Desktop> java -jar jVishwa.jar 192.168.1.119
Message
Routing table-1
Neighbour-2
Leaf set-3
Other zonal nodes-4
gc <use with caution, will slow down the system>
Node id-
Message::: jVishwa.messaging.JoinReqToUnstructured from:: 192.168.1.121 Join request is received
Local node id is 192.168.1.120, set size is 0
Adding 192.168.1.121
Message::: jVishwa.messaging.JoinReqToUnstructured from:: 192.168.1.122 Join req is received
Local node id is 192.168.1.120, set size is 1
Adding 192.168.1.122
3. Grid Node Command
C:\Documents and Settings\Administrator> cd Desktop
C:\Documents and Settings\Administrator\Desktop> java -jar jVishwa.jar 192.168.1.11
MESSAGE
Zonal server reply message is received; local node id is 192.168.1.121, set size is 0, adding 192.168.1.120
Sending 192.168.1.120
Size of vector is 0
It took 1016 milliseconds to join grid
Routing table-1
Neighbour-2
Leaf set-3
Other zonal nodes-4
gc <use with caution, will slow down the system>
Node id-5
Node 192.168.1.120 joined
Local node id is 192.168.1.121, set size is 1
192.168.1.121
192.168.1.122
4. Grid Node Command
C:\Documents and Settings\Administrator> cd Desktop
C:\Documents and Settings\Administrator\Desktop> java -jar jVishwa.jar 192.168.1.119
Message
Zonal server reply message is received
Local node id is 192.168.1.122, set size is 0
Adding 192.168.1.121
Local node id is 192.168.1.122, set size is 1
Send 192.168.1.120
Sending the join req message to unstructured 192.168.1.127; size of vector is 0
It took 1531 milliseconds to join grid
Routing table-1
Neighbour-2
Leaf set-3
Other zonal nodes-4
gc <use with caution, will slow down the system>
Node id-5
192.168.1.122
192.168.1.126 192.168.1.12...
RESULT :
Thus a uniform cluster has been connected in the grid using P2P middleware.
Ex. NO: 2 PRIME NUMBER
AIM:
To calculate the prime numbers in a huge interval using grid nodes.
ALGORITHM:
1. Connect the clients with the server using the P2P middleware grid.
2. Copy Prime.java and paste it into the bin of the Java folder.
3. Find the IP address of the first client machine connected to the cluster and make a note of it.
4. In PrimeClient.java, provide the IP address of any one of the client nodes.
5. From the remote machine, execute the PrimeClient program.
6. Now the work will be distributed equally to the clients, with a different interval executed on each client.
7. This execution process can be monitored by the server.
8. If any client does not respond, its work will be taken over by the other systems.
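The distribution logic can be checked locally before running it on the grid: the client splits the overall range into ten equal sub-intervals, and every subtask applies the same trial-division test. The sketch below is plain Java with no jvishwa classes, using a small demo interval rather than the million-number intervals of the real client:

```java
public class PrimeSketch {
    // Same trial-division test the Prime subtask uses, with an explicit
    // guard for n < 2 (the original test would wrongly accept 1).
    public static boolean isPrime(int n) {
        if (n < 2) return false;
        int max = (int) Math.sqrt(n);
        for (int div = 2; div <= max; div++) {
            if (n % div == 0) return false;
        }
        return true;
    }

    // Count the primes in [from, to], as one subtask does over its interval.
    public static int countPrimes(int from, int to) {
        int count = 0;
        for (int i = from; i <= to; i++) {
            if (isPrime(i)) count++;
        }
        return count;
    }

    public static void main(String[] args) {
        // Split a small demo interval into ten equal sub-intervals,
        // mirroring how PrimeClient creates its ten subtasks.
        for (int i = 1; i <= 10; i++) {
            int from = (i - 1) * 100 + 1;
            int to = i * 100;
            System.out.println("Sub-Task " + (i - 1) + ": "
                    + countPrimes(from, to) + " primes in [" + from + ", " + to + "]");
        }
    }
}
```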
SOURCE CODE:
import jvishwa.Context;
import jvishwa.Result;
import jvishwa.VishwaSubTask;
import jvishwa.file.*;
import java.util.Vector;
import java.io.FileInputStream;
import java.io.ObjectInputStream;
import java.io.PrintStream;
import java.io.FileOutputStream;

public class Prime extends VishwaSubTask {
    Vector v;
    VishwaFile vfile = null;

    // Constructor
    public Prime() {
        v = new Vector();
    }

    // Calculation of primes
    public void run(Context c) {
        // Accessing the parameters from the context
        int startpoint = Integer.parseInt((String) c.get("fromval"));
        int endpoint = Integer.parseInt((String) c.get("toval"));
        // File service for creating a file
        FileService fileService = new FileService(this);
        vfile = fileService.createFile("result");
        PrintStream ps = null;
        try {
            ps = new PrintStream(new FileOutputStream(vfile));
        } catch (Exception e) {
            e.printStackTrace();
        }
        // Write every prime in [startpoint, endpoint] to the result file
        for (int i = startpoint; i <= endpoint; i++) {
            if (isPrime(i))
                ps.println(i + "");
        }
        ps.close();
    }

    // Trial-division primality test
    private boolean isPrime(int n) {
        int max = (int) Math.sqrt(n);
        for (int div = 2; div <= max; div++) {
            if (n % div == 0)
                return false;
        }
        return true;
    }

    // Aggregating the results...
    public Result callback() {
        Result r = new Result();
        r.putFile(vfile);
        return r;
    }

    // Deserialization method for getting the object
    public Object getObject(String fileName) {
        try {
            FileInputStream fin = new FileInputStream(fileName);
            ObjectInputStream ois = new ObjectInputStream(fin);
            Object o = ois.readObject();
            ois.close();
            return o;
        } catch (Exception e) {
            e.printStackTrace();
        }
        return null;
    }
}
PRIMECLIENT
// prime numbers in a huge interval
import jvishwa.client.*;
import jvishwa.query.*;
import jvishwa.metric.*;
import jvishwa.task.SchedulerType;
import jvishwa.Result;
import jvishwa.Context;
import jvishwa.file.*;
import java.io.FileOutputStream;
import java.io.PrintStream;
import java.io.BufferedReader;
import java.io.File;
import java.io.FileInputStream;
import java.io.InputStreamReader;

public class PrimeClient {
    public static void main(String args[]) {
        // Getting the instance of the client manager
        ClientManager manager = ClientManager.getInstance();
        Metric metric = new Metric();
        Query query = new Query(metric);
        // Setting the configuration parameters
        query.setLowerBound(5);
        query.setMemoryFractionValue(0.3);
        query.setCPUFractionValue(0.7);
        manager.setConditionType(ConditionType.DEFAULT_TYPE);
        manager.setMetric(metric);
        manager.setQuery(query);
        manager.setSurplusValue(0);
        manager.setMinDonors(2);
        manager.setMaxDonors(3);
        manager.setReplicaSize(1);
        manager.setSchedulerType(SchedulerType.DYNAMIC);
        manager.setGridNodeIP("172.16.1.68");
        try {
            manager.initilize();
            System.out.println("Press any key to start the execution...");
            try {
                BufferedReader input = new BufferedReader(new InputStreamReader(System.in));
                input.readLine();
            } catch (Exception e) {
                e.printStackTrace();
            }
        } catch (Exception e) {
            e.printStackTrace();
            System.exit(0);
        }
        SubTaskHandle handleSet[] = new SubTaskHandle[10];
        Context context = null;
        // Creating subtasks: subtask i covers [i000000, (i+1)000000]
        for (int i = 1; i <= 10; i++) {
            context = new Context();
            context.put("fromval", i + "000000");
            context.put("toval", (i + 1) + "000000");
            handleSet[i - 1] = manager.execute(new Prime(), context);
        }
        // Waiting for all subtasks to finish
        manager.barrier();
        // handleSet[i].waitSubTask() ... waits for a particular subtask to finish
        try {
            FileOutputStream fout = new FileOutputStream("sreedhar.txt");
            PrintStream ps = new PrintStream(fout);
            // Placing the results into the text file
            for (int i = 1; i <= 10; i++) {
                Result r = handleSet[i - 1].getResult();
                VishwaFile vfile = r.getFile();
                BufferedReader input = new BufferedReader(
                        new InputStreamReader(new FileInputStream(vfile)));
                String content = null;
                System.out.println("Compiling results from Sub-Task: " + (i - 1));
                ps.println("Compiling results from Sub-Task:" + String.valueOf(i - 1));
                while ((content = input.readLine()) != null)
                    ps.print(content + ",");
                input.close();
            }
            ps.close();
            fout.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
        // Closing the client manager
        manager.close();
    }
}
OUTPUT:
Compiling results from Sub-Task:0
100003,100019,100043,100049,100057……..199933,199961,199967,199999
Compiling results from Sub-Task:1
200003,200009,200017,200023,200029,200033….299951,299969,299977,299983,299993
Compiling results from Sub-Task:2
300007,300017,300023,300043,300073…..,399941,399953,399979,399983,399989
Compiling results from Sub-Task:3
400009,400031,400033,400051……..499943,499957,499969,499973,499979
Compiling results from Sub-Task:4
500009,500029,500041,500057,500069……599959,599983,599993,599999
Compiling results from Sub-Task:5
600011,600043,600053,600071,600073…..699947,699953,699961,699967
Compiling results from Sub-Task:6
700001,700027,700057,700067,700079…..799949,799961,799979,799991,799993,799999
Compiling results from Sub-Task:7
800011,800029,800053,800057,800077,800083……899939,899971,899981
Compiling results from Sub-Task:8
900001,900007,900019,900037,900061……999953,999959,999961,999979,999983
Compiling results from Sub-Task:9
1000003,1000033,1000037,1000039,1000081……1099927,1099933,1099957,1099961,1099997
RESULT :
Thus the prime numbers in a huge interval have been calculated using grid nodes and the result
has been viewed.
Ex. NO: 3 SQUARE OF NUMBERS
AIM:
To calculate the squares of integers over a huge interval using grid nodes.
ALGORITHM:
1. Connect the clients with the server using the P2P middleware grid.
2. Copy Square.java and paste it into the bin of the Java folder.
3. Find the IP address of the first client machine connected to the cluster and make a note of it.
4. In SquareClient.java, provide the IP address of any one of the client machines.
5. From the remote machine, execute the SquareClient program.
6. Now the work will be distributed equally to the clients, with a different interval executed on each client.
7. This execution process can be monitored by the server.
8. If any client does not respond, its work will be taken over by the other systems.
9. After the execution, the results will be compiled and stored on the presentation machine.
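The interval splitting used by SquareClient can be verified locally: subtask i squares the numbers from 10i+1 through 10(i+1), which is exactly what the OUTPUT section shows. A plain-Java sketch, independent of jvishwa:

```java
public class SquareSketch {
    // Squares the numbers in [from, to] and joins them with commas,
    // matching one subtask's contribution to the result file.
    public static String squaresInRange(int from, int to) {
        StringBuilder sb = new StringBuilder();
        for (int i = from; i <= to; i++) {
            if (sb.length() > 0) sb.append(",");
            sb.append(i * i);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Subtask i covers [10*i + 1, 10*(i + 1)], as in SquareClient,
        // where "fromval" = i + "1" and "toval" = (i + 1) + "0".
        for (int i = 0; i <= 9; i++) {
            System.out.println("Compiling results from Sub-Task:" + i + " "
                    + squaresInRange(10 * i + 1, 10 * (i + 1)));
        }
    }
}
```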
SOURCE CODE :
import jvishwa.VishwaSubTask;
import jvishwa.Result;
import jvishwa.Context;
import jvishwa.file.*;
import java.io.PrintStream;
import java.io.FileOutputStream;
import java.io.FileInputStream;
import java.io.ObjectInputStream;

public class Square extends VishwaSubTask {
    VishwaFile vfile = null;

    public void run(Context c) {
        // Accessing the parameters from the context
        int startpoint = Integer.parseInt((String) c.get("fromval"));
        int endpoint = Integer.parseInt((String) c.get("toval"));
        // File service for creating a file
        FileService fileService = new FileService(this);
        vfile = fileService.createFile("result");
        PrintStream ps = null;
        try {
            ps = new PrintStream(new FileOutputStream(vfile));
        } catch (Exception e) {
            e.printStackTrace();
        }
        // Calculation of squares
        for (int i = startpoint; i <= endpoint; i++) {
            ps.println(squareNum(i) + "");
        }
        ps.close();
    }

    public int squareNum(int num) {
        return num * num;
    }

    // Deserialization method for getting the object
    public Object getObject(String fileName) {
        try {
            FileInputStream fin = new FileInputStream(fileName);
            ObjectInputStream ois = new ObjectInputStream(fin);
            Object o = ois.readObject();
            ois.close();
            return o;
        } catch (Exception e) {
            e.printStackTrace();
        }
        return null;
    }

    // Aggregating the results...
    public Result callback() {
        Result r = new Result();
        r.putFile(vfile);
        return r;
    }
}

SQUARE CLIENT:

import jvishwa.client.*;
import jvishwa.query.*;
import jvishwa.metric.*;
import jvishwa.task.SchedulerType;
import jvishwa.Result;
import jvishwa.Context;
import jvishwa.file.*;
import java.io.FileOutputStream;
import java.io.PrintStream;
import java.io.BufferedReader;
import java.io.File;
import java.io.FileInputStream;
import java.io.InputStreamReader;

public class SquareClient {
    public static void main(String args[]) {
        ClientManager manager = ClientManager.getInstance();
        Metric metric = new Metric();
        Query query = new Query(metric);
        // Setting minimum and maximum grid nodes required
        query.setLowerBound(5);
        query.setMemoryFractionValue(0.3);
        query.setCPUFractionValue(0.7);
        manager.setConditionType(ConditionType.DEFAULT_TYPE);
manager.setMetric(metric);
manager.setQuery(query);
manager.setSurplusValue(0);
manager.setMinDonors(2);
manager.setMaxDonors(3);
manager.setReplicaSize(1);
//setting grid node
manager.setSchedulerType(SchedulerType.DYNAMIC);
manager.setGridNodeIP("172.16.1.62");
        try {
            manager.initilize();
            System.out.println("Press any key to start the execution...");
            try {
                BufferedReader input = new BufferedReader(new InputStreamReader(System.in));
                input.readLine();
            } catch (Exception e) {
                e.printStackTrace();
            }
        } catch (Exception e) {
            e.printStackTrace();
            System.exit(0);
        }
//System.out.println("Starting execution...");
SubTaskHandle handleSet[] = new SubTaskHandle[10];
Context context = null;
//Creating subtasks
        for (int i = 0; i <= 9; i++) {
            context = new Context();
            context.put("fromval", i + "1");
            context.put("toval", (i + 1) + "0");
            // Pass parameters and get a handle to the remote subtask
            handleSet[i] = manager.execute(new Square(), context);
}
//wait for all remote subtasks to finish
manager.barrier();
        try {
            FileOutputStream fout = new FileOutputStream("squareresult1.txt");
            PrintStream ps = new PrintStream(fout);
            // Placing the results into the text file
            for (int i = 1; i <= 10; i++) {
                // Get results from each subtask
                Result r = handleSet[i - 1].getResult();
                VishwaFile vfile = r.getFile();
                BufferedReader input = new BufferedReader(
                        new InputStreamReader(new FileInputStream(vfile)));
                String content = null;
                System.out.println("Compiling results from Sub-Task: " + (i - 1));
                ps.println("Compiling results from Sub-Task:" + String.valueOf(i - 1));
                while ((content = input.readLine()) != null) {
                    ps.print(content + ",");
                }
                input.close();
            }
            ps.close();
            fout.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
//Closing the client manager
manager.close();
}
}
OUTPUT:
Compiling results from Sub-Task:0 1,4,9,16,25,36,49,64,81,100
Compiling results from Sub-Task:1 121,144,169,196,225,256,289,324,361,400
Compiling results from Sub-Task:2 441,484,529,576,625,676,729,784,841,900
Compiling results from Sub-Task:3 961,1024,1089,1156,1225,1296,1369,1444,1521,1600
Compiling results from Sub-Task:4 1681,1764,1849,1936,2025,2116,2209,2304,2401,2500
Compiling results from Sub-Task:5 2601,2704,2809,2916,3025,3136,3249,3364,3481,3600
Compiling results from Sub-Task:6 3721,3844,3969,4096,4225,4356,4489,4624,4761,4900
Compiling results from Sub-Task:7 5041,5184,5329,5476,5625,5776,5929,6084,6241,6400
Compiling results from Sub-Task:8 6561,6724,6889,7056,7225,7396,7569,7744,7921,8100
Compiling results from Sub-Task:9 8281,8464,8649,8836,9025,9216,9409,9604,9801,10000
RESULT:
Thus the squares of numbers in a huge interval have been calculated using grid nodes and the result
has been viewed.
Ex. NO: 4 MATRIX MULTIPLICATION
AIM :
To calculate matrix multiplication for large matrices using grid nodes.
ALGORITHM:
1. Connect the clients with the server using the P2P middleware grid.
2. Copy Matrix.java and paste it into the bin of the Java folder.
3. Find the IP address of the first client machine connected to the cluster and make a note of it.
4. In MatrixClient.java, change the IP address to that of any one of the client nodes.
5. From the remote machine, execute the MatrixClient program.
6. Now the work will be distributed equally to the clients, with a different portion executed on each client.
7. This execution process can be monitored on the server.
8. If any client does not respond, its work will be taken over by the other systems.
9. After the execution, the results will be compiled and stored on the presentation machine.
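The decomposition used here sends one row of A plus all of B to each subtask, and each subtask computes one row of the product C = A x B. That row computation can be checked locally with plain Java, with no jvishwa classes needed:

```java
public class RowMultiplySketch {
    // One subtask's work: multiply a single row of A by the full matrix B,
    // producing the corresponding row of C, where C[j] = sum over k of aRow[k] * B[k][j].
    public static double[] multiplyRow(double[] aRow, double[][] B) {
        int n = aRow.length;
        double[] c = new double[n];
        for (int j = 0; j < n; j++)
            for (int k = 0; k < n; k++)
                c[j] += aRow[k] * B[k][j];
        return c;
    }

    public static void main(String[] args) {
        double[] row = {1.0, 2.0};
        double[][] B = {{3.0, 4.0}, {5.0, 6.0}};
        double[] c = multiplyRow(row, B);
        // [1*3 + 2*5, 1*4 + 2*6] = [13.0, 16.0]
        System.out.println(c[0] + " " + c[1]);
    }
}
```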
SOURCE CODE:
import jvishwa.Context;
import jvishwa.Result;
import jvishwa.VishwaSubTask;
import jvishwa.file.*;
import java.util.Vector;
import java.io.FileInputStream;
import java.io.ObjectInputStream;
import java.io.PrintStream;
import java.io.FileOutputStream;

public class Matrix extends VishwaSubTask {
    Vector v;
    VishwaFile vfile = null;

    // Constructor
    public Matrix() {
        v = new Vector();
    }

    // Computation of one result row
    public void run(Context c) {
//double A[],B[][];
//Accessing the parameters from the context
int taskidno = Integer.parseInt((String)c.get("taskid"));
System.out.println("##################Starting TASK: "+ taskidno +" --------##################");
int N = Integer.parseInt((String)c.get("order"));
double[] C = new double[N];
double[] A = new double[N];
double[][] B = new double[N][N];
        for (int ti = 0; ti < N; ti++) {
            String rt = "R:" + String.valueOf(ti);
            A[ti] = Double.parseDouble((String) c.get(rt));
        }
        for (int tj = 0; tj < N; tj++)
            for (int tk = 0; tk < N; tk++) {
                String rt = "R:[" + String.valueOf(tj) + "][" + String.valueOf(tk) + "]";
                B[tj][tk] = Double.parseDouble((String) c.get(rt));
            }
        // A = (double[]) c.get("firstmatrix");   // unused: the client sends values via the context keys above
        // B = (double[][]) c.get("secondmatrix");
        System.out.println("populated the matrix: " + B);
//File Service for creating a file
FileService fileService=new FileService(this);
vfile=fileService.createFile("result");
PrintStream ps=null;
try {
ps=new PrintStream(new FileOutputStream(vfile));
}
catch(Exception e) {
e.printStackTrace();
}
        // Row-times-matrix computation: C[j] = sum over k of A[k] * B[k][j]
        for (int j = 0; j < N; j++) {
            for (int k = 0; k < N; k++)
                C[j] += A[k] * B[k][j];
            System.out.println("c: " + C[j]);
            ps.println(String.valueOf(C[j]));
        }
        System.out.println("################## COMPLETED TASK: " + taskidno + " ##################");
ps.close();
}
//Aggregating the results...
public Result callback() {
Result r=new Result();
r.putFile(vfile);
return r;
}
    // Deserialization method for getting the object
    public Object getObject(String fileName) {
try
{
FileInputStream fin=new FileInputStream(fileName);
ObjectInputStream ois=new ObjectInputStream(fin);
Object o=ois.readObject();
ois.close();
return o;
}
catch(Exception e)
{
e.printStackTrace();
}
return null;
}
}
MATRIX CLIENT:
import jvishwa.client.*;
import jvishwa.query.*;
import jvishwa.metric.*;
import jvishwa.task.SchedulerType;
import jvishwa.Result;
import jvishwa.Context;
import java.io.FileOutputStream;
import java.io.PrintStream;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.Vector;
import java.io.FileInputStream;
import jvishwa.file.*;
public class MatrixClient {
public static void main(String args[]) {
int N=4;
        // Getting the instance of the client manager
        ClientManager manager = ClientManager.getInstance();
        Metric metric = new Metric();
Query query=new Query(metric);
//Setting the configuration parameters
query.setLowerBound(5);//20
query.setMemoryFractionValue(0.3);
query.setCPUFractionValue(0.7);
manager.setConditionType(ConditionType.DEFAULT_TYPE);
manager.setMetric(metric);
manager.setQuery(query);
manager.setSurplusValue(0);
manager.setMinDonors(1);
manager.setMaxDonors(4);
manager.setReplicaSize(1);
manager.setCustomClassPath(System.getProperty("user.dir"));
manager.setSchedulerType(SchedulerType.DYNAMIC);
manager.setGridNodeIP("172.16.12.1");
manager.setClassDirectory("/home/apkarthick/vishwa/MatrixMultiplication/");
        // Generating the matrices
        double[][] A = new double[N][N];
        double[][] B = new double[N][N];
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                A[i][j] = Math.random();
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                B[i][j] = Math.random();
        System.out.println("matrix formed");
        // Initializing the grid with application requirements
        try {
            manager.initilize();
            System.out.println("Press any key to start the execution...");
            try {
                BufferedReader input = new BufferedReader(new InputStreamReader(System.in));
                input.readLine();
            } catch (Exception e) {
                e.printStackTrace();
            }
        } catch (Exception e) {
            e.printStackTrace();
            System.exit(0);
        }
SubTaskHandle handleSet[]=new SubTaskHandle[N];
Context context=null;
//Creating subtasks
        for (int i = 0; i < N; i++) {
            context = new Context();
            // Row i of A goes into the context as "R:<col>"
            for (int ti = 0; ti < N; ti++) {
                String rt = "R:" + String.valueOf(ti);
                double ri = A[i][ti];
                context.put(rt, String.valueOf(ri));
            }
            // The whole of B goes in as "R:[row][col]"
            for (int tj = 0; tj < N; tj++)
                for (int tk = 0; tk < N; tk++) {
                    String rt = "R:[" + String.valueOf(tj) + "][" + String.valueOf(tk) + "]";
                    double ri = B[tj][tk];
                    context.put(rt, String.valueOf(ri));
                }
            // context.put("firstmatrix", A[i]);
            // context.put("secondmatrix", B);
            context.put("taskid", String.valueOf(i));
            context.put("order", String.valueOf(N));
            handleSet[i] = manager.execute(new Matrix(), context);
        }
        // Remaining steps follow the same pattern as PrimeClient:
        // wait on the barrier, then read each subtask's result file
        // and compile the rows of C
        manager.barrier();
        manager.close();
    }
}
OUTPUT:
Compiling results from Sub-Task:0 1.3110847180130507 0.45072657945777495 0.9328806887907788 0.0080678568628709
Compiling results from Sub-Task:1 1.9840518891858074 0.39136279611877806 1.0333216991598813 0.5685121830127593
Compiling results from Sub-Task:2 0.08503545774512963 1.3325388149855717 1.113233731808045 0.07265781176921086
Compiling results from Sub-Task:3 1.1803533183669492 0.18617914016555293 1.892017828553201 0.647364140085597
RESULT:
Thus matrix multiplication for large matrices has been computed using grid nodes and the result
has been viewed.
Ex No: 5 STORING & ACCESSING FILES IN DROPBOX
AIM:
To store and access files in Dropbox.
ALGORITHM:
STEP 1: Make sure you’ve installed the desktop app on your computer.
STEP 2: Drag and drop files into the Dropbox folder. That’s it!
STEP 3: Sign in to dropbox.com.
STEP 4: Click the blue Upload file button at the top of the window.
On Windows or Mac
1. Install the Dropbox desktop app if you haven’t already.
2. Open your Dropbox folder, and find the file or folder you’d like to share.
3. Right-click on the file and select Copy Dropbox Link. The link will be copied
automatically. Just paste it wherever you’d like.
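Steps 1 and 2 of the desktop workflow amount to copying a file into the synced Dropbox folder; the desktop app then uploads it in the background. A minimal Java sketch of that copy, where a temporary directory stands in for the real Dropbox folder so the example runs anywhere:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class DropboxDropSketch {
    // "Drag and drop" is just a file copy into the synced folder;
    // the Dropbox desktop app watches that folder and syncs it.
    public static Path dropInto(Path dropboxFolder, Path file) throws Exception {
        Files.createDirectories(dropboxFolder);
        return Files.copy(file, dropboxFolder.resolve(file.getFileName()),
                StandardCopyOption.REPLACE_EXISTING);
    }

    public static void main(String[] args) throws Exception {
        // A temp directory stands in for the real ~/Dropbox folder.
        Path tmp = Files.createTempDirectory("demo");
        Path file = Files.writeString(tmp.resolve("notes.txt"), "hello");
        Path synced = dropInto(tmp.resolve("Dropbox"), file);
        System.out.println("Copied into synced folder: " + synced);
    }
}
```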
Work on files together
Collaborate on presentations and docs — without emailing files back and forth. Just create a shared
folder and add other people to it. When you edit a file in a shared folder, everyone instantly gets the
latest version on their devices.
Set up a shared folder
On dropbox.com
Sign in to dropbox.com, and click on the Sharing tab on the left side of the window.
Choose New shared folder at the top of the window, select I’d like to create and share a new folder, then
click Next.
Enter a name for your shared folder, then click Next.
RESULT:
Thus storing and accessing files in Dropbox was executed and the output was verified
successfully.
Ex No: 6 IMPLEMENTATION OF PARA-VIRTUALIZATION USING VMWARE WORKSTATION / ORACLE VIRTUALBOX
AIM:
Implementation of VirtualBox for virtualization of any OS.
ALGORITHM:
Host operating system (host OS). This is the operating system of the physical computer on
which Virtual Box was installed. There are versions of Virtual Box for Windows, Mac OS X, Linux and
Solaris hosts.
Guest operating system (guest OS). This is the operating system that is running inside the virtual
machine. Theoretically, Virtual Box can run any x86 operating system (DOS, Windows, OS/2, FreeBSD, Open
BSD), but to achieve near-native performance of the guest code on your machine, we had to go through a lot of
optimizations that are specific to certain operating systems. So while your favorite operating system may run as a
guest, we officially support and optimize for a select few (which, however, include the most common ones).
Virtual machine (VM). This is the special environment that Virtual Box creates for your guest operating
system while it is running. In other words, you run your guest operating system "in" a VM. Normally, a VM will
be shown as a window on your computer’s desktop, but depending on which of the various frontends of
VirtualBox you use, it can be displayed in full screen mode or remotely on another computer. In a more abstract
way, internally, Virtual Box thinks of a VM as a set of parameters that determine its behavior. They include
hardware settings (how much memory the VM should have, what hard disks Virtual Box should virtualize
through which container files, what CDs are mounted etc.) as well as state information (whether the VM
is currently running, saved, its snapshots etc.). These settings are mirrored in the Virtual Box Manager window as
well as the VBoxManage command line program;
Guest Additions. This refers to special software packages which are shipped with VirtualBox but designed to
be installed inside a VM to improve performance of the guest OS and to add extra features.
Starting Virtual Box:
After installation, you can start VirtualBox as follows:
On a Windows host, in the standard "Programs" menu, click on the item in the "VirtualBox"
group. On Vista or Windows 7, you can also type "VirtualBox" in the search box of the "Start"
menu.
On a Mac OS X host, in the Finder, double-click on the "VirtualBox" item in the "Applications"
folder.
On a Linux or Solaris host, depending on your desktop environment, a "VirtualBox" item may
have been placed in either the "System" or "System Tools" group of your "Applications" menu.
Alternatively, you can type VirtualBox in a terminal.
This window is called the "VirtualBox Manager". On the left, you can see a pane that will later list all
your virtual machines. Since you have not created any, the list is empty. A row of buttons above it
allows you to create new VMs and work on existing VMs, once you have some. The pane on the right
displays the properties of the virtual machine currently selected, if any. Again, since you don't have any
machines yet, the pane displays a welcome message.
Creating your first virtual machine:
Click on the "New" button at the top of the VirtualBox Manager window. A wizard will pop up to guide
you through setting up a new virtual machine (VM)
Running your virtual machine: To start a virtual machine, you have several options:
Double-click on its entry in the list within the Manager window or
select its entry in the list in the Manager window it and press the "Start" button at the top or
for virtual machines created with VirtualBox 4.0 or later, navigate to the "VirtualBox VMs"
folder in your system user's home directory, find the subdirectory of the machine you want to
start and double-click on the machine settings file (with a .vbox file extension). This opens up a
new window, and the virtual machine which you selected will boot up. Everything which would
normally be seen on the virtual system's monitor is shown in the window. In general, you can use
the virtual machine much like you would use a real computer. There are a couple of points worth
mentioning, however.
Saving the state of the machine: When you click on the "Close" button of your virtual machine
window (at the top right of the window, just like you would close any other window on your system),
VirtualBox asks you whether you want to "save" or "power off" the VM. (As a shortcut, you can also
press the Host key together with "Q".)
Save the machine state: With this option, VirtualBox "freezes" the virtual machine by
completely saving its state to your local disk. When you start the VM again later, you will find
that the VM continues exactly where it was left off. All your programs will still be open, and
your computer resumes operation. Saving the state of a virtual machine is thus in some ways
similar to suspending a laptop computer (e.g. by closing its lid).
Send the shutdown signal. This will send an ACPI shutdown signal to the virtual machine,
which has the same effect as if you had pressed the power button on a real computer. So long as
the VM is running a fairly modern operating system, this should trigger a proper shutdown
mechanism from within the VM.
The "Discard" button in the Manager window discards a virtual machine's saved state. This has the same effect as powering it off, and the same warnings apply.
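The close options above have command-line equivalents via the VBoxManage controlvm subcommand (the VM name is illustrative):

```shell
# Save the machine state to disk (like suspending a laptop)
VBoxManage controlvm "UbuntuVM" savestate

# Send the ACPI shutdown signal (like pressing the power button)
VBoxManage controlvm "UbuntuVM" acpipowerbutton

# Hard power-off (like pulling the plug; unsaved data may be lost)
VBoxManage controlvm "UbuntuVM" poweroff
```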
Importing and exporting virtual machines
VirtualBox can import and export virtual machines in the industry-standard Open Virtualization
Format (OVF). OVF is a cross-platform standard supported by many virtualization products which
allows for creating ready-made virtual machines that can then be imported into a virtualizer such
as VirtualBox. VirtualBox makes OVF import and export easy to access and supports it from the
Manager window as well as its command-line interface. This allows for packaging so-called
virtual appliances: disk images together with configuration settings that can be distributed easily.
This way one can offer complete ready-to-use software packages (operating systems with
applications) that need no configuration or installation except for importing into VirtualBox.
They can come in several files, as one or several disk images, typically in the widely-used
VMDK format (see Section 5.2, "Disk image files (VDI, VMDK, VHD, HDD)"), and a textual
description file in an XML dialect with an .ovf extension. These files must then reside in the
same directory for VirtualBox to be able to import them.
Alternatively, the above files can be packed together into a single archive file, typically with an
.ova extension. (Such archive files use a variant of the TAR archive format and can therefore be
unpacked outside of VirtualBox with any utility that can unpack standard TAR files.)
Select "File" -> "Export appliance". A dialog window appears that allows you to combine
several virtual machines into an OVF appliance. Then select the target location where the files
should be stored, and the conversion process begins. This can again take a while.
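Import and export can likewise be done from VirtualBox's command-line interface. A sketch, with an illustrative appliance file name:

```shell
# Export a VM to a single OVA archive
VBoxManage export "UbuntuVM" --output appliance.ova

# Inspect the appliance's contents without importing it
VBoxManage import appliance.ova --dry-run

# Import the appliance into VirtualBox
VBoxManage import appliance.ova
```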
RESULT:
Thus we have studied the use of multiple operating systems with VirtualBox through virtualization.
Ex no:7 IMPLEMENTATION OF VIRTUAL MACHINE MIGRATION BASED ON A
CERTAIN CONDITION FROM ONE NODE TO THE OTHER
AIM:
To show virtual machine migration based on a certain condition from one node to the other.
ALGORITHM:
Existing images can be cloned to a new one. This is useful to make a backup of an image before you
modify it, or to get a private, persistent copy of an image shared by another user.
To clone an image, execute
$ oneimage clone Ubuntu new_image
Listing Available Images
You can use the oneimage list command to check the available images in the
repository.
$ oneimage list
ID USER GROUP NAME DATASTORE SIZE TYPE PER STAT RVMS
0 oneuser1 users Ubuntu default 8M OS No rdy 0
To get complete information about an image, use oneimage show, or list images
continuously with oneimage top.
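With an image in place and a VM running from it, the migration itself is triggered with OpenNebula's onevm command. A sketch (the VM ID and host ID below are illustrative):

```shell
# List the available hosts to pick a migration target
onehost list

# Cold-migrate VM 0 to host 2 (the VM is saved, moved, and restarted)
onevm migrate 0 2

# Or live-migrate, keeping the VM running during the move
onevm migrate --live 0 2
```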
RESULT:
Thus the implementation of virtual machine migration based on a certain condition from one node to
the other has been verified and executed successfully.
Ex no:8 FIND PROCEDURE TO INSTALL STORAGE CONTROLLER AND INTERACT WITH IT
AIM:
To find the procedure to install storage controller and interact with it.
ALGORITHM:
The storage controller is installed as the Swift and Cinder components when installing OpenStack.
Interaction with the storage is done through the portal.
OpenStack Object Storage (swift) is used for redundant, scalable data storage using clusters of
standardized servers to store petabytes of accessible data. It is a long-term storage system for large
amounts of static data which can be retrieved and updated.
OpenStack Object Storage provides a distributed, API-accessible storage platform that can be
integrated directly into an application or used to store any type of file, including VM images, backups,
archives, or media files. In the OpenStack dashboard, you can only manage containers and objects.
In OpenStack Object Storage, containers provide storage for objects in a manner similar to a Windows
folder or Linux file directory, though they cannot be nested. An object in OpenStack consists of the file
to be stored in the container and any accompanying metadata.
Create a container
1. Log in to the dashboard.
2. Select the appropriate project from the drop down menu at the top left.
3. On the Project tab, open the Object Store tab and click Containers category.
4. Click Create Container.
5. In the Create Container dialog box, enter a name for the container, and then click Create
Container.
You have successfully created a container.
Upload an object
1. Log in to the dashboard.
2. Select the appropriate project from the drop down menu at the top left.
3. On the Project tab, open the Object Store tab and click Containers category.
4. Select the container in which you want to store your object.
5. Click Upload Object.
The Upload Object To Container: <name> dialog box appears. <name> is the name of the
container to which you are uploading the object.
6. Enter a name for the object.
7. Browse to and select the file that you want to upload.
8. Click Upload Object.
You have successfully uploaded an object to the container.
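The same create-container and upload workflow can also be performed from the command line with the swift client instead of the dashboard (the container and file names below are examples):

```shell
# Create a container named "backups"
swift post backups

# Upload a local file into the container
swift upload backups notes.txt

# List all containers, then the objects inside one
swift list
swift list backups
```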
Manage an object
To edit an object
1. Log in to the dashboard.
2. Select the appropriate project from the drop down menu at the top left.
3. On the Project tab, open the Object Store tab and click Containers category.
4. Select the container in which you want to store your object.
5. Click the menu button and choose Edit from the dropdown list.
The Edit Object dialog box is displayed.
6. Browse to and select the file that you want to upload.
7. Click Update Object.
RESULT:
Thus the procedure to install the storage controller and interact with it has been verified and executed
successfully.
ExNo:9 WRITE A WORD COUNT PROGRAM TO DEMONSTRATE THE USE OF MAP &
REDUCE TASKS.
AIM:
To write a word count program to demonstrate the use of Map and Reduce tasks.
ALGORITHM:
1. Splitting – The splitting parameter can be anything, e.g. splitting by space, comma, semicolon,
or even by a new line ('\n').
2. Mapping – each split is processed by a map function, which emits a (key, value) pair, here
(word, 1), for every word found.
3. Intermediate splitting – the entire process runs in parallel on different clusters. In order to group
them in the "Reduce Phase", data with the same KEY must end up on the same cluster.
4. Reduce – essentially a group-by phase: all values belonging to the same key are aggregated.
5. Combining – the last phase, where all the data (the individual result sets from each cluster) is
combined together to form the final result.
6. Open Eclipse > File > New > Java Project > (Name it – MRProgramsDemo) > Finish.
7. Right Click > New > Package (Name it – PackageDemo) > Finish.
8. Right Click on Package > New > Class (Name it – WordCount).
9. Add the following reference libraries:
10. Right Click on Project > Build Path > Add External Archives:
11. /usr/lib/hadoop-0.20/hadoop-core.jar
12. /usr/lib/hadoop-0.20/lib/commons-cli-1.2.jar
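Before writing the Hadoop version, the five phases above can be mimicked on a single machine with standard Unix tools: tr performs the splitting and mapping, sort performs the shuffle (grouping equal keys together), and uniq -c performs the reduce. The sample words below are illustrative:

```shell
# Split on commas, map each word to upper case, group equal words, count them
printf 'bus,car,bus\ntrain,bus\n' \
  | tr ',' '\n' \
  | tr 'a-z' 'A-Z' \
  | sort \
  | uniq -c
# prints one line per word, e.g. "3 BUS", "1 CAR", "1 TRAIN"
```

This one-liner processes everything on one machine; the Hadoop program below distributes exactly the same phases across a cluster.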
SOURCE CODE
package PackageDemo;
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;
public class WordCount {
public static void main(String [] args) throws Exception
{
Configuration c=new Configuration();
String[] files=new GenericOptionsParser(c,args).getRemainingArgs();
Path input=new Path(files[0]);
Path output=new Path(files[1]);
Job j=new Job(c,"wordcount");
j.setJarByClass(WordCount.class);
j.setMapperClass(MapForWordCount.class);
j.setReducerClass(ReduceForWordCount.class);
j.setOutputKeyClass(Text.class);
j.setOutputValueClass(IntWritable.class);
FileInputFormat.addInputPath(j, input);
FileOutputFormat.setOutputPath(j, output);
System.exit(j.waitForCompletion(true)?0:1);
}
// Mapper: emits (WORD, 1) for every comma-separated token in each input line
public static class MapForWordCount extends Mapper<LongWritable, Text, Text, IntWritable>{
public void map(LongWritable key, Text value, Context con) throws IOException,
InterruptedException
{
String line = value.toString();
String[] words=line.split(",");
for(String word: words )
{
Text outputKey = new Text(word.toUpperCase().trim());
IntWritable outputValue = new IntWritable(1);
con.write(outputKey, outputValue);
}
}
}
// Reducer: sums the counts emitted for each word
public static class ReduceForWordCount extends Reducer<Text, IntWritable, Text, IntWritable>
{
public void reduce(Text word, Iterable<IntWritable> values, Context con) throws IOException,
InterruptedException
{
int sum = 0;
for(IntWritable value : values)
{
sum += value.get();
}
con.write(word, new IntWritable(sum));
}
}
}
The above program consists of three classes:
- the Driver class (the public static void main method; this is the entry point),
- the Map class, which extends the public class
Mapper<KEYIN,VALUEIN,KEYOUT,VALUEOUT> and implements the map function, and
- the Reduce class, which extends the public class
Reducer<KEYIN,VALUEIN,KEYOUT,VALUEOUT> and implements the reduce function.
Make a jar file
Right Click on Project> Export> Select export destination as Jar File > next> Finish
To copy the input file into HDFS, open the terminal and enter the following command:
[training@localhost ~]$ hadoop fs -put wordcountFile wordCountFile
Run the jar file:
[training@localhost ~]$ hadoop jar MRProgramsDemo.jar PackageDemo.WordCount
wordCountFile MRDir1
RESULT:
[training@localhost ~]$ hadoop fs -ls MRDir1
Found 3 items
-rw-r--r-- 1 training supergroup 0 2016-02-23 03:36 /user/training/MRDir1/_SUCCESS
drwxr-xr-x - training supergroup 0 2016-02-23 03:36 /user/training/MRDir1/_logs
-rw-r--r-- 1 training supergroup 20 2016-02-23 03:36 /user/training/MRDir1/part-r-00000
[training@localhost ~]$ hadoop fs -cat MRDir1/part-r-00000
BUS 7
CAR 4
TRAIN 6