
Conclusion:
From this practical I learned about CloudSim and how to install CloudSim in NetBeans and Eclipse.


Practical 2

Aim: i. CloudAnalyst class design: to add a load balancing policy


ii. CloudSim paper.

Hardware Requirement: None


Software Requirement: Eclipse

Part I: CloudAnalyst class design to add a load balancing policy


1) Create your own load balancing algorithm under cloudsim.ext.datacenter and call it
DynamicLoadBalancer.java

2) Create a string constant in constant.java under cloudsim.ext and call it LOAD_BALANCE_DLB


3) Add the load balancing policy in ConfigureSimulationPanel.java under cloudsim.ext.gui.screen

4) Add the load balancing policy to the if-else condition in DatacenterController.java under cloudsim.ext.datacenter


5) Output: the Dynamic Load Balancer option is now available in the advanced options of the
Configure Simulation screen. A minimal sketch of the code changes from steps 1-4 is shown below.
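Below is a minimal sketch of the changes described in steps 1-4. It assumes the CloudAnalyst base class VmLoadBalancer in cloudsim.ext.datacenter, with an abstract getNextAvailableVm() method and an allocatedVm() book-keeping helper; the "pick the least-loaded VM" rule used here is only an illustrative policy, and the exact class and method names should be verified against the CloudAnalyst sources in your workspace.

package cloudsim.ext.datacenter;

import java.util.Collections;
import java.util.Map;

// Step 1 (sketch): a new load balancing policy that always picks the VM which
// currently holds the fewest requests. VmLoadBalancer, getNextAvailableVm() and
// allocatedVm() are assumed from the CloudAnalyst code base.
public class DynamicLoadBalancer extends VmLoadBalancer {

    // Current number of requests assigned to each VM id, assumed to be
    // kept up to date by the DatacenterController callbacks.
    private final Map<Integer, Integer> currentAllocationCounts;

    public DynamicLoadBalancer(Map<Integer, Integer> currentAllocationCounts) {
        this.currentAllocationCounts = currentAllocationCounts;
    }

    @Override
    public int getNextAvailableVm() {
        if (currentAllocationCounts.isEmpty()) {
            return 0; // no data yet, fall back to the first VM
        }
        // Choose the VM with the smallest current load.
        int bestVm = Collections.min(currentAllocationCounts.entrySet(),
                Map.Entry.comparingByValue()).getKey();
        allocatedVm(bestVm); // book-keeping provided by the base class
        return bestVm;
    }
}

Step 2 then adds a constant such as public static final String LOAD_BALANCE_DLB = "Dynamic Load Balancer"; to the constants file, step 3 registers that string with the load-balancing-policy combo box in ConfigureSimulationPanel.java, and step 4 adds a matching else-if branch in DatacenterController.java, for example: else if (loadBalancePolicy.equals(Constants.LOAD_BALANCE_DLB)) { loadBalancer = new DynamicLoadBalancer(...); } (the field and variable names in this branch are placeholders).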


Part II: CloudSim paper

Cloud computing is a recent advancement wherein IT infrastructure and applications are
provided as “services” to end-users under a usage-based payment model. They can leverage
virtualized services even on the fly based on requirements (workload patterns and QoS) varying
with time. The application services hosted under Cloud computing model have complex
provisioning, composition, configuration, and deployment requirements. Evaluating the
performance of Cloud provisioning policies, application workload models, and resources
performance models in a repeatable manner under varying system and user configurations and
requirements is difficult to achieve. To overcome this challenge, we propose CloudSim: an
extensible simulation toolkit that enables modeling and simulation of Cloud computing systems
and application provisioning environments. The CloudSim toolkit supports both system and
behaviour modeling of Cloud system components such as data centers, virtual machines (VMs)
and resource provisioning policies. It implements generic application provisioning techniques
that can be extended with ease and limited efforts. Currently, it supports modeling and
simulation of Cloud computing environments consisting of both single and inter-networked
clouds (federation of clouds). Moreover, it exposes custom interfaces for implementing policies
and provisioning techniques for allocation of VMs under inter-networked Cloud computing
scenarios. Several researchers from organisations such as HP Labs in USA are using CloudSim
in their investigation on Cloud resource provisioning and energy-efficient management of data
center resources. The usefulness of CloudSim is demonstrated by a case study involving
dynamic provisioning of application services in hybrid federated clouds environment. The
result of this case study proves that the federated Cloud computing model significantly
improves the application QoS requirements under fluctuating resource and service demand
patterns.


Class Diagram of CloudSim


CloudSim core simulation framework class diagram

CloudSimTags: This class contains various static event/command tags that indicate the type
of action that needs to be undertaken by CloudSim entities when they receive or send events.
CloudInformationService: A Cloud Information Service (CIS) is an entity that provides
resource registration, indexing, and discovery capabilities. The CIS supports two basic primitives:
(i) publish(), which allows entities to register themselves with the CIS, and (ii) search(), which
allows entities such as CloudCoordinator and brokers to discover the status and endpoint
contact address of other entities. This entity also notifies the other entities about the end of the
simulation.
CloudSimShutdown: This is an entity that waits for the termination of all end-user and broker
entities, and then signals the end of simulation to CIS.
Predicate: Predicates are used for selecting events from the deferred queue. This is an abstract
class and must be extended to create a new predicate. Some standard predicates are provided
that are presented in Figure 7 (b).
PredicateAny: This class represents a predicate that matches any event on the deferred event
queue. There is a publicly accessible instance of this predicate in the CloudSim class, called
CloudSim.SIM_ANY, and hence no new instances need to be created.
PredicateFrom: This class represents a predicate that selects events fired by specific entities.


PredicateNone: This represents a predicate that does not match any event on the deferred event
queue. There is a publicly accessible static instance of this predicate in the CloudSim class,
called CloudSim.SIM_NONE, hence users do not need to create new instances of this class.
PredicateNotFrom: This class represents a predicate that selects events that have not been
sent by specific entities.
PredicateNotType: This class represents a predicate to select events that don't match specific
tags.
PredicateType: This class represents a predicate to select events with specific tags.
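As an illustration of how these predicate classes are extended, the sketch below combines the behaviour of PredicateType and PredicateFrom: it matches only events that carry a given tag and were fired by a given entity. It is a minimal example assuming the CloudSim 3.x API (the abstract Predicate.match(SimEvent) method, SimEvent.getTag() and SimEvent.getSource()); such a predicate can then be passed to an entity's getNextEvent() call to pull only matching events from the deferred queue.

package org.cloudbus.cloudsim.examples;

import org.cloudbus.cloudsim.core.SimEvent;
import org.cloudbus.cloudsim.core.predicates.Predicate;

// A custom predicate that selects events with a specific tag sent by a specific entity.
public class PredicateTypeFrom extends Predicate {

    private final int tag;      // event tag to match (e.g. a CloudSimTags value)
    private final int sourceId; // id of the entity that fired the event

    public PredicateTypeFrom(int tag, int sourceId) {
        this.tag = tag;
        this.sourceId = sourceId;
    }

    @Override
    public boolean match(SimEvent event) {
        return event.getTag() == tag && event.getSource() == sourceId;
    }
}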


Practical 3

Aim: Understanding of basic CloudSim Examples.

(1) Write a program in CloudSim using the NetBeans IDE to create a datacenter with one host
and run four cloudlets on it.
(2) Write a program in CloudSim using the NetBeans IDE to create a datacenter with three hosts
and run three cloudlets on it.

Hardware Required: None


Software Required: Netbeans

(1) Write a program in CloudSim using the NetBeans IDE to create a datacenter with one
host and run four cloudlets on it.
/*
* To change this license header, choose License Headers in Project Properties.
* To change this template file, choose Tools | Templates
* and open the template in the editor.
*/
package org.cloudbus.cloudsim.examples;
import java.text.DecimalFormat;
import java.util.*;
import org.cloudbus.cloudsim.*;
import org.cloudbus.cloudsim.core.CloudSim;
import org.cloudbus.cloudsim.provisioners.BwProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.PeProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.RamProvisionerSimple;

public class Practical3_1 {

private static List<Cloudlet> cloudletList;


/**
* The vmlist.


*/
private static List<Vm> vmlist;

public static void main(String[] args) {


Log.printLine("Starting CloudSimExample1...");
try {
int num_user = 1; // number of cloud users
Calendar calendar = Calendar.getInstance();
boolean trace_flag = false; // mean trace events
// Initialize the CloudSim library
CloudSim.init(num_user, calendar, trace_flag);
// Second step: Create Datacenters
// Datacenters are the resource providers in CloudSim. We need at
// least one of them to run a CloudSim simulation
Datacenter datacenter0 = createDatacenter("Datacenter_0");
// Third step: Create Broker
DatacenterBroker broker = createBroker();
int brokerId = broker.getId();
// Fourth step: Create one virtual machine
vmlist = new ArrayList<Vm>();
// VM description
int vmid = 0;
int mips = 1000;
long size = 10000; // image size (MB)
int ram = 512; // vm memory (MB)
long bw = 1000;
int pesNumber = 1; // number of cpus
String vmm = "Xen"; // VMM name
// create VM
Vm vm = new Vm(vmid, brokerId, mips, pesNumber, ram, bw, size, vmm,


new CloudletSchedulerTimeShared());
// add the VM to the vmList
vmlist.add(vm);
// submit vm list to the broker
broker.submitVmList(vmlist);
// Fifth step: Create one Cloudlet
cloudletList = new ArrayList<Cloudlet>();
// Cloudlet properties
int id = 0;
long length = 400000;
long fileSize = 300;
long outputSize = 300;
UtilizationModel utilizationModel = new UtilizationModelFull();
Cloudlet cloudlet1 = new Cloudlet(id, length, pesNumber, fileSize,
outputSize, utilizationModel, utilizationModel, utilizationModel);
id++;
Cloudlet cloudlet2 = new Cloudlet(id, length, pesNumber, fileSize, outputSize,
utilizationModel, utilizationModel, utilizationModel);
id++;
Cloudlet cloudlet3 = new Cloudlet(id, length, pesNumber, fileSize, outputSize,
utilizationModel, utilizationModel, utilizationModel);
id++;
Cloudlet cloudlet4 = new Cloudlet(id, length, pesNumber, fileSize, outputSize,
utilizationModel, utilizationModel, utilizationModel);
cloudlet1.setUserId(brokerId);
cloudlet1.setVmId(vmid);
cloudlet2.setUserId(brokerId);
cloudlet2.setVmId(vmid);
cloudlet3.setUserId(brokerId);
cloudlet3.setVmId(vmid);


cloudlet4.setUserId(brokerId);
cloudlet4.setVmId(vmid);
cloudletList.add(cloudlet1);
cloudletList.add(cloudlet2);
cloudletList.add(cloudlet3);
cloudletList.add(cloudlet4);
// submit cloudlet list to the broker
broker.submitCloudletList(cloudletList);
// Sixth step: Starts the simulation
CloudSim.startSimulation();
CloudSim.stopSimulation();
//Final step: Print results when simulation is over
List<Cloudlet> newList = broker.getCloudletReceivedList();
printCloudletList(newList);
// Print the debt of each user to each datacenter
// datacenter0.printDebts();
Log.printLine("CloudSimExample1 finished!");
} catch (Exception e) {
e.printStackTrace();
Log.printLine("Unwanted errors happen");
}
}

private static Datacenter createDatacenter(String name) {


// Here are the steps needed to create a PowerDatacenter:
// 1. We need to create a list to store our machine
List<Host> hostList = new ArrayList<Host>();
// 2. A Machine contains one or more PEs or CPUs/Cores. In this example,
// it will have only one core.
List<Pe> peList = new ArrayList<Pe>();
int mips = 1000;


// 3. Create PEs and add these into a list.


peList.add(new Pe(0, new PeProvisionerSimple(mips))); // need to store Pe id and MIPS Rating
// 4. Create Host with its id and list of PEs and add them to the list of machines
int hostId = 0;
int ram = 2048; // host memory (MB)
long storage = 1000000; // host storage
int bw = 10000;
hostList.add(
new Host(
hostId,
new RamProvisionerSimple(ram),
new BwProvisionerSimple(bw),
storage,
peList,
new VmSchedulerTimeShared(peList)
)
); // This is our machine
// 5. Create a DatacenterCharacteristics object that stores the properties of a datacenter:
// architecture, OS, list of Machines, allocation policy: time- or space-shared, time zone
// and its price (G$/Pe time unit).
String arch = "x86"; // system architecture
String os = "Linux"; // operating system
String vmm = "Xen";
double time_zone = 10.0; // time zone this resource located
double cost = 3.0; // the cost of using processing in this resource
double costPerMem = 0.05; // the cost of using memory in this resource
double costPerStorage = 0.001; // the cost of using storage in resource
double costPerBw = 0.0; // the cost of using bw in this resource
LinkedList<Storage> storageList = new LinkedList<Storage>(); // we are not adding SAN devices by now
DatacenterCharacteristics characteristics = new DatacenterCharacteristics(


arch, os, vmm, hostList, time_zone, cost, costPerMem,
costPerStorage, costPerBw);
// 6. Finally, we need to create a PowerDatacenter object.
Datacenter datacenter = null;
try {
datacenter = new Datacenter(name, characteristics, new
VmAllocationPolicySimple(hostList), storageList, 0);
} catch (Exception e) {
e.printStackTrace();
}
return datacenter;
}

private static DatacenterBroker createBroker() {


DatacenterBroker broker = null;
try {
broker = new DatacenterBroker("Broker");
} catch (Exception e) {
e.printStackTrace();
return null;
}
return broker;
}

private static void printCloudletList(List<Cloudlet> list) {


int size = list.size();
Cloudlet cloudlet;
String indent = " ";
Log.printLine();
Log.printLine("========== OUTPUT ==========");
Log.printLine("Cloudlet ID" + indent + "STATUS" + indent


+ "Data center ID" + indent + "VM ID" + indent + "Time" + indent


+ "Start Time" + indent + "Finish Time");
DecimalFormat dft = new DecimalFormat("###.##");
for (int i = 0; i < size; i++) {
cloudlet = list.get(i);
Log.print(indent + cloudlet.getCloudletId() + indent + indent);
if (cloudlet.getCloudletStatus() == Cloudlet.SUCCESS) {
Log.print("SUCCESS");
Log.printLine(indent + indent + cloudlet.getResourceId()
+ indent + indent + indent + cloudlet.getVmId()
+ indent + indent
+ dft.format(cloudlet.getActualCPUTime()) + indent
+ indent + dft.format(cloudlet.getExecStartTime())
+ indent + indent
+ dft.format(cloudlet.getFinishTime()));
}
}
}
}
Output:


(2) Write a program in CloudSim using the NetBeans IDE to create a datacenter with three
hosts and run three cloudlets on it.
/*
* To change this license header, choose License Headers in Project Properties.
* To change this template file, choose Tools | Templates
* and open the template in the editor.
*/
package org.cloudbus.cloudsim.examples;


/**
*
* @author Pooja Patel
*/
import java.text.DecimalFormat;
import java.util.ArrayList;
import java.util.Calendar;
import java.util.LinkedList;
import java.util.List;
import org.cloudbus.cloudsim.*;
import org.cloudbus.cloudsim.core.CloudSim;
import org.cloudbus.cloudsim.provisioners.BwProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.PeProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.RamProvisionerSimple;

public class Practical3_2 {

/**
* The cloudlet list.
*/
private static List<Cloudlet> cloudletList;
/**
* The vmlist.
*/
private static List<Vm> vmlist;

public static void main(String[] args) {


Log.printLine("Starting CloudSimExample2...");
try {


// First step: Initialize the CloudSim package. It should be called


// before creating any entities.
int num_user = 1; // number of cloud users
Calendar calendar = Calendar.getInstance();
boolean trace_flag = false; // mean trace events
// Initialize the CloudSim library
CloudSim.init(num_user, calendar, trace_flag);

// Second step: Create Datacenters


// Datacenters are the resource providers in CloudSim. We need at least one of them
// to run a CloudSim simulation
Datacenter datacenter0 = createDatacenter("Datacenter_0");
//Third step: Create Broker
DatacenterBroker broker = createBroker();
int brokerId = broker.getId();
//Fourth step: Create one virtual machine
vmlist = new ArrayList<Vm>();
//VM description
int vmid = 0;
int mips = 250;
long size = 10000; //image size (MB)
int ram = 2048; //vm memory (MB)
long bw = 1000;
int pesNumber = 1; //number of cpus
String vmm = "Xen"; //VMM name
//create two VMs
Vm vm1 = new Vm(vmid, brokerId, mips, pesNumber, ram, bw, size, vmm,
new CloudletSchedulerTimeShared());
//the second VM will have twice the priority of VM1 and so will receive twice CPU time
vmid++;
Vm vm2 = new Vm(vmid, brokerId, mips * 2, pesNumber, ram, bw, size,


vmm, new CloudletSchedulerTimeShared());


vmid++;
Vm vm3 = new Vm(vmid, brokerId, mips * 2, pesNumber, ram, bw, size,
vmm, new CloudletSchedulerTimeShared());
//add the VMs to the vmList
vmlist.add(vm1);

//add the VMs to the vmList


vmlist.add(vm2);
vmlist.add(vm3);
//submit vm list to the broker
broker.submitVmList(vmlist);
//Fifth step: Create two Cloudlets
cloudletList = new ArrayList<Cloudlet>();
//Cloudlet properties
int id = 0;
long length = 40000;
long fileSize = 300;
long outputSize = 300;
UtilizationModel utilizationModel = new UtilizationModelFull();
Cloudlet cloudlet1 = new Cloudlet(id, length, pesNumber, fileSize,
outputSize, utilizationModel, utilizationModel, utilizationModel);
cloudlet1.setUserId(brokerId);
id++;
Cloudlet cloudlet2 = new Cloudlet(id, length, pesNumber, fileSize,
outputSize, utilizationModel, utilizationModel, utilizationModel);
cloudlet2.setUserId(brokerId);
id++;
Cloudlet cloudlet3 = new Cloudlet(id, length, pesNumber, fileSize, outputSize,
utilizationModel, utilizationModel, utilizationModel);


cloudlet3.setUserId(brokerId);
//add the cloudlets to the list
cloudletList.add(cloudlet1);
cloudletList.add(cloudlet2);
cloudletList.add(cloudlet3);
//submit cloudlet list to the broker

broker.submitCloudletList(cloudletList);
//bind the cloudlets to the vms. This way, the broker
// will submit the bound cloudlets only to the specific VM
broker.bindCloudletToVm(cloudlet1.getCloudletId(), vm1.getId());
broker.bindCloudletToVm(cloudlet2.getCloudletId(), vm2.getId());
broker.bindCloudletToVm(cloudlet3.getCloudletId(), vm3.getId());
// Sixth step: Starts the simulation
CloudSim.startSimulation();
// Final step: Print results when simulation is over
List<Cloudlet> newList = broker.getCloudletReceivedList();
CloudSim.stopSimulation();
printCloudletList(newList);
//Print the debt of each user to each datacenter
// datacenter0.printDebts();
Log.printLine("CloudSimExample3 finished!");
} catch (Exception e) {
e.printStackTrace();
Log.printLine("The simulation has been terminated due to an unexpected error");
}
}

private static Datacenter createDatacenter(String name) {


// Here are the steps needed to create a PowerDatacenter:


// 1. We need to create a list to store


// our machine
List<Host> hostList = new ArrayList<Host>();
// 2. A Machine contains one or more PEs or CPUs/Cores.
// In this example, it will have only one core.
List<Pe> peList = new ArrayList<Pe>();
int mips = 1000;
// 3. Create PEs and add these into a list.
peList.add(new Pe(0, new PeProvisionerSimple(mips))); // need to store Pe id and MIPS Rating
//4. Create Hosts with its id and list of PEs and add them to the list of machines
int hostId = 0;
int ram = 2048; //host memory (MB)
long storage = 1000000; //host storage
int bw = 10000;
hostList.add(
new Host(
hostId,
new RamProvisionerSimple(ram),
new BwProvisionerSimple(bw),
storage,
peList,
new VmSchedulerTimeShared(peList)
)
); // This is our first machine
//create another machine in the Data center
List<Pe> peList2 = new ArrayList<Pe>();
peList2.add(new Pe(0, new PeProvisionerSimple(mips)));
hostId++;
hostList.add(
new Host(


hostId,
new RamProvisionerSimple(ram),
new BwProvisionerSimple(bw),
storage,
peList2,
new VmSchedulerTimeShared(peList2)
)
); // This is our second machine
List<Pe> peList3 = new ArrayList<Pe>();
peList3.add(new Pe(0, new PeProvisionerSimple(mips)));
hostId++;
hostList.add(
new Host(
hostId,
new RamProvisionerSimple(ram),
new BwProvisionerSimple(bw),
storage,
peList3,
new VmSchedulerTimeShared(peList3)
)
); // This is our third machine
// 5. Create a DatacenterCharacteristics object that stores the
// properties of a data center: architecture, OS, list of
// Machines, allocation policy: time- or space-shared, time zone
// and its price (G$/Pe time unit).
String arch = "x86"; // system architecture
String os = "Linux"; // operating system
String vmm = "Xen";
double time_zone = 10.0; // time zone this resource located
double cost = 3.0; // the cost of using processing in this resource


double costPerMem = 0.05; // the cost of using memory in this resource


double costPerStorage = 0.001; // the cost of using storage in this resource
double costPerBw = 0.0; // the cost of using bw in this resource
LinkedList<Storage> storageList = new LinkedList<Storage>(); // we are not adding SAN devices by now
DatacenterCharacteristics characteristics = new DatacenterCharacteristics(
arch, os, vmm, hostList, time_zone, cost, costPerMem, costPerStorage,
costPerBw);
// 6. Finally, we need to create a PowerDatacenter object.
Datacenter datacenter = null;
try {
datacenter = new Datacenter(name, characteristics, new
VmAllocationPolicySimple(hostList), storageList, 0);
} catch (Exception e) {
e.printStackTrace();
}
return datacenter;
}
// We strongly encourage users to develop their own broker policies, to submit vms and
// cloudlets according to the specific rules of the simulated scenario

private static DatacenterBroker createBroker() {


DatacenterBroker broker = null;
try {
broker = new DatacenterBroker("Broker");
} catch (Exception e) {
e.printStackTrace();
return null;
}
return broker;
}


private static void printCloudletList(List<Cloudlet> list) {


int size = list.size();
Cloudlet cloudlet;
String indent = " ";
Log.printLine();
Log.printLine("========== OUTPUT ==========");
Log.printLine("Cloudlet ID" + indent + "STATUS" + indent
+ "Data center ID" + indent + "VM ID" + indent + "Time" + indent
+ "Start Time" + indent + "Finish Time");
DecimalFormat dft = new DecimalFormat("###.##");
for (int i = 0; i < size; i++) {
cloudlet = list.get(i);
Log.print(indent + cloudlet.getCloudletId() + indent + indent);
if (cloudlet.getCloudletStatus() == Cloudlet.SUCCESS) {
Log.print("SUCCESS");
Log.printLine(indent + indent + cloudlet.getResourceId() + indent
+ indent + indent + cloudlet.getVmId() + indent + indent
+ dft.format(cloudlet.getActualCPUTime()) + indent + indent
+ dft.format(cloudlet.getExecStartTime()) + indent + indent +
dft.format(cloudlet.getFinishTime()));
}
}
}
}


Output:

Conclusion:
From this practical I learned about CloudSim and how to implement different scenarios in CloudSim.


Practical 4

Aim: Understanding Network Examples (any two examples)


Hardware Requirement: None
Software Requirement: Netbeans

1. Network Example 1
Program:
package org.cloudbus.cloudsim.examples.network;
import java.text.DecimalFormat;
import java.util.ArrayList;
import java.util.Calendar;
import java.util.LinkedList;
import java.util.List;
import org.cloudbus.cloudsim.*;
import org.cloudbus.cloudsim.core.CloudSim;
import org.cloudbus.cloudsim.provisioners.BwProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.PeProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.RamProvisionerSimple;
/**
* A simple example showing how to create
* a datacenter with one host and a network
 * topology and run one cloudlet on it.
*/
public class NetworkExample1 {
/** The cloudlet list. */
private static List<Cloudlet> cloudletList;
/** The vmlist. */
private static List<Vm> vmlist;
/**
* Creates main() to run this example
*/
public static void main(String[] args) {
Log.printLine("Starting NetworkExample1...");
try {
// First step: Initialize the CloudSim package. It should be called
// before creating any entities.
int num_user = 1; // number of cloud users
Calendar calendar = Calendar.getInstance();
boolean trace_flag = false; // mean trace events
// Initialize the CloudSim library
CloudSim.init(num_user, calendar, trace_flag);
// Second step: Create Datacenters
// Datacenters are the resource providers in CloudSim. We need at least one of them to run a
// CloudSim simulation
Datacenter datacenter0 = createDatacenter("Datacenter_0");
//Third step: Create Broker
DatacenterBroker broker = createBroker();


int brokerId = broker.getId();


//Fourth step: Create one virtual machine
vmlist = new ArrayList<Vm>();
//VM description
int vmid = 0;
int mips = 250;
long size = 10000; //image size (MB)
int ram = 512; //vm memory (MB)
long bw = 1000;
int pesNumber = 1; //number of cpus
String vmm = "Xen"; //VMM name
//create VM
Vm vm1 = new Vm(vmid, brokerId, mips, pesNumber, ram, bw, size, vmm, new
CloudletSchedulerTimeShared());
//add the VM to the vmList
vmlist.add(vm1);
//submit vm list to the broker
broker.submitVmList(vmlist);
//Fifth step: Create one Cloudlet
cloudletList = new ArrayList<Cloudlet>();
//Cloudlet properties
int id = 0;
long length = 40000;
long fileSize = 300;
long outputSize = 300;
UtilizationModel utilizationModel = new UtilizationModelFull();
Cloudlet cloudlet1 = new Cloudlet(id, length, pesNumber, fileSize, outputSize,
utilizationModel, utilizationModel, utilizationModel);
cloudlet1.setUserId(brokerId);
//add the cloudlet to the list
cloudletList.add(cloudlet1);
//submit cloudlet list to the broker
broker.submitCloudletList(cloudletList);
//Sixth step: configure network
//load the network topology file
NetworkTopology.buildNetworkTopology("topology.brite");
//maps CloudSim entities to BRITE entities
//PowerDatacenter will correspond to BRITE node 0
int briteNode=0;
NetworkTopology.mapNode(datacenter0.getId(),briteNode);
//Broker will correspond to BRITE node 3
briteNode=3;
NetworkTopology.mapNode(broker.getId(),briteNode);
// Seventh step: Starts the simulation
CloudSim.startSimulation();
// Final step: Print results when simulation is over


List<Cloudlet> newList = broker.getCloudletReceivedList();


CloudSim.stopSimulation();
printCloudletList(newList);
Log.printLine("NetworkExample1 finished!");
}
catch (Exception e) {
e.printStackTrace();
Log.printLine("The simulation has been terminated due to an unexpected error");
}}
private static Datacenter createDatacenter(String name){
// Here are the steps needed to create a PowerDatacenter:
// 1. We need to create a list to store our machine
List<Host> hostList = new ArrayList<Host>();
// 2. A Machine contains one or more PEs or CPUs/Cores.
// In this example, it will have only one core.
List<Pe> peList = new ArrayList<Pe>();
int mips = 1000;
// 3. Create PEs and add these into a list.
peList.add(new Pe(0, new PeProvisionerSimple(mips))); // need to store Pe id and MIPS Rating
//4. Create Host with its id and list of PEs and add them to the list of machines
int hostId=0;
int ram = 2048; //host memory (MB)
long storage = 1000000; //host storage
int bw = 10000;
hostList.add(
new Host(
hostId,
new RamProvisionerSimple(ram),
new BwProvisionerSimple(bw),
storage,
peList,
new VmSchedulerTimeShared(peList)
)
); // This is our machine
// 5. Create a DatacenterCharacteristics object that stores the properties of a data center:
//architecture, OS, list of Machines, allocation policy: time- or space-shared, time zone
// and its price (G$/Pe time unit).
String arch = "x86"; // system architecture
String os = "Linux"; // operating system
String vmm = "Xen";
double time_zone = 10.0; // time zone this resource located
double cost = 3.0; // the cost of using processing in this resource
double costPerMem = 0.05; // the cost of using memory in this resource
double costPerStorage = 0.001; // the cost of using storage in this resource
double costPerBw = 0.0; // the cost of using bw in this resource


LinkedList<Storage> storageList = new LinkedList<Storage>(); // we are not adding SAN devices by now
DatacenterCharacteristics characteristics = new DatacenterCharacteristics(arch, os, vmm,
hostList, time_zone, cost, costPerMem, costPerStorage, costPerBw);
// 6. Finally, we need to create a PowerDatacenter object.
Datacenter datacenter = null;
try {
datacenter = new Datacenter(name, characteristics, new
VmAllocationPolicySimple(hostList), storageList, 0);
} catch (Exception e) {
e.printStackTrace();
}
return datacenter;
}
// We strongly encourage users to develop their own broker policies, to submit vms and
// cloudlets according to the specific rules of the simulated scenario
private static DatacenterBroker createBroker(){
DatacenterBroker broker = null;
try {
broker = new DatacenterBroker("Broker");
} catch (Exception e) {
e.printStackTrace();
return null;
}
return broker;
}
/**
* Prints the Cloudlet objects
* @param list list of Cloudlets
*/
private static void printCloudletList(List<Cloudlet> list) {
int size = list.size();
Cloudlet cloudlet;
String indent = " ";
Log.printLine();
Log.printLine("========== OUTPUT ==========");
Log.printLine("Cloudlet ID" + indent + "STATUS" + indent + "Data center ID" + indent +
"VM ID" + indent + "Time" + indent + "Start Time" + indent + "Finish Time");
for (int i = 0; i < size; i++) {
cloudlet = list.get(i);
Log.print(indent + cloudlet.getCloudletId() + indent + indent);
if (cloudlet.getCloudletStatus() == Cloudlet.SUCCESS){
Log.print("SUCCESS");
DecimalFormat dft = new DecimalFormat("###.##");


Log.printLine(indent + indent + cloudlet.getResourceId() + indent + indent + indent
+ cloudlet.getVmId() + indent + indent + dft.format(cloudlet.getActualCPUTime())
+ indent + indent + dft.format(cloudlet.getExecStartTime()) + indent + indent
+ dft.format(cloudlet.getFinishTime()));
}}}}

Output:

2. Network Example 2 (CloudSim's NetworkExample3)

Program:
package org.cloudbus.cloudsim.examples.network;
import java.text.DecimalFormat;
import java.util.ArrayList;
import java.util.Calendar;
import java.util.LinkedList;
import java.util.List;
import org.cloudbus.cloudsim.Cloudlet;
import org.cloudbus.cloudsim.CloudletSchedulerTimeShared;
import org.cloudbus.cloudsim.Datacenter;
import org.cloudbus.cloudsim.DatacenterBroker;


import org.cloudbus.cloudsim.DatacenterCharacteristics;
import org.cloudbus.cloudsim.Host;
import org.cloudbus.cloudsim.Log;
import org.cloudbus.cloudsim.NetworkTopology;
import org.cloudbus.cloudsim.Pe;
import org.cloudbus.cloudsim.Storage;
import org.cloudbus.cloudsim.UtilizationModel;
import org.cloudbus.cloudsim.UtilizationModelFull;
import org.cloudbus.cloudsim.Vm;
import org.cloudbus.cloudsim.VmAllocationPolicySimple;
import org.cloudbus.cloudsim.VmSchedulerSpaceShared;
import org.cloudbus.cloudsim.core.CloudSim;
import org.cloudbus.cloudsim.provisioners.BwProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.PeProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.RamProvisionerSimple;
/**
* A simple example showing how to create
* two datacenters with one host each and
* run cloudlets of two users with network
* topology on them.
*/
public class NetworkExample3 {
/** The cloudlet list. */
private static List<Cloudlet> cloudletList1;
private static List<Cloudlet> cloudletList2;
/** The vmlist. */
private static List<Vm> vmlist1;
private static List<Vm> vmlist2;
/**
* Creates main() to run this example
*/
public static void main(String[] args) {
Log.printLine("Starting NetworkExample3...");
try {
// First step: Initialize the CloudSim package. It should be called
// before creating any entities.
int num_user = 2; // number of cloud users
Calendar calendar = Calendar.getInstance();
boolean trace_flag = false; // mean trace events
// Initialize the CloudSim library
CloudSim.init(num_user, calendar, trace_flag);
// Second step: Create Datacenters
// Datacenters are the resource providers in CloudSim. We need at least one
// of them to run a CloudSim simulation
Datacenter datacenter0 = createDatacenter("Datacenter_0");
Datacenter datacenter1 = createDatacenter("Datacenter_1");


//Third step: Create Brokers


DatacenterBroker broker1 = createBroker(1);
int brokerId1 = broker1.getId();
DatacenterBroker broker2 = createBroker(2);
int brokerId2 = broker2.getId();
//Fourth step: Create one virtual machine for each broker/user
vmlist1 = new ArrayList<Vm>();
vmlist2 = new ArrayList<Vm>();
//VM description
int vmid = 0;
long size = 10000; //image size (MB)
int mips = 250;
int ram = 512; //vm memory (MB)
long bw = 1000;
int pesNumber = 1; //number of cpus
String vmm = "Xen"; //VMM name
//create two VMs: the first one belongs to user1
Vm vm1 = new Vm(vmid, brokerId1, mips, pesNumber, ram, bw, size,
vmm, new CloudletSchedulerTimeShared());
//the second VM: this one belongs to user2
Vm vm2 = new Vm(vmid, brokerId2, mips, pesNumber, ram, bw, size,
vmm, new CloudletSchedulerTimeShared());
//add the VMs to the vmlists
vmlist1.add(vm1);
vmlist2.add(vm2);
//submit vm list to the broker
broker1.submitVmList(vmlist1);
broker2.submitVmList(vmlist2);
//Fifth step: Create two Cloudlets
cloudletList1 = new ArrayList<Cloudlet>();
cloudletList2 = new ArrayList<Cloudlet>();
//Cloudlet properties
int id = 0;
long length = 40000;
long fileSize = 300;
long outputSize = 300;
UtilizationModel utilizationModel = new UtilizationModelFull();
Cloudlet cloudlet1 = new Cloudlet(id, length, pesNumber, fileSize,
outputSize, utilizationModel, utilizationModel, utilizationModel);
cloudlet1.setUserId(brokerId1);
Cloudlet cloudlet2 = new Cloudlet(id, length, pesNumber, fileSize,
outputSize, utilizationModel, utilizationModel, utilizationModel);
cloudlet2.setUserId(brokerId2);
//add the cloudlets to the lists: each cloudlet belongs to one user
cloudletList1.add(cloudlet1);
cloudletList2.add(cloudlet2);


//submit cloudlet list to the brokers


broker1.submitCloudletList(cloudletList1);
broker2.submitCloudletList(cloudletList2);
//Sixth step: configure network
//load the network topology file
NetworkTopology.buildNetworkTopology("topology.brite");
//maps CloudSim entities to BRITE entities
//Datacenter0 will correspond to BRITE node 0
int briteNode=0;
NetworkTopology.mapNode(datacenter0.getId(),briteNode);
//Datacenter1 will correspond to BRITE node 2
briteNode=2;
NetworkTopology.mapNode(datacenter1.getId(),briteNode);
//Broker1 will correspond to BRITE node 3
briteNode=3;
NetworkTopology.mapNode(broker1.getId(),briteNode);
//Broker2 will correspond to BRITE node 4
briteNode=4;
NetworkTopology.mapNode(broker2.getId(),briteNode);
// Sixth step: Starts the simulation
CloudSim.startSimulation();
// Final step: Print results when simulation is over
List<Cloudlet> newList1 = broker1.getCloudletReceivedList();
List<Cloudlet> newList2 = broker2.getCloudletReceivedList();
CloudSim.stopSimulation();
Log.print("=============> User "+brokerId1+" ");
printCloudletList(newList1);
Log.print("=============> User "+brokerId2+" ");
printCloudletList(newList2);
Log.printLine("NetworkExample3 finished!");
}
catch (Exception e) {
e.printStackTrace();
Log.printLine("The simulation has been terminated due to an unexpected error");
}
}
private static Datacenter createDatacenter(String name){
// Here are the steps needed to create a PowerDatacenter:
// 1. We need to create a list to store
// our machine
List<Host> hostList = new ArrayList<Host>();
// 2. A Machine contains one or more PEs or CPUs/Cores.
// In this example, it will have only one core.
List<Pe> peList = new ArrayList<Pe>();
int mips = 1000;


// 3. Create PEs and add these into a list.


peList.add(new Pe(0, new PeProvisionerSimple(mips))); // need to store Pe id and MIPS
//Rating
//4. Create Host with its id and list of PEs and add them to the list of machines
int hostId=0;
int ram = 2048; //host memory (MB)
long storage = 1000000; //host storage
int bw = 10000;
// in this example, the VM allocation policy in use is SpaceShared. It means that only one VM
// is allowed to run on each Pe. As each Host has only one Pe, only one VM can run on each Host.
hostList.add( new Host( hostId, new RamProvisionerSimple(ram), new
BwProvisionerSimple(bw), storage, peList, new VmSchedulerSpaceShared(peList) ) );
// This is our machine
// 5. Create a DatacenterCharacteristics object that stores the
// properties of a data center: architecture, OS, list of
// Machines, allocation policy: time- or space-shared, time zone
// and its price (G$/Pe time unit).
String arch = "x86"; // system architecture
String os = "Linux"; // operating system
String vmm = "Xen";
double time_zone = 10.0; // time zone this resource located
double cost = 3.0; // the cost of using processing in this resource
double costPerMem = 0.05; // the cost of using memory in this resource
double costPerStorage = 0.001; // the cost of using storage in this resource
double costPerBw = 0.0; // the cost of using bw in this resource
LinkedList<Storage> storageList = new LinkedList<Storage>(); //we are not adding SAN
// devices by now
DatacenterCharacteristics characteristics = new DatacenterCharacteristics( arch, os, vmm,
hostList, time_zone, cost, costPerMem, costPerStorage, costPerBw);
// 6. Finally, we need to create a PowerDatacenter object.
Datacenter datacenter = null;
try {
datacenter = new Datacenter(name, characteristics, new
VmAllocationPolicySimple(hostList), storageList, 0);
} catch (Exception e) {
e.printStackTrace();
}
return datacenter;
}
// We strongly encourage users to develop their own broker policies, to submit vms and
// cloudlets according to the specific rules of the simulated scenario
private static DatacenterBroker createBroker(int id){
DatacenterBroker broker = null;


try {
broker = new DatacenterBroker("Broker"+id);
} catch (Exception e) {
e.printStackTrace();
return null;
}
return broker;
}
/**
* Prints the Cloudlet objects
* @param list list of Cloudlets
*/
private static void printCloudletList(List<Cloudlet> list) {
int size = list.size();
Cloudlet cloudlet;
String indent = " ";
Log.printLine();
Log.printLine("========== OUTPUT ==========");
Log.printLine("Cloudlet ID" + indent + "STATUS" + indent +
"Data center ID" + indent + "VM ID" + indent + "Time" + indent +
"Start Time" + indent + "Finish Time");
for (int i = 0; i < size; i++) {
cloudlet = list.get(i);
Log.print(indent + cloudlet.getCloudletId() + indent + indent);
if (cloudlet.getCloudletStatus() == Cloudlet.SUCCESS){
Log.print("SUCCESS");
DecimalFormat dft = new DecimalFormat("###.##");
Log.printLine( indent + indent + cloudlet.getResourceId() + indent + indent + indent +
cloudlet.getVmId() + indent + indent + dft.format(cloudlet.getActualCPUTime()) + indent
+ indent + dft.format(cloudlet.getExecStartTime())+ indent + indent +
dft.format(cloudlet.getFinishTime()));
}}}}

Output:


Conclusion:
From this practical, we learnt how to write CloudSim networking code in NetBeans.

Aim: Write a program in CloudSim using the NetBeans IDE/Eclipse to create one datacenter with
four hosts, one datacenter broker, 40 cloudlets, and 10 virtual machines.
Hardware Requirement: 4GB RAM, 500GB HDD, CPU
Software Requirement: Netbeans IDE/ Eclipse

Program:
import java.text.DecimalFormat;
import java.util.ArrayList;
import java.util.Calendar;
import java.util.LinkedList;
import java.util.List;
import org.cloudbus.cloudsim.Cloudlet;
import org.cloudbus.cloudsim.CloudletSchedulerSpaceShared;
import org.cloudbus.cloudsim.CloudletSchedulerTimeShared;
import org.cloudbus.cloudsim.Datacenter;
import org.cloudbus.cloudsim.DatacenterBroker;
import org.cloudbus.cloudsim.DatacenterCharacteristics;
import org.cloudbus.cloudsim.Host;
import org.cloudbus.cloudsim.Log;
import org.cloudbus.cloudsim.Pe;
import org.cloudbus.cloudsim.Storage;
import org.cloudbus.cloudsim.UtilizationModel;
import org.cloudbus.cloudsim.UtilizationModelFull;
import org.cloudbus.cloudsim.Vm;
import org.cloudbus.cloudsim.VmAllocationPolicySimple;
import org.cloudbus.cloudsim.VmSchedulerSpaceShared;
import org.cloudbus.cloudsim.VmSchedulerTimeShared;
import org.cloudbus.cloudsim.core.CloudSim;
import org.cloudbus.cloudsim.provisioners.BwProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.PeProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.RamProvisionerSimple;

public class CloudSimSimulationProgram {


/** The cloudlet list. */
private static List<Cloudlet> cloudletList;

/** The vmlist. */


private static List<Vm> vmlist;
public static void main(String[] args) {


Log.printLine("Starting CloudSimExample1...");

// First step: Initialize the CloudSim package. It should be called


// before creating any entities.
int num_user = 40; // number of cloud users
Calendar calendar = Calendar.getInstance();
boolean trace_flag = false; // mean trace events

// Initialize the CloudSim library


CloudSim.init(num_user, calendar, trace_flag);

// Second step: Create Datacenters


// Datacenters are the resource providers in CloudSim. We need at
// least one of them to run a CloudSim simulation
Datacenter datacenter0 = createDatacenter("Datacenter_0");

DatacenterBroker broker = createBroker();


int brokerId = broker.getId();

// Fifth step: Create one Cloudlet


List<Cloudlet> cloudletList = new ArrayList<Cloudlet>();
// Cloudlet properties
int id = 0;
long length = 40000;
int pesNumber=1;
long fileSize = 300;
long outputSize = 400;
UtilizationModel utilizationModel = new UtilizationModelFull();
for(id = 0;id<40;id++) {
Cloudlet cloudlet = new Cloudlet(id, length, pesNumber, fileSize,
outputSize, utilizationModel, utilizationModel, utilizationModel);


cloudlet.setUserId(brokerId);
cloudletList.add(cloudlet);
}
// Fourth step: Create one virtual machine
vmlist = new ArrayList<Vm>();
// VM description
int vmid;
int mips = 1000;
long size = 20000; // image size (MB)
int ram = 2048; // vm memory (MB)
long bw = 1000;
int vCPU = 1; // number of cpus
String vmm = "Xen"; // VMM name
for (vmid = 0; vmid < 10; vmid++) { // create the 10 VMs required by the aim
// create VM
Vm vm = new Vm(vmid, brokerId, mips, vCPU, ram, bw, size, vmm,
new CloudletSchedulerSpaceShared());

// add the VM to the vmList


vmlist.add(vm);
}
// submit cloudlet and vm lists to the broker
broker.submitCloudletList(cloudletList);
broker.submitVmList(vmlist);

// Sixth step: Starts the simulation


CloudSim.startSimulation();
List<Cloudlet> newList = broker.getCloudletReceivedList();
CloudSim.stopSimulation();
printCloudletList(newList);
}

private static Datacenter createDatacenter(String name) {


// 2. A Machine contains one or more PEs or CPUs/Cores.
// In this example, it will have only one core.
List<Pe> peList = new ArrayList<Pe>();
int mips = 1000;
for(int j=0;j<4;j++) {
// 3. Create PEs and add these into a list.
peList.add(new Pe(j, new PeProvisionerSimple(mips))); // need to store Pe id and MIPS Rating
}
// Here are the steps needed to create a PowerDatacenter:
// 1. We need to create a list to store our machine
List<Host> hostList = new ArrayList<Host>();

// 4. Create Host with its id and list of PEs and add them to the list
// of machines
int hostId ;
int ram = 8000; // host memory (MB)
long storage = 100000; // host storage
int bw = 8000;
for(hostId=0;hostId<4;hostId++) {
hostList.add(new Host(hostId,new RamProvisionerSimple(ram),new
BwProvisionerSimple(bw),storage,peList,
new VmSchedulerSpaceShared(peList)
)
);
}// This is our machine


// 5. Create a DatacenterCharacteristics object that stores the


// properties of a data center: architecture, OS, list of
// Machines, allocation policy: time- or space-shared, time zone
// and its price (G$/Pe time unit).
String arch = "x86"; // system architecture
String os = "Linux"; // operating system
String vmm = "Xen";
double time_zone = 10.0; // time zone this resource located
double cost = 3.0; // the cost of using processing in this resource
double costPerMem = 0.05; // the cost of using memory in this resource
double costPerStorage = 0.001; // the cost of using storage in this resource
double costPerBw = 0.0; // the cost of using bw in this resource
LinkedList<Storage> storageList = new LinkedList<Storage>(); // we are not adding SAN devices by now
DatacenterCharacteristics characteristics = new DatacenterCharacteristics(
arch, os, vmm, hostList, time_zone, cost, costPerMem,
costPerStorage, costPerBw);

// 6. Finally, we need to create a PowerDatacenter object.


Datacenter datacenter = null;
try {
datacenter = new Datacenter(name, characteristics, new
VmAllocationPolicySimple(hostList), storageList, 0);
} catch (Exception e) {
e.printStackTrace();
}
return datacenter;
}


// We strongly encourage users to develop their own broker policies, to


// submit vms and cloudlets according
// to the specific rules of the simulated scenario
/**
* Creates the broker.
*
* @return the datacenter broker
*/
private static DatacenterBroker createBroker() {
DatacenterBroker broker = null;
try {
broker = new DatacenterBroker("Broker");
} catch (Exception e) {
e.printStackTrace();
return null;
}
return broker;
}
private static void printCloudletList(List<Cloudlet> list) {
int size = list.size();
Cloudlet cloudlet;
String indent = " ";
Log.printLine();
Log.printLine("========== OUTPUT ==========");
Log.printLine("Cloudlet ID" + indent + "STATUS" + indent
+ "Data center ID" + indent + "VM ID" + indent + "Time" + indent
+ "Start Time" + indent + "Finish Time");

DecimalFormat dft = new DecimalFormat("###.##");


for (int i = 0; i < size; i++) {


cloudlet = list.get(i);
Log.print(indent + cloudlet.getCloudletId() + indent + indent);

if (cloudlet.getCloudletStatus() == Cloudlet.SUCCESS) {
Log.print("SUCCESS");
Log.printLine(indent + indent + cloudlet.getResourceId()
+ indent + indent + indent + cloudlet.getVmId() + indent + indent
+ dft.format(cloudlet.getActualCPUTime()) + indent+ indent
+dft.format(cloudlet.getExecStartTime()) + indent + indent +
dft.format(cloudlet.getFinishTime()));
}
}
}
}

Output:


Practical 5
Aim: Create a scenario in Aneka / Eucalyptus to create a datacenter and host. Also
create virtual machines with static configuration to run cloudlets on them.
Hardware Requirement: 4GB RAM, 500GB HDD, CPU
Software Requirement: Linux OS

1. Click on the Launch Instance button on the main console page:


2. Select an image from the list (for this example, we'll select a CentOS image), then
click the Next button:


3. Select an instance type and availability zone from the Details tab. For this example,
select the defaults, and then click the Next button:

4. On the Security tab, we'll create a key pair and a security group to use with our new
instance. A key pair will allow you to access your instance, and a security group
allows you to define what kinds of incoming traffic your instance will allow.


a. First, we will create a key pair. Click the Create key pair link to bring up
the Create key pair dialog:


b. Type the name of your new key pair into the Name text box, and then click
the Create and Download button:

The key pair automatically downloads to a location on your computer, typically in the Downloads folder.


c. The Create key pair dialog will close, and the Key name text box will be
populated with the name of the key pair you just created:

d. Next, we will create a security group. Click the Create security group link to
bring up the Create security group dialog:


e. On the Create security group dialog, type the name of your security group into
the Name text box.

f. Type a brief description of your security group into the Description text box.

g. We'll need to SSH into our instance later, so in the Rules section of the dialog, select the
SSH protocol from the Protocol drop-down list box.

h. Note: In this example, we allow any IP address to access our new instance. For production
use, please use appropriate caution when specifying an IP range. For more information, see the
documentation on CIDR notation.

You need to specify an IP address or a range of IP addresses that can use SSH to access
your instance. For this example, click the Open to all addresses link. This will populate
the IP Address text box with 0.0.0.0/0, which allows any IP address to access your instance
via SSH.


i. Click the Add rule button. The Create security group dialog should now
look something like this:


j. Click the Create security group button.

The Create security group dialog will close, and the Security group text box
will be populated with the name of the security group you just created:


5. You're now ready to launch your new instance. Click the Launch Instance button.

The Launch Instance dialog will close, and the Instances screen will display. The
instance you just created will display at the top of the list with a status of Pending:

6. When the status of your new instance changes to Running, click the instance in the list
to bring up a page showing details of your instance. For example:


7. Note the Public IP address and/or the Public hostname fields. You will need this
information to connect to your new instance. For example:


8. Using the public IP address or hostname of your new instance, you can now use SSH
to log into the instance using the private key file you saved when you created a key
pair. For example:

ssh -i my-test-keypair.private root@10.111.57.109


Practical 6

Aim: Create a Beowulf Cluster and explain its applications in detail.


Hardware Requirement: 4GB RAM, 500GB HDD, CPU
Software Requirement: Linux OS

Theory:

A Beowulf cluster is a group of what are normally identical, commercially available computers,
which are running a Free and Open Source Software (FOSS), Unix-like operating system, such
as BSD, GNU/Linux, or Solaris. They are networked into a small TCP/IP LAN, and have
libraries and programs installed which allow processing to be shared among them.

According to the book Engineering a Beowulf-style Compute Cluster (Brown, 2004), there is an
accepted definition of a Beowulf cluster. This book describes the true Beowulf as a cluster of
computers interconnected with a network with the following characteristics:

1. The nodes are dedicated to the Beowulf cluster.


2. The network on which the nodes reside is dedicated to the Beowulf cluster.
3. The nodes are Mass Market Commercial-Off-The-Shelf (M2COTS) computers.
4. The network is also a COTS entity.
5. The nodes all run open source software.
6. The resulting cluster is used for High Performance Computing (HPC).

Steps to create Beowulf Cluster

1. To get edit rights on all the Linux machines, we will log in as the root user by applying the
appropriate command on both master and slave.

2. For creating any cluster, we’ll set SSI (Single System Image) on both master and slave
machines.

3. Now, we’ll check whether our machine is connected to internet or not by applying
command

sudo apt-get update

We'll also perform this step by going to Machine -> Settings -> Network -> secure connection in VirtualBox.


4. After checking internet connectivity, we'll check for SSH (Secure Shell) on both the master
and slave machines; if it is not installed, we'll install it manually.

5. Now, we'll establish NFS (Network File System) on the master by the following step, and on the
slaves by means of the following command:

sudo apt-get install nfs-common

6. After connecting to the network and following the above steps, we'll assign IP addresses to all
the machines connected to the cluster.

We can do this by opening a window from machine-> Edit connection-> Set IPv4 address for
both master and slave.

Now, to check whether the cluster network works, we can ping the assigned IP addresses.


7. Now, we'll add all the connected nodes of the cluster into the hosts file (/etc/hosts) of each
machine. To check whether this step completed successfully, we can ping a particular machine
by the hostname entered in the hosts file.


8. Now, on the master node, we'll install MPICH and create an mpiuser account so that all the
other slave nodes of the cluster can be added to it:

sudo apt-get install mpich
sudo adduser mpiuser

So now all the slave nodes are added to the master.

9. At last, to communicate with the slaves, the master will generate an SSH key.

Conclusion:
From this practical, we learnt how to create a Beowulf cluster and what its applications are.


Practical 7
Aim: A report on the CHARUSAT datacenter visit.

CHARUSAT Cloud
Need
1. Resource Utilization
2. Increase Server Efficiency
3. Improve Disaster Recovery Efforts
4. Increase Business Continuity
Real implementation
HP C7000 Blade Centre housing 7 blades, each blade comprising:
• 2 Xeon hexa-core (2.66 GHz) CPUs
• 48 GB ECC RAM
• 300 GB internal hard disk
• All of these linked with an HP SAN (Storage Area Network) box with a capacity of 20 TB
Six redundant power supplies
Redundant backbone fibre controller card
Redundant IO controller
LTO-5 autoloader for data backup
10 KVA online UPS with 5 hours of battery backup
Migration
Migration is a technology used for load balancing and optimization of VM deployment in
data centers. With the help of live migration, VMs can be transferred to another node
without shutting down.
Pre-copy: In this approach, memory is transferred first and execution is transferred afterwards.
The pre-copy method transfers the memory to the destination node over a number of iterations.
Post-copy: In this approach, execution is transferred first and memory is transferred afterwards.
Unlike pre-copy, in post-copy the virtual CPU and device state is transferred to the destination
node in the first step, and execution starts there in the second step.
The following metrics are used to measure the performance of migration (an illustrative sketch
estimating two of them for pre-copy migration is given after this list):
i) Preparation: the phase in which resources are reserved on the destination, which performs
various setup operations.
ii) Downtime: the time during which the VM on the source host is suspended.
iii) Resume: the instantiation of the VM on the destination, but with the same state as the
suspended source.
iv) Total time: the total time taken to complete all these phases is called the total migration time.
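The small sketch below makes the downtime and total-time metrics concrete by estimating them for pre-copy migration under simple, assumed values (VM memory size, constant page-dirtying rate, constant bandwidth, fixed number of copy rounds). The model and the numbers are illustrative only; they are not measurements from the CHARUSAT datacenter.

public class PreCopyEstimate {

    public static void main(String[] args) {
        // Assumed, illustrative values (not measured).
        double memoryMB = 4096;      // VM memory size
        double bandwidthMBps = 125;  // network bandwidth (~1 Gbps)
        double dirtyRateMBps = 25;   // rate at which memory pages are dirtied
        int rounds = 5;              // pre-copy iterations before stop-and-copy

        double copyTime = 0;
        double toSend = memoryMB;    // round 1 transfers the whole memory
        for (int i = 0; i < rounds; i++) {
            double roundTime = toSend / bandwidthMBps;
            copyTime += roundTime;
            // pages dirtied during this round must be re-sent in the next one
            toSend = dirtyRateMBps * roundTime;
        }

        // Downtime: the VM is suspended while the remaining dirty pages are copied.
        double downtime = toSend / bandwidthMBps;
        double totalMigrationTime = copyTime + downtime;

        System.out.printf("Estimated downtime: %.2f s%n", downtime);
        System.out.printf("Estimated total migration time: %.2f s%n", totalMigrationTime);
    }
}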
Virtualization
Figure 1 Virtualization
Virtualization essentially means to create multiple, logical instances of software or
hardware on a single physical hardware resource. This technique simulates the available
hardware and gives every application running on top of it, the feel that it is the unique
holder of the resource. The details of the virtual, simulated environment are kept


transparent from the application.


The advantage here is the reduced cost of maintenance and reduced energy wastage.
Private cloud
Private clouds are data center architectures owned by a single company that provide
flexibility, scalability, provisioning, automation and monitoring. The goal of a private
cloud is not to sell "as-a-service" offerings to external customers but instead to gain the
benefits of cloud architecture without giving up the control of maintaining your own data
center.
Private clouds can be expensive with typically modest economies of scale. This is usually
not an option for the average Small-to-Medium sized business and is most typically put to
use by large enterprises. Private clouds are driven by concerns around security and
compliance, and keeping assets within the firewall.
Public cloud
A public cloud is delivered over the internet. Service providers use the internet to make resources, such as applications (also known as Software-as-a-Service) and storage, available to the general public on a 'public cloud'. Examples of public clouds include Amazon Elastic Compute Cloud (EC2), IBM's Blue Cloud, Sun Cloud, Google App Engine and the Windows Azure Services Platform.
For users, these types of clouds provide the best economies of scale and are inexpensive to set up, because the hardware, application and bandwidth costs are covered by the provider. It's a pay-per-usage model, and the only costs incurred are based on the capacity that is used.
At CHARUSAT cloud:
All Institutes and departmental level server applications and campus e-governance
facilities are being migrated to central cloud ready environment at WINCell through
Private cloud.
Other Moti27 organizations, like Matrusanstha located at Piplag, Kelavani Mandal located at Anand and the Hostel located at Vidhyanagar, are centrally connected through the Public cloud, and hence their server-level applications and other IT requirements are facilitated centrally.
All of this IT setup is managed through the vCenter application at WINCell.
Three types of deployment models are used at CHARUSAT.
1) Private Cloud:
E-governance is used only on the campus by students, teachers and staff.
The range of the private cloud is limited to the campus.
2) Public Cloud:
CHARUSAT has many branches outside the campus.
These are connected via BSNL broadband to the CHARUSAT website and access the data on the cloud.
3) Hybrid Cloud:
Every student & staff has their charusat E-mail ID for unique identification.
By the use of Google service we can create hybrid cloud.
Example: 15pgce@charusat.edu.in
NOTE: powered by google
That note denote that service is provided by google.


SAN
A storage-area network (SAN) is a dedicated high-speed network (or sub network) that
interconnects and presents shared pools of storage devices to multiple servers.
A SAN moves storage resources off the common user network and reorganizes them into
an independent, high-performance network. This allows each server to access shared
storage as if it were a drive directly attached to the server. When a host wants to access a
storage device on the SAN, it sends out a block-based access request for the storage device.


Practical 8
Aim: Create a sample mobile application using a Microsoft Azure account as a cloud service, and provide database connectivity for the implemented mobile application.
Create a sample mobile application using an Amazon Web Services (AWS) account as a cloud service, and provide database connectivity for the implemented mobile application.

Part 1:

1. Click Create a new Project in the AWS Mobile Hub console. Provide a name for your
project.
2. Click NoSQL Database.
3. Click Enable NoSQL.
4. Click Add a new table.
5. Click Start with an example schema.
6. Select Notes as the example schema.
7. Select Public for the permissions (we won’t add sign-in code to this app).
8. Click Create Table, and then click Create Table in the dialog box.

Even though the table you created has a userID, the data is stored unauthenticated in this
example. If you were to use this app in production, use Amazon Cognito to sign the user in to
your app, and then use the userID to store the authenticated data.

In the left navigation pane, click Resources. AWS Mobile Hub created an Amazon Cognito
identity pool, an IAM role, and a DynamoDB database for your project. Mobile Hub also
linked the three resources according to the permissions you selected when you created the
table. For this demo, you need the following information:

 The Amazon Cognito identity pool ID (for example, us-east-1:f9d582af-51f9-4db3-8e36-7bdf25f4ee07)
 The name of the Notes table (for example, androidmemoapp-mobilehub-1932532734-Notes)

These are stored in the application code and used when connecting to the database.

Now that you have a mobile app backend, it’s time to look at the frontend. We have a memo
app that you can download from GitHub. First, add the required libraries to the dependencies
section of the application build.gradle file:

compile 'com.amazonaws:aws-android-sdk-core:2.4.4'
compile 'com.amazonaws:aws-android-sdk-ddb:2.4.4'
compile 'com.amazonaws:aws-android-sdk-ddb-document:2.4.4'

Add the INTERNET, ACCESS_NETWORK_STATE, and ACCESS_WIFI_STATE permissions to AndroidManifest.xml:

<uses-permission android:name="android.permission.INTERNET" />

<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />

<uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />

Set up the connection to the Amazon DynamoDB table by creating an Amazon Cognito
credentials provider (for appropriate access permissions), and then creating
a DynamoDbClient object. Finally, create a table reference:

// Create a new credentials provider

credentialsProvider = new CognitoCachingCredentialsProvider(

context, COGNITO_POOL_ID, COGNITO_REGION);

// Create a connection to DynamoDB

dbClient = new AmazonDynamoDBClient(credentialsProvider);

// Create a table reference

dbTable = Table.loadTable(dbClient, DYNAMODB_TABLE);

You can now perform CRUD (Create, Read, Update, Delete) operations on the table:

/**
 * Create a new memo in the database
 * @param memo the memo to create
 */
public void create(Document memo) {
    if (!memo.containsKey("userId")) {
        memo.put("userId", credentialsProvider.getCachedIdentityId());
    }
    if (!memo.containsKey("noteId")) {
        memo.put("noteId", UUID.randomUUID().toString());
    }
    if (!memo.containsKey("creationDate")) {
        memo.put("creationDate", System.currentTimeMillis());
    }
    dbTable.putItem(memo);
}

/**
 * Update an existing memo in the database
 * @param memo the memo to save
 */
public void update(Document memo) {
    Document document = dbTable.updateItem(memo,
            new UpdateItemOperationConfig().withReturnValues(ReturnValue.ALL_NEW));
}

/**
 * Delete an existing memo in the database
 * @param memo the memo to delete
 */
public void delete(Document memo) {
    dbTable.deleteItem(
            memo.get("userId").asPrimitive(),   // the partition (hash) key
            memo.get("noteId").asPrimitive());  // the sort (range) key
}

/**
 * Retrieve a memo by noteId from the database
 * @param noteId the ID of the note
 * @return the related document
 */
public Document getMemoById(String noteId) {
    return dbTable.getItem(
            new Primitive(credentialsProvider.getCachedIdentityId()),
            new Primitive(noteId));
}

/**
 * Retrieve all the memos from the database
 * @return the list of memos
 */
public List<Document> getAllMemos() {
    return dbTable.query(
            new Primitive(credentialsProvider.getCachedIdentityId())).getAllResults();
}

There are two mechanisms for searching the dataset: scan and query. The query() method
uses indexed fields within the DynamoDB table to rapidly retrieve the appropriate
information. The scan() method is more flexible. It allows you to search on every field, but it
can run into performance issues when searching large amounts of data. This results in a worse
experience for your users because data retrieval will be slower. For the best experience, index
fields that you intend to search often and use the query() method.

The Notes schema in DynamoDB usually segments data on a per-user basis. The app works
with both authenticated and unauthenticated users by using
the .getCachedIdentityId() method. This method stores the current user identity with every
new note that is created.

Android does not allow you to perform network requests on the main UI thread. You must
wrap each operation in an AsyncTask. For example:


/**
 * Async Task to create a new memo in the DynamoDB table
 */
private class CreateItemAsyncTask extends AsyncTask<Document, Void, Void> {

    @Override
    protected Void doInBackground(Document... documents) {
        DatabaseAccess databaseAccess = DatabaseAccess.getInstance(EditActivity.this);
        databaseAccess.create(documents[0]);
        return null;
    }
}

You can initiate a save operation by instantiating the appropriate AsyncTask and then calling
.execute():

/**
 * Event handler called when the Save button is clicked
 * @param view the initiating view
 */
public void onSaveClicked(View view) {
    if (memo == null) {
        Document newMemo = new Document();
        newMemo.put("content", etText.getText().toString());
        CreateItemAsyncTask task = new CreateItemAsyncTask();
        task.execute(newMemo);
    } else {
        memo.put("content", etText.getText().toString());
        UpdateItemAsyncTask task = new UpdateItemAsyncTask();
        task.execute(memo);
    }
    // Finish this activity and return to the prior activity
    this.finish();
}

Similarly, you can retrieve a list of memos on an AsyncTask and pass the memos back to a
method in the MainActivity to populate the UI:

/**
 * Async Task for handling the network retrieval of all the memos in DynamoDB
 */
private class GetAllItemsAsyncTask extends AsyncTask<Void, Void, List<Document>> {

    @Override
    protected List<Document> doInBackground(Void... params) {
        DatabaseAccess databaseAccess = DatabaseAccess.getInstance(MainActivity.this);
        return databaseAccess.getAllMemos();
    }

    @Override
    protected void onPostExecute(List<Document> documents) {
        if (documents != null) {
            populateMemoList(documents);
        }
    }
}

You can find this sample in our GitHub repository. To run the sample:

1. Open the project in Android Studio.


2. Open the DatabaseAccess.java file.
3. Replace the temporary string for COGNITO_POOL_ID with your Amazon Cognito
Identity Pool ID.
4. Replace the temporary string for DYNAMODB_TABLE with your Amazon
DynamoDB table name.
5. If necessary, replace the COGNITO_REGION with the region for your Amazon
Cognito identity pool ID.
6. Run the app on a real device or the Android emulator.

If you are successful, the app looks like the following:


Part 2:

1. Sign in to the Azure portal.


2. Click Create a resource.
3. In the search box, type Web App.
4. In the results list, select Web App from the Marketplace.
5. Select your Subscription and Resource Group (select an existing resource
group or create a new one (using the same name as your app)).
6. Choose a unique Name of your web app.
7. Choose the default Publish option as Code.
8. In the Runtime stack, you need to select a version under ASP.NET or Node. If you are building a .NET backend, select a version under ASP.NET. Otherwise, if you are targeting a Node-based application, select one of the versions under Node.
9. Select the right Operating System, either Linux or Windows.
10. Select the Region where you would like this app to be deployed.
11. Select the appropriate App Service Plan and hit Review and create.
12. Under Resource Group, select an existing resource group or create a new one
(using the same name as your app).
13. Click Create. Wait a few minutes for the service to be deployed successfully
before proceeding. Watch the Notifications (bell) icon in the portal header for
status updates.
14. Once the deployment is completed, click on the Deployment details section
and then click on the Resource of Type Microsoft.Web/sites. It will navigate
you to the App Service Web App that you just created.
15. Click on the Configuration blade under Settings and in the Application
settings, click on the New application setting button.


16. In the Add/Edit application setting page, enter Name as MobileAppsManagement_EXTENSION_VERSION and Value as latest, and hit OK.

You are all set to use this newly created App Service Web app as a Mobile app.

Create a database connection and configure the client and server project

1. Download the client SDK quickstarts for the following platforms:

iOS (Objective-C)
iOS (Swift)
Android (Java)
Xamarin.iOS
Xamarin.Android
Xamarin.Forms
Cordova
Windows (C#)

Note

If you use the iOS project you need to download "azuresdk-iOS-*.zip" from the latest GitHub release. Unzip it and add the MicrosoftAzureMobile.framework file to the project's root.

2. You will have to add a database connection or connect to an existing connection. First, determine whether you'll create a data store or use an existing one.
o Create a new data store: If you’re going to create a data store, use the
following quickstart:

Quickstart: Getting started with single databases in Azure SQL Database

o Existing data source: Follow the instructions below if you want to use an
existing database connection
1. SQL Database connection string format:
   Data Source=tcp:{your_SQLServer},{port};Initial Catalog={your_catalogue};User ID={your_username};Password={your_password}

   {your_SQLServer} - name of the server; this can be found on the overview page for your database and is usually in the form "server_name.database.windows.net".
   {port} - usually 1433.
   {your_catalogue} - name of the database.
   {your_username} - user name to access your database.
   {your_password} - password to access your database.

Learn more about SQL Connection String format
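For example, with purely illustrative placeholder values (the server name, database, user and password below are hypothetical, not real credentials), a finished connection string would look like:

Data Source=tcp:myserver.database.windows.net,1433;Initial Catalog=mymobiledb;User ID=dbadmin;Password=MyS3cretPassw0rd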

2. Add the connection string to your mobile app. In App Service, you can manage connection strings for your application by using the Configuration option in the menu.

To add a connection string:

1. Click on the Application settings tab.


2. Click on [+] New connection string.
3. You will need to provide Name, Value and Type for your connection
string.
4. Type the Name as MS_TableConnectionString.
5. The Value should be the connection string you formed in the step before.
6. If you are adding a connection string to a SQL Azure database, choose SQLAzure under Type.
3. Azure Mobile Apps has SDKs for .NET and Node.js backends.
o Node.js backend

If you're going to use the Node.js quickstart app, follow the instructions below.

1. In the Azure portal, go to Easy Tables, you will see this screen.


2. Make sure the SQL connection string is already added in the Configuration tab. Then check the box I acknowledge that this will overwrite all site contents and click the Create TodoItem table button.


3. In Easy Tables, click the + Add button.


4. Create a TodoItem table with anonymous access.


o .NET backend

If you're going to use the .NET quickstart app, follow the instructions below.

1. Download the Azure Mobile Apps .NET server project from the azure-
mobile-apps-quickstarts repository.
2. Build the .NET server project locally in Visual Studio.
3. In Visual Studio, open Solution Explorer, right-click
on ZUMOAPPNAMEService project, click Publish, you will see a Publish to
App Service window. If you are working on Mac, check out other ways
to deploy the app here.

4. Select App Service as publish target, then click Select Existing, then
click the Publish button at the bottom of the window.
5. You will need to log into Visual Studio with your Azure subscription first.
Select the Subscription, Resource Group, and then select the name of your
app. When you are ready, click OK, this will deploy the .NET server
project that you have locally into the App Service backend. When
deployment finishes, you will be redirected
to http://{zumoappname}.azurewebsites.net/ in the browser.

Run the Android app


1. Open the project using Android Studio, using Import project (Eclipse ADT,
Gradle, etc.). Make sure you make this import selection to avoid any JDK errors.
2. Open the file ToDoActivity.java in this folder -
ZUMOAPPNAME/app/src/main/java/com/example/zumoappname. The
application name is ZUMOAPPNAME.
3. Go to the Azure portal and navigate to the mobile app that you created. On
the Overview blade, look for the URL which is the public endpoint for your mobile
app. Example - the sitename for my app name "test123" will
be https://test123.azurewebsites.net.
4. In onCreate() method, replace ZUMOAPPURL parameter with public endpoint above.

new MobileServiceClient("ZUMOAPPURL", this).withFilter(new


ProgressFilter());

becomes

new MobileServiceClient("https://test123.azurewebsites.net",
this).withFilter(new ProgressFilter());

5. Press the Run 'app' button to build the project and start the app in the Android
simulator.
6. In the app, type meaningful text, such as Complete the tutorial and then click the
'Add' button. This sends a POST request to the Azure backend you deployed
earlier. The backend inserts data from the request into the TodoItem SQL table,
and returns information about the newly stored items back to the mobile app.
The mobile app displays this data in the list.


Practical 9
Aim: To find city wise maximum temperature of cities using Hadoop Cluster and Map
Reduce.
Nowadays, data processing and computing have become crucial technologies in enterprises and in critical business decision making. Suppose the Government of India has placed temperature sensors as a part of the Digital India / smart city project. These temperature sensors collect data on an hourly basis and send it to a server for storage and further processing. Hence these sensors generate huge amounts of structured data. Now suppose a researcher or the government wants to find the maximum temperature of cities over a given duration. Here, traditional single-node systems fall short because of both the huge data volume and the heavy computation involved. In this case, data processing and computing can be implemented in a distributed as well as a parallel manner using the popular Hadoop and MapReduce model. This provides cost effectiveness as well as better performance within the given time constraint.
Hardware Requirements: 4GB RAM, 500GB HDD, CPU
Software Requirements: Java 6 JDK, Hadoop requires a working Java 1.5+ (aka Java 5)
installation.

Steps to configure Hadoop cluster


Running Hadoop on Ubuntu (Single node cluster setup)
The report here will describe the required steps for setting up a single-node Hadoop cluster
backed by the Hadoop
Distributed File System, running on Ubuntu Linux. Hadoop is a framework written in Java for
running applications
on large clusters of commodity hardware and incorporates features similar to those of the
Google File System
(GFS) and of the MapReduce computing paradigm. Hadoop’s HDFS is a highly fault-tolerant
distributed file system
and, like Hadoop in general, designed to be deployed on low-cost hardware. It provides high
throughput access to
application data and is suitable for applications that have large data sets.
• DataNode: A DataNode stores data in the Hadoop File System. A functional file system has more than one DataNode, with the data replicated across them.
• NameNode: The NameNode is the centrepiece of an HDFS file system. It keeps the directory of all files in the file system and tracks where across the cluster the file data is kept. It does not store the data of these files itself.
• Jobtracker: The Jobtracker is the service within Hadoop that farms out MapReduce tasks to specific nodes in the cluster, ideally the nodes that have the data, or at least nodes in the same rack.
• TaskTracker: A TaskTracker is a node in the cluster that accepts tasks (Map, Reduce and Shuffle operations) from a Jobtracker.
• Secondary Namenode: The Secondary Namenode's whole purpose is to keep a checkpoint of HDFS metadata. It is just a helper node for the NameNode.


Update the source list
user@ubuntu:~$ sudo apt-get update

check whether java JDK is correctly installed or not, with the following command
user@ubuntu:~$ java -version
Adding a dedicated Hadoop system user
We will use a dedicated Hadoop user account for running Hadoop.

user@ubuntu:~$ sudo addgroup hadoop_group


user@ubuntu:~$ sudo adduser --ingroup hadoop_group hduser1


This will add the user hduser1 and the group hadoop_group to the local machine. Add hduser1 to
the sudo group


user@ubuntu:~$ sudo adduser hduser1 sudo


Configuring SSH
The Hadoop control scripts rely on SSH to perform cluster-wide operations. For example, there is a script for stopping and starting all the daemons in the cluster. To work seamlessly, SSH needs to be set up to allow password-less login for the Hadoop user from machines in the cluster. The simplest way to achieve this is to generate a public/private key pair, which will be shared across the cluster.
Hadoop requires SSH access to manage its nodes, i.e. remote machines plus your local machine. For our single-node setup of Hadoop, we therefore need to configure SSH access to localhost for the hduser user we created earlier.
We have to generate an SSH key for the hduser user.
user@ubuntu:~$ su - hduser1
hduser1@ubuntu:~$ ssh-keygen -t rsa -P ""

The second command will create an RSA key pair with an empty password.
Note:
-P "" here indicates an empty password.
You have to enable SSH access to your local machine with this newly created key which is done
by the following command.
hduser1@ubuntu:~$ cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys


The final step is to test the SSH setup by connecting to the local machine with the hduser1 user.
The step is also needed to save your local machine’s host key fingerprint to the hduser user’s
known hosts file.
hduser@ubuntu:~$ ssh localhost

If the SSH connection fails, we can try the following (optional):


-Enable debugging with ssh -vvv localhost and investigate the error in detail.
-Check the SSH server configuration in /etc/ssh/sshd_config. If you made any
changes to the SSH server configuration file, you can force a configuration reload
with sudo /etc/init.d/ssh reload.

INSTALLATION
Main Installation
 Now, I will start by switching to hduser1

 hduser@ubuntu:~$ su - hduser1
 Now, download and extract Hadoop 1.2.0
 Setup environment variables for Hadoop: add the following entries to the .bashrc file

# Set Hadoop-related environment variables
export HADOOP_HOME=/usr/local/hadoop

# Add Hadoop bin/ directory to PATH
export PATH=$PATH:$HADOOP_HOME/bin
Configuration
 hadoop-env.sh
 Change the file conf/hadoop-env.sh: replace the line

   # export JAVA_HOME=/usr/lib/j2sdk1.5-sun

   in the same file with

   export JAVA_HOME=/usr/lib/jvm/java-6-openjdk-amd64   (for 64 bit)
   export JAVA_HOME=/usr/lib/jvm/java-6-openjdk-i386    (for 32 bit)

 conf/*-site.xml
 Now we create the directory and set the required ownerships and permissions

 hduser@ubuntu:~$ sudo mkdir -p /app/hadoop/tmp
 hduser@ubuntu:~$ sudo chown hduser:hadoop /app/hadoop/tmp
 hduser@ubuntu:~$ sudo chmod 750 /app/hadoop/tmp
 The last line gives reading and writing permissions to the /app/hadoop/tmp directory.
Error: If you forget to set the required ownerships and permissions, you will see a java.io.IOException when you try to format the name node.
Paste the following between the <configuration> ... </configuration> tags.
 In file conf/core-site.xml

<property>
<name>hadoop.tmp.dir</name>
<value>/app/hadoop/tmp</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:54310</value>
<description>The name of the default file system. A URI whose scheme and authority
determine the FileSystem implementation. The uri's scheme determines the config property
(fs.SCHEME.impl) naming the FileSystem implementation class. The uri's authority is used
to determine the host, port, etc. for a filesystem.</description>
</property>

In file conf/mapred-site.xml

<property>
<name>mapred.job.tracker</name>
<value>localhost:54311</value>
<description>The host and port that the MapReduce job
tracker runs at. If "local", then jobs are run in-process
as a single map
and reduce task.
</description>
</property>
In file conf/hdfs-site.xml

<property>
<name>dfs.replication</name>
<value>1</value>
<description>Default block replication.
The actual number of replications can be specified when the
file is created. The default is used if replication is not
specified in create time.
</description>
</property>
Formatting the HDFS filesystem via the NameNode
To format the filesystem (which simply initializes the directory specified by the dfs.name.dir variable), run the command:
hduser@ubuntu:~$ /usr/local/hadoop/bin/hadoop namenode -format


Starting your single-node cluster


Before starting the cluster, we need to give the required permissions to the directory with the
following command
hduser@ubuntu:~$ sudo chmod -R 777 /usr/local/hadoop
Run the command
hduser@ubuntu:~$ /usr/local/hadoop/bin/start-all.sh
This will startup a Namenode, Datanode, Jobtracker and a Tasktracker on the machine.
hduser@ubuntu:/usr/local/hadoop$ jps


Errors:
1. If by chance your datanode is not starting, then you have to erase the contents of the folder /app/hadoop/tmp. The command that can be used:
   hduser@ubuntu:~$ sudo rm -Rf /app/hadoop/tmp/*
2. You can also check with netstat whether Hadoop is listening on the configured ports. The command that can be used:
   hduser@ubuntu:~$ sudo netstat -plten | grep java
3. For any other errors, examine the log files in the /logs/ directory.

Stopping your single-node cluster
Run the command to stop all the daemons running on your machine.

hduser@ubuntu:~$ /usr/local/hadoop/bin/stop-all.sh

ERROR POINTS:
If datanode is not starting, then clear the tmp folder before formatting the namenode
using the following command
hduser@ubuntu:~$ rm -Rf /app/hadoop/tmp/*
Note:

 The masters and slaves file should contain localhost.


 In /etc/hosts, the IP of the system should be given with the alias localhost.
 Set the Java home path in hadoop-env.sh as well as in .bashrc.


Problem Statement : Find the max temperature of each city using MapReduce
Mapper:

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class Map extends Mapper<LongWritable, Text, Text, IntWritable> {

    private IntWritable max = new IntWritable();
    private Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {

        // Each input line looks like "City,Temperature"
        StringTokenizer line = new StringTokenizer(value.toString(), ",\t");

        word.set(line.nextToken());                   // city name becomes the key
        max.set(Integer.parseInt(line.nextToken()));  // temperature becomes the value

        context.write(word, max);
    }
}

Reducer:

import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> {

    private int max_temp = Integer.MIN_VALUE;
    private int temp = 0;

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {

        // Reset for every city (key), otherwise the maximum of a previous city leaks into the next one
        max_temp = Integer.MIN_VALUE;

        // Keep the largest temperature seen for this city
        Iterator<IntWritable> itr = values.iterator();
        while (itr.hasNext()) {
            temp = itr.next().get();
            if (temp > max_temp) {
                max_temp = temp;
            }
        }
        context.write(key, new IntWritable(max_temp));
    }
}

Driver Class:
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MaxTempDriver {

    public static void main(String[] args) throws Exception {

        // Create a new job
        Job job = new Job();

        // Set job name to locate it in the distributed environment
        job.setJarByClass(MaxTempDriver.class);
        job.setJobName("Max Temperature");

        // Set input and output Path; note that we use the default input format,
        // which is TextInputFormat (each record is a line of input)
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        // Set Mapper and Reducer class
        job.setMapperClass(Map.class);
        job.setCombinerClass(Reduce.class);
        job.setReducerClass(Reduce.class);

        // Set output key and value
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}


Input:

Kolkata,56
Jaipur,45
Delhi,43
Mumbai,34
Goa,45
Kolkata,35
Jaipur,34
Delhi,32

Output:

Kolkata 56
Jaipur 45
Delhi 43
Mumbai 34
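A sketch of how the job can be packaged and run on the single-node cluster configured above (the jar name maxtemp.jar, the input file name temperatures.txt and the HDFS paths are illustrative, not taken from the original report):

hduser@ubuntu:~$ hadoop fs -mkdir /maxtemp/input
hduser@ubuntu:~$ hadoop fs -put temperatures.txt /maxtemp/input
hduser@ubuntu:~$ hadoop jar maxtemp.jar MaxTempDriver /maxtemp/input /maxtemp/output
hduser@ubuntu:~$ hadoop fs -cat /maxtemp/output/part-r-00000

The last command prints the reducer output, i.e. the city-wise maximum temperatures shown above.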


Practical 10
Aim: Implementing and Configuration of High availability web server cluster for Web
Services.
Hardware Requirement: 4GB RAM, 500GB HDD, CPU
Software Requirement: Ubuntu 14.04 servers on your DigitalOcean account.

Introduction

High availability is a function of system design that allows an application to


automatically restart or reroute work to another capable system in the event of a
failure. In terms of servers, there are a few different technologies needed to set up
a highly available system. There must be a component that can redirect the work
and there must be a mechanism to monitor for failure and transition the system if
an interruption is detected.

The keepalived daemon can be used to monitor services or systems and to


automatically failover to a standby if problems occur. In this guide, we will
demonstrate how to use keepalived to set up a highly available web service. We will
configure a floating IP address that can be moved between two capable web servers.
If the primary server goes down, the floating IP will be moved to the second server
automatically, allowing service to resume.

Prerequisites
Both servers must be located within the same datacenter and should have private
networking enabled.
On each of these servers, you will need a non-root user configured with sudo
access. You can follow our Ubuntu 14.04 initial server setup guide to learn how
to set up these users.
When you are ready to get started, log into both of your servers with your non-root user.
Install and Configure Nginx
While keepalived is often used to monitor and fail over load balancers, in order to reduce our operational complexity, we will be using Nginx as a simple web server in this guide.
Start off by updating the local package index on each of your servers. We can then install
Nginx:
sudo apt-get update

sudo apt-get install nginx


In most cases, for a highly available setup, you would want both servers to
serve exactly the same content. However, for the sake of clarity, in this guide
we will use Nginx to indicate which of the two servers is serving our requests
at any given time. To do this, we will change the
default index.htmlpage on each of our hosts. Open the file now:
sudo nano /usr/share/nginx/html/index.html

On your first server, replace the contents of the file with this:
Primary server's /usr/share/nginx/html/index.html
<h1>Primary</h1>
On your second server, replace the contents of the file with this:
Secondary server's
/usr/share/nginx/html/index.html

<h1>Secondary</h1>
Save and close the files when you are finished.
Build and Install Keepalived
Next, we will install the keepalived daemon on our servers. There is a version of keepalived in Ubuntu's default repositories, but it is outdated and suffers from a few bugs that prevent our configuration from working. Instead, we will install the latest version of keepalived from source.
Before we begin, we should grab the dependencies we will need to build the software. The build-essential meta-package will provide the compilation tools we need, while the libssl-dev package contains the SSL libraries that keepalived needs to build against:
sudo apt-get install build-essential libssl-dev
Once the dependencies are in place, we can download the tarball for keepalived.
Visit this page to find the latest version of the software. Right-click on the latest
version and copy the link address. Back on your servers, move to your home
directory and use wget to grab the link you copied:
cd ~
wget http://www.keepalived.org/software/keepalived-1.2.19.tar.gz

Use the tar command to expand the archive and then move into the resulting directory:

tar xzvf keepalived*
cd keepalived*

Build and install the daemon by typing:


./configure
make
sudo make install

The daemon should now be installed on the system.

Create a Keepalived Upstart Script


The keepalived installation moved all of the binaries and supporting files into place
on our system. However, one piece that was not included was an Upstart script for
our Ubuntu 14.04 systems.

We can create a very simple Upstart script that can handle our keepalived service.
Open a file called keepalived.conf within the /etc/init directory to get started:
sudo nano /etc/init/keepalived.conf

Inside, we can start with a simple description of the functionality keepalived provides. We'll use the description from the included man page. Next we will
specify the runlevels in which the service should be started and stopped. We want
this service to be active in all normal conditions (runlevels 2-5) and stopped for all
other runlevels (when reboot, poweroff, or single-user mode is initiated, for
instance):
/etc/init/keepalived.conf
description "load-balancing and high-availability service"

start on runlevel [2345]
stop on runlevel [!2345]
Because this service is integral to ensuring our web service remains available, we want to restart this service in the event of a failure. We can then specify the actual exec line that will start the service. We need to add the --dont-fork option so that Upstart can track the pid correctly:
/etc/init/keepalived.conf
description "load-balancing and high-availability service"
start on runlevel [2345]
stop on runlevel [!2345]

respawn

exec /usr/local/sbin/keepalived --dont-fork


Save and close the files when you are finished.
Create the Keepalived Configuration File
With our Upstart file in place, we can now move on to configuring keepalived.
The service looks for its configuration files in the /etc/keepalived directory. Create
that directory now on both of your servers:
sudo mkdir -p /etc/keepalived

Collecting the Private IP addresses of your Servers


Before we create the configuration file, we need to find the private IP addresses
of both of our servers. On DigitalOcean servers, you can get our private IP
address through the metadata service by typing:
curl http://169.254.169.254/metadata/v1/interfaces/private/0/ipv4/address &&
echo
Output
10.132.7.107
This can also be found with the iproute2 tools by typing:
ip -4 addr show dev eth1

The value you are looking for will be found here:

Output
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
pfifo_fast state UP group default qlen 1000
inet 10.132.7.107/16 brd 10.132.255.255 scope global eth1
valid_lft forever preferred_lft forever

Copy this value from both of your systems. We will need to reference these
addresses inside of our configuration files below.

Creating the Primary Server's Configuration


Next, on your primary server, create the main keepalived configuration file. The daemon looks for a file called keepalived.conf inside of the /etc/keepalived directory:
sudo nano /etc/keepalived/keepalived.conf
Inside, we will start by defining a health check for our Nginx service by opening up a vrrp_script block. This will allow keepalived to monitor our web server for failures so that it can signal that the process is down and begin recovery measures.
Our check will be very simple. Every two seconds, we will check that a process called nginx is still claiming a pid:
Primary server's /etc/keepalived/keepalived.conf
vrrp_script chk_nginx {
    script "pidof nginx"
    interval 2
}
Next, we will open a block called vrrp_instance. This is the main configuration
section that defines the way that keepalived will implement high availability.

We will start off by telling keepalived to communicate with its peers over eth1, our private interface. Since we are configuring our primary server, we will set the state configuration to "MASTER". This is the initial value that keepalived will use until the daemon can contact its peer and hold an election.
During the election, the priority option is used to decide which member is elected.
The decision is simply based on which server has the highest number for this
setting. We will use "200" for our primary server:

Primary server's /etc/keepalived/keepalived.conf


vrrp_script chk_nginx {
    script "pidof nginx"
    interval 2
}
vrrp_instance VI_1 {
    interface eth1
    state MASTER
    priority 200
}

Next, we will assign an ID for this cluster group that will be shared by both nodes.
We will use "33" for this example. We need to set unicast_src_ip to our primary
server's private IP address that we retrieved earlier. We will set unicast_peer to our
secondary server's private IP address:
Primary server's /etc/keepalived/keepalived.conf
vrrp_script chk_nginx {
    script "pidof nginx"
    interval 2
}
vrrp_instance VI_1 {
    interface eth1
    state MASTER
    priority 200

    virtual_router_id 33
    unicast_src_ip primary_private_IP
    unicast_peer {
        secondary_private_IP
    }
}

Next, we can set up some simple authentication for our keepalived daemons to
communicate with one another. This is just a basic measure to ensure that the servers
in question are legitimate. Create an authentication sub-block. Inside, specify
password authentication by setting the auth_type. For the auth_pass parameter, set
a shared secret that will be used by both nodes. Unfortunately, only the first eight
characters are significant:

Primary server's /etc/keepalived/keepalived.conf

vrrp_script chk_nginx {
    script "pidof nginx"
    interval 2
}
vrrp_instance VI_1 {
    interface eth1
    state MASTER
    priority 200

    virtual_router_id 33
    unicast_src_ip primary_private_IP
    unicast_peer {
        secondary_private_IP
    }

    authentication {
        auth_type PASS
        auth_pass password
    }
}

Next, we will tell keepalived to use the routine we created at the top of the file, labeled chk_nginx, to determine the health of the local system. Finally, we will set a notify_master script, which is executed whenever this node becomes the "master" of the pair. This script will be responsible for triggering the floating IP address reassignment. We will create this script momentarily:

Primary server's /etc/keepalived/keepalived.conf


vrrp_script chk_nginx {
    script "pidof nginx"
    interval 2
}
vrrp_instance VI_1 {
    interface eth1
    state MASTER
    priority 200

    virtual_router_id 33
    unicast_src_ip primary_private_IP
    unicast_peer {
        secondary_private_IP
    }

    authentication {
        auth_type PASS
        auth_pass password
    }

    track_script {
        chk_nginx
    }

    notify_master /etc/keepalived/master.sh
}


On the API page, your new token will be displayed:

Copy the token now. For security purposes, there is no way to display this token
again later. If you lose this token, you will have to destroy it and create another
one.


Configure a Floating IP for your Infrastructure


Next, we will create and assign a floating IP address to use for our servers.
In the DigitalOcean control panel, click on the "Networking" tab and select the
"Floating IPs" navigation item. Select the Droplet from the list that you assigned as
your "primary" server:

A new floating IP address will be created in your account and assigned to the
Droplet specified:


If you visit the floating IP in your web browser, you should see the "primary" server
index.htmlpage:

Copy the floating IP address down. You will need this value in the script below.
Create the Wrapper Script
Now, we have the items we need to create the wrapper script that will call our /usr/local/bin/assign-ip script with the correct credentials.
Create the file now on both servers by typing:
sudo nano /etc/keepalived/master.sh
Inside, start by assigning and exporting a variable called DO_TOKEN that holds the API token you just created. Below that, we can assign a variable called IP that holds your floating IP address:
/etc/keepalived/master.sh
export DO_TOKEN='digitalocean_api_token'
IP='floating_ip_addr'
Next, we will use curl to ask the metadata service for the Droplet ID of the server we're currently on. This will be assigned to a variable called ID. We will also ask whether this Droplet currently has the floating IP address assigned to it. We will store the result of that request in a variable called HAS_FLOATING_IP:
/etc/keepalived/master.sh
export DO_TOKEN='digitalocean_api_token'
IP='floating_ip_addr'
ID=$(curl -s http://169.254.169.254/metadata/v1/id)
HAS_FLOATING_IP=$(curl -s http://169.254.169.254/metadata/v1/floating_ip/ipv4/active)
Now, we can use the variables above to call the assign-ip script. We will only call
the script if the floating IP is not already associated with our Droplet. This will help
minimize API calls and will help prevent conflicting requests to the API in cases
where the master status switches between your servers rapidly.
To handle cases where the floating IP already has an event in progress, we will retry the assign-ip script a few times. Below, we attempt to run the script 10 times, with a 3 second interval between each call. The loop will end immediately if the floating IP move is successful:
/etc/keepalived/master.sh
export DO_TOKEN='digitalocean_api_token'
IP='floating_ip_addr'
ID=$(curl -s http://169.254.169.254/metadata/v1/id)
HAS_FLOATING_IP=$(curl -s http://169.254.169.254/metadata/v1/floating_ip/ipv4/active)
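The retry loop itself is not reproduced in this report, so the following is only a minimal sketch of what it could look like, assuming the helper script /usr/local/bin/assign-ip mentioned earlier takes the floating IP and the Droplet ID as its two arguments and reads DO_TOKEN from the environment:

# Only try to claim the floating IP if this Droplet does not already hold it
if [ "$HAS_FLOATING_IP" = "false" ]; then
    n=0
    while [ $n -lt 10 ]; do
        # Stop retrying as soon as the reassignment succeeds
        /usr/local/bin/assign-ip "$IP" "$ID" && break
        n=$((n + 1))
        sleep 3
    done
fi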

Start Up the Keepalived Service and Test Failover


The keepalived daemon and all of its companion scripts should now be completely configured. We can start the service on both of our machines by typing:

sudo start keepalived

The service should start up on each server and contact its peer, authenticating with the shared secret we configured. Each daemon will monitor the local Nginx process, and will listen to signals from the remote keepalived process.
When both servers are healthy, if you visit your floating IP in your web browser, you should be taken to the primary server's Nginx page:

Now, we are ready to test the failover capabilities of our configuration. Failover should occur when either of the following conditions occurs:
 When the Nginx health check on the primary server indicates that Nginx is no longer running. In this case, the primary server's keepalived daemon will enter the "fault" state. It will notify the secondary server that it should transition to the master state and claim the floating IP.
 When the secondary server loses its keepalived connection to the primary server. If the secondary server cannot reach the primary server for any reason, it will transition to the "master" state and attempt to claim the floating IP.

If the primary server later recovers, it will transition back to the master state and
reclaim the floating IP because it will initiate a new election (it will still have the
highest priority number).
Testing Nginx Failure
We can test the first condition by stopping the Nginx service on the primary server:
sudo service nginx stop

If you refresh your web browser, you might initially get a response indicating that
the page is not available:

However, after just a few seconds, if you refresh the page a few times, you will
see that the secondary server has claimed the floating IP address:

A moment later, when the primary server finishes rebooting, it will reclaim the IP
address:

This verifies our second failure scenario.


Conclusion
In this guide, we configured a highly available web server environment using keepalived, the DigitalOcean API, and a floating IP address. The actual infrastructure was rather simple, but the concepts can be applied to any type of infrastructure where service availability and uptime are important.


Practical 11
Aim: Create Virtual Machine using Xen Hypervisor
Hardware Requirement: 4GB RAM, 500GB HDD, CPU
Software Requirement: VM Ware Workstation

Here, one scenario is designed for virtualization with the help of the Xen hypervisor. Two Xen servers are configured: on Xen server-1 there are three virtual machines (VMs) created, and on server-2 only one VM is created. Now let's see the steps to configure the scenario below.

Step-1:-
1. Open VM Ware Workstation
2. Create a new virtual machine
3. Select Installer disc image file (.iso)
4. Select guest operating system linux(Ubuntu 64-bit)
5. Write virtual machine name(Xen-Server-1)
6. Specify Disk Capacity (100GB) & Select store virtual disk as a single file radio button
7. Assign 8GB RAM or more to server.
8. Start the XenServer VM in VMware (the installation process starts as shown below).
Step:-2
Here you have to select keyboard layout or keyboard type & Click OK


Step-3:-
It is simple welcome screen for XenServer Setup. Click OK

Step:-4:-
Here it is End User License Agreement (between User & Provider) so read it and Click Accept
EULA


Step:-5
Here, we have to assign storage space for the virtual machine. For our scenario, 100 GB of VMware virtual storage is assigned. Click OK.

Note:- Here we install XenServer in VMware, so you can see 100 GB [VMware, VMware Virtual S]. If you instead install directly on a physical machine, you have to assign storage from the machine's main hard disk; you will then see an option like 100 GB [HP Logical, Volume] rather than 100 GB [VMware, VMware Virtual S].


Step:-6
Here, you have to select the media or source that the system will use for the installation; here I select Local Media. Click OK.

Step-7:-
One more option has to be selected, for a supplemental package. For our configuration there is no requirement for a supplemental package, so click NO.


Step:-8:-
This is for testing purposes, so select Skip Verification. Click OK.


Step:-9
Here, you have to set the root password and confirm it. This password is used later for various server-related tasks; for instance, the root password is required to shut the server down.

Step:-10
There are two ways of configuring networking: static and dynamic. In the static way you have to assign the IP address, subnet mask and gateway; in automatic configuration mode, on the other hand, all the related addresses are obtained automatically using DHCP. Here select Automatic configuration (DHCP) and click OK.

Note:- If you install the server directly on a machine and it has multiple NIC cards, then you have to select one NIC card before you get the network configuration tab (Step-10).

Step:-11:-
Now, you have to configure the host name and DNS address. In our case, I chose the Manually specify option for easier management of multiple servers in XenCenter. Select Manually specify and write IT1.

In the DNS configuration select Automatically set via DHCP and click OK.

Step:-12
Select the time zone according to the geographical location of the XenServer. Select Asia and click OK.

Step:-13:-
Select the time zone according to the city, so select Kolkata and click OK.


Step:-14:-

Select the network time. Here I chose Using NTP (Network Time Protocol), which helps to synchronise the time over the network, so select Using NTP and click OK.

Note:- If you select the Manual time entry option, then you have to enter the time after Step-15, in which you first have to assign the NTP server address.

Step:-15:-

Select NTP is configured by my DHCP server and click OK.


Step:-16:-
Now, you have to give the final confirmation of the previous configuration. Select Install XenServer and click OK.

Step:-17:-

Finally, the installation is complete; click OK.


Finally You will get this window of XenServer.

How to create a virtual machine with Xen


In the following procedure you'll learn how to install an instance of
paravirtualized SUSE Linux Enterprise Server 10 SP1 on top of a SUSE Linux
Enterprise 10 SP1 virtualization host.


1. Make sure that your server has booted the Xen kernel. Next, run the virt-manager command to start Virtual Machine Manager. This will give you an interface, as in the figure.


Select I need to install an operating system to start installation of a new virtual machine


machine that has one CPU only, you can specify that here.

Both the amount of memory and the number of CPUs available to a virtual machine can be changed easily later.


8. As for the graphics adapter, a paravirtualized graphics adapter is used as a default. This adapter performs fine, so there is no need to change it in most cases.

9.One of the most important choices when setting up a virtual machine is the disk
that you want to use. The default choice of the installer is to create a disk image file
in the directory /var/lib/xen/images. This is fine, but for performance reasons, it's a
good idea to set up LVM volumes and use an LVM volume as the virtualized disk.
To keep setting up the virtual machine easy, in this article we'll configure a virtual
disk based on a disk image file. Click the link Disks. This gives an overview in
which you can see the disk that the installer has created for you.

Both the amount of memory and the amount of CPUs available to a virtual
machine can be changed easily later.

*Note: Here's a tip. Want to use your virtual machines in a data center? Put the disk
image files on the SAN, which makes migrating a virtual machine to another host
much easier!

To change disk properties, such as the size or location of the disk file, select the
virtual disk and click "Edit." Change the disk properties according to your needs
now.

As you can see in figure 5, the installation wizard doesn't give you access to an
optical drive by default. You may want to set this up anyway, if only to be able to
perform the installation from the installation DVD! Click CD-ROM and select the medium you want to use as the optical drive within the virtual machine. By default this is /dev/cdrom on the host operating system. If you want to install from an ISO file,
use the Open button to browse to the location of the ISO file.

It is easy to select an ISO-file instead of a physical CD-rom.

In the Network Adapters part of the summary window, you'll see that a
paravirtualized network adapter has been added automatically. We'll talk about
network adapters later, so let's just keep it this way now.

Now check that under Operating System Installation an installation source is


mentioned. If it is, it's time to click OK and deploy of your virtual machine.

After installing the virtual operating system, you can access it from Virtual Machine
Manager. Later in this series, you'll read more about all the management options you
have from this interface and the command line as well.


Practical 12
Aim: Implementing Container based virtualization using Docker

Docker is an open-source project for automating the deployment of applications as


portable, self-sufficient containers that can run on the cloud or on-premises. Docker
is also a company that promotes and evolves this technology, working in
collaboration with cloud, Linux, and Windows vendors, including Microsoft.

Docker deploys containers at all layers of the hybrid cloud

Docker image containers can run natively on Linux and Windows. However,
Windows images can run only on Windows hosts and Linux images can run on
Linux hosts and Windows hosts (using a Hyper-V Linux VM, so far), where host
means a server or a VM.

Developers can use development environments on Windows, Linux, or macOS. On


the development computer, the developer runs a Docker host where Docker images
are deployed, including the app and its dependencies. Developers who work on
Linux or on the Mac use a Docker host that is Linux based, and they can create
images only for Linux containers. (Developers working on the Mac can edit code or
run the Docker CLI from macOS, but as of the time of this writing, containers don't
run directly on macOS.) Developers who work on Windows can create images for
either Linux or Windows Containers.

To host containers in development environments and provide additional developer


tools, Docker ships Docker Community Edition (CE) for Windows or for macOS.
These products install the necessary VM (the Docker host) to host the containers.
Docker also makes available Docker Enterprise Edition (EE), which is designed for
enterprise development and is used by IT teams who build, ship, and run large
business-critical applications in production.

To run Windows Containers, there are two types of runtimes:


 Windows Server Containers provide application isolation through process
and namespace isolation technology. A Windows Server Container shares
a kernel with the container host and with all containers running on the
host.
 Hyper-V Containers expand on the isolation provided by Windows Server
Containers by running each container in a highly optimized virtual


machine. In this configuration, the kernel of the container host isn't shared
with the Hyper-V Containers, providing better isolation.

The images for these containers are created the same way and function the same. The difference is in how the container is created from the image: running a Hyper-V Container requires an extra parameter. For details, see Hyper-V Containers.
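As a sketch, on a Windows container host the extra parameter is the isolation mode passed to docker run (the image name below is illustrative, not taken from the original text):

docker run --isolation=hyperv microsoft/nanoserver cmd /c echo hello

Without the --isolation=hyperv flag, the same command would run as an ordinary Windows Server Container sharing the host kernel.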
Comparing Docker containers with virtual machines
The figure below shows a comparison between VMs and Docker containers.
Comparison of traditional virtual machines to Docker containers

Because containers require far fewer resources (for example, they don't need a
full OS), they're easy to deploy and they start fast. This allows you to have
higher density, meaning that it allows you to run more services on the same
hardware unit, thereby reducing costs.

As a side effect of running on the same kernel, you get less isolation than VMs.

The main goal of an image is that it makes the environment (dependencies) the
same across different deployments. This means that you can debug it on your
machine and then deploy it to another machine with the same environment
guaranteed.

A container image is a way to package an app or service and deploy it in a


reliable and reproducible way. You could say that Docker isn't only a
technology but also a philosophy and a process.

When using Docker, you won't hear developers say, "It works on my machine,
why not in production?" They can simply say, "It runs on Docker", because the
packaged Docker application can be executed on any supported Docker
environment, and it runs the way it was intended to on all deployment targets
(such as Dev, QA, staging, and production).

A simple analogy

Perhaps a simple analogy can help you grasp the core concept of Docker.

Let's go back in time to the 1950s for a moment. There were no word processors,
and photocopiers were used everywhere (kind of).

Imagine you're responsible for quickly issuing batches of letters as required, to mail
them to customers, using real paper and envelopes, to be delivered physically to each
customer's address (there was no email back then).
At some point, you realize the letters are just a composition of a large set of
paragraphs, which are picked and arranged as needed, according to the purpose of
the letter, so you devise a system to issue letters quickly, expecting to get a hefty
raise.

The system is simple:

1. You begin with a deck of transparent sheets containing one paragraph each.
2. To issue a set of letters, you pick the sheets with the paragraphs you
need, then you stack and align them so they look and read fine.
3. Finally, you place the set in the photocopier and press start to produce as many
letters as required.
So, simplifying, that's the core idea of Docker.
In Docker, each layer is the resulting set of changes that happen to the filesystem
after executing a command, such as installing a program.
So, when you "look" at the filesystem after the layer has been applied, you see all
the files, including those added when the program was installed.
You can think of an image as an auxiliary read-only hard disk ready to be installed in
a "computer" where the operating system is already installed.
Similarly, you can think of a container as the "computer" with the image hard disk
installed. The container, just like a computer, can be powered on or off.
First Docker container

To make sure Docker is installed, it is possible to run the following command: [10]


$ docker version

Docker can be run from the terminal. The docker run command will create the
container from the image the user specifies, then spin up the container and run it.
A simple example of a command to run a Docker image can look like this:

$ docker run hello-world

When we use an image to run a container, Docker first looks on the local machine to
find the image. If Docker can't find the image locally, it will try to download it from a
remote registry. To check which images we have locally, we can type:

$docker images

The outcome of the command in my terminal looked like this:


Docker images command

As seen in the picture, Docker downloaded an image from the "hello-world"
repository, with the tag name "latest". Images have unique IDs.
The command can be specified more precisely in the form
$docker run repository:tag command [arguments], and the output of the command
$docker run busybox:1.24 echo "What's up" is:

Running busybox image

We can see that the image wasn't found locally, so it was pulled from the remote registry.
If we run the command again, execution happens a lot faster since the image is now
stored locally.
The -i flag starts an interactive container. The -t flag allocates a pseudo-TTY
that attaches to standard input (stdin) and standard output (stdout).
Running the command:
$docker run -i -t busybox:1.24

will put us right inside the container.


Usual Docker commands

$docker ps

This command is used to show all the running containers.

$docker ps -a
Shows the stopped containers as well.
In order to give a container a specific name, we can use the command
$docker run --name ello busybox:1.24
This will give that particular name to the container.

141
CE443 : Cloud Computing D17CE144

Container names

Applications running in Docker containers can be accessed from a browser. This can be done with a command such as:

$docker run -it -p 8888:8080 tomcat:8.0

I had to use the IP address given in the Ubuntu console, since I was using a
cloud-hosted Linux distribution in order to access Docker.

$docker logs [container]

This command shows the logs (output) produced by a container.
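
For example, to read the output of the container named earlier (the name ello comes from the example above; the log may be empty if the container printed nothing):

$ docker logs ello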


When we create a container, we add a thin image layer on top of all the underlying
layers. It is called the writable container layer. All changes made to the running
container are written into the writable layer. When the container is deleted, the
writable layer is also deleted, but the underlying image remains unchanged. This
means that many containers can have different files while all of them share the same
underlying image.
There are two ways in which a Docker image can be built. One way is to commit
changes made in a Docker container.

There are three main steps to achieve this: we have to spin up a container from the
base image, install the Git package in the container, and commit the changes made
in the container.

Docker commit is a command used to save the changes made to the Docker
container file system to a new image.
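
A minimal sketch of those three steps (the container name git-box, the base image tag, and the resulting image name are illustrative, not taken from the original screenshots):

$ docker run -it --name git-box ubuntu:18.04 bash
# inside the container: apt-get update && apt-get install -y git, then exit
$ docker commit git-box my-ubuntu-git:1.0
$ docker images
# the new my-ubuntu-git image should now appear in the local image list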

Another way is to write a Dockerfile. A Dockerfile is a document where the user provides all
the instructions to assemble the image. Each instruction adds a new layer to the
image. The docker build command takes the path to the build context as an
argument.

By default, Docker searches for the Dockerfile in the build context path. Docker
executes all the instructions written in the file. It creates a new container from
the base image, and the Docker daemon runs each instruction in a different container:
for each instruction, the daemon creates a container, runs the instruction, commits
a new layer to the image, and removes the container.

Each RUN command executes on top of the writable layer of the container and then
commits the container as a new image, so each RUN command creates a new image
layer. It is good practice to list the packages you install alphanumerically; this
makes the Dockerfile easier to maintain and update. The CMD instruction specifies
what command the user wants to run when the container starts up; if no CMD
instruction is specified in the Dockerfile, Docker will use the default command
defined in the base image.
If an instruction has not changed, Docker will reuse the same cached layer when
building the image. The ADD instruction lets you not only copy files but also
download a file from the internet and add it to the container.

All the dependencies are managed by Docker, so there is no need to install, for
example, Python on the local machine in order to work with it.
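
A minimal Dockerfile sketch of the ideas above (the base image, package names, URL, and file paths are assumptions chosen only for illustration):

FROM ubuntu:18.04
# each instruction below adds one layer to the image
RUN apt-get update && apt-get install -y \
    curl \
    git
# ADD can fetch a file from the internet into the image
ADD https://example.com/config.tar.gz /tmp/config.tar.gz
# COPY takes files from the build context
COPY ./app /app
WORKDIR /app
# default command executed when the container starts
CMD ["./start.sh"]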
Docker compose

Container links allow containers to locate each other and securely transfer information
from one container to another. When we set up a link, we create a pipeline
between the source container and the recipient container. The recipient container can
then access selected data about the source container. In our case the rattus container is
the source container and our container is the recipient container. The links are
established by using container names.

The main use for Docker container links is when we build an application with a
microservice architecture: we are able to run many independent components in
different containers. Docker creates a secure tunnel between the containers that
doesn't need to expose any ports externally on the container.

Docker Compose is a very important component, made in order to run multi-container
Docker applications. With the help of Docker Compose we can define all
the containers in a single YAML file (docker-compose.yml).
You can check if you have Docker Compose installed simply by running a command:
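
On a typical installation that check is simply (the exact output depends on your installation):

$ docker-compose --version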

Checking if Docker-compose is installed


As we can clearly see, you will be told if installation is required. The next step is to create
a docker-compose.yml file.
Structure of a YAML file

There are three versions of the Docker Compose file format: version 1, a legacy format
which does not support volumes or networks; version 2; and version 3, which is the
most up-to-date format and the recommended one. Next we define the services that
make up our application. For each service we provide instructions on how to
build and run its container. In this example we have two services: example and
redis.

The first instruction is build, which gives the path to the build context (and Dockerfile)
used to build the Docker image. The second instruction is ports, which defines what
ports to expose to the external network. depends_on comes next, since in this
example our container is a client of redis and we need to start redis beforehand.
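
A minimal sketch of such a docker-compose.yml (the original file is only visible in the screenshot below, so the build path and port numbers here are assumptions):

version: "3"
services:
  example:
    build: .            # path to the build context with the Dockerfile
    ports:
      - "5000:5000"     # port exposed to the external network (illustrative)
    depends_on:
      - redis           # start redis before the example service
  redis:
    image: redis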

Docker-compose file
Dockerizing React application

There are a bunch of articles which can help with building and deploying React
applications, and basically any other applications. The most important steps are
explained below. [11]
Step 1
$ npm install -g create-react-app
Someone who has worked with React might find these steps very familiar.
Step 2 (creating a folder)
$ mkdir myApp


Step 3 (get inside the folder)
$ cd myApp
Step 4 (create a react folder)
$ create-react-app frontend
Then we create a Dockerfile.
Writing Dockerfile for a React App

Writing Dockerfile
So what does the code mean? [11]

FROM node means start from the node base image.
ENV NPM_CONFIG_LOGLEVEL warn means fewer messages during the build.
ARG app_env means that the app environment can be set during the build.
ENV APP_ENV $app_env means that an environment variable is set to the app_env argument.
RUN mkdir -p /frontend means that the frontend folder is created.
WORKDIR /frontend means that all following commands will be run from this folder.
COPY ./frontend ./ means that the code from the local folder is copied into the
container's working directory.
RUN npm install installs the dependencies.
The CMD if instruction means: if app_env = production, then http-server is installed
and the app is built; otherwise the create-react-app hot-reloading tool (basically
webpack --watch) is used.
EXPOSE 3000 means that the app runs on port 3000 by default.
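
Putting the explanation together, the Dockerfile looks roughly like this (reconstructed from the description above; the exact production command inside CMD is an assumption, since the original file is only shown in the screenshot):

FROM node
ENV NPM_CONFIG_LOGLEVEL warn
ARG app_env
ENV APP_ENV $app_env
RUN mkdir -p /frontend
WORKDIR /frontend
COPY ./frontend ./
RUN npm install
# in production, install a static server and serve the build; otherwise use the dev server
CMD if [ "$APP_ENV" = "production" ]; \
    then npm install -g http-server && npm run build && http-server ./build -p 3000; \
    else npm start; \
    fi
EXPOSE 3000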
The next commands to run the app are:


$ docker build ./
Type $ docker images to find out the image id.
After the following command you should be able to reach your container at localhost:3000:
$ docker run -it -p 3000:3000 -v [put your path here]/frontend/src:/frontend/src [image id]
To build a production image run:
$ docker build ./ --build-arg app_env=production
To run the production image:
$ docker run -i -t -p 3000:3000 [image id]
And don't worry if you make a mistake in your file: the image simply won't be built.

Letter A is a mistake in the file


Next is Docker build.


Figure 9 Docker build

Current Docker images situation.

Figure 10 Docker images


Getting the image to run inside the container.

Figure 11 Getting inside the container

Figure 12 Successful Docker build

Figure 13 React app interface


9 Pushing images into Docker Cloud


An important thing to know from the beginning: Docker Cloud does not provide cloud
services. Docker Cloud does, however, have more added features than Docker Hub, and it is
built on top of Docker Hub. If you push an image to Docker Hub, it will
automatically appear in Docker Cloud. [12]

Figure 14 My repository in Docker Cloud

The next step is to log in to Docker Hub. You will be able to see your image in both
Docker Cloud and Docker Hub.

Figure 15 Login into Docker Hub


Push an image.
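
The push itself usually takes three commands (the repository name game_theory comes from this thesis; [your docker id] and [image id] are placeholders for your own values):

$ docker login
$ docker tag [image id] [your docker id]/game_theory:latest
$ docker push [your docker id]/game_theory:latest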


Image push

Docker Cloud repository interface


If you wish to find my image in Docker Hub, just search for game_theory; this is
what you can find.

Docker Hub repository interface


Repository interface
Docker Networks

Docker networks
Docker has three built-in networks:
bridge
host
none
You can specify which network you want to use with the --net flag.
The none network means that the container is isolated. To run a container on it, use:
$docker run -d --net none [your image]

Creating none network


The none network provides the most protection.

Checking if none network is isolated


Containers in the same network can connect to each other. Containers from other
networks can't connect to containers in the given one.
Bridge is the default network: if no network is specified, this is the network type
you are creating. Usually this kind of network is used for single containers which
need a way to communicate. Host is a network that removes network isolation
between the container and the Docker host; it is the least protected network, and such
containers are usually called open containers. An overlay network connects multiple
Docker daemons and enables swarm services to talk to each other. It also allows
communication between two standalone containers on different Docker daemons. A macvlan
network allows assigning a MAC address to your container, which makes your
container appear as a physical device on your network. It's usually the best choice when dealing
with applications that have to be directly connected to the physical network.
Creating a custom network
In order to create a custom network, use the command: [14]
$docker network create --driver [your driver choice] [your network]
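
For instance, to create a bridge network (the driver and the network name my_bridge_net here are only examples):

$docker network create --driver bridge my_bridge_net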

Creating custom network


If we check networks again:

All the networks available

In a bridge network, containers have access to two network interfaces:
a loopback interface
a private interface
Containers in the same network can communicate with each other. We can define
networks in the docker-compose file as well.


Including network into Docker-compose file

Networks are defined similarly to other services, with sub being the name of my
network. The network should also be referenced in the other sections where it is
used.
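
A minimal sketch of that part of the docker-compose.yml (only the relevant keys are shown; the service name and build path are illustrative, while sub is the network name used above):

version: "3"
services:
  example:
    build: .
    networks:
      - sub           # attach this service to the sub network
networks:
  sub:                # the network is also declared in its own top-level section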

You can create two networks, which will provide network isolation between services.

Including more networks into Docker-compose

Using Docker in production


Opinions are divided on whether it is safe to use Docker in a production environment
or not. The main concerns are that Docker lacks important security and data
management features. On the other hand, Docker is being developed at a very fast pace. In
the case studies mentioned previously, it can be verified that Docker in
production can work and is in fact quite efficient.
Technically it is possible to run many different processes in one container, but it
is better to run one specific process in each. It is easier to use containers with only
one functionality. You can always spin up a container to reuse in some other project,
but you can't really reuse a container which already bundles your database when you
don't need it in another place. It is also easier to debug and find mistakes in one
component than in the whole application. A benefit of Docker containers is their
small size, so it is good to keep them that way, especially when many containers
have to be deployed and updated at the same time. The most important part, though, is
to remember about security: once you deploy your containers to production, be careful
of network vulnerabilities and make sure your data is protected.

Conclusion

The purpose of this thesis was to document my learning of the Docker technology
and research the reasons for its apparent success. As a result, I can sum up that Docker is a very
powerful tool which has helped many companies overcome their difficulties in
resource management, isolation of environments, security issues, and moving into
the cloud. Since information is being received and sent faster than ever before, it is
essential for service providers to ensure that they can give the best assistance to
their customers.

Documenting my learning was not easy, since I had to remember to take
screenshots of the code. In my opinion, it is very important for a beginner to see an
example of the code, either as a picture or a short video. I tried to make my
explanations as easy as possible for new Linux users as well. I covered all the
topics necessary in order to be able to run Docker in a test environment.

Overall, I think that the Docker documentation and its pool of developers provide good
support for new Docker users.
