Documented by Prof. K. V. Reddy, Asst. Prof. at DIEMS (BamuEngine.com)
UNIT 2
SERVICES DELIVERED FROM THE CLOUD
Model architecture, benefits and drawbacks: Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), Software-as-a-Service (SaaS), Business-Process-as-a-Service (BPaaS), Identity-as-a-Service (IDaaS), Communication-as-a-Service (CaaS), Monitoring-as-a-Service (MaaS). Storage as a service: traditional storage versus storage cloud. Cloud service providers: Infrastructure as a Service: Amazon EC2; Platform as a Service: Google App Engine, Force.com.
INFRASTRUCTURE-AS-A-SERVICE (IAAS)
Infrastructure-as-a-Service (IaaS) is the delivery of computer infrastructure as a service. IaaS leverages
significant technology, services, and data center investments to deliver IT as a service to customers. Unlike
traditional outsourcing, which requires extensive due diligence, negotiations, and complex, lengthy contract vehicles, IaaS is centered around a model of service delivery that provisions a predefined, standardized infrastructure
specifically optimized for the customer's applications. Simplified statements of work and à la carte service-level choices make it easy to tailor a solution to a customer's specific application requirements. IaaS
providers manage the transition and hosting of selected applications on their infrastructure. Customers
maintain ownership and management of their application(s) while off-loading hosting operations and
infrastructure management to the IaaS provider. IaaS providers offer virtual machines, virtual storage, virtual infrastructure, and other hardware assets as resources that can be provisioned to clients. Most large
Infrastructure as a Service (IaaS) providers rely on virtual machine technology to deliver servers that can run
applications. To show how control and management responsibilities are shared, the IaaS cloud component stack with the scope of control is shown below.
The cloud provider controls the most privileged, lower layers of the software stack. As depicted in the figure
2.14 above, the provider maintains total control over the physical hardware and administrative control over the hypervisor layer (e.g., Xen). Thus the consumer can make requests to the cloud to create and manage VMs, but these requests are honored only if they conform to the provider's policies on resource assignment. Via the hypervisor, the provider will normally supply interfaces for the networking functions that consumers can use to configure the virtual network within the provider's infrastructure. The
consumer maintains complete control over the guest operating system functionality in each of the virtual machines, and over all the software layers above. This structure gives consumers very significant control over the software stack, but they must take responsibility for operating, updating, and configuring these computing resources for security and reliability. Provider-owned implementations typically include the following layered components:
Computer hardware (typically set up as a grid for massive horizontal scalability)
Computer network (including routers, firewalls, load balancing, etc.)
Internet connectivity (often on OC-192 backbones)
Platform virtualization environment for running client-specified virtual machines
Service-level agreements
Utility computing billing
Rather than purchasing data center space, servers, software, network equipment, etc., IaaS customers
essentially rent those resources as a fully outsourced service. Usually, the service is billed on a monthly
basis, just like a utility company bills customers. The customer is charged only for resources consumed. The
chief benefits of using this type of outsourced service include:
Ready access to a preconfigured environment that is generally ITIL-based (the Information Technology Infrastructure Library [ITIL] is a framework of best practices designed to promote quality computing services in the IT sector)
Use of the latest technology for infrastructure equipment
Secured, “sand-boxed” (protected and insulated) computing platforms that are usually security
monitored for breaches
Reduced risk by having off-site resources maintained by third parties
Ability to manage service-demand peaks and valleys
Lower costs that allow expensing service costs instead of making capital investments
Reduced time, cost, and complexity in adding new features or capabilities
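The utility-style, pay-per-use billing described above can be sketched as a simple metering calculation. The rate card below is purely hypothetical, not any provider's actual pricing:

```python
# Sketch of utility-style IaaS billing: the customer is charged only for
# resources actually consumed. Unit rates below are illustrative, not real.
RATES = {
    "vm_hours": 0.10,    # $ per VM-hour
    "storage_gb": 0.05,  # $ per GB-month of storage
    "egress_gb": 0.09,   # $ per GB of outbound transfer
}

def monthly_bill(usage: dict) -> float:
    """Sum metered usage times unit rate for one billing period."""
    return round(sum(RATES[item] * amount for item, amount in usage.items()), 2)

bill = monthly_bill({"vm_hours": 720, "storage_gb": 100, "egress_gb": 50})
print(bill)  # 720*0.10 + 100*0.05 + 50*0.09 = 81.5
```

A customer who consumes nothing in a month is billed nothing, which is exactly what distinguishes this model from buying the hardware outright.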
One example we will examine is Amazon's Elastic Compute Cloud (Amazon EC2). This is a web service
that provides resizable computing capacity in the cloud. It is designed to make web-scale computing easier
for developers and offers many advantages to customers:
Its web service interface allows customers to obtain and configure capacity with minimal effort.
It provides users with complete control of their (leased) computing resources and lets them run on a
proven computing environment.
It reduces the time required to obtain and boot new server instances to minutes, allowing customers to
quickly scale capacity as their computing demands dictate.
It changes the economics of computing by allowing clients to pay only for capacity they actually use.
It provides developers the tools needed to build failure-resilient applications and isolate themselves from
common failure scenarios.
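The elasticity EC2 offers, scaling capacity to demand in minutes, ultimately comes down to a rule like the following toy sketch. The per-instance capacity and minimum fleet size are assumptions; a real deployment would drive provider APIs (such as EC2's RunInstances/TerminateInstances) with the computed count:

```python
import math

def instances_needed(requests_per_sec: float,
                     capacity_per_instance: float = 100.0,
                     minimum: int = 1) -> int:
    """Toy autoscaling rule: provision just enough identical instances to
    cover current demand, never dropping below a minimum fleet size."""
    return max(minimum, math.ceil(requests_per_sec / capacity_per_instance))

for load in (40, 250, 1000):
    print(load, "req/s ->", instances_needed(load), "instance(s)")
```

Because the customer pays per instance-hour, shrinking the fleet when demand falls is what "changes the economics of computing" in practice.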
Examples of IaaS service providers include:
• Amazon Elastic Compute Cloud (EC2)
• Eucalyptus
• GoGrid
• FlexiScale
• Linode
• RackSpace Cloud
• Terremark
Drawbacks
As with the other service models, IaaS clouds share concerns regarding network dependence and browser dependency. The following issues relate specifically to IaaS clouds.
Virtual Machine Sprawl. Most IaaS systems let users create and retain virtual machines in various states, e.g., running, suspended, and off. An inactive VM can fall behind on important security updates; whenever such an out-of-date VM is activated, it may become compromised.
Legacy Security Vulnerabilities. IaaS clouds expose consumers to all of the security vulnerabilities of the legacy software systems that consumers are allowed to run in the provider's infrastructure.
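The inactive-VM hazard noted above can be guarded against with a staleness check before a dormant VM is returned to service. This is a minimal sketch with an assumed 30-day patch policy:

```python
from datetime import datetime, timedelta

PATCH_WINDOW = timedelta(days=30)  # assumed policy: updates at least monthly

def needs_patching(last_patched: datetime, now: datetime) -> bool:
    """Flag a suspended or powered-off VM whose security updates have gone
    stale, so it can be patched before being returned to service."""
    return now - last_patched > PATCH_WINDOW

now = datetime(2024, 6, 1)
print(needs_patching(datetime(2024, 3, 1), now))   # True: dormant for months
print(needs_patching(datetime(2024, 5, 20), now))  # False: recently patched
```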
IaaS provider authenticity verification. The user's browser will most likely use public key cryptography to establish a private link to the cloud provider. Nevertheless, it is the consumer who is responsible for checking the identity of the cloud website in order to verify that the private link is not with an imposter.
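One simplified way to picture the consumer's verification duty is certificate pinning: comparing the digest of the certificate the site presents against a value obtained out of band. This sketch uses placeholder byte strings in place of real DER-encoded certificates:

```python
import hashlib

# Hypothetical fingerprint the consumer pinned out of band (e.g., from the
# provider's published documentation); the byte strings here stand in for
# real DER-encoded certificates.
PINNED_SHA256 = hashlib.sha256(b"provider-certificate-der-bytes").hexdigest()

def provider_is_authentic(presented_cert: bytes) -> bool:
    """Accept the private link only if the presented certificate's digest
    matches the pinned value; otherwise the peer may be an imposter."""
    return hashlib.sha256(presented_cert).hexdigest() == PINNED_SHA256

print(provider_is_authentic(b"provider-certificate-der-bytes"))  # True
print(provider_is_authentic(b"imposter-certificate"))            # False
```

In real browsers this check is performed through the full PKI chain of trust rather than a single pinned digest, but the consumer's responsibility to confirm the peer's identity is the same.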
Robustness of VM-level Isolation. Cloud consumers must be isolated from each other except when they
choose to interact. Normally an IaaS cloud uses a hypervisor (which is a software layer), in combination
with hardware support for virtualization (e.g., AMD-V and Intel VT-x), to split each physical computer
into multiple virtual machines. Isolation of the virtual machines depends on the correct implementation
and configuration of the hypervisor. Hardware virtualization provided by hypervisors has become a
widely used technique for providing isolated computing environments, but the strength of the isolation in
the presence of sophisticated attackers is an open research question.
Features for Dynamic Network Configuration for Providing Isolation. In order to prevent unwanted
interactions among consumers, the cloud network must prevent a consumer from observing other
consumer's packets. Furthermore, it has to reserve enough bandwidth to ensure that each consumer receives the expected level of service. The allocation of a virtual machine typically takes only a few minutes, and the corresponding network configuration must be performed just as quickly. Various techniques for building a logical view of a network's topology, such as Virtual Local Area Networks (VLANs) and overlay networks, can be quickly reconfigured. They (and perhaps supporting features in hypervisors as well) must therefore be configured carefully to prevent interference between networks belonging to different consumers.
Data Erase Practices. Virtual machines access disk resources maintained by the provider. When a
consumer releases such a resource, the provider must ensure that the next consumer that rents the
resource does not observe data residue from previous tenants. Strong data erase policies (e.g., multiple
overwriting of disk blocks) are time consuming and may not be compatible with high performance when
tenants are changing. Data replication and backup practices also complicate data erase practices.
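The multiple-overwrite policy mentioned above can be sketched as follows. This is illustrative only; as the text notes, replication and backups complicate real erase practices:

```python
import os
import secrets

def scrub(path: str, passes: int = 3) -> None:
    """Overwrite a file's contents with random bytes several times before
    the storage is handed to the next tenant. Illustrative only: real
    providers must also scrub replicas and backups."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))
            f.flush()
            os.fsync(f.fileno())  # force the overwrite onto disk

# Simulate a released tenant disk resource.
with open("tenant.dat", "wb") as f:
    f.write(b"sensitive tenant data")
scrub("tenant.dat")
with open("tenant.dat", "rb") as f:
    print(b"sensitive" in f.read())  # False: no residue for the next tenant
```

The cost is visible even here: each extra pass means another full write of the resource, which is why strong erase policies conflict with fast tenant turnover.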
PLATFORM-AS-A-SERVICE (PAAS)
Cloud computing has evolved to include platforms for building and running custom web-based applications,
a concept known as Platform-as-a-Service. PaaS is an outgrowth of the SaaS application delivery model.
The PaaS model makes all of the facilities required to support the complete lifecycle of building and
delivering web applications and services entirely available from the Internet, all with no software downloads
or installation for developers, IT managers, or end users. Unlike the IaaS model, where developers may
create a specific operating system instance with home grown applications running, PaaS developers are
concerned only with web-based development and generally do not care what operating system is used. PaaS
services allow users to focus on innovation rather than complex infrastructure. Organizations can redirect a
significant portion of their budgets to creating applications that provide real business value instead of
worrying about all the infrastructure issues in a roll-your-own delivery model. The PaaS model is thus
driving a new era of mass innovation. Now, developers around the world can access unlimited computing
power. Anyone with an Internet connection can build powerful applications and easily deploy them to users
globally.
The Traditional On-Premises Model: The traditional approach of building and running on-premises
applications has always been complex, expensive, and risky. Building your own solution has never offered
any guarantee of success. Each application was designed to meet specific business requirements. Each
solution required a specific set of hardware, an operating system, a database, often a middleware package,
email and web servers, etc. Once the hardware and software environment was created, a team of developers
had to navigate complex programming development platforms to build their applications. Additionally, a
team of network, database, and system management experts was needed to keep everything up and running.
A business requirement would force the developers to make a change to the application. The changed
application then required new test cycles before being distributed. Large companies often needed specialized
facilities to house their data centers. Enormous amounts of electricity also were needed to power the servers
as well as to keep the systems cool. Finally, all of this required use of fail-over sites to mirror the data center
so that information could be replicated in case of a disaster.
The New Cloud Model: As illustrated in figure 2.12 below, the cloud provider has control over the
more privileged, lower layers of the software stack (also has control over networking infrastructure such as
LANs and routers between data centers). Thus it also shows how control and management responsibilities
are shared.
The provider makes programming and utility interfaces available to the consumer at the middleware
layer. PaaS offers a faster, more cost-effective model for application development and delivery. PaaS
provides the entire infrastructure needed to run applications over the Internet. Such is the case with
companies such as Amazon.com, eBay, Google, iTunes, and YouTube. The new cloud model has made it
possible to deliver such new capabilities to new markets via the web browser. PaaS is based on a metering
or subscription model, so users pay only for what they use. PaaS offerings include workflow facilities for
application design, application development, testing, deployment, and hosting, as well as application
services such as virtual offices, team collaboration, database integration, security, scalability, storage,
persistence, state management, dashboard instrumentation, etc.
Key Characteristics of PaaS
Chief characteristics of PaaS include services to develop, test, deploy, host, and manage applications to
support the application development life cycle. Web-based user interface creation tools typically provide
some level of support to simplify the creation of user interfaces, based either on common standards such as
HTML and JavaScript or on other, proprietary technologies. Supporting a multitenant architecture helps to
remove developer concerns regarding the use of the application by many concurrent users. PaaS providers
often include services for concurrency management, scalability, fail-over and security. Another
characteristic is the integration with web services and databases. Support for Simple Object Access Protocol
(SOAP) and other interfaces allows PaaS offerings to create combinations of web services (called mashups)
as well as having the ability to access databases and reuse services maintained inside private networks. The
ability to form and share code with ad hoc, predefined, or distributed teams greatly enhances the productivity
of PaaS offerings. Integrated PaaS offerings provide an opportunity for developers to have much greater
insight into the inner workings of their applications and the behavior of their users by implementing
dashboard-like tools to view the inner workings based on measurements such as performance, number of
concurrent accesses, etc. Some PaaS offerings leverage this instrumentation to enable pay-per-use billing
models.
Main Providers
Below is a short list of the leading PaaS providers on the market:
Google App Engine
Windows Azure
Force.com
Drawbacks
Possibility of information disclosures. For example, the very presence or absence of message traffic, the sizes of messages sent, or the originating locations may leak information that is indirect but still of importance to some consumers.
Network Dependency. In case of network failure, an outsourced PaaS platform becomes non-operational, since there is no way to reach it.
PaaS clouds are not portable. This is a concern particularly when platforms require proprietary
languages and run-time environments.
Vendor lock-in. In many PaaS offerings (e.g., Google App Engine), an application uploaded to the PaaS cloud is not retrievable from the provider's servers. This is called vendor lock-in: once a company deploys its software onto the cloud, it becomes dependent on that cloud provider.
Event-based Scheduling. PaaS applications can be event driven, with the events consisting of HTTP messages. This kind of design is cost effective (absent an outstanding request, few resources are consumed), but it imposes resource constraints on applications: for example, they must answer a request within a time interval, or they must continue a long-running request by queuing synthetic messages that can then be serviced. Moreover, tasks that execute rapidly in a local application do not necessarily offer equivalent performance in a PaaS application.
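The deadline-and-synthetic-message pattern just described can be sketched as a queue of follow-up messages. The per-request budget of three work units and the task names are assumptions for illustration:

```python
from collections import deque

TIME_BUDGET = 3  # assumed limit: work units one request may consume

def handle(task, queue, done):
    """Process up to TIME_BUDGET chunks of a task; if work remains, queue a
    synthetic follow-up message so a later request can continue it."""
    name, chunks = task
    for _ in range(min(TIME_BUDGET, chunks)):
        done.append(name)                  # one unit of work
    if chunks > TIME_BUDGET:
        queue.append((name, chunks - TIME_BUDGET))

queue, done = deque([("build-report", 7)]), []
requests = 0
while queue:                               # each iteration = one HTTP request
    handle(queue.popleft(), queue, done)
    requests += 1
print(len(done), "chunks over", requests, "requests")  # 7 chunks over 3 requests
```

A task that would be one call in a local application here becomes three requests, which is the performance gap the text warns about.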
Security Engineering of PaaS Applications. Unlike the case of an application that can potentially
run in an isolated environment using only local resources, PaaS applications access networks
intrinsically. Moreover, PaaS applications must use cryptography in an explicit way, and must
interact with the presentation features of common Web browsers that provide output to consumers.
SOFTWARE-AS-A-SERVICE (SAAS)
The traditional model of software distribution, in which software is purchased for and installed on personal
computers, is sometimes referred to as Software-as-a-Product. Software-as-a-Service is a software distribution
model in which applications are hosted by a vendor or service provider and made available to customers
over a network, typically the Internet. SaaS is becoming an increasingly prevalent delivery model as
underlying technologies that support web services and service-oriented architecture (SOA) mature and new
developmental approaches become popular. SaaS is also often associated with a pay-as-you-go subscription
licensing model. Meanwhile, broadband service has become increasingly available to support user access
from more areas around the world.
The huge strides made by Internet Service Providers (ISPs) to increase bandwidth, and the constant introduction of ever more powerful microprocessors coupled with inexpensive data storage devices, are providing a huge platform for designing, deploying, and using software across all areas of business and
personal computing. SaaS applications also must be able to interact with other data and other applications in
an equally wide variety of environments and platforms. SaaS is closely related to other service delivery
models we have described. IDC identifies two slightly different delivery models for SaaS. The hosted
application management model is similar to an Application Service Provider (ASP) model. Here, an ASP
hosts commercially available software for customers and delivers it over the Internet. The other is the software-on-demand model, in which the provider gives customers network-based access to a single copy of an
application created specifically for SaaS distribution. IDC predicted that SaaS would make up 30% of the
software market by 2007 and would be worth $10.7 billion by the end of 2009.
SaaS is most often implemented to provide business software functionality to enterprise customers at
a low cost while allowing those customers to obtain the same benefits of commercially licensed, internally
operated software without the associated complexity of installation, management, support, licensing, and
high initial cost. Most customers have little interest in the how or why of software implementation,
deployment, etc., but all have a need to use software in their work. Many types of software are well suited to
the SaaS model (e.g., accounting, customer relationship management, email software, human resources, IT
security, IT service management, videoconferencing, web analytics, and web-content management). The
distinction between SaaS and earlier applications delivered over the Internet is that SaaS solutions were
developed specifically to work within a web browser. The architecture of SaaS-based applications is
specifically designed to support many concurrent users (multi-tenancy) at once. This is a big difference from
the traditional client/server or application service provider (ASP)-based solutions that cater to a contained
audience. SaaS providers, on the other hand, leverage enormous economies of scale in the deployment,
management, support, and maintenance of their offerings.
In order to facilitate the understanding of scope and division of roles between cloud consumer and
cloud provider, the following figure is placed as a reference.
The figure above depicts "user level control", meaning that a consumer has control over the application-specific resources that a SaaS application makes available. In some cases, a consumer also has some limited administrative control over an application. A provider normally has significantly more administrative control at the application level. The responsibilities of a provider are to deploy, configure, update, and manage the operation of the application in order to provide the expected service levels to consumers. The middleware layer provides software blocks that form the base of an application. It can take various forms, ranging from traditional software libraries, to software interpreters, to invocations of remote network services. Moreover, middleware components can provide database services, user authentication services, identity management, etc. Consumers generally cannot access this layer, nor should they have access to the operating system or hardware layers.
SaaS Implementation Issues
Many types of software components and applications frameworks may be employed in the development of
SaaS applications. Using new technology found in these modern components and application frameworks
can drastically reduce the time to market and cost of converting a traditional on-premises product into a
SaaS solution. According to Microsoft, SaaS architectures can be classified into one of four maturity levels
whose key attributes are ease of configuration, multitenant efficiency, and scalability. Each level is
distinguished from the previous one by the addition of one of these three attributes. The levels described by
Microsoft are as follows.
SaaS Architectural Maturity Level 1—Ad-Hoc/Custom. The first level of maturity is actually no
maturity at all. Each customer has a unique, customized version of the hosted application. The application
runs its own instance on the host's servers. Migrating a traditional non-networked or client-server
application to this level of SaaS maturity typically requires the least development effort and reduces
operating costs by consolidating server hardware and administration.
SaaS Architectural Maturity Level 2—Configurability. The second level of SaaS maturity provides
greater program flexibility through configuration metadata. At this level, many customers can use separate
instances of the same application. This allows a vendor to meet the varying needs of each customer by
using detailed configuration options. It also allows the vendor to ease the maintenance burden by being
able to update a common code base.
SaaS Architectural Maturity Level 3—Multi-tenant Efficiency. The third maturity level adds multi-
tenancy to the second level. This results in a single program instance that has the capability to serve all of
the vendor's customers. This approach enables more efficient use of server resources without any apparent
difference to the end user, but ultimately this level is limited in its ability to scale massively.
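The single shared instance of Level 3 can be pictured as tenant-scoped data access over one code base. This is a minimal illustration; production systems add per-tenant configuration, quotas, and much stronger isolation:

```python
class MultiTenantStore:
    """One program instance serving every customer: each record is keyed by
    tenant, and every query is implicitly scoped to the calling tenant."""

    def __init__(self):
        self._rows = []  # list of (tenant_id, record) pairs in shared storage

    def insert(self, tenant_id, record):
        self._rows.append((tenant_id, record))

    def query(self, tenant_id):
        # Tenant scoping on every read keeps customers invisible to each other.
        return [rec for tid, rec in self._rows if tid == tenant_id]

store = MultiTenantStore()
store.insert("acme", {"invoice": 1})
store.insert("globex", {"invoice": 2})
print(store.query("acme"))  # [{'invoice': 1}] -- never sees globex's rows
```

The same object serves both customers, which is why server resources are used more efficiently than one instance per customer, while each tenant still observes only its own data.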
SaaS Architectural Maturity Level 4—Scalable. At the fourth SaaS maturity level, scalability is added
by using a multi-tiered architecture. This architecture is capable of supporting a load-balanced farm of
identical application instances running on a variable number of servers, sometimes in the hundreds or even
thousands. System capacity can be dynamically increased or decreased to match load demand by adding or
removing servers, with no need for further alteration of application software architecture.
Key Characteristics of SaaS
Deploying applications in a service-oriented architecture is a more complex problem than is usually
encountered in traditional models of software deployment. As a result, SaaS applications are generally
priced based on the number of users that can have access to the service. There are often additional fees for
the use of help desk services, extra bandwidth, and storage. SaaS revenue streams to the vendor are usually
lower initially than traditional software license fees. However, the trade-off for lower license fees is a
monthly recurring revenue stream, which is viewed by most corporate CFOs as a more predictable gauge of
how the business is faring quarter to quarter. These monthly recurring charges are viewed much like
maintenance fees for licensed software. The key characteristics of SaaS software are the following:
Network-based management and access to commercially available software from central locations rather
than at each customer's site, enabling customers to access applications remotely via the Internet.
Application delivery from a one-to-many model (single-instance, multitenant architecture), as opposed
to a traditional one-to-one model.
Centralized enhancement and patch updating that obviates any need for downloading and installing by a
user. SaaS is often used in conjunction with a larger network of communications and collaboration
software, sometimes as a plug-in to a PaaS architecture.
Key benefits of a SaaS model include the following:
• SaaS enables the organization to outsource the hosting and management of applications to a third party
(software vendor and service provider) as a means of reducing the cost of application software licensing,
servers, and other infrastructure and personnel required to host the application internally.
• SaaS enables software vendors to control and limit use, prohibits copying and distribution, and facilitates
the control of all derivative versions of their software. SaaS centralized control often allows the vendor or
supplier to establish an ongoing revenue stream with multiple businesses and users without preloading
software in each device in an organization.
• Application delivery using the SaaS model typically uses the one-to-many delivery approach, with the
Web as the infrastructure. An end user can access a SaaS application via a web browser; some SaaS vendors
provide their own interface that is designed to support features that are unique to their applications.
• A typical SaaS deployment does not require any hardware and can run over the existing Internet access
infrastructure. Sometimes changes to firewall rules and settings may be required to allow the SaaS
application to run smoothly.
• Management of a SaaS application is supported by the vendor from the end user perspective, whereby a
SaaS application can be configured using an API, but SaaS applications cannot be completely customized.
SaaS solutions are very different from application service provider (ASP) solutions. There are two main
explanations for this:
• ASP applications are traditional, single-tenant applications, but are hosted by a third party. They are
client/server applications with HTML frontends added to allow remote access to the application.
• ASP applications are not written as Internet-native applications. As a result, their performance may be poor, and application updates are no better than those of self-managed, premises-based applications.
Drawbacks
In all scenarios, SaaS clouds place significant reliance on consumer browsers, as most of the computation is done on the provider side. This raises a number of issues and concerns.
Lack of 100% Security. Although browsers encrypt their communications with cloud providers, subtle
disclosures of information are still possible. For example, the very presence or absence of message
traffic, or the sizes of messages sent, or the originating locations may leak information that is indirect but
still of importance to some consumers. Moreover man-in-the-middle attacks on the cryptographic
protocols used by browsers can allow an attacker to hijack a consumer's cloud resources.
Browser Dependence. If a consumer visits a malicious Web site and the browser becomes
contaminated, subsequent access to a SaaS application might compromise the consumer's data. Data
from different SaaS applications might be inadvertently mixed on consumer systems within consumer
Web browsers.
Network Dependence - In the public SaaS cloud scenario, the network's reliability cannot be guaranteed
either by the cloud consumer or by the cloud provider as the Internet is not controlled by either one.
No Portability. Formats for exporting and importing data may not be entirely compatible between SaaS
clouds. Customized workflow and business rules, user interface and application settings, support scripts,
data extensions, and add-ons developed over time can also be vendor specific and not easily transferable.
Main providers
1. Salesforce.com
2. Google Apps
3. Zoho.com
BUSINESS PROCESS AS A SERVICE (BPAAS)
In today‟s challenging and complex business environment, firms need streamlined business processes in
order to run efficient and sustained operations. Business process management (BPM) is very critical to a
firm because it helps to create efficient and effective workflow processes that integrate with different
functions of the firm. With the advent of the internet and mobility, firms establish flexible and robust business processes so that process owners, users, and stakeholders can take advantage of the integrated and ubiquitous connectivity approach to execute business processes anywhere in the world. Business Process
as a Service (BPaaS) employs the cloud computing service model to outsource Business Process
Management (BPM) dependent on related cloud services; these include Software as a Service (SaaS),
Platform as a Service (PaaS) and Infrastructure as a Service (IaaS).
Traditional BPM Systems (BPMS) run business processes and track active instances of these
processes. A BPMS automates the workflow of a business process step by step and provides reporting on the
status of a process instance giving details on whether it is completed or stalled. In the case of a stalled
process, BPMS shows which step a process has stalled on; allowing companies to be proactive in their
approach to optimizing their processes and resolving workflow steps that may continually stall.
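The step-by-step tracking a BPMS provides can be sketched as follows; the payroll process and its step names are hypothetical:

```python
class ProcessInstance:
    """Track one business-process instance step by step and report where it
    has stalled, as a BPMS (or a BPaaS offering) would."""

    def __init__(self, steps):
        self.pending = list(steps)   # workflow steps not yet completed
        self.completed = []

    def advance(self):
        """Complete the next workflow step, if any remain."""
        if self.pending:
            self.completed.append(self.pending.pop(0))

    def status(self):
        # Report either completion or the exact step the instance is stuck on,
        # so the process owner can intervene proactively.
        return "completed" if not self.pending else f"stalled at: {self.pending[0]}"

payroll = ProcessInstance(["collect hours", "approve", "pay"])
payroll.advance()
print(payroll.status())  # stalled at: approve
```

Reporting the precise stalled step, rather than just "incomplete", is what lets companies spot workflow steps that continually stall and optimize them.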
BPaaS on the other hand is simply Business Processes uploaded to a cloud service that performs the
tasks and allows for monitoring and reporting on the workflow status of active and completed tasks. The
added advantages of BPaaS over traditional BPMS are what set it apart. Some examples of outsourcing
services available with the BPaaS model include payroll, procurement, tender and industry operation
processes. The aim of BPaaS is to reduce labour costs through increased automation of business processes, and it adheres to the usual "pay as you go" cost structure typical of cloud computing.
BPaaS differs from traditional business logic software packages in that it is specifically designed and oriented towards delivering services. BPaaS offerings therefore tend to have well-defined application interfaces
that are useable by many different businesses and offer a consistent, automated and repeatable service
assisting in the standardization of business processes. Automating business processes is not a new concept
and has been achieved in the past either manually or programmatically often incurring costly modifications
to existing ERP, CRM or other business logic software packages.
The cloud revolution has helped firms approach business transformation with radical changes to IT infrastructure and practices. IT plays a critical role in selecting the necessary infrastructure to
support the firms‟ business operations. Choosing the right platform for BPaaS depends on how well the
corporate infrastructure is architected and designed to support cloud based solutions and services.
BPaaS integrates very well with other cloud services of a hybrid cloud model thereby creating an
integrated delivery platform for efficient business process management. A hybrid cloud model is a combination of private, community, or public clouds that allows firms to build the necessary technology platform and services without worrying about infrastructure ownership, maintenance, and support.
The figure above clearly depicts how BPaaS fits well in a corporate hybrid cloud structure along with other
cloud-based services. Infrastructure as a Service (IaaS) provides the necessary computing, storage and
networking capabilities, hosted by a service provider who takes responsibility for managing, maintaining
and supporting the underlying infrastructure and offers it on demand to customers. Platform as a Service
(PaaS) offers a broad range of middleware services including integrated application development
environment, application delivery platform, and database services. Software as a Service (SaaS) offers a
wide range of software services hosted in a cloud infrastructure using a pay-per-use pricing model or
subscription service-based model.
BPaaS sits on top of the other cloud services as a robust business process management system and
allows firms to experiment with new, innovative business process ideas, thereby creating a well-integrated
business approach that helps firms establish a superior competitive advantage. Consequently, IT brings
the needed business process innovation into reality through efficient and effective IT governance, quality
assurance and control, and robust program management practices, thereby providing immense value and
helping to reap business benefits.
There is a practical reason to select a business process service. First, an organization can select a
process that matches business policy. It can then be used in many different application environments. This
ensures that a well-defined and, more importantly, consistent process exists across the organization. For
example, a company may have a complex process for processing payroll or managing shipping. This service
can be linked to other services in the cloud, such as SaaS, as well as to applications in the data center.
Like SaaS cloud services, business processes are beginning to be designed as a packaged offering
that can be used in a hybrid manner. These business processes can really be any service that can be
automated, including managing e-mail, shipping a package, or managing customer credit.
The difference between traditional packaged applications and BPaaS is that BPaaS is designed to be
service-oriented. So, BPaaS is likely to have well-defined interfaces. In addition, a BPaaS is a standardized
service for use by many different organizations.
The following characteristics define BPaaS:
• BPaaS sits on top of the other three foundational cloud services: SaaS, PaaS, and IaaS.
• A BPaaS service is configurable based on the process being designed.
• A BPaaS service must have well-defined APIs so it can be easily connected to related services.
• A BPaaS must be able to support multiple languages and multiple deployment environments, because a business cannot predict how a business process will be leveraged in the future.
• A BPaaS environment must be able to handle massive scaling. The service must be able to go from managing a few processes for a couple of customers to supporting hundreds if not thousands of customers and processes. The service accomplishes this by optimizing the underlying cloud services to support this type of elasticity and scaling.
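The characteristics above can be sketched in code. The following is a minimal, hypothetical illustration of a configurable, standardized process behind a well-defined interface; the class, field, and method names are made up for the example and do not represent any real BPaaS API.

```python
from dataclasses import dataclass

# Hypothetical sketch: a payroll process offered as a standardized service.
# Each tenant configures the same process differently, but the entry point
# (the "well-defined API") is identical for everyone.

@dataclass
class PayrollProcess:
    currency: str = "USD"      # configurable per tenant
    tax_rate: float = 0.20     # configurable per jurisdiction

    def run(self, employee: str, gross_pay: float) -> dict:
        """Well-defined entry point: same inputs and outputs for every tenant."""
        tax = round(gross_pay * self.tax_rate, 2)
        return {
            "employee": employee,
            "gross": gross_pay,
            "tax": tax,
            "net": round(gross_pay - tax, 2),
            "currency": self.currency,
        }

# Two tenants configure the same standardized process differently.
us_payroll = PayrollProcess(currency="USD", tax_rate=0.20)
uk_payroll = PayrollProcess(currency="GBP", tax_rate=0.25)
print(us_payroll.run("alice", 1000.0))
```

The point of the sketch is the separation: the process logic is shared and repeatable, while per-customer variation lives entirely in configuration.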
IDENTITY AS A SERVICE (IDAAS):
The establishment and proof of an identity is a central network function. An identity service is one
that stores the information associated with a digital entity in a form that can be queried and managed for use
in electronic transactions. Identity services have as their core functions: a data store, a query engine, and a
policy engine that maintains data integrity.
Distributed transaction systems such as internetworks or cloud computing systems magnify the
difficulties faced by identity management systems by exposing a much larger attack surface to an intruder
than a private network does. Whether it is network traffic protection, privileged resource access, or some
other defined right or privilege, the validated authorization of an object based on its identity is the central
tenet of secure network design. In this regard, establishing identity may be seen as the key to obtaining trust
and to anything that an object or entity wants to claim ownership of.
What is an identity?
An identity is a set of characteristics or attributes that make something recognizable or known. In computer
network systems, it is one's digital identity that most concerns us. A digital identity is those attributes and
metadata of an object along with a set of relationships with other objects that makes an object identifiable.
Not all objects are unique, but by definition a digital identity must be unique, if only trivially so, through the
assignment of a unique identification attribute. An identity must therefore have a context in which it exists.
This description of an identity as an object with attributes and relationships is one that programmers would
recognize. You can extend this notion to the idea of an identity having a profile, and profiling services such
as Facebook as extensions of the notion of Identity as a Service in cloud computing. An identity can
belong to a person and may include the following:
• Things you are: Biological characteristics such as age, race, gender, appearance, the pattern of blood
vessels in your eye, your fingerprints, and so forth
• Things you know: Biography, personal data such as social security numbers, PINs, where you went to
school, and so on
• Things you have: A bank account you can access, a security key you were given, objects and possessions,
and more
• Things you relate to: Your family and friends, a software license, beliefs and values, activities and
endeavors, personal selections and choices, habits and practices, an iGoogle account, and more.
To establish your identity on a network, you might be asked to provide a name and password, which
is called a single-factor authentication method. More secure authentication requires the use of at least
two-factor authentication; for example, not only a name and password (things you know) but also a transient
token number provided by a hardware key (something you have). To get to multi-factor authentication, you
might have a system that examines a biometric factor such as a fingerprint or retinal blood vessel
pattern—both of which are essentially unique things you are. Multi-factor authentication requires the outside use of a network
security or trust service, and it is in the deployment of trust services that our first and most common IDaaS
applications are employed in the cloud.
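The two-factor check described above can be sketched as follows. This is an illustrative example, not a production implementation: the token is computed RFC 6238-style (the standard behind common hardware keys and authenticator apps), but the function names, the demo secret, and the `authenticate` interface are all made up for the example.

```python
import hashlib
import hmac
import struct

# Sketch of two-factor authentication: a password (something you know)
# plus a time-based one-time token (something you have), computed
# RFC 6238-style from a shared secret.

def totp(secret, t, step=30, digits=6):
    counter = int(t) // step                      # 30-second time window
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def authenticate(password, token, stored_hash, secret, now):
    factor_know = hmac.compare_digest(
        hashlib.sha256(password.encode()).digest(), stored_hash)
    factor_have = hmac.compare_digest(token, totp(secret, now))
    return factor_know and factor_have            # both factors must pass

# RFC 6238 test vector: ASCII secret "12345678901234567890" at t=59
# yields the token "287082".
print(totp(b"12345678901234567890", 59))          # 287082
stored = hashlib.sha256(b"hunter2").digest()
print(authenticate("hunter2", totp(b"demo-secret", 100), stored,
                   b"demo-secret", 100))          # True
```

Note the use of `hmac.compare_digest` for both factors, which avoids timing side channels when comparing secrets.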
Of course, many things have digital identities. User and machine accounts, devices, and other objects
establish their identities in a number of ways. For user and machine accounts, identities are created and
stored in domain security databases that are the basis for any network domain, in directory services, and in
data stores in federated systems. Network interfaces are identified uniquely by Media Access Control
(MAC) addresses, which alternatively are referred to as Ethernet Hardware Addresses (EHAs). It is the
assignment of a network identity to a specific MAC address that allows systems to be found on networks.
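As a small illustration of the hardware identity just described, the standard library can report this machine's MAC address. Note the caveat: `uuid.getnode()` may fall back to a random 48-bit number when no hardware address can be determined.

```python
import uuid

# Sketch: reading this machine's MAC address (Ethernet Hardware Address),
# the link-layer identity described above, and formatting it in the
# conventional colon-separated byte notation.
node = uuid.getnode()                 # 48-bit hardware address as an integer
mac = ":".join(f"{(node >> shift) & 0xFF:02x}" for shift in range(40, -8, -8))
print(mac)  # e.g. 3c:22:fb:aa:bb:cc
```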
The manner in which Microsoft validates your installation of Windows and Office, called Windows
Product Activation, is instructive: it creates an identification index or profile of your system. During
activation, the following unique data items are retrieved:
• A 25-character software product key and product ID
• The uniquely assigned Global Unique Identifier or GUID
• PC manufacturer
• CPU type and serial number
• BIOS checksum
• Network adapter and its MAC address
• RAM amount
• Hard drive and volume serial number
• Optical drive
• Region and language settings and user locale
From this information, a code is calculated, checked, and entered into the registration database. Each
of these uniquely identified hardware attributes is assigned a weighting factor such that an overall summary
value can be calculated. If you change enough factors—NIC and CPU, display adapter, RAM amount, and hard
drive—you trigger a request for a reactivation based on system changes. This activation profile is also
required when you register for the Windows Genuine Advantage program. Windows Product Activation and
Windows Genuine Advantage are cloud computing applications, albeit proprietary ones. Whether people
consider these applications to be services is a point of contention.
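The weighting idea behind such an activation profile can be sketched as below. This is a hypothetical illustration only: the attribute names, weights, and threshold are invented for the example and are not Microsoft's actual scheme.

```python
import hashlib

# Hypothetical sketch: each hardware attribute contributes to a summary
# activation code with a weight, and changing enough heavily weighted
# attributes triggers a reactivation request.

WEIGHTS = {"mac": 3, "cpu_serial": 3, "hdd_serial": 3, "ram_mb": 1, "bios": 1}

def activation_code(profile):
    """Summary code derived from the weighted hardware attributes."""
    blob = "|".join(f"{k}={profile[k]}" for k in sorted(WEIGHTS))
    return hashlib.sha256(blob.encode()).hexdigest()[:12]

def needs_reactivation(old, new, threshold=4):
    """Sum the weights of changed attributes; enough change -> reactivate."""
    changed = sum(WEIGHTS[k] for k in WEIGHTS if old[k] != new[k])
    return changed >= threshold

old = {"mac": "aa:bb", "cpu_serial": "X1", "hdd_serial": "S9",
       "ram_mb": 8192, "bios": "c0ffee"}
new = dict(old, ram_mb=16384)                       # small change (weight 1)
print(needs_reactivation(old, new))                 # False
print(needs_reactivation(old, dict(new, mac="cc:dd")))  # True (1 + 3 = 4)
```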
Networked identity service classes
To validate Web sites, transactions, transaction participants, clients, and network services, various forms of
identity services have been deployed on networks. Ticket- or token-providing services, certificate servers,
and other trust mechanisms all provide identity services that can be pushed out of private networks and into
the cloud. Identity protection is one of the more expensive and complex areas of network computing. If you
think about it, requests for information on identity by personnel such as HR, managers, and others; by
systems and resources for access requests; as identification for network traffic; and the myriad other
requirements mean that a significant percentage of all network traffic is supporting an identification service.
Literally hundreds of messages on a network every minute are checking identity, and every Ethernet packet
contains header fields that are used to identify the information it contains. As systems become even more
specialized, it has become increasingly difficult to find the security experts needed to run an ID service. So
Identity as a Service, or the related hosted (managed) identity services, may be among the most valuable and
cost-effective distributed service types you can subscribe to.
Identity as a Service (IDaaS) may include any of the following:
• Authentication services (identity verification)
• Directory services
• Federated identity
• Identity governance
• Identity and profile management
• Policies, roles, and enforcement
• Risk and event monitoring, including audits
• Single sign-on services (pass-through authentication)
The sharing of any or all of these attributes over a network may be the subject of different
government regulations and in many cases must be protected so that only justifiable parties may have access
to the minimal amount that may be disclosed. This level of access defines what may be called an identity
relationship. Certain codes of conduct must be observed legally, and if not legally at the moment, then
certainly on a moral basis. Cloud computing services that don't observe these codes do so at their peril. In
working with IDaaS software, evaluate IDaaS applications on the following basis:
• User control for consent: Users control their identity and must consent to the use of their information.
• Minimal Disclosure: The minimal amount of information should be disclosed for an intended use.
• Justifiable access: Only parties who have a justified use of the information contained in a digital identity
and have a trusted identity relationship with the owner of the information may be given access to that
information.
• Directional Exposure: An ID system must support bidirectional identification for a public entity so that it
is discoverable and a unidirectional identifier for private entities, thus protecting the private ID.
• Interoperability: A cloud computing ID system must interoperate with other identity services from other
identity providers.
• Unambiguous human identification: An IDaaS application must provide an unambiguous mechanism for
allowing a human to interact with a system while protecting that user against an identity attack.
• Consistency of Service: An IDaaS service must be simple to use, consistent across all its uses, and able to
operate in different contexts using different technologies.
Federated Identity Management (FIDM)
FIDM describes the technologies and protocols that enable a user to package security credentials across
security domains. It uses the Security Assertion Markup Language (SAML) to package a user's security
credentials, as shown in the following diagram:
OpenID is a developing industry standard for authenticating “end users” by storing their digital identity in a
common format. When an identity is created in an OpenID system, that information is stored in the system
of any OpenID service provider and translated into a unique identifier. Identifiers take the form of a Uniform
Resource Locator (URL) or as an Extensible Resource Identifier (XRI) that is authenticated by that OpenID
service provider. Any software application that complies with the standard accepts an OpenID that is
authenticated by a trusted provider. A very impressive group of cloud computing vendors serve as identity
providers (or OpenID providers), including AOL, Facebook, Google, IBM, Microsoft, MySpace, Orange,
PayPal, VeriSign, LiveJournal, Ustream, Yahoo!, and others.
The OpenID standard applies to the unique identity of the URL; it is up to the service provider to
store the information and specify the forms of authentication required to successfully log onto the system.
Thus an OpenID authorization can include not only passwords, but smart cards, hardware keys, tokens, and
biometrics as well.
These are samples of trusted providers and their URL formats:
• Blogger: <username>.blogger.com or <blogid>.blogspot.com
• MySpace: myspace.com/<username>
• MyOpenID: <username>.myopenid.com
• Orange: openid.orange.fr/username or simply orange.fr/
• Verisign: <username>.pip.verisignlabs.com
• WordPress: <username>.wordpress.com
• Yahoo!: openid.yahoo.com
After you have logged onto a trusted provider, that logon may provide you access to other Web sites
that support OpenID. When you request access to a site through your browser (or another application that is
referred to as a user-agent), that site serves as the “relying party” and requests of the server or server-agent
that it verify the end-user's identifier. You won't need to log onto these other Web sites if your OpenID is
provided. Most trusted providers require that you indicate which Web sites you want to share your OpenID
identifier with, and the information is submitted automatically to the next site.
CardSpace is a Microsoft software client that is part of the company's Identity Metasystem and is built into
the Web Services Protocol Stack. This stack is built on the OASIS standards (WS-Trust, WS-Security,
WS-SecurityPolicy, and WS-MetadataExchange), so any application that conforms to the OASIS WS-*
standards can interoperate with CardSpace. CardSpace was introduced with .NET Framework 3.0 and can be
installed on Windows XP, Server 2003, and later. It is installed by default on Windows Vista and Windows 7.
A SAML assertion is a security statement in the SAML file that makes a claim regarding
authentication, attributes, or authorization. The SAML protocol request is often referred to as a query; the
three different supported query types are an authentication query, an attribute query, and an authorization
decision query. SAML requests use a SOAP binding; that is, the SAML request or response is embedded in
a SOAP wrapper within an HTTP message. SAML is used to provide a mechanism for a Web Browser
Single Sign On (SSO). In this instance, a Web browser is the user agent, which requests access to a resource
that is authorized by a SAML service provider. The service provider takes a request from a user for access to
the resource and sends an authentication request to the SAML identity provider directly from the initiating
user agent (Web browser). Figure 4.10 shows the SAML Single Sign On Request/Response mechanism.
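The authentication request at the heart of this flow is a small XML document. The sketch below builds a minimal `samlp:AuthnRequest` of the kind a service provider sends to the identity provider; the endpoint URLs are placeholders, and a real deployment would also sign the request and deliver it over a proper SAML binding.

```python
import datetime
import uuid
import xml.etree.ElementTree as ET

# SAML 2.0 namespace URIs from the OASIS specification.
SAMLP = "urn:oasis:names:tc:SAML:2.0:protocol"
SAML = "urn:oasis:names:tc:SAML:2.0:assertion"

def build_authn_request(sp_entity_id, idp_sso_url):
    """Minimal <samlp:AuthnRequest> for the Web Browser SSO flow."""
    req = ET.Element(f"{{{SAMLP}}}AuthnRequest", {
        "ID": "_" + uuid.uuid4().hex,             # unique request ID
        "Version": "2.0",
        "IssueInstant": datetime.datetime.now(datetime.timezone.utc)
                        .strftime("%Y-%m-%dT%H:%M:%SZ"),
        "Destination": idp_sso_url,
    })
    # The Issuer identifies the service provider making the request.
    ET.SubElement(req, f"{{{SAML}}}Issuer").text = sp_entity_id
    return ET.tostring(req, encoding="unicode")

xml_doc = build_authn_request("https://sp.example.com/metadata",
                              "https://idp.example.com/sso")
print(xml_doc)
```

The identity provider authenticates the user and answers with a SAML Response carrying the assertion described in the text.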
The Service Provisioning Markup Language (SPML) is another of the OASIS open standards
developed to provide for service provisioning. Provisioning is the process by which a resource is prepared
for use, reserved, accessed, used, and then released when the transaction is completed. A classic example of
provisioning a resource is the reservation and use of a phone line or a Virtual Private Network. A
provisioning system has three types of components: a Requesting Authority (RA) is the client, the
Provisioning Service Point (PSP) is the cloud component that receives the request and returns a response to
the RA, and a Provisioning Service Target (PST) is the software application upon which the provisioning
action is performed. The SPML provisioning system (which can be thought of as an architectural layer)
means that identity information need only be entered into these three components once.
Amazon Web Services Provides IAM as IDaaS
FIGURE 4.10: SAML provides a mechanism by which a service requester can use a Single Sign On
logon to access Web services securely.
SPML is used to prepare Web services and applications for use, signal that the resource is available
for use and waiting for instructions, and signal when the use or transaction has been completed. With SPML,
a system can provide automated user and system access, enforce access rights, and make cloud computing
services available across network systems. Without a provisioning system, a cloud computing system can be
very inefficient and potentially unreliable.
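The three SPML roles can be sketched as a tiny request/response exchange. The class and method names below are illustrative only, and the dictionaries stand in for real SPML messages (which are XML).

```python
# Sketch of the SPML component roles described above: a Requesting
# Authority (RA) issues a provisioning request, the Provisioning Service
# Point (PSP) receives it and returns a response, and the Provisioning
# Service Target (PST) is the application acted upon.

class ProvisioningServiceTarget:
    """PST: the managed application whose accounts are provisioned."""
    def __init__(self):
        self.accounts = {}

    def add(self, user):
        self.accounts[user] = "active"

class ProvisioningServicePoint:
    """PSP: receives RA requests, performs them against the PST, replies."""
    def __init__(self, target):
        self.target = target

    def handle(self, request):
        if request.get("op") == "add":
            self.target.add(request["user"])
            return {"status": "success", "user": request["user"]}
        return {"status": "failure"}   # unsupported operation

# The RA is simply the client issuing the request.
psp = ProvisioningServicePoint(ProvisioningServiceTarget())
print(psp.handle({"op": "add", "user": "alice"}))
```

Because identity information enters at one point and flows through these roles, it need only be entered once, which is exactly the benefit the text attributes to SPML.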
COMMUNICATION-AS-A-SERVICE (CAAS)
CaaS is an outsourced enterprise communications solution. Providers of this type of cloud-based solution
(known as CaaS vendors) are responsible for the management of hardware and software required for
delivering Voice over IP (VoIP) services, Instant Messaging (IM), and video conferencing capabilities to
their customers. This model began its evolutionary process from within the telecommunications (Telco)
industry, not unlike how the SaaS model arose from the software delivery services sector. CaaS vendors are
responsible for all of the hardware and software management consumed by their user base. CaaS vendors
typically offer guaranteed quality of service (QoS) under a service-level agreement (SLA).
A CaaS model allows a CaaS provider's business customers to selectively deploy communications
features and services throughout their company on a pay-as-you-go basis for the service(s) used. CaaS is
designed on a utility-like pricing model that provides users with comprehensive, flexible, and (usually)
simple-to-understand service plans. According to Gartner, the CaaS market is expected to total $2.3 billion
in 2011, representing a compound annual growth rate of more than 105% for the period.
CaaS service offerings are often bundled and may include integrated access to traditional voice (or
VoIP) and data, advanced unified communications functionality such as video calling, web collaboration,
chat, real-time presence and unified messaging, a handset, local and long-distance voice services, voice
mail, advanced calling features (such as caller ID, three-way and conference calling, etc.) and advanced PBX
functionality. A CaaS solution includes redundant switching, network, POP and circuit diversity, customer
premises equipment redundancy, and WAN fail-over that specifically address the needs of their customers.
All VoIP transport components are located in geographically diverse, secure data centers for high
availability and survivability.
CaaS offers flexibility and scalability that small and medium-sized businesses might not otherwise be
able to afford. CaaS service providers are usually prepared to handle peak loads for their customers by
providing services capable of allowing more capacity, devices, modes or area coverage as their customer
demand necessitates. Network capacity and feature sets can be changed dynamically, so functionality keeps
pace with consumer demand and provider-owned resources are not wasted. From the service provider
customer's perspective, there is very little to virtually no risk of the service becoming obsolete, since the
provider's responsibility is to perform periodic upgrades or replacements of hardware and software to keep
the platform technologically current.
CaaS requires little to no management oversight from customers. It eliminates the business
customer's need for any capital investment in infrastructure, and it eliminates expense for ongoing
maintenance and operations overhead for infrastructure. With a CaaS solution, customers are able to
leverage enterprise-class communication services without having to build a premises-based solution of their
own. This allows those customers to reallocate budget and personnel resources to where their business can
best use them.
Companies including AT&T, IntelePeer, Alteva and Cypress Communications offer services that fall
into this category.
Advantages of CaaS
From the handset found on each employee's desk to the PC-based software client on employee laptops, to
the VoIP private backbone, and all modes in between, every component in a CaaS solution is managed 24/7
by the CaaS vendor. As we said previously, the expense of managing a carrier-grade datacenter is shared
across the vendor's customer base, making it more economical for businesses to implement CaaS than to
build their own VoIP network. Some of the advantages of the hosted CaaS approach are described below.
Hosted and Managed Solutions
Remote management of infrastructure services provided by third parties once seemed an unacceptable
situation to most companies. However, over the past decade, with enhanced technology, networking, and
software, the attitude has changed. This is, in part, due to cost savings achieved in using those services.
However, unlike the “one-off” services offered by specialist providers, CaaS delivers a complete
communications solution that is entirely managed by a single vendor. Along with features such as VoIP and
unified communications, the integration of core PBX features with advanced functionality is managed by
one vendor, who is responsible for all of the integration and delivery of services to users.
Fully Integrated, Enterprise-Class Unified Communications
With CaaS, the vendor provides voice and data access and manages LAN/WAN, security, routers, email,
voice mail, and data storage. By managing the LAN/WAN, the vendor can guarantee consistent quality of
service from a user's desktop across the network and back. Advanced unified communications features that
are most often a part of a standard CaaS deployment include:
• Chat
• Multimedia conferencing
• Microsoft Outlook integration
• Real-time presence
• “Soft” phones (software-based telephones)
• Video calling
• Unified messaging and mobility
Providers are constantly offering new enhancements (in both performance and features) to their CaaS
services. The development process and subsequent introduction of new features in applications is much
faster, easier, and more economical than ever before. This is, in large part, because the service provider is
doing work that benefits many end users across the provider's scalable platform infrastructure. Because
many end users of the provider's service ultimately share this cost, services can be offered to individual
customers at a cost that is attractive to them.
No Capital Expenses Needed
When a business outsources its unified communications needs to a CaaS service provider, the provider
supplies a complete solution that fits the company's exact needs. Customers pay a fee (usually billed
monthly) for what they use. Customers are not required to purchase equipment, so there is no capital outlay.
Bundled in these types of services are ongoing maintenance and upgrade costs, which are incurred by the
service provider. The use of CaaS services allows companies the ability to collaborate across any
workspace. Advanced collaboration tools are now used to create high-quality, secure, adaptive work spaces
throughout any organization. This allows a company‟s workers, partners, vendors, and customers to
communicate and collaborate more effectively. Better communication allows organizations to adapt quickly
to market changes and to build competitive advantage. CaaS can also accelerate decision making within an
organization. Innovative unified communications capabilities (such as presence, instant messaging, and rich
media services) help ensure that information quickly reaches whoever needs it.
Flexible Capacity and Feature Set
When customers outsource communications services to a CaaS provider, they pay for the features they need
when they need them. The service provider can distribute the cost of services and delivery across a large
customer base. As previously stated, this makes the use of shared feature functionality more economical for
customers to implement. Economies of scale allow service providers enough flexibility that they are not tied
to a single vendor investment. They are able to leverage best-of-breed providers such as Avaya, Cisco,
Juniper, Microsoft, Nortel and ShoreTel more economically than any independent enterprise.
No Risk of Obsolescence
Rapid technology advances, predicted long ago and known as Moore's law, have brought about product
obsolescence in increasingly shorter periods of time. Moore's law describes a trend Gordon Moore recognized that has
held true since the beginning of the use of integrated circuits (ICs) in computing hardware. Since the
invention of the integrated circuit in 1958, the number of transistors that can be placed inexpensively on an
integrated circuit has increased exponentially, doubling approximately every two years. Unlike IC
components, the average life cycles for PBXs and key communications equipment and systems range
anywhere from five to 10 years. With the constant introduction of newer models for all sorts of
technology (PCs, cell phones, video software and hardware, etc.), these types of products now face much
shorter life cycles, sometimes as short as a single year. CaaS vendors must absorb this burden for the user by
continuously upgrading the equipment in their offerings to meet changing demands in the marketplace.
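The doubling described above compounds quickly, as a back-of-the-envelope projection shows. The starting figure below is public data for the Intel 4004 (about 2,300 transistors, 1971); the function name is just for the example.

```python
# Back-of-the-envelope Moore's law projection: transistor counts
# doubling every two years.
def transistors(start_count, years, doubling_period=2):
    return start_count * 2 ** (years / doubling_period)

# From ~2,300 transistors in 1971, 40 years of doubling every two years
# gives roughly 2.4 billion, the right order of magnitude for 2011-era CPUs.
print(f"{transistors(2300, 40):,.0f}")  # 2,411,724,800
```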
No Facilities and Engineering Costs Incurred
CaaS providers host all of the equipment needed to provide their services to their customers, virtually
eliminating the need for customers to maintain data center space and facilities. There is no extra expense for
the constant power consumption that such a facility would demand. Customers receive the benefit of
multiple carrier-grade data centers with full redundancy—and it's all included in the monthly payment.
Guaranteed Business Continuity
If a catastrophic event occurred at your business's physical location, would your company's disaster recovery
plan allow your business to continue operating without a break? If your business experienced a serious or
extended communications outage, how long could your company survive? For most businesses, the answer
is “not long.” Distributing risk by using geographically dispersed data centers has become the norm today. It
mitigates risk and allows companies in a location hit by a catastrophic event to recover as soon as possible.
This process is implemented by CaaS providers because most companies don't even contemplate voice
continuity if catastrophe strikes. Unlike data continuity, eliminating single points of failure for a voice
network is usually cost-prohibitive because of the large scale and management complexity of the project.
With a CaaS solution, multiple levels of redundancy are built into the system, with no single point of failure.
MONITORING-AS-A-SERVICE (MAAS)
Monitoring-as-a-Service (MaaS) is the outsourced provisioning of security, primarily on business platforms
that leverage the Internet to conduct business. MaaS has become increasingly popular over the last decade.
Since the advent of cloud computing, its popularity has grown even more. Security monitoring involves
protecting an enterprise or government client from cyber threats. A security team plays a crucial role in
securing and maintaining the confidentiality, integrity, and availability of IT assets. However, time and
resource constraints limit security operations and their effectiveness for most companies. Effective security
nonetheless requires constant vigilance over the security infrastructure and critical information assets.
Many industry regulations require organizations to monitor their security environment, server logs,
and other information assets to ensure the integrity of these systems. However, conducting effective security
monitoring can be a daunting task because it requires advanced technology, skilled security experts, and
scalable processes—none of which come cheap. MaaS security monitoring services offer real-time, 24/7
monitoring and nearly immediate incident response across a security infrastructure—they help to protect
critical information assets of their customers. Prior to the advent of electronic security systems, security
monitoring and response were heavily dependent on human resources and human capabilities, which also
limited the accuracy and effectiveness of monitoring efforts. Over the past two decades, the adoption of
information technology into facility security systems, and their ability to be connected to security operations
centers (SOCs) via corporate networks, has significantly changed that picture. This means two important
things: (1) The total cost of ownership (TCO) for traditional SOCs is much higher than for a
modern-technology SOC; and (2) achieving lower security operations costs and higher security effectiveness
means that modern SOC architecture must use security and IT technology to address security risks.
Protection against Internal and External Threats
SOC-based security monitoring services can improve the effectiveness of a customer security infrastructure
by actively analyzing logs and alerts from infrastructure devices around the clock and in real time.
Monitoring teams correlate information from various security devices to provide security analysts with the
data they need to eliminate false positives and respond to true threats against the enterprise. Having
consistent access to the skills needed to maintain the level of service an organization requires for enterprise-
level monitoring is a huge issue. The information security team can assess system performance on a
periodically recurring basis and provide recommendations for improvements as needed. Typical services
provided by many MaaS vendors are described below.
Early Detection
An early detection service detects and reports new security vulnerabilities shortly after they appear.
Generally, the threats are correlated with third-party sources, and an alert or report is issued to customers.
This report is usually sent by email to the person designated by the company. Security vulnerability reports,
aside from containing a detailed description of the vulnerability and the platforms affected, also include
information on the impact the exploitation of this vulnerability would have on the systems or applications
previously selected by the company receiving the report. Most often, the report also indicates specific
actions to be taken to minimize the effect of the vulnerability, if that is known.
Platform, Control, and Services Monitoring
Platform, control, and services monitoring is often implemented as a dashboard interface and makes it
possible to know the operational status of the platform being monitored at any time. It is accessible from a
web interface, making remote access possible. Each operational element that is monitored usually provides
an operational status indicator, always taking into account the critical impact of each element. This service
aids in determining which elements may be operating at or near capacity or beyond the limits of established
parameters. By detecting and identifying such problems early, preventive measures can be taken to avoid loss of service.
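The status-indicator logic described above can be sketched in a few lines. This is an illustrative example only; the element names and the 80%/95% thresholds are invented, not taken from any particular MaaS product.

```python
# Illustrative sketch: classify monitored elements by utilization against
# thresholds, as a dashboard service might. All names and values are hypothetical.

def status_of(utilization, warn=0.80, critical=0.95):
    """Map a 0.0-1.0 utilization figure to an operational status indicator."""
    if utilization >= critical:
        return "CRITICAL"   # beyond established limits: act now
    if utilization >= warn:
        return "WARNING"    # at or near capacity: plan preventive measures
    return "OK"

def dashboard(elements):
    """elements: {name: utilization}; returns {name: status}."""
    return {name: status_of(u) for name, u in elements.items()}

print(dashboard({"web-tier": 0.55, "db-storage": 0.83, "mail-queue": 0.97}))
```

A real monitoring service would feed live metrics into such a classification rather than fixed numbers, but the thresholding idea is the same.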
Intelligent Log Centralization and Analysis
Intelligent log centralization and analysis is a monitoring solution based mainly on the correlation and
matching of log entries. Such analysis helps to establish a baseline of operational performance and provides an index of security threats. Alarms can be raised in the event an incident moves the established baseline
parameters beyond a stipulated threshold. These types of sophisticated tools are used by a team of security
experts who are responsible for incident response once such a threshold has been crossed and the threat has
generated an alarm or warning picked up by security analysts monitoring the systems.
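The baseline-and-threshold idea above can be sketched as follows. This is a simplified illustration, assuming the correlated log entries have already been reduced to an hourly metric (here, failed log-ins); all numbers are invented.

```python
# Hedged sketch of baseline-plus-threshold alarming over a metric derived
# from correlated log entries (e.g., failed log-ins per hour). Illustrative only.
from statistics import mean, stdev

def alarms(history, current, k=3.0):
    """Raise an alarm when `current` deviates from the historical baseline
    by more than k standard deviations."""
    baseline, spread = mean(history), stdev(history)
    return abs(current - baseline) > k * spread

history = [12, 15, 11, 14, 13, 12, 16, 14]  # normal hourly failed-login counts
print(alarms(history, 13))   # within baseline -> False
print(alarms(history, 90))   # far beyond the stipulated threshold -> True
```

Real correlation engines use far richer models, but the principle of comparing incidents against an established baseline is the one described in the text.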
Vulnerability Detection and Management
Vulnerability detection and management enables automated verification and management of the security
level of information systems. The service periodically performs a series of automated tests for the purpose of
identifying system weaknesses that may be exposed over the Internet, including the possibility of
unauthorized access to administrative services, the existence of services that have not been updated, the
detection of vulnerabilities such as phishing, etc. The service performs periodic follow-up of tasks
performed by security professionals managing information systems security and provides reports that can be
used to implement a plan for continuous improvement of the system's security level.
Continuous System Patching/Upgrade and Fortification
Security posture is enhanced with continuous system patching and upgrading of systems and application
software. New patches, updates, and service packs for the equipment's operating system are necessary to
maintain adequate security levels and support new versions of installed products. Keeping abreast of all the
changes to all the software and hardware requires a committed effort to stay informed and to communicate
gaps in security that can appear in installed systems and applications.
Intervention, Forensics, and Help Desk Services
Quick intervention when a threat is detected is crucial to mitigating the effects of a threat. This requires
security engineers with ample knowledge in the various technologies and with the ability to support
applications as well as infrastructures on a 24/7 basis. MaaS platforms routinely provide this service to their
customers. When a detected threat is analyzed, it often requires forensic analysis to determine what it is,
how much effort it will take to fix the problem, and what effects are likely to be seen. When problems are
encountered, the first thing customers tend to do is pick up the phone. Help desk services provide assistance
on questions or issues about the operation of running systems. This service includes assistance in writing
failure reports, managing operating problems, etc.
Delivering Business Value
Some consider balancing the overall economic impact of any build-versus-buy decision a more significant measure than simply calculating a return on investment (ROI). The key cost categories that are most often
associated with MaaS are (1) service fees for security event monitoring for all firewalls and intrusion
detection devices, servers, and routers; (2) internal account maintenance and administration costs; and (3)
preplanning and development costs. Based on the total cost of ownership, whenever a customer evaluates
the option of an in-house security information monitoring team and infrastructure compared to outsourcing
to a service provider, it does not take long to realize that establishing and maintaining an in-house capability
is not as attractive as outsourcing the service to a provider with an existing infrastructure. Having an in-
house security operations center forces a company to deal with issues such as staff attrition, scheduling,
around the clock operations, etc.
Losses incurred from external and internal incidents are extremely significant, as evidenced by a
regular stream of high-profile cases in the news. The generally accepted method of valuing the risk of losses
from external and internal incidents is to look at the amount of a potential loss, assume a frequency of loss,
and estimate a probability for incurring the loss. Although this method is not perfect, it provides a means for
tracking information security metrics. Risk is used as a filter to capture uncertainty about varying cost and
benefit estimates. If a risk-adjusted ROI demonstrates a compelling business case, it raises confidence that
the investment is likely to succeed because the risks that threaten the project have been considered and
quantified. Flexibility represents an investment in additional capacity or agility today that can be turned into
future business benefits at some additional cost. This provides an organization with the ability to engage in
future initiatives, but not the obligation to do so. The value of flexibility is unique to each organization, and
willingness to measure its value varies from company to company.
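The loss-valuation method described above (potential loss amount, assumed frequency, estimated probability) can be sketched as simple arithmetic. All figures below are invented for illustration.

```python
# Sketch of the generally accepted loss-valuation method: expected annual
# loss = potential loss amount x assumed annual frequency x estimated
# probability of incurring the loss. All figures are invented examples.

def expected_annual_loss(amount, frequency_per_year, probability):
    return amount * frequency_per_year * probability

def risk_adjusted_roi(benefit, cost, avoided_loss):
    """Treat avoided expected losses as part of the return on a monitoring
    investment; a compelling (positive) result supports the business case."""
    return (benefit + avoided_loss - cost) / cost

eal = expected_annual_loss(amount=250_000, frequency_per_year=2, probability=0.1)
print(round(eal, 2))                                                     # 50000.0
print(risk_adjusted_roi(benefit=20_000, cost=40_000, avoided_loss=eal))  # 0.75
```

As the text notes, this method is imperfect, but it gives a quantified filter for deciding whether an in-house or outsourced monitoring investment is justified.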
Real-Time Log Monitoring Enables Compliance
Security monitoring services can also help customers comply with industry regulations by automating the
collection and reporting of specific events of interest, such as log-in failures. Regulations and industry
guidelines often require log monitoring of critical servers to ensure the integrity of confidential data. MaaS providers' security monitoring services automate this time-consuming process.
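The kind of automated collection and reporting mentioned above can be sketched as a small log scan. The log format and event name below are assumptions for illustration, not any provider's actual format.

```python
# Illustrative sketch: automate collection and reporting of "events of
# interest" (here, log-in failures) from raw log lines, as a compliance
# report might. The 'timestamp user EVENT' format is an assumption.
from collections import Counter

def login_failure_report(log_lines):
    """Count log-in failures per user from 'timestamp user EVENT' lines."""
    failures = Counter()
    for line in log_lines:
        _, user, event = line.split()
        if event == "LOGIN_FAILURE":
            failures[user] += 1
    return dict(failures)

logs = [
    "09:01 alice LOGIN_OK",
    "09:02 bob LOGIN_FAILURE",
    "09:03 bob LOGIN_FAILURE",
    "09:05 carol LOGIN_FAILURE",
]
print(login_failure_report(logs))   # {'bob': 2, 'carol': 1}
```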
Providers:
Commercial: CloudWatch, AzureWatch, CloudKick, CloudStatus, Nimsoft, Monitis, LogicMonitor, Aneka.
Open Source: Hyperic-HQ, OpenNebula, CloudStack, ZenPack, Nimbus, PCMONS, DARGOS, Sensu.
STORAGE AS A SERVICE:
Cloud data storage is a critical component in the cloud computing model; without cloud storage, there can be
no cloud service. A storage cloud provides storage as a service to storage consumers. A storage cloud can be
used to support a diverse range of storage needs, including mass data stores, file shares, backup, archive, and
more. Implementations range from public user data stores to large private storage area networks (SAN) or
network-attached storage (NAS), hosted in-house or at third-party managed facilities. The following
examples are publicly available storage clouds:
IBM SmartCloud offers a variety of storage options, including archive, backup, and object storage.
SkyDrive from Microsoft allows the public to store and share nominated files on the Microsoft public
storage cloud service.
Email services, such as Hotmail, Gmail, and Yahoo, store user email and attachments in their
respective storage clouds.
Facebook and YouTube allow users to store and share photos and videos.
Storage cloud capability can also be offered in the form of storage as a service, where you pay based on the
amount of storage space used. There are various ways a storage cloud can be used, based on your
organization's specific requirements. Figure 2-1 describes how various electronic or portable devices can
access storage through the Internet without necessarily knowing the explicit details of the type or location of
storage that is used underneath.
Storage usage differences within a storage cloud infrastructure
Within a cloud infrastructure, a useful distinction can be made between how storage capacity is used, similar to the difference that exists in traditional IT between system data (files, libraries, utilities, and so on) and application data and user files. This distinction becomes important for storage allocation in virtual server implementations.
Storage as cloud: A storage cloud exhibits the characteristics that are essential to any cloud service (self-service provisioning, Internet and intranet accessibility, pooled resources, elasticity, and metering). It is a cloud
environment on which the offered services provide the ability to store and retrieve data on behalf of
computing processes that are not part of the storage cloud service. A storage cloud can be used in
combination with a compute cloud, a private compute facility, or as storage for a computing device. Storage
in a storage cloud can be categorized as follows:
Hosted storage: This category is primary storage for block or file data that can be written and read on demand, and is generally provisioned as higher-performance, higher-availability storage.
Reference storage: This category is fixed-content storage to which blocks or files are typically written once and read many times. Examples of data typically residing on reference storage include multimedia, archival data, medical imaging, surveillance data, log files, and others.
Storage for cloud: Storage for cloud is a general name for the type of storage environment, implemented in cloud computing, that is required to provision cloud computing services. For example, when
a virtual server machine is created, some storage capacity is required. This storage is provisioned as part of
the virtual machine creation process to support the operating system and runtime environment for the
instance. It is not delivered by a storage cloud. However, it may be provisioned from the same storage
infrastructure as a storage cloud. The types of storage provisioned for a cloud service can be categorized as
follows:
Ephemeral storage: This storage is required only while a virtual machine is running. It is freed from use
and made available to the storage pool when the virtual machine is shut down. Examples of this category of
storage include boot volumes, page files, and other temporary data.
Persistent storage: This storage is required across virtual machine reboots. It is retained even when a virtual machine is shut down. It includes “gold” (master template) images, system customizations, and user data.
Figure 2-2 Storage categories used in cloud
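The ephemeral-versus-persistent distinction can be summarized in a short sketch: on virtual machine shutdown, ephemeral volumes are freed back to the pool while persistent volumes are retained. The volume names are hypothetical.

```python
# Minimal sketch of the ephemeral-vs-persistent distinction: on VM shutdown,
# ephemeral volumes return to the free pool while persistent volumes are
# retained. Volume names are hypothetical.

def shut_down(vm_volumes):
    """Partition a VM's volumes into (freed, retained) on shutdown."""
    freed = [v for v, kind in vm_volumes.items() if kind == "ephemeral"]
    retained = [v for v, kind in vm_volumes.items() if kind == "persistent"]
    return freed, retained

volumes = {"boot": "ephemeral", "pagefile": "ephemeral", "user-data": "persistent"}
print(shut_down(volumes))   # (['boot', 'pagefile'], ['user-data'])
```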
Traditional storage versus storage cloud
This section compares the various challenges of traditional and cloud storage, outlines the advantages of
cloud storage, and explains key implementation considerations for potential storage cloud infrastructure
deployments.
Challenges of traditional storage
Before exploring the advantages and benefits of storage cloud, we list several limitations of current IT
infrastructure that businesses deal with on a daily basis. This categorization is at a high level; challenges in one category can sometimes apply to other categories.
1) Constrained business agility
The time required to provision storage capacity for new projects or unexpectedly rapid growth affects an
organization's ability to quickly react to changing business conditions. This situation can often negatively
affect the ability to develop and deliver products and services within competitive time-to-market targets. The
following constraints are examples:
Time required to deploy new or upgraded business functions
Downtime required for data migration and technology refresh
Unplanned storage capacity acquisitions
Staffing limitations
Often, substantial reserve capacity is required to support growth, which demands planning and investment far in advance of the actual need to store data. This is because the infrastructure cannot easily scale up to the needed additional capacity, owing to an inability to seamlessly add the required storage resources. This key issue makes it more difficult to cope with rapidly changing business environments, adversely affecting the
ability to make better decisions more rapidly and proactively optimize processes with more predictable
outcomes.
2) Sub-optimal utilization of IT resources
The variation in workloads and the difficulty in determining future requirements typically results in IT
storage capacity inefficiencies:
Difficulty in predicting future capacity and service-level needs
Peaks and valleys in resource requirements
Over- and under-provisioning of IT resources
Extensive capacity planning effort is needed to plan for varying future storage capacity and service level
requirements. Capacity is often underutilized as the storage infrastructure requires reserve capacity for
unpredictable future growth requirements and therefore cannot be easily scaled up or down. Compounding
these issues is the frequent inability to seamlessly provision additional storage capacity without impacting
application uptime.
3) Organizational constraints
Another barrier to efficient use of resources can be traced to artificial resource acquisition, ownership, and
operational practices:
Project-oriented infrastructure funding
Constrained operational budgets
Difficulty implementing resource sharing
No chargeback or showback mechanism as an incentive for IT resource conservation
The limited ability to share data across the enterprise, especially in the context of interdepartmental sharing, can degrade overall use of IT resources, including storage capacity. Parallel performance requirements in existing storage systems result in one node supporting one disk, leading to a multiplication of nodes and servers.
4) IT resource management
Efficient IT support is based on cost-effective infrastructure and service-level management to address
business needs.
Rapid capacity growth
Cost control
Service-level monitoring and support (performance, availability, capacity, security, retention, and
more)
Architectural open standardization
The continued growth of resource management complexity in the storage infrastructure is often based on a
lack of standardization and high levels of configuration customization. For example, adjusting storage
performance through multiple RAID settings and manually tuning the distribution of I/O loads across various
storage arrays consumes valuable staff resources. Sometimes, the desire to avoid vendor lock-in because of
proprietary protocols for data access also creates tremendous pressure on storage resource management.
Other challenges are related to managing and meeting stringent SLA requirements and lack of enough in-house expertise to manage complex storage infrastructures. New service levels, adjusting existing SLAs to
align IT disaster recovery, business resilience requirements, and high-availability solutions are also factors.
Duplicate data existing in the form of copies across organizational islands within the enterprise leads
to higher costs for data storage and also backup infrastructure. Compounding all of this are ever-shrinking
operational and project budgets, and lack of dynamic chargeback or showback models as incentives for IT
resource conservation.
ADVANTAGES OF A STORAGE CLOUD:
Storage cloud has redefined the way storage consumers can do business, especially those who have seasonal
or unpredictable capacity requirements, and those requiring rapid deployment or contraction of storage
capacity. Storage cloud can help them focus more on their core business and worry less about supporting a
storage infrastructure for their data.
Here are the advantages:
Facilitates rapid capacity provisioning supporting business agility
Improves storage utilization by avoiding unused capacity
Supports storage consolidation and storage virtualization functionality
Chargeback and showback accounting for usage as an incentive to conserve resources
Storage cloud helps companies become more flexible and agile, and supports their growth. Improving quality of service (QoS) by automating the provisioning and management of the underlying complex storage infrastructure raises the overall efficiency of IT storage.
Benefits and features of storage cloud
The overall benefits of storage cloud vary significantly based on the underlying storage infrastructure.
Storage cloud can help businesses achieve more effective functionality at lower cost while improving
business agility and reducing project scheduling risk. Figure 2-4 identifies basic differences between the
traditional IT model and a storage cloud model.
1) Dynamic scaling and provisioning (elasticity)
One of the key advantages of storage cloud is dynamic scaling, also known as elasticity. Elasticity means
that storage resources can be dynamically allocated (scaled up) or released (scaled down) based on business
needs. Traditional IT storage administration most often acquires the capacity needed within the next year or two, which necessarily means this reserve capacity will be idle or underutilized for some period of time. A storage cloud can start small and grow incrementally with business requirements, or even shrink in size to lower costs if appropriate to capacity demands. For this key reason, storage cloud can support a company's growth while reducing net capital investment in storage.
2) Faster deployment of storage resources
New enterprise storage resources can be provisioned and deployed in minutes compared to less optimized
traditional IT, which typically takes more time, sometimes days or even months.
3) Reduction in TCO and better ROI
Enterprise storage virtualization and consolidation lowers infrastructure total cost of ownership (TCO)
significantly, with centralized storage capacity and management driving improved usage and efficiency,
generally providing a significantly higher return on investment (ROI) through storage capacity cost
avoidance. In addition, savings can be gained because of reduced floor space, energy required for cooling,
labor costs, and also support and maintenance. This gain can be important where storage costs grow faster
than revenues and directly affect profitability.
4) Reduced cost of managing storage
Virtualization helps in consolidating storage capacity and helps achieve much higher utilization, thereby significantly reducing the capital expenditure on storage and its management.
5) Greener data centers
By consolidating geographically dispersed storage into fewer data centers, you achieve a smaller footprint in terms of rack space and can save on energy (electrical power) and charges for infrastructure space, which also improves TCO and ROI.
6) Dynamic, flexible chargeback model (pay-per-use)
By implementing storage cloud, an organization pays only for the amount of storage that is actually used rather than paying for incremental spare capacity, which remains idle until needed. This model can
provide an enterprise with enormous benefits financially. Savings can also be realized from hardware and
software licensing for functionality such as replication and point-in-time copy.
7) Multiuser file sharing
By centralizing the storage infrastructure, all users can have parallel and simultaneous access to all the data
across the enterprise rather than dealing with isolated islands of data. This also helps in collaboration and
file sharing with higher data access rates.
INFRASTRUCTURE AS A SERVICE: AMAZON EC2
The Amazon cloud provides infrastructure as a service (IaaS), whereby computing infrastructure such as servers, storage, or network endpoints of a desired capacity is virtually provisioned in minutes through an automated web-based management console. This core IaaS service, called Elastic Compute Cloud, or EC2,
is but one of a set of services that constitute the Amazon cloud platform, but the term EC2 is also often used
to describe the entire cloud offering. Figure 5.1 illustrates the services provided by the Amazon
infrastructure cloud from a user perspective. These services are implemented on a very large network of
servers, shown as dashed boxes in the figure. The Elastic Compute Cloud service provides users access to
dedicated virtual machines of a desired capacity that are provisioned on these physical servers, with details
of the actual physical server, such as its location, capacity, etc. being transparent to the end-user. Through
the management console, users generate PKI key-pairs with which they can securely log in to these virtual servers over the internet. In Figure 5.1, user C provisions the virtual server VM4 through the management console and accesses it using ssh, for a Linux server, or via 'remote desktop' for a Windows server. Users
have a choice of virtual machine images (called Amazon machine images, or AMIs) to choose from when
provisioning a server. All AMIs are stored in common storage in the Amazon S3 storage service (which we
shall return to below), and used to boot the desired configuration of virtual server. The user's account is charged on an hourly basis based on actual consumption, i.e., the time that the server is up. Charges vary depending on the AMI used and the capacity of server chosen while provisioning. For example, a 'small' Linux server costs a few cents per CPU-hour, whereas a larger server preloaded with licensed software, such as
Windows, as well as other database or middleware products, could end up costing close to a dollar per hour.
Cloud users have root/administrator access to these servers, and therefore control them completely. For
example, they can deploy applications and make them publicly accessible over the internet. Static network
addresses required for such purposes can also be provisioned through the management console. Thus, VM4
is also accessible by internet users at large over HTTP. Such publicly available static IP addresses are
charged on a fixed monthly basis; note that network data transfer to any server, with or without a static IP address, is charged on a usage basis, at rates of a few cents per gigabyte transferred.
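The charging model just described (an hourly server rate while up, a fixed monthly fee per static IP, and a per-gigabyte transfer charge) can be combined into a back-of-envelope estimate. The rates below are invented examples, not actual Amazon prices.

```python
# Back-of-envelope EC2-style bill following the charging model described
# above: hourly rate while the server is up, a flat monthly fee per static IP,
# and a per-gigabyte data-transfer charge. All rates are hypothetical.

def monthly_bill(hours_up, hourly_rate, static_ips, ip_fee, gb_transferred, gb_rate):
    return hours_up * hourly_rate + static_ips * ip_fee + gb_transferred * gb_rate

cost = monthly_bill(hours_up=720, hourly_rate=0.10,   # small server, up all month
                    static_ips=1, ip_fee=3.00,
                    gb_transferred=50, gb_rate=0.05)
print(round(cost, 2))   # 77.5
```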
Users can provision and access many servers that can communicate with each other over the fast
internal network within the Amazon cloud. For example, user C in Figure 5.1 has provisioned VM5 and
VM6 in addition to VM4. If VM4 is a web server, VM5 may be a database server, and these two
communicate over the internal cloud network using TCP/IP. Since VM5 is a database server, it needs to
store and retrieve data. The Amazon SimpleDB service provides an object store where key-value pairs can
be efficiently stored and retrieved. Instead of using SimpleDB, virtual servers could instead use a relational
database system, which may come either pre-installed as part of the AMI, or separately by users in the
normal manner. However, it is important to understand that virtual servers do not have any persistent
storage; any user data on the file system (i.e., whatever is not part of the AMI) is lost when the server shuts down. In order to store data persistently, such as in a relational database, Elastic Block Storage (EBS) needs to be mounted on a virtual server. The Elastic Block Storage service maintains persistent data across all users on a large set of physical servers. After a virtual server boots, it must attach user data from the EBS as a logical storage volume mounted as a raw device (disk).
Any database service, or for that matter any application relying on persistent data, can be run once
this step is performed. In our illustration in Figure 5.1, VM6 might be an archival server where VM5 sends logs of whatever updates it makes to the SimpleDB data store. Note that VM6 has mounted a logical volume
D6, where it possibly maintains archived data. Now notice that VM5 sends data to VM6 not over TCP/IP,
but using the Amazon Simple Queue Service (SQS). The SQS is a reliable persistent message queue that is
useful for temporarily storing data that needs to eventually get to a processing server such as VM6, but in a
manner that does not rely on VM6 always being available. Thus, VM6 may be booted, say, only on a daily
basis, when all it does is process the messages waiting for it in the SQS and log them in its persistent
database. Usage of the SQS is charged based on data volumes and how long data resides in the queue. Thus,
VM5 need not concern itself with archiving apart from logging data in the SQS, and VM6 needs to be up
only when required. SQS is normally used for managing such asynchronous transfer of data between
processing servers in a batch-oriented workflow.
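The producer/consumer pattern above can be sketched with Python's standard queue module standing in for SQS; the real service is persistent and network-accessible, which this toy queue is not.

```python
# Sketch of the SQS pattern described above using Python's standard queue as
# a stand-in: VM5 logs update records into the queue; VM6, booted later,
# drains and archives whatever is waiting. Names VM5/VM6 follow the text.
import queue

sqs = queue.Queue()            # stand-in for the Amazon Simple Queue Service

def vm5_log_update(record):
    sqs.put(record)            # producer never waits for the consumer

def vm6_daily_batch(archive):
    """Consumer runs only when booted; processes everything queued so far."""
    while not sqs.empty():
        archive.append(sqs.get())

vm5_log_update("update #1")
vm5_log_update("update #2")
archive = []
vm6_daily_batch(archive)       # VM6 comes up later and drains the queue
print(archive)                 # ['update #1', 'update #2']
```

The point, as in the text, is that the producer and consumer never need to be up at the same time.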
Persistent storage in the EBS as described above can be accessed only if it is attached to a running
virtual server. Further, any other servers can access this data only via the server where the EBS is attached.
The Amazon S3 Storage Service provides a different storage model. Data in S3 can be files of any type, and
in general any blob (binary large object). Users access and modify S3 objects via URIs, using REST web
services. S3 objects are accessible over the internet as well as from virtual servers within the Amazon cloud.
S3 is especially useful for reliably storing large collections of unstructured data that need to be accessed by
many client applications. It is important to note that all data in S3 is automatically replicated at least three
times for fault tolerance. The S3 storage model provides 'eventual' consistency across replicas: a write may return while data has not yet propagated to all replicas, so some clients may still read old data; eventually, however, all replicas will be updated. This consistency model and its underlying implementation architecture are also shared by SimpleDB. Storage in S3 is also used for storing machine images
(AMIs) that users define themselves, either from scratch by packaging OS and application files from their
own physical servers, or by 'deriving' from an already available AMI. Such images can also be made
available to other users or to the public at large. Further, such sharing can be combined with the Amazon
payments gateway through a DevPay agreement whereby a portion of the charges paid by users of such
AMIs are credited to the AMI creator's account. Thus, DevPay-based sharing of AMIs in S3 has created a new software distribution channel, and many industry-standard database and middleware packages, such as those from Oracle or IBM, are now available in this manner. The mechanism is also secure in that 'derived' AMIs still maintain their DevPay lineage and are charged appropriately.
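The 'eventual' consistency behavior described for S3 can be illustrated with a toy replica model: a write returns after reaching one replica, and the others catch up later, so a reader may briefly see stale data. This is a deliberate simplification of the real mechanism.

```python
# Toy model of 'eventual' consistency: a write returns after updating one
# replica; the others catch up later, so a read may still see old data until
# propagation completes. Purely illustrative, not Amazon's implementation.

class EventuallyConsistentStore:
    def __init__(self, replicas=3):
        self.replicas = [{} for _ in range(replicas)]

    def write(self, key, value):
        self.replicas[0][key] = value      # returns before full propagation

    def read(self, key, replica):
        return self.replicas[replica].get(key)

    def propagate(self):
        for r in self.replicas[1:]:        # background replication catches up
            r.update(self.replicas[0])

store = EventuallyConsistentStore()
store.write("photo.jpg", "v2")
print(store.read("photo.jpg", replica=2))   # None: stale replica, old view
store.propagate()
print(store.read("photo.jpg", replica=2))   # 'v2': replicas eventually agree
```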
Finally, Elastic Load Balancing allows a group of servers to be configured into a set across which
incoming requests (e.g., HTTP connections) are load balanced. The performance statistics of the load-balanced requests can also be monitored by CloudWatch and used by Auto Scale to add or remove servers from the load-balanced cluster. Using these tools, users can configure a scalable architecture that can also elastically adjust its resource consumption. It remains the user's responsibility to configure a scalable cluster for each of these layers, define what performance parameters need to be monitored in CloudWatch, and set the Auto Scale parameters for each cluster. Network security is an important element of these concerns: an enterprise's computing resources are usually protected by firewalls, proxy servers, intrusion detection
systems etc. Naturally, enterprise security requires that virtual servers running in the cloud also be protected
in this manner, using the same policies and safeguards that apply to any resources in their own data centers.
Amazon EC2 provides a Virtual Private Cloud service, whereby virtual servers can be connected to an
enterprise's internal network using a VPN (virtual private network). For example, users A and B in Figure
5.1 access virtual servers VM1, VM2 and VM3 through a VPN running over the public internet.
Integration with Other Amazon Web Services
Amazon EC2 works in conjunction with a variety of other Amazon web services. For example, Amazon
Simple Storage Service (Amazon S3), Amazon SimpleDB, Amazon Simple Queue Service (Amazon SQS),
and Amazon CloudFront are all integrated to provide a complete solution for computing, query processing,
and storage across a wide range of applications.
Amazon S3 provides a web services interface that allows users to store and retrieve any amount of data from the Internet at any time, from anywhere.
Amazon SimpleDB is another web-based service, designed for running queries on structured data
stored with the Amazon Simple Storage Service (Amazon S3) in real time.
Amazon Simple Queue Service (Amazon SQS) is a reliable, scalable, hosted queue for storing
messages as they pass between computers.
Amazon Elastic Block Store (Amazon EBS) is yet another Amazon EC2 feature; it provides the reliable, resilient, persistent storage needed to build failure-resilient applications on Amazon EC2 instances.
PLATFORM AS A SERVICE: GOOGLE APP ENGINE
The Google cloud, called Google App Engine, is a 'platform as a service' (PaaS) offering. In contrast with the Amazon infrastructure-as-a-service cloud, where users are explicitly provisioned virtual machines and control them fully, including installing, compiling, and running software on them, a PaaS offering hides the actual execution environment from users. Instead, a software platform is provided along
with an SDK, using which users develop applications and deploy them on the cloud. The PaaS platform is
responsible for executing the applications, including servicing external service requests, as well as running
scheduled jobs included in the application. By making the actual execution servers transparent to the user, a
PaaS platform is able to share application servers across users who need lower capacities, as well as
automatically scale resources allocated to applications that experience heavy loads. Figure 5.2 depicts a user
view of Google App Engine. Users upload code, in either Java or Python, along with related files, which are stored on the Google File System (GFS), a very large-scale, fault-tolerant, and redundant storage system. It is
important to note that an application is immediately available on the internet as soon as it is successfully
uploaded.
Resource usage for an application is metered in terms of web requests served and CPU-hours
actually spent executing requests or batch jobs. Note that this is very different from the IaaS model: A PaaS
application can be deployed and made globally available 24×7, but charged only when accessed (or if batch
jobs run); in contrast, in an IaaS model merely making an application continuously available incurs the full
cost of keeping at least some of the servers running all the time. Further, deploying applications in Google
App Engine is free, within usage limits; thus applications can be developed and tried out free and begin to
incur cost only when actually accessed by a sufficient volume of requests.
The PaaS model enables Google to provide such a free service because applications do not run in
dedicated virtual machines; a deployed application that is not accessed merely consumes storage for its code
and data and expends no CPU cycles.
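The cost contrast drawn above can be sketched numerically: IaaS bills for every hour a server is up, whether or not it is used, while PaaS bills only for CPU-hours actually consumed. The rates are invented examples.

```python
# Sketch of the billing difference described above. Rates are hypothetical.

def iaas_cost(hours_up, hourly_rate):
    """IaaS: pay for every hour the server runs, even when idle."""
    return hours_up * hourly_rate

def paas_cost(cpu_hours_used, cpu_hour_rate):
    """PaaS: pay only for CPU-hours actually spent serving requests."""
    return cpu_hours_used * cpu_hour_rate

# A lightly used application, available 24x7 over a 720-hour month:
print(iaas_cost(hours_up=720, hourly_rate=0.10))                  # 72.0, mostly idle
print(round(paas_cost(cpu_hours_used=3, cpu_hour_rate=0.10), 2))  # 0.3
```

This is why, as the text notes, a rarely accessed PaaS application can be essentially free while an equivalent always-on IaaS deployment still incurs the full server cost.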
GAE applications are served by a large number of web servers in Google's data centers that execute requests from end-users across the globe. The web servers load code from the GFS into memory and serve these requests. Each request to a particular application is served by any one of GAE's web servers; there is no guarantee that the same server will serve any two requests, even from the same HTTP session.
Applications can also specify some functions to be executed as batch jobs which are run by a scheduler.
While this architecture is able to ensure that applications scale naturally as load increases, it also means that
application code cannot easily rely on in-memory data. A distributed in-memory cache called Memcache is
made available to partially address this issue: In particular HTTP sessions are implemented using Memcache
so that even if requests from the same session go to different servers they can retrieve their session data,
most of the time.
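The role Memcache plays here can be sketched in a few lines of Python. The dict below stands in for the shared cache, and the function names are invented for illustration; the real Memcache exposes a memcached-style get/set interface over the network:

```python
# Sketch: any web server can serve any request because session state
# lives in a shared cache, not in one server's memory. The module-level
# dict is a stand-in for the real distributed Memcache.

shared_cache = {}  # stand-in for Memcache, visible to all "servers"

def handle_request(server_id, session_id):
    """Serve one request on an arbitrary server, recovering the
    session from the shared cache."""
    session = shared_cache.get(session_id)
    if session is None:
        # Cache entries can be evicted, so code must tolerate misses --
        # this is why session data is available only "most of the time".
        session = {"visits": 0}
    session["visits"] += 1
    shared_cache[session_id] = session
    return server_id, session["visits"]

# Two requests from the same HTTP session hit different servers,
# yet the visit count accumulates via the shared cache.
print(handle_request("server-A", "sess-42"))  # ('server-A', 1)
print(handle_request("server-B", "sess-42"))  # ('server-B', 2)
```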
Google Datastore:
Applications persist data in the Google Datastore, which is also (like Amazon SimpleDB) a non-relational
database. The Datastore allows applications to define structured types (called 'kinds') and store their
instances (called 'entities') in a distributed manner on the GFS file system. While one can view Datastore
'kinds' as table structures and entities as records, there are important differences between a relational model
and the Datastore, some of which are also illustrated in Figure 5.3. Unlike a relational schema, where all rows
in a table have the same set of columns, all entities of a 'kind' need not have the same properties; additional
properties can be added to any entity. This feature is particularly useful in situations where one
cannot foresee all the potential properties in a model, especially those that occur only occasionally, for a
small subset of records. For example, a model that stores 'products' of different types would need to allow each
product to have a different set of features. In a relational model, this would probably be implemented using a
separate FEATURES table, as shown on the bottom left of Figure 5.3. Using the Datastore, this table
('kind') is not required; instead, each product entity can be assigned a different set of properties at runtime.
The Datastore allows simple queries with conditions, such as the first query shown in Figure 5.3, which
retrieves all customers having names in some lexicographic range. The query syntax (called GQL) is
essentially the same as SQL, but with some restrictions. For example, all inequality conditions in a query
must be on a single property; a query that also filtered customers on, say, their 'type' would therefore be
illegal in GQL but allowed in SQL.
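Both Datastore ideas above can be sketched with plain Python dicts as stand-ins for entities. The `query()` helper and its filter format are invented for illustration and are not the real Datastore or GQL API:

```python
# (1) Entities of the same 'kind' need not share the same properties.
products = [
    {"name": "laptop", "ram_gb": 16},
    {"name": "shirt", "size": "M", "colour": "blue"},
]

def query(entities, filters):
    """filters: list of (property, op, value), with op in {'=', '<', '>'}.
    Mimics the GQL rule that inequality filters may apply to only one
    property per query."""
    ineq_props = {p for p, op, _ in filters if op in ("<", ">")}
    if len(ineq_props) > 1:
        raise ValueError("inequality filters must be on a single property")
    ops = {"=": lambda a, b: a == b,
           "<": lambda a, b: a < b,
           ">": lambda a, b: a > b}
    # An entity matches only if it has the property and passes every test.
    return [e for e in entities
            if all(p in e and ops[op](e[p], v) for p, op, v in filters)]

# (2) A lexicographic range query on customer names, as in the text:
customers = [{"name": "Alice"}, {"name": "Bob"}, {"name": "Carol"}]
in_range = query(customers, [("name", ">", "A"), ("name", "<", "C")])
print([c["name"] for c in in_range])  # ['Alice', 'Bob']

# Filtering on a second inequality property -- legal in SQL -- raises
# here, mirroring the GQL restriction:
# query(customers, [("name", ">", "A"), ("type", "<", "retail")])
```

Note how the first query against `products` would simply skip entities lacking the filtered property, rather than failing on a missing column as a relational engine would.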
Force.com
Force.com is a platform as a service (PaaS) that allows developers to create multitenant add-on
applications that integrate into the main Salesforce.com application. Force.com applications are hosted on
Salesforce.com's infrastructure and are built using Apex (a proprietary Java-like programming language for
Force.com) and Visualforce (an XML-based syntax typically used to generate HTML).
The Force.com platform receives three complete releases a year. Because the platform is provided as a service
to its developers, every development instance receives all of these updates.
In the Spring 2015 release, a new framework for building user interfaces, Lightning Components, was
introduced in beta. Lightning components are built using the open-source Aura Framework, but with support
for Apex as the server-side language instead of Aura's JavaScript dependency. This has been described as an
alternative to, not necessarily a replacement for, Visualforce pages.
Apex
Apex is a proprietary programming language, similar to Java, provided to developers by the
Force.com platform. It is a strongly typed, object-oriented language with dot-notation and curly-bracket
syntax. Apex can be used to execute programmed functions during most processes on the Force.com
platform, including custom buttons and links, event handlers on record creation, updates or deletions, and
the custom controllers of Visualforce pages.
Due to the multitenant nature of the platform, the language has strictly imposed governor limits
to guard against any code monopolizing shared resources. Salesforce has provided a series of asynchronous
processing methods for Apex to allow developers to produce longer-running and more complex Apex code.
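The idea behind governor limits can be illustrated with a small analogy, written in Python rather than Apex. The class, the limit value and the exception name below are all invented for the sketch; the real platform tracks many such budgets (DML statements, SOQL queries, CPU time) per transaction:

```python
# Analogy for governor limits: each request gets a fixed budget of
# tracked operations, and exceeding it raises an error, so no single
# tenant's code can monopolize shared resources.

class LimitExceeded(Exception):
    """Raised when a per-request resource budget is exhausted."""
    pass

class GovernorContext:
    """Per-request resource budget, decremented on each tracked call."""
    def __init__(self, max_dml=3):
        self.remaining_dml = max_dml

    def dml(self, statement):
        if self.remaining_dml == 0:
            raise LimitExceeded("too many DML statements in one request")
        self.remaining_dml -= 1
        return f"executed: {statement}"

ctx = GovernorContext(max_dml=3)
for i in range(3):
    ctx.dml(f"insert row {i}")   # within budget

try:
    ctx.dml("insert row 3")      # fourth call exceeds the budget
except LimitExceeded as e:
    print("blocked:", e)
```

Asynchronous processing methods fit this picture naturally: by splitting work across separate requests, each chunk runs under its own fresh budget instead of exhausting one.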
Visualforce
Visualforce is the view-layer technology of the Force.com platform. It is a tag-based markup
language with structure and syntax very similar to HTML. Visualforce can be used to create entire custom
pages inside a Salesforce organisation, in conjunction with other front-end technologies such
as HTML5, CSS3 and JavaScript. One of the key benefits of Visualforce is its tight coupling to native features
of the platform, such as controller methods and data access, that would not typically be available to other
front-end technologies.