For the last couple of years the IT industry has been getting excited
and energised about Cloud. Large IT companies and consultancies have
spent, and are spending, billions of dollars, pounds and yen investing
in Cloud technologies. So, what's the deal?
While Cloud is
generating a lot more heat than light, it is, nonetheless, giving us all
something to think about and something to sell our customers. In some
respects Cloud isn't new, in other respects it's ground-breaking and
will make an undeniable change in the way that business provides users
with applications and services.
Beyond that, and it is already
happening, users will at last be able to provision their own Processing,
Memory, Storage and Network (PMSN) resources at one level, and at other
levels receive applications and services anywhere, anytime, using
(almost) any mobile technology. In short, Cloud can liberate users, make
remote working more feasible, ease IT management and move a business
from CapEx to more of an OpEx situation. If a business is receiving
applications and services from Cloud, depending on the type of Cloud, it
may not need a data centre or server-room any more. All it will need to
do is cover the costs of the applications and services that it uses.
Some in IT may perceive this as a threat, others as a liberation.
So, what is Cloud?
To
understand Cloud you need to understand the base technologies,
principles and drivers that support it and have provided a lot of the
impetus to develop it.
Virtualisation
For the last decade
the industry has been super-busy consolidating data centres and
server-rooms from racks of tin boxes to fewer racks of fewer tin boxes.
At the same time the number of applications able to exist in this new
and smaller footprint has been increasing.
Virtualisation; why do it?
Servers
hosting a single application have utilisation levels of around 15%.
That means that the server is ticking over and highly under-utilised.
The cost of data centres full of servers running at 15% is a financial
nightmare. Server utilisation of 15% can't return anything on the
initial investment for many years, if ever. Servers have a lifecycle of
about 3 years and a depreciation of about 50% out of the box. After
three years, the servers are worth next to nothing in corporate terms.
Today
we have refined tool-sets that enable us to virtualise pretty much any
server and in doing that we can create clusters of virtualised servers
that are able to host multiple applications and services. This has
brought many benefits. Higher densities of Application servers hosted on
fewer Resource servers enable the data centre to deliver more
applications and services.
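As a back-of-the-envelope illustration of why consolidation pays, the sketch below (in Python, with assumed utilisation figures rather than vendor data) works out how many virtualised hosts could carry the workload of a room full of 15%-utilised servers:

```python
import math

# Rough consolidation arithmetic; all figures are illustrative assumptions.
standalone_servers = 200     # one application per physical server
avg_utilisation = 0.15       # the ~15% utilisation figure quoted above
target_utilisation = 0.70    # an assumed, safer target for virtualised hosts

# Express the real work being done in "fully-utilised server" units
useful_work = standalone_servers * avg_utilisation          # 30.0

# Hosts needed to carry that work at the target utilisation
hosts_needed = math.ceil(useful_work / target_utilisation)  # 43
print(f"{standalone_servers} servers consolidate onto ~{hosts_needed} hosts")
```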
It's Cooler, It's Greener
Besides
the reduction in individual hardware systems through the judicious use of
virtualisation, data centre designers and hardware manufacturers have
introduced other methods and technologies to reduce the amount of power
required to cool the systems and the data centre halls. These days
servers and other hardware systems have directional air-flow. A server
may have front-to-back or back-to-front directional fans that drive the
heated air into a particular direction that suits the air-flow design of
the data centre. Air-flow is the new science in the IT industry. It is
becoming common to have a hot-aisle and a cold-aisle matrix across the
data centre hall. Having systems that can respond and participate in
that design can produce considerable savings in power requirements. The
choice of where to build a data centre is also becoming more important.
There
is also the Green agenda. Companies want to be seen to be engaging with
this new and popular movement. The amount of power needed to run large
data centres is in the Megawatt region and hardly Green. Large data
centres will always require high levels of power. Hardware manufacturers
are attempting to bring down the power requirements of their products
and data centre designers are making a big effort to make more use of
(natural) air-flow. Taken together these efforts are making a
difference. If being Green is going to save money, then it's a good
thing.
Downsides
High utilisation of hardware introduces
higher levels of failure caused, in the most part, by heat. In the case
of the 1:1 ratio (one application per server), the server is idling, cool and under-utilised and
costing more money than necessary (in terms of ROI) but will provide a
long lifecycle. In the case of virtualisation, producing higher levels
of utilisation per Host will generate a lot more heat. Heat damages
components (degradation over time) and shortens MTTF (Mean Time To
Failure) which affects TCO (Total Cost of Ownership = the bottom line)
and ROI (Return on Investment). It also raises the cooling requirement
which in turn increases power consumption. When Massively Parallel
Processing is required, and this is very much a cloud technology,
cooling and power will step up a notch. Massively Parallel Processing can
use tens of thousands of servers/VMs, large storage environments along
with complex and large networks. This level of processing will increase
energy requirements. Basically, you can't have it both ways.
Another
downside to virtualisation is VM density. Imagine 500 hardware servers,
each hosting 192 VMs. That's 96,000 Virtual Machines. The average
number of VMs per Host server is limited by the number of
vendor-recommended VMs per CPU. If a server has 16 CPUs (Cores) you
could create approximately 12 VMs per Core (this is entirely dependent
on what the VM is going to be used for). Therefore it's a simple piece
of arithmetic: 500 × 192 = 96,000 virtual machines. Architects take all
this into account when designing large virtualisation infrastructures
and make sure that VM sprawl is kept strictly under control. However, the
danger exists.
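The density arithmetic above is easy to sanity-check in a few lines of Python (the 16-core, 12-VMs-per-core figures are the ones used in the text; real ratios are workload- and vendor-dependent):

```python
# VM density arithmetic from the text above.
cores_per_host = 16     # CPUs (cores) per Host server
vms_per_core = 12       # approximate vendor-recommended ratio
hosts = 500

vms_per_host = cores_per_host * vms_per_core    # 192 VMs per Host
total_vms = hosts * vms_per_host                # the estate an architect must manage
print(f"{hosts} hosts × {vms_per_host} VMs/host = {total_vms:,} virtual machines")
```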
Virtualisation; The basics of how to do it
Take
a single computer, a server, and install software that enables the
abstraction of the underlying hardware resources: Processing, Memory,
Storage and Networking. Once you've configured this
virtualisation-capable software, you can use it to fool various
operating systems into thinking that they are being installed into a
familiar environment that they recognise. This is achieved by the
virtualisation software, which (should) contain all the necessary drivers
used by the operating system to talk to the hardware.
At the
bottom of the virtualisation stack is the Hardware Host. Install the
hypervisor on this machine. The hypervisor abstracts the hardware
resources and delivers them to the virtual machines (VMs). On the VM
install the appropriate operating system. Now install the application(s).
A single hardware Host can support a number of Guest operating systems,
or Virtual Machines, dependent on the purpose of the VM and the number
of processing cores in the Host. Each hypervisor vendor has its own
permutation of the VMs-to-Cores ratio, but it is also necessary to
understand exactly what the VMs are going to support to be able to
calculate the provisioning of the VMs. Sizing/Provisioning virtual
infrastructures is the new black-art in IT and there are many tools and
utilities to help carry out that crucial and critical task. Despite all
the helpful gadgets, part of the art of sizing is still down to informed
guesswork and experience. This means that the machines haven't taken
over yet!
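As a small example of why sizing remains informed guesswork, the sketch below (Python, with made-up workload figures) shows that the vendor's VMs-per-core ratio is only an upper bound; in this assumed case, memory binds first:

```python
import math

# Naive host-sizing sketch: capacity is the tighter of the CPU and RAM limits.
# All figures are illustrative assumptions, not vendor guidance.
host_cores, host_ram_gb = 16, 256
vms_per_core = 12          # vendor-recommended VMs-per-core ratio
vm_ram_gb = 2              # assumed average RAM per VM for this workload
hypervisor_reserve = 0.10  # keep ~10% of RAM back for the hypervisor

by_cpu = host_cores * vms_per_core                                       # 192 VMs
by_ram = math.floor(host_ram_gb * (1 - hypervisor_reserve) / vm_ram_gb)  # 115 VMs
print(f"Capacity = min(CPU {by_cpu}, RAM {by_ram}) = {min(by_cpu, by_ram)} VMs")
```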
Hypervisor
The hypervisor can be installed in two formats:
1.
Install an operating system that has within it some code that
constitutes a hypervisor. Once the operating system is installed, click a
couple of boxes and reboot the operating system to activate the
hypervisor. This is called Host Virtualisation because there is a Host
operating system, such as Windows Server 2008 or a Linux distribution, as the
foundation and controller of the hypervisor. The base operating system
is installed in the usual way, directly onto the hardware/server. A
modification is made and the system is rebooted. Next time it loads it
will offer the hypervisor configuration as a bootable choice.
2.
Install a hypervisor directly onto the hardware/server. Once installed,
the hypervisor will abstract the hardware resources and make them
available to multiple Guest operating systems via a Virtual machine.
VMware's ESXi and Xen are this type of hypervisor (a bare-metal, or
on-the-metal, hypervisor).
The two most popular hypervisors are VMware ESXi and
Microsoft's Hyper-V. ESXi is a stand-alone hypervisor that is installed
directly onto the hardware. Hyper-V is part of the Windows Server 2008
operating system, and Windows Server 2008 must be installed first to be able
to use the hypervisor within it. Hyper-V is an attractive
proposition but it does not reduce the footprint to the size of ESXi
(Hyper-V is about 2GB on disk and ESXi is about 70MB on disk),
and it does not reduce the overhead to a level as low as that of ESXi.
Managing
virtual environments requires other applications. VMware offers
vCenter Server and Microsoft offers System Center Virtual Machine
Manager. A range of third-party tools is also available to enhance
these activities.
Which hypervisor to use?
The choice of
which virtualisation software to use should be based on informed
decisions. Sizing the Hosts, provisioning the VMs, choosing the support
toolsets and models, and a whole raft of other questions need to be
answered to make sure that money and time are spent effectively and what
is implemented works and doesn't need massive change for a couple of
years (wouldn't that be nice?).
What is Cloud Computing?
Look around the Web and there are myriad definitions. Here's mine: "Cloud Computing is billable, virtualised, elastic services."
Cloud is a metaphor for the methods that enable users to access applications and services using the Internet and the Web.
Everything from the Access layer to the bottom of the stack is located in the data centre and never leaves it.
Within
this stack are many other applications and services that enable
monitoring of the Processing, Memory, Storage and Network which can then
be used by chargeback applications to provide metering and billing.
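A minimal sketch of that metering-to-chargeback path, assuming hypothetical prices and resource names (real chargeback products are far richer), might look like this:

```python
from dataclasses import dataclass

# Hypothetical price card; real rates come from the provider's catalogue.
RATES = {"cpu_hour": 0.05, "ram_gb_hour": 0.01,
         "storage_gb_month": 0.10, "network_gb": 0.12}

@dataclass
class MeterSample:
    """One month of raw metering data for a single VM."""
    vm_id: str
    cpu_hours: float
    ram_gb_hours: float
    storage_gb: float
    network_gb: float

def chargeback(s: MeterSample) -> float:
    # Turn monitored PMSN usage into a billable figure.
    return (s.cpu_hours * RATES["cpu_hour"]
            + s.ram_gb_hours * RATES["ram_gb_hour"]
            + s.storage_gb * RATES["storage_gb_month"]
            + s.network_gb * RATES["network_gb"])

print(f"£{chargeback(MeterSample('vm-042', 720, 2880, 50, 15)):.2f}")  # £71.60
```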
Cloud Computing Models
There are two Cloud Computing models: the Deployment Model and the Delivery Model.
Deployment Model
- Private Cloud
- Public Cloud
- Community Cloud
- Hybrid Cloud
Private Cloud Deployment Model
For most
businesses the Private Cloud Deployment Model will be the Model of
choice. It provides a high level of security and, for those companies and
organisations that have to take compliance and data security laws into
consideration, Private Cloud will be the only acceptable Deployment
Model.
Note: There are companies (providers) selling managed
hosting as Cloud. They rely on the hype and confusion about what Cloud
actually is. Check exactly what is on offer or it may turn out that the
product is not Cloud and cannot offer the attributes of Cloud.
Public Cloud Deployment Model
Amazon
EC2 is a good example of the Public Cloud Deployment Model. Users in
this case are, by and large, the Public although more and more
businesses are finding Public Cloud a useful addition to their current
delivery models.
Small businesses can take advantage of the Public
Cloud's low costs, particularly where security is not an issue. Even large
enterprises, organisations and government institutions can find
advantages in utilising Public Cloud. It will depend on legal and data
security requirements.
Community Cloud Deployment Model
This
model is created by users allowing their personal computers to be used
as resources in a P2P (Peer-to-Peer) network. Given that modern
PCs/Workstations have multiprocessors, a good chunk of RAM and large
SATA storage disks, it is sensible to utilise these resources to enable a
Community of users each contributing PMSN and sharing the applications
and services made available. Large numbers of PCs and, possibly, servers
can be connected into a single subnet. Users are the contributors and
consumers of compute resources, applications and services via the
Community Cloud.
The advantage of the Community Cloud is that it's
not tied to a vendor and not subject to the business case of a vendor.
That means the community can set its own costs and prices. It can be a
completely free service and run as a co-operative.
Security may
not be as critical, but the fact that each user has access at a low
level might introduce the risk of security breaches, and consequent bad
blood amongst the group.
While user communities can benefit from
vendor detachment, it isn't necessary that vendors be excluded.
Vendor/providers can also deliver Community Cloud, at a cost.
Large
companies that may share certain needs can also participate using
Community Cloud. Community Cloud can be useful where a major disaster
has occurred and a company has lost services. If that company is part of
a Community Cloud (car manufacturers, oil companies etc.) those
services may be available from other sources within that Cloud.
Hybrid Cloud Deployment Model
The
Hybrid Cloud is used where it is useful to have access to the Public
Cloud while maintaining certain security restrictions on users and data
within a Private Cloud. For instance, a company has a data centre from
which it delivers Private Cloud services to its staff, but it needs to
have some method of delivering ubiquitous services to the public or to
users outside its own network. The Hybrid Cloud can provide this kind of
environment. Companies using Hybrid Cloud services can take advantage
of the massive scalability of the Public Cloud delivered from Public
Cloud providers, while still maintaining control and security over
critical data and compliance requirements.
Federated Clouds
While
this is not a Cloud deployment or delivery model per se, it is going to
become an important part of Cloud Computing services in the future.
As
the Cloud market increases and enlarges across the world, the diversity
of provision is going to become more and more difficult to manage or
even clarify. Many Cloud providers will be hostile to each other and may
not be keen to share across their Clouds. Businesses and users will want
to be able to diversify and multiply their choices of Cloud delivery and
provision. Having multiple Clouds increases the availability of
applications and services.
A company may find that it is a good
idea to utilise multiple Cloud providers to enable data to be used in
differing Clouds for differing groups. The problem is how to
control and manage this multi-headed delivery model. IT can take control
back by acting as the central office clearing house for the multiple
Clouds. Workloads may require different levels of security, compliance,
performance and SLAs across the entire company. Being able to use
multiple Clouds to fulfil each requirement for each workload is a
distinct advantage over the one-size-fits-all principle that a single
Cloud provider brings to the table. Federated Cloud also answers the
question of how to avoid vendor lock-in. However, multiple Clouds
require careful management and that's where the Federated Cloud comes
in.
So, what is stopping this happening? Mostly it's about the
differences between operating systems and platforms. The other reason is
that moving a VM can be difficult when that VM is 100GB in size. If you
imagine thousands of those being moved around simultaneously you can see
why true Cloud federation is not yet with us, although some companies
are out there trying to make it happen. Right now you can't move a VM
out of EC2 into Azure or OpenStack.
True federation is where disparate Clouds can be managed together seamlessly and where VMs can be moved between Clouds.
Abstraction
The
physical layer resources are abstracted by the hypervisor to provide
an environment for the Guest operating systems via the VMs. This layer
of abstraction is managed by the appropriate vendor virtualisation
management tools (in the case of VMware, vSphere vCenter Server and
its APIs). The Cloud Management Layer (vCloud Director in the case of
VMware) is an abstraction of the Virtualisation Layer. It has taken the
VMs, applications and services (and users) and organised them into
groups. It can then make them available to users.
Using the
abstracted virtual layer it is possible to deliver IaaS, PaaS and SaaS
to Private, Public, Community and Hybrid Cloud users.
Cloud Delivery Models
IaaS-Infrastructure as a Service (Lower Layer)
When
a customer buys IaaS it will receive the entire compute infrastructure
including Power/Cooling, Host (hardware) servers, storage, networking
and VMs (supplied as servers). It is the customer's responsibility to
install the operating systems, manage the infrastructure and to patch
and update as necessary. These terms can vary depending on the
vendor/provider and the individual contract details.
PaaS-Platform as a Service (Middle Layer)
PaaS
delivers a particular platform or platforms to a customer. This might
be a Linux or Windows environment. Everything is provided including the
operating systems ready for software developers (the main users of PaaS)
to create and test their products. Billing can be based on resource
usage over time. There are a number of billing models to suit various
requirements.
SaaS-Software as a Service (Top Layer)
SaaS
delivers a complete computing environment along with applications ready
for user access. This is the standard offer in the Public Cloud.
An example application is Microsoft's Office 365. In this
environment the customer has no responsibility to manage the
infrastructure.
Cloud Metering & Billing
Metering
Billing
is derived from the chargeback information (Metering) gleaned from the
infrastructure. Depending on the service ordered the billing will
include the resources outlined below.
Billable Resource Options: (Courtesy Cisco)
Virtual machine: CPU, Memory, Storage capacity, Disk and network I/O
Server blade: options will vary by type and size of the hardware
Network services: Load balancer, Firewall, Virtual router
Security services: Isolation level, Compliance level
Service-level agreements (SLAs): Best effort (Bronze), High availability (Silver), Fault tolerant (Gold)
Data services: Data encryption, Data compression, Backups, Data availability and redundancy
WAN services: VPN connectivity, WAN optimisation
Billing
Pay-as-you-Go:
Straightforward payment based on billing from the provider. Usually
customers are billed for CPU and RAM usage only when the server is
actually running. Billing can be Pre-Paid or Pay-as-you-Go. For servers
(VMs) that are in a non-running state (stopped), the customer only pays
for the storage that server is using. If a server is deleted, there are
no further charges. Pay-as-you-Go can be a combination of a variety of
information billed as a single item. For instance, Network usage can be
charged for each hour that a network or networks are deployed. Outbound
and Inbound Bandwidth can be charged; NTT America charges only for
outbound traffic leaving a customer network or Cloud Files storage
environment, whereas inbound traffic may or may not be billed. It all comes
down to what the provider offers and what you have chosen to buy.
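A sketch of the Pay-as-you-Go rule described above, with hypothetical hourly rates: running servers pay for CPU/RAM and storage, stopped servers pay for storage only, and deleted servers pay nothing.

```python
# Hypothetical rates; actual prices vary by provider and contract.
RATES = {"cpu_ram_hour": 0.06, "storage_gb_hour": 0.0002}

def hourly_charge(state: str, storage_gb: float) -> float:
    if state == "deleted":
        return 0.0                                  # no further charges
    charge = storage_gb * RATES["storage_gb_hour"]  # storage is always billed...
    if state == "running":
        charge += RATES["cpu_ram_hour"]             # ...CPU/RAM only while running
    return charge

for state in ("running", "stopped", "deleted"):
    print(f"{state}: £{hourly_charge(state, 100):.4f}/hour")
```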
Pre-Allocated
Some
current cloud models use pre-allocation, such as a server instance or a
compute slice, as the basis for pricing. Here, the resource that a
customer is billed for has to be allocated first, allowing for
predictability and pre-approval of the expenditure. However, the term
instance can be defined in different ways. If the instance is simply a
chunk of processing time on a server equal to 750 hours, that equates to
a full month (a 31-day month is 744 hours). If the size of the instance is linked to a specific
hardware configuration, the billing appears to be based on hours of
processing, but in fact reflects access to a specific server
configuration for a month. As such, this pricing structure doesn't
differ significantly from traditional server hosting.
Reservation or Reserved
Amazon,
for instance, uses the term Reserved Instance Billing. This refers to
usage of VMs over time. The customer purchases a number of Reserved
Instances in advance. There are three levels of Reserved Instance
billing: Light, Medium and Heavy Reserved Instances. If the customer
increases usage of an instance above the set rate, Amazon will charge at the
higher rate. That's not an exact description but it's close enough.
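The shape of that scheme can be sketched as follows; the upfront fee, rates and reservation size here are invented for illustration, not Amazon's actual tariff:

```python
# Reserved-instance sketch: an upfront fee buys a discounted hourly rate,
# and usage beyond the reserved commitment is billed at the higher rate.
UPFRONT_PER_YEAR = 60.0      # hypothetical one-off reservation fee
RESERVED_RATE = 0.03         # discounted rate for covered hours
ON_DEMAND_RATE = 0.10        # higher rate for hours over the commitment
RESERVED_HOURS = 500         # monthly hours covered by the reservation

def monthly_cost(hours_used: float) -> float:
    covered = min(hours_used, RESERVED_HOURS)
    overage = max(hours_used - RESERVED_HOURS, 0.0)
    return UPFRONT_PER_YEAR / 12 + covered * RESERVED_RATE + overage * ON_DEMAND_RATE

for h in (300, 500, 700):
    print(f"{h} hours -> £{monthly_cost(h):.2f}")
```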
Cloud billing is not as straightforward and simple as vendors would
like to have us believe. Read the conditions carefully and try to stick
rigidly to the prescribed usage levels or the bill could come as a
shock.
The Future of Cloud
Some say Cloud has no future and
that it's simply another trend. Larry Ellison (of Oracle) made a
statement a few years ago that Cloud was an aberration or fashion
generated by an industry that was looking desperately for something,
anything, new to sell (paraphrased). Others say that Cloud is the future
of IT and IS delivery. The latter seem to be correct. It's clear that
Cloud is the topical subject on the lips of all IT geeks and gurus. It's
also true that the public at large is becoming Cloud-savvy and, due to
the dominance of mobile computing, the public and business will continue
to demand on-tap utility computing (John McCarthy, speaking at the MIT
Centennial in 1961, forecast that computing would become a public
utility) via desktops, laptops, netbooks, iPads, iPhones, smartphones
and gadgets yet to be invented. Cloud can provide that ubiquitous,
elastic and billable utility.