Let's refresh where we are! We are discussing how to architect a Citrix environment from an architect's perspective. There are two parts to it:
1) Assessment
2) Design
Assessment is further divided into
· User Community
· Operating System Delivery
· Application Delivery
· Server Virtualization
· Infrastructure
· Security and Personalization
· Operations and Support
· Conceptual Architecture
In this blog, we will discuss the assessment of server virtualization. Each topic will be covered in a separate blog post.
A Citrix virtualization solution requires an
underlying server virtualization platform, such as Citrix XenServer, Microsoft
Hyper-V or VMware ESX. This server virtualization platform will host the Citrix
infrastructure servers required for desktop and application virtualization.
However, this virtualization platform can also be used to host other server
workloads in the environment, such as database, application and web server
workloads. Some of these workloads may be pre-existing, whereas others might be
new to the environment.
Server Virtualization Planning
When planning a virtualization project,
architects must have a good understanding of the physical workloads that are
candidates for migration to the server virtualization platform. Architects can
conduct a server virtualization assessment to determine if the server
infrastructure is ready for virtualization and if the benefits can be fully
realized. A typical server virtualization assessment project involves analyzing
the existing infrastructure, reviewing the overall health of IT assets,
identifying which applications and servers are candidates for virtualization
and developing the roadmap that outlines the transition of physical servers to
a virtual infrastructure.
The key considerations that drive server
virtualization decisions are server capacity utilization and performance
metrics that account for the current environment snapshot as well as the
historical trend. Architects must also assess the application demands on the
servers and their degree of fluctuation in order to determine how well the
organization can leverage the fluidity of a virtualized platform and ensure
that all commitments are met. During this assessment, architects should
identify the risks, issues and barriers associated with moving to the
virtualization platform.
Hypervisor Platform
Architects should identify whether the
organization is currently using a hypervisor platform and the extent to which
the platform is being used. For example, some organizations might be
virtualizing a small percentage of servers with Citrix XenServer and are looking
to migrate more servers, whereas other organizations might be virtualizing the
majority of their server workloads.
If
the organization already has a server virtualization platform such as
XenServer, VMware ESX, vSphere, or Microsoft Hyper-V, architects likely will
extend this platform to support a Citrix desktop and application virtualization
infrastructure. Conversely, organizations may be implementing a new server
virtualization platform and need to migrate existing virtual workloads to the new
platform. If the environment already has a virtualization platform in place,
architects should determine the following:
- Which platform is currently in use, and are there any planned upgrades or migrations to a new version or platform?
- Are there any shortcomings or issues with the existing platform?
- Has the organization standardized on a particular server hardware model, or does the environment contain mixed hardware models?
- Which workloads are currently running on the virtualization platform?
- Which management and monitoring tools are used within the environment?
- What processes and procedures are in place for creating virtual machines, provisioning shared storage and configuring virtual networking?
- Is there a team responsible for managing the virtualization platform?
Virtualization Hardware
Server hardware specification and
configurations have a direct impact on the performance, stability and
resilience of a virtualization implementation. If supporting physical XenApp
workloads, the hardware will impact the number of concurrent users supported on
each server. If supporting virtual machine workloads, hardware resources impact
the virtual machine density supported by a single XenServer host. Architects
can ask the following types of questions to understand the hardware
configuration:
- Will blade or non-blade servers be used?
- What are the server specifications? Will the servers be running Intel Nehalem processors?
- Are at least four NICs present?
- Are the servers upgradeable?
- Is the storage infrastructure redundant?
- If hard disks are present, what is the RAID configuration?
- Do the servers have a battery-backed write cache?
- Is enough power, cooling and floor space available in the datacenter?
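As a rough illustration of how hardware resources cap virtual machine density, here is a minimal sketch. The 4:1 vCPU oversubscription ratio and the RAM reserved for the hypervisor are illustrative assumptions, not recommendations; real figures come from the design requirements and vendor guidance.

```python
def vm_density(host_cores, host_ram_gb, vm_vcpus, vm_ram_gb,
               vcpu_ratio=4, reserved_ram_gb=4):
    """Estimate VMs per host as the lower of the CPU-bound and RAM-bound
    limits. vcpu_ratio and reserved_ram_gb are assumed placeholder values."""
    cpu_bound = (host_cores * vcpu_ratio) // vm_vcpus
    ram_bound = (host_ram_gb - reserved_ram_gb) // vm_ram_gb
    return int(min(cpu_bound, ram_bound))

# e.g. a 16-core, 128 GB host running 2-vCPU / 4 GB virtual machines
density = vm_density(16, 128, 2, 4)
```

Whichever limit is reached first (CPU or memory) determines the density, which is why the assessment must capture both.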
From the Architect
Many virtualization projects coincide with
server consolidation projects--often based on blade servers--intended to reduce
power, cooling and datacenter floor space costs. Architects may find that
organizations often already have ordered server hardware or know which hardware
will be ordered. However, architects sometimes may be asked to recommend a
particular server model. In general, architects should avoid recommending a
specific model of hardware unless they are fully aware of the organization's
requirements and environment. Furthermore, and as a caution, the organization
likely already has a preferred server hardware vendor, which may even be a
stakeholder in the project. Instead of recommending specific hardware,
architects should gather the requirements and then help the organization
determine whether certain hardware models under consideration will meet the
design requirements. If a partner or hardware vendor is part of the project,
its representatives should be invited to kick-off meetings and discussions.
These individuals likely are familiar with the organization's procurement
process.
Physical Server Assessment
A physical server assessment captures the current and historical utilization of each workload's CPU, memory, disk I/O and network resources, along with operating system and application inventory details.
Data Collection
For large-scale
or enterprise-level server virtualization assessment projects, architects
likely need to leverage an automated data collection tool, such as PlateSpin
Recon, to capture system resource utilization data. This helps architects
determine how to consolidate physical servers and allocate resources to virtual
machines. PlateSpin Recon provides the following benefits:
- Secure, agent-less data collection, which eliminates the need to physically touch servers and keeps proprietary information contained within the datacenter
- Custom report creation, which allows architects to quickly identify virtualization candidates based on utilization trends and to compare workload characteristics before and after consolidation
- Enterprise-level scalability of as many as 1,500 servers for each instance of PlateSpin Recon
- Support for distributed datacenters across different geographic locations
- Support for Windows, Solaris and Linux systems
- Power and cooling cost savings and ROI analysis derived from different consolidation scenarios
- Workload and resource utilization forecasting based on historical trends
- Support for both physical and virtual workloads
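Whatever tool performs the collection, the output it must produce is a utilization history reduced to sizing figures. A minimal sketch of that reduction step, assuming simple average and 95th-percentile summaries (the field names are illustrative, not PlateSpin Recon's schema):

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Sample:
    """One utilization snapshot for a physical server (illustrative fields)."""
    cpu_pct: float
    mem_pct: float

def pct95(values):
    """Nearest-rank 95th percentile, the usual 'peak without outliers' figure."""
    ordered = sorted(values)
    return ordered[min(len(ordered) - 1, int(round(0.95 * (len(ordered) - 1))))]

def summarize(history):
    """Reduce a utilization history to averages (steady-state sizing)
    and 95th percentiles (peak sizing) for the assessment report."""
    cpus = [s.cpu_pct for s in history]
    mems = [s.mem_pct for s in history]
    return {"cpu_avg": mean(cpus), "cpu_p95": pct95(cpus),
            "mem_avg": mean(mems), "mem_p95": pct95(mems)}
```

The average supports steady-state consolidation decisions, while the percentile captures the historical trend's peaks without letting one-off spikes dominate.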
From the Architect
The data
collection process can help identify applications and workloads that are good
candidates for consolidation on the same XenServer host. Even if the scope of
the server virtualization project addresses only server workloads related to
the Citrix virtualization solution, architects must keep in mind that many
organizations already have an existing implementation of a Citrix solution,
such as XenApp. Architects should conduct a physical server assessment of the
existing Citrix infrastructure to determine how these workloads should be
virtualized on a server virtualization platform.
Servers Excluded from Virtualization
Architects
must identify server workloads that are not suitable for virtualization and
must be excluded from a server consolidation or virtualization project.
Exclusion is based primarily on high usage of the following server resources:
CPU, memory, disk I/O and network throughput. The most weight is applied to CPU
and memory requirements as these resources typically are exhausted more quickly
than disk I/O and network throughput. However, disk I/O operations are not
always an accurate measure of disk performance, and high I/O operations should
prompt further investigation. Generally, if I/O operations are high, CPU and
memory also will be high. Tools such as PlateSpin Recon can determine if
resource utilization exceeds the desired performance metric threshold for
inclusion within the virtualized environment. If the workload does not fall
within the performance metric threshold, PlateSpin Recon will identify the
workload as "excluded" and exclude the workload from consolidation
scenarios.
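The exclusion logic described above can be sketched as a simple threshold check. The threshold values below are placeholders only; real values come from the organization's performance requirements and the collected utilization data.

```python
# Placeholder thresholds; real limits come from the organization's
# performance requirements, not from this sketch.
THRESHOLDS = {"cpu_pct": 60.0, "mem_pct": 75.0,
              "disk_iops": 5000.0, "net_mbps": 400.0}

def classify(peaks, thresholds=THRESHOLDS):
    """Flag a workload 'excluded' when any measured peak exceeds its
    threshold, and report which resources breached."""
    breaches = [name for name, limit in thresholds.items()
                if peaks.get(name, 0.0) > limit]
    return ("excluded", breaches) if breaches else ("candidate", breaches)
```

A breach on disk I/O alone would, per the text, prompt further investigation rather than automatic exclusion, since I/O operations are not always an accurate measure of disk performance.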
From the Architect
Although
network throughput typically is not exhausted before other resources,
architects must consider the network implications of consolidating several
server workloads onto a single server. In a physical server environment, each
server workload leverages dedicated NICs on the physical server. In a
virtualized environment, several server workloads will be leveraging the NICs
on the underlying host server. If a blade server solution is used, all blade
servers in a rack--as well as the virtual servers running on them--will share a
maximum of six NICs. While a single physical workload might have relatively low network usage, architects must consider the usage and any impact on user experience when that workload and other virtual workloads are leveraging the same underlying NICs. Examples of workloads that typically have higher network utilization include database servers and Citrix Provisioning Services servers. To maintain optimal performance, architects may want to exclude these servers from the virtualization solution or virtualize them on physical servers with few, if any, additional workloads.
Consolidation Scenarios
Architects can
use tools like PlateSpin Recon to generate server consolidation scenarios based
on the performance metrics collected during the physical server assessment.
These scenarios are recommendations for how physical servers can be virtualized
and consolidated on existing server hardware, new server hardware or a
combination of both. Architects can select popular server hardware models from
within the tool to see how servers will be virtualized on that particular
hardware. Alternatively, custom server criteria can be entered.
The tool does not
generate consolidation scenarios arbitrarily; rather, the tool attempts to
consolidate various virtualized workloads in a manner which increases physical
server utilization without compromising performance. For example, rather than
virtualize two server workloads with high CPU usage metrics on the same host
server, the tool suggests virtualizing those workloads on separate physical
servers alongside workloads with lower CPU usage metrics.
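That placement behavior resembles a worst-fit decreasing heuristic: take workloads in descending order of demand and put each on the host with the most spare capacity, so high-CPU workloads spread across hosts. A minimal single-resource sketch (CPU only; a real tool weighs memory, disk I/O and network as well, and the 80% capacity ceiling is an assumption):

```python
def plan_consolidation(workloads, host_count, cpu_capacity=80.0):
    """Worst-fit decreasing placement: heavy workloads land on different
    hosts alongside lighter ones. cpu_capacity is an assumed ceiling."""
    hosts = [{"used": 0.0, "vms": []} for _ in range(host_count)]
    for name, cpu in sorted(workloads.items(), key=lambda kv: -kv[1]):
        host = min(hosts, key=lambda h: h["used"])  # most spare capacity
        if host["used"] + cpu > cpu_capacity:
            raise ValueError(f"no host has capacity left for {name}")
        host["used"] += cpu
        host["vms"].append(name)
    return hosts
```

With two hosts and two CPU-heavy database workloads, the heuristic places the databases on separate hosts and fills in the lighter web workloads around them.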
Sample Consolidation Scenario
Below is a sample
of a typical physical server consolidation report showing how 37 physical
server workloads (source) are virtualized on five physical virtualization host
servers. Through consolidation, the physical server resource utilization
increases while the annual energy cost decreases due to the reduction in
physical servers required.
|                | Server Count | Rack Units | Annual Energy Cost | Processor (%) | Memory (%) |
|----------------|--------------|------------|--------------------|---------------|------------|
| Source Servers | 37           | 14         | 1,240.64           | 8.6           | 56.8       |
| Target Servers | 5            | 10         | 554.07             | 32.4          | 55.3       |
| Change         | -32          | -4         | -686.57            | 23.8          | -1.5       |
| Change (%)     | -86.5        | -28.6      | -55.3              | N/A           | N/A        |
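The Change and Change (%) rows of such a report are straightforward arithmetic against the source values; a quick check of the unambiguous server-count and rack-unit columns:

```python
def change_rows(source, target):
    """Recompute the Change and Change (%) rows; percentages are taken
    against the source value, matching the -32/37 = -86.5% figure."""
    delta = {k: target[k] - source[k] for k in source}
    pct = {k: round(100.0 * delta[k] / source[k], 1) for k in source}
    return delta, pct

source = {"servers": 37, "rack_units": 14}
target = {"servers": 5, "rack_units": 10}
delta, pct = change_rows(source, target)  # -32 servers (-86.5%), -4 rack units (-28.6%)
```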
Physical Server Migration
Architects can design a strategy for migrating these physical workloads to the virtualized server environment. Ideally, the strategy is repeatable across the various kinds of physical servers in the environment and always produces a consistent, reliable virtual workload.
Manual Migration
Typically, a manual migration would only be used when there are very few migrations to perform, making the purchase or use of an automated migration tool unnecessary. Manual migration might also be used for advanced migrations of physical servers with complex requirements, such as option cards, peripherals or extremely complex storage configurations. A manual migration typically consists of the following basic steps:
- Initiate the change control process and back up application data on the server to be migrated.
- Create a new virtual machine in the server virtualization environment and give it the necessary CPU, memory, network and storage resources based on the requirements gathered during the server assessment.
- Install the operating system, service packs, hotfixes and any necessary para-virtualized drivers.
- Install and configure the necessary applications.
- Restore necessary application data to return the application to its pre-migration state.
- Conduct application testing and obtain signoff from the application owner.
- Release the virtual server workload into production.
- Decommission the physical server.
From the Architect
A manual migration can allow for greater optimization, as all unnecessary application data, such as temporary files and log files, is removed and the registry is optimized. Furthermore, manual migration provides an opportunity to simplify the server configuration, such as disk and storage requirements.
However, the
manual migration process is very time-consuming and best suited to small or
complex migrations. Manual migrations may lower reliability as inconsistent or
poorly managed migrations can introduce workload inconsistencies.
Automated Migration
Physical-to-virtual
(P2V) migrations typically are undertaken using automated migration tools,
which integrate closely with the virtualization platform to deliver faster and
more consistent results. A typical P2V process consists of the following steps:
- Initiate change control process and back up application data on server to be migrated.
- Initiate P2V conversion process, which should create a new virtual machine, connect to the physical server instance and migrate both the operating system and application data directly to the new virtual machine. Para-virtualized drivers should be installed automatically to ensure best performance.
- Conduct application testing and obtain signoff from the application owner.
- Release the virtual server workload into production and decommission the physical server.
An automated
migration also can be used for virtual-to-virtual migrations in which virtual
machines running on one hypervisor platform are migrated to another hypervisor
platform.
From the Architect
P2V migrations allow for faster deployment: once the migration parameters are supplied to the P2V tool, the migration proceeds automatically. Many P2V migrations can be performed in the time it would take to complete a single manual migration. For large-scale enterprise migrations, a P2V process is the only rational option. P2V also allows for better hypervisor integration, as many virtual machine optimizations--such as para-virtualized drivers--are applied automatically. Furthermore, because P2V migrations are parameter-based, they achieve higher consistency and are repeatable, so similar servers will always be built consistently.
A potential
downside to P2V migrations is the need for an additional toolset, which should
involve testing and staff training before being used. If P2V migrations are not
well-planned, inconsistencies can be introduced on a large scale depending on
the number of workloads that are migrated at one time. Architects should
involve application owners in the migration process to ensure any
inconsistencies are identified before the workload is released into production.
All workload migrations should be conducted in phases to mitigate potential
risks.
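Phasing itself is trivial to script; a sketch that splits a migration backlog into fixed-size batches, where the batch size is whatever the change-control process and application owners can absorb at once:

```python
def migration_phases(backlog, batch_size):
    """Split a migration backlog into fixed-size phases so that any
    inconsistency introduced by the P2V tooling surfaces on a small
    batch rather than across the whole estate."""
    return [backlog[i:i + batch_size]
            for i in range(0, len(backlog), batch_size)]
```

Each phase then goes through the same test-and-signoff gate before the next batch begins.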


