Windows Server 2012 R2 Hyper-V Replica Configuration, Simulation and Verification Explained

The Windows Server 2012 R2 Hyper-V role includes Hyper-V Replica, a built-in replication mechanism that operates at the virtual machine (VM) level. Hyper-V Replica can asynchronously replicate a selected VM running at a primary (or source) site to a designated replica (or target) site across a LAN or WAN. The following schematic depicts this concept.

The replication process first creates an identical VM in Hyper-V Manager on the target replica server. The change tracking module of Hyper-V Replica then tracks the write operations in the source VM and replicates them at the configured interval after the last successful replication, regardless of whether the associated VHD files are hosted on SMB shares, Cluster Shared Volumes (CSVs), SANs, or directly attached storage devices.

Hyper-V Replica Requirements

Both the primary site and the replica site shown above are Windows Server 2012 R2 Hyper-V hosts. The former runs production (so-called primary) VMs, while the latter hosts the replicated copies, which remain off and are each to be brought online should the corresponding primary VM experience an outage. Hyper-V Replica requires neither shared storage nor specific storage hardware. Once an initial copy is replicated to the replica site, Hyper-V Replica replicates only the changes of a configured primary VM, i.e. the deltas, asynchronously.

It goes without saying that assessing business needs and developing a plan for what to replicate and where is essential. In addition, both the primary site and the replica site (i.e. the Hyper-V hosts that run the source and replica VMs, respectively) must have Hyper-V Replica enabled in the Hyper-V Settings of Hyper-V Manager. IP connectivity between the two sites is assumed. Notice that Hyper-V Replica is a server capability and does not apply to Client Hyper-V, such as Hyper-V running on a Windows 8 client.

Establishing Hyper-V Replica Infrastructure

Once all the requirements are in place, enable Hyper-V Replica on both the primary site (i.e. the Windows Server 2012 R2 Hyper-V host where a source VM runs) and the target replica server, and then configure an intended VM at the primary site for replication. The following is a sample process for establishing Hyper-V Replica.

Step 1 – Enable Hyper-V Replica

Identify a Windows Server 2012 R2 Hyper-V host as the primary site where a target source VM runs. Enable Hyper-V Replica in the Hyper-V Settings of Hyper-V Manager on that host. The following are sample Hyper-V Replica settings of a primary site. Repeat the step on the target replica site to enable Hyper-V Replica there. In this article, DEVELOPMENT and VDI are both Windows Server 2012 R2 Hyper-V hosts; the former is the primary site and the latter is the replica site.

image
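For those who prefer scripting, the same host-level setting can be applied with the Hyper-V PowerShell module. The following is a minimal sketch assuming Kerberos (HTTP) authentication and an illustrative replica storage path; adjust both to your environment.

```powershell
# Run in an elevated PowerShell session on each host (DEVELOPMENT and VDI here),
# so that either side can accept replication traffic after a role reversal.
Set-VMReplicationServer -ReplicationEnabled $true `
    -AllowedAuthenticationType Kerberos `
    -ReplicationAllowedFromAnyServer $true `
    -DefaultStorageLocation 'D:\Hyper-V\Replica'

# Kerberos-based replication listens on HTTP port 80 by default;
# open it with the built-in firewall rule.
Enable-NetFirewallRule -DisplayName 'Hyper-V Replica HTTP Listener (TCP-In)'
```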

Step 2 – Configure Hyper-V Replica on Source VMs

Using Hyper-V Manager at the primary site, enable replication of a VM by right-clicking the VM, selecting Enable Replication, and walking through the wizard to establish a replication relationship with the target replica site. The following shows how to enable replication of a VM named A-Selected-VM at the primary site, DEVELOPMENT. The replication settings are fairly straightforward and worth getting familiar with.

image
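The wizard's choices map closely to the Enable-VMReplication cmdlet. The sketch below assumes Kerberos over port 80 and a five-minute replication interval; only the server and VM names come from this article's example.

```powershell
# On the primary host DEVELOPMENT: pair A-Selected-VM with the replica server VDI.
Enable-VMReplication -VMName 'A-Selected-VM' `
    -ReplicaServerName 'VDI' `
    -ReplicaServerPort 80 `
    -AuthenticationType Kerberos `
    -ReplicationFrequencySec 300

# Review the replication relationship before sending the initial copy.
Get-VMReplication -VMName 'A-Selected-VM' | Format-List *
```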

Step 3 – Carry Out an Initial Replication

While configuring Hyper-V Replica on a source VM in step 2, there are options for delivering the initial replica. The initial replica can be transmitted over the network, either immediately or according to a configured schedule. Optionally, the initial copy can be delivered out of band, i.e. with external media, when sending it over the wire would overwhelm the network, be unreliable, take too long, or be undesirable due to the sensitivity of the content. There are, however, additional considerations with out-of-band delivery of an initial replica.

Option to Send Initial Replica Out of Band

The following shows the option to send the initial copy using external media while enabling replication on the VM named A-Selected-VM.

image

An exported initial copy includes the associated VHD disk and an XML file capturing the configuration information as depicted below. This folder can then be delivered out of band to the target replica server and imported into the replica VM.

image
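The external-media option in the wizard has a scripted counterpart in Start-VMInitialReplication with a destination path. A hedged sketch, assuming the removable drive is mounted as E: on the primary host:

```powershell
# On DEVELOPMENT: export the initial copy (VHDs plus configuration XML)
# to external media instead of sending it over the network.
Start-VMInitialReplication -VMName 'A-Selected-VM' -DestinationPath 'E:\InitialReplica'
```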

When delivering an initial copy with external media, a replica VM configuration is still automatically created at the intended replica site, exactly as it is when sending the initial copy over the network. The difference is that the source VM (here the VM on the primary site, DEVELOPMENT) shows an initial replication in progress in its replication health, while the replica VM (here the VM on the replica site, VDI) offers an option to import the initial replica in its replication settings and its replication health presents a warning, as shown below:

The primary site

image

The replica site

image

After receiving the initial replica on external media, the replica VM can then import the disk as shown:

image

Upon successfully importing the disk, the replica VM shows additional replication options and updated replication health information, as shown below.

image
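The Import Initial Replica action shown above also has a PowerShell counterpart. A minimal sketch, assuming the exported folder was copied to E:\InitialReplica on the replica host VDI:

```powershell
# On VDI: import the out-of-band initial copy for the replica VM,
# then confirm that replication health returns to Normal.
Import-VMInitialReplication -VMName 'A-Selected-VM' -Path 'E:\InitialReplica'
Measure-VMReplication -VMName 'A-Selected-VM'
```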

Step 4 – Simulate a Failover from a Primary Site

This is done at the associated replica site. First verify that the replication health of the target replica VM is normal. Then right-click the replica VM and select Test Failover under Replication, as shown here:

image

Notice that, normally, a failover should occur only when the primary VM experiences an outage. A simulated failover, however, does not require shutting down the primary/source VM. It does create an additional VM instance at the replica site, without altering the replica settings in place. A VM instance created by a Test Failover should be deleted with the Stop Test Failover option in the same replication UI where the Test Failover was initiated, as demonstrated below, so that all replication settings remain intact.

image
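The same test cycle can be scripted on the replica host. This is a hedged sketch; the test instance it creates is named "A-Selected-VM - Test" and is discarded by Stop-VMFailover without touching the replication settings.

```powershell
# On VDI: verify health, start a test failover, then clean it up.
Measure-VMReplication -VMName 'A-Selected-VM'          # health should be Normal
Start-VMFailover -VMName 'A-Selected-VM' -AsTest       # creates 'A-Selected-VM - Test'
# ... validate the test VM, then remove it and leave replication untouched
Stop-VMFailover -VMName 'A-Selected-VM'
```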

Step 5 – Do a Planned Failover from the Primary Site

In a maintenance event where the primary/source VM is expected to experience an outage, ensure that the replication health is normal, shut down the source VM, and then conduct a Planned Failover from the primary site as shown below.

image
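For reference, the planned-failover sequence can also be driven from PowerShell. This is a sketch of the commonly documented order of operations, not a substitute for the wizard; the server and VM names follow this article's example.

```powershell
# On the primary host DEVELOPMENT: shut down the VM and prepare the planned failover.
Stop-VM -Name 'A-Selected-VM'
Start-VMFailover -VMName 'A-Selected-VM' -Prepare

# On the replica host VDI: fail over, reverse the replication relationship,
# and bring the (now primary) VM online.
Start-VMFailover -VMName 'A-Selected-VM'
Set-VMReplication -VMName 'A-Selected-VM' -Reverse
Start-VM -Name 'A-Selected-VM'
```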

Notice that successfully performing a Planned Failover with the presented settings will automatically establish a reverse replication relationship. To verify that the failover has been carried out correctly, check the replication health of the replication pair of VMs on both the primary site and the replica site, before and after the planned failover; the results need to be consistent across the board. The following examines one replication pair: A-Selected-VM on DEVELOPMENT at the primary site and its replica on VDI at the replica site. Prior to a planned failover from DEVELOPMENT, the replication health information of the source VM (left, on the DEVELOPMENT server) and the replica VM (right, on the VDI server) is shown below.

image

After a successful planned failover, as demonstrated at the beginning of this step, the replication health information becomes the following: the replication roles have been reversed and the state is normal. This signifies that the planned failover was carried out successfully.

image

In the event that a source VM experiences an unexpected outage at the primary site, failing the primary VM over to its replica will not automatically establish a reverse replication relationship, because un-replicated changes were lost along with the unexpected outage.

Step 6 – Finalize the Settings with Another Planned Failover

Conduct another Planned Failover to confirm that the reverse replication works. In the presented scenario, the primary VM is now shut down at the current primary site (i.e. VDI at this point) and then failed over back to the current replica site (i.e. DEVELOPMENT). Upon successful execution of this planned failover, the resulting replication relationship should again have DEVELOPMENT as the primary site and VDI as the replica site, which is the same state as at the beginning of step 5. At this point, for a DR scenario, failing the production site (the DEVELOPMENT server in this example) over to a DR site (here the VDI server), and as needed failing over from the DR site (VDI) back to the original production site (DEVELOPMENT), are both proven to work.

Step 7 – Incorporate Hyper-V Replica into Existing SOPs

Incorporate Hyper-V Replica configurations and maintenance into applicable IT standard operating procedures and start monitoring and maintaining the health of Hyper-V Replica resources.

Hyper-V Extended Replication

In Windows Server 2012 R2, we can now further replicate a replica VM from a replica site to an extended replica site, similar to a backup of a backup. The concept, as illustrated below, is quite simple.

image

The process is to first set up Hyper-V Replica from a primary site to a target replica site as described earlier in this article, and then, at the replica site, configure Hyper-V Replica on the replica VM, as shown below. In this way, a source VM is replicated from the primary site to the replica site, which in turn replicates the replica VM to an extended replica site.

image
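Extended replication can likewise be configured from the replica host by running Enable-VMReplication against the replica VM. A hedged sketch, assuming a hypothetical third host named DR-SITE and a fifteen-minute interval (extended replication supports only the five- and fifteen-minute frequencies):

```powershell
# On the replica host VDI: replicate the replica copy of A-Selected-VM
# onward to the extended replica server DR-SITE, then start the initial copy.
Enable-VMReplication -VMName 'A-Selected-VM' `
    -ReplicaServerName 'DR-SITE' `
    -ReplicaServerPort 80 `
    -AuthenticationType Kerberos `
    -ReplicationFrequencySec 900
Start-VMInitialReplication -VMName 'A-Selected-VM'
```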

In a real-world setting, Hyper-V Extended Replication fits a range of business needs. For instance, IT may want a replica site nearby for convenience and timely response when monitored VMs experience a localized outage within IT's control while the datacenter remains up and running. For an outage that takes down an entire datacenter or geographic region, however, an extended replica stored in a geo-distant location is what matters.

Hyper-V Replica Broker

Importantly, if a Hyper-V failover cluster is to be employed as a replica site, one must use Failover Cluster Manager to perform all Hyper-V Replica configuration and management, starting by creating a Hyper-V Replica Broker role, as demonstrated below.

image

And here is a sample configuration:

image

Although the above uses Kerberos authentication, Windows Active Directory is not a requirement for Hyper-V Replica as such. Hyper-V Replica can also be implemented between workgroups and untrusted domains with certificate-based authentication. Active Directory is, however, a requirement when a Hyper-V host is part of a failover cluster, as in the case of the Hyper-V Replica Broker, and in that case all Hyper-V hosts of the failover cluster must be in the same Active Directory domain.
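For completeness, the Broker role can also be created with the Failover Clustering PowerShell module. The sketch below follows the commonly documented pattern; the role name HVR-Broker is an illustrative assumption and becomes the client access point that replication traffic targets.

```powershell
# Run on a node of the replica cluster: create a client access point,
# add the Virtual Machine Replication Broker resource, and bring it online.
Import-Module FailoverClusters
$brokerName = 'HVR-Broker'
Add-ClusterServerRole -Name $brokerName
Add-ClusterResource -Name 'Virtual Machine Replication Broker' `
    -Type 'Virtual Machine Replication Broker' -Group $brokerName
Add-ClusterResourceDependency 'Virtual Machine Replication Broker' $brokerName
Start-ClusterGroup $brokerName
```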

Business Continuity and Disaster Recovery (BCDR)

In a BC scenario, i.e. a planned failover event such as scheduled maintenance of a primary VM, Hyper-V Replica first copies any un-replicated changes to the replica VM so that the event produces no loss of data. Once the planned failover is completed, the replica VM becomes the primary VM and carries the workload, while a reverse replication relationship is established automatically. In a DR scenario, i.e. an unplanned outage of a primary VM, an operator needs to manually bring up the replica VM with an expectation of some data loss, specifically the data changed since the last successful replication based on the configured replication interval.
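In the unplanned (DR) case, manually bringing up the replica VM also has a PowerShell counterpart. A hedged sketch, run on the replica host:

```powershell
# On the replica host: fail over to the latest recovery point, start the VM,
# and, once satisfied, commit the failover (discarding the other recovery points).
Start-VMFailover -VMName 'A-Selected-VM'
Start-VM -Name 'A-Selected-VM'
Complete-VMFailover -VMName 'A-Selected-VM'
```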

Closing Thoughts

The significance of Hyper-V Replica lies not only in how simple it is to understand and operate, but also in the readiness and affordability that come with Windows Server 2012 R2. A DR solution for businesses of any size is now a reality with Windows Server 2012 R2 Hyper-V; for small and medium businesses in particular, a DR solution has never been so feasible. IT pros can now configure, simulate, and verify results in a productive and on-demand manner to formulate, prototype, and pilot a VM-level DR solution on Windows Server 2012 R2 Hyper-V infrastructure. At the same time, in an enterprise setting, the System Center family remains the strategic platform for maximizing the benefits of Windows Server 2012 R2 and Hyper-V with a comprehensive system management solution, including private cloud automation, process orchestration, and service deployment and management at the datacenter level.


Resource Pooling, Virtualization, Fabric, and Cloud

One of the five essential attributes of cloud computing (ref. The 5-3-2 Principle of Cloud Computing) is resource pooling which is an important differentiator separating the thought process of traditional IT from that of a service-based, cloud computing approach.

Resource pooling in the context of cloud computing and from a service provider’s viewpoint denotes a set of strategies for standardizing, automating, and optimizing resources. For a user, resource pooling institutes an abstraction for presenting and consuming resources in a consistent and transparent fashion.

This article presents the following key concepts derived from resource pooling:

  • Resource Pools
  • Virtualization in the Context of Cloud Computing
  • Standardization, Automation, and Optimization
  • Fabric
  • Cloud
  • Closing Thoughts

Resource Pools

Ultimately, datacenter resources can be logically placed into three categories: compute, networks, and storage. For many, this grouping may appear trivial. It is, however, a foundation upon which cloud computing methodologies are developed, products designed, and solutions formulated.

Compute

This is a collection of all CPU-related capabilities. Essentially all datacenter physical and virtual servers, whether supporting or actually running a workload, are part of this compute pool, which represents the total capacity for executing code and running instances. The process of constructing a compute pool is to first inventory all servers and identify virtualization candidates, and then implement server virtualization. It is never too early to introduce a system management solution to facilitate these processes, which in my view is a strategic investment and a critical component for all cloud initiatives.

Networks

The physical and logical artifacts put in place to connect, segment, and isolate resources at layer 3 and below are gathered in the network pool. Networking enables resources to become discoverable and hence manageable. In this age of instant gratification, everything has to be readily available within a relatively short window of opportunity; this is an economy driven by emotions and current events. The need to be mobile and connected at the same time is redefining the boundaries of IT security and system administration, and it plays a direct and impactful role in user productivity and customer satisfaction. Networking in cloud computing is more than just remote access; it is an abstraction empowering thousands of users to self-serve and consume resources anytime, anywhere, with any device. Bring your own device, bring your own network, and the consumerization of IT are various expressions of the network and mobility requirements of cloud computing.

Storage

Storage has long been a very specialized and sometimes mysterious part of IT. An enterprise storage solution is frequently characterized as a high-cost item with significant financial and technical commitments to specialized hardware, proprietary APIs and software, a dependency on direct vendor support, and so on. In cloud computing, storage solutions are becoming very noticeable, since the ability to grow and shrink resource capacities based on demand, i.e. elasticity, demands an enterprise-level, massive, reliable, and resilient storage solution at scale. While enterprise IT is consolidating resources and transforming existing establishments into a cloud computing environment, the ability to leverage existing storage solutions from various vendors and integrate them into an emerging cloud storage solution is a notable opportunity for minimizing cost.

Virtualization in the Context of Cloud Computing

In the last decade, virtualization has proved its value and accelerated the realization of cloud computing. At that time, virtualization was mainly server virtualization, which in an over-simplified description means hosting multiple server instances on the same hardware while each instance runs transparently and in isolation, namely as if each consumes the entire hardware and is the only instance running. Now, with so many new technologies and solutions emerging, customers' expectations and business requirements have been evolving, and we should examine virtualization in the context of cloud computing to fully address the innovations rapidly changing how IT conducts business and delivers services. As presented below, in the context of cloud computing, consumable resources are delivered in some virtualized form, and the various virtualization layers collectively construct and form the so-called fabric.

Server Virtualization

The concept of server virtualization remains the same: running multiple server instances on the same hardware while each instance runs transparently and in isolation, as if it were the only instance running and consuming the entire server hardware.

In addition to virtualizing and consolidating servers, server virtualization also signifies the practices of standardizing server deployment, switching away from physical boxes to VMs. Server virtualization is the abstraction layer for packaging, delivering, and consuming a compute pool.

There are a few important considerations when virtualizing servers. IT needs the ability to identify and manage bare metal so that the entire resource life cycle, from the commissioning to the decommissioning of server hardware, can be automated. To fundamentally reduce support and training costs while increasing productivity, a consistent platform with tools applicable across physical, virtual, on-premises, and off-premises deployments is essential. The last thing IT wants is one set of tools for managing physical resources and another for those virtualized; one set of tools for on-premises deployments and another for those deployed to a service provider; one set of tools for infrastructure development and another for configuring applications. The essential goal is one skill set for all: one platform, one methodology, and one set of tools. This advantage is obvious when, for example, developing applications and deploying Windows Server 2012 R2 on premises or off premises to Windows Azure. The Active Directory security model can work across sites, System Center can manage resources deployed off premises to Windows Azure, and Visual Studio can publish applications across platforms. IT pros can operate all of this based on Windows system administration skills, and minimal operations training is needed since everything provides a consistent Windows user experience on a common platform and security model.

Network Virtualization

A similar idea to server virtualization applies here. Network virtualization is the ability to run multiple networks on the same network infrastructure while each network runs transparently and in isolation, as if it were the only network running and consuming the entire network hardware.

Conceptually, since each network instance runs in isolation, one tenant's 192.168.x network is not aware of another tenant's identical 192.168.x network running on the same network infrastructure. Network virtualization provides the translation between physical network characteristics and the representation of a logical network. Consequently, above the network virtualization layer, various tenants can have identical network configurations while running in isolation. This abstraction essentially substantiates the concept of bringing your own network.

A great example of network virtualization is Windows Azure virtual networking. At any given time, multiple Windows Azure subscribers can all allocate the same 192.168.0.0/16 address space with an identical subnet scheme for deploying VMs. The VMs belonging to one subscriber are nevertheless not aware of, or visible to, those deployed by others, even though the network configuration, IP scheme, and IP address assignments may all be identical. Network virtualization in Windows Azure isolates one subscriber from the others such that each subscriber operates as if its subscription were the only one employing the 192.168.x address space.

Storage Virtualization

I believe this is where the next wave of drastic IT cost reduction, after server virtualization, will happen. Historically, storage has been a high-cost item in IT budgets in every aspect, including hardware, software, staffing, maintenance, SLAs, etc. Since the introduction of Windows Server 2012, there has been a clear direction: storage virtualization is becoming a commodity and an essential skill for IT pros, since it is now built into the Windows OS. New capabilities like Storage Pools, Hyper-V over SMB, Scale-Out File Share, etc. are making storage virtualization part of routine server administration, easily manageable with tools and utilities like PowerShell, which many IT professionals are familiar with.

The concept of storage virtualization remains consistent with the idea of logically separating a computing object from the hardware where it runs. Storage virtualization is the ability to integrate multiple, heterogeneous storage devices, aggregate their capacities, and present and manage them as one logical storage device with a continuous storage space. A JBOD enclosure pooled in this way is a straightforward illustration of the concept.
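As a concrete illustration of this built-in storage virtualization, the Storage Spaces cmdlets can pool whatever eligible disks are attached and carve a resilient virtual disk out of the aggregate. The pool name, disk name, and size below are illustrative assumptions.

```powershell
# Pool all disks that are eligible for pooling, then create a mirrored
# virtual disk from the aggregated capacity.
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName 'Pool01' `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem -FriendlyName '*Storage Spaces*').FriendlyName `
    -PhysicalDisks $disks
New-VirtualDisk -StoragePoolFriendlyName 'Pool01' -FriendlyName 'VDisk01' `
    -ResiliencySettingName Mirror -Size 500GB
```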

Standardization, Automation, and Optimization

Each of the three resource pools has an abstraction that logically presents it with its own characteristics and work patterns. A compute pool is a collection of physical hosts and VM instances; a virtualization host runs VMs that carry workloads deployed by service owners and consumed by authorized users. A network pool encompasses network resources including physical devices, logical switches, logical networks, address spaces, and site configurations. Network virtualization can map many identical logical/virtual IP addresses onto the same physical NIC, so that a service provider can host tenants implementing identical network schemes on the same network hardware without concern. A storage pool is based on storage virtualization, the concept of presenting an aggregated storage capacity as one continuous storage space, as if provided by one logical storage device.

In other words, the three resource pools are wrapped with server virtualization, network virtualization, and storage virtualization, respectively. Together these three virtualization layers form the cloud fabric, which is presented as an architectural layer with opportunities to standardize, automate, and optimize deployments without the need to know the physical complexities.

Standardization

Virtualizing resources decouples the dependency between instances and the underlying hardware. This offers an opportunity to simplify and standardize the logical representation of a resource. For instance, a VM is defined and deployed with a VM template which provides a level of consistency with a standardized configuration. 

Automation

Once a VM's characteristics are identified and standardized in a configuration, we can generate an instance by providing only instance-specific information, such as the VM machine name, which must be validated at deployment time to prevent duplicate names. With a deployment template, only minimal information is needed at deployment time, which significantly simplifies the process and facilitates automation, as sketched below. Standardization and automation are essential mechanisms so that a workload can scale on demand, i.e. become elastic, once a threshold is specified from a system management perspective.
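As a small illustration of the idea (not a full automation framework), the sketch below deploys a Hyper-V VM where everything except the instance name is fixed by a standardized configuration; the paths, parent disk, and switch name are hypothetical.

```powershell
param([Parameter(Mandatory)][string]$VMName)

# Standardized, template-level settings; only $VMName is instance-specific.
$template = @{
    MemoryStartupBytes = 2GB
    SwitchName         = 'Tenant-vSwitch'
    Generation         = 2
}

# Validate the instance name at deployment time to prevent duplicates.
if (Get-VM -Name $VMName -ErrorAction SilentlyContinue) {
    throw "VM name '$VMName' is already in use."
}

# Create a differencing disk from a hypothetical base image and deploy the VM.
New-VHD -Path "D:\VMs\$VMName.vhdx" -ParentPath 'D:\Templates\Base2012R2.vhdx' -Differencing
New-VM -Name $VMName -VHDPath "D:\VMs\$VMName.vhdx" @template
Start-VM -Name $VMName
```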

Optimization

Standardization provides a set of common criteria. Automation executes operations based on those criteria with volume, consistency, and expediency. With standardization and automation, instances can be instantiated with consistency and predictability; in other words, cloud resources can be operated in bulk by standardizing configurations and automating operations. The next logical step is then to optimize usage based on SLAs.

Optimization is essentially standardization and automation with intelligence and objectives. The intelligence is why analytics needs to be designed into a cloud solution. With self-service, ubiquitous access, and elasticity, a cloud resource can technically be consumed anytime, by any number of users, and with capacity changed on demand. We need a mechanism to provide insights into the usage, including when it is used, by whom, how much, to what degree, etc. A consumption-based chargeback or show-back model is what the analytics must provide so that system management can optimize resources accordingly and in a timely fashion.

Integrating resource pooling and virtualization concepts into the OS has been happening since Windows Server 2012 R2 and System Center 2012. Server virtualization, network virtualization, and storage virtualization are now integrated into the Windows platform and are part of the server OS.

Fabric

Fabric is a significant abstraction in cloud computing. Fabric implies discoverability and manageability, i.e. the ability to discover, identify, and manage a datacenter resource. Conceptually, fabric is an umbrella term encompassing all the physical and virtualized datacenter resources supporting a cloud computing environment. At the same time, a fabric controller represents the system management solution which manages, i.e. owns, the fabric.

In cloud architecture, fabric consists of the three resource pools: compute, networking, and storage. Compute provides the computing capabilities, executes code, and runs instances; networking glues the resources together based on requirements; and storage is where VMs, configurations, data, and resources are kept. Fabric shields a user from the physical complexities of the three resource pools, and all operations are eventually managed by the fabric controller of a datacenter. Above the fabric are logical views of consumable resources, including VMs, virtual networks, and logical storage drives; by deploying VMs, configuring virtual networks, or acquiring storage, a user consumes resources. Under the fabric are virtualization and infrastructure hosts, Active Directory, DNS, clusters, load balancers, address pools, network sites, library shares, storage arrays, topology, racks, cables, etc., all under the fabric controller's command, collectively forming the fabric.

For a service provider, building a cloud computing environment is essentially establishing a fabric controller and constructing fabric: institute a comprehensive management solution, build the three resource pools, and integrate server virtualization, network virtualization, and storage virtualization to form the fabric. From a user's point of view, how and where a resource is physically located is not a concern. The user experience and satisfaction rest largely on the accessibility, readiness, and scalability of requested resources and on fulfillment of the SLA.

Cloud

Cloud is a term well defined by NIST SP 800-145 and the 5-3-2 Principle of Cloud Computing. We need to be very clear on what a cloud must exhibit (the five essential attributes), how it is consumed (SaaS, PaaS, or IaaS), and the model in which a service is deployed (private cloud, public cloud, or hybrid cloud). Cloud is a concept, a state, a set of capabilities such that the capacity of a requested resource can be delivered as a service, i.e. be available on demand.

The architecture of a cloud computing environment is presented as three resource pools: compute, networks, and storage, each an abstraction provided by a virtualization layer. Server virtualization presents a compute pool of VMs, which supplies the computing power to execute code and run instances. Network virtualization offers a network pool and is the mechanism that allows multiple tenants with identical network configurations to share the same virtualization hosts, while connecting, segmenting, and isolating network traffic with virtual NICs, logical switches, address spaces, network sites, IP pools, etc. Storage virtualization provides a logical storage device whose capacity appears continuous, by aggregating the capacities of a pool of storage devices behind the scenes. The three resource pools together form an abstraction, namely fabric, such that although the underlying physical infrastructure may be intricate, the user experience above the fabric is logical and consistent. Deploying a VM, configuring a virtual network, or acquiring storage is transparent with virtualization, regardless of where the VM actually resides, how the virtual network is physically wired, or which devices make up the aggregate behind the requested storage.

Closing Thoughts

Cloud is, above all, a consumer-focused approach. It is about enabling a customer to consume resources on demand and at scale. And perhaps more significant than the ability to increase capacity on demand is the ability to release resources when they are no longer required. Cloud is not about products and technologies; cloud is about standardizing, automating, and optimizing the consumption of resources, and ultimately about strengthening the bottom line.

Why Private Cloud First

Some IT decision makers may wonder: "I have already virtualized my datacenter and am running a highly virtualized IT environment. Do I still need a private cloud? If so, why?"

The answer is a definitive YES, and the reason is straightforward. The plain truth is that virtualization is no private cloud, and a private cloud goes far beyond virtualization. (Ref 1, 2)

Virtualization Is No Private Cloud

Technically, virtualization is signified by the concept of “isolation,” whereby a running instance operates with the notion that it consumes the entire hardware, even though multiple instances may be running at the same time in the same hosting environment. A well-understood example is server virtualization, where multiple server instances run on the same hardware while each instance runs as if it possessed the entire host machine.

A private cloud, on the other hand, is a cloud that abides by the 5-3-2 Principle or NIST SP 800-145, the de facto definition of cloud computing. In other words, a private cloud as illustrated above must exhibit the attributes of cloud computing, such as elasticity, resource pooling, and a self-service model, and be delivered in a particular fashion. Virtualization, by contrast, does not hold any of these attributes as a technical requirement. Virtualization is about isolating and virtualizing resources, while how a virtualized resource is allocated, delivered, or presented is not particularly specified. Cloud computing, or a private cloud, is envisioned quite differently: the servicing, accessibility, readiness, and elasticity of all consumable resources are conceptually defined and technically required, so that they can be delivered as “services.”

Essence of Cloud Computing

The service concept is a centerpiece of cloud computing. A cloud resource is to be consumed as a service. This is why the terms IaaS, PaaS, SaaS, ITaaS, and XaaS (everything and anything as a service) are frequently heard in cloud discussions, and they are all delivered as a service. A service is what must be presented to and experienced by a cloud user. So, what is a service?

A service can be presented and implemented in various ways, for example by forming a web service from a block of code. In the context of cloud computing, however, a service can be captured by three words: capacity on demand. Capacity here is associated with an examined object such as CPU, network connections, or storage. On-demand denotes anytime readiness with any-network and any-device accessibility. It is a state that previously took years of IT discipline and best practices to achieve with a traditional, infrastructure-focused approach, while cloud computing makes “service” a basic delivery model and demands that all consumable resources, including infrastructure, platform, and software, be presented as services. Consequently, replacing the term “service” with “capacity on demand,” or simply “on demand,” brings clarity and gives substance to any discussion of cloud computing.

Hence IaaS, infrastructure as a service, is infrastructure constructed on demand: one can provision infrastructure, i.e. deploy a set of virtual machines (since all consumable resources in cloud computing are virtualized), that together form the infrastructure for delivering a target application based on needs. PaaS means platform as a service, or a runtime environment available on demand. Notice that a target runtime environment exists to run an intended application; since the runtime is available on demand, an application deployed to that runtime in turn becomes available on demand, which is essentially SaaS, software available on demand or as a service. There is a clear logical progression among IaaS, PaaS, and SaaS.

So what is cloud exactly?

Cloud, as I define it here, is a concept, a state, a set of capabilities such that a targeted business capacity is available on demand. On-demand denotes a self-service model with anytime readiness and any-network, any-device accessibility. Cloud is certainly not a particular implementation, since the same state can be achieved through various implementations as technologies advance and methodologies evolve.

Logically, building a private cloud is the post-virtualization step to continue transforming IT into the next generation of computing with cloud-based delivery. The following schematic depicts a vision of transforming a datacenter into a service-centric cloud delivery model.

Once resources have been virtualized with Hyper-V, System Center builds upon and transforms existing establishments into an on-premises private cloud environment based on IaaS. Windows Azure then provides a computing platform with both IaaS and PaaS solutions for extending an on-premises private cloud beyond corporate boundaries into a global setting with resources deployed off premises. This hybrid deployment scenario is emerging as the next-generation IT computing model.

To Cloud or Not to Cloud, That Is Not the Question

Comparing apples to apples, there is little reason for a business not to prefer cloud computing over traditional IT; why would one not want the ability to adjust business capacity based on needs? Therefore, to cloud or not to cloud is not the question. Nor is security the issue: in most cases, a cloud managed by a team of cloud security professionals in a service provider's datacenter is likely to be more secure than one implemented by IT generalists wearing multiple hats while administering an IT shop. Cloud is about acquiring the on-demand capability, and for certain verticals the question is more about regulatory compliance, since the cost reduction and increased servicing capabilities are well understood. Above all, it is about a business owner's understanding of, and comfort level with, cloud.

The IT industry nevertheless does not wait, nor can it simply maintain the status quo. Why private cloud? The pressure to produce more with less, and the need to be instantly ready to respond to a market opportunity, are not about pursuing excellence but a matter of survival in today's economic climate, with ever-increasing user expectations driven by current events and emotions. One will find that a private cloud is a vehicle to facilitate and transform IT with increased productivity and reduced TCO over time, as discussed in the Building a Private Cloud blog post series. IT needs cloud computing to shorten go-to-market, to promote consumption, to accelerate product adoption, and to change the dynamics by offering better, quicker, and more with less. And for enterprise IT, it is critical to first convert existing on-premises deployments into a cloud setting; a private cloud solution is a strategic step and a logical approach for enterprise IT to become cloud friendly, cloud enabled, and cloud ready.