Windows Server 2012 R2 Hyper-V Replica Configuration, Simulation and Verification Explained

The Hyper-V role in Windows Server 2012 R2 includes Hyper-V Replica, a built-in replication mechanism at the virtual machine (VM) level. Hyper-V Replica can asynchronously replicate a selected VM running at a primary (or source) site to a designated replica (or target) site across a LAN or WAN. The following schematic depicts this concept.

The replication process first creates an identical VM in the Hyper-V Manager of the target replica server. The change tracking module of Hyper-V Replica then tracks write operations in the source VM and replicates the changes at a set interval after the last successful replication, regardless of whether the associated VHD files are hosted on SMB shares, Cluster Shared Volumes (CSVs), SANs, or directly attached storage devices.

Hyper-V Replica Requirements

In the schematic above, both the primary site and the replica site are Windows Server 2012 R2 Hyper-V hosts. The former runs production, or so-called primary, VMs, while the latter hosts the replicated ones, which remain off and are brought online only if a corresponding primary VM experiences an outage. Hyper-V Replica requires neither shared storage nor specific storage hardware. Once an initial copy is replicated to the replica site, Hyper-V Replica replicates only the changes of a configured primary VM, i.e. the deltas, asynchronously.

It goes without saying that assessing business needs and developing a plan for what to replicate and where is essential. In addition, both the primary site and the replica site (i.e. the Hyper-V hosts running the source and replica VMs, respectively) must have Hyper-V Replica enabled in the Hyper-V Settings of Hyper-V Manager. IP connectivity between the two sites is assumed where applicable. Notice that Hyper-V Replica is a server capability and is not applicable to Client Hyper-V, such as Hyper-V running on a Windows 8 client.

Establishing Hyper-V Replica Infrastructure

Once all the requirements are in place, enable Hyper-V Replica on both the primary site (i.e. the Windows Server 2012 R2 Hyper-V host where a source VM runs) and the target replica server, then configure an intended VM at the primary site for replication. The following is a sample process for establishing Hyper-V Replica.

Step 1 – Enable Hyper-V Replica

Identify a Windows Server 2012 R2 Hyper-V host as the primary site where a target source VM runs. Enable Hyper-V Replica in the Hyper-V Settings of Hyper-V Manager on that host. The following are sample Hyper-V Replica settings of a primary site. Repeat the step on the target replica site to enable Hyper-V Replica there as well. In this article, DEVELOPMENT and VDI are both Windows Server 2012 R2 Hyper-V hosts, where the former is the primary site and the latter is the replica site.

image
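
For reference, the same host-level settings can also be applied with the Hyper-V PowerShell module. The following is a minimal sketch run on the replica host (here VDI), assuming Kerberos authentication over HTTP port 80; the storage path is a placeholder to be adjusted for the environment.

# Allow this host to receive replication traffic (run on the replica server, e.g. VDI)
Set-VMReplicationServer -ReplicationEnabled $true `
    -AllowedAuthenticationType Kerberos `
    -KerberosAuthenticationPort 80 `
    -ReplicationAllowedFromAnyServer $true `
    -DefaultStorageLocation 'D:\Replicas'   # hypothetical path

# Open the built-in firewall rule for the Hyper-V Replica HTTP listener
Enable-NetFirewallRule -DisplayName 'Hyper-V Replica HTTP Listener (TCP-In)'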

Step 2 – Configure Hyper-V Replica on Source VMs

Using the Hyper-V Manager of the primary site, enable replication of a VM by right-clicking the VM and selecting the option, then walk through the wizard to establish a replication relationship with the target replica site. Below shows how to enable replication of a VM named A-Selected-VM at the primary site, DEVELOPMENT. The replication settings are fairly straightforward and left for the reader to become familiar with.

image
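
The same replication relationship can be configured from the primary host with PowerShell as well. A minimal sketch run on DEVELOPMENT, assuming Kerberos authentication on port 80 and a five-minute replication interval (the frequency parameter is configurable in the 2012 R2 release):

# Run on the primary host (DEVELOPMENT) to pair A-Selected-VM with the replica server VDI
Enable-VMReplication -VMName 'A-Selected-VM' `
    -ReplicaServerName 'VDI' `
    -ReplicaServerPort 80 `
    -AuthenticationType Kerberos `
    -ReplicationFrequencySec 300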

Step 3 – Carry Out an Initial Replication

While configuring Hyper-V Replica for a source VM in step 2, there are options for delivering the initial replica. The initial replica can be transmitted over the network, either immediately or according to a configured schedule. Optionally, the initial copy can be delivered out of band, i.e. with external media, in cases where sending it over the wire might overwhelm the network, be unreliable, take too long, or be undesirable due to the sensitivity of the content. There are, however, additional considerations with out-of-band delivery of an initial replica.

Option to Send Initial Replica Out of Band

The following shows the option to send the initial copy using external media while enabling Hyper-V Replica on a VM named A-Selected-VM.

image

An exported initial copy includes the associated VHD disk and an XML file capturing the configuration information as depicted below. This folder can then be delivered out of band to the target replica server and imported into the replica VM.

image
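
The initial replication can also be started from PowerShell, either over the network or exported to external media for out-of-band delivery. A sketch, where E:\InitialCopy is a hypothetical folder on removable media:

# Send the initial copy over the network immediately
Start-VMInitialReplication -VMName 'A-Selected-VM'

# Or export the initial copy to external media for out-of-band delivery
Start-VMInitialReplication -VMName 'A-Selected-VM' -DestinationPath 'E:\InitialCopy'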

When delivering an initial copy with external media, a replica VM configuration is still automatically created at the intended replica site, exactly as in the user experience of sending an initial replica over the network. The difference is that the source VM (here the VM on the primary site, DEVELOPMENT) shows an initial replication in progress in its replication health, while the replica VM (here the VM on the replica site, VDI) has an option to import an initial replica in its replication settings and its replication health presents a warning, as shown below:

The primary site

image

The replica site

image

After receiving the initial replica on external media, the replica VM can then import the disk as shown:

image

and upon successfully importing the disk, the following shows the replica VM now with additional replication options and updated replication health information.

image
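
The import can likewise be done from PowerShell on the replica host once the media arrives. A minimal sketch, reusing the hypothetical E:\InitialCopy folder from the export step:

# Run on the replica host (VDI) after the exported folder has been delivered
Import-VMInitialReplication -VMName 'A-Selected-VM' -Path 'E:\InitialCopy'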

Step 4 – Simulate a Failover from a Primary Site

This is done at the associated replica site. First verify that the replication health of the target replica VM is normal. Then right-click the replica VM and select Test Failover from the Replication options, as shown here:

image

Notice that, normally, a failover should occur only when the primary VM experiences an outage. Nevertheless, a simulated failover does not require shutting down the primary/source VM. It simply creates a new VM instance at the replica site without altering the replication settings in place. The VM instance created by a Test Failover should be deleted via the Stop Test Failover option in the replication UI where the Test Failover was originally initiated, as demonstrated below, to ensure all replication settings remain valid.

image
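
The same simulation can be scripted on the replica host. A minimal sketch using the -AsTest switch to create the test instance and Stop-VMFailover to clean it up afterwards:

# Run on the replica host (VDI): create a test VM without touching the primary
Start-VMFailover -VMName 'A-Selected-VM' -AsTest

# After validation, remove the test VM while keeping the replication settings intact
Stop-VMFailover -VMName 'A-Selected-VM'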

Step 5 – Do a Planned Failover from the Primary Site

In a maintenance event where a primary/source VM is expecting an outage, ensure the replication health is normal, shut down the source VM, and then conduct a Planned Failover from the primary site, as shown below.

image
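
A planned failover can also be driven from PowerShell. The following is a sketch of the sequence, with each command run on the host noted in the comment; the -Prepare switch sends any remaining changes before the roles are switched:

# On the primary host (DEVELOPMENT): stop the VM and send un-replicated changes
Stop-VM -Name 'A-Selected-VM'
Start-VMFailover -VMName 'A-Selected-VM' -Prepare

# On the replica host (VDI): fail over, reverse the replication, and start the VM
Start-VMFailover -VMName 'A-Selected-VM'
Set-VMReplication -VMName 'A-Selected-VM' -Reverse
Start-VM -Name 'A-Selected-VM'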

Notice that successfully performing a Planned Failover with the presented settings will automatically establish a reverse replication relationship. To verify the failover has been carried out correctly, check the replication health of the replication pair of VMs on both the primary site and the replica site, before and after the planned failover. The results need to be consistent across the board. The following examines a replication pair: A-Selected-VM on DEVELOPMENT at the primary site and its replica on VDI at the replica site. Prior to a planned failover from DEVELOPMENT, the replication health information of the source VM (left, at the DEVELOPMENT server) and the replica VM (right, at the VDI server) is as shown below.

image

After a successful planned failover event, as demonstrated at the beginning of this step, the replication health information becomes the following, where the replication roles have been reversed and the state is normal. This signifies the planned failover was carried out successfully.

image
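
The replication role, state, and health shown in the screenshots can also be queried with PowerShell, which is handy for confirming that the roles have indeed switched. A minimal sketch to run on either host:

# Summarize replication mode (Primary/Replica), state, and health for the VM
Get-VMReplication -VMName 'A-Selected-VM'

# Detailed statistics, including pending replication size and the last replication time
Measure-VMReplication -VMName 'A-Selected-VM'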

In the event that a source VM experiences an unexpected outage at the primary site, failing over the primary VM to its replica will not automatically establish a reverse replication, since un-replicated changes have been lost along with the unexpected outage.

Step 6 – Finalize the Settings with Another Planned Failover

Conduct another Planned Failover to confirm that the reverse replication works. In the presented scenario, the primary VM (now on VDI) is shut down, then failed over back to the current replica site (now DEVELOPMENT). Upon successful execution of this planned failover, the resulting replication relationship should again have DEVELOPMENT as the primary site and VDI as the replica site, which is the same state as at the beginning of step 5. At this point, in a DR scenario, failing the production site (in the above example, the DEVELOPMENT server) over to a DR site (here, the VDI server), and failing over from the DR site (VDI) back to the original production site (DEVELOPMENT) as needed, are proven to work bi-directionally.

Step 7 – Incorporate Hyper-V Replica into Existing SOPs

Incorporate Hyper-V Replica configurations and maintenance into applicable IT standard operating procedures and start monitoring and maintaining the health of Hyper-V Replica resources.

Hyper-V Extended Replication

In Windows Server 2012 R2, we can now further replicate a replica VM from a replica site to an extended replica site, similar to a backup of a backup. The concept, as illustrated below, is quite simple.

image

The process is to first set up Hyper-V Replica from a primary site to a target replica site, as described earlier in this article. Then, at the replica site, simply configure replication of the replica VM, as shown below. In this way, a source VM is replicated from the primary site to the replica site, which in turn replicates the replica VM to an extended replica site.

image
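
Extended replication can likewise be configured with PowerShell by enabling replication of the replica VM on the replica host. A sketch under the assumption that EXTENDED-HOST is a hypothetical third Hyper-V server acting as the extended replica site:

# Run on the replica host (VDI): extend replication of the replica VM to a third host
Enable-VMReplication -VMName 'A-Selected-VM' `
    -ReplicaServerName 'EXTENDED-HOST' `
    -ReplicaServerPort 80 `
    -AuthenticationType Kerberos
Start-VMInitialReplication -VMName 'A-Selected-VM'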

In a real-world setting, Hyper-V Extended Replication fits business needs well. For instance, IT may want a replica site nearby for convenience and timely response when monitored VMs experience a localized outage within IT's control while the datacenter remains up and running. In an outage that takes an entire datacenter or geo-region down, however, an extended replica stored in a geo-distant location becomes pertinent.

Hyper-V Replica Broker

Importantly, to employ a Hyper-V failover cluster as a replica site, one must use Failover Cluster Manager to perform all Hyper-V Replica configuration and management, and must first create a Hyper-V Replica Broker role, as demonstrated below.

image

And here is a sample configuration:

image
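
For reference, the broker role can also be created with the FailoverClusters PowerShell module. The following is a sketch only; the client access point name Replica-Broker and the static IP address are hypothetical values:

# Run on a cluster node: create a client access point and the replication broker resource
Add-ClusterServerRole -Name 'Replica-Broker' -StaticAddress 192.168.1.250
Add-ClusterResource -Name 'Virtual Machine Replication Broker' `
    -Type 'Virtual Machine Replication Broker' -Group 'Replica-Broker'
Add-ClusterResourceDependency 'Virtual Machine Replication Broker' 'Replica-Broker'
Start-ClusterGroup 'Replica-Broker'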

Although the above uses Kerberos authentication, as far as Hyper-V Replica is concerned an Active Directory domain is not a requirement. Hyper-V Replica can also be implemented between workgroups and untrusted domains with certificate-based authentication. Active Directory is, however, a requirement if a Hyper-V host is part of a failover cluster, as in the case of Hyper-V Replica Broker, and in such a case all Hyper-V hosts of the failover cluster must be in the same Active Directory domain.

Business Continuity and Disaster Recovery (BCDR)

In a BC scenario with a planned failover event, for example scheduled maintenance of a primary VM, Hyper-V Replica will first copy any un-replicated changes to the replica VM, so that the event produces no loss of data. Once the planned failover is completed, the replica VM becomes the primary VM and carries the workload, while a reverse replication is automatically set up. In a DR scenario, i.e. an unplanned outage of a primary VM, an operator needs to manually bring up the replica VM with an expectation of some data loss, specifically the changes since the last successful replication, based on the set replication interval.
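
For the unplanned (DR) case described above, the replica can be brought up from the replica host with PowerShell. A hedged sketch; Complete-VMFailover commits the failover and removes any extra recovery points once the failed-over VM has been verified:

# Run on the replica host when the primary site is down
Start-VMFailover -VMName 'A-Selected-VM'
Start-VM -Name 'A-Selected-VM'

# Once the failed-over VM is verified, commit the failover
Complete-VMFailover -VMName 'A-Selected-VM'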

Closing Thoughts

The significance of Hyper-V Replica is not only its simplicity to understand and operate, but also the readiness and affordability that come with Windows Server 2012 R2. A DR solution for businesses of any size is now a reality with Windows Server 2012 R2 Hyper-V. For small and medium businesses, a DR solution has never been so feasible. IT pros can now configure, simulate, and verify results in a productive, on-demand manner to formulate, prototype, and pilot a DR solution at a VM level on Windows Server 2012 R2 Hyper-V infrastructure. At the same time, in an enterprise setting, the System Center family remains the strategic platform to maximize the benefits of Windows Server 2012 R2 and Hyper-V with a comprehensive system management solution, including private cloud automation, process orchestration, and service deployment and management at a datacenter level.

Additional Information


TechEd 2014 North America Announcement Summary

At TechEd 2014 North America, Microsoft announced a large set of new innovations for IT professionals and enterprise developers to embrace a cloud culture in today’s mobile-first, cloud-first world. The following is a summary based on Brad Anderson’s keynote on May 5, 2014. I have highlighted my personal favorites.

 

Transform the Datacenter Announcements

  • Azure ExpressRoute
  • Azure Import/Export
  • Azure Compute-Intensive A8 & A9 Virtual Machines
  • Internal Load Balancing
  • Azure Files
  • Virtual Networking features
  • Microsoft Antimalware and security partnership with Trend Micro and Symantec
  • IP Reservation for VIPs and Instance-level public IP’s for Virtual Machines
  • Azure Site Recovery (formerly Hyper-V Recovery Manager)

 

People Centric-IT Announcements

  • Azure RemoteApp
  • Windows Intune Roadmap
  • Azure Active Directory Premium enhancements

 

Modern Business Apps Announcements

  • API Management
  • Azure Traffic Manager External End Points
  • Azure Cache Services
  • BizTalk Hybrid Connections
  • BizTalk Server 2013 R2
  • ASP.NET vNext
  • Visual Studio 2013 Update 2
  • Visual Studio Online APIs
  • Multi-Device Hybrid Apps
  • Windows Client VMs for MSDN Subscribers
  • Desired State Configuration Support in the next update of Release Management
  • Visual Studio Online Migration Utility

 

Azure ExpressRoute

We are announcing the general availability of the ExpressRoute service.  AT&T and Equinix customers are able to use the ExpressRoute service at Silicon Valley, Washington, and London ExpressRoute locations. We are excited to announce a new partnership with TelecityGroup and SingTel which will expand ExpressRoute’s reach in Europe and APAC, and a new partnership with Zadara, which allows Azure customers to use Zadara storage via ExpressRoute in the U.S. ExpressRoute creates private, high-throughput connections between Azure datacenters and your existing infrastructure, whether it’s on-premises or in a colocation environment.

 

Azure Compute-intensive A8 & A9 Virtual Machines

We are announcing the general availability of compute-intensive A8 and A9 instances for virtual machines. These instances provide faster processors, faster interconnectivity, more virtual cores for higher computing power, and larger amounts of memory. With these instances, customers can run compute-intensive and network-intensive applications such as high-performance cluster applications and applications that use modeling, simulation and analysis, and video encoding.

 

Azure Files

We are announcing the public preview of Azure Files. This new service enables virtual machines in an Azure datacenter to mount a shared file system using the SMB protocol. These VMs will then be able to access the file system using standard Windows file APIs (CreateFile, ReadFile, WriteFile, etc.). Many VMs (or platform-as-a-service roles) can attach to these file systems concurrently, so customers can share persistent data easily between various roles and instances.

 

Azure Import/Export

We are announcing the general availability of Azure Import/Export service. By using Azure Import/Export, customers can move large amounts of data into and out of Azure Blobs much faster than is possible by downloading data from the internet. Transporting data from hard drives to Azure is easy when using the Microsoft high-speed, security-enabled internal network to transfer the data to our datacenter.

 

Microsoft Antimalware and security partnership with Trend Micro and Symantec

We are announcing the public preview of Microsoft Antimalware and partnerships with Trend Micro and Symantec. For Microsoft Antimalware, we are providing customers the choice of which anti-virus solution to use. One of those solutions is in partnership with Trend Micro, specifically deep integration of Trend’s Deep Security and SecureCloud products in the Azure platform. Additionally, Microsoft is also working to integrate PortalProtect with Azure. Another solution we are announcing is in partnership with Symantec: Symantec Endpoint Protection (SEP) is being supported on Azure. Through deep portal integration, customers have the ability to specify that they intend to use SEP within a VM.

 

Internal Load balancing

We are announcing the public preview of Internal Load Balancing. This new service provides the ability to load balance Azure virtual machines with private IP addresses. The internally load balanced endpoint will be accessible only within a virtual network (if the VM is within a virtual network) or within a cloud service (if the VM isn’t within a virtual network). Internal Load Balancing is available in the standard tier of VMs at no additional cost.

 

Virtual Networking features

We are announcing general availability of the two most requested virtual networking features: multiple site-to-site VPN and VNET-to-VNET connectivity. Virtual Network now supports more than one site-to-site VPN connection so customers can securely connect multiple on-premises locations with a virtual network (VNET) in Azure. VNET-to-VNET connectivity enables multiple virtual networks to be directly and securely connected with one another. We are enabling both cross-region VNET-to-VNET and in-region VNET-to-VNET connectivity.

 

IP Reservation for VIPs and Instance-level public IP’s for Virtual Machines

We are announcing the IP Reservation for VIPs and public preview of Instance-level public IPs for virtual machines. For IP reservation, customers can now reserve public IP addresses and use them as virtual IP (VIP) addresses for their applications. This enables scenarios where applications need to have static public IP addresses or where applications need to be updated by swapping the reserved IP addresses. During preview, customers can obtain two public IP addresses per subscription at no additional charge. With Instance-level Public IPs for VMs, customers can now assign public IP addresses to their virtual machines, so they become directly addressable without having to map an endpoint. This feature will enable scenarios like running FTP servers in Azure and monitoring virtual machines directly using their IPs.

 

Azure Site Recovery (formerly Hyper-V Recovery Manager)

We are announcing new capabilities for Azure Hyper-V Recovery Manager that will be in public preview next month. Today, Windows Azure Hyper-V Recovery Manager provides a disaster recovery solution for customers which helps protect the availability of System Center private clouds. In June a public preview of new features will enable customers to replicate virtual machines from their primary site directly to Azure, instead of a second customer site. To better reflect the service’s new, broader capabilities we will be renaming it Microsoft Azure Site Recovery in conjunction with the release of this preview. Updated resources, including an Academy Live, will be available in June at time of public preview.

 

Azure RemoteApp

We are announcing the public preview of Azure RemoteApp: a new service delivering remote applications from the Azure cloud. With Azure RemoteApp, business applications run on Windows Server in the Azure cloud where they’re easier to scale and update. Employees install Microsoft Remote Desktop clients on their Internet-connected laptop, tablet, or phone—and can then access applications as if they were running locally.

Azure RemoteApp helps IT:

  • Enable employees access to their corporate applications on a variety of devices
  • Scale up or down without expensive infrastructure costs and management complexity
  • Centralize and protect corporate applications on Azure’s reliable platform.

Azure RemoteApp is available at no additional cost during the preview period. For detailed information, please refer to the following resources:

 

Azure Active Directory Premium enhancements

We have announced the public preview of more enhancements and capabilities in Azure Active Directory Premium. The public preview of these additional enhancements to the service includes a new version of the DirSync tool, a new synchronization engine (Azure AD Sync), and Multi-Factor Authentication IP whitelisting, allowing companies to specify IP addresses from which MFA is not required. We also announced the public preview of the Azure AD Cloud App Discovery feature, which provides IT departments with visibility into all the cloud apps used within their organization.

 

Windows Intune Roadmap

Future investments for Windows Intune are in the areas of managed productivity, including the ability to apply data management and protection policies to an upcoming release of Office mobile apps, and corporate-owned mobile device management.

 

API Management

We are announcing the public preview of Azure API Management. This new service enables organizations to publish APIs to developers, partners, and employees securely and at scale. Organizations can use API Management to monetize their core products, transform their products into platforms, and create new content distribution channels.

 

Azure Cache Services

First, the Azure Managed Cache service is moving from public preview to general availability. Second, the Azure Redis Cache service is available in public preview. Azure Redis Cache is based on the popular open source Redis cache. A cache created using Redis Cache is accessible from any application within Azure. Lastly, as previously announced, the Azure Shared Caching Service will be retired in September 2014, and with it Microsoft’s Silverlight-based portal. Because the Azure Managed Cache Service is now generally available, we strongly encourage customers to migrate all existing caches on Shared Caching to the new Azure Managed Cache Service.

 

Traffic Manager External End Points

We are announcing the general availability of Azure Traffic Manager, which supports both Azure endpoints and external endpoints. Traffic Manager enables customers to control the distribution of user traffic to specified endpoints. With support for endpoints that can reside outside of Azure, customers can now build highly available applications across Azure and on-premises. In addition, intelligent traffic management policies can be applied across all managed endpoints.

 

BizTalk Hybrid Connections

We are announcing the public preview of BizTalk Hybrid Connections. This new Azure service enables cloud services to more securely, quickly, and easily integrate Azure cloud solutions with on-premises applications. With no custom code required, Hybrid Connections enables customers to connect to any on-premises TCP or HTTP resource—such as Microsoft SQL Server, MySQL, or any web service—from Azure Web Sites.

 

BizTalk Server 2013 R2

We are announcing the release of BizTalk Server 2013 R2. For over a decade, Microsoft customers and partners have deployed thousands of mission-critical integration solutions using BizTalk. This release enables customers to upgrade now with confidence by delivering improvements in performance, reliability, and functionality. BizTalk Server 2013 R2 allows customers to take advantage of the latest integration platform capabilities from Microsoft and accelerate solutions for vertical industries including healthcare and financial services.

 

Visual Studio 2013 Update 2

We are announcing the RTM of Visual Studio 2013 Update 2. With over 5 Million downloads of Visual Studio 2013 to date, today we delivered on our promise of enabling developers to create universal Windows apps for Windows 8.1 and Windows Phone 8.1 from one single project in Visual Studio.

 

ASP.NET vNext

We are sharing an early preview of ASP.NET vNext, a streamlined framework and runtime that is optimized for cloud and server workloads. It allows ASP.NET developers to leverage their existing skills and create applications with automatic cloud support built in. These components of ASP.NET will be part of the .NET Foundation as an open source project and will run across multiple platforms through a partnership with Xamarin.

 

Visual Studio Online APIs

We are announcing the public preview of Visual Studio Online APIs. This is a set of APIs and service hooks that extends Visual Studio Online and provides integration points to 3rd-party services. This makes it easier for organizations to adopt Visual Studio Online without abandoning the tools they’re using today, and for developers to build apps on any platform that can consume Visual Studio Online services.

 

Multi-Device Hybrid Apps

In addition to the ability for .NET developers to target iOS and Android devices with native-apps through the strategic partnership with Xamarin, we are announcing Visual Studio tooling support for hybrid-apps based on Apache Cordova™ (Multi-Device Hybrid Apps for Visual Studio – CTP). With these tools, web developers can use their existing skills in HTML and JavaScript to create hybrid packaged apps for multiple devices while taking advantage of each device’s capabilities.

 

Windows Client VMs for MSDN Subscribers

We are announcing that virtual machine images for Windows 7 and Windows 8.1 are now available in the Azure virtual machine gallery for MSDN subscribers. This means that you now have the flexibility to utilize an on-demand dev/test environment for client-targeted apps, leveraging the elasticity and scale of Microsoft Azure.

 

Desired State Configuration support in the next update of Release Management

In the next update of Release Management for Visual Studio, customers will be able to directly leverage PowerShell Desired State Configuration (DSC) scripts to configure and manage Windows based environments through Release Management. This extended capability of Release Management will allow customers to manage both on-premises and cloud based infrastructure as part of an application deployment. This capability places Microsoft ahead of Puppet and Chef for configuring and managing both on-premises and cloud based environments as part of an application deployment.

 

Visual Studio Online Migration Utility

Microsoft has partnered with OpsHub to deliver the OpsHub TFS to Visual Studio Online Migration Utility. This utility is a free offering provided by OpsHub that supports the one-time, one-way migration of on-premises Team Foundation Server data into Visual Studio Online. The goal of the OpsHub TFS to Visual Studio Online Migration Utility is to provide a straightforward migration path from an on-premises Team Foundation Server environment to the Visual Studio Online environment minimizing or eliminating the need for any consulting support.

  • OpsHub will be offering support and answering questions on StackOverflow

Invitation to Week-Long TechEd 2014 Live Streaming

At this year’s TechEd, Microsoft will announce and demo many remarkable new capabilities and functionalities to be delivered to the IT and developer industry. If you are not able to attend TechEd 2014 in person, you can join us online via TechEd Live Streaming all week long, including the keynotes and technical breakout sessions.

Register Now for the Tech Ed Live Stream

In this article, I’ve included the schedule for this week’s TechEd sessions that will be live streamed and the registration link you can use for joining us online.  Hope to see you there virtually!

Don’t miss these TechEd Live Stream sessions!

We have a full week planned of great technical content via the TechEd 2014 Live Stream sessions. Be sure to tune in for these topics …

Day | Time (Central) | Session Title | Primary Speaker(s)
5/12/2014 | 09:00AM – 10:30AM | TechEd 2014 Keynote Session | Brad Anderson, Corporate Vice President, Windows Server and System Center
5/12/2014 | 11:00AM – Noon | Enabling Enterprise Mobility with Windows Intune, Microsoft Azure, and Windows Server | Adam Hall; Andrew Conway; Demi Albuz; Jason Leznek
5/12/2014 | 01:15PM – 02:30PM | Windows PowerShell Unplugged with Jeffrey Snover | Jeffrey Snover
5/12/2014 | 03:00PM – 04:15PM | RemoteApp for Mobility and BYOD | Demi Albuz; Samim Erdogan
5/12/2014 | 04:45PM – 06:00PM | Microsoft System Center 2012 Configuration Manager: MVP Experts Panel | Greg Ramsey; Jason Sandys; Johan Arwidmark; Kent Agerlund; Steve Thompson
5/13/2014 | 08:30AM – 09:45AM | INTRODUCING: The Future of .NET on the Server | Scott Hanselman; Scott Hunter
5/13/2014 | 10:15AM – 11:30AM | DEEP DIVE: The Future of .NET on the Server | David Fowler; Scott Hanselman
5/13/2014 | 01:30PM – 02:45PM | Group Policy: Notes from the Field – Tips, Tricks, and Troubleshooting | Jeremy Moskowitz
5/13/2014 | 03:15PM – 04:30PM | Microsoft Desktop Virtualization Overview Session | Demi Albuz; Robin Brandl; Thomas Willingham
5/13/2014 | 05:00PM – 06:15PM | TWC: Sysinternals Primer: TechEd 2014 Edition | Aaron Margosis
5/14/2014 | 08:30AM – 09:45AM | Making Sense of the Microsoft Information Protection Stack | Chris Hallum
5/14/2014 | 10:15AM – 11:30AM | Mark Russinovich and Mark Minasi on Cloud Computing | Mark Minasi; Mark Russinovich
5/14/2014 | 01:30PM – 02:45PM | Entity Framework: Building Applications with Entity Framework 6 | Rowan Miller
5/14/2014 | 03:15PM – 04:30PM | Windows PowerShell Best Practices and Patterns: Time to Get Serious | Don Jones
5/14/2014 | 05:00PM – 06:15PM | What’s New in Windows Server 2012 R2 Hyper-V | Jeff Woolsey
5/15/2014 | 08:30AM – 09:45AM | Real-World Windows 8.1 Deployment: Notes from the Field | Johan Arwidmark
5/15/2014 | 10:15AM – 11:30AM | Async Best Practices for C# and Visual Basic | Mads Torgersen
5/15/2014 | 01:00PM – 02:15PM | VDI Deployment Walkthrough | John Kelbley; Rich McBrine; Robin Brandl
5/15/2014 | 02:45PM – 04:00PM | 2014 Edition: How Many Coffees Can You Drink While Your PC Starts? | Matthew Reynolds

Join TechEd from wherever you are.

Use the link below to register for the TechEd Live Stream to watch the sessions online this week – live from Houston TX.

Register Now for the TechEd Live Stream

Join the discussion! What are you most excited about?

Which of the announcements, new features or enhancements are you most excited to begin leveraging in your IT organization? Let us know your thoughts in the Comments area below!  In addition, be sure to follow us online via the #MSTechEd hashtag on Twitter.

Moving a Microsoft Azure VM to a Different Subnet Within a Virtual Network

In Microsoft Azure, an administrator can now use PowerShell to migrate a VM from one subnet to another within the same virtual network. This offers an opportunity to reorganize an application topology for better management of subnet capacity and grouping. For instance, when an existing subnet, ABC, is running out of IP addresses, customers can move the associated VMs to a different and perhaps larger subnet. Subnet ABC can then be deleted to recover the IP address space.

The logical process to carry out a subnet migration is to:

  1. Migrate a VM from one subnet to another, followed by
  2. Update the VM configuration, which restarts the VM

The MSDN documentation at http://msdn.microsoft.com/en-us/library/azure/dn643636.aspx offers a sample PowerShell statement as below.

Get-AzureVM -Name <a target VM name> -ServiceName <the associated service name> `
| Set-AzureSubnet -SubnetNames <a target subnet to migrate to> `
| Update-AzureVM

Here I provide a sample scenario of moving a VM from a DEV subnet to a TEST one, as illustrated below.

  • A virtual network, fooNet, has three subnets: AD, DEV, and TEST, and each subnet is configured with a specific IP address range as shown.
  • The intent is to move the VM, app1, currently in DEV, to the subnet TEST.
  • The VM, app1, is deployed to the service foo-devtest, i.e. http://foo-devtest.cloudapp.net.
  • An ISE session connects to the Microsoft Azure subscription account and successfully moves the VM to the target subnet, TEST (a sketch of the statement appears after this list). For those who are not familiar with Microsoft Azure PowerShell, a self-training tool called Quick Start Kit (QSK) is available at https://yungchou.wordpress.com/2014/01/21/announcing-windows-azure-iaas-quick-start-kit-qsk-at-http-aka-ms-qsk/.
  • The VM now resides in the subnet TEST.
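
Applying the MSDN sample to this scenario, the statement would look like the following sketch (using the names shown above: the VM app1, the cloud service foo-devtest, and the target subnet TEST):

Get-AzureVM -Name "app1" -ServiceName "foo-devtest" `
| Set-AzureSubnet -SubnetNames "TEST" `
| Update-AzureVM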

Important Compliance Information of Microsoft Cloud Computing Solutions

Subscribe to updates of this page.

Microsoft’s Cloud Infrastructure Receives FISMA Approval

http://aka.ms/FISMA  

Windows Azure Compliance

http://aka.ms/waCompliance

Windows Azure HIPAA Implementation Guidance

http://aka.ms/waHIPPA

   

Office 365 Trust Center and Regulatory Compliance FAQ

http://aka.ms/o365TC

HIPAA Compliance with Office 365 (Exchange Online) – Steps for Configuration and Use

http://aka.ms/exOnlineHIPPA

HIPAA/HITECH Act Implementation Guidance for Microsoft Office 365 and Microsoft Dynamics CRM Online

http://aka.ms/o365HIPPA or http://aka.ms/crmHIPPA

Office 365 Compliance Certifications

http://aka.ms/o365Compliance

image

A Memorandum to IT Pros on Cloud Computing

The IT industry is moving fast with cloud computing, and there are fundamental changes in how we approach business that we must grasp to fully appreciate the opportunities presented to us.

Fabric, Not Virtualization

In cloud computing, resources are consumed via abstractions without the need to reveal the underlying physical complexities. Ultimately, all cloud computing artifacts consist of resources categorized into three pools: compute, networking, and storage. Compute is the ability to execute code and run instances. Networking is how instances and resources are connected or isolated. And storage is where resources, configurations, and data are kept. In the current state of technology, one approach is to deliver resources via a form of virtualization: these three resource pools are abstracted with server virtualization, network virtualization, and storage virtualization, respectively, to collectively form the so-called fabric, as detailed in “Resource Pooling, Virtualization, Fabric, and Cloud.”

Fabric is an abstraction signifying the ability to discover and manage datacenter resources. Sometimes we refer to the owner of fabric as a fabric controller, which is essentially a datacenter management solution that manages all datacenter physical and virtual resources. With fabric, a server is delivered as a virtual machine (VM), an IP address space can be logically defined through a virtual network layer, and a disk drive appearing as a massive and continuous storage space is in fact an aggregate of the storage provided by just a bunch of disks. Virtualization is an essential building block of cloud computing. We must however go beyond virtualization and envision “fabric” as the architectural layer of abstraction.

A critical decision in transforming into cloud computing is to establish, as early as possible, a holistic approach to fabric management, i.e. deploying a system management solution that provides a common and comprehensive platform for integrating, managing, and operating the three resource pools. This management solution strategically forms the fabric such that datacenter resources, regardless of whether they are physical or virtualized, deployed on premises or off premises, are all discoverable and manageable in a transparent fashion.

Service Architecture, Not VMs 

Many may consider IaaS to be about deploying VMs. In cloud computing, however, an IaaS deployment is not just about individual VMs. Modern computing models employ distributed computing with multiple machine tiers, where each tier may have multiple VM instances taking incoming requests or processing data. A typical example is a three-tier web application with a frontend, mid-tier, and backend, which maintains multiple frontend instances for load balancing and multiple backend instances for high availability of data. An application is functional only when all three tiers are considered and operated as a whole. There are times when an application architecture is formed with a single machine tier, i.e. one machine instance constitutes an application instance, and operating directly on a VM is then equivalent to operating on the service. We must nevertheless manage the deployment as a service architecture deployment and not as an individual VM deployment.

A service in cloud computing is an application instance, a security boundary, and a management entity. One deploys a service, starts and stops a service, scales a service, upgrades a service. A service, from an operations point of view, is a set of VMs collectively maintaining an application architecture, a run-time environment, and a running instance. No, cloud computing is not just about deploying VMs, since cloud has no concept of individual VMs. It is about the ability to deploy an application architecture, followed by configuring the target application run-time environment, before finally installing and running an application. It is about a service, i.e. an application. And IaaS is about the ability to deploy a service architecture and not individual VMs.

Services, Not Servers

A similar concept to service architecture vs. VMs is services vs. servers. Here a server is the server OS instance running in a deployed VM. A “service” is operationally a set of servers which forms a service, or application, architecture. In the context of cloud computing, a service carries a set of attributes, five to be specific, as defined in NIST SP 800-145 and summarized in the 5-3-2 Principle of Cloud Computing. Deploying a server (or a VM) and deploying a service denote very different capabilities. Deploying ten VMs is a process of placing ten individual servers, and it says little about the scope, relationships, scalability, and management of the ten servers. Deploying ten instances of a service, on the other hand, denotes one service definition with ten instantiations. The significance of the ten service instances is that, since all instances are based on the same service definition, there is an opportunity to optimize business objectives via “service” management. An example is to employ upgrade domains to eliminate downtime during an application upgrade.

A service is also how cloud computing is delivered and consumed. That IaaS, PaaS, and SaaS all end with the term “service” is a clear indication of how significant a role the service plays. It is “the” way in cloud computing to deliver resources for consumption. If it is not delivered as a service, it is not cloud.

From a customer’s point of view, a service (i.e. an application) is what is consumed. Therefore IT pros should pay attention to what is running within a server and not just the server itself. From a system management viewpoint, what matters is the ability to look into a server instance, drill down to the application level, and gain insight into application usage and health. For instance, for a database application, what is critical to know and respond to is the health of the databases and not just the state of the server which hosts the database application.

So for IT pros, cloud computing is more than just how a server is automatically configured and deployed; it is how the application running in the server instance is defined, constructed, deployed, and managed, including fault domains and upgrade domains, availability, geo-redundancy, SLA, pricing, costs, cross-site recovery, and so on.

Hybrid, Not On-Premises

With virtualization in place, enterprise IT can accelerate cloud computing adoption through hybrid deployment scenarios. Here a hybrid cloud is a private cloud with a cross-premises deployment. For example, an on-premises private cloud with some off-premises resources is a form of hybrid cloud, and vice versa. A hybrid cloud based on an on-premises private cloud offers an opportunity to keep sensitive information on premises while taking advantage of the flexibility and readiness that a 3rd-party cloud service provider can offer to host non-sensitive data. An on-premises private cloud solution is a stepping stone; the ability to define, deploy, and manage a hybrid cloud is where IT needs to be.

The idea of a hybrid cloud surfaces an immediate challenge: how to enable a user to self-serve resources in a cross-premises deployment. Self-service is an essential characteristic of cloud computing and plays a crucial role in fundamentally minimizing training and support costs while continually promoting resource consumption. For a hybrid IT environment, there are strategically important considerations including consistent user experience across on-premises and off-premises deployments, SSO maturity and federated identity solutions, a manageable delegation model, and interoperability with 3rd-party vendors. To ensure IT agility, a management platform that can manage resources not just physical and virtualized, but also those deployed to a private cloud, a public cloud, or a hybrid cloud, is increasingly critical.

Closing Thoughts

Transitioning to a cloud computing platform is critical for enterprise IT to compete in the emerging economy, which is driven by emotions and current events and intensified by social media. IT should institute a comprehensive management solution at the first opportunity to facilitate and converge fabric construction with cloud computing methodology. Keep staying focused on constructing, deploying, and managing:

  • Not virtualization, but fabric
  • Not VMs, but a service architecture
  • Not servers, but service instances
  • Not on-premises, but hybrid deployment scenarios

For enterprise IT, the determining factor of a successful transformation is the ability to continue managing not only what has been established, but what is emerging; not only physical and virtualized, but those deployed to private, public, and hybrid clouds; not only from one vendor’s solution platform, but vSphere, Hyper-V, Citrix and beyond.

Resource Pooling, Virtualization, Fabric, and Cloud

One of the five essential attributes of cloud computing (ref. The 5-3-2 Principle of Cloud Computing) is resource pooling which is an important differentiator separating the thought process of traditional IT from that of a service-based, cloud computing approach.

Resource pooling in the context of cloud computing and from a service provider’s viewpoint denotes a set of strategies for standardizing, automating, and optimizing resources. For a user, resource pooling institutes an abstraction for presenting and consuming resources in a consistent and transparent fashion.

This article presents key concepts derived from resource pooling as the following:

  • Resource Pools
  • Virtualization in the Context of Cloud Computing
  • Standardization, Automation, and Optimization
  • Fabric
  • Cloud
  • Closing Thoughts

Resource Pools

Ultimately, data center resources can be logically placed into three categories: compute, networks, and storage. For many, this grouping may appear trivial. It is, however, a foundation upon which cloud computing methodologies are developed, products designed, and solutions formulated.

Compute

This is a collection of all CPU-related capabilities. Essentially all datacenter physical and virtual servers, whether supporting or actually running a workload, are part of this compute pool. The compute pool represents the total capacity for executing code and running instances. The process of constructing a compute pool is to first inventory all servers and identify virtualization candidates, followed by implementing server virtualization. And it is never too early to introduce a system management solution to facilitate these processes, which in my view is a strategic investment and a critical component of all cloud initiatives.
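
As a minimal sketch of that first inventory pass, the following assumes the Active Directory PowerShell module and CIM connectivity to each server; the filter, thresholds, and selected properties are illustrative only.

    # Minimal sketch: enumerate server machine accounts from AD, then query each
    # for CPU and memory to shortlist virtualization (P2V) candidates.
    Import-Module ActiveDirectory

    $servers = Get-ADComputer -Filter 'OperatingSystem -like "*Server*"' |
               Select-Object -ExpandProperty Name

    $inventory = foreach ($server in $servers) {
        try {
            $os  = Get-CimInstance -ComputerName $server -ClassName Win32_OperatingSystem
            $cpu = Get-CimInstance -ComputerName $server -ClassName Win32_Processor
            [PSCustomObject]@{
                Name        = $server
                OS          = $os.Caption
                LogicalCPUs = ($cpu | Measure-Object -Property NumberOfLogicalProcessors -Sum).Sum
                MemoryGB    = [math]::Round($os.TotalVisibleMemorySize / 1MB, 1)
            }
        } catch { Write-Warning "Cannot reach $server" }
    }

    # Lightly configured physical servers are typical consolidation candidates.
    $inventory | Sort-Object MemoryGB | Format-Table -AutoSize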

Networks

The physical and logical artifacts put in place to connect, segment, and isolate resources from layer 3 and below are gathered in the network pool. Networking makes resources discoverable and hence potentially manageable. In this age of instant gratification, everything has to be readily available within a relatively short window of opportunity. This is an economy driven by emotions and current events. The need to be mobile and connected at the same time is redefining IT security and system administration boundaries, which plays a direct and impactful role in user productivity and customer satisfaction. Networking in cloud computing is more than just remote access; it is an abstraction empowering thousands of users to self-serve and consume resources anytime, anywhere, with any device. Bring your own device, bring your own network, and the consumerization of IT are various expressions of the network and mobility requirements of cloud computing.

Storage

This has long been a very specialized and sometimes mysterious part of IT. An enterprise storage solution is frequently characterized as a high-cost item with significant financial and technical commitments to specialized hardware, proprietary APIs and software, a dependency on direct vendor support, etc. In cloud computing, storage solutions are becoming very noticeable since the ability to grow and shrink resource capacities based on demand, i.e. elasticity, demands an enterprise-level, massive, reliable, and resilient storage solution at scale. While enterprise IT is consolidating resources and transforming existing establishments into a cloud computing environment, the ability to leverage existing storage solutions from various vendors and integrate them into an emerging cloud storage solution is a noticeable opportunity for minimizing cost.

Virtualization in the Context of Cloud Computing

In the last decade, virtualization has proved its value and accelerated the realization of cloud computing. Then, virtualization was mainly server virtualization, which in an over-simplified description means hosting multiple server instances on the same hardware while each instance runs transparently and in isolation, namely as if each consumes the entire hardware and is the only instance running. Now, with so many new technologies and solutions emerging, customers’ expectations and business requirements have been evolving, and we should validate virtualization in the context of cloud computing to fully address the innovations rapidly changing how IT conducts business and delivers services. As presented below, in the context of cloud computing, consumable resources are delivered in some virtualized form. Various virtualization layers collectively construct and form the so-called fabric.

Server Virtualization

The concept of server virtualization remains: running multiple server instances on the same hardware while each instance runs transparently and in isolation, as if each instance is the only instance running and consumes the entire server hardware.

In addition to virtualizing and consolidating servers, server virtualization also signifies the practices of standardizing server deployment, switching away from physical boxes to VMs. Server virtualization is the abstraction layer for packaging, delivering, and consuming a compute pool.

There are a few important considerations in virtualizing servers. IT needs the ability to identify and manage bare metal so that the entire resource life-cycle, from commissioning to decommissioning server hardware, can be automated. To fundamentally reduce support and training costs while increasing productivity, a consistent platform with tools applicable across physical, virtual, on-premises, and off-premises deployments is essential. The last thing IT wants is one set of tools for managing physical resources and another for working on those virtualized; one set of tools for on-premises deployments and another for those deployed to a service provider; one set of tools for infrastructure development and another for configuring applications. The essential goal is one skill set for all, namely one platform for all, one methodology for all, and one set of tools for all. This advantage is obvious when, for example, developing applications and deploying Windows Server 2012 R2 on premises or off premises to Windows Azure. The Active Directory security model can work across sites, System Center can manage resources deployed off premises to Windows Azure, and Visual Studio can publish applications across platforms. IT pros can operate all of this based on Windows system administration skills. There is minimal operations training needed since everything provides a consistent Windows user experience on a common platform and security model.

Network Virtualization

A similar idea to server virtualization applies here. Network virtualization is the ability to run multiple networks on the same network infrastructure while each network runs transparently and in isolation, as if each network is the only network running and consumes the entire network hardware.

Conceptually, since each network instance runs in isolation, one tenant’s 192.168.x network is not aware of another tenant’s identical 192.168.x network running on the same network infrastructure. Network virtualization provides the translation between physical network characteristics and the representation of a logical network. Consequently, above the network virtualization layer, various tenants, while running in isolation, can have identical network configurations. This abstraction essentially substantiates the concept of bringing your own network.

A great example of network virtualization is Windows Azure virtual networks. At any given time, multiple Windows Azure subscribers can all allocate the same 192.168.0.0/16 address space with an identical subnet scheme, for example 192.168.1.x, for deploying VMs. The VMs belonging to one subscriber are however not aware of or visible to those deployed by others, even though the network configuration, IP scheme, and IP address assignments may all be identical. Network virtualization in Windows Azure isolates one subscriber from the others such that each subscriber operates as if that subscription account is the only one employing the 192.168.x address space.
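
While the Windows Azure implementation is internal to the platform, Windows Server 2012 R2 Hyper-V Network Virtualization exposes the same concept on premises. The following is a minimal, illustrative sketch only; the customer addresses, provider addresses, virtual subnet IDs, and MAC addresses are placeholders, and the provider address and routing configuration that a full deployment also needs is omitted.

    # Two tenants reuse the same customer address (CA) on one Hyper-V host,
    # isolated by different virtual subnet IDs (VSIDs). Values are illustrative.

    # Tenant A: CA 192.168.1.10 mapped to provider address (PA) 10.10.0.5, VSID 5001
    New-NetVirtualizationLookupRecord -CustomerAddress "192.168.1.10" `
        -ProviderAddress "10.10.0.5" -VirtualSubnetID 5001 `
        -MACAddress "00155D010A01" -Rule "TranslationMethodEncap"

    # Tenant B: the identical CA 192.168.1.10 mapped to PA 10.10.0.6, VSID 6001
    New-NetVirtualizationLookupRecord -CustomerAddress "192.168.1.10" `
        -ProviderAddress "10.10.0.6" -VirtualSubnetID 6001 `
        -MACAddress "00155D010B01" -Rule "TranslationMethodEncap"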

Storage Virtualization

I believe this is where the next wave of drastic IT cost reduction, following server virtualization, happens. Historically, storage has been a high-cost item in the IT budget in each and every aspect, including hardware, software, staffing, maintenance, SLA, etc. Since the introduction of Windows Server 2012, there is a clear direction: storage virtualization is becoming a commodity and an essential skill for IT pros, since it is now built into the Windows OS. New capabilities like Storage Pools, Hyper-V over SMB, Scale-Out File Server, etc. are making storage virtualization part of server administration routines and easily manageable with tools and utilities like PowerShell, which many IT professionals are familiar with.

The concept of storage virtualization remains consistent with the idea of logically separating a computing object from the hardware where it runs. Storage virtualization is the ability to integrate multiple, heterogeneous storage devices, aggregate their storage capacity, and present and manage them as one logical storage device with a continuous storage space. And it should be apparent that a pooled JBOD enclosure is a realization of this concept.
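
As a concrete illustration, the following is a minimal sketch of building such an aggregated, resilient storage space with the in-box Storage Spaces cmdlets of Windows Server 2012 R2; the pool name, virtual disk name, size, and resiliency setting are illustrative choices.

    # Aggregate all poolable (JBOD) disks into one storage pool.
    $disks     = Get-PhysicalDisk -CanPool $true
    $subsystem = Get-StorageSubSystem -FriendlyName "*Storage Spaces*"

    New-StoragePool -FriendlyName "Pool01" `
        -StorageSubSystemFriendlyName $subsystem.FriendlyName `
        -PhysicalDisks $disks

    # Carve a thin-provisioned, mirrored virtual disk out of the pool.
    New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "Data01" `
        -Size 500GB -ResiliencySettingName Mirror -ProvisioningType Thin

    # Bring the new virtual disk online as a formatted volume.
    Get-VirtualDisk -FriendlyName "Data01" | Get-Disk |
        Initialize-Disk -PartitionStyle GPT -PassThru |
        New-Partition -UseMaximumSize -AssignDriveLetter |
        Format-Volume -FileSystem NTFS -NewFileSystemLabel "Data01" -Confirm:$false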

Standardization, Automation, and Optimization

Each of the three resource pools has an abstraction to logically present itself with characteristics and work patterns. A compute pool is a collection of physical hosts and VM instances; a virtualization host runs VMs which carry workloads deployed by service owners and consumed by authorized users. A network pool encompasses network resources including physical devices, logical switches, logical networks, address spaces, and site configurations. Network virtualization can map many identical logical/virtual IP addresses to the same physical NIC, so that a service provider can host tenants implementing identical network schemes on the same network hardware without conflict. A storage pool is based on storage virtualization, which is the concept of presenting an aggregated storage capacity as one continuous storage space as if provided by one logical storage device.

In other words, the three resource pools are wrapped with server virtualization, network virtualization, and storage virtualization, respectively. Together these three virtualization layers form the cloud fabric, which is presented as an architectural layer with opportunities to standardize, automate, and optimize deployments without the need to know the physical complexities.

Standardization

Virtualizing resources decouples the dependency between instances and the underlying hardware. This offers an opportunity to simplify and standardize the logical representation of a resource. For instance, a VM is defined and deployed with a VM template, which provides a level of consistency with a standardized configuration.

Automation

Once a VM’s characteristics are identified and standardized with configurations, we can generate an instance by providing only instance-based information, such as the VM machine name, which must be validated at deployment time to prevent duplicate names. With a deployment template, only minimal information is needed at deployment time, which can significantly simplify the process and facilitate automation. Standardization and automation are essential mechanisms so that a workload can scale on demand, i.e. become elastic, once a threshold is specified from a system management perspective.
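
As a minimal sketch of this pattern on a Hyper-V host, the following hypothetical function fixes every configuration value in a standard profile and accepts only the instance name at deployment time; the paths, sizes, and switch name are assumptions, not a prescribed standard.

    # Standardization plus automation: one fixed profile, one instance-specific input.
    function New-StandardVM {
        param([Parameter(Mandatory)][string]$Name)

        # Instance-based validation: reject duplicate names on this host.
        if (Get-VM -Name $Name -ErrorAction SilentlyContinue) {
            throw "A VM named '$Name' already exists."
        }

        $standard = @{
            MemoryStartupBytes = 2GB
            Generation         = 2
            SwitchName         = 'Production'
            NewVHDPath         = "D:\VMs\$Name\$Name.vhdx"
            NewVHDSizeBytes    = 60GB
        }
        New-VM -Name $Name @standard
    }

    # Only instance-specific information is needed at deployment time.
    New-StandardVM -Name 'APP-WEB-01'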

Optimization

Standardization provides a set of common criteria. Automation executes operations based on the set criteria with volume, consistency, and expediency. With standardization and automation, instances can be instantiated with consistency and predictability. In other words, cloud resources can be operated in bulk with consistency and predictability by standardizing configurations and automating operations. The next logical step is then to optimize usage based on SLA.

Optimization is essentially standardization and automation with intelligence and objectives. The intelligence is why analytics need to be designed into a cloud solution. With self-servicing, ubiquitous access, and elasticity, a cloud resource can technically be consumed anytime, by any number of users, and with capacities changed on demand. We need a mechanism to provide insight into usage, including when a resource is used, by whom, how much, to what degree, etc. A consumption-based chargeback or show-back model is what the analytics must provide so that system management can optimize resources accordingly and in a timely manner.
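
On Windows Server 2012 R2 Hyper-V, for example, the built-in resource metering cmdlets can supply the raw usage data such a show-back or chargeback model needs; the following is a minimal sketch, and the selected fields are a subset of what the metering report offers.

    # Start collecting per-VM usage on this host.
    Get-VM | Enable-VMResourceMetering

    # Later, report the accumulated averages and totals per VM.
    Get-VM | Measure-VM |
        Select-Object VMName, AvgCPU, AvgRAM, TotalDisk |
        Format-Table -AutoSize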

Integrating resource pooling and virtualization concepts into the OS has been underway since Windows Server 2012 R2 and System Center 2012. Server virtualization, network virtualization, and storage virtualization are now integrated into the Windows platform and are part of the server OS.

Fabric

Fabric is a significant abstraction in cloud computing. Fabric implies discoverability and manageability, denoting the ability to discover, identify, and manage a datacenter resource. Conceptually, fabric is an umbrella term encompassing all the datacenter physical and virtualized resources supporting a cloud computing environment. At the same time, a fabric controller represents the system management solution which manages, i.e. owns, the fabric.

In cloud architecture, fabric consists of the three resource pools: compute, networking, and storage. Compute provides the computing capabilities, executes code, and runs instances. Networking glues the resources together based on requirements. And storage is where VMs, configurations, data, and resources are kept. Fabric shields a user from the physical complexities of the three resource pools. All operations are eventually managed by the fabric controller of a datacenter. Above the fabric, there are logical views of consumable resources including VMs, virtual networks, and logical storage drives. By deploying VMs, configuring virtual networks, or acquiring storage, a user consumes resources. Under the fabric, there are virtualization and infrastructure hosts, Active Directory, DNS, clusters, load balancers, address pools, network sites, library shares, storage arrays, topology, racks, cables, etc., all under the fabric controller’s command, collectively forming the fabric.

For a service provider, building a cloud computing environment is essentially to establish a fabric controller and construct fabric: namely, institute a comprehensive management solution, build the three resource pools, and integrate server virtualization, network virtualization, and storage virtualization to form the fabric. From a user’s point of view, how and where a resource is physically located is not a concern. The user experience and satisfaction are largely based on the accessibility, readiness, and scalability of requested resources and on fulfillment of the SLA.

Cloud

This is a term well defined by NIST SP 800-145 and the 5-3-2 Principle of Cloud Computing. We need to be very clear on what a cloud must exhibit (the five essential attributes), how it is consumed (with SaaS, PaaS, or IaaS), and the model in which a service is deployed (private cloud, public cloud, or hybrid cloud). Cloud is a concept, a state, a set of capabilities such that the capacity of a requested resource can be delivered as a service, i.e. made available on demand.

The architecture of a cloud computing environment is presented with three resource pools: compute, networks, and storage. Each is an abstraction provided by a virtualization layer. Server virtualization presents a compute pool of VMs which supplies the computing power to execute code and run instances. Network virtualization offers a network pool and is the mechanism that allows multiple tenants with identical network configurations on the same virtualization hosts while connecting, segmenting, and isolating network traffic with virtual NICs, logical switches, address spaces, network sites, IP pools, etc. Storage virtualization provides a logical storage device whose capacity appears continuous by aggregating the capacity of a pool of storage devices behind the scenes. The three resource pools together form an abstraction, namely fabric, such that even though the underlying physical infrastructure may be intricate, the user experience above the fabric is presented in a logical and consistent fashion. Deploying a VM, configuring a virtual network, or acquiring storage is transparent with virtualization, regardless of where the VM actually resides, how the virtual network is physically wired, or which devices are aggregated into the requested storage.

Closing Thoughts

Cloud is, above all, a consumer-focused approach. It is about enabling a customer to consume resources on demand and at scale. And perhaps more significant than the ability to increase capacity on demand is the ability to release resources when they are no longer required. Cloud is not about products and technologies. Cloud is about standardizing, automating, and optimizing the consumption of resources and ultimately strengthening the bottom line.

Why Private Cloud First

Some IT decision makers may wonder: I have already virtualized my datacenter and am running a highly virtualized IT environment, so do I still need a private cloud? If so, why?

The answer is a definitive YES, and the reason is straightforward. The plain truth is that virtualization is no private cloud, and a private cloud goes far beyond virtualization. (Ref 1, 2)

Virtualization Is No Private Cloud

Technically, virtualization is signified by the concept of “isolation,” by which a running instance operates with the notion that it consumes the entire hardware, despite the fact that multiple instances may be running at the same time in the same hosting environment. A well-understood example is server virtualization, where multiple server instances run on the same hardware while each instance behaves as if it possesses the entire host machine.

A private cloud, on the other hand, is a cloud which abides by the 5-3-2 Principle or NIST SP 800-145, which is the de facto definition of cloud computing. In other words, a private cloud as illustrated above must exhibit the attributes of cloud computing, such as elasticity, resource pooling, a self-service model, etc., and be delivered in a particular fashion. Virtualization nonetheless does not hold any of these attributes as a technical requirement. Virtualization is about isolating and virtualizing resources, while how a virtualized resource is allocated, delivered, or presented is not particularly specified. At the same time, cloud computing, or a private cloud, is visualized much differently. The servicing, accessibility, readiness, and elasticity of all consumable resources in cloud computing are conceptually defined and technically required for delivery as “services.”

Essence of Cloud Computing

The service concept is a centerpiece of cloud computing. A cloud resource is to be consumed as a service. This is why the terms IaaS, PaaS, SaaS, ITaaS, and XaaS (everything and anything as a service) are frequently heard in a cloud discussion; they all denote delivery as a service. A service is what must be presented to and experienced by a cloud user. So, what is a service?

A service can be presented and implemented in various ways, like forming a web service with a block of code, for example. However, in the context of cloud computing, a service can be captured by three words: capacity on demand. Capacity here is associated with an examined object such as CPU, network connections, or storage. On-demand denotes anytime readiness with any-network and any-device accessibility. It is a state that previously took years and years of IT discipline and best practices to possibly achieve with a traditional infrastructure-focused approach, while cloud computing makes “service” a basic delivery model and demands that all consumable resources, including infrastructure, platform, and software, be presented as services. Consequently, replacing the term “service” with “capacity on demand,” or simply “on demand,” brings clarity and gives substance to any discussion of cloud computing.

Hence, IaaS, infrastructure as a service, is to construct infrastructure on demand. Namely, one can provision infrastructure, i.e. deploy a set of virtual machines (since all consumable resources in cloud computing are virtualized), that together form the infrastructure for delivering a target application based on needs. PaaS means platform as a service, or a runtime environment available on demand. Notice that a target runtime environment is for running an intended application. Since the runtime is available on demand, an application deployed to the runtime then becomes available on demand, which is essentially SaaS, or software available on demand, i.e. as a service. There is a clear logical progression among IaaS, PaaS, and SaaS.

So what is cloud exactly?

Cloud, as I define it here, is a concept, a state, a set of capabilities such that a targeted business capacity is available on demand. And on-demand denotes a self-servicing model with anytime readiness and accessibility on any network and with any device. Cloud is certainly not a particular implementation, since the same state can be achieved in various implementations as technologies advance and methodologies evolve.

Logically, building a private cloud is the post-virtualization step to continue transforming IT into the next generation of computing with cloud-based deliveries. The following schematic depicts a vision of transforming a datacenter to a service-centric cloud delivery model.

Once resources have been virtualized with Hyper-V, System Center builds and transforms existing establishments into an on-premises private cloud environment based on IaaS. Windows Azure then provides a computing platform with both IaaS and PaaS solutions for extending an on-premises private cloud beyond corporate boundaries and into a global setting with resources deployed off premises. This hybrid deployment scenario is emerging as the next-generation IT computing model.

To Cloud or Not to Cloud, That Is Not the Question

Comparing apples to apples, there are few reasons for a business not to prefer cloud computing over traditional IT. Why would one not want the ability to adjust business capacity based on needs? Therefore, to cloud or not to cloud is not the question. Nor is security the issue. In most cases, cloud is likely to be more secure when managed by a team of cloud security professionals in a service provider’s datacenter than when implemented by IT generalists wearing multiple hats while administering an IT shop. Cloud is about acquiring the on-demand capability, and for certain verticals the question is more about regulatory compliance, since the cost reduction and increased servicing capabilities are very much understood. Above all, it is about a business owner’s understanding of and comfort level with cloud.

The IT industry nevertheless does not wait, nor can it simply maintain the status quo. Why private cloud? The pressure to produce more with less, and the need to instantaneously become ready and respond to a market opportunity, are not about pursuing excellence but a matter of survival in today’s economic climate with ever-increasing user expectations driven by current events and emotions. One will find that a private cloud is a vehicle to facilitate and transform IT with increased productivity and reduced TCO over time, as discussed in the Building a Private Cloud blog post series. IT needs cloud computing to shorten go-to-market, to promote consumption, to accelerate product adoption, and to change the dynamics by offering better, quicker, and more with less. And for enterprise IT, it is critical to first convert existing on-premises deployments into a cloud setting, and a private cloud solution is a strategic step and a logical approach for enterprise IT to become cloud friendly, cloud enabled, and cloud ready.

Windows Azure Infrastructure Services with PowerShell Quick Start Kit (http://aka.ms/QSK) Release Note

This quick start kit is available for download at http://aka.ms/QSK for self-studies of Windows Azure Infrastructure Services and Windows Azure PowerShell. It deploys a simple, yet well-structured, Windows Azure Infrastructure Services instance including the following (a simplified PowerShell sketch follows the list below):

  • Affinity Group
  • Storage Account
  • Cloud Service
  • VMs with attached data disk
  • Availability Set
  • Load-Balancer
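
For orientation, the following is not the QSK script itself but a simplified, illustrative sketch of how such a deployment can be expressed with the classic Windows Azure service management cmdlets. The subscription is assumed to be configured already, and every name, size, image filter, and credential below is a placeholder; parameter names may vary slightly by module version.

    # Simplified sketch only (not the QSK script). Assumes the Windows Azure
    # PowerShell module and a configured subscription. All values are placeholders.
    $tag      = "qsk$(Get-Date -Format 'MMddHHmm')"   # user prefix + timestamp
    $location = "West US"

    New-AzureAffinityGroup -Name "$tag-ag" -Location $location
    New-AzureStorageAccount -StorageAccountName "${tag}storage" -AffinityGroup "$tag-ag"
    # Older module versions use -CurrentStorageAccount instead.
    Set-AzureSubscription -SubscriptionName "MySubscription" -CurrentStorageAccountName "${tag}storage"

    # Pick a Windows Server 2012 R2 gallery image (placeholder filter).
    $imageName = (Get-AzureVMImage |
        Where-Object { $_.ImageFamily -like "Windows Server 2012 R2*" } |
        Select-Object -First 1).ImageName

    # Two load-balanced VMs in one cloud service and one availability set,
    # each with an attached data disk.
    $vms = 1..2 | ForEach-Object {
        New-AzureVMConfig -Name "$tag-vm$_" -InstanceSize Small `
            -ImageName $imageName -AvailabilitySetName "$tag-avset" |
        Add-AzureProvisioningConfig -Windows -AdminUsername "qskadmin" -Password "Placeh0lder!" |
        Add-AzureDataDisk -CreateNew -DiskSizeInGB 50 -DiskLabel "data" -LUN 0 |
        Add-AzureEndpoint -Name "web" -Protocol tcp -LocalPort 80 -PublicPort 80 `
            -LBSetName "$tag-lb" -ProbePort 80 -ProbeProtocol tcp
    }

    New-AzureVM -ServiceName "$tag-svc" -AffinityGroup "$tag-ag" -VMs $vms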

Additional information on Windows Azure IaaS deployment methodology is available at:

The conceptual model of a Windows Azure Infrastructure Services deployment with QSK is shown below:

[image: QSK conceptual model]

Notice that since the storage account is auto-created with PowerShell in QSK, Windows Azure enables geo-replication of the storage account by default (it can optionally be turned off). The synchronization may therefore take a little longer when creating or deleting a VM and its attached disks.

For each deployment, the RDP file of each deployed VM is also automatically downloaded to a designated local folder. Namely, a VM user can access a deployed Windows Azure VM using the RDP file and the VM user credentials, without the need to log in to the Windows Azure management portal or acquire the Windows Azure subscription information. Further, accessing a deployed Windows Azure VM will be based on a self-signed certificate if no PKI has been pre-configured for the VM.
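
A minimal sketch of that per-VM RDP download, assuming the Windows Azure PowerShell module and an active subscription; the service name and local folder are placeholders.

    # Download one RDP file per VM in the deployed cloud service.
    $serviceName = "qsk06101530-svc"
    $localFolder = "C:\QSK\RDP"
    New-Item -ItemType Directory -Path $localFolder -Force | Out-Null

    Get-AzureVM -ServiceName $serviceName | ForEach-Object {
        Get-AzureRemoteDesktopFile -ServiceName $serviceName -Name $_.Name `
            -LocalPath (Join-Path $localFolder "$($_.Name).rdp")
    }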

QSK uses a user-specified prefix and a timestamp as a tag to eliminate any possible naming conflict within Windows Azure. This tag also makes the resources in one deployment unique and easily identifiable by matching the tag. Hence, a user can execute the script multiple times to deploy multiple services and consume resources, within what the subscription allows, without a naming conflict.

The following are additional resources and the history of a sample PowerShell session in which I deployed two services by executing the script twice.

[image: PowerShell session history]

In the Windows Azure Management Portal, the following shows what was deployed.

[Portal screenshots: affinity group, storage account, cloud services, deployed resources, and the availability sets and load balancers of the two deployments]

And the RDP files of the deployed VMs were also downloaded to the local SQL folder, as shown here.

[image: local folder with downloaded RDP files]

To remove all deployed resources, the four-step clean-up routine included in the script was run in sequence, one step at a time.
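
The actual routine is included in the downloaded script; purely as an illustration of the kind of steps involved, a hedged sketch with placeholder names might look like the following.

    # Not the QSK routine itself: an illustrative four-step sketch only,
    # run in sequence, one step at a time. Names are placeholders.
    $tag = "qsk06101530"

    # 1. Remove the cloud service together with its deployment (VMs).
    Remove-AzureService -ServiceName "$tag-svc" -Force

    # 2. Remove the now-detached disks registered under this deployment's tag,
    #    deleting the underlying VHD blobs as well (a short wait may be needed
    #    for the storage lease to be released).
    Get-AzureDisk | Where-Object { $_.DiskName -like "$tag*" -and -not $_.AttachedTo } |
        ForEach-Object { Remove-AzureDisk -DiskName $_.DiskName -DeleteVHD }

    # 3. Remove the storage account once its disks are gone.
    Remove-AzureStorageAccount -StorageAccountName "${tag}storage"

    # 4. Remove the affinity group last, after everything in it is released.
    Remove-AzureAffinityGroup -Name "$tag-ag"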

[Screenshots: removing the deployed resources]