Changing Server Installation Option from Server Core to Server-Gui-Shell

This is a follow-up to my earlier article on Features on Demand, where I also illustrated changing server installation options and developing a minimal server footprint. In this post, I walk through the process of changing the server installation option from Server Core to Server-Gui-Shell.

In this scenario, I installed Server Core, as shown below, to a VM from the Windows Server 2012 R2 ISO file, en_windows_server_2012_r2_with_update_x64_dvd_4065220.iso, downloaded from MSDN. Later, I changed the Server Core installation to one with Server-Gui-Shell, i.e. with a full server GUI, using DISM online commands. The OS disk is 40 GB in size.

image

The following displays the C drive of a clean Server Core install, and the feature store is at c:\windows\winsxs, as shown. Here I also included the command in the screen capture.

image

The following are some highlights of the formatted output listing the server features of a clean Server Core install, produced with a DISM online command.
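Such a listing can be produced with a DISM online command like the following; the findstr filter is simply one way to narrow the table to GUI-related features:

```powershell
dism /online /get-features /format:table

# narrow the output to GUI-related features, for example:
dism /online /get-features /format:table | findstr /i "gui"
```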


Since the GUI components had a state of “disabled with payload removed”, I needed to point out a source to enable the GUI feature. Below, I first shared out the feature store, i.e. c:\windows\winsxs, with read access to everyone, on the machine named VMM, which is a full GUI installation, as the source.

image

Then I ran the dism online command to enable server-gui-shell as:

dism /online /enable-feature /featurename:server-gui-shell /source:\\vmm\winsxs /all

To include all dependencies, I used /all in the statement. Notice that the default behavior when enabling a feature is to ultimately go to Windows Update if an installation source is not specified or the needed bits are not found. Include /LimitAccess in the statement to prevent contacting Windows Update, as needed.
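As an illustration, a variation of the statement above that restricts DISM to the given share and prevents any fallback to Windows Update might look like:

```powershell
# /limitaccess keeps dism from falling back to Windows Update
dism /online /enable-feature /featurename:server-gui-shell /source:\\vmm\winsxs /all /limitaccess
```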

This statement took a while to run and complete since it is a gigabyte-scale operation. After enabling server-gui-shell finished, I did a dir on the feature store just to show the difference made by enabling the feature. Notice there were 4,006 directories added to the feature store by this server-gui-shell enabling operation.

image

After the reboot, I noticed I was not able to find Server Manager on the Apps page. Opening a command prompt, I ran the statement, powershell “get-windowsfeature”, and interestingly none of the GUI components was enabled. This may be an anomaly. So I ran the statement, powershell “install-windowsfeature server-gui-shell” to install the GUI shell again. Although I thought I would need to reboot, the operation ran successfully and did not require one. I then saw Server Manager, PowerShell ISE, etc. Server Manager came up fine and all appeared back to normal.
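To double-check the feature states from a command prompt before and after such an install, something along these lines can be used (a sketch; Get-WindowsFeature and Install-WindowsFeature are the documented cmdlets, and -Restart is optional):

```powershell
# list the GUI-related features and their install state
powershell "get-windowsfeature server-gui*"

# install the GUI shell, rebooting automatically only if required
powershell "install-windowsfeature server-gui-shell -restart"
```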


Finally, you may notice that the above operations do not work on a Hyper-V Server, which is a free, dedicated stand-alone product with the hypervisor, the Windows Server driver model, virtualization capabilities, and supporting components such as failover clustering. A Hyper-V Server, however, does not contain the set of features and roles of a Windows Server operating system, which requires a purchased license.

Call to Action

  • Register at Microsoft Virtual Academy and work on the server track to better understand the roles and features of Windows Server 2012 R2.
  • Download the Windows Server 2012 R2 trial and evaluate the product.
  • Follow the Features on Demand series and assess a minimal server footprint based on your business requirements.

High Availability, Disaster Recovery, and Microsoft Azure

Both High Availability (HA) and Disaster Recovery (DR) have long been essential IT topics. Fundamentally, HA is about fault tolerance relevant to the availability of an examined subject such as an application, database, or VM, while DR is rooted in the ability to resume operations in the aftermath of a catastrophic event. A fundamental difference between the two is that HA expects no downtime and no data loss, while DR expects both. They are different issues and should be addressed separately.


For many IT shops, both HA and DR have been high-risk and high-cost items. Both are essential to business continuity, yet traditionally they are tough technical problems to solve, demanding very significant and long-term commitments of resources. Not only are they technically challenging, but the continual cost-cutting that has become an IT standard practice in the past two decades makes purchasing hardware and software and constructing an HA or DR solution on premises even more distant from IT’s financial and technical realities.

Sense of Urgency

Too often, the technical challenges and resource commitments overwhelm IT and turn HA and DR into academic discussions or symbolic items on a project checklist. At the same time, information is exploding rapidly as the internet, mobility and social networks become integral to our daily lives and businesses. There is progressively more data to process and store. For many businesses, the need for HA and DR is urgent for better managing risks, and continual availability and on-demand recoverability of IT are becoming increasingly critical.

This Is the Reality, Now

The good news is that the recent introduction of cloud computing has fundamentally changed how an HA or DR solution can be implemented. Microsoft Azure is a vivid example of HA and DR solutions with significantly reduced financial commitments and technical complexities. The traditional approach of establishing redundancy and acquiring a physical DR site with long-term resource and financial commitments is now largely replaced with consumable services which can be configured in minutes by mouse-clicking, with a manageable cost structure based on usage. HA and DR have become IT solutions which are financially realistic and technically feasible for businesses of all sizes.

HA, Redundancy, and Microsoft Azure LRS

HA is about eliminating a single point of failure of an examined component, an application for example. It denotes a strategy of employing redundancy such that a target application can and will continue to be available without downtime while experiencing a failure of hosting hardware or software. There are various well-developed HA solutions, like a Hyper-V host cluster using redundant hardware to eliminate a single point of failure in the hosting OS or hardware, and an application cluster that eliminates a single point of failure by running the application in multiple VM instances with a synchronized state. Although HA implementations may vary, the fundamental principle remains the same: HA expects neither downtime nor data loss while experiencing an outage of a target hardware or software component.

HA has become dramatically simpler in Microsoft Azure. Basically, all data written to disk in Microsoft Azure is kept at least in so-called LRS, Locally Redundant Storage. LRS replicates a transaction synchronously to three different storage nodes across fault domains and upgrade domains within the same region for durability. In layman’s terms, Microsoft Azure by default maintains at least three copies of user data to achieve HA.

DR, Replication, and Microsoft Azure GRS

DR is about having a plan and backups in place to resume operations in the aftermath of a catastrophic event. An unplanned outage is assumed in a DR scenario; therefore some data loss is also expected. Notice that HA and DR are different business problems and are addressed differently.

While both HA and DR are based on applying redundancy, i.e. a source and replicas, or multiple identical nodes of an examined component such as an application instance, database, or VM, there are differences between the two. A DR solution generally employs replicas or backups, is implemented with asynchronous processes, and expects an outage of a source with some loss of data in transit when the outage occurs. HA, by contrast, requires a logical representation with real-time integrity using synchronous processes across all participating nodes, and expects neither downtime nor data loss while experiencing an outage of a participating node.

For a critical workload, one approach to DR is to establish geo-replication to address an outage of an entire geographic area caused by a natural disaster, for example. The concern is that a catastrophic event may impact an entire geographic area, causing the datacenter where a mission-critical application is hosted to become unavailable for an extended period of time.


In Microsoft Azure, Geo Redundant Storage, or GRS, is the default (and configurable) setting, as shown above, when configuring a storage account. GRS queues a transaction committed to LRS for asynchronous replication to a secondary region a few hundred miles away from the primary region where the storage account originates. At the secondary region, data is also stored in LRS, i.e. made durable by replicating it to three storage nodes.


Specifically, a Microsoft Azure storage account configured with GRS essentially maintains three replicas locally for high availability, and replicates the content and maintains three replicas at a secondary datacenter a few hundred miles away for DR. So there are six copies in all, three local and three remote. All of this is configured with one, yes one, mouse click from a dropdown list while creating a storage account. The above conceptual model illustrates the data flow of GRS.

GRS replication has little performance impact on an application since application data is committed to LRS in real time while replication to the secondary is queued, i.e. asynchronous. A write to LRS is synchronous and in real time; once committed, the changes are expected to be asynchronously replicated to the secondary site within 15 minutes.
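With the Azure PowerShell module of this era (the service management cmdlets), the redundancy setting of a storage account can be inspected and switched roughly as follows; the account name is a hypothetical example:

```powershell
# show the current replication setting (GeoReplicationEnabled)
Get-AzureStorageAccount -StorageAccountName "mystorageaccount"

# turn geo-replication off (LRS only) or back on (GRS)
Set-AzureStorageAccount -StorageAccountName "mystorageaccount" -GeoReplicationEnabled $false
Set-AzureStorageAccount -StorageAccountName "mystorageaccount" -GeoReplicationEnabled $true
```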

For an RA-GRS storage account, in addition to the primary endpoint for read/write operations as in GRS, a secondary read-only endpoint also becomes available, as shown below. image

The cost implications of GRS or RA-GRS include the additional storage and, as applicable, the transmission costs for egress traffic from the secondary datacenter. Ingress traffic is free. The Microsoft Azure Storage SLA offers 99.9% availability, and a cost calculator is also available.

Microsoft Azure Recovery Services

So far, much has been said about backing up or replicating data. To successfully restore, a DR plan must be put in place and its availability ensured while a DR scenario is in progress. Placing a DR plan either at the primary site, where the source is, or at a secondary site, where a replica stays, raises some issues and concerns.

Keeping a DR plan at the source site, where all the resources are in place and on-the-job training happens, seems logical. Or does it? DR assumes a catastrophic event over an extended geographic area in which the source site is experiencing an outage. In such a case, keeping the DR plan at the source site defeats the purpose.

Maintaining a DR plan at the secondary site is then the choice. In a DR scenario, a recovery site is to be brought online within an expected period of time according to a DR plan, and having the DR plan right there at the recovery site makes all the sense. Or does it? This decision introduces a number of requirements, including physical readiness, timeliness, and the financial implications of securing and maintaining a DR plan at a remote physical facility.

For a VMM server running on System Center 2012 SP1 or later, an ideal, reliable and straightforward way is to use Azure Recovery Services to maintain a DR plan, as shown below. And for any backup needs, using the cloud as a backup site makes backing up and restoring data an anytime, anywhere operation.


Azure Site Recovery Vault

This service essentially acts as the director of a DR process. It orchestrates and manages the protection and failover of VMs in clouds managed by Virtual Machine Manager 2012 SP1 or later. A noticeable advantage is the ability to test a recovery configuration, exercise a proactive failover and recovery, and automate recovery in the event of a site outage.

The SLA of Site Recovery Services is 99.9% availability, ensuring a configured DR plan is always in place with expected updates. This is a DR solution that IT can implement, simulate, verify, bring online, and be absolutely confident in its readiness.


Azure Backup Vault

This is a reliable, scalable and inexpensive data protection solution with zero capital investment and extremely low operational expense. As with other secure communications with Microsoft Azure, you first upload a public certificate to Microsoft Azure, then download the backup agent to register a target server with the backup vault, and then select what is to be backed up. Both the Microsoft Azure Backup SLA (99.9% availability) and a cost calculator are available for better assessing the solution.

Closing Thoughts

From an application’s view, HA is an ongoing event while DR is an anticipation. HA and DR are different business problems and should be addressed differently. Nevertheless, Microsoft Azure provides a single platform to gracefully address HA with LRS, DR with GRS, and DR orchestration with Recovery Services, all with published SLAs and a predictable cost structure. Going forward, IT pros can include HA and DR as a reliable, scalable and relatively inexpensive proposition by employing Microsoft Azure as a solution platform.

Call to Action

Windows Server 2012 R2 Hyper-V Replica Configuration, Simulation and Verification Explained

Windows Server 2012 R2 Hyper-V Role introduces a new capability, Hyper-V Replica, as a built-in replication mechanism at a virtual machine (VM) level. Hyper-V Replica can asynchronously replicate a selected VM running at a primary (or source) site to a designated replica (or target) site across LAN/WAN. The following schematic depicts this concept.

A replication process creates an identical VM in the Hyper-V Manager of a target replica server; subsequently, the change-tracking module of Hyper-V Replica tracks and replicates the write operations in the source VM at a set interval after the last successful replication, regardless of whether the associated vhd files are hosted on SMB shares, Cluster Shared Volumes (CSVs), SANs, or directly attached storage devices.

Hyper-V Replica Requirements

Above, both the primary site and the replica site are Windows Server 2012 R2 Hyper-V hosts; the former runs production, or so-called primary, VMs, while the latter hosts replicated ones, which are off and each of which is to be brought online should a corresponding primary VM experience an outage. Hyper-V Replica requires neither shared storage nor specific storage hardware. Once an initial copy is replicated to a replica site, Hyper-V Replica replicates only the changes of a configured primary VM, i.e. the deltas, asynchronously.

It goes without saying that assessing business needs and developing a plan on what to replicate and where is essential. In addition, both a primary site and a replica site (i.e. the Hyper-V hosts running source and replica VMs, respectively) must enable Hyper-V Replica in the Hyper-V Settings of Hyper-V Manager. IP connectivity is assumed where applicable. Notice that Hyper-V Replica is a server capability and is not applicable to Client Hyper-V, such as Hyper-V running on a Windows 8 client.

Establishing Hyper-V Replica Infrastructure

Once all the requirements are in place, enable Hyper-V Replica on both the primary site (i.e. the Windows Server 2012 R2 Hyper-V host where a source VM runs) and the target replica server, followed by configuring an intended VM at the primary site for replication. The following is a sample process for establishing Hyper-V Replica.

Step 1 – Enable Hyper-V Replica

Identify a Windows Server 2012 R2 Hyper-V host as the primary site where a target source VM runs, and enable Hyper-V Replica in the Hyper-V Settings in Hyper-V Manager of the host. The following are sample Hyper-V Replica settings of a primary site. Repeat the step on the target replica site to enable Hyper-V Replica. In this article, DEVELOPMENT and VDI are both Windows Server 2012 R2 Hyper-V hosts, where the former is the primary site and the latter is the replica site.
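The same host-level settings can also be applied with the Hyper-V PowerShell module; the storage path below is an assumption for illustration:

```powershell
# run on each Hyper-V host (DEVELOPMENT and VDI) to accept incoming replication
Set-VMReplicationServer -ReplicationEnabled $true `
    -AllowedAuthenticationType Kerberos -KerberosAuthenticationPort 80 `
    -ReplicationAllowedFromAnyServer $true `
    -DefaultStorageLocation "D:\Replica"
```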


Step 2 – Configure Hyper-V Replica on Source VMs

Using the Hyper-V Manager of the primary site, enable Hyper-V Replica on a VM by right-clicking the VM and selecting the option, then walking through the wizard to establish a replication relationship with a target replica site. Below shows how to enable Hyper-V Replica on a VM, named A-Selected-VM, at the primary site, DEVELOPMENT. The replication settings are fairly straightforward and left for the reader to get familiar with.
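The wizard's work can be sketched with a single cmdlet; the five-minute replication interval shown here is one of the 2012 R2 options, and the names mirror this article's scenario:

```powershell
# run on the primary site, DEVELOPMENT
Enable-VMReplication -VMName "A-Selected-VM" `
    -ReplicaServerName "VDI" -ReplicaServerPort 80 `
    -AuthenticationType Kerberos -ReplicationFrequencySec 300
```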


Step 3 – Carry Out an Initial Replication

While configuring Hyper-V Replica of a source VM in step 2, there are options for delivering an initial replica. An initial replica can be transmitted over the network, either in real time or according to a configured schedule. Optionally, an initial copy can be delivered out of band, i.e. on external media, when sending the initial copy over the wire might overwhelm the network, be unreliable, take too long, or not be preferred due to the sensitivity of the content. There are, however, additional considerations with out-of-band delivery of an initial replica.
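Both delivery options have PowerShell counterparts; the export path below is hypothetical:

```powershell
# start the initial replication over the network, immediately
Start-VMInitialReplication -VMName "A-Selected-VM"

# or export the initial copy to external media for out-of-band delivery
Start-VMInitialReplication -VMName "A-Selected-VM" -DestinationPath "E:\InitialReplica"

# later, on the replica server, import the delivered copy
Import-VMInitialReplication -VMName "A-Selected-VM" -Path "E:\InitialReplica"
```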

Option to Send Initial Replica Out of Band

The following shows the option to send the initial copy using external media while enabling Hyper-V Replica on a VM named A-Selected-VM.


An exported initial copy includes the associated VHD and an XML file capturing the configuration information, as depicted below. This folder can then be delivered out of band to the target replica server and imported into the replica VM.


When delivering an initial copy with external media, a replica VM configuration is still automatically created at the intended replica site, exactly the same as the user experience of sending the initial replica online. The difference is that the source VM (here, the VM on the primary site, DEVELOPMENT) shows an initial replication in progress in its replication health, while the replica VM (here, the VM on the replica site, VDI) has an option in its replication settings to import an initial replica, and its replication health information presents a warning, as shown below:

The primary site


The replica site


After receiving the initial replica on external media, the replica VM can then import the disk as shown:


and upon successfully importing the disk, the following shows the replica VM with additional replication options and updated replication health information.


Step 4 – Simulate a Failover from a Primary Site

This is done at an associated replica site. First verify and ensure the replication health on a target replica VM is normal. Then right-click the replica VM and select Test Failover in Replication settings as shown here:


Notice that, supposedly, a failover should occur only when the primary VM experiences an outage. Nevertheless, a simulated failover does not require shutting down the primary/source VM. It does, after all, create a VM instance at the replica site without altering the replica settings in place. A VM instance created by a Test Failover should be deleted via the option, Stop Test Failover, in the replication UI where the Test Failover was originally initiated, as demonstrated below, to ensure all replication settings remain valid.
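The same test can be driven from PowerShell on the replica server, along these lines:

```powershell
# create the test VM instance from the replica
Start-VMFailover -VMName "A-Selected-VM" -AsTest

# when done, remove the test instance; replication settings remain intact
Stop-VMFailover -VMName "A-Selected-VM"
```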


Step 5 – Do a Planned Failover from the Primary Site

In a maintenance event where a primary/source VM is expecting an outage, ensure the replication health is normal, then conduct a Planned Failover from the primary site after shutting down the source VM, as shown below.
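Scripted, a planned failover follows the documented cmdlet sequence below, run partly on the primary and partly on the replica server:

```powershell
# on the primary site (DEVELOPMENT): shut down the source VM and prepare the failover
Stop-VM -VMName "A-Selected-VM"
Start-VMFailover -VMName "A-Selected-VM" -Prepare

# on the replica site (VDI): complete the failover, reverse replication, start the VM
Start-VMFailover -VMName "A-Selected-VM"
Set-VMReplication -VMName "A-Selected-VM" -Reverse
Start-VM -VMName "A-Selected-VM"
```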


Notice that successfully performing a Planned Failover with the presented settings will automatically establish a reverse replication relationship. To verify the failover has been carried out correctly, check the replication health of a replication pair of VMs on both the primary site and the replica site, prior to and after a planned failover; the results need to be consistent across the board. The following examines a replication pair: A-Selected-VM on DEVELOPMENT at the primary site and its replica on VDI at the replica site. Prior to a planned failover from DEVELOPMENT, the replication health information of the source VM (left, at the DEVELOPMENT server) and the replica VM (right, at the VDI server) is as shown below.


After a successful planned failover event, as demonstrated at the beginning of this step, the replication health information becomes the following, where the replication roles have been reversed and are in a normal state. This signifies the planned failover was carried out successfully.


In the event that a source VM experiences an unexpected outage at the primary site, failing over the primary VM to its replica will not automatically establish a reverse replication, since un-replicated changes were lost along with the unexpected outage.

Step 6 – Finalize the Settings with Another Planned Failover

Conduct another Planned Failover event to confirm that reverse replication works. In the presented scenario, the primary VM will now be shut down at the current primary site (i.e. VDI at this time), followed by failing back to the current replica site (i.e. DEVELOPMENT at this time). Upon successful execution of the planned failover, the resulting replication relationship should again be DEVELOPMENT as the primary site with VDI as the replica site, the same state as at the beginning of step 5. At this point, in a DR scenario, failing the production site (in the above example, the DEVELOPMENT server) over to a DR site (here, the VDI server), and as needed failing over from the DR site (VDI server) back to the original production site (DEVELOPMENT server), have been proven to work bi-directionally.

Step 7 – Incorporate Hyper-V Replica into Existing SOPs

Incorporate Hyper-V Replica configurations and maintenance into applicable IT standard operating procedures and start monitoring and maintaining the health of Hyper-V Replica resources.

Hyper-V Extended Replication

In Windows Server 2012 R2, we can now further replicate a replica VM from a replica site to an extended replica site, similar to a backup of a backup. The concept, as illustrated below, is quite simple.


The process is to first set up Hyper-V Replica from a primary site to a target replica site, as described earlier in this article. Then, at the replica site, simply configure Hyper-V Replica on the replica VM, as shown below. In this way, a source VM is replicated from the primary site to a replica site, which in turn replicates the replica VM to an extended replica site.
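In PowerShell terms, extended replication is simply Enable-VMReplication run against the replica VM on the replica site; the extended host name below is a hypothetical example:

```powershell
# run on the replica site (VDI) against the replica VM
Enable-VMReplication -VMName "A-Selected-VM" `
    -ReplicaServerName "EXTENDED-SITE" -ReplicaServerPort 80 `
    -AuthenticationType Kerberos
```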


In a real-world setting, Hyper-V Extended Replication fits business needs well. For instance, IT may want a replica site nearby for convenience and timely response when monitored VMs experience a localized outage within IT’s control while the datacenter remains up and running. In an outage that takes an entire datacenter or geo-region down, however, an extended replica stored in a geo-distant location is pertinent.

Hyper-V Replica Broker

Importantly, to employ a Hyper-V failover cluster as a replica site, one must use Failover Cluster Manager to perform all Hyper-V Replica configuration and management, by first creating a Hyper-V Replica Broker role, as demonstrated below.


And here is a sample configuration:


Although the above uses Kerberos authentication, as far as Hyper-V Replica is concerned a Windows Active Directory domain is not a requirement. Hyper-V Replica can also be implemented between workgroups and untrusted domains with certificate-based authentication. Active Directory is, however, a requirement when involving a Hyper-V host that is part of a failover cluster, as in the case of Hyper-V Replica Broker, and in such a case all Hyper-V hosts of the failover cluster must be in the same Active Directory domain.
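For reference, the broker role can also be created with the failover clustering cmdlets; the role name and IP address below are assumptions to be replaced with your own:

```powershell
# create a client access point for the broker
Add-ClusterServerRole -Name "HVR-Broker" -StaticAddress 10.0.0.50

# add the replication broker resource and tie it to the role
Add-ClusterResource -Name "Virtual Machine Replication Broker" `
    -Type "Virtual Machine Replication Broker" -Group "HVR-Broker"
Add-ClusterResourceDependency "Virtual Machine Replication Broker" "HVR-Broker"
Start-ClusterGroup "HVR-Broker"
```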

Business Continuity and Disaster Recovery (BCDR)

In a BC scenario with a planned failover event of a primary VM, for example a scheduled maintenance, Hyper-V Replica will first copy any un-replicated changes to the replica VM, so that the event produces no loss of data. Once the planned failover is completed, the replica VM becomes the primary VM and carries the workload, while a reverse replication is automatically set up. In a DR scenario, i.e. an unplanned outage of a primary VM, an operator will need to manually bring up the replica VM with an expectation of some data loss, specifically the data changed since the last successful replication based on the set replication interval.

Closing Thoughts

The significance of Hyper-V Replica is not only its simplicity to understand and operate, but the readiness and affordability that come with Windows Server 2012 R2. A DR solution for businesses of any size is now a reality with Windows Server 2012 R2 Hyper-V; for small and medium businesses, never before has a DR solution been so feasible. IT pros can now configure, simulate and verify results in a productive and on-demand manner to formulate, prototype, and pilot a DR solution at a VM level on Windows Server 2012 R2 Hyper-V infrastructure. At the same time, in an enterprise setting, the System Center family remains the strategic platform to maximize the benefits of Windows Server 2012 R2 and Hyper-V with a comprehensive system management solution, including private cloud automation, process orchestration, and services deployment and management at a datacenter level.

Additional Information

Automating and Managing Hybrid Cloud Environment

In part 5 of our “Modernizing Your Infrastructure with Hybrid Cloud”  series, Keith Mayer and I got a chance to discuss and demonstrate ways to manage and automate a hybrid cloud environment. System Center, Microsoft Azure and Windows Azure Pack combined with PowerShell are great solutions for hybrid cloud scenarios. Keith is a great guy and we always have much fun working together.


  • [1:15] When architecting a Hybrid Cloud infrastructure, what are some of the important considerations relating to management and automation?
  • [4:09] You mentioned PowerShell for automation … how can PowerShell be leveraged for automation in a Hybrid Cloud?
  • [7:54]  Is PowerShell my ONLY choice? Are there other automation and configuration management solutions available for a Hybrid Cloud?
  • [11:12] DEMO: Let’s see some of this in action
      • Brief tour of System Center and Azure / Azure Pack management portal interfaces
      • Getting started with PowerShell for Azure, Azure Pack automation
      • Intro to PowerShell DSC for configuration management
      • Intro to Azure Automation for automated runbooks

Additional resources:

Windows Azure Pack (WAP) simplified: Prepping OS Image Disks for Gallery Items

To publish a gallery item in Windows Azure Pack (WAP), the associated OS image disks, i.e. vhd files, must be set according to what is in the readme file of a gallery resource package. For those not familiar with the operations, this can be a frustrating learning experience before finally getting it right. This blog post addresses that concern by presenting a routine with a sample PowerShell script [download] to facilitate the process.

Required Values of OS Image Disks for WAP Gallery Items

For example, below is from the readme file of the gallery resource package, Windows Server 2012 R2 VM Role. It lists the specific property values required for WAP to recognize a vhd as an applicable OS image disk for the Role. To find out more about WAP gallery resources, the information is available at


Once a gallery item is introduced into vmm and WAP, the item becomes available when a tenant provisions a Role, as shown below.


There are several steps involved including:

  • Prepping vhds and, as applicable, importing the resource extension of the gallery item to vmm server library shares
  • Importing resource definition to WAP

Here, prepping vhds is the focus, and the process and operations are rather mechanical, as detailed in the following.

Process and Operations

The script below illustrates a routine for a vmm administrator to set the required property values on applicable OS image disks in a target vmm server’s library shares. This sample script is available for download.


Line 23 connects to a target vmm server.

Line 25 builds a list of vhd objects with the prefix, ws2012r2, in their names, which suggests that a vmm administrator develop a meaningful naming scheme for referenced resources.

Lines 27 and 28 display the settings of the vhd files before making changes.

Lines 30 to 35 set the values of specific fields, including OS, familyname and release, according to the readme file of a particular gallery resource package, for example, WS2012_R2_WG_VMRole_Pkg. As preferred, one can also default a product key into a vhd.

The foreach loop goes through each vhd in the list and sets the values. WAP references the tag values of a vhd file to determine whether a vhd is applicable for various workloads. Make sure to add all tag values specified in the readme file, as demonstrated between lines 41 and 44, to build the list. Lines 46 to 52 set all specified values on the corresponding property fields of the currently referenced vhd file.

Finally, upon finishing the foreach loop, lines 56 and 57 present the updated settings of the processed vhd files for verification.
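The walkthrough above can be condensed into a sketch like the following, assuming the VMM console's PowerShell module; the server name, operating system name, family name, release, and tag values are placeholders to be replaced with those in the gallery package's readme file:

```powershell
Import-Module virtualmachinemanager

# connect to a target vmm server
Get-SCVMMServer -ComputerName "vmm01" | Out-Null

# build a list of vhd objects with the prefix ws2012r2 in their names
$vhds = Get-SCVirtualHardDisk | Where-Object { $_.Name -like "ws2012r2*" }

# values required by the gallery resource package readme
$os   = Get-SCOperatingSystem | Where-Object { $_.Name -eq "Windows Server 2012 R2 Standard" }
$tags = @("WindowsServer2012R2")

foreach ($vhd in $vhds) {
    # set the property fields WAP references when matching image disks
    Set-SCVirtualHardDisk -VirtualHardDisk $vhd -OperatingSystem $os `
        -FamilyName "Windows Server 2012 R2 Standard" -Release "1.0.0.0" -Tag $tags
}

# present the updated settings for verification
$vhds | Format-Table Name, OperatingSystem, FamilyName, Release, Tag
```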

User Experience

Here’s an example of running the script:




With a vmm admin console connected to the target server, go to the Library workspace and right-click an updated vhd disk to verify the property values are correctly set, as shown below.


At this time, with correctly populated property values and tags, the vhds are ready for this particular WAP gallery item, Windows Server 2012 R2 VM Role.

For all the gallery items that WAP displays, a vmm administrator must reference the readme file of each gallery resource package and carry out the above exercise to set the property values of the applicable OS image disks. Pay attention to the tags: missing a tag may invalidate an image disk for some workload and inadvertently prevent that workload from being available for a tenant to provision in an associated VM Role in WAP, even though the OS is properly installed on the disk.

Closing Thoughts

The tasks of prepping OS image disks for WAP gallery items are simple and predictable. Each step is, however, critical for successfully publishing an associated gallery item in WAP. As with many mechanics, understand the routine, practice, and practice more. A vmm administrator needs to perform these operations with confidence and precision; the alternative is needless frustration and delay, both of which are absolutely avoidable at this juncture of deploying WAP.

Accelerate DevOps with the Cloud – Bringing Docker Online using PowerShell DSC

Picking up where we last left off, Yung Chou and Keith Mayer continue our Accelerate DevOps with the Cloud series and welcome Andrew Weiss from Microsoft Consulting Services, who shows us how to manage Docker containers using PowerShell DSC.


  • [1:15] What is Docker?
  • [4:06] How is it relevant to IT pros?
  • [8:20] DEMO: Docker in action

Resource Links: