US TechNet on Tour | Cloud Infrastructure – Resource Page

This wave of TechNet events focuses on Azure Infrastructure as a Service (IaaS) v2, namely Azure Resource Manager (ARM). It is part of the IT Innovation series delivered in US metros and many other locations in the spring of 2016. For those outside the US, go to http://aka.ms/ITInnovation to find events near you. Come and have some serious fun learning.


The presentations, available in PDF format, and the following lab material are included in this zip file.

GitHub repository for Lab Files if using your own machine

If you are not using the hosted virtual machine and are using your own workstation, any custom files the lab instructions call out can be found in a GitHub repository, located here: https://github.com/AZITCAMP/Labfiles.

Required Software

Required software will be called out throughout the lab.

  1. Microsoft Azure PowerShell – http://go.microsoft.com/?linkid=9811175&clcid=0x409 (also installs the Web Platform Installer, minimum version 0.9.8)
  2. Visual Studio Code – https://code.visualstudio.com/
  3. Git – http://git-scm.com/download/win
  4. GitHub Desktop for Windows – https://desktop.github.com/
  5. Windows Credential Store for Git (if VSCode won’t authenticate with GitHub) – http://gitcredentialstore.codeplex.com/
  6. Iometer – http://sourceforge.net/projects/iometer/
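Once Azure PowerShell is installed, you can sanity-check the setup from a PowerShell prompt. A minimal sketch, assuming the Azure PowerShell module of this era (cmdlet names changed in later releases):

```powershell
# Verify the Azure PowerShell module is installed and check its version
Get-Module -ListAvailable -Name Azure | Select-Object Name, Version

# Authenticate to Azure before running the labs
# (Add-AzureAccount in the classic module; later releases use Login-AzureRmAccount)
Add-AzureAccount
```

If the module does not appear, re-run the Web Platform Installer and confirm Microsoft Azure PowerShell completed installing.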

Optional Software

Any additional software that you require will be called out in the lab. The following software may be useful when working with Azure in general.

  1. Remote Server Administration Tools – http://support.microsoft.com/kb/2693643 (Windows 8.1) or http://www.microsoft.com/en-ca/download/details.aspx?id=45520 (Windows 10)
  2. AzCopy – http://aka.ms/downloadazcopy
  3. Azure Storage Explorer – http://azurestorageexplorer.codeplex.com/downloads/get/891668
  4. Microsoft Azure Cross-platform Command Line Tools (installed using the Web Platform Installer)
  5. Visual Studio Community 2015 with Microsoft Azure SDK – 2.8.1 (installed using the Web Platform Installer)
  6. Msysgit – http://msysgit.github.io
  7. PuTTY and PuTTYgen – (Use the Windows Installer) http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html
  8. Microsoft Online Services Sign-In Assistant for IT Professionals RTW – http://go.microsoft.com/fwlink/?LinkID=286152
  9. Azure Active Directory Module for Windows PowerShell (64-bit version) – http://go.microsoft.com/fwlink/p/?linkid=236297

IT Pros’ Job Interview Cheat Sheet of Multi-Factor Authentication (MFA)

Internet Climate

Recently, as hacking has become a business model and identity theft an everyday phenomenon, there is increasing hostility on the Internet and escalating concern for PC and network security. A long and complex password is no longer sufficient to protect your assets. In addition to a strong password policy, adding MFA is now a baseline defense to better ensure the authenticity of a user and an effective vehicle to deter fraud.

Market Dynamics

Furthermore, the increase in online e-commerce transactions, the compliance needs of regulated verticals such as finance and healthcare, the unique business requirements of market segments such as the gaming industry, the popularity of smartphones, and the adoption of cloud identity services with MFA technology all contribute to the growth of the MFA market. Market research published in August 2015 reported that “The global multi-factor authentication (MFA) market was valued at USD 3.60 Billion in 2014 and is expected to reach USD 9.60 Billion by 2020, at an estimated CAGR of 17.7% from 2015 to 2020.”

Strategic Move

While mobility becomes part of the essential business operating platform, a cloud-based authentication solution offers more flexibility and long-term benefits. This is apparent in the same market research, which stated that

“Availability of cloud-based multi-factor authentication technology has reduced the maintenance costs typically associated with hardware and software-based two-factor and three-factor authentication models. Companies now prefer adopting cloud-based authentication solutions because the pay per use model is more cost effective, and they offer improved reliability and scalability, ease of installation and upgrades, and minimal maintenance costs. Vendors are introducing unified platforms that provide both hardware and software authentication solutions. These unified platforms are helping authentication vendors reduce costs since they need not maintain separate platforms and modules.”

Disincentives

Depending on where IT is and where IT wants to be, the initial investment may be significant. Adopting various technologies and cloud computing may be necessary, all while facing resistance to change in corporate IT culture.

Snapshot

The following is not an exhaustive list, but it captures some important facts, capabilities and considerations of Windows MFA.

[image: Windows MFA snapshot]

Closing Thoughts

MFA helps ensure the authenticity of a user. MFA by itself nevertheless cannot stop identity theft, since there are various ways to steal an identity, such as key loggers and phishing. Still, as hacking has become a business model for an underground industry, and even a form of military offense, and credential theft has developed into a standard hacking practice, operating without a strong authentication scheme is not an option. MFA remains arguably a direct and effective way to deter identity theft and fraud.

And the emerging trend of employing biometrics, instead of a password, with a key-based credential, leveraging hardware- and virtualization-based security like Device Guard and Credential Guard in Windows 10, further minimizes the attack surface by ensuring hardware boot integrity and OS code integrity, and by allowing only trusted system applications to request a credential. Together, Device Guard and Credential Guard offer a new standard in preventing pass-the-hash (PtH), one of the most popular types of credential theft and reuse attacks seen by Microsoft so far.

Above all, going forward we must not consider MFA an afterthought or add-on, but an immediate and imperative part of a PC security solution. IT needs to implement MFA sooner rather than later, if it has not already.

Don’t Kid Yourself to Use the Same Password with Multiple Sites

I am starting a series of Windows 10 posts with much on security features. A number of topics, including Multi-Factor Authentication (MFA), hardware- and virtualization-based security like Credential Guard and Device Guard, and Windows as a Service, are all included in upcoming posts. These features are not only signature deliveries of Windows 10, but significant initiatives in addressing fundamental issues of PC security while leveraging the market opportunities presented by the growing trend of BYOD. The series nevertheless starts with where all security discussions should start, in my view.

Password It Is

At a very high level, I view security as encompassing two key components. Authentication determines whether a user is as claimed, while authorization grants access rights accordingly upon a successful authentication. The former starts with a presentation of user credentials or identity, i.e. user name and password, while the latter operates according to a security token, or so-called ticket, derived from a successful authentication. The significance of this model is that the user's identity (more specifically, the password, since a user name is normally a displayed, unencrypted field) is essential to initiate and acquire access to a protected resource. A password, however, can be easily stolen or lost, and is arguably the weakest link of a security solution.

Using the Same Password for Multiple Sites

When it comes to cyber security, the shortest distance between two points is not always a straight line. For a hacker after your bank account, the quietest way in is not necessarily to hack your bank's web site directly, unless the main target is the bank itself. Institutions like banks and healthcare providers are subject to laws, regulations and mandates to protect customers' personal information. These institutions must commit financially and administratively to implementing security solutions, so attacking them is a high-cost operation and an obviously much more difficult effort.

[image]

An alternative, as illustrated above, is to attack unregulated, low-profile, lesser-known, mom-and-pop shops where you perhaps order groceries, your favorite leaf teas, or neighborhood deliveries, once a hacker has learned your lifestyle from your posting, liking and commenting on subjects and among communities in social media. Many of those shops are family-owned businesses, operating on a shoestring budget, with barely enough awareness and technical skill to maintain a web site built with freeware downloaded from some unknown site. The OS is probably not patched up to date. If there is antivirus software, it may be a free trial that has expired. The point is that for those small businesses the security of the computing environment is probably not an everyday priority, let alone a commitment to protecting your personal information.

The alarming fact is that many do use the same password for accessing multiple sites.

[image]

Bitdefender published a study in August 2010, as shown above, revealing that more than 250,000 email addresses, usernames and passwords could be found easily online, via postings on blogs, collaboration platforms, torrents and other channels. It pointed out that “In a random check of the sample list consisting of email addresses, usernames and passwords, 87 percent of the exposed accounts were still valid and could be accessed with the leaked credentials. Moreover, a substantial number of the randomly verified email accounts revealed that 75 percent of the users rely on the same password to access both their social networking and email accounts.”

[image]

On April 23, 2013, Ofcom published the finding (shown above) that “More than half (55%) of adult internet users admit they use the same password for most, if not all, websites, according to Ofcom’s Adults’ Media Use and Attitudes Report 2013. Meanwhile, a quarter (26%) say they tend to use easy to remember passwords such as birthdays or names, potentially opening themselves up to the threat of account hacking.” As noted, this was based on interviews with 1,805 adults aged 16 and over conducted as part of the research. Although these statistics are derived from surveying UK adult internet users, they represent a common practice in internet use and raise a real security concern.

Convenience at the Risk of Compromising Security

With free Wi-Fi, Internet access is available at coffee shops, bookstores, shopping malls, airports, hotels, just about everywhere. The convenience nevertheless comes with high risk, since these free access points are also available for hackers to identify, phish and attack targets. Operating your account over public Wi-Fi, or using a shared device to access protected information, is essentially inviting unauthorized access and an invasion of your privacy. Using the same password with multiple sites further increases the opportunity to compromise your high-profile accounts via a weaker one. It is a poor and potentially costly practice with devastating results: choosing convenience at the risk of compromising security.

User credentials and any Personally Identifiable Information (PII) are valuable assets and exactly what hackers are looking for. Identifying and protecting PII should be an essential part of a security solution.

Fundamental Issues with Password

Examining the facts presented above surfaces two issues with passwords. First, the security of a password relies heavily on a user's practices, which is problematic. Second, a hacker can log in remotely from another state or country, on a different device, with stolen user credentials. A direct answer to these issues is to simply not use a password at all, and instead use something like biometrics, eliminating the need for a user to remember a string of strange characters. Then associate the user credentials with the user's local hardware, so that the credentials cannot be used from a different device. In other words, employ the user's device as a second factor for MFA.

Closing Thoughts

A password is the weakest link in a security solution. Keep it complex and long. Exercise common sense in protecting your credentials. Regardless of whether you have “won” a trip to Hawaii or $25,000 in free money, do not read those suspicious emails. Before clicking a link, read the URL in its entirety and make sure it is legitimate. None of this is new; it is just a review of computer security basics.

Evidence nevertheless shows that many of us tend to use the same passwords for multiple sites as our passwords multiply and become complex and hard to remember. And the risk of unauthorized access becomes high.

For IT, eliminating passwords and replacing them with biometrics is an emerging trend. Implementing Multi-Factor Authentication needs to happen sooner rather than later. Assessing hardware- and virtualization-based security to fundamentally design out rootkits and ensure hardware boot integrity and OS code integrity should be a top priority. These are the subjects to be examined as this blog post series continues.

Changing Server Installation Option from Server Core to Server-Gui-Shell

This is a follow-up to my earlier article on Features on Demand, http://aka.ms/FoD, where I also illustrated changing server installation options and developing a minimal server footprint. In this post, I walk through the process of changing the server installation option from Server Core to Server-Gui-Shell.

In this scenario, I installed Server Core, as shown below, to a VM from the Windows Server 2012 R2 ISO file, en_windows_server_2012_r2_with_update_x64_dvd_4065220.iso, downloaded from MSDN. I later changed the Server Core installation to one with Server-Gui-Shell, i.e. with the full server GUI, using DISM online commands. The OS disk is 40 GB in size.

[images]

The following displays the C drive of a clean Server Core install. The feature store is at c:\windows\winsxs, as shown. I also included the command in the screen capture.

[images]

The following are some highlights of a formatted output of the server features of a clean Server Core install, produced with a DISM online command.

[image]

Since the GUI components had a state of “disabled with payload removed,” I needed to point out a source from which to enable the GUI feature. Below, I first shared out the feature store, i.e. c:\windows\winsxs, with read access for everyone, on the machine named VMM, a full-GUI installation serving as the source.

[images]

Then I ran the DISM online command to enable server-gui-shell as:

dism /online /enable-feature /featurename:server-gui-shell /source:\\vmm\winsxs /all

To include all dependencies, I used /all in the statement. Notice the default behavior of enabling a feature is to ultimately go to Windows Update if an installation source is not specified or the needed bits are not found. Remember to include /LimitAccess in the statement to prevent contacting Windows Update, as needed.
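For completeness, a hedged sketch of the same operation constrained to the local source only, i.e. with Windows Update explicitly blocked (the share path is the one used in this walkthrough; adjust it for your environment):

```cmd
REM Enable the full server GUI from the shared feature store only,
REM without falling back to Windows Update
dism /online /enable-feature /featurename:Server-Gui-Shell /all ^
     /source:\\vmm\winsxs /LimitAccess
```

With /LimitAccess, the command fails fast if the share does not contain the needed payload, rather than silently reaching out to Windows Update.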

This statement took a few moments to complete, since it is a gigabyte-scale operation. After enabling server-gui-shell, I did a dir on the feature store just to show the difference made by enabling the feature. Notice that 4,006 directories were added to the feature store by the operation.

[images]

After a reboot, I noticed I was not able to find Server Manager on the Apps page. Opening a command prompt, I ran the statement, powershell “get-windowsfeature”, and interestingly none of the GUI components was enabled. This may be an anomaly. So I ran the statement, powershell “install-windowsfeature server-gui-shell”, to install the GUI shell again. I thought I would need to reboot, but the operation ran successfully and did not require one. I then saw Server Manager, PowerShell ISE, etc. Server Manager came up fine and all appeared back to normal.

[image]
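The recovery steps above can be sketched directly in PowerShell, using the standard Server Manager module cmdlets (a sketch, not a prescription; the wildcard is just a convenient filter):

```powershell
# Check which GUI-related features are enabled after the reboot
Get-WindowsFeature -Name *gui*

# Re-install the GUI shell; -Restart reboots automatically only if required
Install-WindowsFeature -Name Server-Gui-Shell -Restart
```

Running Install-WindowsFeature from an elevated PowerShell session avoids the powershell "…" quoting shown above.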

Finally, you may notice that the above operations do not work on Hyper-V Server, which is a free, dedicated stand-alone product with the hypervisor, the Windows Server driver model, virtualization capabilities, and supporting components such as failover clustering. Hyper-V Server, however, does not contain the set of features and roles of a Windows Server operating system, which requires a purchased license.

Call to Action

  • Register at Microsoft Virtual Academy, http://aka.ms/MVA1, and work on the server track to better understand the roles and features of Windows Server 2012 R2.
  • Download Windows Server 2012 R2 trial from http://aka.ms/R2 and evaluate the product.
  • Follow the Features on Demand series, http://aka.ms/FoD, and assess a minimal server footprint based on your business requirements.

Windows Server 2012 R2 Hyper-V Replica Configuration, Simulation and Verification Explained

Windows Server 2012 R2 Hyper-V Role introduces a new capability, Hyper-V Replica, as a built-in replication mechanism at a virtual machine (VM) level. Hyper-V Replica can asynchronously replicate a selected VM running at a primary (or source) site to a designated replica (or target) site across LAN/WAN. The following schematic depicts this concept.

A replication process replicates, i.e. creates, an identical VM in the Hyper-V Manager of a target replica server. Subsequently, the change-tracking module of Hyper-V Replica tracks and replicates the write operations in the source VM at a set interval after the last successful replication, regardless of whether the associated VHD files are hosted in SMB shares, Cluster Shared Volumes (CSVs), SANs, or directly attached storage devices.

Hyper-V Replica Requirements

Both the primary site and the replica site above are Windows Server 2012 R2 Hyper-V hosts; the former runs production, or so-called primary, VMs, while the latter hosts replicated ones, which are off and each to be brought online should a corresponding primary VM experience an outage. Hyper-V Replica requires neither shared storage nor specific storage hardware. Once an initial copy is replicated to a replica site, Hyper-V Replica replicates only the changes of a configured primary VM, i.e. the deltas, asynchronously.

It goes without saying that assessing business needs and developing a plan on what to replicate, and where, is essential. In addition, both the primary site and the replica site (i.e. the Hyper-V hosts running the source and replica VMs, respectively) must enable Hyper-V Replica in the Hyper-V Settings of Hyper-V Manager. IP connectivity is assumed where applicable. Notice that Hyper-V Replica is a server capability and is not applicable to Client Hyper-V, such as Hyper-V running in a Windows 8 client.

Establishing Hyper-V Replica Infrastructure

Once all the requirements are in place, enable Hyper-V Replica on both the primary site (i.e. the Windows Server 2012 R2 Hyper-V host where a source VM runs) and the target replica server, then configure an intended VM at the primary site for replication. The following is a sample process to establish Hyper-V Replica.

Step 1 – Enable Hyper-V Replica

Identify a Windows Server 2012 R2 Hyper-V host as the primary site where a target source VM runs. Enable Hyper-V Replica in the Hyper-V Settings in Hyper-V Manager of the host. The following are sample Hyper-V Replica settings of a primary site. Repeat the step on the target replica site to enable Hyper-V Replica there. In this article, DEVELOPMENT and VDI are both Windows Server 2012 R2 Hyper-V hosts, where the former is the primary site and the latter the replica site.

[image]
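Step 1 can also be scripted. A hedged PowerShell sketch using the standard Hyper-V module cmdlets (the storage path is a hypothetical example; run on each host that should receive replication):

```powershell
# On the replica host (VDI): enable receiving replication over Kerberos/HTTP
Set-VMReplicationServer -ReplicationEnabled $true `
    -AllowedAuthenticationType Kerberos `
    -ReplicationAllowedFromAnyServer $true `
    -DefaultStorageLocation 'D:\Replicas'

# Allow inbound replication traffic through the Windows firewall
Enable-NetFirewallRule -DisplayName 'Hyper-V Replica HTTP Listener (TCP-In)'
```

In production, consider restricting allowed primary servers rather than accepting replication from any server.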

Step 2 – Configure Hyper-V Replica on Source VMs

Using the Hyper-V Manager of the primary site, enable Hyper-V Replica on a VM by right-clicking the VM and selecting the option, then walking through the wizard to establish a replication relationship with the target replica site. Below shows how to enable Hyper-V Replica on a VM, named A-Selected-VM, at the primary site, DEVELOPMENT. The replication settings are fairly straightforward and left for the reader to get familiar with.

[image]
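The same wizard steps have a PowerShell equivalent; a minimal sketch with the article's host and VM names, assuming Kerberos authentication over port 80 as configured in step 1:

```powershell
# On the primary host (DEVELOPMENT): configure A-Selected-VM to replicate to VDI
Enable-VMReplication -VMName 'A-Selected-VM' `
    -ReplicaServerName 'VDI' -ReplicaServerPort 80 `
    -AuthenticationType Kerberos
```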

Step 3 – Carry Out an Initial Replication

While configuring Hyper-V Replica on a source VM in step 2, there are options for delivering an initial replica. An initial replica can be transmitted over the network, either in real time or according to a configured schedule. Optionally, an initial copy can instead be delivered out of band, i.e. with external media, when sending it over the wire might overwhelm the network, be unreliable, take too long, or not be preferred due to the sensitivity of the content. There are, however, additional considerations with out-of-band delivery of an initial replica.

Option to Send Initial Replica Out of Band

The following shows the option to send the initial copy using external media while enabling Hyper-V Replica on a VM named A-Selected-VM.

[image]

An exported initial copy includes the associated VHD disk and an XML file capturing the configuration information, as depicted below. This folder can then be delivered out of band to the target replica server and imported into the replica VM.

[image]
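Both initial-replication paths can be scripted as well. A hedged sketch with the Hyper-V module cmdlets (the media path is hypothetical):

```powershell
# Network delivery: start the initial replication over the wire
Start-VMInitialReplication -VMName 'A-Selected-VM'

# Out-of-band delivery: export the initial copy to external media instead
Start-VMInitialReplication -VMName 'A-Selected-VM' -DestinationPath 'E:\InitialCopy'

# On the replica host, import the copy delivered on external media
Import-VMInitialReplication -VMName 'A-Selected-VM' -Path 'E:\InitialCopy'
```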

When delivering an initial copy with external media, a replica VM configuration is still automatically created at the intended replica site, exactly as in the experience of sending a replica over the network. The difference is that the source VM (here, the VM on the primary site, DEVELOPMENT) shows an initial replication in progress in its replication health, while the replica VM (here, the VM on the replica site, VDI) has an option to import an initial replica in its replication settings and presents a warning in its replication health, as shown below:

The primary site

[image]

The replica site

[image]

After receiving the initial replica with external media, the replica VM can then import the disk as shown:

[image]

and upon successfully importing the disk, the following shows the replica VM with additional replication options and updated replication health information.

[image]

Step 4 – Simulate a Failover from a Primary Site

This is done at the associated replica site. First verify that the replication health of the target replica VM is normal. Then right-click the replica VM and select Test Failover under Replication settings, as shown here:

[image]

Notice that, ordinarily, a failover should occur only when the primary VM experiences an outage. A simulated failover, however, does not require shutting down the primary/source VM. It does, after all, create a VM instance at the replica site without altering the replica settings in place. A VM instance created by a Test Failover should be deleted via the option, Stop Test Failover, in the replication UI where the Test Failover was originally initiated, as demonstrated below, to ensure all replication settings remain valid.

[image]
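The test-failover cycle above can be sketched with the Hyper-V cmdlets (hedged; run on the replica host):

```powershell
# Verify replication health for the replica VM before testing
Measure-VMReplication -VMName 'A-Selected-VM'

# Run a simulated (test) failover; creates a disposable test VM instance
Start-VMFailover -VMName 'A-Selected-VM' -AsTest

# When done, remove the test instance and leave replication settings intact
Stop-VMFailover -VMName 'A-Selected-VM'
```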

Step 5 – Do a Planned Failover from the Primary Site

In a maintenance event where a primary/source VM expects an outage, ensure the replication health is normal, then conduct a Planned Failover from the primary site after shutting down the source VM, as shown below.

[image]

Notice that successfully performing a Planned Failover with the presented settings automatically establishes a reverse replication relationship. To verify the failover has been carried out correctly, check the replication health of the replication pair of VMs on both the primary site and the replica site, before and after a planned failover; the results need to be consistent across the board. The following examines a replication pair: A-Selected-VM on DEVELOPMENT at the primary site and its replica on VDI at the replica site. Prior to a planned failover from DEVELOPMENT, the replication health information of the source VM (left, on the DEVELOPMENT server) and the replica VM (right, on the VDI server) is as shown below.

[image]

After a successful planned failover event, as demonstrated at the beginning of this step, the replication health information becomes the following: the replication roles have been reversed and the state is normal, signifying the planned failover was carried out successfully.

[image]

In the event that a source VM experiences an unexpected outage at the primary site, failing the primary VM over to the replica will not automatically establish a reverse replication, since un-replicated changes were lost with the unexpected outage.
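The planned-failover sequence in this step can also be sketched in PowerShell (hedged; the same Hyper-V module cmdlets, run on the hosts indicated in the comments):

```powershell
# On the primary host (DEVELOPMENT): shut down the VM and prepare the failover,
# which sends any remaining changes to the replica
Stop-VM -VMName 'A-Selected-VM'
Start-VMFailover -VMName 'A-Selected-VM' -Prepare

# On the replica host (VDI): complete the failover, reverse the replication
# relationship, and bring the (former) replica VM online
Start-VMFailover -VMName 'A-Selected-VM'
Set-VMReplication -VMName 'A-Selected-VM' -Reverse
Start-VM -VMName 'A-Selected-VM'
```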

Step 6 – Finalize the Settings with Another Planned Failover

Conduct another Planned Failover event to confirm that reverse replication works. In the presented scenario, the planned failover now shuts down the primary VM at the current primary site (i.e. VDI at this time), then fails it back over to the replica site (i.e. DEVELOPMENT at this time). Upon successful execution of this planned failover event, the resulting replication relationship should again be DEVELOPMENT as the primary site with VDI as the replica site, the same state as at the beginning of step 5. At this point, in a DR scenario, failing the production site (in the above example, the DEVELOPMENT server) over to a DR site (here, the VDI server), and, as needed, failing back from the DR site (the VDI server) to the original production site (the DEVELOPMENT server), are proven to work bi-directionally.

Step 7 – Incorporate Hyper-V Replica into Existing SOPs

Incorporate Hyper-V Replica configuration and maintenance into applicable IT standard operating procedures, and start monitoring and maintaining the health of Hyper-V Replica resources.
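For routine monitoring, the Hyper-V module offers cmdlets that fit naturally into a scheduled health-check script; a minimal sketch:

```powershell
# List all replication relationships configured on this host
Get-VMReplication

# Replication health and statistics (health state, average size, latency, errors)
Measure-VMReplication
```

The output of Measure-VMReplication is the scriptable counterpart of the Replication Health dialog used throughout this article.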

Hyper-V Extended Replication

In Windows Server 2012 R2, we can now further replicate a replica VM from a replica site to an extended replica site, similar to a backup of a backup. The concept, as illustrated below, is quite simple.

[image]

The process is to first set up Hyper-V Replica from a primary site to a target replica site, as described earlier in this article. Then, at the replica site, simply configure Hyper-V Replica on the replica VM, as shown below. In this way, a source VM is replicated from the primary site to the replica site, which then replicates the replica VM to an extended replica site.

[image]
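Extending replication uses the same cmdlet as step 2, simply run against the replica VM at the replica site. A hedged sketch ('DR-SITE' is a hypothetical extended-replica server name):

```powershell
# On the replica host (VDI): extend replication of the replica VM to a third host
Enable-VMReplication -VMName 'A-Selected-VM' `
    -ReplicaServerName 'DR-SITE' -ReplicaServerPort 80 `
    -AuthenticationType Kerberos
Start-VMInitialReplication -VMName 'A-Selected-VM'
```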

In a real-world setting, Hyper-V Extended Replication fits business needs well. For instance, IT may want a replica site nearby for convenience and timely response when monitored VMs experience a localized outage within IT's control while the datacenter remains up and running. In an outage that takes an entire datacenter or geo-region down, however, an extended replica stored in a geo-distant location is pertinent.

Hyper-V Replica Broker

Importantly, if employing a Hyper-V failover cluster as a replica site, one must use Failover Cluster Manager to perform all Hyper-V Replica configuration and management, starting by creating a Hyper-V Replica Broker role, as demonstrated below.

[image]

And here is a sample configuration:

[image]

Although the above uses Kerberos authentication, as far as Hyper-V Replica is concerned a Windows Active Directory domain is not a requirement. Hyper-V Replica can also be implemented between workgroups and untrusted domains with certificate-based authentication. Active Directory is, however, a requirement if a Hyper-V host is part of a failover cluster, as in the case of a Hyper-V Replica Broker, and in that case all Hyper-V hosts of the failover cluster must be in the same Active Directory domain.

Business Continuity and Disaster Recovery (BCDR)

In a BC scenario with a planned failover event of a primary VM, for example scheduled maintenance, Hyper-V Replica first copies any un-replicated changes to the replica VM, so that the event produces no data loss. Once the planned failover is complete, the replica VM becomes the primary VM and carries the workload, while a reverse replication is set automatically. In a DR scenario, i.e. an unplanned outage of a primary VM, an operator needs to bring up the replica VM manually, with an expectation of some data loss: specifically, the changes since the last successful replication, based on the set replication interval.

Closing Thoughts

The significance of Hyper-V Replica lies not only in its simplicity to understand and operate, but in the readiness and affordability that come with Windows Server 2012 R2. A DR solution for businesses of any size is now a reality with Windows Server 2012 R2 Hyper-V. For small and medium businesses, never has a DR solution been so feasible. IT pros can now configure, simulate and verify results in a productive, on-demand manner to formulate, prototype and pilot a DR solution at the VM level on a Windows Server 2012 R2 Hyper-V infrastructure. At the same time, in an enterprise setting, the System Center family remains the strategic platform to maximize the benefits of Windows Server 2012 R2 and Hyper-V with a comprehensive system management solution, including private cloud automation, process orchestration, and service deployment and management at the datacenter level.

Additional Information