Azure Network Topology Document Extracts and Notes

Azure Network Topology

  • Two core approaches: traditional and Azure Virtual WAN
  • The referenced document includes a topology diagram for each model.
Feature comparison: Traditional Azure Network Topology vs. Azure Virtual WAN Network Topology

Highlights
  • Traditional: Customer-managed routing and security. An Azure subscription can create up to 50 vnets across all regions. VNet peering links two vnets, either in the same region or in different regions, and enables you to route traffic between them using private IP addresses (peering carries a nominal charge; see the sketch after this comparison). Inbound and outbound traffic is charged at both ends of the peered networks, and network appliances such as VPN Gateway and Application Gateway that run inside a virtual network are also charged. See Azure Virtual Network Pricing.
  • Virtual WAN: A Microsoft-managed networking service providing optimized and automated branch-to-branch connectivity through Azure. Virtual WAN allows customers to connect branches to each other and to Azure, centralizing their network and security needs with virtual appliances such as firewalls and Azure network and security services. See Azure Virtual WAN Pricing.

Deployment
  • Traditional: Customized deployment with routing and security managed by the customer. References: Virtual Network documentation; Plan virtual networks; Tutorial: Filter network traffic with a network security group using the Azure portal.
  • Virtual WAN: Microsoft-managed service. References: Virtual WAN documentation; Tutorial: Create an ExpressRoute association to Virtual WAN – Azure portal; other tutorials include site-to-site and point-to-site connections.

Interconnectivity
  • Traditional: Traffic between two virtual networks across two different Azure regions is expected; a full mesh network across all Azure regions is not required.
  • Virtual WAN: Global connectivity between vnets in these Azure regions and multiple on-premises locations.

IPsec Tunnels
  • Traditional: Fewer than 30 IPsec site-to-site tunnels are needed.
  • Virtual WAN: More than 30 branch sites for native IPsec termination.

Routing Policy
  • Traditional: Full control and granularity for manually configuring your Azure network routing policy.
  • Virtual WAN: Not applicable.

Data Collection: Collects data from servers and Kubernetes clusters.
Data Storage: Stores data in a Log Analytics workspace or the customer's own storage account.
Data Analysis and Visualization: Uses Log Analytics or Azure Monitor for analysis and visualization of collected data.
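
A minimal PowerShell sketch of the peering note above, assuming the Az.Network module and two existing vnets; all names and the resource group are hypothetical placeholders:

# Sketch: peer two existing vnets in both directions (names are hypothetical).
$rg    = "rg-network-demo"
$hub   = Get-AzVirtualNetwork -ResourceGroupName $rg -Name "vnet-hub"
$spoke = Get-AzVirtualNetwork -ResourceGroupName $rg -Name "vnet-spoke"

# Peering must be created from each side to become fully connected.
Add-AzVirtualNetworkPeering -Name "hub-to-spoke" -VirtualNetwork $hub -RemoteVirtualNetworkId $spoke.Id
Add-AzVirtualNetworkPeering -Name "spoke-to-hub" -VirtualNetwork $spoke -RemoteVirtualNetworkId $hub.Id

# Check the peering state (expected: Connected on both sides).
Get-AzVirtualNetworkPeering -ResourceGroupName $rg -VirtualNetworkName "vnet-hub"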

Additional Information

Why Azure Arc

For IT decision makers, here’s why it’s pertinent to consider Azure Arc:

  • An integrated management and governance solution that is centralized and unified, providing streamlined control and oversight.
  • Securely extending your on-prem and non-Azure resources into Azure Resource Manager (ARM), empowering you to:
    • Define, deploy, and manage resources in a declarative fashion using JSON templates for dependencies, configuration settings, policies, etc.
    • Manage Azure Arc-enabled servers, Kubernetes clusters, and databases as if they were running in Azure with consistent user experience.
    • Harness your existing Windows and Azure sysadmin skills honed from on-premises deployment.
  • When connecting to Azure Arc-enabled servers, you may perform many operational functions, just as you would with native Azure VMs including these key supported actions:
    • Govern
    • Protect
      • Secure non-Azure servers with Microsoft Defender for Endpoint, included through Microsoft Defender for Cloud, for threat detection, vulnerability management, and proactive monitoring for potential security threats. Microsoft Defender for Cloud presents the alerts and remediation suggestions from the threats detected.
    • Configure
    • Monitor
      • Keep an eye on the OS, processes, and dependencies along with other resources using VM insights. Additionally, collect, store, and analyze OS and workload logs, performance data, and events, which can be ingested into Microsoft Sentinel for real-time analysis, threat detection, and proactive security measures across the entire IT environment.
Support for Windows Server 2012 and 2012 R2 ends on October 10, 2023.

Extended Security Updates (ESUs) can be enabled through Azure Arc. IT can seamlessly deploy ESUs through Azure Arc in on-premises or multicloud environments, right from the Azure portal. In addition to providing centralized management of security patching, ESUs enabled by Azure Arc are flexible, with a pay-as-you-go subscription model, compared with the classic ESUs offered through the Volume Licensing Center, which are purchased in yearly increments.

To test it out, follow Quickstart – Connect hybrid machine with Azure Arc-enabled servers.
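
For reference, onboarding a machine generally follows the pattern below. This is a hedged sketch only; the download URL, resource group, and other values are placeholders to confirm against the quickstart:

# Sketch: connect a non-Azure Windows machine to Azure Arc (values are placeholders).
# Download and run the Connected Machine agent installer (URL per the quickstart).
Invoke-WebRequest -Uri "https://aka.ms/azcmagent-windows" -OutFile "$env:TEMP\install_windows_azcmagent.ps1"
& "$env:TEMP\install_windows_azcmagent.ps1"

# Connect the machine to a resource group; an interactive login is prompted if no service principal is supplied.
& "$env:ProgramFiles\AzureConnectedMachineAgent\azcmagent.exe" connect `
    --resource-group "rg-arc-demo" `
    --tenant-id "<tenant-id>" `
    --location "eastus" `
    --subscription-id "<subscription-id>"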

Deploying Azure VM with a Generalized VHD file Using Azure Portal

Assuming one has already

  • logged in to the Azure portal, and
  • stored a generalized VHD in an Azure storage account,
  1. Create an image from the target VHD file by
     • searching for and finding the Images service,
     • adding a VM image, and
     • browsing to and selecting the intended VHD file to create the VM image.
  2. Create a VM from the image by
     • selecting the target image from the Images page, and
     • creating a VM with the image from the image overview page.

 Test RDP

From the VM overview page, start and connect to the VM.

  • If RDP does not start a dialog like the following,

use the Run command to review and validate the VM's RDP settings, as needed.

  • If experiencing a credential issue,

reset the user password or create a new user credential.
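
The same two steps (create the image, then the VM) can also be scripted. Below is a minimal sketch with the Az PowerShell module as an alternative to the portal; the resource names and the VHD URI are hypothetical:

# Sketch: create a managed image from a generalized VHD, then a VM from that image (values are placeholders).
$rg     = "rg-image-demo"
$loc    = "eastus"
$vhdUri = "https://mystorageaccount.blob.core.windows.net/vhds/generalized.vhd"

$imgCfg = New-AzImageConfig -Location $loc
$imgCfg = Set-AzImageOsDisk -Image $imgCfg -OsType Windows -OsState Generalized -BlobUri $vhdUri
$image  = New-AzImage -ImageName "img-from-vhd" -ResourceGroupName $rg -Image $imgCfg

# Create a VM from the image; credentials are prompted for the local administrator account.
$cred = Get-Credential
New-AzVM -ResourceGroupName $rg -Location $loc -Name "vm-from-image" -Image $image.Id -Credential $cred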

 

A Secure Software Supply Chain with Containers

The concept of a software supply chain is not a new one. What may be new is that CI/CD (Continuous Integration/Continuous Delivery) with containers makes it conceptually easy to understand and technically practical to implement. Here's a process diagram illustrating this approach in five steps.

image

CI/CD Process

A software supply chain here is the “master” branch of a release, while development activities at other branch levels are not considered. The start of the master branch is where and when code or a change is introduced, while at the other end of the master branch is a production runtime environment where applications run. The process, as shown above, is highlighted in five steps.

  1. A CI/CD process starts by committing and pushing code to a centralized repository. Code here encompasses all programmatic assets relevant to defining, configuring, deploying and managing the application workload.
  2. Changes made on the master branch trigger the CI process to automatically (re)build and (re)test all assets relevant to the workload which the master branch delivers.
  3. A successful outcome generates a container image which is automatically versioned, tagged, registered and published in a designated trusted registry. In Docker Enterprise Edition, this is implemented as a Docker Trusted Registry, or DTR. The function of a trusted registry is to secure and host container images. Important tasks carried out here are to scan the image at a binary level for known malware, check for vulnerabilities and digitally sign the container image upon successful processing. The generation of a container image signifies that application assets are successfully integrated and packaged, which marks the end of the CI part (a sketch of these CI actions follows this list).
  4. CD kicks off, executes and validates the steps for deploying the workload to a target production environment.
  5. Upon instantiating containers, referenced container images are pulled to a local host as needed and the container instances are started, hence an application or service.
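
To make step 3 concrete, the core CI actions often look like the sketch below, expressed in PowerShell calling the Docker CLI. The registry address, repository, version and the run-tests.ps1 helper are hypothetical placeholders:

# Sketch of the CI portion: build, test, version, and publish an image to a trusted registry.
$registry = "dtr.example.com"   # hypothetical trusted registry (e.g., a DTR instance)
$repo     = "engineering/web"
$version  = "1.4.2"             # typically derived from the build number or a git tag
$image    = "{0}/{1}:{2}" -f $registry, $repo, $version

docker build -t $image .                              # (re)build the workload from the committed assets
docker run --rm $image powershell -File .\run-tests.ps1   # run the test suite inside the freshly built image
docker login $registry                                # credentials supplied by the CI system
docker push $image                                    # the registry then scans and signs the published image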

Notice that Continuous Delivery is a reference to capability and not to state. Continuous Delivery signifies the ability to keep a payload in a production-ready and deployable condition at all times. It does not necessarily suggest that a payload, once validated, is deployed to production immediately.

One Version of Truth

A software supply chain starts with developers committing and pushing code into the master branch of a release’s centralized repository. To have one version of truth, centralized management is essential. Nevertheless, we can operate a centralized repository in a distributed fashion, as with GitHub. Further, a centralized source code hosting solution must properly address priorities including role-based access control, naming and branching, high availability, network latency, single points of failure, etc. With source control, promoted code can be versioned and tagged for asset and release management.

Triggering upon Pushing Changes

When changes are introduced into the master branch, which is the supply chain, CI must and will automatically kick off the validation process, which includes a set of defined criteria, namely test cases.

Test-Driven, a Must

Once the development criteria (or requirements) are set, test cases can be developed accordingly prior to writing the code. This essentially establishes the acceptance criteria and forces a developer to focus on designing code guided by what must later be validated, which in essence designs in the quality.

When changes are made, there is no point in “manually” executing all test cases against all the code. Let me be clear. The regression tests are necessary, but the repetition with manual labor is counterproductive and error-prone. A better approach is to programmatically automate all structured tests (i.e. where the criteria are structured and stable, like canned test cases) and let a tester do exploratory testing, which may not be performed with scripted, expected or even logical steps, but with the intent to break the system. The automation makes regression tests consistent and efficient, while exploratory testing adds extra value by expanding the test coverage.

Master Branch Has No Downtime

A test-driven development in CI/CD holds a key objective: the master branch is always functional and ready to deliver. This may at first appear to some of us as idealistic and over-committed. In fact, consider an automobile production line: as material and parts are put together while moving from one station to another, an automobile is being built. If at any time a station breaks down, it must be fixed on the spot, since the whole production line is on hold. Fixing whatever stops a subject from moving from one station to the next is necessary to keep the production line producing.

A software supply chain, or a CI/CD pipeline, with containers is a digital version of the above-mentioned model, where the artifacts are definitions, configurations and scripts. As these artifacts are integrated, built and tested throughout the pipeline, the process to construct a service based on containers is validated. If a step fails validation, the pipeline stops and the issue must be immediately addressed and resolved, so the process can continue to the next step and material keeps flowing through the pipeline. To CI/CD, the master branch is the pipeline and must always be kept functional and ready to deliver.

Containers Are Not the Deliverables

It should be noted here that the artifacts passing through the CI/CD pipeline are neither container images nor container instances. What the pipeline validates is a set of developed definitions, configurations and processes based on application requirements and presented in markup languages like JSON or YAML. In Docker, these are the Dockerfile, the Compose YAML file and template-based scripts, for example, to define the application architecture with a configured Docker runtime environment for a target application delivered as containers upon instantiation.

Container images and instances are in a way by-products; they are not intended to be the deliverables. A container image generated by a CI/CD pipeline should always first be programmatically created by the initial CI and later referenced or updated by CD. The key is that images must be pulled or generated by executing CI. With Docker, thanks to configuration management natively expressed as code, a release may employ a particular version of a container image, and upgrading or rolling back a software supply chain may be as easy as changing the referenced version, followed by redeploying the associated payload.

Trusted Registry, the Connective Tissue

Once CI has successfully generated a container image, it should register and upload the image to a trusted registry for security scanning and digital signing, before CD takes over and later pulls or updates the image as needed to complete the CI/CD pipeline. Technically, CI starts from receiving code changes and ends at successfully registering a container image.

Failing to register a resulting container image will prevent CD from progressing when it references the image. In other words, a trusted registry is like connective tissue holding and keeping CI and CD fully synchronized and functional with the associated container images. A generated container image does not flow through every step of a CI/CD pipeline; the image is however the focal point for the validity of the produced results. As shown in the above diagram, I used a dashed line between CI and CD to indicate the dependency on the trusted registry. Failing a registration will eventually fail the overall process.
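
As a small illustration, with Docker Content Trust enabled on the client, a push publishes and signs the tag, and a pull verifies the signature. The registry address and tag are hypothetical, and this assumes the registry supports content trust:

# Sketch: enable Docker Content Trust so pushes are signed and pulls verify signatures.
$env:DOCKER_CONTENT_TRUST = "1"
docker push dtr.example.com/engineering/web:1.4.2   # publish and sign the tag
docker pull dtr.example.com/engineering/web:1.4.2   # CD side: only signed tags are accepted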

Closing Thoughts

The essence of CI is automatic testing against defined criteria at the unit, function and integration levels. These criteria are basically test cases which should be developed prior to code development as acceptance criteria. This is key. Development must fully address these test criteria at coding time to build in quality.

A software supply chain is a better way. Wait, make that a much better way, than just “developing” applications. I remember those days when every release to production was a nightmare, and code promotion was an event full of anxiety, numerous crashes and many long hours. Good riddance to those days. CI/CD with containers presents a very interesting and powerful combination for quality and speed, and it is unusual to be able to achieve both at the same time. With Docker, Jenkins, GitHub and Azure, a software supply chain is realistic and approachable, which I will detail in an upcoming post. Stay tuned.

Microsoft Nano Server with Docker Enterprise Edition (EE)

This article details the process to install the latest Docker EE, Version 17.06.2-ee-3, on a Microsoft Nano Server. I am sure there are different ways to do this. After a few iterations, here is one verified approach. A sample script is available.

Background

As shown below, adding the server feature, Containers, in Windows Server 2016 installs Docker EE Version 17.06.1-ee-2. In Windows 10, by contrast, adding Containers in ‘Turn Windows features on or off’ under Programs and Features in Control Panel installs Docker CE Version 17.03.1-ce, i.e. Community Edition. Information about the two versions is available. The latest version of Docker EE is 17.06.2-ee-3. To keep all Docker EE instances at the same and the latest version, one may need to manually install Docker EE instead of employing the default version shipped with Windows Server. To manually install Docker EE on a Microsoft Nano Server, follow the steps provided below.

Windows Server 2016 patched on 10/05/2017

image

Windows 10 patched on 10/05/2017

image

Step 1 – Create a Nano Server vhdx file with the container package

First, use Nano Server Image Builder to create a vhdx file with the intended packages, including Containers. Notice that if the Windows ADK (Assessment and Deployment Kit) is not in place, it will prompt to install the ADK, which is about a 6.7 GB download. Once the ADK is in place, start the image builder, which is wizard-driven and straightforward. I picked the vhdx format for building a Gen2 VM. And as shown below, I also added the Containers, Hyper-V and Anti-Virus packages. The Windows Server 2016 media used to create the Nano Server vhdx file is en_windows_server_2016_x64_dvd_9718492.iso, downloaded from my MSDN subscription.

clip_image001

Step 2 – Update the Nano Server OS

In Hyper-V Manager, I created and started a Gen2 VM using the vhdx created in Step 1, and logged in to the VM to find out the IP address, as shown below.

image

I did the following to connect to the host. Once connected (not shown in the following), test the Internet connectivity and update the DNS setting, as needed, by following the instructions in the sample script. What should be done first is to carry out a Windows update, which I did.

image

For this particular VM, I had already updated the OS before taking this screen capture, thus there were no applicable updates. Originally there were two updates, KB4035631 and KB4038782, listed as applicable. This update took 20 minutes with about a 2 GB download, followed by a reboot of the system. If there is an interest in examining the list of applicable updates beforehand, you can run the following in the PSSession before the Invoke-CimMethod in line 8:

$updateList = ($ci | Invoke-CimMethod -MethodName ScanForUpdates -Arguments @{SearchCriteria="IsInstalled=0";OnlineScan=$true}).Updates
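
For context, the $ci session object above and the apply step referenced as line 8 follow the documented CIM-based Nano Server update pattern. A hedged sketch of the surrounding flow, run inside the PSSession:

# Sketch of the CIM-based Windows Update flow referenced above (run within the remote PSSession).
$ci = New-CimInstance -Namespace root/Microsoft/Windows/WindowsUpdate -ClassName MSFT_WUOperationsSession

# (Optional) the scan shown above goes here to list applicable updates before installing.

# Download and install all applicable updates, then reboot to complete servicing.
$ci | Invoke-CimMethod -MethodName ApplyApplicableUpdates
Restart-Computer -Force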

Step 3 – Install Docker EE

To simply use the Docker default for the current Windows Server 2016 installation, which is Docker EE Version 17.06.1-ee-2 as stated earlier, installing the provider and package will do. In this case, execute line 1 and line 4 to start a PSSession after updating and rebooting the OS, then run the following PowerShell commands to install Docker.

Install-Module -Name DockerMsftProvider -Repository PSGallery -Force
Install-Package -Name docker -ProviderName DockerMsftProvider
Restart-Computer -Force; exit

To manually install Docker EE Version 17.06.2-ee-3, I did the following:

image

After downloading and extracting the source file in lines 27 and 28, the PATH in the registry is updated as part of the Docker installation to persist the reference across sessions. Rather than rebooting the system, the current PATH is updated to start and verify the installation of the intended version of Docker in lines 39 to 41. The following is the user experience of executing lines 1 to 41 and successfully installing Docker EE Version 17.06.2-ee-3.

image
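
Since the script itself appears only as screen captures, here is a hedged outline of the pattern those lines follow; the package URL is a placeholder for the intended Docker EE zip:

# Sketch of a manual Docker EE installation on Nano Server (the package URL is a placeholder).
$package = "https://<docker-ee-download-url>/docker-17.06.2-ee-3.zip"
Invoke-WebRequest -Uri $package -OutFile "$env:TEMP\docker.zip" -UseBasicParsing
Expand-Archive -Path "$env:TEMP\docker.zip" -DestinationPath $env:ProgramFiles -Force

# Persist the PATH change in the registry, then update the current session's PATH without a reboot.
$machinePath = [Environment]::GetEnvironmentVariable("Path", "Machine") + ";$env:ProgramFiles\docker"
[Environment]::SetEnvironmentVariable("Path", $machinePath, "Machine")
$env:Path += ";$env:ProgramFiles\docker"

# Register and start the Docker engine as a Windows service, then verify the installed version.
& "$env:ProgramFiles\docker\dockerd.exe" --register-service
Start-Service docker
docker version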

In a swarm, keeping all Docker instances at the same version is essential. If there is a need to run Docker EE Version 17.06.2-ee-3 in Windows workloads, the presented steps achieve that.

What’s Next

Having installed Docker EE, start pulling down images and building containers. Deploying a swarm in a hybrid setting is what I plan to share next.

Microsoft Cortana Intelligence Suite Workshop Video Tutorial Series (1/5): Introduction

This series, based on content developed by Microsoft, offers a learning path requiring minimal time and effort to acquire the essential operation-level knowledge of Microsoft Cortana Intelligence Suite. The workshop steps through a process to construct and deploy a web application with predictive analytics, introducing key functional components along the way. By specifying an origin and a destination airport, a future date and time, and an airline carrier, this application predicts a flight delay with a probability based on the weather forecast. The video tutorial series runs about 75 minutes and captures exactly when and what you will see on the screen, and where and how to respond, based on the instructions for each exercise in the workshop.

I believe this series will most benefit those who function in a technical leadership capacity, including enterprise architects, solution architects, cloud architects, application architects, DevOps leads, etc., and are interested in the solution architecture of a predictive analytics application. Going through the recordings will provide you an end-to-end view and clarity on how to construct and deploy a predictive analytics solution, hence a better understanding of the processes and technologies, integration points, packaging and publishing, resource skill profiles, critical path, cost model, etc.

Cortana Intelligence Suite is a set of processes and tools. This workshop outlines an approach where analytic models, data, analysis, visualization, packaging, publishing and deployment are delivered in an integrated fashion. In my view, this is a productive and the right way to start learning how to architect a predictive analytics solution. The above video is the first of five to accelerate your learning of Cortana Intelligence Suite, and it highlights a few important items before starting the workshop.

Content Repo

The content of this workshop, made available by Todd Kitta, is at http://aka.ms/CortanaManual on GitHub. The readme file of the workshop details the scenario, architecture, prerequisites and a list of links to the instructions for all eight exercises.

image

The above architecture diagram of the workshop depicts the functional components of a web application with predictive analytics. Here the lab VM is also employed as an on-premises file server, serving as the source of a data pipeline that is securely connected to an Azure Data Factory service to automatically upload data to be scored by the Azure Machine Learning model. At the center is a Spark HDInsight cluster for data analysis, while the data are visualized with Power BI. The predictive analytics model is integrated and packaged as a web service consumed by a web application.

Introduction

Let’s first pay attention to a few important items before doing the workshop. There are eight exercises in this workshop and I have grouped them into five videos: an introduction and four learning units.

I recommend reading the instructions for an exercise in their entirety before doing the exercise; this will help set the context and gain clarity on the objectives of each exercise. To do the workshop, one will need an active Azure subscription. Notice that a free trial account does provide sufficient credit for doing the entire workshop.

image

The workshop environment is a collection of resources deployed to Azure, as shown above, including:

  • A VM with Internet connectivity for a student to log in and work on all the exercises, such that there is no need to download or install anything locally for this workshop
  • A Machine Learning workspace accessed via Microsoft Azure Machine Learning studio to develop an experiment of predictive analytics
  • A Spark cluster for hosting and analyzing data including a scored dataset and a summary table
  • A number of storage accounts for storing workshop data

These resources do incur a cost. To minimize the cost, try deploying the workshop environment only when you are ready to work on the exercises and delete it once the workshop is completed. The deployment will take about fifteen minutes, if not more. Do deploy all resources and create services in the same resource group, so everything can later be removed by simply deleting the resource group (see the sketch below). Personally, when doing the workshop, I set aside at least a four-hour block, find a quiet room and get a great cup of coffee. It is indeed a lot to consume.
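
For example, assuming everything was deployed into a single resource group (the name below is hypothetical), the post-workshop cleanup is a one-liner with the Az PowerShell module:

# Sketch: remove the entire workshop environment by deleting its resource group (name is hypothetical).
Remove-AzResourceGroup -Name "rg-cortana-workshop" -Force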

Enjoy the workshop. Let’s get started.

An Introduction of Windows 10 Credential Guard

Windows 10 Enterprise has introduced a set of new security features including Credential Guard, which is key for securing derived credentials and defending against ‘credential theft and reuse’ attacks like Pass-the-Hash (PtH) and Pass-the-Ticket. This article provides a technical background and highlights how Credential Guard works. A good reference titled “Protect derived domain credentials with Credential Guard” is also available in the Microsoft TechNet Library. Note that Credential Guard is technically part of Device Guard, to be detailed in an upcoming article.

LSASS, a Known Secret Service in Windows

Credential Guard secures the data kept by the Local Security Authority (LSA) Subsystem Service (LSASS), which is a privileged process in Windows responsible for:

  • Validating users for local and remote sign-ins
  • Enforcing local security policies

On an Active Directory domain controller, LSASS is also responsible for providing Active Directory database lookups, authentication, and replication. The following illustrates an instance of Windows 10 running OS Build 10586, with the desktop Task Manager reporting the status of LSASS. Hereafter, unless stated otherwise, LSA and LSASS are used interchangeably.

image

image

image

The above shows the Task Manager reporting LSASS.exe running in a Windows 10 Enterprise desktop prior to enabling Credential Guard.

The Secret Service’s Secrets

The secrets used for authenticating requests on behalf of users are hashes and Ticket Granting Tickets (TGTs), which this secret service protects. In this article, the focus is on hashes, while much of the information is applicable to TGTs as well.

Hash

The term hash (or password hash, hash value), in the context of computer security, is a derived credential, namely an encrypted form of a user’s plain-text password. During a session, a protected resource, upon being initially accessed by a user, will first try to authenticate the user. Upon a successful authentication, the process then authorizes the request based on the access rights granted to the user and to those security groups the user is a member of for this resource. In the Windows security model, a password is never stored in plain text; instead it first goes through a hash function, and the resulting hash value is what LSA stores for reuse in a subsequent authentication request. A password hash can be stored in one of four forms: LAN Manager (LM), NT, AES key, or Digest. Although recent versions of Windows no longer store LM hashes by default, for backward compatibility some outdated applications may still cause them to be stored. The Windows security model stores hashes in one of two places:

  • a local Security Accounts Manager (SAM) database
  • a networked Active Directory database, as applicable, namely the NTDS.DIT of a domain controller

LSA Secrets

Based on the supported authentication protocols and methods of the OS, LSA loads corresponding authentication packages which know how to handle a particular hash. And as mentioned, there may be various hashes which LSA stores in encrypted form in the local device registry for various usage scenarios and backward compatibility. These stored hashes and other security artifacts like TGTs are collectively referred to as LSA secrets.

There is normally a time-to-live element associated with a hash. Before a stored hash expires, LSA can and will reuse it upon an authentication request, instead of repeatedly prompting the user for credentials, hence providing an SSO experience to the user. Detailed descriptions of authentication artifacts and processes are beyond the scope of this article and available elsewhere.

Recently, SSO has become a crucial component and a baseline criterion for user experience and application adoption. And as malware becomes increasingly sophisticated, LSA secrets have become targets for stealthy identity theft and reuse attacks.

Pass-the-Hash (PtH) Attack

Once a user is authenticated, depending on the scenario, there are multiple forms of a hash, i.e. LSA secrets, stored in LSASS process memory on the user’s behalf. And there are tools, including PowerShell, to extract them. A password hash is stored for subsequent authentication needs, including storage access, network transmission, etc., for this user, rather than repeatedly prompting the user for the password again during a session. This is significant for implementing SSO, and at the same time for identity theft and reuse as well.

What makes a hash an interesting hacking target is that a stored hash represents an already authenticated user. Additionally, a hash can roam within a network for remote access. By passing a hash when accessing a remote resource, an authentication process can proceed without the user, whose password previously generated the hash, ever being aware of it. Such an attack can allow a hacker to traverse a network, infect the next node and look for higher-value accounts, and repeat the process to eventually acquire an account with domain administrator rights and ultimately own the associated Active Directory domain. Notice that accessing LSA secrets does require local administrator-level access.

Conceptually, a PtH attack first hacks a local administrator-level account for accessing LSA secrets, i.e. stored hashes, then impersonates a user by presenting the user’s hash when accessing a remote resource, without the need to involve the user at all. Notice that from an OS point of view there is no difference whether a hash is presented on behalf of a legitimate user or by a hacker via unauthorized access. This is why a PtH attack is effective, hard to detect and an imminent threat to an entire network.

Ultimately, unauthorized access to LSA secrets, including password hashes and TGTs, can lead to the so-called PtH and Pass-the-Ticket (PtT) attacks, respectively. Accessing LSA secrets is a privileged operation and requires local administrator rights. In other words, to conduct a PtH attack a hacker must first hack/phish an account with local administrator rights.

Assume Breach Security Model

So to conduct PtH or PtT, a hacker will need to first acquire local administrator access. It may seem that one logical and essential defense is to focus on protecting the credential of a local administrator account. This is a right approach, however not by itself an effective one. Statistics have shown that many users use the same password for accessing multiple sites, potentially both corporate and external ones. With a rootkit, which can be installed by a careless click on a phishing email, malware can potentially be loaded before OS code upon forcing a reboot and compromise a device at boot time. Anti-virus, encryption, real-time monitoring, etc. are all based on the presumption that the hardware and the loaded OS are trustworthy. This is apparently not necessarily always the case.

Today, with hundreds of thousands, yes, hundreds of thousands, of new malware samples every day and the alarming infection rates which we have learned from the past, in addition to Advanced Persistent Threats (APTs) being on the rise, ensuring a device and its OS maintain a pristine state from power on to off is critical. IT has quickly learned in the past few years, from some high-profile breaches, that getting hacked is a reality and highly likely once targeted. A new normal for IT has become: “You have been hacked, you just don’t know it yet.”

In other words, IT needs to essentially assume breach, and take a serious look at how to prevent PtH attacks in this setting. The first step has to be ensuring the hardware and the platform (i.e. OS runtime) are trustworthy.

First, Hardware Boot Integrity and OS Code Integrity

I put these two together since, from a user’s point of view, both appear as one start-up process once a device is powered on. By the time a user sees the welcome or logon screen and types in a user ID and a password, the OS has been loaded and the system processes are auto-started, delayed or stopped per the associated configurations. An important presumption for security measures to work after a device starts has always been that, collectively, the hardware (i.e. firmware) is loaded without being tampered with, the OS code is loaded without being altered, and all defined processes are set as designed. In other words, for all the security measures like antivirus, security policies, etc. engaged or imposed after a device is booted to be effective, the device must maintain both hardware and platform integrity, so the runtime environment is trustworthy and behaves predictably enough to monitor, identify and detect malware.

Windows 10 Device Guard employs UEFI 2.3.1 Secure Boot and a number of measures to ensure boot integrity and OS code integrity. This is to be detailed in an upcoming post and is beyond the scope of this article. Here, the point is that a meaningful discussion of protecting hash values and defending against PtH attacks must be based on a device with trusted firmware and OS code.

Protecting Derived Credentials (Hashes)

The introduction of Credential Guard in the Windows 10 Enterprise edition offers a new approach to increase the security of derived credentials, i.e. password hashes and Kerberos TGTs, with virtualization-based security to mitigate PtH and PtT attacks. There are various editions of Windows 10 for home, business and education users. Notice that Credential Guard is meant to better secure credentials in a managed and networked environment and does require the Windows 10 Enterprise or Education edition.

Windows 10 Credential Guard Architecture

Credential Guard uses Virtualization-Based Security (VBS) to isolate secrets so that only privileged system software can access them. The following illustrates a conceptual model. There are particular hardware and configuration requirements, as shown. Specifically, UEFI 2.3.1 or later is required and uses a different disk partition layout from that of BIOS. IT needs to plan the deployment of UEFI to those devices accordingly when employing Credential Guard.

image

Prior to Windows 10, all derived credentials are kept by LSA. The danger is that upon successfully compromising LSA, a hacker can get full access to LSA secrets. In Windows 10, an isolated LSA process, LSAIso, runs in a separate container serving as a VBS environment and protects the secrets. Windows 10 marks the first version of Windows to leverage hardware to create an area of high isolation. This makes it significantly harder for hackers to steal derived credentials. Let’s take a deeper look at VBS.

Enabling Credential Guard

On a Windows 10 Enterprise device with UEFI Secure Boot and hardware-assisted virtualization turned on in the boot configuration, Credential Guard is enabled through the policy settings, as shown below.

image

VBS requires Secure Boot, and can optionally be enabled with the use of DMA protection, as illustrated below. DMA protection requires hardware support and will only be enabled on correctly configured devices. The Code Integrity setting enables VBS to protect and ensure the integrity of Kernel Mode code.

image

With Windows 10 Version 1511, Credential Guard now has a UEFI lock setting. IT should test the setting against management scenarios to ensure supportability. Once Credential Guard is enabled, after a reboot LSAIso.exe should run and appear in Task Manager, as shown below.

image

image
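
A commonly documented alternative to the policy settings is to set the corresponding registry keys directly. The sketch below is an assumption to verify against current guidance for your Windows 10 version:

# Sketch: enable VBS and Credential Guard via registry keys (verify values against current Microsoft guidance).
$dg = "HKLM:\SYSTEM\CurrentControlSet\Control\DeviceGuard"
New-Item -Path $dg -Force | Out-Null
Set-ItemProperty -Path $dg -Name EnableVirtualizationBasedSecurity -Value 1   # turn on VBS
Set-ItemProperty -Path $dg -Name RequirePlatformSecurityFeatures -Value 1     # 1 = Secure Boot; 3 = Secure Boot + DMA protection

$lsa = "HKLM:\SYSTEM\CurrentControlSet\Control\Lsa"
Set-ItemProperty -Path $lsa -Name LsaCfgFlags -Value 1                         # 1 = enabled with UEFI lock; 2 = enabled without lock

# After a reboot, confirm Credential Guard is running.
Get-CimInstance -ClassName Win32_DeviceGuard -Namespace root\Microsoft\Windows\DeviceGuard |
    Select-Object SecurityServicesConfigured, SecurityServicesRunning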

Concept of Virtualization-Based Security

With Credential Guard, the local LSASS now works with LSAIso, a new and isolated LSA process that runs, stores and protects secrets in a virtualized, i.e. isolated, environment within the same device. Notice that LSAIso does not host any device drivers, only a small subset of OS binaries that are needed for security. All of these binaries are signed with a certificate that is trusted by VBS, and these signatures are validated before launching the file in the protected environment. These measures make Windows 10 virtualization-based security highly secure with a minimal attack surface.

Remote Procedure Call

When processing a hash, LSA issues an RPC to LSAIso. RPC, a program communication model introduced in 1984, offers a mechanism allowing a program on one device to make a procedure call to a program/procedure running in a remote environment. The calling (or caller) program is suspended after making the call and resumed upon receiving the results passed back from the called (or callee) program. Essentially, RPC offers a mechanism which enables a distributed application to appear as if the entire application resides locally. The employment of RPC simplifies the logical design and facilitates the implementation of a distributed application.

From an RPC point of view, LSA and LSAIso are two processes running in two separated environments, isolated from each other. When processing a hash, LSA performs an RPC call to LSAIso, and waits for LSAIso to pass back the results before continuing. Notice the secrets stored by LSAIso are protected with VBS and are not accessible to the rest of the operating system. In other words, “who” can call “whom” for “what” and “how” in this model are well defined. Credential Guard further enhances security by not allowing older variants of the NTLM and Kerberos authentication protocols and cipher suites when using default derived credentials, including NTLMv1, MS-CHAPv2, and weaker Kerberos encryption types such as DES.

Closing Thoughts

Credential Guard isolates and protects LSA secrets against PtH and PtT, which have become popular “credential theft and reuse” attacks. Like many other Windows 10 security features, Credential Guard does have specific hardware, software and configuration requirements. IT needs to start planning early, since the hardware and UEFI Secure Boot requirements may require a hardware upgrade, a disk layout change from BIOS to UEFI, an architecture change from x86 to x64, etc. The immediate task is to build inventories and assess business needs for Windows 10 security features, followed by planning a hardware refresh and rolling out UEFI sooner rather than later.

Be Strategic, While Facing a Growing Trend of BYOD

By itself, Credential Guard secures hashes, which is an important, yet just one, component for protecting identity and defending against malware. In Windows 10, a suite of security features is available, including Device Guard, Windows Hello and Microsoft Passport, which combined with an Enterprise Mobility Management (EMM) suite offers a new approach to device security from power on to off. Such that IT has the ability to manage not only on-premises devices, but also BYOD-based deployments via cloud. The key consideration is to make sure all management solutions are in convergence, and not to end up with one solution for on-premises, one for cloud, and one for mobile.

EMM is a key enabler for authenticating users and enforcing device policies via cloud. It is a vehicle to embrace BYOD with minimal changes needed to the existing on-premises infrastructure. For a sizeable company, some form of EMM solution is inevitable facing the rapid adoption of mobility and BYOD. IT leadership needs to develop a roadmap for adopting EMM sooner rather than later, and start transforming into a “mobile first, cloud first” operation model as Microsoft so passionately advocates.

Call to Action

  • Learn Windows 10: Microsoft Virtual Academy offers self-paced and free online courses on all Microsoft products, technologies and solutions, including Windows 10, security and “Windows as a Service,” as shown below.

image

Microsoft Azure Stack Technical Preview 1: Introduction & Feature Overview

This is something I had wanted to do for a while. I finally welcomed Charles Joy, a Principal Program Manager on the Azure Stack team, back to the show, and we discussed the recent release of Microsoft Azure Stack Technical Preview 1. It’s a fun episode. Enjoy it.

image

IT Pros’ Job Interview Cheat Sheet of Multi-Factor Authentication (MFA)

Internet Climate

Recently, as hacking has become a business model and identity theft an everyday phenomenon, there is increasing hostility on the Internet and escalating concern for PC and network security. No longer is a long and complex password sufficient to protect your assets. In addition to a strong password policy, adding MFA is now a baseline defense to better ensure the authenticity of an examined user and an effective vehicle to deter fraud.

Market Dynamics

Furthermore, the increase in online ecommerce transactions, the compliance needs of regulated verticals like financial and healthcare, the unique business requirements of market segments like the gaming industry, the popularity of smartphones, the adoption of cloud identity services with MFA technology, etc. all contribute to the growth of the MFA market. Some market research published in August of 2015 reported that “The global multi-factor authentication (MFA) market was valued at USD 3.60 Billion in 2014 and is expected to reach USD 9.60 Billion by 2020, at an estimated CAGR of 17.7% from 2015 to 2020.”

Strategic Move

While mobility becomes part of the essential business operating platform, a cloud-based authentication solution offers more flexibility and long-term benefits. This is apparent. The Street stated that

“Availability of cloud-based multi-factor authentication technology has reduced the maintenance costs typically associated with hardware and software-based two-factor and three-factor authentication models. Companies now prefer adopting cloud-based authentication solutions because the pay per use model is more cost effective, and they offer improved reliability and scalability, ease of installation and upgrades, and minimal maintenance costs. Vendors are introducing unified platforms that provide both hardware and software authentication solutions. These unified platforms are helping authentication vendors reduce costs since they need not maintain separate platforms and modules.”

Disincentives

Depending on where IT is and where IT wants to be, the initial investment may be consequential and significant. Adopting various technologies and cloud computing may be necessary, while facing resistance to change in corporate IT culture.

Snapshot

The following is not an exhaustive list, but some important facts, capabilities and considerations of Windows MFA.

mfa

Closing Thoughts

MFA helps ensure the authenticity of a user. MFA by itself nevertheless cannot stop identity theft, since there are various ways, like keyloggers, phishing, etc., to steal an identity. Still, as hacking has become a business model for an underground industry, and even a form of military offense, and credential theft has been developed into a hacking practice, it is not an option to operate without a strong authentication scheme. MFA remains arguably a direct and effective way to deter identity theft and fraud.

And the emerging trend of employing biometrics, instead of a password, with a key-based credential, leveraging hardware and virtualization-based security like Device Guard and Credential Guard in Windows 10, further minimizes the attack surface by ensuring hardware boot integrity and OS code integrity, and allowing only trusted system applications to request a credential. Device Guard and Credential Guard together offer a new standard for preventing PtH, which is one of the most popular types of credential theft and reuse attacks seen by Microsoft so far.

Above all, going forward we must not consider MFA an afterthought or add-on, but an immediate and imperative need for a PC security solution. IT needs to implement MFA sooner rather than later, if not already.