An Introduction to Windows 10 Credential Guard

Windows 10 Enterprise introduces a set of new security features, including Credential Guard, which is key to securing derived credentials and defending against ‘credential theft and reuse’ attacks such as Pass-the-Hash (PtH) and Pass-the-Ticket (PtT). This article provides technical background and highlights how Credential Guard works. A good reference titled “Protect derived domain credentials with Credential Guard” is also available in the Microsoft TechNet Library. Note that Credential Guard is technically part of Device Guard, which will be detailed in an upcoming article.

LSASS, a Known Secret Service in Windows

Credential Guard secures the data kept by the Local Security Authority Subsystem Service (LSASS), a privileged process in Windows responsible for:

  • Validating users for local and remote sign-ins
  • Enforcing local security policies

On an Active Directory domain controller, LSASS is also responsible for Active Directory database lookups, authentication, and replication. The following screenshots illustrate an instance of Windows 10 running OS Build 10586, with Task Manager reporting the status of LSASS. Hereafter, unless stated otherwise, LSA and LSASS are used interchangeably.

image

image

image

The above shows Task Manager reporting lsass.exe running on a Windows 10 Enterprise desktop prior to enabling Credential Guard.

The Secret Service’s Secrets

The secrets this service protects for authenticating requests on behalf of users are password hashes and Kerberos Ticket Granting Tickets (TGTs). This article focuses on hashes, though much of the information applies to TGTs as well.

Hash

The term hash (or password hash, hash value) in the context of computer security refers to a derived credential, namely a cryptographically derived form of a user’s plaintext password. During a session, when a user initially accesses a protected resource, the resource first tries to authenticate the user; upon successful authentication, the process then authorizes the request based on the access rights granted to the user and to the security groups the user is a member of for that resource.

In the Windows security model, a password is never stored in plain text. Instead it first goes through a hash function, and the resulting hash value is what LSA stores for reuse in subsequent authentication requests; a minimal illustration of this one-way derivation follows the list below. A password hash can be stored in one of four forms: LAN Manager (LM), NT, AES key, or Digest. Although recent versions of Windows no longer store the weak LM hash by default, for backward compatibility some outdated applications may still cause it to be stored. The Windows security model stores hashes in one of two places:

  • a local Security Accounts Manager (SAM) database
  • a networked Active Directory database, as applicable, namely the NTDS.DIT of a domain controller
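
As promised above, here is a minimal PowerShell sketch of the one-way derivation that makes a hash a “derived” credential. Windows’ actual NT hash is MD4 over the UTF-16LE-encoded password, which .NET does not expose directly, so SHA-256 stands in here purely to illustrate the concept; the plaintext value is hypothetical.

    # Illustrative only: shows the one-way nature of a derived credential.
    # The real NT hash is MD4 over the UTF-16LE password; SHA-256 is a stand-in here.
    $password  = 'P@ssw0rd!'                                   # hypothetical plaintext
    $bytes     = [System.Text.Encoding]::Unicode.GetBytes($password)
    $sha256    = [System.Security.Cryptography.SHA256]::Create()
    $hashBytes = $sha256.ComputeHash($bytes)
    $hash      = ($hashBytes | ForEach-Object { $_.ToString('x2') }) -join ''

    # The hash, not the plaintext, is what an authority would store and compare.
    "Plaintext : $password"
    "Derived   : $hash"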

LSA Secrets

Based on the authentication protocols and methods the OS supports, LSA loads the corresponding authentication packages, which know how to produce and consume a particular hash format. As mentioned, LSA may store various hashes, in encrypted form, in the local device registry for various usage scenarios and backward compatibility. These stored hashes and other security artifacts such as TGTs are collectively referred to as LSA secrets.

There is normally a time-to-live associated with a hash. Before a stored hash expires, LSA can and will reuse it to satisfy an authentication request instead of repeatedly prompting the user for credentials, hence providing a single sign-on (SSO) experience to the user. A detailed description of authentication artifacts and processes is beyond the scope of this article and available elsewhere.

SSO has become a baseline criterion for user experience and application adoption. And as malware becomes increasingly sophisticated, LSA secrets have become targets of stealthy identity theft and reuse attacks.

Pass-the-Hash (PtH) Attack

Once a user is authenticated, depending on the scenario, multiple forms of a hash, i.e. LSA secrets, are stored in LSASS process memory on the user’s behalf, and there are tools, including PowerShell-based ones, to extract them. A password hash is stored so subsequent authentication needs such as storage access and network transmission can be satisfied without repeatedly prompting the user for the password during a session. This is significant for implementing SSO, and at the same time for identity theft and reuse as well.

What makes a hash an interesting hacking target is that a stored hash represents an already authenticated user. Additionally, a hash can roam within a network for remote access. By passing a hash when accessing a remote resource, an authentication process can proceed without the awareness of the user whose password originally generated the hash. This allows an attacker to traverse a network, infect the next node, and look for higher-value accounts, repeating the process until acquiring an account with domain administrator rights and ultimately owning the associated Active Directory domain. Notice that accessing LSA secrets does require local administrator-level access.

Conceptually, a PtH attack first compromises a local administrator-level account to access LSA secrets, i.e. stored hashes, and then impersonates a user by presenting the user’s hash to access a remote resource without involving the user at all. Notice that from the OS point of view there is no difference whether a hash is presented on behalf of a legitimate user or by an attacker via unauthorized access. This is why a PtH attack is effective, hard to detect, and a serious threat to an entire network.

Ultimately, unauthorized access to LSA secrets, including password hashes and TGTs, can lead to the so-called PtH and Pass-the-Ticket (PtT) attacks, respectively. Accessing LSA secrets is a privileged operation and requires local administrator rights; in other words, to conduct a PtH attack a hacker must first hack or phish an account with local administrator rights.

Assume Breach Security Model

To conduct PtH or PtT, a hacker first needs local administrator access, so one logical and essential defense is to focus on protecting the credentials of local administrator accounts. That is a right approach, but by itself not an effective one. Statistics show that many users reuse the same password across multiple sites, potentially both corporate and external. With a rootkit, which can be installed by one careless click on a phishing email, malware can be loaded before the OS code after forcing a reboot and compromise a device at boot time. Antivirus, encryption, real-time monitoring, and the like are all based on the presumption that the hardware and the loaded OS are trustworthy, which is not necessarily always the case.

Today, with hundreds of thousands, yes, hundreds of thousands, of new malware samples every day, the alarming infection rates we have learned from the past, and Advanced Persistent Threats (APTs) on the rise, ensuring that a device and its OS maintain a pristine state from power-on to power-off is critical. IT has learned from several high-profile breaches in the past few years that getting hacked is a reality and highly likely once targeted. A new normal for IT has become “You have been hacked, you just don’t know it yet.”

In other words, IT needs to assume breach and take a serious look at how to prevent PtH attacks in this setting. The first step has to be ensuring the hardware and the platform (i.e. OS runtime) are trustworthy.

First, Hardware Boot Integrity and OS Code Integrity

I put these two together since, from a user’s point of view, both appear as one start-up process when powering on a device. By the time a user sees the welcome or logon screen and types in a user ID and password, the OS has been loaded and the system processes have been auto-started, delayed, or stopped per their configurations. An important presumption for security measures to work after a device starts has always been that, collectively, the hardware (i.e. firmware) is loaded without being tampered with, the OS code is loaded without being altered, and all defined processes are set up as designed. In other words, for all the security measures like antivirus and security policies engaged or imposed after a device is booted to be effective, the device must maintain both hardware and platform integrity, so that the runtime environment is trustworthy and behaves predictably enough to monitor, identify, and detect malware.

Windows 10 Device Guard employs UEFI 2.3.1 Secure Boot and a number of measures to ensure boot integrity and OS code integrity. These are to be detailed in an upcoming post and are beyond the scope of this article. Here, the point is that a meaningful discussion of protecting hash values and defending against PtH attacks must be based on a device with trusted firmware and OS code.
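
Before going further, it is worth being able to confirm that a device actually meets this precondition. The following is a minimal sketch using the built-in Confirm-SecureBootUEFI cmdlet; it must run from an elevated PowerShell prompt, and the cmdlet throws on legacy BIOS machines.

    # A minimal sketch: confirm the device booted via UEFI with Secure Boot enabled
    # before relying on boot-integrity guarantees. Run elevated.
    try {
        if (Confirm-SecureBootUEFI) {
            'Secure Boot is enabled on this UEFI device.'
        }
        else {
            'UEFI firmware detected, but Secure Boot is currently disabled.'
        }
    }
    catch {
        # Confirm-SecureBootUEFI throws on legacy BIOS devices and when not elevated.
        "Unable to query Secure Boot state: $($_.Exception.Message)"
    }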

Protecting Derived Credentials (Hashes)

image

The introduction of Credential Guard in the Windows 10 Enterprise edition offers a new approach to increasing the security of derived credentials, i.e. password hashes and Kerberos TGTs, using virtualization-based security to mitigate PtH and PtT attacks. There are various Windows 10 editions for home, business, and education users; notice that Credential Guard is intended to better secure credentials in a managed, networked environment and requires the Windows 10 Enterprise or Education edition.

Windows 10 Credential Guard Architecture

Credential Guard uses Virtualization-Based Security (VBS) to isolate secrets so that only privileged system software can access them. The following illustrates a conceptual model. There are particular hardware and configuration requirements, as shown. Specifically, UEFI 2.3.1 or later is required, and UEFI uses a different disk partition layout than BIOS. IT needs to plan the deployment of UEFI to the affected devices accordingly before employing Credential Guard.

image

Prior to Windows 10, all derived credentials were kept by LSA. The danger is that upon successfully compromising LSA, a hacker gets full access to LSA secrets. In Windows 10 with Credential Guard enabled, the secrets are moved into a separate, VBS-protected environment managed by an isolated LSA process, LSAIso. Windows 10 marks the first version of Windows to leverage hardware in this way to create an area of high isolation, which makes it dramatically harder for hackers to steal derived credentials. Let’s take a deeper look at VBS.

Enabling Credential Guard

On a Windows 10 Enterprise device with UEFI Secure Boot and hardware-assisted virtualization turned on in the firmware configuration, Credential Guard is enabled through the Device Guard Group Policy settings, as shown below.

image

VBS requires Secure Boot and can optionally be enabled with DMA protection, as illustrated below. DMA protection requires hardware support and will only be enabled on correctly configured devices. The Code Integrity setting enables VBS to protect and ensure the integrity of kernel-mode code.

image
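
For lab machines, or environments without Group Policy, the same configuration can also be applied directly through the documented Device Guard and LSA registry values. The following is a hedged sketch; the exact values (for example, whether to use the UEFI-lock variant of LsaCfgFlags) should be validated against Microsoft’s guidance for your specific Windows 10 build before broad deployment.

    # A hedged sketch: enable VBS and Credential Guard via the documented registry values.
    # Run elevated; a reboot is required afterwards.
    $dgKey  = 'HKLM:\SYSTEM\CurrentControlSet\Control\DeviceGuard'
    $lsaKey = 'HKLM:\SYSTEM\CurrentControlSet\Control\Lsa'

    if (-not (Test-Path $dgKey)) { New-Item -Path $dgKey | Out-Null }

    # Turn on virtualization-based security; require Secure Boot
    # (1 = Secure Boot only, 3 = Secure Boot plus DMA protection where supported).
    Set-ItemProperty -Path $dgKey -Name 'EnableVirtualizationBasedSecurity' -Type DWord -Value 1
    Set-ItemProperty -Path $dgKey -Name 'RequirePlatformSecurityFeatures'   -Type DWord -Value 1

    # Enable Credential Guard (1 = enabled with UEFI lock, 2 = enabled without lock).
    Set-ItemProperty -Path $lsaKey -Name 'LsaCfgFlags' -Type DWord -Value 1

    # Reboot for the changes to take effect.
    Restart-Computer -Confirm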

With Windows 10 Version 1511, Credential Guard also offers a UEFI lock setting, which stores the configuration in a UEFI variable so it cannot be disabled remotely; IT should test this setting against its management scenarios to ensure supportability. Once Credential Guard is enabled, LsaIso.exe should be running after a reboot and appear in Task Manager, as shown below.

image

image
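
Beyond eyeballing Task Manager, the running state can be verified programmatically. The sketch below relies on the Win32_DeviceGuard WMI class and the value meanings from the Device Guard and Credential Guard readiness documentation; treat the exact interpretation as something to confirm for your build.

    # A minimal verification sketch: check whether VBS and Credential Guard are running.
    $dg = Get-CimInstance -Namespace 'root\Microsoft\Windows\DeviceGuard' `
                          -ClassName Win32_DeviceGuard

    # Per the readiness documentation, SecurityServicesRunning contains
    # 1 when Credential Guard is running and 2 when HVCI is running.
    if ($dg.SecurityServicesRunning -contains 1) {
        'Credential Guard is running.'
    }
    else {
        'Credential Guard is NOT running on this device.'
    }

    # The isolated LSA process should also be present once Credential Guard is active.
    Get-Process -Name 'lsaiso' -ErrorAction SilentlyContinue | Select-Object Name, Id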

Concept of Virtualization-Based Security

With Credential Guard, the local LSASS now works with LSAIso, a new, isolated LSA process that runs in, and stores and protects secrets within, a virtualized, i.e. isolated, environment on the same device. Notice that the isolated environment does not host any device drivers, only a small subset of OS binaries needed for security. All of these binaries are signed with a certificate trusted by VBS, and the signatures are validated before a file is launched in the protected environment. These measures give Windows 10 virtualization-based security a minimal attack surface.

Remote Procedure Call

When processing a hash, LSA issues an RPC call to LSAIso. RPC, a program communication model introduced in 1984, is a mechanism allowing a program on one device to make a procedure call to a program or procedure running in a remote environment. The calling (caller) program is suspended after making the call and resumes upon receiving the results passed back by the called (callee) program. Essentially, RPC enables a distributed application to appear as if the entire application resides locally, which simplifies the logical design and facilitates the implementation of a distributed application.
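
As a purely conceptual illustration, and not how LSA and LSAIso actually communicate, the following PowerShell sketch mimics that suspend-and-resume pattern: the caller hands a hypothetical request to an isolated worker and blocks until the result comes back.

    # Conceptual only: the caller delegates work to an isolated worker and waits.
    $job = Start-Job -ScriptBlock {
        param($request)
        Start-Sleep -Seconds 1                 # stand-in for the isolated processing
        "result-for:$request"
    } -ArgumentList 'hypothetical-auth-request'

    # The caller is effectively suspended until the callee passes back its result.
    $result = Receive-Job -Job $job -Wait -AutoRemoveJob
    "Caller resumed with: $result"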

From an RPC point of view, LSA and LSAIso are two processes running in two separate, mutually isolated environments. When processing a hash, LSA performs an RPC call to LSAIso and waits for LSAIso to pass back the results before continuing. Notice that the secrets stored by LSAIso are protected with VBS and are not accessible to the rest of the operating system. In other words, “who” can call “whom” for “what” and “how” are well defined in this model. Credential Guard further enhances security by not allowing older variants of the NTLM and Kerberos authentication protocols and cipher suites when using default derived credentials, including NTLMv1, MS-CHAPv2, and weaker Kerberos encryption types such as DES.

Closing Thoughts

Credential Guard isolates and protects LSA secrets against PtH and PtT, which have become popular “credential theft and reuse” attacks. Like many other Windows 10 security features, Credential Guard has specific hardware, software, and configuration requirements. IT needs to start planning early, since the hardware and UEFI Secure Boot requirements may mean a hardware upgrade, a disk layout change from BIOS to UEFI, an architecture change from x86 to x64, and so on. The immediate task is to build inventories and assess business needs for Windows 10 security features, followed by planning a hardware refresh and rolling out UEFI sooner rather than later.

Be Strategic, While Facing a Growing Trend of BYOD

By itself, Credential Guard secures hashes, which is an important, yet just one, component of protecting identity and defending against malware. Windows 10 provides a suite of security features including Device Guard, Windows Hello, and Microsoft Passport which, combined with an Enterprise Mobility Management (EMM) suite, offer a new approach to device security from power-on to power-off. This gives IT the ability to manage not only on-premises devices, but also BYOD-based deployments via the cloud. The key consideration is to make sure all management solutions converge, rather than ending up with one solution for on-premises, one for cloud, and one for mobile.

EMM is a key enabler for authenticating users and enforcing device policies via the cloud, and a vehicle for embracing BYOD with minimal changes to existing on-premises infrastructure. For a sizeable company, some form of EMM solution is inevitable given the rapid adoption of mobility and BYOD. IT leadership needs to develop a roadmap for adopting EMM sooner rather than later and start transforming into the “mobile first, cloud first” operating model that Microsoft so passionately advocates.

Call to Action

  • Learn Windows 10 with Microsoft Virtual Academy, which offers free, self-paced online courses on Microsoft products, technologies, and solutions including Windows 10, security, and “Windows as a Service,” as shown below.

image

Microsoft Azure Stack Technical Preview 1: Introduction & Feature Overview

This is something I had wanted to do for a while. I finally welcomed Charles Joy, a Principal Program Manager on the Azure Stack team, back to the show, and we discussed the recent release of Microsoft Azure Stack Technical Preview 1. It’s a fun episode. Enjoy it.

image

IT Pros’ Job Interview Cheat Sheet of Multi-Factor Authentication (MFA)

Internet Climate

Recently, as hacking has become a business model and identity theft an everyday phenomenon, there is increasing hostility on the Internet and escalating concern for PC and network security. A long, complex password is no longer sufficient to protect your assets. In addition to a strong password policy, adding MFA is now a baseline defense to better ensure the authenticity of a user and an effective vehicle to deter fraud.

Market Dynamics

Furthermore, growing online e-commerce transactions, the compliance needs of regulated verticals such as finance and healthcare, the unique business requirements of market segments like the gaming industry, the popularity of smartphones, and the adoption of cloud identity services with MFA technology all contribute to the growth of the MFA market. Market research published in August 2015 reported that “The global multi-factor authentication (MFA) market was valued at USD 3.60 Billion in 2014 and is expected to reach USD 9.60 Billion by 2020, at an estimated CAGR of 17.7% from 2015 to 2020.”

Strategic Move

As mobility becomes part of the essential business operating platform, a cloud-based authentication solution offers more flexibility and long-term benefits. This is apparent, as the research further stated that

“Availability of cloud-based multi-factor authentication technology has reduced the maintenance costs typically associated with hardware and software-based two-factor and three-factor authentication models. Companies now prefer adopting cloud-based authentication solutions because the pay per use model is more cost effective, and they offer improved reliability and scalability, ease of installation and upgrades, and minimal maintenance costs. Vendors are introducing unified platforms that provide both hardware and software authentication solutions. These unified platforms are helping authentication vendors reduce costs since they need not maintain separate platforms and modules.”

Disincentives

Depending on where IT is and where it wants to be, the initial investment may be consequential and significant. Adopting various technologies and cloud computing may be necessary, while facing resistance to change in corporate IT culture.

Snapshot

The following is not an exhaustive list, but it highlights some important facts, capabilities, and considerations of Windows MFA.

mfa

Closing Thoughts

MFA helps ensure the authenticity of a user. By itself, nevertheless, MFA cannot stop identity theft, since there are various ways, such as key loggers and phishing, to steal an identity. Still, as hacking has become a business model for an underground industry, and even a military weapon, and credential theft has developed into a standard hacking practice, operating without a strong authentication scheme is not an option. MFA remains arguably a direct and effective way to deter identity theft and fraud.

The emerging trend of employing biometrics instead of a password, together with key-based credentials leveraging hardware and virtualization-based security such as Device Guard and Credential Guard in Windows 10, further minimizes the attack surface by ensuring hardware boot integrity and OS code integrity and by allowing only trusted system applications to request a credential. Device Guard and Credential Guard together set a new standard for preventing PtH, one of the most popular types of credential theft and reuse attacks Microsoft has seen so far.

Above all, going forward we must not treat MFA as an afterthought or add-on, but as an immediate and imperative part of a PC security solution. IT needs to implement MFA sooner rather than later, if it has not already.

My Presentation at The Univ. of Texas at Arlington

It was a great pleasure to have an opportunity to meet the wonderful and vibrant student community and speak about cloud computing at UTA on October 8, 2015. I focused on making the point of why cloud and why now, demonstrated by constructing computing fabric and deploying applications on demand.



Try It Yourself – Configure a Point-to-Site VPN Connection to a Virtual Network (3-Part video Series)

This connection is very easy to understand and implement. Point-to-Site (or P2S) here refers to a connection between a single device (namely a connection point) and an Azure virtual network (vnet) site.

A P2S connection requires a P2S address range (subnet) defined within the target Azure vnet site. Seen from the connected Azure vnet site, a connecting device is automatically allocated an IP address within the defined P2S subnet and connects to the site via a VPN connection.

Technically, a P2S connection is specific not to the physical device but to the logical device, i.e. the OS instance running on a connecting physical device, since the connection is based on a private/public key pair generated within the OS. At this time, Azure P2S supports only self-signed certificates: the X.509 certificate (i.e. the public key) of an employed key pair resides in the target Azure vnet site, while a PFX-format certificate (i.e. a certificate exported with its private key) should be installed on the connecting device. An administrator can configure an Azure P2S connection by:

  1. First enabling P2S connectivity and defining a P2S subnet associated with a target Azure vnet site
  2. Generating an x.509/PFX certificate pair
  3. Uploading the x.509 certificate to the site
  4. Distributing to and installing the PFX certificate on intended (logical) devices
  5. Initiating a connection from a logical device

One X.509/PFX certificate pair is sufficient to establish P2S connections between an Azure vnet site and multiple devices, by uploading the X.509 certificate to the target Azure vnet site and installing the associated PFX file on all connecting devices. The best practice, however, is to employ a unique certificate pair for each connecting device to better secure the P2S environment. A hedged sketch of generating such a certificate pair follows.
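
As an illustration of step 2 in the list above, the following sketch generates a self-signed certificate on a Windows 10 machine and exports both the public .cer (to upload to the target Azure vnet site) and the .pfx containing the private key (to install on a connecting device). The subject name, paths, and password handling are hypothetical, and the exact certificate requirements should be confirmed against the current Azure P2S documentation.

    # A hedged sketch of step 2: create a self-signed certificate, then export the
    # public .cer (for the Azure vnet site) and the .pfx with the private key
    # (for the connecting device). Names and paths are hypothetical.
    New-Item -ItemType Directory -Path 'C:\temp' -Force | Out-Null

    $cert = New-SelfSignedCertificate -Subject 'CN=P2SDemoClient' `
        -CertStoreLocation 'Cert:\CurrentUser\My' `
        -KeyExportPolicy Exportable -KeyLength 2048

    # Public key only: this is what gets uploaded to the target Azure vnet site.
    Export-Certificate -Cert $cert -FilePath 'C:\temp\P2SDemoClient.cer' | Out-Null

    # Private key included: install this PFX on each intended (logical) device.
    $pfxPassword = Read-Host -AsSecureString -Prompt 'PFX password'
    Export-PfxCertificate -Cert $cert -FilePath 'C:\temp\P2SDemoClient.pfx' `
        -Password $pfxPassword | Out-Null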

Here are the Azure documentation page and complementary videos that walk through the processes and operations to:

  1. Create a virtual network and a VPN gateway (video)
  2. Create your certificates (video)
  3. Configure your VPN client (video)

A Memorandum to IT Pros on Imperative vs. Declarative Scripting Models

One noticeable difference between Azure Infrastructure Services (IaaS) V2 and Azure IaaS V1 (or classic Azure IaaS, as I call it) is the employment of Azure “Resource Group” templates. A resource group is not only a newly introduced artifact in Azure but also denotes a fundamental shift in automating, deploying, and managing IT resources. This change signifies the arrival of a declarative programming/scripting model, for the better. I will walk through an application deployment with Azure resource group templates in an upcoming post. In this memo, the focus is on distinguishing these two programming/scripting models.

Imperative vs. Declarative

Traditionally, within a logical unit of work (or simply a transaction), the conventional wisdom is to define how to implement business logic by programmatically referencing parameter values, verifying dependencies, examining variables at runtime, and stepping through a predefined data flow accordingly. This is the so-called imperative programming model, which uses assignments, conditions/branching, and looping statements to serialize the operations that establish the state of a program at runtime, i.e. an instance. An imperative programming model describes “how” to reach “what”; a vivid example is that the C-family programming languages are based on an imperative model. An imperative model like the following pseudocode specifies the steps (i.e. how) to ensure the operability of attaching a database to a SQL server (in other words, what) by ensuring the SQL server is up and running, i.e. ready, before attaching the intended database. The implementation logic repeats a routine of waiting for a specified period of time and checking the status of the target resource until the resource is ready for the intended operation.

Wait 30 seconds and check the SQL server status again, till it is up and running

Then attach the database
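
Expressed as a hedged PowerShell sketch, the same imperative logic might look like the following; the service name, instance, and file path are hypothetical, and Invoke-Sqlcmd assumes the SQL Server PowerShell module is installed.

    # Imperative: the script spells out HOW to reach the goal, step by step.
    while ((Get-Service -Name 'MSSQLSERVER').Status -ne 'Running') {
        Start-Sleep -Seconds 30          # wait, then check the SQL Server status again
    }

    # Only once the server is confirmed running is the database attached.
    Invoke-Sqlcmd -ServerInstance 'localhost' -Query @"
    CREATE DATABASE [DemoDb]
        ON (FILENAME = 'C:\Data\DemoDb.mdf')
        FOR ATTACH;
    "@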

A declarative programming model, on the other hand, describes business logic in terms of ‘what it is and not how to do it.’ For instance, rather than programming a loop to periodically check whether a target SQL server is up and running, as the imperative example above does, a declarative model simply states the dependency on the target SQL server, i.e. what state must be met before the intended database is attached, and lets the system (here I use “the system” as an umbrella term for the other components) implement how to enforce this prerequisite. The following illustrates a declarative approach.

This database has a dependency of the hosting SQL server

The above states the dependency, i.e. what it is, and delegates the implementation to be carried out later.
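
To make the contrast concrete, here is a hedged sketch of the same idea using PowerShell Desired State Configuration, a declarative model built into Windows PowerShell. The resource names and the flag-file test are illustrative assumptions; the point is the DependsOn statement, which declares the prerequisite and leaves the enforcement to the DSC engine.

    # Declarative: state WHAT must be true (SQL service running, database attached)
    # and the dependency between them; the DSC engine works out how to enforce it.
    Configuration DemoDbDeployment
    {
        Import-DscResource -ModuleName PSDesiredStateConfiguration

        Node 'localhost'
        {
            Service SqlEngine
            {
                Name  = 'MSSQLSERVER'
                State = 'Running'
            }

            Script AttachDemoDb
            {
                TestScript = { Test-Path 'C:\Data\DemoDb.attached.flag' }
                SetScript  = {
                    # Attach the database here (e.g. via Invoke-Sqlcmd), then drop a flag file.
                    New-Item 'C:\Data\DemoDb.attached.flag' -ItemType File -Force | Out-Null
                }
                GetScript  = { @{ Result = 'DemoDb attachment state' } }

                # The declarative part: this resource depends on the SQL engine being running.
                DependsOn  = '[Service]SqlEngine'
            }
        }
    }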

What vs. How

Notice that an imperative model specifies both the what and the how of a deployment, while a declarative model implies a logical separation: it focuses on the what and leaves the how for later. In layman’s terms, imperative vs. declarative is simply an approach of how vs. what, respectively.

Why Declarative

For simple operations, one may not be advantageous over the other. For a large number of operations, or for tasks with high concurrency and noticeable complexity, the orchestration can be too overwhelming to implement productively with an imperative model. This is increasingly what IT pros face in a cloud setting, where operations are intermittent, concurrent, and carried out on hundreds or thousands of application instances with inter- and intra-dependencies at both the application layer and the system level.

A declarative model states what the target state is, and the system will make it so, i.e. enforce it as stated. Employing a declarative model fundamentally simplifies how an administrator carries out application deployment and automation, with increased consistency, persistence, and predictability.

As IT transitions into cloud computing, the number of VMs will continue to increase while the deployment environment likely becomes hybrid and complex; adopting a declarative programming model is, in my view, critical and inevitable.

Essentially, IT has become a highly integrated and increasingly complex environment, which is particularly true in an emerging IT model where cloud computing is combined with hybrid deployment scenarios. Programmatically describing how to establish a state at runtime can quickly overwhelm the programming logic and make an implementation based on an imperative model very costly to develop and maintain. Shifting to a declarative programming model is strategic and is becoming “imperative” for IT.

Call to Action

Recognizing the opportunity presented, IT pros should make this shift from imperative to declarative scripting models sooner rather than later. Employ a declarative model as a vehicle to improve the capabilities and productivity of application deployments, and to facilitate and maximize the ROI of transitioning to the cloud in an IT organization. To get started, there is already abundant information on Azure IaaS V2 available, including:

In addition, those who are new to Azure IaaS may find the following resources helpful:

And for those who would like to review cloud computing concepts, I recommend: