Azure Network Topology Document Extracts and Notes

Azure Network Topology

  • Two core approaches: traditional and Azure Virtual WAN
  • The linked document includes a topology diagram for each model.

Feature comparison: Traditional Azure Network Topology vs. Azure Virtual WAN Network Topology

Highlights
  • Traditional: Customer-managed routing and security. An Azure subscription can create up to 50 VNets across all regions. VNet peering links two VNets, either in the same region or in different regions, and enables you to route traffic between them using private IP addresses (peering carries a nominal charge). Inbound and outbound traffic is charged at both ends of the peered networks, and network appliances such as VPN Gateway and Application Gateway that run inside a virtual network are also charged. See Azure Virtual Network Pricing. (A peering sketch follows this comparison.)
  • Virtual WAN: A Microsoft-managed networking service providing optimized and automated branch-to-branch connectivity through Azure. Virtual WAN allows customers to connect branches to each other and to Azure, centralizing their network and security needs with virtual appliances such as firewalls and with Azure network and security services. See Azure Virtual WAN Pricing.

Deployment
  • Traditional: Customized deployment with routing and security managed by the customer. See the Virtual Network documentation, Plan virtual networks, and Tutorial: Filter network traffic with a network security group using the Azure portal.
  • Virtual WAN: Microsoft-managed service. See the Virtual WAN documentation and Tutorial: Create an ExpressRoute association to Virtual WAN – Azure portal (other tutorials cover site-to-site and point-to-site connections).

Interconnectivity
  • Traditional: Traffic between two virtual networks across two different Azure regions is expected; a full mesh network across all Azure regions is not required.
  • Virtual WAN: Global connectivity between VNets in these Azure regions and multiple on-premises locations.

IPsec Tunnels
  • Traditional: Fewer than 30 IPsec site-to-site tunnels are needed.
  • Virtual WAN: More than 30 branch sites for native IPsec termination.

Routing Policy
  • Traditional: Full control and granularity for manually configuring your Azure network routing policy.
  • Virtual WAN: Not applicable.

Data Collection
  • Traditional and Virtual WAN: Collects data from servers and Kubernetes clusters.

Data Storage
  • Traditional and Virtual WAN: Stores data in a Log Analytics workspace or the customer’s own storage account.

Data Analysis and Visualization
  • Traditional: Uses Log Analytics for analysis and visualization of collected data.
  • Virtual WAN: Uses Azure Monitor for analysis and visualization of collected data.
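
As a concrete illustration of the traditional model, the following is a minimal sketch, using the Azure CLI driven from Python, of peering two VNets so they can exchange traffic over private IP addresses. The resource group, VNet names, and address spaces are hypothetical placeholders, and the CLI arguments should be confirmed against the current az network vnet peering documentation.

```python
# Minimal sketch: peer two virtual networks with the Azure CLI from Python.
# Assumes the Azure CLI is installed and "az login" has already been run.
# Resource group and VNet names below are hypothetical placeholders.
import subprocess

RG = "rg-network-demo"

def az(*args: str) -> None:
    """Run an az CLI command and fail loudly if it returns a non-zero code."""
    subprocess.run(["az", *args], check=True)

# Two VNets with non-overlapping address spaces (peering requires this).
az("network", "vnet", "create", "--resource-group", RG,
   "--name", "vnet-hub", "--address-prefix", "10.0.0.0/16")
az("network", "vnet", "create", "--resource-group", RG,
   "--name", "vnet-spoke", "--address-prefix", "10.1.0.0/16")

# Peering is created in both directions; each direction is billed separately.
az("network", "vnet", "peering", "create", "--resource-group", RG,
   "--name", "hub-to-spoke", "--vnet-name", "vnet-hub",
   "--remote-vnet", "vnet-spoke", "--allow-vnet-access")
az("network", "vnet", "peering", "create", "--resource-group", RG,
   "--name", "spoke-to-hub", "--vnet-name", "vnet-spoke",
   "--remote-vnet", "vnet-hub", "--allow-vnet-access")
```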

Additional Information

Azure Landing Zone Document Extracts and Notes

What is an Azure landing zone? – Cloud Adoption Framework

Landing zone implementation options – Cloud Adoption Framework

“A migration landing zone is an environment that’s been provisioned and prepared to host certain workloads. These workloads are being migrated from an on-premises environment into Azure.”

Deploy a CAF Foundation blueprint in Azure

“The CAF Foundation blueprint does not deploy a landing zone. Instead, it deploys the tools required to establish a governance MVP (minimum viable product) to begin developing your governance disciplines. This blueprint is designed to be additive to an existing landing zone and can be applied to the CAF Migration landing zone blueprint with a single action.”

Get help building a landing zone – Cloud Adoption Framework

“Getting your Azure landing zone (ALZ) done right and on time is important. Working with a certified Azure partner is a great way to get the support you need to build your ALZ.”

  • Option 1 – use Azure Migrate and Modernize.
  • Option 2 – find a partner offer for a landing zone in our marketplace.

Azure landing zone FAQ

Why Azure Arc

For IT decision makers, here’s why it’s pertinent to consider Azure Arc:

  • A centralized, unified, and integrated management and governance solution that provides streamlined control and oversight.
  • Securely extending your on-premises and non-Azure resources into Azure Resource Manager (ARM), empowering you to:
    • Define, deploy, and manage resources in a declarative fashion using JSON templates for dependencies, configuration settings, policies, etc.
    • Manage Azure Arc-enabled servers, Kubernetes clusters, and databases as if they were running in Azure, with a consistent user experience.
    • Harness your existing Windows and Azure sysadmin skills honed through on-premises deployments.
  • Once servers are connected to Azure Arc, you can perform many operational functions just as you would with native Azure VMs, including these key supported actions:
    • Govern
    • Protect
      • Secure non-Azure servers with Microsoft Defender for Endpoint, included through Microsoft Defender for Cloud, for threat detection, vulnerability management, and proactive monitoring for potential security threats. Microsoft Defender for Cloud presents the alerts and remediation suggestions for the detected threats.
    • Configure
    • Monitor
      • Keep an eye on the OS, processes, and dependencies, along with other resources, using VM insights. Additionally, collect, store, and analyze OS and workload logs, performance data, and events, which can be ingested into Microsoft Sentinel for real-time analysis, threat detection, and proactive security measures across the entire IT environment.
October 10, 2023 is the date support for Windows Server 2012 and 2012 R2 ended.

Extended Security Updates (ESUs) can be enabled through Azure Arc. IT can seamlessly deploy ESUs through Azure Arc to on-premises or multicloud environments, right from the Azure portal. In addition to providing centralized management of security patching, ESUs enabled by Azure Arc are flexible, with a pay-as-you-go subscription model, compared with the classic ESUs offered through the Volume Licensing Center, which are purchased in yearly increments.

To test it out, follow Quickstart – Connect hybrid machine with Azure Arc-enabled servers.
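
For orientation, here is a minimal sketch of what the quickstart boils down to on the target machine: with the Connected Machine agent installed, run azcmagent connect. It is driven from Python only for consistency with the other examples in these notes; the subscription, tenant, resource group, and location values are hypothetical placeholders, and the quickstart remains the authoritative procedure.

```python
# Minimal sketch: onboard a non-Azure server to Azure Arc with azcmagent.
# Assumes the Azure Connected Machine agent is already installed on the server
# and that an interactive browser/device-code login is acceptable.
# All IDs and names below are hypothetical placeholders.
import subprocess

subprocess.run(
    [
        "azcmagent", "connect",
        "--subscription-id", "00000000-0000-0000-0000-000000000000",
        "--tenant-id", "11111111-1111-1111-1111-111111111111",
        "--resource-group", "rg-arc-servers",
        "--location", "eastus",
    ],
    check=True,  # raise if onboarding fails
)

# Once connected, the machine shows up as an Azure Arc-enabled server and can
# be governed, protected, configured, and monitored like a native Azure VM.
```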

A Secure Software Supply Chain with Containers

The concept of a software supply chain is not a new one. What may be new is that CI/CD (Continuous Integration/Continuous Delivery) with containers makes it conceptually easy to understand and technically practical to implement. Here is a process diagram that illustrates this approach in five steps.

[Figure: CI/CD process in five steps]

Here, the software supply chain is the “master” branch of a release; development activities at other branches are not considered. The master branch starts where and when code or a change is introduced, and at its other end is a production runtime environment where applications run. The process, as shown above, is highlighted in five steps.

  1. A CI/CD process starts by committing and pushing code to a centralized repository. Code here encompasses all programmatic assets relevant to defining, configuring, deploying, and managing the application workload.
  2. Changes made on the master branch trigger the CI process to automatically (re)build and (re)test all assets relevant to the workload which the master branch delivers.
  3. A successful outcome generates container images which are automatically versioned, tagged, registered, and published in a designated trusted registry. In Docker Enterprise Edition, this is implemented as a Docker Trusted Registry, or DTR. The function of a trusted registry is to secure and host container images. Important tasks carried out here are to scan the image at a binary level for known malware, check for vulnerabilities, and digitally sign the container image once it is successfully processed. The generation of a container image signifies that the application assets are successfully integrated and packaged, which marks the end of the CI part.
  4. CD kicks off, executes, and validates the steps for deploying the workload to a target production environment.
  5. Upon instantiating containers, the referenced container images are pulled to a local host as needed and the container instances are started, hence delivering an application or service. (A minimal sketch of steps 2 and 3 follows this list.)
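
Below is a minimal sketch of steps 2 and 3, assuming a Docker-based toolchain: run the tests, build and tag a versioned image, and push it to a trusted registry with Docker Content Trust enabled so the image is signed on push. The registry address, repository name, and test command are hypothetical placeholders, and malware and vulnerability scanning is left to whatever the registry (for example, DTR) performs on its side.

```python
# Minimal CI sketch: test, build, tag, and push a signed container image.
# Assumes Docker and the project's test runner are installed, and that the
# registry performs its own scanning once the image is pushed.
# Registry, repository, and test command are hypothetical placeholders.
import os
import subprocess

REGISTRY = "dtr.example.com"
REPOSITORY = "shop/web"
VERSION = os.environ.get("BUILD_VERSION", "1.0.0")
IMAGE = f"{REGISTRY}/{REPOSITORY}:{VERSION}"

def run(cmd, env=None):
    """Run a command and stop the pipeline immediately on failure."""
    subprocess.run(cmd, check=True, env=env)

# Step 2: (re)build and (re)test the assets the master branch delivers.
run(["python", "-m", "pytest", "tests/"])

# Step 3: produce a versioned, tagged image and publish it to the registry.
run(["docker", "build", "-t", IMAGE, "."])

# Docker Content Trust signs the image on push; the signing keys must exist.
env = dict(os.environ, DOCKER_CONTENT_TRUST="1")
run(["docker", "push", IMAGE], env=env)
```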

Notice that Continuous Delivery refers to a capability, not a state. Continuous Delivery signifies the ability to keep the payload in a production-ready, deployable condition at all times. It does not necessarily mean that a payload, once validated, is deployed to production immediately.

One Version of Truth

A software supply chain starts with developers committing and pushing code to the master branch of a release’s centralized repository. To have one version of truth, centralized management is essential. Nevertheless, a centralized repository can be operated in a distributed fashion, as GitHub does. Further, a centralized source code hosting solution must properly address priorities including role-based access control, naming and branching, high availability, network latency, single points of failure, and so on. With source control, promoted code can be versioned and tagged for asset and release management.

Triggering upon Pushing Changes

When changes are introduced into the master branch, which is the supply chain, CI must and will automatically kick off a validation process against a set of defined criteria, namely test cases.

Test-Driven, a Must

Once the development criteria (or requirements) are set, test cases can be developed accordingly, prior to writing the code. This essentially establishes the acceptance criteria and forces a developer to focus on designing code guided by what must later be validated, which in essence designs in the quality.

When changes are made, there is no point in “manually” executing all test cases against all the code. Let me be clear: regression tests are necessary, but repeating them with manual labor is counterproductive and error-prone. A better approach is to programmatically automate all structured tests (i.e., where the criteria are structured and stable, like canned test cases) and let a tester do exploratory testing, which may not follow scripted, expected, or even logical steps, but is performed with the intent to break the system. The automation makes regression tests consistent and efficient, while exploratory testing adds extra value by expanding the test coverage.
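
As a trivial illustration of writing the test before the code, here is a hedged sketch using pytest. The discount rule and function name are invented for the example; the point is only that the acceptance criterion exists, and can fail, before the implementation does.

```python
# A test-first sketch with pytest. The discount rule is invented for the
# example: orders of 10 or more units get 5 percent off the line total.
# The tests are written first, from the acceptance criteria; the function at
# the bottom is the implementation added afterwards so the file runs as-is.
import pytest


def test_no_discount_below_ten_units():
    assert line_total(unit_price=2.00, quantity=9) == pytest.approx(18.00)


def test_five_percent_discount_at_ten_units():
    assert line_total(unit_price=2.00, quantity=10) == pytest.approx(19.00)


def line_total(unit_price: float, quantity: int) -> float:
    """Price of a line item, with 5% off for quantities of 10 or more."""
    total = unit_price * quantity
    return total * 0.95 if quantity >= 10 else total
```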

Master Branch Has No Downtime

Test-driven development in CI/CD holds a key objective: the master branch is always functional and ready to deliver. This may at first appear idealistic and over-committed. In fact, consider an automobile production line: as material and parts are put together while moving from one station to another, an automobile is being built. If at any time a station breaks down, it must be fixed on the spot, since the whole production line is on hold. Fixing whatever stops a subject from moving from one station to the next is necessary to keep production going.

A software supply chain, or a CI/CD pipeline, with containers is a digital version of the above-mentioned model, where the artifacts are definitions, configurations, and scripts. As these artifacts are integrated, built, and tested throughout the pipeline, the process to construct a service based on containers is validated. If a step fails validation, the pipeline stops and the issue must be immediately addressed and resolved, so the process can continue to the next step and material keeps flowing through the pipeline. To CI/CD, the master branch is the pipeline and must always be kept functional and ready to deliver.

Containers Are Not the Deliverables

It should be noted here that the artifacts passing through the CI/CD pipeline are neither container images nor container instances. What the pipeline validates is a set of developed definitions, configurations, and processes based on application requirements and presented in markup languages like JSON or YAML. In Docker, they are Dockerfiles, Compose YAML files, and template-based scripts, for example, that define the application architecture and a configured Docker runtime environment for a target application delivered as containers upon instantiation.

Container images and instances are in a way by-products; they are not intended to be the deliverables. A container image generated by a CI/CD pipeline should always be programmatically created by the initial CI run and later referenced or updated by CD. The key is that images must be generated by executing CI and then pulled as needed. With Docker, thanks to configuration management as code being native, a release may employ a particular version of a container image, and upgrading or rolling back a software supply chain can be as easy as changing the referenced version, followed by redeploying the associated payload.
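
To make the “change the referenced version and redeploy” idea concrete, here is a minimal sketch that rewrites the image tag in a Compose file with PyYAML. The file layout, service name, and repository are hypothetical, and a real pipeline would follow this with its own deploy step (for example, docker compose up -d).

```python
# Minimal sketch: upgrade or roll back by changing the referenced image tag
# in a Compose file, then redeploying. Requires PyYAML (pip install pyyaml).
# The file name, service name, and repository below are hypothetical.
import yaml

COMPOSE_FILE = "docker-compose.yml"
SERVICE = "web"

def pin_image(version: str) -> None:
    """Point the service at a specific, already-registered image version."""
    with open(COMPOSE_FILE) as f:
        compose = yaml.safe_load(f)

    compose["services"][SERVICE]["image"] = f"dtr.example.com/shop/web:{version}"

    with open(COMPOSE_FILE, "w") as f:
        yaml.safe_dump(compose, f, sort_keys=False)

# Upgrade to 1.4.0, or fall back to 1.3.2, then redeploy the payload.
pin_image("1.4.0")
```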

Trusted Registry, the Connective Tissue

Once CI has successfully generated a container image, it should register and upload the image to a trusted registry for security scanning and digital signing, before CD takes over and later pulls or updates the image as needed to complete the CI/CD pipeline. Technically, CI starts with receiving code changes and ends with successfully registering a container image.

Failing to register a resulting container image will prevent CD from progressing when it references the image. In other words, a trusted registry is like connective tissue, holding and keeping CI and CD fully synchronized and functional with the associated container images. A generated container image does not flow through every step of a CI/CD pipeline; the image is, however, the focal point for the validity of the produced results. As shown in the diagram above, I used a dashed line between CI and CD to indicate the dependency on the trusted registry. A failed registration will eventually fail the overall process.

Closing Thoughts

The essence of CI is automatic testing against defined criteria at the unit, function, and integration levels. These criteria are basically test cases, which should be developed prior to code development as acceptance criteria. This is key. Development must fully address these test criteria at coding time to build in quality.

A software supply chain is a better way, make that a much better way, than just “developing” applications. I remember those days when every release to production was a nightmare, and code promotion was an event full of anxiety, numerous crashes, and many long hours. Good riddance to those days. CI/CD with containers presents a very interesting and powerful combination of quality and speed, and it is unusual to be able to achieve both at the same time. With Docker, Jenkins, GitHub, and Azure, a software supply chain is realistic and approachable, which I will detail in an upcoming post. Stay tuned.

NIST Guidance on Container Security

Here are a selected few NIST documents which I have found very informative; they may help those who seek formal criteria, guidelines, and recommendations for evaluating containerization and security.

NIST SP 800-190

Application Container Security Guide

Published in September 2017, this document (800-190) reminds us of the potential security concerns, and how to address them, when employing containers. 800-190 details the major risks and countermeasures for the components of container technologies, including the image, registry, orchestrator, containers, and host OS.

It is worth pointing out that Section 6 of 800-190 recommends that organizations apply the NIST SP 800-125 Section 5 recommendations in a container technology context, while listing exceptions and additions for planning and implementation.

NISTIR 8176

Security Assurance Requirements for Linux Application Container Deployments

Published in October 2017, this document (8176) explains the execution model of Linux containers and assumes an attack model in which a vulnerability in the container’s application code, or its faulty configuration, has been exploited by an attacker. 8176 also examines securing containers based on hardware and kernel configurations, including namespaces, cgroups, and capabilities. By addressing the functionality and assurance requirements for the two types of container security solutions, 8176 complements NIST SP 800-190, which provides the security guidelines and countermeasures for application containers.

NIST SP 800-180

NIST Definition of Microservices, Application Containers and System Virtual Machines

As of January 2018, this document (800-180) had not yet been finalized; the draft was published in February 2017, and the call for comments ended the following month.

The overwhelming interest in container technologies and their applications has energized organizations to seek new and improved ways to add value for their customers and increase ROI. At the same time, as containers, containerization, and microservices have become highly popular terms, abused over and over again in our daily business conversations, the lack of rigorous and recognized criteria to clearly define what containers and microservices are has been, in my view, a main factor confusing and perhaps misguiding many. For those who seek definitions and clarity before examining a solution, the agony of being in a state of confusion, suffocated by the ambiguity of technical jargon indiscreetly applied to statements, can be (for me personally, it is) a very stressful experience. And some apparently have had enough and urged us that “There is no such thing as a microservice!”

With 800-180 serving a role similar to that of NIST SP 800-145 for cloud computing, we now have a set of criteria to reference as a baseline for carrying out a productive conversation on containers, microservices, and related solutions. And that’s a good thing.

NIST SP 800-125

Guide to Security for Full Virtualization Technologies

Like many NIST documents, this document (800-125) first gives background information by explaining what full virtualization is, the motivations for employing it, and how it works, before depicting the use cases, requirements, and security recommendations for planning and deployment. Although today most business and technical professionals in the IT industry are to some degree versed in virtualization technologies, 800-125 remains an interesting read and provides insight into virtualization and security. Two associated documents, listed below, point out important topics on virtualization for a core knowledge domain of the subject.

  • NIST SP 800-125A Security Recommendations for Hypervisor Deployment on Servers
  • NIST SP 800-125B Secure Virtual Network Configuration for Virtual Machine (VM) Protection

An Introduction to UEFI Secure Boot and Disk Partitions in Windows 10

As a firmware interface standard to replace BIOS (Basic Input/Output System), the UEFI (Unified Extensible Firmware Interface) specification has been a collective effort by UEFI Forum members for a while. UEFI is in essence an abstraction layer between firmware and the OS, independent of device hardware and architecture. This provides flexibility for supporting multiple and varied OS environments, and it also acts as a generic target boot environment for drivers for cross-platform compatibility, as opposed to the need to develop a particular driver for particular hardware. With UEFI, there are also security opportunities to better defend against a class of malware, like bootkits and rootkits, that targets the pre-boot environment of a device.

Why UEFI Secure Boot

Specifically, UEFI Secure Boot is an option to prevent a device from being tampered with in the pre-boot environment, i.e. the period from power-on to initializing the OS. Malware that injects itself into firmware or ROM, gains hardware access, and is loaded before the OS is difficult to defend against or clean up once a device is compromised. The Secure Boot option performs signature authentication on code executed during pre-boot. A code or firmware creator is required to digitally sign the code with a private key, and the signature is verified against the paired public key when the code is loaded at device startup. This process requires a signature database of supported hardware vendors to be established beforehand, which explains why Microsoft, in fact since Windows 8, has instituted a driver signing process for certifying digital signatures of firmware to implement UEFI Secure Boot; there are some changes to that process in Windows 10.
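
As a quick check of the result, Windows exposes the current Secure Boot state to administrators through the Confirm-SecureBootUEFI PowerShell cmdlet. The sketch below simply calls it from Python for consistency with the other examples; it needs an elevated prompt and returns an error on legacy BIOS systems.

```python
# Minimal sketch: report whether UEFI Secure Boot is currently enabled.
# Runs the built-in Confirm-SecureBootUEFI cmdlet; requires an elevated
# PowerShell on a UEFI-based Windows device (it errors out on legacy BIOS).
import subprocess

result = subprocess.run(
    ["powershell", "-NoProfile", "-Command", "Confirm-SecureBootUEFI"],
    capture_output=True,
    text=True,
)

if result.returncode == 0:
    state = result.stdout.strip()  # "True" if Secure Boot is on, else "False"
    print(f"Secure Boot enabled: {state}")
else:
    print("Not a UEFI device, or the cmdlet requires elevation:",
          result.stderr.strip())
```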

Above all, the UEFI Secure Boot specification addresses security issues relevant to boot-time exploits and eliminates the possibility of executing untrusted or altered code during pre-boot. Windows 10 Enterprise supports UEFI 2.3.1 and later for Device Guard, a new and key security feature of Windows 10 Enterprise that ensures the hardware and OS boot integrity of corporate devices. The following compares the security features of Windows 10 editions. Additional information comparing Windows 10 editions based on Core Experience and Business Experience is readily available.

[Image: comparison of security features across Windows 10 editions]

Sample Disk Layouts

One easy way for an end user to verify whether a device is UEFI-based is to examine the disk layout. Typically, a BIOS-based device has two partitions, system and OS, on the primary disk where the OS is installed. A device based on UEFI has a distinctly different disk layout from that of BIOS. The following are sample disk layouts of Windows 10 devices based on BIOS and UEFI, as reported by Disk Manager and the DISKPART command-line utility.

Then BIOS

This is a sample BIOS setup screen of a Windows device.

[Image: sample BIOS setup screen of a Windows device]

And this is a typical disk layout of a device with BIOS, with a System and an OS partition, as reported by the Windows desktop Disk Manager.

[Image: Disk Manager view of the two-partition, BIOS-based disk layout]

A DISKPART session as demonstrated below shows a disk layout consistent with what is reported by Disk Manager, with two partitions accordingly.

[Image: DISKPART session showing the same two-partition BIOS disk layout]

Now with UEFI

For UEFI, the following are two sample disk layouts. Notice that each one has an EFI System Partition (ESP). However, a DISKPART session reveals that there is actually an extra partition, the so-called Microsoft Reserved Partition (MSR), which is:

  • With no partition ID and not reported by Disk Manager
  • Not for storing any user data
  • Reserved for drive management of the local hard disk

The sizes of the ESP and MSR are customizable, and additional partitions can be added based on business needs. Those involved in OS imaging and enterprise device deployment are encouraged to review and get familiar with the specifics of, and Microsoft’s recommendations on, configuring UEFI/GPT-based hard drive partitions, as detailed elsewhere.
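
For those who want to reproduce the DISKPART sessions shown below without typing commands interactively, here is a small sketch that feeds DISKPART a script file from Python. Disk 0 is a hypothetical default; run it from an elevated prompt and adjust the disk number to match your system.

```python
# Minimal sketch: list the partitions of a disk the same way the DISKPART
# sessions below do, but via a script file (diskpart /s). Run elevated.
# Disk 0 is an assumption; change it to the disk you want to inspect.
import subprocess
import tempfile

script = "select disk 0\nlist partition\n"

with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write(script)
    script_path = f.name

# DISKPART prints the partition table, including the ESP and, on UEFI/GPT
# disks, the Microsoft Reserved Partition that Disk Manager does not show.
output = subprocess.run(
    ["diskpart", "/s", script_path],
    capture_output=True, text=True, check=True,
).stdout
print(output)
```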

Sample 1, A Typical Disk Layout with UEFI

This is based on a Surface Pro 4 machine purchased from a retail store running Windows 10 Pro Build 10586, as shown.

[Image: Surface Pro 4 running Windows 10 Pro Build 10586]

Disk Manager shows a three-partition disk with an ESP of 100 MB in size.

[Image: Disk Manager showing a three-partition disk with a 100 MB ESP]

On the same device, a DISKPART session reveals that the disk also has a 16-MB MSR as Partition 3.

[Image: DISKPART session showing the 16-MB MSR as Partition 3]

Sample 2, A Custom Disk Layout with UEFI

Below is a sample company-issued Surface Pro 3 device running Windows 10 Enterprise Insider Preview Build 11082.

[Image: company-issued Surface Pro 3 running Windows 10 Enterprise Insider Preview Build 11082]

Here, Disk Manager presents a custom image with four partitions including a 350 MB ESP and a recovery partition after the OS partition.

[Image: Disk Manager showing a four-partition custom layout with a 350 MB ESP]

And again, a DISKPART session reveals a 128 MB MSR as Partition 3.

[Image: DISKPART session showing the 128 MB MSR as Partition 3]

UEFI and GPT

One thing worth pointing out is that when deploying Windows to a UEFI-based PC, one must initialize the hard drive that includes the Windows partition with the GUID Partition Table (GPT) partition style. Additional drives may then use either the GPT or the Master Boot Record (MBR) partition style.

Manually Enabling Secure Boot

With UEFI already in place, here are manual ways to enable and configure Secure Boot on a Windows 10 device.

A Hardware Short-Cut with Surface Pro devices

There is a convenient hardware way to change the boot settings of a Microsoft Surface Pro device. While the machine is powered off, pressing the power button and the volume-up button at the same time for a few seconds (generally 5 to 10, though some devices may take 20 or more) brings up the device’s boot settings screen. And if you keep holding the power and volume-up buttons past the boot setup screen for another few seconds, in my experience this eventually triggers a reload of the firmware.

Here is the boot setup screen from the above mentioned Surface Pro 4 device running Windows 10 Pro, purchased from a retail store.

[Image: Surface Pro 4 boot setup screen]

and with the secure boot options:

[Image: Surface Pro 4 secure boot options]

On the other hand, the following is a boot setup screen from a sample company-issued Surface Pro 3 device with a custom image built and deployed with System Center Configuration Manager. The UI may appear different, but the available settings are mostly the same.

[Image: Surface Pro 3 boot setup screen]

Changing UEFI or Boot Settings via UI

For a Windows 10 device based on UEFI, here is a visual walkthrough demonstrating how to manually enable UEFI Secure Boot on a Surface Pro 3 device running Windows 10 Enterprise Insider Preview Build 11082. The process begins by clicking Start/Settings from the desktop.

[Image: Start/Settings screens beginning the process]

Upon the first restart, click the following screens as indicated.

[Image: screens shown upon the first restart]

And the second restart should bring up the boot setup screen shown earlier in this article. Again, as mentioned, the UI process can be shortcut by pressing the power and volume-up buttons at the same time for a few seconds while a Surface Pro device is powered off.

Closing Thoughts

With ongoing malware threats and a growing trend toward BYOD, IT must recognize the urgency of fundamentally securing corporate devices from power-on to power-off. UEFI Secure Boot is an industry standard and a mechanism to ensure hardware integrity every time and all the time. Specific hardware requirements and configurations must first be put in place for solutions that rely on UEFI Secure Boot, like Device Guard for Windows 10 Enterprise. Therefore, rolling out UEFI 2.3.1 or later, as well as TPM 2.0, gives IT the hardware components needed to leverage hardware-based security features and fundamentally secure corporate devices.

(Part 2) Deploying Windows 10: How to Migrate Users and User Data to Windows 10

Yung Chou, Kevin Remde and Dan Stolts continue their TechNet Radio multi-part Windows 10 series, and in part 2 they showcase free tools like the User State Migration Tool (USMT) that can easily migrate users and user data to Windows 10 from Windows XP, 7 and 8.

[Video: TechNet Radio – Deploying Windows 10, Part 2]

IT Pros’ Job Interview Cheat Sheet of Multi-Factor Authentication (MFA)

Internet Climate

Recently, as hacking has become a business model and identity theft an everyday phenomenon, there is increasing hostility on the Internet and escalating concern for PC and network security. A long and complex password is no longer sufficient to protect your assets. In addition to a strong password policy, adding MFA is now a baseline defense to better ensure the authenticity of a user and an effective vehicle to deter fraud.
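
To make the “second factor” concrete, here is a minimal sketch of the time-based one-time password (TOTP, RFC 6238) scheme that many authenticator apps use: a shared secret plus the current time window produces a short-lived six-digit code. The secret below is a made-up example; production systems rely on vetted MFA libraries and hardened key storage rather than hand-rolled code.

```python
# Minimal sketch of a time-based one-time password (TOTP, RFC 6238), the kind
# of code an authenticator app shows as a second factor. Example secret only;
# real deployments use vetted MFA libraries and protected key storage.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, time_step: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // time_step          # 30-second time window
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # same secret + same time window = same code
```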

Market Dynamics

Furthermore, increasing online e-commerce transactions, the compliance needs of regulated verticals like finance and healthcare, the unique business requirements of market segments like the gaming industry, the popularity of smartphones, the adoption of cloud identity services with MFA technology, and so on all contribute to the growth of the MFA market. Market research published in August 2015 reported that “The global multi-factor authentication (MFA) market was valued at USD 3.60 Billion in 2014 and is expected to reach USD 9.60 Billion by 2020, at an estimated CAGR of 17.7% from 2015 to 2020.”

Strategic Move

While mobility becomes part of the essential business operating platform, a cloud-based authentication solution offers more flexibility and long-term benefits. This is apparent; the research stated that

“Availability of cloud-based multi-factor authentication technology has reduced the maintenance costs typically associated with hardware and software-based two-factor and three-factor authentication models. Companies now prefer adopting cloud-based authentication solutions because the pay per use model is more cost effective, and they offer improved reliability and scalability, ease of installation and upgrades, and minimal maintenance costs. Vendors are introducing unified platforms that provide both hardware and software authentication solutions. These unified platforms are helping authentication vendors reduce costs since they need not maintain separate platforms and modules.”

Disincentives

Depending on where IT is and where IT wants to be, the initial investment may be consequential and significant. Adopting various technologies and cloud computing may be necessary, while also facing resistance to change in corporate IT culture.

Snapshot

The following is not an exhaustive list, but it covers some important facts, capabilities, and considerations of Windows MFA.

[Image: snapshot of Windows MFA facts, capabilities, and considerations]

Closing Thoughts

MFA helps ensure the authenticity of a user. MFA by itself nevertheless cannot stop identity theft, since there are various ways, like keyloggers and phishing, to steal an identity. Still, as hacking has become a business model for an underground industry, and even a military offensive capability, and credential theft has developed into a standard hacking practice, operating without a strong authentication scheme is not an option. MFA remains arguably a direct and effective way to deter identity theft and fraud.

The emerging trend of employing biometrics, instead of a password, with a key-based credential that leverages hardware- and virtualization-based security, like Device Guard and Credential Guard in Windows 10, further minimizes the attack surface by ensuring hardware boot integrity and OS code integrity and by allowing only trusted system applications to request a credential. Together, Device Guard and Credential Guard offer a new standard in preventing pass-the-hash (PtH), one of the most popular types of credential theft and reuse attacks seen by Microsoft so far.

Above all, going forward we must not consider MFA an afterthought or add-on, but an immediate and imperative part of a PC security solution. IT needs to implement MFA sooner rather than later, if it has not already.

Don’t Kid Yourself to Use the Same Password with Multiple Sites

I am starting a series of Windows 10 content with a focus on security features. A number of topics, including Multi-Factor Authentication (MFA), hardware- and virtualization-based security like Credential Guard and Device Guard, and Windows as a Service, are all covered in upcoming posts. These features are not only signature deliveries of Windows 10 but significant initiatives addressing fundamental issues of PC security while leveraging market opportunities presented by the growing trend of BYOD. The series nevertheless starts where, in my view, all security discussions should start.

Password It Is

At a very high level, I view security as encompassing two key components. Authentication determines whether a user is who they claim to be, while authorization grants access rights accordingly upon a successful authentication. The former starts with the presentation of user credentials or an identity, i.e. a user name and password, while the latter operates according to a security token, or a so-called ticket, derived from a successful authentication. The significance of this model is that a user’s identity, or more specifically the password, since a user name is normally a displayed and not an encrypted field, is essential to initiate and acquire access to a protected resource. A password, however, can be easily stolen or lost, and is arguably the weakest link of a security solution.

Using the Same Password for Multiple Sites

When it comes to cybersecurity, the shortest distance between two points is not always a straight line. For a hacker to steal your bank account, the quietest way is not necessarily to directly hack your bank’s web site, unless the main target is the bank itself. Institutions like banks and healthcare providers are subject to laws, regulations, and mandates to protect customers’ personal information. These institutions have to commit financially and administratively to implementing security solutions, so attacking them is a high-cost operation and an obviously much more difficult effort.

[Image: indirect attack path through less-protected sites]

An alternative, as illustrated above, is to attack unregulated, low-profile, lesser-known, mom-and-pop shops where you perhaps order groceries, your favorite leaf teas, or neighborhood deliveries, since a hacker can learn your lifestyle from your posting, liking, and commenting on subjects within communities on social media. Many of those shops are family-owned businesses operating on a shoestring budget, with barely enough awareness and technical skill to maintain a web site built with freeware downloaded from some unknown site. The OS is probably not patched up to date. If there is antivirus software, it may be a free trial that has expired. The point is that for those small businesses, the security of the computing environment is probably not an everyday priority, let alone a commitment to protect your personal information.

The alarming fact is that many do use the same password for accessing multiple sites.

[Image: Bitdefender study findings]

Bitdefender published a study in August 2010, as shown above, which revealed that more than 250,000 email addresses, usernames, and passwords could easily be found online via postings on blogs, collaboration platforms, torrents, and other channels. It pointed out that “In a random check of the sample list consisting of email addresses, usernames and passwords, 87 percent of the exposed accounts were still valid and could be accessed with the leaked credentials. Moreover, a substantial number of the randomly verified email accounts revealed that 75 percent of the users rely on the same password to access both their social networking and email accounts.”

[Image: Ofcom Adults’ Media Use and Attitudes Report 2013 findings]

On April 23, 2013, Ofcom published (as shown above) that “More than half (55%) of adult internet users admit they use the same password for most, if not all, websites, according to Ofcom’s Adults’ Media Use and Attitudes Report 2013. Meanwhile, a quarter (26%) say they tend to use easy to remember passwords such as birthdays or names, potentially opening themselves up to the threat of account hacking.” As noted, this was based on interviews with 1,805 adults aged 16 and over as part of the research. Although the above statistics are derived from surveying UK adult internet users, they do represent a common practice in internet use and raise a security concern.

Convenience at the Risk of Compromising Security

With free Wi-Fi, access to the Internet is available at coffee shops, bookstores, shopping malls, airports, hotels, just about everywhere. The convenience nevertheless comes with high risk, since these free access points are also available for hackers to identify, phish, and attack targets. Operating your account over public Wi-Fi, or using a shared device to access protected information, is essentially inviting unauthorized access and an invasion of your privacy. Using the same password with multiple sites further increases the opportunities to compromise your high-profile accounts via a weaker account. Choosing convenience at the risk of compromising security is a poor and potentially costly practice with devastating results.

User credentials and any Personally Identifiable Information (PII) are valuable assets and exactly what hackers are looking for. Identifying and protecting PII should be an essential part of a security solution.

Fundamental Issues with Password

Examining the presented facts about passwords surfaces two issues. First, the security of a password relies heavily on a user’s practices and is therefore problematic. Second, a hacker can log in remotely from another state or country, with stolen user credentials, on a different device. A direct answer to these issues is to simply not use a password, and instead use something else, like biometrics, to eliminate the need for a user to remember a string of strange characters, and to associate user credentials with the user’s local hardware so that the credentials cannot be used from a different device. In other words, employ the user’s device as a second factor for MFA.

Closing Thoughts

The password is the weakest link in a security solution. Keep it complex and long. Exercise common sense in protecting your credentials. Regardless of whether you have won a trip to Hawaii or $25,000 in free money, do not read those suspicious emails. Before clicking a link, read the URL in its entirety and make sure it is legitimate. These are nothing new, just a review of the basics of computer security.

Evidence nevertheless shows that many of us tend to use the same passwords for multiple sites as the number of passwords increases and they become complex and hard to remember. And the risk of unauthorized access becomes high.

For IT, eliminating passwords and replacing them with biometrics is an emerging trend. Implementing Multi-Factor Authentication needs to happen sooner rather than later. Assessing hardware- and virtualization-based security to fundamentally design out rootkits and to ensure hardware boot integrity and OS code integrity should be a top priority. These are the subjects to be examined as this blog post series continues.

Another Memorandum to IT Pros on Cloud Computing: Virtualization, Cloud Computing, and Service

As IT considers and adopts cloud computing, I think it is crucial to understand the fundamental difference between cloud and virtualization. For many IT pros, the questions are where to start and what the roadmap is from on-premises to cloud to hybrid cloud. In my previous memo to IT pros on cloud computing, I highlighted a number of considerations when examining virtualization and cloud. In this article, I detail more of the underlying approach of the two and how they differ in problem domain, scope of impact, and, most significantly, service model.

In the context of cloud computing, virtualization is the capability to run multiple server, network, or storage instances on the same hardware, while each instance runs as if it were the only instance occupying and consuming the entire underlying hardware. Notice that networking and storage devices generally present themselves via some form of OS instance; that is, for network and storage there is a software component. Here, however, I use the term hardware to include the software and hardware needed to form the underlying layers on which a network or a storage device runs.

Virtualization is an enabler of cloud computing and a critical component in a cloud implementation. Virtualization is, however, not cloud computing and should not be considered at the same level, as illustrated in the figure later in this article. IT pros must recognize that virtualization is a means, while cloud computing is an intermediate step, and also a means, toward the ultimate goal, which is to deliver a service to a customer.

These terms, virtualization, cloud, and service, are chosen carefully and are not to be confused or used interchangeably with simulation, infrastructure, or application, for instance. The three terms signify specific capabilities and represent an emerging model of solving business problems with IT.

Here, I will present the concepts of virtualization, cloud, and then service to make the following points:

  • Cloud computing is to deliver services on a user’s demand, while virtualization is a technical component enabling cloud computing.
  • Virtualization is a necessary, yet not sufficient condition for cloud computing.

To fully appreciate the role virtualization plays in IT, let’s first realize the following.

Isolation vs. Emulation/Simulation

A key concept in virtualization technologies is the notion of isolation. The term isolation signifies the ability to provide a server, network, or storage instance with a logical (i.e. virtualized) environment such that the instance runs as if it were the only instance of its kind occupying and consuming the entire underlying supporting layer. This isolation concept differentiates virtualization from so-called emulation and simulation.

Both emulation and simulation mimic the runtime environment of an application instance. When running multiple emulated or simulated runtime environments, each remains within a common, single OS instance. Essentially, emulation or simulation runs multiple runtime environments in a shared address space; therefore, either approach can introduce resource contention and limit performance and scalability rather quickly. Virtualization, by contrast, isolates, or virtualizes, at the OS level by letting each virtualized environment have its own OS with virtual CPUs, network configuration, and local storage, forming a virtual machine, or VM. A VM can and will actually schedule its assigned virtual CPUs, hence the underlying physical CPUs, based on the associated VM configuration. As explained below, this is server virtualization.

Virtualization in the Context of Cloud Computing

Cloud computing recognizes virtualization in three categories: server virtualization, network virtualization, and storage virtualization. Servers represent the computing capabilities, the ability to schedule CPUs and execute compute instructions. The network directs, separates, or aggregates communication traffic among resources, while storage is where applications, data, and resources are stored.

These three are all implemented with virtualization. And each of the three plays a vital part. The integration of the three forms what is referenced as “fabric” as explained in the article “Fabric, Oh, Fabric.”

Server Virtualization

Hence, server virtualization means running multiple server instances on the same computer hardware, with each server instance (i.e. VM) running in isolation, meaning each server instance runs as if it were the only instance occupying and consuming the entire hardware.

Network Virtualization

This is the ability to establish multiple network instances, i.e. IP addressing schemes, name resolution, subnets, etc., on the same network layer, while each network instance runs in isolation. Here I don’t call it network hardware but rather a layer, since a network instance is wrapped within, and requires, a form of OS which runs on computer hardware; so there is also a software component in networking.

Multiple network instances running in isolation means each network instance exists as if its network configuration were unique on the supporting network layer. For instance, in a network virtualization environment there can be multiple tenants that all have an identical network configuration like 192.168.x.x. When the same IP address, like 192.168.1.1, is referenced within each virtual network, virtualization ensures that tenant A’s network and tenant B’s network keep operating correctly, and a reference within one network is not confused with one in another network, even though both have the same IP configured in their respective logical networks.
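
The following is a small conceptual sketch, not any vendor’s implementation, of why overlapping address spaces do not collide: the virtualization layer keys every lookup on the tenant (for example, a VXLAN-style network identifier) as well as the customer IP address, so 192.168.1.1 in tenant A and 192.168.1.1 in tenant B resolve to different underlying endpoints.

```python
# Conceptual sketch of network virtualization isolation: a mapping table keyed
# by (tenant network ID, customer address) so identical tenant IP ranges such
# as 192.168.1.1 never collide. Illustrative only; not a real overlay stack.
from ipaddress import ip_address

# (tenant network ID, customer IP) -> provider/underlay endpoint hosting it
mapping = {
    ("tenant-A", ip_address("192.168.1.1")): "10.20.0.11",
    ("tenant-B", ip_address("192.168.1.1")): "10.20.0.42",
}

def resolve(tenant: str, customer_ip: str) -> str:
    """Find the underlay endpoint for a tenant-scoped customer address."""
    return mapping[(tenant, ip_address(customer_ip))]

# Same customer IP, different tenants, different underlying destinations.
assert resolve("tenant-A", "192.168.1.1") != resolve("tenant-B", "192.168.1.1")
print(resolve("tenant-A", "192.168.1.1"))  # 10.20.0.11
print(resolve("tenant-B", "192.168.1.1"))  # 10.20.0.42
```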

Storage Virtualization

A similar concept applies to storage virtualization, which is the capability to aggregate storage devices of various kinds while presenting them all as one logical device with continuous storage space. For instance, storage virtualization enables a user to form a logical storage volume by detecting and selecting storage devices with unallocated storage across various interfaces like SATA, SAS, IDE, SCSI, iSCSI, and USB. This volume aggregates all available storage from the selected devices and is presented to a user as a single formatted volume with continuous storage. This concept has long been realized by SAN technologies, an expensive and specialized storage virtualization solution. Lately, storage virtualization has become a software solution, been integrated into the OS, and become a core IT skill and an integral part of JBOD storage solutions for IT.
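
As one example of the OS-integrated, JBOD-style storage virtualization mentioned above, Windows Storage Spaces can pool heterogeneous disks and carve virtual disks out of the pool. The sketch below drives the relevant PowerShell cmdlets from Python; the pool and disk names are hypothetical, it assumes poolable physical disks are present, and the cmdlet parameters should be confirmed against the Storage module documentation before use.

```python
# Hedged sketch: aggregate poolable disks into a Storage Spaces pool and carve
# a virtual disk out of it (Windows Storage module cmdlets, run elevated).
# Pool and disk names are hypothetical; verify parameters against the docs.
import subprocess

ps_script = r"""
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "NotesPool" `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem)[0].FriendlyName `
    -PhysicalDisks $disks
New-VirtualDisk -StoragePoolFriendlyName "NotesPool" -FriendlyName "NotesDisk" `
    -ResiliencySettingName Simple -UseMaximumSize
"""

subprocess.run(["powershell", "-NoProfile", "-Command", ps_script], check=True)
```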

Cloud Fabric

Fabric represents three things: compute, network, and storage. Compute is basically the ability to execute instructions on CPUs. Network is how resources are glued together. Storage is where applications and data are stored. This fabric concept is something we have all been exercising often, sometimes without being fully aware of it. For instance, when buying a computer, we mainly consider a few items: the chip family and processor speed (compute), and how much hard disk capacity and what spindle speed (storage), while network capabilities are expected and integrated into the motherboard. The fact is that when buying a laptop, we focus on the fabric configuration, which forms the capabilities of a solution, here a laptop, a personal computing solution. This practice applies from an individual’s hardware purchase to IT’s running of datacenter facilities. For cloud computing, because of the transparency of the physical locations and hardware where applications are running, we can use fabric to collectively represent the IT capabilities of compute, network, and storage relevant to a datacenter.

Fabric Controller

This term denotes the solution for managing compute, network, and storage in a datacenter. The fabric controller, as the name indicates, controls and is the owner of a datacenter’s fabric. The what, when, where, and how of IT capabilities in a datacenter are all managed by this logical role, the Fabric Controller.

There is another significance to employing the term fabric: it signifies the ability to discover, identify, and manage a resource. In other words, a fabric controller knows when resources are newly deployed and is able to identify, bring up, monitor, or shut down resources within the fabric.

[Image: cloud fabric and the fabric controller]

Virtualization and Fabric

From a service provider’s viewpoint, constructing a cloud is mainly about constructing the cloud fabric, so all resources in the cloud can be discovered, identified, and managed. From a user’s point of view, all consumable resources in the cloud are above the virtualization layer.

Cloud relies on virtualization to form the fabric, which wraps and shields the underlying technical and physical complexities of servers, networks, and storage infrastructures from a user. Virtualization is an enabling technology for cloud computing. Virtualization is nevertheless not cloud computing itself.

Commitments of Cloud Computing

What is cloud? Cloud is a state, a set of capabilities, which enables a user to consume IT resources on demand. Cloud is about the ability to consume resources on demand, not about implementations. What cloud guarantees are the capabilities which a solution must deliver. Cloud is not about the existence of particular computer programs or the employment of specific products or technologies.

Above all, NIST SP 800-145 (800-145) defines a set of criteria including essential characteristics, service models, and deployment models. NIST essentially specifies what capabilities a cloud has and how they must be delivered. 800-145 has been adopted as an industry standard for IT adopting cloud. Here is how I interpret the definition of the essential characteristics of cloud computing.

  • A cloud must enable a user to self-serve. That means a user calls no one and emails no one when consuming cloud resources. A user should be able to serve themselves to allocate storage, create networks, deploy VMs, and configure application instances without the service provider’s intervention. This requirement alone is already very significant.
  • A cloud must be generally accessible to a user. Self-service and ubiquitous access are complementary: self-service implies constant accessibility, while constant accessibility enables self-service. You can’t self-serve if you can’t get to it, and if you can’t get to it, you can’t self-serve. This requirement is mandatory and logical.
  • A cloud employs resource pooling. In my view, this actually implies three things: standardization, automation, and optimization. A pool is a metaphor for a collection of things meeting the same set of criteria. A swimming pool with water is an obvious example. Putting things in a resource pool implies there is a standardization process, based on set criteria, to prepare resources for automation. Automation improves efficiency, but not necessarily overall productivity. For instance, shortening a process from 8 hours to 2 hours through automation does not by itself increase productivity accordingly, since the assembly line may be idle for the remaining 6 hours unless relevant material keeps coming into the line to optimize productivity. Although this example is trivial, the applicability is apparent: resource pooling encompasses standardization, automation, and optimization as a whole to better manage cloud resources.
  • Elasticity is perhaps one of the most referenced cloud computing terms, since it vividly describes cloud computing’s ability to grow and shrink the capacity of a target resource in response to demand, much like a rubber band expanding and retracting as the applied force varies.
  • Finally, for measured service, some call it a charge-back or show-back model, and others state it as pay-as-you-go. The significance of either is the underlying requirement of designed-in analytics. If a cloud enables a user to self-serve, is accessible anytime, and grows and shrinks based on demand, as the first four essential characteristics state, we need a fundamental way to understand and develop resource usage patterns to carry out realistic capacity planning. Analytics is an essential part of cloud computing; it must be architected and designed into a cloud solution and is definitely not an afterthought.

The above are the five essential characteristics defined in 800-145. In other words, to qualify as a cloud, these five attributes are expected, given, guaranteed, and committed. This is tremendous for both users and IT.

Is Virtualization Cloud Computing

The answer should be an obvious “NO.” In virtualization, the requirement is rooted in providing an isolated environment. There is no requirement that a virtualization solution must enable self-service, offer general accessibility, employ standardization, automation, and optimization as a whole, grow and shrink virtualized resources based on demand, and provide analytics on consumption.

Unique Values of Cloud Computing

There are three service models (IaaS, PaaS, and SaaS) for consuming cloud, as defined in 800-145. From the definition, it is apparent that software means an application, platform means an application runtime, and infrastructure implies a set of servers (or VMs) that form the fabric of a target application.

Here are some observations from examining these service models. There are three sets of capabilities offered by cloud, i.e. three ways a user can consume cloud.

  • Enabling a user to consume an application as a service is SaaS.
  • Providing an application runtime such that a user can develop and deploy a target application as a service is PaaS.
  • Provisioning application fabric, i.e. compute, network and storage, for a target application is IaaS.

Notice that all three must be delivered as a service. Application, platform, and infrastructure are described at length in 800-145; however, it is unclear what a service is. This is an essential term which, surprisingly, NIST did not define in 800-145. In addition to what has been stated in “An Inconvenience Truth of the NIST Definition of Cloud Computing, SP 800-145,” this is another unfortunate missing piece of the NIST definition. So what is a service? Without some clarity, IaaS/PaaS/SaaS are destined for ambiguity.

“Service” in this context means simply “capacity on demand” or simply “on demand.” The three service models are therefore

  • IaaS: Infrastructure, i.e. a set of VMs that as a whole forms an application infrastructure, is available on demand
  • PaaS: Platform or an application runtime is available on demand
  • SaaS: Software or an application is available on demand

On-demand is a very powerful word. In this context, it means anytime, anywhere, on any network, and with any device. Cloud enables a user to consume an application, deploy an application to a runtime environment, or build application fabric as a service. It means anytime, anywhere, on any network, with any device a user is guaranteed to be able to build application fabric (IaaS), deploy application (PaaS), and consume an application (SaaS). This is where cloud delivers most of its business values by committing the readiness for a user to consume cloud resources anytime, anywhere, on any network, with any device.

On-demand also implies a series of management requirements including monitoring, availability, and scalability relevant to all five essential characteristics. To make a resource available on demand, the fabric controller must know everything about the resource. The on-demand requirement, i.e. delivered as a service, is a requirement which a cloud service provider must ensure every delivery fulfills.

Cloud Defines Service Models, Virtualization Does Not

Virtualization, on the other hand, defines what it delivers, i.e. an isolated environment for delivering compute, network, or storage resources. Virtualization, however, does not define a service model. For instance, server virtualization is the ability to create VMs; it does not define whether a VM must be instantly created and deployed upon a user’s request anytime, anywhere, on any network, and with any device. The missing service model in virtualization is a very obvious reason why virtualization is not cloud, and why cloud goes far beyond virtualization. Virtualization and cloud contribute business value along different dimensions: the effects of the former are localized and limited, while the impact of the latter is fundamental and significant.

Cloud Deployment Models

For cloud, 800-145 defines four deployment models: public, community, private, and hybrid. Those with different views (see “An Inconvenience Truth of the NIST Definition of Cloud Computing, SP 800-145”) may consider otherwise. Regardless of how many deployment models there are, the classification addresses the ramifications in ownership, manageability, security, compliance, portability, licensing, etc. introduced by deployment.

Virtualization has no concern with deployment models. The scope of virtualization is confined to the ability to virtualize a compute, network, or storage instance. Where a virtualized instance is deployed, who is using it, and so on have no influence on how the virtualized instance is constructed. What is running, and how an application is licensed, within a VM, a logical network, or a JBOD volume is not a concern of virtualization. The job of virtualization is to construct a virtual instance and let the fabric controller manage the rest. For instance, a VM is a VM regardless of whether it is deployed to a company host, a vendor’s datacenter, or a personal laptop; it is still a VM as far as virtualization is concerned.

Closing Thoughts

Examining the definitions of virtualization and cloud, it is clear that

  • Virtualization assures system integrity within a VM, a network, or a storage volume while multiple instances of a kind run on the same underlying hardware. Cloud computing, at the same time, guarantees a user experience defined by five essential characteristics.
  • Virtualization does not define how a VM is to be delivered, while cloud computing commits that all deliveries are on a user’s demand.
  • Virtualization technology is indifferent to where a VM, a logical network, or a JBOD volume is deployed and who the intended users are. Cloud computing, on the other hand, categorizes deployment models to address the business impact on management, audit, security, compliance, licensing, etc.

Overall, cloud is about user experience: enabling consumption on demand and rapid, or even proactive, response to expected business scenarios regarding resource capacity. Virtualization, on the other hand, is a technical component which isolates multiple VM, network, or storage instances running on the same hardware layer. The focus and impact of virtualization are relatively localized and limited.

It should be very clear that cloud and virtualization operate at different levels with very different scopes of impact. Needless to say, virtualization is not cloud, and cloud goes far beyond virtualization.