Creating a Microsoft Azure Virtual Machine

There have been a number of recent changes in the process of creating a Microsoft Azure VM. This article presents a sample user experience of creating a VM from the Microsoft Azure image gallery and with the quick create method, as of April 2014, after a user has logged into the Microsoft Azure Management Portal with a subscription account.

To acquire a Microsoft Azure free trial subscription while it is available, go to http://aka.ms/R2, click the dropdown list, and select the option, Windows Server 2012 R2 on Windows Azure. Detailed instructions are available at http://aka.ms/30.

To Start

This walkthrough deploys a Microsoft Azure VM interactively. Using Microsoft Azure PowerShell, we can automate the entire process; the Microsoft Azure Quick Start Kit (QSK) includes a sample script.
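As a minimal sketch of what such automation might look like (this is not the QSK script itself, and the service name, credentials, location, and image filter below are placeholders), a single New-AzureQuickVM call can stand up a Windows VM once the subscription and its current storage account are configured:

    # Assumes Add-AzureAccount or Import-AzurePublishSettingsFile has already been run
    # and a current storage account is set for the subscription.

    # Pick a Windows Server 2012 R2 image from the gallery (label text may vary).
    $image = (Get-AzureVMImage |
        Where-Object { $_.Label -like "*Windows Server 2012 R2*" } |
        Select-Object -First 1).ImageName

    # Create the cloud service and the VM in one call.
    New-AzureQuickVM -Windows `
        -ServiceName "myQuickSvc" `
        -Name "vm1" `
        -ImageName $image `
        -AdminUsername "vmadmin" `
        -Password "P@ssw0rd!" `
        -Location "West US" `
        -InstanceSize "Basic_A1"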

FROM GALLERY

When creating a VM that is associated with a storage account, a cloud service, or a virtual network, use the FROM GALLERY option.
There are two tiers of hardware configuration for a VM, BASIC and STANDARD, each with a set of selected compute configurations.
As of April 2014, the Basic tier offers compute configurations from A0 to A4, while the Standard tier offers A0 to A7. There are deployment constraints associated with a Basic compute configuration. For instance, deploying a VM with a Basic tier configuration to a virtual network will error out.
Notice that two endpoints are preconfigured by default: one for RDP connections and the other for PowerShell remoting.
Third-party extensions are now available. These extensions, which provide additional functionality to a VM instance, can be installed, managed, and uninstalled from a VM as needed.
Upon a successful deployment, the VM becomes available.
In this example, the VM is deployed to a virtual network with the IP address 10.0.0.4. The companion article provides additional information on IP address management in a Microsoft Azure virtual network.
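For reference, a hedged PowerShell sketch of an equivalent from-gallery deployment follows; the cloud service, affinity group, VNET, subnet, and credential values are illustrative placeholders, not values from this walkthrough.

    # Build a Standard A1 ("Small") VM configuration from a gallery image, place it
    # in a subnet, keep the default RDP/PowerShell endpoints added by the
    # provisioning configuration, and deploy it into a cloud service attached
    # to a virtual network.
    $image = (Get-AzureVMImage |
        Where-Object { $_.Label -like "*Windows Server 2012 R2*" } |
        Select-Object -First 1).ImageName

    New-AzureVMConfig -Name "vm1" -InstanceSize "Small" -ImageName $image |
        Add-AzureProvisioningConfig -Windows -AdminUsername "vmadmin" -Password "P@ssw0rd!" |
        Set-AzureSubnet -SubnetNames "Subnet-1" |
        New-AzureVM -ServiceName "myGallerySvc" -AffinityGroup "myAffinityGroup" -VNetName "myVNet"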

QUICK CREATE

The QUICK CREATE method deploys a VM without requiring the user to specify the associated artifacts, including a cloud service name, a storage account, and a virtual network. It is a speedy way to deploy a VM for testing, troubleshooting, and training.

QUICK CREATE requires just seven pieces of information to deploy a VM. A cloud service is created based on the DNS name, a storage account is created under the current subscription, and a VIP and a DIP are automatically assigned and managed by Microsoft Azure.
In this example, the VM is deployed with a Basic tier A1 configuration and an Affinity Group that is not associated with a virtual network.
Deploying a VM with a Basic tier configuration and an Affinity Group that is associated with a virtual network will error out.
In this example, the VM is deployed with a Standard tier A1 configuration and an Affinity Group that is associated with a virtual network.

Windows Azure Infrastructure Services IP Address Management (Part 2 of 2)

This two-part series details the IP address management of Windows Azure Infrastructure Services including:

We can use Windows Azure PowerShell to assign a static IP address to a VM deployed to a Windows Azure virtual network (VNET), as explained below.

Assigning a Static VNET IP Address to a VM

To assign a static IP address to a VM, use PowerShell, and the VM must be deployed to a VNET. Windows Azure PowerShell v0.7.3.1, released on March 11, 2014, supports static VNET IP addresses. To install Windows Azure PowerShell and connect to your Windows Azure subscription, follow the instructions at http://aka.ms/AzureCmdlets.

 


In the PowerShell ISE Command Add-On pane, simply search for the string, azurestat, and you will find four Windows Azure PowerShell cmdlets for managing the static IP address of a VM deployed to a Windows Azure virtual network.

A typical routine starts with Test-AzureStaticVNetIP to confirm the availability of a target IP address. The cmdlet returns a list of suggested addresses if the target IP address is not available.

Once the availability of a target IP address is confirmed, run Set-AzureStaticVNetIP and Get-AzureStaticVNetIP to assign and verify the IP assignment of a target VM. As needed, use Remove-AzureStaticVNetIP to remove a static IP address from a VM object.
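To list these cmdlets without the ISE, a quick check from the console works as well (assuming the Azure module from v0.7.3.1 or later is loaded):

    # List the four static VNET IP cmdlets: Get-, Set-, Test-, Remove-AzureStaticVNetIP
    Get-Command -Noun AzureStaticVNetIP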

With static internal IP addresses, VMs in a VNET maintain the same IP addresses across shutdown, reboot, and re-imaging, and the deployment order of VMs to a VNET no longer impacts the IP assignments. Suppose two VMs were in the Stopped (Deallocated) state and later started again in a random order; the new internal IP addresses will likely be different. However, if the two VMs were assigned static IP addresses, each VM retains the same IP address upon restart.

Sample Session of Assigning a Static IP

The following user experience is based on Windows Azure PowerShell already configured with connectivity to a Windows Azure subscription by following the instructions provided at http://aka.ms/AzureCmdlets.

Three VMs, vm1, vm2, and vm3, were deployed in order to myTestNet, which is configured with the 10.0.0.0 address space. Upon the initial deployment, 10.0.0.4, 10.0.0.5, and 10.0.0.6 are assigned to the three VMs. vm2 is the target VM for a static IP assignment.

 


For this particular deployment, it is obvious which IP addresses are available. In the event that the Windows Azure Management Portal is not accessible, we can use Test-AzureStaticVNetIP to confirm the availability of a target IP address within a VNET.
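As a sketch (the VNET name matches the example above; the target address is arbitrary), the availability check looks like this:

    # Check whether 10.0.0.7 is free in myTestNet.
    $result = Test-AzureStaticVNetIP -VNetName "myTestNet" -IPAddress "10.0.0.7"
    $result.IsAvailable          # True if the address can be reserved
    $result.AvailableAddresses   # Suggested alternatives when it is not available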

 

 

Once a target IP address is confirmed available, we can use Set-AzureStaticVNetIP to assign a static IP to a target VM. Here I use a small routine to first test the availability of a target IP, 10.0.0.9, in an intended VNET and, if the IP is available, assign it to a target VM, here vm2. A sample script is available at http://aka.ms/StaticVnetIPSample.
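The routine is essentially the following; the cloud service name below is a placeholder, and the sample script at the link above is the authoritative version.

    $vnet    = "myTestNet"
    $ip      = "10.0.0.9"
    $service = "myTestSvc"   # cloud service hosting vm2 (placeholder)

    # Assign the static IP only if it is still available in the VNET.
    if ((Test-AzureStaticVNetIP -VNetName $vnet -IPAddress $ip).IsAvailable) {
        Get-AzureVM -ServiceName $service -Name "vm2" |
            Set-AzureStaticVNetIP -IPAddress $ip |
            Update-AzureVM
        # Verify the assignment on the VM object.
        Get-AzureVM -ServiceName $service -Name "vm2" | Get-AzureStaticVNetIP
    }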

 

Now vm2 has the IP address 10.0.0.9. Since this is a static IP address, redeploying all three VMs will not change the IP address of vm2.

 

 

To redeploy the three VMs, I first shut them down from the Windows Azure Management Portal and brought all of them to the Stopped (Deallocated) state, which released the internal IP address of each VM. At this point, none of the VMs had an IP address.

 

The three VMs were then started in the order of vm3, vm1, and vm2. This order is different from that of the initial deployment. Upon the deployment, notice that vm3, which was started first, got the IP address 10.0.0.4. The second VM started was vm1, which got 10.0.0.5. Nevertheless, the last one started, vm2, maintained the assigned static IP, 10.0.0.9, instead of getting the next available address, 10.0.0.6. In other words, once a static IP address is assigned to a VM instance, the deployment order no longer impacts the IP address assignment of that VM.

Best Practices for Static IP Assignments

The recommended usage pattern is to employ separate subnets for static IP address VMs and dynamic IP address VMs. Use a static IP address subnet for all static IP address VMs, noting that a different static IP address must be specified for each VM deployed to this subnet. Use a dynamic IP address subnet for PaaS web/worker roles and for those Infrastructure Services VMs that do not need static IP addresses.

Managing VMs with static IP assignments in a separate subnet can help standardize processes and configurations, enabling better automation and streamlined operations.

Additional Resources

Windows Azure Infrastructure Services IP Address Management (Part 1 of 2)

This two-part series details the IP address management of Windows Azure Infrastructure Services including:

We will first examine some basic concepts to better understand how IP addresses are assigned in Windows Azure and what the implications are.

VIP vs. DIP

Upon deployment, a Windows Azure VM instance has two IP addresses:

  • A VIP (virtual IP address) is the public IP address pointing to the cloud service where the VM is deployed. Notice that a VIP, once assigned, is not released from a cloud service until every VM instance in the cloud service either has a “Stopped (Deallocated)” status or is deleted.
  • A DIP (an internal IP address assigned by Windows Azure with DHCP) is the IP address assigned to the VM for communicating within Windows Azure. Notice that a DIP, once assigned, is not released from a VM until the VM has a “Stopped (Deallocated)” status.


Conceptually, when multiple VMs are deployed to a cloud service, a VIP is assigned to Windows Azure’s public interface and points to the cloud service, while each VM within the cloud service has an individual DIP assigned by Windows Azure via DHCP.


Notice that if these VMs are deployed to a virtual network, a DIP will be assigned from or released to the address pool defined in the virtual network configuration.
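To see both addresses for a deployed VM, a hedged sketch with the service management cmdlets (the service and VM names are placeholders) might look like this:

    $vm = Get-AzureVM -ServiceName "myCloudSvc" -Name "vm1"
    $vm.IpAddress                                         # the DIP (internal IP) of the VM
    ($vm | Get-AzureEndpoint | Select-Object -First 1).Vip  # the VIP of the cloud service, read from any endpoint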

Endpoint vs. Port

In Windows Azure, accessing a cloud service from the Internet requires an endpoint, which is a pair of ports associated with the VIP of the cloud service. The public port of an endpoint is the one facing the Internet, while the corresponding port within Windows Azure is the private port. An endpoint is the vehicle for accessing a cloud service from the Internet: a defined endpoint effectively connects the public interface and a private one of a cloud service with a port translation at the edge of Windows Azure where the VIP points.
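As an example of the public/private port pair, the following hedged sketch adds an HTTP endpoint that maps public port 80 on the VIP to private port 8080 on the VM (the service, VM, and endpoint names are placeholders):

    # Map VIP:80 (public port) to the VM's port 8080 (private/local port).
    Get-AzureVM -ServiceName "myCloudSvc" -Name "vm1" |
        Add-AzureEndpoint -Name "HttpIn" -Protocol tcp -PublicPort 80 -LocalPort 8080 |
        Update-AzureVM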

Stopped vs. Stopped (Deallocated)

A Windows Azure VM instance can be shut down in two fundamentally different ways. One is to shut down the VM from within the VM instance itself. This brings the VM to a “Stopped” state. Although the instance is stopped, it is not deallocated, and consequently it is still charged by the minute based on the pricing model detailed at http://aka.ms/waPrice.


Another way to stop the VM instance is to operate directly from the Windows Azure Management Portal by highlighting a VM and clicking the Shutdown button on the black menu bar. In addition to shutting down the VM, this also deallocates the instance, as indicated by the status, Stopped (Deallocated). At this point, the VM instance is no longer charged.
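The same distinction can be made from PowerShell; a hedged sketch with placeholder service and VM names:

    # Stopped: the guest OS is shut down but the VM stays provisioned and continues to accrue compute charges.
    Stop-AzureVM -ServiceName "myCloudSvc" -Name "vm1" -StayProvisioned

    # Stopped (Deallocated): compute billing stops and the DIP (and possibly the VIP) is released.
    # -Force suppresses the warning when this is the last running VM in the cloud service.
    Stop-AzureVM -ServiceName "myCloudSvc" -Name "vm1" -Force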

The storage cost of a VM comes from the associated VHD files, which include an OS disk and any additional data disks. This storage cost is always present: regardless of the state of a VM, storage capacity is consumed to store the VHD files, which are kept as page blobs in an associated storage account, and depending on whether geo-replication is enabled, there may be additional storage and transmission costs. The article at http://aka.ms/HADR explains the two storage account types and how they work.

IP Assignments in Windows Azure Virtual Network (VNET)

When allocating addresses, Azure reserves the first three and the last IP addresses in an address space and in each subnet. For instance, defining an address space of 10.0.0.0/24 results in a usable address range of 10.0.0.4 to 10.0.0.254, where the first three and the last IP addresses of this address space, i.e. 10.0.0.1-3 and 10.0.0.255, are reserved for Azure’s use. This behavior is consistent throughout the subnets. Essentially, in any address space allocated in Azure, the first three and the last one are reserved for system use.


When deploying VMs to a VNET, the DIPs (i.e. internal IP addresses) of the VMs are allocated from a configured address pool (as defined in the VNET) in the order in which each VM is deployed. Therefore, deploying the same VMs in a different order to the VNET, or deallocating and then redeploying VMs in a VNET, will likely result in different internal IP addresses being assigned. For example, suppose two VMs in a VNET had been in a Stopped (Deallocated) state and then both were restarted in a random order. The new internal IP addresses assigned to the two VMs will likely be different from those assigned before deallocation. This is an issue for VMs requiring persistent IP addresses throughout their lifetime. However, if a static IP address is assigned to a VM, the same predictable IP address is assigned to the VM upon restart. For a VM deployed to a VNET, a static IP can be assigned using Windows Azure PowerShell.

In Part 2, we will walk through a sample session on assigning a static IP to a VM which is deployed to a VNET.

My Selected Contents for IT Pros

The following is a list of articles I have recently published. It highlights a core set of hybrid cloud computing content which I want to share to help IT pros better understand the topic at this juncture.

Cloud Concepts

 

Windows Azure

 

Windows Server

 

On-premises Deployment

 

TechNet Radio

App Controller with Multiple Windows Azure Subscriptions

App Controller (http://aka.ms/appController) is a component of Microsoft System Center that serves as a self-service portal for managing a hybrid deployment environment. On one hand, App Controller connects to System Center Virtual Machine Manager (VMM) servers and manages VMM-based private cloud resources. On the other, App Controller can also connect to multiple public cloud service providers like Windows Azure and 3rd-party vendors for managing resources deployed to off-premises facilities. As part of the Microsoft System Center family, VMM and App Controller together strategically converge all IT cloud deployment models (i.e. private, public, and hybrid clouds) into a common management platform. A free ebook highlighting essential concepts and operations is available for download.

Here is a quick walkthrough of the user experience. In this example, two Windows Azure subscriptions, as indicated by their subscription IDs, are connected to an App Controller instance.


Within App Controller in this setting, an administrator can manage the two public cloud subscriptions for off-premises deployment and a VMM-managed private cloud for on-premises deployment.


App Controller security follows a role-based model. Within the App Controller UI, under Settings, one can create a User Role to restrict access to a Windows Azure subscription.


A Memorandum to IT Pros on Cloud Computing

The IT industry is moving fast with cloud computing, and there are fundamental changes in how to approach business, which we must grasp to fully appreciate the opportunities presented to us.

Fabric, Not Virtualization

In cloud computing, resources are delivered for consumption via abstractions, without the need to reveal the underlying physical complexities. Ultimately, all cloud computing artifacts consist of resources categorized into three pools, namely compute, networking, and storage. Compute is the ability to execute code and run instances. Networking is how instances and resources are glued together or isolated. And storage is where the resources, configurations, and data are kept. With current technologies, one approach is to deliver resources via a form of virtualization: the three resource pools are abstracted with server virtualization, network virtualization, and storage virtualization, respectively, to collectively form the so-called fabric, as detailed in “Resource Pooling, Virtualization, Fabric, and Cloud.”

Fabric is an abstraction signifying the ability to discover and manage datacenter resources. Sometimes we refer to the owner of fabric as a fabric controller, which is essentially a datacenter management solution that manages all datacenter physical and virtual resources. With fabric, a server is delivered as a virtual machine (VM), an IP address space can be logically defined through a virtual network layer, and a disk drive appearing as a massive and continuous storage space is in fact an aggregate of the storage provided by just a bunch of disks. Virtualization is an essential building block of cloud computing. We must however go beyond virtualization and envision “fabric” as the architectural layer of abstraction.

A critical decision in transforming into cloud computing is to establish, as early as possible, a holistic approach to fabric management, i.e. deploying a system management solution to provide a common and comprehensive platform for integrating, managing, and operating the three resource pools. This management solution strategically forms the fabric such that datacenter resources, whether physical or virtualized, deployed on premises or off premises, are all discoverable and manageable in a transparent fashion.

Service Architecture, Not VMs 

Many may consider IaaS to be about deploying VMs. In cloud computing, however, an IaaS deployment is not just about individual VMs. Modern computing models employ distributed computing with multiple machine tiers, where each tier may have multiple VM instances taking incoming requests or processing data. A typical example is a three-tier web application including a frontend, mid-tier, and backend, which maintains multiple frontend instances for load-balancing and multiple backend instances for high availability of data. Such an application is functional only when all three tiers are considered and operated as a whole. There are times when an application architecture is formed with a single machine tier, i.e. one machine instance constitutes an application instance, and operating directly on a VM is equivalent to operating on the service. We must however manage the deployment as a service architecture deployment and not as an individual VM deployment.

A service in cloud computing is an application instance, a security boundary, and a management entity. One deploys a service, starts and stops a service, scales a service, upgrades a service. From an operations point of view, a service is a set of VMs collectively maintaining an application architecture, a run-time environment, and a running instance. Cloud computing is not just about deploying VMs, since cloud has no concept of individual VMs. It is about the ability to deploy an application architecture, followed by configuring the target application run-time environment, before finally installing and running an application. It is about a service, i.e. an application. And IaaS is about the ability to deploy a service architecture, not individual VMs.

Services, Not Servers

A concept similar to service architecture vs. VMs is services vs. servers. Here a server is the server OS instance running in a deployed VM. A “service” is operationally a set of servers which forms a service, or application, architecture. In the context of cloud computing, a service carries a set of attributes, five to be specific, as defined in NIST SP 800-145 and summarized in the 5-3-2 Principle of Cloud Computing. Deploying a server (or a VM) and deploying a service denote very different capabilities. Deploying ten VMs is a process of placing ten individual servers, and it suggests little about the scope, relationships, scalability, and management of the ten servers. By contrast, deploying ten instances of a service denotes that there is one service definition with ten instantiations. The significance of the ten service instances is that, since all instances are based on the same service definition, there is an opportunity to optimize business objectives via “service” management. An example is employing upgrade domains to eliminate downtime during an application upgrade.

A service is also how cloud computing is delivered and consumed. That IaaS, PaaS, and SaaS all end with the term “service” is a clear indication of how significant a role a service plays. It is “the” way in cloud computing to deliver resources for consumption. If it is not delivered as a service, it is not cloud.

From a customer’s point of view, a service (i.e. an application) is what is consumed. Therefore, IT pros should pay attention to what is running within a server and not just the server itself. From a system management viewpoint, what matters is the ability to look into a server instance, drill down to the application level, and gain insights into application usage and health. For instance, for a database application, what is critical to know and respond to is the health of the databases and not just the state of the server which hosts the database application.

So for IT pros, cloud computing is more than just how a server is automatically configured and deployed; it is how the application running in the server instance is defined, constructed, deployed, and managed, including fault domains and upgrade domains, availability, geo-redundancy, SLA, pricing, costs, cross-site recovery, etc.

Hybrid, Not On-Premises

With virtualization in place, enterprise IT can accelerate cloud computing adoption through hybrid deployment scenarios. Here a hybrid cloud is a private cloud with a cross-premises deployment. For example, an on-premises private cloud with some off-premises resources is a form of hybrid cloud, and vice versa. A hybrid cloud based on an on-premises private cloud offers an opportunity for keeping sensitive information on premises while taking advantage of the flexibility and readiness that a 3rd-party cloud service provider can offer for hosting non-sensitive data. An on-premises private cloud solution is a stepping stone; the ability to define, deploy, and manage a hybrid cloud is where IT needs to be.

The idea of a hybrid cloud surfaces an immediate challenge: how to enable a user to self-serve resources in a cross-premises deployment. Self-servicing is an essential characteristic of cloud computing and plays a crucial role in fundamentally minimizing training and support costs while continually promoting resource consumption. For a hybrid IT environment, there are strategically important considerations, including a consistent user experience across on-premises and off-premises deployments, SSO maturity and federated identity solutions, a manageable delegation model, and interop capabilities with 3rd-party vendors. To ensure IT agility, a management platform that manages not just physical and virtualized resources, but also those deployed to a private cloud, a public cloud, or a hybrid cloud, is increasingly critical.

Closing Thoughts

Transitioning to a cloud computing platform is critical for enterprise IT to compete in the emerging economy, which is driven by emotions and current events and intensified by social media. IT should institute a comprehensive management solution when the first opportunity arises to facilitate and converge fabric construction with cloud computing methodology. Stay focused on constructing, deploying, and managing:

  • Not virtualization, but fabric
  • Not VMs, but a service architecture
  • Not servers, but service instances
  • Not on-premises, but hybrid deployment scenarios

For enterprise IT, the determining factor of a successful transformation is the ability to continue managing not only what has been established, but what is emerging; not only physical and virtualized resources, but those deployed to private, public, and hybrid clouds; not only one vendor’s solution platform, but vSphere, Hyper-V, Citrix, and beyond.

Resource Pooling, Virtualization, Fabric, and Cloud

One of the five essential attributes of cloud computing (ref. The 5-3-2 Principle of Cloud Computing) is resource pooling, an important differentiator separating the thought process of traditional IT from that of a service-based, cloud computing approach.

Resource pooling in the context of cloud computing and from a service provider’s viewpoint denotes a set of strategies for standardizing, automating, and optimizing resources. For a user, resource pooling institutes an abstraction for presenting and consuming resources in a consistent and transparent fashion.

This article presents key concepts derived from resource pooling, as follows:

  • Resource Pools
  • Virtualization in the Context of Cloud Computing
  • Standardization, Automation, and Optimization
  • Fabric
  • Cloud
  • Closing Thoughts

Resource Pools

Ultimately, datacenter resources can be logically placed into three categories: compute, networks, and storage. For many, this grouping may appear trivial. It is, however, a foundation upon which cloud computing methodologies are developed, products designed, and solutions formulated.

Compute

This is a collection of all CPU-related capabilities. Essentially all datacenter physical and virtual servers, whether supporting or actually running a workload, are part of this compute pool. The compute pool represents the total capacity for executing code and running instances. The process of constructing a compute pool is to first inventory all servers and identify virtualization candidates, followed by implementing server virtualization. It is never too early to introduce a system management solution to facilitate these processes, which in my view is a strategic investment and a critical component for all cloud initiatives.

Networks

The physical and logical artifacts put in place to connect, segment, and isolate resources at layer 3 and below are gathered in the network pool. Networking enables resources to become discoverable and hence manageable. In this age of instant gratification, everything has to be readily available within a relatively short window of opportunity. This is an economy driven by emotions and current events. The need to be mobile and connected at the same time is redefining IT security and system administration boundaries, which plays a direct and impactful role in user productivity and customer satisfaction. Networking in cloud computing is more than just remote access; it is an abstraction empowering thousands of users to self-serve and consume resources anytime, anywhere, with any device. Bring your own device, bring your own network, and the consumerization of IT are various expressions of the network and mobility requirements of cloud computing.

Storage

This has long been a very specialized and sometimes mysterious part of IT. An enterprise storage solution is frequently characterized as a high-cost item with significant financial and technical commitments: specialized hardware, proprietary APIs and software, a dependency on direct vendor support, etc. In cloud computing, storage solutions are becoming very noticeable, since the ability to grow and shrink resource capacities based on demand, i.e. elasticity, demands an enterprise-level, massive, reliable, and resilient storage solution at scale. While enterprise IT is consolidating resources and transforming existing establishments into a cloud computing environment, the ability to leverage existing storage solutions from various vendors and integrate them into an emerging cloud storage solution is a noticeable opportunity for minimizing cost.

Virtualization in the Context of Cloud Computing

In the last decade, virtualization has proved its value and accelerated the realization of cloud computing. Then, virtualization was mainly server virtualization, which in an over-simplified description means hosting multiple server instances on the same hardware while each instance runs transparently and in isolation, namely as if each consumes the entire hardware and is the only instance running. Now, with so many new technologies and solutions emerging, customers’ expectations and business requirements have been evolving, and we should validate virtualization in the context of cloud computing to fully address the innovations rapidly changing how IT conducts business and delivers services. As presented below, in the context of cloud computing, consumable resources are delivered in some virtualized form. Various virtualization layers collectively construct and form the so-called fabric.

Server Virtualization

The concept of server virtualization remains: running multiple server instances on the same hardware while each instance runs transparently and in isolation, as if each instance is the only instance running and consumes the entire server hardware.

In addition to virtualizing and consolidating servers, server virtualization also signifies the practices of standardizing server deployment, switching away from physical boxes to VMs. Server virtualization is the abstraction layer for packaging, delivering, and consuming a compute pool.

There are a few important considerations when virtualizing servers. IT needs the ability to identify and manage bare metal so that the entire resource life cycle, from commissioning to decommissioning of server hardware, can be automated. To fundamentally reduce support and training costs while increasing productivity, a consistent platform with tools applicable across physical, virtual, on-premises, and off-premises deployments is essential. The last thing IT wants is one set of tools for managing physical resources and another for working on those virtualized; one set of tools for on-premises deployments and another for those deployed to a service provider; one set of tools for infrastructure development and another for configuring applications. The essential goal is one skill set for all, namely one platform for all, one methodology for all, and one set of tools for all. This advantage is obvious when, for example, developing applications and deploying Windows Server 2012 R2 on premises or off premises to Windows Azure. The Active Directory security model can work across sites, System Center can manage resources deployed off premises to Windows Azure, and Visual Studio can publish applications across platforms. IT pros can operate all of this based on Windows system administration skills. Minimal operations training is needed since everything provides a consistent Windows user experience on a common platform and security model.

Network Virtualization

The idea of server virtualization applies here as well. Network virtualization is the ability to run multiple networks on the same network infrastructure while each network runs transparently and in isolation, as if each network is the only network running and consumes the entire network hardware.

Conceptually, since each network instance runs in isolation, one tenant’s 192.168.x network is not aware of another tenant’s identical 192.168.x network running on the same network infrastructure. Network virtualization provides the translation between physical network characteristics and the representation of a logical network. Consequently, above the network virtualization layer, various tenants, while running in isolation, can have identical network configurations. This abstraction essentially substantiates the concept of bringing your own network.

A great example of network virtualization is Windows Azure virtual network. At any given time, multiple Windows Azure subscribers may all allocate the same 192.168.0.0/16 address space with identical subnet schemes for deploying VMs. The VMs belonging to one subscriber are however not aware of or visible to those deployed by others, even though the network configuration, IP scheme, and IP address assignments may all be identical. Network virtualization in Windows Azure isolates one subscriber from the others, such that each subscriber operates as if its subscription account were the only one employing the 192.168.x address space.

Storage Virtualization

I believe this is where the next wave of drastic IT cost reduction, after server virtualization, happens. Historically, storage has been a high-cost item in the IT budget in every aspect, including hardware, software, staffing, maintenance, SLA, etc. Since the introduction of Windows Server 2012, there is a clear direction in which storage virtualization is becoming a commodity and an essential skill for IT pros, since storage virtualization is now built into the Windows OS. New capabilities like Storage Pools, Hyper-V over SMB, Scale-Out File Server, etc. are making storage virtualization part of server administration routines and easily manageable with tools and utilities like PowerShell, which many IT professionals are familiar with.

The concept of storage virtualization remains consistent with the idea of logically separating a computing object from the hardware where it runs. Storage virtualization is the ability to integrate multiple heterogeneous storage devices, aggregate their storage capacities, and present and manage them as one logical storage device with a continuous storage space. It should be apparent that building a storage pool on top of JBOD is a realization of this concept.
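For instance, a minimal Storage Spaces sketch on Windows Server 2012 R2 (the pool and disk names are placeholders) pools the available JBOD disks and carves a mirrored virtual disk out of the aggregate capacity:

    # Gather the physical disks that are eligible for pooling (the JBOD).
    $disks = Get-PhysicalDisk -CanPool $true

    # Aggregate them into a single storage pool.
    New-StoragePool -FriendlyName "DataPool" `
        -StorageSubSystemFriendlyName (Get-StorageSubSystem | Select-Object -First 1).FriendlyName `
        -PhysicalDisks $disks

    # Present one logical, resilient disk carved from the pooled capacity.
    New-VirtualDisk -StoragePoolFriendlyName "DataPool" -FriendlyName "DataDisk" `
        -ResiliencySettingName Mirror -UseMaximumSize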

Standardization, Automation, and Optimization

Each of the three resource pools has an abstraction to logically present itself with characteristics and work patterns. A compute pool is a collection of physical hosts and VM instances; a virtualization host runs VMs which carry workloads deployed by service owners and consumed by authorized users. A network pool encompasses network resources including physical devices, logical switches, logical networks, address spaces, and site configurations. Network virtualization can map many identical logical/virtual IP addresses onto the same physical NIC, so that a service provider can host tenants implementing identical network schemes on the same network hardware without a concern. A storage pool is based on storage virtualization, which is a concept of presenting an aggregated storage capacity as one continuous storage space as if provided by one logical storage device.

In other words, the three resource pools are wrapped with server virtualization, network virtualization, and storage virtualization, respectively. Together these three virtualization layers form the cloud fabric, which is presented as an architectural layer with opportunities to standardize, automate, and optimize deployments without the need to know the physical complexities.

Standardization

Virtualizing resources decouples the dependency between instances and the underlying hardware. This offers an opportunity to simplify and standardize the logical representation of a resource. For instance, a VM is defined and deployed with a VM template which provides a level of consistency with a standardized configuration. 

Automation

Once a VM’s characteristics are identified and standardized into configurations, we can generate an instance by providing only instance-specific information, such as the VM machine name, which must be validated at deployment time to prevent duplicate names. With a deployment template, only minimal information is needed at deployment time, which can significantly simplify the process and facilitate automation. Standardization and automation are essential mechanisms so that a workload can scale on demand, i.e. become elastic, once a threshold is specified from a system management perspective.

Optimization

Standardization provides a set of common criteria. Automation executes operations based on the set criteria with volume, consistency, and expediency. With standardization and automation, instances can be instantiated with consistency and predictability. In other words, cloud resources can be operated in bulk with consistency and predictability by standardizing configurations and automating operations. The next logical step is then to optimize usage based on SLA.

Optimization is essentially standardization and automation with intelligence and objectives. The intelligence is why we need analytics designed into a cloud solution. With self-servicing, ubiquitous access, and elasticity, a cloud resource can technically be consumed anytime, by any number of users, and with capacities changed on demand. We need a mechanism to provide insights into usage, including when it is used, by whom, how much, to what degree, etc. A consumption-based chargeback or show-back model is what the analytics must provide so that system management can optimize resources accordingly and in a timely manner.

Integrating resource pooling and virtualization concepts into the OS has been happening since Windows Server 2012 R2 and System Center 2012. Server virtualization, network virtualization, and storage virtualization are now integrated into the Windows platform and are part of the server OS.

Fabric

Fabric is a significant abstraction in cloud computing. It implies discoverability and manageability, which denotes the ability to discover, identify, and manage a datacenter resource. Conceptually, fabric is an umbrella term encompassing all the datacenter physical and virtualized resources supporting a cloud computing environment. At the same time, a fabric controller represents the system management solution which manages, i.e. owns, the fabric.

In cloud architecture, fabric consists of the three resource pools: compute, networking, and storage. Compute provides the computing capabilities, executes code, and runs instances. Networking glues the resources together based on requirements. And storage is where VMs, configurations, data, and resources are kept. Fabric shields a user from the physical complexities of the three resource pools. All operations are eventually managed by the fabric controller of a datacenter. Above the fabric, there are logical views of consumable resources including VMs, virtual networks, and logical storage drives. By deploying VMs, configuring virtual networks, or acquiring storage, a user consumes resources. Under the fabric, there are virtualization and infrastructure hosts, Active Directory, DNS, clusters, load balancers, address pools, network sites, library shares, storage arrays, topology, racks, cables, etc., all under the fabric controller’s command to collectively form the fabric.

For a service provider, building a cloud computing environment is essentially to establish a fabric controller and construct fabric: institute a comprehensive management solution, build the three resource pools, and integrate server virtualization, network virtualization, and storage virtualization to form the fabric. From a user’s point of view, how and where a resource is physically located is not a concern. The user experience and satisfaction are largely based on the accessibility, readiness, and scalability of requested resources and on fulfillment of the SLA.

Cloud

This term is well defined by NIST SP 800-145 and the 5-3-2 Principle of Cloud Computing. We need to be very clear on what a cloud must exhibit (the five essential attributes), how to consume it (with SaaS, PaaS, or IaaS), and the model in which a service is deployed (private cloud, public cloud, or hybrid cloud). Cloud is a concept, a state, a set of capabilities such that the capacity of a requested resource can be delivered as a service, i.e. available on demand.

The architecture of a cloud computing environment is presented with three resource pools: compute, networks, and storage. Each is an abstraction provided by a virtualization layer. Server virtualization presents a compute pool with VMs, which supply the computing power to execute code and run instances. Network virtualization offers a network pool and is the mechanism that allows multiple tenants with identical network configurations on the same virtualization hosts, while connecting, segmenting, and isolating network traffic with virtual NICs, logical switches, address spaces, network sites, IP pools, etc. Storage virtualization provides a logical storage device whose capacity appears continuous by aggregating the capacity of a pool of storage devices behind the scenes. The three resource pools together form an abstraction, namely fabric, such that although the underlying physical infrastructure may be intricate, the user experience above the fabric is presented in a logical and consistent fashion. Deploying a VM, configuring a virtual network, or acquiring storage is transparent with virtualization, regardless of where the VM actually resides, how the virtual network is physically wired, or which devices are included in the aggregate providing the requested storage.

Closing Thoughts

Cloud is a consumer-focused approach. It is about enabling a customer to consume resources on demand and at scale. And perhaps more significant than the need to increase capacity on demand is the ability to release resources when they are no longer required. Cloud is not about products and technologies. Cloud is about standardizing, automating, and optimizing the consumption of resources and ultimately strengthening the bottom line.