Here’s my presentation in pdf format.
As IT considers and adopts cloud computing, it is crucial to understand the fundamental difference between cloud and virtualization. For many IT pros the questions are where to start and what the road map is from on-premises to cloud, to hybrid cloud. In my previous memo to IT pros on cloud computing I highlighted a number of considerations when examining virtualization and cloud. In this article, I detail more of the underlying approach of the two and how they differ in problem domain, scope of impact, and, most significantly, service model.
In the context of cloud computing, virtualization is the capability to run multiple server, network, or storage instances on the same hardware, while each instance runs as if it were the only instance occupying and consuming the entire underlying hardware. Notice that networking and storage devices generally present themselves via some form of an OS instance; that is, for network and storage there is also a software component. Here, however, I use the term hardware to include both the software and the hardware needed to form an underlying layer for a network or a storage device to run.
Virtualization is an enabler of cloud computing and a critical component in a cloud implementation. Virtualization is, however, not cloud computing and should not be considered at the same level, as illustrated in the figure later in this article. IT pros must recognize that virtualization is a means, while cloud computing is an intermediate step and also a means toward the ultimate goal, which is to deliver a service to a customer.
These terms, virtualization, cloud, and service, are chosen carefully, and are not to be confused or used interchangeably with simulation, infrastructure, or application, for instance. The three terms signify specific capabilities and represent an emerging model of solving business problems with IT.
Here, I will present the concepts of virtualization, cloud, and then service to make the following points:
- Cloud computing is to deliver services on a user’s demand, while virtualization is a technical component enabling cloud computing.
- Virtualization is a necessary, yet not sufficient condition for cloud computing.
First, to fully appreciate the role virtualization plays in IT, let's realize the following.
Isolation vs. Emulation/Simulation
A key concept in virtualization technologies is the notion of isolation. The term isolation signifies the ability to provide an instance of a server, network, or storage a logical (i.e. virtualized) environment such that the instance runs as if it were the only instance of its kind occupying and consuming the entire underlying supporting layer. This isolation concept differentiates virtualization from so-called emulation and simulation.
Emulation and simulation both mimic the runtime environment of an application instance. When running multiple emulated or simulated runtime environments, however, each remains within a common, single OS instance. Essentially, emulation or simulation runs multiple runtime environments in a shared address space, so either approach can introduce resource contention and limit performance and scalability rather quickly. Virtualization, by contrast, isolates (or virtualizes) at an OS level by letting each virtualized environment have its own OS with virtual CPUs, network configuration, and local storage to form a virtual machine, or VM. A VM can and will actually schedule assigned virtual CPUs, hence the underlying physical CPUs, based on an associated VM configuration. As explained below, this is server virtualization.
Virtualization in the Context of Cloud Computing
Cloud computing recognizes virtualization in three categories: server virtualization, network virtualization, and storage virtualization. Servers represent the computing capabilities, the abilities to schedule CPUs and execute compute instructions. Network directs, separates, or aggregates communication traffic among resources, while storage is where applications, data, and resources are stored.
These three are all implemented with virtualization, and each plays a vital part. The integration of the three forms what is referred to as “fabric,” as explained in the article “Fabric, Oh, Fabric.”
Hence server virtualization means running multiple server instances on the same computer hardware, while each server instance (i.e. VM) runs in isolation; that is, each server instance runs as if it were the only instance occupying and consuming the entire hardware.
Network virtualization, similarly, is the ability to establish multiple network instances, i.e. IP addressing schemes, name resolution, subnets, etc., on the same network layer while each network instance runs in isolation. Here I speak of a network layer rather than network hardware, since a network instance is wrapped within, and requires, a form of OS which runs on computer hardware; networking, too, has a software component.
Multiple network instances in isolation means each network instance exists as if its network configuration were unique on the supporting network layer. For instance, in a network virtualization environment there can be multiple tenants, all with an identical network configuration like 192.168.x.x. When the same IP address, say 192.168.1.1, is referenced within each virtual network, virtualization ensures that tenant A's network and tenant B's network keep operating correctly, and a reference within one network will not be confused with one in the other network, even though both have the same IP configured in their respective logical networks.
A similar concept applies to storage virtualization, which is the capability to aggregate storage devices of various kinds while presenting them all as one logical device with a continuous storage space. For instance, storage virtualization enables a user to form a logical storage volume by detecting and selecting storage devices with unallocated storage on various interfaces like SATA, SAS, IDE, SCSI, iSCSI, and USB. The volume aggregates all available storage from the selected devices and is presented to a user as a single formatted volume with continuous storage. The concept has long been realized by SAN technologies, which are expensive and specialized storage virtualization solutions. Lately, storage virtualization has become a software solution, been integrated into the OS, and become a core IT skill and an integral part of JBOD storage solutions.
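As a concrete illustration of OS-integrated, JBOD-style storage virtualization, Windows Server Storage Spaces can aggregate poolable disks into one logical volume with a few PowerShell cmdlets. This is a minimal sketch; the pool name, disk name, and drive letter are placeholders, and it assumes the machine has spare disks eligible for pooling:

```powershell
# Discover physical disks that are eligible to be pooled (JBOD).
$disks = Get-PhysicalDisk -CanPool $true

# Aggregate them into a storage pool on the local storage subsystem.
$subsystem = (Get-StorageSubSystem | Select-Object -First 1).FriendlyName
New-StoragePool -FriendlyName "JbodPool" `
    -StorageSubSystemFriendlyName $subsystem `
    -PhysicalDisks $disks

# Carve one logical disk out of the pool, spanning all available space.
New-VirtualDisk -StoragePoolFriendlyName "JbodPool" `
    -FriendlyName "DataDisk" -ResiliencySettingName Simple -UseMaximumSize

# Present it to the user as a single formatted volume (E:).
Get-VirtualDisk -FriendlyName "DataDisk" |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -DriveLetter E -UseMaximumSize |
    Format-Volume -FileSystem NTFS -Confirm:$false
```

The user sees one continuous E: volume regardless of how many disks, or of which interface types, back the pool.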
Fabric represents three things: compute, network, and storage. Compute is basically the ability to execute instructions in CPUs. Network is how resources are glued together. Storage is where applications and data are stored. This fabric concept is something we have all been exercising often, sometimes without being fully aware of it. For instance, when buying a computer, we mainly consider a number of items: the chip family and processor speed (compute), and the hard disk capacity and spindle speed (storage), while network capabilities are expected and integrated into the motherboard. The fact is that when buying a laptop, we focus on the fabric configuration, which forms the capabilities of a solution, here a laptop, a personal computing solution. This practice is applicable from an individual's hardware purchase to IT's running datacenter facilities. For cloud computing, because of the transparency of the physical locations and hardware where applications are running, we can use fabric to collectively represent the IT capabilities of compute, network, and storage relevant to a datacenter.
The term fabric controller denotes the management solution for managing compute, network, and storage in a datacenter. A fabric controller, as the name indicates, controls and is the owner of a datacenter's fabric. The what, when, where, and how of IT capabilities in a datacenter are all managed by this logical role, the fabric controller.
There is another significance to employing the term fabric. It signifies the ability to discover, identify, and manage a resource. In other words, a fabric controller knows if there are newly deployed resources and is able to identify, bring up, monitor, or shut down resources within the fabric.
Virtualization and Fabric
From a service provider's viewpoint, constructing a cloud is mainly constructing the cloud fabric, so that all resources in the cloud can be discovered, identified, and managed. From a user's point of view, all consumable resources in the cloud are above the virtualization layer.
Cloud relies on virtualization to form the fabric which wraps and shields the underlying technical and physical complexities of servers, networks, and storage infrastructures from a user. Virtualization is an enabling technology for cloud computing; it is nevertheless not cloud computing itself.
Commitments of Cloud Computing
What is cloud? Cloud is a state, a set of capabilities, which enables a user to consume IT resources on demand. Cloud is about the ability to consume resources on demand, not about implementations. What cloud guarantees are the capabilities which a solution must deliver. Cloud is not about the existence of particular computer programs or the employment of specific products or technologies.
Above all, NIST SP 800-145 (800-145) has defined a set of criteria including essential characteristics, service models, and deployment models. NIST essentially specifies what capabilities a cloud has and how they must be delivered. 800-145 has been widely adopted as an industry reference for IT adopting cloud. Here is how I interpret the definition's essential characteristics of cloud computing.
- A cloud must enable a user to self-serve. That means a user calls no one and emails no one when consuming cloud resources. A user should be able to serve oneself to allocate storage, create networks, deploy VMs, and configure application instances without the service provider's intervention. This requirement alone is already very significant.
- A cloud must be generally accessible to a user. Self-service and ubiquitous access are complementary: self-service implies ubiquitous access, while ubiquitous access enables self-service. You can't self-serve if you can't get to it. This requirement is mandatory and logical.
- A cloud employs resource pooling. In my view, this actually implies three things: standardization, automation, and optimization. A pool is a metaphor for a collection of things meeting the same set of criteria; a swimming pool of water is an obvious example. Putting things in a resource pool implies that there is a standardization process, based on set criteria, to prepare resources for automation. Automation improves efficiency, though not necessarily the overall productivity. For instance, shortening a process from 8 hours to 2 hours by automation does not by itself increase productivity accordingly, since the assembly line may be idling for the remaining 6 hours unless relevant materials keep coming into the process line. Although this example is trivial, its applicability is apparent: resource pooling must encompass standardization, automation, and optimization as a whole to better manage cloud resources.
- Elasticity is perhaps one of the most referenced cloud computing terms, since it vividly describes cloud computing's ability to grow and shrink the capacity of a target resource in response to demand, much as a rubber band expands and retracts as the applied force varies.
- Finally, there is measured service. Some call it a charge-back or show-back model; others may state it as pay-as-you-go. The significance of either is the underlying requirement of designed-in analytics. If a cloud enables a user to self-serve, is accessible anytime, and grows and shrinks based on demand, as the first four essential characteristics state, we need a fundamental way to understand and develop data usage patterns to carry out realistic capacity planning. Analytics is an essential part of cloud computing; it must be architected and designed into a cloud solution and is definitely not an afterthought.
The above are the five essential characteristics defined in 800-145. In other words, to be qualified as a cloud, these five attributes are expected, given, guaranteed, and committed. This is tremendous for both users and IT.
Is Virtualization Cloud Computing?
The answer should be an obvious “NO.” In virtualization, the requirement is rooted in providing an isolated environment. There is no requirement that a virtualization solution enable self-service, offer general accessibility, employ standardization, automation, and optimization as a whole, grow and shrink virtualized resources based on demand, or provide analytics of consumption.
Unique Values of Cloud Computing
There are three service models (IaaS, PaaS, and SaaS) for consuming cloud as defined in 800-145. From the definitions, it is apparent that software means an application, platform means an application runtime, and infrastructure implies a set of servers (or VMs) forming the fabric of a target application.
Here are some observations from examining these service models. There are three sets of capabilities offered by cloud, i.e. three ways a user can consume cloud:
- Enabling a user to consume an application as a service is SaaS.
- Providing an application runtime such that a user can develop and deploy a target application as a service is PaaS.
- Provisioning application fabric, i.e. compute, network and storage, for a target application is IaaS.
Notice that all three must be delivered as a service. Application, platform, and infrastructure are described at length in 800-145; however, it remains unclear what a service is. This is an essential term which, surprisingly, NIST did not define in 800-145. In addition to what has been stated in “An Inconvenience Truth of the NIST Definition of Cloud Computing, SP 800-145,” this is another unfortunate omission from the NIST definition. So what is a service? Without some clarity, IaaS/PaaS/SaaS are destined for ambiguity.
“Service” in this context simply means “capacity on demand,” or just “on demand.” The three service models are therefore:
- IaaS: Infrastructure, or a set of VMs which as a whole forms an application infrastructure, is available on demand
- PaaS: Platform or an application runtime is available on demand
- SaaS: Software or an application is available on demand
On-demand is a very powerful word. In this context, it means anytime, anywhere, on any network, and with any device. Cloud enables a user to consume an application, deploy an application to a runtime environment, or build application fabric as a service. In other words, anytime, anywhere, on any network, and with any device, a user is guaranteed to be able to build application fabric (IaaS), deploy an application (PaaS), and consume an application (SaaS). This is where cloud delivers most of its business value: by committing the readiness for a user to consume cloud resources anytime, anywhere, on any network, with any device.
On-demand also implies a series of management requirements, including monitoring, availability, and scalability, relevant to all five essential characteristics. To make a resource available on demand, the fabric controller must know everything about the resource. The on-demand requirement, i.e. delivery as a service, is one which a cloud service provider must ensure every delivery fulfills.
Cloud Defines Service Models, Virtualization Does Not
Virtualization, on the other hand, defines what it delivers, i.e. an isolated environment for delivering compute, network, or storage resources. Virtualization does not, however, define a service model. For instance, server virtualization is the ability to create VMs; it does not define whether a VM must be instantaneously created and deployed upon a user's request anytime, anywhere, on any network, and with any device. This missing service model is the most obvious reason why virtualization is not cloud, and why cloud goes far beyond virtualization. Virtualization and cloud contribute business value in different dimensions: the effects of the former are localized and limited, while the impacts of the latter are fundamental and significant.
Cloud Deployment Models
For cloud, 800-145 defines four deployment models: public, community, private, and hybrid. Those with different views (see “An Inconvenience Truth of the NIST Definition of Cloud Computing, SP 800-145”) may consider otherwise. Regardless of how many deployment models there are, the classification addresses the ramifications in ownership, manageability, security, compliance, portability, licensing, etc. introduced by deployment.
Virtualization has no concern with a deployment model. The scope of virtualization is confined to the ability to virtualize a compute, network, or storage instance. Where a virtualized instance is deployed, who is using it, and so on have no influence on how the virtualized instance is constructed. What is running, and how an application is licensed, within a VM, a logical network, or a JBOD volume is not a concern of virtualization. The job of virtualization is to construct a virtual instance and let the fabric controller manage the rest. For instance, a VM is a VM regardless of whether it is deployed to a company host, a vendor's datacenter, or a personal laptop. It is still a VM as far as virtualization is concerned.
Examining the definitions of virtualization and cloud, it is clear that
- Virtualization assures system integrity within a VM, a network, or a storage volume while multiple instances of a kind run on the same underlying hardware. Cloud computing, at the same time, guarantees a user experience presented by the five essential characteristics.
- Virtualization does not define how a VM is to be delivered, while cloud computing commits that all deliveries are on a user's demand.
- Virtualization technology is indifferent to where a VM, a logical network, or a JBOD volume is deployed and who the intended users are. Cloud computing, on the other hand, categorizes deployment models to address the business impact on management, audit, security, compliance, licensing, etc.
Overall, cloud is about the user experience: enabling consumption on demand and rapid, even proactive, response to expected business scenarios regarding resource capacity. Virtualization, on the other hand, is a technical component which isolates multiple VM, network, or storage instances running on the same hardware layer. The focus and impact of virtualization are relatively localized and limited.
It should be very clear that cloud and virtualization operate at different levels with very different scopes of impact. Needless to say, virtualization is not cloud, and cloud goes far beyond virtualization.
In this presentation, I walked screen by screen through the installation and configuration of Azure PowerShell. There are two ways to connect PowerShell with an Azure subscription: one uses Azure Active Directory, and the other uses a publish-settings file. Both are detailed in this delivery. I also demonstrated a simple routine to remotely stop and deallocate a VM instance with Azure PowerShell cmdlets.
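The two connection methods and the stop-and-deallocate routine can be sketched as follows with the Azure Service Management cmdlets of that era. The subscription, cloud service, and VM names are placeholders:

```powershell
# Option 1: sign in interactively via Azure Active Directory.
Add-AzureAccount

# Option 2: use a publish-settings file.
Get-AzurePublishSettingsFile                              # opens a browser to download the file
Import-AzurePublishSettingsFile "C:\azure\my.publishsettings"

# Pick the subscription to operate on.
Select-AzureSubscription -SubscriptionName "MySubscription"

# Remotely stop AND deallocate a VM so it stops accruing compute charges.
# Without -StayProvisioned, Stop-AzureVM deallocates the instance.
Stop-AzureVM -ServiceName "myCloudSvc" -Name "myVM" -Force
```

Note the distinction: `Stop-AzureVM -StayProvisioned` keeps the VM provisioned (and billed), while the form above releases the compute resources.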
To benefit most from this content, you should have already reviewed the Azure 101 Series, http://aka.ms/Prerequisites, which is a prerequisite for those attending Microsoft IT Camps or Azure-related events.
This Azure 101 series presents a set of core competencies of Microsoft Azure for IT professionals and serves as prerequisites (http://aka.ms/Prerequisites) for attending Microsoft events relevant to Microsoft Azure including:
In each topic, I walked through the specifics of the processes and steps with screen-by-screen details of the examined scenarios. This post is specific to the Microsoft Azure cloud service model, VM deployment, and user experience.
To publish a gallery item in Windows Azure Pack (WAP), the associated OS image disks, i.e. vhd files, must be set according to what's in the readme file of a gallery resource package. For those who are not familiar with the operations, this can be a frustrating learning experience before finally getting it right. This blog post addresses that concern by presenting a routine, with a sample PowerShell script [download], to facilitate the process.
Required Values of OS Image Disks for WAP Gallery Items
For example, below is an excerpt from the readme file of the gallery resource package, Windows Server 2012 R2 VM Role. It lists the specific property values for WAP to recognize a vhd as an applicable OS image disk for the Role. More about WAP gallery resources is available at http://aka.ms/WAPGalleryResource.
Once a gallery item is introduced into vmm and WAP, the item becomes available when a tenant is provisioning a VM Role, as shown below.
There are several steps involved including:
- Prepping vhds and importing the resource extension of the gallery item, as applicable, to vmm server library shares
- Importing resource definition to WAP
Here, prepping vhds is the focus, and the process and operations are rather mechanical, as detailed in the following.
Process and Operations
The script below illustrates a routine for a vmm administrator to set the required property values on applicable OS image disks in a target vmm server's library shares. This sample script is available for download.
Line 23 connects to a target vmm server.
Line 25 builds a list of vhd objects with the prefix, ws2012r2, in their names, which suggests that a vmm administrator should develop a meaningful naming scheme for referenced resources.
Lines 27 and 28 display the settings of the vhd files before making changes.
Lines 30 to 35 set the values of specific fields, including OS, familyname, and release, according to the readme file of a particular gallery resource package, for example, WS2012_R2_WG_VMRole_Pkg. If preferred, one can also set a default product key on a vhd.
The foreach loop goes through each vhd in the list and sets the values. WAP references the tag values of a vhd file to determine whether the vhd is applicable for various workloads. Make sure to add all the tag values specified in the readme file, as demonstrated between lines 41 and 44 where the list is built. Lines 46 to 52 set all specified values on the corresponding property fields of the currently referenced vhd file.
Finally, upon finishing the foreach loop, lines 56 and 57 present the updated settings of the processed vhd files for verification.
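The routine walked through above can be sketched as follows. This is a reconstruction, not the downloadable script itself, so the server name, name prefix, and readme values (family name, release, tags) are illustrative placeholders to be replaced with what a particular gallery resource package's readme specifies:

```powershell
# Connect to the target vmm server (placeholder name).
Get-SCVMMServer -ComputerName "vmm01.contoso.local"

# Build the list of vhds whose names carry the agreed-upon prefix.
$vhds = Get-SCVirtualHardDisk | Where-Object { $_.Name -like "ws2012r2*" }

# Display the settings before making changes.
$vhds | Format-List Name, OperatingSystem, FamilyName, Release, Tag

# Values from the readme file of the gallery resource package (illustrative).
$os         = Get-SCOperatingSystem |
              Where-Object { $_.Name -eq "Windows Server 2012 R2 Standard" }
$familyName = "Windows Server 2012 R2 Standard"
$release    = "1.0.0.0"
$tags       = @("WindowsServer2012R2")   # add every tag the readme specifies

# Set the required property values on each vhd.
foreach ($vhd in $vhds) {
    Set-SCVirtualHardDisk -VirtualHardDisk $vhd `
        -OperatingSystem $os `
        -FamilyName $familyName `
        -Release $release `
        -Tag $tags
}

# Present the updated settings for verification.
$vhds | Format-List Name, OperatingSystem, FamilyName, Release, Tag
```

The same skeleton applies to any gallery item: only the prefix, property values, and tag list change per the package's readme.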
Here’s an example of running the script:
At this time, with correctly populated property values and tags, the vhds are ready for this particular WAP gallery item, Windows Server 2012 R2 VM Role.
For all the gallery items which WAP displays, a vmm administrator must reference the readme file of each gallery resource package and carry out the above exercise to set the property values of the applicable OS image disks. Pay attention to the tags. A missing tag may invalidate an image disk for some workload and inadvertently prevent that workload from being available for a tenant to provision an associated VM Role in WAP, even though the OS is properly installed on the disk.
The tasks of prepping OS image disks for WAP gallery items are simple and predictable. Each step is nevertheless critical for successfully publishing an associated gallery item in WAP. As with many mechanics, understand the routine, practice, and practice more. A vmm administrator needs to perform these operations with confidence and precision. The alternative is needless frustration and delay, both of which are absolutely avoidable at this juncture of deploying WAP.
Continuing our “Accelerate DevOps with the Cloud” series on TechNet Radio, Yung Chou welcomes Sr. Program Manager Charles Joy to the show as they discuss the importance of automation in your datacenter, especially when it comes to advancing your DevOps strategy.
- [2:36] How does automation help organizations accelerate the delivery of new solutions as they move to the Cloud?
- [5:18] What tools and resources are available to help IT Pros get started with automation? Do they need to be a professional “scripter”?
- [6:04] Do IT Pros need to learn a different set of tools for automating each component?
- [6:26] If an IT Pro is automating cloud resources in Azure, do they have to spin up an entire set of infrastructure components just to handle automation? How does Azure automation organize and leverage these automation sequences?
- [7:22] How can Runbooks be triggered? Based on schedule? Based on other events?
- [8:24] Is Azure Automation extensible? Can I incorporate other PowerShell modules?
- [9:10] DEMO: Quick walkthrough of Azure Automation accounts, assets, runbooks, schedule
Websites & Blogs:
This presentation focuses on
- Microsoft Azure Infrastructure Services essentials
- Windows AD operability in Microsoft Azure
It is not about
- Windows AD design, implementation, or sys admin
- Microsoft Azure Active Directory
Call to Action
- Get it! Microsoft Azure 30-Day free trial
- Learn it! Microsoft Azure Infrastructure Services
- Check it! StaticVNet IP
- Find it! Azure pricing
- What-if!! Azure cost calculator
- Know it! Azure compliance
- Read it! Microsoft Azure SLA
My presentation on Deploying Windows 8.1 and Windows 7 in Microsoft Azure
- 30-Day free trial – http://aka.ms/R2
- VM deployment methodology – http://aka.ms/AzureIaaSMethod
- User experience (Gallery, Quick Create)
- Storage Account
- Cloud Service
- Capturing (live) image
- Attaching data disk
- RDP experience
- Azure pricing – http://aka.ms/Pricing
- Azure what-if cost calculator – http://aka.ms/Calculator
- Azure compliance – http://aka.ms/AzureCompliance
- Azure SLA – http://aka.ms/AzureSLA
Build Your Lab! Download Windows Server 2012 R2, System Center 2012 R2 and Hyper-V Server 2012 R2 and get the best virtualization platform and private cloud management solution on the market. Try it FREE now!
Conventional wisdom refers to DevOps as a strategy, an approach, or, as some call it, a movement. DevOps addresses the idea that Development and Operations need to be well aligned in an application lifecycle, working closely and collaboratively to eliminate inefficiency, reduce bottlenecks, and maximize productivity. The concept is certainly not new; for decades, business processes and operations based on software engineering concepts and lifecycle management methodologies have all tried to minimize inefficiency and maximize productivity. So what is different now?
DevOps in Cloud Computing
In cloud computing, infrastructure construction, run-time configuration, and application deployment are now delivered on demand, i.e. as a service, hence IaaS, PaaS, and SaaS. (1, 2) From an operations point of view, cloud computing eliminates most, if not all, physical aspects of Dev and Ops. The physical establishments of Dev and Ops, including servers, networks, and storage, are now a lesser concern in application administration, maintenance, and costs, since applications are based and operated upon logical artifacts like virtual machines, virtual networks, and virtual storage. From a user's point of view, infrastructure support, run-time operations, and application maintenance are now all logical models where Dev and Ops can operate on a common, i.e. identical, platform with standardized services from a cloud provider. This setting offers many opportunities to promote and practice DevOps. With cloud computing, the integration of Dev and Ops becomes a lucid and achievable proposal, both financially and administratively. DevOps is also an opportunity to further increase productivity, hence the ROI, of adopting cloud. I believe the inevitability of integrating Dev and Ops is quickly becoming apparent as IT continues to adopt cloud as a service delivery platform.
DevOps and Automation
Why automate? The reasons should be obvious. In addition to efficiency, there are also considerations of consistency, repeatability, and predictability in carrying out tasks programmatically. Considering Dev and Ops, automation is an effective vehicle to minimize user intervention from both Dev and Ops in establishing application infrastructure, configuring run-time, and deploying a target application. Automation provides consistency and predictability of application deployment with transparency to both Dev and Ops. The theme is that DevOps calls for automation, and automation sets DevOps in motion.
DevOps and Tools
Here, I highlight a few tools which can cultivate DevOps. Azure PowerShell cmdlets and the Cross-Platform Command Line Interface are installed on individual devices, and with them each of us can take a DevOps approach and automate as much as applicable in our deliveries. Parallelize processes, plan operations around logical units of work, separate business logic from data, and make automation a common work style.
Azure Automation is Microsoft's solution for IT automation. For those who have not had a chance to work with Microsoft System Center Orchestrator, it may present a learning curve. Orchestrator, as the name suggests, is a powerful component of System Center for automating and orchestrating a datacenter. You can consider it a turbo DevOps engine leaning towards the Ops side.
Visual Studio Lab Management, on the other hand, is the ultimate DevOps operating environment, facilitating entire application lifecycle management. From departments, business units, and organizations to an entire enterprise, Visual Studio Lab Management can be scoped accordingly.
Azure PowerShell is the tool to manage Microsoft Azure with Windows PowerShell. For those who are new to PowerShell, relevant information is readily available in the TechNet Library. To learn and assess Microsoft Azure cmdlets, one first needs to acquire a trial subscription, then download and install Microsoft Azure PowerShell, and finally configure a secure connection between the intended subscription and the Windows PowerShell environment on a local device. Instructions to install and configure Windows Azure PowerShell are well documented.
PowerShell Desired State Configuration (DSC) is an extension of Windows PowerShell in Windows Server 2012 R2 and Windows 8.1. Notice that as of May 2014, both Windows 7 and Windows 8.1 are also available for MSDN subscribers to deploy in Microsoft Azure. DSC provides a set of PowerShell cmdlets and resources that enable administrators to declaratively specify how to configure, manage, and maintain configuration data for services, and to manage the environment in which these services run, including:
- managing server roles and features, registry settings, processes, files and directories, local groups and user accounts, environment variables, etc.
- installing .msi and .exe packages
- discovering the current state of a given node and validating its configuration
DSC is a vehicle for realizing automation with predictability. Additional information on DSC is available elsewhere.
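A minimal DSC configuration illustrating the declarative style described above; the node name, feature, and paths are placeholders:

```powershell
# Declare the desired state: IIS installed and a working folder present.
# DSC converges the node to this state rather than scripting the steps.
Configuration WebServerBaseline {
    Node "Server01" {
        WindowsFeature IIS {
            Ensure = "Present"      # install the role if it is absent
            Name   = "Web-Server"
        }
        File WorkingFolder {
            Ensure          = "Present"
            Type            = "Directory"
            DestinationPath = "C:\inetpub\wwwroot\app"
        }
    }
}

# Compile the configuration into a MOF document, then push it to the node.
WebServerBaseline -OutputPath "C:\DSC\WebServerBaseline"
Start-DscConfiguration -Path "C:\DSC\WebServerBaseline" -Wait -Verbose
```

Running `Test-DscConfiguration` afterwards reports whether the node still matches the declared state, which is the discovery-and-validation capability noted in the list above.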
Azure Cross-Platform Command Line Interface (Xplat-CLI)
Both Puppet and Chef offer technologies to automate the deployment, configuration, and management of VMs across platforms. Microsoft Azure VMs support both as configuration extensions, as shown below. The Microsoft Azure VM Image Gallery also includes a pre-configured Puppet Master based on an Ubuntu Linux distribution. These additions offer vehicles to integrate with established DevOps communities and practices, facilitate hybrid cloud adoption with automated deployment across platforms, and make operating on Microsoft Azure essentially a continual DevOps practice.
Azure Automation is an IT automation solution for Microsoft Azure employing the concept of a runbook, as used in System Center Orchestrator. A runbook comprises activities, which are the steps, i.e. instructions, of an automated process, operation, or task. A runbook in Microsoft Azure is a Windows PowerShell workflow for automating the management and deployment of resources. Above all, an Azure runbook can automate essentially anything a Microsoft Azure PowerShell or Windows PowerShell script can perform.
To configure Azure Automation, one must first create an automation account which is a container for managing automation resources including runbooks, jobs, and assets. Each automation account is associated with an Azure subscription, and a subscription can have multiple automation accounts.
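A runbook of that era is a PowerShell workflow. The sketch below assumes a credential asset named AzureCred has already been created in the automation account; the subscription and cloud service names are placeholders:

```powershell
# A runbook to stop and deallocate every VM in a dev/test cloud service,
# e.g. on a nightly schedule, to save compute charges.
workflow Stop-DevVMs {
    # Authenticate with a credential stored as an automation asset
    # ("AzureCred" is a placeholder asset name).
    $cred = Get-AutomationPSCredential -Name "AzureCred"
    Add-AzureAccount -Credential $cred
    Select-AzureSubscription -SubscriptionName "MySubscription"

    # Stop and deallocate each VM in the target cloud service.
    $vms = Get-AzureVM -ServiceName "devCloudSvc"
    foreach ($vm in $vms) {
        Stop-AzureVM -ServiceName $vm.ServiceName -Name $vm.Name -Force
    }
}
```

Once published in the automation account, the runbook can be started manually, linked to a schedule asset, or invoked from another runbook.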
This feature is being previewed in Microsoft Azure at this time.
Managing development, test, staging, user acceptance test, demo, and other environments is a key part of application lifecycle management. Visual Studio Lab Manager, a feature of Team Foundation Server (TFS), facilitates the management of existing environments and simplifies the provisioning of new ones for all team members. It can automate the routines of building, deploying, running tests on, and removing a lab environment. Here, a lab environment is a collection of virtual and physical machines for developing and testing applications. A target lab environment, for instance, can now be automatically provisioned from templates with consistency and predictability, and reverted as needed to a specific point in time with snapshots. Visual Studio Lab Manager is raising application lifecycle management to a new standard.
For developers, cloud computing has rejuvenated DevOps with a higher intensity and a bigger ambition. The integration and collaboration of Dev and Ops, from a user's point of view, are now most pertinent at a logical level, above the fabric, i.e. the virtualization layer. DevOps is no longer just a right idea but an approachable proposition with cloud computing, both administratively and financially. An application lifecycle management environment like Visual Studio takes full advantage of System Center and Hyper-V and integrates Dev with Ops at an enterprise architecture level and with methodologies. The result of practicing DevOps will be timely, impactful, and rewarding.
For IT professionals, DevOps signifies a call to action for datacenter automation with a comprehensive systems management solution. DevOps is much more than just automation; nevertheless, automation is an essential step toward DevOps. Above all, I view DevOps as an exciting prospect, with a strategic roadmap, for IT professionals like you and me to continue growing professionally and to explore career opportunities in the DevOps or datacenter automation discipline in this cloud computing era.