Disabling/Enabling Azure VM BootDiagnostic Using PowerShell

Although this is a simple operation, apply the following sample statements to your environment if you are experiencing an issue changing an Azure VM's BootDiagnostic setting.

ISSUE

  • Not able to disable/enable BootDiagnostic of an Azure VM

HOW-TO

SAMPLE STATEMENTS

The same process applies to enabling BootDiagnostic: in step 4, specify the VM object along with the associated resource group and an intended storage account (see the enable example following the sample statements below).

<# 
Disabling Azure VM BootDiagnostic Using PowerShell 

The following illustrates the process to disable VM BootDiagnostic. 
The statements are intended to be executed manually and in sequence. 
#>
 
# 1. Log in Azure and set the context, as appropriate
Connect-AzAccount
Get-AzContext
Set-AzContext -Subscription '????' -Tenant '????'
 
# 2. Specify a target VM 
$vmName = 'your vm name'
$vmRG = 'the resource group name of the vm'
 
# 3. Check the current BootDiagnostics status
($VM = Get-AzVM -ResourceGroupName $vmRG -Name $vmName).DiagnosticsProfile.BootDiagnostics
 
# 4. Disable BootDiagnostic of the VM
Set-AzVMBootDiagnostic -VM $VM -Disable
 
# 5. Update the VM settings
Update-AzVM -ResourceGroupName $vmRG -VM $VM
 
# 6. Check the current BootDiagnostics status and verify the change made
($VM = Get-AzVM -ResourceGroupName $vmRG -Name $vmName).DiagnosticsProfile.BootDiagnostics
 
# Notice it may take a few minutes for the Azure portal to reflect 
# the changes made to BootDiagnostic.
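
As noted above, the same sequence enables BootDiagnostic; only step 4 differs, taking a resource group and a storage account to hold the boot diagnostics data. The following is a minimal sketch; the variables $bootDiagRGName and $bootDiagStorageName are placeholders for an existing resource group and storage account in your environment.

# 4. (Alternative) Enable BootDiagnostic of the VM with a designated storage account
Set-AzVMBootDiagnostic `
    -VM $VM `
    -Enable `
    -ResourceGroupName $bootDiagRGName `
    -StorageAccountName $bootDiagStorageName

# 5. Update the VM settings
Update-AzVM -ResourceGroupName $vmRG -VM $VM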
 

SAMPLE SESSION

  • Examine status before making a change

BootDiagnostic Setting

  • Disable BootDiagnostic

  • Examine status after making the change

Creating Azure Managed Disk with VHD Using PowerShell


#region [CREATE MANAGED DISK WITH VHD]

write-host "
------------------------------------------------------------

This script is based on the following reference and for
learning Azure and PowerShell. I have made changes to the
original script for clarity and portability.

Ref: Create a managed disk from a VHD file
https://docs.microsoft.com/en-us/azure/virtual-machines/scripts/virtual-machines-windows-powershell-sample-create-managed-disk-from-vhd

Recommend manually running the script statement by
statement in cloud shell.

© 2020 Yung Chou. All Rights Reserved.

------------------------------------------------------------
"

#region [CUSTOMIZATION]

#region [Needed only if an account owns multiple subscriptions]

Get-AzSubscription | Out-GridView  # Copy the target subscription name

# Set the context for subsequent operations
$context = (Get-AzSubscription | Out-GridView -Title 'Set subscription context' -PassThru)
Set-AzContext -Subscription $context.Id | Out-Null
write-host "Azure context set for the subscription, `n$((Get-AzContext).Name)" -f green

#endregion

$sourceVhdStorageAccountResourceId = '/subscriptions/…/StorageAccounts/'
$sourceVhdUri = 'https://.../.vhd'

#Get-AzLocation
$sourceVhdLoc = 'centralus'

$mngedDiskRgName ="da-mnged-$(get-date -format 'mmss')"
#$mngedDiskRgName ='dnd-mnged'

#Provide the name of a to-be-created Managed Disk
$mngedDiskName = 'myMngedDisk'
$mngedStorageType = 'Premium_LRS' # Premium_LRS,Standard_LRS,Standard_ZRS
$mngedDiskSize = '128' # In GB greater than the source VHD file size

#endregion

if (($existingRG = (Get-AzResourceGroup | Where {$_.ResourceGroupName -eq $mngedDiskRgName})) -eq $Null) {
write-host "Resource group, $mngedDiskRgName, not found, creating it" -f y
New-AzResourceGroup -Name $mngedDiskRgName -Location $sourceVhdLoc  # use the source VHD's region
} else {
write-host "Using this resource group, $mngedDiskRgName, for the managed disk, $mngedDiskName" -f y
}

$diskConfig = New-AzDiskConfig `
-AccountType $mngedStorageType `
-Location $sourceVhdLoc `
-CreateOption Import `
-StorageAccountId $sourceVhdStorageAccountResourceId `
-SourceUri $sourceVhdUri

New-AzDisk `
-Disk $diskConfig `
-ResourceGroupName $mngedDiskRgName `
-DiskName $mngedDiskName `
-Verbose
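
# Optionally verify the newly created managed disk before using it;
# Name, DiskSizeGB, and ProvisioningState are standard Get-AzDisk output properties.
Get-AzDisk -ResourceGroupName $mngedDiskRgName -DiskName $mngedDiskName |
    Select-Object Name, DiskSizeGB, ProvisioningState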

#endregion [CREATE MANAGED DISK WITH VHD]

#region [Clean up]
# Remove-AzResourceGroup -Name $mngedDiskRgName -Force -AsJob
#endregion

Deploying Azure VM with Diagnostics Extension and Boot Diagnostics

This is a sample script for deploying an Azure VM with a Diagnostics Extension and Boot Diagnostics, each in a different resource group. The intent is to clearly illustrate the process and the required operations, with minimal effort spent on code optimization.

Ideally, an Azure VM, its Diagnostics Extension, and Boot Diagnostics are deployed within the same resource group. In production, however, it may be necessary to organize them into individual resource groups for standardization, which is what this script demonstrates.

The script can be run as it is, or simply make changes in the customization section and leave the rest in place. For the VM Diagnostics Extension, the configuration file should be placed in the same folder as the script, or update the variable $diagnosticsConfigPath accordingly. This script accesses the diagnostics storage account by its Storage Account Key, which allows the storage account to be in a subscription different from the one that deploys the VM. A sample configuration file, diagnostics_publicconfig_NoStorageAccount.xml, is available; notice there is no <StorageAccount> element specified in this file.

Here's the user experience up to finishing the [Deploying] section of the script. By default, an Azure VM is deployed with Boot Diagnostics enabled; upon deploying the VM, the script disables it. The following sample run took 3 minutes and 58 seconds.

Deploying Azure VM and setting Boot Diagnostics as disabled

Now with an Azure VM in place, the script adds the VM Diagnostics Extension, followed by enabling Boot Diagnostics. Here, each extension uses a storage account in a resource group different from the VM's, so this script creates three resource groups: one for the VM itself, one for the Diagnostics Extension, and one for the Boot Diagnostics of the VM.

VM, Diagnostics, and Boot Diagnostics deployed with individual resource groups



write-host "
---------------------------------------------------------

This is a sample script for deploying an Azure VM
with Diagnostics Extension and Boot Diagnostics,
while each in a different resource group.

The intent is to clearly illustrate the process with
required operations, while paying minimal effort for
code optimization.

Ideally an Azure VM, Diagnostic Extension, and Boot Diagnostics
are to be deployed with the same resource group. However
in production, it may be necessary to organize them into
individual resource groups for standardization,
which is what this script demonstrates.

The script can be run as it is. Or simply make changes
in the customization section, while leaving the rest in place.
For VM Diagnostic Extension, the configuration file should
be placed where this script is, or update the variable,
$diagnosticsConfigPath, accordingly. This script uses a
Storage Account Key for access. This configuration allows
the storage account to be in a subscription different from
the one that deploys the VM. A sample configuration file,
diagnostics_publicconfig_NoStorageAccount.xml, is available at

https://1drv.ms/u/s!AuraBlxqDFRshVSl0IpWcsjRQkUX?e=3CGcgq

and notice there is no <StorageAccount> element specified in this file.

© 2020 Yung Chou. All Rights Reserved.

---------------------------------------------------------
"

Disconnect-AzAccount; Connect-AzAccount
# If using multiple subscriptions
# Set-AzContext -SubscriptionId "xxxx-xxxx-xxxx-xxxx"

#region [Customization]

$cust=@{
initial='yc'
;region='southcentralus'
}

$diagnosticsConfigPath='diagnostics_publicconfig_NoStorageAccount.xml'

#region [vm admin credentials]

# 1. To hard-code
$cust+=@{
vmAdmin ='changeMe'
;vmAdminPwd='forDemoOnly!'
}
$vmAdmPwd=ConvertTo-SecureString $cust.vmAdminPwd -AsPlainText -Force
$vmAdmCred=New-Object System.Management.Automation.PSCredential ($cust.vmAdmin, $vmAdmPwd);

# 2. Or interactively
#$vmAdmCred = Get-Credential -Message "Enter the VM Admin credentials."

#endregion

$tag=$cust.initial+(get-date -format 'mmss')
Write-host "`nSession ID = $tag" -f y

# Variables for common values
$vmRGName=$tag+'-RG'
$loc=$cust.region
$vmName=$tag+'vm'

$deployment=@{
vmSize='Standard_B2ms'
;dataDiskSizeInGB=5
;publisher='MicrosoftWindowsServer'
;offer='WindowsServer'
;sku='2016-Datacenter'
;version='latest'
;vnetAddSpace='192.168.0.0/16'
;subnetAddSpace='192.168.1.0/24'
}

#endregion

#region [Deployment Prepping]

# Create a resource group
New-AzResourceGroup -Name $vmRGName -Location $loc
# Remove-AzResourceGroup -Name $vmRGName -AsJob

# Create a subnet configuration
$subnetConfig = `
New-AzVirtualNetworkSubnetConfig `
-Name 'default' `
-AddressPrefix ($deployment.subnetAddSpace) `
-WarningAction 'SilentlyContinue'

# Create a virtual network
$vnet = `
New-AzVirtualNetwork `
-ResourceGroupName $vmRGName `
-Location $loc `
-Name "$tag-vnet" `
-AddressPrefix $deployment.vnetAddSpace `
-Subnet $subnetConfig

# Create a public IP address and specify a DNS name
$pip = `
New-AzPublicIpAddress `
-ResourceGroupName $vmRGName `
-Location $loc `
-Name "$vmName-pip" `
-AllocationMethod Static `
-IdleTimeoutInMinutes 4

# Create an inbound network security group rule for port 3389
$nsgRuleRDP = `
New-AzNetworkSecurityRuleConfig `
-Name "$vmName-rdp" `
-Protocol Tcp `
-Direction Inbound `
-Priority 1000 `
-SourceAddressPrefix * `
-SourcePortRange * `
-DestinationAddressPrefix * `
-DestinationPortRange 3389 `
-Access Allow

# Create an inbound network security group rule for port 80,443
$nsgRuleHTTP = `
New-AzNetworkSecurityRuleConfig `
-Name "$vmName-http" -Protocol Tcp `
-Direction Inbound `
-Priority 1010 `
-SourceAddressPrefix * `
-SourcePortRange * `
-DestinationAddressPrefix * `
-DestinationPortRange 80,443 `
-Access Allow

$nsg= `
New-AzNetworkSecurityGroup `
-ResourceGroupName $vmRGName `
-Location $loc `
-Name "$vmName-nsg" `
-SecurityRules $nsgRuleRDP, $nsgRuleHTTP `
-Force

# Create a virtual network card and associate with public IP address and NSG
$nic = `
New-AzNetworkInterface `
-Name "$vmName-nic" `
-ResourceGroupName $vmRGName `
-Location $loc `
-SubnetId $vnet.Subnets[0].Id `
-PublicIpAddressId $pip.Id `
-NetworkSecurityGroupId $nsg.Id

$vmConfig = `
New-AzVMConfig `
-VMName $vmName `
-VMSize $deployment.vmSize `
| Set-AzVMOperatingSystem `
-Windows `
-ComputerName $vmName `
-Credential $vmAdmCred `
| Set-AzVMSourceImage `
-PublisherName $deployment.publisher `
-Offer $deployment.offer `
-Skus $deployment.sku `
-Version $deployment.version `
| Add-AzVMNetworkInterface `
-Id $nic.Id

#endregion

#region [Deploying]

$StopWatch = New-Object -TypeName System.Diagnostics.Stopwatch; $stopwatch.start()
write-host "`nDeploying the vm, $vmName, to $loc...`n" -f y

$vmStatus = `
New-AzVM `
-ResourceGroupName $vmRGName `
-Location $loc `
-VM $vmConfig `
-WarningAction 'SilentlyContinue' `
-Verbose

Set-AzVMBgInfoExtension `
-ResourceGroupName $vmRGName `
-VMName $vmName `
-Name 'bginfo'

$vm = Get-AzVM -ResourceGroupName $vmRGName -Name $vmName
# Boot Diagnostics is enabled by default; disable it here
Set-AzVMBootDiagnostic `
-VM $vm `
-Disable `
| Update-AzVM
write-host "`nSet the vm, $vmName, with BootDiagnostic 'Disabled'`n" -f y

write-host '[Deployment Elapsed Time]' -f y
$stopwatch.stop(); $stopwatch.elapsed

#endregion

#region [Set VM Diagnostic Extension]
<# If using a diagnostics storage account name for the VM Diagnostics Extension,
   the storage account must be in the same subscription as the virtual machine.
   If the diagnostics storage account is in a different subscription than the
   virtual machine's, then enable sending diagnostics data to that storage
   account by explicitly specifying its name and key. #>
$vmDiagRGName=$tag+'vmDiag-RG'
$vmDiagStorageName=$tag+'vmdiagstore'

New-AzResourceGroup -Name $vmDiagRGName -Location $loc
#Remove-AzResourceGroup -Name $vmDiagRGName -AsJob

New-AzStorageAccount `
-ResourceGroupName $vmDiagRGName `
-AccountName $vmDiagStorageName `
-Location $loc `
-SkuName Standard_LRS

Set-AzVMDiagnosticsExtension `
-ResourceGroupName $vmRGName `
-VMName $vmName `
-DiagnosticsConfigurationPath $diagnosticsConfigPath `
-StorageAccountName $vmDiagStorageName `
-StorageAccountKey (
Get-AzStorageAccountKey `
-ResourceGroupName $vmDiagRGName `
-AccountName $vmDiagStorageName
).Value[0] `
-WarningAction 'SilentlyContinue'

$vmExtDiag = Get-AzVMDiagnosticsExtension -ResourceGroupName $vmRGName -VMName $vmName

#endregion

#region [Enable Boot Diagnostic]

# The resource group and the storage account are
# different from the vm's.

$vmBootDiagRGName=$tag+'bootDiag-RG'
$bootDiagStorageName=$tag+'bootdiagstore'

New-AzResourceGroup -Name $vmBootDiagRGName -Location $loc
#Remove-AzResourceGroup -Name $vmBootDiagRGName -AsJob

New-AzStorageAccount `
-ResourceGroupName $vmBootDiagRGName `
-AccountName $bootDiagStorageName `
-Location $loc `
-SkuName Standard_LRS

Set-AzVMBootDiagnostic `
-Enable `
-VM $vm `
-ResourceGroupName $vmBootDiagRGName `
-StorageAccountName $bootDiagStorageName `
| Update-AzVM
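
# Optionally confirm the change; the BootDiagnostics profile of the VM
# should now show Enabled as True with the designated storage URI.
(Get-AzVM -ResourceGroupName $vmRGName -Name $vmName).DiagnosticsProfile.BootDiagnostics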

#endregion

#region [Session Summary]

($RGs = Get-AzResourceGroup | Where ResourceGroupName -like "$tag*") `
| ft ResourceGroupName, Location

($vms = Get-AzVM| Where ResourceGroupName -like "$tag*") `
| ft ResourceGroupName, Location, Name

($SAs = Get-AzStorageAccount | Where ResourceGroupName -like "$tag*") `
| ft ResourceGroupName, Location, StorageAccountName

#endregion

<# [Clean Up] 
Remove-AzResourceGroup -Name $vmRGName -AsJob 
Remove-AzResourceGroup -Name $vmDiagRGName -AsJob 
Remove-AzResourceGroup -Name $vmBootDiagRGName -AsJob 
#>

Microsoft Nano Server with Docker Enterprise Edition (EE)

This article details the process of installing the latest Docker EE, version 17.06.2-ee-3, on a Microsoft Nano Server. I am sure there are different ways to do this; after a few iterations, here is one verified approach. A sample script is available.

Background

As shown below, adding the Containers server feature in Windows Server 2016 installs Docker EE version 17.06.1-ee-2. In contrast, on Windows 10, adding Containers in 'Turn Windows features on or off' under Programs and Features in Control Panel installs Docker CE version 17.03.1-ce, i.e. the Community Edition. Information about the two editions is available. The latest version of Docker EE is 17.06.2-ee-3. To keep all Docker EE instances at the same, latest version, one may need to install Docker EE manually instead of relying on the default version that ships with Windows Server. To manually install Docker EE on a Microsoft Nano Server, follow the steps below.

Windows Server 2016 patched on 10/05/2017


Windows 10 patched on 10/05/2017


Step 1 – Create a Nano Server vhdx file with the container package

First, use Nano Server Image Builder to create a vhdx file with the intended packages, including Containers. Notice that if the Windows ADK (Assessment and Deployment Kit) is not in place, the builder will prompt to install it, which is about a 6.7 GB download. Once the ADK is in place, start the image builder, which is wizard-driven and straightforward. I picked the vhdx format for building a Gen2 VM and, as shown below, also added the Containers, Hyper-V, and Anti-Virus packages. The Windows Server 2016 media used to create the Nano Server vhdx file is en_windows_server_2016_x64_dvd_9718492.iso, downloaded from my MSDN subscription.


Step 2 – Update the Nano Server OS

In Hyper-V Manager, I created and started a Gen2 VM using the vhdx created in Step 1, then logged in to the VM to find its IP address, as shown below.


I did the following to connect to the host, as sketched below. Once connected, although not shown here, test the Internet connectivity and update the DNS setting as needed by following the instructions in the sample script. The first thing to do after connecting is to carry out a Windows update, which I did.
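
In essence, the connection steps look like the following minimal sketch; the IP address is a placeholder, and adding the Nano Server to the local TrustedHosts list is assumed acceptable in your environment.

$nanoIP = '192.168.1.50'   # replace with the IP address observed on the Nano Server console
Set-Item WSMan:\localhost\Client\TrustedHosts -Value $nanoIP -Force
Enter-PSSession -ComputerName $nanoIP -Credential (Get-Credential)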


For this particular VM, I had already updated the OS before taking this screen capture, so there were no applicable updates. Originally two updates, KB4035631 and KB4038782, were listed as applicable. The update took 20 minutes with about a 2 GB download, followed by a reboot of the system. If there is an interest in examining the list of applicable updates beforehand, run the following in the PSSession before the Invoke-CimMethod in line 8:

$updateList = ($ci | Invoke-CimMethod -MethodName ScanForUpdates -Arguments @{SearchCriteria="IsInstalled=0";OnlineScan=$true}).Updates
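
For reference, the following is a sketch of the whole update sequence run inside the PSSession, based on the MSFT_WUOperationsSession CIM class that Nano Server exposes for Windows Update; the variable names here are illustrative and may not match the sample script line for line.

# Create a Windows Update session on the Nano Server
$ci = New-CimInstance -Namespace root/Microsoft/Windows/WindowsUpdate -ClassName MSFT_WUOperationsSession

# Scan online for updates not yet installed and review the list
$updateList = ($ci | Invoke-CimMethod -MethodName ScanForUpdates -Arguments @{SearchCriteria="IsInstalled=0";OnlineScan=$true}).Updates
$updateList | Select-Object Title

# Apply all applicable updates, then reboot
$ci | Invoke-CimMethod -MethodName ApplyApplicableUpdates
Restart-Computer -Force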

Step 3 – Install Docker EE

To simply use the Docker default for the current Windows Server 2016 installation, which is Docker EE version 17.06.1-ee-2 as stated earlier, installing the provider and the package will do. In this case, after updating and rebooting the OS, execute line 1 and line 4 of the sample script to start a PSSession, then run the following PowerShell commands to install Docker.

Install-Module -Name DockerMsftProvider -Repository PSGallery -Force
Install-Package -Name docker -ProviderName DockerMsftProvider
Restart-Computer -Force; exit

To manually install Docker EE Version 17.06.2-ee-3, I did the following:

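In essence, the manual installation looks like the following minimal sketch; the download URL pattern and the installation path under Program Files are assumptions, and Expand-Archive is assumed to be available in the session.

# Download and extract the intended Docker EE release (URL pattern assumed)
$version = '17.06.2-ee-3'
Invoke-WebRequest -UseBasicParsing `
    -Uri "https://download.docker.com/components/engine/windows-server/17.06/docker-$version.zip" `
    -OutFile "$env:TEMP\docker.zip"
Expand-Archive -Path "$env:TEMP\docker.zip" -DestinationPath $env:ProgramFiles -Force

# Persist the docker folder in the machine PATH so it survives new sessions
$machinePath = [Environment]::GetEnvironmentVariable('Path', 'Machine')
[Environment]::SetEnvironmentVariable('Path', "$env:ProgramFiles\docker;$machinePath", 'Machine')

# Update the current session's PATH instead of rebooting, register and
# start the Docker service, then verify the installed version
$env:Path = "$env:ProgramFiles\docker;$env:Path"
dockerd --register-service
Start-Service docker
docker version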

After downloading and extracting the source file in lines 27 and 28, the PATH in the registry is updated as part of the Docker installation to persist the reference across sessions. Rather than rebooting the system, the current session's PATH is updated to start and verify the intended version of Docker in lines 39 to 41. The following shows the user experience of executing lines 1 to 41 and successfully installing Docker EE version 17.06.2-ee-3.


In a swarm, keeping all Docker instances at the same version is essential. If there is a need to run Docker EE version 17.06.2-ee-3 in Windows workloads, the presented steps achieve that.

What’s Next

Having installed Docker EE, start pulling down images and building containers. Deploying a swarm in a hybrid setting is what I plan to share next.
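
For instance, a quick smoke test might look like the following; microsoft/nanoserver is assumed here as the base image name current at the time of writing.

docker pull microsoft/nanoserver
docker run --rm microsoft/nanoserver cmd /c "echo Hello from a Nano Server container"
docker images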

Microsoft Cortana Intelligence Suite Workshop Video Tutorial Series (5/5): Predictive Web Service

The last part of this video tutorial series includes three exercises. First, Exercise 6 uses Power BI Desktop to import the summary data from the Spark cluster and create a report with drag-and-drop to visualize the data. Exercise 7 is the exciting part: it configures and deploys a sample web app, configures the app to consume the predictive web service published in Exercise 1, and then conducts a few simple tests. Finally, Exercise 8 shows how to clean up the resources deployed for the workshop.

Here you start.

The Microsoft Cortana Intelligence workshop encompasses a set of processes and supporting tools to architect, construct, package, and deploy a predictive analytics solution. It is a friendly platform with no hardware to purchase and no software to configure. The workshop ultimately deploys a web application with a predictive analytics service. The app predicts the total number and the probability of flight delays between two cities based on date, time, carrier, and real-time forecast weather information. It is a relatively simple project; however, it includes all the essential components of a modern, intelligent application.

Microsoft Cortana Intelligence Suite Workshop Video Tutorial Series by Yung Chou

The workshop is intended to be delivered as a whole-day event with presentation sessions and lab time. On the other hand, within 75 minutes the above video tutorial series can also offer you the experience and guide you through all the screens and interactions to successfully deploy the web service.

The next step is to apply what you have learned from this series to your work. Good luck.

Microsoft Cortana Intelligence Suite Workshop Video Tutorial Series (3/5): Azure Data Factory

Machine learning, predictive analytics, web services, and all the rest that makes them happen are really about one thing: acquiring, processing, and acting on data. For the workshop, this is done with a Data Factory pipeline configured to automatically upload a dataset to the storage account of a Spark cluster, where Azure Machine Learning is integrated to score the dataset. Importantly, this addresses a fundamental requirement of data-centric applications involving cloud computing: securely, automatically, and on demand moving data between an on-premises location and a designated one in the cloud. For IT today, the cloud can be a source, a destination, and a broker of data, and the ability to securely move data between an on-premises facility and a cloud destination is imperative for hybrid cloud settings and backup-and-restore scenarios. Azure Data Factory is a vehicle to achieve that ability.


The workshop video tutorial series is as listed below:

Specifically, Exercises 2-4 accomplish three things:

  • Creating an Azure Data Factory service and pairing it with a designated
    on-premises (file) server
  • Constructing an Azure Data Factory Pipeline to automatically and securely
    move data from the designated on-premises server to a target Azure blob storage
    account
  • Enabling the developed Azure Machine Learning model to score the data
    provided by the Azure Data Factory pipeline

Notice that the lab VM is also employed as an on-premises file server hosting a dataset to be uploaded to Azure. At one moment you may be using the lab VM as a workstation to access Azure remotely, and at the next as an on-premises file server installing a gateway. When following the instructions, be mindful of where a task is carried out, as the context switching is not always apparent.

Microsoft Cortana Intelligence Suite Workshop Video Tutorial Series (2/5): Azure Machine Learning

This video tutorial series walks you through the development of a predictive analytics solution using Microsoft Cortana Intelligence Suite. The solution is realized as a web application that predicts the number of delays and the probability of delay for a flight between two cities with a particular airline at a particular date and time. The content of this series is based on what is published at http://aka.ms/CortanaManual by, and thanks to, Todd Kitta.

The first video is an introduction to how the workshop is structured and highlights a few important items to get you prepared. Four additional videos cover all eight exercises.

Here, the video walks through the process and operations in Exercise 1 to build an Azure Machine Learning model, as below,


and package it as a web service for consumption. If you have not tried Microsoft Azure Machine Learning Studio before, I hope you will enjoy and appreciate the built-in canvas and native drag-and-drop capability for creating and composing a model. It lets you explore and realize your creativity in multiple dimensions.

There are nine tasks total. Here we go.

So, what qualifies the machine as being able to learn? What is learning anyway? Look for my upcoming blog posts to examine the concept of “learning” and more.