Disabling/Enabling Azure VM BootDiagnostic Using PowerShell

Although this is a simple operation, apply the following sample statements to your environment if you experience an issue changing an Azure VM’s Boot Diagnostics setting.

ISSUE

  • Not able to disable/enable Boot Diagnostics of an Azure VM

HOW-TO


SAMPLE STATEMENTS

The same process applies to enabling Boot Diagnostics: specify a VM object with the associated resource group and an intended storage account in step 4.
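
For reference, here is a minimal sketch of the enabling case (an illustration of mine, not part of the original sample; $bootDiagRGName and $bootDiagStorageName are hypothetical placeholders for the resource group and storage account that will hold the boot diagnostics data):

# Enable Boot Diagnostics of the VM with a designated storage account,
# then persist the change
Set-AzVMBootDiagnostic `
  -VM $VM `
  -Enable `
  -ResourceGroupName $bootDiagRGName `
  -StorageAccountName $bootDiagStorageName
Update-AzVM -ResourceGroupName $vmRG -VM $VM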

<# 
Disabling Azure VM BootDiagnostic Using PowerShell 

The following illustrates the process to disable VM BootDiagnostic. 
The statements are intended to be executed manually and in sequence. 
#>
 
# 1. Log in to Azure and set the context, as appropriate
Connect-AzAccount
Get-AzContext
Set-AzContext -Subscription '????' -Tenant '????'
 
# 2. Specify a target VM 
$vmName = 'your vm name'
$vmRG = 'the resource group name of the vm'
 
# 3. Check the current BootDiagnostics status
($VM = Get-AzVM -ResourceGroupName $vmRG -Name $vmName).DiagnosticsProfile.BootDiagnostics
 
# 4. Disable BootDiagnostic of the VM
Set-AzVMBootDiagnostic -VM $VM -Disable
 
# 5. Update the VM settings
Update-AzVM -ResourceGroupName $vmRG -VM $VM
 
# 6. Check the current BootDiagnostics status and verify the change made
($VM = Get-AzVM -ResourceGroupName $vmRG -Name $vmName).DiagnosticsProfile.BootDiagnostics
 
# Notice it may take a few minutes for the Azure portal to reflect
# the changes made to Boot Diagnostics.
 

SAMPLE SESSION

  • Examine status before making a change

Boot Diagnostics setting

  • Disable Boot Diagnostics

  • Examine status after making the change

Finding an Azure VM Image Sku Using PowerShell

<#

The function, azure-vm-image-sku, returns the sku of a user-selected 
Azure VM image interactively. It calls the function, pick-one-item, 
which accepts an item-list and returns a selected item interactively.

This script is for demonstrating and learning Azure and PowerShell. 
The code is not optimized and does not handle all error messages. 

Usage:

# Get a sku in the default text mode
azure-vm-image-sku 

# Get a sku with GUI
azure-vm-image-sku -gui $true

# Get a sku with optional switches
azure-vm-image-sku `
  -region 'targetAzureRegion' `
  -publisher 'targetPublisherName' `
  -offer 'targetOffer' `
  -gui $true

Examples: 

azure-vm-image-sku -region 'south central us' -gui $true

azure-vm-image-sku `
  -region 'south central us' `
  -publisher 'microsoftwindowsserver'

azure-vm-image-sku `
  -region 'south central us' `
  -publisher 'microsoftwindowsserver' `
  -offer 'windowsserver' 

© 2020 Yung Chou. All Rights Reserved.

#>

function pick-one-item {

  param (
    [array  ]$thisList = @('East Asia','South Central US', 'West Europe', 'UAE North', 'South Africa North'), 
    [string ]$itemDescription ='Azure Region', 
    [boolean]$gui = $false
    )

  if ($gui) {

    $thisOne = $thisList | Out-GridView -Title "$itemDescription List" -PassThru
  
  } else {
  
    if ($thisList.count -eq 1) { $thisOne = $thisList[0] 
    } else {
      
      $i=1; $tempList = @()

      foreach ( $item in $thisList )  {
          $tempList+="`n$i.`t$item" 
          $i++
      }

      do {
          write-host "`nHere's the $itemDescription list `n$tempList"
          $thePick = Read-Host "`nWhich $itemDescription"
      } while(1..$tempList.length -notcontains $thePick)

      $thisOne = $thisList[($thePick-1)]
    }
  }

  write-host "$(get-date -f T) - Selecting '$thisOne' from the $itemDescription list " -f green -b black

  return $thisOne
  
}
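
As a quick usage sketch of pick-one-item on its own (my addition, not part of the original script; Get-AzVMSize is just one possible list source):

# Pick an Azure region interactively, in text mode, from all available regions
$region = pick-one-item -thisList (Get-AzLocation).DisplayName -itemDescription 'Azure region'

# Pick a VM size available in that region, this time with the grid view
$vmSize = pick-one-item `
  -thisList (Get-AzVMSize -Location $region).Name `
  -itemDescription "Azure $region VM size" `
  -gui $true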

function azure-vm-image-sku {

  param (

    [boolean]$gui = $false, 
    
    [string]$region = (pick-one-item `
      -thisList (Get-AzLocation).DisplayName `
      -itemDescription "Azure region" `
      -gui $gui ),

    [string]$publisher = (pick-one-item `
      -thisList (Get-AzVMImagePublisher -Location $region).PublisherName `
      -itemDescription "Azure $region publisher" `
      -gui $gui ),

    [string]$offer = (pick-one-item `
      -thisList (Get-AzVMImageOffer -Location $region -PublisherName $publisher).offer `
      -itemDescription "Azure $region $publisher's Offer" `
      -gui $gui ),  

    [string]$itemDescription = "Azure $region $publisher $offer Sku"

    )

  return (pick-one-item `
    -thisList (Get-AzVMImageSku -Location $region -PublisherName $publisher -Offer $offer).skus `
    -itemDescription $itemDescription `
    -gui $gui )

}
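
As a follow-on usage sketch (my illustration, not from the original post), the selected SKU can feed directly into a VM image reference, for example:

# Hypothetical example: select a sku interactively, then use it in a VM configuration
$region    = 'south central us'
$publisher = 'microsoftwindowsserver'
$offer     = 'windowsserver'
$sku       = azure-vm-image-sku -region $region -publisher $publisher -offer $offer

$vmConfig = New-AzVMConfig -VMName 'demo-vm' -VMSize 'Standard_B2ms'
$vmConfig = Set-AzVMSourceImage -VM $vmConfig `
  -PublisherName $publisher -Offer $offer -Skus $sku -Version 'latest'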

Creating Azure Usage and Quota Report Using PowerShell


write-host "

This script, based on the original script published in the article,

Report Azure resource usage with PowerShell
https://4sysops.com/archives/report-azure-resource-usage-with-powershell/,

displays interactively the Azure compute, storage, 
and network quota and usage of an examined region relevant to 
an Azure subscription. It also generates a text file accordingly. 

© 2020 Yung Chou. All Rights Reserved.

"

#region [Customization]

$region="uksouth"

#endregion

Connect-AzAccount

#region [Needed only if an account owns multiple subscriptions]

# Set the context for subsequent operations
$context = (Get-AzSubscription | Out-GridView -Title 'Set subscription context' -PassThru)
Set-AzContext -Subscription $context.Id | Out-Null
write-host "Azure context set for the subscription, `n$((Get-AzContext).Name)" -f green

#endregion

#region [DO NOT CHANGE]

($vm = Get-AzVMUsage -Location $region `
| select @{label='ResourceType';expression={$_.name.LocalizedValue}}, currentvalue, limit) `
| Out-GridView -Title "Azure $region Region Compute Quota & Usage"

($storage = Get-AzStorageUsage -Location $region `
| select @{label='ResourceType';expression={$_.name}}, currentvalue, limit) `
| Out-GridView -Title "Azure $region Region Storage Account Quota & Usage"

($network = Get-AzNetworkUsage -Location $region `
| select @{label='ResourceType';expression={$_.resourcetype}}, currentvalue, limit) `
| Out-GridView -Title "Azure $region Network Quota & Usage"

$when=get-date -format 'yyyyMMdd-hhmm'

($usage = @("Azure $region Region Quota and usage, as of $when",$vm,"`n",$storage,"`n",$network) | ft) `
>> "usage-$region-$when.txt"

#endregion [DO NOT CHANGE]
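
If a machine-readable output is preferred, a minimal variation (my sketch, not in the original script) could export the same data collected above to CSV files instead of the formatted text file:

# Export the compute, storage, and network quota/usage to CSV files
$vm      | Export-Csv -Path "usage-compute-$region-$when.csv" -NoTypeInformation
$storage | Export-Csv -Path "usage-storage-$region-$when.csv" -NoTypeInformation
$network | Export-Csv -Path "usage-network-$region-$when.csv" -NoTypeInformation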

Creating Azure Managed Disk with VHD Using PowerShell


#region [CREATE MANAGED DISK WITH VHD]

write-host "
------------------------------------------------------------

This script is based on the following reference and for
learning Azure and PowerShell. I have made changes to the
original script for clarity and portability.

Ref: Create a managed disk from a VHD file
https://docs.microsoft.com/en-us/azure/virtual-machines/scripts/virtual-machines-windows-powershell-sample-create-managed-disk-from-vhd

Recommend manually running the script statement by
statement in cloud shell.

© 2020 Yung Chou. All Rights Reserved.

------------------------------------------------------------
"

#region [CUSTOMIZATION]

#region [Needed only if an account owns multiple subscriptions]

Get-AzSubscription | Out-GridView  # Copy the target subscription name

# Set the context for subsequent operations
$context = (Get-AzSubscription | Out-GridView -Title 'Set subscription context' -PassThru)
Set-AzContext -Subscription $context.Id | Out-Null
write-host "Azure context set for the subscription, `n$((Get-AzContext).Name)" -f green

#endregion

$sourceVhdStorageAccountResourceId = '/subscriptions/…/StorageAccounts/'
$sourceVhdUri = 'https://.../.vhd'

#Get-AzLocation
$sourceVhdLoc = 'centralus'

$mngedDiskRgName ="da-mnged-$(get-date -format 'mmss')"
#$mngedDiskRgName ='dnd-mnged'

#Provide the name of a to-be-created Managed Disk
$mngedDiskName = 'myMngedDisk'
$mngedStorageType = 'Premium_LRS' # Premium_LRS,Standard_LRS,Standard_ZRS
$mngedDiskSize = '128' # In GB greater than the source VHD file size

#endregion

if (($existingRG = (Get-AzResourceGroup | Where {$_.ResourceGroupName -eq $mngedDiskRgName})) -eq $Null) {
write-host "Resource group, $mngedDiskRgName, not found, creating it" -f y
New-AzResourceGroup -Name $mngedDiskRgName -Location $sourceVhdLoc
} else {
write-host "Using this resource group, $mngedDiskRgName, for the managed disk, $mngedDiskName" -f y
}

$diskConfig = New-AzDiskConfig `
-AccountType $mngedStorageType `
-Location $sourceVhdLoc `
-DiskSizeGB $mngedDiskSize `
-CreateOption Import `
-StorageAccountId $sourceVhdStorageAccountResourceId `
-SourceUri $sourceVhdUri

New-AzDisk `
-Disk $diskConfig `
-ResourceGroupName $mngedDiskRgName `
-DiskName $mngedDiskName `
-Verbose

#endregion [CREATE MANAGED DISK WITH VHD]
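
As a possible next step (a sketch of my own, not part of the referenced article), the newly created managed disk could be attached to an existing VM as a data disk; $targetVmRgName and $targetVmName are hypothetical placeholders:

# Attach the managed disk created above to an existing VM as a data disk
$disk = Get-AzDisk -ResourceGroupName $mngedDiskRgName -DiskName $mngedDiskName
$vm = Get-AzVM -ResourceGroupName $targetVmRgName -Name $targetVmName
$vm = Add-AzVMDataDisk -VM $vm -Name $mngedDiskName `
  -ManagedDiskId $disk.Id -Lun 0 -CreateOption Attach
Update-AzVM -ResourceGroupName $targetVmRgName -VM $vm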

#region [Clean up]
# Remove-AzResourceGroup -Name $mngedDiskRgName -Force -AsJob
#endregion

Deploying Azure VM with Diagnostics Extension and Boot Diagnostics

This is a sample script for deploying an Azure VM with Diagnostics Extension and Boot Diagnostics, with each in a different resource group. The intent is to clearly illustrate the process with the required operations, while spending minimal effort on code optimization.

Ideally an Azure VM, Diagnostics Extension, and Boot Diagnostics are deployed within the same resource group. However, in production it may be necessary to organize them into individual resource groups for standardization, which is what this script demonstrates.

The script can be run as it is, or simply make changes in the customization section and leave the rest in place. For the VM Diagnostics Extension, the configuration file should be placed where the script is, or update the variable, $diagnosticsConfigPath, accordingly. This script uses a storage account key for access, which allows the storage account to be in a subscription different from the one that deploys the VM. A sample configuration file, diagnostics_publicconfig_NoStorageAccount.xml, is available; notice there is no <StorageAccount> element specified in this file.

Here’s the user experience up to finishing the [Deploying] section in the script. By default, an Azure VM is deployed with Boot Diagnostics enabled; upon a VM deployment, the script then disables Boot Diagnostics on the VM. The following sample run took 3 minutes and 58 seconds.

Deploying Azure VM and setting Boot Diagnostics as disabled

Now with an Azure VM in place, the script adds the VM Diagnostics Extension, followed by enabling Boot Diagnostics. Here, each uses a storage account in a resource group different from the VM’s, so this script creates three resource groups: one for the VM itself, one for the Diagnostics Extension, and one for the Boot Diagnostics of the VM.

VM, Diagnostics, and Boot Diagnostics deployed with individual resource groups


write-host "
---------------------------------------------------------

This is a sample script for deploying an Azure VM
with Diagnostics Extension and Boot Diagnostics,
while each in a different resource group.

The intent is to clearly illustrate the process with
required operations, while paying minimal effort for
code optimization.

Ideally an Azure VM, Diagnostic Extension, and Boot Diagnostics
are to be deployed with the same resource group. However
in production, it may be necessary to organize them into
individual resource groups for standardization,
which is what this script demonstrates.

The script can be run as it is, or simply make changes
in the customization section, while leaving the rest in place.
For the VM Diagnostics Extension, the configuration file should
be placed where this script is, or update the variable,
`$diagnosticsConfigPath, accordingly. This script uses a
storage account key for access. This configuration allows
the storage account to be in a subscription different from
the one that deploys the VM. A sample configuration file,
diagnostics_publicconfig_NoStorageAccount.xml, is available at

https://1drv.ms/u/s!AuraBlxqDFRshVSl0IpWcsjRQkUX?e=3CGcgq

and notice there is no <StorageAccount> element specified in this file.

© 2020 Yung Chou. All Rights Reserved.

---------------------------------------------------------
"

Disconnect-AzAccount; Connect-AzAccount
# If multiple subscriptions
# Set-AzContext -SubscriptionId "xxxx-xxxx-xxxx-xxxx"

#region [Customization]

$cust=@{
initial='yc'
;region='southcentralus'
}

$diagnosticsConfigPath='diagnostics_publicconfig_NoStorageAccount.xml'

#region [vm admin credentials]

# 1. To hard-code
$cust+=@{
vmAdmin ='changeMe'
;vmAdminPwd='forDemoOnly!'
}
$vmAdmPwd=ConvertTo-SecureString $cust.vmAdminPwd -AsPlainText -Force
$vmAdmCred=New-Object System.Management.Automation.PSCredential ($cust.vmAdmin, $vmAdmPwd);

# 2. Or interactively
#$vmAdmCred = Get-Credential -Message "Enter the VM Admin credentials."

#endregion

$tag=$cust.initial+(get-date -format 'mmss')
Write-host "`nSession ID = $tag" -f y

# Variables for common values
$vmRGName=$tag+'-RG'
$loc=$cust.region
$vmName=$tag+'vm'

$deployment=@{
vmSize='Standard_B2ms'
;dataDiskSizeInGB=5
;publisher='MicrosoftWindowsServer'
;offer='WindowsServer'
;sku='2016-Datacenter'
;version='latest'
;vnetAddSpace='192.168.0.0/16'
;subnetAddSpace='192.168.1.0/24'
}

#endregion

#region [Deployment Prepping]

# Create a resource group
New-AzResourceGroup -Name $vmRGName -Location $loc
# Remove-AzResourceGroup -Name $vmRGName -AsJob

# Create a subnet configuration
$subnetConfig = `
New-AzVirtualNetworkSubnetConfig `
-Name 'default' `
-AddressPrefix ($deployment.subnetAddSpace) `
-WarningAction 'SilentlyContinue'

# Create a virtual network
$vnet = `
New-AzVirtualNetwork `
-ResourceGroupName $vmRGName `
-Location $loc `
-Name "$tag-vnet" `
-AddressPrefix $deployment.vnetAddSpace `
-Subnet $subnetConfig

# Create a public IP address and specify a DNS name
$pip = `
New-AzPublicIpAddress `
-ResourceGroupName $vmRGName `
-Location $loc `
-Name "$vmName-pip" `
-AllocationMethod Static `
-IdleTimeoutInMinutes 4

# Create an inbound network security group rule for port 3389
$nsgRuleRDP = `
New-AzNetworkSecurityRuleConfig `
-Name "$vmName-rdp" `
-Protocol Tcp `
-Direction Inbound `
-Priority 1000 `
-SourceAddressPrefix * `
-SourcePortRange * `
-DestinationAddressPrefix * `
-DestinationPortRange 3389 `
-Access Allow

# Create an inbound network security group rule for port 80,443
$nsgRuleHTTP = `
New-AzNetworkSecurityRuleConfig `
-Name "$vmName-http" -Protocol Tcp `
-Direction Inbound `
-Priority 1010 `
-SourceAddressPrefix * `
-SourcePortRange * `
-DestinationAddressPrefix * `
-DestinationPortRange 80,443 `
-Access Allow

$nsg= `
New-AzNetworkSecurityGroup `
-ResourceGroupName $vmRGName `
-Location $loc `
-Name "$vmName-nsg" `
-SecurityRules $nsgRuleRDP, $nsgRuleHTTP `
-Force

# Create a virtual network card and associate with public IP address and NSG
$nic = `
New-AzNetworkInterface `
-Name "$vmName-nic" `
-ResourceGroupName $vmRGName `
-Location $loc `
-SubnetId $vnet.Subnets[0].Id `
-PublicIpAddressId $pip.Id `
-NetworkSecurityGroupId $nsg.Id

$vmConfig = `
New-AzVMConfig `
-VMName $vmName `
-VMSize $deployment.vmSize `
| Set-AzVMOperatingSystem `
-Windows `
-ComputerName $vmName `
-Credential $vmAdmCred `
| Set-AzVMSourceImage `
-PublisherName $deployment.publisher `
-Offer $deployment.offer `
-Skus $deployment.sku `
-Version $deployment.version `
| Add-AzVMNetworkInterface `
-Id $nic.Id

#endregion

#region [Deploying]

$StopWatch = New-Object -TypeName System.Diagnostics.Stopwatch; $stopwatch.start()
write-host "`nDeploying the vm, $vmName, to $loc...`n" -f y

$vmStatus = `
New-AzVM `
-ResourceGroupName $vmRGName `
-Location $loc `
-VM $vmConfig `
-WarningAction 'SilentlyContinue' `
-Verbose

Set-AzVMBgInfoExtension `
-ResourceGroupName $vmRGName `
-VMName $vmName `
-Name 'bginfo'

$vm = Get-AzVM -ResourceGroupName $vmRGName -Name $vmName
# Boot Diagnostics is enabled by default at deployment; disable it here
Set-AzVMBootDiagnostic `
-VM $vm `
-Disable `
| Update-AzVM
write-host "`nSet the vm, $vmName, with BootDiagnostic 'Disabled'`n" -f y

write-host '[Deployment Elapsed Time]' -f y
$stopwatch.stop(); $stopwatch.elapsed

#endregion

#region [Set VM Diagnostic Extension]
<# If using a diagnostics storage account name for the VM Diagnostic Extension, the storage account must be in the same subscription as the virtual machine. If the diagnostics storage account is in a different subscription than the virtual machine's, then enable sending diagnostics data to that storage account by explicitly specifying its name and key. #>
$vmDiagRGName=$tag+'vmDiag-RG'
$vmDiagStorageName=$tag+'vmdiagstore'

New-AzResourceGroup -Name $vmDiagRGName -Location $loc
#Remove-AzResourceGroup -Name $vmDiagRGName -AsJob

New-AzStorageAccount `
-ResourceGroupName $vmDiagRGName `
-AccountName $vmDiagStorageName `
-Location $loc `
-SkuName Standard_LRS

Set-AzVMDiagnosticsExtension `
-ResourceGroupName $vmRGName `
-VMName $vmName `
-DiagnosticsConfigurationPath $diagnosticsConfigPath `
-StorageAccountName $vmDiagStorageName `
-StorageAccountKey (
Get-AzStorageAccountKey `
-ResourceGroupName $vmDiagRGName `
-AccountName $vmDiagStorageName
).Value[0] `
-WarningAction 'SilentlyContinue'

$vmExtDiag = Get-AzVMDiagnosticsExtension -ResourceGroupName $vmRGName -VMName $vmName

#endregion

#region [Enable Boot Diagnostic]

# The resource group and the storage account are
# different from the vm's.

$vmBootDiagRGName=$tag+'bootDiag-RG'
$bootDiagStorageName=$tag+'bootdiagstore'

New-AzResourceGroup -Name $vmBootDiagRGName -Location $loc
#Remove-AzResourceGroup -Name $vmBootDiagRGName -AsJob

New-AzStorageAccount `
-ResourceGroupName $vmBootDiagRGName `
-AccountName $bootDiagStorageName `
-Location $loc `
-SkuName Standard_LRS

Set-AzVMBootDiagnostic `
-Enable `
-VM $vm `
-ResourceGroupName $vmBootDiagRGName `
-StorageAccountName $bootDiagStorageName `
| Update-AzVM

#endregion

#region [Session Summary]

($RGs = Get-AzResourceGroup | Where ResourceGroupName -like "$tag*") `
| ft ResourceGroupName, Location

($vms = Get-AzVM| Where ResourceGroupName -like "$tag*") `
| ft ResourceGroupName, Location, Name

($SAs = Get-AzStorageAccount | Where ResourceGroupName -like "$tag*") `
| ft ResourceGroupName, Location, StorageAccountName

#endregion
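
Before cleaning up, here is a quick verification sketch (my addition, assuming the deployment above is still in place) to confirm the two extensions and to get the public IP for an RDP session:

# Verify Boot Diagnostics is enabled and points at the intended storage account
(Get-AzVM -ResourceGroupName $vmRGName -Name $vmName).DiagnosticsProfile.BootDiagnostics

# Verify the Diagnostics Extension is provisioned
Get-AzVMDiagnosticsExtension -ResourceGroupName $vmRGName -VMName $vmName `
| Select-Object Name, ProvisioningState

# Get the VM's public IP address for an RDP session
(Get-AzPublicIpAddress -ResourceGroupName $vmRGName -Name "$vmName-pip").IpAddress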

<# [Clean Up] 
Remove-AzResourceGroup -Name $vmRGName -AsJob 
Remove-AzResourceGroup -Name $vmDiagRGName -AsJob 
Remove-AzResourceGroup -Name $vmBootDiagRGName -AsJob 
#>

My presentation on 20191113

Four topics I talked about in this presentation:

  • VM preparation for migrating from on-premises to cloud
  • High Availability, Disaster Recovery, and Scalability
  • Azure Internet-of-Things (IoT) edge computing
  • Machine Learning (ML) application development

A Secure Software Supply Chain with Containers

The concept of a software supply chain is not a new one. What may be new is that CI/CD (Continuous Integration/Continuous Delivery) with containers makes it conceptually easy to understand and technically practical to implement. Here’s a process diagram illustrating this approach in five steps.


CI/CD Process

Here, a software supply chain is the “master” branch of a release; development activities at other branches are not considered. The start of the master branch is where and when code or a change is introduced, while at the other end of the master branch is a production runtime environment where applications run. The process, as shown above, is highlighted in five steps.

  1. A CI/CD process starts by committing and pushing code to a centralized repository. Code here encompasses all programmatic assets relevant to defining, configuring, deploying and managing a relevant application workload.
  2. Changes made on the master branch trigger the CI process to automatically (re)build and (re)test all assets relevant to the workload that the master branch delivers.
  3. A successful outcome generates container images which are automatically versioned, tagged, registered and published in a designated trusted registry. In Docker Enterprise Edition, this is implemented as a Docker Trusted Registry, or DTR. The function of a trusted registry is to secure and host container images. Important tasks carried out here are to scan, at a binary level, for known malware, check for vulnerabilities, and digitally sign a container image upon successful processing. The generation of a container image signifies that application assets are successfully integrated and packaged, which marks the end of the CI part.
  4. CD kicks off, executes and validates the steps for deploying the workload to a target production environment.
  5. Upon instantiating containers, the referenced container images are pulled to a local host as needed and the container instances are started, hence an application or service.

Notice that Continuous Delivery refers to a capability, not a state. Continuous Delivery signifies the ability to keep a payload in a production-ready and deployable condition at all times. It does not necessarily suggest that a payload, once validated, is deployed to production immediately.

One Version of Truth

A software supply chain starts with developers committing and pushing code into the master branch of a release’s centralized repository. To have one version of truth, centralized management is essential. Nevertheless, if preferred we can operate a centralized repository in a distributed fashion, as GitHub does. Further, a centralized source code hosting solution must properly address priorities including role-based access control, naming and branching, high availability, network latency, single points of failure, etc. With source control, promoted code can be versioned and tagged for asset and release management.

Triggering upon Pushing Changes

When changes are introduced into the master branch, which is the supply chain, CI must and will automatically kick off a validation process which includes a set of defined criteria, namely test cases.

Test-Driven, a Must

Once the development criteria (or requirements) are set, test cases can be developed accordingly prior to writing the code. This essentially establishes the acceptance criteria and forces a developer to focus on designing code guided by what must later be validated, which in essence designs in the quality.

When changes are made, there is no point in “manually” executing all test cases against all the code. Let me be clear. The regression tests are necessary, but repetition with manual labor is counterproductive and error-prone. A better approach is to programmatically automate all structured tests (i.e., those whose criteria are structured and stable, like canned test cases) and let a tester do exploratory testing, which may not be performed with scripted, expected or even logical steps, but with the intent to break the system. The automation makes regression tests consistent and efficient, while exploratory testing adds extra value by expanding the test coverage.

Master Branch Has No Downtime

Test-driven development in CI/CD holds a key objective: the master branch is always functional and ready to deliver. This may at first appear to some of us as idealistic and over-committed. In fact, consider an automobile production line: as material and parts are put together while moving from one station to another, an automobile is being built. If at any time a station breaks down, it must be fixed on the scene, since the whole production line is on hold. Fixing on the spot whatever stops a subject from moving from one station to the next is necessary to keep the production line producing.

A software supply chain, or CI/CD pipeline, with containers is a digital version of the above-mentioned model, where the artifacts are definitions, configurations and scripts. As these artifacts are integrated, built and tested throughout the pipeline, the process to construct a service based on containers is validated. If a step fails the validation, the pipeline stops and the issue must be immediately addressed and resolved so the process can continue to the next step, hence material flows through the pipeline. To CI/CD, a master branch is the pipeline and must always be kept functional and ready to deliver.

Containers Are Not the Deliverables

It should be noted here that the artifacts passing through the CI/CD pipeline are neither container images nor container instances. What the pipeline validates is a set of developed definitions, configurations and processes based on application requirements and presented in a markup language like JSON or YAML. In Docker, they are a Dockerfile, a Compose YAML file and template-based scripts, for example, which define the application architecture with a configured Docker runtime environment for a target application delivered as containers upon instantiation.

Container images and instances are, in a way, by-products; they are not intended to be deliverables. A container image generated by a CI/CD pipeline should always be first programmatically created by the initial CI and later referenced or updated by CD. The key is that images must be pulled or generated by executing CI. With Docker, thanks to its native configuration management as code, a release may employ a particular version of a container image, and upgrading or rolling back a software supply chain may be as easy as changing the referenced version, followed by redeploying the associated payload.

Trusted Registry, the Connective Tissue

CI, once it has successfully generated a container image, should register and upload the image to a trusted registry for security scanning and digital signing, before CD takes over and later pulls or updates the image as needed to complete the CI/CD pipeline. Technically, CI starts from receiving code changes and ends at successfully registering a container image.

Failing to register a resulting container image will prevent CD from progressing when it references the image. In other words, a trusted registry is like connective tissue holding CI and CD together and keeping them fully synchronized and functional with the associated container images. A generated container image does not flow through every step of a CI/CD pipeline; the image is, however, the focal point for the validity of the produced results. As shown in the above diagram, I used a dashed line between CI and CD to indicate the dependency on the trusted registry. Failing a registration will eventually fail the overall process.

Closing Thoughts

The essence of CI is automatic testing against defined criteria at unit, function and integration levels. These criteria are basically test cases which should be developed prior to code development as acceptance criteria. This is key. Development must fully address these test criteria at coding time to build in quality.

A software supply chain is a better way. Wait, make that a much better way than just “developing” applications. I remember those days when every release to production was a nightmare, and code promotion was an event full of anxiety, numerous crashes and many long hours. Good riddance to those days. CI/CD with containers presents a very interesting and powerful combination of quality and speed, and it is unusual to be able to achieve both at the same time. With Docker, Jenkins, GitHub and Azure, a software supply chain is realistic and approachable, which I will detail in an upcoming post. Stay tuned.

Connecting Raspberry Pi with Sense HAT to Azure IoT Hub Using Node-RED

Following up on A Simulated IoT Device with Node-RED, I replaced the simulated device with a Raspberry Pi with Sense HAT.

Node-RED Flow

The flow is much like that of the simulated device, as shown below,


where

  • I initialize temperature, humidity and pressure as global variables and each is set to 0.
  • In the sensehat function, I load the ambient data from the Sense HAT, use mathjs to round the output to two decimal places, and update the global variables with the rounded values before sending the data to Azure IoT Hub. Here I include the device connection string in the payload.
  • For the dashboard, the gauges and the charts read the data from the global variables. I noticed the Sense HAT temperature reading is generally about 10 degrees Celsius higher than my room temperature.

Dashboard

Other than the ranges of data and minor cosmetic changes, the settings are the same as what I used with the simulated device. Here’s a snapshot.


Next Step: Azure IoT Edge

Introduce Azure IoT Edge in opaque mode and connect the Raspberry Pi as a device isolated from Azure IoT Hub. Should be interesting. Stay tuned.

A Simulated IoT device with Node-RED

In the last few months, I have gradually shifted to using Node-RED as the tool for demonstrating and prototyping Azure IoT solutions. In particular, I configure a dashboard to display the ambient information sent from the device, and I verify the data received by an Azure IoT Hub and stored in an Azure storage account using Azure Storage Explorer from my desktop. Ideally, I would configure it all on an Arduino or a Raspberry Pi. To make it more portable, I also do it with a local Ubuntu VM, so there is no need to plug in anything and I can demo a simple IoT setup anytime and anywhere, on demand, with Internet connectivity. Briefly, here’s an outline of what I did.

1. Installing & Starting Node-RED

On my Ubuntu (16.04 LTS) VM, update and upgrade everything, followed by installing Node-RED.

If you need to make a required node module globally available in Node-RED, edit the file ~/.node-red/settings.js accordingly. Here, I made the module, mathjs, globally available and used it to round the ambient data to two decimal places.


Now, start Node-RED, as shown below.


As activities are carried out in Node-RED, this session displays the log with diagnostics in real time.

In Ubuntu, when closing out the terminal session running Node-RED, somehow it also stops the Node-RED service. This is different from how it behaves on a Raspberry Pi, where closing a Node-RED terminal session does not stop the service.

2. Accessing Node-RED IDE

The default port for the Node-RED IDE is 1880, as shown below when accessing the service from localhost. If preferred, authentication can be enabled and the port changed by following the documentation and the above-mentioned settings.js file.


By default, a number of nodes are installed, as shown on the left panel, and you may install additional nodes to better fit your needs.

3. Install additional Nodes

There are ample node and flow examples on the Node-RED website which you may leverage. In addition to installing these node modules with a command-line interface, doing it interactively is also an option. In the IDE, click the upper-right waffle within the Node-RED session and click ‘Manage palette’. If you do not see the option, updating npm to the latest version should make it appear. As shown below, the Nodes tab presents the nodes already installed or installable directly.


On the Install tab, you may keyword-search the Node-RED repository for relevant modules. Below, I search for the modules relevant to Azure.


A few modules I frequently install include:

4. Develop & Deploy a Node-RED Process Flow

To create a flow, start dragging selected nodes from the left panel to the canvas and construct flows by connecting the nodes. There is already a copious amount of how-to content on Node-RED on the Internet, or, if you like to do it the old-fashioned way like me, read the documentation. The following flow is for a simulated device to send ambient data to an Azure IoT Hub called thisiothub, while displaying the data on the local Node-RED dashboard.


Global Variables

Here, I added a config node to set the global variables for the baseline temperature, humidity and pressure for a simulation run. Node-RED always initializes a config node prior to executing the flows presented on the canvas.

Timestamp

The timestamp node sets the time interval for sending data. When developing and troubleshooting, I set it to a long period between messages to minimize the noise. When demoing, I set it based on a customer’s requirements. Each time the timestamp triggers, the connected nodes execute their programmed logic, respectively.

The Functions

In this setting, each emission by the timestamp node has the following effects.

  • The This IoT Device function prepares the message payload and updates the current ambient data, which are stored in global variables.
  • The temperature, humidity and pressure functions pipe the data stored in the global variables to a configured Node-RED dashboard.

This IoT Hub

This node has the host name of a target Azure IoT hub, here thisiothub, and the device connection information is provided in the function, This IoT Device.

msg.payload

This is a debug node. Once dragged to the canvas, it automatically renames itself to msg.payload. Once connected, this node becomes the standard output of Node-RED, and you can examine the output in the debug tab in the right panel. In the screen capture above, you will find that I rounded the data to two decimal places and sent it with MQTT.


Gauges and Charts

A main reason motivating me to use Node-RED is the simplicity of configuring and deploying a dashboard directly on an IoT device. An IoT solution is really about data, and data visualization plays a critical part. The ability to deploy a dashboard right there and then, on demand, is a significant time saver and a noticeable advantage. It did, however, take me some practice to place those gauges and charts the way I wanted. Once configured, the dashboard is published automatically at http://node-red-instance/ui, and here is what I got.


Verifying the Data Sent to Azure IoT Hub

There are two tools I use for managing and examining the activities between an IoT device and Azure IoT Hub. They are iothub-explorer for Linux and Device Explorer for Windows. The latter is a great tool for Windows users to examine the data received by an IoT Hub.


And Device Explorer also provides a convenient way to verify device properties, acquire the connection string or change the state of a device as shown below.


I also deployed a sample web app which plots the temperature and the humidity data received from thisiotdevice in real-time. Here’s a snapshot.


So either from Azure IoT Hub using a web app or directly on the device with a Node-RED dashboard, we may present the data visually.

Some Gotchas

Ubuntu frequently stopped Node-RED when a deployment failed to connect to Azure IoT Hub, and the node would also lose the configured hostname data. I had to frequently restart the service and re-enter the Azure IoT Hub hostname in the node configuration. Therefore, it is better to leave the terminal session where you started the Node-RED service visible at all times to have a clear indication of its state. I once spent hours troubleshooting a flow and researching material, unable to figure out why, only to later find out that the Node-RED service had exited its session upon a failed deployment behind the scenes.

Closing Thoughts

Node-RED is a great learning and prototyping tool. Once learned, it lets you create process logic based on data flows relatively easily. It is visual, and a picture is always worth a thousand words.

Azure IoT Hub is the Swiss army knife for formulating an IoT solution. It does the heavy lifting of registering, securing and managing devices, with interfaces to integrate other Azure or third-party services. The recent announcement of Azure IoT Edge opens up many scenarios and opportunities to increase ROI by processing data right where they are collected, which is what I plan to include in the next version of my Node-RED flow. Stay tuned.