Azure OpenAI Is Ready, Are You?

Azure OpenAI can be used for a wide range of tasks that serve both business and technical requirements. It offers various capabilities, including but not limited to:

Content Generation: Azure OpenAI can generate high-quality and coherent text content for a variety of purposes, such as writing articles, product descriptions, marketing materials, and more. It can help automate content creation and save time and effort.

Summarization: With Azure OpenAI, you can extract key information and generate concise summaries from large volumes of text. This can be particularly useful for processing lengthy documents, news articles, research papers, or any content that requires distilling important points.

Semantic Search: Azure OpenAI enables semantic search capabilities, allowing you to perform more advanced and accurate searches based on the meaning and context of the query. This can improve search results by understanding the intent behind the search terms, resulting in more relevant and targeted information retrieval.

Natural Language to Code Translation: Azure OpenAI can assist in translating natural language queries or instructions into executable code. This feature can be helpful for developers and non-technical users alike, allowing them to express their requirements in plain language and receive code snippets or solutions that align with their intentions.

In summary, Azure OpenAI offers a powerful suite of tools for content generation, summarization, semantic search, and translating natural language to code. It empowers businesses and individuals to leverage advanced AI capabilities to automate tasks, enhance productivity, and unlock new possibilities in various domains.
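To make this concrete, here is a minimal PowerShell sketch that calls the Azure OpenAI REST completions endpoint for a content-generation prompt; the resource name, deployment name, API version, and key are placeholders to substitute with your own values:

$endpoint   = 'https://<your-resource>.openai.azure.com'
$deployment = '<your-deployment-name>'   # e.g., a GPT-3 series model deployment
$apiKey     = '<your-api-key>'

$body = @{
    prompt     = 'Write a short product description for a reusable water bottle.'
    max_tokens = 100
} | ConvertTo-Json

# POST the completion request to the deployment's endpoint
Invoke-RestMethod -Method Post `
    -Uri "$endpoint/openai/deployments/$deployment/completions?api-version=2023-05-15" `
    -Headers @{ 'api-key' = $apiKey } `
    -ContentType 'application/json' `
    -Body $body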

Here’s how to start using Azure OpenAI services:

  • Accessing Azure OpenAI – To access Azure OpenAI, you need to create an Azure subscription and apply for access to the Azure OpenAI service by completing the form at https://aka.ms/oai/access
  • Azure OpenAI Studio – Azure OpenAI provides a web-based interface, the Azure OpenAI Studio, to access OpenAI's powerful language models, including the GPT-3, Codex, and Embeddings model series.
  • Python SDK – Azure OpenAI provides a Python SDK to access the service programmatically.
  • Quotas and Limits – Certain quotas and limits apply to the service, such as the number of requests per second per deployment and the total number of training jobs per resource.
  • Business Continuity and Disaster Recovery (BCDR) – Azure OpenAI documents considerations for implementing BCDR with the service.
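For example, once access is approved, an Azure OpenAI resource can be provisioned with the Az.CognitiveServices module; the resource group, name, and region below are illustrative:

# Create an Azure OpenAI resource (kind 'OpenAI') in an existing resource group
New-AzCognitiveServicesAccount `
    -ResourceGroupName 'my-openai-rg' `
    -Name 'my-openai-resource' `
    -Type 'OpenAI' `
    -SkuName 'S0' `
    -Location 'eastus'

# Retrieve the access keys for use with the Studio, REST API, or SDKs
Get-AzCognitiveServicesAccountKey -ResourceGroupName 'my-openai-rg' -Name 'my-openai-resource'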

References:

  1. Quickstart – Deploy a model and generate text using Azure OpenAI Service
  2. How to customize a model with Azure OpenAI Service
  3. What’s new in Azure OpenAI Service? – Azure Cognitive Services
  4. Business Continuity and Disaster Recovery (BCDR) with Azure OpenAI Service

Deploying Azure VM with a Generalized VHD file Using Azure Portal

Assuming one has already

  • logged in to the Azure portal, and
  • had a generalized VHD stored in an Azure storage account,

  1. Create an image with the target VHD file by
     • searching for and finding the Images service
     • adding a VM image
     • browsing to and selecting the intended VHD file to create a VM image
  2. Create a VM with the image by
     • from the Images page, selecting the target image
     • creating a VM with the image from the image overview page

A PowerShell sketch of the same flow follows.
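If scripting is preferred, here is a minimal sketch of the same flow; the names and the VHD URI are placeholders to replace with your own:

$rgName = 'my-image-rg'
$loc    = 'centralus'
$vhdUri = 'https://<storage-account>.blob.core.windows.net/vhds/<generalized>.vhd'

# Create an image definition from the generalized VHD
$imageConfig = New-AzImageConfig -Location $loc
$imageConfig = Set-AzImageOsDisk -Image $imageConfig -OsType Windows -OsState Generalized -BlobUri $vhdUri
$image = New-AzImage -ImageName 'myVmImage' -ResourceGroupName $rgName -Image $imageConfig

# Create a VM from the image (simplified parameter set; prompts for admin credentials)
New-AzVM -ResourceGroupName $rgName -Name 'myVm' -Location $loc -Image $image.Id -Credential (Get-Credential)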

Test RDP

From the VM overview page, start and connect to the VM.

  • If RDP does not bring up a logon dialogue, use the Run Command feature to review and validate the VM's RDP settings, as needed (a sketch follows below).
  • If experiencing a credential issue, reset the user password or create a new user credential.
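To script these checks, one option is the Run Command feature from PowerShell; the resource names and the script file below are illustrative assumptions:

# checkRdp.ps1 (runs inside the VM): verify RDP is enabled and allowed
#   Get-ItemProperty 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server' -Name fDenyTSConnections
#   Get-NetFirewallRule -DisplayGroup 'Remote Desktop' | Select-Object DisplayName, Enabled

Invoke-AzVMRunCommand `
    -ResourceGroupName 'my-vm-rg' `
    -VMName 'myVm' `
    -CommandId 'RunPowerShellScript' `
    -ScriptPath '.\checkRdp.ps1'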

 

Disabling/Enabling Azure VM Boot Diagnostics Using PowerShell

Although this is a simple operation, apply the following sample statements to your environment if you experience an issue changing an Azure VM's Boot Diagnostics setting.

ISSUE

  • Not able to disable or enable Boot Diagnostics of an Azure VM

SAMPLE STATEMENTS

The same process applies to enabling Boot Diagnostics by specifying a VM object with the associated resource group and an intended storage account in step 4; a sketch follows the sample statements.

<# 
Disabling Azure VM Boot Diagnostics Using PowerShell

The following illustrates the process to disable VM Boot Diagnostics.
The statements are intended to be executed manually and in sequence.
#>
 
# 1. Log in to Azure and set the context, as appropriate
Connect-AzAccount
Get-AzContext
Set-AzContext -Subscription '????' -Tenant '????'
 
# 2. Specify a target VM 
$vmName = 'your vm name'
$vmRG = 'the resource group name of the vm'
 
# 3. Check the current BootDiagnostics status
($VM = Get-AzVM -ResourceGroupName $vmRG -Name $vmName).DiagnosticsProfile.BootDiagnostics
 
# 4. Disable BootDiagnostic of the VM
Set-AzVMBootDiagnostic -VM $VM -Disable
 
# 5. Update the VM settings
Update-AzVM -ResourceGroupName $vmRG -VM $VM
 
# 6. Check the current BootDiagnostics status and verify the change made
($VM = Get-AzVM -ResourceGroupName $vmRG -Name $vmName).DiagnosticsProfile.BootDiagnostics
 
# Notice it may take a few minutes for the Azure portal to reflect
# the changes made to Boot Diagnostics.
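To enable instead, specify the VM object together with the resource group and an intended storage account in step 4, as noted above; a minimal sketch with placeholder names:

# Enable Boot Diagnostics with a designated storage account
Set-AzVMBootDiagnostic `
    -VM $VM `
    -Enable `
    -ResourceGroupName 'storage-account-resource-group' `
    -StorageAccountName 'intendedstorageaccount'
Update-AzVM -ResourceGroupName $vmRG -VM $VM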
 

SAMPLE SESSION

  • Examine the status before making a change

Boot Diagnostics setting

  • Disable Boot Diagnostics

  • Examine the status after making the change

Creating Azure Managed Disk with VHD Using PowerShell


#region [CREATE MANAGED DISK WITH VHD]

write-host "
------------------------------------------------------------

This script is based on the following reference and for
learning Azure and PowerShell. I have made changes to the
original script for clarity and portability.

Ref: Create a managed disk from a VHD file
https://docs.microsoft.com/en-us/azure/virtual-machines/scripts/virtual-machines-windows-powershell-sample-create-managed-disk-from-vhd

Recommend manually running the script statement by
statement in cloud shell.

© 2020 Yung Chou. All Rights Reserved.

------------------------------------------------------------
"

#region [CUSTOMIZATION]

#region [Needed only if an account owns multiple subscriptions]

Get-AzSubscription | Out-GridView  # Copy the target subscription name

# Set the context for subsequent operations
$context = (Get-AzSubscription | Out-GridView -Title 'Set subscription context' -PassThru)
Set-AzContext -Subscription $context | Out-Null
write-host "Azure context set for the subscription, `n$((Get-AzContext).Name)" -f green

#endregion

$sourceVhdStorageAccountResourceId = '/subscriptions/…/StorageAccounts/'
$sourceVhdUri = 'https://.../.vhd'

#Get-AzLocation
$sourceVhdLoc = 'centralus'

$mngedDiskRgName ="da-mnged-$(get-date -format 'mmss')"
#$mngedDiskRgName ='dnd-mnged'

# Provide the name of the managed disk to be created
$mngedDiskName = 'myMngedDisk'
$mngedStorageType = 'Premium_LRS' # Premium_LRS,Standard_LRS,Standard_ZRS
$mngedDiskSize = '128' # In GB greater than the source VHD file size

#endregion

if ($null -eq (Get-AzResourceGroup | Where-Object {$_.ResourceGroupName -eq $mngedDiskRgName})) {
write-host "Resource group, $mngedDiskRgName, not found, creating it" -f y
New-AzResourceGroup -Name $mngedDiskRgName -Location $sourceVhdLoc
} else {
write-host "Using this resource group, $mngedDiskRgName, for the managed disk, $mngedDiskName" -f y
}

$diskConfig = New-AzDiskConfig `
-AccountType $mngedStorageType `
-Location $sourceVhdLoc `
-DiskSizeGB $mngedDiskSize `
-CreateOption Import `
-StorageAccountId $sourceVhdStorageAccountResourceId `
-SourceUri $sourceVhdUri

New-AzDisk `
-Disk $diskConfig `
-ResourceGroupName $mngedDiskRgName `
-DiskName $mngedDiskName `
-Verbose

#endregion [CREATE MANAGED DISK WITH VHD]
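# Optional sanity check (not in the referenced sample): confirm the
# new managed disk exists and its provisioning succeeded
Get-AzDisk -ResourceGroupName $mngedDiskRgName -DiskName $mngedDiskName |
Select-Object Name, DiskSizeGB, ProvisioningState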

#region [Clean up]
# Remove-AzResourceGroup -Name $mngedDiskRgName -Force -AsJob
#endregion

Deploying Azure VM with Diagnostics Extension and Boot Diagnostics

This is a sample script for deploying an Azure VM with the Diagnostics Extension and Boot Diagnostics, each in a different resource group. The intent is to clearly illustrate the process with the required operations while spending minimal effort on code optimization.

Ideally, an Azure VM, its Diagnostics Extension, and Boot Diagnostics are deployed within the same resource group. In production, however, it may be necessary to organize them into individual resource groups for standardization, which is what this script demonstrates.

The script can be run as it is, or simply make changes in the customization section and leave the rest in place. For the VM Diagnostics Extension, the configuration file should be placed where the script is, or update the variable, $diagnosticsConfigPath, accordingly. This script uses a Storage Account Key for access, which allows the storage account to be in a subscription different from the one that deploys the VM. A sample configuration file, diagnostics_publicconfig_NoStorageAccount.xml, is available; notice there is no <StorageAccount> element specified in this file.

Here’s the user experience up to finishing the [Deploying] section in the script. By default, an Azure VM is deployed with Boot Diagnostics enabled; upon deploying the VM, the script disables its Boot Diagnostics. For the following sample run, the deployment took 3 minutes and 58 seconds.

Deploying Azure VM and setting Boot Diagnostics as disabled

Now with an Azure VM in place, the script adds the VM Diagnostics Extension, followed by enabling Boot Diagnostics. Here, either extension uses a storage account in a resource group different from the VM's, so this script creates three resource groups: one each for the VM itself, the Diagnostics Extension, and the Boot Diagnostics.

VM, Diagnostics, and Boot Diagnostics deployed with individual resource groups


write-host "
---------------------------------------------------------

This is a sample script for deploying an Azure VM
with Diagnostics Extension and Boot Diagnostics,
while each in a different resource group.

The intent is to clearly illustrate the process with
required operations, while paying minimal effort for
code optimization.

Ideally, an Azure VM, its Diagnostics Extension, and Boot Diagnostics
are deployed within the same resource group. However,
in production, it may be necessary to organize them into
individual resource groups for standardization,
which is what this script demonstrates.

The script can be run as it is. Or simply make changes
in the customization section and leave the rest in place.
For the VM Diagnostics Extension, the configuration file
should be placed where this script is, or update the variable,
$diagnosticsConfigPath, accordingly. This script uses a
Storage Account Key for access. This configuration allows
the storage account to be in a subscription different from
the one that deploys the VM. A sample configuration file,
diagnostics_publicconfig_NoStorageAccount.xml, is available at
diagnostics_publicconfig_NoStorageAccount.xml, is available at

https://1drv.ms/u/s!AuraBlxqDFRshVSl0IpWcsjRQkUX?e=3CGcgq

and notice there is no <StorageAccount> element specified in this file.

© 2020 Yung Chou. All Rights Reserved.

---------------------------------------------------------
"

Disconnect-AzAccount; Connect-AzAccount
# If multiple subscriptions
# Set-AzContext -SubscriptionId "xxxx-xxxx-xxxx-xxxx"

#region [Customization]

$cust=@{
initial='yc'
;region='southcentralus'
}

$diagnosticsConfigPath='diagnostics_publicconfig_NoStorageAccount.xml'

#region [vm admin credentials]

# 1. To hard-code
$cust+=@{
vmAdmin ='changeMe'
;vmAdminPwd='forDemoOnly!'
}
$vmAdmPwd=ConvertTo-SecureString $cust.vmAdminPwd -AsPlainText -Force
$vmAdmCred=New-Object System.Management.Automation.PSCredential ($cust.vmAdmin, $vmAdmPwd)

# 2. Or interactively
#$vmAdmCred = Get-Credential -Message "Enter the VM Admin credentials."

#endregion

$tag=$cust.initial+(get-date -format 'mmss')
Write-host "`nSession ID = $tag" -f y

# Variables for common values
$vmRGName=$tag+'-RG'
$loc=$cust.region
$vmName=$tag+'vm'

$deployment=@{
vmSize='Standard_B2ms'
;dataDiskSizeInGB=5
;publisher='MicrosoftWindowsServer'
;offer='WindowsServer'
;sku='2016-Datacenter'
;version='latest'
;vnetAddSpace='192.168.0.0/16'
;subnetAddSpace='192.168.1.0/24'
}

#endregion

#region [Deployment Prepping]

# Create a resource group
New-AzResourceGroup -Name $vmRGName -Location $loc
# Remove-AzResourceGroup -Name $vmRGName -AsJob

# Create a subnet configuration
$subnetConfig = `
New-AzVirtualNetworkSubnetConfig `
-Name 'default' `
-AddressPrefix ($deployment.subnetAddSpace) `
-WarningAction 'SilentlyContinue'

# Create a virtual network
$vnet = `
New-AzVirtualNetwork `
-ResourceGroupName $vmRGName `
-Location $loc `
-Name "$tag-vnet" `
-AddressPrefix $deployment.vnetAddSpace `
-Subnet $subnetConfig

# Create a public IP address and specify a DNS name
$pip = `
New-AzPublicIpAddress `
-ResourceGroupName $vmRGName `
-Location $loc `
-Name "$vmName-pip" `
-AllocationMethod Static `
-IdleTimeoutInMinutes 4

# Create an inbound network security group rule for port 3389
$nsgRuleRDP = `
New-AzNetworkSecurityRuleConfig `
-Name "$vmName-rdp" `
-Protocol Tcp `
-Direction Inbound `
-Priority 1000 `
-SourceAddressPrefix * `
-SourcePortRange * `
-DestinationAddressPrefix * `
-DestinationPortRange 3389 `
-Access Allow

# Create an inbound network security group rule for ports 80 and 443
$nsgRuleHTTP = `
New-AzNetworkSecurityRuleConfig `
-Name "$vmName-http" -Protocol Tcp `
-Direction Inbound `
-Priority 1010 `
-SourceAddressPrefix * `
-SourcePortRange * `
-DestinationAddressPrefix * `
-DestinationPortRange 80,443 `
-Access Allow

$nsg= `
New-AzNetworkSecurityGroup `
-ResourceGroupName $vmRGName `
-Location $loc `
-Name "$vmName-nsg" `
-SecurityRules $nsgRuleRDP, $nsgRuleHTTP `
-Force

# Create a virtual network card and associate with public IP address and NSG
$nic = `
New-AzNetworkInterface `
-Name "$vmName-nic" `
-ResourceGroupName $vmRGName `
-Location $loc `
-SubnetId $vnet.Subnets[0].Id `
-PublicIpAddressId $pip.Id `
-NetworkSecurityGroupId $nsg.Id

$vmConfig = `
New-AzVMConfig `
-VMName $vmName `
-VMSize $deployment.vmSize `
| Set-AzVMOperatingSystem `
-Windows `
-ComputerName $vmName `
-Credential $vmAdmCred `
| Set-AzVMSourceImage `
-PublisherName $deployment.publisher `
-Offer $deployment.offer `
-Skus $deployment.sku `
-Version $deployment.version `
| Add-AzVMNetworkInterface `
-Id $nic.Id

#endregion

#region [Deploying]

$StopWatch = New-Object -TypeName System.Diagnostics.Stopwatch; $stopwatch.start()
write-host "`nDeploying the vm, $vmName, to $loc...`n" -f y

$vmStatus = `
New-AzVM `
-ResourceGroupName $vmRGName `
-Location $loc `
-VM $vmConfig `
-WarningAction 'SilentlyContinue' `
-Verbose

Set-AzVMBgInfoExtension `
-ResourceGroupName $vmRGName `
-VMName $vmName `
-Name 'bginfo'

$vm = Get-AzVM -ResourceGroupName $vmRGName -Name $vmName
# Boot Diagnostics is enabled by default at deployment; disable it here
Set-AzVMBootDiagnostic `
-VM $vm `
-Disable `
| Update-AzVM
write-host "`nSet the vm, $vmName, with BootDiagnostic 'Disabled'`n" -f y

write-host '[Deployment Elapsed Time]' -f y
$stopwatch.stop(); $stopwatch.elapsed

#endregion

#region [Set VM Diagnostic Extension]
<# If using a diagnostics storage account name for the VM Diagnostics Extension,
the storage account must be in the same subscription as the virtual machine.
If the diagnostics storage account is in a different subscription than the
virtual machine's, then enable sending diagnostics data to that storage account
by explicitly specifying its name and key. #>
$vmDiagRGName=$tag+'vmDiag-RG'
$vmDiagStorageName=$tag+'vmdiagstore'

New-AzResourceGroup -Name $vmDiagRGName -Location $loc
#Remove-AzResourceGroup -Name $vmDiagRGName -AsJob

New-AzStorageAccount `
-ResourceGroupName $vmDiagRGName `
-AccountName $vmDiagStorageName `
-Location $loc `
-SkuName Standard_LRS

Set-AzVMDiagnosticsExtension `
-ResourceGroupName $vmRGName `
-VMName $vmName `
-DiagnosticsConfigurationPath $diagnosticsConfigPath `
-StorageAccountName $vmDiagStorageName `
-StorageAccountKey (
Get-AzStorageAccountKey `
-ResourceGroupName $vmDiagRGName `
-AccountName $vmDiagStorageName
).Value[0] `
-WarningAction 'SilentlyContinue'

$vmExtDiag = Get-AzVMDiagnosticsExtension -ResourceGroupName $vmRGName -VMName $vmName

#endregion

#region [Enable Boot Diagnostic]

# The resource group and the storage account are
# different from the vm's.

$vmBootDiagRGName=$tag+'bootDiag-RG'
$bootDiagStorageName=$tag+'bootdiagstore'

New-AzResourceGroup -Name $vmBootDiagRGName -Location $loc
#Remove-AzResourceGroup -Name $vmBootDiagRGName -AsJob

New-AzStorageAccount `
-ResourceGroupName $vmBootDiagRGName `
-AccountName $bootDiagStorageName `
-Location $loc `
-SkuName Standard_LRS

Set-AzVMBootDiagnostic `
-Enable `
-VM $vm `
-ResourceGroupName $vmBootDiagRGName `
-StorageAccountName $bootDiagStorageName `
| Update-AzVM

#endregion

#region [Session Summary]

($RGs = Get-AzResourceGroup | Where ResourceGroupName -like "$tag*") `
| ft ResourceGroupName, Location

($vms = Get-AzVM| Where ResourceGroupName -like "$tag*") `
| ft ResourceGroupName, Location, Name

($SAs = Get-AzStorageAccount | Where ResourceGroupName -like "$tag*") `
| ft ResourceGroupName, Location, StorageAccountName

#endregion

<# [Clean Up] 
Remove-AzResourceGroup -Name $vmRGName -AsJob 
Remove-AzResourceGroup -Name $vmDiagRGName -AsJob 
Remove-AzResourceGroup -Name $vmBootDiagRGName -AsJob 
#>

Microsoft Nano Server with Docker Enterprise Edition (EE)

This article details the process of installing the latest Docker EE, Version 17.06.2-ee-3, on a Microsoft Nano Server. I am sure there are different ways to do this; after a few iterations, here is one verified approach. A sample script is available.

Background

As shown below, adding the Containers server feature in Windows Server 2016 installs Docker EE Version 17.06.1-ee-2. In contrast, on Windows 10, adding Containers in 'Turn Windows features on or off' under Programs and Features of Control Panel installs Docker CE Version 17.03.1-ce, i.e. the Community Edition. Information about the two editions is available. The latest version of Docker EE is 17.06.2-ee-3. To keep all Docker EE instances at the same and the latest version, one may need to install Docker EE manually instead of employing the default version that ships with Windows Server. To manually install Docker EE on a Microsoft Nano Server, follow the steps below.

Windows Server 2016 patched on 10/05/2017

image

Windows 10 patched on 10/05/2017

image

Step 1 – Create a Nano Server vhdx file with the container package

First, use Nano Server Image Builder to create a vhdx file with the intended packages, including Containers. Notice that if Windows ADK (Assessment and Deployment Kit) is not in place, it will prompt for installing ADK, which is about a 6.7 GB download. Once ADK is in place, the image builder is wizard-driven and straightforward. I picked the vhdx format for building a Gen 2 VM, and as shown below, I also added the Containers, Hyper-V, and Anti-Virus packages. The Windows Server 2016 media used to create the Nano Server vhdx file is en_windows_server_2016_x64_dvd_9718492.iso, downloaded from my MSDN subscription.

clip_image001

Step 2 – Update the Nano Server OS

In Hyper-V Manager, I created and started a Gen 2 VM using the vhdx file created in Step 1, then logged in to the VM to find out the IP address, as shown below.

image

I did the following to connect to the host. Once connected (not shown below), test the Internet connectivity and update the DNS settings, as needed, by following the instructions in the sample script. What should be done first is to carry out a Windows update, which I did.

image

For this particular VM, I had already updated the OS before taking this screen capture, thus there were no applicable updates. Originally, two updates, KB4035631 and KB4038782, were listed as applicable. This update took 20 minutes with about a 2 GB download, followed by a reboot of the system. To examine the list of applicable updates beforehand, run the following in the PSSession before the Invoke-CimMethod in line 8:

$updateList = ($ci | Invoke-CimMethod -MethodName ScanForUpdates -Arguments @{SearchCriteria="IsInstalled=0";OnlineScan=$true}).Updates
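For context, since the full script appears only as a screen capture, the surrounding update flow on Nano Server via CIM looks roughly like this (a sketch based on the documented MSFT_WUOperationsSession class; $ci is the instance referenced above):

# In the PSSession to the Nano Server
$ci = New-CimInstance -Namespace 'root/Microsoft/Windows/WindowsUpdate' -ClassName 'MSFT_WUOperationsSession'
# ...the ScanForUpdates call above fits here to list applicable updates...
$ci | Invoke-CimMethod -MethodName ApplyApplicableUpdates   # download and install
Restart-Computer -Force                                     # reboot to complete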

Step 3 – Install Docker EE

To simply use the Docker version that the current Windows Server 2016 installation defaults to, namely Docker EE Version 17.06.1-ee-2 as stated earlier, installing the provider and the package will do. In this case, after updating and rebooting the OS, execute lines 1 and 4 to start a PSSession, then run the following PowerShell commands to install Docker.

Install-Module -Name DockerMsftProvider -Repository PSGallery -Force
Install-Package -Name docker -ProviderName DockerMsftProvider
Restart-Computer -Force; exit

To manually install Docker EE Version 17.06.2-ee-3, I did the following:

image

After downloading and extracting the source files in lines 27 and 28, the PATH in the registry was updated with the Docker installation location to persist the reference across sessions. Rather than rebooting the system, the current session's PATH was updated to start and verify the installation of the intended version of Docker in lines 39 to 41. The following is the user experience of executing lines 1 to 41, successfully installing Docker EE Version 17.06.2-ee-3.
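Since that script, too, appears only as a screen capture, here is a minimal sketch of the manual installation flow it describes; the download URL reflects Docker's windows-server channel layout at the time and is an assumption to verify:

# Run in a PSSession to the Nano Server
$version = '17.06.2-ee-3'
$url = "https://download.docker.com/components/engine/windows-server/17.06/docker-$version.zip"  # assumed URL

# Download and extract Docker under Program Files
Invoke-WebRequest -Uri $url -OutFile "$env:TEMP\docker.zip" -UseBasicParsing
Expand-Archive -Path "$env:TEMP\docker.zip" -DestinationPath $env:ProgramFiles -Force

# Persist the PATH change in the registry to survive sessions
$path = [Environment]::GetEnvironmentVariable('Path', 'Machine')
[Environment]::SetEnvironmentVariable('Path', "$path;$env:ProgramFiles\docker", 'Machine')

# Update the current session's PATH, register and start the service, then verify
$env:Path += ";$env:ProgramFiles\docker"
dockerd --register-service
Start-Service docker
docker version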

image

In a swarm, keeping all Docker instances at the same version is essential. In case there is a need to run Docker EE Version 17.06.2-ee-3 on Windows workloads, the presented steps achieve that.
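When aligning versions across nodes, a quick way to confirm what a node is running:

docker version --format '{{.Server.Version}}'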

What’s Next

Having installed Docker EE, start pulling down images and building containers. Deploying a swarm in a hybrid setting is what I plan to share next.

Microsoft Cortana Intelligence Suite Workshop Video Tutorial Series (5/5): Predictive Web Service

The last part of this video tutorial series includes three exercises. First, Exercise 6 uses Power BI Desktop to import the summary data from the Spark cluster and create a report with drag-and-drop to visualize the data. Exercise 7 is the exciting part: it configures and deploys a sample web app to consume the predictive web service published in Exercise 1, followed by a few simple tests. Finally, Exercise 8 shows how to clean up the resources deployed for the workshop.

Here is where you start.

The Microsoft Cortana Intelligence Workshop encompasses a set of processes and supporting tools to architect, construct, package, and deploy a predictive analytics solution. It is a friendly platform with no hardware to purchase and no software to configure. The workshop ultimately deploys a web application with a predictive analytics service. The app predicts the total number and the probability of flight delays between two cities based on date, time, carrier, and real-time forecast weather information. It is a relatively simple project; however, it includes all the essential components to formulate a modern and intelligent application.

Microsoft Cortana Intelligence Suite Workshop Video Tutorial Series by Yung Chou

The workshop is intended to be delivered as a whole-day event with presentation sessions and lab time. On the other hand, in about 75 minutes, the above video tutorial series can also offer you an experience that guides you through all the screens and interactions to successfully deploy the web service.

The next step is to apply what you learned from this series to your work. Good luck.

Microsoft Cortana Intelligence Suite Workshop Video Tutorial Series (3/5): Azure Data Factory

Machine learning, predictive analytics, web services, and all the rest that makes it happen are really about one thing: acquiring, processing, and acting on data. For the workshop, this is done with a Data Factory pipeline configured to automatically upload a dataset to the storage account of a Spark cluster, where Azure Machine Learning is integrated to score the dataset. Importantly, this addresses a fundamental requirement of data-centric applications in cloud computing: securely, automatically, and on demand moving data between an on-premises location and a designated one in the cloud. For IT today, the cloud can be a source, a destination, and a broker of data, and the ability to securely move data between an on-premises facility and a cloud destination is imperative for hybrid cloud settings and backup-and-restore scenarios. Azure Data Factory is a vehicle to achieve that ability.

image

The workshop video tutorial series is as listed below:

Specifically, Exercises 2-4 accomplish three things:

  • Creating an Azure Data Factory service and pairing it with a designated
    on-premises (file) server
  • Constructing an Azure Data Factory pipeline to automatically and securely
    move data from the designated on-premises server to a target Azure blob
    storage account
  • Enabling the developed Azure Machine Learning model to score the data
    provided by the Azure Data Factory pipeline

Notice that the lab VM is also employed as an on-premises file server hosting a dataset to be uploaded to Azure. At one moment, you may be using the lab VM as a workstation to access Azure remotely, and the next as an on-premises file server installing a gateway. When following the instructions, be mindful of where a task is carried out, as the context switching is not always apparent.

Microsoft Cortana Intelligence Suite Workshop Video Tutorial Series (2/5): Azure Machine Learning

This video tutorial series walks you through the development of a predictive analytics solution using Microsoft Cortana Intelligence Suite. The solution is realized as a web application that predicts the number of delays, with probability, for a flight segment between two cities with a particular airline at a particular date and time. The content of this series is based on what is published at http://aka.ms/CortanaManual by, and thanks to, Todd Kitta.

The first video is an introduction to how the workshop is structured and highlights a few important items to get you prepared. Four additional videos cover all eight exercises.

Here, the video walks through the process and operations in Exercise 1 to build an Azure Machine Learning model, as below,

image

and package it as a web service for consumption. If you have not tried Microsoft Azure Machine Learning Studio before, I hope you will enjoy and appreciate the built-in canvas and native drag-and-drop capability for creating and composing a model. It lets you explore and realize your creativity in multiple dimensions.

There are nine tasks in total. Here we go.

So, what qualifies the machine as being able to learn? What is learning anyway? Look for my upcoming blog posts to examine the concept of “learning” and more.

Microsoft Cortana Intelligence Suite Workshop Video Tutorial Series (1/5): Introduction

This series, based on content developed by Microsoft, offers a learning path with minimal time and effort to acquire the essential operation-level knowledge of Microsoft Cortana Intelligence Suite. The workshop steps through a process to construct and deploy a web application with predictive analytics, introducing key functional components along the way. By specifying origin and destination airports, a future date and time, and an airline carrier, this application predicts a flight delay and its probability based on the weather forecast. The video tutorial series runs about 75 minutes and captures exactly when and what you will see on the screen, and where and how to respond, based on the instructions of each exercise in the workshop.

I believe this series will most benefit those who function in a technical leadership capacity, including enterprise architects, solution architects, cloud architects, application architects, DevOps leads, etc., and are interested in the solution architecture of a predictive analytics application. Going through the recordings will provide you an end-to-end view and clarity on how to construct and deploy a predictive analytics solution, hence a better understanding of the processes and technologies, integration points, packaging and publishing, resource skill profiles, critical path, cost model, etc.

Cortana Intelligence Suite is a set of processes and tools. This workshop outlines an approach where analytic models, data, analysis, visualization, packaging, publishing, and deployment are delivered in an integrated fashion. In my view, this is a productive and the right way to start learning how to architect a predictive analytics solution. The above video is the first of five to accelerate your learning of Cortana Intelligence Suite, highlighting a few important items before starting the workshop.

Content Repo

The content of this workshop, made available by Todd Kitta, is at http://aka.ms/CortanaManual on GitHub. The readme file of the workshop details the scenario, architecture, prerequisites, and a list of links to the instructions of all eight exercises.

image

The above architecture diagram of the workshop depicts the functional components of a web application with predictive analytics. Here, the lab VM is also employed as an on-premises file server, the source of a data pipeline securely connected to a created Azure Data Factory service to automatically upload data to be scored by the Azure Machine Learning model. At the center is a Spark HDInsight cluster for data analysis, while the data is visualized with Power BI. The predictive analytics model is integrated and packaged as a web service consumed by a web application.

Introduction

Let’s first pay attention to a few important items before doing the workshop. There are eight exercises in this workshop, and I have grouped them into five videos: an introduction and four learning units.

I recommend reading the instructions of an exercise in their entirety before doing the exercise; this will help set the context and gain clarity on the objectives of each exercise. To do the workshop, one will need an active Azure subscription. Notice that a free trial account does provide sufficient credit for doing the entire workshop.

image

The workshop environment is a collection of resources deployed to Azure, as shown above, including:

  • A VM with Internet connectivity for a student to log in to and work on all the exercises, such that there is no need to download or install anything locally for this workshop
  • A Machine Learning workspace, accessed via Microsoft Azure Machine Learning Studio, to develop a predictive analytics experiment
  • A Spark cluster for hosting and analyzing data, including a scored dataset and a summary table
  • A number of storage accounts for storing workshop data

These resources do incur costs. To minimize them, deploy the workshop environment only when you are ready to work on the exercises, and delete it once you have completed the workshop. The deployment will take about fifteen minutes, if not more. Do deploy all resources and create all services in the same resource group, so everything can later be removed by simply deleting that resource group. Personally, when doing the workshop, I set aside at least a four-hour block, find a quiet room, and get a great cup of coffee. It is indeed a lot to consume.
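For example, with everything deployed into one resource group (the name here is illustrative), the cleanup is a single operation:

# Deploy all workshop resources into one resource group...
New-AzResourceGroup -Name 'cis-workshop-rg' -Location 'southcentralus'

# ...and once the workshop is completed, remove everything at once
Remove-AzResourceGroup -Name 'cis-workshop-rg' -Force -AsJob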

Enjoy the workshop. Let’s get started.