Two core approaches: traditional and Azure Virtual WAN
The above document has a topology diagram for each model.
Highlights of each model:

Traditional Azure Network Topology:
- Customer-managed routing and security.
- An Azure subscription can create up to 50 VNets across all regions.
- VNet peering links two VNets, either in the same region or in different regions, and enables you to route traffic between them using private IP addresses (peering traffic carries a nominal charge).
- Inbound and outbound traffic is charged at both ends of the peered networks. Network appliances such as VPN Gateway and Application Gateway that run inside a virtual network are also charged.

Azure Virtual WAN Network Topology:
- A Microsoft-managed networking service providing optimized and automated branch-to-branch connectivity through Azure.
- Virtual WAN allows customers to connect branches to each other and to Azure, centralizing their network and security needs with virtual appliances such as firewalls and Azure network and security services.
If you have a basic understanding of cloud computing but are new to Azure, I recommend starting with the following:
If you're not sure where to start, bookmark the Azure documentation, which collects all published Azure product documents, including tutorials, demos, and sample scripts/code.
“A migration landing zone is an environment that’s been provisioned and prepared to host certain workloads. These workloads are being migrated from an on-premises environment into Azure.”
“The CAF Foundation blueprint does not deploy a landing zone. Instead, it deploys the tools required to establish a governance MVP (minimum viable product) to begin developing your governance disciplines. This blueprint is designed to be additive to an existing landing zone and can be applied to the CAF Migration landing zone blueprint with a single action.”
“Getting your Azure landing zone (ALZ) done right and on time is important. Working with a certified Azure partner is a great way to get the support you need to build your ALZ.”
Option 1 – use Azure Migrate and Modernize.
Option 2 – find a partner offer for a landing zone in our marketplace.
Microsoft Defender for Cloud is a cloud-native application protection platform (CNAPP): a set of security measures and practices designed to protect cloud-based applications from various cyber threats and vulnerabilities.
Defender for Cloud combines the capabilities of:
- Cloud security operations (DevSecOps)
- Cloud security posture management (CSPM)
- Cloud workload protection platform (CWPP), with specific protections for servers, containers, storage, databases, and other workloads
Ransomware attacks target your data, your backups, and key documentation. The best way to avoid falling victim to ransomware is to implement preventive measures and deploy tools that protect your organization at every step attackers take to infiltrate your systems.
One of the most important steps you can take to protect your data and avoid paying a ransom is to have a reliable backup and restore plan for your business-critical information. Since ransomware attackers have invested heavily into neutralizing backup applications and operating system features like volume shadow copy, it is critical to have backups that are inaccessible to a malicious attacker.
Hot tier
An online tier optimized for storing data that is accessed or modified frequently. The hot tier has the highest storage costs but the lowest access costs.
Cool tier
An online tier optimized for storing data that is infrequently accessed or modified. Data in the cool tier should be stored for a minimum of 30 days. The cool tier has lower storage costs and higher access costs compared to the hot tier.
Cold tier
An online tier optimized for storing data that is infrequently accessed or modified. Data in the cold tier should be stored for a minimum of 90 days. The cold tier has lower storage costs and higher access costs compared to the cool tier.
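The tier tradeoff above can be sketched with a small cost model. The per-GB prices below are hypothetical placeholders (check the Azure Storage pricing page for real numbers in your region); the minimum-retention penalty mirrors the 30-day and 90-day rules described for the cool and cold tiers.

```python
# Sketch: comparing blob access-tier costs with HYPOTHETICAL prices.
PRICING = {
    # tier: (storage $/GB/month, read $/GB, minimum retention days)
    "hot":  (0.018, 0.000, 0),
    "cool": (0.010, 0.010, 30),
    "cold": (0.0045, 0.030, 90),
}

def monthly_cost(tier, gb_stored, gb_read, days_retained):
    """Rough monthly cost, including an early-deletion style charge if
    data is removed before the tier's minimum retention period."""
    storage_rate, read_rate, min_days = PRICING[tier]
    cost = gb_stored * storage_rate + gb_read * read_rate
    if days_retained < min_days:
        # Early deletion is billed as if the data stayed the minimum period.
        remaining = min_days - days_retained
        cost += gb_stored * storage_rate * (remaining / 30)
    return cost
```

With numbers like these, rarely read data is cheaper in cool or cold, but deleting it early or reading it often erodes the advantage, which is exactly the tradeoff the tier descriptions call out.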
Most frequently accessed files are cached on your local server and your least frequently accessed files are tiered to the cloud.
With cloud tiering enabled, this feature stores only frequently accessed (hot) files on your local server. Infrequently accessed (cool) files are split into namespace (file and folder structure) and file content. The namespace is stored locally, and the file content is stored in an Azure file share in the cloud.
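The hot/cool split described above can be sketched as a simple partition by last access time. This is only an illustration of the idea; Azure File Sync's actual tiering is driven by its volume free space and date policies, and the seven-day cutoff here is an arbitrary example value.

```python
# Sketch of the cloud-tiering idea: keep recently accessed file CONTENT
# locally; everything keeps its namespace entry (path) locally either way.
from datetime import datetime, timedelta

def plan_tiering(files, hot_days=7, now=None):
    """files: list of (path, last_access datetime, size_bytes) tuples.
    Returns (cached_locally, tiered_to_cloud) lists of paths; only the
    content of the tiered files would move to the Azure file share."""
    now = now or datetime.now()
    cutoff = now - timedelta(days=hot_days)
    cached = [path for path, last_access, _ in files if last_access >= cutoff]
    tiered = [path for path, last_access, _ in files if last_access < cutoff]
    return cached, tiered
```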
Azure File Sync is ideal for distributed access scenarios. For each of your offices, you can provision a local Windows Server as part of your Azure File Sync deployment. Changes made to a server in one office automatically sync to the servers in all other offices.
Azure File Sync is backed by Azure Files, which offers several redundancy options for highly available storage. Because Azure contains resilient copies of your data, your local server becomes a disposable caching device, and recovering from a failed server can be done by adding a new server to your Azure File Sync deployment.
Azure NetApp Files is an Azure-native, first-party, enterprise-class, high-performance file storage service.
It provides NAS volumes as a service for which you can create NetApp accounts, capacity pools, select service and performance levels, create volumes, and manage data protection.
For IT decision makers, here’s why it’s pertinent to consider Azure Arc:
An integrated management and governance solution that is centralized and unified, providing streamlined control and oversight.
Securely extending your on-prem and non-Azure resources into Azure Resource Manager (ARM), empowering you to:
Define, deploy, and manage resources in a declarative fashion using JSON templates for dependencies, configuration settings, policies, etc.
Manage Azure Arc-enabled servers, Kubernetes clusters, and databases as if they were running in Azure with consistent user experience.
Harness your existing Windows and Azure sysadmin skills honed from on-premises deployment.
When connecting to Azure Arc-enabled servers, you may perform many operational functions, just as you would with native Azure VMs. Key supported actions include:
Secure non-Azure servers with Microsoft Defender for Endpoint, included through Microsoft Defender for Cloud, for threat detection, vulnerability management, and proactive monitoring for potential security threats. Microsoft Defender for Cloud presents alerts and remediation suggestions for the threats it detects.
Keep an eye on the OS, processes, and dependencies, along with other resources, using VM insights. Additionally, collect, store, and analyze OS and workload logs, performance data, and events. This data can be ingested into Microsoft Sentinel for real-time analysis, threat detection, and proactive security measures across the entire IT environment.
Extended Security Updates (ESUs) are enabled by Azure Arc. IT can seamlessly deploy ESUs through Azure Arc to on-premises or multi-cloud environments, right from the Azure portal. In addition to providing centralized management of security patching, ESUs enabled by Azure Arc offer a flexible pay-as-you-go subscription model, compared to the classic ESUs offered through the Volume Licensing Center, which are purchased in yearly increments.
Azure OpenAI is a service provided by Microsoft Azure that allows users to access OpenAI’s powerful language models, including the GPT-3, Codex, and Embeddings model series. Users can access the service through REST APIs, Python SDK, or a web-based interface in the Azure OpenAI Studio.
Azure OpenAI Service gives customers advanced language AI with OpenAI GPT-4, GPT-3, Codex, and DALL-E models, with the enterprise security and privacy of Azure.
Azure OpenAI co-develops the APIs with OpenAI, ensuring compatibility and a smooth transition from one to the other
Enterprise-grade security with role-based access control (RBAC) and private networks
Essentially Security, Privacy, and Trust
Microsoft values a customer’s privacy and security of data. When using Azure AI services, Microsoft may collect and store data to improve the session experience and supportability of models. However, customer data is anonymized and aggregated to protect individual privacy.
Microsoft does not use customer data for fine-tuning or customizing models for individual users.
Microsoft's responsible AI guidance calls for building AI systems according to six principles:
Fairness and Inclusiveness
Make the same recommendations to everyone who has similar symptoms, financial circumstances, or professional qualifications.
Reliability and Safety
Operate as originally designed, respond safely to unanticipated conditions, and resist harmful manipulation.
Privacy and Security
Restrict access to resources and operations by user account or group.
Restrict incoming and outgoing network communications.
Encrypt data in transit and at rest.
Scan for vulnerabilities.
Apply and audit configuration policies.
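The first practice in the list, restricting access to resources and operations by user account or group, can be sketched as a toy role-based access check. The role names and permission sets below are illustrative, not Azure RBAC built-in roles.

```python
# Toy sketch of role-based access control: a user may perform an
# operation only if one of their roles grants that permission.
ROLE_PERMISSIONS = {
    "reader":      {"read"},
    "contributor": {"read", "write"},
    "owner":       {"read", "write", "delete", "assign_roles"},
}

def is_allowed(user_roles, operation):
    """Allow the operation if any of the user's roles grants it."""
    return any(operation in ROLE_PERMISSIONS.get(role, set())
               for role in user_roles)
```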
Microsoft has also created two open-source packages that can enable further implementation of privacy and security principles: SmartNoise and Counterfit
Transparency and Accountability
The model interpretability component provides multiple views into a model's behavior: global, local, and model-level explanations.
The people who design and deploy AI systems must be accountable for how their systems operate.
By default, Microsoft-managed encryption keys are used, but you also have the option to use customer-managed keys (CMK) for greater control over encryption key management.
With Azure OpenAI, customers get the security capabilities of Microsoft Azure while running the same models as OpenAI. Azure OpenAI offers private networking, regional availability, and responsible AI content filtering.
Azure OpenAI Service contains neural multi-class classification models aimed at detecting and filtering harmful content. The models cover four categories (hate, sexual, violence, and self-harm) across four severity levels (safe, low, medium, and high).
By default, content filtering is set to filter at the medium severity threshold for all four content harm categories, for both prompts and completions. That means content detected at severity level medium or high is filtered, while content detected at severity level low is not. The configurability feature, available in preview, allows customers to adjust these settings, separately for prompts and completions, to filter content in each category at different severity levels.
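The threshold behavior described above can be sketched as a small function: content is filtered when any category's detected severity meets or exceeds the configured threshold. The category and level names follow the document; the function itself is illustrative, not the service's API.

```python
# Severity levels in increasing order, as described for Azure OpenAI
# content filtering.
SEVERITY_ORDER = ["safe", "low", "medium", "high"]

def is_filtered(detected, threshold="medium"):
    """detected: dict mapping category -> detected severity level, e.g.
    {"hate": "safe", "sexual": "low", "violence": "medium", "self-harm": "safe"}.
    Returns True if any category meets or exceeds the threshold."""
    cutoff = SEVERITY_ORDER.index(threshold)
    return any(SEVERITY_ORDER.index(level) >= cutoff
               for level in detected.values())
```

With the default medium threshold, medium and high severity content is filtered while low and safe pass through, matching the behavior described above.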
GPT-3 (4k/16k): A series of models that can understand and generate natural language. This includes the new ChatGPT model.
DALL-E: A series of models that can generate original images from natural language.
Codex: A series of models that can understand and generate code, including translating natural language to code.
Embeddings: A set of models that can understand and use embeddings. An embedding is a special format of data representation that can be easily utilized by machine learning models and algorithms. The embedding is an information dense representation of the semantic meaning of a piece of text. Currently, we offer three families of Embeddings models for different functionalities: similarity, text search, and code search.
AZURE OPENAI ON YOUR DATA
Azure OpenAI on your data enables the GPT-35-Turbo and GPT-4 models to provide responses based on your data. You can access it through a REST API or the web-based interface in Azure OpenAI Studio to create a solution that connects to your data for an enhanced chat experience.
Per the document, Azure OpenAI on your data, Azure OpenAI Service supports the following file types:
Previous models were text-in and text-out, meaning they accepted a prompt string and returned a completion to append to the prompt. However, the GPT-35-Turbo and GPT-4 models are conversation-in and message-out.
Azure OpenAI processes text by breaking it down into tokens. Tokens can be words or just chunks of characters. For example, the word “hamburger” gets broken up into the tokens “ham”, “bur” and “ger”, while a short and common word like “pear” is a single token. Many tokens start with a whitespace, for example “ hello” and “ bye”.
The total number of tokens processed in a given request depends on the length of your input, output, and request parameters.
The quantity of tokens being processed will also affect your response latency and throughput for the models.
Pricing will be based on the pay-as-you-go consumption model with a price per unit for each model, which is similar to other Azure AI Services pricing models.
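The token and pricing mechanics above can be sketched numerically. The one-token-per-roughly-four-characters heuristic is a common rough rule for English text, and the price below is a hypothetical placeholder; use the service's actual tokenizer and the Azure pricing page for real numbers.

```python
# Rough sketch of how request size relates to pay-as-you-go cost.
def estimate_tokens(text):
    """Very rough token estimate (English averages ~4 characters/token).
    A real tokenizer splits on learned subword chunks, not character counts."""
    return max(1, len(text) // 4)

def estimate_cost(prompt, completion, price_per_1k_tokens=0.002):
    """Total billed tokens = input + output; price is HYPOTHETICAL."""
    total = estimate_tokens(prompt) + estimate_tokens(completion)
    return total, total / 1000 * price_per_1k_tokens
```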
SLA: This describes Microsoft’s commitments for uptime and connectivity for Microsoft Online Services covering Azure, Dynamics 365, Office 365, and Intune.
The system role, also known as the system message, is included at the beginning of the array. This message provides the initial instructions to the model. You can provide various kinds of information in the system role, including:
A brief description of the assistant
Personality traits of the assistant
Instructions or rules you would like the assistant to follow
Data or information needed for the model, such as relevant questions from an FAQ
You can customize the system role for your use case or just include basic instructions. The system role/message is optional, but it’s recommended to at least include a basic one to get the best results.
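The message-array shape described above can be sketched as a small builder: the system message comes first, followed by prior conversation turns, then the new user input. The content strings here are just examples.

```python
# Sketch of a chat-style messages array: system message first, then the
# conversation turns, ending with the latest user input.
def build_messages(system_instructions, history, user_input):
    """history: list of (role, content) pairs for prior turns,
    e.g. [("user", "Hi"), ("assistant", "Hello!")]."""
    messages = [{"role": "system", "content": system_instructions}]
    messages += [{"role": role, "content": content} for role, content in history]
    messages.append({"role": "user", "content": user_input})
    return messages
```

A payload in this shape is what the conversation-in, message-out models described earlier expect, with the system message carrying the assistant description, personality traits, rules, or FAQ data listed above.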
“To simplify our product naming and unify our product family, we’re changing the name of Azure AD to Microsoft Entra ID. Capabilities and licensing plans, sign-in URLs, and APIs remain unchanged, and all existing deployments, configurations, and integrations will continue to work as before. Starting today, you’ll see notifications in the administrator portal, on our websites, in documentation, and in other places where you may interact with Azure AD. We’ll complete the name change from Azure AD to Microsoft Entra ID by the end of 2023. No action is needed from you.
To automatically update scripts, reference this quickstart guide.
You may want to upgrade sooner rather than later, since the Az PowerShell module runs cross-platform and supports all Azure services, including Azure authentication mechanisms.