Microsoft Azure
Hybrid-cloud and Multi-cloud server management using Azure Arc
Govern AWS EC2 instances and on-premises servers, all within the Azure control plane
In this article, we'll take a look at an example of how to get started with Azure Arc-enabled servers and see what this control plane looks like from within the Azure portal.
Azure Arc simplifies governance and management of servers, Kubernetes clusters and data services such as SQL Server across Azure, on-premises environments and other clouds like AWS or GCP. Azure Arc provides a single, consistent control plane within Azure. What this means is that you can have your VMs, Kubernetes clusters or databases hosted and running anywhere outside of Azure, but manage them within Azure as if they were running on Azure.
When it comes to Azure Arc servers, the magic-enabler here is really the Connected Machine Agent. This agent needs to be installed and run within the off-Azure machines. Subsequent sections of this article will walk you through how to achieve this sample set up with Azure Arc, then demonstrate one use case of how to use Azure Policy with the Azure Arc-enabled server. Feel free to get hands-on and try this out yourself, or simply follow along through reading.
Prerequisites
The first thing we need to do is ensure the following three resource providers are registered in the Azure subscription from which you want to use Azure Arc:
- Microsoft.HybridCompute
- Microsoft.HybridConnectivity
- Microsoft.GuestConfiguration
You can check them by going to Subscriptions > Settings > Resource Providers, then finding them using the text filter.
If they’re not registered, simply click on the provider and then click the Register button. Alternatively, use Azure Cloud Shell to enable them. Here are the CLI commands you can use from Cloud Shell; just make sure you are in the Bash environment, and put your own subscription name or ID in the script.
az account set --subscription "{Subscription Name or ID}"
az provider register --namespace 'Microsoft.HybridCompute'
az provider register --namespace 'Microsoft.GuestConfiguration'
az provider register --namespace 'Microsoft.HybridConnectivity'
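Registration can take a few minutes to complete. If you want to confirm the state from the same Cloud Shell session, a quick check might look like the following sketch, using the standard az provider show command; rerun it until all three providers report Registered.

```shell
# Check the registration state of the three providers required by Azure Arc
for ns in Microsoft.HybridCompute Microsoft.HybridConnectivity Microsoft.GuestConfiguration; do
  echo "$ns: $(az provider show --namespace "$ns" --query registrationState --output tsv)"
done
```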
Another thing that is not made immediately apparent is that to onboard servers to Azure Arc, your target servers need network connectivity to Azure over port 443 to a set of known URLs, so that the Connected Machine Agent can work properly with Azure Arc. Typically this requires early engagement with the infrastructure team to establish the connectivity upfront. This means opening outbound traffic on TCP port 443 and allowing connectivity to the following URLs:
management.azure.com
login.windows.net
dc.services.visualstudio.com
agentserviceapi.azure-automation.net
*-agentservice-prod-1.azure-automation.net
*.guestconfiguration.azure.com
*.his.arc.azure.com
Connectivity can be via the internet over a public endpoint, through a proxy server, or over a private endpoint. Please consult this article for more details.
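Before engaging the infrastructure team, a rough first pass at verifying outbound connectivity can be scripted from the target server itself. The snippet below is just a sketch: it only covers the non-wildcard endpoints from the list above (the wildcard entries resolve to region- and workspace-specific hostnames), and a BLOCKED result usually points to a firewall or proxy rule that needs adjusting.

```shell
#!/usr/bin/env bash
# Rough outbound check over TCP 443 for the non-wildcard Azure Arc endpoints.
endpoints=(
  management.azure.com
  login.windows.net
  dc.services.visualstudio.com
  agentserviceapi.azure-automation.net
)

for host in "${endpoints[@]}"; do
  if curl --silent --head --max-time 10 "https://${host}" > /dev/null; then
    echo "OK      ${host}"
  else
    echo "BLOCKED ${host}"
  fi
done
```

Once the Connected Machine Agent is installed, it also ships with its own, more thorough connectivity verification.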
Prepare to install Connected Machine Agent
Now we’re ready to extend our Azure control plane. For this, we go to Azure Arc on the Azure portal, and will be immediately greeted by a list of workloads that can be managed using Azure Arc. For the scope of this article, we’ll pick Servers.
After clicking on Add, we’ll be presented with a few options. There are a few ways to add servers to Azure Arc. Depending on your use case, you can:
- Add a single server. This option will generate a script to run on your target server. You use your Azure login credentials to run this script, and it is probably the most straightforward approach for adding servers one at a time, especially for Windows servers. Just note that with this approach, you need to log in with an Azure credential that has the Azure Connected Machine Onboarding role.
- Add multiple servers. If using this option, Azure will generate a script that handles authentication through a service principal. With this method, you will first have to have a service principal created. To do this, navigate to Azure Arc > Service Principals and follow through the prompt to create a service principal that you can use to onboard your workload. A service principal is valid for a limited amount of time. When you create the service principal, note down the Client ID and Client Secret, as you will need to put this into the script during onboarding.
- Add servers with Azure Migrate. If you have a VMware environment on-premises, this is quite a powerful approach for onboarding the VMs, as it allows you to automatically onboard VMs to Azure Arc with the Azure Migrate: Discovery and assessment tool. This is beyond the scope of this article, but if you are interested, I encourage you to check out this article from Microsoft that tells you how to do it.
- Add servers from Azure Automation Update Management. This is useful if you want to onboard servers outside of Azure that you already manage using Azure Automation Update Management. This onboarding process automates the download and installation of the Connected Machine Agent using the Add-AzureConnectedMachines Azure automation runbook.
I opted for the second method: add multiple servers. This is because I don’t want to go through the hassle of opening a web browser and having to log on; I can just put the service principal details in the script and run it, which makes things a bit faster and easier. If you want to use this method, just remember to create a service principal beforehand. It’s very easy to do: simply go to the Service Principals blade within Azure Arc, click Add and follow through the prompt. If you need more details, follow this article.
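If you prefer the command line over the portal blade, a service principal for onboarding can also be created with the standard az ad sp create-for-rbac command. This is a sketch: the service principal name, subscription ID and resource group below are placeholders, and the role is the built-in Azure Connected Machine Onboarding role mentioned earlier.

```shell
# Create a service principal for Arc onboarding, scoped to the target
# resource group (the name, subscription ID and resource group are placeholders)
az ad sp create-for-rbac \
  --name "Arc-Onboarding-SP" \
  --role "Azure Connected Machine Onboarding" \
  --scopes "/subscriptions/<Subscription-Id>/resourceGroups/AzArcPoc"
```

The output includes an appId (the Client ID) and a password (the Client Secret), which are the two values you need to note down for the onboarding script.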
On the next screen, you will need to choose a resource group, the region (where the server metadata will be stored), the OS of the target server you are onboarding, and how you’ll connect: public endpoint, via proxy, or private connection. You can then also specify tags.
Finally, the portal will display the script, which can either be downloaded or copied so it can then be run on the target server you want to onboard to Azure Arc. Here’s a sample of what the script looks like for onboarding the Linux server I have on AWS.
# Add the service principal application ID and secret here
servicePrincipalClientId="<Service-Principal-Id>"
servicePrincipalSecret="<Client-Secret>"

# Download the installation package
wget https://aka.ms/azcmagent -O ~/install_linux_azcmagent.sh

# Install the hybrid agent
bash ~/install_linux_azcmagent.sh

# Run connect command
azcmagent connect --service-principal-id "$servicePrincipalClientId" --service-principal-secret "$servicePrincipalSecret" --resource-group "AzArcPoc" --tenant-id "[Az_Tenant_Id]" --location "[Az_Region]" --subscription-id "[Az_Subscription_Id]" --cloud "AzureCloud" --tags "Datacenter=LinuxOnAWS,CountryOrRegion=AWS-ap-southeast-1,GuestOS='Ubuntu 20'" --correlation-id "[Arc_VmId_Guid]"

if [ $? = 0 ]; then echo -e "\033[33mTo view your onboarded server(s), navigate to https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.HybridCompute%2Fmachines\033[m"; fi
Install the Connected Machine Agent
Obviously, the next step from here is to go into the server and install the Connected Machine Agent using the script generated from Azure. Just remember that you need to do this using a local admin account.
Note: Technically, you don’t have to go into each of the servers if you have other means of running scripts remotely, such as using the AWS Systems Manager Run Command to execute scripts at scale on AWS, but I only have one EC2 instance for this, so that would be overkill.
Here’s a snapshot of the moment when I logged into the Linux EC2 instance on AWS using EC2 Instance Connect, in the midst of installing the Connected Machine Agent.
After the script is run, I would see this success message:
To onboard another server, repeat the process above. Because I have chosen to generate a script that can be used to onboard multiple servers, I can easily run the same script on multiple servers, be it on-premises or on another cloud provider like AWS. However, we do have to generate two scripts: one for each OS. For Windows, just remember to run it in PowerShell.
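Once a server has been onboarded, you can sanity-check the result either from the server itself or from Cloud Shell. The commands below are a sketch: azcmagent show reports the local agent's status, while az connectedmachine requires the connectedmachine Azure CLI extension, and the resource group name matches the one used in my sample script.

```shell
# On the onboarded server: show the local agent status and Azure resource details
azcmagent show

# From Azure Cloud Shell: list Arc-enabled machines in the resource group
# (prompts to install the "connectedmachine" CLI extension if it is missing)
az connectedmachine list --resource-group "AzArcPoc" --output table
```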
Control Plane in Azure Arc
If you have followed this article up until this point, and especially if you chose to try it out hands-on, congratulations: you have just successfully set up a hybrid-cloud and multi-cloud environment using Azure Arc. You can now see both an AWS Linux machine and an on-premises Windows machine through a single pane of glass within Azure, as if these servers were running in Azure.
Through this single pane of glass, we can view details of the machine running outside of Azure. The following shows the overview of the machine, where we can see the agent version, OS name, OS version, Cloud provider, etc.
Now that we have our machines in Azure Arc, what can we do with them? Well, there are quite a lot of powerful things that can be done, such as:
- Monitoring: You can now monitor the performance and health of these servers all within Azure by deploying the Azure Monitor Agent, which will collect performance metrics and logs (event log on Windows and syslog on Linux) and send them across to Azure Monitor. It can send to multiple Log Analytics Workspaces, also known as multi-homing. Once the logs and metrics land in Azure Monitor, they can be viewed, queried and analysed using Log Analytics and Metrics Explorer.
- Security: Use Microsoft Defender for Endpoint for threat detection and vulnerability management, and to proactively monitor for potential security threats; you can also automatically refresh certificates stored in Azure Key Vault.
- Simplify operations: Use Update Management to manage operating system updates for both Windows and Linux machines, run configuration or deployment scripts at scale using the custom script VM extension available for both Windows and Linux servers.
- Governance: You can use Azure Policy to check compliance of your servers that are running outside of Azure, and use remediation to fix all non-compliant machines. We’ll take a look at an example of this shortly.
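To give a taste of what the monitoring scenario looks like outside of the portal, the Azure Monitor Agent can also be pushed to an Arc-enabled machine as a VM extension from the CLI. This is a sketch, not a definitive recipe: the machine name and region are placeholders, the resource group is the one from my sample setup, and the command requires the connectedmachine Azure CLI extension.

```shell
# Deploy the Azure Monitor Agent to an Arc-enabled Linux machine as a VM extension
az connectedmachine extension create \
  --machine-name "<Arc-Machine-Name>" \
  --resource-group "AzArcPoc" \
  --name "AzureMonitorLinuxAgent" \
  --publisher "Microsoft.Azure.Monitor" \
  --type "AzureMonitorLinuxAgent" \
  --location "<Az_Region>"
```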
Use Azure Policy for resource compliance
Building on the sample set up we have done so far, I am now going to assign an Azure Policy to the Linux VM running on AWS. The policy that I will use for this example is the Configure Linux Arc-enabled machines to run Azure Monitor Agent built-in policy.
While doing the policy assignment, I have also configured remediation. See this article for more details on using remediation with Azure Policy. I will be using a remediation task to install the Azure Monitor Agent on this EC2 instance without having to go into the EC2 instance.
A few minutes later, the Azure Policy evaluation kicks in and, sure enough, the Amazon EC2 instance shows as non-compliant because it doesn’t have the Azure Monitor Agent installed on it.
In order to fix this compliance issue, we can simply go into the policy and create a remediation task.
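Creating the remediation task through the portal is straightforward, but for completeness, the equivalent CLI call is sketched below; the remediation name is arbitrary, and the policy assignment name is a placeholder for whatever you named the assignment earlier.

```shell
# Create a remediation task for the non-compliant machines under an assignment
az policy remediation create \
  --name "install-ama-remediation" \
  --resource-group "AzArcPoc" \
  --policy-assignment "<Policy-Assignment-Name-Or-Id>"
```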
A few minutes later, the remediation task will have completed, resulting in the Linux VM running on AWS now showing as compliant with the policy.
If we go to VM Extensions, we can also see that the remediation task has installed the AzureMonitorLinuxAgent VM extension to get the agent onto the Amazon EC2 instance. Note that all of this is done just through configuration within Azure, without even having to SSH into the Amazon EC2 instance.
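The same verification can be done from the CLI. Here is a sketch, assuming the connectedmachine Azure CLI extension is installed and using my sample resource group, with the machine name as a placeholder:

```shell
# Confirm the AzureMonitorLinuxAgent extension is present on the Arc-enabled machine
az connectedmachine extension list \
  --machine-name "<Arc-Machine-Name>" \
  --resource-group "AzArcPoc" \
  --output table
```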
I hope this article has helped you understand what Azure Arc is, shown one way to get started with Arc-enabled servers, and given you a sense of what can be done and achieved using Azure Arc. I highly encourage you to review the official Azure Arc documentation for more details.
Thank you for reading.