Azure Python SDK - getting the machine state

Using the Python API for Azure, I want to get the state of one of my machines.
I can't find where this information is exposed.
Does anyone know?
After looking around, I found this:
get_with_instance_view(resource_group_name, vm_name)
https://azure-sdk-for-python.readthedocs.org/en/latest/ref/azure.mgmt.compute.computemanagement.html#azure.mgmt.compute.computemanagement.VirtualMachineOperations.get_with_instance_view

If you are using the legacy API (this will work for classic virtual machines), use:
from azure.servicemanagement import ServiceManagementService

sms = ServiceManagementService('your subscription id', 'your-azure-certificate.pem')
your_deployment = sms.get_deployment_by_name('service name', 'deployment name')
for role_instance in your_deployment.role_instance_list:
    print(role_instance.instance_name, role_instance.instance_status)
If you are using the current API (this will not work for classic VMs), use:
from azure.common.credentials import UserPassCredentials
from azure.mgmt.compute import ComputeManagementClient
import retry

credentials = UserPassCredentials('username', 'password')
compute_client = ComputeManagementClient(credentials, 'your subscription id')

@retry.retry(RuntimeError, tries=3)
def get_vm(resource_group_name, vm_name):
    '''
    You need to retry this in case the credentials token expires;
    that's where the decorator comes in.
    This will return all the data about the virtual machine.
    '''
    return compute_client.virtual_machines.get(
        resource_group_name, vm_name, expand='instanceView')

@retry.retry((RuntimeError, IndexError,), tries=-1)
def get_vm_status(resource_group_name, vm_name):
    '''
    This will just return the status of the virtual machine.
    Sometimes the status may be unknown, as shown by the Azure portal;
    in that case statuses[1] doesn't exist, hence retrying on IndexError.
    Also, it may take on the order of minutes for the status to become
    available, so the decorator will bang on it forever.
    '''
    return compute_client.virtual_machines.get(
        resource_group_name, vm_name, expand='instanceView'
    ).instance_view.statuses[1].display_status
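For example, a minimal sketch of calling it (with your own resource group and VM names):
print(get_vm_status('my-resource-group', 'my-vm'))  # e.g. 'VM running'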

If you are using Azure Cloud Services, you should use the Role Environment API, which provides state information about the current role instance of your service.
https://msdn.microsoft.com/en-us/library/azure/microsoft.windowsazure.serviceruntime.roleenvironment.aspx

In the new Resource Manager API, there's a function:
get_with_instance_view(resource_group_name, vm_name)
It's the same function that gets the machine, but it also returns an instance view that contains the machine state.
https://azure-sdk-for-python.readthedocs.org/en/latest/ref/azure.mgmt.compute.computemanagement.html#azure.mgmt.compute.computemanagement.VirtualMachineOperations.get_with_instance_view
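A minimal sketch of calling it, assuming a compute_client built with the same legacy (pre-1.0) azure.mgmt.compute SDK as in the other answers:
vm = compute_client.virtual_machines.get_with_instance_view(resource_group_name, vm_name).virtual_machine
print(vm.instance_view.statuses[1].display_status)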

Use the get_deployment_by_name method to get the instance status (classic/service management API):
from azure.servicemanagement import ServiceManagementService

subscription_id = '****-***-***-**'
certificate_path = 'CURRENT_USER\\my\\***'
sms = ServiceManagementService(subscription_id, certificate_path)
result = sms.get_deployment_by_name("your service name", "your deployment name")
You can get each instance's status via its instance_status property.
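For example, a minimal sketch printing every role instance's status:
for role_instance in result.role_instance_list:
    print(role_instance.instance_name, role_instance.instance_status)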
Please see this post https://stackoverflow.com/a/31404545/4836342

As mentioned in other answers, the Azure Resource Manager API has an instance view query that shows the state of running VMs.
The documentation listing for this is here: VirtualMachineOperations.get_with_instance_view()
Typical code to get the status of a VM is something like this:
resource_group = "myResourceGroup"
vm_name = "myVMName"
creds = azure.mgmt.common.SubscriptionCloudCredentials(…)
compute_client = azure.mgmt.compute.ComputeManagementClient(creds)
vm = compute_client.virtual_machines.get_with_instance_view(resource_group, vm_name).virtual_machine
# Index 0 is the ProvisioningState; index 1 is the instance PowerState.
# display_status will typically be "VM running", "VM stopped", etc.
vm_status = vm.instance_view.statuses[1].display_status

There is no direct way to get the state of a virtual machine while listing them.
But we can loop over the listed VMs, get each machine's instance_view, and grab its power state.
In the code block below, I do exactly that and dump the values into a .csv file to build a report.
import csv
from azure.common.credentials import ServicePrincipalCredentials
from azure.mgmt.compute import ComputeManagementClient

def get_credentials():
    subscription_id = "*******************************"
    credential = ServicePrincipalCredentials(
        client_id="*******************************",
        secret="*******************************",
        tenant="*******************************"
    )
    return credential, subscription_id

credentials, subscription_id = get_credentials()

# Initializing the compute client with the credentials
compute_client = ComputeManagementClient(credentials, subscription_id)

resource_group_name = "**************"
json_list = []

# Listing the virtual machines in the resource group
vm_list = compute_client.virtual_machines.list(resource_group_name=resource_group_name)

# Looping over the virtual machines to grab the state of each machine.
# Note: build a fresh dict on each iteration; reusing a single dict would
# make every row in the report show the last VM's values.
for vm in vm_list:
    vm_state = compute_client.virtual_machines.instance_view(
        resource_group_name=resource_group_name, vm_name=vm.name)
    json_list.append({
        "Vm_name": vm.name,
        "Vm_state": vm_state.statuses[1].code,
        "Resource_group": resource_group_name,
    })

csv_columns = ["Vm_name", "Vm_state", "Resource_group"]
with open("vm_state.csv", "w+", newline="") as f:
    csv_file = csv.DictWriter(f, fieldnames=csv_columns)
    csv_file.writeheader()
    for row in json_list:
        csv_file.writerow(row)
To grab the state of a single virtual machine, where you know its resource_group_name and vm_name, just use the block below.
vm_state = compute_client.virtual_machines.instance_view(resource_group_name="foo_rg_name", vm_name="foo_vm_name")
power_state = vm_state.statuses[1].code
print(power_state)

As per the new API reference, this worked for me:
vm_status = compute_client.virtual_machines.instance_view(GROUP_NAME, VM_NAME).statuses[1].code
It will return one of these states, based on the current state:
"PowerState/stopped", "PowerState/running", "PowerState/stopping", "PowerState/starting"

Related

Azure managed disk's backup history

I am new to Azure and the Azure Python SDK, and I have a few questions. Using the Python SDK:
Given a VM, how do I get all the attached disks and their complete information?
How do I get the backup history of a disk, and how do I know which backup job executed most recently?
Please explain clearly, with references if possible. Any help will be appreciated.
The code below was suggested by @Shui Shengbao here, to list the disks inside the resource group:
from azure.common.credentials import ServicePrincipalCredentials
from azure.mgmt.compute import ComputeManagementClient
from azure.mgmt.resource import ResourceManagementClient, SubscriptionClient

# Tenant ID for your Azure Subscription
TENANT_ID = ''
# Your Service Principal App ID
CLIENT = ''
# Your Service Principal Password
KEY = ''

credentials = ServicePrincipalCredentials(
    client_id=CLIENT,
    secret=KEY,
    tenant=TENANT_ID
)

subscription_id = ''
compute_client = ComputeManagementClient(credentials, subscription_id)
rg = 'shuilinux'
disks = compute_client.disks.list_by_resource_group(rg)
for disk in disks:
    print(disk)
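For the first question (all disks attached to a given VM), a minimal sketch, assuming your own resource group and VM names: the VM model's storage_profile carries the OS disk and the data disks.
vm = compute_client.virtual_machines.get(rg, 'your-vm-name')
print(vm.storage_profile.os_disk.name)
for data_disk in vm.storage_profile.data_disks:
    # name, size and LUN of each attached data disk
    print(data_disk.name, data_disk.disk_size_gb, data_disk.lun)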
Also refer to this thread to fetch the backup details of an Azure VM using the Python SDK.

Invalid scope for ComputeManagementClient for Azure US Government account?

I'm trying to create a simple script that lists out the virtual machines on my Azure US Government account. However, I am faced with this error:
azure.core.exceptions.ClientAuthenticationError: DefaultAzureCredential failed to retrieve a token from the included credentials.
Attempted credentials:
VisualStudioCodeCredential: Azure Active Directory error '(invalid_scope) AADSTS70011: The provided request must include a 'scope' input parameter. The provided value for the input parameter 'scope' is not valid. The scope https://management.azure.com/.default https://management.core.usgovcloudapi.net/.default is not valid. static scope limit exceeded.
This is the code I have used:
def get_access_to_virtual_machine():
    subscription_id = key.SUBSCRIPTION_ID
    credentials = DefaultAzureCredential(
        authority=AZURE_US_GOV_CLOUD.endpoints.active_directory,
        tenant_id=key.TENANT_ID,
        exclude_environment_credential=True,
        exclude_managed_identity_credential=True,
        exclude_shared_token_cache_credential=True)
    compute_client = ComputeManagementClient(
        credential=credentials,
        subscription_id=subscription_id,
        base_url=AZURE_US_GOV_CLOUD.endpoints.resource_manager,
        credential_scopes=[AZURE_US_GOV_CLOUD.endpoints.active_directory_resource_id + '.default'])
    return compute_client

def get_azure_vm(resource_group_name, virtual_machine_name):
    compute_client = get_access_to_virtual_machine()
    vm_data = compute_client.virtual_machines.get(
        resource_group_name,
        virtual_machine_name,
        expand='instanceView')
    return vm_data
I have signed into my Azure US Government account using Visual Studio as well. The error stems from the compute_client.virtual_machines.get() command. I am 100% sure the credentials I am using are correct, but I am really stuck on this. I've tried using ClientSecretCredential instead of DefaultAzureCredential and ran into the same ClientAuthenticationError. In addition, I'm not sure where the scope parameter the error mentions should be passed in.
For Azure Subscriptions management, the scope should be {management-endpoint}/user_impersonation and not {management-endpoint}/.default. For example, in Azure Commercial the scope will be https://management.azure.com/user_impersonation.
I'm not 100% sure but the management endpoint for Azure Government is either https://management.usgovcloudapi.net/ or https://management.core.usgovcloudapi.net/. Based on the correct endpoint, your scope value should be either https://management.usgovcloudapi.net/user_impersonation or https://management.core.usgovcloudapi.net/user_impersonation.
Please try by changing that.
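A minimal sketch of that change, reusing the variables from your snippet and assuming the first endpoint (swap in the other one if it turns out to be the correct one for your cloud):
compute_client = ComputeManagementClient(
    credential=credentials,
    subscription_id=subscription_id,
    base_url=AZURE_US_GOV_CLOUD.endpoints.resource_manager,
    # user_impersonation scope instead of .default, per the note above
    credential_scopes=['https://management.usgovcloudapi.net/user_impersonation'])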
UPDATE
Looking at the GitHub issue here, it seems there's an issue with the SDK itself. Please try the solution proposed here.
Not sure which version of the Python SDK you have, but I was able to load the latest modules and get the following code to run in the Azure US Government cloud, pulling back VM data:
import os
from msrestazure.azure_cloud import AZURE_US_GOV_CLOUD as CLOUD
from azure.mgmt.compute import ComputeManagementClient
from azure.identity import DefaultAzureCredential

subscription_id = 'xxx-xxx-xxx-xxxx'
tenant_id = 'xxxx-xxxx-xxxx-xxxx'
resource_group_name = 'rgName'
vm_name = 'vmName'

credential = DefaultAzureCredential(
    authority=CLOUD.endpoints.active_directory,
    tenant_id=tenant_id)

compute_client = ComputeManagementClient(
    credential, subscription_id,
    base_url=CLOUD.endpoints.resource_manager,
    credential_scopes=[CLOUD.endpoints.resource_manager + '/.default'])

vm_data = compute_client.virtual_machines.get(
    resource_group_name,
    vm_name,
    expand='instanceView')
print(f"{vm_data.name}")
Some things to note:
You had a few of the authentication methods excluded; make sure the method you are expecting is not among the exclusions.
The latest SDK sets the environment in the import; I aliased it to CLOUD so that the same code can be used for various clouds by simply changing the import statement.
The latest SDK does seem to want '/.default' as part of the credential_scopes.

List google cloud compute engine active instance

I'm looking to find all the active resources (like Compute Engine, GKE, etc.) and their respective zones.
I tried the Python code below to print that, but it prints information for every zone where Compute Engine is available. Can someone please point me to the right functions?
compute = googleapiclient.discovery.build('compute', 'v1')
request = compute.instances().aggregatedList(project=project)
while request is not None:
    response = request.execute()
    for name, instances_scoped_list in response['items'].items():
        pprint((name, instances_scoped_list))
    request = compute.instances().aggregatedList_next(previous_request=request, previous_response=response)
You can list all the instances in your project using the gcloud compute instances list command or the instances.list() method.
To list all instances in a project in table form, run:
gcloud compute instances list
You will get something like :
NAME        ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP   STATUS
instance-1  us-central1-a  n1-standard-1               10.128.0.44  xx.xx.xxx.xx  RUNNING
instance-2  us-central1-b  n1-standard-1               10.128.0.49  xx.xx.xxx.xx  RUNNING
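To restrict that listing to running instances only, the same command accepts a filter:
gcloud compute instances list --filter="status=RUNNING"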
Edit 1
As you mentioned, aggregatedList() is the correct method, and to get the required information you need to walk the JSON response body.
If you need specific fields, you can check the response body documentation.
Also, you can use this code as a guide; it fetches all the information from the instances.
from pprint import pprint
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials

credentials = GoogleCredentials.get_application_default()
service = discovery.build('compute', 'v1', credentials=credentials)

# Project ID for this request.
project = "{Project-ID}"  # TODO: Update placeholder value.

request = service.instances().aggregatedList(project=project)
while request is not None:
    response = request.execute()
    # 'items' maps each zone to a scoped list that may or may not contain instances
    for scoped_list in response.get('items', {}).values():
        for instance in scoped_list.get('instances', []):
            pprint(instance)
    request = service.instances().aggregatedList_next(previous_request=request, previous_response=response)
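If you only want the active (RUNNING) instances and their zones, a minimal sketch that filters the same aggregated response on each instance's status field:
request = service.instances().aggregatedList(project=project)
while request is not None:
    response = request.execute()
    for zone, scoped_list in response.get('items', {}).items():
        for instance in scoped_list.get('instances', []):
            if instance.get('status') == 'RUNNING':
                # zone keys look like "zones/us-central1-a"
                print(instance['name'], zone, instance['status'])
    request = service.instances().aggregatedList_next(previous_request=request, previous_response=response)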

Create linked service with key Vault using python

Here is my problem: I am trying to create a linked service using the Python SDK, and I was successful when I provided the storage account name and key. But I would like to create the linked service with a Key Vault reference. The code below runs fine and creates the linked service; however, when I go to Data Factory and test the connection, it fails. Please help!
store = LinkedServiceReference(reference_name='LS_keyVault_Dev')
storage_string = AzureKeyVaultSecretReference(store=store, secret_name='access_key')
ls_azure_storage = AzureStorageLinkedService(connection_string=storage_string)
ls = adf_client.linked_services.create_or_update(rg_name, df_name, ls_name, ls_azure_storage)
Error Message
Invalid storage connection string provided to 'AzureTableConnection'. Check the storage connection string in configuration. No valid combination of account information found.
I tested your code; it created the linked service successfully, and when I navigated to the portal to test the connection, it also worked. You can follow the steps below.
1. Navigate to the Azure key vault in the portal -> Secrets -> create a secret. I'm not sure how you were able to use access_key as the name of the secret; per my test it is invalid. So in my sample I use accesskey as the name of the secret, and store the connection string of the storage account in it.
2. Navigate to the Access policies of the key vault and add the MSI of your data factory with the correct secret permission. If you have not enabled the MSI of the data factory, follow this link to generate it; it is used by the Azure Key Vault linked service to access your key vault secret.
3. Navigate to the Azure Key Vault linked service of your data factory and make sure the connection is successful.
4. Use the code below to create the storage linked service.
Versions of the libraries:
azure-common==1.1.23
azure-mgmt-datafactory==0.9.0
from azure.common.credentials import ServicePrincipalCredentials
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import *

subscription_id = '<subscription-id>'
credentials = ServicePrincipalCredentials(client_id='<client-id>', secret='<client-secret>', tenant='<tenant-id>')
adf_client = DataFactoryManagementClient(credentials, subscription_id)

rg_name = '<resource-group-name>'
df_name = 'joyfactory'
ls_name = 'storageLinkedService'

# AzureKeyVault1 is the name of the Azure Key Vault linked service
store = LinkedServiceReference(reference_name='AzureKeyVault1')
storage_string = AzureKeyVaultSecretReference(store=store, secret_name='accesskey')
ls_azure_storage = AzureStorageLinkedService(connection_string=storage_string)
ls = adf_client.linked_services.create_or_update(rg_name, df_name, ls_name, ls_azure_storage)
print(ls)
5. Go back to the linked service page, refresh, and test the connection. It works fine.

Azure Data Factory Pipelines: Creating pipelines with Python: Authentication (via az cli)

I'm trying to create Azure Data Factory pipelines via Python, using the example provided by Microsoft here:
https://learn.microsoft.com/en-us/azure/data-factory/quickstart-create-data-factory-python
def main():
    # Azure subscription ID
    subscription_id = '<Specify your Azure Subscription ID>'

    # This program creates this resource group. If it's an existing resource group, comment out the code that creates the resource group.
    rg_name = 'ADFTutorialResourceGroup'

    # The data factory name. It must be globally unique.
    df_name = '<Specify a name for the data factory. It must be globally unique>'

    # Specify your Active Directory client ID, client secret, and tenant ID
    credentials = ServicePrincipalCredentials(client_id='<Active Directory application/client ID>', secret='<client secret>', tenant='<Active Directory tenant ID>')
    resource_client = ResourceManagementClient(credentials, subscription_id)
    adf_client = DataFactoryManagementClient(credentials, subscription_id)

    rg_params = {'location': 'eastus'}
    df_params = {'location': 'eastus'}
However, I cannot pass the credentials in as shown above, since Azure login is carried out as a separate step earlier in the pipeline, leaving me with an authenticated session to Azure (no other credentials may be passed into this script).
Before I run the Python code to create the pipeline, I do "az login" via a Jenkins deployment pipeline, which gets me an authenticated azurerm session. I should be able to re-use this session in the Python script to get a data factory client without authenticating again.
However, I'm unsure how to modify the client creation part of the code, as there do not seem to be any examples that make use of an already established azurerm session:
adf_client = DataFactoryManagementClient(credentials, subscription_id)
rg_params = {'location': 'eastus'}
df_params = {'location': 'eastus'}

# Create a data factory
df_resource = Factory(location='eastus')
df = adf_client.factories.create_or_update(rg_name, df_name, df_resource)
print_item(df)
while df.provisioning_state != 'Succeeded':
    df = adf_client.factories.get(rg_name, df_name)
    time.sleep(1)
Microsoft's authentication documentation suggests I can authenticate using a previously established session as follows:
from azure.common.client_factory import get_client_from_cli_profile
from azure.mgmt.compute import ComputeManagementClient
client = get_client_from_cli_profile(ComputeManagementClient)
( ref: https://learn.microsoft.com/en-us/python/azure/python-sdk-azure-authenticate?view=azure-python )
This works; however, Azure Data Factory object instantiation fails with:
Traceback (most recent call last):
File "post-scripts/check-data-factory.py", line 72, in <module>
main()
File "post-scripts/check-data-factory.py", line 65, in main
df = adf_client.factories.create_or_update(rg_name, data_factory_name, df_resource)
AttributeError: 'ComputeManagementClient' object has no attribute 'factories'
So perhaps some extra steps are required between this and getting a df object?
Any clue appreciated!
Just replace the class with the correct type:
from azure.common.client_factory import get_client_from_cli_profile
from azure.mgmt.resource import ResourceManagementClient
from azure.mgmt.datafactory import DataFactoryManagementClient
resource_client = get_client_from_cli_profile(ResourceManagementClient)
adf_client = get_client_from_cli_profile(DataFactoryManagementClient)
The error you got is because you created a Compute client (to handle VMs), not an ADF client. But yes, you found the right doc for your needs :)
(disclosure: I work at MS in the Python SDK team)
