Stuck with Azure Function App in Python using managed identity

I am trying to write a function app that gets data from a Log Analytics workspace and pushes it to an event hub using Python 3. The function app uses a managed identity, and I am using the Azure SDK for Python. My current code looks like this:
def getAzureEventData():
    """
    if "MSI_ENDPOINT" in os.environ:
        print("Getting MSI Authentication")
        creds = MSIAuthentication()
    else:
        creds, *_ = get_azure_cli_credentials()
    """
    ## Want to find out which one is correct; tested each one:
    creds = DefaultAzureCredential()
    creds = CredentialWrapper()
    creds = MSIAuthentication()
    #creds, _ = get_azure_cli_credentials(resource="https://api.loganalytics.io")
    log_client = LogAnalyticsDataClient(creds)
    laQuery = 'ActivityLog | where TimeGenerated > ago(1d)'
    result = log_client.query(cisalog_workspace_id, QueryBody(query=laQuery))
As per the examples I have seen,
creds, _ = get_azure_cli_credentials(resource="https://api.loganalytics.io")
was used, but when I use that function without any DefaultAzureCredential(), I get a 404 error saying the system managed identity is not enabled. When I use DefaultAzureCredential I get an access_token error, and following a suggestion I am using a wrapper found on the internet. When I use that, I get: Exception: ErrorResponseException: (InvalidTokenError) The provided authentication is not valid for this resource.
So I am confused about how to use the Log Analytics SDK client. I am testing locally and also in the portal. My end goal is a function app that uses a system managed identity with IAM roles to access the LA workspace. I have granted the Monitoring Reader role on the workspace to the SMI, but I am still facing the issue.

If you want to call the Azure Log Analytics REST API in an Azure Function with Azure MSI, you need to assign the Azure RBAC role Log Analytics Reader to the MSI. For more details, please refer to here.
For example
Enable Azure Function MSI
Assign role
New-AzRoleAssignment -ObjectId "<the objectId of Azure function MSI>" -RoleDefinitionName "Log Analytics Reader" -Scope "/subscriptions/{subId}"
Code
My cred_wrapper.py
from msrest.authentication import BasicTokenAuthentication
from azure.core.pipeline.policies import BearerTokenCredentialPolicy
from azure.core.pipeline import PipelineRequest, PipelineContext
from azure.core.pipeline.transport import HttpRequest
from azure.identity import DefaultAzureCredential


class CredentialWrapper(BasicTokenAuthentication):
    def __init__(self, credential=None, resource_id="https://westus2.api.loganalytics.io/.default", **kwargs):
        """Wrap any azure-identity credential to work with SDKs that need azure.common.credentials/msrestazure.

        Default resource is ARM (syntax of endpoint v2).
        :param credential: Any azure-identity credential (DefaultAzureCredential by default)
        :param str resource_id: The scope to use to get the token (default ARM)
        """
        super(CredentialWrapper, self).__init__(None)
        if credential is None:
            credential = DefaultAzureCredential()
        self._policy = BearerTokenCredentialPolicy(credential, resource_id, **kwargs)

    def _make_request(self):
        return PipelineRequest(
            HttpRequest("CredentialWrapper", "https://fakeurl"),
            PipelineContext(None)
        )

    def set_token(self):
        """Ask the azure-core BearerTokenCredentialPolicy to get a token.

        Using the policy gives us the caching system of azure-core for free.
        We could make this code simpler by using private methods, but by definition
        I can't be sure they will be there forever, so we mock a fake call to the
        policy to extract the token, using 100% public API."""
        request = self._make_request()
        self._policy.on_request(request)
        # Read the Authorization header and take the part after "Bearer"
        token = request.http_request.headers["Authorization"].split(" ", 1)[1]
        self.token = {"access_token": token}

    def signed_session(self, session=None):
        self.set_token()
        return super(CredentialWrapper, self).signed_session(session)
My function code
import logging
import json

import azure.functions as func
from azure.loganalytics import LogAnalyticsDataClient
from azure.loganalytics.models import QueryBody

from .cred_wrapper import CredentialWrapper


def main(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')
    creds = CredentialWrapper()
    client = LogAnalyticsDataClient(creds)
    result = client.query(workspace_id='',
                          body=QueryBody(query='Heartbeat | take 10'))
    return func.HttpResponse(
        json.dumps(result.tables[0].rows),
        status_code=200
    )
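The query above covers the Log Analytics half of the question. For the original end goal of pushing the rows to an event hub with the same managed identity, a rough sketch could look like the following. This is an assumption-laden outline rather than tested code: the namespace and hub names are placeholders, it assumes the azure-eventhub v5 package, and the identity also needs an Event Hubs data role (e.g. Azure Event Hubs Data Sender) on the namespace.
import json

from azure.identity import DefaultAzureCredential
from azure.eventhub import EventHubProducerClient, EventData

# Hypothetical names -- replace with your own namespace and hub
FULLY_QUALIFIED_NAMESPACE = "<your-namespace>.servicebus.windows.net"
EVENTHUB_NAME = "<your-eventhub>"


def push_rows_to_eventhub(rows):
    # DefaultAzureCredential picks up the function app's managed identity
    # in Azure, and your CLI login when running locally
    producer = EventHubProducerClient(
        fully_qualified_namespace=FULLY_QUALIFIED_NAMESPACE,
        eventhub_name=EVENTHUB_NAME,
        credential=DefaultAzureCredential(),
    )
    with producer:
        batch = producer.create_batch()
        for row in rows:
            batch.add(EventData(json.dumps(row)))
        producer.send_batch(batch)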

Related

Connecting Twilio API to Google Cloud Functions

I am trying to deploy a Python script in Google Cloud Functions where the user submits a yes-or-no answer from WhatsApp and it reaches the Twilio API. If this function receives a 'Yes', it runs a query against a Google BigQuery table and replies with a report. My point is: if I create this function with no authentication, it works fine. However, that doesn't work for me, because it exposes private information about my company.
So, in order to avoid problems, I created the function with Cloud IAM authentication. I passed a service account to the function, gave it permission to invoke the Cloud Function, let the account inherit all permissions to read and execute jobs in BigQuery, and granted permissions to the service agents. Even after following all these steps, I'm still receiving a 403 error.
Here is the code I'm trying to deploy in Google Cloud Functions:
from datetime import datetime, timedelta
from dotenv import load_dotenv
import os
from sales import Sales
from flask import Flask, request
from functools import wraps
from utils import format_currency, emoji_alerts

load_dotenv()
app = Flask(__name__)


def message_sales():
    data = (datetime.now() - timedelta(days=1)).strftime('%d/%m/%Y')
    resultado = Sales()
    return f""" message """
    # here is where the message is generated; this f-string queries results in BigQuery


@app.route('/reply', methods=['GET', 'POST'])
def send_message(request):
    from twilio.rest import Client
    from twilio.twiml.messaging_response import MessagingResponse

    account_sid = os.getenv('TWILIO_ACCOUNT_SID')
    auth_token = os.getenv('TWILIO_AUTH_TOKEN')
    incoming_msg = request.values.get('Body', '').lower()
    resp = MessagingResponse()
    msg = resp.message()
    responded = False
    report = message_sales()
    if 'yes' in incoming_msg:
        msg.body(report)
        responded = True
    elif 'no' in incoming_msg:
        msg.body('Ok')
        responded = True
    return str(resp)


if __name__ == "__main__":
    app.run(debug=False, host='0.0.0.0', port=2020)
And here is the connection function to BigQuery and an example of a query:
def run_google_query(query):
    credentials = service_account.Credentials.from_service_account_file(GOOGLE_APPLICATION_CREDENTIALS)
    client = bigquery.Client(project='project-id', credentials=credentials)
    return [row[0] for row in client.query(query)][0]


def get_resultado_dia(self):
    return f"""
        SELECT RESULTADO_ATUAL FROM `table`
        WHERE DATA_VENDA = '{self.ontem.strftime('%Y-%m-%d')}'
    """
This is my latest deploy. I have tried using the Secret Manager library, I have created a service account with the necessary permissions, and I've given more permissions to the existing service accounts, but nothing worked.
I believe that I need to authenticate the Twilio API with Google Cloud, but I can't find a clear explanation of how to proceed with that. Anyway, creating an unauthenticated HTTP endpoint isn't an option, since the information shouldn't be open.
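Two side notes that may help. First, inside Cloud Functions you generally don't need a service account key file for BigQuery at all: the client library picks up the function's runtime service account through Application Default Credentials. A minimal sketch of run_google_query under that assumption (the function's service account still needs the relevant BigQuery roles):
from google.cloud import bigquery


def run_google_query(query):
    # No explicit credentials: in Cloud Functions the client authenticates
    # as the runtime service account via Application Default Credentials
    client = bigquery.Client()
    return [row[0] for row in client.query(query)][0]
Second, Twilio's webhook cannot fetch a Google-signed identity token, so IAM-authenticated invocation is unlikely to work for requests coming straight from Twilio; that would explain the 403. A common workaround (sketched below, not tested against this setup) is to allow unauthenticated invocations but reject any request that lacks a valid Twilio signature, using Twilio's RequestValidator with the same TWILIO_AUTH_TOKEN the code already reads:
import os
from twilio.request_validator import RequestValidator


def is_valid_twilio_request(request):
    """Return True only if the request was signed by Twilio."""
    validator = RequestValidator(os.getenv('TWILIO_AUTH_TOKEN'))
    # Twilio signs the full request URL plus the POST parameters
    return validator.validate(
        request.url,
        request.form,
        request.headers.get('X-Twilio-Signature', ''),
    )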

How to access accountUserLinks in the Google Analytics Management API with Python?

I'm looking for the library for this method:
analytics.management().accountUserLinks().insert
Every time I try to run it the error is always the same: the method management() doesn't exist in the analytics library.
I got this from the documentation, so I think it should work.
I've tried downloading different Python libraries without success.
The Google Analytics Management API is part of the Google APIs library, which means you can use the google-api-python-client:
sudo pip install --upgrade google-api-python-client
sample
"""A simple example of how to access the Google Analytics API."""
import argparse
from googleapiclient.discovery import build
import httplib2
from oauth2client import client
from oauth2client import file
from oauth2client import tools
# If modifying these scopes, delete the file token.json.
SCOPES = ['https://www.googleapis.com/auth/analytics.readonly', 'https://www.googleapis.com/auth/analytics.manage.users.readonly']
# Service account key file
CREDENTIALS = 'C:\YouTube\dev\credentials.json'
VIEW_ID = '78110423'
def get_service(api_name, api_version, scope, client_secrets_path):
"""Get a service that communicates to a Google API.
Args:
api_name: string The name of the api to connect to.
api_version: string The api version to connect to.
scope: A list of strings representing the auth scopes to authorize for the
connection.
client_secrets_path: string A path to a valid client secrets file.
Returns:
A service that is connected to the specified API.
"""
# Parse command-line arguments.
parser = argparse.ArgumentParser(
formatter_class=argparse.RawDescriptionHelpFormatter,
parents=[tools.argparser])
flags = parser.parse_args([])
# Set up a Flow object to be used if we need to authenticate.
flow = client.flow_from_clientsecrets(
client_secrets_path, scope=scope,
message=tools.message_if_missing(client_secrets_path))
# Prepare credentials, and authorize HTTP object with them.
# If the credentials don't exist or are invalid run through the native client
# flow. The Storage object will ensure that if successful the good
# credentials will get written back to a file.
storage = file.Storage(api_name + '.dat')
credentials = storage.get()
if credentials is None or credentials.invalid:
credentials = tools.run_flow(flow, storage, flags)
http = credentials.authorize(http=httplib2.Http())
# Build the service object.
service = build(api_name, api_version, http=http)
return service
def get_first_profile_id(service):
# Use the Analytics service object to get the first profile id.
# Get a list of all Google Analytics accounts for the authorized user.
accounts = service.management().accounts().list().execute()
if accounts.get('items'):
# Get the first Google Analytics account.
account = accounts.get('items')[0].get('id')
account_links = service.management().accountUserLinks().list(
accountId=account
).execute()
if account_links.get('items', []):
# return the first view (profile) id.
return account_links.get('items', [])
return None
def print_results(results):
print(results)
def main():
# Authenticate and construct service.
service = get_service('analytics', 'v3', SCOPES, CREDENTIALS)
profile = get_first_profile_id(service)
print_results(profile)
if __name__ == '__main__':
main()
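Since the question was about accountUserLinks().insert specifically, here is a hedged sketch of the insert call once the service object above is built. The account ID, email, and permission values are placeholders, and writing links needs the full https://www.googleapis.com/auth/analytics.manage.users scope rather than the read-only one used above:
def add_account_user_link(service, account_id):
    # Hypothetical values -- replace the email and permissions as needed
    body = {
        'permissions': {'local': ['READ_AND_ANALYZE']},
        'userRef': {'email': 'new.user@example.com'},
    }
    return service.management().accountUserLinks().insert(
        accountId=account_id,
        body=body,
    ).execute()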

How do I execute a Google Apps Script function from a Python script via API? Not able to locate credentials.json for download in order to execute the Apps Script

I'm trying to run a Google Apps Script function remotely from a Python Flask app. This function creates Google Calendar events with inputs from a Google Sheet. I referred to this documentation from Google in order to set up the Python script to run the Apps Script function. I followed every step required to deploy the Apps Script project as an executable API, connected it to a Google developer project, and created OAuth 2.0 client ID credentials as well.
From the API executable documentation, I took the following code and modified it to run as an object which can be called from the main server file.
from __future__ import print_function
from googleapiclient import errors
from googleapiclient.discovery import build
from httplib2 import Http
from oauth2client import file as oauth_file, client, tools


class CreateGCalEvent:
    def main(self):
        """Runs the sample."""
        SCRIPT_ID = 'my app script deployment ID was put here'

        # Set up the Apps Script API
        SCOPES = [
            'https://www.googleapis.com/auth/script.scriptapp',
            'https://www.googleapis.com/auth/drive.readonly',
            'https://www.googleapis.com/auth/drive',
        ]
        store = oauth_file.Storage('token.json')
        creds = store.get()
        if not creds or creds.invalid:
            flow = client.flow_from_clientsecrets('app_script_creds.json', SCOPES)
            creds = tools.run_flow(flow, store)
        service = build('script', 'v1', credentials=creds)

        # Create an execution request object.
        request = {"function": "getFoldersUnderRoot"}

        try:
            # Make the API request.
            response = service.scripts().run(body=request,
                                             scriptId=SCRIPT_ID).execute()
            if 'error' in response:
                # The API executed, but the script returned an error.
                # Extract the first (and only) set of error details. The values of
                # this object are the script's 'errorMessage' and 'errorType', and
                # a list of stack trace elements.
                error = response['error']['details'][0]
                print("Script error message: {0}".format(error['errorMessage']))
                if 'scriptStackTraceElements' in error:
                    # There may not be a stacktrace if the script didn't start
                    # executing.
                    print("Script error stacktrace:")
                    for trace in error['scriptStackTraceElements']:
                        print("\t{0}: {1}".format(trace['function'],
                                                  trace['lineNumber']))
            else:
                # The structure of the result depends upon what the Apps Script
                # function returns. Here, the function returns an Apps Script Object
                # with String keys and values, and so the result is treated as a
                # Python dictionary (folderSet).
                folderSet = response['response'].get('result', {})
                if not folderSet:
                    print('No folders returned!')
                else:
                    print('Folders under your root folder:')
                    for (folderId, folder) in folderSet.items():
                        print("\t{0} ({1})".format(folder, folderId))
        except errors.HttpError as e:
            # The API encountered a problem before the script started executing.
            print(e.content)
Here is where the error occurs: it can locate neither token.json nor app_script_creds.json.
Now, with a service account or any normal OAuth 2.0 client ID, when I create it I am given the option to download credentials.json, but here this is all I seem to be getting: an Apps Script ID with no edit access and no credentials to download as JSON. I created another OAuth client ID in the same project, as shown in the screenshot, which has edit access and a JSON file ready for download. When I used that JSON file inside the Python script, it told me that it was expecting redirect URIs, and I don't know what those are for or where to redirect to.
What do I need to do to get this working?
I adapted some code that I used for connecting to the Apps Script API. I hope it works for you too. The code is pretty much the same thing as this.
You can use from_client_secrets_file, since you're already loading these credentials from the file. What the code does is look for a token file first; if the token file is not there, it logs in the user (prompting with the Google authorization screen) and stores the new token in the file as a pickle.
Regarding the credentials in the Google console, you need to pick the Desktop application type when creating them, because that is basically what a server is.
Note: with this, you can only have one user performing all of these actions. This is because the server script will start a local server on the server machine to authenticate you; your client code will not see any of this.
import logging
import pickle
from pathlib import Path

from googleapiclient.discovery import build
from google_auth_oauthlib.flow import InstalledAppFlow
from google.auth.transport.requests import Request

log = logging.getLogger(__name__)


class GoogleApiService:
    def __init__(self, scopes):
        """
        Args:
            scopes: scopes required by the script. There needs to be at least
                one scope specified.
        """
        self.client_secrets = Path('credentials/credentials.json')
        self.token_path = Path('credentials/token.pickle')
        self.credentials = None
        self.scopes = scopes

    def get_service(self):
        self.__authenticate()
        return build('script', 'v1', credentials=self.credentials)

    def __authenticate(self):
        log.debug(f'Looking for existing token in {self.token_path}')
        if self.token_path.exists():
            with self.token_path.open('rb') as token:
                self.credentials = pickle.load(token)
            if self.__token_expired():
                self.credentials.refresh(Request())
        # If we can't find any token, we log in and save it
        else:
            self.__log_in()
            self.__save_token()

    def __log_in(self):
        flow = InstalledAppFlow.from_client_secrets_file(
            self.client_secrets,
            self.scopes
        )
        self.credentials = flow.run_local_server(port=0)

    def __save_token(self):
        with self.token_path.open('wb') as token:
            pickle.dump(self.credentials, token)

    def __token_expired(self):
        return self.credentials and self.credentials.expired and \
            self.credentials.refresh_token


# Example for Google Apps Script
def main():
    script_id = 'your script deployment ID'  # placeholder
    params = []  # parameters for the Apps Script function, if any
    request = {'function': 'some_function', 'parameters': params}
    gapi_service = GoogleApiService(['https://www.googleapis.com/auth/script.scriptapp'])
    with gapi_service.get_service() as service:
        response = service.scripts().run(
            scriptId=script_id,
            body=request
        ).execute()
        if response.get('error'):
            message = response['error']['details'][0]['errorMessage']
            raise RuntimeError(message)
        else:
            return response['response']['result']

Azure Data Factory Pipelines: Creating pipelines with Python: Authentication (via az cli)

I'm trying to create Azure Data Factory pipelines via Python, using the example provided by Microsoft here:
https://learn.microsoft.com/en-us/azure/data-factory/quickstart-create-data-factory-python
def main():
    # Azure subscription ID
    subscription_id = '<Specify your Azure Subscription ID>'

    # This program creates this resource group. If it's an existing resource group, comment out the code that creates the resource group
    rg_name = 'ADFTutorialResourceGroup'

    # The data factory name. It must be globally unique.
    df_name = '<Specify a name for the data factory. It must be globally unique>'

    # Specify your Active Directory client ID, client secret, and tenant ID
    credentials = ServicePrincipalCredentials(client_id='<Active Directory application/client ID>', secret='<client secret>', tenant='<Active Directory tenant ID>')
    resource_client = ResourceManagementClient(credentials, subscription_id)
    adf_client = DataFactoryManagementClient(credentials, subscription_id)

    rg_params = {'location': 'eastus'}
    df_params = {'location': 'eastus'}
However, I cannot pass the credentials in as shown above, since the Azure login is carried out as a separate step earlier in the pipeline, leaving me with an authenticated Azure session (no other credentials may be passed into this script).
Before I run the Python code to create the pipeline, I do "az login" via a Jenkins deployment pipeline, which gets me an authenticated azurerm session. I should be able to reuse this session in the Python script to get a data factory client, without authenticating again.
However, I'm unsure how to modify the client creation part of the code, as there do not seem to be any examples that make use of an already established azurerm session:
adf_client = DataFactoryManagementClient(credentials, subscription_id)
rg_params = {'location': 'eastus'}
df_params = {'location': 'eastus'}

# Create a data factory
df_resource = Factory(location='eastus')
df = adf_client.factories.create_or_update(rg_name, df_name, df_resource)
print_item(df)
while df.provisioning_state != 'Succeeded':
    df = adf_client.factories.get(rg_name, df_name)
    time.sleep(1)
Microsoft's authentication documentation suggests I can authenticate using a previously established session as follows:
from azure.common.client_factory import get_client_from_cli_profile
from azure.mgmt.compute import ComputeManagementClient
client = get_client_from_cli_profile(ComputeManagementClient)
( ref: https://learn.microsoft.com/en-us/python/azure/python-sdk-azure-authenticate?view=azure-python )
This works; however, the data factory object instantiation fails with:
Traceback (most recent call last):
File "post-scripts/check-data-factory.py", line 72, in <module>
main()
File "post-scripts/check-data-factory.py", line 65, in main
df = adf_client.factories.create_or_update(rg_name, data_factory_name, df_resource)
AttributeError: 'ComputeManagementClient' object has no attribute 'factories'
So perhaps some extra steps are required between this and getting a df object?
Any clue appreciated!
Just replace the class with the correct type:
from azure.common.client_factory import get_client_from_cli_profile
from azure.mgmt.resource import ResourceManagementClient
from azure.mgmt.datafactory import DataFactoryManagementClient
resource_client = get_client_from_cli_profile(ResourceManagementClient)
adf_client = get_client_from_cli_profile(DataFactoryManagementClient)
The error you got is because you created a Compute client (to handle VM), not a ADF client. But yes, you found the right doc for your needs :)
(disclosure: I work at MS in the Python SDK team)
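As a side note for anyone on newer SDK versions where get_client_from_cli_profile has since been deprecated: the equivalent pattern, as a sketch with a placeholder subscription ID, is AzureCliCredential from azure-identity, which likewise reuses the session created by az login:
from azure.identity import AzureCliCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

# Reuses the token from the existing "az login" session
credential = AzureCliCredential()
adf_client = DataFactoryManagementClient(credential, '<subscription-id>')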

Azure Python SDK - getting the machine state

Using the Python API for Azure, I want to get the state of one of my machines, but I can't find where this information is exposed.
Does anyone know?
After looking around, I found this:
get_with_instance_view(resource_group_name, vm_name)
https://azure-sdk-for-python.readthedocs.org/en/latest/ref/azure.mgmt.compute.computemanagement.html#azure.mgmt.compute.computemanagement.VirtualMachineOperations.get_with_instance_view
If you are using the legacy API (this will work for classic virtual machines), use:
from azure.servicemanagement import ServiceManagementService

sms = ServiceManagementService('your subscription id', 'your-azure-certificate.pem')
your_deployment = sms.get_deployment_by_name('service name', 'deployment name')
for role_instance in your_deployment.role_instance_list:
    print(role_instance.instance_name, role_instance.instance_status)
If you are using the current API (will not work for classic VMs), use:
from azure.common.credentials import UserPassCredentials
from azure.mgmt.compute import ComputeManagementClient
import retry

credentials = UserPassCredentials('username', 'password')
compute_client = ComputeManagementClient(credentials, 'your subscription id')


@retry.retry(RuntimeError, tries=3)
def get_vm(resource_group_name, vm_name):
    '''
    You need to retry this just in case the credentials token expires;
    that's where the decorator comes in.
    This will return all the data about the virtual machine.
    '''
    return compute_client.virtual_machines.get(
        resource_group_name, vm_name, expand='instanceView')


@retry.retry((RuntimeError, IndexError,), tries=-1)
def get_vm_status(resource_group_name, vm_name):
    '''
    This will just return the status of the virtual machine.
    Sometimes the status may be unknown, as shown by the Azure portal;
    in that case statuses[1] doesn't exist, hence retrying on IndexError.
    Also, it may take on the order of minutes for the status to become
    available, so the decorator will bang on it forever.
    '''
    return compute_client.virtual_machines.get(
        resource_group_name, vm_name, expand='instanceView').instance_view.statuses[1].display_status
If you are using Azure Cloud Services, you should use the Role Environment API, which provides state information regarding the current instance of your current service instance.
https://msdn.microsoft.com/en-us/library/azure/microsoft.windowsazure.serviceruntime.roleenvironment.aspx
In the new Resource Manager API there's a function:
get_with_instance_view(resource_group_name, vm_name)
It's the same function as get machine, but it also returns an instance view that contains the machine state.
https://azure-sdk-for-python.readthedocs.org/en/latest/ref/azure.mgmt.compute.computemanagement.html#azure.mgmt.compute.computemanagement.VirtualMachineOperations.get_with_instance_view
Use the method get_deployment_by_name to get the instance status:
subscription_id = '****-***-***-**'
certificate_path = 'CURRENT_USER\\my\\***'

sms = ServiceManagementService(subscription_id, certificate_path)
result = sms.get_deployment_by_name("your service name", "your deployment name")
You can get the instance status via the "instance_status" property.
Please see this post https://stackoverflow.com/a/31404545/4836342
As mentioned in other answers, the Azure Resource Manager API has an instance view query to show the state of running VMs.
The documentation listing for this is here: VirtualMachineOperations.get_with_instance_view()
Typical code to get the status of a VM is something like this:
resource_group = "myResourceGroup"
vm_name = "myVMName"

creds = azure.mgmt.common.SubscriptionCloudCredentials(…)
compute_client = azure.mgmt.compute.ComputeManagementClient(creds)

vm = compute_client.virtual_machines.get_with_instance_view(resource_group, vm_name).virtual_machine

# Index 0 is the ProvisioningState, index 1 is the Instance PowerState;
# display_status will typically be "VM running", "VM stopped", etc.
vm_status = vm.instance_view.statuses[1].display_status
There is no direct way to get the state of a virtual machine while listing them.
But we can list the VMs and loop over them to get the instance_view of each machine and grab its power state.
In the code block below, I do exactly that and dump the values into a .csv file to make a report.
import csv
from azure.common.credentials import ServicePrincipalCredentials
from azure.mgmt.compute import ComputeManagementClient


def get_credentials():
    subscription_id = "*******************************"
    credential = ServicePrincipalCredentials(
        client_id="*******************************",
        secret="*******************************",
        tenant="*******************************"
    )
    return credential, subscription_id


credentials, subscription_id = get_credentials()

# Initializing compute client with the credentials
compute_client = ComputeManagementClient(credentials, subscription_id)

resource_group_name = "**************"
json_list = []

# listing out the virtual machine names
vm_list = compute_client.virtual_machines.list(resource_group_name=resource_group_name)

# looping over the list of virtual machines, to grab the state of each machine
for i in vm_list:
    vm_state = compute_client.virtual_machines.instance_view(resource_group_name=resource_group_name, vm_name=i.name)
    # build a fresh dict per machine, so each row keeps its own values
    json_object = {"Vm_name": i.name,
                   "Vm_state": vm_state.statuses[1].code,
                   "Resource_group": resource_group_name}
    json_list.append(json_object)

csv_columns = ["Vm_name", "Vm_state", "Resource_group"]
with open("vm_state.csv", 'w+', newline='') as f:
    csv_file = csv.DictWriter(f, fieldnames=csv_columns)
    csv_file.writeheader()
    for i in json_list:
        csv_file.writerow(i)
To grab the state of a single virtual machine, where you know its resource_group_name and vm_name, just use the block below.
vm_state = compute_client.virtual_machines.instance_view(resource_group_name="foo_rg_name", vm_name="foo_vm_name")
power_state = vm_state.statuses[1].code
print(power_state)
As per the new API reference, this worked for me
vm_status = compute_client.virtual_machines.instance_view(GROUP_NAME, VM_NAME).statuses[1].code
It will return one of these states, depending on the current state:
"PowerState/stopped", "PowerState/running","PowerState/stopping", "PowerState/starting"
