How to access Azure Service Bus using Function App identity - python

I am following the steps listed here, but for python code:
https://learn.microsoft.com/en-us/azure/azure-functions/functions-identity-based-connections-tutorial-2
The objective is to create a simple (hello world) function app which is triggered by an Azure Service Bus message queue using an identity-based connection. The function app works fine when the Service Bus is referenced via connection string, but gives the error below when trying to connect via the managed service identity of the function app (using the specific __fullyQualifiedNamespace configuration pattern). The MSI has been granted the Azure Service Bus Data Receiver role on the Service Bus.
Microsoft.Azure.WebJobs.ServiceBus: Microsoft Azure WebJobs SDK ServiceBus connection string 'ServiceBusConnection__fullyQualifiedNamespace' is missing or empty.
Function code (autogenerated):
import logging
import azure.functions as func

def main(msg: func.ServiceBusMessage):
    logging.info('Python ServiceBus queue trigger processed message: %s',
                 msg.get_body().decode('utf-8'))
function.json (connection value modified based on the MS docs):
{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "name": "msg",
      "type": "serviceBusTrigger",
      "direction": "in",
      "queueName": "erpdemoqueue",
      "connection": "ServiceBusConnection"
    }
  ]
}
host.json (version modified based on the MS docs):
{
  "version": "2.0",
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
    "version": "[3.3.0, 4.0.0)"
  }
}

To use a managed identity, you'll need to add a setting that identifies the fully qualified namespace of your Service Bus instance.
For example, in your local.settings.json file for local development:
{
  "Values": {
    "<connection_name>__fullyQualifiedNamespace": "<service_bus_namespace>.servicebus.windows.net"
  }
}
Or in the application settings for your function when deployed to Azure:
<connection_name>__fullyQualifiedNamespace=<service_bus_namespace>.servicebus.windows.net
This is mentioned only briefly in the tutorial that you linked. The Microsoft.Azure.WebJobs.Extensions.ServiceBus documentation covers this a bit better in its Managed identity authentication section.
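As a quick local sanity check, here is a small sketch (the helper name and the settings dict are illustrative, not part of the Functions runtime) that verifies the identity-based setting exists for a given connection name, which is what the host looks for when no plain connection string is configured:

```python
def has_identity_setting(app_settings: dict, connection_name: str) -> bool:
    """Return True if the identity-based Service Bus setting is present.

    When '<connection_name>' itself is not set as a connection string, the
    Functions host instead resolves '<connection_name>__fullyQualifiedNamespace'.
    """
    key = f"{connection_name}__fullyQualifiedNamespace"
    return bool(app_settings.get(key))

# Example mirroring the settings from the question:
settings = {"ServiceBusConnection__fullyQualifiedNamespace":
            "mynamespace.servicebus.windows.net"}
print(has_identity_setting(settings, "ServiceBusConnection"))  # True
print(has_identity_setting(settings, "OtherConnection"))       # False
```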

Related

Azure function app not triggering from an Event Hub

I have an event hub in Subscription A and a function app in Subscription B. I am trying to trigger the function app from the event hub in Subscription A; as per my research this should be possible, provided the correct connection string is set in the function app configuration. I have done this, but for some reason I am not able to trigger the function app.
Below is my function.json:
{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "type": "eventHubTrigger",
      "name": "event",
      "direction": "in",
      "eventHubName": "%eventHubName%",
      "connection": "TestBench",
      "cardinality": "one",
      "consumerGroup": "$Default"
    },
    {
      "type": "eventHub",
      "name": "outputHub",
      "direction": "out",
      "connection": "outputConnection"
    }
  ]
}
I have double-checked the "TestBench" (Event Hubs) connection string and the event hub's name; they are correct.
Below is my function app code in __init__.py:
import json
import logging
from typing import List

import azure.functions as func

def main(event: func.EventHubEvent, outputHub: func.Out[List[str]]):
    data = json.loads(event.get_body().decode('utf-8'))
    logging.info(data)
Please verify that you have configured the eventHubName property in the application settings of your function app, since you have defined the binding as
"eventHubName": "%eventHubName%"
If this is correct, then validate whether the connection string is configured correctly.
I suggest you review the Diagnose and solve problems blade on your function app; it can help you diagnose the issue and recommend a solution.
Please review the Python example here and test the same at your end.
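For reference, a minimal local.settings.json sketch (all values are placeholders) showing how the %eventHubName% setting and the two connection names from the binding would need to resolve:

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "<storage_connection_string>",
    "FUNCTIONS_WORKER_RUNTIME": "python",
    "eventHubName": "<actual_event_hub_name>",
    "TestBench": "<event_hub_namespace_connection_string>",
    "outputConnection": "<output_namespace_connection_string>"
  }
}
```

When deployed, the same keys must exist as application settings on the function app itself; local.settings.json is not published.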

Python Function App connections using managed identity

Unable to set up connection information for Service Bus with Python Azure Functions using managed identity.
I have the following settings in function.json:
{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "name": "msg",
      "type": "serviceBusTrigger",
      "direction": "in",
      "queueName": "myinputqueue",
      "connection": "MySvcConn"
    }
  ]
}
and in the Application settings in the Azure portal I have set:
"MySvcConn__fullyQualifiedNamespace": "mysvcns.servicebus.windows.net"
I get the message:
"Microsoft.Azure.ServiceBus: Value for the connection string parameter name 'mysvcns.servicebus.windows.net' was not found. (Parameter 'connectionString')."
Runtime version used: ~4
host.json configuration:
{
  "version": "2.0",
  "logging": {
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": true,
        "excludedTypes": "Request"
      }
    }
  },
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
    "version": "[2.*, 3.0.0)"
  }
}
To connect to Service Bus using a managed identity, first grant the Azure Service Bus Data Receiver role to the identity under the namespace's access control (IAM).
Then add an application setting named ServiceBusConnection__fullyQualifiedNamespace with the value <Name_of_servicebus>.servicebus.windows.net.
This setting lets the function app connect to the Service Bus using the managed identity, without a connection string.
Refer to this documentation.
Was able to figure this out; the following application settings need to be set on the function app:
ServiceBusConnection__clientId: <managed identity client ID>
ServiceBusConnection__credential: managedidentity
ServiceBusConnection__fullyQualifiedNamespace: <servicebusname>.servicebus.windows.net
"ServiceBusConnection" in the above settings is the connection name in the function.json file.
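Expressed as JSON (placeholder values; "managedidentity" is a literal keyword, and the clientId entry applies when the identity is user-assigned), the settings would look like:

```json
{
  "Values": {
    "ServiceBusConnection__fullyQualifiedNamespace": "<servicebusname>.servicebus.windows.net",
    "ServiceBusConnection__credential": "managedidentity",
    "ServiceBusConnection__clientId": "<managed_identity_client_id>"
  }
}
```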

How Do I Find Errors and Debug A Cloud Function?

I have created a cloud function that when triggered is supposed to create a VM instance and run a python script.
However, the VM is not being created.
I can see the following in the Cloud Functions log for my deployment:
resource.type = "cloud_function"
resource.labels.region = "europe-west2"
severity>=DEFAULT
severity=DEBUG
...
However, for the life of me, I can't see where to go to actually view the error itself.
I then Googled and found the following thread about an issue where Cloud Functions does not show any logs.
Thinking it might be the same issue, I added the recommended environment variables to my deployment, but I still can't find the error anywhere in the logs.
Can anyone point me in the right direction?
Here is my cloud function code as well:
import os
from googleapiclient import discovery
from google.oauth2 import service_account

scopes = ["https://www.googleapis.com/auth/cloud-platform"]
sa_file = "key.json"
zone = "europe-west2-c"
project_id = "<<proj id>>"  # Project ID, not Project Name

credentials = service_account.Credentials.from_service_account_file(
    sa_file, scopes=scopes
)

# Create the Cloud Compute Engine service object
service = discovery.build("compute", "v1", credentials=credentials)

def create_instance(compute, project, zone, name):
    # Get the latest image from the debian-9 family.
    image_response = (
        compute.images()
        .getFromFamily(project="debian-cloud", family="debian-9")
        .execute()
    )
    source_disk_image = image_response["selfLink"]

    # Configure the machine
    machine_type = "zones/%s/machineTypes/n1-standard-1" % zone
    config = {
        "name": name,
        "machineType": machine_type,
        # Specify the boot disk and the image to use as a source.
        "disks": [
            {
                "kind": "compute#attachedDisk",
                "type": "PERSISTENT",
                "boot": True,
                "mode": "READ_WRITE",
                "autoDelete": True,
                "deviceName": "instance-1",
                "initializeParams": {
                    "sourceImage": "projects/my_account/global/images/instance-image3",
                    "diskType": "projects/my_account/zones/europe-west2-c/diskTypes/pd-standard",
                    "diskSizeGb": "10",
                },
                "diskEncryptionKey": {},
            }
        ],
        "metadata": {
            "kind": "compute#metadata",
            "items": [
                {
                    "key": "startup-script",
                    "value": "sudo apt-get -y install python3-pip\npip3 install -r /home/will_charles/requirements.txt\ncd /home/will_peebles/\npython3 /home/will_charles/main.py",
                }
            ],
        },
        "serviceAccounts": [
            {
                "email": "837516068454-compute@developer.gserviceaccount.com",
                "scopes": ["https://www.googleapis.com/auth/cloud-platform"],
            }
        ],
        "networkInterfaces": [
            {
                "network": "global/networks/default",
                "accessConfigs": [{"type": "ONE_TO_ONE_NAT", "name": "External NAT"}],
            }
        ],
        "tags": {"items": ["http-server", "https-server"]},
    }
    return compute.instances().insert(project=project, zone=zone, body=config).execute()

def run(data, context):
    create_instance(service, project_id, zone, "pagespeed-vm-4")
The reason that you do not see anything in the Cloud Functions logs is that your code is executing but not logging the results of the API calls.
Your code succeeds in calling the API to create a compute instance. That does not mean the operation succeeded, only the call itself. The API returns an operation handle that you then poll to check status. You are not doing that, so your Cloud Function has no idea that the instance creation failed.
To see logs for the create-instance API, go to Operations Logging -> VM Instance and select "All instance_id". If the API call to create an instance did not succeed, there will be no instance ID to select, so you have to select all instances and then find the logs related to the API call.
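The polling described above can be sketched as a small helper. This is a generic sketch, not the official client pattern: `poll` is any zero-argument callable that returns a Compute Engine operation dict, so the Cloud Function would wrap `compute.zoneOperations().get(...)` in a lambda as shown in the comment:

```python
import time

def wait_for_done(poll, interval=1.0, timeout=300.0):
    """Poll an operation until its status is DONE, then surface any error.

    `poll` is a zero-argument callable returning an operation dict, e.g.:
        op = compute.instances().insert(project=p, zone=z, body=config).execute()
        wait_for_done(lambda: compute.zoneOperations().get(
            project=p, zone=z, operation=op["name"]).execute())
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        result = poll()
        if result.get("status") == "DONE":
            if "error" in result:
                # Raising makes the failure visible in Cloud Functions logs.
                raise RuntimeError(result["error"])
            return result
        time.sleep(interval)
    raise TimeoutError("operation did not finish in time")
```

Logging or raising on the operation's "error" field is what makes a failed instance creation actually show up in the function's own logs.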

EventGrid-triggered Python Azure Function "ClientOtherError" and "AuthorizationError", how to troubleshoot?

For some reason, today my Python Azure Function is not firing.
Setup:
Trigger: Blob upload to storage account
Method: EventGrid
Auth: Uses System-assigned Managed Identity to auth to Storage Account
Advanced Filters:
Subject ends with .csv, .json
data.api contains "FlushWithClose"
Issue:
Upload a .csv file
No EventGrid triggered
New "ClientOtherError" and "AuthorizationError" entries shown in the logs
Question:
These are NEW errors and this is NEW behavior of an otherwise working Function. No changes have been recently made.
What do these errors mean?
How do I troubleshoot them?
The way I troubleshot the Function was to:
Remove ALL ADVANCED FILTERS from the EventGrid trigger
Attempt upload
Upload successful
Look at EventGrid message
The culprit (though it is unclear why ClientOtherError and AuthorizationError are generated here!) seems to be:
Files pushed to Azure Storage via Azure Data Factory use the FlushWithClose API.
These are the only ones I want to grab.
Our automations all use ADF, and if you don't have the FlushWithClose filter in place, your Functions will run 2x (because ADF causes two events on the storage account, but only one, FlushWithClose, is the actual blob write).
{
  "id": "redact",
  "data": {
    "api": "FlushWithClose",
    "requestId": "redact",
    "eTag": "redact",
    "contentType": "application/octet-stream",
    "contentLength": 87731520,
    "contentOffset": 0,
    "blobType": "BlockBlob",
    "blobUrl": "https://mything.blob.core.windows.net/mything/20201209/yep.csv",
    "url": "https://mything.dfs.core.windows.net/mything/20201209/yep.csv",
    "sequencer": "0000000000000000000000000000701b0000000000008177",
    "identity": "redact",
    "storageDiagnostics": {
      "batchId": "redact"
    }
  },
  "topic": "/subscriptions/redact/resourceGroups/redact/providers/Microsoft.Storage/storageAccounts/redact",
  "subject": "/blobServices/default/containers/mything/blobs/20201209/yep.csv",
  "event_type": "Microsoft.Storage.BlobCreated"
}
Files pushed to Azure Storage via Azure Storage Explorer (and via the Azure portal) use the PutBlob API.
{
  "id": "redact",
  "data": {
    "api": "PutBlob",
    "clientRequestId": "redact",
    "requestId": "redact",
    "eTag": "redact",
    "contentType": "application/vnd.ms-excel",
    "contentLength": 1889042,
    "blobType": "BlockBlob",
    "blobUrl": "https://mything.blob.core.windows.net/thing/yep.csv",
    "url": "https://mything.blob.core.windows.net/thing/yep.csv",
    "sequencer": "0000000000000000000000000000761d0000000000000b6e",
    "storageDiagnostics": {
      "batchId": "redact"
    }
  },
  "topic": "/subscriptions/redact/resourceGroups/redact/providers/Microsoft.Storage/storageAccounts/redact",
  "subject": "/blobServices/default/containers/thing/blobs/yep.csv",
  "event_type": "Microsoft.Storage.BlobCreated"
}
I was testing locally with Azure Storage Explorer (ASE) instead of using our ADF automations.
Thus the advanced filter on data.api did not match, and the EventGrid subscription did not trigger.
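The effect of the advanced filters can be reproduced locally with a small sketch (the function name and the trimmed event dicts are illustrative; the real filtering happens inside Event Grid, not in your code):

```python
def matches_filters(event: dict) -> bool:
    """Mimic the subscription's advanced filters:
    subject ends with .csv or .json, and data.api contains 'FlushWithClose'."""
    subject = event.get("subject", "")
    api = event.get("data", {}).get("api", "")
    return subject.endswith((".csv", ".json")) and "FlushWithClose" in api

# Trimmed versions of the two events above: the ADF write passes,
# the Storage Explorer / portal upload does not.
adf_event = {"subject": ".../blobs/20201209/yep.csv",
             "data": {"api": "FlushWithClose"}}
ase_event = {"subject": ".../blobs/yep.csv",
             "data": {"api": "PutBlob"}}
print(matches_filters(adf_event))  # True
print(matches_filters(ase_event))  # False
```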
Ok... but what about the errors?

Azure Error: data protection system cannot create a new key because auto-generation of keys is disabled

I am trying to run an azure function on my local machine using Visual Studio Code.
My main.py looks like this:
import logging
import azure.functions as func

def main(event: func.EventHubEvent):
    logging.info('Python EventHub trigger processed an event: %s',
                 event.get_body().decode('utf-8'))
My host.json file looks like this:
{
  "version": "2.0",
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
    "version": "[1.*, 2.0.0)"
  }
}
My function.json looks something like this:
{
  "scriptFile": "main.py",
  "bindings": [
    {
      "type": "eventHubTrigger",
      "name": "event",
      "direction": "in",
      "eventHubName": "myhubName",
      "connection": "myHubConnection",
      "cardinality": "many",
      "consumerGroup": "$Default"
    }
  ]
}
The problem is when I run this, it throws me the following error:
A host error has occurred at Microsoft.AspNetCore.DataProtection: An error occurred while trying to encrypt the provided data. Refer to the inner exception for more information. Microsoft.AspNetCore.DataProtection: The key ring does not contain a valid default protection key. The data protection system cannot create a new key because auto-generation of keys is disabled.
Value cannot be null.
Parameter name: provider
I am not sure what I am missing. Any help is appreciated.
The problem was with the Azure Storage account: make sure local.settings.json has the correct credentials for the storage account:
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "MyStorageKey",
    "FUNCTIONS_WORKER_RUNTIME": "python"
  }
}
