Azure function timer trigger is not present in portal - Python

I have an Azure Function app running on Linux. In it, I have created two functions: one triggered by a blob trigger and the other by a timer trigger.
Both functions are deployed with Azure DevOps, but when I go to the portal, the timer trigger function is not present.
To deploy the functions, I have the code in a Git repository and it is copied into a .zip file to build the artifact. Once the artifact is built, it is deployed to the function app with the Azure CLI.
Code:
function.json
{
    "schedule": "0 30 14 * * *",
    "name": "myTimer",
    "type": "timerTrigger",
    "direction": "in",
    "runOnStartup": false
}
host.json
{
    "version": "2.0",
    "logging": {
        "fileLoggingMode": "always",
        "applicationInsights": {
            "samplingSettings": {
                "isEnabled": true
            }
        }
    },
    "extensions": {
        "queues": {
            "maxPollingInterval": "00:00:02",
            "visibilityTimeout": "00:00:30",
            "batchSize": 8,
            "maxDequeueCount": 5,
            "newBatchThreshold": 4,
            "messageEncoding": "base64"
        }
    },
    "extensionBundle": {
        "id": "Microsoft.Azure.Functions.ExtensionBundle",
        "version": "[3.3.0, 4.0.0)"
    },
    "functionTimeout": "-1",
    "retry": {
        "strategy": "fixedDelay",
        "maxRetryCount": 0,
        "delayInterval": "00:00:05"
    }
}
__init__.py
import datetime
import logging

import azure.functions as func


def main(mytimer: func.TimerRequest) -> None:
    utc_timestamp = datetime.datetime.utcnow().replace(
        tzinfo=datetime.timezone.utc).isoformat()
    if mytimer.past_due:
        logging.info('The timer is past due!')
    logging.info('Python timer trigger function ran at %s', utc_timestamp)
SOLUTION
I had a wrongly configured function.json file. The correct content is:
{
    "bindings": [
        {
            "schedule": "0 30 14 * * *",
            "name": "myTimer",
            "type": "timerTrigger",
            "direction": "in",
            "runOnStartup": false
        }
    ],
    "disabled": false,
    "scriptFile": "__init__.py"
}
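As a quick local sanity check (a sketch of my own, not part of the Functions tooling), you can verify before deploying that a function.json wraps its binding in the top-level "bindings" array the host expects:

```python
import json

def has_bindings_array(text: str) -> bool:
    """Return True if the parsed function.json has a non-empty 'bindings' list."""
    config = json.loads(text)
    bindings = config.get("bindings")
    return isinstance(bindings, list) and len(bindings) > 0

# A bare binding object (the broken layout) versus the corrected layout.
broken = '{"schedule": "0 30 14 * * *", "name": "myTimer", "type": "timerTrigger", "direction": "in"}'
fixed = ('{"bindings": [{"schedule": "0 30 14 * * *", "name": "myTimer",'
         ' "type": "timerTrigger", "direction": "in"}], "scriptFile": "__init__.py"}')

print(has_bindings_array(broken))  # False
print(has_bindings_array(fixed))   # True
```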

Thanks for confirming, @vll1990.
Posting the solution as an answer so it can benefit other members who hit a similar issue and need to find and fix it.
We tried to create a Python Azure Function with a timer trigger and, as you found, the function.json file needs to be in the format below.
The function.json that you are using is:
{
    "schedule": "0 30 14 * * *",
    "name": "myTimer",
    "type": "timerTrigger",
    "direction": "in",
    "runOnStartup": false
}
Instead, the binding properties need to be wrapped in a "bindings" array in the .json file.
For example:-
{
    "bindings": [
        {
            "schedule": "0 30 14 * * *",
            "name": "myTimer",
            "type": "timerTrigger",
            "direction": "in",
            "runOnStartup": false
        }
    ],
    "disabled": false,
    "scriptFile": "__init__.py"
}
After redeploying, we are able to see the timer function in our function app in the portal.
For more information, please refer to the Microsoft documentation: Timer trigger for Azure Functions.

Related

Azure Functions Python App - enable IdentityModelEventSource.ShowPII Property

I'm having some issues with the AAD authentication of my Python API, which is hosted in Azure Functions.
The official documentation suggests enabling PII "to see the values removed from the message" in order to be able to check the Issuer & ValidIssuer. However, the documentation only references the .NET extension, and the search on learn.microsoft.com also only shows hits for .NET developers. How can I activate it for my Python API application?
The error code / return JSON I am stuck on:
{
    "code": 401,
    "message": "IDX10205: Issuer validation failed. Issuer: '[PII of type 'System.String' is hidden. For more details, see https://aka.ms/IdentityModel/PII.]'. Did not match: validationParameters.ValidIssuer: '[PII of type 'System.String' is hidden. For more details, see https://aka.ms/IdentityModel/PII.]' or validationParameters.ValidIssuers: '[PII of type 'System.String' is hidden. For more details, see https://aka.ms/IdentityModel/PII.]'. For more details, see https://aka.ms/IdentityModel/issuer-validation."
}
host.json file:
{
    "version": "2.0",
    "extensions": {
        "http": {
            "routePrefix": ""
        }
    }
}
local.settings.json file:
{
    "IsEncrypted": false,
    "Values": {
        "AzureWebJobsStorage": "",
        "FUNCTIONS_WORKER_RUNTIME": "python"
    }
}
MyApp/function.json file:
{
    "scriptFile": "__init__.py",
    "disabled": false,
    "bindings": [
        {
            "authLevel": "anonymous",
            "type": "httpTrigger",
            "direction": "in",
            "name": "req",
            "methods": [
                "get",
                "post"
            ],
            "route": "{*route}"
        },
        {
            "type": "http",
            "direction": "out",
            "name": "$return"
        }
    ]
}
MyApp/__init__.py file:
import logging
from typing import Dict

import nest_asyncio

from ..FastAPIApp import app  # see below for contents

nest_asyncio.apply()
logger = logging.getLogger()


@app.get("/status")
async def index() -> Dict:
    return {
        "info": "API is working normally.",
    }
FastAPIApp/__init__.py file:
import fastapi
app = fastapi.FastAPI()

Triggering Azure Function based on ServiceBus and writing back not working (Python)

I have a Python Azure Function that triggers on messages to a topic, which works fine on its own. However, if I then also try to write a message to a different Service Bus queue, it doesn't work (the Azure Function won't even trigger when new messages are published to the topic). It feels like the trigger conditions aren't met when I include the msg_out: func.Out[str] parameter. Any help would be much appreciated!
__init__.py
import logging

import azure.functions as func


def main(msg: func.ServiceBusMessage, msg_out: func.Out[str]):
    # Log the Service Bus message as plain text
    # logging.info("Python ServiceBus topic trigger processed message.")
    logging.info("Changes are coming through!")
    msg_out.set("Send an email")
function.json
{
    "scriptFile": "__init__.py",
    "entryPoint": "main",
    "bindings": [
        {
            "name": "msg",
            "type": "serviceBusTrigger",
            "direction": "in",
            "topicName": "publish-email",
            "subscriptionName": "validation-sub",
            "connection": "Test_SERVICEBUS"
        },
        {
            "type": "serviceBus",
            "direction": "out",
            "connection": "Test_SERVICEBUS",
            "name": "msg_out",
            "queueName": "email-test"
        }
    ]
}
host.json
{
    "version": "2.0",
    "logging": {
        "applicationInsights": {
            "samplingSettings": {
                "isEnabled": true,
                "excludedTypes": "Request"
            }
        }
    },
    "extensionBundle": {
        "id": "Microsoft.Azure.Functions.ExtensionBundle",
        "version": "[2.*, 3.0.0)"
    },
    "extensions": {
        "serviceBus": {
            "prefetchCount": 100,
            "messageHandlerOptions": {
                "autoComplete": true,
                "maxConcurrentCalls": 32,
                "maxAutoRenewDuration": "00:05:00"
            },
            "sessionHandlerOptions": {
                "autoComplete": false,
                "messageWaitTimeout": "00:00:30",
                "maxAutoRenewDuration": "00:55:00",
                "maxConcurrentSessions": 16
            }
        }
    }
}
I can reproduce your problem; it seems to be caused by the following error:
Property sessionHandlerOptions is not allowed.
After deleting sessionHandlerOptions, the function triggers normally.
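For reference, here is a minimal sketch of the corrected extensions section of host.json, with sessionHandlerOptions removed and the other values from the question kept as-is:

```json
{
    "extensions": {
        "serviceBus": {
            "prefetchCount": 100,
            "messageHandlerOptions": {
                "autoComplete": true,
                "maxConcurrentCalls": 32,
                "maxAutoRenewDuration": "00:05:00"
            }
        }
    }
}
```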

Azure Function (Python): dynamic naming of the output blob path

I am using an Azure Function as a webhook consumer to receive HTTP events as JSON and store them in Azure Storage.
I want to name the output blob path dynamically based on the date, as shown below. I have tried lots of options but have not been able to get the desired output.
I followed this post but had no luck.
Expected write path:
source/
    ctry/
        yyyy/
            mm/
                date/
                    hrs/
                        event_{systemtime}.json
function.json:
{
    "scriptFile": "__init__.py",
    "bindings": [
        {
            "authLevel": "anonymous",
            "type": "httpTrigger",
            "direction": "in",
            "name": "req",
            "methods": [
                "get",
                "post"
            ]
        },
        {
            "type": "http",
            "direction": "out",
            "name": "$return"
        },
        {
            "type": "blob",
            "name": "outputblob",
            "path": "source/ctry/{datetime:yyyy}/{datetime:MM}/{datetime:dd}/{datetime:hh}/event_{systemtime}.json",
            "direction": "out",
            "connection": "MyStorageConnectionAppSetting"
        }
    ]
}
__init__.py
import logging

import azure.functions as func


def main(req: func.HttpRequest, outputblob: func.Out[str]) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')
    name = 'some_name'
    if not name:
        try:
            req_body = 'req_body_test'  # req.get_json()
        except ValueError:
            pass
        else:
            name = 'name'  # req_body.get('name')
    print(str(req.get_json()))
    outputblob.set(str(req.get_json()))
A dynamic blob name requires you to post a request in JSON format.
For example, if you want to output a blob to test/{testdirectory}/test.txt, you need to post a request like:
{
    "testdirectory": "nameofdirectory"
}
After that, the Azure Function binding will be able to get the directory name.
By the way, I don't recommend bindings for complex blob operations; I recommend using the SDK rather than bindings.
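Following that suggestion, here is a minimal sketch of building the same date-based path in plain Python; the resulting path could then be passed to the azure-storage-blob package's BlobClient.upload_blob (the upload call itself is not shown, and the layout mirrors the binding expression in the question):

```python
from datetime import datetime, timezone

def build_blob_path(now: datetime) -> str:
    """Build the source/ctry/yyyy/MM/dd/hh/... layout that the binding
    expression would produce, but explicitly, for use with the SDK."""
    return now.strftime("source/ctry/%Y/%m/%d/%H/event_%Y%m%dT%H%M%S.json")

path = build_blob_path(datetime(2023, 5, 1, 14, 30, 5, tzinfo=timezone.utc))
print(path)  # source/ctry/2023/05/01/14/event_20230501T143005.json
```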
I was able to achieve the dynamic path by making the changes below to
function.json
{
    "scriptFile": "__init__.py",
    "bindings": [
        {
            "authLevel": "anonymous",
            "type": "httpTrigger",
            "direction": "in",
            "name": "req",
            "methods": [
                "get",
                "post"
            ]
        },
        {
            "type": "http",
            "direction": "out",
            "name": "$return"
        },
        {
            "type": "blob",
            "name": "outputblob",
            "path": "source/ctry/{DateTime:yyyy}/{DateTime:MM}/{DateTime:dd}/event_{DateTime}.json",
            "direction": "out",
            "connection": "MyStorageConnectionAppSetting"
        }
    ]
}

Azure Timer Functions - Python: TimeSpan used in CRON settings is not accepted in the binding configuration

I tried the Azure Python Function binding JSON configuration below with "TimeSpan", "timeSpan" and "timespan", but I am getting an error:
Error: The function is in error: Can't figure out which ctor to call.
{
    "scriptFile": "__init__.py",
    "bindings": [
        {
            "name": "mytimer",
            "type": "timerTrigger",
            "direction": "in",
            "timespan": "00:00:01",
            "authLevel": "anonymous"
        }
    ]
}
Refer to the documentation: https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-timer?tabs=python#timespan
There is no timespan binding property; the TimeSpan value goes in the schedule property. You can configure it like this:
{
    "scriptFile": "__init__.py",
    "bindings": [
        {
            "name": "mytimer",
            "type": "timerTrigger",
            "direction": "in",
            "schedule": "00:00:01"
        }
    ]
}
I did a test and there seems to be no problem.
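As an aside (my reading of the docs, not part of the answer above): a schedule value like "00:00:01" is a TimeSpan-style interval in hh:mm:ss form rather than a CRON expression, and the documentation notes this form is only supported when the function app runs on an App Service plan. A sketch of how such a value maps to an interval:

```python
from datetime import timedelta

def parse_timespan(value: str) -> timedelta:
    """Parse an hh:mm:ss TimeSpan string, as used in the timer trigger's
    'schedule' property, into a timedelta interval."""
    hours, minutes, seconds = (int(part) for part in value.split(":"))
    return timedelta(hours=hours, minutes=minutes, seconds=seconds)

print(parse_timespan("00:00:01").total_seconds())  # 1.0
print(parse_timespan("00:05:00").total_seconds())  # 300.0
```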

GCE startup-script not firing

I have the following startup script variable defined in my Python script:
default_startup_script = """
#! /bin/bash
cd ~/git/gcloud;
git config --global user.email "my.email@gmail.com";
git config --global user.name "my.name";
git stash;
git pull https://user:pw@bitbucket.org/url/my_repo.git;
"""
and the following config:
config = {
    "name": "instance-bfb6559d-788f-48b7-85a3-8ff3ab6e5a60",
    "zone": "projects/username-165421/zones/us-east1-b",
    "machineType": "projects/username-165421/zones/us-east1-b/machineTypes/f1-micro",
    "metadata": {
        "items": [{"key": "startup-script", "value": default_startup_script}]
    },
    "tags": {
        "items": [
            "http-server",
            "https-server"
        ]
    },
    "disks": [
        {
            "type": "PERSISTENT",
            "boot": True,
            "mode": "READ_WRITE",
            "autoDelete": True,
            "deviceName": "instance-4",
            "initializeParams": {
                "sourceImage": "projects/username-165421/global/images/image-id",
                "diskType": "projects/username-165421/zones/us-east1-b/diskTypes/pd-standard",
                "diskSizeGb": "10"
            }
        }
    ],
    "canIpForward": False,
    "networkInterfaces": [
        {
            "network": "projects/username-165421/global/networks/default",
            "subnetwork": "projects/username-165421/regions/us-east1/subnetworks/default",
            "accessConfigs": [
                {
                    "name": "External NAT",
                    "type": "ONE_TO_ONE_NAT"
                }
            ]
        }
    ],
    "description": "",
    "labels": {},
    "scheduling": {
        "preemptible": False,
        "onHostMaintenance": "MIGRATE",
        "automaticRestart": True
    },
    "serviceAccounts": [
        {
            "email": "123456-compute@developer.gserviceaccount.com",
            "scopes": [
                "https://www.googleapis.com/auth/devstorage.read_only",
                "https://www.googleapis.com/auth/logging.write",
                "https://www.googleapis.com/auth/monitoring.write",
                "https://www.googleapis.com/auth/servicecontrol",
                "https://www.googleapis.com/auth/service.management.readonly",
                "https://www.googleapis.com/auth/trace.append"
            ]
        }
    ]
}
Now - the instance creates without issue, but the startup script does not fire.
I am creating the instance by running:
compute.instances().insert(
    project=project,
    zone=zone,
    body=config
).execute()
All of the samples were retrieved from here.
Once the instance is created, if I paste my startup script in manually, it works without issue.
Does anyone have any idea what I am doing wrong here?
This works. My issue was related to user accounts: I was not logging in as the default user (e.g. username@instance-id).
If you are reading this question, just be sure which username you intend to run this for, and manage it accordingly.
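One more defensive tweak worth noting (an aside, separate from the account issue above): because the triple-quoted string opens with a newline, the #! /bin/bash shebang is not on the script's first line. Stripping leading whitespace before attaching the script avoids depending on whatever fallback interpreter the runner picks:

```python
default_startup_script = """
#! /bin/bash
cd ~/git/gcloud;
git stash;
"""

# lstrip() drops the leading newline so the shebang becomes the first line
script = default_startup_script.lstrip()
metadata_items = [{"key": "startup-script", "value": script}]

print(script.splitlines()[0])  # #! /bin/bash
```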
