HTTP Requests in Azure Python Function App

I have an Azure function app with a timer trigger. Inside the app it makes several HTTP requests using the requests library:
requests.get(baseURL, params=params)
When I debug on my computer it runs without error. Requests take anywhere from 2 to 30 seconds to return. When I deploy in Azure, though, the function will hang after sending some requests and you have to restart it to get it to work again. It never throws an exception and never fails. Just hangs.
The number of requests that Azure successfully completes varies between 2 and 6. The requests are always sent in the same order and always return the same data. There doesn't seem to be any clear pattern for when it hangs. Sometimes it's on requests that return little data, sometimes requests that return more data.
Any ideas??

First, check whether the function is actually writing logs in time. Use the "Monitor" tab rather than the "Logs" window, because entries sometimes never show up in the "Logs" window. Note that logs in "Monitor" can be delayed by about 5 minutes.
Then check whether your function is timing out. On the consumption plan the default function timeout is 5 minutes, so if your function makes several requests that each take tens of seconds, set functionTimeout to 00:10:00 in "host.json" (10 minutes is the maximum on the consumption plan).
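For reference, the timeout the answer describes goes in host.json at the root of the function app. This is a minimal sketch showing only the relevant property, with the value from the answer above:

```json
{
  "version": "2.0",
  "functionTimeout": "00:10:00"
}
```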
If the function still doesn't work, check whether your timer-triggered function is calling the URL of an HttpTrigger function that lives in the same function app. That can cause exactly this kind of hang; you can refer to this post, where I found a similar problem in the past. To solve it, create another function app so the timer trigger function and the HTTP trigger function are separated.
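Independently of the plan-level timeout, it is worth giving every outbound request its own timeout so a stalled connection raises an exception instead of silently hanging the whole run, which matches the "never throws, just hangs" symptom in the question. A minimal sketch, assuming the requests library; fetch_json and its error handling are illustrative, not part of the original code:

```python
import requests

def fetch_json(url, params=None, timeout=(5, 30)):
    """GET a URL with explicit (connect, read) timeouts.

    Without `timeout`, requests can block forever on a dead
    connection; with it, a stall raises an exception we can catch.
    """
    try:
        resp = requests.get(url, params=params, timeout=timeout)
        resp.raise_for_status()
        return resp.json()
    except requests.RequestException as exc:
        # Hypothetical handling: log and skip so the timer run completes.
        print(f"request to {url} failed: {exc}")
        return None
```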

Related

My Azure Function in Python v2 doesn't show any signs of running, but it probably is

I have a simple function app in Python v2. The plan is to process millions of images, but right now I just want to get the scaffolding right, i.e. no image processing, just dummy data. So I have two functions:
process, with an HTTP trigger (@app.route), which inserts 3 random image URLs into Azure Queue Storage;
process_image, with a queue trigger (@app.queue_trigger), which processes one image URL from above (currently it only logs the event).
I trigger the first one with a curl request and, as expected, I can see the invocation in the Azure portal in the function's invocation section, and I can see the items in the Storage Explorer's queue.
But unexpectedly, I do not see any invocations for the second function, even though after a few seconds the items disappear from the images queue and end up in the images-poison queue. So something did process each queue item 5 times. I see the following warning in Application Insights when checking traces and exceptions:
Message has reached MaxDequeueCount of 5. Moving message to queue 'case-images-deduplication-poison'.
Can anyone help with what's going on? Here's the gist of the code.
If I had to guess, something else is consuming that storage queue, such as your dev machine or another function. Can you put logging into the second function? (Sorry, I'm a C# guy, so I don't know the Python code for logging.)
Have you checked the individual function metrics in the portal, under Function App >> Functions >> Function name >> Overview >> Total Execution Count, expanded to the relevant time period?
Do note that it can take up to 5 minutes for executions to show up, but after that you'll see them in the metrics.
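Since the second function never logs an invocation, adding an explicit log line to its body (as the first answer suggests) is a cheap way to confirm whether it runs at all. A hypothetical, trimmed-down handler body in Python; in the real app this would sit inside the @app.queue_trigger-decorated function:

```python
import logging

def process_image(msg_body: str) -> str:
    """Illustrative body of the queue-triggered handler: log the
    payload so every dequeue attempt leaves a trace in Application
    Insights.

    If this body raises, the message is retried; after MaxDequeueCount
    (5 by default) failed attempts it is moved to the poison queue,
    which matches the warning quoted above.
    """
    logging.info("process_image received: %s", msg_body)
    return msg_body
```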

How to create scheduler in httpTrigger in azure function using python?

I have one httpTrigger function in which I have implemented a cache. We have a requirement to update the cache every 2 hours.
Solution 1:
We can expire the cache after 2 hours, but we don't want to use this solution.
Solution 2:
We want a function (update_cache()) to get triggered every 2 hours.
I found the schedule library, but I can't work out how to implement this:
import logging
import schedule

# I want to trigger this function every 2 hours
def trigger_scheduler_update():
    logging.info("Hi, I am the scheduler and I got triggered...")

schedule.every(2).hours.do(trigger_scheduler_update)
But the problem I am facing is that we then have to write this kind of code:
# ref: https://www.geeksforgeeks.org/python-schedule-library/
while True:
    # Checks whether a scheduled task
    # is pending to run or not
    schedule.run_pending()
    time.sleep(1)
As this is an infinite loop, I can't place it in an HTTP trigger. Is there a way to implement a scheduler that runs every 2 hours?
I don't know whether it can be done using threading.
I found one more library, but it looks like it also won't work.
Your function is shut down after a period of time unless you are on a premium plan, and even then you cannot guarantee that your function keeps running.
What cache are you referring to?
Note that you cannot do threading in Azure Functions, and you shouldn't anyway. Abandon the idea of refreshing the cache from your httpTrigger function and create a separate timer-triggered function to update the cache that your HTTP function is using.

Put a time limit on a request

I have a program, and to make sure users can't download excessively large files via input, I need a time limit on how long each request is allowed to take.
Does anyone know a good way to put a time limit (lifetime) on each Python requests GET request, so that if it takes, say, 10 seconds an exception will be thrown?
Thanks
You can define your own timeout like:
requests.get('https://github.com/', timeout=0.001)
You can pass an additional timeout parameter to every request you make. This is always recommended, as it makes your code more robust against hanging indefinitely when you don't receive a response from the other end.
requests.get('https://github.com/', timeout=0.001)
Read the official Python Requests documentation on timeouts here.
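One caveat worth knowing: requests' timeout parameter bounds the connection and each individual socket read, not the total transfer time, so a server trickling data slowly can still exceed 10 seconds overall. A sketch of a total wall-clock limit using streaming; get_with_deadline is an illustrative name, not a requests API:

```python
import time
import requests

def get_with_deadline(url, deadline=10.0, timeout=(5, 5), chunk=8192):
    """Download with a total wall-clock limit.

    `timeout` caps the connect and per-read waits; the monotonic-clock
    check caps the whole transfer, aborting slow large downloads.
    """
    start = time.monotonic()
    body = b""
    with requests.get(url, stream=True, timeout=timeout) as resp:
        resp.raise_for_status()
        for part in resp.iter_content(chunk_size=chunk):
            body += part
            if time.monotonic() - start > deadline:
                raise TimeoutError(f"request exceeded {deadline}s")
    return body
```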

Avoid Python CGI browser timeout

I have a Python CGI that I use along with SQLAlchemy, to get data from a database, process it and return the result in Json format to my webpage.
The problem is that this process takes about 2 minutes to complete, and the browser times out after 20 or 30 seconds of script execution.
Is there a way in Python (maybe a library?) or a design idea that can help me let the script run to completion?
Thanks!
You will have to set the timeout in the HTTP server's configuration (Apache, for example). The default should be more than 120 seconds, if I remember correctly.
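For Apache specifically, the relevant setting is the Timeout directive in httpd.conf; the value below is illustrative, chosen to comfortably cover the ~2-minute script described above:

```apacheconf
# httpd.conf -- allow long-running CGI responses (value is illustrative)
Timeout 300
```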

Exact mechanics of time limits during Google App Engine Python's urlfetch?

I have a http request in my code that takes ~5-10 s to run. Through searching this site, I've found the code to increase the limit before timeout:
from google.appengine.api import urlfetch
urlfetch.set_default_fetch_deadline(60)
My question: What is that number '60'? Seconds or tenths of a second? Most responses seem to imply it's seconds, but that can't be right. When I use 60, I get a time out in less than 10 s while testing on localhost. I have to set the number to at least 100 to avoid the issue - which I worry will invoke the ire of the Google gods.
It's seconds, and you can also pass it per call as the deadline argument, e.g. urlfetch.fetch(url, deadline=60). Have you tried fetching another website? Are you sure it's a timeout and not another error?
https://developers.google.com/appengine/docs/python/urlfetch/fetchfunction
