I am new to the Google Vision API, but I have been working with gunicorn and Flask for some time. I installed all the required libraries, and my API key is set in the environment via the gunicorn bash file. Whenever I try to hit the GCP API, it just freezes with no response.
Can anybody help?
Here's my gunicorn_start.bash
NAME="test"
NUM_WORKERS=16
PYTHONUNBUFFERED=True
FLASK_DIR=/home/user/fold/API/
echo "Starting $NAME"
cd $FLASK_DIR
conda activate tf
export development=False
export GOOGLE_APPLICATION_CREDENTIALS='/home/user/test-6f4e7.json'
exec /home/user/anaconda3/envs/tf/bin/gunicorn --bind 0.0.0.0:9349 --timeout 500 --worker-class eventlet --workers $NUM_WORKERS app:app
EDIT
It freezes during the API call.
Code for API call:
import io

from google.cloud import vision

client = vision.ImageAnnotatorClient()

with io.open(path, 'rb') as image_file:
    content = image_file.read()

image = vision.types.Image(content=content)
response = client.document_text_detection(image=image)
There are no logs; it just freezes, nothing else.
The code looks fine and it doesn't seem to be a permission error. Since there are no logs, the issue is hard to troubleshoot; however, I have two theories of what could be happening. I'll leave them below, with some information on how to troubleshoot them.
The API call is not reaching Google's servers
This could be happening due to a networking error. To rule this out, try making a request with curl from the development environment (or wherever the application is running).
You can follow the CLI quickstart for the Vision API to prepare the environment and make the request. If it works, you can rule out the network as a possible cause. If the request fails or freezes, you might need to check the network configuration of your environment.
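If curl isn't convenient, the same check can be done from Python. This is a minimal sketch assuming the requests package is installed; even an HTTP 401/403 response proves the network path to the API works, while a hang or connection error points at the network:

# Connectivity probe only: any HTTP error response still means the
# server was reached; a timeout or ConnectionError means it was not.
import requests

resp = requests.post(
    "https://vision.googleapis.com/v1/images:annotate",
    json={"requests": []},
    timeout=10,
)
print(resp.status_code, resp.text[:200])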
In addition, you can go to the API Dashboard in the Cloud Console and look at the metrics for the Vision API. In these graphs, you can see whether your requests are reaching the server, along with useful information such as errors by API method, errors by credential, and latency of requests.
There's an issue with the image/document you're sending
Note: Change the logging level of the application to DEBUG (if it's not already at this level).
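For reference, one minimal way to raise the log level in a Python app (a sketch; your application may configure logging elsewhere):

import logging

# Root logger at DEBUG so client-library messages become visible
logging.basicConfig(level=logging.DEBUG)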
If you're certain that the requests are reaching the server, the issue could be with the file that you're trying to send. If the file is too big, the connection might look frozen while the file is being uploaded, and processing might also take some time. Try smaller files and compare the results.
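As a related safeguard (my addition, not part of the question's code), you can pass a timeout to the annotation call so a stalled request raises an error instead of hanging forever; this assumes your google-cloud-vision version forwards the timeout keyword to the underlying RPC:

from google.api_core.exceptions import DeadlineExceeded

try:
    # timeout is in seconds; the call raises instead of freezing
    response = client.document_text_detection(image=image, timeout=60)
except DeadlineExceeded:
    print("Vision API call timed out; check the network or the file size")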
Also, I noticed that you're currently using a synchronous method to perform the recognition. If the file is large, you could try the asynchronous annotation approach instead. Basically, you upload your file(s) to Cloud Storage first and then create a request indicating the storage URI where your file is located and the destination storage URI where you want the results to be written.
What you'll receive from the service is an operation ID. With this ID, you can check the status of the recognition request and make your code wait until the process has finished. You can use this example as a reference to implement it.
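A sketch of that flow, based on the v1 asynchronous file annotation API from the same client-library generation as the question's code; the bucket URIs and the PDF input are placeholders:

from google.cloud import vision

client = vision.ImageAnnotatorClient()

# Input file already uploaded to Cloud Storage (placeholder URI)
gcs_source = vision.types.GcsSource(uri='gs://your-bucket/your-file.pdf')
input_config = vision.types.InputConfig(
    gcs_source=gcs_source, mime_type='application/pdf')

# Destination where the JSON results will be written (placeholder URI)
gcs_destination = vision.types.GcsDestination(uri='gs://your-bucket/results/')
output_config = vision.types.OutputConfig(
    gcs_destination=gcs_destination, batch_size=2)

feature = vision.types.Feature(
    type=vision.enums.Feature.Type.DOCUMENT_TEXT_DETECTION)

request = vision.types.AsyncAnnotateFileRequest(
    features=[feature],
    input_config=input_config,
    output_config=output_config)

# Returns a long-running operation; result() blocks until it finishes
operation = client.async_batch_annotate_files(requests=[request])
operation.result(timeout=300)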
Hopefully with this information you can determine what is the issue you're facing and solve it.
I had the exact same issue, and it turned out to be gunicorn's fault. When I switched to the Django dev server, the problem went away. I tried with older gunicorn versions (back to 2018), and the problem persisted. I should probably report this to gunicorn. :D
Going to switch to uwsgi in the meantime.
Related
I have a Python script that I want to make accessible through a website with a user interface.
I was experimenting with Flask, but I'm not sure this is the right tool for what I want to do.
My script takes user data (.doc/.txt files), does something with it, and returns it to the user. I don't want to save anything, and I don't think I need a database for this (is that right?). The file will be temporarily saved on the server, and everything will be deleted once the user has downloaded the modified file.
My web hosting provider supports Python but only accepts CGI. I read that WSGI is the preferred method to use with Python and that CGI has scaling issues and can only process one request at a time. I'm not sure if I understand this correctly. If several users uploaded files at the same time, would the server only accept one request, or overwrite previous requests? Or can it handle one request per unique IP address/user?
Would CGI be OK for the simple get/process/return task of my Python script, or should I look into a hosting service that uses WSGI?
I had a look at Heroku and Render to deploy a Flask app, but I think I could also do that through my web hosting provider.
For anyone interested in this topic,
I decided to deploy my app on render.com, which supports gunicorn (WSGI).
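For anyone weighing the two models: with WSGI, the app is a long-lived Python object that the server invokes for each request, rather than a process started per request as with CGI. A minimal Flask sketch (module and route are placeholders) that gunicorn can serve:

# app.py: gunicorn serves the module-level `app` WSGI callable
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return 'Hello from WSGI'

# Started with, e.g.: gunicorn app:app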
I followed this Google Cloud Kubernetes tutorial for Python. I basically changed what's in their hello-world function to plot with matplotlib (with some other functions beforehand to get data to plot). It all worked (with some changes to the Dockerfile to pip install modules and to use plain python 3.7 instead of the slim version) until the step where it says to view the deployed app. I copy the external IP and try it in the browser, but it just keeps loading. I'm not sure what to check to see why it won't finish loading.
So I'm wondering how to check where the problem is. The Python code works fine elsewhere, outputting a plot with Flask on a local computer.
You can try proxying from your localhost directly to the pod to see if there's a problem with the load balancer.
kubectl port-forward your-pod-xxxxxxxxxx-xxxxx <local-port>:<pod-port>
Then you can just hit http://127.0.0.1:<local-port> in your browser.
You can also take a look at the pod logs:
kubectl logs your-pod-xxxxxxxxxx-xxxxx
I'm seeing slightly strange behavior from my Django app after deploying it on a server with gunicorn.
First, I added the script to upstart using the runit tool (Linux, of course), and I saw that my server responds to requests as it pleases: maybe it responds to a request, maybe not.
I was shocked, because the same configuration works properly on my local machine.
So I decided to remove the script from upstart and run it as on my local machine, with the same script that I had removed from the runit upstart. The result is better: it responds to 95% of the AJAX calls, but one still does not work.
(Screenshot from Chrome network monitoring.)
A SIMPLE request to the stop/ URL takes 10 seconds. And I have never seen the server respond to the client on the start/ URL when the app is deployed on the server.
Here are screenshots from my local machine, from Chrome network monitoring.
I run the apps on Google Compute Engine, so I thought the server didn't have enough performance, but that's wrong: changing the machine type has no influence.
Then I decided to take a look at the logs and the code. Before the response, I added these lines of code:
log.info('Start activity for {}'.format(username))
return HttpResponse("started")
And I can see that line in the logs, but the server still does not respond.
I still can't understand what is going on; it's driving me crazy.
Hi everyone,
I solved this issue by:
changing all scripts to their minified versions (*.js -> *.min.js)
adding django.middleware.http.ConditionalGetMiddleware to the middleware list
adding SESSION_COOKIE_DOMAIN = ".yourdomain.com" to settings (see the sketch below)
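A settings.py sketch of the last two changes; the domain is a placeholder, and depending on your Django version the list may be named MIDDLEWARE_CLASSES instead of MIDDLEWARE:

# settings.py (excerpt)
MIDDLEWARE = [
    'django.middleware.http.ConditionalGetMiddleware',
    # ... keep your existing middleware entries here ...
]

# The leading dot makes the session cookie valid for all subdomains
SESSION_COOKIE_DOMAIN = '.yourdomain.com'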
Maybe it would be helpful for someone.
I have an AWS EC2 machine that has been running nightly Google Analytics scripts to load data into a database. It had been working fine for months until this weekend. I have not made any changes to the code.
These are the two errors that are showing up in my logs:
/venv/lib/python3.5/site-packages/oauth2client/_helpers.py:256: UserWarning: Cannot access analytics.dat: No such file or directory
warnings.warn(_MISSING_FILE_MESSAGE.format(filename))
Failed to start a local webserver listening on either port 8080
or port 8090. Please check your firewall settings and locally
running programs that may be blocking or using those ports.
Falling back to --noauth_local_webserver and continuing with
authorization.
It looks like it is missing my analytics.dat file, but I have checked, and the file is in the same folder as the script that calls the GA API. I have been searching for hours trying to figure this out, but there are very few resources on the above errors for GA.
Does anyone know what might be going on here? Any ideas on how to troubleshoot more?
I am not sure why this is happening, but here is a list of steps which might help you:
1. Check whether this issue is caused by the Google Analytics API version; Google generally deprecates previous versions of its APIs.
2. I am guessing that you are running this code via cron on your EC2 server; make sure that you include the path to the folder where the .dat file is (see the sketch below).
3. Check whether you have the latest credentials in the .dat file.
Authentication to the API will happen through the .dat file.
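For point 2, a common cron pitfall is that the job runs from a different working directory, so a relative path like analytics.dat no longer resolves. A minimal sketch of one way to make the path robust (the filename matches the question; everything else is generic):

import os

# Resolve the .dat file relative to the script itself, not the
# working directory that cron happens to use
SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
DAT_FILE = os.path.join(SCRIPT_DIR, 'analytics.dat')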
Hope this solves your issue.
The thing is, I read this post stating best practices for setting up code to run at a specified interval over a period of time using the Python library APScheduler. Now, it obviously works perfectly fine if I do it in a test environment and run the application from the command prompt.
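For context, my setup looks roughly like this (a minimal sketch with a placeholder job and interval, using the APScheduler 3.x API):

from apscheduler.schedulers.blocking import BlockingScheduler

def job():
    # Placeholder for the actual work done at each interval
    print('running the scheduled task')

scheduler = BlockingScheduler()
scheduler.add_job(job, 'interval', minutes=10)  # placeholder interval
scheduler.start()  # blocks the process until interrupted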
However, I come from a background where most of my projects were university-level and never ran in production, but I would like this one to. I have access to AWS and can configure any kind of server there, and I am open to other options as well. It would be great to get a head start on what to look at if I have to run this application as a service on a server or a remote machine, without having to constantly monitor it and send interrupts at the command prompt.
I do not have any experience running Python applications in production, so any input would be appreciated. Also, I do not know how to execute this code in production (except through the AWS CLI), but that session expires once I close the CLI, so it does not seem like the most appropriate way to do it; any help on that end would be appreciated too.
The answer was very simple; it does not make a lot of sense and might not be applicable to everyone.
Now, what I had was a Python Flask application, so I configured the app in a virtual environment using eb-virt on the AWS server, then created an executable WSGI script, which I ran as a service using the mod_wsgi plugin for the Apache HTTP server. After that, I was able to run my app.
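For reference, a sketch of what such a WSGI entry-point script can look like; the path and module names are placeholders, and mod_wsgi looks for a module-level callable named application:

# wsgi.py (placeholder paths and names)
import sys

# Make the application package importable for mod_wsgi
sys.path.insert(0, '/var/www/myapp')

from app import app as application  # mod_wsgi expects `application`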