I am seeing some weird behavior from my Django app after deploying it to a server with gunicorn.
First, I added the script to upstart using the runit tool (on Linux, of course), and saw that the server responded to requests only when it felt like it: sometimes it answered, sometimes it did not.
I was shocked, because the same configuration works properly on my local machine.
So I removed the script from upstart and tried running the app the way I do locally, using the same script I had removed from runit. The result is better: it responds to about 95% of the AJAX calls, but one still does not work.
Here is a screenshot from Chrome's network monitor.
A SIMPLE request to the stop/ URL takes 10 seconds. And I have never seen the server respond to the client on the start/ URL when the app is deployed on the server.
Here are screenshots from Chrome's network monitor on my local machine.
I run the app on Google Compute Engine, so I thought the server might not have enough performance. But that is wrong: changing the machine type has no influence.
Then I decided to take a look at the logs and the code. Right before the response I added these lines of code:
log.info('Start activity for {}'.format(username))
return HttpResponse("started")
And I can see that line in the logs, but the client still gets no response.
I still can't understand what is going on. It is driving me crazy.
Hi everyone.
I solved this issue by doing the following:
changing all scripts to their minified versions (*.js -> *.min.js)
adding django.middleware.http.ConditionalGetMiddleware to the middleware list
adding SESSION_COOKIE_DOMAIN = ".yourdomain.com" to the settings
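In settings.py, the last two changes look roughly like this (a sketch; the rest of the middleware list is omitted, ".yourdomain.com" is a placeholder, and older Django versions use MIDDLEWARE_CLASSES instead of MIDDLEWARE):

# settings.py (sketch)
MIDDLEWARE = [
    # ... the existing middleware entries ...
    'django.middleware.http.ConditionalGetMiddleware',
]

# share the session cookie across subdomains (placeholder domain)
SESSION_COOKIE_DOMAIN = ".yourdomain.com"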
Maybe it would be helpful for someone.
I have a Python application that has been working correctly as the backend for my website. Up to now I have been running it using "python manage.py runserver IP:8000" in CMD. However, I would now like it to use HTTPS, but when I try to access https://IP:PORT through my browser I get the following error:
You're accessing the development server over HTTPS, but it only
supports HTTP.
The server I am running all of this on is Windows Server 2019 Datacenter; in a Linux environment I would normally just use NGINX + Gunicorn.
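(As the error says, runserver cannot serve TLS at all. If HTTPS were only needed during development, one workaround, offered here as an untested assumption, would be runserver_plus from the third-party django-extensions package; the flag name is taken from its documentation and may differ between versions:

pip install django-extensions Werkzeug pyOpenSSL
# also add 'django_extensions' to INSTALLED_APPS in settings.py
python manage.py runserver_plus --cert-file cert.crt IP:8000

That does not help with production hosting behind IIS, though, which is what the rest of this question is about.)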
I was browsing possible solutions and stumbled upon this; however, I am already using IIS to host a website (my frontend), so I needed to figure out how to host several websites on the same IP, and I have now found this.
Long story short, I configured the new IIS website so that it reaches my Django app, and I then changed the hostname, since both the frontend and the new backend will be using the same IP and ports (80, 443).
But now I have hit a spot where I am confused, due to my lack of experience with IIS and networking: I can't seem to understand how requests will get through to my Python/Django app.
Something important to mention is how I accessed the Django app in the past.
Let's say my frontend is https://pr.app.com. Whenever any request needed to be made to the backend, I would ask for that information at http://pr.app.com:8000/APIService/..../
This is what the binding for my frontend looks like.
And this is the binding for the new backend, where I changed the hostname as the second guide linked above says.
Any guidance or help would be most appreciated.
Thanks in advance.
Update:
So I tried pausing my frontend website and using these bindings on the new backend website, and I was able to get a Django screen, meaning it seems to be working, or at least communicating.
Now I would need the hostname of the backend (pr.abcapi.com) to somehow refer or redirect to the hostname of the frontend (pr.abc.com).
How could I achieve this?
I am new to the Google Vision API, but I have been working with gunicorn and Flask for some time. I installed all the required libraries, and my API key is set in the environment via the gunicorn bash file. Whenever I try to hit the GCP API, it just freezes with no response.
Can anybody help?
Here's my gunicorn_start.bash:
NAME="test"
NUM_WORKERS=16
PYTHONUNBUFFERED=True
FLASK_DIR=/home/user/fold/API/
echo "Starting $NAME"
cd $FLASK_DIR
conda activate tf
export development=False
export GOOGLE_APPLICATION_CREDENTIALS='/home/user/test-6f4e7.json'
exec /home/user/anaconda3/envs/tf/bin/gunicorn --bind 0.0.0.0:9349 --timeout 500 --worker-class eventlet --workers $NUM_WORKERS app:app
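For what it's worth, one cheap experiment is to rule out the worker class: eventlet monkey-patches networking and is known to interact badly with some gRPC-based clients (the Vision client library uses gRPC), so a test run with gunicorn's default sync workers, same flags otherwise, isolates that variable:

exec /home/user/anaconda3/envs/tf/bin/gunicorn --bind 0.0.0.0:9349 --timeout 500 --workers 4 app:app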
EDIT
It freezes during the API call.
Code for the API call:
import io
from google.cloud import vision

client = vision.ImageAnnotatorClient()

# read the image from disk (path is defined elsewhere)
with io.open(path, 'rb') as image_file:
    content = image_file.read()
image = vision.types.Image(content=content)
response = client.document_text_detection(image=image)
There are no logs; it just freezes, nothing else.
The code looks fine and it doesn't seem to be a permission error. Since there are no logs, the issue is hard to troubleshoot; however, I have two theories of what could be happening. I'll leave them below, with some information on how to troubleshoot them.
The API call is not reaching Google's servers
This could be happening due to a networking error. To rule this out, try to make a request using curl from the development environment (or wherever the application is running).
You can follow the CLI quickstart for Vision API to prepare the environment, and make the request. If it works, then you can discard the network as a possible cause. If the request fails or freezes, then you might need to check the network configuration of your environment.
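If curl is not convenient, a minimal standalone Python script, run outside gunicorn with the same credentials, serves the same purpose; passing a timeout (an assumption based on the client's standard call options) makes a network problem fail fast instead of hanging:

import io
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with io.open('small_test.png', 'rb') as f:  # any small local image
    image = vision.types.Image(content=f.read())

# short timeout: fail fast if the request never reaches the server
response = client.document_text_detection(image=image, timeout=30)
print(response.full_text_annotation.text)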
In addition, you can go to the API Dashboard in the Cloud Console and look at the metrics of the Vision API. In these graphs, you can see if your requests are reaching the server, as well as some useful information like: errors by API method, errors by credential, latency of requests, etc.
There's an issue with the image/document you're sending
Note: Change the logging level of the application to DEBUG (if it's not already at this level).
If you're certain that the requests are reaching the server, the issue could be with the file you're trying to send. If the file is too big, the connection might look frozen while the file is being uploaded, and it might also take some time to be processed. Try with smaller files and compare the results.
Also, I noticed that you're currently using a synchronous method to perform the recognition. If the file is too big, you could try the asynchronous annotation approach. Basically, you upload your file(s) to Cloud Storage first and then create a request indicating: the storage URI where your file is located and the destination storage URI where you want the results to be written to.
What you'll receive from the service is an operation Id. With this Id, you can check the status of the recognition request and make your code wait until the process has finished. You can use this example as a reference to implement it.
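A rough sketch of that flow, assuming the file has already been uploaded to Cloud Storage (the bucket and object names are placeholders):

from google.cloud import vision

client = vision.ImageAnnotatorClient()

request = vision.types.AsyncAnnotateFileRequest(
    features=[vision.types.Feature(
        type=vision.enums.Feature.Type.DOCUMENT_TEXT_DETECTION)],
    input_config=vision.types.InputConfig(
        gcs_source=vision.types.GcsSource(uri='gs://my-bucket/document.pdf'),
        mime_type='application/pdf'),
    output_config=vision.types.OutputConfig(
        gcs_destination=vision.types.GcsDestination(uri='gs://my-bucket/results/'),
        batch_size=2),
)

operation = client.async_batch_annotate_files(requests=[request])
# wait for the recognition to finish; results land in the destination bucket
operation.result(timeout=500)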
Hopefully with this information you can determine what is the issue you're facing and solve it.
I had the exact same issue, and it turned out to be gunicorn's fault. When I switched to the Django dev server, the problem went away. I tried with older gunicorn versions (back to 2018), and the problem persisted. I should probably report this to gunicorn. :D
Going to switch to uwsgi in the meantime.
I have a small Python Flask web server on an Ubuntu box (nginx and uWSGI) that I just started using to receive and process webhooks. Part of the webhook processing can include sending an email, which I noticed causes a delay and consequently blocks the response back to the server sending the webhook.
In researching a way to mitigate this, I discovered python-rq (aka rq), which lets me queue up a function call and then immediately respond to the webhook. In testing, this works great!
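A minimal sketch of that pattern (the send_notification_email function is a placeholder, and the Redis connection details are assumptions):

from flask import Flask, request
from redis import Redis
from rq import Queue

from emails import send_notification_email  # placeholder: the slow email job

app = Flask(__name__)
q = Queue(connection=Redis())  # assumes Redis on localhost:6379

@app.route('/webhook', methods=['POST'])
def webhook():
    payload = request.get_json()
    # hand the slow work to a worker process...
    q.enqueue(send_notification_email, payload)
    # ...and respond to the webhook immediately
    return '', 204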
I'm testing it on my server, and to start rq I have to run rqworker in the same directory as my website. This is fine for testing, but I don't want to have to log into the server and start rq by hand just to keep it running.
Some ideas I've come across:
The python-rq docs mention supervisor (http://python-rq.org/patterns/supervisor/), but I don't know if I need that much overhead (a sketch of that pattern follows below).
Would a simple cron job do the trick, using @reboot?
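Based on the supervisor pattern from the python-rq docs linked above, a program entry might look roughly like this (all paths and names are placeholders):

[program:rqworker]
command = /usr/local/bin/rqworker
directory = /path/to/website   ; run in the web app directory, as rq needs here
autostart = true
autorestart = true             ; bring the worker back if it dies
numprocs = 1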
This is a small internal-only server. I don't want to over-engineer it (I feel like I'm creeping in that direction already), but I also don't want to have to babysit it to make sure all of the pieces are working.
How can I set up rqworker to run in the web site application directory on its own?
I'm writing an installed desktop app that I would like users to try out. I'd like to launch a pre-alpha release and collect some feedback, especially about any uncaught exceptions that might be thrown, which, as the developer, I would like to know about right away.
That is, I would like the installed desktop app to automatically submit relevant log entries to a remote server so that I can inspect them and fix the error.
I've considered using cloud-based services (they provide a nice dashboard interface, which is ideal), but they're not really what I need:
Airbrake.io: quite pricey, geared towards webapps and servers
Loggly: has a forever-free plan, but it is for servers only, based on syslog monitoring. I can't expect users to install a syslog client alongside my application.
I have never done centralized logging over internet connections, only within a local network. I used the standard SocketHandler (http://docs.python.org/2/library/logging.handlers.html#sockethandler) and it worked for me.
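A minimal sketch of the sending side (the host is a placeholder; the receiving side needs a small TCP server that unpickles the records, as shown in the logging cookbook):

import logging
import logging.handlers

logger = logging.getLogger('myapp')
logger.setLevel(logging.INFO)

# SocketHandler sends pickled LogRecords over TCP to a log server
socket_handler = logging.handlers.SocketHandler(
    'logs.example.com',                         # placeholder host
    logging.handlers.DEFAULT_TCP_LOGGING_PORT)  # 9020 by default
logger.addHandler(socket_handler)

logger.info('application started')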
Other alternatives may be:
http://code.google.com/p/python-loggingserver/
https://papertrailapp.com/
http://pyfunc.blogspot.de/2013/08/centralized-logging-for-distributed.html
Also, saving to a regular local log file on a crash may be a solution: on the next startup of the app, check whether the log contains errors and, if so, send the log to your email.
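A rough sketch of that idea (the file name and the send_log_by_email helper are placeholders for illustration):

import logging
import os
import sys

CRASH_LOG = 'crash.log'  # placeholder path

def send_log_by_email(path):
    # placeholder: replace with smtplib or an HTTP upload to your server
    print('would send', path)

# on startup: if the previous run left a non-empty crash log, report it
if os.path.exists(CRASH_LOG) and os.path.getsize(CRASH_LOG) > 0:
    send_log_by_email(CRASH_LOG)
    os.remove(CRASH_LOG)

logging.basicConfig(filename=CRASH_LOG, level=logging.ERROR)

def log_uncaught(exc_type, exc_value, exc_traceback):
    # record any uncaught exception before the app dies
    logging.error('Uncaught exception',
                  exc_info=(exc_type, exc_value, exc_traceback))

sys.excepthook = log_uncaught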
I've been working on a Flask app which handles SMS messages using Twilio, stores them in a database, and provides access to a frontend via JSONP GET requests. I've daemonized it using supervisord, which seems to be working pretty well, but every few days it starts to hang (i.e. all requests pend forever or time out) and I have to restart the process. (I've also tried simply running it with nohup, but I hit the same problem.) I suspected that sqlite3 was somehow blocking occasionally, but my most recent test was a request handler that didn't involve database access at all, and that's timing out too. I'm incredibly puzzled; hopefully you've seen something similar or know what might be causing this.
The relevant code can be found here, and it's currently running (and stalled, as of this post) on my VPS at mattnichols.net:6288
Thanks!
Update: do you think this could be an issue with Flask's dev server? I'd like to believe that wrapping my app with Tornado (or something similar) would solve the problem, but I've also run other things on the dev server for much longer without problems.
For the record, this seems to have been solved by running my app using Tornado instead of the Flask dev server. Wrapping my Flask code into a Tornado server was super easy once I decided to do so: consult http://flask.pocoo.org/docs/deploying/wsgi-standalone/#tornado if you find yourself in my same situation.
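For reference, the wrapper from that page boils down to something like this (the import path for the Flask app is a placeholder):

from tornado.httpserver import HTTPServer
from tornado.ioloop import IOLoop
from tornado.wsgi import WSGIContainer

from myapp import app  # placeholder: your Flask application object

# serve the Flask WSGI app through Tornado's HTTP server
http_server = HTTPServer(WSGIContainer(app))
http_server.listen(6288)  # the port mentioned above
IOLoop.instance().start()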