I'm running a Django web app in a Docker container, using Nginx with uWSGI.
Overall the app works just fine; it fails only on specific callback endpoints during the social account (Google, Facebook) registration.
Below is the command I use to run uWSGI:
uwsgi --socket :8080 --master --strict --enable-threads --vacuum --single-interpreter --need-app --die-on-term --module config.wsgi
Below is the endpoint where it fails (Django allauth library):
accounts/google/login/callback/?state=..........
Below is the error message:
!! uWSGI process 27 got Segmentation Fault !!!
...
upstream prematurely closed connection while reading response header from upstream, client: ...
...
DAMN ! worker 1 (pid: 27) died :( trying respawn ...
Respawned uWSGI worker 1 (new pid: 28)
Just FYI: this works without any issues in the local Docker container, but it fails when I use the GCP container. Also, this used to work fine on GCP as well, so something probably broke after recent dependency updates.
Environment:
Python: 3.9.16
Django: 3.2.3
allauth: 0.44.0 (Django authentication library)
Nginx: nginx/1.23.3
uWSGI: 2.0.20
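One way to get more than a bare segfault notice out of uWSGI is to enable the stdlib faulthandler in the WSGI module, so the worker dumps a Python traceback when it receives SIGSEGV. A minimal sketch, nothing allauth-specific; the settings module name is an assumption matching --module config.wsgi above:

# config/wsgi.py
import os
import faulthandler

faulthandler.enable()  # dump a Python traceback if the worker gets SIGSEGV/SIGABRT

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'config.settings')  # adjust to your settings module

from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()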
Related
I have a working Django application deployed to Elastic Beanstalk. I am trying to add some asynchronous commands to it, so I am adding Celery. Currently I am running container commands through python.config in my .ebextensions.
I have added the command:
06startworker:
command: "source /var/app/venv/*/bin/activate && celery -A project worker --loglevel INFO -B &"
to my python.config. When I add this command and try to deploy, my Elastic Beanstalk instance times out and the deployment fails.
I have confirmed that the connection to my Redis server is working and my application can connect to it. Checking my cfn-init.log I see:
Command 01wsgipass succeeded
Test failed with code 1
...
Command 06startworker succeeded
So I think that adding the 06startworker command somehow interferes with my 01wsgipass command, which runs fine when I don't have the start-worker command.
For reference my wsgi command is:
01wsgipass: command: 'echo "WSGIPassAuthorization On" >> ../wsgi.conf'
I'm at a bit of a loss on how to troubleshoot this from here; the logs I'm getting are not very helpful.
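For what it's worth, deployments often hang at this point because the backgrounded command never detaches from the deploy hook's stdout. A sketch of a detached variant (the venv glob is carried over from the question; the log path is an assumption):

06startworker:
  command: |
    source /var/app/venv/*/bin/activate
    nohup celery -A project worker --loglevel INFO -B > /var/log/celery-worker.log 2>&1 &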
I'm using Django and Celery with RabbitMQ as the message broker. While developing on Windows I installed RabbitMQ and configured Celery inside Django like this:
celery.py
import os
from celery import Celery

# Make the Django settings available before the Celery app is configured.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'main.settings')

app = Celery('DjangoExample')
# Read all CELERY_*-prefixed settings from Django's settings.py.
app.config_from_object('django.conf:settings', namespace='CELERY')
# Look for a tasks.py module in every installed app.
app.autodiscover_tasks()
__init__.py
from .celery import app as celery_app
__all__ = ['celery_app']
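For completeness, autodiscover_tasks() then picks up tasks defined in a tasks.py inside any installed app; a minimal sketch (the add task is purely illustrative):

# someapp/tasks.py
from celery import shared_task

@shared_task
def add(x, y):
    # executes on the Celery worker, not in the web process
    return x + y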
When running Celery on my development Windows machine everything works correctly and tasks are executed as expected.
Now I'm trying to deploy the app on a CentOS 7 machine.
I installed RabbitMQ and I tried running Celery with the following command:
celery -A main worker -l INFO
But I get a "connection refused" error:
[2021-02-24 17:39:58,221: ERROR/MainProcess] consumer: Cannot connect to amqp://guest:**@127.0.0.1:5672//: [Errno 111] Connection refused.
I don't have any special configuration for Celery inside my settings.py since it was working fine in Windows without it.
You can find the settings.py here:
https://github.com/adrenaline681/DjangoExample/blob/master/main/settings.py
Here is a screenshot of the Celery error:
And here is the status of my RabbitMQ server, which shows that it's currently installed and running.
Here is an image of the RabbitMQ management plugin web interface, where you can see the port used for AMQP:
Does anyone know why this is happening?
How can I get Celery to work correctly with RabbitMQ inside of Centos7?
Many thanks in advance!
I had a similar problem, and it was SELinux blocking access between those two processes, RabbitMQ and Python. To check this guess, please disable SELinux temporarily and see if it works. If it does, then you have to configure SELinux to grant Python access to connect to RabbitMQ. To disable SELinux temporarily you can run in a shell:
# setenforce 0
See more here about disabling SELinux either temporarily or permanently. But actually I would not recommend disabling SELinux; it is better to configure it to grant the needed access. See more about SELinux here.
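If SELinux does turn out to be the culprit, one common way to grant the access instead of disabling it is to build a local policy module from the logged denials. A sketch (the module name celery_rabbitmq is arbitrary; ausearch and audit2allow come from the audit/policycoreutils tooling):

# inspect recent AVC denials
sudo ausearch -m avc -ts recent
# generate and install a local policy module covering those denials
sudo ausearch -m avc -ts recent | audit2allow -M celery_rabbitmq
sudo semodule -i celery_rabbitmq.pp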
You said you are developing on Windows, but you showed some outputs that look like Linux. Are you using Docker or some other container?
I don't know whether you are using Docker or some other container, but you can likely adapt my advice to your setup.
If you are using Docker, you'll need Django's settings.py to point at the Docker container running RabbitMQ instead of 127.0.0.1. The URL you provided for your settings.py file doesn't work, so I cannot see what you have in there.
Here's my CELERY_... settings:
# Celery settings
CELERY_BROKER_URL = 'amqp://user:TheUserName@rabbitmq'
CELERY_RESULT_BACKEND = 'redis://redis:6379/'
I set them to the container_name I use for each service, because my docker-compose file has these:
services:
  rabbitmq:
    ...
    container_name: rabbitmq
  redis:
    ...
    container_name: redis
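A slightly fuller sketch of how this fits together (the web service, images, and depends_on entries are assumptions; the point is that the Django container reaches the broker by the service name rabbitmq on the shared Compose network):

services:
  web:
    build: .
    depends_on:
      - rabbitmq
      - redis
  rabbitmq:
    image: rabbitmq:3-management
    container_name: rabbitmq
  redis:
    image: redis:6
    container_name: redis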
So, I have a Django project which has a background task for a method to run.
I made the following adjustments to my Procfile.
Initially
web: python manage.py collectstatic --no-input; gunicorn project.wsgi --log-file - --log-level debug
Now
web: python manage.py collectstatic --no-input; gunicorn project.wsgi --log-file - --log-level debug
worker: python manage.py process_tasks
In spite of adding the worker, when I deploy my project on Heroku it does not run the background task. The background task gets created and can be seen registered in Django admin, but it does not run. After reading various articles (one of them being https://medium.com/@201651034/background-tasks-in-django-and-heroku-58ac91bc881c) I hoped adding worker: python manage.py process_tasks would do the job, but it didn't.
If I execute heroku run python manage.py process_tasks in my Heroku CLI, it only runs on the data that was initially present in the database, and not on any new data I add after deployment.
Note: python manage.py process_tasks is what I use to get the background task to run on my local server.
So, I'd appreciate it if anyone could help me get the background task running after deployment on Heroku.
Your Procfile seems right; you need to scale your worker dyno using heroku scale worker=1 from the Heroku CLI, or you can also scale your worker dyno from the Heroku dashboard.
For scaling the worker dyno through the browser:
Visit https://dashboard.heroku.com/apps/<your-app-name>/resources
Edit your worker dyno, scale it from there, and confirm your changes.
In the CLI, use the command heroku logs -t -p worker to see the status and logs of the worker dyno.
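Putting those steps together in the CLI (run from the app's repo, or add -a <your-app-name>):

heroku ps:scale worker=1   # start one worker dyno
heroku ps                  # confirm the worker is up
heroku logs -t -p worker   # tail the worker's logs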
I've been stuck trying to get my simple Bottle app to start when deployed on Heroku.
After quite some searching and tinkering I've got a setup that works locally, but not on Heroku.
In /app.py:
import bottle
import beaker.middleware
from bottle import route, redirect, post, run, request, hook, template, static_file, default_app

bottle.debug(True)

# session_opts was not shown in the original post; a minimal example:
session_opts = {'session.type': 'memory', 'session.auto': True}
app = beaker.middleware.SessionMiddleware(bottle.app(), session_opts)
...
# app routes etc, no run()
Then in /Procfile:
web: gunicorn app:app --bind="0.0.0.0:$PORT" --debug
Correct me if I misunderstand how gunicorn works: I read the "app:app" portion as "look in the module (= file) called app.py and use whatever is in the variable app as your WSGI app", yes?
I've checked via $ heroku run bash that $PORT is set; seems OK.
The "0.0.0.0" IP I've got from other Heroku examples; that should accept any IP on the server end, no?
Python dependencies seem to get installed fine.
I've got this running locally by setting the $PORT variable via an .env file for foreman; everything seems to work OK on my setup.
Based on this SO question I checked $ heroku ps
=== web (1X): `gunicorn app:app --bind="0.0.0.0:$PORT" --debug`
web.1: crashed 2014/12/24 22:43:00 (~ 1m ago)
And $ heroku logs shows:
2014-12-24T20:42:59.235657+00:00 heroku[web.1]: Starting process with command `gunicorn app:app --bind="0.0.0.0:23177" --debug`
2014-12-24T20:43:00.434570+00:00 heroku[web.1]: State changed from starting to up
2014-12-24T20:43:01.813679+00:00 heroku[web.1]: State changed from up to crashed
2014-12-24T20:43:01.803122+00:00 heroku[web.1]: Process exited with status 3
I'm not really sure how to get better debugging output either. Somehow the Procfile web process just doesn't seem to work or start, but how can I get info on what's breaking?
Anybody got ideas what's going on here?
P.S.: I'm rather new to heroku, python, bottle & gunicorn :O
What version of gunicorn do you use?
gunicorn 19.1 doesn't write an error log by default. Try gunicorn --log-file=-.
-R is also a useful option for investigating errors.
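Applied to the asker's Procfile, that would look something like this (a sketch; the other flags are carried over unchanged, except --debug, which if I recall correctly newer gunicorn releases removed and which alone can make the process exit at startup):

web: gunicorn app:app --bind="0.0.0.0:$PORT" --log-file=- --log-level debug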
Can you try having just web: gunicorn app:app in your Procfile with nothing else?
How do I deploy a Flask application in IIS 8 (Windows Server 2012)? There are many partial explanations around, but nothing seems to work.
Just in case: I wouldn't do any of this in production for a complex and important app.
I'd go for a reverse proxy + gunicorn. That's what I do most of the time nowadays, but with nginx and on Linux machines. The problem here is that gunicorn doesn't support Windows for now (but support is planned). You do have the option to run your Flask app with gunicorn in Cygwin.
The other way around would be to try https://serverfault.com/questions/366348/how-to-set-up-django-with-iis-8, but instead of the Django-related stuff, and especially
from django.core.handlers.wsgi import WSGIHandler as DjangoHandler
you need your Flask paths and env variables and
from yourapplication import app as FlaskHandler
NB: instead of gunicorn you can try other launchers listed here. Maybe there's more luck with Twisted or Tornado on Windows.
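For example, a minimal Tornado wrapper around the Flask app might look like this (a sketch for the Tornado API of that era; yourapplication and the port are placeholders):

from tornado.wsgi import WSGIContainer
from tornado.httpserver import HTTPServer
from tornado.ioloop import IOLoop
from yourapplication import app  # your Flask/WSGI app object

# wrap the WSGI app so Tornado's event loop can serve it
http_server = HTTPServer(WSGIContainer(app))
http_server.listen(8000)
IOLoop.instance().start()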
Update: Gunicorn in Cygwin
I'm on Windows 7 64-bit with Cygwin 1.7.5 32-bit, Python version 2.6.8.
I had some issues running Flask with Cygwin 64-bit and Python 2.7, although gunicorn seemed to work OK.
You can get Cygwin here.
Packages I've installed:
nano
python 2.6.8
curl
Then I installed pip with:
$ curl https://bitbucket.org/pypa/setuptools/raw/bootstrap/ez_setup.py | python
$ easy_install pip
And then flask and gunicorn:
$ pip install flask gunicorn
I've made a simple app.py:
from flask import Flask
app = Flask(__name__)
@app.route("/")
def hello():
return "Hello World!"
if __name__ == "__main__":
app.run()
And run it with gunicorn:
$ gunicorn app:app
2013-11-27 16:21:53 [8836] [INFO] Starting gunicorn 18.0
2013-11-27 16:21:53 [8836] [INFO] Listening at: http://127.0.0.1:8000 (8836)
2013-11-27 16:21:53 [8836] [INFO] Using worker: sync
2013-11-27 16:21:53 [6140] [INFO] Booting worker with pid: 6140
After that you'll need to make your gunicorn app run as a Windows service, but I haven't done that part in a long time, so my memories are hazy :)
NB: I've found another option, https://code.google.com/p/modwsgi/wiki/InstallationOnWindows, if you are ready to try it.
I had more success with this method using FastCGI: http://codesmartinc.com/2013/04/12/running-django-in-iis7iis8/
Just use (yourModule).app instead of django.core.handlers.wsgi.WSGIHandler() for the WSGI_Handler variable.
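With wfastcgi-based setups the handler usually ends up in web.config; a hypothetical fragment, assuming your Flask app object is app inside app.py (key names per wfastcgi; the path is a placeholder):

<appSettings>
  <!-- module.variable pointing at the Flask app object -->
  <add key="WSGI_HANDLER" value="app.app" />
  <add key="PYTHONPATH" value="C:\inetpub\wwwroot\myflaskapp" />
</appSettings>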