Heroku: one request spawns two responses, crashing my app - python

In my Heroku Django app's user-account registration flow, there is a typical activate-account request that happens when the user receives an email with a special "activate" URL. In the app, this activation should happen only once. The only identifier in that URL is an activation token. The token is used to "activate" the account and also to identify the user (retrieve their username from the DB). Once the token is used, it is deleted.
For that reason, activation must happen only once. However, for some very odd reason, when the request is sent to my Heroku app, it triggers the activate function twice. I am quite sure this is not a programming mistake in the app, because the activation is not called twice on local development, nor on the staging environment (which is also on Heroku, with settings almost identical to production). It only happens in production.
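To make the flow concrete, here is a minimal sketch of what the activation view does (ActivationToken, its fields, and the view name are hypothetical stand-ins, not my actual code):

# views.py - hedged sketch of the single-use activation flow;
# ActivationToken is a hypothetical model with `token` and `user` fields
from django.http import HttpResponseGone
from django.shortcuts import redirect

def activate(request, token):
    record = ActivationToken.objects.filter(token=token).first()
    if record is None:
        # A second request with an already-consumed token lands here
        # instead of crashing with "'NoneType' object has no attribute ...".
        return HttpResponseGone("This activation link was already used.")
    user = record.user
    user.is_active = True
    user.save()
    record.delete()  # the token is single-use
    return redirect("/iro/dashboard")

Guarding the lookup like this makes a duplicate request fail gracefully, but it does not explain why the request arrives twice in the first place.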
Here is what I see in the Heroku logs:
http://pastebin.com/QeuP74fa
The first quarter of this log is the interesting part. Notice that at some point the activation succeeds and attempts to redirect the user to the next page: a GET request to /iro/dashboard. But then the activation request happens again, hence the crash you see, related to "'NoneType' object has no attribute ...".
My Procfile looks like this:
web: newrelic-admin run-program gunicorn --bind=0.0.0.0:$PORT --workers=1 --log-level=debug iroquote.wsgi:application
worker: python manage.py rqworker high default low
I had 2 web dynos and 1 worker dyno running when I found the bug. I tried scaling down to 1 web dyno and 1 worker: same bug. Then down to 0 dynos entirely, restarting with 1 web dyno and still 0 workers: same bug.
It might be something related to the Heroku router calling the dyno twice, or it might not.
Help?

It silently stopped happening. This was likely a routing problem on Heroku's side that affected my app.

Related

Flask flashes appear on development server, but not with uWSGI/Nginx

I have been learning Flask by making a little website, using the built-in Flask server that runs with Python. I have a page where you press a button and it flashes a message using Flask's flash system. These flashes work fine when I am using the built-in Flask server on my Windows machine. However, I have deployed the website to a Linux server, using uWSGI behind Nginx. My issue is that when I access this server, the flashes don't work. Most things, like loading pages, work fine on both servers, but flashing is broken. I don't see any error messages in uWSGI's logs.
The code I am using for the flash is implemented as follows:
from flask import flash, redirect, url_for
# ...inside the view function:
flash("Made new post.")
return redirect(url_for("posts"))
The redirect takes me to the correct page, and if I run a print() statement before the redirect, the statements are clearly being reached; the flash just doesn't do anything.
The other main issue I am running into is with sessions and trying to store session variables. Nothing happens when I try to do this either (but it works on my personal machine).
Any ideas why this might be, or at least a way to get an error message from uWSGI?
To properly set cookies (cookies are what make message flashing work), both Nginx and the Flask application need to agree on the server name.
So make sure your server_name in nginx.conf matches SERVER_NAME (or SESSION_COOKIE_DOMAIN, if set) in your Flask configuration.
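For instance, a minimal sketch of the Flask side, where example.com is a placeholder for your real domain:

# app.py - hedged sketch; "example.com" stands in for your real domain
from flask import Flask

app = Flask(__name__)
app.secret_key = "change-me"  # flash() stores messages in the signed session cookie
app.config["SERVER_NAME"] = "example.com"  # must match server_name in nginx.conf

The matching server block in nginx.conf would then carry server_name example.com;.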
There are also limits enforced by nginx on the size of cookies, but this should only be a problem if your flashed messages are really large.

Airflow Dag Statuses Inconsistent in Webserver

I have an Airflow cluster up, configured to use the CeleryExecutor and a Postgres backend.
For some reason, the statuses of the DAGs in the webserver UI are inconsistent every time I refresh. Each refresh shows something different, such as the DAG not being available in the webserver's DagBag object, black statuses, or the links on the right being hidden.
It changes on each refresh.
Here are a few screenshots:
Webserver UI 1
Webserver UI 2
Run the Airflow webserver in debug mode, then you can get this resolved:
airflow webserver -p <<port>> -d
The problem seems to be that dynamic code changes happen on new DAG creation, and the production-mode Flask server does not pick them up.
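If debug mode is not an option, note that the production webserver runs several gunicorn workers, each holding its own copy of the DagBag, so each refresh may hit a differently-synced worker. Running a single worker (at the cost of throughput) is a quick way to confirm this:
airflow webserver -p <<port>> -w 1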

readthedocs, local instance, building documentation never concludes

I started with a fresh install of readthedocs.org:
http://read-the-docs.readthedocs.org/en/latest/install.html
Next I added a SocialApp for GitHub in the admin panel and then connected my superuser to that GitHub account.
Then I went to GitHub and forked the readthedocs repository.
https://github.com/Drachenfels/Test-Fork
Next I clicked "import projects". The task never concludes, but when I refresh the page, the repos are there.
I picked the forked repository Test-Fork and clicked build.
The task never finishes; when I refresh or start another one, they are stuck in the state "Triggered". There is no error, nothing.
What is more, I am on the default configuration of readthedocs.
I have the following processes running in the background:
./manage.py runserver 9000
./manage.py celerybeat --verbosity=3
./manage.py celeryd -E
./manage.py celerycam
redis-server
Am I missing anything at this point?
It looks to me like, despite Celery being active and running, tasks are never initiated, killed, or errored.
The problem was not with Celery; tasks were running eagerly (which I suspected but was not sure of), so as soon as they were triggered they were executed.
The problem was that the task responsible for building documentation (update_docs) was failing silently. Thus the state "Triggered" never concluded and the build was never initiated. It turns out the error was my own fault: I ran the Django server on a different port than the one in the default settings. An exception was thrown, it was never logged, the state of the task was never updated, and readthedocs was left in limbo. I hope this helps some lost souls out there.
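For anyone else stuck here: with eager execution, Celery can swallow task exceptions unless told otherwise. A hedged sketch, using the old-style Celery setting names from that era (newer Celery renames them):

# settings.py - make eagerly-run tasks fail loudly
CELERY_ALWAYS_EAGER = True                 # execute tasks synchronously, in-process
CELERY_EAGER_PROPAGATES_EXCEPTIONS = True  # re-raise task exceptions instead of hiding them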

How can I change this Django Application

I was tasked with making some changes to a Django application. I've never worked with Django, and I am having trouble figuring out how to get my changes applied and available online.
What I know so far is that the application is currently available online. netstat tells me that httpd is listening on port 80. My change was made in the myapp/views.py file.
I tried to restart httpd using service httpd restart, but my changes did not take effect. I've been looking into the issue a bit, and I believe I need to run the development server with a command along these lines.
I tried calling python manage.py runserver MY.IP.AD.DR:8000 and I get:
python manage.py runserver MY.IP.AD.DR:8000
Validating models...
0 errors found
Django version 1.4.1, using settings 'cutsheets.settings'
Development server is running at http://MY.IP.AD.DR:8000/
Quit the server with CONTROL-C.
It's nice that no errors are found, but when I navigate to http://MY.IP.AD.DR:8000/ I just get an "Unable to connect" message from my browser. I tried port 81 too and had the same problem.
Without knowing exactly how your application is set up, I can't really say exactly how to solve this problem.
I can tell you that it's quite common to use two web servers with Django: one handles the static content and reverse-proxies everything else to a different port where the Django app is listening. Restarting the normal HTTP daemon therefore doesn't affect the Django app; you need to restart the server that runs the Django app. Until you do, the prior version of the code will keep running.
I generally use Nginx as my static server and Gunicorn to run the Django app, with Supervisor managing Gunicorn; this is a common setup. I recommend you take a look at the config for the main web server to see if it forwards anything to another port. If so, see what server is running on that port and restart it.
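Concretely, a first pass might look like this (the program name "gunicorn" is a stand-in; use whatever name your Supervisor config actually defines):

# Find where the front-end server forwards requests
grep -rn "proxy_pass" /etc/nginx/
grep -rn "ProxyPass" /etc/httpd/
# If Supervisor manages the Django process, list and restart it
sudo supervisorctl status
sudo supervisorctl restart gunicorn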
Also, is there a Fabric configuration (fabfile.py)? A lot of people use Fabric to automate Django deployments, and if there is one then there may be a command already defined for deploying.

Django Ldap authentication timed out

I am hosting a django-based site on a local machine (I have full access/control to it).
This site authenticates users against a remote active directory via the django ldap plugin.
Authenticating against the LDAP server used to work!
Now, when trying to authenticate against the LDAP server, the request just hangs until it times out. I couldn’t find anything useful in the logs.
The server setup is:
Nginx, Django 1.3, Fedora 15, MySQL 5.1.
I don't know which logs I should look at.
(I've tried the Nginx access and error logs, but to no avail.)
Things I tried:
Running the site with Django's development server and accessing it via localhost (not going through Nginx, but running python manage.py runserver directly). This works.
Running ldapsearch from the command line. This works.
Edit:
I used Wireshark to look at the back-and-forth with the LDAP server. The interaction seems fine: Django sends a bind request and receives a success message, then sends a search query and a user object is returned. However, after this exchange Django seems to hang. When I press Ctrl-C in the Django shell after running authenticate(username=user, password=pass), the stack trace is sitting somewhere in the django-ldap library.
Please help, I have no idea what changed that caused this problem.
Thank you in advance
Active Directory does not allow anonymous binds for authorization; you can bind anonymously but you cannot do anything else.
Check that the account used to bind with AD has valid credentials (i.e., that it hasn't expired). If it has, you'll get these strange errors.
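If the hang itself needs diagnosing, and assuming the site uses the django-auth-ldap plugin, the library can be made to log its steps and to time out instead of hanging indefinitely (the option names below come from django-auth-ldap and python-ldap):

# settings.py - surface django-auth-ldap activity and bound the wait
import ldap
import logging

AUTH_LDAP_CONNECTION_OPTIONS = {
    ldap.OPT_NETWORK_TIMEOUT: 10,  # turn an indefinite hang into an error after 10s
}

logger = logging.getLogger("django_auth_ldap")
logger.addHandler(logging.StreamHandler())
logger.setLevel(logging.DEBUG)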
