Django from Virtualenv Multiple processes running - python

I'm running a local Django development server inside a virtualenv, and for the past couple of days it has been behaving strangely: sometimes I see logs in the console, sometimes I don't.
A couple of times I've tried to quit the process and restart it, only to get a "port already in use" error; when I inspected the running processes, there was still an instance of Django running.
Other SO answers say this is due to the autoreload feature, but then why do I sometimes have no problem at all and sometimes do?
Anyway, out of curiosity I ran ps aux | grep python, and the result is always TWO running processes, one from python and one from my activated virtualenv's python:
/Users/me/.virtualenvs/myvirtualenv/bin/python manage.py runserver
python manage.py runserver
Is this supposed to be normal?

I've solved the mystery: Django was trying to send emails but couldn't because of improper configuration, so it was hanging there forever trying to send them.
Most probably (I'm not sure here) Django calls an OS function or a subprocess to do so. The point is that the main process was forking itself and handing the job to a subprocess or thread, or whatever; I'm no expert in this.
It turns out that when your Python process is forked and you kill the parent, the children can apparently keep on living after it.
Correct me if I'm wrong.
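For what it's worth, the orphaning behaviour described above is easy to reproduce with a minimal sketch (assuming a Unix-like OS; os.fork is not available on Windows):

import os
import time

pid = os.fork()
if pid == 0:
    # child: keeps running even after the parent is gone
    time.sleep(30)
    print("child still alive; current parent pid:", os.getppid())
else:
    # parent: exits immediately, orphaning the child
    print("parent exiting; child pid:", pid)

After the parent exits, the child is reparented (to init, or launchd on macOS) and keeps running, which matches the leftover runserver instance described above.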

Related

I need to Ctrl-C/Pause after every manage.py command in Django

No, I'm not talking about runserver listening for requests.
Whenever I run makemigrations, migrate, or scripts I wrote using the django-extensions command runscript, I need to stop the execution of the program before I can type in another. This was not the case before restarting my PC this morning.
I'm building a small QR code ticketing app, and it was working up until this morning. I've since fixed a bug regarding opencv and the app is functional again, but this command-line issue is bothering me. I'm going to have to make the /admin and the scripts available to a few of my colleagues tomorrow, and I'm afraid we won't know when a running script is actually done, since it doesn't prompt or allow another command. Having a script's execution terminated early because of this would be catastrophic.
Whenever I run a command, a blank line that doesn't accept input appears.
There are way too many possible reasons why this could have happened, and it's impossible for anyone else to reproduce your scenario based only on what you described. This isn't a widespread issue, so it must be something particular to your setup.
Run a debugger, like pudb or pdb, to find out where the script is stuck.
Or add a try-except block to your manage.py to catch KeyboardInterrupt, and use the traceback library to narrow down where it's stuck.
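As an illustration of that second suggestion, a minimal sketch against the stock manage.py layout (the settings module name below is hypothetical):

import os
import sys
import traceback

if __name__ == "__main__":
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")  # hypothetical settings path
    try:
        from django.core.management import execute_from_command_line
        execute_from_command_line(sys.argv)
    except KeyboardInterrupt:
        # Ctrl-C lands here; the traceback shows where the command was hanging
        traceback.print_exc()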

How can I keep my python-daemon process running or restart it on fail?

I have a python3.9 script I want to have running 24/7. In it, I use python-daemon to keep it running like so:
import daemon

with daemon.DaemonContext():
    %%script%%
And it works fine, but after a few hours or days it just crashes randomly. I always start it with sudo, but I can't seem to figure out where to find the log file of the daemon process for debugging. What can I do to ensure logging? How can I keep the script running or auto-restart it after a crash?
You can find the full code here.
If you really want to run a script 24/7 in background, the cleanest and easiest way to do it would surely be to create a systemd service.
There are already many descriptions of how to do that, for example here.
One of the advantages of systemd, in addition to being able to launch a service at startup, is being able to restart it after a failure:
Restart=on-failure
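A minimal unit file sketch (the unit name, interpreter, and paths below are assumptions, not from the question), saved e.g. as /etc/systemd/system/myscript.service:

[Unit]
Description=My 24/7 python script

[Service]
ExecStart=/usr/bin/python3 /path/to/script.py
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

After systemctl enable --now myscript.service, the unit starts at boot and restarts on failure, and its output goes to the journal (journalctl -u myscript), which also covers the logging question. Note that under systemd the script should stay in the foreground, so the DaemonContext wrapper is no longer needed.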
If all you want to do is automatically restart the program after a crash, the easiest method would probably be to use a bash script.
You can use an until loop, which executes a given set of commands as long as its condition evaluates to false:
#!/bin/bash
until python /path/to/script.py; do
echo "The program crashed at `date +%H:%M:%S`. Restarting the script..."
done
If the command returns a non-zero exit status, the script is restarted.
I would start with familiarizing myself with those two questions:
How to make a Python script run like a service or daemon in Linux
Run a python script with supervisor
Looks like you need a supervisor that will make sure that your script/daemon is still running. You can take a look at supervisord.
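For reference, a minimal supervisord program section sketch (program name and paths are assumptions):

[program:myscript]
command=/usr/bin/python3 /path/to/script.py
autorestart=true
stdout_logfile=/var/log/myscript.out.log
stderr_logfile=/var/log/myscript.err.log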

Threads not being executed under supervisord

I am working on a basic crawler which crawls 5 websites concurrently using threads.
For each site it creates a new thread. When I run the program from the shell, the output log indicates that all 5 threads run as expected.
But when I run this program under supervisord, the log indicates that only 2 threads are being run each time! The log shows that all 5 threads have started, but only the same two of them actually execute and the rest get stuck.
I cannot understand why this inconsistency happens between running from a shell and running from supervisor. Is there something I am not taking into account?
Here is the code which creates the threads:
for sid in entries:
    url = entries[sid]
    threading.Thread(target=self.crawl_loop, args=(sid, url)).start()
UPDATES:
As suggested by tdelaney in the comments, I changed the working directory in the supervisord configuration and now all the threads run as expected. Though I still don't understand why setting the working directory to the crawler file's directory fixes the issue. Perhaps someone who knows how supervisor manages processes can explain?
AFAIK Python threads can't run truly in parallel because of the Global Interpreter Lock. Threading just gives you a facility to simulate simultaneous execution of the code; your code will still use only one core.
https://wiki.python.org/moin/GlobalInterpreterLock
https://en.wikibooks.org/wiki/Python_Programming/Threading
Therefore it is possible that it does not spawn more processes/threads.
You should use multiprocessing, I think? See the sketch below.
https://docs.python.org/2/library/multiprocessing.html
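If you go that route, here is a minimal sketch of the same fan-out with multiprocessing (entries and crawl_loop are stand-ins for the question's code):

from multiprocessing import Process

def crawl_loop(sid, url):
    # placeholder for the question's worker
    print("crawling %s: %s" % (sid, url))

if __name__ == "__main__":
    entries = {"site1": "https://example.com"}  # hypothetical input
    procs = [Process(target=crawl_loop, args=(sid, url)) for sid, url in entries.items()]
    for p in procs:
        p.start()
    for p in procs:
        p.join()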
I was having the same silent problem, but then realised that I was setting daemon to true, which was causing supervisor problems.
https://docs.python.org/2/library/threading.html#threading.Thread.daemon
So the answer is: daemon = True when running the script yourself, False when running under supervisor.
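In code, that is just the daemon attribute on the thread (the worker function here is hypothetical):

import threading
import time

def worker():
    time.sleep(1)  # hypothetical thread body

t = threading.Thread(target=worker)
t.daemon = False  # non-daemonic: the interpreter waits for the thread to finish
t.start()
t.join()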
Just to say, I was just experiencing a very similar problem.
In my case, I was working on a low powered machine (RaspberryPi), with threads that were dedicated to listening to a serial device (an Arduino nano on /dev/ttyUSB0). Code worked perfectly on the command line - but the serial reading thread stalled under supervisor.
After a bit of hacking around (and trying all of the options here), I tried running python in unbuffered mode and managed to solve the issue! I got the idea from https://stackoverflow.com/a/17961520/741316.
In essence, I simply invoked python with the -u flag.
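In a supervisord config that would look something like this (program name and path are assumptions):

[program:crawler]
command=python -u /path/to/crawler.py

Setting the PYTHONUNBUFFERED=1 environment variable has the same effect as the -u flag.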

Multiple python processes in Foreman stop logging

I am running two python processes (clock and web) through Heroku and Foreman.
When I run locally with Foreman:
1. Both processes log to terminal
2. Then the clock process stops outputting (even though it's still running). This halting of the output doesn't happen at a consistent place in the code, but usually somewhere between 3-5 iterations.
3. The web process continues to output correctly.
Oddly enough, when I run the same code on Heroku, the logs output just fine.
We have PYTHONUNBUFFERED set to true locally (with .env) and on Heroku. Has anybody come across this issue? Is there a solution to it? Thanks.
I couldn't fix this problem with Foreman, but I did come up with a solution. There is a Python port of Foreman called honcho. I've switched to honcho and it fixed my logging/freezing issue.
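Honcho reads the same Procfile as Foreman, so the switch is a drop-in; for example (the process names come from the question, the script names are assumptions):

web: python web.py
clock: python clock.py

Then run honcho start instead of foreman start.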

have python process run similar to how redis process does (in the background)

I asked this question on Superuser, but haven't gotten a response. Maybe here would have been more appropriate.
When I start my redis server with redis-server, the process will still be there when I log back in, even after I close the terminal or log out; but my python bottle server, python server.py, will turn off if I close the terminal or log out. How do I get behavior similar to redis for my python process?
The easy way is to run the process through screen or tmux.
You could also try doing something with e.g. python-daemon on Unix, or various other approaches for running daemons.
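For example, either of these starts the server in a detached session you can reattach to later (the session names are arbitrary):

screen -dmS bottle python server.py
tmux new-session -d -s bottle 'python server.py'

nohup python server.py & achieves something similar without a reattachable session.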
