uwsgi attach-daemon before python process starts

I have a separate process that I want to run alongside the Python process I have managed by uWSGI. I wanted to use the attach-daemon option to start this process, but it seems that the command specified in attach-daemon does not get called until after the Python app has started up. However, I need the process to be running before the Python process starts up in order for everything to run correctly. Is there any way to specify the order in which things get started? I don't even need to use attach-daemon if there's a simpler way to initialize a set of managed processes in a defined order.

Use --lazy-apps; this way the app will be loaded by each worker after the master has been fully spawned (and its external daemons have been started).
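As a minimal sketch of how the two options combine in an ini config (the module name, process count, and helper command are placeholders, not from the original question):

[uwsgi]
module = myapp:application
master = true
processes = 4
# started and supervised by the master
attach-daemon = /usr/local/bin/my-helper --serve
# load the app in each worker, after the master
# (and its attached daemons) are already up
lazy-apps = true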

Related

How can I keep my python-daemon process running or restart it on failure?

I have a python3.9 script I want to have running 24/7. In it, I use python-daemon to keep it running like so:
import daemon

with daemon.DaemonContext():
    %%script%%
And it works fine, but after a few hours or days it just crashes randomly. I always start it with sudo, but I can't seem to figure out where to find the daemon process's log file for debugging. What can I do to ensure logging? How can I keep the script running or auto-restart it after a crash?
You can find the full code here.
If you really want to run a script 24/7 in the background, the cleanest and easiest way to do it is surely to create a systemd service.
There are already many descriptions of how to do that, for example here.
One of the advantages of systemd, in addition to being able to launch a service at startup, is being able to restart it after a failure:
Restart=on-failure
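As a sketch, a minimal unit file could look like this (the unit name and paths are placeholders; adjust them to your setup):

# /etc/systemd/system/myscript.service
[Unit]
Description=My 24/7 Python script
After=network.target

[Service]
ExecStart=/usr/bin/python3 /path/to/script.py
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

Enable and start it with systemctl enable --now myscript. This also answers the logging question: the script's stdout/stderr go to the journal, which you can read with journalctl -u myscript.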
If all you want to do is automatically restart the program after a crash, the easiest method would probably be to use a bash script.
You can use an until loop, which executes a given set of commands for as long as the given condition evaluates to false.
#!/bin/bash
until python /path/to/script.py; do
    echo "The program crashed at $(date +%H:%M:%S). Restarting the script..."
done
If the command returns a non-zero exit status, the script is restarted.
I would start with familiarizing myself with these two questions:
How to make a Python script run like a service or daemon in Linux
Run a python script with supervisor
It looks like you need a supervisor that will make sure your script/daemon is still running. You can take a look at supervisord.
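A minimal program section for supervisord might look like this (the program name and paths are placeholders):

; /etc/supervisor/conf.d/myscript.conf
[program:myscript]
command=python3 /path/to/script.py
autostart=true
autorestart=true
stdout_logfile=/var/log/myscript.out.log
stderr_logfile=/var/log/myscript.err.log

autorestart=true makes supervisord bring the script back up whenever it exits, and the two logfile options capture its output for debugging.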

How to restart python script if process hangs/crashes

I have simple Python code that uses two processes: the main process and a second one created with the multiprocessing module. Both processes run in infinite loops. I want my Python code to never crash/hang/freeze, and I've already handled most of the errors/exceptions. FYI, it's an IoT project and I'm running this code as a launcher in /etc/rc.local. I tried using the pid module from Python as given here.
According to that link, the pid module works as below:
from pid import PidFile

with PidFile():
    do_something()
My question is: does the above logic meet my requirements, or do I need to add more logic, such as checking for the existence of the pid file and then deciding to kill/stop/restart the process (or the code itself) if either of the two processes freezes due to a bug in the code?
Please suggest any other way to achieve this if the pid module is not suitable for my requirement.
I resolved this issue by creating separate Python scripts for the two tasks rather than using multiprocessing features such as Queue. I suggest not using a multiprocessing Queue inside an infinite loop, as in my case it froze the process(es) after some time.

Threads not being executed under supervisord

I am working on a basic crawler which crawls 5 websites concurrently using threads.
For each site it creates a new thread. When I run the program from the shell, the output log indicates that all 5 threads run as expected.
But when I run this program under supervisord, the log indicates that only 2 threads are run every time! The log shows that all 5 threads have started, but only the same two of them execute and the rest get stuck.
I cannot understand why this inconsistency happens between running from a shell and running from supervisor. Is there something I am not taking into account?
Here is the code which creates the threads:
for sid in entries:
    url = entries[sid]
    threading.Thread(target=self.crawl_loop, args=(sid, url)).start()
UPDATE:
As suggested by tdelaney in the comments, I changed the working directory in the supervisord configuration and now all the threads run as expected. I still don't understand why setting the working directory to the crawler's directory rectifies the issue. Perhaps someone who knows how supervisord manages processes can explain?
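One likely explanation: supervisord does not start programs in your script's own directory by default, so anything the crawler opens with a relative path resolves against the wrong working directory. The directory option in the program section controls this (paths below are placeholders):

[program:crawler]
command=python /path/to/crawler.py
; the cwd the crawler's relative paths expect
directory=/path/to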
AFAIK, Python threads can't run truly in parallel because of CPython's Global Interpreter Lock; threading just gives you a facility to simulate simultaneous execution of the code. Your code will still use only one core.
https://wiki.python.org/moin/GlobalInterpreterLock
https://en.wikibooks.org/wiki/Python_Programming/Threading
Therefore it is possible that it does not really run more threads in parallel. I think you should use multiprocessing instead:
https://docs.python.org/2/library/multiprocessing.html
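If you go that route, the change from the threading code above is mostly mechanical; a hypothetical sketch (entries and crawl_loop stand in for the question's own names):

import multiprocessing

def crawl_loop(sid, url):
    ...  # crawl one site in its own process

if __name__ == '__main__':
    entries = {'site1': 'http://example.com/'}  # placeholder data
    for sid, url in entries.items():
        multiprocessing.Process(target=crawl_loop, args=(sid, url)).start()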
I was having the same silent problem, but then realised that I was setting daemon to True, which was causing problems under supervisor.
https://docs.python.org/2/library/threading.html#threading.Thread.daemon
So the answer is: daemon = True when running the script yourself, False when running under supervisor.
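For reference, the flag is set on the Thread object before start(); a minimal sketch (crawl_loop is a placeholder for the real worker):

import threading

def crawl_loop():
    ...  # long-running work

t = threading.Thread(target=crawl_loop)
# Daemon threads are killed abruptly as soon as the main thread exits;
# non-daemon threads keep the process alive until they finish.
t.daemon = False
t.start()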
Just to say, I was experiencing a very similar problem.
In my case, I was working on a low-powered machine (a Raspberry Pi), with threads dedicated to listening to a serial device (an Arduino Nano on /dev/ttyUSB0). The code worked perfectly on the command line, but the serial-reading thread stalled under supervisor.
After a bit of hacking around (and trying all of the options here), I tried running python in unbuffered mode and managed to solve the issue! I got the idea from https://stackoverflow.com/a/17961520/741316.
In essence, I simply invoked python with the -u flag.
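Under supervisord that just means adding the flag to the command (the program name and path are placeholders); setting PYTHONUNBUFFERED=1 in the environment is an equivalent alternative:

[program:serial_reader]
command=python -u /path/to/reader.py
; equivalently: environment=PYTHONUNBUFFERED=1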

Using python, how do I launch an independent python process

I am making a Python program, let's say A, which is used to monitor Python script B.
When program A shuts down, an exit function registered via atexit.register() does some cleanup; as part of that it needs to re-run Python script B, which must stay running even after script A has shut down.
Python script B can't be part of Python script A.
What do I need to do to make that happen? I have already tried a few things, like using subprocess.Popen(programBCommand), but that doesn't seem to work, as it prevents A from shutting down.
I am using a Debian operating system.
If script B needs to be launched by script A, and continue running whether or not A completes (and not prevent A from exiting), you're looking at writing a UNIX daemon process. The easiest way to do this is to use the python-daemon module to make script B daemonize itself without a lot of explicit mucking about with the details of changing the working directory, detaching from the parent, etc.
Note: The process of daemonizing, UNIX-style, detaches from the process that launched it, so you couldn't directly monitor script B from script A through the Popen object (it would appear to exit immediately). You'd need to arrange some other form of tracking, e.g. identifying or communicating the pid of the daemonized process to script A by some indirect method.
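A minimal sketch of that arrangement, assuming the python-daemon package is installed (script names and the pidfile path are placeholders): script B daemonizes itself and records its own pid, which script A can read later to check on it.

# b.py -- detaches itself, so it survives whoever launched it
import daemon
from daemon import pidfile

def main_loop():
    ...  # script B's real work

with daemon.DaemonContext(
        pidfile=pidfile.TimeoutPIDLockFile('/tmp/script_b.pid')):
    main_loop()

Script A can then launch it with subprocess.Popen(['python', 'b.py']) and exit freely; the Popen call returns almost immediately because B detaches, and A (or its next run) can read /tmp/script_b.pid and probe the process with os.kill(pid, 0) to see whether B is still alive.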

How do I create multiple processes that will pull messages from rabbitmq?

Say I have a Python script that pulls messages off a queue and processes them:
process_queue_emails.py
Now I want to somehow run multiple processes of this file at once. How would I do that? I need them to run in the background, and I'm guessing on a separate port? (This is on Ubuntu.)
So I want to write messages to the queue in my web application, and then I want these worker processes (the .py file above) to receive and respond to the messages in parallel, i.e. I need to run them in their own processes.
The zdaemon module can be used to write daemonized processes. Or you can look into Python's multiprocessing module. An alternative is using supervisord to start arbitrary scripts or programs as daemons.
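With supervisord, the numprocs option runs N identical copies of one program section, which is exactly the multiple-worker setup described; a sketch (the program name and path are placeholders):

[program:email_worker]
command=python /path/to/process_queue_emails.py
process_name=%(program_name)s_%(process_num)02d
numprocs=4
autostart=true
autorestart=true

Note that no separate ports are needed: each worker opens its own connection to RabbitMQ, and the broker distributes the queue's messages across the consumers.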
