Difference between python-daemon and multiprocessing libraries - python

I need to run a daemon process from a Python Django module which will be running an xmlrpc server. The main process will host an xmlrpc client. I am a bit confused about creating, starting, stopping and terminating daemons in Python. I have seen two libraries, the standard multiprocessing module and python-daemon (https://pypi.python.org/pypi/python-daemon/1.6), but I don't quite understand which would be effective in my case. Also, when and how do I need to handle SIGTERM for my daemons? Can anybody help me understand this?

The multiprocessing module is designed as a drop-in replacement for the threading module, for the same kinds of tasks you would normally use threads for: speeding up execution by running on multiple cores, background polling, and any other job you want to run concurrently with something else. It's not designed to launch standalone daemon processes, so I don't think it's appropriate for your use case.
The python-daemon library is designed to "daemonize" the currently running Python process. I think what you want is to use the subprocess library from your main process (the xmlrpc client) to launch your daemon process (the xmlrpc server), using subprocess.Popen. Then, inside the daemon process, you can use the python-daemon library to become a daemon.
So in the main process, something like this:
import subprocess

subprocess.Popen(["python", "my_daemon.py", "-o", "some_option"])
And in my_daemon.py:
import daemon
...

def main():
    # Do normal startup stuff
    ...

if __name__ == "__main__":
    with daemon.DaemonContext():  # This makes the process a daemon
        main()
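As for SIGTERM: python-daemon's DaemonContext accepts a signal_map, so you can install your own handler if the default terminate behaviour isn't enough. A minimal sketch; shutdown() and the main() stub here are illustrative, not part of either library:
import signal
import daemon

def main():
    # stand-in for the xmlrpc server loop
    ...

def shutdown(signum, frame):
    # illustrative: stop the server cleanly, then exit
    raise SystemExit(0)

if __name__ == "__main__":
    # signal_map overrides the default SIGTERM handling
    with daemon.DaemonContext(signal_map={signal.SIGTERM: shutdown}):
        main()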

Related

Speeding up launch of processes using multiprocessing in case of Windows

I have a machine learning application in Python. And I'm using the multiprocessing module in Python to parallelize some of the work (specifically feature computation).
Now, multiprocessing works differently on Unix variants and on Windows.
Unix (Mac/Linux): fork / forkserver / spawn
Windows: spawn
(See also: Why multiprocessing.Process behaves differently on Windows and Linux for global objects and function arguments.)
Because spawn is used on Windows, launching multiprocessing processes is really slow: all the modules are loaded from scratch for each new process.
Is there a way to speed up the creation of the extra processes on Windows? (Using threads instead of multiple processes is not an option.)
Instead of creating multiple new processes each time, I highly suggest using concurrent.futures.ProcessPoolExecutor and leaving the executor open in the background.
That way, you don't create a new process each time, but rather leave the workers open in the background and pass them work through the module's functions, or via queues and pipes.
Bottom line - Don't create new processes each time. Leave them open and pass work.
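A minimal sketch of that approach (compute_features and the sample batches are placeholders for the real feature computation):
from concurrent.futures import ProcessPoolExecutor

def compute_features(sample):
    # placeholder for the real feature computation
    return [x * x for x in sample]

if __name__ == "__main__":
    batches = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
    # the pool is created once; its worker processes stay alive between calls,
    # so the expensive spawn cost on Windows is paid only once
    with ProcessPoolExecutor(max_workers=4) as executor:
        results = list(executor.map(compute_features, batches))
    print(results)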

uWSGI mules VS native Python threads

I have read the documentation of uWSGI mules, and some info from other sources. But I'm confused about the differences between mules and python threads. Can anyone explain to me, what can mules do that threads cannot and why do mules even exist?
A uWSGI mule can be thought of as a separate worker process which is not accessible via sockets (e.g. direct web requests). It executes an instance of your application and can be used for offloading tasks, for example via the mulefunc Python decorator. Also, as mentioned in the documentation, a mule can be configured to execute custom logic.
On the other hand, a thread runs in its parent's (the uWSGI worker's) address space. So if the worker dies or is reloaded, the thread behaves the same way. It can handle requests and can also execute specified tasks (functions) via the thread decorator.
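Offloading with mulefunc looks roughly like this (a sketch assuming uWSGI runs the app with at least one mule configured; send_report is just an illustrative task):
from uwsgidecorators import mulefunc

@mulefunc
def send_report(address):
    # runs inside a mule process, outside any HTTP request context
    print("sending report to", address)

# called from a regular worker during a request: the call returns immediately
# and the work is queued to a mule
send_report("admin@example.com")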
Python threads do not span multiple CPUs; roughly speaking, they can't use all of the CPU power. This is a limitation of the Python GIL (see: What is the global interpreter lock (GIL) in CPython?).
This is one of the reasons for using web servers: their job is to spawn a worker process, or reuse an idle one, for each task (HTTP request) received.
A mule works on the same principle, but is particular in that it is intended to run tasks outside of an HTTP request context. The idea is that you can reserve some mules, each running in a separate process (spanning multiple CPUs) just as regular workers do, but they don't serve any HTTP requests, only the tasks you set up as described in the uWSGI documentation.
Worth mentioning that mules are also monitored by the web server's master process, so they are respawned when they are killed or die.

Does asyncio support running a subprocess from a non-main thread?

I'm developing an application that mainly consists of services which are threads with custom run loops.
One of the services needs to spawn subprocesses and I don't really understand whether that is valid or not. The official documentation is ambiguous: in the same section it says both that asyncio "supports running subprocesses from different threads" and that "an event loop must run in the main thread".
How is it even possible to run a subprocess from a different thread if the event loop must run in the main thread?
The documentation says:
You should have a running event loop in the main thread.
In the main thread, call asyncio.get_child_watcher() at the start of the program.
After that you may create subprocesses from non-main threads.
UPD: Starting from Python 3.8, asyncio no longer has the limitations mentioned above. Everything just works.
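A sketch of spawning a subprocess from a non-main thread on POSIX (on 3.8+ this works as-is; on earlier versions the child-watcher call in the main thread is required, as described above):
import asyncio
import sys
import threading

async def run_child():
    # spawn and wait for a subprocess inside this thread's event loop
    proc = await asyncio.create_subprocess_exec("echo", "hello")
    await proc.wait()

def worker():
    # each non-main thread gets its own event loop via asyncio.run()
    asyncio.run(run_child())

if __name__ == "__main__":
    if sys.version_info < (3, 8):
        # pre-3.8 requirement: initialise the child watcher in the main thread
        asyncio.get_child_watcher()
    t = threading.Thread(target=worker)
    t.start()
    t.join()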

Python: How to Run multiple programs on same interpreter

How do I start an always-on Python interpreter on a server?
If bash starts multiple Python programs, how can I run them on just one interpreter?
And how can I start a new interpreter after tracking the number of bash requests; say, after X requests to Python programs, a new interpreter should start.
EDIT: Not a copy of https://stackoverflow.com/questions/16372590/should-i-run-1000-python-scripts-at-once?rq=1
Requests may come pouring in sequentially
You cannot have new Python programs started through bash run on the same interpreter; each program will always have its own. If you want to limit the number of Python programs running, the best approach would be to have a Python daemon process running on your server and, instead of creating a new program through bash on each request, signal the daemon process to create a thread to handle the task.
To run a program forever in Python:
while True:
    do_work()
You could look at spawning threads for incoming requests. Look at the threading.Thread class.
from threading import Thread

task = Thread(target=do_work, args=())
task.start()
You probably want to take a look at http://docs.python.org/3/library/threading.html and http://docs.python.org/3/library/multiprocessing.html. threading is more lightweight, but only one thread executes Python code at a time (meaning it won't take advantage of multicore/hyperthreaded systems), while multiprocessing allows true simultaneous execution but can be a bit heavier than threading on systems without lightweight subprocesses. It may also be unnecessary if the threads/processes spend most of their time doing I/O.
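For a rough sense of the difference on CPU-bound work (cpu_heavy is just an illustrative stand-in):
from multiprocessing import Pool

def cpu_heavy(n):
    # CPU-bound placeholder: with threading the GIL would serialise these calls,
    # with multiprocessing they can run on separate cores
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        print(pool.map(cpu_heavy, [10**6] * 4))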

How create threads under Python for Delphi

I'm hosting a Python script with Python for Delphi components inside my Delphi application. I'd like to create background tasks, started by the script, that keep running.
Is it possible to create threads that keep running even if the script execution ends (but not the host process, which keeps going)? I've noticed that the program gets stuck if the executing script ends while a thread is still running. However, if I wait until the thread is finished, everything goes fine.
I'm trying to use the standard "threading" module for the threads.
Python has its own threading module that comes standard, if it helps. You can create thread objects using the threading module.
threading Documentation
thread Documentation
The thread module offers low level threading and synchronization using simple Lock objects.
Again, not sure if this helps since you're using Python under a Delphi environment.
If a process dies, all its threads die with it, so a solution might be a separate process.
See if creating an xmlrpc server might help you; it is a simple solution for interprocess communication.
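As a rough sketch of that idea using only the standard library (the function name and port are illustrative):
from xmlrpc.server import SimpleXMLRPCServer

def do_background_work(payload):
    # long-running work lives in this separate process, not in the script's threads
    return "done: %s" % payload

server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True)
server.register_function(do_background_work)
server.serve_forever()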
Threads by definition are part of the same process. If you want them to keep running, they need to be forked off into a new process; see os.fork() and friends.
You'll probably want the new process to end (via exit() or the like) immediately after spawning the script.
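A minimal sketch of detaching the work with os.fork() (POSIX only; background_task is a hypothetical long-running job):
import os
import time

def background_task():
    # hypothetical long-running job
    time.sleep(60)

pid = os.fork()
if pid == 0:
    # child: keeps running even after the parent script has finished
    background_task()
    os._exit(0)
else:
    # parent: returns control immediately
    print("spawned background process", pid)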
