How to create threads under Python for Delphi - python

I'm hosting Python scripts with the Python for Delphi components inside my Delphi application. I'd like the script to create background tasks that keep running.
Is it possible to create threads which keep running even if the script execution ends (but not the host process, which keeps going)? I've noticed that the program gets stuck if the executing script ends while a thread is still running. However, if I wait until the thread is finished, everything goes fine.
I'm trying to use the standard "threading" module for the threads.

Python has its own threading module that comes standard, if it helps. You can create thread objects using the threading module.
threading Documentation
thread Documentation
The thread module offers low-level threading and synchronization using simple Lock objects. (In Python 3 this module was renamed _thread.)
Again, not sure if this helps, since you're running Python under a Delphi environment.
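A minimal sketch of the standard-library API the answer refers to; the task name and print output are purely illustrative:

```python
import threading

def background_task(name):
    # Illustrative worker: report which thread it runs on
    print(name, "ran in", threading.current_thread().name)

t = threading.Thread(target=background_task, args=("worker-1",))
t.start()
t.join()  # block until the thread finishes
```

Note that this mirrors the asker's symptom: if the host tears down the interpreter without joining (or daemonizing) the thread, shutdown can hang.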

If a process dies, all its threads die with it, so a solution might be a separate process.
See if creating an XML-RPC server might help you; it's a simple solution for interprocess communication.
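A minimal sketch of that idea using the standard library's xmlrpc modules. In a real setup the server would live in the separate worker process; a daemon thread is used here only to keep the example self-contained, and the add function and ephemeral port are illustrative:

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def add(a, b):
    # Stand-in for real work done by the background process
    return a + b

# Bind to port 0 so the OS picks a free port
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(add, "add")
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "host" side talks to the worker over XML-RPC
client = ServerProxy("http://127.0.0.1:%d" % port)
result = client.add(2, 3)
print(result)
server.shutdown()
```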

Threads by definition are part of the same process. If you want them to keep running, they need to be forked off into a new process; see os.fork() and friends.
You'll probably want the new process to end (via exit() or the like) immediately after spawning the script.
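A minimal, Unix-only sketch of the fork idea (os.fork is not available on Windows, which may matter for a Delphi host); the child's work here is just a print:

```python
import os

# Unix-only: fork so the work continues in a separate process
pid = os.fork()
if pid == 0:
    # Child process: runs independently of the caller's script
    print("child pid:", os.getpid())
    os._exit(0)
else:
    # Parent: reap the child so no zombie is left behind,
    # then return control to the host
    os.waitpid(pid, 0)
    print("forked child", pid)
```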

Related

Speeding up launch of processes using multiprocessing in case of Windows

I have a machine learning application in Python, and I'm using the multiprocessing module to parallelize some of the work (specifically feature computation).
Now, multiprocessing works differently on Unix variants and on Windows:
Unix (mac/linux): fork/forkserver/spawn
Windows: spawn
Why multiprocessing.Process behave differently on windows and linux for global object and function arguments
Because spawn is used on Windows, launching multiprocessing processes is really slow: each new process re-imports all the modules from scratch.
Is there a way to speed up the creation of the extra processes on Windows? (using threads instead of multiple processes is not an option)
Instead of creating new processes each time, I highly suggest using concurrent.futures.ProcessPoolExecutor and leaving the executor open in the background.
That way you don't pay the process-creation cost on every task; the workers stay alive in the background and you pass them work through the module's functions, or via queues and pipes.
Bottom line: don't create new processes each time. Leave them open and pass work.

How to restart python script if process hangs/crashes

I have simple Python code which uses two processes: the main process and another created by the multiprocessing module. Both processes run in infinite loops. I want my Python code to never crash/hang/freeze. I've already handled most of the errors/exceptions. FYI, it's an IoT project and I'm running this code as a launcher from /etc/rc.local. I tried using the pid module from Python as given here.
According to that link, the pid module works as below:
from pid import PidFile

with PidFile():
    do_something()
My question is: does the above logic meet my requirements, or do I need to add more logic, like checking for the existence of the pid file and then deciding to kill/stop/restart the process (or the code itself) if either of the two processes freezes due to a bug in the code?
Please suggest any other way to achieve this if the pid module is not suitable for my requirement.
Hi, I resolved this issue by creating separate Python scripts for both tasks instead of using multiprocessing features such as a queue. I suggest not using a multiprocessing queue inside infinite loops, as it froze my processes after some time.
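For the restart requirement itself, a simple supervisor loop in the parent is one alternative to checking a pid file; the worker body and the restart limit below are placeholders (a real worker would run until it crashes, and the supervisor would loop forever):

```python
import multiprocessing
import time

def worker():
    # Stand-in for the real infinite loop; exits quickly so the sketch terminates
    time.sleep(0.05)

def supervise(max_restarts=3):
    # Relaunch the worker every time it exits or crashes
    runs = 0
    while runs < max_restarts:
        p = multiprocessing.Process(target=worker)
        p.start()
        p.join()  # returns when the worker dies, however it dies
        runs += 1
    return runs

if __name__ == "__main__":
    print(supervise())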

Does asyncio support running a subprocess from a non-main thread?

I'm developing an application that mainly consists of services, which are threads with custom run loops.
One of the services needs to spawn subprocesses, and I don't really understand whether that is valid or not. The official documentation is ambiguous: in the same section it says both that asyncio supports running subprocesses from different threads and that an event loop must run in the main thread.
How is it even possible to run a subprocess from a different thread if the event loop must run in the main thread?
The documentation says:
You should have a running event loop in the main thread.
In the main thread, call asyncio.get_child_watcher() at the start of the program.
After that you may create subprocesses from non-main threads.
UPD
Starting from Python 3.8, asyncio no longer has the limitations mentioned above.
Everything just works.
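A sketch of the post-3.8 behaviour: the event loop runs in the main thread, and a non-main thread submits a subprocess-spawning coroutine to it with asyncio.run_coroutine_threadsafe (the echo command assumes a POSIX system):

```python
import asyncio
import threading

async def spawn_echo():
    # Executes on the loop in the main thread, but is *submitted* from a worker
    proc = await asyncio.create_subprocess_exec(
        "echo", "hello", stdout=asyncio.subprocess.PIPE
    )
    out, _ = await proc.communicate()
    return out.decode().strip()

def service_thread(loop, results):
    # Non-main thread: hand the coroutine to the main-thread loop
    fut = asyncio.run_coroutine_threadsafe(spawn_echo(), loop)
    results.append(fut.result(timeout=10))

async def main():
    loop = asyncio.get_running_loop()
    results = []
    t = threading.Thread(target=service_thread, args=(loop, results))
    t.start()
    while t.is_alive():
        await asyncio.sleep(0.02)  # keep the loop responsive while waiting
    t.join()
    return results[0]

output = asyncio.run(main())
print(output)
```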

Difference between python-daemon and multiprocessing libraries

I need to run a daemon process from a Python Django module which will be running an XML-RPC server. The main process will host an XML-RPC client. I am a bit confused about creating, starting, stopping and terminating daemons in Python. I have seen two libraries, the standard Python multiprocessing module and python-daemon (https://pypi.python.org/pypi/python-daemon/1.6), but I don't quite understand which would be effective in my case. Also, when and how do I need to handle SIGTERM for my daemons? Can anybody help me understand this, please?
The multiprocessing module is designed as a drop-in replacement for the threading module. It's designed for the same kinds of tasks you'd normally use threads for: speeding up execution by running on multiple cores, background polling, and any other task that you want running concurrently with some other task. It's not designed to launch standalone daemon processes, so I don't think it's appropriate for your use case.
The python-daemon library is designed to "daemonize" the currently running Python process. I think what you want is to use the subprocess library from your main process (the xmlrpc client) to launch your daemon process (the xmlrpc server), using subprocess.Popen. Then, inside the daemon process, you can use the python-daemon library to become a daemon.
So in the main process, something like this:
subprocess.Popen(["my_daemon.py", "-o", "some_option"])
And in my_daemon.py:
import daemon
...

def main():
    # Do normal startup stuff
    ...

if __name__ == "__main__":
    with daemon.DaemonContext():  # This makes the process a daemon
        main()

Python: How to Run multiple programs on same interpreter

How do I start an always-on Python interpreter on a server?
If bash starts multiple Python programs, how can I run them all on just one interpreter?
And how can I start a new interpreter after tracking the number of bash requests, say, after X requests to Python programs a new interpreter should start.
EDIT: Not a copy of https://stackoverflow.com/questions/16372590/should-i-run-1000-python-scripts-at-once?rq=1
Requests may come pouring in sequentially
You cannot have new Python programs started through bash run on the same interpreter; each program will always have its own. If you want to limit the number of Python programs running, the best approach would be to have a Python daemon process running on your server. Instead of creating a new program through bash on each request, you would signal the daemon process to create a thread to handle the task.
To run a program forever in Python:

while True:
    do_work()
You could look at spawning threads for incoming requests; see the threading.Thread class.

from threading import Thread

task = Thread(target=do_work, args=())
task.start()
You probably want to take a look at http://docs.python.org/3/library/threading.html and http://docs.python.org/3/library/multiprocessing.html. Threading is more lightweight, but only allows one thread to execute Python bytecode at a time, so it won't take advantage of multicore/hyperthreaded systems for CPU-bound work. Multiprocessing allows true simultaneous execution, but it is heavier than threading and may not be necessary if the threads/processes spend most of their time waiting on I/O.
