How to find the call to fork in my Python program

Some module in my Python program is calling fork(), and my MPI environment is unhappy with this:
A process has executed an operation involving a call to the "fork()"
system call to create a child process. Open MPI is currently
operating in a condition that could result in memory corruption or
other system errors; your job may hang, crash, or produce silent data
corruption. The use of fork() (or system() or other calls that create
child processes) is strongly discouraged.
The process that invoked fork was:
Local host:
If you are absolutely sure that your application will successfully
and correctly survive a call to fork(), you may disable this warning
by setting the mpi_warn_on_fork MCA parameter to 0.
The program still runs but the output is garbage.
I'm not sure if the call to fork is through os.system - is that the only way Python will ever call fork? I didn't write many of these modules myself; is there some tool I can use to figure out what line is generating that warning?
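One way to locate the call site, assuming the fork happens at the Python level rather than inside a C extension, is to monkey-patch os.fork at the very top of the main script so that every call prints a stack trace (traced_fork is just an illustrative name):

    import os
    import traceback

    # Debugging shim: wrap os.fork so every call prints the Python stack
    # that led to it. This catches forks made through os.fork, e.g. the
    # multiprocessing "fork" start method.
    _real_fork = os.fork

    def traced_fork():
        traceback.print_stack()
        return _real_fork()

    os.fork = traced_fork

Note that os.system and subprocess fork inside C code, so they will not pass through this wrapper; for those, running the job under strace -f can still show where child processes are created.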

Related

Debugging Python Multiprocessing Won't Shut Down

I am working on a rather complex Python multiprocessing codebase. It is an IoT-type problem where multiple processes need to be active simultaneously to receive data. There is no set kill flag / kill condition (time, number of jobs, etc.). Instead, a kill is accomplished by switching a flag referenced by all processes, which interrupts their run loops.
The issue I am having is that I am nesting multiple packages, and some contain their own run loops which are not terminated and block the flag check for termination. Correcting this may require restructuring the codebase.
What I am currently looking for is an external (outside of the program) way to see which processes are running and failing to shut down. If the tool can also show why, all the better. I welcome any bash tricks or other methods people know for debugging Python multiprocessing.
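One external approach, as a sketch assuming the third-party psutil package is installed, is a small watcher script that prints the process tree of the main program so you can see which children are still alive after a shutdown is requested (dump_tree is an illustrative name):

    import psutil  # third-party: pip install psutil

    def dump_tree(pid):
        # Print every live descendant of the given process.
        for child in psutil.Process(pid).children(recursive=True):
            print(child.pid, child.status(), " ".join(child.cmdline()))

From plain bash, pstree -p <pid> or ps -ef --forest gives a similar view.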

Using python, how do I launch an independent python process

I am making a Python program, let's say A, which is used to monitor a Python script B.
When program A shuts down, an exit function registered via atexit.register() does some cleanup; as part of that it needs to re-run script B, which must stay running even after script A has shut down.
Python script B can't be part of Python script A.
What do I need to do to make that happen? I have already tried a few things, like using subprocess.Popen(programBCommand), but that doesn't seem to work, as it prevents A from shutting down.
I am using a Debian operating system.
If script B needs to be launched by script A, and continue running whether or not A completes (and not prevent A from exiting), you're looking at writing a UNIX daemon process. The easiest way to do this is to use the python-daemon module to make script B daemonize itself without a lot of explicit mucking about with the details of changing the working directory, detaching from the parent, etc.
Note: The process of daemonizing, UNIX-style, detaches from the process that launched it, so you couldn't directly monitor script B from script A through the Popen object (it would appear to exit immediately). You'd need to arrange some other form of tracking, e.g. identifying or communicating the pid of the daemonized process to script A by some indirect method.
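A minimal sketch of that approach, assuming the third-party python-daemon package is installed (do_work is a hypothetical stand-in for script B's real logic):

    # script_b.py
    import daemon

    def do_work():
        ...  # script B's real work goes here

    with daemon.DaemonContext():
        # Inside this context the process has detached from its parent,
        # so script A can exit freely after launching this script.
        do_work()

Script A can then start it with subprocess.Popen(["python", "script_b.py"]) and exit; the daemonized process keeps running on its own.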

Attaching GDB to a dying process in linux

I would like to attach gdb to a dying process. The program runs in production and I need to debug it there; if I run it under gdb from the start it slows down, and the machines are not that powerful. I tried to catch signals in the application and attach gdb from a handler, but that only works if I send the signals myself. When the program stalls (it is multi-threaded, and the main thread hits a deadlock or somehow gets stuck, or apparently stuck) and the user forces it to quit in the desktop environment (LXDE), I can't catch any signal. The program is all Python with PySide for the graphical interface. I only care about Linux.
My idea is to create a kernel driver and hook process termination or signal delivery there, but since that would be a lot of hassle, I would like to ask whether there is some tool for this kind of thing, or some information I could make use of. Thanks.
There might be a way to do what you want, but if you can't, perhaps it would be sufficient to freeze the program and inspect its memory image?
Enable core dump file generation before it starts, and then once the process is hosed, terminate it with kill. Then use gdb to open the core file and analyze what was happening.
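As a sketch, core dumps are often disabled by a soft resource limit of zero; you can raise it from inside the program at startup (the shell equivalent is ulimit -c unlimited):

    import resource

    # Raise the core-file size soft limit to the hard limit so the kernel
    # writes a core dump when the process dies from a core-producing signal.
    soft, hard = resource.getrlimit(resource.RLIMIT_CORE)
    resource.setrlimit(resource.RLIMIT_CORE, (hard, hard))

When the program stalls, kill -ABRT <pid> forces the dump, and gdb can then open the executable together with the core file; if the CPython gdb extensions are available, the py-bt command will recover Python-level stack traces.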

What are the consequences of killing a python script with SIGHUP?

Sometimes I run many instances of a Python script simultaneously. To manage them I use tmux (a terminal multiplexer), and when I feel I'm done, or when I have to fix something, I kill the tmux session instead of exiting each of the (up to 100) scripts manually.
Killing the tmux session actually kills the bash processes that are the parents of the python processes executed from them. If I understand correctly, this means a SIGHUP signal is sent to all of the python processes.
It cleans everything up quite quickly - memory is freed (it seems), CPU is freed, sockets are closed, and apparently ports are freed. The advantage is that it is a much quicker and simpler task than exiting each of the scripts.
My question is: are there any possible consequences of such a habit? If I don't care about the output of the scripts themselves - could it cause any other damage, such as leaving the OS dirtier or heavier? Is there a better practice?
The SIGHUP handler is called. If no SIGHUP handler is installed, then the default action as shown by the signal(7) man page is invoked.
To be certain that your scripts close all files, release all resources, etc., install a SIGHUP handler that performs the appropriate actions.
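For example, a minimal handler sketch (cleanup is a hypothetical function standing in for whatever the script needs to flush or release):

    import signal
    import sys

    def on_sighup(signum, frame):
        cleanup()    # hypothetical: flush files, close sockets, etc.
        sys.exit(0)  # exit cleanly instead of the default terminate action

    signal.signal(signal.SIGHUP, on_sighup)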

How create threads under Python for Delphi

I'm hosting Python scripts with the Python for Delphi components inside my Delphi application. I'd like the script to create background tasks which keep running.
Is it possible to create threads which keep running even after the script execution ends (but not the host process, which keeps going)? I've noticed that the program gets stuck if the executing script ends while a thread is still running. However, if I wait until the thread has finished, everything works fine.
I'm trying to use the "threading" standard module for the threads.
Python has its own threading module that comes standard, if it helps. You can create thread objects using the threading module.
threading Documentation
thread Documentation
The thread module offers low level threading and synchronization using simple Lock objects.
Again, not sure if this helps since you're using Python under a Delphi environment.
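As a sketch of the threading API (background_task is a placeholder; note the daemon flag, which decides whether the thread keeps the interpreter alive):

    import threading
    import time

    def background_task():
        while True:
            time.sleep(1)  # placeholder for real background work

    # daemon=True: the thread dies with the process instead of blocking exit
    t = threading.Thread(target=background_task, daemon=True)
    t.start()

A non-daemon thread keeps the interpreter running until it finishes, which may be exactly the hang described in the question.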
If a process dies, all its threads die with it, so a solution might be a separate process.
See if creating an XML-RPC server might help you; it is a simple solution for interprocess communication.
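A sketch of such a server using the standard library (status is an illustrative method; in Python 2 the module was named SimpleXMLRPCServer):

    from xmlrpc.server import SimpleXMLRPCServer

    def status():
        # illustrative method that the host process could poll
        return "running"

    server = SimpleXMLRPCServer(("localhost", 8000))
    server.register_function(status)
    server.serve_forever()  # blocks; run this in the separate process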
Threads by definition are part of the same process. If you want them to keep running, they need to be forked off into a new process; see os.fork() and friends.
You'll probably want the new process to end (via exit() or the like) immediately after spawning the script.
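A minimal sketch of that fork idiom on POSIX systems (background_work is a hypothetical stand-in for the long-running task):

    import os

    pid = os.fork()
    if pid == 0:
        # Child process: independent of the script that spawned it.
        background_work()  # hypothetical long-running task
        os._exit(0)        # leave the child without parent-level cleanup
    # Parent: pid holds the child's process id; the script can finish now.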
