Python bottle zombie process

When developing a Bottle webapp with Python 3.5, I regularly end up with a zombie process. This happens when using the auto-restart development mode.
The Windows console still updates with the access logs and errors, but the program is no longer running in the foreground, so I can't reach it to use Ctrl+C.
The only way to kill this is to open the task manager and end the process manually.
If I don't kill it, it will still be listening on the port, and it takes precedence over a newly started process.
I haven't found a rule for when this happens, nor have I found a way to reproduce it.
How can I avoid this multi-spawned zombie process?
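For context, the dev-mode entry point presumably looks something like this minimal sketch (the app structure and port are assumptions). Bottle's auto-reloader works by re-executing the script in a child process while the parent only watches source files for changes; on Windows, if the parent dies without stopping the child, that child is what keeps listening on the port:

    from bottle import Bottle, run

    app = Bottle()

    @app.route('/')
    def index():
        return 'hello'

    if __name__ == '__main__':
        # reloader=True spawns the child process that runs the actual
        # server; the parent restarts it whenever a source file changes.
        run(app, host='localhost', port=8080, reloader=True, debug=True)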

Related

Python schedule persistence

What is the correct way to have the Python schedule library (by Daniel Bader) run persistently? I currently run the job by having a terminal open, connected to a VM where the scripts actually run. There I run python scheduler.py, where scheduler.py contains all the jobs.
But when the connection closes, or I close the terminal, the scheduler stops.
Any easy solutions to fix this?
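For reference, the question doesn't show scheduler.py, but a typical script built on the schedule library looks roughly like this (the job body is a placeholder):

    import time
    import schedule

    def job():
        print('running job')  # placeholder for the real work

    schedule.every(10).minutes.do(job)

    # This loop ties the scheduler to the terminal that started it; when
    # the terminal or SSH connection dies, the loop dies with it.
    while True:
        schedule.run_pending()
        time.sleep(1)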
You have a couple of options here. You are starting the process in your SSH session, so killing the SSH session also kills the process.
One way to handle this would be to have the VM run the script on startup. You could set the script up as a service, so that even if it goes down for some reason it will come back up. Read up on your init system (systemd on most modern distributions) for info on how to launch a script at boot on Linux. I'm not well-versed in Windows any more, but I believe there is a way to do the same.
Another option is to keep the session open by connecting to it with screen or tmux. This article explains the problem some and gives you a few different ways to work around the issue: https://www.tecmint.com/keep-remote-ssh-sessions-running-after-disconnection/

Attaching GDB to a dying process in linux

I would like to attach gdb to a dying process, because the program runs in production and I need to debug it there; if I run the program under gdb it slows down, and the machines are not that powerful. I tried catching signals in the application and attaching gdb from the handler, but that only works if I send the signals myself. When the program stalls (it is multi-threaded, and the main thread deadlocks or otherwise gets stuck, or appears stuck) and the user forces it to quit from the desktop environment (LXDE), I can't catch any signal. The program is all Python with PySide for the graphical interface. I only care about Linux.
My idea is to create a kernel driver and try to hook process termination or signal delivery there, but since that would be a lot of hassle, I would like to ask whether there is some tool for this kind of thing, or some information I could make use of. Thanks.
There might be a way to do what you want, but if you can't perhaps it would be sufficient to freeze the program and inspect its memory image?
Enable core dump file generation before it starts, and then once the process is hosed, terminate it with kill. Then use gdb to open the core file and analyze what was happening.
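As a sketch of that approach, core file generation can be enabled from inside the Python process with the resource module, and the standard-library faulthandler module can additionally dump every thread's traceback on a signal of your choosing (SIGUSR1 here is an arbitrary pick), which is handy for a stuck main thread:

    import faulthandler
    import resource
    import signal

    # Raise the soft core-file size limit to the hard limit; the soft
    # limit is often 0 by default, which suppresses core dumps entirely.
    soft, hard = resource.getrlimit(resource.RLIMIT_CORE)
    resource.setrlimit(resource.RLIMIT_CORE, (hard, hard))

    # On SIGUSR1, write the Python traceback of every thread to stderr
    # without killing the process.
    faulthandler.register(signal.SIGUSR1, all_threads=True)

Once the process is hosed, kill -ABRT <pid> forces a core dump (SIGABRT's default action), and gdb can then open the executable together with the core file offline, so production is not slowed down while the program runs.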

Handling SIGINT (ctrl+c) in script but not interpreter?

I'm working on a project that spins off several long-running workers as processes. Child workers catch SIGINT and clean up after themselves - based on my research, this is considered a best practice, and works as expected when terminating scripts.
I am actively developing this project, which means that I am regularly testing changes in the interpreter. When I'm working in an interpreter, I often hit CTRL+C to clear currently written text and get a fresh prompt. Unfortunately, if I do this while a subprocess is running, SIGINT is sent to that worker, causing it to terminate.
Is there a solution to this problem other than "never hit CTRL+C in your interpreter"?
One option is to set a flag (e.g. an environment variable or a command-line option) when debugging, and have the workers skip their SIGINT handling when it is set.
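A minimal sketch of that idea, assuming multiprocessing workers and a hypothetical DEBUG_IGNORE_SIGINT environment variable: when the flag is set (e.g. while experimenting in the interpreter), workers ignore SIGINT instead of running their usual cleanup-and-exit handler.

    import os
    import signal
    import time
    import multiprocessing

    def _handle_sigint(signum, frame):
        # Normal path: clean up worker state, then exit.
        raise SystemExit(0)

    def work():
        if os.environ.get('DEBUG_IGNORE_SIGINT'):  # hypothetical flag
            # Interpreter session: let Ctrl+C affect only the prompt.
            signal.signal(signal.SIGINT, signal.SIG_IGN)
        else:
            signal.signal(signal.SIGINT, _handle_sigint)
        while True:
            time.sleep(1)  # placeholder for the long-running work loop

    if __name__ == '__main__':
        multiprocessing.Process(target=work).start()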

What are the consequences of killing a python script with SIGHUP?

Sometimes I run many instances of a Python script simultaneously. To manage this I use tmux (a terminal multiplexer), and when I feel I'm done, or when I have to fix something, I kill the tmux session instead of exiting each of the (up to 100) scripts manually.
Killing the tmux session actually kills the bash processes which are parents of the python processes that were executed from them. If I understand correctly, it means a SIGHUP signal is sent to all of the python processes.
It cleans everything up quite quickly: memory is freed (it seems), CPU is freed, sockets are closed, and apparently ports are freed. The advantage is that it is a much quicker and simpler task than exiting each of the scripts.
My question is: are there any possible consequences to this habit? If I don't care about the output of the scripts themselves, can it cause any other damage, such as leaving the OS in a dirtier or heavier state? Is there a better practice?
The SIGHUP handler is called. If no SIGHUP handler is installed, then the default action as shown by the signal(7) man page is invoked.
To be certain that your scripts close all files, release all resources, etc., install a SIGHUP handler that performs the appropriate actions.
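A minimal sketch of such a handler; the cleanup body is a placeholder for whatever resources your scripts actually hold:

    import signal
    import sys

    def on_sighup(signum, frame):
        # Flush and close files, shut down sockets, remove temp files,
        # etc., so nothing is left half-written when tmux is killed.
        sys.stdout.flush()
        sys.exit(0)

    signal.signal(signal.SIGHUP, on_sighup)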

have a python process run in the background, similar to how the redis process does

I asked this question on superuser but haven't gotten a response. Maybe here would have been more appropriate.
When I start my Redis server with redis-server, the process will still be there when I log back in, even after I close the terminal or log out; but my Python bottle server (python server.py) shuts down if I close the terminal or log out. How do I get behavior similar to Redis in Python?
The easy way is to run the process through screen or tmux.
You could also try doing something with e.g. python-daemon on Unix, or various other approaches for running daemons.
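With python-daemon, a sketch might look like this (run_server is a hypothetical entry point standing in for whatever server.py does):

    import daemon

    from server import run_server  # hypothetical entry point in server.py

    # DaemonContext detaches the process from the controlling terminal,
    # so closing the terminal or logging out no longer kills the server,
    # similar to redis-server's behavior.
    with daemon.DaemonContext():
        run_server()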
