I am running a script that I launch with:
run_app.py >& log.out
run_app.py starts a few subprocesses and reads their stdout/stderr through pipes. The script runs fine, but if I try to put it into the background with:
run_app.py >& log.out &
then run_app.py hangs while reading data from a subprocess. It seems similar to this thread:
ffmpeg hangs when run in background
My subprocesses also write a lot, which might overflow PIPE_BUF.
However, I am redirecting and writing my stdout/stderr to a file. Are there any suggestions that might prevent the hang when I put the script into the background, while still saving the output to a file instead of sending it to /dev/null?
When a background process is running, its standard I/O streams are still connected to the screen and keyboard. Processes will be suspended (stopped) if they try to read from the keyboard: the kernel sends SIGTTIN to a background process that reads from its controlling terminal.
You should have seen a message saying something like Stopped (tty input); that would have been sent to the shell's stderr.
Normally redirecting stdin covers that problem, but some programs access the keyboard directly rather than using stdin, typically those prompting for a password.
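At the shell level, redirecting stdin away from the terminal (run_app.py < /dev/null >& log.out &) usually covers it. Inside run_app.py itself, a minimal sketch of the same idea (my own illustration, not from the thread; 'some_child' is a placeholder command) would detach the child's stdin and keep draining its output so the pipe never fills:

import subprocess

# Spawn the child with stdin detached from the TTY so a backgrounded
# run never gets stopped with "tty input"; merge stderr into stdout
# and keep reading so the pipe cannot fill up and block the child.
proc = subprocess.Popen(['some_child'],            # placeholder command
                        stdin=subprocess.DEVNULL,  # Python 3.3+
                        stdout=subprocess.PIPE,
                        stderr=subprocess.STDOUT,
                        universal_newlines=True)
for line in proc.stdout:
    print(line, end='')
proc.wait()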
Related
I have a Raspberry Pi which I use to play a video on a loop. I have a button which I use to end the video and display the desktop wallpaper, which I have as a static image.
To do this I use a simple Python script that launches omxplayer and loops waiting for the button to be pressed; when it is pressed, the script kills omxplayer, waits a while, then restarts the loop.
This all works fine.
I want to use plink to launch this script from a Windows machine, and have used the following:
plink.exe -ssh pi@192.168.0.201 -pw ****** "sudo python /home/pi/ftp/files/button.py"
This launches the script no problem, but because the script never 'ends', the batch file just sits there.
I have other batch files that use plink to kill the script, and others that turn the monitor on and off using CEC; all of these work fine because plink gets a return. But because the Python script runs indefinitely, nothing is returned, so plink just seems to hang.
So, the question is: can plink be told to send the command and terminate, regardless of the response? Or (and I've looked for this with no joy) is there a way of setting a timeout after which plink gives up waiting for a response?
If I understand your question correctly (I'm not sure I do), you want plink to start the script on the server and exit, leaving the script running.
plink is just an alternative SSH client, similar to OpenSSH's ssh, so the same techniques you will find on the Internet for ssh apply.
Two of zillions of questions on this topic:
How to run a command in background using ssh and detach the session
Use SSH to start a background process on a remote server, and exit session
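For example, the usual nohup-and-detach recipe should carry over to plink unchanged; a sketch based on the command from the question (untested on this setup; the redirections keep the remote shell from holding the session open):

plink.exe -ssh pi@192.168.0.201 -pw ****** "nohup sudo python /home/pi/ftp/files/button.py > /dev/null 2>&1 &"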
Is there a way to send input to a command shell without actually entering any keystroke?
When launching gdb or ipython, the program waits for user keyboard input. I was wondering if there is a way to communicate with those shells without entering any keystrokes?
Maybe the command could be sent to the shell through a fifo, and the answer from the shell redirected to a file or a fifo?
Or could Python's subprocess.Popen be used to create a wrapper around the shell (as sketched below)?
The goal is to work with any kind of command shell.
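For what it's worth, here is a minimal sketch of the subprocess.Popen wrapper idea from the question, using gdb as the example shell. Caveat: many interactive programs detect that stdin is not a TTY and change their behavior, so a pseudo-terminal (e.g. pexpect) is sometimes required for full interactivity:

import subprocess

# Drive gdb through plain pipes: write commands to its stdin and read
# the combined stdout/stderr back as one string.
proc = subprocess.Popen(['gdb', '--quiet'],
                        stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE,
                        stderr=subprocess.STDOUT,
                        universal_newlines=True)
out, _ = proc.communicate('help\nquit\n')  # send two commands, then EOF
print(out)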
I am trying to launch another Python script in the background from a CGI Python script, without waiting for it to complete. When I run the same thing from a Linux shell, the second script happily runs in the background. But through CGI, the front end keeps loading until the other script completes instead of returning immediately.
I have tried the same thing on the Linux shell and it works; it is only under CGI that the script waits for the other process to complete.
python1.py:
import sys
import subprocess

# senddata is assumed to be defined earlier in the script
command = [sys.executable, 'python2.py', str(senddata)]
proc = subprocess.Popen(command, shell=False, stdin=None, stdout=None, stderr=None, close_fds=True)
print("Content-type: text/html\r\n\r\n")
print("The script is running in background! Expect an email in 10 minutes.")
python2.py:
This script takes 2-5 minutes to execute and then sends an email to the group.
The expected output is to have this message:
The script is running in background! Expect an email in 10 minutes.
and for python2.py to run in the background without waiting for it to complete.
The webserver will keep the client response active (causing the client to stay in a "loading" state) until all of the output from the CGI program has been collected and forwarded to the client. The way the webserver knows that all output has been collected is that it sees the standard output stream of the CGI process being closed. That normally happens when the CGI process exits.
The reason why you're having this problem is that when subprocess.Popen is told to execute a program with stdout=None, the spawned program will share its parent's standard output stream. Here that means that your background program shares the CGI process's standard output. That means that from the webserver's point of view that stream remains open until both of the processes exit.
To fix, launch the background process with stdout=subprocess.PIPE. If the background process misbehaves when its stdout gets closed as the CGI process dies, try launching it with stdout=open('/dev/null', 'w') instead.
The background process's stdin and stderr will have the same issue; they will be shared with the CGI process. AFAIK sharing those will not cause trouble as long as the background process does not attempt to read from its standard input, but if it does do that (or if you just want to be cautious) you can treat them the same way as you treat stdout and either set them to subprocess.PIPE or associate them with /dev/null.
More detail is at https://docs.python.org/2/library/subprocess.html#popen-constructor
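Putting that together, a minimal sketch of the fixed python1.py launch (assuming Python 3.3+ for subprocess.DEVNULL; on Python 2, open os.devnull yourself and pass the file handles instead):

import subprocess
import sys

# Detach all three standard streams from the CGI process, so the
# webserver sees the CGI script's stdout close as soon as it exits.
command = [sys.executable, 'python2.py', str(senddata)]  # senddata as in the question
proc = subprocess.Popen(command,
                        stdin=subprocess.DEVNULL,
                        stdout=subprocess.DEVNULL,
                        stderr=subprocess.DEVNULL,
                        close_fds=True)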
I have a Python process that uses os.popen to run tcpdump in the background. It then reads and processes the output from tcpdump. The process runs in the background as a daemon. When I execute this process from the command line, it runs just fine: it fires up tcpdump and reads the output properly. However, I want this process to run automatically at boot, and I've directed it to do so in cron. When I do this, my process is running (per the ps command) but tcpdump is not.
Is there some reason the behavior is different starting a process in cron vs starting it from the command line? My code looks something like this:
import os

p = os.popen('/usr/sbin/tcpdump -l -i eth0')
while True:
    data = p.readline()
    # do something with data
cron will send you an email when there is a problem, so the first thing to do is look in your mailbox (run mailx to access it).
If there is no mail, make sure the processes write messages to stdout/stderr when there is a problem.
Also, check that you're using the correct user: on some systems, tcpdump needs to be run as root, so you need to install the job into root's crontab (instead of your normal user's).
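As a debugging aid (my own sketch, not from the answer), switching from os.popen to subprocess.Popen and leaving stderr attached means any tcpdump startup error lands in cron's mail for the job:

import subprocess

# Capture stdout for processing, but leave stderr alone so tcpdump's
# error messages (e.g. permission problems under cron) reach cron's mail.
p = subprocess.Popen(['/usr/sbin/tcpdump', '-l', '-i', 'eth0'],
                     stdout=subprocess.PIPE,
                     universal_newlines=True)
for data in p.stdout:
    pass  # do something with data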
I have a Python script that needs to send Ctrl-C to the Mac terminal. I've tried sending the plain text "^C", but I get back that the terminal does not recognize the command (the terminal here being the pseudo-terminal that Python creates).
Basically, I am using the terminal to run an old Unix executable, and the only way I can think of to terminate it gracefully is to send the interrupt signal. Is there any way I can fool the terminal into thinking that I pressed Ctrl-C?
Thanks in advance!
You can explicitly send the SIGINT signal to the process with os.kill, provided you can get its PID:
os.kill(pid, signal.SIGINT)
This will require you to instrument your script to grab the process PID, but it's the best way to emulate the Ctrl-C behavior.
If you open the process using subprocess's Popen, you should be able to send a control signal like this:
proc.send_signal(signal.SIGINT)
You'll need to import signal to get SIGINT.
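Putting the two answers together, a minimal sketch ('./legacy_tool' is a placeholder for the old Unix executable from the question):

import signal
import subprocess
import time

proc = subprocess.Popen(['./legacy_tool'])  # placeholder executable
time.sleep(5)                               # let it run for a while
proc.send_signal(signal.SIGINT)             # deliver SIGINT, as if Ctrl-C were pressed
proc.wait()                                 # wait for a graceful shutdown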