I have a script that uses a really simple file-based IPC to communicate with another program. I write a tmp file with the new content and mv it onto the IPC file to keep things atomic (the other program listens for rename events).
But here comes the catch: this works 2 or 3 times, but then the exchange gets stuck.
import subprocess
import time

time.sleep(10)
# check lsof => target file not opened
subprocess.run(
    "mv /tmp/tempfile /tmp/target",
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    universal_newlines=True,
    shell=True,
)
# check lsof => target file STILL open
time.sleep(10)
/tmp/tempfile is freshly prepared before every write.
The first run results in:
$ lsof /tmp/target
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
python 1714 <user> 3u REG 0,18 302 10058 /tmp/target
which leaves it open until I terminate the main Python program. Consecutive runs change the content, the inode, and the file descriptor as expected, but the file is still open, which I would not expect after a mv.
The file finally gets closed when the Python program containing the lines above exits.
EDIT:
Found the bug: mishandling of tempfile.mkstemp(). See: https://docs.python.org/3/library/tempfile.html#tempfile.mkstemp
I created the tempfile like so:
_fd, temp_file_path = tempfile.mkstemp()
where I discarded the file descriptor _fd, which is open by default. I never closed it, so it stayed open even after the move. This resulted in an open target, and since I was only running lsof on the target, I did not see that the tempfile was already open. This would be the corrected version (note that mkstemp() returns a raw OS-level descriptor, so it has to be wrapped with os.fdopen() before it can be used like a file object):
fd, temp_file_path = tempfile.mkstemp()
with os.fdopen(fd, 'w') as f:  # wrap and close the descriptor mkstemp opened
    f.write(content)
# ... mv/rename via shell execution/shutil/pathlib
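For completeness, a minimal end-to-end sketch of the whole pattern (the target path and content here are placeholders, and os.replace stands in for the shelled-out mv):

import os
import tempfile

def atomic_write(target_path, content):
    # Create the temp file in the target's directory so the rename
    # stays on one filesystem and remains atomic.
    fd, temp_path = tempfile.mkstemp(dir=os.path.dirname(target_path))
    with os.fdopen(fd, 'w') as f:  # closes the descriptor mkstemp opened
        f.write(content)
    os.replace(temp_path, target_path)  # atomic rename on POSIX

atomic_write('/tmp/target', 'new content\n')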
Thank you all very much for your help and your suggestions!
I wasn't able to reproduce this behavior. I created a file /tmp/tempfile and ran a Python script with the subprocess.run call you give, followed by a long sleep. /tmp/target was not in use, nor did I see any unexpected open files in lsof -p <pid>.
(edit) I'm not surprised at this, because there's no way that your subprocess command is opening the file: mv does not open its arguments (you can check this with ltrace), and subprocess.run does not parse its argument or do anything with it besides pass it along to be exec'd.
However, when I added some lines to open a file and write to it and then move that file, I see the same behavior you describe. This is the code:
import subprocess
import time

out = open('/tmp/tempfile', 'w')
out.write('hello')
subprocess.run(
    "mv /tmp/tempfile /tmp/target",
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    universal_newlines=True,
    shell=True,
)
time.sleep(5000)
In this case, the file is still open because it was never closed, and even though it's been renamed the original file handle still exists. My bet would be that you have something similar in your code that's creating this file and leaving open a handle to it.
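If that is the case, the fix in the demo above is simply to make sure the handle gets closed before you rely on the rename, e.g.:

with open('/tmp/tempfile', 'w') as out:  # closed automatically on leaving the block
    out.write('hello')
# the subsequent mv/rename now leaves no dangling handle behind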
Is there any reason why you don't use shutil.move? Otherwise it may be necessary to wait for the mv command to finish moving, read its output, and then kill it, running something like:
p = subprocess.Popen(...)
# wait for the move to finish / read its output
p.terminate()
Of course terminate would be a bit harsh.
Edit: depending on your use case, rsync (which is not part of Python) may be an elegant solution to keep your data synced over the network without writing a single line of code.
You say it is still open by mv, but your lsof result shows it open by python. Since mv runs as a subprocess, check whether the PID is the same as that of the main Python process; maybe it is another Python process.
I want to close some files like .txt, .csv, .xlsx that I have opened using os.startfile().
I know this question was asked earlier, but I did not find any useful script for it.
I use a Windows 10 environment.
I believe the question wording is a bit misleading - in reality you want to close the app you opened with os.startfile(file_name).
Unfortunately, os.startfile does not give you any handle to the returned process.
help(os.startfile)
startfile returns as soon as the associated application is launched.
There is no option to wait for the application to close, and no way
to retrieve the application's exit status.
Luckily, you have an alternative way of opening a file via a shell:
import subprocess

shell_process = subprocess.Popen([file_name], shell=True)
print(shell_process.pid)
Returned pid is the pid of the parent shell, not of your process itself.
Killing it won't be sufficient - it will only kill a shell, not the child process.
We need to get to the child:
import psutil

parent = psutil.Process(shell_process.pid)
children = parent.children(recursive=True)
print(children)
child_pid = children[0].pid
print(child_pid)
This is the pid you want to close.
Now we can terminate the process:
import os
import signal

os.kill(child_pid, signal.SIGTERM)
# or
subprocess.check_output("Taskkill /PID %d /F" % child_pid)
Note that this is a bit more convoluted on Windows - there is no os.killpg.
More info on that: How to terminate a python subprocess launched with shell=True
Also, I received PermissionError: [WinError 5] Access is denied when trying to kill the shell process itself with os.kill
os.kill(shell_process.pid, signal.SIGTERM)
subprocess.check_output("Taskkill /PID %d /F" % child_pid) worked for any process for me without permision error
See WindowsError: [Error 5] Access is denied
In order to properly get the pid of the children, you may add a while loop:
import subprocess
import psutil

shell_process = subprocess.Popen([r'C:\Pt_Python\data\1.mp4'], shell=True)
parent = psutil.Process(shell_process.pid)
while parent.children() == []:
    continue  # busy-wait until the shell has spawned its child
children = parent.children()
print(children)
os.startfile() helps to launch the application but has no option to exit, kill, or close the launched application.
The other alternative would be using subprocesses this way:
import subprocess
import time
# File (a CAD in this case) and Program (desired CAD software in this case) # r: raw strings
file = r"F:\Pradnil Kamble\GetThisOpen.3dm"
prog = r"C:\Program Files\Rhino\Rhino.exe"
# Open file with desired program
OpenIt = subprocess.Popen([prog, file])
# keep it open for 30 seconds
time.sleep(30)
# close the file and the program
OpenIt.terminate()
Based on this SO post, there's no way to close the file being opened with os.startfile(). Similar things are discussed in this Quora post.
However, as is suggested in the Quora post, using a different tool to open your file, such as subprocess or open(), would grant you greater control to handle your file.
I assume you're trying to read in data, so in regards to your comment about not wanting to close the file manually, you could always use a with statement, e.g.
with open('foo') as f:
    foo = f.read()
Slightly cumbersome, as you would have to also do a read(), but it may suit your needs better.
os.system('taskkill /f /im Rainmeter.exe') works for me.
In my case, I used the os.startfile("C:\\Program Files\\Rainmeter\\Rainmeter.exe") command to open Rainmeter.exe.
Replace the file and path with yours.
Normally you can automate answers to an interactive prompt by piping stdin:
import subprocess as sp

cmd = 'rpmbuild --sign --buildroot {}/BUILDROOT -bb {}'.format(TMPDIR, specfile)
p = sp.Popen(cmd, stdout=sp.PIPE, stderr=sp.PIPE, stdin=sp.PIPE,
             universal_newlines=True, shell=True)
for out in p.communicate(input='my gpg passphrase\n'):
    print(out)
For whatever reason, this is not working for me. I've tried writing to p.stdin before executing p.communicate(), I've tried flushing the buffer, I've tried using bytes without universal_newlines=True, I've hard-coded things, etc. In all scenarios, the command executes and hangs on:
Enter pass phrase:
My first hunch was that stdin was not the correct file descriptor and that rpmbuild was internally calling a gpg command, so maybe my input isn't piped to it. But when I do p.stdin.close(), I get an OSError about subprocess trying to write to the closed descriptor.
What is the rpmbuild command doing to stdin that prevents me from writing to it?
Is there a hack I can do? I tried echo "my passphrase" | rpmbuild .... as the command but that doesn't work.
I know I can do something with gpg like command and sign packages without a passphrase but I kind of want to avoid that.
EDIT:
After some more reading, I realize this issue is common to commands that require password input, which typically use some form of getpass.
I see a solution would be to use a library like pexpect, but I want something from the standard library. I am going to keep looking, but I think maybe I can try writing to something like /dev/tty.
rpm uses getpass(3), which reopens /dev/tty.
There are 2 approaches to automating this:
1) create a pseudo-tty
2) (Linux) find the reopened file descriptor in /proc
If scripting, expect(1) has (or had) a short example with pseudo-ttys that can be used; a stdlib Python sketch of approach 1) follows.
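A minimal sketch of the pseudo-tty approach using only the standard library's pty module (Linux; the rpmbuild arguments and the passphrase are placeholders):

import os
import pty

# pty.fork() makes the new pseudo-terminal the child's controlling tty,
# so the /dev/tty that getpass(3) reopens is the pty whose master we hold.
pid, master = pty.fork()
if pid == 0:
    # child: exec the command; its /dev/tty is now the pty
    os.execvp('rpmbuild', ['rpmbuild', '--sign', '-bb', 'package.spec'])
else:
    # parent: consume the prompt, then answer through the master end
    os.read(master, 1024)                     # e.g. b'Enter pass phrase: '
    os.write(master, b'my gpg passphrase\n')  # send the passphrase
    try:
        while os.read(master, 1024):          # drain remaining output
            pass
    except OSError:
        pass                                  # EIO once the child closes the pty
    os.waitpid(pid, 0)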
I'm struggling to get some python script to start a subprocess, wait until it completes and then retrieve the required data. I'm quite new to Python.
The command I wish to run as a subprocess is
./bin.testing/Eva -t --suite="temp0"
Running that command by hand in the Linux terminal produces:
in terminal mode
Evaluation error = 16.7934
I want to run the command as a python sub-process, and receive the output back. However, everything I try seems to skip the second line (ultimately, it's the second line that I want.) At the moment, I have this:
def job(self, fen_file):
    from subprocess import Popen, PIPE
    from sys import exit
    try:
        eva = Popen('{0}/Eva -t --suite="{1}"'.format(self.exedir, fen_file),
                    shell=True, stdout=PIPE, stderr=PIPE)
        stdout, stderr = eva.communicate()
    except:
        print('Error running test suite ' + fen_file)
        exit("Stopping")
    print(stdout)
    .
    .
    .
    return 0
All this seems to produce is
in terminal mode
0
with the important line missing. The print statement is just so I can see what I am getting back from the sub-process -- the intention is that it will be replaced with code that processes the number from the second line and returns it (here I'm returning 0 just to get this particular bit working first; the caller of this function prints the result, which is why there is a zero at the end of the output). exedir is just the directory of the executable for the sub-process, and fen_file is just an ASCII file that the sub-process needs. I have tried removing the 'in terminal mode' line from the source code of the sub-process and recompiling it, but that doesn't help -- it still doesn't return the important second line.
Thanks in advance; I expect what I am doing wrong is really very simple.
Edit: I ought to add that the subprocess Eva can take a second or two to complete.
Since the 2nd line is an error message, it's probably stored in your stderr variable!
To know for sure you can print your stderr in your code, or you can run the program on the command line and see if the output is split into stdout and stderr. One easy way is to do ./bin.testing/Eva -t --suite="temp0" > /dev/null. Any messages you get are stderr since stdout is redirected to /dev/null.
Also, typically with Popen the shell=True option is discouraged unless really needed. Instead pass a list:
[os.path.join(self.exedir, 'Eva'), '-t', '--suite=' + fen_file], shell=False, ...
This can avoid problems down the line if one of your arguments would normally be interpreted by the shell. (Note, I removed the ""'s, because the shell would normally eat those for you!)
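Putting that together, a hedged sketch of the job function (keeping the asker's exedir and fen_file names, but dropping self for a standalone example) that captures both streams:

import os
from subprocess import Popen, PIPE

def job(exedir, fen_file):
    # no shell involved, so no quoting headaches
    eva = Popen([os.path.join(exedir, 'Eva'), '-t', '--suite=' + fen_file],
                stdout=PIPE, stderr=PIPE, universal_newlines=True)
    stdout, stderr = eva.communicate()  # waits for Eva to finish
    print(stdout)  # expected: 'in terminal mode'
    print(stderr)  # expected: 'Evaluation error = 16.7934', if it goes to stderr
    return stdout, stderr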
Try using subprocess check_output.
import subprocess

output_lines = subprocess.check_output(['./bin.testing/Eva', '-t', '--suite=temp0'])
for line in output_lines.splitlines():
    print(line)
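If the missing line really is on stderr, as suggested above, check_output will not capture it by default; one variant is to merge the two streams:

import subprocess

# Fold stderr into the captured output so the second line is not lost.
output = subprocess.check_output(
    ['./bin.testing/Eva', '-t', '--suite=temp0'],
    stderr=subprocess.STDOUT,
)
print(output.decode())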
I have the following simplified code in Python:
import subprocess
import time

proc_args = "gzip --force file; echo this_still_prints > out"
post_proc = subprocess.Popen(proc_args, shell=True)
while True:
    time.sleep(1)
Assume file is big enough to take several seconds to process. If I close the Python process while gzip is still running, gzip ends, but the echo command after it still executes. I'd like to know why this happens, and whether there's a way to keep the remaining commands from executing.
Thank you!
A process exiting does not automatically cause all its child processes to be killed. See this question and its related questions for much discussion of this.
gzip exits because the pipe containing its standard input gets closed when the parent exits; it reads EOF and exits. However, the shell that's running the two commands is not reading from stdin, so it doesn't notice this. So it just continues on and executes the echo command (which also doesn't read stdin).
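One way around this (POSIX only, and not from the answers below) is to put the shell into its own process group and kill the whole group, shell and gzip together, when the parent exits:

import atexit
import os
import signal
import subprocess

proc_args = "gzip --force file; echo this_still_prints > out"
# start_new_session=True puts the shell (and its children) in a new process group
post_proc = subprocess.Popen(proc_args, shell=True, start_new_session=True)

def _kill_group():
    try:
        os.killpg(post_proc.pid, signal.SIGTERM)  # signals the shell AND gzip
    except ProcessLookupError:
        pass  # the group already exited on its own

atexit.register(_kill_group)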
post_proc.kill() is, I believe, what you are looking for... but AFAIK you must explicitly call it.
see: http://docs.python.org/library/subprocess.html#subprocess.Popen.kill
I use try-finally in such cases (unfortunately you cannot employ with the way you would with open()):
import subprocess
import time

proc_args = "gzip --force file; echo this_still_prints > out"
post_proc = subprocess.Popen(proc_args, shell=True)
try:
    while True:
        time.sleep(1)
finally:
    post_proc.kill()
I have a Java program which runs on a particular port in Ubuntu. While the program runs, I need to capture its output and save it to a log file. I currently use nohup to run it, but when the program fails, I don't know why, because the process restarts and the nohup output gets overwritten. I want the process to restart and keep appending to the log file, so I can check it at a later date. Currently I don't even know its state: is it running or has it failed?
I heard that it is pretty easy to do this using Python scripts.
Can anyone please help me with this?
Thanks in advance,
Renjith Raj
You should use the subprocess module of python.
If your logs are not too big, you can simply use :
# for python >= 2.7
result = subprocess.check_output(["/path/to/process_to_launch", "arg1"])
# for python < 2.7
process = subprocess.Popen(["/path/to/process_to_launch", "arg1"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
str_out, str_err = process.communicate()
# in str_out you will find the standard output of your process
# in str_err you will find the error output of your process
But if your outputs are really big (think MB, not KB), this may cause memory problems...
In case of big output, use file handles for stdout and stderr:
out_file = open(out_file_name, "w")
err_file = open(err_file_name, "w")
process = subprocess.Popen(["/path/to/process_to_launch", "arg1"], stdout=out_file, stderr=err_file)
return_code = process.wait()
out_file.close()
err_file.close()
And then, in out_file you'll find the output of the process, and in err_file the error output.
Of course, if you want to relaunch the process when it dies, put this code in a loop ;) A sketch of such a loop follows.
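A minimal sketch of that restart loop, appending to a single log so restarts don't overwrite it (the java command line and file names are placeholders):

import subprocess
import time

while True:
    with open("server.log", "a") as log:  # append mode: restarts don't overwrite
        log.write("--- starting process ---\n")
        log.flush()  # keep our marker ordered before the child's output
        process = subprocess.Popen(
            ["java", "-jar", "/path/to/server.jar"],
            stdout=log, stderr=subprocess.STDOUT,
        )
        return_code = process.wait()
        log.write("--- exited with code {}, restarting ---\n".format(return_code))
    time.sleep(1)  # avoid a tight loop if the program crashes immediately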