I have a Python script which uses subprocess.check_call to launch Wine (a compatibility layer for running Windows programs on Linux); Wine then launches Z:\\Program Files (x86)\\PeaZip\\peazip.exe.
First, when I tested this Python script in debugging mode (python3 -u -m ipdb unpack_archive.py), set a breakpoint around the Wine launch, and ran the statements step by step, Wine ran peazip.exe successfully. That is, PeaZip successfully extracted the PEA archive on Linux.
However, when I ran the script outside debugging mode (python3 unpack_archive.py), I found that peazip.exe did not extract the PEA archive successfully. So I suspect there is a synchronization problem in Wine or in Python's subprocess.check_call().
My current workaround is to insert time.sleep(1.0) after launching Wine:
elif 'PEA archive' in ftype:
    if splitext(arcname)[1] != '.pea':
        tmpfile = os.path.join(tmpdir, basename(arcname)) + '.pea'
    else:
        tmpfile = os.path.join(tmpdir, basename(arcname))
    shutil.copy(arcname, tmpfile)
    subprocess.check_call(["wine", "/home/acteam/.wine/drive_c/Program Files (x86)/PeaZip/peazip.exe",
                           "-ext2here", to_wine_path(tmpfile)])
    import time
    time.sleep(1.0)  # if we don't sleep, peazip.exe won't extract the file successfully
    os.remove(tmpfile)
    copy_without_symlink(tmpdir, outdir)
I checked the Wine manual; it doesn't mention anything about synchronization. I also checked subprocess.check_call(): the documentation explicitly says that check_call() waits for the command to complete.
I don't want this workaround, because if the PEA archive is very large the sleep() value has to be larger too, and we can't predict a sufficient value before running it.
I followed @jasonharper's suggestion and used subprocess.check_output() instead of check_call():
elif 'PEA archive' in ftype:
    if splitext(arcname)[1] != '.pea':
        tmpfile = os.path.join(tmpdir, basename(arcname)) + '.pea'
    else:
        tmpfile = os.path.join(tmpdir, basename(arcname))
    shutil.copy(arcname, tmpfile)
    subprocess.check_output(["wine", "/home/acteam/.wine/drive_c/Program Files (x86)/PeaZip/peazip.exe",
                             "-ext2here", to_wine_path(tmpfile)])
    os.remove(tmpfile)
    copy_without_symlink(splitext(tmpfile)[0], outdir)
I tested it with python3 unpack_archive.py Kevin.pea, a 2.0 GB PEA archive. The extraction took 4 minutes 16 seconds, and the three subfiles were unpacked successfully.
My understanding is that the wine executable is not the actual emulator - it just launches a background process called wineserver if it's not already running, tells it to run the Windows program, and then immediately exits itself - quite possibly before the Windows program has even started running.
One of the answers to this question suggests that piping the output of wine to another program will delay things until the Windows program actually exits. In Python terms, this would be equivalent to using check_output() instead of check_call(), although I haven't tried this myself.
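For illustration, a minimal sketch of that idea with Popen (this is my addition, not part of the original answer; the PeaZip path, tmpfile and the to_wine_path() helper are taken from the question's code). The Windows process inherits the write end of the stdout pipe, so communicate() does not return until that end is closed, i.e. until the actual program, not just the wine launcher, has exited:

import subprocess

proc = subprocess.Popen(
    ["wine", "/home/acteam/.wine/drive_c/Program Files (x86)/PeaZip/peazip.exe",
     "-ext2here", to_wine_path(tmpfile)],
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,
)
output, _ = proc.communicate()  # blocks until every writer to the pipe is gone
if proc.returncode != 0:
    raise subprocess.CalledProcessError(proc.returncode, proc.args, output)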
Consider using advisory locking to block until the process has exited:
import fcntl

lockfile = open(tmpfile, 'a')
subprocess.check_call([
        "wine", "/home/acteam/.wine/drive_c/Program Files (x86)/PeaZip/peazip.exe",
        "-ext2here", to_wine_path(tmpfile)],
    preexec_fn=lambda: fcntl.flock(lockfile, fcntl.LOCK_EX),
    close_fds=False)
fcntl.flock(lockfile, fcntl.LOCK_EX)
Here, our preexec_fn (run after we've fork()ed off the subprocess but before wine has been started) grabs a lock, and after check_call() has returned, we then try to grab that lock ourselves -- which will block if it's not yet released.
(Note that you'll need to be sure that wine doesn't close that file descriptor itself prior to program exit; if it does, one way to avoid that is to create the lock on a descriptor passed as stdin, stdout or stderr).
Related
I'm working on a BCP wrapper method in Python, but have run into an issue invoking the command with subprocess.
As far as I can tell, the BCP command doesn't return any value or indication that it has completed outside of what it prints to the terminal window, which causes subprocess.call or subprocess.run to hang while they wait for a return.
subprocess.Popen allows a manual .terminate() method, but I'm having issues getting the table to write afterwards.
The bcp command works from the command line with no issues: it loads data from a source CSV according to a .fmt file and writes an error log file. My script is able to dismount the file from the log path afterwards, so I would consider the command itself irrelevant and the question to be about the behavior of the subprocess module.
This is what I'm trying at the moment:
process = subprocess.Popen(bcp_command)
try:
    path = Path(log_path)
    sleep_counter = 0
    while not path.is_file() and sleep_counter < 16:
        sleep(1)
        sleep_counter += 1
finally:
    process.terminate()
self.datacommand = datacommand
My idea was to check that the error log file had been written by the bcp command as a way to tell that the process had finished. With this, my script no longer freezes, the files are apparently being written and dismounted successfully later on in the script, and the script finishes in less than the 15 seconds the sleep loop would otherwise take to end it.
When the process froze my Spyder shell (and IDLE, so it's not the IDE), I could force-terminate it by closing the console itself, and at least then it would write to the server. However, it seems that when I use .terminate(), the command doesn't actually write anything to the server.
I also checked whether a dumb 15-second timeout (the BCP takes about 2 seconds with this data) would work, in case it was writing the error log before the load finished.
Still resulted in an empty table on SQL server.
How can I get subprocess to execute a command without hanging?
Well, it seems to be a more general issue about calling helper functions with Popen
as seen here:
https://github.com/dropbox/pyannotate/issues/67
I was able to fix the hanging issue by changing it to:
subprocess.Popen(bcp_command, close_fds=True)
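For completeness, a minimal sketch of how the fixed call could replace the log-polling loop; this is my assumption, not part of the original fix, and bcp_command is the same variable as in the question:

import subprocess

process = subprocess.Popen(bcp_command, close_fds=True)
try:
    process.wait(timeout=60)  # raises TimeoutExpired if bcp really does hang
except subprocess.TimeoutExpired:
    process.terminate()
    raise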
I have a script that uses a really simple file-based IPC to communicate with another program. I write a temp file with the new content and mv it onto the IPC file to keep things atomic (the other program listens for rename events).
But here's the catch: this works two or three times, but then the exchange gets stuck.
time.sleep(10)
# check lsof => target file not opened
subprocess.run(
    "mv /tmp/tempfile /tmp/target",
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    universal_newlines=True,
    shell=True,
)
# check lsof => target file STILL open
time.sleep(10)
/tmp/tempfile gets prepared anew before every write.
The first run results in:
$ lsof /tmp/target
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
python 1714 <user> 3u REG 0,18 302 10058 /tmp/target
which leaves it open until I terminate the main Python program. Consecutive runs change the content, the inode and the file descriptor as expected, but the file is still open, which I would not expect after a mv.
The file only gets closed when the Python program containing the lines above exits.
EDIT:
Found the bug: mishandling of tempfile.mkstemp(). See: https://docs.python.org/3/library/tempfile.html#tempfile.mkstemp
I created the tempfile like so:
_fd, temp_file_path = tempfile.mkstemp()
where I discarded the file descriptor _fd, which is open by default. I did not close it, so it was left open even after the move. This resulted in an open target, and since I was only lsofing the target, I did not see that the tempfile was already open. This would be the corrected version:
fd, temp_file_path = tempfile.mkstemp()
with os.fdopen(fd, 'w') as f:  # mkstemp returns a raw file descriptor, so wrap it
    f.write(content)           # the descriptor is closed when the with-block exits
# ... mv/rename via shell execution/shutil/pathlib
Thank you all very much for your help and your suggestions!
I wasn't able to reproduce this behavior. I created a file /tmp/tempfile and ran a Python script with the subprocess.run call you give, followed by a long sleep. /tmp/target was not in use, nor did I see any unexpected open files in lsof -p <pid>.
(edit) I'm not surprised at this, because there's no way that your subprocess command is opening the file: mv does not open its arguments (you can check this with ltrace) and subprocess.run does not parse its argument or do anything with it besides pass it along to be exec-ed.
However, when I added some lines to open a file, write to it, and then move that file, I saw the same behavior you describe. This is the code:
import subprocess

out = open('/tmp/tempfile', 'w')
out.write('hello')
subprocess.run(
    "mv /tmp/tempfile /tmp/target",
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    universal_newlines=True,
    shell=True,
)
import time
time.sleep(5000)
In this case, the file is still open because it was never closed, and even though it's been renamed the original file handle still exists. My bet would be that you have something similar in your code that's creating this file and leaving open a handle to it.
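A minimal sketch of the fix that explanation implies: close the handle before the rename, so nothing keeps the renamed file open. Using os.replace for an atomic in-process rename is my assumption, not part of the answer; the paths match the example above:

import os

with open('/tmp/tempfile', 'w') as out:
    out.write('hello')  # the handle is closed when the with-block exits
os.replace('/tmp/tempfile', '/tmp/target')  # atomic rename, no handle left open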
Is there any reason why you don't use shutil.move? Otherwise it may be necessary to wait for the mv command to finish moving and then kill it, read stdin, and run something like:
p = subprocess.Popen(...)  # Popen rather than run(): run() already waits, and its result has no terminate()
# wait for the move to finish / read from stdin
p.terminate()
Of course terminate would be a bit harsh.
Edit: depending on your use case, rsync, which is not part of Python, may be an elegant solution to keep your data synced over the network without writing a single line of code.
You say it is still open by mv, but your lsof result shows it open by python. As it is a subprocess, check whether the PID is the same as your main Python process; maybe it is another Python process.
I have a problem. I need to kill a batch file from a Python script that the batch file itself runs. The batch file runs the abc.py script as its first step, along with other scripts, so I need to kill the batch file so that the other scripts don't get executed. Here is what I have tried:
import psutil

for proc in psutil.process_iter():
    if (proc.name() == "python.exe" and len(proc.cmdline()) > 1
            and "abc.py" in proc.cmdline()[1]):
        proc.terminate()
But this only kills the Python script, not the batch file. I also tried killing the PID, with the same effect:
os.system("taskkill /F /PID " + str(os.getpid()))
Edit 1
The script checks for existence of another running script and then needs to terminate itself.
If you're just looking to kill whoever your parent is, that's easy: just use os.getppid() instead of os.getpid():
os.system("taskkill /F /PID " + str(os.getppid()))
Of course it's better to use subprocess instead of os.system for all the usual reasons, like getting a useful error if it fails:
subprocess.run(['taskkill', '/F', '/PID', str(os.getppid())])
Or, even better, don't use taskkill, just kill it directly. This also gives you the option of using a nicer Ctrl-C or Ctrl-Break kill instead of a hard kill, if preferred:
os.kill(os.getppid(), signal.CTRL_BREAK_EVENT)
If you're using Python 2.7, getppid doesn't work on Windows; that was only added in 3.2. (And I think the same is true for os.kill, and definitely for signal.CTRL_BREAK_EVENT.)
Since you're already apparently amenable to using psutil, you can use that.
There's no need to search through every process on the system to find yourself: just construct a default Process. You can go from any process to its parent with parent(), and then use its kill or terminate method:
proc = psutil.Process().parent()
proc.kill()
All of the above (except using CTRL_C_EVENT or CTRL_BREAK_EVENT instead of a standard signal) have the nice advantage of being cross-platform: you can run the same script on Linux or macOS or whatever and it'll kill the shell script that ran it.
Your batch file will need to check whether the last command succeeded and exit if it didn't.
See How do I make a batch file terminate upon encountering an error?
I have a Python script which does some work. I want to re-run this script automatically, and also relaunch it after any crashes/freezes.
I can do something like this:
while True:
    try:
        main()
    except Exception:
        os.execv(sys.executable, ['python'] + sys.argv)
But, for an unknown reason, this still crashes or freezes once every few days. When I see the crash I type python main.py in cmd and it starts again, so I don't know why os.execv doesn't do this by itself. I guess it's because this code is part of the app itself. So I would prefer a script/app that controls the relaunch externally; I hope that will be more stable.
So this script should work in this way:
Start any script
Check that this script's process is running, for example by checking the change time of some file, and identify it by process name|ID|etc.
When it disappears from the process list, launch it again
When the file has not changed for more than 5 minutes, stop the process, wait a few seconds, launch it again
In general: be cross-platform (Linux/Windows)
(not important) log all crashes.
I can do this myself (I'm working on it right now), but I'm pretty sure something like this has already been done by somebody; I just can't find it on Google/GitHub.
UPDATE: I added the code from @hansaplast's answer to GitHub, with some changes: relauncher. Feel free to copy/use it.
As it needs to work both on Windows and on Linux, I don't know a way to do that with standard tools, so here's a DIY solution:
from subprocess import Popen
import os
import time

# change into the script's directory
abspath = os.path.abspath(__file__)
dname = os.path.dirname(abspath)
os.chdir(dname)

while True:
    p = Popen(['python', 'my_script.py', 'arg1', 'arg2'])
    time.sleep(20)  # give the program some time to write into the logfile
    while True:
        if p.poll() is not None:
            print('crashed or regularly terminated')
            break
        file_age_in_s = time.time() - os.path.getmtime('output.log')
        if file_age_in_s > 60:
            print('frozen, killing process')
            p.kill()
            break
        time.sleep(1)
    print('restarting..')
Explanation:
time.sleep(20): give script 20 seconds to write into the log file
poll(): regularly check if the script died (either crashed or terminated regularly; you can check the return value of poll() to differentiate, as shown in the snippet below)
getmtime(): regularly check output.log and test whether it was changed within the past 60 seconds
time.sleep(1): between every check wait for 1s as otherwise it would eat up too many system resources
The script assumes that the check-script and the run-script are in the same directory. If that is not the case, change the lines beneath "change into scripts directory"
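As a small illustration of the poll() point above (a hypothetical helper of my own, not part of the original answer), the return code distinguishes a crash from a clean exit:

def report_exit(p):
    """Return True if the watched process has exited, and report how it ended."""
    rc = p.poll()
    if rc is None:
        return False  # still running
    if rc == 0:
        print('regularly terminated')
    else:
        print('crashed with exit code', rc)
    return True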
I personally like the supervisor daemon, but it has two issues here:
It is only for unix systems
It restarts the app only on crashes, not on freezes.
But it has a simple XML-RPC API, which makes the job of writing a freeze-watchdog yourself much simpler: you could start your process under supervisor and restart it via the supervisor API when you see it freeze (see the sketch after the config below).
You could install it via apt install supervisor on Ubuntu and write a config like this:
[program:main]
user=vladimir
command=python3 /var/local/main/main.py
process_name=%(program_name)s
directory=/var/local/main
autostart=true
autorestart=true
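Building on that, a rough sketch of the freeze-watchdog idea using supervisor's XML-RPC API. The port 9001 (which requires an [inet_http_server] section in the config), the process name main and the output.log path are my assumptions based on the config above, not something the answer prescribes:

import os
import time
import xmlrpc.client

# supervisor exposes its XML-RPC interface at /RPC2 when inet_http_server is enabled
server = xmlrpc.client.ServerProxy('http://localhost:9001/RPC2')

while True:
    time.sleep(30)
    age = time.time() - os.path.getmtime('/var/local/main/output.log')
    if age > 300:  # no log activity for 5 minutes: assume the process is frozen
        server.supervisor.stopProcess('main')
        server.supervisor.startProcess('main')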
I'm writing some code in Python 3 on Windows that looks like this:
try:
    # do something that takes a long time
    # (training a neural network in TensorFlow, as it happens)
    ...
except KeyboardInterrupt:
    print('^C')
    # print a summary of results,
    # still useful even if the training was cut short early
    ...
This works perfectly if run directly from the console with python foo.py.
However, if the call to Python was within a batch file, it ends up doing all the above but then still spamming the console with the 'terminate batch job' prompt.
Is there a way to stop that happening? By fully eating the ^C within Python, jumping all the way out of the batch file or otherwise?
Use the break command (more info here) in the batch file, which will disable Ctrl+C from halting the file.
EDIT: According to this site about the break command:
Newer versions of Windows (Windows ME, Windows 2000, Windows XP, and higher) only include this command for backward compatibility and turning the break off has no effect.
I personally tested this and can confirm it; I will edit when I find a workaround.
EDIT #2: You could have a second batch script that runs start "" /b /wait cmd /c "yourfile.bat", although this is known to cause glitches with other nested batch files.
The flag to disable Ctrl+C is inherited by child processes, so Python will no longer raise a KeyboardInterrupt. Plus we still have bugs here in Python if reading from the console gets interrupted by Ctrl+C without getting a SIGINT from the CRT. The Python script should manually enable Ctrl+C via ctypes. Use import ctypes; kernel32 = ctypes.WinDLL('kernel32', use_last_error=True); success = kernel32.SetConsoleCtrlHandler(None, False)
EDIT #3: As pointed out by eryksun (in the comments), you can use ctypes to ENABLE it:
import ctypes

kernel32 = ctypes.WinDLL('kernel32', use_last_error=True)
success = kernel32.SetConsoleCtrlHandler(None, False)
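For illustration, a hedged sketch of where that call could sit in the script from the question; the train() placeholder and the error check are my additions, not part of the original comment:

import ctypes
import sys

if sys.platform == 'win32':
    # undo the inherited "ignore Ctrl+C" flag so KeyboardInterrupt can be raised again
    kernel32 = ctypes.WinDLL('kernel32', use_last_error=True)
    if not kernel32.SetConsoleCtrlHandler(None, False):
        raise ctypes.WinError(ctypes.get_last_error())

try:
    train()  # placeholder for the long-running work from the question
except KeyboardInterrupt:
    print('^C')
    # print the summary of results here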
EDIT #4: I think I found it; try this (although it may not work). Can you use the threading module?
import time
from threading import Thread

def noInterrupt():
    for i in range(4):
        print(i)
        time.sleep(1)

a = Thread(target=noInterrupt)
a.start()
a.join()
print("done")