PyInstaller executable doesn't run processes when hidden - Python

I'm trying to run a single-file Python executable packaged with PyInstaller. The script contains system commands that need to be executed. However, when I attempt to run them (on Windows) they do not execute. The thing is, they only fail when the PyInstaller no-console option is used, which hides the console and runs the program in the background.
I am using the following options: --noconsole and -F.
I have tried not only subprocess.Popen but also os.popen(), and neither works.
Also, I need the console output, so os.system() is not an option... please answer with this in mind. That said, os.system() did actually execute the commands, so I think capturing the output is the issue. I am assuming that I have to change the standard output or something, or that if a command is executed without a console, the output is lost or never generated in the first place. Sorry if I sound inexperienced.
I do not have any antivirus software running on my computer, and no Windows Defender messages or the like are appearing. I understand that this is a precarious combination - running system commands while hidden - but I only wish to make a non-malicious program that kills another program every minute. Sorry if anything is left unclarified... just ask if anything is unclear. Thanks :)
EDIT
Here's some code to help
command = data['command']
command_split = command.split(" ")
p = subprocess.Popen(command_split, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()
result = out.decode() if out else err.decode()
io.emit("client:console_output", {
    'output': result,
    'adminID': data['adminID']
})
EDIT 2
Understand that this function works when the console is not hidden; therefore, it has nothing to do with the logic or the code, it is solely because the console is being hidden. Here is the output anyway.
x = "taskkill /im chrome.exe /f"
print(x.split(" "))
-> ['taskkill', '/im', 'chrome.exe', '/f']

You need to use subprocess.PIPE to redirect the process output to a variable. You also need to supply the subprocess's stdin and close it manually, since a --noconsole executable has no console whose standard streams the child process could inherit.
Then you can simply disable console with -w or --noconsole flag.
import subprocess
p = subprocess.Popen(["ipconfig"], shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)
out, err = p.communicate()
p.stdin.close()
result = out.decode() if out else err.decode()
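Applied to the taskkill example from the question, a minimal sketch might look like the following (an assumption of Python 3 on Windows; subprocess.CREATE_NO_WINDOW requires Python 3.7+ and simply keeps a console window from flashing up):
import subprocess

# Sketch only: supply all three standard handles, since a --noconsole build
# has no console for the child process to inherit them from.
command = "taskkill /im chrome.exe /f"
p = subprocess.Popen(
    command.split(" "),
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    creationflags=subprocess.CREATE_NO_WINDOW,  # Windows only, Python 3.7+
)
out, err = p.communicate()
result = out.decode() if out else err.decode()  # send result over the socket as in the question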

Related

Run a program in the background and then open another program using subprocess

On the terminal, I have two programs to run using subprocess
First, I will call ./matrix-odas & so the first program will run in the background and I can then type the second command. The first command will return some messages.
The second command ~/odas/bin/odaslive -vc ~/odas/config/odaslive/matrix_creator.cfg will open the second program and it will keep running and keep printing out text. I'd like to use subprocess to open these programs and capture both outputs.
I have never used subprocess before; following tutorials, I am writing the script in a Jupyter notebook (Python 3.7) in order to see the output easily.
from subprocess import Popen, PIPE
p = Popen(["./matrix-odas", "&"], stdout=PIPE, stderr=PIPE, cwd=wd, universal_newlines=True)
stdout, stderr = p.communicate()
print(stdout)
This is the code that I tried in order to open the first program, but the Jupyter notebook always gets stuck at p.communicate() and I can't see the messages. Without running the first program in the background, I won't be able to get the command prompt back after the messages are printed.
I would like to know which subprocess function I should use to solve this issue, and which platform is better for testing subprocess code. Any suggestions will be appreciated. Thank you so much!
From this example at the end of this section of the docs:
with Popen(["ifconfig"], stdout=PIPE) as proc:
    log.write(proc.stdout.read())
it looks like you can access stdout (and I would assume stderr) from the object directly. I am not sure whether you need to use Popen as a context manager to access that property or not.
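As a rough, untested sketch of how that might be applied here (note that a literal "&" in the argument list is passed straight to the program rather than interpreted by a shell, so it can simply be dropped), you could read each process's output from a background thread instead of calling communicate(), which blocks until the process exits:
import os
import threading
from subprocess import Popen, PIPE, STDOUT

def drain(proc, prefix):
    # Print lines as they arrive so the pipe never fills up and stalls the child.
    for line in proc.stdout:
        print(prefix, line, end="")

# Paths are the ones from the question; set cwd as appropriate for matrix-odas.
odas = Popen(["./matrix-odas"], stdout=PIPE, stderr=STDOUT, universal_newlines=True)
threading.Thread(target=drain, args=(odas, "[matrix-odas]"), daemon=True).start()

odaslive = Popen([os.path.expanduser("~/odas/bin/odaslive"), "-vc",
                  os.path.expanduser("~/odas/config/odaslive/matrix_creator.cfg")],
                 stdout=PIPE, stderr=STDOUT, universal_newlines=True)
threading.Thread(target=drain, args=(odaslive, "[odaslive]"), daemon=True).start()

# Both programs now run in the background while their output is printed above.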

Python subprocess.call not waiting for process to finish - Blender

I have a Python script in Blender that contains
subprocess.call(os.path.abspath('D:/Test/run-my-script.sh'),shell=True)
followed by other code that depends on this shell script having finished. The problem is that it doesn't wait for the script to finish, and I don't know why. I even tried using Popen instead of call, as shown:
p1 = subprocess.Popen(os.path.abspath('D:/Test/run-my-script.sh'),shell=True)
p1.wait()
and I tried using communicate, but it still didn't work:
p1 = subprocess.Popen(os.path.abspath('D:/Test/run-my-script.sh'),shell=True).communicate()
This shell script works great on macOS (after changing paths) and waits when using subprocess.call(['sh', '/userA/Test/run-my-script.sh'])
but on Windows this is what happens: I run the Python script below in Blender, and once it gets to the subprocess line, Git Bash opens and runs the shell script while Blender doesn't wait for it to finish; it just prints Hello in its console without waiting for Git Bash. Any help?
import bpy
import os
import subprocess
subprocess.call(os.path.abspath('D:/Test/run-my-script.sh'),shell=True)
print('Hello')
You can use subprocess.call to do exactly that.
subprocess.call(args, *, stdin=None, stdout=None, stderr=None, shell=False, timeout=None)
Run the command described by args. Wait for command to complete, then return the returncode attribute.
Edit: I think I have a hunch about what's going on. The command works on your Mac because Macs, I believe, support Bash out of the box (or at least something functionally equivalent), while on Windows it sees your attempt to run a ".sh" file and instead fires up Git Bash, which I presume performs a couple of forks when starting.
Because of this, Python thinks that your script is done, since the PID it launched is gone.
If I were you I would do this:
Generate a unique, non-existing, absolute path in your "launching" script using the tempfile module.
When launching the script, pass the path you just made as an argument.
When the script starts, have it create a file at the path. When done, delete the file.
The launching script should watch for the creation and deletion of that file to indicate the status of the script.
Hopefully that makes sense.
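A minimal sketch of that idea, assuming run-my-script.sh is changed to touch "$1" when it starts and rm "$1" when it finishes, and that bash (Git Bash) is on the PATH:
import os
import subprocess
import tempfile
import time
import uuid

# Generate a unique, non-existing sentinel path and pass it to the script.
sentinel = os.path.join(tempfile.gettempdir(), "blender-job-" + uuid.uuid4().hex)
subprocess.Popen(["bash", os.path.abspath("D:/Test/run-my-script.sh"), sentinel])

# Wait for the script to create the sentinel, then for it to delete it again.
# (Real code would add a timeout to both loops.)
while not os.path.exists(sentinel):
    time.sleep(0.1)
while os.path.exists(sentinel):
    time.sleep(0.1)

print("run-my-script.sh has finished")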
You can use Popen.communicate API.
p1 = subprocess.Popen(os.path.abspath('D:/Test/run-my-script.sh'),shell=True)
sStdout, sStdErr = p1.communicate()
From the docs:
Popen.communicate(input=None, timeout=None)
Interact with process: Send data to stdin. Read data from stdout and stderr, until end-of-file is reached. Wait for the process to terminate.
subprocess.run will by default wait for the process to finish.
Use subprocess.Popen and Popen.wait:
process = subprocess.Popen(['D:/Test/run-my-script.sh'],shell=True, executable="/bin/bash")
process.wait()
You could also use check_call() instead of Popen.
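For example, a minimal check_call sketch (path taken from the question; invoking bash explicitly is an assumption, since .sh files are not directly executable on Windows):
import subprocess

# check_call blocks until the script exits and raises CalledProcessError
# on a non-zero exit status.
subprocess.check_call(["bash", "D:/Test/run-my-script.sh"])
print("Hello")  # only reached after the script has finished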
You can use os.system, like this:
import bpy
import os
os.system("sh "+os.path.abspath('D:/Test/run-my-script.sh'))
print('Hello')
There are apparently cases when the run command fails.
This is my workaround:
import os
import subprocess
import time

def check_has_finished(pfi, interval=1, timeout=100):
    if os.path.exists(pfi):
        if pfi.endswith('.nii.gz'):
            mustend = time.time() + timeout
            while time.time() < mustend:
                try:
                    # Command is an ad hoc one to check if the process has finished.
                    subprocess.check_output('command {}'.format(pfi), shell=True)
                except subprocess.CalledProcessError:
                    print("Caught CalledProcessError")
                else:
                    return True
                time.sleep(interval)
            msg = 'command {0} not working after {1} tests. \n'.format(pfi, timeout)
            raise IOError(msg)
        else:
            return True
    else:
        msg = '{} does not exist!'.format(pfi)
        raise IOError(msg)
A wild guess, but are you running the shell as Admin while Blender runs as a regular user, or vice versa?
Long story short (very short), Windows UAC creates a sort of isolation between admin and regular-user processes, so random quirks like this can happen. Unfortunately I can't remember the source of this; the closest I found is this.
My problem was the exact opposite of yours: wait() got stuck in an infinite loop because my Python REPL was launched from an admin shell and wasn't able to read the state of the regular-user subprocess. Reverting to a normal user shell fixed it. It's not the first time I've been bitten by this UAC snafu.

Unable to Exit or Quit Python Script From Command Line

I am running this python code from the command line:
# run on command line as: python firstscript.py
import sys, subprocess
pid = subprocess.Popen([sys.executable, 'secondscript.py']).pid
sys.exit()
Unfortunately I can't get it to exit all the way to the command line. If I hit the enter key (on OS X) it will finally exit. Is there a way to force the script to exit all the way to the command line without lingering in this weird limbo state? Also, I don't want to redirect stdout or stderr anywhere else, because if I do, I lose the ability of secondscript.py to log output to a log file.
Thanks for the help.
The changes below worked for me:
# run on command line as: python firstscript.py
import sys, subprocess
process = subprocess.Popen([sys.executable, 'secondscript.py'])
output = process.communicate()[0]
You seem to be asking whether there is a better way to do this. Check out check_output. I have always found it much more convenient and foolproof compared to the lower-level stuff in subprocess.
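For illustration, a minimal check_output sketch (assuming secondscript.py writes something to stdout):
import subprocess
import sys

# check_output runs secondscript.py, waits for it to exit, and returns its
# stdout as bytes; it raises CalledProcessError on a non-zero exit status.
output = subprocess.check_output([sys.executable, 'secondscript.py'])
print(output.decode())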

How to execute a shell script in the background from a Python script

I am working on executing the shell script from Python and so far it is working fine. But I am stuck on one thing.
In my Unix machine I am executing one command in the background by using & like this. This command will start my app server -
david#machineA:/opt/kml$ /opt/kml/bin/kml_http --config=/opt/kml/config/httpd.conf.dev &
Now I need to execute the same thing from my Python script, but as soon as it executes my command it never goes to the else block and never prints out execute_steps::Successful; it just hangs there.
proc = subprocess.Popen("/opt/kml/bin/kml_http --config=/opt/kml/config/httpd.conf.dev &", shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, executable='/bin/bash')
if proc.returncode != 0:
logger.error("execute_steps::Errors while executing the shell script: %s" % stderr)
sleep(0.05) # delay for 50 ms
else:
logger.info("execute_steps::Successful: %s" % stdout)
Anything wrong I am doing here? I want to print out execute_steps::Successful after executing the shell script in the background.
All other commands work fine; only the command I am trying to run in the background doesn't.
There are a couple of things going on here.
First, you're launching a shell in the background, and then telling that shell to run the program in the background. I don't know why you think you need both, but let's ignore that for now. In fact, by adding executable='/bin/bash' on top of shell=True, you're actually trying to run a shell to run a shell to run the program in the background, although that doesn't actually quite work.*
Second, you're using PIPE for the process's output and error, but then not reading them. This can cause the child to deadlock. If you don't want the output, use DEVNULL, not PIPE. If you want the output to process yourself, use proc.communicate(),** or use a higher-level function like check_output. If you just want it to intermingle with your own output, just leave those arguments off.
* If you're using the shell because kml_http is a non-executable script that has to be run by /bin/bash, then don't use shell=True for that, or executable; just make /bin/bash the first argument in the command line, and /opt/kml/bin/kml_http the second. But this doesn't seem likely; why would you install something non-executable into a bin directory?
** Or you can read it explicitly from proc.stdout and proc.stderr, but that gets more complicated.
At any rate, the whole point of executing something in the background is that it keeps running in the background, and your script keeps running in the foreground. So, you're checking its returncode before it's finished, and then moving on to whatever's next in your code, and never coming back again.
It seems like you want to wait for it to be finished. In that case, don't run it in the background—use proc.wait, or just use subprocess.call() instead of creating a Popen object. And don't use & either, of course. While we're at it, don't use the shell, either:
retcode = subprocess.call(["/opt/kml/bin/kml_http",
                           "--config=/opt/kml/config/httpd.conf.dev"],
                          stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
if retcode != 0:
    # etc.
Now, you won't get to that if statement until kml_http finishes running.
If you want to wait for it to be finished, but at the same time keep doing other stuff, then you're trying to do two things at once in your program, which means you need a thread to do the waiting:
def run_kml_http():
    retcode = subprocess.call(["/opt/kml/bin/kml_http",
                               "--config=/opt/kml/config/httpd.conf.dev"],
                              stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    if retcode != 0:
        # etc.

t = threading.Thread(target=run_kml_http)
t.start()
# Now you can do other stuff in the main thread, and the background thread will
# wait around until kml_http is finished and execute the `if` statement whenever
# that happens
You're using stderr=PIPE, stdout=PIPE, which means that rather than letting the stdout and stderr of the child process be forwarded to the current process's standard output and error streams, they are being redirected to a pipe which you must read from in your Python process (via proc.stdout and proc.stderr).
To "background" a process, simply omit the usage of PIPE:
#!/usr/bin/python
from subprocess import Popen
from time import sleep

proc = Popen(
    ['/bin/bash', '-c', 'for i in {0..10}; do echo "BASH: $i"; sleep 1; done'])

for x in range(10):
    print "PYTHON: {0}".format(x)
    sleep(1)

proc.wait()
which will show the process being "backgrounded".

Asynchronously read stdout from subprocess.Popen

I am running a sub-program using subprocess.Popen. When I start my Python program from the command window (cmd.exe), the program writes some info and dates to the window as it runs.
When I run my Python code outside a command window, it opens a new command window for this sub-program's output, and I want to avoid that. When I use the following code, it doesn't show the cmd window, but it also doesn't print the status:
p = subprocess.Popen("c:/flow/flow.exe", shell=True, stdout=subprocess.PIPE)
print p.stdout.read()
How can I show the sub-program's output in my program's output as it occurs?
Use this:
cmd = subprocess.Popen(["c:/flow/flow.exe"], stdout=subprocess.PIPE)
for line in cmd.stdout:
    print line.rstrip("\n")
cmd.wait()  # you may already be handling this in your current code
Note that you will still have to wait for the sub-program to flush its stdout buffer (which is commonly buffered differently when not writing to a terminal window), so you may not see each line instantaneously as the sub-program prints it (this depends on various OS details and details of the sub-program).
Also notice how I've removed the shell=True and replaced the string argument with a list, which is generally recommended.
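For completeness, a Python 3 sketch of the same idea, reading line-buffered text so output appears roughly as the sub-program flushes it (c:/flow/flow.exe is the path from the question):
import subprocess

cmd = subprocess.Popen(["c:/flow/flow.exe"],
                       stdout=subprocess.PIPE,
                       stderr=subprocess.STDOUT,
                       universal_newlines=True,
                       bufsize=1)  # line-buffered text mode
for line in cmd.stdout:
    print(line, end="")
cmd.wait()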
Looking for a recipe to process Popen data asynchronously I stumbled upon http://code.activestate.com/recipes/576759-subprocess-with-async-io-pipes-class/
This looks quite promising; however, I got the impression that there might be some typos in it. I haven't tried it yet.
It is an old post, but a common problem with a hard to find solution. Try this: http://code.activestate.com/recipes/440554-module-to-allow-asynchronous-subprocess-use-on-win/
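The recipes linked above aside, a common do-it-yourself sketch of asynchronous reading uses a background thread feeding a queue, so the main loop never blocks on the pipe (illustrative only):
import queue
import subprocess
import threading

def enqueue_output(pipe, q):
    # Push each line of the child's stdout onto a queue as it arrives.
    for line in iter(pipe.readline, ''):
        q.put(line)
    pipe.close()

proc = subprocess.Popen(["c:/flow/flow.exe"], stdout=subprocess.PIPE,
                        universal_newlines=True)
q = queue.Queue()
reader = threading.Thread(target=enqueue_output, args=(proc.stdout, q), daemon=True)
reader.start()

# Do other work here; periodically drain whatever output has arrived so far.
while proc.poll() is None:
    try:
        print(q.get(timeout=0.1), end="")
    except queue.Empty:
        pass  # nothing new yet; the main thread stays free

reader.join()         # process exited: wait for the reader thread to finish...
while not q.empty():  # ...then print anything still queued
    print(q.get(), end="")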
