Run a (detached) QProcess using linux aliases and get result - python

I'm trying to create a "runner" using QProcess.startDetached, but I'm having issues with Linux aliases.
So far I've been able to run an alias using this code (based on this answer):
QtCore.QProcess.startDetached('/bin/bash', ['-i', '-c', 'somealias'])
The issue is that, since the actual "program" being run is bash, it always returns True even if the alias doesn't exist: the process "believes" it has started even when it actually hasn't, because bash itself was run successfully even though the alias was never found.
Obviously, this is not going to be a cross-platform program, and I know I could try to parse the defined aliases from the output of the alias command (but, since aliases might contain bash commands themselves, that could be a problem).
Anyway, I wonder if there's another way to run a detached QProcess using Linux aliases and still get a False return value if the alias doesn't exist.

As the docs point out:
bool QProcess::startDetached(const QString &program, const QStringList &arguments, const QString &workingDirectory = QString(), qint64 *pid = nullptr)
This function overloads startDetached().
Starts the program program with the arguments arguments in a new
process, and detaches from it. Returns true on success; otherwise
returns false. If the calling process exits, the detached process will
continue to run unaffected.
Argument handling is identical to the respective start() overload.
The process will be started in the directory workingDirectory. If
workingDirectory is empty, the working directory is inherited from the
calling process.
If the function is successful then *pid is set to the process
identifier of the started process.
(emphasis mine)
In other words, the returned boolean only indicates whether the program (here, bash) was started; it does not tell you whether the command that bash executed actually succeeded, so it is not suitable for your requirement.
In this case, the solution is to use the finished signal, which reports the exit code:
from PyQt5 import QtCore

if __name__ == "__main__":
    import sys

    command = "foo"

    app = QtCore.QCoreApplication(sys.argv)
    process = QtCore.QProcess()

    def on_finished(exitCode, exitStatus):
        print(exitCode == 0)
        QtCore.QCoreApplication.quit()

    process.finished.connect(on_finished)
    process.start("/bin/bash", ["-i", "-c", command])
    sys.exit(app.exec_())
Output:
False
If you prefer a blocking check, you can wrap the same logic in a function and wait for the result with a local QEventLoop:
from PyQt5 import QtCore

def check_alias(command):
    val = False

    process = QtCore.QProcess()
    loop = QtCore.QEventLoop()

    def on_finished(exitCode, exitStatus):
        nonlocal val
        val = exitCode == 0
        loop.quit()

    process.finished.connect(on_finished)
    process.start("/bin/bash", ["-i", "-c", command])
    loop.exec_()
    return val

if __name__ == "__main__":
    import sys

    app = QtCore.QCoreApplication(sys.argv)
    for command in ("foo", "ls"):
        print(f"{command}: {check_alias(command)}")
Output:
foo: False
ls: True
Or, more simply, block with waitForFinished():
from PyQt5 import QtCore

def check_alias(command):
    process = QtCore.QProcess()
    process.start("/bin/bash", ["-i", "-c", command])
    process.waitForFinished(-1)
    return process.exitCode() == 0

if __name__ == "__main__":
    import sys

    app = QtCore.QCoreApplication(sys.argv)
    for command in ("foo", "ls"):
        print(f"{command}: {check_alias(command)}")
Output:
foo: False
ls: True
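Building on the same idea, a minimal sketch of my own (not part of the original answer; start_detached_alias is a made-up helper) is to verify the alias synchronously first and only call startDetached if the check succeeds, so you keep the detached behaviour while still getting a meaningful boolean:
from PyQt5 import QtCore

def start_detached_alias(command):
    # Check in an interactive bash whether the alias (or command) exists;
    # "type" exits with a non-zero status if it is not found.
    check = QtCore.QProcess()
    check.start("/bin/bash", ["-i", "-c", "type {}".format(command)])
    check.waitForFinished(-1)
    if check.exitCode() != 0:
        return False  # alias not found, nothing gets started
    # Only now start the real, detached process.
    return QtCore.QProcess.startDetached("/bin/bash", ["-i", "-c", command])
Note that there is a small window between the check and the actual start, but it does give you the False return value the question asks for.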

Related

Can't kill robocopy subprocess from python

In my project on windows, I would like to start the mirroring of two directories.
I know that I can use python watchdog to do that, but I thought that using robocopy would be easier and faster.
To simplify the situation, let's assume I have a GUI with two buttons: start and stop mirroring.
Here below is a snippet with the relevant code:
class MirrorDemon(Thread):
    def __init__(self, src, dest):
        self.threading_flag = Event()
        self.src = src
        self.dest = dest
        self.opt = ' /MIR /MON:1 /MOT:1'
        self.mirror = None
        Thread.__init__(self)

    def run(self):
        command = 'robocopy {} {} {}'.format(str(self.src), str(self.dest), self.opt)
        self.p = subprocess.Popen(command.split(), shell=True)
        print(command)
        print('start robocopy with PID {}'.format(self.p.pid))

class Window(QMainWindow, Ui_MainWindow):
    def __init__(self, parent=None):
        super().__init__(parent)
        self.setupUi(self)

    def stop_demon(self):
        self.mirror.threading_flag.set()
        self.mirror.p.kill()
        self.mirror.join()
        print('stop demon')

    def start_demon(self):
        self.mirror = MirrorDemon(Path('./src'), Path('./dest'))
        self.mirror.setDaemon(True)
        self.mirror.start()
        print('start demon')

if __name__ == "__main__":
    app = QApplication(sys.argv)
    win = Window()
    win.show()
    sys.exit(app.exec())
When you click on the start button, a PID is printed to the console; if I check this PID in the tasklist it corresponds to a 'cmd.exe' process, and robocopy starts its job.
When you click on stop, the cmd.exe process corresponding to that PID disappears, but the background robocopy continues!
I have tried several variations, but no luck.
Do you have any advice? Do you know if somebody has found a solution? Or maybe implemented a mirroring watchdog?
Thanks
Update
Following the suggestion of @Ulrich, setting shell=False actually does the trick and kills the robocopy process.
Thanks!
By changing this:
self.p = subprocess.Popen(command.split(), shell=True)
To this:
self.p = subprocess.Popen(command.split(), shell=False)
... you're ensuring that the process is started directly from the current process, without a new shell process being spawned to run it.
The PID you were getting back was for the shell process, and you can kill the shell without killing processes launched from that shell. By not starting it in a new shell, the PID you're getting back is the PID of the actual process and you'll be able to kill it as expected.
As the documentation states: "The only time you need to specify shell=True on Windows is when the command you wish to execute is built into the shell (e.g. dir or copy). You do not need shell=True to run a batch file or console-based executable."
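If shell=True really were required for some reason, p.kill() would again only terminate cmd.exe; on Windows you could instead kill the whole process tree with taskkill. A hedged sketch (kill_tree is a made-up helper, not part of the answer above):
import subprocess

def kill_tree(proc):
    # taskkill /T terminates the given PID and all of its children, so the
    # robocopy spawned by cmd.exe goes away as well; /F forces termination.
    subprocess.run(['taskkill', '/F', '/T', '/PID', str(proc.pid)])

p = subprocess.Popen('robocopy ./src ./dest /MIR /MON:1 /MOT:1', shell=True)
kill_tree(p)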

Creating a Flag file

I'm relatively new to python so please forgive early level understanding!
I am working to create a kind of flag file. Its job is to monitor a Python executable: the flag file is constantly running and prints "Start" when the executable has started, "Running" while it runs, and "Stop" when it has stopped or crashed. If a crash occurs I want it to be able to restart the script. So far I have this down for the restart:
from subprocess import run
from time import sleep

# Path and name to the script you are trying to start
file_path = "py"

restart_timer = 2

def start_script():
    try:
        # Make sure 'python' command is available
        run("python " + file_path, check=True)
    except:
        # Script crashed, let's restart it!
        handle_crash()

def handle_crash():
    sleep(restart_timer)  # Restarts the script after 2 seconds
    start_script()

start_script()
How can I implement this along with a flag file?
Not sure what you mean by "flag", but this minimally achieves what you want (a flag-file variant is sketched at the end of this answer).
Main file main.py:
import subprocess
import sys
from time import sleep

restart_timer = 2
file_path = 'sub.py'  # file name of the other process

def start():
    try:
        # sys.executable -> same python executable
        subprocess.run([sys.executable, file_path], check=True)
    except subprocess.CalledProcessError:
        sleep(restart_timer)
        return True
    else:
        return False

def main():
    print("starting...")
    monitor = True
    while monitor:
        monitor = start()

if __name__ == '__main__':
    main()
Then the process that gets spawned, called sub.py:
from time import sleep

sleep(1)
print("doing stuff...")

# comment out to see change
raise ValueError("sub.py is throwing error...")
Put those files into the same directory and run it with python main.py
You can comment out the throwing of the random error to see the main script terminate normally.
On a larger note, this example is not meant to claim that this is a good way to achieve the robustness you need...
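If by "flag file" you mean a small status file that other tools can watch, a minimal sketch (my own addition; the file name status.flag is made up) could write the state around the run() call in main.py:
import subprocess
import sys
from pathlib import Path
from time import sleep

FLAG_FILE = Path("status.flag")  # hypothetical name for the status file
restart_timer = 2
file_path = 'sub.py'

def set_flag(state):
    # Write the current state ("start", "running" or "stop") to the flag file.
    FLAG_FILE.write_text(state)

def start():
    set_flag("start")
    try:
        set_flag("running")
        subprocess.run([sys.executable, file_path], check=True)
    except subprocess.CalledProcessError:
        set_flag("stop")  # crashed; the caller restarts it after the delay
        sleep(restart_timer)
        return True
    else:
        set_flag("stop")  # terminated normally
        return False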

Stop one Python script that is running within another

I have a Python app that initiates from a main script, let's say a main.py. main.py (since my app is organized) references and imports other .py files within the same directory, that house other functions. As my app is continuously running, it imports such a function from another script, which is also supposed to run forever until it is explicitly cancelled.
Thing is, how would I cancel that specific script, while leaving its affected variables untouched and the main script/larger app still running?
I do not know how I would go about targeting a specific function to stop its execution.
I use a kill function in my utils to kill any unneeded Python process whose name I know. Note: the following code was tested on (and works on) Ubuntu Linux and macOS machines.
import os
import signal
import subprocess

def get_running_pids(process_name):
    pids = []
    p = subprocess.Popen(['ps', '-A'], stdout=subprocess.PIPE)
    out, err = p.communicate()
    for line in out.splitlines():
        if process_name in line.decode('utf-8'):
            pid = int(line.decode('utf-8').split(None, 1)[0])
            pids.append(pid)
    return pids

def kill_process_with_name(process_name):
    pids = get_running_pids(process_name)
    for pid in pids:
        os.kill(pid, signal.SIGKILL)
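For example (a usage sketch; the script name long_task.py is made up), killing every process whose ps -A line contains that name is then a single call:
kill_process_with_name('long_task.py')  # sends SIGKILL to every matching PID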
You could set up user-defined, custom exceptions by extending Python's built-in Exception class. Further reading here: Python's User Defined Exceptions.
CustomExceptions.py:
class HaltException(Exception):
    pass
main.py:
from CustomExceptions import HaltException

class Functions():
    def a(self):
        print("hey")
        self.b()
        return "1"

    def b(self):
        print("hello")
        raise HaltException()

def main():
    func_obj = Functions()
    try:
        func_obj.a()
    except HaltException as e:
        pass
    print("Awesome")

main()
Programs may name their own exceptions by creating a new exception
class (see Classes for more about Python classes). Exceptions should
typically be derived from the Exception class, either directly or
indirectly.

Python send a command to Xterm

I have a python script that opens up a file for me in emacs, and to do that it calls a process in xterm like so:
"""AutoEmacs Document"""
# imports
import sys
import os
import psutil
import subprocess
from argparse import ArgumentParser
# constants
xlaunch_config = "C:\\cygwin64\\home\\nalis\\Documents\\experiments\\emacs\\Autoemacs\\config.xlaunch"
script = "xterm -display :0 -e emacs-w32 --visit {0}"
# exception classes
# interface functions
# classes
# internal functions & classes
def xlaunch_check():
# checks if an instance of Xlaunch is running
xlaunch_state = []
for p in psutil.process_iter(): #list all running process
try:
if p.name() == 'xlaunch.exe':# once xlaunch is found make an object
xlaunch_state.append(p)
except psutil.Error: # if xlaunch is not found return false
return False
return xlaunch_state != [] #double checks that xlaunch is running
def xlaunch_run(run):
if run == False:
os.startfile(xlaunch_config)
return 0 #Launched
else:
return 1 #Already Running
def emacs_run(f):
subprocess.Popen(script.format(f))
return 0#Launched Sucessfully
def sysarg():
f = sys.argv[1]
il = f.split()
l = il[0].split('\\')
return l[(len(l) - 1)]
def main():
f = sysarg()
xlaunch_running = xlaunch_check()
xlaunch_run(xlaunch_running)
emacs_run(f)
return 0
if __name__ == '__main__':
status = main()
sys.exit(status)
It works fairly well, with the occasional bug, but I want to make it a little more versatile by having Python send commands (such as "-e emacs-w32") to the xterm console after it has been launched, based on the input it receives. I've already tried something like this:
# A test to send Xterm commands
import subprocess

xterm = subprocess.Popen('xterm -display :0', shell=True)
xterm.communicate('-e emacs')
but that doesn't seem to do anything besides launch the terminal. I've done some research on the matter, but it has only left me confused. Some help would be very much appreciated.
To open emacs in a terminal emulator, use this:
Linux:
Popen(['xterm', '-e', 'emacs'])
Windows:
Popen(['cmd', '/K', 'emacs'])
For cygwin use:
Popen(['mintty', '--hold', 'error', '--exec', 'emacs'])
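As for sending the command afterwards: xterm takes the program to run via -e on its command line at start-up and does not read further commands from stdin, which is why xterm.communicate('-e emacs') only launches a bare terminal. A hedged sketch (open_in_xterm is a made-up helper) that builds the full command before launching:
import subprocess

def open_in_xterm(path, display=":0"):
    # -e must come last: everything after it is the program (plus arguments)
    # that xterm will run.
    cmd = ["xterm", "-display", display, "-e", "emacs-w32", "--visit", path]
    return subprocess.Popen(cmd)

proc = open_in_xterm("notes.txt")  # "notes.txt" is just an example file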

Python Daemon calling a subprocess periodically

I am building a simple Python daemon based on Sander Marechal's code. The daemon's whole purpose is to run a PHP file every second (the PHP file loops through a database, checking values and updating the database). The problem arises in this section:
subprocess.call(['php','test.php'])
I can run "php test.php" on shell and it does what it is suppose to do but when it is called periodically from the daemon it doesn't seem to be executed. I also know daemon works on the background via checking running process ps aux | grep "daemon-example" also i included a do_something function which records every time function executed and appends time to a text file.
#!/usr/bin/env python
import sys, time, subprocess
from daemon import Daemon

def runphp():
    #subprocess.call(['php ~/pydaemon/test.php'], shell=True)
    subprocess.call(['python', 'test.py'])

def do_something():
    with open("/tmp/current_time.txt", 'a') as f:
        f.write("The time is now\n" + time.ctime())

class MyDaemon(Daemon):
    def run(self):
        while True:
            time.sleep(1)
            do_something()
            subprocess.call(['php', 'test.php'])
            #runphp()

if __name__ == "__main__":
    daemon = MyDaemon('/tmp/daemon-example.pid')
    if len(sys.argv) == 2:
        if 'start' == sys.argv[1]:
            daemon.start()
        elif 'stop' == sys.argv[1]:
            daemon.stop()
        elif 'restart' == sys.argv[1]:
            daemon.restart()
        else:
            print "Unknown command"
            sys.exit(2)
        sys.exit(0)
    else:
        print "usage: %s start|stop|restart" % sys.argv[0]
        sys.exit(2)
The script you are trying to run is not executed because the working directory is the root directory ('/') and that's because of this piece of code:
# decouple from parent environment
os.chdir("/")
So your code actually tries to execute python /test.py (which does not exist) and not 'your_current_directory/test.py'.
To fix it either remove os.chdir("/"), or provide the full path to the file like so:
subprocess.call(['python','my_full_path_to_working_directory/test.py'])
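A related pattern (a sketch, assuming test.php sits next to the daemon script) is to resolve the absolute path once from __file__, so the daemon keeps working no matter which directory it has changed into:
import os
import subprocess

# Directory that the daemon script itself lives in.
BASE_DIR = os.path.dirname(os.path.abspath(__file__))

def runphp():
    subprocess.call(['php', os.path.join(BASE_DIR, 'test.php')])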
