I had the tskill operation working fine through a Python script under Python 2.7, but when I switched to 2.5 this code fails:
import os
import time
import win32com.client

os.startfile('cmd')
shell = win32com.client.Dispatch("WScript.Shell")
shell.AppActivate(r'Administrator:C:\Windows\System32\cmd.exe')
time.sleep(2)
shell.SendKeys('tskill java + {ENTER}')
time.sleep(2)
shell.SendKeys('tskill javaw + {ENTER}')
#shell.SendKeys('exit + {ENTER}')
I use os.startfile to run a Python script that opens and runs the application (it's a graphics tool).
Note: the application is already running by the time this script is called, so I don't need to open it here.
Can someone please tell me how to close an application successfully using this or a similar method on Python 2.5. Thanks for the help!
I don't have tskill on my system, but I think it's similar to taskkill /F /FI "IMAGENAME eq java*". Anyway, here's how to call a normal console application in the background (i.e. a hidden window) using the subprocess module (tested in Python 2.5.2, but using taskkill instead):
import subprocess

def tskill(*args):
    si = subprocess.STARTUPINFO()
    si.dwFlags |= subprocess.STARTF_USESHOWWINDOW
    command = ('tskill',) + args
    rc = subprocess.call(command, startupinfo=si)
    return rc == 0

if tskill('java') and tskill('javaw'):
    pass  # success
I used call because it only matters whether the call succeeds or fails, but in general you can use subprocess.PIPE to capture stdout and stderr using subprocess.Popen.
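For example, a minimal sketch (not part of the original answer) of capturing the output with Popen and PIPE, assuming you still want the same hidden-window STARTUPINFO:

import subprocess

si = subprocess.STARTUPINFO()
si.dwFlags |= subprocess.STARTF_USESHOWWINDOW
p = subprocess.Popen(['tskill', 'java'], startupinfo=si,
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()  # waits for the process and collects both streams
if p.returncode != 0:
    print err  # Python 2 print statement, matching the question's Python version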
Edit: How to kill a process started with the subprocess module
command = ['path/to/executable', 'arg0', 'arg1', '...']
p = subprocess.Popen(command)

# do something

# kill it if it's still alive
if p.poll() is None:
    p.kill()
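Note that Popen.kill() was only added in Python 2.6. On Python 2.5 (the version in the question) one possible workaround, offered here only as a hedged sketch, is to ask Windows to terminate the child by PID:

# assumes p is the Popen object from above; taskkill ships with Windows
subprocess.call(['taskkill', '/PID', str(p.pid), '/F'])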
Related
I need to wait until the user is done editing a text file in the default graphical application (Debian and derivatives).
If I use xdg-open with subprocess.call (which usually waits), the script continues as soon as the file has been opened in the editor; I assume this is because xdg-open itself starts the editor asynchronously.
I finally got more or less working code by retrieving the launcher for the text/plain MIME type and using it with Gio.DesktopAppInfo.new to get the command for the editor. That works provided the editor is not already open; if it is, the process ends while the editor is still open.
I have also added solutions that check process.pid and poll the process. Both end in an indefinite loop.
This seems like an overly complicated way to wait for the process to finish. So, is there a more robust way to do this?
#! /usr/bin/env python3
import subprocess
from gi.repository import Gio
import os
from time import sleep
import sys

def open_launcher(my_file):
    print('launcher open')
    app = subprocess.check_output(['xdg-mime', 'query', 'default', 'text/plain']).decode('utf-8').strip()
    print(app)
    launcher = Gio.DesktopAppInfo.new(app).get_commandline().split()[0]
    print(launcher)
    subprocess.call([launcher, my_file])
    print('launcher close')

def open_xdg(my_file):
    print('xdg open')
    subprocess.call(['xdg-open', my_file])
    print('xdg close')

def check_pid(pid):
    """ Check for the existence of a unix pid. """
    try:
        os.kill(int(pid), 0)
    except OSError:
        return False
    else:
        return True

def open_pid(my_file):
    pid = subprocess.Popen(['xdg-open', my_file]).pid
    while check_pid(pid):
        print(pid)
        sleep(1)

def open_poll(my_file):
    proc = subprocess.Popen(['xdg-open', my_file])
    while not proc.poll():
        print(proc.poll())
        sleep(1)

def open_ps(my_file):
    subprocess.call(['xdg-open', my_file])
    pid = subprocess.check_output("ps -o pid,cmd -e | grep %s | head -n 1 | awk '{print $1}'" % my_file, shell=True).decode('utf-8')
    while check_pid(pid):
        print(pid)
        sleep(1)

def open_popen(my_file):
    print('popen open')
    process = subprocess.Popen(['xdg-open', my_file])
    process.wait()
    print(process.returncode)
    print('popen close')

# This will end the open_xdg function while the editor is open.
# However, if the editor is already open, open_launcher will finish while the editor is still open.
#open_launcher('test.txt')

# This solution opens the file but the process terminates before the editor is closed.
#open_xdg('test.txt')

# This will loop indefinitely printing the pid even after closing the editor.
# If you check for the pid in another terminal you see the pid with: [xdg-open] <defunct>.
#open_pid('test.txt')

# This will print None once after which 0 is printed indefinitely: the subprocess ends immediately.
#open_poll('test.txt')

# This seems to work, even when the editor is already open.
# However, I had to use head -n 1 to prevent returning multiple pids.
#open_ps('test.txt')

# Like open_xdg, this opens the file but the process terminates before the editor is closed.
open_popen('test.txt')
Instead of trying to poll a PID, you can simply wait for the child process to terminate, using subprocess.Popen.wait():
Wait for child process to terminate. Set and return returncode attribute.
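In its simplest form the pattern looks like this (a minimal sketch only; gedit stands in for whatever editor is resolved below):

proc = subprocess.Popen(['gedit', 'test.txt'])  # launch the editor directly
proc.wait()                                     # blocks until the editor process exits
print(proc.returncode)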
Additionally, getting the first part of get_commandline() is not guaranteed to be the launcher. The string returned by get_commandline() will match the Exec key spec, meaning the %u, %U, %f, and %F field codes in the returned string should be replaced with the correct values.
Here is some example code, based on your xdg-mime approach:
#!/usr/bin/env python3
import subprocess
import shlex
from gi.repository import Gio

my_file = 'test.txt'

# Get default application
app = subprocess.check_output(['xdg-mime', 'query', 'default', 'text/plain']).decode('utf-8').strip()

# Get command to run
command = Gio.DesktopAppInfo.new(app).get_commandline()

# Handle file paths with spaces by quoting the file path
my_file_quoted = "'" + my_file + "'"

# Replace field codes with the file path
# Also handle the special case of the atom editor
command = command.replace('%u', my_file_quoted)\
                 .replace('%U', my_file_quoted)\
                 .replace('%f', my_file_quoted)\
                 .replace('%F', my_file_quoted if app != 'atom.desktop' else '--wait ' + my_file_quoted)

# Run the default application, and wait for it to terminate
process = subprocess.Popen(
    shlex.split(command), stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
process.wait()

# Now the exit code of the text editor process is available as process.returncode
I have a few remarks on my sample code.
Remark 1: Handling spaces in file paths
It is important that the file path to be opened is wrapped in quotes; otherwise shlex.split(command) will split the filename on spaces.
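For illustration (a quick sketch, not in the original answer), with a hypothetical file name containing a space:

import shlex
shlex.split("gedit my file.txt")    # ['gedit', 'my', 'file.txt']  -- path is split apart
shlex.split("gedit 'my file.txt'")  # ['gedit', 'my file.txt']     -- path survives intact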
Remark 2: Escaped % characters
The Exec key spec states
Literal percentage characters must be escaped as %%.
My use of replace() then could potentially replace % characters that were escaped. For simplicity, I chose to ignore this edge case.
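If that edge case mattered, one possible way to handle it (a hedged sketch using re.sub, not part of the original answer) would be to treat %% separately from the field codes:

import re

def expand_field_codes(command, quoted_path):
    # %% becomes a literal '%', while %f/%F/%u/%U are replaced with the quoted path
    def repl(match):
        return '%' if match.group(0) == '%%' else quoted_path
    return re.sub(r'%%|%[fFuU]', repl, command)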
Remark 3: atom
I assumed the desired behaviour is to always wait until the graphical editor has closed. In the case of the atom text editor, it will terminate immediately on launching the window unless the --wait option is provided. For this reason, I conditionally add the --wait option if the default editor is atom.
Remark 4: subprocess.DEVNULL
subprocess.DEVNULL is new in Python 3.3. For older Python versions, the following can be used instead (note the extra import os):
import os

with open(os.devnull, 'w') as DEVNULL:
    process = subprocess.Popen(
        shlex.split(command), stdout=DEVNULL, stderr=DEVNULL)
Testing
I tested my example code above on Ubuntu with the GNOME desktop environment. I tested with the following graphical text editors: gedit, mousepad, and atom.
I have a Python script that calls a shell script, which in turn calls a .exe called iv4_console. I need to print the stdout of iv4_console for debugging purposes. I used this:
Python:
import sys
import subprocess

var = "rW015005000000"
proc = subprocess.Popen(["c.sh", var], shell=True, stdout=subprocess.PIPE)
output = ''
for line in iter(proc.stdout.readline, ""):
    print line
    output += line
Shell:
start_dir=$PWD
release=$1
echo Release inside shell: $release
echo Directory: $start_dir
cd $start_dir
cd ../../iv_system4/ports/visualC12/Debug
echo Debug dir: $PWD
./iv4_console.exe ../embedded/LUA/analysis/verbose-udp-toxml.lua ../../../../../logs/$release/VASP_DUN722_20160307_Krk_Krk_113048_092_1_$release.dvl &>../../../../FCW/ObjectDetectionTest/VASP_DUN722_20160307_Krk_Krk_113048_092_1_$release.xml
./iv4_console.exe ../embedded/LUA/analysis/verbose-udp-toxml.lua ../../../../../logs/$release/VASP_FL140_20170104_C60_Checkout_afterIC_162557_001_$release.dvl &>../../../../FCW/ObjectDetectionTest/VASP_FL140_20170104_C60_Checkout_afterIC_162557_001_$release.xml
exit
But this didn't work, it prints nothing. What do you think?
See my comment; the best approach (IMO) would be to just use Python only.
However, in answer to your question, try:
import sys
import subprocess

var = "rW015005000000"
# Best to avoid shell=True because of security vulnerabilities; pass the script
# and its argument directly to bash instead.
proc = subprocess.Popen(["/bin/bash", "/full/path/to/c.sh", var],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
# communicate() waits for the script to finish and collects its output,
# so the shell script cannot keep running in the background.
output, errors = proc.communicate()
print(output.decode())
# communicate() returns bytes, so .decode() is needed to print the output as a string.
You can use
import subprocess
subprocess.call(['./c.sh'])
to call the shell script from a Python file,
or
import subprocess
import shlex
subprocess.call(shlex.split('./c.sh var'))
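If you also want the script's output back in Python (as in the question), a possible variant (a sketch only, assuming c.sh is executable and on the given relative path) is:

import shlex
import subprocess

output = subprocess.check_output(shlex.split('./c.sh rW015005000000'))
print(output.decode())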
I have written a simple Python script that is supposed to run an ".sh" file when it is executed.
The problem is that the script runs but does not start the ".sh" file. When I execute the ".sh" file manually using "puffy" it does the job, but not when I use my Python script. So, what do I have to change in my script in order for it to work?
I will post the methods below so you can get a better idea. I am also using Python 3.3.5 on Oracle Linux 6.8.
In the method that calls the ".sh" file, I have used Popen:
def runPrestage2Stage(self):
    import time
    time.sleep(60)
    reload(Queries)
    if "FINISHED" in Queries.statusS2S:
        # run batch file
        p = Popen(["sh", "/u01/app/Oracle_ODI_11/oracledi/agent/bin/start_prestage2stage_simple.sh"], stdout=PIPE, stderr=PIPE)
        stdout, stderr = p.communicate()
        print("Prestage2Stage has started")
    elif 'ERROR' in Queries.statusS2S:
        print("Can not start Prestage Converter, please check runtime status of Stage Converter")
    else:
        print("Prestage2Stage Converter cannot be started")
Part of the main method that calls runPrestage2Stage:
DailyLoadsAutomation.DailyLoads.runPrestage2Stage(self)
load_paraprak_prestage = True
count2 = 0
while load_paraprak_prestage:
    reload(Queries)
    sleep(300)  # waits 300 seconds (5 minutes) and re-checks load status.
    if "FINISHED" in Queries.statusPreStage:
        load_paraprak_prestage = False
    else:
        load_paraprak_prestage = True
    if count2 == 8:
        import sys
        sys.exit()
    else:
        count2 += 1
print("PreStage is finished")
When I run the script, it prints "Prestage2Stage has started" and "PreStage is finished", as it is supposed to, but it does not run the ".sh" file.
Any idea what is wrong?
Usually this is related to user rights or errors in the path. You can replace your .sh script with a simple one like echo "I am running" and see if it can be accessed and executed. Since it is under Linux, make sure you have given execution rights to your .sh script via chmod.
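A quick way to see what is actually happening (a hedged sketch, reusing the path from the question) is to capture and print the script's output and exit code:

from subprocess import Popen, PIPE

p = Popen(["sh", "/u01/app/Oracle_ODI_11/oracledi/agent/bin/start_prestage2stage_simple.sh"],
          stdout=PIPE, stderr=PIPE)
stdout, stderr = p.communicate()
# A permission or path problem will show up in stderr and a non-zero return code
print("return code:", p.returncode)
print("stdout:", stdout.decode())
print("stderr:", stderr.decode())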
You can run an .sh file by importing the os module like this:
import os
os.system('sh filename.sh')
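os.system returns the command's exit status (on Unix, encoded in the wait() format), so a nonzero value indicates failure. A small sketch of checking it:

import os

status = os.system('sh filename.sh')
if status != 0:
    print("script failed with exit status", status)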
I've got a python script that calls ffmpeg via subprocess to do some mp3 manipulations. It works fine in the foreground, but if I run it in the background, it gets as far as the ffmpeg command, which itself gets as far as dumping its config into stderr. At this point, everything stops and the parent task is reported as stopped, without raising an exception anywhere. I've tried a few other simple commands in the place of ffmpeg, they execute normally in foreground or background.
This is the minimal example of the problem:
import subprocess

inf = "3HTOSD.mp3"
outf = "out.mp3"
args = ["ffmpeg",
        "-y",
        "-i", inf,
        "-ss", "0",
        "-t", "20",
        outf
        ]
print "About to do"
result = subprocess.call(args)
print "Done"
I really can't work out why or how a wrapped process can cause the parent to terminate without at least raising an error, and how it only happens in so niche a circumstance. What is going on?
Also, I'm aware that ffmpeg isn't the nicest of packages, but I'm interfacing with something that has ffmpeg compiled into it, so using it again seems sensible.
It might be related to Linux process in background - “Stopped” in jobs? e.g., using parent.py:
from subprocess import check_call
check_call(["python", "-c", "import sys; sys.stdin.readline()"])
should reproduce the issue: "parent.py script shown as stopped" if you run it in bash as a background job:
$ python parent.py &
[1] 28052
$ jobs
[1]+ Stopped python parent.py
When a background process tries to read from the terminal it receives SIGTTIN, a signal whose default action is to stop the process, which is why the job shows up as Stopped.
The solution is to redirect the input:
import os
from subprocess import check_call

try:
    from subprocess import DEVNULL
except ImportError:  # Python 2
    DEVNULL = open(os.devnull, 'r+b', 0)

check_call(["python", "-c", "import sys; sys.stdin.readline()"], stdin=DEVNULL)
If you don't need to see ffmpeg's stdout/stderr, you could also redirect them to /dev/null:
check_call(ffmpeg_cmd, stdin=DEVNULL, stdout=DEVNULL, stderr=STDOUT)  # STDOUT also comes from the subprocess module
I like to use the commands module. It's simpler to use in my opinion.
import commands  # Python 2 only; removed in Python 3 (subprocess.getstatusoutput is the closest equivalent)

cmd = "ffmpeg -y -i %s -ss 0 -t 20 %s 2>&1" % (inf, outf)
status, output = commands.getstatusoutput(cmd)
if status != 0:
    raise Exception(output)
As a side note, sometimes PATH can be an issue, and you might want to use an absolute path to the ffmpeg binary.
matt@goliath:~$ which ffmpeg
/opt/local/bin/ffmpeg
From the python/subprocess/call documentation:
Wait for command to complete, then return the returncode attribute.
So as long as the process you called does not exit, your program does not go on.
You should set up a Popen process object, put its standard output and error into separate streams, and terminate the process when an error appears.
Maybe something like this works:
proc = subprocess.Popen(args, stderr=subprocess.PIPE)  # put stderr into a new stream
while proc.poll() is None:
    try:
        err = proc.stderr.read()
    except:
        continue
    else:
        if err:
            proc.terminate()
            break
I have a Python script on the server:
#!/usr/bin/env python
import cgi
import cgitb; #cgitb.enable()
import sys, os
from subprocess import call
import time
import subprocess

form = cgi.FieldStorage()
component = form.getvalue('component')
command = form.getvalue('command')
success = True

print """Content-Type: text/html\n"""

if component == "Engine" and command == "Start":
    try:
        process = subprocess.Popen(['/usr/sbin/telepath', 'engine', 'start'], shell=False, stdout=subprocess.PIPE)
        print "{ans:12}"
    except Exception, e:
        success = False
        print "{ans:0}"
When I run this script with the component and command parameters set to "Engine" and "Start" respectively, it starts the process and prints to the shell
"""Content-Type: text/html\n"""
{ans:12}
but most importantly - it starts the process!
However, when I run the script by POSTing to it, it returns {ans:12} but does not run the process, which was the whole intention in the first place. Any logical explanation?
I suspect it's one of two things. Firstly, your process is probably running, but your Python code doesn't handle its output, so do:
process = subprocess.Popen(['/usr/sbin/telepath','engine','start'], shell=False, stdout=subprocess.PIPE)
print process.stdout.read()
This is the most likely cause and explains why you see the output from the command line but not from the browser. Secondly, the script is run through the browser as the apache user rather than with your user ID, so check the permissions on /usr/sbin/telepath.
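A slightly fuller variant of the same idea (a sketch only, reusing the command from the question) that also surfaces any error output in the browser:

process = subprocess.Popen(['/usr/sbin/telepath', 'engine', 'start'],
                           stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = process.communicate()  # wait for the command and collect both streams
print out
if process.returncode != 0:
    print "{ans:0}"
    print err  # e.g. a permission error when running as the apache user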