I am able to start an application using quite simple syntax. Example:
app="/Applications/MyApp/myAppExecutable"
file="/Users/userName/Pictures/myPicture.jpg"
cmd="open -a '%s' '%s'" % (app, file)
os.system(cmd)
The resulting cmd here is:
open -a '/Applications/MyApp/myAppExecutable' '/Users/userName/Pictures/myPicture.jpg'
and it runs just fine.
But the application I am running accepts an optional startup argument -proj filepath, so a full cmd string with this optional argument should look like:
open -a '/Applications/MyApp/myAppExecutable' -proj '/Users/userName/Pictures' '/Users/userName/Pictures/myPicture.jpg'
But if I feed os.system() such a cmd string, I get:
open: invalid option -- p
How can I pass an optional app argument without causing an error and a crash?
Per the open man page, you have to pass --args before arguments to the program:
--args
All remaining arguments are passed to the opened application in the
argv parameter to main(). These arguments are not opened or
interpreted by the open tool.
As an aside, you may want to look into using subprocess. Here's how the command looks with subprocess:
subprocess.check_call(['open', '-a', app, file])
No fiddling with string interpolation required.
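Putting the two together, here is a sketch of passing the -proj option through subprocess; the paths are the asker's, and the exact value handed to -proj is an assumption:

```python
import subprocess

app = "/Applications/MyApp/myAppExecutable"
file = "/Users/userName/Pictures/myPicture.jpg"

# Each list element reaches open(1) verbatim, so no quoting is needed;
# everything after --args is handed to the application untouched.
cmd = ['open', '-a', app, file, '--args', '-proj', '/Users/userName/Pictures']
# subprocess.check_call(cmd)  # uncomment to run (macOS only)
```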
It is quite possible to pass application-specific arguments to the OS open command.
The syntax for open is:
open -a /path/to/application/executable.file /filepath/to/the/file/to/open/with/app.ext
It appears it is OK to enclose both paths in double or single quotes, such as:
open -a '/path/to/application/executable.file' '/filepath/to/the/file/to/open/with/app.ext'
As Ned has mentioned, the --args flag can be used to specify any application-specific startup flags.
The app-specific flags are placed after open's --args flag, such as:
open -a /path/to/application/executable.file --args -proj
The problem is that the app-specific flags (such as -proj) are only passed at the time the app starts up. If it is already running, the open -a command will only open a file (if a file to be opened is specified) but won't deliver the app-specific args. In other words, the app can only receive its args at the time it starts.
There is a -n flag available for use with the open -a command. When used, open -a will start as many instances of the app as needed, and each instance will receive the app's args properly.
open -n -a '/path/to/application/executable.file' '/filepath/to/the/file/to/open/with/app.ext' --args -proj 'SpecificToApp arg or command'
This all translates to subprocess as:
subprocess.check_call(['open', '-n', '-a', app, file, '--args', '-proj', 'proj_flag_values'])
or simply pass it as a string arg to:
os.system(cmdString)
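If you do go the os.system() route, it is safer to build cmdString with shlex.quote so paths containing spaces survive the shell; a small sketch (the paths here are placeholders, and the space in the file path is deliberate):

```python
import shlex

app = "/path/to/application/executable.file"
file = "/filepath/to/the/file/to open/with/app.ext"  # note the space

# shlex.quote only adds quotes when the argument actually needs them.
cmdString = "open -n -a %s %s --args -proj %s" % (
    shlex.quote(app), shlex.quote(file), shlex.quote("proj_flag_values"))
print(cmdString)
# os.system(cmdString)  # uncomment to run (macOS only)
```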
Related
I am trying to make a Python program (Python 3.6) that writes commands to the terminal to download a specific YouTube video (using youtube-dl).
If I go to the terminal and execute the following command:
cd; cd Desktop; youtube-dl "https://www.youtube.com/watch?v=b91ovTKCZGU"
It will download the video to my desktop. However, if I execute the code below, which should run the same command in the terminal, it does not throw an error, but it also does not download the video.
import subprocess
cmd = ["cd;", "cd", "Desktop;", "youtube-dl", "\"https://www.youtube.com/watch?v=b91ovTKCZGU\""]
print(subprocess.call(cmd, stderr=subprocess.STDOUT,shell=True))
It seems this just outputs 0. I do not think there is any kind of error 0 (there are errors 126 and 127). So if it is not throwing an error, why does it also not download the video?
Update:
I have fixed the above code by passing in a string, and have checked that youtube-dl is installed in my default Python and is also in the folder where I want to download the videos, but it's still throwing error 127, meaning the command youtube-dl is not found.
cd; cd Desktop; youtube-dl "https://www.youtube.com/watch?v=b91ovTKCZGU" is not a single command; it's a list (delimited by ;) of three separate commands.
subprocess.call(cmd, ..., shell=True) is effectively the same as
subprocess.call(['sh', '-c'] + cmd)
which is almost never what you want. Instead, just pass a single string and let the shell parse it.
subprocess.call('cd; cd Desktop; youtube-dl "https://www.youtube.com/watch?v=b91ovTKCZGU"', shell=True)
If you really want to use the list form (which is otherwise a good idea), use the cwd parameter instead of running cd.
import os
subprocess.call(['youtube-dl', 'https://www.youtube.com/watch?v=b91ovTKCZGU'],
                cwd=os.path.expanduser("~/Desktop"))
I'll answer this with an example:
>>> subprocess.call(["echo $0 $2", "foo", "skipped", "bar"], shell=True)
foo bar
0
The first element of the list is the shell command (echo $0 $2), and the remaining elements are the positional parameters that the command can optionally use ($0, $1, ...).
In your example, you are creating a subshell that only runs the first list element, the cd; command. The positional parameters are ignored. See the Popen and bash docs for details.
As noted in the comments, you should make the command a string (not a list).
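The difference between the two forms is easy to see directly (assuming a POSIX shell is available):

```python
import subprocess

# String form: the shell parses and runs the whole compound command.
out = subprocess.check_output('cd /tmp && pwd', shell=True, text=True)
print(out.strip())

# List form: only the first element is treated as the command text;
# 'ignored' merely becomes $0 of the shell and is never executed.
out2 = subprocess.check_output(['pwd', 'ignored'], shell=True, text=True)
print(out2.strip())
```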
When using subprocess.Popen(args, shell=True) to run "gcc --version" (just as an example), on Windows we get this:
>>> from subprocess import Popen
>>> Popen(['gcc', '--version'], shell=True)
gcc (GCC) 3.4.5 (mingw-vista special r3) ...
So it's nicely printing out the version as I expect. But on Linux we get this:
>>> from subprocess import Popen
>>> Popen(['gcc', '--version'], shell=True)
gcc: no input files
Because gcc hasn't received the --version option.
The docs don't specify exactly what should happen to the args under Windows, but it does say, on Unix, "If args is a sequence, the first item specifies the command string, and any additional items will be treated as additional shell arguments." IMHO the Windows way is better, because it allows you to treat Popen(arglist) calls the same as Popen(arglist, shell=True) ones.
Why the difference between Windows and Linux here?
Actually on Windows, it does use cmd.exe when shell=True - it prepends cmd.exe /c (it actually looks up the COMSPEC environment variable but defaults to cmd.exe if not present) to the shell arguments. (On Windows 95/98 it uses the intermediate w9xpopen program to actually launch the command).
So the strange implementation is actually the UNIX one, which effectively runs the following (where each space separates a different argument):
/bin/sh -c gcc --version
It looks like the correct implementation (at least on Linux) would be:
/bin/sh -c "gcc --version" gcc --version
Since this would set the command string from the quoted parameters, and pass the other parameters successfully.
From the sh man page section for -c:
Read commands from the command_string operand instead of from the standard input. Special parameter 0 will be set from the command_name operand and the positional parameters ($1, $2, etc.) set from the remaining argument operands.
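The -c behavior quoted above can be verified directly from Python (POSIX only):

```python
import subprocess

# sh -c COMMAND NAME ARGS...: NAME becomes $0, and ARGS become $1, $2, ...
out = subprocess.check_output(
    ['/bin/sh', '-c', 'echo "$0 $1"', 'gcc', '--version'], text=True)
print(out.strip())  # gcc --version
```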
This patch seems to fairly simply do the trick:
--- subprocess.py.orig 2009-04-19 04:43:42.000000000 +0200
+++ subprocess.py 2009-08-10 13:08:48.000000000 +0200
@@ -990,7 +990,7 @@
args = list(args)
if shell:
- args = ["/bin/sh", "-c"] + args
+ args = ["/bin/sh", "-c"] + [" ".join(args)] + args
if executable is None:
executable = args[0]
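Without patching, you can get the patch's effect yourself by joining the list into one command string before passing it (a sketch; note that a plain join loses quoting, so shlex.join, available since Python 3.8, is safer for arguments containing spaces):

```python
import subprocess

args = ['echo', 'hello world']
# The patch's effect, done by hand: collapse the list into one shell command.
out = subprocess.check_output(' '.join(args), shell=True, text=True)
print(out.strip())  # hello world
```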
From the subprocess.py source:
On UNIX, with shell=True: If args is a string, it specifies the
command string to execute through the shell. If args is a sequence,
the first item specifies the command string, and any additional items
will be treated as additional shell arguments.
On Windows: the Popen class uses CreateProcess() to execute the child
program, which operates on strings. If args is a sequence, it will be
converted to a string using the list2cmdline method. Please note that
not all MS Windows applications interpret the command line the same
way: The list2cmdline is designed for applications using the same
rules as the MS C runtime.
That doesn't answer why; it just clarifies that you are seeing the expected behavior.
The "why" is probably that on UNIX-like systems, command arguments are actually passed through to applications (using the exec* family of calls) as an array of strings. In other words, the calling process decides what goes into EACH command line argument. Whereas when you tell it to use a shell, the calling process actually only gets the chance to pass a single command line argument to the shell to execute: The entire command line that you want executed, executable name and arguments, as a single string.
But on Windows, the entire command line (according to the above documentation) is passed as a single string to the child process. If you look at the CreateProcess API documentation, you will notice that it expects all of the command line arguments to be concatenated together into a big string (hence the call to list2cmdline).
Plus there is the fact that on UNIX-like systems there actually is a shell that can do useful things, so I suspect that the other reason for the difference is that on Windows, shell=True does nothing, which is why it is working the way you are seeing. The only way to make the two systems act identically would be for it to simply drop all of the command line arguments when shell=True on Windows.
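The list2cmdline conversion mentioned above can be inspected directly; it is the helper Popen uses to flatten a list into the single command-line string handed to CreateProcess on Windows, and it is callable on any platform:

```python
import subprocess

# Arguments without spaces pass through untouched; 'My File' gets
# double-quoted per the MS C runtime rules.
s = subprocess.list2cmdline(['gcc', '--version', 'My File'])
print(s)  # gcc --version "My File"
```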
The reason for the UNIX behaviour of shell=True is to do with quoting. When we write a shell command, it will be split at spaces, so we have to quote some arguments:
cp "My File" "New Location"
This leads to problems when our arguments contain quotes, which requires escaping:
grep -r "\"hello\"" .
Sometimes we can get awful situations where \ must be escaped too!
Of course, the real problem is that we're trying to use one string to specify multiple strings. When calling system commands, most programming languages avoid this by allowing us to send multiple strings in the first place, hence:
Popen(['cp', 'My File', 'New Location'])
Popen(['grep', '-r', '"hello"'])
Sometimes it can be nice to run "raw" shell commands; for example, if we're copy-pasting something from a shell script or a Web site, and we don't want to convert all of the horrible escaping manually. That's why the shell=True option exists:
Popen(['cp "My File" "New Location"'], shell=True)
Popen(['grep -r "\"hello\"" .'], shell=True)
I'm not familiar with Windows so I don't know how or why it behaves differently.
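Python's shlex module handles this conversion in both directions; a small sketch using the grep example from above:

```python
import shlex

# shlex.split undoes the shell quoting, giving the list form directly:
args = shlex.split('grep -r "\\"hello\\"" .')
print(args)  # ['grep', '-r', '"hello"', '.']

# shlex.quote goes the other way, quoting one argument for a shell string:
print(shlex.quote('My File'))  # 'My File'
```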
I want to execute a Python script from AutoIt using ShellExecuteWait(). My attempt:
$x = ShellExecuteWait("E:/Automation/Python/Scripts/ReadLog.py", '-f "file.log" -k "key" -e "errMsg" ')
MsgBox(0,"x=",String($x))
If #error Then
MsgBox(0,"Error=",String(#error))
EndIf
I can see some process ID in $x, and #error also gets set to 0 (meaning AutoIt executed the script). But my Python script is not producing results (it writes to a txt file when executed independently). It seems the problem is with passing command line arguments like:
ShellExecuteWait("E:/Automation/Python/Scripts/ReadLog.py", '-f "file.log" -k "key" -e "errMsg" ')
How can I pass command line arguments using ShellExecuteWait()? Syntax:
ShellExecuteWait ( "filename" [, "parameters" [, "workingdir" [,"verb" [, showflag]]]] )
Parameters:
filename :- The name of the file to run (EXE, .txt, .lnk, etc).
parameters :- [optional] Any parameters for the program. Blank ("") uses none.
This lacks examples for the use of parameters. There are no problems with the Python script itself (it requires three command line arguments: strings with the options -f, -k and -e).
Related: How to run or execute python file from autoit.
Check that the path to your Python binary (e.g. Python.exe, wherever your Python program/binary is located) is in your Windows system environment path.
If the path is there, then your code should work; in $x you will receive the exit code of the Python script.
Also you can try:
RunWait('full_path\Python.exe ReadLog.py -f "file.log" -k "key" -e "errMsg"', 'full_path_of_working_directory')
AutoIt does not execute external programs/scripts unless you pass the working directory (an optional parameter to all the execute and run commands). So pass the working directory as a separate parameter and it will work:
RunWait('full_path\Python.exe ReadLog.py -f "file.log" -k "key" -e "errMsg"', 'full_path_of_working_directory')
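On the Python side, here is a hypothetical sketch of how ReadLog.py might pick up those three options; the option names -f, -k and -e come from the question, while the destination names and everything else are assumptions:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('-f', dest='logfile')   # e.g. "file.log"
parser.add_argument('-k', dest='key')       # e.g. "key"
parser.add_argument('-e', dest='errmsg')    # e.g. "errMsg"

# Simulating the argument list AutoIt would pass on the command line:
ns = parser.parse_args(['-f', 'file.log', '-k', 'key', '-e', 'errMsg'])
print(ns.logfile, ns.key, ns.errmsg)
```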
I need to start a Python script from bash using nohup, passing an arg that aids in defining a constant in a script I import. There are lots of questions about passing args, but I haven't found a successful way using nohup.
A simplified version of my bash script:
#!/bin/bash
BUCKET=$1
echo $BUCKET
script='/home/path/to/script/script.py'
echo "starting $script with nohup"
nohup /usr/bin/python $script $BUCKET &
The relevant part of the config script I'm importing:
FLAG = sys.argv[0]
if FLAG == "b1":
AWS_ACCESS_KEY_ID = "key"
BUCKET = "bucket1"
AWS_SECRET_ACCESS_KEY = "secret"
elif FLAG == "b2":
AWS_ACCESS_KEY_ID = "key"
BUCKET = "bucket2"
AWS_SECRET_ACCESS_KEY = "secret"
else:
AWS_ACCESS_KEY_ID = "key"
BUCKET = "bucket3"
AWS_SECRET_ACCESS_KEY = "secret"
The script that's using it:
from config import BUCKET, AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY
#do stuff with the values.
Frankly, since I'm passing the args to script.py, I'm not confident that they'll be in scope for the import script. That said, when I take a similar approach without using nohup, it works.
In general, the argument vector for any program starts with the program itself, and then all of its arguments and options. Depending on the language, the program may be sys.argv[0], argv[0], $0, or something else, but it's basically always argument #0.
Each program whose job is to run another program—like nohup, and like the Python interpreter itself—generally drops itself and all of its own options, and gives the target program the rest of the command line.
So, nohup takes a COMMAND and zero or more ARGS. Inside that COMMAND, argv[0] will be COMMAND itself (in this case, '/usr/bin/python'), and argv[1] and later will be the additional arguments ('/home/path/to/script/script.py' and whatever $BUCKET resolves to).
Next, Python takes zero or more options, a script, and zero or more args to that script, and exposes the script and its args as sys.argv. So, in your script, sys.argv[0] will be '/home/path/to/script/script.py', and sys.argv[1] will be whatever $BUCKET resolves to.
And bash works similarly to Python; $1 will be the first argument to the bash wrapper script ($0 will be the script itself), and so on. So, sys.argv[1] in the inner Python script will end up getting the first argument passed to the bash wrapper script.
Importing doesn't affect sys.argv at all. So, in both your config module and your top-level script, if you import sys, sys.argv[1] will hold the $1 passed to the bash wrapper script.
(On some platforms, in some circumstances argv[0] may not have the complete path, or may even be empty. But that isn't relevant here. What you care about is the eventual sys.argv[1], and bash, nohup, and python are all guaranteed to pass that through untouched.)
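This chain is easy to verify by spawning a child interpreter and printing its sys.argv; a sketch (no nohup needed, since nohup does not alter the argument vector):

```python
import subprocess
import sys

# Equivalent to: python -c '...' b1 -- the inline script body takes the
# place of script.py, so 'b1' lands in sys.argv[1] just like $BUCKET would.
out = subprocess.check_output(
    [sys.executable, '-c', 'import sys; print(sys.argv[1])', 'b1'],
    text=True)
print(out.strip())  # b1
```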
nohup python3 -u ./train.py --dataset dataset_directory/ --model model_output_directory > output.log &
Here I'm executing the train.py file with python3. The -u flag disables output buffering, so the logs show up as they are produced instead of being held back; --dataset and --model pass the dataset and model output directories in the usual argument style. The greater-than symbol (>) redirects the logs into output.log, and finally the ampersand (&) runs the whole thing in the background.
To terminate this process:
ps ax | grep train
then note the process ID and run:
sudo kill -9 Process_ID
I have tested optcomplete working with the optparse module. Its example is a simple file, so I could get that working. I also tested it using the argparse module, as the prior one is deprecated. But I really do not understand how, and by whom, the Python program gets called on tab presses. I suspect bash, the shebang line, and the argparse (or optparse) module are involved in some way. I have been trying to figure this out (and am now going to read the source code).
I have a somewhat more complex program structure, which includes a wrapper around the piece of code that handles the arguments. The argparse.ArgumentParser() instantiation and the calls to add_argument() (which are subclassed into an intermediate module to avoid duplicating code, with a wrapper around that being called) are inside a function.
I want to understand how this tab completion works between bash and Python (or, for that matter, any other interpreter like Perl).
NOTE: I have a fair understanding of bash completion (which I learned just now), and I think I understand bash-only custom completion.
NOTE: I have read other similar SO questions, and none really answer this Q.
Edit: Here is the bash function.
I already understand how the Python module gets to know about the words typed on the command line: by reading the os.environ values of the variables
$COMP_WORDS
$COMP_CWORD
$COMP_LINE
$COMP_POINT
$COMPREPLY
These variables have values only on tab press.
My question is: how does the Python module get triggered?
To understand what's happening here, let's check what that bash function actually does:
COMPREPLY=( $( \
COMP_LINE=$COMP_LINE COMP_POINT=$COMP_POINT \
COMP_WORDS="${COMP_WORDS[*]}" COMP_CWORD=$COMP_CWORD \
OPTPARSE_AUTO_COMPLETE=1 $1 ) )
See the $1 at the end? That means it actually calls the Python file we want completion for, with special environment variables set! To trace what's happening, let's prepare a little script to intercept what optcomplete.autocomplete does:
#!/usr/bin/env python2
import os, sys
import optparse, optcomplete
from cStringIO import StringIO
if __name__ == '__main__':
parser = optparse.OptionParser()
parser.add_option('-s', '--simple', action='store_true',
help="Simple really simple option without argument.")
parser.add_option('-o', '--output', action='store',
help="Option that requires an argument.")
opt = parser.add_option('-p', '--script', action='store',
help="Option that takes python scripts args only.")
opt.completer = optcomplete.RegexCompleter('.*\.py')
# debug env variables
sys.stderr.write("\ncalled with args: %s\n" % repr(sys.argv))
for k, v in sorted(os.environ.iteritems()):
sys.stderr.write(" %s: %s\n" % (k, v))
# setup capturing the actions of `optcomplete.autocomplete`
def fake_exit(i):
sys.stderr.write("autocomplete tried to exit with status %d\n" % i)
sys.stdout = StringIO()
sys.exit = fake_exit
# Support completion for the command-line of this script.
optcomplete.autocomplete(parser, ['.*\.tar.*'])
sys.stderr.write("autocomplete tried to write to STDOUT:\n")
sys.stderr.write(sys.stdout.getvalue())
sys.stderr.write("\n")
opts, args = parser.parse_args()
This gives us the following when we try to autocomplete it:
$ ./test.py [tab]
called with args: ['./test.py']
...
COMP_CWORD: 1
COMP_LINE: ./test.py
COMP_POINT: 10
COMP_WORDS: ./test.py
...
OPTPARSE_AUTO_COMPLETE: 1
...
autocomplete tried to exit with status 1
autocomplete tried to write to STDOUT:
-o -h -s -p --script --simple --help --output
So optcomplete.autocomplete just reads the environment, prepares the matches, writes them to STDOUT and exits. The result -o -h -s -p --script --simple --help --output is then put into a bash array (COMPREPLY=( ... )) and returned to bash to present the choices to the user. No magic involved :)
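The whole round trip can be sketched in a few lines; this is a simplified stand-in for what optcomplete.autocomplete does, with all names and the prefix-matching logic being assumptions for illustration:

```python
import os

def autocomplete(options):
    """If invoked by the bash completion function, print matches;
    otherwise return None so the program runs normally."""
    line = os.environ.get('COMP_LINE')
    if line is None:
        return None  # normal run: bash did not set the COMP_* variables
    # The word being completed is the trailing token of the command line.
    prefix = '' if line.endswith(' ') else line.split()[-1]
    matches = [o for o in options if o.startswith(prefix)]
    print(' '.join(matches))  # bash captures this into COMPREPLY
    return matches

# Simulate bash invoking us on a tab press after "./test.py --s":
os.environ['COMP_LINE'] = './test.py --s'
print(autocomplete(['--simple', '--script', '--help', '--output']))
```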