Fabfiles With Command Line Arguments - python

Is there a clean way to have your fabfile take command line arguments? I'm writing an installation script for a tool that I want to be able to specify an optional target directory via the command line.
I wrote some code to test what would happen if I passed in some command line arguments:
# fabfile.py
import sys

def install():
    _get_options()

def _get_options():
    print repr(sys.argv[1:])
A couple of runs:
$ fab install
['install']
Done.
$ fab install --electric-boogaloo
Usage: fab [options] <command>[:arg1,arg2=val2,host=foo,hosts='h1;h2',...] ...
fab: error: no such option: --electric-boogaloo

I ended up using per-task arguments. That seems like a better idea than parsing unattached command line arguments myself.
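For reference, a minimal sketch of that per-task-argument approach (Fabric 1.x syntax; the default directory below is made up for illustration):
# fabfile.py
def install(target_dir='/usr/local/mytool'):
    # the optional target directory arrives as a task argument
    print("Installing into %s" % target_dir)
Invoked as fab install:target_dir=/opt/mytool, which matches the <command>[:arg1,arg2=val2,...] form shown in the usage message above.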

Related

How to use a python module in combination with a command from shell

I would like to use a python module from the shell (to be exact: indirectly from gnuplot). I do not want to write an extra script for every call or implement some I/O logic.
Let's say as a minimal working example, I have a python module module_foo.py with
#!/usr/bin/python
def bar():
    print(1,2,3)
My question is:
Why isn't it possible to use a python module combining module loading and command execution like here?:
$ python -m module_foo -c 'bar()'
When executed, nothing happens. But what does work, is using only a command call like this
$ python -c 'import module_foo; module_foo.bar()'
1 2 3
or this
$ python -c 'from module_foo import *; bar()'
1 2 3
As soon as I load a module first, even a syntactically erroneous command is “accepted” – not executed, I suppose (the bracket of the call to bar isn't closed):
$ python -m module_foo -c 'bar('
$
It is, however, possible to use the -m module option with arguments, e.g. for python's unittest (from the python docs):
python -m unittest test_module1 test_module2
The python manpage says for both options:
-c command
Specify the command to execute (see next section). This terminates the option
list (following options are passed as arguments to the command).
-m module-name
Searches sys.path for the named module and runs the corresponding .py file as
a script.
So I'd expect to be able to use both options in this order, -m ... -c ..., but not in the reverse order -c ... -m .... Am I missing something obvious?
If you want your Python module to be executable and to call function bar(), you should add this to the end of the python file:
if __name__ == "__main__":  # this checks that the file is "executed", rather than "imported"
    bar()  # call the function you want to call
Then call:
python module_foo.py
If you want more control, you can pass arguments to the script and access them via sys.argv.
For even more flexibility in handling arguments passed to the script, see the argparse module.
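As a rough sketch of that (the --times option here is made up purely for illustration), module_foo.py could end with:
#!/usr/bin/python
import argparse

def bar():
    print(1,2,3)

if __name__ == "__main__":
    # only runs when the file is executed, not when it is imported
    parser = argparse.ArgumentParser()
    parser.add_argument("--times", type=int, default=1)
    args = parser.parse_args()
    for _ in range(args.times):
        bar()
Then python module_foo.py --times 2 calls bar() twice.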

Error in check_call() subprocess, executing 'mv' unix command: "Syntax error: '(' unexpected"

I'm making a python script for Travis CI.
.travis.yml
...
script:
- support/travis-build.py
...
The python file travis-build.py is something like this:
#!/usr/bin/env python
from subprocess import check_call
...
check_call(r"mv !(my_project|cmake-3.0.2-Darwin64-universal) ./my_project/final_folder", shell=True)
...
When the Travis build reaches that line, I get an error:
/bin/sh: 1: Syntax error: "(" unexpected
I've tried writing it in a lot of different forms, but I get the same result. Any ideas?
Thanks in advance!
Edit
My current directory layout:
- my_project/final_folder/
- cmake-3.0.2-Darwin64-universal/
- fileA
- fileB
- fileC
With this command I'm trying to move all the current files fileA, fileB and fileC, excluding the my_project and cmake-3.0.2-Darwin64-universal folders, into ./my_project/final_folder. If I execute this command in a Linux shell it does what I want, but not through check_call().
Note: I can't move the files one by one, because there are many others
I don't know which shell Travis is using by default because I don't specify one; I only know that if I write the command directly in my .travis.yml:
.travis.yml
...
script:
# Here is the previous Travis code
- mv !(my_project|cmake-3.0.2-Darwin64-universal) ./my_project/final_folder
...
It works, but if I use the Python script, it fails.
I found this command from the following issue:
How to use 'mv' command to move files except those in a specific directory?
You're using the bash feature extglob to exclude the two entries you list, and you'll need to enable it for the pattern to work.
The python subprocess module explicitly uses /bin/sh when you use shell=True, which doesn't enable bash features like this by default (it's a compliance thing to make it behave more like the original sh).
If you want to get bash to interpret the command; you have to pass it to bash explicitly, for example using:
subprocess.check_call(["bash", "-O", "extglob", "-c", "mv !(my_project|cmake-3.0.2-Darwin64-universal) ./my_project/final_folder"])
I would not choose to do the job in this manner, though.
Let me try again: in which shell do you expect your syntax !(...) to work? Is it bash? Is it ksh? I have never used it, and a quick search for a corresponding bash feature led nowhere. I suspect your syntax is just wrong, which is what the error message is telling you. In that case, your problem is entirely independent from python and the subprocess module.
If some particular shell on your system supports this syntax, you need to make sure that Python is using the same shell when invoking your command. It tells you which shell it has been using: /bin/sh. This is usually just a link to the real shell executable. Does it point to the same shell you tested your command in?
Edit: the SO solution you referenced contains the solution in the comments:
Tip: Note however that using this pattern relies on extglob. You can
enable it using shopt -s extglob (If you want extended globs to be
turned on by default you can add shopt -s extglob to .bashrc)
Just to demonstrate that different shells might deal with your syntax in different ways, first using bash:
$ !(uname)
-bash: !: event not found
And then, using /bin/dash:
$ !(uname)
Linux
The argument to a subprocess method must be a list of command line arguments. Use e.g. shlex.split() to split the string into the correct command line arguments:
import shlex, subprocess
subprocess.check_call( shlex.split("mv !(...)") )
EDIT:
So, the goal is to move files/directories, with the exception of some file(s)/directory(ies). By playing around with bash, I could get it to work like this:
mv `ls | grep -v -e '\(exclusion1\|exclusion2\)'` my_project
So in your situation that would be:
mv `ls | grep -v -e '\(my_project\|cmake-3.0.2-Darwin64-universal\)'` my_project
This could go into the subprocess.check_call(..., shell=True) and it should do what you expect it to do.
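A hedged sketch of what that could look like in travis-build.py, assuming the destination path from the question:
import subprocess

# shell=True is needed here because the command relies on backticks and a pipe
subprocess.check_call(
    r"mv `ls | grep -v -e '\(my_project\|cmake-3.0.2-Darwin64-universal\)'` ./my_project/final_folder",
    shell=True)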

How to implement full wget command into python

Is there a way to use a full wget command in python?
I know that we can do this: os.system('wget %s' % url)
But I want to run the full command with all of the data saved into a directory:
wget -r --accept "*.exe,*.dll,*.zip,*.msi,*.rar,*.iso" ftp://ftp.apple.asimov.com/ -P e:\e
There is the subprocess module for that (it does what os.system does, but with a bit more flexibility). Specifically, you can use the call function in the following way to execute any command:
import subprocess
subprocess.call(r'wget -r --accept "*.exe,*.dll,*.zip,*.msi,*.rar,*.iso" ftp://ftp.apple.asimov.com/ -P e:\e', shell=True)
Alternatively, you can pass individual arguments as a list omitting the shell flag:
subprocess.call(['wget', '-r', ...])
Also check the return value for errors. For details, see the standard library documentation on subprocess.
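For completeness, a sketch of the same wget command in list form (arguments taken from the question above; only the return code is checked):
import subprocess

ret = subprocess.call([
    'wget', '-r',
    '--accept', '*.exe,*.dll,*.zip,*.msi,*.rar,*.iso',
    'ftp://ftp.apple.asimov.com/',
    '-P', r'e:\e',
])
if ret != 0:
    print('wget exited with status', ret)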

Execute bash script from URL using python

Assume I have a file at http://mysite.com/myscript.sh that contains:
#!/bin/bash
echo "Hello $1"
From the command line, I can execute my script (without downloading it) using the following command:
bash <(curl -s http://mysite.com/myscript.sh) World
Now, instead of executing the above command from the command line, I want to execute it from a python script. I tried doing the following:
import os
os.system('bash <(curl -s http://mysite.com/myscript.sh) World')
...but I get the following error:
sh: -c: line 0: syntax error near unexpected token `('
How do I make this execute correctly in python?
Evidently, os.system runs its command through /bin/sh, which usually causes whichever shell it is linked to to drop into a compatibility mode that doesn't support the <(...) construct. You can get around it either by storing the result in a temporary file or by using another level of shell. Ugly, but it works.
os.system('bash -c "bash <(curl -s http://mysite.com/myscript.sh) World"')
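An equivalent sketch with subprocess, which skips the extra /bin/sh level by running bash directly:
import subprocess

# bash understands the <(...) process substitution, so no compatibility mode gets in the way
subprocess.call(['bash', '-c',
                 'bash <(curl -s http://mysite.com/myscript.sh) World'])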
There is a libcurl binding for python, so you don't have to go the roundabout way through command line behaviour. Here's the function list that should really do it - I have never run remote scripts myself, though. If you need to install the python binding, the instructions are here.
import curl
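If you go the libcurl route, a rough sketch using the pycurl binding (pycurl is an assumption on my part; the answer above doesn't name the exact module) might look like:
import subprocess
from io import BytesIO
import pycurl

# fetch the script into memory ...
buf = BytesIO()
c = pycurl.Curl()
c.setopt(pycurl.URL, 'http://mysite.com/myscript.sh')
c.setopt(pycurl.WRITEDATA, buf)
c.perform()
c.close()

# ... and feed it to bash on stdin; 'World' becomes $1 inside the script
p = subprocess.Popen(['bash', '-s', 'World'], stdin=subprocess.PIPE)
p.communicate(buf.getvalue())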

How to use pdoc on a script that takes command line args?

I'm trying to use pdoc to document my python script, but I'm having an issue because my program requires command line args.
My program is called as follows: myprogram.py -p arg1 -s arg2
And I try to run pdoc as follows: pdoc --html myprogram
But I get an error saying pdoc: error: the following arguments are required: -p
But if I try to run pdoc like such: pdoc --html myprogram.py -p arg1 -s arg2, I get this error:
pdoc: error: unrecognized arguments: -p arg1 -s arg2
Is there a way to run pdoc on a module that requires command line args?
Since pdoc imports the documented modules, you need to nest your CLI program execution under a main guard:
def main():
    from argparse import ArgumentParser
    parser = ArgumentParser(...)
    args = parser.parse_args()
    ...

if __name__ == '__main__':
    # Run main entry point only if the script was run as a program
    # and not if it was merely imported
    main()
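For instance, a minimal myprogram.py along these lines (the -p and -s flags come from the question; everything else is illustrative):
from argparse import ArgumentParser

def main():
    # entry point; only invoked when the script is run, never on import
    parser = ArgumentParser()
    parser.add_argument('-p', required=True)
    parser.add_argument('-s', required=True)
    args = parser.parse_args()
    print(args.p, args.s)

if __name__ == '__main__':
    main()
pdoc --html myprogram can now import the module without ever reaching the argument parser.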
