Need help with a Python script with Bash commands - python

I copied this script from the internet but I don't know how to use it. I am a newbie to Python, so please help. When I execute it using
./test.py, then I can only see
usage: py4sa [option]
A unix toolbox
options:
--version show program's version number and exit
-h, --help show this help message and exit
-i, --ip gets current IP Address
-u, --usage gets disk usage of homedir
-v, --verbose prints verbosely
When I type py4sa, it says bash: command not found.
The full script is:
#!/usr/bin/env python
import subprocess
import optparse

# Create variables out of shell commands
# Note: triple quotes can embed Bash
# You could add another Bash command here
# HOLDING_SPOT = """fake_command"""

# Determines home directory usage in gigs
HOMEDIR_USAGE = """
du -sh $HOME | cut -f1
"""

# Determines IP address
IPADDR = """
/sbin/ifconfig -a | awk '/(cast)/ { print $2 }' | cut -d':' -f2 | head -1
"""

# This function takes a Bash command and returns its output
def runBash(cmd):
    p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
    out = p.stdout.read().strip()
    return out  # This is the stdout from the shell command

VERBOSE = False

def report(output, cmdtype="UNIX COMMAND:"):
    # VERBOSE is read from module level; controller() sets it via a global statement
    if VERBOSE:
        print "%s: %s" % (cmdtype, output)
    else:
        print output

# Function to control option parsing in Python
def controller():
    global VERBOSE
    # Create an instance of OptionParser, included in the standard library
    p = optparse.OptionParser(description='A unix toolbox',
                              prog='py4sa',
                              version='py4sa 0.1',
                              usage='%prog [option]')
    p.add_option('--ip', '-i', action="store_true", help='gets current IP Address')
    p.add_option('--usage', '-u', action="store_true", help='gets disk usage of homedir')
    p.add_option('--verbose', '-v',
                 action='store_true',
                 help='prints verbosely',
                 default=False)
    # Option handling passes the correct parameter to runBash
    options, arguments = p.parse_args()
    if options.verbose:
        VERBOSE = True
    if options.ip:
        value = runBash(IPADDR)
        report(value, "IPADDR")
    elif options.usage:
        value = runBash(HOMEDIR_USAGE)
        report(value, "HOMEDIR_USAGE")
    else:
        p.print_help()

# Runs all the functions
def main():
    controller()

# This idiom means the code below only runs when the file is executed from the command line
if __name__ == '__main__':
    main()

It seems to me you have stored the script under another name: test.py rather than py4sa. So typing ./test.py, like you did, is correct for you. The program requires arguments, however, so you have to enter one of the options listed under 'usage'.
Normally 'py4sa [OPTIONS]' would mean that OPTIONS is optional, but looking at the code we can see that it isn't:
if options.verbose:
    # ...
if options.ip:
    # ...
elif options.usage:
    # ...
else:
    # Here's a "catch-all" in case no options are supplied.
    # It will show the help text you get:
    p.print_help()
Note that the program probably would not be recognized by bash even if you renamed it to py4sa, as the current directory is often not in bash's PATH. It says 'usage: py4sa (..)' because that's hard-coded into the program.
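For example (an illustrative session; the printed values are made up):
$ chmod +x test.py      # make sure the script is executable
$ ./test.py --ip        # or: python test.py -i
192.168.1.23
$ ./test.py -u
12G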

The script is called "test.py". Either invoke it as such, or rename it to "py4sa".

You run a Python script using the interpreter, so:
$ python test.py
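As an aside, optparse is deprecated in newer Pythons; here is a rough, untested sketch of the same interface using argparse (flag names and help strings taken from the script above):
import argparse

parser = argparse.ArgumentParser(prog='py4sa', description='A unix toolbox')
parser.add_argument('--version', action='version', version='py4sa 0.1')
parser.add_argument('-i', '--ip', action='store_true', help='gets current IP Address')
parser.add_argument('-u', '--usage', action='store_true', help='gets disk usage of homedir')
parser.add_argument('-v', '--verbose', action='store_true', help='prints verbosely')
args = parser.parse_args()  # then dispatch on args.ip / args.usage as before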

Related

subprocess.call to run mafft

I wrote a script to run the mafft module from the terminal:
import subprocess

def linsi_MSA(sequnces_file_path):
    cmd = 'mafft --maxiterate 1000 --localpair {seqs} > {out}'.format(seqs=sequnces_file_path, out=sequnces_file_path)
    subprocess.call(cmd.split(), shell=True)

if __name__ == '__main__':
    import logging
    logger = logging.getLogger('main')
    from sys import argv
    if len(argv) < 2:
        logger.error('Usage: MSA <sequnces_file_path> ')
        exit()
    else:
        linsi_MSA(*argv[1:])
For some reason, when trying to run the script from the terminal using:
python ./MSA.py ./sample.fa
I get the mafft interactive version opening directly in the terminal (asking for input, output, etc.).
When I write the command directly in the terminal using:
mafft --maxiterate 1000 --localpair sample.fa > sample.fa
it works as expected, performing the command-line version without opening the interactive one.
I want my script to run the command-line version from the terminal. What seems to be the problem?
Thanks!
If you use shell=True you should pass one string as argument, not a list, e.g.:
subprocess.call("ls > outfile", shell=True)
It's not explained in the docs, but I suspect it has to do with what low-level library function is ultimately called:
call(["ls", "-l"])             --> execlp("ls", "-l")
call("ls -l", shell=True)      --> execlp("sh", "-c", "ls -l")
call(["ls", "-l"], shell=True) --> execlp("sh", "-c", "ls", "-l")
# the last one can be tried from the command line:
sh -c ls -l
# the result is a list of files without details; -l was ignored.
# see the sh(1) man page for the -c string syntax and what happens to further arguments.
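Applied to the mafft script above, a sketch of the fix: pass the whole command as one string, since it relies on shell redirection. (Note the sketch writes to a separate out_path, which is my addition; redirecting into the same file as the input, as in the question, lets the shell truncate it before mafft reads it.)
import subprocess

def linsi_MSA(sequnces_file_path, out_path):
    # one string + shell=True, so the shell handles the '>' redirection
    cmd = 'mafft --maxiterate 1000 --localpair {seqs} > {out}'.format(
        seqs=sequnces_file_path, out=out_path)
    subprocess.call(cmd, shell=True)
Alternatively, skip the shell entirely and do the redirection in Python:
def linsi_MSA_noshell(sequnces_file_path, out_path):
    with open(out_path, 'w') as out:
        subprocess.call(['mafft', '--maxiterate', '1000', '--localpair',
                         sequnces_file_path], stdout=out)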

unit tests for command line args python

I have a shell script which currently takes 3 args: the shell script file name, a directory to run the Python script upon, and the name of the test data directory. I want to be able to write a unit test which executes the command below; depending on the date I change it to and the data that is available, it would either pass or fail.
main_config.sh
yamldir=$1
for yaml in $(ls ${yamldir}/*.yaml | grep -v "export_config.yaml"); do
    if [ "$yaml" != "export_config.yaml" ]; then
        echo "Running export for $yaml file...";
        python valid.py -p ${yamldir}/export_config.yaml -e $yaml -d ${endDate}
        wait
    fi
done
This is what is executed on the command line:
./main_config.sh /Users/name/Desktop/yaml/ 2018-12-23
This will fail and print the output below, since there is no directory called 2018-12-23:
./main_config.sh /yaml/ 2018-12-23
Running export for apa.yaml file...
apa.json does not exist
If the directory existed, this would pass and output the following on the terminal:
Running export for apa.yaml file...
File Name: apa.json Exists
File Size: 234 Bytes
Writing to file
My Python script is as follows:
def main(get_config):
    cfg = get_config()[0]   # export_config.yaml
    data = get_config()[1]  # export_apa.yaml
    date = get_config()[2]  # data folder - YYYY-MM-DD
    # conditional logic goes here

def get_config():
    parser = argparse.ArgumentParser()
    parser.add_argument("-p", "--parameter-file", action="store", required=True)
    parser.add_argument("-e", "--export-data-file", action="store", required=True)
    parser.add_argument("-d", "--export-date", action="store", required=False)
    args = parser.parse_args()
    return [funcs.read_config(args.parameter_file), funcs.read_config(args.export_data_file), args.export_date]

if __name__ == "__main__":
    logging.getLogger().setLevel(logging.INFO)
    main(get_config)
To me it looks like this is not a typical unit test (that tests a function or method) but an integration test (that tests a subsystem from the outside). But of course you could still solve this with your typical Python testing tools like unittest.
A simple solution would be to run your script using subprocess, capture the output, and then parse that output as part of your test:
import unittest
import os
import sys

if os.name == 'posix' and sys.version_info[0] < 3:
    import subprocess32 as subprocess
else:
    import subprocess

class TestScriptInvocation(unittest.TestCase):
    def setUp(self):
        """Call the script and record its output."""
        result = subprocess.run(["./main_config.sh", "/Users/yasserkhan/Desktop/yaml/", "2018-12-23"], stdout=subprocess.PIPE)
        self.returncode = result.returncode
        self.output_lines = result.stdout.decode('utf-8').split('\n')

    def test_returncode(self):
        self.assertEqual(self.returncode, 0)

    def test_last_line_indicates_success(self):
        self.assertEqual(self.output_lines[-1], 'Writing to file')

if __name__ == '__main__':
    unittest.main()
Note that this code uses the backport of the Python 3 subprocess module. Also, it tries to decode the contents of result.stdout because on Python 3 that would be a bytes object and not a str as on Python 2. I didn't test it, but these two things should make the code portable between 2 and 3.
Also note that using absolute paths like "/Users/yasserkhan/Desktop/yaml" could easily break, so you will either need to find a relative path or pass a base path to your tests using environment variables for example.
You could add additional tests that parse the other lines and check for reasonable outputs like a file size in the expected range.
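For example, a further test method for the TestScriptInvocation class above might look like this (untested sketch; the regex assumes the 'File Size: 234 Bytes' format from your sample output is stable, and import re would go at the top of the test module):
def test_reports_plausible_file_size(self):
    # look for a line like "File Size: 234 Bytes" in the captured output
    matches = [re.match(r'File Size: (\d+) Bytes', line)
               for line in self.output_lines]
    sizes = [int(m.group(1)) for m in matches if m]
    self.assertTrue(sizes, "no 'File Size' line found in output")
    self.assertGreater(sizes[0], 0)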

Python argparse versatility: handling both true/false and string?

I have the following arguments parser using argparse in a python 2.7 script:
parser = argparse.ArgumentParser(description=scriptdesc)
parser.add_argument("-l", "--list", help="Show current running sesssions", dest="l_list", type=str, default=None)
I want to be able to run:
./script -l and ./script -l session_1
So that the script returns either all sessions or a single session without an extra parameter such as -s
However I can't find a way to do this in a single arg.
This is a bit of a hack since it relies on accessing sys.argv outside of any argparse function but you can do something like:
import argparse
import sys

parser = argparse.ArgumentParser(description='')
parser.add_argument("-l", "--list", help="Show current running sessions", dest="l_list", nargs='?')
args = parser.parse_args()

if args.l_list is None:
    if '-l' in sys.argv or '--list' in sys.argv:
        print('display all')
else:
    print('display %s only' % args.l_list)
And you would obviously replace the print statements with your actual code. This works by allowing 0 or 1 arguments after the flag (nargs='?'), so you can pass a value with -l or not. This means that in the args namespace, l_list can be None (the default) either if you call -l without an argument or if you don't use -l at all. Then later you can check whether -l was called without an argument (l_list is None while '-l' or '--list' is in sys.argv).
If I name this script test.py I get the following outputs when calling it from the command line.
$ python test.py
$ python test.py -l
display all
$ python test.py -l session1
display session1 only
EDIT
I figured out an argparse-only solution! No relying on sys.argv:
import argparse

parser = argparse.ArgumentParser(description='')
parser.add_argument("-l", "--list", help="Show current running sessions", dest="l_list", nargs='?', default=-1)
args = parser.parse_args()

if args.l_list is None:
    print('display all')
elif args.l_list != -1:
    print('display %s only' % args.l_list)
So it turns out that the default keyword in .add_argument only applies when the argument flag is not given at all. If the flag is used without anything following it, argparse falls back to the const keyword, which itself defaults to None, regardless of what default is. So if we set default to something that is not None and not an expected argument value (in this case I chose -1), then we can handle all three of your cases:
$ python test.py
$ python test.py -l
display all
$ python test.py -l session1
display session1 only
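A slightly tidier variant of the same idea (my own sketch): give const a sentinel object, since with nargs='?' argparse stores const when the flag appears with no value and default when the flag is absent:
import argparse

ALL = object()  # sentinel meaning "-l given with no value"

parser = argparse.ArgumentParser(description='')
parser.add_argument("-l", "--list", dest="l_list", nargs='?',
                    const=ALL,     # used when -l appears with no value
                    default=None)  # used when -l is absent
args = parser.parse_args()

if args.l_list is ALL:
    print('display all')
elif args.l_list is not None:
    print('display %s only' % args.l_list)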

How to check whether a shell command returned nothing or something

I am writing a script to extract something from a specified path. I am returning those values into a variable. How can I check whether the shell command has returned something or nothing?
My Code:
def any_HE():
    global config, logger, status, file_size
    config = ConfigParser.RawConfigParser()
    config.read('config2.cfg')
    for section in sorted(config.sections(), key=str.lower):
        components = dict()  # start with an empty dictionary for each section
        # Retrieve the username and password from config for each section
        if not config.has_option(section, 'server.user_name'):
            continue
        env.user = config.get(section, 'server.user_name')
        env.password = config.get(section, 'server.password')
        host = config.get(section, 'server.ip')
        print "Trying to connect to {} server.....".format(section)
        with settings(hide('warnings', 'running', 'stdout', 'stderr'), warn_only=True, host_string=host):
            try:
                files = run('ls -ltr /opt/nds')
                if files != 0:
                    print '{}--Something'.format(section)
                else:
                    print '{} --Nothing'.format(section)
            except Exception as e:
                print e
I tried checking for 1 or 0 and True or False, but nothing seems to work. On some servers the path '/opt/nds/' does not exist, so in that case nothing will be in files. I want to differentiate between something being returned to files and nothing being returned.
First, you're hiding stdout.
If you get rid of that, you'll get a string with the outcome of the command on the remote host. You can then split it by os.linesep (assuming the same platform), but you should also take care of other things like SSH banners and colours in the retrieved output.
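A sketch of that idea, assuming Fabric 1.x, where run() returns a string-like object that also carries .failed and .return_code attributes (untested):
from fabric.api import run, settings, hide

def check_nds(host):
    # warn_only stops Fabric from aborting on a non-zero exit status
    with settings(hide('warnings', 'running', 'stderr'),
                  warn_only=True, host_string=host):
        result = run('ls -ltr /opt/nds')
        if result.failed:
            print 'Nothing (ls exited with {}, path probably missing)'.format(result.return_code)
        elif result.strip():
            print 'Something'
        else:
            print 'Nothing'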
As perror commented already, the python subprocess module offers the right tools.
https://docs.python.org/2/library/subprocess.html
For your specific problem you can use the check_output function.
The documentation gives the following example:
import subprocess
subprocess.check_output(["echo", "Hello World!"])
which gives 'Hello World!\n'.
plumbum is a great library for running shell commands from a python script. E.g.:
from plumbum.cmd import ls
from plumbum import ProcessExecutionError

cmd = ls['-ltr']['/opt/nds']  # construct the command
try:
    files = cmd().splitlines()  # run the command
    if ...:
        print ...
except ProcessExecutionError:
    # command exited with a non-zero status code
    ...
On top of this basic usage (and unlike the subprocess module), it also supports things like output redirection and command pipelining, and more, with easy, intuitive syntax (by overloading Python operators, such as '|' for piping).
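Filled in for your specific check, that might look like this (sketch):
from plumbum.cmd import ls
from plumbum import ProcessExecutionError

try:
    files = ls['-ltr']['/opt/nds']().splitlines()
    if files:
        print 'Something'
    else:
        print 'Nothing'
except ProcessExecutionError:
    # non-zero exit status, e.g. /opt/nds does not exist
    print 'Nothing'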
In order to get more control of the process you run, you need to use the subprocess module.
Here is an example of code:
import subprocess
task = subprocess.Popen(['ls', '-ltr', '/opt/nds'], stdout=subprocess.PIPE)
print task.communicate()
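Extending that to the actual question, a sketch that checks both the exit status and whether stdout is empty (communicate() returns a (stdout, stderr) tuple, and returncode is set once the process has finished):
import subprocess

task = subprocess.Popen(['ls', '-ltr', '/opt/nds'],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = task.communicate()

if task.returncode != 0:
    print "Nothing: command failed ({})".format(err.strip())
elif out.strip():
    print "Something"
else:
    print "Nothing"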

Logging last Bash command to file from script

I write lots of small scripts to manipulate files on a Bash-based server. I would like to have a mechanism by which to log which commands created which files in a given directory. However, I don't just want to capture every input command, all the time.
Approach 1: a wrapper script that uses a Bash builtin (a la history or fc -ln -1) to grab the last command and write it to a log file. I have not been able to figure out any way to do this, as the shell builtin commands do not appear to be recognized outside of the interactive shell.
Approach 2: a wrapper script that pulls from ~/.bash_history to get the last command. This, however, requires setting up the Bash shell to flush every command to history immediately (as per this comment) and seems also to require that the history be allowed to grow inexorably. If this is the only way, so be it, but it would be great to avoid having to edit the ~/.bashrc file on every system where this might be implemented.
Approach 3: use script. My problem with this is that it requires multiple commands to start and stop the logging, and because it launches its own shell it is not callable from within another script (or at least, doing so complicates things significantly).
I am trying to figure out an implementation that's of the form log_this.script other_script other_arg1 other_arg2 > file, where everything after the first argument is logged. The emphasis here is on efficiency and minimizing syntax overhead.
EDIT: iLoveTux and I both came up with similar solutions. For those interested, my own implementation follows. It is somewhat more constrained in its functionality than the accepted answer, but it also auto-updates any existing logfile entries with changes (though not deletions).
Sample usage:
$ cmdlog.py "python3 test_script.py > test_file.txt"
creates a log file in the parent directory of the output file with the following:
2015-10-12#10:47:09 test_file.txt "python3 test_script.py > test_file.txt"
Additional file changes are added to the log:
$ cmdlog.py "python3 test_script.py > test_file_2.txt"
the log now contains
2015-10-12#10:47:09 test_file.txt "python3 test_script.py > test_file.txt"
2015-10-12#10:47:44 test_file_2.txt "python3 test_script.py > test_file_2.txt"
Running on the original file name again changes the file order in the log, based on modification time of the files:
$ cmdlog.py "python3 test_script.py > test_file.txt"
produces
2015-10-12#10:47:44 test_file_2.txt "python3 test_script.py > test_file_2.txt"
2015-10-12#10:48:01 test_file.txt "python3 test_script.py > test_file.txt"
Full script:
#!/usr/bin/env python3
'''
A wrapper script that will write the command-line
args associated with any files generated to a log
file in the directory where the files were made.
'''
import sys
import os
from os import listdir
from os.path import isfile, join
import subprocess
import time
from datetime import datetime

def listFiles(mypath):
    """
    Return relative paths of all files in mypath
    """
    return [join(mypath, f) for f in listdir(mypath) if
            isfile(join(mypath, f))]

def read_log(log_file):
    """
    Reads a file history log and returns a dictionary
    of {filename: command} entries.
    Expects tab-separated lines of [time, filename, command]
    """
    entries = {}
    with open(log_file) as log:
        for l in log:
            l = l.strip()
            mod, name, cmd = l.split("\t")
            # cmd = cmd.lstrip("\"").rstrip("\"")
            entries[name] = [cmd, mod]
    return entries

def time_sort(t, fmt):
    """
    Turn a strftime-formatted string into a tuple
    of time info
    """
    parsed = datetime.strptime(t, fmt)
    return parsed

ARGS = sys.argv[1]
ARG_LIST = ARGS.split()

# Guess where the logfile should be put
if ">" in ARG_LIST or ">>" in ARG_LIST:
    # Get position after the redirect in the arg list
    redirect_index = max(i for i, e in enumerate(ARG_LIST) if e in (">", ">>"))
    output = ARG_LIST[redirect_index + 1]
    output = os.path.abspath(output)
    out_dir = os.path.dirname(output)
elif "cp" in ARG_LIST or "mv" in ARG_LIST:
    output = ARG_LIST[-1]
    out_dir = os.path.dirname(output)
else:
    out_dir = os.getcwd()

# Set logfile location within the inferred output directory
LOGFILE = out_dir + "/cmdlog_history.log"

# Get file list state prior to running
all_files = listFiles(out_dir)
pre_stats = [os.path.getmtime(f) for f in all_files]

# Run the desired external commands
subprocess.call(ARGS, shell=True)

# Get done time of external commands
TIME_FMT = "%Y-%m-%d#%H:%M:%S"
log_time = time.strftime(TIME_FMT)

# Get existing entries from logfile, if present
if LOGFILE in all_files:
    logged = read_log(LOGFILE)
else:
    logged = {}

# Get file list state after run is complete
post_stats = [os.path.getmtime(f) for f in all_files]
post_files = listFiles(out_dir)

# Find files whose states have changed since the external command
changed = [e[0] for e in zip(all_files, pre_stats, post_stats) if e[1] != e[2]]
new = [e for e in post_files if e not in all_files]
all_modded = list(set(changed + new))

if not all_modded:  # exit early, no need to log
    sys.exit(0)

# Replace files that have changed, add those that are new
for f in all_modded:
    name = os.path.basename(f)
    logged[name] = [ARGS, log_time]

# Write changed files to logfile
with open(LOGFILE, 'w') as log:
    for name, info in sorted(logged.items(), key=lambda x: time_sort(x[1][1], TIME_FMT)):
        cmd, mod_time = info
        if not cmd.startswith("\""):
            cmd = "\"{}\"".format(cmd)
        log.write("\t".join([mod_time, name, cmd]) + "\n")

sys.exit(0)
You can use the tee command, which stores its standard input to a file and outputs it on standard output. Pipe the command line into tee, and pipe tee's output into a new invocation of your shell:
echo '<command line to be logged and executed>' | \
tee --append /path/to/your/logfile | \
$SHELL
i.e., for your example of other_script other_arg1 other_arg2 > file,
echo 'other_script other_arg1 other_arg2 > file' | \
tee --append /tmp/mylog.log | \
$SHELL
If your command line needs single quotes, they need to be escaped properly.
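For example (illustrative), each embedded single quote can be written as '\'' inside the single-quoted command:
echo 'awk '\''{ print $2 }'\'' data.txt > cols.txt' | \
tee --append /tmp/mylog.log | \
$SHELL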
OK, so you don't mention Python in your question, but it is tagged Python, so I figured I would see what I could do. I came up with this script:
import sys
from os.path import expanduser, join
from subprocess import Popen, PIPE

def issue_command(command):
    # with shell=True, command must be a single string, not a list
    process = Popen(command, stdout=PIPE, stderr=PIPE, shell=True)
    return process.communicate()

home = expanduser("~")
log_file = join(home, "command_log")
command = " ".join(sys.argv[1:])
with open(log_file, "a") as fout:
    fout.write("{}\n".format(command))
out, err = issue_command(command)
which you can call like (if you name it log_this and make it executable):
$ log_this echo hello world
and it will put "echo hello world" in the file ~/command_log. Note, though, that if you want to use pipes or redirection you have to quote your command (this may or may not be a real downfall for your use case, but I haven't figured out how to avoid the quotes just yet), like this:
$ log_this "echo hello world | grep h >> /tmp/hello_world"
but since it's not perfect, I thought I would add a little something extra.
The following script allows you to specify a different file to log your commands to as well as record the execution time of the command:
#!/usr/bin/env python
from subprocess import Popen, PIPE
import argparse
from os.path import expanduser, join
from time import time

def issue_command(command):
    # with shell=True, command must be a single string, not a list
    process = Popen(command, stdout=PIPE, stderr=PIPE, shell=True)
    return process.communicate()

home = expanduser("~")
default_file = join(home, "command_log")

parser = argparse.ArgumentParser()
parser.add_argument("-f", "--file", type=argparse.FileType("a"), default=default_file)
parser.add_argument("-p", "--profile", action="store_true")
parser.add_argument("command", nargs=argparse.REMAINDER)
args = parser.parse_args()

command = " ".join(args.command)
if args.profile:
    start = time()
    out, err = issue_command(command)
    runtime = time() - start
    args.file.write("{}\t{}\n".format(command, runtime))
else:
    out, err = issue_command(command)
    args.file.write("{}\n".format(command))
args.file.close()
You would use this the same way as the other script, but if you wanted to specify a different file to log to just pass -f <FILENAME> before your actual command and your log will go there, and if you wanted to record the execution time just provide the -p (for profile) before your actual command like so:
$ log_this -p -f ~/new_log "echo hello world | grep h >> /tmp/hello_world"
I will try to make this better, but if you can think of anything else this could do for you, I am making a github project for this where you can submit bug reports and feature requests.
