Running a command as the first command in subprocess - python

I have a function that will open a terminal:
def open_next_terminal():
    import random
    import string
    import subprocess

    def rand(path="/tmp"):
        acc = string.ascii_letters
        retval = []
        for _ in range(7):
            retval.append(random.choice(acc))
        return "{}/{}.sh".format(path, ''.join(retval))

    file_path = rand()
    with open(file_path, "a+") as data:
        data.write(
            '''
#!/bin/bash
"$#"
exec $SHELL
'''
        )
    subprocess.call(["sudo", "bash", "{}".format(file_path)])
    return file_path
I want to run a command in this newly opened terminal before anything is done in it. For example:
subprocess.call(["sudo", "bash", "{}".format(file_path)]) #<= is called
ls #<= is run
#<= some output of files and folders
root@host:~# #<= the shell is now available
Does subprocess allow a way for me to run a "first command" during the initialization of the shell?

Simply pass the shell=True parameter to subprocess.call, and you can run multiple commands (delimited by semicolons or newlines) as a single string.
subprocess.call('your_first_command.sh; your_real_work.sh', shell=True)
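Applied to the function in the question, here is a minimal sketch of that idea (open_terminal_with and first_command are illustrative names, not from the original): the desired command is written into the generated script ahead of the exec, so it runs as soon as the shell starts.
import random
import string
import subprocess

def open_terminal_with(first_command, path="/tmp"):
    # hypothetical variant of open_next_terminal(): the first command is
    # baked into the generated script before `exec $SHELL`
    name = ''.join(random.choice(string.ascii_letters) for _ in range(7))
    file_path = "{}/{}.sh".format(path, name)
    with open(file_path, "w") as script:
        script.write("#!/bin/bash\n{}\nexec $SHELL\n".format(first_command))
    subprocess.call(["sudo", "bash", file_path])
    return file_path

# e.g. open_terminal_with("ls") runs ls first, then hands over the shell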

Related

subprocess.call to run mafft

I wrote a script to run the mafft module from the terminal:
import subprocess

def linsi_MSA(sequnces_file_path):
    cmd = 'mafft --maxiterate 1000 --localpair {seqs} > {out}'.format(seqs=sequnces_file_path, out=sequnces_file_path)
    subprocess.call(cmd.split(), shell=True)

if __name__ == '__main__':
    import logging
    logger = logging.getLogger('main')
    from sys import argv
    if len(argv) < 2:
        logger.error('Usage: MSA <sequnces_file_path> ')
        exit()
    else:
        linsi_MSA(*argv[1:])
For some reason, when trying to run the script from the terminal using:
python ./MSA.py ./sample.fa
I get the interactive version of mafft opening directly in the terminal (asking for input, output, etc.).
When I write the command directly in the terminal:
mafft --maxiterate 1000 --localpair sample.fa > sample.fa
it works as expected, performing the command-line version without opening the interactive version.
I want my script to be able to perform the command-line version on the terminal. What seems to be the problem?
thanks!
If you use shell=True you should pass one string as argument, not a list, e.g.:
subprocess.call("ls > outfile", shell=True)
It's not explained in the docs, but I suspect it has to do with which low-level library function is ultimately called:

call(["ls", "-l"])             --> execlp("ls", "-l")
call("ls -l", shell=True)      --> execlp("sh", "-c", "ls -l")
call(["ls", "-l"], shell=True) --> execlp("sh", "-c", "ls", "-l")

# The last form can be tried from the command line:
sh -c ls -l
# result is a list of files without details; -l was ignored.
# see the sh(1) man page for the -c string syntax and what happens to further arguments.
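Applied to the mafft script above, a minimal sketch of the fix: pass the whole command line as one string so the shell performs the > redirection. (A separate output path is assumed here, since redirecting mafft's output onto its own input file, as the original does, would overwrite the input.)
import subprocess

def linsi_MSA(sequnces_file_path, out_file_path):
    # one string + shell=True: the shell parses the command and
    # handles the > redirection itself
    cmd = 'mafft --maxiterate 1000 --localpair {seqs} > {out}'.format(
        seqs=sequnces_file_path, out=out_file_path)
    subprocess.call(cmd, shell=True)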

How to call a python script on a new shell window from python code?

I'm trying to execute 10 python scripts from python code and open each of them in a new shell window.
My code :
for i in range(10):
    name_of_file = "myscript" + str(i) + ".py"
    cmd = "python " + name_of_file
    os.system("gnome-terminal -e 'bash -c " + cmd + "'")
But the script files are not executing; I only get a live Python interpreter in each new terminal...
Thank you guys
I would suggest using the subprocess module (https://docs.python.org/2/library/subprocess.html).
In this way, you'll write something like the following:
import subprocess

cmd = ['gnome-terminal', '-x', 'bash', '-c']

for i in range(10):
    name_of_file = "myscript" + str(i) + ".py"
    your_proc = subprocess.Popen(cmd + ['python %s' % (name_of_file)])
    # or if you want to use the "modern" way of formatting strings you can write
    # your_proc = subprocess.Popen(cmd + ['python {}'.format(name_of_file)])
    ...
and you have more control over the processes you start.
If you want to keep using os.system(), build your command string first, then pass it to the function. In your case it would be:
cmd = 'gnome-terminal -x bash -c "python {}"'.format(name_of_file)
os.system(cmd)
something along these lines.
Thanks to @anishsane for some suggestions!
I think it has to do with the string quoting of the argument to os.system. Try this:
os.system("""gnome-terminal -e 'bash -c "{}"'""".format(cmd))

Logging last Bash command to file from script

I write lots of small scripts to manipulate files on a Bash-based server. I would like to have a mechanism by which to log which commands created which files in a given directory. However, I don't just want to capture every input command, all the time.
Approach 1: a wrapper script that uses a Bash builtin (a la history or fc -ln -1) to grab the last command and write it to a log file. I have not been able to figure out any way to do this, as the shell builtin commands do not appear to be recognized outside of the interactive shell.
Approach 2: a wrapper script that pulls from ~/.bash_history to get the last command. This, however, requires setting up the Bash shell to flush every command to history immediately (as per this comment) and seems also to require that the history be allowed to grow inexorably. If this is the only way, so be it, but it would be great to avoid having to edit the ~/.bashrc file on every system where this might be implemented.
Approach 3: use script. My problem with this is that it requires multiple commands to start and stop the logging, and because it launches its own shell it is not callable from within another script (or at least, doing so complicates things significantly).
I am trying to figure out an implementation that's of the form log_this.script other_script other_arg1 other_arg2 > file, where everything after the first argument is logged. The emphasis here is on efficiency and minimizing syntax overhead.
EDIT: iLoveTux and I both came up with similar solutions. For those interested, my own implementation follows. It is somewhat more constrained in its functionality than the accepted answer, but it also auto-updates any existing logfile entries with changes (though not deletions).
Sample usage:
$ cmdlog.py "python3 test_script.py > test_file.txt"
creates a log file in the parent directory of the output file with the following:
2015-10-12#10:47:09 test_file.txt "python3 test_script.py > test_file.txt"
Additional file changes are added to the log:
$ cmdlog.py "python3 test_script.py > test_file_2.txt"
the log now contains
2015-10-12#10:47:09 test_file.txt "python3 test_script.py > test_file.txt"
2015-10-12#10:47:44 test_file_2.txt "python3 test_script.py > test_file_2.txt"
Running on the original file name again changes the file order in the log, based on modification time of the files:
$ cmdlog.py "python3 test_script.py > test_file.txt"
produces
2015-10-12#10:47:44 test_file_2.txt "python3 test_script.py > test_file_2.txt"
2015-10-12#10:48:01 test_file.txt "python3 test_script.py > test_file.txt"
Full script:
#!/usr/bin/env python3
'''
A wrapper script that will write the command-line
args associated with any files generated to a log
file in the directory where the files were made.
'''
import sys
import os
from os import listdir
from os.path import isfile, join
import subprocess
import time
from datetime import datetime


def listFiles(mypath):
    """
    Return relative paths of all files in mypath
    """
    return [join(mypath, f) for f in listdir(mypath) if
            isfile(join(mypath, f))]


def read_log(log_file):
    """
    Reads a file history log and returns a dictionary
    of {filename: command} entries.
    Expects tab-separated lines of [time, filename, command]
    """
    entries = {}
    with open(log_file) as log:
        for l in log:
            l = l.strip()
            mod, name, cmd = l.split("\t")
            # cmd = cmd.lstrip("\"").rstrip("\"")
            entries[name] = [cmd, mod]
    return entries


def time_sort(t, fmt):
    """
    Parse a strftime-formatted string into a datetime
    object, for chronological sorting
    """
    parsed = datetime.strptime(t, fmt)
    return parsed


ARGS = sys.argv[1]
ARG_LIST = ARGS.split()

# Guess where logfile should be put
if ">" in ARG_LIST or ">>" in ARG_LIST:
    # Get position after the (last) redirect in arg list
    redirect_index = max(ARG_LIST.index(e) for e in ARG_LIST
                         if e in (">", ">>"))
    output = ARG_LIST[redirect_index + 1]
    output = os.path.abspath(output)
    out_dir = os.path.dirname(output)
elif "cp" in ARG_LIST or "mv" in ARG_LIST:
    output = ARG_LIST[-1]
    out_dir = os.path.dirname(output)
else:
    out_dir = os.getcwd()

# Set logfile location within the inferred output directory
LOGFILE = out_dir + "/cmdlog_history.log"

# Get file list state prior to running
all_files = listFiles(out_dir)
pre_stats = [os.path.getmtime(f) for f in all_files]

# Run the desired external commands
subprocess.call(ARGS, shell=True)

# Get done time of external commands
TIME_FMT = "%Y-%m-%d#%H:%M:%S"
log_time = time.strftime(TIME_FMT)

# Get existing entries from logfile, if present
if LOGFILE in all_files:
    logged = read_log(LOGFILE)
else:
    logged = {}

# Get file list state after run is complete
post_stats = [os.path.getmtime(f) for f in all_files]
post_files = listFiles(out_dir)

# Find files whose states have changed since the external command
changed = [e[0] for e in zip(all_files, pre_stats, post_stats) if e[1] != e[2]]
new = [e for e in post_files if e not in all_files]
all_modded = list(set(changed + new))

if not all_modded:  # exit early, no need to log
    sys.exit(0)

# Replace entries for files that have changed, add those that are new
for f in all_modded:
    name = os.path.basename(f)
    logged[name] = [ARGS, log_time]

# Write log entries sorted by file modification time
with open(LOGFILE, 'w') as log:
    for name, info in sorted(logged.items(),
                             key=lambda x: time_sort(x[1][1], TIME_FMT)):
        cmd, mod_time = info
        if not cmd.startswith("\""):
            cmd = "\"{}\"".format(cmd)
        log.write("\t".join([mod_time, name, cmd]) + "\n")

sys.exit(0)
You can use the tee command, which copies its standard input to a file and also writes it to standard output. Pipe the command line into tee, and pipe tee's output into a new invocation of your shell:
echo '<command line to be logged and executed>' | \
tee --append /path/to/your/logfile | \
$SHELL
i.e., for your example of other_script other_arg1 other_arg2 > file,
echo 'other_script other_arg1 other_arg2 > file' | \
tee --append /tmp/mylog.log | \
$SHELL
If your command line needs single quotes, they need to be escaped properly.
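For comparison, a minimal Python sketch of the same idea (log_and_run and the logfile path are illustrative, not part of the answer above): append the command line to the log, then hand it to a shell.
import subprocess

def log_and_run(cmdline, logfile="/tmp/mylog.log"):
    # equivalent of: echo "$cmdline" | tee --append "$logfile" | $SHELL
    with open(logfile, "a") as log:
        log.write(cmdline + "\n")
    return subprocess.call(cmdline, shell=True)

# e.g. log_and_run("other_script other_arg1 other_arg2 > file")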
OK, so you don't mention Python in your question, but it is tagged Python, so I figured I would see what I could do. I came up with this script:
import sys
from os.path import expanduser, join
from subprocess import Popen, PIPE


def issue_command(command):
    process = Popen(command, stdout=PIPE, stderr=PIPE, shell=True)
    return process.communicate()


home = expanduser("~")
log_file = join(home, "command_log")
# join the args into a single string: with shell=True, the command
# should be one string rather than a list
command = " ".join(sys.argv[1:])
with open(log_file, "a") as fout:
    fout.write("{}\n".format(command))
out, err = issue_command(command)
which you can call like (if you name it log_this and make it executable):
$ log_this echo hello world
and it will put "echo hello world" in the file ~/command_log. Note, though, that if you want to use pipes or redirection you have to quote your command (this may or may not be a real downfall for your use case, but I haven't figured out how to avoid the quotes just yet), like this:
$ log_this "echo hello world | grep h >> /tmp/hello_world"
but since it's not perfect, I thought I would add a little something extra.
The following script allows you to specify a different file to log your commands to as well as record the execution time of the command:
#!/usr/bin/env python
from subprocess import Popen, PIPE
import argparse
from os.path import expanduser, join
from time import time


def issue_command(command):
    process = Popen(command, stdout=PIPE, stderr=PIPE, shell=True)
    return process.communicate()


home = expanduser("~")
default_file = join(home, "command_log")

parser = argparse.ArgumentParser()
parser.add_argument("-f", "--file", type=argparse.FileType("a"), default=default_file)
parser.add_argument("-p", "--profile", action="store_true")
parser.add_argument("command", nargs=argparse.REMAINDER)
args = parser.parse_args()

# join the remainder args into one string for shell=True (as above)
command = " ".join(args.command)
if args.profile:
    start = time()
    out, err = issue_command(command)
    runtime = time() - start
    entry = "{}\t{}\n".format(command, runtime)
    args.file.write(entry)
else:
    out, err = issue_command(command)
    entry = "{}\n".format(command)
    args.file.write(entry)
args.file.close()
You would use this the same way as the other script. If you want to log to a different file, just pass -f <FILENAME> before your actual command and your log will go there; if you want to record the execution time, provide -p (for profile) before your command, like so:
$ log_this -p -f ~/new_log "echo hello world | grep h >> /tmp/hello_world"
I will try to make this better, but if you can think of anything else this could do for you, I am making a github project for this where you can submit bug reports and feature requests.

pass python var to bash

I'm making a script to take pictures and write them to a folder created/named with the date and time.
I made this part to create the directory and take the pictures
pathtoscript = "/home/pi/python-scripts"
current_time = time.localtime()[0:6]
dirfmt = "%4d-%02d-%02d-%02d-%02d-%02d"
dirpath = os.path.join(pathtoscript, dirfmt)
dirname = dirpath % current_time[0:6]  # dirname created with date and time
os.mkdir(dirname)  # mkdir
pictureName = dirname + "/image%02d.jpg"  # path + name of pictures
camera.capture_sequence([pictureName % i for i in range(9)])
Then I would like to pass the dirname to a bash script (picturesToServer) which uploads the pictures to a server.
How can I do it?
cmd = '/home/pi/python-scripts/picturesToServer >/dev/null 2>&1 &'
call([cmd], shell=True)
Maybe I could stay in the Python script and scp the pictures to the server? I have an ssh-agent with the passphrase set (ssh-add mykey).
Place the variable in the environment (it'll be available as a regular bash variable in the bash script, e.g. as VAR_NAME in the example below) by replacing your call with:
import subprocess
p = subprocess.Popen(cmd, shell=True, env={"VAR_NAME": dirname})
Or pass it as a positional argument (it'll be available in $1 in the script) by replacing your cmd with:
cmd = '/home/pi/python-scripts/picturesToServer >/dev/null 2>&1 "{0}" &'.format(dirname)
As a side note, consider not using shell=True when you call a subprocess; it is a bad idea for a lot of reasons that are documented in the Python docs.
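A minimal sketch of the shell-free variant (paths taken from the question; os.devnull stands in for the >/dev/null redirection): the directory name is passed as a positional argument and arrives in the script as $1, with no quoting concerns.
import os
import subprocess

# dirname comes from the directory-creation code in the question
with open(os.devnull, 'wb') as devnull:
    # Popen does not wait, so the upload runs in the background,
    # much like the trailing & in the shell version
    p = subprocess.Popen(['/home/pi/python-scripts/picturesToServer', dirname],
                         stdout=devnull, stderr=devnull)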

Executing an R script in python via subprocess.Popen

When I execute the script in R, it is:
$ R --vanilla --args test_matrix.csv < hierarchical_clustering.R > out.txt
In Python, it works if I use:
process = subprocess.call("R --vanilla --args "+output_filename+"_DM_Instances_R.csv < /home/kevin/AV-labels/Results/R/hierarchical_clustering.R > "+output_filename+"_out.txt", shell=True)
But this method doesn't provide the process.wait() function.
So I would like to use subprocess.Popen; I tried:
process = subprocess.Popen(['R', '--vanilla', '--args', "\'"+output_filename+"_DM_Instances_R.csv\'", '<', '/home/kevin/AV-labels/Results/R/hierarchical_clustering.R'])
But it didn't work, Python just opened R but didn't execute my script.
Instead of 'R', give it the path to Rscript. I had the same problem: it opens up R but doesn't execute the script. You need to call Rscript (instead of R) to actually execute it.
retcode = subprocess.call("/Pathto/Rscript --vanilla /Pathto/test.R", shell=True)
This works for me.
Cheers!
I've solved this problem by putting everything into one string inside the brackets:
process = subprocess.Popen(["R --vanilla --args "+output_filename+"_DM_Instances_R.csv < /home/kevin/AV-labels/Results/R/hierarchical_clustering.R > "+output_filename+"_out.txt"], shell=True)
process.wait()
A couple of ideas:

- You might want to consider using the Rscript frontend, which makes running scripts easier; you can pass the script filename directly as a parameter, and do not need to read the script in through standard input.
- You don't need the shell just for redirecting standard output to a file; you can do that directly with subprocess.Popen.
Example:
import subprocess

output_name = 'something'
script_filename = 'hierarchical_clustering.R'
param_filename = '%s_DM_Instances_R.csv' % output_name
result_filename = '%s_out.txt' % output_name

with open(result_filename, 'wb') as result:
    process = subprocess.Popen(['Rscript', script_filename, param_filename],
                               stdout=result)
    process.wait()
You never actually execute it fully ^^ try the following:
process = subprocess.Popen("R --vanilla --args '%s_DM_Instances_R.csv' < /home/kevin/AV-labels/Results/R/hierarchical_clustering.R" % output_filename, stdout=subprocess.PIPE, stdin=subprocess.PIPE, shell=True)
process.communicate()  # [0] is stdout
Kevin's solution works for my requirement. Just to give another example of @Kevin's solution: you can pass more parameters to the R script with Python-style string formatting:
import subprocess
process = subprocess.Popen(["R --vanilla --args %s %d %.2f < /path/to/your/rscript/transformMatrixToSparseMatrix.R" % ("sparse", 11, 0.98) ], shell=True)
process.wait()
Also, to make things easier you could create an R executable file. For this you just need to add this in the first line of the script:
#! /usr/bin/Rscript --vanilla --default-packages=utils
Reference: Using R as a scripting language with Rscript
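With that shebang in place and the file made executable (chmod +x), the script can then be launched from Python without naming an interpreter at all; a small sketch, assuming the script sits in the working directory:
import subprocess

# assumes hierarchical_clustering.R begins with the Rscript shebang
# and has been made executable
process = subprocess.Popen(['./hierarchical_clustering.R', 'input.csv'])
process.wait()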
