I am trying to run a shell command that tees its output to a file from within a Python script, but when I look at the tee log file it is empty. The code looks like this:
val_home = os.environ["xyz"]
pwd = os.getcwd()
os.chdir(val_home)
os.chdir("..")
avk_home = os.environ["abc"]
del os.environ['abc']
command = "source %s/somefile/ %s/inputparamater | tee config.log" %(val_home,val_home)
execute(command)  # function written to execute a shell command; no issue with this
assert search_string("config.log","Success") ,"Target config failed"
rm_file(['config.log'])
os.environ["AVKRUN_HOME"] = avk_home
os.chdir(pwd)
The file config.log is empty, but when I run the command manually I see the tee output in the file.
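For reference, a minimal sketch of how such a pipeline is commonly run from Python; the execute() helper is not shown in the question, so this stand-in is an assumption. Since source is a bash builtin (POSIX sh only has "."), the string has to be handed to bash for both source and the | tee config.log part to behave as they do in a terminal:
import subprocess

# Assumption: a stand-in for the question's execute() helper.
# source is a bash builtin, so the pipeline must go through bash
# for both `source` and `| tee config.log` to work.
subprocess.run(command, shell=True, executable="/bin/bash")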
I'm having an issue with a simple Python script that reads a helm command from a .sh script and runs it.
When I run the command directly in the terminal, it runs fine:
helm list | grep prod- | cut -f5
# OUTPUT: prod-L2.0.3.258
But when I run python test.py (see below for whole source code of test.py), I get an error as if the command I'm running is helm list -f5 and not helm list | grep prod- | cut -f5:
user@node1:$ python test.py
# OUTPUT:
# Opening file 'helm_chart_version.sh' for reading...
# Running command 'helm list | grep prod- | cut -f5'...
# Error: unknown shorthand flag: 'f' in -f5
The test.py script:
import subprocess
# Open file for reading
file = "helm_chart_version.sh"
print("Opening file '" + file + "' for reading...")
bashCommand = ""
with open(file) as fh:
    next(fh)
    bashCommand = next(fh)
print("Running command '" + bashCommand + "'...")
process = subprocess.Popen(bashCommand.split(), stdout=subprocess.PIPE)
output, error = process.communicate()
if error is None:
    print output
else:
    print error
Contents of helm_chart_version.sh:
cat helm_chart_version.sh
# OUTPUT:
# #!/bin/bash
# helm list | grep prod- | cut -f5
Try to avoid running complex shell pipelines from higher-level languages. Given the command you show, you can run helm list as a subprocess, and then do the post-processing on it in Python.
process = subprocess.run(["helm", "list"], capture_output=True, text=True, check=True)
for line in process.stdout.splitlines():
    if 'prod-' not in line:
        continue
    words = line.split()
    print(words[4])
The actual Python script you show doesn't seem to be semantically different from just directly running the shell script. You can use the sh -x option or the shell set -x command to cause it to print out each line as it executes.
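If you do want to run the pipeline exactly as it is written in the .sh file, a sketch of the alternative (my addition, not part of the original answer) is to hand the whole line to a shell instead of splitting it into words:
import subprocess

# Assumption: bashCommand is the pipeline string read from the .sh file.
# Handing the whole string to a shell lets "|" be interpreted as a pipe
# instead of being passed to helm as a literal argument.
process = subprocess.run(bashCommand, shell=True, capture_output=True, text=True)
print(process.stdout)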
I have a Python script that should run 7z.exe with the "x" command and the " -o" switch using subprocess.run(). The script is as follows:
import os
import subprocess as sb

zipperpath = "C:\\Program Files\\7-zip\\7z.exe"
dirname = "C:\\Users\\ajain\\Desktop\\TempData"
archivename = "UnprocessedData_v3.7z"
outputfilename = "foo"
sb.run([zipperpath, "x", os.path.join(dirname, archivename), " -o", os.path.join(dirname, outputfilename)])
Although the return code is 0, the archive never gets unzipped.
Try this:
import subprocess

# cmd is your command line, exactly as you would type it in your console
cmd = '7z.exe x UnprocessedData_v3.7z'
process = subprocess.Popen(cmd, shell=True, stdin=None, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

# The output from your shell command, as a list of lines
result = process.stdout.readlines()
if len(result) >= 1:
    for line in result:
        print(line)
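For completeness, here is a sketch of the question's original list-style call with the switch fixed; the paths are the ones from the question, and 7-Zip requires the output directory to be joined directly to -o with no space in between:
import os
import subprocess as sb

zipperpath = "C:\\Program Files\\7-zip\\7z.exe"
dirname = "C:\\Users\\ajain\\Desktop\\TempData"
archivename = "UnprocessedData_v3.7z"
outputfilename = "foo"

# "-o<dir>" must be a single argument, with no space after -o
sb.run([zipperpath, "x",
        os.path.join(dirname, archivename),
        "-o" + os.path.join(dirname, outputfilename)],
       check=True)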
I am trying to use Python's subprocess.run function to execute the following command:
pdftoppm -jpeg -f 1 -scale-to 200 data/andromeda.pdf and-page
pdftoppm is part of the poppler utilities and generates images from PDF files.
The file data/andromeda.pdf exists. The data folder is at the same level as the Python script and/or the directory I run the command from.
The command basically generates a JPEG file from page 1 (-f 1), 200px wide (-scale-to 200), of the given file, producing and-page-1.jpeg (and-page is the so-called ppmroot).
Long story short: from the command line it works as expected, i.e. if I call the above command manually from either zsh or bash, it generates the thumbnail. However, if I run it via the Python subprocess module, it fails and returns error code 99!
Following is python code (file name is sc_02_thumbnails.py):
import subprocess
import sys
def main(filename, ppmroot):
    cmd = [
        'pdftoppm',
        '-f 1',
        '-scale-to 200',
        '-jpeg',
        filename,
        ppmroot
    ]
    result = subprocess.run(
        cmd,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE
    )
    if result.returncode:
        print("Failed to generate thumbnail. Return code: {}. stderr: {}".format(
            result.returncode,
            result.stderr
        ))
        print("Used cmd: {}".format(' '.join(cmd)))
        sys.exit(1)
    else:
        print("Success!")

if __name__ == "__main__":
    if len(sys.argv) > 2:
        filename = sys.argv[1]
        ppmroot = sys.argv[2]
    else:
        print("Usage: {} <pdffile> <ppmroot>".format(sys.argv[0]))
        sys.exit(1)
    main(filename, ppmroot)
And here is the repo, which includes the data/andromeda.pdf file as well.
I call my script as follows (from zsh):
$ chmod +x ./sc_02_thumbnails.py
$ ./sc_02_thumbnails.py data/andromeda.pdf and-page
and ... thumbnail generation fails!
I have tried executing the Python script from both zsh and bash :(
What I am doing wrong?
The quoting is wrong; you should have '-f', '1', etc.
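In other words, every switch and its value must be separate list elements. A corrected version of the cmd list from the question might look like this:
cmd = [
    'pdftoppm',
    '-f', '1',             # page number as its own argument
    '-scale-to', '200',    # target size as its own argument
    '-jpeg',
    filename,
    ppmroot
]
result = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)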
I write lots of small scripts to manipulate files on a Bash-based server. I would like to have a mechanism by which to log which commands created which files in a given directory. However, I don't just want to capture every input command, all the time.
Approach 1: a wrapper script that uses a Bash builtin (a la history or fc -ln -1) to grab the last command and write it to a log file. I have not been able to figure out any way to do this, as the shell builtin commands do not appear to be recognized outside of the interactive shell.
Approach 2: a wrapper script that pulls from ~/.bash_history to get the last command. This, however, requires setting up the Bash shell to flush every command to history immediately (as per this comment) and seems also to require that the history be allowed to grow inexorably. If this is the only way, so be it, but it would be great to avoid having to edit the ~/.bashrc file on every system where this might be implemented.
Approach 3: use script. My problem with this is that it requires multiple commands to start and stop the logging, and because it launches its own shell it is not callable from within another script (or at least, doing so complicates things significantly).
I am trying to figure out an implementation that's of the form log_this.script other_script other_arg1 other_arg2 > file, where everything after the first argument is logged. The emphasis here is on efficiency and minimizing syntax overhead.
EDIT: iLoveTux and I both came up with similar solutions. For those interested, my own implementation follows. It is somewhat more constrained in its functionality than the accepted answer, but it also auto-updates any existing logfile entries with changes (though not deletions).
Sample usage:
$ cmdlog.py "python3 test_script.py > test_file.txt"
creates a log file in the parent directory of the output file with the following:
2015-10-12#10:47:09 test_file.txt "python3 test_script.py > test_file.txt"
Additional file changes are added to the log;
$ cmdlog.py "python3 test_script.py > test_file_2.txt"
the log now contains
2015-10-12#10:47:09 test_file.txt "python3 test_script.py > test_file.txt"
2015-10-12#10:47:44 test_file_2.txt "python3 test_script.py > test_file_2.txt"
Running on the original file name again changes the file order in the log, based on modification time of the files:
$ cmdlog.py "python3 test_script.py > test_file.txt"
produces
2015-10-12#10:47:44 test_file_2.txt "python3 test_script.py > test_file_2.txt"
2015-10-12#10:48:01 test_file.txt "python3 test_script.py > test_file.txt"
Full script:
#!/usr/bin/env python3
'''
A wrapper script that will write the command-line
args associated with any files generated to a log
file in the directory where the files were made.
'''
import sys
import os
from os import listdir
from os.path import isfile, join
import subprocess
import time
from datetime import datetime
def listFiles(mypath):
    """
    Return relative paths of all files in mypath
    """
    return [join(mypath, f) for f in listdir(mypath) if
            isfile(join(mypath, f))]
def read_log(log_file):
    """
    Reads a file history log and returns a dictionary
    of {filename: command} entries.
    Expects tab-separated lines of [time, filename, command]
    """
    entries = {}
    with open(log_file) as log:
        for l in log:
            l = l.strip()
            mod, name, cmd = l.split("\t")
            # cmd = cmd.lstrip("\"").rstrip("\"")
            entries[name] = [cmd, mod]
    return entries
def time_sort(t, fmt):
    """
    Parse a strftime-formatted string into a datetime object
    """
    parsed = datetime.strptime(t, fmt)
    return parsed
ARGS = sys.argv[1]
ARG_LIST = ARGS.split()
# Guess where logfile should be put
if (">" or ">>") in ARG_LIST:
# Get position after redirect in arg list
redirect_index = max(ARG_LIST.index(e) for e in ARG_LIST if e in ">>")
output = ARG_LIST[redirect_index + 1]
output = os.path.abspath(output)
out_dir = os.path.dirname(output)
elif ("cp" or "mv") in ARG_LIST:
output = ARG_LIST[-1]
out_dir = os.path.dirname(output)
else:
out_dir = os.getcwd()
# Set logfile location within the inferred output directory
LOGFILE = out_dir + "/cmdlog_history.log"
# Get file list state prior to running
all_files = listFiles(out_dir)
pre_stats = [os.path.getmtime(f) for f in all_files]
# Run the desired external commands
subprocess.call(ARGS, shell=True)
# Get done time of external commands
TIME_FMT = "%Y-%m-%d#%H:%M:%S"
log_time = time.strftime(TIME_FMT)
# Get existing entries from logfile, if present
if LOGFILE in all_files:
    logged = read_log(LOGFILE)
else:
    logged = {}
# Get file list state after run is complete
post_stats = [os.path.getmtime(f) for f in all_files]
post_files = listFiles(out_dir)
# Find files whose states have changed since the external command
changed = [e[0] for e in zip(all_files, pre_stats, post_stats) if e[1] != e[2]]
new = [e for e in post_files if e not in all_files]
all_modded = list(set(changed + new))
if not all_modded:  # exit early, no need to log
    sys.exit(0)
# Replace files that have changed, add those that are new
for f in all_modded:
    name = os.path.basename(f)
    logged[name] = [ARGS, log_time]
# Write changed files to logfile
with open(LOGFILE, 'w') as log:
    for name, info in sorted(logged.items(), key=lambda x: time_sort(x[1][1], TIME_FMT)):
        cmd, mod_time = info
        if not cmd.startswith("\""):
            cmd = "\"{}\"".format(cmd)
        log.write("\t".join([mod_time, name, cmd]) + "\n")
sys.exit(0)
You can use the tee command, which stores its standard input to a file and outputs it on standard output. Pipe the command line into tee, and pipe tee's output into a new invocation of your shell:
echo '<command line to be logged and executed>' | \
tee --append /path/to/your/logfile | \
$SHELL
i.e., for your example of other_script other_arg1 other_arg2 > file,
echo 'other_script other_arg1 other_arg2 > file' | \
tee --append /tmp/mylog.log | \
$SHELL
If your command line needs single quotes, they need to be escaped properly.
OK, so you don't mention Python in your question, but it is tagged Python, so I figured I would see what I could do. I came up with this script:
import sys
from os.path import expanduser, join
from subprocess import Popen, PIPE
def issue_command(command):
    process = Popen(command, stdout=PIPE, stderr=PIPE, shell=True)
    return process.communicate()

home = expanduser("~")
log_file = join(home, "command_log")
command = sys.argv[1:]
with open(log_file, "a") as fout:
    fout.write("{}\n".format(" ".join(command)))
out, err = issue_command(command)
which you can call like (if you name it log_this and make it executable):
$ log_this echo hello world
and it will put "echo hello world" in the file ~/command_log. Note, though, that if you want to use pipes or redirection you have to quote your command (this may or may not be a real downfall for your use case, but I haven't figured out how to do it without the quotes yet), like this:
$ log_this "echo hello world | grep h >> /tmp/hello_world"
but since it's not perfect, I thought I would add a little something extra.
The following script allows you to specify a different file to log your commands to as well as record the execution time of the command:
#!/usr/bin/env python
from subprocess import Popen, PIPE
import argparse
from os.path import expanduser, join
from time import time
def issue_command(command):
    process = Popen(command, stdout=PIPE, stderr=PIPE, shell=True)
    return process.communicate()
home = expanduser("~")
default_file = join(home, "command_log")
parser = argparse.ArgumentParser()
parser.add_argument("-f", "--file", type=argparse.FileType("a"), default=default_file)
parser.add_argument("-p", "--profile", action="store_true")
parser.add_argument("command", nargs=argparse.REMAINDER)
args = parser.parse_args()
if args.profile:
    start = time()
    out, err = issue_command(args.command)
    runtime = time() - start
    entry = "{}\t{}\n".format(" ".join(args.command), runtime)
    args.file.write(entry)
else:
    out, err = issue_command(args.command)
    entry = "{}\n".format(" ".join(args.command))
    args.file.write(entry)
args.file.close()
You would use this the same way as the other script, but if you want to log to a different file, just pass -f <FILENAME> before your actual command and your log will go there. If you want to record the execution time, provide -p (for profile) before your actual command, like so:
$ log_this -p -f ~/new_log "echo hello world | grep h >> /tmp/hello_world"
I will try to make this better, but if you can think of anything else this could do for you, I am making a github project for this where you can submit bug reports and feature requests.
I have an .R file saved locally at the following path:
Rfilepath = "C:\\python\\buyback_parse_guide.r"
The command for RScript.exe is:
RScriptCmd = "C:\\Program Files\\R\\R-2.15.2\\bin\\Rscript.exe --vanilla"
I tried running:
subprocess.call([RScriptCmd,Rfilepath],shell=True)
But it returns 1 -- and the .R script did not run successfully. What am I doing wrong? I'm new to Python so this is probably a simple syntax error... I also tried these, but they all return 1:
subprocess.call('"C:\Program Files\R\R-2.15.2\bin\Rscript.exe"',shell=True)
subprocess.call('"C:\\Program Files\\R\\R-2.15.2\\bin\\Rscript.exe"',shell=True)
subprocess.call('C:\Program Files\R\R-2.15.2\bin\Rscript.exe',shell=True)
subprocess.call('C:\\Program Files\\R\\R-2.15.2\\bin\\Rscript.exe',shell=True)
Thanks!
The RScriptCmd needs to be just the executable, no command line arguments. So:
RScriptCmd = "\"C:\\Program Files\\R\\R-2.15.2\\bin\\Rscript.exe\""
Then the Rfilepath can actually be all of the arguments - and renamed:
RArguments = "--vanilla \"C:\\python\\buyback_parse_guide.r\""
It looks like you have a similar problem to mine. I had to reinstall RScript to a path which has no spaces.
See: Running Rscript via Python using os.system() or subprocess()
This is how I worked out the communication between Python and Rscript:
part in Python:
from subprocess import PIPE, Popen, call

p = Popen(["path/to/Rscript.exe", "path/to/Script.R", Arg1],
          stdout=PIPE, stderr=PIPE, stdin=PIPE)
out = p.communicate()
outValue = out[0]
outValue contains the output value written by Script.R after it has executed.
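Since communicate() returns bytes by default on Python 3, a decode step may be needed on the Python side (my addition, not part of the original answer):
# out[0] holds the raw stdout bytes written by Script.R
print(out[0].decode("utf-8"))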
part in the R-Script:
args <- commandArgs(TRUE)
argument1 <- as.character(args[1])
...
write(output, stdout())
output is the variable to send to Python