I created a batch file to run files in sequence, but my Python file takes input (from a raw_input call), and I am trying to figure out how to supply that input through the batch file.
run.bat
The program doesn't proceed to the next line after the .py file is executed; for brevity I've shown only the necessary commands.
cd C:\Users\myname\Desktop
python myfile.py
stop
myfile.py
print ("Enter environment (dev | qa | prod) or stop to STOP")
environment = raw_input()
Here's a solution.
Take your myfile.py file, and change it to the following:
import sys

def main(arg=None):
    # Put all of your original myfile.py code here
    # You could also use raw_input as a fallback, if no arg is provided:
    if arg is None:
        arg = raw_input()
    # Keep going with the rest of your script

# if __name__ == "__main__" ensures this code doesn't run when the module is imported
if __name__ == "__main__":
    # arg = sys.argv[1] allows you to run myfile.py directly one time,
    # with the first command-line parameter, if you want
    if len(sys.argv) > 1:
        arg = sys.argv[1]
    else:
        arg = None
    main(arg)
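With that guard in place you can still run the script on its own, passing the environment as the first argument, e.g.:
python myfile.py dev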
Then create another python file, called wrapper.py:
# importing myfile here allows you to use it as its own self-contained module
import sys, myfile

# loop through the command-line params starting at index 1
# (index 0 is the name of the script itself)
for arg in sys.argv[1:]:
    myfile.main(arg)
And then at the command line, you can simply type:
python wrapper.py dev qa prod
You could also put the above line of code in your run.bat file, making it look as follows:
cd C:\Users\myname\Desktop
python wrapper.py dev qa prod
stop
This question is not related to Python but to your shell. According to http://ss64.com/nt/syntax-redirection.html, cmd.exe uses the same redirection syntax as a Unix shell (cmd1 | cmd2), so your .bat file should work fine when it is called with a command that sends the desired input to standard output.
Edit: added example
echo "dev" | run.bat
C:\Python27\python.exe myfile.py
Enter environment (dev | qa | prod) or stop to STOP
environment="dev"
I have a shell script which I invoke with three parts: the script's file name, a directory for the Python script to run on, and the name of the test data directory (a date). I want to write a unit test which executes the command below; depending on the data available for that date, it would either pass or fail.
main_config.sh
yamldir=$1
endDate=$2  # the date argument used below
for yaml in $(ls ${yamldir}/*.yaml | grep -v "export_config.yaml"); do
    if [ "$yaml" != "export_config.yaml" ]; then
        echo "Running export for $yaml file...";
        python valid.py -p ${yamldir}/export_config.yaml -e $yaml -d ${endDate}
        wait
    fi
done
This is what is executed on the command line
./main_config.sh /Users/name/Desktop/yaml/ 2018-12-23
This will fail and produce the following output on the terminal, since there is no directory called 2018-12-23:
./main_config.sh /yaml/ 2018-12-23
Running export for apa.yaml file...
apa.json does not exist
If the directory existed, this would pass and output on the terminal:
Running export for apa.yaml file...
File Name: apa.json Exists
File Size: 234 Bytes
Writing to file
My Python script is as follows:
import argparse
import logging
import funcs  # helper module providing read_config()

def main(get_config):
    cfg = get_config()[0]   # export_config.yaml
    data = get_config()[1]  # export_apa.yaml
    date = get_config()[2]  # data folder - YYYY-MM-DD
    # Conditional Logic

def get_config():
    parser = argparse.ArgumentParser()
    parser.add_argument("-p", "--parameter-file", action="store", required=True)
    parser.add_argument("-e", "--export-data-file", action="store", required=True)
    parser.add_argument("-d", "--export-date", action="store", required=False)
    args = parser.parse_args()
    return [funcs.read_config(args.parameter_file), funcs.read_config(args.export_data_file), args.export_date]

if __name__ == "__main__":
    logging.getLogger().setLevel(logging.INFO)
    main(get_config)
To me it looks like this is not a typical unit test (that tests a function or method) but an integration test (that tests a subsystem from the outside). But of course you could still solve this with your typical Python testing tools like unittest.
A simple solution would be to run your script using subprocess, capture the output, and then parse that output as part of your test:
import unittest
import os
import sys

if os.name == 'posix' and sys.version_info[0] < 3:
    import subprocess32 as subprocess
else:
    import subprocess

class TestScriptInvocation(unittest.TestCase):

    def setUp(self):
        """call the script and record its output"""
        result = subprocess.run(["./main_config.sh", "/Users/yasserkhan/Desktop/yaml/", "2018-12-23"],
                                stdout=subprocess.PIPE)
        self.returncode = result.returncode
        # strip() drops the trailing newline so the last element is a real line
        self.output_lines = result.stdout.decode('utf-8').strip().split('\n')

    def test_returncode(self):
        self.assertEqual(self.returncode, 0)

    def test_last_line_indicates_success(self):
        self.assertEqual(self.output_lines[-1], 'Writing to file')

if __name__ == '__main__':
    unittest.main()
Note that this code uses the backport of the Python 3 subprocess module. Also, it tries to decode the contents of result.stdout because on Python 3 that would be a bytes object and not a str as on Python 2. I didn't test it, but these two things should make the code portable between 2 and 3.
Also note that using absolute paths like "/Users/yasserkhan/Desktop/yaml" could easily break, so you will either need to find a relative path or pass a base path to your tests using environment variables for example.
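For example, a minimal sketch of the environment-variable idea (the variable name YAML_BASE_DIR is made up for illustration):
import os

# Hypothetical: let each machine set YAML_BASE_DIR to wherever the yaml
# directory lives, falling back to the current directory.
base = os.environ.get("YAML_BASE_DIR", ".")
result = subprocess.run(["./main_config.sh", os.path.join(base, "yaml"), "2018-12-23"],
                        stdout=subprocess.PIPE)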
You could add additional tests that parse the other lines and check for reasonable outputs like a file size in the expected range.
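For instance, a test along these lines could be added to the class above to sanity-check the reported file size (a sketch only; the exact output format is assumed from your sample):
import re

def test_file_size_is_plausible(self):
    # Find the line that looks like "File Size: 234 Bytes"
    size_lines = [l for l in self.output_lines if l.startswith("File Size:")]
    self.assertTrue(size_lines)
    size = int(re.search(r"\d+", size_lines[0]).group())
    self.assertGreater(size, 0)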
Using Python's sh, I am running a third-party shell script that requests my input (not that it matters much, but to be precise, I'm running an Ansible 2 playbook with the --step option).
As an oversimplification of what is happening, I built a simple bash script that requests an input. I believe that if I make this simple example work, I can make the original case work too.
So please consider this bash script hello.sh:
#!/bin/bash
echo "Please input your name and press Enter:"
read name
echo "Hello $name"
I can run it from Python using the sh module, but it fails to receive my input:
import errno
import sh

cmd = sh.Command('./hello.sh')
for line in cmd(_iter=True, _iter_noblock=True):
    if line == errno.EWOULDBLOCK:
        pass
    else:
        print(line)
How could I make this work?
After following this tutorial, this works for my use case:
#!/usr/bin/env python3
import errno
import sh
import sys

def sh_interact(char, stdin):
    global aggregated
    sys.stdout.write(char)
    sys.stdout.flush()
    aggregated += char
    if aggregated.endswith(":"):
        val = input()
        stdin.put(val + "\n")

cmd = sh.Command('./hello.sh')
aggregated = ""
cmd(_out=sh_interact, _out_bufsize=0)
For example, the output is:
$ ./testinput.py
Please input your name and press Enter:arod
Hello arod
There are two ways to solve this:
Using _in:
Using _in, we can pass a list whose items are taken as input by the script:
import errno
import sh

cmd = sh.Command('./read.sh')
stdin = ['hello']
for line in cmd(_iter=True, _iter_noblock=True, _in=stdin):
    if line == errno.EWOULDBLOCK:
        pass
    else:
        print(line)
Using command-line args, if you are willing to modify the script; see the sketch below.
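A minimal sketch of that second option, assuming you are free to change hello.sh to take the name as $1 (echo "Hello $1") instead of calling read:
import sh

# No interactive input needed: the name is passed as a command-line argument.
cmd = sh.Command('./hello.sh')
print(cmd('arod'))  # prints "Hello arod"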
I write lots of small scripts to manipulate files on a Bash-based server. I would like to have a mechanism by which to log which commands created which files in a given directory. However, I don't just want to capture every input command, all the time.
Approach 1: a wrapper script that uses a Bash builtin (a la history or fc -ln -1) to grab the last command and write it to a log file. I have not been able to figure out any way to do this, as the shell builtin commands do not appear to be recognized outside of the interactive shell.
Approach 2: a wrapper script that pulls from ~/.bash_history to get the last command. This, however, requires setting up the Bash shell to flush every command to history immediately (as per this comment) and seems also to require that the history be allowed to grow inexorably. If this is the only way, so be it, but it would be great to avoid having to edit the ~/.bashrc file on every system where this might be implemented.
Approach 3: use script. My problem with this is that it requires multiple commands to start and stop the logging, and because it launches its own shell it is not callable from within another script (or at least, doing so complicates things significantly).
I am trying to figure out an implementation that's of the form log_this.script other_script other_arg1 other_arg2 > file, where everything after the first argument is logged. The emphasis here is on efficiency and minimizing syntax overhead.
EDIT: iLoveTux and I both came up with similar solutions. For those interested, my own implementation follows. It is somewhat more constrained in its functionality than the accepted answer, but it also auto-updates any existing logfile entries with changes (though not deletions).
Sample usage:
$ cmdlog.py "python3 test_script.py > test_file.txt"
creates a log file in the parent directory of the output file with the following:
2015-10-12#10:47:09 test_file.txt "python3 test_script.py > test_file.txt"
Additional file changes are added to the log:
$ cmdlog.py "python3 test_script.py > test_file_2.txt"
the log now contains
2015-10-12#10:47:09 test_file.txt "python3 test_script.py > test_file.txt"
2015-10-12#10:47:44 test_file_2.txt "python3 test_script.py > test_file_2.txt"
Running on the original file name again changes the file order in the log, based on modification time of the files:
$ cmdlog.py "python3 test_script.py > test_file.txt"
produces
2015-10-12#10:47:44 test_file_2.txt "python3 test_script.py > test_file_2.txt"
2015-10-12#10:48:01 test_file.txt "python3 test_script.py > test_file.txt"
Full script:
#!/usr/bin/env python3
'''
A wrapper script that will write the command-line
args associated with any files generated to a log
file in the directory where the files were made.
'''
import sys
import os
from os import listdir
from os.path import isfile, join
import subprocess
import time
from datetime import datetime

def listFiles(mypath):
    """
    Return relative paths of all files in mypath
    """
    return [join(mypath, f) for f in listdir(mypath) if
            isfile(join(mypath, f))]

def read_log(log_file):
    """
    Reads a file history log and returns a dictionary
    of {filename: command} entries.
    Expects tab-separated lines of [time, filename, command]
    """
    entries = {}
    with open(log_file) as log:
        for l in log:
            l = l.strip()
            mod, name, cmd = l.split("\t")
            # cmd = cmd.lstrip("\"").rstrip("\"")
            entries[name] = [cmd, mod]
    return entries

def time_sort(t, fmt):
    """
    Turn a strftime-formatted string into a tuple
    of time info
    """
    parsed = datetime.strptime(t, fmt)
    return parsed
ARGS = sys.argv[1]
ARG_LIST = ARGS.split()

# Guess where logfile should be put
if ">" in ARG_LIST or ">>" in ARG_LIST:
    # Get position after redirect in arg list
    redirect_index = max(ARG_LIST.index(e) for e in ARG_LIST if e in (">", ">>"))
    output = ARG_LIST[redirect_index + 1]
    output = os.path.abspath(output)
    out_dir = os.path.dirname(output)
elif "cp" in ARG_LIST or "mv" in ARG_LIST:
    output = ARG_LIST[-1]
    out_dir = os.path.dirname(output)
else:
    out_dir = os.getcwd()

# Set logfile location within the inferred output directory
LOGFILE = out_dir + "/cmdlog_history.log"

# Get file list state prior to running
all_files = listFiles(out_dir)
pre_stats = [os.path.getmtime(f) for f in all_files]

# Run the desired external commands
subprocess.call(ARGS, shell=True)

# Get done time of external commands
TIME_FMT = "%Y-%m-%d#%H:%M:%S"
log_time = time.strftime(TIME_FMT)

# Get existing entries from logfile, if present
if LOGFILE in all_files:
    logged = read_log(LOGFILE)
else:
    logged = {}

# Get file list state after run is complete
post_stats = [os.path.getmtime(f) for f in all_files]
post_files = listFiles(out_dir)

# Find files whose states have changed since the external command
changed = [e[0] for e in zip(all_files, pre_stats, post_stats) if e[1] != e[2]]
new = [e for e in post_files if e not in all_files]
all_modded = list(set(changed + new))

if not all_modded:  # exit early, no need to log
    sys.exit(0)

# Replace files that have changed, add those that are new
for f in all_modded:
    name = os.path.basename(f)
    logged[name] = [ARGS, log_time]

# Write changed files to logfile
with open(LOGFILE, 'w') as log:
    for name, info in sorted(logged.items(), key=lambda x: time_sort(x[1][1], TIME_FMT)):
        cmd, mod_time = info
        if not cmd.startswith("\""):
            cmd = "\"{}\"".format(cmd)
        log.write("\t".join([mod_time, name, cmd]) + "\n")

sys.exit(0)
You can use the tee command, which stores its standard input to a file and outputs it on standard output. Pipe the command line into tee, and pipe tee's output into a new invocation of your shell:
echo '<command line to be logged and executed>' | \
tee --append /path/to/your/logfile | \
$SHELL
i.e., for your example of other_script other_arg1 other_arg2 > file,
echo 'other_script other_arg1 other_arg2 > file' | \
tee --append /tmp/mylog.log | \
$SHELL
If your command line needs single quotes, they need to be escaped properly.
OK, so you don't mention Python in your question, but it is tagged Python, so I figured I would see what I could do. I came up with this script:
import sys
from os.path import expanduser, join
from subprocess import Popen, PIPE

def issue_command(command):
    process = Popen(command, stdout=PIPE, stderr=PIPE, shell=True)
    return process.communicate()

home = expanduser("~")
log_file = join(home, "command_log")
# Join the args into one string so shell=True runs the whole command line
command = " ".join(sys.argv[1:])

with open(log_file, "a") as fout:
    fout.write("{}\n".format(command))

out, err = issue_command(command)
which you can call like (if you name it log_this and make it executable):
$ log_this echo hello world
and it will put "echo hello world" in the file ~/command_log. Note, though, that if you want to use pipes or redirection you have to quote your command (this may or may not be a real downfall for your use case, but I haven't figured out how to do this without the quotes just yet), like this:
$ log_this "echo hello world | grep h >> /tmp/hello_world"
but since it's not perfect, I thought I would add a little something extra.
The following script allows you to specify a different file to log your commands to as well as record the execution time of the command:
#!/usr/bin/env python
from subprocess import Popen, PIPE
import argparse
from os.path import expanduser, join
from time import time

def issue_command(command):
    process = Popen(command, stdout=PIPE, stderr=PIPE, shell=True)
    return process.communicate()

home = expanduser("~")
default_file = join(home, "command_log")

parser = argparse.ArgumentParser()
parser.add_argument("-f", "--file", type=argparse.FileType("a"), default=default_file)
parser.add_argument("-p", "--profile", action="store_true")
parser.add_argument("command", nargs=argparse.REMAINDER)
args = parser.parse_args()

# Join the remainder args so shell=True executes the full command line
command = " ".join(args.command)

if args.profile:
    start = time()
    out, err = issue_command(command)
    runtime = time() - start
    entry = "{}\t{}\n".format(command, runtime)
    args.file.write(entry)
else:
    out, err = issue_command(command)
    entry = "{}\n".format(command)
    args.file.write(entry)
args.file.close()
You would use this the same way as the other script, but if you want to log to a different file, pass -f <FILENAME> before your actual command, and if you want to record the execution time, provide -p (for profile) before your command, like so:
$ log_this -p -f ~/new_log "echo hello world | grep h >> /tmp/hello_world"
I will try to make this better, but if you can think of anything else this could do for you, I am making a github project for this where you can submit bug reports and feature requests.
I am trying to pass a variable to an Abaqus script file (.psf) through the command line. The command-line call is made every time another script is executed, and the variable has a different value in each call. Could I have some help with the command syntax to use? I tried os.system and subprocess.Popen; both give errors.
My main script (a .py file) calls the .psf like this:
Xa=150000
abaqusCall = 'abaqus script=tt_Par.psf'
runCommand = 'cmd.exe /c ' + abaqusCall
process = subprocess.Popen(runCommand, cwd=workDir, args=Xa)
and the .psf accepts variables in this format:
import sys, os
for item in sys.argv:
    x1 = sys.argv[0]
    x2 = sys.argv[1]
    print x1, x2
Could anyone give directions in this regard?
Try this out. I am not sure what a .psf file is, but I just use .py files.
def abaqus_cmd(mycmd):
    '''
    used to execute abaqus commands in the windows OS console
    inputs : mycmd, an abaqus command
    '''
    import subprocess, sys
    try:
        retcode = subprocess.call(mycmd, shell=True)
        if retcode < 0:
            print >>sys.stderr, mycmd + "...failed during execution", -retcode
        else:
            print >>sys.stderr, mycmd + "...success"
    except OSError as e:
        print >>sys.stderr, mycmd + "...failed at execution", e
To run a simple command, do this:
abaqus_cmd('abaqus fetch job=beamExample')
To pass a variable, you can do this:
odbfile = 'test.odb'
abaqus_cmd('abaqus python odb_to_txt.py '+odbfile)
However, this is just a Python instance inside Abaqus; you cannot access the Abaqus kernel here. To access the Abaqus kernel, you need to run the script like this:
abaqus_cmd('abaqus cae noGUI=beamExample.py')
I HAVE NOT figured out how to pass variables into scripts in the abaqus kernel, see my comment
Very late to the party, but I also needed to call Abaqus scripts with variables passed in and out. My main script is in Py3, but Abaqus (2021) still uses Py27. That is not a problem: as long as your model script is Py27, you can still issue the call from Py3.
I needed to run my model script in a directory different from my Py3 main script, so in the main script I have:
aba_dir = PATH_TO_DIRECTORY_IN_WHICH_I_WANT_ABAQUS_FILES (.odb, .cae etc.)
script_dir = DIRECTORY_FOR_MODEL_SCRIPT (build_my_model.py - this is py27 script)
job_name = call_cae(cwd, aba_dir, script_dir, var1, var2, var3)
The following functions are used to build the correct command:
Function to define whether to call CAE or ODB viewer:
def caller_type(cae):
    # DIFFERENTIATE BETWEEN CAE AND VIEWER
    if cae:
        caller = 'abaqus cae noGui='
    else:
        caller = 'abaqus viewer noGui='
    return caller
Function to build command string with variables:
def build_command_string(script, cae, *args):
    # ## GET THE CAE/VIEWER CALLER
    caller = caller_type(cae)
    # ## CREATE STRING REPRESENTING ABAQUS COMMAND
    caller = caller + script
    # ## STRING ALL ARGUMENTS AS INDIVIDUAL ITEMS
    str_args = [str(arg) for arg in args]
    # ## CREATE COMMAND INITIALISER WITH CALLER
    c = ['cmd.exe', '/C', caller, '--']
    # ## APPEND STRING ARGS TO COMMAND LIST
    for arg in str_args:
        c.append(arg)
    # ## RETURN COMMAND LIST
    return c
Function to submit the command to system and return the job name:
import os
import subprocess

def call_cae(cwd, aba_dir, script, *args):
    # SET CAE TO TRUE
    cae = True
    # CHANGE TO OUTPUT DIRECTORY
    os.chdir(aba_dir)
    # ## BUILD COMMAND STRING
    command = build_command_string(script, cae, *args)
    # ## RUN SUBPROCESS COMMAND
    p1 = subprocess.run(command,
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE,
                        text=True)
    # ## RETURN TO ORIGINAL WORKING DIRECTORY (MAIN FILE)
    os.chdir(cwd)
    if p1.stdout == None or p1.stdout == '':
        # ## RETURN JOB NAME
        job_name = p1.stderr[p1.stderr.rfind('\n'):].strip('\n')
    else:
        # ## RETURN JOB NAME
        job_name = p1.stderr[p1.stderr.rfind('\n'):].strip('\n')
        # ## PRINT STATEMENT TO CHECK CONVERGENCE
        print('This job exited with error')
    return job_name
Inside my code to build and execute the abaqus model I specify:
sim_num = int(sys.argv[-3]) #var1
E = float(sys.argv[-2]) #var2
nu = float(sys.argv[-1]) #var3
and at the end of my script:
sys.stderr.write(cJob.name)
This is working for me in PyCharm, with Py3 in the main file and Abaqus Python 2.7 in the build-and-execute-model script. Hopefully helpful to others too. Now onto creating commands for the ODB output!
I copied this script from the internet, but I don't know how to use it. I am new to Python, so please help. When I execute it using
./test.py
I only see:
usage: py4sa [option]
A unix toolbox
options:
--version show program's version number and exit
-h, --help show this help message and exit
-i, --ip gets current IP Address
-u, --usage gets disk usage of homedir
-v, --verbose prints verbosely
When I type py4sa, bash says the command is not found.
The full script is:
#!/usr/bin/env python
import subprocess
import optparse
import re

# Create variables out of shell commands
# Note triple quotes can embed Bash
# You could add another bash command here
# HOLDING_SPOT="""fake_command"""

# Determines Home Directory Usage in Gigs
HOMEDIR_USAGE = """
du -sh $HOME | cut -f1
"""

# Determines IP Address
IPADDR = """
/sbin/ifconfig -a | awk '/(cast)/ { print $2 }' | cut -d':' -f2 | head -1
"""

# This function takes Bash commands and returns their output
def runBash(cmd):
    p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
    out = p.stdout.read().strip()
    return out  # This is the stdout from the shell command

VERBOSE = False

def report(output, cmdtype="UNIX COMMAND:"):
    # VERBOSE is read from the enclosing module scope
    if VERBOSE:
        print "%s: %s" % (cmdtype, output)
    else:
        print output

# Function to control option parsing in Python
def controller():
    global VERBOSE
    # Create instance of OptionParser Module, included in Standard Library
    p = optparse.OptionParser(description='A unix toolbox',
                              prog='py4sa',
                              version='py4sa 0.1',
                              usage='%prog [option]')
    p.add_option('--ip', '-i', action="store_true", help='gets current IP Address')
    p.add_option('--usage', '-u', action="store_true", help='gets disk usage of homedir')
    p.add_option('--verbose', '-v',
                 action='store_true',
                 help='prints verbosely',
                 default=False)
    # Option Handling passes correct parameter to runBash
    options, arguments = p.parse_args()
    if options.verbose:
        VERBOSE = True
    if options.ip:
        value = runBash(IPADDR)
        report(value, "IPADDR")
    elif options.usage:
        value = runBash(HOMEDIR_USAGE)
        report(value, "HOMEDIR_USAGE")
    else:
        p.print_help()

# Runs all the functions
def main():
    controller()

# This idiom means the below code only runs when executed from the command line
if __name__ == '__main__':
    main()
It seems to me you have stored the script under another name: test.py rather than py4sa. So typing ./test.py, like you did, is correct for you. The program requires arguments, however, so you have to enter one of the options listed under 'usage'.
Normally 'py4sa [OPTIONS]' would mean that OPTIONS is optional, but looking at the code we can see that it isn't:
if options.verbose:
    # ...
if options.ip:
    # ...
elif options.usage:
    # ...
else:
    # Here's a "catch all" in case no options are supplied.
    # It will show the help text you get:
    p.print_help()
Note that the program probably would not be recognized by bash even if you renamed it to py4sa, as the current directory is often not in bash's PATH. It says 'usage: py4sa (..)' because that's hard-coded into the program.
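For example (assuming the script is executable), invoking it with one of its options should run a command instead of printing the help text:
$ ./test.py -i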
The script is called "test.py". Either invoke it as such, or rename it to "py4sa".
You run a Python script using the interpreter, so:
$ python py4sa