I'm building a tester for programs in different languages, but I can't get C programs working. Currently the command is invoked like this:
codeResult = subprocess.run(self.createRunCommand(currLanguage, file),
                            input=codeToTest,
                            shell=True,
                            timeout=TIMEOUT,
                            capture_output=True)
and createRunCommand() returns:
def createRunCommand(self, language, file):
    if language == '.py':
        command = f'python {file}'
    elif language == '.c':
        if not os.path.exists(f'C:/<myPath>/{file}.out'):
            command = f'gcc -std=c11 {file} -o C:/<myPath>/{file}.out \
                        ./C:/<myPath>/{file}.out'
        else:
            command = f'./C:/<myPath>/{file}.out'
    elif language == '.java':
        command = f''
    elif language == '.cpp':
        command = f''
    return command
The input and the test itself are fine, as everything runs correctly with a Python program, but I cannot figure out how to set up C (and probably other compile-first languages).
You'll need multiple command invocations for compiled languages like C/C++, so have your createRunCommand return a list of commands.
I also changed a few things here:
- automatically figure out the language from the extension of the filename
- use a list of arguments instead of a string; it's safer
- use sys.executable for the current Python interpreter, and shutil.which("gcc") to find gcc
import os
import shlex
import shutil
import subprocess
import sys
def get_commands(file):
    """
    Get commands to (compile and) execute `file`, as a list of subprocess arguments.
    """
    ext = os.path.splitext(file)[1].lower()
    if ext == ".py":
        return [(sys.executable, file)]
    if ext in (".c", ".cpp"):
        exe_file = f"{file}.exe"
        # note: for .cpp you would typically invoke g++ (and drop -std=c11),
        # since the gcc driver does not link the C++ standard library
        return [
            (shutil.which("gcc"), "-std=c11", file, "-o", exe_file),
            (exe_file,),
        ]
    raise ValueError(f"Unsupported file type: {ext}")
filename = "foo.py"
for command in get_commands(filename):
print(f"Running: {shlex.join(command)}")
code_result = subprocess.run(command, capture_output=True)
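If you also want the stdin/timeout handling from the question, here is a minimal sketch of wiring it in (codeToTest and TIMEOUT are the question's names, assumed to be defined; stopping on a nonzero return code keeps a failed compile from cascading into the run step):

for command in get_commands(filename):
    print(f"Running: {shlex.join(command)}")
    result = subprocess.run(command, input=codeToTest,
                            capture_output=True, timeout=TIMEOUT)
    if result.returncode != 0:
        # e.g. a compile error: report it and skip the remaining steps
        print(result.stderr.decode(errors="replace"))
        break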
Below is my code for a mini IDE for a class project; I am stuck. I am trying to build an IDE that compiles Java. I downloaded the JDK and am using subprocess to pipe cmd and communicate with javac, but I need to pass the file name with its extension so that it just shows the output. I also need help with outputting to a console, because it currently only opens in the Visual Studio terminal. Please help; I will be submitting on Thursday.
import tkinter as tk
from tkinter import filedialog
from tkinter import messagebox
import subprocess
import os
name_file = os.path.basename(__file__)

# run button that opens command line
def run(self, *args):
    p1 = subprocess.Popen('cmd', shell=True, stdin=None, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    p2 = subprocess.Popen('javac name_file; java name_file', shell=True, stdin=p1.stdout)
    p1.stdout.close()
    out, err = p2.communicate()

if __name__ == "__main__":
    master = tk.Tk()
    pt = PyText(master)
    master.mainloop()
The easiest way I can think of would be to use command-line arguments (argv) to pass the file name to a Node or Java routine.
Modify your Python script to pass the info on the command line via argv:
fname = file  # the file you want to pass
# the '--' passes what follows into the sys argvs;
# this is Python notation, Java/Node may not need the '--', just a space
pythonCall = 'node javascript.js -- ' + fname
runCommand = 'cmd.exe /c ' + pythonCall
subprocess.Popen(runCommand)
Then at the start of your JavaScript, this will give you the file name that was passed in via argv; it's from the link I included below.
var args = process.argv.slice(2);
If you need more help accessing process.argv in Node:
How do I pass command line arguments to a Node.js program?
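If you stay in Python, the same idea is a bit safer with an argument list instead of string concatenation; a minimal sketch (javascript.js is the hypothetical script from above, and node is assumed to be on PATH):

import subprocess

fname = "example.txt"  # the file you want to pass
subprocess.run(["node", "javascript.js", fname])  # shows up in process.argv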
I am trying to execute several lines of bash in Python 3 and check the status of each line separately.
I first tried to use getstatusoutput from subprocess, but each line is run in a separate process that does not communicate with the others (for the sake of simplicity, the given MWE consists of setting a variable, but what I intend to do in my actual code is more complex than that — and I know about os.environ for this very specific example):
from subprocess import getstatusoutput as cmd
stat, out = cmd("export TEST=1")
stat, out = cmd("echo $TEST")
will therefore return:
>>> print(out)
(0, "")
I then tried the following:
cmdline = """export TEST=1
echo $TEST"""
stat, out = cmd(cmdline)
That works but forces me to parse the output, especially if I want to check the status of the first command (if echo works, the status returned by cmd is 0 regardless of what happened before), which is not very robust.
I saw some things using Popen (still from subprocess) but was unable to use it efficiently.
Any help would be appreciated!
To me, it looks like you are trying to share an environment variable between two processes, which is not possible.
It looks like this:
Process 1: python main.py               # TEST = ""
 |-- Process 2 --> "export TEST=1"      # changes Process 2's env variable TEST to '1'
 |-- Process 3 --> "echo $TEST"         # prints Process 3's env variable TEST (inherited from Process 1)
You can use os.environ[] to change the current environment first (process 1's variables); child processes spawned afterwards will inherit it. Something like this:
import os
import subprocess

os.environ['TEST'] = '1'
ret = subprocess.check_call('echo $TEST', shell=True)  # prints 1; check_call returns the exit status
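If you would rather not mutate the parent's own environment, a variant sketch that passes a modified copy through the env argument instead:

import os
import subprocess

env = os.environ.copy()
env['TEST'] = '1'
# the child sees TEST=1; the parent's os.environ is untouched
subprocess.check_call('echo $TEST', shell=True, env=env)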
I ended up doing the following:
- create a launch command wrapping subprocess.Popen to launch my bash commands, which in addition allows me either to retrieve the current environment or to pass a custom environment
- create a get_env to parse the return of the previous command and get a dict of the environment

The launch wrapper:
import os
import subprocess as sp

def launch(cmd_, env=os.environ, get_env=False):
    if get_env:
        cmd_ += " && printenv"
    load = sp.Popen(cmd_, shell=True, stdout=sp.PIPE, stderr=sp.PIPE, env=env)
    out = load.communicate()
    err = load.returncode
    return (err, out)
Retrieve the environment
def get_env(out, encoding='utf-8'):
    lout = str(out[0], encoding).split('\n')
    new_env = {}
    for line in lout:
        if len(line.split('=')) > 1:
            k = line.split("=")[0]
            v = "=".join(line.split("=")[1:])
            new_env[k] = v
    return new_env
(This is a simple version, it may be more complicated if you have things like functions in your environment — it happens.)
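In that case, a hedged variant is to append printenv -0 (GNU coreutils) instead of printenv, so entries are nul-terminated and values may safely contain newlines, and to split accordingly:

def get_env0(out, encoding='utf-8'):
    new_env = {}
    # entries are separated by nul bytes rather than newlines
    for entry in str(out[0], encoding).split('\x00'):
        if '=' in entry:
            k, _, v = entry.partition('=')
            new_env[k] = v
    return new_env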
Results:
I can use it as follow:
err, out = launch("export TEST=1", get_env=True)
if not err: new_env = get_env(out)
err, out = launch("echo $TEST", env=new_env)
and therefore:
>>> print(str(out[0], encoding='utf-8'))
1
Say in a terminal I type cd Desktop; as you know, it moves you to that directory. But how do I do that in Python, using raw_input("") to pick my command?
The following code reads your command using raw_input and executes it using os.system():
import os

if __name__ == '__main__':
    while True:
        exec_cmd = raw_input("enter your command: ")
        os.system(exec_cmd)
To go with your specific example, you'd do the following:
import os

if __name__ == "__main__":
    directory = raw_input("Please enter absolute path: ")
    old_dir = os.getcwd()  # in case you need your old directory
    os.chdir(directory)
I've used this technique before in some directory-maintenance functions I've written, and it works. If you want to run shell commands more generally, you'd do something like:
import subprocess

if __name__ == "__main__":
    command_list = raw_input("").split(" ")
    ret = subprocess.call(command_list)
    # from here you can check ret if you need to
But beware with this method: the system here has no knowledge of whether it's being passed a valid command, so it's likely to fail and miss exceptions. A better version might look like:
import subprocess
import sys

if __name__ == "__main__":
    command_kb = {
        "cd": True,
        "ls": True,
        # etc etc
    }
    command_list = raw_input("").split(" ")
    command = command_list[0]
    if command in command_kb:
        # do some stuff here to the input depending on the
        # function being called
        pass
    else:
        print "Command not supported"
        sys.exit(-1)
    ret = subprocess.call(command_list)
    # from here you can check ret if you need to
This method represents a list of supported commands. You can then manipulate the list of args as needed to verify it's a valid command. For instance, you can check whether the directory you're about to cd into exists and return an error to the user if not, or check whether the path name is valid once joined into an absolute path.
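For instance, a minimal sketch of that cd check (Python 2, to match raw_input above):

import os

directory = raw_input("cd to: ")
if os.path.isdir(directory):
    os.chdir(directory)
else:
    print "No such directory: %s" % directory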
Maybe you can do this:
>>> import subprocess
>>> input = raw_input("")
>>> subprocess.call(input.split())
For detailed usage, see the subprocess module documentation.
I write lots of small scripts to manipulate files on a Bash-based server. I would like to have a mechanism by which to log which commands created which files in a given directory. However, I don't just want to capture every input command, all the time.
Approach 1: a wrapper script that uses a Bash builtin (a la history or fc -ln -1) to grab the last command and write it to a log file. I have not been able to figure out any way to do this, as the shell builtin commands do not appear to be recognized outside of the interactive shell.
Approach 2: a wrapper script that pulls from ~/.bash_history to get the last command. This, however, requires setting up the Bash shell to flush every command to history immediately (as per this comment) and seems also to require that the history be allowed to grow inexorably. If this is the only way, so be it, but it would be great to avoid having to edit the ~/.bashrc file on every system where this might be implemented.
Approach 3: use script. My problem with this is that it requires multiple commands to start and stop the logging, and because it launches its own shell it is not callable from within another script (or at least, doing so complicates things significantly).
I am trying to figure out an implementation that's of the form log_this.script other_script other_arg1 other_arg2 > file, where everything after the first argument is logged. The emphasis here is on efficiency and minimizing syntax overhead.
EDIT: iLoveTux and I both came up with similar solutions. For those interested, my own implementation follows. It is somewhat more constrained in its functionality than the accepted answer, but it also auto-updates any existing logfile entries with changes (though not deletions).
Sample usage:
$ cmdlog.py "python3 test_script.py > test_file.txt"
creates a log file in the parent directory of the output file with the following:
2015-10-12#10:47:09 test_file.txt "python3 test_script.py > test_file.txt"
Additional file changes are added to the log:
$ cmdlog.py "python3 test_script.py > test_file_2.txt"
the log now contains
2015-10-12#10:47:09 test_file.txt "python3 test_script.py > test_file.txt"
2015-10-12#10:47:44 test_file_2.txt "python3 test_script.py > test_file_2.txt"
Running on the original file name again changes the file order in the log, based on modification time of the files:
$ cmdlog.py "python3 test_script.py > test_file.txt"
produces
2015-10-12#10:47:44 test_file_2.txt "python3 test_script.py > test_file_2.txt"
2015-10-12#10:48:01 test_file.txt "python3 test_script.py > test_file.txt"
Full script:
#!/usr/bin/env python3
'''
A wrapper script that will write the command-line
args associated with any files generated to a log
file in the directory where the files were made.
'''
import sys
import os
from os import listdir
from os.path import isfile, join
import subprocess
import time
from datetime import datetime
def listFiles(mypath):
    """
    Return relative paths of all files in mypath
    """
    return [join(mypath, f) for f in listdir(mypath) if
            isfile(join(mypath, f))]
def read_log(log_file):
    """
    Reads a file history log and returns a dictionary
    of {filename: command} entries.
    Expects tab-separated lines of [time, filename, command]
    """
    entries = {}
    with open(log_file) as log:
        for l in log:
            l = l.strip()
            mod, name, cmd = l.split("\t")
            entries[name] = [cmd, mod]
    return entries
def time_sort(t, fmt):
    """
    Turn a strftime-formatted string into a datetime
    object (for sorting)
    """
    parsed = datetime.strptime(t, fmt)
    return parsed
ARGS = sys.argv[1]
ARG_LIST = ARGS.split()

# Guess where logfile should be put
if ">" in ARG_LIST or ">>" in ARG_LIST:
    # Get position after redirect in arg list
    redirect_index = max(ARG_LIST.index(e) for e in ARG_LIST if e in (">", ">>"))
    output = ARG_LIST[redirect_index + 1]
    output = os.path.abspath(output)
    out_dir = os.path.dirname(output)
elif "cp" in ARG_LIST or "mv" in ARG_LIST:
    output = ARG_LIST[-1]
    out_dir = os.path.dirname(output)
else:
    out_dir = os.getcwd()
# Set logfile location within the inferred output directory
LOGFILE = out_dir + "/cmdlog_history.log"
# Get file list state prior to running
all_files = listFiles(out_dir)
pre_stats = [os.path.getmtime(f) for f in all_files]
# Run the desired external commands
subprocess.call(ARGS, shell=True)
# Get done time of external commands
TIME_FMT = "%Y-%m-%d#%H:%M:%S"
log_time = time.strftime(TIME_FMT)
# Get existing entries from logfile, if present
if LOGFILE in all_files:
    logged = read_log(LOGFILE)
else:
    logged = {}

# Get file list state after run is complete
post_stats = [os.path.getmtime(f) for f in all_files]
post_files = listFiles(out_dir)

# Find files whose states have changed since the external command
changed = [e[0] for e in zip(all_files, pre_stats, post_stats) if e[1] != e[2]]
new = [e for e in post_files if e not in all_files]
all_modded = list(set(changed + new))

if not all_modded:  # exit early, no need to log
    sys.exit(0)

# Replace files that have changed, add those that are new
for f in all_modded:
    name = os.path.basename(f)
    logged[name] = [ARGS, log_time]

# Write changed files to logfile
with open(LOGFILE, 'w') as log:
    for name, info in sorted(logged.items(), key=lambda x: time_sort(x[1][1], TIME_FMT)):
        cmd, mod_time = info
        if not cmd.startswith("\""):
            cmd = "\"{}\"".format(cmd)
        log.write("\t".join([mod_time, name, cmd]) + "\n")

sys.exit(0)
You can use the tee command, which copies its standard input to a file and also passes it through to standard output. Pipe the command line into tee, and pipe tee's output into a new invocation of your shell:
echo '<command line to be logged and executed>' | \
tee --append /path/to/your/logfile | \
$SHELL
i.e., for your example of other_script other_arg1 other_arg2 > file,
echo 'other_script other_arg1 other_arg2 > file' | \
tee --append /tmp/mylog.log | \
$SHELL
If your command line needs single quotes, they need to be escaped properly.
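Since the question is tagged Python, here is a sketch of driving the same pipeline from a script, with shlex.quote doing the single-quote escaping (the log path is illustrative):

import shlex
import subprocess

cmdline = 'other_script other_arg1 other_arg2 > file'
pipeline = "echo {} | tee --append /tmp/mylog.log | $SHELL".format(shlex.quote(cmdline))
subprocess.run(pipeline, shell=True)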
OK, so you don't mention Python in your question, but it is tagged Python, so I figured I would see what I could do. I came up with this script:
import sys
from os.path import expanduser, join
from subprocess import Popen, PIPE

def issue_command(command):
    process = Popen(command, stdout=PIPE, stderr=PIPE, shell=True)
    return process.communicate()

home = expanduser("~")
log_file = join(home, "command_log")
# join the args into a single string, since shell=True expects a string
command = " ".join(sys.argv[1:])
with open(log_file, "a") as fout:
    fout.write("{}\n".format(command))
out, err = issue_command(command)
which you can call like this (if you name it log_this and make it executable):
$ log_this echo hello world
and it will put "echo hello world" in a file ~/command_log. Note, though, that if you want to use pipes or redirection you have to quote your command (this may or may not be a real downfall for your use case; I haven't figured out how to avoid the quotes just yet), like this:
$ log_this "echo hello world | grep h >> /tmp/hello_world"
but since it's not perfect, I thought I would add a little something extra.
The following script allows you to specify a different file to log your commands to as well as record the execution time of the command:
#!/usr/bin/env python
from subprocess import Popen, PIPE
import argparse
from os.path import expanduser, join
from time import time
def issue_command(command):
    # args.command arrives as a list; join it into one string for shell=True
    if isinstance(command, list):
        command = " ".join(command)
    process = Popen(command, stdout=PIPE, stderr=PIPE, shell=True)
    return process.communicate()
home = expanduser("~")
default_file = join(home, "command_log")
parser = argparse.ArgumentParser()
parser.add_argument("-f", "--file", type=argparse.FileType("a"), default=default_file)
parser.add_argument("-p", "--profile", action="store_true")
parser.add_argument("command", nargs=argparse.REMAINDER)
args = parser.parse_args()
if args.profile:
    start = time()
    out, err = issue_command(args.command)
    runtime = time() - start
    entry = "{}\t{}\n".format(" ".join(args.command), runtime)
    args.file.write(entry)
else:
    out, err = issue_command(args.command)
    entry = "{}\n".format(" ".join(args.command))
    args.file.write(entry)
args.file.close()
You would use this the same way as the other script, but if you want to log to a different file, just pass -f <FILENAME> before your actual command and your log will go there; if you want to record the execution time, just provide -p (for profile) before your actual command, like so:
$ log_this -p -f ~/new_log "echo hello world | grep h >> /tmp/hello_world"
I will try to make this better, but if you can think of anything else it could do for you, I am making a GitHub project for it where you can submit bug reports and feature requests.
I'm attempting to execute a command over SSH, but bash on the other end doesn't think it's escaped properly.
Here, self._client is a paramiko.SSHClient object; args is a list of arguments, the command to execute.
def run(self, args, stdin=None, capture_stdout=False):
    """Runs a command.

    On success, returns the output, if requested, or None.
    On failure, raises CommandError, with stderr and, if captured, stdout,
    as well as the exit code.
    """
    command = ' '.join(_shell_escape(arg) for arg in args)
    print('About to run command:\n  {}'.format(command))
    print('About to run command:\n  {!r}'.format(command))
    channel = self._client.get_transport().open_session()
    channel.exec_command(command)
_shell_escape:
_SHELL_SAFE = _re.compile(r'^[-A-Za-z0-9_./]+$')

def _shell_escape(s):
    if _SHELL_SAFE.match(s):
        return s
    return '\'{}\''.format(s.replace('\'', '\'\\\'\''))
I'm attempting to run some Python through this. On stderr, I get back:
bash: -c: line 5: unexpected EOF while looking for matching `''
bash: -c: line 6: syntax error: unexpected end of file
The output from the two print statements:
About to run command:
python -c 'import os, sys
path = sys.argv[1]
if sys.version_info.major == 2:
path = path.decode('\''utf-8'\'')
entries = os.listdir(path)
out = b'\'''\''.join(e.encode('\''utf-8'\'') + b'\'''\'' for e in entries)
sys.stdout.write(out)
' .
About to run command:
"python -c 'import os, sys\npath = sys.argv[1]\nif sys.version_info.major == 2:\n path = path.decode('\\''utf-8'\\'')\nentries = os.listdir(path)\nout = b'\\'''\\''.join(e.encode('\\''utf-8'\\'') + b'\\''\x00'\\'' for e in entries)\nsys.stdout.write(out)\n' ."
If I copy and paste the output of command, and paste it into bash, it executes, so it really does appear to be properly escaped. My current understanding is that SSH, on the other end, will take command, and run [my_shell, '-c', command].
Why is bash erroring on that command?
The input contains an embedded nul character, which bash appears to treat as the end of the string. (I'm not sure there's any way it couldn't!). This is visible in my question, where I output command:
About to run command:
"python -c 'import os, sys [SNIP…] + b'\\''\x00'\\'' for [SNIP…]"
That's repr output, but notice the single backslash before the x in \x00: that's an actual \x00 that made it through. My original code has this Python embedded as a snippet, which I didn't include (I didn't believe it was relevant):
_LS_CODE = """\
import os, sys
path = sys.argv[1]
if sys.version_info.major == 2:
path = path.decode('utf-8')
entries = os.listdir(path)
out = b''.join(e.encode('utf-8') + b'\x00' for e in entries)
sys.stdout.write(out)
"""
Here, Python's """ is still processing \ as an escape character, so the \x00 becomes an actual nul byte. I need to double up the backslash, or use a raw string (r""").
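A quick check of the difference:

>>> len("\x00")   # an actual nul character
1
>>> len(r"\x00")  # backslash, 'x', '0', '0'
4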
You need to escape newlines as well. A better option is to put the program text in a here document.
Make the output of "About to run command:" look like the following, where - tells python to read the program from stdin and . is still passed as sys.argv[1]:
python - . <<'EOF'
import os, sys
path = sys.argv[1]
if sys.version_info.major == 2:
    path = path.decode('utf-8')
entries = os.listdir(path)
out = b''.join(e.encode('utf-8') + b'\x00' for e in entries)
sys.stdout.write(out)
EOF
Maybe you wouldn't need to escape anything at all.
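A hedged sketch of what that looks like for the paramiko call in the question (PYEOF is an arbitrary delimiter; quoting it stops the remote shell from expanding anything in the body, and _LS_CODE is assumed to be the raw-string r""" version of the snippet so the \x00 stays literal):

# quoted heredoc: the body reaches the remote python completely unescaped
command = "python - . <<'PYEOF'\n{}\nPYEOF".format(_LS_CODE)
channel.exec_command(command)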