I'm trying to run a Perl script from Python. I know that if I run the Perl script in the terminal and want its output written to a file, I need to add > results.txt after perl myCode.pl. This works fine in the terminal, but when I try to do the same in Python it doesn't work.
This is the code:
import shlex
import subprocess
args_str = "perl myCode.pl > results.txt"
args = shlex.split(args_str)
subprocess.call(args)
Despite the > results.txt, it does not write to that file; the output goes to the command line instead.
subprocess.call("perl myCode.pl >results.txt", shell=True)
or
subprocess.call(["sh", "-c", "perl myCode.pl >results.txt"])
or
with open('results.txt', 'wb', 0) as file:
    subprocess.call(["perl", "myCode.pl"], stdout=file)
The first two invoke a shell to execute the shell command perl myCode.pl > results.txt. The last one executes perl directly and has call perform the redirection itself. This is the most reliable solution.
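On Python 3.5+, the same direct-redirection approach can also be written with subprocess.run, which can additionally raise an error if perl exits with a non-zero status; a minimal sketch using the filenames from the question:

import subprocess

# Let Python open the file and hand it to the child process as stdout;
# no shell is involved, so no shell quoting issues arise.
with open("results.txt", "wb") as outfile:
    subprocess.run(["perl", "myCode.pl"], stdout=outfile, check=True)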
I would like to save the input code and the output result into a file. For example, given the following Python code, code.py:
print(2+2)
print(3+2)
to create a code-and-output.txt:
>>> print(2+2)
4
>>> print(3+2)
5
But I cannot get it working. Basically, I want code-and-output.txt to capture what would happen if I ran these statements in the interactive Python interpreter (code + output).
Ways that I have tried so far:
Redirect stdout:
python code.py > code-and-output.txt
It only saves the output.
Redirect stdout and stdin:
python < code.py > code-and-output.txt
It does the same (only output).
nohup
nohup python code.py
The same problem: only output.
Script
script -q code-and-output.txt
python
print(2+2)
print(2+3)
ctrl+d
ctrl+d
It works, but I have to do it manually. Moreover, it saves some garbage that I cannot silence with -q.
Bash Script
# bash-file.sh
python &
print(2+2)
print(2+3)
Does not work: the commands run in the bash console, not in Python. It does not work with & either: the Python REPL never ends.
Using tty
Open another terminal, e.g. /dev/pts/2, and send the above bash-file.sh to it:
cat bash-file.sh > /dev/pts/2
It just copies the text to the other terminal but does not run it.
I am not interested in solutions like Jupyter and IPython. They have their own problems and do not address my requirement.
Any solution using Linux commands (preferably) or Python? Thank you.
Save this as repl_sim.py in the same directory as your code.py:
with open("code.py", 'r') as input_file:
for line in input_file:
print(f">>> {line.strip()}")
eval(line)
Then run in your terminal with the following if you want to redirect the output to a text file named code-and-output.txt:
python repl_sim.py > code-and-output.txt
-OR-
Then run in your terminal with the following if you want to see the output as well as write the matching text file:
python repl_sim.py | tee code-and-output.txt
It at least works for the example you provided as code.py.
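One caveat: eval only accepts expressions, so a line such as x = 2 in code.py would raise a SyntaxError. A variant sketch using exec, which also accepts simple one-line statements (multi-line blocks would still need smarter parsing):

namespace = {}
with open("code.py") as input_file:
    for line in input_file:
        print(f">>> {line.rstrip()}")
        # exec handles statements as well as expressions; the shared
        # namespace dict lets later lines see earlier assignments.
        exec(line, namespace)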
A pure-Python version of the first option above, so that you don't need the shell redirect.
Save this code as repl_sim.py:
import contextlib

with open('code-and-output.txt', 'w') as f:
    with contextlib.redirect_stdout(f):
        with open("code.py", 'r') as input_file:
            for line in input_file:
                print(f">>> {line.strip()}")
                eval(line)
Then run in your terminal with:
python repl_sim.py
That will result in code-and-output.txt with your desired content.
Contextlib use based on Raymond Hettinger's August 17th, 2018 tweet and the contextlib.redirect_stdout() documentation.
Are there any differences between the following 2 lines:
subprocess.Popen(command + '> output.txt', shell=True)
subprocess.Popen(command + ' &> output.txt', shell=True)
As Popen already runs the command in the background, should I use &? Does the use of & ensure that the command keeps running even if the Python script finishes executing?
Please let me know the difference between the 2 lines and also suggest which of the 2 is better.
Thanks.
&> specifies that standard error has to be redirected to the same destination that standard output is directed. Which means both the output log of the command and error log will also be written in the output.txt file.
using > alone makes only standard output being copies to the output.txt file and the standard error can be written using command 2> error.txt
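If you would rather avoid the shell altogether, Python can perform the same merged redirection itself; a minimal sketch, where some_command and arg1 are placeholders for your actual command:

import subprocess

with open("output.txt", "wb") as f:
    # stderr=subprocess.STDOUT merges standard error into standard
    # output, which is what &> does in bash.
    process = subprocess.Popen(["some_command", "arg1"],
                               stdout=f, stderr=subprocess.STDOUT)
    process.wait()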
I have added a bash command to my Python script; this bash command is used to linearise an input file (i.e. take a file with many lines and merge it into one line). It works when I manually write the name of the file into the command (in this example, the input file is called input.txt):
import subprocess
holder = subprocess.Popen('perl -ne "chomp;print;" input.txt', shell=True, stdout=subprocess.PIPE).stdout.read()
print(holder)
However, I would like this bash command to take the name of the file specified by the user in the command line, and linearise it.
I have already tried to achieve this using %s:
import subprocess
import sys
import os
path = sys.argv[1]
file = open(path, 'r')
inp = file.read()
holder = subprocess.Popen('perl -ne "chomp;print;" %s' % inp, shell=True, stdout=subprocess.PIPE).stdout.read()
print(holder)
However, when I try this, the shell freezes and shows no output; the bash $ prompt does not return, and there is no error message.
I would like to know if there is a valid way to add a user input in this case and would appreciate any help.
EDIT: just to clarify, here is what I type into my shell to run the program with the second example:
$ python3 program.py input.txt
Why not just feed the input path directly to your subprocess, like this? I'm slightly confused as to why you would want to use Perl within Python to do this, however.
import subprocess
import sys
import os
path = sys.argv[1]
holder = subprocess.Popen('perl -ne "chomp;print;" %s' % path, shell=True, stdout=subprocess.PIPE).stdout.read()
print(holder)
with this example input (test.txt):
i
i
i
i
i
the program will output:
$ python linearize.py test.txt
b'iiiii'
import subprocess
import sys
import os
holder = subprocess.Popen('perl -ne "chomp;print;" {}'.format(sys.argv[1]), shell=True, stdout=subprocess.PIPE).stdout.read()
print(holder)
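One caution about both shell=True versions: if the path the user supplies can contain spaces or shell metacharacters, it should be escaped before being interpolated into the command string. shlex.quote does that; a sketch of the same call with the path quoted:

import shlex
import subprocess
import sys

# shlex.quote escapes the path so a name like "my file.txt"
# survives the shell's word splitting intact.
path = shlex.quote(sys.argv[1])
holder = subprocess.Popen('perl -ne "chomp;print;" ' + path,
                          shell=True, stdout=subprocess.PIPE).stdout.read()
print(holder)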
But why? Why call a Perl one-liner from Python? Python has all the tools you need to do the same thing using native, debuggable, cross-platform code in one process. It does not make sense and it is bad practice.
import sys

with open(sys.argv[1]) as input_file:
    holder = ''.join(input_file.read().splitlines())
print(holder)
I have to run an executable for which input parameters are saved in a text file, say input.txt. The output is then redirected to a text file, say output.txt. In the Windows terminal, I use the command:
executable.exe < input.txt > output.txt
How can I do this from within a python program?
I understand that this can be achieved using os.system, but I wanted to do the same using the subprocess module. I was trying something like this:
input_path = '<'+input+'>'
temp = subprocess.call([exe_path, input_path, 'out.out'])
But the Python code executes the exe file without feeding the text file to it.
To redirect input/output, use the stdin and stdout parameters of call:
with open(input_path, "r") as input_fd, open("out.out", "w") as output_fd:
    temp = subprocess.call([exe_path], stdin=input_fd, stdout=output_fd)
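On Python 3.5+, the same pattern can be written with subprocess.run; a minimal sketch, assuming exe_path and input_path are defined as in the question:

with open(input_path, "r") as input_fd, open("out.out", "w") as output_fd:
    # check=True raises CalledProcessError if the executable fails.
    subprocess.run([exe_path], stdin=input_fd, stdout=output_fd, check=True)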
I am trying to run the bash command pdfcrack in Python on a remote server. This is my code:
bashCommand = "pdfcrack -f pdf123.pdf > myoutput.txt"
import subprocess
process = subprocess.Popen(bashCommand.split(), stdout=subprocess.PIPE)
output, error = process.communicate()
I, however, get the following error message:
Non-option argument myoutput2.txt
Error: file > not found
Can anybody see my mistake?
The first argument to Popen is a list containing the command name and its arguments. > is not an argument to the command, though; it is shell syntax. You could simply pass the entire line to Popen and instruct it to use the shell to execute it:
process = subprocess.Popen(bashCommand, shell=True)
(Note that since you are redirecting the output of the command to a file, though, there is no reason to set its standard output to a pipe, because there will be nothing to read.)
A better solution, though, is to let Python handle the redirection.
process = subprocess.Popen(['pdfcrack', '-f', 'pdf123.pdf'], stdout=subprocess.PIPE)
# the pipe yields bytes, so open the output file in binary mode
with open('myoutput.txt', 'wb') as fh:
    for line in process.stdout:
        fh.write(line)
        # Do whatever else you want with line
Also, don't use str.split as a replacement for the shell's word splitting. A valid command line like pdfcrack -f "foo bar.pdf" would be split into the incorrect list ['pdfcrack', '-f', '"foo', 'bar.pdf"'], rather than the correct list ['pdfcrack', '-f', 'foo bar.pdf'].
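If you do start from a shell-style command string, shlex.split applies the shell's quoting rules; for example:

import shlex

# shlex.split honours quotes, unlike str.split.
print(shlex.split('pdfcrack -f "foo bar.pdf"'))
# ['pdfcrack', '-f', 'foo bar.pdf']
print('pdfcrack -f "foo bar.pdf"'.split())
# ['pdfcrack', '-f', '"foo', 'bar.pdf"']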
The > is interpreted by the shell, but it is not valid otherwise.
So, that would work (don't split, use as-is):
process = subprocess.Popen(bashCommand, shell=True)
(and stdout=subprocess.PIPE isn't useful since all output is redirected to the output file)
But it is better to use native Python for the redirection to the output file and to pass the arguments as a list (which handles quote protection if needed):
with open("myoutput.txt","w") as f:
process = subprocess.Popen(["pdfcrack","-f","pdf123.pdf"], stdout=subprocess.PIPE)
f.write(process.read())
process.wait()
Your mistake is the > in the command.
It is not treated as a redirection to a file because that is normally the shell's job, and here you are running the command without a shell.
Use shell=True if you want bash to handle it; then you don't have to split the command into a list.
subprocess.Popen("pdfcrack -f pdf123.pdf > myoutput.txt", shell=True)