I am attempting to cat a CSV file to stdout and then pipe that output into a Python program that also takes one command-line argument. I ran into an issue that I think relates directly to how Python's fileinput.input() function treats the stdin file descriptor.
generic_user% cat my_data.csv | python3 my_script.py myarg1
Here is a sample Python program:
import sys, fileinput

def main(argv):
    print("The program doesn't even print this")
    data_list = []
    for line in fileinput.input():
        data_list.append(line)

if __name__ == "__main__":
    main(sys.argv)
If I run this sample program with the above terminal command but without the argument myarg1, the program parses the piped CSV data from stdin as expected.
If I run the program with the argument myarg1, it throws a FileNotFoundError because myarg1 does not exist as a file.
FileNotFoundError: [Errno 2] No such file or directory: 'myarg1'
Would someone be able to explain in detail why this behavior takes place in Python, and how to handle the logic so that a Python program can still read the stdin data when arguments are present in argv?
You can read from stdin directly:
import sys

def main(argv):
    print("The program doesn't even print this")
    data_list = []
    for line in sys.stdin:
        data_list.append(line)

if __name__ == "__main__":
    main(sys.argv)
You are trying to access a file that has not been created, so fileinput cannot open it; but since you are piping the data in, you have no need for it.
This is by design. The designers of fileinput assumed there were use cases where reading from stdin would make no sense, and instead provided a way to explicitly add stdin to the list of files. According to the reference documentation:
import fileinput
for line in fileinput.input():
    process(line)
This iterates over the lines of all files listed in sys.argv[1:], defaulting to sys.stdin if the list is empty. If a filename is '-', it is also replaced by sys.stdin.
Just keep your code and use:
generic_user% cat my_data.csv | python3 my_script.py - myarg1
to read stdin before the myarg1 file, or, if you want to read it after:
... python3 my_script.py myarg1 -
fileinput implements a pattern common for Unix utilities:
If the utility is called with commandline arguments, they are files to read from.
If it is called with no arguments, read from standard input.
So fileinput works exactly as intended. It is not clear what you are using command-line arguments for, but if you don't want to stop using fileinput, you should modify sys.argv before you invoke it.
some_keyword = sys.argv[1]
sys.argv = sys.argv[:1]  # retain only argument 0, the command name

for line in fileinput.input():
    ...
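Putting that together with the question's script, here is a minimal sketch (the way myarg1 is reported at the end is just illustrative):
import sys, fileinput

def main(argv):
    myarg1 = argv[1]     # consume the program's own argument first
    sys.argv = argv[:1]  # leave only argument 0, so fileinput falls back to stdin
    data_list = []
    for line in fileinput.input():
        data_list.append(line)
    print("Read %d lines; argument was %r" % (len(data_list), myarg1))

if __name__ == "__main__":
    main(sys.argv)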
Related
I have been trying to append the output of a command to a temporary file in Python and later do some operations on it, but I am not able to get the data into the temporary file. Any help is appreciated! My sample code is as follows.
I am getting an error like this:
with open(temp1 , 'r') as f:
TypeError: expected str, bytes or os.PathLike object, not _TemporaryFileWrapper
import tempfile
import os

temp1 = tempfile.NamedTemporaryFile()
os.system("echo Hello world | tee temp1")
with open(temp1 , 'r') as f:
    a = f.readlines()[-1]
print(a)
import tempfile
import os

# Open in text update mode ("w+") so str data can be written and read back
temp1 = tempfile.NamedTemporaryFile("w+")

# os.popen opens a pipe from the command, allowing us to capture its output
output = os.popen("echo Hello world")

# Write the command output to the temporary file
temp1.write(output.read())

# Reset the stream position to the beginning of the file before reading its contents
temp1.seek(0)
print(temp1.read())
Check out subprocess.Popen for more powerful subprocess communication.
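Since the answer points at subprocess.Popen, here is a minimal sketch of the same task using it on a Unix-like system (universal_newlines=True simply makes the pipe a text stream):
import subprocess
import tempfile

with tempfile.NamedTemporaryFile("w+") as temp1:
    # Capture the command's stdout through a pipe instead of a shell redirection
    proc = subprocess.Popen(["echo", "Hello world"],
                            stdout=subprocess.PIPE,
                            universal_newlines=True)
    out, _ = proc.communicate()
    temp1.write(out)
    temp1.seek(0)  # rewind before reading back
    print(temp1.read())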
Whatever you're trying to do isn't quite right. It appears that you want a system call to write to a file and then to read that file in your Python code. You're creating a temporary file, but your system call writes to a statically named file, 'temp1', rather than to the temporary file you've opened. So it's unclear whether you want/need a computed temporary file name or whether using 'temp1' is OK. The easiest way to fix your code to do what I think you want is this:
import os

os.system("echo Hello world | tee temp1")
with open('temp1', 'r') as f:
    a = f.readlines()[-1]
print(a)
If you need to create a temporary file name in your situation, then you have to be careful if you are at all concerned about security or thread safety. What you really want to do is have the system create a temporary directory for you, and then create a statically named file in that directory. Here's your code reworked to do that:
import tempfile
import os

with tempfile.TemporaryDirectory() as dir:
    # Use a distinct name so we don't shadow the tempfile module
    temp_path = os.path.join(dir, "temp1")
    os.system("echo Hello world > " + temp_path)
    with open(temp_path) as f:
        buf = f.read()
    print(buf)
This method has the added benefit of automatically cleaning up for you.
UPDATE: I have now seen #UlisseBordingnon's answer. That's a better solution overall. Using os.system() is discouraged. I would have gone a slightly different way by using the subprocess module, but what they suggest is 100% valid and is thread- and security-safe. I guess I'll leave my answer here, as maybe you or other readers need to use os.system() or otherwise have the shell process you execute write directly to a file.
As others have suggested, you should use the subprocess module instead of os.system. However, from subprocess you can use the most recent interface, subprocess.run (added in Python 3.5).
The neat thing about using .run is that you can pass any file-like object to stdout and the stdout stream will automatically redirect to that file.
import tempfile
import subprocess

with tempfile.NamedTemporaryFile("w+") as f:
    subprocess.run(["echo", "hello world"], stdout=f)
    # command has finished running, let's check the file
    f.seek(0)
    print(f.read())  # hello world
If you are using Python 3.7 or later (subprocess.run was added in 3.5, but capture_output requires 3.7), then using subprocess.run is better because you do not need a temporary file:
import subprocess

completed_process = subprocess.run(
    ["echo", "hello world"],
    capture_output=True,
    encoding="utf-8",
)
print(completed_process.stdout)
Notes
The capture_output parameter tells run() to save the output to the .stdout and .stderr attributes
The encoding parameter will convert the output from bytes to string
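On Python 3.6, where run() exists but capture_output does not, a sketch of the equivalent call is to request the pipes explicitly:
import subprocess

completed_process = subprocess.run(
    ["echo", "hello world"],
    stdout=subprocess.PIPE,  # what capture_output=True sets up for you
    stderr=subprocess.PIPE,
    encoding="utf-8",
)
print(completed_process.stdout)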
Depending on your needs, if you print your output, a quicker way (though maybe not exactly what you are looking for) is to redirect the output to a file at the command-line level.
Example (egfile.py):
import os
os.system("echo Hello world")
At command level you can simply do:
python egfile.py > file.txt
The output of the script will be redirected to the file instead of the screen.
I am trying to run a list of files in a directory through a UNIX executable using Python. I would like the output of the executable for each file to be written to a different directory, retaining the original filename.
I am using Python 2.7, so I am using the subprocess.call method. I am getting an error that says "'bool' object is not iterable", which I am guessing comes from the part where I try to write the output files, because when I run the following script from the console I get the executable's expected output in the console window:
import subprocess
import os

for inp in os.listdir('/path/to/input/directory/'):
    subprocess.call(['/path/to/UNIX/executable', inp])
My code is currently this:
import subprocess
import os

for inp in os.listdir('/path/to/input/directory/'):
    out = ['/path/to/output/directory/%s' % inp]
    subprocess.call(['/path/to/UNIX/executable', inp] > out)
However, this second lot of code returns the "'bool' is not iterable" error.
I'm guessing the solution is pretty trivial as it is not a complicated task however, as a beginner, I do not know where to start!
SOLVED: following #barak-itkin's answer, for those who may stumble across this issue in the future, the code ran successfully using the following:
import subprocess
import os

for inp in os.listdir('/path/to/input/directory/'):
    with open('/path/to/output/directory/%s' % inp, 'w') as out_file:
        subprocess.call(['/path/to/UNIX/executable', inp], stdout=out_file)
To write the output of subprocess.call to a file, you need either to make the > /path/to/out redirection part of the command itself, or to do it "properly" by specifying the file to which the output should go:
# Option 1:
# Specify that the command uses "shell" syntax, meaning that things
# like output redirection (such as with ">") are handled by the shell
# that evaluates the command
subprocess.call('my_command arg1 arg2 > /path/to/out', shell=True)

# Option 2:
# Open the file to which you want to write the output, and then specify
# the `stdout` parameter to be that file
with open('/path/to/out', 'w') as out_file:
    subprocess.call(['my_command', 'arg1', 'arg2'], stdout=out_file)
Does this work for you?
I am using Python in NetBeans and having some trouble setting up file arguments as input/output. For instance:
import re, sys

for line in sys.stdin:
    for token in re.split(r"\s+", line.strip()):
        print(token)
Command-line usage python splitprog.py < input.txt > output.txt works great. But in NetBeans the output window just waits, with nothing happening even if one gives a file name (I tested many combinations).
The Application Arguments row in project properties (where one would enter these files for a Java project) doesn’t seem to be used either, as the behaviour is the same regardless of whether there are file names/paths there or not. Is there some trick to get this to work or are file args currently unusable when it comes to Python in NetBeans?
ADD: As per suggestion by #John Zwinck, an example solution:
import re, sys

with open(sys.argv[1]) as infile:
    with open(sys.argv[2], "w") as outfile:
        for line in infile:
            for token in re.split(r"\s+", line.strip()):
                print(token, file=outfile)
Argument files are set in NB project properties. In command prompt, the programme is now simply run by python splitprog.py input.txt output.txt.
When you do this:
python splitprog.py < input.txt > output.txt
You are redirecting input.txt to stdin of python, and stdout of python to output.txt. You aren't using command line arguments to splitprog.py at all.
NetBeans does not support this.
Instead, you should pass the filenames as arguments, like this:
python splitprog.py input.txt output.txt
Then in NetBeans you just set the command line arguments to input.txt output.txt and it will work the same as the above command line in the shell. You'll need to modify your program slightly, perhaps like this:
with open(sys.argv[1]) as infile:
    for line in infile:
        # ...
If you still want to support stdin and stdout, one convention is to use - to mean those standard streams, so you could code your program to support this:
python splitprog.py - - < input.txt > output.txt
That is, you can write your program to understand - as "use the standard stream from the shell", if you need to support the old way of doing things. Or just default to this behavior if no command line arguments are given.
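A minimal sketch of that convention, assuming the two-filename interface used above (the open_arg helper is a hypothetical name):
import re, sys

def open_arg(arg, mode, std):
    # "-" means: use the standard stream instead of a real file
    return std if arg == "-" else open(arg, mode)

infile = open_arg(sys.argv[1], "r", sys.stdin)
outfile = open_arg(sys.argv[2], "w", sys.stdout)
for line in infile:
    for token in re.split(r"\s+", line.strip()):
        print(token, file=outfile)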
I am trying to test a simple script that reads a file line by line in PyCharm.
import sys

for line in sys.stdin:
    name, _ = line.strip().split("\t")
    print name
I have the file I want to input in the same directory: lib.txt
How can I debug my code in Pycharm with the input file?
You can work around this issue if you use the fileinput module rather than trying to read stdin directly.
With fileinput, if the script receives filenames in its arguments, it will read from them in order. In your case, replace your code above with:
import fileinput

for line in fileinput.input():
    name, _ = line.strip().split("\t")
    print name
The great thing about fileinput is that it defaults to stdin if no arguments are supplied (or if the argument '-' is supplied).
Now you can create a run configuration and supply the filename of the file you want to use as stdin as the sole argument to your script.
Read more about fileinput in the Python documentation.
I have been trying to find a way to read a file as stdin in PyCharm.
However, most people, including JetBrains, have said that there is no way and no support: it is a feature of the command line, not of PyCharm itself.
* https://intellij-support.jetbrains.com/hc/en-us/community/posts/206588305-How-to-redirect-standard-input-output-inside-PyCharm-
Actually, this feature of reading a file as stdin is essential for me: it makes it easy to feed inputs when solving programming problems from HackerRank or ACM-ICPC.
I found a simple way: I can make input() read stdin from a file as well!
import sys
sys.stdin = open('input.in', 'r')
sys.stdout = open('output.out', 'w')
print(input())
print(input())
input.in example:
hello world
This is not the world ever I have known
output.out example:
hello world
This is not the world ever I have known
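One caution with this trick: the replaced streams stay in effect for the rest of the process. A sketch of the same idea that restores them afterwards (same input.in/output.out names as above):
import sys

real_stdin, real_stdout = sys.stdin, sys.stdout
try:
    sys.stdin = open('input.in', 'r')
    sys.stdout = open('output.out', 'w')
    print(input())
    print(input())
finally:
    sys.stdin.close()
    sys.stdout.close()
    sys.stdin, sys.stdout = real_stdin, real_stdout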
You need to create a custom run configuration and then add your file as an argument in the "Script Parameters" box. See Pycharm's online help for a step-by-step guide.
However, even if you do that (as you have discovered), your program won't work, since you aren't parsing the command-line arguments correctly.
You need to instead use argparse:
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("filename", help="The filename to be processed")
args = parser.parse_args()

if args.filename:
    with open(args.filename) as f:
        for line in f:
            name, _ = line.strip().split('\t')
            print(name)
For flexibility, you can write your python script to always read from stdin and then use command redirection to read from a file:
$ python myscript.py < file.txt
However, as far as I can tell, you cannot use redirection from PyCharm as Run Configuration does not allow it.
Alternatively, you can accept the file name as a command-line argument:
$ python myscript.py file.txt
There are several ways to deal with this. I think argparse is overkill for this situation. Alternatively, you can access command-line arguments directly with sys.argv:
import sys

filename = sys.argv[1]
with open(filename) as f:
    for line in f:
        name, _ = line.strip().split('\t')
        print(name)
For robust code, you can check that the correct number of arguments is given.
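For example, a small sketch of that check (the usage message is illustrative):
import sys

if len(sys.argv) != 2:
    sys.exit("usage: myscript.py <filename>")

filename = sys.argv[1]
with open(filename) as f:
    for line in f:
        name, _ = line.strip().split('\t')
        print(name)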
Here's my hack for Google Code Jam today, wish me luck. The idea is to comment out monkey() before submitting:
def monkey():
    print('Warning, monkey patching')
    global input
    input = iter(open('in.txt')).next

monkey()
T = int(input())
for caseNum in range(1, T + 1):
    N, L = list(map(int, input().split()))
    nums = list(map(int, input().split()))
edit for python3:
def monkey():
    print('Warning, monkey patching')
    global input
    it = iter(open('in.txt'))
    input = lambda: next(it)

monkey()
I'm making a call to a program from the shell using the subprocess module that outputs a binary file to STDOUT.
I use Popen() to call the program, and then I want to pass the stream to a function in a Python package (called "pysam") that unfortunately cannot take Python file objects, but can read from STDIN. So what I'd like to do is have the output of the shell command go from STDOUT into STDIN.
How can this be done from within Popen/subprocess module? This is the way I'm calling the shell program:
p = subprocess.Popen(my_cmd, stdout=subprocess.PIPE, shell=True).stdout
This will read "my_cmd"'s STDOUT output and get a stream to it in p. Since my Python module cannot read from "p" directly, I am trying to redirect STDOUT of "my_cmd" back into STDIN using:
p = subprocess.Popen(my_cmd, stdout=subprocess.PIPE, stdin=subprocess.PIPE, shell=True).stdout
I then call my module, which uses "-" as a placeholder for STDIN:
s = pysam.Samfile("-", "rb")
The above call just means read from STDIN (denoted "-") and read it as a binary file ("rb").
When I try this, I just get binary output sent to the screen, and it doesn't look like the Samfile() function can read it. This occurs even if I remove the call to Samfile, so I think it's my call to Popen that is the problem and not downstream steps.
EDIT: In response to answers, I tried:
sys.stdin = subprocess.Popen(tagBam_cmd, stdout=subprocess.PIPE, shell=True).stdout
print "Opening SAM.."
s = pysam.Samfile("-","rb")
print "Done?"
sys.stdin = sys.__stdin__
This seems to hang. I get the output:
Opening SAM..
but it never gets past the Samfile("-", "rb") line. Any idea why?
Any idea how this can be fixed?
EDIT 2: I am adding a link to Pysam documentation in case it helps, I really cannot figure this out. The documentation page is:
http://wwwfgu.anat.ox.ac.uk/~andreas/documentation/samtools/usage.html
and the specific note about streams is here:
http://wwwfgu.anat.ox.ac.uk/~andreas/documentation/samtools/usage.html#using-streams
In particular:
"""
Pysam does not support reading and writing from true python file objects, but it does support reading and writing from stdin and stdout. The following example reads from stdin and writes to stdout:
infile = pysam.Samfile( "-", "r" )
outfile = pysam.Samfile( "-", "w", template = infile )
for s in infile: outfile.write(s)
It will also work with BAM files. The following script converts a BAM formatted file on stdin to a SAM formatted file on stdout:
infile = pysam.Samfile( "-", "rb" )
outfile = pysam.Samfile( "-", "w", template = infile )
for s in infile: outfile.write(s)
Note, only the file open mode needs to be changed from r to rb.
"""
So I simply want to take the stream coming from Popen, which reads stdout, and redirect that into stdin, so that I can use Samfile("-", "rb") as the above section states is possible.
thanks.
I'm a little confused that you see binary on stdout if you are using stdout=subprocess.PIPE; however, the overall problem is that you need to work with sys.stdin if you want to trick pysam into using it.
For instance:
sys.stdin = subprocess.Popen(my_cmd, stdout=subprocess.PIPE, shell=True).stdout
s = pysam.Samfile("-", "rb")
sys.stdin = sys.__stdin__ # restore original stdin
UPDATE: This assumed that pysam is running in the context of the Python interpreter and thus means the Python interpreter's stdin when "-" is specified. Unfortunately, it doesn't; when "-" is specified it reads directly from file descriptor 0.
In other words, it is not using Python's concept of stdin (sys.stdin) so replacing it has no effect on pysam.Samfile(). It also is not possible to take the output from the Popen call and somehow "push" it on to file descriptor 0; it's readonly and the other end of that is connected to your terminal.
The only real way to get that output onto file descriptor 0 is to just move it to an additional script and connect the two together from the first. That ensures that the output from the Popen in the first script will end up on file descriptor 0 of the second one.
So, in this case, your best option is to split this into two scripts. The first one will invoke my_cmd and take the output of that and use it for the input to a second Popen of another Python script that invokes pysam.Samfile("-", "rb").
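A minimal sketch of that split; reader.py is a hypothetical second script containing the pysam.Samfile("-", "rb") call and whatever processing you need:
import subprocess
import sys

my_cmd = "..."  # the shell command from the question

producer = subprocess.Popen(my_cmd, stdout=subprocess.PIPE, shell=True)
# reader.py inherits the pipe as its file descriptor 0, so the
# Samfile("-", "rb") call inside it reads the piped data directly.
reader = subprocess.Popen([sys.executable, "reader.py"],
                          stdin=producer.stdout)
producer.stdout.close()  # let the producer get SIGPIPE if the reader exits early
reader.wait()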
In the specific case of dealing with pysam, I was able to work around the issue using a named pipe (http://docs.python.org/library/os.html#os.mkfifo), which is a pipe that can be accessed like a regular file. In general, you want the consumer (reader) of the pipe to listen before you start writing to the pipe, to ensure you don't miss anything. However, pysam.Samfile("-", "rb") will hang as you noted above if nothing is already registered on stdin.
Assuming you're dealing with a prior computation that takes a decent amount of time (e.g. sorting the bam before passing it into pysam), you can start that prior computation and then listen on the stream before anything gets output:
import os
import tempfile
import subprocess
import shutil
import pysam
# Create a named pipe
tmpdir = tempfile.mkdtemp()
samtools_prefix = os.path.join(tmpdir, "namedpipe")
fifo = samtools_prefix + ".bam"
os.mkfifo(fifo)
# The example below sorts the file 'input.bam',
# creates a pysam.Samfile object of the sorted data,
# and prints out the name of each record in sorted order
# Your prior process that spits out data to stdout/a file
# We pass samtools_prefix as the output prefix, knowing that its
# ending file will be named what we called the named pipe
subprocess.Popen(["samtools", "sort", "input.bam", samtools_prefix])
# Read from the named pipe
samfile = pysam.Samfile(fifo, "rb")
# Print out the names of each record
for read in samfile:
    print read.qname
# Clean up the named pipe and associated temp directory
shutil.rmtree(tmpdir)
If your system supports it; you could use /dev/fd/# filenames:
process = subprocess.Popen(args, stdout=subprocess.PIPE)
samfile = pysam.Samfile("/dev/fd/%d" % process.stdout.fileno(), "rb")