Differences between enumerate(fileinput.input(file)) and enumerate(file) - python

I'm looking for some help with my code, which is right below:
for file in file_name:
    if os.path.isfile(file):
        for line_number, line in enumerate(fileinput.input(file, inplace=1)):
            print file
            os.system("pause")
            if line_number == 1:
                line = line.replace('Object', '#Object')
            sys.stdout.write(line)
I want to modify some previously extracted files in order to plot them with matplotlib. To do so, I remove some lines and comment out some others.
My problem is the following:
Using for line_number, line in enumerate(fileinput.input(file, inplace=1)): processes only 4 out of the 5 previously extracted files (even though the file_name list contains 5 references!).
Using for line_number, line in enumerate(file): processes all 5 previously extracted files, BUT I don't know how to make modifications to the same file without creating another one...
Do you have any idea about this issue? Is this normal behaviour?

There are a number of things that might help you.
Firstly, file_name appears to be a list of file names. It might be better named file_names, and then you could use file_name for each entry. You have verified that this does hold 5 entries.
The enumerate() function is used when iterating over a sequence to provide both an index and the item on each loop iteration. This saves you having to maintain a separate counter variable, e.g.:
for index, item in enumerate(["item1", "item2", "item3"]):
    print index, item
would print:
0 item1
1 item2
2 item3
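As a side note, enumerate() also accepts a start argument (Python 2.6+), which is handy if you want 1-based line numbers:
for line_number, line in enumerate(["first", "second"], start=1):
    print line_number, line  # prints "1 first" then "2 second"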
This is not really required here, though, as you have chosen to use the fileinput library. It is designed to take a list of files and iterate over all of the lines in all of the files in one single loop. As such you need to tweak your approach a bit: assuming your list of files is called file_names, you could write something as follows:
# Keep only files in the file list
file_names = [file_name for file_name in file_names if os.path.isfile(file_name)]

# Iterate all lines in all files
for line in fileinput.input(file_names, inplace=1):
    if fileinput.filelineno() == 1:
        line = line.replace('Object', '#Object')
    sys.stdout.write(line)
The main point here is that it is better to pre-filter any non-filenames before passing the list to fileinput. I will leave it up to you to fix the output.
fileinput provides a number of functions to help you figure out which file or line number is currently being processed.
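For example, fileinput.filename(), fileinput.filelineno() and fileinput.isfirstline() are all part of the standard fileinput module. A minimal debugging sketch (with inplace left off, so the prints go to your console rather than into the files):
import fileinput

for line in fileinput.input(file_names):
    if fileinput.isfirstline():
        # a new file has just been opened
        print 'now processing:', fileinput.filename()
    print fileinput.filelineno(), line.rstrip()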

Assuming you're still having trouble, my typical approach is to open a file read-only, read its contents into a variable, close the file, build an edited copy of the contents, open the file for writing (wiping out the original file), and finally write the edited contents.
I like this approach since I can simply change the file_name that gets written out if I want to test my edits without wiping out the original file.
Also, I recommend naming containers using plural nouns, like @Martin Evans suggests.
import os

file_names = ['file_1.txt', 'file_2.txt', 'file_3.txt', 'file_4.txt', 'file_5.txt']
file_names = [x for x in file_names if os.path.isfile(x)]  # see @Martin's answer again

for file_name in file_names:
    # Open read-only and put contents into a list of line strings
    with open(file_name, 'r') as f_in:
        lines = f_in.read().splitlines()

    # Put the lines you want to write out in out_lines
    out_lines = []
    for index_no, line in enumerate(lines):
        if index_no == 1:
            out_lines.append(line.replace('Object', '#Object'))
        # elif ...:  # add your other line filters/edits here
        else:
            out_lines.append(line)

    # Uncomment to write to a different file name when testing edits
    # with open(file_name + '.out', 'w') as f_out:
    #     f_out.write('\n'.join(out_lines))

    # Write out the file, clobbering the original
    with open(file_name, 'w') as f_out:
        f_out.write('\n'.join(out_lines))
The downside of this approach is that each file needs to be small enough to fit into memory twice (lines plus out_lines).
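If the files ever get too big for that, a minimal sketch of a streaming alternative (my own addition, using only the standard tempfile and os modules; os.replace needs Python 3.3+): write the edited lines to a temporary file, then swap it over the original.
import os
import tempfile

def edit_file_streaming(file_name):
    # Stream edits through a temp file in the same directory,
    # then atomically replace the original.
    dir_name = os.path.dirname(os.path.abspath(file_name))
    with open(file_name, 'r') as f_in, \
         tempfile.NamedTemporaryFile('w', dir=dir_name, delete=False) as f_out:
        for index_no, line in enumerate(f_in):
            if index_no == 1:
                line = line.replace('Object', '#Object')
            f_out.write(line)
    os.replace(f_out.name, file_name)
This way only one line is held in memory at a time.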
Best of luck!

Related

Modifying a file in-place inside nested for loops

I am iterating over directories and the files inside them while I modify each file in place. I would like to have the newly modified file read back right after.
Here is my code with descriptive comments:
# go through each directory based on their ids
for id in id_list:
    id_dir = os.path.join(ouput_dir, id)
    os.chdir(id_dir)
    # go through all files (with a specific extension)
    for filename in glob('*' + ext):
        # modify the file by replacing all new-line characters with an empty space
        with fileinput.FileInput(filename, inplace=True) as f:
            for line in f:
                print(line.replace('\n', ' '), end='')
        # here I would like to read the NEW modified file
        with open(filename) as newf:
            content = newf.read()
As it stands, newf is not the new modified file, but instead the original f. I think I understand why that is; however, I found it difficult to overcome the issue.
I can always do 2 separate iterations (go through each directory based on its id, go through all files with a specific extension and modify each file, and then repeat the iteration to read each one of them), but I was hoping there was a more efficient way around it. Perhaps it is possible to restart the second for loop after the modification has taken place and then have the read take place (so as to at least avoid repeating the outer for loop).
Any ideas/designs for achieving the above in a clean and efficient way?
For me it works with this code:
#!/usr/bin/env python3
import os
from glob import glob
import fileinput

id_list = ['1']
ouput_dir = '.'
ext = '.txt'

# go through each directory based on their ids
for id in id_list:
    id_dir = os.path.join(ouput_dir, id)
    os.chdir(id_dir)
    # go through all files (with a specific extension)
    for filename in glob('*' + ext):
        # modify the file by replacing all new-line characters with an empty space
        for line in fileinput.FileInput(filename, inplace=True):
            print(line.replace('\n', ' '), end="")
        # here I would like to read the NEW modified file
        with open(filename) as newf:
            content = newf.read()
            print(content)
Notice how I iterate over the lines!
I am not saying that the way you are going about this is incorrect, but I feel you are overcomplicating it. Here is my super simple solution:
from glob import glob

for filename in glob('*' + ext):
    # instead of trying to modify in place, read the data in and strip the line endings
    f_in = (x.rstrip() for x in open(filename, 'rb').readlines())
    with open(filename, 'wb') as f_out:  # then write the data stream back out
        # extra modifications to the data can go here; I just remove the \r and \n and write back out
        for i in f_in:
            f_out.write(i)
    # now there is no need to read the data back in, because we already have a static reference to it.

update records in file2 with data found in file1

There is a large file in a fixed-width format, file1. Another CSV file, file2, has ids and values; using those, specific portions of any record in file1 with the same id need to be updated. Here is my attempt. I really appreciate any help you can offer to make this work.
file2 (comma separated)
clr,code,type
Red,1001,1
Red,2001,2
Red,3001,3
blu,1002,1
blu,2002,2
blu,3002,3
file1 (fixed width format)
clrtyp1typ2typ3notes
red110121013101helloworld
blu110221023102helloworld2
file1 needs to be updated to the following:
clrtyp1typ2typ3notes
red100120013001helloworld
blu100220023002helloworld2
Please note that both files are fairly large (multiple GB each). I am a Python noob, so please excuse any gross mistakes. I'd really appreciate any help you could offer.
import shutil
# read both input files
file1 = open("file1.txt", 'r').read()
file2 = 'file2.txt'
# make a copy of the input file to make edits to it.
file2Edit = file2 + '.EDIT'
shutil.copy(file2, baseEdit)
baseEditFile = open(baseEdit, 'w').read()
# go thru each line, pick clr from file1 and look for it in file2; if found, form a string to be replaced and replace the original line.
with open('file2.txt', 'w') as f:
    for line in f:
        base_clr = line[:3]
        findindex = file1.find(base_recid)
        if findindex != -1:
            for line2 in file1:
                #print(line)
                clr = line2.split(",")[0]
                code = line2.split(",")[1]
                type = line2.split(",")[2]
                if keytype = 1:
                    finalline = line[:15] + string.rjust(keyid, 15) + line[30:]
                    baseEditFile.write( replace(line,finalline)
                    baseEditFile.replace(line, finalline)
If I understand you right, you need something like this:
# declare file names and necessary lists
file1 = "file1.txt"
file2 = "file2.txt"
file1_new = "file1.txt.EDIT"
clrs = {}

# read clrs to update
with open(file1, "r") as f:
    # skip header line
    f.next()
    for line in f:
        clrs[line[:3]] = []

# read the new codes
with open(file2, "r") as f:
    # skip header
    f.next()
    for line in f:
        current = line.strip().split(",")
        key = current[0].lower()
        if key in clrs:
            clrs[key].append(current[1])

# write the new lines (old codes replaced with the new ones) to the new file
with open(file1, "r") as f_in:
    with open(file1_new, "w") as f_out:
        # write the header
        f_out.write(f_in.next())
        for line in f_in:
            line_new = list(line)
            key = line[:3]
            # check if new codes were found for that key
            if key in clrs.keys():
                # replace the old codes with the new ones
                line_new[3:15] = "".join(clrs[key])
            f_out.write("".join(line_new))
This works only for the given example. If your file has another format in real use, you will have to adjust the indices used.
This little script first opens file1, iterates over it, and adds each clr as a key to a dictionary. The value for each key is an empty list.
Then it opens file2 and iterates over every clr there. If the clr is in the dictionary, it appends the code to the list. So after running this part, the dictionary contains key/value pairs, where the keys are the clrs and the values are lists containing the codes (in the order given by the file).
In the last part of the script, every line of file1.txt is written to file1.txt.EDIT. Before writing, the old codes are replaced by the new ones.
The codes saved in file2.txt have to be in the same order as they are used in file1.txt. If the order can differ, or there is the possibility that there are more codes in file2.txt than you need to replace in file1.txt, you need to add a lookup for the correct codes, for example keyed on the type column, as sketched below. That's no big deal, and this script will solve your problem for the files you gave us as an example.
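A minimal sketch of that lookup (my own addition, not part of the original script): key each code by its type column, so the row order in file2.txt no longer matters.
clrs = {}
with open("file2.txt", "r") as f:
    f.next()  # skip header
    for line in f:
        clr, code, typ = line.strip().split(",")
        # e.g. clrs["red"]["1"] -> "1001"
        clrs.setdefault(clr.lower(), {})[typ] = code

# when rewriting file1.txt, place each code explicitly instead of relying on order:
# codes = clrs[key]
# line_new[3:15] = codes["1"] + codes["2"] + codes["3"]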
If you have any questions or need more help, feel free to ask.
EDIT: Besides the syntactic mistakes and wrong method calls in your question's code, you shouldn't read in all the data saved in a file at once, especially when you know the files can get very large. This consumes a lot of memory and may cause the program to run very slowly. That's why iterating line by line is much better. The example I provided reads only one line of the file at a time and writes it to the new file directly, instead of holding both old files and the new file in memory and writing everything as the last step.

Python removing duplicates and saving the result

I am trying to remove duplicates from a 3-column tab-delimited txt file. A line should be removed whenever its first two columns duplicate those of an earlier line, even if the third column differs.
from operator import itemgetter
import sys

input = sys.argv[1]
output = sys.argv[2]

# Pass any column number you want; note that indexing starts at 0
ig = itemgetter(0, 1)

seen = set()
data = []

for line in input.splitlines():
    key = ig(line.split())
    if key not in seen:
        data.append(line)
        seen.add(key)
    file = open(output, "w")
    file.write(data)
    file.close()
First, I get this error:
key = ig(line.split())
IndexError: list index out of range
Also, I can't see how to save the result to output.txt.
People say saving to output.txt is a really basic matter, but no tutorial has helped.
I tried methods that use codecs, those that use with, those that use file.write(data), and none of them helped.
I could learn MATLAB quite easily. The online tutorial was fantastic, and a series of Googling sessions always helped a lot.
But I can't find a helpful tutorial for Python yet. This is obviously because I am a complete novice. For complete novices like me, what would be the best tutorial with 1) comprehensiveness AND 2) lots of examples AND 3) line-by-line explanations that don't leave any line unexplained?
And why is the above code causing an error and not saving the result?
I'm assuming, since you assign input to the first command line argument with input = sys.argv[1] and output to the second, that you intend those to be your input and output file names. But you're never opening any file for the input data, so you're calling .splitlines() on a file name, not on file contents.
Next, splitlines() is the wrong approach here anyway. To iterate over a file line by line, simply use for line in f, where f is an open file. Those lines will include the newline at the end of the line, so it needs to be stripped if it's not supposed to be part of the third column's data.
Then you're opening and closing the file inside your loop, which means you'll try to write the entire contents of data to the file on every iteration, effectively overwriting any data written to the file before. Therefore I moved that block out of the loop.
It's good practice to use the with statement for opening files. with open(out_fn, "w") as outfile will open the file named out_fn, assign the open file to outfile, and close it for you as soon as you exit that indented block.
input is a builtin function in Python. I therefore renamed your variables so no builtin names get shadowed.
You're trying to directly write data to the output file. This won't work since data is a list of lines. You need to join those lines first, to turn them into a single string again, before writing it to a file.
So here's your code with all those issues addressed:
from operator import itemgetter
import sys

in_fn = sys.argv[1]
out_fn = sys.argv[2]

getkey = itemgetter(0, 1)

seen = set()
data = []

with open(in_fn, 'r') as infile:
    for line in infile:
        line = line.strip()
        key = getkey(line.split())
        if key not in seen:
            data.append(line)
            seen.add(key)

with open(out_fn, "w") as outfile:
    outfile.write('\n'.join(data))
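For reference, assuming you save the script above as dedupe.py (a name chosen purely for illustration), you would run it with the input and output file names as arguments:
$ python dedupe.py input.txt output.txt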
Why is the above code causing an error?
Because you haven't opened the file: you are trying to work with the string input.txt rather than with the file. Then, when you try to access your item, you get a list index out of range error because line.split() returns ['input.txt'].
How to fix that: open the file first, then work with it, not with its name.
For example, you can do the following (I tried to stay as close to your code as possible):
input = sys.argv[1]
infile = open(input, 'r')
(...)
lines = infile.readlines()
infile.close()
for line in lines:
    (...)
Why is this not saving the result?
Because you are opening/closing the file inside the loop. What you need to do is write the data once you're out of the loop. Also, you cannot write a list directly to a file. Hence, you need to do something like this (outside of your loop):
outfile = open(output, "w")
for item in data:
    outfile.write(item)
outfile.close()
All together
There are other ways of reading/writing files, and they are pretty well documented on the internet, but I tried to stay close to your code so that you would better understand what was wrong with it.
from operator import itemgetter
import sys

input = sys.argv[1]
infile = open(input, 'r')
output = sys.argv[2]

# Pass any column number you want; note that indexing starts at 0
ig = itemgetter(0, 1)

seen = set()
data = []

lines = infile.readlines()
infile.close()

for line in lines:
    print line
    key = ig(line.split())
    if key not in seen:
        data.append(line)
        seen.add(key)

print data

outfile = open(output, "w")
for item in data:
    outfile.write(item)
outfile.close()
PS: it seems to produce the result that you needed; see Python to remove duplicates using only some, not all, columns.

How to iterate over arbitrary number of files in parallel in python?

I have a list of file objects in a list called paths.
I'd like to be able to go through and read the first line of each file, do something with this n-tuple of data, then move on to the second line of each file. The number of file objects in paths is arbitrary.
Is this possible?
import itertools

for line_tuple in itertools.izip(*files):
    whatever()
I'd use zip, but that would read the entire contents of the files into memory. Note that files should be a list of file objects; I'm not sure what you mean by "list of file handlers".
This depends on how "arbitrary" it actually is. As long as the number is less than the limit of your OS, then itertools.izip should work just fine (or itertools.izip_longest as appropriate).
files = [open(f) for f in filenames]
for lines in itertools.izip(*files):
    pass  # do something with lines here
for f in files:
    f.close()
If you can have more files than your OS will allow you to open, then you're out of luck (at least as far as an easy solution is concerned).
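As a side note (my addition, not part of the answers above): on Python 3 the built-in zip is already lazy, and contextlib.ExitStack gives a tidy way to guarantee that an arbitrary number of files is closed even if an error occurs. A minimal sketch:
import contextlib

filenames = ['a.txt', 'b.txt', 'c.txt']  # assumed example names
with contextlib.ExitStack() as stack:
    files = [stack.enter_context(open(name)) for name in filenames]
    for lines in zip(*files):  # lazy on Python 3, like izip on Python 2
        print([line.rstrip('\n') for line in lines])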
The first idea that popped into my mind is the following code; it seems quite straightforward:
fp_list = []
for file in path_array:
    fp = open(file)
    fp_list.append(fp)

line_list = []
for fp in fp_list:
    line = fp.readline()
    line_list.append(line)

## your code here: process the line_list

for fp in fp_list:
    fp.close()
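As written, this reads only the first line from each file. A minimal sketch of one way to extend it to walk all the files in lock-step, relying on readline() returning an empty string at end of file:
# assumes fp_list from the snippet above, before the files are closed
while True:
    line_list = [fp.readline() for fp in fp_list]
    if all(line == '' for line in line_list):
        break  # every file is exhausted
    # process line_list here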

How to skip 2 lines in a file with Python?

I have a series of files and I want to extract a specific number from each of them.
In each of the files I have this line:
name, registration num
and exactly two lines after that there is the registration number. I would like to extract this number from each file and put it as a value in a dictionary. Does anyone have an idea how this is possible?
My current code, which does not actually work, is below:
matches = []
for root, dirnames, filenames in os.walk('D:/Dataset2'):
    for filename in fnmatch.filter(filenames, '*.txt'):
        matches.append([root, filename])

filenames_list = {}
for root, filename in matches:
    filename_key = (os.path.join(filename).strip()).split('.', 1)[0]
    fullfilename = os.path.join(root, filename)
    f = open(fullfilename, 'r')
    for line in f:
        if "<name, registration num'" in line:
            key = filename_key
            line += 2
            val = line
I usually use next() when I want to skip a single line, usually a header for a file.
with open(file_path) as f:
    next(f)  # skip 1 line
    next(f)  # skip another one.
    for line in f:
        pass  # now you can keep reading as if there were no first or second line.
Note: In Python 2.6 or earlier you must use f.next()
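Applied to the question's task, a minimal sketch (filename_key and fullfilename as in the question's code; the dictionary name is my own) could look like this:
registration_nums = {}  # filename_key -> registration number
with open(fullfilename) as f:
    for line in f:
        if "name, registration num" in line:
            next(f)  # skip the one line in between
            registration_nums[filename_key] = next(f).strip()
            break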
One way would be to load the whole file into a list, and then read the line(s) you want from it. Example:
A file called testfile contains the following:
A1
B2
C3
D4
E5
A program test.py:
#!/usr/bin/env python
file = open('testfile')
lines = file.readlines()[2:]
file.close()
for line in lines:
    print(line.strip())
Output:
$./test.py
C3
D4
E5
EDIT: I read the question again and noticed you just want a single line. Then you could remove the :, and use file.readlines()[2] to get the third line of a file.
Or you could call file.readline() three times, and just ignore the first two results.
Or you could use a for line in f type loop, and just ignore the first two lines (with an incrementing counter).
I suppose something like this would work...
f = open(fullfilename, 'r')
for line in f:
    if "name, registration num" in line:
        key = filename_key
        break
f.readline()
res = f.readline()[:-1]  # remove the trailing newline
from itertools import islice

with open('data.txt') as f:
    for line in islice(f, 2, None):
        print line
Generally speaking, if you want to do something to a Python iterator in-loop, like looking two ahead, a good first place to look is to import itertools and check its recipes. In your case, you might benefit from their implementation of consume.
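For reference, the consume recipe from the itertools documentation looks roughly like this:
import collections
from itertools import islice

def consume(iterator, n=None):
    "Advance the iterator n steps ahead. If n is None, consume entirely."
    if n is None:
        # feed the entire iterator into a zero-length deque
        collections.deque(iterator, maxlen=0)
    else:
        # advance to the empty slice starting at position n
        next(islice(iterator, n, n), None)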
It's worth having a look to see whether this issue has been covered on SO before.
Edit: indeed, look here, which includes a good discussion of Python iterators.
