With statement being ignored in Python

I have two functions. The first creates a new CSV file (from an existing CSV). The second appends the same data to the new CSV, but in a slightly different column order.
When I run all of this together in one file, the first function works but the second does not. However, when I put the second function in a separate file and called it from the first script, it did work, albeit I had to enter the input twice.
What do I need to change to get the second function to run properly?
import csv

export = raw_input('>')
new_file = raw_input('>')

ynabfile = open(export, 'rb')
reader = csv.reader(ynabfile)

def create_file():
    with open(new_file, 'wb') as result:
        writer = csv.writer(result)
        for r in reader:
            writer.writerow((r[3], r[5], r[6], r[7], r[7],
                             r[8], r[8], r[9], r[10]))

def append():
    with open(new_file, 'ab') as result2:
        writer2 = csv.writer(result2)
        for i in reader:
            writer.writerow((r[3], r[5], r[6], r[7], r[7],
                             r[8], r[8], r[10], r[9]))

create_file()
append()
I'm new to Python and programming in general, so if there is an all around better way to do this, I'm all ears.

The csv reader has already read the entire file pointed to by ynabfile, so the second call (or any subsequent call) to either create_file or append will not be able to fetch any more data from the reader until the file pointer is sent back to the beginning. In your case, a quick fix would be this:
create_file()
ynabfile.seek(0)
append()
I recommend restructuring your code a bit to avoid pitfalls like this. A few recommendations (see the sketch after this list):
- Read all the contents of ynabfile into a list instead, if the entirety of the file fits into memory.
- Have create_file and append take the input and output file names as parameters.
- Alternatively, have those two functions take the file pointer (ynabfile in this case), ensure it is seeked back to the beginning, and create a new csv.reader instance from it.
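A minimal sketch of the first two suggestions, keeping the Python 2 style of the question (raw_input, binary file modes); the helper name load_rows and the reworked signatures are my own illustration, not the only way to arrange this:

import csv

def load_rows(path):
    # Read the whole CSV into a list once, so it can be iterated repeatedly.
    with open(path, 'rb') as f:
        return list(csv.reader(f))

def create_file(rows, out_path):
    with open(out_path, 'wb') as result:
        writer = csv.writer(result)
        for r in rows:
            writer.writerow((r[3], r[5], r[6], r[7], r[7],
                             r[8], r[8], r[9], r[10]))

def append(rows, out_path):
    with open(out_path, 'ab') as result2:
        writer2 = csv.writer(result2)
        for r in rows:
            writer2.writerow((r[3], r[5], r[6], r[7], r[7],
                              r[8], r[8], r[10], r[9]))

export = raw_input('>')
new_file = raw_input('>')
rows = load_rows(export)
create_file(rows, new_file)
append(rows, new_file)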

Related

Access values outside with-block

Is there a way, in the code below, to access the variable utterances_dict outside of the with-block? The code below obviously returns the error: ValueError: I/O operation on closed file.
from csv import DictReader

utterances_dict = {}
utterance_file = 'toy_utterances.csv'
with open(utterance_file, 'r') as utt_f:
    utterances_dict = DictReader(utt_f)
for line in utterances_dict:
    print(line)
I am not an expert on the DictReader implementation; however, the documentation leaves it open for the reader to parse the file lazily, after construction. That means the underlying file may have to remain open until you are done using the reader. In this case, it would be problematic to use utterances_dict outside of the with block, because the underlying file will have been closed by then.
Even if the current implementation of DictReader does in fact parse the whole CSV on construction, that doesn't mean the implementation won't change in the future.
DictReader returns a view of the csv file.
Convert the result to a list of dictionaries.
from csv import DictReader

utterances = []
utterance_file = 'toy_utterances.csv'
with open(utterance_file, 'r') as utt_f:
    utterances = [dict(row) for row in DictReader(utt_f)]
for line in utterances:
    print(line)

Python: I/O operation on closed file

So I am tasked with creating a function that returns the number of times a substring appears in a given string, along with the index of each appearance.
But when I run my code, I get an "I/O operation on closed file" error. Anyone know how to fix this?
# 1. Import the text.csv file
import csv

with open('text.csv') as csv_file:
    csv_reader = csv.reader(csv_file, delimiter=',')

# 2. Complete function counter. The function should return the number
#    of times the substring appears & their index
def counter(substring):
    substring_counter = 0
    string = csv_reader
    for substring in csv_file:
        substring_counter = substring_counter + 1
        print('Counter = ', substring_counter)
        print(string.find(substring))

# do not edit the code below
counter("TCA")
This is happening because csv_reader is local to the module, not your function. The reader object returned by the csv module maintains a reference to a file handle (the one returned by open). On the first pass through the for loop, that file handle is read to the end, i.e., it gets exhausted; on the next pass, the csv reader object tries to read from the end of the file, which causes that error.
Before or after every pass through the loop, you can reset the file pointer to the beginning with: csv_file.seek(0)
A better solution would be to store all the file contents in a buffer once and access it repeatedly, without having to re-read from the file handle.
Since you are using with (which is good), you have explicitly limited the scope of the open file, yet as the previous answer pointed out, the csv reader object is used outside that scope. The reader in this case is just a wrapper around the file and does not read everything up front. You either need to read the whole file within the with, or move the with inside the function along with everything that references the file.
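For example, a minimal sketch of the first option, reading the whole file inside the with block so the function only works on an in-memory string. Treating the file as plain text is my assumption here, and the counting logic is illustrative, not the assignment's official solution:

# Read everything while the file is open; after the with block only
# the in-memory string is needed, so the closed file is never touched.
with open('text.csv') as csv_file:
    text = csv_file.read()

def counter(substring):
    # Count every occurrence of substring and print its index.
    count = 0
    index = text.find(substring)
    while index != -1:
        count += 1
        print('Index =', index)
        index = text.find(substring, index + 1)
    print('Counter =', count)

counter("TCA")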

How can I search for someone's name then replace the number in that same line as the person's name?

I have the following data in a file called data.txt and would like to be able to add to the numbers at the end and replace them in the file without creating a new one:
Alfreda,art,2015,35
brook,biology,2015,3
charlie,chemistry,2015,140
dolly,Design,2015,120
Emilia,English,2015,150
Fiona,french,2015,40
Grace,Greek,2015,12
Hanna,history,2015,15
Here is the code I currently have:
with open("data.txt", "r") as f:
newline=[]
for word in f.line():
newline.append(word.replace(35,str(New))
with open("data.txt", "w") as f:
for line in newline :
f.writelines(line)
If you just want to add a string to each line and then update the file, this code can solve your problem, although it is not optimal.
with open("data.txt", "r") as myFile:
newline=[]
# Use the readlines method to get all the lines
for line in myFile.readlines():
# Remove the \n character with the rstrip method
line = line.rstrip('\n')
newline.append(line+",35\n") # Don't forget to add \n
# Test
print newline
myFile.close()
with open("data.txt", "w") as myFile:
for line in newline :
myFile.writelines(line)
If this is not your problem, try the pickle module and work with objects; it will be easier (see the sketch below).
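A minimal sketch of the pickle idea, assuming the scores are kept in a dict keyed by name; the file name data.pkl is made up for illustration:

import pickle

marks = {"Alfreda": ["art", 2015, 35], "brook": ["biology", 2015, 3]}

# Persist the whole object in one call...
with open("data.pkl", "wb") as f:
    pickle.dump(marks, f)

# ...and load it back later, with no parsing or string handling needed.
with open("data.pkl", "rb") as f:
    marks = pickle.load(f)
marks["Alfreda"][2] += 35  # add 35 to Alfreda's score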
I'm going to have to make some of your question up. If you have a file and you want to update it, the updates have to come from somewhere. The code in the question has a New variable but there is no indication of how New is supposed to get a value, or how the program is supposed to know which row to update.
I'm going to assume you have a file of updates called updates.txt that looks like this (and it is deliberately not in alphabetical order):
Emilia,45
Alfreda,35
So after your program runs, the resulting file will have two rows that differ:
Alfreda,art,2015,70 ...this one
brook,biology,2015,3
charlie,chemistry,2015,140
dolly,Design,2015,120
Emilia,English,2015,195 ...and this one
Fiona,french,2015,40
Grace,Greek,2015,12
Hanna,history,2015,15
But the rest will be the same.
Since your sample data file is a .csv file I am using the Python csv module, rather than picking the data apart by hand. It doesn't matter much with simple data like this but it's a good module to know about.
import csv

marks = {}

# Read in existing data into a dictionary:
# key is name, value is a list [subject, year, score]
# like this: {"Alfreda": ["art",2015,35], ... }
# This is to make it easy to do random updates based on name
with open("data.txt", "r") as f:
    for row in csv.reader(f):
        name, subject, year, score = row
        marks[name] = [subject, int(year), int(score)]

# Read in updates and apply each line to the corresponding entry in marks
with open("updates.txt", "r") as f:
    for row in csv.reader(f):
        name, added_score = row
        try:
            marks[name][2] += int(added_score)  # for example marks["Alfreda"][2] += int("35")
        except KeyError:
            print(f"Name {name} not found to update, nothing done")

# Write out updated dictionary:
with open("data.txt", "w") as f:
    writer = csv.writer(f, lineterminator="\n")
    for name in sorted(marks.keys(), key=lambda n: n.lower()):
        row = [name] + marks[name]  # for example ["Alfreda"] + ["art",2015,70]
        writer.writerow(row)
This line:
for name in sorted(marks.keys(), key=lambda n: n.lower()):
looks complicated but it is needed because you obviously expect the names Alfreda brook charlie dolly Emilia Fiona Grace Hanna to be in that order. But just doing the obvious
for name in sorted(marks.keys()):
will put them in the order Alfreda Emilia Fiona Grace Hanna brook charlie dolly.
In the interests of keeping the code simple and as close to your original as possible, it does no validity checks, so if this line
charlie,chemistry,2015,140
was wrongly entered as
charlie,chemistry,2015,14O
(with the letter O instead of a zero), the program will just fail. Ditto if the update file is missing a comma somewhere.
This works and will do what I think you want. But...
There are issues with the design. Your program reads in the data from data.txt, then overwrites it with new data. But suppose your program fails just after this line:
with open("data.txt", "w") as f:
Then you won't have your original data (because the call to open() truncated it), and you won't have the new data either (because you haven't written it out yet). Or suppose you accidentally run the program twice. There will be no way to tell you have done that.
You can provide some insurance against this sort of mishap by using the fileinput module, like this:
import fileinput

# Read in existing data
with fileinput.input("data.txt", inplace=True, backup=".bkp") as f:
    for row in csv.reader(f):
        name, subject, year, score = row
        marks[name] = [subject, int(year), int(score)]
With this change, your updates will be in data.txt as before, but your original data will still be around, in a file called data.txt.bkp.
But that is just a fix. It avoids the real issue, which is that you really have a database application and you are trying to implement it using textfiles. The code above is all very well for an exercise, but it's not robust and it won't scale.
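For comparison, here is a minimal sketch of the same update done with sqlite3 from the standard library; the table layout and the file name marks.db are my own illustration. An UPDATE touches only the row it changes, so a crash mid-run cannot wipe out the rest of the data the way truncating data.txt can.

import sqlite3

con = sqlite3.connect("marks.db")
con.execute("CREATE TABLE IF NOT EXISTS marks "
            "(name TEXT PRIMARY KEY, subject TEXT, year INTEGER, score INTEGER)")
con.execute("INSERT OR IGNORE INTO marks VALUES (?, ?, ?, ?)",
            ("Alfreda", "art", 2015, 35))
# Apply an update in place; nothing else in the table is rewritten.
con.execute("UPDATE marks SET score = score + ? WHERE name = ?", (35, "Alfreda"))
con.commit()
con.close()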

How can I read two fields of a CSV file separately in Python?

I want to read two columns of a CSV file separately, but when I wrote the code below, Python showed only the first column and nothing for the second, even though in the CSV file the second column also has lots of rows.
import csv
import pprint

f = open("arachnid.csv", 'r')
read = csv.DictReader(f)

for i in range(3):
    read.next()
for i in read:
    pprint.pprint(i["binomialAuthority_label"])
for i in read:
    pprint.pprint(i["rdf-schema#label"])
The reason for this is that when you use DictReader the way you are using it, it creates what is called an iterator/generator. So, once you have iterated over it, you cannot iterate over it again this way.
If you want to keep your logic as is, you can call seek(0) on your file object to reset its position:
f.seek(0)
The next time you iterate over your DictReader object, it will give you what you are looking for. So the part of your code of interest would be this:
for i in read:
    pprint.pprint(i["binomialAuthority_label"])

# This is where you set your seek(0) before the second loop
f.seek(0)
for i in read:
    pprint.pprint(i['rdf-schema#label'])
Your DictReader instance gets exhausted after your first for i in read: loop, so when you try the second loop, there is nothing left to iterate over.
Once you've iterated over the CSV the first time, you can seek the file back to the start, create a new instance of the DictReader, and start again. You'll want to create a new DictReader instance; otherwise you'd need to manually skip the header line.
f = open(filename)

read = csv.DictReader(f)
for i in read:
    print i

f.seek(0)
read = csv.DictReader(f)
for i in read:
    print i

Why can't I repeat the 'for' loop for csv.Reader?

I am a beginner in Python. I am now trying to figure out why the second 'for' loop doesn't work in the following script. I mean that I only get the result of the first 'for' loop, but nothing from the second one. I have pasted my script and the CSV data below.
It would be helpful if you could tell me why it behaves this way and how to make the second 'for' loop work as well.
My SCRIPT:
import csv

file = "data.csv"
fh = open(file, 'rb')
read = csv.DictReader(fh)

for e in read:
    print(e['a'])

for e in read:
    print(e['b'])
"data.csv":
a,b,c
tree,bough,trunk
animal,leg,trunk
fish,fin,body
The csv reader is an iterator over the file. Once you go through it once, you read to the end of the file, so there is no more to read. If you need to go through it again, you can seek to the beginning of the file:
fh.seek(0)
This will reset the file to the beginning so you can read it again. Depending on the code, it may also be necessary to skip the field name header:
next(fh)
This is necessary for your code, since the DictReader consumed that line the first time around to determine the field names, and it's not going to do that again. It may not be necessary for other uses of csv.
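Applied to the script above, the reset between the two loops looks like this:

for e in read:
    print(e['a'])

fh.seek(0)  # back to the start of the file
next(fh)    # skip the "a,b,c" header row; the reader already has the field names
for e in read:
    print(e['b'])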
If the file isn't too big and you need to do several things with the data, you could also just read the whole thing into a list:
data = list(read)
Then you can do what you want with data.
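For example, reading the file once up front instead of looping over the reader twice:

import csv

fh = open("data.csv", 'rb')
data = list(csv.DictReader(fh))  # every row as a dict, read once
fh.close()

for e in data:
    print(e['a'])
for e in data:
    print(e['b'])  # works: data is a plain list, not an exhausted iterator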
I have written a small function that takes the path of a CSV file and returns a list of dicts in one go, so that you can loop through the list very easily:
import csv

def read_csv_data(path):
    """
    Reads CSV from given path and returns a list of dicts with mapping
    """
    data = csv.reader(open(path))
    # Read the column names from the first line of the file
    fields = next(data)
    data_lines = []
    for row in data:
        items = dict(zip(fields, row))
        data_lines.append(items)
    return data_lines
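Hypothetical usage, assuming the data.csv from the previous question:

rows = read_csv_data("data.csv")
for row in rows:
    print(row['a'])
for row in rows:
    print(row['b'])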
Regards
