Writing with Python's built-in .csv module

[Please note that this is a different question from the already answered How to replace a column using Python’s built-in .csv writer module?]
I need to do a find and replace (specific to one column of URLs) in a huge Excel .csv file. Since I'm in the beginning stages of trying to teach myself a scripting language, I figured I'd try to implement the solution in python.
I'm having trouble when I try to write back to a .csv file after making a change to the contents of an entry. I've read the official csv module documentation about how to use the writer, but there isn't an example that covers this case. Specifically, I am trying to get the read, replace, and write operations accomplished in one loop. However, one cannot use the same 'row' reference in both the for loop's argument and as the parameter for writer.writerow(). So, once I've made the change in the for loop, how should I write back to the file?
edit: I implemented the suggestions from S. Lott and Jimmy, still the same result
edit #2: I added the "rb" and "wb" to the open() functions, per S. Lott's suggestion
import csv
#filename = 'C:/Documents and Settings/username/My Documents/PALTemplateData.xls'
csvfile = open("PALTemplateData.csv","rb")
csvout = open("PALTemplateDataOUT.csv","wb")
reader = csv.reader(csvfile)
writer = csv.writer(csvout)
changed = 0
for row in reader:
    row[-1] = row[-1].replace('/?', '?')
    writer.writerow(row)  # this is the line that's causing issues
    changed += 1
print('Total URLs changed:', changed)
edit: For your reference, this is the new full traceback from the interpreter:
Traceback (most recent call last):
  File "C:\Documents and Settings\g41092\My Documents\palScript.py", line 13, in <module>
    for row in reader:
_csv.Error: iterator should return strings, not bytes (did you open the file in text mode?)

You cannot read and write the same file.
source = open("PALTemplateData.csv", "rb")
reader = csv.reader(source, dialect)
target = open("AnotherFile.csv", "wb")
writer = csv.writer(target, dialect)
The normal approach to ALL file manipulation is to create a modified COPY of the original file. Don't try to update files in place. It's just a bad plan.
Edit
In the lines
source = open("PALTemplateData.csv","rb")
target = open("AnotherFile.csv","wb")
The "rb" and "wb" are absolutely required. Every time you ignore those, you open the file for reading in the wrong format.
You must use "rb" to read a .CSV file. There is no choice with Python 2.x. With Python 3.x you can omit the "b", but use "r" explicitly to make it clear.
You must use "wb" to write a .CSV file. There is no choice with Python 2.x. With Python 3.x, you must use "w".
Edit
It appears you are using Python3. You'll need to drop the "b" from "rb" and "wb".
Read this: http://docs.python.org/3.0/library/functions.html#open
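Putting that advice together, a Python 3 version of the script from the question might look like this (a sketch, using the file names from the question):
import csv

# Python 3: open CSV files in text mode; newline='' lets the csv
# module handle line endings itself.
with open("PALTemplateData.csv", "r", newline='') as source, \
     open("PALTemplateDataOUT.csv", "w", newline='') as target:
    reader = csv.reader(source)
    writer = csv.writer(target)
    changed = 0
    for row in reader:
        row[-1] = row[-1].replace('/?', '?')  # fix the URL column
        writer.writerow(row)
        changed += 1
print('Total URLs changed:', changed)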

Opening CSV files as binary is just wrong. CSV files are normal text files, so you need to open them with
source = open("PALTemplateData.csv","r")
target = open("AnotherFile.csv","w")
The error
_csv.Error: iterator should return strings, not bytes (did you open the file in text mode?)
comes because you are opening them in binary mode.
When I was opening Excel CSVs with Python, I used something like:
try:  # checking if file exists
    f = csv.reader(open(filepath, "r", encoding="cp1250"), delimiter=";", quotechar='"')
except IOError:
    f = []
for record in f:
    pass  # do something with record
and it worked rather fast (I was opening two CSV files of about 10MB each, though I did this with Python 2.6, not the 3.0 version).
There are a few working modules for handling Excel files from within Python; pyExcelerator is one of them.

The problem is you're trying to write to the same file you're reading from. Write to a different file and then rename it after deleting the original.
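For example, a minimal sketch of that last step (the file names are taken from the question above):
import os

# ...after writing the modified copy to "PALTemplateDataOUT.csv":
os.remove("PALTemplateData.csv")                             # delete the original
os.rename("PALTemplateDataOUT.csv", "PALTemplateData.csv")   # give the copy its name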

Related

Python problem reading CSV files that contain the word NUL

I'm working with some CSV files, with the following code:
reader = csv.reader(open(filepath, "rU"))
try:
    for row in reader:
        print 'Row read successfully!', row
except csv.Error, e:
    sys.exit('file %s, line %d: %s' % (filename, reader.line_num, e))
And one file is throwing this error:
file my.csv, line 1: line contains NULL byte
What can I do? Google seems to suggest that it may be an Excel file that's been saved as a .csv improperly. Is there any way I can get round this problem in Python?
== UPDATE ==
Following #JohnMachin's comment below, I tried adding these lines to my script:
print repr(open(filepath, 'rb').read(200)) # dump 1st 200 bytes of file
data = open(filepath, 'rb').read()
print data.find('\x00')
print data.count('\x00')
And this is the output I got:
'\xd0\xcf\x11\xe0\xa1\xb1\x1a\xe1\x00\x00\x00\x00\x00\x00\x00\x00\ .... <snip>
8
13834
So the file does indeed contain NUL bytes.
As #S.Lott says, you should be opening your files in 'rb' mode, not 'rU' mode. However, that may NOT be causing your current problem. As far as I know, using 'rU' mode would mess you up if there are embedded \r in the data, but not cause any other dramas. I also note that you have several files (all opened with 'rU'??) but only one causing a problem.
If the csv module says that you have a "NULL" (silly message, should be "NUL") byte in your file, then you need to check out what is in your file. I would suggest that you do this even if using 'rb' makes the problem go away.
repr() is (or wants to be) your debugging friend. It will show unambiguously what you've got, in a platform-independent fashion (which is helpful to helpers who are unaware what od is or does). Do this:
print repr(open('my.csv', 'rb').read(200)) # dump 1st 200 bytes of file
and carefully copy/paste (don't retype) the result into an edit of your question (not into a comment).
Also note that if the file is really dodgy, e.g. no \r or \n within reasonable distance from the start of the file, the line number reported by reader.line_num will be (unhelpfully) 1. Find where the first \x00 is (if any) by doing
data = open('my.csv', 'rb').read()
print data.find('\x00')
and make sure that you dump at least that many bytes with repr or od.
What does data.count('\x00') tell you? If there are many, you may want to do something like
for i, c in enumerate(data):
    if c == '\x00':
        print i, repr(data[i-30:i]) + ' *NUL* ' + repr(data[i+1:i+31])
so that you can see the NUL bytes in context.
If you can see \x00 in the output (or \0 in your od -c output), then you definitely have NUL byte(s) in the file, and you will need to do something like this:
fi = open('my.csv', 'rb')
data = fi.read()
fi.close()
fo = open('mynew.csv', 'wb')
fo.write(data.replace('\x00', ''))
fo.close()
By the way, have you looked at the file (including the last few lines) with a text editor? Does it actually look like a reasonable CSV file like the other (no "NULL byte" exception) files?
data_initial = open("staff.csv", "rb")
data = csv.reader((line.replace('\0','') for line in data_initial), delimiter=",")
This works for me.
Reading it as UTF-16 was also my problem.
Here's my code that ended up working:
import codecs

f = codecs.open(location, "rb", "utf-16")
csvread = csv.reader(f, delimiter='\t')
csvread.next()  # skip the first row
for row in csvread:
    print row
Where location is the path to your csv file.
You could just inline a generator to filter out the null values if you want to pretend they don't exist. Of course this is assuming the null bytes are not really part of the encoding and really are some kind of erroneous artifact or bug.
with open(filepath, "rb") as f:
    reader = csv.reader((line.replace('\0', '') for line in f))
    try:
        for row in reader:
            print 'Row read successfully!', row
    except csv.Error, e:
        sys.exit('file %s, line %d: %s' % (filename, reader.line_num, e))
I bumped into this problem as well. Using the Python csv module, I was trying to read an XLS file created in MS Excel and running into the NULL byte error you were getting. I looked around and found the xlrd Python module for reading and formatting data from MS Excel spreadsheet files. With the xlrd module, I am not only able to read the file properly, but I can also access many different parts of the file in a way I couldn't before.
I thought it might help you.
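For reference, a minimal xlrd sketch (the file name is a placeholder):
import xlrd

# Open the workbook and walk the first sheet row by row.
book = xlrd.open_workbook("my.xls")
sheet = book.sheet_by_index(0)
for rownum in range(sheet.nrows):
    print(sheet.row_values(rownum))  # each row as a list of cell values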
Converting the encoding of the source file from UTF-16 to UTF-8 solved my problem.
How to convert a file to utf-8 in Python?
import codecs
BLOCKSIZE = 1048576 # or some other, desired size in bytes
with codecs.open(sourceFileName, "r", "utf-16") as sourceFile:
    with codecs.open(targetFileName, "w", "utf-8") as targetFile:
        while True:
            contents = sourceFile.read(BLOCKSIZE)
            if not contents:
                break
            targetFile.write(contents)
Why are you doing this?
reader = csv.reader(open(filepath, "rU"))
The docs are pretty clear that you must do this:
with open(filepath, "rb") as src:
    reader = csv.reader(src)
The mode must be "rb" to read.
http://docs.python.org/library/csv.html#csv.reader
If csvfile is a file object, it must be opened with the ‘b’ flag on platforms where that makes a difference.
Apparently it's an XLS file and not a CSV file, as http://www.garykessler.net/library/file_sigs.html confirms.
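If you want to check this yourself, compare the first eight bytes of the file against the OLE2 signature that legacy .xls files carry (it matches the \xd0\xcf\x11\xe0... dump shown earlier); a rough Python 2 sketch:
# OLE2 compound document signature used by legacy .xls files.
OLE2_SIGNATURE = '\xd0\xcf\x11\xe0\xa1\xb1\x1a\xe1'

header = open('my.csv', 'rb').read(8)
if header == OLE2_SIGNATURE:
    print 'This is really an OLE2/XLS file, not a CSV.'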
Instead of the csv reader I read the file directly and use the string split function:
lines = open(input_file, 'rb')
for line_all in lines:
    line = line_all.replace('\x00', '').split(";")
I got the same error. Saved the file in UTF-8 and it worked.
This happened to me when I created a CSV file with OpenOffice Calc. It didn't happen when I created the CSV file in my text editor, even if I later edited it with Calc.
I solved my problem by copy-pasting in my text editor the data from my Calc-created file to a new editor-created file.
I had the same problem opening a CSV produced from a webservice which inserted NULL bytes in empty headers. I did the following to clean the file:
with codecs.open('my.csv', 'rb', 'utf-8') as myfile:
    data = myfile.read()
    # clean file first if dirty
    if data.count('\x00'):
        print 'Cleaning...'
        with codecs.open('my.csv.tmp', 'w', 'utf-8') as of:
            for line in data:
                of.write(line.replace('\x00', ''))
        shutil.move('my.csv.tmp', 'my.csv')

with codecs.open('my.csv', 'rb', 'utf-8') as myfile:
    myreader = csv.reader(myfile, delimiter=',')
    # Continue with your business logic here...
Disclaimer:
Be aware that this overwrites your original data. Make sure you have a backup copy of it. You have been warned!
I opened and saved the original csv file as a .csv file through Excel's "Save As" and the NULL byte disappeared.
I think the original encoding for the file I received was double byte unicode (it had a null character every other character) so saving it through excel fixed the encoding.
For all those 'rU' filemode haters: I just tried opening a CSV file from a Windows machine on a Mac with the 'rb' filemode and I got this error from the csv module:
Error: new-line character seen in unquoted field - do you need to
open the file in universal-newline mode?
Opening the file in 'rU' mode works fine. I love universal-newline mode -- it saves me so much hassle.
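For what it's worth, on Python 3 (where 'rU' is deprecated) the documented equivalent for the csv module is text mode with newline='':
import csv

# newline='' stops Python translating line endings, so the csv module
# can interpret \r, \n and \r\n itself, whatever platform wrote the file.
with open('my.csv', 'r', newline='') as f:
    for row in csv.reader(f):
        print(row)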
I encountered this when using scrapy and fetching a zipped csvfile without having a correct middleware to unzip the response body before handing it to the csvreader. Hence the file was not really a csv file and threw the line contains NULL byte error accordingly.
Have you tried using gzip.open?
with gzip.open('my.csv', 'rb') as data_file:
I was trying to open a file that had been compressed but had the extension '.csv' instead of 'csv.gz'. This error kept showing up until I used gzip.open
One more case: if the CSV file contains empty rows, this error may show up, so checking the row is necessary before you proceed to write or read.
for row in csvreader:
    if row:
        pass  # do something
I solved my issue by adding this check in the code.

Read() function erases text in file

Started Python a week ago and I have some questions to ask about reading and writing to the same files. I've gone through some tutorials online but I am still confused about it. I can understand simple read and write files.
openFile = open("filepath", "r")
readFile = openFile.read()
print readFile
openFile = open("filepath", "a")
appendFile = openFile.write("\nTest 123")
openFile.close()
But, if I try the following I get a bunch of unknown text in the text file I am writing to. Can anyone explain why I am getting such errors and why I cannot use the same openFile object the way shown below?
# I get an error when I use the codes below:
openFile = open("filepath", "r+")
writeFile = openFile.write("Test abc")
readFile = openFile.read()
print readFile
openFile.close()
I will try to clarify my problems. In the example above, openFile is the object used to open the file. I have no problems if I want to write to it the first time. But if I want to use the same openFile to read the file or append something to it, it doesn't happen or an error is given. I have to declare a same/different open file object before I can perform another read/write action to the same file.
#I have no problems if I do this:
openFile = open("filepath", "r+")
writeFile = openFile.write("Test abc")
openFile2 = open("filepath", "r+")
readFile = openFile2.read()
print readFile
openFile.close()
I will be grateful if anyone can tell me what I did wrong here or is it just a Python thing. I am using Python 2.7. Thanks!
Updated Response:
This seems like a bug specific to Windows - http://bugs.python.org/issue1521491.
Quoting from the workaround explained at http://mail.python.org/pipermail/python-bugs-list/2005-August/029886.html
the effect of mixing reads with writes on a file open for update is
entirely undefined unless a file-positioning operation occurs between
them (for example, a seek()). I can't guess what
you expect to happen, but seems most likely that what you
intend could be obtained reliably by inserting
fp.seek(fp.tell())
between read() and your write().
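A minimal sketch of that workaround applied to the code from the question (Python 2, like the question):
openFile = open("filepath", "r+")
openFile.write("Test abc")
openFile.seek(openFile.tell())  # file-positioning call between the write and the read
readFile = openFile.read()
print readFile
openFile.close()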
My original response demonstrates how reading/writing on the same file opened for appending works. It is apparently not true if you are using Windows.
Original Response:
In 'r+' mode, using the write method will write the string object to the file based on where the pointer is. In your case, it will put the string "Test abc" at the start of the file. See an example below:
>>> f=open("a","r+")
>>> f.read()
'Test abc\nfasdfafasdfa\nsdfgsd\n'
>>> f.write("foooooooooooooo")
>>> f.close()
>>> f=open("a","r+")
>>> f.read()
'Test abc\nfasdfafasdfa\nsdfgsd\nfoooooooooooooo'
The string "foooooooooooooo" got appended at the end of the file since the pointer was already at the end of the file.
Are you on a system that differentiates between binary and text files? You might want to use 'rb+' as a mode in that case.
Append 'b' to the mode to open the file in binary mode, on systems
that differentiate between binary and text files; on systems that
don’t have this distinction, adding the 'b' has no effect.
http://docs.python.org/2/library/functions.html#open
Every open file has an implicit pointer which indicates where data will be read and written. Normally this defaults to the start of the file, but if you use a mode of a (append) then it defaults to the end of the file. It's also worth noting that the w mode will truncate your file (i.e. delete all the contents) even if you add + to the mode.
Whenever you read or write N characters, the read/write pointer will move forward that amount within the file. I find it helps to think of this like an old cassette tape, if you remember those. So, if you executed the following code:
fd = open("testfile.txt", "w+")
fd.write("This is a test file.\n")
fd.close()
fd = open("testfile.txt", "r+")
print fd.read(4)
fd.write(" IS")
fd.close()
... It should end up printing This and then leaving the file content as This IS a test file.. This is because the initial read(4) returns the first 4 characters of the file, because the pointer is at the start of the file. It leaves the pointer at the space character just after This, so the following write(" IS") overwrites the next three characters with a space (the same as is already there) followed by IS, replacing the existing is.
You can use the seek() method of the file to jump to a specific point. After the example above, if you executed the following:
fd = open("testfile.txt", "r+")
fd.seek(10)
fd.write("TEST")
fd.close()
... Then you'll find that the file now contains This IS a TEST file..
All this applies on Unix systems, and you can test those examples to make sure. However, I've had problems mixing read() and write() on Windows systems. For example, when I execute that first example on my Windows machine then it correctly prints This, but when I check the file afterwards the write() has been completely ignored. However, the second example (using seek()) seems to work fine on Windows.
In summary, if you want to read/write from the middle of a file in Windows I'd suggest always using an explicit seek() instead of relying on the position of the read/write pointer. If you're doing only reads or only writes then it's pretty safe.
One final point - if you're specifying paths on Windows as literal strings, remember to escape your backslashes:
fd = open("C:\\Users\\johndoe\\Desktop\\testfile.txt", "r+")
Or you can use raw strings by putting an r at the start:
fd = open(r"C:\Users\johndoe\Desktop\testfile.txt", "r+")
Or the most portable option is to use os.path.join():
fd = open(os.path.join("C:\\", "Users", "johndoe", "Desktop", "testfile.txt"), "r+")
You can find more information about file IO in the official Python docs.
Reading and Writing happens where the current file pointer is and it advances with each read/write.
In your particular case, writing to the openFile causes the file pointer to point to the end of the file. Trying to read from the end results in EOF.
You need to reset the file pointer to the beginning of the file through seek(0) before reading from it.
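For example, a minimal sketch of that fix (Python 2, like the question):
openFile = open("filepath", "r+")
openFile.write("Test abc")
openFile.seek(0)            # rewind to the beginning before reading
readFile = openFile.read()  # now reads from the start, including the new text
print readFile
openFile.close()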
You can read, modify and save to the same file in Python, but you actually have to replace the whole content of the file, and to call the following before updating the file content:
# set the pointer to the beginning of the file in order to rewrite the content
edit_file.seek(0)
I needed a function to go through all subdirectories of a folder and edit the content of the files based on some criteria; here it is, in case it helps:
import io
import os

new_file_content = ""
for directories, subdirectories, files in os.walk(folder_path):
    for file_name in files:
        file_path = os.path.join(directories, file_name)
        # open file for reading and writing
        with io.open(file_path, "r+", encoding="utf-8") as edit_file:
            for current_line in edit_file:
                if condition in current_line:
                    # update current line
                    current_line = current_line.replace('john', 'jack')
                new_file_content += current_line
            # set the pointer to the beginning of the file in order to rewrite the content
            edit_file.seek(0)
            # delete actual file content
            edit_file.truncate()
            # rewrite updated file content
            edit_file.write(new_file_content)
            # empty the new content in order to reset for the next iteration
            new_file_content = ""

Writing to CSV and saving the file

I have two programs in Python. One writes a customer's information to a CSV. The other accesses it. When the first has written it, I can open the CSV file (in Excel) and see that it has been written correctly. However for the other program to access the new data in the CSV file I have to manually open it and save it (in Excel) otherwise it doesn't work. Does anyone know why this may be?
Edit:
This writes to it (from first program):
f = open('details.csv', 'at', newline='')
csv_w = csv.writer(f)
csv_w.writerow(clientList)
f.close()
And this reads it (second program):
f = open('details.csv', 'rt', newline='')
csv_f = csv.reader(f)
for row in csv_f:
    name.append(row[0])
I get this error when trying to append row[0] to a list.
Traceback (most recent call last):
  File "C:\Users\Dan\Desktop\Garden Centre\work.py", line 8, in <module>
    name.append(row[0])
IndexError: list index out of range
I have seen a number of such problems stemming from different platform line endings. Under Python 2, you might try the "universal line endings" file reading mode:
with open('data.csv', 'rU') as f:
    for row in csv.reader(f):
        print row
Because Excel often uses the old Mac (\r) and Windows (\r\n) line-ending standards, which can get in the csv module's way, especially on a Unix or Mac platform where Python expects the Unix standard line ending (\n). Python 3 is generally smarter about this (and other file/string encoding issues), so it generally doesn't need a special mode.
I found an answer after hours of trying. In an Excel-saved CSV, each item in an 'empty' row contains '', padded to the largest number of items in any row. Python doesn't write empty rows like that to the CSV; an empty row it writes has no items at all. As the second program iterated through each row, there was no first item to append to the list on the empty rows.
I had to manually add an extra '' to the list that is written by the first program.
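Alternatively, a defensive version of the reading loop that simply skips such empty rows might look like this (a sketch based on the code above):
import csv

name = []
with open('details.csv', 'rt', newline='') as f:
    for row in csv.reader(f):
        if row:  # skip the empty rows that caused the IndexError
            name.append(row[0])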

How to erase the file contents of text file in Python?

I have a text file which I want to erase in Python. How do I do that?
In Python:
open('file.txt', 'w').close()
Or alternatively, if you have already an opened file:
f = open('file.txt', 'r+')
f.truncate(0) # need '0' when using r+
Opening a file in "write" mode clears it, you don't specifically have to write to it:
open("filename", "w").close()
(you should close it as the timing of when the file gets closed automatically may be implementation specific)
Not a complete answer, more of an extension to ondra's answer.
When using truncate() (my preferred method), make sure your cursor is at the required position.
When a new file is opened for reading with open('FILE_NAME', 'r'), its cursor is at 0 by default.
But if you have parsed the file within your code, make sure to point at the beginning of the file again, i.e. truncate(0).
By default, truncate() truncates the contents of a file starting from the current cursor position.
A simple example, as a minimal sketch:
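f = open('FILE_NAME', 'r+')
data = f.read()  # parsing the file leaves the cursor at the end
f.seek(0)        # point back at the beginning
f.truncate()     # with the cursor at 0, this erases the whole content
f.close()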
As #jamylak suggested, a good alternative that includes the benefits of context managers is:
with open('filename.txt', 'w'):
    pass
When using with open("myfile.txt", "r+") as my_file:, I get strange zeros in myfile.txt, especially since I am reading the file first. For it to work, I had to first change the pointer of my_file to the beginning of the file with my_file.seek(0). Then I could do my_file.truncate() to clear the file.
Writing and Reading file content
def writeTempFile(text=None):
    filePath = "/temp/file1.txt"
    if not text:  # If not provided return file content
        f = open(filePath, "r")
        slug = f.read()
        return slug
    else:
        f = open(filePath, "a")  # Create a blank file
        f.seek(0)  # sets point at the beginning of the file
        f.truncate()  # Clear previous content
        f.write(text)  # Write file
        f.close()  # Close file
        return text
It worked for me.
If security is important to you then opening the file for writing and closing it again will not be enough. At least some of the information will still be on the storage device and could be found, for example, by using a disc recovery utility.
Suppose, for example, the file you're erasing contains production passwords and needs to be deleted immediately after the present operation is complete.
Zero-filling the file once you've finished using it helps ensure the sensitive information is destroyed.
On a recent project we used the following code, which works well for small text files. It overwrites the existing contents with lines of zeros.
import os

def destroy_password_file(password_filename):
    with open(password_filename) as password_file:
        text = password_file.read()
    lentext = len(text)
    zero_fill_line_length = 40
    zero_fill = ['0' * zero_fill_line_length
                 for _ in range(lentext // zero_fill_line_length + 1)]
    zero_fill = os.linesep.join(zero_fill)
    with open(password_filename, 'w') as password_file:
        password_file.write(zero_fill)
Note that zero-filling will not guarantee your security. If you're really concerned, you'd be best to zero-fill and use a specialist utility like File Shredder or CCleaner to wipe clean the 'empty' space on your drive.
You have to overwrite the file. In C++:
#include <fstream>
std::ofstream("test.txt", std::ios::out).close();
You can also use this (based on a few of the above answers):
file = open('filename.txt', 'w')
file.close()
Of course, this is a really bad way to clear a file because it requires so many lines of code, but I just wrote it to show you that it can be done this way too.
happy coding!
You cannot "erase" from a file in-place unless you need to erase the end. Either be content with an overwrite of an "empty" value, or read the parts of the file you care about and write it to another file.
Assigning the file pointer to null inside your program will just get rid of that reference to the file. The file's still there. I think the remove() function in C's stdio.h is what you're looking for there. Not sure about Python.
Since text files are sequential, you can't directly erase data on them. Your options are:
The most common way is to create a new file. Read from the original file and write everything on the new file, except the part you want to erase. When the whole file has been written, delete the old file and rename the new one so it has the original name.
You can also truncate and rewrite the entire file from the point you want to change onwards. Seek to point you want to change, and read the rest of file to memory. Seek back to the same point, truncate the file, and write back the contents without the part you want to erase.
Another simple option is to overwrite the data with another data of same length. For that, seek to the exact position and write the new data. The limitation is that it must have exact same length.
Look at the seek/truncate function/method to implement any of the ideas above. Both Python and C have those functions.
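A sketch of the first option, erasing a byte range by copying around it (the offsets and file names are placeholders):
import os

start, end = 100, 200  # placeholder: the byte range to erase
with open('data.txt', 'rb') as src, open('data.txt.tmp', 'wb') as dst:
    dst.write(src.read(start))  # everything before the erased part
    src.seek(end)               # skip the part to erase
    dst.write(src.read())       # everything after it
os.remove('data.txt')
os.rename('data.txt.tmp', 'data.txt')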
This is my method:
1. Open the file using r+ mode.
2. Read the current data from the file using file.read().
3. Move the pointer to the first line using file.seek(0).
4. Remove the old data from the file using file.truncate(0).
5. Write the new content, then the content we saved using file.read().
So the full code will look like this:
with open(file_name, 'r+') as file:
    old_data = file.read()
    file.seek(0)
    file.truncate(0)
    file.write('my new content\n')
    file.write(old_data)
Because we are using with open, the file will close automatically.

How to append a new row to an old CSV file in Python?

I am trying to add a new row to my old CSV file. Basically, it gets updated each time I run the Python script.
Right now I am storing the old CSV rows values in a list and then deleting the CSV file and creating it again with the new list value.
I wanted to know whether there are any better ways of doing this.
with open('document.csv', 'a') as fd:
    fd.write(myCsvRow)
Opening a file with the 'a' parameter allows you to append to the end of the file instead of simply overwriting the existing content. Try that.
I prefer this solution using the csv module from the standard library and the with statement to avoid leaving the file open.
The key point is using 'a' for appending when you open the file.
import csv

fields = ['first', 'second', 'third']
with open(r'name', 'a') as f:
    writer = csv.writer(f)
    writer.writerow(fields)
If you are using Python 2.7 you may experience superfluous new lines in Windows. You can try to avoid them using 'ab' instead of 'a'; this will, however, cause a TypeError: a bytes-like object is required, not 'str' in Python 3.6. Adding the newline='', as Natacha suggests, will cause a backward incompatibility between Python 2 and 3.
Based on the answer of @G M and paying attention to @John La Rooy's warning, I was able to append a new row by opening the file in 'a' mode.
Even in Windows, in order to avoid the newline problem, you must declare it as newline=''.
Now you can open the file in 'a' mode (without the b).
import csv

with open(r'names.csv', 'a', newline='') as csvfile:
    fieldnames = ['This', 'aNew']
    writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
    writer.writerow({'This': 'is', 'aNew': 'Row'})
I didn't try with the regular writer (without the Dict), but I think that it'll be ok too.
If you use pandas, you can append your dataframes to an existing CSV file this way:
df.to_csv('log.csv', mode='a', index=False, header=False)
With mode='a' we ensure that we append, rather than overwrite, and with header=False we ensure that we append only the values of df rows, rather than header + values.
Are you opening the file with mode of 'a' instead of 'w'?
See Reading and Writing Files in the python docs
7.2. Reading and Writing Files
open() returns a file object, and is most commonly used with two arguments: open(filename, mode).
>>> f = open('workfile', 'w')
>>> print f
<open file 'workfile', mode 'w' at 80a0960>
The first argument is a string containing the filename. The second argument is
another string containing a few characters describing the way in which
the file will be used. mode can be 'r' when the file will only be
read, 'w' for only writing (an existing file with the same name will
be erased), and 'a' opens the file for appending; any data written to
the file is automatically added to the end. 'r+' opens the file for
both reading and writing. The mode argument is optional; 'r' will be
assumed if it’s omitted.
On Windows, 'b' appended to the mode opens the file in binary mode, so
there are also modes like 'rb', 'wb', and 'r+b'. Python on Windows
makes a distinction between text and binary files; the end-of-line
characters in text files are automatically altered slightly when data
is read or written. This behind-the-scenes modification to file data
is fine for ASCII text files, but it’ll corrupt binary data like that
in JPEG or EXE files. Be very careful to use binary mode when reading
and writing such files. On Unix, it doesn’t hurt to append a 'b' to
the mode, so you can use it platform-independently for all binary
files.
If the file exists and contains data, then it is possible to generate the fieldnames parameter for csv.DictWriter automatically:
# read header automatically
with open(myFile, "r") as f:
    reader = csv.reader(f)
    for header in reader:
        break

# add row to CSV file
with open(myFile, "a", newline='') as f:
    writer = csv.DictWriter(f, fieldnames=header)
    writer.writerow(myDict)
I use the following approach to append a new line in a .csv file:
pose_x = 1
pose_y = 2
with open('path-to-your-csv-file.csv', mode='a') as file_:
    file_.write("{},{}".format(pose_x, pose_y))
    file_.write("\n")  # Next line.
[NOTE]:
mode='a' is append mode.
# I like using the codecs opening in a with
field_names = ['latitude', 'longitude', 'date', 'user', 'text']
with codecs.open(filename, "ab", encoding='utf-8') as logfile:
    logger = csv.DictWriter(logfile, fieldnames=field_names)
    logger.writeheader()
    # some more code stuff
    for video in aList:
        video_result = {}
        video_result['date'] = video['snippet']['publishedAt']
        video_result['user'] = video['id']
        video_result['text'] = video['snippet']['description'].encode('utf8')
        logger.writerow(video_result)
