I'm having some trouble with some test code. This code is meant to:
read a csv file
take two inputs
put the inputs in a list
make a new list with the csv contents + the input list
overwrite the csv with the new list.
import csv

input1 = input("input 1")
input2 = input("input 2")

original = []
with open('test.csv', 'r') as f:
    reader = csv.reader(f)
    original = list(reader)

data = [input1, input2]
original.append(data)

with open('test.csv', 'w') as f:
    writer = csv.writer(f)
    writer.writerows(original)
For example, if 'cats' and 'dogs' were already in the file and I typed in 'zebras' and 'giraffes', I'd expect the new row to appear directly under the existing rows when I open the csv in Notepad.
However, blank lines are produced between the rows when I run the code more than once, and I don't know why.
I am new to Python and any help is appreciated.
The solution is to pass newline='' when opening the file:

open('test.csv', 'w', newline='')

In Python 3 the csv writer terminates each row with "\r\n" by default, but a file opened in text mode also translates every "\n" into the platform line ending, so on Windows each row ends up terminated with "\r\r\n", which Notepad and Excel display as an extra blank line. Passing newline='' tells open() not to do any newline translation and leaves the line endings to the csv module. See the csv module documentation for more explanation.
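Applied to the script in the question, a minimal sketch of the fix (same filenames and inputs, only the open() calls change) would be:

import csv

input1 = input("input 1")
input2 = input("input 2")

# newline='' on both the read and the write side lets the csv module
# manage line endings itself, so no extra blank lines show up on Windows
with open('test.csv', 'r', newline='') as f:
    original = list(csv.reader(f))

original.append([input1, input2])

with open('test.csv', 'w', newline='') as f:
    csv.writer(f).writerows(original)

Using newline='' on the read side as well is what the csv docs recommend, since it keeps the reader from mishandling newlines embedded inside quoted fields.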
Related
I am trying to write to a csv file that has been saved with another program (Excel and others).
However, when I open the file to write to it, the first line written is added to the last cell of the last line.
file.csv
['1','2','3']
['1','2','3']
import csv

fields = ['A', 'B', 'C']

with open('file.csv', 'a', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(fields)
Expected results:
['1','2','3']
['1','2','3']
['A','B','C']
Actual results:
['1','2','3']
['1','2','3A','B','C']
If I just write to the file and then write to it again without viewing it, there's no issue; but if I open the file and save it inside the other program, the next line written is appended to the previous line instead of becoming its own line.
I assume there is a formatting issue with how the program saves the file, but I am at a loss as to how to fix it.
Lines in CSV files should always be terminated with a "\r\n" sequence, even if it's the last line in the file. In the grand tradition of CSV programming, this is often ignored. The fix is to write a program that peeks at the file and fixes it as needed before use. And write a bug against the "other" program that wrote the nonconforming CSV in the first place.
import csv

def csv_delimiter_fixer(filename):
    with open(filename, 'a+b') as fileobj:
        fileobj.seek(-1, 2)
        if fileobj.read(1) != b"\n":
            fileobj.write(b"\r\n")

fields = ['A', 'B', 'C']
filename = 'file.csv'

csv_delimiter_fixer(filename)

with open('file.csv', 'a', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(fields)
I am incredibly new to python, so I might not have the right terminology...
I've extracted text from a pdf using pdfplumber. That's been saved as an object. The code I used for that is:
import pdfplumber

with pdfplumber.open('Bell_2014.pdf') as pdf:
    page = pdf.pages[0]
    bell = page.extract_text()

print(bell)
So "bell" is all of the text from the first page of the imported PDF.
I need to write all of that text as a string to a csv. I tried using:
import csv

with open('Bell_2014_ex.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerows(bell)
and
bell_ex = 'bell_2014_ex.csv'

with open(bell_ex, 'w', newline='') as csvfile:
    file_writer = csv.writer(csvfile, delimiter=',')
    file_writer.writerow(bell)
All I keep finding when I search this is how to create a csv from specific characters or numbers, but nothing about writing the output of code that has already run. For instance, I can get this variation of the code above:
bell_ex = 'bell_2014_ex.csv'

with open(bell_ex, 'w', newline='') as csvfile:
    file_writer = csv.writer(csvfile, delimiter=',')
    file_writer.writerow(['bell'])
to create a csv that has "bell" in one cell of the csv, but that's as close as I can get.
I feel like this should be super easy, but I just can't seem to get it to work.
Any thoughts?
Please and thank you for helping my inexperienced self.
page.extract_text() is defined as "Collates all of the page's character objects into a single string", which would make bell just one very long string.
The CSV writerow() expects by default a list of strings, with each item in the list corresponding to a single column.
Your main issue is a type mismatch, you're trying to write a single string where a list of strings is expected. You will need to further operate on your bell object to convert it into a format acceptable to be written to a CSV.
Without having any knowledge of what bell contains or what you intend to write, I can't get any more specific, but the documentation on Python's csv module is very comprehensive in terms of setting delimiters, dialects, column definitions, etc. Once you have converted bell into a proper iterable of lists of strings, you can then write it to a CSV.
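For example, one possible conversion (just a sketch, assuming bell holds the string from extract_text() and that you want one line of the extracted text per CSV row) is to split on newlines and write each line as a single-column row:

import csv

# bell is the single string returned by page.extract_text()
rows = [[line] for line in bell.splitlines()]  # one single-column row per line of text

# utf-8 in case the PDF text contains non-ASCII characters
with open('Bell_2014_ex.csv', 'w', newline='', encoding='utf-8') as f:
    csv.writer(f).writerows(rows)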
Some similar code I wrote recently converts a tab-separated file to csv for insertion into an sqlite3 database. Maybe this is helpful:
import csv
import os

# Convert tab-delimited listfile.txt to comma separated values (.csv) file
out_file = os.path.join('input', 'listfile.csv')

in_text = open('listfile.txt', 'r')
in_reader = csv.reader(in_text, delimiter='\t')

out_csv = open(out_file, 'w', newline='\n')
out_writer = csv.writer(out_csv, dialect=csv.excel)

for _line in in_reader:
    out_writer.writerow(_line)

in_text.close()
out_csv.close()
... and that's it, not too tough
So my problem was that I was missing encoding='utf-8' for the special characters, and my delimiter needed to be a space instead of a comma. What ended up working was:
import csv
from pdfminer.high_level import extract_text

object = extract_text('filepath.pdf')
print(object)

new_csv = 'filename.csv'

with open(new_csv, 'w', newline='', encoding='utf-8') as csvfile:
    file_writer = csv.writer(csvfile, delimiter=' ')
    file_writer.writerow(object)
However, since a lot of my pdfs weren't true pdfs but scans, the csv ended up having a lot of weird symbols. This worked for about half of the pdfs I have. If you have true pdfs, this will be great. If not, I'm currently trying to figure out how to extract all the text into a pandas dataframe separated by headers within the pdfs since pdfminer extracted all text perfectly.
Thank you for everyone that helped!
I want to read some values from a csv file, modify them, and replace them with the new values. I can read the values, but I'm stuck at the point where I want to modify all the values present in the file and write them back.
So is it possible to open the file in r mode in one for loop and then immediately in w mode in another loop to enter the modified data?
If there is a simpler way to do this please help me out
Thank you.
Yes, you can open the same file in different modes in the same program. Just be sure not to do it at the same time. For example, this is perfectly valid:
with open("data.csv") as f:
# read data into a data structure (list, dictionary, etc.)
# process lines here if you can do it line by line
# process data here as needed (replacing your values etc.)
# now open the same filename again for writing
# the main thing is that the file has been previously closed
# (after the previous `with` block finishes, python will auto close the file)
with open("data.csv", "w") as f:
# write to f here
As others have pointed out in the comments, reading and writing on the same file handle at the same time is generally a bad idea and won't work as you expect (unless for some very specific use case).
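If you want extra safety on top of that read-then-write pattern, a common variant (only a sketch, reusing the data.csv name from above) is to write the modified rows to a temporary file and then swap it into place, so an error halfway through writing can't destroy the original:

import csv
import os
import tempfile

# write the modified rows to a temporary file first, then atomically
# replace the original, so a crash mid-write can't destroy data.csv
with open("data.csv", newline="") as src, \
     tempfile.NamedTemporaryFile("w", newline="", delete=False, dir=".", suffix=".tmp") as tmp:
    writer = csv.writer(tmp)
    for row in csv.reader(src):
        # modify row here as needed
        writer.writerow(row)

os.replace(tmp.name, "data.csv")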
Note that open("data.csv", "rw") is not a valid mode and will raise an error; the closest thing is open("data.csv", "r+"), which gives you a single handle you can both read from and write to, though as noted above that is rarely what you want when rewriting a csv.
Just like others have mentioned, modifying the same file as both input and output without any backup is a terrible idea, especially with a condensed format like most .csv files, which are normally more complicated than a plain .txt file. But if you insist, you can try the following:
import csv

file_path = 'some.csv'

with open(file_path, 'rw', newline='') as csvfile:
    read_file = csv.reader(csvfile)
    write_file = csv.writer(csvfile)
Note that the code above will trigger an error with the message ValueError: must have exactly one of create/read/write/append mode.
For safety, I prefer to split it into two different files:
import csv

in_path = 'some.csv'
out_path = 'Out.csv'

with open(in_path, 'r', newline='') as inputFile, open(out_path, 'w', newline='') as writerFile:
    read_file = csv.reader(inputFile)
    write_file = csv.writer(writerFile, delimiter=' ', quotechar='|', quoting=csv.QUOTE_MINIMAL)
    for row in read_file:
        # your modifying input data code here
        ...
        write_file.writerow(row)
My situation is, I have a csv file and here is the code for it:
import csv
from pathlib import Path

user_file = Path(str(message.author.id) + '.cvs')

if user_file.exists():
    with open('test.csv', 'a') as fp:
        writer = csv.writer(fp, delimiter=',')
        writer.writerows(data)
else:
    with open(user_file, 'w') as fp:
        data = [('xp', 0)]
        writer = csv.writer(fp, delimiter=',')
        writer.writerows(data)
I want a csv file that keeps track of how many times they type a message, so I need a way of editing the csv file and adding 1 to what it already has. But I have no idea how to do that! Please help! <3
test.csv:
4
Python:
# Replace test.csv with the file you wish to open; "r+" opens it for both reading and writing
with open("test.csv", "r+") as dat:
    # Assumes the text in the file is an int
    n = int(dat.read())
    dat.seek(0)        # go back to the start of the file before overwriting
    dat.write(str(n + 1))
    dat.truncate()     # drop any leftover characters from the old value
Result in test.csv:
5
This way the file is opened for both reading and writing: it reads the number, seeks back to the start of the file, and writes the incremented value back as a string. The truncate() call makes sure no characters from the old value are left behind if the new value is shorter.
P.S. If that if/else statement is only there to check whether the file exists, note that open() will create the file for you in write ('w') or append ('a') mode, but read modes such as 'r' and 'r+' require the file to exist already.
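If the counter file might not exist yet, one way to handle that (just a sketch, assuming a missing file means the count starts at zero) is to catch the FileNotFoundError:

try:
    with open("test.csv", "r+") as dat:
        n = int(dat.read())
        dat.seek(0)
        dat.write(str(n + 1))
        dat.truncate()
except FileNotFoundError:
    # "r+" needs an existing file, so create it with the first count
    with open("test.csv", "w") as dat:
        dat.write("1")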
I'm new to python.
I have a list with 19188 rows that I want to save as a csv.
When I write the list's rows to the csv, it does not have the last rows (it stops at 19112).
Do you have any idea what might cause this?
Here is how I write to the csv:
mycsvfile = open('file.csv', 'w')
thedatawriter = csv.writer(mycsvfile, lineterminator = '\n')
list = []
#list creation code
thedatawriter.writerows(list)
Each row of list has 4 string elements.
Another piece of information:
If I create a list that contains only the last elements that are missing and add them to the csv file, it kind of works (it is added, but twice...).
mycsvfile = open('file.csv', 'w')
thedatawriter = csv.writer(mycsvfile, lineterminator = '\n')
list = []
#list creation code
thedatawriter.writerows(list)
list_end = []
#list_end creation code
thedatawriter.writerows(list_end)
If I try to add the list_end alone, it doesn't seem to be working. I'm thinking there might be a csv writing parameter that I got wrong.
Another piece of information:
If I open the file adding ", newline=''", then it writes more rows to it (though not all):
mycsvfile = open('file.csv', 'w', newline='')
There must be a simple mistake in the way I open or write to the csv (or in the dialect?)
Thanks for your help!
I found my answer! I was not closing the file handle before the script ended, which left some rows unwritten.
Here is the fix:
with open('file.csv', 'w', newline='') as mycsvfile:
    thedatawriter = csv.writer(mycsvfile, lineterminator='\n')
    thedatawriter.writerows(list)
See: Writing to CSV from list, write.row seems to stop in a strange place
Close the filehandle before the script ends. Closing the filehandle
will also flush any strings waiting to be written. If you don't flush
and the script ends, some output may never get written.
Using the with open(...) as f syntax is useful because it will close
the file for you when Python leaves the with-suite. With with, you'll
never omit closing a file again.
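Equivalently, if you prefer not to use with, explicitly closing the handle flushes the buffered rows. A minimal sketch mirroring the snippet from the question:

import csv

mycsvfile = open('file.csv', 'w', newline='')
thedatawriter = csv.writer(mycsvfile, lineterminator='\n')

list = []
# list creation code

thedatawriter.writerows(list)
mycsvfile.close()   # closing flushes any rows still sitting in the buffer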