CSV file with some empty rows in Python with Tkintertable

I fill in the table with some data and then export it to a CSV file. So far, so good. But when I open the CSV file I notice empty rows between the rows I filled in.
How can I solve this? Run the code, fill in the table, export, and open the file to see the problem.
from tkintertable import TableCanvas, TableModel
from tkinter import *

root = Tk()
t_frame = Frame(root)
t_frame.pack(fill='both', expand=True)
table = TableCanvas(t_frame, cellwidth=60, thefont=('Arial', 12), rowheight=18,
                    rowheaderwidth=30, rowselectedcolor='yellow', editable=True)
table.show()
root.mainloop()
Look at the empty rows in the CSV file.

If you open the CSV file in Notepad or some other basic text editor, you will see that the data is saved with a blank line between each row of data. Because Excel treats every newline in a CSV as a new row, it will always import this way into Excel.
The only things you can really do are manually removing the blank lines from the CSV, building a macro to clean it up for you after the fact, or editing the tkintertable library where the issue occurs.
The Usage page of the tkintertable GitHub repository does not appear to have any details on exporting to CSV that would fix this issue; it is just a quirk of how this library writes its data.
Even adding a Button that runs a command to export gives you the same problem:
def export_to_csv():
    table.exportTable()

Button(root, text='test CSV', command=export_to_csv).pack()
UPDATE:
After doing some digging and realizing that tkintertable uses the csv library to write to CSV, I googled the same problem with the csv library and found this post (csv.write skipping lines when writing to csv). With that post I dug into the tkintertable source to find where the writing occurs, and here is what I found.
The blank lines between rows appear to be a result of how the csv library writes data. If you dig in, you will find that the tkintertable method that writes your table to CSV is called ExportTableData, defined in the TableExporter class. That class is as follows:
class TableExporter:
    def __init__(self):
        """Provides export utility methods for the Table and Table Model classes"""
        return

    def ExportTableData(self, table, sep=None):
        """Export table data to a comma separated file"""
        parent = table.parentframe
        filename = filedialog.asksaveasfilename(parent=parent, defaultextension='.csv',
                                                filetypes=[("CSV files", "*.csv")])
        if not filename:
            return
        if sep == None:
            sep = ','
        writer = csv.writer(open(filename, "w"), delimiter=sep)
        model = table.getModel()
        recs = model.getAllCells()
        # take column labels as field names
        colnames = model.columnNames
        collabels = model.columnlabels
        row = []
        for c in colnames:
            row.append(collabels[c])
        writer.writerow(row)
        for row in recs.keys():
            print(row)
            writer.writerow(recs[row])
        return
If we update this line:
writer = csv.writer(open(filename, "w"), delimiter=sep)
so that it includes lineterminator='\n', the problem goes away:
writer = csv.writer(open(filename, "w"), delimiter=sep, lineterminator='\n')
Results: the exported CSV no longer contains blank rows between the data rows.
Keep in mind that editing libraries is usually not a great idea, but in this situation I think it is your only option.
To get to this class you will need to open the Python file called Tables_IO.py in the tkintertable library.
If you are using PyCharm you can navigate through the library files with Ctrl+Left Click: first Ctrl+Click an imported name such as TableCanvas, then search that file for the class method called exportTable, and inside that method Ctrl+Click the export call. This will take you to the class mentioned above, which you can edit to solve the problem.
Please take care not to edit anything else, as you can very easily break the library and will need to reinstall it if that happens.
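If you would rather not touch the library at all, another possibility is to write the export yourself using the same model calls that ExportTableData uses above (getModel(), getAllCells(), columnNames, columnlabels). This is only a sketch based on the library code shown here, not an official tkintertable API, and the function name is made up:

import csv

def export_table_no_blank_rows(table, filename):
    # Sketch only: mirrors ExportTableData above, but opens the file with
    # newline='' (Python 3) so the csv writer does not add blank rows.
    model = table.getModel()
    recs = model.getAllCells()
    with open(filename, "w", newline='') as f:
        writer = csv.writer(f)
        # header row built from the column labels, as in the library code
        writer.writerow([model.columnlabels[c] for c in model.columnNames])
        for key in recs:
            writer.writerow(recs[key])

# e.g. hook it up to a button instead of table.exportTable():
# Button(root, text='test CSV', command=lambda: export_table_no_blank_rows(table, 'out.csv')).pack()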

Related

No data written in to csv file. What's wrong in this class based approach?

I have a project in which I initialize two log files in a class using 'setup_image_and_model_folders', as follows.
results_header = ['Image count', 'File path', 'Wafer', 'Die', 'Name']

def setup_image_and_model_folders(self):
    self.open_results_log()

def open_results_log(self):
    # creating log file to write all inspected files
    log_file = open(self.logs_csv, 'a')
    self.log_writer = csv.writer(log_file)
    # creating results log file which only includes metal detected
    csv_file = open(self.results_csv, 'w')
    self.csv_writer = csv.writer(csv_file)
    if not os.path.exists(self.results_csv):
        self.csv_writer.writerow(results_header)

def write_results(self, data):
    self.csv_writer.writerow(data)

def write_logs(self, log):
    self.log_writer.writerow(log)
It creates the CSV files at the two locations specified by the paths. However, 'results_csv' is empty even though I write the header.
Also, no matter how many times I call 'write_results' and 'write_logs', nothing gets written to the files.
This is the first time I am using the class-based approach to update files. If I write the same thing in one function, it works fine.
What is wrong here?
Maybe you forgot to close the file.
Try storing csv_file as self.csv_file and calling your_class_instance.csv_file.close() after writing.
If this solves the issue, try rewriting your code so that it uses a context manager:
with open(path_to_file, "w") as file:
    ...  # write to file
Then the file will be closed automatically.
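For illustration, here is a minimal sketch of the context-manager approach applied to the class from the question (the class name is made up; the attribute names and header are reused from the question):

import csv
import os

class ResultsLogger:
    results_header = ['Image count', 'File path', 'Wafer', 'Die', 'Name']

    def __init__(self, results_csv, logs_csv):
        self.results_csv = results_csv
        self.logs_csv = logs_csv
        # write the header only if the results file does not exist yet
        if not os.path.exists(self.results_csv):
            self.write_results(self.results_header)

    def write_results(self, data):
        # open, write and close on every call, so the data is flushed to disk immediately
        with open(self.results_csv, 'a', newline='') as f:
            csv.writer(f).writerow(data)

    def write_logs(self, log):
        with open(self.logs_csv, 'a', newline='') as f:
            csv.writer(f).writerow(log)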

Need a push to start with a function about text files, I can't figure this out on my own

I don't need the entire code, but I would like a push to help me on my way. I've been searching the internet for clues on how to start writing a function like this, but I haven't gotten any further than just the name of the function.
So I haven't got the slightest clue how to start with this; I don't know how to work with text files. Any tips?
These text files are CSV (Comma Separated Values) files. It is a simple file format used to store tabular data.
You may explore Python's built-in module called csv.
The following code snippet is an example of loading a .csv file in Python:
import csv

filename = 'us_population.csv'
with open(filename, 'r') as csvfile:
    csvreader = csv.reader(csvfile)
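If it helps as a starting point, a small function along those lines could look like the sketch below (the function name and file name are just examples):

import csv

def load_rows(filename):
    # return every row of the CSV file as a list of lists
    with open(filename, 'r', newline='') as csvfile:
        return list(csv.reader(csvfile))

rows = load_rows('us_population.csv')
print(rows[0])   # the header row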

Reading & Writing CSV Files

I started learning Python and I'm taking a Google course on Coursera about automation and IT using it. In the Practice Quiz: Reading & Writing CSV Files, the first question is:
We're working with a list of flowers and some information about each one. The create_file function writes this information to a CSV file. The contents_of_file function reads this file into records and returns the information in a nicely formatted block. Fill in the gaps of the contents_of_file function to turn the data in the CSV file into a dictionary using DictReader.
After giving my answer I receive "Incorrect. Something went wrong! Contact Coursera Support about this question!". I found a page here and copied that code, but the result is always the same. So I contacted Coursera, but they say there's no problem on their end. This is the code I provided:
import os
import csv

# Create a file with data in it
def create_file(filename):
    with open(filename, "w") as file:
        file.write("name,color,type\n")
        file.write("carnation,pink,annual\n")
        file.write("daffodil,yellow,perennial\n")
        file.write("iris,blue,perennial\n")
        file.write("poinsettia,red,perennial\n")
        file.write("sunflower,yellow,annual\n")

# Read the file contents and format the information about each row
def contents_of_file(filename):
    return_string = ""
    # Call the function to create the file
    create_file(filename)
    # Open the file
    with open(filename) as f:
        # Read the rows of the file into a dictionary
        x = csv.DictReader(f)
        # Process each item of the dictionary
        for row in x:
            return_string += "a {} {} is {}\n".format(row["color"], row["name"], row["type"])
    return return_string

# Call the function
print(contents_of_file("flowers.csv"))
Has anyone encountered the same issues? Or can you explain to me why it doesn't work?
I'm adding the console log of the browser here; I tried with Firefox, Chrome, and now Opera.
As it is an online evaluation platform, it might prohibit things like import os for security reasons. Besides, it's not doing anything in your code. Did you try removing that line?
It seems you missed some options (delimiter and newline='') in the reader function. Here is the working code:
import os
import csv

# Create a file with data in it
def create_file(filename):
    with open(filename, "w") as file:
        file.write("name,color,type\n")
        file.write("carnation,pink,annual\n")
        file.write("daffodil,yellow,perennial\n")
        file.write("iris,blue,perennial\n")
        file.write("poinsettia,red,perennial\n")
        file.write("sunflower,yellow,annual\n")

# Read the file contents and format the information about each row
def contents_of_file(filename):
    return_string = ""
    # Call the function to create the file
    create_file(filename)
    # Open the file
    with open(filename, "r", newline='') as f:
        # Read the rows of the file into a dictionary
        reader = csv.DictReader(f, delimiter=",")
        # Process each item of the dictionary
        for row in reader:
            return_string += "a {} {} is {}\n".format(row["color"], row["name"], row["type"])
    return return_string

# Call the function
print(contents_of_file("flowers.csv"))
and the result is:
a pink carnation is annual
a yellow daffodil is perennial
a blue iris is perennial
a red poinsettia is perennial
a yellow sunflower is annual
Keep in mind that newline='' applies to Python 3, and the delimiter is set explicitly here to make clear how the file should be read (a comma is the default).
This issue still persists. I reported it to Coursera today. There has to be an error on their side. Well, at least it's not a graded assessment, just a practice quiz. But frustrating nevertheless.

Python error: need more than one value to unpack

Ok, so I'm learning Python. But for my studies I have to do rather complicated stuff already. I'm trying to run a script to analyse data in excel files. This is how it looks:
#!/usr/bin/python
import sys

# lots of functions, not relevant

resultsdir = "/home/blah"
filename1 = sys.argv[1]
filename2 = sys.argv[2]
out = open(sys.argv[3], "w")
#filename1,filename2="CNVB_reads.403476","CNVB_reads.403447"
file1 = open(resultsdir + "/" + filename1 + ".csv")
file2 = open(resultsdir + "/" + filename2 + ".csv")

for line in file1:
    start.p, end.p, type, nexons, start, end, cnvlength, chromosome, id, BF, rest = line.split("\t", 10)
    CNVs1[chr].append([int(start), int(end), float(BF)])

for line in file2:
    start.p, end.p, type, nexons, start, end, cnvlength, chromosome, id, BF, rest = line.split("\t", 10)
    CNVs2[chr].append([int(start), int(end), float(BF)])
These are the titles of the columns of the data in the Excel files, and I want to split them; I'm not even sure that is necessary when using data from Excel files.
#more irrelevant stuff
out.write(filename1+","+filename2+","+str(chromosome)+","+str(type)+","+str(shared)+"\n")
This is what it should write to my output; 'shared' is what I have calculated, the rest is already in the files.
Ok, now my question, finally. When I call the script like this in my shell:
python script.py CNVB_reads.403476 CNVB_reads.403447 script.csv
I get the following error message:
start.p,end.p,type,nexons,start,end,cnvlength,chromosome,id,BF,rest=line.split("\t",10)
ValueError: need more than 1 value to unpack
I have no idea what is meant by that in relation to the data... Any ideas?
The line.split('\t', 10) call did not return eleven elements. Perhaps it is empty?
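For illustration, a blank line at the end of the file is enough to reproduce the error (the wording below matches the Python 2 traceback above); this snippet is just a hypothetical demonstration:

line = ""                      # e.g. an empty trailing line in the file
a, b = line.split("\t", 1)     # ValueError: need more than 1 value to unpack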
You probably want to use the csv module instead to parse these files.
import csv
import os

for filename, target in ((filename1, CNVs1), (filename2, CNVs2)):
    with open(os.path.join(resultsdir, filename + ".csv"), 'rb') as csvfile:
        reader = csv.reader(csvfile, delimiter='\t')
        for row in reader:
            start.p, end.p = row[:2]
            BF = float(row[8])
            target[chr].append([int(start), int(end), BF])

Error with urlopen: new-line character seen in unquoted field

I am using urllib.urlopen with Python 2.7 to read csv files located on an external webserver:
# Try & Except statements removed for clarity
import urllib
import csv

url = ...
csv_file = urllib.urlopen(url)
for row in csv.reader(csv_file):
    do_something()
All 100+ files can be read fine, except one that has been updated recently and that returns:
Error: new-line character seen in unquoted field - do you need to open the file in universal-newline mode?
The file is accessible here. According to my text editor, its mode is Mac (CR), as opposed to Windows (CRLF) for the other files.
Based on this thread, I found that Python's urlopen should handle all newline formats correctly. Therefore, the problem is likely to come from somewhere else, but I have no clue where. The file opens fine in all my text editors and my spreadsheet editors.
Does anyone have any idea how to diagnose the problem?
* EDIT *
The creator of the file informed me by email that I was not the only one to experience such issues, so he decided to regenerate it. The code above now works fine again. Unfortunately, using a new file also means that the issue can no longer be reproduced and the proposed solutions can no longer be tested properly.
Before closing the question, I want to thank all the stackers who dedicated some of their time to figure out a solution and post it here.
It might be a corrupt .csv file. Otherwise, this code runs perfectly:
#!/usr/bin/python

import urllib
import csv

url = "http://www.football-data.co.uk/mmz4281/1213/I1.csv"
csv_file = urllib.urlopen(url)
for row in csv.reader(csv_file):
    print row
Credits to J.F. Sebastian for the .csv file.
Although, you might want to consider sharing the specific .csv file with us, so we can try to re-create the error.
The following code runs without any error:
#!/usr/bin/env python
import csv
import urllib2

r = urllib2.urlopen('http://www.football-data.co.uk/mmz4281/1213/I1.csv')
for row in csv.reader(r):
    print row
I was having the same problem with a downloaded csv.
I know the fix would be to use open with 'rU', but I would rather not save the file to disk just to open it back up into a variable. That seems unnecessary.
file = open(filepath, 'rU')
mydata = csv.reader(file)
So if someone has a better solution, that would be nice. Stack Overflow links that got me this far:
CSV new-line character seen in unquoted field error
Open the file in universal-newline mode using the CSV Django module
I found what I actually wanted with stringIO, or cStringIO, or io:
Using Python, how do I to read/write data in memory like I would with a file?
I ended up getting io working:
import csv
import io
import urllib2

# warning: it's a 20MB csv
url = 'http://poweredgec.com/latest_poweredge-11g.csv'
urlRead = urllib2.urlopen(url).read()
# io.BytesIO wraps the downloaded bytes in an in-memory file object,
# so nothing has to be written to disk
ramFile = io.BytesIO(urlRead)
# splitlines() copes with \r, \n and \r\n endings, which is the in-memory
# equivalent of opening a file with mode 'rU'
csvCurrent = csv.reader(ramFile.read().splitlines())
csvTuple = map(tuple, csvCurrent)
print csvTuple
