Can't open csv file in "excel format" - python

I have the following code to create a CSV file.
with open('CSV\globalLS', 'a', newline="") as f:
    writer = csv.writer(f, quoting=csv.QUOTE_ALL)
    writer.writerow([name, datetime.datetime.now().date(), openPrice, highPrice, lowPrice, closePrice])
However, once the file is created and I try to open it from my computer, the file type is listed as bigdEal and the default program for opening it (Excel) is not shown.
It opens fine in Notepad, but it needs to open as an Excel file.
Does anyone know what is happening?
Thanks

Specify the extension in the filename you are writing to:
with open('CSV\globalLS.csv', 'a', newline="") as f:
This way, when you try to open the file, your operating system can apply the default application associated with the .csv file type.
If your default application for .csv files is not set correctly, you need to set it yourself; the process depends on your operating system.
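For completeness, a minimal sketch of the full write with the extension included (the column values here are hypothetical stand-ins for the variables in the question, and the path uses a forward slash, which Windows also accepts):

import csv
import datetime

# Hypothetical placeholder values for the variables used in the question.
name, openPrice, highPrice, lowPrice, closePrice = "EXAMPLE", 1.0, 2.0, 0.5, 1.5

# Including the .csv extension lets the OS associate the file with Excel.
with open('CSV/globalLS.csv', 'a', newline='') as f:
    writer = csv.writer(f, quoting=csv.QUOTE_ALL)
    writer.writerow([name, datetime.datetime.now().date(),
                     openPrice, highPrice, lowPrice, closePrice])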

Related

Where can I find my .csv file after running my python code on Ubuntu?

I've just managed to run my Python code on Ubuntu, and all seems to be going well. My Python script writes out a .csv file every hour, but I can't seem to find the .csv file.
Having the .csv file is important as I need to do research on the data. I am also using FileZilla; I would have thought the .csv would have shown up there.
import csv
import time
from datetime import datetime

collectionTime = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
mylist = [d['Spaces'] for d in data]  # data is built earlier in the script
mylist.append(collectionTime)
print(mylist)
with open("CarparkData.csv", "a", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(mylist)
In short, your code writes to wherever the file opened on this line lives:
with open("CarparkData.csv", "a", newline="") as f:
Because the filename has no directory component, the file is created relative to the current working directory of the process. You can change the filename to wherever you'd like the file to be read from or written to, for example data/CarparkData.csv if your application has a data/ folder dedicated to holding data files.
As written, writer.writerow writes each row through the file object returned by open("CarparkData.csv", ...) into the file itself (in this case, CarparkData.csv).
The way your code is structured, it won't create a new .csv every hour because it uses a static filename. If a file with that name does not exist when it is opened, one is created; if it does exist, new lines are appended to the existing file.
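If the intent really is one file per hour, a minimal sketch (with a hypothetical row standing in for the real data, and assuming a data/ directory already exists) is to build the filename from the current timestamp and an explicit directory:

import csv
from datetime import datetime

# Hypothetical row; in the real script this comes from the parsed data.
mylist = [42, datetime.now().strftime('%Y-%m-%d %H:%M:%S')]

# Embed the date and hour in the filename so each hour gets its own file,
# and write into an explicit directory so the file is easy to find.
filename = datetime.now().strftime('data/CarparkData-%Y-%m-%d-%H.csv')
with open(filename, "a", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(mylist)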

How does python access the file from a right click on Windows

I've written a simple Python script that takes a .csv file, rearranges it and spits out an Excel file. My aim is to be able to right-click on a .csv file in Windows and have it generate an .xlsx file. I've used PyInstaller to successfully create an .exe, and I've used Default Programs Editor to put my executable in the context menu shown when csv files are right-clicked. What I can't figure out is how to do the I/O correctly.
What I have is :
import fileinput
import csv

try:
    csv_filename = fileinput.filename()
    print(csv_filename)
except:
    print('no input')

with open(filename_csv, 'rt', newline='', encoding='utf8') as csvfile:
    # do stuff
    # write xslx_filename
Which doesn't work.
How do I access the file windows passes me when I open a file?
Edit:
Just to clear up confusion. If I hard code the location of a csv file, my script works just fine. My problem is how do I access the file that Windows (presumably) passes to my script when I right click on a csv file and choose to open with csv2xslx (my script).
Thanks to martineau above for the pointer. The following made it work:
import sys

file = sys.argv[1]
with open(file, 'rt', newline='', encoding='utf8') as csvfile:
    # do stuff
    # write xslx_filename
I'm still not sure if that is the 'correct' way of doing it, but it works so I'm happy.
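A slightly fuller sketch of the same sys.argv approach (the convert function and its behaviour are made up for illustration; the context-menu command is typically registered with "%1" as the argument, which is what puts the clicked file's path into sys.argv[1]):

import os
import sys

def convert(csv_path):
    # Placeholder for the real CSV-to-Excel conversion.
    xlsx_path = os.path.splitext(csv_path)[0] + '.xlsx'
    print('would convert', csv_path, '->', xlsx_path)

if __name__ == '__main__':
    if len(sys.argv) < 2:
        sys.exit('no input file passed on the command line')
    # Windows passes the right-clicked file's path as the first argument
    # when the context-menu command includes "%1".
    convert(sys.argv[1])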

python 3.4.3 unable to write to a file in a desktop folder

I have a simple piece of code to write a file in a specific folder. The system creates the file in the folder but nothing gets written to it. This is on Windows; I checked the IDE's write access (PyCharm) and it seems fine. The file is empty.
The second with block in the code below reads the file back, to check whether the write worked and that the previous block has finished. The short string is not being written to the file. I have also tried it on the command line, but it didn't work there either.
with open('C:/Users/***/Desktop/***/output.log', mode='w', encoding='utf-8') as a_file:
    a_file.write ="test"
with open('C:/Users/***/Desktop/***/output.log', encoding='utf-8') as a_file:
    print(a_file.read())
write is a method (a function), so you need to call it rather than assign to it:
with open('C:/Users/***/Desktop/***/output.log', mode='w', encoding='utf-8') as a_file:
    a_file.write("test")  # <---

Excel fails to open Python-generated CSV files

I have many Python scripts that output CSV files. It is occasionally convenient to open these files in Excel. After installing OS X Mavericks, Excel no longer opens these files properly: Excel doesn't parse the files and it duplicates the rows of the file until it runs out of memory. Specifically, when Excel attempts to open the file, a prompt appears that reads: "File not loaded completely."
Example of code I'm using to generate the CSV files:
import csv
with open('csv_test.csv', 'wb') as f:
    writer = csv.writer(f)
    writer.writerow([1,2,3])
    writer.writerow([4,5,6])
Even the simple file generated by the above code fails to load in Excel. However, if I open the CSV file in a text editor and copy/paste the text into Excel, parse it with text to columns, and then save as CSV from Excel, then I can reopen the CSV file in Excel without issue. Do I need to pass an additional parameter in my scripts to make Excel parse the CSV files the same way it used to? Or is there some setting I can change in OS X Mavericks or Excel? Thanks.
I may have had a similar problem: the error message "SYLK: File format is not valid" when opening a Python-generated csv file. The solution is surprisingly simple: the first two characters of the file must not be an uppercase "ID". Also see ""SYLK: File format is not valid" error message when you open file".
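If that is the cause, one common workaround is to make sure the header row does not begin with an uppercase "ID", for example by changing the case of the first column name (a minimal Python 3 sketch with made-up column names):

import csv

# Excel sniffs the first bytes of the file; one starting with "ID" is
# misread as a SYLK file. Writing "Id" (or "id") avoids that.
with open('csv_test.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['Id', 'Name', 'Value'])   # not 'ID'
    writer.writerow([1, 'foo', 3])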
Possible solution 1: use *.txt instead of *.csv. In that case Excel (at least Excel 2010) shows an import-data wizard where you can specify delimiters, character encoding, field types, etc.
Update, solution 2:
Python's csv module has a "dialect" feature. For example, the following modification of your code generates a valid csv file in my environment (Python 2.7, Excel 2010, Windows 7, a locale with ";" as the list delimiter):
import csv
with open('csv_test2.csv', 'wb') as f:
    csv.excel.delimiter = ';'
    writer = csv.writer(f, dialect=csv.excel)
    writer.writerow([1,2,3])
    writer.writerow([4,5,6])
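The answer above targets Python 2.7, where the csv module wants a file opened in 'wb'. A rough Python 3 equivalent (my assumption, since the question predates it) opens the file in text mode with newline='' and passes the delimiter directly:

import csv

# Python 3: text mode with newline='' instead of 'wb',
# and the delimiter passed as a keyword argument.
with open('csv_test2.csv', 'w', newline='') as f:
    writer = csv.writer(f, delimiter=';')
    writer.writerow([1, 2, 3])
    writer.writerow([4, 5, 6])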

new line chars added to csv file after ftp.storbinary()

I am attempting to store a csv file on an FTP server using Python's ftplib module.
Right now, I have about 30 lines of code that generate probabilities of weather values in a 2-d array. I then write this 2-d array to a csv file.
When I write the csv file to my local drive, the file displays as expected in Excel. However, when I view the file after uploading it to the FTP server, I see that a new line character has been added after every row.
I've done some minor testing to see what the problem may be, and I was able to upload the csv file with CoreFTP; the csv file displays correctly after I do that. So I am pretty sure the file is fine; it's something that happens when Python uploads it to the FTP server.
I was originally creating a text file with a .csv extension, then reopening it as a binary file and uploading it. I thought that might be the issue, so I tried using the csv module instead, but I hit the same issue.
Here is my code at the moment...
TEMPSHEADER = [i - 50 for i in range(181)]  # upper bound exclusive
WINDSHEADER = [i for i in range(101)]  # upper bound exclusive
HEADER = TEMPSHEADER + WINDSHEADER
for site in ensmosdic:
    ensmos = ensmosdic.get(site)
    with open(utcnow.strftime("%Y-%m-%d") + "-" + site + "-prob.csv", "w", newline='') as csvfile:
        writer = csv.writer(csvfile, delimiter=",")
        writer.writerow(["CODE ", "F", "ForecastDate", "HOUR"] + HEADER)
        siteTable = [[0 for x in range(286)] for y in range(24, 169)]  # upper bound exclusive
        ###########
        # other code here, but not important with regards to post
        ###########
        for i in siteTable:
            writer.writerow(i)
    csvfile.close()  # not sure if you have to close csv file, not in csv module docs
    f = open(utcnow.strftime("%Y-%m-%d") + "-" + site + "-prob.csv", "rb")
    ftpInno.storbinary("STOR " + utcnow.strftime("%Y-%m-%d-") + site + "-prob.csv", f)
    f.close()
ftpInno.close()
Thanks in advance
After an hour or so of troubleshooting, the answer is fairly simple, although I am not entirely sure why it works.
What I did was create a text file instead of the csv file I was creating in my original question:
with open(FILELOCATION + utcnow.strftime("%Y-%m-%d") + "-" + site + "-prob.txt", "w") as f:
    # write to the file here
    ...
f.close()
# open the file again, as a binary file this time
f = open(FILELOCATION + utcnow.strftime("%Y-%m-%d") + "-" + site + "-prob.txt", "rb")
ftp.storlines("STOR " + utcnow.strftime("%Y-%m-%d-") + site + "-prob.csv", f)
f.close()
Reading the file as a binary file and then storing it with the storlines method removed the extra lines I was seeing in the file after I uploaded it to the FTP server.
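A condensed sketch of that workaround, pulled together into one runnable piece (the host, credentials and filename variables are hypothetical stand-ins for those in the question):

import ftplib
from datetime import datetime

# Hypothetical stand-ins for the variables in the question.
FILELOCATION = ""                 # directory prefix for the local file
site = "site1"                    # site code
utcnow = datetime.utcnow()

local_name = FILELOCATION + utcnow.strftime("%Y-%m-%d") + "-" + site + "-prob.txt"
remote_name = utcnow.strftime("%Y-%m-%d-") + site + "-prob.csv"

ftp = ftplib.FTP("ftp.example.com")       # hypothetical host
ftp.login("user", "password")             # hypothetical credentials

# Reopen the finished file in binary mode and let storlines handle
# the line endings, which is what removed the doubled newlines.
with open(local_name, "rb") as f:
    ftp.storlines("STOR " + remote_name, f)
ftp.quit()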
This might shed some light on your issue. I had a project where I was using the Windows command line and Windows PowerShell to transfer .csv files with the ftp get and mget commands, and like you said I was getting an extra line between each row. Switching to binary transfer mode fixed my issue: once you are in the ftp dialog, just type "binary" and hit enter and it switches the mode.
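The ftplib counterpart of that binary-mode get is retrbinary, which transfers in TYPE I (binary) mode; a minimal sketch with a hypothetical host and filename:

import ftplib

ftp = ftplib.FTP("ftp.example.com")    # hypothetical host
ftp.login("user", "password")          # hypothetical credentials

# retrbinary downloads in binary (TYPE I) mode, so no newline translation happens.
with open("downloaded.csv", "wb") as f:
    ftp.retrbinary("RETR remote-file.csv", f.write)
ftp.quit()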
