database to csv in django using python

I am writing this small piece of code as part of my Django application. It is supposed to pick up data from a DB table (MySQL) and write it to a CSV file. It is probably a very simple error I am getting, but I am not able to resolve it.
Name of the file: write_to_csv.py
import csv

def createCSV():
    from django.db import connection, transaction
    cursor = connection.cursor()
    cursor.execute("select * from avg_max_min;")
    csv_writer = csv.writer(open("out.csv", "wt"), delimiter=',')
    csv_writer.writerow([i[0] for i in cursor.description])  # write headers
    csv_writer.writerows(cursor)
    del csv_writer  # this will close the CSV file
Error
Exception Value:
'module' object has no attribute 'writer'
Exception Location: C:\Python26\Lib\site-packages\django\bin\report\src\report ..\report\report_view\write_to_csv.py in createCSV, line 6

open's second argument should be "wb", not "wt". Other than that, it looks like you are doing everything right.
If it's still not working, can you update your question with the results of doing dir(csv)? (It's most likely that you have some other module installed in your Python distribution or in the same directory as write_to_csv.py with the same name.)

I'm guessing it's a setup problem (something like this). Make sure you don't have a file named csv.py or some other weirdness that is "hiding" Python's csv module.
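A quick way to confirm whether a local csv.py (or a leftover csv.pyc) is shadowing the standard library module is to print where Python actually imported csv from; this is only a diagnostic sketch, not code from the question:

import csv

# If this prints a path inside your project instead of the standard
# library directory, a local file is shadowing the built-in csv module.
print csv.__file__

# A shadowing module will typically also lack the expected attributes:
print hasattr(csv, "writer")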

Related

import csv into postgres with python script

I am trying to import a CSV file of IP addresses into Postgres via a Python script. This is what I have so far (the script, the test CSV file, and the error were shown as screenshots).
I ran the same Python script against a plain text file and got the same error.
I also tried manually uploading the same file via pgAdmin with no issue, so it is probably something I am missing in my code.
I am also able to connect to the DB, so it is not a connection issue.
Thanks in advance.
You do not open the actual file anywhere; you are trying to iterate over the file name.
You need to read the file's lines and pass them to execute/executemany.
Sample code:
import csv

# assumes `conn` is an open database connection and `cur = conn.cursor()`
with open("test.csv", "r") as my_file:
    reader = csv.reader(my_file)
    for line in reader:
        cur.execute("INSERT INTO x(y) VALUES (%s)", line)
conn.commit()
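For completeness, here is a self-contained sketch of the same approach; the psycopg2 connection parameters and the ips(address) table are placeholders for illustration, not taken from the question:

import csv
import psycopg2

# placeholder connection parameters
conn = psycopg2.connect(host="localhost", dbname="testdb",
                        user="postgres", password="secret")
cur = conn.cursor()

with open("test.csv", "r") as my_file:
    reader = csv.reader(my_file)
    for line in reader:
        # each `line` is a list with one item per CSV column
        cur.execute("INSERT INTO ips(address) VALUES (%s)", (line[0],))

conn.commit()
cur.close()
conn.close()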

AttributeError: 'file' object has no attribute 'DictReader'

I'm creating a temporary CSV file:
for formname in formnamesFinal:
    csv = tempfile.NamedTemporaryFile("w", prefix=formname+'_', suffix=".csv", dir="/var/tmp/")
    csv.write(....)
And I'm writing something in it. Now I want to read this file with DictReader:
content = csv.DictReader(csv, delimiter=';')
for contenthelp in content:
    contentlist.append(contenthelp)
But I'm receiving the following error:
AttributeError: 'file' object has no attribute 'DictReader'
I have to step through the temp CSV files, because I have huge datasets to get from a database for the following steps and it would take too much time to load the data over and over.
csv = tempfile.NamedTemporaryFile("w", prefix=formname+'_', suffix=".csv", dir = "/var/tmp/")
This line overwrites your reference to the csv module. Try renaming it to something else.
my_csv = tempfile.NamedTemporaryFile("w", prefix=formname+'_', suffix=".csv", dir = "/var/tmp/")
Now you should be able to access csv properly again.
Another cause of this error is a Python script in your project with the filename csv.py.
This hides the name of Python's built-in csv module.
Resolve the issue by renaming the user-created csv.py file.
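Putting the two fixes together, a minimal sketch of the corrected flow could look like this; note the temporary file is opened in "w+" mode and rewound with seek(0) so the same handle can be read back (formnamesFinal and the written content are assumed to exist elsewhere):

import csv
import tempfile

for formname in formnamesFinal:  # assumed to be defined elsewhere
    my_csv = tempfile.NamedTemporaryFile("w+", prefix=formname + '_',
                                         suffix=".csv", dir="/var/tmp/")
    my_csv.write("a;b\n1;2\n")   # placeholder content
    my_csv.flush()
    my_csv.seek(0)               # rewind before reading back

    contentlist = []
    for contenthelp in csv.DictReader(my_csv, delimiter=';'):
        contentlist.append(contenthelp)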

Open excel output with xlsxwriter from temporary file fail

I'm creating an xlsx output with xlsxwriter in a temporary file using the tempfile module, and I store the path to this temporary file in a variable that I later use in another script to open it.
The problem is that sometimes opening the file fails with the error:
"[Errno 2] No such file or directory: '/tmp/xls5TnVsx'"
Sorry, I don't have an exact idea of how often this problem occurs, but it seems to happen from time to time, and I don't understand why.
This is how I save into a temporary file:
f = tempfile.NamedTemporaryFile(prefix="xls",delete=False)
xlsfilename = f.name
Then, to create the xlsx output:
wb = xlsxwriter.Workbook(filename)
ws = wb.add_worksheet(sheetName)
# Write header
....
# Write data
for row, row_data in enumerate(data, start=1):
    for column, key in enumerate(headers):
        ....
wb.close()
f.close()
Then, in a Python CGI script, I use the variable xlsfilename, which is the path to the file, to open it:
print "Content-type: application/msexcel"
print "Content-Disposition: attachment; filename="+xlsfilename
print
try:
    print open(xlsfilename,"rb").read()
finally:
    try:
        xlsfilename.close()
    except:
        pass
    os.unlink(xlsfilename)
What am I doing wrong here, and are there any ideas on how to solve this, perhaps by using another method of storing the output in a temporary file?
I believe the issue here is that your program is overwriting the created file with its own output, as the
wb = xlsxwriter.Workbook(filename)
statement creates a new file. The conditions under which this might be deleted will depend on when the named temporary file is deleted (technically this happens on close()).
You should think about using mkstemp instead, since you already explicitly delete the file you are creating. Overwriting that file, whose name is guaranteed unique and which is not deleted automatically, should be more controllable.
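A minimal sketch of that mkstemp-based approach, with the sheet name and the header/data writing left as placeholders from the question:

import os
import tempfile
import xlsxwriter

# mkstemp returns an OS-level file descriptor and a unique path;
# the file is NOT deleted automatically, so we remove it ourselves later.
fd, xlsfilename = tempfile.mkstemp(prefix="xls", suffix=".xlsx")
os.close(fd)  # xlsxwriter opens the path itself

wb = xlsxwriter.Workbook(xlsfilename)
ws = wb.add_worksheet("Sheet1")  # placeholder sheet name
# ... write header and data here, as in the question ...
wb.close()

# later, after the CGI script has streamed the file:
os.unlink(xlsfilename)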

Error with urlopen: new-line character seen in unquoted field

I am using urllib.urlopen with Python 2.7 to read csv files located on an external webserver:
# Try & Except statements removed for clarity
import urllib
import csv
url = ...
csv_file = urllib.urlopen(url)
for row in csv.reader(csv_file):
    do_something()
All 100+ files can be read fine, except one that has been updated recently and that returns:
Error: new-line character seen in unquoted field - do you need to open the file in universal-newline mode?
The file is accessible here. According to my text editor, its mode is Mac (CR), as opposed to Windows (CRLF) for the other files.
Based on this thread, I found that Python's urlopen handles all newline formats correctly. Therefore, the problem is likely to come from somewhere else, but I have no clue where. The file opens fine in all my text editors and spreadsheet editors.
Does anyone have any idea how to diagnose the problem?
* EDIT *
The creator of the file informed me by email that I was not the only one to experience such issues, so he decided to regenerate the file. The code above now works fine again. Unfortunately, using a new file also means that the issue can no longer be reproduced, nor the proposed solutions tested properly.
Before closing the question, I want to thank all the stackers who dedicated some of their time to figure out a solution and post it here.
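As a diagnostic aid for this kind of problem, one option is to inspect the raw bytes returned by urlopen and count which line-ending styles actually appear; this is only a sketch, and the URL is a placeholder:

import re
import urllib

url = "http://example.com/data.csv"  # placeholder URL
raw = urllib.urlopen(url).read()

crlf = raw.count("\r\n")                      # Windows endings
cr_only = len(re.findall(r"\r(?!\n)", raw))   # old Mac endings
lf_only = len(re.findall(r"(?<!\r)\n", raw))  # Unix endings
print "CRLF:", crlf, "CR only:", cr_only, "LF only:", lf_only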
It might be a corrupt .csv file? Otherwise, this code runs perfectly.
#!/usr/bin/python
import urllib
import csv
url = "http://www.football-data.co.uk/mmz4281/1213/I1.csv"
csv_file = urllib.urlopen(url)
for row in csv.reader(csv_file):
    print row
Credits to J.F. Sebastian for the .csv file.
Although, you might want to consider sharing the specific .csv file with us so we can try to re-create the error.
The following code runs without any error:
#!/usr/bin/env python
import csv
import urllib2
r = urllib2.urlopen('http://www.football-data.co.uk/mmz4281/1213/I1.csv')
for row in csv.reader(r):
    print row
I was having the same problem with a downloaded csv.
I know the fix would be to use open with 'rU'. But I would rather not have to save the file to disk just to open it back up into a variable. That seems unnecessary.
file = open(filepath,'rU')
mydata = csv.reader(file)
So if someone has a better solution, that would be nice. Stack Overflow links that got me this far:
CSV new-line character seen in unquoted field error
Open the file in universal-newline mode using the CSV Django module
I found what I actually wanted with StringIO, cStringIO, or io:
Using Python, how do I to read/write data in memory like I would with a file?
I ended up getting io working:
import csv
import urllib2
import io

# warning: it's a 20MB csv
url = 'http://poweredgec.com/latest_poweredge-11g.csv'
urlRead = urllib2.urlopen(url).read()
ramFile = io.BytesIO(urlRead)  # in-memory file-like object, nothing written to disk
csvCurrent = csv.reader(ramFile)
csvTuple = map(tuple, csvCurrent)
print csvTuple
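An alternative, if the CR-only newlines from the original question are the concern, is to split the downloaded text into lines yourself: str.splitlines() treats \r, \n and \r\n alike, so no 'rU' mode or temporary file is needed. A minimal sketch, assuming no fields contain embedded newlines:

import csv
import urllib2

url = 'http://poweredgec.com/latest_poweredge-11g.csv'
data = urllib2.urlopen(url).read()

# splitlines() handles Mac (CR), Unix (LF) and Windows (CRLF) endings
rows = list(csv.reader(data.splitlines()))
print rows[:5]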

Python open() with minimal fluff variables

The intent is to look in a json file in the directory above the script and load up what it finds in that file. This is what I've got:
import os
import json
settings_file = '/home/me/foo/bar.txt'
root = os.path.dirname(os.path.dirname(os.path.abspath(settings_file))) # '/home/me'
target = os.path.join(root,'.extras.txt') # '/home/me/.extras.txt'
db_file= open(target)
databases = json.load(db_file) # works, returns object
databases2 = json.load(open(target)) # equivalent to above, also works
# try to condense code, lose pointless variables target and file
databases3 = json.load(open(os.path.join(root,'.extras.txt'))) # equivalent (I thought!) to above, doesn't work.
So... why doesn't the all-at-once, no-holding-variables version work? Oh, the error returned is (now in its entirety):
$ ./json_test.py
Traceback (most recent call last):
  File "./json_test.py", line 69, in <module>
    databases = json.load(open(os.path.join(root,'/.extras.txt')))
IOError: [Errno 2] No such file or directory: '/.extras.txt'
And to satisfy S.Lott's well-intentioned advice... it doesn't matter what target is set to. databases and databases2 populate correctly while databases3 does not. target exists, is readable, and contains what json expects to see. I suspect there's something I don't understand about the nature of stringing commands together... I can make the code work; I was just wondering why the concise (or complex?) version failed.
The code looks fine; make sure the referenced files are in the appropriate places. Given your code, which includes the target/file variable assignments, the full path to .extras.txt is
/home/me/.extras.txt
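One detail worth noting, separate from that answer: the traceback shows the path as '/.extras.txt' rather than '/home/me/.extras.txt'. That is what os.path.join produces when the second argument starts with a slash, because an absolute component discards everything before it. A quick sketch to illustrate:

import os

root = '/home/me'
print os.path.join(root, '.extras.txt')   # /home/me/.extras.txt
print os.path.join(root, '/.extras.txt')  # /.extras.txt -- the leading slash
                                          # discards `root`, which would explain
                                          # the "No such file or directory" error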
You need to do:
file = open(target, 'w')
because by default open will try to open the file in read mode (r) but you need to open it in w (write) mode if you want it to be created.
Also, I would not use the variable name file since it is also a type (<type 'file'>) in python.
You could add the write-mode flag to this line as well:
databases = json.load(open(os.path.join(root,'.extras.txt'), 'w'))
because from the limited information we have in the question it appears your /.extras file does not previously exist.
Final note, you are losing the handle to your open file in this line (since you are not storing it in your file variable):
databases = json.load(open(os.path.join(root,'.extras.txt')))
How do you intend to close the file when you're finished with it?
You could do this with a context manager (Python >= 2.6, or 2.5 with from __future__ import with_statement):
with open(os.path.join(root,'.extras.txt'), 'w') as f:
    databases = json.load(f)
which will take care of closing the file for you.
