I am using Python 2.7 on my Windows 10 (64-bit) system. I have a string str which, when printed, shows the following result:
'abcd'
'wxyz'
Now I want to write this result into a result.csv file, so I wrote the following Python script:
import csv
with open('result.csv', 'w') as csv_file:
    csv_write = csv.writer(csv_file)
    csv_write.writerow([str])
But whenever I execute this script, I find only wxyz in the result.csv file.
Help me with this issue.
Thanks in advance.
Python 2.7's csv module likes the 'b' mode for writing (in Python 3 just use 'w').
Example: Pre-built list of strings to file
import csv
strings = []
s1 = 'abcd'
s2 = 'wxyz'
strings.append(s1)
strings.append(s2)
csvf = r"C:\path\to\my\file.csv"
with open(csvf, 'wb') as f:
    w = csv.writer(f, delimiter=',')
    for s in strings:
        w.writerow([s])  # wrap the string in a list so it is written as one field, not one character per column
Example: Use of reader() to build list of rows to supply writer()
import csv
# read current rows in csv and return them as a list
def read(_f):
    with open(_f, 'rb') as f:
        reader = csv.reader(f, delimiter=',')
        return list(reader)  # materialize the rows before the file closes
# writes the existing rows back out,
# then appends the new content at the end
def write(_f, _reader, _adding):
    with open(_f, 'wb') as f:
        writer = csv.writer(f, delimiter=',')
        for row in _reader:
            writer.writerow(row)
        for row in _adding:
            writer.writerow([row])  # wrap each new string so it stays a single field
strings = []
s1 = 'abcd'
s2 = 'wxyz'
strings.append(s1)
strings.append(s2)
csvf = r"C:\path\to\my\file.csv"
content = read(csvf)
write(csvf, content, strings)
Example: Quick append
import csv
strings = []
s1 = 'abcd'
s2 = 'wxyz'
strings.append(s1)
strings.append(s2)
csvf = r"C:\path\to\my\file.csv"
with open(csvf, 'ab') as f:
    writer = csv.writer(f, delimiter=',')
    for s in strings:
        writer.writerow([s])  # again, one string per row
References:
In Python 2.x, the reader() and writer() objects required files opened with the 'b' flag. This was a result of how the module handles line termination.
In Python 3.x this was changed so that files passed to reader() and writer() should be opened with newline=''; line termination is still handled by the module.
There is also this post and that post covering some of this.
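For completeness, here is a minimal Python 3 sketch of the same write (assuming the strings list from the examples above):
import csv

strings = ['abcd', 'wxyz']

# Python 3: text mode with newline='' so the csv module controls line endings
with open('file.csv', 'w', newline='') as f:
    w = csv.writer(f)
    for s in strings:
        w.writerow([s])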
Related
I created a program to create a csv containing every number from 0 to 1000000:
import csv
nums = list(range(0,1000000))
with open('codes.csv', 'w') as f:
    writer = csv.writer(f)
    for val in nums:
        writer.writerow([val])
Then another program to remove a number, taken as input, from the file:
import csv
import os
while True:
    members = input("Please enter a number to be deleted: ")
    lines = list()
    with open('codes.csv', 'r') as readFile:
        reader = csv.reader(readFile)
        for row in reader:
            if all(field != members for field in row):
                lines.append(row)
            else:
                print('Removed')
    os.remove('codes.csv')
    with open('codes.csv', 'w') as writeFile:
        writer = csv.writer(writeFile)
        writer.writerows(lines)
The above code works fine on every other device except my PC: on mine, the first program creates the csv file with empty rows between the numbers, and in the second program the number of empty rows multiplies and the file size multiplies too.
What is wrong with my device then?
Thanks in advance
I think you shouldn't use a csv file for single-column data. Use a json file instead.
The code you've written for checking which values not to remove is also unnecessary. Instead you can write a list of numbers to the file, read it back into a variable, remove the number you want with the list.remove() method, and then write the list back to the file.
Here's how I would've done it:
import json
with open("codes.json", "w") as f: # Write the numbers to the file
f.write(json.dumps(list(range(0, 1000000))))
nums = None
with open("codes.json", "r") as f: # Read the list in the file to nums
nums = json.load(f)
to_remove = int(input("Number to remove: "))
nums.remove(to_remove) # Removes the number you want to
with open("codes.json", "w") as f: # Dump the list back to the file
f.write(json.dumps(nums))
Seems like you have different Python versions.
There is a difference between the built-in Python 2 open() and Python 3 open(). Python 3 defaults to universal newlines mode, while in Python 2 newline handling depends on the mode argument of open().
The csv module docs provide a few examples where open() is called with the newline argument explicitly set to the empty string, newline='':
import csv
with open('some.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerows(someiterable)
Try to do the same. Without an explicit newline='', your writerow calls probably add an extra newline character each time.
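Applied to your first program, a minimal sketch of the fix (assuming Python 3 on Windows) would be:
import csv

# newline='' lets the csv module handle line endings itself,
# which avoids the blank rows between records on Windows
with open('codes.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    for val in range(0, 1000000):
        writer.writerow([val])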
A CSV file is, from the English, Comma-Separated Values: each line is a record of comma-separated fields.
To remove the empty lines, add newline="" when opening the file for writing.
Since this format is tabular data, you cannot simply delete an element or the table will shift out of alignment. You need to insert an empty string or "NaN" in place of the deleted element.
I reduced the number of entries and arranged them as a table for clarity.
import csv
def write_csv(file, seq):
    with open(file, 'w', newline='') as f:
        writer = csv.writer(f)
        for val in seq:
            writer.writerow([v for v in val])

nums = ((j*10 + i for i in range(0, 10)) for j in range(0, 10))
write_csv('codes.csv', nums)

nums_new = []
members = input("Please enter a number, from 0 to 100, to be deleted: ")
with open('codes.csv', 'r') as f:
    reader = csv.reader(f)
    for row in reader:
        rows_new = []
        for elem in row:
            if elem == members:
                elem = ""
            rows_new.append(elem)
        nums_new.append(rows_new)

write_csv('codesdel.csv', nums_new)
I am reading a CSV file through a samba share. My CSV file format:
hello;world
1;2;
Python code
import csv
import urllib.request
from smb.SMBHandler import SMBHandler

PATH = 'smb://myusername:mypassword@192.168.1.200/myDir/'

opener = urllib.request.build_opener(SMBHandler)
fh = opener.open(PATH + 'myFileName')
data = fh.read().decode('utf-8')
print(data)  # This prints the data right

csvfile = csv.reader(data, delimiter=';')
for myrow in csvfile:
    print(myrow)  # This just prints ['h'], however it should print hello;world
    break
fh.close()
The problem is that after decoding to utf-8, the rows are not the actual lines in the file
Desired output of a row after reading the file: hello;world
Current output of a row after reading the file: h
Any help is appreciated.
csv.reader takes an iterable that returns lines. Strings, when iterated, yield characters. The fix is simple:
csvfile = csv.reader(data.splitlines(), delimiter=';')
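Putting it together with the rest of your snippet, a rough sketch (assuming data already holds the decoded file contents, as in your code):
import csv

# data is the decoded text read from the samba share
csvfile = csv.reader(data.splitlines(), delimiter=';')
for myrow in csvfile:
    print(myrow)  # first iteration now prints ['hello', 'world']
    break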
Taking a video game design course and I've never had to use Python before, so I am very confused... I am tasked with the following:
read in the CSV file into Python and store its contents as a list of lists
(or 2D list/array). To do so, you will make use of the CSV[1] library.
The reading of the CSV file should be done as its own function - please create a function called readCSV(...)
that takes in the file name as the argument and returns the 2D list.
As mentioned, I have no previous coding experience with Python. I have managed to do this so far and would greatly appreciate some support.
import csv
# reading each row and printing it
def readCSV(fileName):
    TwoDimList = []
    with open(fileName, 'r') as f:
        r = csv.reader(f, delimiter=',')
        for row in r:
            print(row)

entities = readCSV('entities.csv')
print(entities)
Just append each row (which is a list of column values) to your 2D list and return it at the end:
def readCSV(fileName):
    two_dim_list = []  # snake case ftw (PEP8)
    with open(fileName, 'r') as f:
        r = csv.reader(f, delimiter=',')
        # next(r)  # skip header line if necessary
        for row in r:
            two_dim_list.append(row)
    return two_dim_list
The short version of that is:
def readCSV(fileName):
    with open(fileName, 'r') as f:
        r = csv.reader(f, delimiter=',')
        # next(r)  # skip header line
        return list(r)
You can just call list on the reader to get the full 2d list:
def read_csv(file_name):
    with open(file_name) as f:
        return list(csv.reader(f))
This works because the object returned by csv.reader is iterable.
Define a function to read a csv file and return a list, and use it later in the program:
def readCSVinList(fpath, fname):
    with open(fpath + fname) as csv_file:
        csv_reader = csv.reader(csv_file)
        return list(csv_reader)

f = readCSVinList("A:\\Test\\", "test.csv")

for row in f:
    print(row)
I'm trying to read a csv file, but for some reason when I ask it to print, it prints the reader object's memory address instead of the table.
Below is my result:
>>> read_table('books.csv')
['books.csv', 'boxoffice.csv', 'imdb.csv', 'olympics-locations.csv', 'olympics-results.csv', 'oscar-actor.csv', 'oscar-film.csv', 'seinfeld-episodes.csv', 'seinfeld-foods.csv']
<_csv.reader object at 0x03977C30>
This is my code :
import csv
import glob
from database import *
def read_table(name):
    '''
    (str) -> Table
    Given a file name as a string, the function will return the file as a Table
    object.
    '''
    # create a list that stores all comma-separated files (*.csv)
    files_list = glob.glob('*.csv')
    print(files_list)
    # check if the desired file is in the list
    if name in files_list:
        # if found, open the file for reading
        with open(name) as csvfile:
            readCSV = csv.reader(csvfile, delimiter=',')
            print(readCSV)
Is something wrong with my script?
Try this:
for line in readCSV:
    print(line)
See the docs for a more complete example and explanation. Briefly, the reader returned by csv.reader is an iterator, so you need to loop over it (or convert it to a list) to see the rows.
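If you want read_table to actually return the rows, a minimal sketch (returning a plain 2D list rather than your Table object, since I don't know that class's interface) could be:
import csv
import glob

def read_table(name):
    '''(str) -> list of list of str
    Return the rows of the named csv file as a 2D list.'''
    files_list = glob.glob('*.csv')
    if name in files_list:
        with open(name) as csvfile:
            readCSV = csv.reader(csvfile, delimiter=',')
            return list(readCSV)  # materialize the rows while the file is still open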
import csv
from collections import defaultdict

# collect the values of each column under its header name
col_names = defaultdict(list)
with open(filename) as f:  # filename is the path to your csv file
    reader = csv.DictReader(f)
    for each_row in reader:
        for (i, j) in each_row.items():
            col_names[i].append(j)

print(col_names['column name'])  # look up any column by its header
I'm having trouble using the unicodecsv reader. I keep looking for different examples of how to use the module, but everyone keeps referencing the exact sample from the unicodecsv website (or some similar variation).
import unicodecsv as csv
from io import BytesIO
f = BytesIO()
w = csv.writer(f, encoding='utf-8')
_ = w.writerow((u'é', u'ñ'))
_ = f.seek(0)
r = csv.reader(f, encoding='utf-8')
next(r) == [u'é', u'ñ']
>>> True
For me this example makes too many assumptions about our understanding. It doesn't look like a csv file is being passed. I've completely missed the plot.
What I want to do is:
Read the first line of the csv file, which contains the headers
Read the remaining lines and put them in a dictionary
My broken code:
import unicodecsv
#
i = 0
myCSV = "$_input.csv"
dic = {}
#
f = open(myCSV, "rb")
reader = unicodecsv.reader(f, delimiter=',')
strHeader = reader.next()
#
# read the first line of csv
# use custom function to parse the header
myHeader = FNC.PARSE_HEADER(strHeader)
#
# read the remaining lines
# put data into dictionary of class objects
for row in reader:
    i += 1
    dic[i] = cDATA(myHeader, row)
And, as expected, I get the 'UnicodeDecodeError'. Maybe the example above has the answers, but they are just completely going over my head.
Can someone please fix my code? I'm running out of hair to pull out.
I switched the reader line to:
reader = unicodecsv.reader(f, encoding='utf-8')
Traceback:
for row in reader:
File "C:\Python27\unicodecsv\py2.py", line 128 in next
for value in row]
UnicodeDecodeError: 'utf8' codec can't decode byte 0x90 in position 48: invalide start byte
When I strictly print the data using:
f = open(myCSV, "rb")
reader = csv.reader(f, delimiter=',')
for row in reader:
    print(str(row[9]) + '\n')
    print(repr(row[9] + '\n'))
>>> UTAS ? Offline
>>> 'UTAS ? Offline'
You need to declare the encoding of the input file when creating the reader, just like you did when creating the writer:
>>> import unicodecsv as csv
>>> with open('example.csv', 'wb') as f:
...     writer = csv.writer(f, encoding='utf-8')
...     writer.writerow(('heading0', 'heading1'))
...     writer.writerow((u'é', u'ñ'))
...     writer.writerow((u'ŋ', u'ŧ'))
...
>>> with open('example.csv', 'rb') as f:
...     reader = csv.reader(f, encoding='utf-8')
...     headers = next(reader)
...     print headers
...     data = {i: v for (i, v) in enumerate(reader)}
...     print data
...
[u'heading0', u'heading1']
{0: [u'\xe9', u'\xf1'], 1: [u'\u014b', u'\u0167']}
Printing the dictionary shows the escaped representation of the data, but you can see the characters by printing them individually:
>>> for v in data.values():
...     for s in v:
...         print s
...
é
ñ
ŋ
ŧ
EDIT:
If the encoding of the file is unknown, then it's best to use something like chardet to determine the encoding before processing.
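A rough sketch of that approach (assuming the chardet package is installed, and reusing your myCSV variable):
import chardet
import unicodecsv

# read the raw bytes once to guess the encoding
with open(myCSV, 'rb') as f:
    raw = f.read()
guess = chardet.detect(raw)  # e.g. {'encoding': 'cp1252', 'confidence': 0.87}

# then open again and let unicodecsv decode with the detected encoding
with open(myCSV, 'rb') as f:
    reader = unicodecsv.reader(f, encoding=guess['encoding'])
    for row in reader:
        print row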
If your final goal is to read the csv file and convert the data into dicts, then I would recommend using csv.DictReader. DictReader takes care of reading the header and converting the rest of the rows into dicts (one per row). It is part of the csv module, which has lots of documentation and examples available.
>>> import csv
>>> with open('names.csv') as csvfile:
...     reader = csv.DictReader(csvfile)
...     for row in reader:
...         print(row['first_name'], row['last_name'])
For more clarity, check the examples here: https://docs.python.org/2/library/csv.html#csv.DictReader
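Applied to your goal (headers on the first line, remaining lines in a dictionary), a rough sketch reusing your myCSV variable might look like this:
import csv

dic = {}
with open(myCSV, 'rb') as f:
    reader = csv.DictReader(f)
    for i, row in enumerate(reader, 1):
        dic[i] = row  # each row is already a dict keyed by the header fields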