Python can't read CSV file downloaded from Azure DevOps (UTF-8)

I created an Azure DevOps query and chose 'Download results as CSV', which gave me a CSV file. If I open this CSV in VS Code, the bottom right corner says UTF-8 with BOM.
I am trying to write a Python function that will read in each value of this CSV file. I cannot rely on parsing the text myself and splitting values on the comma character, because some values contain commas inside them.
If I open the CSV in Excel, everything is organized perfectly. But if I try to parse the file in Python, it reads every row as a single comma-separated string (bad):
from csv import reader
import csv

# read in csv, convert to map organized by 'id' as index root parent value
def read_csv_as_map(csv_filename, id_format, encodingVar):
    print('filename: '+csv_filename+', id_format: '+id_format+', encoding: '+encodingVar)
    dict={}
    dict['rows']={}
    try:
        with open(csv_filename, 'r', encoding=encodingVar) as read_obj:
            csv_reader = reader(read_obj, delimiter='\t')
            csv_cols = None
            for row in csv_reader:
                print('row=',row)
        print('done')
        return dict
    except Exception as e:
        print('err=',e)
        return {}

ads_dict = read_csv_as_map(
    csv_filename="csv_migration\\ads-test-direct-download.csv",
    id_format='ID',
    encodingVar='utf-8-sig'
)
console output:
filename: csv_migration\ads-test-direct-download.csv, id_format: ID, encoding: utf-8-sig
row= ['Title,State,Work Item Type,ID,12NC']
row= ['TITLE,WITH COMMAS,To Do,NAME,6034,"value,with,commas"']
done
How can I read this file in Python so that it separates each value into a list, instead of giving this single string?
I get the same result with encodingVar='utf-8'. Should I open my CSV in an app like Notepad++ and convert it to UTF-16? My code works great for .csv files with UTF-16 encoding; it parses each individual value into a list with no problem. Why won't this work with a UTF-8 BOM CSV, even when Excel can parse the individual values perfectly fine?
csv file: https://file.io/TXh6uyXKZaug

from csv import reader
import csv

# read in csv, convert to map organized by 'id' as index root parent value
def read_csv_as_map(csv_filename, id_format, encodingVar):
    print('filename: '+csv_filename+', id_format: '+id_format+', encoding: '+encodingVar)
    dict={}
    dict['rows']={}
    try:
        with open(csv_filename, 'r', encoding=encodingVar) as read_obj:
            csv_reader = reader(read_obj, delimiter='\t')
            csv_cols = None
            for row in csv_reader:
                # with the tab delimiter the whole comma-separated line lands
                # in row[0], so split that single string on commas
                row_as_list = row[0].split(",")  # <-- Gets line as list!
                print('row=',row_as_list)
        print('done')
        return dict
    except Exception as e:
        print('err=',e)
        return {}

ads_dict = read_csv_as_map(
    csv_filename="csv_migration\\ads-test-direct-download.csv",
    id_format='ID',
    encodingVar='utf-8-sig'
)
This snippet splits the line into a list that you can index to get the information out. Note, though, that a plain split(",") will still break quoted values that contain commas, which the question says must be handled.
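A simpler route, given the constraint about embedded commas, is to let csv.reader do the splitting itself: the file is comma-delimited, and the reader's quote handling keeps values like "value,with,commas" intact. A minimal sketch, reusing the file path and encoding from the question:

import csv

# utf-8-sig strips the BOM; the default comma delimiter plus the csv
# module's quote handling keeps quoted values with embedded commas whole
with open("csv_migration\\ads-test-direct-download.csv", encoding="utf-8-sig", newline="") as f:
    for row in csv.reader(f):
        print(row)  # each row is now a list of individual field values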

Related

Read and write CSV file in Python

I'm trying to read sentences from a CSV file, convert them to lowercase, and save them to another CSV file.
import csv
import pprint

with open('dataset_elec_4000.csv') as f:
    with open('output.csv', 'w') as ff:
        data = f.read()
        data = data.lower
        writer = csv.writer(ff)
        writer.writerow(data)
but I get the error "_csv.Error: sequence expected". What should I do?
*I'm a beginner. Please be nice to me:)
You need to read over your input CSV row-by-row, and for each row, transform it, then write it out:
import csv

with open('output.csv', 'w', newline='') as f_out:
    writer = csv.writer(f_out)
    with open('dataset_elec_4000.csv', newline='') as f_in:
        reader = csv.reader(f_in)
        # comment these two lines if no input header
        header = next(reader)
        writer.writerow(header)
        for row in reader:
            # row is a sequence/list of cells, so...
            # select the cell with your sentence; I'm presuming it's the first cell (row[0])
            data = row[0]
            data = data.lower()
            # need to put data back into a "row"
            out_row = [data]
            writer.writerow(out_row)
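If every column holds text you want lowercased, a variant of the loop body (a sketch, not an assumption the answer makes) is:

for row in reader:
    # lowercase every cell in the row rather than just the first one
    writer.writerow([cell.lower() for cell in row])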
Python's standard library contains a module called csv for handling CSV files. The reader class from that module reads data from a CSV file. First the file is opened using open() in 'r' mode (read mode), which returns a file object; that object is then passed to the csv module's reader() method, which returns a reader object that iterates over the lines of the CSV document.
import csv

# opening the CSV file
with open('Giants.csv', mode='r') as file:
    # reading the CSV file
    csvFile = csv.reader(file)
    # displaying the contents of the CSV file
    for lines in csvFile:
        print(lines)

Nested JSON to CSV using a Python script

I'm new to Python and I've got a large JSON file that I need to convert to CSV. Below is a sample:
{ "status": "success","Name": "Theresa May","Location": "87654321","AccountCategory": "Business","AccountType": "Current","TicketNo": "12345-12","AvailableBal": "12775.0400","BookBa": "123475.0400","TotalCredit": "1234567","TotalDebit": "0","Usage": "5","Period": "May 11 2014 to Jul 11 2014","Currency": "GBP","Applicants": "Angel","Signatories": [{"Name": "Not Available","BVB":"Not Available"}],"Details": [{"PTransactionDate":"24-Jul-14","PValueDate":"24-Jul-13","PNarration":"Cash Deposit","PCredit":"0.0000","PDebit":"40003.0000","PBalance":"40003.0000"},{"PTransactionDate":"24-Jul-14","PValueDate":"23-Jul-14","PTest":"Cash Deposit","PCredit":"0.0000","PDebit":"40003.0000","PBalance":"40003.0000"},{"PTransactionDate":"25-Jul-14","PValueDate":"22-Jul-14","PTest":"Cash Deposit","PCredit":"0.0000","PDebit":"40003.0000","PBalance":"40003.0000"},{"PTransactionDate":"25-Jul-14","PValueDate":"21-Jul-14","PTest":"Cash Deposit","PCredit":"0.0000","PDebit":"40003.0000","PBalance":"40003.0000"},{"PTransactionDate":"25-Jul-14","PValueDate":"20-Jul-14","PTest":"Cash Deposit","PCredit":"0.0000","PDebit":"40003.0000","PBalance":"40003.0000"}]}
I need this to show up with name, status, location, accountcategory, accounttype, availablebal, totalcredit, totaldebit, etc. as columns,
with pcredit, pdebit, pbalance, ptransactiondate, pvaluedate and ptest taking new values on each row, as the JSON file shows.
I've managed to put the script below together from looking online, but it gives me an empty CSV file at the end. What have I done wrong? The online JSON-to-CSV converters work, but as these are sensitive files I'm hoping to write and manage my own script so I can see exactly how it works. Please see my Python script below. Can I have some advice on what to change? Thanks.
import csv
import json

infile = open("BankStatementJSON1.json","r")
outfile = open("testing.csv","w")
writer = csv.writer(outfile)
for row in json.loads(infile.read()):
    writer.writerow(row)

import csv, json, sys

# if you are not using utf-8 files, remove the next line
sys.setdefaultencoding("UTF-8") # set the encode to utf8
# check if you pass the input file and output file
if sys.argv[1] is not None and sys.argv[2] is not None:
    fileInput = sys.argv[1]
    fileOutput = sys.argv[2]
    inputFile = open("BankStatementJSON1.json","r") # open json file
    outputFile = open("testing2.csv","w") # load csv file
    data = json.load("BankStatementJSON1.json") # load json content
    inputFile.close() # close the input file
    output = csv.writer("testing.csv") # create a csv.write
    output.writerow(data[0].keys()) # header row
    for row in data:
        output.writerow(row.values()) # values row
This works for the JSON example you posted. The catch is that you have nested dicts, so you can't create the sub-headers and sub-rows for pcredit, pdebit, pbalance, ptransactiondate, pvaluedate and ptest that you want (but see the sketch after the code).
You can use csv.DictWriter:
import csv
import json

with open("BankStatementJSON1.json", "r") as inputFile:  # open json file
    data = json.loads(inputFile.read())  # load json content
with open("testing.csv", "w", newline="") as outputFile:  # open csv file
    output = csv.DictWriter(outputFile, data.keys())  # create a writer
    output.writeheader()
    output.writerow(data)
The with blocks also make sure both files are closed at the end.
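If you do want one CSV row per Details entry, one approach (a sketch against the sample JSON above, not the code from this answer) is to repeat the top-level scalar fields on every row and merge in each Details dict:

import csv
import json

with open("BankStatementJSON1.json", "r") as f:
    data = json.load(f)

# separate the nested parts from the top-level scalar fields
details = data.pop("Details")
data.pop("Signatories", None)  # also nested; dropped in this sketch
fieldnames = list(data.keys()) + ["PTransactionDate", "PValueDate", "PNarration",
                                  "PTest", "PCredit", "PDebit", "PBalance"]

with open("testing.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames, restval="")
    writer.writeheader()
    for entry in details:
        # each output row = the shared top-level fields + one Details entry
        writer.writerow({**data, **entry})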

Read CSV from URL one line at a time in Python 3.x

I have to read an online CSV file into a Postgres database, and in that context I have some problems reading the online CSV file properly.
If I just import the file, it reads as bytes, so I have to decode it. During the decoding, however, the entire file seems to be turned into one long string.
# Libraries
import csv
import urllib.request

# Function for importing csv from url
def csv_import(url):
    url_open = urllib.request.urlopen(url)
    csvfile = csv.reader(url_open.read().decode('utf-8'), delimiter=',')
    return csvfile

# Reading file
p_pladser = csv_import("http://wfs-kbhkort.kk.dk/k101/ows?service=WFS&version=1.0.0&request=GetFeature&typeName=k101:p_pladser&outputFormat=csv&SRSNAME=EPSG:4326")
When I try to read the imported file line by line, it only reads one character at a time.
for row in p_pladser:
    print(row)
    break
['F']
Can you help me identify where it goes wrong? I am using Python 3.6.
EDIT: Per request, my solution in R:
# Loading library
library(RPostgreSQL)
# Reading dataframe
p_pladser = read.csv("http://wfs-kbhkort.kk.dk/k101/ows?service=WFS&version=1.0.0&request=GetFeature&typeName=k101:p_pladser&outputFormat=csv&SRSNAME=EPSG:4326", encoding = "UTF-8", stringsAsFactors = FALSE)
# Creating database connection
drv <- dbDriver("PostgreSQL")
con <- dbConnect(drv, dbname = "secretdatabase", host = "secrethost", user = "secretuser", password = "secretpassword")
# Uploading dataframe to postgres database
dbWriteTable(con, "p_pladser", p_pladser , append = TRUE, row.names = FALSE, encoding = "UTF-8")
I have to upload several tables of 10,000 to 100,000 rows, and in total it takes R 1-2 seconds to upload them all.
csv.reader expects as argument a file-like object, not a string. You have two options here:
Either you read the data into a string (as you currently do) and then use an io.StringIO to build a file-like object around that string:
import csv
import io
import urllib.request

def csv_import(url):
    url_open = urllib.request.urlopen(url)
    csvfile = csv.reader(io.StringIO(url_open.read().decode('utf-8')), delimiter=',')
    return csvfile
Or you use an io.TextIOWrapper around the binary stream provided by urllib.request:
import csv
import io
import urllib.request

def csv_import(url):
    url_open = urllib.request.urlopen(url)
    csvfile = csv.reader(io.TextIOWrapper(url_open, encoding='utf-8'), delimiter=',')
    return csvfile
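Since the end goal is a Postgres database, the TextIOWrapper variant also streams nicely into an insert loop. A rough sketch, assuming the psycopg2 driver and reusing the placeholder connection details from the R code; the table name and the three-column layout are hypothetical and must match the real table:

import csv
import io
import urllib.request
import psycopg2  # assumed driver; any DB-API module works similarly

url = "http://wfs-kbhkort.kk.dk/k101/ows?service=WFS&version=1.0.0&request=GetFeature&typeName=k101:p_pladser&outputFormat=csv&SRSNAME=EPSG:4326"
with urllib.request.urlopen(url) as response:
    reader = csv.reader(io.TextIOWrapper(response, encoding='utf-8'))
    next(reader)  # skip the header row
    conn = psycopg2.connect(dbname="secretdatabase", host="secrethost",
                            user="secretuser", password="secretpassword")
    with conn, conn.cursor() as cur:
        for row in reader:
            # hypothetical 3-column table; adjust the placeholders to the schema
            cur.execute("INSERT INTO p_pladser VALUES (%s, %s, %s)", row[:3])
    conn.close()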
How about loading the CSV with pandas!
import pandas as pd

csv = pd.read_csv("http://wfs-kbhkort.kk.dk/k101/ows?service=WFS&version=1.0.0&request=GetFeature&typeName=k101:p_pladser&outputFormat=csv&SRSNAME=EPSG:4326")
print(csv.columns)
OR if you have the CSV downloaded on your machine, then directly:
csv = pd.read_csv("<path_to_csv>")
OK! You may consider passing delimiter and quotechar arguments to csv.reader, because the CSV contains quoted fields as well. Something like this:
import csv

with open('p_pladser.csv') as f:
    rows = csv.reader(f, delimiter=',', quotechar='"')
    for row in rows:
        print(row)

Convert CSV to YAML, with Unicode?

I'm trying to convert a CSV file, containing Unicode strings, to a YAML file using Python 3.4.
Currently, the YAML parser escapes my Unicode text into an ASCII string. I want the YAML parser to export the Unicode string as a Unicode string, without the escape characters. I'm misunderstanding something here, of course, and I'd appreciate any assistance.
Bonus points: how might this be done with Python 2.7?
CSV input
id, title_english, title_russian
1, A Title in English, Название на русском
2, Another Title, Другой Название
current YAML output
- id: 1
  title_english: A Title in English
  title_russian: "\u041D\u0430\u0437\u0432\u0430\u043D\u0438\u0435 \u043D\u0430\
    \ \u0440\u0443\u0441\u0441\u043A\u043E\u043C"
- id: 2
  title_english: Another Title
  title_russian: "\u0414\u0440\u0443\u0433\u043E\u0439 \u041D\u0430\u0437\u0432\u0430\
    \u043D\u0438\u0435"
desired YAML output
- id: 1
  title_english: A Title in English
  title_russian: Название на русском
- id: 2
  title_english: Another Title
  title_russian: Другой Название
Python conversion code
import csv
import yaml

in_file = open('csv_file.csv', "r")
out_file = open('yaml_file.yaml', "w")

items = []

def convert_to_yaml(line, counter):
    item = {
        'id': counter,
        'title_english': line[0],
        'title_russian': line[1]
    }
    items.append(item)

try:
    reader = csv.reader(in_file)
    next(reader)  # skip headers
    for counter, line in enumerate(reader):
        convert_to_yaml(line, counter)
    out_file.write( yaml.dump(items, default_flow_style=False) )
finally:
    in_file.close()
    out_file.close()
Thanks!
I ran into the same issue, and this is how I was able to resolve it, based on your example above:
out_file.write( yaml.dump(items, default_flow_style=False, allow_unicode=True) )
Including allow_unicode=True fixes the issue.
Also, specifically for Python 2, use safe_dump instead of dump to prevent !!python/unicode tags from appearing along with the Unicode text:
out_file.write( yaml.safe_dump(items, default_flow_style=False, allow_unicode=True) )
In Python 2.x you should use a Unicode CSV reader, as Python 2's csv module doesn't support Unicode. You can use unicodecsv for this purpose.
In your current Python 3.x code you should explicitly pass the file encoding when opening it:
import csv

with open('some.csv', newline='', encoding='utf-8') as f:
    reader = csv.reader(f)
    for row in reader:
        print(row)
It may be that your system is already doing the right thing but you're relying on defaults in that case.
Lastly, you need to make sure the YAML file is opened with the correct encoding: open("yaml_file.yaml", "w", encoding="utf-8"). And this encoding should be used later when reading the YAML file.
I'm not sure what the yaml library does when given Python objects but you also need to check that line[0] and line[1] are Unicode strings when you're setting them inside convert_to_yaml.
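Putting those pieces together, a minimal end-to-end Python 3 version of the conversion might look like this (file names taken from the question; note that with the id column present in the CSV, the titles sit at line[1] and line[2], and .strip() removes the spaces after the commas in the sample input):

import csv
import yaml

items = []
with open('csv_file.csv', newline='', encoding='utf-8') as in_file:
    reader = csv.reader(in_file)
    next(reader)  # skip headers
    for counter, line in enumerate(reader):
        items.append({
            'id': counter,
            'title_english': line[1].strip(),
            'title_russian': line[2].strip(),
        })

with open('yaml_file.yaml', 'w', encoding='utf-8') as out_file:
    # allow_unicode=True keeps the Cyrillic text readable instead of escaped
    yaml.safe_dump(items, out_file, default_flow_style=False, allow_unicode=True)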

Which encoding to use while reading Excel using xlrd

I am trying to read an Excel file using xlrd and write it into txt files. Everything is written fine except for some rows which have Spanish characters like 'Téd'. I can encode those using latin-1 encoding. However, the code then fails for other rows which have an en dash '–' (unicode u'\u2013'), since u'\u2013' can't be encoded using latin-1. When using UTF-8 the '–' is written out fine, but 'Téd' comes out as 'TÃ©d', which is not acceptable. How do I correct this?
Code below:
#!/usr/bin/python
import xlrd
import csv
import sys

filePath = sys.argv[1]

with xlrd.open_workbook(filePath) as wb:
    shNames = wb.sheet_names()
    for shName in shNames:
        sh = wb.sheet_by_name(shName)
        csvFile = shName + ".csv"
        with open(csvFile, 'wb') as f:
            c = csv.writer(f)
            for row in range(sh.nrows):
                sh_row = []
                cell = ''
                for item in sh.row_values(row):
                    if isinstance(item, float):
                        cell = item
                    else:
                        cell = item.encode('utf-8')
                    sh_row.append(cell)
                    cell = ''
                c.writerow(sh_row)
        print shName + ".csv File Created"
Python 2's csv module doesn't support Unicode input.
You are correctly encoding your input before writing it -- so you don't need codecs. Just open(csvFile, "wb") (the b is important) and pass that object to the writer:
with open(csvFile, "wb") as f:
    writer = csv.writer(f)
    writer.writerow([entry.encode("utf-8") for entry in row])
Alternatively, unicodecsv is a drop-in replacement for csv that handles encoding.
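For instance (a sketch, assuming unicodecsv's usual drop-in API), swapping the import and letting the library do the encoding could look like:

# unicodecsv mirrors the csv module's API but accepts unicode strings
import unicodecsv

with open(csvFile, "wb") as f:
    writer = unicodecsv.writer(f, encoding="utf-8")
    writer.writerow(row)  # cells are encoded to UTF-8 for you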
You are getting Ã© instead of é because you are mistaking UTF-8 encoded text for latin-1. This is probably because you're encoding twice, once as .encode("utf-8") and once as codecs.open.
By the way, the right way to check the type of an xlrd cell is to do cell.ctype == xlrd.ONE_OF_THE_TYPES.
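For example (a sketch; input.xls is a hypothetical file name, while XL_CELL_NUMBER and XL_CELL_TEXT are xlrd's real module-level type constants):

import xlrd

wb = xlrd.open_workbook("input.xls")  # hypothetical file name
sh = wb.sheet_by_index(0)
for rowx in range(sh.nrows):
    for cell in sh.row(rowx):  # sh.row() yields Cell objects with .ctype/.value
        if cell.ctype == xlrd.XL_CELL_NUMBER:
            value = cell.value                   # a float
        elif cell.ctype == xlrd.XL_CELL_TEXT:
            value = cell.value.encode("utf-8")   # unicode text -> UTF-8 bytes
        else:
            value = ''                           # empty, date, bool, error cells
        print(value)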
