I am trying to add 2 new columns to an existing file in the same program. The csv is generated by the previous function.
After looking at many answers here, I tried this, but it doesn't work. I couldn't find any answers that use csv.DictWriter; they were all about csv.writer. This just creates a new file with only these 2 columns in it. Can I get some help with this?
for me, sp in zip(meds, specs):
    print(me.text, sp.text)
    dict2 = {"Medicines": me.text, "Specialities": sp.text}
    with open(f'Infusion_t{zip_add}.csv', 'r') as read, \
            open(f'(Infusion_final{zip_add}.csv', 'a+', encoding='utf-8-sig', newline='') as f:
        reader = csv.reader(read)
        w = csv.DictWriter(f, dict2.keys())
        for row in reader:
            if not header_added:
                w.writeheader()
                header_added = True
            row.append(w.writerow(dict2))
You need to append the new columns to row, then write row to the output file. You don't need the dictionary or DictWriter.
You can also open the output file just once before the loop, and write the header there, rather than each time through the main loop.
with open(f'(Infusion_final{zip_add}.csv', 'w', encoding='utf-8-sig', newline='') as f:
    w = csv.writer(f)
    w.writerow(['col1', 'col2', 'col3', ..., 'Medicines', 'Specialities'])  # replace colX with the names of the original columns
    for me, sp in zip(meds, specs):
        print(me.text, sp.text)
        with open(f'Infusion_t{zip_add}.csv', 'r') as read:
            reader = csv.reader(read)
            for row in reader:
                row.append(me.text)
                row.append(sp.text)
                w.writerow(row)
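If you would rather stay with DictReader/DictWriter as the question mentions, a rough sketch could look like the following. It assumes the input file has a header row and that each (me, sp) pair lines up with one input row, which may not match your data; zip_add, meds and specs are the same names used in the question.

import csv

# Sketch only: assumes the input csv has a header row, and that meds/specs
# line up one-to-one with the data rows of the input file.
with open(f'Infusion_t{zip_add}.csv', 'r', newline='') as read, \
        open(f'Infusion_final{zip_add}.csv', 'w', encoding='utf-8-sig', newline='') as f:
    reader = csv.DictReader(read)
    fieldnames = reader.fieldnames + ['Medicines', 'Specialities']
    w = csv.DictWriter(f, fieldnames=fieldnames)
    w.writeheader()
    for row, (me, sp) in zip(reader, zip(meds, specs)):
        row['Medicines'] = me.text
        row['Specialities'] = sp.text
        w.writerow(row)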
I started learning Python and was wondering if there is a way to create multiple files from the unique values of a column. I know there are hundreds of ways of getting it done through pandas, but I am looking to have it done with built-in libraries only. I couldn't find a single example where it's done that way.
Here is the sample csv file data:
uniquevalue|count
a|123
b|345
c|567
d|789
a|123
b|345
c|567
Sample output file:
a.csv
uniquevalue|count
a|123
a|123
b.csv
b|345
b|345
I am struggling with looping over the unique values in a column and then printing them out. Can someone explain the logic of how to do it? That would be much appreciated. Thanks.
import csv
from collections import defaultdict

header = []
data = defaultdict(list)
DELIMITER = "|"

# Group the rows by the value in the first column
with open("inputfile.csv", newline="") as csvfile:
    reader = csv.reader(csvfile, delimiter=DELIMITER)
    for i, row in enumerate(reader):
        if i == 0:
            header = row
        else:
            key = row[0]
            data[key].append(row)

# Write one file per unique value, repeating the header in each
for key, value in data.items():
    filename = f"{key}.csv"
    with open(filename, "w", newline="") as f:
        writer = csv.writer(f, delimiter=DELIMITER)
        rows = [header] + value
        writer.writerows(rows)
import csv

with open('sample.csv', newline='') as csvfile:
    reader = csv.reader(csvfile, delimiter='|')
    next(reader)  # skip the header row
    for row in reader:
        # Append each row to the file named after its first-column value
        with open(f"{row[0]}.csv", 'a', newline='') as inner:
            writer = csv.writer(inner, delimiter='|')
            writer.writerow(row)
The task can also be done without using the csv module. The lines of the file are read, and with read_file.read().splitlines()[1:] the newline characters are stripped off while the header line of the csv file is skipped. With a set, a unique collection of the input data is created, which is then used to count the duplicates and to create the output files.
with open("unique_sample.csv", "r") as read_file:
items = read_file.read().splitlines()[1:]
for line in set(items):
with open(line[:line.index('|')] + '.csv', 'w') as output:
output.write((line + '\n') * items.count(line))
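If each per-value file should also begin with the header line (the a.csv sample above includes it), a small variation along these lines might work:

with open("unique_sample.csv", "r") as read_file:
    lines = read_file.read().splitlines()
    header, items = lines[0], lines[1:]
    for line in set(items):
        with open(line[:line.index('|')] + '.csv', 'w') as output:
            # write the header first, then every duplicate of this value
            output.write(header + '\n')
            output.write((line + '\n') * items.count(line))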
I have a csv with two columns of data. I want to extract the data from one column and write it to a text file, with single quotes around each element and the elements separated by commas. For example, I have this:
taxable_entity_id,id
45efc167-9254-406c-b5a8-6aef91a73dd9,331999
5ae97680-f489-4182-9dcb-eb07a73fab15,103507
00018d93-ae71-4367-a0da-f252cea4dfa2,32991
I want all the taxable_entity_ids in a text file like this
'45efc167-9254-406c-b5a8-6aef91a73dd9','5ae97680-f489-4182-9dcb-eb07a73fab15','00018d93-ae71-4367-a0da-f252cea4dfa2'
without any space between two elements, separated by a comma.
Edit:
This is what I tried:
import csv

with open("Taxable_entity_those_who_filed_G1_M_July_but_not_in_Aug.csv", 'r') as csv_File:
    reader = csv.DictReader(csv_File)
    with open("te_id.csv", 'w') as text_file:
        writer = csv.writer(text_file, quotechar='\'', quoting=csv.QUOTE_MINIMAL)
        for row in reader:
            writer.writerow(row["taxable_entity_id"])
            # print(row["taxable_entity_id"])

text_file.close()
csv_File.close()
and this is what I got:
4,5,e,f,c,1,6,7,-,9,2,5,4,-,4,0,6,c,-,b,5,a,8,-,6,a,e,f,9,1,a,7,3,d,d,9
5,a,e,9,7,6,8,0,-,f,4,8,9,-,4,1,8,2,-,9,d,c,b,-,e,b,0,7,a,7,3,f,a,b,1,5
0,0,0,1,8,d,9,3,-,a,e,7,1,-,4,3,6,7,-,a,0,d,a,-,f,2,5,2,c,e,a,4,d,f,a,2
You were close. Since you want one single line in the output file, you should write it all at once using a generator expression:
import csv

with open("Taxable_entity_those_who_filed_G1_M_July_but_not_in_Aug.csv", 'r') as csv_File:
    reader = csv.DictReader(csv_File)
    with open("te_id.csv", 'w') as text_file:
        # use QUOTE_ALL to force the quoting
        writer = csv.writer(text_file, quotechar='\'', quoting=csv.QUOTE_ALL)
        writer.writerow((row["taxable_entity_id"] for row in reader))
And do not call close(), since you have (correctly) used with.
Try this:
import pandas as pd

df = pd.read_csv('nameoffile.csv', delimiter=',')
ids = df['taxable_entity_id'].values

with open('newfile.txt', 'w') as f:
    # quote each id and join with commas (no trailing separator)
    f.write(','.join("'" + str(i) + "'" for i in ids))
It seems a little odd that you basically want a one-row csv file for the taxable_entity_ids, but it's certainly possible. You also don't need to explicitly close() the open files because the with context manager will do it for you automatically.
You also need to open the CSV file with newline='' as shown in all the examples in the csv module's documentation.
Lastly, if you want the all the fields to be quoted you need to use quoting=csv.QUOTE_ALL instead of quoting=csv.QUOTE_MINIMAL.
import csv

inp_filename = "Taxable_entity_those_who_filed_G1_M_July_but_not_in_Aug.csv"
outp_filename = "te_id.csv"

with open(outp_filename, 'w', newline='') as text_file, \
        open(inp_filename, 'r', newline='') as csv_File:
    reader = csv.DictReader(csv_File)
    writer = csv.writer(text_file, quotechar="'", quoting=csv.QUOTE_ALL)
    taxable_entity_ids = (row["taxable_entity_id"] for row in reader)
    writer.writerow(taxable_entity_ids)

print('done')
import csv

with open("somecities.csv") as f:
    reader = csv.DictReader(f)
    data = [r for r in reader]
Contents of somecities.csv:
Country,Capital,CountryPop,AreaSqKm
Canada,Ottawa,35151728,9984670
USA,Washington DC,323127513,9833520
Japan,Tokyo,126740000,377972
Luxembourg,Luxembourg City,576249,2586
New to Python and I'm trying to read and append to a csv file. I've spent some time experimenting with responses to similar questions with no luck, which is why I believe the code above to be pretty useless.
What I am essentially trying to achieve is to store each row from the CSV in memory using a dictionary, with the country names as keys, and values being tuples containing the other information in the table in the sequence they are in within the CSV file.
And from there I am trying to add three more cities to the csv(Country, Capital, CountryPop, AreaSqKm) and view the updated csv. How should I go about doing all of this?
The desired additions to the updated csv are:
Brazil, Brasília, 211224219, 8358140
China, Beijing, 1403500365, 9388211
Belgium, Brussels, 11250000, 30528
EDIT:
import csv

with open("somecities.csv", "r") as csvinput:
    with open("somecities_update.csv", "w") as csvresult:
        writer = csv.writer(csvresult, lineterminator='\n')
        reader = csv.reader(csvinput)

        all = []
        headers = next(reader)
        for row in reader:
            all.append(row)

        # Now we write to the new file
        writer.writerow(headers)
        for record in all:
            writer.writerow(record)

        #row.append(Brazil, Brasília, 211224219, 8358140)
        #row.append(China, Beijing, 1403500365, 9388211)
        #row.append(Belgium, Brussels, 11250000, 30528)
So assuming you can use pandas for this I would go about it this way:
import pandas as pd
df1 = pd.read_csv('your_initial_file.csv', index_col='Country')
df2 = pd.read_csv('your_second_file.csv', index_col='Country')
dfs = [df1,df2]
final_df = pd.concat(dfs)
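To actually view and save the combined result (using the same output name as the question's edit), you could add:

final_df.to_csv('somecities_update.csv')
print(final_df)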
DictReader will only represent each row as a dictionary, eg:
{
    "Country": "Canada",
    ...,
    "AreaSqKm": "9984670"
}
If you want to store the whole CSV as a dictionary you'll have to create your own:
import csv

all_data = {}
with open("somecities.csv", "r", newline="") as f:
    reader = csv.DictReader(f)
    for row in reader:
        # Key = country, values = tuple containing the rest of the data.
        all_data[row["Country"]] = (row["Capital"], row["CountryPop"], row["AreaSqKm"])

# Add the new cities to the dictionary here...

# To write the data to a new CSV
with open("newcities.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for key, values in all_data.items():
        writer.writerow([key] + list(values))
As others have said, though, the pandas library could be a good choice. Check out its read_csv and to_csv functions.
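For example, a rough pandas sketch, assuming the column names shown above and writing to the somecities_update.csv name used in the question's edit, might look like this:

import pandas as pd

df = pd.read_csv("somecities.csv", index_col="Country")

# The three new rows from the question, in the same column order
new = pd.DataFrame(
    [["Brasília", 211224219, 8358140],
     ["Beijing", 1403500365, 9388211],
     ["Brussels", 11250000, 30528]],
    index=["Brazil", "China", "Belgium"],
    columns=["Capital", "CountryPop", "AreaSqKm"],
)
new.index.name = "Country"

updated = pd.concat([df, new])
updated.to_csv("somecities_update.csv")
print(updated)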
Just another idea: create a list, append the new values to it via list constructs, and write it out, as below (not tested):
import csv

with open("somecities.csv", "r") as csvinput:
    with open("result.csv", "w") as csvresult:
        writer = csv.writer(csvresult, lineterminator='\n')
        reader = csv.reader(csvinput)

        all = []
        all.append(next(reader))  # keep the header row
        for row in reader:
            all.append(row)       # keep the existing rows

        # append the three new cities
        all.append(['Brazil', 'Brasília', '211224219', '8358140'])
        all.append(['China', 'Beijing', '1403500365', '9388211'])
        all.append(['Belgium', 'Brussels', '11250000', '30528'])

        writer.writerows(all)
The simplest form I see, tested in Python 3.6.
Opening a file with the 'a' parameter allows you to append to the end of the file instead of simply overwriting the existing content. Try that.
>>> with open("somecities.csv", "a") as fd:
...     fd.write("Brazil, Brasília, 211224219, 8358140\n")
OR
#!/usr/bin/python3.6
import csv

fields = ['Brazil', 'Brasília', '211224219', '8358140']
with open(r'somecities.csv', 'a') as f:
    writer = csv.writer(f)
    writer.writerow(fields)
So far I have been trying to copy specific rows, including the headers, from the original csv file to a new one. However, once I run my code it copies a total mess, creating a huge document.
This is one of the options I have tried so far, which seems to be the closest to the solution:
import csv

with open('D:/test.csv', 'r') as f, open('D:/out.csv', 'w') as f_out:
    reader = csv.DictReader(f)
    writer = csv.writer(f_out)
    for row in reader:
        if row["ICLEVEL"] == "1":
            writer.writerow(row)
The thing is that I have to copy only those rows where value of "ICLEVEL"(Header name) is equal to "1".
Note: test.csv is very huge file and I cannot hardcode all header names in the writer.
Any demonstration of a pythonic way of doing this is greatly appreciated. Thanks.
writer.writerow expects a sequence (a tuple or list). You can use DictWriter which expects a dict.
import csv

with open('D:/test.csv', 'r') as f, open('D:/out.csv', 'w') as f_out:
    reader = csv.DictReader(f)
    writer = csv.DictWriter(f_out, fieldnames=reader.fieldnames)
    writer.writeheader()  # For writing header
    for row in reader:
        if row['ICLEVEL'] == '1':
            writer.writerow(row)
Your row is a dictionary. CSV writer cannot write dictionaries. Select the values from the dictionary and write just them:
writer.writerow(reader.fieldnames)

for row in reader:
    if row["ICLEVEL"] == "1":
        values = [row[field] for field in reader.fieldnames]
        writer.writerow(values)
I would actually use Pandas, not a CSV reader:
import pandas as pd

df = pd.read_csv("D:/test.csv")
newdf = df[df["ICLEVEL"] == 1]
newdf.to_csv("D:/out.csv", index=False)
The code is much more compact.
I have 2 files named input.csv (composed of one column, count) and output.csv (composed of one column, id).
I want to paste my count column in output.csv, just after the id column.
Here is my snippet :
with open ("/home/julien/input.csv", "r") as csvinput:
with open ("/home/julien/excel/output.csv", "a") as csvoutput:
writer = csv.writer(csvoutput, delimiter = ";")
for row in csv.reader(csvinput, delimiter = ";"):
if row[0] != "":
result = row[0]
else:
result = ""
row.append(result)
writer.writerow(row)
But it doesn't work.
I've been searching the problem for many hours but I've got no solution. Would you have any tricks to solve my problem?
Thanks! Julien
You need to work with three files, two for reading and one for writing.
This should work.
import csv

in_1_name = "/home/julien/input.csv"
in_2_name = "/home/julien/excel/output.csv"
out_name = "/home/julien/excel/merged.csv"

with open(in_1_name) as in_1, open(in_2_name) as in_2, open(out_name, 'w') as out:
    reader1 = csv.reader(in_1, delimiter=";")
    reader2 = csv.reader(in_2, delimiter=";")
    writer = csv.writer(out, delimiter=";")
    for row1, row2 in zip(reader1, reader2):
        if row1[0] and row2[0]:
            writer.writerow([row1[0], row2[0]])
You write the row for each column:

    row.append(result)
    writer.writerow(row)

Dedent the last line to write only once:

    row.append(result)
writer.writerow(row)
Open both files for input.
Open a new file for output.
In a loop, read a line from each, format an output line, and write it to the output file.
Close all the files.
Programmatically copy your output file on top of the input file "output.csv".
Done. A rough sketch of these steps is below.
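For instance, a minimal sketch of those steps, assuming semicolon-delimited files as in the question and a hypothetical temporary file for the merged output before it is copied over output.csv:

import csv
import shutil

in_name = "/home/julien/input.csv"              # one column: count
out_name = "/home/julien/excel/output.csv"      # one column: id
tmp_name = "/home/julien/excel/merged_tmp.csv"  # hypothetical temporary file

# Open both files for input and a new file for output
with open(in_name, newline='') as counts, \
        open(out_name, newline='') as ids, \
        open(tmp_name, 'w', newline='') as merged:
    reader_counts = csv.reader(counts, delimiter=';')
    reader_ids = csv.reader(ids, delimiter=';')
    writer = csv.writer(merged, delimiter=';')

    # Read a line from each, format an output line, and write it out
    for id_row, count_row in zip(reader_ids, reader_counts):
        writer.writerow([id_row[0], count_row[0]])

# Copy the merged result on top of output.csv
shutil.copyfile(tmp_name, out_name)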
If you are given two tables, merging them by the first column of each is very easy. With my library pyexcel, you do the merge just like merging tables:
>>> from pyexcel import Reader,Writer
>>> f1=Reader("input.csv", delimiter=';')
>>> f2=Reader("output.csv", delimiter=';')
>>> columns = [f1.column_at(0), f2.column_at(0)]
>>> f3=Writer("merged.csv", delimiter=';')
>>> f3.write_columns(columns)
>>> f3.close()