Importing/Exporting a nested dictionary from a CSV file - python

So I have a CSV file with the data arranged like this:
X,a,1,b,2,c,3
Y,a,1,b,2,c,3,d,4
Z,l,2,m,3
I want to import the CSV to create a nested dictionary that looks like this:
data = {'X' : {'a' : 1, 'b' : 2, 'c' : 3},
        'Y' : {'a' : 1, 'b' : 2, 'c' : 3, 'd' : 4},
        'Z' : {'l' : 2, 'm' : 3}}
After updating the dictionary in the program I wrote (I got that part figured out), I want to be able to export the dictionary back to the same CSV file, overwriting/updating it. However, I want it to stay in the same format as the previous CSV file so that I can import it again.
I have been playing around with the import and have this so far:
import csv

data = {}
with open('userdata.csv', 'r') as f:
    reader = csv.reader(f)
    for row in reader:
        data[row[0]] = {row[i] for i in range(1, len(row))}
But this doesn't work as things are not arranged correctly. Some numbers are subkeys to other numbers, letters are out of place, etc. I haven't even gotten to the export part yet. Any ideas?

Since you're not interested in preserving order, something relatively simple should work:
import csv

# import
data = {}
with open('userdata.csv', 'r') as f:
    reader = csv.reader(f)
    for row in reader:
        a = iter(row[1:])
        data[row[0]] = dict(zip(a, a))

# export
with open('userdata_exported.csv', 'w') as f:
    writer = csv.writer(f)
    for key, values in data.items():
        row = [key] + [value for item in values.items() for value in item]
        writer.writerow(row)
The latter could be done a little more efficiently by making only a single call to the csv.writer's writerows() method and passing it a generator expression.
# export2
with open('userdata_exported.csv', 'w') as f:
    writer = csv.writer(f)
    rows = ([key] + [value for item in values.items() for value in item]
            for key, values in data.items())
    writer.writerows(rows)

You can use the grouper recipe from itertools:
import itertools

def grouper(iterable, n, fillvalue=None):
    "Collect data into fixed-length chunks or blocks"
    # grouper('ABCDEFG', 3, 'x') --> ABC DEF Gxx
    args = [iter(iterable)] * n
    return itertools.izip_longest(fillvalue=fillvalue, *args)
(On Python 3, use itertools.zip_longest; izip_longest is the Python 2 name.)
This will group your data into the a1/b2/c3 pairs you want. So you can do data[row[0]] = {k: v for k, v in grouper(row[1:], 2)} in your loop.
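Put together with the csv module, the import might look like this. A sketch assuming Python 3 (so the recipe uses itertools.zip_longest), with sample rows standing in for your userdata.csv:

```python
import csv
import io
import itertools

def grouper(iterable, n, fillvalue=None):
    "Collect data into fixed-length chunks or blocks"
    args = [iter(iterable)] * n
    return itertools.zip_longest(*args, fillvalue=fillvalue)

# sample data standing in for userdata.csv
csv_text = "X,a,1,b,2,c,3\nY,a,1,b,2,c,3,d,4\nZ,l,2,m,3\n"

data = {}
for row in csv.reader(io.StringIO(csv_text)):
    # pair up the key/value columns and convert the values to int
    data[row[0]] = {k: int(v) for k, v in grouper(row[1:], 2)}

print(data['X'])  # {'a': 1, 'b': 2, 'c': 3}
```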

from collections import defaultdict

data_lines = """X,a,1,b,2,c,3
Y,a,1,b,2,c,3,d,4
Z,l,2,m,3""".splitlines()

data = defaultdict(dict)
for line in data_lines:
    # you should probably add guards against invalid data, empty lines etc.
    main_key, sep, tail = line.partition(',')
    items = [item.strip() for item in tail.split(',')]
    items = zip(items[::2], map(int, items[1::2]))
    # data[main_key] = {key: value for key, value in items}
    data[main_key] = dict(items)

print(dict(data))
# {'Y': {'a': 1, 'c': 3, 'b': 2, 'd': 4},
#  'X': {'a': 1, 'c': 3, 'b': 2},
#  'Z': {'m': 3, 'l': 2}}

I'm lazy, so I might do something like this:
import csv

data = {}
with open('userdata.csv', 'rb') as f:
    reader = csv.reader(f)
    for row in reader:
        data[row[0]] = dict(zip(row[1::2], map(int, row[2::2])))
which works because row[1::2] gives every other element starting at index 1, and row[2::2] every other element starting at index 2. zip pairs those elements up, and then we pass the pairs to dict. This gives
{'Y': {'a': 1, 'c': 3, 'b': 2, 'd': 4},
 'X': {'a': 1, 'c': 3, 'b': 2},
 'Z': {'m': 3, 'l': 2}}
(Note that I changed your open to use 'rb', which is right for Python 2: if you're using 3, you want 'r', newline='' instead.)
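For the export half of the question, the same idea works in reverse: interleave each inner dict's keys and values back into a flat row. A sketch assuming Python 3 (where plain dicts preserve insertion order, so the file round-trips in the same layout):

```python
import csv

data = {'X': {'a': 1, 'b': 2, 'c': 3},
        'Y': {'a': 1, 'b': 2, 'c': 3, 'd': 4},
        'Z': {'l': 2, 'm': 3}}

with open('userdata.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    for key, inner in data.items():
        row = [key]
        for k, v in inner.items():
            row.extend([k, v])  # flatten back to key,value,key,value,...
        writer.writerow(row)
```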

Related

Creating a dictionary of dictionaries from a CSV file

I am trying to create a dictionary of dictionaries in Python from a CSV file, the file looks something like this:
Column 1,Column 2,Column 3
A,flower,12
A,sun,13
B,cloud,14
B,water,34
C,rock,12
And I am trying to get a dictionary of dictionaries that looks like this:
dict = {
    'A': {'flower': 12, 'sun': 13},
    'B': {'cloud': 14, 'water': 34},
    'C': {'rock': 12}
}
The code I tried so far is as follows:
import csv

with open('file.csv', 'r') as csvFile:
    rows = csv.reader(csvFile)
    d = dict()
    for row in rows:
        head, tail = row[0], row[1:]
        d[head] = dict(zip(tail[0:], tail[1:]))
print(d)
but it's not working well as I am getting this result:
dict = {
    'A': {'sun': 13},
    'B': {'water': 34},
    'C': {'rock': 12}
}
You need to update your d[head] every iteration, not replace it:
import csv

with open('file.csv', 'r') as csvFile:
    rows = csv.reader(csvFile)
    d = dict()
    for row in rows:
        head, name, value = row[0], row[1], row[2]
        if head not in d:
            d[head] = {}  # {} is like dict() but faster
        d[head][name] = value
print(d)
Or with defaultdict to be more concise:
import csv
from collections import defaultdict

with open('file.csv', 'r') as csvFile:
    rows = csv.reader(csvFile)
    d = defaultdict(dict)
    for row in rows:
        head, name, value = row[0], row[1], row[2]
        d[head][name] = value
print(d)  # or print(dict(d))
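Both versions store the values as strings, while the desired output has integers; converting as you store is a one-line change. A sketch, with sample rows standing in for file.csv:

```python
import csv
import io
from collections import defaultdict

# sample rows standing in for file.csv
csv_text = "A,flower,12\nA,sun,13\nB,cloud,14\nB,water,34\nC,rock,12\n"

d = defaultdict(dict)
for head, name, value in csv.reader(io.StringIO(csv_text)):
    d[head][name] = int(value)  # convert the numeric column as we go

print(dict(d))
# {'A': {'flower': 12, 'sun': 13}, 'B': {'cloud': 14, 'water': 34}, 'C': {'rock': 12}}
```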

how to export print result to excel using python (for loop)

Below is my code; I need to generate the result into Excel using the xlsxwriter module:
a = 3
mylist = [{'a': 1, 'b': 2}, {'c': 3, 'd': 4}, {'e': 5, 'f': 6}]
for i, l in zip(mylist, range(a)):
    print('\r')
    print('Result : {0}'.format(l))
    for key, value in i.items():
        print(key, ':', value)
You can use the pandas library:
import pandas as pd

a = 3
mylist = [{'a': 1, 'b': 2}, {'c': 3, 'd': 4}, {'e': 5, 'f': 6}]

# Create an empty dataframe
df = pd.DataFrame()
for i, l in zip(mylist, range(a)):
    result = 'Result : ' + str(l)
    for key, value in i.items():
        # Append the result and key-value pairs to the dataframe
        # (note: DataFrame.append was deprecated and removed in pandas 2.0)
        df = df.append({'Result': result, 'Key': key, 'Value': value}, ignore_index=True)

# Save the dataframe to an Excel file
df.to_excel('output.xlsx', index=False)
I don't know which file extension you mean by Excel, but you can write the output to a .csv file.
It should go something like this:
with open('out.csv', 'w') as f:
    for key, value in i.items():  # i being one of your dictionaries
        print(key, value, sep=',', file=f)
Or you can convert the dictionary into a pandas DataFrame and export it as an .xlsx file using the to_excel() function built into pandas: https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.to_excel.html
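Since DataFrame.append was removed in pandas 2.0, a version that collects the rows first and builds the DataFrame once might look like this. A sketch: the 'Result : N' labels mirror the question's print output, and to_excel needs an engine such as openpyxl or xlsxwriter installed:

```python
import pandas as pd

mylist = [{'a': 1, 'b': 2}, {'c': 3, 'd': 4}, {'e': 5, 'f': 6}]

# Flatten into (result label, key, value) rows, then build the frame once
rows = [{'Result': 'Result : {0}'.format(i), 'Key': k, 'Value': v}
        for i, d in enumerate(mylist)
        for k, v in d.items()]
df = pd.DataFrame(rows)
df.to_excel('output.xlsx', index=False)
```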

Using csv.DictReader with a slice

I was wondering whether there was a way to read all columns in a row except the first one as ints using csv.DictReader, kind of like this:
filename = sys.argv[1]
database = []
with open(filename) as file:
    reader = csv.DictReader(file)
    for row in reader:
        row[1:] = int(row[1:])
        database.append(row)
I know this isn't a correct way to do this as it gives out the error of being unable to hash slices. I have a way to circumvent having to do this at all, but for future reference, I'm curious whether, using slices or not, I can selectively interact with columns in a row without hardcoding each one?
You can do it by using the keys() dictionary method to get a list of the keys in each dictionary and slicing that for doing the conversion:
import csv
from pprint import pprint
import sys

filename = sys.argv[1]
database = []
with open(filename) as file:
    reader = csv.DictReader(file)
    for row in reader:
        for key in list(row.keys())[1:]:
            row[key] = int(row[key])
        database.append(row)
pprint(database)
Output:
[{'name': 'John', 'number1': 1, 'number2': 2, 'number3': 3, 'number4': 4},
{'name': 'Alex', 'number1': 4, 'number2': 3, 'number3': 2, 'number4': 1},
{'name': 'James', 'number1': 1, 'number2': 3, 'number3': 2, 'number4': 4}]
Use this:
import csv

filename = 'test.csv'
database = []
with open(filename) as file:
    reader = csv.DictReader(file)
    for row in reader:
        new_d = {}  # Create new dictionary to be appended into database
        for i, (k, v) in enumerate(row.items()):  # i = index, k = key, v = value
            new_d[k] = int(v) if i > 0 else v  # If i > 0, add int(v), else add v as-is
        database.append(new_d)  # Append to database
print(database)
test.csv:
Letter,Num1,Num2
A,123,456
B,789,012
C,345,678
D,901,234
E,567,890
Output:
[{'Letter': 'A', 'Num1': 123, 'Num2': 456},
 {'Letter': 'B', 'Num1': 789, 'Num2': 12},
 {'Letter': 'C', 'Num1': 345, 'Num2': 678},
 {'Letter': 'D', 'Num1': 901, 'Num2': 234},
 {'Letter': 'E', 'Num1': 567, 'Num2': 890}]
(Note that '012' becomes 12 once converted to int.)

create a dictionary with a .txt file in python

I have a list of pairs like pair_users = [('a','b'), ('a','c'), ('e','d'), ('e','f')]. When I saved it I used this code:
with open('pair_users.txt', 'w') as f:
    f.write(','.join('%s' % (x,) for x in pair_users))
Then, when I want to use it in another notebook to create a dictionary which will look like this {'a': ['b', 'c'], 'b': [], 'c': [], 'd': [], 'e': ['d', 'f'], 'f': []}
to create that dictionary I am using this code:
graph = {}
for k, v in pair_users:
    graph.setdefault(v, [])
    graph.setdefault(k, []).append(v)
graph
but my problem is that when I run the graph code after opening the saved file with pair_users = open('/content/pair_users.txt', 'r'), the result I get is an empty dictionary {}.
When I use the code to create the graph without saving the list as a .txt file, I get the right answer; my problem only appears after I save it and open it again.
Thanks in advance for your ideas!
Apart from using pickle or parsing the string as proposed in the other answers, you can use some "more universal format", e.g. JSON:
import json

pair_users = [('a','b'), ('a','c'), ('e','d'), ('e','f')]

with open('pair_users.txt', 'w') as f:
    json.dump(pair_users, f)

with open("pair_users.txt") as f:
    pair_users = json.load(f)

graph = {}
for k, v in pair_users:
    graph.setdefault(v, [])
    graph.setdefault(k, []).append(v)
print(graph)
The issue is, pair_users is a str type when it's read back in.
Use ast.literal_eval to convert the string to a tuple.
from ast import literal_eval

pair_users = open('test.txt', 'r')
pair_users = literal_eval(pair_users.read())

graph = {}
for k, v in pair_users:
    graph.setdefault(v, [])
    graph.setdefault(k, []).append(v)
graph
[out]:
{'b': [], 'a': ['b', 'c'], 'c': [], 'd': [], 'e': ['d', 'f'], 'f': []}
The result you get when you read the file is just a single string; you cannot iterate a string as k, v pairs.
You could either parse the string into tuples yourself, or go with an option like pickle.
With pickle it is:
import pickle

with open('pair_users.pkl', 'wb') as f:
    pickle.dump(pair_users, f)

with open('pair_users.pkl', 'rb') as f:
    pair_tuple = pickle.load(f)

print(pair_tuple)

Iterating through and adding multiple Json files to a dictionary

So my question is this. I have these JSON files stored in a list called json_list
['9.json', '8.json', '7.json', '6.json', '5.json', '4.json', '3.json', '2.json', '10.json', '1.json']
Each of these files contains a dictionary with an (ID NUMBER: Rating).
This is my code below. The idea is to store all of the keys and values of these files into a dictionary so it will be easier to search through. I've separated the keys and values so it will be easier to add into the dictionary. The PROBLEM is that this iteration only ends up with the contents of '1.json'. I'm not sure why it's not going through all 10.
for i in range(len(json_list)):
    f = open(os.path.join("data", json_list[i]), encoding='utf-8')
    file = f.read()
    f.close()
    data = json.loads(file)
    keys = data.keys()
    values = data.values()
Here:
data = json.loads(file)
keys = data.keys()
values = data.values()
You're resetting the value for keys and values instead of appending to it.
Maybe try appending them instead, with keys = [] and values = [] initialised once before the loop (the dictionary keys MUST be unique in each file or else you'll be overwriting data):
data = json.loads(file)
keys += list(data.keys())
values += list(data.values())
Or better yet just append the dictionary (The dictionary keys MUST be unique in each file or else you'll be overwriting data):
all_data = {}
for i in range(len(json_list)):
    f = open(os.path.join("data", json_list[i]), encoding='utf-8')
    file = f.read()
    f.close()
    data = json.loads(file)
    all_data = {**all_data, **data}
Working example:
import json

ds = ['{"1":"a","2":"b","3":"c"}',
      '{"aa":"11","bb":"22","cc":"33", "dd":"44"}',
      '{"foo":"bar","eggs":"spam","xxx":"yyy"}']
all_data = {}
for d in ds:
    data = json.loads(d)
    all_data = {**all_data, **data}
print(all_data)
Output:
{'1': 'a', '2': 'b', '3': 'c', 'aa': '11', 'bb': '22', 'cc': '33', 'dd': '44', 'foo': 'bar', 'eggs': 'spam', 'xxx': 'yyy'}
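An equivalent merge that avoids rebuilding the dictionary on every iteration is dict.update, which modifies all_data in place:

```python
import json

ds = ['{"1":"a","2":"b","3":"c"}', '{"aa":"11","bb":"22"}']
all_data = {}
for d in ds:
    all_data.update(json.loads(d))  # same result as all_data = {**all_data, **data}
print(all_data)  # {'1': 'a', '2': 'b', '3': 'c', 'aa': '11', 'bb': '22'}
```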
If the keys are not unique try appending the dictionaries to a list of dictionaries like this:
import json

ds = ['{"1":"a","2":"b","3":"c"}',
      '{"aa":"11","bb":"22","cc":"33", "dd":"44"}',
      '{"dd":"bar","eggs":"spam","xxx":"yyy"}']
all_dicts = []
for d in ds:
    data = json.loads(d)
    all_dicts.append(data)
print(all_dicts)
# to access a key
print(all_dicts[0]["1"])
Output:
[{'1': 'a', '2': 'b', '3': 'c'}, {'aa': '11', 'bb': '22', 'cc': '33', 'dd': '44'}, {'dd': 'bar', 'eggs': 'spam', 'xxx': 'yyy'}]
a
