I have a list of pairs like pair_users = [('a','b'), ('a','c'), ('e','d'), ('e','f')]. To save it I used this code:
with open('pair_users.txt', 'w') as f:
    f.write(','.join('%s' % (x,) for x in pair_users))
Then I want to use it in another notebook to create a dictionary that looks like this: {'a': ['b', 'c'], 'b': [], 'c': [], 'd': [], 'e': ['d', 'f'], 'f': []}.
To create that dictionary I am using this code:
graph = {}
for k, v in pair_users:
    graph.setdefault(v, [])
    graph.setdefault(k, []).append(v)
graph
My problem is that when I open the saved file with pair_users = open('/content/pair_users.txt', 'r') and then run the graph code, the result I get is an empty dictionary {}.
When I use the code to create the graph without saving the list as a txt file first, I get the right answer; the problem only appears after I save the list and open the file again.
Thanks in advance for your ideas!
Apart from using pickle or parsing the string as proposed in the other answers, you can use a more universal format such as JSON:
import json

pair_users = [('a','b'), ('a','c'), ('e','d'), ('e','f')]

with open('pair_users.txt', 'w') as f:
    json.dump(pair_users, f)

with open("pair_users.txt") as f:
    pair_users = json.load(f)

graph = {}
for k, v in pair_users:
    graph.setdefault(v, [])
    graph.setdefault(k, []).append(v)
print(graph)
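One detail worth noting: JSON has no tuple type, so after the round trip the pairs come back as lists, which still unpack fine as k, v in the loop above:
print(pair_users)
# [['a', 'b'], ['a', 'c'], ['e', 'd'], ['e', 'f']]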
The issue is that pair_users is just a single str when it's read back in.
Use ast.literal_eval to convert the string back into tuples.
from ast import literal_eval

pair_users = open('test.txt', 'r')
pair_users = literal_eval(pair_users.read())

graph = {}
for k, v in pair_users:
    graph.setdefault(v, [])
    graph.setdefault(k, []).append(v)
graph
[out]:
{'b': [], 'a': ['b', 'c'], 'c': [], 'd': [], 'e': ['d', 'f'], 'f': []}
The result you get when you read the file is just a single string, and you cannot iterate over that string as k, v pairs.
You could either parse the string into tuples yourself or go with an option like pickle.
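If you prefer the hand-parsing route, here is a minimal sketch assuming the file was written with the ','.join('%s' % (x,) ...) line from the question, so it holds one long string like ('a', 'b'),('a', 'c'),...:
with open('pair_users.txt') as f:
    text = f.read()
# drop the outer parentheses, split on the "),(" separators, then strip the quotes
pair_users = [tuple(field.strip("'") for field in part.split(', '))
              for part in text.strip('()').split('),(')]
print(pair_users)  # [('a', 'b'), ('a', 'c'), ('e', 'd'), ('e', 'f')]
This is fragile (any change to the saved format breaks it), which is why literal_eval or a proper serializer is usually the better option.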
With pickle it looks like this:
import pickle

with open('pair_users.pkl', 'wb') as f:
    pickle.dump(pair_users, f)

with open('pair_users.pkl', 'rb') as f:
    pair_tuple = pickle.load(f)

print(pair_tuple)
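Once loaded, pair_tuple is the original list again, so the graph-building loop from the question works on it unchanged:
graph = {}
for k, v in pair_tuple:
    graph.setdefault(v, [])
    graph.setdefault(k, []).append(v)
print(graph)  # {'b': [], 'a': ['b', 'c'], 'c': [], 'd': [], 'e': ['d', 'f'], 'f': []}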
Related
I've tried several methods to read this file and turn it into a dictionary, but I'm getting a lot of errors.
I tried the following method, but it did not work; I got a "not enough values to unpack" error.
d = {}
with open("file.txt") as f:
    for line in f:
        (key, val) = line.split()
        d[int(key)] = val
I want to read and convert it to this:
{123: ['Ahmed Rashed', 'a', '1000.0'], 456: ['Noof Khaled', 'c', '0.0'], 777: ['Ali Mahmood', 'a', '4500.0']}
line.split() splits on whitespace, but your fields are comma-separated, so split on commas instead.
d = {}
with open("file.txt") as f:
    for line in f:
        parts = line.rstrip('\n').split(',')
        d[int(parts[0])] = parts[1:]
Using csv.reader to read the file and split it into its fields:
import csv

with open("file.txt") as f:
    d = {
        int(num): data
        for num, *data in csv.reader(f)
    }
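For reference, the num, *data unpacking splits each row into its first field and a list of the remaining fields, which is exactly the shape of the expected output; a quick illustration with one hypothetical row:
num, *data = ['123', 'Ahmed Rashed', 'a', '1000.0']
print(int(num), data)  # 123 ['Ahmed Rashed', 'a', '1000.0']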
So my question is this. I have the names of these JSON files stored in a list called json_list:
['9.json',
'8.json',
'7.json',
'6.json',
'5.json',
'4.json',
'3.json',
'2.json',
'10.json',
'1.json',]
Each of these files contains a dictionary of (ID NUMBER: Rating) pairs.
This is my code below. The idea is to store all of the keys and values from these files in one dictionary so it will be easier to search through. I've separated the keys and values so they will be easier to add to the dictionary. The PROBLEM is that this iteration only ends up with the data from the file '1.json'. I'm not sure why it's not going through all 10.
for i in range(len(json_list)):
    f = open(os.path.join("data", json_list[i]), encoding='utf-8')
    file = f.read()
    f.close()
    data = json.loads(file)
    keys = data.keys()
    values = data.values()
Here:
data = json.loads(file)
keys = data.keys()
values = data.values()
You're resetting the values of keys and values on every iteration instead of appending to them.
Maybe try appending them, something like this (the dictionary keys MUST be unique in each file, or else you'll be overwriting data):
# initialise keys = [] and values = [] before the loop, then inside it:
data = json.loads(file)
keys += list(data.keys())
values += list(data.values())
Or, better yet, just merge each file's dictionary into one (again, the keys MUST be unique in each file or else you'll be overwriting data):
all_data = {}
for i in range(len(json_list)):
    f = open(os.path.join("data", json_list[i]), encoding='utf-8')
    file = f.read()
    f.close()
    data = json.loads(file)
    all_data = {**all_data, **data}
Working example:
import json

ds = ['{"1":"a","2":"b","3":"c"}', '{"aa":"11","bb":"22","cc":"33", "dd":"44"}', '{"foo":"bar","eggs":"spam","xxx":"yyy"}']
all_data = {}
for d in ds:
    data = json.loads(d)
    all_data = {**all_data, **data}
print(all_data)
Output:
{'1': 'a', '2': 'b', '3': 'c', 'aa': '11', 'bb': '22', 'cc': '33', 'dd': '44', 'foo': 'bar', 'eggs': 'spam', 'xxx': 'yyy'}
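A small note on the merge: all_data.update(data) does the same thing in place without rebuilding the dictionary on every iteration, so the loop can also be written as:
all_data = {}
for d in ds:
    all_data.update(json.loads(d))  # same result as {**all_data, **data}
print(all_data)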
If the keys are not unique, try appending the dictionaries to a list of dictionaries like this:
import json

ds = ['{"1":"a","2":"b","3":"c"}', '{"aa":"11","bb":"22","cc":"33", "dd":"44"}', '{"dd":"bar","eggs":"spam","xxx":"yyy"}']
all_dicts = []
for d in ds:
    data = json.loads(d)
    all_dicts.append(data)
print(all_dicts)

# to access a key
print(all_dicts[0]["1"])
Output:
[{'1': 'a', '2': 'b', '3': 'c'}, {'aa': '11', 'bb': '22', 'cc': '33', 'dd': '44'}, {'dd': 'bar', 'eggs': 'spam', 'xxx': 'yyy'}]
a
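If you do need to search across the list of dictionaries afterwards, a small hypothetical helper like this works:
def find_value(all_dicts, key):
    # return the value from the first dictionary that contains key, else None
    for d in all_dicts:
        if key in d:
            return d[key]
    return None

print(find_value(all_dicts, "eggs"))  # spam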
This is the txt file content I have:
salesUnits:500
priceUnit:11
fixedCosts:2500
variableCostUnit:2
I need to create a dictionary in Python that reads the file and uses salesUnits etc. as the keys and the numbers as the values. The code I have so far will only print the variable cost per unit:
with open("myInputFile.txt") as f:
content = f.readlines()
myDict = {}
for line in content:
myDict=line.rstrip('\n').split(":")
print(myDict)
How can I fix the code so that all key and value pairs show up? Thank you!
You're overwriting myDict each time you call myDict=line.rstrip('\n').split(":"). The pattern to add to a dictionary is dictionary[key] = value.
myDict = {}
with open("myInputFile.txt") as f:
    for line in f:
        key_value = line.rstrip('\n').split(":")
        if len(key_value) == 2:
            myDict[key_value[0]] = key_value[1]
print(myDict)
outputs
{'fixedCosts': '2500', 'priceUnit': '11', 'variableCostUnit': '2', 'salesUnits': '500'}
Using a simple dict comprehension will handle this:
with open('testinput.txt', 'r') as infile:
    dict = {
        line.strip().split(':')[0]:
            int(line.strip().split(':')[1])
            if line.strip().split(':')[1].isdigit()
            else line.strip().split(':')[1]
        for line in infile.readlines()}
print(dict)
Output:
{'salesUnits': 500, 'priceUnit': 11, 'fixedCosts': 2500, 'variableCostUnit': 2}
If you wish to bring the numbers in as simple strings, just use:
dict = {
    line.strip().split(':')[0]:
        line.strip().split(':')[1]
    for line in infile.readlines()}
Note also that you can add handling for other data types or data formatting with additional variations of the conditional:
int(line.strip().split(':')[1])
if line.strip().split(':')[1].isdigit()
else line.strip().split(':')[1]
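For example, a hypothetical variation that also converts decimal values to float while leaving anything non-numeric as a string:
def convert(value):
    # int for whole numbers, float for decimals, otherwise keep the string
    if value.isdigit():
        return int(value)
    try:
        return float(value)
    except ValueError:
        return value

with open('testinput.txt', 'r') as infile:
    parsed = {key: convert(val)
              for key, val in (line.strip().split(':', 1) for line in infile if ':' in line)}
print(parsed)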
myDict = {}
with open('dict.txt', 'r') as file:
    for line in file:
        key, value = line.strip().split(':')
        myDict[key] = value
print(myDict)
Output:
{'fixedCosts': '2500', 'priceUnit': '11', 'variableCostUnit': '2', 'salesUnits': '500'}
So I have a CSV file with the data arranged like this:
X,a,1,b,2,c,3
Y,a,1,b,2,c,3,d,4
Z,l,2,m,3
I want to import the CSV to create a nested dictionary so that looks like this.
data = {'X' : {'a' : 1, 'b' : 2, 'c' : 3},
        'Y' : {'a' : 1, 'b' : 2, 'c' : 3, 'd' : 4},
        'Z' : {'l' : 2, 'm' : 3}}
After updating the dictionary in the program I wrote (I got that part figured out), I want to be able to export the dictionary onto the same CSV file, overwriting/updating it. However I want it to be in the same format as the previous CSV file so that I can import it again.
I have been playing around with the import and have this so far
import csv

data = {}
with open('userdata.csv', 'r') as f:
    reader = csv.reader(f)
    for row in reader:
        data[row[0]] = {row[i] for i in range(1, len(row))}
But this doesn't work as things are not arranged correctly. Some numbers are subkeys to other numbers, letters are out of place, etc. I haven't even gotten to the export part yet. Any ideas?
Since you're not interested in preserving order, something relatively simple should work:
import csv

# import
data = {}
with open('userdata.csv', 'r') as f:
    reader = csv.reader(f)
    for row in reader:
        a = iter(row[1:])
        data[row[0]] = dict(zip(a, a))

# export
with open('userdata_exported.csv', 'w') as f:
    writer = csv.writer(f)
    for key, values in data.items():
        row = [key] + [value for item in values.items() for value in item]
        writer.writerow(row)
The latter could be done a little more efficiently by making only a single call to the csv.writer's writerows() method and passing it a generator expression.
# export2
with open('userdata_exported.csv', 'w') as f:
    writer = csv.writer(f)
    rows = ([key] + [value for item in values.items() for value in item]
            for key, values in data.items())
    writer.writerows(rows)
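As a side note, the dict(zip(a, a)) line in the import loop works because zip pulls two items from the same iterator on every step, pairing each key with the value that follows it; a tiny illustration:
row = ['X', 'a', '1', 'b', '2', 'c', '3']
a = iter(row[1:])
print(dict(zip(a, a)))  # {'a': '1', 'b': '2', 'c': '3'}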
You can use the grouper recipe from itertools:
import itertools

def grouper(iterable, n, fillvalue=None):
    "Collect data into fixed-length chunks or blocks"
    # grouper('ABCDEFG', 3, 'x') --> ABC DEF Gxx
    args = [iter(iterable)] * n
    # on Python 2 this was itertools.izip_longest
    return itertools.zip_longest(fillvalue=fillvalue, *args)
This will group your data into the a1/b2/c3 pairs you want. So you can do data[row[0]] = {k: v for k, v in grouper(row[1:], 2)} in your loop.
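Putting that together, a sketch of the import loop using grouper (this assumes Python 3, where the itertools function is spelled zip_longest):
import csv

data = {}
with open('userdata.csv') as f:
    for row in csv.reader(f):
        # pairs the fields after the key as ('a', '1'), ('b', '2'), ...
        data[row[0]] = {k: v for k, v in grouper(row[1:], 2)}
print(data)  # values stay strings; wrap v in int(...) if you want numbers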
from collections import defaultdict

data_lines = """X,a,1,b,2,c,3
Y,a,1,b,2,c,3,d,4
Z,l,2,m,3""".splitlines()

data = defaultdict(dict)

for line in data_lines:
    # you should probably add guards against invalid data, empty lines etc.
    main_key, sep, tail = line.partition(',')
    items = [item.strip() for item in tail.split(',')]
    items = zip(items[::2], map(int, items[1::2]))
    # data[main_key] = {key: value for key, value in items}
    data[main_key] = dict(items)

print(dict(data))
# {'X': {'a': 1, 'b': 2, 'c': 3},
#  'Y': {'a': 1, 'b': 2, 'c': 3, 'd': 4},
#  'Z': {'l': 2, 'm': 3}}
I'm lazy, so I might do something like this:
import csv

data = {}
with open('userdata.csv', 'rb') as f:
    reader = csv.reader(f)
    for row in reader:
        data[row[0]] = dict(zip(row[1::2], map(int, row[2::2])))
which works because row[1::2] gives every other element starting at index 1, and row[2::2] every other element starting at index 2. zip makes tuple pairs of those elements, and then we pass them to dict. This gives
{'Y': {'a': 1, 'c': 3, 'b': 2, 'd': 4},
'X': {'a': 1, 'c': 3, 'b': 2},
'Z': {'m': 3, 'l': 2}}
(Note that I changed your open to use 'rb', which is right for Python 2: if you're using 3, you want 'r', newline='' instead.)
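The slicing is easy to sanity-check on a single row:
row = ['X', 'a', '1', 'b', '2', 'c', '3']
print(row[1::2])  # ['a', 'b', 'c']
print(row[2::2])  # ['1', '2', '3']
print(dict(zip(row[1::2], map(int, row[2::2]))))  # {'a': 1, 'b': 2, 'c': 3}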
I had a text file with some information like this:
A , 12
B , 34
A , 54
F , 60
I want to read the file and store the information in a Python dictionary, like this: {'A': ['12', '54', ...], 'B': ['34', ...]} and so on. But I am stuck on how to collect every occurrence of A across the lines. This is what I tried:
repo = {}
infile = open('test10.log', 'r')
lines = infile.readlines()[2:-1]
for i in lines:
    module = ''.join(i.split(',')[:-1])
    time = ''.join(i.split(',')[1:]).replace('\n', '')
    if not module in repo:
        repo[module] = time
Thanks for your help!
There is no such data structure as {A: {12, 54, ...}, B: {34, ...}}. However:
repo = {}
infile = open('test10.log', 'r')
lines = infile.readlines()[2:-1]
for i in lines:
    module, time = [a.strip() for a in i.split(',')]
    repo.setdefault(module, []).append(int(time))
will give you a dict of lists:
{'A': [12, 54], 'B': [34], 'F': [60]}
Is this what you want?
You could use defaultdict from the collections module:
from collections import defaultdict

repo = defaultdict(list)
infile = open('test10.log', 'r')
lines = infile.readlines()[2:-1]
for item in lines:
    module, time = [a.strip() for a in item.split(',')]
    repo[module].append(time)
If you print(dict(repo.items())) you should get:
{'A': ['12', '54'], 'B': ['34'], 'F': ['60']}
If you print(list(repo.items())) you should get:
[('A', ['12', '54']), ('B', ['34']), ('F', ['60'])]
See
http://docs.python.org/library/collections.html#collections.defaultdict
http://python.net/~goodger/projects/pycon/2007/idiomatic/handout.html#defaultdict
for documentation about defaultdict