How to convert dict value into readable CSV file? - python

How can I convert a dict value into a readable CSV file? I already tried to write it, but without success.
Original value:
[328900559584, 185455615753, 296889631456]
I want to make my file look like this:
328900559584
185455615753
296889631456
My code:
with open('cluster1.csv', 'w') as f:
    [f.write('{0},{1}\n'.format(key, value)) for key, value in my_dict.items()]

Note that you're using a list, not a dict:
my_lines = [328900559584, 185455615753, 296889631456]
with open('cluster1.csv', 'w') as f:
    for line in my_lines:
        f.write(str(line) + '\n')

You can do it like this:
import os

values = [328900559584, 185455615753, 296889631456]
with open('cluster1.csv', 'w') as f:
    for n in values:
        f.write(str(n) + os.linesep)
os.linesep is the platform's line separator ("\n" on Unix, "\r\n" on Windows), but note that a file opened in text mode (the default here) already translates "\n" to that separator automatically, so writing os.linesep can produce doubled line endings on Windows; a plain "\n" is usually enough.
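Since the question asks for a CSV file, here is a minimal sketch using the standard csv module (the file and variable names are just examples); each value becomes its own row:

import csv

values = [328900559584, 185455615753, 296889631456]
with open('cluster1.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    # writerows() expects an iterable of rows, so wrap each value in a one-element list
    writer.writerows([v] for v in values)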

Related

Python program for writing length of list to file

I have a file list.txt that contains a single list only e.g.
[asd,ask,asp,asq]
The list might be very long. I want to create a Python program len.py that reads list.txt and writes the length of the list it contains to the file num.txt. Something like the following:
fin = open("list.txt", "rt")
fout = open("num.txt", "wt")
for list in fin:
    fout.write(len(list))
fin.close()
fout.close()
However this does not work. Can someone point out what needs to be changed? Many thanks.
Use:
with open("list.txt") as f1, open("num.txt", "w") as f2:
for line in f1:
line = line.strip('\n[]')
f2.write(str(len(line.split(','))) + '\n')
with open("list.txt") as fin, open("num.txt", "w") as fout:
input_data = fin.readline()
# check if there was any info read from input file
if input_data:
# split string into list on comma character
strs = input_data.replace('[','').split('],')
lists = [map(int, s.replace(']','').split(',')) for s in strs]
print(len(lists))
fout.write(str(len(lists)))
I updated the code to use the with statement from another answer. I also used some code from this answer (How can I convert this string to list of lists?) to (more?) correctly count nested lists.
When Python reads a file, it treats the contents as strings, so the first step is to convert that string into the appropriate type; the usual casting functions won't handle a whole list literal.
You can use the ast module from the standard library to do that conversion.
import ast

fin = open("list.txt", "r")
fout = open("num.txt", "w")
for line in fin.readlines():
    # write() needs a string, so convert the length before writing
    fout.write(str(len(ast.literal_eval(line))))
fin.close()
fout.close()

Python - When Outputting Dictionary to External Text File Only First Key Is Being Outputted

My issue:
I need to output every item in a dictionary to an external text file in Python.
Like the following:
dict1 = {}
gen1 = 1
aha = 2
dict1['Generation'] = gen1
dict1['Population'] = aha
for key, value in sorted(dict1.items()):
    print(key, ':', value, file=open('text.txt', 'w'))
In this example I have 2 keys in the dictionary; however, when I run the code and open the file, only one line has been written and only the first key is output.
What can I do so that all of the keys in the dictionary are printed to the external file?
Thank you.
You are re-opening the file for writing (and thus clearing it) in each iteration of your for loop. Use
with open('text.txt', 'w') as out:
    for key, value in sorted(dict1.items()):
        print(key, ':', value, file=out)
Alternatively, simply change open('text.txt', 'w') to open('text.txt', 'a') in your original code in order to open the file for appending. This is less efficient than only opening the file once, though.
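For illustration, the append-mode variant described above would look like this (it still reopens the file on every iteration, exactly like the original code):

for key, value in sorted(dict1.items()):
    # 'a' appends instead of truncating, so earlier lines are kept
    print(key, ':', value, file=open('text.txt', 'a'))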
Every call to open('text.txt', 'w') will truncate the file. That means, before writing the second item, the file will be cleared and the first item is lost.
You should only open it once and keep it in a variable:
# not final solution yet!
f = open('text.txt', 'w')
for key, value in sorted(dict1.items()):
    print(key, ':', value, file=f)
However in Python, most of the time you should use the with statement to ensure the file is properly closed:
with open('text.txt', 'w') as f:
    for key, value in sorted(dict1.items()):
        print(key, ':', value, file=f)
You should use the json module to dump your dict to a file, instead of reinventing the wheel:
import json

dict1 = {}
gen1 = 1
aha = 2
dict1['Generation'] = gen1
dict1['Population'] = aha
with open('text.txt', 'w') as dict_file:
    json.dump(dict1, dict_file)

checking if a list in a text file already exists and appending to it

So from this:
Lucy:4
Henry:8
Henry:9
Lucy:9
To this
Lucy: 4,9
Henry: 8,9
This is now fixed, thank you.
A very straightforward solution might look like this (if you don't want to use defaultdict):
with open('input.txt') as f:
    dic = {}
    for line in f:
        key, value = line.strip().split(':')
        dic.setdefault(key, []).append(value)

with open('output', 'a') as f:
    for key, value in dic.items():
        f.write(key + ':' + ','.join(value) + '\n')
UPDATE
I have fixed your code; you need to change these lines.
First, remove the following lines, they are useless here:
file = open(class_number, 'a') #opens the file in 'append' mode so you don't delete all the information
file.write(str(name + ",")) #writes the name and ":" to file
file.write(str(score)) #writes the score to file
file.write('\n')#writes the score to the file
file.close()#safely closes the file to save the information
You are using the wrong delimiter.
key,value= line.split(",")
Change it to:
key,value= line.strip().split(":")
This will fix your error.
N.B. Here, strip() is there to remove spaces and newlines.
I don't really know why you are writing the extra commas:
file.write(key + ':' + ',' + ',' + ','.join(value))
Change it to:
file.write(key + ':' + ','.join(value) + '\n')
One more thing: you are reading from and writing to the same file. In that case, you should read everything first and only then rewrite the file, as in the sketch below. If you use a separate output file, the code above is fine as it is.
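A minimal sketch of that read-everything-then-rewrite approach, assuming a file named scores.txt in the name:score format shown in the question:

scores = {}
with open('scores.txt') as f:            # read the whole file first
    for line in f:
        key, value = line.strip().split(':')
        scores.setdefault(key, []).append(value)

with open('scores.txt', 'w') as f:       # then rewrite the same file
    for key, values in scores.items():
        f.write(key + ':' + ','.join(values) + '\n')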
Solution 1:
The best way is to first read all the data into a dictionary and finally dump it to the file.
from collections import defaultdict

result = defaultdict(list)

def add_item(classname, source):
    for name, score in source:
        result[name].append(score)
    with open(classname, 'w') as c:
        for key, val in result.items():
            c.write('{}: {}\n'.format(key, ','.join(val)))
Solution 2:
For each request, you have to read the whole file and then rewrite it:
def add_item(classname, name, score):
    # build a dict of name -> list of scores from the existing file
    result = {line.split(':')[0]: line.split(':')[1].strip().split(',')
              for line in open(classname, 'r').readlines()}
    result.setdefault(name, []).append(score)
    with open(classname, 'w') as c:
        for key, val in result.items():
            c.write('{}: {}\n'.format(key, ','.join(val)))

File I/O Python Save and Read

I need to save a dictionary and then be able to read the dictionary after it's been saved.
This is what I have and it should work (I think), but I keep getting the following error when it comes to the read_dict function:
return dict(line.split() for line in x)
ValueError: dictionary update sequence element #0 has length 1; 2 is required
Any advice?
def save_dict(dict1):
    with open('save.txt', 'w') as fh:
        for key in dict1.keys():
            fh.write(key + '' + dictionary1[key] + '\n')

def readDB():
    with open('save.txt', 'r') as fh:
        return dict(new.split() for new in fh)
Unless you actually need a line-by-line list in the file, use something like json or pickle to save the dict. These formats deal with things like spaces in the key name, non-string values, non-ascii characters and such.
import json

dict1 = {'test': 123}

with open('save.txt', 'w') as fh:
    json.dump(dict1, fh)

with open('save.txt', 'r') as fh:
    dict2 = json.load(fh)
Use a space instead of an empty string, otherwise str.split will return a single-item list, which raises an error when passed to dict().
fh.write(key + ' ' + dictionary1[key] + '\n')
Or better use string formatting:
for key, val in dict1.items():
    fh.write('{} {}\n'.format(key, val))
Demo:
>>> s = 'k' + '' + 'v' #WRONG
>>> s
'kv'
>>> s.split()
['kv']
>>> s = 'k' + ' ' + 'v' #RIGHT
>>> s
'k v'
>>> s.split()
['k', 'v']
You probably need to use the pickle module!
Check out this example:
## Importing
from pickle import dump

## You make the dictionary
my_dict = {'a': 1, 'b': 2, 'c': 3}

## You dump the dictionary to a binary file
with open('the path you want to save', 'wb') as da_file:
    dump(my_dict, da_file)
Save that file as "something0.py".
## Importing
from pickle import load

## You get the data back from the file; the variable that receives the
## result of load() will have the same type as the one that was dumped
## to that file!
with open('the file path which you will get the items from', 'rb') as da_file:
    my_dict = load(da_file)

## Print out the results
from pprint import pprint
pprint(my_dict)
Save that file as "something1.py".
Now run the two scripts, pointing both with statements at the same file path, first 0 and then 1.
Script 1 will print the same data that script 0 wrote to the file!
As mentioned, you should use pickle, but in a more simplified way:
import pickle

FileTowriteto = open("foo.txt", "wb")
DumpingDict = {"Foo": "Foo"}
pickle.dump(DumpingDict, FileTowriteto)
FileTowriteto.close()  # close the file so the data is flushed to disk
Then when you want to read it you can do this:
OldDict = open("foo.txt", "rb")
OldDictRecover = pickle.load(OldDict)
OldDict.close()
This should work; the file on disk is binary, but pickle.load() gives you the original dictionary back.

How to save a dictionary to a file?

I have a problem with changing a dict value and saving the dict to a text file (the format must stay the same); I only want to change the member_phone field.
My text file is the following format:
memberID:member_name:member_email:member_phone
and I split the text file with:
mdict = {}
for line in file:
    x = line.split(':')
    a = x[0]
    b = x[1]
    c = x[2]
    d = x[3]
    e = b + ':' + c + ':' + d
    mdict[a] = e
When I try to change the member_phone stored in d, the value changes but does not stay attached to the right key:
def change(mdict, b, c, d, e):
    a = input('ID')
    if a in mdict:
        d = str(input('phone'))
        mdict[a] = b + ':' + c + ':' + d
    else:
        print('not')
And how do I save the dict to a text file in the same format?
Python has the pickle module just for this kind of thing.
These functions are all that you need for saving and loading almost any object:
import pickle

with open('saved_dictionary.pkl', 'wb') as f:
    pickle.dump(dictionary, f)

with open('saved_dictionary.pkl', 'rb') as f:
    loaded_dict = pickle.load(f)
For saving collections of Python objects there is also the shelve module.
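A minimal shelve sketch (the filename 'members_shelf' is arbitrary); a shelf behaves like a persistent dictionary:

import shelve

with shelve.open('members_shelf') as db:
    db['Lucy'] = [4, 9]      # keys must be strings, values can be almost any picklable object
    db['Henry'] = [8, 9]

with shelve.open('members_shelf') as db:
    print(db['Lucy'])        # [4, 9]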
Pickle is probably the best option, but in case anyone wonders how to save and load a dictionary to a file using NumPy:
import numpy as np
# Save
dictionary = {'hello':'world'}
np.save('my_file.npy', dictionary)
# Load
read_dictionary = np.load('my_file.npy', allow_pickle=True).item()
print(read_dictionary['hello']) # displays "world"
FYI: NPY file viewer
We can also use the json module when dictionaries or other data can easily be mapped to JSON format.
import json
# Serialize data into file:
json.dump(data, open("file_name.json", 'w'))
# Read data from file:
data = json.load(open("file_name.json"))
This solution brings many benefits, e.g. it works for Python 2.x and Python 3.x in an unchanged form; in addition, data saved in JSON format can easily be transferred between many different platforms or programs, and it is also human-readable.
Save and load dict to file:
def save_dict_to_file(dic):
    f = open('dict.txt', 'w')
    f.write(str(dic))
    f.close()

def load_dict_from_file():
    f = open('dict.txt', 'r')
    data = f.read()
    f.close()
    return eval(data)
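A slightly safer variant of the same idea replaces eval() with ast.literal_eval(), which only accepts Python literals such as dicts, lists, strings, and numbers:

import ast

def load_dict_from_file():
    with open('dict.txt', 'r') as f:
        # parses the repr() of the dict back into a dict, without running arbitrary code
        return ast.literal_eval(f.read())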
As Pickle has some security concerns and is slow (source), I would go for JSON, as it is fast, built-in, human-readable, and interchangeable:
import json

data = {'another_dict': {'a': 0, 'b': 1}, 'a_list': [0, 1, 2, 3]}

# e.g. file = './data.json'
with open(file, 'w') as f:
    json.dump(data, f)
Reading is similarly easy:
with open(file, 'r') as f:
    data = json.load(f)
This is similar to this answer, but implements the file handling correctly.
If the performance improvement is still not enough, I highly recommend orjson, a fast, correct JSON library for Python built on Rust.
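A minimal orjson sketch, assuming it is installed (pip install orjson); orjson works with bytes, so the file is opened in binary mode:

import orjson

data = {'another_dict': {'a': 0, 'b': 1}, 'a_list': [0, 1, 2, 3]}

with open('data.json', 'wb') as f:
    f.write(orjson.dumps(data))   # orjson.dumps() returns bytes

with open('data.json', 'rb') as f:
    data = orjson.loads(f.read())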
I'm not sure what your first question is, but if you want to save a dictionary to a file you should use the json library. Look up the documentation of the load/loads and dump/dumps functions.
I would suggest saving your data in the JSON format instead of the pickle format, as JSON files are human-readable, which makes debugging easier since your data is small. JSON files are also used by other programs to read and write data. You can read more about it here.
The json module is part of Python's standard library, so there is nothing to install; just import it:
import json

# To save the dictionary into a file:
json.dump(data, open("myfile.json", 'w'))
This creates a JSON file named myfile.json.
# To read data from file:
data = json.load(open("myfile.json"))
This reads the myfile.json contents and stores them in the data object.
For a dictionary of strings such as the one you're dealing with, it could be done using only Python's built-in text processing capabilities.
(Note this wouldn't work if the values are something else.)
with open('members.txt') as file:
    mdict = {}
    for line in file:
        a, b, c, d = line.strip().split(':')
        mdict[a] = b + ':' + c + ':' + d

a = input('ID: ')
if a not in mdict:
    print('ID {} not found'.format(a))
else:
    b, c, d = mdict[a].split(':')
    d = input('phone: ')
    mdict[a] = b + ':' + c + ':' + d  # update entry
    with open('members.txt', 'w') as file:  # rewrite file
        for id, values in mdict.items():
            file.write(':'.join([id] + values.split(':')) + '\n')
I like using the pretty print module to store the dict in a very user-friendly readable form:
import pprint

def store_dict(fname, dic):
    with open(fname, "w") as f:
        f.write(pprint.pformat(dic, indent=4, sort_dicts=False))
        # note some of the defaults are: indent=1, sort_dicts=True
Then, when recovering, read in the text file and eval() it to turn the string back into a dict:
def load_file(fname):
    try:
        with open(fname, "r") as f:
            dic = eval(f.read())
    except:
        dic = {}
    return dic
Unless you really want to keep the dictionary, I think the best solution is to use the csv Python module to read the file.
Then you get rows of data and you can change member_phone or whatever you want; finally, you can use the csv module again to save the file in the same format as you opened it.
Code for reading:
import csv
with open("my_input_file.txt", "r") as f:
reader = csv.reader(f, delimiter=":")
lines = list(reader)
Code for writing:
with open("my_output_file.txt", "w") as f:
writer = csv.writer(f, delimiter=":")
writer.writerows(lines)
Of course, you need to adapt your change() function:
def change(lines):
    a = input('ID')
    for line in lines:
        if line[0] == a:
            d = str(input("phone"))
            line[3] = d
            break
    else:
        print("not")
I haven't timed it but I bet h5 is faster than pickle; the filesize with compression is almost certainly smaller.
import deepdish as dd
dd.io.save(filename, {'dict1': dict1, 'dict2': dict2}, compression=('blosc', 9))
import json

file_name = open("data.json", "w")
json.dump(test_response, file_name)
file_name.close()
Or use a context manager, which is better:
with open("data.json", "w") as file_name:
json.dump(test_response, file_name)
