Read a txt-file as dict containing a numpy-array - python

I have a lot of .txt files that I want to read.
The .txt files were saved by converting a python dictionary to a string and saving the string in a .txt file.
param_string = str(parameters-as-dict)
text_file = open(parameter_file_path, "w")
text_file.write(param_string)
text_file.close()
The entries of the dict are of mixed types (float, int, string,...). In some of the files one entry of the dict is a numpy-array and is saved in the txt-file as
'epsilons': array([...])
Because I want to access the values saved in the dict by their names, I now want to read the txt-file and load them as a dict again. This works easily with
f = open(path, 'r')
parameters = ast.literal_eval(f.read())
as long as there is no numpy array in the file. When the numpy-array is present, I get the error:
File ".../python3.6/ast.py", line 84, in _convert
raise ValueError('malformed node or string: ' + repr(node)) ValueError: malformed node or string: <_ast.Call object at 0x7fb5428cc630>
Which makes sense, looking at the ast.literal_eval documentation (https://docs.python.org/2/library/ast.html), which says:
Safely evaluate an expression node or a Unicode or Latin-1 encoded
string containing a Python literal or container display. The string or
node provided may only consist of the following Python literal
structures: strings, numbers, tuples, lists, dicts, booleans, and
None.
Since I can't re-save the files differently, don't know at which position the array is, and want to avoid cumbersome regex parsing, I'm searching for a solution that transforms my txt-file into a dict containing a numpy array.
EDIT: the problem is not only the numpy array but also entries where I saved an object of some specific class, e.g.:
, 'foo' : <class bar>,
A solution where everything that cannot be parsed as a number/bool/some other known datatype is automatically kept as a string, just as it is, would satisfy my needs.
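For illustration, the fallback behaviour described here could look roughly like this (a sketch; parse_value and the sample values are hypothetical):
import ast

def parse_value(raw):
    """Try to parse a Python literal; if that fails, keep the raw string."""
    try:
        return ast.literal_eval(raw.strip())
    except (ValueError, SyntaxError):
        return raw.strip()

print(parse_value("3.14"))         # 3.14 (float)
print(parse_value("True"))         # True (bool)
print(parse_value("<class bar>"))  # '<class bar>' kept as a plain string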

I suggest an iterative approach, handling the exceptions as needed. I don't like using eval; perhaps there's a better way, but this is quick and dirty and assumes you have safe inputs.
import ast
from numpy import array  # so eval() can resolve 'array(...)' from the saved file

parameters = {}
with open("file.txt") as f:
    for line in f:
        key, val = line.split(':', 1)
        if val.strip().startswith('<class'):
            # string representation like '<class bar>'
            # ast.literal_eval() can't handle this, and neither can eval()
            # this is just a string literal, so keep it as such:
            parameters[key] = val
            continue
        try:
            parameters[key] = ast.literal_eval(val)
        except ValueError:
            # for unsupported data structures like np.array
            parameters[key] = eval(val)
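A slightly safer variant of that last line (a sketch, not part of the original answer) is to give eval a restricted namespace, so only the names you expect can be resolved; here array is mapped to numpy.array and val is a hypothetical value from such a file:
import numpy as np

val = "array([0.1, 0.2, 0.3])"  # hypothetical value as it appears in the saved file

# no builtins, and only 'array' resolves (to np.array)
safe_names = {"__builtins__": {}, "array": np.array}
parsed = eval(val, safe_names)
print(parsed)  # [0.1 0.2 0.3]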

I guess you'll have to check for an array line by line. A quick & dirty suggestion:
import ast
import numpy as np

parameters = {}
with open("file.txt") as f:
    for line in f:
        key, val = line.split(':', 1)
        if 'array' in val:
            # pull out the bracketed list inside array(...) and parse that
            s = val.split('(', 1)[1].split(')')[0]
            parameters[key] = np.array(ast.literal_eval(s))
        else:
            parameters[key] = ast.literal_eval(val)
Maybe for future reference, you can try using the pickle module to save your data.
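For reference, a minimal pickle round trip could look like this (the filename and example dict are placeholders):
import pickle

parameters = {"epsilons": [0.1, 0.2], "n_steps": 100}  # example dict

with open("parameters.pkl", "wb") as f:   # save
    pickle.dump(parameters, f)

with open("parameters.pkl", "rb") as f:   # load
    restored = pickle.load(f)

print(restored == parameters)  # True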

Related

How to parse dictionary syntaxed string to dictionary object

I have a file whose syntax resembles a dictionary, as follows:
{YEARS:5}
{GROUPS:[1,2]}
{SAVE_FILE:{USE:1,NAME:CustomCalendar.ics}}
{SAVE_ONLINE:{USE:1,NAME:Custom Calendar,EMAIL:an.email#something.com,PASSWORD:AcompLExP#ssw0rd}}
{COURSES:[BTC,CIT,CQN,OSA,PRJCQN,PRJIOT,PILS,SPO,SHS1]}
I would like to find a way to parse each individual line into a dictionary as it is written. The difficulty I have is that some of these lines contain a dictionary as their value.
I am capable of taking the single lines and converting them to actual dictionaries but I am having an issue when working with the other lines.
Here is the code I have so far:
def get_config(filename):
    with open(filename, encoding="utf8") as config:
        years = config.read().split()[0]
        print(parse_line(years))

def parse_line(input_line):
    input_line = input_line.strip("{}")
    input_line = input_line.split(":")
    return {input_line[i]: input_line[i + 1] for i in range(0, len(input_line), 2)}
If at all possible, I'd love to be able to deal with any line within a single function and hopefully deal with more than two nested dictionaries.
Thanks in advance!
If your file contained valid JSON, it would be an easy task to read the file and convert your data structures to dictionaries.
To give an example, consider having the following line of text in a file text.txt:
{"SAVE_ONLINE":{"USE":1,"NAME":"Custom Calendar","EMAIL":"an.email#something.com","PASSWORD":"AcompLExP#ssw0rd"}}
Please note that the only difference is the quotes " around strings.
You can easily parse the line to a dictionary structure with:
import json

with open('text.txt', 'r') as f:
    d = json.loads(f.read())
Output
print(d)
# {'SAVE_ONLINE': {'USE': 1, 'NAME': 'Custom Calendar', 'EMAIL': 'an.email#something.com', 'PASSWORD': 'AcompLExP#ssw0rd'}}
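If editing the files by hand is not an option, a rough alternative (a sketch I haven't hardened; the regex and the parse_line name are my own, and it assumes lines formatted exactly like the ones in the question, with no quotes and no spaces around colons) is to add the missing quotes programmatically before calling json.loads:
import json
import re

# quote every bare token that starts with a letter (keys and string values);
# numbers are left alone because they don't start with a letter
TOKEN = re.compile(r'([A-Za-z][^:,\[\]{}]*)')

def parse_line(line):
    quoted = TOKEN.sub(r'"\1"', line.strip())
    return json.loads(quoted)

print(parse_line('{SAVE_FILE:{USE:1,NAME:CustomCalendar.ics}}'))
# {'SAVE_FILE': {'USE': 1, 'NAME': 'CustomCalendar.ics'}}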

How to import a special format as a dictionary in python?

I have the text files as below format in single line,
username:password;username1:password1;username2:password2;
etc.
What I have tried so far is
with open('list.txt') as f:
d = dict(x.rstrip().split(None, 1) for x in f)
but I get an error saying that the length is 1 and 2 is required, which indicates the file is not being read as key:value pairs.
Is there any way to fix this or should I just reformat the file in another way?
Thanks for your answers.
What I have got so far is:
with open('tester.txt') as f:
    password_list = dict(x.strip(":").split(";", 1) for x in f)
for user, password in password_list.items():
    print(user + " - " + password)
The result comes out as username:password - username1:password1.
What I need is to split username:password so that key = user and value = password.
Since the variable f in this case is a file object and not a list, the first thing to do would be to get the lines from it. You could use the readlines() method (https://docs.python.org/2/library/stdtypes.html?highlight=readline#file.readlines)* for this.
Furthermore, I think I would use split with the semicolon (";") parameter. This will provide you with a list of strings of "username:password", provided your entire file looks like this.
I think you will figure out what to do after that.
EDIT
* I assumed you are using Python 2.7 for some reason. In version 3.x you might want to look at the "distutils.text_file" class (https://docs.python.org/3.7/distutils/apiref.html?highlight=readlines#distutils.text_file.TextFile.readlines).
Load the text of the file in Python with open() and read() as a string
Apply split(";") to that string to create a list like ['username:password', 'username1:password1', 'username2:password2']
Do a dict comprehension where you apply split(":") to each item of the above list to split those pairs.
with open('list.txt', 'rt') as f:
    raw_data = f.readlines()[0].strip()
    list_data = raw_data.split(';')
    user_dict = {x.split(':')[0]: x.split(':')[1] for x in list_data if x}
    print(user_dict)
Dictionary comprehension is useful here.
It's a one-liner to pull all the info out of the text file, as requested. Hope your tutor is impressed; ask him how it works and see what he says. Maybe update your question to include his response.
If you want me to explain, feel free to comment and I shall go into more detail.
The error you're probably getting:
ValueError: dictionary update sequence element #3 has length 1; 2 is required
is because the text line ends with a semicolon. Splitting it on semicolons then results in a list that contains some pairs, and an empty string:
>>> "username:password;username1:password1;username2:password2;".split(";")
['username:password', 'username1:password1', 'username2:password2', '']
Splitting the empty string on colons then results in a single empty string, rather than two strings.
To fix this, filter out the empty string. One example of doing this would be
[element for element in x.split(";") if element != ""]
In general, I recommend you do the work one step at a time and assign to intermediary variables.
Here's a simple (but long) answer. You need to get the line from the file, and then split it and the items resulting from the split:
results = {}
with open('file.txt') as file:
    for line in file:
        # Only one line, but that's fine
        entries = line.strip().split(';')
        for entry in entries:
            if entry != '':
                # The last item in entries will be blank, due to how split works in this example
                user, password = entry.split(':')
                results[user] = password
Try this.
f = open('test.txt').read().strip()
data = f.split(";")
d = {}
for i in data:
    if i:
        value = i.split(":")
        d.update({value[0]: value[1]})
print d

How to extract multiple JSON objects from one file?

I am very new to JSON files. If I have a json file with multiple json objects such as the following:
{"ID":"12345","Timestamp":"20140101", "Usefulness":"Yes",
"Code":[{"event1":"A","result":"1"},…]}
{"ID":"1A35B","Timestamp":"20140102", "Usefulness":"No",
"Code":[{"event1":"B","result":"1"},…]}
{"ID":"AA356","Timestamp":"20140103", "Usefulness":"No",
"Code":[{"event1":"B","result":"0"},…]}
…
I want to extract all "Timestamp" and "Usefulness" values into a data frame:
Timestamp Usefulness
0 20140101 Yes
1 20140102 No
2 20140103 No
…
Does anyone know a general way to deal with such problems?
Update: I wrote a solution that doesn't require reading the entire file in one go. It's too big for a stackoverflow answer, but can be found here jsonstream.
You can use json.JSONDecoder.raw_decode to decode arbitrarily big strings of "stacked" JSON (so long as they can fit in memory). raw_decode stops once it has a valid object and returns the last position that wasn't part of the parsed object. It's not documented, but you can pass this position back to raw_decode and it will start parsing again from that position. Unfortunately, the Python json module doesn't accept strings that have leading whitespace, so we need to search for the first non-whitespace part of the document.
from json import JSONDecoder, JSONDecodeError
import re

NOT_WHITESPACE = re.compile(r'\S')

def decode_stacked(document, pos=0, decoder=JSONDecoder()):
    while True:
        match = NOT_WHITESPACE.search(document, pos)
        if not match:
            return
        pos = match.start()
        try:
            obj, pos = decoder.raw_decode(document, pos)
        except JSONDecodeError:
            # do something sensible if there's some error
            raise
        yield obj

s = """
{"a": 1}
[
1
,
2
]
"""

for obj in decode_stacked(s):
    print(obj)
prints:
{'a': 1}
[1, 2]
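To get from there to the data frame asked about in the question, you could collect the yielded objects with pandas (a sketch; it assumes the stacked objects sit in a file called data.json and that pandas is available):
import pandas as pd

with open("data.json") as f:
    records = [{"Timestamp": obj["Timestamp"], "Usefulness": obj["Usefulness"]}
               for obj in decode_stacked(f.read())]

df = pd.DataFrame(records)
print(df)
#   Timestamp Usefulness
# 0  20140101        Yes
# 1  20140102         No
# 2  20140103         No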
Use a json array, in the format:
[
{"ID":"12345","Timestamp":"20140101", "Usefulness":"Yes",
"Code":[{"event1":"A","result":"1"},…]},
{"ID":"1A35B","Timestamp":"20140102", "Usefulness":"No",
"Code":[{"event1":"B","result":"1"},…]},
{"ID":"AA356","Timestamp":"20140103", "Usefulness":"No",
"Code":[{"event1":"B","result":"0"},…]},
...
]
Then import it into your python code
import json

with open('file.json') as json_file:
    data = json.load(json_file)
Now the content of data is an array with dictionaries representing each of the elements.
You can access it easily, e.g.:
data[0]["ID"]
So, as was mentioned in a couple of comments, containing the data in an array is simpler, but the solution does not scale well as the data set grows. You really only need the full list when you want random access to items in the array; otherwise, generators are the way to go. Below I have prototyped a reader function which reads each json object individually and returns a generator.
The basic idea is to have the reader split on the newline character "\n" (or "\r\n" on Windows). Python can do this with the file.readline() function, or simply by iterating over the file object as below.
import json

def json_reader(filename):
    with open(filename) as f:
        for line in f:
            yield json.loads(line)
However, this method only really works when the file is written as you have it -- with each object separated by a newline character. Below I wrote an example of a writer that separates an array of json objects and saves each one on a new line.
def json_writer(file, json_objects):
    with open(file, "w") as f:
        for jsonobj in json_objects:
            jsonstr = json.dumps(jsonobj)
            f.write(jsonstr + "\n")
You could also do the same operation with file.writelines() and a list comprehension:
...
        json_strs = [json.dumps(j) + "\n" for j in json_objects]
        f.writelines(json_strs)
...
And if you wanted to append the data instead of writing a new file just change open(file, "w") to open(file, "a").
In the end I find this helps a great deal not only with readability when I try and open json files in a text editor but also in terms of using memory more efficiently.
On that note, if you change your mind at some point and want a list out of the reader, you can pass the generator to list() and Python will populate the list for you. In other words, just write:
lst = list(json_reader(file))
Added streaming support, based on the answer of @dunes:
import re
from json import JSONDecoder, JSONDecodeError

NOT_WHITESPACE = re.compile(r"[^\s]")

def stream_json(file_obj, buf_size=1024, decoder=JSONDecoder()):
    buf = ""
    ex = None
    while True:
        block = file_obj.read(buf_size)
        if not block:
            break
        buf += block
        pos = 0
        while True:
            match = NOT_WHITESPACE.search(buf, pos)
            if not match:
                break
            pos = match.start()
            try:
                obj, pos = decoder.raw_decode(buf, pos)
            except JSONDecodeError as e:
                ex = e
                break
            else:
                ex = None
                yield obj
        buf = buf[pos:]
    if ex is not None:
        raise ex
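A small usage sketch for stream_json, with io.StringIO standing in for a real file object:
import io

sample = io.StringIO('{"a": 1}\n[1, 2]\n{"b": 3}')
for obj in stream_json(sample):
    print(obj)
# {'a': 1}
# [1, 2]
# {'b': 3}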

Converting a modified JSON to CSV using python

I know this question has been asked before, but never with the following caveats:
I'm a complete python n00b. Also a JSON noob.
The JSON file / string is not the same as those seen in json2csv examples.
The CSV file output is supposed to have standard columns.
Due to point number 1, I'm not aware of most terminologies and technologies used for this. So please bear with me.
Point number 2: Here's a single line of the supposed JSON file:
"id":"123456","about":"YESH","can_post":true,"category":"Community","checkins":0,"description":"OLE!","has_added_app":false,"is_community_page":false,"is_published":true,"likes":48,"link":"www.fake.com","name":"Test Name","parking":{"lot":0,"street":0,"valet":0},"talking_about_count":0,"website":"www.fake.com/blog","were_here_count":0^
Weird, I know - it lacks braces and brackets and stuff. Which is why I'm convinced posted solutions won't work.
I'm not sure what the 0^ at the end of the line is, but I see it at the end of every line. I'm assuming the 0 is the value for "were_here_count" while the ^ is a... line terminator? EDIT: Apparently, I can just disregard it.
Of note is that the value of "parking" appears to be yet another array - I'm fine with just displaying it as is (minus the double quotes).
Point number 3: Here's the columns of the supposed CSV file output. This is the complete column set - the JSON file won't always have them all.
ID STRING,
ABOUT STRING,
ATTIRE STRING,
BAND_MEMBERS STRING,
BEST_PAGE STRING,
BIRTHDAY STRING,
BOOKING_AGENT STRING,
CAN_POST STRING,
CATEGORY STRING,
CATEGORY_LIST STRING,
CHECKINS STRING,
COMPANY_OVERVIEW STRING,
COVER STRING,
CONTEXT STRING,
CURRENT_LOCATION STRING,
DESCRIPTION STRING,
DIRECTED_BY STRING,
FOUNDED STRING,
GENERAL_INFO STRING,
GENERAL_MANAGER STRING,
GLOBAL_BRAND_PARENT_PAGE STRING,
HOMETOWN STRING,
HOURS STRING,
IS_PERMANENTLY_CLOSED STRING,
IS_PUBLISHED STRING,
IS_UNCLAIMED STRING,
LIKES STRING,
LINK STRING,
LOCATION STRING,
MISSION STRING,
NAME STRING,
PARKING STRING,
PHONE STRING,
PRESS_CONTACT STRING,
PRICE_RANGE STRING,
PRODUCTS STRING,
RESTAURANT_SERVICES STRING,
RESTAURANT_SPECIALTIES STRING,
TALKING_ABOUT_COUNT STRING,
USERNAME STRING,
WEBSITE STRING,
WERE_HERE_COUNT STRING
Here's my code so far:
import os

num = '1'
inPath = "./fb-data_input/"
outPath = "./fb-data_output/"

# Get list of Files, put them in filenameList array
fileNameList = os.listdir(path)

# Process per file in
for item in fileNameList:
    print("Processing: " + item)
    fb_inputFile = open(inPath + item, "rb").read().split("\n")
    fb_outputFile = open(outPath + "fbdata-IAB-output" + num, "wb")
    num++
    jsonString = fb_inputFile.split("\",\"")
    jsonField = jsonString[0]
    jsonValue = jsonString[1]
    jsonHash[?] = [?,?]
    # Do Code stuff here
Up until the for loop, it just loads the json file names into an array, and then processes it one by one.
Here's my logic for the rest of the code:
Split the json string by something. Perhaps the "," so that other commas won't get split.
Store it into a hashmap / 2D array (dynamic?)
Trim away the JSON fields and the first and/or last double quotes.
Add the resulting output to another hashmap, with those set columns, putting in null in a column that the JSON file does not have.
And then I output the result to a CSV.
It sounds logical in my head, but I'm pretty sure there's something I missed. And of course, I have a hard time putting it in code.
Can I have some help on this? Thanks.
P.S.
Additional information:
OS: Mac OSX
Target platform OS: Ubuntu of some sort
Here is a full solution, based on your original code:
import os
import json
from csv import DictWriter
import codecs

def get_columns():
    columns = []
    with open("columns.txt") as f:
        columns = [line.split()[0] for line in f if line.strip()]
    return columns

if __name__ == "__main__":
    in_path = "./fb-data_input/"
    out_path = "./fb-data_output/"
    columns = get_columns()
    bad_keys = ("has_added_app", "is_community_page")
    for filename in os.listdir(in_path):
        json_filename = os.path.join(in_path, filename)
        csv_filename = os.path.join(out_path, "%s.csv" % (os.path.basename(filename)))
        with open(json_filename) as f, open(csv_filename, "wb") as csv_file:
            csv_file.write(codecs.BOM_UTF8)
            csv = DictWriter(csv_file, columns)
            csv.writeheader()
            for line_number, line in enumerate(f, start=1):
                try:
                    data = json.loads("{%s}" % (line.strip().strip('^')))
                    # fix parking column
                    if "parking" in data:
                        data['parking'] = ", ".join("%s: %s" % (k, str(v)) for k, v in data['parking'].items())
                    data = {k.upper(): unicode(v).encode('utf8') for k, v in data.items() if k not in bad_keys}
                except Exception, e:
                    import traceback
                    traceback.print_exc()
                    data = {columns[0]: "Error on line %s of %s: %s" % (line_number, json_filename, e)}
                csv.writerow(data)
Edited: Full unicode support plus extended error information.
So, first off, your string is valid json if you just add curly braces around it. You can then deserialize it with Python's json library. Set up your csv columns as a dictionary with each of them pointing to whatever you want as a default value (None? ""? Your choice). Once you've deserialized the json to a dict, just loop through each key there and fill in the csv_cols dict as appropriate. Then just use Python's csv module to write it out:
import json
import csv

string = '"id":"123456","about":"YESH","can_post":true,"category":"Community","checkins":0,"description":"OLE!","has_added_app":false,"is_community_page":false,"is_published":true,"likes":48,"link":"www.fake.com","name":"Test Name","parking":{"lot":0,"street":0,"valet":0},"talking_about_count":0,"website":"www.fake.com/blog","were_here_count":0^'
string = '{%s}' % string[:-1]
json_dict = json.loads(string)

# make 'parking' a string. I'm assuming that's your only hash.
json_dict['parking'] = json.dumps(json_dict['parking'])

csv_cols_list = ['a', 'b', 'c']  # put your actual csv columns here
csv_cols = {col: '' for col in csv_cols_list}
for k, v in json_dict.iteritems():
    if k in csv_cols:
        csv_cols[k] = v
# now just write to csv using Python's csv library
Note: this is a general answer that assumes that your "json" will be valid key/value pairs. Your "parking" key is a special case you'll need to deal with somehow. I left it as is because I don't know what you want with it. I'm also assuming the '^' at the end of your string was a typo.
[EDIT] Changed to account for parking and the '^' at the end. [/EDIT]
Either way, the general idea here is what you want.
The first thing is that your input is not JSON. It's just a delimited string, where the column and value are quoted.
Here is a solution that would work:
import csv

columns = ['ID', 'ABOUT', ... ]

with open('input_file.txt', 'r') as f, open('output_file.txt', 'w') as o:
    reader = csv.reader(f, delimiter=',')
    writer = csv.writer(o, delimiter=',')
    writer.writerow(columns)
    for row in reader:
        data = {k.upper(): v for k, v in (field.split(':', 1) for field in row)}
        row = [data.get(v, '') for v in columns]
        writer.writerow(row)
In this loop, for each line we read from the input file, a dictionary is created. The key is the first value from the 'foo:bar' pair, and we convert it to upper case.
Next, for each column, we try to fetch a value from this dictionary in the order that the columns are written out. If a value for the column doesn't exist, a blank '' is returned. These values are collected in a list row. This makes sure no matter how many columns are missing, we write an equal number of columns to the output.

Writing multiple values in single cell in csv

For each user I have the list of events in which he participated.
e.g. bob : [event1,event2,...]
I want to write it to a csv file. I created a dictionary (key = user, value = list of events).
I wrote it to csv. The following is the sample output:
username, frnds
"abc" ['event1','event2']
where username is first col and frnds 2nd col
This is the code:
writer = csv.writer(open('eventlist.csv', 'ab'))
for key, value in evnt_list.items():
    writer.writerow([key, value])
When I am reading the csv I am not getting the list directly, but I am getting it in the following way:
['e','v','e','n','t','1','','...]
I also tried to write the list directly to csv, but while reading I am getting the same output.
What I want is multiple values in a single cell so that when I read a column for a row I get list of all events.
e.g
colA colB
user1,event1,event2,...
I think it's not difficult but somehow I am not getting it.
Reading
I am reading it with the help of the following:
reader = csv.reader(open("eventlist.csv"))
reader.next()
for row in reader:
    tmp = row[1]
    print tmp     # it is printing the whole list but
    print tmp[0]  # the output is [
    print tmp[1]  # output is 'e' it should have been 'event1'
    print tmp[2]  # output is 'v' it should have been 'event2'
You have to format your values into a single string:
import csv

with open('eventlist.csv', 'ab') as f:
    writer = csv.writer(f, delimiter=' ')
    for key, value in evnt_list.items():
        writer.writerow([key, ','.join(value)])
exports as
key1 val11,val12,val13
key2 val21,val22,val23
READING: Here you have to keep in mind that you converted your Python list into a formatted string, so you have to split it back into a list yourself when reading:
with open("eventlist.csv") as f:
    csvr = csv.reader(f, delimiter=' ')
    csvr.next()
    for rec in csvr:
        key, values_txt = rec
        values = values_txt.split(',')
        print key, values
works as expected.
You seem to be saying that your evnt_list is a dictionary whose keys are strings and whose values are lists of strings. If so, then the CSV-writing code you've given in your question will write a string representation of a Python list into the second column. When you read anything in from CSV, it will just be a string, so once again you'll have a string representation of your list. For example, if you have a cell that contains "['event1', 'event2']" you will be reading in a string whose first character (at position 0) is [, second character is ', third character is e, etc. (I don't think your tmp[1] is right; I think it is really ', not e.)
It sounds like you want to reconstruct the Python object, in this case a list of strings. To do that, use ast.literal_eval:
import ast
cell_string_value = "['event1', 'event2']"
cell_object = ast.literal_eval(cell_string_value)
Incidentally, the reason to use ast.literal_eval instead of just eval is safety. eval allows arbitrary Python expressions and is thus a security risk.
Also, what is the purpose of the CSV, if you want to get the list back as a list? Will people be reading it (in Excel or something)? If not, then you may want to simply save the evnt_list object using pickle or json, and not bother with the CSV at all.
Edit: I should have read more carefully; the data from evnt_list is being appended to the CSV, and neither pickle nor json is easily appendable. So I suppose CSV is a reasonable and lightweight way to accumulate the data. A full-blown database might be better, but that would not be as lightweight.
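If you ever do want an append-friendly alternative, one option is the JSON Lines idea used earlier on this page: append one json.dumps-ed record per line and rebuild the dict by reading the file back line by line. A sketch, assuming evnt_list is the user-to-events dict from the question and eventlist.jsonl is a placeholder filename:
import json

with open("eventlist.jsonl", "a") as f:          # append one record per run
    for key, value in evnt_list.items():
        f.write(json.dumps({"user": key, "events": value}) + "\n")

with open("eventlist.jsonl") as f:               # the event lists come back as real lists
    restored = {rec["user"]: rec["events"]
                for rec in (json.loads(line) for line in f)}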
