I am trying to convert a JSON file into key/value pairs, so I can then use GroupByKey() to deduplicate them.
This is the original content of the file:
{"tax_pd":"200003","ein":"720378282"}
{"tax_pd":"200012","ein":"274027765"}
{"tax_pd":"200012","ein":"042746989"}
{"tax_pd":"200012","ein":"205993971"}
I have formatted it like so:
(u'201208', u'010620100')
(u'201208', u'860785769')
(u'201208', u'371650138')
(u'201208', u'237253410')
I want to turn these into key/value pairs so I can apply GroupByKey in my Dataflow pipeline. I believe I need to turn it into a dictionary first?
I'm new to Python and the Google Cloud tools, so any help would be great!
EDIT: Code snippets

with beam.Pipeline(options=pipeline_options) as p:
    (p
     | 'ReadInputText' >> beam.io.ReadFromText(known_args.input)
     | 'YieldWords' >> beam.ParDo(ExtractWordsFn())
     # | 'GroupByKey' >> beam.GroupByKey()
     | 'WriteInputText' >> beam.io.WriteToText(known_args.output))

class ExtractWordsFn(beam.DoFn):
    def process(self, element):
        words = re.findall(r'[0-9]+', element)
        yield tuple(words)
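As a note on the commented-out GroupByKey step: GroupByKey expects each element to be a 2-tuple of (key, value), whereas ExtractWordsFn yields a tuple of every number it finds. A minimal sketch of a DoFn that parses each JSON line directly instead (the class name is illustrative, not from the original post):

import json

import apache_beam as beam

class ExtractKeyValueFn(beam.DoFn):  # illustrative name, not from the post
    def process(self, element):
        record = json.loads(element)
        # Emit (tax_pd, ein) 2-tuples so GroupByKey can group eins by tax_pd
        yield (record['tax_pd'], record['ein'])

Swapping this into the ParDo step and un-commenting the GroupByKey line should then group the ein values under each tax_pd.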
A quick pure-Python solution would be:
import json

with open('path/to/my/file.json', 'r') as fh:
    lines = [json.loads(l) for l in fh]
# [{'tax_pd': '200003', 'ein': '720378282'}, {'tax_pd': '200012', 'ein': '274027765'}, {'tax_pd': '200012', 'ein': '042746989'}, {'tax_pd': '200012', 'ein': '205993971'}]
Looking at your data, tax_pd values are not unique, so a plain key:value mapping from tax_pd to ein would lose data. Assuming there will be collisions, you could do the following:
myresults = {}
for line in lines:
    # Use tax_pd as the key and ein as the value; this can be extended to other keys
    # .get() returns None if the tax_pd is not already present
    if not myresults.get(line.get('tax_pd')):
        myresults[line.get('tax_pd')] = [line.get('ein')]
    else:
        myresults[line.get('tax_pd')] = list(set([line.get('ein'), *myresults[line.get('tax_pd')]]))

# results
# {'200003': ['720378282'], '200012': ['205993971', '042746989', '274027765']}
This way you have unique keys, each with a list of corresponding unique ein values. set automatically dedups the list, and wrapping it in list() converts it back to a list. Not completely sure if that's what you're going for or not.
You can then look up values by tax_pd explicitly:
myresults.get('200012')
# ['205993971', '042746989', '274027765']
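If list order doesn't matter, a defaultdict of sets keeps the dedup logic simpler; this is a minimal alternative sketch, not from the original answer:

from collections import defaultdict

myresults = defaultdict(set)
for line in lines:
    # set.add() dedups automatically, so no list rebuilding is needed
    myresults[line['tax_pd']].add(line['ein'])

myresults.get('200012')
# {'274027765', '042746989', '205993971'}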
EDIT: To read from Cloud Storage, the code snippet here, translated to be a bit easier to use:
with gcs.open(filename) as fh:
    lines = fh.read().split('\n')
You can set up your gcs object using their API docs.
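If you're using the google-cloud-storage client library instead, a rough sketch would be (the bucket and object names here are placeholders):

from google.cloud import storage

client = storage.Client()
bucket = client.bucket('my-bucket')            # placeholder bucket name
blob = bucket.blob('path/to/my/file.json')     # placeholder object path
lines = blob.download_as_text().splitlines()   # one JSON record per line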
Related
I have a JSON file which I will read, and based on the xyz details I will create an Excel report. Below is the sample JSON file I will use to extract the information; it holds data in the format of multiple dictionaries.
Now my requirement is to fetch the xyz values one by one and, based on certain fields, create a report. Below is a small snippet of the code where I am reading the file and populating results based on a key. The data I am referencing comes from reading the file.
def pop_ws(dictionary, ws):
    r = 1
    count = 1
    for k, v in dictionary.items():
        offs = len(v['current'])
        ws.cell(row=r+1, column=1).value = k
        ws.cell(row=r+1, column=4).value = v['abc']
        ws.cell(row=r+1, column=5).value = v['def']
        wrk = read_cves(k)
        count += 1
        if wrk != 'SAT':
            ws.cell(row=r+1, column=7).value = k
            ws.cell(row=r+1, column=8).value = tmp1['public_date']
            if 'cvss' in list(tmp1.keys()):
                .
                .
                .
def read_f(data):
    with open('dat.json') as f:
        wrk = f.read()
I am pretty much stuck on how to code def read_f(data): so that it reads dat.json and, based on the value in data, fetches the details defined in the dictionary structure one by one and populates them as defined under pop_ws in my code.
The data argument in def read_f(data): will be a dynamic value; based on it I need to filter the dictionaries that have that value defined against a key, and then extract the whole dictionary into another JSON file.
Any suggestion on this will be appreciated.
Use the json package to load JSON-format data, like below:
# Python program to read a json file
import json

# Opening JSON file
f = open('data.json')

# json.load returns the JSON object as a dictionary
data = json.load(f)

# Iterating through the json list
for i in data['emp_details']:
    print(i)

# Closing file
f.close()
I got this from this link; now you can get a dict from the file.
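As a side note, a with block (as used in the questions above) closes the file automatically, so the explicit f.close() isn't needed; a minimal equivalent sketch:

import json

with open('data.json') as f:
    data = json.load(f)  # the file is closed when the block exits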
Next you can just filter the dict for a specific value, like below.
You should use the filter() built-in function, with a function that returns True if a dictionary contains one of the values.
def filter_func(dic, filterdic):
    for k, v in filterdic.items():
        if k == 'items':
            if any(elemv in dic[k] for elemv in v):
                return True
        elif v == dic[k]:
            return True
    return False

def filter_cards(deck, filterdic):
    return list(filter(lambda dic, filterdic=filterdic: filter_func(dic, filterdic), deck))
You should pass a dictionary as the second argument:
filter_cards(deck, {'CVE': 'moderate'})
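For instance, with a made-up deck (the field values here are only for illustration):

deck = [
    {'CVE': 'moderate', 'public_date': '2020-09-16'},
    {'CVE': 'low', 'public_date': '2020-01-01'},
]
filter_cards(deck, {'CVE': 'moderate'})
# [{'CVE': 'moderate', 'public_date': '2020-09-16'}]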
Hopefully this is helpful for your situation.
Thanks.
Once you get your json object, you can access each value using the key like so:
print(json_obj["key"]) #prints the json value for that key
In your case
print(wrk["CVE"]) # prints CVE-2020-25624
I have a list of dictionaries in a json file.
I have iterated through the list and each dictionary to obtain two specific key:value pairs from each dictionary for each element.
i.e. List[dictionary{i(key_x:value_x, key_y:value_y)}]
My question is now:
How do I place these two new key: value pairs in a new list/dictionary/array/tuple, representing the two key: value pairs extracted for each listed element in the original?
To be clear:
ORIGINAL_LIST (i.e. with each element being a nested dictionary) =
[{"a":{"blah":"blah",
"key_1":value_a1,
"key_2":value_a2,
"key_3":value_a3,
"key_4":value_a4,
"key_5":value_a5,},
"b":"something_a"},
{"a":{"blah":"blah",
"key_1":value_b1,
"key_2":value_b2,
"key_3":value_b3,
"key_4":value_b4,
"key_5":value_b5,},
"b":"something_b"}]
So my code so far is:
import json
from collections import *
from pprint import pprint

json_file = "/some/path/to/json/file"

with open(json_file) as json_data:
    data = json.load(json_data)

for i in data:
    event = dict(i)
    event_key_b = event.get('b')
    event_key_2 = event.get('key_2')
    print(event_key_b)  # print value of "b" for each nested dict 'i'
    print(event_key_2)  # print value of "key_2" for each nested dict 'i'
To be clear:
FINAL_LIST(i.e. with each element being a nested dictionary) =
[{"b":"something_a", "key_2":value_2},
{"b":"something_b", "key_2":value_2}]
So I have an answer for getting the keys into individual dictionaries, in the code below. The only problem is that the value for 'key_2' in the original JSON dictionaries is either an int value or "" for values which are 0. My script just returns None for all instances of value_2 for key_2. How can I get it to read the appropriate values for 'value_2'? I only want to return dictionaries for cases where 'value_2' > 0 (i.e. where value_2 != "").
Below is the current code:
import json
from pprint import pprint

json_file = "/some/path/to/json/file"

with open(json_file) as json_data:
    data = json.load(json_data)

for i in data:
    event_key_b = i.get('b')
    event_key_2 = i.get('key_2')  # this is the value that comes back as None
    x = {'b': event_key_b, 'key_2': event_key_2}
    print(x)
Also, if there are any more elegant solutions anyone can think of, I would really be interested in learning them. Some of the JSON files I'm looking at can range from 200 dictionary entries in the original list to 2,000,000. I'm planning to feed my parsed results into a message queue for processing by a different service, so any efficiencies in the code will help with scalability. Also, if anyone has any recommendations on Redis vs. RabbitMQ, I'd really appreciate it.
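For what it's worth, a minimal sketch of one way to build FINAL_LIST, assuming 'key_2' actually lives inside the nested 'a' dictionary (which would explain why the top-level i.get('key_2') above returns None):

final_list = [
    {'b': event['b'], 'key_2': event['a']['key_2']}
    for event in data
    # keep only entries where key_2 has a real value (not "" or missing)
    if event['a'].get('key_2') not in ("", None)
]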
I have data that look like this:
data = 'somekey:value4thekey&second-key:valu3-can.be?anything&third_k3y:it%can have spaces;too'
In a nice human-readable way it would look like this:
somekey : value4thekey
second-key : valu3-can.be?anything
third_k3y : it%can have spaces;too
How should I parse the data so when I do data['somekey'] I would get >>> value4thekey?
Note: The & is connecting all of the different items
How I am currently tackling it
Currently, I use this ugly solution:
all = data.split('&')
for i in all:
    if i.startswith('somekey'):
        print(i)
This solution is very bad due to multiple obvious limitations. It would be much better if I could somehow parse it into a Python tree object.
I'd split the string by & to get a list of key-value strings, and then split each such string by : to get key-value pairs. Using dict and list comprehensions actually makes this quite elegant:
result = {k:v for k, v in (part.split(':') for part in data.split('&'))}
You can parse your data directly into a dictionary: split on the item separator &, then split again on the key/value separator ':'.
table = {
    key: value for key, value in
    (item.split(':') for item in data.split('&'))
}
This allows you direct access to elements, e.g. as table['somekey'].
If you don't have objects within a value, you can parse it to a dictionary
structure = {}
for ele in data.split('&'):
    ele_split = ele.split(':')
    structure[ele_split[0]] = ele_split[1]
You can now use structure to get the values:
print(structure["somekey"])
# returns "value4thekey"
Since the items have a common format of "key:value", you can use that as a parameter to split on.
for i in x.split("&"):
    print(i.split(":"))
This prints a two-item list for each pair, where index 0 is the key and index 1 is the value. Iterate through the pairs and load them into a dictionary (as shown below), and you should be good!
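A minimal sketch of that loading step, using the data string from the question (split(':', 1) guards against a stray colon in a value):

result = {}
for item in data.split("&"):
    key, value = item.split(":", 1)  # split only on the first colon
    result[key] = value

print(result["somekey"])  # value4thekey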
I'd format the data as YAML and parse the YAML:
import re
import yaml

data = 'somekey:value4thekey&second-key:valu3-can.be?anything&third_k3y:it%can have spaces;too'
yaml_data = re.sub('[:]', ': ', re.sub('[&]', '\n', data))
y = yaml.safe_load(yaml_data)
for k in y:
    print("%s : %s" % (k, y[k]))
Here's the output:
third_k3y : it%can have spaces;too
somekey : value4thekey
second-key : valu3-can.be?anything
I'm trying to process a log from Symphony using Pandas, but have some trouble with a malformed JSON which I can't parse.
An example of the log :
'{id:46025,
work_assignment:43313=>43313,
declaration:<p><strong>Bijkomende interventie.</strong></p>\r\n\r\n<p>H </p>\r\n\r\n<p><strong><em>Vaststellingen.</em></strong></p>\r\n\r\n<p><strong><em>CV. </em></strong>De.</p>=><p><strong>Bijkomende interventie.</strong></p>\r\n\r\n<p>He </p>\r\n\r\n<p><strong><em>Vaststellingen.</em></strong></p>\r\n\r\n<p><strong><em>CV. </em></strong>De.</p>,conclusions:<p>H </p>=><p>H </p>}'
What is the best way to process this?
For each part (id/work_assignment/declaration/etc) I would like to retrieve the old and new value (which are separated by "=>").
Use the following code:
def clean(my_log):
    my_log = my_log.replace("{", "").replace("}", "")  # Remove the unneeded { } (reassign, since strings are immutable)
    my_items = my_log.split(",")  # Split at the commas to get the pairs
    my_dict = {}
    for i in my_items:
        key, value = i.split(":", 1)  # Split at the first colon only to separate the key and value
        my_dict[key] = value  # Add to the dictionary
    return my_dict
The function returns a Python dictionary, which can then be converted to JSON with a serializer if needed, or used directly.
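The question also asks to split each field into its old and new value on "=>"; a minimal follow-up sketch (the function name is just for illustration):

def split_old_new(my_dict):
    result = {}
    for key, value in my_dict.items():
        if "=>" in value:
            old, new = value.split("=>", 1)
            result[key] = (old, new)
        else:
            result[key] = (value, value)  # field unchanged
    return result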
Hope I helped :D
I have a list of dictionaries that maps different IDs to a central ID, and a document with these different IDs associated with terms. I have created a function that now uses as its keys the central IDs derived from the different IDs in the document. The goFile is the document where the first column holds an ID and the second column holds a GO term. The mappingList is a list of dictionaries in which each ID in the goFile is mapped to a main ID.
My expected output is a dictionary with a main ID as key and, as value, a set of the GO terms associated with it.
def parseGO(mappingList, goFile):
    # open the file
    file = open(goFile)
    # this will be the dictionary that this function returns
    # entries will have as a key an Ensembl ID
    # and the value will be a set of GO terms
    GOdict = {}
    GOset = set()
    for line in file:
        splitline = line.split(' ')
        GO_term = splitline[1]
        value_ID = splitline[0]
        for dict in mappingList:
            if value_ID in dict:
                ENSB_term = dict[value_ID]
        # my best try
        for dict in mappingList:
            for key in GOdict.keys():
                if value_ID in dict and key == dict[value_ID]:
                    GOdict[ENSB_term].add(GO_term)
        GOdict[ENSB_term] = GOset
    return GOdict
My problem is that now I have to add to the central ID in my GOdict the terms that are associated in the document with the different IDs. To avoid duplicates I use a set (GOset). How do I do it? All my tries end with all the terms mapped to all the main IDs.
Some sample:
mappingList = [{'1234': 'mainID1', '456': 'mainID2'}, {'789': 'mainID2'}]
goFile:
1234 GOTERM1
1234 GOTERM2
456 GOTERM1
456 GOTERM3
789 GOTERM1
expected output:
GOdict = {'mainID1': set([GOTERM1, GOTERM2]), 'mainID2': set([GOTERM1, GOTERM3])}
First off, you shouldn't use the variable name 'dict', as it shadows the built-in dict class, and will cause you problems at some point.
The following should work for you:
from collections import defaultdict

def parse_go(mapping_list, go_file):
    go_dict = defaultdict(set)
    with open(go_file) as f:  # Better garbage handling using 'with'
        for line in f:
            (value_id, go_term) = line.split()  # Feel free to change the split behaviour to work better for you
            for map_dict in mapping_list:
                if value_id in map_dict:
                    go_dict[map_dict[value_id]].add(go_term)
    return go_dict
The code is fairly straightforward, but here's a breakdown anyway.
We use a default dictionary instead of a normal dictionary so we can eliminate all the 'if key in dict' or setdefault() boilerplate.
For each line in the file, we check if the first item (value_id) is a key in any of the mapping dictionaries, and if so, add the line's second item (go_term) to the set stored under the mapped main ID.
EDIT: By request, here's how to do this without defaultdict(). Assuming go_dict is just a normal dictionary (go_dict = {}), your for loop would look like:
for map_dict in mapping_list:
    if value_id in map_dict:
        esnb_entry = go_dict.setdefault(map_dict[value_id], set())
        esnb_entry.add(go_term)
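As a usage sketch with the sample data from the question (assuming the goFile contents are saved to a file; 'goFile.txt' is a hypothetical name):

mappingList = [{'1234': 'mainID1', '456': 'mainID2'}, {'789': 'mainID2'}]
GOdict = parse_go(mappingList, 'goFile.txt')
dict(GOdict)
# {'mainID1': {'GOTERM1', 'GOTERM2'}, 'mainID2': {'GOTERM1', 'GOTERM3'}}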