Unable to extract required field from JSON text - python

I am trying to extract the field "Engineering Lead" and its corresponding value from the JSON text below, but when I try to access it directly I get a KeyError, as shown in code 1. Since that did not work, I tried looping over the items to fetch the "Engineering Lead" key and its value (code 2), but it still throws the same error. Any help would be appreciated.
json text:
{'expand': 'renderedFields,names,schema,operations,editmeta,changelog,versionedRepresentations', 'id': '11659640', 'self': '/rest/api/2/issue/11659640', 'key': 'TOOLSTEST-2651', 'fields': {'description': 'h2. Main\r\n * *:*\r\n * *Application ISO:*\xa0Tony Zeinoun\r\n * *Engineering Lead:*\xa0Peter james\r\n * *Application Architect:*\xa0John david\r\n * *Divisional Architect:*\xa0Robert denuvit'}}
code 1:
engLeadDetails = data_load['fields']['* *Engineering Lead']
Code 2:
engLeadDetails = data_load['fields']
for k,v in engLeadDetails.items():
    if (k == '* *Engineering Lead'):
        print(v)
Error:
Traceback (most recent call last):
File "/Users/peter/abc.py", line 32, in <module>
engLeadDetails = data_load['fields']['* *Engineering Lead']
KeyError: '* *Engineering Lead'

Python can't find that key because it doesn't exist: look at the JSON text again and you'll see that * *Engineering Lead is part of a larger string (the value of the description field), not a key of its own.
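A quick way to confirm this is to print the keys that actually exist (a minimal check, using the data_load from the question):
print(data_load['fields'].keys())          # dict_keys(['description']) - there is no 'Engineering Lead' key
print(data_load['fields']['description'])  # the string that actually contains "Engineering Lead:"
So the name has to be parsed out of the description string.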

KeyError means that the key * *Engineering Lead doesn't exist in the dictionary.
It appears the delimiter of the description (where your EngLead is stored) is \r\n.
Using this we can split the description to get each role.
job_details = data_load["fields"]["description"].split("\r\n")
Removing the arbitrary strings (the heading and the empty role entry), this leaves us with
job_details = [
    "* *Application ISO:* Tony Zeinoun",
    "* *Engineering Lead:* Peter james",
    "* *Application Architect:* John david",
    "* *Divisional Architect:* Robert denuvit",
]
I am assuming you want the name of the person in each position.
Now we remove arbitrary characters from each string.
job_dict = {}
for s in job_details:
    s = s.replace("*", "").strip()
    job, person = s.split(":")
    job_dict[job] = person.strip()
job_dict is now clean, with easy key access to each job.
Resultant Dict:
{
'Application Architect': 'John david',
'Application ISO': 'Tony Zeinoun',
'Divisional Architect': 'Robert denuvit',
'Engineering Lead': 'Peter james'
}
print(job_dict["Engineering Lead"]) # Peter james

You can convert the description string into a dictionary by splitting on the \r\n sequence, then breaking each line into a role/name key/value pair and adding it to a dictionary.
The \W* in the regexp below strips the non-alphanumeric prefix off the roles and names; e.g., "* *Application ISO" => "Application ISO", etc.
Try something like this:
import re

data = {}
for s in data_load['fields']['description'].split('\r\n'):
    if m := re.search(r'^\W*(.*?):\W*(.+)', s):
        if label := m.group(1):
            data[label] = m.group(2)
print(data)
Output:
{'Application ISO': 'Tony Zeinoun', 'Engineering Lead': 'Peter james', 'Application Architect': 'John david', 'Divisional Architect': 'Robert denuvit'}
Then you can grab a particular role/person out:
print(">>", data.get("Engineering Lead"))
Outputs:
>> Peter james

Related

Is there a better way to capture all the regex patterns in matching with nested lists within a dictionary?

I am trying out a simple text-matching activity: I scraped blog post titles and try to match them with my pre-defined categories whenever I find specific keywords.
So for example, the title of the blog post is
"Capture Perfect Night Shots with the Oppo Reno8 Series"
Once I ensure that "Oppo" is included in my categories, "Oppo" should match with my "phone" category like so:
categories = {"phone" : ['apple', 'oppo', 'xiaomi', 'samsung', 'huawei', 'nokia'],
"postpaid" : ['signature', 'postpaid'],
"prepaid" : ['power all', 'giga'],
"sku" : ['data', 'smart bro'],
"ewallet" : ['gigapay'],
"event" : ['gigafest'],
"software" : ['ios', 'android', 'macos', 'windows'],
"subculture" : ['anime', 'korean', 'kpop', 'gaming', 'pop', 'culture', 'lgbtq', 'binge', 'netflix', 'games', 'ml', 'apple music'],
"health" : ['workout', 'workouts', 'exercise', 'exercises'],
"crypto" : ['axie', 'bitcoin', 'coin', 'crypto', 'cryptocurrency', 'nft'],
"virtual" : ['metaverse', 'virtual']}
Then my dataframe would look like this
Fortunately I found a reference on how to use regex to map to nested dictionaries, but it can't seem to work past the first couple of words.
Reference is here
So once I use the code
def put_category(cats, text):
    regex = re.compile("(%s)" % "|".join(map(re.escape, categories.keys())))
    if regex.search(text):
        ret = regex.search(text)
        return ret[0]
    else:
        return 'general'
It usually falls back to "general" as the category, even when doing it in lowercase, as seen here.
I'd prefer to use the current method of inputting values inside the dictionary for this matching activity instead of running pure regex patterns and then putting it through fuzzy matching for the result.
You can create a reverse mapping that maps keywords to categories instead, so that you can efficiently return the corresponding category when a match is found:
import re

mapping = {keyword: category for category, keywords in categories.items() for keyword in keywords}

def put_category(mapping, text):
    match = re.search(rf'\b(?:{"|".join(map(re.escape, mapping))})\b', text, re.I)
    if match:
        return mapping[match[0].lower()]
    return 'general'
print(put_category(mapping, "Capture Perfect Night Shots with the Oppo Reno8 Series"))
This outputs:
phone
Demo: https://replit.com/#blhsing/BlandAdoredParser
In this case, you are matching exact words, and not patterns. You can do it without regular expressions.
Going back to your example:
import pandas as pd
CAT_DICT = {"phone" : ['apple', 'oppo', 'xiaomi', 'samsung', 'huawei', 'nokia'],
"postpaid" : ['signature', 'postpaid'],
"prepaid" : ['power all', 'giga'],
"sku" : ['data', 'smart bro'],
"ewallet" : ['gigapay'],
"event" : ['gigafest'],
"software" : ['ios', 'android', 'macos', 'windows'],
"subculture" : ['anime', 'korean', 'kpop', 'gaming', 'pop', 'culture', 'lgbtq', 'binge', 'netflix', 'games', 'ml', 'apple music'],
"health" : ['workout', 'workouts', 'exercise', 'exercises'],
"crypto" : ['axie', 'bitcoin', 'coin', 'crypto', 'cryptocurrency', 'nft'],
"virtual" : ['metaverse', 'virtual']}
df = pd.DataFrame({"title": [
"Capture Perfect Night Shots with the Oppo Reno8 Series",
"Personal is Powerful: Why Apple's iOS 16 is the Smartest update"
]})
You can define this function to assign categories to each title:
def assign_cat(title: str, cat_dict: dict[str, list[str]]) -> list[str]:
    title_low = title.lower()
    categories = list()
    for c, words in cat_dict.items():
        if any([w in title_low for w in words]):
            categories.append(c)
    if len(categories) == 0:
        categories.append("general")
    return categories
The key part is here: any([w in title_low for w in words]). For each keyword in a category, you check whether it is present in the lowercased title, and if ANY of the keywords is present, you associate that category with the title.
You get a list of categories for each title.
The advantage of this approach is that a title can have multiple categories assigned to it (see the 2nd title)
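As a rough illustration (assuming the df and CAT_DICT defined above), applying it per row could look like this:
df["categories"] = df["title"].apply(lambda title: assign_cat(title, CAT_DICT))
print(df[["title", "categories"]])
# Oppo title      -> ['phone']
# Apple/iOS title -> ['phone', 'software']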

How to structure a list with JSON objects in Python?

I got a list in Python with Twitter user information and exported it with Pandas to an Excel file.
One row is one Twitter user with nearly all information of the user (name, #-tag, location etc.)
Here is my code to create the list and fill it with the user data:
def get_usernames(userids, api):
    fullusers = []
    u_count = len(userids)
    try:
        for i in range(int(u_count/100) + 1):
            end_loc = min((i + 1) * 100, u_count)
            fullusers.extend(
                api.lookup_users(user_ids=userids[i * 100:end_loc])
            )
        print('\n' + 'Done! We found ' + str(len(fullusers)) + ' follower in total for this account.' + '\n')
        return fullusers
    except:
        import traceback
        traceback.print_exc()
        print('Something went wrong, quitting...')
The only problem is that every row is one JSON object and therefore one long comma-separated string. I would like to create headers (no problem with Pandas) and only write parts of the string (e.g. ID or name) to columns.
Here is an example of a row from my output.xlsx:
User(_api=<tweepy.api.API object at 0x16898928>, _json={'id': 12345, 'id_str': '12345', 'name': 'Jane Doe', 'screen_name': 'jdoe', 'location': 'Nirvana, NI', 'description': 'Just some random descrition')
I have two ideas, but I don't know how to realize them due to my lack of skills and experience with Python.
Create a loop which saves certain parts ('id','name' etc.) from the JSON-string in colums.
Cut off the User(_api=<tweepy.api. API object at 0x16898928>, _json={ at the beginning and ) at the end, so that I can export the file as CSV.
Could anyone help me out with one of my two solutions or suggest a "simple" way to do this?
fyi: I want to do this to gather data for my thesis.
Try the python json library:
import json

# json.loads needs valid JSON: double-quoted strings and a closing brace,
# so the single-quoted repr shown in the question has to be rewritten first.
jsonstring = '{"id": 12345, "id_str": "12345", "name": "Jane Doe", "screen_name": "jdoe", "location": "Nirvana, NI", "description": "Just some random description"}'
jsondict = json.loads(jsonstring)
# type(jsondict) == dict
Now you can just extract the data you want from it:
id = jsondict["id"]
name = jsondict["name"]
newdict = {"id":id,"name":name}

How to feed CoreNLP some pre-labeled Named Entities?

I want to use Stanford CoreNLP to pull out Coreferences and start working on the Dependencies of pre-labeled text. I eventually hope to build graph nodes and edges between related Named Entities. I am working in Python, but using nltk's Java functions to call the "edu.stanford.nlp.pipeline.StanfordCoreNLP" jar directly (which is what nltk does behind the scenes anyway).
My pre-labeled text is in this format:
PRE-LABELED: During his youth, [PERSON: Alexander III of Macedon] was tutored by [PERSON: Aristotle] until age 16. Following the conquest of [LOCATION: Anatolia], [PERSON: Alexander] broke the power of [LOCATION: Persia] in a series of decisive battles, most notably the battles of [LOCATION: Issus] and [LOCATION: Gaugamela]. He subsequently overthrew [PERSON: Persian King Darius III] and conquered the [ORGANIZATION: Achaemenid Empire] in its entirety.
What I tried to do is tokenize my sentences myself, building a list of tuples in IOB format: [ ("During","O"), ("his","O"), ("youth","O"), ("Alexander","B-PERSON"), ("III","I-PERSON"), ...]
However, I can't figure out how to tell CoreNLP to take this tuple list as a starting point, building additional Named Entities that weren't initially labeled and finding coreferences on these new, higher-quality tokenized sentences. I obviously tried simply stripping out my labels and letting CoreNLP do this by itself, but CoreNLP is just not as good at finding the Named Entities as the human-tagged pre-labeled text.
I need an output as below. I understand that it will be difficult to use Dependencies to get Edges in this way, but I need to see how far I can get.
DESIRED OUTPUT:
[Person 1]:
Name: Alexander III of Macedon
Mentions:
* "Alexander III of Macedon"; Sent1 [4,5,6,7] # List of tokens
* "Alexander"; Sent2 [6]
* "He"; Sent3 [1]
Edges:
* "Person 2"; "tutored by"; "Aristotle"
[Person 2]:
Name: Aristotle
[....]
How can I feed CoreNLP some pre-identified Named Entities, and still get help with additional Named Entities, with Coreference, and with Basic Dependencies?
P.S. Note that this is not a duplicate of NLTK Named Entity Recognition with Custom Data. I'm not trying to train a new classifier with my pre-labeled NER; I'm only trying to add CoreNLP's entities to my own when running coreference (including mentions) and dependencies on a given sentence.
The answer is to make a Rules file with Additional TokensRegexNER Rules.
I used a regex to group out the labeled names. From this I built a rules tempfile which I passed to the corenlp jar with -ner.additional.regexner.mapping mytemprulesfile.
Alexander III of Macedon PERSON PERSON,LOCATION,ORGANIZATION,MISC
Aristotle PERSON PERSON,LOCATION,ORGANIZATION,MISC
Anatolia LOCATION PERSON,LOCATION,ORGANIZATION,MISC
Alexander PERSON PERSON,LOCATION,ORGANIZATION,MISC
Persia LOCATION PERSON,LOCATION,ORGANIZATION,MISC
Issus LOCATION PERSON,LOCATION,ORGANIZATION,MISC
Gaugamela LOCATION PERSON,LOCATION,ORGANIZATION,MISC
Persian King Darius III PERSON PERSON,LOCATION,ORGANIZATION,MISC
Achaemenid Empire ORGANIZATION PERSON,LOCATION,ORGANIZATION,MISC
I have aligned this list for readability, but these are tab-separated values.
An interesting finding is that some multi-word pre-labeled entities stay multi-word as originally labeled, whereas running corenlp without the rules files will sometimes split these tokens into separate entities.
I had wanted to specifically identify the named-entity tokens, figuring it would make coreferences easier, but I guess this will do for now. How often are entity names identical but unrelated within one document, anyway?
Example (execution takes ~70secs)
import os, re, tempfile, json, nltk, pprint
from subprocess import PIPE
from nltk.internals import (
    find_jar_iter,
    config_java,
    java,
    _java_options,
    find_jars_within_path,
)

def ExtractLabeledEntitiesByRegex( text, regex ):
    rgx = re.compile(regex)
    nelist = []
    for mobj in rgx.finditer( text ):
        ne = mobj.group('ner')
        try:
            tag = mobj.group('tag')
        except IndexError:
            tag = 'PERSON'
        mstr = text[mobj.start():mobj.end()]
        nelist.append( (ne, tag, mstr) )
    cleantext = rgx.sub(r"\g<ner>", text)
    return (nelist, cleantext)

def GenerateTokensNERRules( nelist ):
    rules = ""
    for ne in nelist:
        rules += ne[0] + '\t' + ne[1] + '\tPERSON,LOCATION,ORGANIZATION,MISC\n'
    return rules

def GetEntities( origtext ):
    nelist, cleantext = ExtractLabeledEntitiesByRegex( origtext, r'(\[(?P<tag>[a-zA-Z]+)\:\s*)(?P<ner>(\s*\w)+)(\s*\])' )

    # Write the cleaned text and the generated NER rules to temp files for the jar.
    origfile = tempfile.NamedTemporaryFile(mode='r+b', delete=False)
    origfile.write( cleantext.encode('utf-8') )
    origfile.flush()
    origfile.seek(0)

    nerrulefile = tempfile.NamedTemporaryFile(mode='r+b', delete=False)
    nerrulefile.write( GenerateTokensNERRules(nelist).encode('utf-8') )
    nerrulefile.flush()
    nerrulefile.seek(0)

    java_options = '-mx4g'
    config_java(options=java_options, verbose=True)

    stanford_jar = '../stanford-corenlp-full-2018-10-05/stanford-corenlp-3.9.2.jar'
    stanford_dir = os.path.split(stanford_jar)[0]
    _classpath = tuple(find_jars_within_path(stanford_dir))

    cmd = ['edu.stanford.nlp.pipeline.StanfordCoreNLP',
           '-annotators', 'tokenize,ssplit,pos,lemma,ner,parse,coref,coref.mention,depparse,natlog,openie,relation',
           '-ner.combinationMode', 'HIGH_RECALL',
           '-ner.additional.regexner.mapping', nerrulefile.name,
           '-coref.algorithm', 'neural',
           '-outputFormat', 'json',
           '-file', origfile.name
           ]

    stdout, stderr = java( cmd, classpath=_classpath, stdout=PIPE, stderr=PIPE )  # Couldn't get working - stdin=textfile
    # PrintJavaOutput( stdout, stderr )  # debug helper; its definition is not included in this post

    origfilenametuple = os.path.split(origfile.name)
    jsonfilename = origfilenametuple[len(origfilenametuple) - 1] + '.json'

    os.unlink( origfile.name )
    os.unlink( nerrulefile.name )
    origfile.close()
    nerrulefile.close()

    # CoreNLP writes its JSON output into the working directory, named after the input file.
    with open( jsonfilename ) as jsonfile:
        jsondata = json.load(jsonfile)

    currentid = 0
    entities = []
    for sent in jsondata['sentences']:
        for thisentity in sent['entitymentions']:
            tag = thisentity['ner']
            if tag == 'PERSON' or tag == 'LOCATION' or tag == 'ORGANIZATION':
                entity = {
                    'id': currentid,
                    'label': thisentity['text'],
                    'tag': tag
                }
                entities.append( entity )
                currentid += 1
    return entities

#### RUN ####
corpustext = "During his youth, [PERSON:Alexander III of Macedon] was tutored by [PERSON: Aristotle] until age 16. Following the conquest of [LOCATION: Anatolia], [PERSON: Alexander] broke the power of [LOCATION: Persia] in a series of decisive battles, most notably the battles of [LOCATION: Issus] and [LOCATION: Gaugamela]. He subsequently overthrew [PERSON: Persian King Darius III] and conquered the [ORGANIZATION: Achaemenid Empire] in its entirety."
entities = GetEntities( corpustext )

for thisent in entities:
    pprint.pprint( thisent )
Output
{'id': 0, 'label': 'Alexander III of Macedon', 'tag': 'PERSON'}
{'id': 1, 'label': 'Aristotle', 'tag': 'PERSON'}
{'id': 2, 'label': 'his', 'tag': 'PERSON'}
{'id': 3, 'label': 'Anatolia', 'tag': 'LOCATION'}
{'id': 4, 'label': 'Alexander', 'tag': 'PERSON'}
{'id': 5, 'label': 'Persia', 'tag': 'LOCATION'}
{'id': 6, 'label': 'Issus', 'tag': 'LOCATION'}
{'id': 7, 'label': 'Gaugamela', 'tag': 'LOCATION'}
{'id': 8, 'label': 'Persian King Darius III', 'tag': 'PERSON'}
{'id': 9, 'label': 'Achaemenid Empire', 'tag': 'ORGANIZATION'}
{'id': 10, 'label': 'He', 'tag': 'PERSON'}

Split string, unicode, unicode, string in python

I was trying to split a combination of string and unicode in Python. The split has to be made on the ResultSet object retrieved from the website. Using the code below, I am able to get the details (user details, in this case):
from bs4 import BeautifulSoup
import urllib2
import re
url = "http://www.mouthshut.com/vinay_beriwal"
profile_user = urllib2.urlopen(url)
profile_soup = BeautifulSoup(profile_user.read())
usr_dtls = profile_soup.find("div",id=re.compile("_divAboutMe")).find_all('p')
for dt in usr_dtls:
usr_dtls = " ".join(dt.text.split())
print(usr_dtls)
The output is as below:
i love yellow..
Name: Vinay Beriwal
Age: 39 years
Hometown: New Delhi, India
Country: India
Member since: Feb 11, 2016
What I need is to create distinct 5 variables as Name, Age, Hometown, Country, Member since and store the corresponding value after ':' for same.
Thanks
You can use a dictionary to store name-value pairs. For example:
my_dict = {"Name":"Vinay","Age":21}
In my_dict, Name and Age are the keys of the dictionary; you can access values like this:
print (my_dict["Name"]) #This will print Vinay
Also, it's better to use complete words for variable names.
results = profile_soup.find("div", id=re.compile("_divAboutMe")).find_all('p')
user_data = {}  # dictionary initialization
for result in results:
    result = " ".join(result.text.split())
    try:
        var, value = result.strip().split(':')
        user_data[var.strip()] = value.strip()
    except:
        pass

# If you print the user_data now
print(user_data)
'''
This is what it'll print
{'Age': '39 years', 'Country': 'India', 'Hometown': 'New Delhi, India', 'Name': 'Vinay Beriwal', 'Member since': 'Feb 11, 2016'}
'''
You can use a dictionary to store your data:
my_dict = {}
for dt in usr_dtls:
    item = " ".join(dt.text.split())
    try:
        if ':' in item:
            k, v = item.split(':')
            my_dict[k.strip()] = v.strip()
    except:
        pass
Note: You should not reassign usr_dtls inside your for loop, because that would override your original usr_dtls.

Python: Problems when iterating over a dictionary

So, I have been working on this simple Python program to get familiar with dictionaries. Basically, it works as a database which you can search. If your entry matches a dictionary key, it brings up the information for that entry.
Family = {'Jim' : ['cool guy', 'has facial hair'],
          'Ned' : ['hot stuff', ' wears Tees']}
query = input("Look up database on whom? > ")
for (name, info) in Family.items():
    if name in query or name.lower() in query:
        print("{} is {}".format(name, info))
This ^ works. However, when I tried to add an else clause to deal with non-existent entries, I get this:
else:
    print('Value not found!')
It prints Value not found! many times before bringing up the value. If I try to add a 'go back to start' function, it doesn't even bring up a registered value. I know this happens because the loop iterates over the dict one entry at a time, so e.g. 1) Jim is true, then 2) Ned is false.
How do I improve this code so that it reports a non-existent entry and then restarts the program? Thanks.
You will need to take care of case insensitivity in your code. Iterate through the list to ensure that the name exists before continuing:
Family = {'Jim' : ['cool guy', 'has facial hair'],
          'Ned' : ['hot stuff', ' wears Tees']}
names = [name.lower() for name in Family]
def find(query):
    if query.lower() in names:
        info = [Family[n] for n in Family if n.lower() == query.lower()]
        print('{} is {}'.format(query, info))
    else:
        print('{} not found'.format(query))
If you try it with the following sample:
find('Ned')
find('ned')
find('no ned')
You will get the following results:
Ned is [['hot stuff', ' wears Tees']]
ned is [['hot stuff', ' wears Tees']]
no ned not found
This is one way to do it:
Family = {'Jim' : ['cool guy', 'has facial hair'],
          'Ned' : ['hot stuff', ' wears Tees']}
query = input("Look up database on whom? > ")
if query in Family.keys():
    for (name, info) in Family.items():
        if name in query or name.lower() in query:
            print("{} is {}".format(name, info))
else:
    print("Print Something - Not in Family")
