nltk NER word extraction - python

I have checked previous related threads, but they did not solve my issue. I have written the following code to extract named entities from text:
text = "Stallone jason's film Rocky was inducted into the National Film Registry as well as having its film props placed in the Smithsonian Museum."
tokenized = nltk.word_tokenize(text)
tagged = nltk.pos_tag(tokenized)
namedEnt = nltk.ne_chunk(tagged, binary = True)
print namedEnt
namedEnt = nltk.ne_chunk(tagged, binary = False)
which gives this sort of result:
(S
  (NE Stallone/NNP)
  jason/NN
  's/POS
  film/NN
  (NE Rocky/NNP)
  was/VBD
  inducted/VBN
  into/IN
  the/DT
  (NE National/NNP Film/NNP Registry/NNP)
  as/IN
  well/RB
  as/IN
  having/VBG
  its/PRP$
  film/NN
  props/NNS
  placed/VBN
  in/IN
  the/DT
  (NE Smithsonian/NNP Museum/NNP)
  ./.)
while I expect only the NEs as the result, like:
Stallone
Rocky
National Film Registry
Smithsonian Museum
How can I achieve this?
UPDATE
result = ' '.join([y[0] for y in x.leaves()]) for x in namedEnt.subtrees() if x.node == "NE"
print result
gives a syntax error. What is the correct way to write this?
UPDATE2
text = "Stallone jason's film Rocky was inducted into the National Film Registry as well as having its film props placed in the Smithsonian Museum."
tokenized = nltk.word_tokenize(text)
tagged = nltk.pos_tag(tokenized)
namedEnt = nltk.ne_chunk(tagged, binary = True)
print namedEnt
np = [' '.join([y[0] for y in x.leaves()]) for x in namedEnt.subtrees() if x.node == "NE"]
print np
error:
np = [' '.join([y[0] for y in x.leaves()]) for x in namedEnt.subtrees() if x.node == "NE"]
File "/usr/local/lib/python2.7/dist-packages/nltk/tree.py", line 198, in _get_node
raise NotImplementedError("Use label() to access a node label.")
NotImplementedError: Use label() to access a node label.
so I tried with
np = [' '.join([y[0] for y in x.leaves()]) for x in namedEnt.subtrees() if x.label() == "NE"]
which gives an empty result.

The namedEnt returned is actually a Tree object, which is a subclass of list. You can parse it as follows (in NLTK 3.x the node label is read with label(); older versions exposed it as .node, which is why you saw the NotImplementedError):
[' '.join([y[0] for y in x.leaves()]) for x in namedEnt.subtrees() if x.label() == "NE"]
Output:
['Stallone', 'Rocky', 'National Film Registry', 'Smithsonian Museum']
When the binary flag is set to True, a subtree is labelled only as NE or not, which is all we need above (make sure namedEnt was built with binary=True; with binary=False the labels are PERSON, ORGANIZATION, GPE, etc., not "NE"). When set to False it gives more information, such as whether the NE is an Organization, Person, etc. For some reason, the results with the flag on and off don't always agree with one another.
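Putting it together as a runnable sketch (hedged: this assumes NLTK 3.x and that the punkt, averaged_perceptron_tagger, maxent_ne_chunker and words data packages are installed):

import nltk

text = ("Stallone jason's film Rocky was inducted into the National Film Registry "
        "as well as having its film props placed in the Smithsonian Museum.")

tokenized = nltk.word_tokenize(text)
tagged = nltk.pos_tag(tokenized)
namedEnt = nltk.ne_chunk(tagged, binary=True)

# With binary=True every chunk carries the label "NE".
entities = [' '.join(leaf[0] for leaf in subtree.leaves())
            for subtree in namedEnt.subtrees()
            if subtree.label() == "NE"]
print(entities)
# ['Stallone', 'Rocky', 'National Film Registry', 'Smithsonian Museum']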

Related

Looping through tree to create a dictionary - NLTK

I'm new to Python and trying to solve a problem looping through a tree in NLTK. I'm stuck on the final output; it is not entirely correct.
I'm looking to create a dictionary with two variables, and if there is no quantity, then add the value 1.
This is the desired final output:
{quantity = 1, food = pizza}, {quantity = 1, food = coke}, {quantity = 2, food = beers}, {quantity = 1, food = sandwich}
Here is my code, any help is much appreciated!
import nltk as nltk
nltk.download()

grammar = r""" Food:{<DT>?<VRB>?<NN.*>+}
}<>+{
Quantity: {<CD>|<JJ>|<DT>}
"""
rp = nltk.RegexpParser(grammar)

def RegPar(menu):
    grammar = r"""Food:{<DT>?<VRB>?<NN.*>+}
}<>+{
Quantity: {<CD>|<JJ>|<DT>}
"""
    rp = nltk.RegexpParser(grammar)
    output = rp.parse(menu)
    return(output)

Sentences = ['A pizza margherita', 'one coke y 2 beers', 'Sandwich']
tagged_array = []
output_array = []
for s in Sentences:
    tokens = nltk.word_tokenize(s)
    tags = nltk.pos_tag(tokens)
    tagged_array.append(tags)
    output = rp.parse(tags)
    output_array.append(output)
    print(output)

dat = []
tree = RegPar(output_array)
for subtree in tree.subtrees():
    if subtree.label() == 'Food' or subtree.label() == 'Quantity':
        dat.append({(subtree.label(), subtree.leaves()[0][0])})
print(dat)
## [{('Food', 'A')}, {('Quantity', 'one')}, {('Food', 'coke')}, {('Quantity', '2')}, {('Food', 'beers')}, {('Food', 'Sandwich')}]
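One possible way to get from the parsed trees to the desired dictionaries (a hedged sketch, not from the original thread; it assumes output_array holds one parsed tree per sentence, as built in the loop above) is to walk each sentence's subtrees in order, remember the last Quantity seen, and attach it to the next Food chunk, defaulting to 1 when no quantity was found:

def menu_to_dicts(parsed_sentences):
    items = []
    for tree in parsed_sentences:
        quantity = None
        for subtree in tree.subtrees():
            if subtree.label() == 'Quantity':
                quantity = subtree.leaves()[0][0]
            elif subtree.label() == 'Food':
                # keep only the noun part of the chunk, e.g. drop a leading "A"
                nouns = [w for w, t in subtree.leaves() if t.startswith('NN')]
                food = ' '.join(nouns) if nouns else subtree.leaves()[-1][0]
                items.append({'quantity': quantity if quantity else 1, 'food': food})
                quantity = None
    return items

print(menu_to_dicts(output_array))
# e.g. [{'quantity': 1, 'food': 'pizza margherita'}, {'quantity': 'one', 'food': 'coke'},
#       {'quantity': '2', 'food': 'beers'}, {'quantity': 1, 'food': 'Sandwich'}]

Mapping number words such as 'one' to the integer 1 would still need a separate step.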

Find successively connected nouns or pronouns in string

I want to find stand-alone or successively connected nouns in a text. I put together the code below, but it is neither efficient nor Pythonic. Does anybody have a more Pythonic way of finding these nouns with spaCy?
The code below builds a dict for each token and then runs through them to find stand-alone or connected PROPN or NOUN tokens until the for loop runs out of range. It returns a list of the collected items.
def extract_unnamed_ents(doc):
    """Takes a string and returns a list of all successively connected nouns or pronouns"""
    nlp_doc = nlp(doc)
    token_list = []
    for token in nlp_doc:
        token_dict = {}
        token_dict['lemma'] = token.lemma_
        token_dict['pos'] = token.pos_
        token_dict['tag'] = token.tag_
        token_list.append(token_dict)
    ents = []
    k = 0
    for i in range(len(token_list)):
        try:
            if token_list[k]['pos'] == 'PROPN' or token_list[k]['pos'] == 'NOUN':
                ent = token_list[k]['lemma']
                if token_list[k+1]['pos'] == 'PROPN' or token_list[k+1]['pos'] == 'NOUN':
                    ent = ent + ' ' + token_list[k+1]['lemma']
                    k += 1
                    if token_list[k+1]['pos'] == 'PROPN' or token_list[k+1]['pos'] == 'NOUN':
                        ent = ent + ' ' + token_list[k+1]['lemma']
                        k += 1
                        if token_list[k+1]['pos'] == 'PROPN' or token_list[k+1]['pos'] == 'NOUN':
                            ent = ent + ' ' + token_list[k+1]['lemma']
                            k += 1
                            if token_list[k+1]['pos'] == 'PROPN' or token_list[k+1]['pos'] == 'NOUN':
                                ent = ent + ' ' + token_list[k+1]['lemma']
                                k += 1
                if ent not in ents:
                    ents.append(ent)
        except:
            pass
        k += 1
    return ents
Test:
extract_unnamed_ents('Chancellor Angela Merkel and some of her ministers will discuss at a cabinet '
                     "retreat next week ways to avert driving bans in major cities after Germany's "
                     'top administrative court in February allowed local authorities to bar '
                     'heavily polluting diesel cars.')
Out:
['Chancellor Angela Merkel',
'minister',
'cabinet retreat',
'week way',
'ban',
'city',
'Germany',
'court',
'February',
'authority',
'diesel car']
spaCy has a way of doing this, but I'm not sure it gives you exactly what you are after:
import spacy
text = """Chancellor Angela Merkel and some of her ministers will discuss
at a cabinet retreat next week ways to avert driving bans in
major cities after Germany's top administrative court
in February allowed local authorities to bar heavily
polluting diesel cars.
""".replace('\n', ' ')
nlp = spacy.load("en_core_web_sm")
doc = nlp(text)
print([i.text for i in doc.noun_chunks])
gives
['Chancellor Angela Merkel', 'her ministers', 'a cabinet retreat', 'ways', 'driving bans', 'major cities', "Germany's top administrative court", 'February', 'local authorities', 'heavily polluting diesel cars']
Here, however, the i.lemma_ attribute doesn't really give you what you want (I think this might be fixed by this recent PR).
Since that isn't quite what you are after, you could use itertools.groupby like so:
import itertools

out = []
for i, j in itertools.groupby(doc, key=lambda i: i.pos_):
    if i not in ("PROPN", "NOUN"):
        continue
    out.append(' '.join(k.lemma_ for k in j))
print(out)
gives
['Chancellor Angela Merkel', 'minister', 'cabinet retreat', 'week way', 'ban', 'city', 'Germany', 'court', 'February', 'authority', 'diesel car']
This should give you exactly the same output as your function (the output is slightly different here but I believe this is due to different spacy versions).
If you are feeling really adventurous you could use a list comprehension
out = [' '.join(k.lemma_ for k in j)
       for i, j in itertools.groupby(doc, key=lambda i: i.pos_)
       if i in ("PROPN", "NOUN")]
Note that I see slightly different results with different spaCy versions; the output above is from spacy-2.1.8.
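If you do want lemmas but would rather stay with noun_chunks, a small hedged variant is to lemmatize the tokens inside each chunk yourself and keep only the nouns and proper nouns, which gets close to the groupby output:

# Lemmatize inside each noun chunk, dropping determiners and other non-noun tokens.
out = [' '.join(tok.lemma_ for tok in chunk if tok.pos_ in ("PROPN", "NOUN"))
       for chunk in doc.noun_chunks]
print(out)
# e.g. ['Chancellor Angela Merkel', 'minister', 'cabinet retreat', ...]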

I need to automate the extraction of a logical statement (SWRL) from sentences in English

(Excuse me for my English, I'm also new at this, so be gentle, thank you)
I'm trying to extract a logical statement (SWRL) from any possible sentence that contains actions and conditions.
This is the kind of logical statement I'd like to obtain:
IF (CONDITION) THEN (ACTION | NOT ACTION | ACTION OR NOT ACTION)
I've been trying to apply some NLP techniques with spaCy and the Stanford NLP library, but my lack of knowledge of English grammatical structures makes it almost impossible for me.
I'd like to know if someone could help me with this research, either with ideas or with libraries I don't know about yet.
For example:
import nltk
import spacy
nlp = spacy.load('en_core_web_sm')
sent="The speed limit is 90 kilometres per hour on roads outside built-up areas."
doc=nlp(sent)
Obtaining the root:
def sent_root(sent):
    for index, token in enumerate(sent):
        if token.head == token:
            return token, index
Out: (is, 3)
Obtaining the subject:
def sent_subj(sent):
    for index, token in enumerate(sent):
        if token.dep_ == 'nsubj':
            return token, index
Out: (limit, 2)
Obtaining the children (dependents of the word):
def sent_child(token):
    complete_subj = ''
    for child in token.children:
        if child.is_punct == False:
            if child.dep_ == 'compound':
                complete_subj += child.text + ' ' + token.text + ' '
            else:
                complete_subj += child.text + ' '
            for child_token in child.children:
                if child_token.is_punct == False:
                    complete_subj += child_token.text + ' '
    return complete_subj
Out: 'The speed limit '
Doc ents + root:
def doc_ents_root(sent, root):
    ents_root = root.text + ' '
    for token in sent.ents:
        ents_root += token.text + ' '
    return ents_root
Out: 'is 90 kilometres per hour '
Extracting the action:
def action(sent):
    # Obtaining the sent root
    root, root_idx = sent_root(sent)
    # Obtaining the subject
    subj, subj_idx = sent_subj(sent)
    # Obtaining the whole subject (subj + comps)
    complete_subj = sent_child(subj)
    complete_ents = doc_ents_root(sent, root)
    return complete_subj + complete_ents
Applying all the functions:
action(doc)
Out: 'A traffic light with signal indicates '
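One possible (hedged) starting point for the IF/THEN split itself is to treat an adverbial clause introduced by "if" or "when" (a token with dep_ == 'advcl' that has a 'mark' child) as the condition and the remaining tokens as the action. This is only a sketch under that assumption, and the example sentence is hypothetical:

import spacy

nlp = spacy.load('en_core_web_sm')

def naive_if_then(sentence):
    # Collect the token indices of an "if"/"when" adverbial clause as the condition.
    doc = nlp(sentence)
    condition_ids = set()
    for token in doc:
        if token.dep_ == 'advcl' and any(c.dep_ == 'mark' and c.lower_ in ('if', 'when')
                                         for c in token.children):
            condition_ids.update(t.i for t in token.subtree if t.dep_ != 'mark')
    condition = ' '.join(t.text for t in doc if t.i in condition_ids)
    # Everything outside the condition (minus markers and punctuation) is the action.
    action = ' '.join(t.text for t in doc
                      if t.i not in condition_ids and t.dep_ != 'mark' and not t.is_punct)
    if condition_ids:
        return 'IF (%s) THEN (%s)' % (condition, action)
    return 'THEN (%s)' % action

print(naive_if_then(u"If the road is wet, the speed limit is 80 kilometres per hour."))
# e.g. IF (the road is wet) THEN (the speed limit is 80 kilometres per hour)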

Truecasing - SpaCy

The intent is to capitalize based on POS tags, which I could achieve with the help of the link below.
How can I best determine the correct capitalization for a word?
How can I achieve similar results using spaCy?
def truecase(doc):
    truecased_sents = []  # list of truecased sentences
    tagged_sent = token.tag_([word.lower() for token in doc])
    normalized_sent = [w.capitalize() if t in ["NN", "NNS"] else w for (w, t) in tagged_sent]
    normalized_sent[0] = normalized_sent[0].capitalize()
    string = re.sub(" (?=[\.,'!?:;])", "", ' '.join(normalized_sent))
    return string
It throws this error:
tagged_sent = token.tag_([word.lower() for token in doc])
NameError: global name 'token' is not defined
How do I declare token as global and solve this issue? Is my approach correct?
import spacy, re
nlp = spacy.load('en_core_web_sm')
doc = nlp(u'autonomous cars shift insurance liability toward manufacturers.')
tagged_sent = [(w.text, w.tag_) for w in doc]
normalized_sent = [w.capitalize() if t in ["NN","NNS"] else w for (w,t) in tagged_sent]
normalized_sent[0] = normalized_sent[0].capitalize()
string = re.sub(" (?=[\.,'!?:;])", "", ' '.join(normalized_sent))
print string
Output:
Autonomous Cars shift Insurance Liability toward Manufacturers.
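If you want this wrapped back into a reusable function like the original truecase(), a minimal sketch (my own restructuring, not part of the answer above) could look like this:

import re
import spacy

nlp = spacy.load('en_core_web_sm')

def truecase(text):
    # Lowercase the input, then capitalize tokens tagged NN/NNS plus the first word.
    doc = nlp(text.lower())
    words = [tok.text.capitalize() if tok.tag_ in ("NN", "NNS") else tok.text
             for tok in doc]
    words[0] = words[0].capitalize()
    # Re-join and remove the space that joining inserts before punctuation.
    return re.sub(r" (?=[\.,'!?:;])", "", ' '.join(words))

print(truecase(u'autonomous cars shift insurance liability toward manufacturers.'))
# Autonomous Cars shift Insurance Liability toward Manufacturers.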

How to extract subjects in a sentence and their respective dependent phrases?

I am trying to work on subject extraction in a sentence, so that I can get the sentiments in accordance with the subject. I am using nltk in python2.7 for this purpose. Take the following sentence as an example:
Donald Trump is the worst president of USA, but Hillary is better than him
Here we can see that Donald Trump and Hillary are the two subjects; the sentiment related to Donald Trump is negative, while the one related to Hillary is positive. So far, I am able to break this sentence into chunks of noun phrases, and I get the following:
(S
  (NP Donald/NNP Trump/NNP)
  is/VBZ
  (NP the/DT worst/JJS president/NN)
  in/IN
  (NP USA,/NNP)
  but/CC
  (NP Hillary/NNP)
  is/VBZ
  better/JJR
  than/IN
  (NP him/PRP))
Now, how do I approach finding the subjects from these noun phrases? And how do I group together the phrases meant for each subject? Once I have the phrases for the two subjects separately, I can perform sentiment analysis on each of them.
EDIT
I looked into the library mentioned by @Krzysiek (spaCy), and it gave me dependency trees for the sentences as well.
Here is the code:
from spacy.en import English
parser = English()
example = u"Donald Trump is the worst president of USA, but Hillary is better than him"
parsedEx = parser(example)
# shown as: original token, dependency tag, head word, left dependents, right dependents
for token in parsedEx:
    print(token.orth_, token.dep_, token.head.orth_, [t.orth_ for t in token.lefts], [t.orth_ for t in token.rights])
Here are the dependency trees:
(u'Donald', u'compound', u'Trump', [], [])
(u'Trump', u'nsubj', u'is', [u'Donald'], [])
(u'is', u'ROOT', u'is', [u'Trump'], [u'president', u',', u'but', u'is'])
(u'the', u'det', u'president', [], [])
(u'worst', u'amod', u'president', [], [])
(u'president', u'attr', u'is', [u'the', u'worst'], [u'of'])
(u'of', u'prep', u'president', [], [u'USA'])
(u'USA', u'pobj', u'of', [], [])
(u',', u'punct', u'is', [], [])
(u'but', u'cc', u'is', [], [])
(u'Hillary', u'nsubj', u'is', [], [])
(u'is', u'conj', u'is', [u'Hillary'], [u'better'])
(u'better', u'acomp', u'is', [], [u'than'])
(u'than', u'prep', u'better', [], [u'him'])
(u'him', u'pobj', u'than', [], [])
This gives in-depth insight into the dependencies between the different tokens of the sentence. Here is the link to the paper which describes the dependencies between different pairs. How can I use this tree to attach the contextual words for the different subjects to them?
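(A rough, hedged sketch of one way to use that tree, independent of the full solution below: each nsubj token marks a clause, and the words belonging to that clause can be collected from the subtree of the verb the subject attaches to, leaving out any conjoined clause.)

for token in parsedEx:
    if token.dep_ == 'nsubj':
        verb = token.head
        # Skip the conjoined clause and its connectors so each subject keeps only its own words.
        skip = set()
        for child in verb.children:
            if child.dep_ in ('conj', 'cc', 'punct'):
                skip.update(t.i for t in child.subtree)
        clause = ' '.join(t.orth_ for t in verb.subtree if t.i not in skip)
        print(token.orth_, '->', clause)
# e.g. Trump -> Donald Trump is the worst president of USA
#      Hillary -> Hillary is better than him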
I was going through the spaCy library some more, and I finally figured out the solution via the dependency parse. Thanks to this repo, I figured out how to include adjectives as well in my subject verb object triples (making them SVAOs), as well as how to take out compound subjects in the query. Here goes my solution:
from nltk.stem.wordnet import WordNetLemmatizer
from spacy.lang.en import English
SUBJECTS = ["nsubj", "nsubjpass", "csubj", "csubjpass", "agent", "expl"]
OBJECTS = ["dobj", "dative", "attr", "oprd"]
ADJECTIVES = ["acomp", "advcl", "advmod", "amod", "appos", "nn", "nmod", "ccomp", "complm",
              "hmod", "infmod", "xcomp", "rcmod", "poss", "possessive"]
COMPOUNDS = ["compound"]
PREPOSITIONS = ["prep"]
def getSubsFromConjunctions(subs):
    moreSubs = []
    for sub in subs:
        # rights is a generator
        rights = list(sub.rights)
        rightDeps = {tok.lower_ for tok in rights}
        if "and" in rightDeps:
            moreSubs.extend([tok for tok in rights if tok.dep_ in SUBJECTS or tok.pos_ == "NOUN"])
            if len(moreSubs) > 0:
                moreSubs.extend(getSubsFromConjunctions(moreSubs))
    return moreSubs

def getObjsFromConjunctions(objs):
    moreObjs = []
    for obj in objs:
        # rights is a generator
        rights = list(obj.rights)
        rightDeps = {tok.lower_ for tok in rights}
        if "and" in rightDeps:
            moreObjs.extend([tok for tok in rights if tok.dep_ in OBJECTS or tok.pos_ == "NOUN"])
            if len(moreObjs) > 0:
                moreObjs.extend(getObjsFromConjunctions(moreObjs))
    return moreObjs

def getVerbsFromConjunctions(verbs):
    moreVerbs = []
    for verb in verbs:
        rightDeps = {tok.lower_ for tok in verb.rights}
        if "and" in rightDeps:
            moreVerbs.extend([tok for tok in verb.rights if tok.pos_ == "VERB"])
            if len(moreVerbs) > 0:
                moreVerbs.extend(getVerbsFromConjunctions(moreVerbs))
    return moreVerbs

def findSubs(tok):
    head = tok.head
    while head.pos_ != "VERB" and head.pos_ != "NOUN" and head.head != head:
        head = head.head
    if head.pos_ == "VERB":
        subs = [tok for tok in head.lefts if tok.dep_ == "SUB"]
        if len(subs) > 0:
            verbNegated = isNegated(head)
            subs.extend(getSubsFromConjunctions(subs))
            return subs, verbNegated
        elif head.head != head:
            return findSubs(head)
    elif head.pos_ == "NOUN":
        return [head], isNegated(tok)
    return [], False

def isNegated(tok):
    negations = {"no", "not", "n't", "never", "none"}
    for dep in list(tok.lefts) + list(tok.rights):
        if dep.lower_ in negations:
            return True
    return False
def findSVs(tokens):
    svs = []
    verbs = [tok for tok in tokens if tok.pos_ == "VERB"]
    for v in verbs:
        subs, verbNegated = getAllSubs(v)
        if len(subs) > 0:
            for sub in subs:
                svs.append((sub.orth_, "!" + v.orth_ if verbNegated else v.orth_))
    return svs

def getObjsFromPrepositions(deps):
    objs = []
    for dep in deps:
        if dep.pos_ == "ADP" and dep.dep_ == "prep":
            objs.extend([tok for tok in dep.rights if tok.dep_ in OBJECTS or (tok.pos_ == "PRON" and tok.lower_ == "me")])
    return objs

def getAdjectives(toks):
    toks_with_adjectives = []
    for tok in toks:
        adjs = [left for left in tok.lefts if left.dep_ in ADJECTIVES]
        adjs.append(tok)
        adjs.extend([right for right in tok.rights if tok.dep_ in ADJECTIVES])
        tok_with_adj = " ".join([adj.lower_ for adj in adjs])
        toks_with_adjectives.extend(adjs)
    return toks_with_adjectives

def getObjsFromAttrs(deps):
    for dep in deps:
        if dep.pos_ == "NOUN" and dep.dep_ == "attr":
            verbs = [tok for tok in dep.rights if tok.pos_ == "VERB"]
            if len(verbs) > 0:
                for v in verbs:
                    rights = list(v.rights)
                    objs = [tok for tok in rights if tok.dep_ in OBJECTS]
                    objs.extend(getObjsFromPrepositions(rights))
                    if len(objs) > 0:
                        return v, objs
    return None, None

def getObjFromXComp(deps):
    for dep in deps:
        if dep.pos_ == "VERB" and dep.dep_ == "xcomp":
            v = dep
            rights = list(v.rights)
            objs = [tok for tok in rights if tok.dep_ in OBJECTS]
            objs.extend(getObjsFromPrepositions(rights))
            if len(objs) > 0:
                return v, objs
    return None, None

def getAllSubs(v):
    verbNegated = isNegated(v)
    subs = [tok for tok in v.lefts if tok.dep_ in SUBJECTS and tok.pos_ != "DET"]
    if len(subs) > 0:
        subs.extend(getSubsFromConjunctions(subs))
    else:
        foundSubs, verbNegated = findSubs(v)
        subs.extend(foundSubs)
    return subs, verbNegated
def getAllObjs(v):
    # rights is a generator
    rights = list(v.rights)
    objs = [tok for tok in rights if tok.dep_ in OBJECTS]
    objs.extend(getObjsFromPrepositions(rights))
    potentialNewVerb, potentialNewObjs = getObjFromXComp(rights)
    if potentialNewVerb is not None and potentialNewObjs is not None and len(potentialNewObjs) > 0:
        objs.extend(potentialNewObjs)
        v = potentialNewVerb
    if len(objs) > 0:
        objs.extend(getObjsFromConjunctions(objs))
    return v, objs

def getAllObjsWithAdjectives(v):
    # rights is a generator
    rights = list(v.rights)
    objs = [tok for tok in rights if tok.dep_ in OBJECTS]
    if len(objs) == 0:
        objs = [tok for tok in rights if tok.dep_ in ADJECTIVES]
    objs.extend(getObjsFromPrepositions(rights))
    potentialNewVerb, potentialNewObjs = getObjFromXComp(rights)
    if potentialNewVerb is not None and potentialNewObjs is not None and len(potentialNewObjs) > 0:
        objs.extend(potentialNewObjs)
        v = potentialNewVerb
    if len(objs) > 0:
        objs.extend(getObjsFromConjunctions(objs))
    return v, objs

def findSVOs(tokens):
    svos = []
    verbs = [tok for tok in tokens if tok.pos_ == "VERB" and tok.dep_ != "aux"]
    for v in verbs:
        subs, verbNegated = getAllSubs(v)
        # hopefully there are subs, if not, don't examine this verb any longer
        if len(subs) > 0:
            v, objs = getAllObjs(v)
            for sub in subs:
                for obj in objs:
                    objNegated = isNegated(obj)
                    svos.append((sub.lower_, "!" + v.lower_ if verbNegated or objNegated else v.lower_, obj.lower_))
    return svos

def findSVAOs(tokens):
    svos = []
    verbs = [tok for tok in tokens if tok.pos_ == "VERB" and tok.dep_ != "aux"]
    for v in verbs:
        subs, verbNegated = getAllSubs(v)
        # hopefully there are subs, if not, don't examine this verb any longer
        if len(subs) > 0:
            v, objs = getAllObjsWithAdjectives(v)
            for sub in subs:
                for obj in objs:
                    objNegated = isNegated(obj)
                    obj_desc_tokens = generate_left_right_adjectives(obj)
                    sub_compound = generate_sub_compound(sub)
                    svos.append((" ".join(tok.lower_ for tok in sub_compound), "!" + v.lower_ if verbNegated or objNegated else v.lower_, " ".join(tok.lower_ for tok in obj_desc_tokens)))
    return svos

def generate_sub_compound(sub):
    sub_compunds = []
    for tok in sub.lefts:
        if tok.dep_ in COMPOUNDS:
            sub_compunds.extend(generate_sub_compound(tok))
    sub_compunds.append(sub)
    for tok in sub.rights:
        if tok.dep_ in COMPOUNDS:
            sub_compunds.extend(generate_sub_compound(tok))
    return sub_compunds

def generate_left_right_adjectives(obj):
    obj_desc_tokens = []
    for tok in obj.lefts:
        if tok.dep_ in ADJECTIVES:
            obj_desc_tokens.extend(generate_left_right_adjectives(tok))
    obj_desc_tokens.append(obj)
    for tok in obj.rights:
        if tok.dep_ in ADJECTIVES:
            obj_desc_tokens.extend(generate_left_right_adjectives(tok))
    return obj_desc_tokens
Now when you pass a query such as:
from spacy.lang.en import English
parser = English()
sentence = u"""
Donald Trump is the worst president of USA, but Hillary is better than him
"""
parse = parser(sentence)
print(findSVAOs(parse))
You will get the following:
[(u'donald trump', u'is', u'worst president'), (u'hillary', u'is', u'better')]
Thank you @Krzysiek for your solution too; I was actually unable to go deep enough into your library to modify it, so I instead tried modifying the code from the above-mentioned link to solve my problem.
I was recently solving a very similar problem - I needed to extract subject(s), action, and object(s). I open-sourced my work, so you can check out this library:
https://github.com/krzysiekfonal/textpipeliner
It is based on spaCy (as opposed to NLTK), and it also works on the sentence tree.
So, for instance, let's take this doc parsed by spaCy as an example:
import spacy
nlp = spacy.load("en")
doc = nlp(u"The Empire of Japan aimed to dominate Asia and the " \
"Pacific and was already at war with the Republic of China " \
"in 1937, but the world war is generally said to have begun on " \
"1 September 1939 with the invasion of Poland by Germany and " \
"subsequent declarations of war on Germany by France and the United Kingdom. " \
"From late 1939 to early 1941, in a series of campaigns and treaties, Germany conquered " \
"or controlled much of continental Europe, and formed the Axis alliance with Italy and Japan. " \
"Under the Molotov-Ribbentrop Pact of August 1939, Germany and the Soviet Union partitioned and " \
"annexed territories of their European neighbours, Poland, Finland, Romania and the Baltic states. " \
"The war continued primarily between the European Axis powers and the coalition of the United Kingdom " \
"and the British Commonwealth, with campaigns including the North Africa and East Africa campaigns, " \
"the aerial Battle of Britain, the Blitz bombing campaign, the Balkan Campaign as well as the " \
"long-running Battle of the Atlantic. In June 1941, the European Axis powers launched an invasion " \
"of the Soviet Union, opening the largest land theatre of war in history, which trapped the major part " \
"of the Axis' military forces into a war of attrition. In December 1941, Japan attacked " \
"the United States and European territories in the Pacific Ocean, and quickly conquered much of " \
"the Western Pacific.")
You can now create a simple pipes structure (more about pipes in the readme of the project):
from textpipeliner import PipelineEngine, Context
from textpipeliner.pipes import *

pipes_structure = [SequencePipe([FindTokensPipe("VERB/nsubj/*"),
                                 NamedEntityFilterPipe(),
                                 NamedEntityExtractorPipe()]),
                   FindTokensPipe("VERB"),
                   AnyPipe([SequencePipe([FindTokensPipe("VBD/dobj/NNP"),
                                          AggregatePipe([NamedEntityFilterPipe("GPE"),
                                                         NamedEntityFilterPipe("PERSON")]),
                                          NamedEntityExtractorPipe()]),
                            SequencePipe([FindTokensPipe("VBD/**/*/pobj/NNP"),
                                          AggregatePipe([NamedEntityFilterPipe("LOC"),
                                                         NamedEntityFilterPipe("PERSON")]),
                                          NamedEntityExtractorPipe()])])]

engine = PipelineEngine(pipes_structure, Context(doc), [0,1,2])
engine.process()
And as the result you will get:
>>>[([Germany], [conquered], [Europe]),
([Japan], [attacked], [the, United, States])]
Actually, the finding pipes are based strongly on another library - grammaregex. You can read about it in this post:
https://medium.com/@krzysiek89dev/grammaregex-library-regex-like-for-text-mining-49e5706c9c6d#.zgx7odhsc
EDITED
Actually, the example I presented in the readme discards adjectives, but all you need to do is adjust the pipe structure passed to the engine according to your needs.
For instance, for your sample sentence I can propose a structure/solution which gives you a tuple of 3 elements (subj, verb, adj) for every sentence:
import spacy
from textpipeliner import PipelineEngine, Context
from textpipeliner.pipes import *

pipes_structure = [SequencePipe([FindTokensPipe("VERB/nsubj/NNP"),
                                 NamedEntityFilterPipe(),
                                 NamedEntityExtractorPipe()]),
                   AggregatePipe([FindTokensPipe("VERB"),
                                  FindTokensPipe("VERB/xcomp/VERB/aux/*"),
                                  FindTokensPipe("VERB/xcomp/VERB")]),
                   AnyPipe([FindTokensPipe("VERB/[acomp,amod]/ADJ"),
                            AggregatePipe([FindTokensPipe("VERB/[dobj,attr]/NOUN/det/DET"),
                                           FindTokensPipe("VERB/[dobj,attr]/NOUN/[acomp,amod]/ADJ")])])
                   ]
engine = PipelineEngine(pipes_structure, Context(doc), [0,1,2])
engine.process()
It will give you result:
[([Donald, Trump], [is], [the, worst])]
A little bit of complexity comes from the fact that you have a compound sentence and the lib produces one tuple per sentence - I'll soon add the possibility (I need it for my project too) to pass a list of pipe structures to the engine, to allow producing more tuples per sentence. For now you can solve it by creating a second engine for compound sentences, whose structure differs only in using VERB/conj/VERB instead of VERB (those regexes always start from ROOT, so VERB/conj/VERB leads you to the second verb of the compound sentence):
pipes_structure_comp = [SequencePipe([FindTokensPipe("VERB/conj/VERB/nsubj/NNP"),
                                      NamedEntityFilterPipe(),
                                      NamedEntityExtractorPipe()]),
                        AggregatePipe([FindTokensPipe("VERB/conj/VERB"),
                                       FindTokensPipe("VERB/conj/VERB/xcomp/VERB/aux/*"),
                                       FindTokensPipe("VERB/conj/VERB/xcomp/VERB")]),
                        AnyPipe([FindTokensPipe("VERB/conj/VERB/[acomp,amod]/ADJ"),
                                 AggregatePipe([FindTokensPipe("VERB/conj/VERB/[dobj,attr]/NOUN/det/DET"),
                                                FindTokensPipe("VERB/conj/VERB/[dobj,attr]/NOUN/[acomp,amod]/ADJ")])])
                        ]
engine2 = PipelineEngine(pipes_structure_comp, Context(doc), [0,1,2])
And now, after you run both engines, you will get the expected result :)
engine.process()
engine2.process()
[([Donald, Trump], [is], [the, worst])]
[([Hillary], [is], [better])]
This is what you need, I think. Of course I just quickly created a pipe structure for the given example sentence, and it won't work for every case, but I have seen a lot of sentence structures and it will already cover quite a nice percentage. Then you can just add more FindTokensPipe etc. for the cases which don't work yet, and I'm sure after a few adjustments you will cover a really good number of possible sentences (English is not too complex, so... :)
