How to replace words with their synonyms from WordNet? - python

I want to do data augmentation for a sentiment analysis task by replacing words with their synonyms from WordNet. At the moment the replacement is random; I want to loop over the synonyms and replace the word with each synonym, one at a time, to increase the size of the dataset.
import nltk
from nltk.corpus import wordnet
from random import randint

sentences = []
for index, r in pos_df.iterrows():  # pos_df, normalize and tokenize come from the surrounding pipeline
    text = normalize(r['text'])
    words = tokenize(text)
    output = ""
    # Identify the parts of speech
    tagged = nltk.pos_tag(words)
    for i in range(0, len(words)):
        replacements = []
        # Only replace nouns with nouns, verbs with verbs, etc.
        for syn in wordnet.synsets(words[i]):
            # Do not attempt to replace proper nouns or determiners
            if tagged[i][1] == 'NNP' or tagged[i][1] == 'DT':
                break
            # The tagger returns tags like NNP, VBP etc.,
            # but WordNet synset names use tags like .n.
            # So we take the first character of NNP, i.e. n,
            # and check whether the synset name contains .n.
            word_type = tagged[i][1][0].lower()
            if "." + word_type + "." in syn.name():
                # extract the word only
                r = syn.name()[0:syn.name().find(".")]
                replacements.append(r)
        if len(replacements) > 0:
            # Choose a random replacement
            replacement = replacements[randint(0, len(replacements) - 1)]
            print(replacement)
            output = output + " " + replacement
        else:
            # If no replacement could be found, then just use the
            # original word
            output = output + " " + words[i]
    sentences.append([output, 'positive'])

I, too, am working on a similar kind of project: generating new sentences from a given input without changing the context of the input text.
While looking into this, I found a data augmentation technique that seems to work well for the augmentation part: EDA (Easy Data Augmentation), a paper with an accompanying implementation [https://github.com/jasonwei20/eda_nlp].
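As for looping over every synonym rather than picking one at random: one approach is to collect the lemma names of the POS-matching synsets and emit a new sentence per substitution. Below is a minimal sketch under that assumption; the helper name augment_with_synonyms and the use of lemma names (instead of synset head words) are my own choices, not code from the question.

from nltk.corpus import wordnet

def augment_with_synonyms(words, tagged):
    """Return one new sentence per single-word synonym substitution."""
    augmented = []
    for i, word in enumerate(words):
        # Do not attempt to replace proper nouns or determiners
        if tagged[i][1] in ('NNP', 'DT'):
            continue
        pos = tagged[i][1][0].lower()  # e.g. 'n' for NN/NNS, 'v' for VB/VBD, ...
        synonyms = set()
        for syn in wordnet.synsets(word):
            # Keep only synsets whose POS matches the tagged word
            if "." + pos + "." in syn.name():
                for lemma in syn.lemmas():
                    synonyms.add(lemma.name().replace('_', ' '))
        synonyms.discard(word)
        for synonym in sorted(synonyms):
            new_words = list(words)
            new_words[i] = synonym  # replace exactly one word per new sentence
            augmented.append(" ".join(new_words))
    return augmented

Inside the question's loop you would then call augment_with_synonyms(words, tagged) once per row and append each returned sentence to sentences with the 'positive' label, instead of building a single output string.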
Hope this helps you.

Related

Given a word can we get all possible lemmas for it using Spacy?

The input word is standalone and not part of a sentence but I would like to get all of its possible lemmas as if the input word were in different sentences with all possible POS tags. I would also like to get the lookup version of the word's lemma.
Why am I doing this?
I have extracted lemmas from all the documents and I have also calculated the number of dependency links between lemmas. Both of which I have done using en_core_web_sm. Now, given an input word, I would like to return the lemmas that are linked most frequently to all the possible lemmas of the input word.
So in short, I would like to replicate the behaviour of token.lemma_ for the input word with all possible POS tags, to maintain consistency with the lemma links I have counted.
I found it difficult to get lemmas and inflections directly out of spaCy without first constructing an example sentence to give it context. This wasn't ideal, so I looked further and found that LemmInflect does this very well.
> from lemminflect import getAllLemmas, getInflection, getAllInflections, getAllInflectionsOOV
> getAllLemmas('watches')
{'NOUN': ('watch',), 'VERB': ('watch',)}
> getAllInflections('watch')
{'NN': ('watch',), 'NNS': ('watches', 'watch'), 'VB': ('watch',), 'VBD': ('watched',), 'VBG': ('watching',), 'VBZ': ('watches',), 'VBP': ('watch',)}
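For the original use case of mapping an arbitrary surface form to every candidate lemma regardless of POS, the two calls can be combined; a small sketch assuming lemminflect is installed (the helper candidate_lemmas is mine):

from lemminflect import getAllLemmas

def candidate_lemmas(word):
    # getAllLemmas returns {POS tag: (lemma, ...)}; flatten it into one set
    return {lemma for lemmas in getAllLemmas(word).values() for lemma in lemmas}

print(candidate_lemmas('watches'))  # {'watch'}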
spaCy is just not designed to do this - it's made for analyzing text, not producing text.
The linked library looks good, but if you want to stick with spaCy or need languages besides English, you can look at spacy-lookups-data, which is the raw data used for lemmas. Generally there will be a dictionary for each part of speech that lets you look up the lemma for a word.
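For example, here is a small sketch of reading a lemma table through spaCy's Lookups API rather than parsing the JSON yourself. This assumes spaCy v3 with spacy-lookups-data installed; load_lookups is the helper the lemmatizer itself uses to read these tables, and whether a given surface form is present depends on the table's coverage.

from spacy.lookups import load_lookups

# Load the English lemma lookup table shipped with spacy-lookups-data
lookups = load_lookups("en", ["lemma_lookup"])
table = lookups.get_table("lemma_lookup")

# The table behaves like a dict keyed by the surface form
print(table.get("watches", "watches"))  # should print 'watch' if the form is in the table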
To get alternative lemmas, I am trying a combination of Spacy rule_lemmatize and Spacy lookup data. rule_lemmatize may produce more than one valid lemma whereas the lookup data will only offer one lemma for a given word (in the files I have inspected). There are however cases where the lookup data produces a lemma whilst rule_lemmatize does not.
My examples are for Spanish:
import spacy
import spacy_lookups_data
import json
import pathlib
# text = "fui"
text = "seguid"
# text = "contenta"
print("Input text: \t\t" + text)
# Find lemmas using rules:
nlp = spacy.load("es_core_news_sm")
lemmatizer = nlp.get_pipe("lemmatizer")
doc = nlp(text)
rule_lemmas = lemmatizer.rule_lemmatize(doc[0])
print("Lemmas using rules: " + ", ".join(rule_lemmas))
# Find lemma using lookup:
lookups_path = str(pathlib.Path(spacy_lookups_data.__file__).parent.resolve()) + "/data/es_lemma_lookup.json"
with open(lookups_path, "r") as file_object:
    lookup = json.load(file_object)
print("Lemma from lookup: \t" + lookup[text])
Output:
Input text: fui # I went; I was (two verbs with same form in this tense)
Lemmas using rules: ir, ser # to go, to be (both possible lemmas returned)
Lemma from lookup: ser # to be
Input text: seguid # Follow! (imperative)
Lemmas using rules: seguid # Follow! (lemma not returned)
Lemma from lookup: seguir # to follow
Input text: contenta # (it) satisfies (verb); contented (adjective)
Lemmas using rules: contentar # to satisfy (verb but not adjective lemma returned)
Lemma from lookup: contento # contented (adjective, lemma form)

Python; use list of initial characters to retrieve full word from other list?

I'm trying to use the list of shortened words to select & retrieve the corresponding full word identified by its initial sequence of characters:
shortwords = ['appe', 'kid', 'deve', 'colo', 'armo']
fullwords = ['appearance', 'armour', 'colored', 'developing', 'disagreement', 'kid', 'pony', 'treasure']
Trying this regex match with a single shortened word:
import re
shortword = 'deve'
retrieved = filter(lambda i: re.match(r'{}'.format(shortword), i), fullwords)
print(*retrieved)
returns
developing
So the regex match works but the question is how to adapt the code to iterate through the shortwords list and retrieve the full words?
EDIT: The solution needs to preserve the order from the shortwords list.
Maybe using a dictionary:
# Using a dictionary
test = 'appe is a deve arm'
shortwords = ['appe', 'deve', 'colo', 'arm', 'pony', 'disa']
fullwords = ['appearance', 'developing', 'colored', 'armour', 'pony', 'disagreement']

# Build the dictionary
d = {}
for i in range(len(shortwords)):
    d[shortwords[i]] = fullwords[i]

# Apply the dictionary to the test data
res = " ".join(d.get(s, s) for s in test.split())

# Print the test data after dictionary mapping
print(res)
That is one way to do it:
shortwords = ['appe', 'deve', 'colo', 'arm', 'pony', 'disa']
fullwords = ['appearance', 'developing', 'colored', 'armour', 'pony', 'disagreement']
# Dict comprehension
words = {short:full for short, full in zip(shortwords, fullwords)}
#Solving problem
keys = ['deve','arm','pony']
values = [words[key] for key in keys]
print(values)
This is a classical key-value problem. Use a dictionary for that, or consider pandas in the long term.
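For instance, a minimal pandas sketch of the same lookup (this is an illustration reusing the example lists above, not the original answerer's code):

import pandas as pd

shortwords = ['appe', 'deve', 'colo', 'arm', 'pony', 'disa']
fullwords = ['appearance', 'developing', 'colored', 'armour', 'pony', 'disagreement']

# A Series indexed by the short form acts as the lookup table
mapping = pd.Series(fullwords, index=shortwords)
print(mapping.loc[['deve', 'arm', 'pony']].tolist())
# ['developing', 'armour', 'pony']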
Your question text seems to indicate that you're looking for your shortwords at the start of each word. That should be easy then:
matched_words = [word for word in fullwords if any(word.startswith(shortword) for shortword in shortwords)]
If you'd like to regex this for some reason (it's unlikely to be faster), you could do that with a large alternation:
regex_alternation = '|'.join(re.escape(shortword) for shortword in shortwords)
matched_words = [word for word in fullwords if re.match(rf"^(?:{regex_alternation})", word)]
Alternatively, if your shortwords are always four characters, you could just slice the first four off:
shortwords = set(shortwords)  # sets have O(1) lookups, so this will save
                              # a significant amount of time if either shortwords
                              # or fullwords is long
matched_words = [word for word in fullwords if word[:4] in shortwords]
This snippet has the functionality I wanted. It builds a regular expression pattern at each loop iteration in order to accommodate varying word-length parameters. Further, it maintains the original order of the wordroots list. In essence it looks at each word in wordroots and fills out the full word from the dataset. This is useful when working with the BIP-0039 word list, which contains words of 3-8 characters in length that are uniquely identifiable by their initial 4 characters. Recovery phrases are built by randomly selecting a sequence of words from the BIP-0039 list, and order is important. Observed security practice is often to abbreviate each word to a maximum of its four initial characters. Here is code which would rebuild a recovery phrase from its abbreviation:
import re

wordroots = ['sun', 'sunk', 'sunn', 'suns']
dataset = ['sun', 'sunk', 'sunny', 'sunshine']

retrieved = []
for root in wordroots:
    # (exact match) or ((match at beginning of word when root is 4 or more characters) else (exact match))
    pattern = r"(^" + root + "$|" + ("^" + root + "[a-zA-Z]+)" if len(root) >= 4 else "^" + root + "$)")
    retrieved.extend(filter(lambda i: re.match(pattern, i), dataset))
print(*retrieved)
Output:
sun sunk sunny sunshine

Code to create a reliable Language model from my own corpus

I have a corpus of sentences in a specific domain.
I am looking for an open-source code/package, that I can give the data and it will generate a good, reliable language model. (Meaning, given a context, know the probability for each word).
Is there such a code/project?
I saw this github repo: https://github.com/rafaljozefowicz/lm, but it didn't work.
I recommend writing your own basic implementation. First, we need some sentences:
import nltk
from nltk.corpus import brown
words = brown.words()
total_words = len(words)
sentences = list(brown.sents())
sentences is now a list of lists. Each sublist represents a sentence with each word as an element. Now you need to decide whether or not you want to include punctuation in your model. If you want to remove it, try something like the following:
punctuation = [",", ".", ":", ";", "!", "?"]
for i, sentence in enumerate(sentences.copy()):
    new_sentence = [word for word in sentence if word not in punctuation]
    sentences[i] = new_sentence
Next, you need to decide whether or not you care about capitalization. If you don't care about it, you could remove it like so:
for i, sentence in enumerate(sentences.copy()):
    new_sentence = list()
    for j, word in enumerate(sentence.copy()):
        new_word = word.lower()  # Lower-case all characters in the word
        new_sentence.append(new_word)
    sentences[i] = new_sentence
Next, we need special start and end words to represent words that are valid at the beginning and end of sentences. You should pick start and end words that don't exist in your training data.
start = ["<<START>>"]
end = ["<<END>>"]
for i, sentence in enumerate(sentences.copy()):
    new_sentence = start + sentence + end
    sentences[i] = new_sentence
Now, let's count unigrams. A unigram is a sequence of one word in a sentence. Yes, a unigram model is just a frequency distribution of each word in the corpus:
new_words = list()
for sentence in sentences:
    for word in sentence:
        new_words.append(word)
unigram_fdist = nltk.FreqDist(new_words)
And now it's time to count bigrams. A bigram is a sequence of two words in a sentence. So, for the sentence "i am the walrus", we have the following bigrams: "<<START>> i", "i am", "am the", "the walrus", and "walrus <<END>>".
bigrams = list()
for sentence in sentences:
    new_bigrams = nltk.bigrams(sentence)
    bigrams += new_bigrams
Now we can create a frequency distribution:
bigram_fdist = nltk.ConditionalFreqDist(bigrams)
Finally, we want to know the probability of each word in the model:
def getUnigramProbability(word):
    if word in unigram_fdist:
        return unigram_fdist[word] / total_words
    else:
        return -1  # You should figure out how you want to handle out-of-vocabulary words

def getBigramProbability(word1, word2):
    if word1 not in bigram_fdist:
        return -1  # You should figure out how you want to handle out-of-vocabulary words
    elif word2 not in bigram_fdist[word1]:
        # i.e. "word1 word2" never occurs in the corpus
        return getUnigramProbability(word2)
    else:
        bigram_frequency = bigram_fdist[word1][word2]
        unigram_frequency = unigram_fdist[word1]
        bigram_probability = bigram_frequency / unigram_frequency
        return bigram_probability
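Once the distributions are built, you can query them directly; for example (the exact values depend on the preprocessing choices above):

# Example queries against the model built above
print(getUnigramProbability("the"))              # relative frequency of "the"
print(getBigramProbability("the", "walrus"))     # P("walrus" | "the")
print(getBigramProbability("<<START>>", "the"))  # probability that a sentence starts with "the"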
While this isn't a framework/library that just builds the model for you, I hope seeing this code has demystified what goes on in a language model.
You might try word_language_model from the PyTorch examples. There might just be an issue if you have a big corpus, since it loads all data into memory.

Highlight certain words that appear in sequence

I'm trying to print a text while highlighting certain words and word bigrams. This would be fairly straightforward if I didn't have to print the other tokens, like punctuation and such, as well.
I have a list of words to highlight and another list of word bigrams to highlight.
Highlighting individual words is fairly easy, like for example:
import re
import string

regex_pattern = re.compile("([%s \n])" % string.punctuation)

def highlighter(content, terms_to_highlight):
    tokens = regex_pattern.split(content)
    for token in tokens:
        if token.lower() in terms_to_highlight:
            print('\x1b[6;30;42m' + token + '\x1b[0m', end="")
        else:
            print(token, end="")
Highlighting words only when they appear in sequence is more complex. I have been playing around with iterators but haven't been able to come up with anything that isn't overly complicated.
If I understand the question correctly, one solution is to look ahead to the next word token and check if the bigram is in the list.
import re
import string

regex_pattern = re.compile("([%s \n])" % string.punctuation)

def find_next_word(tokens, idx):
    nonword = string.punctuation + " \n"
    for i in range(idx + 1, len(tokens)):
        if tokens[i] not in nonword:
            return (tokens[i], i)
    return (None, -1)

def highlighter(content, terms, bigrams):
    tokens = regex_pattern.split(content)
    idx = 0
    while idx < len(tokens):
        token = tokens[idx]
        (next_word, nw_idx) = find_next_word(tokens, idx)
        if token.lower() in terms:
            print('*' + token + '*', end="")
            idx += 1
        elif next_word and (token.lower(), next_word.lower()) in bigrams:
            concat = "".join(tokens[idx:nw_idx + 1])
            print('-' + concat + '-', end="")
            idx = nw_idx + 1
        else:
            print(token, end="")
            idx += 1
terms = ['man', 'the']
bigrams = [('once', 'upon'), ('i','was')]
text = 'Once upon a time, as I was walking to the city, I met a man. As I was tired, I did not look once... upon this man.'
highlighter(text, terms, bigrams)
When called, this gives:
-Once upon- a time, as -I was- walking to *the* city, I met a *man*. As -I was- tired, I did not look -once... upon- this *man*.
Please note that:
- this is a greedy algorithm: it matches the first bigram it finds. So for instance, if you check for yellow banana and banana boat, yellow banana boat is always highlighted as -yellow banana- boat. If you want another behaviour, you should update the test logic.
- you probably also want to update the logic to handle the case where a word is both in terms and the first part of a bigram.
- I haven't tested all edge cases; some things may break / there may be fence-post errors.
- you can optimize performance if necessary by:
  - building a set of the first words of the bigrams and checking whether a word is in it before doing the look-ahead to the next word (see the sketch below),
  - and/or using the result of the look-ahead to treat in one step all the non-word tokens between two words (implementing this step should be enough to ensure linear performance).
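As a rough illustration of the first optimization only (this sketch is mine, not part of the original answer, and reuses regex_pattern and find_next_word from the code above):

def highlighter_prefiltered(content, terms, bigrams):
    # Words that can start a bigram; set membership is O(1)
    bigram_firsts = {first for (first, second) in bigrams}
    tokens = regex_pattern.split(content)
    idx = 0
    while idx < len(tokens):
        token = tokens[idx]
        low = token.lower()
        if low in terms:
            print('*' + token + '*', end="")
            idx += 1
        elif low in bigram_firsts:
            # Only pay for the look-ahead when the token can start a bigram
            (next_word, nw_idx) = find_next_word(tokens, idx)
            if next_word and (low, next_word.lower()) in bigrams:
                print('-' + "".join(tokens[idx:nw_idx + 1]) + '-', end="")
                idx = nw_idx + 1
            else:
                print(token, end="")
                idx += 1
        else:
            print(token, end="")
            idx += 1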
Hope this helps.

determining context from text using pandas

I've built a web crawler which fetches me data. The data is typically structured, but here and there are a few anomalies. Now, to do analysis on top of the data, I am searching for a few words, i.e. searched_words=['word1','word2','word3', ...], and I want the sentences in which these words are present. So I coded as below:
import re
from nltk.tokenize import sent_tokenize, word_tokenize

searched_words = ['word1', 'word2', 'word3', ...]
fsa = re.compile('|'.join(re.escape(w.lower()) for w in searched_words))
str_df['context'] = str_df['text'].apply(
    lambda text: [sent for sent in sent_tokenize(text)
                  if any(True for w in word_tokenize(sent) if w.lower() in searched_words)])
It is working, but the problem I am facing is that if whitespace is missing after a full stop in the text, I get both sentences returned joined together as one.
Example :
searched_words = ['snakes','venomous']
text = "I am afraid of snakes.I hate them."
output : ['I am afraid of snakes.I hate them.']
Desired output : ['I am afraid of snakes.']
If all tokenizers (including NLTK's) fail you, you can take matters into your own hands and try:
import re

s = 'I am afraid of snakes.I hate venomous them. Theyre venomous.'

def findall(s, p):
    return [m.start() for m in re.finditer(p, s)]

def find(sent, word):
    res = []
    indexes = findall(sent, word)
    for index in indexes:
        i = index
        while i > 0:
            if sent[i] != '.':
                i -= 1
            else:
                break
        end = index + len(word)
        nextFullStop = end + sent[end:].find('.')
        res.append(sent[i:nextFullStop])
        i = 0
    return res
There are some dots left in the results, as I do not know what exactly you want to do with them.
What it does is find all occurrences of the given word and return the span all the way back to the previous dot. This covers an edge case only, but you can easily tune it to your specific needs.
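For example, calling it on the sample string above (note the leading dots mentioned earlier):

print(find(s, 'venomous'))
# ['.I hate venomous them', '. Theyre venomous']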
