The following code creates a dataframe, tokenizes, and filters stopwords. However, I am stuck trying to properly gather the results to load back into a column of the dataframe. Trying to put the results back into the dataframe (using the commented code) produces the following error: ValueError: Length of values does not match length of index. It seems like the issue is with how I'm loading the lists back into the df; I think it is treating them one at a time. I'm not clear how to form a list of lists, which is what I think is needed. Neither append() nor extend() seems appropriate, or if they are, I'm not using them properly. Any insight would be greatly appreciated.
Minimal example
# Load libraries
import numpy as np
import pandas as pd
import spacy
# Create dataframe and tokenize
df = pd.DataFrame({'Text': ['This is the first text. It is two sentences',
                            'This is the second text, with one sentence']})
nlp = spacy.load("en_core_web_sm")
df['Tokens'] = ''
doc = df['Text']
doc = doc.apply(lambda x: nlp(x))
df['Tokens'] = doc
# df # check dataframe
# Filter stopwords
df['No Stop'] = ''
def test_loc(df):
    for i in df.index:
        doc = df.loc[i, 'Tokens']
        tokens_no_stop = [token.text for token in doc if not token.is_stop]
        print(tokens_no_stop)
        # df['No Stop'] = tokens_no_stop  # THIS PRODUCES AN ERROR
test_loc(df)
Result
['text', '.', 'sentences']
['second', 'text', ',', 'sentence']
As you mentioned, you need a list of lists in order for the assignment to work.
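For example, here is a minimal sketch of that approach, building one inner list per row inside your test_loc function and assigning the whole list of lists in a single step:
def test_loc(df):
    all_tokens_no_stop = []  # one inner list per row -> a list of lists
    for i in df.index:
        doc = df.loc[i, 'Tokens']
        all_tokens_no_stop.append([token.text for token in doc if not token.is_stop])
    df['No Stop'] = all_tokens_no_stop  # length now matches the index
test_loc(df)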
Another solution is to use pandas.apply, as you did at the beginning of your code.
import numpy as np
import pandas as pd
import spacy
df = pd.DataFrame({'Text': ['This is the first text. It is two sentences',
                            'This is the second text, with one sentence']})
nlp = spacy.load("en_core_web_sm")
df['Tokens'] = df['Text'].apply(lambda x: nlp(x))
def remove_stop_words(tokens):
    return [token.text for token in tokens if not token.is_stop]
df['No Stop'] = df['Tokens'].apply(remove_stop_words)
Notice you don't have to create the column before assigning to it.
Related
I have a Spacy model for text generation, and I want to create a pandas data frame with all the texts that my Spacy model produces in each iteration. How can I save the spacy.tokens.doc.Doc output into a pandas dataframe?
nlp = spacy.load('en_core_web_sm')
newDataSet = pd.DataFrame()
docs = nlp.pipe(df['Text'])
syn_augmenter = augmenty.load('random_synonym_insertion.v1', level=0.1)
for doc in augmenty.docs(docs, augmenter=syn_augmenter, nlp=nlp):
    newDataSet = newDataSet.add(doc)  # this produces an error
So you probably want to use the DframCy library to make that happen. It is also recommended by spaCy: https://spacy.io/universe/project/dframcy. A snippet I use is:
import spacy
from dframcy import DframCy
from tqdm import tqdm
nlp = spacy.load('en_core_web_trf')
dframcy = DframCy(nlp)
columns=["id", "text", "start", "end", "pos_", "tag_", "dep_", \
"head", "ent_type_", "lemma_", "lower_", "is_punct", "is_quote", "is_digit"]
def get_features(item):
doc = dframcy.nlp(item[1]["discourse_text"])
annotation_dataframe = dframcy.to_dataframe(doc, columns=columns)
annotation_dataframe['index'] = item[0]
return annotation_dataframe
results = []
for item in tqdm(df.iterrows(), total=df.shape[0]):
results.append(get_features(item))
features = pd.concat(results)
features
So the columns object denotes which attributes you want returned. It is passed to dframcy, which extracts the features and returns a nice dataframe per document. If you have a table of strings that you want to tokenize and get features from, you need to iterate over it. tqdm tracks the overall progress of your for-loop. Concatenating the list of dataframes (one per doc) gives you a complete overview.
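If you would rather not add a dependency, a rough alternative sketch (not DframCy's exact output format, and reusing the 'Text' column from the question) is to collect token attributes directly into a dataframe:
import pandas as pd
import spacy

nlp = spacy.load('en_core_web_sm')

rows = []
for doc_id, doc in enumerate(nlp.pipe(df['Text'])):
    for token in doc:
        rows.append({
            'doc_id': doc_id,            # illustrative index linking tokens back to their source row
            'text': token.text,
            'lemma_': token.lemma_,
            'pos_': token.pos_,
            'dep_': token.dep_,
            'ent_type_': token.ent_type_,
            'is_punct': token.is_punct,
        })

token_df = pd.DataFrame(rows)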
I have a large dataset with 250,000 entries, and the text column that I am processing contains a sentence in each row.
import pandas as pd
import spacy
nlp = spacy.load('en_core_web_sm')
from faker import Faker
fake = Faker()
df = pd.read_csv('my/huge/dataset.csv')
e.g. df = pd.DataFrame({'text': ['Michael Jackson was a famous singer and songwriter.']})
So, from the text, I am trying to find names of people and replace them with fake names from the faker library, adding the result to a new column, as follows.
person_list = [[n.text for n in doc.ents]
               for doc in nlp.pipe(df.text.values)
               if [n.label_ == 'PER' for n in doc.ents]]
flat_person_list = list(set([item for sublist in person_list for item in sublist]))
fake_person_name = [fake.name() for n in range(len(flat_person_list))]
name_dict = dict(zip(flat_person_list, fake_person_name))
df['name'] = df.text.replace(name_dict, regex=True)  # assign to a column, not the .name attribute
The problem is that it is taking forever to run and I am not sure how to enhance the performance of the code, so it can run faster.
OK, I think I found a better way of doing text replacement in pandas, thanks to Florian C's comment.
The spaCy model still takes a lot of time, but that part I cannot change. However, instead of str.replace, I decided to use map and a lambda, so now the last line is as follows:
df['name'] = df.text.map(lambda x: name_dict.get(x, x))
I have a list of words in a dataframe which I would like to replace with an empty string.
I have a column named source which I have to clean properly.
e.g. replace 'siliconvalley.co' with 'siliconvalley'
I created a list which is
list = ['.com','.co','.de','.co.jp','.co.uk','.lk','.it','.es','.ua','.bg','.at','.kr']
and replace them with empty string
for l in list:
    df['source'] = df['source'].str.replace(l, '')
In the output, I am getting 'silinvalley', which means it has also replaced 'co' rather than only '.co'.
I want the code to replace only text that exactly matches the pattern. Please help!
This would be one way. You would have to be careful with the order of replacement: if '.co' comes before '.co.uk', you don't get the desired result.
df["source"].replace('|'.join([re.escape(i) for i in list_]), '', regex=True)
Minimal example:
import pandas as pd
import re
list_ = ['.com','.co.uk','.co','.de','.co.jp','.lk','.it','.es','.ua','.bg','.at','.kr']
df = pd.DataFrame({
    'source': ['google.com', 'google.no', 'google.co.uk']
})
pattern = '|'.join([re.escape(i) for i in list_])
df["new_source"] = df["source"].replace(pattern, '', regex=True)
print(df)
#          source new_source
# 0    google.com     google
# 1     google.no  google.no
# 2  google.co.uk     google
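A small variation on the snippet above, sketched here, avoids having to hand-order the list: sort the suffixes by length before joining them, so longer patterns such as '.co.uk' are tried before shorter prefixes such as '.co'.
# longest suffixes first, so '.co.uk' wins over '.co' in the alternation
pattern = '|'.join(re.escape(i) for i in sorted(list_, key=len, reverse=True))
df["new_source"] = df["source"].replace(pattern, '', regex=True)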
I am using scattertext to parse an xlsx document, but I am working with a non-English language and would be very happy to add lemmatization and tokenization. I've checked these with spaCy alone and they work, but I have no clue how to integrate them into my scattertext plot.
import pandas as pd
import spacy
import pl_core_news_sm
nlp = spacy.load("pl_core_news_sm")
#nlp = pl_core_news_sm.load()
import scattertext as st
from pprint import pprint
from spacy.lang.pl.stop_words import STOP_WORDS
df = pd.read_excel("/home/poodle/Desktop/myfile.xlsx", sheet_name = 'Arkusz1', error_bad_lines = False)
corpus = st.CorpusFromPandas(
    df,
    category_col = 'Evaluation',
    text_col = 'Opis',
    nlp = st.whitespace_nlp_with_sentences).build().remove_terms(STOP_WORDS, ignore_absences=True)
html = st.produce_scattertext_explorer(corpus,
                                        category = 'Nonsense',
                                        category_name = 'Nonsense',
                                        not_category_name = 'Correct',
                                        minimum_term_frequency = 0,
                                        width_in_pixels = 800,
                                        metadata = corpus.get_df()['Autor'],
                                        save_svg_button = True)
open('./Convention-Visualization6.html', 'wb').write(html.encode('utf-8'))
Is my code overall ok?
Scattertext has a specific pipeline for displaying lemmas instead of tokens.
To start, please use spaCy to parse your data frame of documents instead of scattertext's whitespace tokenizer.
I'm using spaCy's English parser here, but you should be sure to use a Polish version, if available.
import scattertext as st
import spacy
nlp = spacy.load('en')
df = st.SampleCorpora.ConventionData2012.get_data().assign(
    parse=lambda df: df.text.apply(nlp)
)
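For the Polish data in your question, the same step would presumably look something like the following, assuming pl_core_news_sm is installed and reusing the 'Opis' text column from your read_excel call:
import pandas as pd
import spacy

nlp = spacy.load("pl_core_news_sm")
df = pd.read_excel("/home/poodle/Desktop/myfile.xlsx", sheet_name='Arkusz1').assign(
    parse=lambda df: df.Opis.apply(nlp)  # parse the Polish text column with spaCy
)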
Next, I create a Scattertext corpus from the data frame, using the column containing the spaCy Doc objects we created in the previous step.
Also, we use the st.FeatsFromSpacyDoc(use_lemmas=True) feature extractor to extract lemmas instead of tokens.
corpus = st.CorpusFromParsedDocuments(
    df, category_col='party', parsed_col='parse',
    feats_from_spacy_doc=st.FeatsFromSpacyDoc(use_lemmas=True)
)
I like to use only unigrams (unilemmas in this case) and isolate the 2,000 most informative lemmas to display.
corpus = corpus.build().get_unigram_corpus().compact(st.AssociationCompactor(2000))
Finally, I create an html object which makes each axis in the plot the dense rank of a lemma's frequency.
html = st.produce_scattertext_explorer(
    corpus,
    category='democrat',
    category_name='Democratic',
    not_category_name='Republican',
    minimum_term_frequency=0, pmi_threshold_coefficient=0,
    width_in_pixels=1000, metadata=corpus.get_df()['speaker'],
    transform=st.Scalers.dense_rank,
    max_overlapping=3
)
open('./demo_lemmas.html', 'w').write(html)
print('open ./demo_lemmas.html in Chrome')
I am working in jupyter notebook and have a pandas dataframe "data":
Question_ID | Customer_ID | Answer
1           | 234         | Data is very important to use because ...
2           | 234         | We value data since we need it ...
I want to go through the text in column "Answer" and get the three words before and after the word "data".
So in this scenario I would have gotten "is very important"; "We value", "since we need".
Is there a good way to do this within a pandas dataframe? So far I have only found solutions where "Answer" would be its own file run through Python code (without a pandas dataframe). While I realize that I need to use the NLTK library, I haven't used it before, so I don't know what the best approach would be. (This was a great example: Extracting a word and its prior 10 word context to a dataframe in Python)
This may work:
import pandas as pd
import re
df = pd.read_csv('data.csv')
for value in df.Answer.values:
    non_data = re.split('Data|data', value)  # split text removing "data"
    terms_list = [term for term in non_data if len(term) > 0]  # skip empty terms
    substrs = [term.split()[0:3] for term in terms_list]  # slice and grab the first three terms
    result = [' '.join(term) for term in substrs]  # combine the terms back into substrings
    print(result)
output:
['is very important']
['We value', 'since we need']
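If you want the result stored back in the dataframe rather than printed, the same steps can be wrapped in a function and applied to the Answer column; 'Context' below is just an illustrative name for the new column:
def context_words(value):
    non_data = re.split('Data|data', value)                # split text removing "data"
    terms_list = [term for term in non_data if len(term) > 0]
    substrs = [term.split()[0:3] for term in terms_list]   # first three words of each chunk
    return [' '.join(term) for term in substrs]

df['Context'] = df['Answer'].apply(context_words)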
A solution using a generator expression together with the re.findall and itertools.chain.from_iterable functions:
import pandas as pd, re, itertools
data = pd.read_csv('test.csv')  # change to your current file path
data_adjacents = ((i for sublist in (list(filter(None, t))
                                     for t in re.findall(r'(\w*?\s*\w*?\s*\w*?\s+)(?=\bdata\b)|(?<=\bdata\b)(\s+\w*\s*\w*\s*\w*)', l, re.I))
                   for i in sublist)
                  for l in data.Answer.tolist())
print(list(itertools.chain.from_iterable(data_adjacents)))
The output:
[' is very important', 'We value ', ' since we need']