I have a large dataset with 250,000 entries, and the text column I am processing contains a sentence in each row.
import pandas as pd
import spacy
nlp = spacy.load('en_core_web_sm')
from faker import Faker
fake = Faker()
df = pd.read_csv('my/huge/dataset.csv')
e.g. df = pd.DataFrame({'text': ['Michael Jackson was a famous singer and songwriter.']})
From the text column, I am trying to find names of people, replace them with fake names from the Faker library, and add the result to a new column, as follows.
person_list = [[ent.text for ent in doc.ents if ent.label_ == 'PERSON']  # en_core_web_sm labels people as PERSON; filter inside the comprehension
               for doc in nlp.pipe(df.text.values)]
flat_person_list = list(set([item for sublist in person_list for item in sublist]))
fake_person_name = [fake.name() for n in range(len(flat_person_list))]
name_dict = dict(zip(flat_person_list, fake_person_name))
df['name'] = df.text.replace(name_dict, regex=True)  # df.name would set an attribute, not a column
The problem is that it takes forever to run, and I am not sure how to improve the performance of the code so it runs faster.
OK, I think I found a better way of doing the text replacement in pandas, thanks to Florian C's comment.
The spaCy model still takes a lot of time, and that part I cannot change. However, instead of str.replace I decided to use map with a lambda, so the last line is now as follows:
df['name'] = df.text.map(lambda x: name_dict.get(x, x))
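Note that dict.get only swaps a cell whose entire text equals a dictionary key; if the names sit inside longer sentences, one common alternative is a single compiled regex pass. A minimal sketch, assuming name_dict maps real names to fake names as built above:

import re

# Build one alternation over all real names; re.escape guards regex metacharacters.
pattern = re.compile('|'.join(re.escape(name) for name in name_dict))

# One regex pass per row, replacing every matched name with its fake counterpart.
df['name'] = df.text.map(lambda s: pattern.sub(lambda m: name_dict[m.group(0)], s))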
I have a spaCy model for text generation, and I want to create a pandas dataframe with all the texts that my spaCy model produces in each iteration. How can I save the spacy.tokens.doc.Doc output into a pandas dataframe?
import augmenty
import pandas as pd
import spacy

nlp = spacy.load('en_core_web_sm')
newDataSet = pd.DataFrame()  # was pd.dataframe(), which does not exist
docs = nlp.pipe(df['Text'])
syn_augmenter = augmenty.load('random_synonym_insertion.v1', level=0.1)
for doc in augmenty.docs(docs, augmenter=syn_augmenter, nlp=nlp):
    newDataSet = newDataSet.add(doc)  # this produces an error
You probably want to use the DframCy library to make that happen. It is also recommended by spaCy: https://spacy.io/universe/project/dframcy. A snippet I use is:
import spacy
from dframcy import DframCy
from tqdm import tqdm
nlp = spacy.load('en_core_web_trf')
dframcy = DframCy(nlp)
columns=["id", "text", "start", "end", "pos_", "tag_", "dep_", \
"head", "ent_type_", "lemma_", "lower_", "is_punct", "is_quote", "is_digit"]
def get_features(item):
doc = dframcy.nlp(item[1]["discourse_text"])
annotation_dataframe = dframcy.to_dataframe(doc, columns=columns)
annotation_dataframe['index'] = item[0]
return annotation_dataframe
results = []
for item in tqdm(df.iterrows(), total=df.shape[0]):
results.append(get_features(item))
features = pd.concat(results)
features
The columns list denotes which attributes you want returned. It is passed to DframCy, which extracts the features and returns a dataframe per document. If you have a table of strings that you want to tokenize and extract features from, you need to iterate over it; tqdm tracks the overall progress of the for-loop. Concatenating the list of per-document dataframes gives you a complete overview.
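If the per-row calls become a bottleneck, here is a sketch of the same idea using nlp.pipe to batch the parsing (assuming the same dataframe with a discourse_text column as above):

# Batch-parse all texts; nlp.pipe is usually faster than calling nlp row by row.
frames = [
    dframcy.to_dataframe(doc, columns=columns).assign(index=idx)
    for idx, doc in zip(df.index, nlp.pipe(df["discourse_text"]))
]
features = pd.concat(frames)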
I am using scattertext to parse an xlsx document, but I am working in a non-English language and would be most happy to add lemmatization and tokenization. I've checked these with spaCy alone and they work, but I have no clue how to integrate them into my scattertext plot.
import pandas as pd
import spacy
import pl_core_news_sm
nlp = spacy.load("pl_core_news_sm")
#nlp = pl_core_news_sm.load()
import scattertext as st
from pprint import pprint
from spacy.lang.pl.stop_words import STOP_WORDS
df = pd.read_excel("/home/poodle/Desktop/myfile.xlsx", sheet_name='Arkusz1')  # error_bad_lines is a read_csv option, not read_excel
corpus = st.CorpusFromPandas(
    df,
    category_col='Evaluation',
    text_col='Opis',
    nlp=st.whitespace_nlp_with_sentences).build().remove_terms(STOP_WORDS, ignore_absences=True)
html = st.produce_scattertext_explorer(corpus,
                                       category='Nonsense',
                                       category_name='Nonsense',
                                       not_category_name='Correct',
                                       minimum_term_frequency=0,
                                       width_in_pixels=800,
                                       metadata=corpus.get_df()['Autor'],
                                       save_svg_button=True)
open('./Convention-Visualization6.html', 'wb').write(html.encode('utf-8'))
Is my code overall OK?
Scattertext has a specific pipeline for displaying lemmas instead of tokens.
To start, please use spaCy to parse your data frame of documents instead of scattertext's whitespace tokenizer.
I'm using spaCy's English parser here, but you should be sure to use a Polish version, if available.
import scattertext as st
import spacy
nlp = spacy.load('en_core_web_sm')  # the bare 'en' shorthand no longer works in spaCy 3
df = st.SampleCorpora.ConventionData2012.get_data().assign(
    parse=lambda df: df.text.apply(nlp)
)
Next, I create a Scattertext corpus from the data frame, using the column containing the spaCy Doc objects we created in the previous step.
Also, we use the st.FeatsFromSpacyDoc(use_lemmas=True) feature extractor to extract lemmas instead of tokens.
corpus = st.CorpusFromParsedDocuments(
    df, category_col='party', parsed_col='parse',
    feats_from_spacy_doc=st.FeatsFromSpacyDoc(use_lemmas=True)
)
I like to use only unigrams (unilemmas in this case) and isolate the 2,000 most informative lemmas to display.
corpus = corpus.build().get_unigram_corpus().compact(st.AssociationCompactor(2000))
Finally, I create an html object which makes each axis in the plot the dense rank of a lemma's frequency.
html = st.produce_scattertext_explorer(
    corpus,
    category='democrat',
    category_name='Democratic',
    not_category_name='Republican',
    minimum_term_frequency=0, pmi_threshold_coefficient=0,
    width_in_pixels=1000, metadata=corpus.get_df()['speaker'],
    transform=st.Scalers.dense_rank,
    max_overlapping=3
)
open('./demo_lemmas.html', 'w').write(html)
print('open ./demo_lemmas.html in Chrome')
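Adapted to the Polish data from the question, the same pipeline might look like the sketch below. This is an assumption on my part: it presumes pl_core_news_sm is installed and reuses the myfile.xlsx path and the Evaluation/Opis columns from the question.

import pandas as pd
import scattertext as st
import spacy

nlp = spacy.load('pl_core_news_sm')  # Polish pipeline, as imported in the question
df = pd.read_excel('/home/poodle/Desktop/myfile.xlsx', sheet_name='Arkusz1').assign(
    parse=lambda df: df.Opis.apply(nlp)  # parse each Polish text with spaCy
)
corpus = st.CorpusFromParsedDocuments(
    df, category_col='Evaluation', parsed_col='parse',
    feats_from_spacy_doc=st.FeatsFromSpacyDoc(use_lemmas=True)  # lemmas, not tokens
).build().get_unigram_corpus().compact(st.AssociationCompactor(2000))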
The following code creates a dataframe, tokenizes, and filters stopwords. However, I am stuck trying to gather the results back into a column of the dataframe. Trying to put the results back into the dataframe (using the commented code) produces the error ValueError: Length of values does not match length of index. It seems the issue is with how I'm loading the lists back into the df; I think it is treating them one at a time. I'm not clear how to form a list of lists, which is what I think is needed. Neither append() nor extend() seems appropriate, or if they are, I'm not using them properly. Any insight would be greatly appreciated.
Minimal example
# Load libraries
import numpy as np
import pandas as pd
import spacy
# Create dataframe and tokenize
df = pd.DataFrame({'Text': ['This is the first text. It is two sentences',
                            'This is the second text, with one sentence']})
nlp = spacy.load("en_core_web_sm")
df['Tokens'] = ''
doc = df['Text']
doc = doc.apply(lambda x: nlp(x))
df['Tokens'] = doc
# df # check dataframe
# Filter stopwords
df['No Stop'] = ''
def test_loc(df):
    for i in df.index:
        doc = df.loc[i, 'Tokens']
        tokens_no_stop = [token.text for token in doc if not token.is_stop]
        print(tokens_no_stop)
        # df['No Stop'] = tokens_no_stop  # THIS PRODUCES AN ERROR

test_loc(df)
Result
['text', '.', 'sentences']
['second', 'text', ',', 'sentence']
As you mentioned, you need a list of lists in order for the assignment to work.
Another solution is to use pandas.apply, as you did at the beginning of your code.
import numpy as np
import pandas as pd
import spacy
df = pd.DataFrame({'Text': ['This is the first text. It is two sentences',
                            'This is the second text, with one sentence']})
nlp = spacy.load("en_core_web_sm")
df['Tokens'] = df['Text'].apply(nlp)

def remove_stop_words(tokens):
    return [token.text for token in tokens if not token.is_stop]

df['No Stop'] = df['Tokens'].apply(remove_stop_words)
Notice you don't have to create the column before assigning to it.
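For the two example sentences, this stores exactly the lists the question printed, now inside the dataframe:

print(df['No Stop'].tolist())
# [['text', '.', 'sentences'], ['second', 'text', ',', 'sentence']]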
I have about 10 columns of data in a CSV file that I want to get statistics on using Python. I am currently using the csv module to open the file and read the contents. But I also want to look at two particular columns, compare the data, and get a percentage of accuracy based on the data.
Although I can open the file and parse through the rows, I cannot figure out, for example, how to compare:
Row[i] Column[8] with Row[i] Column[10]
My pseudo code would be something like this:
category = Row[i] Column[8]
label = Row[i] Column[10]
if category != label:
    difference += 1
    totalChecked += 1
else:
    correct += 1
    totalChecked += 1
The only thing I am able to do is read the entire row. But I want to get the exact row and column of my two variables, category and label, and compare them.
How do I work with specific rows/columns for an entire Excel sheet?
Convert both to pandas dataframes and compare them, similarly to the example below. Whatever dataset you're working on, using the pandas module alongside any other relevant modules, and transforming the data into lists and dataframes, is the first step to working with it, in my opinion.
I've taken the liberty and the time to delve into this myself, as it will be useful to me going forward. The columns don't have to have the same lengths in this example, which is good. I've tested the code below (Python 3.8) and it works.
With only slight adaptations it can be used for your specific data columns, objects, and purposes.
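First, though, here is a minimal sketch of the row-wise comparison from your pseudocode, assuming a hypothetical data.csv whose columns 8 and 10 hold the category and label:

import pandas as pd

df = pd.read_csv('data.csv')               # hypothetical file name
matches = df.iloc[:, 8] == df.iloc[:, 10]  # row-wise: category column vs label column
accuracy = matches.mean() * 100            # share of equal rows, as a percentage
print(f"{matches.sum()} of {len(df)} rows match ({accuracy:.1f}% accuracy)")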
import pandas as pd
A = pd.read_csv(r'C:\Users\User\Documents\query_sequences.csv')  # dropped the s from _sequences
B = pd.read_csv(r'C:\Users\User\Documents\Sequence_reference.csv')
print(A.columns)
print(B.columns)
my_unknown_id = A['Unknown_sample_no'].tolist() #Unknown_sample_no
my_unknown_seq = A['Unknown_sample_seq'].tolist() #Unknown_sample_seq
Reference_Species1 = B['Reference_sequences_ID'].tolist()
Reference_Sequences1 = B['Reference_Sequences'].tolist() #it was Reference_sequences
Ref_dict = dict(zip(Reference_Species1, Reference_Sequences1)) #it was Reference_sequences
Unknown_dict = dict(zip(my_unknown_id, my_unknown_seq))
print(Ref_dict)
print(Unknown_dict)
import re

filename = 'seq_match_compare2.csv'
f = open(filename, 'a')  # in his example it was 'w'
headers = 'Query_ID, Query_Seq, Ref_species, Ref_seq, Match, Match start Position\n'
f.write(headers)

for ID, seq in Unknown_dict.items():
    for species, seq1 in Ref_dict.items():
        m = re.search(seq, seq1)  # does the unknown sequence occur in the reference?
        if m:
            match = m.group()
            pos = m.start() + 1
            f.write(str(ID) + ',' + seq + ',' + species + ',' + seq1 + ',' + match + ',' + str(pos) + '\n')
f.close()
I also tried it myself, assuming your columns contain integers, and following your specifications as best I could (it's my first attempt, so go easy). You could use the code below as a benchmark for how to move forward on your question.
Basically it does what you asked for in basic terms: it imports the CSV into Python using the pandas module, converts it to a dataframe, works only on the specific columns, makes new result columns, prints the results alongside the original data in the terminal, and saves everything to a new CSV. It is still a work in progress with some redundant lines, and it's as messy as my Python is, but it works, and it shows how you can convert your columns and rows into lists and dataframes, do calculations with them in Python, and get the results back out to a new CSV.
import pandas as pd

A = pd.read_csv(r'C:\Users\User\Documents\book6 category labels.csv')
A["Category"].fillna("empty data - missing value", inplace=True)  # flag missing values
# A["Blank1"].fillna("empty data - missing value", inplace=True)
# ...etc
print(A.columns)

# Convert the columns you care about into lists
MyCat = A['Category'].tolist()
MyLab = A['Label'].tolist()
My_Cats = A['Category1'].tolist()
My_Labs = A['Label1'].tolist()

Ref_dict = dict(zip(My_Cats, My_Labs))  # map each category to its label
print(Ref_dict)

print("Given Dataframe :\n", A)

# Row-wise difference of the two numeric columns, added as a new column
A['Lab-Cat_diff'] = A['Category1'].sub(A['Label1'], axis=0)
print("\nDifference of Category1 and Label1 :\n", A)

# You can do other matches, comparisons, and calculations here and add them to the output.

# Save the results, including the new column, to a new CSV
A.to_csv('some_name5523.csv')
Yes, I know it's by no means perfect at all, but I wanted to give you a heads-up about pandas and dataframes for doing what you want moving forward.
I am working in a Jupyter notebook and have a pandas dataframe "data":
Question_ID | Customer_ID | Answer
1           | 234         | Data is very important to use because ...
2           | 234         | We value data since we need it ...
I want to go through the text in column "Answer" and get the three words before and after the word "data".
So in this scenario I would have gotten "is very important"; "We value", "since we need".
Is there a good way to do this within a pandas dataframe? So far I have only found solutions where "Answer" would be its own file run through Python code (without a pandas dataframe). While I realize that I may need the NLTK library, I haven't used it before, so I don't know what the best approach would be. (This was a great example: Extracting a word and its prior 10 word context to a dataframe in Python)
This may work:
import pandas as pd
import re

df = pd.read_csv('data.csv')
for value in df.Answer.values:
    non_data = re.split('Data|data', value)  # split the text, removing "data"
    terms_list = [term for term in non_data if len(term) > 0]  # skip empty terms
    substrs = [term.split()[0:3] for term in terms_list]  # slice and grab the first three words
    result = [' '.join(term) for term in substrs]  # combine the words back into substrings
    print(result)  # Python 3 print(); the original used the Python 2 print statement
output:
['is very important']
['We value', 'since we need']
A solution using a generator expression with the re.findall and itertools.chain.from_iterable functions:
import itertools
import re

import pandas as pd

data = pd.read_csv('test.csv')  # change to your current file path
data_adjacents = ((i for sublist in (list(filter(None, t))
                                     for t in re.findall(r'(\w*?\s*\w*?\s*\w*?\s+)(?=\bdata\b)|(?<=\bdata\b)(\s+\w*\s*\w*\s*\w*)', l, re.I))
                   for i in sublist)
                  for l in data.Answer.tolist())
print(list(itertools.chain.from_iterable(data_adjacents)))
The output:
[' is very important', 'We value ', ' since we need']
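If you want to keep the results inside the dataframe, as the question asks, here is a sketch that wraps the first answer's splitting logic in a function and assigns it to a new column (the column name Context is my own choice):

import re
import pandas as pd

def data_context(text):
    # Up to three words on either side of each occurrence of "data".
    parts = [p for p in re.split('Data|data', text) if p.strip()]
    return [' '.join(p.split()[:3]) for p in parts]

df = pd.DataFrame({'Answer': ['Data is very important to use because ...',
                              'We value data since we need it ...']})
df['Context'] = df['Answer'].apply(data_context)
print(df['Context'].tolist())
# [['is very important'], ['We value', 'since we need']]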