How to apply tf-idf to rows of text - python

I have rows of blurbs (in text format) and I want to use tf-idf to define the weight of each word. Below is the code:
def remove_punctuations(text):
    for punctuation in string.punctuation:
        text = text.replace(punctuation, '')
    return text

df["punc_blurb"] = df["blurb"].apply(remove_punctuations)
df = pd.DataFrame(df["punc_blurb"])

vectoriser = TfidfVectorizer()
vects = vectoriser.fit_transform(df["punc_blurb"])
df["blurb_Vect"] = list(vects.toarray())

df_vectoriser = pd.DataFrame(vects.toarray(),
                             columns=vectoriser.get_feature_names())
print(df_vectoriser)
All I get is a massive list of numbers, and I am no longer sure whether it is giving me the TF or the TF-IDF, since the frequent words (the, and, etc.) all have a score greater than 0.
The goal is to see the weights in a tf-idf column as shown below, and I am unsure whether I am doing this in the most efficient way:
Goal Output table

You don't need a punctuation remover if you use TfidfVectorizer. It takes care of punctuation automatically, by virtue of the default token_pattern parameter:
from sklearn.feature_extraction.text import TfidfVectorizer
df = pd.DataFrame({"blurb":["this is a sentence", "this is, well, another one"]})
# the token_pattern below is just the default, written out explicitly
vectorizer = TfidfVectorizer(token_pattern='(?u)\\b\\w\\w+\\b')
df["tf_idf"] = list(vectorizer.fit_transform(df["blurb"].values.astype("U")).toarray())

# map each non-zero weight back to its term, one dict per row
vocab = sorted(vectorizer.vocabulary_.keys())
df["tf_idf_dic"] = df["tf_idf"].apply(lambda x: {k: v for k, v in dict(zip(vocab, x)).items() if v != 0})

Related

TFIDF separate for each label

Using TfidfVectorizer (sklearn), how do I obtain a word ranking based on tf-idf score for each label separately? I want the top words for each label (positive and negative).
Relevant code:
vectorizer = TfidfVectorizer(sublinear_tf=True, max_df=0.5, stop_words='english', use_idf=True, ngram_range=(1, 1))
features_train = vectorizer.fit_transform(features_train).todense()
features_test = vectorizer.transform(features_test).todense()
feature_names = vectorizer.get_feature_names()

for i in range(len(features_test)):
    first_document_vector = features_test[i]
    df_t = pd.DataFrame(first_document_vector.T, index=feature_names, columns=["tfidf"])
    print(df_t.sort_values(by=["tfidf"], ascending=False).head(50))
This will give you positive, neutral, and negative sentiment scores for each row of comments in a field of a dataframe. There is a lot of preprocessing code to get things cleaned up, filter out stop words, do some basic charting, etc.
import pickle
import pandas as pd
import numpy as np
import re
import nltk
import matplotlib.pyplot as plt
from nltk.corpus import stopwords
df = pd.read_csv('C:\\your_path\\test_dataset.csv')
print(df.shape)
# let's experiment with some sentiment analysis concepts
# first we need to clean up the stuff in the independent field of the DF we are working with
df['body'] = df[['body']].astype(str)
df['review_text'] = df[['review_text']].astype(str)
df['body'] = df['body'].str.replace(r'\d+', '', regex=True)
df['review_text'] = df['review_text'].str.replace(r'\d+', '', regex=True)
# get rid of special characters
df['body'] = df['body'].str.replace(r'[^\w\s]+', '', regex=True)
df['review_text'] = df['review_text'].str.replace(r'[^\w\s]+', '', regex=True)
# collapse double spaces into single spaces
df['body'] = df['body'].str.replace(r'\s{2,}', ' ', regex=True)
df['review_text'] = df['review_text'].str.replace(r'\s{2,}', ' ', regex=True)
# convert all case to lower
df['body'] = df['body'].str.lower()
df['review_text'] = df['review_text'].str.lower()
# It looks like the language in body and review_text is very similar (2 fields in dataframe). let's check how closely they match...
# seems like the tone is similar, but the text is not matching at a high rate...less than 20% match rate
import difflib
body_list = df['body'].tolist()
review_text_list = df['review_text'].tolist()
body = body_list
reviews = review_text_list
s = difflib.SequenceMatcher(None, body, reviews).ratio()
print ("ratio:", s, "\n")
# filter out stop words
# these are the most common words such as: “the“, “a“, and “is“.
from nltk.corpus import stopwords
english_stopwords = stopwords.words('english')
print(len(english_stopwords))
text = str(body_list)
# split into words
from nltk.tokenize import word_tokenize
tokens = word_tokenize(text)
# convert to lower case
tokens = [w.lower() for w in tokens]
# remove punctuation from each word
import string
table = str.maketrans('', '', string.punctuation)
stripped = [w.translate(table) for w in tokens]
# remove remaining tokens that are not alphabetic
words = [word for word in stripped if word.isalpha()]
# filter out stop words
from nltk.corpus import stopwords
stop_words = set(stopwords.words('english'))
words = [w for w in words if not w in stop_words]
print(words[:100])
# plot most frequently occurring words in a bar chart
# remove unwanted characters, numbers and symbols
df['review_text'] = df['review_text'].str.replace("[^a-zA-Z#]", " ")
# Let's try to remove the stopwords and short words (fewer than 3 letters) from the reviews.
from nltk.corpus import stopwords
stop_words = stopwords.words('english')
# function to remove stopwords
def remove_stopwords(rev):
    rev_new = " ".join([i for i in rev if i not in stop_words])
    return rev_new
# remove short words (length < 3)
df['review_text'] = df['review_text'].apply(lambda x: ' '.join([w for w in x.split() if len(w)>2]))
# remove stopwords from the text
reviews = [remove_stopwords(r.split()) for r in df['review_text']]
# make entire text lowercase
reviews = [r.lower() for r in reviews]
#Let’s again plot the most frequent words and see if the more significant words have come out.
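# NOTE: freq_words is not defined in the original answer; a minimal sketch of
# such a plotting helper (an assumption, not the author's exact code) could be:
def freq_words(texts, terms=30):
    all_words = ' '.join(texts).split()
    fdist = nltk.FreqDist(all_words)
    words_df = pd.DataFrame({'word': list(fdist.keys()), 'count': list(fdist.values())})
    top = words_df.nlargest(columns='count', n=terms)
    top.plot.bar(x='word', y='count', figsize=(12, 5))
    plt.show()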
freq_words(reviews, 35)
###############################################################################
###############################################################################
# Tf-idf is a very common technique for determining roughly what each document in a set of
# documents is “about”. It cleverly accomplishes this by looking at two simple metrics: tf
# (term frequency) and idf (inverse document frequency). Term frequency is the proportion
# of occurrences of a specific term to total number of terms in a document. Inverse document
# frequency is the inverse of the proportion of documents that contain that word/phrase.
# Simple, right!? The general idea is that if a specific phrase appears a lot of times in a
# given document, but it doesn’t appear in many other documents, then we have a good idea
# that the phrase is important in distinguishing that document from all the others.
# Starting with the CountVectorizer/TfidfTransformer approach...
# convert fields in dataframe to list
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
cvec = CountVectorizer(stop_words='english', min_df=1, max_df=.5, ngram_range=(1,2))
cvec
# Calculate all the n-grams found in all documents
from itertools import islice
cvec.fit(body_list)
list(islice(cvec.vocabulary_.items(), 20))
len(cvec.vocabulary_)
# Let’s take a moment to describe these parameters as they are the primary levers for adjusting what
# feature set we end up with. First is “min_df” or minimum document frequency. This sets the minimum
# number of documents that any term is contained in. This can either be an integer which sets the
# number specifically, or a decimal between 0 and 1 which is interpreted as a percentage of all documents.
# Next is “max_df” which similarly controls the maximum number of documents any term can be found in.
# If 90% of documents contain the word “spork” then it’s so common that it’s not very useful.
# Initialize the vectorizer with new settings and check the new vocabulary length
cvec = CountVectorizer(stop_words='english', min_df=.0025, max_df=.5, ngram_range=(1,2))
cvec.fit(body_list)
len(cvec.vocabulary_)
# Our next move is to transform the document into a “bag of words” representation which essentially is
# just a separate column for each term containing the count within each document. After that, we’ll
# take a look at the sparsity of this representation which lets us know how many nonzero values there
# are in the dataset. The more sparse the data is the more challenging it will be to model
cvec_counts = cvec.transform(body_list)
print('sparse matrix shape:', cvec_counts.shape)
print('nonzero count:', cvec_counts.nnz)
print('sparsity: %.2f%%' % (100.0 * cvec_counts.nnz / (cvec_counts.shape[0] * cvec_counts.shape[1])))
# get counts of frequently occurring terms; top 20
occ = np.asarray(cvec_counts.sum(axis=0)).ravel().tolist()
counts_df = pd.DataFrame({'term': cvec.get_feature_names(), 'occurrences': occ})
counts_df.sort_values(by='occurrences', ascending=False).head(20)
# Now that we’ve got term counts for each document we can use the TfidfTransformer to calculate the
# weights for each term in each document
transformer = TfidfTransformer()
transformed_weights = transformer.fit_transform(cvec_counts)
transformed_weights
# we can take a look at the top 20 terms by average tf-idf weight.
weights = np.asarray(transformed_weights.mean(axis=0)).ravel().tolist()
weights_df = pd.DataFrame({'term': cvec.get_feature_names(), 'weight': weights})
weights_df.sort_values(by='weight', ascending=False).head(20)
# FINALLY!!!!
# Here we are doing some sentiment analysis, and distilling the 'review_text' field into positive, neutral, or negative,
# based on the tone of the text in each record. Also, we are filtering out the records that have <.2 negative score;
# keeping only those that have >.2 negative score. This is interesting, but it can contain some non-intuitive results.
# For instance, one record in 'review_text' literally says 'no issues'. This is probably positive, but the algo sees the
# word 'no' and interprets the comment as negative. I would argue that it's positive. We'll circle back and resolve
# this potential issue a little later.
import nltk
nltk.download('vader_lexicon')
nltk.download('punkt')
from nltk.sentiment.vader import SentimentIntensityAnalyzer
sid = SentimentIntensityAnalyzer()
df['sentiment'] = df['review_text'].apply(lambda x: sid.polarity_scores(x))
def convert(x):
    if x < 0:
        return "negative"
    elif x > .2:
        return "positive"
    else:
        return "neutral"
df['result'] = df['sentiment'].apply(lambda x:convert(x['compound']))
# df.groupby(['brand','result']).size()
# df.groupby(['brand','result']).count()
# x = df.groupby(['review_text','brand'])['result'].value_counts(normalize=True)
x = df.groupby(['brand'])['result'].value_counts(normalize=True)
y = x.loc[(x.index.get_level_values(1) == 'negative')]
print(y[y>0.2])
Result:
brand result
ABH negative 0.500000
Alexander McQueen negative 0.500000
Anastasia negative 0.498008
BURBERRY negative 0.248092
Beats negative 0.272947
Bowers & Wilkins negative 0.500000
Breitling Official negative 0.666667
Capri Blue negative 0.333333
FERRARI negative 1.000000
Fendi negative 0.283582
GIORGIO ARMANI negative 1.000000
Jan Marini Skin Research negative 0.250000
Jaybird negative 0.235294
LANCÔME negative 0.500000
Longchamp negative 0.271605
Longchamps negative 0.500000
M.A.C negative 0.203390
Meaningful Beauty negative 0.222222
Polk Audio negative 0.256410
Pumas negative 0.222222
Ralph Lauren Polo negative 0.500000
Roberto Cavalli negative 0.250000
Samsung negative 0.332298
T3 Micro negative 0.224138
Too Faced negative 0.216216
VALENTINO by Mario Valentino negative 0.333333
YSL negative 0.250000
Feel free to skip things you find to be irrelevant, but as-is, the code does a fairly comprehensive NLP analysis.
Also, take a look at these two links.
https://www.analyticsvidhya.com/blog/2018/02/the-different-methods-deal-text-data-predictive-python/
https://towardsdatascience.com/fine-grained-sentiment-analysis-in-python-part-1-2697bb111ed4

Calculate TF-IDF using sklearn for variable-n-grams in python

Problem:
Using scikit-learn to find the number of hits of variable n-grams of a particular vocabulary.
Explanation:
I got examples from here.
Imagine I have a corpus and I want to find how many hits (counting) has a vocabulary like the following one:
myvocabulary = [(window=4, words=['tin', 'tan']),
                (window=3, words=['electrical', 'car']),
                (window=3, words=['elephant', 'banana'])]
What I call window here is the length of the span of words within which the words can appear, as follows:
'tin tan' is a hit (within 4 words)
'tin dog tan' is a hit (within 4 words)
'tin dog cat tan' is a hit (within 4 words)
'tin car sun eclipse tan' is NOT a hit; tin and tan appear more than 4 words away from each other.
I just want to count how many times (window=4, words=['tin', 'tan']) appears in a text, do the same for all the other entries, and then add the result to a pandas DataFrame in order to apply a tf-idf algorithm.
I could only find something like this:
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf = TfidfVectorizer(vocabulary = myvocabulary, stop_words = 'english')
tfs = tfidf.fit_transform(corpus.values())
where vocabulary is a simple list of strings, being single words or several words.
Besides, from the scikit-learn docs:
class sklearn.feature_extraction.text.CountVectorizer
ngram_range : tuple (min_n, max_n)
The lower and upper boundary of the range of n-values for different n-grams to be extracted. All values of n such that min_n <= n <= max_n will be used.
does not help either.
Any ideas?
I am not sure if this can be done using CountVectorizer or TfidfVectorizer. I have written my own function for doing this as follows:
import pandas as pd
import numpy as np
import string
def contained_within_window(token, word1, word2, threshold):
    word1 = word1.lower()
    word2 = word2.lower()
    token = token.translate(str.maketrans('', '', string.punctuation)).lower()

    if (word1 in token) and (word2 in token):
        word_list = token.split(" ")
        word1_index = [i for i, x in enumerate(word_list) if x == word1]
        word2_index = [i for i, x in enumerate(word_list) if x == word2]
        count = 0
        for i in word1_index:
            for j in word2_index:
                if np.abs(i - j) <= threshold:
                    count = count + 1
        return count
    return 0
SAMPLE:
corpus = [
'This is the first document. And this is what I want',
'This document is the second document.',
'And this is the third one.',
'Is this the first document?',
'I like coding in sklearn',
'This is a very good question'
]
df = pd.DataFrame(corpus, columns=["Test"])
your df will look like this:
Test
0 This is the first document. And this is what I...
1 This document is the second document.
2 And this is the third one.
3 Is this the first document?
4 I like coding in sklearn
5 This is a very good question
Now you can apply contained_within_window as follows:
sum(df.Test.apply(lambda x: contained_within_window(x,word1="this", word2="document",threshold=2)))
And you get:
2
You can simply run a for loop to check the different vocabulary entries.
You can then use this to construct your pandas DataFrame and apply tf-idf to it, which is straightforward.
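For example, a minimal sketch of that loop (the column names are only illustrative), building one count column per vocabulary entry and then feeding the counts to TfidfTransformer:
from sklearn.feature_extraction.text import TfidfTransformer

# the question's pseudo-vocabulary, expressed as plain (window, words) tuples
myvocabulary = [(4, ['tin', 'tan']),
                (3, ['electrical', 'car']),
                (3, ['elephant', 'banana'])]

counts = pd.DataFrame(index=df.index)
for window, (word1, word2) in myvocabulary:
    col = f"{word1}_{word2}_w{window}"
    counts[col] = df.Test.apply(
        lambda x: contained_within_window(x, word1=word1, word2=word2, threshold=window))

tfidf = TfidfTransformer().fit_transform(counts.values)
print(pd.DataFrame(tfidf.toarray(), columns=counts.columns))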

Is there a way to get only the IDF values of words using scikit or any other python package?

I have a text column in my dataset, and using that column I want the IDF calculated for all the words that are present. TF-IDF implementations in scikit-learn, like TfidfVectorizer, give me the TF-IDF values directly rather than just the word IDFs. Is there a way to get the word IDFs given a set of documents?
You can just use TfidfVectorizer with use_idf=True (the default value) and then extract the values from idf_.
from sklearn.feature_extraction.text import TfidfVectorizer
my_data = ["hello how are you", "hello who are you", "i am not you"]
tf = TfidfVectorizer(use_idf=True)
tf.fit_transform(my_data)
idf = tf.idf_
[BONUS] if you want to get the idf value for a particular word:
# If you want to get the idf value for a particular word, here "hello"
tf.idf_[tf.vocabulary_["hello"]]

TF-IDF Weighting after NLTK pre-processing

I am doing some textual preprocessing prior to machine learning. I have two features (Pandas series), abstract and title, and use the following function to preprocess the data (giving a numpy array, where each row contains the features for one training example):
def preprocessText(data):
    stemmer = nltk.stem.porter.PorterStemmer()
    preprocessed = []
    for each in data:
        # xlate (a punctuation translation table) and stopwords are defined elsewhere
        tokens = nltk.word_tokenize(each.lower().translate(xlate))
        filtered = [word for word in tokens if word not in stopwords]
        preprocessed.append([stemmer.stem(item) for item in filtered])
    print(Counter(sum([list(x) for x in preprocessed], [])))
    return np.array(preprocessed)
I now need to use TF-IDF to weight the features - how can I do this?
From what I see, you have a list of filtered words in the preprocessed variable. One way to do the TF-IDF transformation is to use scikit-learn's TfidfVectorizer. However, that class handles tokenization for you, i.e. you should provide a list of processed documents, each being a single string. So you have to edit your code to:
preprocessed.append(' '.join([stemmer.stem(item) for item in filtered]))
Then you can transform the list of documents as follows:
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf_model = TfidfVectorizer() # specify parameters here
X_tfidf = tfidf_model.fit_transform(preprocessed)
The output will be a matrix in compressed sparse row (CSR) format, which you can convert to a numpy array later on.
tfidf_model.vocabulary_ will contain a dictionary mapping the stemmed words to their column indices.
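For instance, a small sketch of how you might inspect the result (get_feature_names_out assumes a recent scikit-learn; older versions use get_feature_names):
import pandas as pd

# dense view of the weights: one row per document, one column per stemmed term
weights = pd.DataFrame(X_tfidf.toarray(),
                       columns=tfidf_model.get_feature_names_out())
print(weights.head())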

Can I use TfidfVectorizer in scikit-learn for non-English language? Also how do I read a non-English text in Python?

I have to read a text document which contains both English and non-English (specifically Malayalam) text in Python. Here is what I see:
>>>text_english = 'Today is a good day'
>>>text_non_english = 'ആരാണു സന്തോഷമാഗ്രഹിക്കാത്തത'
Now, if I write code to extract the first letter using
>>>print(text_english[0])
'T'
and when I run
>>>print(text_non_english[0])
�
To get the first letter, I have to write the following
>>>print(text_non_english[0:3])
ആ
Why does this happen?
My aim is to extract the words in the text so that I can feed them into the tf-idf transformer. When I create the tf-idf vocabulary from the Malayalam text, there are entries of only two letters, which is not correct; they are actually fragments of full words. What should I do so that the tf-idf transformer takes the full Malayalam words for the transformation instead of two-letter fragments?
I used the following code for this
>>>useful_text_1[1:3] # contains both English and Malayalam text
>>>vectorizer = TfidfVectorizer(sublinear_tf=True,max_df=0.5,stop_words='english')
# Learn vocabulary and idf, return term-document matrix
>>>vect_2 = vectorizer.fit_transform(useful_text_1[1:3])
>>>vectorizer.vocabulary_
Some of the words in the vocabulary are as below:
ഷമ
സന
സഹ
ർക
ർത
The vocabulary is not correct. It is not considering whole words. How do I rectify this?
You have to decode the text from UTF-8. In Python 2 each Malayalam letter is stored as 3 bytes in a byte string, so you need to use the unicode function:
In[36]: tn = 'ആരാണു സന്തോഷമാഗ്രഹിക്കാത്തത'
In[37]: tne=unicode(tn, encoding='utf-8')
In[38]: print(tne[0])
ആ
Using a dummy tokenizer actually worked for me
vectorizer = TfidfVectorizer(tokenizer=lambda x: x.split(), min_df=1)
>>> tn = 'ആരാണു സന്തോഷമാഗ്രഹിക്കാത്തത'
>>> vectorizer = TfidfVectorizer(tokenizer=lambda x: x.split(),min_df=1)
>>> vect_2 = vectorizer.fit_transform(tn.split())
>>> for x in vectorizer.vocabulary_:
... print x
...
സന്തോഷമാഗ്രഹിക്കാത്തത
ആരാണു
>>>
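A small follow-up sketch applying the same whitespace tokenizer to whole documents rather than to individual words (the two sample strings come from the question):
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ['Today is a good day', 'ആരാണു സന്തോഷമാഗ്രഹിക്കാത്തത']
vectorizer = TfidfVectorizer(tokenizer=lambda x: x.split(), min_df=1)
X = vectorizer.fit_transform(docs)
print(vectorizer.vocabulary_)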
An alternative is to try Text2Text to get the TF-IDF vectors. It supports hundreds of languages, including Malayalam.
import text2text as t2t
t2t.Handler([
'Today is a good day',
'ആരാണു സന്തോഷമാഗ്രഹിക്കാത്തത'
]).tfidf()
