I am facing a machine learning problem. The training data consists of numeric, categorical and date columns. I started training on the numerics and dates only (I converted the dates to numeric features such as epoch, weekday, hour and so on). Apart from a poor score, performance is very good (seconds of training on one million entries).
The problem is with the categoricals, most of which have many values, up to thousands.
The values consist of equipment brands, comments and the like, and are entered by humans, so I assume there is a lot of resemblance between them. I can sacrifice a bit of real-world representation in the data (hence some score) for feasibility (training time).
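For illustration, the date-to-numeric conversion described above could be sketched like this with pandas (pandas and the column name are my assumptions; the question does not name a library):
import pandas as pd

# 'visit_date' is a hypothetical column name used only for illustration
df = pd.DataFrame({'visit_date': pd.to_datetime(['2016-03-31 09:08:41',
                                                 '2016-04-01 19:08:42'])})
df['epoch'] = (df['visit_date'] - pd.Timestamp('1970-01-01')) // pd.Timedelta('1s')
df['weekday'] = df['visit_date'].dt.weekday   # 0 = Monday
df['hour'] = df['visit_date'].dt.hour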
Programming challenge: I came up with the following, based on this nice performance analysis:
import difflib

def gcm1(strings):
    clusters = {}
    co = 0
    for string in strings:
        if co % 10000 == 0:
            print(co)
        co = co + 1
        if string in clusters:
            clusters[string].append(string)
        else:
            # attach to the closest existing cluster key, if any is similar enough
            match = difflib.get_close_matches(string, clusters.keys(), 1, 0.90)
            if match:
                clusters[match[0]].append(string)
            else:
                clusters[string] = [string]
    return clusters

def reduce(lines_):
    clusters = gcm1(lines_)
    # map every member back to its cluster key
    clusters = dict((v, k) for k in clusters for v in clusters[k])
    return [clusters.get(item, item) for item in lines_]
An example call looks like this:
reduce(['XHSG11', 'XHSG8', 'DOIIV', 'D.OIIV ', ...])
=> ['XHSG11', 'XHSG11', 'DOIIV', 'DOIIV ', ...]
I am very much bound to Python, so I couldn't get other C-implemented code running.
Obviously, the call to difflib.get_close_matches in each iteration is the most expensive part.
Is there a better alternative, or a better approach than my algorithm?
As I said, with a million entries over, say, 10 columns, I can't even estimate when the algorithm will stop (more than 3 hours and still running with 16 GB of RAM and an i7 4790K CPU).
The data looks like this (extract):
Comments: [nan '1er rdv' '16H45-VE' 'VTE 2016 APRES 9H'
'ARM : SERENITE DV. RECUP.CONTRAT. VERIF TYPE APPAREIL. RECTIF TVA SI NECESSAIRE']
422227 different values
MODELE_CODE: ['VIESK02534' 'CMA6781031' 'ELMEGLM23HNATVMC' 'CMACALYDRADELTA2428FF'
'FBEZZCIAO3224SVMC']
10206 values
MARQUE_LIB: ['VIESSMANN' 'CHAFFOTEAUX ET MAURY' 'ELM LEBLANC' 'FR BG' 'CHAPPEE']
167 values
... more columns
The code below finds the top 150 words that appear most often in each of the two strings.
import re
from collections import Counter

# p and n are the two input strings
pwords = re.findall(r'\w+', p)
ptop150words = Counter(pwords).most_common(150)
sorted(ptop150words)  # note: sorted() returns a new list; the result is discarded here
nwords = re.findall(r'\w+', n)
ntop150words = Counter(nwords).most_common(150)
sorted(ntop150words)
The code below is meant to remove the words that appear in both strings.
def new(ntopwords, ptopwords):
    for i in ntopwords[:]:
        if i in ptopwords:
            ntopwords.remove(i)
            ptopwords.remove(i)
print(i)
However, there is no output from print(i). What is wrong?
Most likely your indentation. It should look like this:
def new(negativetop150words, positivetop150words):
    for i in negativetop150words[:]:
        if i in positivetop150words:
            negativetop150words.remove(i)
            positivetop150words.remove(i)
            print(i)
You could rely on set methods. Once you have both lists, convert them to sets. The common subset is the intersection of the two sets, and you can simply take the difference from both original sets:
positiveset = set(positivewords)
negativeset = set(negativewords)
commons = positiveset & negativeset
positivewords = sorted(positiveset - commons)
negativewords = sorted(negativeset - commons)
commonwords = sorted(commons)
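For example, with two small illustrative lists (not taken from the question), the snippet above gives:
positivewords = ['the', 'fda', 'approves', 'science']
negativewords = ['the', 'institute', 'science', 'arizona']
# after running the three set operations above:
# positivewords -> ['approves', 'fda']
# negativewords -> ['arizona', 'institute']
# commonwords   -> ['science', 'the']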
The code you posted does not call the function new(negativetop150words, positivetop150words). Also, per Jesse's comment, the print(i) call is outside the function. Here's the code that worked for me:
import re
from collections import Counter
def new(negativetop150words, positivetop150words):
    for i in negativetop150words[:]:
        if i in positivetop150words:
            negativetop150words.remove(i)
            positivetop150words.remove(i)
            print(i)
    return negativetop150words, positivetop150words
positive = 'The FDA is already fairly gung-ho about providing this. It receives about 1,000 applications a year and approves all but 1%. The agency makes sure there is sound science behind the request, and no obvious indication that the medicine would harm the patient.'
negative = 'Thankfully these irritating bits of bureaucracy have been duly dispatched. This victory comes courtesy of campaigning work by a libertarian think-tank, the Goldwater Institute, based in Arizona. It has been pushing right-to-try legislation for around four years, and it can now be found in 40 states. Speaking about the impact of these laws on patients, Arthur Caplan, a professor of bioethics at NYU School of Medicine in New York, says he can think of one person who may have been helped.'
positivewords = re.findall(r'\w+', positive)
positivetop150words = Counter(positivewords).most_common(150)
sorted(positivetop150words)
negativewords = re.findall(r'\w+', negative)
negativetop150words = Counter(negativewords).most_common(150)
words = new(negativewords, positivewords)
This prints:
a
the
It
and
about
the
How do I get the probability of a string being similar to another string in Python?
I want to get a decimal value like 0.9 (meaning 90%). Preferably with standard Python and its library.
e.g.
similar("Apple","Appel") #would have a high prob.
similar("Apple","Mango") #would have a lower prob.
There is a built-in one.
from difflib import SequenceMatcher

def similar(a, b):
    return SequenceMatcher(None, a, b).ratio()
Using it:
>>> similar("Apple","Appel")
0.8
>>> similar("Apple","Mango")
0.0
Solution #1: Python built-in
Use SequenceMatcher from difflib.
Pros: native Python library, no extra package needed.
Cons: too limited; there are so many other good algorithms for string similarity out there.
Example:
>>> from difflib import SequenceMatcher
>>> s = SequenceMatcher(None, "abcd", "bcde")
>>> s.ratio()
0.75
Solution #2: jellyfish library
It's a very good library with good coverage and few issues. It supports:
- Levenshtein Distance
- Damerau-Levenshtein Distance
- Jaro Distance
- Jaro-Winkler Distance
- Match Rating Approach Comparison
- Hamming Distance
Pros: easy to use, gamut of supported algorithms, tested.
Cons: not a native library.
Example:
>>> import jellyfish
>>> jellyfish.levenshtein_distance(u'jellyfish', u'smellyfish')
2
>>> jellyfish.jaro_distance(u'jellyfish', u'smellyfish')
0.89629629629629637
>>> jellyfish.damerau_levenshtein_distance(u'jellyfish', u'jellyfihs')
1
I think maybe you are looking for an algorithm describing the distance between strings. Here are some you may refer to:
Hamming distance
Levenshtein distance
Damerau–Levenshtein distance
Jaro–Winkler distance
TheFuzz is a package that implements Levenshtein distance in python, with some helper functions to help in certain situations where you may want two distinct strings to be considered identical. For example:
>>> fuzz.ratio("fuzzy wuzzy was a bear", "wuzzy fuzzy was a bear")
91
>>> fuzz.token_sort_ratio("fuzzy wuzzy was a bear", "wuzzy fuzzy was a bear")
100
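The calls above assume the package's fuzz module has been imported first, presumably:
>>> from thefuzz import fuzz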
You can create a function like:
def similar(w1, w2):
    w1 = w1 + ' ' * (len(w2) - len(w1))
    w2 = w2 + ' ' * (len(w1) - len(w2))
    return sum(1 if i == j else 0 for i, j in zip(w1, w2)) / float(len(w1))
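For reference, applying this function to the question's examples gives (values computed by hand from the padding logic above):
>>> similar("Apple", "Appel")
0.6
>>> similar("Apple", "Mango")
0.0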
Note that difflib.SequenceMatcher only finds the longest contiguous matching subsequence; this is often not what is desired. For example:
>>> a1 = "Apple"
>>> a2 = "Appel"
>>> a1 *= 50
>>> a2 *= 50
>>> SequenceMatcher(None, a1, a2).ratio()
0.012 # very low
>>> SequenceMatcher(None, a1, a2).get_matching_blocks()
[Match(a=0, b=0, size=3), Match(a=250, b=250, size=0)] # only the first block is recorded
Finding the similarity between two strings is closely related to the concept of pairwise sequence alignment in bioinformatics. There are many dedicated libraries for this, including biopython. This example uses the Needleman-Wunsch algorithm:
>>> from Bio.Align import PairwiseAligner
>>> aligner = PairwiseAligner()
>>> aligner.score(a1, a2)
200.0
>>> aligner.algorithm
'Needleman-Wunsch'
Using biopython or another bioinformatics package is more flexible than any part of the python standard library since many different scoring schemes and algorithms are available. Also, you can actually get the matching sequences to visualise what is happening:
>>> alignment = next(aligner.align(a1, a2))
>>> alignment.score
200.0
>>> print(alignment)
Apple-Apple-Apple-Apple-Apple-Apple-Apple-Apple-Apple-Apple-Apple-Apple-Apple-Apple-Apple-Apple-Apple-Apple-Apple-Apple-Apple-Apple-Apple-Apple-Apple-Apple-Apple-Apple-Apple-Apple-Apple-Apple-Apple-Apple-Apple-Apple-Apple-Apple-Apple-Apple-Apple-Apple-Apple-Apple-Apple-Apple-Apple-Apple-Apple-Apple-
|||-|-|||-|-|||-|-|||-|-|||-|-|||-|-|||-|-|||-|-|||-|-|||-|-|||-|-|||-|-|||-|-|||-|-|||-|-|||-|-|||-|-|||-|-|||-|-|||-|-|||-|-|||-|-|||-|-|||-|-|||-|-|||-|-|||-|-|||-|-|||-|-|||-|-|||-|-|||-|-|||-|-|||-|-|||-|-|||-|-|||-|-|||-|-|||-|-|||-|-|||-|-|||-|-|||-|-|||-|-|||-|-|||-|-|||-|-|||-|-|||-|-|||-|-
App-elApp-elApp-elApp-elApp-elApp-elApp-elApp-elApp-elApp-elApp-elApp-elApp-elApp-elApp-elApp-elApp-elApp-elApp-elApp-elApp-elApp-elApp-elApp-elApp-elApp-elApp-elApp-elApp-elApp-elApp-elApp-elApp-elApp-elApp-elApp-elApp-elApp-elApp-elApp-elApp-elApp-elApp-elApp-elApp-elApp-elApp-elApp-elApp-elApp-el
Package distance includes Levenshtein distance:
import distance
distance.levenshtein("lenvestein", "levenshtein")
# 3
You can find most of the text similarity methods and how they are calculated at this link: https://github.com/luozhouyang/python-string-similarity#python-string-similarity
Here are some of them (a short usage sketch follows the list). The library groups them into normalized and metric similarity/distance measures as well as shingle (n-gram) based measures:
Levenshtein
Normalized Levenshtein
Weighted Levenshtein
Damerau-Levenshtein
Optimal String Alignment
Jaro-Winkler
Longest Common Subsequence
Metric Longest Common Subsequence
N-Gram
Shingle(n-gram) based algorithms
Q-Gram
Cosine similarity
Jaccard index
Sorensen-Dice coefficient
Overlap coefficient (i.e.,Szymkiewicz-Simpson)
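A minimal usage sketch, assuming the library is installed under the name strsimpy as its README suggests:
from strsimpy.normalized_levenshtein import NormalizedLevenshtein

nl = NormalizedLevenshtein()
print(nl.similarity('Apple', 'Appel'))  # 0.6 = 1 - edit_distance/max_length (computed by hand)
print(nl.distance('Apple', 'Appel'))    # 0.4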
The built-in SequenceMatcher is very slow on large input; here's how it can be done with diff-match-patch:
from diff_match_patch import diff_match_patch

def compute_similarity_and_diff(text1, text2):
    dmp = diff_match_patch()
    dmp.Diff_Timeout = 0.0
    diff = dmp.diff_main(text1, text2, False)
    # similarity
    common_text = sum([len(txt) for op, txt in diff if op == 0])
    text_length = max(len(text1), len(text2))
    sim = common_text / text_length
    return sim, diff
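A quick illustrative call of the helper above (the exact value depends on the diff that diff_main produces):
sim, diff = compute_similarity_and_diff("Apple", "Appel")
print(round(sim, 2))  # a value between 0 and 1; higher means more similar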
BLEU score
BLEU, or the Bilingual Evaluation Understudy, is a score for comparing
a candidate translation of text to one or more reference translations.
A perfect match results in a score of 1.0, whereas a perfect mismatch
results in a score of 0.0.
Although developed for translation, it can be used to evaluate text
generated for a suite of natural language processing tasks.
Code:
import nltk
from nltk.translate import bleu
from nltk.translate.bleu_score import SmoothingFunction
smoothie = SmoothingFunction().method4
C1='Text'
C2='Best'
print('BLEUscore:',bleu([C1], C2, smoothing_function=smoothie))
Examples (updating C1 and C2):
C1='Test' C2='Test'
BLEUscore: 1.0
C1='Test' C2='Best'
BLEUscore: 0.2326589746035907
C1='Test' C2='Text'
BLEUscore: 0.2866227639866161
You can also compare sentence similarity:
C1='It is tough.' C2='It is rough.'
BLEUscore: 0.7348889200874658
C1='It is tough.' C2='It is tough.'
BLEUscore: 1.0
Textdistance:
TextDistance is a Python library for comparing the distance between two or more sequences using many algorithms. It has:
30+ algorithms
Pure python implementation
Simple usage
More than two sequences comparing
Some algorithms have more than one implementation in one class.
Optional numpy usage for maximum speed.
Example1:
import textdistance
textdistance.hamming('test', 'text')
Output:
1
Example2:
import textdistance
textdistance.hamming.normalized_similarity('test', 'text')
Output:
0.75
There are many metrics to define similarity and distance between strings, as mentioned above. I will give my five cents by showing an example of Jaccard similarity with Q-grams and an example of edit distance.
The libraries
from nltk.metrics.distance import jaccard_distance
from nltk.util import ngrams
from nltk.metrics.distance import edit_distance
Jaccard Similarity
1-jaccard_distance(set(ngrams('Apple', 2)), set(ngrams('Appel', 2)))
and we get:
0.33333333333333337
And for the Apple and Mango
1-jaccard_distance(set(ngrams('Apple', 2)), set(ngrams('Mango', 2)))
and we get:
0.0
Edit Distance
edit_distance('Apple', 'Appel')
and we get:
2
And finally,
edit_distance('Apple', 'Mango')
and we get:
5
Cosine Similarity on Q-Grams (q=2)
Another solution is to work with the textdistance library. I will provide an example of Cosine Similarity
import textdistance
1-textdistance.Cosine(qval=2).distance('Apple', 'Appel')
and we get:
0.5
Adding the spaCy NLP library to the mix as well:
import spacy
import jellyfish
from difflib import SequenceMatcher

#profile
def main():
    str1 = "Mar 31 09:08:41 The world is beautiful"
    str2 = "Mar 31 19:08:42 Beautiful is the world"
    print("NLP Similarity=", nlp(str1).similarity(nlp(str2)))
    print("Diff lib similarity", SequenceMatcher(None, str1, str2).ratio())
    print("Jellyfish lib similarity", jellyfish.jaro_distance(str1, str2))

if __name__ == '__main__':
    #python3 -m spacy download en_core_web_sm
    #nlp = spacy.load("en_core_web_sm")
    nlp = spacy.load("en_core_web_md")
    main()
Run with Robert Kern's line_profiler
kernprof -l -v ./python/loganalysis/testspacy.py
NLP Similarity= 0.9999999821467294
Diff lib similarity 0.5897435897435898
Jellyfish lib similarity 0.8561253561253562
However, the timings are revealing:
Function: main at line 32
Line # Hits Time Per Hit % Time Line Contents
==============================================================
32 #profile
33 def main():
34 1 1.0 1.0 0.0 str1= "Mar 31 09:08:41 The world is beautiful"
35 1 0.0 0.0 0.0 str2= "Mar 31 19:08:42 Beautiful is the world"
36 1 43248.0 43248.0 99.1 print("NLP Similarity=",nlp(str1).similarity(nlp(str2)))
37 1 375.0 375.0 0.9 print("Diff lib similarity",SequenceMatcher(None, str1, str2).ratio())
38 1 30.0 30.0 0.1 print("Jellyfish lib similarity",jellyfish.jaro_distance(str1, str2))
Here's what I thought of:
import string

def match(a, b):
    a, b = a.lower(), b.lower()
    error = 0
    # compare letter frequencies, ignoring order
    for i in string.ascii_lowercase:
        error += abs(a.count(i) - b.count(i))
    total = len(a) + len(b)
    return (total - error) / total

if __name__ == "__main__":
    print(match("pple inc", "Apple Inc."))
Python 3.6+
No third-party library needed
Works well in most scenarios
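For reference, the example call above prints roughly 0.944: after lower-casing, only the missing 'a' contributes to the error, so the result is (18 - 1) / 18 (computed by hand from the function above).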
On Stack Overflow, when you try to add a tag or post a question, it brings up all the relevant existing entries. That is convenient and is exactly the kind of algorithm I am looking for. Therefore, I coded a query set similarity filter.
def compare(qs, ip):
    al = 2
    v = 0
    for ii, letter in enumerate(ip):
        if letter == qs[ii]:
            v += al
        else:
            ac = 0
            for jj in range(al):
                if ii - jj < 0 or ii + jj > len(qs) - 1:
                    break
                elif letter == qs[ii - jj] or letter == qs[ii + jj]:
                    ac += jj
                    break
            v += ac
    return v

def getSimilarQuerySet(queryset, inp, length):
    return [k for tt, (k, v) in enumerate(reversed(sorted({it: compare(it, inp) for it in queryset}.items(), key=lambda item: item[1])))][:length]
if __name__ == "__main__":
    print(compare('apple', 'mongo'))
    # 0
    print(compare('apple', 'apple'))
    # 10
    print(compare('apple', 'appel'))
    # 7
    print(compare('dude', 'ud'))
    # 1
    print(compare('dude', 'du'))
    # 4
    print(compare('dude', 'dud'))
    # 6
    print(getSimilarQuerySet(
        [
            "java",
            "jquery",
            "javascript",
            "jude",
            "aja",
        ],
        "ja",
        2,
    ))
    # ['javascript', 'java']
Explanation
compare takes two strings and returns a positive integer.
You can edit the allowed variable al in compare; it indicates how large a range we need to search through. It works like this: the two strings are iterated over, and if the same character is found at the same index, the accumulator is increased by the largest value. Otherwise, we search within the allowed index range; if a match is found, the accumulator is increased based on how far away the letter is (the further, the smaller).
length indicates how many items you want as a result, i.e. the ones most similar to the input string.
I have my own for my purposes, which is 2x faster than difflib SequenceMatcher's quick_ratio(), while providing similar results. a and b are strings:
def similar(a, b):
    # for every character of a, count how many times it occurs in b
    score = 0
    for ch in a:
        score = score + b.count(ch)
    return score
I have two sentences in Python that represent the sets of words a user gives as input, as a query for an image retrieval program:
sentence1 = "dog is the"
sentence2 = "the dog is a very nice animal"
I have a set of images that each have a description, for example:
sentence3 = "the dog is running in your garden"
I want to retrieve all the images whose description is "very close" to the query entered by the user, but this description-based part should be normalized between 0 and 1, since it is just one part of a more complex search which also considers geotagging and low-level image features.
Given that I create three sets using:
set_sentence1 = set(sentence1.split())
set_sentence2 = set(sentence2.split())
set_sentence3 = set(sentence3.split())
And compute the intersection between sets as:
intersection1 = set_sentence1.intersection(set_sentence3)
intersection2 = set_sentence2.intersection(set_sentence3)
How can I normalize the comparison efficiently?
I don't want to use Levenshtein distance, since I'm not interested in string similarity but in set similarity.
Maybe a metric like:
Similarity1 = (1.0 + len(intersection1))/(1.0 + max(len(set_sentence1), len(set_sentence3)))
Similarity2 = (1.0 + len(intersection2))/(1.0 + max(len(set_sentence2), len(set_sentence3)))
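Plugging the example sentences from the question into the first formula (computed by hand):
# set_sentence1 = {'dog', 'is', 'the'}                                    -> 3 words
# set_sentence3 = {'the', 'dog', 'is', 'running', 'in', 'your', 'garden'} -> 7 words
# intersection1 = {'dog', 'is', 'the'}                                    -> 3 words
# Similarity1   = (1.0 + 3) / (1.0 + max(3, 7)) = 4 / 8 = 0.5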
Have you tried difflib?
An example from the docs:
>>> import sys
>>> from difflib import context_diff
>>> s1 = ['bacon\n', 'eggs\n', 'ham\n', 'guido\n']
>>> s2 = ['python\n', 'eggy\n', 'hamster\n', 'guido\n']
>>> for line in context_diff(s1, s2, fromfile='before.py', tofile='after.py'):
... sys.stdout.write(line)
*** before.py
--- after.py
***************
*** 1,4 ****
! bacon
! eggs
! ham
guido
--- 1,4 ----
! python
! eggy
! hamster
guido
We can try Jaccard similarity: len(set A intersection set B) / len(set A union set B). More info at https://en.wikipedia.org/wiki/Jaccard_index
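A minimal sketch of that formula, applied to the question's sets (no external library needed):
def jaccard(a, b):
    # |A intersection B| / |A union B|, a value between 0 and 1
    return len(a & b) / len(a | b)

print(jaccard(set_sentence1, set_sentence3))  # 3 / 7, roughly 0.43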
I'm trying to analyze a bunch of search terms, so many that individually they don't tell much. That said, I'd like to group the terms because I think similar terms should have similar effectiveness. For example,
Term             Group
NBA Basketball   1
Basketball NBA   1
Basketball       1
Baseball         2
It's a contrived example, but hopefully it explains what I'm trying to do. So then, what is the best way to do what I've described? I thought the nltk may have something along those lines, but I'm only barely familiar with it.
You'll want to cluster these terms, and for the similarity metric I recommend Dice's coefficient at the character n-gram level. For example, partition the strings into two-letter sequences to compare (term1 = "NB", "BA", "A ", " B", "Ba", ...).
nltk appears to provide Dice as nltk.metrics.association.BigramAssocMeasures.dice(), but it's simple enough to implement in a way that allows tuning. Here's how to compare these strings at the character rather than the word level:
import sys, operator

def tokenize(s, glen):
    # collect the set of character n-grams of length glen from s
    g2 = set()
    for i in range(len(s) - (glen - 1)):
        g2.add(s[i:i + glen])
    return g2

def dice_grams(g1, g2):
    return (2.0 * len(g1 & g2)) / (len(g1) + len(g2))

def dice(n, s1, s2):
    return dice_grams(tokenize(s1, n), tokenize(s2, n))

def main():
    GRAM_LEN = 4
    scores = {}
    # score every pair of command-line arguments
    for i in range(1, len(sys.argv)):
        for j in range(i + 1, len(sys.argv)):
            s1 = sys.argv[i]
            s2 = sys.argv[j]
            score = dice(GRAM_LEN, s1, s2)
            scores[s1 + ":" + s2] = score
    for item in sorted(scores.items(), key=operator.itemgetter(1)):
        print(item)

if __name__ == "__main__":
    main()
When this program is run with your strings, the following similarity scores are produced:
./dice.py "NBA Basketball" "Basketball NBA" "Basketball" "Baseball"
('NBA Basketball:Baseball', 0.125)
('Basketball NBA:Baseball', 0.125)
('Basketball:Baseball', 0.16666666666666666)
('NBA Basketball:Basketball NBA', 0.63636363636363635)
('NBA Basketball:Basketball', 0.77777777777777779)
('Basketball NBA:Basketball', 0.77777777777777779)
At least for this example, the margin between the basketball and baseball terms should be sufficient for clustering them into separate groups. Alternatively, you may be able to use the similarity scores more directly in your code with a threshold, as in the sketch below.
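A minimal sketch of that threshold idea, reusing the dice() helper above (the 0.5 threshold is an arbitrary assumption, not taken from the answer):
def group_terms(terms, threshold=0.5, gram_len=4):
    # greedy grouping: each term joins the first group whose first member
    # is similar enough, otherwise it starts a new group
    groups = []
    for term in terms:
        for group in groups:
            if dice(gram_len, group[0], term) >= threshold:
                group.append(term)
                break
        else:
            groups.append([term])
    return groups

print(group_terms(["NBA Basketball", "Basketball NBA", "Basketball", "Baseball"]))
# expected, from the scores above: [['NBA Basketball', 'Basketball NBA', 'Basketball'], ['Baseball']]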