Identify Visually Similar Strings in Python

I am working on a Python project in which I need to filter profane words, and I already have a filter in place. The only problem is that if a user swaps a character for a visually similar one (e.g. hello and h311o), the filter does not pick it up. Is there some way I could detect these words without hard-coding every combination?

What about translating l331sp33ch to leetspeech and applying a simple Levenshtein distance? (You need to pip install editdistance first.)
import editdistance  # pip install editdistance

try:
    from string import maketrans  # Python 2
except ImportError:
    maketrans = str.maketrans     # Python 3

t = maketrans("01345", "oleas")
print(editdistance.eval("h3110".translate(t), "hello"))
This results in 0.

Maybe build a relationship between the visually similar characters and what they can represent i.e.
subs = {'3': 'e', '1': 'l', '0': 'o'}  # etc.
and then you can use this to test against your database of forbidden words.
e.g.
input: he11
if any of the characters has an entry in subs:
subs['h']  # does not exist
subs['e']  # does not exist
subs['1']  # = 'l'
subs['1']  # = 'l'
Put these together to form a word and then search your forbidden list. I don't know if this is the fastest way of doing it, but it is "a" way.
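Here is a minimal sketch of that idea, assuming a small substitution map and a set of banned words (both are placeholders you would extend):
# Hypothetical sketch: map leet-style characters back to letters, then
# check the normalized word against a set of banned words.
subs = {'3': 'e', '1': 'l', '0': 'o', '4': 'a', '5': 's', '@': 'a', '$': 's'}
banned = {'hello'}  # placeholder list of forbidden words

def normalize(word):
    # replace each character by its mapped letter, if one exists
    return ''.join(subs.get(ch, ch) for ch in word.lower())

def is_banned(word):
    return normalize(word) in banned

print(is_banned('h3110'))  # True
print(is_banned('house'))  # False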
I'm interested to see what others come up with.
*disclaimer: I've done a year or so of Perl and am starting out learning Python right now. When I get the time. Which is very hard to come by.

Linear Replacement
You will want something adaptable to innovative orthographers. For a start, pattern-match the alphabetic characters to your lexicon of banned words, using other characters as wild cards. For instance, your example would get translated to "h...o", which you would match to your proposed taboo word, "hello".
Next, you would compare the non-alpha characters to a dictionary of substitutions, allowing common wild-card chars to stand for anything. For instance, asterisk, hyphen, and period could stand for anything; '4' and '#' could stand for 'A', and so on. However, you'll do this checking starting from the taboo word, not by generating all possibilities: the translation goes the other way.
You will have a little ambiguity, as some characters stand for multiple letters. "#" can be used in place of 'O' if you're getting crafty. Also note that not all the letters will be in your usual set: you'll want to deal with monetary symbols (the Euro, Yen, and Pound signs are all derived from letters), as well as foreign letters that happen to resemble Latin letters.
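As a rough illustration of the wild-card matching step (the one-word lexicon below is a placeholder, and a real substitution table would be much larger):
import re

# Sketch: keep letters, turn every other character into a wildcard,
# then see which banned words fit the resulting pattern.
banned_words = ['hello']   # placeholder lexicon
candidate = 'h311o'

pattern = ''.join(ch if ch.isalpha() else '.' for ch in candidate.lower())  # -> 'h...o'
regex = re.compile('^' + pattern + '$')

print([w for w in banned_words if regex.match(w)])  # ['hello']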
Multi-character replacements
That handles only the words that have the same length as the taboo word. Can you also handle abbreviations? There are a lot of combinations of the form "h-bomb", where the banned word appears as the first letter only: the effect is profane, but the match is more difficult, especially where the 'b's are replaced with a scharfes S (German), the 'm' with a Hebrew or Cyrillic character, and the 'o' with anything round from the entire font.
Context
There is also the problem that some words are perfectly legitimate in one context, but profane in a slang context. Are you also planning to match phrases, perhaps parsing a sentence for trigger words?
Training a solution
If you need a comprehensive solution, consider training a neural network with phrases and words you label as "okay" and "taboo", and let it run for a day. This can take a lot of the adaptation work off your shoulders, and enhancing the model isn't a difficult problem: add your new differentiating text and continue the training from the point where you left off.

Thank you to all who posted an answer to this question. More answers are welcome, as they may help others. I ended up going off of David Zemens' comment on the question.
I'd use a dictionary or list of common variants ("sh1t", etc.) which you could persist as a plain text file or JSON etc., and read into memory. This would allow you to add new entries as needed, independently of the code itself. If you're only concerned about profanities, then the list should be reasonably small to maintain, and new variations unlikely. I've used a hard-coded dict to represent a statistical t-table (with 1500 key/value pairs) in the past; it seems like your problem would not require nearly that many keys.
While this still means that all the words will be hard-coded, it will allow me to update the list more easily.
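For example, a minimal sketch of that approach (the filename variants.json and its sample contents are just placeholders):
import json

# Hypothetical: variants.json holds a JSON list such as ["sh1t", "h311o", ...]
# that can be edited without touching the code.
with open('variants.json', encoding='utf-8') as f:
    banned_variants = set(w.lower() for w in json.load(f))

def contains_profanity(text):
    # flag the text if any whitespace-separated token is a known variant
    return any(word.lower().strip('.,!?') in banned_variants for word in text.split())

print(contains_profanity('oh h311o there'))  # True, assuming "h311o" is in the file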

Related

Remove repeating characters from a sentence but retain the words' meaning

I want to remove repeating characters from a sentence but make it so that the words still retain their meaning (if they have any). For example: I'm so haaappppyyyy about offline school
should become I'm so happy about offline school. See, haaappppyyyy became happy, and offline & school stay the same instead of becoming ofline & schol.
I've tried two solutions, using re and itertools, but neither really fits what I'm searching for.
Using Regex :
import re

tweet = "I'm so haaappppyyyy about offline school"
repeat_char = re.compile(r"(.)\1{1,}", re.IGNORECASE)
tweet = repeat_char.sub(r"\1\1", tweet)
tweet = re.sub(r"(.)\1{2,}", r"\1", tweet)
output :
I'm so haappyy about offline school  # it keeps 2 chars for every repeated char
Using itertools:
import itertools

tweet = "I'm so happy about offline school"
tweet = ''.join(ch for ch, _ in itertools.groupby(tweet))
output :
I'm so hapy about ofline schol
How can I fix this? Should I make a list of words I want to exclude?
In addition, I want it to also be able to reduce words that follow a pattern to their base form. For example:
wkwk (base form)
wkwkwkwk
wkwkwkwkwkwkwk
I want to make the second and the third word into the first word, the base form
You can combine regex and NLP here by iterating over all words in a string, and once you find one with identical consecutive letters, reduce them to at most 2 consecutive occurrences of the same letter and run an automatic spellcheck to fix the spelling.
See an example Python code:
import re
from textblob import Word

tweet = "I'm so haaappppyyyy about offline school"
rx = re.compile(r'([^\W\d_])\1{2,}')
print(re.sub(r'[^\W\d_]+',
             lambda x: Word(rx.sub(r'\1\1', x.group())).correct() if rx.search(x.group()) else x.group(),
             tweet))
# => "I'm so happy about offline school"
The code uses the TextBlob library, but you may use any spellchecker you like.
Note that ([^\W\d_])\1{2,} matches any letter repeated three or more times in a row, and [^\W\d_]+ matches one or more letters.
This answer was originally written for Regex to reduce repeated chars in a string which was closed as duplicate before I could submit my post. So I "recycled" it here.
Regex is not always the best solution
Regex for validation of formats or input
A regex is often used for low-level pattern recognition and substitution.
It may be useful for validation of formats. You can see it as "dumb" automation.
Linguistics (NLP)
When it comes to natural language (NLP), or here spelling (dictionary lookups), semantics may play a role. Depending on the context, "ass" and "as" may both be correctly spelled, although the semantics are very different.
(I apologize for the rude examples, but I am not a native-speaker and those two had the most distinct meaning depending on re-duplication).
For those cases a regex or simple pattern recognition may not be sufficient. Applying it correctly can take more effort than researching a language-specific library or solution (including a basic application).
Examples for spelling that a regex may struggle with
Like the difference between "haappy" (orthographically invalid, but only the vowels "aa" are duplicated, not the consonants "pp"), "yeees" (contains no duplicates in its correct spelling), and "kiss" (correctly spelled with duplicate consonants).
Spelling correction requires more
For example, a dictionary to look up whether duplicate characters (vowels or consonants) are valid in the correct spelling of the word in its given form.
Consider a spelling-correction module
You could use textblob module for spelling correction:
To install:
pip install textblob
Example for some test-cases (independent words):
from textblob import TextBlob
incorrect_words = ["cmputr", "yeees", "haappy"] # incorrect spelling
text = ",".join(incorrect_words) # join them as comma separated list
print(f"original words: {text}")
b = TextBlob(text)
# prints the corrected spelling
print(f"corrected words: {b.correct()}")
Prints:
original words: cmputr,yeees,haappy
corrected words: computer,eyes,happy
Surprise: you might have expected "yes" (so did I). But the correction does not simply remove the two duplicated vowels "ee"; instead it rearranges the letters to keep almost all of them (5 of 6, only one "e" removed).
Example for the given sentence:
from textblob import TextBlob
tweet = "I'm so haaappppyyyy about offline school" # either escape or use different quotes when a single-quote (') is enclosed
print(TextBlob(tweet).correct())
Prints:
I'm so haaappppyyyy about office school
Unfortunately, the result is considerably worse:
not "happy"
semantically out of scope: "office" instead of "offline"
Apparently a preceding cleaning step using regex, as Wiktor suggests, may improve the result.
See also:
Stackabuse: Spelling Correction in Python with TextBlob, tutorial
documentation: TextBlob: Simplified Text Processing
Well, first of all you need a list (or set) of all allowed words, to compare with.
I'd approach it with the assumption (which might be wrong) that no word contains a sequence of more than two repeating characters. So for each word, generate a list of all potential candidates; for example, "haaappppppyyyy" would yield ["haappyy", "happyy", "happy", etc.]. Then it's just a matter of checking which of those candidates actually exists by comparing against the allowed word list.
The time complexity of this is quite high, though, so if it needs to go fast then throw a hash table on it or something :)
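A rough sketch of that candidate-generation idea (the allowed-word set here is a tiny placeholder; in practice you would load a real dictionary):
import itertools
import re

allowed_words = {'happy', 'offline', 'school', 'so'}  # placeholder dictionary

def candidates(word):
    # split into runs of identical characters, e.g. "haaappppyyyy" -> ["h", "aaa", "pppp", "yyyy"]
    runs = [m.group() for m in re.finditer(r'(.)\1*', word)]
    # for each run, try keeping either 1 or 2 copies of the character
    options = [{run[:1], run[:2]} for run in runs]
    return {''.join(combo) for combo in itertools.product(*options)}

def reduce_word(word):
    matches = candidates(word.lower()) & allowed_words
    # pick the shortest valid candidate as a simple tie-break
    return min(matches, key=len) if matches else word

print(reduce_word('haaappppyyyy'))  # happy
print(reduce_word('offline'))      # offline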

Scoring word similarity between arbitrary text

I have a list of over 500 very important, but arbitrary, strings. They look like:
list_important_codes = ['xido9','uaid3','frps09','ggix21']
What I know
*Casing is not important, but all other characters must match exactly.
*Every string starts with 4 alphabetical characters, and ends with either one or two numerical characters.
*I have a list of about 100,000 strings, list_recorded_codes, that were hand-typed and should match list_important_codes exactly, but about 10,000 of them don't. Because these strings were typed manually, the incorrect strings are usually only about one character off (errors such as: *an added space, *two letters switched around, *"01" instead of "1", etc.)
What I need to do
I need to iterate through list_recorded_codes and find all of their perfect matches within list_important_codes.
What I tried
I spent about 10 hours trying to manually program a way to fix each word, but it seems impractical and incredibly tedious, not to mention that when my list doubles in size at a later date, I would have to go through that manual process all over again.
The solution I think I need, and the expected output
I'm hoping that Python's NLTK can efficiently 'score' these arbitrary terms to find a 'best score'. For example, if the word in question is inputword = "gdix88" and it gets compared as score(inputword, "gdox89") = .84 and score(inputword, "sudh88") = .21, my expected output would be highscore = .84, highscoreword = 'gdox89'.
for manually_entered_text in ['xido9', 'uaid3', 'frp09', 'ggix21']:
    get_highest_score_from_important_words()  # returns word_with_highest_score
    manually_entered_text = word_with_highest_score
I am also willing to use a different set of tools to fix this issue if needed. but also, the simpler the better! Thank you!
The 'score' you are looking for is called an edit distance. There is quite a lot of literature and algorithms available - easy to find, but only after you know the proper term :)
See the corresponding wikipedia article.
The nltk package provides an implementation of the so-called Levenshtein edit-distance:
from nltk.metrics.distance import edit_distance

if __name__ == '__main__':
    print(edit_distance("xido9", "xido9 "))
    print(edit_distance("xido9", "xido8"))
    print(edit_distance("xido9", "xido9xxx"))
    print(edit_distance("xido9", "xido9"))
The results are 1, 1, 3 and 0 in this case.
Here is the documentation of the corresponding nltk module
There are more specialized versions of this score that take into account how frequent various typing errors are (for example, 'e' instead of 'r' might occur quite often because the keys are next to each other on a QWERTY keyboard).
But classic Levenshtein is where I would start.
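A minimal sketch of how that might look for this use case (the recorded codes below are hypothetical hand-typed entries; nltk's edit_distance is the only dependency):
from nltk.metrics.distance import edit_distance

list_important_codes = ['xido9', 'uaid3', 'frps09', 'ggix21']
list_recorded_codes = ['xido9 ', 'uiad3', 'frps09', 'ggix 21']  # hypothetical typos

def best_match(recorded, candidates):
    # pick the candidate with the smallest edit distance (case-insensitive)
    return min(candidates, key=lambda c: edit_distance(recorded.lower(), c.lower()))

corrected = [best_match(code, list_important_codes) for code in list_recorded_codes]
print(corrected)  # ['xido9', 'uaid3', 'frps09', 'ggix21']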
You could apply a dynamic programming approach to this problem. Once you have your scoring matrix, your alignment matrix, and your local and global alignment functions set up, you could iterate through list_important_codes and find the highest-scoring alignment in list_recorded_codes. Here is a project I did for DNA sequence alignment: DNA alignment. You can easily adapt it to your problem.

Python Comparing two Strings and Determining 'Uniqueness'

The title is a mess, so bear with me while I explain my question in more detail (really, it's a set of semi-related questions). I'm compiling a list of certain words from a large text file and storing them in a dictionary as keys, with their respective occurrence counts (integers) as the values. I want to apply several processes to consolidate the dictionary so that 'related' words get lumped together.
The first operation is plurals. I see no reason to have both a 'cat' and a 'cats' key in the dictionary. Same with car vs. cars, book vs. books, and so on. I want to write a function that (upon seeing a new word not currently in the dictionary) checks whether the new word is a plural form of any key currently in the dict (and vice versa).
if new_word ends with s -> check dict for a key that matches new_word[:-1]
else if new_word does not end in s -> check dict for new_word + 's'
Is there a better way to approach this problem? (I would obviously have to handle edge cases for plurals; this is very general at this point.)
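A minimal sketch of that naive check, assuming a word-count dict like the one described (it deliberately ignores irregular plurals and '-es'/'-ies' endings):
# Naive plural merging against an existing word-count dict.
word_counts = {'cat': 3, 'book': 2}

def merge_plural(new_word, counts):
    if new_word.endswith('s') and new_word[:-1] in counts:
        counts[new_word[:-1]] += 1                        # 'cats' folds into 'cat'
    elif not new_word.endswith('s') and new_word + 's' in counts:
        # fold the existing plural key into the singular form
        counts[new_word] = counts.pop(new_word + 's') + 1
    else:
        counts[new_word] = counts.get(new_word, 0) + 1

merge_plural('cats', word_counts)
merge_plural('cars', word_counts)
print(word_counts)  # {'cat': 4, 'book': 2, 'cars': 1}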
On the same topic, what if I want to determine if words are similar by consulting a database of known suffixes and prefixes and seeing if a new_word is just a previously seen word with a suffix or prefix attached.
I use nltk to handle a lot of other tasks in my program such as splitting into sentences and individual words but I would prefer to write my 'similar-ness' algorithm myself. Thank you in advance for your help guys!

Justadistraction: tokenizing English without whitespaces. Murakami SheepMan

I wondered how you would go about tokenizing strings in English (or other western languages) if whitespaces were removed?
The inspiration for the question is the Sheep Man character in the Murakami novel 'Dance Dance Dance'
In the novel, the Sheep Man is translated as saying things like:
"likewesaid, we'lldowhatwecan. Trytoreconnectyou, towhatyouwant," said the Sheep Man. "Butwecan'tdoit-alone. Yougottaworktoo."
So, some punctuation is kept, but not all. Enough for a human to read, but somewhat arbitrary.
What would be your strategy for building a parser for this? Common combinations of letters, syllable counts, conditional grammars, look-ahead/behind regexps etc.?
Specifically, python-wise, how would you structure a (forgiving) translation flow? Not asking for a completed answer, just more how your thought process would go about breaking the problem down.
I ask this in a frivolous manner, but I think it's a question that might get some interesting (nlp/crypto/frequency/social) answers.
Thanks!
I actually did something like this for work about eight months ago. I just used a dictionary of English words in a hashtable (for O(1) lookup times). I'd go letter by letter, matching whole words. It works well, but there are numerous ambiguities ("asshit" can be "ass hit" or "as shit"); resolving those would require much more sophisticated grammar analysis.
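A toy version of that letter-by-letter approach (the word set is a tiny placeholder for a real English dictionary, and the greedy shortest-match behaviour is exactly what produces the ambiguities mentioned above):
# Greedy letter-by-letter segmentation against a word set.
english_words = {'like', 'we', 'said', 'do', 'what', 'can'}  # placeholder dictionary

def naive_segment(text, words):
    tokens, current = [], ''
    for ch in text.lower():
        current += ch
        if current in words:       # shortest match wins -- a source of ambiguity
            tokens.append(current)
            current = ''
    return tokens, current         # 'current' holds any unmatched leftover

print(naive_segment('likewesaidwedowhatwecan', english_words))
# (['like', 'we', 'said', 'we', 'do', 'what', 'we', 'can'], '')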
First of all, I think you need a dictionary of English words -- you could try some methods that rely solely on some statistical analysis, but I think a dictionary has better chances of good results.
Once you have the words, you have two possible approaches:
You could categorize the words into grammar categories and use a formal grammar to parse the sentences -- obviously, you would sometimes get no match or multiple matches -- I'm not familiar with techniques that would allow you to loosen the grammar rules in case of no match, but I'm sure there must be some.
On the other hand, you could just take some large corpus of English text and compute relative probabilities of certain words being next to each other, getting a list of pairs and triples of words. Since that data structure would be rather big, you could use word categories (grammatical and/or based on meaning) to simplify it. Then you just build an automaton and choose the most probable transitions between the words.
I am sure there are many more possible approaches. You can even combine the two I mentioned, building some kind of grammar with weight attached to its rules. It's a rich field for experimenting.
I don't know if this is of much help to you, but you might be able to make use of this spelling corrector in some way.
This is just some quick code I wrote that I think would work fairly well to extract words from a snippet like the one you gave... It's not fully thought out, but I think something along these lines would work if you can't find a pre-packaged solution.
textstring = "likewesaid, we'lldowhatwecan. Trytoreconnectyou, towhatyouwant," said the Sheep Man. "Butwecan'tdoit-alone. Yougottaworktoo."
indiv_characters = list(textstring) #splits string into individual characters
teststring = ''
sequential_indiv_word_list = []
for cur_char in indiv_characters:
teststring = teststring + cur_char
# do some action here to test the testsring against an English dictionary where you can API into it to get True / False if it exists as an entry
if in_english_dict == True:
sequential_indiv_word_list.append(teststring)
teststring = ''
#at the end just assemble a sentence from the pieces of sequential_indiv_word_list by putting a space between each word
There are some more issues to be worked out. For example, if it never finds a match this would obviously not work, as it would just keep adding more characters; however, since your demo string has some spaces, you could have it recognize those too and automatically start over at each of them.
Also, you need to account for punctuation; write conditionals like
if cur_char == ',' or cur_char == '.':
    # start a new "word" automatically

How to do a Python split() on languages (like Chinese) that don't use whitespace as word separator?

I want to split a sentence into a list of words.
For English and European languages this is easy, just use split()
>>> "This is a sentence.".split()
['This', 'is', 'a', 'sentence.']
But I also need to deal with sentences in languages such as Chinese that don't use whitespace as word separator.
>>> u"这是一个句子".split()
[u'\u8fd9\u662f\u4e00\u4e2a\u53e5\u5b50']
Obviously that doesn't work.
How do I split such a sentence into a list of words?
UPDATE:
So far the answers seem to suggest that this requires natural language processing techniques and that the word boundaries in Chinese are ambiguous. I'm not sure I understand why. The word boundaries in Chinese seem very definite to me. Each Chinese word/character has a corresponding Unicode code point and is displayed on screen as a separate word/character.
So where does the ambiguity come from? As you can see in my Python console output, Python has no problem telling that my example sentence is made up of six characters:
这 - u8fd9
是 - u662f
一 - u4e00
个 - u4e2a
句 - u53e5
子 - u5b50
So obviously Python has no problem telling the word/character boundaries. I just need those words/characters in a list.
You can do this but not with standard library functions. And regular expressions won't help you either.
The task you are describing is part of the field called Natural Language Processing (NLP). There has been quite a lot of work done already on splitting Chinese words at word boundaries. I'd suggest that you use one of these existing solutions rather than trying to roll your own.
Chinese NLP
chinese - The Stanford NLP (Natural Language Processing) Group
Where does the ambiguity come from?
What you have listed there are Chinese characters. These are roughly analogous to letters or syllables in English (but not quite the same, as NullUserException points out in a comment). There is no ambiguity about where the character boundaries are; this is very well defined. But you asked not for character boundaries but for word boundaries. Chinese words can consist of more than one character.
If all you want is to find the characters, then this is very simple and does not require an NLP library. Simply decode the message into a Unicode string (if that is not already done), then convert the Unicode string to a list using the built-in function list. This will give you a list of the characters in the string. For your specific example:
>>> list(u"这是一个句子")
just a word of caution: using list( '...' ) (in Py3; that's u'...' for Py2) will not, in the general sense, give you the characters of a unicode string; rather, it will most likely result in a series of 16bit codepoints. this is true for all 'narrow' CPython builds, which accounts for the vast majority of python installations today.
when unicode was first proposed in the 1990s, it was suggested that 16 bits would be more than enough to cover all the needs of a universal text encoding, as it enabled a move from 128 codepoints (7 bits) and 256 codepoints (8 bits) to a whopping 65'536 codepoints. it soon became apparent, however, that that had been wishful thinking; today, around 100'000 codepoints are defined in unicode version 5.2, and thousands more are pending for inclusion. in order for that to become possible, unicode had to move from 16 to (conceptually) 32 bits (although it doesn't make full use of the 32bit address space).
in order to maintain compatibility with software built on the assumption that unicode was still 16 bits, so-called surrogate pairs were devised, where two 16 bit codepoints from specifically designated blocks are used to express codepoints beyond 65'536, that is, beyond what unicode calls the 'basic multilingual plane', or BMP, and which are jokingly referred to as the 'astral' planes of that encoding, for their relative elusiveness and constant headache they offer to people working in the field of text processing and encoding.
now while narrow CPython deals with surrogate pairs quite transparently in some cases, it will still fail to do the right thing in other cases, string splitting being one of those more troublesome cases. in a narrow python build, list( 'abc大𧰼def' ) (or list( 'abc\u5927\U00027C3Cdef' ) when written with escapes) will result in ['a', 'b', 'c', '大', '\ud85f', '\udc3c', 'd', 'e', 'f'], with '\ud85f', '\udc3c' being a surrogate pair. incidentally, '\ud85f\udc3c' is what the JSON standard expects you to write in order to represent U-27C3C. either of these codepoints is useless on its own; a well-formed unicode string can only ever have pairs of surrogates.
so what you want to split a string into characters is really:
from re import compile as _Re
_unicode_chr_splitter = _Re( '(?s)((?:[\ud800-\udbff][\udc00-\udfff])|.)' ).split
def split_unicode_chrs( text ):
    return [ chr for chr in _unicode_chr_splitter( text ) if chr ]
which correctly returns ['a', 'b', 'c', '大', '𧰼', 'd', 'e', 'f'] (note: you can probably rewrite the regular expression so that filtering out empty strings becomes unnecessary).
if all you want to do is splitting a text into chinese characters, you'd be pretty much done at this point. not sure what the OP's concept of a 'word' is, but to me, 这是一个句子 may be equally split into 这 | 是 | 一 | 个 | 句子 as well as 这是 | 一个 | 句子, depending on your point of view. however, anything that goes beyond the concept of (possibly composed) characters and character classes (symbols vs whitespace vs letters and such) goes well beyond what is built into unicode and python; you'll need some natural language processing to do that. let me remark that while your example 'yes the United Nations can!'.split() does successfully demonstrate that the split method does something useful to a lot of data, it does not parse the english text into words correctly: it fails to recognize United Nations as one word, while it falsely assumes can! is a word, which it is clearly not. this method gives both false positives and false negatives. depending on your data and what you intend to accomplish, this may or may not be what you want.
Ok I figured it out.
What I need can be accomplished by simply using list():
>>> list(u"这是一个句子")
[u'\u8fd9', u'\u662f', u'\u4e00', u'\u4e2a', u'\u53e5', u'\u5b50']
Thanks for all your inputs.
Best tokenizer tool for Chinese is pynlpir.
import pynlpir
pynlpir.open()
mystring = "你汉语说的很好!"
tokenized_string = pynlpir.segment(mystring, pos_tagging=False)
>>> tokenized_string
['你', '汉语', '说', '的', '很', '好', '!']
Be aware of the fact that pynlpir has a notorious but easily fixable problem with licensing, for which you can find plenty of solutions on the internet.
You simply need to replace the NLPIR.user file in your NLPIR folder with a valid licence downloaded from this repository, and restart your environment.
Languages like Chinese have a very fluid definition of a word. E.g. One meaning of ma is "horse". One meaning of shang is "above" or "on top of". A compound is "mashang" which means literally "on horseback" but is used figuratively to mean "immediately". You need a very good dictionary with compounds in it and looking up the dictionary needs a longest-match approach. Compounding is rife in German (famous example is something like "Danube steam navigation company director's wife" being expressed as one word), Turkic languages, Finnish, and Magyar -- these languages have very long words many of which won't be found in a dictionary and need breaking down to understand them.
Your problem is one of linguistics, nothing to do with Python.
It's partially possible with Japanese, where you usually have different character classes at the beginning and end of the word, but there are whole scientific papers on the subject for Chinese. I have a regular expression for splitting words in Japanese if you are interested: http://hg.hatta-wiki.org/hatta-dev/file/cd21122e2c63/hatta/search.py#l19
Try this: http://code.google.com/p/pymmseg-cpp/
list() is the answer for a Chinese-only sentence. For hybrid English/Chinese text, which is the more common case, see the answer at hybrid-split; the answer from Winter is copied below.
import re

def spliteKeyWord(str):
    regex = r"[\u4e00-\ufaff]|[0-9]+|[a-zA-Z]+\'*[a-z]*"
    matches = re.findall(regex, str, re.UNICODE)
    return matches
If str is 30 characters or longer, take the first 27 characters and add '...' at the end; otherwise return str unchanged:
str='中文2018-2020年一区6、8、10、12号楼_「工程建设文档102332号」'
result = len(list(str)) >= 30 and ''.join(list(str)[:27]) + '...' or str
