efficient algorithm to perform spell check on HTML document - python

I have an HTML document, a list of common spelling mistakes, and the correct spelling for each case.
The HTML documents will be up to ~50 pages and there are ~30K spelling correction entries.
What is an efficient way to correct all spelling mistakes in this HTML document?
(Note: my implementation will be in Python, in case you know of any relevant libraries.)
I have thought of two possible approaches:
build hashtable of the spelling data
parse text from HTML
split text by whitespace into tokens
if token in spelling hashtable replace with correction
build new HTML document with updated text
This approach will fail for multi-word spelling corrections, which will exist. The following is a simpler though seemingly less efficient approach that will work for multi-word corrections:
iterate spelling data
search for word in HTML document
if word exists replace with correction

You are correct that the first approach will be MUCH faster than the second (additionally, I would recommend looking into tries instead of a straight hash; the space savings will be quite dramatic for 30k words).
To still be able to handle the multi-word cases, you could keep track of the previous token and check your hash for a combined string such as "prev cur".
Or else you could leave the multi-word corrections out of the hash and combine your two approaches, first using the hash for single words and then doing a scan for the multi-word combos (or vice versa). This could still be relatively fast if the number of multi-word corrections is relatively small.
Be careful, though: pulling out word tokens is trickier than just splitting on whitespace. You don't want to fail to correct an error simply because you didn't find 'instence,' with a comma in your hash.
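If it helps, here is a minimal sketch of that combined idea, assuming the corrections are already loaded into a plain dict of lowercase misspelling -> correction and that the text has already been pulled out of the HTML with a parser (the sample entries are made up):

import re

def correct_text(text, corrections):
    # Separate single-word from multi-word entries; the latter get a second,
    # slower pass, which is fine if there are relatively few of them.
    single = {k: v for k, v in corrections.items() if " " not in k}
    multi = {k: v for k, v in corrections.items() if " " in k}

    # First pass: split on non-word characters but keep the separators, so that
    # "instence," is looked up as "instence" and the comma is preserved.
    parts = re.split(r"(\W+)", text)
    text = "".join(single.get(part.lower(), part) for part in parts)

    # Second pass: scan for the multi-word corrections.
    for wrong, right in multi.items():
        text = re.sub(r"\b%s\b" % re.escape(wrong), right, text, flags=re.IGNORECASE)
    return text

corrections = {"instence": "instance", "alot of": "a lot of"}
print(correct_text("There is an instence, and alot of other cases.", corrections))
# -> "There is an instance, and a lot of other cases."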

I agree with Rob's suggestion of using a trie, based on characters, because I programmed a spelling correction algorithm ages ago based on having a dictionary of valid words stored as a trie. By using branch-and-bound I was able to suggest possibly correct spellings of misspelled words (by Levenshtein distance). In addition, since a trie is just a big finite-state-machine, it is fairly easy to add common prefixes and suffixes, so it could handle "words" like "postnationalizationalism's".
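For reference, a compact sketch of that idea: store the dictionary as a character trie and walk it with one Levenshtein DP row per node, pruning any branch whose best possible distance already exceeds the bound. The tiny word list here is only for illustration:

class TrieNode:
    def __init__(self):
        self.children = {}
        self.word = None  # set to the full word at terminal nodes

def build_trie(words):
    root = TrieNode()
    for w in words:
        node = root
        for ch in w:
            node = node.children.setdefault(ch, TrieNode())
        node.word = w
    return root

def suggest(root, target, max_dist):
    # Return (word, distance) pairs within max_dist edits of target.
    results = []
    first_row = list(range(len(target) + 1))
    for ch, child in root.children.items():
        _search(child, ch, target, first_row, results, max_dist)
    return results

def _search(node, ch, target, prev_row, results, max_dist):
    # Compute the next row of the Levenshtein matrix for this trie edge.
    row = [prev_row[0] + 1]
    for col in range(1, len(target) + 1):
        insert_cost = row[col - 1] + 1
        delete_cost = prev_row[col] + 1
        replace_cost = prev_row[col - 1] + (target[col - 1] != ch)
        row.append(min(insert_cost, delete_cost, replace_cost))
    if node.word is not None and row[-1] <= max_dist:
        results.append((node.word, row[-1]))
    # Branch-and-bound: if every entry already exceeds max_dist, no deeper
    # node on this branch can get back under the bound.
    if min(row) <= max_dist:
        for next_ch, child in node.children.items():
            _search(child, next_ch, target, row, results, max_dist)

root = build_trie(["instance", "instant", "install"])
print(suggest(root, "instence", 2))  # -> [('instance', 1)]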

Related

How do I make Python find words that look similar to a bad word, but not necessarily a proper word in English?

I'm making a cyberbullying detection Discord bot in Python, but sadly there are some people who may find their way around conventional English and spell a bad word in a different manner, like the n-word with 3 g's or the f word without the c. There are just too many variants of bad words some people may use. How can I make Python find them all?
I've tried pyenchant but it doesn't do what I want it to do. If I put suggest("racist slur"), "sucker" is in the array. I can't seem to find anything that works.
Will I have to consider every possibility separately and add all the possibilities into a single dictionary? (I hope not.)
It's not necessarily Python's job to do the heavy lifting but rather its ecosystem's. You may want to look into Natural Language Understanding algorithms and find a way that suits your specific needs. This takes some time and further expertise to figure out.
You may want to start with PyTorch; it has helped my learning curve a lot. Their docs regarding text: https://pytorch.org/text/stable/index.html
Also, I'd suggest you have a look around on Kaggle; several data science challenges have a prize on them for tackling the same task you are aiming to solve.
https://www.kaggle.com/c/jigsaw-multilingual-toxic-comment-classification
These competitions usually have public starter notebooks to get you started with your own implementation.
You could try looping through the string that you are moderating and putting it into an array.
For example, if you wanted to blacklist "foo":
x = [["f", "o", "o"], [" "], ["f", "o", "o", "o"]]
then count how many of each letter appears in each word:
y = [{"f": 1, "o": 2}, {" ": 1}, {"f": 1, "o": 3}]
then see that y[2] is very similar to y[0] (the banned word).
While this method is not perfect, it is a start.
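A rough sketch of that counting step with collections.Counter; the threshold of one extra letter is an arbitrary example, not a tuned value:

from collections import Counter

def looks_like(word, banned, max_diff=1):
    # Compare letter counts: total number of letters by which the two
    # multisets differ (e.g. "foooo" vs "foo" differs by 2).
    a, b = Counter(word.lower()), Counter(banned.lower())
    diff = sum((a - b).values()) + sum((b - a).values())
    return diff <= max_diff

print(looks_like("fooo", "foo"))  # True: one extra "o"
print(looks_like("bar", "foo"))   # False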
Another thing to look into is using a neural language interpreter that detects if a word is being used in a derogatory way. A while back, Google designed one of these.
The other part of the answer is simply that no bot is perfect.
You might just have to put these common misspellings in the blacklist.
However, the automatic approach would be awesome if you got it working with 100% accuracy.
Unfortunately, spell checking (for different languages) alone is still an open problem that people do research on, so there is no perfect solution for this, let alone for the case when the user intentionally tries to insert some "errors".
Fortunately, there is a conceptually limited number of ways people can intentionally change the input word in order to obtain a new word that resembles the initial one enough to be understood by other people. For example, bad actors could try to:
duplicate some letters multiple times
add some separators (e.g. "-", ".") between characters
delete some characters (e.g. the f word without "c")
reverse the word
potentially others
My suggestion is to initially keep it simple, if you don't want to delve into machine learning. As a possible approach you could try to:
manually create a set of lower-case bad words with their duplicated letters removed (e.g. "killer" -> "kiler").
manually/automatically add to this set variants of these words with one or multiple letters missing that can still be easily understood (e.g. "kiler" -> "kilr").
extract the words in the message (e.g. by message_str.split())
for each word and its reversed version:
a. remove possible separators (e.g. "-", ".")
b. convert it to lower case and remove consecutive, duplicate letters
c. check if this new form of the word is present in the set, if so, censor it or the entire message
This solution lacks protection against words with characters separated by one or multiple white spaces / newlines (e.g. "killer" -> "k i l l e r").
Depending on how long the messages are (I believe they are generally short in chat rooms), you can try to consider each substring of the initial message with whitespace removed, instead of each word detected by the whitespace separator in step 3. This will take more time, as generating all the substrings alone takes O(message_length^2) time.
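A rough sketch of steps 3 and 4 under those assumptions; the bad-word set below is a placeholder, and real entries would be built as described in steps 1 and 2:

import re

BAD_WORDS = {"kiler"}  # placeholder: lower-case, consecutive duplicates removed

def normalize(word):
    # a. remove possible separators, b. lower-case and collapse duplicate letters
    word = re.sub(r"[-._*]", "", word.lower())
    return re.sub(r"(.)\1+", r"\1", word)

def censor(message):
    out = []
    for word in message.split():                          # step 3
        forms = {normalize(word), normalize(word[::-1])}  # the word and its reverse
        out.append("***" if forms & BAD_WORDS else word)  # step c
    return " ".join(out)

print(censor("you k-i-l-l-l-er"))  # -> "you ***"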

How to find the prefix of a word for NLP

I want to find the prefix of a word for NLP purposes (I'm interested in morphological negation).
For example, I want to know "unable" is negative, but "university" does not have any sort of negation. I have been using the startswith python function so far, but obviously there can be some issues.
Does anyone have any experience with finding prefixes of words? I feel like there should be some library or api, but I'm not sure.
Thanks!
Short of a full morphological analyser, you can work around this with exception lists and longest matching.
For example: you assume un- expresses negation. First, find longer prefixes (such as uni-), and match for that first, before looking at un-. There will be a handful of exceptions, such as uninteresting, which you can check for separately. This will be a fairly smallish list. Then, once all the uni- words have been dealt with, anything starting with un- is a candidate, though there will also be exceptions, such as under.
A slightly better solution is possible if you have a basic word list: cut off un- from the beginning of the string, and check whether the remainder is in your word list. University will become iversity, which is not in your list, and thus it's not the un- prefix. However, uninteresting will become interesting, which is, so here you have found a valid prefix. All you need for this is a list of non-negated words. You can of course also use this for other prefixes, such as the alpha privative: in atypical, the remainder typical will be in your list.
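A small sketch of that check, assuming word_list is a set of known, non-negated, lower-case words (the prefix list and toy word list are only examples):

NEGATION_PREFIXES = ("un", "in", "non", "a")  # "a" for the alpha privative

def negation_prefix(word, word_list):
    # Return (prefix, stem) if stripping a negation prefix leaves a known
    # word, otherwise None.
    word = word.lower()
    for prefix in NEGATION_PREFIXES:
        if word.startswith(prefix) and word[len(prefix):] in word_list:
            return prefix, word[len(prefix):]
    return None

word_list = {"able", "interesting", "typical"}  # toy list of non-negated words
print(negation_prefix("unable", word_list))      # ('un', 'able')
print(negation_prefix("university", word_list))  # None
print(negation_prefix("atypical", word_list))    # ('a', 'typical')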
If you don't have such a list, simply split your text into tokens, sort and unique them, and then scan down the line of words beginning with your candidate prefixes. It's a bit tedious, but the numbers of relevant words are not that big. It's what we all did in NLP 30 years ago... :)

Comparing 2 pieces of text word by word or by their hashes

I have a Python script scraping comments on a list of web pages regularly and inserting them into a database. But it inserts a comment only if it's not in the database yet. How feasible is it to store a hash of each comment along with its body to be able to look it up faster next time I need to check whether it's already been inserted, instead of storing only their bodies and comparing them word by word? If it's faster, what kind of hash should I use? MD5 or...?
The average comment is about 1000 words. I'm aware that even a single character difference results in different hashes; that's OK.
You can use something like a Jaccard index. This will even let you search for partial matches; you can set a threshold to reject or select matches (i.e. similar text).
You can also look into MinHashing, which is a space-efficient way of computing Jaccard distance, with the benefit that texts with a few character differences can still be matched and end up in the same bucket (check out Locality Sensitive Hashing). You will have to set a threshold, though; the precision/recall trade-off is what you will have to tackle.
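For illustration, a minimal Jaccard check over word sets; the 0.8 threshold is an arbitrary example you would tune for your data:

def jaccard(a, b):
    # Jaccard index of two texts, compared as sets of lower-case words.
    sa, sb = set(a.lower().split()), set(b.lower().split())
    if not sa and not sb:
        return 1.0
    return len(sa & sb) / len(sa | sb)

new_comment = "thanks for sharing this great article with us"
stored_comment = "thanks for sharing this great article with us all"
if jaccard(new_comment, stored_comment) >= 0.8:
    print("probably the same comment, skip the insert")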

Split string using Regular Expression that includes lowercase, camelcase, numbers

I'm working on getting Twitter trends using tweepy in Python, and I'm able to find the world's top 50 trends, so as a sample I'm getting results like these:
#BrazilianFansAreTheBest, #PSYPagtuklas, Federer, Corcuera, Ouvindo ANTI,
艦これ改, 영혼의 나이, #TodoDiaéDiaDe, #TronoChicas, #이사람은_분위기상_군주_장수_책사,
#OTWOLKnowYourLimits, #BaeIn3Words, #NoEntiendoPorque
(Please ignore the non-English words)
So here I need to parse every hashtag and convert it into proper English words. I also checked how people write hashtags and found the ways below:
#thisisawesome
#ThisIsAwesome
#thisIsAwesome
#ThisIsAWESOME
#ThisISAwesome
#ThisisAwesome123
(sometimes hashtags have numbers as well)
So keeping all of these in mind, I thought that if I'm able to split the string below, then all of the above cases will be covered.
string = "pleaseHelpMeSPLITThisString8989"
Result = please, Help, Me, SPLIT, This, String, 8989
I tried something using re.sub but it is not giving me the desired results.
Regex is the wrong tool for the job. You need a clearly-defined pattern in order to write a good regex, and in this case, you don't have one. Given that you can have Capitalized Words, CAPITAL WORDS, lowercase words, and numbers, there's no real way to look at, say, THATSand and differentiate between "THATS and" and "THAT Sand".
A natural-language approach would be a better solution, but again, it's inevitably going to run into the same problem as above - how do you differentiate between two (or more) perfectly valid ways to parse the same inputs? Now you'd need to get a trie of common sentences, build one for each language you plan to parse, and still need to worry about properly parsing the nonsensical tags twitter often comes up with.
The question becomes, why do you need to split the string at all? I would recommend finding a way to omit this requirement, because it's almost certainly going to be easier to change the problem than it is to develop this particular solution.
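That said, if a best-effort split is acceptable for the specific formats listed in the question, a heuristic like the one below handles those cases; it still cannot resolve genuinely ambiguous runs such as THATSand, for the reasons above:

import re

def split_hashtag(tag):
    # lowercase runs, Capitalized runs, ALLCAPS runs (stopping before the
    # start of the next Capitalized word), and digit runs
    return re.findall(r"[A-Z]+(?=[A-Z][a-z])|[A-Z]?[a-z]+|[A-Z]+|\d+", tag)

print(split_hashtag("pleaseHelpMeSPLITThisString8989"))
# -> ['please', 'Help', 'Me', 'SPLIT', 'This', 'String', '8989']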

most efficient way to find partial string matches in large file of strings (python)

I downloaded the Wikipedia article titles file which contains the name of every Wikipedia article. I need to search for all the article titles that may be a possible match. For example, I might have the word "hockey", but the Wikipedia article for hockey that I would want is "Ice_hockey". It should be a case-insensitive search too.
I'm using Python, and is there a more efficient way than to just do a line-by-line search? I'll be performing this search around 500 or 1000 times per minute, ideally. If line by line is my only option, are there some optimizations I can do within this?
I think there are several million lines in the file.
Any ideas?
Thanks.
If you've got a fixed data set and variable queries, then the usual technique is to reorganise the data set into something that can be searched more easily. At an abstract level, you could break up each article title into individual lowercase words, and add each of them to a Python dictionary data structure. Then, whenever you get a query, convert the query word to lower case and look it up in the dictionary. If each dictionary entry value is a list of titles, then you can easily find all the titles that match a given query word.
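A minimal sketch of that index, assuming the titles file has one title per line with words joined by underscores, as in the Wikipedia dump (the file name is illustrative):

from collections import defaultdict

def build_index(titles_path):
    # Map each lower-case word to the list of titles containing it.
    index = defaultdict(list)
    with open(titles_path, encoding="utf-8") as f:
        for line in f:
            title = line.strip()
            for word in title.lower().replace("_", " ").split():
                index[word].append(title)
    return index

index = build_index("enwiki-all-titles")  # built once, then reused per query
print(index.get("hockey", [])[:5])        # would include "Ice_hockey"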
This works for straightforward words, but you will have to consider whether you want to do matching on similar words, such as finding "smoking" when the query is "smoke".
Greg's answer is good if you want to match on individual words. If you want to match on substrings you'll need something a bit more complicated, like a suffix tree (http://en.wikipedia.org/wiki/Suffix_tree). Once constructed, a suffix tree can efficiently answer queries for arbitrary substrings, so in your example it could match "Ice_Hockey" when someone searched for "hock".
I'd suggest you put your data into an SQLite database and use the SQL LIKE operator for your searches.
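A small example of that with the standard sqlite3 module (table and file names are made up); note that SQLite's LIKE is case-insensitive for ASCII text by default, which matches the requirement in the question:

import sqlite3

conn = sqlite3.connect("titles.db")
conn.execute("CREATE TABLE IF NOT EXISTS titles (title TEXT)")
conn.executemany("INSERT INTO titles VALUES (?)",
                 [("Ice_hockey",), ("Hockey_stick",), ("Football",)])
conn.commit()

# % wildcards give a case-insensitive substring match
rows = conn.execute("SELECT title FROM titles WHERE title LIKE ?",
                    ("%hockey%",)).fetchall()
print(rows)  # [('Ice_hockey',), ('Hockey_stick',)]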
