How to correct badly written emails? - python

I am trying to correct badly written email addresses contained in a list by looking for small differences from the most common domains, e.g. hotmal.com to hotmail.com.
The thing is, there are tons of variations on a single domain. It would be extremely helpful if someone knew of an algorithm in Python that can work as an autocorrect for email domains, or could tell me whether this is too complex a problem for a few lines of code.

Check out Levenshtein distance, starting at https://en.wikipedia.org/wiki/Levenshtein_distance
It is commonly used for auto-correct.
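For example, here is a minimal sketch using Python's standard-library difflib (a ratio-based similarity rather than true Levenshtein distance, but close enough for this job); the list of common domains and the cutoff are assumptions you would tune:

import difflib

# Hypothetical list of "known good" domains - extend as needed.
COMMON_DOMAINS = ["hotmail.com", "gmail.com", "yahoo.com", "outlook.com"]

def correct_email(address, cutoff=0.75):
    # Replace the domain with the closest common domain, if one is close enough.
    local, sep, domain = address.partition("@")
    if not sep:
        return address  # not a well-formed address, leave it alone
    matches = difflib.get_close_matches(domain.lower(), COMMON_DOMAINS, n=1, cutoff=cutoff)
    return local + "@" + matches[0] if matches else address

print(correct_email("someone@hotmal.com"))   # someone@hotmail.com
print(correct_email("someone@unknown.org"))  # left unchanged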

What if you search for keywords in the domain? For hotmail.com, for example, you could search for hot, or something similar. Also, as user10817019 wrote, you can combine it with searching for the first and last letters of the domain.

Write a small script in your preferred language that takes domains starting with h and ending with l and replaces the entire string with hotmail, so it fixes everything in between. Search for mai if they forgot the l. I had to do this the other day in vb.net to check my lists twice and correct bad data.
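A rough Python sketch of that prefix/suffix heuristic; the rules below are only examples for hotmail and would need extending for other domains:

def fix_domain(domain):
    d = domain.lower()
    # Assume anything starting with "h" and ending with "l" was meant to be hotmail.
    if d.startswith("h") and d.endswith("l"):
        return "hotmail.com"
    # Catch variants where the trailing "l" was dropped, e.g. "hotmai.com".
    if d.startswith("h") and "mai" in d:
        return "hotmail.com"
    return domain

print(fix_domain("hotmal"))      # hotmail.com
print(fix_domain("hotmai.com"))  # hotmail.com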


Extract information from sentence

I'm creating a simple chatbot. I want to extract information from the user's response. An example scenario:
Bot : Hi, what is your name?
User: My name is Edwin.
I wish to extract the name Edwin from the sentence. However, the user can respond in different ways, such as
User: Edwin is my name.
User: I am Edwin.
User: Edwin.
I tried to rely on the dependency relations between words, but the results are not good.
Any idea on what technique I could use to tackle this problem?
First off, I think complete name detection is really heavy to set up. If you want your bot to be able to detect a name in 99% of cases, you've got some work ahead of you. And I suppose name detection is only the very beginning of your plans...
This said, here are the first ideas that came to my mind:
Names are, grammatically speaking, nouns. So if one can perform a grammatical analysis of the sentence, some candidates for the name can be found.
Names are supposed to begin with a capital letter, although in a chat this is likely not to be respected, so it might be of little use... However, if you come across a word beginning with a capital, it is likely to be someone's name (though it could also be a place name...).
The patterns you could reasonably think of when introducing yourself are not that numerous, so you could "hard-code" them, with of course a little tolerance towards typos (see the sketch after this answer).
If you are expecting an actual name, you could use a database holding a huge number of names, but have fun with Hawaiian or Chinese names. Still, this appears to be a viable solution in the case of European names.
However, I am no AI specialist, and I'm looking forward to seeing other proposals.
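As an illustration of the hard-coded pattern idea above, here is a minimal sketch; the patterns are assumptions and only cover a few phrasings:

import re

# A few hypothetical introduction patterns - extend and loosen as needed.
NAME_PATTERNS = [
    re.compile(r"my name is (\w+)", re.IGNORECASE),
    re.compile(r"(\w+) is my name", re.IGNORECASE),
    re.compile(r"i am (\w+)", re.IGNORECASE),
    re.compile(r"^(\w+)\.?$"),  # a bare name as the whole reply
]

def extract_name(reply):
    for pattern in NAME_PATTERNS:
        match = pattern.search(reply.strip())
        if match:
            return match.group(1).capitalize()
    return None

for reply in ["My name is Edwin.", "Edwin is my name.", "I am Edwin.", "Edwin."]:
    print(extract_name(reply))  # Edwin in each case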
I'd suggest using NER (named-entity recognition):
You can play with it yourself: http://nlp.cogcomp.org/
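For instance, a minimal sketch with the spaCy library (my choice for illustration, not something mentioned above; it needs the small English model downloaded first):

import spacy

# Requires: pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

def find_person(text):
    doc = nlp(text)
    people = [ent.text for ent in doc.ents if ent.label_ == "PERSON"]
    return people[0] if people else None

print(find_person("My name is Edwin."))  # Edwin, if the model tags it as PERSON
print(find_person("Edwin is my name."))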
There are many alternatives, falling into two broad models:
Based on NLP training; uses HTTP for integration/delivery:
Microsoft LUIS
API.AI
IBM Watson
Based on pattern matching; uses an interpreter (needs a native implementation or a bridge from another implementation):
Rivescript - Python interpreter available
ChatScript - needs a C++ bridge/interop
AIML - Python interpreter available
This is not an exhaustive listing of current options.
Detecting names can be complicated if you consider things like "My name is not important", "My name is very long", etc.
Here is a public-domain script in Self that attempts to parse a name; you may be able to adapt it to Python. It also does some crazy stuff like looking up the words on Wiktionary to see whether they are classified as names:
https://www.botlibre.com/script?id=525804

Using Python, how can I validate a UK postcode?

I have a number of UK postcodes. However, they are not well formatted. Sometimes they look like normal postcodes, such as SW5 2RT; sometimes they are missing the space in the middle but are still valid, such as SW52RT. But in some cases they are wrong due to human error, and look like ILovePython, which is totally invalid as a postcode.
So, I am wondering how I can validate the postcodes efficiently (using Python)?
Many thanks.
=================================================================
EDIT:
Thanks for the answers at this page. But it seems they only check whether the characters in the postcode are letters or numbers, and don't care whether the combination makes sense. A false postcode such as AA1 1AA could pass that validation.
You can use this package for your purpose:
uk-postcode-utils 0.1.0
Here is the link for it:
https://pypi.python.org/pypi/uk-postcode-utils
Also please have a look at:
Python, Regular Expression Postcode search
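For a quick format-only check (which, as the question notes, will not catch codes that merely look plausible), here is a sketch using a simplified version of the commonly cited UK postcode pattern; treat the exact regex as an approximation:

import re

# Rough approximation of the outward + inward code format, space optional.
POSTCODE_RE = re.compile(r"^[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2}$", re.IGNORECASE)

def looks_like_postcode(value):
    return bool(POSTCODE_RE.match(value.strip()))

print(looks_like_postcode("SW5 2RT"))      # True
print(looks_like_postcode("SW52RT"))       # True
print(looks_like_postcode("ILovePython"))  # False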
You are correct to say that Regex validation goes only so far. To ensure that the post code is 'postally' valid, you'll need a reference set to validate it against. There are a large number (low thousands) of changes to UK addresses per day to keep track of, and I do not believe this is something a regex alone can solve.
There are a couple of ways you can do it, either use a 3rd party to help you capture a complete & correct address (many available including https://www.edq.com/uk/products/address-validation/real-time-capture (my company)), or get a data supply from Royal Mail and implement your own solution.
Coping with typos and different formats shouldn't be a problem either way you do it. Most third parties will do this easily for you and should be able to cope with some mistakes too (depending on what you have to search on). They'll all have web services you should be able to implement easily, or grab integration snippets for.
The UK Office for National Statistics publishes a list of UK postcodes, both current and retired, so you could pull the relevant columns out of the latest .csv download, duplicate the current ones with the space(s) removed and then do a lookup (it might be best to use a proper database such as MySQL with an index for this).
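A rough sketch of that lookup approach; the file name and the postcode column name ("pcd") are assumptions about the ONS download and may differ in the version you fetch:

import csv

def load_postcodes(path):
    # Build a set of normalised (space-free, upper-case) postcodes from the CSV.
    postcodes = set()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            postcodes.add(row["pcd"].replace(" ", "").upper())  # column name is an assumption
    return postcodes

valid = load_postcodes("ONSPD_latest.csv")  # hypothetical file name

def is_real_postcode(value):
    return value.replace(" ", "").upper() in valid

print(is_real_postcode("SW5 2RT"))
print(is_real_postcode("AA1 1AA"))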

Python - Detect (spammy) URLS in string

So, I've been doing some research for a while now and I couldn't find anything about detecting a URL in a string. The problem is that most results are about detecting whether a string IS a URL, not whether it contains a URL. The two results that look best to me are
Regex to find urls in string in Python
and
Detecting a (naughty or nice) URL or link in a text string
but the first requires http://, which is not something spammers would use (:P), and the second one isn't in regex, and with my limited knowledge I don't know how to translate either of them. Something I have considered is doing something dull like
spamlist = [".com", ".co.uk", "etc"]
for word in string.split():
    if any(tld in word for tld in spamlist):
        Do().stuff()
But that would honestly do more harm than good, and I am 100% sure there is a better way using regex or something else!
So if anyone knows anything that could help me, I'd be very grateful! I've only been doing Python for 1-2 months, and not very intensively during this period, but I feel like I'm making great progress and this one thing is all that's in the way, really.
EDIT: Sorry for not specifying earlier, I am looking to use this locally, not website (apache) based or anything similar. More trying to clean out any links from files I've got hanging around.
As I said in the comments,
the solution in "Detecting a (naughty or nice) URL or link in a text string" is a regex, and you should probably make it a raw string or escape the backslashes in it when using it in Python.
You really shouldn't reinvent the square wheel here, especially since spam filtering is an arms race.
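As a starting point, here is a sketch of a regex that catches bare domains as well as full URLs; the pattern is a rough assumption and will happily produce false positives:

import re

# Optional scheme or "www.", a dotted hostname, and an optional path.
URL_RE = re.compile(
    r"(?:https?://|www\.)?[a-z0-9][a-z0-9-]*(?:\.[a-z0-9-]+)+(?:/[^\s]*)?",
    re.IGNORECASE,
)

text = "buy cheap stuff at example.com/deals or www.spam.co.uk today"
print(URL_RE.findall(text))  # ['example.com/deals', 'www.spam.co.uk']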

Python - Deciphering Line from SiriServer Plugin

I'm currently learning how to program plugins for SiriServer, in the hope of creating a bit of home automation using my phone. I'm trying to figure out how to make the text converted from speech match and run the plugin.
I've learnt how to do short phrases, like this for example:
#register("en-US", ".*Start.*XBMC.*")
Though if I'm understanding it correctly, it's just searching loosely for the two words. If I were to say XBMC Start, it would probably work as well, but when I start working with WolframAlpha, I need to be a bit more specific.
For example, speech to text saying "What's the weather like in Toronto?", somehow connects to this:
#register("en-US", "(what( is|'s) the )?weather( like)? in (?P<location>[\w ]+?)$")
What would all the extra symbols in that line mean that could connect these two together? I've tried messing around with a couple ideas but nothing seems to work the way I want it to. Any help is appreciated, thanks!
I will break down the example you provided so hopefully that is a good start, but searching for python regex would provide more thorough information.
The parentheses group the enclosed items so the rest of the expression treats each group as a single unit. The pipes mean "or", the question marks mean the preceding group may or may not be present, and (?P<location>[\w ]+?) is a named group that captures whatever appears at this point of the input into the variable "location". The $ at the end anchors the match to the end of the input. .* means anything is acceptable at that place in the input, but it is not captured. Hopefully that helps.
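To see the pieces in action, here is a small sketch using Python's re module with the pattern from the question (the surrounding SiriServer machinery is left out):

import re

pattern = re.compile(r"(what( is|'s) the )?weather( like)? in (?P<location>[\w ]+?)$",
                     re.IGNORECASE)

for phrase in ["What's the weather like in Toronto", "weather in New York"]:
    match = pattern.search(phrase)
    if match:
        print(match.group("location"))  # Toronto, then New York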

What are some of the Artificial Intelligence (AI) related techniques one would use for parsing a webpage?

I would like to scrape several different discussion forums, most of which have different HTML formats. Rather than dissecting the HTML for each page, it would be more efficient (and fun) to implement some sort of learning algorithm that could identify the different messages (i.e. structures) on each page and individually parse them, while simultaneously ignoring all the extraneous crap (i.e., ads and other nonsense). Could someone please point me to some references or sample code for work that's already been carried out in this area?
Moreover, does anyone know of pseudocode for Arc90's readability code?
http://www.minvolai.com/blog/decruft-arc90s-readability-in-python/
build a solution that:
takes some sample webpages with the same structure (eg forum threads)
analyzes the DOM tree of each to find which parts are the same / different
where they differ is the dynamic content you are after (posts, user names, etc.)
This technique is known as wrapper induction.
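As a very rough illustration of that idea (nowhere near full wrapper induction), here is a sketch that compares tag paths across sample pages with BeautifulSoup; the threshold and parsing details are assumptions:

from collections import Counter
from bs4 import BeautifulSoup

def element_path(el):
    # Path of tag names from the root down to this element, e.g. html/body/div/p.
    names = [el.name]
    for parent in el.parents:
        if parent.name and parent.name != "[document]":
            names.append(parent.name)
    return "/".join(reversed(names))

def repeated_paths(html, min_count=3):
    # Tag paths that occur many times on one page - candidates for post containers.
    soup = BeautifulSoup(html, "html.parser")
    counts = Counter(element_path(el) for el in soup.find_all(True))
    return {path for path, n in counts.items() if n >= min_count}

def likely_message_paths(pages, min_count=3):
    # Structures repeated on every sample page are probably the messages;
    # ads and navigation tend to vary from page to page.
    sets = [repeated_paths(p, min_count) for p in pages]
    return set.intersection(*sets) if sets else set()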
There seems to be a Python port of arc90's Readability script that might point you in the right direction (or at least some direction).
Maybe not exactly what you're after, but there's an O'Reilly book called 'Programming Collective Intelligence' that may lead you in the right direction for what you are attempting to do. Additionally, many of the examples are in Python :)
