Removing spaces in python

I'm trying to write code that parses a large text file. However, in order to get said text file, I run the original PDF file through pdfminer. While this works, it also returns my text file with many random spaces (see below)
SM ITH , JO HN , PHD
1234 S N O RT H AV E
Is there any easy way in Python to remove only certain spaces so words aren't separated? For the sample above, I want it to look like
SMITH, JOHN, PHD
1234 S NORTH AVE
Thanks.

Most likely what you're trying to do is impossible to do perfectly, and very hard to do well enough to satisfy you. I'll explain below.
But there's a good chance you shouldn't be doing it in the first place. pdfminer is highly configurable, and something like just specifying a smaller -M value will give you the text you wanted in the first place. You'll need to do a bit of trial and error, but if this works, it'll be far easier than trying to post-process things after the fact.
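For example, something along these lines, where pdf2txt.py is the command-line tool that ships with pdfminer, original.pdf and output.txt are placeholders, and the exact value (and flag set) will take version-dependent trial and error:
pdf2txt.py -M 1.0 original.pdf > output.txt
-M sets pdfminer's char_margin layout parameter, which controls how aggressively nearby glyphs are grouped together.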
If you want to do this, you need to come up with a rule that determines which spaces are "random extra spaces" and which are real spaces before you can code that in Python. And I don't know that there is any such rule.
In your example, you can handle most of them by just turning multiple spaces into single spaces, and single spaces into nothing. It should be obvious how to do that. Even if you can't think of a clever solution, a triple replace works fine:
import re
s = re.sub(r'\s\s+', '<space>', s)  # protect runs of two or more spaces
s = re.sub(r'\s', '', s)            # delete the remaining single spaces
s = re.sub(r'<space>', ' ', s)      # turn the protected runs back into single spaces
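Run on the first sample line, assuming all the gaps shown are single spaces, that gives:
import re
s = 'SM ITH , JO HN , PHD'
s = re.sub(r'\s\s+', '<space>', s)
s = re.sub(r'\s', '', s)
s = re.sub(r'<space>', ' ', s)
print(s)  # SMITH,JOHN,PHD -- no spaces left after the commas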
However, this rule isn't quite right, because in JO HN , PHD, the space after the comma isn't a random extra space, but it's not showing up as two or more spaces. And the same for the space in "1234 S". And, most likely, the same thing is true in lots of other cases for your real data.
A different somewhat close rule is that you only remove single spaces between letters. Again, if that works, it's easy to code. For example:
s = re.sub(r'(\w)\s(?=\w)', r'\1', s)  # lookahead, so runs like "S N O" collapse fully
s = re.sub(r'\s+', ' ', s)
But now that leaves a space before the comma after SMITH and JOHN.
Maybe you need to put in a little information about English punctuation—strip the spaces around punctuation, then add back in the spaces after a comma or period, around quotes, etc.
Or… well, nobody but you can know what your data look like and figure it out.
If you can't come up with a good rule, the only option is to build some complicated heuristics around looking up possible words in a dictionary and guessing which one is more likely—which still won't get everything right (e.g., how do you know whether "B OO K M AR K" is "BOOK MARK" or "BOOKMARK"?), but it's the best you could possibly do.

What you are trying to do is impossible, e.g., should "DESK TOP" be "DESK TOP" or "DESKTOP"?

Related

Removing part of string starting with \ud

I am trying to remove anything starting with \ud
My text:
onceuponadollhouse: "Iconic apart and better together \ud83d\udc6fâ€â™€ï¸The Caboodles® x Barbieâ„¢ collection has us thinking about our Doll Code \ud83c\udf80 We stand for one another by sharing our lessons
The answer I am looking for:
onceuponadollhouse: "Iconic apart and better together â€â™€ï¸The Caboodles® x Barbieâ„¢ collection has us thinking about our Doll Code We stand for one another by sharing our lessons
The ideal way would be to take a step back, work out where in the process the encoding is getting mangled, and then fix it there. Somehow you're getting (a) surrogate pairs, which are the pairs of characters starting with \ud, and (b) UTF-8 interpreted as Latin-1 or some similar encoding, like the â„¢ after "Barbie".
Making sure that your input text is interpreted correctly would be ideal; as it stands, you're losing the emojis "woman with bunny ears" and "ribbon". Another time it might be somebody's name or some other piece of important information.
If you're in a situation where you can't do it properly, and you need to strip the surrogate pairs, you can use re.sub:
import re
text = 'onceuponadollhouse: "Iconic apart and better together \ud83d\udc6fâ€â™€ï¸The Caboodles® x Barbieâ„¢ collection has us thinking about our Doll Code \ud83c\udf80 We stand for one another by sharing our lessons'
stripped = re.sub('[\ud800-\udfff]+', '', text)  # drop any run of surrogate code points
print(stripped)
Depending on your purpose, it might be useful to replace those characters with a placeholder; since they always come in pairs, you might do something like this:
import re
text = 'onceuponadollhouse: "Iconic apart and better together \ud83d\udc6fâ€â™€ï¸The Caboodles® x Barbieâ„¢ collection has us thinking about our Doll Code \ud83c\udf80 We stand for one another by sharing our lessons'
stripped = re.sub('[\ud800-\udfff]{2}', '<unknown character>', text)  # surrogates come in pairs, so replace two at a time
print(stripped)
Check out the emot python package. I discovered it this morning in this article: https://towardsdatascience.com/5-python-libraries-that-you-dont-know-but-you-should-fd6f810773a7
The examples given in the documentation only interpret emoticons and emojis, but the package also reports their locations, so it wouldn't be too much of a stretch to replace them.

Regular Expression to replace dot with space before parentheses

I am working on some customer comments, some of which do not follow grammatical rules. For example, "(Such as s and b.)" in the following text, which provides more explanation for the previous sentence, is surrounded by two dots.
text = "I was initially scared of ANY drug after my experience. But about a year later I tried. (Such as s and b.). I had a very bad reaction to this."
First, I want to find ". (Such as s and b.)." and then replace the dot before "(Such as s and b.)" with a space. This is my code, but it does not work.
text = re.sub(r'(\.)(\s+?\(.+\)\s*\.)', r' \2 ', text)
Output should be:
"I was initially scared of ANY drug after my experience. But about a year later I tried (Such as s and b.). I had a very bad reaction to this."
I am using python.
The sample provided does not make much sense, because the only change is that the "." character is moved one position to the left.
However, this might do the trick (to keep the dot inside the parentheses):
text = re.sub(r'\.\s*\)\s*\.', '.)', text)
Or this to have it outside:
text = re.sub(r'\.\s*\)\s*\.', ').', text)
Edit: or maybe you're looking for this, to drop the dot before the opening parenthesis?
text = re.sub(r'\.(?=\s*\(.*?\)\.)', '', text)
I would suggest this to remove a dot before parentheses when there is another dot following them:
text = re.sub(r'\.(\s*?\([^)]*\)\s*\.)', r'\1', text)
See it run on repl.it
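Applied to the sample text, that substitution produces the requested output:
import re

text = ("I was initially scared of ANY drug after my experience. "
        "But about a year later I tried. (Such as s and b.). "
        "I had a very bad reaction to this.")
print(re.sub(r'\.(\s*?\([^)]*\)\s*\.)', r'\1', text))
# prints: I was initially scared of ANY drug after my experience. But about a
# year later I tried (Such as s and b.). I had a very bad reaction to this.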

Get address out of a paragraph with regex

Alright, this one's a bit of a pain. I'm doing some scraping with Python, trying to get an address out of a few lines of poorly tagged HTML. Here's a sample of the format:
256-555-5555<br/>
1234 Fake Ave S<br/>
Gotham (Lower Ward)<br/>
I'd like to retrieve only 1234 Fake Ave S, Gotham. Any ideas? I've been doing regexes all night and now my brain is mush...
Edit:
More detail about the possible scenarios of how the data will arrive: sometimes the first line will be there, sometimes not. All of the addresses I have seen have Ave, Way, or St in them, although I would prefer not to use that as a factor in the selection, as I am not certain they will always be that way. The second and third lines are always there.
What I had in mind was something that:
Selects everything on the second-to-last line (so, the second line if there are three lines, or the first line if there are only two because there is no phone number).
Selects everything on the last line that isn't in parentheses.
Combines the second-to-last line and the last line, adding a ", " in between the two.
I'm using Scrapy to acquire the HTML code. The address is all in the same div, and I want to use regex to break the data up further into the appropriate sections. How to do that is what I'm unable to figure out.
Edit2:
As per Ofir's comment, I should mention that I have already made expressions to isolate the phone number and parentheses section.
Phone (or possible email or website):
((1[-. ])?[0-9]{3}[-. ])?\(?([0-9]{3}[-. ][0-9]{4})|([\w\.-]+@[\w\.-]+)|(www.+)|([\w\.-]*(?:com|net|org|us))
parentheses:
\((.*?)\)
I'm not sure how to use those to construct an everything-but-these statement.
It is possible that in your case it is easier to focus on what you don't want:
html tags (<br>)
phone numbers
everything in parentheses
Each of these can be matched easily with a simple regular expression, making it easy to construct one that matches the rest (presumably, the address); a rough sketch follows.
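A minimal sketch of that subtractive approach on the sample above; the phone pattern here is deliberately simplified:
import re

html = """256-555-5555<br/>
1234 Fake Ave S<br/>
Gotham (Lower Ward)<br/>
"""

text = re.sub(r'<br\s*/?>', '\n', html)                      # drop the HTML tags
text = re.sub(r'\(?\d{3}\)?[-. ]\d{3}[-. ]\d{4}', '', text)  # drop phone numbers
text = re.sub(r'\([^)]*\)', '', text)                        # drop parenthesized parts

lines = [line.strip() for line in text.splitlines() if line.strip()]
print(', '.join(lines))  # 1234 Fake Ave S, Gotham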
This attempts to isolate the last two lines out of the string:
>>> s="""256-555-5555<br/>
... 1234 Fake Ave S<br/>
... Gotham (Lower Ward)<br/>
... """
>>> import re
>>> m = re.search(r'((?!<br/>).*)<br/>\n((?!<br/>).*)<br/>$', s)
>>> print(m.group(1))
1234 Fake Ave S
Trimming the parentheses is probably best left to a separate line of code, rather than complicating the regular expression further.
As far as I understand your problem, I think you are taking the wrong way to solve it.
Regexes are not a magical tool that can extract pertinent data from a pulp and jumble of undifferentiated text. They are a tool that can only extract data from text that has variable parts but also a minimum of stable structure acting as anchors, relative to which the variable parts can be located.
In your treatment, it seems to me that you first isolated the part containing a possible phone number followed by an address on one or two lines. But in doing so, you lost information: what comes before and what comes after is anchoring information, and you shouldn't search the remaining section obtained after eliminating that information.
Moreover, I presume that you don't want to catch only a phone number and an address: you may want to extract other pieces of information lying before and after this section. With a well-shaped regex, you could capture all the pieces in one shot.
So please give more of the text, with enough characters before and after the limited section, to allow a correct and easier regex strategy that catches all the data you want. triplee has already asked you for that, and you didn't provide it; why not?

Justadistraction: tokenizing English without whitespaces. Murakami SheepMan

I wondered how you would go about tokenizing strings in English (or other Western languages) if the whitespace were removed.
The inspiration for the question is the Sheep Man character in the Murakami novel 'Dance Dance Dance'
In the novel, the Sheep Man is translated as saying things like:
"likewesaid, we'lldowhatwecan. Trytoreconnectyou, towhatyouwant," said the Sheep Man. "Butwecan'tdoit-alone. Yougottaworktoo."
So, some punctuation is kept, but not all. Enough for a human to read, but somewhat arbitrary.
What would be your strategy for building a parser for this? Common combinations of letters, syllable counts, conditional grammars, look-ahead/behind regexps etc.?
Specifically, python-wise, how would you structure a (forgiving) translation flow? Not asking for a completed answer, just more how your thought process would go about breaking the problem down.
I ask this in a frivolous manner, but I think it's a question that might get some interesting (nlp/crypto/frequency/social) answers.
Thanks!
I actually did something like this for work about eight months ago. I just used a dictionary of English words in a hashtable (for O(1) lookup times). I'd go letter by letter matching whole words. It works well, but there are numerous ambiguities. (asshit can be ass hit or as shit). To resolve those ambiguities would require much more sophisticated grammar analysis.
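A minimal sketch of that dictionary idea, trying the longest candidate word first instead of growing letter by letter; the WORDS set is a tiny hypothetical stand-in for a real dictionary:
# WORDS is a tiny stand-in for a real dictionary (e.g. a word-list file)
WORDS = {"as", "ass", "shit", "hit", "book", "mark", "bookmark"}

def greedy_tokenize(s):
    tokens, i = [], 0
    while i < len(s):
        # try the longest candidate word starting at position i first
        for j in range(len(s), i, -1):
            if s[i:j].lower() in WORDS:
                tokens.append(s[i:j])
                i = j
                break
        else:
            i += 1  # no dictionary word starts here; skip the character
    return tokens

print(greedy_tokenize("asshit"))  # ['ass', 'hit'] -- one of the two ambiguous readings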
First of all, I think you need a dictionary of English words -- you could try some methods that rely solely on some statistical analysis, but I think a dictionary has better chances of good results.
Once you have the words, you have two possible approaches:
You could categorize the words into grammar categories and use a formal grammar to parse the sentences -- obviously, you would sometimes get no match or multiple matches -- I'm not familiar with techniques that would allow you to loosen the grammar rules in case of no match, but I'm sure there must be some.
On the other hand, you could just take some large corpus of English text and compute the relative probabilities of certain words being next to each other, getting a list of pairs and triples of words. Since that data structure would be rather big, you could use word categories (grammatical and/or based on meaning) to simplify it. Then you just build an automaton and choose the most probable transitions between the words.
I am sure there are many more possible approaches. You can even combine the two I mentioned, building some kind of grammar with weight attached to its rules. It's a rich field for experimenting.
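As a sketch of that statistical flavor, here is a tiny dynamic-programming segmenter; FREQ, score, and segment are hypothetical names, and the unigram counts are made up. A real version would learn them, and ideally the pair/triple statistics mentioned above, from a corpus:
from functools import lru_cache

# hypothetical unigram counts; a real version would gather these from a corpus
FREQ = {"like": 100, "we": 500, "said": 80, "wesa": 1, "id": 40}

def score(words):
    total = 1.0
    for w in words:
        total *= FREQ[w]
    return total

@lru_cache(maxsize=None)
def segment(s):
    """Return the best-scoring split of s into known words, or None."""
    if s == "":
        return ()
    candidates = []
    for i in range(1, len(s) + 1):
        head, tail = s[:i], s[i:]
        if head in FREQ:
            rest = segment(tail)
            if rest is not None:
                candidates.append((head,) + rest)
    return max(candidates, key=score) if candidates else None

print(segment("likewesaid"))  # ('like', 'we', 'said')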
I don't know if this is of much help to you, but you might be able to make use of this spelling corrector in some way.
This is just some quick code I wrote out that I think would work fairly well to extract words from a snippet like the one you gave. It's not fully thought out, but I think something along these lines would work if you can't find a pre-packaged solution.
textstring = "likewesaid, we'lldowhatwecan. Trytoreconnectyou, towhatyouwant," said the Sheep Man. "Butwecan'tdoit-alone. Yougottaworktoo."
indiv_characters = list(textstring) #splits string into individual characters
teststring = ''
sequential_indiv_word_list = []
for cur_char in indiv_characters:
teststring = teststring + cur_char
# do some action here to test the testsring against an English dictionary where you can API into it to get True / False if it exists as an entry
if in_english_dict == True:
sequential_indiv_word_list.append(teststring)
teststring = ''
#at the end just assemble a sentence from the pieces of sequential_indiv_word_list by putting a space between each word
There are some more issues to be worked out. For example, if the text never produces a match, this approach fails: it would just keep adding characters forever. However, since your demo string had some spaces, you could have it recognize those too and automatically start over at each one.
You also need to account for punctuation, writing conditionals like:
if cur_char == ',' or cur_char == '.':
    # take some action to start a new "word" automatically

strategies for finding duplicate mailing addresses

I'm trying to come up with a method of finding duplicate addresses, based on a similarity score. Consider these duplicate addresses:
addr_1 = '# 3 FAIRMONT LINK SOUTH'
addr_2 = '3 FAIRMONT LINK S'
addr_3 = '5703 - 48TH AVE'
addr_4 = '5703- 48 AVENUE'
I'm planning on applying some string transformations: abbreviating long words (like NORTH -> N) and removing all spaces, commas, dashes, and pound symbols. Now, having this output, how can I compare addr_3 with the rest of the addresses and detect similar ones? What percentage of similarity would be safe? Could you provide some simple Python code for this?
addr_1 = '3FAIRMONTLINKS'
addr_2 = '3FAIRMONTLINKS'
addr_3 = '570348THAV'
addr_4 = '570348AV'
Thankful,
Eduardo
First, simplify the address string by collapsing all whitespace to a single space between each word, and forcing everything to lower case (or upper case if you prefer):
adr = " ".join(adr.tolower().split())
Then, I would strip out things like "st" in "41st Street" or "nd" in "42nd Street":
adr = re.sub("1st(\b|$)", r'1', adr)
adr = re.sub("([2-9])\s?nd(\b|$)", r'\1', adr)
Note that the second sub() will work with a space between the "2" and the "nd", but I didn't set the first one up to do that, because I'm not sure how you could tell the difference between "41 St Ave" and "41 St" (the second one being "41 Street" abbreviated).
Be sure to read all the help for the re module; it's powerful but cryptic.
Then, I would split what you have left into a list of words, and apply the Soundex algorithm to list items that don't look like numbers:
http://en.wikipedia.org/wiki/Soundex
http://wwwhomes.uni-bielefeld.de/gibbon/Forms/Python/SEARCH/soundex.html
adrlist = [word if word.isdigit() else soundex(word) for word in adr.split()]  # keep numbers as-is, soundex the rest
Then you can work with the list or join it back to a string as you think best.
The whole idea of the Soundex thing is to handle misspelled addresses. That may not be what you want, in which case just ignore this Soundex idea.
Good luck.
Removing spaces, commas and dashes will be ambiguous. It will be better to replace them with a single space.
Take for example this address
56 5th avenue
And this
5, 65th avenue
with your method both of them will be:
565THAV
What you can do is write a good address-shortening algorithm and then use string comparison to detect duplicates. This should be enough to detect duplicates in the general case. A general similarity algorithm won't work, because a difference of a single digit can mean a completely different address.
The algorithm can go like this:
Replace all commas and dashes with spaces; use the str.translate method for that.
Build a dictionary of words and their abbreviated forms.
Remove the TH part if it follows a number (see the sketch below).
This should be helpful in building your dictionary of abbreviations:
https://pe.usps.com/text/pub28/28apc_002.htm
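A rough sketch of those steps, with a tiny hypothetical abbreviation table standing in for the full USPS list (normalize and ABBREV are made-up names):
import re

# tiny hypothetical abbreviation table; the USPS publication above has the full list
ABBREV = {"avenue": "ave", "street": "st", "north": "n", "south": "s"}

def normalize(addr):
    addr = addr.lower().translate(str.maketrans(",-#", "   "))  # punctuation -> spaces
    addr = re.sub(r"(\d+)\s*(st|nd|rd|th)\b", r"\1", addr)      # 48th -> 48
    return " ".join(ABBREV.get(word, word) for word in addr.split())

print(normalize('5703 - 48TH AVE'))          # 5703 48 ave
print(normalize('5703- 48 AVENUE'))          # 5703 48 ave
print(normalize('# 3 FAIRMONT LINK SOUTH'))  # 3 fairmont link s
print(normalize('3 FAIRMONT LINK S'))        # 3 fairmont link s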
I regularly inspect addresses for duplication where I work, and I have to say, I find Soundex highly unsuitable. It's both too slow and too eager to match things. I have similar issues with Levenshtein distance.
What has worked best for me is to sanitize and tokenize the addresses (get rid of punctuation, split things up into words) and then just see how many tokens match up. Because addresses typically have several tokens, you can develop a level of confidence in terms of a combination of (1) how many tokens were matched, (2) how many numeric tokens were matched, and (3) how many tokens are available. For example, if all tokens in the shorter address are in the longer address, the confidence of a match is pretty high. Likewise, if you match 5 tokens including at least one that's numeric, even if the addresses each have 8, that's still a high-confidence match.
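A minimal sketch of that token-overlap scoring, assuming hypothetical helper names (tokens, match_confidence) and a deliberately crude confidence formula; the weights are made up and should be tuned against real data:
import re

def tokens(addr):
    """Sanitize an address and split it into a set of uppercase tokens."""
    return set(re.sub(r"[^\w\s]", " ", addr.upper()).split())

def match_confidence(a, b):
    ta, tb = tokens(a), tokens(b)
    if not ta or not tb:
        return 0.0
    common = ta & tb
    matched = len(common) / min(len(ta), len(tb))  # share of the shorter address matched
    numeric_bonus = 0.25 if any(t.isdigit() for t in common) else 0.0
    return matched + numeric_bonus

print(match_confidence('# 3 FAIRMONT LINK SOUTH', '3 FAIRMONT LINK S'))  # 1.0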
It's definitely useful to do some tweaking, like substituting some common abbreviations. The USPS lists help, though I wouldn't go gung-ho trying to implement all of them, and some of the most valuable substitutions aren't on those lists. For example, 'JFK' should be a match for 'JOHN F KENNEDY', and there are a number of common ways to shorten 'MARTIN LUTHER KING JR'.
Maybe it goes without saying but I'll say it anyway, for completeness: Don't forget to just do a straight string comparison on the whole address before messing with more complicated things! This should be a very cheap test, and thus is probably a no-brainer first pass.
Obviously, the more time you're willing and able to spend (both on programming/testing and on run time), the better you'll be able to do. Fuzzy string matching techniques (faster and less generalized kinds than Levenshtein) can be useful, as a separate pass from the token approach (I wouldn't try to fuzzy match individual tokens against each other). I find that fuzzy string matching doesn't give me enough bang for my buck on addresses (though I will use it on names).
In order to do this right, you need to standardize your addresses according to USPS standards (your address examples appear to be US based). There are many direct marketing service providers that offer CASS (Coding Accuracy Support System) certification of postal addresses. The CASS process will standardize all of your addresses and append zip + 4 to them. Any undeliverable addresses will be flagged which will further reduce your postal mailing costs, if that is your intent. Once all of your addresses are standardized, eliminating duplicates will be trivial.
I had to do this once. I converted everything to lowercase, computed each address's Levenshtein distance to every other address, and ordered the results. It worked very well, but it was quite time-consuming.
You'll want to use an implementation of Levenshtein in C rather than in Python if you have a large data set. Mine was a few tens of thousands and took the better part of a day to run, I think.
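For reference, a small pure-Python Levenshtein distance (two-row dynamic programming); as noted, a C implementation such as the python-Levenshtein package will be much faster on large data sets:
def levenshtein(a, b):
    """Edit distance via two-row dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

print(levenshtein('3 fairmont link south', '3 fairmont link s'))  # 4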