regular expressions: remove Greek words with capital letters - python

I want to delete greek words with capital letters, such as:
text = 'Ο Κώστας θέλει να ΠΑΙΞΕΙ ΑΎΡΙΟ ποδόσφαιρο στο σχολείο'
the output should be
text = 'Ο Κώστας θέλει να ποδόσφαιρο στο σχολείο'
I checked this one: Regular expression : Remove words with Capital letters, but I don't know how to adapt the code to the Greek alphabet.

By consulting a Unicode chart we can see that the Greek letters live approximately in the range U+0370..U+1FFF, and we can then filter that range using the unicodedata module:
>>> import unicodedata
>>> greek_capital_chars = set(chr(cp) for cp in range(0x0370, 0x1FFF) if "GREEK CAPITAL" in unicodedata.name(chr(cp), ""))
>>> greek_capital_chars
{'Β', 'Χ', 'ᾛ', 'Ἁ', 'Ὼ', 'ᾜ', 'ᾫ', 'Ἂ', 'Ὰ', 'Ἑ', 'Ω', 'Ἤ', 'Ε', 'Ρ', 'Η', 'ᾏ', 'Ϳ', 'Ή', 'Ἣ', 'Ἵ', 'ᾋ', 'Ύ', 'ᾚ', 'Ή', 'Ϲ', 'Ί', 'Ὥ', 'Ύ', 'Ξ', 'Ὄ', 'Ο', 'Θ', 'Ϗ', 'Ϋ', 'Ͻ', 'ᾘ', 'Ὑ', 'Ώ', 'Ᾰ', 'ᾝ', 'Ἐ', 'Ὦ', 'Ά', 'Σ', 'Ὂ', 'Ἱ', 'Ὤ', 'Ͷ', 'Ὴ', 'Ό', 'Ψ', 'ῼ', 'Φ', 'Ἒ', 'Ὕ', 'ᾪ', 'Ἅ', 'Ῑ', 'Ἧ', 'Λ', 'Ἢ', 'Ϸ', 'Ἔ', 'Ί', 'Ἇ', 'Ἲ', 'Ὓ', 'Ζ', 'Τ', 'Ὗ', 'Ϊ', 'Ͽ', 'Μ', 'Ὀ', 'Ἄ', 'ᾊ', 'Κ', 'Γ', 'Ὶ', 'Ϻ', 'Ᾱ', 'ᾬ', 'Ώ', 'Ἳ', 'Ἥ', 'Ἦ', 'Ι', 'Ἃ', 'ᾌ', 'Ὁ', 'Έ', 'Δ', 'Ὡ', 'Ἆ', 'Ἰ', 'ϴ', 'Ͼ', 'Ῠ', 'ῌ', 'Ἓ', 'Ἕ', 'Έ', 'Ὃ', 'Ὠ', 'ᾈ', 'Ͱ', 'ᾼ', 'Ὢ', 'ᾙ', 'ᾞ', 'ᾎ', 'Ὸ', 'Ῥ', 'Ἀ', 'Ὣ', 'Ͳ', 'Ἶ', 'Ῐ', 'ᾮ', 'ᾍ', 'Ἡ', 'Ῡ', 'Ὧ', 'ᾉ', 'ᾩ', 'ᾯ', 'ᾭ', 'ᾟ', 'Ό', 'Α', 'Ὲ', 'Υ', 'Π', 'Ἴ', 'Ά', 'Ἷ', 'ᾨ', 'Ὅ', 'Ὺ', 'Ν', 'Ἠ'}
Then you can form a regexp that matches words (continuous runs) of such characters. We'll also include the Latin capital letters.
>>> import re
>>> import string
>>> chars_class = re.escape("".join(greek_capital_chars.union(string.ascii_uppercase)))
>>> r = re.compile(f"[{chars_class}]+")
>>> text = 'Ο Κώστας θέλει να ΠΑΙΞΕΙ ΑΎΡΙΟ ποδόσφαιρο στο σχολείο'
>>> r.sub("", text)
' ώστας θέλει να   ποδόσφαιρο στο σχολείο'
As it is, the regex also removes every single capital letter (hence the mangled 'Κώστας') and leaves stray spaces behind; you may wish to require at least two consecutive capitals and consume a trailing space:
>>> r = re.compile(f"[{chars_class}]{{2,}} ?")
>>> r.sub("", text)
'Ο Κώστας θέλει να ποδόσφαιρο στο σχολείο'
or similar instead, depending on your use case.
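If a third-party package is acceptable, the regex module (pip install regex) supports Unicode properties, which makes building the character class by hand unnecessary. A minimal sketch, assuming runs of two or more uppercase letters (Greek or Latin) should be dropped along with one trailing space:
import regex  # third-party: pip install regex
text = 'Ο Κώστας θέλει να ΠΑΙΞΕΙ ΑΎΡΙΟ ποδόσφαιρο στο σχολείο'
# \p{Lu} matches any Unicode uppercase letter, Greek included
print(regex.sub(r'\b\p{Lu}{2,}\b ?', '', text))
# 'Ο Κώστας θέλει να ποδόσφαιρο στο σχολείο'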


Python replace substring given pattern

I am doing some data cleansing and want to remove the whole string between the delimiters
"<p.>kódy:" and "</p.>". The strings are located in a dataframe and, for each record, different characters can be found between the two delimiters, so I thought combining str.replace or re.sub with some kind of a wildcard could work, but I've not been successful.
This is my sample input:
<p.>kódy: 2008212017 2008212025 2008212041 2008212066 2008212074 2008212108 2008212116 2008212124 2008212132 2008212140 2008212165 2008212199 2008212207 2008212215 2008212223 2008212231 2008212249 2008212256 2008212264 2008212272 2008212314 2008212355 2008212363 2008212389 2052500028 2052500036 2052500051 2052500069 2052500093 2052500101 2054384017 2054384041 2054384066 2054384090 2054384116 2054384124 2054384132 2054384140 2054384157 2054384165 2054384181 2054384199 2054384207 2054384215 2054384223 2054384249 20543842494 2054384348 2081043032 2081043057 2081043081 2081043214 2081043222 311088575007 311095577004 311095711009 4210013769006 62008212110</p.>
<p.> 924071180 924071181 924071182 </p.>
And the desired output:
<p.> 924071180 924071181 924071182 </p.>
Any help would be appreciated!
Cheers,
Stepan
You can use a regular expression substitution to get what you want in a single call:
data = """<p.>kódy: 2008212017 2008212025 2008212041 2008212066 2008212074 2008212108 2008212116 2008212124 2008212132 2008212140 2008212165 2008212199 2008212207 2008212215 2008212223 2008212231 2008212249 2008212256 2008212264 2008212272 2008212314 2008212355 2008212363 2008212389 2052500028 2052500036 2052500051 2052500069 2052500093 2052500101 2054384017 2054384041 2054384066 2054384090 2054384116 2054384124 2054384132 2054384140 2054384157 2054384165 2054384181 2054384199 2054384207 2054384215 2054384223 2054384249 20543842494 2054384348 2081043032 2081043057 2081043081 2081043214 2081043222 311088575007 311095577004 311095711009 4210013769006 62008212110</p.>
<p.> 924071180 924071181 924071182 </p.>
"""
import re
r = re.sub(r"<p\.>kódy:.+?</p\.>", "", data)
print(r)
Result:
<p.> 924071180 924071181 924071182 </p.>
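One caveat: the non-greedy .+? stops at the first </p.>, but . does not match newlines by default, so if a kódy block could ever span several lines you would also need the re.DOTALL flag, e.g.:
r = re.sub(r"<p\.>kódy:.+?</p\.>", "", data, flags=re.DOTALL)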
You can use split.
st = "<p.>kódy: 2008212017 2008212025 2008212041 2008212066 2008212074 2008212108 2008212116 2008212124 2008212132 2008212140 2008212165 2008212199 2008212207 2008212215 2008212223 2008212231 2008212249 2008212256 2008212264 2008212272 2008212314 2008212355 2008212363 2008212389 2052500028 2052500036 2052500051 2052500069 2052500093 2052500101 2054384017 2054384041 2054384066 2054384090 2054384116 2054384124 2054384132 2054384140 2054384157 2054384165 2054384181 2054384199 2054384207 2054384215 2054384223 2054384249 20543842494 2054384348 2081043032 2081043057 2081043081 2081043214 2081043222 311088575007 311095577004 311095711009 4210013769006 62008212110</p.> <p.> 924071180 924071181 924071182 </p.>"
st.split('</p.>')
Result:
['<p.>kódy: 2008212017 2008212025 2008212041 2008212066 2008212074 2008212108 2008212116 2008212124 2008212132 2008212140 2008212165 2008212199 2008212207 2008212215 2008212223 2008212231 2008212249 2008212256 2008212264 2008212272 2008212314 2008212355 2008212363 2008212389 2052500028 2052500036 2052500051 2052500069 2052500093 2052500101 2054384017 2054384041 2054384066 2054384090 2054384116 2054384124 2054384132 2054384140 2054384157 2054384165 2054384181 2054384199 2054384207 2054384215 2054384223 2054384249 20543842494 2054384348 2081043032 2081043057 2081043081 2081043214 2081043222 311088575007 311095577004 311095711009 4210013769006 62008212110', ' <p.> 924071180 924071181 924071182 ', '']
Or:
import re
t = re.sub(r'<p\.>kódy.*?</p\.>', '', st)  # escape the dots so '.' is literal
Result:
' <p.> 924071180 924071181 924071182 </p.>'
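Since the strings live in a dataframe, the same pattern can also be applied column-wise with pandas. A minimal sketch (the column name 'col' is an assumption for illustration):
import pandas as pd
df = pd.DataFrame({'col': ['<p.>kódy: 2008212017 2008212025</p.>',
                           '<p.> 924071180 924071181 924071182 </p.>']})
# str.replace with regex=True runs the substitution on every record
df['col'] = df['col'].str.replace(r'<p\.>kódy:.*?</p\.>', '', regex=True)
print(df['col'].tolist())
# ['', '<p.> 924071180 924071181 924071182 </p.>']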

How to ignore punctuation in-between words using word_tokenize in NLTK?

I'm looking to ignore characters in-between words using NLTK word_tokenize.
If I have a sentence:
test = 'Should I trade on the S&P? This works with a phone number 333-445-6635 and email test@testing.com'
The word_tokenize method is splitting the S&P into
'S','&','P','?'
Is there a way to have this library ignore punctuation between words or letters?
Expected output: 'S&P','?'
Let me know how this works with your sentences.
I added an additional test with a bunch of punctuation.
The regular expression is, in the final portion, modified from the WordPunctTokenizer regexp.
from nltk.tokenize import RegexpTokenizer
punctuation = r'[]!"$%&\'()*+,./:;=@#?[\\^_`{|}~-]?'
tokenizer = RegexpTokenizer(r'\w+' + punctuation + r'\w+?|[^\s]+?')
# result:
In [156]: tokenizer.tokenize(test)
Out[156]: ['Should', 'I', 'trade', 'on', 'the', 'S&P', '?']
# additional test:
In [225]: tokenizer.tokenize('"I am tired," she said.')
Out[225]: ['"', 'I', 'am', 'tired', ',', '"', 'she', 'said', '.']
Edit: the requirements changed a bit so we can slightly modify PottsTweetTokenizer for this purpose.
import re
import html.entities

emoticon_string = r"""
(?:
[<>]?
[:;=8] # eyes
[\-o\*\']? # optional nose
[\)\]\(\[dDpP/\:\}\{#\|\\] # mouth
|
[\)\]\(\[dDpP/\:\}\{#\|\\] # mouth
[\-o\*\']? # optional nose
[:;=8] # eyes
[<>]?
)"""
# Twitter symbols/cashtags: # Added by awd, 20140410.
# Based upon Twitter's regex described here: <https://blog.twitter.com/2013/symbols-entities-tweets>.
cashtag_string = r"""(?:\$[a-zA-Z]{1,6}([._][a-zA-Z]{1,2})?)"""
# The components of the tokenizer:
regex_strings = (
# Phone numbers:
r"""
(?:
(?: # (international)
\+?[01]
[\-\s.]*
)?
(?: # (area code)
[\(]?
\d{3}
[\-\s.\)]*
)?
\d{3} # exchange
[\-\s.]*
\d{4} # base
)"""
,
# Emoticons:
emoticon_string
,
# HTML tags:
r"""(?:<[^>]+>)"""
,
# URLs:
r"""(?:http[s]?://t.co/[a-zA-Z0-9]+)"""
,
# Twitter username:
r"""(?:#[\w_]+)"""
,
# Twitter hashtags:
r"""(?:\#+[\w_]+[\w\'_\-]*[\w_]+)"""
,
# Twitter symbols/cashtags:
cashtag_string
,
# email addresses
r"""(?:[\w.+-]+#[\w-]+\.(?:[\w-]\.?)+[\w-])""",
# Remaining word types:
r"""
(?:[a-z][^\s]+[a-z]) # Words with punctuation (modification here).
|
(?:[+\-]?\d+[,/.:-]\d+[+\-]?) # Numbers, including fractions, decimals.
|
(?:[\w_]+) # Words without apostrophes or dashes.
|
(?:\.(?:\s*\.){1,}) # Ellipsis dots.
|
(?:\S) # Everything else that isn't whitespace.
"""
)
word_re = re.compile(r"""(%s)""" % "|".join(regex_strings), re.VERBOSE | re.I | re.UNICODE)
# The emoticon and cashtag strings get their own regex so that we can preserve case for them as needed:
emoticon_re = re.compile(emoticon_string, re.VERBOSE | re.I | re.UNICODE)
cashtag_re = re.compile(cashtag_string, re.VERBOSE | re.I | re.UNICODE)
# These are for regularizing HTML entities to Unicode:
html_entity_digit_re = re.compile(r"&#\d+;")
html_entity_alpha_re = re.compile(r"&\w+;")
amp = "&amp;"
class CustomTweetTokenizer(object):
    def __init__(self, *, preserve_case: bool = False):
        self.preserve_case = preserve_case

    def tokenize(self, tweet: str) -> list:
        """
        Argument: tweet -- any string object.
        Value: a tokenized list of strings.
        """
        # Fix HTML character entities:
        tweet = self._html2unicode(tweet)
        # Tokenize:
        matches = word_re.finditer(tweet)
        if self.preserve_case:
            return [match.group() for match in matches]
        return [self._normalize_token(match.group()) for match in matches]

    @staticmethod
    def _normalize_token(token: str) -> str:
        if emoticon_re.search(token):
            # Avoid changing emoticons like :D into :d
            return token
        if token.startswith('$') and cashtag_re.search(token):
            return token.upper()
        return token.lower()

    @staticmethod
    def _html2unicode(tweet: str) -> str:
        """
        Internal method that seeks to replace all the HTML entities in
        tweet with their corresponding unicode characters.
        """
        # First the digits:
        ents = set(html_entity_digit_re.findall(tweet))
        if len(ents) > 0:
            for ent in ents:
                entnum = ent[2:-1]
                try:
                    entnum = int(entnum)
                    tweet = tweet.replace(ent, chr(entnum))
                except (ValueError, OverflowError):
                    pass
        # Now the alpha versions:
        ents = set(html_entity_alpha_re.findall(tweet))
        ents = filter((lambda x: x != amp), ents)
        for ent in ents:
            entname = ent[1:-1]
            try:
                tweet = tweet.replace(ent, chr(html.entities.name2codepoint[entname]))
            except KeyError:
                pass
        tweet = tweet.replace(amp, " and ")
        return tweet
To test it out:
tknzr = CustomTweetTokenizer(preserve_case=True)
tknzr.tokenize(test)
# result:
['Should',
'I',
'trade',
'on',
'the',
'S&P',
'?',
'This',
'works',
'with',
'a',
'phone',
'number',
'333-445-6635',
'and',
'email',
'test@testing.com']
Following up on @mechanical_meat's answer:
There's a Twitter text tokenizer in NLTK.
Most probably, it's derived from the PottsTweetTokenizer, at https://github.com/nltk/nltk/blob/develop/nltk/tokenize/casual.py
from nltk.tokenize import TweetTokenizer
tt = TweetTokenizer()
text = 'Should I trade on the S&P? This works with a phone number 333-445-6635 and email test@testing.com'
print(tt.tokenize(text))
[out]:
['Should', 'I', 'trade', 'on', 'the', 'S', '&', 'P', '?', 'This', 'works', 'with', 'a', 'phone', 'number', '333-445-6635', 'and', 'email', 'test@testing.com']
But that doesn't solve the S&P problem!!
So you can try the Multi-Word Expression approach, see https://stackoverflow.com/a/55644296/610569
from nltk import word_tokenize
from nltk.tokenize import TweetTokenizer
from nltk.tokenize import MWETokenizer
def multiword_tokenize(text, mwe, tokenize_func=word_tokenize):
    # Initialize the MWETokenizer
    protected_tuples = [tokenize_func(word) for word in mwe]
    protected_tuples_underscore = ['_'.join(word) for word in protected_tuples]
    tokenizer = MWETokenizer(protected_tuples)
    # Tokenize the text.
    tokenized_text = tokenizer.tokenize(tokenize_func(text))
    # Replace the underscored protected words with the original MWE
    for i, token in enumerate(tokenized_text):
        if token in protected_tuples_underscore:
            tokenized_text[i] = mwe[protected_tuples_underscore.index(token)]
    return tokenized_text
text = 'Should I trade on the S&P? This works with a phone number 333-445-6635 and email test@testing.com'
mwe = ['S&P']
tt = TweetTokenizer()
print(multiword_tokenize(text, mwe, tt.tokenize))
[out]:
['Should', 'I', 'trade', 'on', 'the', 'S&P', '?', 'This', 'works', 'with', 'a', 'phone', 'number', '333-445-6635', 'and', 'email', 'test@testing.com']
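As a small aside (a sketch, not part of the answer above): MWETokenizer joins protected expressions with '_' by default, which is why the helper has to map tokens back afterwards. Passing a separator that reconstructs the original surface form avoids that remapping step:
from nltk.tokenize import MWETokenizer
tok = MWETokenizer([('S', '&', 'P')], separator='')
print(tok.tokenize('Should I trade on the S & P ?'.split()))
# ['Should', 'I', 'trade', 'on', 'the', 'S&P', '?']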

how to indicate a raw string with regex() if my pattern comes from another string?

I have a csv table from which I get my regex pattern, e.g. \bconden
Problem: I can't find a way to tell Python that this is a raw string.
How do I put the r prefix before a pattern when it comes from a string?
import re
a = 'de la matière condensée'
fromcsv = '\bconden'
print(re.search('r' + fromcsv, a))
result is None
You can use the str_to_raw function below to make a raw string out of an already declared plain string variable:
import re
a = 'de la matière condensée'
pattern = '\bconden'
escape_dict = {
    '\a': r'\a',
    '\b': r'\b',
    '\c': r'\c',
    '\f': r'\f',
    '\n': r'\n',
    '\r': r'\r',
    '\t': r'\t',
    '\v': r'\v',
    '\'': r'\'',
    '\"': r'\"',
    '\0': r'\0',
    '\1': r'\1',
    '\2': r'\2',
    '\3': r'\3',
    '\4': r'\4',
    '\5': r'\5',
    '\6': r'\6',
    '\7': r'\7',
    '\8': r'\8',
    '\9': r'\9'
}

def str_to_raw(s):
    return r''.join(escape_dict.get(c, c) for c in s)
print(re.search(r'\bconden', a))
print(re.search(str_to_raw(pattern), a))
Output:
<re.Match object; span=(14, 20), match='conden'>
<re.Match object; span=(14, 20), match='conden'>
note: I got escape_dict from this page.
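Worth noting: escape processing only happens to string literals typed in source code. A value actually read from a CSV file never goes through it, so a cell containing \bconden already arrives with a literal backslash and works as-is; str_to_raw is only needed when you simulate the CSV value with a plain literal, as in the question. A quick sketch:
import re
a = 'de la matière condensée'
pattern_from_file = '\\bconden'  # what csv.reader would actually yield for the cell \bconden
print(re.search(pattern_from_file, a))
# <re.Match object; span=(14, 20), match='conden'>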

Regex with unicode and str

I have a list of regexes and a replace function.
The regexes and the replace function:
replacement_patterns = [(ur'\\u20ac', ur' euros'),(ur'\xe2\x82\xac', r' euros'),(ur'\b[eE]?[uU]?[rR]\b', r' euros'), (ur'\b([0-9]+)[eE][uU]?[rR]?[oO]?[sS]?\b',ur' \1 euros')]
class RegexpReplacer(object):
    def __init__(self, patterns=replacement_patterns):
        self.patterns = [(re.compile(regex, re.UNICODE | re.IGNORECASE), repl) for (regex, repl) in patterns]

    def replace(self, text):
        s = text
        for (pattern, repl) in self.patterns:
            (s, count) = re.subn(pattern, repl, s)
        return s
If I write the string as below:
string='730\u20ac.\r\n\n ropa surf ... 5,10 muy buen estado..... 170 \u20ac\r\n\nPack 850\u20ac, reparaci\u00f3n. \r\n\n'
replacer = RegexpReplacer()
texto= replacer.replace(string)
I get perfect results.
But if I call the function when iterating over a JSON file I have just loaded, it does not work (no error, but no replacement either).
What seems to happen is that when I call the function on the typed variable it receives a str, and when I call it from the JSON iteration it receives a unicode object.
My question is: why is my regex not working on the unicode string? Isn't it supposed to?
Maybe you need something like this
import re
regex = re.compile("^http://.+", re.UNICODE)
And if you need more than one, you can do like this
regex = re.compile("^http://.+", re.UNICODE | re.IGNORECASE)
For example:
>>> r = re.compile("^http://.+", re.UNICODE | re.IGNORECASE)
>>> r.match('HTTP://ыыы')
<_sre.SRE_Match object at 0x7f572455d648>
Is this the correct result?
>>> class RegexpReplacer(object):
...     def __init__(self, patterns=replacement_patterns):
...         self.patterns = [(re.compile(regex, re.UNICODE | re.IGNORECASE), repl) for (regex, repl) in patterns]
...     def replace(self, text):
...         s = text
...         for (pattern, repl) in self.patterns:
...             (s, count) = re.subn(pattern, repl, s)
...         return s
...
>>> string='730\u20ac.\r\n\n ropa surf ... 5,10 muy buen estado..... 170 \u20ac\r\n\nPack 850\u20ac, reparaci\u00f3n. \r\n\n'
>>> replacer = RegexpReplacer()
>>> texto= replacer.replace(string)
>>> texto
u'730 euros.\r\n\n ropa surf ... 5,10 muy buen estado..... 170 euros\r\n\nPack 850 euros, reparaci\\u00f3n. \r\n\n'
If you want Unicode replacement patterns, you also need to be operating on Unicode strings. JSON should be returning Unicode as well.
Change the following by removing the doubled backslash (\\u20ac should be \u20ac) and dropping the UTF-8 byte pattern (you won't see UTF-8 byte sequences in a Unicode string). Also, you compile with re.IGNORECASE, so there is no need for [eE], etc.:
replacement_patterns = [(ur'\u20ac', ur' euros'),(ur'\be?u?r\b', r' euros'), (ur'\b([0-9]+)eu?r?o?s?\b',ur' \1 euros')]
Make the following a Unicode string (add u):
string = u'730\u20ac.\r\n\n ropa surf ... 5,10 muy buen estado..... 170 \u20ac\r\n\nPack 850\u20ac, reparaci\u00f3n. \r\n\n'
Then it should operate on Unicode JSON as well.
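For readers on Python 3, a hedged sketch of the same replacer: every str is Unicode there, the ur'' prefix is gone, and re understands \u escapes inside raw patterns:
import re

replacement_patterns = [
    (r'\u20ac', ' euros'),                     # the euro sign itself
    (r'\be?u?r\b', ' euros'),                  # eur / eu / er / r variants
    (r'\b([0-9]+)eu?r?o?s?\b', r' \1 euros'),  # e.g. 850eur -> 850 euros
]

def replace_all(text):
    for pattern, repl in replacement_patterns:
        text = re.sub(pattern, repl, text, flags=re.IGNORECASE)
    return text

print(replace_all('Pack 850\u20ac, reparación.'))
# 'Pack 850 euros, reparación.'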

What is the regular expression for matching *text*?

What is the regex to match "this is *some text*." but not "this is \*another \*text"? The regex is supposed to match the text between the asterisks.
pattern = "\*(\w+(?:\s+\w+)*)\*"
re.findall(pattern, "this is *some text*.") // return 'some text'
re.findall(pattern, "this is \*another \*text") // return nothing
For replacing '*' with '$':
subpattern = "(\*(\w+(?:\s+\w+)*)\*)"
re.sub(subpattern, r"$\2$", "this is *some text*.") // return 'this is $some text$.'
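A possible refinement (an illustrative sketch, not from the answer above): a negative lookbehind rejects asterisks preceded by a backslash, which also copes with punctuation between the stars:
import re
pattern = r"(?<!\\)\*(.+?)(?<!\\)\*"
re.findall(pattern, "this is *some text*.")       # ['some text']
re.findall(pattern, r"this is \*another \*text")  # []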
