My website supports a number of Indian languages, and the user can change the language dynamically. When a user inputs a string value, I have to split it into its individual characters. So I'm looking for a way to write a common function that will work for English and a select set of Indian languages. I have searched across sites, but there appears to be no common way to handle this requirement. There are language-specific implementations (for example, the Open-Tamil package implements get_letters for Tamil), but I could not find a common way to split or iterate through the characters of a Unicode string taking graphemes into consideration.
One of the many methods that I've tried:
name = u'தமிழ்'
print name
for i in list(name):
    print i
#expected output
தமிழ்
த
மி
ழ்
#actual output
தமிழ்
த
ம
ி
ழ
்
#Here is another example using a different Indian language
name = u'हिंदी'
print name
for i in list(name):
    print i
#expected output
हिंदी
हिं
दी
#actual output
हिंदी
ह
ि
ं
द
ी
The way to solve this is to group all "L" category characters with their subsequent "M" category characters:
>>> regex.findall(ur'\p{L}\p{M}*', name)
[u'\u0ba4', u'\u0bae\u0bbf', u'\u0bb4\u0bcd']
>>> for c in regex.findall(ur'\p{L}\p{M}*', name):
...     print c
...
த
மி
ழ்
To get "user-perceived" characters in any language, use the \X (eXtended grapheme cluster) pattern from the third-party regex module:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import regex # $ pip install regex
for text in [u'தமிழ்', u'हिंदी']:
    print("\n".join(regex.findall(r'\X', text, regex.U)))
Output
த
மி
ழ்
हिं
दी
uniseg works really well for this, and the docs are OK. The other answer to this question works for international Unicode characters, but falls flat if users enter Emoji. The solution below will work:
>>> emoji = u'😀😃😄😁'
>>> from uniseg.graphemecluster import grapheme_clusters
>>> for c in list(grapheme_clusters(emoji)):
...     print c
...
😀
😃
😄
😁
This is from pip install uniseg==0.7.1.
Related
Is there any way in Python 3 to replace language-specific characters with English letters?
For example, I've got a function get_city(IP) that returns the city name associated with a given IP. It connects to an external database, so I can't change the way it encodes; I am just getting the value from the database.
I would like to do something like:
city = "České Budějovice"
city = clear_name(city)
print(city) #should return "Ceske Budejovice"
Here I used Czech, but in general it should work for any non-Asian language.
Try unidecode:
# coding=utf-8
from unidecode import unidecode
city = "České Budějovice"
print(unidecode(city))
Prints Ceske Budejovice as desired.
Note: if you're using Python 2.x, you'll need to decode the string before passing it to unidecode, e.g. unidecode(city.decode('utf-8'))
Use unicodedata module for such cases.
To get the needed result, normalize the given string with unicodedata.normalize() and then filter out the combining characters identified by unicodedata.combining():
import unicodedata
city = "České Budějovice"
normalized = unicodedata.normalize('NFD', city)
new_city = u"".join([c for c in normalized if not unicodedata.combining(c)])
print(new_city) # Ceske Budejovice
NFD is one of the four Unicode Normalization Forms: http://www.unicode.org/reports/tr15/
Asongtoring's answer above is almost correct, but in Python 3 it is a bit simpler, as Pavlo Fesenko mentions in a comment on that solution. Here is the solution in Python 3:
from unidecode import unidecode
city = "České Budějovice"
print(unidecode(city))
I am trying to get a string to use in the Google geocoding API. I've checked a lot of threads, but I am still facing a problem and I don't understand how to solve it.
I need addresse1 to be a string without any special characters. addresse1 is, for example: "32 rue d'Athènes Paris France".
addresse1= collect.replace(' ','+').replace('\n','')
addresse1=unicodedata.normalize('NFKD', addresse1).encode('utf-8','ignore')
Here I got a string without any accents... Oh no... it is not a string but bytes. So I've done what was suggested and decoded:
addresse1=addresse1.decode('utf-8')
But then addresse1 is exactly the same as at the beginning... What do I have to do? What am I doing wrong? What don't I understand about Unicode? Or is there a better solution?
Thanks,
Stéphane.
With the 3rd-party package unidecode:
>>> import unidecode
>>> unidecode.unidecode("32 rue d'Athènes Paris France")
"32 rue d'Athenes Paris France"
addresse1=unicodedata.normalize('NFKD', addresse1).encode('utf-8','ignore')
You probably meant .encode('ascii', 'ignore'), to remove non-ASCII characters. UTF-8 contains all characters, so encoding to it doesn't get rid of any, and an encode-decode cycle with it is a no-op.
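For instance, a quick sketch of the difference (Python 3, to match the question's urllib.request usage; the address literal is just for illustration):
import unicodedata
s = unicodedata.normalize('NFKD', "32 rue d'Athènes")
# UTF-8 can represent every character, so the round-trip changes nothing:
s.encode('utf-8').decode('utf-8')            # "32 rue d'Athènes" (decomposed, but the same text)
# ASCII cannot; with errors='ignore' the decomposed accents are silently dropped:
s.encode('ascii', 'ignore').decode('ascii')  # "32 rue d'Athenes"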
is there a better solution?
It depends what you are trying to do.
If you only want to remove diacritical marks and not lose all other non-ASCII characters, you can check unicodedata.category for each character after NFKD-normalising and remove those in category M (see the sketch after these options).
If you want to transliterate to ASCII that becomes a language-specific question that requires custom replacements (for example in German ö becomes oe, but not in Swedish).
If you just want to fudge a string into ASCII because having non-ASCII characters in it causes some code to break, it is of course much better to fix that code to work properly with all Unicode characters than to mangle good data. The letter è is not encodable in ASCII, but neither are 99.9989% of all characters so that hardly makes it “special”. Code that only supports ASCII is lame.
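A minimal sketch of the first option above (Python 3; strip_marks is just an illustrative name):
import unicodedata

def strip_marks(text):
    # Decompose, then drop characters in the combining-mark categories (Mn, Mc, Me).
    decomposed = unicodedata.normalize('NFKD', text)
    return ''.join(ch for ch in decomposed if not unicodedata.category(ch).startswith('M'))

print(strip_marks("32 rue d'Athènes Paris France"))  # 32 rue d'Athenes Paris France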
The Google Geocoding API can work with Unicode perfectly well so there is no obvious reason you should need to do any of this.
ETA:
url2= 'maps.googleapis.com/maps/api/geocode/json?address=' + addresse1 ...
Ah, you need to URL-encode any data you inject into a URL. That's not just for Unicode; the above will break for many ASCII punctuation symbols too. Use urllib.quote to encode a single string, or urllib.urlencode to convert multiple parameters:
params = dict(
    address=address1.encode('utf-8'),
    key=googlekey
)
url2 = '...?' + urllib.urlencode(params)
(In Python 3 it's urllib.parse.quote and urllib.parse.urlencode, and they apply UTF-8 automatically, so you don't have to encode manually there.)
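A sketch of the Python 3 form (the URL and key value are placeholders):
from urllib.parse import quote, urlencode

# Encode a single component:
quote("32 rue d'Athènes Paris France")
# -> '32%20rue%20d%27Ath%C3%A8nes%20Paris%20France'

# Or build the whole query string; UTF-8 percent-encoding is automatic:
params = {'address': "32 rue d'Athènes Paris France", 'key': 'YOUR_KEY'}
url2 = 'https://maps.googleapis.com/maps/api/geocode/json?' + urlencode(params)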
data2 = urllib.request.urlopen(url2).read().decode('utf-8')
data3=json.loads(data2)
json.loads also accepts byte strings, so you should be safe to omit the UTF-8 decode. Anyway, json.load will read directly from a file-like object, so you shouldn't have to load the data into a string at all:
data3 = json.load(urllib.request.urlopen(url2))
Generally, there are two approaches: (1) regular expressions and (2) str.translate.
1) regular expressions
Decompose the string and remove characters from the Unicode block \u0300-\u036f (Combining Diacritical Marks):
import unicodedata
import re
word = unicodedata.normalize("NFD", word)
word = re.sub("[\u0300-\u036f]", "", word)
This removes accents, circumflexes, diaereses, and so on:
pingüino > pinguino
εἴκοσι εἶσι > εικοσι εισι
For some languages, it could be another block, such as [\u0559-\u055f] for Armenian script.
2) str.translate
First, create a replacement table (case-sensitive) and then apply it:
repl = str.maketrans(
"áéúíó",
"aeuio"
)
word.translate(repl)
Multi-character replacements are made as follows:
repl = {
ord("æ"): "ae",
ord("œ"): "oe",
}
word.translate(repl)
I had a similar problem where I was generating tags that users might have to type with their phone.
Without using 3rd-party packages, you can simplify bobince's answer above:
collect = "32 rue d'Athènes Paris France"
unicode_collect = unicodedata.normalize('NFD', collect)
address1 = unicode_collect.encode('ascii', 'ignore').decode('utf-8')
address1:
"32 rue d'Athenes Paris France"
You can use Python's translate() method.
Here's an example copied from tutorialspoint.com:
#!/usr/bin/python
from string import maketrans # Required to call maketrans function.
intab = "aeiou"
outtab = "12345"
trantab = maketrans(intab, outtab)
s = "this is string example....wow!!!"
print s.translate(trantab)
This outputs:
th3s 3s str3ng 2x1mpl2....w4w!!!
So you can define what characters you wish to replace more easily than with replace()
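For reference, in Python 3 maketrans is a static method on str rather than a function in the string module; a sketch of the equivalent:
# Python 3: str.maketrans replaces string.maketrans
trantab = str.maketrans("aeiou", "12345")
print("this is string example....wow!!!".translate(trantab))
# th3s 3s str3ng 2x1mpl2....w4w!!!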
I have the following two functions that work perfectly fine with ASCII strings, using the re module:
import re
def findWord(w):
    return re.compile(r'\b{0}.*?\b'.format(w), flags=re.IGNORECASE).findall
def replace_keyword(w, c, x):
    return re.sub(r"\b({0}\S*)".format(w), r'<mark style="background-color:{0}">\1</mark>'.format(c), x, flags=re.I)
However, they fail on UTF-8 encoded strings with accented characters. On searching further, I found that the regex module is better suited to Unicode strings, so I have been trying to port this to regex for the last couple of hours, but nothing seems to be working. This is what I have as of now:
import regex
def findWord(w):
    return regex.compile(r'\b{0}.*?\b'.format(w), flags=regex.IGNORECASE|regex.UNICODE).findall
def replace_keyword(w, c, x):
    return regex.sub(r"\b({0}\S*)".format(w), r'<mark style="background-color:{0}">\1</mark>'.format(c), x, flags=regex.IGNORECASE|regex.UNICODE)
However, on using an accented (not normalized) utf-8 encoded string, I keep getting an ordinal not in range error.
EDIT: The suggested possible duplicate question: Regular expression to match non-English characters? doesn't solve my problem. I want to use the python re/regex module. Secondly, I want to get the find and replace functions working using python.
EDIT: I am using python 2
EDIT: If you feel you can help me get these two functions working using Python 3 please let me know. I hope I will be able to invoke python 3 for using just these 2 functions through my python 2 script.
I think I am headed somewhere. I am trying to get this working without using the modules re or regex but plain python:
found_keywords = []
for word in keyword_list:
    if word.lower() in article_text.lower():
        found_keywords.append(word)

for word in found_keywords:  # highlight the found keyword in the text
    article_text = article_text.lower().replace(word.lower(), '<mark style="background-color:%s">%s</mark>' % (yellow_color, word))
Now I just have to replace the found keywords in a case-insensitive manner and I will be good to go. Just help me with this last step, without using re or regex, so that it works for accented strings.
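One way that last step could be done without re or regex is to search a lowercased copy of the text but splice the original casing into the output; a sketch (replace_keyword_ci is an illustrative name):
def replace_keyword_ci(keyword, color, text):
    # Find case-insensitive matches on a lowercased copy, but keep the
    # original casing of each matched span in the output.
    result, lower, k = [], text.lower(), keyword.lower()
    i = 0
    while True:
        j = lower.find(k, i)
        if j == -1:
            result.append(text[i:])
            return ''.join(result)
        result.append(text[i:j])
        result.append('<mark style="background-color:%s">%s</mark>'
                      % (color, text[j:j + len(keyword)]))
        i = j + len(keyword)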
I have a string of characters that includes [a-z] as well as á,ü,ó,ñ,å,... and so on. Currently I am using regular expressions to get every line in a file that includes these characters.
Sample of spanishList.txt:
adan
celular
tomás
justo
tom
átomo
camara
rosa
avion
Python code (charactersToSearch comes from the Flask route @application.route('/<charactersToSearch>')):
print (charactersToSearch)
#'átdsmjfnueó'
...
#encode
charactersToSearch = charactersToSearch.encode('utf-8')
query = re.compile('[' + charactersToSearch + ']{2,}$', re.UNICODE).match
words = set(word.rstrip('\n') for word in open('spanishList.txt') if query(word))
...
When I do this, I am expecting to get the words in the text file that include the characters in charactersToSearch. It works perfectly for words without special characters:
...
#after doing further searching for other conditions, return list of found words.
return '<br />'.join(sorted(set(word for (word, path) in solve())))
>>> adan
>>> justo
>>> tom
The only problem is that it ignores all words in the file that aren't ASCII. I should also be getting tomás and átomo.
I've tried encode, UTF-8, using ur'[...], but I haven't been able to get it to work for all characters. The file and the program (# -*- coding: utf-8 -*-) are in utf-8 as well.
A different tack
I'm not sure how to fix it in your current workflow, so I'll suggest a different route.
This regex will match characters that are neither white-space characters nor letters in the extended ASCII range, such as A and é. In other words, if one of your words contains a weird character that is not part of this set, the regex will match.
(?i)(?!(?![×Þß÷þø])[a-zÀ-ÿ])\S
Of course this will also match punctuation, but I'm assuming that we're only looking at words in an unpunctuated list. Otherwise, excluding punctuation is not too hard.
As I see it, your challenge is to define your set.
In Python, you can do something like:
import re

if re.search(r"(?i)(?!(?![×Þß÷þø])[a-zÀ-ÿ])\S", subject):
    pass  # successful match
else:
    pass  # match attempt failed
I feel your pain. Dealing with Unicode in Python 2.x is a headache.
The problem with that input is that Python 2 sees "á" as the raw byte string '\xc3\xa1' rather than as a unicode object. So you're going to need to sanitize the input before passing the string into your regex.
To change a raw byte string to a unicode string:
char = "á"
## print char yields the infamous "\xc3\xa1",
## which is probably what the regex is not registering.
bytes_in_string = [byte for byte in char]
hex_string = ''.join(['%02x' % ord(byte) for byte in bytes_in_string])
new_unicode_string = unichr(int(hex_string, 16))
## (for UTF-8 input, char.decode('utf-8') is a much more direct way to get a unicode object)
There's probably a better way, because this is a lot of operations to get something ready for regex, which I think is supposed to be faster in some way than iterating & 'if/else'ing.
Dunno though, not an expert.
I used something similar to this to isolate the special-character words when I parsed wiktionary, which was a wicked mess. As far as I can tell you're going to have to comb through it to clean it up anyway, so you may as well just:
for word in file:
    try:
        word.encode('UTF-8')
    except UnicodeDecodeError:
        your_list_of_special_char_words.append(word)
Hope this helped, and good luck!
On further research I found this post:
Bytes in a unicode Python string
I was able to figure out the issue. After getting the string from the Flask app route, encode it (otherwise it gives you an error), and then decode charactersToSearch and each word in the file.
charactersToSearch = charactersToSearch.encode('utf-8')
Then decode it as UTF-8. If you leave the previous line out, it gives you an error:
UNIOnlyAlphabet = charactersToSearch.decode('UTF-8')
query = re.compile('[' + UNIOnlyAlphabet + ']{2,}$', re.U).match
Lastly, when reading the UTF-8 file and using query, don't forget to decode each word in the file.
words = set(word.decode('UTF-8').rstrip('\n') for word in open('spanishList.txt') if query(word.decode('UTF-8')))
That should do it. Now the results show regular and special characters.
justo
tomás
átomo
adan
tom
I use this regex on some input,
[^a-zA-Z0-9##]
However this ends up removing lots of HTML special characters within the input, such as #227;, #1606;, #1588; (I had to remove the & prefix so that they wouldn't show up as the actual values).
Is there a way that I can convert them to their values so that they will satisfy the regex?
Given that your text appears to have numeric-coded, not named, entities, you can first convert your byte string that includes xml entity defs (ampersand, hash, digits, semicolon) to unicode:
import re
xed_re = re.compile(r'&#(\d+);')
def usub(m): return unichr(int(m.group(1)))
s = '&#227;, &#1606;, &#1588;'
u = xed_re.sub(usub, s)
If your terminal emulator can display arbitrary Unicode glyphs, print u will then show:
ã, ن, ش
In any case, you can now, if you wish, use your original RE and you won't accidentally "catch" the entities, only ascii letters, digits, and the couple of punctuation characters you listed. (I'm not sure that's what you really want -- why not accented letters but just ascii ones, for example? -- but, if it is what you want, it will work).
If you do have named entities in addition to the numeric-coded ones, you can also apply the htmlentitydefs standard library module recommended in another answer (it only deals with named entities which map to Latin-1 code points, however).
You can adapt the following script:
import htmlentitydefs
import re

def substitute_entity(match):
    name = match.group(1)
    if name in htmlentitydefs.name2codepoint:
        return unichr(htmlentitydefs.name2codepoint[name])
    elif name.startswith('#'):
        try:
            return unichr(int(name[1:]))
        except ValueError:
            pass
    return '?'

print re.sub('&(#?\\w+);', substitute_entity, 'x &laquo; y &wat; z &#123;')
Produces the following answer here:
x « y ? z {
EDIT: I understood the question as "how to get rid of HTML entities before further processing", hope I haven't wasted time on answering a wrong question ;)
Without knowing what the expression is being used for I can't tell exactly what you need.
This will match special characters or strings of characters excluding letters, digits, #, and #:
[^a-zA-Z0-9##]*|#[0-9A-Za-z]+;
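A sketch of how it might be applied for removal; note the entity alternative is placed first and * tightened to +, because in Python's re an empty match on the first branch would otherwise prevent the entity branch from ever applying:
import re

# Entities first, then runs of any other disallowed characters.
pattern = re.compile(r'#[0-9A-Za-z]+;|[^a-zA-Z0-9##]+')

print(pattern.sub('', 'abc #227; def!'))  # abcdef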