Convert strings with special characters to strings without special characters - python

I am using Python 2.7. If I have a string assigned to the name variable like below
name = "Test with-name and_underscore"
how can I convert this into a string that can be assigned to the name variable, like
name = "TestWithNameAndUnderscore"
Is a regular expression the way to go, or does Python have any built-in functions to do that?
What I am looking for is: when a string comes in with underscores, dashes, spaces, or any other special characters in it, it gets converted to the same thing but with no underscores/dashes/spaces/special characters, and with the initial letter of each word in upper case, so "test name - is this_here" becomes "TestNameIsThisHere".
And if there are no spaces or special characters, don't do anything. So if the string is "Helloworld", skip it and move on.
The reason I am doing this is that I am writing something for AWS using the Python boto library, and there is a naming restriction on what the resources can be called: the names cannot contain non-alphanumeric characters.

>>> import re
>>> name = "Test with-name and_underscore"
>>> print(''.join(x.capitalize() for x in re.compile(r'[^a-zA-Z0-9]').split(name)))
TestWithNameAndUnderscore
You can strip leading digits off as well, if that is needed. Here is a slightly more robust example that will do that and ensure that the resulting string is not empty:
>>> import re
>>> def fix_id(s, split=re.compile('[^a-zA-Z0-9]+|^[0-9]+').split):
...     result = ''.join(x.capitalize() for x in split(s))
...     if not result:
...         raise ValueError('Invalid ID (empty after edits)')
...     return result
...
>>> fix_id("Test with-name and_underscore")
'TestWithNameAndUnderscore'
>>> fix_id("123 Test 456 with-name and_underscore 789")
'Test456WithNameAndUnderscore789'
>>> fix_id("Thisshouldbeunmolested")
'Thisshouldbeunmolested'
>>> fix_id('123')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 4, in fix_id
ValueError: Invalid ID (empty after edits)
Note that neither of these will guarantee the uniqueness of your identifiers, e.g. "Mary-Sue" and "mary sue" will map to the same identifier. If you need these to map to different identifiers, you can add a cache dictionary, where you map symbols and add a suffix if necessary.
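A rough sketch of that cache idea (the names seen and unique_id here are just illustrative): keep a dict of the identifiers already handed out and append a counter when a collision occurs:
>>> seen = {}
>>> def unique_id(s):
...     base = fix_id(s)            # reuse fix_id from above
...     count = seen.get(base, 0)   # how many times this base was handed out
...     seen[base] = count + 1
...     return base if count == 0 else '%s%d' % (base, count + 1)
...
>>> unique_id("Mary-Sue")
'MarySue'
>>> unique_id("mary sue")
'MarySue2'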

This can be done without Regex using the isalnum() function in Python.
name = "Test with-name and_underscore"
new_name = ''.join(e for e in name if e.isalnum())
Of course, if you insist on using a regex, this is possible too by replacing isalnum() with the appropriate regex function.
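For example, a rough regex-based equivalent of that comprehension could use re.sub to drop everything that is not alphanumeric (note that unlike isalnum(), this restricts the result to ASCII):
import re

name = "Test with-name and_underscore"
new_name = re.sub(r'[^a-zA-Z0-9]', '', name)  # 'Testwithnameandunderscore'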

I know a stupid way to do it!
name = name.replace('_', ' ').replace('-', ' ')
name = name.title().replace(' ','')

A possible smaller re approach would be to use the following:
import re
string = '123 this is a test_of the-sub method 33'
varString = re.sub('_?-? ?', '', string)
it should return
>>> re.sub('_?-? ?', '', string)
'123thisisatestofthesubmethod33'
If you are trying to use it as a variable name, though, you will likely run into some trouble, such as the name being too long (PEP 8 compliance) or containing other characters such as !?$%, which the isalnum() approach above may help with. I'd be careful about trusting the value of the string to become a variable name, and would wrap some constraints around it to avoid any kind of overflow.
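As a rough sketch of that kind of constraint checking (safe_name and the 255-character cap are just illustrative, not an actual AWS rule), you could wrap the cleanup in a length cap and a check that something usable is left:
import re

MAX_LEN = 255  # illustrative cap only; use whatever limit the resource actually has

def safe_name(raw):
    # drop everything that is not alphanumeric, then enforce the cap
    cleaned = re.sub('[^a-zA-Z0-9]', '', raw)
    if not cleaned:
        raise ValueError('nothing left after removing special characters')
    return cleaned[:MAX_LEN]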

How to replace a comma in Python that is attached to a letter [duplicate]

I'm trying to remove specific characters from a string using Python. This is the code I'm using right now. Unfortunately it appears to do nothing to the string.
for char in line:
    if char in " ?.!/;:":
        line.replace(char,'')
How do I do this properly?
Strings in Python are immutable (can't be changed). Because of this, the effect of line.replace(...) is just to create a new string, rather than changing the old one. You need to rebind (assign) it to line in order to have that variable take the new value, with those characters removed.
Also, the way you are doing it is going to be kind of slow, relatively. It's also likely to be a bit confusing to experienced pythonators, who will see a doubly-nested structure and think for a moment that something more complicated is going on.
Starting in Python 2.6 and newer Python 2.x versions*, you can instead use str.translate (see the Python 3 answer below):
line = line.translate(None, '!##$')
or regular expression replacement with re.sub
import re
line = re.sub('[!##$]', '', line)
The characters enclosed in brackets constitute a character class. Any characters in line which are in that class are replaced with the second parameter to sub: an empty string.
Python 3 answer
In Python 3, strings are Unicode. You'll have to translate a little differently. kevpie mentions this in a comment on one of the answers, and it's noted in the documentation for str.translate.
When calling the translate method of a Unicode string, you cannot pass the second parameter that we used above. You also can't pass None as the first parameter. Instead, you pass a translation table (usually a dictionary) as the only parameter. This table maps the ordinal values of characters (i.e. the result of calling ord on them) to the ordinal values of the characters which should replace them, or—usefully to us—None to indicate that they should be deleted.
So to do the above dance with a Unicode string you would call something like
translation_table = dict.fromkeys(map(ord, '!##$'), None)
unicode_line = unicode_line.translate(translation_table)
Here dict.fromkeys and map are used to succinctly generate a dictionary containing
{ord('!'): None, ord('#'): None, ...}
Even simpler, as another answer puts it, create the translation table in place:
unicode_line = unicode_line.translate({ord(c): None for c in '!##$'})
Or, as brought up by Joseph Lee, create the same translation table with str.maketrans:
unicode_line = unicode_line.translate(str.maketrans('', '', '!##$'))
* for compatibility with earlier Pythons, you can create a "null" translation table to pass in place of None:
import string
line = line.translate(string.maketrans('', ''), '!##$')
Here string.maketrans is used to create a translation table, which is just a string containing the characters with ordinal values 0 to 255.
Am I missing the point here, or is it just the following:
string = "ab1cd1ef"
string = string.replace("1", "")
print(string)
# result: "abcdef"
Put it in a loop:
a = "a!b#c#d$"
b = "!##$"
for char in b:
    a = a.replace(char, "")
print(a)
# result: "abcd"
>>> line = "abc##!?efg12;:?"
>>> ''.join( c for c in line if c not in '?:!/;' )
'abc##efg12'
With re.sub regular expression
Substitution using the regular expression function re.sub is another option:
import re
re.sub(r'\ |\?|\.|\!|\/|\;|\:', '', line)
Example
import re
line = 'Q: Do I write ;/.??? No!!!'
re.sub(r'\ |\?|\.|\!|\/|\;|\:', '', line)
'QDoIwriteNo'
Explanation
In regular expressions (regex), | is a logical OR and \ escapes spaces and special characters that might be actual regex commands. Whereas sub stands for substitution, in this case with the empty string ''.
The asker almost had it. Like most things in Python, the answer is simpler than you think.
>>> line = "H E?.LL!/;O:: "
>>> for char in ' ?.!/;:':
...     line = line.replace(char,'')
...
>>> print line
HELLO
You don't have to do the nested if/for loop thing, but you DO need to check each character individually.
For the inverse requirement of only allowing certain characters in a string, you can use regular expressions with a set complement operator [^ABCabc]. For example, to remove everything except ascii letters, digits, and the hyphen:
>>> import string
>>> import re
>>>
>>> phrase = ' There were "nine" (9) chick-peas in my pocket!!! '
>>> allow = string.letters + string.digits + '-'
>>> re.sub('[^%s]' % allow, '', phrase)
'Therewerenine9chick-peasinmypocket'
From the python regular expression documentation:
Characters that are not within a range can be matched by complementing
the set. If the first character of the set is '^', all the characters
that are not in the set will be matched. For example, [^5] will match
any character except '5', and [^^] will match any character except
'^'. ^ has no special meaning if it’s not the first character in the
set.
line = line.translate(None, " ?.!/;:")
>>> s = 'a1b2c3'
>>> ''.join(c for c in s if c not in '123')
'abc'
Strings are immutable in Python. The replace method returns a new string after the replacement. Try:
for char in line:
    if char in " ?.!/;:":
        line = line.replace(char,'')
This is identical to your original code, with the addition of an assignment to line inside the loop.
Note that the string replace() method replaces all of the occurrences of the character in the string, so you can do better by using replace() for each character you want to remove, instead of looping over each character in your string.
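For example (a small sketch of that suggestion, iterating over the short set of unwanted characters instead of over line):
for unwanted in " ?.!/;:":
    line = line.replace(unwanted, '')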
I was surprised that no one had yet recommended using the builtin filter function.
import operator
import string # only for the example you could use a custom string
s = "1212edjaq"
Say we want to filter out everything that isn't a number. Using the filter builtin method "...is equivalent to the generator expression (item for item in iterable if function(item))" [Python 3 Builtins: Filter]
sList = list(s)
intsList = list(string.digits)
obj = filter(lambda x: operator.contains(intsList, x), sList)
In Python 3 this returns
>> <filter object # hex>
To get a printed string,
nums = "".join(list(obj))
print(nums)
>> "1212"
I am not sure how filter ranks in terms of efficiency but it is a good thing to know how to use when doing list comprehensions and such.
UPDATE
Logically, since filter works, you could also use a list comprehension, and from what I have read it is supposed to be more efficient, because lambdas are the Wall Street hedge fund managers of the programming function world. Another plus is that it is a one-liner that doesn't require any imports. For example, using the same string 's' defined above,
num = "".join([i for i in s if i.isdigit()])
That's it. The return will be a string of all the characters that are digits in the original string.
If you have a specific list of acceptable/unacceptable characters you need only adjust the 'if' part of the list comprehension.
target_chars = "".join([i for i in s if i in some_list])
or alternatively,
target_chars = "".join([i for i in s if i not in some_list])
Using filter, you'd just need one line
line = filter(lambda char: char not in " ?.!/;:", line)
This treats the string as an iterable and checks every character if the lambda returns True:
>>> help(filter)
Help on built-in function filter in module __builtin__:

filter(...)
    filter(function or None, sequence) -> list, tuple, or string

    Return those items of sequence for which function(item) is true.  If
    function is None, return the items that are true.  If sequence is a tuple
    or string, return the same type, else return a list.
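Note that the help text above is from Python 2, where filter on a string returns a string. In Python 3, filter returns an iterator, so you would have to join the result back into a string yourself, roughly like this:
line = ''.join(filter(lambda char: char not in " ?.!/;:", line))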
Try this one:
def rm_char(original_str, need2rm):
    '''Remove characters in "need2rm" from "original_str".'''
    return original_str.translate(str.maketrans('', '', need2rm))
This method works well in Python 3.
Here are some possible ways to achieve this task:
def attempt1(string):
    return "".join([v for v in string if v not in ("a", "e", "i", "o", "u")])

def attempt2(string):
    for v in ("a", "e", "i", "o", "u"):
        string = string.replace(v, "")
    return string

def attempt3(string):
    import re
    for v in ("a", "e", "i", "o", "u"):
        string = re.sub(v, "", string)
    return string

def attempt4(string):
    return string.replace("a", "").replace("e", "").replace("i", "").replace("o", "").replace("u", "")

for attempt in [attempt1, attempt2, attempt3, attempt4]:
    print(attempt("murcielago"))
PS: Instead of using " ?.!/;:" the examples use the vowels... and yeah, "murcielago" is the Spanish word for bat... a funny word, as it contains all the vowels :)
PS2: If you're interested in performance, you could measure these attempts with simple code like:
import timeit

K = 1000000
for i in range(1, 5):
    t = timeit.Timer(
        f"attempt{i}('murcielago')",
        setup=f"from __main__ import attempt{i}"
    ).repeat(1, K)
    print(f"attempt{i}", min(t))
In my box you'd get:
attempt1 2.2334518376057244
attempt2 1.8806643818474513
attempt3 7.214925774955572
attempt4 1.7271184513757465
So it seems attempt4 is the fastest one for this particular input.
Here's my Python 2/3 compatible version, since the translate API has changed:
def remove(str_, chars):
    """Removes each char in `chars` from `str_`.

    Args:
        str_: String to remove characters from
        chars: String of to-be removed characters

    Returns:
        A copy of str_ with `chars` removed

    Example:
        remove("What?!?: darn;", " ?.!:;") => 'Whatdarn'
    """
    try:
        # Python 2.x
        return str_.translate(None, chars)
    except TypeError:
        # Python 3.x
        table = {ord(char): None for char in chars}
        return str_.translate(table)
#!/usr/bin/python
import re
strs = "how^ much for{} the maple syrup? $20.99? That's[] ricidulous!!!"
print strs
nstr = re.sub(r'[?|$|.|!|a|b]', r' ', strs)  # characters chosen for removal; any character can be added here
print nstr
nestr = re.sub(r'[^a-zA-Z0-9 ]', r'', nstr)  # remove any remaining special characters
print nestr
You can also use a function to substitute different kinds of regular expressions or other patterns with the use of a list. With that, you can mix regular expressions, character classes, and really basic text patterns. It's really useful when you need to substitute a lot of elements, like HTML ones.
NB: works with Python 3.x
import re # Regular expression library
def string_cleanup(x, notwanted):
    for item in notwanted:
        x = re.sub(item, '', x)
    return x
line = "<title>My example: <strong>A text %very% $clean!!</strong></title>"
print("Uncleaned: ", line)
# Get rid of html elements
html_elements = ["<title>", "</title>", "<strong>", "</strong>"]
line = string_cleanup(line, html_elements)
print("1st clean: ", line)
# Get rid of special characters
special_chars = ["[!##$]", "%"]
line = string_cleanup(line, special_chars)
print("2nd clean: ", line)
The function string_cleanup takes your string x and your list notwanted as arguments. For each item in that list of elements or patterns, a substitution is performed if needed.
The output:
Uncleaned: <title>My example: <strong>A text %very% $clean!!</strong></title>
1st clean: My example: A text %very% $clean!!
2nd clean: My example: A text very clean
The method I'd use probably isn't as efficient, but it is massively simple. I can remove multiple characters at different positions all at once, using slicing and formatting.
Here's an example:
words = "things"
removed = "%s%s" % (words[:3], words[-1:])
This will result in 'removed' holding the word 'this'.
Formatting can be very helpful for printing variables midway through a print string. It can insert any data type using a % followed by the variable's data type; all data types can use %s, and floats (aka decimals) and integers can use %d.
Slicing can be used for intricate control over strings. When I put words[:3], it selects the characters in the string from the beginning (the colon before the number means 'from the beginning up to') up to, but not including, the character at index 3, which gives the first three characters. The cutoff lands there because Python starts counting at 0. Then, when I put words[-1:], it means from the last character to the end (the colon is behind the number). Putting -1 makes Python count from the end of the string rather than the beginning. So, words[-1:] basically means 'from the last character to the end of the string', which is just the last character.
So, by cutting off the characters before the character I want to remove and the characters after and sandwiching them together, I can remove the unwanted character. Think of it like a sausage. In the middle it's dirty, so I want to get rid of it. I simply cut off the two ends I want then put them together without the unwanted part in the middle.
If I want to remove multiple consecutive characters, I simply shift the numbers around in the [] (slicing part). Or if I want to remove multiple characters from different positions, I can simply sandwich together multiple slices at once.
Examples:
words = "control"
removed = "%s%s" % (words[:2], words[-2:])
removed equals 'cool'.
words = "impacts"
removed = "%s%s%s" % (words[1], words[3:5], words[-1])
removed equals 'macs'.
In this case, [3:5] means the characters from index 3 up to, but not including, index 5.
Remember, Python starts counting at 0, so you will need to as well.
In Python 3.5, for example, to remove all the digits from a file name:
os.rename(file_name, file_name.translate({ord(c): None for c in '0123456789'}))
How about this:
def text_cleanup(text):
    new = ""
    for i in text:
        if i not in " ?.!/;:":
            new += i
    return new
Below is one without using the regular expression concept:
ipstring = "text with symbols!##$^&*( ends here"
opstring = ''
for i in ipstring:
    if i.isalnum() or i == ' ':
        opstring += i
print opstring
Recursive split:
# s = string; chars = characters to remove
def strip(s, chars):
    if len(s) <= 1:
        return "" if s in chars else s
    return strip(s[0:int(len(s)/2)], chars) + strip(s[int(len(s)/2):len(s)], chars)
example:
print(strip("Hello!","lo")) #He!
You could use the re module's regular expression replacement. Using the ^ expression allows you to pick exactly what you want from your string.
import re
text = "This is absurd!"
text = re.sub("[^a-zA-Z]","",text) # Keeps only Alphabets
print(text)
The output of this would be "Thisisabsurd". Only the characters specified after the ^ symbol (inside the brackets) will remain.
# for each file in a directory, strip the digits from the file name
import os
import re

file_list = os.listdir(r"D:\Dev\Python")
for file_name in file_list:
    os.rename(file_name, re.sub(r'\d+', '', file_name))
Even the below approach works
line = "a,b,c,d,e"
alpha = list(line)
while ',' in alpha:
    alpha.remove(',')
finalString = ''.join(alpha)
print(finalString)
output: abcde
The string method replace does not modify the original string. It leaves the original alone and returns a modified copy.
What you want is something like: line = line.replace(char,'')
def replace_all(line):
    for char in line:
        if char in " ?.!/;:":
            line = line.replace(char,'')
    return line
However, creating a new string each and every time that a character is removed is very inefficient. I recommend the following instead:
def replace_all(line, baddies):
    """
    The following is documentation on how to use the class,
    without reference to the implementation details:

    For implementation notes, please see comments beginning with `#`
    in the source file.

    [*crickets chirp*]
    """
    is_bad = lambda ch, baddies=baddies: ch in baddies
    filter_baddies = lambda ch, *, is_bad=is_bad: "" if is_bad(ch) else ch
    mahp = replace_all.map(filter_baddies, line)
    return replace_all.join('', mahp)
# -------------------------------------------------
# WHY `baddies=baddies`?!?
# `is_bad=is_bad`
# -------------------------------------------------
# Default arguments to a lambda function are evaluated
# at the same time as when a lambda function is
# **defined**.
#
# global variables of a lambda function
# are evaluated when the lambda function is
# **called**
#
# The following prints "as yellow as snow"
#
# fleece_color = "white"
# little_lamb = lambda end: "as " + fleece_color + end
#
# # sometime later...
#
# fleece_color = "yellow"
# print(little_lamb(" as snow"))
# --------------------------------------------------
replace_all.map = map
replace_all.join = str.join
If you want to keep only a set of allowed characters, filtering by ASCII code, you can use this piece of code:
for char in s:
    if ord(char) < 97 or ord(char) > 122:
        s = s.replace(char, "")
It will remove all characters outside a...z, including upper-case letters.

Excluding a specific string of characters in a str()-function

A small issue I've encountered during coding.
I'm looking to print out the name of a .txt file.
For example, the file is named: verdata_florida.txt, or verdata_newyork.txt
How can I exclude .txt and verdata_, but keep the string between? It must work for any number of characters, but .txt and verdata_ must be excluded.
This is where I am so far; I've already defined filename to be input():
print("Average TAM at", str(filename[8:**????**]), "is higher than ")
3 ways of doing it:
using str.split twice:
>>> "verdata_florida.txt".split("_")[1].split(".")[0]
'florida'
using str.partition twice (you won't get an exception if the format doesn't match, and probably faster too):
>>> "verdata_florida.txt".partition("_")[2].partition(".")[0]
'florida'
using re, keeping only center part:
>>> import re
>>> re.sub(".*_(.*)\..*",r"\1","verdata_florida.txt")
'florida'
All of those above must be tuned if _ and . appear multiple times (must we keep the longest or the shortest string?); see the sketch below.
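For example, with a name containing more than one underscore, partition and rpartition pick different pieces (a small sketch, assuming you want everything between the first _ and the last .):
>>> name = "verdata_new_hampshire.txt"
>>> name.partition("_")[2].partition(".")[0]    # split on the first _ and the first .
'new_hampshire'
>>> name.rpartition("_")[2].rpartition(".")[0]  # split on the last _ and the last .
'hampshire'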
EDIT: In your case, though, prefixes & suffixes seem fixed. In that case, just use str.replace twice:
>>> "verdata_florida.txt".replace("verdata_","").replace(".txt","")
'florida'
Assuming you want it to split on the first _ and the last . you can use slicing and the index and rindex functions to get this done. These functions will search for the first occurrence of the substring in the parenthesis and return the index number. If no substring is found, they will throw a ValueError. If the search is desired, but not the ValueError, you can also use find and rfind, which do the same thing but always return -1 if no match is found.
s = 'verdata_new_hampshire.txt'
s_trunc = s[s.index('_') + 1: s.rindex('.')] # or s[s.find('_') + 1: s.rfind('.')]
print(s_trunc) # new_hampshire
Of course, if you are always going to exclude verdata_ and .txt you could always hardcode the slice as well.
print(s[8:-4]) # new_hampshire
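To illustrate the difference mentioned above when the substring is missing (a small sketch):
>>> s = 'verdata_new_hampshire.txt'
>>> s.find('!')   # no match: returns -1
-1
>>> s.index('!')  # no match: raises
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: substring not found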
You can leverage str.split() on strings. For example:
s = 'verdata_newyork.txt'
s.split('verdata_')
# ['', 'newyork.txt']
s.split('verdata_')[1]
# 'newyork.txt'
s.split('verdata_')[1].split('.txt')
# ['newyork', '']
s.split('verdata_')[1].split('.txt')[0]
# 'newyork'
You can just split the string by the dot and the underscore, like this:
filename = "verdata_prague.txt"
name = filename.split(".")[0]   # verdata_prague
name = name.split("_")[1]       # prague
or with the replace function:
filename = "verdata_prague.txt"
name = filename.replace(".txt", "")       # verdata_prague
name = name.replace("verdata_", "")       # prague

python regular expression to match strings

I want to parse a string, such as:
package: name='jp.tjkapp.droid1lwp' versionCode='2' versionName='1.1'
uses-permission:'android.permission.WRITE_APN_SETTINGS'
uses-permission:'android.permission.RECEIVE_BOOT_COMPLETED'
uses-permission:'android.permission.ACCESS_NETWORK_STATE'
I want to get:
string1: jp.tjkapp.droid1lwp
string2: 1.1
Because there are multiple uses-permission lines, I want to get the permissions as a list containing:
WRITE_APN_SETTINGS, RECEIVE_BOOT_COMPLETED and ACCESS_NETWORK_STATE.
Could you help me write the python regular expression to get the strings I want?
Thanks.
Assuming the code block you provided is one long string, here stored in a variable called input_string:
import re

name = re.search(r"(?<=name\=\')[\w\.]+?(?=\')", input_string).group(0)
versionName = re.search(r"(?<=versionName\=\')\d+?\.\d+?(?=\')", input_string).group(0)
permissions = re.findall(r'(?<=android\.permission\.)[A-Z_]+(?=\')', input_string)
Explanation:
name
(?<=name\=\'): a lookbehind: check just before the main match in order to return only strings that are preceded by name='. The \ in front of = and ' serves to escape them so that the regex knows we mean the literal = and ' characters and not a regex command. name=' is not also returned when we get the result; we just know that the results we get are all preceded by it.
[\w\.]+?: This is the main string we're searching for. \w means any alphanumeric character and underscore. \. is an escaped period, so the regex knows we mean . and not the regex command represented by an unescaped period. Putting these in [] means we're okay with anything we've stuck in brackets, so we're saying that we'll accept any alphanumeric character, _, or .. + afterwords means at least one of the previous thing, meaning at least one (but possibly more) of [\w\.]. Finally, the ? means don't be greedy--we're telling the regex to get the smallest possible group that meets these specifications, since + could go on for an unlimited number of repeats of anything matched by [\w\.].
(?=\'): a lookahead: check just after the main match in order to return only strings that are followed by '. The \ is also an escape, since otherwise regex or Python's string parsing might misinterpret '. This final ' is not returned with our results; we just know that in the original string, it followed any result we do end up getting.
You can do this without regex by reading the file content line by line.
>>> def split_string(s):
...     if s.startswith('package'):
...         return [i.split('=')[1] for i in s.split() if "=" in i]
...     elif s.startswith('uses-permission'):
...         return s.split('.')[-1]
...
>>> split_string("package: name='jp.tjkapp.droid1lwp' versionCode='2' versionName='1.1'")
["'jp.tjkapp.droid1lwp'", "'2'", "'1.1'"]
>>> split_string("uses-permission:'android.permission.WRITE_APN_SETTINGS'")
"WRITE_APN_SETTINGS'"
>>> split_string("uses-permission:'android.permission.RECEIVE_BOOT_COMPLETED'")
"RECEIVE_BOOT_COMPLETED'"
>>> split_string("uses-permission:'android.permission.ACCESS_NETWORK_STATE'")
"ACCESS_NETWORK_STATE'"
>>>
Here is one example:
#!/usr/bin/env python
inputFile = open("test.txt", "r").readlines()
for line in inputFile:
    if line.startswith("package"):
        words = line.split()
        string1 = words[1].split("=")[1].replace("'","")
        string2 = words[3].split("=")[1].replace("'","")
The test.txt file contains the input data you mentioned earlier.

string.upper(<str>) and <str>.upper() won't execute

I have the following bit of code:
def test():
    fragment = ''
    fragment = raw_input('Enter input')
    while fragment not in string.ascii_letters:
        fragment = raw_input('Invalid character entered, try again: ')
    fragment.upper()
    print fragment*3
However when I run it, say for an input value of p, fragment gets printed as 'ppp' - all lower case, i.e. the fragment.upper() line does not run. The same thing happens if I replace that line with string.upper(fragment) (and adding import string at the beginning). Can someone tell me what I'm doing wrong?
Strings are immutable. So functions like str.upper() will not modify str but return a new string.
>>> name = "xyz"
>>> name.upper()
'XYZ'
>>> print name
xyz # Notice that it's still in lower case.
>>> name_upper = name.upper()
>>> print name_upper
XYZ
So instead of fragment.upper() in your code, you need to do new_variable = fragment.upper() and then use this new_variable.
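Applied to the code in the question, the fix would look something like this:
import string

def test():
    fragment = raw_input('Enter input')
    while fragment not in string.ascii_letters:
        fragment = raw_input('Invalid character entered, try again: ')
    fragment = fragment.upper()   # rebind the name to the new, upper-cased string
    print fragment * 3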
You're not realizing that strings in Python are immutable and that string methods and operations return new strings.
>>> print 'ppp'.upper()
PPP
A string is an immutable object, so when you call
string.upper()
Python makes a copy of the string, and when you later call
print string
it prints the original string, which is lower case. So when you need its upper-case version, you have to say:
print string.upper()

Python: How to use string.translate() to replace quotation marks? (for "slug" creation)

I want to remove all strange characters from a string to make it "url safe". Therefore, I have a function that goes like this:
def urlize(url, safe=u''):
    intab = u"àáâãäåòóôõöøèéêëçìíîïùúûüÿñ" + safe
    outtab = u"aaaaaaooooooeeeeciiiiuuuuyn" + safe
    trantab = dict((ord(a), b) for a, b in zip(intab, outtab))
    return url.lower().translate(trantab).strip()
This works just great, but now I want to reuse that function to allow special characters. For example, the quotation mark.
urlize(u'This is sóme randóm "text" that í wánt to process',u'"')
...and that throws the following error:
Traceback (most recent call last):
  File "<input>", line 1, in <module>
TypeError: expected a character buffer object
I have tried the following, but they did not work:
urlize(u'text',u'\"')
intab = u"àáâãäåòóôõöøèéêëçìíîïùúûüÿñ%s" , safe
--EDIT--
The full function looks like this
def urlize(url, safe=u''):
    intab = u"àáâãäåòóôõöøèéêëçìíîïùúûüÿñ" + safe
    outtab = u"aaaaaaooooooeeeeciiiiuuuuyn" + safe
    trantab = dict((ord(a), b) for a, b in zip(intab, outtab))
    translated_url = url.lower().translate(trantab).strip()
    pos = 0
    stop = len(translated_url)
    new_url = ''
    last_division_char = False
    while pos < stop:
        if not translated_url[pos].isalnum() and translated_url[pos] not in safe:
            if (not last_division_char) and (pos != stop - 1):
                new_url += '-'
            last_division_char = True
        else:
            new_url += translated_url[pos]
            last_division_char = False
        pos += 1
    return new_url
--EDIT-- Goal
What I want is to normalize text so that I can put it in the URL myself, and use it like an ID. For example, if I want to show the products of a category, I'd rather put "ninos-y-bebes" instead of "niños-y-bebés" (Spanish for kids and babies). I really don't want all the áéíóúñ (which are the special characters in Spanish) in my URL, but I don't want to get rid of them entirely either. That's why I would like to replace all characters that look the same (not 100% of them, I don't care) and then delete all the non-alphanumeric characters that are left.
The unidecode module is a safer option (it will handle other special symbols like "degree"):
>>> from unidecode import unidecode
>>> s = u'This is sóme randóm "text" that í wánt to process'
>>> unidecode(s)
'This is some random "text" that i want to process'
>>> import urllib
>>> urllib.urlencode(dict(x=unidecode(s)))[2:]
'This+is+some+random+%22text%22+that+i+want+to+process'
[ update ]
i think i'm already doing that -> u"aaaaaaooooooeeeeciiiiuuuuyn" – Marco Bruggmann
Fair enough, if you are willing to keep track of every unicode character out there for your translation table (accented characters are not the only issues, there are a whole lot of symbols to rain on your parade).
Worse, many Unicode symbols may be visually identical to their ASCII counterparts, leading to hard-to-diagnose errors.
[ update ]
What about something like:
>>> safe_chars = 'abcdefghijklmnopqrstuvwxyz01234567890-_'
>>> filter(lambda x: x in safe_chars, "i think i'm already doing that")
'ithinkimalreadydoingthat'
[ update ]
#Daenyth I tried it, but I only get errors: from urllib import urlencode => urlencode('google.com/';) => TypeError: not a valid non-string sequence or mapping object – Marco Bruggmann
The urlencode function is intended to produce query-string formatted output (a=1&b=2&c=3). It expects key/value pairs:
>>> urllib.urlencode(dict(url='google.com/'))
'url=google.com%2F'
>>> help(urllib.urlencode)
Help on function urlencode in module urllib:

urlencode(query, doseq=0)
    Encode a sequence of two-element tuples or dictionary into a URL query string.

    If any values in the query arg are sequences and doseq is true, each
    sequence element is converted to a separate parameter.

    If the query arg is a sequence of two-element tuples, the order of the
    parameters in the output will match the order of parameters in the
    input.
(END)
[ update ]
That will works without a doubt, but what I want is to normalize text so that I can put it on the url myself, and use it like an Id. For example, if I want to show the products of a category, I'd rather put "ninos-y-bebes" instead of "niños-y-bebés" (spanish for kids and babies). I really don't want all the áéíóúñ (which are the special characters in spanish) in my url, but I don't want to get rid of them either. That's why I would like to replace all characters that looks the same (not 100% all of them, I dont care) and then delete all non alfanumeric characters left.
OK, Marco, what you want is a routine to create so-called slugs, isn't it?
You can do it in one line:
>>> s = u'This is sóme randóm "text" that í wánt to process'
>>> allowed_chars = 'abcdefghijklmnopqrstuwvxyz01234567890'
>>> ''.join([ x if x in allowed_chars else '-' for x in unidecode(s.lower()) ])
u'this-is-some-random--text--that-i-want-to-process'
>>> s = u"Niños y Bebés"
>>> ''.join([ x if x in allowed_chars else '-' for x in unidecode(s.lower()) ])
u'ninos-y-bebes'
>>> s = u"1ª Categoria, ½ docena"
>>> ''.join([ x if x in allowed_chars else '-' for x in unidecode(s.lower()) ])
u'1a-categoria--1-2-docena'
