Check if a string has unique characters excluding whitespace - python

I'm practicing questions from Cracking the Coding Interview to get better and, just in case, be prepared. The first problem states: find whether a string has all unique characters or not. I wrote this and it works perfectly:
def isunique(string):
    x = []
    for i in string:
        if i in x:
            return False
        else:
            x.append(i)
    return True
Now, my question is, what if I have all unique characters like in:
'I am J'
which would be pretty rare, but let's say it occurs by mere chance. How can I create an exception for the spaces, in a way that the space isn't counted as a character, so the function returns True and not False?

Now, no matter how many spaces or special characters are in your string, it will only consider the word characters:
import re
def isunique(string):
    pattern = r'\w'
    search = re.findall(pattern, string)
    string = search
    x = []
    for i in string:
        if i in x:
            return False
        else:
            x.append(i)
    return True
print(isunique('I am J'))
output:
True
Test case without spaces:
print(isunique('war'))
True
Test case with spaces:
print(isunique('w a r'))
True
Repeating letters:
print(isunique('warrior'))
False

Create a list of characters you want to treat as non-characters and remove them from the string, then run your function code.
Alternatively, a better approach to checking the uniqueness of characters is to compare the length of the final string with the length of its set, as:
def isunique(my_string):
    nonchars = [' ', '.', ',']
    for nonchar in nonchars:
        my_string = my_string.replace(nonchar, '')
    return len(set(my_string)) == len(my_string)
Sample Run:
>>> isunique( 'I am J' )
True
As per Python's set() documentation:
Return a new set object, optionally with elements taken from iterable.
set is a built-in class. See set and Set Types — set, frozenset for
documentation about this class.

And... a pool of answers is never complete unless there is also a regex solution:
def is_unique(string):
    import re
    patt = re.compile(r"^.*?(.).*?(\1).*$")
    return not re.search(patt, string)
(I'll leave the whitespace handling as an exercise to the OP)
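For completeness, a minimal sketch of how that exercise might be solved (my addition, not the answerer's code): strip the whitespace first, then apply the same pattern.
import re

def is_unique_ignoring_spaces(string):
    # Remove all whitespace, then look for any character that repeats.
    stripped = re.sub(r'\s+', '', string)
    return not re.search(r"^.*?(.).*?\1.*$", stripped)

# is_unique_ignoring_spaces('I am J')   -> True
# is_unique_ignoring_spaces('warrior')  -> False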

An elegant approach (YMMV), with collections.Counter.
from collections import Counter
def isunique(string):
    return Counter(string.replace(' ', '')).most_common(1)[0][-1] == 1
Alternatively, if your strings can contain other whitespace than plain spaces (tabs and newlines, for instance), I'd recommend regex-based substitution:
import re
string = re.sub(r'\s+', '', string, flags=re.M)

Simple solution
def isunique(string):
    return all(string.count(i) == 1 for i in string if i != ' ')


how to replace a comma in python, which is pressed to the letter [duplicate]

I'm trying to remove specific characters from a string using Python. This is the code I'm using right now. Unfortunately it appears to do nothing to the string.
for char in line:
    if char in " ?.!/;:":
        line.replace(char,'')
How do I do this properly?
Strings in Python are immutable (can't be changed). Because of this, the effect of line.replace(...) is just to create a new string, rather than changing the old one. You need to rebind (assign) it to line in order to have that variable take the new value, with those characters removed.
Also, the way you are doing it is going to be kind of slow, relatively. It's also likely to be a bit confusing to experienced pythonators, who will see a doubly-nested structure and think for a moment that something more complicated is going on.
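To make the rebinding concrete, the minimal fix to the original loop (my sketch; the alternatives below are nicer) is simply:
for char in line:
    if char in " ?.!/;:":
        line = line.replace(char, '')  # rebind line to the new string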
Starting in Python 2.6 and newer Python 2.x versions*, you can instead use str.translate (see the Python 3 answer below):
line = line.translate(None, '!@#$')
or regular expression replacement with re.sub
import re
line = re.sub('[!@#$]', '', line)
The characters enclosed in brackets constitute a character class. Any characters in line which are in that class are replaced with the second parameter to sub: an empty string.
Python 3 answer
In Python 3, strings are Unicode. You'll have to translate a little differently. kevpie mentions this in a comment on one of the answers, and it's noted in the documentation for str.translate.
When calling the translate method of a Unicode string, you cannot pass the second parameter that we used above. You also can't pass None as the first parameter. Instead, you pass a translation table (usually a dictionary) as the only parameter. This table maps the ordinal values of characters (i.e. the result of calling ord on them) to the ordinal values of the characters which should replace them, or—usefully to us—None to indicate that they should be deleted.
So to do the above dance with a Unicode string you would call something like
translation_table = dict.fromkeys(map(ord, '!@#$'), None)
unicode_line = unicode_line.translate(translation_table)
Here dict.fromkeys and map are used to succinctly generate a dictionary containing
{ord('!'): None, ord('@'): None, ...}
Even simpler, as another answer puts it, create the translation table in place:
unicode_line = unicode_line.translate({ord(c): None for c in '!@#$'})
Or, as brought up by Joseph Lee, create the same translation table with str.maketrans:
unicode_line = unicode_line.translate(str.maketrans('', '', '!@#$'))
* for compatibility with earlier Pythons, you can create a "null" translation table to pass in place of None:
import string
line = line.translate(string.maketrans('', ''), '!@#$')
Here string.maketrans is used to create a translation table, which is just a string containing the characters with ordinal values 0 to 255.
Am I missing the point here, or is it just the following:
string = "ab1cd1ef"
string = string.replace("1", "")
print(string)
# result: "abcdef"
Put it in a loop:
a = "a!b#c#d$"
b = "!##$"
for char in b:
a = a.replace(char, "")
print(a)
# result: "abcd"
>>> line = "abc##!?efg12;:?"
>>> ''.join( c for c in line if c not in '?:!/;' )
'abc##efg12'
With re.sub regular expression
Substitution using the regular expression function re.sub is another option:
import re
re.sub('\ |\?|\.|\!|\/|\;|\:', '', line)
Example
import re
line = 'Q: Do I write ;/.??? No!!!'
re.sub('\ |\?|\.|\!|\/|\;|\:', '', line)
'QDoIwriteNo'
Explanation
In regular expressions (regex), | is a logical OR and \ escapes spaces and special characters that might be actual regex commands. Whereas sub stands for substitution, in this case with the empty string ''.
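Note that the same substitution can be written more readably with a character class instead of escaped alternation (my rewording of the pattern, not part of the original answer):
import re
line = 'Q: Do I write ;/.??? No!!!'
re.sub(r'[ ?.!/;:]', '', line)  # 'QDoIwriteNo'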
The asker almost had it. Like most things in Python, the answer is simpler than you think.
>>> line = "H E?.LL!/;O:: "
>>> for char in ' ?.!/;:':
...     line = line.replace(char,'')
...
>>> print line
HELLO
You don't have to do the nested if/for loop thing, but you DO need to check each character individually.
For the inverse requirement of only allowing certain characters in a string, you can use regular expressions with a set complement operator [^ABCabc]. For example, to remove everything except ascii letters, digits, and the hyphen:
>>> import string
>>> import re
>>>
>>> phrase = ' There were "nine" (9) chick-peas in my pocket!!! '
>>> allow = string.letters + string.digits + '-'
>>> re.sub('[^%s]' % allow, '', phrase)
'Therewerenine9chick-peasinmypocket'
From the python regular expression documentation:
Characters that are not within a range can be matched by complementing
the set. If the first character of the set is '^', all the characters
that are not in the set will be matched. For example, [^5] will match
any character except '5', and [^^] will match any character except
'^'. ^ has no special meaning if it’s not the first character in the
set.
line = line.translate(None, " ?.!/;:")
>>> s = 'a1b2c3'
>>> ''.join(c for c in s if c not in '123')
'abc'
Strings are immutable in Python. The replace method returns a new string after the replacement. Try:
for char in line:
    if char in " ?.!/;:":
        line = line.replace(char,'')
This is identical to your original code, with the addition of an assignment to line inside the loop.
Note that the string replace() method replaces all of the occurrences of the character in the string, so you can do better by using replace() for each character you want to remove, instead of looping over each character in your string.
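In other words (a quick sketch of that suggestion), loop over the characters you want to remove rather than over the line itself:
for char in " ?.!/;:":
    line = line.replace(char, '')  # one replace() pass per unwanted character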
I was surprised that no one had yet recommended using the builtin filter function.
import operator
import string # only for the example you could use a custom string
s = "1212edjaq"
Say we want to filter out everything that isn't a number. Using the filter builtin method "...is equivalent to the generator expression (item for item in iterable if function(item))" [Python 3 Builtins: Filter]
sList = list(s)
intsList = list(string.digits)
obj = filter(lambda x: operator.contains(intsList, x), sList)
In Python 3 this returns
>> <filter object @ hex>
To get a printed string,
nums = "".join(list(obj))
print(nums)
>> "1212"
I am not sure how filter ranks in terms of efficiency but it is a good thing to know how to use when doing list comprehensions and such.
UPDATE
Logically, since filter works, you could also use a list comprehension, and from what I have read it is supposed to be more efficient because lambdas are the Wall Street hedge fund managers of the programming function world. Another plus is that it is a one-liner that doesn't require any imports. For example, using the same string 's' defined above,
num = "".join([i for i in s if i.isdigit()])
That's it. The return will be a string of all the characters that are digits in the original string.
If you have a specific list of acceptable/unacceptable characters you need only adjust the 'if' part of the list comprehension.
target_chars = "".join([i for i in s if i in some_list])
or alternatively,
target_chars = "".join([i for i in s if i not in some_list])
Using filter, you'd just need one line
line = filter(lambda char: char not in " ?.!/;:", line)
This treats the string as an iterable and keeps every character for which the lambda returns True:
>>> help(filter)
Help on built-in function filter in module __builtin__:
filter(...)
filter(function or None, sequence) -> list, tuple, or string
Return those items of sequence for which function(item) is true. If
function is None, return the items that are true. If sequence is a tuple
or string, return the same type, else return a list.
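Note that the help text above is from Python 2; in Python 3, filter returns an iterator rather than a string, so (my addition) you would join the result yourself:
line = ''.join(filter(lambda char: char not in " ?.!/;:", line))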
Try this one:
def rm_char(original_str, need2rm):
    '''Remove characters in "need2rm" from "original_str".'''
    return original_str.translate(str.maketrans('', '', need2rm))
This method works well in Python 3
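For example, a quick check (my own, assuming the function above):
>>> rm_char("What?!?: darn;", " ?.!:;")
'Whatdarn'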
Here are some possible ways to achieve this task:
def attempt1(string):
    return "".join([v for v in string if v not in ("a", "e", "i", "o", "u")])

def attempt2(string):
    for v in ("a", "e", "i", "o", "u"):
        string = string.replace(v, "")
    return string

def attempt3(string):
    import re
    for v in ("a", "e", "i", "o", "u"):
        string = re.sub(v, "", string)
    return string

def attempt4(string):
    return string.replace("a", "").replace("e", "").replace("i", "").replace("o", "").replace("u", "")

for attempt in [attempt1, attempt2, attempt3, attempt4]:
    print(attempt("murcielago"))
PS: Instead of using " ?.!/;:", the examples use the vowels... and yeah, "murcielago" is the Spanish word for bat... a funny word, as it contains all the vowels :)
PS2: If you're interested in performance, you could measure these attempts with simple code like:
import timeit

K = 1000000
for i in range(1, 5):
    t = timeit.Timer(
        f"attempt{i}('murcielago')",
        setup=f"from __main__ import attempt{i}"
    ).repeat(1, K)
    print(f"attempt{i}", min(t))
In my box you'd get:
attempt1 2.2334518376057244
attempt2 1.8806643818474513
attempt3 7.214925774955572
attempt4 1.7271184513757465
So it seems attempt4 is the fastest one for this particular input.
Here's my Python 2/3 compatible version, since the translate API has changed.
def remove(str_, chars):
    """Removes each char in `chars` from `str_`.

    Args:
        str_: String to remove characters from
        chars: String of to-be removed characters

    Returns:
        A copy of str_ with `chars` removed

    Example:
        remove("What?!?: darn;", " ?.!:;") => 'Whatdarn'
    """
    try:
        # Python 2.x
        return str_.translate(None, chars)
    except TypeError:
        # Python 3.x
        table = {ord(char): None for char in chars}
        return str_.translate(table)
#!/usr/bin/python
import re
strs = "how^ much for{} the maple syrup? $20.99? That's[] ricidulous!!!"
print strs
nstr = re.sub(r'[?|$|.|!|a|b]', r' ', strs)  # I have taken these characters to remove, but any character can be added here
print nstr
nestr = re.sub(r'[^a-zA-Z0-9 ]', r'', nstr)  # for removing special characters
print nestr
You can also use a function to substitute different kinds of regular expressions or other patterns with the use of a list. With that, you can mix regular expressions, character classes, and really basic text patterns. It's really useful when you need to substitute a lot of elements, like HTML ones.
*NB: works with Python 3.x
import re  # Regular expression library

def string_cleanup(x, notwanted):
    for item in notwanted:
        x = re.sub(item, '', x)
    return x

line = "<title>My example: <strong>A text %very% $clean!!</strong></title>"
print("Uncleaned: ", line)

# Get rid of html elements
html_elements = ["<title>", "</title>", "<strong>", "</strong>"]
line = string_cleanup(line, html_elements)
print("1st clean: ", line)

# Get rid of special characters
special_chars = ["[!@#$]", "%"]
line = string_cleanup(line, special_chars)
print("2nd clean: ", line)
The function string_cleanup takes your string x and your list notwanted as arguments. For each item in that list of elements or patterns, a substitution is performed if needed.
The output:
Uncleaned: <title>My example: <strong>A text %very% $clean!!</strong></title>
1st clean: My example: A text %very% $clean!!
2nd clean: My example: A text very clean
The method I'd use probably wouldn't work as efficiently, but it is massively simple. I can remove multiple characters at different positions all at once, using slicing and formatting.
Here's an example:
words = "things"
removed = "%s%s" % (words[:3], words[-1:])
This will result in 'removed' holding the word 'this'.
Formatting can be very helpful for printing variables midway through a print string. It can insert any data type using a % followed by the variable's data type; all data types can use %s, and floats (aka decimals) and integers can use %d.
Slicing can be used for intricate control over strings. When I put words[:3], it selects all the characters in the string from the beginning (the colon before the number means 'from the beginning') up to, but not including, the character at index 3, i.e. the 4th character, since Python starts counting at 0. Then, when I put words[-1:], it means from the last character to the end (the colon is after the number). Putting -1 makes Python count from the end of the string rather than the beginning. So, words[-1:] basically means 'from the last character to the end of the string'.
So, by cutting off the characters before the character I want to remove and the characters after and sandwiching them together, I can remove the unwanted character. Think of it like a sausage. In the middle it's dirty, so I want to get rid of it. I simply cut off the two ends I want then put them together without the unwanted part in the middle.
If I want to remove multiple consecutive characters, I simply shift the numbers around in the [] (slicing part). Or if I want to remove multiple characters from different positions, I can simply sandwich together multiple slices at once.
Examples:
words = "control"
removed = "%s%s" % (words[:2], words[-2:])
removed equals 'cool'.
words = "impacts"
removed = "%s%s%s" % (words[1], words[3:5], words[-1])
removed equals 'macs'.
In this case, [3:5] means character at position 3 through character at position 5 (excluding the character at the final position).
Remember, Python starts counting at 0, so you will need to as well.
In Python 3.5, for example, to remove all the digits from a file name:
import os
os.rename(file_name, file_name.translate({ord(c): None for c in '0123456789'}))
How about this:
def text_cleanup(text):
    new = ""
    for i in text:
        if i not in " ?.!/;:":
            new += i
    return new
Below is one without using the regular expression concept:
ipstring = "text with symbols!@#$^&*( ends here"
opstring = ''
for i in ipstring:
    if i.isalnum() == 1 or i == ' ':
        opstring += i
print opstring
Recursive split:
s=string ; chars=chars to remove
def strip(s, chars):
    if len(s) == 1:
        return "" if s in chars else s
    return strip(s[0:int(len(s)/2)], chars) + strip(s[int(len(s)/2):len(s)], chars)
example:
print(strip("Hello!","lo")) #He!
You could use the re module's regular expression replacement. Using the ^ expression allows you to pick exactly what you want from your string.
import re
text = "This is absurd!"
text = re.sub("[^a-zA-Z]","",text) # Keeps only Alphabets
print(text)
The output of this would be "Thisisabsurd". Only the characters listed after the ^ symbol (i.e., the ones not removed) will remain.
# for each file in a directory, strip the digits from its filename
import os
import re

file_list = os.listdir(r"D:\Dev\Python")
for file_name in file_list:
    os.rename(file_name, re.sub(r'\d+', '', file_name))
Even the below approach works
line = "a,b,c,d,e"
alpha = list(line)
while ',' in alpha:
    alpha.remove(',')
finalString = ''.join(alpha)
print(finalString)
output: abcde
The string method replace does not modify the original string. It leaves the original alone and returns a modified copy.
What you want is something like: line = line.replace(char,'')
def replace_all(line):
    for char in line:
        if char in " ?.!/;:":
            line = line.replace(char, '')
    return line
However, creating a new string each and every time that a character is removed is very inefficient. I recommend the following instead:
def replace_all(line, baddies):
    """
    The following is documentation on how to use the class,
    without reference to the implementation details:

    For implementation notes, please see comments beginning with `#`
    in the source file.

    [*crickets chirp*]
    """
    is_bad = lambda ch, baddies=baddies: ch in baddies
    filter_baddies = lambda ch, *, is_bad=is_bad: "" if is_bad(ch) else ch
    mahp = replace_all.map(filter_baddies, line)
    return replace_all.join('', mahp)
    # -------------------------------------------------
    # WHY `baddies=baddies`?!?
    #     `is_bad=is_bad`
    # -------------------------------------------------
    # Default arguments to a lambda function are evaluated
    # at the same time as when a lambda function is
    # **defined**.
    #
    # Global variables of a lambda function
    # are evaluated when the lambda function is
    # **called**.
    #
    # The following prints "as yellow as snow"
    #
    #     fleece_color = "white"
    #     little_lamb = lambda end: "as " + fleece_color + end
    #
    #     # sometime later...
    #
    #     fleece_color = "yellow"
    #     print(little_lamb(" as snow"))
    # --------------------------------------------------

replace_all.map = map
replace_all.join = str.join
If you want to keep only allowed characters in your string by checking ASCII codes, you can use this piece of code:
for char in s:
    if ord(char) < 97 or ord(char) > 122:
        s = s.replace(char, "")
It will remove all the characters outside a...z, including upper-case letters.

split string on any special character using python

Currently I can have many dynamic separators in a string, like
new_123_12313131
new$123$12313131
new#123#12313131
etc. I just want to check if there is a special character in the string and then get the value after the last separator; in this example I just want 12313131.
This is a good use case for isdigit():
l = [
    'new_123_12313131',
    'new$123$12313131',
    'new#123#12313131',
]

output = []
for s in l:
    temp = ''
    for char in s:
        if char.isdigit():
            temp += char
    output.append(temp)

print(output)
Result: ['12312313131', '12312313131', '12312313131']
Assuming you define 'special character' as anything that's not alphanumeric, you can use the str.isalnum() function to find the first special character (searching from the end) and leverage it, something like this:
def split_non_special(input) -> str:
    """
    Find first special character starting from the end and get the last piece
    """
    for i in reversed(input):
        if not i.isalnum():
            return input.split(i)[-1]  # return as soon as a separator is found
    return ''  # no separator found
# inputs = ['new_123_12313131', 'new$123$12313131', 'new#123#12313131', 'eefwfwrfwfwf3243']
# outputs = [split_non_special(input) for input in inputs]
# ['12313131', '12313131', '12313131', ''] # outputs
just get value after last separator
the more obvious way is using re.findall:
from re import findall
findall(r'\d+$',text) # ['12313131']
Python supplies what seem to be the characters you consider "special" via the string library as string.punctuation, which is these characters:
!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~
Using that in conjunction with the re module you can do this:
from string import punctuation
import re
re.split(f"[{punctuation}]", my_string)
my_string being the string you want to split.
Results for your examples
['new', '123', '12313131']
To get just the digits you can use:
re.findall(r"\d+", my_string)
Results:
['123', '12313131']

Pattern matching Strings

I have an array of strings:
Input:
["series-a-funding", "series-b-financials","series-c-funding","series-b-funding","abc-funding","raised-acd", "fund-series-a", "fund-series-b"]
and I want to filter all the strings which have series-a, series-b, or series-c in them. My sample output will be
["series-a-funding", "series-b-financials","series-c-funding","series-b-funding","fund-series-a", "fund-series-b"]
I have tried something like this
def interesting(textInput):
    textInput = textInput.lower()
    if any([word in textInput for word in ['fund-series-%s' or 'series-%s-funding' or 'series-%s-financing' %i for i in ['a', 'b', 'c', 'd']]]):
        return True
    return False
But no luck. Can anyone help me with this? I am new to Python, so I don't have much of an idea about this.
I believe this will do:
import re

pattern = re.compile(
    '^(fund-series-[abcd]|series-[abcd]-funding|series-[abcd]-financing)$',
    re.IGNORECASE
)

def interesting(word):
    return bool(pattern.match(word))
We declare pattern, described by your rules and your presented code, and then declare a function to check whether a word matches the pattern (case-insensitively).
Or if you need a more general series-a/b/c/d pattern, then search will do:
pattern = re.compile('series-[abcd]', re.IGNORECASE)

def interesting(word):
    return bool(pattern.search(word))
This solution will filter strings matching the regex series-[abc]:
import re
regex = re.compile('series-[abc]')
output_list = list(filter(regex.search, input_list))
The expression filter(regex.search, input_list) applies the function regex.search to each element in the list and returns only those elements for which the result is True when converted to bool. We make use of the fact that the regex.search function returns a SRE_Match object in the case of match, which evaluates to True when converted to bool, or a None in the case of mismatch, which evaluates to False.
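Run against the list from the question, this gives the expected output (my quick check):
>>> input_list = ["series-a-funding", "series-b-financials", "series-c-funding",
...               "series-b-funding", "abc-funding", "raised-acd",
...               "fund-series-a", "fund-series-b"]
>>> list(filter(regex.search, input_list))
['series-a-funding', 'series-b-financials', 'series-c-funding', 'series-b-funding', 'fund-series-a', 'fund-series-b']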
import re
s = ["series-a-funding", "series-b-financials","series-c-funding","series-b-funding","abc-funding","raised-acd", "fund-series-a", "fund-series-b"]
[i for i in s if bool(re.match(r"series-['a','b','c']+", i))]
gives
['series-a-funding', 'series-b-financials', 'series-c-funding', 'series-b-funding']
What you want is a list which contains the strings with the necessary keywords, not whether they are true or false. So instead of using return True or return False you should try returning a list.
Another problem in your code is using methods incorrectly. The lower() method is applicable to strings only, not to lists. Therefore you should have used textInput = [text.lower() for text in textInput].
Also, you would want to return the original string, not its lower-cased clone, so converting the original list to lowercase at the beginning of the function is inadvisable. Instead you could lower-case each string when comparing rather than at the beginning.
Here is a simple program I have written:
textList = ["series-a-funding", "series-b-financials", "series-c-funding", "series-b-funding", "abc-funding", "raised-acd", "fund-series-a", "fund-series-b"]

def printStrings(strings):
    return [string for string in strings if 'series-a' in string.lower() or 'series-b' in string.lower() or 'series-c' in string.lower()]

print printStrings(textList)

How to convert string to Title Case in Python?

Example:
HILO -> Hilo
new york -> New York
SAN FRANCISCO -> San Francisco
Is there a library or standard way to perform this task?
Why not use title? Right from the docs:
>>> "they're bill's friends from the UK".title()
"They'Re Bill'S Friends From The Uk"
If you really wanted PascalCase you can use this:
>>> ''.join(x for x in 'make IT pascal CaSe'.title() if not x.isspace())
'MakeItPascalCase'
This one would always start with lowercase, and also strip non alphanumeric characters:
def camelCase(st):
    output = ''.join(x for x in st.title() if x.isalnum())
    return output[0].lower() + output[1:]
import re

def capitalizeWords(s):
    return re.sub(r'\w+', lambda m: m.group(0).capitalize(), s)
re.sub can take a function for the "replacement" (rather than just a string, which is the usage most people seem to be familiar with). This repl function will be called with an re.Match object for each match of the pattern, and the result (which should be a string) will be used as a replacement for that match.
A longer version of the same thing:
WORD_RE = re.compile(r'\w+')

def capitalizeMatch(m):
    return m.group(0).capitalize()

def capitalizeWords(s):
    return WORD_RE.sub(capitalizeMatch, s)
This pre-compiles the pattern (generally considered good form) and uses a named function instead of a lambda.
Potential library: https://pypi.org/project/stringcase/
Example:
import stringcase
stringcase.camelcase('foo_bar_baz') # => "fooBarBaz"
Though it's questionable whether it will leave spaces in. (Examples show it removing space, but there is a bug tracker issue noting that it leaves them in.)
Why not write one? Something like this may satisfy your requirements:
def FixCase(st):
    return ' '.join(''.join([w[0].upper(), w[1:].lower()]) for w in st.split())
Note: Why am I providing yet another answer? This answer is based on the title of the question and the notion that camelcase is defined as: a series of words that have been concatenated (no spaces!) such that each of the original words starts with a capital letter (the rest being lowercase), excepting the first word of the series (which is completely lowercase). Also, it is assumed that "all strings" refers to the ASCII character set; unicode would not work with this solution.
simple
Given the above definition, this function
import re

word_regex_pattern = re.compile("[^A-Za-z]+")

def camel(chars):
    words = word_regex_pattern.split(chars)
    return "".join(w.lower() if i == 0 else w.title() for i, w in enumerate(words))
When called, it would behave in this manner:
camel("San Francisco") # sanFrancisco
camel("SAN-FRANCISCO") # sanFrancisco
camel("san_francisco") # sanFrancisco
less simple
Note that it fails when presented with an already camel cased string!
camel("sanFrancisco") # sanfrancisco <-- noted limitation
even less simple
Note that it fails with many unicode strings
camel("México City") # mXicoCity <-- can't handle unicode
I don't have a solution for these cases(or other ones that could be introduced with some creativity). So, as in all things that have to do with strings, cover your own edge cases and good luck with unicode!
Just use .title(), and it will convert the first letter of every word to a capital and the rest to lowercase:
>>> a='mohs shahid ss'
>>> a.title()
'Mohs Shahid Ss'
>>> a='TRUE'
>>> b=a.title()
>>> b
'True'
>>> eval(b)
True
def camelCase(st):
    s = st.title()
    d = "".join(s.split())
    # lowercase only the first character (replace() would hit every occurrence of it)
    return d[0].lower() + d[1:]
I would like to add my little contribution to this post:
def to_camelcase(str):
    return ' '.join([t.title() for t in str.split()])
From code wars - Write simple .camelCase method in Python for strings. All words must have their first letter capitalized without spaces.
camelcase("hello case") => HelloCase
camelcase("camel case word") => CamelCaseWord
def camel_case(string):
    titled_string = string.title()
    space_joined_string = titled_string.replace(' ', '')
    return space_joined_string

Efficient way to search for invalid characters in python

I am building a forum application in Django and I want to make sure that users don't enter certain characters in their forum posts. I need an efficient way to scan their whole post to check for the invalid characters. What I have so far is the following, although it does not work correctly and I do not think the idea is very efficient.
def clean_topic_message(self):
    topic_message = self.cleaned_data['topic_message']
    words = topic_message.split()
    if (topic_message == ""):
        raise forms.ValidationError(_(u'Please provide a message for your topic'))
    ***for word in words:
        if (re.match(r'[^<>/\{}[]~`]$', topic_message)):
            raise forms.ValidationError(_(u'Topic message cannot contain the following: <>/\{}[]~`'))***
    return topic_message
Thanks for any help.
For a regex solution, there are two ways to go here:
Find one invalid char anywhere in the string.
Validate every char in the string.
Here is a script that implements both:
import re

topic_message = 'This topic is a-ok'

# Option 1: Invalidate one char in string.
re1 = re.compile(r"[<>/{}[\]~`]")
if re1.search(topic_message):
    print("RE1: Invalid char detected.")
else:
    print("RE1: No invalid char detected.")

# Option 2: Validate all chars in string.
re2 = re.compile(r"^[^<>/{}[\]~`]*$")
if re2.match(topic_message):
    print("RE2: All chars are valid.")
else:
    print("RE2: Not all chars are valid.")
Take your pick.
Note: the original regex erroneously has a right square bracket in the character class which needs to be escaped.
Benchmarks: After seeing gnibbler's interesting solution using set(), I was curious to find out which of these methods would actually be fastest, so I decided to measure them. Here are the benchmark data and statements measured and the timeit result values:
Test data:
r"""
TEST topic_message STRINGS:
ok: 'This topic is A-ok. This topic is A-ok.'
bad: 'This topic is <not>-ok. This topic is {not}-ok.'
MEASURED PYTHON STATEMENTS:
Method 1: 're1.search(topic_message)'
Method 2: 're2.match(topic_message)'
Method 3: 'set(invalid_chars).intersection(topic_message)'
"""
Results:
r"""
Seconds to perform 1000000 Ok-match/Bad-no-match loops:
Method Ok-time Bad-time
1 1.054 1.190
2 1.830 1.636
3 4.364 4.577
"""
The benchmark tests show that Option 1 is slightly faster than option 2 and both are much faster than the set().intersection() method. This is true for strings which both match and don't match.
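For reference, a harness along these lines would reproduce that kind of measurement (a sketch, not the exact script used for the numbers above):
import re
import timeit

re1 = re.compile(r"[<>/{}[\]~`]")
re2 = re.compile(r"^[^<>/{}[\]~`]*$")
invalid_chars = '<>/{}[]~`'
ok = 'This topic is A-ok. This topic is A-ok.'

print(timeit.timeit(lambda: re1.search(ok), number=1000000))                       # Method 1
print(timeit.timeit(lambda: re2.match(ok), number=1000000))                        # Method 2
print(timeit.timeit(lambda: set(invalid_chars).intersection(ok), number=1000000))  # Method 3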
You have to be much more careful when using regular expressions - they are full of traps.
In the case of [^<>/\{}[]~`] the first ] closes the group, which is probably not what you intended. If you want to use ] in a group it has to be the first character after the [, e.g. []^<>/\{}[~]
simple test confirms this
>>> import re
>>> re.search("[[]]","]")
>>> re.search("[][]","]")
<_sre.SRE_Match object at 0xb7883db0>
regex is overkill for this problem anyway
def clean_topic_message(self):
    topic_message = self.cleaned_data['topic_message']
    invalid_chars = '^<>/\{}[]~`$'
    if (topic_message == ""):
        raise forms.ValidationError(_(u'Please provide a message for your topic'))
    if set(invalid_chars).intersection(topic_message):
        raise forms.ValidationError(_(u'Topic message cannot contain the following: %s' % invalid_chars))
    return topic_message
If efficiency is a major concern I would re.compile() the re string, since you're going to use the same regex many times.
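For instance (a small sketch of that suggestion), compile the pattern once at module level and reuse it on every call:
import re

# Compiled once at import time, reused for every message checked.
INVALID_CHARS_RE = re.compile(r"[<>/{}\[\]~`]")

def has_invalid_chars(topic_message):
    return INVALID_CHARS_RE.search(topic_message) is not None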
re.match and re.search behave differently. Splitting words is not required to search using regular expressions.
import re

symbols_re = re.compile(r"[^<>/\{}[]~`]")
if symbols_re.search(self.cleaned_data['topic_message']):
    pass  # raise the ValidationError here
I can't say what would be more efficient, but you certainly should get rid of the $ (unless it's an invalid character for the message)... right now you only match the re if the characters are at the end of topic_message because $ anchors the match to the right-hand side of the line.
In any case you need to scan the entire message. So wouldn't something simple like this work ?
def checkMessage(topic_message):
    for char in topic_message:
        if char in "<>/\{}[]~`":
            return False
    return True
is_valid = not any(k in text for k in '<>/{}[]~`')
I agree with gnibbler, regex is overkill for this situation. Probably after removing these unwanted chars you'll want to remove unwanted words too; here's a basic way to do it:
def remove_bad_words(title):
    '''Helper to remove bad words from a sentence based on a dictionary of words.'''
    word_list = title.split(' ')
    for word in word_list:
        if word in BAD_WORDS:  # BAD_WORDS is a list of unwanted words
            word_list.remove(word)
    # let's build the string again
    title2 = u''
    for word in word_list:
        title2 = ('%s %s') % (title2, word)
        # title2 = title2 + u' ' + word
    return title2
Example: just tailor to your needs.
### valid chars: 0-9, a-z, A-Z only
import re
REGEX_FOR_INVALID_CHARS = re.compile(r'[^0-9a-zA-Z]+')
list_of_invalid_chars_found = REGEX_FOR_INVALID_CHARS.findall(topic_message)
