I'm trying to remove specific characters from a string using Python. This is the code I'm using right now. Unfortunately it appears to do nothing to the string.
for char in line:
    if char in " ?.!/;:":
        line.replace(char,'')
How do I do this properly?
Strings in Python are immutable (can't be changed). Because of this, the effect of line.replace(...) is just to create a new string, rather than changing the old one. You need to rebind (assign) it to line in order to have that variable take the new value, with those characters removed.
Also, the way you are doing it is going to be kind of slow, relatively. It's also likely to be a bit confusing to experienced pythonators, who will see a doubly-nested structure and think for a moment that something more complicated is going on.
In Python 2.6 and newer Python 2.x versions*, you can instead use str.translate (see the Python 3 answer below):
line = line.translate(None, '!@#$')
or regular expression replacement with re.sub
import re
line = re.sub('[!@#$]', '', line)
The characters enclosed in brackets constitute a character class. Any characters in line which are in that class are replaced with the second parameter to sub: an empty string.
Python 3 answer
In Python 3, strings are Unicode. You'll have to translate a little differently. kevpie mentions this in a comment on one of the answers, and it's noted in the documentation for str.translate.
When calling the translate method of a Unicode string, you cannot pass the second parameter that we used above. You also can't pass None as the first parameter. Instead, you pass a translation table (usually a dictionary) as the only parameter. This table maps the ordinal values of characters (i.e. the result of calling ord on them) to the ordinal values of the characters which should replace them, or—usefully to us—None to indicate that they should be deleted.
So to do the above dance with a Unicode string you would call something like
translation_table = dict.fromkeys(map(ord, '!@#$'), None)
unicode_line = unicode_line.translate(translation_table)
Here dict.fromkeys and map are used to succinctly generate a dictionary containing
{ord('!'): None, ord('@'): None, ...}
Even simpler, as another answer puts it, create the translation table in place:
unicode_line = unicode_line.translate({ord(c): None for c in '!@#$'})
Or, as brought up by Joseph Lee, create the same translation table with str.maketrans:
unicode_line = unicode_line.translate(str.maketrans('', '', '!@#$'))
* for compatibility with earlier Pythons, you can create a "null" translation table to pass in place of None:
import string
line = line.translate(string.maketrans('', ''), '!@#$')
Here string.maketrans is used to create a translation table, which is just a string containing the characters with ordinal values 0 to 255.
Am I missing the point here, or is it just the following:
string = "ab1cd1ef"
string = string.replace("1", "")
print(string)
# result: "abcdef"
Put it in a loop:
a = "a!b#c#d$"
b = "!##$"
for char in b:
a = a.replace(char, "")
print(a)
# result: "abcd"
>>> line = "abc##!?efg12;:?"
>>> ''.join( c for c in line if c not in '?:!/;' )
'abc##efg12'
With re.sub regular expression
Substitution using regular expressions with re.sub is another option:
import re
re.sub(r'\ |\?|\.|\!|\/|\;|\:', '', line)
Example
import re
line = 'Q: Do I write ;/.??? No!!!'
re.sub(r'\ |\?|\.|\!|\/|\;|\:', '', line)
'QDoIwriteNo'
Explanation
In regular expressions (regex), | is a logical OR and \ escapes spaces and special characters that might be actual regex commands. Whereas sub stands for substitution, in this case with the empty string ''.
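For what it's worth, the same set of characters can be written as a single character class, where no escaping or alternation is needed; a minimal sketch:

import re

line = 'Q: Do I write ;/.??? No!!!'
# inside [...] the characters ? . ! / ; : are literal
print(re.sub(r'[ ?.!/;:]', '', line))  # QDoIwriteNo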
The asker almost had it. Like most things in Python, the answer is simpler than you think.
>>> line = "H E?.LL!/;O:: "
>>> for char in ' ?.!/;:':
...     line = line.replace(char,'')
...
>>> print line
HELLO
You don't have to do the nested if/for loop thing, but you DO need to check each character individually.
For the inverse requirement of only allowing certain characters in a string, you can use regular expressions with a set complement operator [^ABCabc]. For example, to remove everything except ascii letters, digits, and the hyphen:
>>> import string
>>> import re
>>>
>>> phrase = ' There were "nine" (9) chick-peas in my pocket!!! '
>>> allow = string.ascii_letters + string.digits + '-'
>>> re.sub('[^%s]' % allow, '', phrase)
'Therewerenine9chick-peasinmypocket'
From the python regular expression documentation:
Characters that are not within a range can be matched by complementing
the set. If the first character of the set is '^', all the characters
that are not in the set will be matched. For example, [^5] will match
any character except '5', and [^^] will match any character except
'^'. ^ has no special meaning if it’s not the first character in the
set.
line = line.translate(None, " ?.!/;:")
>>> s = 'a1b2c3'
>>> ''.join(c for c in s if c not in '123')
'abc'
Strings are immutable in Python. The replace method returns a new string after the replacement. Try:
for char in line:
    if char in " ?.!/;:":
        line = line.replace(char,'')
This is identical to your original code, with the addition of an assignment to line inside the loop.
Note that the string replace() method replaces all of the occurrences of the character in the string, so you can do better by using replace() for each character you want to remove, instead of looping over each character in your string.
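Building on that note, a minimal sketch that loops over the characters to remove instead of over the line:

line = "H E?.LL!/;O:: "
# loop over the (short) set of unwanted characters, not the whole line;
# each replace() call removes every occurrence at once
for char in " ?.!/;:":
    line = line.replace(char, '')
print(line)  # HELLO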
I was surprised that no one had yet recommended using the builtin filter function.
import operator
import string # only for the example you could use a custom string
s = "1212edjaq"
Say we want to filter out everything that isn't a number. Using the filter builtin method "...is equivalent to the generator expression (item for item in iterable if function(item))" [Python 3 Builtins: Filter]
sList = list(s)
intsList = list(string.digits)
obj = filter(lambda x: operator.contains(intsList, x), sList)
In Python 3 this returns
>> <filter object at 0x...>
To get a printed string,
nums = "".join(list(obj))
print(nums)
>> "1212"
I am not sure how filter ranks in terms of efficiency but it is a good thing to know how to use when doing list comprehensions and such.
UPDATE
Logically, since filter works, you could also use a list comprehension, and from what I have read it is supposed to be more efficient because lambdas are the wall street hedge fund managers of the programming function world. Another plus is that it is a one-liner that doesn't require any imports. For example, using the same string 's' defined above,
num = "".join([i for i in s if i.isdigit()])
That's it. The return will be a string of all the characters that are digits in the original string.
If you have a specific list of acceptable/unacceptable characters you need only adjust the 'if' part of the list comprehension.
target_chars = "".join([i for i in s if i in some_list])
or alternatively,
target_chars = "".join([i for i in s if i not in some_list])
Using filter, you'd just need one line
line = filter(lambda char: char not in " ?.!/;:", line)
This treats the string as an iterable and keeps each character for which the lambda returns True:
>>> help(filter)
Help on built-in function filter in module __builtin__:
filter(...)
filter(function or None, sequence) -> list, tuple, or string
Return those items of sequence for which function(item) is true. If
function is None, return the items that are true. If sequence is a tuple
or string, return the same type, else return a list.
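One caveat: in Python 3, filter returns an iterator rather than a string, so you'd join the result back into a string yourself; a small sketch:

line = "H E?.LL!/;O:: "
# Python 3: filter() yields characters lazily, so rebuild the string
line = ''.join(filter(lambda char: char not in " ?.!/;:", line))
print(line)  # HELLO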
Try this one:
def rm_char(original_str, need2rm):
    '''Remove characters in "need2rm" from "original_str"'''
    return original_str.translate(str.maketrans('', '', need2rm))
This method works well in Python 3
Here's some possible ways to achieve this task:
def attempt1(string):
    return "".join([v for v in string if v not in ("a", "e", "i", "o", "u")])

def attempt2(string):
    for v in ("a", "e", "i", "o", "u"):
        string = string.replace(v, "")
    return string

def attempt3(string):
    import re
    for v in ("a", "e", "i", "o", "u"):
        string = re.sub(v, "", string)
    return string

def attempt4(string):
    return string.replace("a", "").replace("e", "").replace("i", "").replace("o", "").replace("u", "")

for attempt in [attempt1, attempt2, attempt3, attempt4]:
    print(attempt("murcielago"))
PS: Instead of using " ?.!/;:" the examples use the vowels... and yeah, "murcielago" is the Spanish word for bat... funny word as it contains all the vowels :)
PS2: If you're interested in performance you could measure these attempts with simple code like:
import timeit

K = 1000000
for i in range(1, 5):
    t = timeit.Timer(
        f"attempt{i}('murcielago')",
        setup=f"from __main__ import attempt{i}"
    ).repeat(1, K)
    print(f"attempt{i}", min(t))
On my box you'd get:
attempt1 2.2334518376057244
attempt2 1.8806643818474513
attempt3 7.214925774955572
attempt4 1.7271184513757465
So it seems attempt4 is the fastest one for this particular input.
Here's my Python 2/3 compatible version, since the translate API changed between the two:
def remove(str_, chars):
    """Removes each char in `chars` from `str_`.

    Args:
        str_: String to remove characters from
        chars: String of to-be removed characters

    Returns:
        A copy of str_ with `chars` removed

    Example:
        remove("What?!?: darn;", " ?.!:;") => 'Whatdarn'
    """
    try:
        # Python 2.x
        return str_.translate(None, chars)
    except TypeError:
        # Python 3.x
        table = {ord(char): None for char in chars}
        return str_.translate(table)
#!/usr/bin/python
import re

strs = "how^ much for{} the maple syrup? $20.99? That's[] ricidulous!!!"
print strs
nstr = re.sub(r'[?|$|.|!|a|b]', r' ', strs)  # I have taken these characters to remove, but any character can be added here
print nstr
nestr = re.sub(r'[^a-zA-Z0-9 ]', r'', nstr)  # for removing the remaining special characters
print nestr
You can also use a function to substitute different kinds of regular expressions or other patterns from a list. With that, you can mix regular expressions, character classes, and really basic text patterns. It's really useful when you need to substitute a lot of elements, like HTML ones.
*NB: works with Python 3.x
import re # Regular expression library
def string_cleanup(x, notwanted):
    for item in notwanted:
        x = re.sub(item, '', x)
    return x
line = "<title>My example: <strong>A text %very% $clean!!</strong></title>"
print("Uncleaned: ", line)
# Get rid of html elements
html_elements = ["<title>", "</title>", "<strong>", "</strong>"]
line = string_cleanup(line, html_elements)
print("1st clean: ", line)
# Get rid of special characters
special_chars = ["[!##$]", "%"]
line = string_cleanup(line, special_chars)
print("2nd clean: ", line)
The function string_cleanup takes your string x and your list notwanted as arguments. For each item or pattern in that list, a substitution will be made if needed.
The output:
Uncleaned: <title>My example: <strong>A text %very% $clean!!</strong></title>
1st clean: My example: A text %very% $clean!!
2nd clean: My example: A text very clean
The method I'd use probably wouldn't work as efficiently, but it is massively simple. I can remove multiple characters at different positions all at once, using slicing and formatting.
Here's an example:
words = "things"
removed = "%s%s" % (words[:3], words[-1:])
This will result in 'removed' holding the word 'this'.
Formatting can be very helpful for printing variables midway through a print string. It can insert any data type using a % followed by a type code; all data types can use %s, while floats (aka decimals) and integers can use %f and %d.
Slicing can be used for intricate control over strings. When I put words[:3], it allows me to select all the characters in the string from the beginning (the colon is before the number, meaning 'from the beginning to') up to, but not including, the character at index 3. That is the first three characters, because Python starts counting at 0. Then, when I put words[-1:], it means the last character to the end (the colon is after the number). Putting -1 makes Python count from the last character rather than the first. So, words[-1:] basically means 'from the last character to the end of the string'.
So, by cutting off the characters before the character I want to remove and the characters after and sandwiching them together, I can remove the unwanted character. Think of it like a sausage. In the middle it's dirty, so I want to get rid of it. I simply cut off the two ends I want then put them together without the unwanted part in the middle.
If I want to remove multiple consecutive characters, I simply shift the numbers around in the [] (slicing part). Or if I want to remove multiple characters from different positions, I can simply sandwich together multiple slices at once.
Examples:
words = "control"
removed = "%s%s" % (words[:2], words[-2:])
removed equals 'cool'.
words = "impacts"
removed = "%s%s%s" % (words[1], words[3:5], words[-1])
removed equals 'macs'.
In this case, [3:5] means character at position 3 through character at position 5 (excluding the character at the final position).
Remember, Python starts counting at 0, so you will need to as well.
In Python 3.5, e.g., to remove all the digits from a file name:

import os

os.rename(file_name, file_name.translate({ord(c): None for c in '0123456789'}))
How about this:
def text_cleanup(text):
    new = ""
    for i in text:
        if i not in " ?.!/;:":
            new += i
    return new
Below is one without using the regular expression concept:

ipstring = "text with symbols!@#$^&*( ends here"
opstring = ''
for i in ipstring:
    if i.isalnum() or i == ' ':
        opstring += i
print(opstring)
Recursive split: s = string; chars = chars to remove

def strip(s, chars):
    if len(s) <= 1:
        # a string of length 1 (or 0) is dropped if it is in chars
        return "" if s in chars else s
    mid = len(s) // 2
    return strip(s[:mid], chars) + strip(s[mid:], chars)

example:

print(strip("Hello!", "lo"))  # He!
You could use the re module's regular expression replacement. Using the ^ expression allows you to pick exactly what you want from your string.
import re
text = "This is absurd!"
text = re.sub("[^a-zA-Z]","",text) # Keeps only Alphabets
print(text)
Output to this would be "Thisisabsurd". Only things specified after the ^ symbol will appear.
# for each file in a directory, rename filename
import os
import re

file_list = os.listdir(r"D:\Dev\Python")
for file_name in file_list:
    os.rename(file_name, re.sub(r'\d+', '', file_name))
Even the below approach works
line = "a,b,c,d,e"
alpha = list(line)
while ',' in alpha:
    alpha.remove(',')
finalString = ''.join(alpha)
print(finalString)
output: abcde
The string method replace does not modify the original string. It leaves the original alone and returns a modified copy.
What you want is something like: line = line.replace(char,'')
def replace_all(line):
    for char in line:
        if char in " ?.!/;:":
            line = line.replace(char, '')
    return line
However, creating a new string each and every time that a character is removed is very inefficient. I recommend the following instead:
def replace_all(line, baddies):
    """
    The following is documentation on how to use the class,
    without reference to the implementation details:

    For implementation notes, please see comments beginning with `#`
    in the source file.

    [*crickets chirp*]
    """
    is_bad = lambda ch, baddies=baddies: ch in baddies
    filter_baddies = lambda ch, is_bad=is_bad: "" if is_bad(ch) else ch
    mahp = replace_all.map(filter_baddies, line)
    return replace_all.join('', mahp)
# -------------------------------------------------
# WHY `baddies=baddies`?!?
# `is_bad=is_bad`
# -------------------------------------------------
# Default arguments to a lambda function are evaluated
# at the same time as when a lambda function is
# **defined**.
#
# global variables of a lambda function
# are evaluated when the lambda function is
# **called**
#
# The following prints "as yellow as snow"
#
# fleece_color = "white"
# little_lamb = lambda end: "as " + fleece_color + end
#
# # sometime later...
#
# fleece_color = "yellow"
# print(little_lamb(" as snow"))
# --------------------------------------------------
replace_all.map = map
replace_all.join = str.join
If you want your string to contain only a range of allowed characters by their ASCII codes, you can use this piece of code:

for char in s:
    if ord(char) < 97 or ord(char) > 122:
        s = s.replace(char, "")

It will remove every character outside a...z, including uppercase letters.
To begin with, I am working to get a desired output like this:
*********************************************************************
hello
*********************************************************************
To achieve this I have assigned the desired output to a variable with multiline string and printing the same with format.
$ cat varibale.py
decorator = """ **********************************************************************
{}
********************************************************************** """
print(decorator.format("hello"))
Output:
**********************************************************************
hello
**********************************************************************
The issue with the above approach is the extra spaces in the third line of the output, which looks odd.
I am able to achieve this in the following way:
$ cat varibale.py
decorator = """ **********************************************************************
{}
*********************************************************************
"""
print(decorator.format("hello"))
Output:
**********************************************************************
hello
*********************************************************************
But this way my code doesn't look good, as it does not follow the surrounding indentation.
Please suggest the right way to achieve the desired output.
One way to make multi-line literal strings look good is to use a backslash to escape the newline, like this:
s = '''\
*********************************************************************
hello
*********************************************************************
'''
print(s)
output
*********************************************************************
hello
*********************************************************************
However, PEP 8 discourages backslash usage like that. It's too fragile: if there's a space between the backslash and the newline then the newline won't get escaped, and the backslash will get printed.
A more versatile approach is to use a function which calculates the amount of padding required to centre the text, and applies it via a nested formatting specifier. For example:
def banner(s, width=69):
    stars = '*' * width
    pad = (width + len(s)) // 2
    return '{0}\n{1:>{2}}\n{0}'.format(stars, s, pad)

print(banner('hello'))
print(banner('Hello, world', width=16))
output
*********************************************************************
hello
*********************************************************************
****************
Hello, world
****************
How it works
That format string is a little dense, so I guess I should try to explain it. ;) For full information on this topic please see Format String Syntax in the docs. The explanation below borrows from & paraphrases those docs.
'{0}\n{1:>{2}}\n{0}'.format(stars, s, pad)
The stuff enclosed in {} in a format string is called a "replacement field". The first item in a replacement field is the optional field name. This lets us identify which arg of .format goes with this replacement field. There are a couple of possible variations for field names, this format string uses numeric names, so it identifies the .format args by their position. That is, 0 corresponds to stars, 1 corresponds to s and 2 corresponds to pad.
If no field names are given they get automatically filled by the numbers 0, 1, 2, ... etc (unless you're using Python 2.6, where field names are mandatory). That's quite useful most of the time, so most format strings don't bother using field names.
After the field name we can give a "format specifier" or "format spec" which describes how the value is to be presented. A colon : separates the field name from the format spec. If you don't supply a format spec then you get a default one, and most of the time that's adequate. But here we do want a little more control, so we need to supply a format spec.
In a format spec the > sign forces the field to be right-aligned within the available space. After the alignment sign we can provide a number to specify the minimum field width; the field will automatically be made larger if necessary.
For example, '{0:>6}'.format('test') says to put argument 0 ('test') in a space that's at least 6 chars wide, aligned to the right. Which results in the string '  test'.
But a format spec can actually contain a whole new replacement field! This allows us to supply a variable to control the field width. So in my format string {1:>{2}} says to put arg 1 here (s), right aligned in a field with a width given by arg 2 (pad). Only one level of replacement field nesting is permitted, but it's hard to think of a situation where you'd actually want deeper nesting.
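For example, supplying the width as a separate argument:

>>> '{0:>{1}}'.format('test', 6)
'  test'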
So putting it all together: '{0}\n{1:>{2}}\n{0}' tells .format to build a string that starts with arg 0 (stars) using the default format spec, followed by a newline, followed by arg 1 (s) right aligned in a field of width pad, followed by another newline, finally followed by arg 0 (stars) again.
I hope that made enough sense. :)
In Python 3.6+, we could use an f-string:
def banner(s, width=69):
    stars = '*' * width
    pad = (width + len(s)) // 2
    return f'{stars}\n{s:>{pad}}\n{stars}'
you could proceed for example as:
print('*'*80)
print('{msg:^80s}'.format(msg = 'HELLO')) #^ centers the message
print('*'*80)
or if you want to have the text-width dynamic:
def fn(msg, w=80):
    delim = '*' * w
    fmt = '{msg:^%ds}' % w
    print(delim)
    print(fmt.format(msg=msg))
    print(delim)
fn('hello')
or slightly generalized version should you need to write to a file:
import sys

def fn(msg, w=80, F=sys.stdout):
    delim = '*' * w
    fmt = '{delim:s}\n{msg:^%ds}\n{delim:s}\n' % w
    F.write(fmt.format(delim=delim, msg=msg))

fn('hello')
Maybe :
print '*' * 80 + '\n' + ' ' * 38 + 'hello' + '\n' + '*' *80
OR
If it is Python 3:
a = lambda x,c,mess: print(c*x + ('\n' if not mess else mess))
a(80, '*', None)
a(38, ' ', 'Hello')
a(80, '*', None)
I would like to parse JSON-like strings. Their lone difference with normal JSON is the presence of contiguous commas in arrays. When there are two such commas, it implicitly means that null should be inserted in-between. Example:
JSON-like: ["foo",,,"bar",[1,,3,4]]
Javascript: ["foo",null,null,"bar",[1,null,3,4]]
Decoded (Python): ["foo", None, None, "bar", [1, None, 3, 4]]
The native json.JSONDecoder class doesn't allow me to change the behavior of the array parsing. I can only modify the parser for objects (dicts), ints, floats, strings (by giving kwargs functions to JSONDecoder(), please see the doc).
So, does it mean I have to write a JSON parser from scratch? The Python code of json is available but it's quite a mess. I would prefer to use its internals instead of duplicating its code!
Since what you're trying to parse isn't JSON per se, but rather a different language that's very much like JSON, you may need your own parser.
Fortunately, this isn't as hard as it sounds. You can use a Python parser generator like pyparsing. JSON can be fully specified with a fairly simple context-free grammar (I found one here), so you should be able to modify it to fit your needs.
Small & simple workaround to try out:

1. Convert JSON-like data to a string.
2. Replace ",," with ",null,".
3. Convert it to whatever is your representation.
4. Let JSONDecoder() do the heavy lifting.

1. & 3. can be omitted if you already deal with strings.

(And if converting to string is impractical, update your question with this info!)
You can do the comma replacement of Lattyware's/przemo_li's answers in one pass by using a lookbehind expression, i.e. "replace all commas that are preceded by just a comma":
>>> s = '["foo",,,"bar",[1,,3,4]]'
>>> re.sub(r'(?<=,)\s*,', ' null,', s)
'["foo", null, null,"bar",[1, null,3,4]]'
Note that this will work for small things where you can assume there aren't consecutive commas in string literals, for example. In general, regular expressions aren't enough to handle this problem, and Taymon's approach of using a real parser is the only fully correct solution.
It's a hackish way of doing it, but one solution is to simply do some string modification on the JSON-ish data to get it in line before parsing it.
import re
import json
not_quite_json = '["foo",,,"bar",[1,,3,4]]'
not_json = True
while not_json:
    not_quite_json, not_json = re.subn(r',\s*,', ', null, ', not_quite_json)
Which leaves us with:
'["foo", null, null, "bar",[1, null, 3,4]]'
We can then do:
json.loads(not_quite_json)
Giving us:
['foo', None, None, 'bar', [1, None, 3, 4]]
Note that it's not as simple as a replace, as the replacement also inserts commas that can need replacing. Given this, you have to loop through until no more replacements can be made. Here I have used a simple regex to do the job.
I've had a look at Taymon's recommendation, pyparsing, and I successfully hacked the example provided here to suit my needs.
It works well at simulating Javascript eval() but fails in one situation: trailing commas. There should be an optional trailing comma – see tests below – but I can't find any proper way to implement this.
from pyparsing import *
TRUE = Keyword("true").setParseAction(replaceWith(True))
FALSE = Keyword("false").setParseAction(replaceWith(False))
NULL = Keyword("null").setParseAction(replaceWith(None))
jsonString = dblQuotedString.setParseAction(removeQuotes)
jsonNumber = Combine(Optional('-') + ('0' | Word('123456789', nums)) +
                     Optional('.' + Word(nums)) +
                     Optional(Word('eE', exact=1) + Word(nums + '+-', nums)))
jsonObject = Forward()
jsonValue = Forward()
# black magic begins
commaToNull = Word(',,', exact=1).setParseAction(replaceWith(None))
jsonElements = ZeroOrMore(commaToNull) + Optional(jsonValue) + ZeroOrMore((Suppress(',') + jsonValue) | commaToNull)
# black magic ends
jsonArray = Group(Suppress('[') + Optional(jsonElements) + Suppress(']'))
jsonValue << (jsonString | jsonNumber | Group(jsonObject) | jsonArray | TRUE | FALSE | NULL)
memberDef = Group(jsonString + Suppress(':') + jsonValue)
jsonMembers = delimitedList(memberDef)
jsonObject << Dict(Suppress('{') + Optional(jsonMembers) + Suppress('}'))
jsonComment = cppStyleComment
jsonObject.ignore(jsonComment)
def convertNumbers(s, l, toks):
    n = toks[0]
    try:
        return int(n)
    except ValueError:
        return float(n)
jsonNumber.setParseAction(convertNumbers)
def test():
    tests = (
        '[1,2]',      # ok
        '[,]',        # ok
        '[,,]',       # ok
        '[ , , , ]',  # ok
        '[,1]',       # ok
        '[,,1]',      # ok
        '[1,,2]',     # ok
        '[1,]',       # failure, I got [1, None], I should have [1]
        '[1,,]',      # failure, I got [1, None, None], I should have [1, None]
    )
    for test in tests:
        results = jsonArray.parseString(test)
        print(results.asList())
For those looking for something quick and dirty to convert general JS objects (to dicts): part of the page of one real site gave me an object I wanted to tackle. There are 'new' constructs for dates, and it's all on one line with no spaces in between, so two lines suffice:
from re import sub

data = sub(r'new Date\(([^)]*)\)', r'\1', data)
data = sub(r'([,{])(\w*):', r'\1"\2":', data)
Then json.loads() worked fine. Your mileage may vary:)
I use Python 2.6 and I want to replace each instance of certain leading characters (., _ and $ in my case) in a string with another character or string. Since in my case the replacement string is the same, I came up with this:
def replaceLeadingCharacters(string, old, new=''):
    t = string.lstrip(old)
    return new * (len(string) - len(t)) + t
which seems to work fine:
>>> replaceLeadingCharacters('._.!$XXX$._', '._$', 'Y')
'YYY!$XXX$._'
Is there a better (simpler or more efficient) way to achieve the same effect in Python ?
Is there a way to achieve this effect with a string instead of characters? Something like str.replace() that stops once something different than the string-to-be-replaced comes up in the input string? Right now I've come up with this:
def replaceLeadingString(string, old, new=''):
    n = 0
    o = 0
    s = len(old)
    while string.startswith(old, o):
        n += 1
        o += s
    return new * n + string[o:]
I am hoping that there is a way to do this without an explicit loop.
EDIT:
There are quite a few answers using the re module. I have a couple of questions/issues with it:
Isn't it significantly slower than the str methods when used as a replacement for them?
Is there an easy way to properly quote/escape strings that will be used in a regular expression? For example if I wanted to use re for replaceLeadingCharacters, how would I ensure that the contents of the old variable will not mess things up in ^[old]+ ? I'd rather have a "black-box" function that does not require its users to pay attention to the list of characters that they provide.
Your replaceLeadingCharacters() seems fine as is.
Here's replaceLeadingString() implementation that uses re module (without the while loop):
#!/usr/bin/env python
import re

def lreplace(s, old, new):
    """Return a copy of string `s` with leading occurrences of
    substring `old` replaced by `new`.

    >>> lreplace('abcabcdefabc', 'abc', 'X')
    'XXdefabc'
    >>> lreplace('_abc', 'abc', 'X')
    '_abc'
    """
    return re.sub(r'^(?:%s)+' % re.escape(old),
                  lambda m: new * (m.end() // len(old)),
                  s)
Isn't it significantly slower than the str methods when used as a replacement for them?
Don't guess. Measure it for expected input.
Is there an easy way to properly quote/escape strings that will be used in a regular expression?
re.escape()
re.sub(r'^[._$]+', lambda m: 'Y' * m.end(0), '._.!$XXX$._')
But IMHO your first solution is good enough.
I have a string that I want to use as a filename, so I want to remove all characters that wouldn't be allowed in filenames, using Python.
I'd rather be strict than otherwise, so let's say I want to retain only letters, digits, and a small set of other characters like "_-.() ". What's the most elegant solution?
The filename needs to be valid on multiple operating systems (Windows, Linux and Mac OS) - it's an MP3 file in my library with the song title as the filename, and is shared and backed up between 3 machines.
You can look at the Django framework (but take their licence into account!) for how they create a "slug" from arbitrary text. A slug is URL- and filename-friendly.
The Django text utils define a function, slugify(), that's probably the gold standard for this kind of thing. Essentially, their code is the following.
import unicodedata
import re

def slugify(value, allow_unicode=False):
    """
    Taken from https://github.com/django/django/blob/master/django/utils/text.py
    Convert to ASCII if 'allow_unicode' is False. Convert spaces or repeated
    dashes to single dashes. Remove characters that aren't alphanumerics,
    underscores, or hyphens. Convert to lowercase. Also strip leading and
    trailing whitespace, dashes, and underscores.
    """
    value = str(value)
    if allow_unicode:
        value = unicodedata.normalize('NFKC', value)
    else:
        value = unicodedata.normalize('NFKD', value).encode('ascii', 'ignore').decode('ascii')
    value = re.sub(r'[^\w\s-]', '', value.lower())
    return re.sub(r'[-\s]+', '-', value).strip('-_')
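A quick check of what this produces (my example, not from the Django docs):

>>> slugify("Hello, World!")
'hello-world'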
And the older version:
def slugify(value):
    """
    Normalizes string, converts to lowercase, removes non-alpha characters,
    and converts spaces to hyphens.
    """
    import unicodedata
    value = unicodedata.normalize('NFKD', value).encode('ascii', 'ignore')
    value = unicode(re.sub(r'[^\w\s-]', '', value).strip().lower())
    value = unicode(re.sub(r'[-\s]+', '-', value))
    # ...
    return value
There's more, but I left it out, since it doesn't address slugification, but escaping.
You can use list comprehension together with the string methods.
>>> s
'foo-bar#baz?qux#127/\\9]'
>>> "".join(x for x in s if x.isalnum())
'foobarbazqux1279'
This whitelist approach (i.e., allowing only the chars present in valid_chars) will work if there aren't limits on the formatting of the files, or combinations of valid chars that are illegal (like ".."). For example, what you say would allow a filename named " . txt", which I think is not valid on Windows. As this is the most simple approach, I'd try to remove whitespace from valid_chars and prepend a known valid string in case of error; any other approach will have to know about what is allowed where in order to cope with Windows file naming limitations, and will thus be a lot more complex.
>>> import string
>>> valid_chars = "-_.() %s%s" % (string.ascii_letters, string.digits)
>>> valid_chars
'-_.() abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789'
>>> filename = "This Is a (valid) - filename%$&$ .txt"
>>> ''.join(c for c in filename if c in valid_chars)
'This Is a (valid) - filename .txt'
What is the reason to use the strings as file names? If human readability is not a factor I would go with base64 module which can produce file system safe strings. It won't be readable but you won't have to deal with collisions and it is reversible.
import base64
file_name_string = base64.urlsafe_b64encode(your_string)
Update: Changed based on Matthew's comment.
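One caveat: in Python 3 the base64 functions work on bytes, so a sketch of the round trip looks like this (the example string is mine):

import base64

name = base64.urlsafe_b64encode('song title?.mp3'.encode()).decode()
# => 'c29uZyB0aXRsZT8ubXAz'
original = base64.urlsafe_b64decode(name).decode()
# => 'song title?.mp3'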
There is a nice project on Github called python-slugify:
Install:
pip install python-slugify
Then use:
>>> from slugify import slugify
>>> txt = "This\ is/ a%#$ test ---"
>>> slugify(txt)
'this-is-a-test'
Just to further complicate things, you are not guaranteed to get a valid filename just by removing invalid characters. Since allowed characters differ on different filesystems, a conservative approach could end up turning a valid name into an invalid one. You may want to add special handling for the cases where:
The string is all invalid characters (leaving you with an empty string)
You end up with a string with a special meaning, eg "." or ".."
On Windows, certain device names are reserved. For instance, you can't create a file named "nul", "nul.txt" (or nul.anything, in fact). The reserved names are:
CON, PRN, AUX, NUL, COM1, COM2, COM3, COM4, COM5, COM6, COM7, COM8, COM9, LPT1, LPT2, LPT3, LPT4, LPT5, LPT6, LPT7, LPT8, and LPT9
You can probably work around these issues by prepending some string to the filenames that can never result in one of these cases, and stripping invalid characters.
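As a rough sketch of that workaround (the underscore prefix and the name set are illustrative, not exhaustive):

RESERVED = {'CON', 'PRN', 'AUX', 'NUL'} \
    | {f'COM{i}' for i in range(1, 10)} \
    | {f'LPT{i}' for i in range(1, 10)}

def avoid_reserved(name):
    # compare the stem case-insensitively: "nul.txt" is reserved too
    stem = name.split('.')[0].upper()
    return '_' + name if stem in RESERVED or not name else name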
Just like S.Lott answered, you can look at the Django Framework for how they convert a string to a valid filename.
The most recent and updated version is found in utils/text.py, and defines "get_valid_filename", which is as follows:
def get_valid_filename(s):
    s = str(s).strip().replace(' ', '_')
    return re.sub(r'(?u)[^-\w.]', '', s)
( See https://github.com/django/django/blob/master/django/utils/text.py )
This is the solution I ultimately used:
import string
import unicodedata

validFilenameChars = "-_.() %s%s" % (string.ascii_letters, string.digits)

def removeDisallowedFilenameChars(filename):
    cleanedFilename = unicodedata.normalize('NFKD', filename).encode('ASCII', 'ignore')
    return ''.join(c for c in cleanedFilename if c in validFilenameChars)
The unicodedata.normalize call replaces accented characters with the unaccented equivalent, which is better than simply stripping them out. After that all disallowed characters are removed.
My solution doesn't prepend a known string to avoid possible disallowed filenames, because I know they can't occur given my particular filename format. A more general solution would need to do so.
In one line:
valid_file_name = re.sub(r'[^\w_.)( -]', '', any_string)
You can also use the '_' character to make it more readable (in the case of replacing slashes, for example).
Keep in mind, there are actually no restrictions on filenames on Unix systems other than
It may not contain \0
It may not contain /
Everything else is fair game.
$ touch "
> even multiline
> haha
> ^[[31m red ^[[0m
> evil"
$ ls -la
-rw-r--r-- 0 Nov 17 23:39 ?even multiline?haha??[31m red ?[0m?evil
$ ls -lab
-rw-r--r-- 0 Nov 17 23:39 \neven\ multiline\nhaha\n\033[31m\ red\ \033[0m\nevil
$ perl -e 'for my $i ( glob(q{./*even*}) ){ print $i; } '
./
even multiline
haha
red
evil
Yes, I just stored ANSI colour codes in a file name and had them take effect.
For entertainment, put a BEL character in a directory name and watch the fun that ensues when you cd into it ;)
You could use the re.sub() method to replace anything not "filelike". But in effect, every character could be valid; so there are no prebuilt functions (I believe) to get it done.
import os
import re

s = "File!name?.txt"
f = open(os.path.join("/tmp", re.sub(r'[^-a-zA-Z0-9_.() ]+', '', s)))
Would result in a filehandle to /tmp/filename.txt.
I realise there are many answers but they mostly rely on regular expressions or external modules, so I'd like to throw in my own answer. A pure python function, no external module needed, no regular expression used. My approach is not to clean invalid chars, but to only allow valid ones.
def normalizefilename(fn):
    validchars = "-_.() "
    out = ""
    for c in fn:
        if str.isalpha(c) or str.isdigit(c) or (c in validchars):
            out += c
        else:
            out += "_"
    return out
If you like, you can add your own valid chars to the validchars variable at the beginning, such as your national letters that don't exist in the English alphabet. This is something you may or may not want: some file systems that don't run on UTF-8 might still have problems with non-ASCII chars.
This function tests a single file name for validity, so it will replace path separators with _, considering them invalid chars. If you want to allow them, it is trivial to modify the if to include the os path separator.
If you don't mind installing a package, this should be useful:
https://pypi.org/project/pathvalidate/
From https://pypi.org/project/pathvalidate/#sanitize-a-filename:
from pathvalidate import sanitize_filename
fname = "fi:l*e/p\"a?t>h|.t<xt"
print(f"{fname} -> {sanitize_filename(fname)}\n")
fname = "\0_a*b:c<d>e%f/(g)h+i_0.txt"
print(f"{fname} -> {sanitize_filename(fname)}\n")
Output
fi:l*e/p"a?t>h|.t<xt -> filepath.txt
_a*b:c<d>e%f/(g)h+i_0.txt -> _abcde%f(g)h+i_0.txt
Another issue that the other comments haven't addressed yet is the empty string, which is obviously not a valid filename. You can also end up with an empty string from stripping too many characters.
What with the Windows reserved filenames and issues with dots, the safest answer to the question “how do I normalise a valid filename from arbitrary user input?” is “don't even bother try”: if you can find any other way to avoid it (eg. using integer primary keys from a database as filenames), do that.
If you must, and you really need to allow spaces and ‘.’ for file extensions as part of the name, try something like:
import re
badchars = re.compile(r'[^A-Za-z0-9_. ]+|^\.|\.$|^ | $|^$')
badnames = re.compile(r'(aux|com[1-9]|con|lpt[1-9]|prn)(\.|$)')

def makeName(s):
    name = badchars.sub('_', s)
    if badnames.match(name):
        name = '_' + name
    return name
Even this can't be guaranteed right especially on unexpected OSs — for example RISC OS hates spaces and uses ‘.’ as a directory separator.
You have to be careful, though. It is not clearly said in your intro whether you are looking only at Latin languages. Some words can become meaningless or take on another meaning if you sanitize them with ASCII characters only.
Imagine you have "forêt poésie" (forest poetry); your sanitization might give "fort-posie" (strong + something meaningless).
It's worse if you have to deal with Chinese characters: "下北沢" might end up as "---", which is doomed to fail after a while and is not very helpful. So if you deal only with files, I would encourage you to either call them by a generic name that you control or to keep the characters as they are. For URIs, about the same.
Why not just wrap the os.open call with a try/except and let the underlying OS sort out whether the file is valid?
This seems like much less work and is valid no matter which OS you use.
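A minimal sketch of that idea (the helper name is mine, not a standard API):

import os

def try_create(path):
    '''Let the OS decide: True if the filename was accepted.'''
    try:
        fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except OSError:
        return False
    os.close(fd)
    return True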
>>> import string
>>> safechars = bytearray(('_-.()' + string.digits + string.ascii_letters).encode())
>>> allchars = bytearray(range(0x100))
>>> deletechars = bytearray(set(allchars) - set(safechars))
>>> filename = u'#ab\xa0c.$%.txt'
>>> safe_filename = filename.encode('ascii', 'ignore').translate(None, deletechars).decode()
>>> safe_filename
'abc..txt'
It doesn't handle empty strings, special filenames ('nul', 'con', etc).
I liked the python-slugify approach here, but it was stripping dots away as well, which was not desired. So I optimized it for uploading a clean filename to S3 this way:
pip install python-slugify
Example code:
import os
from slugify import slugify

s = 'Very / Unsafe / file\nname hähä \n\r .txt'

clean_basename = slugify(os.path.splitext(s)[0])
clean_extension = slugify(os.path.splitext(s)[1][1:])
if clean_extension:
    clean_filename = '{}.{}'.format(clean_basename, clean_extension)
elif clean_basename:
    clean_filename = clean_basename
else:
    clean_filename = 'none'  # only unclean characters
Output:
>>> clean_filename
'very-unsafe-file-name-haha.txt'
This is so failsafe that it works with filenames without an extension, and it even works for file names made up only of unsafe characters (the result is 'none' here).
Answer modified for Python 3.6:

import string
import unicodedata

validFilenameChars = "-_.() %s%s" % (string.ascii_letters, string.digits)

def removeDisallowedFilenameChars(filename):
    cleanedFilename = unicodedata.normalize('NFKD', filename).encode('ASCII', 'ignore')
    return ''.join(chr(c) for c in cleanedFilename if chr(c) in validFilenameChars)
Not exactly what OP was asking for but this is what I use because I need unique and reversible conversions:
# p3 code
def safePath(url):
    return ''.join(map(lambda ch: chr(ch) if ch in safePath.chars else '%%%02x' % ch,
                       url.encode('utf-8')))

safePath.chars = set(map(ord, '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz+-_ .'))
Result is "somewhat" readable, at least from a sysadmin point of view.
When confronted with the same problem I used python-slugify.
Usage was also suggested by Shoham but, as therealmarv pointed out, by default python-slugify also converts dots.
This behaviour can be overruled by including dots into the regex_pattern argument.
> import re
> from slugify import slugify
> filename = "This is a väryì' Strange File-Nömé.jpeg"
> pattern = re.compile(r'[^-a-zA-Z0-9.]+')
> slugify(filename, regex_pattern=pattern)
'this-is-a-varyi-strange-file-nome.jpeg'
Note that the regex pattern was copied from the ALLOWED_CHARS_PATTERN_WITH_UPPERCASE global variable within the slugify.py file of the python-slugify package and extended with ".".
Keep in mind that special characters like .() must be escaped with \.
If you want to preserve uppercase letters use the lowercase=False argument.
> filename = "This is a väryì' Strange File-Nömé.jpeg"
> pattern = re.compile(r'[^-a-zA-Z0-9.]+')
> slugify(filename, regex_pattern=pattern, lowercase=False)
'This-is-a-varyi-Strange-File-Nome.jpeg'
This worked using Python 3.8.4 and python-slugify 4.0.1
Yet another answer for Windows specific paths, using simple replacement and no funky modules:
import re

def check_for_illegal_char(input_str):
    # remove illegal characters for Windows file names/paths
    # (illegal filenames are a superset (41) of the illegal path names (36))
    # this is according to the Windows blacklist obtained with PowerShell
    # from: https://stackoverflow.com/questions/1976007/what-characters-are-forbidden-in-windows-and-linux-directory-names/44750843#44750843
    #
    # PS> $enc = [system.Text.Encoding]::UTF8
    # PS> $FileNameInvalidChars = [System.IO.Path]::GetInvalidFileNameChars()
    # PS> $FileNameInvalidChars | foreach { $enc.GetBytes($_) } | Out-File -FilePath InvalidFileCharCodes.txt
    illegal = '\u0022\u003c\u003e\u007c\u0000\u0001\u0002\u0003\u0004\u0005\u0006\u0007\u0008' + \
              '\u0009\u000a\u000b\u000c\u000d\u000e\u000f\u0010\u0011\u0012\u0013\u0014\u0015' + \
              '\u0016\u0017\u0018\u0019\u001a\u001b\u001c\u001d\u001e\u001f\u003a\u002a\u003f\u005c\u002f'
    output_str, _ = re.subn('[' + illegal + ']', '_', input_str)
    output_str = output_str.replace('\\', '_')  # backslash cannot be handled by regex
    output_str = output_str.replace('..', '_')  # double dots are illegal too, or at least a bad idea
    output_str = output_str[:-1] if output_str[-1] == '.' else output_str  # can't end with '.'
    if output_str != input_str:
        print(f"The name '{input_str}' had invalid characters, "
              f"name was modified to '{output_str}'")
    return output_str
When tested with check_for_illegal_char('fas\u0003\u0004good\\..asd.'), I get:
The name 'fas♥♦good\..asd.' had invalid characters, name was modified to 'fas__good__asd'
Most of these solutions don't work:

'/hello/world' -> 'helloworld'
'/helloworld/' -> 'helloworld'

This isn't what you want, generally. Say you are saving the HTML for each link: you're going to overwrite the HTML for a different webpage.
I pickle a dict such as:
{'helloworld': ({'/hello/world': 'helloworld', '/helloworld/': 'helloworld1'}, 2)}
2 represents the number that should be appended to the next filename.
I look up the filename each time from the dict. If it's not there, I create a new one, appending the max number if needed.
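A small sketch of that bookkeeping (the helper and its names are mine; the structure mirrors the pickled dict above):

def unique_name(index, raw, slug):
    # index maps slug -> ({raw_string: final_name}, next_suffix)
    names, count = index.setdefault(slug, ({}, 1))
    if raw not in names:
        if names:  # a different raw string already owns the bare slug
            names[raw] = f'{slug}{count}'
            count += 1
        else:
            names[raw] = slug
        index[slug] = (names, count)
    return names[raw]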
Still haven't found a good library to generate a valid filename. Note that in languages like German, Norwegian or French special characters in filenames are very common and totally OK. So I ended up with my own library:
# util/files.py
CHAR_MAX_LEN = 31
CHAR_REPLACE = '_'
ILLEGAL_CHARS = [
    '#',   # pound
    '%',   # percent
    '&',   # ampersand
    '{',   # left curly bracket
    '}',   # right curly bracket
    '\\',  # back slash
    '<',   # left angle bracket
    '>',   # right angle bracket
    '*',   # asterisk
    '?',   # question mark
    '/',   # forward slash
    ' ',   # blank spaces
    '$',   # dollar sign
    '!',   # exclamation point
    "'",   # single quotes
    '"',   # double quotes
    ':',   # colon
    '@',   # at sign
    '+',   # plus sign
    '`',   # backtick
    '|',   # pipe
    '=',   # equal sign
]
def generate_filename(
        name, char_replace=CHAR_REPLACE, length=CHAR_MAX_LEN,
        illegal=ILLEGAL_CHARS, replace_dot=False):
    ''' return clean filename '''
    # init
    _elem = name.split('.')
    extension = _elem[-1].strip()
    _length = length - len(extension) - 1
    label = '.'.join(_elem[:-1]).strip()[:_length]
    filename = ''
    # replace '.' ?
    if replace_dot:
        label = label.replace('.', char_replace)
    # clean
    for char in label + '.' + extension:
        if char in illegal:
            char = char_replace
        filename += char
    return filename
generate_filename('nucgae zutaäer..0.1.docx', replace_dot=False)
# nucgae_zutaäer..0.1.docx

generate_filename('nucgae zutaäer..0.1.docx', replace_dot=True)
# nucgae_zutaäer__0_1.docx
I'm sure this isn't a great answer, since it modifies the string it's looping over, but it seems to work alright:
import string
for chr in your_string:
    if chr == ' ':
        your_string = your_string.replace(' ', '_')
    elif chr not in string.ascii_letters and chr not in string.digits:
        your_string = your_string.replace(chr, '')
UPDATE
All links are broken beyond repair in this 6-year-old answer.
Also, I wouldn't do it this way anymore; I'd just base64 encode or drop unsafe chars. Python 3 example:
import re
t = re.compile("[a-zA-Z0-9.,_-]")
unsafe = "abc∂éåß®∆˚˙©¬ñ√ƒµ©∆∫ø"
safe = ''.join(ch for ch in unsafe if t.match(ch))
# => 'abc'
With base64 you can encode and decode, so you can retrieve the original filename again.
But depending on the use case you might be better off generating a random filename and storing the metadata in separate file or DB.
from random import choice
from string import ascii_lowercase, ascii_uppercase, digits
allowed_chr = ascii_lowercase + ascii_uppercase + digits
safe = ''.join([choice(allowed_chr) for _ in range(16)])
# => 'CYQ4JDKE9JfcRzAZ'
ORIGINAL LINKROTTEN ANSWER:
The bobcat project contains a python module that does just this.
It's not completely robust, see this post and this reply.
So, as noted: base64 encoding is probably a better idea if readability doesn't matter.
Docs https://svn.origo.ethz.ch/bobcat/src-doc/safefilename-module.html
Source https://svn.origo.ethz.ch/bobcat/trunk/src/bobcatlib/safefilename.py
Here, this should cover all the bases. It handles all types of issues for you, including (but not limited to) character substitution.
Works in Windows, *nix, and almost every other file system. Allows printable characters only.
import re

def txt2filename(txt, chr_set='normal'):
    """Converts txt to a valid Windows/*nix filename with printable characters only.

    args:
        txt: The str to convert.
        chr_set: 'normal', 'universal', or 'extended'.
            'universal': ' -.0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz'
            'normal': Every printable character except those disallowed on Windows/*nix.
            'extended': All 'normal' characters plus the extended character ASCII codes 128-255.
    """
    FILLER = '-'

    # Step 1: Remove excluded characters.
    if chr_set == 'universal':
        # Lookups in a set are O(1) vs O(n) for a str.
        printables = set(' -.0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz')
    else:
        if chr_set == 'normal':
            max_chr = 127
        elif chr_set == 'extended':
            max_chr = 256
        else:
            raise ValueError(f'The chr_set argument may be normal, extended or universal; not {chr_set=}')
        EXCLUDED_CHRS = set(r'<>:"/\|?*')  # Illegal characters in Windows filenames.
        EXCLUDED_CHRS.update(chr(127))     # DEL (non-printable).
        printables = set(chr(x)
                         for x in range(32, max_chr)
                         if chr(x) not in EXCLUDED_CHRS)
    result = ''.join(x if x in printables else FILLER  # Allow printable characters only.
                     for x in txt)

    # Step 2: Device names, '.', and '..' are invalid filenames in Windows.
    DEVICE_NAMES = ('CON,PRN,AUX,NUL,COM1,COM2,COM3,COM4,'
                    'COM5,COM6,COM7,COM8,COM9,LPT1,LPT2,'
                    'LPT3,LPT4,LPT5,LPT6,LPT7,LPT8,LPT9,'
                    'CONIN$,CONOUT$,..,.').split(',')
    if result in DEVICE_NAMES:
        result = f'-{result}-'

    # Step 3: Maximum length of a filename is 255 bytes in Windows and Linux (other *nix flavors may allow longer names).
    result = result[:255]

    # Step 4: Windows does not allow filenames to end with '.' or ' ' or begin with ' '.
    result = re.sub(r'^[. ]', FILLER, result)
    result = re.sub(r' $', FILLER, result)

    return result
This solution needs no external libraries. It substitutes non-printable filenames too because they are not always simple to deal with.