Turn a string into a valid filename? - python
I have a string that I want to use as a filename, so I want to remove all characters that wouldn't be allowed in filenames, using Python.
I'd rather be strict than otherwise, so let's say I want to retain only letters, digits, and a small set of other characters like "_-.() ". What's the most elegant solution?
The filename needs to be valid on multiple operating systems (Windows, Linux and Mac OS) - it's an MP3 file in my library with the song title as the filename, and is shared and backed up between 3 machines.
You can look at the Django framework (but check its license!) for how they create a "slug" from arbitrary text. A slug is URL- and filename-friendly.
The Django text utils define a function, slugify(), that's probably the gold standard for this kind of thing. Essentially, their code is the following.
import unicodedata
import re

def slugify(value, allow_unicode=False):
    """
    Taken from https://github.com/django/django/blob/master/django/utils/text.py
    Convert to ASCII if 'allow_unicode' is False. Convert spaces or repeated
    dashes to single dashes. Remove characters that aren't alphanumerics,
    underscores, or hyphens. Convert to lowercase. Also strip leading and
    trailing whitespace, dashes, and underscores.
    """
    value = str(value)
    if allow_unicode:
        value = unicodedata.normalize('NFKC', value)
    else:
        value = unicodedata.normalize('NFKD', value).encode('ascii', 'ignore').decode('ascii')
    value = re.sub(r'[^\w\s-]', '', value.lower())
    return re.sub(r'[-\s]+', '-', value).strip('-_')
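For reference, here is what that version produces; this is a standalone sketch with the function reproduced so the snippet runs on its own:

```python
import re
import unicodedata

def slugify(value, allow_unicode=False):
    # Same logic as the Django version quoted above.
    value = str(value)
    if allow_unicode:
        value = unicodedata.normalize('NFKC', value)
    else:
        value = unicodedata.normalize('NFKD', value).encode('ascii', 'ignore').decode('ascii')
    value = re.sub(r'[^\w\s-]', '', value.lower())
    return re.sub(r'[-\s]+', '-', value).strip('-_')

print(slugify("Hello, World!"))   # hello-world
print(slugify("Crème Brûlée"))    # creme-brulee
```

Note that '.' is not in the allowed set, so a file extension gets mangled; for filenames you would slugify the stem and extension separately.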
And the older version:
def slugify(value):
    """
    Normalizes string, converts to lowercase, removes non-alpha characters,
    and converts spaces to hyphens.  (Python 2; note the unicode() calls.)
    """
    import unicodedata
    import re
    value = unicodedata.normalize('NFKD', value).encode('ascii', 'ignore')
    value = unicode(re.sub(r'[^\w\s-]', '', value).strip().lower())
    value = unicode(re.sub(r'[-\s]+', '-', value))
    # ...
    return value
There's more in that file, but I left it out, since it deals with escaping rather than slugification.
You can use a generator expression together with the string methods.
>>> s
'foo-bar#baz?qux#127/\\9]'
>>> "".join(x for x in s if x.isalnum())
'foobarbazqux1279'
This whitelist approach (i.e., allowing only the chars present in valid_chars) will work as long as there are no limits on the formatting of the files or on combinations of valid chars that are illegal (like ".."). For example, it would allow a filename named " . txt", which I think is not valid on Windows. As this is the simplest approach, I'd remove whitespace from valid_chars and prepend a known valid string in case of error; any other approach would have to know what is allowed where in order to cope with Windows file-naming limitations, and would thus be a lot more complex.
>>> import string
>>> valid_chars = "-_.() %s%s" % (string.ascii_letters, string.digits)
>>> valid_chars
'-_.() abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789'
>>> filename = "This Is a (valid) - filename%$&$ .txt"
>>> ''.join(c for c in filename if c in valid_chars)
'This Is a (valid) - filename .txt'
What is the reason to use the strings as file names? If human readability is not a factor, I would go with the base64 module, which can produce file-system-safe strings. It won't be readable, but you won't have to deal with collisions, and it is reversible.

import base64

file_name_string = base64.urlsafe_b64encode(your_string.encode()).decode()  # Python 3: bytes in, str out

Update: Changed based on Matthew's comment.
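In Python 3, urlsafe_b64encode takes and returns bytes, so a round-trip sketch looks like this (title is a stand-in name):

```python
import base64

title = "Song: Tricky / Title?.mp3"

# Encode: str -> bytes -> base64 bytes -> str suitable for a filename
safe_name = base64.urlsafe_b64encode(title.encode('utf-8')).decode('ascii')

# Decode: the transformation is reversible, unlike character stripping
original = base64.urlsafe_b64decode(safe_name.encode('ascii')).decode('utf-8')
print(original == title)  # True
```

The urlsafe alphabet uses '-' and '_' instead of '+' and '/', so the result never contains a path separator.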
There is a nice project on Github called python-slugify:
Install:
pip install python-slugify
Then use:
>>> from slugify import slugify
>>> txt = "This\ is/ a%#$ test ---"
>>> slugify(txt)
'this-is-a-test'
Just to further complicate things, you are not guaranteed to get a valid filename just by removing invalid characters. Since allowed characters differ across filesystems, a conservative approach could still end up turning a valid name into an invalid one. You may want to add special handling for the cases where:
The string is all invalid characters (leaving you with an empty string)
You end up with a string with a special meaning, eg "." or ".."
On Windows, certain device names are reserved. For instance, you can't create a file named "nul" or "nul.txt" (or nul.anything, in fact). The reserved names are:
CON, PRN, AUX, NUL, COM1, COM2, COM3, COM4, COM5, COM6, COM7, COM8, COM9, LPT1, LPT2, LPT3, LPT4, LPT5, LPT6, LPT7, LPT8, and LPT9
You can probably work around these issues by prepending some string to the filenames that can never result in one of these cases, and stripping invalid characters.
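A minimal sketch of that workaround; guard_reserved and the prefix are made-up names, and the check covers the reserved stems listed above plus "." and "..":

```python
WINDOWS_RESERVED = {
    'CON', 'PRN', 'AUX', 'NUL',
    *(f'COM{i}' for i in range(1, 10)),
    *(f'LPT{i}' for i in range(1, 10)),
}

def guard_reserved(name, prefix='file_'):
    # Windows reserves the stem regardless of extension ("nul.txt" is also invalid)
    stem = name.split('.', 1)[0]
    if not name or name in ('.', '..') or stem.upper() in WINDOWS_RESERVED:
        return prefix + name
    return name

print(guard_reserved('nul.txt'))   # file_nul.txt
print(guard_reserved('song.mp3'))  # song.mp3
```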
Just like S.Lott answered, you can look at the Django Framework for how they convert a string to a valid filename.
The most recent and updated version is found in utils/text.py, and defines "get_valid_filename", which is as follows:
import re

def get_valid_filename(s):
    s = str(s).strip().replace(' ', '_')
    return re.sub(r'(?u)[^-\w.]', '', s)

(See https://github.com/django/django/blob/master/django/utils/text.py)
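For example (function reproduced so the snippet runs standalone; the input string is Django's own docstring example):

```python
import re

def get_valid_filename(s):
    # Strip whitespace, turn spaces into underscores, drop everything
    # that isn't alphanumeric, '-', '_', or '.'
    s = str(s).strip().replace(' ', '_')
    return re.sub(r'(?u)[^-\w.]', '', s)

print(get_valid_filename("john's portrait in 2004.jpg"))  # johns_portrait_in_2004.jpg
```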
This is the solution I ultimately used:
import string
import unicodedata

validFilenameChars = "-_.() %s%s" % (string.ascii_letters, string.digits)

def removeDisallowedFilenameChars(filename):
    # Python 2: encode() returns a str here; under Python 3 add .decode('ASCII')
    cleanedFilename = unicodedata.normalize('NFKD', filename).encode('ASCII', 'ignore')
    return ''.join(c for c in cleanedFilename if c in validFilenameChars)
The unicodedata.normalize call replaces accented characters with the unaccented equivalent, which is better than simply stripping them out. After that all disallowed characters are removed.
My solution doesn't prepend a known string to avoid possible disallowed filenames, because I know they can't occur given my particular filename format. A more general solution would need to do so.
In one line (assuming re is imported):

valid_file_name = re.sub(r'[^\w_.)( -]', '', any_string)

You can also substitute '_' instead of the empty string to keep the result readable (when replacing slashes, for example).
Keep in mind, there are actually no restrictions on filenames on Unix systems other than
It may not contain \0
It may not contain /
Everything else is fair game.
$ touch "
> even multiline
> haha
> ^[[31m red ^[[0m
> evil"
$ ls -la
-rw-r--r-- 0 Nov 17 23:39 ?even multiline?haha??[31m red ?[0m?evil
$ ls -lab
-rw-r--r-- 0 Nov 17 23:39 \neven\ multiline\nhaha\n\033[31m\ red\ \033[0m\nevil
$ perl -e 'for my $i ( glob(q{./*even*}) ){ print $i; } '
./
even multiline
haha
red
evil
Yes, I just stored ANSI colour codes in a file name and had them take effect.
For entertainment, put a BEL character in a directory name and watch the fun that ensues when you CD into it ;)
You could use the re.sub() method to replace anything not "filelike". But in effect, every character could be valid; so there are no prebuilt functions (I believe) to get it done.

import os
import re

name = "File!name?.txt"
f = open(os.path.join("/tmp", re.sub('[^-a-zA-Z0-9_.() ]+', '', name)), 'w')

This would result in a filehandle to /tmp/Filename.txt.
I realise there are many answers but they mostly rely on regular expressions or external modules, so I'd like to throw in my own answer. A pure python function, no external module needed, no regular expression used. My approach is not to clean invalid chars, but to only allow valid ones.
def normalizefilename(fn):
    validchars = "-_.() "
    out = ""
    for c in fn:
        if str.isalpha(c) or str.isdigit(c) or (c in validchars):
            out += c
        else:
            out += "_"
    return out
If you like, you can add your own valid chars to the validchars variable at the beginning, such as national letters that don't exist in the English alphabet. This is something you may or may not want: some file systems that don't use UTF-8 might still have problems with non-ASCII chars.

This function tests a single file name for validity, so it will replace path separators with _, considering them invalid chars. If you want to allow them, it is trivial to modify the if to include the OS path separator.
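A quick demonstration (function reproduced so the snippet runs on its own):

```python
def normalizefilename(fn):
    validchars = "-_.() "
    out = ""
    for c in fn:
        if str.isalpha(c) or str.isdigit(c) or (c in validchars):
            out += c
        else:
            out += "_"
    return out

print(normalizefilename('my/song: title?.mp3'))  # my_song_ title_.mp3
print(normalizefilename('hähä.txt'))             # unchanged: national letters pass isalpha()
```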
If you don't mind installing a package, this should be useful:
https://pypi.org/project/pathvalidate/
From https://pypi.org/project/pathvalidate/#sanitize-a-filename:
from pathvalidate import sanitize_filename
fname = "fi:l*e/p\"a?t>h|.t<xt"
print(f"{fname} -> {sanitize_filename(fname)}\n")
fname = "\0_a*b:c<d>e%f/(g)h+i_0.txt"
print(f"{fname} -> {sanitize_filename(fname)}\n")
Output
fi:l*e/p"a?t>h|.t<xt -> filepath.txt
_a*b:c<d>e%f/(g)h+i_0.txt -> _abcde%f(g)h+i_0.txt
Another issue that the other comments haven't addressed yet is the empty string, which is obviously not a valid filename. You can also end up with an empty string from stripping too many characters.
What with the Windows reserved filenames and issues with dots, the safest answer to the question "how do I normalise a valid filename from arbitrary user input?" is "don't even bother trying": if you can find any other way to avoid it (e.g. using integer primary keys from a database as filenames), do that.
If you must, and you really need to allow spaces and ‘.’ for file extensions as part of the name, try something like:
import re

badchars = re.compile(r'[^A-Za-z0-9_. ]+|^\.|\.$|^ | $|^$')
badnames = re.compile(r'(aux|com[1-9]|con|lpt[1-9]|prn)(\.|$)')

def makeName(s):
    name = badchars.sub('_', s)
    if badnames.match(name):
        name = '_' + name
    return name
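For example (patterns and function reproduced so the snippet runs standalone):

```python
import re

badchars = re.compile(r'[^A-Za-z0-9_. ]+|^\.|\.$|^ | $|^$')
badnames = re.compile(r'(aux|com[1-9]|con|lpt[1-9]|prn)(\.|$)')

def makeName(s):
    name = badchars.sub('_', s)
    if badnames.match(name):
        name = '_' + name
    return name

print(makeName('con.txt'))        # _con.txt (reserved stem gets a prefix)
print(makeName('what? No!.mp3'))  # what_ No_.mp3
```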
Even this can't be guaranteed right especially on unexpected OSs — for example RISC OS hates spaces and uses ‘.’ as a directory separator.
You have to be careful, though. Your intro doesn't clearly say whether you are dealing only with Latin-script languages. Some words can become meaningless, or take on another meaning, if you sanitize them down to ASCII characters only.

Imagine you have "forêt poésie" (forest poetry): your sanitization might give "fort-posie" (strong + something meaningless).

It's worse if you have to deal with Chinese characters. With "下北沢", your system might end up producing "---", which is doomed to fail after a while and is not very helpful. So if you deal only with files, I would encourage you either to name them with a generic scheme that you control or to keep the characters as they are. For URIs, about the same.
Why not just wrap os.open() in a try/except and let the underlying OS sort out whether the filename is valid?
This seems like much less work and is valid no matter which OS you use.
>>> import string
>>> safechars = bytearray(('_-.()' + string.digits + string.ascii_letters).encode())
>>> allchars = bytearray(range(0x100))
>>> deletechars = bytearray(set(allchars) - set(safechars))
>>> filename = u'#ab\xa0c.$%.txt'
>>> safe_filename = filename.encode('ascii', 'ignore').translate(None, deletechars).decode()
>>> safe_filename
'abc..txt'
It doesn't handle empty strings or special filenames ('nul', 'con', etc.).
I liked the python-slugify approach here, but it was stripping dots away as well, which was not desired. So I optimized it for uploading a clean filename to S3 this way:
pip install python-slugify
Example code:

import os
from slugify import slugify

s = 'Very / Unsafe / file\nname hähä \n\r .txt'

clean_basename = slugify(os.path.splitext(s)[0])
clean_extension = slugify(os.path.splitext(s)[1][1:])
if clean_extension:
    clean_filename = '{}.{}'.format(clean_basename, clean_extension)
elif clean_basename:
    clean_filename = clean_basename
else:
    clean_filename = 'none'  # only unclean characters
Output:
>>> clean_filename
'very-unsafe-file-name-haha.txt'
This is fairly failsafe: it works with filenames without an extension, and it even works for filenames consisting only of unsafe characters (the result is 'none' here).
Answer modified for Python 3.6:

import string
import unicodedata

validFilenameChars = "-_.() %s%s" % (string.ascii_letters, string.digits)

def removeDisallowedFilenameChars(filename):
    cleanedFilename = unicodedata.normalize('NFKD', filename).encode('ASCII', 'ignore')
    return ''.join(chr(c) for c in cleanedFilename if chr(c) in validFilenameChars)
Not exactly what the OP was asking for, but this is what I use because I need unique and reversible conversions:

# Python 3 code
def safePath(url):
    return ''.join(map(lambda ch: chr(ch) if ch in safePath.chars else '%%%02x' % ch,
                       url.encode('utf-8')))

safePath.chars = set(map(ord, '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ'
                              'abcdefghijklmnopqrstuvwxyz+-_ .'))
Result is "somewhat" readable, at least from a sysadmin point of view.
When confronted with the same problem I used python-slugify.
Usage was also suggested by Shoham but, as therealmarv pointed out, by default python-slugify also converts dots.
This behaviour can be overruled by including dots into the regex_pattern argument.
> filename = "This is a väryì' Strange File-Nömé.jpeg"
> pattern = re.compile(r'[^-a-zA-Z0-9.]+')
> slugify(filename,regex_pattern=pattern)
'this-is-a-varyi-strange-file-nome.jpeg'
Note that the regex pattern was copied from the
ALLOWED_CHARS_PATTERN_WITH_UPPERCASE
global variable within the slugify.py file of the python-slugify package and extended with "."
Keep in mind that special characters like .() must be escaped with \.
If you want to preserve uppercase letters use the lowercase=False argument.
> filename = "This is a väryì' Strange File-Nömé.jpeg"
> pattern = re.compile(r'[^-a-zA-Z0-9.]+')
> slugify(filename,regex_pattern=pattern, lowercase=False)
'This-is-a-varyi-Strange-File-Nome.jpeg'
This worked using Python 3.8.4 and python-slugify 4.0.1
Yet another answer for Windows specific paths, using simple replacement and no funky modules:
import re
def check_for_illegal_char(input_str):
    # remove illegal characters for Windows file names/paths
    # (illegal filenames are a superset (41) of the illegal path names (36))
    # this is according to the Windows blacklist obtained with PowerShell
    # from: https://stackoverflow.com/questions/1976007/what-characters-are-forbidden-in-windows-and-linux-directory-names/44750843#44750843
    #
    # PS> $enc = [system.Text.Encoding]::UTF8
    # PS> $FileNameInvalidChars = [System.IO.Path]::GetInvalidFileNameChars()
    # PS> $FileNameInvalidChars | foreach { $enc.GetBytes($_) } | Out-File -FilePath InvalidFileCharCodes.txt
    illegal = '\u0022\u003c\u003e\u007c\u0000\u0001\u0002\u0003\u0004\u0005\u0006\u0007\u0008' + \
              '\u0009\u000a\u000b\u000c\u000d\u000e\u000f\u0010\u0011\u0012\u0013\u0014\u0015' + \
              '\u0016\u0017\u0018\u0019\u001a\u001b\u001c\u001d\u001e\u001f\u003a\u002a\u003f\u005c\u002f'

    output_str, _ = re.subn('[' + illegal + ']', '_', input_str)
    output_str = output_str.replace('\\', '_')  # the backslash needs separate handling
    output_str = output_str.replace('..', '_')  # double dots are illegal too, or at least a bad idea
    output_str = output_str[:-1] if output_str.endswith('.') else output_str  # can't end with '.'

    if output_str != input_str:
        print(f"The name '{input_str}' had invalid characters, "
              f"name was modified to '{output_str}'")
    return output_str
When tested with check_for_illegal_char('fas\u0003\u0004good\\..asd.'), I get:
The name 'fas♥♦good\..asd.' had invalid characters, name was modified to 'fas__good__asd'
Most of these solutions don't work.
'/hello/world' -> 'helloworld'
'/helloworld/' -> 'helloworld'

This isn't what you want generally. Say you are saving the HTML for each link: you would overwrite the HTML of a different webpage.
I pickle a dict such as:
{'helloworld':
(
{'/hello/world': 'helloworld', '/helloworld/': 'helloworld1'},
2)
}
2 represents the number that should be appended to the next filename.
I look up the filename each time from the dict. If it's not there, I create a new one, appending the max number if needed.
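A sketch of that bookkeeping; unique_name and registry are made-up names, and the registry dict is what you would pickle between runs:

```python
def unique_name(raw, registry):
    # registry maps slug -> ({original_string: assigned_name}, next_suffix)
    slug = ''.join(c for c in raw if c.isalnum()) or 'file'
    mapping, counter = registry.get(slug, ({}, 1))
    if raw not in mapping:
        if not mapping:
            mapping[raw] = slug            # first claimant keeps the bare name
        else:
            mapping[raw] = f'{slug}{counter}'
            counter += 1
        registry[slug] = (mapping, counter)
    return mapping[raw]

reg = {}
print(unique_name('/hello/world', reg))   # helloworld
print(unique_name('/helloworld/', reg))   # helloworld1
print(unique_name('/hello/world', reg))   # helloworld (looked up, not regenerated)
```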
Still haven't found a good library to generate a valid filename. Note that in languages like German, Norwegian or French special characters in filenames are very common and totally OK. So I ended up with my own library:
# util/files.py
CHAR_MAX_LEN = 31
CHAR_REPLACE = '_'
ILLEGAL_CHARS = [
'#', # pound
'%', # percent
'&', # ampersand
'{', # left curly bracket
'}', # right curly bracket
'\\', # back slash
'<', # left angle bracket
'>', # right angle bracket
'*', # asterisk
'?', # question mark
'/', # forward slash
' ', # blank spaces
'$', # dollar sign
'!', # exclamation point
"'", # single quotes
'"', # double quotes
':', # colon
'@', # at sign
'+', # plus sign
'`', # backtick
'|', # pipe
'=', # equal sign
]
def generate_filename(
        name, char_replace=CHAR_REPLACE, length=CHAR_MAX_LEN,
        illegal=ILLEGAL_CHARS, replace_dot=False):
    ''' return clean filename '''
    # init
    _elem = name.split('.')
    extension = _elem[-1].strip()
    _length = length - len(extension) - 1
    label = '.'.join(_elem[:-1]).strip()[:_length]
    filename = ''
    # replace '.' ?
    if replace_dot:
        label = label.replace('.', char_replace)
    # clean
    for char in label + '.' + extension:
        if char in illegal:
            char = char_replace
        filename += char
    return filename

>>> generate_filename('nucgae zutaäer..0.1.docx', replace_dot=False)
'nucgae_zutaäer..0.1.docx'
>>> generate_filename('nucgae zutaäer..0.1.docx', replace_dot=True)
'nucgae_zutaäer__0_1.docx'
I'm sure this isn't a great answer, since it modifies the string it's looping over, but it seems to work alright:

import string

for c in your_string:
    if c == ' ':
        your_string = your_string.replace(' ', '_')
    elif c not in string.ascii_letters and c not in string.digits:
        your_string = your_string.replace(c, '')

Note the `and` in the condition: with `or`, every character would satisfy it and get stripped.
UPDATE
All links are broken beyond repair in this six-year-old answer.
Also, I wouldn't do it this way anymore; I'd just base64-encode the name or drop unsafe chars. Python 3 example:
import re

t = re.compile("[a-zA-Z0-9.,_-]")
unsafe = "abc∂éåß®∆˚˙©¬ñ√ƒµ©∆∫ø"
safe = ''.join(ch for ch in unsafe if t.match(ch))
# => 'abc'
With base64 you can encode and decode, so you can retrieve the original filename again.
But depending on the use case you might be better off generating a random filename and storing the metadata in separate file or DB.
from random import choice
from string import ascii_lowercase, ascii_uppercase, digits

allowed_chr = ascii_lowercase + ascii_uppercase + digits
safe = ''.join(choice(allowed_chr) for _ in range(16))
# => e.g. 'CYQ4JDKE9JfcRzAZ'
ORIGINAL LINKROTTEN ANSWER:
The bobcat project contains a python module that does just this.
It's not completely robust, see this post and this reply.
So, as noted: base64 encoding is probably a better idea if readability doesn't matter.
Docs https://svn.origo.ethz.ch/bobcat/src-doc/safefilename-module.html
Source https://svn.origo.ethz.ch/bobcat/trunk/src/bobcatlib/safefilename.py
Here, this should cover all the bases. It handles many types of issues for you, including (but not limited to) character substitution.
Works in Windows, *nix, and almost every other file system. Allows printable characters only.
import re

def txt2filename(txt, chr_set='normal'):
    """Converts txt to a valid Windows/*nix filename with printable characters only.

    args:
        txt: The str to convert.
        chr_set: 'normal', 'universal', or 'extended'.
            'universal': ' -.0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz'
            'normal': Every printable ASCII character except those disallowed on Windows/*nix.
            'extended': All 'normal' characters plus the extended ASCII codes 128-255.
    """
    FILLER = '-'

    # Step 1: Remove excluded characters.
    if chr_set == 'universal':
        # Membership tests on a set are O(1), vs O(n) on a str.
        printables = set(' -.0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz')
    else:
        if chr_set == 'normal':
            max_chr = 127
        elif chr_set == 'extended':
            max_chr = 256
        else:
            raise ValueError(f'The chr_set argument may be normal, extended or universal; not {chr_set=}')
        EXCLUDED_CHRS = set(r'<>:"/\|?*')  # Illegal characters in Windows filenames.
        EXCLUDED_CHRS.update(chr(127))     # DEL (non-printable).
        printables = set(chr(x)
                         for x in range(32, max_chr)
                         if chr(x) not in EXCLUDED_CHRS)
    result = ''.join(x if x in printables else FILLER  # Allow printable characters only.
                     for x in txt)

    # Step 2: Device names, '.', and '..' are invalid filenames in Windows.
    DEVICE_NAMES = ('CON,PRN,AUX,NUL,COM1,COM2,COM3,COM4,'
                    'COM5,COM6,COM7,COM8,COM9,LPT1,LPT2,'
                    'LPT3,LPT4,LPT5,LPT6,LPT7,LPT8,LPT9,'
                    'CONIN$,CONOUT$,..,.').split(',')  # Note: split on ',', not whitespace.
    if result in DEVICE_NAMES:
        result = f'-{result}-'

    # Step 3: The maximum filename length is 255 bytes on Windows and Linux (other *nix flavors may allow longer names).
    result = result[:255]

    # Step 4: Windows does not allow filenames to end with '.' or ' ' or begin with ' '.
    result = re.sub(r'^[. ]', FILLER, result)
    result = re.sub(r' $', FILLER, result)

    return result
This solution needs no external libraries. It substitutes non-printable characters too, because they are not always simple to deal with.
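A condensed, runnable sketch of the same logic (device names built as a set, 'extended' spelling used throughout):

```python
import re

DEVICE_NAMES = {'CON', 'PRN', 'AUX', 'NUL', 'CONIN$', 'CONOUT$', '.', '..'} \
    | {f'COM{i}' for i in range(1, 10)} | {f'LPT{i}' for i in range(1, 10)}

def txt2filename(txt, chr_set='normal'):
    FILLER = '-'
    if chr_set == 'universal':
        printables = set(' -.0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ'
                         'abcdefghijklmnopqrstuvwxyz')
    else:
        max_chr = {'normal': 127, 'extended': 256}[chr_set]
        excluded = set(r'<>:"/\|?*') | {chr(127)}
        printables = {chr(x) for x in range(32, max_chr)} - excluded
    result = ''.join(x if x in printables else FILLER for x in txt)
    if result in DEVICE_NAMES:
        result = f'-{result}-'
    result = result[:255]
    result = re.sub(r'^[. ]', FILLER, result)
    result = re.sub(r' $', FILLER, result)
    return result

print(txt2filename('A/B:C*D?.txt'))  # A-B-C-D-.txt
print(txt2filename('NUL'))           # -NUL-
```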
Related
how to replace a comma in python, which is pressed to the letter [duplicate]
I'm trying to remove specific characters from a string using Python. This is the code I'm using right now. Unfortunately it appears to do nothing to the string. for char in line: if char in " ?.!/;:": line.replace(char,'') How do I do this properly?
Strings in Python are immutable (can't be changed). Because of this, the effect of line.replace(...) is just to create a new string, rather than changing the old one. You need to rebind (assign) it to line in order to have that variable take the new value, with those characters removed. Also, the way you are doing it is going to be kind of slow, relatively. It's also likely to be a bit confusing to experienced pythonators, who will see a doubly-nested structure and think for a moment that something more complicated is going on. Starting in Python 2.6 and newer Python 2.x versions *, you can instead use str.translate, (see Python 3 answer below): line = line.translate(None, '!##$') or regular expression replacement with re.sub import re line = re.sub('[!##$]', '', line) The characters enclosed in brackets constitute a character class. Any characters in line which are in that class are replaced with the second parameter to sub: an empty string. Python 3 answer In Python 3, strings are Unicode. You'll have to translate a little differently. kevpie mentions this in a comment on one of the answers, and it's noted in the documentation for str.translate. When calling the translate method of a Unicode string, you cannot pass the second parameter that we used above. You also can't pass None as the first parameter. Instead, you pass a translation table (usually a dictionary) as the only parameter. This table maps the ordinal values of characters (i.e. the result of calling ord on them) to the ordinal values of the characters which should replace them, or—usefully to us—None to indicate that they should be deleted. 
So to do the above dance with a Unicode string you would call something like translation_table = dict.fromkeys(map(ord, '!##$'), None) unicode_line = unicode_line.translate(translation_table) Here dict.fromkeys and map are used to succinctly generate a dictionary containing {ord('!'): None, ord('#'): None, ...} Even simpler, as another answer puts it, create the translation table in place: unicode_line = unicode_line.translate({ord(c): None for c in '!##$'}) Or, as brought up by Joseph Lee, create the same translation table with str.maketrans: unicode_line = unicode_line.translate(str.maketrans('', '', '!##$')) * for compatibility with earlier Pythons, you can create a "null" translation table to pass in place of None: import string line = line.translate(string.maketrans('', ''), '!##$') Here string.maketrans is used to create a translation table, which is just a string containing the characters with ordinal values 0 to 255.
Am I missing the point here, or is it just the following: string = "ab1cd1ef" string = string.replace("1", "") print(string) # result: "abcdef" Put it in a loop: a = "a!b#c#d$" b = "!##$" for char in b: a = a.replace(char, "") print(a) # result: "abcd"
>>> line = "abc##!?efg12;:?" >>> ''.join( c for c in line if c not in '?:!/;' ) 'abc##efg12'
With re.sub regular expression Since Python 3.5, substitution using regular expressions re.sub became available: import re re.sub('\ |\?|\.|\!|\/|\;|\:', '', line) Example import re line = 'Q: Do I write ;/.??? No!!!' re.sub('\ |\?|\.|\!|\/|\;|\:', '', line) 'QDoIwriteNo' Explanation In regular expressions (regex), | is a logical OR and \ escapes spaces and special characters that might be actual regex commands. Whereas sub stands for substitution, in this case with the empty string ''.
The asker almost had it. Like most things in Python, the answer is simpler than you think. >>> line = "H E?.LL!/;O:: " >>> for char in ' ?.!/;:': ... line = line.replace(char,'') ... >>> print line HELLO You don't have to do the nested if/for loop thing, but you DO need to check each character individually.
For the inverse requirement of only allowing certain characters in a string, you can use regular expressions with a set complement operator [^ABCabc]. For example, to remove everything except ascii letters, digits, and the hyphen: >>> import string >>> import re >>> >>> phrase = ' There were "nine" (9) chick-peas in my pocket!!! ' >>> allow = string.letters + string.digits + '-' >>> re.sub('[^%s]' % allow, '', phrase) 'Therewerenine9chick-peasinmypocket' From the python regular expression documentation: Characters that are not within a range can be matched by complementing the set. If the first character of the set is '^', all the characters that are not in the set will be matched. For example, [^5] will match any character except '5', and [^^] will match any character except '^'. ^ has no special meaning if it’s not the first character in the set.
line = line.translate(None, " ?.!/;:")
>>> s = 'a1b2c3' >>> ''.join(c for c in s if c not in '123') 'abc'
Strings are immutable in Python. The replace method returns a new string after the replacement. Try: for char in line: if char in " ?.!/;:": line = line.replace(char,'') This is identical to your original code, with the addition of an assignment to line inside the loop. Note that the string replace() method replaces all of the occurrences of the character in the string, so you can do better by using replace() for each character you want to remove, instead of looping over each character in your string.
I was surprised that no one had yet recommended using the builtin filter function. import operator import string # only for the example you could use a custom string s = "1212edjaq" Say we want to filter out everything that isn't a number. Using the filter builtin method "...is equivalent to the generator expression (item for item in iterable if function(item))" [Python 3 Builtins: Filter] sList = list(s) intsList = list(string.digits) obj = filter(lambda x: operator.contains(intsList, x), sList))) In Python 3 this returns >> <filter object # hex> To get a printed string, nums = "".join(list(obj)) print(nums) >> "1212" I am not sure how filter ranks in terms of efficiency but it is a good thing to know how to use when doing list comprehensions and such. UPDATE Logically, since filter works you could also use list comprehension and from what I have read it is supposed to be more efficient because lambdas are the wall street hedge fund managers of the programming function world. Another plus is that it is a one-liner that doesnt require any imports. For example, using the same string 's' defined above, num = "".join([i for i in s if i.isdigit()]) That's it. The return will be a string of all the characters that are digits in the original string. If you have a specific list of acceptable/unacceptable characters you need only adjust the 'if' part of the list comprehension. target_chars = "".join([i for i in s if i in some_list]) or alternatively, target_chars = "".join([i for i in s if i not in some_list])
Using filter, you'd just need one line line = filter(lambda char: char not in " ?.!/;:", line) This treats the string as an iterable and checks every character if the lambda returns True: >>> help(filter) Help on built-in function filter in module __builtin__: filter(...) filter(function or None, sequence) -> list, tuple, or string Return those items of sequence for which function(item) is true. If function is None, return the items that are true. If sequence is a tuple or string, return the same type, else return a list.
Try this one: def rm_char(original_str, need2rm): ''' Remove charecters in "need2rm" from "original_str" ''' return original_str.translate(str.maketrans('','',need2rm)) This method works well in Python 3
Here's some possible ways to achieve this task: def attempt1(string): return "".join([v for v in string if v not in ("a", "e", "i", "o", "u")]) def attempt2(string): for v in ("a", "e", "i", "o", "u"): string = string.replace(v, "") return string def attempt3(string): import re for v in ("a", "e", "i", "o", "u"): string = re.sub(v, "", string) return string def attempt4(string): return string.replace("a", "").replace("e", "").replace("i", "").replace("o", "").replace("u", "") for attempt in [attempt1, attempt2, attempt3, attempt4]: print(attempt("murcielago")) PS: Instead using " ?.!/;:" the examples use the vowels... and yeah, "murcielago" is the Spanish word to say bat... funny word as it contains all the vowels :) PS2: If you're interested on performance you could measure these attempts with a simple code like: import timeit K = 1000000 for i in range(1,5): t = timeit.Timer( f"attempt{i}('murcielago')", setup=f"from __main__ import attempt{i}" ).repeat(1, K) print(f"attempt{i}",min(t)) In my box you'd get: attempt1 2.2334518376057244 attempt2 1.8806643818474513 attempt3 7.214925774955572 attempt4 1.7271184513757465 So it seems attempt4 is the fastest one for this particular input.
Here's my Python 2/3 compatible version. Since the translate api has changed. def remove(str_, chars): """Removes each char in `chars` from `str_`. Args: str_: String to remove characters from chars: String of to-be removed characters Returns: A copy of str_ with `chars` removed Example: remove("What?!?: darn;", " ?.!:;") => 'Whatdarn' """ try: # Python2.x return str_.translate(None, chars) except TypeError: # Python 3.x table = {ord(char): None for char in chars} return str_.translate(table)
#!/usr/bin/python import re strs = "how^ much for{} the maple syrup? $20.99? That's[] ricidulous!!!" print strs nstr = re.sub(r'[?|$|.|!|a|b]',r' ',strs)#i have taken special character to remove but any #character can be added here print nstr nestr = re.sub(r'[^a-zA-Z0-9 ]',r'',nstr)#for removing special character print nestr
You can also use a function in order to substitute different kind of regular expression or other pattern with the use of a list. With that, you can mixed regular expression, character class, and really basic text pattern. It's really useful when you need to substitute a lot of elements like HTML ones. *NB: works with Python 3.x import re # Regular expression library def string_cleanup(x, notwanted): for item in notwanted: x = re.sub(item, '', x) return x line = "<title>My example: <strong>A text %very% $clean!!</strong></title>" print("Uncleaned: ", line) # Get rid of html elements html_elements = ["<title>", "</title>", "<strong>", "</strong>"] line = string_cleanup(line, html_elements) print("1st clean: ", line) # Get rid of special characters special_chars = ["[!##$]", "%"] line = string_cleanup(line, special_chars) print("2nd clean: ", line) In the function string_cleanup, it takes your string x and your list notwanted as arguments. For each item in that list of elements or pattern, if a substitute is needed it will be done. The output: Uncleaned: <title>My example: <strong>A text %very% $clean!!</strong></title> 1st clean: My example: A text %very% $clean!! 2nd clean: My example: A text very clean
My method I'd use probably wouldn't work as efficiently, but it is massively simple. I can remove multiple characters at different positions all at once, using slicing and formatting. Here's an example: words = "things" removed = "%s%s" % (words[:3], words[-1:]) This will result in 'removed' holding the word 'this'. Formatting can be very helpful for printing variables midway through a print string. It can insert any data type using a % followed by the variable's data type; all data types can use %s, and floats (aka decimals) and integers can use %d. Slicing can be used for intricate control over strings. When I put words[:3], it allows me to select all the characters in the string from the beginning (the colon is before the number, this will mean 'from the beginning to') to the 4th character (it includes the 4th character). The reason 3 equals till the 4th position is because Python starts at 0. Then, when I put word[-1:], it means the 2nd last character to the end (the colon is behind the number). Putting -1 will make Python count from the last character, rather than the first. Again, Python will start at 0. So, word[-1:] basically means 'from the second last character to the end of the string. So, by cutting off the characters before the character I want to remove and the characters after and sandwiching them together, I can remove the unwanted character. Think of it like a sausage. In the middle it's dirty, so I want to get rid of it. I simply cut off the two ends I want then put them together without the unwanted part in the middle. If I want to remove multiple consecutive characters, I simply shift the numbers around in the [] (slicing part). Or if I want to remove multiple characters from different positions, I can simply sandwich together multiple slices at once. Examples: words = "control" removed = "%s%s" % (words[:2], words[-2:]) removed equals 'cool'. words = "impacts" removed = "%s%s%s" % (words[1], words[3:5], words[-1]) removed equals 'macs'. 
In this case, [3:5] means character at position 3 through character at position 5 (excluding the character at the final position). Remember, Python starts counting at 0, so you will need to as well.
In Python 3.5, for example, to remove all the digits from a filename:

```python
import os

os.rename(file_name, file_name.translate({ord(c): None for c in '0123456789'}))
```
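The translate() part works on its own, independent of the rename; a minimal sketch with a sample string of my own:

```python
# str.translate() with a {codepoint: None} mapping deletes those characters
digit_map = {ord(c): None for c in '0123456789'}
print("abc123def456".translate(digit_map))  # abcdef
```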
How about this:

```python
def text_cleanup(text):
    new = ""
    for i in text:
        if i not in " ?.!/;:":
            new += i
    return new
```
Below is one without using the regular-expression concept:

```python
ipstring = "text with symbols!##$^&*( ends here"
opstring = ''
for i in ipstring:
    if i.isalnum() or i == ' ':
        opstring += i
print(opstring)
```
Recursive split: s = string; chars = chars to remove

```python
def strip(s, chars):
    if len(s) <= 1:  # <= 1 also handles the empty string safely
        return "" if s in chars else s
    # split the string in half and strip each half
    half = len(s) // 2
    return strip(s[:half], chars) + strip(s[half:], chars)
```

Example:

```python
print(strip("Hello!", "lo"))  # He!
```
You could use the re module's regular-expression replacement. Using the ^ expression inside a character class allows you to pick exactly what you want to keep from your string:

```python
import re

text = "This is absurd!"
text = re.sub("[^a-zA-Z]", "", text)  # keeps only alphabetic characters
print(text)
```

Output to this would be "Thisisabsurd". Only the things specified after the ^ symbol will remain.
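The same negated-class idea extends to any whitelist; for example (my own variant), keeping digits and spaces as well:

```python
import re

# [^...] negates the class: everything NOT listed is removed
print(re.sub(r"[^a-zA-Z0-9 ]", "", "Hello, World! 123"))  # Hello World 123
```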
```python
import os
import re

# for each file in a directory, strip the digits from its filename
os.chdir(r"D:\Dev\Python")  # rename() below receives bare names, so work in that directory
file_list = os.listdir(r"D:\Dev\Python")
for file_name in file_list:
    os.rename(file_name, re.sub(r'\d+', '', file_name))
```
Even the below approach works:

```python
line = "a,b,c,d,e"
alpha = list(line)
while ',' in alpha:
    alpha.remove(',')
finalString = ''.join(alpha)
print(finalString)
```

Output: abcde
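For comparison (my own aside), str.replace reaches the same result in a single call:

```python
line = "a,b,c,d,e"
print(line.replace(',', ''))  # abcde
```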
The string method replace does not modify the original string. It leaves the original alone and returns a modified copy. What you want is something like:

```python
def replace_all(line):
    for char in " ?.!/;:":
        line = line.replace(char, '')
    return line
```

However, creating a new string each and every time a character is removed is very inefficient. I recommend the following instead:

```python
def replace_all(line, baddies):
    """Remove every character in `baddies` from `line` in a single pass."""
    # Note the `baddies=baddies` default argument below: default arguments
    # to a lambda are evaluated when the lambda is **defined**, whereas
    # free (global) variables are looked up when the lambda is **called**.
    is_bad = lambda ch, baddies=baddies: ch in baddies
    filter_baddies = lambda ch, is_bad=is_bad: "" if is_bad(ch) else ch
    return "".join(map(filter_baddies, line))
```

To see why the default-argument trick matters, the following prints "as yellow as snow":

```python
fleece_color = "white"
little_lamb = lambda end: "as " + fleece_color + end

# sometime later...
fleece_color = "yellow"
print(little_lamb(" as snow"))
```
If you want to keep only allowed characters in your string by using ASCII codes, you can use this piece of code ('a' is 97 and 'z' is 122):

```python
for char in s:
    if ord(char) < 97 or ord(char) > 122:
        s = s.replace(char, "")
```

It will remove every character outside a...z, including upper-case letters.
Show non-printable characters in a string
Is it possible to visualize non-printable characters in a python string with its hex values? e.g. If I have a string with a newline inside I would like to replace it with \x0a. I know there is repr() which will give me ...\n, but I'm looking for the hex version.
I don't know of any built-in method, but it's fairly easy to do using a comprehension:

```python
import string

printable = string.ascii_letters + string.digits + string.punctuation + ' '

def hex_escape(s):
    return ''.join(c if c in printable else r'\x{0:02x}'.format(ord(c)) for c in s)
```
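Put together as a runnable whole with a sample input (the sample string is mine):

```python
import string

printable = string.ascii_letters + string.digits + string.punctuation + ' '

def hex_escape(s):
    # keep printable characters, hex-escape everything else
    return ''.join(c if c in printable else r'\x{0:02x}'.format(ord(c)) for c in s)

print(hex_escape("line one\nline two\t!"))  # line one\x0aline two\x09!
```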
I'm kind of late to the party, but if you need it for simple debugging, I found that this works:

```python
string = "\n\t\nHELLO\n\t\n\a\17"
procd = [c for c in string]
print(procd)
# Prints ['\n', '\t', '\n', 'H', 'E', 'L', 'L', 'O', '\n', '\t', '\n', '\x07', '\x0f']
```

While just list(string) is simpler, a comprehension makes it easier to add filtering/mapping if necessary.
You'll have to make the translation manually; go through the string with a regular expression, for example, and replace each occurrence with the hex equivalent:

```python
import re

replchars = re.compile(r'[\n\r]')

def replchars_to_hex(match):
    return r'\x{0:02x}'.format(ord(match.group()))

replchars.sub(replchars_to_hex, inputtext)
```

The above example only matches newlines and carriage returns, but you can expand what characters are matched, including using \x escape codes and ranges.

```
>>> inputtext = 'Some example containing a newline.\nRight there.\n'
>>> replchars.sub(replchars_to_hex, inputtext)
'Some example containing a newline.\\x0aRight there.\\x0a'
>>> print(replchars.sub(replchars_to_hex, inputtext))
Some example containing a newline.\x0aRight there.\x0a
```
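As an illustration of widening the class, here is my own extension of that pattern to cover all ASCII control characters:

```python
import re

# \x00-\x1f are the C0 control characters; \x7f is DEL
ctrl = re.compile(r'[\x00-\x1f\x7f]')

def ctrl_to_hex(match):
    return r'\x{0:02x}'.format(ord(match.group()))

print(ctrl.sub(ctrl_to_hex, 'tab\there, bell\x07'))  # tab\x09here, bell\x07
```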
Modifying ecatmur's solution to handle non-printable non-ASCII characters makes it less trivial and more obnoxious:

```python
def escape(c):
    if c.isprintable():
        return c
    c = ord(c)
    if c <= 0xff:
        return r'\x{0:02x}'.format(c)
    elif c <= 0xffff:
        return r'\u{0:04x}'.format(c)
    else:
        return r'\U{0:08x}'.format(c)

def hex_escape(s):
    return ''.join(escape(c) for c in s)
```

Of course if str.isprintable isn't exactly the definition you want, you can write a different function. (Note that it's a very different set from what's in string.printable; besides handling non-ASCII printable and non-printable characters, it also considers \n, \r, \t, \x0b, and \x0c as non-printable.)

You can make this more compact; the above is explicit just to show all the steps involved in handling Unicode strings. For example:

```python
def escape(c):
    if c.isprintable():
        return c
    elif c <= '\xff':
        return r'\x{0:02x}'.format(ord(c))
    else:
        return c.encode('unicode_escape').decode('ascii')
```

Really, no matter what you do, you're going to have to handle \r, \n, and \t explicitly, because all of the built-in and stdlib functions I know of will escape them via those special sequences instead of their hex versions.
I did something similar once by deriving a str subclass with a custom __repr__() method which did what I wanted. It's not exactly what you're looking for, but may give you some ideas (note this is Python 2 code):

```python
# -*- coding: iso-8859-1 -*-

# Special string subclass to override the default representation method.
# Main purpose is to prefer using double quotes and avoid hex
# representation on chars with an ord > 128.
class MsgStr(str):
    def __repr__(self):
        # use double quotes unless there are more of them within the string
        # than single quotes
        if self.count("'") >= self.count('"'):
            quotechar = '"'
        else:
            quotechar = "'"
        rep = [quotechar]
        for ch in self:
            # control char?
            if ord(ch) < ord(' '):
                # remove the single quotes around the escaped representation
                rep += repr(str(ch)).strip("'")
            # embedded quote matching quotechar being used?
            elif ch == quotechar:
                rep += "\\"
                rep += ch
            # else just use others as they are
            else:
                rep += ch
        rep += quotechar
        return "".join(rep)

if __name__ == "__main__":
    s1 = '\tWürttemberg'
    s2 = MsgStr(s1)
    print "str s1:", s1
    print "MsgStr s2:", s2
    print "--only the next two should differ--"
    print "repr(s1):", repr(s1), "# uses built-in string 'repr'"
    print "repr(s2):", repr(s2), "# uses custom MsgStr 'repr'"
    print "str(s1):", str(s1)
    print "str(s2):", str(s2)
    print "repr(str(s1)):", repr(str(s1))
    print "repr(str(s2)):", repr(str(s2))
    print "MsgStr(repr(MsgStr('\tWürttemberg'))):", MsgStr(repr(MsgStr('\tWürttemberg')))
```
There is also a way to print non-printable characters in the sense of them executing as commands within the string, even when they are not visible (transparent) in it. Their presence can be observed by measuring the length of the string with len, or by putting the mouse cursor at the start of the string and counting how many arrow-key taps it takes to get from start to finish; oddly, a stretch that looks like a single character can have a length of 3, which seems perplexing. (Not sure if this was already demonstrated in prior answers.)

In the example below, I pasted a 135-bit string with a certain structure and format (which I had to create manually beforehand, choosing certain bit positions and the overall length) so that it is interpreted as ASCII by the particular program I'm running. The resulting printed string contains non-printable characters such as a form feed (new page), which literally inserts an extra blank line into the printed output:

```
Input a string:100100001010000000111000101000101000111011001110001000100001100010111010010101101011100001011000111011001000101001000010011101001000000
HPQGg]+\,vE!:#

>>> len('HPQGg]+\,vE!:#')
17
>>>
```

In the above excerpt, try to copy-paste the string HPQGg]+\,vE!:# straight from this site and see what happens when you paste it into the Python IDLE. Hint: you have to tap the arrow/cursor key three times to get across the two letters from P to Q even though they appear next to each other, because there is actually a File Separator ASCII control character between them.
However, even though we get the same starting value when decoding it as a byte array to hex, converting that hex back to bytes displays differently (perhaps a matter of encoding, I'm not sure); either way the program's output above prints non-printable characters (I came across this by chance while trying to develop a compression method/experiment):

```
>>> bytes(b'HPQGg]+\,vE!:#').hex()
'48501c514767110c5d2b5c2c7645213a40'
>>> bytes.fromhex('48501c514767110c5d2b5c2c7645213a40')
b'HP\x1cQGg\x11\x0c]+\\,vE!:#'
>>> (0x48501c514767110c5d2b5c2c7645213a40 == 0b100100001010000000111000101000101000111011001110001000100001100010111010010101101011100001011000111011001000101001000010011101001000000)
True
>>>
```

In the above 135-bit string, the first 16 groups of 8 bits from the big-endian side encode each character (including the non-printable ones), whereas the last group of 7 bits yields the # symbol. Here, as text, is the breakdown of the 135-bit string:

10010000 = H (72)
10100000 = P (80)
00111000 = x1c (28, File Separator) *
10100010 = Q (81)
10001110 = G (71)
11001110 = g (103)
00100010 = x11 (17, Device Control 1) *
00011000 = x0c (12, NP form feed, new page) *
10111010 = ] (93, right bracket ']')
01010110 = + (43, plus sign)
10111000 = \ (92, backslash)
01011000 = , (44, comma)
11101100 = v (118)
10001010 = E (69)
01000010 = ! (33, exclamation)
01110100 = : (58, colon)
1000000 = # (64, '#' sign)

So, in closing, the answer to the sub-question about showing the non-printable characters as hex: in the byte array further above you can see x1c, which denotes the File Separator command that was also noted in the hint. The byte array could be considered a string if you exclude the b prefix on the left side, and again this value shows up in the printed string even though it is invisible (its presence can be observed, as demonstrated above, with the arrow-key hint and the len command).
python regular expression to match strings
I want to parse a string, such as:

```
package: name='jp.tjkapp.droid1lwp' versionCode='2' versionName='1.1'
uses-permission:'android.permission.WRITE_APN_SETTINGS'
uses-permission:'android.permission.RECEIVE_BOOT_COMPLETED'
uses-permission:'android.permission.ACCESS_NETWORK_STATE'
```

I want to get:

string1: jp.tjkapp.droid1lwp
string2: 1.1

Because there are multiple uses-permission lines, I want to get the permissions as a list containing WRITE_APN_SETTINGS, RECEIVE_BOOT_COMPLETED, and ACCESS_NETWORK_STATE.

Could you help me write the Python regular expression to get the strings I want? Thanks.
Assuming the code block you provided is one long string, here stored in a variable called input_string:

```python
import re

name = re.search(r"(?<=name\=\')[\w\.]+?(?=\')", input_string).group(0)
versionName = re.search(r"(?<=versionName\=\')\d+?\.\d+?(?=\')", input_string).group(0)
permissions = re.findall(r'(?<=android\.permission\.)[A-Z_]+(?=\')', input_string)
```

Explanation of the name pattern:

(?<=name\=\'): a lookbehind, checking just before the main match so that only strings preceded by name=' are returned. The \ in front of = and ' escapes them so that the regex knows we're talking about the literal characters and not regex commands. name=' is not returned with the result; we just know that the results we get were all preceded by it.

[\w\.]+?: this is the main string we're searching for. \w means any alphanumeric character or underscore, and \. is an escaped period, so the regex knows we mean a literal . and not the regex command represented by an unescaped period. Putting these in [] means we're okay with anything we've stuck in the brackets: any alphanumeric character, _, or .. The + afterwards means at least one of the preceding thing, i.e. one or more of [\w\.]. Finally, the ? means don't be greedy: we're telling the regex to get the smallest possible group meeting these specifications, since + could otherwise go on for an unlimited run of anything matched by [\w\.].

(?=\'): a lookahead, checking just after the main match so that only strings followed by ' are returned. The \ is again an escape, since otherwise the regex or Python's string parsing might misinterpret '. This final ' is not returned with our results; we just know that in the original string it followed every result we do end up getting.
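A runnable check of those three patterns (I shortened input_string to the relevant parts of the dump):

```python
import re

input_string = ("package: name='jp.tjkapp.droid1lwp' versionCode='2' versionName='1.1' "
                "uses-permission:'android.permission.WRITE_APN_SETTINGS' "
                "uses-permission:'android.permission.ACCESS_NETWORK_STATE'")

name = re.search(r"(?<=name\=\')[\w\.]+?(?=\')", input_string).group(0)
versionName = re.search(r"(?<=versionName\=\')\d+?\.\d+?(?=\')", input_string).group(0)
permissions = re.findall(r'(?<=android\.permission\.)[A-Z_]+(?=\')', input_string)

print(name)         # jp.tjkapp.droid1lwp
print(versionName)  # 1.1
print(permissions)  # ['WRITE_APN_SETTINGS', 'ACCESS_NETWORK_STATE']
```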
You can do this without regex by reading the file content line by line:

```
>>> def split_string(s):
...     if s.startswith('package'):
...         return [i.split('=')[1] for i in s.split() if "=" in i]
...     elif s.startswith('uses-permission'):
...         return s.split('.')[-1]
...
>>> split_string("package: name='jp.tjkapp.droid1lwp' versionCode='2' versionName='1.1'")
["'jp.tjkapp.droid1lwp'", "'2'", "'1.1'"]
>>> split_string("uses-permission:'android.permission.WRITE_APN_SETTINGS'")
"WRITE_APN_SETTINGS'"
>>> split_string("uses-permission:'android.permission.RECEIVE_BOOT_COMPLETED'")
"RECEIVE_BOOT_COMPLETED'"
>>> split_string("uses-permission:'android.permission.ACCESS_NETWORK_STATE'")
"ACCESS_NETWORK_STATE'"
```
Here is one example:

```python
#!/usr/bin/env python
inputFile = open("test.txt", "r").readlines()

for line in inputFile:
    if line.startswith("package"):
        words = line.split()
        string1 = words[1].split("=")[1].replace("'", "")
        string2 = words[3].split("=")[1].replace("'", "")
```

The test.txt file contains the input data you mentioned earlier.
Is there any regular expression in python to find out the space in folder path
I have Python strings as below:

```python
mystring1 = "command1 " "D:\\The palace\\The King\\ " "D:\\The palace\\The Queen\\"
mystring2 = "command2 " "D:\\Thepalace\\TheKing\\ " "D:\\Thepalace\\TheQueen\\"
```

Is there any regular expression by which we can find out whether a space exists in the folder path, i.e. how can I distinguish mystring1 from mystring2 using a regular expression?
Provided that mystring1 and mystring2 will be proper Python strings, I think you want to take a look at the shlex Python package. It's specifically created for parsing such strings. Then you can use shlex.get_token() to retrieve the command, and the arguments to the command, quotes/spaced parsed and all.
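A quick sketch of that approach using shlex.split(), a convenience wrapper over the same tokenizer (the command string is my own example; the paths deliberately have no trailing backslash, since a backslash right before the closing quote would escape it):

```python
import shlex

# shlex.split() tokenizes like a POSIX shell, so a double-quoted path
# containing spaces survives as a single token
cmd = 'command1 "D:\\The palace\\The King" "D:\\The palace\\The Queen"'
tokens = shlex.split(cmd)
print(tokens[1])                        # D:\The palace\The King
print(any(' ' in t for t in tokens[1:]))  # True: a path contains a space
```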
You may not need a regular expression at all; the simple string method str.find() would do:

```python
s = "some string"
space = " "
s.find(space)  # gives -1 when space is not in s, otherwise the offset of its 1st appearance
```

EDITED: This answer is edited according to the edit of the question. Since you're on Windows, I present you ":\\. .*\\" for commands with absolute paths. As for relative paths, I'm afraid there won't be a nice solution to detect them.
Assuming that you meant

```python
mystring1 = "command1 D:\\The place\\The King\\ D:\\The place\\The Queen\\"
```

you could split the string at occurrences of D:\\ and then remove spaces from the substrings, which gives you space-free paths, e.g.

```python
subs = mystring1.split('D:\\')
mystring2 = subs[0]
for s in subs[1:]:
    mystring2 += ' D:\\' + s.replace(' ', '')
```

For example, I applied this and got

```
>> mystring2
'command1 D:\\Theplace\\TheKing\\ D:\\Theplace\\TheQueen\\'
```

If you just need to distinguish them, use

```python
def has_spaces(str1):
    subs = str1.split('D:\\')
    for s in subs[1:]:
        if s.strip().count(' ') > 0:
            return True
    return False
```

```
>> has_spaces(mystring1)
True
>> has_spaces(mystring2)
False
```

to detect if any of the paths contain spaces. Cheers to Pengyu CHEN for pointing out the OP asked an easier question.
python regex for repeating string
I want to verify and then parse this string (in quotes):

```python
string = "start: c12354, c3456, 34526; other stuff that I don't care about"
# note that some codes begin with 'c'
```

I would like to verify that the string starts with 'start:' and ends with ';'. Afterward, I would like a regex to parse out the codes. I tried the following Python re code:

```python
regx = r"start: (c?[0-9]+,?)+;"
reg = re.compile(regx)
matched = reg.search(string)
print ' matched.groups()', matched.groups()
```

I have tried different variations, but I can only get either the first or the last code, not a list of all three. Or should I abandon using a regex?

EDIT: updated to reflect part of the problem space I neglected, and fixed a string difference. Thanks for all the suggestions, in such a short time.
In Python, this isn’t possible with a single regular expression: each capture of a group overrides the last capture of that same group (in .NET, this would actually be possible since the engine distinguishes between captures and groups). Your easiest solution is to first extract the part between start: and ; and then using a regular expression to return all matches, not just a single match, using re.findall('c?[0-9]+', text).
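Sketched out, that two-step approach looks like this (the variable names are mine):

```python
import re

text = "start: c12354, c3456, 34526; other stuff that I don't care about"

# Step 1: pull out the part between 'start:' and ';'
middle = re.search(r'start:\s*(.*?);', text)
if middle:
    # Step 2: findall returns every match, not just the last capture
    codes = re.findall('c?[0-9]+', middle.group(1))
    print(codes)  # ['c12354', 'c3456', '34526']
```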
You could use the standard string tools, which are pretty much always more readable:

```python
s = "start: c12354, c3456, 34526;"
s.startswith("start:")  # returns a boolean: does it start with this string?
s.endswith(";")         # returns a boolean: does it end with this string?
s[7:-1].split(', ')     # list of the tokens between 'start: ' and ';', separated by ", "
```
This can be done (pretty elegantly) with a tool like pyparsing:

```python
from pyparsing import Group, Literal, OneOrMore, Optional, ParseException, Word
import string

code = Group(Optional(Literal("c"), default='')
             + Word(string.digits)
             + Optional(Literal(","), default=''))
parser = Literal("start:") + OneOrMore(code) + Literal(";")

# Read lines from file:
with open('lines.txt', 'r') as f:
    for line in f:
        try:
            result = parser.parseString(line)
            codes = [c[1] for c in result[1:-1]]
            # Do something with teh codez...
        except ParseException:
            # Oh noes: string doesn't match!
            continue
```

Cleaner than a regular expression, it returns a list of codes (no need for string.split), and it ignores any extra characters on the line, just like your example.
```python
import re

sstr = re.compile(r'start:([^;]*);')
slst = re.compile(r'(?:c?)(\d+)')

mystr = "start: c12354, c3456, 34526; other stuff that I don't care about"
match = re.match(sstr, mystr)
if match:
    res = re.findall(slst, match.group(0))
```

This results in ['12354', '3456', '34526'].