How can I change string encoding? [duplicate] - python

I have a string where special characters like ' or " or & (...) can appear. In the string:
string = """ Hello "XYZ" this 'is' a test & so on """
how can I automatically escape every special character, so that I get this:
string = " Hello "XYZ" this 'is' a test & so on "

In Python 3.2, you could use the html.escape function, e.g.
>>> string = """ Hello "XYZ" this 'is' a test & so on """
>>> import html
>>> html.escape(string)
' Hello "XYZ" this 'is' a test & so on '
For earlier versions of Python, check http://wiki.python.org/moin/EscapingHtml:
The cgi module that comes with Python has an escape() function:
import cgi
s = cgi.escape( """& < >""" ) # s = "&amp; &lt; &gt;"
However, it doesn't escape characters beyond &, <, and >. If it is used as cgi.escape(string_to_escape, quote=True), it also escapes " as &quot;.
Here's a small snippet that will let you escape quotes and apostrophes as well:
html_escape_table = {
    "&": "&amp;",
    '"': "&quot;",
    "'": "&apos;",
    ">": "&gt;",
    "<": "&lt;",
}

def html_escape(text):
    """Produce entities within text."""
    return "".join(html_escape_table.get(c, c) for c in text)
You can also use escape() from xml.sax.saxutils to escape HTML. This function should execute faster. The unescape() function of the same module can be passed the same arguments to decode a string.
from xml.sax.saxutils import escape, unescape
# escape() and unescape() take care of &, < and >.
html_escape_table = {
    '"': "&quot;",
    "'": "&apos;",
}
html_unescape_table = {v: k for k, v in html_escape_table.items()}

def html_escape(text):
    return escape(text, html_escape_table)

def html_unescape(text):
    return unescape(text, html_unescape_table)
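For illustration, a quick round trip with these helpers (the sample string is my own):
>>> html_escape("""Hello "XYZ" this 'is' a test & so on""")
'Hello &quot;XYZ&quot; this &apos;is&apos; a test &amp; so on'
>>> html_unescape('Hello &quot;XYZ&quot; this &apos;is&apos; a test &amp; so on')
'Hello "XYZ" this \'is\' a test & so on'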

The cgi.escape method will convert special characters to valid HTML entities:
import cgi
original_string = 'Hello "XYZ" this \'is\' a test & so on '
escaped_string = cgi.escape(original_string, True)
print original_string
print escaped_string
will result in
Hello "XYZ" this 'is' a test & so on
Hello "XYZ" this 'is' a test & so on
The optional second parameter on cgi.escape escapes quotes. By default, they are not escaped.

A simple string function will do it:
def escape(t):
    """HTML-escape the text in `t`."""
    return (t
        .replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;")
        .replace("'", "&#39;").replace('"', "&quot;")
        )
Other answers in this thread have minor problems: The cgi.escape method for some reason ignores single-quotes, and you need to explicitly ask it to do double-quotes. The wiki page linked does all five, but uses the XML entity &apos;, which isn't an HTML entity.
This code function does all five all the time, using HTML-standard entities.
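For example (my own sample, output shown as a Python repr):
>>> escape("""Hello "XYZ" this 'is' a test & so on""")
'Hello &quot;XYZ&quot; this &#39;is&#39; a test &amp; so on'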

The other answers here will help with the characters you listed and a few others. However, if you also want to convert everything else to entity names, you'll have to do something else. For instance, if á needs to be converted to &aacute;, neither cgi.escape nor html.escape will help you there. You'll want to do something like this, using html.entities.entitydefs, which is just a dictionary. (The following code is made for Python 3.x, but there's a partial attempt at making it compatible with 2.x to give you an idea):
# -*- coding: utf-8 -*-
import sys
if sys.version_info[0] > 2:
    from html.entities import entitydefs
else:
    from htmlentitydefs import entitydefs

text = ";\"áèïøæỳ"  # This is your string variable containing the stuff you want to convert
text = text.replace(";", "$ஸ$")  # $ஸ$ is just something random the user isn't likely to have in the document; we convert it so the semicolons in the entity names don't get turned into entity names themselves.
text = text.replace("$ஸ$", "&semi;")  # Converting semicolons to entity names
if sys.version_info[0] > 2:  # Using appropriate code for each Python version.
    for k, v in entitydefs.items():
        if k not in {"semi", "amp"}:
            text = text.replace(v, "&" + k + ";")  # You have to add the & and ; manually.
else:
    for k, v in entitydefs.iteritems():
        if k not in {"semi", "amp"}:
            text = text.replace(v, "&" + k + ";")  # You have to add the & and ; manually.
# The above code doesn't cover every single entity name, although I believe it covers
# everything in the Latin-1 character set. So, I'm manually doing some common ones I
# like hereafter:
text = text.replace("ŷ", "&ycirc;")
text = text.replace("Ŷ", "&Ycirc;")
text = text.replace("ŵ", "&wcirc;")
text = text.replace("Ŵ", "&Wcirc;")
text = text.replace("ỳ", "&#7923;")
text = text.replace("Ỳ", "&#7922;")
text = text.replace("ẃ", "&wacute;")
text = text.replace("Ẃ", "&Wacute;")
text = text.replace("ẁ", "&#7809;")
text = text.replace("Ẁ", "&#7808;")
print(text)
# Python 3.x outputs: &semi;&quot;&aacute;&egrave;&iuml;&oslash;&aelig;&#7923;
# The Python 2.x version outputs the wrong stuff, so you'll have to adjust the code for it.

Related

How to properly unescape select sequences in python

I'm escaping certain characters in strings (e.g., \n, \\) with double backslashes, like this: text.replace("\\", "\\\\").replace("\n", "\\n")
Naïvely, I tried to unescape using: text.replace("\\n", "\n").replace("\\\\", "\\")
However, this fails on strings like:
>>> text = "\\\n\\n"
>>> print(text)
\
\n
>>> etext = text.replace("\\", "\\\\").replace("\n", "\\n")
>>> print(etext)
\\\n\\n
>>> ftext = etext.replace("\\n", "\n").replace("\\\\", "\\")
>>> print(ftext)
\
\
>>>
As you can see the original string doesn't survive the round trip.
Even changing the order of replaces around would not solve the issue.
The only way to correctly unescape is to do the replacements in one go.
Python's str has maketrans and translate to achieve a similar effect, but they only work with single characters as keys.
re.sub also does not work out of the box, since the substitution would need to distinguish the cases somehow (\1 does not work: if the second character is n, we want the newline character as output instead of n).
A correct (but slow) solution would be:
def unescape(text: str) -> str:
    res: list[str] = []
    in_escape = False
    for c in text:
        if in_escape:
            in_escape = False
            if c == "\\":
                res.append("\\")
                continue
            if c == "n":
                res.append("\n")
                continue
        if c == "\\":
            in_escape = True
            continue
        res.append(c)
    return "".join(res)
>>> text = "\\\n\\n"
>>> print(text)
\
\n
>>> etext = text.replace("\\", "\\\\").replace("\n", "\\n")
>>> print(etext)
\\\n\\n
>>> print(unescape(etext))
\
\n
>>>
Is there a proper/canonical/fast way of escaping (only certain sequences in) strings?
(EDIT: to answer why a subset of escapes is preferred: in my case the other escapes are not needed, and it's easy to permanently corrupt your data by escaping things that don't need it. For example, off the top of my head I can think of three different escape functions in Python alone that all escape completely different subsets of characters. Even the str.escape function changes what it escapes between Python versions. Most of the time unescape can handle a wider set of escape sequences than its corresponding escape function, but this is not always the case. And this all doesn't even take into account trying to load the escaped data in a different language.)
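A single-pass alternative that should be noticeably faster (a sketch of mine, not from the thread, covering only the \\ and \n sequences discussed above) is re.sub with a replacement function; unlike a plain \1 backreference, the callback can map each matched escape sequence individually, and each match is consumed exactly once:
import re

_unescape_map = {"\\\\": "\\", "\\n": "\n"}  # extend with any other sequences you support

def unescape_fast(text: str) -> str:
    # r"\\." matches a backslash plus the following character; unknown
    # sequences are left untouched here (you may prefer to raise instead).
    return re.sub(r"\\.", lambda m: _unescape_map.get(m.group(0), m.group(0)), text)
On the example above, unescape_fast(etext) == text holds.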

Python query for code examples

I want to create something like a dictionary for Python code examples. My problem is that I have to escape all the code examples, and r'some string' is not useful either. Would you recommend another solution for querying these entries?
import easygui

lex = {"dict": "woerter = {\"house\" : \"Haus\"}\nwoerter[\"house\"]",
       "for": "for x in range(0, 3):\n    print \"We are on time %d\" % (x)",
       "while": "while expression:\n    statement(s)"}
input_ = easygui.enterbox("Python-lex", "")
output = lex[input_]
b = easygui.textbox("", "", output)
Use triple quoting:
lex = {"dict": '''\
woerter = {"house" : "Haus"}
woerter["house"]
''',
"for": '''\
for x in range(0, 3):
print "We are on time %d" % (x)
''',
"while": '''\
while expression:
statement(s)
'''}
Triple-quoted strings (using ''' or """ delimiters) preserve newlines and any embedded single quotes do not need to be escaped.
The \ escape after the opening ''' triple quote escapes the newline at the start, making the value a little easier to read. The alternative would be to put the first line directly after the opening quotes.
You can make these raw as well; r'''\n''' would contain the literal characters \ and n, but literal newlines still remain literal newlines. Triple-quoting works with double-quote characters too: """This is a triple-quoted string too""". The only thing you'd have to escape is another triple quote in the same style; you only need to escape one quote character in that case:
triple_quote_with_embedded_triple = '''Triple quotes use \''' and """ delimiters'''
I guess you can use json.dumps(data, indent=1) to convert the data, and pass the result into easygui.textbox, like this:
import json
import easygui
resp = dict(...)
easygui.textbox(text=json.dumps(resp, indent=1))

Python 3: unescaping non-ASCII characters

(Python 3.3.2) I have to unescape some non-ASCII escaped characters returned by a call to re.escape(). I see here and here methods that don't work. I'm working in a 100% UTF-8 environment.
# pure ASCII string : ok
mystring = "a\n" # expected unescaped string : "a\n"
cod = codecs.getencoder('unicode_escape')
print( cod(mystring) )
# non ASCII string : method #1
mystring = "€\n"
# equivalent to : mystring = codecs.unicode_escape_decode(mystring)
cod = codecs.getdecoder('unicode_escape')
print(cod(mystring))
# RESULT = ('â\x82¬\n', 5) INSTEAD OF ("€\n", 2)
# non ASCII string : method #2
mystring = "€\n"
mystring = bytes(mystring, 'utf-8').decode('unicode_escape')
print(mystring)
# RESULT = â\202¬ INSTEAD OF "€\n"
Is this a bug? Have I misunderstood something? Any help would be appreciated!
PS: I edited my post thanks to Michael Foukarakis' remark.
I guess the actual string you need to process is mystring = €\\n?
mystring = "€\n" # that's 2 char, "€" and new line
mystring = "€\\n" # that's 3 char, "€", "\" and "n"
I don't really understand what's going wrong within encode() and decode() in Python 3, but my friend solved this problem when we were writing some tools.
What we did is to bypass encode("utf_8") after the escape procedure is done.
>>> "€\\n".encode("utf_8")
b'\xe2\x82\xac\\n'
>>> "€\\n".encode("utf_8").decode("unicode_escape")
'â\x82¬\n'
>>> "€\\n".encode("utf_8").decode("unicode_escape").encode("utf_8")
b'\xc3\xa2\xc2\x82\xc2\xac\n' # we don't want this
>>> bytes([ord(char) for char in "€\\n".encode("utf_8").decode("unicode_escape")])
b'\xe2\x82\xac\n' # what we really need
>>> str(bytes([ord(char) for char in "€\\n".encode("utf_8").decode("unicode_escape")]), "utf_8")
'€\n'
We can see that, though the result of decode("unicode_escape") looks weird, the bytes object actually contains the correct bytes of your string (in UTF-8 encoding), in this case "\xe2\x82\xac\n".
So we do not print the str object directly, nor do we use encode("utf_8"); we use ord() to build the bytes object b'\xe2\x82\xac\n'.
And you can get the correct str from this bytes object, just by putting it into str().
BTW, the tool my friend and I wanted to make is a wrapper that allows the user to input C-like string literals and converts the escape sequences automatically.
User input:\n\x61\x62\n\x20\x21 # 20 characters, which represent 6 chars semantically
output: # \n
ab # \x61\x62\n
! # \x20\x21
That's a powerful tool for users to input non-printable characters in a terminal.
Our final tool is:
#!/usr/bin/env python3
import sys
for line in sys.stdin:
    sys.stdout.buffer.write(bytes([ord(char) for char in line[:-1].encode().decode('unicode_escape')]))
    sys.stdout.flush()
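Incidentally, the ord() comprehension builds the same bytes as encoding the intermediate string with Latin-1 (every code point produced by decode('unicode_escape') is below 256 here), so a shorter equivalent of the pipeline above, as far as I can tell, is:
>>> "€\\n".encode("utf_8").decode("unicode_escape").encode("latin-1").decode("utf_8")
'€\n'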
You seem to misunderstand encodings. To be protected against common errors, we usually encode a string when it leaves our application, and decode it when it comes in.
Firstly, let's look at the documentation for unicode_escape, which states:
Produce[s] a string that is suitable as Unicode literal in Python source code.
Here is what you would get from the network or a file that claims its contents are Unicode escaped:
b'\\u20ac\\n'
Now, you have to decode this to use it in your app:
>>> s = b'\\u20ac\\n'.decode('unicode_escape')
>>> s
'€\n'
and if you wanted to write it back to, say, a Python source file:
with open('/tmp/foo', 'wb') as fh:  # binary mode
    fh.write(b'print("' + s.encode('unicode_escape') + b'")')
import string

printable = string.printable
printable = printable + '€'

def cod(c):
    return c.encode('unicode_escape').decode('ascii')

def unescape(s):
    return ''.join(c if ord(c) >= 32 and c in printable else cod(c) for c in s)

mystring = "€\n"
print(unescape(mystring))
Unfortunately string.printable only includes ASCII characters. You can make a copy as I did here and extend it with any Unicode characters that you'd like, such as €.

Proper way to deal with a string that looks like a JSON object but is wrapped in single quotes

By definition, a JSON string is wrapped in double quotes.
In fact:
json.loads('{"v":1}') #works
json.loads("{'v':1}") #doesn't work
But how do I deal with the second statement?
I'm looking for a solution different from eval or replace.
Thanks.
If you get malformed JSON, why don't you just replace the single quotes with double quotes before calling json.loads?
If you cannot fix the other side you will have to convert invalid JSON into valid JSON. I think the following treats escaped characters properly:
import json
import re

def fixEscapes(value):
    # Replace \' by '
    value = re.sub(r"[^\\]|\\.", lambda match: "'" if match.group(0) == "\\'" else match.group(0), value)
    # Replace " by \"
    value = re.sub(r"[^\\]|\\.", lambda match: '\\"' if match.group(0) == '"' else match.group(0), value)
    return value

input = "{'vt\"e\\'st':1}"
input = re.sub(r"'(([^\\']|\\.)+)'", lambda match: '"%s"' % fixEscapes(match.group(1)), input)
print json.loads(input)
Not sure if I got your requirements right, but are you looking for something like this?
def fix_json(string_):
    if string_[0] == string_[-1] == "'":
        return '"' + string_[1:-1] + '"'
    return string_
Example usage:
>>> fix_json("'{'key':'val\"'...cd'}'")
"{'key':'val"'...cd'}"
EDIT: it seems that the humour I tried to have in making the example above is not self-explanatory. So, here's another example:
>>> fix_json("'This string has - I'm sure - single quotes delimiters.'")
"This string has - I'm sure - single quotes delimiters."
This examples show how the "replacement" only happens at the extremities of the string, not within it.
You could also achieve the same with a regular expression, of course, but if you are just checking the first and last characters of a string, I find plain string indexing more readable.
Unfortunately, you have to do this:
f = open('filename.json', 'rb')
json = eval(f.read())
done!
this works, but apparently people don't like the eval function. Let me know if you find a better approach. I used this on some twitter data...
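If the input really is a Python-style literal, as in this answer, ast.literal_eval is a safer substitute for eval (my suggestion, not from the original answers): it only parses literals and cannot execute arbitrary code.
import ast
import json

data = ast.literal_eval("{'v': 1}")  # parses the single-quoted dict safely
valid_json = json.dumps(data)        # '{"v": 1}'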

Turn a string into a valid filename?

I have a string that I want to use as a filename, so I want to remove all characters that wouldn't be allowed in filenames, using Python.
I'd rather be strict than otherwise, so let's say I want to retain only letters, digits, and a small set of other characters like "_-.() ". What's the most elegant solution?
The filename needs to be valid on multiple operating systems (Windows, Linux and Mac OS) - it's an MP3 file in my library with the song title as the filename, and is shared and backed up between 3 machines.
You can look at the Django framework (but take their license into account!) for how they create a "slug" from arbitrary text. A slug is URL- and filename-friendly.
The Django text utils define a function, slugify(), that's probably the gold standard for this kind of thing. Essentially, their code is the following.
import unicodedata
import re

def slugify(value, allow_unicode=False):
    """
    Taken from https://github.com/django/django/blob/master/django/utils/text.py
    Convert to ASCII if 'allow_unicode' is False. Convert spaces or repeated
    dashes to single dashes. Remove characters that aren't alphanumerics,
    underscores, or hyphens. Convert to lowercase. Also strip leading and
    trailing whitespace, dashes, and underscores.
    """
    value = str(value)
    if allow_unicode:
        value = unicodedata.normalize('NFKC', value)
    else:
        value = unicodedata.normalize('NFKD', value).encode('ascii', 'ignore').decode('ascii')
    value = re.sub(r'[^\w\s-]', '', value.lower())
    return re.sub(r'[-\s]+', '-', value).strip('-_')
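For example (my own sample input):
>>> slugify("Hello, World! Ça va?")
'hello-world-ca-va'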
And the older version:
def slugify(value):
    """
    Normalizes string, converts to lowercase, removes non-alpha characters,
    and converts spaces to hyphens.
    """
    import unicodedata
    value = unicodedata.normalize('NFKD', value).encode('ascii', 'ignore')
    value = unicode(re.sub('[^\w\s-]', '', value).strip().lower())
    value = unicode(re.sub('[-\s]+', '-', value))
    # ...
    return value
There's more, but I left it out, since it doesn't address slugification, but escaping.
You can use list comprehension together with the string methods.
>>> s
'foo-bar#baz?qux#127/\\9]'
>>> "".join(x for x in s if x.isalnum())
'foobarbazqux1279'
This whitelist approach (i.e., allowing only the chars present in valid_chars) will work as long as there are no limits on the formatting of the files, and no combinations of valid chars that are illegal (like ".."). For example, it would allow a filename named " . txt", which I think is not valid on Windows. As this is the most simple approach, I'd remove whitespace from valid_chars and prepend a known valid string in case of error; any other approach would have to know what is allowed where, to cope with Windows file-naming limitations, and would thus be a lot more complex.
>>> import string
>>> valid_chars = "-_.() %s%s" % (string.ascii_letters, string.digits)
>>> valid_chars
'-_.() abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789'
>>> filename = "This Is a (valid) - filename%$&$ .txt"
>>> ''.join(c for c in filename if c in valid_chars)
'This Is a (valid) - filename .txt'
What is the reason to use the strings as file names? If human readability is not a factor I would go with base64 module which can produce file system safe strings. It won't be readable but you won't have to deal with collisions and it is reversible.
import base64
file_name_string = base64.urlsafe_b64encode(your_string)
Update: Changed based on Matthew comment.
There is a nice project on Github called python-slugify:
Install:
pip install python-slugify
Then use:
>>> from slugify import slugify
>>> txt = "This\ is/ a%#$ test ---"
>>> slugify(txt)
'this-is-a-test'
Just to further complicate things, you are not guaranteed to get a valid filename just by removing invalid characters. Since allowed characters differ on different filenames, a conservative approach could end up turning a valid name into an invalid one. You may want to add special handling for the cases where:
The string is all invalid characters (leaving you with an empty string)
You end up with a string with a special meaning, eg "." or ".."
On windows, certain device names are reserved. For instance, you can't create a file named "nul", "nul.txt" (or nul.anything in fact) The reserved names are:
CON, PRN, AUX, NUL, COM1, COM2, COM3, COM4, COM5, COM6, COM7, COM8, COM9, LPT1, LPT2, LPT3, LPT4, LPT5, LPT6, LPT7, LPT8, and LPT9
You can probably work around these issues by prepending some string to the filenames that can never result in one of these cases, and stripping invalid characters.
Just like S.Lott answered, you can look at the Django Framework for how they convert a string to a valid filename.
The most recent and updated version is found in utils/text.py, and defines "get_valid_filename", which is as follows:
import re

def get_valid_filename(s):
    s = str(s).strip().replace(' ', '_')
    return re.sub(r'(?u)[^-\w.]', '', s)
( See https://github.com/django/django/blob/master/django/utils/text.py )
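For example (my own input):
>>> get_valid_filename("john's portrait in 2004.jpg")
'johns_portrait_in_2004.jpg'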
This is the solution I ultimately used:
import string
import unicodedata

validFilenameChars = "-_.() %s%s" % (string.ascii_letters, string.digits)

def removeDisallowedFilenameChars(filename):
    cleanedFilename = unicodedata.normalize('NFKD', filename).encode('ASCII', 'ignore')
    return ''.join(c for c in cleanedFilename if c in validFilenameChars)
The unicodedata.normalize call replaces accented characters with the unaccented equivalent, which is better than simply stripping them out. After that all disallowed characters are removed.
My solution doesn't prepend a known string to avoid possible disallowed filenames, because I know they can't occur given my particular filename format. A more general solution would need to do so.
In one line:
valid_file_name = re.sub(r'[^\w_.)( -]', '', any_string)
You can also replace disallowed characters with '_' instead of dropping them, to keep the result more readable (when replacing slashes, for example).
Keep in mind, there are actually no restrictions on filenames on Unix systems other than
It may not contain \0
It may not contain /
Everything else is fair game.
$ touch "
> even multiline
> haha
> ^[[31m red ^[[0m
> evil"
$ ls -la
-rw-r--r-- 0 Nov 17 23:39 ?even multiline?haha??[31m red ?[0m?evil
$ ls -lab
-rw-r--r-- 0 Nov 17 23:39 \neven\ multiline\nhaha\n\033[31m\ red\ \033[0m\nevil
$ perl -e 'for my $i ( glob(q{./*even*}) ){ print $i; } '
./
even multiline
haha
red
evil
Yes, I just stored ANSI colour codes in a file name and had them take effect.
For entertainment, put a BEL character in a directory name and watch the fun that ensues when you CD into it ;)
You could use the re.sub() method to replace anything not "filelike". But in effect, every character could be valid; so there are no prebuilt functions (I believe) to get it done.
import os
import re

name = "File!name?.txt"
f = open(os.path.join("/tmp", re.sub('[^-a-zA-Z0-9_.() ]+', '', name)))
Would result in a filehandle to /tmp/Filename.txt.
I realise there are many answers but they mostly rely on regular expressions or external modules, so I'd like to throw in my own answer. A pure python function, no external module needed, no regular expression used. My approach is not to clean invalid chars, but to only allow valid ones.
def normalizefilename(fn):
    validchars = "-_.() "
    out = ""
    for c in fn:
        if str.isalpha(c) or str.isdigit(c) or (c in validchars):
            out += c
        else:
            out += "_"
    return out
If you like, you can add your own valid chars to the validchars variable at the beginning, such as national letters that don't exist in the English alphabet. This is something you may or may not want: some file systems that don't run on UTF-8 might still have problems with non-ASCII chars.
This function checks a single file name's validity, so it will replace path separators with _, considering them invalid chars. If you want to allow them, it is trivial to modify the if to include the OS path separator.
If you don't mind installing a package, this should be useful:
https://pypi.org/project/pathvalidate/
From https://pypi.org/project/pathvalidate/#sanitize-a-filename:
from pathvalidate import sanitize_filename
fname = "fi:l*e/p\"a?t>h|.t<xt"
print(f"{fname} -> {sanitize_filename(fname)}\n")
fname = "\0_a*b:c<d>e%f/(g)h+i_0.txt"
print(f"{fname} -> {sanitize_filename(fname)}\n")
Output
fi:l*e/p"a?t>h|.t<xt -> filepath.txt
_a*b:c<d>e%f/(g)h+i_0.txt -> _abcde%f(g)h+i_0.txt
Another issue that the other comments haven't addressed yet is the empty string, which is obviously not a valid filename. You can also end up with an empty string from stripping too many characters.
What with the Windows reserved filenames and issues with dots, the safest answer to the question “how do I normalise a valid filename from arbitrary user input?” is “don't even bother try”: if you can find any other way to avoid it (eg. using integer primary keys from a database as filenames), do that.
If you must, and you really need to allow spaces and ‘.’ for file extensions as part of the name, try something like:
import re
badchars = re.compile(r'[^A-Za-z0-9_. ]+|^\.|\.$|^ | $|^$')
badnames = re.compile(r'(aux|com[1-9]|con|lpt[1-9]|prn)(\.|$)')

def makeName(s):
    name = badchars.sub('_', s)
    if badnames.match(name):
        name = '_' + name
    return name
Even this can't be guaranteed right especially on unexpected OSs — for example RISC OS hates spaces and uses ‘.’ as a directory separator.
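For illustration, a couple of calls (inputs are mine):
>>> makeName('con.txt')
'_con.txt'
>>> makeName('my file?.txt')
'my file_.txt'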
Though you have to be careful: it is not clear from your intro whether you are looking only at Latin languages. Some words can become meaningless, or take on another meaning, if you sanitize them with ASCII characters only.
Imagine you have "forêt poésie" (forest poetry); your sanitization might give "fort-posie" (strong + something meaningless).
It's worse if you have to deal with Chinese characters.
"下北沢" might end up as "---", which is doomed to fail after a while and is not very helpful. So if you deal only with files, I would encourage you to either name them with a generic scheme that you control, or to keep the characters as they are. For URIs, about the same.
Why not just wrap the "osopen" with a try/except and let the underlying OS sort out whether the file is valid?
This seems like much less work and is valid no matter which OS you use.
>>> import string
>>> safechars = bytearray(('_-.()' + string.digits + string.ascii_letters).encode())
>>> allchars = bytearray(range(0x100))
>>> deletechars = bytearray(set(allchars) - set(safechars))
>>> filename = u'#ab\xa0c.$%.txt'
>>> safe_filename = filename.encode('ascii', 'ignore').translate(None, deletechars).decode()
>>> safe_filename
'abc..txt'
It doesn't handle empty strings, special filenames ('nul', 'con', etc).
I liked the python-slugify approach here, but it was also stripping dots away, which was not desired. So I optimized it for uploading a clean filename to S3 this way:
pip install python-slugify
Example code:
import os
from slugify import slugify

s = 'Very / Unsafe / file\nname hähä \n\r .txt'
clean_basename = slugify(os.path.splitext(s)[0])
clean_extension = slugify(os.path.splitext(s)[1][1:])
if clean_extension:
    clean_filename = '{}.{}'.format(clean_basename, clean_extension)
elif clean_basename:
    clean_filename = clean_basename
else:
    clean_filename = 'none'  # only unclean characters
Output:
>>> clean_filename
'very-unsafe-file-name-haha.txt'
This is so failsafe, it works with filenames without an extension, and it even works for filenames made up only of unsafe characters (the result is 'none' here).
Answer modified for python 3.6
import string
import unicodedata

validFilenameChars = "-_.() %s%s" % (string.ascii_letters, string.digits)

def removeDisallowedFilenameChars(filename):
    cleanedFilename = unicodedata.normalize('NFKD', filename).encode('ASCII', 'ignore')
    return ''.join(chr(c) for c in cleanedFilename if chr(c) in validFilenameChars)
Not exactly what OP was asking for but this is what I use because I need unique and reversible conversions:
# p3 code
def safePath(url):
    return ''.join(map(lambda ch: chr(ch) if ch in safePath.chars else '%%%02x' % ch, url.encode('utf-8')))

safePath.chars = set(map(lambda x: ord(x), '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz+-_ .'))
Result is "somewhat" readable, at least from a sysadmin point of view.
When confronted with the same problem I used python-slugify.
Usage was also suggested by Shoham but, as therealmarv pointed out, by default python-slugify also converts dots.
This behaviour can be overruled by including dots into the regex_pattern argument.
> filename = "This is a väryì' Strange File-Nömé.jpeg"
> pattern = re.compile(r'[^-a-zA-Z0-9.]+')
> slugify(filename,regex_pattern=pattern)
'this-is-a-varyi-strange-file-nome.jpeg'
Note that the regex pattern was copied from the
ALLOWED_CHARS_PATTERN_WITH_UPPERCASE
global variable within the slugify.py file of the python-slugify package and extended with "."
Keep in mind that special characters like .() must be escaped with \.
If you want to preserve uppercase letters use the lowercase=False argument.
> filename = "This is a väryì' Strange File-Nömé.jpeg"
> pattern = re.compile(r'[^-a-zA-Z0-9.]+')
> slugify(filename,regex_pattern=pattern, lowercase=False)
'This-is-a-varyi-Strange-File-Nome.jpeg'
This worked using Python 3.8.4 and python-slugify 4.0.1
Yet another answer for Windows specific paths, using simple replacement and no funky modules:
import re

def check_for_illegal_char(input_str):
    # remove illegal characters for Windows file names/paths
    # (illegal filenames are a superset (41) of the illegal path names (36))
    # this is according to the Windows blacklist obtained with PowerShell
    # from: https://stackoverflow.com/questions/1976007/what-characters-are-forbidden-in-windows-and-linux-directory-names/44750843#44750843
    #
    # PS> $enc = [system.Text.Encoding]::UTF8
    # PS> $FileNameInvalidChars = [System.IO.Path]::GetInvalidFileNameChars()
    # PS> $FileNameInvalidChars | foreach { $enc.GetBytes($_) } | Out-File -FilePath InvalidFileCharCodes.txt
    illegal = '\u0022\u003c\u003e\u007c\u0000\u0001\u0002\u0003\u0004\u0005\u0006\u0007\u0008' + \
              '\u0009\u000a\u000b\u000c\u000d\u000e\u000f\u0010\u0011\u0012\u0013\u0014\u0015' + \
              '\u0016\u0017\u0018\u0019\u001a\u001b\u001c\u001d\u001e\u001f\u003a\u002a\u003f\u005c\u002f'
    output_str, _ = re.subn('[' + illegal + ']', '_', input_str)
    output_str = output_str.replace('\\', '_')  # the backslash escapes inside the character class, so handle it separately
    output_str = output_str.replace('..', '_')  # double dots are illegal too, or at least a bad idea
    output_str = output_str[:-1] if output_str[-1] == '.' else output_str  # can't have a trailing '.'
    if output_str != input_str:
        print(f"The name '{input_str}' had invalid characters, "
              f"name was modified to '{output_str}'")
    return output_str
When tested with check_for_illegal_char('fas\u0003\u0004good\\..asd.'), I get:
The name 'fas♥♦good\..asd.' had invalid characters, name was modified to 'fas__good__asd'
Most of these solutions don't work:
'/hello/world' -> 'helloworld'
'/helloworld/' -> 'helloworld'
This isn't what you want generally. Say you are saving the html for each link; you're going to overwrite the html for a different webpage.
I pickle a dict such as:
{'helloworld':
(
{'/hello/world': 'helloworld', '/helloworld/': 'helloworld1'},
2)
}
2 represents the number that should be appended to the next filename.
I look up the filename each time from the dict. If it's not there, I create a new one, appending the max number if needed.
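A minimal sketch of that bookkeeping (the names and structure are mine, mirroring the dict shown above):
registry = {}  # sanitized name -> ({source: final name}, next suffix number)

def unique_filename(source, sanitized):
    mapping, nxt = registry.get(sanitized, ({}, 1))
    if source not in mapping:
        if mapping:  # sanitized name already taken: append the counter
            mapping[source] = sanitized + str(nxt)
            nxt += 1
        else:
            mapping[source] = sanitized
        registry[sanitized] = (mapping, nxt)
    return mapping[source]
With the example above, unique_filename('/hello/world', 'helloworld') gives 'helloworld' and unique_filename('/helloworld/', 'helloworld') gives 'helloworld1'.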
Still haven't found a good library to generate a valid filename. Note that in languages like German, Norwegian or French special characters in filenames are very common and totally OK. So I ended up with my own library:
# util/files.py
CHAR_MAX_LEN = 31
CHAR_REPLACE = '_'
ILLEGAL_CHARS = [
    '#',  # pound
    '%',  # percent
    '&',  # ampersand
    '{',  # left curly bracket
    '}',  # right curly bracket
    '\\', # back slash
    '<',  # left angle bracket
    '>',  # right angle bracket
    '*',  # asterisk
    '?',  # question mark
    '/',  # forward slash
    ' ',  # blank spaces
    '$',  # dollar sign
    '!',  # exclamation point
    "'",  # single quotes
    '"',  # double quotes
    ':',  # colon
    '@',  # at sign
    '+',  # plus sign
    '`',  # backtick
    '|',  # pipe
    '=',  # equal sign
]
def generate_filename(
        name, char_replace=CHAR_REPLACE, length=CHAR_MAX_LEN,
        illegal=ILLEGAL_CHARS, replace_dot=False):
    ''' return clean filename '''
    # init
    _elem = name.split('.')
    extension = _elem[-1].strip()
    _length = length - len(extension) - 1
    label = '.'.join(_elem[:-1]).strip()[:_length]
    filename = ''
    # replace '.' ?
    if replace_dot:
        label = label.replace('.', char_replace)
    # clean
    for char in label + '.' + extension:
        if char in illegal:
            char = char_replace
        filename += char
    return filename

generate_filename('nucgae zutaäer..0.1.docx', replace_dot=False)
'nucgae_zutaäer..0.1.docx'
generate_filename('nucgae zutaäer..0.1.docx', replace_dot=True)
'nucgae_zutaäer__0_1.docx'
I'm sure this isn't a great answer, since it modifies the string it's looping over, but it seems to work alright:
import string

for ch in your_string:
    if ch == ' ':
        your_string = your_string.replace(' ', '_')
    elif ch not in string.ascii_letters and ch not in string.digits:
        your_string = your_string.replace(ch, '')
UPDATE
All links broken beyond repair in this 6-year-old answer.
Also, I wouldn't do it this way anymore; I'd just base64 encode or drop unsafe chars. Python 3 example:
import re

t = re.compile("[a-zA-Z0-9.,_-]")
unsafe = "abc∂éåß®∆˚˙©¬ñ√ƒµ©∆∫ø"
safe = ''.join(ch for ch in unsafe if t.match(ch))
# => 'abc'
With base64 you can encode and decode, so you can retrieve the original filename again.
But depending on the use case you might be better off generating a random filename and storing the metadata in separate file or DB.
from random import choice
from string import ascii_lowercase, ascii_uppercase, digits
allowed_chr = ascii_lowercase + ascii_uppercase + digits
safe = ''.join([choice(allowed_chr) for _ in range(16)])
# => 'CYQ4JDKE9JfcRzAZ'
ORIGINAL LINKROTTEN ANSWER:
The bobcat project contains a python module that does just this.
It's not completely robust, see this post and this reply.
So, as noted: base64 encoding is probably a better idea if readability doesn't matter.
Docs https://svn.origo.ethz.ch/bobcat/src-doc/safefilename-module.html
Source https://svn.origo.ethz.ch/bobcat/trunk/src/bobcatlib/safefilename.py
Here, this should cover all the bases. It handles all types of issues for you, including (but not limited to) character substitution.
Works in Windows, *nix, and almost every other file system. Allows printable characters only.
import re

def txt2filename(txt, chr_set='normal'):
    """Converts txt to a valid Windows/*nix filename with printable characters only.

    args:
        txt: The str to convert.
        chr_set: 'normal', 'universal', or 'extended'.
            'universal': ' -.0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz'
            'normal': Every printable character except those disallowed on Windows/*nix.
            'extended': All 'normal' characters plus the extended-ASCII codes 128-255.
    """
    FILLER = '-'

    # Step 1: Remove excluded characters.
    if chr_set == 'universal':
        # Lookups in a set are O(1) vs O(n) for a str.
        printables = set(' -.0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz')
    else:
        if chr_set == 'normal':
            max_chr = 127
        elif chr_set == 'extended':
            max_chr = 256
        else:
            raise ValueError(f'The chr_set argument may be normal, extended or universal; not {chr_set=}')
        EXCLUDED_CHRS = set(r'<>:"/\|?*')  # Illegal characters in Windows filenames.
        EXCLUDED_CHRS.update(chr(127))     # DEL (non-printable).
        printables = set(chr(x)
                         for x in range(32, max_chr)
                         if chr(x) not in EXCLUDED_CHRS)
    result = ''.join(x if x in printables else FILLER  # Allow printable characters only.
                     for x in txt)

    # Step 2: Device names, '.', and '..' are invalid filenames in Windows.
    DEVICE_NAMES = 'CON,PRN,AUX,NUL,COM1,COM2,COM3,COM4,' \
                   'COM5,COM6,COM7,COM8,COM9,LPT1,LPT2,' \
                   'LPT3,LPT4,LPT5,LPT6,LPT7,LPT8,LPT9,' \
                   'CONIN$,CONOUT$,..,.'.split(',')  # Membership tests on this list are O(n).
    if result in DEVICE_NAMES:
        result = f'-{result}-'

    # Step 3: Maximum length of a filename is 255 bytes in Windows and Linux (other *nix flavors may allow longer names).
    result = result[:255]

    # Step 4: Windows does not allow filenames to end with '.' or ' ' or begin with ' '.
    result = re.sub(r'^[. ]', FILLER, result)
    result = re.sub(r' $', FILLER, result)
    return result
This solution needs no external libraries. It substitutes non-printable characters too, because they are not always simple to deal with.
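For example (my own input; the expected output follows from the rules above):
>>> txt2filename('my:file*name?.txt')
'my-file-name-.txt'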
