urlencode() for RFC 3986 - python

Python has a terrific urlencode() function which encodes a dict via RFC 1738 (Plus encoding):
>>> urllib.parse.urlencode({'site':'Stack Overflow','Coder':'Jeff Atwood'})
'Coder=Jeff+Atwood&site=Stack+Overflow'
I cannot find a replacement that uses RFC 3986 (Percent encoding), even though the fine manual states the following:
RFC 3986 - Uniform Resource Identifiers
This is the current standard (STD66). Any changes to urllib.parse module should conform to this.
This would be the expected output:
>>> urllib.parse.urlencode({'site':'Stack Overflow','Coder':'Jeff Atwood'})
'Coder=Jeff%20Atwood&site=Stack%20Overflow'
Of course I could roll my own, but I find it surprising that I can find no such Python function built in. Is there such a Python function that I'm just not finding?

It seems there is no such thing built in, but there is a bug requesting one, and it even has a patch attached: http://bugs.python.org/issue13866
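Follow-up note: the feature requested there has since landed in newer Python 3 releases (3.5+), where urlencode() accepts a quote_via argument; passing quote instead of the default quote_plus gives RFC 3986 percent encoding. A minimal sketch:
from urllib.parse import urlencode, quote

params = {'site': 'Stack Overflow', 'Coder': 'Jeff Atwood'}
# quote_via=quote percent-encodes spaces as %20 instead of '+'
print(urlencode(params, quote_via=quote))
# site=Stack%20Overflow&Coder=Jeff%20Atwood  (key order follows the dict)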

For strings you can use this:
def percent_encoding(string):
    result = ''
    accepted = [c for c in 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789-._~'.encode('utf-8')]
    for char in string.encode('utf-8'):
        # zero-pad the hex so bytes below 0x10 still produce two digits (e.g. %0A, not %A)
        result += chr(char) if char in accepted else '%{:02X}'.format(char)
    return result
>>> percent_encoding('http://www.google.com')
'http%3A%2F%2Fwww.google.com'
>>> percent_encoding('ñapa')
'%C3%B1apa'
And now, for a dictionary, you need to encode the values, so you only need a function that translates the dictionary into URL key/value pairs, encoding only its values.
def percent_urlencode(dictionary):
    return '&'.join(["{}={}".format(k, percent_encoding(str(v))) for k, v in dictionary.items()])
>>> percent_urlencode({'token': '$%&/', 'username': 'me'})
'username=me&token=%24%25%26%2F'
>>> percent_urlencode({'site':'Stack Overflow','Coder':'Jeff Atwood'})
'site=Stack%20Overflow&Coder=Jeff%20Atwood'
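If you'd rather not hand-roll the byte loop at all, urllib.parse.quote already performs RFC 3986 percent encoding and can be applied to keys as well as values; a small sketch (the helper name is just for illustration), passing safe='' so that '/' is escaped too:
from urllib.parse import quote

def percent_urlencode_via_quote(dictionary):
    # quote() with safe='' escapes every reserved character, keys included
    return '&'.join('{}={}'.format(quote(str(k), safe=''), quote(str(v), safe=''))
                    for k, v in dictionary.items())

>>> percent_urlencode_via_quote({'site': 'Stack Overflow', 'Coder': 'Jeff Atwood'})
'site=Stack%20Overflow&Coder=Jeff%20Atwood'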

Related

Generating comments for a data structure that *wasn't* loaded via RoundTripLoader?

I have a data structure that I would like to add comments to, then convert into YAML.
I'd like to avoid outputting the data structure as YAML and loading it back in using RoundTripLoader.
Is there a way to convert my data structure into one that supports the ruamel.yaml comments interface?
There is a way, although the interface for that is not guaranteed to be stable.
Because of that, and the lack of documentation, it often helps to look at what round_trip_loading() your expected output, or a small sample thereof, gives you.
You have to realise that comments are attached to special versions of the representations of the structured nodes (mapping and sequence). A mapping that would safe_load() as a Python dict is represented as a CommentedMap(), and a sequence that would load as a Python list is represented as a CommentedSeq().
Both of these classes can have a .ca attribute holding the comments, which may occur before the structural node, as end-of-line comments after a key/value pair or item, on their own line between key/value pairs or items, or at the end of a node.
That means you have to convert any dict or list that needs commenting (which can be done automatically and recursively, e.g. by the comment_prep() routine below), and then find the correct point and way to attach the comment. Because the comment manipulation routines have not stabilized, make sure you wrap your comment-adding calls so there is a single place to update in case they do change.
import sys
from ruamel.yaml import round_trip_dump as rtd
from ruamel.yaml.comments import CommentedMap, CommentedSeq

# please note that because of the dict the order of the keys is undetermined
data = dict(a=1, b=2, c=['x', 'y', dict(k='i', l=42, m='∞')])
rtd(data, sys.stdout)
print('-' * 30)

def comment_prep(base):
    """replace all dict with CommentedMap and list with CommentedSeq"""
    if isinstance(base, dict):
        ret_val = CommentedMap()
        for key in sorted(base):  # here we force sorted order
            ret_val[key] = comment_prep(base[key])
        return ret_val
    if isinstance(base, list):
        ret_val = CommentedSeq()
        for item in base:
            ret_val.append(comment_prep(item))
        return ret_val
    return base

data = comment_prep(data)
data['c'][2].yaml_add_eol_comment('# this is the answer', key='l', column=15)
rtd(data, sys.stdout)
gives:
c:
- x
- y
- k: i
  m: ∞
  l: 42
b: 2
a: 1
------------------------------
a: 1
b: 2
c:
- x
- y
- k: i
  l: 42        # this is the answer
  m: ∞
The file test_comment_manipulation.py has some more examples and is a good place to keep an eye on (as the interface changes, so will the tests in that file).
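For orientation, a minimal sketch of two of the other comment helpers (yaml_set_start_comment and yaml_set_comment_before_after_key exist on CommentedMap in the ruamel.yaml versions I have looked at, but, as said above, the interface is not guaranteed to be stable):
import sys
from ruamel.yaml import round_trip_dump as rtd
from ruamel.yaml.comments import CommentedMap

cm = CommentedMap()
cm['a'] = 1
cm['b'] = 2
cm.yaml_set_start_comment('generated file, do not edit')        # comment line before the node
cm.yaml_set_comment_before_after_key('b', before='second key')  # own-line comment before key 'b'
rtd(cm, sys.stdout)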

how to create a dictionary from a set of properly formatted tuples in python

Is there a simple way to create a dictionary from a list of formatted tuples. e.g. if I do something like:
d={"responseStatus":"SUCCESS","sessionId":"01234","userId":2000004904}
This creates a dictionary called d. However, if I want to create a dictionary from a string which contains the same string, I can't do that
res=<some command that returns {"responseStatus":"SUCCESS","sessionId":"01234","userId":2000004904}>
print res
# returns {"responseStatus":"SUCCESS","sessionId":"01234","userId":2000004904}
d=dict(res)
This throws an error that says:
ValueError: dictionary update sequence element #0 has length 1; 2 is required
I strongly suspect that you have JSON on your hands.
import json
d = json.loads('{"responseStatus":"SUCCESS","sessionId":"01234","userId":2000004904}')
would give you what you want.
Use dict(zip(keys, values)):
>>> u = ("foo", "bar")
>>> v = ("blah", "zoop")
>>> d = dict(zip(u, v))
>>> d
{'foo': 'blah', 'bar': 'zoop'}
Note that if the two tuples have different lengths, zip() silently truncates to the shorter one, so some items will be dropped.
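As an aside, if you already have a sequence of (key, value) pairs, dict() accepts it directly and no zip() is needed; a quick sketch:
>>> pairs = [("responseStatus", "SUCCESS"), ("sessionId", "01234"), ("userId", 2000004904)]
>>> d = dict(pairs)
>>> d["sessionId"]
'01234'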
Based on what you gave, res is
# returns {"responseStatus":"SUCCESS","sessionId":"01234","userId":2000004904}
So the plan is to grab the string starting at the curly brace to the end and use json to decode it:
import json
# Discard the text before the curly brace
res = res[res.index('{'):]
# Turn that text into a dictionary
d = json.loads(res)
All you need to do in your particular case is
d = eval(res)
And please keep security in mind when using eval, especially if you're mixing it with ajax/json.
UPDATE
Since others pointed out you might be getting this data over the web and it isn't just a "how to make this work" question, use this:
import json
json.loads(res)

Removing 'u' character from the output of json.loads(jsonstring) [duplicate]

I'm using Python 2 to parse JSON from ASCII encoded text files.
When loading these files with either json or simplejson, all my string values are cast to Unicode objects instead of string objects. The problem is, I have to use the data with some libraries that only accept string objects. I can't change the libraries nor update them.
Is it possible to get string objects instead of Unicode ones?
Example
>>> import json
>>> original_list = ['a', 'b']
>>> json_list = json.dumps(original_list)
>>> json_list
'["a", "b"]'
>>> new_list = json.loads(json_list)
>>> new_list
[u'a', u'b'] # I want these to be of type `str`, not `unicode`
(One easy and clean solution for 2017 is to use a recent version of Python — i.e. Python 3 and forward.)
While there are some good answers here, I ended up using PyYAML to parse my JSON files, since it gives the keys and values as str type strings instead of the unicode type. Because JSON is a subset of YAML, it works nicely:
>>> import json
>>> import yaml
>>> list_org = ['a', 'b']
>>> list_dump = json.dumps(list_org)
>>> list_dump
'["a", "b"]'
>>> json.loads(list_dump)
[u'a', u'b']
>>> yaml.safe_load(list_dump)
['a', 'b']
Notes
Some things to note though:
I get string objects because all my entries are ASCII encoded. If I were to use Unicode-encoded entries, I would get them back as unicode objects; there is no conversion!
You should (probably always) use PyYAML's safe_load function; if you use it to load JSON files, you don't need the "additional power" of the load function anyway.
If you want a YAML parser that has more support for the 1.2 version of the spec (and correctly parses very low numbers) try Ruamel YAML: pip install ruamel.yaml and import ruamel.yaml as yaml was all I needed in my tests.
Conversion
As stated, there isn't any conversion! If you can't be sure to only deal with ASCII values (and you can't be sure most of the time), better use a conversion function:
I used the one from Mark Amery a couple of times now, it works great and is very easy to use. You can also use a similar function as an object_hook instead, as it might gain you a performance boost on big files. See the slightly more involved answer from Mirec Miskuf for that.
There's no built-in option to make the json module functions return byte strings instead of Unicode strings. However, this short and simple recursive function will convert any decoded JSON object from using Unicode strings to UTF-8-encoded byte strings:
def byteify(input):
    if isinstance(input, dict):
        return {byteify(key): byteify(value)
                for key, value in input.iteritems()}
    elif isinstance(input, list):
        return [byteify(element) for element in input]
    elif isinstance(input, unicode):
        return input.encode('utf-8')
    else:
        return input
Just call this on the output you get from a json.load or json.loads call.
A couple of notes:
To support Python 2.6 or earlier, replace return {byteify(key): byteify(value) for key, value in input.iteritems()} with return dict([(byteify(key), byteify(value)) for key, value in input.iteritems()]), since dictionary comprehensions weren't supported until Python 2.7.
Since this answer recurses through the entire decoded object, it has a couple of undesirable performance characteristics that can be avoided with very careful use of the object_hook or object_pairs_hook parameters. Mirec Miskuf's answer is so far the only one that manages to pull this off correctly, although as a consequence, it's significantly more complicated than my approach.
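For illustration, a quick usage sketch under Python 2 (the unicode type this relies on does not exist in Python 3):
import json

data = byteify(json.loads('{"greeting": "hello", "items": ["a", "b"]}'))
# every key and value is now a plain UTF-8 encoded str, e.g. data['items'] == ['a', 'b']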
A solution with object_hook
It works for both Python 2.7 and 3.x.
import json

def json_load_byteified(file_handle):
    return _byteify(
        json.load(file_handle, object_hook=_byteify),
        ignore_dicts=True
    )

def json_loads_byteified(json_text):
    return _byteify(
        json.loads(json_text, object_hook=_byteify),
        ignore_dicts=True
    )

def _byteify(data, ignore_dicts=False):
    if isinstance(data, str):
        return data
    # If this is a list of values, return list of byteified values
    if isinstance(data, list):
        return [_byteify(item, ignore_dicts=True) for item in data]
    # If this is a dictionary, return dictionary of byteified keys and values
    # but only if we haven't already byteified it
    if isinstance(data, dict) and not ignore_dicts:
        return {
            _byteify(key, ignore_dicts=True): _byteify(value, ignore_dicts=True)
            for key, value in data.items()  # changed to .items() for Python 2.7/3
        }
    # Python 3 compatible duck-typing
    # If this is a Unicode string, return its string representation
    if str(type(data)) == "<type 'unicode'>":
        return data.encode('utf-8')
    # If it's anything else, return it in its original form
    return data
Example usage:
>>> json_loads_byteified('{"Hello": "World"}')
{'Hello': 'World'}
>>> json_loads_byteified('"I am a top-level string"')
'I am a top-level string'
>>> json_loads_byteified('7')
7
>>> json_loads_byteified('["I am inside a list"]')
['I am inside a list']
>>> json_loads_byteified('[[[[[[[["I am inside a big nest of lists"]]]]]]]]')
[[[[[[[['I am inside a big nest of lists']]]]]]]]
>>> json_loads_byteified('{"foo": "bar", "things": [7, {"qux": "baz", "moo": {"cow": ["milk"]}}]}')
{'things': [7, {'qux': 'baz', 'moo': {'cow': ['milk']}}], 'foo': 'bar'}
>>> json_load_byteified(open('somefile.json'))
{'more json': 'from a file'}
How does this work and why would I use it?
Mark Amery's function is shorter and clearer than these ones, so what's the point of them? Why would you want to use them?
Purely for performance. Mark's answer decodes the JSON text fully first with Unicode strings, then recurses through the entire decoded value to convert all strings to byte strings. This has a couple of undesirable effects:
A copy of the entire decoded structure gets created in memory
If your JSON object is really deeply nested (500 levels or more) then you'll hit Python's maximum recursion depth
This answer mitigates both of those performance issues by using the object_hook parameter of json.load and json.loads. From the documentation:
object_hook is an optional function that will be called with the result of any object literal decoded (a dict). The return value of object_hook will be used instead of the dict. This feature can be used to implement custom decoders
Since dictionaries nested many levels deep in other dictionaries get passed to object_hook as they're decoded, we can byteify any strings or lists inside them at that point and avoid the need for deep recursion later.
Mark's answer isn't suitable for use as an object_hook as it stands, because it recurses into nested dictionaries. We prevent that recursion in this answer with the ignore_dicts parameter to _byteify, which gets passed to it at all times except when object_hook passes it a new dict to byteify. The ignore_dicts flag tells _byteify to ignore dicts since they have already been byteified.
Finally, our implementations of json_load_byteified and json_loads_byteified call _byteify (with ignore_dicts=True) on the result returned from json.load or json.loads to handle the case where the JSON text being decoded doesn't have a dict at the top level.
You can use the object_hook parameter for json.loads to pass in a converter. You don't have to do the conversion after the fact. The json module will only ever pass dicts to the object_hook, and it will recursively pass in nested dicts, so you don't have to recurse into nested dicts yourself. I don't think I would convert Unicode strings to numbers like Wells shows. If it's a Unicode string, it was quoted as a string in the JSON file, so it is supposed to be a string (or the file is bad).
Also, I'd try to avoid doing something like str(val) on a unicode object. You should use value.encode(encoding) with a valid encoding, depending on what your external library expects.
So, for example:
def _decode_list(data):
    rv = []
    for item in data:
        if isinstance(item, unicode):
            item = item.encode('utf-8')
        elif isinstance(item, list):
            item = _decode_list(item)
        elif isinstance(item, dict):
            item = _decode_dict(item)
        rv.append(item)
    return rv

def _decode_dict(data):
    rv = {}
    for key, value in data.iteritems():
        if isinstance(key, unicode):
            key = key.encode('utf-8')
        if isinstance(value, unicode):
            value = value.encode('utf-8')
        elif isinstance(value, list):
            value = _decode_list(value)
        elif isinstance(value, dict):
            value = _decode_dict(value)
        rv[key] = value
    return rv

obj = json.loads(s, object_hook=_decode_dict)
That's because JSON makes no distinction between string objects and Unicode objects. They're all strings in JavaScript.
I think JSON is right to return Unicode objects. In fact, I wouldn't accept anything less, since JavaScript strings are in fact unicode objects (i.e., JSON (JavaScript) strings can store any kind of Unicode character), so it makes sense to create unicode objects when translating strings from JSON. Plain strings just wouldn't fit since the library would have to guess the encoding you want.
It's better to use unicode string objects everywhere. So your best option is to update your libraries so they can deal with Unicode objects.
But if you really want bytestrings, just encode the results to the encoding of your choice:
>>> nl = json.loads(js)
>>> nl
[u'a', u'b']
>>> nl = [s.encode('utf-8') for s in nl]
>>> nl
['a', 'b']
There exists an easy work-around.
TL;DR - Use ast.literal_eval() instead of json.loads(). Both ast and json are in the standard library.
While not a 'perfect' answer, it gets one pretty far if your plan is to ignore Unicode altogether. In Python 2.7
import json, ast
d = { 'field' : 'value' }
print "JSON Fail: ", json.loads(json.dumps(d))
print "AST Win:", ast.literal_eval(json.dumps(d))
gives:
JSON Fail: {u'field': u'value'}
AST Win: {'field': 'value'}
This gets more hairy when some objects are really Unicode strings. The full answer gets hairy quickly.
Mike Brennan's answer is close, but there isn't any reason to retraverse the entire structure. If you use the object_pairs_hook (Python 2.7+) parameter:
object_pairs_hook is an optional function that will be called with the result of any object literal decoded with an ordered list of pairs. The return value of object_pairs_hook will be used instead of the dict. This feature can be used to implement custom decoders that rely on the order that the key and value pairs are decoded (for example, collections.OrderedDict will remember the order of insertion). If object_hook is also defined, the object_pairs_hook takes priority.
With it, you get each JSON object handed to you, so you can do the decoding with no need for recursion:
def deunicodify_hook(pairs):
    new_pairs = []
    for key, value in pairs:
        if isinstance(value, unicode):
            value = value.encode('utf-8')
        if isinstance(key, unicode):
            key = key.encode('utf-8')
        new_pairs.append((key, value))
    return dict(new_pairs)
In [52]: open('test.json').read()
Out[52]: '{"1": "hello", "abc": [1, 2, 3], "def": {"hi": "mom"}, "boo": [1, "hi", "moo", {"5": "some"}]}'
In [53]: json.load(open('test.json'))
Out[53]:
{u'1': u'hello',
u'abc': [1, 2, 3],
u'boo': [1, u'hi', u'moo', {u'5': u'some'}],
u'def': {u'hi': u'mom'}}
In [54]: json.load(open('test.json'), object_pairs_hook=deunicodify_hook)
Out[54]:
{'1': 'hello',
'abc': [1, 2, 3],
'boo': [1, 'hi', 'moo', {'5': 'some'}],
'def': {'hi': 'mom'}}
Notice that I never have to call the hook recursively since every object will get handed to the hook when you use the object_pairs_hook. You do have to care about lists, but as you can see, an object within a list will be properly converted, and you don't have to recurse to make it happen.
A coworker pointed out that Python 2.6 doesn't have object_pairs_hook. You can still use this with Python 2.6 by making a very small change. In the hook above, change:
for key, value in pairs:
to
for key, value in pairs.iteritems():
Then use object_hook instead of object_pairs_hook:
In [66]: json.load(open('test.json'), object_hook=deunicodify_hook)
Out[66]:
{'1': 'hello',
'abc': [1, 2, 3],
'boo': [1, 'hi', 'moo', {'5': 'some'}],
'def': {'hi': 'mom'}}
Using object_pairs_hook results in one less dictionary being instantiated for each object in the JSON document, which, if you were parsing a huge document, might be worthwhile.
I'm afraid there isn't any way to achieve this automatically within the simplejson library.
The scanner and decoder in simplejson are designed to produce Unicode text. To do this, the library uses a function called c_scanstring (if it's available, for speed), or py_scanstring if the C version is not available. The scanstring function is called several times by nearly every routine that simplejson has for decoding a structure that might contain text. You'd have to either monkey patch the scanstring value in simplejson.decoder, or subclass JSONDecoder and provide pretty much your own entire implementation of anything that might contain text.
The reason that simplejson outputs Unicode, however, is that the JSON specification specifically mentions that "A string is a collection of zero or more Unicode characters"... support for Unicode is assumed as part of the format itself. simplejson's scanstring implementation goes so far as to scan and interpret Unicode escapes (even error-checking for malformed multi-byte charset representations), so the only way it can reliably return the value to you is as Unicode.
If you have an aged library that needs an str, I recommend you either laboriously search the nested data structure after parsing (which I acknowledge is what you explicitly said you wanted to avoid... sorry), or perhaps wrap your libraries in some sort of facade where you can massage the input parameters at a more granular level. The second approach might be more manageable than the first if your data structures are indeed deeply nested.
As Mark (Amery) correctly notes: using PyYAML's deserializer on a JSON dump works only if the data is ASCII-only. At least out of the box.
Two quick comments on the PyYAML approach:
Never use yaml.load() on data from the field. It’s a feature(!) of YAML to execute arbitrary code hidden within the structure.
You can also make it work for non-ASCII data via this:
def to_utf8(loader, node):
    return loader.construct_scalar(node).encode('utf-8')

yaml.add_constructor(u'tag:yaml.org,2002:str', to_utf8)
But performance-wise, it doesn't compare to Mark Amery's answer:
Throwing some deeply-nested sample dicts onto the two methods, I get this (with dt[j] = time delta of json.loads(json.dumps(m))):
dt[yaml.safe_load(json.dumps(m))] =~ 100 * dt[j]
dt[byteify recursion(Mark Amery)] =~ 5 * dt[j]
So deserialization, including fully walking the tree and encoding, is well within an order of magnitude of JSON's C-based implementation. I find this remarkably fast, and it is also more robust than the YAML load on deeply nested structures, and less prone to security problems, looking at yaml.load.
=> While I would appreciate a pointer to a C-only based converter, the byteify function should be the default answer.
This holds especially true if your JSON structure comes from the field and contains user input, because then you probably need to walk over your structure anyway, independent of your desired internal data structures ('unicode sandwich' or byte strings only).
Why?
Unicode normalisation. For the unaware: Take a painkiller and read this.
So using the byteify recursion you kill two birds with one stone:
get your bytestrings from nested JSON dumps
get user input values normalised, so that you find the stuff in your storage.
In my tests it turned out that replacing the input.encode('utf-8') with a unicodedata.normalize('NFC', input).encode('utf-8') was even faster than without NFC - but that’s heavily dependent on the sample data I guess.
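A minimal sketch of that variant, assuming Python 2 and the byteify shape from Mark Amery's answer above:
import unicodedata

def byteify_nfc(input):
    # same recursion as byteify, but NFC-normalises unicode text before encoding
    if isinstance(input, dict):
        return {byteify_nfc(k): byteify_nfc(v) for k, v in input.iteritems()}
    if isinstance(input, list):
        return [byteify_nfc(e) for e in input]
    if isinstance(input, unicode):
        return unicodedata.normalize('NFC', input).encode('utf-8')
    return input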
The gotcha is that simplejson and json are two different modules, at least in the manner they deal with Unicode. You have json in Python 2.6+, and this gives you Unicode values, whereas simplejson returns string objects.
Just try easy_install-ing simplejson in your environment and see if that works. It did for me.
Just use pickle instead of json for dump and load, like so:
import json
import pickle
d = { 'field1': 'value1', 'field2': 2, }
json.dump(d,open("testjson.txt","w"))
print json.load(open("testjson.txt","r"))
pickle.dump(d,open("testpickle.txt","w"))
print pickle.load(open("testpickle.txt","r"))
The output it produces is (strings and integers are handled correctly):
{u'field2': 2, u'field1': u'value1'}
{'field2': 2, 'field1': 'value1'}
I had a JSON dict as a string. The keys and values were Unicode objects like in the following example:
myStringDict = "{u'key':u'value'}"
I could use the byteify function suggested above by converting the string to a dict object using ast.literal_eval(myStringDict).
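In other words (a small sketch, assuming Python 2 and the byteify function from above):
import ast

myStringDict = "{u'key': u'value'}"
d = byteify(ast.literal_eval(myStringDict))
# d == {'key': 'value'}, with plain str keys and values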
So, I've run into the same problem.
Because I need to pass all data to PyGTK, Unicode strings aren't very useful to me either. So I have another recursive conversion method. It's actually also needed for type-safe JSON conversion - json.dump() would bail on any non-literals, like Python objects. It doesn't convert dict keys though.
# removes any objects, turns Unicode back into str
def filter_data(obj):
    if type(obj) in (int, float, str, bool):
        return obj
    elif type(obj) == unicode:
        return str(obj)
    elif type(obj) in (list, tuple, set):
        obj = list(obj)
        for i, v in enumerate(obj):
            obj[i] = filter_data(v)
    elif type(obj) == dict:
        for i, v in obj.iteritems():
            obj[i] = filter_data(v)
    else:
        print "invalid object in data, converting to string"
        obj = str(obj)
    return obj
Support for Python 2 and 3 using a hook (from Mirec Miskuf's answer):
import requests
import six
from six import iteritems
requests.packages.urllib3.disable_warnings() # #UndefinedVariable
r = requests.get("http://echo.jsontest.com/key/value/one/two/three", verify=False)
def _byteify(data):
    # If this is a Unicode string, return its string representation
    if isinstance(data, six.string_types):
        return str(data.encode('utf-8').decode())
    # If this is a list of values, return list of byteified values
    if isinstance(data, list):
        return [_byteify(item) for item in data]
    # If this is a dictionary, return dictionary of byteified keys and values,
    # but only if we haven't already byteified it
    if isinstance(data, dict):
        return {
            _byteify(key): _byteify(value) for key, value in iteritems(data)
        }
    # If it's anything else, return it in its original form
    return data
w = r.json(object_hook=_byteify)
print(w)
Returns:
{'three': '', 'key': 'value', 'one': 'two'}
I built this recursive caster. It works for my needs and I think it's relatively complete.
def _parseJSON(self, obj):
    newobj = {}
    for key, value in obj.iteritems():
        key = str(key)
        if isinstance(value, dict):
            newobj[key] = self._parseJSON(value)
        elif isinstance(value, list):
            if key not in newobj:
                newobj[key] = []
            for i in value:
                newobj[key].append(self._parseJSON(i))
        elif isinstance(value, unicode):
            val = str(value)
            if val.isdigit():
                val = int(val)
            else:
                try:
                    val = float(val)
                except ValueError:
                    val = str(val)
            newobj[key] = val
    return newobj
Just pass it a JSON object like so:
obj = json.loads(content, parse_float=float, parse_int=int)
obj = _parseJSON(obj)
I have it as a private member of a class, but you can repurpose the method as you see fit.
I rewrote Wells's _parseJSON() to handle cases where the JSON object itself is an array (my use case).
def _parseJSON(self, obj):
    if isinstance(obj, dict):
        newobj = {}
        for key, value in obj.iteritems():
            key = str(key)
            newobj[key] = self._parseJSON(value)
    elif isinstance(obj, list):
        newobj = []
        for value in obj:
            newobj.append(self._parseJSON(value))
    elif isinstance(obj, unicode):
        newobj = str(obj)
    else:
        newobj = obj
    return newobj
Here is a recursive encoder written in C:
https://github.com/axiros/nested_encode
The performance overhead for "average" structures is around 10% compared to json.loads().
python speed.py
json loads [0.16sec]: {u'a': [{u'b': [[1, 2, [u'\xd6ster..
json loads + encoding [0.18sec]: {'a': [{'b': [[1, 2, ['\xc3\x96ster.
time overhead in percent: 9%
using this test structure:
import json, nested_encode, time

s = """
{
  "firstName": "Jos\\u0301",
  "lastName": "Smith",
  "isAlive": true,
  "age": 25,
  "address": {
    "streetAddress": "21 2nd Street",
    "city": "\\u00d6sterreich",
    "state": "NY",
    "postalCode": "10021-3100"
  },
  "phoneNumbers": [
    {
      "type": "home",
      "number": "212 555-1234"
    },
    {
      "type": "office",
      "number": "646 555-4567"
    }
  ],
  "children": [],
  "spouse": null,
  "a": [{"b": [[1, 2, ["\\u00d6sterreich"]]]}]
}
"""

t1 = time.time()
for i in xrange(10000):
    u = json.loads(s)
dt_json = time.time() - t1

t1 = time.time()
for i in xrange(10000):
    b = nested_encode.encode_nested(json.loads(s))
dt_json_enc = time.time() - t1

print "json loads [%.2fsec]: %s..." % (dt_json, str(u)[:20])
print "json loads + encoding [%.2fsec]: %s..." % (dt_json_enc, str(b)[:20])
print "time overhead in percent: %i%%" % (100 * (dt_json_enc - dt_json) / dt_json)
With Python 3.6, sometimes I still run into this problem. For example, when getting a response from a REST API and loading the response text to JSON, I still get the Unicode strings.
Found a simple solution using json.dumps().
response_message = json.loads(json.dumps(response.text))
print(response_message)
I ran into this problem too, and having to deal with JSON, I came up with a small loop that converts the Unicode keys to strings. (simplejson on GAE does not return string keys.)
obj is the object decoded from JSON:
if NAME_CLASS_MAP.has_key(cls):
    kwargs = {}
    for i in obj.keys():
        kwargs[str(i)] = obj[i]
    o = NAME_CLASS_MAP[cls](**kwargs)
    o.save()
kwargs is what I pass to the constructor of the GAE application (which does not like Unicode keys in **kwargs).
It is not as robust as the solution from Wells, but much smaller.
I've adapted the code from Mark Amery's answer, particularly in order to get rid of isinstance in favour of duck typing.
The encoding is done manually and ensure_ascii is disabled. The Python documentation for json.dump says that:
If ensure_ascii is True (the default), all non-ASCII characters in the output are escaped with \uXXXX sequences
Disclaimer: in the doctest I used the Hungarian language. Some notable Hungarian-related character encodings are: cp852, the IBM/OEM encoding used e.g. in DOS (sometimes referred to as ASCII, incorrectly I think, since it depends on the code page setting); Windows-1250, used e.g. in Windows (sometimes referred to as ANSI, dependent on the locale settings); and ISO 8859-1, sometimes used on HTTP servers.
The test text Tüskéshátú kígyóbűvölő is attributed to Koltai László (native personal name form) and is from Wikipedia.
# coding: utf-8
"""
This file should be encoded correctly with utf-8.
"""
import json

def encode_items(input, encoding='utf-8'):
    u"""original from: https://stackoverflow.com/a/13101776/611007
    adapted by SO/u/611007 (20150623)
    >>>
    >>> ## run this with `python -m doctest <this file>.py` from command line
    >>>
    >>> txt = u"Tüskéshátú kígyóbűvölő"
    >>> txt2 = u"T\\u00fcsk\\u00e9sh\\u00e1t\\u00fa k\\u00edgy\\u00f3b\\u0171v\\u00f6l\\u0151"
    >>> txt3 = u"uúuutifu"
    >>> txt4 = b'u\\xfauutifu'
    >>> # txt4 shouldn't be 'u\\xc3\\xbauutifu', string content needs double backslash for doctest:
    >>> assert u'\\u0102' not in b'u\\xfauutifu'.decode('cp1250')
    >>> txt4u = txt4.decode('cp1250')
    >>> assert txt4u == u'u\\xfauutifu', repr(txt4u)
    >>> txt5 = b"u\\xc3\\xbauutifu"
    >>> txt5u = txt5.decode('utf-8')
    >>> txt6 = u"u\\u251c\\u2551uutifu"
    >>> there_and_back_again = lambda t: encode_items(t, encoding='utf-8').decode('utf-8')
    >>> assert txt == there_and_back_again(txt)
    >>> assert txt == there_and_back_again(txt2)
    >>> assert txt3 == there_and_back_again(txt3)
    >>> assert txt3.encode('cp852') == there_and_back_again(txt4u).encode('cp852')
    >>> assert txt3 == txt4u,(txt3,txt4u)
    >>> assert txt3 == there_and_back_again(txt5)
    >>> assert txt3 == there_and_back_again(txt5u)
    >>> assert txt3 == there_and_back_again(txt4u)
    >>> assert txt3.encode('cp1250') == encode_items(txt4, encoding='utf-8')
    >>> assert txt3.encode('utf-8') == encode_items(txt5, encoding='utf-8')
    >>> assert txt2.encode('utf-8') == encode_items(txt, encoding='utf-8')
    >>> assert {'a':txt2.encode('utf-8')} == encode_items({'a':txt}, encoding='utf-8')
    >>> assert [txt2.encode('utf-8')] == encode_items([txt], encoding='utf-8')
    >>> assert [[txt2.encode('utf-8')]] == encode_items([[txt]], encoding='utf-8')
    >>> assert [{'a':txt2.encode('utf-8')}] == encode_items([{'a':txt}], encoding='utf-8')
    >>> assert {'b':{'a':txt2.encode('utf-8')}} == encode_items({'b':{'a':txt}}, encoding='utf-8')
    """
    try:
        input.iteritems
        return {encode_items(k): encode_items(v) for (k,v) in input.iteritems()}
    except AttributeError:
        if isinstance(input, unicode):
            return input.encode(encoding)
        elif isinstance(input, str):
            return input
        try:
            iter(input)
            return [encode_items(e) for e in input]
        except TypeError:
            return input

def alt_dumps(obj, **kwargs):
    """
    >>> alt_dumps({'a': u"T\\u00fcsk\\u00e9sh\\u00e1t\\u00fa k\\u00edgy\\u00f3b\\u0171v\\u00f6l\\u0151"})
    '{"a": "T\\xc3\\xbcsk\\xc3\\xa9sh\\xc3\\xa1t\\xc3\\xba k\\xc3\\xadgy\\xc3\\xb3b\\xc5\\xb1v\\xc3\\xb6l\\xc5\\x91"}'
    """
    if 'ensure_ascii' in kwargs:
        del kwargs['ensure_ascii']
    return json.dumps(encode_items(obj), ensure_ascii=False, **kwargs)
I'd also like to highlight the answer of Jarret Hardie which references the JSON specification, quoting:
A string is a collection of zero or more Unicode characters
In my use case, I had files with JSON content. They are UTF-8 encoded files. ensure_ascii results in properly escaped, but not very readable JSON files, and that is why I've adapted Mark Amery's answer to fit my needs.
The doctest is not particularly thoughtful, but I share the code in the hope that it will be useful for someone.
Check out this answer to a similar question, which states that:
The u- prefix just means that you have a Unicode string. When you really use the string, it won't appear in your data. Don't be thrown by the printed output.
For example, try this:
print mail_accounts[0]["i"]
You won't see a u.
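To make that concrete, a tiny sketch under Python 2 (mail_accounts here is just a made-up value):
>>> mail_accounts = [{u"i": u"address@example.com"}]
>>> mail_accounts[0]["i"]
u'address@example.com'
>>> print mail_accounts[0]["i"]
address@example.com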

urllib.urlencode doesn't like unicode values: how about this workaround?

If I have an object like:
d = {'a':1, 'en': 'hello'}
...then I can pass it to urllib.urlencode, no problem:
percent_escaped = urlencode(d)
print percent_escaped
But if I try to pass an object with a value of type unicode, game over:
d2 = {'a':1, 'en': 'hello', 'pt': u'olá'}
percent_escaped = urlencode(d2)
print percent_escaped # This fails with a UnicodeEncodingError
So my question is about a reliable way to prepare an object to be passed to urlencode.
I came up with this function where I simply iterate through the object and encode values of type string or unicode:
def encode_object(object):
for k,v in object.items():
if type(v) in (str, unicode):
object[k] = v.encode('utf-8')
return object
This seems to work:
d2 = {'a':1, 'en': 'hello', 'pt': u'olá'}
percent_escaped = urlencode(encode_object(d2))
print percent_escaped
And that outputs a=1&en=hello&pt=ol%C3%A1, ready for passing to a POST call or whatever.
But my encode_object function just looks really shaky to me. For one thing, it doesn't handle nested objects.
For another, I'm nervous about that if statement. Are there any other types that I should be taking into account?
And is comparing the type() of something to the native object like this good practice?
type(v) in (str, unicode) # not so sure about this...
Thanks!
You should indeed be nervous. The whole idea that you might have a mixture of bytes and text in some data structure is horrifying. It violates the fundamental principle of working with string data: decode at input time, work exclusively in unicode, encode at output time.
Update in response to comment:
You are about to output some sort of HTTP request. This needs to be prepared as a byte string. The fact that urllib.urlencode is not capable of properly preparing that byte string if there are unicode characters with ordinal >= 128 in your dict is indeed unfortunate. If you have a mixture of byte strings and unicode strings in your dict, you need to be careful. Let's examine just what urlencode() does:
>>> import urllib
>>> tests = ['\x80', '\xe2\x82\xac', 1, '1', u'1', u'\x80', u'\u20ac']
>>> for test in tests:
...     print repr(test), repr(urllib.urlencode({'a':test}))
...
'\x80' 'a=%80'
'\xe2\x82\xac' 'a=%E2%82%AC'
1 'a=1'
'1' 'a=1'
u'1' 'a=1'
u'\x80'
Traceback (most recent call last):
  File "<stdin>", line 2, in <module>
  File "C:\python27\lib\urllib.py", line 1282, in urlencode
    v = quote_plus(str(v))
UnicodeEncodeError: 'ascii' codec can't encode character u'\x80' in position 0: ordinal not in range(128)
The last two tests demonstrate the problem with urlencode(). Now let's look at the str tests.
If you insist on having a mixture, then you should at the very least ensure that the str objects are encoded in UTF-8.
'\x80' is suspicious -- it is not the result of any_valid_unicode_string.encode('utf8').
'\xe2\x82\xac' is OK; it's the result of u'\u20ac'.encode('utf8').
'1' is OK -- all ASCII characters are OK on input to urlencode(), which will percent-encode characters such as '%' if necessary.
Here's a suggested converter function. It doesn't mutate the input dict as well as returning it (as yours does); it returns a new dict. It forces an exception if a value is a str object but is not a valid UTF-8 string. By the way, your concern about it not handling nested objects is a little misdirected -- your code works only with dicts, and the concept of nested dicts doesn't really fly.
def encoded_dict(in_dict):
    out_dict = {}
    for k, v in in_dict.iteritems():
        if isinstance(v, unicode):
            v = v.encode('utf8')
        elif isinstance(v, str):
            # Must be encoded in UTF-8
            v.decode('utf8')
        out_dict[k] = v
    return out_dict
and here's the output, using the same tests in reverse order (because the nasty one is at the front this time):
>>> for test in tests[::-1]:
...     print repr(test), repr(urllib.urlencode(encoded_dict({'a':test})))
...
u'\u20ac' 'a=%E2%82%AC'
u'\x80' 'a=%C2%80'
u'1' 'a=1'
'1' 'a=1'
1 'a=1'
'\xe2\x82\xac' 'a=%E2%82%AC'
'\x80'
Traceback (most recent call last):
  File "<stdin>", line 2, in <module>
  File "<stdin>", line 8, in encoded_dict
  File "C:\python27\lib\encodings\utf_8.py", line 16, in decode
    return codecs.utf_8_decode(input, errors, True)
UnicodeDecodeError: 'utf8' codec can't decode byte 0x80 in position 0: invalid start byte
>>>
Does that help?
I had the same problem with German "Umlaute".
The solution is pretty simple:
In Python 3, urllib.parse.urlencode lets you specify the encoding:
from urllib.parse import urlencode

args = {'a': 1, 'en': 'hello', 'pt': u'olá'}
urlencode(args, encoding='utf-8')
>>> 'a=1&en=hello&pt=ol%C3%A1'
Seems like it is a wider topic than it looks, especially when you have to deal with more complex dictionary values. I found 3 ways of solving the problem:
Patch urllib.py to include encoding parameter:
def urlencode(query, doseq=0, encoding='ascii'):
and replace all str(v) conversions to something like v.encode(encoding)
Obviously not good, since it's hardly redistributable and even harder to maintain.
Change the default Python encoding as described here. The author of the blog pretty clearly describes some problems with this solution, and who knows how many more of them could be lurking in the shadows. So it doesn't look good to me either.
So I, personally, ended up with this abomination, which encodes all unicode strings to UTF-8 byte strings in any (reasonably) complex structure:
def encode_obj(in_obj):

    def encode_list(in_list):
        out_list = []
        for el in in_list:
            out_list.append(encode_obj(el))
        return out_list

    def encode_dict(in_dict):
        out_dict = {}
        for k, v in in_dict.iteritems():
            out_dict[k] = encode_obj(v)
        return out_dict

    if isinstance(in_obj, unicode):
        return in_obj.encode('utf-8')
    elif isinstance(in_obj, list):
        return encode_list(in_obj)
    elif isinstance(in_obj, tuple):
        return tuple(encode_list(in_obj))
    elif isinstance(in_obj, dict):
        return encode_dict(in_obj)

    return in_obj
You can use it like this: urllib.urlencode(encode_obj(complex_dictionary))
To encode keys also, out_dict[k] can be replaced with out_dict[k.encode('utf-8')], but it was a bit too much for me.
It seems that you can't pass a Unicode object to urlencode, so, before calling it, you should encode every unicode parameter. How to do this properly seems very dependent on the context, but in your code you should always be aware of when to use the unicode Python object (the text representation) and when to use the encoded object (the bytestring).
Also, encoding the str values is "superfluous": What is the difference between encode/decode?
Nothing new to add except to point out that the urlencode algorithm is nothing tricky.
Rather than processing your data once and then calling urlencode on it, it would be perfectly fine to do something like:
from urllib import quote_plus

def urlencode_utf8(params):
    if hasattr(params, 'items'):
        params = params.items()
    return '&'.join(
        (quote_plus(k.encode('utf8'), safe='/') + '=' + quote_plus(v.encode('utf8'), safe='/')
         for k, v in params))
Looking at the source code for the urllib module (Python 2.6), their implementation does not do much more. There is an optional feature where values in the parameters that are themselves 2-tuples are turned into separate key-value pairs, which is sometimes useful, but if you know you won't need that, the above will do.
You can even get rid of the if hasattr(params, 'items') check if you know you won't need to handle lists of 2-tuples as well as dicts.
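For illustration, a quick usage sketch (Python 2, since quote_plus is imported from urllib here; key order follows the dict and may vary):
>>> urlencode_utf8({u'pt': u'olá', u'en': u'hello'})
'pt=ol%C3%A1&en=hello'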
I solved it with this add_get_to_url() method:
import urllib

def add_get_to_url(url, get):
    return '%s?%s' % (url, urllib.urlencode(list(encode_dict_to_bytes(get))))

def encode_dict_to_bytes(query):
    if hasattr(query, 'items'):
        query = query.items()
    for key, value in query:
        yield (encode_value_to_bytes(key), encode_value_to_bytes(value))

def encode_value_to_bytes(value):
    if not isinstance(value, unicode):
        return str(value)
    return value.encode('utf8')
Features:
"get" can be a dict or a list of (key, value) pairs
Order is not lost
values can be integers or other simple datatypes.
Feedback welcome.
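A quick usage sketch of add_get_to_url() (Python 2; the URL is just an example):
>>> add_get_to_url('http://example.com/search', [('q', u'olá'), ('page', 2)])
'http://example.com/search?q=ol%C3%A1&page=2'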
This one line works fine in my case:
urllib.quote(unicode_string.encode('utf-8'))
Thanks, #IanCleland and #PavelVlasov.
Why such long answers?
urlencode(unicode_string.encode('utf-8'))

How to get string objects instead of Unicode from JSON

I'm using Python 2 to parse JSON from ASCII encoded text files.
When loading these files with either json or simplejson, all my string values are cast to Unicode objects instead of string objects. The problem is, I have to use the data with some libraries that only accept string objects. I can't change the libraries nor update them.
Is it possible to get string objects instead of Unicode ones?
Example
>>> import json
>>> original_list = ['a', 'b']
>>> json_list = json.dumps(original_list)
>>> json_list
'["a", "b"]'
>>> new_list = json.loads(json_list)
>>> new_list
[u'a', u'b'] # I want these to be of type `str`, not `unicode`
(One easy and clean solution for 2017 is to use a recent version of Python — i.e. Python 3 and forward.)
While there are some good answers here, I ended up using PyYAML to parse my JSON files, since it gives the keys and values as str type strings instead of the unicode type. Because JSON is a subset of YAML, it works nicely:
>>> import json
>>> import yaml
>>> list_org = ['a', 'b']
>>> list_dump = json.dumps(list_org)
>>> list_dump
'["a", "b"]'
>>> json.loads(list_dump)
[u'a', u'b']
>>> yaml.safe_load(list_dump)
['a', 'b']
Notes
Some things to note though:
I get string objects because all my entries are ASCII encoded. If I would use Unicode encoded entries, I would get them back as unicode objects — there is no conversion!
You should (probably always) use PyYAML's safe_load function; if you use it to load JSON files, you don't need the "additional power" of the load function anyway.
If you want a YAML parser that has more support for the 1.2 version of the spec (and correctly parses very low numbers) try Ruamel YAML: pip install ruamel.yaml and import ruamel.yaml as yaml was all I needed in my tests.
Conversion
As stated, there isn't any conversion! If you can't be sure to only deal with ASCII values (and you can't be sure most of the time), better use a conversion function:
I used the one from Mark Amery a couple of times now, it works great and is very easy to use. You can also use a similar function as an object_hook instead, as it might gain you a performance boost on big files. See the slightly more involved answer from Mirec Miskuf for that.
There's no built-in option to make the json module functions return byte strings instead of Unicode strings. However, this short and simple recursive function will convert any decoded JSON object from using Unicode strings to UTF-8-encoded byte strings:
def byteify(input):
if isinstance(input, dict):
return {byteify(key): byteify(value)
for key, value in input.iteritems()}
elif isinstance(input, list):
return [byteify(element) for element in input]
elif isinstance(input, unicode):
return input.encode('utf-8')
else:
return input
Just call this on the output you get from a json.load or json.loads call.
A couple of notes:
To support Python 2.6 or earlier, replace return {byteify(key): byteify(value) for key, value in input.iteritems()} with return dict([(byteify(key), byteify(value)) for key, value in input.iteritems()]), since dictionary comprehensions weren't supported until Python 2.7.
Since this answer recurses through the entire decoded object, it has a couple of undesirable performance characteristics that can be avoided with very careful use of the object_hook or object_pairs_hook parameters. Mirec Miskuf's answer is so far the only one that manages to pull this off correctly, although as a consequence, it's significantly more complicated than my approach.
A solution with object_hook
It works for both Python 2.7 and 3.x.
import json
def json_load_byteified(file_handle):
return _byteify(
json.load(file_handle, object_hook=_byteify),
ignore_dicts=True
)
def json_loads_byteified(json_text):
return _byteify(
json.loads(json_text, object_hook=_byteify),
ignore_dicts=True
)
def _byteify(data, ignore_dicts = False):
if isinstance(data, str):
return data
# If this is a list of values, return list of byteified values
if isinstance(data, list):
return [ _byteify(item, ignore_dicts=True) for item in data ]
# If this is a dictionary, return dictionary of byteified keys and values
# but only if we haven't already byteified it
if isinstance(data, dict) and not ignore_dicts:
return {
_byteify(key, ignore_dicts=True): _byteify(value, ignore_dicts=True)
for key, value in data.items() # changed to .items() for Python 2.7/3
}
# Python 3 compatible duck-typing
# If this is a Unicode string, return its string representation
if str(type(data)) == "<type 'unicode'>":
return data.encode('utf-8')
# If it's anything else, return it in its original form
return data
Example usage:
>>> json_loads_byteified('{"Hello": "World"}')
{'Hello': 'World'}
>>> json_loads_byteified('"I am a top-level string"')
'I am a top-level string'
>>> json_loads_byteified('7')
7
>>> json_loads_byteified('["I am inside a list"]')
['I am inside a list']
>>> json_loads_byteified('[[[[[[[["I am inside a big nest of lists"]]]]]]]]')
[[[[[[[['I am inside a big nest of lists']]]]]]]]
>>> json_loads_byteified('{"foo": "bar", "things": [7, {"qux": "baz", "moo": {"cow": ["milk"]}}]}')
{'things': [7, {'qux': 'baz', 'moo': {'cow': ['milk']}}], 'foo': 'bar'}
>>> json_load_byteified(open('somefile.json'))
{'more json': 'from a file'}
How does this work and why would I use it?
Mark Amery's function is shorter and clearer than these ones, so what's the point of them? Why would you want to use them?
Purely for performance. Mark's answer decodes the JSON text fully first with Unicode strings, then recurses through the entire decoded value to convert all strings to byte strings. This has a couple of undesirable effects:
A copy of the entire decoded structure gets created in memory
If your JSON object is really deeply nested (500 levels or more) then you'll hit Python's maximum recursion depth
This answer mitigates both of those performance issues by using the object_hook parameter of json.load and json.loads. From the documentation:
object_hook is an optional function that will be called with the result of any object literal decoded (a dict). The return value of object_hook will be used instead of the dict. This feature can be used to implement custom decoders
Since dictionaries nested many levels deep in other dictionaries get passed to object_hook as they're decoded, we can byteify any strings or lists inside them at that point and avoid the need for deep recursion later.
Mark's answer isn't suitable for use as an object_hook as it stands, because it recurses into nested dictionaries. We prevent that recursion in this answer with the ignore_dicts parameter to _byteify, which gets passed to it at all times except when object_hook passes it a new dict to byteify. The ignore_dicts flag tells _byteify to ignore dicts since they already been byteified.
Finally, our implementations of json_load_byteified and json_loads_byteified call _byteify (with ignore_dicts=True) on the result returned from json.load or json.loads to handle the case where the JSON text being decoded doesn't have a dict at the top level.
You can use the object_hook parameter for json.loads to pass in a converter. You don't have to do the conversion after the fact. The json module will always pass the object_hook dicts only, and it will recursively pass in nested dicts, so you don't have to recurse into nested dicts yourself. I don't think I would convert Unicode strings to numbers like Wells shows. If it's a Unicode string, it was quoted as a string in the JSON file, so it is supposed to be a string (or the file is bad).
Also, I'd try to avoid doing something like str(val) on a unicode object. You should use value.encode(encoding) with a valid encoding, depending on what your external library expects.
So, for example:
def _decode_list(data):
rv = []
for item in data:
if isinstance(item, unicode):
item = item.encode('utf-8')
elif isinstance(item, list):
item = _decode_list(item)
elif isinstance(item, dict):
item = _decode_dict(item)
rv.append(item)
return rv
def _decode_dict(data):
rv = {}
for key, value in data.iteritems():
if isinstance(key, unicode):
key = key.encode('utf-8')
if isinstance(value, unicode):
value = value.encode('utf-8')
elif isinstance(value, list):
value = _decode_list(value)
elif isinstance(value, dict):
value = _decode_dict(value)
rv[key] = value
return rv
obj = json.loads(s, object_hook=_decode_dict)
That's because json() has no difference between string objects and Unicode objects. They're all strings in JavaScript.
I think JSON is right to return Unicode objects. In fact, I wouldn't accept anything less, since JavaScript strings are in fact unicode objects (i.e., JSON (JavaScript) strings can store any kind of Unicode character), so it makes sense to create unicode objects when translating strings from JSON. Plain strings just wouldn't fit since the library would have to guess the encoding you want.
It's better to use unicode string objects everywhere. So your best option is to update your libraries so they can deal with Unicode objects.
But if you really want bytestrings, just encode the results to the encoding of your choice:
>>> nl = json.loads(js)
>>> nl
[u'a', u'b']
>>> nl = [s.encode('utf-8') for s in nl]
>>> nl
['a', 'b']
There exists an easy work-around.
TL;DR - Use ast.literal_eval() instead of json.loads(). Both ast and json are in the standard library.
While not a 'perfect' answer, it gets one pretty far if your plan is to ignore Unicode altogether. In Python 2.7
import json, ast
d = { 'field' : 'value' }
print "JSON Fail: ", json.loads(json.dumps(d))
print "AST Win:", ast.literal_eval(json.dumps(d))
gives:
JSON Fail: {u'field': u'value'}
AST Win: {'field': 'value'}
This gets more hairy when some objects are really Unicode strings. The full answer gets hairy quickly.
Mike Brennan's answer is close, but there isn't any reason to retraverse the entire structure. If you use the object_hook_pairs (Python 2.7+) parameter:
object_pairs_hook is an optional function that will be called with the result of any object literal decoded with an ordered list of pairs. The return value of object_pairs_hook will be used instead of the dict. This feature can be used to implement custom decoders that rely on the order that the key and value pairs are decoded (for example, collections.OrderedDict will remember the order of insertion). If object_hook is also defined, the object_pairs_hook takes priority.
With it, you get each JSON object handed to you, so you can do the decoding with no need for recursion:
def deunicodify_hook(pairs):
new_pairs = []
for key, value in pairs:
if isinstance(value, unicode):
value = value.encode('utf-8')
if isinstance(key, unicode):
key = key.encode('utf-8')
new_pairs.append((key, value))
return dict(new_pairs)
In [52]: open('test.json').read()
Out[52]: '{"1": "hello", "abc": [1, 2, 3], "def": {"hi": "mom"}, "boo": [1, "hi", "moo", {"5": "some"}]}'
In [53]: json.load(open('test.json'))
Out[53]:
{u'1': u'hello',
u'abc': [1, 2, 3],
u'boo': [1, u'hi', u'moo', {u'5': u'some'}],
u'def': {u'hi': u'mom'}}
In [54]: json.load(open('test.json'), object_pairs_hook=deunicodify_hook)
Out[54]:
{'1': 'hello',
'abc': [1, 2, 3],
'boo': [1, 'hi', 'moo', {'5': 'some'}],
'def': {'hi': 'mom'}}
Notice that I never have to call the hook recursively since every object will get handed to the hook when you use the object_pairs_hook. You do have to care about lists, but as you can see, an object within a list will be properly converted, and you don't have to recurse to make it happen.
A coworker pointed out that Python2.6 doesn't have object_hook_pairs. You can still use this will Python2.6 by making a very small change. In the hook above, change:
for key, value in pairs:
to
for key, value in pairs.iteritems():
Then use object_hook instead of object_pairs_hook:
In [66]: json.load(open('test.json'), object_hook=deunicodify_hook)
Out[66]:
{'1': 'hello',
'abc': [1, 2, 3],
'boo': [1, 'hi', 'moo', {'5': 'some'}],
'def': {'hi': 'mom'}}
Using object_pairs_hook results in one less dictionary being instantiated for each object in the JSON object, which, if you were parsing a huge document, might be worth while.
I'm afraid there isn't any way to achieve this automatically within the simplejson library.
The scanner and decoder in simplejson are designed to produce Unicode text. To do this, the library uses a function called c_scanstring (if it's available, for speed), or py_scanstring if the C version is not available. The scanstring function is called several times by nearly every routine that simplejson has for decoding a structure that might contain text. You'd have to either monkey patch the scanstring value in simplejson.decoder, or subclass JSONDecoder and provide pretty much your own entire implementation of anything that might contain text.
The reason that simplejson outputs Unicode, however, is that the JSON specification specifically mentions that "A string is a collection of zero or more Unicode characters"... support for Unicode is assumed as part of the format itself. simplejson's scanstring implementation goes so far as to scan and interpret Inicode escapes (even error-checking for malformed multi-byte charset representations), so the only way it can reliably return the value to you is as Unicode.
If you have an aged library that needs an str, I recommend you either laboriously search the nested data structure after parsing (which I acknowledge is what you explicitly said you wanted to avoid... sorry), or perhaps wrap your libraries in some sort of facade where you can massage the input parameters at a more granular level. The second approach might be more manageable than the first if your data structures are indeed deeply nested.
As Mark (Amery) correctly notes: Using PyYAML's deserializer on a JSON dump works only if you have ASCII only. At least out of the box.
Two quick comments on the PyYAML approach:
Never use yaml.load() on data from the field. It’s a feature(!) of YAML to execute arbitrary code hidden within the structure.
You can make it work also for non ASCII via this:
def to_utf8(loader, node):
return loader.construct_scalar(node).encode('utf-8')
yaml.add_constructor(u'tag:yaml.org,2002:str', to_utf8)
But performance-wise, it’s of no comparison to Mark Amery's answer:
Throwing some deeply-nested sample dicts onto the two methods, I get this (with dt[j] = time delta of json.loads(json.dumps(m))):
dt[yaml.safe_load(json.dumps(m))] =~ 100 * dt[j]
dt[byteify recursion(Mark Amery)] =~ 5 * dt[j]
So deserialization, including fully walking the tree and encoding, is well within the order of magnitude of JSON's C-based implementation. I find this remarkably fast and its also more robust than the yaml load at deeply nested structures. And less security error prone, looking at yaml.load.
=> While I would appreciate a pointer to a C-only based converter, the byteify function should be the default answer.
This holds especially true if your JSON structure is from the field, containing user input. Because then you probably need to walk anyway over your structure - independent on your desired internal data structures ('unicode sandwich' or byte strings only).
Why?
Unicode normalisation. For the unaware: Take a painkiller and read this.
So using the byteify recursion you kill two birds with one stone:
get your bytestrings from nested JSON dumps
get user input values normalised, so that you find the stuff in your storage.
In my tests it turned out that replacing the input.encode('utf-8') with a unicodedata.normalize('NFC', input).encode('utf-8') was even faster than without NFC - but that’s heavily dependent on the sample data I guess.
The gotcha is that simplejson and json are two different modules, at least in the manner they deal with Unicode. You have json in Python 2.6+, and this gives you Unicode values, whereas simplejson returns string objects.
Just try easy_install-ing simplejson in your environment and see if that works. It did for me.
Just use pickle instead of json for dump and load, like so:
import json
import pickle
d = { 'field1': 'value1', 'field2': 2, }
json.dump(d,open("testjson.txt","w"))
print json.load(open("testjson.txt","r"))
pickle.dump(d,open("testpickle.txt","w"))
print pickle.load(open("testpickle.txt","r"))
The output it produces is (strings and integers are handled correctly):
{u'field2': 2, u'field1': u'value1'}
{'field2': 2, 'field1': 'value1'}
I had a JSON dict as a string. The keys and values were Unicode objects like in the following example:
myStringDict = "{u'key':u'value'}"
I could use the byteify function suggested above by converting the string to a dict object using ast.literal_eval(myStringDict).
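A small sketch of that combination (Python 2); byteify() here stands for the recursive helper discussed in the answers above:

import ast

def byteify(data):
    if isinstance(data, unicode):
        return data.encode('utf-8')
    if isinstance(data, list):
        return [byteify(item) for item in data]
    if isinstance(data, dict):
        return {byteify(k): byteify(v) for k, v in data.iteritems()}
    return data

myStringDict = "{u'key': u'value'}"
print byteify(ast.literal_eval(myStringDict))  # {'key': 'value'}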
So, I've run into the same problem.
Because I need to pass all data to PyGTK, Unicode strings aren't very useful to me either, so I have another recursive conversion method. It's also needed for type-safe JSON conversion, since json.dump() would bail on any non-literals, like Python objects. It doesn't convert dict keys though.
# removes any objects, turns Unicode back into str
def filter_data(obj):
    if type(obj) in (int, float, str, bool):
        return obj
    elif type(obj) == unicode:
        return str(obj)
    elif type(obj) in (list, tuple, set):
        obj = list(obj)
        for i, v in enumerate(obj):
            obj[i] = filter_data(v)
    elif type(obj) == dict:
        for i, v in obj.iteritems():
            obj[i] = filter_data(v)
    else:
        print "invalid object in data, converting to string"
        obj = str(obj)
    return obj
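A quick usage sketch of filter_data() with a made-up input; note how values are converted while dictionary keys keep their Unicode type. Also note that the str(obj) call on the unicode branch only works for ASCII text; for non-ASCII you would need obj.encode('utf-8') instead.

import json

parsed = json.loads('{"name": "value", "count": 3}')
print filter_data(parsed)
# values become str, but dict keys stay unicode, e.g. {u'count': 3, u'name': 'value'}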
Support for Python 2 and 3 using a hook (from Mirec Miskuf's answer):
import requests
import six
from six import iteritems
requests.packages.urllib3.disable_warnings() # #UndefinedVariable
r = requests.get("http://echo.jsontest.com/key/value/one/two/three", verify=False)
def _byteify(data):
    # If this is a text string, encode it to UTF-8 bytes on Python 2;
    # on Python 3 it is already a str, so leave it as-is
    if isinstance(data, six.text_type):
        return data.encode('utf-8') if six.PY2 else data
    # If this is a list of values, return a list of byteified values
    if isinstance(data, list):
        return [_byteify(item) for item in data]
    # If this is a dictionary, return a dictionary of byteified keys and values
    if isinstance(data, dict):
        return {_byteify(key): _byteify(value) for key, value in iteritems(data)}
    # If it's anything else, return it in its original form
    return data
w = r.json(object_hook=_byteify)
print(w)
Returns:
{'three': '', 'key': 'value', 'one': 'two'}
I built this recursive caster. It works for my needs and I think it's relatively complete.
def _parseJSON(self, obj):
    newobj = {}
    for key, value in obj.iteritems():
        key = str(key)
        if isinstance(value, dict):
            newobj[key] = self._parseJSON(value)
        elif isinstance(value, list):
            if key not in newobj:
                newobj[key] = []
            for i in value:
                newobj[key].append(self._parseJSON(i))
        elif isinstance(value, unicode):
            val = str(value)
            if val.isdigit():
                val = int(val)
            else:
                try:
                    val = float(val)
                except ValueError:
                    val = str(val)
            newobj[key] = val
    return newobj
Just pass it a JSON object like so:
obj = json.loads(content, parse_float=float, parse_int=int)
obj = _parseJSON(obj)
I have it as a private member of a class, but you can repurpose the method as you see fit.
I rewrote Wells's _parseJSON() to handle cases where the JSON object itself is an array (my use case).
def _parseJSON(self, obj):
    if isinstance(obj, dict):
        newobj = {}
        for key, value in obj.iteritems():
            key = str(key)
            newobj[key] = self._parseJSON(value)
    elif isinstance(obj, list):
        newobj = []
        for value in obj:
            newobj.append(self._parseJSON(value))
    elif isinstance(obj, unicode):
        newobj = str(obj)
    else:
        newobj = obj
    return newobj
Here is a recursive encoder written in C:
https://github.com/axiros/nested_encode
The performance overhead for "average" structures is around 10% compared to json.loads().
python speed.py
json loads [0.16sec]: {u'a': [{u'b': [[1, 2, [u'\xd6ster..
json loads + encoding [0.18sec]: {'a': [{'b': [[1, 2, ['\xc3\x96ster.
time overhead in percent: 9%
using this test structure:
import json, nested_encode, time
s = """
{
"firstName": "Jos\\u0301",
"lastName": "Smith",
"isAlive": true,
"age": 25,
"address": {
"streetAddress": "21 2nd Street",
"city": "\\u00d6sterreich",
"state": "NY",
"postalCode": "10021-3100"
},
"phoneNumbers": [
{
"type": "home",
"number": "212 555-1234"
},
{
"type": "office",
"number": "646 555-4567"
}
],
"children": [],
"spouse": null,
"a": [{"b": [[1, 2, ["\\u00d6sterreich"]]]}]
}
"""
t1 = time.time()
for i in xrange(10000):
    u = json.loads(s)
dt_json = time.time() - t1

t1 = time.time()
for i in xrange(10000):
    b = nested_encode.encode_nested(json.loads(s))
dt_json_enc = time.time() - t1

print "json loads [%.2fsec]: %s..." % (dt_json, str(u)[:20])
print "json loads + encoding [%.2fsec]: %s..." % (dt_json_enc, str(b)[:20])
print "time overhead in percent: %i%%" % (100 * (dt_json_enc - dt_json)/dt_json)
With Python 3.6, sometimes I still run into this problem. For example, when getting a response from a REST API and loading the response text to JSON, I still get the Unicode strings.
Found a simple solution using json.dumps().
response_message = json.loads(json.dumps(response.text))
print(response_message)
I ran into this problem too, and having to deal with JSON, I came up with a small loop that converts the Unicode keys to strings. (simplejson on GAE does not return string keys.)
obj is the object decoded from JSON:
if NAME_CLASS_MAP.has_key(cls):
    kwargs = {}
    for i in obj.keys():
        kwargs[str(i)] = obj[i]
    o = NAME_CLASS_MAP[cls](**kwargs)
    o.save()
kwargs is what I pass to the constructor of the GAE application (which does not like Unicode keys in **kwargs).
It is not as robust as the solution from Wells, but much smaller.
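On Python 2.7+ the same key conversion can be written as a one-line dict comprehension over the decoded object; this is just an equivalent sketch, not part of the original answer:

kwargs = {str(k): v for k, v in obj.iteritems()}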
I've adapted the code from Mark Amery's answer, mainly to get rid of the isinstance checks in favour of duck typing.
The encoding is done manually and ensure_ascii is disabled. The Python documentation for json.dump says that:
If ensure_ascii is True (the default), all non-ASCII characters in the output are escaped with \uXXXX sequences
Disclaimer: in the doctest I used the Hungarian language. Some notable Hungarian-related character encodings are: cp852, the IBM/OEM encoding used e.g. in DOS (sometimes referred to as ASCII - incorrectly, I think, as it depends on the code page setting); Windows-1250, used e.g. in Windows (sometimes referred to as ANSI, dependent on the locale settings); and ISO 8859-1, sometimes used on HTTP servers.
The test text Tüskéshátú kígyóbűvölő is attributed to Koltai László (native personal name form) and is from Wikipedia.
# coding: utf-8
"""
This file should be encoded correctly with utf-8.
"""
import json
def encode_items(input, encoding='utf-8'):
u"""original from: https://stackoverflow.com/a/13101776/611007
adapted by SO/u/611007 (20150623)
>>>
>>> ## run this with `python -m doctest <this file>.py` from command line
>>>
>>> txt = u"Tüskéshátú kígyóbűvölő"
>>> txt2 = u"T\\u00fcsk\\u00e9sh\\u00e1t\\u00fa k\\u00edgy\\u00f3b\\u0171v\\u00f6l\\u0151"
>>> txt3 = u"uúuutifu"
>>> txt4 = b'u\\xfauutifu'
>>> # txt4 shouldn't be 'u\\xc3\\xbauutifu', string content needs double backslash for doctest:
>>> assert u'\\u0102' not in b'u\\xfauutifu'.decode('cp1250')
>>> txt4u = txt4.decode('cp1250')
>>> assert txt4u == u'u\\xfauutifu', repr(txt4u)
>>> txt5 = b"u\\xc3\\xbauutifu"
>>> txt5u = txt5.decode('utf-8')
>>> txt6 = u"u\\u251c\\u2551uutifu"
>>> there_and_back_again = lambda t: encode_items(t, encoding='utf-8').decode('utf-8')
>>> assert txt == there_and_back_again(txt)
>>> assert txt == there_and_back_again(txt2)
>>> assert txt3 == there_and_back_again(txt3)
>>> assert txt3.encode('cp852') == there_and_back_again(txt4u).encode('cp852')
>>> assert txt3 == txt4u,(txt3,txt4u)
>>> assert txt3 == there_and_back_again(txt5)
>>> assert txt3 == there_and_back_again(txt5u)
>>> assert txt3 == there_and_back_again(txt4u)
>>> assert txt3.encode('cp1250') == encode_items(txt4, encoding='utf-8')
>>> assert txt3.encode('utf-8') == encode_items(txt5, encoding='utf-8')
>>> assert txt2.encode('utf-8') == encode_items(txt, encoding='utf-8')
>>> assert {'a':txt2.encode('utf-8')} == encode_items({'a':txt}, encoding='utf-8')
>>> assert [txt2.encode('utf-8')] == encode_items([txt], encoding='utf-8')
>>> assert [[txt2.encode('utf-8')]] == encode_items([[txt]], encoding='utf-8')
>>> assert [{'a':txt2.encode('utf-8')}] == encode_items([{'a':txt}], encoding='utf-8')
>>> assert {'b':{'a':txt2.encode('utf-8')}} == encode_items({'b':{'a':txt}}, encoding='utf-8')
"""
    try:
        input.iteritems
        return {encode_items(k, encoding): encode_items(v, encoding)
                for (k, v) in input.iteritems()}
    except AttributeError:
        if isinstance(input, unicode):
            return input.encode(encoding)
        elif isinstance(input, str):
            return input
        try:
            iter(input)
            return [encode_items(e, encoding) for e in input]
        except TypeError:
            return input
def alt_dumps(obj, **kwargs):
    """
    >>> alt_dumps({'a': u"T\\u00fcsk\\u00e9sh\\u00e1t\\u00fa k\\u00edgy\\u00f3b\\u0171v\\u00f6l\\u0151"})
    '{"a": "T\\xc3\\xbcsk\\xc3\\xa9sh\\xc3\\xa1t\\xc3\\xba k\\xc3\\xadgy\\xc3\\xb3b\\xc5\\xb1v\\xc3\\xb6l\\xc5\\x91"}'
    """
    if 'ensure_ascii' in kwargs:
        del kwargs['ensure_ascii']
    return json.dumps(encode_items(obj), ensure_ascii=False, **kwargs)
I'd also like to highlight the answer of Jarret Hardie which references the JSON specification, quoting:
A string is a collection of zero or more Unicode characters
In my use case, I had files with JSON content. They are UTF-8 encoded files. ensure_ascii results in properly escaped, but not very readable JSON files, and that is why I've adapted Mark Amery's answer to fit my needs.
The doctest is not particularly thoughtful, but I share the code in the hope that it will be useful for someone.
Check out this answer to a similar question, which states that
The u- prefix just means that you have a Unicode string. When you really use the string, it won't appear in your data. Don't be thrown by the printed output.
For example, try this:
print mail_accounts[0]["i"]
You won't see a u.
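A tiny illustration of that point (Python 2): the u prefix only shows up in the repr, not when the string itself is printed.

s = u'value'
print repr(s)   # u'value'
print s         # value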
