# I have the dictionary my_dict
my_dict = {
    'var1': 5,
    'var2': 9
}
r = redis.StrictRedis()
How would I store my_dict and retrieve it with redis? For example, the following code does not work:
#Code that doesn't work
r.set('this_dict', my_dict) # to store my_dict in this_dict
r.get('this_dict') # to retrieve my_dict
You can do it with hmset (multiple keys can be set in one call with hmset).
hmset("RedisKey", dictionaryToSet)
import redis
conn = redis.Redis('localhost')
user = {"Name":"Pradeep", "Company":"SCTL", "Address":"Mumbai", "Location":"RCP"}
conn.hmset("pythonDict", user)
conn.hgetall("pythonDict")
{'Company': 'SCTL', 'Address': 'Mumbai', 'Location': 'RCP', 'Name': 'Pradeep'}
You can pickle your dict and save it as a string.
import pickle
import redis
r = redis.StrictRedis('localhost')
mydict = {1:2,2:3,3:4}
p_mydict = pickle.dumps(mydict)
r.set('mydict',p_mydict)
read_dict = r.get('mydict')
yourdict = pickle.loads(read_dict)
As the basic answer has already been given by others, I would like to add to it.
Following are the Redis commands for basic operations on HashMap/Dictionary/Mapping values:
HGET => Returns the value for a single key
HSET => Sets/updates the value for a single key
HMGET => Returns values for one or more keys
HMSET => Sets/updates values for multiple keys
HGETALL => Returns all the (key, value) pairs in the mapping
Following are their respective methods in the redis-py library:
HGET => hget
HSET => hset
HMGET => hmget
HMSET => hmset
HGETALL => hgetall
All of the above setter methods create the mapping if it doesn't exist.
All of the above getter methods don't raise errors/exceptions if the mapping, or a key in it, doesn't exist.
Example:
=======
In [98]: import redis
In [99]: conn = redis.Redis('localhost')
In [100]: user = {"Name":"Pradeep", "Company":"SCTL", "Address":"Mumbai", "Location":"RCP"}
In [101]: conn.hmset("pythonDict", {"Location": "Ahmedabad"})
Out[101]: True
In [102]: conn.hgetall("pythonDict")
Out[102]:
{b'Address': b'Mumbai',
 b'Company': b'SCTL',
 b'Last Name': b'Rajpurohit',
 b'Location': b'Ahmedabad',
 b'Name': b'Mangu Singh'}
In [103]: conn.hmset("pythonDict", {"Location": "Ahmedabad", "Company": ["A/C Prism", "ECW", "Musikaar"]})
Out[103]: True
In [104]: conn.hgetall("pythonDict")
Out[104]:
{b'Address': b'Mumbai',
 b'Company': b"['A/C Prism', 'ECW', 'Musikaar']",
 b'Last Name': b'Rajpurohit',
 b'Location': b'Ahmedabad',
 b'Name': b'Mangu Singh'}
In [105]: conn.hget("pythonDict", "Name")
Out[105]: b'Mangu Singh'
In [106]: conn.hmget("pythonDict", "Name", "Location")
Out[106]: [b'Mangu Singh', b'Ahmedabad']
I hope this makes things clearer.
If you want to store a python dict in redis, it is better to store it as a JSON string.
import json
import redis
r = redis.StrictRedis(host='localhost', port=6379, db=0)
mydict = { 'var1' : 5, 'var2' : 9, 'var3': [1, 5, 9] }
rval = json.dumps(mydict)
r.set('key1', rval)
While retrieving, de-serialize it using json.loads:
data = r.get('key1')
result = json.loads(data)
arr = result['var3']
What about types (e.g. bytes) that are not serializable by the json functions?
You can write encoder/decoder functions for types that cannot be serialized by the json functions, e.g. a base64/ASCII encoder/decoder for byte arrays.
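For instance, here is a minimal sketch of such a codec for bytes values using base64 (the function names and the "__bytes__" marker key are illustrative, not part of any library):

```python
import base64
import json

def encode_bytes(obj):
    # json.dumps calls this for objects it cannot serialize natively
    if isinstance(obj, bytes):
        return {"__bytes__": base64.b64encode(obj).decode("ascii")}
    raise TypeError(f"Object of type {type(obj).__name__} is not JSON serializable")

def decode_bytes(d):
    # json.loads calls this for every decoded JSON object
    if "__bytes__" in d:
        return base64.b64decode(d["__bytes__"])
    return d

payload = {"name": "sensor", "raw": b"\x00\xff\x10"}
s = json.dumps(payload, default=encode_bytes)       # safe to pass to r.set(...)
restored = json.loads(s, object_hook=decode_bytes)
print(restored["raw"])  # b'\x00\xff\x10'
```

The string s can then be stored and fetched like any other Redis value.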
Another way: you can use the RedisWorks library.
pip install redisworks
>>> from redisworks import Root
>>> root = Root()
>>> root.something = {1:"a", "b": {2: 2}} # saves it as Hash type in Redis
...
>>> print(root.something) # loads it from Redis
{'b': {2: 2}, 1: 'a'}
>>> root.something['b'][2]
2
It converts python types to Redis types and vice-versa.
>>> root.sides = [10, [1, 2]] # saves it as list in Redis.
>>> print(root.sides) # loads it from Redis
[10, [1, 2]]
>>> type(root.sides[1])
<class 'list'>
Disclaimer: I wrote the library. Here is the code: https://github.com/seperman/redisworks
HMSET is deprecated per the Redis docs. You can now use HSET with a dictionary as follows:
import redis
r = redis.Redis('localhost')
key = "hashexample"
entry = {
"version":"1.2.3",
"tag":"main",
"status":"CREATED",
"timeout":"30"
}
r.hset(key, mapping=entry)
Caution: somewhat unintuitively, hset won't accept a dictionary if it is simply passed as the 2nd positional (unnamed) argument; it raises an error suggesting it does not accept dictionaries (see [1]). You need to pass the dictionary via the named argument mapping=.
[1] *** redis.exceptions.DataError: Invalid input of type: 'dict'. Convert to a bytes, string, int or float first.
One might consider using MessagePack which is endorsed by redis.
import msgpack
data = {
'one': 'one',
'two': 2,
'three': [1, 2, 3]
}
await redis.set('my-key', msgpack.packb(data))
val = await redis.get('my-key')
print(msgpack.unpackb(val))
# {'one': 'one', 'two': 2, 'three': [1, 2, 3]}
This uses msgpack-python and aioredis; note the await calls, which assume the snippet runs inside an async function with an aioredis connection named redis.
The redis SET command stores a string, not arbitrary data. You could try using the redis HSET command to store the dict as a redis hash with something like
for k, v in my_dict.items():  # iteritems() on Python 2
    r.hset('my_dict', k, v)
but the redis datatypes and python datatypes don't quite line up. Python dicts can be arbitrarily nested, but a redis hash requires that every value be a string. Another approach you can take is to convert your python data to a string and store that in redis, something like
r.set('this_dict', str(my_dict))
and then when you get the string out you will need to parse it to recreate the python object.
Another way you can approach the matter:
import redis
conn = redis.Redis('localhost')
v = {'class': 'user', 'grants': 0, 'nome': 'Roberto', 'cognome': 'Brunialti'}
y = str(v)
print(y['nome'])  #<=== this raises an error, as y is actually a string
conn.set('test', y)
z = eval(conn.get('test'))
print(z['nome'])  #<=== this really works!
I did not test it for efficiency/speed.
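A safer variant of the same idea, assuming the stored value is a plain Python literal, is ast.literal_eval, which parses literals only and refuses arbitrary code (the Redis round-trip is elided here; only the string conversion is shown):

```python
import ast

v = {'class': 'user', 'grants': 0, 'nome': 'Roberto'}
stored = str(v)               # what would be written with conn.set(...)
z = ast.literal_eval(stored)  # evaluates literals only, unlike eval
print(z['nome'])  # Roberto
```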
If you're not sure how best to organize the data in Redis, I did some performance tests, including parsing the results.
The dictionary I used (d) had 437,084 keys (md5 hashes), with values of this form:
{"path": "G:\tests\2687.3575.json",
"info": {"f": "foo", "b": "bar"},
"score": 2.5}
First Test (inserting data into a redis key-value mapping):
conn.hmset('my_dict', d) # 437,084 keys added in 8.98 s
conn.info()['used_memory_human'] # 166.94 MB
for key in d:
json.loads(conn.hget('my_dict', key).decode('utf-8').replace("'", '"'))
# 41.1 s
import ast
for key in d:
ast.literal_eval(conn.hget('my_dict', key).decode('utf-8'))
# 1min 3s
conn.delete('my_dict') # 526 ms
Second Test (inserting data directly into Redis keys):
for key in d:
conn.hmset(key, d[key]) # 437,084 keys added in 1 min 20 s
conn.info()['used_memory_human'] # 326.22 MB
for key in d:
json.loads(conn.hgetall(key)[b'info'].decode('utf-8').replace("'", '"'))
# 1min 11s
for key in d:
conn.delete(key)
# 37.3s
As you can see, in the second test only the 'info' values have to be parsed, because hgetall(key) already returns a dict (though not a nested one).
And of course, the best example of using Redis as a python dict is the First Test.
DeprecationWarning: Redis.hmset() is deprecated. Use Redis.hset() instead.
Since HMSET is deprecated you can use HSET:
import redis
r = redis.Redis(host='localhost', port=6379, decode_responses=True)
r.hset('user:23', mapping={'id': 23, 'name': 'ip'})
r.hgetall('user:23')
Try rejson-py, which is relatively new (2017). Have a look at this introduction.
from rejson import Client, Path
rj = Client(host='localhost', port=6379)
# Set the key `obj` to some object
obj = {
'answer': 42,
'arr': [None, True, 3.14],
'truth': {
'coord': 'out there'
}
}
rj.jsonset('obj', Path.rootPath(), obj)
# Get something
print('Is there anybody... {}?'.format(
    rj.jsonget('obj', Path('.truth.coord'))
))
# Delete something (or perhaps nothing), append something and pop it
rj.jsondel('obj', Path('.arr[0]'))
rj.jsonarrappend('obj', Path('.arr'), 'something')
print('{} popped!'.format(rj.jsonarrpop('obj', Path('.arr'))))
# Update something else
rj.jsonset('obj', Path('.answer'), 2.17)
In the context of Nameko (a Python microservices framework that frequently uses a redis backend), you can use hmset as follows:
import uuid
from nameko.rpc import rpc
from nameko_redis import Redis
class AirportsService:
    name = "trips_service"
    redis = Redis('development')

    @rpc
    def get(self, trip_id):
        trip = self.redis.get(trip_id)
        return trip

    @rpc
    def create(self, airport_from_id, airport_to_id):
        trip_id = uuid.uuid4().hex
        pyDict = {"from": airport_from_id, "to": airport_to_id}
        self.redis.hmset(trip_id, pyDict)
        return trip_id
I have an object with a similar structure to this
myObj = {
"subObj1":{"keyA":"valueA1"},
"subObj2":{"keyA":"valueA2","keyB":"valueB2"},
"subObj3":{"keyA":"valueA3","keyB":"valueB3", "keyC":{"keyA":"valueA3c"}},
}
Typically I can access the contents of this object similarly to this
print(myObj['subObj1']['keyA'])
print(myObj['subObj2']['keyB'])
print(myObj['subObj3']['keyC']['keyA'])
Which would return the values
valueA1
valueB2
valueA3c
I need a way to access the contents of my object based on keys from an external configuration file. A key from that file would look like
"subObj3.keyC.keyA"
I can transform that key into something similar to how I usually access the object
keyString="['subObj3']['keyC']['keyA']"
But when attempting to access the object with that keyString I get KeyError messages
print(myObj[keyString])
KeyError: "['subObj3']['keyC']['keyA']"
Is there a proper syntax, or a better way for what I'm trying to do here?
Here's one way via pandas:
import pandas as pd
myObj = {
"subObj1": {"keyA": "valueA1"},
"subObj2": {"keyA": "valueA2", "keyB": "valueB2"},
"subObj3": {"keyA": "valueA3", "keyB": "valueB3", "keyC": {"keyA": "valueA3c"}},
}
normalized_myObj = pd.json_normalize(myObj, sep='.').to_dict('records')
OUTPUT:
[{'subObj1.keyA': 'valueA1',
'subObj2.keyA': 'valueA2',
'subObj2.keyB': 'valueB2',
'subObj3.keyA': 'valueA3',
'subObj3.keyB': 'valueB3',
'subObj3.keyC.keyA': 'valueA3c'}]
NOTE: using pandas may be overkill for this task, but it's just a one-line solution that I prefer.
Nk03's solution is indeed a powerful method.
As a simpler alternative, consider this:
def get_value(s):
    keys = s.split(".")
    d = myObj
    for k in keys:
        d = d[k]  # go one step deeper for each provided key
    return d
get_value("subObj3.keyC.keyA")
>> 'valueA3c'
get_value("subObj1.keyA")
>> 'valueA1'
get_value("subObj2.keyB")
>> 'valueB2'
You said that you can transform your string into
keyString="['subObj3']['keyC']['keyA']"
That's good, because now you can perform eval() on this.
string = ""
for i in "subObj3.keyC.keyA".split('.'):
string += f"['{i}']"
print(eval(f'myObj{string}'))
output
valueA3c
Let's say we have a dict parsed from JSON, and we read values from it via a key path of the form path-to.my.keys:
my_dict['path-to']['my']['keys']
In the file system we have mkdir -p to create such a path if it does not exist.
In python, do we have a similar syntax/function to create a key path for a dict, i.e. a default empty dict for missing keys? My google search results were not very helpful.
TLDR
You can use dict.setdefault or collections.defaultdict.
def make_path(d: dict, *paths: str) -> None:
    for key in paths:
        d = d.setdefault(key, {})

make_path(my_dict, 'path-to', 'my', 'keys')
assert my_dict['path-to']['my']['keys'] is not None
Full details
Solution 1. dict.setdefault:
my_dict.setdefault('path-to', {}).setdefault('my', {}).setdefault('keys', {})
Pros:
my_dict is a normal dict
making dicts happens only explicitly
No depth restriction
Cons:
You have to call the setdefault method at every use site.
Solution 2. collections.defaultdict:
from collections import defaultdict
my_dict = defaultdict(lambda: defaultdict(lambda: defaultdict(dict)))
my_dict['path-to']['my']['keys']
Pros:
You don't need to check for existence at all.
Cons:
Making dictionaries happens implicitly.
my_dict is not a pure dict.
The depth limit is fixed by the definition of my_dict.
Solution 3. advanced from solution 1: Make your own function
def make_path(my_dict: dict, *paths: str) -> dict:
    while paths:
        key, *paths = paths
        my_dict = my_dict.setdefault(key, {})
    return my_dict

test = {'path-to': {'test': 1}}
print(test)
make_path(test, 'path-to', 'my', 'keys')['test2'] = 4
print(test)
print(make_path(test))  # It's okay even if no paths are passed
output:
{'path-to': {'test': 1}}
{'path-to': {'test': 1, 'my': {'keys': {'test2': 4}}}}
{'path-to': {'test': 1, 'my': {'keys': {'test2': 4}}}}
Solution 4. advanced from solution 2: Make your own class
class MyDefaultDict(dict):
    def __missing__(self, key):
        self[key] = MyDefaultDict()
        return self[key]

my_dict = MyDefaultDict()
print(my_dict)
my_dict['path-to']['my']['keys'] = 'hello'
print(my_dict)
output:
{}
{'path-to': {'my': {'keys': 'hello'}}}
Conclusion
I think solution 3 is closest to what you need, but you can use any of the other options if they fit your case.
Append
How about when, in Solution 4, we have a dict d already parsed from JSON? Your solution starts from the MyDefaultDict() type, not from what json.loads() returns.
If you can edit the json.loads part, then try:
import json

class MyDefaultDict(dict):
    def __missing__(self, key):
        self[key] = MyDefaultDict()
        return self[key]

data = '{"path-to": {"my": {"keys": "hello"}}}'
my_dict = json.loads(data, object_pairs_hook=MyDefaultDict)
print(type(my_dict))
output:
<class '__main__.MyDefaultDict'>
There's the recursive defaultdict trick that allows you to set values at random paths down a nested structure without explicitly creating the path:
import json
from collections import defaultdict
nested = lambda: defaultdict(nested)
d = nested()
d['path']['to']['nested']['key'] = 'value'
print(json.dumps(d))
# {"path": {"to": {"nested": {"key": "value"}}}}
Non-existing keys will return empty defaultdicts.
In python, do we have a similar syntax/function to create a key path for a dict? My google search results were not very helpful.
Python doesn't have "keypath" syntax in the style of clojure & friends, no. It can handle this specific case, at some runtime cost, using the setdefault method though: dict.setdefault(key, default) returns the value for the key after setting it if it was missing, so my_dict.setdefault('path-to', {}).setdefault('my', {}).setdefault('keys', ???) would access the specified path, creating dicts where they are missing.
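A quick runnable illustration of that chained setdefault idea, using an empty dict as the final default (the original leaves that choice open):

```python
my_dict = {}

# Each setdefault returns the existing or just-created nested dict,
# so the chain walks and creates the whole path in one expression.
leaf = my_dict.setdefault('path-to', {}).setdefault('my', {}).setdefault('keys', {})
leaf['value'] = 42

print(my_dict)  # {'path-to': {'my': {'keys': {'value': 42}}}}
```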
Answer to your question -- YES.
In python you can use the subprocess module to execute the commands you would generally run on a system.
You can execute the same mkdir -p command from python to create a nested directory using subprocess.Popen.
Here is how you can do that:
import subprocess
# Create a string of nested directory path.
path_from_dict_keys = "dir1/dir2/dir3"
temp = subprocess.Popen(['mkdir', '-p', path_from_dict_keys], stdout = subprocess.PIPE)
# we use the communicate function to fetch the output
output = str(temp.communicate())
How does one return a dict-like object through ProtoRPC?
I tried using the FieldList to no avail. I only see the following field definitions:
'IntegerField',
'FloatField',
'BooleanField',
'BytesField',
'StringField',
'MessageField',
'EnumField',
There are two scenarios:
1) Your dict has a well-defined schema: This is the best use case for ProtoRPC and if possible you should try to fit it into a schema. In this case, you would use a MessageField with some Message class that matches the schema in your dictionary.
For example, instead of
{'amount': 31, 'type': 'fish', 'mine': False}
you could define
from protorpc import messages

class MyCatch(messages.Message):
    amount = messages.IntegerField(1)
    type = messages.StringField(2)
    mine = messages.BooleanField(3)
and then use this message definition in a field via
messages.MessageField(MyCatch, index, ...)
2) Your dict does not have a well-defined schema: in this case you can use json to dump your dictionary to a string, requesting ensure_ascii=True to make sure the return value is a plain bytes/str object. Then you can just use a BytesField.
For example:
import json

class MyMessage(messages.Message):
    some_dict = messages.BytesField(1)

my_dict = {'amount': 31, 'type': 'fish', 'mine': False}
message = MyMessage(some_dict=json.dumps(my_dict, ensure_ascii=True))
The use of ensure_ascii is optional, as True is the default, but this may change depending on your environment.
Alternatively, you could use pickle to serialize your dictionary. The method pickle.dumps always outputs ASCII/binary, so by swapping json.dumps for pickle.dumps and dropping ensure_ascii=True, you'd get the same outcome.
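A minimal sketch of that swap with plain pickle (the ProtoRPC message wrapper is omitted; only the serialization step differs from the json version above):

```python
import pickle

my_dict = {'amount': 31, 'type': 'fish', 'mine': False}
blob = pickle.dumps(my_dict)   # bytes, suitable for a BytesField
restored = pickle.loads(blob)  # the exact Python objects come back
print(restored == my_dict)  # True
```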
It's possible to create a custom JsonField like this:
In [1]: class JsonField(messages.StringField):
   ...:     type = dict
You can then use it as any other field:
In [2]: class MyMessage(messages.Message):
   ...:     data = JsonField(1)
In [3]: m = MyMessage(data={"foo": "bar"})
In [4]: m.data
Out[4]: {'foo': 'bar'}
For the first option in the accepted answer, we can add the parameter repeated=True, so we'll have a list of JSON as the answer. I checked this at https://developers.google.com/appengine/docs/python/tools/protorpc/overview?hl=en#Defining_the_Response_Message
A bit involved, but I have a recipe for something quite close to a dict implementation for protorpc: https://gist.github.com/linuxluser/32d4a9c36ca0b8715ad4
It is restricted to using string-only keys and simple (not nested) values. But if your data fits in that category, this solution should work well.
The idea has 2 parts:
Create a new field type MultiField that can hold an arbitrary value type.
Create a dict-like type MapField that stores key-value pairs in a list of MultiField types.
You use it like so:
from protorpc import messages
import mapfield  # the MapField module from the gist above

class MyMessage(messages.Message):
    some_dict = mapfield.MapField(1)

my_message = MyMessage(some_dict={"foo": 7, "bar": False, "baz": 9.2, "qux": "nog"})
It's only a start and could probably be better; improvements are welcome. :)
I am writing a program that stores data in a dictionary object, but this data needs to be saved at some point during the program execution and loaded back into the dictionary object when the program is run again.
How would I convert a dictionary object into a string that can be written to a file and loaded back into a dictionary object? This will hopefully support dictionaries containing dictionaries.
The json module is a good solution here. It has the advantages over pickle that it only produces plain text output, and is cross-platform and cross-version.
import json
json.dumps(my_dict)  # my_dict is your dictionary
If your dictionary isn't too big, maybe str + eval can do the work:
dict1 = {'one':1, 'two':2, 'three': {'three.1': 3.1, 'three.2': 3.2 }}
str1 = str(dict1)
dict2 = eval(str1)
print(dict1 == dict2)
You can use ast.literal_eval instead of eval for additional security if the source is untrusted.
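For example, a sketch of the same round-trip with ast.literal_eval on the reading side:

```python
import ast

dict1 = {'one': 1, 'two': 2, 'three': {'three.1': 3.1, 'three.2': 3.2}}
str1 = str(dict1)

# literal_eval parses Python literals only, so strings containing
# function calls or other code raise ValueError instead of executing
dict2 = ast.literal_eval(str1)
print(dict1 == dict2)  # True
```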
I use json:
import json
# convert to string
input_ = json.dumps({'id': id_ })
# load to dict
my_dict = json.loads(input_)
Why not use the literal_eval function from Python 3's built-in ast library? It is better to use literal_eval instead of eval:
import ast
str_of_dict = "{'key1': 'key1value', 'key2': 'key2value'}"
ast.literal_eval(str_of_dict)
will give the output as an actual dictionary:
{'key1': 'key1value', 'key2': 'key2value'}
And if you are asking about converting a dictionary to a string, how about using Python's str() function?
Suppose the dictionary is:
my_dict = {'key1': 'key1value', 'key2': 'key2value'}
Then:
str(my_dict)
will print:
"{'key1': 'key1value', 'key2': 'key2value'}"
It's as easy as that.
Use the pickle module to save it to disk and load it back later.
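A small sketch of that suggestion (the file name is arbitrary):

```python
import pickle

data = {'one': 1, 'nested': {'two': 2}}

# save to disk
with open('data.pickle', 'wb') as f:
    pickle.dump(data, f)

# load back later
with open('data.pickle', 'rb') as f:
    restored = pickle.load(f)

print(restored == data)  # True
```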
Convert dictionary into JSON (string)
import json
mydict = { "name" : "Don",
"surname" : "Mandol",
"age" : 43}
result = json.dumps(mydict)
print(result[0:20])
will get you:
{"name": "Don", "sur
Convert string into dictionary
back_to_mydict = json.loads(result)
For Chinese text (or other non-ASCII content) you should make the following adjustments:
import codecs
fout = codecs.open("xxx.json", "w", "utf-8")
dict_to_json = json.dumps({'text':"中文"},ensure_ascii=False,indent=2)
fout.write(dict_to_json + '\n')
You may find the json.dumps() method needs help handling some object types.
Credit goes to the top answer of this post for the following:
import json
json.dumps(my_dictionary, indent=4, sort_keys=True, default=str)
I think you should consider using the shelve module, which provides persistent file-backed dictionary-like objects. It's easy to use in place of a "real" dictionary because it almost transparently gives your program something that can be used just like a dictionary, without the need to explicitly convert it to a string and write it to a file (or vice-versa).
The main difference is needing to open() it before first use and then close() it when you're done (and possibly sync() it, depending on the writeback option being used). Any "shelf" object created can contain regular dictionaries as values, allowing them to be logically nested.
Here's a trivial example:
import shelve
shelf = shelve.open('mydata') # open for reading and writing, creating if nec
shelf.update({'one':1, 'two':2, 'three': {'three.1': 3.1, 'three.2': 3.2 }})
shelf.close()
shelf = shelve.open('mydata')
print(dict(shelf))  # materialize the shelf's contents as a plain dict
shelf.close()
Output:
{'three': {'three.1': 3.1, 'three.2': 3.2}, 'two': 2, 'one': 1}
If you care about the speed use ujson (UltraJSON), which has the same API as json:
import ujson
ujson.dumps([{"key": "value"}, 81, True])
# '[{"key":"value"},81,true]'
ujson.loads("""[{"key": "value"}, 81, true]""")
# [{'key': 'value'}, 81, True]
I use yaml if the output needs to be readable (neither JSON nor XML are, IMHO); if reading is not necessary, I use pickle.
Write
from pickle import dumps, loads
x = dict(a=1, b=2)
y = dict(c=x, z=3)
res = dumps(y)  # bytes in Python 3, so the file must be opened in binary mode
open('/var/tmp/dump.txt', 'wb').write(res)
Read back
from pickle import dumps, loads
rev = loads(open('/var/tmp/dump.txt', 'rb').read())
print(rev)
I figured out the problem was not with my dict object; it was the keys and values, which were of RubyString type after loading it with the RubyMarshal loads method.
So I did this:
dic_items = dict.items()
new_dict = {str(key): str(value) for key, value in dic_items}