I have the following:
def jsonify(ar):
    json.dumps(ar._data)
jsonify(getFromTable())
getFromTable returns a list of boto objects, and each of those objects has a _data attribute. However, ar._data does not work: the list itself does not have the attribute _data.
How can I make a single JSON blob from multiple objects, or is that impossible?
My work around for this is:
def jsonify(ar):
    s = ""  # renamed from 'str' to avoid shadowing the built-in
    for i in ar:
        s += json.dumps(i._data)
    print s
    return s
jsonify(getFromTable())
However, I would still prefer to print them all as one JSON blob. Does anyone know how?
Solved below with help from mGilson
Also just as an fyi:
I'm using boto, dynamodb2, and Python, pulling from a lazily evaluated ResultSet returned by querying my table.
@mGilson, thank you. That was the correct way to solve it.
For anyone curious here is the implementation that I used.
def getFromTable():
    global table
    #t = table.scan(thirdKey__eq="Anon")
    t = table.scan()
    arr = []
    for a in t:
        arr.append(a)
    return arr

def jsonify(ar):
    s = json.dumps(ar)  # renamed from 'str' to avoid shadowing the built-in
    print s
    return s

def createListFromBotoObj(obj):
    myList = []
    for o in obj:
        myList.append(o._data)
    return myList

jsonify(createListFromBotoObj(getFromTable()))
Which prints the expected result:
[{"secondKey": "Doe", "thirdKey": "Anon", "firstKey": "John"}, {"secondKey": "G", "thirdKey": "Company", "firstKey": "P"}, {"secondKey": "T", "thirdKey": "Engineer3", "firstKey": "allen"}, {"secondKey": "John", "last_name": "Doe", "firstKey": "booperface"}, {"secondKey": "The Builder", "thirdKey": "Sadness", "firstKey": "Bob"}]
In case anyone is wondering I'm using this to test how I will implement my actual database.
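As a quick standalone check of that final step: json.dumps on a plain list of dicts produces a single JSON array (the dicts below are made-up stand-ins for the _data attributes):

```python
import json

# Made-up stand-ins for the _data dicts pulled off the boto objects
records = [
    {"firstKey": "John", "secondKey": "Doe", "thirdKey": "Anon"},
    {"firstKey": "P", "secondKey": "G", "thirdKey": "Company"},
]

# Collecting the dicts into a list first yields one JSON array
blob = json.dumps(records)
print(blob)
```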
JSON format:
[{"SH_MSG": {"time": "1657291114000", "area_id": "D1", "address": "54", "msg_type": "SH", "data": "8CFB0B00"}}, {"SF_MSG": {"time": "1657291114000", "area_id": "D2", "address": "0A", "msg_type": "SF", "data": "1F"}}, ...]
I want to record all data that has a CA_MSG tag at the start.
I am using stomp to obtain messages.
msg = json.loads(frame.body)
msg is a list such that:
msg = [{'SF_MSG': {'...'}}, ...]
I am trying:
for m in msg:
    new_msg = []
    if m.keys() == 'CA_MSG':
        new_msg.append(m)
But this is just returning [] every time.
I ended up getting there in the end by skipping out the for loops for a list comprehension:
CA_MSGS = [m['CA_MSG'] for m in msg if 'CA_MSG' in m]
dict.keys() returns a list object in Python 2 and a dict_keys view object in Python 3. Comparing either of those objects to a string object can never be True:

if m.keys() == 'CA_MSG':  # False all the time

# probably this is what you are looking for
# Python 2
if m.keys().count('CA_MSG') > 0:
# Python 3 -- dict_keys already supports fast membership tests,
# so the simplest form is just: if 'CA_MSG' in m:
if 'CA_MSG' in m.keys():
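A minimal, self-contained version of that membership test (the frames below are invented to mimic the structure described above):

```python
# Invented frames mimicking the message structure in the question
messages = [
    {"SH_MSG": {"address": "54"}},
    {"CA_MSG": {"address": "0A"}},
    {"SF_MSG": {"address": "1F"}},
]

# 'in' on a dict checks its keys directly
ca_msgs = [m["CA_MSG"] for m in messages if "CA_MSG" in m]
print(ca_msgs)  # [{'address': '0A'}]
```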
I have searched quite thoroughly and have not found a suitable solution. I am new to Python/programming, so I appreciate any advice I can get:
I am trying to search for a string in a StringSet. Here is what I am trying to do, but I am not getting the value.
string_set = {'"123", "456", "789"'}
value = '123'
values_list = []

def fun():
    for i in string_set:
        if i in value:
            output = LookupTables.get('dynamo-table', i, {})
            return output

fun()
Using the above, if the value is in the StringSet it should return the matching value from my DynamoDB table.
Note: there could be more than 5000 values in my table, so I want the earliest possible return.
Maybe you should remove the extra quotes first:

string_set = {'"123", "456", "789"'}  # this set has just one value: '"123", "456", "789"'
string_set_fixed = {"123", "456", "789"}
I'm assuming you're just checking whether 123 is in "123", "456", "789", since you had it wrapped in single quotes. To represent that, let's use:

strset = {"123", "456", "789"}

What if you have to use that weird variable? This should render it usable:
strset = {'"123", "456", "789"'}
removed = next(iter(strset))
strset.update((removed).split())
strset.remove(removed)
strset = set([i.strip(",").strip('"') for i in strset])
Another, more compact way (note that exec should only be used on trusted input):
strset = {'"123", "456", "789"'}
exec(f"strset = {next(iter(strset))}")
print("123" in strset)
now to check if value is in there:
if value in strset:
#do code here
Try this:
string_set = {"123", "456", "789"}
value = '123'
values_list = []

def fun():
    if value in string_set:
        output = LookupTables.get('dynamo-table', value, {})
        return output

fun()
Explanation:
Your definition of string_set contains an extraneous pair of quotes, so it is a one-element set whose single element is the string '"123", "456", "789"'.
When you test i in value, you are checking whether i is a substring of value, rather than testing whether the whole string is a member of the set.
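A quick standalone demonstration of the difference between the two tests, using the corrected set:

```python
string_set = {"123", "456", "789"}  # corrected set, without the extra quotes
value = "123"

# 'x in value' is a substring test on the string:
print("12" in value)        # True -- "12" is a substring of "123"

# 'value in string_set' tests membership against whole elements:
print(value in string_set)  # True
print("12" in string_set)   # False -- "12" is not an element of the set
```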
My JSON object is: {"values": {"empid": 20000, "empName": "Sourav", "empSal": 8200}}
But I want to remove the outer "values" key. How can I do this? I have written code in Python.
In the background it takes the streaming data from MySQL and sends it to Kinesis.
def main():
    connection = {
        "host": "127.0.0.1",
        "port": int(sys.argv[1]),
        "user": str(sys.argv[2]),
        "passwd": str(sys.argv[3])}

    kinesis = boto3.client("kinesis", region_name='ap-south-1')

    stream = BinLogStreamReader(
        connection_settings=connection,
        only_events=[DeleteRowsEvent, WriteRowsEvent, UpdateRowsEvent],
        server_id=100,
        blocking=True,
        log_file='mysql-bin.000003',
        resume_stream=True,
    )

    for binlogevent in stream:
        for row in binlogevent.rows:
            print(json.dumps(row, cls=DateTimeEncoder))
            kinesis.put_record(StreamName=str(sys.argv[4]),
                               Data=json.dumps(row, cls=DateTimeEncoder),
                               PartitionKey="default")
You can use row['values'], which returns the dict nested under the "values" key.
An example in your code would be:

kinesis.put_record(StreamName=str(sys.argv[4]), Data=json.dumps(row['values'], cls=DateTimeEncoder), PartitionKey="default")
If you want to remove the "values" wrapper from the string that json.dumps produces, you can do a replace:

json_string = json.dumps(row, cls=DateTimeEncoder)
json_string = json_string.replace('{"values": ', '', 1)[:-1]  # drop the wrapper and its closing brace

and then use put_record on that string. Note that your JSON object is a dictionary, so you can't just remove the "values" string/key from it; if you did actually remove the "values" key, the object would be empty.
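For anyone wanting to verify the first approach in isolation, here is a minimal sketch with a hard-coded row shaped like the one in the question:

```python
import json

# A row shaped like the binlog event payload in the question
row = {"values": {"empid": 20000, "empName": "Sourav", "empSal": 8200}}

# Dumping only the nested dict drops the outer "values" wrapper
inner = json.dumps(row["values"])
print(inner)
```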
I am implementing 'PATCH' on the server-side for partial updates to my resources.
Assuming I do not expose my SQL database schema in JSON requests/responses, i.e. there exists a separate mapping between keys in JSON and columns of a table, how do I best figure out which column(s) to update in SQL given the JSON of a partial update?
For example, suppose my table has 3 columns: col_a, col_b, and col_c, and the mapping between JSON keys to table columns is: a -> col_a, b -> col_b, c -> col_c. Given JSON-PATCH data:
[
{"op": "replace", "path": "/b", "value": "some_new_value"}
]
What is the best way to programmatically apply this partial update to col_b of the table corresponding to my resource?
Of course I can hardcode these mappings in a keys_to_columns dict somewhere, and upon each request with some patch_data, I can do something like:
mapped_updates = {keys_to_columns[p['path'].split('/')[-1]]: p['value'] for p in patch_data}
then use mapped_updates to construct the SQL statement for the DB update. If the above throws a KeyError, I know the request data is invalid and can reject it. And I will need to do this for every table/resource I have.
I wonder if there is a better way.
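For concreteness, the hardcoded approach I describe can be sketched end to end (the mapping and values are the ones from my example above):

```python
# Hard-coded mapping for the example table above
keys_to_columns = {"a": "col_a", "b": "col_b", "c": "col_c"}

patch_data = [
    {"op": "replace", "path": "/b", "value": "some_new_value"},
]

try:
    mapped_updates = {
        keys_to_columns[p["path"].split("/")[-1]]: p["value"]
        for p in patch_data
    }
except KeyError:
    # unknown JSON key -> the patch is invalid for this resource
    mapped_updates = None

print(mapped_updates)  # {'col_b': 'some_new_value'}
```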
This is similar to what you're thinking of doing, but instead of creating maps, you can create classes for each table instead. For example:
class Table(object):
    """Parent class of all tables"""
    def get_columns(self, **kwargs):
        return {getattr(self, k): v for k, v in kwargs.iteritems()}

class MyTable(Table):
    """table MyTable"""
    # columns mapping
    a = "col_a"
    b = "col_b"

tbl = MyTable()
tbl.get_columns(a="value a", b="value b")
# the above returns {"col_a": "value a", "col_b": "value b"}

# similarly:
tbl.get_columns(**{p['path'].split('/')[-1]: p['value'] for p in patch_data})
This is just something basic to get inspired from, these classes can be extended to do much more.
patch_json = [
    {"op": "replace", "path": "/b", "value": "some_new_value"},
    {"op": "replace", "path": "/a", "value": "some_new_value2"}
]

def fix_key(item):
    item['path'] = item['path'].replace('/', 'col_')
    return item

print map(fix_key, patch_json)
Is there any way to check every element of a list comprehension in a clean and elegant way?
For example, if I have some db result which may or may not have a 'loc' attribute, is there any way to have the following code run without crashing?
db_objs = SQL("query")
top_scores = [{"name":obj.name, "score":obj.score, "latitude":obj.loc.lat, "longitude":obj.loc.lon} for obj in db_objs]
If there is any way to fill these fields in, as None or the empty string or anything, that would be very nice. Python tends to be a magical thing, so if any of you have sage advice it would be much appreciated.
Clean and unified solution:
from operator import attrgetter as _attrgetter

def attrgetter(attrname, default=None):
    getter = _attrgetter(attrname)
    def wrapped(obj):
        try:
            return getter(obj)
        except AttributeError:
            return default
    return wrapped

GETTER_MAP = {
    "name": attrgetter('name'),
    "score": attrgetter('score'),
    "latitude": attrgetter('loc.lat'),
    "longitude": attrgetter('loc.lon'),
}

def getdict(obj):
    return dict(((k, v(obj)) for (k, v) in GETTER_MAP.items()))

if __name__ == "__main__":
    db_objs = SQL("query")
    top_scores = [getdict(obj) for obj in db_objs]
    print top_scores
Try this:
top_scores = [{"name": obj.name,
               "score": obj.score,
               "latitude": obj.loc.lat if hasattr(obj.loc, 'lat') else 0,
               "longitude": obj.loc.lon if hasattr(obj.loc, 'lon') else 0}
              for obj in db_objs]
Or, in your query set a default value.
It's not pretty, but getattr() should work:
top_scores = [
    {
        "name": obj.name,
        "score": obj.score,
        "latitude": getattr(getattr(obj, "loc", None), "lat", None),
        "longitude": getattr(getattr(obj, "loc", None), "lon", None),
    }
    for obj in db_objs
]
This will set the dict item with key "latitude" to obj.loc.lat (and so on) if it exists; if it doesn't (and even if obj.loc doesn't exist), it'll be set to None.
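To see this behave on an object with no loc at all, here is a throwaway sketch (the class and values are made up):

```python
class FakeRow(object):
    """Throwaway stand-in for a db result object."""
    pass

obj = FakeRow()
obj.name = "alice"
obj.score = 10  # note: no 'loc' attribute at all

entry = {
    "name": obj.name,
    "score": obj.score,
    # inner getattr yields None when 'loc' is missing,
    # and the outer getattr then falls back to None as well
    "latitude": getattr(getattr(obj, "loc", None), "lat", None),
    "longitude": getattr(getattr(obj, "loc", None), "lon", None),
}
print(entry["latitude"])  # None
```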