Memcache Outputting Null Value - Python

I am trying to implement the pseudocode from the Google documentation, Memcache Examples, so that I can pass the result into a dictionary, but I am getting a null value. I've searched for solutions, for example Google App Engine retrieving null values from memcache, but they were unhelpful.
How can I get the value of the_id cached for 500 seconds and returned for use by the update_dict function? What am I doing wrong?
CODE:
def return_id(self):
    the_id = str(uuid.uuid1())
    data = memcache.get(the_id)
    print data
    if data is not None:
        return data
    else:
        memcache.add(the_id, the_id, 500)
        return data

def update_dict(self):
    ....
    id = self.return_id()
    info = {
        'id': id,
        'time': time
    }
    info_dump = json.dumps(info)
    return info_dump
OUTPUT:
{"id": null, "time": "1506437063"}

This issue has been resolved. The issues were:
- my key wasn't passed as a proper string name, 'the_id'
- I wasn't assigning data in my else statement
Solution:
....
the_id = str(uuid.uuid1())
data = memcache.get('the_id')  # fix: pass a string for the key name
print data
if data is not None:
    return data
else:
    data = the_id  # fix: assign the object that needed to be passed to data
    memcache.add('the_id', the_id, 500)
    return data
....
OUTPUT:
{"id": "25d853ee-a47d-11e7-8700-69aedf15b2da", "time": "1506437063"}
{"id": "25d853ee-a47d-11e7-8700-69aedf15b2da", "time": "1506437063"}

Related

Can't update record in DynamoDB

I'm really new to Python and coding in general, so it would really help if someone could point me in the right direction with some code.
To start off with, I am making a proof of concept for a car park running AWS Rekognition, and I need some help with updating the database. As you can see from the code below, it inserts the reg_plate, entry_time and exit_time into the database fine. But what I am trying to do is: when Rekognition is invoked a second time with the same reg_plate, it should update only the exit_time of the existing record in the database.
import boto3
import time

def detect_text(photo, bucket):
    client = boto3.client('rekognition')
    response = client.detect_text(Image={'S3Object': {'Bucket': bucket, 'Name': photo}})
    textDetections = response['TextDetections']
    for text in textDetections:
        if text['Type'] == 'LINE':
            return text['DetectedText']
    return False

def main(event, context):
    bucket = ''
    photo = 'regtest.jpg'
    text_detected = detect_text(photo, bucket)
    if text_detected == False:
        exit()
    print("Text detected: " + str(text_detected))
    entry_time = str(int(time.time()))
    dynamodb = boto3.client('dynamodb')
    table_name = 'Customer_Plate_DB'
    item = {
        'reg_plate': {
            'S': str(text_detected)
        },
        'entry_time': {
            'S': entry_time
        },
        'exit_time': {
            'S': str(0)
        }
    }
    dynamodb.put_item(TableName=table_name, Item=item)
I've tried various if statements with no luck; whenever I try, it just keeps creating new records in the database and the exit_time is never updated.
In DynamoDB a PutItem will overwrite/insert an item, so it's not what you need if you want to update a single attribute. You will need to use UpdateItem:
response = dynamodb.update_item(
    TableName=table_name,
    Key={
        'reg_plate': {'S': str(text_detected)},
        'entry_time': {'S': entry_time}
    },
    UpdateExpression="SET #t = :t",
    ExpressionAttributeValues={":t": {'S': str(0)}},
    ExpressionAttributeNames={"#t": "exit_time"},
    ConditionExpression='attribute_exists(reg_plate)'
)
In your question you don't say what your partition and/or sort keys are, so in the example above I've used reg_plate plus entry_time as the key; change that to suit your table.
The ConditionExpression means exit_time is only set if an item with that key already exists in your table.
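To tie this back to the handler in the question, here is a minimal sketch of branching between the first and second sighting of a plate (a hypothetical record_plate helper; it assumes, for simplicity, that reg_plate alone is the table's partition key):

import time
import boto3

dynamodb = boto3.client('dynamodb')
TABLE_NAME = 'Customer_Plate_DB'

def record_plate(reg_plate):
    now = str(int(time.time()))
    # Look for an existing record for this plate (assumes reg_plate is the partition key)
    existing = dynamodb.get_item(TableName=TABLE_NAME, Key={'reg_plate': {'S': reg_plate}})
    if 'Item' not in existing:
        # First sighting: create the record with entry_time set and exit_time zeroed
        dynamodb.put_item(
            TableName=TABLE_NAME,
            Item={
                'reg_plate': {'S': reg_plate},
                'entry_time': {'S': now},
                'exit_time': {'S': '0'},
            },
        )
    else:
        # Second sighting: only touch exit_time on the existing record
        dynamodb.update_item(
            TableName=TABLE_NAME,
            Key={'reg_plate': {'S': reg_plate}},
            UpdateExpression='SET exit_time = :t',
            ExpressionAttributeValues={':t': {'S': now}},
        )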

Return populated fields from a join table in SQLAlchemy Flask

UserService is a join table connecting the Users and Services tables. I have a query that returns all the UserService rows that have a user_id equal to the id passed in the route.
@bp.route('/user/<id>/services', methods=['GET'])
def get_services_from_user(id):
    user = db_session.query(User).filter(id == User.id).first()
    if not user:
        return jsonify({'Message': f'User with id: {id} does not exist', 'Status': 404})
    user_services = db_session.query(UserService).filter(user.id == UserService.user_id).all()
    result = user_services_schema.dump(user_services)
    for service in result:
        user_service = db_session.query(Service).filter(service['service_id'] == Service.id).all()
        result = services_schema.dump(user_service)
    return jsonify(result)
result holds a list that looks like this:
[
    {
        "id": 1,
        "service_id": 1,
        "user_id": 1
    },
    {
        "id": 2,
        "service_id": 2,
        "user_id": 1
    }
]
How could I then continue this query, or add another query, to get the actual populated services (Service class) instead of just the service_id, and return all of them in a list? The for loop is my attempt at that, but it's currently failing: I only get back a list with one populated service, not the second.
You could try something like this:
userServices = db_session.query(Users, Services).filter(Users.id == Services.id).all()
userServices will be an iterable of row tuples. You can iterate through it with:
for value, index in userServices:
(it could be index, value; I'm not 100% sure of the order).
There is another way using .join() and adding the columns that you need with .add_columns().
There is also another way using:
db_session.query(Users.id, Services.id, ... 'all the columns that you need' ...).filter(Users.id == Services.id).all()
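Applied to the models named in the question (User, Service, and the UserService join table), one way to fetch the populated Service rows in a single query might be the following sketch, placed inside the route in place of the for loop (it reuses the question's db_session and services_schema):

# All Service rows linked to this user through the UserService join table
services = (
    db_session.query(Service)
    .join(UserService, UserService.service_id == Service.id)
    .filter(UserService.user_id == user.id)
    .all()
)
result = services_schema.dump(services)
return jsonify(result)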

How to iterate over a JSON array of objects in Python

What is the best way to iterate over an array of objects and use that data to update selected rows in a database table?
I want to update the database rows whose id matches an id from the tasklist array in the JSON I've provided below, and set attached_document_ins.is_viewed to the checked value from the JSON for that id.
For example, if id == 35 then attached_document_ins.is_viewed = True, because the checked value of id 35 is True.
What is the best algorithm for that? I have provided my code below.
Code:
def post(self, request):
    data = request.data
    print("Response Data :", data)
    try:
        attached_document_ins = DocumentTask.objects.filter(id=tasklist_id)
        for attached_document_ins in attached_document_ins:
            attached_document_ins.is_viewed = True
            attached_document_ins.save()
        return Response("Success", status=status.HTTP_200_OK)
    except DocumentTask.DoesNotExist:
        return Response("Failed.", status=status.HTTP_400_BAD_REQUEST)
JSON (data):
{
    'tasklist': [
        {
            'files': [],
            'checked': True,
            'company': 6,
            'task': 's',
            'applicant': 159,
            'id': 35
        },
        {
            'files': [],
            'checked': True,
            'company': 6,
            'task': 'ss',
            'applicant': 159,
            'id': 36
        },
        {
            'files': [],
            'checked': True,
            'company': 6,
            'task': 'sss',
            'applicant': 159,
            'id': 37
        }
    ]
}
Here is one way you could do it:
for task in data['tasklist']:
    if task['checked']:
        document = DocumentTask.objects.get(id=task['id'])
        document.is_viewed = True
        document.save()
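If you also want unchecked tasks to clear the flag, and want to avoid one query per task, a variation using queryset update() (same DocumentTask model as above) could be:

checked_ids = [t['id'] for t in data['tasklist'] if t['checked']]
unchecked_ids = [t['id'] for t in data['tasklist'] if not t['checked']]

# One UPDATE per group instead of a get()/save() per task
DocumentTask.objects.filter(id__in=checked_ids).update(is_viewed=True)
DocumentTask.objects.filter(id__in=unchecked_ids).update(is_viewed=False)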

Python container troubles

Basically, what I am trying to do is generate a JSON list of the SSH keys (public and private) on a server using Python. I am using nested dictionaries and, while it works to an extent, the problem is that each user's entry ends up containing every other user's keys; I need it to list only the keys that belong to each user.
Below is my code:
def ssh_key_info(key_files):
    for f in key_files:
        c_time = os.path.getctime(f)  # gets the creation time of file (f)
        username_list = f.split('/')  # splits on the / character
        user = username_list[2]  # assigns the username field from the above split to the user variable
        key_length_cmd = check_output(['ssh-keygen', '-l', '-f', f])  # run the ssh-keygen command on the file (f)
        attr_dict = {}
        attr_dict['Date Created'] = str(datetime.datetime.fromtimestamp(c_time))  # converts file create time to string
        attr_dict['Key_Length]'] = key_length_cmd[0:5]  # assigns the first 5 characters of the key_length_cmd output
        ssh_user_key_dict[f] = attr_dict
        user_dict['SSH_Keys'] = ssh_user_key_dict
        main_dict[user] = user_dict
A list containing the absolute paths of the keys (/home/user/.ssh/id_rsa, for example) is passed to the function. Below is an example of what I receive:
{
    "user1": {
        "SSH_Keys": {
            "/home/user1/.ssh/id_rsa": {
                "Date Created": "2017-03-09 01:03:20.995862",
                "Key_Length]": "2048 "
            },
            "/home/user2/.ssh/id_rsa": {
                "Date Created": "2017-03-09 01:03:21.457867",
                "Key_Length]": "2048 "
            },
            "/home/user2/.ssh/id_rsa.pub": {
                "Date Created": "2017-03-09 01:03:21.423867",
                "Key_Length]": "2048 "
            },
            "/home/user1/.ssh/id_rsa.pub": {
                "Date Created": "2017-03-09 01:03:20.956862",
                "Key_Length]": "2048 "
            }
        }
    },
As can be seen, user2's key files are included in user1's output. I may be going about this completely wrong, so any pointers are welcomed.
Thanks for the replies. I read up on nested dictionaries and found that the top answer on this post helped me solve the issue: What is the best way to implement nested dictionaries?
Instead of all the dictionaries, I simplified the code and now have just one dictionary. This is the working code:
class Vividict(dict):
    def __missing__(self, key):  # Sets and return a new instance
        value = self[key] = type(self)()  # retain local pointer to value
        return value  # faster to return than dict lookup

main_dict = Vividict()

def ssh_key_info(key_files):
    for f in key_files:
        c_time = os.path.getctime(f)
        username_list = f.split('/')
        user = username_list[2]
        key_bit_cmd = check_output(['ssh-keygen', '-l', '-f', f])
        date_created = str(datetime.datetime.fromtimestamp(c_time))
        key_type = key_bit_cmd[-5:-2]
        key_bits = key_bit_cmd[0:5]
        main_dict[user]['SSH Keys'][f]['Date Created'] = date_created
        main_dict[user]['SSH Keys'][f]['Key Type'] = key_type
        main_dict[user]['SSH Keys'][f]['Bits'] = key_bits
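For what it's worth, the root cause of the original output was that user_dict and ssh_user_key_dict were created once and then shared by every user. A sketch of the same fix using only collections.defaultdict from the standard library (same inputs as above):

import os
import datetime
from collections import defaultdict
from subprocess import check_output

# One fresh inner dict per user, created on first access
main_dict = defaultdict(lambda: {'SSH Keys': {}})

def ssh_key_info(key_files):
    for f in key_files:
        user = f.split('/')[2]  # /home/<user>/.ssh/...
        key_bit_cmd = check_output(['ssh-keygen', '-l', '-f', f])
        main_dict[user]['SSH Keys'][f] = {
            'Date Created': str(datetime.datetime.fromtimestamp(os.path.getctime(f))),
            'Key Type': key_bit_cmd[-5:-2],
            'Bits': key_bit_cmd[0:5],
        }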

How to decode a DataTables Editor form in Python Flask?

I have a Flask application which is receiving a request from DataTables Editor. Upon receipt at the server, request.form looks like this (e.g.):
ImmutableMultiDict([('data[59282][gender]', u'M'), ('data[59282][hometown]', u''),
('data[59282][disposition]', u''), ('data[59282][id]', u'59282'),
('data[59282][resultname]', u'Joe Doe'), ('data[59282][confirm]', 'true'),
('data[59282][age]', u'27'), ('data[59282][place]', u'3'), ('action', u'remove'),
('data[59282][runnerid]', u''), ('data[59282][time]', u'29:49'),
('data[59282][club]', u'')])
I am thinking of using something like this really ugly code to decode it. Is there a better way?
from collections import defaultdict

# request.form comes in multidict [('data[id][field]',value), ...]
# so we need to exec this string to turn into python data structure
data = defaultdict(lambda: {})  # default is empty dict

# need to define text for each field to be received in data[id][field]
age = 'age'
club = 'club'
confirm = 'confirm'
disposition = 'disposition'
gender = 'gender'
hometown = 'hometown'
id = 'id'
place = 'place'
resultname = 'resultname'
runnerid = 'runnerid'
time = 'time'

# fill in data[id][field] = value
for formkey in request.form.keys():
    exec '{} = {}'.format(d, repr(request.form[formkey]))
This question has an accepted answer and is a bit old, but since the DataTables module still seems to be pretty popular in the jQuery community, I believe this approach may be useful for someone else. I've written a simple parsing function based on a regular expression and the dpath module (which does not appear to be an entirely reliable module, though). The snippet may not be very straightforward because of a fragment that relies on an exception, but that was the only way I found to prevent dpath from trying to resolve strings as integer indices.
import re, dpath.util

rxsKey = r'(?P<key>[^\W\[\]]+)'
rxsEntry = r'(?P<primaryKey>[^\W]+)(?P<secondaryKeys>(\[' \
         + rxsKey \
         + r'\])*)\W*'
rxKey = re.compile(rxsKey)
rxEntry = re.compile(rxsEntry)

def form2dict(frmDct):
    res = {}
    for k, v in frmDct.iteritems():
        m = rxEntry.match(k)
        if not m:
            continue
        mdct = m.groupdict()
        if not 'secondaryKeys' in mdct.keys():
            res[mdct['primaryKey']] = v
        else:
            fullPath = [mdct['primaryKey']]
            for sk in re.finditer(rxKey, mdct['secondaryKeys']):
                k = sk.groupdict()['key']
                try:
                    dpath.util.get(res, fullPath)
                except KeyError:
                    dpath.util.new(res, fullPath, [] if k.isdigit() else {})
                fullPath.append(int(k) if k.isdigit() else k)
            dpath.util.new(res, fullPath, v)
    return res
Practical usage relies on Flask's native request.form.to_dict() method:
# ... somewhere in a view code
pars = form2dict(request.form.to_dict())
The output structure includes both dictionaries and lists, as one would expect. E.g.:
# A little test:
rs = form2dict({
    'columns[2][search][regex]': False,
    'columns[2][search][value]': None,
})
generates:
{
    "columns": [
        null,
        null,
        {
            "search": {
                "regex": false,
                "value": null
            }
        }
    ]
}
Update: to handle lists as dictionaries (in a more efficient way), one may simplify this snippet with the following block in the else part of the if clause:
# ...
else:
    fullPathStr = mdct['primaryKey']
    for sk in re.finditer(rxKey, mdct['secondaryKeys']):
        fullPathStr += '/' + sk.groupdict()['key']
    dpath.util.new(res, fullPathStr, v)
I decided on a way that is more secure than using exec:
from collections import defaultdict

def get_request_data(form):
    '''
    return dict list with data from request.form

    :param form: MultiDict from `request.form`
    :rtype: {id1: {field1:val1, ...}, ...} [fieldn and valn are strings]
    '''
    # request.form comes in multidict [('data[id][field]',value), ...]
    # fill in id field automatically
    data = defaultdict(lambda: {})

    # fill in data[id][field] = value
    for formkey in form.keys():
        if formkey == 'action':
            continue
        datapart, idpart, fieldpart = formkey.split('[')
        if datapart != 'data':
            raise ParameterError("invalid input in request: {}".format(formkey))
        idvalue = int(idpart[0:-1])
        fieldname = fieldpart[0:-1]
        data[idvalue][fieldname] = form[formkey]

    # return decoded result
    return data
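A quick usage sketch against the form shown at the top of the question (field values arrive as strings):

data = get_request_data(request.form)
action = request.form['action']  # 'remove' in the example above
# data[59282]['resultname'] == 'Joe Doe'
# data[59282]['age'] == '27'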
