How to use Task Queue (Push Queue) with Protorpc.
I have a landing page form that performs multiple actions when it is submitted:
Save the fields in the DataStore
Send an email to the form's sender
Send the fields to a third party application (let's say a CRM)
The form submission is implemented on the server side with ProtoRPC.
class FormRequest(messages.Message):
    field1 = messages.StringField(1, required=True)
    field2 = messages.StringField(2, required=True)
    ...

class FormApi(remote.Service):

    @remote.method(FormRequest, message_types.VoidMessage)
    def insert(self, request):
        # Save the form in the Datastore
        travel = FormModel(field1=request.field1, field2=request.field2)
        travel.put()

        # Send an email to the client
        ...

        # Send the data to a third party
        ...

        return message_types.VoidMessage()
This solution is slow because the user has to wait for all of these actions to finish (in this case it is only 2-3 seconds, but that is a lot for a landing page form).
A better solution would be to use the task queue to minimize the time the user has to wait:
(As an example)
class ...
    @remote ...
    def ...
        # Save the form in the Datastore
        taskqueue.add(url='/api/worker/save_to_db',
                      params={'field1': request.field1, 'field2': request.field2})

        # Send an email to the client
        taskqueue.add(url='/api/worker/send_email',
                      params={'field1': request.field1, 'field2': request.field2})

        # Send the data to a third party (CRM)
        taskqueue.add(url='/api/worker/send_to_crm',
                      params={'field1': request.field1, 'field2': request.field2})
The "problem" is that protorpc get only json object as request.
How to do this with TaskQueue(Push) ?
The default behavior of TaskQueue is to send params as a string of urlencoded and it's not conveniant to protorpc.
Let's define a Worker service for the taskqueue:
class WorkersApi(remote.Service):

    @remote.method(FormRequest, message_types.VoidMessage)
    def save_to_db(self, request):
        # Instead of writing out each parameter, I am using this "cheat"
        params = {}
        for field in request.all_fields():
            params[field.name] = getattr(request, field.name)

        # Save the data in the datastore
        form_model = FormModel(**params)
        form_model.put()

        return message_types.VoidMessage()
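For completeness, the worker service also has to be mounted somewhere so the task URLs can resolve. A minimal sketch, assuming you serve the API with the ProtoRPC WSGI helper; the /api/workers prefix is an assumption chosen to match the task URLs used below (methods are then invoked by POSTing to /api/workers.<method_name>):

from protorpc.wsgi import service

# Hypothetical WSGI mapping, not from the original post: mount WorkersApi
# under /api/workers so tasks can POST to /api/workers.save_to_db etc.
app = service.service_mappings([
    ('/api/workers', WorkersApi),
])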
Note that I am using the same message object for the real request and for the task queue request (it is a big advantage not to have to create a different message object for each request).
The question is how to call this ProtoRPC method from the task queue.
As I said in the question, the default behavior of the task queue is not convenient.
The solution is to serialize the original request/message (in our example the FormRequest) back to a JSON string and add a header to the task so that the payload is sent as application/json.
Here's the code:
# This format string is taken from the util file in the protorpc folder
# of the Google App Engine source code
format_string = '%Y-%m-%dT%H:%M:%S.%f'

params = {}
for field in request.all_fields():
    value = getattr(request, field.name)
    if isinstance(value, datetime.datetime):
        value = value.strftime(format_string)
    params[field.name] = value

taskqueue.add(url='/api/workers.save_to_db', payload=json.dumps(params),
              headers={'content-type': 'application/json'})
Do the same for the "email" and the "crm".
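If you want to avoid repeating that serialization for each task, one option is a small helper of my own (not part of ProtoRPC or the original post), using the same datetime/json/taskqueue imports as above:

def enqueue_json_task(url, request):
    # Serialize the ProtoRPC message to JSON and enqueue it, exactly as above
    format_string = '%Y-%m-%dT%H:%M:%S.%f'
    params = {}
    for field in request.all_fields():
        value = getattr(request, field.name)
        if isinstance(value, datetime.datetime):
            value = value.strftime(format_string)
        params[field.name] = value
    taskqueue.add(url=url, payload=json.dumps(params),
                  headers={'content-type': 'application/json'})

enqueue_json_task('/api/workers.save_to_db', request)
enqueue_json_task('/api/workers.send_email', request)
enqueue_json_task('/api/workers.send_to_crm', request)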
You can also use put_async() so the write does not block: it asynchronously writes the entity's data to the Datastore.
For example:
travel = FormModel(field1=request.field1, field2=request.field2)
travel.put_async()
# next action
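If you need to be sure the write completed before the handler returns, you can keep the future and resolve it at the end (a sketch, assuming an ndb model):

future = travel.put_async()
# ... do the other actions (email, CRM, etc.) ...
future.get_result()  # only blocks for whatever work is still outstanding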
Related
I have a task to create a REST API that will be responsible for handling messages. As a part of the functionality, I should be able to write a message. This message should contain the following fields:
id
sender
receiver
message itself
subject
creation date
So my idea was to have a route that handles the object I send as an argument, but I am not sure I can do so. What would you recommend in this case?
For now, I can handle it somehow like this:
@app.route('/new_message/<string:sender>/<string:receiver>/<string:message>/', methods=['POST'])
def get_message(sender, receiver, message):
    sender = sender
    receiver = receiver
    message = message
    # Code that will add the data either to the database or to a JSON file
    # if I decide not to implement a DB for this task
    return 'Sent successfully'
Thanks for your advice!
I suggest using a JSON request body instead of path parameters for the POST method.
Here is an example:
from flask import request, Flask

app = Flask(__name__)

@app.route('/new_message', methods=['POST'])
def get_message():
    payload = request.get_json()
    sender = payload['sender']
    receiver = payload['receiver']
    message = payload['message']
    return 'Sent successfully'

if __name__ == "__main__":
    app.run()
Now, if you want to send the message as an object, you can include it in the JSON body. And not only the message object: you can add any number of fields if required.
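For example, a client could call the endpoint like this (a sketch assuming the default Flask development server on port 5000; the field values are just placeholders):

import requests

resp = requests.post(
    'http://127.0.0.1:5000/new_message',
    json={
        'sender': 'alice@example.com',
        'receiver': 'bob@example.com',
        'subject': 'Greetings',
        'message': 'Hello!',
    },
)
print(resp.status_code, resp.text)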
I am trying to use the API call users.profile.get to find a user's profile picture. The problem is that the request requires not JSON but URL-encoded queries (I think?). I have the user ID already, but I need to know how to send it to Slack correctly, preferably with the api_call method. How would I go about doing this?
Here is the documentation: https://api.slack.com/methods/users.profile.get
for users in collection.find():
    start_date, end_date = users['start_date'], users['end_date']
    user_data = client.api_call('users.profile.get',
                                """What would I do here?""")
    user_images[users['user_id']] = user_data['image_72']
    block.section(
        text=
        f'from *{date_to_words(start_date[0], start_date[1], start_date[2])}* to *{date_to_words(end_date[0], end_date[1], end_date[2])}*'
    )
    block.context(data=((
        'img', user_images[users['user_id']],
        '_error displaying image_'
    ), ('text',
        f'<!{users["user_id"]}>. _Contact them if you have any concerns_'
    )))
You can pass the parameters of the API as named arguments in your function call.
For users.profile.get you want to provide the user ID, e.g. "U12345678".
Then your call would look like this (with slackclient v1):
response = sc.api_call(
    "users.profile.get",
    user="U12345678"
)
assert response["ok"]
user_data = response["profile"]
Or like this with slackclient v2:
response = sc.users_profile_get(user="U12345678")
assert response["ok"]
user_data = response["profile"]
To answer your question: you do not have to worry about how the API is called, since that is handled by the library. But technically most of Slack's API endpoints accept parameters both as URL-encoded form data and as JSON.
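Applied to the loop in your question, it would look roughly like this (a sketch assuming slackclient v1; collection, user_images, and the field names come from your snippet):

for users in collection.find():
    # pass the user ID as a named argument instead of a second positional one
    response = sc.api_call('users.profile.get', user=users['user_id'])
    if response.get('ok'):
        user_images[users['user_id']] = response['profile']['image_72']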
I am developing a Gmail extracting app and using the Gmail API to fetch mail from the server. The problem is that the fetch time for mails is too long even though I used threading in the back-end framework. Now I am going to implement a feature that offers the user a bulk download: "once your download is ready, we will mail you". For that I want to run download.py (shown below, from the app tree) in the background, and once the fetch is over it should terminate.
At the very bottom of the code I want to mail the user that their download is ready, but it is not working even though I have defined the mail server in settings.py.
download.py
import httplib2, base64
from stripogram import html2text
from oauth2client.django_orm import Storage
from apiclient.discovery import build
from oauth2client import client
from django.contrib.auth.models import User
from .models import CredentialsModel
from django.conf import settings
from rest_framework.views import APIView
from rest_framework.response import Response
from rest_framework import authentication, permissions
from gextracto import models
from gextracto.models import UserData
from django.core.mail import EmailMessage
from django.core import mail
connection = mail.get_connection()
class ListMails(APIView):
    """
    Gets a list of a specified number of mail ids for a particular label
    Extracts the email in the form of text/plain
    The API returns all the extracted mails
    """
    authentication_classes = (authentication.SessionAuthentication,)
    permission_classes = (permissions.IsAuthenticated,)

    def extract_headers(self, message):
        """
        Extract the headers for a single mail and return them
        {To, From, Subject}
        """
        needed_fields = ('From', 'To', 'Subject')
        return {i['name']: i['value'] for i in message['payload']['headers'] if i['name'] in needed_fields}

    def get_message_body(self, message):
        """
        Get the body of an email
        Recursively look for the body for different mimetypes
        Returns the body as text/plain
        """
        if 'payload' in message:
            return self.get_message_body(message['payload'])
        elif 'parts' in message:
            return self.get_message_body(message['parts'][0])
        else:
            data = base64.urlsafe_b64decode(message['body']['data'].encode('ASCII'))
            markdown_data = html2text(data)  # .decode('utf-8', "replace")
            data = data.replace("\n", "<br/>")
            # return {markdown, html}
            return {'markdown': unicode(markdown_data, "ISO-8859-1"), 'html': unicode(data, "ISO-8859-1")} if markdown_data else {'html': unicode(data, "ISO-8859-1")}

    def message_content_html(self, userId, message_id, service):
        """
        Make queries to get the content for a mail given its message id
        Returns all the content
        """
        content = {'id': message_id}
        # try
        message = service.users().messages().get(userId=userId, id=message_id).execute()
        mimetype = message['payload']['mimeType']
        if mimetype == 'text/html':
            return {}
        #
        else:
            body = self.get_message_body(message)
            if body == "":
                body = "<empty message>"
            headers = self.extract_headers(message)
            content['body'] = body
            content.update(headers)
            return content

    def collect_mails(self, user, messages, service):
        """
        Collect the content for all the mails currently downloaded
        """
        all_messages = []
        try:
            for message in messages:
                content = self.message_content_html(user.username, message['id'], service)
                if content:
                    all_messages.append(content)
            return all_messages
        # return an empty list if no messages were downloaded
        except KeyError:
            return []

    def get(self, request, format=None):
        """
        Handles the GET request to get all the mails for a label
        Paginates through the GAPI content if required
        API returns all the messages
        {To, From, Subject, body}
        """
        user = request.user
        storage = Storage(CredentialsModel, 'id', user, 'credential')
        credentials = storage.get()
        http_auth = credentials.authorize(httplib2.Http())
        service = build('gmail', 'v1', http=http_auth)
        user_Id = user.username
        label_id = request.GET['label']
        # try
        # call Google API with a request to get a list of all the labels
        response = service.users().messages().list(userId=user_Id, labelIds=label_id, maxResults=100).execute()
        all_messages = self.collect_mails(user, response['messages'], service)
        if not all_messages:
            return Response([])
        else:
            if 'nextPageToken' in response:
                page_token_flag = True
                # request more mails if the download limit has not yet been satisfied
                while page_token_flag:
                    response = service.users().messages().list(userId=user_Id, pageToken=response['nextPageToken'], maxResults=100).execute()
                    all_messages.append(self.collect_mails(user, response['messages'], service))
                    print(all_messages)
                    # for x in range(0, len(all_messages)):
                    #     b = all_messages[10]
                    #     instance = UserData(user_id=user, label=label_id, sender=b['From'], subject=b['Subject'], body=b['body'])
                    #     instance.save()
                    page_token_flag = 'nextPageToken' in response
            ##
            for x in range(0, len(all_messages)):
                b = all_messages[10]
                instance = UserData(user_id=user, label=label_id, sender=b['From'], subject=b['Subject'], body=b['body'])
                instance.save()
            print("Hi i am here!!!")
            email = EmailMessage('Your Download Ready!', 'http://127.0.0.1:8000/admin/gextracto/userdata/', to=[user], connection=connection)
            email.send()
            connection.close()
            return Response(all_messages)
Please tell me how to run it in the background. If you need any other info, please do ask. Thanks.
I don't know the exact requirements, but I would think about Celery for running background tasks. This approach lets you manage all post-script activities in a native Django manner.
You can also think about running the Django script via cron (as a manage.py command), but that can lead to some limitations.
As for the email-sending failure: I believe you don't need to close the connection after sending the email. I usually use the send_mail()/send_mass_mail() functions; please check their code to get an idea.
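A rough sketch of the Celery approach (the task name, module layout, and the "from" address are my assumptions, not your code): move the extraction into a task, let the view just enqueue it and return, and send the notification mail at the end of the task.

# tasks.py (sketch)
from celery import shared_task
from django.core.mail import send_mail

@shared_task
def extract_and_notify(user_email, label_id):
    # ... run the Gmail extraction here (the logic from ListMails.get) ...
    send_mail(
        'Your Download Ready!',
        'http://127.0.0.1:8000/admin/gextracto/userdata/',
        'noreply@example.com',  # "from" address -- an assumption
        [user_email],
    )

# in the view: enqueue and return immediately
# extract_and_notify.delay(request.user.email, label_id)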
I have asked a few questions about this before, but still haven't solved my problem.
I am trying to allow Salesforce to remotely send commands to a Raspberry Pi via JSON (REST API). The Raspberry Pi controls the power of some RF Plugs via an RF Transmitter called a TellStick. This is all setup, and I can use Python to send these commands. All I need to do now is make the Pi accept JSON, then work out how to send the commands from Salesforce.
Someone kindly forked my repo on GitHub, and provided me with some code which should make it work. But unfortunately it still isn't working.
Here is the previous question: How to accept a JSON POST?
And here is the forked repo: https://github.com/bfagundez/RemotePiControl/blob/master/power.py
What do I need to do? I have sent test JSON messages with the Postman extension and with cURL but keep getting errors.
I just want to be able to send various variables, and let the script work the rest out.
I can currently post to a .py script I have with some URL variables, so /python.py?power=on&device=1&time=10&pass=whatever and it figures it out. Surely there's a simple way to send this in JSON?
Here is the power.py code:
# add flask here
from flask import Flask, request

app = Flask(__name__)
app.debug = True

# keep your code
import time
import cgi
from tellcore.telldus import TelldusCore

core = TelldusCore()
devices = core.devices()

# define a "power ON api endpoint"
@app.route("/API/v1.0/power-on/<deviceId>", methods=['POST'])
def powerOnDevice(deviceId):
    payload = {}
    # get the device by id somehow
    device = devices[deviceId]
    # get some extra parameters
    # let's say how long to stay on
    params = request.get_json()
    try:
        device.turn_on()
        payload['success'] = True
        return payload
    except:
        payload['success'] = False
        # add an exception description here
        return payload

# define a "power OFF api endpoint"
@app.route("/API/v1.0/power-off/<deviceId>", methods=['POST'])
def powerOffDevice(deviceId):
    payload = {}
    # get the device by id somehow
    device = devices[deviceId]
    try:
        device.turn_off()
        payload['success'] = True
        return payload
    except:
        payload['success'] = False
        # add an exception description here
        return payload

app.run()
Your deviceId variable is a string, not an integer; it contains the digit '1', but that's not yet an integer.
You can either convert it explicitly:
device = devices[int(deviceId)]
or tell Flask you wanted an integer parameter in the route:
#app.route("/API/v1.0/power-on/<int:deviceId>", methods=['POST'])
def powerOnDevice(deviceId):
where the int: part is a URL route converter.
Your views should return a response object, a string or a tuple instead of a dictionary (as you do now), see About Responses. If you wanted to return JSON, use the flask.json.jsonify() function:
# define a "power ON api endpoint"
#app.route("/API/v1.0/power-on/<int:deviceId>", methods=['POST'])
def powerOnDevice(deviceId):
device = devices[deviceId]
# get some extra parameters
# let's say how long to stay on
params = request.get_json()
try:
device.turn_on()
return jsonify(success=True)
except SomeSpecificException as exc:
return jsonify(success=False, exception=str(exc))
where I also altered the exception handler to handle a specific exception only; try to avoid Pokemon exception handling; do not try to catch them all!
To retrieve the JSON POST values you must use request.json:
if request.json and 'email' in request.json:
    request.json['email']
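On the sending side, the JSON equivalent of the /python.py?power=on&device=1&time=10&pass=whatever style call could look like this (a sketch: host, port, and the extra fields are assumptions; the route comes from power.py above):

import requests

resp = requests.post(
    'http://raspberrypi.local:5000/API/v1.0/power-on/1',  # assumed host/port
    json={'time': 10, 'pass': 'whatever'},                # extra parameters
)
print(resp.json())  # e.g. {'success': True}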
Hi everyone, I am using a script which involves:
import oauth2 as oauth
import oauth2.clients.imap as imaplib
import email
conn = imaplib.IMAP4_SSL('imap.googlemail.com')
conn.debug = 4
# This is the only thing in the API for impaplib.IMAP4_SSL that has
# changed. You now authenticate with the URL, consumer, and token.
conn.authenticate(url, consumer, token)
# Once authenticated everything from the impalib.IMAP4_SSL class will
# work as per usual without any modification to your code.
conn.select('[Gmail]/All Mail')
response, item_ids = conn.search(None, "SINCE", "01-Jan-2011")
item_ids = item_ids[0].split()
# Now iterate through the ids and retrieve all the email while parsing
# and storing into whatever db you use.
for emailid in item_ids:
    resp, data = conn.fetch(emailid, "(RFC822)")
    email_body = data[0][1]
    mail = email.message_from_string(email_body)
My current problem is that I can't seem to retrieve the body of the mail instance. I am able to see the content of the email by printing it or calling mail.as_string(), but even with mail.keys() and mail.values() I am unable to see the mail's content (the main message).
What is wrong with this email lib API (or rather, what am I doing wrong)?
From email docs:
You can pass the parser a string or a file object, and the parser will
return to you the root Message instance of the object structure.
For simple, non-MIME messages the payload of this root object will
likely be a string containing the text of the message. For MIME
messages, the root object will return True from its is_multipart()
method, and the subparts can be accessed via the get_payload() and
walk() methods.
So use get_payload(), or if the message is multipart, call the walk() method and then use get_payload() on the desired subpart.
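Continuing from the last line of your loop, a minimal sketch (Python 2, matching your script) would be:

mail = email.message_from_string(email_body)

if mail.is_multipart():
    # walk the MIME tree and grab the first text/plain part
    for part in mail.walk():
        if part.get_content_type() == 'text/plain':
            body = part.get_payload(decode=True)
            break
else:
    body = mail.get_payload(decode=True)

print(body)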