I am currently using Tornado and asyncmongo to create a website that accesses MongoDB. Everything is working great except when I need to make multiple requests to MongoDB within a single request to my handler. I would like to keep all my database calls asynchronous so that there is no blocking on the server.
How can I accomplish this? There are several cases in which I need to retrieve different documents from multiple collections. I also sometimes need to use data retrieved by the first query in the second, such as a foreign key. I will also need the data from both requests when I render my template.
Thanks!
Have each request trigger a different callback?
e.g. (I didn't test this, and I'm not familiar with asyncmongo, so it probably contains errors):
import asyncmongo
import tornado.web

class Handler(tornado.web.RequestHandler):
    @tornado.web.asynchronous
    def get(self, id):
        self.id = id
        self.db = asyncmongo.Client(pool_id='mypool', host='localhost',
                                    port=27017, dbname='mydb')
        # first query: look up the current user, then continue in on_user()
        self.db.users.find_one({'username': self.current_user},
                               callback=self.on_user)

    def on_user(self, response, error):
        if error:
            raise tornado.web.HTTPError(500)
        self.user = response
        # second query: uses data from the first result (the user document)
        self.db.documents.find_one({'id': self.id, 'user': self.user},
                                   callback=self.on_document)

    def on_document(self, response, error):
        if error:
            raise tornado.web.HTTPError(500)
        # both results are available here for rendering
        self.render('template', first_name=self.user['first_name'],
                    document=response)
I'm trying to integrate GraphQL into my web service, which is written in Tornado (Python). By using dataloaders I can speed up requests and avoid sending multiple queries to my database. The problem is that I can't find any example or definition of a request-level "context" variable in which to store the dataloaders for the GraphQLView. I found an example written in Sanic (refer to this link). Is there anything in Tornado equivalent to Sanic's "context" (get_context), or any example of resolving attributes like this:
class Bandwidth(ObjectType):
    class Meta:
        interfaces = (Service, )

    min_inbits_value = Field(Point)
    max_inbits_value = Field(Point)

    def resolve_min_inbits_value(context, resolve_info, arg1, arg2):
Finally, I managed to access and modify the context at the request level, and I want to share how I did it. I can override the context property in TornadoGraphQLHandler, but I need to parse the raw query:
from graphene_tornado import tornado_graphql_handler
from graphql import parse

class TornadoGraphQLHandler(tornado_graphql_handler.TornadoGraphQLHandler):
    @property
    def context(self):
        data = self.parse_body()
        query, variables, operation_name, id = self.get_graphql_params(self.request, data)
        try:
            # parse the raw query and collect the arguments of the first field
            document = parse(query)
            args = dict()
            for member in document.definitions[0].selection_set.selections[0].arguments:
                args[member.name.value] = member.value.value
            return <dataloaders with the arguments in request here>
        except:
            return self.request
This way, I can access the dataloaders via "info.context" in my graphene resolvers at the next level.
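As an illustration, a resolver can then pick the dataloaders up from info.context. A minimal sketch; the point_loader attribute is hypothetical, standing in for whatever the context property above returns:

class Bandwidth(ObjectType):
    class Meta:
        interfaces = (Service, )

    min_inbits_value = Field(Point)

    def resolve_min_inbits_value(self, info, arg1, arg2):
        # info.context is whatever the overridden context property returned,
        # e.g. an object carrying the request-scoped dataloaders
        return info.context.point_loader.load(arg1)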
I'm trying to stream large CSVs to clients from my Flask server, which uses Flask-SQLAlchemy.
When configuring the app (using the factory pattern), db.session.close() is called after each request:
@app.after_request
def close_connection(r):
    db.session.close()
    return r
This configuration has worked great up until now, as all the requests are short-lived. But when streaming a response, the SQLAlchemy session is closed prematurely, throwing the following error when the generator is called:
sqlalchemy.orm.exc.DetachedInstanceError: Parent instance <Question> is not bound to a Session; lazy load operation of attribute 'offered_answers' cannot proceed
Pseudo-code:
@app.route('/export')
def export_data():
    answers = Answer.query.all()
    questions = Question.query.all()

    def generate():
        # iterate through answers/questions and write out various relationships to CSV
        ...

    response = Response(stream_with_context(generate()), mimetype='text/csv')
    return response
I've tried multiple configurations of using / not using stream_with_context, and global flags in close_connection to avoid automatically closing the connection, but the same error persists.
@app.after_request was closing the database session before the generator that streams the file was ever invoked.
The solution was to move db.session.close() to @app.teardown_request, which does not run until the streamed response has been fully consumed. stream_with_context must also be used when instantiating Response.
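A minimal sketch of that fix, assuming the same app, db, and models as above (the CSV columns are placeholders):

from flask import Response, stream_with_context

@app.teardown_request
def close_connection(exc):
    # runs when the request context is popped, i.e. after the
    # streaming generator has been fully consumed
    db.session.close()

@app.route('/export')
def export_data():
    answers = Answer.query.all()

    def generate():
        for answer in answers:
            yield '%s,%s\n' % (answer.id, answer.text)  # write CSV rows here

    # stream_with_context keeps the request context (and session) alive
    # while the response is being streamed
    return Response(stream_with_context(generate()), mimetype='text/csv')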
I'm creating an API with Flask that is being used by a mobile platform, but I also want the application itself to consume the API in order to render web content. I'm wondering what the best way is to access API resource methods inside of Flask. For instance, if I have the following class added as a resource:
class FooAPI(Resource):
    def __init__(self):
        # do some things
        super(FooAPI, self).__init__()

    def post(self, id):
        # return something
        pass

    def get(self):
        # return something
        pass

api = Api(app)
api.add_resource(FooAPI, '/api/foo', endpoint='foo')
Then in a controller I want:
#app.route("/bar")
def bar():
#Get return value from post() in FooAPI
How do I get the return value of post() from FooAPI? Can I do it somehow through the api variable? Or do I have to create an instance of FooAPI in the controller? It seems like there has to be an easy way to do this that I'm just not understanding...
The obvious way for your application to consume the API is to invoke it like any other client. The fact that the application would be acting as a server and a client at the same time does not matter: the client portion can place requests to localhost and the server part will get them in the same way it gets external requests. To generate the HTTP requests you can use requests, or urllib2 from the standard library.
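For example, with the requests package (a sketch; the port, endpoint, and payload are assumptions):

import requests

# the app acts as an ordinary HTTP client of its own API on localhost
response = requests.post('http://localhost:5000/api/foo', json={'id': 42})
result = response.json()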
But while the above method will work just fine, it seems overkill to me. In my opinion a better approach is to expose the common functionality of your application in a way that both the regular application and the API can invoke. For example, you could have a package called FooLib that implements all the shared logic; then FooAPI becomes a thin wrapper around FooLib, and both FooAPI and FooApp call FooLib to get things done.
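A sketch of that layout (module and function names are hypothetical):

# foolib.py: shared logic, no HTTP involved
def create_foo(id):
    # the actual work: validation, database access, etc.
    return {'id': id}

# fooapi.py: thin HTTP wrapper around the shared logic
from flask_restful import Resource
import foolib

class FooAPI(Resource):
    def post(self, id):
        return foolib.create_foo(id)

# fooapp.py: the web app calls the same shared logic directly
from flask import render_template
import foolib

@app.route("/bar")
def bar():
    foo = foolib.create_foo(42)
    return render_template('bar.html', foo=foo)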
Another approach is to have both the app and API in the same Flask(-RESTful) instance. Then, you can have the app call the API methods/functions internally (without HTTP). Let's consider a simple app that manages files on a server:
# API: returns filename/filesize pairs of all files in 'path'
@app.route('/api/files/', methods=['GET'])
def get_files():
    # os.path.getsize returns the file size in bytes
    files = [{'name': x, 'size': os.path.getsize(os.path.join(path, x))}
             for x in os.listdir(path)]
    return jsonify(files)
# app: gets all files from the API, then uses the API data to render a template
@app.route('/app/files/', methods=['GET'])
def app_get_files():
    response = get_files()  # you may verify the status code here before continuing
    return render_template('files.html', files=response.get_json())
You can pass all your requests around (from the API to the app and back) without including them in your function calls, since Flask's request object is global. For example, for an app resource that handles a file upload, you can simply call:
@app.route('/app/files/post', methods=['POST'])
def app_post_file():
    response = post_file()
    flash('Your file was uploaded successfully')  # if status_code == 200
    return render_template('home.html')
The associated API resource being:
@app.route('/api/files/', methods=['POST'])
def post_file():
    file = request.files['file']
    ....
    ....
    return jsonify({'info': 'some info about the file upload'})
For large volumes of application data, though, the overhead of wrapping/unwrapping JSON makes Miguel's second solution preferable.
In your case, you would want to call this in your controller:
response = FooAPI().post(id)
I managed to achieve this. Sometimes APIs get ugly; in my case, I need to call the function recursively, as the application has an extremely recursive nature (a tree). Recursive functions are themselves quite expensive, and recursive HTTP requests would be a world of memory and CPU waste.
So here's the snippet; check the third for loop:
class IntentAPI(Resource):
    def get(self, id):
        patterns = [pattern.dict() for pattern in Pattern.query.filter(Pattern.intent_id == id)]
        responses = [response.dict() for response in Response.query.filter(Response.intent_id == id)]
        return jsonify({'patterns': patterns, 'responses': responses})

    def delete(self, id):
        for pattern in Pattern.query.filter(Pattern.intent_id == id):
            db.session.delete(pattern)
        for response in Response.query.filter(Response.intent_id == id):
            db.session.delete(response)
        # third loop: recurse by calling the resource method directly,
        # with no HTTP round trip involved
        for intent in Intent.query.filter(Intent.context == Intent.query.get(id).set_context):
            self.delete(intent.id)  # or IntentAPI.delete(self, intent.id)
        db.session.delete(Intent.query.get(id))
        db.session.commit()
        return jsonify({'result': True})
I am developing an API using Django-TastyPie.
What does the API do?
It checks whether two or more requests are present on the server; if so, it swaps the data of those requests and returns a JSON response after a 7-second delay.
What I need to do is send multiple asynchronous requests to the server to test this API.
I am using Django unit tests along with TastyPie to test this functionality.
Problem
The Django development server is single-threaded, so it does not support asynchronous requests.
Solution tried:
I have tried to solve this by using multiprocessing:
class MatchResourceTest(ResourceTestCase):
    def setUp(self):
        super(MatchResourceTest, self).setUp()
        self.user = ""
        self.user_list = []
        self.thread_list = []

        # create and get user
        self.assertHttpCreated(self.api_client.post('/api/v2/user/', format='json',
                                                    data={'username': '123456', 'device': 'abc'}))
        self.user_list.append(User.objects.get(username='123456'))

        # create and get other_user
        self.assertHttpCreated(self.api_client.post('/api/v2/user/', format='json',
                                                    data={'username': '456789', 'device': 'xyz'}))
        self.user_list.append(User.objects.get(username='456789'))

    def get_credentials(self):
        return self.create_apikey(username=self.user.username, api_key=self.user.api_key.key)

    def get_url(self):
        resp = urllib2.urlopen(self.list_url).read()
        self.assertHttpOK(resp)

    def test_get_list_json(self):
        for user in self.user_list:
            self.user = user
            self.list_url = 'http://127.0.0.1:8000/api/v2/match/?name=hello'
            t = multiprocessing.Process(target=self.get_url)
            t.start()
            self.thread_list.append(t)
        for t in self.thread_list:
            t.join()
        print ContactCardShare.objects.all()
Please suggest a way to test this API by sending asynchronous requests,
or
any app or library that allows the Django development server to handle multiple requests asynchronously.
As far as I know, Django's development server is multi-threaded.
I'm not sure this test is formatted correctly, though. The setUp method shouldn't include tests itself; it should be foolproof data insertion that creates the entries. The POST should have its own test.
See the TastyPie docs for an example test case.
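A sketch of that separation, assuming the same endpoints and User model as the question:

class MatchResourceTest(ResourceTestCase):
    def setUp(self):
        super(MatchResourceTest, self).setUp()
        # plain data insertion only; no assertions in setUp
        self.user = User.objects.create(username='123456', device='abc')

    def test_post_user(self):
        # the POST gets its own test
        resp = self.api_client.post('/api/v2/user/', format='json',
                                    data={'username': '456789', 'device': 'xyz'})
        self.assertHttpCreated(resp)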
I'm writing a web application that connects to a database. I'm currently using a module-level variable that I import from other modules, but this feels nasty.
# server.py
from hexapoda.application import application

if __name__ == '__main__':
    from paste import httpserver
    httpserver.serve(application, host='127.0.0.1', port='1337')

# hexapoda/application.py
from mongoalchemy.session import Session

db = Session.connect('hexapoda')

import hexapoda.tickets.controllers

# hexapoda/tickets/controllers.py
from hexapoda.application import db

def index(request, params):
    tickets = db.query(Ticket)
The problem is that I get multiple connections to the database (I guess that, because I import application.py in two different modules, Session.connect() gets executed twice).
How can I access db from multiple modules without creating multiple connections (i.e. only call Session.connect() once in the entire application)?
Try the Twisted framework with something like:
from twisted.enterprise import adbapi

class db(object):
    def __init__(self):
        # adbapi runs blocking DB-API calls in a thread pool
        self.dbpool = adbapi.ConnectionPool('MySQLdb',
                                            db='database',
                                            user='username',
                                            passwd='password')

    def query(self, sql):
        self.dbpool.runInteraction(self._query, sql)

    def _query(self, tx, sql):
        tx.execute(sql)
        print tx.fetchone()
That's probably not what you want to do - a single connection per app means that your app can't scale.
The usual solution is to connect to the database when a request comes in and store that connection in a variable with "request" scope (i.e. it lives as long as the request).
A simple way to achieve that is to put it in the request:
request.db = ...connect...
Your web framework probably offers a way to annotate methods or something like a filter which sees all requests. Put the code to open/close the connection there.
If opening connections is expensive, use connection pooling.
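For the setup in the question, a request-scoped session could look roughly like this. A sketch only: the with_db decorator is hypothetical, and I'm assuming MongoAlchemy's Session can simply be ended after the request, so adjust the cleanup call to your version:

# hexapoda/tickets/controllers.py
from mongoalchemy.session import Session

def with_db(handler):
    # open a session for the duration of one request, then clean it up
    def wrapped(request, params):
        request.db = Session.connect('hexapoda')
        try:
            return handler(request, params)
        finally:
            request.db.end()
    return wrapped

@with_db
def index(request, params):
    tickets = request.db.query(Ticket)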