How to test django API with asynchronous requests - python

I am developing an API using Django-TastyPie.
What does the API do?
It checks whether two or more requests are pending on the server; if so, it swaps the data of the two requests and returns a JSON response after a 7 second delay.
What I need to do is send multiple asynchronous requests to the server to test this API.
I am using Django unit tests along with TastyPie to test this functionality.
Problem
The Django development server is single threaded, so it does not support asynchronous requests.
Solution tried:
I have tried to solve this by using multiprocessing:
import multiprocessing
import urllib2

from tastypie.test import ResourceTestCase

class MatchResourceTest(ResourceTestCase):
    def setUp(self):
        super(MatchResourceTest, self).setUp()
        self.user = ""
        self.user_list = []
        self.thread_list = []
        # Create and get user
        self.assertHttpCreated(self.api_client.post('/api/v2/user/', format='json', data={'username': '123456', 'device': 'abc'}))
        self.user_list.append(User.objects.get(username='123456'))
        # Create and get other_user
        self.assertHttpCreated(self.api_client.post('/api/v2/user/', format='json', data={'username': '456789', 'device': 'xyz'}))
        self.user_list.append(User.objects.get(username='456789'))

    def get_credentials(self):
        return self.create_apikey(username=self.user.username, api_key=self.user.api_key.key)

    def get_url(self):
        resp = urllib2.urlopen(self.list_url).read()
        self.assertHttpOK(resp)

    def test_get_list_json(self):
        for user in self.user_list:
            self.user = user
            self.list_url = 'http://127.0.0.1:8000/api/v2/match/?name=hello'
            t = multiprocessing.Process(target=self.get_url)
            t.start()
            self.thread_list.append(t)
        for t in self.thread_list:
            t.join()
        print ContactCardShare.objects.all()
Please suggest a solution for testing this API by sending asynchronous requests,
or
any app or library that allows the Django development server to handle multiple requests asynchronously.

As far as I know, Django's development server is multi-threaded.
I'm not sure this test is structured correctly, though. The setUp shouldn't contain assertions itself; it should just create the fixture entries. The POST should have its own test.
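For instance, the fixtures could be created directly with the ORM instead of going through the API (the field names here are guesses based on the POST payloads in the question):
import threading

def setUp(self):
    super(MatchResourceTest, self).setUp()
    # create test users directly instead of asserting on API posts in setUp
    self.user_list = [
        User.objects.create(username='123456', device='abc'),
        User.objects.create(username='456789', device='xyz'),
    ]
    self.thread_list = []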
See the tastypie docs for an example test case.
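If the dev server's threading remains a concern, Django's LiveServerTestCase runs a real server in a background thread, so a test can fire overlapping requests at it. A rough sketch (whether requests are truly served in parallel depends on your Django version's live server implementation, and the question's api_key credentials would still need to be sent):
import threading
import requests
from django.test import LiveServerTestCase

class MatchConcurrencyTest(LiveServerTestCase):
    def test_concurrent_match_requests(self):
        results = []

        def hit():
            # live_server_url points at the server started for this test case
            r = requests.get(self.live_server_url + '/api/v2/match/?name=hello')
            results.append(r.status_code)

        threads = [threading.Thread(target=hit) for _ in range(2)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        # both requests completed; inspect the status codes as needed
        self.assertEqual(len(results), 2)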

Related

How to mock a REST API in Python

I have an application which, somewhere in the middle, makes a REST API call. For a stress test I want to replace this API call with a mock server. Is there any way to do it?
Let me try to put it programmatically so it gets some clarity. I have a server running at, say, port 8080:
# main server
from flask import Flask
from myapp import Myapp

app = Flask(__name__)

@app.route("/find_solution", methods=["GET"])
def solution():
    return app.sol.find_solution(), 200

def start():
    app.sol = Myapp()
    return app
Now this is Myapp:
# myapp
import requests

class Myapp:
    def __init__(self):
        self.session = requests.Session()

    def find_solution(self):
        myparameters = {"Some parameter that I filled"}
        return self.session.request('GET', 'http://api.weatherstack.com/current', params=myparameters)
Now I want to replace the behavior of http://api.weatherstack.com/current without modifying the code, i.e. some way to redirect the call to http://api.weatherstack.com/current to a server on my local system.
Any help or lead is appreciated. I am using Ubuntu 20.04.
For your scenario, if you want to test your API, Flask comes with a built-in test client:
test_client = app.test_client()
test_client.get('/find_solution')
So you can create test cases, get a test client instance inside them, and perform tests at the API level. This is a lighter-weight test method than the one you proposed.
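If the goal is specifically to stub out the weatherstack call itself rather than to exercise the Flask route, a mocking library such as responses can intercept requests made through a requests.Session without touching the application code. A minimal sketch (the JSON payload here is made up):
import responses
from myapp import Myapp

@responses.activate
def test_find_solution_with_mocked_weatherstack():
    # any GET to this URL is now answered locally by the mock
    responses.add(
        responses.GET,
        'http://api.weatherstack.com/current',
        json={'current': {'temperature': 20}},
        status=200,
    )
    result = Myapp().find_solution()
    assert result.status_code == 200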
Refer to the following link for the official Flask documentation:
https://flask.palletsprojects.com/en/1.1.x/testing/#keeping-the-context-around
Cheers

Can I asynchronously duplicate a webapp2.RequestHandler Request to a different url?

For a percentage of production traffic, I want to duplicate the received request to a different version of my application. This needs to happen asynchronously so I don't double service time to the client.
The reason for doing this is so I can compare the responses generated by the prod version and a production candidate version. If their results are appropriately similar, I can be confident that the new version hasn't broken anything. (If I've made a functional change to the application, I'd filter out the necessary part of the response from this comparison.)
So I'm looking for an equivalent to:
class Foo(webapp2.RequestHandler):
    def post(self):
        handle = make_async_call_to('http://other_service_endpoint.com/', self.request)
        # process the user's request in the usual way
        test_response = handle.get_response()
        # compare the locally-prepared response and the remote one, and log
        # the diffs
        # return the locally-prepared response to the caller
UPDATE
google.appengine.api.urlfetch was suggested as a potential solution to my problem. It behaves the way I wanted in production, but in the dev_appserver it's synchronous: the request doesn't go out until get_result() is called, and then it blocks:
start_time = time.time()
rpcs = []

print 'creating rpcs:'
for _ in xrange(3):
    rpcs.append(urlfetch.create_rpc())
    print time.time() - start_time

print 'making fetch calls:'
for rpc in rpcs:
    urlfetch.make_fetch_call(rpc, 'http://httpbin.org/delay/3')
    print time.time() - start_time

print 'getting results:'
for rpc in rpcs:
    rpc.get_result()
    print time.time() - start_time
creating rpcs:
9.51290130615e-05
0.000154972076416
0.000189065933228
making fetch calls:
0.00029993057251
0.000356912612915
0.000473976135254
getting results:
3.15417003632
6.31326603889
9.46627306938
UPDATE2
So, after playing with some other options, I found a way to make completely non-blocking requests:
start_time = time.time()
rpcs = []

logging.info('creating rpcs:')
for i in xrange(10):
    rpc = urlfetch.create_rpc(deadline=30.0)
    url = 'http://httpbin.org/delay/{}'.format(i)
    urlfetch.make_fetch_call(rpc, url)
    rpc.callback = create_callback(rpc, url)
    rpcs.append(rpc)
logging.info(time.time() - start_time)

logging.info('getting results:')
while rpcs:
    rpc = apiproxy_stub_map.UserRPC.wait_any(rpcs)
    rpcs.remove(rpc)
    logging.info(time.time() - start_time)
...but the important point to note is that none of the async fetch options in urlfetch work in the dev_appserver. Having discovered this, I went back to try @DanCornilescu's solution and found that it only works properly in production, not in the dev_appserver.
The URL Fetch service supports asynchronous requests. From Issuing an asynchronous request:
HTTP(S) requests are synchronous by default. To issue an asynchronous request, your application must:

1. Create a new RPC object using urlfetch.create_rpc(). This object represents your asynchronous call in subsequent method calls.
2. Call urlfetch.make_fetch_call() to make the request. This method takes your RPC object and the request target's URL as parameters.
3. Call the RPC object's get_result() method. This method returns the result object if the request is successful, and raises an exception if an error occurred during the request.
The following snippets demonstrate how to make a basic asynchronous
request from a Python application. First, import the urlfetch library
from the App Engine SDK:
from google.appengine.api import urlfetch
Next, use urlfetch to make the asynchronous request:
rpc = urlfetch.create_rpc()
urlfetch.make_fetch_call(rpc, "http://www.google.com/")

# ... do other things ...

try:
    result = rpc.get_result()
    if result.status_code == 200:
        text = result.content
        self.response.write(text)
    else:
        self.response.status_code = result.status_code
        logging.error("Error making RPC request")
except urlfetch.DownloadError:
    logging.error("Error fetching URL")
Note: as per Sniggerfardimungus's experiment mentioned in the question's update, the async calls might not work as expected on the development server (being serialized instead of concurrent), but they do when deployed on GAE. Personally I haven't used the async calls yet, so I can't really say.
If the intent is not to block at all while waiting for the response from the production candidate app, you could push a copy of the original request and the production-prepared response onto a task queue, then answer the original request with negligible delay (that of enqueueing the task).
The handler for the respective task queue would, outside of the original request's critical path, make the request to the staging app using the copy of the original request (async or not, it doesn't really matter from the point of view of impacting the production app's response time), get its response, compare it with the production-prepared response, log the deltas, etc. This can be neatly wrapped in a separate module for minimal changes to the production app and deployed/deleted as needed.
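A minimal sketch of the enqueueing side might look like this (the /tasks/compare handler, the compare-queue entry in queue.yaml, and the compute_response helper are assumptions for illustration, not part of the question's code):
import json
import webapp2
from google.appengine.api import taskqueue

class Foo(webapp2.RequestHandler):
    def post(self):
        # prepare the production response as usual (hypothetical helper)
        prod_response = self.compute_response(self.request)
        # hand the comparison work to a task queue, off the critical path
        taskqueue.add(
            url='/tasks/compare',        # assumed comparison handler
            queue_name='compare-queue',  # assumed queue defined in queue.yaml
            payload=json.dumps({
                'original_request': self.request.body,
                'prod_response': prod_response,
            }),
        )
        self.response.write(prod_response)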

Writing tests for Python Eve RESTful APIs against a real MongoDB

I am developing my API server with Python-eve, and would like to know how to test the API endpoints. A few things that I would like to test specifically:
Validation of POST/PATCH requests
Authentication of different endpoints
Before_ and after_ hooks working properly
Returning correct JSON response
Currently I am testing the app against a real MongoDB, and I can imagine the testing will take a long time to run once I have hundreds or thousands of tests to run. Mocking up stuff is another approach but I couldn't find tools that allow me to do that while keeping the tests as realistic as possible. I am wondering if there is a recommended way to test eve apps. Thanks!
Here is what I have now:
from pymongo import MongoClient
from myModule import create_app
import unittest, json

class ClientAppsTests(unittest.TestCase):
    def setUp(self):
        app = create_app()
        app.config['TESTING'] = True
        self.app = app.test_client()

        # Insert some fake data
        client = MongoClient(app.config['MONGO_HOST'], app.config['MONGO_PORT'])
        self.db = client[app.config['MONGO_DBNAME']]
        new_app = {
            'client_id': 'test',
            'client_secret': 'secret',
            'token': 'token'
        }
        self.db.client_apps.insert(new_app)

    def tearDown(self):
        self.db.client_apps.remove()

    def test_access_public_token(self):
        res = self.app.get('/token')
        assert res.status_code == 200

    def test_get_token(self):
        query = {'client_id': 'test', 'client_secret': 'secret'}
        res = self.app.get('/token', query_string=query)
        res_obj = json.loads(res.get_data())
        assert res_obj['token'] == 'token'
The Eve test suite itself runs against a test db and doesn't mock anything. The test db gets created and dropped on every run to guarantee isolation (not super fast, yes, but as close as possible to a production environment). While of course you should test your own code, you probably don't need to write tests like test_access_public_token above, since stuff like that is covered by the Eve suite already. You might want to check the Eve Mocker extension too.
Also make yourself familiar with the Authentication and Authorization tutorials. It looks like the way you're going about the whole token thing is not really appropriate (you want to use request headers for that kind of stuff).
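As a rough illustration of the header-based approach (assuming HTTP Basic auth with the client id and secret as the credentials, which may not match your final scheme):
import base64

def test_get_token_with_auth_header(self):
    # send credentials in the Authorization header instead of the query string
    creds = base64.b64encode(b'test:secret').decode()
    res = self.app.get('/token', headers={'Authorization': 'Basic ' + creds})
    assert res.status_code == 200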

How to unit test Google Cloud Endpoints

I need some help setting up unit tests for Google Cloud Endpoints. Using WebTest, all requests answer with AppError: Bad response: 404 Not Found. I'm not sure whether endpoints is compatible with WebTest.
This is how the application is generated:
application = endpoints.api_server([TestEndpoint], restricted=False)
Then I use WebTest this way:
client = webtest.TestApp(application)
client.post('/_ah/api/test/v1/test', params)
Testing with curl works fine.
Should I write tests for endpoints differently? What is the suggestion from the GAE Endpoints team?
After much experimenting and looking at the SDK code I've come up with two ways to test endpoints within python:
1. Using webtest + testbed to test the SPI side
You are on the right track with webtest, but just need to make sure you correctly transform your requests for the SPI endpoint.
The Cloud Endpoints API front-end and the EndpointsDispatcher in dev_appserver transform calls to /_ah/api/* into corresponding "backend" calls to /_ah/spi/*. The transformation seems to be:
All calls are application/json HTTP POSTs (even if the REST endpoint is something else).
The request parameters (path, query and JSON body) are all merged together into a single JSON body message.
The "backend" endpoint uses the actual python class and method names in the URL, e.g. POST /_ah/spi/TestEndpoint.insert_message will call TestEndpoint.insert_message() in your code.
The JSON response is only reformatted before being returned to the original client.
This means you can test the endpoint with the following setup:
from google.appengine.ext import testbed
import webtest
# ...

def setUp(self):
    tb = testbed.Testbed()
    tb.setup_env(current_version_id='testbed.version')  # needed because endpoints expects a . in this value
    tb.activate()
    tb.init_all_stubs()
    self.testbed = tb

def tearDown(self):
    self.testbed.deactivate()

def test_endpoint_insert(self):
    app = endpoints.api_server([TestEndpoint], restricted=False)
    testapp = webtest.TestApp(app)
    msg = {...}  # a dict representing the message object expected by insert,
                 # to be serialised to JSON by webtest
    resp = testapp.post_json('/_ah/spi/TestEndpoint.insert', msg)
    self.assertEqual(resp.json, {'expected': 'json response msg as dict'})
The thing here is that you can easily set up appropriate fixtures in the datastore or other GAE services prior to calling the endpoint, so you can more fully assert the expected side effects of the call.
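For example, a datastore fixture could be put in place right before the call, since the testbed from setUp provides in-memory stubs (the Greeting model, the list method, and the response shape are hypothetical):
from google.appengine.ext import ndb

class Greeting(ndb.Model):  # hypothetical model the endpoint reads
    content = ndb.StringProperty()

def test_endpoint_list_sees_fixture(self):
    Greeting(content='hello').put()  # stored in the testbed's datastore stub
    app = endpoints.api_server([TestEndpoint], restricted=False)
    testapp = webtest.TestApp(app)
    resp = testapp.post_json('/_ah/spi/TestEndpoint.list', {})
    # assumed response shape: a JSON list of message dicts
    self.assertEqual(resp.json['items'][0]['content'], 'hello')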
2. Starting the development server for full integration test
You can start the dev server within the same python environment using something like the following:
import sys
import os
import dev_appserver

sys.path[1:1] = dev_appserver._DEVAPPSERVER2_PATHS

from google.appengine.tools.devappserver2 import devappserver2
from google.appengine.tools.devappserver2 import python_runtime
# ...

def setUp(self):
    APP_CONFIGS = ['/path/to/app.yaml']
    python_runtime._RUNTIME_ARGS = [
        sys.executable,
        os.path.join(os.path.dirname(dev_appserver.__file__),
                     '_python_runtime.py')
    ]
    options = devappserver2.PARSER.parse_args([
        '--admin_port', '0',
        '--port', '8123',
        '--datastore_path', ':memory:',
        '--logs_path', ':memory:',
        '--skip_sdk_update_check',
        '--',
    ] + APP_CONFIGS)
    server = devappserver2.DevelopmentServer()
    server.start(options)
    self.server = server

def tearDown(self):
    self.server.stop()
Now you need to issue actual HTTP requests to localhost:8123 to run tests against the API, but again can interact with GAE APIs to set up fixtures, etc. This is obviously slow as you're creating and destroying a new dev server for every test run.
At this point I use the Google API Python client to consume the API instead of building the HTTP requests myself:
import apiclient.discovery
# ...

def test_something(self):
    apiurl = 'http://%s/_ah/api/discovery/v1/apis/{api}/{apiVersion}/rest' \
        % self.server.module_to_address('default')
    service = apiclient.discovery.build('testendpoint', 'v1', apiurl)
    res = service.testresource().insert({... message ...}).execute()
    self.assertEquals(res, {... expected response as dict ...})
This is an improvement over testing with CURL as it gives you direct access to the GAE APIs to easily set up fixtures and inspect internal state. I suspect there is an even better way to do integration testing that bypasses HTTP by stitching together the minimal components in the dev server that implement the endpoint dispatch mechanism, but that requires more research time than I have right now.
webtest can be simplified to reduce naming bugs.
For the following TestApi:
import endpoints
import protorpc
import logging

class ResponseMessageClass(protorpc.messages.Message):
    message = protorpc.messages.StringField(1)

class RequestMessageClass(protorpc.messages.Message):
    message = protorpc.messages.StringField(1)

@endpoints.api(name='testApi', version='v1',
               description='Test API',
               allowed_client_ids=[endpoints.API_EXPLORER_CLIENT_ID])
class TestApi(protorpc.remote.Service):

    @endpoints.method(RequestMessageClass,
                      ResponseMessageClass,
                      name='test',
                      path='test',
                      http_method='POST')
    def test(self, request):
        logging.info(request.message)
        return ResponseMessageClass(message="response message")
the tests.py should look like this
import webtest
import logging
import unittest
from google.appengine.ext import testbed
from protorpc.remote import protojson
import endpoints
from api.test_api import TestApi, RequestMessageClass, ResponseMessageClass

class AppTest(unittest.TestCase):
    def setUp(self):
        logging.getLogger().setLevel(logging.DEBUG)
        tb = testbed.Testbed()
        tb.setup_env(current_version_id='testbed.version')
        tb.activate()
        tb.init_all_stubs()
        self.testbed = tb

    def tearDown(self):
        self.testbed.deactivate()

    def test_endpoint_testApi(self):
        application = endpoints.api_server([TestApi], restricted=False)
        testapp = webtest.TestApp(application)
        req = RequestMessageClass(message="request message")
        response = testapp.post('/_ah/spi/' + TestApi.__name__ + '.' + TestApi.test.__name__,
                                protojson.encode_message(req),
                                content_type='application/json')
        res = protojson.decode_message(ResponseMessageClass, response.body)
        self.assertEqual(res.message, 'response message')

if __name__ == '__main__':
    unittest.main()
I tried everything I could think of to allow these to be tested in the normal way. I tried hitting the /_ah/spi methods directly as well as even trying to create a new protorpc app using service_mappings to no avail. I'm not a Googler on the endpoints team so maybe they have something clever to allow this to work but it doesn't appear that simply using webtest will work (unless I missed something obvious).
In the meantime you can write a test script that starts the app engine test server with an isolated environment and just issue http requests to it.
Example to run the server with an isolated environment (bash but you can easily run this from python):
DATA_PATH=/tmp/appengine_data
if [ ! -d "$DATA_PATH" ]; then
    mkdir -p $DATA_PATH
fi
dev_appserver.py --storage_path=$DATA_PATH/storage --blobstore_path=$DATA_PATH/blobstore --datastore_path=$DATA_PATH/datastore --search_indexes_path=$DATA_PATH/searchindexes --show_mail_body=yes --clear_search_indexes --clear_datastore .
You can then just use requests to test ala curl:
requests.get('http://localhost:8080/_ah/...')
If you don't want to test the full HTTP stack as described by Ezequiel Muns, you can also just mock out endpoints.method and test your API definition directly:
def null_decorator(*args, **kwargs):
    def decorator(method):
        def wrapper(*args, **kwargs):
            return method(*args, **kwargs)
        return wrapper
    return decorator

from google.appengine.api.users import User
import endpoints

endpoints.method = null_decorator
# decorator needs to be mocked out before you load your endpoint api definitions
from mymodule import api

class FooTest(unittest.TestCase):
    def setUp(self):
        self.api = api.FooService()

    def test_bar(self):
        # pass protorpc messages directly
        self.api.foo_bar(api.MyRequestMessage(some='field'))
My solution uses one dev_appserver instance for the entire test module, which is faster than restarting the dev_appserver for each test method.
By using Google's Python API client library, I also get the simplest and at the same time most powerful way of interacting with my API.
import unittest
import sys
import os
from apiclient.discovery import build
import dev_appserver

sys.path[1:1] = dev_appserver.EXTRA_PATHS

from google.appengine.tools.devappserver2 import devappserver2
from google.appengine.tools.devappserver2 import python_runtime

server = None

def setUpModule():
    # starting a dev_appserver instance for testing
    path_to_app_yaml = os.path.normpath('path_to_app_yaml')
    app_configs = [path_to_app_yaml]
    python_runtime._RUNTIME_ARGS = [
        sys.executable,
        os.path.join(os.path.dirname(dev_appserver.__file__),
                     '_python_runtime.py')
    ]
    options = devappserver2.PARSER.parse_args(['--port', '8080',
                                               '--datastore_path', ':memory:',
                                               '--logs_path', ':memory:',
                                               '--skip_sdk_update_check',
                                               '--',
                                               ] + app_configs)
    global server
    server = devappserver2.DevelopmentServer()
    server.start(options)

def tearDownModule():
    # shutting down dev_appserver instance after testing
    server.stop()

class MyTest(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # build a service object for interacting with the api
        # dev_appserver must be running and listening on port 8080
        api_root = 'http://127.0.0.1:8080/_ah/api'
        api = 'my_api'
        version = 'v0.1'
        discovery_url = '%s/discovery/v1/apis/%s/%s/rest' % (api_root, api, version)
        cls.service = build(api, version, discoveryServiceUrl=discovery_url)

    def setUp(self):
        # create a parent entity and store its key for each test run
        body = {'name': 'test parent'}
        response = self.service.parent().post(body=body).execute()
        self.parent_key = response['parent_key']

    def test_post(self):
        # test my post method
        # the tested method also requires a path argument "parent_key"
        # .../_ah/api/my_api/sub_api/post/{parent_key}
        body = {'SomeProjectEntity': {'SomeId': 'abcdefgh'}}
        parent_key = self.parent_key
        req = self.service.sub_api().post(body=body, parent_key=parent_key)
        response = req.execute()
etc..
After digging through the sources, I believe things have changed in endpoints since Ezequiel Muns's (excellent) answer in 2014. For method 1 you now need to request from /_ah/api/* directly and use the correct HTTP method instead of using the /_ah/spi/* transformation. This makes the test file look like this:
from google.appengine.ext import testbed
import webtest
# ...

def setUp(self):
    tb = testbed.Testbed()
    # Setting current_version_id doesn't seem necessary anymore
    tb.activate()
    tb.init_all_stubs()
    self.testbed = tb

def tearDown(self):
    self.testbed.deactivate()

def test_endpoint_insert(self):
    app = endpoints.api_server([TestEndpoint])  # restricted is no longer required
    testapp = webtest.TestApp(app)
    msg = {...}  # a dict representing the message object expected by insert,
                 # to be serialised to JSON by webtest
    resp = testapp.post_json('/_ah/api/test/v1/insert', msg)
    self.assertEqual(resp.json, {'expected': 'json response msg as dict'})
For searching's sake, the symptom of using the old method is endpoints raising a ValueError with Invalid request path: /_ah/spi/whatever. Hope that saves someone some time!

Does urllib2 support threading to servers requiring basic auth?

I am developing an application which uses a series of REST calls to retrieve data. I have the basic application logic complete and the structure for data retrieval is roughly as follows.
1) The initial data call is completed.
2) For each response in the initial call, a subsequent data call is performed to a REST service requiring basic authentication.
Performing these calls sequentially can add up to a long wait for the end user, so I am trying to implement threading to speed up the process (being IO bound makes this an ideal candidate for threading). The problem is that I am running into authentication issues on the threaded calls.
If I perform the calls sequentially then everything works fine, but if I set it up with the threaded approach I end up with 401 authentication errors or 500 internal server errors from the server.
I have talked to the REST service admins and they know of nothing on the server end that would prevent concurrent connections from the same user, so I am wondering if this is an issue on the urllib2 end.
Does anyone have any experience with this?
EDIT:
While I am unable to post the exact code, here is a reasonable representation with a very similar structure.
import threading
import urllib2

class UrlThread(threading.Thread):
    def __init__(self, data):
        threading.Thread.__init__(self)
        self.data = data

    def run(self):
        password_manager = urllib2.HTTPPasswordMgrWithDefaultRealm()
        password_manager.add_password(None, 'https://url/to/Rest_Svc/', 'uid', 'passwd')
        auth_manager = urllib2.HTTPBasicAuthHandler(password_manager)
        opener = urllib2.build_opener(auth_manager)
        urllib2.install_opener(opener)
        option = self.data[0]
        urlToOpen = 'https://url/to/Rest_Svc/?option=' + option
        rawData = urllib2.urlopen(urlToOpen)
        wsData = rawData.readlines()
        if wsData:
            print('success')

# firstCallRows is a list of lists containing the data returned
# from the initial call I mentioned earlier.
thread_list = []
for row in firstCallRows:
    t = UrlThread(row)
    t.setDaemon(True)
    t.start()
    thread_list.append(t)

for thread in thread_list:
    thread.join()
With Requests you could do something like this:
from requests import session, async

auth = ('username', 'password')
url = 'http://example.com/api/'
options = ['foo1', 'foo2', 'foo3']

s = session(auth=auth)
rs = [async.get(url, params={'option': opt}, session=s) for opt in options]
responses = async.imap(rs)

for r in responses:
    print r.text
Relevant documentation:
Sessions
Asynchronous requests
Basic authentication
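If you'd rather stay with urllib2, note that urllib2.install_opener() replaces a process-global opener, so several threads installing openers at once can interfere with each other. One way around it, sketched against the question's run() method, is to use the opener directly and skip the global install:
def run(self):
    password_manager = urllib2.HTTPPasswordMgrWithDefaultRealm()
    password_manager.add_password(None, 'https://url/to/Rest_Svc/', 'uid', 'passwd')
    # build a private opener per thread and call it directly,
    # instead of installing it globally with urllib2.install_opener()
    opener = urllib2.build_opener(urllib2.HTTPBasicAuthHandler(password_manager))
    option = self.data[0]
    rawData = opener.open('https://url/to/Rest_Svc/?option=' + option)
    wsData = rawData.readlines()
    if wsData:
        print('success')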
