I came across an issue with performance testing of payments-related endpoints.
Basically, I want to test some endpoints that themselves make requests to a third-party provider's API.
Is it possible, from the level of Locust's tests, to mock that third-party API for the endpoints I actually intend to test (i.e., without interfering with the tested endpoints)?
If I understand correctly, you have a service you'd like to load/performance test but that service calls out to a third-party. But when you do your testing, you don't want to actually make any calls to the third-party service?
Locust is used for simulating client behavior. You can define that client behavior to be whatever you want; its primary use case is making HTTP calls, but almost any task can be done.
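For illustration, here's a minimal sketch of a locustfile using the modern Locust API (the /checkout path and payload are hypothetical):

from locust import HttpUser, task, between

class PaymentsUser(HttpUser):
    # Simulated client pacing between tasks.
    wait_time = between(1, 3)

    @task
    def checkout(self):
        # Hypothetical endpoint on the service under test. Locust only sees
        # this call; whatever the service does downstream is invisible to it.
        self.client.post("/checkout", json={"amount": 1000, "currency": "USD"})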
If it's your client that makes a request to your service and then makes a separate request to the third-party service for payment processing, then yes, you could define some sort of mocking behavior in Locust to make a real call to your service and mock out the payment call. But if it's your service that takes a client call and then makes its own call to the third-party payment service, then no, Locust can't do anything about that.
For that scenario, you'd be best off making your own simple mock/proxy service of the third-party service. It would take a request from your service, do basic validation to ensure things are coming in as expected, and then just return some canned response that looks like what your service would expect from the third-party. But this would be something you'd have to host yourself and have a method of telling your service to point to this mock service instead (DNS setting, environment variable, etc.). Then you could use Locust to simulate your client behavior as normal and you can test your service in an isolated manner without making any actual calls to the third-party service.
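As a rough sketch of what such a mock could look like, here's a minimal Flask stub (the /v1/charges route and response shape are hypothetical; match them to your real provider's API):

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/v1/charges", methods=["POST"])  # hypothetical provider endpoint
def create_charge():
    body = request.get_json(force=True)
    # Basic validation that the request looks like what the real provider expects.
    if not body or "amount" not in body:
        return jsonify({"error": "missing amount"}), 400
    # Canned response shaped like the real provider's success payload.
    return jsonify({"id": "ch_test_123", "status": "succeeded", "amount": body["amount"]}), 200

if __name__ == "__main__":
    app.run(port=8081)

Point your service's provider URL (via DNS or an environment variable, as above) at this stub while the load test runs.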
I actually skipped the most important part of the issue, namely that I am testing the endpoints from outside of the repo containing them (basically my load test repo calls my app repo). I ended up mocking the provider inside of the app repo, which I initially intended to avoid, but it turned out to be the only reasonable solution at the moment.
Related
I am writing an application that uses Google's python client for GCS.
https://cloud.google.com/storage/docs/reference/libraries#client-libraries-install-python
I've had no issues using this, until I needed to write my functional tests.
The way our organization tests integrations like this is to write a simple stub of the API endpoints I hit, and point the Google client library (in this case) to my stub, instead of needing to hit Google's live endpoints.
I'm using a service account for authentication and am able to point the client at my stub when fetching a token, because it gets that value from the JSON key you receive when you create the service account.
What I don't seem able to do is point the client library at my stubbed API instead of making calls directly to Google.
Some workarounds that I've thought of, but don't like, are:
- Allow the tests to hit the live endpoints.
- Put in some configuration that toggles using the real Google client library, or a mocked version of the library. I'd rather mock the API versus having mock code deployed to production.
Any help with this is greatly appreciated.
I've done some research, and it seems there's nothing supported specifically for Cloud Storage in Python. I found this GitHub issue with a related discussion, but for Go.
I think you could open an issue in the public issue tracker asking for this functionality. For now, I'm afraid it's easier to keep using your second workaround.
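For what it's worth, the second workaround can be kept fairly contained. A minimal sketch, assuming an environment-variable toggle and a hypothetical StubStorageClient test double:

import os

def make_storage_client():
    # Single switch that decides which client the application gets; only
    # this factory knows about the stub, not the production code paths.
    if os.environ.get("USE_GCS_STUB") == "1":
        from tests.gcs_stub import StubStorageClient  # hypothetical test double
        return StubStorageClient()
    from google.cloud import storage
    return storage.Client()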
Background
I've been using Robot Framework and RequestsLibrary to write automated tests against RESTful endpoints I expose via AWS API Gateway. So I'm writing tests that look roughly like this:
*** Settings ***
Library           Collections
Library           RequestsLibrary

*** Test Cases ***
Get Requests
    Create Session    Sess    https://<api-gateway-url>
    ${resp}=    Get Request    Sess    /path/to/my/api?param=value
    Should Be Equal As Strings    ${resp.status_code}    200
    Dictionary Should Contain Value    ${resp.json()}    someValueIwantToVerify
Now, I'm getting around to securing those API Gateway endpoints with IAM. Therefore, requests need to be SigV4-signed.
The application that consumes these services is written in JavaScript and uses aws-api-gateway-client to sign requests. Testing manually in Postman is also easy enough, using the AWS Signature authorization type. However, I'm struggling to figure out a strategy for Robot Framework.
Question(s)
In the broadest sense, I'm wondering if anyone else is using Robot Framework to test IAM secured API Gateway endpoints. If so, how did you pull it off?
More specifically:
Is there an existing Robot Framework library that addresses this use case?
If not, is writing my own library my only option?
If I am stuck writing a library (this looks promising), what sorts of keywords would I define, and how would I use them?
I don't know of an RF library that does what you want, but my first instinct would be to use Amazon's own AWS SDK for Python (boto3), and write a thin keyword wrapper library around it. I've done that for test cases that used AWS S3, but boto3 also supports API Gateway: http://boto3.readthedocs.io/en/latest/reference/services/apigateway.html
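A thin wrapper like that can be very small. A minimal sketch (class and keyword names are hypothetical; Robot Framework exposes the methods as keywords automatically):

import boto3

class ApiGatewayLibrary:
    # Hypothetical thin Robot Framework keyword library around boto3.
    def __init__(self, region_name="us-east-1"):
        self._client = boto3.client("apigateway", region_name=region_name)

    def get_rest_apis(self):
        # Becomes the `Get Rest Apis` keyword; returns the REST APIs
        # visible to the current credentials.
        return self._client.get_rest_apis()["items"]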
The requests library accepts an auth parameter, which is expected to be an instance of an AuthBase subclass.
It turns out that modifying RequestsLibrary to leverage this existing functionality was a grand total of two lines of code (not counting comments, tests, and whitespace for readability):
def create_custom_session(self, alias, url, auth, headers={}, cookies=None,
                          timeout=None, proxies=None, verify=False, debug=0,
                          max_retries=3, backoff_factor=0.10, disable_warnings=0):
    return self._create_session(alias, url, headers, cookies, auth, timeout,
                                max_retries, backoff_factor, proxies, verify,
                                debug, disable_warnings)
Sometimes the answer is just to issue your own pull request.
The new Create Custom Session keyword will accept any arbitrary auth object, including the one produced by the library I'm using (which I highly recommend).
The original issue now contains the details of how aws-requests-auth and RequestsLibrary work together with the new keyword.
@the_mero mentioned boto3, which is almost certainly what you want to use to actually marshal up the credentials. You don't want to hard-code them in your test the way the simple examples in the issue/pull request do.
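To tie those together, here's a hedged sketch: a tiny helper module (names hypothetical) that builds the auth object through boto3's normal credential chain, plus how it might be used with the new keyword:

from aws_requests_auth.boto_utils import BotoAWSRequestsAuth

def get_api_gateway_auth(host):
    # BotoAWSRequestsAuth resolves credentials via boto3 (environment,
    # shared credentials file, instance role, ...), so nothing is
    # hard-coded in the test data.
    return BotoAWSRequestsAuth(
        aws_host=host,
        aws_region="us-east-1",   # assumption: adjust to your region
        aws_service="execute-api",
    )

And in the test suite:

*** Settings ***
Library    RequestsLibrary
Library    aws_auth_helper.py

*** Test Cases ***
Signed Get Request
    ${auth}=    Get Api Gateway Auth    <api-gateway-host>
    Create Custom Session    Sess    https://<api-gateway-host>    ${auth}
    ${resp}=    Get Request    Sess    /path/to/my/api
    Should Be Equal As Strings    ${resp.status_code}    200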
I'm new to Tornado, and working on a project that involves some rather complex routing. In most of the other frameworks I've used I've been able to isolate routing for testing, without spinning up a server or doing anything terribly complex. I'd prefer to use pytest as my testing framework, but I'm not sure it matters.
Is there a way to, say, create my project's instance of tornado.web.Application, and pass it arbitrary paths and assert which RequestHandler will be invoked based on that path?
No, it is not currently possible to test this in Tornado via any public interface (as of Tornado version 4.3).
It's straightforward to avoid spinning up a server, although it requires a nontrivial amount of code: the interface between HTTPServer and Application is well-defined and documented. The trickier part is the other side: there is no supported way to determine which handler will be invoked before that handler is invoked.
I generally recommend testing routing via end-to-end tests for this reason. You could also store your URL route list before passing it into Tornado, and do your tests against that - the internal logic of "take the first regex match" is pretty easy to replicate.
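A minimal sketch of that approach, with hypothetical handler names and pytest-style assertions:

import re

# The same route list you would pass to tornado.web.Application, kept in a
# module-level variable so tests can inspect it directly.
ROUTES = [
    (r"/users/(\d+)", "UserHandler"),
    (r"/users/(\d+)/posts", "UserPostsHandler"),
]

def resolve(path):
    # Replicates Tornado's dispatch: first regex match wins, and patterns
    # are anchored at both ends.
    for pattern, handler in ROUTES:
        if re.match(pattern + "$", path):
            return handler
    return None

def test_routing():
    assert resolve("/users/42") == "UserHandler"
    assert resolve("/users/42/posts") == "UserPostsHandler"
    assert resolve("/nowhere") is None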
I have a service provided by a REST API, with a Python library wrapping it using python-requests.
I have a 'dumb' user interface designed by a third party (not Python) that connects to a local XML-RPC server.
Now I have to connect both ends, forwarding the XML-RPC calls to the REST API and returning the results. It's mostly asynchronous and doesn't depend on results returning to the user in real time. Most of the XML-RPC calls are supposed to return immediately and queue a task, and some other call will query the results later. Data is stored in an SQLite database until needed.
So, I decided to use twisted.web.xmlrpc for this middle layer and the requests-based lib for the remote calls, and it works fine. I guess I'm blocking Twisted's main loop for a few seconds once in a while, but that's not a big deal.
The problem is that I also have to make some big file uploads from this middle layer to the HTTP server providing the REST API. I can't make those uploads with the requests-based lib because it would block the Twisted loop until the upload finishes.
I'd rather not use multithreading, and I really don't want to rewrite the python-requests-based lib I have as a Twisted client. Is there any way to integrate requests into Twisted's main loop, or any other reasonable solution?
If you like requests' style of API, but want something that would work with Twisted, consider using treq. There are support libraries for writing interfaces which can be either synchronous or asynchronous depending on their caller's needs.
If you really want to use requests, but you don't want to block the main loop, you can invoke it with twisted.internet.threads.deferToThread. This is mostly transparent, and if your requests don't share any state you can almost ignore the fact that you're using multithreading.
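A minimal sketch of the deferToThread approach (URL and file path are placeholders):

from twisted.internet.threads import deferToThread
import requests

def upload_file(url, path):
    # Runs in a reactor thread-pool thread, so blocking here is fine;
    # passing the file object lets requests stream the upload.
    with open(path, "rb") as f:
        return requests.post(url, data=f, timeout=600)

d = deferToThread(upload_file, "https://api.example.com/upload", "/tmp/bigfile")
d.addCallback(lambda resp: print("upload done:", resp.status_code))
d.addErrback(lambda failure: print("upload failed:", failure))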
But, ultimately, Jean-Paul's comment is correct; you are going to need to make some changes to the way this code works, if you want to change the way it works.
We have built a Python-based REST API. We are planning to give it to other developers as well. Is there a Python library that could manage authentication and keep track of API calls made by each client, etc.?
Well, ideally you'll be using public-key cryptography, supplying developers with both an API key and a secret. If the service is accessible via HTTPS to a limited number of consumers, you might be tempted to issue a simple API key alone, but you'd be committing yourself to remaining a small, closed, and insecure service forever.
As for managing the API calls themselves, since you have the RESTful interfaces developed already, I would suggest decorating the functions or methods to extract the service consumer and keep track of API calls in MongoDB; it's simple and well suited to such a requirement. It would also let you throttle consumer connections at the application level, and in time you could extend the system with some lower-level mechanisms for managing service connections, such as iptables.
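As a rough sketch of that decorator idea (assuming a Flask-based API and a hypothetical X-Api-Key header; adapt the consumer extraction to however you identify clients):

import functools
from datetime import datetime, timezone

from flask import request          # assumption: a Flask-based REST API
from pymongo import MongoClient

calls = MongoClient()["api_metering"]["calls"]   # hypothetical db/collection

def track_api_call(view):
    # Writes one document per request, keyed by the consumer's API key,
    # before handing off to the real view function.
    @functools.wraps(view)
    def wrapper(*args, **kwargs):
        calls.insert_one({
            "api_key": request.headers.get("X-Api-Key"),  # hypothetical header
            "endpoint": request.path,
            "timestamp": datetime.now(timezone.utc),
        })
        return view(*args, **kwargs)
    return wrapper

From there, throttling is a matter of counting recent documents per key before serving the request.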