Background
I've been using Robot Framework and RequestsLibrary to write automated tests against RESTful endpoints I expose via AWS API Gateway. So I'm writing tests that look roughly like this:
*** Settings ***
Library           Collections
Library           RequestsLibrary

*** Test Cases ***
Get Requests
    Create Session    Sess    https://<api-gateway-url>
    ${resp}=    Get Request    Sess    /path/to/my/api?param=value
    Should Be Equal As Strings    ${resp.status_code}    200
    Dictionary Should Contain Value    ${resp.json()}    someValueIwantToVerify
Now I'm getting around to securing those API Gateway endpoints with IAM, so requests need to be SigV4 signed.
The application that consumes these services is written in JavaScript and uses aws-api-gateway-client to sign requests. Testing manually in Postman is also easy enough, using the AWS Signature authorization type. However, I'm struggling to figure out a strategy for Robot Framework.
Question(s)
In the broadest sense, I'm wondering if anyone else is using Robot Framework to test IAM secured API Gateway endpoints. If so, how did you pull it off?
More specifically:
Is there an existing Robot Framework library that addresses this use case?
If not, is writing my own library my only option?
If I am stuck writing a library (this looks promising), what sorts of keywords would I define, and how would I use them?
I don't know of an RF library that does what you want, but my first instinct would be to use Amazon's own AWS SDK for Python (boto3), and write a thin keyword wrapper library around it. I've done that for test cases that used AWS S3, but boto3 also supports API Gateway: http://boto3.readthedocs.io/en/latest/reference/services/apigateway.html
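For illustration, here's a rough sketch of such a thin wrapper for the S3 case mentioned above (the class and keyword names are made up, not an existing library):
import boto3

class AwsKeywords:
    """Each public method becomes a Robot Framework keyword, e.g.
    `S3 Object Should Exist    my-bucket    some/key`."""
    ROBOT_LIBRARY_SCOPE = "GLOBAL"

    def __init__(self, region_name="us-east-1"):
        self._s3 = boto3.client("s3", region_name=region_name)

    def s3_object_should_exist(self, bucket, key):
        """Fail (raise) unless the given object exists in the bucket."""
        self._s3.head_object(Bucket=bucket, Key=key)

    def get_s3_object_text(self, bucket, key):
        """Return the object's body decoded as UTF-8 text."""
        obj = self._s3.get_object(Bucket=bucket, Key=key)
        return obj["Body"].read().decode("utf-8")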
The requests library accepts an auth parameter, which is expected to be a subclass of AuthBase.
It turns out that modifying RequestsLibrary to leverage this existing functionality was a grand total of two lines of code (not counting comments, tests, and whitespace for readability):
def create_custom_session(self, alias, url, auth, headers={}, cookies=None, timeout=None, proxies=None, verify=False, debug=0, max_retries=3, backoff_factor=0.10, disable_warnings=0):
    return self._create_session(alias, url, headers, cookies, auth, timeout, max_retries, backoff_factor, proxies, verify, debug, disable_warnings)
Sometimes the answer is just to issue your own pull request.
The new Create Custom Session keyword will accept any arbitrary auth object, including the one produced by the library I'm using (which I highly recommend).
The original issue now contains the details of how aws-requests-auth and RequestsLibrary work together with the new keyword.
@the_mero mentioned boto3, which is almost certainly what you want to use to actually marshal up the credentials. You don't want to hard-code them in your test the way the simple examples in the issue/pull request do.
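To make that concrete, here's a sketch of how the auth object could be built with aws-requests-auth's boto-backed helper, so credentials are resolved through the normal AWS credential chain instead of being hard-coded (the host, region, and paths below are placeholders); the same object is what Create Custom Session accepts via its auth argument:
import requests
from aws_requests_auth.boto_utils import BotoAWSRequestsAuth

# SigV4-signing auth object; botocore resolves credentials (env vars, ~/.aws, instance role).
auth = BotoAWSRequestsAuth(
    aws_host="abc123.execute-api.us-east-1.amazonaws.com",  # placeholder API Gateway host
    aws_region="us-east-1",
    aws_service="execute-api",
)

# Shown here with plain requests; in Robot Framework the same object would be passed
# to the Create Custom Session keyword and then used by Get Request as usual.
resp = requests.get(
    "https://abc123.execute-api.us-east-1.amazonaws.com/prod/path/to/my/api",
    params={"param": "value"},
    auth=auth,
)
print(resp.status_code, resp.json())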
Related
I came across an issue with performance testing of payments-related endpoints.
Basically I want to test some endpoints that themselves make requests to a third-party provider's API.
Is it possible, at the level of Locust tests, to mock that third-party API for the endpoints I intend to test (without interfering with the endpoints under test)?
If I understand correctly, you have a service you'd like to load/performance test but that service calls out to a third-party. But when you do your testing, you don't want to actually make any calls to the third-party service?
Locust is used for simulating client behavior. You can define that client behavior to be whatever you want; typically its primary use case is making HTTP calls, but almost any task can be done.
If it's your client that makes a request to your service and then makes a separate request to the other third-party service for payment processing, yes, you could define some sort of mocking behavior in Locust to make a real call to your service and then mock out a payment call. But if it's your service that takes a client call and then makes its own call to the third-party payment service, no, Locust can't do anything about that.
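As a rough sketch of that first case (endpoints and payloads are made up), the Locust task hits the real service under test but fabricates the payment result a real client would have received from the provider:
from locust import HttpUser, task, between

class ShopperBehavior(HttpUser):
    wait_time = between(1, 3)

    @task
    def checkout(self):
        # Real request to the service under test (placeholder endpoint).
        order = self.client.post("/api/orders", json={"sku": "abc-123", "qty": 1}).json()
        # Instead of calling the real payment provider, fabricate the response the
        # client would normally get back from it...
        fake_payment = {"id": "pay_test_123", "status": "succeeded"}
        # ...and continue the flow against your own service.
        self.client.post(f"/api/orders/{order['id']}/confirm", json=fake_payment)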
For that scenario, you'd be best off making your own simple mock/proxy service of the third-party service. It would take a request from your service, do basic validation to ensure things are coming in as expected, and then just return some canned response that looks like what your service would expect from the third-party. But this would be something you'd have to host yourself and have a method of telling your service to point to this mock service instead (DNS setting, environment variable, etc.). Then you could use Locust to simulate your client behavior as normal and you can test your service in an isolated manner without making any actual calls to the third-party service.
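For the second case, the stand-in could be something as small as this (a Flask sketch with hypothetical endpoint and field names), doing basic validation and returning a canned provider-style response:
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/v1/payments", methods=["POST"])
def create_payment():
    body = request.get_json(silent=True) or {}
    # Basic validation: reject requests that don't look like what the real provider expects.
    if "amount" not in body or "currency" not in body:
        return jsonify({"error": "missing amount or currency"}), 400
    # Canned response shaped like a successful reply from the provider.
    return jsonify({"id": "pay_test_123", "status": "succeeded", "amount": body["amount"]}), 201

if __name__ == "__main__":
    app.run(port=8081)  # point your service at this host/port during load tests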
I actually skipped the most important part of the issue: I am testing the endpoints from outside the repo that contains them (basically my load-test repo calls my app repo). I ended up mocking the provider inside the app repo, which I had initially intended to avoid, but it turned out to be the only reasonable solution at the moment.
I am writing an application that uses Google's Python client for GCS.
https://cloud.google.com/storage/docs/reference/libraries#client-libraries-install-python
I've had no issues using this, until I needed to write my functional tests.
The way our organization tests integrations like this is to write a simple stub of the API endpoints I hit, and point the Google client library (in this case) to my stub, instead of needing to hit Google's live endpoints.
I'm using a service account for authentication, and I can point the client at my stub when fetching a token, because it reads that value from the service account's JSON key you get when you create the service account.
What I can't seem to do is point the client library at my stubbed API instead of having it call Google directly.
Some workarounds I've thought of, but don't like, are:
- Allow the tests to hit the live endpoints.
- Add configuration that toggles between the real Google client library and a mocked version of it. I'd rather mock the API than have mock code deployed to production.
Any help with this is greatly appreciated.
I've done some research, and it seems there's nothing supported specifically for Cloud Storage in Python. I found this GitHub issue with a related discussion, but it's for Go.
I think you could open an issue in the public issue tracker asking for this functionality. I'm afraid that, for now, it's easier to keep using your second workaround.
What is the best way to do user management in a single-page JS (Mithril) app? I want users to log in to load preferences and take on a role so they gain certain permissions. I have a REST API backend written in Python (the Falcon web framework).

Having read a bit into it, it seems to boil down to sending credentials to the backend and getting a token back. But the question is how that should be done. Tokens seem to be a better method than cookies, but that affects how secrets/tokens are exchanged; the 'xhr.withCredentials' mechanism appears to be cookie-based, for instance. JWT (JSON Web Tokens) looks like a modern, interesting option, but it's hard to find a clear explanation of how it could be used with an SPA. And once the Mithril app has a token, where should I store it, and how should I use it with subsequent requests?
This isn't so much about Mithril, actually the only Mithril-related area is the server communication. That is done with the m.request method (docs here), but you need to create an object for all server communication that requires authentication.
That object should have knowledge about the auth system and detect if a token expired, then request a new one, take proper action if things fail, etc. It's a bit of work, but the process is different for most auth systems, so there's not much to do about it, except using something that already exists.
Being a small and lean MVC framework, Mithril doesn't have any security-related features built-in, but the m.request method is very powerful and you should use that inside the auth communication object.
The client-side storage will be in cookies or HTML5 storage. Here's a StackExchange answer that goes into more depth: https://security.stackexchange.com/a/80767 but the point is that this isn't Mithril-related either.
Thanks for linking to the tokens vs. cookies article, it was very nice!
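For the server half of the original question, a token-issuing and token-checking Falcon backend using PyJWT might look roughly like this; it's a minimal sketch, and all names, routes, and the credential check are placeholders:
import datetime
import falcon
import jwt

SECRET = "change-me"  # load from configuration/secret storage in practice

class LoginResource:
    def on_post(self, req, resp):
        creds = req.media  # expects a JSON body: {"username": ..., "password": ...}
        # Placeholder credential check; replace with a lookup against your user store.
        if creds.get("username") != "alice" or creds.get("password") != "secret":
            raise falcon.HTTPUnauthorized(description="Bad credentials")
        token = jwt.encode(
            {"sub": creds["username"], "role": "admin",
             "exp": datetime.datetime.utcnow() + datetime.timedelta(hours=1)},
            SECRET, algorithm="HS256")
        resp.media = {"token": token}  # the SPA stores this and sends it back later

class AuthMiddleware:
    """Reject any request (except /login) that lacks a valid Bearer token."""
    def process_request(self, req, resp):
        if req.path == "/login":
            return
        header = req.get_header("Authorization") or ""
        if not header.startswith("Bearer "):
            raise falcon.HTTPUnauthorized(description="Missing bearer token")
        try:
            req.context.user = jwt.decode(header[7:], SECRET, algorithms=["HS256"])
        except jwt.InvalidTokenError:
            raise falcon.HTTPUnauthorized(description="Invalid or expired token")

app = falcon.App(middleware=[AuthMiddleware()])
app.add_route("/login", LoginResource())
On the Mithril side, the returned token would be kept in client-side storage and attached as an Authorization header on each authenticated m.request call.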
I have been reading in multiple places that web servers should be stateless, with a shared-nothing architecture. This helps them scale better.
That means each request carries all the information needed to process it.
This becomes tricky when you have REST endpoints that need authentication.
I have been looking at how Flask extensions handle this, and the Flask-Login extension is described as:
Flask-Login provides user session management for Flask. It handles the
common tasks of logging in, logging out, and remembering your users’
sessions over extended periods of time.
This seems to go against the philosophy of building a stateless server, doesn't it?
What are better ways to build a stateless server with authentication provided via HTTP headers, using Python or related Python libraries?
P.S.: Apologies for not posting a programming question here; this is a design issue, I don't know how to solve it, and SO seems to have the right people to answer such questions. Thanks.
Flask-Login uses Flask's built-in session management, which by default uses secure/signed cookies and so is purely client-side.
It can of course support server-side sessions if needed; here's an example of a Redis-backed session store.
I have the same problem you describe.
I have built a simple solution for it, but I'm looking for a better one.
What I currently do is ask the caller (whoever sends the HTTP request) to provide an 'X-User-Info' HTTP header whose value is a token. When I receive a request, I use that token to look up the user's identity (from Redis, for instance), and all subsequent authorization and permission checks are based on that identity.
The authentication step does nothing but generate a random token, save it together with the user info in Redis, and return the token itself to the caller.
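A rough sketch of that approach with Flask and redis-py (names and the credential check are placeholders):
import functools
import json
import secrets

import redis
from flask import Flask, g, jsonify, request

app = Flask(__name__)
store = redis.Redis()

@app.route("/login", methods=["POST"])
def login():
    body = request.get_json(silent=True) or {}
    # Placeholder credential check; replace with a lookup against your user store.
    if body.get("username") != "alice" or body.get("password") != "secret":
        return jsonify({"error": "bad credentials"}), 401
    token = secrets.token_urlsafe(32)
    # Store the identity under the token with a TTL; the web server itself keeps no state.
    store.setex(token, 3600, json.dumps({"username": body["username"], "role": "admin"}))
    return jsonify({"token": token})

def require_user(view):
    """Resolve the caller's identity from the X-User-Info header before each request."""
    @functools.wraps(view)
    def wrapper(*args, **kwargs):
        raw = store.get(request.headers.get("X-User-Info", ""))
        if raw is None:
            return jsonify({"error": "unauthorized"}), 401
        g.user = json.loads(raw)
        return view(*args, **kwargs)
    return wrapper

@app.route("/me")
@require_user
def me():
    return jsonify(g.user)
Whether this counts as stateless depends on where you draw the line: the web servers themselves share nothing, and the shared state lives only in Redis.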
I have an application that has a "private" REST API; I use RESTful URLs when making Ajax calls from my own webpages. However, this is insecure: anyone could make those same calls if they knew the URL patterns.
What's the best (or standard) way to secure these calls? Is it worth looking at something like OAuth now if I intend to release an API in the future, or am I mixing two separate strategies together?
I am using Google App Engine for Python and Tipfy.
Definitely take a look at OAuth
It is quickly becoming the de facto standard for securing REST APIs, and a lot of big companies are using it, including Google, Twitter, and Facebook, just to name a few.
For Python on GAE you have two options:
The most straightforward way (IMHO) is using David Larlet's library for OAuth Support in Django available on BitBucket.
But since you're not using Django, maybe you want to take a look at the python-oauth2 library that's available on GitHub, and is considered the most up-to-date and unit-tested implementation of OAuth for Python 2.4+.
Either way, I think you'd be much better off using OAuth than rolling your own service security solution.
Securing a JavaScript client is nearly impossible; on the server, you have no foolproof way to differentiate between a human using a web browser and a well-crafted script.
SSL encrypts data over the wire but decrypts at the edges, so that's no help. It prevents man-in-the-middle attacks, but does nothing to verify the legitimacy of the original client.
OAuth is good for securing requests between two servers, but for a JavaScript client it doesn't really help: anyone reading your JavaScript code can find your consumer key/secret, and then they can forge signed requests.
Some things you can do to mitigate API scraping:
- Generate short-lived session cookies when someone visits your website, and require a valid session cookie to invoke the REST API.
- Generate short-lived request tokens and include them in your website's HTML; require a valid request token inside each API request.
- Require users of your website to log in (Google Accounts / OpenID) and check the auth cookie before handling API requests.
- Rate-limit API requests; if you see too many requests from one client in a short time frame, block them.
OAuth would be overkill in your current scenario (and potentially insecure), in that it's designed to authorize a third-party service to access resources on behalf of the user.
Securing AJAX requests via an authorized user
AFAICT, you are in control of the client, the resources, and authentication, so you only need to secure access to the URLs and possibly the communication between client and server via SSL [2].
So, use Tipfy's auth extension to secure your URLs:
from tipfy import RequestHandler, Response
from tipfy.ext.auth import AppEngineAuthMixin, user_required

class MyHandler(RequestHandler, AppEngineAuthMixin):
    @user_required
    def get(self, **kwargs):
        return Response('Only logged in users can see this page.')
Securing AJAX requests without an authorized user
If the user is unknown, one could apply CSRF preventions to help protect the REST service from being called by an "unauthorized" client. Tipfy has this built into its WTForms extension, but that doesn't cover AJAX. Instead, the session extension could be used to attach an "authenticity_token" to all calls, which is then verified on the server.
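To illustrate the idea in a framework-agnostic way (names made up): the server embeds a signed, time-limited token in the page, the AJAX client sends it back with every call, and the handler verifies it before doing any work.
import hashlib
import hmac
import time

SECRET = b"server-side-secret"  # keep out of source control in practice

def issue_token(session_id, ttl=600):
    """Return 'expiry:signature' bound to the caller's session id."""
    expires = str(int(time.time()) + ttl)
    sig = hmac.new(SECRET, ("%s:%s" % (session_id, expires)).encode(), hashlib.sha256).hexdigest()
    return "%s:%s" % (expires, sig)

def verify_token(session_id, token):
    """Check the signature and that the token has not expired."""
    try:
        expires, sig = token.split(":", 1)
        if int(expires) < time.time():
            return False
    except ValueError:
        return False
    expected = hmac.new(SECRET, ("%s:%s" % (session_id, expires)).encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)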