I'm thinking about good ways to store third-party credentials, which basically means there needs to be a secret somewhere, either in code or data. I'm deploying on Google App Engine.
If the 'secret' was something like
pw_passphrase = sha2(username + 'global-password')
pw_plaintext = aes_decrypt(pw_passphrase, pw_ciphertext)
can I depend on this code never being seen by anyone who isn't an App Engine administrator?
...what if the credentials protect something super-sensitive like personal financial data; do we still trust it?
(The sha2 bit is exchangeable with any other secret pseudo-random function.)
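A minimal sketch of that scheme, assuming the third-party cryptography package for the AES part (note that AES-GCM also needs a nonce stored alongside the ciphertext, which the pseudocode above elides):
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

GLOBAL_PASSWORD = 'global-password'  # the secret that lives in the code

def derive_key(username):
    # pw_passphrase = sha2(username + 'global-password')
    return hashlib.sha256((username + GLOBAL_PASSWORD).encode()).digest()

def decrypt_credential(username, nonce, ciphertext):
    # pw_plaintext = aes_decrypt(pw_passphrase, pw_ciphertext)
    return AESGCM(derive_key(username)).decrypt(nonce, ciphertext, None)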
Yes: your source code is secure (as secure as Google can make it), and there's no way for unauthorized third parties to peek.
Also remember to handle exceptions in your code with an error page, or else an uncaught exception might expose parts of your source code to an unauthorized user.
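For example, a catch-all error handler keeps tracebacks out of what users see; a minimal sketch, assuming a Flask app:
from flask import Flask

app = Flask(__name__)

@app.errorhandler(500)
def handle_uncaught(exc):
    # Log the real traceback server-side; show users a generic page.
    app.logger.exception("Unhandled exception")
    return "An internal error occurred.", 500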
I created a basic Python web application using the Flask framework and deployed it on Google App Engine. I store JSON data using the GAE memcache API with the key and namespace as integers. Locally it worked, but once I deployed the code to App Engine it throws an error saying at least one of key or namespace should be of type string.
I changed my memcache key and namespace to type string and it works. Can someone help me understand why an integer key and namespace are accepted locally but fail once deployed? Is it a bug?
From the google.appengine.api.memcache package:
Any method that takes a ‘key’ argument will accept that key as a
string (unicode or not) or a tuple of (hash_value, string) where the
hash_value, normally used for sharding onto a memcache instance, is
instead ignored, as Google App Engine deals with the sharding
transparently. Keys in memcache are just bytes, without a specified
encoding. All such methods may raise TypeError if provided a bogus key
value and a ValueError if the key is too large.
The local development server only attempts to emulate the actual GAE infrastructure. Some aspects of the functionality aren't emulated entirely, some aren't emulated at all; the one you mention is not the only one. It's always a good idea to double-check your assumptions in real deployments, at least now and then, especially when exploring new GAE capabilities, to avoid building faulty assumptions into your app's foundation which would require significant re-working later. Been there, done that :)
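A defensive sketch along those lines: coerce keys and namespaces to str up front, so local and deployed behaviour stay in line (the wrapper is mine, not part of the API):
from google.appengine.api import memcache

def cache_set(key, value, namespace=None):
    # Production memcache insists on string keys and namespaces; the dev
    # server can be more lenient, so coerce before every call.
    ns = str(namespace) if namespace is not None else None
    return memcache.set(str(key), value, namespace=ns)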
I've got a Google App Engine project that uses the Google Cloud Language API, and I'm using the Google API Client Library (Python) to make the API calls.
When running my unit tests, I make quite a few calls to the API. This slows down my testing and also incurs costs.
I'd like to cache the calls to the Google API to speed up my tests and avoid the API charges, and I'd rather not roll my own if another solution is available.
I found this Google API page, which suggests doing this:
import httplib2
http = httplib2.Http(cache=".cache")
I've added these lines to my code (there is another option to use GAE memcache, but that wouldn't persist between test invocations), and right after these lines I create my API client:
NLP = discovery.build("language", "v1", developerKey=API_KEY)
The caching isn't working, and the above solution seems too simple, so I suspect I'm missing something.
UPDATE:
I updated my tests so that App Engine is not used (just a regular unit test) and I also figured out that I can pass the http I created to the Google API client like this:
NLP = discovery.build("language", "v1", http=http, developerKey=API_KEY)
Now the initial discovery call is cached, but the actual API calls are not cached, e.g., this call is not cached:
result = NLP.documents().annotateText(body=data).execute()
The suggested code:
http = httplib2.Http(cache=".cache")
tries to cache to the local filesystem in a directory called ".cache". On App Engine, you cannot write to the local filesystem, so this does nothing.
Instead, you could try caching to Memcache. The other suggestion in the referenced Python client docs is to do exactly this:
from google.appengine.api import memcache
http = httplib2.Http(cache=memcache)
Since all App Engine apps get free access to shared memcache, this should be better than nothing.
If this fails, you could also try memoization. I've had success memoizing calls to slow or flaky APIs, but it comes at the cost of increased memory usage (so I need bigger instances).
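A rough sketch of that kind of memoization, reusing the NLP client from above (a plain dict keyed on the input text; the feature choice is illustrative):
_annotate_cache = {}

def annotate_text_memoized(text):
    # Return the stored response for inputs we've already sent to the API.
    if text not in _annotate_cache:
        body = {
            "document": {"type": "PLAIN_TEXT", "content": text},
            "features": {"extractDocumentSentiment": True},
        }
        _annotate_cache[text] = NLP.documents().annotateText(body=body).execute()
    return _annotate_cache[text]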
EDIT: I see from your comment you're having this problem locally. I was originally thinking that memoization would be an alternative, but the need to hack on httplib2 makes that overly complicated. I'm back to thinking about how to convince httplib2 to do the right thing.
If you're trying to make a test run faster by caching an API call result, stop and consider whether you may have taken a wrong turn.
If you can restructure your code so that the API call can be replaced with a unittest.mock, your tests will run much, much faster.
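Something like this, for instance (the module path, function under test, and response fields are all hypothetical):
import mock  # on Python 3: from unittest import mock

from myapp.analysis import analyse  # hypothetical wrapper around the API call

@mock.patch("myapp.analysis.NLP")
def test_sentiment_is_positive(NLP):
    # Stub the whole call chain so no network request is made.
    NLP.documents.return_value.annotateText.return_value.execute.return_value = {
        "documentSentiment": {"score": 0.8, "magnitude": 0.8},
    }
    result = analyse("great product")
    assert result["documentSentiment"]["score"] > 0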
I just came across vcrpy which seems to do exactly this. I'll update this answer after I've had a chance to try it out.
I am using Python (2.7) and Selenium (3.4.3) to drive Firefox (52.2.0 ESR) via geckodriver (0.19.0) to automate a process on a CentOS 7 machine.
I need totally unattended operation of this automation with user credentials passed through; no storage allowed and no breaking in.
One piece of drama is being caused by the fact that the internal website required for the process is within an Active Directory domain while the machine running my automation is not. I have no need to validate the user, only pass the credentials to the website in such a way as to not require human interaction or for the person to be a local user on the machine.
I have tried various permutations of:
[protocol]://[user]:[pass]@[url]
driver.switch_to_alert() + send_keys
It seems some of those only work on IE, to which I have no access.
I have checked for libraries to handle this and all to no avail.
I can add libraries to Python and I have sudo access to the machine, but I can't touch authentication, so AD integration is not possible.
How can I give this AD website the credentials of an arbitrary user such that no local storage of their credentials happens and no user interaction is required?
Thank you
EDIT
I'm thinking of something like a proxy which could authenticate the user and then retain that authentication while Selenium does its thing...
Is there a simple LDAP/AD proxy available?
EDIT 2
Perhaps a very simple way of stating this is that I want to pass user credentials and prevent the authentication popup from happening.
Solution Found:
I needed to use a browser extension.
My solution has been built for Chromium but it should port almost unchanged to Firefox and maybe Edge.
First up, you need 2 APIs to be available for your browser:
webRequest.onAuthRequired - Chrome & Firefox
runtime.nativeMessaging - Chrome & Firefox
While both browser APIs are very similar, they do have some significant differences - such as Chrome's implementation lacking Promises.
If you set up your Native Messaging Host to send a properly-formed JSON string, you need only poll it once. This means you can use a single call to runtime.sendNativeMessage() and be assured that your credentials are parseable. Pun intended.
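The host itself can be a few lines of Python. A hedged sketch following Chrome's native-messaging framing (a 32-bit length prefix, then UTF-8 JSON; the credential values are placeholders):
import json
import struct
import sys

def read_message():
    # 4-byte length prefix, then the JSON payload.
    raw = sys.stdin.read(4)
    if not raw:
        sys.exit(0)
    length = struct.unpack("<I", raw)[0]
    return json.loads(sys.stdin.read(length))

def send_message(obj):
    encoded = json.dumps(obj)
    sys.stdout.write(struct.pack("<I", len(encoded)) + encoded)
    sys.stdout.flush()

if __name__ == "__main__":
    poll = read_message()  # e.g. {"text": "Ready"} from sendNativeMessage()
    send_message({"username": "DOMAIN\\user", "password": "placeholder"})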
Next, we need to look at how we're supposed to handle the webRequest.onAuthRequired event.
Since I'm working in Chromium, I need to use the promise-less Chrome API.
chrome.webRequest.onAuthRequired.addListener(
    callbackFunctionHere,
    {urls: [targetUrls]},
    ['asyncBlocking']  // --> this line is important, too. Very.
);
The Change:
I'll be calling my function provideCredentials because I'm a big stealy-stealer and used an example from this source. Look for the asynchronous version.
The example code fetches the credentials from storage.local ...
chrome.storage.local.get(null, gotCredentials);
We don't want that. Nope.
We want to get the credentials from a single call to sendNativeMessage so we'll change that one line.
chrome.runtime.sendNativeMessage(hostName, { text: "Ready" }, gotCredentials);
That's all it takes. Seriously. As long as your Host plays nice, this is the big secret. I won't even tell you how long it took me to find it!
Links:
My questions with helpful links:
Here - Workaround for Authenticating against Active Directory
Here - Also has some working code for a functional NM Host
Here - Some enlightening material on promises
So this turns out to be a non-trivial problem.
I haven't implemented the solution, yet, but I know how to get there...
Passing values to an extension is the first step - this can be done in both Chrome and Firefox. Check your browser version to make sure the required API, nativeMessaging, actually exists there. I have had to switch to Chromium for this reason.
Alternatively, one can use the storage API to put values in browser storage first. [edit: I did not go this way for security concerns]
Next is to use the onAuthRequired event from the webRequest API . Setup a listener on the event and pass in the values you need.
Caveats: I have built everything right up to the extension itself for the nativeMessaging API solution and there's still a problem with getting the script to recognise the data. This is almost certainly my JavaScript skills clashing with the arcane knowledge required to make these APIs make much sense ...
I have yet to attempt the storage method as it's less secure (in my mind) but it does seem to be simpler.
I simply want to receive notifications from dropbox that a change has been made. I am currently following this tutorial:
https://www.dropbox.com/developers/reference/webhooks#tutorial
The GET method is done, verification is good.
However, when trying to mimic their implementation of POST, I am struggling because of a few things:
I have no idea what redis_url means in the def process function of the tutorial.
I can't actually verify if anything is really being sent from Dropbox.
Also, any advice on how I can debug? I can't print anything from my program since it has to run on a server rather than in an IDE.
Redis is a key-value store; it's just a way to cache your data throughout your application.
For example, the access token received after the OAuth callback is stored:
redis_client.hset('tokens', uid, access_token)
only to be used later in process_user:
token = redis_client.hget('tokens', uid)
(code from https://github.com/dropbox/mdwebhook/blob/master/app.py as suggested by their documentation: https://www.dropbox.com/developers/reference/webhooks#webhooks)
The same goes for per-user delta cursors that are also stored.
There are plenty of resources on how to install Redis, for example:
https://www.digitalocean.com/community/tutorials/how-to-install-and-use-redis
In this case your redis_url would be something like:
"redis://localhost:6379/"
There are also hosted solutions, e.g. http://redistogo.com/
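In code, the client is built from that URL, roughly like this (the env var name follows the linked app.py; adjust to your host):
import os
import redis

# Fall back to a local Redis when no hosted URL is configured.
redis_url = os.environ.get("REDISTOGO_URL", "redis://localhost:6379/")
redis_client = redis.from_url(redis_url)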
A possible workaround would be to use a database for this purpose.
As for debugging, you could use the logging facility for Python; it's thread-safe and capable of writing output to a file stream, and it should give you plenty of information if properly used.
More info here:
https://docs.python.org/2/howto/logging.html
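A minimal setup along those lines (the file path and format are just examples):
import logging

logging.basicConfig(
    filename="/tmp/webhook.log",  # example path; pick one your host can write to
    level=logging.DEBUG,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)
logger = logging.getLogger(__name__)

logger.info("webhook challenge verified")
logger.debug("raw webhook body: %r", "...")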
I have a Django app that uses some secret keys (for example for OAuth2 / JWT authentication). I wonder where is the right place to store these keys.
Here are the methods I found so far:
1. Hardcoding: not an option, I don't want my secrets on the source control.
2. Hardcoding + obfuscating: same as #1 - attackers can just run my code to get the secret.
3. Storing in environment variables: my app.yaml is also source-controlled.
4. Storing in DB: Not sure about that. DB is not reliable enough in terms of availability and security.
5. Storing in a non-source-controlled file: my favorite method so far. The problem is that I need some backup for the files, and manual backup doesn't sound right.
Am I missing something? Is there a best practice for storing secret keys for Django apps or App Engine apps?
You can hardly hide the secret keys from an attacker who can access your server, since the server needs to know the keys. But you can make it hard for an attacker with low privileges.
Obfuscating is generally not considered good practice.
Your option 5 seems reasonable. Storing the keys in a non-source-controlled file lets you keep them in a single, well-defined place. You can set appropriate permissions on that file so that an attacker would need high privileges to open it. Also make sure that high privileges are required to edit the rest of the project; otherwise, the attacker could modify a random file of the project to access the keys.
I myself use your option 5 in my projects.
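For reference, option 5 can be as simple as this in settings.py (the file name and key names are illustrative):
# settings.py -- secrets.json is chmod 600 and listed in .gitignore.
import json
import os

BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

with open(os.path.join(BASE_DIR, "secrets.json")) as f:
    _secrets = json.load(f)

SECRET_KEY = _secrets["SECRET_KEY"]
JWT_SIGNING_KEY = _secrets["JWT_SIGNING_KEY"]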
A solution I've seen is to store an encrypted copy of the secret configuration in your repository using gpg. Depending on the structure of your team you could encrypt it symmetrically and share the password to decrypt it or encrypt it with the public keys of core members / maintainers.
That way your secrets are backed up the same way your code is, without sitting in the repository in plaintext.
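As part of a deploy step, the decryption might look like this (file names are illustrative):
# Decrypt the committed secrets file before the app starts.
import subprocess

subprocess.check_call([
    "gpg", "--batch", "--yes",
    "--output", "secrets.json",
    "--decrypt", "secrets.json.gpg",
])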