Braintree + Python: Configure credentials at the transaction level rather than the module level

I am currently using Python to integrate with Braintree. At the module level, we configure our API keys. From the docs:

import braintree

braintree.Configuration.configure(...)

def my_transaction():
    braintree.Transaction.sale(...)
How can I configure Braintree at the method level? That is, if I wanted to use a different credential for each transaction, how could I do so without updating a global config? E.g.:

import braintree

def my_transaction():
    braintree.Transaction.sale({
        'configuration': {...},
        'amount': ...
    })
I would like to be able to use a different API key, depending on the source of the transaction. I would also like to be able to more easily toggle between Sandbox and Production credentials.
How would I accomplish this?

I work at Braintree. If you need more help, please get in touch with our support team.
Configuration objects can be instantiated:
config = braintree.Configuration(
    environment=braintree.Environment.Sandbox,
    merchant_id='my_merchant_id',
    public_key='public_key',
    private_key='private_key'
)
and passed to a Braintree gateway object:
gateway = braintree.BraintreeGateway(config)
which you can then use to run transactions:
result = gateway.transaction.create({'amount': ...})
So you can either instantiate a new gateway for each transaction with the appropriate credentials, or keep around a gateway with each set of credentials and use the appropriate one.
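For example, here is a minimal sketch of the second approach (one gateway per credential set, reused across requests); the source names, merchant IDs and keys below are placeholders:

import braintree

# Hypothetical mapping from transaction source to its own credentials.
CONFIGS = {
    'web': braintree.Configuration(
        environment=braintree.Environment.Sandbox,
        merchant_id='web_merchant_id',
        public_key='web_public_key',
        private_key='web_private_key',
    ),
    'mobile': braintree.Configuration(
        environment=braintree.Environment.Production,
        merchant_id='mobile_merchant_id',
        public_key='mobile_public_key',
        private_key='mobile_private_key',
    ),
}

# Build one gateway per credential set up front and reuse them.
GATEWAYS = {source: braintree.BraintreeGateway(config)
            for source, config in CONFIGS.items()}

def my_transaction(source, amount):
    gateway = GATEWAYS[source]
    return gateway.transaction.create({'amount': amount})

Keeping the gateways around avoids rebuilding the configuration on every call, and toggling between Sandbox and Production is just a matter of which Configuration you register for a given source.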

Related

Fetch Azure Managed Identity from within Function

I am using the Azure Managed Identity feature for my Python Azure Functions app
and would like to be able to fetch the currently assigned client ID from within the Function App itself.
Searching through the documentation and the azure-identity Python sources did not give the result I expected.
Maybe I could:
Query the Azure Instance Metadata Service myself to get this ID (not really happy with this option).
Provision it as an env variable during the ARM deployment stage, or by hand later on (seems good and efficient, but I'm not sure what the best practice is here).
UPDATE
Managed to get it working with an ARM template and an env variable. The template:
Deploys the Function App with a System Identity
Provisions the System Identity as an env variable of this same Function App
The idea is to use a Microsoft.Resources/deployments subtemplate to update the Function App configuration with:
{
    "name": "AZURE_CLIENT_ID",
    "value": "[reference(resourceId('Microsoft.Web/sites', variables('appName')), '2019-08-01', 'full').identity.principalId]"
},
The simplest option is to go to the identity tab for your Functions app, and turn on "System assigned managed identity".
You can then get the access token without having to provide the client_id, since the token request simply picks the system assigned identity if there is one for the Function app.
If you are using "user assigned managed identity", then you need to provide the client_id: either through env or directly in your code.
You may already be aware, but just an additional note: you also need to make sure you have given your managed identity access to the resource you are accessing, for example by going to the Azure resource your Function App needs to access and assigning an appropriate role to your managed identity.
Your option 1 (querying the Azure Instance Metadata Service) is only available on VMs.
UPDATE
Since you need the client_id for other purposes, you may also consider reading it from the response to your request for the access token: client_id is one of the fields in the JSON returned to you along with the access token, and its value is the client_id of the managed identity you used (in your case, the system-assigned managed identity). A Python sketch follows after the sample response below.
Here is a sample token response to illustrate this:
{
    "access_token": "<...>",
    "resource": "<...>",
    "token_type": "Bearer",
    "client_id": "<client_id of the managed identity used to get this token>"
}
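As an illustration, here is a minimal Python sketch of requesting such a token from inside the Function App and reading client_id from the response. It assumes the IDENTITY_ENDPOINT and IDENTITY_HEADER environment variables that the Functions runtime exposes for managed identities, and uses the requests library:

import os
import requests

def get_managed_identity_token(resource="https://management.azure.com/"):
    # Local token endpoint exposed to the Function App by the platform.
    endpoint = os.environ["IDENTITY_ENDPOINT"]
    header = os.environ["IDENTITY_HEADER"]
    resp = requests.get(
        endpoint,
        params={"resource": resource, "api-version": "2019-08-01"},
        headers={"X-IDENTITY-HEADER": header},
    )
    resp.raise_for_status()
    token = resp.json()
    # client_id identifies the managed identity that issued this token.
    return token["access_token"], token.get("client_id")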

How can I create a config file for AWS RDS credentials and import them in my AWS Lambda APIs?

Apologies if the question seems too basic as I'm new to AWS. I need help with creating a config file where I can store my RDS credentials and import this file inside my Lambda functions instead of writing the credentials inside each of them. The Lambda API connects to the RDS using the following credentials:
#rds settings
rds_host = "<host url>"
name = '<host name>'
password = '<host password>'
db_name = '<name of the schema being used>'
I want to put this information in a config file, but I'm not able to figure out where to place this config file inside AWS and how to import it. The reason I want to configure this is that if I change the credentials, I want the change to be reflected in each of my Lambdas.
Thank you
Putting db credentials inside your Lambda function is not good practice. I would recommend considering one of the following ways of passing db credentials into a Lambda function:
Use environment variables of the Lambda function to pass the credentials. The environment variables can even be encrypted using KMS for extra security.
Use SSM Parameter Store to centrally store and manage the credentials (free); a sketch of this approach follows below.
Use AWS Secrets Manager to store and automatically rotate the db credentials (not free).
Use IAM Database Authentication for MySQL and PostgreSQL, which eliminates the use of a traditional username and password for accessing a db.
If you have many Lambda functions, methods 2, 3 and 4 would be the best suited. These methods also solve the problem of updating db credentials.
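For example, a minimal sketch of option 2, assuming a hypothetical SecureString parameter named /myapp/db_config that holds the settings above as a JSON document:

import json
import boto3

ssm = boto3.client('ssm')

def get_db_config():
    # Fetch and decrypt the hypothetical /myapp/db_config parameter.
    response = ssm.get_parameter(Name='/myapp/db_config', WithDecryption=True)
    return json.loads(response['Parameter']['Value'])

def lambda_handler(event, context):
    db = get_db_config()
    # Connect to RDS using db['rds_host'], db['name'], db['password'], db['db_name'].
    ...

Every Lambda that needs the credentials reads the same parameter, so rotating them becomes a single update in Parameter Store.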

How do I access the security token for the Python SDK boto3

I want to access the AWS Comprehend API from a Python script. I'm not getting any leads on how to remove this error. One thing I know is that I have to get a session security token.

import json
import boto3
from botocore.exceptions import ClientError

try:
    client = boto3.client(service_name='comprehend', region_name='us-east-1',
                          aws_access_key_id='KEY ID', aws_secret_access_key='ACCESS KEY')
    text = "It is raining today in Seattle"
    print('Calling DetectEntities')
    print(json.dumps(client.detect_entities(Text=text, LanguageCode='en'), sort_keys=True, indent=4))
    print('End of DetectEntities\n')
except ClientError as e:
    print(e)

Error: An error occurred (UnrecognizedClientException) when calling the DetectEntities operation: The security token included in the request is invalid.
This error suggests that you have provided invalid credentials.
It is also worth noting that you should never put credentials inside your source code. This can lead to potential security problems if other people obtain access to the source code.
There are several ways to provide valid credentials to an application that uses an AWS SDK (such as boto3).
If the application is running on an Amazon EC2 instance, assign an IAM Role to the instance. This will automatically provide credentials that can be retrieved by boto3.
If you are running the application on your own computer, store credentials in the .aws/credentials file. The easiest way to create this file is with the aws configure command.
See: Credentials — Boto 3 documentation
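For example, once aws configure has written ~/.aws/credentials (or an IAM Role is attached to the EC2 instance), the client can be created with no keys in the code at all; a minimal sketch:

import json
import boto3

# No aws_access_key_id / aws_secret_access_key here: boto3 picks up the
# credentials from the IAM Role, environment variables, or ~/.aws/credentials.
client = boto3.client('comprehend', region_name='us-east-1')
result = client.detect_entities(Text='It is raining today in Seattle', LanguageCode='en')
print(json.dumps(result, sort_keys=True, indent=4))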
Create a profile using aws configure or by updating ~/.aws/config. If you only have one profile to work with (default), you can omit the profile_name parameter from the Session() invocation (see the example below). Then create an AWS service-specific client using the session object. Example:
import boto3
session = boto3.session.Session(profile_name="test")
ec2_client = session.client('ec2')
ec2_client.describe_instances()
ec2_resource = session.resource('ec2')
One useful tool I use daily is this: https://github.com/atward/aws-profile/blob/master/aws-profile
This makes assuming roles so much easier!
After you set up your access keys in .aws/credentials and your .aws/config,
you can do something like:
AWS_PROFILE=your-profile aws-profile [python x.py]
The part in [] can be substituted with any command that you want to run with AWS credentials, e.g. terraform plan.
Essentially, this utility simply puts your AWS credentials into OS environment variables. Then, in your boto script, you don't need to worry about setting aws_access_key_id etc.

Access authorizer from an AWS lambda function with proxy integration - Python

[Derived from AWS ApiGateway Lambda Proxy access Authorizer]
I'm using a Lambda Proxy and a Cognito User Pool Authorizer in my ApiGateway. With Node.js Lambda functions, it's possible to access the authorizer (basically, to get the user id, email and such) in event.requestContext.authorizer.claims.
But in Python, there's no such object. Exploring the event object and the context object shows there's nothing like requestContext or authorizer, just an identity object that contains Cognito Federated Identities information: for me, it's just null.
I haven't seen any mention of event.requestContext.authorizer.claims in Python AWS Lambda functions, and it would be a pain to:
use federated identities instead,
in API Gateway, set Authorization to AWS_IAM and turn on Invoke with caller credentials,
drop the proxy integration for a custom integration, or
write the Lambdas with Node.js instead of Python.
Do you have any idea if I'm missing something, if it's not implemented by AWS, if there's an easy workaround, or anything else?
Thanks a lot for your help!
Here's the context object in a Python AWS Lambda function, in a dictionary-like form:
{
    "aws_request_id": "99XXXXf5-6XXe-1XX8-bXXf-5dXXXXXXXX50",
    "function_name": "test_DynamoDB",
    "function_version": "$LATEST",
    "invoked_function_arn": "arn:aws:lambda:eu-west-3:4XXXXXXXXXX0:function:test_DynamoDB",
    "log": "<bound method LambdaContext.log of <__main__.LambdaContext object at 0x7f07bcb21da0>>",
    "log_group_name": "/aws/lambda/test_DynamoDB",
    "log_stream_name": "2018/06/13/[$LATEST]76XXXXXXXXXXXXXXXXXXXXXXXXXXXXf0",
    "memory_limit_in_mb": "128",
    "identity": {
        "cognito_identity_id": null,
        "cognito_identity_pool_id": null
    },
    "client_context": None
}
Update
From https://stackoverflow.com/a/44039371/9936457, it seems the same issue existed in C# Lambda functions and there is a workaround for it, using System.IO.Streams. Would this solution work in Python, with the right syntax?
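For reference, a minimal sketch of where the claims would be read in a Python handler, assuming the proxy integration populates event['requestContext']['authorizer']['claims'] the same way the Node.js event.requestContext.authorizer.claims path suggests (the claim names below are just the standard Cognito ones):

import json

def lambda_handler(event, context):
    # With a Cognito User Pool authorizer and proxy integration, the claims
    # would be expected in the event dict rather than in the context object.
    claims = (event.get('requestContext', {})
                   .get('authorizer', {})
                   .get('claims', {}))
    return {
        'statusCode': 200,
        'body': json.dumps({'sub': claims.get('sub'), 'email': claims.get('email')}),
    }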

How to use Flask-Login when user info is provided by a remote service?

I'm building a Flask app and started using Flask-Login for authentication. What troubles me is that Flask-Login calls the load_user callback for every request that Flask handles. Here is the example from https://flask-login.readthedocs.org/en/latest/#how-it-works:

@login_manager.user_loader
def load_user(userid):
    return User.get(userid)
To retrieve the user, I need to pass a session token to a remote web service across a VPN, and the remote web service does a db query -- this results in noticeable latency on every web request. My load_user looks something like this:
@login_manager.user_loader
def load_user(userid):
    # notice that I don't even use the userid arg
    try:
        # have to get session_token from session; why not
        # just get the entire user object from session???
        token = session.get('session_token')
        user_profile = RestClient().get_user_profile(token)
        return User(user_profile['LDAP_ID'])
    except:
        return None
It seems like maybe I'm subverting the framework. I could just store/retrieve the user from the session, so why bother getting it from the web service? This option also seems to subvert Flask-Login, but it eliminates the latency and makes good use of the session.
The best way to handle this is to cache session information using something like memcached or redis (look into Flask-Cache for help).
You should have a key-value cache store that structures the cache like so:
key: sessionID
value: user object
This is what most frameworks do -- Flask-Login is a generic tool -- so you have to implement this yourself.
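A minimal sketch of what that could look like, assuming Flask-Caching (the maintained fork of Flask-Cache) with a Redis backend; app, login_manager, session, RestClient and User are the objects from the question:

from flask import session
from flask_caching import Cache

cache = Cache(config={'CACHE_TYPE': 'RedisCache'})
cache.init_app(app)  # app is your Flask application

@login_manager.user_loader
def load_user(userid):
    token = session.get('session_token')
    if token is None:
        return None
    user = cache.get(token)          # key: session token, value: user object
    if user is None:
        # Only hit the remote service on a cache miss.
        user_profile = RestClient().get_user_profile(token)
        user = User(user_profile['LDAP_ID'])
        cache.set(token, user, timeout=300)
    return user

The timeout controls how stale a cached profile may get before the remote service is consulted again.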
Incidentally, if you're looking for a way to abstract away that nasty LDAP stuff on the backend, you might want to check out https://stormpath.com -- they sync with LDAP servers, and provide a REST API on top of it. There's also a Flask library for interacting with it: Flask-Stormpath.
