How to fetch data from AWS Parameter Store in a CI Jenkins server (which runs my tests nightly) using Python

I have spent a lot of time searching the web for this answer.
I know how to get my QA environment's aws_access_key_id & aws_secret_access_key while running my code locally on my PC; they are stored in my C:\Users\[name]\.aws config file:
[profile qa]
aws_access_key_id = ABC
aws_secret_access_key = XYZ
[profile crossaccount]
role_arn=arn:aws:ssm:us-east-1:12345678:parameter/A/B/secrets/C
source_profile=qa
With the following Python code I can actually see the correct values:
import boto3
session = boto3.Session(profile_name='qa')
s3client = session.client('s3')
credentials = session.get_credentials()
accessKey = credentials.access_key
secretKey = credentials.secret_key
print("accessKey= " + str(accessKey) + " secretKey="+ secretKey)
A) How do I get these when my code is running in CI? Do I pass the "aws_access_key_id" and "aws_secret_access_key" as arguments to the code?
B) How exactly do I assume a role and get the parameters from the AWS Systems Manager Parameter Store, given that I know:
secretsExternalId: 'DEF',
secretsRoleArn: 'arn:aws:ssm:us-east-1:12345678:parameter/A/B/secrets/C',

Install boto3 and botocore on the CI server.
Since, as you mentioned, the CI (Jenkins) is also running on AWS, you can attach a role to the CI server (EC2 instance) that grants SSM read permission.
Below is a sample policy for that role.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssm:Describe*",
                "ssm:Get*",
                "ssm:List*"
            ],
            "Resource": "*"
        }
    ]
}
Then, from your Python code, you can simply call SSM to read the parameter values. For example, export the SSM parameter names (paths) as environment variables:
$ export parameter_value1="/path/to/ssm/parameter_1"
$ export parameter_value2="/path/to/ssm/parameter_2"
and read them in Python:
import os
import boto3

session = boto3.Session(region_name='eu-west-1')
ssm = session.client('ssm')

# SSM parameter names (paths) are read from the environment
mysql_hostname_param = os.environ.get('parameter_value1')
mysql_username_param = os.environ.get('parameter_value2')

hostname = ssm.get_parameter(Name=mysql_hostname_param, WithDecryption=True)
username = ssm.get_parameter(Name=mysql_username_param, WithDecryption=True)
print("Param1: {}".format(hostname['Parameter']['Value']))
print("Param2: {}".format(username['Parameter']['Value']))

Related

Login to a GitLab repo from Kubernetes Python client

I have a Django app that uses the official Kubernetes client for Python and works fine, but it can (rightly) only deploy images from public registries.
Is there a way to perform a login and then let the Kubernetes client pull a private image freely? I would rather not execute direct cmd commands for the login and the image pull. Thanks!
Actually it's pretty easy to do using the official Kubernetes Python client. You need to do two steps:
create a secret of type dockerconfigjson (this can be done from the command line or with the Python client) - this is where you put your credentials
add this secret to your deployment / pod definition using imagePullSecrets so the Kubernetes client can pull images from private repositories
Create a secret of type dockerconfigjson:
Replace <something> with your data.
Command line:
kubectl create secret docker-registry private-registry \
--docker-server=<your-registry-server> --docker-username=<your-name> \
--docker-password=<your-pword> --docker-email=<your-email>
The equivalent in the Kubernetes Python client (remember to pass the password variable in a secure way; for example, check this):
import base64
import json
from kubernetes import client, config
config.load_kube_config()
v1 = client.CoreV1Api()
# Credentials
username = <your-name>
password = <your-pword>
mail = <your-email>
secret_name = "private-registry"
namespace = "default"
# Address of Docker repository - in case of Docker Hub just use https://index.docker.io/v1/
docker_server = <your-registry-server>
# Create auth token
auth_decoded = username + ":" + password
auth_decoded_bytes = auth_decoded.encode('ascii')
base64_auth_message_bytes = base64.b64encode(auth_decoded_bytes)
base64_auth_message = base64_auth_message_bytes.decode('ascii')
cred_payload = {
    "auths": {
        docker_server: {
            "username": username,
            "password": password,
            "email": mail,
            "auth": base64_auth_message
        }
    }
}
data = {
    ".dockerconfigjson": base64.b64encode(
        json.dumps(cred_payload).encode()
    ).decode()
}
secret = client.V1Secret(
    api_version="v1",
    data=data,
    kind="Secret",
    metadata=dict(name=secret_name, namespace=namespace),
    type="kubernetes.io/dockerconfigjson",
)
v1.create_namespaced_secret(namespace, body=secret)
Add this secret to your deployment / pod definition using the imagePullSecrets option
Now let's move on to using the newly created secret. Depending on how you want to deploy the pod / deployment from Python code, there are two ways: apply a yaml file, or create the pod / deployment manifest directly in the code. I will show both. As before, replace <something> with your data.
Example yaml file:
apiVersion: v1
kind: Pod
metadata:
  name: private-registry-pod
spec:
  containers:
  - name: private-registry-container
    image: <your-private-image>
  imagePullSecrets:
  - name: private-registry
In the last line we refer to the private-registry secret created in the previous step.
Let's apply this yaml file using the Kubernetes Python client:
from os import path
import yaml
from kubernetes import client, config
config.load_kube_config()
v1 = client.CoreV1Api()
config_yaml = "pod.yaml"
with open(path.join(path.dirname(__file__), config_yaml)) as f:
    dep = yaml.safe_load(f)
    resp = v1.create_namespaced_pod(body=dep, namespace="default")
    print("Deployment created. status='%s'" % str(resp.status))
All in Python code - both pod definition and applying process:
from kubernetes import client, config
import time
config.load_kube_config()
v1 = client.CoreV1Api()
pod_name = "private-registry-pod"
secret_name = "private-registry"
namespace = "default"
container_name = "private-registry-container"
image = <your-private-image>
# Create a pod
print("Creating pod...")
pod_manifest = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "name": pod_name
    },
    "spec": {
        "containers": [
            {
                "name": container_name,
                "image": image
            }
        ],
        "imagePullSecrets": [
            {
                "name": secret_name
            }
        ]
    }
}
resp = v1.create_namespaced_pod(body=pod_manifest, namespace=namespace)
# Wait for a pod
while True:
    resp = v1.read_namespaced_pod(name=pod_name, namespace=namespace)
    if resp.status.phase != 'Pending':
        break
    time.sleep(1)
print("Done.")
Sources:
Github thread
Stackoverflow topic
Official Kubernetes Python client example
Kubernetes docs
Another Kubernetes docs
Github topic

AWS assume role not working as expected with boto3

I want to use AWS SSM to call ssm:DescribeInstanceInformation on EC2 instances (i-0691847a77) by assuming an IAM role (iam_ssm_role) which has the policy below defined.
Both IAM roles are in the same AWS account, and the iam_base_role ARN has been added to the trust policy of iam_ssm_role.
{
    "Sid": "VisualEditor1",
    "Effect": "Allow",
    "Action": [
        "ssm:*",
        "ec2:DescribeImages",
        "cloudwatch:PutMetricData",
        "ec2:DescribeInstances",
        "lambda:InvokeFunction",
        "ec2:DescribeTags",
        "ec2:DescribeVpcs",
        "cloudwatch:GetMetricStatistics",
        "ec2:DescribeSubnets",
        "ec2:DescribeKeyPairs",
        "cloudwatch:ListMetrics",
        "ec2:DescribeSecurityGroups"
    ],
    "Resource": "*"
}
I am running the code below on an EC2 instance with the IAM role iam_base_role:
import boto3
from boto3.session import Session
def assume_role(arn, session_name):
    client = boto3.client('sts')
    response = client.assume_role(RoleArn=arn, RoleSessionName=session_name)
    session = Session(aws_access_key_id=response['Credentials']['AccessKeyId'],
                      aws_secret_access_key=response['Credentials']['SecretAccessKey'],
                      aws_session_token=response['Credentials']['SessionToken'])
    client = session.client('sts')
    account_id = client.get_caller_identity()["Account"]
    print(response['AssumedRoleUser']['AssumedRoleId'])

assume_role('arn:aws:iam::000001:role/iam_ssm_role', 'ssm_session')

client = boto3.client('ssm', region_name='us-east-1')
ssm_response = client.describe_instance_information(
    InstanceInformationFilterList=[
        {
            'key': 'InstanceIds',
            'valueSet': [
                'i-0f0099877fgg'
            ]
        }
    ]
)
print(ssm_response)
I am getting an access denied error; the assumed role shows as "iam_ssm_role", but it looks like the SSM call is running as iam_base_role, not iam_ssm_role:
AROAV6BDS6PTVQBU:iam_ssm_role
botocore.exceptions.ClientError: An error occurred (AccessDeniedException) when calling the DescribeInstanceInformation operation: User: arn:aws:sts::000001:assumed-role/iam_base_role/i-0691847a77 is not authorized to perform: ssm:DescribeInstanceInformation on resource: arn:aws:ssm:us-east-1:000001:*
OK, I found the issue with my previous code: I was not using the assumed IAM role's credentials in the boto3.client('ssm') call.
I can now run the code successfully; I am using the code below now.
import boto3
boto_sts=boto3.client('sts')
stsresponse = boto_sts.assume_role(
RoleArn="arn:aws:iam::000001:role/iam_ssm_role",
RoleSessionName='newsession'
)
newsession_id = stsresponse["Credentials"]["AccessKeyId"]
newsession_key = stsresponse["Credentials"]["SecretAccessKey"]
newsession_token = stsresponse["Credentials"]["SessionToken"]
client = boto3.client('ssm',
                      region_name='us-east-1',
                      aws_access_key_id=newsession_id,
                      aws_secret_access_key=newsession_key,
                      aws_session_token=newsession_token)
ssm_response = client.describe_instance_information(
    InstanceInformationFilterList=[
        {
            'key': 'InstanceIds',
            'valueSet': [
                'i-0f0099877fgg'
            ]
        }
    ]
)
print(ssm_response)
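A slightly tidier variant of the same idea (just a sketch) is to build a boto3 Session from the assumed-role credentials, so every client created from it uses them:
import boto3

sts = boto3.client('sts')
creds = sts.assume_role(RoleArn="arn:aws:iam::000001:role/iam_ssm_role",
                        RoleSessionName='newsession')["Credentials"]

# Every client created from this session uses the assumed role's credentials
assumed_session = boto3.Session(aws_access_key_id=creds["AccessKeyId"],
                                aws_secret_access_key=creds["SecretAccessKey"],
                                aws_session_token=creds["SessionToken"],
                                region_name='us-east-1')
ssm = assumed_session.client('ssm')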

How to create user in amazon-cognito using boto3 in python

I'm trying to create a user using Python 3.x and boto3 but I'm running into some issues.
I've tried using "admin_create_user" but even that didn't work for me.
import boto3

aws_client = boto3.client('cognito-idp',
                          region_name=CONFIG["cognito"]["region"])

response = aws_client.admin_create_user(
    UserPoolId=CONFIG["cognito"]["pool_id"],
    Username=email,
    UserAttributes=[
        {"Name": "first_name", "Value": first_name},
        {"Name": "last_name", "Value": last_name},
        {"Name": "email_verified", "Value": "true"}
    ],
    DesiredDeliveryMediums=['EMAIL']
)
The error I'm facing: (screenshot omitted)
I think you didn't pass the configuration. First install the AWS CLI.
pip install awscli --upgrade --user
Then type the command below in your terminal:
aws configure
Provide your details correctly:
AWS Access Key ID [****************6GOW]:
AWS Secret Access Key [****************BHOD]:
Default region name [us-east-1]:
Default output format [None]:
Try this, and you can also view your credentials at the paths below.
sudo cat ~/.aws/credentials
[default]
aws_access_key_id = ******7MVXLBPHW66GOW
aws_secret_access_key = wKtT*****UqN1sO/1Pfn+BCrvNst*****695BHOD
sudo cat ~/.aws/config
[default]
region = us-east-1
or you can view all of those in one place with the aws configure list command.
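Once the default profile is configured, boto3 picks it up automatically; a quick sanity check (just a sketch) looks like this:
import boto3

# boto3 reads ~/.aws/credentials and ~/.aws/config automatically
session = boto3.Session()
print(session.region_name)                          # e.g. us-east-1
print(session.client('sts').get_caller_identity())  # confirms the credentials work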

AWS IOT connection using Cognito credentials

I am having problems connecting to AWS IoT from a Python script using Cognito credentials. I am using the AWS IoT SDK for Python as well as the boto3 package. Here is what I am doing:
First, I set up a Cognito User Pool with a couple of users who have a username and password to log in. I also set up a Cognito Identity Pool with my Cognito User Pool as the one and only Authentication Provider. I do not provide access to unauthenticated identities. The Identity Pool has an Auth Role I will just call "MyAuthRole", and when I go to IAM, that role has two policies attached. One is the default policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "mobileanalytics:PutEvents",
                "cognito-sync:*",
                "cognito-identity:*"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}
and the second is a policy for IoT access:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "iot:*",
            "Resource": "*"
        }
    ]
}
Next, I have Python code that uses my AWS IAM account credentials (access key and secret key) to get a temporary auth token, like this:
auth_data = { 'USERNAME':username , 'PASSWORD':password }
provider_client=boto3.client('cognito-idp', region_name=region)
resp = provider_client.admin_initiate_auth(UserPoolId=user_pool_id, AuthFlow='ADMIN_NO_SRP_AUTH', AuthParameters=auth_data, ClientId=client_id)
id_token=resp['AuthenticationResult']['IdToken']
Next I use this token to get temporary AWS credentials (which I then use to connect to AWS IoT), using this function:
def _get_aws_cognito_temp_credentials(aws_access_key_id=None, aws_secret_access_key=None,
                                      region_name='us-west-2', account_id=None, user_pool_id=None,
                                      identity_pool_id=None, id_token=None):
    boto3.setup_default_session(aws_access_key_id=aws_access_key_id,
                                aws_secret_access_key=aws_secret_access_key,
                                region_name=region_name)
    identity_client = boto3.client('cognito-identity', region_name=region_name)
    loginkey = "cognito-idp.%s.amazonaws.com/%s" % (region_name, user_pool_id)
    print("loginkey is %s" % loginkey)
    loginsdict = {
        loginkey: id_token
    }
    identity_response = identity_client.get_id(AccountId=account_id,
                                               IdentityPoolId=identity_pool_id,
                                               Logins=loginsdict)
    identity_id = identity_response['IdentityId']
    #
    # Get the identity's credentials
    #
    credentials_response = identity_client.get_credentials_for_identity(
        IdentityId=identity_id,
        Logins=loginsdict)
    credentials = credentials_response['Credentials']
    access_key_id = credentials['AccessKeyId']
    secret_key = credentials['SecretKey']
    service = 'execute-api'
    session_token = credentials['SessionToken']
    expiration = credentials['Expiration']
    return access_key_id, secret_key, session_token, expiration
Finally, I create the AWS IoT client and try to connect like this:
myAWSIoTMQTTClient = AWSIoTMQTTClient(clientId, useWebsocket=True)
myAWSIoTMQTTClient.configureEndpoint(host, port)
myAWSIoTMQTTClient.configureIAMCredentials(temp_access_key_id,
temp_secret_key,
temp_session_token)
myAWSIoTMQTTClient.configureAutoReconnectBackoffTime(1, 32, 20)
myAWSIoTMQTTClient.configureOfflinePublishQueueing(-1)
myAWSIoTMQTTClient.configureDrainingFrequency(2)
myAWSIoTMQTTClient.configureConnectDisconnectTimeout(10)
myAWSIoTMQTTClient.configureMQTTOperationTimeout(5)
log.info("create_aws_iot_client", pre_connect=True)
myAWSIoTMQTTClient.connect()
log.info("create_aws_iot_client", post_connect=True, myAWSIoTMQTTClient=myAWSIoTMQTTClient)
The problem is that it gets to pre_connect and then just hangs and eventually times out. The error message I get is this:
AWSIoTPythonSDK.exception.AWSIoTExceptions.connectTimeoutException
I also read somewhere that there may be other policies that somehow have to be attached:
"According to the Policies for HTTP and WebSocketClients documentation, in order to authenticate an Amazon Cognito identity to publish MQTT messages over HTTP, you must specify two policies. The first policy must be attached to an Amazon Cognito identity pool role. This is the managed policy AWSIoTDataAccess that was defined earlier in the IdentityPoolAuthRole.
The second policy must be attached to an Amazon Cognito user using the AWS IoT AttachPrincipalPolicy API."
However, I have no clue how to achieve the above in Python or using the AWS console.
How do I fix this issue?
You are right that the step you've missed is using the AttachPrincipalPolicy API (which is now deprecated and has been replaced with Iot::AttachPolicy).
To do this:
Create an IoT policy (IoT Core > Secure > Policies > Create)
Give the policy the permissions you want any user attached to that policy to have; from your code as shared, that would mean just copying the second IAM policy. Although you'll want to lock that down in production!
Using the AWS CLI you can attach this policy to your cognito user using:
aws iot attach-policy --policy-name <iot-policy-name> --target <cognito-identity-id>
There is a significantly more involved AWS example at aws-samples/aws-iot-chat-example, but it is in JavaScript. I wasn't able to find an equivalent in Python, but the Cognito/IAM/IoT configuration steps and required API calls remain the same whatever the language.
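For the attach step itself, a minimal boto3 sketch; the policy name is a placeholder, and identity_id would be the Cognito identity ID returned by get_id() in the question's function:
import boto3

iot = boto3.client('iot', region_name='us-west-2')

# Attach the IoT policy created in the console to the Cognito identity
iot.attach_policy(
    policyName='<iot-policy-name>',  # placeholder for your IoT policy name
    target=identity_id               # e.g. 'us-west-2:xxxxxxxx-xxxx-xxxx-...'
)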

Copy from S3 bucket in one account to S3 bucket in another account using Boto3 in AWS Lambda

I have created an S3 bucket and created a file under my AWS account. My account has a trust relationship established with another account, and I am able to put objects into the bucket in the other account using Boto3. How can I copy objects from the bucket in my account to the bucket in the other account using Boto3?
I get "access denied" when I use the code below:
source_session = boto3.Session(region_name = 'us-east-1')
source_conn = source_session.resource('s3')
src_conn = source_session.client('s3')
dest_session = __aws_session(role_arn=assumed_role_arn, session_name='dest_session')
dest_conn = dest_session.client('s3')
copy_source = {'Bucket': bucket_name, 'Key': key_value}
dest_conn.copy(copy_source, dest_bucket_name, dest_key,
               ExtraArgs={'ServerSideEncryption': 'AES256'}, SourceClient=src_conn)
In my case, src_conn has access to the source bucket and dest_conn has access to the destination bucket.
I believe the only way to achieve this is by downloading and uploading the files.
AWS Session
client = boto3.client('sts')
response = client.assume_role(RoleArn=role_arn, RoleSessionName=session_name)
session = boto3.Session(
aws_access_key_id=response['Credentials']['AccessKeyId'],
aws_secret_access_key=response['Credentials']['SecretAccessKey'],
aws_session_token=response['Credentials']['SessionToken'])
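A rough sketch of that download-and-upload approach, reusing the assumed-role session above; bucket_name, key_value, dest_bucket_name and dest_key are the names from the question:
import io
import boto3

src_s3 = boto3.client('s3')     # source-account credentials (e.g. the Lambda's own role)
dest_s3 = session.client('s3')  # 'session' is the assumed-role session created above

buf = io.BytesIO()
src_s3.download_fileobj(bucket_name, key_value, buf)   # download from the source bucket
buf.seek(0)
dest_s3.upload_fileobj(buf, dest_bucket_name, dest_key,
                       ExtraArgs={'ServerSideEncryption': 'AES256'})  # upload to the destination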
Another approach is to attach a policy to the destination bucket permitting access from the account hosting the source bucket, e.g. something like the following should work (although you may want to tighten up the permissions as appropriate):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<source account ID>:root"
            },
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::dst_bucket",
                "arn:aws:s3:::dst_bucket/*"
            ]
        }
    ]
}
Then your Lambda hosted in your source AWS account should have no problems writing to the bucket(s) in the destination AWS account.
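If you would rather apply that bucket policy from code than from the console, here is a sketch using put_bucket_policy; it must run with destination-account credentials, and the bucket name and account ID are placeholders:
import json
import boto3

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::<source account ID>:root"},
        "Action": "s3:*",
        "Resource": ["arn:aws:s3:::dst_bucket", "arn:aws:s3:::dst_bucket/*"]
    }]
}

s3 = boto3.client('s3')  # must use destination-account credentials
s3.put_bucket_policy(Bucket='dst_bucket', Policy=json.dumps(bucket_policy))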
