I cannot give too many details due to confidentiality, but I will try to specify as best as I can.
I have an AWS role that is going to be used to call an API and has the correct permissions.
I am using Boto3 to attempt to assume the role.
In my Python code I have:
sts_client = boto3.client('sts')
response = sts_client.assume_role(
    RoleArn="arn:aws:iam::ACCNAME:role/ROLENAME",
    RoleSessionName="filler",
)
With this code, I get this error:
"An error occurred (InvalidClientTokenId) when calling the AssumeRole operation: The security token included in the request is invalid."
Any help would be appreciated. Thanks
When you construct the client in this way, e.g. sts_client = boto3.client('sts'), it uses the boto3 DEFAULT_SESSION, which pulls from your ~/.aws/credentials file (possibly among other locations; I did not investigate further).
When I ran into this, the values for aws_access_key_id, aws_secret_access_key, and aws_session_token were stale. Updating them in the default configuration file (or simply overriding them directly in the client call) resolved this issue:
sts_client = boto3.client('sts',
    aws_access_key_id='aws_access_key_id',
    aws_secret_access_key='aws_secret_access_key',
    aws_session_token='aws_session_token')
As an aside, I found that enabling stream logging was helpful and used the output to dive into the boto3 source code and find the issue: boto3.set_stream_logger('').
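Along the same lines, here is a quick diagnostic sketch (not part of the fix itself) for checking which credentials the default session has actually resolved, using boto3's Session.get_credentials():

import boto3

# Ask the default session which credentials it resolved and where they came from.
creds = boto3.Session().get_credentials()
print(creds.method)             # e.g. 'shared-credentials-file' or 'env'
print(creds.access_key[-4:])    # print only the tail, to avoid leaking secrets
print(creds.token is not None)  # whether a session token is present at all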
I've built the following script:
import boto
import sys
import gcs_oauth2_boto_plugin

def check_size_lzo(ds):
    # URI scheme for Cloud Storage.
    CLIENT_ID = 'myclientid'
    CLIENT_SECRET = 'mysecret'
    GOOGLE_STORAGE = 'gs'
    dir_file = 'date_id={ds}/apollo_export_{ds}.lzo'.format(ds=ds)
    gcs_oauth2_boto_plugin.SetFallbackClientIdAndSecret(CLIENT_ID, CLIENT_SECRET)
    uri = boto.storage_uri('my_bucket/data/apollo/prod/' + dir_file, GOOGLE_STORAGE)
    key = uri.get_key()
    if key.size < 45379959:
        raise ValueError('umg lzo file is too small, investigate')
    else:
        print('umg lzo file is %sMB' % round((key.size / 1e6), 2))

if __name__ == "__main__":
    check_size_lzo(sys.argv[1])
It works fine locally, but when I try to run it on a Kubernetes cluster I get the following error:
boto.exception.GSResponseError: GSResponseError: 403 Access denied to 'gs://my_bucket/data/apollo/prod/date_id=20180628/apollo_export_20180628.lzo'
I have updated the .boto file on my cluster and added my OAuth client ID and secret, but I'm still having the same issue.
Would really appreciate help resolving this issue.
Many thanks!
If it works in one environment and fails in another, I assume that you're getting your auth from a .boto file (or possibly from the OAUTH2_CLIENT_ID environment variable), but your kubernetes instance is lacking such a file. That you got a 403 instead of a 401 says that your remote server is correctly authenticating as somebody, but that somebody is not authorized to access the object, so presumably you're making the call as a different user.
Unless you've changed something, I'm guessing that you're getting the default Kubernetes Engine auth, which means a service account associated with your project. That service account probably hasn't been granted read permission for your object, which is why you're getting a 403. Grant it read/write permission for your GCS resources, and that should solve the problem.
Also note that by default the instance's credentials aren't scoped to include GCS, so you'll need to add that scope as well and then restart the instance.
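As a quick way to confirm which identity the node is actually resolving, here is a diagnostic sketch (it assumes the google-auth library is available in the image; it isn't part of the fix itself):

import google.auth
import google.auth.transport.requests

# Resolve the default credentials the way Google client libraries would on this
# node, then refresh so the service account email gets populated from metadata.
credentials, project = google.auth.default()
credentials.refresh(google.auth.transport.requests.Request())
print(project, getattr(credentials, 'service_account_email', 'n/a'))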
I'm using this code to assume an Amazon Web Services role via SAML authentication:
client = boto3.client('sts', region_name=region)
token = client.assume_role_with_saml(RoleArn=role, PrincipalArn=principal, SAMLAssertion=saml)
As documented here, the assume_role_with_saml call does not require the use of AWS security credentials; all the auth info is contained in the parameters to the call itself. Nonetheless, if I have auth-related AWS_ environment variables set, the call to boto3.client() immediately tries to use them to authenticate. Usually, I have AWS_PROFILE set, and the reason I'm running this code is because the named profile's security token has expired, so the call fails, and I have to unset AWS_PROFILE and try again.
I can of course manually go through os.environ looking for and deleting relevant variables before the call to boto3.client(), but I'm wondering if there's any cleaner way to say "Hey, Boto, just give me an STS client object without trying to authenticate anything, OK?"
From this response on GitHub, here's how to set up a client that won't attempt to sign outgoing requests with IAM credentials:
import boto3
from botocore import UNSIGNED
from botocore.config import Config
client = boto3.client('sts', region_name=region, config=Config(signature_version=UNSIGNED))
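For completeness, here is a sketch of how the unsigned client can then be used with the variables from the question (the response carries the standard STS Credentials block):

# assume_role_with_saml authenticates purely from its parameters, so the
# unsigned client never needs AWS_PROFILE or other environment credentials.
response = client.assume_role_with_saml(
    RoleArn=role,
    PrincipalArn=principal,
    SAMLAssertion=saml,  # base64-encoded SAML assertion from the IdP
)
temporary_creds = response['Credentials']  # AccessKeyId, SecretAccessKey, SessionToken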
By examining the boto3 and botocore code, I worked out a solution, but I'm not sure it's an improvement over unsetting the environment variables:
import boto3, botocore

# Override where botocore looks up 'profile' so AWS_PROFILE is ignored,
# then blank out any credentials it may have resolved.
bs = botocore.session.get_session({'profile': (None, ['', ''], None, None)})
bs.set_credentials('', '', '')
# Wrap the botocore session in a boto3 session and build the client from it.
s = boto3.session.Session(botocore_session=bs)
client = s.client('sts', region_name=region)
Accepting my own answer for now, but if anyone has a better idea, I'm all ears.
So I'm trying to produce temporary, globally readable URLs for my Google Cloud Storage objects using the google-cloud-storage Python library (https://googlecloudplatform.github.io/google-cloud-python/latest/storage/blobs.html) - more specifically the Blob.generate_signed_url() method. I'm doing this from within a Compute Engine instance, in a command-line Python script. And I keep getting the following error:
AttributeError: you need a private key to sign credentials. the credentials you are currently using <class 'oauth2client.service_account.ServiceAccountCredentials'> just contains a token. see https://google-cloud-python.readthedocs.io/en/latest/core/auth.html?highlight=authentication#setting-up-a-service-account for more details.
I am aware that there are issues with doing this from within GCE (https://github.com/GoogleCloudPlatform/google-auth-library-python/issues/50), but I have created new Service Account credentials following the instructions here: https://cloud.google.com/storage/docs/access-control/create-signed-urls-program and my key.json file most certainly includes a private key. Still, I am seeing that error.
This is my code:
keyfile = "/path/to/my/key.json"
credentials = ServiceAccountCredentials.from_json_keyfile_name(keyfile)
expiration = timedelta(3) # valid for 3 days
url = blob.generate_signed_url(expiration, method="GET",
                               credentials=credentials)
I've read through the issue tracker here https://github.com/GoogleCloudPlatform/google-cloud-python/issues?page=2&q=is%3Aissue+is%3Aopen and nothing related jumps out so I am assuming this should work. Cannot see what's going wrong here.
I was having the same issue. I ended up fixing it by initializing the storage client directly from the service account JSON:
storage_client = storage.Client.from_service_account_json('path_to_service_account_key.json')
I know I'm late to the party but hopefully this helps!
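For reference, here is a minimal sketch of how a client created this way can sign a URL (the bucket and object names below are placeholders):

from datetime import timedelta
from google.cloud import storage

storage_client = storage.Client.from_service_account_json('path_to_service_account_key.json')
blob = storage_client.bucket('my-bucket').blob('path/to/object')  # placeholder names

# The client's credentials carry the private key, so no explicit
# credentials argument is needed here.
url = blob.generate_signed_url(expiration=timedelta(days=3), method='GET')
print(url)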
Currently, it's not possible to use blob.generate_signed_url without explicitly referencing credentials (source: the google-cloud-python documentation). However, you can use a workaround, as seen here, which consists of:
signing_credentials = compute_engine.IDTokenCredentials(
    auth_request,
    "",
    service_account_email=credentials.service_account_email
)
signed_url = signed_blob_path.generate_signed_url(
    expires_at_ms,
    credentials=signing_credentials,
    version="v4"
)
A more complete snippet, for those asking where the other elements come from. cc @AlbertoVitoriano
from google.auth.transport import requests
from google.auth import default, compute_engine

credentials, _ = default()

# then within your abstraction
auth_request = requests.Request()
credentials.refresh(auth_request)

signing_credentials = compute_engine.IDTokenCredentials(
    auth_request,
    "",
    service_account_email=credentials.service_account_email
)
signed_url = signed_blob_path.generate_signed_url(
    expires_at_ms,
    credentials=signing_credentials,
    version="v4"
)
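For completeness, the remaining names in that snippet (signed_blob_path and expires_at_ms) come from the storage client; here is a rough sketch of how they could be wired up (the bucket/object names and the expiry are placeholders):

from datetime import timedelta
from google.cloud import storage

storage_client = storage.Client()
signed_blob_path = storage_client.bucket("my-bucket").blob("path/to/object")  # placeholders
expires_at_ms = timedelta(minutes=15)  # generate_signed_url also accepts a timedelta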
When trying to create a sink using the Google Cloud Python 3 API client, I get the error:
RetryError: GaxError(Exception occurred in retry method that was not classified as transient, caused by <_Rendezvous of RPC that terminated with (StatusCode.PERMISSION_DENIED, The caller does not have permission)>)
The code I used was this one:
import os
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = 'path_to_json_secrets.json'

from google.cloud.bigquery.client import Client as bqClient
bqclient = bqClient()
ds = bqclient.dataset('dataset_name')

print(ds.access_grants)
# []

ds.delete()
ds.create()

print(ds.access_grants)
# [<AccessGrant: role=WRITER, specialGroup=projectWriters>,
#  <AccessGrant: role=OWNER, specialGroup=projectOwners>,
#  <AccessGrant: role=OWNER, userByEmail=id_1@id_2.iam.gserviceaccount.com>,
#  <AccessGrant: role=READER, specialGroup=projectReaders>]

from google.cloud.logging.client import Client as lClient
lclient = lClient()

dest = 'bigquery.googleapis.com%s' % ds.path
sink = lclient.sink('sink_test', filter_='jsonPayload.project=project_name', destination=dest)
sink.create()
I don't quite understand why this is happening. When I use lclient.log_struct() I can see the logs arriving in the Logging console, so I do have access to Stackdriver Logging.
Is there any mistake in this setup?
Thanks in advance.
Creating a sink requires different permissions than writing a log entry. By default service accounts are given project Editor (not Owner), which does not have permission to create sinks.
See the list of permissions required in the access control docs.
Make sure the service account you're using has the logging.sinks.create permission. The simplest way to do this is to switch the service account from Editor to Owner, but it would be better to add a narrower Logging role (such as Logs Configuration Writer) so that you grant only the permission it needs.
I am trying to run this script:
from __future__ import print_function
import paramiko
import boto3
#print('Loading function')
paramiko.util.log_to_file("/tmp/Dawny.log")
# List of EC2 variables
region = 'us-east-1'
image = 'ami-<>'
keyname = '<>.pem'
ec2 = boto3.resource('ec2')
instances = ec2.create_instances(
    ImageId=image, MinCount=1, MaxCount=1,
    InstanceType='t2.micro', KeyName=keyname)
instance = instances[0]
instance.wait_until_running()
instance.load()
print(instance.public_dns_name)
I am running this script on a server which has all the AWS configuration in place (via aws configure).
And, when I run it, I get this error:
botocore.exceptions.ClientError: An error occurred (AuthFailure) when calling the RunInstances operation: Not authorized for images:
[ami-<>]
Any reason why? And, how do I solve it?
[The image is private. But, as I have configured boto on the server, technically, it shouldn't be a problem, right?]
There are a few possible causes for this error:
Insufficient parameters, e.g. the VPC ID, subnet ID, or security group is missing (although create_instances usually gives you a different error for that).
The API access key in your credentials doesn't have the right to initiate RunInstances. Go to IAM and check whether your user has been given adequate roles/policies to perform the task; to see which identity your keys actually resolve to, see the sketch below.
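A minimal sketch for that check, assuming the script runs with the same credentials configured on the server:

import boto3

# Prints the account ID and the IAM user/role ARN that the configured access keys belong to.
print(boto3.client('sts').get_caller_identity())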
You might run into this error if you use the key pair's file name instead of the actual key pair name shown in AWS Console > EC2 > Key Pairs:
aws ec2 run-instances --image-id ami-123457916 --instance-type t3.nano --key-name my_ec2_keypair.pem
--key-name should be the name of the key pair, not the filename of the key pair file.
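The same applies to the boto3 call in the question; here is a minimal sketch, using a hypothetical key pair name:

import boto3

ec2 = boto3.resource('ec2')
instances = ec2.create_instances(
    ImageId='ami-<>', MinCount=1, MaxCount=1,
    InstanceType='t2.micro',
    KeyName='my_ec2_keypair')  # the key pair's name, not 'my_ec2_keypair.pem'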