Not authorized for images: boto3 error - python

I am trying to run this script:
from __future__ import print_function
import paramiko
import boto3
#print('Loading function')
paramiko.util.log_to_file("/tmp/Dawny.log")
# List of EC2 variables
region = 'us-east-1'
image = 'ami-<>'
keyname = '<>.pem'
ec2 = boto3.resource('ec2')
instances = ec2.create_instances(
    ImageId=image, MinCount=1, MaxCount=1,
    InstanceType='t2.micro', KeyName=keyname)
instance = instances[0]
instance.wait_until_running()
instance.load()
print(instance.public_dns_name)
I am running this script on a server where the AWS configuration has already been done (via aws configure).
When I run it, I get this error:
botocore.exceptions.ClientError: An error occurred (AuthFailure) when calling the RunInstances operation: Not authorized for images:
[ami-<>]
Any reason why? And, how do I solve it?
[The image is private. But, as I have configured boto on the server, technically, it shouldn't be a problem, right?]

There are a few possible causes of this error:
Insufficient parameters, though create_instances usually gives you a different error for those (e.g. a missing VPC ID, subnet ID, or security group).
The access key in your credentials doesn't have the rights to initiate RunInstances. Go to IAM and check whether your user has been given adequate permissions to perform the task; a minimal policy sketch follows below.
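For the second case, an IAM policy statement granting the call might look like the sketch below. This is an illustration, not a complete policy; scoping Resource more tightly than "*" is advisable in practice:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": "*"
    }
  ]
}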

You might also run into this error if you use the key pair's file name instead of its actual name, which is listed under AWS Console > EC2 > Key Pairs:
aws ec2 run-instances --image-id ami-123457916 --instance-type t3.nano --key-name my_ec2_keypair.pem
Here --key-name should be the name of the key pair as registered in AWS, not the filename of the downloaded key pair.
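Applied to the boto3 script above, a minimal sketch (assuming a key pair registered in the console under the name my_ec2_keypair; the AMI ID is a placeholder):
import boto3

ec2 = boto3.resource('ec2')

# KeyName is the key pair's name as shown in the EC2 console,
# not the local filename of the downloaded .pem file.
instances = ec2.create_instances(
    ImageId='ami-12345678',  # placeholder AMI ID
    MinCount=1, MaxCount=1,
    InstanceType='t2.micro',
    KeyName='my_ec2_keypair')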

Related

AWS Sagemaker InvokeEndpoint: operation: Endpoint of account not found

I've been following this guide here: https://aws.amazon.com/blogs/machine-learning/building-an-nlu-powered-search-application-with-amazon-sagemaker-and-the-amazon-es-knn-feature/
I have successfully deployed the model from my notebook instance. I am also able to generate predictions by calling the predict() method from sagemaker.predictor.
This is how I created and deployed the model
from sagemaker.predictor import Predictor
from sagemaker.pytorch import PyTorchModel

class StringPredictor(Predictor):
    def __init__(self, endpoint_name, sagemaker_session):
        super(StringPredictor, self).__init__(endpoint_name, sagemaker_session,
                                              content_type='text/plain')

pytorch_model = PyTorchModel(model_data=inputs,
                             role=role,
                             entry_point='inference.py',
                             source_dir='./code',
                             framework_version='1.3.1',
                             py_version='py3',
                             predictor_cls=StringPredictor)

predictor = pytorch_model.deploy(instance_type='ml.m5.large', initial_instance_count=4)
From the SageMaker dashboard, I can even see my endpoint, and its status is "InService".
If I run aws sagemaker list-endpoints I can see my desired endpoint showing up correctly as well.
My issue is when I run this code (outside of sagemaker), I'm getting an error:
import boto3

sm_runtime_client = boto3.client('sagemaker-runtime')
payload = "somestring that is used here"
response = sm_runtime_client.invoke_endpoint(EndpointName='pytorch-inference-xxxx',
                                             ContentType='text/plain',
                                             Body=payload)
This is the error thrown
botocore.errorfactory.ValidationError: An error occurred (ValidationError) when calling the InvokeEndpoint operation: Endpoint pytorch-inference-xxxx of account xxxxxx not found.
This is quite strange, as I'm able to see and run the endpoint just fine from the SageMaker notebook, and I am able to run the predict() method too.
I have verified the region, endpoint name and the account number.
I was having the exact same error; I fixed mine by setting the correct region.
"I have verified the region, endpoint name and the account number."
I know you have indicated that you verified the region, but in my case the remote computer had another region configured. So I just ran the following command on my remote computer:
aws configure
And once I set the key ID and secret key again, I set the correct region and the error was gone.
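Equivalently, you can pin the region in code when creating the client instead of relying on the configured default. A minimal sketch ('us-east-1' is a placeholder; use the region your endpoint lives in):
import boto3

# Pin the region so the client targets the account/region where the
# endpoint actually exists ('us-east-1' is a placeholder).
sm_runtime_client = boto3.client('sagemaker-runtime', region_name='us-east-1')
response = sm_runtime_client.invoke_endpoint(EndpointName='pytorch-inference-xxxx',
                                             ContentType='text/plain',
                                             Body="somestring that is used here")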

AWS Glue - Python Shell Jobs Secret Manager Connectivity Issues

I am using Python Shell jobs under AWS Glue, which have boto3 and a few other libraries built in. I am facing issues trying to access Secrets Manager to get credentials for my RDS instance running MySQL; the job keeps running forever without any error or success message, and it does not time out.
Below is the simple code, which runs fine from my local machine or from a Python 3.7 Lambda, but not in a Glue Python Shell job:
import boto3
import base64
from botocore.exceptions import ClientError
secret_name = "secret_name"
region_name = "eu-west-1"
session = boto3.session.Session()
client = session.client(
    service_name='secretsmanager',
    region_name=region_name
)
get_secret_value_response = client.get_secret_value(SecretId=secret_name)
print(get_secret_value_response)
It would be very helpful if someone could point out whether anything needs to be done additionally in Python Shell jobs under AWS Glue in order to access the Secrets Manager credentials.
Make sure the IAM role used by the Glue job has the SecretsManagerReadWrite policy, along with AWSGlueServiceRole and AmazonS3FullAccess; an example of attaching it with the CLI follows below.
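For example, attaching the managed policy with the AWS CLI might look like this (MyGlueJobRole is a placeholder for your job's role name):
aws iam attach-role-policy --role-name MyGlueJobRole --policy-arn arn:aws:iam::aws:policy/SecretsManagerReadWrite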
According to the documentation:
When you create a job without any VPC configuration, Glue tries to reach Secrets Manager over the internet; if the policies allow an internet route, the job can connect to Secrets Manager.
But when a Glue job is created with a VPC configuration/connection, all requests are made from the VPC/subnet that the connection points to. If this is your case, make sure you have a Secrets Manager endpoint present in the route table of the subnet where Glue launches its resources.
https://docs.aws.amazon.com/glue/latest/dg/setup-vpc-for-glue-access.html
https://docs.aws.amazon.com/secretsmanager/latest/userguide/vpc-endpoint-overview.html
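A rough boto3 sketch of creating such an endpoint (every ID below is a placeholder, and the service name must match your region):
import boto3

ec2 = boto3.client('ec2', region_name='eu-west-1')

# Interface endpoint for Secrets Manager inside the VPC that the Glue
# connection uses; all IDs below are placeholders.
ec2.create_vpc_endpoint(
    VpcEndpointType='Interface',
    VpcId='vpc-0123456789abcdef0',
    ServiceName='com.amazonaws.eu-west-1.secretsmanager',
    SubnetIds=['subnet-0123456789abcdef0'],
    SecurityGroupIds=['sg-0123456789abcdef0'],
    PrivateDnsEnabled=True
)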

boto3 ec2 & django [duplicate]

On boto I used to specify my credentials when connecting to S3 in such a way:
import boto
from boto.s3.connection import Key, S3Connection
S3 = S3Connection( settings.AWS_SERVER_PUBLIC_KEY, settings.AWS_SERVER_SECRET_KEY )
I could then use S3 to perform my operations (in my case deleting an object from a bucket).
With boto3 all the examples I found are such:
import boto3
S3 = boto3.resource( 's3' )
S3.Object( bucket_name, key_name ).delete()
I couldn't specify my credentials, and thus all attempts failed with an InvalidAccessKeyId error.
How can I specify credentials with boto3?
You can create a session:
import boto3
session = boto3.Session(
    aws_access_key_id=settings.AWS_SERVER_PUBLIC_KEY,
    aws_secret_access_key=settings.AWS_SERVER_SECRET_KEY,
)
Then use that session to get an S3 resource:
s3 = session.resource('s3')
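Then, mirroring the boto2 snippet from the question, the delete looks like this (bucket and key names are placeholders):
# Continues from the session-based resource above.
s3.Object('my-bucket', 'my-key').delete()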
You can get a client with a new session directly, like below.
s3_client = boto3.client(
    's3',
    aws_access_key_id=settings.AWS_SERVER_PUBLIC_KEY,
    aws_secret_access_key=settings.AWS_SERVER_SECRET_KEY,
    region_name=REGION_NAME
)
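With the client API, the equivalent of the resource-based delete from the question would be a sketch like this (bucket and key are placeholders):
# delete_object is the client-side counterpart of Object(...).delete()
s3_client.delete_object(Bucket='my-bucket', Key='my-key')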
This is older, but I'm placing it here for my own reference too. boto3.resource just implements the default Session; you can pass session details through to boto3.resource.
Help on function resource in module boto3:
resource(*args, **kwargs)
    Create a resource service client by name using the default session.
    See :py:meth:`boto3.session.Session.resource`.
https://github.com/boto/boto3/blob/86392b5ca26da57ce6a776365a52d3cab8487d60/boto3/session.py#L265
You can see that it just takes the same arguments as boto3.Session.
import boto3

S3 = boto3.resource(
    's3',
    region_name='us-west-2',
    aws_access_key_id=settings.AWS_SERVER_PUBLIC_KEY,
    aws_secret_access_key=settings.AWS_SERVER_SECRET_KEY,
)
S3.Object(bucket_name, key_name).delete()
I'd like to expand on @JustAGuy's answer. The method I prefer is to use the AWS CLI to create a config file. The reason is that, with the config file, the CLI or the SDK will automatically look for credentials in the ~/.aws folder. And the good thing is that the AWS CLI is written in Python.
You can get the CLI from PyPI if you don't have it already. Here are the steps to get the CLI set up from the terminal:
$> pip install awscli  # you can add the --user flag
$> aws configure
AWS Access Key ID [****************ABCD]:[enter your key here]
AWS Secret Access Key [****************xyz]:[enter your secret key here]
Default region name [us-west-2]:[enter your region here]
Default output format [None]:
After this you can use boto3 and any of the APIs without having to specify keys (unless you want to use different credentials).
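For instance, once aws configure has been run, a plain resource picks the credentials up automatically; a minimal sketch ('my-bucket' is a placeholder):
import boto3

# No keys passed: boto3 finds them in ~/.aws automatically.
s3 = boto3.resource('s3')
for obj in s3.Bucket('my-bucket').objects.limit(5):
    print(obj.key)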
If you rely on your ~/.aws/credentials file to store the ID and key for a user, it will be picked up automatically.
For instance
session = boto3.Session(profile_name='dev')
s3 = session.resource('s3')
This will pick up the dev profile (user) if your credentials file contains the following:
[dev]
aws_access_key_id = AAABBBCCCDDDEEEFFFGG
aws_secret_access_key = FooFooFoo
region = ap-southeast-2
There are numerous ways to store credentials while still using boto3.resource().
I'm using the AWS CLI method myself. It works perfectly.
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html
You can set the default AWS environment variables for the secret and access keys; that way you don't need to change the default client-creation code, though it is better to pass them as parameters if you have non-default credentials.
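For reference, these are the standard variable names boto3 reads (the values are AWS's documented example placeholders):
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
export AWS_DEFAULT_REGION=us-west-2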

Calling assume_role results in an "InvalidClientTokenId" error

I cannot give too many details due to confidentiality, but I will try to specify as best as I can.
I have an AWS role that is going to be used to call an API and has the correct permissions.
I am using Boto3 to attempt to assume the role.
In my python code I have
import boto3

sts_client = boto3.client('sts')
response = sts_client.assume_role(
    RoleArn="arn:aws:iam::ACCNAME:role/ROLENAME",
    RoleSessionName="filler",
)
With this code, I get this error:
"An error occurred (InvalidClientTokenId) when calling the AssumeRole operation: The security token included in the request is invalid."
Any help would be appreciated. Thanks
When you construct the client in this way, e.g. sts_client = boto3.client('sts'), it uses the boto3 DEFAULT_SESSION, which pulls from your ~/.aws/credentials file (possibly among other locations; I did not investigate further).
When I ran into this, the values for aws_access_key_id, aws_secret_access_key, and aws_session_token were stale. Updating them in the default configuration file (or simply overriding them directly in the client call) resolved this issue:
sts_client = boto3.client(
    'sts',
    aws_access_key_id='aws_access_key_id',
    aws_secret_access_key='aws_secret_access_key',
    aws_session_token='aws_session_token')
As an aside, I found that enabling stream logging was helpful, and used the output to dive into the boto3 source code and find the issue: boto3.set_stream_logger('').
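A minimal sketch of that diagnostic, reusing the placeholder role ARN from the question:
import boto3

# Route all boto3/botocore log output to stderr; '' targets the root
# logger so credential lookup and request signing are both visible.
boto3.set_stream_logger('')

sts_client = boto3.client('sts')
response = sts_client.assume_role(
    RoleArn="arn:aws:iam::ACCNAME:role/ROLENAME",
    RoleSessionName="filler",
)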

Cannot read a key from S3 with boto, but can with aws cli

Using this aws cli command (with access keys configured), I'm able to copy a key from S3 locally:
aws s3 cp s3://<bucketname>/test.txt test.txt
Using the following code in boto, I get S3ResponseError: 403 Forbidden, whether I allow boto to use configured credentials, or explicitly pass it keys.
import boto
c = boto.connect_s3()
b = c.get_bucket('<bucketname>')
k = b.get_key('test.txt')
d = k.get_contents_as_string() # exception thrown here
I've seen the other SO posts about not validating the key with validate=False etc, but none of these are my issue. I get similar results when copying the key to another location in the same bucket. Succeeds with the cli, but not with boto.
I've looked at the boto source to see if it's doing anything that requires extra permissions, but nothing stands out to me.
Does anyone have any suggestions? How does boto resolve its credentials?
Explicitly set your credentials so they are the same ones the CLI uses, taken from the env variables:
echo $ACCESS_KEY
echo $SECRET_KEY
import boto3

client = boto3.client(
    's3',
    aws_access_key_id=ACCESS_KEY,
    aws_secret_access_key=SECRET_KEY
)
# boto3 clients have no get_bucket()/get_key(); fetch the object directly.
obj = client.get_object(Bucket='<bucketname>', Key='test.txt')
d = obj['Body'].read()
How boto3 resolves its credentials:
The mechanism is that boto3 searches through a list of possible locations and stops as soon as it finds credentials. The order in which boto3 searches for credentials is:
1. Passing credentials as parameters in the boto3.client() method
2. Passing credentials as parameters when creating a Session object
3. Environment variables
4. Shared credentials file (~/.aws/credentials)
5. AWS config file (~/.aws/config)
6. Assume-role provider
7. Boto2 config file (/etc/boto.cfg and ~/.boto)
8. Instance metadata service on an Amazon EC2 instance that has an IAM role configured
http://boto3.readthedocs.io/en/latest/guide/configuration.html#guide-configuration
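If you want to see which of those sources actually won, a quick sketch (get_credentials() returns None when nothing was found; the method attribute names the provider, e.g. 'shared-credentials-file'):
import boto3

# Resolve credentials the same way a client would, then inspect them.
creds = boto3.Session().get_credentials()
if creds is not None:
    print(creds.method)      # which provider supplied the credentials
    print(creds.access_key)  # the resolved access key id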
