I am getting what seems to be quite a common error for people starting out with AWS, Python, and boto.
NoAuthHandlerFound: No handler was ready to authenticate. 1 handlers were checked. ['HmacAuthV4Handler'] Check your credentials
I have tried this and this but still get the error.
I know the credentials work and are correct because I have used them to test previous things such as an RDS connection.
The script for RDS is the following:
import boto.rds as rds
import boto3 as b3
import boto
from sqlalchemy import create_engine
conn = boto.rds.connect_to_region("us-west-2", aws_access_key_id='<ID>', aws_secret_access_key='<KEY>')
engine = create_engine('postgresql://my_id:my_pass@datawarehouse.stuff.us-west-2.rds.amazonaws.com/db_name', echo=False)
res = engine.execute("select * from table")
print res, engine
Which runs without error.
Is there anything I am missing in terms of VPC? Access rights?
It's making me nuts!
I have BOTO_CONFIG set to C:/Users/%USER%/boto.config at the user level (not the system level), and C:/Users/%USER%/boto.config reads as:
[default]
aws_access_key_id = <MY_ID>
aws_secret_access_key = <MY_SECRET>
print boto.__version__
yields:
2.40.0
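For what it's worth, boto 2's own config file format documents the keys under a [Credentials] section rather than [default]; the same file with only the section name changed would be:
[Credentials]
aws_access_key_id = <MY_ID>
aws_secret_access_key = <MY_SECRET>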
Thanks for any help.
Related
I am trying to read from or write to an AWS RDS Proxy with a Postgres RDS instance as the endpoint.
The operation works with psql but fails on the same client with pg8000 or psycopg2 as client libraries in Python.
The operation works with pg8000 and psycopg2 if I use the RDS instance directly as the endpoint (without the RDS Proxy).
sqlalchemy/psycopg2 error message:
Feature not supported: RDS Proxy currently doesn’t support command-line options.
A minimal version of the code I use:
from sqlalchemy import create_engine
import os
from dotenv import load_dotenv
load_dotenv()
login_string = os.environ['login_string_proxy']
engine = create_engine(login_string, client_encoding="utf8", echo=True, connect_args={'options': '-csearch_path={}'.format("testing")})
engine.execute(f"INSERT INTO testing.mytable (product) VALUES ('123')")
pg8000: the place where it stops and waits for something is in core.py:
def sock_read(b):
    try:
        return self._sock.read(b)
    except OSError as e:
        raise InterfaceError("network error on read") from e
A minimal version of the code I use:
import pg8000
import os
from dotenv import load_dotenv
load_dotenv()
db_connection = pg8000.connect(database=os.environ['database'], host=os.environ['host'], port=os.environ['port'], user=os.environ['user'], password=os.environ['password'])
db_connection.run(f"INSERT INTO mytable (data) VALUES ('data')")
db_connection.commit()
db_connection.close()
The logs in the RDS Proxy always look normal for all the examples I mentioned, e.g.:
A new client connected from ...:60614.
Received Startup Message: [username="", database="", protocolMajorVersion=3, protocolMinorVersion=0, sslEnabled=false]
Proxy authentication with PostgreSQL native password authentication succeeded for user "" with TLS off.
A TCP connection was established from the proxy at ...:42795 to the database at ...:5432.
The new database connection successfully authenticated with TLS off.
I opened up all ports via security groups on the RDS and the RDS proxy and I used an EC2 inside the VPC.
I tried with autocommit on and off.
The 'command-line option' being referred to is the -csearch_path={}.
Remove that, and then, once the connection is established, execute set search_path = whatever as your first query.
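A minimal sketch of that, reusing the placeholders from the question (the login_string_proxy environment variable, the testing schema, and mytable) and assuming SQLAlchemy 1.x-style string execution:
import os

from dotenv import load_dotenv
from sqlalchemy import create_engine

load_dotenv()
login_string = os.environ['login_string_proxy']

# No -csearch_path command-line option in connect_args this time.
engine = create_engine(login_string, client_encoding="utf8", echo=True)

with engine.connect() as conn:
    # Set the search path as the first query on the connection instead.
    conn.execute("SET search_path TO testing")
    conn.execute("INSERT INTO mytable (product) VALUES ('123')")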
This is a known issue: pg8000 can't connect to an AWS RDS Proxy (Postgres). I opened a PR, https://github.com/tlocke/pg8000/pull/72; let's see if Tony Locke (the author of pg8000) approves the change. (If not, you have to change the lines of core.py yourself: https://github.com/tlocke/pg8000/pull/72/files)
# Before:
self._write(FLUSH_MSG)
# After: only send the flush when the authentication code is not PASSWORD.
if code != PASSWORD:
    self._write(FLUSH_MSG)
I am trying to configure an engine in sqlalchemy to connect with temporary credentials from an AWS IAM role using the get_cluster_credentials API.
When I do so, the user I get is 'IAM:user_rw'. The problem comes when I configure the engine string as:
engine_string = "postgresql+pygresql://{user}:{password}#{endpoint}:{port}/{dbname}".format(
user=cluster_creds['DbUser'],
password=cluster_creds['DbPassword'],
endpoint='big endpointstring',
port=8192,
dbname='small dbname')
I create the engine without errors but when running any query I get: FATAL: password authentication failed for user "IAM"
I tested the user and password in DataGrip and they work, so it seems evident that sqlalchemy is passing the user as just "IAM" instead of 'IAM:user_rw'.
Do you know how can I force sqlalchemy to get the correct user?
I managed to solve the issue using quote_plus from urllib.parse, in a similar fashion to what Gord is pointing out. Final code:
from urllib.parse import quote_plus
engine_string = "postgresql+pygresql://%s:%s#%s:%d/%s" % (
quote_plus(user),
quote_plus(passw),
endpoint,
port,
dbname,
)
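The quoting matters because the ':' in 'IAM:user_rw' is otherwise treated as the username/password separator in the URL, which is why the server only saw the user "IAM". As an alternative sketch, assuming SQLAlchemy 1.4+ where sqlalchemy.engine.URL.create is available, you can let SQLAlchemy handle the escaping (the endpoint, port, and database name below are the same placeholders as above):
from sqlalchemy import create_engine
from sqlalchemy.engine import URL

# URL.create escapes special characters such as ':' in the username.
url = URL.create(
    drivername="postgresql+pygresql",
    username=cluster_creds['DbUser'],      # e.g. 'IAM:user_rw'
    password=cluster_creds['DbPassword'],
    host='big endpointstring',
    port=8192,
    database='small dbname',
)
engine = create_engine(url)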
I am using Python Shell jobs under AWS Glue, which have boto3 and a few other libraries built in. I am facing issues trying to access Secrets Manager to get credentials to my RDS instance running MySQL; the job keeps running forever without any (error/success) message, nor does it time out.
Below is the simple code that runs fine from my local machine or a Lambda on Python 3.7, but not in a Python Shell Glue job:
import boto3
import base64
from botocore.exceptions import ClientError
secret_name = "secret_name"
region_name = "eu-west-1"
session = boto3.session.Session()
client = session.client(
    service_name='secretsmanager',
    region_name=region_name
)
get_secret_value_response = client.get_secret_value(SecretId=secret_name)
print(get_secret_value_response)
It would be very helpful if someone could point out whether anything additional needs to be done in Python Shell jobs under AWS Glue in order to access the Secrets Manager credentials.
Make sure the IAM role used by the Glue job has the SecretsManagerReadWrite policy.
Also attach AWSGlueServiceRole and AmazonS3FullAccess.
According to the documentation:
When you create a job without any VPC configuration, Glue tries to reach Secrets Manager over the internet; if the policies allow an internet route, then it can connect to Secrets Manager.
But when a Glue job is created with a VPC configuration/connection, all requests are made from your VPC/subnet that the connection points to. If that is the case, make sure you have a Secrets Manager endpoint present in the route table of the subnet where Glue launches its resources.
https://docs.aws.amazon.com/glue/latest/dg/setup-vpc-for-glue-access.html
https://docs.aws.amazon.com/secretsmanager/latest/userguide/vpc-endpoint-overview.html
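As a side note on the job running forever: a minimal sketch of the same call with explicit botocore timeouts and limited retries, so that a missing network path to Secrets Manager fails fast instead of hanging (the secret name and region are the placeholders from the question, and the secret is assumed to be a JSON key/value secret):
import json

import boto3
from botocore.config import Config

# Fail quickly if Secrets Manager is unreachable from the Glue job's subnet,
# instead of retrying and hanging with no output.
cfg = Config(connect_timeout=5, read_timeout=5, retries={'max_attempts': 2})

session = boto3.session.Session()
client = session.client('secretsmanager', region_name='eu-west-1', config=cfg)

response = client.get_secret_value(SecretId='secret_name')
secret = json.loads(response['SecretString'])  # assumes a JSON key/value secret
print(secret)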
I am using Python to send emails via AWS Simple Email Service (SES).
In an attempt to have the best security possible, I would like to make a boto SES connection without exposing my access keys inside the code.
Right now I am establishing a connection like this:
ses = boto.ses.connect_to_region(
    'us-west-2',
    aws_access_key_id='<ACCESS_KEY>',
    aws_secret_access_key='<SECRET_ACCESS_KEY>'
)
Is there a way to do this without exposing my access keys inside the script?
The simplest solution is to use environment variables, which you can retrieve in your Python code with os.environ.
export AWS_ACCESS_KEY_ID=<YOUR REAL ACCESS KEY>
export AWS_SECRET_ACCESS_KEY=<YOUR REAL SECRET KEY>
And in the Python code:
from os import environ as os_env
ses = boto.ses.connect_to_region(
    'us-west-2',
    aws_access_key_id=os_env['AWS_ACCESS_KEY_ID'],
    aws_secret_access_key=os_env['AWS_SECRET_ACCESS_KEY']
)
Attach an IAM role that has SES privileges to your EC2 instance; then you do not have to pass the credentials explicitly. Your script will get the credentials automatically from the metadata server.
See: Easily Replace or Attach an IAM Role to an Existing EC2 Instance by Using the EC2 Console. Then your code will look like:
ses = boto.ses.connect_to_region('us-west-2')
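What "SES privileges" means here is up to your policy; a minimal sketch of a role policy that only allows sending (the action list is an assumption about what the script needs):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["ses:SendEmail", "ses:SendRawEmail"],
            "Resource": "*"
        }
    ]
}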
The preferred method of authentication is to use boto3's ability to read your AWS credentials file.
Configure your AWS CLI using the aws configure command.
Then, in your script you can use the Session call to get the credentials:
session = boto3.Session(profile_name='default')
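For completeness, a small sketch of going from that session to an SES call with boto3 (the region, addresses, and message content are placeholders):
import boto3

session = boto3.Session(profile_name='default')
# The session picks up keys from the credentials file written by `aws configure`.
ses = session.client('ses', region_name='us-west-2')

ses.send_email(
    Source='sender@example.com',
    Destination={'ToAddresses': ['recipient@example.com']},
    Message={
        'Subject': {'Data': 'Test'},
        'Body': {'Text': {'Data': 'Hello from SES'}},
    },
)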
There are two options. The first is to set an environment variable named ACCESS_KEY and another named SECRET_ACCESS_KEY; then in your code you would have:
import os
ses = boto.ses.connect_to_region(
    'us-west-2',
    aws_access_key_id=os.environ['ACCESS_KEY'],
    aws_secret_access_key=os.environ['SECRET_ACCESS_KEY']
)
The second is to use a JSON file:
import json
path_to_json = 'your/path/here.json'
with open(path_to_json, 'r') as f:
    keys = json.load(f)
ses = boto.ses.connect_to_region(
    'us-west-2',
    aws_access_key_id=keys['ACCESS_KEY'],
    aws_secret_access_key=keys['SECRET_ACCESS_KEY']
)
The JSON file would contain:
{"ACCESS_KEY": "<ACCESS_KEY>", "SECRET_ACCESS_KEY": "<SECRET_ACCESS_KEY>"}
I designed a simple website using Flask, and my goal was to deploy it on Google App Engine. I started working on it locally and used Google Cloud SQL for the database. I used cloud_sql_proxy to open port 3306 to interact with my GC SQL instance, and it works fine locally. This is the way I'm connecting my application to GC SQL:
I have an app.yaml file in which I've defined my global variables:
env_variables:
  CLOUDSQL_SERVER: '127.0.0.1'
  CLOUDSQL_CONNECTION_NAME: "myProjectName:us-central1:project"
  CLOUDSQL_USER: "user"
  CLOUDSQL_PASSWORD: "myPassword"
  CLOUDSQL_PORT: 3306
  CLOUDSQL_DATABASE: "database"
and from my local machine I do:
db = MySQLdb.connect(CLOUDSQL_SERVER,CLOUDSQL_USER,CLOUDSQL_PASSWORD,CLOUDSQL_DATABASE,CLOUDSQL_PORT)
and if I want to connect on App Engine, I do:
cloudsql_unix_socket = os.path.join('/cloudsql', CLOUDSQL_CONNECTION_NAME)
db = MySQLdb.connect(unix_socket=cloudsql_unix_socket,user=CLOUDSQL_USER,passwd=CLOUDSQL_PASSWORD,db=CLOUDSQL_DATABASE)
The static part of the website is running, but when, for example, I want to log in with a username and password which are stored in GC SQL, I receive an internal error.
I tried another way: I started a Compute Engine instance, defined my global variables in config.py, and installed Flask, MySQLdb, and everything needed to start my application. I also used cloud_sql_proxy on that Compute Engine instance and tried this syntax to connect to the GC SQL instance:
db = MySQLdb.connect(CLOUDSQL_SERVER,CLOUDSQL_USER,CLOUDSQL_PASSWORD,CLOUDSQL_DATABASE,CLOUDSQL_PORT)
but it had the same problem. I don't think it's a permissions issue, as I added my Compute Engine instance's IP address in the authorized networks section of GC SQL, and in the IAM & Admin section the myprojectname@appspot.gserviceaccount.com account has the Editor role!
Can anyone help me figure out where the problem is?
Alright! I solved the problem. I followed the Google Cloud documentation but still had problems. I added a simple '/' in:
cloudsql_unix_socket = os.path.join('/cloudsql', CLOUDSQL_CONNECTION_NAME)
Instead of '/cloudsql' it should be '/cloudsql/'.
I know it's weird, because os.path.join should add the '/' to the path, but for strange reasons which I don't know, it wasn't doing so.
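In other words, the working version is effectively the following (a minimal sketch using the same variable names as the question):
# Note the trailing slash on '/cloudsql/'.
cloudsql_unix_socket = os.path.join('/cloudsql/', CLOUDSQL_CONNECTION_NAME)
db = MySQLdb.connect(
    unix_socket=cloudsql_unix_socket,
    user=CLOUDSQL_USER,
    passwd=CLOUDSQL_PASSWORD,
    db=CLOUDSQL_DATABASE
)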