I am experiencing an issue when trying to retrieve documents based on a view.
result = bucket.query("view", "view", limit=1, streaming=True)
for row in result:
    bucket.replace("aass_brewery-juleol", row.docid)
The exception:
couchbase.exceptions._AuthError_0x2 (generated, catch AuthError): <Key=u'aass_brewery-juleol', RC=0x2[Authentication failed. You may have provided an invalid username/password combination], Operational Error, Results=1, C Source=(src/multiresult.c,309)>
The bucket doesn't have any authentication configured beyond the standard port (TCP port 11211, which needs SASL auth).
I am running out of ideas.
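In case the bucket does have a SASL password set, a minimal sketch of opening it with explicit credentials (assuming the Couchbase Python SDK 2.x Bucket API; the host, bucket name, and password below are hypothetical):

from couchbase.bucket import Bucket

# Hypothetical connection string and SASL password; SDK 2.x accepts the
# bucket's password directly in the Bucket constructor.
bucket = Bucket('couchbase://localhost/beer-sample', password='bucket-sasl-password')

result = bucket.query("view", "view", limit=1, streaming=True)
for row in result:
    print(row.docid)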
I have created an Aurora Serverless RDS cluster (version 1) with the Data API enabled. I now wish to execute SQL statements against it using the Data API (https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/data-api.html).
I have made a small test script using the provided guidelines (https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/data-api.html#data-api.calling):
import boto3

session = boto3.Session(region_name="eu-central-1")
rds = session.client("rds-data")
secret = session.client("secretsmanager")

cluster_arn = "arn:aws:rds:eu-central-1:<accountID>:cluster:aurorapostgres"
secret_arn = "arn:aws:secretsmanager:eu-central-1:<accountID>:secret:dbsecret-xNMeQc"

secretvalue = secret.get_secret_value(
    SecretId=secret_arn
)
print(secretvalue)

SQL = "SELECT * FROM pipelinedb.dataset"
res = rds.execute_statement(
    resourceArn=cluster_arn,
    secretArn=secret_arn,
    database="pipelinedb",
    sql=SQL
)
print(res)
However, I get the error message:
BadRequestException: An error occurred (BadRequestException) when calling the ExecuteStatement operation: FATAL: password authentication failed for user "bjarki"; SQLState: 28P01
I have verified the following:
The secret value is correct (a sketch for re-checking it follows after this question)
The secret JSON structure correctly follows the recommended structure (https://docs.aws.amazon.com/secretsmanager/latest/userguide/reference_secret_json_structure.html)
The IAM user running the Python script has admin access to the account, and is thus privileged enough
The cluster is running in public subnets (internet gateways attached to the route tables), and the ACLs and security groups are fully open
The user "bjarki" is the master user and thus should have the required DB privileges to run the query
I am out of ideas on why this error is appearing - any good ideas?
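For anyone debugging the same 28P01 failure, a minimal sanity-check sketch (assuming the secret follows the recommended JSON structure linked above) that parses the SecretString so its username and engine can be compared against the cluster's actual master user:

import json
import boto3

session = boto3.Session(region_name="eu-central-1")
secretsmanager = session.client("secretsmanager")

secret_arn = "arn:aws:secretsmanager:eu-central-1:<accountID>:secret:dbsecret-xNMeQc"
creds = json.loads(
    secretsmanager.get_secret_value(SecretId=secret_arn)["SecretString"]
)

# The Data API authenticates with exactly these values, so the username must
# match the cluster's master user and the password must be its current one.
print(creds.get("username"), creds.get("host"), creds.get("engine"))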
Try this AWS tutorial that is located in the AWS Examples Code Library. It shows how to use the AWS SDK for Python (Boto3) to create a web application that tracks work items in an Amazon Aurora database and emails reports by using Amazon Simple Email Service (Amazon SES). This example uses a front end built with React.js to interact with a Flask-RESTful Python backend.
Integrate a React.js web application with AWS services.
List, add, and update items in an Aurora table.
Send an email report of filtered work items by using Amazon SES.
Deploy and manage example resources with the included AWS CloudFormation script.
https://docs.aws.amazon.com/code-library/latest/ug/cross_RDSDataTracker_python_3_topic.html
Try running the CDK to properly set up the database too.
Once you have successfully implemented this example, you will get this front end with a Python backend.
I'm facing a problem authenticating to RDS from a Python 3.9 Lambda using IAM roles, with either the mysql-connector-python 8.0.27 or PyMySQL 1.0.2 modules. In both cases the problem is that the token does not seem to be passed along in the connection.
In the case of mysql.connector I'm using this construct:
os.environ['LIBMYSQL_ENABLE_CLEARTEXT_PLUGIN'] = '1'
token = client.generate_db_auth_token(DBHostname=hostname, Port=port, DBUsername=username)
mysql.connector.connect(host=hostname, user=username, passwd=token, database=dbname, auth_plugin='mysql_clear_password')
but I am getting a ubyte format requires 0 <= number <= 255 error, which leads me to believe that the token string is too long and the library can't handle it.
So I tried switching to PyMySQL, but now it fails with "Access denied for user 'mydbuser'@'172.16.1.246' (using password: NO)":
os.environ['LIBMYSQL_ENABLE_CLEARTEXT_PLUGIN'] = '1'
....
token = client.generate_db_auth_token(DBHostname=hostname, Port=port, DBUsername=username)
my_db = pymysql.connect(
    host=hostname,
    user=username,
    password=token,
    database=dbname,
    connect_timeout=2)
The token is generated and I can see it in the logs. The IAM roles/policies are correct, as I'm able to connect from Lambda using a standard password, or from EC2 using the mysql CLI client with a token (the MySQL community client, not MariaDB, because the latter doesn't support the --enable-cleartext-plugin option).
I even hardcoded the token, and still it fails with "no password"
UPDATE #1:
After more experiments I took a token generated from the AWS CLI (aws rds generate-db-auth-token --hostname my-mysql-db.us-east-2.rds.amazonaws.com --port 3306 --region us-east-2 --username mydbuser), put it into mysql-connector-python's code, and it magically worked!
I've compared the two tokens, and the difference is that aws rds generate-db-auth-token returns a header called X-Amz-Signature that is 64 bytes long, so mysql-connector processes it fine.
However, boto3's client.generate_db_auth_token returns a header called X-Amz-Security-Token that is 820 bytes long, and it utterly breaks poor mysql-connector.
A bug? A feature? Is it possible to get the 64-byte header?
PyMySQL didn't work with either of the tokens, so it's probably a bug in that library itself.
UPDATE #2:
I've decided to test the tokens themselves with the mysql client and got quite confusing results:
When using a token made from a long-term AWS access key (the kind that starts with 'AKIA') and generated via aws rds generate-db-auth-token, I was able to connect to the DB.
When using a token from Lambda's client.generate_db_auth_token, the token didn't work in the mysql CLI either; I get ERROR 1045 (28000): Access denied for user 'lambdauser'@'172.16.1.20' (using password: YES). The main difference is that Lambda uses a short-term access key (which starts with 'ASIA').
Could it be that Lambda's access key is somehow out of sync? Or does Lambda need extra roles/policies to make its tokens trusted by other AWS services?
MySQL command format:
mysql --host=$RDSHOST --port=3306 --enable-cleartext-plugin --user=lambdauser --password=$TOKEN
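For what it's worth, one thing that often matters here (a sketch, not a confirmed fix): RDS only accepts IAM authentication tokens over SSL, and PyMySQL will not send a cleartext password on an unencrypted connection, so passing the RDS CA bundle may let the token through. The host, database name, and bundle path below are assumptions based on the question:

import boto3
import pymysql

# Hypothetical connection details matching the question's setup.
hostname = "my-mysql-db.us-east-2.rds.amazonaws.com"
port = 3306
username = "lambdauser"
dbname = "mydb"

client = boto3.client("rds", region_name="us-east-2")
token = client.generate_db_auth_token(DBHostname=hostname, Port=port, DBUsername=username)

my_db = pymysql.connect(
    host=hostname,
    port=port,
    user=username,
    password=token,
    database=dbname,
    # IAM tokens are only accepted over SSL; point this at the downloaded
    # RDS certificate bundle (the path is an assumption).
    ssl={"ca": "/opt/rds-combined-ca-bundle.pem"},
    connect_timeout=5,
)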
I have created a sample Cognito app and added a few users. Now I am trying to access the users from Python code, but I am getting the error below. I have developer access. Please let me know how to get user details from Cognito.
code:
import boto3

client = boto3.client('cognito-idp', region_name='us-east-2')
response = client.admin_get_user(
    UserPoolId='us-east-2_XXXXXXX',
    Username='newuser'
)
error: ClientError: An error occurred (ExpiredTokenException) when calling the AdminGetUser operation: The security token included in the request is expired
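The ExpiredTokenException points at stale temporary credentials rather than at Cognito itself; a minimal sketch (the profile name is hypothetical) that builds the client from a freshly configured session so old credentials are not reused:

import boto3

# Build the client from an explicit, freshly configured session so that
# expired temporary credentials (e.g. stale environment variables from an
# earlier assume-role call) are not reused. The profile name is hypothetical.
session = boto3.Session(profile_name="default", region_name="us-east-2")
client = session.client("cognito-idp")

response = client.admin_get_user(
    UserPoolId="us-east-2_XXXXXXX",
    Username="newuser",
)
print(response["UserAttributes"])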
In a Flask app with flask-sqlalchemy, I am trying to initialize a connection to Db2 Warehouse on Cloud by setting SQLALCHEMY_DATABASE_URI to one of the parts provided in the service credentials. In the past, going with the uri component worked fine, but my new service has SSL connections only.
app.config['SQLALCHEMY_DATABASE_URI']=dbInfo['uri']
This results in connection errors:
File "/home/vcap/deps/0/python/lib/python3.6/site-packages/ibm_db_dbi.py", line 592, in connect
conn = ibm_db.connect(dsn, '', '', conn_options)
Exception: [IBM][CLI Driver] SQL30081N A communication error has been detected. Communication protocol being used: "TCP/IP". Communication API being used: "SOCKETS". Location where the error was detected: "52.117.199.197". Communication function detecting the error: "recv". Protocol specific error code(s): "104", "*", "0". SQLSTATE=08001 SQLCODE=-30081 During handling of the above exception, another exception occurred:
It seems that the driver is not accepting the ssl=true option specified in the URI string. What parts of the service credentials should I use? Would I need to build the URI string manually?
This is only a partial answer because it relies on a workaround. I am using the port information from the service credentials to modify the connection URI:
if dbInfo['port'] == 50001:
    # if we are on the SSL port, add an additional parameter for the driver
    app.config['SQLALCHEMY_DATABASE_URI'] = dbInfo['uri'] + "Security=SSL;"
else:
    app.config['SQLALCHEMY_DATABASE_URI'] = dbInfo['uri']
By adding Security=SSL to the uri, the driver picks up the info on SSL and uses the correct settings to connect to Db2.
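For context, dbInfo above holds the bound service credentials; a sketch of pulling it from VCAP_SERVICES on Cloud Foundry (the service label is an assumption and may differ for your Db2 Warehouse plan; check cf env for the exact name):

import json
import os

# Parse the bound service credentials from Cloud Foundry's VCAP_SERVICES.
# The service label below is an assumption, not the confirmed name.
vcap = json.loads(os.environ.get("VCAP_SERVICES", "{}"))
dbInfo = vcap["dashDB For Transactions"][0]["credentials"]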
I'm working in Datalab, but when I try to query a table in BigQuery I get the following error:
Exception: invalid: Error while reading table: .... error message: Failed to read the spreadsheet. Errors: No OAuth token with Google Drive scope was found.
This only happens with tables that are linked to a Google Drive sheet.
I have now enabled the Google Drive app in GCP, but I still get the error when running:
from google.cloud import bigquery
client = bigquery.Client()
sql = """
SELECT * FROM `proyect-xxxx.set_xxx.table_x` LIMIT 1000
"""
df = client.query(sql).to_dataframe()
project_id = 'proyect-xxxx'
df = client.query(sql, project=project_id).to_dataframe()
df.head(3)
Exception: invalid: Error while reading table: .... error message: Failed to read the spreadsheet. Errors: No OAuth token with Google Drive scope was found.
As stated by the error, you are trying to access Google Drive, which stores your BigQuery external table, without granting that permission to your OAuth token.
You will need to go to the Google Console and enable this access to solve your problem.
You can use this link, which provides a how-to explanation on the subject:
Visit the Google API Console to obtain OAuth 2.0 credentials such as a client ID and client secret that are known to both Google and your application. The set of values varies based on what type of application you are building. For example, a JavaScript application does not require a secret, but a web server application does.
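For the programmatic route, a minimal sketch (assuming the google-auth and google-cloud-bigquery packages) that requests the Drive scope on the default credentials before building the client:

from google.cloud import bigquery
import google.auth

# Request both the BigQuery and Drive scopes so that external tables backed
# by a Google Sheet can be read through the same credentials.
credentials, project = google.auth.default(
    scopes=[
        "https://www.googleapis.com/auth/bigquery",
        "https://www.googleapis.com/auth/drive",
    ]
)
client = bigquery.Client(credentials=credentials, project=project)

df = client.query(
    "SELECT * FROM `proyect-xxxx.set_xxx.table_x` LIMIT 1000"
).to_dataframe()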