How do I connect to RDS in Elastic Beanstalk via Pymysql - python

I have a Python application (Dash and Plotly) that I'm trying to run on AWS Elastic Beanstalk. It ran successfully when I was pulling data from different APIs within the Python app (tradingEconomics and fredapi).
I have since created an RDS database so I can pull from the APIs once, store the data, and access it there. I was able to successfully connect to the database (add to, and pull from) when running it locally via pymysql. How I am doing so is below:
import pandas as pd
from sqlalchemy import create_engine

# '@' separates the credentials from the host in the SQLAlchemy URL
engine = create_engine('mysql+pymysql://USERNAME:PASSWORD@WHAT_I_CALLED_DATABASE_INSTANCE_ON_RDS.ApPrOpRiAtEcHaRaCtErS.us-east-1.rds.amazonaws.com/NAME_OF_DATABASE_I_CREATED_ON_MYSQLWORKBENCH')
dbConnection = engine.connect()
returned_frame = pd.read_sql("SELECT * FROM nameOfTableICreatedOnMySQLWorkbench", dbConnection)
Like I said, this works when run locally. It does not work when trying to run on Elastic Beanstalk. (I get an Internal Server Error 500; based on the logs, it looks like this connection is the problem.)
I think I opened up the inbound and outbound rules on the RDS database instance's security group, as some have suggested, but perhaps I misunderstood. This did not work.
If you have any further questions for clarification, feel free to ask.

Related

Attempting to establish a connection to Amazon Redshift from Python Script

I am trying to connect to an Amazon Redshift table. I created the table using SQL and now I am writing a Python script to append a data frame to the database. I am unable to connect to the database and feel that I have something wrong with my syntax or something else. My code is below.
from sqlalchemy import create_engine
conn = create_engine('jdbc:redshift://username:password#localhost:port/db_name')
Here is the error I am getting.
sqlalchemy.exc.ArgumentError: Could not parse rfc1738 URL from string
Thanks!
There are basically two options for connecting to Amazon Redshift using Python.
Option 1: JDBC Connection
This is a traditional direct connection to the database. In Python, the popular choice tends to be psycopg2 to establish the connection, since Amazon Redshift resembles a PostgreSQL database. (For JDBC-based tools, you can also download Redshift-specific JDBC drivers.)
This connection would require the Redshift database to be accessible to the computer making the query, and the Security Group would need to permit access on port 5439. If you are trying to connect from a computer on the Internet, the database would need to be in a Public Subnet and set to Publicly Accessible = Yes.
See: Establish a Python Redshift Connection: A Comprehensive Guide - Learn | Hevo
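For example, a minimal psycopg2 sketch for this kind of direct connection, assuming a publicly accessible cluster; the endpoint, database, credentials, and table name below are placeholders:
import psycopg2

# Placeholder endpoint and credentials; port 5439 is the Redshift default.
conn = psycopg2.connect(
    host='my-cluster.abc123xyz456.us-east-1.redshift.amazonaws.com',
    port=5439,
    dbname='dev',
    user='awsuser',
    password='my_password',
)

cur = conn.cursor()
cur.execute('SELECT * FROM my_table LIMIT 10;')
rows = cur.fetchall()
cur.close()
conn.close()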
Option 2: Redshift Data API
You can directly query an Amazon Redshift database by using the Boto3 library for Python, including an execute_statement() call to query data and a get_statement_result() call to retrieve the results. This also works with IAM authentication rather than having to create additional 'database users'.
There is no need to configure Security Groups for this method, since the request is made to AWS (on the Internet). It also works with Redshift databases that are in private subnets.
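A rough sketch of that flow with boto3 (the cluster identifier, database, user, and SQL below are placeholders; the Data API is asynchronous, so the statement is polled with describe_statement() before fetching results):
import time
import boto3

# Placeholder cluster and database names; credentials come from the normal AWS credential chain.
client = boto3.client('redshift-data', region_name='us-east-1')

resp = client.execute_statement(
    ClusterIdentifier='my-redshift-cluster',
    Database='dev',
    DbUser='awsuser',
    Sql='SELECT * FROM my_table LIMIT 10;',
)

# The Data API is asynchronous: poll until the statement has finished.
while True:
    status = client.describe_statement(Id=resp['Id'])['Status']
    if status in ('FINISHED', 'FAILED', 'ABORTED'):
        break
    time.sleep(1)

if status == 'FINISHED':
    result = client.get_statement_result(Id=resp['Id'])
    for record in result['Records']:
        print(record)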

Using python to get DB2 list db directory on a remote server

I want to get the list of DBs on a remote server with a python script.
I know I can connect to a certain db with
import ibm_db
ibm_db.connect("DATABASE=name;HOSTNAME=host;PORT=60000;PROTOCOL=TCPIP;UID=username;
PWD=password;", "", "")
However, I want to only connect to an instance and then run "db2 list db directory" to get the DB names.
Meaning, change to the instance user and issue that command, or preferably use a Python module that can do just that. I only need the names, not a real connection to a database.
The result should be an array with all database names in that instance.
Any ideas or help?
Thank you
Unfortunately, there is no such function in the python-ibmdb API, and actually not even in the full Db2 API. The only "workaround" I could think of would be a UDF deployed on the remote database that uses db2DbDirOpenScan to access the catalog and returns the info via the connection that is already established.
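If such a UDF were deployed, the Python side might look roughly like the sketch below. The table function name LIST_DB_DIRECTORY and the returned column are purely hypothetical, as are the connection details; only the ibm_db calls themselves are standard.
import ibm_db

# Hypothetical sketch: assumes a user-defined table function LIST_DB_DIRECTORY()
# wrapping db2DbDirOpenScan has already been deployed on the remote database.
conn = ibm_db.connect(
    "DATABASE=name;HOSTNAME=host;PORT=60000;PROTOCOL=TCPIP;UID=username;PWD=password;",
    "", "")

stmt = ibm_db.exec_immediate(conn, "SELECT dbname FROM TABLE(LIST_DB_DIRECTORY()) AS t")

# Collect the database names into a plain Python list
db_names = []
row = ibm_db.fetch_tuple(stmt)
while row:
    db_names.append(row[0])
    row = ibm_db.fetch_tuple(stmt)

print(db_names)
ibm_db.close(conn)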

Cloud SQL/NiFi: Connect to cloud sql database with python and NiFi

So, I am doing an ETL process in which I use Apache NiFi as the ETL tool along with a PostgreSQL database from Google Cloud SQL, reading a CSV file from GCS. As part of the process, I need to write a query to transform the data read from the CSV file and insert it into a table in the Cloud SQL database. So, based on NiFi, I need to write a Python script to execute SQL queries automatically on a daily basis. But the question here is: how can I write a Python script to connect to the Cloud SQL database? What configuration needs to be done? I have read something about the Cloud SQL Proxy, but can I just use the Cloud SQL instance's internal IP address, put it in some config file, and create some DB connector out of it?
Thank you
Edit: I can connect to the Cloud SQL database from my VM using psql -h [CLOUD_SQL_PRIVATE_IP_ADDR] -U postgres, but I need to run a Python script for the ETL process, and part of that process needs to execute SQL. What I am trying to ask is: how can I write a Python file that executes the SQL?
E.g. in Python, query = 'select * from table ....' and then run
postgres.run_sql(query), which will execute the query. So how can I create this kind of executor?
I don't understand why you need to write any code in Python. I've done a similar process where I used GetFile (locally) to read a CSV file, parse and transform it, and then used ExecuteSQLRecord to insert the rows into a SQL server (running on a cloud provider). The DBCPConnectionPool needs to reference your cloud provider as per their connection instructions. This means the URL likely references something.google.com, and you may need to open firewall rules using your cloud provider's administration console.
You can connect directly to a Cloud SQL instance via a Public IP (public meaning accessible via the public internet) mostly the same as a local database. By default, connections via Public IP require some form of authorization. Here you have 3 (maybe 4*) options:
Cloud SQL Proxy - this is an executable that listens on a local port or unix socket and uses IAM permissions to authenticate, encrypt, and forward connections to the database.
Self-managed SSL/TLS - Create an SSL/TLS key pair, providing the client key to NiFi as proof of authentication.
Whitelisting an IP - Whitelist which IPs are allowed to connect (so the IP that NiFi publicly sits on). This is the least secure option for a variety of reasons.
Any of these options should work for you to connect directly to the database. If you still need the specifics for Python, I suggest looking into SQLAlchemy and using these snippets here as reference; a minimal sketch follows below.
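For example, a rough SQLAlchemy sketch for the direct-connection case. The IP address, credentials, database, and table names below are placeholders, and it assumes your client IP has already been authorized on the instance:
from sqlalchemy import create_engine, text

# Placeholder public IP, credentials, and database name for the Cloud SQL instance.
engine = create_engine('postgresql+psycopg2://postgres:my_password@203.0.113.10:5432/my_database')

with engine.connect() as conn:
    result = conn.execute(text('SELECT * FROM my_table LIMIT 10'))
    for row in result:
        print(row)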
Another possible option: It looks like NiFi is using Java and allows you to specify a jar as a driver, so you could potentially also provide a driver bundled with the Cloud SQL JDBC SocketFactory to authenticate the connection as well.
To connect to a Cloud SQL instance with Python you need the Cloud SQL Proxy. You also have to set up a configuration file.
In this tutorial you can find, step by step, how to achieve this. It describes how to set up the configuration file needed for the connection (you can find an example of this file there as well).
The tutorial also includes some examples showing how to interact with your database from Python.
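To illustrate the proxy approach, here is a minimal sketch assuming the Cloud SQL Proxy is already running locally and forwarding 127.0.0.1:5432 to the instance; the database name, credentials, table names, and SQL are placeholders:
import psycopg2

# Assumes the Cloud SQL Proxy is listening on 127.0.0.1:5432 and forwarding
# connections to the Cloud SQL instance; dbname, user, and password are placeholders.
conn = psycopg2.connect(
    host='127.0.0.1',
    port=5432,
    dbname='my_database',
    user='postgres',
    password='my_password',
)

with conn.cursor() as cur:
    # Example of the kind of transform-and-insert query run on a daily basis
    cur.execute('INSERT INTO my_table (col_a, col_b) SELECT col_a, col_b FROM staging_table')
conn.commit()
conn.close()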

Not able to connect the amazon DynamoDb Local using python boto sdk

I want to connect to the DB available inside DynamoDB Local using the boto SDK. I followed the documentation at the link below.
http://boto.readthedocs.org/en/latest/dynamodb2_tut.html#dynamodb-local
This is the official documentation provided by Amazon. But when I execute the snippet from the document, I am unable to connect to the DB and I can't get the tables available inside it. The DB name is "dummy_us-east-1.db". And my snippet is:
from boto.dynamodb2.layer1 import DynamoDBConnection

con = DynamoDBConnection(
    host='localhost',
    port=8000,
    aws_access_key_id='dummy',
    aws_secret_access_key='dummy',
    is_secure=False,
)
print con.list_tables()
I have 8 tables available inside the DB, but I am getting an empty list after executing the list_tables() command.
output:
{u'TableNames':[]}
Instead of accessing the required database, it is creating and accessing a new database.
Old database : dummy_us-east-1.db
New database : dummy_localhost.db
How can I resolve this?
Please give me some suggestions regarding DynamoDB Local access. Thanks in advance.
It sounds like you are connecting to DynamoDB Local with different credential/region configurations, which is why it ends up using separate database files.
If so, you can also start DynamoDB Local with the sharedDb flag to force it to use a single db file:
-sharedDb When specified, DynamoDB Local will use a
single database instead of separate databases
for each credential and region. As a result,
all clients will interact with the same set of
tables, regardless of their region and
credential configuration.
E.g.
java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar --sharedDb
Here is the solution. This is because you didn't start DynamoDB Local with the -sharedDb flag and the location of its jar file:
java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar -sharedDb

How to use ddbmock with dynamodb-mapper?

Can someone please explain how to set up dynamodb_mapper (together with boto?) to use ddbmock with the sqlite backend as an Amazon DynamoDB replacement for functional testing purposes?
Right now, I have tried out "plain" boto and managed to get it working with ddbmock (with sqlite) by starting the ddbmock server locally and connecting using boto like this:
db = connect_boto_network(host='127.0.0.1', port=6543)
...and then I use the db object for all operations against the database. However, dynamodb_mapper gets a db connection this way:
conn = ConnectionBorg()
As I understand it, this uses boto's default way of connecting to (the real) DynamoDB. So basically I'm wondering whether there is a (preferred?) way to get ConnectionBorg() to connect to my local ddbmock server, as I've done with boto above. Thanks for any suggestions.
Library Mode
In library mode rather than server mode:
import boto
from ddbmock import config
from ddbmock import connect_boto_patch
# switch to sqlite backend
config.STORAGE_ENGINE_NAME = 'sqlite'
# define the database path. defaults to 'dynamo.db'
config.STORAGE_SQLITE_FILE = '/tmp/my_database.sqlite'
# Wire-up boto and ddbmock together
db = connect_boto_patch()
Any access to the DynamoDB service via boto will use ddbmock under the hood.
Server Mode
If you still want to use ddbmock in server mode, I would try to change ConnectionBorg._shared_state['_region'] at the very beginning of the test setup code:
ConnectionBorg._shared_state['_region'] = RegionInfo(name='ddbmock', endpoint="localhost:6543")
As far as I understand, any access to DynamoDB via any ConnectionBorg instance after those lines will use the ddbmock entry point.
That said, I've never tested it. I'll make sure the authors of ddbmock give an update on this.
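A sketch of what that test setup might look like, assuming ConnectionBorg is importable from dynamodb_mapper.model (an assumption about the import path) and the ddbmock server is listening on localhost:6543:
from boto.regioninfo import RegionInfo
from dynamodb_mapper.model import ConnectionBorg  # assumed import path

# Point every future ConnectionBorg instance at the local ddbmock server
# before any model code creates a connection.
ConnectionBorg._shared_state['_region'] = RegionInfo(
    name='ddbmock',
    endpoint='localhost:6543',
)

conn = ConnectionBorg()
# ... run functional tests against the ddbmock-backed tables from here on ...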
