I am trying to connect to an Amazon Redshift table. I created the table using SQL, and now I am writing a Python script to append a data frame to the database. I am unable to connect to the database and suspect there is something wrong with my syntax or something else. My code is below.
from sqlalchemy import create_engine
conn = create_engine('jdbc:redshift://username:password#localhost:port/db_name')
Here is the error I am getting.
sqlalchemy.exc.ArgumentError: Could not parse rfc1738 URL from string
Thanks!
There are basically two options for connecting to Amazon Redshift using Python.
Option 1: JDBC Connection
This is a traditional, direct connection to the database. The popular choice in Python tends to be psycopg2, since Amazon Redshift speaks the PostgreSQL protocol. (Amazon also publishes dedicated JDBC and ODBC drivers for Redshift if you are connecting from Java-based tools.)
This connection would require the Redshift database to be accessible to the computer making the query, and the Security Group would need to permit access on port 5439. If you are trying to connect from a computer on the Internet, the database would need to be in a Public Subnet and set to Publicly Accessible = Yes.
See: Establish a Python Redshift Connection: A Comprehensive Guide - Learn | Hevo
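A minimal sketch of Option 1 using SQLAlchemy with the psycopg2 driver; the endpoint, credentials, database and table names below are placeholders, not values taken from the question:

from sqlalchemy import create_engine
import pandas as pd

# Note the postgresql+psycopg2:// scheme: a Java-style "jdbc:redshift://" URL is not
# an RFC 1738 URL, which is what makes SQLAlchemy raise the error quoted above.
engine = create_engine(
    'postgresql+psycopg2://username:password@my-cluster.abc123xyz.us-east-1.redshift.amazonaws.com:5439/db_name'
)

# Append a data frame to an existing table.
df = pd.DataFrame({'col_a': [1, 2, 3]})
df.to_sql('my_table', engine, index=False, if_exists='append')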
Option 2: Redshift Data API
You can query an Amazon Redshift database directly by using the Boto3 library for Python: an execute_statement() call submits a query and a get_statement_result() call retrieves the results. This also works with IAM authentication, so there is no need to create additional 'database users'.
There is no need to configure Security Groups for this method, since the request is made to AWS (on the Internet). It also works with Redshift databases that are in private subnets.
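A rough sketch of Option 2 with boto3's Redshift Data API; the cluster identifier, database, user and SQL statement are placeholders:

import time
import boto3

client = boto3.client('redshift-data', region_name='us-east-1')

# Submit the query; the call returns immediately with a statement Id.
response = client.execute_statement(
    ClusterIdentifier='my-cluster',
    Database='db_name',
    DbUser='username',   # or SecretArn=... to authenticate via Secrets Manager
    Sql='SELECT * FROM my_table LIMIT 10;',
)
statement_id = response['Id']

# Poll until the statement finishes, then fetch the result set.
while True:
    status = client.describe_statement(Id=statement_id)['Status']
    if status in ('FINISHED', 'FAILED', 'ABORTED'):
        break
    time.sleep(1)

result = client.get_statement_result(Id=statement_id)
for record in result['Records']:
    print(record)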
Related
I have a python application (dash and plotly) that I'm trying to run on AWS EBS. It ran successfully when I was pulling data from different APIs within the python app (tradingEconomics and fredapi).
I have since created an RDS database so I can pull from the APIs once, store the data, and access it there. I was able to successfully connect to the database (add to, and pull from) when running it locally via pymysql. How I am doing so is below:
from sqlalchemy import create_engine
import pandas as pd

engine = create_engine('mysql+pymysql://USERNAME:PASSWORD@WHAT_I_CALLED_DATABASE_INSTANCE_ON_RDS.ApPrOpRiAtEcHaRaCtErS.us-east-1.rds.amazonaws.com/NAME_OF_DATABASE_I_CREATED_ON_MYSQLWORKBENCH')
dbConnection = engine.connect()
returned_frame = pd.read_sql("SELECT * FROM nameOfTableICreatedOnMySQLWorkbench", dbConnection)
Like I said, this works when run locally. It does not work when running on EBS; I get an internal server error 500, and based on the logs this connection appears to be the problem.
I think I opened up the inbound and outbound permissions on the RDS database instance, as some have suggested, but perhaps I misunderstood; this did not work.
If you have any further questions for clarification, feel free to ask.
I want to get the list of DBs on a remote server with a python script.
I know I can connect to a certain db with
import ibm_db
ibm_db.connect("DATABASE=name;HOSTNAME=host;PORT=60000;PROTOCOL=TCPIP;UID=username;PWD=password;", "", "")
However, I want to connect only to an instance and then run "db2 list db directory" to get the DB names.
Meaning, switch to the instance user and issue that command, or preferably use a Python module that can do just that. I only need the names, not a real connection to a database.
The result should be an array with all database names in that instance.
Any ideas or help?
Thank you
Unfortunately, there is no such function in the python-ibmdb API, and in fact not even in the full Db2 API. The only "workaround" I can think of would be a UDF deployed on the remote database that uses db2DbDirOpenScan to read the catalog and return the info over the connection that is already established.
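The question itself mentions running "db2 list db directory"; if switching to (or running as) the instance user is acceptable, a hedged sketch of that CLI route could look like the following. It assumes the Db2 environment (e.g. db2profile) is already sourced for the process:

import subprocess

def list_databases():
    # Run the Db2 command line processor and capture its output.
    output = subprocess.run(
        ['db2', 'list', 'db', 'directory'],
        capture_output=True, text=True, check=True,
    ).stdout
    names = []
    for line in output.splitlines():
        # Directory entries contain lines such as "Database name = SAMPLE".
        if line.strip().startswith('Database name'):
            names.append(line.split('=', 1)[1].strip())
    return names

print(list_databases())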
I do not know how to write a connection string in Python using the Boto3 API to make a JDBC connection to an existing database on AWS Redshift. I am using MobaXterm or PuTTY to make an SSH connection. I have some code to create the table, but I am lost as to how to connect to the database in Redshift.
import boto3
s3client = boto3.client('redshift', config=client_config)
CREATE TABLE pheaapoc_schema.green_201601_csv (
vendorid varchar(4),
pickup_datetime TIMESTAMP,
dropoff_datetime TIMESTAMP,
I need to connect to database "dummy" and create a table.
TL;DR: You do not need IAM credentials or boto3 to connect to Redshift. What you need is the endpoint of the Redshift cluster, the Redshift credentials, and a Postgres client with which you can connect.
You can connect to a Redshift cluster just the way you connect to any database (like MySQL, PostgreSQL or MongoDB). To connect to any database, you need five items (a minimal psycopg2 sketch follows the list below).
host - (this is just the endpoint you get from the AWS console/Redshift)
username - (refer again to the AWS console/Redshift; take a look at the master username section)
password - (if you created the Redshift cluster, you should know the password for the master user)
port number - (5439 for Redshift)
database - (the default database you created at first)
All of these values are shown on the cluster's details page in the AWS console.
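A minimal sketch with psycopg2 using the five items above; every value is a placeholder for what your own console shows, and the CREATE TABLE statement is trimmed to the columns quoted in the question:

import psycopg2

conn = psycopg2.connect(
    host='my-cluster.abc123xyz.us-east-1.redshift.amazonaws.com',  # endpoint
    port=5439,                    # default Redshift port
    user='master_user',           # master username
    password='master_password',
    dbname='dummy',               # the database you want to connect to
)

with conn.cursor() as cur:
    cur.execute("""
        CREATE TABLE IF NOT EXISTS pheaapoc_schema.green_201601_csv (
            vendorid         varchar(4),
            pickup_datetime  TIMESTAMP,
            dropoff_datetime TIMESTAMP
        );
    """)
conn.commit()
conn.close()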
What do the boto3 APIs do?
Boto3 provides APIs with which you can manage your Redshift cluster. For example, it provides calls to delete your cluster, resize it, or take a snapshot of it. They do not involve a database connection at all.
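As an illustration of that management side, here is a small hedged sketch: boto3 can look up the cluster endpoint (the "host" value used above) without ever opening a database connection. The cluster identifier is a placeholder.

import boto3

redshift = boto3.client('redshift', region_name='us-east-1')
cluster = redshift.describe_clusters(ClusterIdentifier='my-cluster')['Clusters'][0]
print(cluster['Endpoint']['Address'], cluster['Endpoint']['Port'])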
So, I am doing an ETL process in which I use Apache NiFi as the ETL tool along with a PostgreSQL database from Google Cloud SQL to read a CSV file from GCS. As part of the process, I need to write a query to transform the data read from the CSV file and insert it into a table in the Cloud SQL database. So, based on NiFi, I need to write a Python script to execute SQL queries automatically on a daily basis. But the question here is: how can I write a Python script to connect to the Cloud SQL database? What configuration needs to be done? I have read something about the Cloud SQL Proxy, but can I just use the Cloud SQL instance's internal IP address, put it in some config file, and create some DB connector out of it?
Thank you
Edit: I can connect to the Cloud SQL database from my VM using psql -h [CLOUD_SQL_PRIVATE_IP_ADDR] -U postgres, but I need to run a Python script for the ETL process, and part of that process needs to execute SQL. What I am trying to ask is how I can write a Python file that executes the SQL.
For example, in Python: query = 'select * from table ....' and then run
postgres.run_sql(query), which would execute the query. So how can I create this kind of executor?
I don't understand why you need to write any code in Python. I've done a similar process where I used GetFile (locally) to read a CSV file, parsed and transformed it, and then used ExecuteSQLRecord to insert the rows into a SQL server (running on a cloud provider). The DBCPConnectionPool needs to reference your cloud provider as per their connection instructions. This means the URL will likely reference something.google.com, and you may need to open firewall rules using your cloud provider's administration console.
You can connect directly to a Cloud SQL instance via a Public IP (public meaning accessible via the public internet) mostly the same as a local database. By default, connections via Public IP require some form of authorization. Here you have 3 (maybe 4*) options:
Cloud SQL Proxy - this is an executable that listens on a local port or unix socket and uses IAM permissions to authenticate, encrypt, and forward connections to the database.
Self-managed SSL/TLS - Create an SSL/TLS key pair, providing the client key to NiFi as proof of authentication.
Whitelisting an IP - Whitelist which IPs are allowed to connect (so the IP that NiFi publicly sits on). This is the least secure option for a variety of reasons.
Any of these options should work for you to connect directly to the database. If you still need specifics for Python, I suggest looking into SQLAlchemy and using these snippets as a reference.
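As a hedged sketch of that combination (Cloud SQL Proxy plus SQLAlchemy): with the proxy running alongside the script and listening on 127.0.0.1:5432, the database is addressed as if it were local. Credentials, database and table names here are placeholders.

import sqlalchemy

engine = sqlalchemy.create_engine(
    'postgresql+psycopg2://postgres:password@127.0.0.1:5432/etl_db'
)

# The "executor" the question asks about is then just a connection plus execute().
query = 'SELECT * FROM my_table;'
with engine.connect() as conn:
    for row in conn.execute(sqlalchemy.text(query)):
        print(row)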
Another possible option: it looks like NiFi runs on Java and allows you to specify a JAR as a driver, so you could potentially also provide a driver bundled with the Cloud SQL JDBC SocketFactory to authenticate the connection as well.
To connect to a Cloud SQL instance with Python you need the Cloud SQL Proxy, and you also have to set up a configuration file.
In this tutorial you can find, step by step, how to achieve this, including how to set up the configuration file needed for the connection (an example of this file is provided as well).
The tutorial also includes some examples showing how to interact with your database from Python.
I connect to a Postgres database hosted on AWS. Is there a way to find out the number of open connections to that database using a Python API?
I assume this is for RDS. There is no direct way via the AWS API. You could potentially get it from CloudWatch (the DatabaseConnections metric), but you'd be better off connecting to the database and getting the count by querying pg_stat_activity.
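A minimal sketch of the pg_stat_activity route with psycopg2; the host and credentials are placeholders for your own RDS instance:

import psycopg2

conn = psycopg2.connect(
    host='mydb.abc123xyz.us-east-1.rds.amazonaws.com',
    port=5432,
    user='postgres',
    password='password',
    dbname='postgres',
)
with conn.cursor() as cur:
    cur.execute('SELECT count(*) FROM pg_stat_activity;')
    print('open connections:', cur.fetchone()[0])
conn.close()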