I am trying to connect to Snowflake using my login credentials. I'm using the following code:
import snowflake.connector

conn = snowflake.connector.connect(
    user="<my_user_name>",
    password="<my_password>",
    account="<my_account_name_with_region_and_cloud>"
)
When I try to run the above code, I'm getting the following error:
OperationalError: 250003: Failed to get the response. Hanging? method: post, url: https://hm53485.us-east-2.aws.snowflakecomputing.com:443/session/v1/login-request?request_id=fcfdd77a-11ff-4956-9ed8-bcc332c5989a&databaseName=S3_DB&schemaName=PUBLIC&warehouse=COMPUTE_WH&request_guid=b9fdb5c9-81cb-4ecb-8d20-abef44249bbf
I'm sure that all my packages are up to date. I'm using Python 3.6.4 and the latest snowflake-connector-python.
My account is in the us-east-2 region on AWS.
Can someone please help me out with this?
Just give your account name in the account parameter. You don't need the region and full URL.
Please check below.
----------------------------------------------------------------------
import snowflake.connector

PASSWORD = '*******'
USER = '<USERNAME>'
ACCOUNT = 'SFCSUPPORT'
WAREHOUSE = '<WHNAME>'
DATABASE = '<DBNAME>'
SCHEMA = 'PUBLIC'

print("Connecting...")
con = snowflake.connector.connect(
    user=USER,
    password=PASSWORD,
    account=ACCOUNT,
    warehouse=WAREHOUSE,
    database=DATABASE,
    schema=SCHEMA
)
con.cursor().execute("USE WAREHOUSE " + WAREHOUSE)
con.cursor().execute("USE DATABASE " + DATABASE)
# con.cursor().execute("USE SCHEMA INFORMATION_SCHEMA")
try:
    result = con.cursor().execute("SELECT * FROM <TABLE>")
    result_list = result.fetchall()
    print(result_list)
finally:
    con.close()
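A minor variation on the same pattern: wrapping the connection in contextlib.closing() guarantees the close even if the query raises, without the explicit finally. This is just a sketch reusing the placeholder names from above:

import contextlib

import snowflake.connector

# closing() calls con.close() on exit, even if the query raises
with contextlib.closing(snowflake.connector.connect(
        user=USER,
        password=PASSWORD,
        account=ACCOUNT,
        warehouse=WAREHOUSE,
        database=DATABASE,
        schema=SCHEMA)) as con:
    result = con.cursor().execute("SELECT * FROM <TABLE>")
    print(result.fetchall())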
I'm using SQLAlchemy with the Snowflake dialect, which you can install via pip (snowflake-sqlalchemy pulls in SQLAlchemy as a dependency):
pip install snowflake-sqlalchemy
https://docs.snowflake.net/manuals/user-guide/sqlalchemy.html
Here's what I have at the beginning of my notebook:
import snowflake.connector
import pandas as pd
from sqlalchemy import create_engine
from snowflake.sqlalchemy import URL

url = URL(
    account = 'xxxxxxxx.east-us-2.azure',
    user = 'xxxxxxxx',
    password = 'xxxxxxxx',
    database = 'xxxxxxxx',
    schema = 'xxxxxxxx',
    warehouse = 'xxxxxxxx',
    role = 'xxxxxxxx'
)
engine = create_engine(url)
connection = engine.connect()

query = '''
select 1 AS VAL;
'''
df = pd.read_sql(query, connection)
df
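One cleanup note: Snowflake's SQLAlchemy docs recommend closing the connection and disposing of the engine when you're done, so the session doesn't linger:

connection.close()
engine.dispose()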
I was getting a similar error. I tried a few things, like making sure the account name is correct per https://docs.snowflake.com/en/user-guide/admin-account-identifier.html. The account name depends on the region in which your Snowflake account is located. Note that some cloud regions need a cloud provider name at the end and some do not.
But that didn't fix the issue I was facing. For me, it turned out to be a proxy issue: I was connecting from a corporate network with a proxy, and it was blocking the connection to Snowflake. Whitelisting the Snowflake URL in the proxy fixed the issue for me.
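If you need to go through the corporate proxy rather than have the URL whitelisted, the connector honors the standard proxy environment variables. A minimal sketch, with a made-up proxy host and port:

import os

# Hypothetical proxy address; set these before the connector opens a session.
os.environ['HTTPS_PROXY'] = 'http://proxy.mycompany.example:8080'
os.environ['NO_PROXY'] = 'localhost,127.0.0.1'  # hosts that bypass the proxy

import snowflake.connector

conn = snowflake.connector.connect(
    user='<user>',
    password='<password>',
    account='<account>'
)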
I recently had to enable SSO with Okta, and I had a few Python projects I was running in Google Colab.
I am trying to redesign the connection string but can't seem to get it right.
This was my initial connection string before SSO:
from snowflake.sqlalchemy import URL
from sqlalchemy import create_engine
engine = create_engine(URL(
    account = acc,
    user = usr,
    password = psw,
    warehouse = whs,
    role = rol
))
engine.connect()
From my research, this is what it should look like with SSO:
from snowflake.sqlalchemy import URL
from sqlalchemy import create_engine
engine = create_engine(
    URL(
        account = acc,
        user = usr,
        password = psw,
        warehouse = whs,
        role = rol
    ),
    connect_args={
        'authenticator': 'https://myokta.okta.com/',
    }
)
engine.connect()
I tried that, but I am getting an error.
I also tried using {'authenticator': 'externalbrowser'}, but because I am in Google Colab I get an error stating "Unable to open a browser in this environment.".
The web UI works for the same user, so it's only in Colab that I'm having this issue.
How should I go about connecting?
EDIT:
After doing some research, I found that this would not work because we have MFA enabled. Is it possible to then use:
engine = create_engine(URL(
    account = acc,
    user = usr,
    warehouse = whs,
    role = rol,
    authenticator = 'externalbrowser'
))
connection = engine.connect()
And have the externalbrowser be an iframe in the same notebook?
I managed to find the solution. When engine.connect() runs with authenticator='externalbrowser' and Google Colab cannot open a separate tab, it prints a manual link. Clicking that link opens another tab, which redirects to a localhost URL with a token as a parameter. I copy this URL, go back to the notebook, and paste it into the input box opened in the cell.
In my Jupyter notebook I connect to Snowflake with externalbrowser auth like so:
import snowflake.connector

conn = snowflake.connector.connect(
    user='<my user>',
    authenticator='externalbrowser',
    account='<my account>',
    warehouse='<the warehouse>')
This opens an external browser to authenticate, and after that it works fine with pandas read_sql:
pd.read_sql('<a query>', conn)
I want to use it with ipython-sql, but when I try:
%sql snowflake://conn.user#conn.account
I get:
snowflake.connector.errors.ProgrammingError) Password is empty
well I don't have one :)
any ideas how to pass this?
IPython-sql connection strings follow the SQLAlchemy URL standard, so you can do the following:
%load_ext sql
from sqlalchemy import create_engine
from snowflake.sqlalchemy import URL

engine = create_engine(URL(
    account = '<account>',
    user = '<user>',
    database = 'testdb',
    schema = 'public',
    warehouse = '<wh>',
    role = 'public',
    authenticator = 'externalbrowser'
))
connection = engine.connect()
This would open the external browser for authentication.
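From there, pandas works against the same authenticated connection, mirroring the pd.read_sql pattern earlier in this thread (whether %sql can reuse this exact engine depends on your ipython-sql version, so this sticks to plain pandas):

import pandas as pd

# Reuses the connection opened by engine.connect() above
df = pd.read_sql('select 1 as val', connection)
df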
I am using a Python script to connect to a SQL Server database:
import pyodbc
import pandas

server = 'SQL'
database = 'DB_TEST'
username = 'USER'
password = 'My password'

sql = '''
SELECT *
FROM [DB_TEST].[dbo].[test]
'''

cnxn = pyodbc.connect('DRIVER={SQL Server};SERVER=' + server + ';DATABASE=' +
                      database + ';UID=' + username + ';PWD=' + password)
data = pandas.read_sql(sql, cnxn)
cnxn.close()
The script is launched every day by an automation tool, so there is no physical user.
The issue is how to replace the password field with a secure method.
The automated script is still run by a Windows user. Add this Windows user to the SQL Server logins and give it the appropriate permissions, so you can use:
import pyodbc
import pandas

server = 'SQL'
database = 'DB_TEST'

sql = '''
SELECT *
FROM [DB_TEST].[dbo].[test]
'''

cnxn = pyodbc.connect(
    f'DRIVER={{SQL Server}};SERVER={server};DATABASE={database};Trusted_Connection=yes;')
data = pandas.read_sql(sql, cnxn)
cnxn.close()
I am also interested in secure coding using Python. I did my own research to figure out the available options; I would recommend reviewing this post, as it summarizes them all. Check the listed options and apply the one that suits you best.
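As one concrete instance of those options, here is a minimal sketch that keeps the password out of the source file by reading it from an environment variable (DB_TEST_PWD is a made-up name; the automation tool would need to set it):

import os

import pyodbc

server = 'SQL'
database = 'DB_TEST'
username = 'USER'
# Hypothetical variable set by the scheduler instead of hard-coding the password
password = os.environ['DB_TEST_PWD']

cnxn = pyodbc.connect('DRIVER={SQL Server};SERVER=' + server + ';DATABASE=' +
                      database + ';UID=' + username + ';PWD=' + password)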
I am connecting to an Aurora database using Python. Everything works when I have a static user ID and password.
I am trying to use an IAM-authenticated user instead; the user has been set up and it works well using MySQL Workbench.
I am trying to use SQLAlchemy to connect from Python. This is supposed to be a no-brainer, but it does not work:
from sqlalchemy import create_engine
from boto3 import client
# import pymysql

db_end_point = "yyyyyyyyyyyy.cluster-xxxxxxxxxxxx.eu-west-1.rds.amazonaws.com"

def get_db_password(cluster_end_point, user_name):
    rds_client = client('rds', region_name='eu-west-1')
    rds_token = rds_client.generate_db_auth_token(
        DBHostname=cluster_end_point, Port=3306,
        DBUsername=user_name, Region='eu-west-1')
    return rds_token

user_password = get_db_password(db_end_point, "svc_payment_data_write")
print(user_password)

url = "mysql://{}:3306/payment_data_store".format(db_end_point)
rds_credentials = {
    'user': 'svc_payment_data_write', 'passwd': user_password
}
kw = dict()
kw.update(rds_credentials)
print(kw)

engine = create_engine(url, connect_args=kw)
connection = engine.connect()
I have also tried creating the URL with the password string quoted properly. The same thing works properly in Java.
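For reference, a sketch of that quoted-URL variant (not verified; it assumes the PyMySQL driver and the RDS CA bundle path, since IAM auth tokens must be URL-quoted and the connection must use SSL):

from urllib.parse import quote_plus

from sqlalchemy import create_engine

# The token contains characters like '&' and '=', so quote it before embedding it
url = 'mysql+pymysql://{user}:{pw}@{host}:3306/payment_data_store'.format(
    user='svc_payment_data_write',
    pw=quote_plus(user_password),
    host=db_end_point)

# IAM database authentication requires TLS; the CA bundle path is a placeholder
engine = create_engine(url, connect_args={'ssl': {'ca': '/path/to/rds-combined-ca-bundle.pem'}})
connection = engine.connect()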
I'm trying to query an RDS (Postgres) database through Python, more specifically a Jupyter Notebook. Overall, what I've been trying for now is:
import boto3

client = boto3.client('rds-data')

response = client.execute_sql(
    awsSecretStoreArn='string',
    database='string',
    dbClusterOrInstanceArn='string',
    schema='string',
    sqlStatements='string'
)
The error I've been receiving is:
BadRequestException: An error occurred (BadRequestException) when calling the ExecuteSql operation: ERROR: invalid cluster id: arn:aws:rds:us-east-1:839600708595:db:zprime
In the end, it was much simpler than I thought; nothing fancy or specific. It was basically a solution I had used before when accessing one of my local DBs: simply import the specific library for your database type (Postgres, MySQL, etc.) and then connect to it to execute queries through Python.
I don't know if it's the best solution, since making queries through Python will probably be slower than doing them directly, but it's what works for now.
import psycopg2

conn = psycopg2.connect(database='database_name',
                        user='user',
                        password='password',
                        host='host',
                        port='port')

cur = conn.cursor()
cur.execute('''
SELECT *
FROM table;
''')
cur.fetchall()
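Since this runs in a notebook, the same psycopg2 connection can also feed pandas directly, matching the pattern used elsewhere in this thread (pandas officially targets SQLAlchemy connectables, so newer versions may warn about a raw DBAPI connection, but it works):

import pandas as pd

# Reuses the psycopg2 connection opened above
df = pd.read_sql('SELECT * FROM table;', conn)
df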