How to secure a Python script's SQL Server authentication - python

I am using a Python script to connect to a SQL Server database:
import pyodbc
import pandas

server = 'SQL'
database = 'DB_TEST'
username = 'USER'
password = 'My password'

sql = '''
SELECT *
FROM [DB_TEST].[dbo].[test]
'''

cnxn = pyodbc.connect('DRIVER=SQL Server;SERVER=' + server + ';DATABASE=' + database
                      + ';UID=' + username + ';PWD=' + password)
data = pandas.read_sql(sql, cnxn)
cnxn.close()
The script is launched every day by an automation tool, so there is no physical user.
The question is: how can I replace the plaintext password with a secure method?

The automated script is still run by a Windows user. Add this Windows user to the SQL Server logins and give it the appropriate permissions, so you can use:
import pyodbc
import pandas

server = 'SQL'
database = 'DB_TEST'

sql = '''
SELECT *
FROM [DB_TEST].[dbo].[test]
'''

cnxn = pyodbc.connect(
    f'DRIVER=SQL Server;SERVER={server};DATABASE={database};Trusted_Connection=yes;')
data = pandas.read_sql(sql, cnxn)
cnxn.close()

I am also interested in secure coding with Python. I did my own research to figure out the available options, and I would recommend reviewing this post, as it summarizes them all. Check the listed options and apply the one that suits you best.
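For example, one such option is to keep the password out of the script entirely and have the automation tool inject it through an environment variable. A minimal sketch, assuming the scheduler defines a variable named DB_TEST_PWD (the variable name is only an example):
import os
import pyodbc
import pandas

server = 'SQL'
database = 'DB_TEST'
username = 'USER'
# The password is injected by the scheduler's environment, never stored in the script.
password = os.environ['DB_TEST_PWD']

cnxn = pyodbc.connect(
    f'DRIVER=SQL Server;SERVER={server};DATABASE={database};UID={username};PWD={password}')
data = pandas.read_sql('SELECT * FROM [DB_TEST].[dbo].[test]', cnxn)
cnxn.close()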

Related

Connecting R to Oracle DB without admin

I need an R script that allows me to connect to an Oracle DB without having to install anything that needs admin rights, and preferably nothing at all apart from package downloads. In Python the following code works, I believe because it uses the cx_Oracle module as a portable driver. What would be a good R alternative?
import pandas as pd
import sqlalchemy

host = "xxx.intra"
database = "mydb"
user = "usr"
password = "pw"

def get_oracle_engine(host, database, user, password):
    return sqlalchemy.create_engine(
        "oracle+cx_oracle://{user}:{password}@{host}:1521/?service_name={database}".format(
            host=host, database=database, user=user, password=password))

engine = get_oracle_engine(host, database, user, password)
pd.read_sql_table("mytable", engine, schema="mydb", index_col="id1")
I managed to install ROracle using the CRAN instructions, but I keep getting "ORA-12154: TNS:could not resolve the connect identifier specified" when using:
library(ROracle)
con <- DBI::dbConnect(dbDriver("Oracle"), user = user, password = password, host = host, dbname = database, port = "1521")
By the way, dbDriver("Oracle") returns:
Driver name : Oracle (OCI)
Driver version: 1.3-1
Client version: 12.1.0.2.0
Try code like:
library(DBI)
library(ROracle)
drv <- Oracle()
con <- dbConnect(drv, 'cj', 'welcome', 'localhost:1521/orclpdb1')
dbGetQuery(con,"select count(*) from dual")
The connect string components correspond to the {host}:1521/?service_name values you used with SQLAlchemy. Use a TNS alias or an Easy Connect string, the same as with other C-based Oracle drivers, e.g. https://cx-oracle.readthedocs.io/en/latest/user_guide/connection_handling.html#connection-strings
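For comparison, a minimal cx_Oracle sketch on the Python side, using the same Easy Connect format (host:port/service_name) as the dbConnect() call above and the placeholder credentials from the question:
import cx_Oracle

# Easy Connect string: host:port/service_name, matching the ROracle connect string above.
conn = cx_Oracle.connect(user="usr", password="pw", dsn="xxx.intra:1521/mydb")
cursor = conn.cursor()
cursor.execute("select count(*) from dual")
print(cursor.fetchone())
conn.close()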
The current ROracle code is at https://www.oracle.com/database/technologies/roracle-downloads.html (there are some packaging glitches with uploading it to CRAN, and the CRAN maintainers haven't been responsive about resolving them).
ROracle still needs Oracle Client libraries, such as those from Oracle Instant Client.

SQL Connect error in Python (Windows): severity 9:\nAdaptive Server connection failed

I am not able to connect to an Azure DB; I get the error shown in the title when connecting via Python.
I am able to connect to my usual SQL environment.
import pandas as pd
import pymssql
connPDW = pymssql.connect(host=r'dwprd01.database.windows.net', user=r'internal\admaaron',password='',database='')
connPDW.autocommit(True)
cursor = connPDW.cursor()
sql = """
select Top (10) * from TableName
"""
cursor.execute(sql);
It runs without errors against my usual SQL environment.
Just looking at your code, there is an obvious issue with connecting to Azure SQL Database via the pymssql package in Python: the user format is incorrect, and the password and database parameters are empty.
Please carefully follow the official document Step 3: Proof of concept connecting to SQL using pymssql to correct your code.
Your instance of Azure SQL Database has an ODBC connection string such as Driver={ODBC Driver 13 for SQL Server};Server=tcp:<your hostname>.database.windows.net,1433;Database=<your database name>;Uid=<username>@<host>;Pwd=<your_password>;Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30;, shown in the Connection strings tab of your SQL Database in the Azure portal.
Then your code should look like this:
hostname = '<your hostname>'
server = f"{hostname}.database.windows.net"
username = '<your username>'
user = f"{username}#{hostname}"
password = '<your password>'
database = '<your database name>'
import pymssql
conn = pymssql.connect(server=server, user=user, password=password, database=database)
As an additional note: for Azure SQL Database and SQL Server 2008+, if you use the ODBC driver for Python (pyodbc), the connection string should start with DRIVER={ODBC Driver 17 for SQL Server}; rather than the Driver 13 shown in the Azure portal's connection string; please refer to the official document Step 3: Proof of concept connecting to SQL using pyodbc.
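For reference, a rough pyodbc sketch along those lines, reusing the placeholders above (with ODBC Driver 17 the plain username is generally enough, without the @hostname suffix):
import pyodbc

hostname = '<your hostname>'
server = f"{hostname}.database.windows.net"
username = '<your username>'
password = '<your password>'
database = '<your database name>'

# Connection string modeled on the Azure portal's ODBC string, using Driver 17.
conn = pyodbc.connect(
    'DRIVER={ODBC Driver 17 for SQL Server};'
    f'SERVER=tcp:{server},1433;DATABASE={database};'
    f'UID={username};PWD={password};'
    'Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30;')
cursor = conn.cursor()
cursor.execute('SELECT TOP (10) * FROM TableName')
rows = cursor.fetchall()
conn.close()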

How to query a (Postgres) RDS DB through an AWS Jupyter Notebook?

I'm trying to query an RDS (Postgres) database through Python, more specifically from a Jupyter Notebook. So far, this is what I have been trying:
import boto3

client = boto3.client('rds-data')
response = client.execute_sql(
    awsSecretStoreArn='string',
    database='string',
    dbClusterOrInstanceArn='string',
    schema='string',
    sqlStatements='string'
)
The error I've been receiving is:
BadRequestException: An error occurred (BadRequestException) when calling the ExecuteSql operation: ERROR: invalid cluster id: arn:aws:rds:us-east-1:839600708595:db:zprime
In the end, it was much simpler than I thought, nothing fancy or specific. It was basically a solution I had used before when accessing one of my local DBs: simply import the appropriate library for your database type (Postgres, MySQL, etc.) and then connect to it in order to execute queries through Python.
I don't know if it is the best solution, since making queries through Python will probably be much slower than doing them directly, but it is what works for now.
import psycopg2

conn = psycopg2.connect(database='database_name',
                        user='user',
                        password='password',
                        host='host',
                        port='port')
cur = conn.cursor()
cur.execute('''
SELECT *
FROM table;
''')
cur.fetchall()
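Since this runs in a notebook, it can also be convenient to pull the result straight into a DataFrame; a small sketch reusing the same placeholder connection values with pandas.read_sql:
import pandas as pd
import psycopg2

# Same placeholder connection values as above.
conn = psycopg2.connect(database='database_name',
                        user='user',
                        password='password',
                        host='host',
                        port='port')
# Read the query result directly into a DataFrame for use in the notebook.
df = pd.read_sql('SELECT * FROM table;', conn)
conn.close()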

Python - Pyodbc Connection error

I am trying to connect to a SQL Server database using Python 3.4.
This is the code that works for me
cnxn = pyodbc.connect('DRIVER={ODBC Driver 13 for SQL Server};SERVER=DESKTOP-GDM2HQ17\SQLEXPRESS;DATABASE=pyconnect;Trusted_Connection=yes')
and I log in to my database in Management Studio using a Windows connection.
Here is the code which is not working for me:
cnxn = pyodbc.connect('DRIVER={ODBC Driver 13 for SQL Server};SERVER=DESKTOP-GDM2HQ17\SQLEXPRESS;DATABASE=pyconnect;UID=DESKTOP-GDM2HQ17\sid;PWD=123')
Kindly share your thoughts on where I am going wrong.
There are two authentication modes in SQL Server:
1. Connecting through Windows Authentication:
When a user connects through a Windows user account, SQL Server validates the account name and password using the Windows principal token in the operating system.
2. Connecting through SQL Server Authentication:
When using SQL Server Authentication, logins are created in SQL Server that are not based on Windows user accounts. Both the user name and the password are created by using SQL Server and stored in SQL Server.
Your first snippet works because it connects through Windows Authentication.
Your second snippet does not work because it tries to use credentials (a login and password) stored in SQL Server, but those credentials have not been created in SQL Server.
Moreover, you can refer to the official doc on how to change the server authentication mode.
Hope it helps.
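As a rough sketch of that second route, assuming an administrative Windows-authentication connection and mixed-mode authentication enabled (the login name and password below are only examples):
import pyodbc

# Create the SQL Server login and database user once, from an admin Windows-auth connection.
admin = pyodbc.connect('DRIVER={ODBC Driver 13 for SQL Server};'
                       'SERVER=DESKTOP-GDM2HQ17\\SQLEXPRESS;'
                       'DATABASE=pyconnect;Trusted_Connection=yes;',
                       autocommit=True)
cur = admin.cursor()
cur.execute("CREATE LOGIN sid WITH PASSWORD = '123', CHECK_POLICY = OFF")  # example credentials
cur.execute("CREATE USER sid FOR LOGIN sid")
admin.close()

# A SQL-authentication connection can then use the plain login name (no machine prefix):
cnxn = pyodbc.connect('DRIVER={ODBC Driver 13 for SQL Server};'
                      'SERVER=DESKTOP-GDM2HQ17\\SQLEXPRESS;'
                      'DATABASE=pyconnect;UID=sid;PWD=123')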
DRIVER={SQL Server} works.
The code below is tested:
import pyodbc
con = pyodbc.connect('Driver={SQL Server};'
                     'Server=LAPTOP-PPDS6BPG;'
                     'Database=training;'
                     'Trusted_Connection=yes;')
cursor = con.cursor()
sql_query = 'SELECT * FROM Students'
cursor.execute(sql_query)
for row in cursor:
    print(row)
This works fine for me, better than any other approach I could find:
import pyodbc
import pandas as pd
conn = pyodbc.connect('Driver={SQL Server};'
                      'Server=10.****;'
                      'Database=Ma**;'
                      'UID=sql**;'
                      'PWD=sql**;')
cursor = conn.cursor()
sql = """\
EXEC [dbo].[GetNewPayment] @Login=?, @PasswordMD5=?, @RevokeTimeFrom=?, @RevokeTimeTo=?, @Status=?
"""
params = ('a***', 'c2ca***', '2021-05-01','2021-05-02', '3')
cursor.execute(sql, params)

Cross Server Select In SQLAlchemy

Is it possible to make SQLAlchemy do cross server joins?
If I try to run something like
from sqlalchemy import create_engine, sql

engine = create_engine('mssql+pyodbc://SERVER/Database')
query = sql.text('SELECT TOP 10 * FROM [dbo].[Table]')
with engine.begin() as connection:
    data = connection.execute(query).fetchall()
It works as I'd expect. If I change the query to select from [OtherServer].[OtherDatabase].[dbo].[Table], I get the error message "Login failed for user 'NT AUTHORITY\\ANONYMOUS LOGON'".
Looks like there's an issue with how you authenticate to SQL server.
I believe you can connect using the current Windows user, the URI syntax is then mssql+pyodbc://SERVER/Database?trusted_connection=yes (I have never tested this, but give it a try).
Another option is to create a SQL Server login (i.e. a username/password that is defined within SQL Server, NOT a Windows user) and use that SQL Server login when you connect.
The database URI then becomes: mssql+pyodbc://username:password@SERVER/Database.
mssql+pyodbc://SERVER/Database?trusted_connection=yes threw an error when I tried it. It did point me in the right direction, though.
from sqlalchemy import create_engine, sql
import urllib.parse

string = "DRIVER={SQL Server};SERVER=server;DATABASE=db;TRUSTED_CONNECTION=YES"
params = urllib.parse.quote_plus(string)
engine = create_engine('mssql+pyodbc:///?odbc_connect={0}'.format(params))
query = sql.text('SELECT TOP 10 * FROM [CrossServer].[database].[dbo].[Table]')
with engine.begin() as connection:
    data = connection.execute(query).fetchall()
It's quite complicated if you intend to work with several servers through one connection.
But if you need to run a query against a different server under different credentials, you should first add a linked server with sp_addlinkedserver, then add credentials for the linked server with sp_addlinkedsrvlogin (a sketch follows below). Have you tried this?
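A rough sketch of that setup from Python, assuming you can connect to the local server with sufficient privileges; the linked-server name, remote login and password below are placeholders:
import pyodbc

# Register the remote server as a linked server and map credentials for it.
cnxn = pyodbc.connect('DRIVER={SQL Server};SERVER=SERVER;DATABASE=Database;'
                      'Trusted_Connection=yes;', autocommit=True)
cursor = cnxn.cursor()
cursor.execute("EXEC sp_addlinkedserver @server = N'OtherServer', @srvproduct = N'', "
               "@provider = N'SQLNCLI', @datasrc = N'OtherServer'")
cursor.execute("EXEC sp_addlinkedsrvlogin @rmtsrvname = N'OtherServer', @useself = N'False', "
               "@locallogin = NULL, @rmtuser = N'remote_user', @rmtpassword = N'remote_password'")
# Four-part names such as [OtherServer].[OtherDatabase].[dbo].[Table] should then resolve
# in queries run through SQLAlchemy or pyodbc.
cnxn.close()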
