I was trying to close an open DB2 connection in one of my Robot Framework automation scripts so that I could open an Oracle DB connection and perform certain tasks. I tried the built-in keyword provided by the Robot Framework Database Library for closing the connection, but it does not close the connection and gives an error:
AttributeError: 'NoneType' object has no attribute 'close'
The close method in the Robot Framework Database Library is written as below:
def disconnect_from_database(self):
    self._dbconnection.close()
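For what it's worth, a more defensive variant of that keyword (a sketch only, assuming the library really does keep the active connection in self._dbconnection as shown above) would avoid the NoneType error by only closing when a connection exists:

def disconnect_from_database(self):
    # Only close if a connection was actually opened; calling close() on None
    # is exactly what raises "'NoneType' object has no attribute 'close'".
    if self._dbconnection is not None:
        self._dbconnection.close()
        self._dbconnection = None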
Steps performed in my script are:
1. Application-related validation
2. Validation in IBM DB2 after connecting
3. Disconnect IBM DB2
4. Open Oracle DB and perform certain validations
If I remove step 3 (disconnect IBM DB2) and directly open the Oracle connection, I get a runtime error:
RuntimeExceptionPyRaisable: java.lang.RuntimeException: Class oracle.jdbc.driver.OracleDriver not found
Please note that the Oracle DB jars are present, and the script works fine when I need only the DB2 or only the Oracle connection, but it gives the above error when I need to connect to both DBs in a single script.
Can anyone help me connect to both DBs without these errors?
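(For context, here is a rough sketch of the direct jaydebeapi calls the Database Library appears to wrap, assuming it is backed by jaydebeapi, which the java.lang.RuntimeException suggests; the host, port, SID and jar paths below are placeholders:)

import jaydebeapi

# Both driver jars are passed on the first connect, because the JVM classpath
# is fixed once the JVM has been started.
conn = jaydebeapi.connect(
    'oracle.jdbc.driver.OracleDriver',
    'jdbc:oracle:thin:@dbhost:1521:ORCL',
    ['user', 'password'],
    ['/path/to/ojdbc8.jar', '/path/to/db2jcc4.jar'],
)
conn.close()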
I created an Azure Function with Python and want to write some data into an Azure SQL DB.
If I run the code on my local machine via the AZ Function debugger, everything works. But when I deploy everything to Azure, I only get a generic message that there is an error (no additional specific information).
I think this is related to the ODBC Driver?
I'm using the following code to connect and insert data:
import logging
import pyodbc

with pyodbc.connect('DRIVER=' + driver + ';SERVER=tcp:' + server + ';PORT=' + port + ';DATABASE=' + database + ';UID=' + username + ';PWD=' + password + ';Authentication=ActiveDirectoryPassword', timeout=120) as conn:
    with conn.cursor() as cursor:
        try:
            cursor.execute(data)
        except Exception:
            # Log the actual exception so the real failure shows up in the Azure logs.
            logging.exception("Can't execute SQL Query!")
I use driver = '{ODBC Driver 17 for SQL Server}' as the driver.
I assume this driver is missing in Azure? How can this issue be fixed? What is the right approach to connect from an Azure Function to an Azure SQL DB via Python?
It seems the ODBC driver is included; it was just poorly documented:
https://github.com/MicrosoftDocs/azure-docs/issues/54423
There is an example project here:
https://github.com/kevin808/azure-function-pyodbc-MI
The full tutorial including creating the system assigned identity can be found here:
https://techcommunity.microsoft.com/t5/apps-on-azure-blog/how-to-connect-azure-sql-database-from-python-function-app-using/ba-p/3035595
There is currently a SQL extension under development, but it only supports C# at the moment. Python support has been requested as an enhancement, so you could add your 👍 to the issue below if you would like to use bindings:
https://github.com/Azure/azure-functions-sql-extension/issues/172
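For reference, a minimal sketch of the managed-identity approach from that tutorial could look like the following (the server and database names are placeholders, and it assumes a recent ODBC Driver 17 plus a system-assigned identity that has been granted access to the database):

import logging
import pyodbc

conn_str = (
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=tcp:myserver.database.windows.net,1433;"   # placeholder server
    "Database=mydb;"                                   # placeholder database
    "Authentication=ActiveDirectoryMsi;"               # use the Function App's managed identity
)
try:
    with pyodbc.connect(conn_str, timeout=120) as conn:
        with conn.cursor() as cursor:
            cursor.execute("SELECT 1")
except pyodbc.Error:
    logging.exception("Could not connect or execute against Azure SQL")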
I am trying to connect to an Amazon Redshift table. I created the table using SQL, and now I am writing a Python script to append a data frame to the database. I am unable to connect to the database and feel that I have something wrong with my syntax or something else. My code is below.
from sqlalchemy import create_engine
conn = create_engine('jdbc:redshift://username:password#localhost:port/db_name')
Here is the error I am getting.
sqlalchemy.exc.ArgumentError: Could not parse rfc1738 URL from string
Thanks!
There are basically two options for connecting to Amazon Redshift using Python.
Option 1: Direct connection (psycopg2 / JDBC)
This is a traditional connection to the database. From Python, the popular choice tends to be psycopg2, since Amazon Redshift resembles a PostgreSQL database; Redshift-specific JDBC drivers are also available for download.
This connection would require the Redshift database to be accessible to the computer making the query, and the Security Group would need to permit access on port 5439. If you are trying to connect from a computer on the Internet, the database would need to be in a Public Subnet and set to Publicly Accessible = Yes.
See: Establish a Python Redshift Connection: A Comprehensive Guide - Learn | Hevo
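As a rough sketch of Option 1 (the endpoint, credentials and driver choice below are placeholders), the SQLAlchemy URL needs to be in rfc1738 form rather than a JDBC string, with '@' before the host and Redshift's default port 5439:

from sqlalchemy import create_engine, text

# postgresql+psycopg2 works because Redshift speaks the PostgreSQL wire protocol.
engine = create_engine(
    'postgresql+psycopg2://username:password@my-cluster.abc123.us-east-1.redshift.amazonaws.com:5439/db_name'
)
with engine.connect() as conn:
    print(conn.execute(text('SELECT 1')).scalar())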
Option 2: Redshift Data API
You can directly query an Amazon Redshift database by using the Boto3 library for Python, including an execute_statement() call to query data and a get_statement_result() call to retrieve the results. This also works with IAM authentication rather than having to create additional 'database users'.
There is no need to configure Security Groups for this method, since the request is made to AWS (on the Internet). It also works with Redshift databases that are in private subnets.
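A minimal sketch of Option 2 (the cluster identifier, database and user below are placeholders; the Data API is asynchronous, so the statement is polled before fetching results):

import time
import boto3

client = boto3.client('redshift-data')

# Submit the query against a provisioned cluster (use WorkgroupName=... for Serverless).
response = client.execute_statement(
    ClusterIdentifier='my-cluster',
    Database='db_name',
    DbUser='username',          # or SecretArn=... to authenticate via Secrets Manager
    Sql='SELECT * FROM my_table LIMIT 10',
)

statement_id = response['Id']
while client.describe_statement(Id=statement_id)['Status'] not in ('FINISHED', 'FAILED', 'ABORTED'):
    time.sleep(1)

for record in client.get_statement_result(Id=statement_id)['Records']:
    print(record)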
I'm trying to connect to an old-school jTDS MS SQL Server for a variety of different analysis tasks, firstly just using Python with SQLAlchemy, as well as Tableau and Presto.
Focusing on SQLAlchemy first, at the moment I'm getting an error of:
Data source name not found and no default driver specified
Based on this thread, Connecting to SQL Server 2012 using sqlalchemy and pyodbc, I tried the following:
import urllib
import sqlalchemy as sa

params = urllib.parse.quote_plus("DRIVER={FreeTDS};"
                                 "SERVER=x-y.x.com;"
                                 "DATABASE=;"
                                 "UID=user;"
                                 "PWD=password")
engine = sa.create_engine("mssql+pyodbc:///?odbc_connect={}".format(params))
Connecting works fine through DBeaver, using a jTDS SQL Server (MSSQL) driver (which is labelled as legacy).
Curious as to how to resolve this issue, I'll keep researching away, but would appreciate any help.
I imagine there is an old driver on the internet I need to integrate into SQLAlchemy to begin with, and then perhaps migrate this data to something newer.
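(For reference, the DSN-less FreeTDS connection I'm aiming for looks roughly like the sketch below; the port and TDS_Version values are guesses that would need to match the server, and from what I've read, "Data source name not found" usually means the name in DRIVER= doesn't match any driver registered with the ODBC manager.)

import urllib
import sqlalchemy as sa

params = urllib.parse.quote_plus(
    "DRIVER={FreeTDS};"          # must match a driver name registered in odbcinst.ini
    "SERVER=x-y.x.com;"
    "PORT=1433;"                 # guess: default SQL Server port
    "DATABASE=mydb;"             # placeholder
    "UID=user;"
    "PWD=password;"
    "TDS_Version=7.3;"           # guess: must match what the server accepts
)
engine = sa.create_engine("mssql+pyodbc:///?odbc_connect=" + params)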
Appreciate your time
I am able to connect to our database using the following connection string (OLEDB):
"Provider=IBMDA400;Data Source=10.33.xx.x;User Id=user;Password=pass;Default Collection=mm370lib;";
Then I tried the following with Python's ibm_db:
import ibm_db, ibm_db_dbi
ibm_db_conn = ibm_db.connect("DRIVER={IBM DB2 CLI DRIVER};DATABASE=mm370lib;HOSTNAME=10.33.xx.x;PORT=446;PROTOCOL=TCPIP;UID=user;PWD=pass;", '', '')
But this error occurred:
Exception: [IBM][CLI Driver] SQL30061N The database alias or database name "MM370LIB " was not found at the remote node. SQLSTATE=08004 SQLCODE=-30061
What did I miss? Are the database name and the default collection two different things?
Yes, the DB name is usually the system name; though it doesn't have to be.
Originally, the AS/400 supported only a single DB.
With the introduction of independent storage pools (iASP), today's IBM i machines can have multiple DBs.
From a 5250 session, try:
WRKRDBDIRE
Look for the *LOCAL entry; it may be the only one.
You can also see the DB names using IBM i Navigator for Windows or the web-based IBM Navigator; they are shown under the "Databases" section. For example, on one system there are three DBs: Rchasma1, Iasp320, Ima1db1.
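So, as a hedged sketch (S10XXXXX below is a stand-in for whatever RDB name WRKRDBDIRE shows as *LOCAL), the connection string would use that RDB name as DATABASE, while pointing unqualified SQL at the MM370LIB library via the CurrentSchema keyword:

import ibm_db

# DATABASE is the RDB name from WRKRDBDIRE (*LOCAL entry), not the library;
# CurrentSchema makes MM370LIB the default schema for unqualified names.
conn = ibm_db.connect(
    "DRIVER={IBM DB2 CLI DRIVER};DATABASE=S10XXXXX;HOSTNAME=10.33.xx.x;PORT=446;"
    "PROTOCOL=TCPIP;UID=user;PWD=pass;CurrentSchema=MM370LIB;",
    '', ''
)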
I currently use PYODBC to connect to MS SQL Server and MYSQL, but now need to access an Oracle database as well.
I have Oracle SQL Developer installed on my work computer (but there doesn't seem to be a separate Net Manager client, per other SO posts), which I can use to access the DB.
Ideally, I would run what I need to in Python, but I am having difficulties. As it stands, I have created a linked server object to the Oracle DB in an MS SQL Server DB as a workaround, but this isn't ideal.
What do I need to do to get PYODBC (or substitute) to connect to Oracle? Thanks very kindly.
I ran into the same issue where I could connect to a database via Oracle SQL Developer but not via pyodbc. Someone else did most of the database setup, so I wasn't sure of the proper connection parameters. I'll run you through how I was able to connect on a Windows computer.
In the Start Menu I typed "odbc" and selected "Microsoft ODBC Administrator". Under the "System DSN" tab I found my DSN name (we'll call it myDSN) and the corresponding driver (mine was "Oracle in OraClient11g_home2"). I also have to specify a username and password for my database, so my connection line now looks like this:
cnxn = pyodbc.connect(driver='{Oracle in OraClient11g_home2}', dsn='myDSN', uid='HODOR', pwd='hodor')
Maybe at this point it will work for you, but I still wasn't able to connect. This computer is a mess of 32- and 64-bit drivers, so I figured I was pointing to the wrong one. So once again into the Start Menu, where under All Programs I found a folder called "Oracle in OraClient11g_home2" and, right under it, one called "Oracle in OraClient11g_home32Bit". I changed my connection line in Python to the following:
cnxn = pyodbc.connect(driver='{Oracle in OraClient11g_home32Bit}', dsn='myDSN', uid='HODOR', pwd='hodor')
And it connected.
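If it helps, a quick smoke test once the connection succeeds (a sketch; any trivial query works) looks like:

cursor = cnxn.cursor()
cursor.execute("SELECT sysdate FROM dual")   # trivial Oracle query as a sanity check
print(cursor.fetchone())
cursor.close()
cnxn.close()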