I have a database running on my local machine that I can access through Microsoft SQL Server Management Studio. I connect to the server "JIMS-LAPTOP\SQLEXPRESS" and can then run queries through Management Studio. However, I need to be able to connect to this database and work with it through Python.
When I try to connect using sqlite3 like
conn = sqlite3.connect("JIMS-LAPTOP\SQLEXPRESS")
I get an "unable to open database file" error.
I tried accessing the temporary file directly like this
conn = sqlite3.connect("C:\Users\Jim Notaro\AppData\Local\Temp\~vs13A7.sql")
c = conn.cursor()
c.execute("SELECT name FROM sqlite_master WHERE type = \"table\"")
print c.fetchall()
This lets me open a database, but it is completely empty (no tables are listed).
I also tried connecting like this
conn = sqlite3.connect("SQL SERVER (SQLEXPRESS)")
which is the name shown in SQL Server Configuration Manager, but that also returns a blank database.
I'm not sure how I am supposed to be connecting to the database using Python.
You can't use sqlite3 to connect to SQL Server; it only opens SQLite database files. You need to use a driver that can talk to MS SQL Server, such as pyodbc.
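For example, a minimal sketch (the ODBC driver name, database name, and Windows-authentication setting here are assumptions, so adjust them to your setup):

import pyodbc

# Connect to the local SQL Server Express instance named in the question.
# The driver name and database name are assumptions -- adjust them to match
# what you see in SQL Server Management Studio. If you use SQL Server
# authentication instead of Windows authentication, replace
# Trusted_Connection=yes with UID=...;PWD=....
conn = pyodbc.connect(
    r"DRIVER={ODBC Driver 17 for SQL Server};"
    r"SERVER=JIMS-LAPTOP\SQLEXPRESS;"
    r"DATABASE=master;"          # replace with your database name
    r"Trusted_Connection=yes;"
)

cursor = conn.cursor()
# Roughly the SQL Server equivalent of the sqlite_master query attempted above.
cursor.execute("SELECT name FROM sys.tables")
print(cursor.fetchall())

cursor.close()
conn.close()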
I wrote a Python program that scraped a website and added the results to a Microsoft Access database. I now want to run the script again, this time adding the data to an Azure SQL database, but I keep getting this error:
A network-related or instance-specific error has occurred while establishing a connection to SQL Server. Server is not found or not accessible. Check if instance name is correct and if SQL Server is configured to allow remote connections.
I have tried to edit the settings of the database to no avail. Could someone tell me what settings to apply to the database? I also tried to see if there was a way to run the Python script inside Azure to avoid the problem. Is this possible?
cnxn = pyodbc.connect(r'Driver={ODBC Driver 18 for SQL Server};Server=tcp:servername.database.windows.net,1433;Database=sizedb3;Uid={your_user_name};Pwd={your_password_here};Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30;Authentication=ActiveDirectoryPassword')
I tried this connection string, which is the one shown in the ODBC section of the Azure SQL database in the Azure portal, and I have downloaded the driver from Microsoft's website.
I tried running the Python code below to connect to an Azure SQL DB with Azure AD authentication.
Code:
import pyodbc

conn = pyodbc.connect('Driver={ODBC Driver 17 for SQL Server};'
                      'Server=tcp:siliconserver.database.windows.net,1433;'
                      'Database=silicondb;'
                      'Uid=xxxser#sid24desaioutlook.onmicrosoft.com;'
                      'Pwd=xxxxxxxxxx#123;'
                      'authentication=ActiveDirectoryPassword')
cursor = conn.cursor()
cursor.execute('SELECT * FROM StudentReviews')
for i in cursor:
    print(i)
cursor.close()
conn.close()
Output:
The query ran and printed the rows from StudentReviews.
Make sure you have allowed your client IP in the Networking tab of your Azure SQL server.
When I removed one character from my Azure SQL connection string, I got the same error code as yours. You can validate the server name spelling and syntax of your connection string against the one shown in the Azure portal under your Azure SQL database > Connection strings.
Also, make sure you have added your client IP and allowed it in your Azure SQL server's firewall settings.
Normally, when trying to connect to a SQL Server DB in Python, I use the pyodbc package like this:
import pyodbc

conn = pyodbc.connect("Driver={SQL Server};"
                      "Server=<server-ip>;"
                      "Database=<DB-name>;"
                      "UID=<user-name>;"
                      "PWD=<password>;"
                      "Trusted_Connection=yes;"
                      )
However, I don't know how to connect to a linked server in Python. Say, for example, my linked server is called linked-server and it has a DB called linked-DB. I have tried the same connection string as above, changing only the database name like this: "Database=<linked-server>.<linked-DB>;", since that's how I query the linked server DB in SSMS, but this doesn't work in Python.
Thank you very much for your help.
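As a rough sketch of the pattern the question describes (connecting to the local server and referencing the linked server inside the query, the way it is queried in SSMS), the usual shape looks something like this; the schema and table names are placeholders, not taken from the question:

import pyodbc

# Sketch only: connect to a database on the local SQL Server (as in the
# connection string above) and qualify the table with the linked-server name
# inside the query itself. The four-part name below is a placeholder.
conn = pyodbc.connect("Driver={SQL Server};"
                      "Server=<server-ip>;"
                      "Database=<DB-name>;"   # a database on the local server
                      "UID=<user-name>;"
                      "PWD=<password>;"
                      "Trusted_Connection=yes;")
cursor = conn.cursor()
cursor.execute("SELECT TOP 10 * FROM [linked-server].[linked-DB].[dbo].[some_table]")
for row in cursor.fetchall():
    print(row)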
I'm getting this error when trying to update a DB2 database that is a linked server on our SQL Server DB.
ERROR:root:('42000', '[42000] [Microsoft][ODBC SQL Server Driver][SQL Server]The requested operation could not be performed because OLE DB provider "IBMDA400" for linked server "iSeries" does not support the required transaction interface. (7390) (SQLExecDirectW)')
I am connecting to SQL Server via pyodbc and can run SQL scripts with no issues. Here is the SQL that gives me the error:
sql3 = " exec ('UPDATE SVCEN2DEV.SRVMAST SET SVRMVD = ? WHERE svtype != ''*DCS-'' AND svcid = ? and svacct = ? ') AT [iSeries]"
db.execute(sql3, (row[2],srvid,row[0]))
db.commit()
And just in case, here is my connection string using pyodbc:
conn = pyodbc.connect("DRIVER={SQL Server};SERVER="+ Config_Main.dbServer +";DATABASE="+ Config_Main.encludeName +";UID="+ Config_Main.encludeUser +";PWD=" + Config_Main.encludePass)
db = conn.cursor()
Also note that this query runs just fine in SSMS. I have also tried the OPENQUERY method but had no luck. Any ideas?
Python's DB API 2.0 specifies that, by default, connections should open with autocommit "off". This results in all database operations being performed in a transaction that must be explicitly committed (or rolled back) in the Python code.
When a pyodbc connection with autocommit = False (the default) sends an UPDATE to the SQL Server, that UPDATE is enclosed in a Local Transaction managed by SQL Server. When the SQL Server determines that the target table is on a Linked Server it tries to promote the transaction to a Distributed Transaction managed by MSDTC. If the connection technology used to manage the Linked Server does not support Distributed Transactions then the operation will fail.
This issue can often be avoided by ensuring that the pyodbc connection has autocommit enabled, either by
cnxn = pyodbc.connect(conn_str, autocommit=True)
or
cnxn = pyodbc.connect(conn_str)
cnxn.autocommit = True
That will send each SQL statement individually, without being wrapped in an implicit transaction.
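Applied to the code in the question, that would look roughly like the sketch below; it reuses the question's Config_Main settings and shows placeholder values for the loop variables, so treat it as an illustration rather than a drop-in fix:

import pyodbc
import Config_Main  # the question's own configuration module

# Placeholder values standing in for the question's loop variables.
row = ['ACCT001', None, '2015-06-01']
srvid = 'SRV01'

# Same connection string as in the question, but with autocommit enabled so
# the UPDATE ... AT [iSeries] is not promoted to a distributed transaction.
conn = pyodbc.connect("DRIVER={SQL Server};SERVER=" + Config_Main.dbServer +
                      ";DATABASE=" + Config_Main.encludeName +
                      ";UID=" + Config_Main.encludeUser +
                      ";PWD=" + Config_Main.encludePass,
                      autocommit=True)
db = conn.cursor()

sql3 = " exec ('UPDATE SVCEN2DEV.SRVMAST SET SVRMVD = ? WHERE svtype != ''*DCS-'' AND svcid = ? and svacct = ? ') AT [iSeries]"
db.execute(sql3, (row[2], srvid, row[0]))
# No db.commit() needed: with autocommit=True each statement commits as it runs.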
I'm trying to query a Kerberized Hive cluster with SQL Alchemy. I'm able to submit queries using pyhs2 which confirms that it's possible to connect and query Hive when authenticated by Kerberos:
import pyhs2

with pyhs2.connect(host='hadoop01.woolford.io',
                   port=10500,
                   authMechanism='KERBEROS') as conn:
    with conn.cursor() as cur:
        cur.execute('SELECT * FROM default.mytable')
        records = cur.fetchall()
        # etc ...
I notice that Airbnb's Airflow uses SQL Alchemy and can connect to Kerberized Hive and so I imagine it's possible to do something like this:
engine = create_engine('hive://hadoop01.woolford.io:10500/default', connect_args={'?': '?'})
connection = engine.connect()
connection.execute("SELECT * FROM default.mytable")
# etc ...
I'm not sure what parameters should be set in the connect_args dictionary. Can you see what needs to be added to make this work (e.g. Kerberos service name, realm, etc.)?
update:
Under the hood SQL Alchemy is using PyHive to connect to Hive. The current version of PyHive, v0.2.1, doesn't support Kerberos.
I notice that someone from Yahoo created a pull request that provides support for Kerberos. This PR has not yet been merged/released, so I just copied the code from the PR into /usr/lib/python2.7/site-packages/pyhive/hive.py on the Superset server and created a connection like this:
engine = create_engine('hive://hadoop01:10500', connect_args={'auth': 'KERBEROS', 'kerberos_service_name': 'hive'})
Hopefully, the maintainer of PyHive will merge/release the support for Kerberos.
Install these libraries:
sasl
thrift
thrift-sasl
PyHive
Get your Kerberos ticket, and then:
engine = create_engine('hive://HOST:10500/DB_NAME',
                       connect_args={'auth': 'KERBEROS', 'kerberos_service_name': 'hive'})
PS: /DB_NAME is optional.
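For completeness, a minimal usage sketch following the pattern from the question (the host, port, and table come from the question; on SQLAlchemy 2.x the raw SQL string would need to be wrapped in sqlalchemy.text()):

from sqlalchemy import create_engine

# Kerberized Hive connection, using the host/port/table from the question.
engine = create_engine('hive://hadoop01.woolford.io:10500/default',
                       connect_args={'auth': 'KERBEROS',
                                     'kerberos_service_name': 'hive'})

with engine.connect() as connection:
    # On SQLAlchemy 2.x, wrap the string in sqlalchemy.text() before executing.
    result = connection.execute("SELECT * FROM default.mytable LIMIT 10")
    for record in result:
        print(record)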
When I run this query to BULK INSERT a file on a shared drive into SQL Server 2008 with a username and password (not Windows authentication), I get the errors below. The DBAs, system admins, and network guys are all denying that these errors are related to their teams, and I am lost... Can anyone please help me identify where the issue is? When I run BULK INSERT with a database username and password, what authentication does SQL Server use to open the file?
Running this in SQL Server Management Studio:
BULK INSERT DatabaseName.dbo.TableName
FROM '\\shared_server\parent\child\file_name.txt'
WITH(FIRE_TRIGGERS, DATAFILETYPE='char', FIELDTERMINATOR='\t',ROWTERMINATOR='\n', FIRSTROW=2);
and I get
Cannot bulk load because the file "\\shared_server\parent\child\file_name.txt" could not be opened. Operating system error code 5(Access is denied.).
Running this in Python:
import pyodbc
database = 'DatabaseName'
username = 'username'
password = 'password'
server = 'server_name'
failover = 'failover_server_name'
cnxn_string = 'DRIVER={SQL Server Native Client 10.0};SERVER=%s;FAILOVER_PARTNER=%s;DATABASE=%s;UID=%s;PWD=%s;CHARSET=UTF8' % (server, failover, database, username, password)
cnxn = pyodbc.connect(cnxn_string)
cursor = cnxn.cursor()
query = r"""
BULK INSERT Estimates.dbo.FundamentalsIS
FROM '\\shared_server\parent\child\file_name.txt'
WITH(FIRE_TRIGGERS, DATAFILETYPE='char', FIELDTERMINATOR='\t',ROWTERMINATOR='\n', FIRSTROW=2);
"""
cursor.execute(query)
cursor.commit()
and I get
ProgrammingError: ('42000', '[42000] [Microsoft][SQL Server Native Client 10.0][SQL Server]Cannot bulk load because the file "\\shared_server\parent\child\file_name.txt" could not be opened. Operating system error code 1326(Logon failure: unknown user name or bad password.). (4861) (SQLExecDirectW)')
Could the SQL Server 2008 machine possibly be in a different security group (or have different settings) than the shared drive where the file is located?
Because the BULK INSERT operation runs on the SQL Server side, not on the client, the server may not have access to the file. The "access denied" error leads me to believe the DB server cannot reach the shared file drive, or does not have permission to access it. Likewise, even when the BULK INSERT statement is issued from Python, the DB server still needs access to wherever the file is located.
I had a similar issue in the past because the DB server could not get to a shared file located elsewhere. My workaround was to use the local computer to read the file and run the insert queries from Python; the local environment has access to both the file share and the database, so it can act as the central communication hub. You might have to do something similar to:
https://stackoverflow.com/a/6482610/3761363
https://stackoverflow.com/a/11219626/3761363
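A rough sketch of that workaround, assuming the tab-delimited file from the question and a target table with three matching columns (the column names below are hypothetical):

import csv
import pyodbc

# Workaround sketch: the local machine reads the file from the share and sends
# ordinary parameterised INSERTs, so the SQL Server never opens the file itself.
# cnxn_string is the same connection string built in the question above.
cnxn = pyodbc.connect(cnxn_string)
cursor = cnxn.cursor()

with open(r'\\shared_server\parent\child\file_name.txt', newline='', encoding='utf-8') as f:
    reader = csv.reader(f, delimiter='\t')
    next(reader)  # skip the header row (the BULK INSERT used FIRSTROW=2)
    rows = [tuple(r) for r in reader]

# Hypothetical column names; match them to the real table definition.
cursor.executemany(
    "INSERT INTO Estimates.dbo.FundamentalsIS (col1, col2, col3) VALUES (?, ?, ?)",
    rows)
cnxn.commit()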