I am using a SQL Server database. I've noticed that when executing the code below, I get a connection to the database left over in a 'sleeping' state with an 'AWAITING COMMAND' status.
engine = create_engine(url, connect_args={'autocommit': True})
res = engine.execute(f"CREATE DATABASE my_database")
res.close()
engine.dispose()
With a breakpoint after the engine.dispose() call, I can see an entry for that connection on the server in the output of EXEC sp_who2. The entry only disappears after I kill the process.
Probably Connection Pooling
Connection Pooling
A connection pool is a standard technique used to maintain long running connections in memory for efficient re-use, as well as to provide management for the total number of connections an application might use simultaneously.
Particularly for server-side web applications, a connection pool is the standard way to maintain a “pool” of active database connections in memory which are reused across requests.
SQLAlchemy includes several connection pool implementations which integrate with the Engine. They can also be used directly for applications that want to add pooling to an otherwise plain DBAPI approach.
I'm not sure if this is what gets in the way of my teardown method, which drops the database.
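If pooling is the culprit, one way to rule it out is to disable pooling entirely so that each connection is closed as soon as it is returned. A minimal sketch, assuming the same url and 1.x-style engine.execute() call as in the question:
from sqlalchemy import create_engine
from sqlalchemy.pool import NullPool

# NullPool closes each DBAPI connection as soon as it is checked back in,
# so no 'sleeping' session should linger after dispose().
engine = create_engine(url, connect_args={'autocommit': True}, poolclass=NullPool)
res = engine.execute("CREATE DATABASE my_database")
res.close()
engine.dispose()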
To drop a database that's possibly in use, try:
USE master;
ALTER DATABASE mydb SET RESTRICTED_USER WITH ROLLBACK IMMEDIATE;
DROP DATABASE mydb;
You basically want to kill all the connections. You could use something like this:
For MS SQL Server 2012 and above
USE [master];
DECLARE @kill varchar(8000) = '';
SELECT @kill = @kill + 'kill ' + CONVERT(varchar(5), session_id) + ';'
FROM sys.dm_exec_sessions
WHERE database_id = db_id('MyDB')
EXEC(@kill);
For MS SQL Server 2000, 2005, 2008
USE master;
DECLARE @kill varchar(8000); SET @kill = '';
SELECT @kill = @kill + 'kill ' + CONVERT(varchar(5), spid) + ';'
FROM master..sysprocesses
WHERE dbid = db_id('MyDB')
EXEC(@kill);
Or something more script-like:
DECLARE @pid SMALLINT, @sql NVARCHAR(100)
DECLARE @dbname SYSNAME; SET @dbname = 'MyDB'  -- target database name
DECLARE curs CURSOR LOCAL FORWARD_ONLY FOR
SELECT DISTINCT spid FROM master..sysprocesses WHERE dbid = DB_ID(@dbname)
OPEN curs
FETCH NEXT FROM curs INTO @pid
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = 'KILL ' + CONVERT(VARCHAR, @pid)
    EXEC(@sql)
    FETCH NEXT FROM curs INTO @pid
END
CLOSE curs
DEALLOCATE curs
More can be found here:
Script to kill all connections to a database (More than RESTRICTED_USER ROLLBACK)
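Tying this back to the Python teardown in the question, the RESTRICTED_USER / DROP sequence could be issued from SQLAlchemy on an autocommit connection. A rough sketch, assuming the mssql+pyodbc dialect (which accepts the AUTOCOMMIT isolation level) and the same url as above:
from sqlalchemy import create_engine, text

# DDL such as ALTER DATABASE / DROP DATABASE must not run inside a transaction,
# so the engine is put into autocommit mode for the teardown.
engine = create_engine(url, isolation_level="AUTOCOMMIT")
with engine.connect() as conn:
    conn.execute(text("ALTER DATABASE my_database SET RESTRICTED_USER WITH ROLLBACK IMMEDIATE"))
    conn.execute(text("DROP DATABASE my_database"))
engine.dispose()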
Related
I have an app built using FastAPI & SQLAlchemy for handling all the DB-related stuff.
When the APIs are triggered via the frontend, I see that the connections are opened & they remain in IDLE state for a while. Is it possible to reduce the IDLE time via sqlalchemy?
I do the following to connect to the Postgres DB:
import sqlalchemy as db
eng = db.create_engine(<SQLALCHEMY_DATABASE_URI>)
conn = eng.connect()
metadata = db.MetaData()
table = db.Table(
    <table_name>,
    metadata,
    autoload=True,
    autoload_with=eng)
user_id = 1
try:
    if ids_by_user is None:
        query = db.select([table.columns.created_at]).where(
            table.columns.user_id == user_id,
        ).order_by(
            table.columns.created_at.desc()
        )
        result = conn.execute(query).fetchmany(1)
        time = result[0][0]
        time_filtering_query = db.select([table]).where(
            table.columns.created_at == time
        )
        time_result = conn.execute(time_filtering_query).fetchall()
        conn.close()
        return time_result
    else:
        output_by_id = []
        for i in ids_by_user:
            query = db.select([table]).where(
                db.and_(
                    table.columns.id == i,
                    table.columns.user_id == user_id
                )
            )
            result = conn.execute(query).fetchall()
            output_by_id.append(result)
        output_by_id = [output_by_id[j][0]
                        for j in range(len(output_by_id))
                        if output_by_id[j]]
        conn.close()
        return output_by_id
finally:
    eng.dispose()
Even after logging out of the app, the connections are still active & in idle state for a while and don't close immediately.
Edit 1
I tried using NullPool & the connections are still idle & in ROLLBACK, which is the same as when I didn't use NullPool.
You can reduce connection idle time by setting a maximum lifetime per connection by using pool_recycle. Note that connections already checked out will not be terminated until they are no longer in use.
If you are interested in reducing both the idle time and keeping the overall number of unused connections low, you can set a lower pool_size and then set max_overflow to allow for more connections to be allocated when the application is under heavier load.
from sqlalchemy import create_engine
e = create_engine(<SQLALCHEMY_DATABASE_URI>,
                  pool_recycle=3600,  # connections are recycled after 1 hour
                  pool_size=5,        # pool size under normal conditions
                  max_overflow=5      # additional connections when the pool size is exceeded
                  )
Google Cloud has a helpful guide on optimizing Postgres connection pooling that you might find useful.
I'm getting quite a few timeouts as my blob storage trigger runs. It seems to time out whenever I'm inserting values into an Azure SQL DB. I have the functionTimeout parameter in host.json set to "functionTimeout": "00:40:00", yet I'm seeing timeouts happen within a couple of minutes. Why would this be the case? My function app is on the ElasticPremium pricing tier.
System.TimeoutException message:
Exception while executing function: Functions.BlobTrigger2 The operation has timed out.
My connection to the db (I close it at the end of the script):
# urllib.parse.quote_plus for python 3
params = urllib.parse.quote_plus(fr'Driver={DRIVER};Server=tcp:{SERVER_NAME},1433;Database=newTestdb;Uid={USER_NAME};Pwd={PASSWORD};Encrypt=yes;TrustServerCertificate=no;Connection Timeout=0;')
conn_str = 'mssql+pyodbc:///?odbc_connect={}'.format(params)
engine_azure = create_engine(conn_str,echo=True)
conn = engine_azure.connect()
This is the line of code that is run before the timeout happens (Inserting to db):
processed_df.to_sql(blob_name_file.lower(), conn, if_exists = 'append', index=False, chunksize=500)
I'm using Visual Studio 2017 with a Python Console environment. I have a MySQL database set up which I can connect to successfully. I can also Insert data into the DB. Now I'm trying to display/fetch data from it.
I connect fine, and it seems I'm fetching data from my database, but nothing is actually printing to the console. I want to be able to fetch and display data, but nothing is displaying at all.
How do I actually display the data I select?
#importing module Like Namespace in .Net
import pypyodbc
#creating connection Object which will contain SQL Server Connection
connection = pypyodbc.connect('Driver={SQL Server};Server=DESKTOP-NJR6F8V\SQLEXPRESS;Data Source=DESKTOP-NJR6F8V\SQLEXPRESS;Integrated Security=True;Connect Timeout=30;Encrypt=False;TrustServerCertificate=True;ApplicationIntent=ReadWrite;MultiSubnetFailover=False')
cursor = connection.cursor()
SQLCommand = ("SELECT ID FROM MyAI_DB.dbo.WordDefinitions WHERE ID > 117000")
#Processing Query
cursor.execute(SQLCommand)
#Committing any pending transaction to the database.
connection.commit()
#closing connection
#connection.close()
I figured it out. I failed to include the right print statement, which was:
print(cursor.fetchone())
I also had the connection.commit statement in the wrong place (it ran before the print statement was ever executed). The final code that worked was this:
#importing module Like Namespace in .Net
import pypyodbc
#creating connection Object which will contain SQL Server Connection
connection = pypyodbc.connect('Driver={SQL Server};Server=DESKTOP-NJR6F8V\SQLEXPRESS;Data Source=DESKTOP-NJR6F8V\SQLEXPRESS;Integrated Security=True;Connect Timeout=30;Encrypt=False;TrustServerCertificate=True;ApplicationIntent=ReadWrite;MultiSubnetFailover=False')
cursor = connection.cursor()
SQLCommand = ("SELECT * FROM MyAI_DB.dbo.WordDefinitions")
#Processing Query
cursor.execute(SQLCommand)
print(cursor.fetchone())
#Committing any pending transaction to the database.
connection.commit()
#closing connection
#connection.close()
I'm getting this error when trying to update a db2 database that is a linked server on our SQL Server db.
ERROR:root:('42000', '[42000] [Microsoft][ODBC SQL Server Driver][SQL Server]The requested operation could not be performed because OLE DB provider "IBMDA400" for linked server "iSeries" does not support the required transaction interface. (7390) (SQLExecDirectW)')
I am connecting to SQL Server via pyodbc and can run SQL scripts with no issues. Here is the SQL I get the error with:
sql3 = " exec ('UPDATE SVCEN2DEV.SRVMAST SET SVRMVD = ? WHERE svtype != ''*DCS-'' AND svcid = ? and svacct = ? ') AT [iSeries]"
db.execute(sql3, (row[2],srvid,row[0]))
db.commit()
And just in case here is my connection string using pyodbc:
conn = pyodbc.connect("DRIVER={SQL Server};SERVER="+ Config_Main.dbServer +";DATABASE="+ Config_Main.encludeName +";UID="+ Config_Main.encludeUser +";PWD=" + Config_Main.encludePass)
db = conn.cursor()
Also note that this query runs just fine in SSMS. I have also tried the openquery method but had no luck. Any ideas?
Python's DB API 2.0 specifies that, by default, connections should open with autocommit "off". This results in all database operations being performed in a transaction that must be explicitly committed (or rolled back) in the Python code.
When a pyodbc connection with autocommit = False (the default) sends an UPDATE to the SQL Server, that UPDATE is enclosed in a Local Transaction managed by SQL Server. When the SQL Server determines that the target table is on a Linked Server it tries to promote the transaction to a Distributed Transaction managed by MSDTC. If the connection technology used to manage the Linked Server does not support Distributed Transactions then the operation will fail.
This issue can often be avoided by ensuring that the pyodbc connection has autocommit enabled, either by
cnxn = pyodbc.connect(conn_str, autocommit=True)
or
cnxn = pyodbc.connect(conn_str)
cnxn.autocommit = True
That will send each SQL statement individually, without being wrapped in an implicit transaction.
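Applied to the code in the question, that might look like the sketch below (Config_Main, row, and srvid are taken from the question's script):
import pyodbc

# autocommit=True stops SQL Server from trying to promote the linked-server
# UPDATE to a distributed (MSDTC) transaction.
conn = pyodbc.connect("DRIVER={SQL Server};SERVER=" + Config_Main.dbServer +
                      ";DATABASE=" + Config_Main.encludeName +
                      ";UID=" + Config_Main.encludeUser +
                      ";PWD=" + Config_Main.encludePass,
                      autocommit=True)
db = conn.cursor()
sql3 = (" exec ('UPDATE SVCEN2DEV.SRVMAST SET SVRMVD = ? "
        "WHERE svtype != ''*DCS-'' AND svcid = ? and svacct = ? ') AT [iSeries]")
db.execute(sql3, (row[2], srvid, row[0]))
# no db.commit() needed -- each statement is committed as it executes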
I use psycopg2 for accessing my Postgres database in Python. My function should create a new database; the code looks like this:
def createDB(host, username, dbname):
    adminuser = settings.DB_ADMIN_USER
    adminpass = settings.DB_ADMIN_PASS
    try:
        conn = psycopg2.connect(user=adminuser, password=adminpass, host=host)
        cur = conn.cursor()
        cur.execute("CREATE DATABASE %s OWNER %s" % (nospecial(dbname), nospecial(username)))
        conn.commit()
    except Exception, e:
        raise e
    finally:
        cur.close()
        conn.close()

def nospecial(s):
    pattern = re.compile('[^a-zA-Z0-9_]+')
    return pattern.sub('', s)
When I call createDB my postgres server throws an error:
CREATE DATABASE cannot run inside a transaction block
with error code 25001, which stands for "ACTIVE SQL TRANSACTION".
I'm pretty sure that there is no other connection running at the same time and every connection I used before calling createDB is shut down.
It looks like your cursor() actually opens a transaction:
http://initd.org/psycopg/docs/cursor.html#cursor
Cursors created from the same connection are not isolated, i.e., any changes done to the database by a cursor are immediately visible by the other cursors. Cursors created from different connections can or can not be isolated, depending on the connections’ isolation level. See also rollback() and commit() methods.
Skip the cursor and just execute your query. Drop commit() as well; you can't commit when you don't have a transaction open.
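In practice with psycopg2 the statement still goes through a cursor, but the open transaction can be avoided by enabling autocommit on the connection before running CREATE DATABASE. A minimal sketch, reusing the names from the question:
import psycopg2

conn = psycopg2.connect(user=adminuser, password=adminpass, host=host)
conn.autocommit = True  # subsequent statements run outside of a transaction block
cur = conn.cursor()
cur.execute("CREATE DATABASE %s OWNER %s" % (nospecial(dbname), nospecial(username)))
cur.close()
conn.close()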