SQL Server 2005 terminating connections after 30 sec - python

We moved our SQL Server 2005 database to a new physical server, and since then it has been terminating any connection that persists for more than 30 seconds.
We are experiencing this in Oracle SQL Developer and when connecting from Python using pyodbc.
Everything worked perfectly before, and now Python returns this error after 30 seconds:
('08S01', '[08S01] [FreeTDS][SQL Server]Read from the server failed (20004) (SQLExecDirectW)')

First of all, what you need to do is profile the SQL Server to see what activity is happening. Look for slow-running queries and CPU or memory bottlenecks.
You can also include the timeout in the connection string like this:
"Data Source=(local);Initial Catalog=AdventureWorks;Integrated Security=SSPI;Connection Timeout=30";
and extend that number if you want.
But remember, "Connection Timeout" doesn't limit how long a connection can stay open; it is only the time to wait while trying to establish a connection before giving up.
I think this problem is more about database performance, or maybe a network issue.
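If the client side is what's giving up, it is also worth ruling out pyodbc's own timeouts. Here's a minimal sketch with placeholder FreeTDS connection details: the timeout keyword on connect() only bounds login, while the connection's timeout attribute bounds query execution.
import pyodbc

# Sketch with a placeholder server name and credentials, assuming the FreeTDS driver.
conn = pyodbc.connect(
    "DRIVER={FreeTDS};SERVER=myserver;PORT=1433;DATABASE=mydb;UID=user;PWD=password;",
    timeout=30,  # login timeout in seconds; does NOT limit running queries
)
conn.timeout = 0  # query timeout in seconds; 0 (the default) means unlimited
cursor = conn.cursor()
cursor.execute("SELECT 1")
print(cursor.fetchone())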

Maybe check your remote query timeout? It should default to 600, but maybe it's set to 30? Info here
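You can check what the server is actually configured with by querying it directly; a minimal sketch using pyodbc with a placeholder connection string:
import pyodbc

conn = pyodbc.connect("DSN=mydsn;UID=user;PWD=password", autocommit=True)
cur = conn.cursor()
cur.execute("EXEC sp_configure 'remote query timeout'")
# columns: name, minimum, maximum, config_value, run_value
print(cur.fetchone())  # run_value should be 600 by default, not 30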

Related

How to overcome the 2hr connection timeout (OperationalError) using SQLAlchemy and Postgres?

I'm trying to execute some long-running SQL queries using SQLAlchemy against a Postgres database hosted on AWS RDS.
from sqlalchemy import create_engine

conn_str = 'postgresql://user:password@db-primary.cluster-cxf.us-west-2.rds.amazonaws.com:5432/dev'
engine = create_engine(conn_str)

sql = 'UPDATE "Clients" SET "Name" = NULL'
# this takes about 4 hrs to execute if run in pgAdmin
with engine.begin() as conn:
    conn.execute(sql)
After running for exactly 2 hours, the script errors out with
OperationalError: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
(Background on this error at: https://sqlalche.me/e/14/e3q8)
I have tested setting connection timeouts in SQLAlchemy (based on How to set connection timeout in SQLAlchemy). This did not make a difference.
I have looked up the connection settings in the Postgres settings (based on https://dba.stackexchange.com/questions/164419/is-it-possible-to-limit-timeout-on-postgres-server), but both statement_timeout and idle_in_transaction_session_timeout are set to 0, meaning there are no set limits.
I agree with @jjanes. This smells like a TCP connection timeout issue. It might be that somewhere in the network layer something, be it a NAT or a firewall, dropped your TCP connection, leaving the code to wait for the full TCP keepalive timeout until it sees the connection as closed. This usually happens when the network topology between the client and the database is complicated, for example when there is a company firewall or some sort of interconnection in between. pgAdmin may come with a pre-configured setting for TCP keepalive, which would explain why it was not impacted, but I'm not sure.
Other timeouts didn't kick in because, in my understanding, the TCP timeout sits at layer 4, which overshadows the other timeouts, which sit at layer 7, the application layer.
You could try adding the keepalive parameters into your connection string and see if it can resolve the issue. For example:
postgresql://user:password@db-primary.cluster-cxf.us-west-2.rds.amazonaws.com:5432/dev?keepalives_idle=1&keepalives_count=1&tcp_user_timeout=1000
Note the keepalive parameters at the end. For your reference, here's the explanation to those parameters:
https://www.postgresql.org/docs/current/runtime-config-connection.html
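If you'd rather keep the URL clean, the same libpq keepalive settings can be passed through SQLAlchemy's connect_args; a sketch with placeholder values (the specific numbers are assumptions to tune for your network):
from sqlalchemy import create_engine, text

engine = create_engine(
    'postgresql://user:password@db-primary.cluster-cxf.us-west-2.rds.amazonaws.com:5432/dev',
    connect_args={
        "keepalives": 1,            # enable TCP keepalives
        "keepalives_idle": 60,      # seconds of idle time before the first probe
        "keepalives_interval": 10,  # seconds between probes
        "keepalives_count": 5,      # failed probes before the link is considered dead
    },
)

with engine.begin() as conn:
    conn.execute(text('UPDATE "Clients" SET "Name" = NULL'))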

Why connection to Redshift from Python sometimes times out?

I'm connecting to Redshift through Python from a PC, through Python from Lambda, and through SQL Workbench.
SQL Workbench always connects without any issues.
Both the PC and Lambda often work correctly, but sometimes they can't connect - it looks like the connection is waiting for Redshift to accept the request, and nothing happens until, for example, the Lambda times out.
There's no log output from the run, no feedback information, and nothing I can find about it in the Redshift logs.
I can fix every single run manually by connecting in Workbench while the script is waiting for a connection - it apparently refreshes something, allowing Python to connect and work properly.
What is the reason for this and how can I fix it?
Here's my connection:
conn = psycopg2.connect(
    host=host,
    database=db,
    user=user,
    password=pwd,
    port=port
)
I went through several questions regarding connections like:
AWS Lambda times out connecting to RedShift
AWS Serverless lambda times out while connecting to redshift
python socket Windows 10 connection times out
but nothing there looks related to my issue.
Found an answer - SQL Workbench was blocking the connection. When Workbench is connected, all other sources happen to fail.
The easiest solution is to disconnect the Workbench connection before another client tries to connect. If the other client has already tried connecting, then disconnecting is not enough - it requires reconnecting, so that the other client refreshes (I have no idea why; I found it by trial and error).
Disconnecting may also not work if an erroring query was run in the past and was not rolled back. After an error, a rollback, and a disconnection, everything is fine.
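Separately, it can help to fail fast instead of hanging until the Lambda itself times out; here's a sketch adding libpq's connect_timeout parameter to the connection from the question (the 10-second value is an assumption):
import psycopg2

try:
    conn = psycopg2.connect(
        host=host,
        database=db,
        user=user,
        password=pwd,
        port=port,
        connect_timeout=10,  # seconds; give up instead of waiting indefinitely
    )
except psycopg2.OperationalError as exc:
    print(f"Could not connect: {exc}")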

Is there a limit on the number of connections opened from a client's side to a SQL Server database?

I'm working on a Python application with an SQL Server database using pyodbc, and I need to open multiple connections from the application's side to the database.
I learnt that the max number of connections allowed on an instance of the SQL Server database is 32,767. My understanding is this is the max that the DB instance "can handle", i.e. all simultaneous users combined.
Is there a limit on how many connections one client can open towards the same database instance? Is it also 32,767? If yes, where and how is this limit configured?
Taking an educated guess here: there is no connection count limit on the client side towards the same DB instance, there is a limit of 32,767 on the server side, and the client would most likely run out of other resources well before it gets close to that figure.
I was using one connection, one cursor, and threading to insert multiple records, but kept getting a "connection is busy" error. This was resolved by adding "MARS_Connection=yes" to the pyodbc database connection string, thanks to this MS documentation.
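For reference, a sketch of such a connection string, assuming the Microsoft ODBC Driver 17 and placeholder server, database, and credentials:
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver;DATABASE=mydb;"
    "UID=user;PWD=password;"
    "MARS_Connection=yes;"  # allow multiple active cursors on one connection
)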
Related:
How costly is opening and closing of a DB connection?
Can I use multiple cursors on one connection with pyodbc and MS SQL Server?

How to close mysql connection automatically after specified time in python?

I want to close the MySQL database connection automatically after 50 seconds if a query is taking more than 50 seconds. Is there any option in Python when making the connection, or any other solution to do that?
Reference site for Python database connections
Look at Connection on that site; it explains the timeout for queries - you can pass an integer, which is in seconds.
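A minimal sketch using PyMySQL (an assumption; credentials are placeholders): read_timeout makes the client give up on a query after 50 seconds, and MAX_EXECUTION_TIME (MySQL 5.7+, in milliseconds, SELECT statements only) aborts it on the server side as well.
import pymysql

conn = pymysql.connect(
    host="localhost",
    user="user",
    password="password",
    database="mydb",
    connect_timeout=10,  # seconds to wait while establishing the connection
    read_timeout=50,     # seconds to wait for a query result before giving up
)
with conn.cursor() as cur:
    cur.execute("SET SESSION MAX_EXECUTION_TIME=50000")  # server-side limit, in ms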

python mysql database connection error

I am trying to access a remote database from one Linux server on another, connected via LAN, but it is not working. After some time it generates this error:
`_mysql_exceptions.OperationalError: (2003, "Can't connect to MySQL server on '192.168.0.101' (99)")`
This error is random; it can be raised at any time.
I create a new db object in every method, and I close the connection as well, so why is this error raised?
Can anyone please help me sort out this problem?
This issue is due to too many pending requests on the remote database.
In this situation MySQL closes the connection to the running script.
To overcome this, put
time.sleep(sec)  # sec is the number of seconds to pause the script
between requests. It will solve this issue without transferring the database to the local server or any other administrative work on MySQL.
My solution was to batch multiple queries into one commit statement, if those were insert queries.
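A sketch of that batching approach with mysqlclient (the table and column are made up for illustration):
import MySQLdb

conn = MySQLdb.connect(host="192.168.0.101", user="user",
                       passwd="password", db="mydb")
try:
    rows = [("alice",), ("bob",), ("carol",)]
    cur = conn.cursor()
    # one round trip and one commit for the whole batch, instead of a
    # fresh connection and commit per insert
    cur.executemany("INSERT INTO users (name) VALUES (%s)", rows)
    conn.commit()
finally:
    conn.close()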
