I have a MySQL server running on my local network that isn't reachable from outside the network, and it needs to stay that way.
When I am on a different network, the following code hangs for about 5-10 seconds; my guess is that it's retrying the connection a number of times:
import mysql.connector

conn = mysql.connector.connect(
    host="Address",
    user="user",
    password="password",
    database="database"
)
Is there a way to "ping" the MySQL server before running this code to verify that it is reachable, or to limit the number of retries?
At the moment I am having to use a try-except clause to catch the case where the server is not reachable.
Instead of trying to implement specific behavior before connecting, adjust the connect timeout so that you don't have to wait: for your use case, the server is effectively down if you can't connect within a short timeframe anyway.
You can use connection_timeout to adjust the socket timeout used when connecting to the server.
If you set it to a low value (it is specified in seconds, so 1 should work fine) you'll get the behavior you're looking for, and it will also help you catch issues with the user/password/database values quickly.
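For example, a minimal sketch based on the snippet in the question (host and credentials are the placeholders from the question):

import mysql.connector

try:
    conn = mysql.connector.connect(
        host="Address",
        user="user",
        password="password",
        database="database",
        connection_timeout=1  # seconds; fail fast if the server is unreachable
    )
except mysql.connector.Error as err:
    print("MySQL server not reachable:", err)
else:
    conn.close()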
I'm trying to execute some long-running SQL queries using SQLAlchemy against a Postgres database hosted on AWS RDS.
from sqlalchemy import create_engine

conn_str = 'postgresql://user:password@db-primary.cluster-cxf.us-west-2.rds.amazonaws.com:5432/dev'
engine = create_engine(conn_str)

sql = 'UPDATE "Clients" SET "Name" = NULL'

# this takes about 4 hrs to execute if run in pgAdmin
with engine.begin() as conn:
    conn.execute(sql)
After running for exactly 2 hours, the script errors out with
OperationalError: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
(Background on this error at: https://sqlalche.me/e/14/e3q8)
I have tested setting connection timeouts in SQLAlchemy (based on How to set connection timeout in SQLAlchemy). This did not make a difference.
I have looked up the connection settings in the Postgres settings (based on https://dba.stackexchange.com/questions/164419/is-it-possible-to-limit-timeout-on-postgres-server), but both statement_timeout and idle_in_transaction_session_timeout are set to 0, meaning there are no set limits.
I agree with @jjanes. This smells like a TCP connection timeout issue. It might be that somewhere in the network layer something, be it a NAT or a firewall, dropped your TCP connection, leaving the code to wait for the full TCP keepalive timeout before it sees the connection as closed. This usually happens when the network topology between the client and the database is complicated, for example when there is a company firewall or some sort of interconnection in between. pgAdmin may ship with a pre-configured TCP keepalive setting, which would explain why it was not affected, but I'm not sure.
Other timeouts didn't kick in because, in my understanding, the TCP timeout sits in the L4 layer and takes effect before the other timeouts, which live in the L7 application layer.
You could try adding the keepalive parameters into your connection string and see if it can resolve the issue. For example:
postgresql://user:password@db-primary.cluster-cxf.us-west-2.rds.amazonaws.com:5432/dev?keepalives_idle=1&keepalives_count=1&tcp_user_timeout=1000
Note the keepalive parameters at the end. For your reference, here's the explanation to those parameters:
https://www.postgresql.org/docs/current/runtime-config-connection.html
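If you'd rather keep the URL clean, the same keepalive settings can be passed to psycopg2 through SQLAlchemy's connect_args; a rough sketch (the values below are illustrative, not tuned recommendations):

from sqlalchemy import create_engine

engine = create_engine(
    'postgresql://user:password@db-primary.cluster-cxf.us-west-2.rds.amazonaws.com:5432/dev',
    connect_args={
        "keepalives": 1,            # enable TCP keepalives
        "keepalives_idle": 60,      # seconds of inactivity before the first probe
        "keepalives_interval": 10,  # seconds between probes
        "keepalives_count": 5,      # failed probes before the connection is dropped
    },
)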
I'm connecting to Redshift from Python on my PC, from Python in Lambda, and from SQL Workbench.
SQL Workbench always connects without any issues.
Both the PC and Lambda usually work correctly, but sometimes they can't connect - it looks like the connection is waiting for Redshift to accept the request, and nothing happens until, for example, Lambda times out.
There's no log output during the run, no feedback, and nothing I can find in the Redshift logs about it.
I can fix every single run manually by connecting in Workbench while the script is waiting for the connection - that apparently refreshes something and allows Python to connect and work properly.
What is the reason for this and how can I fix it?
Here's my connection:
import psycopg2

conn = psycopg2.connect(
    host=host,
    database=db,
    user=user,
    password=pwd,
    port=port
)
I went through several questions regarding connections like:
AWS Lambda times out connecting to RedShift
AWS Serverless lambda times out while connecting to redshift
python socket Windows 10 connection times out
but nothing looks like it's related to my issue.
Found an answer: SQL Workbench was blocking the connection. When Workbench is connected, all other clients fail to connect.
The easiest solution is to disconnect the Workbench connection before another client tries to connect. If the other client has already tried connecting, disconnecting is not enough - Workbench has to reconnect so the other client refreshes (I have no idea why; found it by trial and error).
Disconnecting may also not work if a query that errored out was run in the past and was never rolled back. After a rollback and a disconnect, everything is fine.
I'm trying to connect to Db2 (using ibm_db). The connection is successful and I'm able to make changes in the database, but after a while the connection gets closed, even though I'm not closing it anywhere.
It throws this error:
[IBM][CLI Driver] CLI0106E Connection is closed. SQLSTATE=08003 SQLCODE=-99999
2019-04-11 03:11:20,558 - INFO - werkzeug - 9.46.72.43 - - [11/Apr/2019 03:11:20] POST 200
Here is my code (not exact, but something similar):
import ibm_db

conn = ibm_db.connect("database", "username", "password")

def update():
    stmt = ibm_db.exec_immediate(conn, "UPDATE employee SET bonus = '1000' WHERE job = 'MANAGER'")
How do I maintain the connection the whole time, i.e. whenever the service is running?
Your design of making a connection only when the service starts is unsuitable for long-running services.
There's nothing you can do to stop the other end (i.e. the Db2-server, or any intervening gateway) from closing the connection. The connection can get closed for a variety of reasons. For example, the Db2-server may be configured to discard idle sessions, or sessions that break some site-specific workload-management rules. Network issues can cause connections to become unavailable. Service-management matters can cause connections to be forced off etc.
Check out the ibm_db.pconnect method to see if it helps you. Otherwise, consider a better design such as connection pooling or reconnect-on-demand (a rough sketch of the latter is below).
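As one way to illustrate reconnect-on-demand (the connection string below is a placeholder, not the poster's actual credentials), you can check the handle with ibm_db.active before each use and reopen it when it has been closed:

import ibm_db

# placeholder DSN - substitute your own database, host and credentials
DSN = "DATABASE=database;HOSTNAME=hostname;PORT=50000;PROTOCOL=TCPIP;UID=username;PWD=password;"

_conn = None

def get_connection():
    """Return a live connection, reconnecting if the previous one was closed."""
    global _conn
    if _conn is None or not ibm_db.active(_conn):
        _conn = ibm_db.connect(DSN, "", "")
    return _conn

def update():
    ibm_db.exec_immediate(get_connection(),
                          "UPDATE employee SET bonus = '1000' WHERE job = 'MANAGER'")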
I am writing an app with wxPython that incorporates pyodbc to access SQL Server. A user must first establish a VPN connection before they can establish a connection with the SQL server. In cases where a user forgets to establish a VPN connection or is simply not authorized to access a particular server, the app will freeze for up to 60+ seconds before it produces an error message. Often, users will get impatient and force-close the app before the error message pops up.
I wonder if there is a way to test whether it's possible to connect to the server without freezing up. I thought about using a timeout, but it seems the timeout attribute can only be set after I establish a connection.
A sample connection string I use is below:
connection = pyodbc.connect(r'DRIVER={SQL Server};SERVER=ServerName;database=DatabaseName;Trusted_Connection=True;unicode_results=True')
See https://code.google.com/archive/p/pyodbc/wikis/Connection.wiki under timeout
Note: This attribute only affects queries. To set the timeout for the
actual connection process, use the timeout keyword of the
pyodbc.connect function.
So change your connection call to:
connection = pyodbc.connect(r'DRIVER={SQL Server};SERVER=ServerName;database=DatabaseName;Trusted_Connection=True;unicode_results=True', timeout=3)
and it should work.
"took a while before it threw an error message about server not existing or access being denied"
Your comment conflates two very different kinds of errors:
server not existing is a network error. Either the name has no address, or the address is unreachable. No connection can be made.
access being denied is a response from the server. For the server to respond, a connection must exist. This is not to be confused with connection refused (ECONNREFUSED), which means the remote is not accepting connections on the port.
SQL Server uses TCP/IP. You can use standard network functions to determine if the network hostname of the machine running SQL Server can be found, and if the IP address is reachable. One advantage to using them to "pre-test" the connection is that any error you'll get will be much more specific than the typical there was a problem connecting to the server.
Note that not all delay-inducing errors can be avoided. For example, if the DNS server is not responding, the resolver will typically wait 30 seconds before giving up. If an IP address is valid, but there's no machine with that address, attempting a connection will take a long time to fail. There's no way for the client to know there's no such machine; it could just be taking a long time to get a response.
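As an illustration, a minimal pre-check with the standard socket module might look like this (assuming the default SQL Server port 1433; adjust the host and port to match your environment):

import socket
import pyodbc

def server_reachable(host, port=1433, timeout=3):
    """Return True if a TCP connection to host:port can be opened quickly."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if server_reachable("ServerName"):
    connection = pyodbc.connect(r'DRIVER={SQL Server};SERVER=ServerName;'
                                r'database=DatabaseName;Trusted_Connection=True;'
                                r'unicode_results=True')
else:
    print("Server not reachable - check the VPN connection")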
After accessing my web app, built with:
- Python 2.7
- the Bottle micro framework v. 0.10.6
- Apache 2.2.22
- mod_wsgi
- Ubuntu Server 12.04 64-bit
I'm receiving this error after several hours:
OperationalError: (2006, 'MySQL server has gone away')
I'm using the MySQLdb driver. The error usually happens after periods when I don't access the server. I've tried closing all the connections, which I do, using this:
cursor.close()
db.close()
where db is the connection object returned by the standard MySQLdb.connect() call.
The my.cnf file looks something like this:
key_buffer = 16M
max_allowed_packet = 128M
thread_stack = 192K
thread_cache_size = 8
# This replaces the startup script and checks MyISAM tables if needed
# the first time they are touched
myisam-recover = BACKUP
#max_connections = 100
#table_cache = 64
#thread_concurrency = 10
It is the default configuration file except max_allowed_packet is 128M instead of 16M.
The queries to the database are quite simple, at most they retrieve approximately 100 records.
Can anyone help me fix this? One idea I did have was to use try/except, but I'm not sure if that would actually work.
Thanks in advance,
Jamie
Update: try/except calls didn't work.
This is a MySQL error, not a Python one.
The list of possible causes and possible solutions is here: MySQL 5.5 Reference Manual: C.5.2.9. MySQL server has gone away.
Possible causes include:
You tried to run a query after closing the connection to the server. This indicates a logic error in the application that should be corrected.
A client application running on a different host does not have the necessary privileges to connect to the MySQL server from that host.
You have encountered a timeout on the server side and the automatic reconnection in the client is disabled (the reconnect flag in the MYSQL structure is equal to 0).
You can also get these errors if you send a query to the server that is incorrect or too large. If mysqld receives a packet that is too large or out of order, it assumes that something has gone wrong with the client and closes the connection. If you need big queries (for example, if you are working with big BLOB columns), you can increase the query limit by setting the server's max_allowed_packet variable, which has a default value of 1MB. You may also need to increase the maximum packet size on the client end. More information on setting the packet size is given in Section C.5.2.10, “Packet too large”.
You also get a lost connection if you are sending a packet 16MB or larger if your client is older than 4.0.8 and your server is 4.0.8 and above, or the other way around.
and so on...
In other words, there are plenty of possible causes. Go through that list and check every possible cause.
Make sure you are not trying to commit to a closed MySQLdb connection object.
An answer to a (very closely related) question has been posted here: https://stackoverflow.com/a/982873/209532
It relates directly to the MySQLdb driver (MySQL-python, which is unmaintained, and mysqlclient, its maintained fork), but the approach is the same for any other driver that does not support automatic reconnection.
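As a rough sketch of that reconnect-before-use pattern with MySQLdb (the connection parameters below are placeholders):

import MySQLdb

db = MySQLdb.connect(host="localhost", user="user", passwd="password", db="database")

def get_cursor():
    # ping(True) asks the driver to reconnect if the server has dropped the session
    db.ping(True)
    return db.cursor()

cursor = get_cursor()
cursor.execute("SELECT 1")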
For me this was fixed using
MySQLdb.connect("127.0.0.1","root","","db" )
instead of
MySQLdb.connect("localhost","root","","db" )
and then
df.to_sql('df',sql_cnxn,flavor='mysql',if_exists='replace', chunksize=100)