I'm writing a heavily transactional script that migrates data from a NoSQL database to MySQL, but after about 2 or 3 minutes I get this message:
pymysql.err.OperationalError: (2003, "Can't connect to MySQL server on
'foo.com' ([Errno 99] Cannot assign requested address)")
I'm already opening and closing connections as soon as each task finishes (the script uses multiprocessing to make better use of the server's resources), and I already wait between retries,
but rather than a band-aid solution I would like to make my script better. Is there a way to make pymysql disconnect and free up the port it used?
I'm using an AWS Ubuntu server for the migration. I'm aware that Ubuntu keeps closed ports in TIME_WAIT for 60 seconds, and I've already extended the ephemeral port range to the maximum. The script runs a pool of 15 concurrent processes, and the MySQL server is AWS-hosted (Aurora).
So far in testing, the script processes about 10,000 records per second.
Update: I missed a 0 on the number of records per second.
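One way to stop chewing through ephemeral ports in a setup like this is to open a single pymysql connection per worker process and reuse it for every batch, instead of connecting and disconnecting per task. A minimal sketch, assuming the credentials, table, column names and the NoSQL reader below are placeholders:

import pymysql
from multiprocessing import Pool

_conn = None  # one connection per worker process

def init_worker():
    # Runs once when the pool spawns each process; the connection is then
    # reused for every batch handled by that process.
    global _conn
    _conn = pymysql.connect(host="foo.com", user="migration_user",
                            password="secret", database="target_db")

def insert_batch(rows):
    # Reuse the process-local connection instead of opening a new socket
    # (and leaving it behind in TIME_WAIT) for every batch.
    _conn.ping(reconnect=True)  # reopen transparently if the server dropped us
    with _conn.cursor() as cur:
        cur.executemany(
            "INSERT INTO target_table (id, payload) VALUES (%s, %s)", rows)
    _conn.commit()

if __name__ == "__main__":
    batches = load_batches_from_nosql()  # hypothetical reader for the NoSQL side
    with Pool(processes=15, initializer=init_worker) as pool:
        pool.map(insert_batch, batches)

With 15 worker processes this keeps at most 15 MySQL sockets open at any time, rather than one per batch.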
Related
I'm working on a Python application with an SQL Server database using pyodbc, and I need to open multiple connections from the application's side to the database.
I learnt that the max number of connections allowed on an instance of the SQL Server database is 32,767. My understanding is this is the max that the DB instance "can handle", i.e. all simultaneous users combined.
Is there a limit on how many connections one client can open towards the same database instance, is it also 32,767? If yes, where / how is this limit configured?
Taking an educated guess here: there is no connection-count limit on the client side towards the same DB instance; there is a limit of 32,767 on the server side, but the client would most likely run out of other resources well before it gets close to that figure.
I was using one connection, one cursor, and threading to insert multiple records, but kept getting a "connection is busy" error. This was resolved by adding "MARS_Connection=yes" to the pyodbc connection string, thanks to this MS documentation.
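For reference, this is roughly what the connection string looks like with MARS enabled (driver version, server, database and credentials below are placeholders):

import pyodbc

# MARS_Connection=yes lets several cursors on one connection have pending
# results at the same time, which avoids the "connection is busy" error.
conn_str = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver.example.com;"
    "DATABASE=mydb;"
    "UID=app_user;PWD=secret;"
    "MARS_Connection=yes;"
)
conn = pyodbc.connect(conn_str)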
I have written a server script in Python that connects to a MySQL database using get_wsgi_application from django.core.wsgi.
But the server often sits idle for long hours, and after that I get the error (2006, 'MySQL server has gone away') when it tries to query the database.
One possible solution could be increasing the value of wait_timeout on the MySQL server, but that would keep other connections alive for long intervals as well.
Is there any other way to resolve this issue, or a way to re-establish the MySQL connection after it goes down?
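If the script already goes through Django's database layer, one option (assuming a Django version that supports persistent-connection settings, 1.6 or later) is to cap the connection lifetime below the server's wait_timeout so stale handles are replaced automatically. An illustrative settings.py sketch with placeholder values:

# settings.py: let Django recycle its own connections before MySQL's
# wait_timeout closes them (all values here are illustrative).
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.mysql",
        "NAME": "app_db",
        "USER": "app_user",
        "PASSWORD": "secret",
        "HOST": "dbhost",
        "CONN_MAX_AGE": 300,  # seconds; keep this below the server's wait_timeout
    }
}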
I want to close the MySQL database connection automatically after 50 seconds if a query is taking longer than that. Is there an option in Python when making the connection, or any other solution to do that?
See the reference site for Python database connections.
Look at the Connection section on that site; it explains the timeout options for queries, and you can pass an integer number of seconds.
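For example, with pymysql (assuming that driver; the host, credentials and query below are placeholders) you can pass read_timeout in seconds, and a query that exceeds it raises an OperationalError, after which the connection is closed:

import pymysql

# read_timeout is in seconds; if a result does not arrive within that window,
# pymysql raises an OperationalError instead of waiting forever.
conn = pymysql.connect(host="dbhost", user="app_user", password="secret",
                       database="app_db", read_timeout=50)
try:
    with conn.cursor() as cur:
        cur.execute("SELECT * FROM big_table")  # hypothetical long-running query
        rows = cur.fetchall()
finally:
    conn.close()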
I am trying to access a remote database from one Linux server to another connected via LAN, but it is not working. After some time it generates this error:
`_mysql_exceptions.OperationalError: (2003, "Can't connect to MySQL server on '192.168.0.101' (99)")`
The error is random and can be raised at any time.
I create a new db object in every method, and I close the connection as well, so why is this error raised?
Can anyone please help me sort out this problem?
This issue is due to too many pending requests on the remote database.
In this situation MySQL closes the connection to the running script.
To overcome this, put
time.sleep(sec)  # sec is the number of seconds to pause the script (after import time)
between batches. This solves the issue without transferring the database to a local server or doing any other administrative work on MySQL.
My solution was to collect more queries into a single commit if they were insert queries.
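A rough sketch of that idea, assuming pymysql and placeholder table/column names: buffer the rows and flush them together instead of committing per insert.

import pymysql

conn = pymysql.connect(host="192.168.0.101", user="app_user",
                       password="secret", database="app_db")
buffer = []
BATCH_SIZE = 500

def queue_row(row):
    buffer.append(row)
    if len(buffer) >= BATCH_SIZE:
        flush()

def flush():
    # One executemany and one commit per batch instead of one per row,
    # which keeps the number of round trips and open sockets down.
    if not buffer:
        return
    with conn.cursor() as cur:
        cur.executemany("INSERT INTO items (id, value) VALUES (%s, %s)", buffer)
    conn.commit()
    buffer.clear()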
After accessing my web app using:
- Python 2.7
- the Bottle micro framework v. 0.10.6
- Apache 2.2.22
- mod_wsgi
- on Ubuntu Server 12.04 64bit; I'm receiving this error after several hours:
OperationalError: (2006, 'MySQL server has gone away')
I'm using MySQLdb, the standard MySQL driver for Python. The error usually happens when I haven't accessed the server for a while. I've tried closing all the connections, which I do, using this:
cursor.close()
db.close()
where db is the standard MySQLdb.Connection() call.
The my.cnf file looks something like this:
key_buffer = 16M
max_allowed_packet = 128M
thread_stack = 192K
thread_cache_size = 8
# This replaces the startup script and checks MyISAM tables if needed
# the first time they are touched
myisam-recover = BACKUP
#max_connections = 100
#table_cache = 64
#thread_concurrency = 10
It is the default configuration file except max_allowed_packet is 128M instead of 16M.
The queries to the database are quite simple, at most they retrieve approximately 100 records.
Can anyone help me fix this? One idea I did have was to use try/except, but I'm not sure whether that would actually work.
Thanks in advance,
Jamie
Update: try/except calls didn't work.
This is a MySQL error, not a Python one.
The list of possible causes and possible solutions is here: MySQL 5.5 Reference Manual: C.5.2.9. MySQL server has gone away.
Possible causes include:
You tried to run a query after closing the connection to the server. This indicates a logic error in the application that should be corrected.
A client application running on a different host does not have the necessary privileges to connect to the MySQL server from that host.
You have encountered a timeout on the server side and the automatic reconnection in the client is disabled (the reconnect flag in the MYSQL structure is equal to 0).
You can also get these errors if you send a query to the server that is incorrect or too large. If mysqld receives a packet that is too large or out of order, it assumes that something has gone wrong with the client and closes the connection. If you need big queries (for example, if you are working with big BLOB columns), you can increase the query limit by setting the server's max_allowed_packet variable, which has a default value of 1MB. You may also need to increase the maximum packet size on the client end. More information on setting the packet size is given in Section C.5.2.10, “Packet too large”.
You also get a lost connection if you are sending a packet 16MB or larger if your client is older than 4.0.8 and your server is 4.0.8 and above, or the other way around.
and so on...
In other words, there are plenty of possible causes. Go through that list and check every possible cause.
Make sure you are not trying to commit to a closed MySQLdb object.
An answer to a (very closely related) question has been posted here: https://stackoverflow.com/a/982873/209532
It relates directly to the MySQLdb driver (MySQL-python, which is unmaintained, and mysqlclient, its maintained fork), but the approach is the same for other drivers that do not support automatic reconnection.
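The gist of that approach, as a sketch (a hypothetical wrapper, not the exact code from the linked answer), is to catch the dropped-connection error, reopen the connection, and retry once:

import MySQLdb

class ReconnectingDB(object):
    def __init__(self, **connect_kwargs):
        self._kwargs = connect_kwargs
        self._conn = MySQLdb.connect(**connect_kwargs)

    def query(self, sql, args=None):
        try:
            cur = self._conn.cursor()
            cur.execute(sql, args)
        except (AttributeError, MySQLdb.OperationalError):
            # The connection was dropped (e.g. error 2006): reconnect and retry once.
            self._conn = MySQLdb.connect(**self._kwargs)
            cur = self._conn.cursor()
            cur.execute(sql, args)
        return cur

# usage (illustrative): db = ReconnectingDB(host="localhost", user="root", passwd="", db="db")
#                       cur = db.query("SELECT 1")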
For me this was fixed by using
MySQLdb.connect("127.0.0.1","root","","db" )
instead of
MySQLdb.connect("localhost","root","","db" )
and then writing with
df.to_sql('df', sql_cnxn, flavor='mysql', if_exists='replace', chunksize=100)