Python psycopg2 timeout

I have a huge problem:
There seem to be some hardware problems with the router of the server my Python software runs on. The connection to the database only succeeds about every third time, so a psycopg2.connect() can take up to 5 minutes before I get a timeout exception.
2014-12-23 15:03:12,461 - ERROR - could not connect to server: Connection timed out
Is the server running on host "172.20.19.1" and accepting
This is the code I'm using:
import sys
import logging

import psycopg2
import psycopg2.extras

# Connection to the DB
try:
    db = psycopg2.connect(host=dhost, database=ddatabase,
                          user=duser, password=dpassword)
    cursor = db.cursor(cursor_factory=psycopg2.extras.DictCursor)
except psycopg2.DatabaseError as err:
    print(str(err))
    logging.error(str(err))
    logging.info('program terminated')
    sys.exit(1)
I tried adding some timeouts for the query, but that didn't help, since the connection never got established in the first place.
Is there a way I can stop the program immediately when the connection can't be established?

When using the keyword-arguments syntax of the connect function it is possible to use any of the libpq-supported connection parameters. Among those there is connect_timeout, in seconds:
db = psycopg2.connect(
    host=dhost, database=ddatabase,
    user=duser, password=dpassword,
    connect_timeout=3
)
http://www.postgresql.org/docs/current/static/libpq-connect.html#LIBPQ-PARAMKEYWORDS
http://initd.org/psycopg/docs/module.html
A connection timeout raises an OperationalError exception.
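For example, a minimal sketch of combining connect_timeout with that exception handling; the connection values here are placeholders standing in for the variables from the question:
import sys
import logging

import psycopg2

dhost, ddatabase = 'localhost', 'mydb'  # placeholder values
duser, dpassword = 'user', 'secret'     # placeholder values

try:
    db = psycopg2.connect(host=dhost, database=ddatabase,
                          user=duser, password=dpassword,
                          connect_timeout=3)
except psycopg2.OperationalError as err:
    # Raised once the 3-second budget is exhausted (or on any other
    # connection failure), so the program can stop immediately.
    logging.error(str(err))
    sys.exit(1)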

Related

Close MySQL connection upon Python exception within Scrapy Framework?

I am using Scrapy 2.4.x pipeline.py to write data sets to a remote MySQL 5.7.32 server. In some cases errors happen and the script throws an exception - which is OK.
for selector in selectors:
    unit = unitItem()
    try:
        unit['test'] = selector.xpath('form/text()').extract_first()
        if not unit['test']:
            self.logger.error('Extraction failed on %s', response.url)
            continue
        else:
            unit['test'] = str(unit['test'].strip())
    except Exception as e:
        self.logger.error('Exception: %s', e)
        continue

    # more code
    yield unit
There are 2 problems:
RAM usage is climbing constantly. Do I somehow need to destroy the item?
There are many MySQL aborted-connection errors. I believe this is because the MySQL connection is not closed.
MySQL error log:
Aborted connection 63182018 to db: 'mydb' user: 'test' host: 'myhost' (Got an error reading communication packets)
The connection gets opened at the very beginning of process_item and closed at the very end of the method.
Would it help to close the connection upon exception? If so, is there a recommended routine?
I believe it would be more effective to open the SQL connection on spider_opened() and close it on spider_closed(); a sketch follows below.
The only thing to keep in mind is that the spider_closed() signal is fired only when the spider is closed gracefully.
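A minimal pipeline sketch of that idea, assuming the pymysql driver and placeholder credentials and table names (none of these come from the question). open_spider and close_spider are Scrapy's built-in pipeline hooks for the spider_opened and spider_closed signals:
import pymysql

class MySQLPipeline:
    def open_spider(self, spider):
        # One connection per crawl instead of one per item.
        self.db = pymysql.connect(host='myhost', user='test',
                                  password='secret', database='mydb')

    def close_spider(self, spider):
        # Runs on the spider_closed signal, i.e. only on graceful shutdown.
        self.db.close()

    def process_item(self, item, spider):
        with self.db.cursor() as cursor:
            cursor.execute("INSERT INTO units (test) VALUES (%s)",
                           (item['test'],))
        self.db.commit()
        return item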

Long celery task causes MySQL timeout in Django - options?

I have a celery task which takes about 6 hours. At the end of it, Django (or possibly Celery) raises an exception "MySQL server has gone away".
After doing some reading, it appears that this is a known issue with long tasks. I don't (think I) have control over pinging or otherwise keeping the connection alive mid-task, but the exception is raised after the long-running call has finished (while still within the task function).
Is there a call I can make within the function to re-establish the connection?
(I have run this task "locally" with the same RDS MySQL DB and not had the issue, but I am getting it when running on an AWS instance.)
Eventually found what appears to have worked:
import time

from django.db import close_old_connections, connection
from django.db.utils import OperationalError

def check_and_retry_django_db_connection():
    close_old_connections()
    db_conn = False
    while not db_conn:
        try:
            connection.ensure_connection()
            db_conn = True
        except OperationalError:
            print('Database unavailable, waiting 1 second...')
            time.sleep(1)
    print('Database available')
The key is the close_old_connections call - ensure_connection will not work otherwise.
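A hypothetical sketch of where that helper fits inside the task; the Celery app, the broker URL, and the sleep standing in for the long-running step are all placeholders, not from the original question:
import time

from celery import Celery
from django.db import connection

app = Celery('tasks', broker='redis://localhost')  # placeholder broker

@app.task
def long_report_task():
    time.sleep(6 * 60 * 60)  # stand-in for the ~6 hour step, during which
                             # the server may drop the idle DB connection
    check_and_retry_django_db_connection()  # re-establish before ORM use
    with connection.cursor() as cur:
        cur.execute("SELECT 1")  # database access is now safe again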
Ian

How to set MySQLdb Connection timeout to infinity in python

I am doing some task on the database every 10 hours. I connect to the database only once, at the start of the script; after 10 hours the database connection times out.
I could use another method here, but I want to know how to set the connection timeout to infinity. After 10 hours I am getting the error given below.
Code:
import MySQLdb, time

db = MySQLdb.connect("hostname", "user", "password", "db_name")
while True:
    db.commit()  # to refresh database
    cursor = db.cursor()
    cursor.execute("some query here")
    db.commit()
    cursor.close()
    time.sleep(36000)  # wait for 10 hours
Error:
OperationalError: (2006, 'MySQL server has gone away')
The problem is that the connection is being closed from the DB server side. What you have to do is either:
change the timeout on the MySQL side,
or, more usefully, just reconnect to the DB again in your loop, as sketched below.
If you use Linux you can use cron to launch your script every X seconds; if you use Windows, use the Task Scheduler service to launch the script when you desire.
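A minimal sketch of the reconnect-per-iteration approach, reusing the placeholder credentials and query from the question; opening a fresh connection each cycle sidesteps the server-side idle timeout entirely:
import time

import MySQLdb

while True:
    # A new connection per cycle, so no connection sits idle for 10 hours.
    db = MySQLdb.connect("hostname", "user", "password", "db_name")
    cursor = db.cursor()
    cursor.execute("some query here")
    db.commit()
    cursor.close()
    db.close()  # release the connection instead of leaving it open
    time.sleep(36000)  # wait for 10 hours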
From the documentation, there are two timeouts: one is write_timeout and the other is connect_timeout.
connect_timeout - timeout in seconds before throwing an exception when connecting
(default: 10, min: 1, max: 31536000)
write_timeout - the timeout for writing to the connection, in seconds
(default: None - no timeout)
From my understanding you need to use connect_timeout.
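A minimal sketch of passing connect_timeout, reusing the placeholder credentials from the question; the 10-second value is an arbitrary choice:
import MySQLdb

db = MySQLdb.connect("hostname", "user", "password", "db_name",
                     connect_timeout=10)  # fail fast if the server is unreachable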
Hope this helps! Cheers!

FTP Connection/Instantiation Hangs Application

I am attempting to open a connection via FTP using the following simple code, but the code just hangs at this line. It's not advancing, and it's not throwing any exceptions or errors. My code is 6 months old and I've been able to use it to connect to my website and download files all this time. Today it has just started to hang when I go to open an FTP connection.
Do you know what could be going wrong?
ftp = ftplib.FTP("www.mySite.com")  # hangs on this line
print("Im alive")  # never gets printed out
ftp.login(username, password)
I administer the website with a couple of other people but we haven't changed anything.
Edit: Just tried to FTP in using Filezilla with the same username and password and it failed. The output was:
Status: Resolving address of www.mySite.com
Status: Connecting to IPADDRESS...
Status: Connection established, waiting for welcome message...
Error: Connection timed out
Error: Could not connect to server
Status: Waiting to retry...
Status: Resolving address of www.mySite.com
Status: Connecting to IPADDRESS...
Status: Connection established, waiting for welcome message...
Error: Connection timed out
Error: Could not connect to server
Looks like you have server issues, but if you'd like the Python program to error out instead of waiting forever for the server, you can specify a timeout kwarg to ftplib.FTP. From the docs (https://docs.python.org/2/library/ftplib.html#ftplib.FTP)
class ftplib.FTP([host[, user[, passwd[, acct[, timeout]]]]])
Return a new instance of the FTP class. When host is given, the method call connect(host) is made. When user is given, the method call login(user, passwd, acct) is additionally made (where passwd and acct default to the empty string when not given). The optional timeout parameter specifies a timeout in seconds for blocking operations like the connection attempt (if it is not specified, the global default timeout setting will be used).
Changed in version 2.6: timeout was added.
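For example, a minimal sketch that makes the constructor raise instead of hanging; the 10-second value and the credentials are placeholders:
import ftplib
import socket

username, password = "user", "secret"  # placeholder credentials

try:
    ftp = ftplib.FTP("www.mySite.com", timeout=10)  # raise rather than hang
    ftp.login(username, password)
except socket.timeout:
    print("FTP server did not respond within 10 seconds")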

How can I get Pika to retry connecting to RabbitMQ if it fails the first time?

I'm trying to get my program, which uses Pika, to continually retry connecting to RabbitMQ on failure. From what I've seen of the Pika docs, there's a SimpleReconnectionStrategy class that can be used to accomplish this, but it doesn't seem to be working very well.
strategy = pika.SimpleReconnectionStrategy()
parameters = pika.ConnectionParameters(server)
self.connection = pika.AsyncoreConnection(parameters, True, strategy)
self.channel = self.connection.channel()
The connection should wait_for_open and set up the reconnection strategy.
However, when I run this, I get the following errors thrown:
error: uncaptured python exception, closing channel <pika.asyncore_adapter.RabbitDispatcher at 0xb6ba040c> (<class 'socket.error'>:[Errno 111] Connection refused [/usr/lib/python2.7/asyncore.py|read|79] [/usr/lib/python2.7/asyncore.py|handle_read_event|435] [/usr/lib/python2.7/asyncore.py|handle_connect_event|443])
error: uncaptured python exception, closing channel <pika.asyncore_adapter.RabbitDispatcher at 0xb6ba060c> (<class 'socket.error'>:[Errno 111] Connection refused [/usr/lib/python2.7/asyncore.py|read|79] [/usr/lib/python2.7/asyncore.py|handle_read_event|435] [/usr/lib/python2.7/asyncore.py|handle_connect_event|443])
These errors are continually thrown whilst Pika tries to connect. If I start the RabbitMQ server while my client is running, it will connect. I just don't like the sight of these errors... Are they normal? Am I doing this wrong?
import socket
...
while True:
    connectSucceeded = False
    try:
        self.channel = self.connection.channel()
        connectSucceeded = True
    except socket.error:
        pass
    if connectSucceeded:
        break
Something like the above is usually used. You could also add a time.sleep() every time through the loop to retry less frequently, because sometimes servers do go down. In real production code I would also count the number of retries (or track the amount of time spent retrying) and give up after some interval; sometimes it is better to log an error and crash. A sketch combining these ideas follows.
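A hedged variant of that loop as a reusable function, adding the sleep and the retry cap suggested above; the connection parameter stands in for the pika connection from the question, and the limits are arbitrary choices:
import logging
import socket
import time

def open_channel_with_retry(connection, max_retries=30, delay=2):
    # Retry channel() until it succeeds or the retry budget is spent.
    for attempt in range(max_retries):
        try:
            return connection.channel()
        except socket.error:
            time.sleep(delay)  # wait before the next attempt
    logging.error('giving up after %d attempts', max_retries)
    raise SystemExit(1)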
