pymongo MongoClient end_request() will not terminate cursor - python

I have a question about the pymongo connection pool (MongoClient).
How is it possible that the cursor ("results" in the following example) keeps retrieving documents even after the connection was returned to the connection pool by the end_request() call?
mongo_connection_pool = MongoClient(host="127.0.0.1", port=27017)
db_connection = mongo_connection_pool["db_name"]
collection = db_connection["collection"]
results = collection.find()
db_connection.end_request()
for result in results:
    print result
Is there something that I'm missing?
Cheers

In PyMongo 2.x, MongoClient.start_request is used to pin a socket from the connection pool to an application thread. MongoClient.end_request removes that mapping (if it exists).
This has no impact on iterating a cursor. For each OP_GET_MORE operation the driver has to execute, it will get a socket out of the pool. If you are in a "request", it will use the request socket for the current thread. If not, it will use any available socket. You can read more about requests here. Note that "requests" no longer exist in PyMongo 3.0.
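For illustration, a minimal sketch of the 2.x request pattern (assuming PyMongo 2.2+, where start_request() can also be used as a context manager; the client and collection are the ones from the question):
# PyMongo 2.x only; "requests" were removed in PyMongo 3.0.
with mongo_connection_pool.start_request():
    # Every operation on this thread reuses one pinned socket here.
    doc = collection.find_one()
# Leaving the block calls end_request(), unpinning the socket.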
If you want to "terminate" a cursor, you can del the cursor object or call cursor.close().
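For example, a minimal sketch (using the collection from the question) that releases the server-side cursor explicitly instead of relying on end_request():
results = collection.find()
try:
    for result in results:
        print result
finally:
    results.close()  # or `del results`; frees the server-side cursor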

Related

How to share Postgres connection in a Redis queue

What is the best way to share a Postgres db connection across jobs in a Redis queue? I used to have the code below and import the connection in each job with conn = config.CONNECTION. Somehow, since the Redis version was updated on Heroku, this no longer works and the connection gets closed as each job finishes. I currently have to launch and close a new connection in each job.
from rq import Queue
from worker import conn
q = Queue(connection=conn)
q.enqueue(job1, job_timeout='5h')
q.enqueue(job2, job_timeout='5h')
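For illustration, a minimal sketch of the per-job workaround the question describes (psycopg2 and the DSN are assumptions, not part of the original code):
import psycopg2

def job1():
    # Open a fresh connection for this job and close it when the job finishes.
    conn = psycopg2.connect("dbname=mydb")  # hypothetical DSN
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT 1")  # the job's actual queries go here
    finally:
        conn.close()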

Search which database server running on localhost using python 2.7

I have installed two database servers, MySQL and MongoDB, and I have two functions written in Python 2.7 to connect to them, one for MySQL and one for MongoDB. Now, how do I know which database server is running on localhost using Python 2.7, so that I can call the appropriate connection function?
Here are my connection functions for both database servers:
import mysql.connector
from pymongo import MongoClient

conn = None

def mysql_make_connection():
    global conn
    conn = mysql.connector.connect(host='localhost', database='sk',
                                   user='root', password='SonuKumar#1')
    if conn.is_connected():
        print "Connection established"
    else:
        print "Connection Problem"

def mongo_make_connection():
    global conn
    conn = MongoClient('localhost')
Instead of checking to see which one is running, perhaps a better way would be to use a try/except block to connect to the one most likely to be running. If that fails, connect to the other one. So if MySQL is the one most likely to be running, it could be something like this:
try:
    mysql_make_connection()        # try MySQL first
except mysql.connector.Error:      # raised when MySQL isn't reachable
    mongo_make_connection()        # fall back to MongoDB

SQLAlchemy / pyODBC not releasing database connection

I'm using SQLAlchemy (Core only, not ORM) to create a connection to a SQL Server 2008 SP3.
When looking at the process' network connections, I noticed that the TCP/IP connection to the SQL Server (port 1433) remains open (ESTABLISHED).
Sample code:
from urllib.parse import quote_plus
from sqlalchemy.pool import NullPool
import sqlalchemy as sa
# parameters are read from a config file
db_params = quote_plus(';'.join(['{}={}'.format(key, val) for key, val in db_config.items()]))
# Hostname based connection
engine = sa.create_engine('mssql:///?odbc_connect={}'.format(db_params),
                          poolclass=NullPool)
conn = engine.connect()
conn.close()
engine.dispose()
engine = None
I added the NullPool and the engine.dispose() afterwards, thinking they might fix the lingering connection, but alas.
I'm using a hostname-based connection, as specified here.
Versions:
Python 3.5.0 (x32 on Win7)
SQLAlchemy 1.0.10
pyODBC 3.0.10
Edit: I've rewritten my code to use pyODBC alone instead of SQLAlchemy + pyODBC, and the issue remains. So as far as I can see, the issue is caused by pyODBC keeping the connection open.
When using pyODBC alone, the issue is caused by connection pooling, as discussed here.
As described in the docs:
pooling
A Boolean indicating whether connection pooling is enabled.
This is a global (HENV) setting, so it can only be modified before the
first connection is made. The default is True, which enables ODBC
connection pooling.
Thus:
import pyodbc

# Must be set before the first connection is made (global HENV setting).
pyodbc.pooling = False

conn = pyodbc.connect(db_connection_string)
conn.close()
It seems that when using SQLAlchemy with its own pooling disabled via NullPool, this setting isn't passed down to pyODBC.
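As a workaround, a minimal sketch (under the assumption that the global pyodbc.pooling flag also takes effect when pyODBC is driven through SQLAlchemy, since it is a process-wide setting) would be to disable ODBC pooling before the engine opens its first connection:
import pyodbc
import sqlalchemy as sa
from sqlalchemy.pool import NullPool

# Disable ODBC connection pooling before any connection exists;
# it is a global HENV setting and cannot be changed afterwards.
pyodbc.pooling = False

engine = sa.create_engine('mssql:///?odbc_connect={}'.format(db_params),
                          poolclass=NullPool)
conn = engine.connect()
conn.close()
engine.dispose()  # the TCP connection should now be released as well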

How to set the redis timeout waiting for the response with pipeline in redis-py?

In the code below, is the pipeline timeout 2 seconds?
import redis

client = redis.StrictRedis(host=host, port=port, db=0, socket_timeout=2)
pipe = client.pipeline(transaction=False)
for name in namelist:
    key = "%s-%s-%s-%s" % (key_sub1, key_sub2, name, key_sub3)
    pipe.smembers(key)
pipe.execute()
In Redis, the set stored at "key" has a lot of members. The code above always returns the error below:
error Error while reading from socket: ('timed out',)
If I raise the socket_timeout value to 10, it returns OK.
Doesn't the param "socket_timeout" mean the connection timeout? It looks more like a response timeout.
The redis-py version is 2.6.7.
I asked andymccurdy, the author of redis-py, on GitHub, and the answer is below:
If you're using redis-py<=2.9.1, socket_timeout is both the timeout
for socket connection and the timeout for reading/writing to the
socket. I pushed a change recently (465e74d) that introduces a new
option, socket_connect_timeout. This allows you to specify different
timeout values for socket.connect() differently from
socket.send/socket.recv(). This change will be included in 2.10 which
is set to be released later this week.
Since the redis-py version here is 2.6.7, socket_timeout is both the timeout for the socket connection and the timeout for reading/writing to the socket.
It is not a connection timeout; it is an operation timeout. Internally, the socket_timeout argument to StrictRedis() is passed to the socket's settimeout method.
See here for details: https://docs.python.org/2/library/socket.html#socket.socket.settimeout
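For illustration, a minimal sketch of setting the two timeouts separately (assuming redis-py >= 2.10, where the socket_connect_timeout option mentioned above is available):
import redis

client = redis.StrictRedis(
    host=host, port=port, db=0,
    socket_connect_timeout=2,  # timeout for socket.connect()
    socket_timeout=10,         # timeout for socket.send()/socket.recv()
)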

How to close a mongodb python connection?

I'm writing a Python script that writes some data to MongoDB.
I need to close the connection and free some resources when the script finishes.
How is that done in Python?
Use the close() method on your MongoClient instance:
client = pymongo.MongoClient()
# some code here
client.close()
Cleanup client resources and disconnect from MongoDB.
End all server sessions created by this client by sending one or more endSessions commands.
Close all sockets in the connection pools and stop the monitor threads.
The safest way to close a pymongo connection is to use it in a with statement:
with pymongo.MongoClient(db_config['HOST']) as client:
    db = client[db_config['NAME']]
    item = db["document"].find_one({'id': 1})
    print(item)
Adding to @alexce's answer: it's not always true. If your connection is encrypted, MongoClient won't reconnect:
def close(self):
    ...
    if self._encrypter:
        # TODO: PYTHON-1921 Encrypted MongoClients cannot be re-opened.
        self._encrypter.close()
Also, since version 4.0, the client won't reconnect after calling close() in any case:
def close(self) -> None:
    """Cleanup client resources and disconnect from MongoDB.

    End all server sessions created by this client by sending one or more
    endSessions commands.

    Close all sockets in the connection pools and stop the monitor threads.

    .. versionchanged:: 4.0
        Once closed, the client cannot be used again and any attempt will
        raise :exc:`~pymongo.errors.InvalidOperation`.
    """
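For illustration, a minimal sketch of the 4.0 behavior (the ping command is just an example operation):
import pymongo
from pymongo.errors import InvalidOperation

client = pymongo.MongoClient()
client.close()
try:
    client.admin.command("ping")  # any use of a closed client
except InvalidOperation:
    print("PyMongo 4.0+: a closed client cannot be reused")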
