What is the best way to share a Postgres DB connection across jobs in a Redis queue? I used to have the code below and import the connection in each job with conn = config.CONNECTION. Somehow, since the Redis version was updated on Heroku, this no longer works: the connection gets closed as each job finishes, so I currently have to open and close a new connection in each job.
from rq import Queue
from worker import conn
q = Queue(connection=conn)
q.enqueue(job1, job_timeout='5h')
q.enqueue(job2, job_timeout='5h')
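For reference, each job currently looks roughly like this. It is only a sketch of the workaround: psycopg2, DATABASE_URL, and the job body are placeholders, not the real code.
import os
import psycopg2

def job1():
    # Workaround: every job opens its own connection and closes it when done.
    conn = psycopg2.connect(os.environ["DATABASE_URL"])  # placeholder DSN
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT 1")  # the actual job work goes here
        conn.commit()
    finally:
        conn.close()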
I'm trying to create an app using FastAPI + uvicorn.
The app must be able to handle simultaneous connections, and I cannot guarantee that all the code can be executed in an async/await way.
So I thought I'd use uvicorn's --workers X option to handle simultaneous connections, but then I need to share the same database connection among all the workers.
I tried the following example:
import time
from pymongo.database import Database
from fastapi import Depends, FastAPI
from dynaconf import settings
from pymongo import MongoClient

print("-------> Creating a new MongoDB connection")
db_conn = MongoClient(settings.MONGODB_URI)
db = db_conn.get_database('mydb')

app = FastAPI()

def get_db():
    return db

@app.get("/{id}")
async def main(id: str, db: Database = Depends(get_db)):
    print("Received id: " + id)
    time.sleep(10)
    return {'id': id}
$ uvicorn main:app --workers 2
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO: Started parent process [24730]
-------> Creating a new MongoDB connection
-------> Creating a new MongoDB connection
INFO: Started server process [24733]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Started server process [24732]
INFO: Waiting for application startup.
INFO: Application startup complete.
But I'm getting two MongoDB connections.
How can I share the MongoDB connection and avoid creating a new connection in each worker?
You must not share the connection, as it's stateful: two or more processes would not be able to use a single socket connection successfully.
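What you can do instead is give each worker process its own MongoClient, which maintains its own internal connection pool. A minimal sketch of that pattern, reusing the names from the question (db_conn, settings.MONGODB_URI, mydb); the startup hook is my addition, not something from the original code:
from dynaconf import settings
from fastapi import Depends, FastAPI
from pymongo import MongoClient
from pymongo.database import Database

app = FastAPI()
db_conn = None  # one MongoClient per worker process

@app.on_event("startup")
def connect():
    global db_conn
    # Each uvicorn worker runs its own startup, so the client is created
    # after the fork and is never shared between processes.
    print("-------> Creating a new MongoDB connection")
    db_conn = MongoClient(settings.MONGODB_URI)

def get_db() -> Database:
    return db_conn.get_database("mydb")

@app.get("/{id}")
async def main(id: str, db: Database = Depends(get_db)):
    return {"id": id}

With --workers 2 you will still see the "Creating a new MongoDB connection" line twice, once per worker, and that is expected: each process has its own client and pool.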
I am using RabbitMQ to launch processes on remote hosts located in other parts of the world, e.g. RabbitMQ runs on a host in Oregon and receives a client message to launch processes in Ireland and California.
Most of the time the processes are launched and, when they finish, RabbitMQ returns the output to the client. But sometimes the jobs finish successfully yet RabbitMQ never returns the output, and the client keeps hanging, waiting for the response. These processes can take 10 minutes to execute, so the client hangs for 10 minutes waiting for a response that never comes.
I am using Celery to connect to RabbitMQ, and the client calls are blocking, using task.get(). In other words, the client hangs until it receives the response for its call. I would like to understand why the client did not get the response if the jobs have finished. How can I debug this problem?
Here is my celeryconfig.py
import os
import sys
# add hadoop python to the env, just for the running
sys.path.append(os.path.dirname(os.path.basename(__file__)))
# broker configuration
# medusa-rabbitmq is the name of the host where rabbitmq is running
BROKER_URL = "amqp://celeryuser:celery@medusa-rabbitmq/celeryvhost"
CELERY_RESULT_BACKEND = "amqp"
TEST_RUNNER = 'celery.contrib.test_runner.run_tests'
# for debug
# CELERY_ALWAYS_EAGER = True
# module loaded
CELERY_IMPORTS = ("medusa.mergedirs", "medusa.medusasystem",
"medusa.utility", "medusa.pingdaemon", "medusa.hdfs", "medusa.vote.voting")
I have a question about the PyMongo connection pool (MongoClient):
how is it possible that the cursor ("results" in the following example) still retrieves documents even after the connection was returned to the connection pool by the end_request() call?
mongo_connection_pool = MongoClient(host="127.0.0.1", port=27017)
db_connection = mongo_connection_pool["db_name"]
collection = db_connection["collection"]
results = collection.find()
db_connection.end_request()
for result in results:
    print result
Is there something that I'm missing?
Cheers
In PyMongo 2.x MongoClient.start_request is used to pin a socket from the connection pool to an application thread. MongoClient.end_request removes that mapping (if it exists).
This has no impact on iterating a cursor. For each OP_GET_MORE operation the driver has to execute, it will get a socket out of the pool. If you are in a "request" it will use the request socket for the current thread. If not, it will use any available socket. You can read more about requests here. Note that "requests" no longer exist in PyMongo 3.0.
If you want to "terminate" a cursor you can del the cursor object or call cursor.close().
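For example, a small sketch of closing the cursor from the question explicitly once you are done with it:
results = collection.find()
try:
    for result in results:
        print(result)
finally:
    results.close()  # frees the server-side cursor immediately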
I'm writing a Python script that writes some data to a MongoDB.
I need to close the connection and free the associated resources when it finishes.
How is that done in Python?
Use the close() method on your MongoClient instance:
client = pymongo.MongoClient()
# some code here
client.close()
Cleanup client resources and disconnect from MongoDB.
End all server sessions created by this client by sending one or more endSessions commands.
Close all sockets in the connection pools and stop the monitor threads.
The safest way to close a pymongo connection would be to use it with 'with':
with pymongo.MongoClient(db_config['HOST']) as client:
    db = client[db_config['NAME']]
    item = db["document"].find_one({'id': 1})
    print(item)
Adding to @alexce's answer: it's not always true that the client can simply be reused. If your connection is encrypted, MongoClient won't reconnect:
def close(self):
    ...
    if self._encrypter:
        # TODO: PYTHON-1921 Encrypted MongoClients cannot be re-opened.
        self._encrypter.close()
Also, since version 4.0, after calling close() the client won't reconnect in any case:
def close(self) -> None:
    """Cleanup client resources and disconnect from MongoDB.

    End all server sessions created by this client by sending one or more
    endSessions commands.

    Close all sockets in the connection pools and stop the monitor threads.

    .. versionchanged:: 4.0
        Once closed, the client cannot be used again and any attempt will
        raise :exc:`~pymongo.errors.InvalidOperation`.
    """
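So on 4.0+ the intent is to create a new client rather than reuse a closed one. A small sketch of the behaviour described in that changelog note (the database and collection names are placeholders):
import pymongo
from pymongo.errors import InvalidOperation

client = pymongo.MongoClient()
client.close()

try:
    client.mydb.docs.find_one()
except InvalidOperation:
    # In PyMongo 4.0+ a closed client is never reopened; make a new one.
    client = pymongo.MongoClient()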
We have a python application running with uwsgi, nginx.
We have a fallback mechanism for DBs, i.e. if one server refuses to connect, we connect to another server. The issue is that a connection attempt takes more than 60s to time out.
As nginx times out after 60s, it displays the nginx error page. Where can we change the timeout for connecting to the MySQL servers, so that we can make three connection attempts within the 60s nginx timeout period?
We use Web2py and the default DAL object with the pymysql adapter.
You're talking about the connect_timeout option?
conn = pymysql.connect(host='127.0.0.1', port=3306, user='root', passwd='', db='mysql', connect_timeout=20)
In DAL terms this option would be something like this (not tested):
db = DAL('mysql://username:password@localhost/test', driver_args={'connect_timeout': 20})
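If the goal is to fit three attempts inside nginx's 60-second window, one option is to loop over the fallback hosts yourself with a short per-attempt timeout. A sketch only, not tested; the hosts and credentials are placeholders, and attempts=1 is meant to stop the DAL from retrying each host several times on its own:
from gluon import DAL  # inside a web2py model, DAL is already in scope

hosts = ['db1.example.com', 'db2.example.com', 'db3.example.com']

db = None
for host in hosts:
    try:
        db = DAL('mysql://username:password@%s/test' % host,
                 driver_args={'connect_timeout': 20},
                 attempts=1)
        break  # connected
    except RuntimeError:
        continue  # this server refused or timed out, try the next one

if db is None:
    raise RuntimeError("could not connect to any MySQL server")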