How to set a connection timeout in Flask Redis cache - Python

I'm trying to use a Redis cache with my Python code. The code below works fine and sets the keys perfectly, but I want to set a timeout for when it is not able to connect to Redis or when the port is not open.
Unfortunately I could not find any documentation on how to pass a timeout in the connection parameters.
Following is my code:
from flask import Flask, render_template
from flask_caching import Cache

app = Flask(__name__, static_url_path='/static')
config = {
    "DEBUG": True,
    "CACHE_TYPE": "redis",
    "CACHE_DEFAULT_TIMEOUT": 300,
    "CACHE_KEY_PREFIX": "inventory",
    "CACHE_REDIS_HOST": "localhost",
    "CACHE_REDIS_PORT": "6379",
    "CACHE_REDIS_URL": 'redis://localhost:6379'
}
cache = Cache(app, config=config)
socket_timeout = 5

@app.route('/')
@cache.memoize()
def dev():
    # some code
    return render_template("index.html", data=json_data, columns=columns)
When it is not able to connect, it waits for a long time and then throws the following error:
Traceback (most recent call last):
File "/Users/amjad/.virtualenvs/inventory/lib/python3.7/site-packages/flask_caching/__init__.py", line 771, in decorated_function
f, *args, **kwargs
File "/Users/amjad/.virtualenvs/inventory/lib/python3.7/site-packages/flask_caching/__init__.py", line 565, in make_cache_key
f, args=args, timeout=_timeout, forced_update=forced_update
File "/Users/amjad/.virtualenvs/inventory/lib/python3.7/site-packages/flask_caching/__init__.py", line 524, in _memoize_version
version_data_list = list(self.cache.get_many(*fetch_keys))
File "/Users/amjad/.virtualenvs/inventory/lib/python3.7/site-packages/flask_caching/backends/rediscache.py", line 101, in get_many
return [self.load_object(x) for x in self._read_clients.mget(keys)]
File "/Users/amjad/.virtualenvs/inventory/lib/python3.7/site-packages/redis/client.py", line 1329, in mget
return self.execute_command('MGET', *args, **options)
File "/Users/amjad/.virtualenvs/inventory/lib/python3.7/site-packages/redis/client.py", line 772, in execute_command
connection = pool.get_connection(command_name, **options)
File "/Users/amjad/.virtualenvs/inventory/lib/python3.7/site-packages/redis/connection.py", line 994, in get_connection
connection.connect()
File "/Users/amjad/.virtualenvs/inventory/lib/python3.7/site-packages/redis/connection.py", line 497, in connect
raise ConnectionError(self._error_message(e))
redis.exceptions.ConnectionError: Error 60 connecting to localhost:6379. Operation timed out.
Thanks in advance.

This question is fairly old, but I came across this exact problem just now and found a solution. Leaving it here for future readers.
According to the documentation at https://flask-caching.readthedocs.io/en/latest/index.html, the CACHE_TYPE parameter:
Specifies which type of caching object to use. This is an import string that will be imported and instantiated. It is assumed that the import object is a function that will return a cache object that adheres to the cache API.
So make a modified version of their redis function, found in flask_caching.backends.cache like so:
def redis_with_timeout(app, config, args, kwargs):
    try:
        from redis import from_url as redis_from_url
    except ImportError:
        raise RuntimeError("no redis module found")

    # [... extra lines skipped for brevity ...]

    # kwargs set here are passed through to the underlying Redis client
    kwargs["socket_connect_timeout"] = 0.5
    kwargs["socket_timeout"] = 0.5
    return RedisCache(*args, **kwargs)
And use it instead of the default redis like so:
CACHE_TYPE = 'path.to.redis_with_timeout'
And the library will use that one instead, with the custom kwargs passed into the underlying Redis client. Hope that helps.

From the latest documentation, there is a CACHE_OPTIONS config entry that is passed to almost every type of cache backend as keyword arguments:
Entries in CACHE_OPTIONS are passed to the redis client as **kwargs
We can simply pass additional settings like this:
from flask import Flask
from flask_caching import Cache

app = Flask(__name__)
config = {
    "CACHE_TYPE": "redis",
    ...
    "CACHE_REDIS_HOST": "localhost",
    "CACHE_REDIS_PORT": "6379",
    "CACHE_REDIS_URL": 'redis://localhost:6379',
    "CACHE_OPTIONS": {
        "socket_connect_timeout": 5,  # connection timeout in seconds
        "socket_timeout": 5,          # send/recv timeout in seconds
    }
}
cache = Cache(app, config=config)
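If you want to sanity-check the timeout behaviour without Flask in the way, a plain redis-py client with the same options should give up quickly instead of hanging. This is only a sketch; the unroutable address 10.255.255.1 is used purely to force a connect timeout:

import time
import redis

# Same socket options as in CACHE_OPTIONS above; the host is deliberately
# unreachable so the connect attempt times out instead of succeeding.
r = redis.Redis(host="10.255.255.1", port=6379,
                socket_connect_timeout=5, socket_timeout=5)
start = time.time()
try:
    r.ping()
except redis.exceptions.RedisError:
    print("gave up after %.1fs" % (time.time() - start))  # roughly 5 seconds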


How do I pass application context to a child function in flask?

Here is the project structure:
|-- a_api/
|   |-- a1.py
|
|-- b_api/
|   |-- b1.py
|
|-- c_api/
|   |-- c1.py
|   |-- c2.py
|
|-- utils/
|   |-- db.py
|
|-- main.py
db.py connects to Mongo and stores the connection in Flask's g.
from flask import g
from pymongo import MongoClient

mongo_db = 'mongo_db'

def get_mongo_db():
    """Function will create a connection to mongo db for the current request.

    Returns:
        mongo_db: The connection to Mongo DB
    """
    if mongo_db not in g:
        print('New Connection Created for mongo db')
        mongo_client = MongoClient('the_url')
        # Store the Client
        g.mongo_db = mongo_client
    else:
        print('Old Connection reused for mongo db')
    # Return The db
    return g.mongo_db['db_name']
main.py calls two functions, one in a1.py and one in b1.py.
For a1: it interacts directly with db.py and updates data. This happens without any error and the task completes successfully.
For b1: it first calls c1 in a separate process, which uses db.py and updates data - but in this case an error is thrown telling me to "set up an application context with app.app_context()".
How do I pass the application context to db.py when it is called from c1, which is called from b1?
How do I create a single connection point to MongoDB and use it across all requests or processes in Flask?
Traceback (most recent call last):
File "C:\Users\kunda\AppData\Local\Programs\Python\Python310\lib\multiprocessing\process.py", line 315, in _bootstrap
self.run()
File "C:\Users\kunda\AppData\Local\Programs\Python\Python310\lib\multiprocessing\process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "W:\xx\offers_api\offer_ops.py", line 60, in update_offer
response = aser(id=id, d=d)
File "W:\xx\offers_api\offer_ops.py", line 86, in aser
x = get_mongo_db()
File "W:\xx\utils\db.py", line 13, in get_mongo_db
if mongo_db not in g:
File "C:\Users\kunda\AppData\Local\Programs\Python\Python310\lib\site-packages\werkzeug\local.py", line 278, in __get__
obj = instance._get_current_object()
File "C:\Users\kunda\AppData\Local\Programs\Python\Python310\lib\site-packages\werkzeug\local.py", line 407, in _get_current_object
return self.__local() # type: ignore
File "C:\Users\kunda\AppData\Local\Programs\Python\Python310\lib\site-packages\flask\globals.py", line 40, in _lookup_app_object
raise RuntimeError(_app_ctx_err_msg)
RuntimeError: Working outside of application context.
This typically means that you attempted to use functionality that needed
to interface with the current application object in some way. To solve
this, set up an application context with app.app_context(). See the
documentation for more information.
Try something like this:
from flask import g
from pymongo import MongoClient

# import main flask app
from X import app

mongo_db = 'mongo_db'

def get_mongo_db():
    """Function will create a connection to mongo db for the current request.

    Returns:
        mongo_db: The connection to Mongo DB
    """
    # if you hit a circular dependency error, try importing app here instead
    from X import app

    with app.app_context():
        if mongo_db not in g:
            print('New Connection Created for mongo db')
            mongo_client = MongoClient('the_url')
            # Store the Client
            g.mongo_db = mongo_client
        else:
            print('Old Connection reused for mongo db')
        # Return The db
        return g.mongo_db['db_name']
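If the goal is simply one connection per process that also works outside a request (for example in the multiprocessing child), an alternative is to skip flask.g entirely and keep a lazily created module-level client. This is only a sketch of that idea, not part of the answer above:

from pymongo import MongoClient

_mongo_client = None

def get_mongo_db():
    global _mongo_client
    if _mongo_client is None:
        # Create the client lazily, after any fork, as pymongo recommends.
        _mongo_client = MongoClient('the_url')
    return _mongo_client['db_name']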

Gremlin Python - "Server disconnected - please try to reconnect" error

I have a Flask web app in which I want to keep a persistent connection to an AWS Neptune graph database. This connection is established as follows:
from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
neptune_endpt = 'db-instance-x.xxxxxxxxxx.xx-xxxxx-x.neptune.amazonaws.com'
remoteConn = DriverRemoteConnection(f'wss://{neptune_endpt}:8182/gremlin','g')
self.g = traversal().withRemote(remoteConn)
The issue I'm facing is that the connection automatically drops off if left idle, and I cannot find a way to detect if the connection has dropped off (so that I can reconnect by using the code snippet above).
I have seen this similar issue: "Gremlin server withRemote connection closed - how to reconnect automatically?", but that question has no solution either, and this similar question has no answer either.
I've tried the following two solutions (both of which did not work):
I set up my web app behind four Gunicorn workers with a timeout of 100 seconds, hoping that worker restarts would take care of the Gremlin timeouts.
I tried catching exceptions to detect if the connection has dropped off. Every time I use self.g to do some traversal on my graph, I try to "refresh" the connection, by which I mean this:
def _refresh_neptune(self):
    try:
        self.g = traversal().withRemote(self.conn)
    except:
        self.conn = DriverRemoteConnection(f'wss://{neptune_endpt}:8182/gremlin', 'g')
        self.g = traversal().withRemote(self.conn)
Here self.conn was initialized as:
self.conn = DriverRemoteConnection(f'wss://{neptune_endpt}:8182/gremlin','g')
Is there any way to get around this connection error?
Thanks
Update: Added the error message below:
File "/home/ubuntu/.virtualenvs/rundev/lib/python3.6/site-packages/gremlin_python/process/traversal.py
", line 58, in toList
return list(iter(self))
File "/home/ubuntu/.virtualenvs/rundev/lib/python3.6/site-packages/gremlin_python/process/traversal.py
", line 48, in __next__
self.traversal_strategies.apply_strategies(self)
File "/home/ubuntu/.virtualenvs/rundev/lib/python3.6/site-packages/gremlin_python/process/traversal.py
", line 573, in apply_strategies
traversal_strategy.apply(traversal)
File "/home/ubuntu/.virtualenvs/rundev/lib/python3.6/site-packages/gremlin_python/driver/remote_connec
tion.py", line 149, in apply
remote_traversal = self.remote_connection.submit(traversal.bytecode)
File "/home/ubuntu/.virtualenvs/rundev/lib/python3.6/site-packages/gremlin_python/driver/driver_remote
_connection.py", line 56, in submit
results = result_set.all().result()
File "/usr/lib/python3.6/concurrent/futures/_base.py", line 425, in result
return self.__get_result()
File "/usr/lib/python3.6/concurrent/futures/_base.py", line 384, in __get_result
raise self._exception
File "/home/ubuntu/.virtualenvs/rundev/lib/python3.6/site-packages/gremlin_python/driver/resultset.py"
, line 90, in cb
f.result()
File "/usr/lib/python3.6/concurrent/futures/_base.py", line 425, in result
return self.__get_result()
File "/usr/lib/python3.6/concurrent/futures/_base.py", line 384, in __get_result
raise self._exception
File "/usr/lib/python3.6/concurrent/futures/thread.py", line 56, in run
result = self.fn(*self.args, **self.kwargs)
File "/home/ubuntu/.virtualenvs/rundev/lib/python3.6/site-packages/gremlin_python/driver/connection.py
", line 83, in _receive
status_code = self._protocol.data_received(data, self._results)
File "/home/ubuntu/.virtualenvs/rundev/lib/python3.6/site-packages/gremlin_python/driver/protocol.py",
line 81, in data_received
'message': 'Server disconnected - please try to reconnect', 'attributes': {}})
gremlin_python.driver.protocol.GremlinServerError: 500: Server disconnected - please try to reconnect
I am not sure that this is the best way to solve this, but I'm also using gremlin-python and Neptune and I've had the same issue. I worked around it by implementing a Transport that you can provide to DriverRemoteConnection.
DriverRemoteConnection(
    url=endpoint,
    traversal_source=self._traversal_source,
    transport_factory=Transport
)
gremlin-python returns connections to the pool on exception, and the exception raised when a connection is closed is GremlinServerError, which is also raised for other errors.
gremlin_python/driver/connection.py#L69 -
gremlin_python/driver/protocol.py#L80
The custom transport is the same as gremlin-python's TornadoTransport, but the read and write methods are extended to:
Reopen closed connections, if the web socket client is closed
Raise a StreamClosedError, if the web socket client returns None from read_message
Dead connections that are added back to the pool can then be reopened, and you can handle the StreamClosedError to apply some retry logic (see the sketch further below). I did it by overriding the submit and submitAsync methods in DriverRemoteConnection, but you could catch and retry anywhere.
from tornado import httpclient, ioloop, websocket
from tornado.iostream import StreamClosedError

from gremlin_python.driver.transport import AbstractBaseTransport


class Transport(AbstractBaseTransport):
    def __init__(self):
        self._ws = None
        self._loop = ioloop.IOLoop(make_current=False)
        self._url = None
        # Because the transport will try to reopen the underlying ws connection,
        # track if the closed() method has been called to prevent the transport
        # from reopening.
        self._explicit_closed = True

    @property
    def closed(self):
        return not self._ws.protocol

    def connect(self, url, headers=None):
        self._explicit_closed = False
        # Set the endpoint URL
        self._url = httpclient.HTTPRequest(url, headers=headers) if headers else url
        # Open the connection
        self._connect()

    def write(self, message):
        # Before writing, try to ensure that the connection is open.
        if self.closed:
            self._connect()
        self._loop.run_sync(lambda: self._ws.write_message(message, binary=True))

    def read(self):
        result = self._loop.run_sync(self._ws.read_message)
        # If the read call returns None, the stream has closed.
        if result is None:
            self._ws.close()  # Ensure we close the stream
            raise StreamClosedError()
        return result

    def close(self):
        self._ws.close()
        self._loop.close()
        self._explicit_closed = True

    def _connect(self):
        # If close() was called explicitly on the transport, don't allow
        # subsequent calls to write() to reopen the connection.
        if self._explicit_closed:
            raise TransportClosedError(
                "Transport has been closed and can not be reopened."
            )
        # Check if the ws is closed; if it is not, close it.
        if self._ws and not self.closed:
            self._ws.close()
        # Open the ws connection
        self._ws = self._loop.run_sync(
            lambda: websocket.websocket_connect(url=self._url)
        )


class TransportClosedError(Exception):
    pass
This will work with gremlin-python's connection pooling as well.
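To illustrate the retry part mentioned above, a minimal wrapper might look like the following. This is only a sketch (the class name and the single-retry policy are my own assumptions, not from the answer); it relies on the custom Transport above reopening the socket on the next write:

from tornado.iostream import StreamClosedError
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

class RetryingRemoteConnection(DriverRemoteConnection):
    """Retry a traversal once when the underlying stream was closed."""
    def submit(self, bytecode):
        try:
            return super(RetryingRemoteConnection, self).submit(bytecode)
        except StreamClosedError:
            # The custom Transport reopens the web socket on the next write,
            # so a single retry usually recovers from an idle disconnect.
            return super(RetryingRemoteConnection, self).submit(bytecode)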
If you don't need pooling, an alternate approach is to set the pool size to 1 and implement some form of keep-alive, as discussed in TINKERPOP-2352.
It looks like the web socket ping/keep-alive in gremlin-python is not implemented yet (TINKERPOP-1886).
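An application-level keep-alive along those lines could be as simple as a background thread that runs a cheap traversal now and then. The interval and the use of g.inject(0) below are my own assumptions, not something Neptune or TinkerPop prescribe:

import threading
import time

def start_keep_alive(g, interval=240):
    """Issue a trivial traversal periodically so the idle connection is not dropped."""
    def _ping():
        while True:
            try:
                g.inject(0).next()  # cheap round-trip to the server
            except Exception:
                pass  # let the normal reconnect/retry logic handle real failures
            time.sleep(interval)
    threading.Thread(target=_ping, daemon=True).start()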

Does using functionality that needs to interface with the current app mean creating a view in Flask?

In my Flask app's app/__init__.py I have included the function below:
def create_app(config_filename=None):
    _app = Flask(__name__, instance_relative_config=True)
    with _app.app_context():
        print "Creating web app", current_app.name
        cors = CORS(_app, resources={r'/*': {"origins": "*"}})
        sys.path.append(_app.instance_path)
        _app.config.from_pyfile(config_filename)
        from config import app_config
        config = app_config[_os.environ.get('APP_SETTINGS', app_config.development)]
        _app.config.from_object(config)
        register_blueprints(_app)
    # _app.app_context().push()
    return _app
Now I also have app/datastore/core.py, in which I have:
import peewee
import os
from flask import g, current_app

cfg = current_app.config
dbName = 'clinic_backend'

def connect_db():
    """Connects to the specific database."""
    return peewee.MySQLDatabase(dbName,
                                user=cfg['DB_USER'],
                                host=cfg['DB_HOST'],
                                port=3306,
                                password=cfg['DB_PWD'])

def get_db():
    """Opens a new database connection if there is none yet for the
    current application context.
    """
    if not hasattr(g, 'db'):
        g.db = connect_db()
    return g.db
When I run my app via start.py it creates the app object, but when I try to access the URL in the browser I get the following error:
File "/Users/ciasto/Development/python/backend/app/datastore/core.py",
line 31, in get_db
g.db = connect_db() File "/Users/ciasto/Development/python/backend/venv/lib/python2.7/site-packages/werkzeug/local.py",
line 364, in <lambda>
__setattr__ = lambda x, n, v: setattr(x._get_current_object(), n, v) File
"/Users/ciasto/Development/python/backend/venv/lib/python2.7/site-packages/werkzeug/local.py",
line 306, in _get_current_object
return self.__local() File "/Users/ciasto/Development/python/backend/venv/lib/python2.7/site-packages/flask/globals.py",
line 44, in _lookup_app_object
raise RuntimeError(_app_ctx_err_msg) RuntimeError: Working outside of application context.
This typically means that you attempted to use functionality that
needed to interface with the current application object in some way.
To solve this, set up an application context with app.app_context().
See the documentation for more information.
what am I doing wrong here?
what am I doing wrong here?
Just about everything.
Peewee already exposes database connections as a thread-local, so what you're doing is unnecessary. Especially given that, from reading your comments, you are only trying to add connection hooks.
The peewee docs are QUITE CLEAR: http://docs.peewee-orm.com/en/latest/peewee/database.html#flask
As an aside, read the damn docs. How many questions have you posted already that could have been easily answered just by reading the documentation?
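For completeness, the pattern from the linked peewee docs boils down to request hooks like the following (a sketch with placeholder credentials; adapt the names to your config):

from flask import Flask
from peewee import MySQLDatabase

app = Flask(__name__)
db = MySQLDatabase('clinic_backend', user='db_user', host='db_host',
                   port=3306, password='db_password')

@app.before_request
def _connect_db():
    # Open a connection for the incoming request.
    db.connect()

@app.teardown_request
def _close_db(exc):
    # Close it again when the request ends, even on errors.
    if not db.is_closed():
        db.close()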

How can I return a complex object type (python-ldap connection) from a Pyro4 Daemon?

I've got a Pyro4 daemon going which I would like to have return a connection to LDAP (instantiated by the python-ldap module). The code is short and simple, but I run into an error with (I believe) serialization of the connection object upon my attempt to return the connection to the client script.
import os

import ldap
import Pyro4


class LDAPDaemon(object):
    def get_ldap_connection(self):
        conn = ldap.initialize("ldap://ds1")
        conn.simple_bind_s("cn=Directory Manager", "abc123")
        return conn


daemon = Pyro4.Daemon(unixsocket="/tmp/ldap_unix.sock")
os.system("chmod 700 /tmp/ldap_unix.sock")
uri = daemon.register(LDAPDaemon(), "LDAPDaemon")
daemon.requestLoop()
Then in my driver script, I have the following (assume uri is known, cut all that out for brevity's sake):
with Pyro4.Proxy(uri) as ldap_daemon:
    conn = ldap_daemon.get_ldap_connection()
This results in the following error:
Traceback (most recent call last):
File "./tester.py", line 14, in <module>
conn = ldap_daemon.get_ldap_connection()
File "/opt/csw/lib/python2.6/site-packages/Pyro4/core.py", line 160, in __call__
return self.__send(self.__name, args, kwargs)
File "/opt/csw/lib/python2.6/site-packages/Pyro4/core.py", line 318, in _pyroInvoke
raise data
AttributeError: __class__
I tried changing the Pyro4 configuration to accept different serializers, i.e.:
Pyro4.config.SERIALIZERS_ACCEPTED = set(['json', 'marshal', 'serpent', 'pickle'])
but that didn't change anything.
Please ignore the glaring security holes as this was dumbed down to the most basic code to produce the error.
You guessed right. The LDAPObject is not serializable.
Arguments passed to a remote object and the return values of its methods are serialized and then sent through a socket; objects that are not serializable will cause errors. You should consider User's comment: create a proxy for the connection instead of sending it to the other process, or you will have to find a way to serialize it.
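One way to apply that advice is to keep the LDAP connection inside the daemon and expose only the operations you need, so that only plain, serializable results cross the Pyro boundary. A rough sketch (the search method and its parameters are illustrative, not from the question):

import ldap
import Pyro4


@Pyro4.expose
class LDAPService(object):
    def __init__(self):
        # The connection lives in the daemon process and is never returned.
        self._conn = ldap.initialize("ldap://ds1")
        self._conn.simple_bind_s("cn=Directory Manager", "abc123")

    def search(self, base, filterstr="(objectClass=*)"):
        # Returns plain lists/tuples/dicts, which Pyro can serialize.
        return self._conn.search_s(base, ldap.SCOPE_SUBTREE, filterstr)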

Flask mail giving Pickling errors with celery

I'm trying to use Celery (and RabbitMQ) to send emails asynchronously with Flask-Mail. Initially I had an issue with render_template from Flask breaking Celery - Flask-Mail breaks Celery (the Celery task would still execute successfully, but no emails were being sent). While I was trying to fix that issue (which is still not fixed!), I stumbled upon another problem: this pickling error, which is due to a thread lock. I noticed that the problem started when I changed the way I called the Celery task (from delay to apply_async). Since then I have tried reverting my changes, but I still can't get rid of the error. Any help regarding either of the issues would be highly appreciated.
The traceback:
File "/Users/.../python2.7/site-packages/celery/app/amqp.py", line 250, in publish_task
**kwargs
File "/Users/.../lib/python2.7/site-packages/kombu/messaging.py", line 157, in publish
compression, headers)
File "/Users/.../lib/python2.7/site-packages/kombu/messaging.py", line 233, in _prepare
body) = encode(body, serializer=serializer)
File "/Users/.../lib/python2.7/site-packages/kombu/serialization.py", line 170, in encode
payload = encoder(data)
File "/Users/.../lib/python2.7/site-packages/kombu/serialization.py", line 356, in dumps
return dumper(obj, protocol=pickle_protocol)
PicklingError: Can't pickle <type 'thread.lock'>: attribute lookup thread.lock failed
tasks.py
from __future__ import absolute_import
from flask import render_template
from flask.ext.mail import Message
from celery import Celery

celery = Celery('tasks',
                broker='amqp://tester:testing@localhost:5672/test_host')


@celery.task(name="send_async_email")
def send_auth_email(app, nickname, email):
    with app.test_request_context("/"):
        recipients = []
        recipients.append(email)
        subject = render_template("subject.txt")
        msg = Message(subject, recipients=recipients)
        msg.html = render_template("test.html", name=nickname)
        app.mail.send(msg)
In the test case I just call:
send_auth_email.delay(test_app, nick, email)
FYI: The API works perfectly fine if I don't use celery (i.e. synchronously). Thanks in advance!
When you invoke send_auth_email.delay(test_app, nick, email), all function arguments are sent to the task queue. To do so, Celery pickles them.
Short answer: test_app, being a Flask application, uses some magic and cannot be pickled. See the docs for details on what can and cannot be pickled.
One solution is to pass only the necessary plain arguments (in your case it seems that this is only the name) and re-instantiate or import the Flask app inside send_auth_email.
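As a sketch of that idea: keep the task arguments down to plain strings and build (or import) the app inside the worker. The create_app factory, the myapp package, and the mail extension names here are assumptions about the project layout, not code from the question:

from __future__ import absolute_import
from flask import render_template
from flask.ext.mail import Message
from celery import Celery

celery = Celery('tasks',
                broker='amqp://tester:testing@localhost:5672/test_host')


@celery.task(name="send_async_email")
def send_auth_email(nickname, email):
    # Import inside the task so nothing unpicklable crosses the broker.
    from myapp import create_app, mail  # assumed app factory and Mail instance
    app = create_app()
    with app.test_request_context("/"):
        subject = render_template("subject.txt")
        msg = Message(subject, recipients=[email])
        msg.html = render_template("test.html", name=nickname)
        mail.send(msg)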
