I have implemented Django caching using Redis, following this blog: https://realpython.com/caching-in-django-with-redis/
I followed it, installed the package, and added this to my settings:
CACHES = {
    "default": {
        "BACKEND": "redis_cache.RedisCache",
        "LOCATION": "redis://127.0.0.1:8000/",
        "OPTIONS": {
            "CLIENT_CLASS": "django_redis.client.DefaultClient"
        },
        "KEY_PREFIX": "example"
    }
}
Then in views.
from django.conf import settings
from django.core.cache.backends.base import DEFAULT_TIMEOUT
from django.views.decorators.cache import cache_page
CACHE_TTL = getattr(settings, 'CACHE_TTL', DEFAULT_TIMEOUT)
and then added the decorators to the view function:
@cache_page(CACHE_TTL)
@login_required_dietitian
def patient_profile(request, id):
    data = {}
    return render(request, 'profile.html', {'data': data})
And then I am getting this error while I run the server
redis.exceptions.ConnectionError: Connection closed by server.
I am new to this caching technique. Any suggestions on how to resolve this issue?
Your configuration specifies Redis on port 8000, but Redis runs on port 6379 by default. It looks like the cache client is trying to connect to your Django app (which is what's listening on port 8000), hence the connection error. Redis runs as a separate process, listening for requests on port 6379.
First of all, follow this guide to install Redis on your system and start it: https://computingforgeeks.com/how-to-install-redis-on-fedora/ (in my case it is Fedora, and there is a link to an Ubuntu guide on that page).
Then change the port from 8000 to 6379 in LOCATION and you'll be up and running.
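For reference, a minimal sketch of the corrected setting, assuming Redis runs locally on its default port. Note that the backend here is django_redis.cache.RedisCache, which matches the django_redis client class (the question mixed in redis_cache.RedisCache); adjust the db number and prefix to taste.
CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": "redis://127.0.0.1:6379/1",
        "OPTIONS": {
            "CLIENT_CLASS": "django_redis.client.DefaultClient"
        },
        "KEY_PREFIX": "example"
    }
}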
I'd also recommend this as a tutorial on Redis for caching.
Related
return {
    'BACKEND': 'django_redis.cache.RedisCache',
    'LOCATION': f'redis://{self.REDIS_URL}:{self.REDIS_PORT}',
    'OPTIONS': {
        'CLIENT_CLASS': 'django_redis.client.DefaultClient',
    }
}
This is the Redis configuration I return for CACHES, but even after specifying a different REDIS_URL and port it still connects to localhost at 127.0.0.1:6379.
I want to connect to a different IP/URL and port.
I have tried different 'BACKEND' values, CLIENT_CLASS settings, and pools, playing around with different CACHES types.
I can connect to that Redis URL and port directly using this:
return redis.StrictRedis(host=self.REDIS_URL, port=self.REDIS_PORT, db=0, decode_responses=True, encoding="utf-8")
but not when I set it up through CACHES.
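For reference, django-redis expects the whole connection URL (host, port, optional password, and db number) inside LOCATION, so a sketch along these lines should target the remote instance rather than localhost. REDIS_PASSWORD is a hypothetical attribute; drop the ':password@' part if you don't need it.
return {
    'BACKEND': 'django_redis.cache.RedisCache',
    'LOCATION': f'redis://:{self.REDIS_PASSWORD}@{self.REDIS_URL}:{self.REDIS_PORT}/0',
    'OPTIONS': {
        'CLIENT_CLASS': 'django_redis.client.DefaultClient',
    }
}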
I solved all my redis problems by switching to rabbitmq. I suggest you do the same.
I have taken a free trial of Redis and it gave me an endpoint with a password. I haven't done anything with Redis or Celery before, so I really don't have any idea how it works. In the Celery docs everyone connects to localhost, but how can I connect to this endpoint?
CELERY_BROKER_URL='redis://localhost:6379',
CELERY_RESULT_BACKEND='redis://localhost:6379'
What should I replace this with? Where should I give the password?
My endpoint looks something like this: redis-18394.c252.######.cloud.redislabs.com:18394, Should I add the password at the end of this after a / ?
According to celery's documentation, the format is
redis://:password@hostname:port/db_number
By default, redis has 16 databases so you can use any number from 0-15 for db_number. Use a different db number for broker and result backend.
https://docs.celeryproject.org/en/stable/getting-started/backends-and-brokers/redis.html#configuration
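Applied to the endpoint from the question, the settings would look roughly like this; the password placeholder and the choice of db 0 and 1 are assumptions, so substitute your own values.
# Replace YOUR_PASSWORD with the password Redis Labs gave you.
CELERY_BROKER_URL = 'redis://:YOUR_PASSWORD@redis-18394.c252.######.cloud.redislabs.com:18394/0'
CELERY_RESULT_BACKEND = 'redis://:YOUR_PASSWORD@redis-18394.c252.######.cloud.redislabs.com:18394/1'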
You can use channels_redis for this:
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {
            "hosts": ["redis://:password@your_ip:6379"],
        },
    },
}
I am getting an error
redis.exceptions.ConnectionError: Error 24 connecting to redis-service:6379. Too many open files.
...
OSError: [Errno 24] Too many open files
I know this can be fixed by increasing the ulimit, but I don't think that's the issue here, and this is a service running in a container.
The application starts up and works correctly for 48 hours, and then I get the above error, which implies that the connections are growing steadily over time.
What my application is basically doing:
background_task (ran using celery) -> collects data from postgres and sets it on redis
prometheus reaches the app at '/metrics' which is a django view -> collects data from redis and serves the data using django prometheus exporter
The code looks something like this
views.py
from prometheus_client.core import GaugeMetricFamily, REGISTRY
from my_awesome_app.taskbroker.celery import app

class SomeMetricCollector:

    def get_sample_metrics(self):
        with app.connection_or_acquire() as conn:
            client = conn.channel().client
            result = client.get('some_metric_key')
        return {'some_metric_key': result}

    def collect(self):
        sample_metrics = self.get_sample_metrics()
        for key, value in sample_metrics.items():
            yield GaugeMetricFamily(key, 'This is a custom metric', value=value)

REGISTRY.register(SomeMetricCollector())
tasks.py
# This is my boilerplate taskbroker app
from my_awesome_app.taskbroker.celery import app
# How it's collecting data from postgres is trivial to this issue.
from my_awesome_app.utility_app.utility import some_value_calculated_from_query

@app.task()
def app_metrics_sync_periodic():
    with app.connection_or_acquire() as conn:
        client = conn.channel().client
        client.set('some_metric_key', some_value_calculated_from_query(), ex=21600)
    return True
I don't think the background data collection in tasks.py is causing the Redis connections to grow; rather, it's the Django '/metrics' view in views.py that is causing it.
Can you please tell me what I am doing wrong here, or whether there is a better way to read from Redis in a Django view? The Prometheus instance scrapes the Django application every 5s.
This answer is according to my use case and research.
The issue here, in my view, is that each request to /metrics runs in a new thread in which views.py creates new connections through the Celery broker's connection pool.
This can be handled easily by letting Django manage its own Redis connection pool through the cache backend and letting Celery manage its own Redis connection pool, so that they don't use each other's pools from their respective threads.
Django Side
config.py
# CACHES
# ------------------------------------------------------------------------------
# For more details on options for your cache backend please refer
# https://docs.djangoproject.com/en/3.1/ref/settings/#backend
CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": "redis://localhost:6379/0",
        "OPTIONS": {
            "CLIENT_CLASS": "django_redis.client.DefaultClient",
        },
    }
}
views.py
from prometheus_client.core import GaugeMetricFamily, REGISTRY
# *: Replacing celery app with Django cache backend
from django.core.cache import cache

class SomeMetricCollector:

    def get_sample_metrics(self):
        # *: This is how you will get the new client, which is still context managed.
        with cache.client.get_client() as client:
            result = client.get('some_metric_key')
        return {'some_metric_key': result}

    def collect(self):
        sample_metrics = self.get_sample_metrics()
        for key, value in sample_metrics.items():
            yield GaugeMetricFamily(key, 'This is a custom metric', value=value)

REGISTRY.register(SomeMetricCollector())
This will ensure that Django maintains its own Redis connection pool and does not cause new connections to be spun up unnecessarily.
Celery Side
tasks.py
# This is my boilerplate taskbroker app
from my_awesome_app.taskbroker.celery import app
# How it's collecting data from postgres is trivial to this issue.
from my_awesome_app.utility_app.utility import some_value_calculated_from_query

@app.task()
def app_metrics_sync_periodic():
    with app.connection_or_acquire() as conn:
        # *: This will force celery to always look into the existing connection pool for connection.
        client = conn.default_channel.client
        client.set('some_metric_key', some_value_calculated_from_query(), ex=21600)
    return True
How do I monitor connections?
There is a nice Prometheus Celery exporter which will help you monitor your Celery task activity, though I'm not sure how you could add connection pool and connection monitoring to it.
The easiest way to manually verify whether the connections grow every time /metrics is hit on the web app is:
$ redis-cli
127.0.0.1:6379> CLIENT LIST
...
The CLIENT LIST command will help you see whether the number of connections is growing or not.
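If you prefer to check from Python instead, here is a quick sketch, assuming redis-py is installed and Redis is reachable on localhost:
import redis

r = redis.StrictRedis(host='localhost', port=6379, db=0)
# Each entry returned by CLIENT LIST is one open connection to the server.
print(len(r.client_list()))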
Sadly I don't use queues, but I would recommend using them. This is how my worker runs:
$ celery -A my_awesome_app.taskbroker worker --concurrency=20 -l ERROR -E
I'm having a hard time integrating a create-react-app single-page application with my Flask backend. I want to be able to make a fetch/axios call from my front end like so: axios.get('/getResults') or fetch('/getResults'). Some of the things I have tried (but am not limited to) are specifying the Flask port as 3000, which is the same one used by create-react-app, and using the proxy configuration feature in create-react-app's package.json, but to no avail. I suspect my folder structure and Flask code may be causing this. Below are my folder structure and app.py code. Any help I could get will be appreciated; I can provide additional information if necessary. Thanks
Project
  - build (contains static folder, index.html, other meta files)
  - node_modules
  - public
  - src
  - app.py
  - package.json
  - requirements.txt
app.py:
from flask import Flask, Response, request, jsonify, make_response, send_from_directory, render_template

app = Flask(__name__, static_path='/build/static/')
app.debug = True

@app.route('/')
def root():
    print('Inside root function')
    return app.send_static_file('index.html')

@app.route('/getResults', methods=["GET"])
def results():
    print('Inside getResults path')
    return app.send_static_file('index.html')

@app.route('/postData', methods=["POST"])
def data_results():
    print('Inside postData path')
    data = request.get_json()
    return jsonify(data)

@app.route('/<path:path>')
def send_js(path):
    print("inside send_js fxn")
    return send_from_directory('./build/static', path)

if __name__ == "__main__":
    print("inside main host call")
    app.run(host='0.0.0.0', port=3000)
Errors I get when I run "python app.py" are:
On the terminal: Inside root function
127.0.0.1 - - [12/Jun/2017 09:42:24] "GET / HTTP/1.1" 404 -
On the browser: Not Found - The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.
I was having the exact same issue and I was able to solve it by appeasing Flask with symlinks.
Keep the templates and static directory paths at their defaults, and in the directory with your Flask main file add these symlinks:
ln -s build/ templates
ln -s build/static static
In case you were curious, this was my specific problem, which just involved a few more nested directories but was in essence the same:
Running NPM Build from Flask
You can then use Nurzhan's root configuration:
@app.route('/')
def root():
    print('Inside root function')
    return render_template('index.html')
But you only require your app declaration to be: app = Flask(__name__)
The only thing that doesn't work for me is the favicon, and I will update this answer once I figure that out.
In development mode, you need to configure your create-react-app package.json to forward "ajax" requests to the Flask server.
Here is what my package.json looks like:
{
  "name": "socialite",
  "version": "0.1.0",
  "private": true,
  "proxy": "http://localhost:8080",
  "devDependencies": {
    "react-scripts": "1.0.10"
  },
  "dependencies": {
    "react": "^15.6.1",
    "react-dom": "^15.6.1"
  },
  "scripts": {
    "start": "react-scripts start",
    "build": "react-scripts build",
    "test": "react-scripts test --env=jsdom",
    "eject": "react-scripts eject"
  }
}
See the proxy field? That's where the magic happens: replace its value with the Flask server address. That way, you can take advantage of CRA's hot reloading feature. This is documented in create-react-app as "Proxying API Requests in Development".
Then run your application and go to localhost:3000 (or whatever port yarn opens for you). When you make an API call in JavaScript to the server, for instance fetch('/api/model/'), the nodejs server will forward it to the Flask app. I think the nodejs server looks at the content-type field of the ajax request to know whether it should forward the request to the backend server or not.
I recommend you prefix all your backend routes with something like /api/v1/ so the nginx configuration is neat and easy to write.
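For instance, here is a minimal sketch (with hypothetical names) of grouping the backend routes under /api/v1/ using a Flask Blueprint, which keeps both the create-react-app proxy and a later nginx config simple:
from flask import Blueprint, jsonify

# Hypothetical blueprint; every route defined on it is served under /api/v1/
api = Blueprint('api', __name__, url_prefix='/api/v1')

@api.route('/results', methods=['GET'])
def get_results():
    return jsonify({'results': []})

# In app.py: app.register_blueprint(api)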
I think you have a number of misunderstandings.
create-react-app runs its own dev server on port 3000, and if you try to run your Flask app on the same port on the same machine, it will complain that port 3000 is already in use. So from this we move to another question: the structure of your application.
Will it be a separate reactjs-based client on the frontend and a Flask-based API on the backend, i.e. two separate applications communicating with each other over HTTP? In that case the frontend and backend will usually run on separate servers.
Or will it be one Flask application which uses reactjs in its template pages?
You can fix your current problem of the URL not being found by changing your code to this:
@app.route('/')
def root():
    print('Inside root function')
    return render_template('index.html')
And this:
import os

template_dir = os.path.abspath('build/templates')
app = Flask(__name__, static_path='/build/static/',
            template_folder=template_dir)
Since your templates folder is in the build directory.
I've read the official docs, yet I'm not quite sure I understand how to apply what they say. I've also seen this Q&A; like there, I use a factory pattern. I just can't see the whole picture.
The connection pool, as well as other redis/huey settings, may differ depending on the environment (development, production). How do we wire huey up so we can configure it similarly to the Flask application?
As far as I understand, to fire a task from a view we need to import the tasks module and call the specific task (call a function, passing the relevant params). Where should we instantiate and keep the huey instance?
Should tasks know about the application dependencies? Should we consider another stripped-down Flask app for this matter?
Can you help a little bit?
Here's how I wired it all up.
First off, here's the contents of my project folder:
Get a stripped-down Flask application to be used by your tasks. As was suggested in the post, I created a secondary application factory:
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

from config import config  # wherever your config mapping lives (project-specific)

# global dependencies
db = SQLAlchemy()

def create_app_huey(config_name):
    app = Flask(__name__)
    # apply configuration
    app.config.from_object(config[config_name])
    # init extensions
    db.init_app(app)
    return app
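One thing worth noting (my own addition, not from the original post): since tasks get this stripped-down app rather than the one serving requests, a task that touches the database has to push an app context explicitly. A rough sketch, assuming the factory and db above live in the app package:
import os

from app import create_app_huey, db

app = create_app_huey(config_name=os.getenv('FLASK_ENV') or 'default')

def do_db_work(record_id):
    # Hypothetical task body: db.session queries/commits must happen
    # inside the app context of the secondary application.
    with app.app_context():
        ...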
Create the tasking package. The two important files here are config.py and tasks.py. This post helped a lot. Let's start with the configuration. Note that this is a very simple approach.
# config.py (app.tasking.config)
import os

from huey import RedisHuey

settings__development = {
    'host': 'localhost'
}

settings__testing = {
    'host': 'localhost'
}

settings__production = {
    'host': 'production_server'
}

settings = {
    'development': settings__development,
    'testing': settings__testing,
    'production': settings__production,
    'default': settings__development
}

huey = RedisHuey(**settings[os.getenv('FLASK_ENV') or 'default'])
Then the tasks.py module will look like this:
import os

from app.tasking.config import huey
from app import create_app_huey

app = create_app_huey(config_name=os.getenv('FLASK_ENV') or 'default')

@huey.task()
def create_thumbnails(document):
    pass
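To fire the task from a view (the original question), nothing special is required: import the task and call it, and huey enqueues it and returns immediately. A minimal sketch with a hypothetical route:
from flask import Blueprint, jsonify

from app.tasking.tasks import create_thumbnails

documents = Blueprint('documents', __name__)

@documents.route('/documents/<int:document_id>/thumbnails', methods=['POST'])
def enqueue_thumbnails(document_id):
    # Calling a @huey.task()-decorated function enqueues it; the actual work
    # happens in the consumer process.
    create_thumbnails(document_id)
    return jsonify({'queued': True}), 202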
Run the consumer. Activate your virtual environment, then run from cmd (I'm on Windows):
huey_consumer.py app.tasking.config.huey
where app.tasking.config is the package.package.module path (in my case!) and huey is the name of the huey instance available in that config module. Check your huey instance name.
Reading this helped.