How to connect Celery with redis? - python

I have taken a free trial of Redis and it gave me an endpoint with a password. I haven't done anything with Redis or Celery before, so I really don't have any idea how it works. In the Celery docs everyone connects to localhost, but how can I connect to this endpoint?
CELERY_BROKER_URL='redis://localhost:6379'
CELERY_RESULT_BACKEND='redis://localhost:6379'
What should I replace this with? Where should I give the password?
My endpoint looks something like this: redis-18394.c252.######.cloud.redislabs.com:18394. Should I add the password at the end of this, after a /?

According to Celery's documentation, the format is
redis://:password@hostname:port/db_number
By default, Redis has 16 databases, so you can use any number from 0-15 for db_number. Use a different db number for the broker and the result backend.
https://docs.celeryproject.org/en/stable/getting-started/backends-and-brokers/redis.html#configuration
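For example, with the Redis Cloud endpoint from the question, the settings could look something like this (a sketch; 'your_password' and the db numbers 0 and 1 are placeholders to substitute with your own values):
CELERY_BROKER_URL = 'redis://:your_password@redis-18394.c252.######.cloud.redislabs.com:18394/0'
CELERY_RESULT_BACKEND = 'redis://:your_password@redis-18394.c252.######.cloud.redislabs.com:18394/1'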

You can use channels_redis for this:
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {
            "hosts": ["redis://:password@your_ip:6379"],
        },
    },
}

Related

Redis django connects to localhost redis, even when I change URL and PORT

return {
    'BACKEND': 'django_redis.cache.RedisCache',
    'LOCATION': f'redis://{self.REDIS_URL}:{self.REDIS_PORT}',
    'OPTIONS': {
        'CLIENT_CLASS': 'django_redis.client.DefaultClient',
    },
}
This is my Redis config to put in CACHES; even after specifying a different REDIS_URL and REDIS_PORT, it still connects to localhost 127.0.0.1:6379.
I want to connect to a different host and port.
I tried different BACKEND values, CLIENT_CLASS settings and connection pools, playing around with different CACHES setups.
I can connect to my Redis at that host and port using this:
return redis.StrictRedis(host=self.REDIS_URL, port=self.REDIS_PORT, db=0, decode_responses=True, encoding="utf-8")
but not when I try to set it up through CACHES.
I solved all my Redis problems by switching to RabbitMQ. I suggest you do the same.
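If you do want to stay on Redis, here is a minimal CACHES sketch that points at a remote server; REDIS_HOST and REDIS_PORT below are hypothetical names read from the environment, not part of the original question:
import os

# Hypothetical remote endpoint; substitute your own host and port
REDIS_HOST = os.environ.get("REDIS_HOST", "10.0.0.5")
REDIS_PORT = os.environ.get("REDIS_PORT", "6379")

CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": f"redis://{REDIS_HOST}:{REDIS_PORT}/0",
        "OPTIONS": {
            "CLIENT_CLASS": "django_redis.client.DefaultClient",
        },
    },
}
If this still ends up hitting 127.0.0.1, it is worth checking that the settings module actually sees the remote values at import time, since Django reads CACHES once at startup.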

Django caching with Redis

I have implemented Django caching with Redis by following this blog: https://realpython.com/caching-in-django-with-redis/
I followed it, installed the package, and added this to my settings:
CACHES = {
    "default": {
        "BACKEND": "redis_cache.RedisCache",
        "LOCATION": "redis://127.0.0.1:8000/",
        "OPTIONS": {
            "CLIENT_CLASS": "django_redis.client.DefaultClient"
        },
        "KEY_PREFIX": "example"
    }
}
Then in my views:
from django.conf import settings
from django.core.cache.backends.base import DEFAULT_TIMEOUT
from django.views.decorators.cache import cache_page
CACHE_TTL = getattr(settings, 'CACHE_TTL', DEFAULT_TIMEOUT)
and then added the decorators to the function:
@cache_page(CACHE_TTL)
@login_required_dietitian
def patient_profile(request, id):
    data = {}
    return render(request, 'profile.html', {'data': data})
And then I get this error when I run the server:
redis.exceptions.ConnectionError: Connection closed by server.
I am new to this kind of caching technique. Any suggestions on how to resolve this issue?
Your configuration specifies Redis on port 8000, but Redis by default runs on port 6379. It looks like it is trying to connect to your Django app (which is usually what runs on port 8000), hence the connection error. Redis runs as a separate process, listening for requests on port 6379.
First of all, follow this guide to install Redis on your system and start it: https://computingforgeeks.com/how-to-install-redis-on-fedora/ - in my case it is Fedora, and there is a link for Ubuntu on the page.
Then change the port from 8000 to 6379 in LOCATION, and you'll be up and running.
I'd encourage this for a tutorial on Redis for caching.
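For reference, a corrected version of the settings from the question might look like the following; this is a sketch assuming the django-redis backend and a local Redis on the default port, with the "example" prefix carried over from the question:
CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": "redis://127.0.0.1:6379/1",
        "OPTIONS": {
            "CLIENT_CLASS": "django_redis.client.DefaultClient"
        },
        "KEY_PREFIX": "example"
    }
}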

django channels on aws: daphne and workers running but websocket target unhealthy

I have been following this article - https://blog.mangoforbreakfast.com/2017/02/13/django-channels-on-aws-elastic-beanstalk-using-an-alb/ - to get my django-channels app working on AWS, but only non-WebSocket requests are getting handled.
My channel layer setting is:
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "asgi_redis.RedisChannelLayer",
        "CONFIG": {
            "hosts": [os.environ.get('REDIS_URL', 'redis://localhost:6379')],
        },
        "ROUTING": "malang.routing.channel_routing",
    },
}
I have two target groups, as mentioned in the article: one forwarding the path / to port 80 and /ws/* to port 5000.
My supervisord.conf is:
[program:Daphne]
environment=PATH="/opt/python/run/venv/bin"
command=/opt/python/run/venv/bin/daphne -b 0.0.0.0 -p 5000 malang.asgi:channel_layer
directory=/opt/python/current/app
autostart=true
autorestart=true
redirect_stderr=true
user=root
stdout_logfile=/tmp/daphne.out.log
[program:Worker]
environment=PATH="/opt/python/run/venv/bin"
command= /opt/python/run/venv/bin/python manage.py runworker
directory=/opt/python/current/app
process_name=%(program_name)s_%(process_num)02d
numprocs=4
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/tmp/workers.out.log
When I check the output of supervisorctl status in the AWS logs, it shows both programs running fine, but I still get a 404 response for ws requests.
Please help, and let me know if you need more info.
Does the project run locally? If not, the issue is with the software. If so, the issue is with your deployment. I would check the security group/firewall/ELB configuration to ensure the correct ports are accessible.
It makes no sense to run a Redis backend locally on each instance, assuming you actually deploy more than one instance, which your info doesn't show. Redis is a cache system that allows data sharing across instances; from an architectural point of view it is closer to a DB than to a simple daemon thread. You should use an external Redis cache instead and refer to it in your Django conf:
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "asgi_redis.RedisChannelLayer",
        "ROUTING": "<YOUR_APP>.routing.application",
        "CONFIG": {
            "hosts": ["redis://" + REDIS_URL + ":6379"],
        },
    },
}
See the AWS ElastiCache service for that.
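A minimal sketch of how the REDIS_URL used above could be supplied; the environment variable name and the endpoint value are assumptions for illustration, not from the original answer:
import os

# Hypothetical ElastiCache endpoint, e.g. configured on the Elastic Beanstalk environment
REDIS_URL = os.environ.get("REDIS_URL", "my-cluster.abc123.0001.use1.cache.amazonaws.com")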

How to connect to redis in Django?

CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": "redis://127.0.0.1:6379/1",
        "OPTIONS": {
            "CLIENT_CLASS": "django_redis.client.DefaultClient",
        }
    }
}
I am trying to connect to Redis to save my object in it, but it gives me this error when I try to connect:
Error 10061 connecting to 127.0.0.1:6379. No connection could be made
because the target machine actively refused it
How does it work, and what should I put in LOCATION? I am also behind my company's proxy. I need a more detailed explanation of LOCATION.
If your Redis is password protected, you should have a config like this:
CACHES.update({
    "redis": {
        "BACKEND": "redis_cache.cache.RedisCache",
        "LOCATION": "redis://127.0.0.1:6379/1",
        "OPTIONS": {
            "PASSWORD": "XXXXXXXXXXX",
            "CLIENT_CLASS": "redis_cache.client.DefaultClient",
        },
    },
})
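Alternatively, if you are on the django-redis backend rather than django-redis-cache (an assumption, pick whichever package you actually installed), the password can usually be embedded directly in the LOCATION URL, with a placeholder password shown here:
"LOCATION": "redis://:XXXXXXXXXXX@127.0.0.1:6379/1",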
First start the redis server. Your OS will provide a mechanism to do that, e.g. on some Linuxes you could use systemctl start redis, or /etc/init.d/redis start or similar. Or you could just start it directly with:
$ redis-server
which will run it as a foreground process.
Then try running the redis-cli ping command. Receiving a PONG response indicates that redis is in fact up and running on your local machine:
$ redis-cli ping
PONG
Once you have that working try Django again.
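Once redis-cli ping returns PONG, a quick sanity check that Django itself can reach Redis (a sketch, assuming your CACHES config points at that server) is to round-trip a value through the cache, e.g. from manage.py shell:
from django.core.cache import cache

# If this prints 'pong', Django can talk to the configured cache backend.
cache.set('ping', 'pong', timeout=30)
print(cache.get('ping'))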

Python + Celery manual routing

I've been working on getting manual routing set up with Celery, but I can't seem to get specific tasks into specific queues. Here's roughly what I've got so far:
CELERY_QUEUES = {
    "default": {"binding_key": "default"},
    "medium": {"binding_key": "medium"},
    "heavy": {"binding_key": "heavy"},
}
with the routes defined like
CELERY_ROUTES = ({
    "tasks.some_heavy_task": {
        "queue": "heavy",
        "routing_key": "tasks.heavy"
    }
},)
and the daemons started like
celeryd -l INFO -c 3 -Q heavy
The "some_heavy_task"'s never get run though. When I remove the routing and just have a default queue I can get them to run. What am I doing wrong here, any suggestions?
I created a separate celeryconfig file for each task, with each task stored in its own queue.
Here is an example:
from datetime import timedelta

CELERY_IMPORTS = ('cleaner_on_celery.tasks',)
CELERYBEAT_SCHEDULE = {
    'cleaner': {
        "task": "cleaner_on_celery.tasks.cleaner",
        # CLEANER_TIMEOUT comes from elsewhere in the project settings
        "schedule": timedelta(seconds=CLEANER_TIMEOUT),
    },
}
CELERY_QUEUES = {
    "cleaner": {"exchange": "cleaner", "binding_key": "cleaner"}
}
CELERY_DEFAULT_QUEUE = "cleaner"

# shared settings common to all per-task config files
from celeryconfig import *
At the bottom you can see that I import the common celeryconfig module. This way you can start several celeryd instances. I also recommend running them under supervisord; after creating a supervisord.conf entry for each task, you can easily manage them:
supervisorctl start cleaner
supervisorctl stop cleaner
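For reference, a rough supervisord entry for such a worker could look like the following; the program name, paths and queue name are assumptions based on the config above, not taken from the original answer:
[program:cleaner]
environment=PATH="/path/to/venv/bin"
command=/path/to/venv/bin/celeryd -l INFO -Q cleaner
directory=/path/to/project
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/tmp/cleaner.out.log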
