Django Redis set max connections - python

I'm using Django and I keep exceeding my maximum number of Redis connections. The library I'm using is:
https://github.com/sebleier/django-redis-cache
Here is my settings.py file:
CACHES = {
    'default': {
        'BACKEND': 'redis_cache.RedisCache',
        'LOCATION': "pub-redis-11905.us-east-1-3.1.ec2.garantiadata.com:11905",
        'OPTIONS': {
            'DB': 0,
            'PASSWORD': "*****",
            'PARSER_CLASS': 'redis.connection.HiredisParser'
        },
    },
}
Then in another file, I do some direct cache access like so:
from django.core.cache import cache
def getResults(self, key):
    return cache.get(key)

Looks like this is an outstanding issue with django-redis-cache - perhaps you should consider a different Redis cache backend for Django that does support connection pooling.
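For example, the django-redis package (a different project from django-redis-cache) documents a pool cap via CONNECTION_POOL_KWARGS. A minimal sketch, reusing the host and password placeholders from the question:

from django.core.cache import cache  # usage stays the same after switching

CACHES = {
    'default': {
        'BACKEND': 'django_redis.cache.RedisCache',
        # Host and password placeholders reused from the question.
        'LOCATION': 'redis://pub-redis-11905.us-east-1-3.1.ec2.garantiadata.com:11905/0',
        'OPTIONS': {
            'CLIENT_CLASS': 'django_redis.client.DefaultClient',
            'PASSWORD': '*****',
            # Caps how many connections the underlying pool will open.
            'CONNECTION_POOL_KWARGS': {'max_connections': 50},
        },
    },
}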

Alternatively, here's how to set max_connections on django-redis-cache itself via a connection pool class:
CACHES = {
    'default': {
        'OPTIONS': {
            'CONNECTION_POOL_CLASS': 'redis.BlockingConnectionPool',
            'CONNECTION_POOL_CLASS_KWARGS': {
                'max_connections': 50,
                'timeout': 20,
                ...
            },
            ...
        },
        ...
    }
}
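Merged with the settings from the question, the full configuration could look like this (the pool size and timeout are illustrative values, not recommendations):

CACHES = {
    'default': {
        'BACKEND': 'redis_cache.RedisCache',
        'LOCATION': 'pub-redis-11905.us-east-1-3.1.ec2.garantiadata.com:11905',
        'OPTIONS': {
            'DB': 0,
            'PASSWORD': '*****',
            'PARSER_CLASS': 'redis.connection.HiredisParser',
            'CONNECTION_POOL_CLASS': 'redis.BlockingConnectionPool',
            'CONNECTION_POOL_CLASS_KWARGS': {
                # BlockingConnectionPool makes callers wait up to `timeout`
                # seconds for a free connection instead of opening new
                # ones beyond max_connections.
                'max_connections': 50,
                'timeout': 20,
            },
        },
    },
}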

Related

django @cache_page decorator not setting a cache

Here's my cache setting:
CACHES = {
    'default': {
        'BACKEND': 'django_redis.cache.RedisCache',
        'LOCATION': 'redis://127.0.0.1:6379',
        'OPTIONS': {
            'CLIENT_CLASS': 'django_redis.client.DefaultClient',
        }
    }
}
And a very basic view with the cache_page decorator:
@api_view(['POST'])
@cache_page(60 * 1)
def sawan_ko_jhari(request):
    data = request.data.get('name')
    return JsonResponse({"success": True, "data": data}, safe=False)
I've been checking the cache keys for every request sent, and I get an empty array.
Is there something I'm missing here?
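One thing worth checking: Django's cache middleware only caches GET and HEAD responses, so a POST view will never be stored by @cache_page. A sketch of a variant the decorator can cache, keeping the view from the question and assuming DRF's request.query_params for the input:

from django.http import JsonResponse
from django.views.decorators.cache import cache_page
from rest_framework.decorators import api_view

@cache_page(60 * 1)  # cache_page above api_view, the order DRF's caching docs use
@api_view(['GET'])
def sawan_ko_jhari(request):
    # For a GET request the value arrives as a query parameter.
    data = request.query_params.get('name')
    return JsonResponse({"success": True, "data": data}, safe=False)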

Eve: how to use different endpoints to access the same collection with different filters

I have an Eve app publishing a simple read-only (GET) interface over a MongoDB collection called centroids, which has documents like:
[
    {
        "name": "kachina chasmata",
        "location": {
            "type": "Point",
            "coordinates": [-116.65, -32.6]
        },
        "body": "ariel"
    },
    {
        "name": "hokusai",
        "location": {
            "type": "Point",
            "coordinates": [16.65, 57.84]
        },
        "body": "mercury"
    },
    {
        "name": "cañas",
        "location": {
            "type": "Point",
            "coordinates": [89.86, -31.188]
        },
        "body": "mars"
    },
    {
        "name": "anseris cavus",
        "location": {
            "type": "Point",
            "coordinates": [95.5, -29.708]
        },
        "body": "mars"
    }
]
Currently, the (Eve) settings declare a DOMAIN as follows:
crater = {
    'hateoas': False,
    'item_title': 'crater centroid',
    'url': 'centroid/<regex("[\w]+"):body>/<regex("[\w ]+"):name>',
    'datasource': {
        'projection': {'name': 1, 'body': 1, 'location.coordinates': 1}
    }
}

DOMAIN = {
    'centroids': crater,
}
This will successfully answer requests of the form http://hostname/centroid/<body>/<name>. Inside MongoDB it corresponds to a query like db.centroids.find({body:<body>, name:<name>}).
What I would also like to offer is an endpoint for all the documents of a given body, i.e. a request to http://hostname/centroids/<body> would return the list of all documents with body == <body>: db.centroids.find({body:<body>}).
How do I do that?
I gave it a shot by including a list of rules under the DOMAIN key centroids (the name of the database collection), like below,
crater = {
    ...
}

body = {
    'item_title': 'body craters',
    'url': 'centroids/<regex("[\w]+"):body>'
}

DOMAIN = {
    'centroids': [crater, body],
}
but it didn't work:
AttributeError: 'list' object has no attribute 'setdefault'
Got it!
I was assuming the keys in the DOMAIN structure were directly related to the collection Eve queries. That is true for the default settings, but it can be adjusted inside the resource's datasource.
I figured that out while handling a situation analogous to the one in the question: I wanted an endpoint hostname/bodies listing all the (unique) values of body in the centroids collection. For that, I needed to set up an aggregation.
The following settings give me exactly that ;)
centroids = {
    'item_title': 'centroid',
    'url': 'centroid/<regex("[\w]+"):body>/<regex("[\w ]+"):name>',
    'datasource': {
        'source': 'centroids',
        'projection': {'name': 1, 'body': 1, 'location.coordinates': 1}
    }
}

bodies = {
    'datasource': {
        'source': 'centroids',
        'aggregation': {
            'pipeline': [
                {"$group": {"_id": "$body"}},
            ]
        },
    }
}

DOMAIN = {
    'centroids': centroids,
    'bodies': bodies
}
The endpoint http://127.0.0.1:5000/centroid/mercury/hokusai, for example, gives me the name, body, and coordinates of mercury/hokusai.
And the endpoint http://127.0.0.1:5000/bodies gives the list of unique values for body in centroids.
Beautiful. Thumbs up to Eve!
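Following the same insight, the original goal (an endpoint listing all documents of a given body) could presumably be covered by one more resource that reuses the collection through datasource.source. A sketch, with the resource name craters_by_body chosen purely for illustration:

craters_by_body = {
    'item_title': 'body craters',
    # The URL placeholder acts as the filter, so GET /centroids/<body>
    # maps to db.centroids.find({body: <body>}).
    'url': 'centroids/<regex("[\w]+"):body>',
    'datasource': {
        'source': 'centroids',
        'projection': {'name': 1, 'body': 1, 'location.coordinates': 1},
    },
}

DOMAIN['craters_by_body'] = craters_by_body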

Is it possible to have multiple indexes in Haystack with Elasticsearch, using the real-time signal for auto-update?

I have multiple indexes in Elasticsearch via Haystack, and I am trying to auto-update them with RealtimeSignalProcessor. Is that supported by Haystack?
Here is the link I followed.
The same thing worked very well for a single index.
I suspect something is wrong with the HAYSTACK_CONNECTIONS in my settings; please suggest the correct syntax.
I don't have any specific need to write custom SignalProcessors. Is there a way to use Haystack's off-the-shelf RealtimeSignalProcessor?
I referred to this question, but it was not helpful.
HAYSTACK_CONNECTIONS = {
    'default': {
        'ENGINE': 'haystack.backends.elasticsearch_backend.ElasticsearchSearchEngine',
        'URL': 'http://127.0.0.1:9200/',
        'INDEX_NAME': 'haystack',
        'INCLUDE_SPELLING': True,
    },
    'Hello': {
        'ENGINE': 'haystack.backends.elasticsearch_backend.ElasticsearchSearchEngine',
        'URL': 'http://127.0.0.1:9200/',
        'INDEX_NAME': 'helloindex',
        'INCLUDE_SPELLING': True,
    },
    'Note': {
        'ENGINE': 'haystack.backends.elasticsearch_backend.ElasticsearchSearchEngine',
        'URL': 'http://127.0.0.1:9200/',
        'INDEX_NAME': 'noteindex',
        'INCLUDE_SPELLING': True,
    },
}
Thank you in advance.
Yes, it's possible.
I was able to solve this issue by using Django-Haystack's routers.
In settings.py I did this:
HAYSTACK_CONNECTIONS = {
    'My_Testing': {
        'ENGINE': 'haystack.backends.elasticsearch_backend.ElasticsearchSearchEngine',
        'URL': 'http://127.0.0.1:9200/',
        'INDEX_NAME': 'my_testing',
        'INCLUDE_SPELLING': True,
        'EXCLUDED_INDEXES': ['talks.search_indexes.NoteIndex'],
    },
    'Note': {
        'ENGINE': 'haystack.backends.elasticsearch_backend.ElasticsearchSearchEngine',
        'URL': 'http://127.0.0.1:9200/',
        'INDEX_NAME': 'note',
        'INCLUDE_SPELLING': True,
        'EXCLUDED_INDEXES': ['talks.search_indexes.My_TestingIndex'],
    },
    'default': {
        'ENGINE': 'haystack.backends.elasticsearch_backend.ElasticsearchSearchEngine',
        'URL': 'http://127.0.0.1:9200/',
        'INDEX_NAME': 'haystack',
        # 'INCLUDE_SPELLING': True,
    },
}

HAYSTACK_ROUTERS = ['talks.routers.My_TestingRouter',
                    'talks.routers.NoteRouter']
HAYSTACK_SIGNAL_PROCESSOR = 'haystack.signals.RealtimeSignalProcessor'
and in the routers.py file, which is at the same level as search_indexes.py, add this:
from haystack import routers

class My_TestingRouter(routers.BaseRouter):
    def for_write(self, **hints):
        return 'My_Testing'

    def for_read(self, **hints):
        return 'My_Testing'

class NoteRouter(routers.BaseRouter):
    def for_write(self, **hints):
        return 'Note'

    def for_read(self, **hints):
        return 'Note'
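As a variation, the hints Haystack's signal processor passes to for_write include the saved instance, so a single router could pick the connection per model instead of hard-coding one alias per router. A sketch, with the model check purely illustrative:

from haystack import routers

class PerModelRouter(routers.BaseRouter):
    # Illustrative: RealtimeSignalProcessor calls for_write(instance=obj),
    # so the connection alias can be chosen from the saved model.
    def for_write(self, **hints):
        instance = hints.get('instance')
        if instance is not None and instance._meta.model_name == 'note':
            return 'Note'
        return 'My_Testing'

    def for_read(self, **hints):
        return 'default'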
Hope this helps somebody someday.
peace.

Get stats of a specific Django memcached cache

I can get the stats for a Memcached server in Python like this:
import memcache

host = memcache._Host('127.0.0.1:11211')
host.connect()
host.send_cmd('stats')

stats = {}
while True:
    # Each stats line looks like "STAT <key> <value>"; "END" terminates.
    line = host.readline().split(None, 2)
    if line[0] == "END":
        break
    stat, key, value = line
    try:
        value = int(value)
    except ValueError:
        pass
    stats[key] = value

host.close_socket()
print(stats)
Using several caches in Django, how do I get the stats for a specific one, e.g. the "store" cache in this configuration:
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211',
        'OPTIONS': {'MAX_ENTRIES': 4000},
    },
    'store': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211',
        'OPTIONS': {'MAX_ENTRIES': 1000},
    },
}
I'd like to find out whether MAX_ENTRIES is large enough for our purposes, so I need to know how many items are currently in the "default" cache and in the "store" cache.
UPDATE: AFAICS, Memcached does not support MAX_ENTRIES, and the different cache names are only used for Django's key prefixes. There are likely no separate cache units inside Memcached.
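To read the stats through Django itself, the python-memcached client behind each configured cache is reachable via the backend's _cache attribute. A sketch; note that _cache is a private attribute of Django's memcached backend, so this is an assumption that may break between versions:

from django.core.cache import caches

# caches['store']._cache is the underlying memcache.Client
# (private API of Django's memcached backend).
client = caches['store']._cache
for server, server_stats in client.get_stats():
    # curr_items counts every item on the daemon, whichever Django cache
    # alias wrote it, since both aliases share the same memcached server.
    print(server, server_stats.get('curr_items'))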

Django's cache framework not working in a Django-Celery application

I'm using Django + Celery to asynchronously process data.
Here is my settings.py:
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
        'LOCATION': 'unique-snowflake'
    }
}
And here is my Celery task:
from celery import shared_task
from django.core.cache import cache

@shared_task
def process():
    my_data = cache.get('hello')
    if my_data is None:
        my_data = 'something'
        cache.set('hello', my_data)
It's very simple. However, every time I call the task, cache.get('hello') always returns None. I have no clue why. Could someone help me?
I also tried with Memcached and these settings:
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211',
        'TIMEOUT': 60 * 60 * 60 * 24,
        'OPTIONS': {
            'MAX_ENTRIES': 5000,
        }
    }
}
Of course, memcached is running as a daemon. But the code is still not working...
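Two things are worth checking here. First, LocMemCache is per-process: the web process and each Celery worker process hold separate in-memory caches, so a value set in one is invisible to the others. Second, in the Memcached settings above, TIMEOUT = 60 * 60 * 60 * 24 is 5,184,000 seconds (about 60 days); memcached treats any expiration longer than 30 days as an absolute Unix timestamp, which lies in the past, so entries expire immediately. A sketch of settings that avoid both pitfalls:

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211',
        # Keep the timeout under memcached's 30-day limit; larger values
        # are read as absolute Unix timestamps and expire at once.
        'TIMEOUT': 60 * 60 * 24,  # one day
    }
}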
