I can get the stats for a Memcached server in Python like this:
import memcache

host = memcache._Host('127.0.0.1:11211')
host.connect()
host.send_cmd('stats')

stats = {}
while True:
    # Each stats line looks like "STAT <key> <value>"
    line = host.readline().split(None, 2)
    if line[0] == "END":
        break
    stat, key, value = line
    try:
        value = int(value)
    except ValueError:
        pass
    stats[key] = value

host.close_socket()
print(stats)
Using several caches in Django, how do I get the stats for a specific one, e.g. the "store" cache in this configuration:
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211',
        'OPTIONS': {'MAX_ENTRIES': 4000},
    },
    'store': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211',
        'OPTIONS': {'MAX_ENTRIES': 1000},
    },
}
I'd like to find out if MAX_ENTRIES is large enough for our purposes, so I need to know how many items are currently in the "default" cache and in the "store" cache.
UPDATE: AFAICS, Memcached does not support MAX_ENTRIES, and the different cache names are only used as Django's key prefixes. There are likely no separate cache units inside Memcached.
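One way to read the server-side item count through Django itself is via the underlying python-memcached client. A sketch (the `_cache` attribute is private to the python-memcached backend and may change across Django versions; and since both aliases point to the same server, stats are server-wide, not per-alias):

```python
def total_curr_items(server_stats):
    """Sum curr_items across the [(server_name, {stat: value, ...}), ...]
    pairs returned by memcache.Client.get_stats()."""
    return sum(int(stats['curr_items']) for _, stats in server_stats)

# With Django, the client for one alias is reachable like this
# (private attribute, python-memcached backend only):
#   from django.core.cache import caches
#   client = caches['store']._cache
#   print(total_curr_items(client.get_stats()))

# The helper works the same on a fabricated stats payload:
print(total_curr_items([('127.0.0.1:11211', {'curr_items': '42'})]))
```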
Here's my cache setting:
CACHES = {
'default': {
'BACKEND': 'django_redis.cache.RedisCache',
'LOCATION': 'redis://127.0.0.1:6379',
'OPTIONS': {
'CLIENT_CLASS': 'django_redis.client.DefaultClient',
}
}
}
And a very basic view with the cache_page decorator:
@api_view(['POST'])
@cache_page(60 * 1)
def sawan_ko_jhari(request):
    data = request.data.get('name')
    return JsonResponse({"success": True, "data": data}, safe=False)
I've been checking the cache keys after every request, and I always get an empty array.
Is there something I'm missing here?
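One likely culprit (an observation about Django, not a verified diagnosis of this exact setup): the cache middleware behind `cache_page` only stores responses to GET and HEAD requests, so a POST-only view is never cached at all. The method check amounts to:

```python
def is_cacheable(method):
    # cache_page / UpdateCacheMiddleware only store responses
    # to "safe" request methods.
    return method.upper() in ("GET", "HEAD")

print(is_cacheable("POST"))  # False -- POST responses bypass the page cache
print(is_cacheable("GET"))   # True
```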
I have an Eve app publishing a simple read-only (GET) interface. It sits in front of a MongoDB collection called centroids, which holds documents like:
[
  {
    "name": "kachina chasmata",
    "location": {
      "type": "Point",
      "coordinates": [-116.65, -32.6]
    },
    "body": "ariel"
  },
  {
    "name": "hokusai",
    "location": {
      "type": "Point",
      "coordinates": [16.65, 57.84]
    },
    "body": "mercury"
  },
  {
    "name": "cañas",
    "location": {
      "type": "Point",
      "coordinates": [89.86, -31.188]
    },
    "body": "mars"
  },
  {
    "name": "anseris cavus",
    "location": {
      "type": "Point",
      "coordinates": [95.5, -29.708]
    },
    "body": "mars"
  }
]
Currently, (Eve) settings declare a DOMAIN as follows:
crater = {
    'hateoas': False,
    'item_title': 'crater centroid',
    'url': 'centroid/<regex("[\w]+"):body>/<regex("[\w ]+"):name>',
    'datasource': {
        'projection': {'name': 1, 'body': 1, 'location.coordinates': 1}
    }
}

DOMAIN = {
    'centroids': crater,
}
This successfully answers requests of the form http://hostname/centroid/<body>/<name>, which inside MongoDB corresponds to a query like db.centroids.find({body:<body>, name:<name>}).
What I would also like is to offer an endpoint for all the documents of a given body, i.e. a request to http://hostname/centroids/<body> would return the list of all documents with body==<body>: db.centroids.find({body:<body>}).
How do I do that?
I gave it a shot by including a list of rules under the DOMAIN key centroids (the name of the database collection), like below,
crater = {
    ...
}

body = {
    'item_title': 'body craters',
    'url': 'centroids/<regex("[\w]+"):body>'
}

DOMAIN = {
    'centroids': [crater, body],
}
but it didn't work:
AttributeError: 'list' object has no attribute 'setdefault'
Got it!
I was assuming the keys in the DOMAIN structure were directly tied to the collection Eve queries. That is true by default, but it can be adjusted through the resource's datasource.
I figured that out while handling a situation analogous to the one in the question: I wanted an endpoint hostname/bodies listing all the (unique) values of body in the centroids collection. For that, I needed to attach an aggregation to it.
The following settings give me exactly that ;)
centroids = {
    'item_title': 'centroid',
    'url': 'centroid/<regex("[\w]+"):body>/<regex("[\w ]+"):name>',
    'datasource': {
        'source': 'centroids',
        'projection': {'name': 1, 'body': 1, 'location.coordinates': 1}
    }
}

bodies = {
    'datasource': {
        'source': 'centroids',
        'aggregation': {
            'pipeline': [
                {"$group": {"_id": "$body"}},
            ]
        },
    }
}

DOMAIN = {
    'centroids': centroids,
    'bodies': bodies
}
The endpoint http://127.0.0.1:5000/centroid/mercury/hokusai, for example, gives me the name, body, and coordinates of mercury/hokusai.
And the endpoint http://127.0.0.1:5000/bodies, the list of unique values for body in centroids.
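For the per-body listing the question originally asked about, the same `source` trick should work: declare a third resource pointing at the same collection (an untested sketch; the resource name `craters_by_body` is made up):

```python
craters_by_body = {
    'item_title': 'body craters',
    'url': 'centroids/<regex("[\w]+"):body>',
    'datasource': {
        # Point this resource at the same MongoDB collection
        # as the two resources above.
        'source': 'centroids',
        'projection': {'name': 1, 'body': 1, 'location.coordinates': 1},
    },
}

DOMAIN = {
    'centroids': centroids,
    'bodies': bodies,
    'craters_by_body': craters_by_body,
}
```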
Beautiful. Thumbs up to Eve!
I'm using Django + Celery to asynchronously process data.
Here is my settings.py:
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
        'LOCATION': 'unique-snowflake'
    }
}
And here is my Celery task:
from celery import shared_task
from django.core.cache import cache

@shared_task
def process():
    my_data = cache.get('hello')
    if my_data is None:
        my_data = 'something'
        cache.set('hello', my_data)
It's very simple. However, every time I call the task, cache.get('hello') always returns None. I have no clue why. Could someone help me?
I also tried Memcached with these settings:
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211',
        'TIMEOUT': 60 * 60 * 60 * 24,
        'OPTIONS': {
            'MAX_ENTRIES': 5000,
        }
    }
}
Of course, memcached is running as a daemon, but the code still doesn't work.
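Two facts about these backends may explain the behavior (offered as likely explanations, not a verified diagnosis of this setup). First, LocMemCache is per-process: the Django process and each Celery worker hold their own private cache, so a value set in one is invisible to the others. Second, memcached treats any expiration longer than 30 days as an absolute Unix timestamp, and the TIMEOUT above exceeds that cutoff, so memcached reads it as a date in March 1970 and expires entries immediately:

```python
THIRTY_DAYS = 60 * 60 * 24 * 30   # memcached's cutoff: 2,592,000 seconds
timeout = 60 * 60 * 60 * 24       # the TIMEOUT above: 5,184,000 seconds

# Anything above the cutoff is interpreted as a Unix timestamp;
# 5,184,000 is a moment in 1970, i.e. already expired.
print(timeout, timeout > THIRTY_DAYS)
```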
I'm using Django and having issues exceeding my max number of redis connections. The library I'm using is:
https://github.com/sebleier/django-redis-cache
Here is my settings.py file:
CACHES = {
    'default': {
        'BACKEND': 'redis_cache.RedisCache',
        'LOCATION': "pub-redis-11905.us-east-1-3.1.ec2.garantiadata.com:11905",
        'OPTIONS': {
            'DB': 0,
            'PASSWORD': "*****",
            'PARSER_CLASS': 'redis.connection.HiredisParser'
        },
    },
}
Then in another file, I do some direct cache access like so:
from django.core.cache import cache

def getResults(self, key):
    return cache.get(key)
It looks like this is an outstanding issue with django-redis-cache - perhaps you should consider a different Redis cache backend for Django that does support connection pooling.
Here's how to use django-redis-cache with a connection pool and max_connections set:
CACHES = {
    'default': {
        'OPTIONS': {
            'CONNECTION_POOL_CLASS': 'redis.BlockingConnectionPool',
            'CONNECTION_POOL_CLASS_KWARGS': {
                'max_connections': 50,
                'timeout': 20,
                ...
            },
            ...
        },
        ...
    }
}
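For context on what `redis.BlockingConnectionPool` changes: when all max_connections are checked out, a caller waits up to `timeout` seconds for one to be released instead of failing immediately. A toy model of those semantics using only the stdlib (a sketch of the idea, not redis-py's actual implementation):

```python
import queue

class TinyBlockingPool:
    """Toy blocking connection pool: hand out at most max_connections
    tokens; block up to timeout seconds waiting for a free one."""

    def __init__(self, max_connections, timeout):
        self.timeout = timeout
        self._free = queue.Queue()
        for i in range(max_connections):
            self._free.put(f"conn-{i}")  # stand-ins for real connections

    def get_connection(self):
        try:
            # Blocks until a connection is released or timeout elapses.
            return self._free.get(timeout=self.timeout)
        except queue.Empty:
            raise TimeoutError("no free connection within timeout")

    def release(self, conn):
        self._free.put(conn)

pool = TinyBlockingPool(max_connections=2, timeout=0.1)
a = pool.get_connection()
b = pool.get_connection()
pool.release(a)
c = pool.get_connection()  # succeeds: a slot was just released
print(c)
```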
According to the docs, you should use a KEY_PREFIX when sharing a cache instance between servers. My question is: at what point does the KEY_PREFIX get applied? Using MemcachedStats, here is a basic example:
from memcached_stats import MemcachedStats
from django.core.cache import get_cache

cache = get_cache('default')
assert len(cache._servers) == 1
mem = MemcachedStats(*cache._servers[0].split(":"))

# Verify there is no key yet
cache.get("TEST") == None
key = next((x for x in mem.keys() if "TEST" in x))

# Create a key
cache.set("TEST", "X", 30)
key = next((x for x in mem.keys() if "TEST" in x))
print(key)
# ':1:TEST'
At this point something looks off - the prefix slot in the key is empty, yet KEY_PREFIX is set in the settings, or so I think:
from django.conf import settings

print(settings.KEY_PREFIX)
# 'beta'
print(settings.SITE_ID)
# 2
print(settings.CACHE_MIDDLEWARE_KEY_PREFIX)
# 'beta'
So, is this just a bug?
Interesting problem. It turns out you need to look very closely at the documentation and notice that KEY_PREFIX is a subkey of each entry in CACHES - a top-level KEY_PREFIX setting is ignored. You need to define it like this:
CACHE_MIDDLEWARE_KEY_PREFIX = 'staging'

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'production_cache_server:11211',
        'KEY_PREFIX': CACHE_MIDDLEWARE_KEY_PREFIX,
    }
}
This is also the way to define a KEY_FUNCTION. I verified that this works too:
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'production.jxycyn.cfg.usw1.cache.amazonaws.com:11211',
        'KEY_FUNCTION': 'apps.core.cache_utils.make_key',
    }
}
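The ':1:TEST' key seen earlier is exactly what Django's default key function produces when the per-cache prefix is empty: it joins prefix, version, and key with colons. A custom KEY_FUNCTION replaces a callable with this shape (a sketch mirroring Django's default behavior):

```python
def make_key(key, key_prefix, version):
    # Mirrors Django's default key function: "<prefix>:<version>:<key>".
    # With no per-cache KEY_PREFIX, the prefix slot is an empty string,
    # which yields keys like ':1:TEST'.
    return '%s:%s:%s' % (key_prefix, version, key)

print(make_key('TEST', '', 1))      # ':1:TEST'
print(make_key('TEST', 'beta', 1))  # 'beta:1:TEST'
```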