I have a Django-based HTTP server, and I use django.core.cache.backends.memcached.MemcachedCache as the client library to access memcached. I want to know whether I can set a timeout (say, 500 ms) so that the call to memcached returns False if it cannot reach the cache within 500 ms, and we fall back to the DB. Is there any such setting?
Haven't tried this before, but you may be able to use threading to set up a timeout for the cache call. Ignore the example in the main body at this link; look at Jim Carroll's comment instead:
http://code.activestate.com/recipes/534115-function-timeout/
Adapted for something you might use:
from threading import Timer
import thread  # Python 2; renamed _thread in Python 3

def timeout():
    thread.interrupt_main()  # raises KeyboardInterrupt in the main thread

timer = Timer(0.5, timeout)  # 500 ms
try:
    timer.start()
    cache.get(stuff)
except KeyboardInterrupt:
    print "Use a function to grab it from the database!"
finally:
    timer.cancel()  # don't fire if the cache call returned in time
I don't have time to test it right now, but my concern would be whether Django itself is threaded, and if so, whether interrupting the main thread is really what you want to do. Either way, it's a potential starting point. I did look for a configuration option that would allow this and found nothing.
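If interrupting the main thread worries you, an alternative is to run the cache call in a worker thread and give up waiting after the timeout. A rough, untested sketch (cache_get_with_timeout is my own name, not a Django API):

import threading

def cache_get_with_timeout(key, timeout=0.5):
    """Return the cached value, or None if memcached doesn't answer in time."""
    result = {}

    def worker():
        result['value'] = cache.get(key)

    t = threading.Thread(target=worker)
    t.daemon = True     # a hung socket won't keep the process alive
    t.start()
    t.join(timeout)
    if t.is_alive():    # still blocked on memcached: fall back to the DB
        return None
    return result.get('value')

Note the worker thread may still be stuck on the socket afterwards; the point is only that the request itself can proceed to the database.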
I have a standard function-based view in Django which receives some parameters via POST after the user has clicked a button, computes something and then returns a template with context.
@csrf_exempt
def myview(request, param1, param2):
    if request.method == 'POST':
        return HttpResponseRedirect(reverse("app1:view_name", args=[param1, param2]))
    # Calculate and database r/w
    template = loader.get_template('showData.html')
    return HttpResponse(template.render(context, request))
It works with no problem as long as one request is processed at a time (tested both with runserver and on an Apache server).
However, when I use two devices and click the button on each simultaneously, the requests get mixed up and run concurrently, and the website ends up throwing a 500 error, or a 404, or sometimes succeeds but fails to GET the static files (again, tested both with runserver and Apache).
How can I force Django to finish the execution of the current request before starting the next?
Or is there a better way to tackle this?
Any light on this will be appreciated. Thanks!
To coordinate threads within a single server process, use
from threading import RLock
lock = RLock()
and then within myview:
lock.acquire()
try:
    ...  # get template, render it
finally:
    lock.release()
You might start your server with $ uwsgi --processes 1 --threads 2 ...
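Putting the pieces together with the view from the question, a minimal sketch (reusing the question's imports and names; the with statement releases the lock even if rendering raises):

from threading import RLock

lock = RLock()  # module level: shared by every thread in this process

@csrf_exempt
def myview(request, param1, param2):
    if request.method == 'POST':
        return HttpResponseRedirect(reverse("app1:view_name", args=[param1, param2]))
    with lock:  # only one thread computes and renders at a time
        # Calculate and database r/w
        template = loader.get_template('showData.html')
        return HttpResponse(template.render(context, request))

This only coordinates threads inside a single process, which is why the uwsgi invocation above pins --processes 1.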
Django's development server on your local machine is not for production; it processes one request at a time. In production you need a WSGI server such as uWSGI, which can be configured to serve more than one request at a time. See https://docs.djangoproject.com/en/2.1/howto/deployment/wsgi/uwsgi/
I'm posting my solution in case it's of any help to others.
In the end I configured Apache with pre-forking to isolate requests from each other. According to the documentation, pre-forking is advised for sites using non-thread-safe libraries (my case, apparently).
With this fix Apache handles simultaneous requests well. However, I'd still be glad to hear other suggestions!
There should be ways to rewrite the code such that things do not get mixed up (at least in many cases this is possible).
One of the prerequisites (if your server uses threading) is to write thread-safe code.
This means not using global variables (which is bad practice anyway), or protecting them with locks,
and making no calls to functions that aren't thread safe (or protecting those with locks too); a small illustration follows below.
As you don't provide any details, we cannot help with this (this = finding a way to avoid making the whole request blocking while keeping data integrity).
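For illustration only (the names here are made up), protecting shared module-level state with a lock might look like:

from threading import Lock

_state_lock = Lock()
_visit_count = 0  # shared by all threads in this process

def bump_visit_count():
    global _visit_count
    with _state_lock:  # make the read-modify-write atomic
        _visit_count += 1
        return _visit_count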
Otherwise you could use a mutex/lock that works across multiple processes.
You could, for example, try to acquire a lock file with
https://pypi.org/project/filelock/ and block until the file is unlocked by the other view.
Example code (after pip install filelock):
from filelock import FileLock

lock = FileLock("my.lock")

with lock:
    if request.method == 'POST':
        return HttpResponseRedirect(reverse("app1:view_name", args=[param1, param2]))
    # Calculate and database r/w
    template = loader.get_template('showData.html')
    return HttpResponse(template.render(context, request))
Note that every process must point at the same lock file path for this to work.
If you use uwsgi, then you could look at the uwsgi implementation of locks:
https://uwsgi-docs.readthedocs.io/en/latest/Locks.html
Here is the example code from the uwsgi documentation:
import time

import uwsgi  # only importable when running under uWSGI

def use_lock_zero_for_important_things():
    uwsgi.lock()  # Implicit parameter 0
    # Critical section
    uwsgi.unlock()  # Implicit parameter 0

def use_another_lock():
    uwsgi.lock(1)
    time.sleep(1)  # Take that, performance! Ha!
    uwsgi.unlock(1)
I currently run a daemon thread that grabs all cell values, calculates if there's a change, and then writes out dependent cells in a loop, i.e.:
from threading import Thread, Event

event = Event()

def f():
    while not event.is_set():
        update()        # fetch all cells, recompute, write dependents
        event.wait(15)  # poll every 15 seconds

Thread(target=f).start()
This works, but the looped get-all calls are significant I/O.
Rather than doing this, it would be much cleaner if the thread was notified of changes by Google Sheets. Is there a way to do this?
I rephrased my comment on gspread GitHub's Issues:
Getting a change notification from Google Sheets is possible with the help of installable triggers in Apps Script. You set up a custom function in the Script editor and assign a trigger event to this function. In this function you can fetch an external URL with UrlFetchApp.fetch.
On the listening end (your web server) you'll have a handler for this url. This handler will do the job. Depending on the server configuration (many threads or processes) make sure to avoid possible race condition.
Also, I haven't tested non-browser-triggered updates. If Sheets fires the same event for that type of update, there could be a risk of infinite loops.
I was able to get this working by triggering an HTTP request whenever Google Sheets detected a change.
On Google Sheets:
// Note: this must be installed as an installable trigger;
// simple onEdit triggers can't call UrlFetchApp.
function onEdit(e) {
  UrlFetchApp.fetch("http://myaddress.com");
}
Python-side (w/ Tornado)
import tornado.ioloop
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        on_edit()
        self.write('Updating.')

def on_edit():
    # Code here
    pass

app = tornado.web.Application([(r'/', MainHandler)])
app.listen(8888)  # replace with your port
tornado.ioloop.IOLoop.current().start()
I don't think this sort of functionality should be within the scope of gspread, but I hope the documentation helps others.
I'm using the AWS Python API (boto3). My script starts a few instances and then waits for them to come online before proceeding. I want the wait to time out after a predefined period, but I can't find any API for that in Python. Any ideas? A snippet of my current code:
def waitForInstance(id):
    runningWaiter = self.ec2c.get_waiter("instance_status_ok")
    runningWaiter.wait(InstanceIds=[id])
    instance = ec2resource.Instance(id)
    return instance.state
I can certainly do something like running this piece of code in a separate thread and terminate it if needed, but I was wondering whether there is already a built in API in boto3 for that and I'm just missing it.
A waiter has a configuration associated with it which can be accessed (using your example above) as:
runningWaiter.config
One of the settings in this config is max_attempts, which controls how many attempts will be made before giving up. The default value is 40. You can change that value like this:
runningWaiter.config.max_attempts = 10
This isn't directly a timeout, as your question asked, but it will cause the waiter to give up earlier.
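Recent boto3 versions also accept a per-call WaiterConfig parameter, which avoids mutating the shared config. A sketch reusing the names from the question:

runningWaiter = self.ec2c.get_waiter("instance_status_ok")
runningWaiter.wait(
    InstanceIds=[id],
    # poll every 15 s, at most 10 times: gives up after roughly 150 s
    WaiterConfig={'Delay': 15, 'MaxAttempts': 10},
)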
Why not check the instance status from time to time?
# code copied from the boto3 docs
for status in ec2.meta.client.describe_instance_status()['InstanceStatuses']:
    print(status)
Reference: http://boto3.readthedocs.org/en/latest/guide/migrationec2.html
BTW, it is better to tag all your instances with a standard naming convention. Querying AWS resources by their raw IDs is a maintenance nightmare.
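For example, tagging an instance right after launch might look like this (ec2c is the client from the question; the Name value is a placeholder):

ec2c.create_tags(
    Resources=[instance_id],
    Tags=[{'Key': 'Name', 'Value': 'web-frontend-01'}],
)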
You could put a sleep timer in your code: sleep for x minutes, check whether it is finished, and go back to sleep if not. After y attempts, take some sort of action.
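A minimal sketch of that idea (the function and constant names are mine; assumes a boto3 EC2 client called ec2c):

import time

MAX_ATTEMPTS = 10  # the "y attempts" before giving up

def wait_for_instance(ec2c, instance_id, delay=30):
    """Poll the instance status; return False if it never reaches 'ok'."""
    for _ in range(MAX_ATTEMPTS):
        statuses = ec2c.describe_instance_status(
            InstanceIds=[instance_id])['InstanceStatuses']
        if statuses and statuses[0]['InstanceStatus']['Status'] == 'ok':
            return True
        time.sleep(delay)  # not ready yet: sleep, then check again
    return False  # timed out: take some other action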
I'm writing an app that connects to a web server (I am the owner of the server) and sends information provided by the user; the server processes that information and sends the result back to the application. The time needed to process the results depends on the user's request (from a few seconds to a few minutes).
I use an infinite loop to check whether the file exists (maybe there is a more intelligent approach... maybe I could estimate the maximum time a request could take and avoid the infinite loop).
The important part of the code looks like this:
import time
import mechanize

br = mechanize.Browser()
br.set_handle_refresh(False)
proxy_values = {'http': 'proxy:1234'}
br.set_proxies(proxy_values)

while True:
    try:
        result = br.open('http://www.example.com/sample.txt').read()
        break
    except:
        pass
    time.sleep(10)
Behind a proxy the loop never ends, but if I change the code to something like this,
time.sleep(200)
result = br.open('http://www.example.com/sample.txt').read()
i.e. I wait long enough to ensure the file is created before trying to read it, I do indeed get the file :-)
It seems that once mechanize asks for a file that does not exist, every subsequent request returns no file either...
I replicated the same behavior using Firefox: I ask for a non-existing file, then I create that file (remember, I am the owner of the server...), and I still cannot get the file.
And with both mechanize and Firefox I can fetch files that have already been deleted...
I think the problem is related to the proxy's cache. I don't think I can delete that cache, but maybe there is some way to tell the proxy I need to recheck whether the file exists?
Any other suggestions to fix this problem?
The simplest solution could be to add an (unused) GET parameter to avoid caching the request, i.e.:
i = 0
while True:
    try:
        result = br.open('http://www.example.com/sample.txt?r=%d' % i).read()
        break
    except:
        i += 1
        time.sleep(10)
The extra parameter should be ignored by the web application.
An HTTP HEAD request is probably the correct way to do this; see this question for an example.
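A rough sketch of that approach with the Python 2 standard library (matching the code above; untested behind a proxy):

import urllib2

class HeadRequest(urllib2.Request):
    def get_method(self):
        return "HEAD"  # ask only about the resource, don't fetch the body

def exists(url):
    try:
        urllib2.urlopen(HeadRequest(url))
        return True
    except urllib2.URLError:
        return False

Once exists() returns True, fetch the file with a normal GET.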
I'm using the django-notification module. https://github.com/pinax/django-notification/blob/master/docs/usage.txt
This is what I do in my code to send an email to a user when something happens:
notification.send([to_user], "comment_received", noti_dict)
But this seems to block the request, and it takes a long time to send the email. I read the docs, and it says it's possible to add it to a queue (asynchronously). How do I add it to an asynchronous queue?
I don't understand what the docs are trying to say. What is "emit_notices"? When do I call it? Do I have a script that calls it every 5 seconds? That seems silly. What's the right way to do it asynchronously? What do I do?
Let's first break down what each one does.
``send_now``
~~~~~~~~~~~~
This is a blocking call that will check each user for eligibility of the
notice and actually perform the send.
``queue``
~~~~~~~~~
This is a non-blocking call that will queue the call to ``send_now`` to
be executed at a later time. To later execute the call you need to use
the ``emit_notices`` management command.
``send``
~~~~~~~~
A proxy around ``send_now`` and ``queue``. It gets its behavior from a global
setting named ``NOTIFICATION_QUEUE_ALL``. By default it is ``False``. This
setting is meant to help control whether you want to queue any call to
``send``.
``send`` also accepts ``now`` and ``queue`` keyword arguments. By default
each option is set to ``False`` to honor the global setting which is ``False``.
This enables you to override on a per call basis whether it should call
``send_now`` or ``queue``.
It looks like in your settings file you need to set
NOTIFICATION_QUEUE_ALL = True
And then you need to set up a cron job (maybe every minute or two; one minute is cron's finest granularity) to run something like:
django-admin.py emit_notices
This will run periodically and make the blocking call that sends out all the emails and whatever legwork the notification app needs. If there is nothing to do, the workload isn't that intense.
And before you expand on your comment about this being silly, think about it. It's not really silly at all: you don't want a blocking call tied to a web request, otherwise the user will never get a response back from the server, and sending email is blocking in this sense.
Now, if the person were only going to see the notification when they next log in, you probably wouldn't need to go this way. But in your case, sending emails means an external call to sendmail (or whatever you use to send email), so you should do it asynchronously.
According to those docs, send just wraps send_now and queue. So if you want to send the notifications asynchronously instead of synchronously, you have two options:
Change your settings:
# This flag will make all messages default to async
NOTIFICATION_QUEUE_ALL = True
Use the queue keyword argument:
notification.send([to_user], "comment_received", noti_dict, queue=True)
If you queue the notifications, you will have to run the emit_notices management command periodically, so you could put that in a cron job to run every couple of minutes.
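For example, a crontab entry along these lines (both paths are placeholders for your environment) would drain the queue every two minutes:

*/2 * * * * /path/to/virtualenv/bin/python /path/to/project/manage.py emit_notices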