I have a view that is called when the user clicks a button. But if the user clicks the button twice, the view is called a second time even while the first call is still executing.
If I understand correctly, this causes a problem: it stops execution of the first call and starts executing the second. How can I prevent the view from being called twice?
When a request is sent to the server, Django picks it up and sends a response when it's finished. If no one is there to receive the response, nothing happens, but the processing has already been done. If it's important that the result actually reaches the user, you should ask for verification.
When the user sends a second request, both requests are processed and both return a response. If you're working with an API and your frontend isn't rendered on the backend server, the user will probably receive both responses, and depending on the frontend code many things can happen: they might see both responses, or just one, or the first one might be replaced instantly when the second response comes in.
This might lead to problems depending on the processing you're doing. For example, if you're updating a database record and the update can only happen once, you'll probably get an error on the second attempt. It is extremely rare, close to impossible, for the update step of both requests to happen at exactly the same time, depending on your database, your code, and many other things.
There are many ways to handle this problem, most of which depend on your situation, but here are a few options:
1 - Hide the button (set its display to none) so the user can't click it again. This is a very simple solution and works most of the time, unless you have users deliberately trying to abuse your system.
2 - Redirect the user to another page and wait for the response there.
3 - If your view runs heavy, expensive processing, create a queue system with per-user limits. This is usually done in large-scale projects with many users.
4 - Use a rate-limiting system to deny too many requests at once or to block abnormal traffic.
My advice: use jQuery to disable the button while the request is in flight.
$("#btnSubmit").click(function(event){
    event.preventDefault();
    // disable the submit button so it can't be clicked again
    $("#btnSubmit").attr("disabled", true);
    // call your view using AJAX here
});
Then, when the AJAX call has finished, re-enable the button:
$("#btnSubmit").attr("disabled", false);
I am currently writing a script that waits on a website's queue page before I can access its contents.
The queue page is where they let people in randomly. To increase my chances of getting in faster, I am writing a multi-threaded script and having each thread wait in line.
The first thing that came to mind: would session.get() work in this case?
If I send a session GET request every 10 seconds, would I hold my position in the queue, or would I end up at the back?
Some info about the website: they let people in randomly. I am not sure whether refreshing the page resets your chances, but the best thing would be to leave the page open and let it do its thing.
I could use PhantomJS, but I would rather not have over 100 headless browsers open, slowing down my program and computer.
You don't need to keep re-sending the request; as long as you keep the Python application running (and the session object alive, so its cookies are preserved), you should be good.
I have an anchor tag that hits a route which generates a report in a new tab. I am lazy-loading the report specs because I don't want copies of my data both in the original place and on the report object. But collecting that data takes 10-20 seconds.
from flask import render_template

@app.route('/report/')
@app.route('/report/<id>')
def report(id=None):
    report_specs = function_that_takes_20_seconds(id)
    return render_template('report.html', report_specs=report_specs)
I'm wondering what I can do so that the server responds immediately with a spinner and then when function_that_takes_20_seconds is done, load the report.
You are right: an HTTP view is not the place for a long-running task.
You need to think about your system architecture: what can be prepared outside the view, and what computation must happen in real time inside it.
The usual solutions involve adding asynchronous behavior and processing your data in a separate process. People often use task queues like Celery for this.
- Prepare the data in a scheduled process that runs e.g. every 10 minutes
- Cache the results by storing them in a database
- Have the HTTP view always return the last cached version
Alternatively, make your view do an AJAX call via JavaScript ("the spinner approach"). This requires some basic JavaScript skills. However, it doesn't make the results appear any faster to the end user - it's just user-experience smoke and mirrors.
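A minimal sketch of the cache-and-poll idea, kept framework-agnostic so it stays runnable (in a real app the worker would be a Celery task, the cache a database row, and `build_report` stands in for `function_that_takes_20_seconds`; all names here are illustrative):

```python
import threading

# Fake cache: maps report id -> finished result (a database row in real life).
_cache = {}

def build_report(report_id):
    # Placeholder for the slow 20-second computation.
    return "specs for %s" % report_id

def start_report(report_id):
    """Kick off the slow work in the background and return immediately."""
    t = threading.Thread(target=lambda: _cache.__setitem__(report_id, build_report(report_id)))
    t.start()
    return t  # a web view would ignore this; tests can join it

def report_status(report_id):
    """What the polling AJAX endpoint would return."""
    if report_id in _cache:
        return {"ready": True, "specs": _cache[report_id]}
    return {"ready": False}  # frontend keeps showing the spinner
```

The view that renders the page calls start_report() and returns the spinner template at once; a small JavaScript loop polls the status endpoint until "ready" is true, then loads the report.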
So I have a BlobstoreUploadHandler class that uses put_async and wait like so:
x = Model.put_async()
x.wait()
It then passes some data up to the JavaScript so that the user is redirected to the handler serving their file upload. It does this like so:
redirecthref = '%s/serve/%s' % (
    self.request.host_url, Model.uploadid)
self.response.headers['Content-Type'] = 'application/json'
obj = {'success': True, 'redirect': redirecthref}
self.response.write(json.dumps(obj))
This all works well and good; however, it takes a crazy amount of time for the redirect to happen (we're talking minutes), and while the file is uploading the page is completely frozen. I've noticed I am able to access the link that the JavaScript would redirect to even while the upload is happening and the page is frozen. So my question is: what strategies can I pursue to make the redirect happen as soon as the URL becomes available? Is this what the 'callback' parameter of put_async is for, or is this where I should look into url_fetch?
I'm pretty new to this, and any and all help is appreciated. Thanks!
UPDATE:
So I've figured out that the upload is slow for several reasons:
I should be using put() rather than put_async(), which I've found does speed up the upload time. However, something is breaking, and it's giving me a 500 error that looks like:
POST http://example.com/_ah/upload/AMmfu6au6zY86nSUjPMzMmUqHuxKmdTw1YSvtf04vXFDs-…tpemOdVfHKwEB30OuXov69ZQ9cXY/ALBNUaYAAAAAU-giHjHTXes0sCaJD55FiZxidjdpFTmX/ 500 (Internal Server Error)
It still uploads both resources, but the redirect does not work. I believe this is happening on the created upload_url, which is created using
upload_url = blobstore.create_upload_url('/upload')
All that aside, even using put() instead of put_async(), the wait() method still takes an exorbitant amount of time.
If I remove x.wait(), the upload still happens, but the redirect gives me:
IndexError: list index out of range
This error is thrown on the following line of my /serve handler class:
qry = Model.query(Model.uploadid == param).fetch(1)[0]
So in short, I believe the fastest way to serve an entity after upload is to drop x.wait() and instead wrap the query in try/except, so that it keeps retrying until it no longer gets the IndexError.
Like I said, I'm pretty new to this, so actually making this happen is a little over my skill level; any thoughts or comments are greatly appreciated, and I'm always happy to offer more code or explanation. Thanks!
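The retry idea described above can be sketched generically; this is a hedged illustration rather than GAE-specific code (`fetch_one` is a made-up stand-in for the datastore query):

```python
import time

def retry_until_found(fetch_one, attempts=5, delay=0.5):
    """Call fetch_one() until it returns a non-empty list, retrying on IndexError."""
    for attempt in range(attempts):
        try:
            return fetch_one()[0]  # raises IndexError if the list is empty
        except IndexError:
            if attempt == attempts - 1:
                raise  # give up after the last attempt
            time.sleep(delay)
```

On App Engine the underlying issue is eventual consistency of queries; retrying papers over it, whereas fetching the entity by key avoids it entirely.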
Async calls are about sending something to the background when you don't REALLY care when it finishes. It seems to me you're looking for a put.
By definition, put_async isn't meant to finish fast: it defers the work until your instance has time for it. You're looking for a put, I think. It will freeze your application the same way your wait() is doing, but instead of waiting a LONG time for the async call to finish, it starts working on it right away.
As the async documentation says (https://developers.google.com/appengine/docs/java/datastore/async):
"However, if your application needs the result of the get() plus the result of a Query to render the response, and if the get() and the Query don't have any data dependencies, then waiting until the get() completes to initiate the Query is a waste of time."
That doesn't seem to be what you're doing. You're using an async call in a purely synchronous way. It WILL take longer to complete than a simple put. Unless there is some reason to defer the put, you shouldn't use async.
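The point of the quoted passage is that async only pays off when you overlap the wait with other work. A rough, non-GAE illustration using the standard library (the two slow_* functions are stand-ins for independent datastore operations):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def slow_get():
    time.sleep(0.2)   # stand-in for a datastore get()
    return "entity"

def slow_query():
    time.sleep(0.2)   # stand-in for an independent Query
    return ["result"]

def render_response():
    # Start both operations, then wait: total wait ~0.2s, not ~0.4s.
    with ThreadPoolExecutor() as pool:
        get_future = pool.submit(slow_get)
        query_future = pool.submit(slow_query)
        return get_future.result(), query_future.result()
```

Calling one async operation and immediately wait()-ing on it, as in the question, is the degenerate case: nothing overlaps, so nothing is gained.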
Looking back, I wanted to circle back on this, since I solved it shortly after posting. What I discovered was that there was no real way to speed up the upload, other than using put instead of put_async, of course.
But there was a trick to access the blob in my redirect URL other than through Model.uploadid, which was not guaranteed to be consistently available by the time the redirect occurred.
The solution was simply to access the blob using the .key() method of my upload object and pass that into the redirecthref instead of Model.uploadid:
redirecthref = '%s/serve/%s' % (self.request.host_url, self.get_uploads('my_upload_object')[0].key())
Not sure why the .key() lookup bypasses the whole upload wait, but this worked for me.
Thanks,
I spent the last few hours trying to get to know wxPython, because I want to write a GUI program. I found some tutorials (not too many), but all of them just explain how to add yet another kind of widget, down to fancy things like LED number outputs and mouse gestures (this one, e.g., takes it quite far: Another Tutorial). But everything I could find so far does nothing more than create a static GUI that waits for the user to do something, executes some handlers, and waits again. It took me a while to even find out that wx.App plays a part in all of that and that you can subclass it.
I want to write a program that does things without input! The GUI is supposed to be a client that logs in to a server, and when the server sends something, I want the GUI to show what happened. I could not find a single tutorial even mentioning that such programs exist. How can I write such a thing? How does it integrate with wxPython?
Do I need to spawn another thread? Is there a way to hook into the MainLoop and have some code executed periodically that checks for changes and then updates some of those fancy GUI widgets? And is there any page that teaches you how to do this?
First of all, you should figure out how to do what you want WITHOUT a GUI. In this case, you'll need to figure out how to log in to a server. You'll probably need something like paramiko for that. See http://www.lag.net/paramiko/
Once you've got that figured out, you can add it to your GUI - probably in a button handler, so that when the user presses a button, it pops up a dialog asking for a username and password to pass to paramiko to log in to the server.
If the server query takes a long time to execute (like say you're querying a database for a huge set of data), then you'll want to run the query in a separate thread. Why? Because that query will block the GUI's main loop and make your app freeze until it finishes. See the following articles for information on wxPython and threads:
http://wiki.wxpython.org/LongRunningTasks
http://www.blog.pythonlibrary.org/2010/05/22/wxpython-and-threads/
I wrote up a tutorial on making wxPython talk to a socket server, so you might find that useful: http://www.blog.pythonlibrary.org/2013/06/27/wxpython-how-to-communicate-with-your-gui-via-sockets/
I also have an article on making an image viewer that does CRUD operations against a database.
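The general pattern from those threading articles: do the blocking work in a worker thread and hand results back to the GUI thread through a thread-safe channel. A framework-agnostic sketch (in wxPython you'd typically use wx.CallAfter or a wx.Timer for the hand-off; `long_query` is a made-up stand-in for the slow server call):

```python
import queue
import threading

results = queue.Queue()  # thread-safe channel back to the GUI thread

def long_query():
    # Stand-in for the slow server/database call.
    return "rows from server"

def worker():
    # Runs off the GUI thread so the main loop never blocks.
    results.put(long_query())

def poll_gui():
    """What a periodic GUI timer (e.g. wx.Timer) would run."""
    try:
        return results.get_nowait()  # update a widget with this
    except queue.Empty:
        return None                  # nothing yet; keep waiting
```

The GUI stays responsive because the main loop only ever does a non-blocking check of the queue.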
This is probably a truly basic thing that I'm simply having an odd time figuring out in a Python 2.5 app.
I have a process that will take roughly an hour to complete, so I made a backend. To that end, I have a backends.yaml with something like the following:
backends:
- name: mybackend
  options: dynamic
  start: /path/to/script.py
(The script is just raw computation. There's no notion of an active web session anywhere.)
On toy data, this works just fine.
This used to be public, so I would navigate to the page, the script would start, and it would time out after about a minute (the HTTP deadline plus the 30s shutdown grace period, I assume). I figured this was a browser issue, so I repeated the same thing with a cron job. No dice. Then I switched to using a push queue and adding a targeted task, since on paper it looks like that would wait for 10 minutes. Same thing.
All three time out after about a minute, which means I'm not decoupling the request from the backend like I believe I am.
I'm assuming that I need to write a proper handler for the backend to do the work, but I don't exactly know how to write the handler/webapp2 route. Do I handle /_ah/start, or make a new endpoint for the backend? How do I handle the subdomain? It still seems like the wrong thing to do (I'm sticking a long process directly into a request of sorts), but I'm at a loss otherwise.
So the root cause ended up being the following code in the script itself:
models = MyModel.all()
for model in models:
    # Magic happens
I was taking it for granted that the query would automatically batch my Query.all() over many entities, but it was dying at around the 1000th entity. I originally wrote that it was computation-only because I completely ignored the fact that the reads can fail.
The actual solution to the problem we wanted to solve ended up being "use the map-reduce library", since we were trying to look at each model for analysis.
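If full map-reduce is overkill, the other fix for this failure shape is to page through the query with cursors instead of one unbounded iteration. A hedged, framework-free sketch of that batching pattern (`fetch_page` is a made-up stand-in for the datastore's cursor API):

```python
def iterate_in_batches(fetch_page, batch_size=500):
    """Yield every entity, fetching one bounded page at a time.

    fetch_page(limit, cursor) must return (entities, next_cursor),
    with next_cursor None when there are no more pages.
    """
    cursor = None
    while True:
        entities, cursor = fetch_page(batch_size, cursor)
        for entity in entities:
            yield entity
        if cursor is None:   # no more pages
            return
```

Each page is a small, independent read, so one slow or failed fetch no longer kills the whole hour-long job.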