Why does Twisted resource.Resource execute render() twice? - python

I'm new to Twisted. Why is it printing "render()" twice? I know that if I return server.NOT_DONE_YET it will only print once, but I want to return a string/JSON instead. Any help?
Code:
from twisted.web import resource, server
from twisted.internet import reactor
import simplejson

class WResource(resource.Resource):
    isLeaf = True

    def __init__(self):
        print "resource started"

    def render(self, request):
        print "render()"
        request.setHeader('Content-Type', 'application/json')
        return simplejson.dumps(dict(through_port=8080, subdomain='hello'))

reactor.listenTCP(9000, server.Site(WResource()))
reactor.run()
Output:
> python server.py
resource started
render()
render()

Because your web browser is also requesting favicon.ico. If you print request.postpath in your render method, you'll see that only one of the two requests is hitting the page you expect.
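For example (a minimal sketch; the extra print of request.uri is my addition, not part of the original answer), you can log which URL each call is serving:

def render(self, request):
    # One call shows your page ("/"), the other shows "/favicon.ico" from the browser.
    print "render() for", request.uri
    request.setHeader('Content-Type', 'application/json')
    return simplejson.dumps(dict(through_port=8080, subdomain='hello'))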

Related

How can I send server-sent events from Flask while accessing the request context?

I am trying to use Flask to send a stream of events to a front-end client as documented in this question. This works fine if I don't access anything in the request context, but fails as soon as I do.
Here's an example to demonstrate.
from time import sleep
from flask import Flask, request, Response

app = Flask(__name__)

@app.route('/events')
def events():
    return Response(_events(), mimetype="text/event-stream")

def _events():
    while True:
        # yield "Test"  # Works fine
        yield request.args[0]  # Throws RuntimeError: Working outside of request context
        sleep(1)
Is there a way to access the request context for server-sent events?
You can use the @copy_current_request_context decorator to make a copy of the request context that your event stream function can use:
from time import sleep
from flask import Flask, request, Response, copy_current_request_context

app = Flask(__name__)

@app.route('/events')
def events():
    @copy_current_request_context
    def _events():
        while True:
            # yield "Test"  # Works fine
            yield request.args[0]
            sleep(1)
    return Response(_events(), mimetype="text/event-stream")
Note that to use this decorator, the target function must be moved inside the view function that has the source request.
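For testing, a minimal client sketch like the one below can help; the host and port are assumptions, and it presumes the generator yields something printable (for example the commented-out "Test" line). Using stream=True makes requests print events as they arrive instead of waiting for the response to complete:

import requests

# Keep the connection open and print events as they arrive.
resp = requests.get("http://localhost:5000/events", stream=True)
for line in resp.iter_lines():
    if line:
        print(line.decode())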

How to get an errback from an interrupted response using Klein?

This page describes how to set an errback that fires when the connection to
the client is lost before the response is generated.
Is there a way to do something similar using klein?
The code from the referenced page, which works with twisted.web, is below. I would like something like:
request.notifyFinish().addErrback(self._responseFailed, call)
that is, code that fires an errback when the request does not finish, but that works with Klein.
from twisted.web.resource import Resource
from twisted.web.server import NOT_DONE_YET
from twisted.internet import reactor

class DelayedResource(Resource):
    def _delayedRender(self, request):
        request.write("<html><body>Sorry to keep you waiting.</body></html>")
        request.finish()

    def _responseFailed(self, err, call):
        call.cancel()

    def render_GET(self, request):
        call = reactor.callLater(5, self._delayedRender, request)
        request.notifyFinish().addErrback(self._responseFailed, call)
        return NOT_DONE_YET

resource = DelayedResource()
Klein handlers are passed a regular Twisted Web Request object, so you can use the same notifyFinish method on it to be notified of an interrupted response.
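A minimal sketch of what that can look like with Klein, mirroring the Twisted example above (the route name, port, and 5-second delay are just illustrative):

from klein import Klein
from twisted.internet import reactor
from twisted.web.server import NOT_DONE_YET

app = Klein()

def _delayedRender(request):
    request.write(b"<html><body>Sorry to keep you waiting.</body></html>")
    request.finish()

def _responseFailed(err, call):
    # The client disconnected before the response was written; cancel the pending render.
    call.cancel()

@app.route('/delayed')
def delayed(request):
    # Klein hands the handler the ordinary twisted.web Request, so notifyFinish works as usual.
    call = reactor.callLater(5, _delayedRender, request)
    request.notifyFinish().addErrback(_responseFailed, call)
    return NOT_DONE_YET

app.run("localhost", 8080)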

Process multiple requests simultaneously and return the result using the Python Klein module

Hi, I am using the Klein Python module for my web server.
I need to run each request separately as a thread and also need to return the result.
But Klein waits until the completion of a single request before processing another request.
I also tried using deferToThread from the twisted module, but it also processes requests only after the first request completes.
Similarly, I tried the @inlineCallbacks method and it produces the same result.
Note: these methods work perfectly when there is nothing to return, but I need to return the result.
I have attached a sample code snippet below:
import time
import klein
import requests
from twisted.internet import threads

def test():
    print "started"
    x = requests.get("http://google.com")
    time.sleep(10)
    return x.text

app = klein.Klein()

@app.route('/square/submit', methods=['GET'])
def square_submit(request):
    return threads.deferToThread(test)

app.run('localhost', 8000)
As @notorious.no suggested, the code is valid and it works.
To prove it, check out this code:
# app.py
from datetime import datetime
import json
import time
import random
import string

import requests
import treq
from klein import Klein
from twisted.internet import task
from twisted.internet import threads
from twisted.web.server import Site
from twisted.internet import reactor, endpoints

app = Klein()

def test(y):
    print(f"test called at {datetime.now().isoformat()} with arg {y}")
    x = requests.get("http://www.example.com")
    time.sleep(10)
    return json.dumps([{
        "time": datetime.now().isoformat(),
        "text": x.text[:10],
        "arg": y
    }])

@app.route('/<string:y>', methods=['GET'])
def index(request, y):
    return threads.deferToThread(test, y)

def send_requests():
    # send 3 concurrent requests
    rand_letter = random.choice(string.ascii_letters)
    for i in range(3):
        y = rand_letter + str(i)
        print(f"request send at {datetime.now().isoformat()} with arg {y}")
        d = treq.get(f'http://localhost:8080/{y}')
        d.addCallback(treq.content)
        d.addCallback(lambda r: print("response", r.decode()))

loop = task.LoopingCall(send_requests)
loop.start(15)  # repeat every 15 seconds

reactor.suggestThreadPoolSize(3)

# disable unwanted logs
# app.run("localhost", 8080)
# this way reactor logs only print calls
web_server = endpoints.serverFromString(reactor, "tcp:8080")
web_server.listen(Site(app.resource()))
reactor.run()
Install treq, klein, and requests, then run it:
$ python3.6 -m pip install treq klein requests
$ python3.6 app.py
The output should be
request send at 2019-12-28T13:22:27.771899 with arg S0
request send at 2019-12-28T13:22:27.779702 with arg S1
request send at 2019-12-28T13:22:27.780248 with arg S2
test called at 2019-12-28T13:22:27.785156 with arg S0
test called at 2019-12-28T13:22:27.786230 with arg S1
test called at 2019-12-28T13:22:27.786270 with arg S2
response [{"time": "2019-12-28T13:22:37.853767", "text": "<!doctype ", "arg": "S1"}]
response [{"time": "2019-12-28T13:22:37.854249", "text": "<!doctype ", "arg": "S0"}]
response [{"time": "2019-12-28T13:22:37.859076", "text": "<!doctype ", "arg": "S2"}]
...
As you can see, Klein does not block the requests.
Furthermore, if you decrease the thread pool size to 2:
reactor.suggestThreadPoolSize(2)
Klein will execute the first 2 requests and wait until a thread is free again before serving the third.
And "async alternatives", suggested by #notorious.no are discussed here.
But Klein waits until the completion of a single request before processing another request.
This is not true. In fact, there's absolutely nothing wrong with the code you've provided. Simply running your example server at tcp:localhost:8000 and using the following curl commands invalidates your claim:
curl http://localhost:8000/square/submit & # run in background
curl http://localhost:8000/square/submit
Am I correct in assuming you're testing the code in a web browser? If you are, then you're experiencing a "feature" of most modern browsers: the browser will make only a single request per URL at a given time. One way around this in the browser is to add a bogus query string to the end of the URL, like so:
http://localhost:8000/square/submit
http://localhost:8000/square/submit?bogus=0
http://localhost:8000/square/submit?bogus=1
http://localhost:8000/square/submit?bogus=2
However, a very common mistake new Twisted/Klein developers tend to make is to write blocking code, thinking that Twisted will magically make it async. Example:
@app.route('/square/submit')
def square_submit(request):
    print("started")
    x = requests.get('https://google.com')  # blocks the reactor
    time.sleep(5)                           # blocks the reactor
    return x.text
Code like this will handle requests sequentially and should be rewritten using async alternatives, as sketched below.
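For instance, here is a sketch of such an alternative, using treq (already used in the test script above) and task.deferLater in place of requests and time.sleep; the route and delay simply mirror the earlier example:

import treq
from klein import Klein
from twisted.internet import reactor, task
from twisted.internet.defer import inlineCallbacks, returnValue

app = Klein()

@app.route('/square/submit', methods=['GET'])
@inlineCallbacks
def square_submit(request):
    print("started")
    response = yield treq.get('https://google.com')   # non-blocking HTTP request
    text = yield response.text()
    yield task.deferLater(reactor, 5, lambda: None)   # non-blocking "sleep"
    returnValue(text)

app.run('localhost', 8000)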

Simple async server with Tornado in Python

I want to write a simple async http server with Tornado.
It is not clear to me how to set the callback in order to free the server for additional requests while the current request is processed.
The code I wrote is:
import tornado.web
from tornado.ioloop import IOLoop
from tornado import gen
import time

class TestHandler(tornado.web.RequestHandler):
    @gen.coroutine
    def post(self, *args, **kwargs):
        json_input = tornado.escape.json_decode(self.request.body)
        print('Now in POST. body: {}'.format(json_input))
        self.perform_long_task(*args, **json_input)

    @gen.coroutine
    def perform_long_task(self, **params):
        time.sleep(10)
        self.write(str(params))
        self.finish()

application = tornado.web.Application([
    (r"/test", TestHandler),
])
application.listen(9999)
IOLoop.instance().start()
To test, I tried to send a few POST requests in parallel:
curl -v http://localhost:9999/test -X POST -H "Content-Type: application/json" -d '{"key1": "val1", "key2": "val2"}' &
Currently the server is blocked while perform_long_task() is processed.
I need help making the server non-blocking.
Never use time.sleep in Tornado code!
http://www.tornadoweb.org/en/latest/faq.html#why-isn-t-this-example-with-time-sleep-running-in-parallel
Do this in your code instead:
class TestHandler(tornado.web.RequestHandler):
    @gen.coroutine
    def post(self, *args, **kwargs):
        json_input = tornado.escape.json_decode(self.request.body)
        print('Now in POST. body: {}'.format(json_input))
        # NOTE: yield here
        yield self.perform_long_task(*args, **json_input)

    @gen.coroutine
    def perform_long_task(self, **params):
        yield gen.sleep(10)
        self.write(str(params))
        # NOTE: no need for self.finish()
You don't need to call self.finish - when the "post" coroutine finishes, Tornado automatically finishes the request.
You must yield self.perform_long_task(), though, otherwise Tornado will end your request early, before you've called "self.write()".
Once you make these changes, two "curl" commands will show that you're doing concurrent processing in Tornado.
I'm still using time.sleep() because my code calls other code whose implementation I can't control.
The FAQ http://www.tornadoweb.org/en/latest/faq.html#why-isn-t-this-example-with-time-sleep-running-in-parallel describes three methods. The third one is what I needed.
The only change I needed in my code was to replace:
yield self.perform_long_task(*args, **json_input)
which works only for a class that is written for async, with:
yield executor.submit(self.perform_long_task, *args, **json_input)
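For context, executor here is a thread pool that isn't shown in the snippet; a minimal sketch of how it can be wired up is below (the pool size is arbitrary). It relies on Tornado coroutines being able to yield concurrent.futures.Future objects directly:

from concurrent.futures import ThreadPoolExecutor
import time

import tornado.escape
import tornado.web
from tornado import gen

executor = ThreadPoolExecutor(max_workers=4)

class TestHandler(tornado.web.RequestHandler):
    @gen.coroutine
    def post(self, *args, **kwargs):
        json_input = tornado.escape.json_decode(self.request.body)
        # submit() returns a concurrent.futures.Future; yielding it keeps the
        # IOLoop free to serve other requests while the blocking call runs.
        result = yield executor.submit(self.perform_long_task, *args, **json_input)
        self.write(result)

    def perform_long_task(self, **params):
        time.sleep(10)  # blocking, but it now runs in a worker thread
        return str(params)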
All replies and comments were helpful. Many thanks!

Flask with Gevent Blocking Requests on Separate Browser Windows

In the following snippet, I have a simple web server running that utilizes Flask. It appears as though all requests wait for the previous requests to complete before being processed.
To test, I point two windows in Chrome to localhost:5000. The second waits for the first request to finish completely.
This does not occur when I open one of those windows in 'Incognito' or when running two curl commands simultaneously.
If anyone has an idea why two separate windows get treated as the same connection (and why an incognito one is treated separately), this would be much appreciated.
Here is my code:
from gevent import monkey; monkey.patch_all()
monkey.patch_time()
from gevent.pywsgi import WSGIServer
from flask import Flask, Response, jsonify
import json
import time

app = Flask(__name__)

def toJson(obj):
    return json.dumps(obj, indent=None, separators=(',', ':'))

@app.route("/")
def hello():
    print 'Received Request'
    time.sleep(5)
    return Response(toJson({'hello': 'world'}), mimetype='application/json')

print 'Starting Server'
http = WSGIServer(('', 5000), app)
http.serve_forever()
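As a sanity check outside the browser, a small sketch like the following fires two truly concurrent requests from code and times them (the URL matches the app above; the 2-thread pool is arbitrary):

import time
from concurrent.futures import ThreadPoolExecutor

import requests

def fetch(i):
    start = time.time()
    r = requests.get("http://localhost:5000/")
    return i, r.json(), round(time.time() - start, 1)

# With gevent serving requests concurrently, both should take roughly
# 5 seconds, not 5 and then 10 seconds back to back.
with ThreadPoolExecutor(max_workers=2) as pool:
    for result in pool.map(fetch, range(2)):
        print(result)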
