I'm working on a single-page application with Django, and would like to use WebSockets, and therefore Channels. To keep things simple, I think I want to handle all server communication over a WebSocket alone, rather than adding XHR (XMLHttpRequest) into the mix. I'm using Channels from the get-go since a lot of data will be pushed from the server to the client asynchronously.
With regular Django, a conventional request is made to https://example.com/login or https://example.com/logout, and the Django URL router decides which view should handle it. Instead, I would like to have the user perform their action in the client, handle it with JavaScript, and use the WebSocket to send the request to the server. Since I'm using django-allauth, I would like to use the provided Django views to handle things like authentication. The server would then update the client with the necessary state information from the view.
My question: how can I process the data received over the WebSocket and submit the HTTP request to the Django view? My channels consumer would then take the rendered HTML and send it back to the client to update the page or section.
I can picture what would happen using XHR, but I'm trying to avoid mixing the two, unless someone can point out the usefulness of using XHR plus WebSockets...? I suppose another option is to use XHR for authentication and other client-initiated requests, and use the WebSocket for asynchronously updating the client. Does this make any sense at all?
Update: it occurs to me that I could use the requests library from PyPI and make a sync_to_async-wrapped call to localhost using credentials I received over the WebSocket. However, this would require me to then handle the session data and send it back to the client, which seems like a lot more work. That said, I could maintain the sessions themselves on the server and just associate them with the WebSocket connection itself. Since I'm using a secure WebSocket (wss://), is there any possibility of the WebSocket connection being hijacked?
Check out this project, which gives you the ability to process a Channels WebSocket request using Django REST Framework views. You can try to adapt it to a normal Django view.
EDIT: I am quoting the following part of the DCRF docs in response to @hobs's comments:
Using your normal views over a websocket connection
from django.conf.urls import url
from channels.auth import AuthMiddlewareStack
from channels.routing import ProtocolTypeRouter, URLRouter
from djangochannelsrestframework.consumers import view_as_consumer

application = ProtocolTypeRouter({
    "websocket": AuthMiddlewareStack(
        URLRouter([
            url(r"^front(end)/$", view_as_consumer(YourDjangoView)),
        ])
    ),
})
In this situation, if your view needs to read the GET query string values, you can provide these using the query option. And if the view method reads parameters from the URL, you can provide these with the parameters option.
@hobs, if you have a problem with the naming of the package, or the functionality is not working as intended, please take it up with the developers on GitHub using their issue tracker.
As far as I know, this question hasn't really been asked.
I want to use Twilio to send and receive messages from a Python app. Sending isn't a problem, but I know receiving uses webhooks. Every tutorial, and even the Twilio documentation, uses Flask. I'm wondering if I can create a program to receive information from Twilio without using Flask, or is Flask/Django required?
Thanks
You need something that can accept HTTP requests
Webhooks are user-defined HTTP callbacks. They are usually triggered by some event, such as receiving an SMS message or an incoming phone call. When that event occurs, Twilio makes an HTTP request (usually a POST or a GET) to the URL configured for the webhook.
To handle a webhook, you only need to build a small web application that can accept the HTTP requests.
It needs to be something that Twilio can access through HTTP. So while you can probably use any web framework you want, you are going to need one.
https://www.twilio.com/docs/sms/tutorials/how-to-receive-and-reply-python
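To make the point above concrete, here is a minimal sketch of a webhook receiver using only the Python standard library, with no Flask or Django at all. The TwiML reply shape follows Twilio's docs, but the port, URL path, field handling, and reply text here are illustrative assumptions, not a production setup (no signature validation, no TLS):

```python
# Stdlib-only webhook receiver sketch: any HTTP server that Twilio can
# reach will do. Port, path, and reply text are assumptions for illustration.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs

class TwilioWebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Twilio POSTs form-encoded fields such as "From" and "Body"
        length = int(self.headers.get("Content-Length", 0))
        form = parse_qs(self.rfile.read(length).decode())
        sender = form.get("From", [""])[0]
        body = form.get("Body", [""])[0]
        print(f"SMS from {sender}: {body}")

        # Reply with TwiML; Twilio texts the <Message> content back to the sender
        twiml = (b'<?xml version="1.0" encoding="UTF-8"?>'
                 b"<Response><Message>Got it!</Message></Response>")
        self.send_response(200)
        self.send_header("Content-Type", "text/xml")
        self.send_header("Content-Length", str(len(twiml)))
        self.end_headers()
        self.wfile.write(twiml)

def serve(port=8000):
    # Twilio must be able to reach this port from the public internet
    HTTPServer(("", port), TwilioWebhookHandler).serve_forever()
```

That said, a framework buys you routing, request parsing, and Twilio's signature-validation helpers for free, so Flask remains the path of least resistance; nothing about webhooks requires it, though.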
I ran into a problem while integrating Django REST Framework with Django Channels.
I have a viewset with a retrieve (GET) method that prepares information from several different models in a tricky way and sends this "complex" result to the frontend. So when a client sends a GET request with an entity primary key to this endpoint (like /complex_entity/1), they instantly receive everything they need.
And now the frontend folks want another feature: the backend should be able to send the result of this complex request to the frontend each time any of the relevant underlying models change. Like this: the browser subscribes to changes of ComplexEntity with primary key 1, and when ComplexEntity 1 is changed (or its linked entities, which is not a problem), the server sends the result of this complex request via the websocket. So the request can be executed many times during one websocket connection (on each model-change signal).
I see two intuitive ways to provide this behaviour:
Good(?): somehow execute requests to this viewset's retrieve method from Django itself, either by calling the method internally or by executing a "loopback" HTTP request.
Bad/ugly: copy all the complex logic from the viewset's retrieve method into the websocket consumer.
I've also found Django Channels REST Framework, which allows subscribing to a model entity, but the problem is that I need to return not just a model instance but this "custom" result glued together from several models. As I understand it, DCRF lacks that feature.
For now I don't really know the best way to solve my problem: calling the method internally looks OK, but how do I do it?
A loopback HTTP request is OK too (I think), but it should be untied from the site hostname, and sanity says it's better to forward the "originator" cookies to such a request to prevent unauthorized access to entities. The question, again, is how to do it right.
So does anybody know the best way to execute the same complex request several times during one websocket connection?
The proper way would be to move the common logic into a reusable method and use it in both the DRF view and the Channels consumer.
That method will receive some arguments (I guess the ComplexEntity's ID) and return the result data in the format you need.
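A sketch of that pattern, where the function name, the stand-in body, and the view/consumer shown in comments are all placeholders for your real models and classes:

```python
# services.py -- the one home for the "complex" aggregation logic.
def build_complex_entity_payload(entity_id):
    """Glue data from several models into one plain-Python result."""
    # The real version would hit the ORM, e.g.:
    #   entity = ComplexEntity.objects.get(pk=entity_id)
    #   parts = list(entity.parts.values("name", "status"))
    parts = []  # stand-in for the aggregated related data
    return {"id": entity_id, "parts": parts, "summary": ""}

# views.py (DRF) -- thin wrapper around the shared function:
# class ComplexEntityViewSet(viewsets.ViewSet):
#     def retrieve(self, request, pk=None):
#         return Response(build_complex_entity_payload(int(pk)))

# consumers.py (Channels) -- same function, triggered by a model-change signal:
# class ComplexEntityConsumer(JsonWebsocketConsumer):
#     def entity_changed(self, event):
#         self.send_json(build_complex_entity_payload(event["pk"]))
```

This avoids both the loopback request (no hostname or cookie-forwarding issues) and the copy-paste into the consumer.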
Originally, I tried to post an AJAX request from my client side to a third-party URL, but it seems the browser blocks that for security reasons (the same-origin policy). I thought about sending the AJAX request to my server side instead, sending a GET request from there to the third party, getting the response, and sending it back to the client side. How can I do that with Flask?
Install the requests module (much nicer than using urllib2) and then define a route which makes the necessary request - something like:
import requests
from flask import Flask

app = Flask(__name__)

@app.route('/some-url')
def get_data():
    return requests.get('http://example.com').content
Depending on your set up though, it'd be better to configure your webserver to reverse proxy to the target site under a certain URL.
Flask alone does not have this capability, but it is a simple matter to write a request handler that makes a request to another server using an HTTP client library and then return that response.
# third-party HTTP client library
import requests
from flask import Flask, Response

app = Flask(__name__)

@app.route("/proxy-example")
def proxy_example():
    r = requests.get("http://example.com/other-endpoint")
    return Response(
        r.text,
        status=r.status_code,
        content_type=r.headers['content-type'],
    )
However, this will not achieve exactly the same result as you might expect from a client-side request. Since your server cannot "see" any cookies that the client browser has stored for the target site, your proxied request will be effectively anonymous and so, depending on the target site, may fail or give you a different response than you'd get requesting that resource in the browser.
If you have a relationship with the third-party URL (that is, if you control it or are able to work with the people who do) they can give access for cross-domain requests in the browser using CORS (which is only supported in modern browsers) or JSON-P (an older workaround that predates CORS).
The third-party provider could also give you access to the data you want at an endpoint that is designed to accept requests from other servers and that provides a mechanism for you to authenticate your app. The most popular protocol for this is OAuth.
As the other answers have stated, using the requests module for Python would be the best way to tackle this from the coding perspective. However, as the comments mentioned (and the reason I came to this question), this can give an error that the request was denied. This error is likely caused by SELinux.
To check if this is the issue first make sure SELinux is enabled with this command:
sestatus
If 'Current Mode' is 'enforcing' then SELinux is enabled.
Next get the current bool values that apply directly to apache with this command:
getsebool -a | grep httpd
Look for the setting 'httpd_can_network_connect'; this determines whether Apache is allowed to make outbound TCP requests to the network. If it is on, then all Apache TCP requests will be allowed. To turn it on, run the following as root:
setsebool -P httpd_can_network_connect 1
If you only need database access (I had this problem before, which is why I suspected SELinux here), then it would probably be better to only turn on 'httpd_can_network_connect_db'.
The purpose of this policy is that if a hacker were to hijack your Apache server, they would not be able to get out through the server to the rest of your internal network.
This probably would've been better as a comment but I don't have enough rep..
Sources:
https://tag1consulting.com/blog/stop-disabling-selinux
https://wiki.centos.org/TipsAndTricks/SelinuxBooleans
I have been trying to do something which I think should be pretty simple. The situation is as follows. The client makes a request for a resource on my web server. My flask application processes the request and determines that this resource is located at a certain location on another web server and the client should make a request of that server instead.
I know I can use the redirect function to tell the client to send a request to the remote location, but my problem is that the remote location is the Amazon Glacier servers. These servers require a request to be made in a certain way, with a special signature (see http://docs.aws.amazon.com/amazonglacier/latest/dev/amazon-glacier-signing-requests.html). My flask application knows how to go about the business of making these requests in the required way. I essentially want to know if it's possible to send a response to my client saying, send this request (generated by my application, with all the required signing) to the Amazon server?
Any ideas?
If the request can be encoded with GET params like
http://www.redirecturl.com/?param1=bla&param2=blub
then it should work, no problem. Just construct the request as a string and pass it to redirect().
As far as I know, you can't tell a client to send specific headers to an HTTP redirect URL.
Hitting the Glacier URL server-side would be the easiest. Using JavaScript on the client side would only work if Glacier implemented CORS.
I need to create a python middleware that will do the following:
a) Accept HTTP GET/POST requests from multiple clients.
b) Modify and dispatch these requests to a backend remote application (via socket communication). I do not have any control over this remote application.
c) Receive processed results from the backend application and return them to the requesting clients.
Now the clients are expecting a synchronous request/response scenario. But the backend application is not returning the results synchronously. That is, some requests take much longer to process than others. Hence,
Client 1 : send http request C1 --> get response R1
Client 2 : send http request C2 --> get response R2
Client 3 : send http request C3 --> get response R3
Python middleware receives them in some order: C2, C3, C1. Dispatches them in this order to backend (as non-http messages). Backend responds with results in mixed order R1, R3, R2. Python middleware should package these responses back into http response objects and send the response back to the relevant client.
Is there any sample code for this sort of behavior? There seem to be something like 20 different web frameworks for Python, and I'm confused as to which one would be best for this scenario (I would prefer something as lightweight as possible; I would consider Django too heavy; I tried Bottle, but I'm not sure how to go about programming it for this scenario).
================================================
Update (based on discussions below): requests carry a request id; responses carry a response id (which should match the request id they correspond to). There is only one socket connection between the middleware and the remote backend application. While we can maintain a {request_id : ip_address} dictionary, the issue is how to construct an HTTP response object for the correct client. I assume threading might solve this problem, where each thread maintains its own response object.
Screw frameworks. This is exactly the kind of task for asyncore. This module allows event-based network programming: given a set of sockets, it calls back the given handlers when data is ready on any of them. That way, threads are not necessary just to dumbly wait for data to arrive on one socket and painfully pass it to another thread. You would have to implement the HTTP handling yourself, but examples of that can be found. Alternatively, you could use the async feature of uWSGI, which would allow your application to be integrated with an existing webserver, but that does not integrate with asyncore by default, though it wouldn't be hard to make it work. It depends on your specific needs.
Quoting your comment:
The middleware uses a single persistent socket connection to the backend. All requests from the middleware are forwarded via this single socket. Clients do send a request id along with their requests. The response id should match the request id. So the question remains: how does the middleware (web server) keep track of which request id belonged to which client? I mean, is there any way for a CGI script in the middleware to create a db of tuples like (request_id, client_ip:client_port), and once a response id matches, send an HTTP response to client_ip:client_tcp_port?
Is there any special reason for doing all this processing in a middleware? You should be able to do all this in a decorator, or somewhere else, if more appropriate.
Anyway, you need to maintain a global concurrent dictionary (extend dict and protect it using threading.Lock). Upon a new request, store the given request id as the key, and associate it with the respective client (sender). Whenever your backend responds, retrieve the client from this dictionary and remove the entry so it doesn't accumulate forever.
UPDATE: someone already extended the dictionary for you - check this answer.
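A minimal sketch of that dictionary, where the class and method names and the 30-second timeout are illustrative. Each HTTP handler thread registers its request id before writing to the backend socket (so a fast response can never slip past the waiter), then blocks; the single backend-reader thread delivers each response to whichever thread registered the matching id:

```python
import threading

class PendingRequests:
    """Thread-safe map of request id -> waiting HTTP handler thread."""

    def __init__(self):
        self._lock = threading.Lock()
        self._waiters = {}  # request_id -> (Event, slot dict)

    def register(self, request_id):
        # Call this BEFORE sending the request to the backend socket.
        event = threading.Event()
        slot = {"response": None}
        with self._lock:
            self._waiters[request_id] = (event, slot)
        return (event, slot)

    def resolve(self, request_id, response):
        # Called by the backend-reader thread for each incoming response id.
        with self._lock:
            entry = self._waiters.pop(request_id, None)
        if entry is not None:
            event, slot = entry
            slot["response"] = response
            event.set()

    def wait(self, handle, timeout=30.0):
        # Blocks the HTTP handler thread until its response is delivered.
        event, slot = handle
        if not event.wait(timeout):
            raise TimeoutError("backend did not answer in time")
        return slot["response"]
```

Because entries are popped in resolve(), the dictionary cannot accumulate forever; a real version would also want to drop entries for clients that time out or disconnect.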
Ultimately you're going from the synchronous HTTP request-response protocol with your clients to an asynchronous queueing/messaging protocol with your backend. So you've got two choices: (1) make requests wait until the backend has no outstanding work, then process one, or (2) write something that marries the backend responses with their associated requests (using a dictionary of pending requests or something).
One way might be to run your server in one thread while dealing with your backend in another (see... Run Python HTTPServer in Background and Continue Script Execution) or maybe look at aiohttp (https://docs.aiohttp.org/en/v0.12.0/web.html)