I'm building a Django app that will provide real-time data. I'm fairly new to Django, and right now I'm focusing on how to update my data in real time, without having to reload the whole page.
Some clarification: the real-time data should be updated regularly, not only in response to user input.
View
def home(request):
    symbol = "BTCUSDT"
    tst = client.get_ticker(symbol=symbol)
    test = tst['lastPrice']
    context = {"test": test}
    return render(request, "main/home.html", context)
Template
<h3> var: {{test}} </h3>
I already asked this question, but I'm having some doubts:
I've been told to use AJAX, and that's OK, but is AJAX a good fit for this case, where a loaded page has its data updated in real time every x seconds?
I have also been told to use DRF (Django REST Framework). I've been digging through it a lot, but it's not clear to me how it works for this particular case.
Below is a checklist of the actions needed to implement a solution based on Websocket and Django Channels, as suggested in a previous comment.
The motivations for this are given at the end.
1) Connect to the Websocket and prepare to receive messages
On the client, you need to execute the following JavaScript code:
<script language="javascript">
    var ws_url = 'ws://' + window.location.host + '/ws/ticks/';
    var ticksSocket = new WebSocket(ws_url);

    ticksSocket.onmessage = function(event) {
        var data = JSON.parse(event.data);
        console.log('data', data);
        // do whatever is required with the received data ...
    };
</script>
Here, we open the Websocket, and then process the notifications sent by the server in the onmessage callback.
Possible improvements:
support SSL connections
use ReconnectingWebSocket: a small wrapper on the WebSocket API that automatically reconnects
<script language="javascript">
    var prefix = (window.location.protocol == 'https:') ? 'wss://' : 'ws://';
    var ws_url = prefix + window.location.host + '/ws/ticks/';
    var ticksSocket = new ReconnectingWebSocket(ws_url);
    ...
</script>
2) Install and configure Django Channels and Channel Layers
To configure Django Channels, follow these instructions:
https://channels.readthedocs.io/en/latest/installation.html
Channel Layers is an optional component of Django Channels which provides the "group" abstraction we'll use later; you can follow the instructions given here:
https://channels.readthedocs.io/en/latest/topics/channel_layers.html#
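For reference, here is a minimal sketch of the settings involved (the Redis channel-layer backend, the "myproject" name, and the Redis address are assumptions; adjust them to your project):

# settings.py (sketch)
INSTALLED_APPS = [
    # ... your apps ...
    'channels',
]

# points at the routing application defined in step 3
ASGI_APPLICATION = 'myproject.routing.application'

CHANNEL_LAYERS = {
    'default': {
        'BACKEND': 'channels_redis.core.RedisChannelLayer',
        'CONFIG': {
            'hosts': [('127.0.0.1', 6379)],
        },
    },
}

# group name used by the consumer and the broadcast function below
TICKS_GROUP_NAME = 'ticks'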
3) Publish the Websocket endpoint
Routing provides, for Websocket (and other protocols), a mapping between the published endpoints and the associated server-side code, much as urlpatterns does for HTTP in a traditional Django project.
file routing.py
from django.urls import path
from channels.routing import ProtocolTypeRouter, URLRouter
from . import consumers
application = ProtocolTypeRouter({
    "websocket": URLRouter([
        path("ws/ticks/", consumers.TicksSyncConsumer),  # on Channels 3+, use consumers.TicksSyncConsumer.as_asgi()
    ]),
})
4) Write the consumer
The Consumer is a class which provides handlers for Websocket standard (and, possibly, custom) events. In a sense, it does for Websocket what a Django view does for HTTP.
In our case:
websocket_connect(): we accept the connection and register the incoming client to the "ticks" group
websocket_disconnect(): clean up by removing the client from the group
new_ticks(): our custom handler, which broadcasts the received ticks to its Websocket client
I assume TICKS_GROUP_NAME is a constant string value defined in the project's settings (as sketched in step 2)
file consumers.py:
from django.conf import settings
from asgiref.sync import async_to_sync
from channels.consumer import SyncConsumer
class TicksSyncConsumer(SyncConsumer):

    def websocket_connect(self, event):
        self.send({
            'type': 'websocket.accept'
        })
        # Join ticks group
        async_to_sync(self.channel_layer.group_add)(
            settings.TICKS_GROUP_NAME,
            self.channel_name
        )

    def websocket_disconnect(self, event):
        # Leave ticks group
        async_to_sync(self.channel_layer.group_discard)(
            settings.TICKS_GROUP_NAME,
            self.channel_name
        )

    def new_ticks(self, event):
        self.send({
            'type': 'websocket.send',
            'text': event['content'],
        })
5) And finally: broadcast the new ticks
For example:
ticks = [
    {'symbol': 'BTCUSDT', 'lastPrice': 1234, ...},
    ...
]
broadcast_ticks(ticks)
where:
import json
from asgiref.sync import async_to_sync
from django.conf import settings  # needed for TICKS_GROUP_NAME below
import channels.layers

def broadcast_ticks(ticks):
    channel_layer = channels.layers.get_channel_layer()
    async_to_sync(channel_layer.group_send)(
        settings.TICKS_GROUP_NAME, {
            "type": 'new_ticks',
            "content": json.dumps(ticks),
        })
We need to enclose the call to group_send() in the async_to_sync() wrapper, as channels.layers provides only the async implementation, and we're calling it from a sync context. Much more detail on this is given in the Django Channels documentation.
Notes:
make sure that the "type" attribute matches the name of the consumer's handler (that is: 'new_ticks'); this is required
every client has its own consumer; so when we wrote self.send() in the consumer's handler, that meant: send the data to a single client
here, we send the data to the "group" abstraction, and Channel Layers will in turn deliver it to every registered consumer
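Where to call broadcast_ticks() from is up to you. As a hedged sketch (the management-command name, the one-second period, and the reuse of the question's python-binance client are my assumptions, not part of the original answer), you could poll the exchange from a separate process:

# file management/commands/poll_ticks.py (hypothetical name)
import time
from django.core.management.base import BaseCommand
from binance.client import Client  # assumption: the same python-binance client as in the question
from myapp.broadcast import broadcast_ticks  # adjust to wherever you defined broadcast_ticks

class Command(BaseCommand):
    help = "Poll the exchange and broadcast new ticks to the Websocket clients"

    def handle(self, *args, **options):
        client = Client()  # add API keys here if your endpoints require them
        while True:
            tst = client.get_ticker(symbol="BTCUSDT")
            broadcast_ticks([{'symbol': 'BTCUSDT', 'lastPrice': tst['lastPrice']}])
            time.sleep(1)  # an arbitrary polling period

Run it with python manage.py poll_ticks alongside your ASGI server.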
Motivations
Polling is still the most appropriate choice in some cases, being simple and effective.
However, on some occasions you might suffer a few limitations:
you keep querying the server even when no new data are available
you introduce some latency (in the worst case, the full polling period). The tradeoff is: less latency = more traffic.
With Websocket, you can instead notify the clients only when (and as soon as) new data are available, by sending them a specific message.
AJAX calls and REST APIs are the combination you are looking for. For real-time updates of data, polling the REST API at regular intervals is the best option you have. Something like:
function doPoll(){
    $.post('<api_endpoint_here>', function(data) {
        // Do operations to update the data here
        setTimeout(doPoll, <how_much_delay>);
    });
}
Now add Django Rest Framework to your project. They have a simple tutorial here. Create an API endpoint which will return the data as JSON, and use that URL in the AJAX call.
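Such an endpoint could look like the following minimal sketch (the view name and URL are illustrative; "client" is assumed to be the same python-binance client used in the question's home view):

# views.py: a DRF endpoint returning the latest price as JSON
from rest_framework.decorators import api_view
from rest_framework.response import Response

@api_view(['GET'])
def ticker(request):
    tst = client.get_ticker(symbol="BTCUSDT")
    return Response({"lastPrice": tst['lastPrice']})

Wire it up in urls.py (for example, path('api/ticker/', views.ticker)) and use that URL in doPoll() above.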
Now you might be confused, because you passed the data into the template as context while rendering the page from your home view. That's not going to work anymore. You'll have to add a script to update the value of the element, like
document.getElementById("element_id").value = "New Value";
where element_id is the id you give to the element, and "New Value" is the data you get from the response of the AJAX call.
I hope this gives you a basic context.
Related
I'm trying to implement a service that will get a request from an external API, do some work (which might take time), and then return a response to the external API with the parsed data. However, I'm at a loss on how to achieve this. I'm using FastAPI as my API service and have been looking at the following documentation: OpenAPI Callbacks
By following that documentation I can get the OpenAPI docs looking all pretty and nice. However, I'm stumped on how to implement the actual callback, and the docs don't have much information about that.
My current implementation is as follows:
from typing import Union
from fastapi import APIRouter, FastAPI
from pydantic import BaseModel, AnyHttpUrl
import requests
import time
from threading import Thread

app = FastAPI()

class Invoice(BaseModel):
    id: str
    title: Union[str, None] = None
    customer: str
    total: float

class InvoiceEvent(BaseModel):
    description: str
    paid: bool

class InvoiceEventReceived(BaseModel):
    ok: bool

invoices_callback_router = APIRouter()

@invoices_callback_router.post(
    "{$callback_url}/invoices/{$request.body.id}", response_model=InvoiceEventReceived
)
def invoice_notification(body: InvoiceEvent):
    pass

@app.post("/invoices/", callbacks=invoices_callback_router.routes)
async def create_invoice(invoice: Invoice, callback_url: Union[AnyHttpUrl, None] = None):
    # Send the invoice, collect the money, send the notification (the callback)
    thread = Thread(target=do_invoice(invoice, callback_url))
    thread.start()
    return {"msg": "Invoice received"}

def do_invoice(invoice: Invoice, callback_url: AnyHttpUrl):
    time.sleep(10)
    url = callback_url + "/invoices/" + invoice.id
    json = {
        "data": ["Payment celebration"],
    }
    requests.post(url=url, json=json)
I thought putting the actual callback in a separate thread might work, and that the {"msg": "Invoice received"} would be returned immediately and then, 10s later, the external API would receive the result from the do_invoice function. But this doesn't seem to be the case, so perhaps I'm doing something wrong.
I've also tried putting the logic in the invoice_notification function, but that doesn't seem to do anything at all.
So what is the correct way to implement a callback like the one I want? Thankful for any help!
I thought putting the actual callback in a separate thread might work and that the {"msg": "Invoice received"} would be returned immediately and then 10s later the external API would receive the result from the do_invoice function. But this doesn't seem to be the case so perhaps I'm doing something wrong.
If you would like to run a task after the response has been sent, you could use a BackgroundTask, as demonstrated in this answer, as well as here and here. If you instead would like to wait for the task to finish before returning the response, you could run the task in either an external ThreadPool or ProcessPool (depending on the nature of the task) and await it, as explained in this detailed answer.
I would also strongly recommend using the httpx library in an async environment such as FastAPI, instead of using Python requests—you may find details and working examples here, as well as here and here.
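Putting both recommendations together, here is a minimal sketch of the BackgroundTasks approach using httpx (the endpoint shape loosely mirrors the question's; the send_callback name and the simplified parameters are my own, so treat this as an illustration, not a drop-in replacement):

import httpx
from fastapi import BackgroundTasks, FastAPI

app = FastAPI()

async def send_callback(invoice_id: str, callback_url: str):
    # runs after the response below has already been sent
    async with httpx.AsyncClient() as client:
        await client.post(
            f"{callback_url}/invoices/{invoice_id}",
            json={"data": ["Payment celebration"]},
        )

@app.post("/invoices/")
async def create_invoice(invoice_id: str, callback_url: str, background_tasks: BackgroundTasks):
    background_tasks.add_task(send_callback, invoice_id, callback_url)
    return {"msg": "Invoice received"}  # returned immediately; the task runs afterwards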
I'm a beginner writing one of my first web apps. I'm using FastAPI, and I'm stuck on the logic of creating an endpoint that has to do a lot of things before it returns something back to the user. Since I'm new, I also lack a lot of the vocabulary that I think I'd have if I were more experienced, so bear with me:
To start -- what is my web app doing?
Well, I am trying to create something that will pull a schedule from a 3rd-party API and then show it to me, allowing me to book things from my web app rather than having to navigate to the 3rd party's site. Some of these events are recurring (or are assumed to be) and get recorded as a "preference".
My approach so far, using 'MVVM'
User navigates to www.myfakeurl.com/schedule (either directly or from the home page) (handled by views/schedule.py)
This sends a request to the /schedule router (also handled by views/schedule.py). The request is forwarded to view_models/schedule.py
In the ScheduleViewModel in view_models/schedule.py, there's no payload to check, but I do need to request the schedule from the 3rd party API. I decoupled this step from the constructor on purpose; one must call get_schedule.
Once get_schedule is called, the user id and token are sent to get_current_schedule in services/3p_schedule.py, an asynchronous function that requests the schedule from the 3rd party API
Back in view_models.schedule.ScheduleViewModel, I validate the response and do some json reorganizing. The 3rd party API sends over a lot of stuff I don't need, so I just keep a few relevant pieces of info (like booking_id, if I already am signed up for a class, etc)
Still in view_models.schedule.ScheduleViewModel, I quickly check against the models.preferences.UserPreferences table (in the SQLAlchemy DB) whether any classes in the schedule match my preferences, and create an is_a_preferred_time attribute for each item in the list generated in (4) above
Still in the ScheduleViewModel, I finally combine the dict of info from step (4) with my info about user preferences from step (5). The schedule is a Dict[str, List[dict]] where the key is the date and the values are lists of dicts containing time (from the 3rd party API), is_preferred (from the UserPreferences table), booking_id (from the 3rd party API), and currently_enrolled (from the 3rd party API). This is returned to views/schedule.py and the template is generated
(Somewhat) pseudocode below:
# views/schedule.py
from fastapi import APIRouter
from fastapi.templating import Jinja2Templates
from starlette.requests import Request

from pydantic_schema import Schedule
from view_models.schedule import ScheduleViewModel

router = APIRouter()
templates = Jinja2Templates(directory="templates")

@router.get('/schedule')
async def schedule(request: Request) -> Schedule:
    schedule_model = ScheduleViewModel(request)
    schedule = await schedule_model.get_schedule(schedule_model.user)
    return templates.TemplateResponse("schedule.html", {"schedule": schedule})
# view_models/schedule.py
from starlette.requests import Request
from sqlalchemy.orm import Session

from dependencies.db import get_db
from pydantic_schema.user import User
from services.third_party_services import get_3p_schedule  # module renamed here: Python module names cannot start with a digit

class ScheduleViewModel:
    def __init__(self, request: Request):
        self.user = get_current_user(request)
        self.db = get_db()

    async def get_schedule(self, user: User):
        # try to request the schedule
        tp_schedule = await get_3p_schedule(id=user.id, token=user.token)  # renamed from 3p_schedule: identifiers cannot start with a digit
        # parse schedule
        relevant_schedule_info = self.parse_schedule(tp_schedule)
        # combine with user preferences
        user_schedule = await self.update_sched_with_prefs(relevant_schedule_info)
        return user_schedule

    def parse_schedule(self, schedule):
        # does stuff
        return {"yyyy-mm-dd": [{"booking_id": 1, "time": "x:xx xm", "currently_enrolled": False}]}  # returns a dict of length N with this format

    async def update_sched_with_prefs(self, schedule):
        # gonna ignore the exact implementation here for brevity
        return {"yyyy-mm-dd": [{"booking_id": 1, "time": "x:xx xm", "currently_enrolled": False, "is_preference": True}]}  # dict of len(N) with values that are lists of variable length
I use the dict to populate a template.
My concerns and confusions
I started off using pydantic for a lot of this, and then I learned about MVVM, got confused, and just went with MVVM, since I don't know what I'm doing and it seemed clearer.
But this feels like a lot for one endpoint to handle? Or maybe the examples in all my tutorials are just very basic, and real life is closer to what I have going on?
I apologize in advance for just how clueless I am, but I don't have anyone to ask about this and am looking for any and all guidance / resources. I think I'm in a very steep section of the learning curve atm :)
Trying to get authentication working with Django Channels, with a very simple WebSockets app that echoes back whatever the user sends over, with the prefix "You said: ".
My processes:
web: gunicorn myproject.wsgi --log-file=- --pythonpath ./myproject
realtime: daphne myproject.asgi:channel_layer --port 9090 --bind 0.0.0.0 -v 2
realtime_worker: python manage.py runworker -v 2
I run all processes when testing locally with heroku local -e .env -p 8080, but you could also run them all separately.
Note I have WSGI on localhost:8080 and ASGI on localhost:9090.
Routing and consumers:
### routing.py ###
from . import consumers
channel_routing = {
    'websocket.connect': consumers.ws_connect,
    'websocket.receive': consumers.ws_receive,
    'websocket.disconnect': consumers.ws_disconnect,
}
and
### consumers.py ###
import traceback
from django.http import HttpResponse
from django.contrib.auth.models import User  # needed for User.objects.get below
from channels.handler import AsgiHandler
from channels import Group
from channels.sessions import channel_session
from channels.auth import channel_session_user, channel_session_user_from_http
from myproject import CustomLogger

logger = CustomLogger(__name__)

@channel_session_user_from_http
def ws_connect(message):
    logger.info("ws_connect: %s" % message.user.email)
    message.reply_channel.send({"accept": True})
    message.channel_session['prefix'] = "You said"
    # message.channel_session['django_user'] = message.user  # tried doing this but it doesn't work...

@channel_session_user_from_http
def ws_receive(message, http_user=True):
    try:
        logger.info("1) User: %s" % message.user)
        logger.info("2) Channel session fields: %s" % message.channel_session.__dict__)
        logger.info("3) Anything at 'django_user' key? => %s" % (
            'django_user' in message.channel_session,))
        user = User.objects.get(pk=message.channel_session['_auth_user_id'])
        logger.info(None, "4) ws_receive: %s" % user.email)
        prefix = message.channel_session['prefix']
        message.reply_channel.send({
            'text': "%s: %s" % (prefix, message['text']),
        })
    except Exception:
        logger.info("ERROR: %s" % traceback.format_exc())

@channel_session_user_from_http
def ws_disconnect(message):
    logger.info("ws_disconnect: %s" % message.__dict__)
    message.reply_channel.send({
        'text': "%s" % "Sad to see you go :(",
    })
And then to test, I go into the JavaScript console on the same domain as my HTTP site, and type in:
> var socket = new WebSocket('ws://localhost:9090/')
> socket.onmessage = function(e) {console.log(e.data);}
> socket.send("Testing testing 123")
VM481:2 You said: Testing testing 123
And my local server log shows:
ws_connect: test@test.com
1) User: AnonymousUser
2) Channel session fields: {'_SessionBase__session_key': 'chnb79d91b43c6c9e1ca9a29856e00ab', 'modified': False, '_session_cache': {u'prefix': u'You said', u'_auth_user_hash': u'ca4cf77d8158689b2b6febf569244198b70d5531', u'_auth_user_backend': u'django.contrib.auth.backends.ModelBackend', u'_auth_user_id': u'1'}, 'accessed': True, 'model': <class 'django.contrib.sessions.models.Session'>, 'serializer': <class 'django.core.signing.JSONSerializer'>}
3) Anything at 'django_user' key? => False
4) ws_receive: test@test.com
Which, of course, makes no sense. A few questions:
Why would Django see message.user as an AnonymousUser, but have the actual user id _auth_user_id=1 (this is my correct user ID) in the session?
I am running my local server (WSGI) on 8080 and daphne (ASGI) on 9090 (different ports), and I didn't include session_key=xxxx in my WebSocket connection - yet Django was able to read my browser's cookie for the correct user, test@test.com? According to the Channels docs, this shouldn't be possible.
Under my setup, what is the best / simplest way to carry out authentication with Django channels?
Note: This answer is specific to Channels 1.x; Channels 2.x uses a different auth mechanism.
I had a hard time with Django Channels too; I had to dig into the source code to better understand the docs...
Question 1:
The docs mention a long trail of decorators relying on each other (http_session, http_session_user, ...) that you can use to wrap your message consumers; in the middle of that trail, the docs state this:
Now, one thing to note is that you only get the detailed HTTP information during the connect message of a WebSocket connection (you can read more about that in the ASGI spec) - this means we’re not wasting bandwidth sending the same information over the wire needlessly.
This also means we’ll have to grab the user in the connection handler and then store it in the session;....
It's easy to get lost in all that; at least we both did...
You just have to remember that this happens when you use channel_session_user_from_http:
1. It calls http_session_user
a. which calls http_session, which parses the message and gives us a message.http_session attribute.
b. Upon returning from the call, it initializes message.user based on the information it got in message.http_session (this will bite you later)
2. It calls channel_session, which initiates a dummy session in message.channel_session and ties it to the message's reply channel.
3. Now it calls transfer_user, which moves the http_session into the channel_session.
This happens during the connection handling of a WebSocket, so on subsequent messages you won't have access to detailed HTTP information. What's happening after the connect is that you're calling channel_session_user_from_http again, which, in this situation (post-connect messages), calls http_session_user, which attempts to read the HTTP information but fails, resulting in message.http_session being set to None and message.user being overridden to AnonymousUser.
That's why you need to use channel_session_user in this case.
Question 2:
Channels can use Django sessions either from cookies (if you’re running your websocket server on the same port as your main site, using something like Daphne), or from a session_key GET parameter, which works if you want to keep running your HTTP requests through a WSGI server and offload WebSockets to a second server process on another port.
Remember http_session, the decorator that gets us the message.http_session data? It appears that if it doesn't find a session_key GET parameter, it falls back to settings.SESSION_COOKIE_NAME, which is the regular sessionid cookie. So whether you provide session_key or not, you'll still get connected if you're logged in; of course, that happens only when your ASGI and WSGI servers are on the same domain (127.0.0.1 in this case) - the port difference doesn't matter.
I think the difference the docs are trying to communicate, but didn't expand on, is that you need to set the session_key GET parameter when your ASGI and WSGI servers are on different domains, since cookies are restricted by domain, not port.
Due to that lack of explanation, I tested running ASGI and WSGI on the same port and on different ports, and the result was the same: I was still getting authenticated. I changed one server's domain to 127.0.0.2 instead of 127.0.0.1, and the authentication was gone; I set the session_key GET parameter, and the authentication was back again.
Update: a rectification of that docs paragraph was just pushed to the Channels repo; it was meant to say domain instead of port, as I mentioned.
Question 3:
My answer is the same as turbotux's, but longer: you should use @channel_session_user_from_http on ws_connect and @channel_session_user on ws_receive and ws_disconnect. Nothing in what you showed suggests it won't work if you make that change. Maybe try removing http_user=True from your receive consumer? Even though I suspect it has no effect, since it's undocumented and intended only to be used by generic consumers...
Hope this helps!
To answer your first question, you need to use the channel_session_user decorator in the receive and disconnect calls.
channel_session_user_from_http calls transfer_user during the connect method to transfer the HTTP session to the channel session. This way, all future calls may access the channel session to retrieve user information.
To your second question: I believe what you are seeing is that the default WebSocket library passes the browser cookies over the connection.
Third, I think your setup will work quite well once you have changed the decorators, as sketched below.
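For clarity, here is a minimal sketch of the decorator arrangement both answers describe (Channels 1.x; the function bodies are elided):

from channels.auth import channel_session_user, channel_session_user_from_http

@channel_session_user_from_http
def ws_connect(message):
    ...  # full HTTP details exist only here; the user is transferred into the channel session

@channel_session_user
def ws_receive(message):
    ...  # message.user is restored from the channel session, not from HTTP

@channel_session_user
def ws_disconnect(message):
    ...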
I ran into this problem, and I found that it was due to a couple of issues that might be the cause. I'm not suggesting this will solve your issue, but it might give you some insight. Keep in mind I am using REST framework. First, I was overriding the User model. Second, when I defined the application variable in my root routing.py, I didn't use my own auth middleware; I was using the docs-suggested AuthMiddlewareStack. So, per the Channels docs, I defined my own custom authentication middleware, which takes my JWT value from the cookies, authenticates it, and assigns it to scope["user"], like so:
routing.py
from channels.routing import ProtocolTypeRouter, URLRouter
import app.routing
from .middleware import JsonTokenAuthMiddleware

application = ProtocolTypeRouter(
    {
        "websocket": JsonTokenAuthMiddleware(
            URLRouter(app.routing.websocket_urlpatterns)
        )
    }
)
middleware.py
from http import cookies
from django.contrib.auth.models import AnonymousUser
from django.db import close_old_connections
from rest_framework.authtoken.models import Token
from rest_framework_jwt.authentication import BaseJSONWebTokenAuthentication

class JsonWebTokenAuthenticationFromScope(BaseJSONWebTokenAuthentication):
    def get_jwt_value(self, scope):
        try:
            cookie = next(
                x for x in scope["headers"] if x[0].decode("utf-8") == "cookie"
            )[1].decode("utf-8")
            return cookies.SimpleCookie(cookie)["JWT"].value
        except:
            return None

class JsonTokenAuthMiddleware(BaseJSONWebTokenAuthentication):
    def __init__(self, inner):
        self.inner = inner

    def __call__(self, scope):
        try:
            close_old_connections()
            user, jwt_value = JsonWebTokenAuthenticationFromScope().authenticate(scope)
            scope["user"] = user
        except:
            scope["user"] = AnonymousUser()
        return self.inner(scope)
Hope this helps!
In my Flask-based HTTP server, designed to remotely manage some services on an RPi, I've run into a problem I cannot solve alone; thus a kind request to you to give me a hint.
Concept:
Via Flask and gevent I can stop and run some (two) services running on the RPi. I use gevent and server-sent events, with the corresponding JavaScript, in order to listen for the HTML updates.
The HTML page shows the status (on/off/processing) of the services and provides buttons to switch them on/off. Additionally, it displays some system parameters (CPU, RAM, HDD, NET).
As long as there is only one user/page open, everything works as desired. As soon as more users access the Flask server, there is a race between the greenlets serving each user/page, and not all pages get reloaded.
Problem:
How can I send a message to all running sse_worker() greenlets so that each of them processes it on top of its regular job?
Below is the high-level code. The complete source can be found here: https://github.com/petervflocke/flasksse_rpi (check the sse.py file).
def sse_worker():  # neverending task
    while True:
        if there_is_a_change_in_process_status:
            reload_page = True
        else:
            reload_page = False
        # Do some other tasks:
        #   update some_single_parameters_to_be_passed_to_html_page
        yield 'data: ' + json.dumps(all_parameters)
        gevent.sleep(1)

@app.route('/stream/', methods=['GET', 'POST'])
def stream():
    return Response(sse_worker(), mimetype="text/event-stream")

if __name__ == "__main__":
    gevent.signal(signal.SIGTERM, stop)
    http_server = WSGIServer(('', 5000), app)
    http_server.serve_forever()
... On the HTML page, the streamed JSON data are processed accordingly. If the status of a service has changed, JavaScript reloads the complete page based on the reload_page variable - code extract below:
<script>
    function listen() {
        var source = new EventSource("/stream/");
        var target1 = document.getElementById("time");
        ....
        source.onmessage = function(msg) {
            obj = JSON.parse(msg.data);
            target1.innerHTML = obj.time;
            ....
            if (obj.reload == "1") {
                location.reload();
            }
        }
    }
    listen();
</script>
My desired solution would be to extend the sse_worker() like this:
def sse_worker():
    while True:
        if there_is_a_change_in_process_status:
            reload_page = True
            # NEW: set up a semaphore/flag that there is a change on the page
            message_set(reload)
        elif message_get(block=False) == reload:  # NEW: check the semaphore
            # issue: message_get must return "reload" for _all_ active sse_workers,
            # so that all of them can push the reload to "their" pages
            reload_page = True
        else:
            reload_page = False
        # Do some other tasks:
        #   update some_single_parameters_to_be_passed_to_html_page
        yield 'data: ' + json.dumps(all_parameters)
        gevent.sleep(1)
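For what it's worth, here is a hedged sketch of one way to get those message_set()/message_get() semantics (this is not code from the linked repo; the idea is that each sse_worker registers its own gevent queue, and a broadcast puts a copy of the message into every registered queue):

import json
import gevent
from gevent.queue import Queue, Empty

subscribers = []  # one Queue per active sse_worker

def message_set(msg):
    # broadcast: every registered worker receives its own copy
    for q in list(subscribers):
        q.put(msg)

def sse_worker():
    q = Queue()
    subscribers.append(q)
    try:
        while True:
            try:
                reload_page = (q.get(block=False) == 'reload')
            except Empty:
                reload_page = False
            # ... update the other parameters as before ...
            yield 'data: ' + json.dumps({'reload': '1' if reload_page else '0'})
            gevent.sleep(1)
    finally:
        subscribers.remove(q)  # clean up when the client disconnects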
I hope I've managed to get my point across. Any idea from your side how I can solve the synchronization? Please note that we have both the producer and the consumer in the same sse_worker function.
Any idea is very welcome!
Best regards,
Peter
I am using an event stream on the front end and a yield-based generator function on the back end, storing clients in a Redis queue.
I am storing users correctly in the Redis queue, but my problem is that I don't know how to push a notification to a specific logged-in user.
Front end code:
var source = new EventSource('/stream');
source.onmessage = function (event) {
    console.log(event)
};
Back end code:
from redis import Redis
redis = Redis()
import time
from datetime import datetime

p = redis.pipeline()
app.config['ONLINE_LAST_MINUTES'] = 5

def check_updates():
    yield 'data: %s \n\n' % data  # 'data' is a placeholder; this snippet never defines it

@app.route('/stream')
@nocache
def stream():
    return Response(check_updates(), mimetype='text/event-stream')
Here is a snippet that shows how to use Redis to track users: http://flask.pocoo.org/snippets/71/
If you use Flask-Login, it's easy to get current_user and return a specific message for him/her by checking the user's fields in the database.
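A minimal sketch of one way to target a specific logged-in user (assumptions: Flask-Login's current_user and a per-user Redis pub/sub channel; none of this comes from the snippet above):

import json
from flask import Response
from flask_login import current_user, login_required
from redis import Redis

redis = Redis()

def user_events(user_id):
    # each logged-in user listens on their own Redis pub/sub channel
    pubsub = redis.pubsub()
    pubsub.subscribe('user:%s' % user_id)
    for message in pubsub.listen():
        if message['type'] == 'message':
            yield 'data: %s\n\n' % message['data'].decode('utf-8')

@app.route('/stream')  # "app" is the existing Flask app from the question
@login_required
def stream():
    return Response(user_events(current_user.get_id()), mimetype='text/event-stream')

# elsewhere in the app, push a notification to just that user:
def notify(user_id, payload):
    redis.publish('user:%s' % user_id, json.dumps(payload))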