Django Channels - connect consumer isn't executed - Python

Hi, I am copying parts of the multichat GitHub project from the creator of Django Channels.
I am making slight changes to the code, like not using jQuery and renaming some consumers.
I get literally no errors when running the code, but when I open the page and the JS creates a websocket, the console simply shows:
[2017/08/03 13:13:48] WebSocket HANDSHAKING /chat/stream [127.0.0.1:37070]
[2017/08/03 13:13:48] WebSocket CONNECT /chat/stream [127.0.0.1:37070]
Which one would think is fine, of course. However, in my connect function I have a print("********CONNECTED**********"), which is nowhere to be seen in the console. It simply doesn't run the function I told it to run when someone connects, yet it still says the person connected and throws no errors.
This is the main routing:
channel_routing = [
    include("crypto_chat.routing.websocket_routing", path=r"^/chat-stream/$"),
    include("crypto_chat.routing.chat_routing"),
]
Routing from app:
websocket_routing = [
    route("websocket.connect", ws_connect),
    route("websocket.receive", ws_receive),
    route("websocket.disconnect", ws_disconnect),
]

chat_routing = [
    route("chat.receive", chat_send, command="^send$"),
    route("chat.receive", user_online, command="^online$"),
]
Connect Consumer:
@channel_session_user_from_http
def ws_connect(message):
    # only accept the connection if you have any rooms to join
    print("******************CONNECT*************************")
    message.reply_channel.send({"accept": True})
    # init rooms - add the user to the groups and the pk numbers to the session
    message.channel_session['rooms'] = []
    for room in Room.objects.filter(users=message.user):  # filter, not get: iterate over all of the user's rooms
        room.websocket_group.add(message.reply_channel)
        message.channel_session['rooms'].append(room.pk)
    print(message.channel_session['rooms'])
Here's the JS (note: I am also using the channels JS extension that is available on the project website):
function send_msg() {
    var msg = document.getElementById('msg_input').value;
    console.log("sending msg" + msg);
    webSocketBridge.send({
        "command": "send",
        "room": "1",
        "message": msg
    });
}
// logging
var ws_path = "/chat/stream";
console.log("connecting to " + ws_path);
// connect
var webSocketBridge = new channels.WebSocketBridge();
webSocketBridge.connect(ws_path);
// listen loop
webSocketBridge.listen(function(data) {
    // read json file and act accordingly
    if (data.error) {
        // post error message in chat
        console.log("Error - " + data.error);
        return;
    }
    // handle if the user comes back online
    if (data.online) {
        console.log("User is online");
    }
    else if (data.offline) {
        console.log("User offline");
    }
    else if (data.message) {
        console.log("Got message");
    }
    else {
        console.log("Unknown message type");
    }
});
// Helpful debugging
webSocketBridge.socket.onopen = function () {
    console.log("Connected to chat socket");
};
webSocketBridge.socket.onclose = function () {
    console.log("Disconnected from chat socket");
};

The websocket paths should match on the server and client side: on the server you route ^/chat-stream/$, but on the client side you connect to /chat/stream. Also, make sure you don't forget the trailing slash, since the route's pattern ends with /$ and therefore requires it.
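One way to line the two up (a sketch against the routing shown above, assuming Channels 1.x routing as in the question) is to point the server route at the path the JS client actually opens and keep the trailing slash on both sides:

from channels.routing import include

channel_routing = [
    # matches the client-side ws_path; note the trailing slash
    include("crypto_chat.routing.websocket_routing", path=r"^/chat/stream/$"),
    include("crypto_chat.routing.chat_routing"),
]

The client would then connect with var ws_path = "/chat/stream/"; so the URL also satisfies the /$ at the end of the pattern.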

Related

Python publish to RabbitMQ exchange/queue consumed by ASP.NET Core Service

I'm running RabbitMQ in a Docker container (rabbitmq:3-management image) as part of a Docker Compose application. The application contains some ASP.NET Core WebApi microservices, which exchange messages via this broker. That works fine and hasn't given me any problems so far.
Now I need to publish messages from a Python application to an exchange/queue which was created by one of the ASP.NET Core microservices. The microservice contains a consumer for this queue. For publishing from Python I'm using pika. The problem is, I can't seem to get the publishing right. Whenever I execute my Python script, I can see in the RabbitMQ management UI that a new exchange and queue with the suffix "_skipped" were created; it seems as if my message was sent there instead of the actual queue. Also, when I try to publish directly from the management UI, the message actually makes it to the microservice, but there I get an exception that the message could not be deserialized into a MassTransit envelope object, and a new exchange and queue with the "_error" suffix appear.
I have no idea where the problem is. I think the exchange/queue themselves are fine, since other queues/consumers/publishers for microservice-to-microservice communication in this project work. So it's probably either how I'm trying to address the exchange/queue from Python, or something about my message body that is not right.
This page gives some info about how messages need to be structured, but it's not very detailed, and here I found most of the info about how to publish with Python.
Below you see the relevant code regarding the host/queue configuration in the microservice, as well as the Python script. Any help/tips on how I can get this to work would be greatly appreciated.
ASP.NET Core:
// Declaring the host, queue "mappingQueue", and consumer in Startup.ConfigureServices of the microservice
...
services.AddMassTransit(x =>
{
    x.AddConsumer<MappingUpdateConsumer>();
    x.AddBus(provider => Bus.Factory.CreateUsingRabbitMq(config =>
    {
        config.Host(new Uri(RabbitMqConst.RabbitMqRootUri), h =>
        {
            h.Username(RabbitMqConst.RabbitMqUsername);
            h.Password(RabbitMqConst.RabbitMqPassword);
        });
        config.ReceiveEndpoint("mappingQueue", e =>
        {
            e.ConfigureConsumer<MappingUpdateConsumer>(provider);
        });
    }));
});
services.AddMassTransitHostedService();
...

// Consumer
public class MappingUpdateConsumer : IConsumer<MappingUpdateMessage>
{
    ...
    public async Task Consume(ConsumeContext<MappingUpdateMessage> context)
    {
        await Task.Run(async () =>
        {
            if (context.Message == null)
            {
                return;
            }
            ...
        });
    }
}

// Message class (will have more properties in the future, thus not just using a string consumer)
public class MappingUpdateMessage
{
    public string Message { get; set; }
}
Python:
import pika
import json

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.exchange_declare(exchange='mappingQueue', exchange_type='fanout', durable=True)

message = {
    "message": {
        "message": "Hello World"
    },
    "messageType": [
        "urn:message:MassTransit.Tests:ValueMessage"
    ]
}

channel.basic_publish(exchange='mappingQueue',
                      routing_key='mappingQueue',
                      body=json.dumps(message))
connection.close()
print("sent")
For those with the same problem, I figured it out eventually:
...
config.ReceiveEndpoint("mappingQueue", e =>
{
    e.ClearMessageDeserializers();
    e.UseRawJsonSerializer();
    e.ConfigureConsumer<MappingUpdateConsumer>(provider);
});
...
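With the endpoint switched to the raw JSON deserializer, the Python side can then (as far as I understand MassTransit's raw JSON mode, so treat this as a sketch rather than gospel) drop the envelope and publish just the message object itself; its properties are matched to MappingUpdateMessage by name:

import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.exchange_declare(exchange='mappingQueue', exchange_type='fanout', durable=True)

# no MassTransit envelope / "messageType" wrapper needed with UseRawJsonSerializer()
channel.basic_publish(exchange='mappingQueue',
                      routing_key='',  # a fanout exchange ignores the routing key
                      body=json.dumps({"message": "Hello World"}),
                      properties=pika.BasicProperties(content_type='application/json'))
connection.close()
print("sent")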

How can I get the console output in Angular webpage?

I have a Django project which uses Angular as the frontend. I have a button which, on clicking, scans the tables in the database. I have some print statements in my views.py file which print the scan results continuously to the IDE console. I want that output in the webpage, i.e. live printing of the console output in the frontend. Does anyone know how I can achieve this?
You can achieve this by using server-sent events: Python can push these console logs to the frontend. I'm not an expert in Python, so I'm giving a link below on how to send server-sent events from Python to the frontend:
https://medium.com/code-zen/python-generator-and-html-server-sent-events-3cdf14140e56
On the frontend you can listen to the exposed URL; as soon as the server pushes a message on this stream, the frontend receives it, can push it into a component's array, and can display it in the UI.
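For the Python side, a minimal Django-style sketch (my own rough example rather than the article's code; the view name, queue and URL wiring are placeholders you'd adapt) could stream the log lines as server-sent events with StreamingHttpResponse:

import queue

from django.http import StreamingHttpResponse

log_queue = queue.Queue()  # the scanning code would put() its log lines here instead of print()

def sse_logs(request):
    def event_stream():
        while True:
            line = log_queue.get()        # blocks until a new log line arrives
            yield "data: %s\n\n" % line   # SSE framing: a "data:" line followed by a blank line
    response = StreamingHttpResponse(event_stream(), content_type="text/event-stream")
    response["Cache-Control"] = "no-cache"
    return response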
For the frontend code, I am giving a minimal example below:
import { Injectable, NgZone } from "@angular/core";
import { Observable } from "rxjs";

@Injectable({
  providedIn: "root"
})
export class SseService {
  constructor(private _zone: NgZone) {}

  getServerSentEvent(url: string): Observable<any> {
    return Observable.create(observer => {
      const eventSource = this.getEventSource(url);
      eventSource.onmessage = event => {
        this._zone.run(() => {
          observer.next(event);
        });
      };
      eventSource.onerror = error => {
        this._zone.run(() => {
          observer.error(error);
        });
      };
    });
  }

  private getEventSource(url: string): EventSource {
    return new EventSource(url);
  }
}
You can subscribe to getServerSentEvent from the service above and continuously receive new messages, which in your case are your console logs.
You can try calling the following function with the information that needs to be displayed.
addItem(val: any) {
  let node = document.createElement("li");
  let textnode = document.createTextNode(val);
  node.appendChild(textnode);
  document.getElementById("output").appendChild(node);
}
Make sure to have an element with the id="output".

WebSocket: Error during WebSocket handshake: Sent non-empty 'Sec-WebSocket-Protocol' header but no response was received

I'm trying to create a WS connection with my tornado server. The server code is simple:
import os

import tornado.ioloop
import tornado.web
import tornado.websocket


class WebSocketHandler(tornado.websocket.WebSocketHandler):
    def open(self):
        print("WebSocket opened")

    def on_message(self, message):
        self.write_message(u"You said: " + message)

    def on_close(self):
        print("WebSocket closed")


def main():
    settings = {
        "static_path": os.path.join(os.path.dirname(__file__), "static")
    }
    app = tornado.web.Application([
        (r'/ws', WebSocketHandler),
        (r"/()$", tornado.web.StaticFileHandler, {'path': 'static/index.html'}),
    ], **settings)
    app.listen(8888)
    tornado.ioloop.IOLoop.current().start()


if __name__ == "__main__":
    main()
I copy pasted the client code from here:
$(document).ready(function () {
    if ("WebSocket" in window) {
        console.log('WebSocket is supported by your browser.');
        var serviceUrl = 'ws://localhost:8888/ws';
        var protocol = 'Chat-1.0';
        var socket = new WebSocket(serviceUrl, protocol);
        socket.onopen = function () {
            console.log('Connection Established!');
        };
        socket.onclose = function () {
            console.log('Connection Closed!');
        };
        socket.onerror = function (error) {
            console.log('Error Occured: ' + error);
        };
        socket.onmessage = function (e) {
            if (typeof e.data === "string") {
                console.log('String message received: ' + e.data);
            }
            else if (e.data instanceof ArrayBuffer) {
                console.log('ArrayBuffer received: ' + e.data);
            }
            else if (e.data instanceof Blob) {
                console.log('Blob received: ' + e.data);
            }
        };
        socket.send("Hello WebSocket!");
        socket.close();
    }
});
When it tries to connect, I get the following output in the browser's console:
WebSocket connection to 'ws://localhost:8888/ws' failed: Error during WebSocket handshake: Sent non-empty 'Sec-WebSocket-Protocol' header but no response was received
Why is that?
As pointed out in whatwg.org's Websocket documentation (it's a copy from the standard's draft):
The WebSocket(url, protocols) constructor takes one or two arguments. The first argument, url, specifies the URL to which to connect. The second, protocols, if present, is either a string or an array of strings. If it is a string, it is equivalent to an array consisting of just that string; if it is omitted, it is equivalent to the empty array. Each string in the array is a subprotocol name. The connection will only be established if the server reports that it has selected one of these subprotocols. The subprotocol names must all be strings that match the requirements for elements that comprise the value of Sec-WebSocket-Protocol fields as defined by the WebSocket protocol specification.
Your server answers the websocket connection request with an empty Sec-WebSocket-Protocol header, since it doesn't support the Chat-1.0 subprotocol.
Since you're writing both the server side and the client side (and unless you're writing an API you intend to share), it shouldn't be super important to set a specific subprotocol name.
You can fix this by either removing the subprotocol name from the javascript connection:
var socket = new WebSocket(serviceUrl);
Or by modifying your server to support the protocol requested.
I could give a Ruby example, but I can't give a Python example since I don't have enough information.
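That said, a rough Tornado sketch of the second option might look like the following; it relies on Tornado's select_subprotocol hook and is untested against the setup above:

import tornado.websocket

class WebSocketHandler(tornado.websocket.WebSocketHandler):
    SUPPORTED_SUBPROTOCOLS = ("Chat-1.0",)

    def select_subprotocol(self, subprotocols):
        # subprotocols is the list the browser sent in Sec-WebSocket-Protocol;
        # returning one of its entries makes Tornado echo it back in the handshake
        for proto in subprotocols:
            if proto in self.SUPPORTED_SUBPROTOCOLS:
                return proto
        return None  # no match: the response header stays empty, as before

    def open(self):
        print("WebSocket opened")

    def on_message(self, message):
        self.write_message(u"You said: " + message)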
EDIT (Ruby example)
Since I was asked in the comments, here's a Ruby example.
This example requires the iodine HTTP/WebSockets server, since it supports the rack.upgrade specification draft (concept detailed here) and adds a pub/sub API.
The server code can be either executed through the terminal or as a Rack application in a config.ru file (run iodine from the command line to start the server):
# frozen_string_literal: true

class ChatClient
  def on_open client
    @nickname = client.env['PATH_INFO'].to_s.split('/')[1] || "Guest"
    client.subscribe :chat
    client.publish :chat, "#{@nickname} joined the chat."
    if client.env['my_websocket.protocol']
      client.write "You're using the #{client.env['my_websocket.protocol']} protocol"
    else
      client.write "You're not using a protocol, but we let it slide"
    end
  end

  def on_close client
    client.publish :chat, "#{@nickname} left the chat."
  end

  def on_message client, message
    client.publish :chat, "#{@nickname}: #{message}"
  end
end

module APP
  # the Rack application
  def self.call env
    return [200, {}, ["Hello World"]] unless env["rack.upgrade?"]
    env["rack.upgrade"] = ChatClient.new
    protocol = select_protocol(env)
    if protocol
      # we will use the same client for all protocols, because it's a toy example
      env['my_websocket.protocol'] = protocol # <= used by the client
      [101, { "Sec-Websocket-Protocol" => protocol }, []]
    else
      # we can either refuse the connection, or allow it without a match
      # here, it is allowed
      [101, {}, []]
    end
  end

  # the allowed protocols
  PROTOCOLS = %w{ chat-1.0 soap raw }

  def select_protocol(env)
    request_protocols = env["HTTP_SEC_WEBSOCKET_PROTOCOL"]
    unless request_protocols.nil?
      request_protocols = request_protocols.split(/,\s?/) if request_protocols.is_a?(String)
      request_protocols.detect { |request_protocol| PROTOCOLS.include? request_protocol }
    end # either `nil` or the result of `request_protocols.detect` is returned
  end

  # make functions available as a singleton module
  extend self
end

# config.ru
if __FILE__.end_with? ".ru"
  run APP
else
  # terminal?
  require 'iodine'
  Iodine.threads = 1
  Iodine.listen2http app: APP, log: true
  Iodine.start
end
To test the code, the following JavaScript should work:
ws = new WebSocket("ws://localhost:3000/Mitchel", "chat-1.0");
ws.onmessage = function(e) { console.log(e.data); };
ws.onclose = function(e) { console.log("Closed"); };
ws.onopen = function(e) { e.target.send("Yo!"); };
For those who use CloudFormation templates, AWS has a nice example here.
UPDATE
The key thing is the response in the connection function. The above-mentioned AWS example shows how this can be done:
exports.handler = async (event) => {
    if (event.headers != undefined) {
        const headers = toLowerCaseProperties(event.headers);
        if (headers['sec-websocket-protocol'] != undefined) {
            const subprotocolHeader = headers['sec-websocket-protocol'];
            const subprotocols = subprotocolHeader.split(',');
            if (subprotocols.indexOf('myprotocol') >= 0) {
                const response = {
                    statusCode: 200,
                    headers: {
                        "Sec-WebSocket-Protocol": "myprotocol"
                    }
                };
                return response;
            }
        }
    }
    const response = {
        statusCode: 400
    };
    return response;
};

function toLowerCaseProperties(obj) {
    var wrapper = {};
    for (var key in obj) {
        wrapper[key.toLowerCase()] = obj[key];
    }
    return wrapper;
}
Please note the header settings in the response. This response must also be delivered to the requester; for that, response integration must be configured.
In the AWS example, consider this part of the template:
MyIntegration:
  Type: AWS::ApiGatewayV2::Integration
  Properties:
    ApiId: !Ref MyAPI
    IntegrationType: AWS_PROXY
    IntegrationUri: !GetAtt MyLambdaFunction.Arn
    IntegrationMethod: POST
    ConnectionType: INTERNET
The most important parts are the last two lines.

Asynchronous Django using django-websocket-redis

I'm trying to use django-websocket-redis, and I don't understand how it works even after reading the docs.
The client part (JavaScript/template) was easy to understand, but I want to send data messages from one client to another and I'm stuck there.
Connecting each client:
var ws = new WebSocket('ws://localhost:8000/ws/foobar?subscribe-group');
ws.onopen = function(e) {
    console.log("websocket connected");
};
ws.onclose = function(e) {
    console.log("connection closed");
};
How do I set up my views.py to create a link between them?
With Node.js I was using this code to link the clients together:
io.sockets.on('connection', function (socket) {
    var data = {"action": "connexion", "session_id": socket.id};
    socket.emit('message', data);
    socket.on('message', function(socket){
        if (socket.action == "test")
        {
            io.sockets.socket(socket.code).emit('message', {"action": "move"});
            // the socket.code is the session_id of client one, transmitted by a form
        }
    });
});
Thank you.
The link between your Django views.py and the websocket loop is the Redis message queue. Imagine having two separate main loops on the server: one handles HTTP requests using the normal Django request handler, while the other handles the websockets with their long-lived connections. Since you can't mix both loops within the normal Django request handler, you need message queuing so that they can communicate with each other.
Therefore, in your Django views.py, send the data to the websocket using something like:
import json
from ws4redis.publisher import RedisPublisher
from ws4redis.redis_store import RedisMessage

# inside a class-based view:
def __init__(self):
    self.redis_publisher = RedisPublisher(facility='foo', broadcast=True)

def get(self, request):
    data_for_websocket = json.dumps({'some': 'data'})
    self.redis_publisher.publish_message(RedisMessage(data_for_websocket))
This will publish data_for_websocket to all websockets subscribed (i.e. listening) via the URL:
ws://example.com/ws/foo?subscribe-broadcast

Communicating Raspberry Pi with cloudfoundry

I have written a Node.js socket.io routine which will be called by a Python socket.io routine from my Raspberry Pi. It communicates in both directions. When I run these two routines on localhost it works fine; however, when I deploy the server application to Cloud Foundry and change the socket.io connection link to Cloud Foundry, it does not work. Below is the Python client:
from socketIO_client import SocketIO

def on_updatepi_response(*args):
    print 'updatepi'

def on_receivepi_response(*args):
    print 'receiveepi'

with SocketIO('raspinode-server.cloudfoundry.com', 8080) as socketIO:
    socketIO.on('receivepi', on_receivepi_response)
    socketIO.on('updatepi', on_updatepi_response)
    socketIO.emit('sendrabbit', 'testdata')
    socketIO.wait(seconds=1)
I know Cloud Foundry can be a bit strange; my first idea was to use RabbitMQ, but that is tied to the VCAP_SERVICES idea. However, I did not think such a restriction would apply to a Node.js app.
Let me know if there is anything wrong with the above code, and if not, how I can get my external Pi to send readings to my cloud app.
The server code is listed below, though it is probably not relevant; it responds on localhost. I know the RabbitMQ code is not hooked up yet.
var express = require('express');
var app = express();
var server = require('http').createServer(app);
var amqp = require('amqp');
var io = require('socket.io').listen(server);

function rabbitUrl() {
    if (process.env.VCAP_SERVICES) {
        conf = JSON.parse(process.env.VCAP_SERVICES);
        return conf['rabbitmq-2.4'][0].credentials.url;
    }
    else {
        return "amqp://localhost";
    }
}

var port = process.env.VCAP_APP_PORT || 3000;
var messages = [];

function setup() {
    var exchange = conn.exchange('cf-demo', {'type': 'fanout', durable: false}, function() {
        var queue = conn.queue('', {durable: false, exclusive: true},
            function() {
                queue.subscribe(function(msg) {
                    messages.push(htmlEscape(msg.body));
                    if (messages.length > 10) {
                        messages.shift();
                    }
                });
                queue.bind(exchange.name, '');
            });
        queue.on('queueBindOk', function() { httpServer(exchange); });
    });
}

server.listen(8080);

io.sockets.on('connection', function(socket){
    // when the client emits sendrabbit, this listens
    socket.on('sendrabbit', function(data)
    {
        // we tell the client to execute updatepi with 2 parameters
        io.sockets.emit('updatepi', socket.username, data)
    });
    socket.on('disconnect', function()
    {
        socket.broadcast.emit('updatepi', 'SERVER', socket.username + ' has disconnected');
    });
});
It's my understanding that your server should listen on the port Cloud Foundry assigns it (available in an environment variable); you can't assume it will be 8080. The client then talks to raspinode-server.cloudfoundry.com (no port) and Cloud Foundry routes it to the correct place.
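A rough sketch of what that implies (untested; hostname as in the question): on the server, listen on the assigned port that the code already computes into the port variable (server.listen(port) instead of server.listen(8080)), and on the Pi connect without hard-coding 8080, letting the Cloud Foundry router do the forwarding:

from socketIO_client import SocketIO

# port 80: talk to the public route and let Cloud Foundry forward to the app's assigned port
with SocketIO('raspinode-server.cloudfoundry.com', 80) as socketIO:
    socketIO.emit('sendrabbit', 'testdata')
    socketIO.wait(seconds=1)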
