I cannot understand how the send function works in Flask-SocketIO.
For example, I am using Flask-SocketIO as the server and socket.io as the client:
server:
#socketio.on("test")
def handle_test():
send("Wow")
client:
socket.emit('test', "", (data) => {
console.log(data);
});
I thought I could get the data back on the client side, but I'm wrong: I just get nothing.
I understand that I can build a structure based on events, but I cannot understand how send works. Will it send a response to the client? If so, how can I get that response? If not, what does it do?
First, I suggest you use emit() instead of send().
To send from client to server, use this:
socket.emit('test', {data: 'my data'});
On the server, you can receive the event and then emit back to the client:
@socketio.on('test')
def handle_test():
    emit('wow')
To receive this second emit on the client, do this:
socket.on('wow', function(data) {
    console.log(data);
});
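For completeness, here is my understanding of what send() itself does, based on the Flask-SocketIO docs: send() emits the default, unnamed 'message' event, so the client only sees it with socket.on('message', ...); the acknowledgement callback you passed to socket.emit fires only when the server handler returns a value. A minimal sketch of the server side, reusing the socketio object from your question:

from flask_socketio import send

@socketio.on("test")
def handle_test():
    send("Wow")    # delivered to the client as the unnamed 'message' event
    return "Wow"   # a return value becomes the argument of the client's ack callback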
I'm running RabbitMQ in a Docker container (rabbitmq:3-management image) as part of a Docker Compose application. The application contains some ASP.NET Core WebApi microservices, which exchange messages via this broker. That has worked fine and hasn't given me any problems so far.
Now I need to publish messages from a Python application to an exchange/queue that was created by one of the ASP.NET Core microservices. The microservice contains a consumer for this queue. For publishing from Python, I'm using pika. The problem is, I can't seem to get the publishing right. Whenever I execute my Python script, I can see in the RabbitMQ management UI that a new exchange and queue with the suffix "_skipped" were created. It seems as if my message was sent there instead of the actual queue. Also, when trying to publish directly from the management UI, the message actually makes it to the microservice, but there I get an exception that the message could not be deserialized to a MassTransit envelope object, and a new exchange and queue with the "_error" suffix are created.
I have no idea where the problem is. I think the exchange/queue themselves are fine, since other queues/consumers/publishers for microservice-to-microservice communication in this project work. So it's probably either how I'm addressing the exchange/queue from Python, or something wrong with my message body.
This page gives some info about how messages need to be structured, but not in much detail, and here I got most of the info about how to publish with Python.
Below you see the relevant code regarding the host/queue configuration in the microservice, as well as the Python script. Any help/tips on how I can get this to work would be greatly appreciated.
ASP.NET Core:
// Declaring the host, queue "mappingQueue", consumer in Startup.ConfigureServices of microservice
...
services.AddMassTransit(x =>
{
    x.AddConsumer<MappingUpdateConsumer>();
    x.AddBus(provider => Bus.Factory.CreateUsingRabbitMq(config =>
    {
        config.Host(new Uri(RabbitMqConst.RabbitMqRootUri), h =>
        {
            h.Username(RabbitMqConst.RabbitMqUsername);
            h.Password(RabbitMqConst.RabbitMqPassword);
        });
        config.ReceiveEndpoint("mappingQueue", e =>
        {
            e.ConfigureConsumer<MappingUpdateConsumer>(provider);
        });
    }));
});
services.AddMassTransitHostedService();
...
// Consumer
public class MappingUpdateConsumer : IConsumer<MappingUpdateMessage>
{
    ...
    public async Task Consume(ConsumeContext<MappingUpdateMessage> context)
    {
        await Task.Run(async () =>
        {
            if (context.Message == null)
            {
                return;
            }
            ...
        });
    }
}

// Message class (will have more properties in the future, thus not just using a string consumer)
public class MappingUpdateMessage
{
    public string Message { get; set; }
}
Python:
import pika
import json

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.exchange_declare(exchange='mappingQueue', exchange_type='fanout', durable=True)

message = {
    "message": {
        "message": "Hello World"
    },
    "messageType": [
        "urn:message:MassTransit.Tests:ValueMessage"
    ]
}

channel.basic_publish(exchange='mappingQueue',
                      routing_key='mappingQueue',
                      body=json.dumps(message))
connection.close()
print("sent")
For those with the same problem, I figured it out eventually:
...
config.ReceiveEndpoint("mappingQueue", e =>
{
    e.ClearMessageDeserializers();
    e.UseRawJsonSerializer();
    e.ConfigureConsumer<MappingUpdateConsumer>(provider);
});
...
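For reference, the other direction would have been to keep MassTransit's default JSON deserializer and publish a full MassTransit envelope from Python instead. A rough sketch of what that could look like; the messageType urn has the form urn:message:{Namespace}:{ClassName} for the consumer's message class, and both the namespace and the content type below are assumptions on my part, not something I verified in this project:

import json
import pika

# Sketch only: an envelope shaped the way MassTransit's default JSON
# deserializer expects it. "YourNamespace" is a placeholder and must be
# replaced with the actual .NET namespace containing MappingUpdateMessage.
envelope = {
    "message": {"message": "Hello World"},
    "messageType": ["urn:message:YourNamespace:MappingUpdateMessage"]
}

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.basic_publish(
    exchange='mappingQueue',
    routing_key='',  # ignored by a fanout exchange
    body=json.dumps(envelope),
    # MassTransit's JSON content type, as far as I know
    properties=pika.BasicProperties(content_type='application/vnd.masstransit+json'))
connection.close()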
Trying to learn Groovy, and it's been a fun and only slightly confusing adventure so far. What I'm currently trying to do is stand up a server, make a wget request to it, and when that request is received, have a certain action executed - in this case, just creating a new file:
import java.net.http.HttpResponse

class ServerLogic {
    static def holdupServer() {
        println('Standing up server..\n')
        def socketServer = new ServerSocket(5000)
        // server is up
        socketServer.accept { socket ->
            // the lines below only execute when a connection is made to the server
            socket.withStreams { input, output ->
                println("[${new Date()}] HELLO\n")
                def newFile = new File("/home/nick/IdeaProjects/groovy_learning/resources/new.txt")
                newFile.createNewFile()
                newFile.text = 'Hello!!!'
                println("NEW FILE SHOULD HAVE BEEN CREATED.")
                println ">> READ: ${input.newReader().readLine()}"
            }
        }
        return HttpResponse
    }
}

ServerLogic.holdupServer()
With the above code, when I execute a wget http://localhost:5000, it "works" in the sense that the file is created like I want it to be, but the wget output is unhappy:
--2021-07-17 15:42:32-- http://localhost:5000/
Resolving localhost (localhost)... 127.0.0.1
Connecting to localhost (localhost)|127.0.0.1|:5000... connected.
HTTP request sent, awaiting response... No data received.
Retrying.
--2021-07-17 15:42:33-- (try: 2) http://localhost:5000/
Connecting to localhost (localhost)|127.0.0.1|:5000... failed: Connection refused.
Resolving localhost (localhost)... 127.0.0.1
Connecting to localhost (localhost)|127.0.0.1|:5000... failed: Connection refused.
// these occur because the server has shut down after the last println call, when the `return HttpResponse` triggers
So from that, we can reason that there isn't a proper response being returned, even though I have the return HttpResponse after the socketServer.accept ... logic. My thought on how to solve the problem (primarily because I come from a Python background) would be to somehow mimic yielding a response in Python (basically, return a response without breaking out of the holdupServer() logic and thus breaking the server connection). Is there a way to achieve this in Groovy, or is there a different approach I could use to essentially return a valid HttpResponse without exiting the holdupServer() block?
Explanation
You can use a function callback, which in Groovy translates to a closure callback. Basically, instead of returning a value, you pass it to another function/method (the callback), deferring that work from the current method. This approach works in essentially all languages. In Java (versions which don't support lambdas), for instance, you would have to pass an object on which you would call a method later.
Example
import java.net.http.HttpResponse

class ServerLogic {
    static def holdupServer(Closure closure) {
        (0..2).each {
            closure.call(HttpResponse)
        }
    }
}

ServerLogic.holdupServer { httpResponse ->
    println httpResponse
}
Output
interface java.net.http.HttpResponse
interface java.net.http.HttpResponse
interface java.net.http.HttpResponse
Addressing OP's comment
You have to provide some headers. At least Content-Type and Content-Length should be provided, along with the data and the HTTP status line (HTTP/1.1 200 in this case), properly formatted. Also, you should wrap the ServerSocket.accept calls in a while loop.
See the MDN Overview on HTTP.
Code
class ServerLogic {
    static HEADERS = [
        "HTTP/1.1 200",
        "Content-Type: text/html; charset=utf-8",
        "Connection: Keep-Alive",
        "Keep-Alive: timeout=5, max=1000",
        "Content-Length: %d\r\n",
        "%s"
    ].join("\r\n")

    static def holdupServer(Closure callback) {
        println('Standing up server..\n')
        def socketServer = new ServerSocket(5000)
        // server is up
        while (true) { // Continue to accept connections
            socketServer.accept { socket ->
                // the lines below only execute when a connection is made to the server
                callback.call(socket) // call function
            }
        }
    }
}

ServerLogic.holdupServer { Socket socket ->
    String data = "RESPONSE <--\n"
    def response = String.format(ServerLogic.HEADERS, data.size(), data)
    println response
    socket << response
}
Client output
RESPONSE <--
I want to handle some malicious requests by not sending any kind of response in Flask.
In a route like:
#app.route("/something", methods=['POST'])
def thing():
return
This still returns an INTERNAL SERVER ERROR with "View function did not return a response".
How do I formally send NO RESPONSE back to a client, e.g. from an ajax call like so?
$.ajax({
    url: "/something/",
    method: "POST",
    data: JSON.stringify({
        "foo": "abc",
        "bar": "123"
    }),
    success: function(resp) {
        console.log(resp);
    },
    error: function(error) {
        console.log(error); // this still gets called. I want it to hang.
    }
})
... I want it to hang.
What you probably want is to close the connection on the server side (since you need to free the resources) but without the client realizing that the connection is closed. You cannot do this from Flask or any other web application, since any close of the connection on the server will cause the OS kernel to send a FIN to the client, and thus the client knows about the closing too.
I suggest instead that you issue a redirect to the client, to a URL where the client just hangs: for example, have an iptables DROP rule on port 8080 and then redirect the client to http://your.host:8080/. Many clients will blindly follow such redirects and then hang (until they time out) while trying to connect to this dead-drop URL.
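A minimal sketch of that idea in Flask, assuming http://your.host:8080/ is a placeholder for a host/port that silently drops packets (e.g. via the iptables DROP rule mentioned above):

from flask import Flask, redirect

app = Flask(__name__)

@app.route("/something", methods=["POST"])
def thing():
    # Redirect the client into a black hole; 307 keeps the request method
    # for clients that follow redirects.
    return redirect("http://your.host:8080/", code=307)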
I'm trying to send sensor data (in Python) from my Raspberry Pi 3 to my local Node server.
I found a Python module called requests for sending data to a server.
Here I'm trying to send the value 22 (later there will be real sensor data) from my Raspberry Pi 3 to my local Node server with socket.io. The requests.get() works, but the put command doesn't send the data.
Can you tell me where the mistake is?
#!/usr/bin/env python

import requests

r = requests.get('http://XXX.XXX.XXX.XXX:8080')
print(r)
r = requests.put('http://XXX.XXX.XXX.XXX:8080', data={'rasp_param': '22'})
In my server.js I try to receive the data, but somehow nothing is received.
server.js
var express = require('express')
  , app = express()
  , server = require('http').createServer(app)
  , io = require('socket.io').listen(server)
  , conf = require('./config.json');

// Webserver
server.listen(conf.port);

app.configure(function() {
    app.use(express.static(__dirname + '/public'));
});

app.get('/', function (req, res) {
    res.sendfile(__dirname + '/public/index.html');
});

// Websocket
io.sockets.on('connection', function (socket) {
    // Here I want to get the data
    io.sockets.on('rasp_param', function (data) {
        console.log(data);
    });
});

// Server Details
console.log('The server runs on http://127.0.0.1:' + conf.port + '/');
You are using HTTP PUT from Python, but you are listening with a websocket server on the Node.js side.
Either have Node listen for an HTTP POST (I'd use POST rather than PUT):
app.post('/data', function (req, res) {
    // do stuff with the data here
});
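On the Python side, the matching call would look roughly like this; the /data path and host/port are placeholders taken from the snippets above:

import requests

# Post the value as JSON to the Express handler sketched above.
r = requests.post('http://XXX.XXX.XXX.XXX:8080/data', json={'rasp_param': '22'})
print(r.status_code)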
Or have a websocket client on the Python side (for example with the websockets package, inside an asyncio coroutine):
ws = yield from websockets.connect("ws://10.1.10.10")
yield from ws.send(json.dumps({'param': 'value'}))
A persistent websocket connection is probably the best choice.
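Since the Node server here is running socket.io rather than a plain websocket endpoint, a python-socketio client may be the more direct fit for that persistent connection. A rough sketch, with host, port, and event name as placeholders:

import socketio

sio = socketio.Client()
sio.connect('http://XXX.XXX.XXX.XXX:8080')

# Emit the reading as a socket.io event the server can listen for
# with socket.on('rasp_param', ...).
sio.emit('rasp_param', {'value': 22})
sio.disconnect()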
I'm trying to use django-websocket-redis and I don't understand how it works, even after reading the docs.
The client part (JavaScript/template) was easy to understand, but I want to send data messages from one client to another, and I'm stuck here.
Connecting each client:
var ws = new WebSocket('ws://localhost:8000/ws/foobar?subscribe-group');
ws.onopen = function(e) {
    console.log("websocket connected");
};
ws.onclose = function(e) {
    console.log("connection closed");
};
How do I manage my views.py to create a link between them?
With Node.js I was using this code to link the clients together:
io.sockets.on('connection', function (socket) {
    var data = {"action": "connexion", "session_id": socket.id};
    socket.emit('message', data);
    socket.on('message', function(socket) {
        if (socket.action == "test")
        {
            io.sockets.socket(socket.code).emit('message', {"action": "move"});
            // the socket.code is the session_id of client one, transmitted by a form
        }
    });
});
Thank you.
The link between your Django views.py and the websocket loop is the Redis message queue. Imagine having two separate main loops on the server: one handles HTTP requests using the normal Django request handler, while the other handles the websockets with their long-lived connections. Since you can't mix the two loops within the normal Django request handler, you need message queuing so that they can communicate with each other.
Therefore, in your Django views.py, send the data to the websocket using something like this (inside a class-based view):
def __init__(self):
    self.redis_publisher = RedisPublisher(facility='foo', broadcast=True)

def get(self, request):
    data_for_websocket = json.dumps({'some': 'data'})
    self.redis_publisher.publish_message(RedisMessage(data_for_websocket))
This will publish data_for_websocket on all Websockets subscribed (=listening) using the URL:
ws://example.com/ws/foo?subscribe-broadcast
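Putting that together, here is a rough sketch of a complete class-based view as I understand the ws4redis API: one client POSTs a message over normal HTTP, and the view republishes it to every websocket subscribed to the 'foo' facility. The imports, view name, and the 'action' field are my own additions for illustration:

import json

from django.http import HttpResponse
from django.views.generic import View
from ws4redis.publisher import RedisPublisher
from ws4redis.redis_store import RedisMessage

class BroadcastView(View):
    """Republishes whatever a client POSTs to every websocket subscribed to 'foo'."""

    def post(self, request):
        data_for_websocket = json.dumps({'action': request.POST.get('action', 'move')})
        redis_publisher = RedisPublisher(facility='foo', broadcast=True)
        redis_publisher.publish_message(RedisMessage(data_for_websocket))
        return HttpResponse('ok')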