I'm using django-websocket-redis and have this in my settings.py:
WS4REDIS_CONNECTION = {
    'host': 'unix://var/run/redis/redis.sock',
    'db': 0,
    # 'port': 16379,
    # 'password': 'verysecret',
}
I tried all possible combinations of the 'host' parameter and can't get it to connect over a unix socket instead of TCP. I always get this message:
ConnectionError: Error -2 connecting to /var/run/redis/redis.sock:6379. Name or service not known.
Is there a way to connect ws4redis from Django to Redis using a unix socket? If there is, how?
TL;DR: you have to override the default Redis connection settings of ws4redis.
I bumped into this when I was implementing a custom Django command, which was supposed to send a websocket message on certain server-side events.
If you look at the source code of RedisPublisher class, you will notice this line at the very top:
redis_connection_pool = ConnectionPool(**settings.WS4REDIS_CONNECTION)
while the docstring for ConnectionPool.__init__() states the following:
By default, TCP connections are created unless connection_class is specified.
Use redis.UnixDomainSocketConnection for unix sockets.
So when you instantiate RedisPublisher, it uses a ConnectionPool which, by default, does not know anything about sockets. Therefore two approaches are possible:
Switch the default connection class to UnixDomainSocketConnection when instantiating the ConnectionPool (sketched below), or
Substitute the ConnectionPool with a StrictRedis connection, which has built-in support for unix sockets (the named argument unix_socket_path).
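For completeness, here is a minimal, untested sketch of the first approach; note that UnixDomainSocketConnection takes a path argument where a TCP connection takes host and port:
from redis import ConnectionPool, UnixDomainSocketConnection

# Pass the connection class explicitly so the pool speaks over the socket
pool = ConnectionPool(
    connection_class=UnixDomainSocketConnection,
    path='/var/run/redis/redis.sock',  # 'path' replaces 'host' for sockets
    db=0,
)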
This is how I solved it using the 2nd approach (it appears cleaner to me):
from redis import StrictRedis
from django.conf import settings
from ws4redis.publisher import RedisPublisher
from ws4redis.redis_store import RedisMessage

r = StrictRedis(**settings.WS4REDIS_CONNECTION)
publisher = RedisPublisher(facility='foobar', broadcast=True)
publisher._connection = r  # swap in the socket-aware connection
msg = RedisMessage('ping')
publisher.publish_message(msg)
The whole magic is in the publisher._connection line, which ultimately switches the connection used by the RedisPublisher class.
Since _connection implies protected access, this looks like a slightly dirty, but working, solution.
You also need to specify the following WS4REDIS_CONNECTION settings:
WS4REDIS_CONNECTION = {
    'unix_socket_path': '/tmp/redis.sock'
}
This is required for the wsgi side, since it appears to use redis-py's built-in capability to connect to a unix socket, as stated in the docs.
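As a quick sanity check (my own illustration, not part of ws4redis), StrictRedis accepts unix_socket_path directly, so the settings above plug straight into the snippet from earlier:
from redis import StrictRedis

r = StrictRedis(unix_socket_path='/tmp/redis.sock', db=0)
r.ping()  # raises ConnectionError if the socket is unreachable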
Hi all, I have the following code, but for some reason I keep getting the error below. It seems to work on a colleague's PC, and we can't figure out why it won't work on mine.
We have also double-checked that we're importing the same socketio using dir().
I've tried specifying the namespace both on sio.connect and in the sio.emit, but still no luck!
socketio.exceptions.BadNamespaceError: / is not a connected namespace.
bearerToken = 'REDACT'
core = 'REDACT'
output = 'REDACT'

import socketio
import json

def getListeners(token, coreUrl, outputId):
    sio = socketio.Client(reconnection_attempts=5, request_timeout=5)
    sio.connect(url=coreUrl, transports='websocket')

    @sio.on('mwedge:batch:stats')
    def batchStats(data):
        if (outputId in data['outputStats']):
            listeners = data['outputStats'][outputId][16]
            print("Number of listeners ", len(listeners))
            ips = []
            for listener in listeners:
                ips.append(listener[1])
            print("Ips", ips)

    def authCallback(data):
        print(json.dumps(data))

    sio.emit(event='auth',
             data={
                 'token': token
             },
             callback=authCallback)

getListeners(bearerToken, core, output)
The Socket.IO connection involves a number of exchanges between the client and the server. The connect() function initiates this process, but it continues in the background. The connection process completes when the handler for your connect event is invoked; at that point you can emit.
The problem with your code is that you are not waiting until the connection handshake completes, so your emit() call happens before a connection is established. The solution is to add a connect event handler and move your emit() call there.
As an additional note, I suggest you set up your event handlers before you call the connect() function.
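A rough sketch of that restructuring, under the same assumptions as the question's code (the 'REDACT' placeholders are the asker's; everything else is illustrative):
import socketio
import json

sio = socketio.Client(reconnection_attempts=5, request_timeout=5)

@sio.event
def connect():
    # The handshake is complete here, so emitting is now safe
    sio.emit(event='auth',
             data={'token': 'REDACT'},
             callback=lambda data: print(json.dumps(data)))

@sio.on('mwedge:batch:stats')
def batchStats(data):
    print(data)

# Handlers are registered above, before the connection is initiated
sio.connect(url='REDACT', transports='websocket')
sio.wait()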
I have been using stomp.py and stompest for years now to communicate with ActiveMQ to great effect, but this has mostly been with standalone Python daemons.
I would like to use these two libraries from the webserver to communicate with the backend, but I am having trouble finding out how to do this without creating a new connection on every request.
Is there a standard approach to safely handling TCP connections in the webserver? In other languages, some sort of global object at that level would be used for connection pooling.
HTTP is a synchronous protocol. Each waiting client consumes server resources (CPU, memory, file descriptors) while waiting for a response. This means that the web server has to respond quickly. An HTTP web server should not block on external long-running processes when responding to a request.
The solution is to process requests asynchronously. There are two major options:
Use polling.
POST pushes a new task to a message queue:
POST /api/generate_report
{
"report_id": 1337
}
GET checks the MQ (or a database) for a result:
GET /api/report?id=1337
{
"ready": false
}
GET /api/report?id=1337
{
"ready": true,
"report": "Lorem ipsum..."
}
Asynchronous tasks in the Django ecosystem are usually implemented using Celery, but you can use any MQ directly.
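A hedged sketch of those polling endpoints backed by Celery; the view names and result lookup here are illustrative, not from the original post:
from celery import shared_task
from celery.result import AsyncResult
from django.http import JsonResponse

@shared_task
def generate_report(report_id):
    # Long-running work happens here, outside the request cycle
    return "Lorem ipsum..."

def post_generate_report(request):
    task = generate_report.delay(1337)
    return JsonResponse({'report_id': 1337, 'task_id': task.id})

def get_report(request):
    result = AsyncResult(request.GET['task_id'])
    if not result.ready():
        return JsonResponse({'ready': False})
    return JsonResponse({'ready': True, 'report': result.get()})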
Use WebSockets.
Helpful links:
What are Long-Polling, Websockets, Server-Sent Events (SSE) and Comet?
https://en.wikipedia.org/wiki/Push_technology
https://www.reddit.com/r/django/comments/4kcitl/help_in_design_for_long_running_requests/
https://realpython.com/asynchronous-tasks-with-django-and-celery/
https://blog.heroku.com/in_deep_with_django_channels_the_future_of_real_time_apps_in_django
Edit:
Here is a pseudocode example of how you can reuse a connection to an MQ:
projectName/appName/services.py:
import stomp

def create_connection():
    conn = stomp.Connection([('localhost', 9998)])
    conn.start()
    conn.connect(wait=True)
    return conn

# Module-level code runs once per process, so the connection is reused
print('This code will be executed only once per process')
activemq = create_connection()
projectName/appName/views.py:
from django.http import HttpResponse
from .services import activemq

def index(request):
    # Reuses the module-level connection; stomp.py's send() takes destination and body
    activemq.send(destination='bar', body='foo')
    return HttpResponse('Success!')
Trying to get authentication working with Django Channels, with a very simple websocket app that echoes back whatever the user sends over with the prefix "You said: ".
My processes:
web: gunicorn myproject.wsgi --log-file=- --pythonpath ./myproject
realtime: daphne myproject.asgi:channel_layer --port 9090 --bind 0.0.0.0 -v 2
realtime_worker: python manage.py runworker -v 2
I run all processes when testing locally with heroku local -e .env -p 8080, but you could also run them all separately.
Note I have WSGI on localhost:8080 and ASGI on localhost:9090.
Routing and consumers:
### routing.py ###
from . import consumers

channel_routing = {
    'websocket.connect': consumers.ws_connect,
    'websocket.receive': consumers.ws_receive,
    'websocket.disconnect': consumers.ws_disconnect,
}
and
### consumers.py ###
import traceback
from django.contrib.auth.models import User
from django.http import HttpResponse
from channels.handler import AsgiHandler
from channels import Group
from channels.sessions import channel_session
from channels.auth import channel_session_user, channel_session_user_from_http
from myproject import CustomLogger

logger = CustomLogger(__name__)

@channel_session_user_from_http
def ws_connect(message):
    logger.info("ws_connect: %s" % message.user.email)
    message.reply_channel.send({"accept": True})
    message.channel_session['prefix'] = "You said"
    # message.channel_session['django_user'] = message.user  # tried doing this but it doesn't work...

@channel_session_user_from_http
def ws_receive(message, http_user=True):
    try:
        logger.info("1) User: %s" % message.user)
        logger.info("2) Channel session fields: %s" % message.channel_session.__dict__)
        logger.info("3) Anything at 'django_user' key? => %s" % (
            'django_user' in message.channel_session,))
        user = User.objects.get(pk=message.channel_session['_auth_user_id'])
        logger.info("4) ws_receive: %s" % user.email)
        prefix = message.channel_session['prefix']
        message.reply_channel.send({
            'text': "%s: %s" % (prefix, message['text']),
        })
    except Exception:
        logger.info("ERROR: %s" % traceback.format_exc())

@channel_session_user_from_http
def ws_disconnect(message):
    logger.info("ws_disconnect: %s" % message.__dict__)
    message.reply_channel.send({
        'text': "%s" % "Sad to see you go :(",
    })
And then to test, I go into the JavaScript console on the same domain as my HTTP site, and type in:
> var socket = new WebSocket('ws://localhost:9090/')
> socket.onmessage = function(e) {console.log(e.data);}
> socket.send("Testing testing 123")
VM481:2 You said: Testing testing 123
And my local server log shows:
ws_connect: test@test.com
1) User: AnonymousUser
2) Channel session fields: {'_SessionBase__session_key': 'chnb79d91b43c6c9e1ca9a29856e00ab', 'modified': False, '_session_cache': {u'prefix': u'You said', u'_auth_user_hash': u'ca4cf77d8158689b2b6febf569244198b70d5531', u'_auth_user_backend': u'django.contrib.auth.backends.ModelBackend', u'_auth_user_id': u'1'}, 'accessed': True, 'model': <class 'django.contrib.sessions.models.Session'>, 'serializer': <class 'django.core.signing.JSONSerializer'>}
3) Anything at 'django_user' key? => False
4) ws_receive: test@test.com
Which, of course, makes no sense. A few questions:
Why would Django see message.user as an AnonymousUser but have the actual user id _auth_user_id=1 (this is my correct user ID) in the session?
I am running my local server (WSGI) on 8080 and daphne (ASGI) on 9090 (different ports). And I didn't include session_key=xxxx in my WebSocket connection - yet Django was able to read my browser's cookie for the correct user, test@test.com? According to the Channels docs, this shouldn't be possible.
Under my setup, what is the best / simplest way to carry out authentication with Django channels?
Note: This answer is explicit to channels 1.x, channels 2.x uses a different auth mechanism.
I had a hard time with Django channels too; I had to dig into the source code to better understand the docs...
Question 1:
The docs mention a long trail of decorators relying on each other (http_session, http_session_user, ...) that you can use to wrap your message consumers. In the middle of that trail, it states this:
Now, one thing to note is that you only get the detailed HTTP information during the connect message of a WebSocket connection (you can read more about that in the ASGI spec) - this means we’re not wasting bandwidth sending the same information over the wire needlessly.
This also means we’ll have to grab the user in the connection handler and then store it in the session;....
It's easy to get lost in all that; at least we both did...
You just have to remember that this is what happens when you use channel_session_user_from_http:
It calls http_session_user,
a. which calls http_session, which parses the message and gives us a message.http_session attribute;
b. upon returning from that call, it initializes message.user based on the information it got in message.http_session (this will bite you later).
It calls channel_session, which initializes a dummy session in message.channel_session and ties it to the message reply channel.
Now it calls transfer_user, which moves the http_session into the channel_session.
This happens during the connection handling of a websocket, so on subsequent messages you won't have access to detailed HTTP information. What happens after the connect is that you're calling channel_session_user_from_http again, which in this situation (post-connect messages) calls http_session_user, which attempts to read the HTTP information but fails, resulting in message.http_session being set to None and message.user being overridden to AnonymousUser.
That's why you need to use channel_session_user in this case.
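A minimal sketch of that fix, reusing the consumer names from the question (bodies elided):
from channels.auth import channel_session_user, channel_session_user_from_http

@channel_session_user_from_http
def ws_connect(message):
    # Full HTTP details exist only here; the decorator stores the user
    # in the channel session for later messages
    ...

@channel_session_user
def ws_receive(message):
    # message.user is restored from the channel session, not from HTTP
    ...

@channel_session_user
def ws_disconnect(message):
    ...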
Question 2:
Channels can use Django sessions either from cookies (if you’re running your websocket server on the same port as your main site, using something like Daphne), or from a session_key GET parameter, which works if you want to keep running your HTTP requests through a WSGI server and offload WebSockets to a second server process on another port.
Remember http_session, the decorator that gets us the message.http_session data? It appears that if it doesn't find a session_key GET parameter, it falls back to settings.SESSION_COOKIE_NAME, which is the regular sessionid cookie. So whether you provide session_key or not, you'll still get connected if you're logged in; of course, that happens only when your ASGI and WSGI servers are on the same domain (127.0.0.1 in this case), and the port difference doesn't matter.
I think the difference the docs are trying to communicate, but didn't expand on, is that you need to set up the session_key GET parameter when your ASGI and WSGI servers are on different domains, since cookies are restricted by domain, not port.
Due to that lack of explanation, I had to test running ASGI and WSGI on the same port and on different ports, and the result was the same: I was still getting authenticated. I changed one server's domain to 127.0.0.2 instead of 127.0.0.1 and the authentication was gone; I set the session_key GET parameter and the authentication was back again.
Update: a rectification of that docs paragraph was just pushed to the channels repo; it was meant to mention domain instead of port, as I described.
Question 3:
My answer is the same as turbotux's, but longer: you should use @channel_session_user_from_http on ws_connect and @channel_session_user on ws_receive and ws_disconnect. Nothing in what you showed suggests it won't work if you make that change. Maybe try removing http_user=True from your receive consumer? Even though I suspect it has no effect, since it's undocumented and intended only to be used by generic consumers...
Hope this helps!
To answer your first question you need to use the:
channel_session_user
decorator in the receive and disconnect calls.
channel_session_user_from_http
calls transfer_user during the connect method to transfer the HTTP session into the channel session. This way all future calls may access the channel session to retrieve user information.
To your second question: I believe what you are seeing is that the default websocket library passes the browser cookies over the connection.
Third, I think your setup will work quite well once you have changed the decorators.
I ran into this problem and found that it was due to a couple of issues; they might not be the cause of yours, but they might give you some insight. Keep in mind I am using rest framework. First, I was overriding the User model. Second, when I defined the application variable in my root routing.py, I didn't use my own auth middleware; I was using the AuthMiddlewareStack suggested by the docs. So, per the Channels docs, I defined my own custom authentication middleware, which takes my JWT value from the cookies, authenticates it, and assigns it to scope["user"], like so:
routing.py
from channels.routing import ProtocolTypeRouter, URLRouter
import app.routing
from .middleware import JsonTokenAuthMiddleware

application = ProtocolTypeRouter(
    {
        "websocket": JsonTokenAuthMiddleware(
            URLRouter(app.routing.websocket_urlpatterns)
        )
    }
)
middleware.py
from http import cookies
from django.contrib.auth.models import AnonymousUser
from django.db import close_old_connections
from rest_framework.authtoken.models import Token
from rest_framework_jwt.authentication import BaseJSONWebTokenAuthentication

class JsonWebTokenAuthenticationFromScope(BaseJSONWebTokenAuthentication):
    def get_jwt_value(self, scope):
        try:
            # Find the raw cookie header and pull the JWT value out of it
            cookie = next(x for x in scope["headers"]
                          if x[0].decode("utf-8") == "cookie")[1].decode("utf-8")
            return cookies.SimpleCookie(cookie)["JWT"].value
        except:
            return None

class JsonTokenAuthMiddleware(BaseJSONWebTokenAuthentication):
    def __init__(self, inner):
        self.inner = inner

    def __call__(self, scope):
        try:
            close_old_connections()
            user, jwt_value = \
                JsonWebTokenAuthenticationFromScope().authenticate(scope)
            scope["user"] = user
        except:
            scope["user"] = AnonymousUser()
        return self.inner(scope)
Hope this helps!
I'm trying to get websockets to work between two machines: one PC and one Raspberry Pi, to be exact.
On the PC I'm using socket.io as a client to connect to the server on the Raspberry Pi.
With the following code I initiate the connection and try to send predefined data:
var socket = io.connect(ip + ':8080');
socket.send('volumes', { data: data });
On the raspberry pi, the websocket server looks like this:
from tornado import web, ioloop
from sockjs.tornado import SockJSRouter, SockJSConnection

class EchoConnection(SockJSConnection):
    def on_message(self, msg):
        self.send(msg)

    def check_origin(self, origin):
        return True

if __name__ == '__main__':
    EchoRouter = SockJSRouter(EchoConnection, '/echo')
    app = web.Application(EchoRouter.urls)
    app.listen(8080)
    ioloop.IOLoop.instance().start()
But the connection is never established, and I don't know why. In the server log I get:
WARNING:tornado.access:404 GET /socket.io/1/?t=1412865634790 (192.168.0.16) 9.01ms
And in the Inspector on the PC there is this error message:
XMLHttpRequest cannot load http://192.168.0.10:8080/socket.io/1/?t=1412865634790. Origin sp://793b6d4588ead99e1780e35b71d24d1b285328f8.hue is not allowed by Access-Control-Allow-Origin.
I am out of ideas and don't know what to do. Can you help me?
Thank you!
Well, the solution for your problem has to do with the internal design of the sockjs-tornado library more than with the socket.io library.
Basically, your problem has to do with cross-origin requests: the HTML that generates the request to the websocket server is not at the same origin as the websocket server. I can see from your code that you already identified the problem (and you tried to solve it by redefining the method check_origin), but you didn't find the proper way to do it, basically because within this library SockJSConnection is not the class that extends tornado's WebSocketHandler, so redefining its check_origin is useless. If you dig a little bit into the code, you will see that there is a class, namely SockJSWebSocketHandler, that redefines that method itself; it relies on the tornado implementation if that returns true, but it also allows you to skip the check via a settings parameter:
class SockJSWebSocketHandler(websocket.WebSocketHandler):
    def check_origin(self, origin):
        # ...
        allow_origin = self.server.settings.get("websocket_allow_origin", "*")
        if allow_origin == "*":
            return True
So, to summarize, you just need to include the setting "websocket_allow_origin": "*" in the server settings and everything should work properly =D
if __name__ == '__main__':
    EchoRouter = SockJSRouter(EchoConnection, '/echo',
                              user_settings={"websocket_allow_origin": "*"})
I'm developing a Python service (class) for accessing a Redis server. I want to know how to check whether the Redis server is running or not, and also handle the case where I somehow can't connect to it.
Here is a part of my code
import redis
rs = redis.Redis("localhost")
print rs
It prints the following
<redis.client.Redis object at 0x120ba50>
even if my Redis server is not running.
I found that my Python code connects to the server only when I do a set() or get() on my redis instance.
So I don't want other services using my class to get an exception saying
redis.exceptions.ConnectionError: Error 111 connecting localhost:6379. Connection refused.
I want to return a proper message/error code. How can I do that?
If you want to test the redis connection once at startup, use the ping() command.
from redis import Redis
redis_host = '127.0.0.1'
r = Redis(redis_host, socket_connect_timeout=1) # short timeout for the test
r.ping()
print('connected to redis "{}"'.format(redis_host))
The ping() command checks the connection and raises an exception if it is invalid.
Note - the connection may still fail after you perform the test, so this is not going to cover up later timeout exceptions.
The official way to check redis server availability is ping (http://redis.io/topics/quickstart).
One solution is to subclass redis and do 2 things (sketched below):
check for a connection at instantiation
write an exception handler in the case of no connectivity when making requests
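A rough sketch of that subclass; the class and method names here are my own illustration, not part of redis-py:
import redis

class SafeRedis(redis.Redis):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.ping()  # fail fast at instantiation if the server is down

    def get_or_none(self, key):
        # Exception handler for requests made after instantiation
        try:
            return self.get(key)
        except redis.exceptions.ConnectionError:
            return None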
As you said, the connection to the Redis server is only established when you try to execute a command on it. If you do not want to charge ahead without checking that the server is available, you can send a random query to the server and check the response. Something like:
try:
    response = rs.client_list()
except redis.ConnectionError:
    # your error handling code here
    pass
There are already good solutions here, but here's my quick and dirty one for django_redis, which doesn't seem to include a ping function (though I'm using an older version of Django and can't use the newest django_redis).
# assuming rs is your redis connection
def is_redis_available():
    # ... get redis connection here, or pass it in. up to you.
    try:
        rs.get(None)  # getting None returns None or throws an exception
    except (redis.exceptions.ConnectionError,
            redis.exceptions.BusyLoadingError):
        return False
    return True
This seems to work just fine. Note that if redis is restarting and still loading the .rdb file that holds the cache entries on disk, it will throw a BusyLoadingError, though its base class is ConnectionError, so it's fine to just catch that.
You can also simply catch redis.exceptions.RedisError, which is the base class of all redis exceptions.
Another option, depending on your needs, is to create get and set functions that catch the ConnectionError exceptions when setting/getting values; see the sketch below. Then you can continue, or wait, or do whatever else you need (raise a new exception or just throw out a more useful error message).
This might not work well if you absolutely depend on setting/getting the cache values (for my purposes, if the cache is offline, we generally have to "keep going"), in which case it might make sense to let the exceptions happen, let the program/script die, and get the redis server/service back to a reachable state.
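A rough sketch of those wrappers; the names and the fallback behavior are my own placeholders:
import redis

rs = redis.Redis('localhost')

def safe_set(key, value):
    try:
        return rs.set(key, value)
    except redis.exceptions.ConnectionError:
        # Cache is offline: skip the write and keep going
        return None

def safe_get(key, default=None):
    try:
        return rs.get(key)
    except redis.exceptions.ConnectionError:
        # Cache is offline: fall back to a default value
        return default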
I have also come across a ConnectionRefusedError from the socket library when redis was not running, so I had to add that to the availability check.
import redis

r = redis.Redis(host='localhost', port=6379, db=0)

def is_redis_available(r):
    try:
        r.ping()
        print("Successfully connected to redis")
    except (redis.exceptions.ConnectionError, ConnectionRefusedError):
        print("Redis connection error!")
        return False
    return True

if is_redis_available(r):
    print("Yay!")
A Redis server connection can be checked by executing the ping command on the server.
>>> import redis
>>> r = redis.Redis(host="127.0.0.1", port="6379")
>>> r.ping()
True
Using the ping method, we can handle reconnection, etc. To know the reason for an error in connecting, exception handling can be used, as suggested in other answers.
try:
    is_connected = r.ping()
except redis.ConnectionError:
    # handle error
    pass
Use ping()
import redis
from redis import Redis

redis_host = 'localhost'
conn_pool = Redis(redis_host)
# Connection = Redis<ConnectionPool<Connection<host=localhost,port=6379,db=0>>>
try:
    conn_pool.ping()
    print('Successfully connected to redis')
except redis.exceptions.ConnectionError as r_con_error:
    print('Redis connection error')
    # handle exception