I have a Flask server running in an Azure VM (Ubuntu 20.04) that is supposed to listen on http://127.0.0.1:5000, and an Angular app that serves as the frontend on 0.0.0.0:80. The Angular app is supposed to send GET/POST requests to the Flask server, but when I try to do so I get the error shown at [1] below.
I have CORS enabled for all domains on all routes in Flask.
And if I send the request using wget it works perfectly fine.
Here's how I'm sending my request from Angular:
this.http.post<ILoginResponse>('http://127.0.0.1:5000/login', {username: un, password: pswrd}).subscribe(data => {
  this.loginResponse.success = data.success;
  this.loginResponse.teamID = data.teamID;
})
With ILoginResponse being:
export interface ILoginResponse {
  success: boolean;
  teamID: string;
}
I have set rules in the Azure portal to allow connections on port 5000 and unlocked the port in the firewall in the VM itself. Running Flask with --host 0.0.0.0 does not help either.
Any idea of what could help or which direction I could look in?
[1]
scheduleTask # zone-evergreen.js:2845
scheduleTask # zone-evergreen.js:385
onScheduleTask # zone-evergreen.js:272
scheduleTask # zone-evergreen.js:378
scheduleTask # zone-evergreen.js:210
scheduleMacroTask # zone-evergreen.js:233
scheduleMacroTaskWithCurrentZone # zone-evergreen.js:1134
(anonymous) # zone-evergreen.js:2878
proto.<computed> # zone-evergreen.js:1449
(anonymous) # http.js:1785
_trySubscribe # Observable.js:42
subscribe # Observable.js:28
innerSubscribe # innerSubscribe.js:67
_innerSub # mergeMap.js:57
_tryNext # mergeMap.js:51
_next # mergeMap.js:34
next # Subscriber.js:49
(anonymous) # subscribeToArray.js:3
_trySubscribe # Observable.js:42
subscribe # Observable.js:28
call # mergeMap.js:19
subscribe # Observable.js:23
call # filter.js:13
subscribe # Observable.js:23
call # map.js:16
subscribe # Observable.js:23
checkCredentials # login.component.ts:63
login # login.component.ts:44
LoginComponent_Template_form_ngSubmit_6_listener # login.component.html:12
executeListenerWithErrorHandling # core.js:14994
wrapListenerIn_markDirtyAndPreventDefault # core.js:15029
schedulerFn # core.js:25687
__tryOrUnsub # Subscriber.js:183
next # Subscriber.js:122
_next # Subscriber.js:72
next # Subscriber.js:49
next # Subject.js:39
emit # core.js:25656
onSubmit # forms.js:5719
FormGroupDirective_submit_HostBindingHandler # forms.js:5774
executeListenerWithErrorHandling # core.js:14994
wrapListenerIn_markDirtyAndPreventDefault # core.js:15029
(anonymous) # platform-browser.js:582
invokeTask # zone-evergreen.js:399
onInvokeTask # core.js:28289
invokeTask # zone-evergreen.js:398
runTask # zone-evergreen.js:167
invokeTask # zone-evergreen.js:480
invokeTask # zone-evergreen.js:1621
globalZoneAwareCallback # zone-evergreen.js:1647
HttpErrorResponse {headers: HttpHeaders, status: 0, statusText: 'Unknown Error', url: 'http://127.0.0.1:5000/login', ok: false, …}
error: ProgressEvent {isTrusted: true, lengthComputable: false, loaded: 0, total: 0, type: 'error', …}
headers: HttpHeaders {normalizedNames: Map(0), lazyUpdate: null, headers: Map(0)}
message: "Http failure response for http://127.0.0.1:5000/login: 0 Unknown Error"
name: "HttpErrorResponse"
ok: false
status: 0
statusText: "Unknown Error"
url: "http://127.0.0.1:5000/login"
[[Prototype]]: HttpResponseBase
constructor: class HttpErrorResponse
[[Prototype]]: Object
Thank you furas. Posting your suggestion as an answer to help other community members.
The address 127.0.0.1 can only reach programs running on the same computer. But the Angular code runs in the user's browser, i.e. on the user's computer, so a request to 127.0.0.1 tries to access the user's own machine, not the server running Flask.
That is also why running Flask with --host 0.0.0.0 does not help on its own: the problem is the URL the browser requests, not the interface Flask binds to.
You can refer to the answer by nwillo for running the Flask app on an IPv4 address.
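A minimal sketch of the combined fix (assuming flask-cors is what enables CORS here; the handler logic and IDs are placeholders): bind Flask to all interfaces, then have Angular call the VM's public address instead of 127.0.0.1.

# server.py - a sketch, not the asker's actual app
from flask import Flask, jsonify
from flask_cors import CORS

app = Flask(__name__)
CORS(app)  # CORS enabled for all domains on all routes, as in the question

@app.route('/login', methods=['POST'])
def login():
    # placeholder handler; the real app validates the posted credentials
    return jsonify(success=True, teamID='team-1')

if __name__ == '__main__':
    # bind to all interfaces so the Azure NSG rule for port 5000 can reach Flask
    app.run(host='0.0.0.0', port=5000)

The Angular request then targets the VM's public address, e.g. this.http.post<ILoginResponse>('http://<vm-public-ip>:5000/login', ...), rather than 127.0.0.1.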
Related
I have a web app built in Flask where tweets are captured (using the Tweepy library) and displayed on the front-end. I used Socket.IO to display the tweets live on the front-end.
My code works fine when I run this locally. The tweets appear instantly.
However, when I Dockerized the web app, the front-end doesn't update immediately. It takes some time to show the changes (and sometimes I think tweets are lost due to the slowness).
Below are code extracts from my website:
fortsocket.js
$(document).ready(function () {

  /************************************/
  /*********** My Functions ***********/
  /************************************/

  function stream_active_setup() {
    $("#favicon").attr("href", "/static/icons/fortnite-active.png");
    $("#stream-status-ic").attr("src", "/static/icons/stream-active.png");
    $("#stream-status-text").text("Live stream active");
  }

  function stream_inactive_setup() {
    $("#favicon").attr("href", "/static/icons/fortnite-inactive.png");
    $("#stream-status-ic").attr("src", "/static/icons/stream-inactive.png");
    $("#stream-status-text").text("Live stream inactive");
  }

  /*********************************/
  /*********** My Events ***********/
  /*********************************/

  // Socket connection to server
  // Prometheus
  //var socket = io.connect('http://104.131.173.145:8083');
  // Local
  var socket = io.connect(window.location.protocol + '//' + document.domain + ':' + location.port);
  // Heroku
  //var socket = io.connect('https://fortweet.herokuapp.com/');

  // Send a hello to know
  // if a stream is already active
  socket.on('connect', () => {
    socket.emit('hello-stream', 'hello-stream');
  });

  // Listen for reply from hello
  socket.on('hello-reply', function (bool) {
    if (bool == true) {
      stream_active_setup()
    } else {
      stream_inactive_setup()
    }
  });

  // Listens for tweets
  socket.on('stream-results', function (results) {
    // Insert tweets in divs
    $('#live-tweet-container').prepend(`
      <div class="row justify-content-md-center mt-3">
        <div class="col-md-2">
          <img width="56px" height="56px" src="${results.profile_pic !== "" ? results.profile_pic : "/static/icons/profile-pic.png"}" class="mx-auto d-block rounded" alt="">
        </div>
        <div class="col-md-8 my-auto">
          <div><b>${results.author}</b></div>
          <div>${results.message}</div>
        </div>
      </div>
    `);
  });

  // Listener for when a stream of tweets starts
  socket.on('stream-started', function (bool) {
    if (bool == true) {
      stream_active_setup()
    }
  });

  // Listener for when a stream of tweets ends
  socket.on('stream-ended', function (bool) {
    if (bool == true) {
      stream_inactive_setup()
    }
  });
});
__init__.py
# Create the app
app = create_app()

# JWT Configurations
jwt = JWTManager(app)

# Socket IO
socketio = SocketIO(app, cors_allowed_origins="*")

# CORS
CORS(app)
app.config["CORS_HEADERS"] = "Content-Type"

# Creates default admins and inserts them in the db
create_default_admin()

# Main error handlers
@app.errorhandler(404)  # Handling HTTP 404 NOT FOUND
def page_not_found(e):
    return Err.ERROR_NOT_FOUND

# Listen for hello emit data
# from client
@socketio.on("hello-stream")
def is_stream_active(hello_stream):
    emit("hello-reply", streamer.StreamerInit.is_stream_active(), broadcast=True)
streamer.py
import time
import tweepy
import threading as Coroutine

import app.messages.constants as Const
import app.setup.settings as settings_mod
import app.models.tweet as tweet_mod
import app.services.logger as logger
import app


class FStreamListener(tweepy.StreamListener):
    def __init__(self):
        self.start_time = time.time()
        self.limit = settings_mod.TwitterSettings.get_instance().stream_time
        logger.get_logger().debug("Live capture has started")

        # Notify client that a live capture will start
        app.socketio.emit(
            "stream-started", True, broadcast=True,
        )

        super(FStreamListener, self).__init__()

    def on_status(self, status):
        if (time.time() - self.start_time) < self.limit:
            # Create tweet object
            forttweet = tweet_mod.TweetModel(
                status.source,
                status.user.name,
                status.user.profile_background_image_url_https,
                status.text,
                status.created_at,
                status.user.location,
            )

            # Emit to socket
            app.socketio.emit(
                "stream-results",
                {
                    "profile_pic": forttweet.profile_pic,
                    "author": forttweet.author,
                    "message": forttweet.message,
                },
                broadcast=True,
            )

            # Add to database
            forttweet.insert()
            return True
        else:
            logger.get_logger().debug("Live capture has ended")

            # Notify client that a live capture has ended
            app.socketio.emit(
                "stream-ended", True, broadcast=True,
            )

            # Stop the loop of streaming
            return False

    def on_error(self, status):
        logger.get_logger().debug(f"An error occurred while fetching tweets: {status}")
        raise Exception(f"An error occurred while fetching tweets: {status}")


class StreamerInit:
    # [Private] Twitter configurations
    def __twitterInstantiation(self):
        # Get settings instance
        settings = settings_mod.TwitterSettings.get_instance()

        # Auths
        auth = tweepy.OAuthHandler(settings.consumer_key, settings.consumer_secret,)
        auth.set_access_token(
            settings.access_token, settings.access_token_secret,
        )

        # Get API
        api = tweepy.API(auth)

        # Live Tweets Streaming
        myStreamListener = FStreamListener()
        myStream = tweepy.Stream(auth=api.auth, listener=myStreamListener)
        myStream.filter(track=settings.filters)

    def start(self):
        for coro in Coroutine.enumerate():
            if coro.name == Const.FLAG_TWEETS_LIVE_CAPTURE:
                return False
        stream = Coroutine.Thread(target=self.__twitterInstantiation)
        stream.setName(Const.FLAG_TWEETS_LIVE_CAPTURE)
        stream.start()
        return True

    @staticmethod
    def is_stream_active():
        for coro in Coroutine.enumerate():
            if coro.name == Const.FLAG_TWEETS_LIVE_CAPTURE:
                return True
        return False
streamer.py is invoked on a button click, roughly as in the sketch below.
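A hypothetical sketch of that hook (the route name and response shape are illustrative, not from the actual project):

# Hypothetical Flask route wiring the button click to the streamer
@app.route("/start-stream", methods=["POST"])
def start_stream():
    # start() returns False if a capture thread is already running
    started = streamer.StreamerInit().start()
    return {"started": started}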
Dockerfile
# Using Python 3.6.5 on Debian Stretch
FROM python:3.6.5-stretch
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
ADD . /app
RUN apt-get update -y && apt-get upgrade -y && pip install -r requirements.txt
# Run the command
ENTRYPOINT ["uwsgi", "app.ini"]
#ENTRYPOINT ["./entry.sh"]
docker-compose.yml
version: "3.8"

services:
  fortweet:
    container_name: fortweet
    image: mervin16/fortweet:dev
    build: ./
    env_file:
      - secret.env
    networks:
      plutusnet:
        ipv4_address: 172.16.0.10
    expose:
      - 8083
    restart: always

  nginx_fortweet:
    image: nginx
    container_name: nginx_fortweet
    ports:
      - "8083:80"
    networks:
      plutusnet:
        ipv4_address: 172.16.0.100
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/conf.d/default.conf
    depends_on:
      - fortweet
    restart: always

networks:
  plutusnet:
    name: plutus_network
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.16.0.0/24
          gateway: 172.16.0.1
app.ini
[uwsgi]
module = run:app
master = true
processes = 5
# Local & Prometheus
http-socket = 0.0.0.0:8083
http-websockets = true
chmod-socket = 660
vacuum = true
die-on-term = true
The full, updated code can be found here, under the branch dev/mervin.
Any help is appreciated.
To see whether IPv6 is responsible, I would suggest you shut everything down.
Open /etc/sysctl.conf and add the following lines to disable IPv6:
net.ipv6.conf.all.disable_ipv6=1
net.ipv6.conf.default.disable_ipv6=1
net.ipv6.conf.lo.disable_ipv6=1
Run sudo sysctl -p so the changes take effect.
Start nginx and Docker again.
If you don't see any difference, you can change the settings back to 0, rerun sysctl -p, and let me know.
Unfortunately I can't reproduce the issue without the configuration, so I can't verify my answer.
I was able to find a similar issue on JP's blog: Performance problems with Flask and Docker.
In short, it might be that having both IPv6 and IPv4 configs on the container is causing the issue.
To verify the issue:
Run the Docker container.
Go inside the running container and change the hosts file so that it no longer maps IPv6 to localhost.
Run the application inside the container again.
If the app runs smoothly, you've identified your issue.
The solution would be to tweak the uwsgi parameters.
What the author did in the blog post:
CMD uwsgi -s /tmp/uwsgi.sock -w project:app --chown-socket=www-data:www-data --enable-threads & nginx -g 'daemon off;'
I have a Keycloak server running in Docker (192.168.99.100:8080) and a Python flask-oidc Flask application running locally (localhost:5000). I am not able to access the protected REST API even after getting the access_token. Has anyone tried this code? If so, please help me with it. Thank you.
This is my Keycloak client, using the Docker jboss/keycloak image.
This is my new user under the new realm.
Below is my Flask application:
app.py
import os
import logging

from flask import Flask, g
from flask_oidc import OpenIDConnect
import requests

secret_key = os.urandom(24).hex()
print(secret_key)

logging.basicConfig(level=logging.DEBUG)

app = Flask(__name__)
app.config["OIDC_CLIENT_SECRETS"] = "client_secrets.json"
app.config["OIDC_COOKIE_SECURE"] = False
app.config["OIDC_SCOPES"] = ["openid", "email", "profile"]
app.config["SECRET_KEY"] = secret_key
app.config["TESTING"] = True
app.config["DEBUG"] = True
app.config["OIDC_ID_TOKEN_COOKIE_SECURE"] = False
app.config["OIDC_REQUIRED_VERIFIED_EMAIL"] = False
app.config["OIDC_INTROSPECTION_AUTH_METHOD"] = 'client_secret_post'
app.config["OIDC_USER_INFO_ENABLED"] = True

oidc = OpenIDConnect(app)

@app.route('/')
def hello_world():
    if oidc.user_loggedin:
        return ('Hello, %s, See private '
                'Log out') % \
            oidc.user_getfield('preferred_username')
    else:
        return 'Welcome anonymous, Log in'
client_secrets.json
{
  "web": {
    "issuer": "http://192.168.99.100:8080/auth/realms/kariga",
    "auth_uri": "http://192.168.99.100:8080/auth/realms/kariga/protocol/openid-connect/auth",
    "client_id": "flask-app",
    "client_secret": "eb11741d-3cb5-4457-8ff5-0202c6d6b250",
    "redirect_uris": [
      "http://localhost:5000/"
    ],
    "userinfo_uri": "http://192.168.99.100:8080/auth/realms/kariga/protocol/openid-connect/userinfo",
    "token_uri": "http://192.168.99.100:8080/auth/realms/kariga/protocol/openid-connect/token",
    "token_introspection_uri": "http://192.168.99.100:8080/auth/realms/kariga/protocol/openid-connect/token/introspect"
  }
}
When I launch the flask-app in a web browser and click on the Log in link, it prompts for the user details (the user created under my new realm). It takes a couple of seconds, then it redirects me to an error page:
http://localhost:5000/oidc_callback?state=eyJjc3JmX3Rva2VuIjogIkZZbEpqb3ZHblZoUkhEbmJsdXhEVW
that says
httplib2.socks.HTTPError
httplib2.socks.HTTPError: (504, b'Gateway Timeout')
It is also redirecting to /oidc_callback, which is not mentioned anywhere.
Any help would be appreciated.
The problem is occurring because the Keycloak server, which is running in Docker (192.168.99.100), is not able to reach the Flask application server, which is running locally (localhost).
It is better to run both as services in Docker by creating a docker-compose file.
I have a rather simple test app:
import redis
import os
import logging

log = logging.getLogger()
log.setLevel(logging.DEBUG)


def test_redis(event, context):
    redis_endpoint = None
    if "REDIS" in os.environ:
        redis_endpoint = os.environ["REDIS"]
        log.debug("redis: " + redis_endpoint)
    else:
        log.debug("cannot read REDIS config environment variable")
        return {
            'statusCode': 500
        }

    redis_conn = None
    try:
        redis_conn = redis.StrictRedis(host=redis_endpoint, port=6379, db=0)
        redis_conn.set("foo", "boo")
        redis_conn.get("foo")
    except:
        log.debug("failed to connect to redis")
        return {
            'statusCode': 500
        }
    finally:
        del redis_conn

    return {
        'statusCode': 200
    }
which I have deployed as an HTTP endpoint with serverless:
#
# For full config options, check the docs:
#   docs.serverless.com
#
service: XXX

plugins:
  - serverless-aws-documentation
  - serverless-python-requirements

custom:
  pythonRequirements:
    dockerizePip: true

provider:
  name: aws
  stage: dev
  region: eu-central-1
  runtime: python3.6
  environment:
    # our cache
    REDIS: xx-xx-redis-001.xxx.euc1.cache.amazonaws.com

functions:
  hello:
    handler: hello/hello_world.say_hello
    events:
      - http:
          path: hello
          method: get
          # private: true # <-- Requires clients to add API keys values in the `x-api-key` header of their request
          # authorizer: # <-- An AWS API Gateway custom authorizer function
  testRedis:
    handler: test_redis/test_redis.test_redis
    events:
      - http:
          path: test-redis
          method: get
When I trigger the endpoint via API Gateway, the Lambda just times out after about 7 seconds.
The environment variable is read properly; no error message is displayed.
I suppose there's a problem connecting to Redis, but the tutorials are quite explicit, so I'm not sure what the problem could be.
The problem might be the need to set up a NAT; I'm not sure how to accomplish this with serverless.
I ran into this issue as well. For me, there were a few problems that had to be ironed out:
The Lambda needs VPC permissions.
The ElastiCache security group needs an inbound rule from the Lambda security group that allows communication on the Redis port. I thought they could just be in the same security group.
And the real kicker: I had turned on encryption in transit. This meant that I needed to pass ssl=True to redis.StrictRedis(...). The redis-py page mentions that ssl_cert_reqs needs to be set to None for use with ElastiCache, but that didn't seem to be true in my case; I did, however, need to pass ssl=True.
It makes sense that ssl=True needed to be set, but the connection was just timing out, so I went round and round trying to figure out what the problem with the permissions/VPC/SG setup was. A sketch of the resulting connection call is below.
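A minimal sketch of that connection call (the endpoint is the placeholder from the serverless.yml above):

import redis

# connect to an ElastiCache Redis cluster with in-transit encryption enabled
redis_conn = redis.StrictRedis(
    host="xx-xx-redis-001.xxx.euc1.cache.amazonaws.com",  # placeholder endpoint
    port=6379,
    db=0,
    ssl=True,              # required once in-transit encryption is turned on
    # ssl_cert_reqs=None,  # reportedly needed for some ElastiCache setups
)
redis_conn.set("foo", "boo")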
Try having the Lambda in the same VPC and security group as your ElastiCache cluster.
Trying to get authentication working with Django Channels with a very simple WebSockets app that echoes back whatever the user sends over, with the prefix "You said: ".
My processes:
web: gunicorn myproject.wsgi --log-file=- --pythonpath ./myproject
realtime: daphne myproject.asgi:channel_layer --port 9090 --bind 0.0.0.0 -v 2
reatime_worker: python manage.py runworker -v 2
I run all processes when testing locally with heroku local -e .env -p 8080, but you could also run them all separately.
Note I have WSGI on localhost:8080 and ASGI on localhost:9090.
Routing and consumers:
### routing.py ###
from . import consumers

channel_routing = {
    'websocket.connect': consumers.ws_connect,
    'websocket.receive': consumers.ws_receive,
    'websocket.disconnect': consumers.ws_disconnect,
}
and
### consumers.py ###
import traceback

from django.contrib.auth.models import User  # needed for the lookup in ws_receive
from django.http import HttpResponse
from channels.handler import AsgiHandler
from channels import Group
from channels.sessions import channel_session
from channels.auth import channel_session_user, channel_session_user_from_http

from myproject import CustomLogger
logger = CustomLogger(__name__)


@channel_session_user_from_http
def ws_connect(message):
    logger.info("ws_connect: %s" % message.user.email)
    message.reply_channel.send({"accept": True})
    message.channel_session['prefix'] = "You said"
    # message.channel_session['django_user'] = message.user # tried doing this but it doesn't work...


@channel_session_user_from_http
def ws_receive(message, http_user=True):
    try:
        logger.info("1) User: %s" % message.user)
        logger.info("2) Channel session fields: %s" % message.channel_session.__dict__)
        logger.info("3) Anything at 'django_user' key? => %s" % (
            'django_user' in message.channel_session,))
        user = User.objects.get(pk=message.channel_session['_auth_user_id'])
        logger.info(None, "4) ws_receive: %s" % user.email)
        prefix = message.channel_session['prefix']
        message.reply_channel.send({
            'text': "%s: %s" % (prefix, message['text']),
        })
    except Exception:
        logger.info("ERROR: %s" % traceback.format_exc())


@channel_session_user_from_http
def ws_disconnect(message):
    logger.info("ws_disconnect: %s" % message.__dict__)
    message.reply_channel.send({
        'text': "%s" % "Sad to see you go :(",
    })
And then to test, I go into Javascript console on the same domain as my HTTP site, and type in:
> var socket = new WebSocket('ws://localhost:9090/')
> socket.onmessage = function(e) {console.log(e.data);}
> socket.send("Testing testing 123")
VM481:2 You said: Testing testing 123
And my local server log shows:
ws_connect: test@test.com
1) User: AnonymousUser
2) Channel session fields: {'_SessionBase__session_key': 'chnb79d91b43c6c9e1ca9a29856e00ab', 'modified': False, '_session_cache': {u'prefix': u'You said', u'_auth_user_hash': u'ca4cf77d8158689b2b6febf569244198b70d5531', u'_auth_user_backend': u'django.contrib.auth.backends.ModelBackend', u'_auth_user_id': u'1'}, 'accessed': True, 'model': <class 'django.contrib.sessions.models.Session'>, 'serializer': <class 'django.core.signing.JSONSerializer'>}
3) Anything at 'django_user' key? => False
4) ws_receive: test@test.com
Which, of course, makes no sense. A few questions:
Why would Django see message.user as an AnonymousUser but have the actual user id _auth_user_id=1 (this is my correct user ID) in the session?
I am running my local server (WSGI) on 8080 and daphne (ASGI) on 9090 (different ports). And I didn't include session_key=xxxx in my WebSocket connection - yet Django was able to read my browser's cookie for the correct user, test@test.com? According to the Channels docs, this shouldn't be possible.
Under my setup, what is the best / simplest way to carry out authentication with Django channels?
Note: This answer is specific to channels 1.x; channels 2.x uses a different auth mechanism.
I had a hard time with Django Channels too; I had to dig into the source code to better understand the docs...
Question 1:
The docs mention a kind of long trail of decorators relying on each other (http_session, http_session_user ...) that you can use to wrap your message consumers; in the middle of that trail they state this:
Now, one thing to note is that you only get the detailed HTTP information during the connect message of a WebSocket connection (you can read more about that in the ASGI spec) - this means we’re not wasting bandwidth sending the same information over the wire needlessly.
This also means we’ll have to grab the user in the connection handler and then store it in the session;....
It's easy to get lost in all that; at least we both did...
You just have to remember that this happens when you use channel_session_user_from_http:
1. It calls http_session_user,
a. which calls http_session, which parses the message and gives us a message.http_session attribute;
b. upon returning from the call, it initiates message.user based on the information it got in message.http_session (this will bite you later).
2. It calls channel_session, which initiates a dummy session in message.channel_session and ties it to the message reply channel.
3. Now it calls transfer_user, which moves the http_session into the channel_session.
This happens during the connection handling of a WebSocket, so on subsequent messages you won't have access to detailed HTTP information. What happens after the connect is that you're calling channel_session_user_from_http again, which in this situation (post-connect messages) calls http_session_user, which attempts to read the HTTP information but fails, setting message.http_session to None and overriding message.user with AnonymousUser.
That's why you need to use channel_session_user in this case, as in the sketch below.
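A minimal sketch of the corrected decorator usage for channels 1.x (handler bodies trimmed to the essentials):

from channels.auth import channel_session_user, channel_session_user_from_http

@channel_session_user_from_http
def ws_connect(message):
    # HTTP details exist only on connect; this decorator stores the
    # authenticated user in the channel session for later messages
    message.reply_channel.send({"accept": True})
    message.channel_session['prefix'] = "You said"

@channel_session_user  # restores message.user from the channel session
def ws_receive(message):
    message.reply_channel.send({
        'text': "%s: %s" % (message.channel_session['prefix'], message['text']),
    })

@channel_session_user
def ws_disconnect(message):
    message.reply_channel.send({'text': "Sad to see you go :("})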
Question 2:
Channels can use Django sessions either from cookies (if you’re running your websocket server on the same port as your main site, using something like Daphne), or from a session_key GET parameter, which works if you want to keep running your HTTP requests through a WSGI server and offload WebSockets to a second server process on another port.
Remember http_session, the decorator that gets us the message.http_session data? It appears that if it doesn't find a session_key GET parameter, it falls back to settings.SESSION_COOKIE_NAME, which is the regular sessionid cookie. So whether you provide session_key or not, you'll still get connected if you're logged in; of course, that happens only when your ASGI and WSGI servers are on the same domain (127.0.0.1 in this case). The port difference doesn't matter.
I think the difference the docs are trying to communicate, but didn't expand on, is that you need to set the session_key GET parameter when your ASGI and WSGI servers are on different domains, since cookies are restricted by domain, not port.
Because of that lack of explanation, I tested running ASGI and WSGI on the same port and on different ports, and the result was the same: I was still getting authenticated. I changed one server's domain to 127.0.0.2 instead of 127.0.0.1 and the authentication was gone; I set the session_key GET parameter and the authentication was back again.
Update: a rectification of the docs paragraph was just pushed to the channels repo; it was meant to say domain instead of port, as I mentioned.
Question 3:
My answer is the same as turbotux's, but longer: you should use @channel_session_user_from_http on ws_connect and @channel_session_user on ws_receive and ws_disconnect. Nothing in what you showed suggests it won't work if you make that change. Maybe also try removing http_user=True from your receive consumer? Even though I suspect it has no effect, since it's undocumented and intended only to be used by generic consumers...
Hope this helps!
To answer your first question, you need to use the
channel_session_user
decorator in the receive and disconnect calls.
channel_session_user_from_http
calls transfer_user during the connect method to transfer the HTTP session to the channel session. This way all future calls can access the channel session to retrieve user information.
To your second question, I believe what you are seeing is that the default WebSocket library passes the browser cookies over the connection.
Third, I think your setup will work quite well once you have changed the decorators.
I ran into this problem and found that it was due to a couple of issues that might be the cause. I'm not suggesting this will solve your issue, but it might give you some insight. Keep in mind I am using rest framework. First, I was overriding the User model. Second, when I defined the application variable in my root routing.py, I didn't use my own auth middleware; I was using the docs-suggested AuthMiddlewareStack. So, per the Channels docs, I defined my own custom authentication middleware, which takes my JWT value from the cookies, authenticates it, and assigns it to scope["user"], like so:
routing.py
from channels.routing import ProtocolTypeRouter, URLRouter
import app.routing
from .middleware import JsonTokenAuthMiddleware

application = ProtocolTypeRouter(
    {
        "websocket": JsonTokenAuthMiddleware(
            (URLRouter(app.routing.websocket_urlpatterns))
        )
    }
)
middleware.py
from http import cookies

from django.contrib.auth.models import AnonymousUser
from django.db import close_old_connections
from rest_framework.authtoken.models import Token
from rest_framework_jwt.authentication import BaseJSONWebTokenAuthentication


class JsonWebTokenAuthenticationFromScope(BaseJSONWebTokenAuthentication):
    def get_jwt_value(self, scope):
        try:
            cookie = next(x for x in scope["headers"] if x[0].decode("utf-8")
                          == "cookie")[1].decode("utf-8")
            return cookies.SimpleCookie(cookie)["JWT"].value
        except:
            return None


class JsonTokenAuthMiddleware(BaseJSONWebTokenAuthentication):
    def __init__(self, inner):
        self.inner = inner

    def __call__(self, scope):
        try:
            close_old_connections()
            user, jwt_value = JsonWebTokenAuthenticationFromScope().authenticate(scope)
            scope["user"] = user
        except:
            scope["user"] = AnonymousUser()
        return self.inner(scope)
Hope this helps!
Due to circumstances outside of my control, I need to use the Flask server to serve basic HTML files and the Flask-SocketIO wrapper to provide a WebSocket interface between any clients and the server. The async_mode has to be threading instead of gevent or eventlet; I understand that threading is less efficient, but I can't use the other two frameworks.
In my unit tests, I need to shut down and restart the WebSocket server. When I attempt to shut down the server, I get the RuntimeError 'Cannot stop unknown web server.' This is because the function werkzeug.server.shutdown is not found in the Flask request environment (flask.request.environ).
Here is how the server is started.
import flask
import flask_socketio

SERVER = flask.Flask(__name__)
WEBSOCKET = flask_socketio.SocketIO(SERVER, async_mode='threading')
WEBSOCKET.run(SERVER, host='127.0.0.1', port=7777)
Here is the short version of how I'm attempting to shut down the server.
client = WEBSOCKET.test_client(SERVER)

@WEBSOCKET.on('kill')
def killed():
    WEBSOCKET.stop()

try:
    client.emit('kill')
except:
    pass
The stop method must be called from within a Flask request context, hence the weird kill event callback. Inside the stop method, flask.request.environ contains:
'CONTENT_LENGTH' (40503696) = {str} '0'
'CONTENT_TYPE' (60436576) = {str} ''
'HTTP_HOST' (61595248) = {str} 'localhost'
'PATH_INFO' (60437104) = {str} '/socket.io'
'QUERY_STRING' (60327808) = {str} ''
'REQUEST_METHOD' (40503648) = {str} 'GET'
'SCRIPT_NAME' (60437296) = {str} ''
'SERVER_NAME' (61595296) = {str} 'localhost'
'SERVER_PORT' (61595392) = {str} '80'
'SERVER_PROTOCOL' (65284592) = {str} 'HTTP/1.1'
'flask.app' (65336784) = {Flask} <Flask 'server'>
'werkzeug.request' (60361056) = {Request} <Request 'http://localhost/socket.io' [GET]>
'wsgi.errors' (65338896) = {file} <open file '<stderr>', mode 'w' at 0x0000000001C92150>
'wsgi.input' (65338848) = {StringO} <cStringIO.StringO object at 0x00000000039902D0>
'wsgi.multiprocess' (65369288) = {bool} False
'wsgi.multithread' (65369232) = {bool} False
'wsgi.run_once' (65338944) = {bool} False
'wsgi.url_scheme' (65338800) = {str} 'http'
'wsgi.version' (65338752) = {tuple} <type 'tuple'>: (1, 0)
My question is: how do I set up the Flask server so that the werkzeug.server.shutdown method is available inside Flask request contexts?
Also, this is using Python 2.7.
I have good news for you: the testing environment does not use a real server. In that context the client and the server run inside the same process, so the communication between them does not go through the network as it does when you run things for real. In this situation there really is no server, so there's nothing to stop.
It seems you are starting a real server, though. For unit tests, that server is not used; all you need are your unit tests, which import the application and then use a test client to issue Socket.IO events. I think all you need to do is not start the server; the unit tests should run just fine without it if all you use is the test client, as you show above.
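A minimal sketch of that approach (the 'echo' event is illustrative, not from the original app): import the app, never call WEBSOCKET.run(), and drive everything through the test client.

import flask
import flask_socketio

SERVER = flask.Flask(__name__)
WEBSOCKET = flask_socketio.SocketIO(SERVER, async_mode='threading')

@WEBSOCKET.on('echo')  # illustrative handler
def echo(data):
    flask_socketio.emit('echo-reply', data)

# No WEBSOCKET.run() anywhere: the test client talks to the app in-process,
# so there is no real server to start or stop.
client = WEBSOCKET.test_client(SERVER)
client.emit('echo', 'Testing testing 123')
received = client.get_received()
assert received[0]['name'] == 'echo-reply'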