IOError: headers already sent - flask socketio with gevent_uwsgi - python

A Flask, uWSGI, nginx, and Mongo socket application.
It periodically produces these errors:
  File "/home/username/venv/local/lib/python2.7/site-packages/engineio/server.py", line 277, in handle_request
    start_response(r['status'], r['headers'] + cors_headers)
IOError: headers already sent

  File "/home/username/venv/local/lib/python2.7/site-packages/flask_socketio/__init__.py", line 562, in _handle_event
    app = self.server.environ[sid]['flask.app']
KeyError: 'flask.app'
All configs here:
uwsgi.ini
[uwsgi]
module = socket_app:app
chdir = /home/username/app_dir/
virtualenv = /home/username/venv/
touch-reload = /home/username/reload.txt
logdate = 1
logto = /home/username/logs/socket.log
procname = socket_username
log-maxsize = 204800
env = LANG=ru_RU.utf8
env = LC_ALL=ru_RU.utf8
env = LC_LANG=ru_RU.utf8
And extra args:
--http :3456 --gevent 100 --http-websockets
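Presumably these flags are appended to the ini-based invocation on the command line; a combined startup command would look roughly like this (the path to uwsgi.ini is an assumption, not shown in the question):

uwsgi --ini uwsgi.ini --http :3456 --gevent 100 --http-websockets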
Nginx socket proxy
server {
    listen 80;
    server_name domain;
    charset utf-8;
    client_max_body_size 5M;  # adjust to taste

    location /socket.io {
        proxy_pass http://upstream_name/socket.io;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
    }
}
Application init sources:
init.py (also used in another web app, without sockets)
from flask_openid import OpenID
from flask_mongoengine import MongoEngine
from flask import Flask
from flask_socketio import SocketIO
app = Flask(__name__)
own_socketio = SocketIO()
db = MongoEngine()
base_app.py (configures the application and the post-fork database connection)
# coding: utf-8
from __future__ import unicode_literals
from flask import Flask, g
from flask_mongoengine import MongoEngine, MongoEngineSessionInterface
from blueprint.monitor import monitor
from init import oid, app, db
from uwsgidecorators import postfork


def make_app():
    app.register_blueprint(monitor)
    app.config.from_pyfile('settings.py')
    # if not UWSGI_ALLOWED:
    #     db.init_app(app)
    oid.init_app(app)
    app.session_interface = MongoEngineSessionInterface(db)
    return app


@postfork
def setup_db():
    db.init_app(app)
And socket_app.py
from gevent.monkey import patch_all
patch_all()

from init import own_socketio
from init import app
from base_app import make_app

make_app()
own_socketio.init_app(app, async_mode='gevent_uwsgi',
                      message_queue=app.config['SOCKETIO_REDIS_URL'], cookie='session')

if __name__ == '__main__':
    own_socketio.run(app, port=3456)
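For reference, Flask-SocketIO's message_queue argument takes a connection URL when Redis is used, so the SOCKETIO_REDIS_URL entry in settings.py (not shown in the question) would presumably be something along these lines:

# settings.py -- hypothetical value; the real one is not included in the question
SOCKETIO_REDIS_URL = 'redis://127.0.0.1:6379/0'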

Related

Flask socket application deployment using nginx and gunicorn on Ubuntu 22.04 is not working; getting a 404 error from nginx

When I run the below code from a local Ubuntu machine, the socket connection is successfully established; however, when I deploy it to an Ubuntu server, it does not work.
This is the error message I received from Nginx.
- [28/Dec/2022:07:29:10 +0000] "GET /media HTTP/1.1" 404 153 "-" "Boost.Beast/266" "3.235.111.201"
The Linux server is running, checking the IP returns a 200, and the application is also running.
The code is in the file app.py:
import json
import logging
from flask import Flask, request, jsonify
from flask_sockets import Sockets
from gevent import pywsgi
from geventwebsocket.handler import WebSocketHandler
import os
from logging.handlers import RotatingFileHandler
from flask_cors import CORS

app = Flask(__name__)
CORS(app)
sockets = Sockets(app)


@app.route("/hello", methods=['GET', 'POST'])
def hello():
    print("testing")
    return jsonify({"message": "Success"})


@sockets.route('/test')
def test(ws):
    print(f"*****Test****{ws}")


@sockets.route('/media')
def echo(ws):
    print(f"Media WS: {ws}")
    while True:
        message = ws.receive()
        packet = json.loads(message)
        print(packet)


if __name__ == '__main__':
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    file_handler = RotatingFileHandler('log_data.log', maxBytes=10000, backupCount=2)
    file_handler.setFormatter(formatter)
    logging.basicConfig(handlers=[file_handler], level=logging.DEBUG)
    logger = logging.getLogger('log_data.log')
    app.logger.setLevel(logging.DEBUG)

    server = pywsgi.WSGIServer(('127.0.0.1', 5000), app, handler_class=WebSocketHandler)
    # server = pywsgi.WSGIServer(('', 5000), app, handler_class=WebSocketHandler)
    server.serve_forever()
From localhost, I run the command python3 app.py.
Here is the screenshot of the terminal and socket connection test result.
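The screenshot itself is not included here; for reference, a minimal local test against the /media route could look something like this sketch, which assumes the websocket-client package and a made-up payload:

# hypothetical test client for the /media websocket route above
# requires: pip install websocket-client
import json
from websocket import create_connection

ws = create_connection("ws://127.0.0.1:5000/media")      # connect to the local gevent server
ws.send(json.dumps({"event": "test", "data": "hello"}))  # echo() will json.loads and print this
ws.close()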
My nginx config is in file /etc/nginx/sites-available/test-socket and the content is:
server {
    listen 80;
    server_name _;

    access_log /var/log/nginx/access.log;

    location / {
        include proxy_params;
        proxy_http_version 1.1;
        proxy_buffering off;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_pass http://127.0.0.1:5000;
    }
}
I am running one systemd service, /etc/systemd/system/demo.service:
[Unit]
Description=Gunicorn instance to serve testApp
After=network.target
[Service]
User=root
Group=www-data
WorkingDirectory=/root/test-socket
Environment="PATH=/root/test-socket/myenv/bin"
ExecStart=/root/test-socket/myenv/bin/gunicorn --worker-class eventlet -w1 --workers 3 --bind 127.0.0.1:5000 app:app
[Install]
WantedBy=multi-user.target

Websockets not working with Django channels after deploying

I'm trying to deploy my Django project, but I've run into some difficulties.
Everything works perfectly on my local machine.
I'm using Django + nginx + uvicorn (run by supervisor). I also have my SSL certificate in use.
When I try to connect to the websocket (/ws) by loading the page and letting my js files run, I get this message in the console:
WebSocket connection to 'wss://example.com/ws/' failed
Here is my nginx config:
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

server {
    listen 80;
    server_name example.com;
    return 301 https://example.com$request_uri;
}

server {
    listen 443 ssl;
    ssl on;
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    server_name example.com;

    client_max_body_size 100M;

    gzip on;
    gzip_vary on;
    gzip_proxied any;
    gzip_http_version 1.1;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript;

    location /ws/ {
        proxy_pass https://uvicorn/ws;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_http_version 1.1;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_intercept_errors on;
        proxy_redirect off;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-NginX-Proxy true;
        proxy_ssl_session_reuse off;
    }

    location /static/ {
        root /root/server/social;
        expires 1d;
    }

    location /media/ {
        root /root/server/social;
        expires 1d;
    }

    location / {
        proxy_pass https://uvicorn;
        proxy_set_header Host $server_name;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

upstream uvicorn {
    server unix:/tmp/uvicorn.sock;
}
And the supervisor config
[program:django]
command = /root/server/social/venv/bin/python -m uvicorn myproject.asgi:application --uds /tmp/uvicorn.sock --ssl-keyfile=/etc/letsencrypt/live/mysite.com/privkey.pem --ssl-certfile=/etc/letsencrypt/live/mysite.com/fullchain.pem
directory = /root/server/social
stderr_logfile=/var/log/long.err.log
stdout_logfile=/var/log/long.out.log
autostart=true
autorestart=true
When everything is ready, I start nginx with service nginx restart. I get no errors.
After that I run service supervisor restart. And here is the thing: if no one uses the website, I get no errors in my log files (/var/log/long.err.log):
INFO: connection closed
INFO: Shutting down
INFO: Finished server process [606]
INFO: Started server process [660]
INFO: Waiting for application startup.
INFO: ASGI 'lifespan' protocol appears unsupported.
INFO: Application startup complete.
INFO: Uvicorn running on unix socket /tmp/uvicorn.sock (Press CTRL+C to quit)
But if someone uses the website and activates the websocket, I get:
  File "/root/server/social/venv/lib/python3.9/site-packages/websockets/legacy/protocol.py", line 1216, in write_close_frame
    await self.write_frame(True, OP_CLOSE, data, _state=State.CLOSING)
  File "/root/server/social/venv/lib/python3.9/site-packages/websockets/legacy/protocol.py", line 1189, in write_frame
    await self.drain()
  File "/root/server/social/venv/lib/python3.9/site-packages/websockets/legacy/protocol.py", line 1178, in drain
    await self.ensure_open()
  File "/root/server/social/venv/lib/python3.9/site-packages/websockets/legacy/protocol.py", line 921, in ensure_open
    raise self.connection_closed_exc()
websockets.exceptions.ConnectionClosedError: sent 1000 (OK); no close frame received
INFO: connection closed
Here is everything related to websockets in the project structure, if it's needed:
routing.py
from django.urls import re_path
from . import consumers

websocket_urlpatterns = [
    re_path(r"ws/messenger/", consumers.ChatConsumer.as_asgi()),
    re_path(r"ws/profile/", consumers.ProfileConsumer.as_asgi()),
    re_path(r"ws/like/", consumers.LikeConsumer.as_asgi())
]
asgi.py
import os

from django.core.asgi import get_asgi_application
from channels.routing import ProtocolTypeRouter, URLRouter
from channels.auth import AuthMiddlewareStack

import messenger.routing

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')

application = ProtocolTypeRouter({
    "http": get_asgi_application(),
    "websocket": AuthMiddlewareStack(
        URLRouter(messenger.routing.websocket_urlpatterns)
    )
})
consumers.py
import json
from channels.generic.websocket import WebsocketConsumer
from profile.models import User, Message, Chat, Publication
from asgiref.sync import async_to_sync
from datetime import datetime

group_members = []


class ChatConsumer(WebsocketConsumer):
    ...


class ProfileConsumer(WebsocketConsumer):
    ...


class LikeConsumer(WebsocketConsumer):
    ...

Flask HTTPS - Unable to deploy a flask server with SSL (using waitress and nginx)

I edited my question:
I was trying to change my flask server to a production-level server.
Unfortunately, I am failing to do so at the moment, even with the example flask app.
When connecting without HTTPS the site works fine, but when connecting with HTTPS I get a "This site can't be reached" error.
My nginx config:
server {
    listen 443 ssl;
    ssl_certificate path/cert.pem;
    ssl_certificate_key path/key.pem;
    server_name 127.0.0.1;

    location / {
        proxy_pass http://127.0.0.1:5000;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

server {
    listen 80;
    server_name 127.0.0.1;
    return 302 https://$server_name$request_uri;
}

server {
    listen 5000;
    server_name 127.0.0.1;
    return 302 https://$server_name$request_uri;
}
After editing my config, I used this command so that nginx would recognize the new config:
sudo ln -s /etc/nginx/sites-available/default.conf /etc/nginx/sites-enabled
My flask app:
from flask import Flask
from waitress import serve
import logging

app = Flask(__name__)


@app.route("/")
def hello():
    return "<h1 style='color:blue'> A very simple flask server !</h1>"


if __name__ == "__main__":
    # app.run(host='0.0.0.0', port=8080)
    logger = logging.getLogger('waitress')
    logger.setLevel(logging.INFO)
    serve(app, host='127.0.0.1', port=5000, url_scheme='https')
Thanks a lot in advance for all your help!

Flask - SocketIO hidden behind reverse proxy with Nginx

I want to integrate Flask-SocketIO with my Flask project. My app is running behind a Nginx reverse proxy:
location /plivo_api {
    rewrite ^/plivo_api(.*) /$1 break;
    proxy_pass http://127.0.0.1:8090;
}
So I understand that all traffic received at /plivo_api will be rewritten to "/" and proxied to port 8090 (e.g. /plivo_api/socket.io becomes /socket.io). This part works well.
The problem starts when I want to connect to the socket. A direct connection to the socket has no problem.
# all those examples work
# from localhost
var socket = io.connect()
var socket = io.connect('http://localhost:8090/')
# running the app out of the reverse proxy
var socket = io.connect('http://my_server:8090/')
But through Nginx I cannot connect:
# Bad Gateway
var socket = io.connect('http://my_server/plivo_api')
The question is: am I missing something needed to connect to my socketio app, or is there something extra to add to the Nginx config?
The flask app code with socketio integration looks like this (the flask app and socket.io code itself works well; the problem must be in the Nginx settings):
from flask import Flask, render_template
from flask_socketio import SocketIO, emit

HOST = '127.0.0.1'
PORT = 8090

app = Flask(__name__)
app.config['SECRET_KEY'] = 'secret'
app.config['DEBUG'] = True
app.config['SERVER_NAME'] = f'{HOST}:{PORT}'

socketio = SocketIO(app)


@app.route('/')
def index():
    return render_template('index.html')


if __name__ == '__main__':
    socketio.run(app, port=PORT, host=HOST)
You need to create a special location block in nginx for the Socket.IO endpoint. You can't use a regular URL like you do for your HTTP routes.
The documentation has an example:
server {
    listen 80;
    server_name _;

    location /socket.io {
        include proxy_params;
        proxy_http_version 1.1;
        proxy_buffering off;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_pass http://127.0.0.1:5000/socket.io;
    }
}
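Adapted to the /plivo_api prefix used in the question, a comparable block might look like the sketch below. This is an assumption, not a verified config; it presumes the Flask-SocketIO server on port 8090 keeps its default /socket.io path:

location /plivo_api/socket.io {
    include proxy_params;
    proxy_http_version 1.1;
    proxy_buffering off;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
    proxy_pass http://127.0.0.1:8090/socket.io;
}

The JavaScript client would then typically need its path option pointed at the prefixed endpoint (e.g. /plivo_api/socket.io) instead of appending the prefix to the connection URL as in io.connect('http://my_server/plivo_api').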

Why is "nginx + tornado" setup taking longer to return results compared to just "tornado" setup?

I have nginx in front of 5 Tornado servers.
When I call one of my Tornado servers directly, the results are returned very fast.
But when I call nginx instead, it takes VERY long to return results. Checking the logs, I can see that the request comes in as "OPTIONS" to nginx and to the selected Tornado server almost immediately. But then it takes its own sweet time, after which I see the "GET" request in the logs, and then the response is returned. Why is there such a long delay between OPTIONS and GET? When I call Tornado directly, the OPTIONS and GET requests happen back to back very quickly. Do I need to change something in my nginx config file to improve the performance?
My nginx config looks like this:
worker_processes 1;

error_log logs/error.;

events {
    worker_connections 1024;
}

http {
    # Enumerate all the Tornado servers here
    upstream frontends {
        server 127.0.0.1:5052;
        server 127.0.0.1:5053;
        server 127.0.0.1:5054;
        server 127.0.0.1:5055;
        server 127.0.0.1:5056;
    }

    include mime.types;
    default_type application/octet-stream;
    keepalive_timeout 65;
    sendfile on;

    server {
        listen 5050;
        server_name x;
        ssl on;
        ssl_certificate certificate.crt;
        ssl_certificate_key keyfile.key;

        location / {
            proxy_pass_header Server;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Scheme $scheme;
            proxy_pass https://frontends;
        }
    }
}
And my tornado files have this structure:
import tornado.httpserver
import tornado.ioloop
import tornado.web
from flasky import app
from tornado.wsgi import WSGIContainer
from tornado.ioloop import IOLoop
from tornado.web import FallbackHandler

tr = WSGIContainer(app)

application = tornado.web.Application([
    (r".*", FallbackHandler, dict(fallback=tr)),
])

if __name__ == '__main__':
    http_server = tornado.httpserver.HTTPServer(application, ssl_options={
        "certfile": "certificate.crt",
        "keyfile": "keyfile.key",
    })
    http_server.listen(5056, address='127.0.0.1')
    IOLoop.instance().start()
