Python: Tornado server integration with NGINX

I am trying to run Tornado on a multicore CPU, with each Tornado IOLoop process on a different core, and NGINX proxying to the Tornado processes, following http://www.tornadoweb.org/en/stable/guide/running.html
Here is the actual configuration, for more detail:
events {
    worker_connections 1024;
}

http {
    upstream chatserver {
        server 127.0.0.1:8888;
    }

    server {
        # Requires root access.
        listen 80;

        # WebSocket.
        location /chatsocket {
            proxy_pass http://chatserver;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }

        location / {
            proxy_pass http://chatserver;
        }
    }
}
Previously I was able to connect to the socket at ws://localhost:8888 from the client (when I was running python main.py directly), but now I can't connect. At the server, NGINX seems to be changing the request to http://, which I want to avoid. Access logs at the Tornado server:
WARNING:tornado.access:400 GET /search_image (127.0.0.1) 0.83ms
How can I make nginx communicate only via ws://, not http://?

I figured out the issue: it was solved by overriding Tornado's check_origin method to return True in all cases. Thank you all.
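For reference, a minimal sketch of that override (the handler and route names are illustrative, not from the original code):

import tornado.ioloop
import tornado.web
import tornado.websocket

class ChatSocketHandler(tornado.websocket.WebSocketHandler):
    def check_origin(self, origin):
        # Accept WebSocket connections from any origin. This disables
        # Tornado's same-origin check, so consider restricting it to
        # known hosts instead of blindly returning True in production.
        return True

application = tornado.web.Application([
    (r"/chatsocket", ChatSocketHandler),
])

if __name__ == "__main__":
    application.listen(8888)
    tornado.ioloop.IOLoop.current().start()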

Related

Remote access to node.js

I'd like to run my Python scripts remotely using a web browser. On my LAN server I have installed Node.js. If I start these scripts from a web browser on the server itself, all is OK. If I try to launch a script from a remote machine, I see no response, but the script starts on the server (request without response?). Setting up an nginx proxy on the server didn't help either.
Summarizing: the application is accessible only from localhost, not from a remote machine.
Node.js index.js file is:
const express = require('express');
const { spawn } = require('child_process');

const app = express();
const port = 8811;

app.get('/script', (req, res) => {
    let data1 = '';
    const pythonOne = spawn('python3', ['script.py']);
    // Accumulate stdout chunks (a single assignment would keep only
    // the last chunk if the script prints more than one).
    pythonOne.stdout.on('data', function (data) {
        data1 += data.toString();
    });
    pythonOne.on('close', (code) => {
        console.log('code', code);
        res.send(data1);
    });
});

app.listen(port, () => console.log('node python script app listening on port ' + port));
My server is 192.168.1.161
Using http://192.168.1.161:8811/script on the server, script.py starts OK.
Using http://192.168.1.161:8811/script from a remote machine, script.py starts on the server, but no response reaches the remote client.
Additionally, I installed nginx with this script.conf file:
upstream backend {
    server localhost:8811;
    keepalive 32;
}

server {
    listen 8500;
    server_name script.example.com;

    location / {
        client_max_body_size 50M;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_http_version 1.1;
        proxy_pass http://backend;
    }
}
Now,
Using http://192.168.1.161:8500/script on the server, script.py starts OK.
Using http://192.168.1.161:8500/script from a remote machine, script.py starts on the server, but again no response reaches the remote client.
Any answer will be appreciated.
Bind Node to all interfaces, 0.0.0.0:
app.listen(port, '0.0.0.0', () => console.log('node python script app listening on port ' + port));
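Binding to 0.0.0.0 makes Express accept connections on all network interfaces instead of only loopback. Alternatively, you could keep the app bound to localhost and let nginx (which listens on all interfaces by default) be the only externally reachable entry point; either way, make sure no firewall on the server is blocking the chosen port.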

First request doesn't terminate (no FIN): uWSGI + nginx

I am using nginx as a reverse proxy in front of a uWSGI server (Flask apps).
Due to a memory leak, I use --max-requests to reload workers after a given number of calls.
The issue is the following: when a worker has just started or restarted, the first request it receives hangs between uWSGI and nginx. The processing time inside the Flask app is as quick as usual, but the client waits until uwsgi_send_timeout is triggered.
Using tcpdump to inspect the traffic (nginx is XXX.14 and uWSGI is XXX.11):
You can see in the time column that it hangs for 300 seconds (uwsgi_send_timeout) even though the HTTP request has been received by nginx. uWSGI just doesn't send a [FIN] packet to signal that the connection is closed, so nginx eventually triggers the timeout and closes the session.
The end client receives a truncated response, with a 200 status code, which is very frustrating.
This happens once at every worker reload, on the first request only, no matter how big the request is.
Does anyone have a workaround for this issue? Have I misconfigured something?
uwsgi.ini
[uwsgi]
# Get the location of the app
module = api:app
plugin = python3
socket = :8000
manage-script-name = true
mount = /=api:app
cache2 = name=xxx,items=1024
# Had to increase buffer-size because of big authentication requests.
buffer-size = 8192
## Workers management
# Number of workers
processes = $(UWSGI_PROCESSES)
master = true
# Number of requests managed by 1 worker before reloading (reload is time expensive)
max-requests = $(UWSGI_MAX_REQUESTS)
lazy-apps = true
single-interpreter = true
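For context, this config imports a module named api exposing a Flask object called app (module = api:app / mount = /=api:app). A minimal sketch of such a module, with a purely hypothetical route:

from flask import Flask

app = Flask(__name__)

# Hypothetical endpoint; only the module and attribute names (api, app)
# come from the uwsgi.ini above.
@app.route("/ping")
def ping():
    return "ok"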
nginx-server.conf
server {
    listen 443 ssl http2;
    client_max_body_size 50M;

    location /api {
        include uwsgi_params;
        uwsgi_pass api:8000;
        uwsgi_read_timeout 300;
        uwsgi_send_timeout 300;
    }
}
For some weird reason, adding the parameter uwsgi_buffering off; in the nginx config fixed the issue.
I still don't understand why but for now this fixes my issue. If anyone has a valid explanation, don't hesitate.
server {
    listen 443 ssl http2;
    client_max_body_size 50M;

    location /api {
        include uwsgi_params;
        uwsgi_pass api:8000;
        uwsgi_buffering off;
        uwsgi_read_timeout 300;
        uwsgi_send_timeout 300;
    }
}
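For context, uwsgi_buffering controls whether nginx reads the whole upstream response into its buffers before forwarding it; with uwsgi_buffering off, nginx relays the response to the client synchronously, as it arrives from uWSGI. That is consistent with the buffered path being the one that hangs here, though it doesn't explain why uWSGI withholds the FIN on the first request after a reload.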

Flask not able to get the real IP of the remote client

I am simply running a Flask app, without nginx or uwsgi; yes, my host is behind the load balancer.
I am trying to read all the keys that could carry the IP address, but I am not getting the actual IP of the client.
X-Real-IP changes on every request, and X-Forwarded-For contains only one IP address, which is the load balancer's IP.
Same issue with Bottle. When I start the application directly with python app.py, I am not able to get the real IP address.
Is it mandatory to use uwsgi and nginx for a sample app just to read the client IP?
If I use the configuration below and forward the uwsgi params, I can read the list of IP addresses in the response.
wsgi_file.ini:
[uwsgi]
socket = 127.0.0.1:8000
plugin = python
wsgi-file = app/app.py
processes = 3
callable = app
nginx.conf
server {
    listen 3000;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    location / {
        uwsgi_pass 0.0.0.0:8000; # unix:///tmp/uwsgi.sock;
        include /etc/nginx/uwsgi_params;
        uwsgi_param X-Real-IP $remote_addr;
        uwsgi_param X-Forwarded-For $proxy_add_x_forwarded_for;
        uwsgi_param X-Forwarded-Proto $http_x_forwarded_proto;
    }
}
I started the nginx server and ran the application using the command:
uwsgi --ini wsgi_file.ini
The IP address of the client can be obtained in Flask with request.remote_addr.
Note that if you are using a reverse proxy, load balancer or any other intermediary between the client and the server, then this is going to return the IP address of the last intermediary, the one that sends requests directly into the Flask server. If the intermediaries include the X-Real-IP, X-Forwarded-For or Forwarded headers, then you can still figure out the IP address of the client.
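As an illustration (not part of the original answer), with Flask behind exactly one trusted proxy you can apply Werkzeug's ProxyFix middleware so that request.remote_addr is taken from X-Forwarded-For:

from flask import Flask, jsonify, request
from werkzeug.middleware.proxy_fix import ProxyFix

app = Flask(__name__)

# Trust exactly one proxy hop for X-Forwarded-For and X-Forwarded-Proto;
# adjust x_for/x_proto to match your actual topology, and never trust
# more hops than you really have, since these headers are spoofable.
app.wsgi_app = ProxyFix(app.wsgi_app, x_for=1, x_proto=1)

@app.route("/ip")
def ip():
    # With ProxyFix applied, remote_addr reflects the forwarded client IP
    # rather than the proxy's own address.
    return jsonify(client_ip=request.remote_addr)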

Why is "nginx + tornado" setup taking longer to return results compared to just "tornado" setup?

I have an nginx in front of 5 tornado servers.
When I call one of my Tornado servers directly, the results are returned very fast.
But when I call nginx instead, it takes VERY long to return results. Checking the logs, I can see the request come in as OPTIONS at nginx, and at the selected Tornado server, almost immediately. But then it takes its own sweet time, after which I see the GET request in the logs and the response is returned. Why is there such a long delay between OPTIONS and GET? When calling Tornado directly, the OPTIONS and GET requests happen back to back very quickly. Do I need to change something in my nginx config file to make the performance better?
My nginx config looks like this:
worker_processes 1;
error_log logs/error.log;

events {
    worker_connections 1024;
}

http {
    # Enumerate all the Tornado servers here
    upstream frontends {
        server 127.0.0.1:5052;
        server 127.0.0.1:5053;
        server 127.0.0.1:5054;
        server 127.0.0.1:5055;
        server 127.0.0.1:5056;
    }

    include mime.types;
    default_type application/octet-stream;
    keepalive_timeout 65;
    sendfile on;

    server {
        listen 5050;
        server_name x;
        ssl on;
        ssl_certificate certificate.crt;
        ssl_certificate_key keyfile.key;

        location / {
            proxy_pass_header Server;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Scheme $scheme;
            proxy_pass https://frontends;
        }
    }
}
And my tornado files have this structure:
import tornado.httpserver
import tornado.ioloop
import tornado.web
from flasky import app
from tornado.wsgi import WSGIContainer
from tornado.ioloop import IOLoop
from tornado.web import FallbackHandler

# Wrap the Flask (WSGI) app so Tornado can serve it.
tr = WSGIContainer(app)

application = tornado.web.Application([
    (r".*", FallbackHandler, dict(fallback=tr)),
])

if __name__ == '__main__':
    http_server = tornado.httpserver.HTTPServer(application, ssl_options={
        "certfile": "certificate.crt",
        "keyfile": "keyfile.key",
    })
    http_server.listen(5056, address='127.0.0.1')
    IOLoop.instance().start()

Django on a URL other than localhost [duplicate]

This question already has answers here:
How to specify which eth interface Django test server should listen on? (3 answers)
Closed 7 years ago.
I have a server, aerv.nl. It has Django (a Python framework), but when I run the Django server it says:
server started at: http://127.0.0.1:8000/
How can I let the server run on http://www.aerv.nl/~filip/ (a real URL)?
You'll have to configure your HTTP server and Django. For example, if you're using Apache, you'll need to go through this:
https://docs.djangoproject.com/en/1.9/howto/deployment/wsgi/modwsgi/
What you're doing here is setting up your server to handle HTTP requests through your Django app.
You will need to understand how DNS works, then use redirection and a proper server (like nginx, or Apache with e.g. gunicorn), not the Django development server, which shouldn't be used in production. There is no way to do what you ask with just ./manage.py runserver. All you can do is change the IP address and port to something different, e.g. ./manage.py runserver 192.168.0.12:9999, so that other computers on your network can access your site on that specific IP and port.
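(For example, ./manage.py runserver 0.0.0.0:9999 makes the development server listen on all interfaces, which is the usual way to reach it from other machines on the LAN.)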
Example
You are the owner of the domain example.com, and you have a server where you want to serve your site, with IP address e.g. 5.130.2.19.
You need to go to your domain provider and add an A record which connects the two: example.com -> 5.130.2.19.
Then on your server you set up a webserver, e.g. nginx, and let it run with e.g. this config for your particular server/site:
server {
    listen 80;
    server_name example.com;
    client_max_body_size 4G;

    location /static/ {
        autoindex on;
        alias /var/www/example/django/static/;
    }

    location /media/ {
        autoindex on;
        alias /var/www/example/django/media/;
    }

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        if (!-f $request_filename) {
            proxy_pass http://upstream_server;
            break;
        }
    }
}

upstream upstream_server {
    server unix:/var/www/example/gunicorn.sock fail_timeout=10s;
}
Then you would need to run gunicorn with something like:
gunicorn example.wsgi:application --bind=unix:/var/www/example/gunicorn.sock
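Here example.wsgi refers to the wsgi.py module that django-admin startproject creates inside your project package; substitute your own project name.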
That should be all, but of course this is very brief. Just substitute your URL for example.com. It is up to you whether this becomes a specific entry in the nginx config (think of it as an entry point) or one of the routes specified in your Django project.
How does it work?
A user puts example.com into the address bar; the browser then asks the global DNS servers what IP address example.com points to, and DNS replies: it's 5.130.2.19. The user's browser then sends an HTTP request to that IP, where nginx receives it and looks in its config for an example.com handler. It finds one, saying it should pass requests to unix:/var/www/example/gunicorn.sock. There it finds a running gunicorn, which essentially translates your Python Django project into something nginx can present as your website.
