I'd like to run my Python scripts remotely through a web browser. On my LAN server I have installed Node.js. If I start these scripts from a browser on the server itself, everything works. If I try to launch a script from a remote machine I get no response, although the script does start on the server (a request without a response?). Setting up an nginx proxy on the server did not help either.
In short: the application is accessible only from localhost. I can't access it from a remote machine.
The Node.js index.js file is:
const express = require('express');
const { spawn } = require('child_process');

const app = express();
const port = 8811;

app.get('/script', (req, res) => {
  let data1 = '';
  const pythonOne = spawn('python3', ['script.py']);
  // stdout can arrive in several chunks, so append instead of overwriting
  pythonOne.stdout.on('data', (data) => {
    data1 += data.toString();
  });
  pythonOne.on('close', (code) => {
    console.log('code', code);
    res.send(data1);
  });
});

app.listen(port, () => console.log('node python script app listening on port ' + port));
My server is 192.168.1.161.
Using http://192.168.1.161:8811/script on the server, script.py starts fine.
Using http://192.168.1.161:8811/script from a remote machine, script.py starts on the server, but the browser gets no response.
I additionally installed nginx with this script.conf file:
upstream backend {
    server localhost:8811;
    keepalive 32;
}

server {
    listen 8500;
    server_name script.example.com;

    location / {
        client_max_body_size 50M;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_http_version 1.1;
        proxy_pass http://backend;
    }
}
Now:
Using http://192.168.1.161:8500/script on the server, script.py starts fine.
Using http://192.168.1.161:8500/script from a remote machine, script.py starts, but again only on the server, with no response.
Any answer will be appreciated.
Bind Node to all interfaces with 0.0.0.0:
app.listen(port, '0.0.0.0', () => console.log('node python script app listening on port ' + port));
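The difference between binding to 127.0.0.1 and 0.0.0.0 can be illustrated with a minimal Python sketch using plain sockets (no Express involved; port 0 just asks the OS for a free port):

```python
import socket

# Bind to all interfaces (0.0.0.0): remote machines on the LAN can
# connect. Binding to 127.0.0.1 would accept local connections only.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("0.0.0.0", 0))   # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

# A loopback connection still succeeds, since 0.0.0.0 covers it too.
client = socket.create_connection(("127.0.0.1", port))
conn, addr = server.accept()
conn.sendall(b"ok")
received = client.recv(2)
print(received)  # -> b'ok'

client.close(); conn.close(); server.close()
```

A socket bound to 0.0.0.0 accepts connections on every interface the machine has, including the LAN address 192.168.1.161, which is what the remote browser needs.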
Related
I am simply running a Flask app, not using nginx and uwsgi; my host is behind a load balancer.
I am trying to read all the request keys that could contain the IP address, but I am not getting the actual IP of the client.
X-Real-IP changes on every request, and X-Forwarded-For contains only one IP address, which is the load balancer's IP.
I have the same issue with Bottle. When I start the application directly with python app.py, I am not able to get the real IP address.
Is it mandatory to use uwsgi and nginx even for a sample app just to read the IP?
If I use the configuration below and forward the uwsgi_param values, I can read the list of IP addresses in the response.
Below is wsgi_file.ini:
[uwsgi]
socket = 127.0.0.1:8000
plugin = python
wsgi-file = app/app.py
processes = 3
callable = app
nginx.conf
server {
    listen 3000;
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    location / {
        uwsgi_pass 0.0.0.0:8000; # unix:///tmp/uwsgi.sock;
        include /etc/nginx/uwsgi_params;
        uwsgi_param X-Real-IP $remote_addr;
        uwsgi_param X-Forwarded-For $proxy_add_x_forwarded_for;
        uwsgi_param X-Forwarded-Proto $http_x_forwarded_proto;
    }
}
I started the nginx server and ran the application with the command:
uwsgi --ini wsgi_file.ini
The IP address of the client can be obtained in Flask with request.remote_addr.
Note that if you are using a reverse proxy, load balancer or any other intermediary between the client and the server, then this is going to return the IP address of the last intermediary, the one that sends requests directly into the Flask server. If the intermediaries include the X-Real-IP, X-Forwarded-For or Forwarded headers, then you can still figure out the IP address of the client.
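As a sketch of that last step (client_ip is a hypothetical helper, not a Flask API; header names follow the nginx config above), the usual approach is to walk X-Forwarded-For from the right and skip the hops you know are your own proxies:

```python
def client_ip(remote_addr, x_forwarded_for=None, trusted_proxies=()):
    """Best guess at the real client IP.

    remote_addr     -- what WSGI/Flask reports (the last hop)
    x_forwarded_for -- raw X-Forwarded-For header value, if any
    trusted_proxies -- addresses of your own load balancers/proxies
    """
    if not x_forwarded_for:
        return remote_addr
    # X-Forwarded-For is "client, proxy1, proxy2, ...": walk from the
    # right, skip hops we trust; the first untrusted one is the client.
    hops = [h.strip() for h in x_forwarded_for.split(",")]
    for hop in reversed(hops):
        if hop not in trusted_proxies:
            return hop
    return hops[0]

print(client_ip("10.0.0.5", "203.0.113.7, 10.0.0.5",
                trusted_proxies={"10.0.0.5"}))  # -> 203.0.113.7
```

Only trust this header when every hop in front of the app is under your control; a client can send a forged X-Forwarded-For of its own.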
I have a Flask app with bjoern as the Python server. Example URLs look like:
http://example.com/store/junihh
http://example.com/store/junihh/product-name
where "junihh" and "product-name" are arguments that I need to pass to Python.
I tried using a unix socket after reading about its performance advantage over TCP/IP. But now I get a 502 error in the browser.
This is a snippet of my conf:
upstream backend {
    # server localhost:1234;
    # server unix:/run/app_stores.sock weight=10 max_fails=3 fail_timeout=30s;
    server unix:/run/app_stores.sock;
}

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name example.com www.example.com;
    root /path/to/my/public;

    location ~ ^/store/(.*)$ {
        include /etc/nginx/conf.d/jh-proxy-pass.conf;
        include /etc/nginx/conf.d/jh-custom-headers.conf;
        proxy_pass http://backend/$1;
    }
}
How can I pass the URL arguments to Flask through nginx proxy_pass with a unix socket?
Thanks for any help.
Here is my conf; it works. The 502 appears when nginx cannot find a route to the upstream server (e.g. changing http://127.0.0.1:5000/$1 to http://localhost:5000/$1 caused a 502 for me).
nginx.conf
http {
    server {
        listen 80;
        server_name localhost;

        location ~ ^/store/(.*)$ {
            proxy_pass http://127.0.0.1:5000/$1;
        }
    }
}
flask app.py
#!/usr/bin/env python3
from flask import Flask

app = Flask(__name__)

@app.route('/')
def world():
    return 'world'

@app.route('/<name>/<pro>')
def shop(name, pro):
    return 'name: ' + name + ', prod: ' + pro

if __name__ == '__main__':
    app.run(debug=True)
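The nginx location above captures everything after /store/ and forwards it as the upstream path. The same rewrite can be sketched in plain Python; the regex mirrors the location ~ ^/store/(.*)$ pattern:

```python
import re

# Mirror of nginx's `location ~ ^/store/(.*)$` combined with
# `proxy_pass http://127.0.0.1:5000/$1;`: the capture group becomes
# the path that the Flask app actually sees.
def rewrite(path):
    m = re.match(r"^/store/(.*)$", path)
    if not m:
        return None  # nginx would fall through to another location
    return "/" + m.group(1)

print(rewrite("/store/junihh"))               # -> /junihh
print(rewrite("/store/junihh/product-name"))  # -> /junihh/product-name
```

So Flask never sees the /store/ prefix; its routes match against the rewritten path only.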
Update
Or you can use a unix socket like this, but it relies on uwsgi.
nginx.conf
http {
    server {
        listen 80;

        location /store {
            rewrite /store/(.+) $1 break;
            include uwsgi_params;
            uwsgi_pass unix:/tmp/store.sock;
        }
    }
}
flask app.py
same as above, unchanged
uwsgi config
[uwsgi]
module=app:app
plugins=python3
master=true
processes=1
socket=/tmp/store.sock
uid=nobody
gid=nobody
vacuum=true
die-on-term=true
Save as config.ini, then run uwsgi config.ini.
After an nginx reload, you can visit your page ;-)
I'm having some problems hosting a Django site on an Ubuntu AWS server. I have it all running fine on localhost.
I am following these instructions: https://github.com/ialbert/biostar-central/blob/master/docs/deploy.md
When I try to run it in the AWS console using:
waitress-serve --port 8080 live.deploy.simple_wsgi:application
I get an import error:
No module named simple_wsgi
Then if I use the base settings file (not the cut-down one), I get an import error:
No module named logger
I've tried moving the settings files around and copying them to deploy.env and deploy.py, and then to sample.env and sample.py, and I can't get it running. Please help.
I had the same issue and opened it in Biostar project.
Here is how I fixed it: by serving the application via gunicorn and nginx.
Here is my script:
cp live/staging.env live/deploy.env
cp live/staging.py live/deploy.py
# Replace the value of DJANGO_SETTINGS_MODULE to "live.deploy" thanks to http://stackoverflow.com/a/5955623/535203
sed -i -e '/DJANGO_SETTINGS_MODULE=/ s/=.*/=live.deploy/' live/deploy.env
[[ -n "$BIOSTAR_HOSTNAME" ]] && sed -i -e "/BIOSTAR_HOSTNAME=/ s/=.*/=$BIOSTAR_HOSTNAME/" live/deploy.env
source live/deploy.env
biostar.sh init import
# Nginx config based on http://docs.gunicorn.org/en/stable/deploy.html and https://github.com/ialbert/biostar-central/blob/production/conf/server/biostar.nginx.conf
tee "$LIVE_DIR/nginx.conf" <<EOF
worker_processes 1;
user nobody nogroup;
# 'user nobody nobody;' for systems with 'nobody' as a group instead
pid /tmp/nginx.pid;
error_log /tmp/nginx.error.log;

events {
    worker_connections 1024; # increase if you have lots of clients
    accept_mutex off;        # set to 'on' if nginx worker_processes > 1
    # 'use epoll;' to enable for Linux 2.6+
    # 'use kqueue;' to enable for FreeBSD, OSX
}

http {
    include /etc/nginx/mime.types;
    # fallback in case we can't determine a type
    default_type application/octet-stream;
    access_log /tmp/nginx.access.log combined;
    sendfile on;

    upstream app_server {
        # fail_timeout=0 means we always retry an upstream even if it failed
        # to return a good HTTP response

        # for UNIX domain socket setups
        server unix:/tmp/biostar.sock fail_timeout=0;
        # for a TCP configuration
        # server 192.168.0.7:8000 fail_timeout=0;
    }

    server {
        # if no Host match, close the connection to prevent host spoofing
        listen 8080 default_server;
        return 444;
    }

    server {
        # use 'listen 80 deferred;' for Linux
        # use 'listen 80 accept_filter=httpready;' for FreeBSD
        listen 8080;
        client_max_body_size 5M;

        # set the correct host(s) for your site
        server_name $SITE_DOMAIN;

        keepalive_timeout 25s;

        # path for static files
        root $LIVE_DIR/export/;

        location = /favicon.ico {
            alias $LIVE_DIR/export/static/favicon.ico;
        }
        location = /sitemap.xml {
            alias $LIVE_DIR/export/static/sitemap.xml;
        }
        location = /robots.txt {
            alias $LIVE_DIR/export/static/robots.txt;
        }

        location /static/ {
            autoindex on;
            expires max;
            add_header Pragma public;
            add_header Cache-Control "public";
            access_log off;
        }

        location / {
            # checks for static file, if not found proxy to app
            try_files \$uri @proxy_to_app;
        }

        location @proxy_to_app {
            proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
            # enable this if and only if you use HTTPS
            # proxy_set_header X-Forwarded-Proto https;
            proxy_set_header Host \$http_host;
            # we don't want nginx trying to do something clever with
            # redirects, we set the Host: header above already.
            proxy_redirect off;
            proxy_pass http://app_server;
        }
    }
}
EOF
gunicorn -b unix:/tmp/biostar.sock biostar.wsgi &
# Start Nginx in non daemon mode thanks to http://stackoverflow.com/a/28099946/535203
nginx -g 'daemon off;' -c "$LIVE_DIR/nginx.conf"
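gunicorn and nginx meet at the unix domain socket /tmp/biostar.sock. The handoff can be sketched in plain Python (the socket path here is a throwaway temp file, not the real one):

```python
import os
import socket
import tempfile

# A unix domain socket is a filesystem path, not an IP:port pair.
# gunicorn binds one end (-b unix:/tmp/biostar.sock) and nginx's
# upstream block connects to the other; this mimics that handshake.
path = os.path.join(tempfile.mkdtemp(), "biostar.sock")

server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(path)        # creates the socket file on disk
server.listen(1)
socket_file_exists = os.path.exists(path)
print(socket_file_exists)  # -> True

client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
client.connect(path)
conn, _ = server.accept()
conn.sendall(b"hello")
received = client.recv(5)
print(received)            # -> b'hello'

client.close(); conn.close(); server.close()
os.unlink(path)
```

Because the socket is a file, permissions matter: the nginx worker user (nobody/nogroup in the config above) must be able to read and write it, which is a common cause of 502s with this setup.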
I am trying to run Tornado on a multicore CPU, with each Tornado IOLoop process on a different core, and I'll use nginx to proxy-pass to the Tornado processes, following http://www.tornadoweb.org/en/stable/guide/running.html
Editing the actual configuration here for more details:
events {
    worker_connections 1024;
}

http {
    upstream chatserver {
        server 127.0.0.1:8888;
    }

    server {
        # Requires root access.
        listen 80;

        # WebSocket.
        location /chatsocket {
            proxy_pass http://chatserver;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }

        location / {
            proxy_pass http://chatserver;
        }
    }
}
Previously I was able to connect to the socket at ws://localhost:8888 from the client (when I was running python main.py), but now I can't connect. At the server, nginx seems to change the request to http, which I want to avoid. Access logs at the Tornado server:
WARNING:tornado.access:400 GET /search_image (127.0.0.1) 0.83ms
How can I make nginx communicate only via ws:// and not http://?
I figured out the issue; it was solved by overriding Tornado's check_origin method so that it returns True in all cases. Thank you all.
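For context, Tornado's WebSocket handshake rejects requests whose Origin header does not match the host it is serving, and behind a proxy the Origin a browser sends may no longer match the Host that Tornado sees. The comparison can be sketched in plain Python (same-origin logic only; this is not Tornado's actual implementation):

```python
from urllib.parse import urlparse

# Sketch of the same-origin test a WebSocket handshake performs:
# the browser sends an Origin header, the server compares its host
# with the Host header of the request. Behind an nginx proxy these
# can legitimately differ, which is one way the handshake fails.
def same_origin(origin_header, host_header):
    origin_host = urlparse(origin_header).netloc
    return origin_host == host_header

print(same_origin("http://localhost:8888", "localhost:8888"))  # -> True
# Through the proxy the client saw port 80, Tornado saw 8888:
print(same_origin("http://localhost", "localhost:8888"))       # -> False
```

Returning True unconditionally from check_origin disables this protection entirely, so it is safer to allow only the specific domains you serve.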
I have a server, aerv.nl. It has Django (a Python framework), but when I run the Django server it says:
server started at: http://127.0.0.1:8000/
How can I make the server run at http://www.aerv.nl/~filip/ (a real URL)?
You'll have to configure your HTTP server and Django. For example, if you're using Apache you'll need to go through this:
https://docs.djangoproject.com/en/1.9/howto/deployment/wsgi/modwsgi/
What you're doing here is setting up your server to handle HTTP requests through your Django app.
You will need to understand how DNS works, then use redirection and a proper server (like nginx, or Apache with e.g. gunicorn), not the Django development server, which shouldn't be used in production. There is no way to do what you ask with just ./manage.py runserver. All you can do is change the IP address and port, e.g. ./manage.py runserver 192.168.0.12:9999, so that other computers in your network can access your site on that specific IP and port.
Example
You are the owner of the domain example.com and have a server with IP address e.g. 5.130.2.19 where you want to serve your site.
You need to go to your domain provider and add an A record connecting the two: example.com -> 5.130.2.19.
Then on your server you set up a webserver, e.g. nginx, with a config like this for your particular server/site:
upstream upstream_server {
    server unix:/var/www/example/gunicorn.sock fail_timeout=10s;
}

server {
    listen 80;
    server_name example.com;
    client_max_body_size 4G;

    location /static/ {
        autoindex on;
        alias /var/www/example/django/static/;
    }

    location /media/ {
        autoindex on;
        alias /var/www/example/django/media/;
    }

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        if (!-f $request_filename) {
            proxy_pass http://upstream_server;
            break;
        }
    }
}
Then you would need to run gunicorn with something like:
gunicorn example.wsgi:application --bind=unix:/var/www/example/gunicorn.sock
That should be all, though it's of course very brief. Just substitute your URL for example.com. It is up to you whether this becomes a specific record in the nginx config (think of it as an entry point) or one of the routes specified in your Django project.
How does it work?
The user puts example.com into the address bar; the computer then asks the global DNS servers "What IP address does example.com point to?", and DNS replies "It's 5.130.2.19". The user's browser then sends an HTTP request to that IP, where nginx receives it and checks its config for an example.com handler. It finds one, pointing at unix:/var/www/example/gunicorn.sock, where gunicorn is running; gunicorn runs the Python Django project and produces what nginx serves as your website.
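The first step of that chain, the DNS lookup, can be sketched with Python's standard library (resolving localhost here, since example.com's real answer depends on your resolver):

```python
import socket

# Ask the resolver what IP a hostname points to -- the same question
# the browser asks before it can send the HTTP request to nginx.
ip = socket.gethostbyname("localhost")
print(ip)  # -> 127.0.0.1

# getaddrinfo gives the fuller answer (family, socket type, address),
# which is what real clients use under the hood.
infos = socket.getaddrinfo("localhost", 80, proto=socket.IPPROTO_TCP)
print(infos[0][4])  # e.g. ('127.0.0.1', 80)
```

For a public domain the same call would return whatever the A record at your domain provider says, which is exactly what the A record step above configures.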