Here is my setup: I have a Python webserver (written myself) that listens on port 80, and the Transmission daemon (a BitTorrent client) that provides a web UI on port 9101 (running on Linux).
I can access both webservers locally without problems, but now I would also like to access them externally. My issue is that I would prefer not to open extra ports on my firewall just to reach the Transmission web UI. Is it possible, within the Python webserver, to redirect some traffic to the appropriate port?
So for example:
http://mywebserver/index.html -> served by the Python webserver
http://mywebserver/transmission.html -> redirected to Transmission (which is currently http://localhost:9101)
Thanks
I found my answer: a reverse proxy. It will take care of routing to the correct port based on the URL. Now I just have to select the right one; there are so many (nginx, Pound, lighttpd, etc.).
Thanks anyway.
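For illustration, the same path-based routing can even be sketched in plain Python without a dedicated reverse proxy. This is a minimal sketch, assuming Transmission's web UI is reachable at localhost:9101; the path prefix and the placeholder page are illustrative, not from the original setup:

```python
# Minimal path-based reverse proxy sketch: requests under /transmission
# are forwarded to the local Transmission port, everything else is
# answered by this process itself.
import http.server
import urllib.request

UPSTREAM = "http://127.0.0.1:9101"  # assumption: Transmission's web UI

class Proxy(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/transmission"):
            # Forward the request upstream and relay the response body.
            with urllib.request.urlopen(UPSTREAM + self.path) as resp:
                body = resp.read()
        else:
            # Everything else is served by the Python webserver itself.
            body = b"served by the Python webserver"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    http.server.HTTPServer(("", 80), Proxy).serve_forever()
```

A real deployment would also forward headers, other HTTP methods, and error responses, which is exactly why a battle-tested reverse proxy like nginx is the better long-term choice.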
Return an HTTP response with a 3xx redirect status code (e.g. 301 or 302) and a Location header pointing at the target URL.
I am creating an online IDE for different languages. My approach is to spin up a Docker container from my Django app once a user runs their code, but the problem is: how do I expose the terminal of the Docker container to the internet and make it accessible via the browser? I am planning to use xterm.js for the frontend of the terminal but am unable to connect to it.
Any insight is appreciated.
You can use a reverse proxy with nginx to point to your localIP:<container_port>, and then configure it for your domain name.
For example:
location /some/path/ {
proxy_pass http://www.example.com/link/;
}
You can also use Nginx Proxy Manager.
Short answer:
Your application uses a port to access it (for example, 127.0.0.1:8080 or 192.168.1.100:3000).
127.0.0.1 means the computer you are using right now; 192.168.1.100 is a computer inside the 192.168.1.0 network (your LAN).
You must create (on your router) a port forward from the internet to your application.
For example, if your public IP address is 123.245.123.245, you need to pick an external port (e.g. 80) and map it to your internal address and port (e.g. 192.168.1.100:3000).
The URL 123.245.123.245:80 will then reach the website on 192.168.1.100:3000.
In the short term, this is the easiest solution.
For the long term, you should try using a reverse proxy.
It's a program that will (depending on the domain) route requests to sites inside your network, adding encryption to the requests where possible.
For security reasons, I want to know: can all of my endpoint addresses be accessed by anyone who knows the (home) address and port number?
Assuming you are running your application on a server or PC that can be accessed from the internet, and the port it is running on is opened - as opposed to running locally on your local network/PC - then yes, any client that knows (or guesses) your IP and the port on which the application is running can attempt to access any endpoint in your application.
Note that although the client will not have a full list of endpoints that can be accessed, a common attack vector is to repeatedly attempt to guess endpoints - for example /admin or /debug. Due to automation, it is practically guaranteed that if your server running the flask application is open to the internet, requests will be made to try to access endpoints by third-parties.
Due to this, it is essential to lock down any sensitive information behind security, be that IP white-listing, or by login mechanisms such as those provided by the flask-login module.
I have a Python REST service and I want to serve it over HTTP/2. My current setup is nginx -> Gunicorn. In other words, nginx (listening on ports 443 and 80, with 80 redirecting to 443) runs as a reverse proxy and forwards requests to Gunicorn (port 8000, no SSL). nginx is running in HTTP/2 mode, and I can verify that by using Chrome and inspecting the 'Protocol' column after sending a simple GET to the server. However, Gunicorn reports that the requests it receives are HTTP/1.0. Also, I couldn't find it in this list:
https://github.com/http2/http2-spec/wiki/Implementations
So, my questions are:
Is it possible to serve a Python (Flask) application with HTTP2? If yes, which servers support it?
In my case (one reverse proxy server and one serving the actual API), which server has to support HTTP2?
The reason I want to use HTTP/2 is that in some cases I need to perform thousands of requests at once, and I was interested to see whether HTTP/2's multiplexed requests could speed things up. With HTTP/1.0 and Python Requests as the client, each request takes ~80 ms, which is unacceptable. The other solution would be to bulk/batch my REST resources and fetch several of them with a single request. That idea sounds fine, but I am really interested to see if HTTP/2 could speed things up.
Finally, I should mention that for the client side I use Python Requests with the Hyper http2 adapter.
Is it possible to serve a Python (Flask) application with HTTP/2?
Yes, by the information you provide, you are doing it just fine.
In my case (one reverse proxy server and one serving the actual API), which server has to support HTTP2?
Now I'm going to tread on thin ice and give opinions.
The way HTTP/2 has been deployed so far is by having an edge server that talks HTTP/2 (like ShimmerCat or NginX). That server terminates TLS and HTTP/2, and from there on uses HTTP/1, HTTP/1.1 or FastCGI to talk to the inner application.
Can an edge server, at least theoretically, talk HTTP/2 to the inner web application? Yes, but HTTP/2 is complex, and for inner applications it doesn't pay off very well.
That's because most web application frameworks are built for handling requests for content, and that's done well enough with HTTP/1 or FastCGI. Although there are exceptions, web applications have little use for the subtleties of HTTP/2: multiplexing, prioritization, all the myriad of security precautions, and so on.
The resulting separation of concerns is in my opinion a good thing.
Your 80 ms response time may have little to do with the HTTP protocol you are using, but if those 80 ms are mostly spent waiting for input/output, then of course running things in parallel is a good thing.
Gunicorn will use a thread or a process to handle each request (unless you have gone the extra-mile to configure the greenlets backend), so consider if letting Gunicorn spawn thousands of tasks is viable in your case.
If the content of your requests allow it, maybe you can create temporary files and serve them with an HTTP/2 edge server.
It is now possible to serve HTTP/2 directly from a Python app, for example using Twisted. You asked specifically about a Flask app though, in which case I'd (with bias) recommend Quart which is the Flask API reimplemented on top of asyncio (with HTTP/2 support).
Your actual issue,
With HTTP1.0 and Python Requests as the client, each request takes ~80ms
suggests to me that the problem you may be experiencing is that each request opens a new connection. This could be alleviated via the use of a connection pool without requiring HTTP/2.
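To illustrate the pooling point: with a persistent HTTP/1.1 connection, many requests share one TCP connection instead of each paying the connection-setup cost. This is a self-contained sketch using a local stand-in server; a real client would typically use something like requests.Session instead:

```python
# Sketch of connection reuse: one persistent HTTP/1.1 connection serves
# several requests, so only the first pays the TCP (and, in real life,
# TLS) setup cost. The local server here is a stand-in for the real API.
import http.client
import http.server
import threading

class EchoHandler(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # required for keep-alive connections

    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep output quiet
        pass

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), EchoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# One connection object, several requests over the same socket.
conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
bodies = []
for _ in range(3):
    conn.request("GET", "/")
    bodies.append(conn.getresponse().read())
conn.close()
server.shutdown()
```

With the requests library, the equivalent is simply creating one `requests.Session()` and issuing all calls through it, which pools connections per host automatically.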
I'm developing a website using the Python Flask framework. I'm now doing some development work, pushing my changes to a remote dev server. I set this remote dev server up to serve the website publicly using app.run(host='0.0.0.0').
This works fine, but I just don't want other people to view my website yet. For this reason I want to whitelist my IP, so that the dev server serves the website only to my own IP address, giving no response, a 404, or some other non-useful response to other IP addresses. I could of course set up the server to serve the website through Apache or nginx, but I like the automatic reloading on code changes while I'm developing.
So does anybody know of a way to do this using the built in Flask dev server? All tips are welcome!
Using just the features of Flask, you could use a before_request() hook testing the request.remote_addr attribute:

from flask import abort, request

@app.before_request
def limit_remote_addr():
    if request.remote_addr != '10.20.30.40':
        abort(403)  # Forbidden
but using a firewall rule on the server is probably the safer and more robust option.
Note that remote_addr can be masked if there is a reverse proxy between the browser and your server; be careful how you limit this, and don't lock yourself out. If the proxy lives close to the server itself (like a load balancer or front-end cache), you can inspect the request.access_route list to get at the actual IP address. Only do this if remote_addr itself is a trusted IP address too:
trusted_proxies = ('42.42.42.42', '82.42.82.42', '127.0.0.1')

def limit_remote_addr():
    remote = request.remote_addr
    route = list(request.access_route)
    # Walk back through trusted proxies; stop if the route is exhausted.
    while remote in trusted_proxies and route:
        remote = route.pop()
    if remote != '10.20.30.40':
        abort(403)  # Forbidden
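The proxy-stripping loop above can be pulled out as a plain function so it can be exercised without a running Flask app. The IP addresses here are the illustrative ones from the answer, not real configuration:

```python
# Standalone version of the proxy-stripping logic: walk back through
# trusted proxies in the X-Forwarded-For chain (access_route) to find
# the originating client IP. All addresses are illustrative.
TRUSTED_PROXIES = ('42.42.42.42', '82.42.82.42', '127.0.0.1')

def real_client_ip(remote_addr, access_route):
    """Return the first address in the chain that is not a trusted proxy."""
    route = list(access_route)
    remote = remote_addr
    while remote in TRUSTED_PROXIES and route:
        remote = route.pop()
    return remote

# Direct hit, no proxy involved:
print(real_client_ip('10.20.30.40', []))             # 10.20.30.40
# Request relayed by a trusted local proxy:
print(real_client_ip('127.0.0.1', ['10.20.30.40']))  # 10.20.30.40
```

Werkzeug's access_route lists the client first and the nearest proxy last, which is why the loop pops from the end of the list.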
This iptables/Netfilter rule will do what you need, dropping all incoming traffic to port 80 EXCEPT the traffic originating from your_ip_address (note that --dport requires a protocol match, hence -p tcp):
$ /sbin/iptables -A INPUT -p tcp ! -s your_ip_address --dport 80 -j DROP
Here's something presented on many forums, which allows localhost traffic plus external access to your Flask app from your_ip_address, but rejects traffic to port 80 from all other IP addresses:
$ /sbin/iptables -A INPUT -i lo -j ACCEPT
$ /sbin/iptables -A INPUT -p tcp -s your_ip_address --dport 80 -j ACCEPT
$ /sbin/iptables -A INPUT -p tcp --dport 80 -j REJECT
Although you can easily achieve the expected result via Flask (as pointed out in the accepted answer), this kind of issue should be handled at the network layer of the operating system. Considering that you're using a *nix-like OS, you can deny/allow incoming connections using Netfilter via iptables, with rules like the ones above.
Incoming traffic/packets first pass through the kernel of your operating system. Denying/allowing traffic from any source to specific ports is a job for the operating system's firewall, at the network layer of its kernel. If you don't have a firewall running on your server, you should configure one.
Here's a takeaway:
Traffic should be handled at the network layer of your operating system. Do not let the application handle this task, at least not in a production environment. No one will do a better job at it than the kernel of your operating system (hoping that you're using a *nix-like OS). The Linux kernel and its modules (Netfilter) are much more reliable, competent, and effective at these kinds of tasks.
I found this very helpful, but there is an easier way to do this if you have multiple IP addresses.

from flask import abort, request

trusted_ips = ('42.42.42.42', '82.42.82.42', '127.0.0.1')

@app.before_request
def limit_remote_addr():
    if request.remote_addr not in trusted_ips:
        abort(404)  # Not Found
This will check your trusted IP list and return "404 Not Found" if the remote IP is not in it.
You can also block specific IPs by changing a few things:
bad_ips = ('42.42.42.42', '82.42.82.42', '127.0.0.1')

@app.before_request
def limit_remote_addr():
    if request.remote_addr in bad_ips:
        abort(404)  # Not Found

Same thing, but this blocks the IPs in your bad_ips list.
I'm running a temporary Django app on a host that has lots of IP addresses. When using manage.py runserver 0.0.0.0:5000, how can the code see which of the many IP addresses of the machine was the one actually hit by the request, if this is even possible?
Or to put it another way:
My host has IP addresses 10.0.0.1 and 10.0.0.2. When runserver is listening on 0.0.0.0, how can my application know whether the user hit http://10.0.0.1/app/path/etc or http://10.0.0.2/app/path/etc?
I understand that if I was doing it with Apache I could use the Apache environment variables like SERVER_ADDR, but I'm not using Apache.
Any thoughts?
EDIT
More information:
I'm testing a load balancer using a small Django app. This app is listening on a number of different IPs and I need to know which IP address is hit for a request coming through the load balancer, so I can ensure it is balancing properly.
I cannot use request.get_host() or the request.META options, as they return what the user typed to hit the load balancer.
For example: the user hits http://10.10.10.10/foo and that will forward the request to either http://10.0.0.1/foo or http://10.0.0.2/foo - but request.get_host() will return 10.10.10.10, not the actual IPs the server is listening on.
Thanks,
Ben
request.get_host()
https://docs.djangoproject.com/en/dev/ref/request-response/#django.http.HttpRequest.get_host
but be aware that this can be spoofed, so don't rely on it for security.
If users are seeing your machine under the same address, I am not sure this is possible via runserver (it is meant to be a simple development tool).
Maybe you could use nginx?
Or, if this is only for testing, do something like:

for i in 1 2 3 4 5; do ./manage.py runserver 10.0.0.$i:5000 & done

and then sys.argv[2] is your address.
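A hedged sketch of reading the bound address back out of the process arguments; it assumes the exact "manage.py runserver host:port" invocation shown above, so the helper and its heuristic are illustrative only:

```python
# Sketch: recover the address the dev server was started on from its own
# command line. Assumes the exact "manage.py runserver host:port" form.
import sys

def bound_host(argv):
    """Return the host part of the 'host:port' runserver argument, if any."""
    for arg in argv:
        if ':' in arg and not arg.startswith('-'):
            return arg.split(':', 1)[0]
    return None

print(bound_host(['manage.py', 'runserver', '10.0.0.3:5000']))  # 10.0.0.3
```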
If your goal is to ensure the load balancer is working correctly, I suppose it's not an absolute requirement to do this in the application code. You can use a network packet analyzer that can listen on a specific interface (say, tcpdump -i <interface>) and look at the output.