Flask's built-in server always 404 with SERVER_NAME set - python

Here is a minimal example:
from flask import Flask

app = Flask(__name__)
app.config['DEBUG'] = True
app.config['SERVER_NAME'] = 'myapp.dev:5000'

@app.route('/')
def hello_world():
    return 'Hello World!'

@app.errorhandler(404)
def not_found(error):
    print(str(error))
    return '404', 404

if __name__ == '__main__':
    app.run(debug=True)
If I set SERVER_NAME, Flask responds to every URL with a 404 error; when I comment that line out, it works correctly again.
/Users/sunqingyao/Envs/flask/bin/python3.6 /Users/sunqingyao/Projects/play-ground/python-playground/foo/foo.py
* Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
* Restarting with stat
* Debugger is active!
* Debugger PIN: 422-505-438
127.0.0.1 - - [30/Oct/2017 07:19:55] "GET / HTTP/1.1" 404 -
404 Not Found: The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.
Please note that this is not a duplicate of Flask 404 when using SERVER_NAME, since I'm not using Apache or any production web server. I'm just dealing with Flask's built-in development server.
I'm using Python 3.6.2, Flask 0.12.2, Werkzeug 0.12.2, PyCharm 2017.2.3 on macOS High Sierra, if it's relevant.

When you set SERVER_NAME, the Host header of the HTTP request must match it:
# curl http://127.0.0.1:5000/ -sv -H 'Host: myapp.dev:5000'
* Hostname was NOT found in DNS cache
* Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 5000 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.35.0
> Accept: */*
> Host: myapp.dev:5000
>
* HTTP 1.0, assume close after body
< HTTP/1.0 200 OK
< Content-Type: text/html; charset=utf-8
< Content-Length: 13
< Server: Werkzeug/0.14.1 Python/3.6.5
< Date: Thu, 14 Jun 2018 09:34:31 GMT
<
* Closing connection 0
Hello, World!
If you use a web browser, access the app via http://myapp.dev:5000/ and add a matching entry to your /etc/hosts file.
This is like an nginx vhost: the Host header is used for routing.
SERVER_NAME is mainly used for the route map.
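A quick way to see this route-map effect without curl is Flask's test client, which uses SERVER_NAME as its default Host header. This is a minimal sketch using the hostnames from the question; on Flask 0.12 a request whose Host header differs from SERVER_NAME gets the 404 described above, while a matching Host gets 200:

```python
from flask import Flask

app = Flask(__name__)
app.config['SERVER_NAME'] = 'myapp.dev:5000'

@app.route('/')
def hello_world():
    return 'Hello World!'

client = app.test_client()
# The test client sends Host: myapp.dev:5000 by default when
# SERVER_NAME is set, so the route map matches.
resp = client.get('/')
print(resp.status_code, resp.get_data(as_text=True))  # 200 Hello World!
```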
You should set the host and port by hand:
app.run(host="0.0.0.0", port=5000)
If you don't set the host/port but do set SERVER_NAME, it may still appear to work, because app.run() contains this logic:
def run(self, host=None, port=None, debug=None,
        load_dotenv=True, **options):
    ...
    _host = '127.0.0.1'
    _port = 5000
    server_name = self.config.get('SERVER_NAME')
    sn_host, sn_port = None, None

    if server_name:
        sn_host, _, sn_port = server_name.partition(':')

    host = host or sn_host or _host
    port = int(port or sn_port or _port)
    ...
    try:
        run_simple(host, port, self, **options)
    finally:
        self._got_first_request = False
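The fallback chain above can be sketched as a standalone function (the function name and defaults here are illustrative, not Flask's API):

```python
def resolve_bind(host=None, port=None, server_name='myapp.dev:5000'):
    """Mimic app.run()'s host/port fallback: explicit argument,
    then SERVER_NAME, then the built-in defaults."""
    _host, _port = '127.0.0.1', 5000
    sn_host, _, sn_port = (server_name or '').partition(':')
    return (host or sn_host or _host), int(port or sn_port or _port)

print(resolve_bind())                           # ('myapp.dev', 5000)
print(resolve_bind(host='0.0.0.0', port=8080))  # ('0.0.0.0', 8080)
```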
Finally, don't use SERVER_NAME to set the host and port that app.run() uses, unless you understand its impact on the route map.

Sometimes I find Flask's docs confusing (see the quotes by @dm295 - the implications surrounding 'SERVER_NAME' are hard to parse). But an alternative setup, inspired by @Dancer Phd's answer, is to specify 'HOST' and 'PORT' parameters in a config file instead of 'SERVER_NAME'.
For example, if you use the config strategy proposed in the Flask docs, add the host and port number like so:
class Config(object):
    DEBUG = False
    TESTING = False
    DATABASE_URI = 'sqlite://:memory:'
    HOST = 'localhost'
    PORT = 5000

class ProductionConfig(Config):
    DATABASE_URI = 'mysql://user@localhost/foo'

class DevelopmentConfig(Config):
    DEBUG = True

class TestingConfig(Config):
    TESTING = True
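Those HOST/PORT values can then be fed to app.run() explicitly. A minimal sketch (the config class names mirror the snippet above; the values are illustrative):

```python
from flask import Flask

class Config(object):
    DEBUG = False
    HOST = 'localhost'   # assumed bind address
    PORT = 5000

class DevelopmentConfig(Config):
    DEBUG = True

app = Flask(__name__)
app.config.from_object(DevelopmentConfig)
print(app.config['HOST'], app.config['PORT'], app.config['DEBUG'])  # localhost 5000 True

# Then bind explicitly from config instead of via SERVER_NAME:
#   app.run(host=app.config['HOST'], port=app.config['PORT'],
#           debug=app.config['DEBUG'])
```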

From Flask docs:
the name and port number of the server. Required for subdomain support
(e.g.: 'myapp.dev:5000') Note that localhost does not support
subdomains so setting this to “localhost” does not help. Setting a
SERVER_NAME also by default enables URL generation without a request
context but with an application context.
and
More on SERVER_NAME
The SERVER_NAME key is used for the subdomain
support. Because Flask cannot guess the subdomain part without the
knowledge of the actual server name, this is required if you want to
work with subdomains. This is also used for the session cookie.
Please keep in mind that not only Flask has the problem of not knowing
what subdomains are, your web browser does as well. Most modern web
browsers will not allow cross-subdomain cookies to be set on a server
name without dots in it. So if your server name is 'localhost' you
will not be able to set a cookie for 'localhost' and every subdomain
of it. Please choose a different server name in that case, like
'myapplication.local' and add this name + the subdomains you want to
use into your host config or setup a local bind.
It looks like there's no point to setting it to localhost. As suggested in the docs, try something like myapp.dev:5000.
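For local development that means pointing the name at your own machine with a hosts entry (assuming the myapp.dev name from the question):

```
# /etc/hosts
127.0.0.1   myapp.dev
```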

You can also just pass the port number and host to app.run, like:
app.run(debug=True, port=5000, host="localhost")
and delete:
app.config['DEBUG'] = True
app.config['SERVER_NAME'] = 'myapp.dev:5000'

Using debug=True worked for me:
from flask import Flask

app = Flask(__name__)
app.config['SERVER_NAME'] = 'localhost:5000'

@app.route('/')
def hello_world():
    return 'Hello World!'

if __name__ == '__main__':
    app.run(debug=True)

Related

Timeout with Flask/uWSGI/nginx app using mongodb

I have a Flask python web app on uWSGI/nginx that works fine, except when I use pymongo, specifically when I initialize the MongoClient class. I get the following nginx error when I try to access the app while using pymongo:
2019/02/19 21:58:13 [error] 16699#0: *5 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 127.0.0.1, server: example.com, request: "GET /api/test HTTP/1.1", upstream: "uwsgi://unix:/var/www/html/myapp/myapp.sock:", host: "example.com"
My small test app:
from flask import Flask
from flask_cors import CORS
from bson.json_util import dumps
import pymongo

DEBUG = True
app = Flask(__name__)
app.config.from_object(__name__)
CORS(app)

client = pymongo.MongoClient() # This line
db = client.myapp

@app.route('/api/test')
def test():
    item = db.items.find_one()
    return item['name']

def create_app(app_name='MYAPP'):
    return app

# if __name__ == '__main__':
#     app.run(debug=True, threaded=True, host='0.0.0.0')
If I run this app from the command line (python app.py) it works fine accessing 0.0.0.0:5000/api/test, so I'm pretty sure it's just a uWSGI configuration issue. My first thought was to increase the uwsgi_read_timeout parameter in my nginx config file:
uwsgi_read_timeout 3600
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    server_name example.com www.example.com;

    location /api {
        include uwsgi_params;
        uwsgi_read_timeout 3600;
        uwsgi_pass unix:/var/www/html/myapp/myapp.sock;
    }

    location / {
        root /var/www/html/myapp;
        try_files $uri $uri/ /index.html;
    }

    #return 301 https://$server_name$request_uri;
}
But it had no apparent effect. My uWSGI app is running as a service, using the following config (myapp.ini):
[uwsgi]
module = wsgi:app
master = true
processes = 4
enable-threads = True
socket = /var/www/html/myapp/myapp.sock
chmod-socket = 660
vacuum = true
die-on-term = true
Again, everything seems to work fine except for when I try to initialize pymongo. Finally, my app's service file:
[Unit]
Description=uWSGI Python container server
After=network.target
[Service]
User=pi
Group=www-data
WorkingDirectory=/var/www/html/myapp
ExecStart=/usr/bin/uwsgi --ini /etc/uwsgi/apps-available/myapp.ini
[Install]
WantedBy=multi-user.target
I believe the issue is that you're forking and this causes issues with PyMongo.
PyMongo is thread-safe but not fork-safe. Once you run the app in daemon mode you are forking the process, so you'll have to create the MongoClient inside the app, after the process has started, so that your threads can see it.
You can try this (I didn't test it myself; I normally wrap stuff like this in a class and do it in the __init__ method):
def create_app(app_name='MYAPP'):
    app.client = pymongo.MongoClient(connect=False)  # defers connecting until first use
    app.db = app.client.myapp
    return app
Read this: http://api.mongodb.com/python/current/faq.html#id3
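The class-based wrapper mentioned above amounts to lazy initialization: defer creating the client until first use, i.e. until after uWSGI has forked its workers. A sketch of that pattern, with the client factory injected so it runs without a MongoDB server (class and variable names are illustrative):

```python
class LazyClient(object):
    """Create the underlying client on first access, after the fork.

    In real use, factory would be something like:
        lambda: pymongo.MongoClient(connect=False)
    """
    def __init__(self, factory):
        self._factory = factory
        self._client = None

    @property
    def client(self):
        if self._client is None:
            # First access happens inside the worker process.
            self._client = self._factory()
        return self._client

# Stand-in factory so the sketch runs without pymongo installed:
lazy = LazyClient(lambda: 'client-created-in-worker')
print(lazy.client)  # client-created-in-worker
```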

Cannot access Flask with apache

I have a Linux server on which I am running my Flask app like this:
flask run --host=0.0.0.0
Inside the server I can access it like this:
curl http://0.0.0.0:5000/photo (and I am getting a valid response)
However, when I am trying to access it outside the server:
http://my_ip:5000/photo - the connection is refused.
The same IP will return an image saved in public_html, with apache2 configured:
http://my_ip/public_html/apple-touch-icon-144x144-precomposed.png
I use this simple snippet to get the ip-address from the interface
import socket

def get_ip_address():
    """ get ip-address of interface being used """
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.connect(("8.8.8.8", 80))
    return s.getsockname()[0]

IP = get_ip_address()
And in main:
if __name__ == '__main__':
    app.run(host=IP, port=PORT, debug=False)
And running:
./app.py
* Running on http://10.2.0.41:1443/ (Press CTRL+C to quit)
I have a suspicion you have a firewall on your Linux machine that is blocking port 5000.
Solution 1:
Open the relevant port on your firewall.
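To check from another machine whether the port is actually reachable, a small TCP probe is enough (a sketch; replace the address with your server's external IP):

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Refused, filtered, or timed out.
        return False

# Hypothetical address; use your server's IP and Flask port here.
print(port_open('127.0.0.1', 5000))
```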
Solution 2:
I would suggest installing nginx as a web proxy and configuring it so that http://my_ip/photo forwards traffic to and from http://127.0.0.1:5000/photo:
server {
    listen 80;

    location /photo {
        proxy_pass http://127.0.0.1:5000/photo;
    }
}

upstream timed out error in Python Flask web app using Fabric on nginx

I have my Python Flask web app hosted on nginx. While trying to execute a request it shows a timeout error in the nginx error log as shown below :
[error] 2084#0: *1 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.2.224, server: 192.168.2.131, request: "POST /execute HTTP/1.1", upstream: "uwsgi://unix:/home/jay/PythonFlaskApp/app.sock", host: "192.168.2.131:9000", referrer: "http://192.168.2.131:9000/"
If I try to run the app locally it works fine and responds fine.
Anyone have any idea what might be wrong?
The error shown in the browser console is:
Gateway Time-out
Here is the nginx config file:
server {
    listen 9000;
    server_name 192.168.2.131;

    location / {
        include uwsgi_params;
        proxy_read_timeout 300;
        uwsgi_pass unix:/home/jay/PythonFlaskApp/app.sock;
    }
}
And here is the Python Fabric code that I am trying to execute. I'm not sure if this is causing the issue, but here is the code anyway:
from fabric.api import *

@application.route("/execute", methods=['POST'])
def execute():
    try:
        machineInfo = request.json['info']
        ip = machineInfo['ip']
        username = machineInfo['username']
        password = machineInfo['password']
        command = machineInfo['command']
        isRoot = machineInfo['isRoot']

        env.host_string = username + '@' + ip
        env.password = password
        resp = ''
        with settings(warn_only=True):
            if isRoot:
                resp = sudo(command)
            else:
                resp = run(command)
        return jsonify(status='OK', message=resp)
    except Exception, e:
        print 'Error is ' + str(e)
        return jsonify(status='ERROR', message=str(e))
I have a uWSGi config file for the web app and started it using an upstart script. Here is uwSGi conf file :
[uwsgi]
module = wsgi
master = true
processes = 5
socket = app.sock
chmod-socket = 660
vacuum = true
die-on-term = true
and here is the upstart script:
description "uWSGI server instance configured to serve Python Flask App"
start on runlevel [2345]
stop on runlevel [!2345]
setuid jay
setgid www-data
chdir /home/jay/PythonFlaskApp
exec uwsgi --ini app.ini
I have followed a tutorial on running a Flask app on nginx.
This is likely a problem with the Fabric task, not with Flask. Have you tried isolating / removing Fabric from the application, just for troubleshooting purposes? You could try stubbing out a value for resp, rather than actually executing the run/sudo commands in your function. I would bet that the app works just fine if you do that.
And so that would mean that you've got a problem with Fabric executing the command in question. First thing you should do is verify this by mocking up an example Fabfile on the production server using the info you're expecting in one of your requests, and then running it with fab -f <mock_fabfile.py>.
It's also worth noting that using with settings(warn_only=True): can result in suppression of error messages. I think that you should remove this, since you are in a troubleshooting scenario. From the docs on Managing Output:
warnings: Warning messages. These are often turned off when one expects a given operation to fail, such as when using grep to test existence of text in a file. If paired with setting env.warn_only to True, this can result in fully silent warnings when remote programs fail. As with aborts, this setting does not control actual warning behavior, only whether warning messages are printed or hidden.
As a third suggestion, you can get more info out of Fabric by using the show('debug') context manager, as well as enabling Paramiko's logging:
from fabric.api import env, run
from fabric.context_managers import show

# You can also enable Paramiko's logging like so:
import logging
logging.basicConfig(level=logging.DEBUG)

def my_task():
    with show('debug'):
        run('my command...')
The Fabric docs have some additional suggestions for troubleshooting: http://docs.fabfile.org/en/1.6/troubleshooting.html. (1.6 is an older/outdated version, but the concepts still apply.)

Flask Python - Put request returns only 404 after changing host to 0.0.0.0

After setting the .py script to have app.run(host='0.0.0.0') all my put/get/etc request end up in 404.
E:\location>c:\Python27\python.exe coordinatorSim.py
* Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
192.168.0.101 - - [01/Nov/2015 09:19:18] "PUT /patient/start HTTP/1.1" 404 -
I send the request to 192.168.0.103:5000/patient/start from another machine in the wifi network, which is the ip of the machine on which the py script runs on.
If I remove the app.run(host='0.0.0.0'), then the requests work on the default localhost address, 127.0.0.1:5000 (given that I send the request to 127.0.0.103:5000/patient/start).
What is it that I am missing?
the put request is:
app = Flask(__name__)
app.debug = True
app.run(host='0.0.0.0')
sSocket = None
......

# Creates client for socket communication. The http client supplies the IP address and port no.
@app.route('/patient/start', methods=['PUT'])
#@requires_auth
def patient_start():
    global sThreadStarted
    global sSocket
    global sSocketThread
    # If the client is already started, return an error code !?
    #print "JSON:", request.json
    if len(request.json) > 0:
        l_address = request.json["address"]
        l_port = request.json["port"]
        # Start client
        print l_address, l_port
        try:
            sSocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        except socket.error, e:
            return "Error creating socket: " + str(e)
        try:
            sSocket.connect((l_address, int(l_port)))
        except socket.gaierror, e:
            return "Address error: " + str(e)
        except socket.error, e:
            return "Connection error: " + str(e)
        try:
            sThreadStarted = True
            sSocketThread.start()
        except threading.ThreadError, e:
            return "Threading error: " + str(e)
        message = {
            'status': 200,
            'message': 'Socket created'
        }
        resp = jsonify(message)
        resp.status_code = 200
        return resp
    else:
        return bad_request()
I was facing this issue while hosting my Flask React App on Amazon EC2. Took me a couple of hours to resolve this issue.
The solution is to set the SERVER_NAME = None in the settings.py file for the Flask App.
All credit to Ronhanson for his comment on GitHub.
Just to reiterate:
Open port 5000 in the EC2 security group inbound rules.
In manage.py add the line app.run(host='0.0.0.0')
In settings.py update SERVER_NAME = None
Follow the above steps and your app would be accessible from external networks.
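With SERVER_NAME = None, Flask stops pinning requests to one Host value, so requests addressed to any hostname can match the routes. A minimal sketch using Flask's test client (the hostname is a made-up stand-in for your EC2 public DNS name):

```python
from flask import Flask

app = Flask(__name__)
app.config['SERVER_NAME'] = None  # don't pin requests to one Host value

@app.route('/')
def index():
    return 'ok'

client = app.test_client()
# A request addressed to an arbitrary host still matches:
resp = client.get('/', base_url='http://some-public-host.example/')
print(resp.status_code, resp.get_data(as_text=True))  # 200 ok
```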
From the docs:
By default, a route only answers to GET requests, but that can be changed by providing the methods argument to the route() decorator
So you have to set:
@app.route('/url', methods=['GET', 'PUT'])
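A quick sketch of the effect, exercised with Flask's test client (route path borrowed from the question; the handler body is a placeholder):

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/patient/start', methods=['GET', 'PUT'])
def patient_start():
    return jsonify(status='ok')

client = app.test_client()
put_status = client.put('/patient/start').status_code
get_status = client.get('/patient/start').status_code
post_status = client.post('/patient/start').status_code  # not in methods
print(put_status, get_status, post_status)  # 200 200 405
```

Any verb not listed in methods gets 405 Method Not Allowed rather than reaching the view.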
When I had this problem in a small Raspberry Pi script, I found that having the "app.run(host='0.0.0.0')" at the top of the script was a problem.
I now use a construct like this:
from flask import Flask, jsonify, abort, make_response
...
app = Flask(__name__)

@app.route('/')
def index():
    return "Yadda yadda yadda!"

@app.route('/properties', methods=['GET'])
def get_properties():
    return jsonify({'site': site})

...

if __name__ == '__main__':
    app.run(host='0.0.0.0', debug=True)
The "debug=True" works fine in this context, as do GETs and PUTs from remote clients. The documentation discourages the use of debug=True when listening on all addresses, but does not prohibit it ("If you have debug disabled or trust the users on your network...", emphasis mine).
I also had the problem that I couldn't reach my app on the local network. The error message in my web browser was
Not Found The requested URL was not found on the server. If you
entered the URL manually please check your spelling and try again.
and in the log of my terminal it said
"GET / HTTP/1.1" 404 -
So it couldn't find anything. Why? The problem is where app.run() is called: it must come after the app is fully set up. First make your imports and build and configure your app; only then start the web server that runs your app on your local machine, like this in your
__init__.py
app.run(host= '0.0.0.0', debug=True)
or directly at the end of your web app:
if __name__ == '__main__':
    app.run(host='0.0.0.0', debug=True)

How to get CherryPy to listen only on a specific host

I have a Flask app that I want to deploy using CherryPy's built-in server. I chose CherryPy so that the app can be deployed without a reverse proxy (i.e. nginx) in front.
I'm having trouble getting CherryPy to listen for requests on just a single hostname.
Say I'm serving 2 sites: test1.com and test2.com (and have them set in my hosts file to point back to localhost).
My /etc/hosts file:
127.0.0.1 test1.com test2.com
CherryPy is serving test1.com, test2.com doesn't have anything serving it.
My cherrypy file is as follows:
import cherrypy
from my_test_flask_app import app

if __name__ == '__main__':
    cherrypy.tree.graft(app, "/")
    cherrypy.server.unsubscribe()

    server = cherrypy._cpserver.Server()
    server.socket_host = "test1.com"
    server.socket_port = 8030
    server.thread_pool = 30
    server.subscribe()

    cherrypy.engine.start()
    cherrypy.engine.block()
Set up this way, I go to test1.com:8030 on my browser and it works as expected.
But when I go to test2.com:8030, the same app is served. I expected it not to serve anything, since CherryPy isn't set up to listen for test2.com.
To me, it seems that CherryPy is just listening for everything on the given port (8030), treating the socket_host part as if it were 0.0.0.0.
Am I missing something here? I've looked through lots of docs and tutorials, but all things suggest that this code snippet should be working as I expected.
Thanks
Here's how you can set up what you want...
root = Root()
RootApp = cherrypy.Application(root)
Domain2App = cherrypy.Application(root)
SecureApp = cherrypy.Application(Secure())

vhost = cherrypy._cpwsgi.VirtualHost(
    RootApp,
    domains={
        'www.domain2.example': Domain2App,
        'www.domain2.example:443': SecureApp,
    },
)

cherrypy.tree.graft(vhost)
https://cherrypy.readthedocs.org/en/3.3.0/refman/_cpwsgi.html#classes
Hope this helps!
You misunderstand the socket listen address: it takes an IP address only, not a DNS name. Set up this way, CherryPy listens on localhost (127.0.0.1) only - try using your Ethernet/WLAN local address and you should get connection refused.
Also, you can wrap your application with a WSGI middleware that checks the Host header for the proper domain, or use CherryPy's virtual host facility to check the Host header.
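Such a Host-checking middleware is a few lines of plain WSGI. A sketch, demonstrated without CherryPy via a toy WSGI app and a hand-rolled call (all names here are illustrative):

```python
def require_host(app, allowed):
    """WSGI middleware that 404s any request whose Host header
    is not the expected one."""
    def wrapped(environ, start_response):
        if environ.get('HTTP_HOST') != allowed:
            start_response('404 Not Found', [('Content-Type', 'text/plain')])
            return [b'Not Found']
        return app(environ, start_response)
    return wrapped

# Toy WSGI app standing in for the grafted Flask app:
def hello(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'Hello']

guarded = require_host(hello, 'test1.com:8030')

def call(environ):
    """Invoke the WSGI app directly and capture (status, body)."""
    captured = []
    body = guarded(environ, lambda status, headers: captured.append(status))
    return captured[0], b''.join(body)

print(call({'HTTP_HOST': 'test1.com:8030'}))  # ('200 OK', b'Hello')
print(call({'HTTP_HOST': 'test2.com:8030'}))  # ('404 Not Found', b'Not Found')
```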
