I have an application (on OpenShift) which can run Python. However, I would also like to reach static files, e.g. HTML. When I try to do so I get:
uWSGI Error
Python application not found
Could you please help me make the server stop interpreting all files as Python?
uWSGI needs a Python app to serve the URL.
As http://uwsgi-docs.readthedocs.org/en/latest/HTTP.html#can-i-use-uwsgi-s-http-capabilities-in-production says:
If you want to use it as a real webserver you should take into account that serving static files in uWSGI instances is possible, but not as good as using a dedicated full-featured web server.
In a typical setup, clients send HTTP requests to Nginx or some other web server, which serves the static files itself and passes everything else on to uWSGI.
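A minimal Nginx sketch of that split might look like the following; the domain, paths, and uWSGI socket address are all placeholders you would adapt to your setup:

```nginx
server {
    listen 80;
    server_name example.com;

    # Let Nginx serve static files directly from disk
    location /static/ {
        alias /var/www/myapp/static/;
    }

    # Everything else goes to the uWSGI app over the uwsgi protocol
    location / {
        include uwsgi_params;
        uwsgi_pass 127.0.0.1:3031;
    }
}
```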
You might be better off asking that on https://serverfault.com/about
In a typical python server setup it is recommended to have Nginx web server serve the static content and proxy the dynamic requests to Gunicorn app server.
Now, if I am not serving any static content through my Python application, do I still need Nginx in front of Gunicorn? What would be the advantages?
Detailed explanation would be really appreciated.
All the static content is served through a CDN, and the backend server will only need to serve the (REST) APIs. So when I serve only dynamic content, will I still need Nginx? Does it have any advantage under high load, etc.?
It is recommended in Gunicorn docs to run it behind a proxy server.
Technically, you don't really need Nginx.
BUT it's the Internet: your server will receive plenty of malformed HTTP requests which are made by bots and vulnerability scanner scripts. Now, your Gunicorn process will be busy parsing and dealing with these requests instead of serving genuine clients.
With Nginx in front, it will terminate these requests without forwarding them to your Gunicorn backend.
Most of these bots make requests to your IP address instead of your domain name. So it's really easy to configure Nginx to ignore requests made to the IP address and only serve requests made to your domain. This is far more secure and faster than relying on Django's ALLOWED_HOSTS setting.
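A sketch of that idea in Nginx: a catch-all server block closes connections for requests that don't carry your domain, while the named server proxies to Gunicorn (the domain and bind address below are placeholders):

```nginx
# Default server: any request addressed to the bare IP (or an unknown
# Host header) lands here; 444 closes the connection with no response.
server {
    listen 80 default_server;
    server_name _;
    return 444;
}

# Only requests for the real domain name reach Gunicorn.
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:8000;   # Gunicorn bind address (assumed)
        proxy_set_header Host $host;
    }
}
```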
Also, it's much easier to find resources on protecting your server with Nginx, like blacklisting rogue IP addresses or user agents, etc. Compare these two Google searches: nginx ban ip vs gunicorn ban ip. The Nginx search turns up far more resources.
If you're worried about performance, then rest assured Nginx will not be the bottleneck. If you really want to optimise performance, database querying will be the first place to start.
No, I no longer deploy Nginx specifically for the Python app. I may have an application load balancer or Nginx in the path to split requests to other apps, but not for load management. If using asyncio-based systems, I typically don't even use an app server (uWSGI/Gunicorn). This includes apps with very high throughput. Every layer of reverse proxy / layer-7 load balancing adds a touch of latency; don't add it if you don't need it.
Even if you don't use Nginx for serving static assets, putting Gunicorn behind a proxy server is the recommended setup.
For example, putting Gunicorn behind a proxy will allow you to add some back pressure to your system in order to protect you from attacks such as Slowloris.
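One way this back pressure shows up in practice: Nginx buffers slow clients and enforces its own client timeouts, so a Slowloris-style client ties up Nginx rather than a Gunicorn worker. A sketch, with illustrative timeout values and an assumed Gunicorn bind address:

```nginx
server {
    listen 80;

    # Drop clients that trickle request headers or bodies too slowly.
    client_header_timeout 10s;
    client_body_timeout 10s;

    location / {
        proxy_pass http://127.0.0.1:8000;   # Gunicorn bind address (assumed)
    }
}
```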
I am a .net developer coming over to python. I have recently started using Flask and have some quick questions about serving files.
I noticed a lot of tutorials focused on nginx and Flask. However, I am able to run Flask without nginx. I'm just curious why these are used together (nginx and Flask). Is nginx only for static files?
Nginx is a proxy server; imagine your app is made up of multiple microservices written in different languages.
For more info, see NGINX REVERSE PROXY
On a development machine flask can be run without a webserver (nginx, apache etc) or an application container (eg uwsgi, gunicorn etc).
Things are different when you need to handle load on a production server. For starters, Python is relatively slow at serving static content, whereas Apache/nginx do that very well.
When the application becomes big enough to be broken into multiple separate services or has to be horizontally scaled, the proxy server capabilities of nginx come in very handy.
In the architectures I build, nginx serves as the entry point where SSL is terminated, and the rest of the application sits behind a VPN and firewall.
Does this help?
From http://flask.pocoo.org/docs/1.0/deploying/ :
"While lightweight and easy to use, Flask’s built-in server is not suitable for production as it doesn’t scale well. Some of the options available for properly running Flask in production are documented here."
It seems that uWSGI is capable of doing almost everything I use nginx for: serving static content, executing PHP scripts, hosting Python web apps, ...
So (in order to simplify my environment) can I replace nginx + uwsgi with uwsgi without loss of performance/functionality?
As they say in the documentation:
Can I use uWSGI’s HTTP capabilities in production?
If you need a load balancer/proxy it can be a very good idea. It will
automatically find new uWSGI instances and can load balance in various
ways. If you want to use it as a real webserver you should take into
account that serving static files in uWSGI instances is possible, but
not as good as using a dedicated full-featured web server. If you host
static assets in the cloud or on a CDN, using uWSGI’s HTTP
capabilities you can definitely avoid configuring a full webserver.
So yes, you can, but uWSGI serves static files more slowly than a traditional web server.
Besides performance, in a really basic application you're right, uWSGI can do everything the webserver offers. However, should your application grow/change over time you may find that there are many things the traditional webserver offers which uWSGI does not.
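For instance, a single uWSGI instance can both run the app and map static paths to disk, so a minimal stack needs no separate web server. A sketch of such an ini config; the port, module file, and paths are assumptions:

```ini
[uwsgi]
http = :8080
wsgi-file = app.py
processes = 4
; serve /static/* straight from disk, bypassing the Python app
static-map = /static=/var/www/myapp/static
```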
I would recommend setting up deploy scripts in your language of choice (such as Fabric for Python). My web server is one of the simplest components to deploy and set up in our application stack, and the least "needy"; it is rarely on my radar unless I'm configuring a new server.
I've recently started to experiment with Python and Tornado web server/framework for web development. Previously, I have used PHP with my own framework on a LAMP stack. With PHP, deploying updated code/new code is as easy as uploading it to the server because of the way mod_php and Apache interact.
When I add new code or update code in Python/Tornado, do I need to restart the Tornado server? I could see this being problematic if you have a number of active users.
(a) Do I have to restart the server, or is there another/better way?
(b) If so, how can I avoid users being disconnected/getting errors/etc. while it's restarting (which could take a few seconds)?
[One possible thought is to use the page flipping paradigm with Nginx pointing to a server, launch the new server instance with updated code, redirect Nginx there and take down the original server...?]
It appears the best method is to use Nginx with multiple Tornado instances, as I alluded to in my original question and as Cole mentions. Nginx can reload its configuration file on the fly. So the process looks like this:
Update Python/Tornado web application code
Start a new instance of the application on a different port
Update the configuration file of Nginx to point to the new instance (testing the syntax of the configuration file first)
Reload the Nginx configuration file with a kill -HUP command
Stop the old instance of Python/Tornado web server
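Step 3 amounts to editing the proxy target in the Nginx config; with an upstream block, only one line changes per swap (the ports are placeholders):

```nginx
upstream tornado_app {
    server 127.0.0.1:8001;   # was 127.0.0.1:8000 before the swap
}

server {
    listen 80;

    location / {
        proxy_pass http://tornado_app;
    }
}
```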
A couple useful resources on Nginx regarding hot-swapping the configuration file:
https://calomel.org/nginx.html (in "Explaining the directives in nginx.conf" section)
http://wiki.nginx.org/CommandLine (in "Loading a New Configuration Using Signals" section)
Use HAProxy or Nginx and proxy to multiple Tornado processes, which you can then restart one by one. The Tornado docs cover Nginx, but Nginx doesn't support WebSockets, so if you're using them you'll need HAProxy.
You could pass the debug=True switch to the Tornado web application instance:
T_APP = tornado.web.Application(<URL_MAP>, debug=True)
This picks up handler changes as and when they happen.
Is this what you are searching for?
A module to automatically restart the server when a module is modified.
http://www.tornadoweb.org/en/branch2.4/autoreload.html
If you just want to deploy new code with tornado/python during development without restarting the server, you can use the realtimefunc decorator in this GitHub repository.
I currently have Apache setup on my VPS and I'm wondering what would be the best way to handle Pylons development.
I have a public_html directory structure in my home directory containing separate website directories, and I map each site's DNS name (provided by my name registrar) to the server's IP.
Is there a way to get paster running within a new directory (i.e. make an env/bin/paster) and point it there?
If so then do I even need to get a new IP? Or would I be able to run both webservers in parallel on the same server without experiencing any conflicts?
I'm looking to convert all my new projects to Pylons.
It's usually more practical to first develop your application locally using pserve, the built-in HTTP server in Pyramid (it used to be paster before Pyramid 1.3, but pserve behaves similarly). This HTTP server comes in quite handy for debugging during development, but you don't usually expose your web application publicly with it.
Once your application is ready to go public, you should deploy it on your server behind another HTTP server such as Apache. You can use WSGIScriptAlias if you have Apache with mod_wsgi, as documented in Pyramid, to map a subdirectory.
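A minimal mod_wsgi sketch for mapping a subdirectory to the app; the paths and the .wsgi entry-point file name are assumptions:

```apache
WSGIScriptAlias /myapp /var/www/myapp/pyramid.wsgi

<Directory /var/www/myapp>
    Require all granted
</Directory>
```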
The official documentation also explains how you can have different subdirectories running different Pyramid instances with a virtual root.
If you really want to make your application accessible publicly with pserve, you can still use the urlmap composite functionality of PasteDeploy as explained in the documentation.
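The urlmap composite is configured in your PasteDeploy .ini file; the app section names and egg names below are placeholders for your own projects:

```ini
[composite:main]
use = egg:Paste#urlmap
/ = main_app
/blog = blog_app

[app:main_app]
use = egg:MyProject

[app:blog_app]
use = egg:MyBlog
```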
If your DNS records are properly configured, you don't need to mess with the IP.