I have a production-ready application that I've installed on a VM running CentOS. All dependencies and other settings are up and running, and all that's left is to properly configure the gunicorn server and run it with a start.sh script to begin routing web traffic to the app.
However, I'm not sure how to have gunicorn handle the SSL layer itself. I'd prefer to have gunicorn terminate SSL, rather than the load balancers, to keep deployments simple and streamlined.
I've got a my_site.ca-bundle file from an SSL validator.
My bash script looks something like this, based on the documentation here and referenced in this Stack Overflow question:
#!/bin/bash
exec gunicorn -w3 --certfile=my_site.crt --keyfile=my_site.key myapp.wsgi:application
However, how do I use the ca-bundle file given the settings referenced in the documentation? I don't actually have my_site.csr and my_site.key, since I think both the private and public keys are inside the ca-bundle file.
Sorry for the super-noob question; this is my first time setting up SSL by hand rather than through load balancers. Is there a different gunicorn parameter for just a ca-bundle file, like AWS has?
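Not a definitive answer, but two notes that may help. A ca-bundle issued by a certificate authority normally contains the intermediate (chain) certificates, not your private key, so you still need the my_site.crt and my_site.key produced when the CSR was generated. gunicorn does have a --ca-certs option, but as far as I know it is intended for verifying client certificates; the usual way to serve the chain is to append the bundle to the certificate file and point --certfile at the result. A minimal sketch, using the file names from the question:
# one-off: build a certificate file that includes the intermediate chain
cat my_site.crt my_site.ca-bundle > my_site_chained.crt
Then in start.sh:
#!/bin/bash
exec gunicorn -w3 --certfile=my_site_chained.crt --keyfile=my_site.key myapp.wsgi:application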
Using Google App Engine Standard with Python 2.7, I have a rule in my dispatch.yaml that routes all URLs matching "*/flex/*" to the flex service.
dispatch.yaml
dispatch:
- url: '*/flex/*'
  module: flex
The flex environment is a custom Python 3.7 runtime which is normally started using:
python dev_appserver.py flex.yaml --custom_entrypoint="docker run -p 9090:8080 flex_app"
With other services in my environment, I attempt to launch a dev environment with the command:
python dev_appserver.py dispatch.yaml default.yaml sync.yaml task.yaml flex.yaml --custom_entrypoint="docker run -p 9090:8080 flex_app" --port=8080 --skip_sdk_update_check
However, when this starts, the dev server assigns its own local address and port to each service, whereas I need the flex service to be reachable on port 9090.
Example server output:
INFO devappserver2.py:278] Skipping SDK update check.
INFO dispatcher.py:223] Starting dispatcher running at: http://0.0.0.0:8080
INFO dispatcher.py:256] Starting module "default" running at: http://0.0.0.0:8081
INFO dispatcher.py:256] Starting module "sync" running at: http://0.0.0.0:8082
INFO dispatcher.py:256] Starting module "task" running at: http://0.0.0.0:8083
INFO dispatcher.py:256] Starting module "flex" running at: http://0.0.0.0:8084
I am able to successfully access the flex app if I hit the URL localhost:9090. However, if I access localhost:8084 or localhost:8080/flex/, I receive the error:
503 - This request has timed out.
The server logs reflect this but do not show an actual error:
INFO module.py:861] flex: "GET / HTTP/1.1" 503 59
Is it possible to dispatch URLs from GAE standard environment services to a flex environment service and have requests routed from the service's assigned port to the port I need? I would think this is possible, since Google App Engine's documentation says the environments can be mixed. I've also attempted to solve this by forcing Docker to run on port 8084, but the port can't be shared.
Found this by looking at dev_appserver.py --help. It turns out the answer was simply to change the custom_entrypoint to docker run -p {port}:8080 flex_app, which automatically forwards GAE's randomly assigned port to the Docker instance.
--custom_entrypoint CUSTOM_ENTRYPOINT
specify an entrypoint for custom runtime modules. This
is required when such modules are present. Include
"{port}" in the string (without quotes) to pass the
port number in as an argument.
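Putting that together with the command from the question, the full local invocation would look something like this (only the port mapping in the entrypoint changes):
python dev_appserver.py dispatch.yaml default.yaml sync.yaml task.yaml flex.yaml --custom_entrypoint="docker run -p {port}:8080 flex_app" --port=8080 --skip_sdk_update_check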
The development server can only be used for first-generation standard environment apps; it doesn't work with flexible apps. See How to use Python 3 with Google App Engine's Local Development Server.
I think your attempt just ends up running the service as a standard environment one, not a flexible one (chances of it running correctly are pretty slim).
To run correctly you'd have to drop it from the local dev_appserver execution. Cross-service links to the flexible service would need some sort of local hack to use port 9090 (via environment variables or simply some hardcoded values); you won't be able to use the dispatch.yaml routing in this case, since the local devserver won't know about the flexible service's existence.
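As a rough illustration of that local hack (the FLEX_SERVICE_URL variable is an assumption, not something the devserver provides; your other services would have to read it when building cross-service URLs):
# run the flexible service separately, outside the devserver
docker run -p 9090:8080 flex_app &
# tell the standard services where to find it (hypothetical variable)
export FLEX_SERVICE_URL="http://localhost:9090"
# start the devserver without flex.yaml
python dev_appserver.py dispatch.yaml default.yaml sync.yaml task.yaml --port=8080 --skip_sdk_update_check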
Up to now I followed this advice to reload the code:
https://code.google.com/archive/p/modwsgi/wikis/ReloadingSourceCode.wiki
This has the drawback that code changes get detected only every N seconds. I could use N=0.1, but that results in useless disk I/O.
AFAIK the Linux kernel's inotify facility is available from Python.
Is there a faster way to detect code changes and restart the wsgi handler?
We use daemon mode on linux.
Why code reload for mod_wsgi at all
There is interest in why I want this at all. Here is my setup:
Most people use "manage.py runserver" for development and some other WSGI deployment for production.
In my context we have automated the creation of new systems, and production and development systems are mostly identical.
One operating system (Linux) can host N systems (virtual environments).
Developers can use runserver or mod_wsgi. Using runserver has the benefit that it's easy for debugging; mod_wsgi has the benefit that you don't need to start the server first.
mod_wsgi has the benefit that you know the URL: https://dev-server/system-name/myurl/
With runserver you don't know the port. Use case: you want to link from an internal wiki to a dev system ...
A dirty hack to get code reload for mod_wsgi, which we used in the past: maximum-requests=1, but this is slow.
Preliminaries
Developers can use runserver or mod_wsgi. Using runserver has the
benefit that it's easy for debugging; mod_wsgi has the benefit that
you don't need to start the server first.
But you do: the server needs to be set up first, and that takes a lot of effort. And the server needs to be started here as well, though you can configure it to start automatically at boot.
If you are running on port 80 or 443, which is usually the case, the server can be started only by root. If it needs to be restarted, you will have to ask for the superuser's help again. So ./manage.py runserver scores heavily here.
mod_wsgi has the benefit that you know the URL:
https://dev-server/system-name/myurl/
Which is no different from the dev server. By default it starts on port 8000, so you can access it as http://dev-server:8000/system-name/myurl/. If you want to use SSL with the development server, you can use a package such as django-sslserver, or you can put nginx in front of the Django development server.
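For example, a minimal sketch with django-sslserver (assuming the package is installed into the project's virtualenv and its app, sslserver, is added to INSTALLED_APPS; the address and port are arbitrary):
pip install django-sslserver
# after adding "sslserver" to INSTALLED_APPS in settings.py
./manage.py runsslserver 0.0.0.0:8443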
With runserver you don't know the port. Use case: You want to link from an internal wiki to a dev-system ....
With runserver, the port is well defined, as mentioned above. And you can make it listen on a different port, for example with:
./manage.py runserver 0.0.0.0:9090
Note that if you put the development server behind Apache (as a reverse proxy) or nginx, the restarting problems etc. that I mentioned above do not apply.
So in short, for development work, whatever you do with mod_wsgi can be done with the Django development server (aka ./manage.py runserver).
Inotify
Here we are getting to the main topic at last. Assuming you have installed inotify-tools, you can type this into your shell; you don't need to write a script.
while inotifywait -r -e modify .; do sudo kill -2 yourpid ; done
This will result in the code being reloaded when ...
... using daemon mode with a single process you can send a SIGINT
signal to the daemon process using the ‘kill’ command, or have the
application send the signal to itself when a specific URL is
triggered.
ref: http://modwsgi.readthedocs.io/en/develop/user-guides/frequently-asked-questions.html#application-reloading
alternatively
while inotifywait -r -e modify .; do touch wsgi.py ; done
when
... using daemon mode, with any number of processes, and the process
reload mechanism of mod_wsgi 2.0 has been enabled, then all you need
to do is touch the WSGI script file, thereby updating its modification
time, and the daemon processes will automatically shutdown and restart
the next time they receive a request.
In both situations we are using the -r flag to tell inotify to monitor subdirectories. That means each time you save a .css or .js file, Apache will reload. But without the -r flag, changes to Python code in subfolders will go undetected. To have the best of both worlds, exclude css, js, images, etc. with the --exclude option.
What about when your IDE saves an auto-backup file, or vim saves a .swp file? That too will cause a code reload, so you would have to exclude those file types as well.
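For illustration, a sketch that combines both exclusions (the regular expression is only an example; inotifywait's --exclude takes an extended regex matched against the file path):
while inotifywait -r -e modify --exclude '\.(css|js|png|jpg|swp|swx)$' .; do touch wsgi.py ; done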
So in short, it's a lot of hard work to reproduce what the django development server does free of charge.
You can use inotify-hookable to run any command you want in response to an inotify event (here's my source link: http://terokarvinen.com/2016/when-files-change-take-action-inotify-hookable).
After a change is detected, you can then just reload the code served by Apache.
For your specific problem, it should be something like:
inotify-hookable --watch-directories sources/ --recursive --on-modify-command './code_reload.sh'
In the link above, the command to execute was just a simple touch flask/init.wsgi.
So the whole command (adding ignored files) was:
inotify-hookable --watch-directories flask/ --recursive --ignore-paths='flask/init.wsgi' --on-modify-command 'touch flask/init.wsgi'
As stated here: Flask + mod_wsgi automatic reload on source code change, if you have enabled WSGIScriptReloading, you can just touch that file. It will cause the entire application code to reload (not just the config file). But, if you prefer, you can set any other script to reload the code.
After googling a bit, this seems to be a pretty standard solution to the problem, and I think you can use it for your application.
I've been creating a webapp (just for learning purposes) using Python and Django, and have no intention of deploying it. However, is there a way to let someone else try the web application? More precisely: is it possible to somehow test the webapp on another computer? I tried sending the source code (the whole folder) to another computer, installed a virtual environment, activated it, and tried runserver. However, I always get RuntimeError: maximum recursion depth exceeded in cmp. Is there any other way around it?
You can use ngrok -- https://ngrok.com/ -- to create a public URL to your local server for testing, and then give that URL to people so they can try your webapp.
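For example (a sketch, assuming the Django development server runs locally on port 8000):
# in one terminal, start the app locally
./manage.py runserver 8000
# in another terminal, expose it; ngrok prints a public forwarding URL
ngrok http 8000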
You can also use Localtunnel to easily share a web service from your local development machine without deploying the code to a server.
Install localtunnel:
npm install -g localtunnel
Start a web server on some local port (e.g. http://localhost:8000) and use the command line interface to request a tunnel to your local server:
lt --port 8000
You will receive a url, for example https://xyz.localtunnel.me, that you can share with anyone for as long as your local instance of lt remains active. Any requests will be routed to your local service at the specified port.
I'm trying to get a Crossbar.io app running on Heroku. Crossbar.io requires you to put the app's host in a config file that's used to launch the app. I've tried the following:
my-app-name.herokuapp.com: No dice. I imagine Heroku does some fancy redirection internally that prevents this from working.
$HOSTNAME: running a script that outputs the HOSTNAME and using the result in the config file doesn't work either. The HOSTNAME is a GUID that contains no useful information.
IP: I tried getting the external IP of the app, but no luck. The IP changes each time I start the app.
Is there an established way to do this on Heroku?
Also, the config requires a port, and Heroku seems to assign these dynamically. Is there any way to access the port as well (ideally before the app runs)?
For the host use 0.0.0.0. For the port number it's slightly more complicated...
When it creates a web dyno, Heroku sets a PORT environment variable with the dyno's port. To set this in Crossbar you need to create a script that reads that variable and writes it into your config wherever the port is required. Then make sure that the script returns 0 on exit and put the following in your Procfile:
web: ./your_config_helper_script && crossbar start
That runs your script first (which should get your config file ready) before running crossbar.
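A minimal sketch of such a helper script, assuming a template file .crossbar/config.json.template that contains a literal __PORT__ placeholder (both the template name and the placeholder are hypothetical, not something Crossbar or Heroku define):
#!/bin/sh
# substitute Heroku's PORT environment variable into the Crossbar config
sed "s/__PORT__/${PORT}/g" .crossbar/config.json.template > .crossbar/config.json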
I was tasked with making some changes to a Django application. I've never worked with Django and I am having trouble figuring out how to get my changes to compile and be available online.
What I know so far is that the application is currently available online. netstat tells me that httpd is listening on port 80. My change was made in the myapp/views.py file.
I tried to restart httpd using service httpd restart but my changes did not take effect. I've been looking into the issue a bit and I believe that I need to run a command along the lines of the following.
I tried calling python manage.py runserver MY.IP.AD.DR:8000 and I get:
python manage.py runserver 129.64.101.14:8000
Validating models...
0 errors found
Django version 1.4.1, using settings 'cutsheets.settings'
Development server is running at http://MY.IP.AD.DR:8000/
Quit the server with CONTROL-C.
Nice that no errors are found, but when I navigate to http://MY.IP.AD.DR:8000/ I just get an "Unable to connect" message from my browser. I tried port 81 too and had the same problem.
Without knowing exactly how your application is set up, I can't really say exactly how to solve this problem.
I can tell you that it's quite common to use two web servers with Django - one handles the static content, and reverse proxies everything else to a different port where the Django app is listening. Restarting the normal HTTP daemon therefore wouldn't affect the Django app, so you need to restart the one handling the Django app. Until you restart it, the prior version of the code will be running.
I generally use Nginx as my static server and Gunicorn for the Django app, with Supervisor used to run Gunicorn; this is a common setup. I recommend you take a look at the config for the main web server to see if it forwards anything to another port. If so, you need to see what server is running on that port and restart it.
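For instance, a rough sketch of how you might check for this, assuming the nginx + Supervisor + Gunicorn setup described above (the program name myapp is an assumption):
grep -r "proxy_pass" /etc/nginx/     # see whether nginx forwards requests to another port
sudo supervisorctl status            # list the processes Supervisor manages
sudo supervisorctl restart myapp     # restart the Gunicorn process so the new code is loaded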
Also, is there a Fabric configuration (fabfile.py)? A lot of people use Fabric to automate Django deployments, and if there is one then there may be a command already defined for deploying.