I have a Python web application that sits behind Nginx and is served via Gunicorn.
I have configured it so that Nginx serves static assets such as images directly from disk, and only forwards application requests to Gunicorn.
My questions:
Is it a good idea or a big "no" to dockerize the web app together with static assets?
If I want to deploy my container on two servers, both of which need access to the same assets, how can I make the static assets as portable as the containerized app?
What I'd like to have if possible:
I'd like to put my app in a container and make it as portable as possible, without spending money on additional resources such as a separate server to hold the images (like a DB server).
If you know your app will always-and-forever have the same static assets, then just containerize them with the app and be done with it.
But things change, so when you need more flexibility I would recommend the Docker Volume Container approach: put your static assets in a DVC and mount that DVC in the main container, so it's all pretty much "just one app container". With Docker Compose it could look something like this:
appdata:
  image: busybox
  volumes:
    - /path/to/app/static
  command: echo "I'm just a volume container"

app:
  build: .
  volumes_from:
    - appdata
  command: …
You can expand further by starting your container with a bootstrap script that copies initial static files into the destination path on startup. That way your app is guaranteed to always have a default set to get started, but you can add more static files as the app grows. For an example of this, pull the official Jenkins container and read /usr/local/bin/jenkins.sh.
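As a rough illustration of that bootstrap idea (the script name, the paths and the defaults directory are placeholders, not taken from the Jenkins image), an entrypoint script could look something like this:

#!/bin/sh
# Copy the default static files baked into the image into the (possibly
# volume-mounted) static directory; -n avoids overwriting files that are
# already there (GNU cp option).
cp -rn /opt/app/static-defaults/. /path/to/app/static/
# Hand control over to the real application command, e.g. gunicorn.
exec "$@"

You would set this script as the ENTRYPOINT of the image so the copy runs every time the container starts.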
I agree with kojiro: if things do not change much, containerize the static files with your app. Regarding your second question, it seems the Docker Volume Container approach is still not flexible enough for you, since you will have multiple Docker hosts. Maybe Flocker addresses your requirements? From the Flocker docs (https://docs.clusterhq.com/en/0.3.2/):
Flocker lets you move your Docker containers and their data together
between Linux hosts. This means that you can run your databases,
queues and key-value stores in Docker and move them around as easily
as the rest of your app. Even stateless apps depend on many stateful
services and currently running these services in Docker containers in
production is nearly impossible. Flocker aims to solve this problem by
providing an orchestration framework that allows you to port both your
stateful and stateless containers between environments.
Hey,
I would like to start a small website that will be handled entirely in Python. I will be using the Flask framework for this. So far I have had a lot of experience with the AWS ECS and ELB services, but I admit that Python itself is still new to me. That's why I have a few questions:
1. I understand that from a software engineering point of view it is better to separate the backend and frontend - so it is best to create two separate Flask projects, one being the API and the other the frontend, right? Generally, both should be separate services in ECS, I guess.
2. In such a configuration, do they both have to use some kind of WSGI server, like Gunicorn? Is that a good solution to run inside Fargate with multiple vCPUs?
3. There are quite a few questions and myths around Nginx for this kind of setup. Until now I assumed that using an Application Load Balancer should be enough (after all, it also acts as a reverse proxy). Is it necessary to use Nginx as a sidecar in ECS, and are there any benefits to doing so? Assuming Nginx is advisable, should it be used only for the frontend or also for the API?
Thanks in advance for any advice here - I know I've asked a lot of questions.
In my opinion:
1- If you want a microservice architecture, you can split your application into a front end and a back end, each with its own framework. For the front end you can use Angular, React, Vue.js and so on. Python is a backend technology, and you can write a solid RESTful API for your front-end application to talk to.
2- If you containerize your application with Docker, for example, and write a Dockerfile for each service (which is the most common approach in a microservice setup), it is fine to run each container with any server, such as Nginx, Apache, or a WSGI server (I have not worked with the latter), and then expose a port (if needed) so the service is reachable; see the Dockerfile sketch after this list.
3- When you run your service in AWS Fargate you can attach a load balancer to it. A service runs tasks, and each task is actually one or more containers, possibly including an Nginx server or something else. So if that is what you mean, yes, it is normal to have Nginx in your container.
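To make point 2 a bit more concrete, a minimal Dockerfile for a Flask service run with Gunicorn might look like the sketch below (the base image tag, the port and the app:app module path are assumptions; adjust them to your project):

FROM python:3.11-slim
WORKDIR /app
# Install dependencies first so Docker can cache this layer.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy the application code.
COPY . .
# Port Gunicorn will listen on inside the container.
EXPOSE 8000
# "app:app" assumes a module app.py containing a Flask object named app.
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "--workers", "4", "app:app"]

Each service (API and frontend) would get its own Dockerfile along these lines and be registered as its own task definition in ECS.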
I'm looking for high-level insight here, as someone coming from the PHP ecosystem. What's the common way to deploy updates to a live Flask application that's running on a single server (no load-balancing nodes), served by a WSGI server like Gunicorn behind Nginx?
Specifically, when you pull updates from a git repository or rsync files to the server, I'm assuming this leaves a small window where a request can come through to the application while its files are changing.
I've mostly deployed Laravel applications to production, so to prevent this I use php artisan down to throw up a maintenance page while files copy, and php artisan up to bring the site back up when it's all done.
What's the equivalent with Flask, or is there some other way of handling this (Nginx config)?
Thanks
Looks like Docker might be my best bet:
Have Nginx running on the host, and the application running in container A with Gunicorn. Nginx directs traffic to container A.
Before starting the file sync, tear down container A and start up container B, which listens on the same local port. Container B can be a maintenance page or a copy of the application.
Start file sync and wait for it to finish. When done, tear down container B, and start container A again.
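A rough sketch of that swap with plain docker commands (container names, image names, the port and the sync command are all placeholders):

# Swap the app out for the maintenance container before touching the files.
docker stop app_a
docker run -d --name app_b -p 127.0.0.1:8000:8000 maintenance-page
# Update the code on the host, e.g. with rsync or git pull.
rsync -a ./ /srv/myapp/
# Swap back once the files are in place.
docker stop app_b && docker rm app_b
docker start app_a

Because Nginx keeps proxying to the same local port the whole time, visitors only ever see either the old app or the maintenance page, never a half-copied application.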
I'm trying to "dockerize" my java web application and finally run the docker image on EC2.
My application is a WAR file and connects to a database. There is also a Python script which the application calls via REST; the Python side uses the Tornado web server.
Question 1:
Should I have the following Docker containers?
Container for Application Server (Tomcat 7)
Container for HTTP server (nginx or httpd)
Container for postgres db
Container for python script (this will have tornado web server and my python script).
Question 2:
What is the best way to build the Dockerfiles? I will have to do trial and error to work out which commands need to go into the Dockerfile for each container. Should I have an Ubuntu VM on which I do that trial and error, and once I nail down which commands I need, put them into the Dockerfile for that container?
That list looks about right.
The advantage of splitting up your stack to separate containers is that you can (in many cases) use off-the-shelf official images, and only have to provide the right configuration to make them work together. In addition, you'd be able to upgrade the components (containers) separately.
Note that combining multiple services in a single container is not forbidden, but in Docker it's generally considered best practice to separate concerns and have a single container be responsible for only a single task/service.
To get all containers started with the right configuration, docker-compose is a good choice; it enables you to create a single file (docker-compose.yml, see https://docs.docker.com/compose/compose-file/) that describes your project: which images to build for each container, how the containers relate to each other, and which configuration to pass to them.
With docker-compose you can then start all containers by simply running
docker-compose up -d
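For the stack described in the question, a docker-compose.yml could look roughly like this (image tags, ports, paths and the build directory for the Tornado script are assumptions):

version: "2"
services:
  db:
    image: postgres:9.6
    environment:
      POSTGRES_PASSWORD: example
  app:
    image: tomcat:7
    # Drop the WAR into Tomcat's webapps directory.
    volumes:
      - ./myapp.war:/usr/local/tomcat/webapps/myapp.war
    depends_on:
      - db
  tornado:
    # Built from a Dockerfile that installs Tornado and the Python script.
    build: ./tornado
  web:
    image: nginx
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - app
      - tornado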
You can use Docker Machine to create a Docker development environment on Mac or Windows. This is really good for trial and error. There is no need for an Ubuntu VM.
A Docker container should do one thing only, so your application will consist of multiple containers, one for each component. You've already clearly identified the different containers for your application. Here is how the workflow might look:
Create a Dockerfile for each container: Tomcat, Nginx, Postgres, Tornado
Deploy the application to Tomcat in its Dockerfile or by mapping a volume
Build an image for each container
Optionally push these images to Docker Hub
If you plan to deploy these containers on multiple hosts, create an overlay network
Use Docker Compose to start the containers together; it will use the previously created network. Alternatively, you can use --x-networking to have Docker Compose create the network.
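In command form, that workflow might look roughly like this (image, directory and network names are placeholders):

# Build an image from each service's Dockerfile.
docker build -t myrepo/myapp-tomcat ./tomcat
docker build -t myrepo/myapp-nginx ./nginx
# Optionally publish the images.
docker push myrepo/myapp-tomcat
docker push myrepo/myapp-nginx
# For multi-host deployments, create an overlay network (this needs a
# key-value store or swarm mode, depending on your Docker version).
docker network create -d overlay mynet
# Start everything together.
docker-compose up -d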
It seems that uWSGI is capable of doing almost everything I am using Nginx for: serving static content, executing PHP scripts, hosting Python web apps, ...
So (in order to simplify my environment) can I replace nginx + uwsgi with uwsgi without loss of performance/functionality?
As they say in the documentation:
Can I use uWSGI’s HTTP capabilities in production?
If you need a load balancer/proxy it can be a very good idea. It will
automatically find new uWSGI instances and can load balance in various
ways. If you want to use it as a real webserver you should take into
account that serving static files in uWSGI instances is possible, but
not as good as using a dedicated full-featured web server. If you host
static assets in the cloud or on a CDN, using uWSGI’s HTTP
capabilities you can definitely avoid configuring a full webserver.
So yes, for serving static files uWSGI is slower than a dedicated web server.
Performance aside, you're right that in a really basic application uWSGI can do everything the web server offers. However, should your application grow or change over time, you may find there are many things the traditional web server offers which uWSGI does not.
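If you do decide to run uWSGI on its own, a minimal standalone invocation might look like this (the module path and the static directory are placeholders):

# Serve the WSGI app over HTTP and map /static to a directory on disk.
uwsgi --http :8080 \
      --module myapp:app \
      --master --processes 4 \
      --static-map /static=/srv/myapp/static

The --http and --static-map options are exactly the capabilities the documentation is talking about; they work, but a dedicated web server will usually handle static files more efficiently.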
I would recommend setting up deploy scripts in your language of choice (such as Fabric for Python). I would say my web server is one of the simplest components to deploy and set up in our application stack, and the least "needy" - it is rarely on my radar unless I'm configuring a new server.
I currently have Apache setup on my VPS and I'm wondering what would be the best way to handle Pylons development.
I have a directory structure with public_html in my home directory, which contains a separate directory for each website; I point the DNS names from my registrar at the server's IP and map them to those directories.
Is there a way to get paster running within a new directory (i.e. create an env/bin/paster) and serve one of those sites with it?
If so, do I even need to get a new IP? Or would I be able to run both web servers in parallel on the same server without any conflicts?
I'm looking to convert all my new projects to Pylons.
It's usually more practical to first develop your application locally using pserve, the built-in HTTP server in Pyramid (it used to be paster before Pyramid 1.3, but pserve behaves similarly). This HTTP server comes in quite handy for debugging during development, but you don't usually expose your web application publicly with it.
Once your application is ready to go public you should deploy it on your server behind another HTTP server like Apache. If you have Apache with mod_wsgi you can use WSGIScriptAlias, as documented in Pyramid, to map a subdirectory to your application.
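A minimal mod_wsgi configuration along those lines might look like this (paths, the process group name and the /myapp prefix are assumptions; pyramid.wsgi is the small script from the Pyramid docs that calls pyramid.paster.get_app on your production.ini; on Apache 2.2 replace Require all granted with the older Order/Allow directives):

# Run the app in its own daemon process and map it under /myapp.
WSGIDaemonProcess pyramid threads=4 python-path=/srv/myapp/env/lib/python2.7/site-packages
WSGIScriptAlias /myapp /srv/myapp/pyramid.wsgi

<Directory /srv/myapp>
    WSGIProcessGroup pyramid
    Require all granted
</Directory>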
The official documentation also explains how you can have different subdirectories running different Pyramid instances with a virtual root.
If you really want to make your application accessible publicly with pserve, you can still use the urlmap composite functionality of PasteDeploy as explained in the documentation.
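For reference, a urlmap composite section in the PasteDeploy .ini looks roughly like this (the egg names stand in for your own projects):

[composite:main]
use = egg:Paste#urlmap
/ = mainsite
/blog = blogapp

[app:mainsite]
use = egg:MainSite

[app:blogapp]
use = egg:BlogApp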
If your DNS is properly configured you don't need to mess with the IP.