Easy deployment of a Flask API, how do we do that? What is the best way?
I would like to deploy my Flask API on a single server, at least in the beginning. I just got started on a new project and I don't want to spend too much time on Docker and scalability. I'm even a bit scared to use Docker in production at this stage anyway.
With PHP there are a ton of options; I just saw they even have "Deployer" now, which makes things even easier.
What I am looking for:
- With one command, deploy my project to the server (using Git). Depending on whether I run "deploy dev" or "deploy prod", the server needs to know which branch to pull from, so I do need to merge branches before deploying.
- Create a new "release" folder on the server and symlink the www folder to the new release.
- Keep the last five release folders, removing the oldest on every deploy.
- Make it possible to roll back, i.e. point the symlink at a previous release folder.
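For a sense of scale, the whole wishlist above fits in a small Fabric script. The following is only a rough sketch under assumed names (the BASE layout, REPO URL, and branch mapping are all placeholders), not a battle-tested solution:

```python
# deploy.py -- sketch of symlink-based releases; all paths/branches are assumptions.
from datetime import datetime
from fabric import task

BASE = "/var/www/myapp"                      # assumed server layout
REPO = "git@example.com:me/myapp.git"        # hypothetical repo URL
BRANCHES = {"dev": "develop", "prod": "master"}

@task
def deploy(c, env="dev"):
    """Run as: fab -H user@server deploy --env=prod"""
    stamp = datetime.now().strftime("%Y%m%d%H%M%S")
    release = f"{BASE}/releases/{stamp}"
    # Shallow-clone only the branch that belongs to this environment.
    c.run(f"git clone --depth 1 --branch {BRANCHES[env]} {REPO} {release}")
    # Point the live symlink (the "www" folder) at the new release.
    c.run(f"ln -sfn {release} {BASE}/current")
    # Keep the five newest releases, delete the rest.
    c.run(f"ls -1dt {BASE}/releases/* | tail -n +6 | xargs -r rm -rf")

@task
def rollback(c):
    """Point the symlink back at the previous (second-newest) release."""
    c.run(f"ln -sfn $(ls -1dt {BASE}/releases/* | sed -n 2p) {BASE}/current")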
I saw I can use Fabric, but it seems kind of complicated and perhaps overkill (like Capistrano). I searched quite a lot on the web but couldn't find a very clear answer/solution, or one that most people agree on.
Any thoughts, or people who would like to share their experience?
I will post an answer, because I see I was given an answer nine months ago already without it actually answering the thread.
As Sayse already said: there are plenty of ways, but Git and CI are both good ways to implement continuous deployment on a VPS.
I've been trying CI, with much success!
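To make the Git half concrete: a common minimal setup is a bare repository on the VPS with a post-receive hook that checks out whatever was pushed. Here is a rough sketch (the paths and branch name are assumptions), written in Python since a hook can be any executable:

```python
#!/usr/bin/env python3
# hooks/post-receive -- bare-bones git-based deployment sketch.
import subprocess
import sys

DEPLOY_DIR = "/var/www/myapp"          # assumed live checkout directory
GIT_DIR = "/home/git/myapp.git"        # assumed bare repository path

# git feeds one "<old-sha> <new-sha> <refname>" line per pushed ref on stdin.
for line in sys.stdin:
    old, new, ref = line.split()
    if ref == "refs/heads/master":     # only react to pushes to master
        subprocess.run(
            ["git", "--work-tree", DEPLOY_DIR, "--git-dir", GIT_DIR,
             "checkout", "-f", "master"],
            check=True,
        )
```

With that in place, "deploying" is just git push, and a CI server can do the push for you after the tests pass.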
Related
Ideally I'd like to build a package to deploy to Debian, where the installation process would check that the system has the required dependencies installed, as well as configure cron jobs, set up users, etc.
I've tried googling around, and I understand a .deb is the format I can distribute in, but that is as far as I got, since I'm now getting confused by the tooling I need to get up to speed with. The other option is to just git clone on the server and configure the environment manually… but that's not preferable for obvious reasons.
How can I get started with building a Debian package and is that the right direction for deploying web applications? If anyone could point me in the right direction tools-wise, and perhaps to a tutorial, that would be massively appreciated :) Also, if you advise just taking the simple route with git, I'm happy to take that advice as well if you explain why. If it makes any difference, I'm deploying one Node.js and one Python web application.
You can for sure package everything as a Linux application, for example using PyInstaller for your Python web app.
Besides that, it depends on your use case.
I will focus on the second part of your question,
How can I get started with building a Debian package and is that the right direction for deploying web applications?
as that seems to be what you are after, given you are already considering alternatives to .deb in your question.
I want to deploy 1-2 websites on my Linux server
In this case, I'd say manually git clone and configure everything. It's totally fine when you know there won't be much more running on the server, and it's pretty hassle-free.
Why spend time packaging when no one will ever need the package again after you've installed it on your server?
I want to distribute my webapps to others on Debian
Here a .deb would make total sense; for example, Plex Media Server and other applications are shipped like this.
If the official Debian wiki is too abstract, there are also more hands-on guides to get you started quickly. You could also grab other .deb packages and extract them to see what they are made up of. You mentioned one of your websites uses Python, so I suspect it might be Flask or Django. If it's Django, there is an example repository you might want to check out.
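To give a feel for what a .deb is made up of: at its simplest it is just a directory tree mirroring the target filesystem plus a DEBIAN/control metadata file. A minimal hypothetical layout (every name and field here is a placeholder):

```
myapp_1.0-1/                  (hypothetical package root)
    DEBIAN/
        control               (required metadata, shown below)
        postinst              (optional script: create users, set up cron jobs)
    opt/
        myapp/                (application files, installed to /opt/myapp)

--- DEBIAN/control ---
Package: myapp
Version: 1.0-1
Architecture: all
Maintainer: Your Name <you@example.com>
Depends: python3, nodejs
Description: My web application
 Longer description lines are indented by one space.
```

Building it is a single command, dpkg-deb --build myapp_1.0-1, and installing with dpkg or apt makes the package manager enforce the Depends line; user creation and cron setup traditionally live in the postinst maintainer script.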
I want to run a lot of stuff on my server / distribute to other devs and platforms / or scale soon
In this case I would turn the web apps into Docker containers. They are easy to build, share, and deploy, and on top of that you can easily bundle all dependencies and scripts to make sure everything is set up right. They are also easy to run and stop, so you have a simple on/off switch if your server is running low on resources while you want to run something else. I highly favour this solution, as it also lets you easily control what is running on which IP as you deploy more and more applications to your server. But, as you pointed out, it comes with a bit of overhead and is not the best solution on weak hardware.
Also, if you know for sure what will be running on the server long-term and don't need the flexibility, I would probably skip Docker as well.
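If you do go the Docker route, the container for the Python app can be as small as this sketch (the image tag, port, and app module are assumptions; the Node.js app would look analogous on a node base image):

```dockerfile
# Dockerfile -- minimal sketch; app module and port are assumptions.
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# Assumes gunicorn is listed in requirements.txt and the Flask/Django
# app object lives in app.py as "app".
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "app:app"]
```

docker build -t myapp . followed by docker run -p 8000:8000 myapp gives you exactly the on/off switch described above.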
I have been looking at setting up a web server to use Python: I have installed Apache 2.2.22 on Debian 7 Wheezy with mod_wsgi. I have gotten the initial page up and going, and Apache will display the output of the wsgi file that I have in my directory.
However, I have been researching how to deploy a Python application, and I have to admit I find some of it a little confusing. I am coming from a PHP background, where it is literally: install what you need and you are up and running, with PHP processing the way it should.
Is this the same with Python? I can't seem to get anything to process outside of the wsgi file that I have set up; I can't import anything from other files without the server throwing a 500 error. I have looked on Google and Bing to try to find an answer, but I can't seem to find anything, or I don't know whether what I have been looking at is the answer.
I really appreciate any help that you guys can offer.
Thanks in advance! (If I need to post any code, I can do that; I just don't know what you guys would need, if anything, as far as code examples for this...)
Python is different from PHP in that PHP executes your entire program separately for each hit to your website, whereas Python runs "worker processes" that stay resident in memory.
You need some sort of web framework to do this work for you (you could write your own, but using someone else's framework makes it much easier). Flask is an example of a light one; Django is an example of a very heavy one. Pick one and follow that framework's instructions, or look for tutorials for it. Since the frameworks differ, most practical documentation on building web services with Python is focused on a particular framework rather than on the language itself.
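To make that concrete with Flask (the light option), here is about the smallest possible app, together with the kind of .wsgi shim mod_wsgi expects. The paths are assumptions, and the sys.path line is the usual fix for 500 errors on imports from other files:

```python
# myapp.py -- a minimal Flask application.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from Flask!"

# myapp.wsgi -- the file Apache's WSGIScriptAlias points at.
# mod_wsgi looks for a module-level object named "application":
#
#   import sys
#   sys.path.insert(0, "/var/www/myapp")   # make your project importable
#   from myapp import app as application
```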
Nearly any Python web framework will have a development server that you can run locally, so you don't need to worry about deploying yet. When you are ready to deploy, Apache will work, although it's usually easier and better to use Gunicorn or another Python-specific web server, and then, if you need more web server functionality, to set up nginx or Apache as a reverse proxy. Apache is a very heavy application to use for nothing but WSGI functionality. You also have the option of deploying to a PaaS like Heroku (free for development work, paid for production applications), which will handle a lot of the sysadmin work for you.
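With Gunicorn, for example, the deployment step for the app above shrinks to gunicorn myapp:app, which serves it on 127.0.0.1:8000 by default, ready to sit behind an nginx or Apache reverse proxy.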
As an aside, if you're not using virtualenv to set up your Python environment, you should look into it. It will make it much easier to keep track of what you have installed, to install new packages, and to isolate an environment so you can work on multiple projects on the same computer.
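For example: virtualenv venv, then source venv/bin/activate, after which pip install only affects that project's environment and pip freeze shows exactly what it contains.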
I never dug into how server tech like WSGI really worked underneath and thought I had a basic understanding until now...
What's the explanation for this type of behavior? On an Apache2/mod_wsgi/Django setup, after getting the new code onto the dev server and "reloading" it by doing the prescribed touch myapp.wsgi, things started getting weird: on successive browser refreshes I get either the old version of the app (from before pushing the new code) or the new one, randomly! It's as if some threads/processes are still serving the old code while others have loaded the new code from disk. What would be a simple explanation for this, and how can I properly "reload" my app without restarting Apache? And where can I find a simple (better still, graphical/schematic) explanation of how things like WSGI, FCGI, etc. work?
Note: I'm not a devops guy, but I've been forced into wrangling with things like this, and I'm looking for "condensed", "crash course"-type knowledge on this, not the full-fledged documentation for all the components...
You likely aren't using daemon mode. Read:
http://blog.dscpl.com.au/2012/10/why-are-you-using-embedded-mode-of.html
http://code.google.com/p/modwsgi/wiki/ReloadingSourceCode
The Django documentation on mod_wsgi setup wasn't clear enough about what you had to do to use daemon mode. That has been fixed now.
https://docs.djangoproject.com/en/dev/howto/deployment/wsgi/modwsgi/
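The short version of those articles: in embedded mode, each Apache child process reloads the code independently when it next handles a request, which is exactly why refreshes bounce between old and new code. In daemon mode, a single process group owns the app, and touching the .wsgi file restarts just that group, so every request sees the same version. A sketch of the relevant directives (the process group name and paths are assumptions):

```apache
# Sketch of mod_wsgi daemon mode; names and paths are assumptions.
WSGIDaemonProcess myapp processes=2 threads=15 python-path=/var/www/myproject
WSGIProcessGroup myapp
WSGIScriptAlias / /var/www/myproject/myapp.wsgi
```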
I have a Django site that needs to be rebuilt every night. I would like to check out the code from the Git repo and then do things like setting up the virtual environment, downloading the packages, etc. This would involve no manual intervention, as it would be run from cron.
I'm really confused as to what to use for this. Should I write a Python script or a shell script? Are there any tools that assist with this?
Thanks.
So what I'm looking for is CI, and from what I've seen I'll probably end up using Jenkins or Buildbot for it. I've found the docs rather cryptic for someone who's never attempted anything like this before.
Do all CI tools like Buildbot/Jenkins simply run tests and more tests and send you reports, or do they actually set up a working Django environment that you can access through your browser?
You'll need to create some sort of build script that does everything but the Git checkout. I've never used any Python build tools, but perhaps something like SCons would work: http://www.scons.org/.
Once you've created a script, you can use Jenkins to schedule a nightly build and report success/failure: http://jenkins-ci.org/. Jenkins knows how to check out your code, and you can then have it run your script.
There are literally hundreds of different tools to do this. You can write Python scripts to be run from cron, you can write shell scripts, or you can use one of the hundreds of different build tools.
Most Python/Django shops would likely recommend Fabric. This really is a matter of running through everything that needs to be done, making sure you understand it, and scripting it. Do you need to run a test suite before you deploy, to ensure nothing breaks? Do you need to run South database migrations? You really need to think about what needs to be done, and then you just write a Fabric script to do those things.
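As an illustration only (the host, paths, and restart command are assumptions, and this uses the modern Fabric 2 API), the kind of task being described might look like:

```python
# fabfile.py -- rough sketch of a nightly rebuild task.
from fabric import task

@task
def nightly(c):
    with c.cd("/srv/mysite"):                        # assumed project root
        c.run("git pull origin master")
        c.run("venv/bin/pip install -r requirements.txt")
        c.run("venv/bin/python manage.py test")      # run raises on failure,
        c.run("venv/bin/python manage.py migrate")   # so a broken test suite
        c.sudo("systemctl restart myapp")            # stops the deploy here
```

Cron then just runs fab -H deploy@server nightly every night.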
None of this even touches the fact that, overall, what you're asking for is continuous integration, which itself has a whole slew of tools to help manage it.
What you are asking for is Continuous Integration.
There are many CI tools out there, but in the end it boils down to your personal preferences (as always, hopefully) and which one just works for you.
The Django project itself uses buildbot.
If you ask me, I would recommend continuous.io, which works out of the box with Django applications.
You can manually set how often you would like to build your Django project, which is great.
You can, of course, write a shell script that rebuilds your Django project via cron, but you deserve better than that.
I'm looking for a tool to keep track of "what's running where". We have a bunch of servers, and on each of those a bunch of projects. These projects may be running at a specific version (hg tag/commit number) and have their requirements at specific versions as well.
Fabric looks like a great start for doing the actual deployments by automating the ssh part. However, once a deployment is done, there is no overview of what was done.
Before reinventing the wheel I'd like to check here on SO as well (I did my best with Google, but could be looking for the wrong keywords). Is there any such tool already?
(In practice I'm deploying Django projects, but I'm not sure that's relevant for the question; anything that keeps track of pip/virtualenv installs or server state in general should be fine)
many thanks,
Klaas
==========
EDIT FOR TEMP. SOLUTION
==========
For now, we've chosen to simply store this information in a simple key-value store (in our case: the filesystem) that we take great care to back up (in our case: using a DVCS). We keep track of this store with the same deployment tool that we use to do the actual deploys (in our case: Fabric).
Passwords are stored inside a TrueCrypt volume that's stored inside our key-value store.
==========
I will still gladly accept an answer if some kind of open source solution to this problem pops up somewhere. I might share (part of) our solution somewhere myself in the near future.
pip freeze gives you a listing of all installed packages. Bonus: if you redirect the output to a file, you can use it as part of your deployment process to install all those packages (pip can install every package listed in the file in one go).
I see you're already using virtualenv. Good. You can run pip freeze -E myvirtualenv > myproject.reqs to generate a dependency file that doubles as a status report on the Python environment. (Note that newer versions of pip have dropped the -E flag; activate the virtualenv and run a plain pip freeze instead.)
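For example, the round trip is: pip freeze > myproject.reqs inside the activated virtualenv on the reference machine, then pip install -r myproject.reqs inside a fresh virtualenv on the target to reproduce exactly the same package set.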
Perhaps you want something like Opscode Chef.
In their own words:
Chef works by allowing you to write recipes that describe how you want a part of your server (such as Apache, MySQL, or Hadoop) to be configured. These recipes describe a series of resources that should be in a particular state: for example, packages that should be installed, services that should be running, or files that should be written. We then make sure that each resource is properly configured, only taking corrective action when it's necessary. The result is a safe, flexible mechanism for making sure your servers are always running exactly how you want them to be.
EDIT: Note that Chef is not a Python tool; it is a general-purpose configuration tool, written in Ruby. But it is capable of supporting various "cookbooks", including ones for installing and maintaining Python apps.