Docker vs old approach (supervisor, git, your project) - python

I've been on Docker for the past few weeks and I can say I love it and I get the idea. But what I can't figure out is how to "transfer" my current set-up to a Docker solution. I guess I'm not the only one, so here is what I mean.
I'm a Python guy, more specifically Django. So I usually have this:
Debian installation
My app on the server (from git repo).
Virtualenv with all the app dependencies
Supervisor that handles Gunicorn that runs my Django app.
The thing is, when I want to upgrade and/or restart the app (I use Fabric for these tasks), I connect to the server, navigate to the app folder, run git pull, and restart the supervisor task that handles Gunicorn, which reloads my app. Boom, done.
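Concretely, the manual routine is just a few commands (the host, paths, and task names here are only examples from my setup):

$ ssh me@myserver
$ cd /srv/myapp
$ git pull
$ supervisorctl restart gunicorn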
But what is the right (better, more Docker-ish) approach to this setup when I use Docker? Should I somehow connect to the Docker container's bash every time I want to upgrade the app and run the upgrade there, or (from what I've seen) should I expose the app in a folder outside the Docker image and run the standard upgrade process?
Hope you get the confusion of an old-school dude. I bet the Docker guys have thought about this.
Cheers!

For development, Docker users will typically mount a folder from their build directory into the container at the same location the Dockerfile would otherwise COPY it to. This allows for rapid development where, at most, you need to bounce the container rather than rebuild the image.
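As a rough sketch (the image name, port, and paths are assumptions, not from the answer): if the Dockerfile would otherwise COPY your code to /srv/app, you mount your working copy over that location and just restart the container after edits:

# mount the working copy over the path the Dockerfile would COPY to
$ docker run -d --name myapp-dev -p 8000:8000 -v "$(pwd)":/srv/app myapp
# after changing code on the host, bounce the container instead of rebuilding
$ docker restart myapp-dev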
For production, you want to include everything in the image and not change it at runtime: only persistent data goes in volumes; your code lives in the image. When you make a change to the code, you build a new image and replace the running container in production.
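A minimal sketch of that cycle (the image and container names are placeholders):

# build a new image containing the updated code
$ docker build -t myapp:v2 .
# replace the running container with one based on the new image
$ docker stop myapp-prod && docker rm myapp-prod
$ docker run -d --name myapp-prod -p 8000:8000 myapp:v2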
Logging into the container and manually updating things is something I only do to test while developing the Dockerfile, not to manage a deployed application.

Related

Setting up docker container so that I can access python packages on ubuntu server

I'm new to using Docker, so I'm either looking for direct help or a link to a relevant guide. I need to train some deep learning models on my school's Linux server, but I can't manually install PyTorch and other Python packages since I don't have root access (sudo). Another student said that he uses Docker and has everything ready to go in his container.
I'm wondering how to wrap up my code and the relevant packages into a container that I can push to the Linux server and then run.
To address your specific problem, the easiest way I've found to get code into a container is to use git:
Start the container in interactive mode, or SSH into it if it's attached to a network.
git clone <your awesome deep learning code>. Keep a requirements.txt file in your git repo. Change directories into your local clone of the repo and run pip install -r requirements.txt.
Run whatever script you need to run your code. Note that you can easily put the pip install command in one of your run scripts; a full session is sketched below.
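Put together, such a session might look like this (the image name, repo URL, and training script are placeholders, not something from the question):

$ docker run -it python:3 /bin/bash
# now inside the container:
$ git clone https://github.com/you/your-awesome-deep-learning-code.git
$ cd your-awesome-deep-learning-code
$ pip install -r requirements.txt
$ python train.py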
It's important to remember that Docker containers are stateless/ephemeral: you should not expect the container or its contents to persist in any durable fashion. This specific issue is addressed by mapping a directory on the host system to a directory in the container.
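That mapping is done with docker's -v flag. For example (the host path is an assumption), anything your code writes to /results inside the container lands in /home/you/results on the host and survives the container:

$ docker run -it -v /home/you/results:/results python:3 /bin/bash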
Side note: I'd first recommend starting with the Docker tutorial. You can easily skip over the installation parts if you are working on a system that already has Docker installed and where you have permission to build, start, and stop containers.
I don't have root access (sudo). Another student said that he uses docker
I would like to point out that Docker itself requires root (sudo) permissions, or at least membership in the docker group.
Instead, I think you should look at using something like Google Colab or JupyterLab. This gives you the added benefit of code that is backed up on a remote server.

How to move a python application to another folder in server without damaging it?

I am running the celery flower application (https://github.com/mher/flower) on my server. I installed this application on my Python LAMP server using the following command:
pip install flower
Now I want to make some modifications to the application, such as its functionality and layout. I want to do this by placing a copy of the application files in my /var/www/html public folder, where all of my other applications are placed, so as not to disturb the original application and not to have to go into the system files like ../lib/......./dist/flower. I have previously developed applications in Django, and in Django we can just put a copy of an application's files in our root applications folder and modify them, and the system reads the new copy of the files instead of the original installation (a pretty clean and clear method). I was hoping to have something like this for this application too. Any suggestions?
Firstly, none of your application files should be in /var/www/html. That's for documents served directly by the webserver, not for code.
To answer your question though: if you want to modify a project, you should fork it on GitHub, make your changes there, and install the forked repo with pip.
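For instance, after forking mher/flower to your own GitHub account, pip can install straight from the fork (the username is a placeholder):

$ pip install git+https://github.com/your-username/flower.git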

Can I use Heroku as a Python server?

My web host does not have Python and I am trying to build a machine learning application. I know that Heroku lets you use Python. I was wondering if I could use Heroku as a Python server? As in, I would let Heroku do all of the Python processing for me and use my regular domain for everything else.
Yes, and it may be a pain at first, but once it is set up I would say Heroku is the easiest platform to continually deploy to. However, it is not intuitive - don't try to just 'take a stab' at it; follow a tutorial and try to understand why Heroku works the way it does.
Following the docs is a good bet; Heroku has great documentation for the most part.
Here's the generalized workflow for deploying to Heroku:
Locally, create your project and use virtualenv to install/manage libraries.
Initialize a git repository in the base dir for your Python project; create a heroku remote (heroku create).
Create a Procfile for Heroku to use when starting gunicorn (or see the options for using waitress/etc); this is used by Heroku to start your process.
cd to your base dir; freeze your virtualenv (pip freeze > requirements.txt) and add/commit requirements.txt. This tells Heroku what packages need to be installed, a requirement for your deployment to work. If you try to run a Python project with required packages missing, the app will be unable to start and Heroku will display an Internal Server Error.
Whenever changes are made, git commit your changes and git push heroku master to push all commits to Heroku. This will cause Heroku to restart the server application with your updated deployment. If there's a failure, you can use heroku rollback to return to your last deployment.
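As a sketch, for a Django project called mysite (the name is a placeholder), the Procfile is a single line and a deployment round-trip looks like this:

$ echo "web: gunicorn mysite.wsgi" > Procfile
$ pip freeze > requirements.txt
$ git add Procfile requirements.txt
$ git commit -m "add Heroku config"
$ git push heroku master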
In reality, it's not a pain in the ass, just particular. Once you know the rules of Heroku, you can manage your deployment with command-line git commands with ease.
One caveat - if deploying Django or Flask applications etc., there are peculiarities to account for; specifically, non-project files (including assets) should NOT be stored on Heroku, as Heroku periodically restarts your 'dyno' (server instance(s)), reloading the whole project from the latest push to Heroku. With Django and Flask, this typically means serving assets/static/media files from an Amazon S3 bucket.
That being said, if you use virtualenv properly, provision your databases, and follow Heroku practices for serving files and committing updates, it is (imho) the absolute best platform out there for ease of use, reliable uptime, and well-oiled rolling deployments.
One last tip - if you are creating a Django app, I'd suggest starting your project out of this boilerplate. I have a custom one I use for new projects and can start and publish a project in minutes.
Yes, you can use Heroku as a Python server. I put a Python Flask server on Heroku, but it was a pain: Heroku seemed to have some difficulties, and there was lots of conflicting advice on getting around them. I eventually got it working; I can't remember which web page had the ultimate answer, but you might look at this one: http://blog.miguelgrinberg.com/post/the-flask-mega-tutorial-part-xviii-deployment-on-the-heroku-cloud
Have you tried building your Python server on Heroku using Twisted?
I don't know if this can help you.
I see the doc 'Getting Started on Heroku with Python' is about Django.
The docs make it clear that Heroku can use Twisted:
Pure Python applications, such as headless processes and evented web frameworks like Twisted, are fully supported.
django-twisted-server runs Twisted inside Django, but it isn't on Heroku.

How to setup Git to deploy python app files into Ubuntu Server?

I set up a new Ubuntu 12.10 server on VPS hosting. I have installed all the required software like Nginx, Python, MySQL, etc. I am configuring this to deploy a Flask + Python app using uWSGI. It's working fine.
But to create a basic app I used PuTTY (from Windows) and created the required app .py files.
Now I want to set up Git so that I can push my code to the required directory, say /var/www/mysite.com/app_data, so that I don't have to use SSH or FileZilla etc. every time I make some changes to my website.
Since I use both Ubuntu & Windows for development of the app, setting up this kind of Git functionality would help me push or change my data easily to my cloud server.
How can I set up this Git functionality in Ubuntu? And how could I access it and deploy data using tools like Git Bash etc.?
Please suggest.
Modified version of innaM:
Concept
Have three repositories
devel - development on your local development machine
central - repository server - like GitHub, Bitbucket or anything else
prod - production server
Then you commit things from devel to central, and as soon as you want to deploy on prod, you ask prod to pull data from central.
"asking" prod server to pull the updates can be managed by cron (then you have to wait a moment) or you may use other means like one shot call of ssh asking to do git pull and possibly restart your app.
Step by step
In more details you can go this way.
Prepare repo on devel
Develop and test the app on your devel server.
Put it into local repository:
$ git init
$ git add *
$ git commit -m "initial commit"
Create repo on central server
E.g. bitbucket provides this description: https://confluence.atlassian.com/display/BITBUCKET/Import+code+from+an+existing+project
Generally, you create the project on Bitbucket, find the url of it and then from your devel repo call:
$ git remote add origin <bitbucket-repo-url>
$ git push origin
Clone central repo to prod server
Log onto your prod server.
Go to /var/www and clone from Bitbucket:
$ cd /var/www
$ git clone <bitbucket-repo-url>
$ cd mysite.com
and you shall have your directory ready.
Trigger publication of updates to prod
There are numerous options. One being a cron task, which would regularly call
$ git pull
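For example, a crontab entry on the prod server that pulls every five minutes (the path is an assumption):

*/5 * * * * cd /var/www/mysite.com && git pull origin master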
If your app needs a restart after an update, you have to ensure the restart happens (you can detect that a pull brought in changes, e.g. with the git log command, which will show new commits after the update, or by checking what git pull reports).
Personally, I would use a "one-shot ssh" call (you asked not to use SSH, but I assume you are really asking for a "simpler" solution, and a one-shot call works more simply than ftp, scp, or other magic).
From your devel machine (assuming you have ssh access there):
$ ssh user@prod.server.com "cd /var/www/mysite.com && git pull origin && myapp restart"
The advantage is that you control the moment the update happens.
Discussion
I use similar workflow.
rsync seems in many cases to serve well enough or better (but be aware of files created at app runtime, and of files in your app which are removed in later versions and shall be removed on the server too).
salt (SaltStack) could serve too, but requires a bit more learning and setup.
I have learned that keeping source code and configuration data in the same repo sometimes makes the situation more difficult (that is why I am working on using salt).
The fab command from Fabric (Python based) may be the best option (in case installation on Windows becomes difficult, look at http://ridingpython.blogspot.cz/2011/07/installing-fabric-on-windows.html).
Create a bare repository on your server.
Configure your local repository to use the repository on the server as a remote.
When working on your local workstation, commit your changes and push them to the repository on your server.
Create a post-receive hook in the server repository that calls "git archive" and thus transfers your files to some other directory on the server.
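A minimal post-receive hook along those lines, placed in hooks/post-receive inside the bare repository and made executable (the target directory is an assumption):

#!/bin/sh
# unpack the pushed master branch into the deployment directory
git archive master | tar -x -C /var/www/mysite.com/app_data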

Installing my Django app on EC2

I'm in the process of launching a Django app on EC2, but have hit a wall trying to install my code on my AMI instance. This is my situation: I have a Bitnami AMI up and running that has Django, Apache, PostgreSQL, and nearly all my dependencies pre-installed, and I have my fully functional Django app running on my local machine, which I have been testing thus far with the Django dev server. After quite a bit of googling, the most common methods of installing an app on an EC2 instance seem to be either using ssh/sftp/scp to drop a tarball in the instance, or creating a repository and importing the code from there. If anyone can tell me the method they prefer and guide me through the process, or provide a link to a good tutorial, it would be hugely appreciated!
# pack the project locally
tar -pczf yourfile.tar.gz MyProject
# copy the tarball to the server
scp -i /home/user/.cert/yourcert.pem yourfile.tar.gz user@serveripaddress:/home/user
# unpack on the server
tar -xzvf /home/user/yourfile.tar.gz
I usually simply scp -r my whole site directory into /home/bitnami on my AMI. I'm using Apache/Nginx/Django with mod_wsgi, so the directory (for example /home/bitnami/djangosites/) gets referred to based on my mod_wsgi path in my Apache cfg file.
In other words, why not just move the whole directory recursively (scp -r) instead of making a tarball etc.?
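That is, something along these lines (the paths and key file are placeholders):

$ scp -r -i yourcert.pem ./mysite bitnami@serveripaddress:/home/bitnami/djangosites/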
Directly copying the folder where your project resides may work. However, you mention that you are using a BitNami image, so it is likely that you are using the BitNami Django Stack Amazon image. BitNami also provides a native version of the BitNami Django Stack, so I would suggest that you first try to deploy your application on top of the native installer and see what exact steps you need to follow. For instance, you may need to install Python dependencies, or if you plan to use Apache in production instead of the Django development server you will need to configure Apache to serve your project. I'm a BitNami developer, and I mention this because making deployment easier on different platforms (including EC2) is one of the goals of BitNami, and as you are already using it you can take advantage of this.
