Packaging Django code for deployment

I'm getting ready to move my Django project from my laptop to my server. What is the recommended way to do this? E.g., is there a Django command that will package everything up (and select the correct settings file for test vs prod servers) and create a zip or tar file that can be moved over to the server? Sort of like Ant for building Java projects.

I recommend using a virtual environment for your Django project. Activating the virtual environment on your server (source bin/activate) reproduces the same setup as on your local machine.
In your project
List all your dependencies in requirements.txt, and keep your environment-specific settings in my_settings.py, separate from Django's settings.py.
On your server
Pull or transfer your code via git (or any other means) and activate the virtual environment.
Run pip install -r requirements.txt and make whatever minor changes are required in my_settings.py.
Take care of your migrations and database setup; you will have to run migrations if you are deploying to the server for the first time.
And that's it, you are up and running.
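A minimal sketch of that flow on the server (the env directory and my_settings names follow this answer's conventions; the settings module path is an assumption):
# after pulling the code onto the server
virtualenv env                      # or: python -m venv env on Python 3
source env/bin/activate
pip install -r requirements.txt
export DJANGO_SETTINGS_MODULE=myproject.my_settings   # hypothetical module path
python manage.py migrate            # first deployment: creates the schema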

Related

Production server saying it is missing modules after database flush with Python

I have an application using Django 1.9 and Python 2.7. I recently flushed my PostgreSQL database on my production server, and now whenever I try to use the application it tells me there are missing modules. I never faced this issue before, so I am curious: when you put your application on a production server, does your virtual environment go with it? If so, does flushing your database have any effect on your virtual environment?
I have been getting past the issues by downloading each module to a third-party directory in my application and including them in my INSTALLED_APPS list in my settings file, but I wouldn't want to continue doing that if there are 100+ modules I need to download.
I also tried to use pip install on my production server, and it said that the command was not found, although I have the latest version of pip installed on my Mac?
I am curious, when you put your application on a production server, does your virtual environment go with it?
Not necessarily, unless you copied the virtualenv folder along with it, which isn't really good practice; you should create the virtualenv on the production server.
If so, does flushing your database have any effect on your virtual environment?
No, the database and the virtualenv are completely separate.
I wouldn't want to continue doing that if there are 100+ modules
Use a requirements.txt file and install them all in one go with pip install -r requirements.txt.
I also tried to use pip install on my production server, and it said that the command was not found
You have to install pip on the production server first.
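For the missing pip command, a rough sketch (package names vary by distribution; these are common ones):
# Debian/Ubuntu
sudo apt-get install python-pip
# CentOS/RHEL (python-pip comes from the EPEL repository)
sudo yum install epel-release && sudo yum install python-pip
# then recreate the environment and install everything in one go
virtualenv env && source env/bin/activate
pip install -r requirements.txt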

Deploying websites in django virtual machine

Sorry, I'm new to this specific topic.
I have a website implemented in Django and AskBot; it also has a database (PostgreSQL). I want to create a deployment package which can be distributed to any customer, such that each customer can have their own server. The deployment package should be platform independent, so it should work on all operating systems.
Can you tell me what tools are available to achieve this?
virtualenv is a really good tool, but I think Vagrant is what you're looking for.
https://www.vagrantup.com/
It should enable you to set up your system easily regardless of the platform; it's free as well, and quite well documented. I'd suggest you give it a look!
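A minimal sketch of the Vagrant workflow (the box name is just an example):
vagrant init ubuntu/trusty64   # writes a Vagrantfile for the chosen base box
vagrant up                     # downloads the box and boots the VM
vagrant ssh                    # log in and provision Django, AskBot, PostgreSQL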
From my point of view, the database should always be created before deployment, and the database connection details must be added to settings.py.
For the application itself, I think virtualenv can be very helpful in these cases, together with a requirements.txt.
You run the application in your virtual environment and then export your dependencies using
pip freeze > requirements.txt
Then on the new server you create the database, insert the configuration in your settings, and install the dependencies:
pip install -r /path/to/requirements.txt
Run migrations, and you are done.
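A sketch of the database-first part with PostgreSQL (the database and user names are placeholders):
sudo -u postgres createuser myapp_user --pwprompt     # prompts for a password
sudo -u postgres createdb myapp_db --owner=myapp_user
# record myapp_db / myapp_user / the password in DATABASES in settings.py,
# then run the migrations as above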

Deploying python site to production server

I have a Django website on a testing server and I am confused about how the deployment procedure should go.
Locally I have these folders:
code
virtualenv
static
static/app/bower_components
node_modules
Currently on git I only have the code folder in there.
My initial thought was to do this on the production server:
git clone repo
pip install
npm install
bower install
collectstatic
But I had this problem that sometimes some components in pip or npm or bower fail to install, and then the production deployment fails.
I was thinking of putting everything (static, bower, npm, etc.) inside git so that I can fetch it all in production.
Is that the right way to do it? I want to know the right way to tackle this problem.
But I had this problem that sometimes some components in pip or npm or bower fail to install, and then the production deployment fails.
There is no solution for this other than to find out why things are failing in production (or a way around would be to not install anything in production, just copy stuff over).
I would caution against the second option because Python virtual environments are not designed to be portable. If you have components such as PIL/Pillow or database drivers, these need system libraries to be installed and compiled against at build time.
Here is what I would recommend, which is in-line with the deployment section in the documentation:
Create an updated requirements file (pip freeze > requirements.txt)
Run collectstatic on your testing environment.
Move the static directory to your frontend/proxy machine, and map it to STATIC_URL. Confirm this works by browsing the static URL (for example: http://example.com/static/images/logo.png)
Clone/copy your codebase to the production server.
Create a blank virtual environment.
Install dependencies with pip install -r requirements.txt
Make sure you run through the deployment checklist, which includes security tips and settings you need to enable for production.
After this point, you can bring up your Django server using your favorite method.
There are many, many guides on deploying django and many are customized for particular environments (for example, AWS automation, Heroku deployment tips, Digital Ocean, etc.) You can browse those for ideas (I usually pick out any automation tips) but be careful adopting one strategy without making sure it works with your particular environment/requirements.
In addition, this might be helpful for some guidelines on deployment.
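Putting those steps together as commands (the repository URL and directory names are placeholders; python manage.py check --deploy requires Django 1.8+):
# on the testing machine
pip freeze > requirements.txt
python manage.py collectstatic --noinput
# on the production server
git clone <repo-url> myproject && cd myproject
virtualenv env && source env/bin/activate
pip install -r requirements.txt
python manage.py check --deploy   # surfaces deployment-checklist warnings
python manage.py migrate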

newrelic python agent issue

I have 3 standalone Python applications, each in its own Python virtual env, running in mod_wsgi mode on the same server. I installed New Relic in the 1st application's virtual environment and it shows up fine on the New Relic GUI page.
I then followed the same steps for the 2nd and 3rd applications in their respective Python virtual envs, but these 2 applications are not showing up on the New Relic GUI application page.
For all 3 applications, the log file has not been updating from the start.
Please help me configure and integrate multiple Python apps hosted in a single server's Python environment.
Below are the steps I have followed.
CentOS
Install the New Relic Python agent in the app's virtual env:
source <virtual path>
pip install newrelic
pip freeze
Generate the config file:
cd /etc/newrelic/
newrelic-admin generate-config <Licence Key> newrelic.ini
Validate the config file:
newrelic-admin validate-config newrelic.ini
Configure the variables in the config file:
logfile = log file name
loglevel = info
app_name = name
Integrate the application's mod_wsgi file with New Relic by adding the following to the wsgi.py file:
vi wsgi.py
import newrelic.agent
newrelic.agent.initialize('/etc/newrelic/newrelic.ini')
The installed newrelic version is newrelic==2.12.0.10.
Thank you.
I know that you posted this a while ago, but the solution I found was to install newrelic (pip install newrelic) outside of the virtual envs.
Anyone facing this issue: just install New Relic outside the virtual envs.
I'm a little unclear on your setup. If you have all three Python apps in the same virtual environment and you're using newrelic.ini for all three, then all three applications will report to the same UI listing. You need to either name the config files differently (newrelic.ini, newrelic2.ini, newrelic3.ini) and reference them likewise, or split the applications into separate virtual environments.
As for the log files not writing, it sounds like you don't have user rights to that directory. You can read more about this in the New Relic docs, here: https://docs.newrelic.com/docs/python/python-agent-logging
The path provided for 'log_file' should be writable to the user that your application runs as. If using Apache/mod_wsgi that would usually be the Apache user, which has restricted access to the filesystem. You might therefore need to create a special directory into which the log file can be placed which is writable to the Apache user. Because the current working directory of an application could be anything, it is recommended that an absolute path and not a relative path be used.
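For example (the directory and the apache user/group are assumptions; match them to your system):
sudo mkdir -p /var/log/newrelic
sudo chown apache:apache /var/log/newrelic   # CentOS Apache user; www-data on Debian
# then point the agent at it in newrelic.ini:
# log_file = /var/log/newrelic/agent.log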
If you need more help, please open a ticket with us at http://support.newrelic.com
It looks like you are using one config file for three applications.
They all have the same app_name.
Therefore, you have one APM displayed in the UI.
Remove the app_name line from newrelic.ini and pass a unique variable NEW_RELIC_APP_NAME to each app.
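A sketch of that approach (the app names are placeholders; both variables are read by the Python agent at startup):
# set per application, e.g. in the script that starts each app
export NEW_RELIC_CONFIG_FILE=/etc/newrelic/newrelic.ini
export NEW_RELIC_APP_NAME="app-one"   # use a unique name for app-two and app-three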

Using git post-receive hook to deploy python application in virtualenv

My goal is to be able to deploy a Django application to one of two environments (DEV or PROD) based on the Git branch that was committed and pushed to a repository. This repository is hosted on the same server as the Django applications are being run on.
Right now, I have two virtualenvs set up. One for each environment. They are identical. I envision them only changing if the requirements.txt is modified in my repository.
I've seen tutorials around the internet that offer deployments via git by hosting the repository directly in the location where the application will be deployed. This doesn't work for my architecture. I'm using RhodeCode to host/manage the repository. I'd like to be able to use a post-receive (or other if it's more appropriate) hook to trigger the update to the appropriate environment.
Something similar to this answer will allow me to narrow down which environment I want to focus on.
When I put the source activate command in an external script (i.e., my hook), the script stops at that command. The virtualenv is started appropriately, but any further actions in the script (e.g., pip install -r requirements.txt or ./manage.py migrate) aren't executed.
My question is: how can I have that hook run the associated virtualenv? Or, if it is already running, update it appropriately with the new requirements.txt, South migrations, and application code?
Is this workflow overly complicated? Theoretically, it should be as simple as a git push to the appropriate branch.
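One way around the source activate problem is to call the virtualenv's binaries by absolute path, so nothing needs to be activated inside the hook at all. A sketch of such a post-receive hook, with every path and branch name a placeholder:
#!/bin/bash
# post-receive: deploy the pushed branch to its matching environment.
read oldrev newrev refname
branch="${refname#refs/heads/}"
case "$branch" in
    master) VENV=/srv/venvs/prod; APP=/srv/apps/prod ;;
    dev)    VENV=/srv/venvs/dev;  APP=/srv/apps/dev  ;;
    *)      exit 0 ;;   # ignore other branches
esac
# Check the pushed code out into the application directory.
GIT_WORK_TREE="$APP" git checkout -f "$branch"
# The virtualenv's own bin/ works without 'source activate'.
"$VENV/bin/pip" install -r "$APP/requirements.txt"
"$VENV/bin/python" "$APP/manage.py" migrate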
