Trying to make a development instance for a Python Pyramid project

So I have this Python pyramid-based application, and my development workflow has basically just been to upload changed files directly to the production area.
Coming close to launch, and obviously that's not going to work anymore.
I managed to edit the connection strings in development.ini and point the development instance to a secondary database.
Now I just have to figure out how to create another copy of the project somewhere where I can work on things and then make the changes live.
At first, I thought that I could just make a copy of the project directory somewhere else and run it with different arguments pointing to the new location. That didn't work.
Then, I basically set up an entirely new project called myproject-dev. I went through the setup instructions:
I used pcreate, then setup.py develop, and then I copied over development.ini from my project and carefully changed the various references from myproject to myproject-dev.
Then,
initialize_myproject-dev_db /var/www/projects/myproject/development.ini
Finally, I got a nice Pyramid welcome page indicating that everything was working correctly.
I thought at that point I could just blow out everything in the project directory and copy over the main project files, but then I got that feeling in the pit of my stomach when I noticed that a lot of things weren't working, like static URLs.
Apparently, I'm referencing myproject in includes and also static URLs, and who knows where else.
I don't think this idea is going to work, so I've given up for now.
Can anyone give me an idea of how people go about setting up a development instance for a Python Pyramid project?

The first thing you should do, if you haven't already, is put your project under version control. I'd recommend using git.
In addition to the benefits of managing the changes made to the application during development, it will also make it easier to share copies between developers... or with the production deployment. Indeed, production can just be a git clone of the project, just like your development instance.
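A minimal sketch of that flow, assuming a server you can reach over SSH and a git remote you host somewhere (all paths and URLs here are placeholders):
# on the development machine: put the project under version control
cd ~/projects/myproject
git init
git add .
git commit -m "Initial import"
git remote add origin git@example.com:myproject.git
git push -u origin master

# on the production server: production is just another clone
git clone git@example.com:myproject.git /var/www/projects/myproject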
The second thing is that you need to install the project on your Python library path; that is how all the imports and includes are going to work.
I'd recommend creating a virtual environment for this, with either virtualenv or pew, so that your app (and its dependencies) are "isolated" from the rest of your system and other apps.
You probably have a setup.py script in your project. If not, create one. Then install your project with pip install . in production, or pip install -e . in development.
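For example, a development setup along those lines might look like this (the virtualenv location and project path are just illustrative):
# create and activate an isolated environment for the app
virtualenv ~/envs/myproject-dev
source ~/envs/myproject-dev/bin/activate

# install the project in editable mode, so changes in the
# working copy are picked up without reinstalling
cd ~/projects/myproject
pip install -e .

# in production you would instead install a fixed snapshot:
# pip install .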

Here's how I managed my last Pyramid app:
I had both a development.ini and a production.ini, and actually a development.local.ini in addition to the other two: one for local development, one for our "test" system, and one for production. I used git for version control and had a main branch for production deployments. On my prod server I created the virtual environment and so on, then would pull my main branch and run using the production.ini config file. Updates basically involved jumping back into the virtualenv, pulling the latest updates from the repo, and restarting the Pyramid server.
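An update on the prod box then boils down to a few commands. A rough sketch, where the paths are placeholders and the restart commands are an assumption (older pserve versions supported daemon flags; under a process supervisor you would restart that instead):
# enter the production virtualenv
source /var/www/envs/myproject/bin/activate

# pull the latest main branch and reinstall in case dependencies changed
cd /var/www/projects/myproject
git pull origin main
pip install -e .

# restart the Pyramid server
pserve production.ini --stop-daemon
pserve production.ini --daemon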

Related

Aldryn - DjangoCMS install addons not present in "Manage Addons"

I am quite a Django n00b, and figured using Aldryn for my first real Django site would be a good idea!
I have successfully installed and implemented Aldryn News & Blog.
Now I would like to install Aldryn Search, which is not accessible from "Manage Addons" under the Aldryn control panel.
I'm very confused about how to install an addon like Aldryn Search that is not accessible from within "Manage Addons". Should I somehow use "Add custom Addon" and register the package as a new custom addon?
Or should I create a local development environment and somehow install the addon and upload it? (Is there a tutorial for this?)
Thank you!
There are various ways in which to install arbitrary Django packages into an Aldryn project.
The quick, easy way
The easiest, quickest way is simply to place the module(s) you need into the project directory, thus placing them on the Python path. You then need to make sure that your settings.py, urls.py and so on are appropriately configured. Then you can push these changes to Aldryn itself. This is described in Adding a new application to your Aldryn project - the quick and easy way.
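A hedged sketch of that quick-and-easy flow (the package name and file list are made up for illustration):
# copy the package into the project directory, putting it on the Python path
cp -r ~/src/some_django_package ./some_django_package

# after adding the app to INSTALLED_APPS in settings.py and wiring up
# urls.py, push the changes to Aldryn
git add some_django_package settings.py urls.py
git commit -m "Vendor some_django_package into the project"
git push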
The create-an-Addon way
A more involved way to do it, that has benefits for long-term use and re-use, is to turn the package into a private or public Aldryn Addon. This is described in Developing an Addon application for Aldryn.
A middle way
Another way is somewhere between the two. Add the package to the project's requirements.in - you can do this in various ways, for example:
# standard install from PyPI
some-package==1.2.3
# install from an archive
https://example.com/some-package-1.2.3.tar.gz#egg=some-package==1.2.3
# install from a GitHub repository
git+https://github.com/some-org/some-package.git#egg=some-package==1.2.3
You will need to make sure that your settings.py, urls.py and so on are appropriately configured.
Run aldryn project update. This in effect redeploys your project locally, except that, unlike on Aldryn, you need to run any migrations manually:
docker-compose run --rm web python manage.py migrate
Finally, git add, commit and push your changes to your project, and redeploy it on Aldryn.
This method isn't yet documented in the Aldryn support system, but will be soon.
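Put together, the whole middle-way cycle might look roughly like this (the package name is a placeholder; the aldryn and docker-compose commands are the ones quoted above):
# 1. declare the dependency (keep it outside any generated section)
echo "some-package==1.2.3" >> requirements.in

# 2. rebuild the local project with the new requirement
aldryn project update

# 3. run migrations manually - not done for you locally
docker-compose run --rm web python manage.py migrate

# 4. commit and push, then redeploy on Aldryn
git add requirements.in
git commit -m "Add some-package"
git push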
That's a very valid question in my opinion, since add-ons are wrapped into an additional directory that makes the Django app inside invisible to Django's INSTALLED_APPS.
If you add them to addons-dev they are ignored by git.
A possible solution (even if maybe not the cleanest) would be to unignore addons-dev by adding !/addons-dev to the .gitignore in the project's root directory and then add -e /app/addons-dev/aldryn-package-name to requirements.in (outside the section generated/overwritten by Aldryn). That's what aldryn project develop aldryn-package-name does (for the local environment).
Similarly, if you have a git repository that contains the code (like aldryn-search) you would use -e git+https://github.com/aldryn/aldryn-search.git in requirements.in
If you need to apply changes to the addon code, best practice would be to fork the original repository and then check out your fork as per the instructions above.
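As a sketch, the manual version of that workflow could look like this (aldryn-search is used as the example addon; in practice, edit requirements.in by hand so the line stays outside the Aldryn-generated section):
# un-ignore the addons-dev directory so the checkout is tracked
echo '!/addons-dev' >> .gitignore

# check out the addon (or your fork of it) for local development
git clone https://github.com/aldryn/aldryn-search.git addons-dev/aldryn-search

# install it in editable mode
echo '-e /app/addons-dev/aldryn-search' >> requirements.in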

Reusable Django apps + Ansible provisioning

I'm a long-time Django developer and have just started using Ansible, after using Vagrant for the last 18 months. Historically I've created a single VM for development of all my projects, and symlinked the reusable Django apps (Python packages) I create, to the site-packages directory.
I've got a working dev box for my latest Django project, but I can't really make changes to my own reusable apps without having to copy those changes back to a Git repo. Here's my ideal scenario:
- I check out all the packages I need to develop as Git submodules within the site I'm working on.
- I have some way (symlinking or a better method) to tell Ansible to set up the box and install my packages from these Git submodules.
- I run vagrant up or vagrant provision.
- It reads requirements.txt and installs the remaining packages (things like South, Pillow, etc.), but it skips my set of tools because it knows they're already installed.
I hope that makes sense. Basically, imagine I'm developing Django. How do I tell Vagrant (via Ansible, I assume) to find my local copy of Django, rather than the one from PyPI?
Currently the only way I can think of doing this is creating individual symlinks for each of those packages I'm developing, but I'm sure there's a more sensible model.
Thanks!
You should probably think of it slightly differently. You create a Vagrantfile which specifies Ansible as a provisioner. In that Vagrantfile you also specify what playbook to use for your vagrant provision portion.
If your playbooks are written in an idempotent way, running them multiple times will skip steps that already match the desired state.
You should also think about what your desired end-state of a VM should look like and write playbooks to accomplish that. Unless I'm misunderstanding something, all your playbook actions should be happening inside the VM, not directly on your local machine.
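For the submodule part of the question, one approach (an assumption on my part, not the only way) is to have playbook tasks install each submodule in editable mode inside the VM; pip then treats those packages as already satisfied on later runs, provided the checked-out versions meet the pins in requirements.txt. The shell equivalent of what such tasks would do, with /vagrant as Vagrant's default synced folder and the submodule paths made up:
# inside the VM: install your own apps from the synced submodule checkouts
pip install -e /vagrant/submodules/my-reusable-app
pip install -e /vagrant/submodules/my-other-app

# then install everything else; already-satisfied requirements are skipped
pip install -r /vagrant/requirements.txt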

How do you setup Python with virtualenv locally and on a real Linode server (or similar)?

I'm using PyCharm on Windows and would like to gain a better understanding of how to set up my local environment so that it translates as cleanly as possible to my servers on Linode (or any other Linux box for that matter).
I have a physical drive set aside for development work. In my case this is drive Z:.
I will typically create one directory per project. A project being defined as an entire website.
I have currently also opted to have a directory, Z:\virtualenv, where I create my virtual environments. One per project. I suppose multiple projects could share the same virtualenv but I am not sure if this is smart for either development or production.
I've considered the idea of having the per-project virtualenv live inside its corresponding project. This appeals to me because then each project would be monolithic. For example, for a Flask application under PyCharm:
z:\flask_app\
    .git\
    .idea\
    static\
    templates\
    virtualenv\
    main.py
How, then, do you setup the production server given the above?
Let's assume one is using a single machine to host more than one site through virtual hosting, this being one of them:
<VirtualHost *:80>
    ServerAdmin you@example.com
    ServerName example.com
    ServerAlias example.com *.example.com
    DocumentRoot /var/www/example/public_html
    ErrorLog /var/www/example/logs/error.log
    CustomLog /var/www/example/logs/access.log combined
    <Directory /var/www/example>
        Options Indexes FollowSymLinks
        AllowOverride All
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>
Do I set up virtualenv at the global server level? I think the answer is "yes"; I don't see how it could work any other way.
OK, that does mean that the entire file structure under
z:\flask_app
can now be FTP'd into
/var/www/example/public_html
and the site is good to go?
I understand that the db server, db's, tables, etc. need to be set up on the production machine to match. I am simply focusing on the transition of Python with virtualenv from a desktop development environment to an external Linux production box.
I think I also have to use virtualenv at the server level and activate that virtual environment, right? This is where I am a bit fuzzy about things. Most tutorials I've come across cover your local development environment extensively but rarely go into the transition of projects to production servers, their setup, and their ongoing relationship to the development setup.
I will be using a virtual machine with Ubuntu 14.04 LTS to sort this out as I move forward.
I've also considered using 14.04 Desktop for development on a VM in order to have matched environments and get off Windows.
1) A 14.04 Desktop VM just to muck around in and get things right before transferring to scripts and the command line for your server is a great idea.
2) You may happen to love the virtualenvwrapper tool/project. It maps almost exactly to your current workflow, but with some handy conveniences (its whole point). It essentially keeps a central folder of virtualenvs, one per name/project. Its most handy commands are mkproject (create a new folder and virtualenv of the same name) and workon (activate the project of that name).
3) Fortunately given 14.04 isn't too old, it has quite a recent virtualenv already present in its packages, python-virtualenv (1.11.4). I would install this and then use it to create environments on your server to run python projects under, as you suggest.
OK, that does mean that the entire file structure under ... can now be FTP'd into ... and the site is good to go?
No, because you'd be trying to transfer a virtualenv created for a Python on a Windows machine and hoping it would work under a Python on Linux/Ubuntu.
4) To keep a managed list of the packages each project needs installed, list them in a requirements.txt. Then, with a new virtualenv active, you can simply run pip install -r requirements.txt and all the needed packages will be installed for it (see the sketch after this list).
5) For running your apps under the one server, I would suggest running a local WSGI server like Chaussette (perhaps under Circus) or uWSGI that hosts your python WSGI app under a local port / unix socket; then configure Apache or Nginx to reverse proxy all needed dynamic traffic to that server (see this SO answer as an example).
6) Some rudimentary bash scripting know-how can help a lot if you have things repeatably bootstrap-able :) If it gets even more complicated, you can use a managed configuration product like Salt.
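A rough sketch of the server-side bootstrap that points 3) and 4) describe, with the paths as placeholders:
# on the Ubuntu 14.04 server: install virtualenv from the distro packages
sudo apt-get install python-virtualenv

# create an environment for this site and activate it
virtualenv /var/www/example/env
source /var/www/example/env/bin/activate

# install the project's dependencies from its requirements file
pip install -r /var/www/example/app/requirements.txt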
Consider this: your Git repository should contain the source code, data files and other files relating to developing that project. It shouldn't contain the virtualenv as that is a mix of executables (Python, pip), header files and dependencies installed from various sources. It should be possible to wipe a virtualenv and rebuild it without much hassle.
Although you could have source and virtualenv in the same directory, you'd need to update your .gitignore file anyway which indicates that it makes sense to locate all virtualenvs elsewhere. It's not a matter of FTP'ing a whole directory into another: you should separate the notion of updating code from the notion of setting up a virtualenv (which may have different packages installed than your development machine).
For example, you would be developing on a Windows machine and deploy to a Linux machine which could lead to different packages being used. So it's important to separate "project source code" from "dependencies and configuration needed to run".
Likewise, on the production servers you can designate one location for your virtualenvs and all projects (which have their code installed somewhere, in a directory structure of your choice) would activate the virtualenv and then run. There isn't really a wrong or right way to do things, as long as each process has sufficient permissions to do its thing.
Depending on how much automating you wish to put into the deployment, you should consider at least automating a little bit to ensure it works properly. It comes down to setting up the directory structure, permissions, checking out code from git, setting up a virtualenv, installing dependencies and any other remaining configuration to make things work. You can do these tasks with Ansible, for example.
In general it's better not to think of your application as one monolithic thing because things can move to other locations as time goes on. Static files? They may need to go to a content-delivery network someday. Database installation? May need to be moved to another machine someday. And so on.

How to setup django application for production and open source, with one repository

I have a Python/Django project that I've set up for development and production using git revision control. I have three settings.py files:
- settings.py (dummy variables for the potential open source project)
- settings_production.py (production variables)
- settings_local.py (overrides settings just for my local environment; this file is not tracked by git)
I use this method, which works great:
try:
    from settings_production import *
except ImportError as e:
    print 'Unable to load settings_production.py:', e

try:
    from settings_local import *
except ImportError as e:
    print 'Unable to load settings_local.py:', e
HOWEVER, I want this to be an open source project. I've set up two git remotes, one called 'heroku-production' and one called 'github-opensource'. How can I set it up so that 'heroku-production' includes settings_production.py while 'github-opensource' doesn't, so that I can keep those settings private?
Help! I've looked at most of the resources over the internets, but they don't seem to address this use case. Is this the right way? Is there a better approach?
The dream would be to be able to push my local environment to either heroku-production or github-opensource without having to mess with the settings files.
Note: I've looked at the setup where you use environment variables or don't track the production settings, but that feels overly complicated. I like to see everything in front of me in my local setup. See this method.
I've also looked through all these methods, and they don't quite seem to fit the bill.
There's a very similar question here. One of the answers suggests git submodules, which I would say are the easiest way to go about this. This is a problem for your VCS, not your Python code.
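A hedged illustration of the submodule idea: keep the private settings in a separate private repository and pull it in as a submodule, so the open-source remote only ever sees a reference to it, never its contents (the repository URL and paths are made up):
# add the private settings repo as a submodule of the project
git submodule add git@github.com:you/myproject-private-settings.git private_settings

# on any checkout that is allowed to see the private settings
git submodule update --init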
I think using environment variables, as described on the Two Scoops of Django book, is the best way to do this.
I'm following this approach and I have an application running out of a private GitHub repository in production (with an average of half a million page views per month), staging and two development environments and I use a directory structure like this:
MyProject/
    settings/
        __init__.py
        base.py
        production.py
        staging.py
        development_1.py
        development_2.py
I keep everything that's common to all the environments in base.py and then make the appropriate changes in production.py, staging.py, development_1.py, or development_2.py.
My deployment process for production includes virtualenv, Fabric, upstart, a bash script (used by upstart), gunicorn and Nginx. I have a slightly modified version of the bash script I use with upstart to run the test server; it is something like this:
#!/bin/bash -e
# starts the development server using environment variables and django-admin.py
PROJECTDIR=/home/user/project
PROJECTENV=/home/user/.virtualenvs/project_virtualenv
source $PROJECTENV/bin/activate
cd $PROJECTDIR
export LC_ALL="en_US.UTF-8"
export HOME="/home/user"
export DATABASES_DEFAULT_NAME_DEVELOPMENT="xxxx"
export DATABASES_DEFAULT_USER_DEVELOPMENT="xxxxx"
export DATABASES_DEFAULT_PASSWORD_DEVELOPMENT="xxx"
export DATABASES_DEFAULT_HOST_DEVELOPMENT="127.0.0.1"
export DATABASES_DEFAULT_PORT_DEVELOPMENT="5432"
export REDIS_HOST_DEVELOPMENT="127.0.0.1:6379"
django-admin.py runserver --pythonpath=`pwd` --settings=MyProject.settings.development_1 0.0.0.0:8006
Notice this is not the complete story and I'm simplifying to make my point. I have some extra Python code in base.py that takes the values from these environment variables too.
Play with this and make sure to check the relevant chapter in Two Scoops of Django. I was also using the import approach you mentioned, but having settings out of the repository wasn't easy to manage, and I made the switch a few months ago; it's helped me a lot.

Pyramid default page when deploying to production server

I've managed to deploy to a production site, running on Apache + mod_wsgi, Python 3.3 + Pyramid 1.4.
Right now, it's showing the Pyramid default page.
I was messing around with the myapp folder; even when I removed __init__.py and restarted Apache, it still showed the default Pyramid page. Why is this so?
For some reason which I don't understand, when using install instead of develop, another folder (build) gets created. I've tried editing the template.pt file in build as well as the one in the template folder, and after restarting Apache it still shows the default Pyramid page that comes with a newly set-up project.
I don't know if this is the right way of doing it, but it works for me. Instead of using install as detailed in http://docs.pylonsproject.org/projects/pyramid/en/latest/tutorials/modwsgi/index.html, step 6:
$ ../bin/python setup.py install
I used develop, edited the template.pt in the template folder, restarted Apache, and the site now reflects the changes.
install bundles your app and will not include static files unless you have a proper MANIFEST.in. develop is usually a better way to deploy unless you're trying to make your app redistributable as an open project.
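Concretely, the difference in the deploy step is just this (the directory layout follows the tutorial quoted above; the Apache restart command is an assumption for your system):
# instead of bundling a snapshot into site-packages:
#   ../bin/python setup.py install
# install the checkout itself, so edits to templates are picked up in place
../bin/python setup.py develop

# then restart Apache to reload the WSGI app
sudo service apache2 restart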
