I am quite a Django n00b, and figured using Aldryn for my first real django site would be a good idea!
I have successfully installed and implemented Aldryn News & Blog.
Now I would like to install Aldryn Search, which is not accessible from "Manage Addons" in the Aldryn control panel.
I'm very confused about how to install an addon like Aldryn Search that isn't accessible from within "Manage Addons". Should I somehow use "Add custom Addon" and register the package as a new custom addon?
Or should I create a local development environment, somehow install the addon there, and upload it? (Is there a tutorial for this?)
Thank you!
There are various ways in which to install arbitrary Django packages into an Aldryn project.
The quick, easy way
The easiest, quickest way is simply to place the module(s) you need into the project directory, thus placing them on the Python path. You then need to make sure that your settings.py, urls.py and so on are appropriately configured. Then you can push these changes to Aldryn itself. This is described in Adding a new application to your Aldryn project - the quick and easy way.
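For example (the package name and the git remote name below are just placeholders), the whole round trip might look like this:
# copy the package into the project so it sits on the Python path
cp -r ~/src/some_package ./some_package
# add 'some_package' to INSTALLED_APPS in settings.py, and include its
# URLconf in urls.py if it provides one
git add some_package settings.py urls.py
git commit -m "Add some_package to the project"
git push origin master    # then redeploy from the Aldryn control panel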
The create-an-Addon way
A more involved way to do it, that has benefits for long-term use and re-use, is to turn the package into a private or public Aldryn Addon. This is described in Developing an Addon application for Aldryn.
A middle way
Another way is somewhere between the two. Add the package to the project's requirements.in - you can do this in various ways, for example:
# standard install from PyPI
some-package==1.2.3
# install from an archive
https://example.com/some-package-1.2.3.tar.gz#egg=some-package==1.2.3
# install from a GitHub repository
git+https://github.com/some-org/some-package.git#egg=some-package==1.2.3
You will need to make sure that your settings.py, urls.py and so on are appropriately configured.
Run aldryn project update. This in effect redeploys your project locally, except that, unlike on Aldryn, you need to run any migrations manually:
docker-compose run --rm web python manage.py migrate
Finally, git add, commit and push your changes to your project, and redeploy it on Aldryn.
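Putting the middle way together, the local cycle looks roughly like this (the package name in the commit message is just an example):
# after editing requirements.in, settings.py and urls.py
aldryn project update
docker-compose run --rm web python manage.py migrate
git add requirements.in settings.py urls.py
git commit -m "Add some-package"
git push
# then redeploy the project on Aldryn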
This method isn't yet documented in the Aldryn support system, but will be soon.
That's a very valid question in my opinion, since add-ons are wrapped in an additional directory that makes the Django app inside invisible to Django's INSTALLED_APPS.
If you add them to addons-dev they are ignored by git.
A possible solution (even if perhaps not the cleanest) would be to unignore addons-dev by adding !/addons-dev to the .gitignore in the project's root directory and then adding -e /app/addons-dev/aldryn-package-name to requirements.in (outside the section generated/overwritten by Aldryn). That's what aldryn project develop aldryn-package-name does (for the local environment).
Similarly, if you have a git repository that contains the code (like aldryn-search) you would use -e git+https://github.com/aldryn/aldryn-search.git in requirements.in
In case you need to apply changes to the addon code, best practice would be to fork the original repository and then check out your fork as per the instructions above.
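As a concrete sketch (aldryn-package-name is a placeholder), the two edits look like this:
# .gitignore - re-include the addons-dev directory
!/addons-dev
# requirements.in - outside the section generated/overwritten by Aldryn
-e /app/addons-dev/aldryn-package-name
# or, as an editable install straight from a git repository / your fork
-e git+https://github.com/aldryn/aldryn-search.git#egg=aldryn-search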
Related
So I have this Python pyramid-based application, and my development workflow has basically just been to upload changed files directly to the production area.
Coming close to launch, and obviously that's not going to work anymore.
I managed to edit the connection strings in development.ini and point the development instance to a secondary database.
Now I just have to figure out how to create another copy of the project somewhere where I can work on things and then make the changes live.
At first, I thought that I could just make a copy of the project directory somewhere else and run it with different arguments pointing to the new location. That didn't work.
Then, I basically set up an entirely new project called myproject-dev. I went through the setup instructions:
I used pcreate, and then setup.py develop, and then I copied over my development.ini from my project and carefully edited the various references to myproject-dev instead of myproject.
Then,
initialize_myproject-dev_db /var/www/projects/myproject/development.ini
Finally, I get a nice Pyramid welcome page telling me that everything is working correctly.
I thought at that point I could just blow out everything in the project directory and copy over the main project files, but then I got that feeling in the pit of my stomach when I noticed that a lot of things weren't working, like static URLs.
Apparently, I'm referencing myproject in includes and also static URLs, and who knows where else.
I don't think this idea is going to work, so I've given up for now.
Can anyone give me an idea of how people go about setting up a development instance for a Python pyramid project?
The first thing you should do, if you haven't already, is put your project under version control. I'd recommend using git.
In addition to the benefits of managing the changes made to the application during development, it will also make it easier to share copies between developers... or with the production deployment. Indeed, production can just be a git clone of the project, just like your development instance.
The second thing is that you need to install the project on your Python library path. This is how all the imports and includes are going to work.
I'd recommend creating a virtual environment for this, with either virtualenv or pew, so that your app (and its dependencies) are "isolated" from the rest of your system and other apps.
You probably have a setup.py script in your project. If not, create one. Then install your project with pip install . in production, or pip install -e . in development.
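For instance (paths and names are purely illustrative), setting this up on a fresh machine might look like:
# create and activate an isolated environment for the app
virtualenv ~/envs/myproject
source ~/envs/myproject/bin/activate
# install the project itself and the dependencies declared in setup.py
cd ~/projects/myproject
pip install -e .    # development (editable)
# pip install .     # production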
Here's how I managed my last Pyramid app:
I had both a development.ini and a production.ini. I actually had a development.local.ini in addition to the other two - one for local development, one for our "test" system, and one for production. I used git for version control, and had a main branch for production deployments. On my prod server I created the virtual environment, etc., then would pull my main branch and run using the production.ini config file. Updates basically involved jumping back into the virtualenv and pulling latest updates from the repo, then restarting the pyramid server.
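In practice a production update boiled down to a few commands along these lines (the paths and names are examples, not a prescription):
# on the prod server
source /srv/envs/myapp/bin/activate
cd /srv/myapp
git pull origin main
pip install -e .          # pick up any dependency changes
# restart the Pyramid server, e.g. if running it directly:
pserve production.ini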
I have a Django website on a testing server and I am confused about how the deployment procedure should go.
Locally I have these folders:
code
virtualenv
static
static/app/bower_components
node_modules
Currently on git I only have the code folder in there.
My initial thought was to do this on the production server:
git clone repo
pip install
npm install
bower install
collectstatic
But I had this problem where sometimes some components in pip or npm or bower fail to install, and then the production deployment fails.
I was thinking of putting everything (static, bower, npm etc.) inside git so that I can fetch it all in production.
Is that the right way to do it? I want to know the right way to tackle this problem.
But I had this problem where sometimes some components in pip or npm or
bower fail to install, and then the production deployment fails.
There is no solution for this other than to find out why things are failing in production (or a way around would be to not install anything in production, just copy stuff over).
I would caution against the second option because Python virtual environments are not designed to be portable. If you have components such as PIL/Pillow or database drivers, these need system libraries to be installed and compiled against at build time.
Here is what I would recommend, which is in-line with the deployment section in the documentation:
Create an updated requirements file (pip freeze > requirements.txt)
Run collectstatic on your testing environment.
Move the static directory to your frontend/proxy machine, and map it to STATIC_URL. Confirm this works by browsing the static URL (for example: http://example.com/static/images/logo.png)
Clone/copy your codebase to the production server.
Create a blank virtual environment.
Install dependencies with pip install -r requirements.txt
Make sure you run through the deployment checklist, which includes security tips and settings you need to enable for production.
After this point, you can bring up your django server using your favorite method.
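To make the steps above concrete, here is a rough end-to-end sketch (the repo URL, hostnames and paths are made up):
# on the testing machine
pip freeze > requirements.txt
python manage.py collectstatic --noinput
# copy the collected static/ directory to the frontend/proxy machine that serves STATIC_URL

# on the production server
git clone https://example.com/your-repo.git myproject && cd myproject
virtualenv venv && source venv/bin/activate
pip install -r requirements.txt
python manage.py migrate
# then bring the server up with your favourite method (gunicorn, uWSGI, ...)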
There are many, many guides on deploying django and many are customized for particular environments (for example, AWS automation, Heroku deployment tips, Digital Ocean, etc.) You can browse those for ideas (I usually pick out any automation tips) but be careful adopting one strategy without making sure it works with your particular environment/requirements.
In addition this might be helpful for some guidelines on deployment.
I'm a long-time Django developer and have just started using Ansible, after using Vagrant for the last 18 months. Historically I've created a single VM for development of all my projects, and symlinked the reusable Django apps (Python packages) I create, to the site-packages directory.
I've got a working dev box for my latest Django project, but I can't really make changes to my own reusable apps without having to copy those changes back to a Git repo. Here's my ideal scenario:
I checkout all the packages I need to develop as Git submodules within the site I'm working on
I have some way (symlinking or a better method) to tell Ansible to setup the box and install my packages from these Git submodules
I run vagrant up or vagrant provision
It reads requirements.txt and installs the remaining packages (things like South, Pillow, etc), but it skips my set of tools because it knows they're already installed
I hope that makes sense. Basically, imagine I'm developing Django. How do I tell Vagrant (via Ansible I assume) to find my local copy of Django, rather than the one from PyPi?
Currently the only way I can think of doing this is creating individual symlinks for each of those packages I'm developing, but I'm sure there's a more sensible model.
Thanks!
You should probably think of it slightly differently. You create a Vagrantfile which specifies Ansible as a provisioner. In that Vagrantfile you also specify what playbook to use for your vagrant provision run.
If your playbooks are written in an idempotent way, running them multiple times will skip steps that already match the desired state.
You should also think about what your desired end state of a VM should look like and write playbooks to accomplish that. Unless I'm misunderstanding something, all your playbook actions should be happening inside the VM, not directly on your local machine.
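As a rough illustration (all the paths and names here are invented), the tasks such a playbook runs inside the VM might be equivalent to:
# create the virtualenv only if it doesn't already exist (idempotent)
test -d /home/vagrant/venv || virtualenv /home/vagrant/venv
# install the third-party requirements from the synced project folder
/home/vagrant/venv/bin/pip install -r /vagrant/requirements.txt
# install your own reusable apps from their git submodules in editable mode,
# so edits made on the host are picked up immediately inside the VM
/home/vagrant/venv/bin/pip install -e /vagrant/src/my-reusable-app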
My goal is to be able to deploy a Django application to one of two environments (DEV or PROD) based on the Git branch that was committed and pushed to a repository. This repository is hosted on the same server as the Django applications are being run on.
Right now, I have two virtualenvs set up. One for each environment. They are identical. I envision them only changing if the requirements.txt is modified in my repository.
I've seen tutorials around the internet that offer deployments via git by hosting the repository directly in the location where the application will be deployed. This doesn't work for my architecture. I'm using RhodeCode to host/manage the repository. I'd like to be able to use a post-receive (or other if it's more appropriate) hook to trigger the update to the appropriate environment.
Something similar to this answer will allow me to narrow down which environment I want to focus on.
When I put the source activate command in an external script (i.e. my hook), the script stops at that command. The virtualenv is activated appropriately, but any further actions in the script (e.g. pip install -r requirements.txt or ./manage.py migrate) aren't executed.
My question is: how can I have that hook run inside the associated virtualenv? Or, if it is already running, update it appropriately with the new requirements.txt, South migrations, and application code?
Is this workflow overly complicated? Theoretically, it should be as simple as a git push to the appropriate branch.
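For reference, this is roughly the kind of hook I have been attempting (paths and names are placeholders):
#!/bin/bash
# post-receive hook (sketch): deploy the dev branch into the DEV environment
source /srv/envs/dev/bin/activate    # the script seems to stop at this line
pip install -r /srv/apps/dev/requirements.txt
cd /srv/apps/dev && ./manage.py migrate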
I'm currently developing several websites on Django, which require several Django apps. Let's say I have two Django projects: web1 and web2 (the websites, each with its own git repo). Both web1 and web2 have a different list of installed apps, but happen to both use one (or more) application(s) developed by me, say "MyApp" (which also has its own git repo). My questions are:
What is the best way to decouple MyApp from any particular website? What I want is to develop MyApp independently, but have each website use the latest version of the app (if it has it installed, of course). I have two "proposed" solutions: use symlinks on each website to a "master" MyApp folder, or use git to push from MyApp to each website's repo.
How to deploy with this setup? Right now, I can push the git repo of web1 and web2 to a remote repo in my shared hosting account, and it works like a charm. Will this scale adequately?
I think I have the general idea working in my head, but I'm not sure about the specifics. Won't this create a nested git repo issue? How does git deal with symlinks, specifically if the symlink destination has a .git folder in it?
The way I work:
Each website has its own git repo and each app has its own repo. Each website also has its own virtualenv and requirements.txt. Even though 2 websites may share the most recent version of MyApp right now, they may not in the future (maybe you haven't gotten one of the websites up to date on some API changes).
If you really must have just one version of MyApp, you could install it at the system level and then symlink it into the virtualenv for each project.
For development on a local machine (not production) I do it a little differently. I symlink the app's project folder into a "src" folder in the virtualenv of the website and then do a python setup.py develop into the virtualenv, so that the newest changes are always used on the website "in real time".
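Roughly, that local setup looks like this (the paths are just examples):
# inside the website's virtualenv
mkdir -p $VIRTUAL_ENV/src
ln -s ~/dev/myapp $VIRTUAL_ENV/src/myapp    # symlink the app's working copy
cd $VIRTUAL_ENV/src/myapp
python setup.py develop    # install it into the virtualenv in development mode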