pyramid default page when deploying to production server - python

I've managed to deploy to a production site, running on Apache + mod_wsgi, python3.3 + pyramid 1.4.
Right now, it's showing the pyramid default page.
I was messing around with the myapp folder; even after I removed __init__.py and restarted Apache, it is still showing the default Pyramid page. Why is this so?
For some reason I don't understand, when using install instead of develop, another
folder (build) is created. I've tried editing the template.pt file in build as well as
the one in the template folder, restarted Apache, and it is still showing the default Pyramid page that comes with a newly created project.
I don't know if this is the right way of doing it but it works for me. Instead of using install as detailed in http://docs.pylonsproject.org/projects/pyramid/en/latest/tutorials/modwsgi/index.html, step 6:
$ ../bin/python setup.py install
I used develop, edited the template.pt in the template folder, restarted Apache, and the site now reflects the changes.
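For reference, the develop counterpart of the install command above (run the same way, with the virtualenv's python) is:
$ ../bin/python setup.py develop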

install bundles your app, and will not include static files unless you have a proper MANIFEST. develop is usually a better way to deploy unless you're trying to make your app redistributable as an open project.
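As a rough sketch (not part of the original answer; the names are placeholders), a setup.py that lets install bundle templates and static files declared in MANIFEST.in could look like this:
from setuptools import setup, find_packages

setup(
    name="myapp",                  # placeholder package name
    version="0.1",
    packages=find_packages(),
    include_package_data=True,     # include package files listed in MANIFEST.in
    zip_safe=False,
)
The MANIFEST.in would then contain lines such as recursive-include myapp/templates * and recursive-include myapp/static *.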

Related

Host Django application in Lightsail's built-in Apache server

I want to have a production-ready Django app with Lightsail, and I'm following two tutorials to achieve this:
Deploy Django-based application onto Amazon Lightsail
Deploy A Django Project
From the Bitnami article I can see that the AWS documentation follows its Approach B: Self-Contained Bitnami Installations.
According to:
AWS's documentation, my blocker appears in 5. Host the application using Apache, step g.
Bitnami's documentation, where it says
On Linux, you can run the application with mod_wsgi in daemon mode.
Add the following code in
/opt/bitnami/apps/django/django_projects/PROJECT/conf/httpd-app.conf:
The blocker relates to the code I'm being asked to add, in particular the final part that has
Alias /tutorial/static "/opt/bitnami/apps/django/lib/python3.7/site-packages/Django-2.2.9-py3.7.egg/django/contrib/admin/static"
WSGIScriptAlias /tutorial '/opt/bitnami/apps/django/django_projects/tutorial/tutorial/wsgi.py'
More specifically, /home/bitnami/apps/django/. In /home/bitnami/ I can only see the following folders:
bitnami_application_password
bitnami_credentials
htdocs
stack
and of those, the one that most closely resembles /opt/bitnami/apps/ is /home/bitnami/stack/. The thing is, inside that particular folder there's no django folder - at least as far as I can tell (I already checked inside some of its subfolders, like the python one).
The workaround for me at this stage is to move to a different approach, Approach A: Bitnami Installations Using System Packages (which I've done and managed to make work, as written up in this blog post), but I'd like to get it working using Approach B, hence this question.
The problem here is in the paths used for both the project and Django.
In my case, projects are under /home/bitnami/projects/ where I created a Django project named tutorial.
Also, if I run the command
python -c "
import sys
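# drop the first sys.path entry (the current directory) so a local 'django' folder does not shadow the installed package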
sys.path = sys.path[1:]
import django
print(django.__path__)"
it will print the location where Django is installed:
['/opt/bitnami/python/lib/python3.8/site-packages/django']
So the httpd-app.conf should instead end with:
Alias /tutorial/static "/opt/bitnami/python/lib/python3.8/site-packages/django/contrib/admin/static"
WSGIScriptAlias /tutorial '/home/bitnami/projects/tutorial/tutorial/wsgi.py'
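As a quick sanity check (just a sketch reusing the paths above), you can confirm that both targets actually exist before restarting Apache:
import os

# paths as used in the corrected httpd-app.conf above
static_dir = "/opt/bitnami/python/lib/python3.8/site-packages/django/contrib/admin/static"
wsgi_file = "/home/bitnami/projects/tutorial/tutorial/wsgi.py"

print("admin static dir exists:", os.path.isdir(static_dir))
print("wsgi.py exists:", os.path.isfile(wsgi_file))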

Trying to make a development instance for a Python pyramid project

So I have this Python pyramid-based application, and my development workflow has basically just been to upload changed files directly to the production area.
Coming close to launch, and obviously that's not going to work anymore.
I managed to edit the connection strings and development.ini and point the development instance to a secondary database.
Now I just have to figure out how to create another copy of the project somewhere where I can work on things and then make the changes live.
At first, I thought that I could just make a copy of the project directory somewhere else and run it with different arguments pointing to the new location. That didn't work.
Then, I basically set up an entirely new project called myproject-dev. I went through the setup instructions:
I used pcreate, and then setup.py develop, and then I copied over my development.ini from my project and carefully edited the various references to myproject-dev instead of myproject.
Then,
initialize_myproject-dev_db /var/www/projects/myproject/development.ini
Finally, I get a nice Pyramid welcome page telling me that everything is working correctly.
I thought at that point I could just blow out everything in the project directory and copy over the main project files, but then I got that feeling in the pit of my stomach when I noticed that a lot of things weren't working, like static URLs.
Apparently, I'm referencing myproject in includes and also static URLs, and who knows where else.
I don't think this idea is going to work, so I've given up for now.
Can anyone give me an idea of how people go about setting up a development instance for a Python pyramid project?
The first thing you should do, if you haven't already, is put your project under version control. I'd recommend using git.
In addition to the benefits of managing the changes made to the application during development, it will also make it easier to share copies between developers... or with the production deployment. Indeed, production can just be a git clone of the project, just like your development instance.
The second thing is you need to install the project in your Python library path. This is how all the imports and includes are going to work.
I'd recommend creating a virtual environment for this, with either virtualenv or pew, so that your app (and its dependencies) are "isolated" from the rest of your system and other apps.
You probably have a setup.py script in your project. If not, create one. Then install your project with pip install . in production, or pip install -e . in development.
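If you do need to create one, a minimal sketch (the names below are placeholders, not taken from the question) might look like:
from setuptools import setup, find_packages

# minimal setup.py sketch for a Pyramid app; "myproject" is a placeholder name
requires = [
    "pyramid",
    "waitress",
]

setup(
    name="myproject",
    version="0.1",
    packages=find_packages(),
    include_package_data=True,
    install_requires=requires,
    entry_points={
        "paste.app_factory": [
            "main = myproject:main",  # assumes a main() app factory in myproject/__init__.py
        ],
    },
)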
Here's how I managed my last Pyramid app:
I had both a development.ini and a production.ini. I actually had a development.local.ini in addition to the other two - one for local development, one for our "test" system, and one for production. I used git for version control, and had a main branch for production deployments. On my prod server I created the virtual environment, etc., then would pull my main branch and run using the production.ini config file. Updates basically involved jumping back into the virtualenv and pulling latest updates from the repo, then restarting the pyramid server.

Deploy Django project using wsgi and virtualenv on shared webhosting server without root access

I have a Django project which I would like to run on my shared webspace (1und1 Webspace) running on linux. I don't have root access and therefore can not edit apache's httpd.conf or install software system wide.
What I did so far:
installed sqlite locally since it is not available on the server
installed Python 3.5.1 in ~/.localpython
installed virtualenv for my local python
created a virtual environment in ~/ve_tc_lb
installed Django and Pillow in my virtual environment
cloned my django project from git server
After these steps, I'm able to run python manage.py runserver in my project directory and it seems to be running (I can access the login screen using lynx on my local machine).
I read many postings on how to configure fastCGI environments, but since I'm using Django 1.9.1, I'm depending on WSGI.
The shared web server is apache.
I can create a new directory in my home with a sample hello.py and it is working when I enter the url, but it is (of course) using the python provided by the server and not my local installation.
When I change the first line indicating which python version to use to my virtual environment (#!/path/to/home/ve_tc_lb/bin/python), it seems to use the correct version in the virtual environment. Since I'm using different systems for developing and deployment, I'm not sure whether it is a good idea to e.g. add such a line in my djangoproject/wsgi.py.
Update 2016-06-02
A few more things I tried:
I learned that I don't have access to the apache error logs
read a lot about mod_wsgi and django in various sources which I just want to share here in case someone needs them in the future:
modwsgi - IntegrationWithDjango.wiki
debug mod_wsgi installation (only applicable if you are root)
mod_wsgi configuration guide
I followed the wsgi test script installation here - but the wsgi file is just displayed in my browser instead of being executed.
All in all it seems like my provider 1und1 did not install wsgi extensions (even though the support told me a week ago it would be installed)
Update 2016-06-12: I got a reply from support (after a week or so :-S ) confirming that they don't have mod_wsgi but wsgiref...
So I'm a bit stuck here - which steps should I do next?
I'll update the question regularly based on comments and remarks. Any help is appreciated.
Since your Apache is shared, I don't expect you can change the httpd.conf, so use your approach instead. My suggestion is:
If you have multiple servers you will deploy your project (e.g. testing, staging, production), then do the following steps for each deploy target.
On each server, create a true wsgi.py file which you will never put in your versioning system, pretty much like you would do with a local_settings.py file. This file will be named wsgi.py since most likely you cannot edit the Apache settings (because the server is shared) and that name will be expected for your wsgi file.
The content for the file will be:
#!/path/to/your/virtualenv/python
from my_true_wsgi import *
This will be different for each deploy server, but the difference will most likely be only in the shebang line locating the proper Python interpreter.
You will have a file named my_true_wsgi to match the import in the code above. That file will be in your versioning system, unlike the wsgi.py file. Its contents are the usual contents of wsgi.py in any regular Django project; you are just not using that name directly.
With this solution you can have several different wsgi files with no conflict on shebangs.
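For completeness, the contents of my_true_wsgi would just be the standard Django WSGI module, something like the following sketch, where mysite is a placeholder for your actual project name:
import os

from django.core.wsgi import get_wsgi_application

# "mysite" is a placeholder; point this at your real settings module
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings")
application = get_wsgi_application()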
You'll have to use a webhost that supports Django. See https://code.djangoproject.com/wiki/DjangoFriendlyWebHosts. Personally, I've used WebFaction and was quite happy with it; their support was great and customer service very responsive.

Aldryn - DjangoCMS install addons not present in "Manage Addons"

I am quite a Django n00b, and figured using Aldryn for my first real django site would be a good idea!
I have successfully installed and implemented Aldryn News & Blog.
Now I would like to install Aldryn Search that is not accessible from the "Manage Addons" under the Aldryn control panel.
I'm very confused about how to install an addon like Aldryn Search that is not accessible from within "Manage Addons". Should I somehow use "Add custom Addon" and register the package as a new custom addon?
Or should I create a local development environment and somehow install the addon and upload it? (Does a tutorial exist for this?)
Thank you!
There are various ways in which to install arbitrary Django packages into an Aldryn project.
The quick, easy way
The easiest, quickest way is simply to place the module(s) you need into the project directory, thus placing them on the Python path. You need then to make sure that your settings.py, urls.py and so on are appropriately configured. Then you can push these changes to Aldryn itself. This is described in Adding a new application to your Aldryn project - the quick and easy way.
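For example, if the package you drop in is a Django app (some_package below is a placeholder), the usual wiring is roughly:
# settings.py - register the dropped-in app
INSTALLED_APPS = [
    # ... existing apps ...
    "some_package",
]

# urls.py - include the app's URLs, if it provides any
from django.conf.urls import include, url  # on newer Django versions, use django.urls instead

urlpatterns = [
    # ... existing patterns ...
    url(r"^some-package/", include("some_package.urls")),
]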
The create-an-Addon way
A more involved way to do it, that has benefits for long-term use and re-use, is to turn the package into a private or public Aldryn Addon. This is described in Developing an Addon application for Aldryn.
A middle way
Another way is somewhere between the two. Add the package to the project's requirements.in - you can do this in various ways, for example:
# standard install from PyPI
some-package==1.2.3
# install from an archive
https://example.com/some-package1.2.3.tar.gz#egg=some-package==1.2.3
# install from a GitHub repository
git+http://git@github.com/some-org/some-package.git#egg=some-package==1.2.3
You will need to make sure that your settings.py, urls.py and so on are appropriately configured.
Run aldryn project update. This in effect redeploys your project locally, except for:
docker-compose run --rm web python manage.py migrate - you need to run any migrations manually, unlike on Aldryn.
Finally, git add, commit and push your changes to your project, and redeploy it on Aldryn.
This method isn't yet documented in the Aldryn support system, but will be soon.
That's a very valid question in my opinion since add-ons are wrapped into an additional directory that makes the django app inside invisible to Django's INSTALLED_APPS.
If you add them to addons-dev they are ignored by git.
A possible solution (even if maybe not the cleanest) would be to unignore addons-dev by adding !/addons-dev to the .gitignore in the project's root directory and then add -e /app/addons-dev/aldryn-package-name to requirements.in (outside the section generated/overwritten by Aldryn). That's what aldryn project develop aldryn-package-name does (for the local environment).
Similarly, if you have a git repository that contains the code (like aldryn-search) you would use -e git+https://github.com/aldryn/aldryn-search.git in requirements.in
In case you need to apply changes to the addon code, best practice would be to fork the original repository and then check out your fork as per the instructions above.

Installing my Django app on ec2

I'm in the process of launching a Django app on ec2, but have hit a wall trying to install my code on my AMI instance. This is my situation: I have a Bitnami AMI up and running that has Django, Apache, PostgreSQL, and nearly all my dependencies pre-installed, and I have my fully functional Django app running on my local machine, which I have been testing thus far with the Django dev server. After quite a bit of googling, the most common methods of installing an app on an ec2 instance seem to be either using ssh/sftp/scp to drop a tarball on the instance, or creating a repository and importing the code from there. If anyone can tell me the method they prefer and guide me through the process, or provide a link to a good tutorial, it would be hugely appreciated!
tar -pczf yourfile.tar.gz MyProject
scp -i /home/user/.cert/yourcert.pem yourfile.tar.gz user@serveripaddress:/home/user
tar -xzvf /home/user/yourfile.tar.gz
I usually simply scp -r my whole site directory into /home/bitnami of my AMI. I'm using Apache/NGINX/Django with mod_wsgi. So the directory (for example /home/bitnami/djangosites/) gets referred to based on my mod_wsgi path in my apache cfg file.
In other words, why not just move the whole directory recursively (scp -r) instead of making a tarball etc?
Directly copying the folder where your project resides may work. However, you mention that you are using a BitNami image, so it is likely that you are using the BitNami Django Stack Amazon image. BitNami also provides a native version of the BitNami Django Stack, so I would suggest that you first try to deploy your application on top of the native installer and see what exact steps you need to follow. For instance, you may need to install Python dependencies, or if you plan to use Apache in production instead of the Django development server, you will need to configure Apache to serve your project. I'm a BitNami developer, and I mention this because making deployment easier on different platforms (including ec2) is one of the goals of BitNami, and since you are already using it you can take advantage of this.
