I implemented a REST API in Django with django-rest-framework and used OAuth2 for authentication.
I tested with:
curl -X POST -d "client_id=YOUR_CLIENT_ID&client_secret=YOUR_CLIENT_SECRET&grant_type=password&username=YOUR_USERNAME&password=YOUR_PASSWORD" http://localhost:8000/oauth2/access_token/
and
curl -H "Authorization: Bearer <your-access-token>" http://localhost:8000/api/
on localhost with successful results consistent with the documentation.
When pushing this up to an existing AWS Elastic Beanstalk instance, I received:
{ "detail" : "Authentication credentials were not provided." }
I like the idea of just having some extra configuration in the standard place. In your .ebextensions directory, create a wsgi_custom.config file with:
files:
  "/etc/httpd/conf.d/wsgihacks.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      WSGIPassAuthorization On
As posted here: https://forums.aws.amazon.com/message.jspa?messageID=376244
I thought the problem was with my Django configuration or some other error on my end, instead of focusing on the differences between localhost and EB. The issue is with EB's Apache settings.
WSGIPassAuthorization is set to Off by default, so it must be turned On. This can be done in a *.config file in your .ebextensions folder by adding the following:
container_commands:
  01_wsgipass:
    command: 'echo "WSGIPassAuthorization On" >> ../wsgi.conf'
Please let me know if I missed something or if there is a better way I should be looking at the problem. I could not find anything specifically about this anywhere on the web, and thought this might save somebody hours of troubleshooting and then feeling foolish.
I use a slightly different approach now. sahutchi's solution worked as long as environment variables were not changed, as Tom Dickin pointed out. I dug a bit deeper inside EB, found where the wsgi.conf template is located, and added the "WSGIPassAuthorization On" option there.
commands:
  WSGIPassAuthorization:
    command: sed -i.bak '/WSGIScriptAlias/ a WSGIPassAuthorization On' config.py
    cwd: /opt/elasticbeanstalk/hooks
That will always work, even when changing environment variables. I hope you find it useful.
Edit: It seems lots of people are still hitting this answer. I haven't used Elastic Beanstalk in a while, but I would look into using Manel Clos' solution below. I haven't tried it personally, but it seems a much cleaner solution. This one is literally a hack on EB's scripts and could break in the future if EB updates them, especially if they move them to a different location.
Though the above solution is interesting, there is another way. Keep the wsgi.conf VirtualHost configuration file you want to use in .ebextensions, and overwrite the deployed one in a post-deploy hook; you can't do this pre-deploy because the file gets regenerated (yes, I found this out the hard way). If you do this, make sure the restart is done through the supervisorctl program, so that all your environment variables are set properly (I found this out the hard way as well). The post-deploy hook script boils down to:
#!/bin/bash
# copy the wsgi.conf staged in /tmp over the regenerated one, then restart httpd
cp /tmp/wsgi.conf /etc/httpd/conf.d/wsgi.conf
/usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf restart httpd
exit 0
01_python.config:

  05_fixwsgiauth:
    command: "cp .ebextensions/wsgi.conf /tmp"
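Putting those pieces together, one way the .ebextensions config could look (a sketch only; the hook file name is my own, and the hook directory is the one used by the older Amazon Linux Python platform, so verify it on your environment):

files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/99_fix_wsgi_auth.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/bin/bash
      # overwrite the regenerated wsgi.conf with the copy staged by container_commands
      cp /tmp/wsgi.conf /etc/httpd/conf.d/wsgi.conf
      /usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf restart httpd
      exit 0

container_commands:
  05_fixwsgiauth:
    command: "cp .ebextensions/wsgi.conf /tmp"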
I work on multiple App Engine projects in any given week, i.e. multiple clients. Earlier I could set application in app.yaml, so whenever I did appcfg.py update ... it would ensure deployment to the right project.
When deploying, the application setting now throws an error with gcloud app deploy. I had to use
gcloud app deploy --project [YOUR_PROJECT_ID]. So what used to be a directory-level setting for a project is now going into our build tooling, and missing that simple detail can push a project's code to the wrong customer.
i.e. if I did gcloud config set project proj1 and then somehow ran gcloud app deploy from proj2's directory, it would deploy to proj1. Production deployments are done after detailed verification on the build tools, so it is less of an issue there because we still use the --project flag.
But it's hard to do similar checks in the development environment; dev_appserver.py doesn't have a --project flag.
When starting dev_appserver.py I have to do gcloud config set project <project-id> before I start the server. This matters when I'm using things like Pub/Sub or GCS (with dev topics or dev buckets).
Unfortunately, missing a simple configuration like setting the project ID in a dev environment can result in uploading blobs/messages/etc. to the wrong dev GCS bucket or the wrong dev Pub/Sub topic (we are not using emulators). And this has happened quite a few times, especially when starting new projects.
I find the above solutions hackish workarounds. Is there a good way to ensure that we do not deploy to, or develop against, the wrong project when working from a certain directory?
TL;DR - Not supported based on the current working directory, but there are workarounds.
Available workarounds
gcloud does not directly let you set up a configuration per working directory. Instead, you could use one of these 3 options to achieve something similar:
1. Specify --project, --region, --zone or the config of interest per command. This is painful but gets the job done.
2. Specify a different gcloud configuration directory per command (gcloud uses ~/.config/gcloud on *nix by default):
   CLOUDSDK_CONFIG=/path/to/config/dir1 gcloud COMMAND
   CLOUDSDK_CONFIG=/path/to/config/dir2 gcloud COMMAND
3. Create multiple configurations and switch between them as needed:
   gcloud config configurations activate config-1 && gcloud COMMAND
Shell helpers
As all of the above options are ways to customize on the command line, aliases and/or functions in your favorite shell will also help make things easier.
For example in bash, option 2 can be implemented as follows:
function gcloud_proj1() {
    CLOUDSDK_CONFIG=/path/to/config/dir1 gcloud "$@"
}

function gcloud_proj2() {
    CLOUDSDK_CONFIG=/path/to/config/dir2 gcloud "$@"
}

gcloud_proj1 COMMAND
gcloud_proj2 COMMAND
There's a very nice way I've been using with PyCharm; I suspect you can do the same with other IDEs.
You can declare default environment variables for the IDE terminal, so when you open a new terminal gcloud recognises them and picks up the project and account.
No need to switch configurations between projects manually (gcloud config configurations activate ...). Terminals opened in other projects will inherit their own GCP project and config from the environment variables.
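For example, the terminal environment for one project could look like this (project, account and configuration names are placeholders; the CLOUDSDK_* variables are the ones the gcloud CLI reads):

CLOUDSDK_CORE_PROJECT=my-dev-project        # project used by gcloud commands
CLOUDSDK_CORE_ACCOUNT=me@example.com        # account used by gcloud commands
CLOUDSDK_ACTIVE_CONFIG_NAME=my-dev-config   # named gcloud configuration to use
GOOGLE_CLOUD_PROJECT=my-dev-project         # read by many Google Cloud client libraries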
I've had this problem for years and I believe I found a decent compromise.
Create a simple script called contextual-gcloud. Note the \gcloud, which is essential for the aliasing below: the backslash bypasses the alias and calls the real binary.
🐧$ cat > contextual-gcloud
#!/bin/bash
# If a .gcloudconfig/ dir exists in the current directory, use it as the gcloud config dir.
if [ -d .gcloudconfig/ ]; then
    echo "[$0] .gcloudconfig/ directory detected: using that dir for configs instead of default."
    CLOUDSDK_CONFIG=./.gcloudconfig/ \gcloud "$@"
else
    \gcloud "$@"
fi
Add the alias to your .bashrc and reload it / start a new bash session:
alias gcloud=contextual-gcloud
That's it! If a directory contains a .gcloudconfig/ folder, the script will use that configuration instead of the default one, which means you can keep the configuration in source control; just remember to gitignore things like logs and private material (keys, certificates, ...).
Note: auto-completion is fixed by the alias ;)
Code: https://github.com/palladius/sakura/blob/master/bin/contextual-gcloud
These are exactly the reasons I highly dislike gcloud: making the command-line argument mandatory and dropping support for configuration files is much too error-prone for my taste.
So far I'm still able to use the GAE SDK instead of Google Cloud SDK (see What is the relationship between Google's App Engine SDK and Cloud SDK?), which could be one option - basically keep doing stuff "the old way". Please note that it's no longer the recommended method.
You can find the still compatible GAE SDKs here.
For whenever the above is no longer an option and I'm forced to switch to the Cloud SDK, my plan is to have version-controlled cheat-sheet text files in each app directory containing the exact commands for running the dev server, deploying, etc. for that particular project, which I can just copy-paste into the terminal without fear of making mistakes. You carefully set these up once and then just copy-paste them. As a bonus, you can have different branch versions for different environments (staging/production, for example).
Actually I'm using this approach even for the GAE SDK - to prevent accidental deployment of the app-level config files to the wrong GAE app (such deployments must use cmdline arguments to specify the app in multi-service apps).
Or do the same but with environment config files and wrapper scripts instead of cheat-sheet files, if that's your preference.
I'm running a Django server with Gunicorn and Nginx hosted on DigitalOcean. I've run into a problem where adding a new file through the admin interface produces a 403 Forbidden error. Specifically, the file in question works fine if I query for it (e.g. Object.objects.all()) but it can't be rendered in my templates. I've previously fixed the problem with chmod/chown, but the fix only applies to existing files, not new ones. Does anyone know how to apply the fix permanently?
TL;DR:
FILE_UPLOAD_PERMISSIONS = 0o644 in settings.py
in bash shell: find /path/to/MEDIA_ROOT -type d -exec chmod go+rx {} +
The explanation
The files are created with permissions that are too restrictive, so the user Nginx runs as cannot read them. To fix this, you need to make sure Nginx can both read the files and reach them, i.e. traverse the directories above them.
The goal
First you need FILE_UPLOAD_PERMISSIONS to allow reading by the Nginx user. Second, MEDIA_ROOT and all subdirectories must be readable by Nginx and writeable by Gunicorn.
How to
You must ensure the directories are either world-readable (and executable), or owned by a group that the Nginx process belongs to and at least group-readable (and executable).
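For files and directories that Django creates from now on, settings along these lines should cover it (a sketch; the exact modes depend on how your Nginx and Gunicorn users and groups are set up):

# settings.py -- sketch; adjust modes to your Nginx/Gunicorn user and group setup
FILE_UPLOAD_PERMISSIONS = 0o644            # uploaded files: owner read/write, group/other read
FILE_UPLOAD_DIRECTORY_PERMISSIONS = 0o755  # directories Django creates under MEDIA_ROOT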
As a side note, you said you've used chmod and chown before, so I assumed you were familiar with the terminology used. Since you're not, I highly recommend fully reading the linked tutorial, so you understand what the commands you used can do and can screw up.
Hi Heroku Python people,
I want my Heroku app to access shared private libraries in my GitHub account.
So I would like to have a requirements.txt file that looks like this ...
# requirements.txt
requests==1.2.2
-e git+ssh://git@github.com/jtushman/dict_digger.git#egg=dict_digger
And I would like it to use an SSH key that I upload with heroku keys:add, or to have some mechanism to get a private key via the Heroku CLI.
Right now I get the following error (which is I guess expected):
Host key verification failed.
It does work if I do (per @kenneth_reitz's https://stackoverflow.com/a/9136665/192791):
-e git+https://username:password@github.com/jtushman/dict_digger.git#egg=dict_digger
But it is really unworkable for me to put credentials in my requirements.txt file.
Has anyone come up with a nice solution for this?
I have also posted an issue on the Heroku Python buildpack project here.
Kenneth, the maintainer of Heroku's Python buildpack, said the following (and I am cutting and pasting here):
I would currently recommend the way mentioned (git over https)
Using the key you have registered with heroku would be cool, but
unfortunately, you would have to provide your private key for this to
work. Quite undesirable.
However, you could also write your keys into a .ssh folder in your app
or use .profile scripts to facilitate this.
You can see the full thread here: https://github.com/heroku/heroku-buildpack-python/issues/97
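To illustrate the .profile idea from that quote, a rough sketch (untested; GITHUB_SSH_KEY is a hypothetical config var you would set yourself with heroku config:set, and keep in mind that .profile runs at dyno startup, not during the buildpack's pip install):

# .profile -- sketch: recreate an SSH key from a config var at dyno startup
mkdir -p ~/.ssh
echo "$GITHUB_SSH_KEY" > ~/.ssh/id_rsa
chmod 600 ~/.ssh/id_rsa
ssh-keyscan -H github.com >> ~/.ssh/known_hosts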
I had the same issue: I wanted to use django-avatar, and the version on PyPI is old and doesn't support the Django 1.5 custom user model.
The simple solution is to download the package and use it as a regular app, as if it were part of your project; then just git add . and push, and it works!
It might not be the best idea, but it just works.
I have a Python/Django project that I've set up for development and production using git revision control. I have three settings.py files:
- settings.py (which has dummy variables for a potential open source project)
- settings_production.py (for production variables)
- settings_local.py (to override settings just for my local environment; this file is not tracked by git)
I use this method, which works great:
try:
    from settings_production import *
except ImportError, e:
    print 'Unable to load settings_production.py:', e

try:
    from settings_local import *
except ImportError, e:
    print 'Unable to load settings_local.py:', e
HOWEVER, I want this to be an open source project. I've set up two git remotes, one called 'heroku-production' and one called 'github-opensource'. How can I set it up so that 'heroku-production' includes settings_production.py while 'github-opensource' doesn't, so that I can keep those settings private?
Help! I've looked at most of the resources around the internet, but they don't seem to address this use case. Is this the right way? Is there a better approach?
The dream would be to be able to push my local environment to either heroku-production or github-opensource without having to mess with the settings files.
Note: I've looked at the setup where you use environment variables or don't track the production settings, but that feels overly complicated. I like to see everything in front of me in my local setup. See this method.
I've also looked through all these methods, and they don't quite seem to fit the bill.
There's a very similar question here. One of the answers suggests git submodules, which I would say are the easiest way to go about this. This is a problem for your VCS, not your Python code.
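For illustration only, the submodule approach could look roughly like this (repository name and path are hypothetical):

# keep the private settings in their own private repository, pulled in as a submodule
git submodule add git@github.com:youruser/myproject-private-settings.git private_settings
git commit -m "Add private settings as a submodule"
# settings_production.py would then live inside private_settings/, so only people
# with access to that private repository can fetch it; the open source remote
# just sees the submodule reference.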
I think using environment variables, as described in the Two Scoops of Django book, is the best way to do this.
I'm following this approach. I have an application running out of a private GitHub repository in production (with an average of half a million page views per month), staging, and two development environments, and I use a directory structure like this:
MyProject/
    settings/
        __init__.py
        base.py
        production.py
        staging.py
        development_1.py
        development_2.py
I keep everything that's common to all the environments in base.py and then make the appropriate changes in production.py, staging.py, development_1.py or development_2.py.
My deployment process for production includes virtualenv, Fabric, upstart, a bash script (used by upstart), Gunicorn and Nginx. I have a slightly modified version of the bash script I use with upstart to run the test server; it is something like this:
#!/bin/bash -e
# starts the development server using environment variables and django-admin.py
PROJECTDIR=/home/user/project
PROJECTENV=/home/user/.virtualenvs/project_virtualenv
source $PROJECTENV/bin/activate
cd $PROJECTDIR
export LC_ALL="en_US.UTF-8"
export HOME="/home/user"
export DATABASES_DEFAULT_NAME_DEVELOPMENT="xxxx"
export DATABASES_DEFAULT_USER_DEVELOPMENT="xxxxx"
export DATABASES_DEFAULT_PASSWORD_DEVELOPMENT="xxx"
export DATABASES_DEFAULT_HOST_DEVELOPMENT="127.0.0.1"
export DATABASES_DEFAULT_PORT_DEVELOPMENT="5432"
export REDIS_HOST_DEVELOPMENT="127.0.0.1:6379"
django-admin.py runserver --pythonpath=`pwd` --settings=MyProject.settings.development_1 0.0.0.0:8006
Notice this is not the complete story and I'm simplifying to make my point. I have some extra Python code in base.py that takes the values from these environment variables too.
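As a rough illustration of that extra code, the settings module could read the variables along these lines (a sketch only; the variable names match the exports above, and the PostgreSQL backend is my assumption based on the port):

# base.py -- sketch: pull per-environment values out of the environment
import os

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',  # assumed backend
        'NAME': os.environ.get('DATABASES_DEFAULT_NAME_DEVELOPMENT', ''),
        'USER': os.environ.get('DATABASES_DEFAULT_USER_DEVELOPMENT', ''),
        'PASSWORD': os.environ.get('DATABASES_DEFAULT_PASSWORD_DEVELOPMENT', ''),
        'HOST': os.environ.get('DATABASES_DEFAULT_HOST_DEVELOPMENT', '127.0.0.1'),
        'PORT': os.environ.get('DATABASES_DEFAULT_PORT_DEVELOPMENT', '5432'),
    }
}

REDIS_HOST = os.environ.get('REDIS_HOST_DEVELOPMENT', '127.0.0.1:6379')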
Play with this and make sure to check the relevant chapter in Two Scoops of Django. I was also using the import approach you mentioned, but having settings out of the repository wasn't easy to manage; I made the switch a few months ago and it has helped me a lot.
This is a weird question, but it's been driving me bonkers for the last 3 hours. I wanted to play around with Kotti, a Pyramid-based CMS, and I made the mistake of installing it with easy_install first (sudo easy_install kotti). I'm getting weird behavior and I'm not sure if it's the program itself or the way I installed it.
I want to change some parts of the code and see how it works, but my changes are not taking effect. After installing it via easy_install I did:
virtualenv mysite --no-site-packages
bin/easy_install pyramid
git clone https://github.com/Pylons/Kotti.git
cd Kotti
sudo ../bin/python setup.py develop
../bin/pserve app.ini --reload
I went to 127.0.0.1:5000 and saw it was working. The first page has text that says "Congratulations! You have successfully installed Kotti.", so I went into the kotti directory, did a grep "Congratulations" *.*, and found it was coming from populate.py. So I opened the file, changed the line to a different piece of text, and saved. Because I have the --reload flag on pserve, I noticed it reloaded my code in the terminal, but when I went back to the site the text had not changed.
I'm so confused, because the server reloads when I change the Python code, so it sees the change, but it's not reflected in the browser (to rule out browser caching I tried different browsers and cleared the cache).
Any ideas?
When you run a Kotti web application for the first time, as with most CMS systems, it runs a set of data population methods (including that populate.py code you mentioned) to set up a database and insert all the content you see. The --reload flag only tells the server to watch for file changes as you work on the file system; it does not re-run that population step.
If you want to rerun the installation/population code then you need to delete the created database. If you haven't made any changes from their example app.ini file it will likely be Kotti.db.
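In practice (assuming the default SQLite setup from the example app.ini and the directory layout from your question), that would be something like:

cd Kotti
rm Kotti.db                       # drop the database created on first run
../bin/pserve app.ini --reload    # the next start re-runs populate.py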
Alternatively, use the CMS itself to make content changes, as CMS systems intend.
As an aside, running python -v will show all the imports, which can help confirm which copy of the code is actually being loaded.