Elastic Beanstalk doesn't accept my changes to WSGIPath - python

I have an app that I want to run from aws_wsgi.py instead of application.py, as there are several different entry points depending on where we are hosting it. For this reason, I would like to be able to change the WSGIPath option to point to the correct file.
This, in an .ebextensions .config file, does not work:
option_settings:
  - namespace: "aws:elasticbeanstalk:container:python"
    option_name: WSGIPath
    value: "/opt/python/current/app/aws_wsgi.py"
The environment attempts to use 'application.py' despite the above lines, and no error appears to be emitted. Other parts of that same config file work perfectly, such as the packages commands that get the system to install some yum packages. I can confirm from the logs that the config files are being uploaded:
INFO: Creating new application version using project code
WARNING: You have uncommitted changes.
INFO: Getting version label from git with git-describe
Creating application version archive "0_3_0-507-ga36f".
INFO: creating zip using git archive HEAD
INFO: git archive output: .ebextensions/
.ebextensions/01-weave_server_eb.config
.ebextensions/02-weave_server_eb_lxml_dependencies.config
.ebextensions/03-weave_server_eb_nltk_data.config
.ebextensions/04-weave_server_eb_entity_data.config
.ebextensions/05-weave_server_eb_geography_data.config
.gitattributes
.gitignore
...etc.
We run with a saved configuration, i.e. via eb create --cfg Live, and in the dashboard, that configuration shows that WSGIPath is "application.py". But there is nowhere to change that value in the dashboard. It seems like it is a built-in value that overrides the data we send with the above config file.
I tried adding it as an environment variable via the dashboard, but that goes in the aws:elasticbeanstalk:application:environment namespace and does not affect how the application is started up in the first place. (I checked this by using eb config to download the configuration.)
Maybe I could add a section to the file retrieved by eb config, but I have heard that doing so starts to override the .ebextensions files, and I have several important commands in my .ebextensions files that I need to keep using. (But see the comment below.) It's not clear from the documentation how the .ebextensions data translates to and compares with the configuration files used by eb config, but the .ebextensions files are well documented and reasonably convenient, so I'd rather not break them if possible!
If I retrieve the configuration on the server via eb config get Live, it contains the following (numerous API keys removed):
EnvironmentConfigurationMetadata:
  Description: Includes API keys for live operation
  DateModified: '1437734273000'
  DateCreated: '1437734273000'
AWSConfigurationTemplateVersion: 1.1.0.0
EnvironmentTier:
  Name: WebServer
  Type: Standard
SolutionStack: 64bit Amazon Linux 2015.03 v1.4.3 running Python 2.7
OptionSettings:
  aws:elb:loadbalancer:
    CrossZone: true
  aws:elasticbeanstalk:command:
    BatchSize: '30'
    BatchSizeType: Percentage
  aws:autoscaling:launchconfiguration:
    IamInstanceProfile: aws-elasticbeanstalk-ec2-role
    EC2KeyName: aws-eb
    InstanceType: t2.micro
  aws:elb:policies:
    ConnectionDrainingEnabled: true
  aws:autoscaling:updatepolicy:rollingupdate:
    RollingUpdateType: Health
    RollingUpdateEnabled: true
  aws:elasticbeanstalk:application:environment:
    DATA_DIR: /opt/python/current/app/data
    WSGIPath: /opt/python/current/aws_wsgi.py
  aws:elb:healthcheck:
    Interval: '30'
(NB. The WSGIPath environment variable there is invalid - but I am unable to remove it from the configuration due to bugs in the AWS Dashboard. It appears to have no effect anyway.)
How do I get AWS to respect my chosen WSGIPath?

Related

Why do I have no logs? empty web.stdout.logs?

So I have an AWS EB environment with an application deployed.
I can't view the application's log output (web.stdout.log is empty).
You can try adding the following to your python.config, or create a new config file, e.g. mylogs.config:
files:
  "/opt/elasticbeanstalk/config/private/logtasks/bundle/applogs.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      /opt/python/log/*.log
The /opt/python/log/*.log path should be adjusted to point at your application's log files.
The problem was not that I couldn't see the output; it was always in the /var/log/web.stdout.log file. However, when I was zipping the project to upload it to the EB environment, I was zipping it using the file manager. This caused an issue on upload, as the Procfile was being placed beside the application rather than inside of it.
The effect looked like there were no log files, but really the application was just never being run via the Procfile.
The solution was
cd path/to/application
zip -r my_application_code.zip .
Now when I upload the zip file to the EB console, the Procfile is picked up correctly and the application's log output is found in web.stdout.log as normal.
Thanks for your help.

Cloud Build env variables not passed to Django app on GAE

I have a Django app running on Google AppEngine Standard environment. I've set up a cloud build trigger from my master branch in Github to run the following steps:
steps:
  - name: 'python:3.7'
    entrypoint: python3
    args: ['-m', 'pip', 'install', '--target', '.', '--requirement', 'requirements.txt']
  - name: 'python:3.7'
    entrypoint: python3
    args: ['./manage.py', 'collectstatic', '--noinput']
  - name: 'gcr.io/cloud-builders/gcloud'
    args: ['app', 'deploy', 'app.yaml']
    env:
      - 'SHORT_SHA=$SHORT_SHA'
      - 'TAG_NAME=$TAG_NAME'
I can see under the Execution Details tab on Cloud Build that the variables were actually set.
The problem is, SHORT_SHA and TAG_NAME aren't accessible from my Django app (I followed the instructions at https://cloud.google.com/cloud-build/docs/configuring-builds/substitute-variable-values#using_user-defined_substitutions)! But if I set them in my app.yaml file with hardcoded values under env_variables, then my Django app can access those hardcoded values (and the values set in my build don't overwrite the ones hardcoded in app.yaml).
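For reference, the hardcoded approach that does work looks roughly like this in app.yaml (the values here are just placeholders):
env_variables:
  SHORT_SHA: 'abc1234'
  TAG_NAME: 'v0.0.0'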
Why is this? Am I accessing them/setting them incorrectly? Should I be setting them in app.yaml somehow?
I even printed the whole os.environ dictionary in one of my views to see if they were just there with different names or something, but they're not present in there.
Not the cleanest solution, but I used this Medium post as guidance for my solution. I hypothesize that the runserver command isn't being passed those env variables, and that those variables are only available to the app deploy command.
Write a Python script to dump the current environment variables into a .env file in the project dir (a sketch of such a script is shown after these steps)
In your settings file, read env variables from the .env file (I used django-environ library for this)
Add a step to cloud build file that runs your new Python script and pass env variables in that step (you're essentially dumping these variables into a .env file in this step)
  - name: 'python:3.7'
    entrypoint: python3
    args: ['./create_env_file.py']
    env:
      - 'SHORT_SHA=$SHORT_SHA'
      - 'TAG_NAME=$TAG_NAME'
Set the variables through the Substitution variables section on the Edit Trigger page in Cloud Build
Now your application should have these env variables available when app deploy happens.
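A minimal sketch of what such a create_env_file.py could look like (the variable list is an assumption based on the build step above):
# create_env_file.py - sketch: dump selected build-time variables into a
# .env file that the Django settings can read later.
import os

VARS = ["SHORT_SHA", "TAG_NAME"]

with open(".env", "w") as env_file:
    for name in VARS:
        env_file.write("%s=%s\n" % (name, os.environ.get(name, "")))
On the settings side, reading the file back with django-environ could look roughly like this (paths are illustrative):
# settings.py excerpt (sketch) - assumes the django-environ package
import os
import environ

BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

env = environ.Env()
environ.Env.read_env(os.path.join(BASE_DIR, ".env"))

SHORT_SHA = env("SHORT_SHA", default="")
TAG_NAME = env("TAG_NAME", default="")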

Saleor front-end installation

I am trying to install the Saleor front-end package from GitHub. The documentation is outdated and I get an error when I try:
npm start
Error: Environment variable API_URI not set
I found this variable referenced in different places but did not know what to change or where to set it.
EDIT: Solved. Just in case somebody is going through the same problem:
in webpack > config.base.js, set
process.env.API_URI = 'http://localhost:8000/graphql/'
On Linux, I fixed this by setting the environment variable before running npm start, with:
export API_URI=http://localhost:8000/graphql/
on the terminal.
Create a file called ".env" in the root directory of saleor-storefront and write inside:
API_URI=http://localhost:8000/graphql/
This will create an environment variable called API_URI with the value 'http://localhost:8000/graphql/'.
Create a .env file in the root directory or set environment variables with the following values:
API_URI (required) - URI of a running instance of Saleor GraphQL API. If you are running Saleor locally with the default settings, set API_URI to: http://localhost:8000/graphql/.
APP_MOUNT_URI - URI at which the Dashboard app will be mounted. E.g. If you set APP_MOUNT_URI to /dashboard/, your app will be mounted at http://localhost:9000/dashboard/.
STATIC_URL - URL where the static files are located. E.g. if you use S3 bucket, you should set it to the bucket's URL. By default Saleor assumes you serve static files from the root of your site at http://localhost:9000/.
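Put together, a .env using the local defaults described above could look like this (adjust the values to your setup):
API_URI=http://localhost:8000/graphql/
APP_MOUNT_URI=/dashboard/
STATIC_URL=http://localhost:9000/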
Saleor on Github: How to configure the Dashboard

How to ignore files when running `gcloud app deploy`?

When I run
gcloud app deploy app.yaml
which files actually get uploaded?
The project folder contains folders and files such as .git, .gitignore, Makefile, or venv that are irrelevant to the deployed application.
How does gcloud app deploy decide which files get uploaded?
tl;dr: you should use a .gcloudignore file, not skip_files in app.yaml.
While the prior two answers make use of skip_files in the app.yaml file, there is now a .gcloudignore file that is created when using gcloud deploy or upload commands. The default contents depend on the language that is detected, but here is the automatically created .gcloudignore that I found in my Python project:
# This file specifies files that are *not* uploaded to Google Cloud Platform
# using gcloud. It follows the same syntax as .gitignore, with the addition of
# "#!include" directives (which insert the entries of the given .gitignore-style
# file at that point).
#
# For more information, run:
# $ gcloud topic gcloudignore
#
.gcloudignore
# If you would like to upload your .git directory, .gitignore file or files
# from your .gitignore file, remove the corresponding line
# below:
.git
.gitignore
# Python pycache:
__pycache__/
Note: These commands will not work when both skip_files is defined and .gcloudignore is present. This is not mentioned in the skip_files definition of the app.yaml reference.
It seems better to have a globally recognized standard across gcloud commands, and it makes more sense to adopt .gcloudignore rather than skip_files, which is only relevant to App Engine. Additionally, it works pretty much like a .gitignore file, as the reference mentions:
The syntax of .gcloudignore borrows heavily from that of .gitignore;
see https://git-scm.com/docs/gitignore or man gitignore for a full
reference.
https://cloud.google.com/sdk/gcloud/reference/topic/gcloudignore
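For the files mentioned in the question, the custom entries in a .gcloudignore could look something like this (illustrative; adjust to your project):
.gcloudignore
.git
.gitignore
Makefile
venv/
__pycache__/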
EDIT Aug 2018: Google has since introduced .gcloudignore, which is now preferred; see dalanmiller's answer.
They're all uploaded, unless you use the skip_files instruction in app.yaml. Files with a dot like .git are ignored by default. If you want to add more, beware that you're overriding these defaults and almost certainly want to keep them around.
skip_files:
- ^Makefile$
- ^venv$
# Defaults
- ^(.*/)?#.*#$
- ^(.*/)?.*~$
- ^(.*/)?.*\.py[co]$
- ^(.*/)?.*/RCS/.*$
- ^(.*/)?\..*$
Note also that they are uploaded to different places if you use a static handler. Static files are sent to a CDN and are not available to your language runtime (although there are ways around that, too).
Make sure to read the docs:
https://cloud.google.com/appengine/docs/standard/python/config/appref#skip_files
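For context, the kind of static handler referred to above looks roughly like this in app.yaml (paths are illustrative):
handlers:
- url: /static
  static_dir: static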
How does gcloud app deploy decide which files get uploaded?
It doesn't. It uploads everything by default. As mentioned in another response, you can use the skip_files section in app.yaml as follows:
skip_files:
- ^(.*/)?#.*#$
- ^(.*/)?.*~$
- ^(.*/)?.*\.py[co]$
- ^(.*/)?.*/RCS/.*$
- ^(.*/)?\..*$
- ^(.*/)?\.bak$
- ^\.idea$
- ^\.git$
You can also use the --verbosity param to see which files are being deployed, e.g. gcloud app deploy app.yaml --verbosity=debug or gcloud app deploy app.yaml --verbosity=info, per the docs.

Setting NewRelic environment on Dotcloud (Python)

I have a Python application that is set up using the new New Relic configuration variables in the dotcloud.yml file, which works fine.
However, I want to run a sandbox instance as a test/staging environment, so I need to be able to set the New Relic agent's environment so that it uses a different configuration section of the ini file. My dotcloud.yml is set up as follows:
www:
  type: python
  config:
    python_version: 'v2.7'
    enable_newrelic: True
  environment:
    NEW_RELIC_LICENSE_KEY: *****************************************
    NEW_RELIC_APP_NAME: Application Name
    NEW_RELIC_LOG: /var/log/supervisor/newrelic.log
    NEW_RELIC_LOG_LEVEL: info
    NEW_RELIC_CONFIG_FILE: /home/dotcloud/current/newrelic.ini
I have custom environment variables so that the sandbox is set to "test" and the live application is set to "production".
I am then calling the following in my wsgi.py:
import os
import newrelic.agent

NEWRELIC_CONFIG = os.environ.get('NEW_RELIC_CONFIG_FILE')
ENVIRONMENT = os.environ.get('MY_ENVIRONMENT', 'test')
newrelic.agent.initialize(NEWRELIC_CONFIG, ENVIRONMENT)
However, the dotcloud instance is already enabling New Relic, because I get this in the uwsgi.log file:
Sun Nov 18 18:50:12 2012 - unable to load app 0 (mountpoint='') (callable not found or import error)
Traceback (most recent call last):
  File "/home/dotcloud/current/wsgi.py", line 15, in <module>
    newrelic.agent.initialize(NEWRELIC_CONFIG, ENVIRONMENT)
  File "/opt/ve/2.7/local/lib/python2.7/site-packages/newrelic-1.8.0.13/newrelic/config.py", line 1414, in initialize
    log_file, log_level)
  File "/opt/ve/2.7/local/lib/python2.7/site-packages/newrelic-1.8.0.13/newrelic/config.py", line 340, in _load_configuration
    'environment "%s".' % (_config_file, _environment))
newrelic.api.exceptions.ConfigurationError: Configuration has already been done against differing configuration file or environment. Prior configuration file used was "/home/dotcloud/current/newrelic.ini" and environment "None".
So it would seem that the New Relic agent is being initialised before wsgi.py is called.
So my question is:
Is there a way to initialise the newrelic environment?
The easiest way to do this, without changing any code, would be to do the following.
Create a new sandbox app on dotCloud (see http://docs.dotcloud.com/0.9/guides/flavors/ for more information about creating apps in sandbox mode)
$ dotcloud create -f sandbox <app_name>
Deploy your code to the new sandbox app.
$ dotcloud push
Now you should have the same code running in both your live and sandbox apps. But because you want to change some of the ENV variables for the sandbox app, you need to do one more step.
According to this page http://docs.dotcloud.com/0.9/guides/environment/#adding-environment-variables there are two different ways of adding ENV variables:
Using the dotcloud.yml's environment section.
Using the dotcloud env cli command
Whereas dotcloud.yml allows you to define different environment variables for each service, dotcloud env sets environment variables for the whole application. Moreover, environment variables set with dotcloud env supersede environment variables defined in dotcloud.yml.
That means that if we want different values for our sandbox app, we just need to run a dotcloud env command to set those variables on the sandbox app, which will override the ones in your dotcloud.yml.
If we just want to change one variable, we would run this command:
$ dotcloud env set NEW_RELIC_APP_NAME='Test Application Name'
If we want to update more than one at a time, we would do the following:
$ dotcloud env set \
'NEW_RELIC_APP_NAME="Test Application Name"' \
'NEW_RELIC_LOG_LEVEL=debug'
To make sure that you have your env variables set correctly, you can run the following command:
$ dotcloud env list
Notes
The commands above use the new dotCloud 0.9.x CLI. If you are using the older one, you will need to either upgrade to the new one or refer to the documentation for the old CLI: http://docs.dotcloud.com/0.4/guides/environment/
When you set your environment variables, the application is restarted so that the new values take effect, so to limit your downtime, set all of them in one command.
Unless they are doing something odd, you should be able to override the app_name supplied by the agent configuration file by doing:
import newrelic.agent
newrelic.agent.global_settings().app_name = 'Test Application Name'
Don't call newrelic.agent.initialize() a second time.
This will only work if app_name lists a single application to report data to.
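For example, tying this to the MY_ENVIRONMENT variable from the question could look roughly like this (a sketch, not something I have tested on dotCloud):
# wsgi.py sketch: pick the reporting app name from the custom environment
# variable instead of calling newrelic.agent.initialize() a second time
import os
import newrelic.agent

if os.environ.get('MY_ENVIRONMENT', 'test') == 'test':
    newrelic.agent.global_settings().app_name = 'Test Application Name'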
