Django check and alternate Docker Compose file: Debug flag not being set?

I'm trying to prep a Django application for production. I created an alternate docker-compose YML file where I specify DEBUG=0. However, when I run Django's deployment check, it says that DEBUG is set to True.
Any thoughts on where I'm going wrong? I thought it was an issue with my use of Docker. But is it perhaps an issue with how I'm using Django?
Here are my steps:
Created my project with docker-compose.yml (see below)
Created docker-compose-prod.yml for production (see below)
Ran the following
$ docker-compose down
$ docker-compose -f docker-compose-prod.yml up -d --build
$ docker-compose exec web python manage.py check --deploy
Output of the check:
?: (security.W018) You should not have DEBUG set to True in deployment.
Some of my investigatory steps so far:
A. Check the environment variables.
$ docker-compose exec web python
>>> import os
>>> os.environ.get('DEBUG')
'0'
B. Try rebuilding docker image in different ways, e.g. with --no-cache flag
C. Set the DEBUG flag in settings.py in a conditional code block (rather than using os.environ.get). This seemed to work, but I don't understand why. Code details below.
Code excerpts
Excerpt from docker-compose.yml
services:
  web:
    ...
    environment:
      - ENVIRONMENT=development
      - SECRET_KEY=randomlongseriesofchars
      - DEBUG=1
Excerpt from docker-compose-prod.yml
services:
  web:
    ...
    environment:
      - ENVIRONMENT=production
      - SECRET_KEY=anotherrandomlongseriesofchars
      - DEBUG=0
Excerpts from settings.py
ENVIRONMENT = os.environ.get('ENVIRONMENT', default='production')
SECRET_KEY = os.environ.get('SECRET_KEY')
DEBUG = os.environ.get('DEBUG', default=0)  # How is DEBUG being set to True?
...
if ENVIRONMENT == 'production':
    [Various settings for production]
    [Putting DEBUG=0 in this conditional block works]

Environment variables are strings, so the result of os.environ.get() is always a string.
If your code expects the variable to be an int, boolean, etc. rather than a string, it might fail outright; or, if it converts the string to the desired type, the result might differ from what you intended.
I.e., when converting to boolean, only the empty string results in False; bool('0') is True, which is why your DEBUG check passes even when the variable is '0'.
So, the possible options are:
add logic that first reads the env vars and then correctly converts or parses them into the target types (see the sketch after this list);
use env helpers like django-environ or environs, which auto-convert env vars to the target type and provide useful utility methods, generally packaging what you might otherwise implement yourself in option 1;
use a different scheme to pass arguments, i.e. not env vars but a custom settings file (helpers exist for that too). Some people prefer env vars, some prefer files; it may also depend on the project, the team, and how you deploy (what your tooling supports).
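For illustration, a minimal sketch of option 1: a small helper that parses boolean-ish env var strings (the helper name and the set of accepted values are my own choices, not from the question):

import os

def env_bool(name, default=False):
    # Treat '1', 'true', 'yes', 'on' (case-insensitive) as True;
    # everything else, including '0', as False.
    value = os.environ.get(name)
    if value is None:
        return default
    return value.strip().lower() in ('1', 'true', 'yes', 'on')

DEBUG = env_bool('DEBUG')  # '0' -> False, '1' -> True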

DEBUG = (os.getenv('DEBUG', '0') == '1')

Related

Docker: How to set environment variables from file during build?

I would like to set a list of environment variables as specified in an env.list file during the build process, i.e. have a corresponding command in the Dockerfile, like this:
FROM python:3.9.4-slim-buster
COPY env.list env.list
# Here I need a corresponding command:
ENV env.list
The file looks like this:
FOO=foo
BAR=bar
My book of already failed attempts / ruled out options:
On Linux, one can usually set environment variables from a file env.list by running:
source env.list
export $(cut -d= -f1 env.list)
However, executing those commands as RUN in the Dockerfile does not work, because env variables defined using RUN export FOO=foo are not persisted across the layers of the image.
I do not want to explicitly set those variables in the Dockerfile using ENV FOO=foo because they contain login credentials. It's also easier to automate/maintain the project if the variables are defined in one place.
I also don't want to set those variables during docker run --env-file env.list because I need them for a development container which does not "run".
The ENV directive does not allow parsing a file like env.list, as pointed out. But even if it did, the resulting environment variables would still be saved in the final image, passwords included.
The correct approach, to my knowledge, is to set the passwords at runtime with "docker run", either when this image runs or when a child image runs via "docker run".
If the credentials are required while the image is built, I would pass them via the ARG directive so that they can be referenced as shell variables in the Dockerfile but are not saved in the final image:
FROM image
# ARG must be declared after FROM to be usable in subsequent instructions
ARG VAR
RUN echo ${VAR}
etc...
which can run as:
docker build --build-arg VAR=value ...
If you use docker-compose you can pass a variables.env file
docker-compose.yml:
version: "3.7"
services:
  service_name:
    build: folder/.
    ports:
      - '5001:5000'
    env_file:
      - folder/variables.env
folder/Dockerfile
FROM python:3.9.4-slim-buster
folder/variables.env
FOO=foo
BAR=bar
For more info on compose: https://docs.docker.com/compose/

Cloud Build env variables not passed to Django app on GAE

I have a Django app running on Google AppEngine Standard environment. I've set up a cloud build trigger from my master branch in Github to run the following steps:
steps:
- name: 'python:3.7'
  entrypoint: python3
  args: ['-m', 'pip', 'install', '--target', '.', '--requirement', 'requirements.txt']
- name: 'python:3.7'
  entrypoint: python3
  args: ['./manage.py', 'collectstatic', '--noinput']
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['app', 'deploy', 'app.yaml']
  env:
  - 'SHORT_SHA=$SHORT_SHA'
  - 'TAG_NAME=$TAG_NAME'
I can see under the Execution Details tab on Cloud Build that the variables were actually set.
The problem is, SHORT_SHA and TAG_NAME aren't accessible from my Django app (followed instructions at https://cloud.google.com/cloud-build/docs/configuring-builds/substitute-variable-values#using_user-defined_substitutions)! But if I set them in my app.yaml file with hardcoded values under env_variables, then my Django app can access those hardcoded values (and the values set in my build don't overwrite those hardcoded in app.yaml).
Why is this? Am I accessing them/setting them incorrectly? Should I be setting them in app.yaml somehow?
I even printed the whole os.environ dictionary in one of my views to see if they were just there with different names or something, but they're not present in there.
Not the cleanest solution, but I used this medium post as guidance for my solution. I hypothesize that the runserver command isn't being passed those env variables, and that those variables are only applied to the app deploy command.
Write a Python script that dumps the current environment variables into a .env file in the project dir (a sketch of such a script is at the end of this answer)
In your settings file, read the env variables from the .env file (I used the django-environ library for this)
Add a step to the cloud build file that runs your new Python script and pass the env variables in that step (you're essentially dumping those variables into a .env file in this step):
- name: 'python:3.7'
  entrypoint: python3
  args: ['./create_env_file.py']
  env:
  - 'SHORT_SHA=$SHORT_SHA'
  - 'TAG_NAME=$TAG_NAME'
Set the variables through Substitution Variables section in Edit Trigger page in Cloud Build
Now your application should have these env variables when app deploy happens
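For reference, a hypothetical create_env_file.py along the lines described above (the variable list and the .env format are assumptions; adjust them to whatever your settings file reads):

# create_env_file.py - dump selected build-time env vars into a .env file
import os

# Only dump the variables passed in the cloud build step.
WANTED = ('SHORT_SHA', 'TAG_NAME')

with open('.env', 'w') as f:
    for name in WANTED:
        f.write(f"{name}={os.environ.get(name, '')}\n")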

Why does gunicorn not see the correct environment variables?

On my production server, I've set environment variables both inside and outside my virtualenv (only because I don't understand the issue going on), including a variable HELLO_WORLD_PROD which I've set to '1'. In the Python interpreter, both inside and outside my venv, os.environ.get('HELLO_WORLD_PROD') == '1' returns True. In my settings folder, I have:
import os

if os.environ.get('HELLO_WORLD_PROD') == '1':
    from hello_world.settings.prod import *  # noqa
else:
    from hello_world.settings.dev import *  # noqa
Both prod.py and dev.py inherit from base.py; in base.py DEBUG = False, and only dev.py sets DEBUG = True.
However, when I trigger an error through the browser, I'm seeing the debug page.
I'm using nginx and gunicorn. Why is my application importing the wrong settings file?
You can see my gunicorn conf here
Thanks in advance for your patience!
I was using sudo service gunicorn start to run gunicorn. The problem is that service strips all environment variables except TERM, PATH and LANG. To fix it, I added the environment variables to the exec line in my gunicorn.conf using the --env flag, like exec env/bin/gunicorn --env HELLO_WORLD_PROD=1 --env DB_PASSWORD=secret etc.
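For illustration, a sketch of what that upstart-style gunicorn.conf might end up looking like (the paths, bind address, and WSGI module are assumptions based on the question, not the author's actual config):

# /etc/init/gunicorn.conf - hypothetical upstart job
description "hello_world gunicorn server"
start on runlevel [2345]
stop on runlevel [!2345]
respawn

# --env makes the variables visible to gunicorn even though
# 'service' strips them from the surrounding environment
exec /srv/hello_world/env/bin/gunicorn \
    --env HELLO_WORLD_PROD=1 \
    --env DB_PASSWORD=secret \
    --bind 127.0.0.1:8000 \
    hello_world.wsgi:application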

How to run Odoo tests unittest2?

I tried running odoo tests using --test-enable, but it won't work. I have a couple of questions.
According to the documentation, tests can only be run during module installation. What happens when we add functionality and then want to run the tests?
Is it possible to run tests from an IDE like Pycharm?
This is useful for running Odoo test cases:
./odoo.py -i/-u module_being_tested -d being_used_to_test --test-enable
Common options:
  -i INIT, --init=INIT
      install one or more modules (comma-separated list, use "all" for all
      modules); requires -d
  -u UPDATE, --update=UPDATE
      update one or more modules (comma-separated list, use "all" for all
      modules); requires -d
Database related options:
  -d DB_NAME, --database=DB_NAME
      specify the database name
Testing Configuration:
  --test-enable
      Enable YAML and unit tests.
@aftab You need to add the log level as well, see below:
./odoo.py -d <dbname> --test-enable --log-level=test
Regarding your question: if you are making changes to installed modules and need to re-run all the test cases, simply restart your server with -u <module_name> (or -u all for all modules) together with the above command.
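For example, a combined invocation along these lines (the database and module names are illustrative) re-runs the tests for a single module against an existing database:

./odoo.py -d testdb -u my_module --test-enable --log-level=test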
Here is a REALLY nice plugin to run Odoo unit tests directly with pytest:
https://github.com/camptocamp/pytest-odoo
I was able to run odoo's tests using pycharm; to achieve this I used docker + pytest-odoo + pycharm (using remote interpreters).
First, you set up a Dockerfile like this:
FROM odoo:14
USER root
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
        python3-pip
RUN pip3 install pytest-odoo coverage pytest-html
USER odoo
And a docker-compose.yml like this:
version: '2'
services:
  web:
    container_name: plusteam-odoo-web
    build:
      context: .
      dockerfile: Dockerfile
    image: odoo:14
    depends_on:
      - db
    ports:
      - "8069:8069"
    volumes:
      - odoo-web-data:/var/lib/odoo
      - ./config:/etc/odoo
      - ./addons:/mnt/extra-addons
    command: --dev all
  db:
    container_name: plusteam-odoo-db
    image: postgres:13
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_PASSWORD=odoo
      - POSTGRES_USER=odoo
      - PGDATA=/var/lib/postgresql/data/pgdata
    volumes:
      - odoo-db-data:/var/lib/postgresql/data/pgdata
volumes:
  odoo-web-data:
  odoo-db-data:
So we extend an odoo image with pytest-odoo and the packages needed to generate coverage reports.
Once you have this, you can run docker-compose up -d to get your odoo instance running; the odoo container will have pytest-odoo installed. The next part is to tell pycharm to use a remote interpreter based on the modified odoo image that includes the pytest-odoo package.
Now every time you run a script in pycharm, it will launch a new container based on the image you provided.
After examining the containers launched by pycharm, I realized they bind the project's directory to the /opt/project/ directory inside the container. This is useful because you will need to modify the odoo.conf file when you run your tests.
You should customize the database connection to use a dedicated testing db, and, importantly, you need to point the addons_path option to /opt/project/addons, or whatever the final path is inside the pycharm-launched containers where your custom addons are available; a sketch of such a config follows.
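For illustration, a minimal hypothetical odoo.conf for those test runs, with the connection details taken from the compose file above (the database name is an assumption):

[options]
; connection details matching the db service in the compose file above
db_host = db
db_port = 5432
db_user = odoo
db_password = odoo
; dedicated testing database (the name is illustrative)
db_name = odoo_test
; path where pycharm binds the project inside the containers it launches
addons_path = /opt/project/addons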
With this you can create a pycharm run configuration for pytest that passes the path of the odoo config with the modifications for testing; this way the odoo running in the container launched by pycharm will know where your custom addons' code is located.
Now we can run the script and even debug it and everything will work as expected.
I go further into this matter (my particular solution) in a medium article, and I wrote a repository with a working demo so you can try it out; hope this helps:
https://medium.com/plusteam/how-to-run-odoo-tests-with-pycharm-51e4823bdc59 https://github.com/JSilversun/odoo-testing-example
Be aware that when using remote interpreters you just need to make sure the odoo binary can find the addons folder properly and you will be all set :) Besides, using a Dockerfile to extend an image helps to speed up development.

Docker-compose and pdb

I see that I'm not the first one to ask this question, but there was no clear answer to it:
How do you use pdb with docker-compose in Python development?
When you ask uncle Google about django docker you get awesome docker-compose examples and tutorials, and I have a working environment: I can run docker-compose up and I have a neat developer environment, but pdb is not working (which is very sad).
I can settle for running docker-compose run my-awesome-app python app.py 0.0.0.0:8000, but then I can't access my application at http://127.0.0.1:8000 from the host (I can with docker-compose up), and it seems that each time I use run, new containers are made, like dir_app_13 and dir_db_4, which I don't desire at all.
People of good will please aid me.
PS
I'm using pdb++ for this example and a basic docker-compose.yml from this django example. I also experimented, but nothing seems to help. And I'm using docker-compose 1.3.0rc3 as it has support for pointing to a Dockerfile.
Use the following steps to attach pdb to any Python script.
Step 1. Add the following to your yml file:
stdin_open: true
tty: true
This will enable interactive mode and attach stdin. It is the equivalent of docker run's -it flags; a compose sketch follows after these steps.
Step 2.
docker attach <generated_containerid>
You'll now get the pdb shell
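For context, a minimal sketch of where those two keys go in the compose file (the service name and command are illustrative):

services:
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - "8000:8000"
    stdin_open: true  # keep STDIN open, like the -i in docker run -it
    tty: true         # allocate a pseudo-TTY, like the -t in docker run -it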
Try running your web container with the --service-ports option: docker-compose run --service-ports web
If, after adding
stdin_open: true
tty: true
you start to get issues similar to this:
    fd = self._input_fileno()
    if fd is not None and fd in ready:
>       return ord(os.read(fd, 1))
E       TypeError: ord() expected a character, but string of length 0 found
You can try adding ENV LC_ALL en_US.UTF-8 at the top of your Dockerfile:
FROM python:3.8.2-slim-buster as build_base
ENV LC_ALL en_US.UTF-8
In my experience, the docker-compose up command does not provide an interactive shell; it just starts printing STDOUT to a default read-only shell.
And if you have specified and mapped a logs directory, docker-compose up will print nothing to the attached shell and instead send the output to your mapped logs. So you have to attach to the container separately once it is running:
When you do docker-compose up, run it in detached mode via -d and connect to the container via:
docker exec -it your_container_name bash
