How to restart linked Docker Compose services in PyCharm's Run Configuration?

I'm trying to use a Django server run/debug configuration in PyCharm with a Docker Compose interpreter and the 'backend' service. Everything works fine; however, when I restart the server, only one container ('backend') is restarted:
xxxxx_redis is up-to-date
xxxxx_frontend_1 is up-to-date
xxxxx_postgresql is up-to-date
xxxxx_celery_1 is up-to-date
Starting xxxxx_backend_1 ...
How can I make some linked services (e.g. 'celery') restart as well via PyCharm? The definition of 'backend' looks like this:
backend:
  build:
    # build args
  command: python manage.py runserver 0.0.0.0:8000 --settings=<settings.module>
  user: root
  volumes:
    # volumes definition
  links:
    - postgresql
    - redis
    - frontend
    - celery

Simply adding the name of the service to the end of the default up command in Command and options did the trick for me:
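For example, assuming the field's default value is up, appending the service name turns it into:

up celery

Any service listed there is restarted together with the service the run configuration is attached to.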
Now both backend and celery are restarted when I run the configuration.

Related

How to migrate database in Django inside Docker

I have a docker-compose project with two containers running NGINX and gunicorn with my Django files.
I also have a database outside of Docker in AWS RDS.
My question is similar to this one, but that question concerns a database that lives within docker-compose; mine is outside.
So, if I were to open a bash terminal in my container and run py manage.py makemigrations, the migration files in the Django project (for example /my-django-project/my-app/migrations/001-xxx.py) would get out of sync with the database that records which migrations have been applied. This will happen because my containers can shut down and be replaced by new containers at any time, and the migration files would not be saved.
My ideas are to either:
Use a volume inside docker-compose, but since the migrations folders are spread across all the Django apps, that could be hard to achieve.
Handle migrations outside of Docker. That would require some kind of "master" project where migration files would be stored, which does not seem like a good idea since the whole project would then depend on some local files existing.
I'm looking for suggestions on a good practice how I can handle migrations.
EDIT:
Here is my docker-compose.yml. I'm running this locally with docker-compose up, and in production it is deployed to AWS ECS with docker compose up. I left out some aws-cloudformation config, which I don't think matters here.
docker-compose.yml
version: '3'
services:
  web:
    image: <secret>.dkr.ecr.eu-west-3.amazonaws.com/api-v2/django:${IMAGE_TAG}
    build:
      context: .
      dockerfile: ./Dockerfile
    networks:
      - demoapp
    environment:
      - DEBUG=${DEBUG}
      - SECRET_KEY=${SECRET_KEY}
  nginx:
    image: <secret>.dkr.ecr.eu-west-3.amazonaws.com/api-v2/nginx:${IMAGE_TAG}
    build:
      context: .
      dockerfile: ./nginx.Dockerfile
    ports:
      - 80:80
    depends_on:
      - web
    networks:
      - demoapp
The problem boiled down to where to store the migration files that Django generates upon py manage.py makemigrations, and when/where to run py manage.py migrate. As 404pio suggested, you can simply store these in your code repository, e.g. on GitHub.
So my workflow goes like this:
In my local development environment, run py manage.py makemigrations and py manage.py migrate (targeting a development database like SQLite).
If everything OK, commit and push to git.
(I'm using CircleCI to test and deploy my Django project, but this could be done manually as well.) CircleCI runs the pipeline after each git push. The very last step in the pipeline runs py manage.py migrate; this must happen after the app deployment, since the deployment might fail, in which case you don't want to migrate. See the sketch below.
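For illustration, such a final step might look roughly like this as a fragment of .circleci/config.yml (the job name, image, and secrets handling are assumptions, not taken from the question):

  # Hypothetical CircleCI job: run Django migrations as the very last
  # pipeline step, after deployment has succeeded. The RDS credentials
  # are expected to come from CircleCI environment variables.
  migrate:
    docker:
      - image: cimg/python:3.10
    steps:
      - checkout
      - run: pip install -r requirements.txt
      - run: python manage.py migrate --noinput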

How to access the Django app running inside the Docker container?

I am currently running my Django app inside a Docker container using the command below:
docker-compose run app sh -c "python manage.py runserver"
but I am not able to access the app via the localhost URL (I'm not using any additional DB server, NGINX, or gunicorn; just the Django development server running inside Docker).
Please let me know how to access the app.
docker-compose run is intended to launch a utility container based on a service in your docker-compose.yml as a template. It intentionally does not publish the ports: declared in the Compose file, and you shouldn't need it to run the main service.
docker-compose up should be your go-to call for starting the services. Just docker-compose up on its own will start everything in the docker-compose.yml, concurrently, in the foreground; you can add -d to start the processes in the background, or a specific service name docker-compose up app to only start the app service and its dependencies.
The python command itself should be the main CMD in your image's Dockerfile. You shouldn't need to override it in your docker-compose.yml file or to provide it at the command line.
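A minimal sketch of such a Dockerfile, assuming a standard Django layout with a requirements.txt (the file names and port are illustrative):

FROM python:3.10-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
EXPOSE 5000
# The server command lives here, not in docker-compose.yml or on the CLI
CMD ["python", "manage.py", "runserver", "0.0.0.0:5000"]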
A typical Compose YAML file might look like:
version: '3.8'
services:
  app:
    build: .  # from the Dockerfile in the current directory
    ports:
      - 5000:5000  # make localhost:5000 forward to port 5000 in the container
While Compose supports many settings, you do not need to provide most of them. Compose provides reasonable defaults for container_name:, hostname:, image:, and networks:; expose:, entrypoint:, and command: will generally come from your Dockerfile and don't need to be overridden.
Try 0.0.0.0:<PORT_NUMBER> (typically 80 or 8000). If you are still having trouble connecting to the server, use the Docker Machine IP instead of localhost. Enter the following in a terminal and navigate to the provided URL:
docker-machine ip

Flask and selenium-hub are not communicating when dockerised

I am having issues with getting data back from a docker-selenium container, via a Flask application (also dockerized).
When the Flask application is running in one container, I get the following error on http://localhost:5000; the app drives Selenium through a Remote driver pointed at http://localhost:4444/wd/hub.
The error that is generated is:
urllib.error.URLError: <urlopen error [Errno 99] Cannot assign requested address>
I have created a github repo with my code to test, see here.
My docker-compose file below seems ok:
version: '3.5'
services:
  web:
    volumes:
      - ./app:/app
    ports:
      - "5000:80"
    environment:
      - FLASK_APP=main.py
      - FLASK_DEBUG=1
      - 'RUN=flask run --host=0.0.0.0 --port=80'
    command: flask run --host=0.0.0.0 --port=80
    # Infinite loop, to keep it alive, for debugging
    # command: bash -c "while true; do echo 'sleeping...' && sleep 10; done"
  selenium-hub:
    image: selenium/hub:3.141
    container_name: selenium-hub
    ports:
      - 4444:4444
  chrome:
    shm_size: 2g
    volumes:
      - /dev/shm:/dev/shm
    image: selenium/node-chrome:3.141
    # image: selenium/standalone-chrome:3.141.59-copernicium
    depends_on:
      - selenium-hub
    environment:
      - HUB_HOST=selenium-hub
      - HUB_PORT=4444
What is strange is that when I run the Flask application in PyCharm, with the Selenium grid up in Docker, I am able to get the data back through http://localhost:5000. The issue only happens when the Flask app is running inside Docker.
Thanks for the help in advance, let me know if you require further information.
Edit
So I amended my docker-compose.yml file to include a network (and updated the code on GitHub). As the Flask app code runs in debug mode from a volume, any update to the code triggers a refresh of the debugger.
I ran docker network inspect on the created network and found the local Docker IP address of selenium-hub. I updated the app/utils.py code in get_driver() to use that IP address in command_executor rather than localhost. Saving and re-running from my browser returns the data successfully.
But I don't understand why http://localhost:4444/wd/hub would not work, the docker containers should see each other in the network as localhost, right?
the docker containers should see each other in the network as localhost, right?
No, this is only true when they use the host networking and expose ports through the host.
When you have services interacting with each other in docker-compose (or stack), the services should refer to each other by service name. E.g. you would reach the hub container at http://selenium-hub:4444/wd/hub, and your Flask application could be reached by another container on the same network at http://web.
You may be confused if you normally run docker with host networking, because on the host network selenium-hub is also exposed on the same port 4444. So a container started with host networking could use http://localhost:4444 just fine there as well.
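As a concrete sketch of the fix, get_driver() in app/utils.py would point the Remote driver at the service name; this is written against the selenium 3.x API matching the images above, so treat the exact call as an assumption:

from selenium import webdriver

def get_driver():
    # Containers on the same Compose network resolve each other by
    # service name, so use selenium-hub instead of localhost.
    return webdriver.Remote(
        command_executor="http://selenium-hub:4444/wd/hub",
        desired_capabilities=webdriver.DesiredCapabilities.CHROME,
    )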
Could it potentially be a port-in-use issue related to the execution?
See:
Python urllib2: Cannot assign requested address

How to run Odoo tests unittest2?

I tried running Odoo tests using --test-enable, but it doesn't work. I have a couple of questions.
According to the documentation, tests can only be run during module installation; what happens when we add functionality and then want to run the tests?
Is it possible to run tests from an IDE like PyCharm?
This is useful for running Odoo test cases:
./odoo.py -i/-u module_being_tested -d being_used_to_test --test-enable
Common options:
  -i INIT, --init=INIT          install one or more modules (comma-separated list, use "all" for all modules); requires -d
  -u UPDATE, --update=UPDATE    update one or more modules (comma-separated list, use "all" for all modules); requires -d
Database related options:
  -d DB_NAME, --database=DB_NAME    specify the database name
Testing Configuration:
  --test-enable                 enable YAML and unit tests
@aftab You need to add the log level; please see below.
./odoo.py -d <dbname> --test-enable --log-level=test
And regarding your question: if you are making changes to installed modules and need to re-run all test cases, simply restart your server with -u <module_name> (or -u all for all modules) together with the above command.
Here is a REALLY nice plugin for running Odoo unit tests directly with pytest:
https://github.com/camptocamp/pytest-odoo
Here's a result example:
I was able to run Odoo's tests using PyCharm. To achieve this I used Docker + pytest-odoo + PyCharm (using remote interpreters).
First you set up a Dockerfile like this:
FROM odoo:14
USER root
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
        python3-pip
RUN pip3 install pytest-odoo coverage pytest-html
USER odoo
And a docker-compose.yml like this:
version: '2'
services:
  web:
    container_name: plusteam-odoo-web
    build:
      context: .
      dockerfile: Dockerfile
    image: odoo:14
    depends_on:
      - db
    ports:
      - "8069:8069"
    volumes:
      - odoo-web-data:/var/lib/odoo
      - ./config:/etc/odoo
      - ./addons:/mnt/extra-addons
    command: --dev all
  db:
    container_name: plusteam-odoo-db
    image: postgres:13
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_PASSWORD=odoo
      - POSTGRES_USER=odoo
      - PGDATA=/var/lib/postgresql/data/pgdata
    volumes:
      - odoo-db-data:/var/lib/postgresql/data/pgdata
volumes:
  odoo-web-data:
  odoo-db-data:
So we extend an Odoo image with pytest-odoo and the packages needed to generate coverage reports.
Once you have this, run docker-compose up -d to get your Odoo instance running; the Odoo container will have pytest-odoo installed. The next part is to tell PyCharm to use a remote interpreter based on the modified Odoo image that includes the pytest-odoo package.
Now, every time you run a script, PyCharm will launch a new container based on the image you provided.
After examining the containers launched by PyCharm, I realized they bind the project's directory to /opt/project/ inside the container. This is useful because you will need to modify the odoo.conf file when you run your tests.
You can (and should) customize the database connection to point at a dedicated testing DB. The important part is to set the addons_path option to /opt/project/addons, or to whatever final path inside the PyCharm-launched containers your custom addons are available at. A sketch of such a config is shown below.
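A sketch of what that testing odoo.conf might contain, given the compose file above (the database name is a hypothetical choice):

[options]
addons_path = /opt/project/addons
db_host = db
db_port = 5432
db_user = odoo
db_password = odoo
db_name = odoo_test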
With this you can create a PyCharm run configuration for pytest like this:
Notice how we provided the path to the Odoo config with the modifications for testing; this way, the Odoo running in the container launched by PyCharm knows where your custom addons' code is located.
Now we can run the script and even debug it and everything will work as expected.
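The rough command-line equivalent of that run configuration would be something like the following; the --odoo-config flag comes from the pytest-odoo README, and the paths (config file, module name) are assumptions based on the setup above:

pytest --odoo-config=/opt/project/config/odoo-test.conf /opt/project/addons/my_module/tests -v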
I go into more detail on this (my particular solution) in a Medium article, and I even wrote a repository with a working demo so you can try it out; hope this helps:
https://medium.com/plusteam/how-to-run-odoo-tests-with-pycharm-51e4823bdc59 https://github.com/JSilversun/odoo-testing-example
Be aware that with remote interpreters you just need to make sure the odoo binary can find the addons folder properly and you'll be all set :) Besides, using a Dockerfile to extend an image helps speed up development.

Docker-compose and pdb

I see that I'm not the first one to ask the question but there was no clear answer to this:
How do you use pdb with docker-compose in Python development?
When you ask uncle Google about django docker, you get awesome docker-compose examples and tutorials, and I have a working environment: I can run docker-compose up and I have a neat developer environment, but pdb is not working (which is very sad).
I could settle for running docker-compose run my-awesome-app python app.py 0.0.0.0:8000, but then I can't access my application at http://127.0.0.1:8000 from the host (I can with docker-compose up), and it seems that each time I use run, new containers are created, like dir_app_13 and dir_db_4, which I don't want at all.
People of good will please aid me.
PS
I'm using pdb++ for this example and a basic docker-compose.yml from this django example. I've also experimented, but nothing seems to help. And I'm using docker-compose 1.3.0rc3, as it supports pointing to a Dockerfile.
Use the following steps to attach pdb to any Python script.
Step 1. Add the following in your yml file
stdin_open: true
tty: true
This enables interactive mode and attaches stdin; it is the equivalent of docker run's -it flags. A minimal sketch of where these lines go is shown below.
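In context, the service definition might look like this minimal sketch (the service name and command are placeholders):

services:
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - "8000:8000"
    stdin_open: true  # keep STDIN open, like docker run -i
    tty: true         # allocate a pseudo-TTY, like docker run -t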
Step 2.
docker attach <generated_containerid>
You'll now get the pdb shell.
Try running your web container with the --service-ports option: docker-compose run --service-ports web
If, after adding
stdin_open: true
tty: true
you start getting issues similar to this:
    fd = self._input_fileno()
    if fd is not None and fd in ready:
>       return ord(os.read(fd, 1))
E       TypeError: ord() expected a character, but string of length 0 found
You can try adding ENV LC_ALL en_US.UTF-8 at the top of your Dockerfile:
FROM python:3.8.2-slim-buster as build_base
ENV LC_ALL en_US.UTF-8
In my experience, the docker-compose up command does not provide an interactive shell; it just streams STDOUT to a default read-only shell.
And if you have specified and mapped a logs directory, docker-compose up will print nothing to the attached shell and instead send output to your mapped logs. So you have to attach to the container separately once it is running.
When you do docker-compose up, run it in detached mode via -d and connect to the container via:
docker exec -it your_container_name bash
