How to run Odoo tests (unittest2)? - python

I tried running Odoo tests using --test-enable, but it won't work. I have a couple of questions.
According to the documentation, tests can only be run during module installation; what happens when we add functionality and then want to run the tests again?
Is it possible to run tests from an IDE like PyCharm?

This is useful for running Odoo test cases:
./odoo.py -i/-u module_being_tested -d being_used_to_test --test-enable
Common options:
-i INIT, --init=INIT
install one or more modules (comma-separated list, use "all" for all modules), requires -d
-u UPDATE, --update=UPDATE
update one or more modules (comma-separated list, use "all" for all modules). Requires -d.
Database related options:
-d DB_NAME, --database=DB_NAME
specify the database name
Testing Configuration:
--test-enable: Enable YAML and unit tests.

@aftab You need to add the log level, please see below.
./odoo.py -d <dbname> --test-enable --log-level=test
And regarding your question: if you are making changes to installed modules and need to re-run all test cases, then you simply need to restart your server with -u <module_name> (or -u all for all modules) together with the above command.
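For example, a typical invocation after changing an installed module looks roughly like this (the module and database names are placeholders; --stop-after-init is optional and makes the server exit once the update and tests finish):
./odoo.py -d test_db -u my_module --test-enable --log-level=test --stop-after-init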

Here is a really nice plugin to run Odoo unit tests directly with pytest:
https://github.com/camptocamp/pytest-odoo
Here's a result example (the original answer included a screenshot of the pytest output):
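For reference, a minimal pytest-odoo invocation looks roughly like this (the database and addon names are placeholders, and the addon must already be installed in that database):
pytest -s --odoo-database=test_db addons/my_module/tests/test_my_feature.py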

I was able to run Odoo's tests using PyCharm. To achieve this I used Docker + pytest-odoo + PyCharm (using remote interpreters).
First, you set up a Dockerfile like this:
FROM odoo:14
USER root
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
        python3-pip
RUN pip3 install pytest-odoo coverage pytest-html
USER odoo
And a docker-compose.yml like this:
version: '2'
services:
  web:
    container_name: plusteam-odoo-web
    build:
      context: .
      dockerfile: Dockerfile
    image: odoo:14
    depends_on:
      - db
    ports:
      - "8069:8069"
    volumes:
      - odoo-web-data:/var/lib/odoo
      - ./config:/etc/odoo
      - ./addons:/mnt/extra-addons
    command: --dev all
  db:
    container_name: plusteam-odoo-db
    image: postgres:13
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_PASSWORD=odoo
      - POSTGRES_USER=odoo
      - PGDATA=/var/lib/postgresql/data/pgdata
    volumes:
      - odoo-db-data:/var/lib/postgresql/data/pgdata
volumes:
  odoo-web-data:
  odoo-db-data:
So we extend an Odoo image with pytest-odoo and the packages needed to generate coverage reports.
Once you have this, you can run docker-compose up -d to get your Odoo instance running; the odoo container will have pytest-odoo installed. The next part is to tell PyCharm to use a remote interpreter based on the modified Odoo image that includes the pytest-odoo package.
Now every time you run a script in PyCharm, it will launch a new container based on the image you provided.
After examining the containers launched by PyCharm, I realized they bind the project's directory to the /opt/project/ directory inside the container. This is useful, because you will need to modify the odoo.conf file when you run your tests.
You can customize the database connection for a dedicated testing database (which you should do), and, importantly, you need to point the addons_path option at /opt/project/addons, or wherever your custom addons end up inside the containers launched by PyCharm.
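As a sketch of what such a testing config could contain (the paths, database name, and credentials below are assumptions based on the docker-compose.yml above; adjust them to your setup):
[options]
; core addons shipped with the image plus the project addons mounted by PyCharm
addons_path = /usr/lib/python3/dist-packages/odoo/addons,/opt/project/addons
db_host = db
db_port = 5432
db_user = odoo
db_password = odoo
db_name = odoo_test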
With this you can create a PyCharm run configuration for pytest (the original answer showed a screenshot of it here).
Notice how we provided the path to the Odoo config with the modifications for testing; this way the Odoo installation available in the container launched by PyCharm knows where your custom addons' code is located.
Now we can run the script, and even debug it, and everything works as expected.
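As a rough command-line equivalent of that run configuration (the config path and module name are placeholders, and it relies on pytest-odoo's --odoo-config option):
pytest -v --odoo-config=/opt/project/config/odoo-test.conf /opt/project/addons/my_module/tests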
I go further into this matter (my particular solution) in a Medium article, and I even wrote a repository with a working demo so you can try it out; hope this helps:
https://medium.com/plusteam/how-to-run-odoo-tests-with-pycharm-51e4823bdc59 https://github.com/JSilversun/odoo-testing-example
Be aware that with remote interpreters you just need to make sure the odoo binary can find the addons folder properly and you will be all set :) Besides, using a Dockerfile to extend an image helps to speed up development.

Related

PyCharm reports 'unresolved reference' on Python imports with docker-compose interpreter running

(See the attached image: unresolved references, e.g. in settings.py.)
I have already read similar questions and tried the answers suggested on this portal, like marking the folders in the PyCharm IDE as source root. I have also used the Repair IDE function repeatedly to rebuild the indexes. Nothing has helped so far.
I'm having this problem with PyCharm because I'm not running my Python installation in a venv and pointing the PyCharm interpreter at it, but working with a Docker Compose environment instead.
I have created a Dockerfile and a docker-compose.yml file for this purpose. If I use the terminal command "docker compose up", the container environment runs and my Python/Django application can also be started without errors via the browser. The respective logs of the containers do not show any problems either. So the problem doesn't seem to be with my Docker environment, but rather with how the PyCharm IDE interacts with it.
Here is my Dockerfile:
FROM python:3.10.4-slim-bullseye
# Set environment variables
ENV PIP_DISABLE_PIP_VERSION_CHECK 1
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# Set work directory
WORKDIR /cpp_base
# Install dependencies
COPY ./requirements.txt .
RUN pip install -r requirements.txt
# Copy project
COPY . .
And here is my docker-compose.yml:
version: "3.9"
services:
web:
build: .
container_name: python_django
command: python /cpp_base/manage.py runserver 0.0.0.0:8000
volumes:
- .:/cpp_base
ports:
- "8000:8000"
depends_on:
- db
db:
image: postgres:14.5
container_name: postgres_14.5
restart: always
ports:
- "5432:5432"
environment:
POSTGRES_DB: cpp_base
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
volumes:
- postgres_data:/var/lib/postgresql/data
pgadmin:
container_name: pgadmin4_container
image: dpage/pgadmin4
restart: always
volumes:
- pgadmin_data:/var/lib/pgadmin
environment:
PGADMIN_DEFAULT_EMAIL: admin#admin.com
PGADMIN_DEFAULT_PASSWORD: root
ports:
- "5050:80"
blackd:
restart: always
image: docker.io/pyfound/black
command: blackd --bind-host 0.0.0.0 --bind-port 45484
ports:
- "45484:45484"
portainer:
image: portainer/portainer-ce:latest
container_name: portainer
restart: unless-stopped
security_opt:
- no-new-privileges:true
volumes:
- /etc/localtime:/etc/localtime:ro
- /var/run/docker.sock:/var/run/docker.sock:ro
- ./portainer-data:/data
ports:
- "9000:9000"
volumes:
postgres_data:
pgadmin_data:
In my PyCharm IDE:
1. Connect to the Docker daemon: Settings -> Build, Execution, Deployment (see attached image).
2. Add a new interpreter: interpreter Docker Compose configuration (see attached image).
3. Select the new interpreter and check that all the needed packages are there: interpreter selection and package list (see attached image).
4. Configure a Run/Debug configuration (see attached configuration image).
After all these configurations, I was able to start the Docker environment inside the IDE with the green triangle play button. The code also seems to run, because I can see the Django default app in the browser. I don't have the slightest idea why the IDE shows the red underlines, though. The funny thing is that if I don't select any interpreter within the IDE, I can still run the application and I don't get any unresolved messages. So only when I set the interpreter to the "web" service in the Docker Compose file does the IDE start to complain.
Does anyone know how to help?
Thank you very much.
My Software Versions:
PyCharm 2022.2.2
Windows 11, 10.0.22000
Docker v2.12.0, running on WSL2
Python 3.10.4
Django 4.1
I have found a solution. JetBrains support and the JetBrains YouTrack bug tracker helped me solve the problem. There were two things I had to do:
1. First solution
First of all, support found an error in my PyCharm log that had to do with the PyCharm Docker interpreter.
The error in the log had the following output:
Error response from daemon: invalid environment variable: =::=::\
To fix this error you can do the same as in this bug report:
https://youtrack.jetbrains.com/issue/PY-24604/Unable-to-create-Docker-Compose-interpreter-InternalServerErrorException-invalid-environment-variable
So when setting up the remote Docker interpreter in PyCharm, uncheck the following option in the environment settings:
include parent environment variables
Unfortunately, this is quite hard to find and probably a lot of users won't find it right away and therefore run into the same error.
2. Second solution
A user on another platform gave me a hint about a bug in the current PyCharm and showed me the workaround for it. You can find the workaround here:
https://youtrack.jetbrains.com/issue/PY-55617/Pycharm-doesnt-recognize-any-of-my-installed-packages-on-a-remote-host
I can't say whether the two solutions mentioned depend on each other. However, after the fix in point 1 the error messages were gone from the logs, and after the workaround in point 2 all package dependencies and modules in the code were no longer shown as "unresolved references". This has been my solution.
I have this same issue.
As far as I can tell, JetBrains doesn't support a remote interpreter in a Docker orchestration. And although this is supposed to work, it is broken in 2022.2.
Here's the open issue on it.

command in docker-compose.yaml throwing errors

version: "3.4"
services:
app:
build:
context: ./
target: project_build
image: sig-project
environment:
PROJECT_NAME: "${PROJECT_NAME:-NullProject}"
command: /bin/bash -c "init_project.sh"
ports:
- 7777:7777
expose:
- 7777
volumes:
- ./$PROJECT_NAME:/app/$PROJECT_NAME
- .:/home/jovyan/work
working_dir: /app
entrypoint: "jupyter notebook --ip=0.0.0.0 --port=7777 --allow-root --no-browser"
Above is my docker-compose.yaml.
The command doesn't run; I get this error:
Unrecognized alias: 'c', it will have no effect
Furthermore, it runs the jupyter notebook out of /bin instead of /app.
If I change the command to
command: "init_project.sh"
it fails silently, and if I try to do something more complicated like:
command: python init_project.py
then it gives me this:
No such file or directory: /app/python
note: init_project.sh is just a bash script wrapper for init_project.py
So it seems that the commands are run in a way I don't understand: from within the /app directory, but without a shell or bash.
I've been hitting my head against the wall trying to figure out what I'm missing here and I don't know what else to try.
I've found a variety of issues and discussions that are similar, but nothing seems to resolve it.
These are the contents of the working dir /app:
#ls
Dockerfile docker-compose.yaml docker_compose.sh init_project.sh poetry.lock pyproject.toml
Makefile create_poetry_lock.sh docker_build.sh init_project.py install-poetry.py poetry_lock_update.sh test.sh
What am I missing?
Thanks!
Your compose file looks weird to me!
You either have a Docker image from which the container is created, or you have a Dockerfile present that builds the image, and the container is then created from that image.
Based on that:
Why do you have both image and build attributes in your compose file?
If the image is already available (e.g. PostgreSQL, RabbitMQ) then you don't need to build anything, just provide the image.
If there's no image, then please also add your Dockerfile to the question.
Why are you trying to run /bin/bash -c?
You can simply use:
bash /path/to/your/shell_script
Lastly, why don't you add your command to the Dockerfile with CMD?
What are init_project.sh and init_project.py? Maybe also share the content of those files.
It might also be good to add the tree output, so we know from where the different commands are being executed.
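For what it's worth, the behaviour described in the question is consistent with how entrypoint and command interact: when both are set, the command is appended as arguments to the entrypoint, so /bin/bash -c "init_project.sh" ends up being passed to jupyter notebook (hence the "Unrecognized alias: 'c'" message). A minimal sketch of one way to untangle this, assuming init_project.sh lives in /app and ends by exec'ing Jupyter itself, would be:
services:
  app:
    build:
      context: ./
      target: project_build
    image: sig-project
    working_dir: /app
    ports:
      - 7777:7777
    # the init script does its setup and then runs:
    #   exec jupyter notebook --ip=0.0.0.0 --port=7777 --allow-root --no-browser
    entrypoint: ["/bin/bash", "/app/init_project.sh"]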

View Docker Swarm CMD Line Output

I am trying to incorporate a Python container and a DynamoDB container into one stack file to experiment with Docker Swarm. I have done Docker Swarm tutorials where web apps run across multiple nodes, but I have never built anything independently. I can run docker-compose up with no issues, but I am struggling with Swarm.
My docker-compose.yml looks like
version: '3.3'
services:
  dynamodb:
    image: "amazon/dynamodb-local"
    ports:
      - "8000:8000"
  track-count:
    image: "my-app"
    links:
      - "dynamodb:localhost"
Running docker stack deploy -c docker-compose.yml trial_stack brings up no errors; however, the 'hello world' printed by the first line of my Python code is not displayed in the terminal. I get the following command-line output:
Ignoring unsupported options: links
Creating network trial_stack_default
Creating service trial_stack_dynamodb
Creating service trial_stack_track-count
My question is:
1) Why is the deploy service ignoring the links? I have noticed this is mentioned in the docs https://docs.docker.com/engine/reference/commandline/stack_deploy/ but I am unsure whether this will cause my stack to fail.
2) Assuming the links issue is fixed, where will any command line output be shown, to confirm the system is running? Currently I only have one node, my local machine, which is the manager.
For reference, my python image is being built by the following Dockerfile:
FROM python:3.8-slim-buster
RUN mkdir /app
WORKDIR /app
RUN pip install --upgrade pip
COPY ./requirements.txt ./
RUN pip install -r ./requirements.txt
COPY / /
COPY /resources/secrets.py /resources/secrets.py
CMD [ "python", "/main.py" ]
You can update docker-compose.yaml to enable tty for the services whose stdout you want to see on the console.
Updated docker-compose.yaml should look like this:
version: '3.3'
services:
  dynamodb:
    image: "amazon/dynamodb-local"
    ports:
      - "8000:8000"
  track-count:
    image: "my-app"
    tty: true
    links:
      - "dynamodb:localhost"
and then, once you have the task deployed, you can check the service logs by running:
# get the service name
docker stack services <STACK_NAME>
# display the service logs, edited based on user's suggestion
docker service logs --follow --raw <SERVICE_NAME>

docker-compose: Why is my python application being invoked here?

I've been scratching my head for a while with this. I have the following Dockerfile for my python application:
# Use an official Python runtime as a parent image
FROM frankwolf/rpi-python3
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
COPY . /app
RUN chmod 777 docker-entrypoint.sh
# Install any needed packages specified in requirements.txt
RUN pip3 install --trusted-host pypi.python.org -r requirements.txt
# Run __main__.py when the container launches (not sure if I need sudo here)
CMD ["sudo", "python3", "__main__.py", "-debug"]
docker-compose file:
version: "3"
services:
mongoDB:
restart: unless-stopped
volumes:
- "/data/db:/data/db"
ports:
- "27017:27017"
- "28017:28017"
image: "andresvidal/rpi3-mongodb3:latest"
mosquitto:
restart: unless-stopped
ports:
- "1883:1883"
image: "mjenz/rpi-mosquitto"
FG:
privileged: true
network_mode: "host"
depends_on:
- "mosquitto"
- "mongoDB"
volumes:
- "/home/pi:/home/pi"
#image: "arkfreestyle/fg:v1.8"
image: "test:latest"
entrypoint: /app/docker-entrypoint.sh
restart: unless-stopped
And this is what docker-entrypoint.sh looks like:
#!/bin/sh
if [ ! -f /home/pi/.initialized ]; then
    echo "Initializing..."
    echo "Creating .initialized"
    # Create .initialized hidden file
    touch /home/pi/.initialized
else
    echo "Initialized already!"
    sudo python3 __main__.py -debug
fi
Here's what I am trying to do:
(This stuff already works)
1) I need a docker image which runs my python application when I run it in a container. (this works)
2) I need a docker-compose file which runs two services plus my Python application, BUT before running my Python application I need to do some initialization work; for this I created a shell script, docker-entrypoint.sh. I want to do this initialization work ONLY ONCE, when I deploy my application on a machine for the first time. So I'm creating a .initialized hidden file which I'm using as a check in my shell script.
I read that using entrypoint in a docker-compose file overrides any entrypoint/CMD given in the Dockerfile. That's why, in the else portion of my shell script, I'm manually running my code using "sudo python3 __main__.py -debug"; this else portion works fine.
(This is the main question)
In the if portion, I do not run my application in the shell script. I've tested the shell script itself separately; both the if and else statements work as I expect. But when I run "sudo docker-compose up", the first time my shell script hits the if portion it echoes the two statements, creates the hidden file, and THEN RUNS MY APPLICATION. The console output appears in purple/pink/mauve for the application, while the other two services print their logs in yellow and cyan. I'm not sure if the colors matter, but normally my application logs are always green; in fact the first two echoes, "Initializing" and "Creating .initialized", are also green, so I thought I'd mention this detail. After those two echoes, my application mysteriously begins and logs console output in purple...
Why/how is my application being invoked in the if statement of the shell script?
(This only happens if I run through docker-compose, not if I just run the shell script with sh docker-entrypoint.sh.)
Problem 1
Using ENTRYPOINT and CMD at the same time has some strange effects; in particular, overriding entrypoint in docker-compose.yml also clears the image's default CMD, so only your entrypoint script runs.
Problem 2
This happens to your container:
1. It is started for the first time. The .initialized file does not exist.
2. The if case is executed. The file is created.
3. The script, and therefore the container, ends.
4. The restart: unless-stopped option restarts the container.
5. The .initialized file exists now, so the else case is run.
6. python3 __main__.py -debug is executed.
BTW the USER command in the Dockerfile or the user option in Docker Compose are better options than sudo.
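A minimal sketch of an entrypoint that does the one-time initialization but always starts the application afterwards (so the container never exits and triggers the restart loop described above) could look like this; it reuses the paths from the question:
#!/bin/sh
# hypothetical rework of docker-entrypoint.sh
if [ ! -f /home/pi/.initialized ]; then
    echo "Initializing..."
    # Create the .initialized hidden file so this block only runs once
    touch /home/pi/.initialized
else
    echo "Initialized already!"
fi
# exec replaces the shell so the Python process becomes PID 1 and receives signals
exec python3 __main__.py -debug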

Using docker to compose a remote image with a local code base for *development*

I have been reading this tutorial:
https://prakhar.me/docker-curriculum/
along with other tutorials and the Docker docs, and I am still not completely clear on how to do this task.
The problem
My local machine is running Mac OS X, and I would like to set up a development environment for a Python project. In this project I need to call an API from the Docker repo bamos/openface. The project also has some dependencies such as yaml, etc. If I just mount my local directory into openface, i.e.:
docker run -v path/to/project:/root/project -p 9000:9000 -p 8000:8000 -t -i bamos/openface /bin/bash
then I need to install yaml and the other dependencies, and every time I exit the container the installations are lost. Additionally, it is also much slower for some reason. So the right way to do this seems to be Docker Compose, but I am not sure how to proceed from here.
UPDATE
In response to the comments, I will now update the problem:
Right now my Dockerfile looks like this:
FROM continuumio/anaconda
ADD . /face-off
WORKDIR /face-off
RUN pip install -r requirements.txt
EXPOSE 5000
CMD [ "python", "app.py" ]
It is important that I build from Anaconda since a lot of my code uses NumPy and SciPy. Now I also need bamos/openface, so I tried adding that to my docker-compose.yml file:
version: '2'
services:
  web:
    build: .
    command: python app.py
    ports:
      - "5000:5000"
    volumes:
      - .:/face-off
  openface:
    build: bamos/openface
However, I am getting this error:
build path path/to/face-off/bamos/openface either does not exist, is not accessible, or is not a valid URL
So I need to reference bamos/openface the right way so I can build a container from it. Right now, bamos/openface is listed when I run docker images.
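As a side note, build expects a local path (or Git URL) containing a Dockerfile, which is why Compose complains that the build path does not exist. Since bamos/openface is already a built image (it shows up in docker images), a sketch of how that service could be declared instead, with the ports taken from the docker run command above, is:
  openface:
    image: bamos/openface
    ports:
      - "9000:9000"
      - "8000:8000"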
