I am implementing CircleCI for one of my projects. The project is built on Django 3.2.
My test cases run fine when I run them locally with python manage.py test blog, but when the same command runs in CircleCI it returns:
======================================================================
ERROR: project.blog (unittest.loader._FailedTest)
----------------------------------------------------------------------
ImportError: Failed to import test module: project.blog
Traceback (most recent call last):
File "/usr/local/lib/python3.8/unittest/loader.py", line 470, in _find_test_path
package = self._get_module_from_name(name)
File "/usr/local/lib/python3.8/unittest/loader.py", line 377, in _get_module_from_name
__import__(name)
ModuleNotFoundError: No module named 'project.blog'
Here is my CircleCI config
version: 2
jobs:
  build:
    docker:
      - image: circleci/python:3.8
    steps:
      - checkout
      - run:
          name: Installing dependencies
          command: |
            python3 -m venv venv
            . venv/bin/activate
            pip3 install -r requirements.txt
      - run:
          name: Running migrations
          command: |
            . venv/bin/activate
            python manage.py migrate --skip-checks
      - run:
          name: Running tests
          command: |
            . venv/bin/activate
            python manage.py test blog
I understand that CircleCI clones the project into a project folder. Is that something I am missing in my config?
CircleCI by default checks our codebase out to /home/circleci/project. The problem was that my INSTALLED_APPS list included an app named project, which conflicted with that directory name.
When CircleCI ran python manage.py test, the unittest loader went looking for the blog app inside the Django app project.
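To illustrate the clash, this is roughly what the conflicting setup looked like (a hypothetical sketch, not my exact settings.py):

# settings.py (hypothetical sketch)
INSTALLED_APPS = [
    "django.contrib.contenttypes",
    "django.contrib.auth",
    "project",  # an app literally named "project", same as CircleCI's checkout directory
    "blog",
]
# With the code checked out to /home/circleci/project, test discovery resolved
# the blog app as project.blog, which is not a real module, hence the ImportError.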
I fixed this problem by changing the default path that CircleCI checks our codebase out to. Here is the updated CircleCI config:
version: 2
jobs:
  build:
    working_directory: ~/platform # Here is the answer
    docker:
      - image: circleci/python:3.8
    steps:
      - checkout:
          path: ~/platform # Here is the answer
      - run:
          name: Install dependencies
          command: |
            ls -l
            python3 -m venv venv
            . venv/bin/activate
            pip3 install -r requirements.txt
      - run:
          name: Run migrations
          command: |
            . venv/bin/activate
            python manage.py migrate --skip-checks
      - run:
          name: Run tests
          command: |
            . venv/bin/activate
            python manage.py test website
I'm creating a project which needs to make a connection from Python running in a docker container to a MySQL database running in another container. Currently, my docker-compose file looks like this:
version: "3"
services:
  login:
    build:
      context: ./services/login
      dockerfile: docker/Dockerfile
    ports:
      - "80:80"
    # Need to remove this volume - this is only for dev work
    volumes:
      - ./services/login/app:/app
    # Need to remove this command - this is only for dev work
    command: /start-reload.sh
  db_users:
    image: mysql
    volumes:
      - ./data/mysql/users_data:/var/lib/mysql
      - ./databases/users:/docker-entrypoint-initdb.d/:ro
    restart: always
    ports:
      - 3306:3306
    # Remove 'expose' below for prod
    expose:
      - 3306
    environment:
      MYSQL_ROOT_PASSWORD: password
      MYSQL_DATABASE: users
      MYSQL_USER: user
      MYSQL_PASSWORD: password
And my Dockerfile for the login service looks like this:
# Note: this needs to be run from parent service directory
FROM tiangolo/uvicorn-gunicorn-fastapi:python3.8
# Install Poetry
RUN curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py | POETRY_HOME=/opt/poetry python && \
cd /usr/local/bin && \
ln -s /opt/poetry/bin/poetry && \
poetry config virtualenvs.create false
# Copy using poetry.lock* in case it doesn't exist yet
COPY ./app/pyproject.toml ./app/poetry.lock* /app/
RUN poetry install --no-root --no-dev
COPY ./app /app
I am trying to connect my login service to db_users, and want to make use of mysqlclient, but when I run poetry add mysqlclient, I get an error which includes the following lines:
/bin/sh: mysql_config: command not found
/bin/sh: mariadb_config: command not found
/bin/sh: mysql_config: command not found
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/private/var/folders/33/5yy7bny964bb0f3zggd1b4440000gn/T/pip-req-build-lak6lqu7/setup.py", line 15, in <module>
metadata, options = get_config()
File "/private/var/folders/33/5yy7bny964bb0f3zggd1b4440000gn/T/pip-req-build-lak6lqu7/setup_posix.py", line 70, in get_config
libs = mysql_config("libs")
File "/private/var/folders/33/5yy7bny964bb0f3zggd1b4440000gn/T/pip-req-build-lak6lqu7/setup_posix.py", line 31, in mysql_config
raise OSError("{} not found".format(_mysql_config_path))
OSError: mysql_config not found
mysql_config --version
mariadb_config --version
mysql_config --libs
----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
I'm assuming this has something to do with needing the mysql-connector-c library, but I'm not sure how to go about getting it through Poetry.
I was looking at following this tutorial, but since I'm not running MySQL locally but rather in Docker, I'm not sure how to translate those steps to my setup.
So essentially, my question is two-fold:
How do I add mysqlclient to my pyproject.toml file?
How do I get this working in my Docker environment?
I was forgetting that my dev environment is also in Docker, so I didn't really need to worry about the local Poetry environment.
With that said, I edited the Dockerfile to look like the below:
# Note: this needs to be run from parent service directory
FROM tiangolo/uvicorn-gunicorn-fastapi:python3.8
RUN apt-get update && apt-get install -y default-libmysqlclient-dev
# Install Poetry
RUN curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py | POETRY_HOME=/opt/poetry python && \
cd /usr/local/bin && \
ln -s /opt/poetry/bin/poetry && \
poetry config virtualenvs.create false
# Copy using poetry.lock* in case it doesn't exist yet
COPY ./app/pyproject.toml ./app/poetry.lock* /app/
RUN poetry install --no-root --no-dev
COPY ./app /app
With that change, everything now works as expected.
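For completeness, here is a minimal sketch of how the login service can then reach the db_users container over the Compose network, using the service name as the hostname and the credentials from the compose file above (illustrative only, not part of the original answer):

# db_check.py -- hypothetical connectivity check run inside the login container
import MySQLdb  # provided by the mysqlclient package

conn = MySQLdb.connect(
    host="db_users",  # the Compose service name doubles as the hostname
    port=3306,
    user="user",
    passwd="password",
    db="users",
)
cur = conn.cursor()
cur.execute("SELECT VERSION()")
print(cur.fetchone())
conn.close()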
I have a Python Flask application setup in PyCharm. The folder structure for the project is as follows:
- README.md
- .gitignore
- projecta/
  - __init__.py
  - src/__init__.py
  - src/app.py
  - src/api/hello.py
  - src/service/helloService.py
  - Dockerfile
  - requirements.txt
- projectb/
In my dockerfile, I have the following content:
FROM python:3.6
RUN mkdir /projecta
WORKDIR /projecta
ADD . /projecta/
RUN pip install -r requirements.txt
EXPOSE 8000
CMD ["python", "/projecta/src/app.py"]
In PyCharm, I run it as a Python configuration with the script path set to path-to-folder/projecta/src/app.py and the working directory set to path-to-folder/projecta/src.
When I run it from PyCharm, everything works without issues. But when I build the image with docker build -t a:0.0.2 . and run it with docker run -d --name a a:0.0.2, it gives the following error:
Traceback (most recent call last):
File "/projecta/src/app.py", line 3, in <module>
from projecta.src.api import api
ModuleNotFoundError: No module named 'projecta'
I am not an expert in Python/Flask or Docker. Can someone point out what is wrong here?
Try from .api import api on line 3 of app.py.
It will not run in PyCharm then, but it will work in Docker.
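One way to see why the import behaves differently in the two environments is to print sys.path at startup: PyCharm (with its default "add content roots to PYTHONPATH" option) puts the project root on the path, while inside the container only /projecta/src (the directory of the script being executed) is on it, so the top-level projecta package cannot be found. A diagnostic sketch (hypothetical file, not part of the original project):

# debug_paths.py -- run inside the container with:
#   docker run --rm a:0.0.2 python /projecta/src/debug_paths.py
import sys

for entry in sys.path:
    print(entry)
# The first entry is /projecta/src (the script's directory); neither / nor the
# project root is listed, which is why "import projecta" fails here.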
I am building a Python module. In order to define its path, I created a virtual environment and a .pth file, as follows:
# creation of the virtual environment
python -m venv env
# activation of the newly created virtual environment
source env/bin/activate
To set the path of my module (which is located in packages/regression_model/regression_model), I created the file env/lib/python3.7/site-packages/regression_model.pth, which contains:
# env/lib/python3.7/site-packages/regression_model.pth
../../../../packages/regression_model
Now, anywhere in my project, I can import my module regression_model with:
import regression_model
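A quick sanity check (assuming the layout above) confirms what the .pth file does: non-absolute lines in a .pth file are resolved relative to the site-packages directory that contains it, so packages/regression_model lands on sys.path and the inner package becomes importable.

import sys
import regression_model

print(regression_model.__file__)
# The .pth line is added to sys.path relative to site-packages, so
# <project_root>/packages/regression_model should show up here:
print([p for p in sys.path if p.rstrip("/").endswith("packages/regression_model")])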
Actually my objective is to use CircleCI for the deployment of my project.
CircleCI is configured as follows:
version: 2
jobs:
  test_regression_model:
    working_directory: ~/project
    docker:
      - image: circleci/python:3.7.6
        environment: # environment variables for primary container
          PYTHONPATH: ~/project/packages/regression_model:~/project/packages/ml_api
    steps:
      - checkout
      - run:
          name: Running tests
          command: |
            virtualenv venv
            . venv/bin/activate
            pip install --upgrade pip
            pip install -r packages/regression_model/requirements.txt
            chmod +x ./scripts/fetch_kaggle_dataset.sh
            ./scripts/fetch_kaggle_dataset.sh
            python packages/regression_model/regression_model/train_pipeline.py
            py.test -vv packages/regression_model/tests
workflows:
  version: 2
  test-all:
    jobs:
      - test_regression_model
The problem I am facing is that CircleCI indicates that my module cannot be imported:
Traceback (most recent call last):
File "packages/regression_model/regression_model/train_pipeline.py", line 4, in <module>
from regression_model import pipeline
ModuleNotFoundError: No module named 'regression_model'
To solve the problem, the path to the regression_model module has to be defined in CI exactly as it was defined locally. The question then is: how do I define that path in CircleCI?
I tried to do it through the PYTHONPATH environment variable, but without success.
Any suggestions?
I found the solution. Similarly to what was done manually on my local machine, I just run two commands to get it done in CircleCI:
echo "../../../../packages/regression_model" >> env/lib/python3.7/site-packages/extra.pth
echo "../../../../packages/ml_api" >> env/lib/python3.7/site-packages/extra.pth
And below is the full yml file, in case it helps others.
version: 2
jobs:
  test_regression_model:
    working_directory: ~/project
    docker:
      - image: circleci/python:3.7.6
    steps:
      - checkout
      - run:
          name: Running tests
          command: |
            virtualenv env
            . env/bin/activate
            pip install --upgrade pip
            pip install -r packages/regression_model/requirements.txt
            echo "../../../../packages/regression_model" >> env/lib/python3.7/site-packages/extra.pth
            echo "../../../../packages/ml_api" >> env/lib/python3.7/site-packages/extra.pth
            chmod +x ./scripts/fetch_kaggle_dataset.sh
            ./scripts/fetch_kaggle_dataset.sh
            sudo apt-get install unzip
            unzip packages/regression_model/regression_model/datasets/house-prices-advanced-regression-techniques.zip -d packages/regression_model/regression_model/datasets/
            python packages/regression_model/regression_model/train_pipeline.py
            py.test -vv packages/regression_model/tests
workflows:
  version: 2
  test-all:
    jobs:
      - test_regression_model
I am having some problems using the Google App Engine Python SDK in Travis CI. I always get this exception:
Failure: ImportError (No module named google.appengine.api) ... ERROR
I think the problem is in my Travis file or my Django settings file. Can I use the GAE SDK API on the Travis platform?
Here is my .travis.yml file:
language: python
python:
  - "2.7"
before_script:
  - wget https://storage.googleapis.com/appengine-sdks/featured/google_appengine_1.9.10.zip -nv
  - unzip -q google_appengine_1.9.10.zip
  - mysql -e 'create database DATABASE_NAME;'
  - echo "USE mysql;\nUPDATE user SET password=PASSWORD('A_PASSWORD') WHERE user='USER';\nFLUSH PRIVILEGES;\n" | mysql -u USER
  - python manage.py syncdb --noinput
install:
  - pip install -r requirements.txt
  - pip install mysql-python
script: python manage.py test --with-coverage
branches:
  only:
    - testing
Thank you
After a lot of trying, I solved it by adding this in the before_script section of my travis.yml file, after the unzip step:
- export PYTHONPATH=${PYTHONPATH}:google_appengine
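The export works because the unzipped SDK directory contains a top-level google package, so putting that directory on PYTHONPATH lets the import machinery find google.appengine. Roughly the same effect can be reproduced in Python for a local check (a sketch, assuming the SDK was unzipped into the current directory as google_appengine):

import os
import sys

# Equivalent to: export PYTHONPATH=${PYTHONPATH}:google_appengine
sys.path.insert(0, os.path.abspath("google_appengine"))

import google.appengine.api  # the package should now be resolvable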
I'm using the following Travis CI configuration:
language: python
env:
  - DJANGO=1.4
  - DJANGO=1.5
  - DJANGO=1.6
python:
  - "2.6"
  - "2.7"
install:
  - sudo pip install Django==$DJANGO
  - sudo pip install .
script:
  - cd autotest
  - python manage.py test ...
But every time the tests are executed, I run into the following issue:
$ python manage.py test ...
Traceback (most recent call last):
File "manage.py", line 8, in <module>
from django.core.management import execute_from_command_line
ImportError: No module named django.core.management
The command "python manage.py test ..." exited with 1.
As I said on IRC:
You are running pip install as root. More than that, sudo resets the environment before finding and running pip. This means your pip install does not go into the virtualenv that Travis provides, but into the global site-packages.
When you run python manage.py test, you are using the Python binary provided by the virtualenv, and the virtualenv does not look in the system site-packages. So it cannot see the Django you installed there.
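In other words, the Django you installed and the Python you test with live in different environments; dropping sudo so that pip installs into the Travis-provided virtualenv resolves the mismatch. A small diagnostic, if you want to see it for yourself inside the build (hypothetical snippet, not part of the original config):

import sys

print(sys.executable)  # the virtualenv's python, not /usr/bin/python
import django          # fails: sudo pip put Django into the system
                       # site-packages, which this virtualenv does not see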