Whenever I open my Gitpod workspace I have to re-install the packages from my requirements.txt file. I was reading about the .gitpod.yml file and see that I have to add the install step there so the dependencies get installed during the prebuild.
I can't find any examples of this, so I just want to check that I understand it correctly.
Right now my .gitpod.yml file looks like this:
image:
  file: .gitpod.Dockerfile
# List the start up tasks. Learn more https://www.gitpod.io/docs/config-start-tasks/
tasks:
  - init: echo 'init script' # runs during prebuild
    command: echo 'start script'
# List the ports to expose. Learn more https://www.gitpod.io/docs/config-ports/
ports:
  - port: 3000
    onOpen: open-preview
vscode:
  extensions:
    - ms-python.python
    - ms-azuretools.vscode-docker
    - eamodio.gitlens
    - batisteo.vscode-django
    - formulahendry.auto-close-tag
    - esbenp.prettier-vscode
Do I just add these two new 'init' and 'command' lines under tasks?
image:
  file: .gitpod.Dockerfile
# List the start up tasks. Learn more https://www.gitpod.io/docs/config-start-tasks/
tasks:
  - init: echo 'init script' # runs during prebuild
    command: echo 'start script'
  - init: pip3 install -r requirements.txt
    command: python3 manage.py
# List the ports to expose. Learn more https://www.gitpod.io/docs/config-ports/
ports:
  - port: 3000
    onOpen: open-preview
vscode:
  extensions:
    - ms-python.python
    - ms-azuretools.vscode-docker
    - eamodio.gitlens
    - batisteo.vscode-django
    - formulahendry.auto-close-tag
    - esbenp.prettier-vscode
Thanks so much for your help. I'm still semi-new to all this and trying to figure my way around.
To install requirements during the prebuild, you have to install them in the Dockerfile. The exception is editable installs (pip install -e .).
For example, to install a package named <package-name>, add this line to .gitpod.Dockerfile:
RUN python3 -m pip install <package-name>
Installing from a requirements file is slightly trickier because the Dockerfile can't "see" the repository files while the image is being built. One workaround is to point pip at the raw URL of the requirements file in the repo:
RUN python3 -m pip install -r https://gitlab.com/<gitlab-username>/<repo-name>/-/raw/master/requirements.txt
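Putting it together, the .gitpod.Dockerfile could look roughly like this (a sketch only: the gitpod/workspace-full base image and the raw GitLab URL are assumptions you would adapt to your own setup):

FROM gitpod/workspace-full
# Install the project's dependencies at image-build time so they are already
# present in the prebuilt workspace. Replace the URL placeholders with the raw
# URL of your own requirements.txt.
RUN python3 -m pip install -r https://gitlab.com/<gitlab-username>/<repo-name>/-/raw/master/requirements.txt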
Edit: Witness my embarrassing struggle with the same issue today: https://github.com/gitpod-io/gitpod/issues/7306
Related
We're moving our CI from Jenkins to GitLab and I'm trying to set up a pipeline that runs on both Windows and Linux.
Running multiple Python versions on a Linux GitLab runner works fine by defining versions like this:
.versions:
  parallel:
    matrix:
      - PYTHON_VERSION: ['3.7', '3.8', '3.9']
        OPERATING_SYSTEM: ['linux', 'windows']
and then referencing them in each job where they are needed:
build_wheel:
  parallel: !reference [.versions, parallel]
I'm trying to add a Windows runner now and have run into the snag that PowerShell syntax is different from Bash. Most of the Python calls still work, but calling the activate script needs to be different. How do I switch scripts depending on the operating system?
It doesn't seem to be possible to add rules to a script, so I'm trying something like this:
.activate_linux: &activate_linux
  rules:
    - if: $OPERATING_SYSTEM == 'linux'
  script:
    - source venv/bin/activate
.activate_windows: &activate_windows
  rules:
    - if: $OPERATING_SYSTEM == 'windows'
  script:
    - .\venv\Scripts\activate
.activate: &activate
  - *activate_linux
  - *activate_windows
before_script:
  - python -m venv venv
  - *activate
  - pip install --upgrade pip wheel "setuptools<60"
but it gives me the error: "before_script config should be a string or a nested array of strings up to 10 levels deep".
Is it possible to have one .gitlab-ci.yml file that works on both Windows and Linux? Surely someone has worked this out, but I can't find any solutions.
You can't easily do this in the yaml with that matrix.
Instead, you can do this:
.scripts:
  make_venv:
    - python -m venv venv
  activate_linux:
    - !reference [.scripts, make_venv]
    - source ./venv/bin/activate
  activate_windows:
    - !reference [.scripts, make_venv]
    - venv/Scripts/activate.ps1
.job_template:
  parallel:
    matrix:
      - PYTHON_VERSION: ['3.7', '3.8', '3.9']
  script:
    - pip install --upgrade pip wheel "setuptools<60"
    - # ...
build_linux:
  extends: .job_template
  variables:
    OPERATING_SYSTEM: 'linux'
  before_script:
    - !reference [.scripts, activate_linux]
build_windows:
  extends: .job_template
  variables:
    OPERATING_SYSTEM: 'windows'
  before_script:
    - !reference [.scripts, activate_windows]
I apologize in advance. I have a task to create a CI pipeline in GitLab for Python projects, with the results going to SonarQube. I found this gitlab-ci.yml file:
image: image-registry/gitlab/python
before_script:
  - cd ..
  - git clone https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab/python-education/junior.git
stages:
  - PyLint
pylint:
  stage: PyLint
  only:
    - merge_requests
  script:
    - cp -R ${CI_PROJECT_NAME}/* junior/project
    - cd junior && python3 run.py --monorepo
Is it possible to add some code to the script so that the output goes to SonarQube?
Yes, third-party issues are supported by SonarQube. For Pylint, you can set sonar.python.pylint.reportPath in your sonar-project.properties file to the path of the report(s) produced by Pylint. You must pass the --output-format=parseable argument to pylint.
When you run the sonar scanner, it will pick up the report(s) and send the results to SonarQube.
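For example, a minimal setup might look roughly like this (the project key, package directory, and report file name are placeholders; newer SonarQube versions call the property sonar.python.pylint.reportPaths, so check your version's documentation):

# sonar-project.properties
sonar.projectKey=my-project
sonar.python.pylint.reportPath=pylint-report.txt

# in the CI job's script section (pylint exits non-zero when it finds issues, hence || true)
pylint --output-format=parseable my_package/ > pylint-report.txt || true
sonar-scanner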
I'm running into an issue where it seems I can only run a python command in either the dockerfile or Kubernetes. Right now I have two python scripts, the first script setting up keys and tokens so the second script can run properly.
My dockerfile looks something like this:
FROM python:3.8.0-alpine
WORKDIR /code
COPY script1.py .
COPY script2.py .
# Install python libraries
RUN pip install --upgrade pip
RUN apk add build-base
RUN apk add linux-headers
RUN pip install -r requirements.txt
CMD [ "python", "-u", "script2.py"]
and my Kubernetes yaml file is:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: script2
  labels:
    app: script2-app
spec:
  selector:
    matchLabels:
      app: script2-app
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: script2-app
    spec:
      containers:
        - name: script2-app
          image: script2:v1.0.1
          ports:
            - containerPort: 5000
          env:
            - name: KEYS_AND_TOKENS
              valueFrom:
                secretKeyRef:
                  name: my_secret
                  key: KEYS_AND_TOKENS
          command:
            - "python"
            - "script1.py"
The issue starts with the 'command' portion in the yaml file. Without it, Kubernetes will run the container as usual. (the container can still run without the keys and tokens. It will just log that some functions failed to run then move on.) However, when I include the 'command' portion, script1 will run and successfully set up the keys. But once script1 finishes, nothing else happens. The deployment continues to run but script2 never starts.
The reason I am doing it this way is that script2 may need to restart on occasion due to internet connection failures causing it to crash. Since all script1 does is set up keys and tokens, it only needs to run once; then things are set up for as long as the pod lives. I don't want to verify the keys and tokens every time script2 restarts. This is why the two scripts are separate and why I'm only running script1 at startup.
Any help would be much appreciated!
What is happening?
The command supplied through the yaml file overrides the image's ENTRYPOINT, and when command is set without args the image's CMD is ignored as well (you can refer to the Kubernetes documentation here). So when you supply a command that runs script1, the CMD in the Dockerfile that would run script2 never executes; only script1 runs, which is why nothing happens after it finishes.
How to resolve?
Step 1: Create a shell script as follows (naming it "run.sh"); note that python:3.8.0-alpine ships with /bin/sh, not bash:
#!/bin/sh
# Run the one-time setup script first, then replace the shell with the
# long-running script so it becomes PID 1 and receives signals directly.
python3 ./script1.py
exec python3 ./script2.py
Step 2: Update the dockerfile as:
FROM python:3.8.0-alpine
WORKDIR /code
# Copy the requirements file, both scripts, and the startup shell script
COPY requirements.txt .
COPY script1.py .
COPY script2.py .
COPY run.sh .
# Install python libraries
RUN pip install --upgrade pip
RUN apk add build-base
RUN apk add linux-headers
RUN pip install -r requirements.txt
# Make the startup script executable
RUN chmod a+x run.sh
# Run both scripts through the startup script
CMD ["./run.sh"]
Step 3: Remove the command from the kubernetes deployment yaml
But if you want to keep the command in the yaml file, then you should replace its value with "./run.sh".
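For reference, here is a sketch of just the relevant part of the container spec from the deployment above:

containers:
  - name: script2-app
    image: script2:v1.0.1
    command: ["./run.sh"]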
Note:
Ideally you should not run two different scripts like this. If you need to set up tokens/keys, do that in a module that your main script imports and calls. You can handle network connectivity issues through a combination of exception handling and a retry mechanism.
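For instance, a minimal retry helper might look like this (a sketch only; the wrapped function and the exception types are placeholders for whatever your script actually does):

import time

def with_retries(func, attempts=5, delay=2):
    """Call func(), retrying on connection problems with a fixed delay."""
    for attempt in range(1, attempts + 1):
        try:
            return func()
        except (ConnectionError, TimeoutError):
            if attempt == attempts:
                raise          # give up after the last attempt
            time.sleep(delay)  # wait before retrying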
When tests are launched in GitLab CI, pytest-sugar doesn't show output the way it does locally. What can the problem be?
My gitlab config:
image: project.com/path/dir
stages:
  - tests
variables:
  TESTS_ENVIRORMENT:
    value: "--stage my_stage"
    description: "Tests launch my_stage as default"
before_script:
  - python3 --version
  - pip3 install --upgrade pip
  - pip3 install -r requirements.txt
api:
  stage: tests
  script:
    - pytest $TESTS_ENVIRORMENT Tests/API/ -v
(Screenshots comparing the local pytest-sugar output with the plain output in GitLab CI omitted.)
It seems that there's a problem with pytest-sugar inside containers. Add the --force-sugar option to the pytest call; it worked for me.
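In the config above, that would mean changing only the script line of the api job, roughly like this:

api:
  stage: tests
  script:
    - pytest $TESTS_ENVIRORMENT Tests/API/ -v --force-sugar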
By default, Docker containers (and CI job shells) do not allocate a pseudo-terminal (TTY), so pytest sees that stdout is not an interactive terminal and falls back to plain output.
There is no clean solution for this case; it mostly comes down to workarounds, such as forcing the plugin's output or using tools that fake a TTY.
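One such workaround, assuming the script utility from util-linux is available in the job image, is to run pytest inside the pseudo-terminal that script allocates:

script:
  - script -qec "pytest $TESTS_ENVIRORMENT Tests/API/ -v" /dev/null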
I am trying to integrate Docker into my Django workflow and I have everything set up except one really annoying issue: if I want to add dependencies to my requirements.txt file, I basically have to rebuild the entire container image for those dependencies to stick.
For example, I followed the docker-compose example for Django here. The yaml file is set up like this:
db:
  image: postgres
web:
  build: .
  command: python manage.py runserver 0.0.0.0:8000
  volumes:
    - .:/code
  ports:
    - "8000:8000"
  links:
    - db
and the Dockerfile used to build the web container is set up like this:
FROM python:2.7
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
ADD . /code/
So when the image is built for this container requirements.txt is installed with whatever dependencies are initially in it.
If I am using this as my development environment it becomes very difficult to add any new dependencies to that requirements.txt file because I will have to rebuild the container for the changes in requirements.txt to be installed.
Is there some sort of best practice out there in the django community to deal with this? If not, I would say that docker looks very nice for packaging up an app once it is complete, but is not very good to use as a development environment. It takes a long time to rebuild the container so a lot of time is wasted.
I appreciate any insight. Thanks.
You could mount requirements.txt as a volume when using docker run (untested, but you get the gist):
docker run -v "$(pwd)/requirements.txt:/code/requirements.txt" container:tag
Then you could bundle a script with your container which runs pip install -r requirements.txt before starting your application, and use that as your ENTRYPOINT. I love the custom entrypoint script approach; it lets me do a little extra work without needing to build a new image.
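A sketch of what that entrypoint script might look like (the entrypoint.sh name and the /code path are assumptions taken from the compose setup above):

#!/bin/sh
# Install whatever is currently in the mounted requirements file,
# then hand off to the command the container was started with.
pip install -r /code/requirements.txt
exec "$@"

In the Dockerfile you would then COPY entrypoint.sh into the image, mark it executable, set ENTRYPOINT ["/code/entrypoint.sh"], and keep the existing start command as the CMD that gets handed off to.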
That said, if you're changing your dependencies, you're probably changing your application and you should probably make a new container and tag it with a later version, no? :)
So I changed the yaml file to this:
db:
  image: postgres
web:
  build: .
  command: sh startup.sh
  volumes:
    - .:/code
  ports:
    - "8000:8000"
  links:
    - db
I made a simple shell script startup.sh:
#!/bin/bash
# Restart this script as root, if not already root
[ "$(whoami)" = root ] || exec sudo "$0" "$@"
pip install -r dev-requirements.txt
python manage.py runserver 0.0.0.0:8000
and then made a dev-requirements.txt that is installed by the above shell script as sort of a dependency staging environment.
When I am satisfied with a dependency in dev-requirements.txt, I just move it over to requirements.txt so it gets committed to the next build of the image. This gives me flexibility to play with adding and removing dependencies while developing.
I think the best way is to ignore what's currently the most common way to install Python dependencies (pip install -r requirements.txt) and specify your requirements directly in the Dockerfile, effectively getting rid of the requirements.txt file. As a bonus, you get Docker's layer caching for free.
FROM python:2.7
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
# install requirements before the ADD below, since every layer after that ADD is rebuilt whenever the code changes
RUN pip install flask==0.10.1
RUN pip install sqlalchemy==1.0.6
...
ADD . /code/
If the Docker container is the only way your application is ever run, then I would suggest you do it this way. If you want to support other means of setting up your code (e.g. a virtualenv), then this is of course not for you, and you should fall back to either using a requirements file or a setup.py routine. Either way, I found this approach to be the simplest and most straightforward, without having to deal with all the messy Python package distribution issues.