I am following the Push to deploy instructions to use Jenkins to test and deploy a Google App Engine app written in Python with Flask.
The tests live in the root folder of the app, in a file called tests.py.
The command in the execute shell step is
nosetests tests.py
I get the following error, and I am not sure how to troubleshoot it as I am fairly new to Jenkins.
Started by user User Name
Building remotely on cloud-dev-php in workspace /var/jenkins/workspace/CFC Melbourne production pipeline
> git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
> git config remote.origin.url https://source.developers.google.com/p/cfc-melbourne-website/ # timeout=10
Fetching upstream changes from https://source.developers.google.com/p/cfc-melbourne-website/
> git --version # timeout=10
using .gitcredentials to set credentials
> git config --local credential.helper store --file=/tmp/git7069316934747655973.credentials # timeout=10
> git -c core.askpass=true fetch --tags --progress https://source.developers.google.com/p/cfc-melbourne-website/ +refs/heads/*:refs/remotes/origin/*
> git config --local --remove-section credential # timeout=10
> git rev-parse refs/remotes/origin/master^{commit} # timeout=10
> git rev-parse refs/remotes/origin/origin/master^{commit} # timeout=10
Checking out Revision 3a8caffa38303b3ae4741aac83e6ac807077b5be (refs/remotes/origin/master)
> git config core.sparsecheckout # timeout=10
> git checkout -f 3a8caffa38303b3ae4741aac83e6ac807077b5be
> git rev-list 3a8caffa38303b3ae4741aac83e6ac807077b5be # timeout=10
[CFC Melbourne production pipeline] $ /bin/sh -xe /tmp/hudson3364335209750264714.sh
+ nosetests tests.py
/tmp/hudson3364335209750264714.sh: 2: /tmp/hudson3364335209750264714.sh: nosetests: not found
Build step 'Execute shell' marked build as failure
Finished: FAILURE
This isn't really a Jenkins problem. As the build output indicates, your shell script is failing because it cannot find the nosetests executable:
nosetests: not found
Have you made sure that nose is installed on the cloud-dev-php Jenkins build machine?
Supposedly it should already be installed if you're using that push-to-deploy image, but since your build is running on the PHP build machine rather than the Python one, perhaps that's not the case.
You should double-check that you've followed the instructions to ensure that your Python Jenkins job runs on a Python build machine.
If it is installed, perhaps it's not on the default PATH, in which case you can change nosetests to /usr/local/bin/nosetests (or whatever the full path turns out to be).
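While you sort out which machine the job lands on, a more defensive sketch of the Execute Shell step (assuming pip is available on the build machine and tests.py sits in the workspace root):

# Install nose at the user level if it's missing, then make sure
# pip's user bin directory is on the PATH before running the tests.
which nosetests || pip install --user nose
export PATH="$HOME/.local/bin:$PATH"
nosetests tests.py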
I have a working GitLab CI/CD setup which takes my Conan recipe and my library repo, clones whichever git tag you hard-code, builds the package, and pushes it to the GitLab package registry. GREAT!
What I am wondering is how I should automate this so it looks at the git repo and builds ALL git tags, so that I can roll back and forth more easily between Conan packages.
For reference, here is my conan.py:
from conans import ConanFile, CMake, tools

class TwsApiConan(ConanFile):
    name = "twsapi"
    version = "10.17.01"
    license = "IBKR"
    author = "someemail"
    url = "https://github.com/ibkr/tws-api/"
    description = "Built from a mirror of the actual TWS API files in Github"
    topics = ("tws", "interactive brokers")
    settings = "os", "compiler", "build_type", "arch"
    options = {"shared": [True, False]}
    default_options = {"shared": False}
    generators = "cmake"

    def source(self):
        self.run("git clone --depth 1 --branch 10.17.01 git@github.com:ibkr/tws-api.git")
        tools.replace_in_file("tws-api/CMakeLists.txt", " LANGUAGES CXX )",
                              ''' LANGUAGES CXX )
add_compile_options(-std=c++17)''')

    def build(self):
        cmake = CMake(self)
        cmake.configure(source_folder="tws-api")
        cmake.build()

    def package(self):
        self.copy("*.h", dst="include", src="tws-api/source/cppclient/client")
        self.copy("*hello.lib", dst="lib", keep_path=False)
        self.copy("*.dll", dst="bin", keep_path=False)
        self.copy("*.so", dst="lib", keep_path=False)
        self.copy("*.dylib", dst="lib", keep_path=False)
        self.copy("*.a", dst="lib", keep_path=False)

    def package_info(self):
        self.cpp_info.libs = ["twsapi"]
The GitLab CI/CD routine so far:
variables:
  GITHUB_DEPLOY_KEY_BASE64: $GITHUB_DEPLOY_KEY_BASE64

stages:          # List of stages for jobs, and their order of execution
  - build

build-job:       # This job runs in the build stage, which runs first.
  stage: build
  image: registry.gitlab.com/jrgemcp-public/gitlab-cicd-docker/build-conan-docker:latest
  before_script:
    - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
    - eval $(ssh-agent -s)
    - MY_SECRET_DECODED="$(echo $GITHUB_DEPLOY_KEY_BASE64 | base64 -d)"
    - echo "$MY_SECRET_DECODED" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - ssh-keyscan github.com >> ~/.ssh/known_hosts 2>/dev/null
    - chmod 644 ~/.ssh/known_hosts
  script:
    - conan profile new default --detect
    - conan profile update settings.compiler.libcxx=libstdc++11 default
    - conan remote add gitlab https://gitlab.com/api/v4/projects/${CI_PROJECT_ID}/packages/conan
    - conan user myusername -r gitlab -p ${CI_JOB_TOKEN}
    - conan create . mypackagename/prod
    - conan upload "*" --remote=gitlab --all --confirm
You could generate your config dynamically in a script. That is to say, you might script getting all the tags/refs you want to build and create a YAML file containing a job for each ref that will check out the correct ref and build it.
Basic idea in bash:
for tag in $(get-all-tags-to-build); do
  job_yaml="job ${tag}: {\"script\": \"make build ${tag}\"}"
  echo "$job_yaml" >> generated-config.yml
done
The idea is that make build is configured to check out the tag provided as the argument and run the build.
If you pass that generated config as an artifact to a trigger job, the resulting child pipeline will contain a job for every ref returned by the get-all-tags-to-build script (which you implement). See the sketch below.
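A minimal sketch of the parent-pipeline wiring for that idea, assuming tags are enumerated with git tag --list in a full clone (the job names and the make build convention are placeholders, not part of your original setup):

stages: [generate, trigger]

generate-config:
  stage: generate
  script:
    # Emit one child job per tag; each job checks out and builds that tag.
    - |
      for tag in $(git tag --list); do
        echo "build ${tag}: {\"script\": \"make build ${tag}\"}" >> generated-config.yml
      done
  artifacts:
    paths:
      - generated-config.yml

build-all-tags:
  stage: trigger
  trigger:
    include:
      - artifact: generated-config.yml
        job: generate-config
    strategy: depend   # the parent pipeline waits for the child's result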
I'm super new to Python, and I've never used Docker before. I want to host my Python script on Google Cloud Run, but I need to package it into a Docker container to submit to Google.
What exactly needs to go in this Dockerfile to upload it to Google?
Current info:
Python: v3.9.1
Flask: v1.1.2
Selenium Web Driver: v3.141.0
Firefox Geckodriver: v0.28.0
Beautifulsoup4: v4.9.3
Pandas: v1.2.0
Let me know if further information about the script is required.
I have found the following snippets of code to use as a starting point from here. I just don't know how to adjust them to fit my specifications, nor do I know what gunicorn is used for.
# Use the official Python image.
# https://hub.docker.com/_/python
FROM python:3.7
# Install manually all the missing libraries
RUN apt-get update
RUN apt-get install -y gconf-service libasound2 libatk1.0-0 libcairo2 libcups2 libfontconfig1 libgdk-pixbuf2.0-0 libgtk-3-0 libnspr4 libpango-1.0-0 libxss1 fonts-liberation libappindicator1 libnss3 lsb-release xdg-utils
# Install Chrome
RUN wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
RUN dpkg -i google-chrome-stable_current_amd64.deb; apt-get -fy install
# Install Python dependencies.
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
# Copy local code to the container image.
ENV APP_HOME /app
WORKDIR $APP_HOME
COPY . .
# Run the web service on container startup. Here we use the gunicorn
# webserver, with one worker process and 8 threads.
# For environments with multiple CPU cores, increase the number of workers
# to be equal to the cores available.
CMD exec gunicorn --bind :$PORT --workers 1 --threads 8 main:app
# requirements.txt
Flask==1.0.2
gunicorn==19.9.0
selenium==3.141.0
chromedriver-binary==77.0.3865.40.0
Gunicorn is an application server for running your Python application instance; it is a pure-Python HTTP server for WSGI applications. It allows you to run a Python application concurrently by running multiple worker processes on a single machine.
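For instance, a quick way to see locally what gunicorn does, using the same flags as the Dockerfile's CMD (this assumes your Flask instance is named app inside main.py):

pip install gunicorn
gunicorn --bind :8080 --workers 1 --threads 8 main:app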
Please have a look at the following tutorial, which explains gunicorn in detail.
Regarding Cloud Run: to deploy to Cloud Run, please follow the next steps or the Cloud Run official documentation:
1) Create a folder
2) In that folder, create a file named main.py and write your Flask code
Example of a simple Flask app:
import os
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello_world():
    name = os.environ.get("NAME", "World")
    return "Hello {}!".format(name)

if __name__ == "__main__":
    app.run(debug=True, host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
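You can smoke-test this locally before containerizing it (the expected output assumes the NAME environment variable is unset):

python main.py
curl http://localhost:8080/   # prints: Hello World!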
3) Now your app is finished and ready to be containerized and uploaded to Container Registry
3.1) So to containerize your app, you need a Dockerfile in the same directory as the source files (main.py)
3.2) Now build your container image using Cloud Build by running the following command from the directory containing the Dockerfile:
gcloud builds submit --tag gcr.io/PROJECT-ID/FOLDER_NAME
where PROJECT-ID is your GCP project ID. You can get it by running gcloud config get-value project
4) Finally you can deploy to Cloud Run by executing the following command:
gcloud run deploy --image gcr.io/PROJECT-ID/FOLDER_NAME --platform managed
You can also have a look into the Google Cloud Run Official GitHub Repository for a Cloud Run Hello World Sample.
Don't get me wrong, virtualenv (or pyenv) is a great tool, and the whole concept of virtual environments is a great improvement on developer environments, mitigating the whole Snowflake Server anti-pattern.
But nowadays Docker containers are everywhere (for good reasons), and it feels odd having your application running in a container while also setting up a local virtual environment for running tests and such in the IDE.
I wonder if there's a way we could leverage Docker containers for this purpose?
Summary
Yes, there's a way to achieve this. By configuring a remote Python interpreter and a "sidecar" Docker container.
This Docker container will have:
A volume mounted to your source code (henceforth, /code)
SSH setup
SSH enabled with root:password credentials and the root user allowed to log in
Get the sidecar container ready
The idea here is to duplicate your app's container and add SSH abilities to it. We'll use docker-compose to achieve this:
docker-compose.yml:
version: '3.3'
services:
  dev:
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - 127.0.0.1:9922:22
    volumes:
      - .:/code/
    environment:
      DEV: 'True'
    env_file: local.env
Dockerfile.dev
FROM python:3.7
ENV PYTHONUNBUFFERED 1
WORKDIR /code
# Copying the requirements, this is needed because at this point the volume isn't mounted yet
COPY requirements.txt /code/
# Installing requirements. If you don't use a requirements file, you should.
# More info: https://pip.pypa.io/en/stable/user_guide/
RUN pip install -r requirements.txt
# Similar to the above, but with just the development-specific requirements
COPY requirements-dev.txt /code/
RUN pip install -r requirements-dev.txt
# Setup SSH with root login enabled (root:password is for local development only)
RUN apt-get update \
&& apt-get install -y openssh-server netcat \
&& mkdir /var/run/sshd \
&& echo 'root:password' | chpasswd \
&& sed -i 's/\#PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
Setting up PyCharm Professional Edition
Preferences (CMD + ,) > Project Settings > Project Interpreter
Click on the gear icon next to the "Project Interpreter" dropdown > Add
Select "SSH Interpreter" > Host: localhost, Port: 9922, Username: root > Password: password > Interpreter: /usr/local/bin/python, Sync folders: Project Root -> /code, Disable "Automatically upload..."
Confirm the changes and wait for PyCharm to update the indexes
Setting up Visual Studio Code
Install the Python extension
Install the Remote - Containers extension
Open the Command Palette and type Remote-Containers, then select Attach to Running Container... and select the running Docker container
VS Code will restart and reload
On the Explorer sidebar, click the Open Folder button and then enter /code (this will be loaded from the remote container)
On the Extensions sidebar, select the Python extension and install it on the container
When prompted for which interpreter to use, select /usr/local/bin/python
Open the Command Palette and type Python: Configure Tests, then select the unittest framework
TDD Enablement
Now that you can run your tests directly from your IDE, use it to try out Test-Driven Development! One of its key points is a fast feedback loop: not having to wait for the full test suite to finish just to see whether your new test passes is great. Just write a test and run it right away!
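For instance, a trivial test file to confirm the wiring end to end (the file name and the test itself are just placeholders):

# test_sanity.py - minimal check that tests run on the remote interpreter
import unittest

class SanityTest(unittest.TestCase):
    def test_truth(self):
        # If this runs green from the IDE, the container interpreter works.
        self.assertTrue(True)

if __name__ == "__main__":
    unittest.main()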
Reference
The contents of this answer are also available in this GIST.
I am currently trying to use GitLab to run a CI/CD job that runs a Python file which makes changes to a particular repository and then commits and pushes those changes to master. I also have the Master role in the repository. It appears that all git functions run fine except for git push, which leads to fatal: You are not currently on a branch. Using git push origin HEAD:master --force instead leads to fatal: unable to access 'https://gitlab-ci-token:xxx@xxx/project.git/': The requested URL returned error: 403. I've been looking over solutions online, one being this one, and another being unprotecting the branch, and couldn't quite find what I was looking for just yet. This is also a sub-project within the GitLab repository.
Right now, this is pretty much what my .gitlab-ci.yml looks like.
before_script:
  - apt-get update -y
  - apt-get install git -y
  - apt-get install python -y
  - apt-get install python-pip -y

main:
  script:
    - git config --global user.email "xxx@xxx"
    - git config --global user.name "xxx xxx"
    - git config --global push.default simple
    - python main.py
My main.py file essentially has a function that creates a new file within an internal directory, provided it doesn't already exist. It looks similar to the following:
import os
import json

def createFile(strings):
    print ">>> Pushing to repo..."
    if not os.path.exists('files'):
        os.system('mkdir files')
    for s in strings:
        title = ("files/" + str(s['title']) + ".json").encode('utf-8').strip()
        with open(title, 'w') as filedata:
            json.dump(s, filedata, indent=4)
    os.system('git add files/')
    os.system('git commit -m "Added a directory with a JSON file in it..."')
    os.system('git push origin HEAD:master --force')

createFile([{"title": "A"}, {"title": "B"}])
I'm not entirely sure why this keeps happening. I have even tried to modify the repository settings to remove protected push and pull access, but when I hit Save, it doesn't actually save. Nonetheless, this is my overall output. I would really appreciate any guidance anyone can offer.
Running with gitlab-runner 10.4.0 (00000000)
on cicd-shared-gitlab-runner (00000000)
Using Kubernetes namespace: cicd-shared-gitlab-runner
Using Kubernetes executor with image ubuntu:16.04 ...
Waiting for pod cicd-shared-gitlab-runner/runner-00000000-project-00000-concurrent-000000 to be running, status is Pending
Waiting for pod cicd-shared-gitlab-runner/runner-00000000-project-00000-concurrent-000000 to be running, status is Pending
Running on runner-00000000-project-00000-concurrent-000000 via cicd-shared-gitlab-runner-0000000000-00000...
Cloning repository...
Cloning into 'project'...
Checking out 00000000 as master...
Skipping Git submodules setup
$ apt-get update -y >& /dev/null
$ apt-get install git -y >& /dev/null
$ apt-get install python -y >& /dev/null
$ apt-get install python-pip -y >& /dev/null
$ git config --global user.email "xxx@xxx" >& /dev/null
$ git config --global user.name "xxx xxx" >& /dev/null
$ git config --global push.default simple >& /dev/null
$ python main.py
[detached HEAD 0000000] Added a directory with a JSON file in it...
2 files changed, 76 insertions(+)
create mode 100644 files/A.json
create mode 100644 files/B.json
remote: You are not allowed to upload code.
fatal: unable to access 'https://gitlab-ci-token:xxx@xxx/project.git/': The requested URL returned error: 403
HEAD detached from 000000
Changes not staged for commit:
modified: otherfiles/otherstuff.txt
no changes added to commit
remote: You are not allowed to upload code.
fatal: unable to access 'https://gitlab-ci-token:xxx@xxx/project.git/': The requested URL returned error: 403
>>> Pushing to repo...
Job succeeded
Here is a resource from Gitlab that describes how to make commits to the repository within the CI pipeline: https://gitlab.com/guided-explorations/gitlab-ci-yml-tips-tricks-and-hacks/commit-to-repos-during-ci/commit-to-repos-during-ci
Try configuring your .gitlab-ci.yml file to push the changes rather than trying to do it from the Python file; a sketch follows.
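A minimal sketch of that approach, assuming main.py is trimmed to only write the files, and assuming a project access token with write_repository scope is stored in a CI/CD variable named ACCESS_TOKEN (both names are assumptions; oauth2 is the literal username GitLab expects for token authentication; the existing before_script that sets user.name and user.email still applies):

main:
  script:
    - python main.py            # only writes the JSON files; no git calls inside
    - git add files/
    - git commit -m "[ci skip] Add generated JSON files"
    - git push "https://oauth2:${ACCESS_TOKEN}@${CI_SERVER_HOST}/${CI_PROJECT_PATH}.git" HEAD:master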
I managed to do this via ssh on a runner by making sure the ssh key is added, and then using the full git url:
task_name:
  stage: some_stage
  script:
    - ssh-add -K ~/.ssh/[ssh key]
    - git push -o ci-skip git@gitlab.com:[path to repo].git HEAD:[branch name]
If it is the same repo that triggered the job, the url could also be written as:
git@$CI_SERVER_HOST:$CI_PROJECT_PATH.git
This method can be used to commit tags or files. You may also wish to consider using the CI/CD variables API to store cross-build persistent data if it does not have to be committed to the repo:
https://docs.gitlab.com/ee/api/project_level_variables.html
https://docs.gitlab.com/ee/api/group_level_variables.html
ACCESS_TOKEN below is a variable, at the repo level or an enclosing group level, that contains a token which can write to the target repos. Since maintainers can see these variables, it is best practice to create tokens on dedicated API users who are least privileged for just what they need to do.
write_to_another_repo:
  before_script:
    - git config --global user.name "${GITLAB_USER_NAME}"
    - git config --global user.email "${GITLAB_USER_EMAIL}"
  script:
    - |
      echo "This CI job demonstrates writing files and tags back to a different repository than this .gitlab-ci.yml is stored in."
      OTHERREPOPATH="guided-explorations/gitlab-ci-yml-tips-tricks-and-hacks/commit-to-repos-during-ci/pushed-to-from-another-repo-ci.git"
      git clone https://gitlab-ci-token:${CI_JOB_TOKEN}@$CI_SERVER_HOST/$OTHERREPOPATH
      cd pushed-to-from-another-repo-ci
      CURRENTDATE="$(date)"
      echo "$CURRENTDATE added a line" | tee -a timelog.log
      git status
      git add timelog.log
      # "[ci skip]" and "-o ci-skip" prevent a CI trigger loop
      git commit -m "[ci skip] updated timelog.log at $CURRENTDATE"
      git push -o ci-skip http://root:$ACCESS_TOKEN@$CI_SERVER_HOST/$OTHERREPOPATH HEAD:master
      # Tag the commit (can be used without committing files)
      git tag "v$(date +%s)"
      git tag
      git push --tags http://root:$ACCESS_TOKEN@$CI_SERVER_HOST/$OTHERREPOPATH HEAD:master
The requested URL returned error: 403
The HTTP 403 Forbidden client error status response code indicates that the server understood the request but refuses to authorize it.
The problem is that we cannot provide valid authentication to git, and hence our request is forbidden.
Try this: Control Panel => User Accounts => Manage your credentials => Windows Credentials
It worked for me. However, I'm not quite sure if it will work for you.
You may also need to generate an access token on your profile with the read_repository or write_repository scope:
Profile => Edit profile => Access tokens
I have a Django project under development on my Windows computer (dev-machine). I am using PyCharm for development.
I have set up a server (server-machine) running Ubuntu, and now I want to push my project to the server.
So in my project folder on the dev-machine I have done the git init:
$ git init
$ git add .
$ git commit -m"Init of Git"
And on the server-machine I have made a project folder: /home/username/projects
In this folder I init git as well
$ git init --bare
Back on my dev-machine, I set the connection to the server-machine by doing this
$ git remote add origin username@11.22.33.44:/home/username/projects
And finally pushing my project to server-machine by typing this command on my dev-machine
$ git push origin master
It starts to do some transfer. And here's the problem.
On the server-machine when I check what's been transferred, it's only stuff like this
~/projects$ ls
branches config description HEAD hooks info objects refs
Not a single file from the project is transferred. This looks much like what the .git folder contains on the dev-machine.
What am I doing wrong?
What you see is the directory structure git uses to store your files and metadata. This is not a checked-out copy of the repository.
To check whether the data made it into the repository, use git log inside ~/projects.
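For example, on the server-machine (paths as in the question):

cd ~/projects
git log --oneline   # should list the commits pushed from the dev-machine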
Okay, so I understand now where I went wrong.
git push alone does not set up a working copy of the project; it only updates the bare repository.
To set up the project I need to do a git clone.
This is how I did it.
1.
So I made a folder for git repositories on the server-machine. I called it /home/username/gitrepos/
2.
Inside there, I made a folder for my project, which I push the git repository into. So the path looks like this for me: /home/username/gitrepos/projectname/
3.
Being inside that folder I do a 'git init' like this
$ git init --bare
4.
Then I push the git repo to this location, first setting the remote address from my dev-machine. If adding a new remote destination, use this:
$ git remote add nameofconnection username@ip.ip.ip.ip:/home/username/gitrepos/projectname
If changing the address of an existing remote destination, use this:
$ git remote set-url nameofconnection username@ip.ip.ip.ip:/home/username/gitrepos/projectname
To see which remote destinations you have set, type this:
$ git remote -v
5.
Now go back to server-machine and clone the project into a project folder. I made a folder like this /home/username/projects/
When inside that folder, I clone from the git repo like this:
$ git clone /home/username/gitrepos/projectname
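After that, pulling future pushes into the working copy is just (same layout as above):

cd /home/username/projects/projectname
git pull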
Thank you all for the help! <3