I am managing the build for a cross-platform project (OSX/Windows/Linux). I simply run a Makefile with make win_installer, make linux, or make mac, respectively, for each operating system.
For this, on the server I run a Python Twisted application that regularly checks whether there is a new tag in our git repository. If one is detected, a build starts and the resulting artefacts are uploaded to our private FTP server.
Can TeamCity be easily configured to implement this behaviour?
Yes, there are three basic steps (you can have one TeamCity agent on each OS and run the OS-specific Makefile target on the corresponding agent):
Set up a TeamCity build configuration with a VCS trigger that fires whenever a new tag appears:
https://confluence.jetbrains.com/display/TCD8/Configuring+VCS+Triggers#ConfiguringVCSTriggers-BranchFilter
Add a command line build step that runs the Makefile.
Add a command line build step that uploads the resulting artefacts to your artefact repository (or FTP server); a rough sketch of these two steps follows below.
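For the trigger, the usual approach is to enable "Use tags as branches" on the Git VCS root and set the VCS trigger's branch filter to +:refs/tags/* (see the documentation linked above). For the two command line steps, a minimal sketch might look like this; the Makefile target, artefact path, FTP host, and credentials are all placeholders for your own values:

    # Build the OS-specific target on an agent running that OS (target name is a placeholder)
    make linux

    # Upload the resulting artefact to the private FTP server (host, path, and credentials are placeholders)
    curl -T dist/myapp-linux.tar.gz ftp://ftp.example.com/builds/ --user "$FTP_USER:$FTP_PASS"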
I'm new to using Docker, so I'm either looking for direct help or a link to a relevant guide. I need to train some deep learning models on my school's linux server, but I can't manually install pytorch and other python packages since I don't have root access (sudo). Another student said that he uses docker and has everything ready to go in his container.
I'm wondering how to wrap up my code and relevant packages into a container that I can push to the linux server and then run.
To address your specific problem, the easiest way I've found to get code into a container is to use git:
Start the container in interactive mode, or ssh into it if it's attached to a network.
git clone <your awesome deep learning code>. Keep a requirements.txt file in your git repo, change into your local clone, and run pip install -r requirements.txt.
Run whatever script you need to run your code. Note that you can easily put the pip install command in one of your run scripts.
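A rough sketch of that workflow, assuming a stock PyTorch image from Docker Hub (the image tag, repository URL, and script name are placeholders):

    # Start an interactive container from a PyTorch image (image tag is a placeholder)
    docker run -it pytorch/pytorch:latest bash

    # Inside the container: fetch the code and install its dependencies
    git clone https://github.com/you/your-dl-project.git
    cd your-dl-project
    pip install -r requirements.txt

    # Run the training script (placeholder name)
    python train.py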
It's important to remember that Docker containers are stateless/ephemeral. You should not expect the container or its contents to persist in any durable fashion. This specific issue is addressed by mapping a directory on the host system to a directory in the container.
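For example, adding a bind mount along these lines keeps model checkpoints on the host even after the container is removed (the host and container paths are placeholders):

    # Map a host directory into the container so outputs survive container removal
    docker run -it -v /home/you/checkpoints:/workspace/checkpoints pytorch/pytorch:latest bash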
Side note: I recommend starting with the Docker tutorial first. You can easily skip over the installation parts if you are working on a system that already has Docker installed and where you have permission to build, start, and stop containers.
I don't have root access (sudo). Another student said that he uses docker
I would like to point out that running Docker itself requires root privileges (or membership in the docker group), so it won't get you around the lack of sudo.
Instead, I think you should look at using something like Google Colab or JupyterLab. This gives you the added benefit of having your code backed up on a remote server.
I want to build an existing Python web application in Jenkins, just like building a Java application with Maven.
Is this possible or not? If it is, please help me with the configuration needed to build and continuously deploy the application.
Option 1) without using Docker
Choose a Freestyle project when creating the Jenkins job.
Configure the Source Code Management section to tell Jenkins where to fetch the source code from.
Write the build commands you would normally run manually in the Build section.
If Python is not installed on your Jenkins slave and the Jenkins server cannot install it there for you, you need to add commands to the Build section that install the Python and pip needed for the build on the slave.
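For instance, an "Execute shell" build step for a typical Python project might look roughly like this (the requirements file and test command are assumptions about your project):

    # Create an isolated environment inside the workspace and install dependencies
    python3 -m venv .venv
    . .venv/bin/activate
    pip install -r requirements.txt

    # Run whatever build/test commands you normally run by hand (placeholder)
    python -m pytest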
Option 2) using Docker
Build a Docker image with all of the required software and tools installed and configured, to provide a build environment.
(Before building an image yourself, search Docker Hub to see whether an existing image already meets your requirements.)
Upload the image to Docker Hub.
Install Docker Engine on the Jenkins slave.
Create a Freestyle project.
Configure the Source Code Management section.
Write Docker commands in the Build section to pull the image, start a container, and execute the build commands inside the container.
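A minimal sketch of such a Build section, assuming a stock python:3.11 image and a pytest-based build (both are placeholders for whatever image and commands your project actually needs):

    # Pull the build image and run the build inside a throwaway container,
    # mounting the Jenkins workspace so the container sees the checked-out code
    docker pull python:3.11
    docker run --rm -v "$WORKSPACE":/src -w /src python:3.11 \
        sh -c "pip install -r requirements.txt && python -m pytest"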
I'm investigating how Python applications can also use a CI pipeline, but I'm not sure how to create the standard workflow.
Jenkins does the initial repository clone and then invokes tox. Basically this is the stage where Maven and/or MSBuild would fetch dependency packages and build, which tox does via pip, so all good here.
But now for the confusing part: the last part of the pipeline is creating and uploading packages. Devs would likely upload the created packages to a local pip repository, but then also possibly create a deployment package. In this case it would need to be an RPM containing a virtualenv of the application. I have made one manually using rpmvenv, but regardless of how it's made, how would such a step be added to a tox config? In the case of rpmvenv, it creates its own virtualenv, a self-contained command so to speak.
I like going with the Unix philosophy for this problem: have a tool that does one thing incredibly well, then compose other tools together. Tox is purpose-built to run your tests in a bunch of different Python environments, so using it to then build a deb/rpm/etc. for you feels like a bit of a misuse of that tool. It's probably easier to use tox just to run all your tests, then, depending on the results, have another step in your pipeline deal with building a package for what was just tested.
Jenkins 2.x, which is fairly recent at the time of this writing, seems to be much better about building pipelines. Buildbot is going through a decent amount of development and already makes it fairly easy to build a good pipeline for this as well.
What we've done at my work is:
Buildbot in AWS, which receives push notifications from GitHub on PRs
That kicks off a docker container that pulls in the current code and runs Tox (py.test, flake8, as well as protractor and jasmine tests)
If the tox step comes back clean, kick off a different docker container to build a deb package
Push that deb package up to S3 and let Salt deal with telling those machines to update
That deb package is also just available as a build artifact, similar to what Jenkins 1.x would do. Once we're ready to go to staging, we just take that package and promote it to the staging debian repo manually. Ditto for rolling it to prod.
Tools I've found useful for all this:
Buildbot, because it's in Python and thus easier for us to work on, but Jenkins would work just as well. Either way, this is the controller for the entire pipeline.
Docker, because each build should be completely isolated from every other build.
Tox, the glorious test runner, to handle all those details.
fpm builds the package. RPM, DEB, tar.gz, whatever. Very configurable and easy to script.
Aptly makes it easy to manage debian repositories and in particular push them up to S3.
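A rough sketch of the fpm and aptly steps (the package name, version, paths, and the aptly S3 endpoint name are all placeholders, and the S3 endpoint has to be configured in aptly's config file first):

    # Build a deb from a directory containing the application's virtualenv (names and paths are placeholders)
    fpm -s dir -t deb -n myapp -v 1.2.3 --prefix /opt/myapp -C ./build/venv .

    # Add the package to a local aptly repo and publish it to the configured S3 endpoint
    aptly repo create myapp-staging
    aptly repo add myapp-staging myapp_1.2.3_amd64.deb
    aptly publish repo -distribution=stable myapp-staging s3:my-s3-endpoint: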
I need to set up a Jenkins server to build a Python project, but I can't find any good tutorials online for this. Most of what I see uses pip, but our project is simply built with
python.exe setup.py build
I've tried running it as a Windows batch command, setting it up through the configure options in Jenkins and entering the above line in the box provided, but the build fails.
When I try to run this, I get a build error telling me there is no cmd file or program in my project's workspace. But it seems to me that cmd would be a program inherent to Jenkins itself in this case.
What's the best way to go about setting up Jenkins to build a python project using setup.py?
I have really only used Jenkins to test a Java EE project, and I don't know whether the same principle applies or not.
I downloaded jenkins.war from the website and deployed it on my JBoss server, then reached it via a URL in the browser: http://localhost:serverport/jenkins. I then created a job, selected the server, JDK, Maven, and the location of my project in the workspace, and ran it to build the project.
I am sorry if you find that my answer is far from what you asked, but I tried to give you some visibility into my use case.
I realized I did something stupid and forgot that my coworker had set it up on a UNIX server, so I should have been using the shell instead of Windows Batch. Once I changed that and installed the Python plugin, I got it to build fine.
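For reference, the "Execute shell" build step then only needs the same command without the Windows-specific .exe suffix (the interpreter name may differ on your agent):

    # "Execute shell" build step on the UNIX agent
    python setup.py build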
I want to execute a script via the "Execute shell" build section of my build's configuration:
sudo python buildscript.py param1 param2
However, I don't know how or where to actually upload the script. Hope this makes sense; I don't have shell access to the Jenkins server (or I would put it in the workspace directory, I'm guessing), so I'm wondering if there's an upload interface that I missed somewhere.
You can create a project in your version control system to keep all your scripts. Then just create a job on Jenkins to fetch them (it doesn't need to build anything). They will be made available on the filesystem, inside ~jenkins/workspace, and every time you change them, Jenkins will make sure they stay up to date on the build machine.
For sudo, you need to have ssh access (or know who has it), as #Bruno suggested, and grant the jenkins user that permission.
You should just put buildscript.py in version control next to your source files -- I assume you already have an action configured in your job config to check out your sources from version control?
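Once the script is checked out with the rest of the sources, the "Execute shell" step can call it straight from the workspace. For the sudo part, one common approach (sketched here with placeholder paths; edit sudoers with visudo) is to give the jenkins user a passwordless rule for just that command:

    # "Execute shell" build step: the script is part of the checked-out workspace
    sudo python buildscript.py param1 param2

    # Example /etc/sudoers.d/jenkins entry limited to this one command (paths are placeholders)
    # jenkins ALL=(ALL) NOPASSWD: /usr/bin/python /var/lib/jenkins/workspace/myjob/buildscript.py *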