How to deploy a single python script with gitlab-ci?

I created a bot in Python and want to auto-deploy it with every new release using my personal GitLab runner.
I have the following .gitlab-ci.yml and haven't found a solution to my problem, because the GitLab runner seems to kill the process every time.
image: python:3.7.4

before_script:
  - pip install -r requirements.txt

deploy_prod:
  stage: deploy
  script:
    - setsid nohup python __main__.py &
  environment:
    name: production
  when: manual
I also tried python __main__.py &.

GitLab CI isn't really made for hosting applications the way you are trying to use it.
In the advanced configuration options for GitLab CI there are ways to modify the timeouts; you could try to hijack them, but that really isn't what they are meant for.
GitLab CI is meant to execute short-lived operations that build your code and deploy applications to other servers, not to host long-running applications.
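As a rough illustration of that model, a deploy job would normally push the script to a server and restart a long-running service there, rather than keep the process alive inside the CI job. A minimal sketch, assuming the runner image has an SSH client and key already configured, and with deploy@example.com and bot.service as placeholders for your own host and service:

deploy_prod:
  stage: deploy
  script:
    # copy the bot to the target host and restart the service that runs it
    - scp -o StrictHostKeyChecking=no __main__.py deploy@example.com:/opt/bot/
    - ssh -o StrictHostKeyChecking=no deploy@example.com "sudo systemctl restart bot.service"
  environment:
    name: production
  when: manual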

Related

Is there a way to test an application deployed with Zappa?

I have created a Django application which is deployed to AWS using Zappa in a CI/CD pipeline using the following commands:
- zappa update $ENVIRONMENT
- zappa manage $ENVIRONMENT "collectstatic --noinput"
- zappa manage $ENVIRONMENT "migrate"
However, I want to add a test step to the CI/CD pipeline that tests the application much as I would test it locally using the Django command:
python manage.py test
Is there a way to issue a Zappa command that runs python manage.py test? Do you have any suggestions?
The docs include a Docker setup that you can use for local testing (Docs here).
Also, this post from Ian Whitestone (a core Zappa contributor) uses the official AWS Docker image for the same job, and he gives an example for local testing as well (Blog post)
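If the goal is simply to gate the deployment on the test suite, one common pattern is to run Django's own test runner as a pipeline step before the Zappa commands. A sketch, assuming the tests don't need real AWS resources:

# run the Django test suite before the existing deploy commands,
# so a failing test stops the pipeline before Zappa touches AWS
- pip install -r requirements.txt
- python manage.py test
- zappa update $ENVIRONMENT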

Can't find appropriate image on Docker Hub while it appears to run OK on Bitbucket

I am working on a project and using Bitbucket as my remote server. I have set up a basic pipeline with the following:
# This is a sample build configuration for Python.
# Check our guides at https://confluence.atlassian.com/x/x4UWN for more examples.
# Only use spaces to indent your .yml configuration.
# -----
# You can specify a custom docker image from Docker Hub as your build environment.
image: python:3.8.3

pipelines:
  default:
    - step:
        caches:
          - pip
        script: # Modify the commands below to build your repository.
          - pip install -r requirements.txt
          - pytest -v test_cliff_erosion_equations.py
Since there are slight differences in results between the pipeline and my local machine, I would like to debug this pipeline locally using Docker as explained in the bitbucket docs. In fact, I would like to develop my entire program within the same containerized environment, both locally and remotely. I have realized that PyCharm community version won't allow you to do this, so I've decided to switch to VSCode which appears to have full Docker support.
As you can see, the image is python:3.8.3. I had a look through Docker Hub but I can't find it! However, it seems to run just fine in Bitbucket. Why is this so?
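For reference, my plan for local debugging is just to run the same image with the repository mounted in, roughly like this (mirroring the pipeline above, nothing Bitbucket-specific):

docker run -it --rm --volume "$(pwd)":/repo --workdir /repo python:3.8.3 \
  /bin/bash -c "pip install -r requirements.txt && pytest -v test_cliff_erosion_equations.py"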

How to use the python script task in a vsts release pipeline

I am new to the CI and CD world. I am using VSTS pipelines to automate my build and release process.
This question is about the release pipeline. I deploy my build drop to an AWS VM. I created a deployment group and ran the script on the VM to generate a deployment agent on the AWS VM.
This works well and I am able to deploy successfully.
I would like to run a few automation scripts in Python after a successful deployment.
I tried using the Python Script task. One of the settings is Python interpreter; the help information says:
"Absolute path to the Python interpreter to use. If not specified, the task will use the interpreter in PATH.
Run the Use Python Version task to add a version of Python to PATH."
So,
I tried to use the Use Python Version task and specified the version of Python I usually run my scripts with. The prerequisites for the task mention:
"A Microsoft-hosted agent with side-by-side versions of Python installed, or a self-hosted agent with Agent.ToolsDirectory configured (see Q&A)."
reference to Python Version task documentation
I am not sure how or where to set Agent.ToolsDirectory, or how to use a Microsoft-hosted agent in a release pipeline deploying to an AWS VM. I could not find any step-by-step examples for this. Can anyone help me with clear steps for running Python scripts in my scenario?
The easiest way of doing this is just something like the following in your YAML definition:
- script: python xxx
This will run Python and pass arguments to it; you can use python2 or python3 (the default versions installed on the hosted agent). Another way of achieving this (more reliable) is using a container inside the hosted agent. This way you can explicitly specify the Python version and guarantee you are getting what you specified. Example:
resources:
  containers:
  - container: my_container   # can be anything
    image: python:3.6-jessie  # just an example

jobs:
- job: job_name
  container: my_container     # has to be the container name from resources
  pool:
    vmImage: 'Ubuntu-16.04'
  steps:
  - checkout: self
    fetchDepth: 1
    clean: true
  - script: python xxx
This will start the python:3.6-jessie container, mount your code inside the container and run the python command in the root of the repo. Further reading:
https://learn.microsoft.com/en-us/azure/devops/pipelines/yaml-schema?view=azdevops&tabs=schema&viewFallbackFrom=vsts#job
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/container-phases?view=azdevops&tabs=yaml&viewFallbackFrom=vsts
In case you are using your own agent, just install Python on it and make sure it's on the PATH, so that it works when you just type python in the console (you'd have to use the script task in this case). If you want to use the Python task, follow these articles:
https://github.com/Microsoft/azure-pipelines-tool-lib/blob/master/docs/overview.md#tool-cache
https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/tool/use-python-version?view=azdevops
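For the hosted-agent route the question mentions, the two tasks are typically combined roughly as below (a sketch; '3.7' and automation.py are placeholders rather than anything from the original setup):

steps:
- task: UsePythonVersion@0
  inputs:
    versionSpec: '3.7'        # puts the requested Python version on PATH
- task: PythonScript@0
  inputs:
    scriptSource: 'filePath'
    scriptPath: 'automation.py'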

Docker vs old approach (supervisor, git, your project)

I've been on Docker for the past few weeks and I can say I love it and I get the idea. But what I can't figure out is how to "transfer" my current setup to a Docker solution. I guess I'm not the only one, and here is what I mean.
I'm a Python guy, more specifically Django. So I usually have this:
Debian installation
My app on the server (from git repo).
Virtualenv with all the app dependencies
Supervisor that handles Gunicorn that runs my Django app.
The thing is, when I want to upgrade and/or restart the app (I use Fabric for these tasks) I connect to the server, navigate to the app folder, run git pull, and restart the Supervisor task that handles Gunicorn, which reloads my app. Boom, done.
But what is the right (better, more Docker-ish) approach to modify this setup when I use Docker? Should I somehow connect to a bash inside the Docker container every time I want to upgrade the app and run the upgrade, or (from what I saw) should I expose the app in a folder outside of the Docker image and run the standard upgrade process?
Hope you get the confusion of an old-school dude. I bet the Docker guys were thinking about that.
Cheers!
For development, docker users will typically mount a folder from their build directory into the container at the same location the Dockerfile would otherwise COPY it. This allows for rapid development where at most you need to bounce the container rather than rebuild the image.
For production, you want to include everything in the image and not change it, only persistent data goes in the volumes, your code is in the image. When you make a change to the code, you build a new image and replace the running container in production.
Logging into the container and manually updating things is something I only do to test while developing the Dockerfile, not to manage a running application.
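To make the two workflows concrete, here is a rough sketch (the myapp image name, port and /app path are made-up examples; the Dockerfile is assumed to COPY the code into /app):

# development: mount the working copy over the code baked into the image,
# then just bounce the container after pulling new code on the host
docker run -d --name myapp-dev -p 8000:8000 -v "$(pwd)":/app myapp:latest

# production: bake the new code into a fresh image and swap the container
docker build -t myapp:1.1 .
docker stop myapp && docker rm myapp
docker run -d --name myapp -p 8000:8000 myapp:1.1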

How to set up Git to deploy python app files onto an Ubuntu server?

I set up a new Ubuntu 12.10 server on VPS hosting. I have installed all the required software such as Nginx, Python, MySQL etc. I am configuring it to deploy a Flask + Python app using uWSGI, and it's working fine.
But to create a basic app I used PuTTY (from Windows) and created the required app .py files there.
Now I want to set up Git so that I can push my code to the required directory, say /var/www/mysite.com/app_data, so that I don't have to use SSH or FileZilla etc. every time I make changes to my website.
Since I use both Ubuntu & Windows for development of the app, setting up a Git-based workflow would help me push or change my data easily on my cloud server.
How can I set this up on Ubuntu? And how could I access it and deploy data using tools like Git Bash etc.?
Please suggest.
Modified version of innaM:
Concept
Have three repositories
devel - development on your local development machine
central - repository server - like GitHub, Bitbucket or anything similar
prod - production server
Then you commit things from devel to central, and as soon as you want to deploy on prod, you ask prod to pull the data from central.
"Asking" the prod server to pull the updates can be managed by cron (then you have to wait a moment), or you may use other means such as a one-shot ssh call asking it to do git pull and possibly restart your app.
Step by step
In more detail, you can go this way.
Prepare repo on devel
Develop and test the app on your devel server.
Put it into local repository:
$ git init
$ git add *
$ git commit -m "initial commit"
Create repo on central server
E.g. bitbucket provides this description: https://confluence.atlassian.com/display/BITBUCKET/Import+code+from+an+existing+project
Generally, you create the project on Bitbucket, find its url, and then call from your devel repo:
$ git remote add origin <bitbucket-repo-url>
$ git push origin
Clone central repo to prod server
Log onto your prod server.
Go to /var/www and clone from Bitbucket:
$ cd /var/www
$ git clone <bitbucket-repo-url>
$ cd mysite.com
and you shall have your directory ready.
Trigger publication of updates to prod
There are numerous options. One being a cron task, which would regularly call
$ git pull
In case your app needs a restart after an update, you have to ensure that the restart happens (this shall be possible using the git log command, which will show new commits after the update, or you may check whether the pull's status output tells you).
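A sketch of such a cron-driven script (the path matches this setup, the restart command is a hypothetical example to adjust):

#!/bin/sh
# pull-and-restart.sh - run periodically from cron on the prod server
cd /var/www/mysite.com || exit 1
OLD_REV=$(git rev-parse HEAD)
git pull origin
NEW_REV=$(git rev-parse HEAD)
# only restart when the pull actually brought in new commits
if [ "$OLD_REV" != "$NEW_REV" ]; then
    sudo service uwsgi restart   # hypothetical restart command
fi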
Personally, I would use a "one-shot ssh" call (you asked not to use ssh, but I assume you are looking for a "simpler" solution, and a one-shot call is simpler than using ftp, scp or other magic).
From your devel machine (assuming you have ssh access there):
$ ssh user@prod.server.com "cd /var/www/mysite.com && git pull origin && myapp restart"
The advantage is that you control the moment the update happens.
Discussion
I use similar workflow.
rsync in many cases serves well enough or better (but be aware of files created by the app at runtime, and of files removed from your app in later versions, which shall be removed on the server too).
salt (SaltStack) could serve too, but requires a bit more learning and setup.
I have learned that keeping source code and configuration data in the same repo sometimes makes the situation more difficult (that is why I am working on using Salt).
The fab command from Fabric (Python based) may be the best option (in case installation on Windows becomes difficult, look at http://ridingpython.blogspot.cz/2011/07/installing-fabric-on-windows.html).
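A minimal Fabric sketch of that workflow (Fabric 1.x API; the host, path and restart command are placeholders):

# fabfile.py (sketch)
from fabric.api import cd, env, run

env.hosts = ['user@prod.server.com']   # hypothetical host

def deploy():
    """Pull the latest code on prod and restart the app."""
    with cd('/var/www/mysite.com'):
        run('git pull origin')
        run('sudo service uwsgi restart')   # hypothetical restart command

Running fab deploy from the devel machine then performs the whole one-shot update.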
Create a bare repository on your server.
Configure your local repository to use the repository on the server as a remote.
When working on your local workstation, commit your changes and push them to the repository on your server.
Create a post-receive hook in the server repository that calls "git archive" and thus transfers your files to some other directory on the server.
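A sketch of such a hook (the branch name is a placeholder; the file lives at hooks/post-receive inside the bare repository and must be executable):

#!/bin/sh
# hooks/post-receive - export the pushed tree into the deployment directory
GIT_WORK_TREE=/var/www/mysite.com/app_data git checkout -f master
# alternatively, with git archive as mentioned above:
# git archive master | tar -x -C /var/www/mysite.com/app_data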
