I am new to the CI/CD world. I am using VSTS pipelines to automate my build and release process.
This question is about the release pipeline. I deploy my build drop to an AWS VM. I created a deployment group and ran the provided script on the VM to register a deployment agent there.
This works well and I am able to deploy successfully.
I would like to run a few automation scripts in Python after a successful deployment.
I tried using the Python Script task. One of its settings is Python interpreter; the help text says:
"Absolute path to the Python interpreter to use. If not specified, the task will use the interpreter in PATH.
Run the Use Python Version task to add a version of Python to PATH."
So I tried the Use Python Version task and specified the version of Python I usually run my scripts with. The prerequisites for the task mention:
"A Microsoft-hosted agent with side-by-side versions of Python installed, or a self-hosted agent with Agent.ToolsDirectory configured (see Q&A)."
(from the Use Python Version task documentation)
I am not sure how or where to set Agent.ToolsDirectory, or how to use a Microsoft-hosted agent in a release pipeline that deploys to an AWS VM. I could not find any step-by-step examples for this. Can anyone give me clear steps for running Python scripts in my scenario?
The easiest way of doing this is a plain script step in your YAML definition:
- script: python xxx
This will run python and pass arguments to it; you can use python2 or python3 (whichever default is installed on the hosted agent). Another, more reliable, way of achieving this is to run the job inside a container on the hosted agent. That way you can explicitly specify the Python version and guarantee you get what you specified. Example:
resources:
  containers:
  - container: my_container   # can be anything
    image: python:3.6-jessie  # just an example

jobs:
- job: job_name
  container: my_container  # has to be the container name from resources
  pool:
    vmImage: 'Ubuntu-16.04'
  steps:
  - checkout: self
    fetchDepth: 1
    clean: true
  - script: python xxx
This will start the python:3.6-jessie container, mount your code inside it, and run the python command in the root of the repo. Further reading:
https://learn.microsoft.com/en-us/azure/devops/pipelines/yaml-schema?view=azdevops&tabs=schema&viewFallbackFrom=vsts#job
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/container-phases?view=azdevops&tabs=yaml&viewFallbackFrom=vsts
If you are using your own agent, just install Python on it and make sure it is on the PATH, so that typing python in a console works (you'd have to use the script task in this case). If you want to use the Python task instead, follow these articles:
https://github.com/Microsoft/azure-pipelines-tool-lib/blob/master/docs/overview.md#tool-cache
https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/tool/use-python-version?view=azdevops
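To address the Agent.ToolsDirectory part of the question: on a self-hosted agent, the Use Python Version task looks for Python in the agent's tool cache, which is a directory layout under Agent.ToolsDirectory (by default the _work/_tool folder on the agent). A sketch of the expected layout for one version, where the version number and architecture are examples:

$(Agent.ToolsDirectory)/
  Python/
    3.6.4/
      x64/            (the actual Python installation)
      x64.complete    (an empty marker file that tells the task the install is usable)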
Related
I want to be able to test my jobs and code on my local machine before executing on a remote cluster. Ideally this will not require a lot of setup on my end. Is this possible?
Yes, this is possible. A common development pattern with the Iguazio platform is to utilize a local version of MLRun and Nuclio on a laptop/workstation and move/execute jobs on the cluster at a later point.
There are two main options for installing MLRun and Nuclio on a local environment:
docker-compose - Simpler and easier to get up and running, but restricted to running jobs within the environment where it was executed (i.e. Jupyter or IDE). This means you cannot specify resources like CPU/MEM/GPU for a particular job. This approach is great for quickly getting up and running. Instructions can be found here.
Kubernetes - More complex to get up and running, but allows for running jobs in their own containers with specified CPU/MEM/GPU resources. This approach is better for emulating the capabilities of the Iguazio platform in a local environment. Instructions can be found here.
Once you have installed MLRun and Nuclio using one of the above options and have created a job/function you can test it locally as well as deploy to the Iguazio cluster directly from your local development environment:
To run your job locally, use the local=True flag when specifying your MLRun function, as in the Quick-Start guide (see the sketch below).
To run your job remotely, specify the required environment files to allow connectivity to the Iguazio cluster as described in this guide, and run your job with local=False.
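For illustration, a minimal sketch of both modes, assuming a file my_job.py with a handler function (the names, file, and image below are hypothetical):

import mlrun

# wrap an existing Python file as an MLRun job (all names here are examples)
fn = mlrun.code_to_function(
    name="my-job",
    filename="my_job.py",
    kind="job",
    image="mlrun/mlrun",
    handler="handler",
)

fn.run(local=True)   # executes in the local environment (laptop/Jupyter)
fn.run(local=False)  # submits to the cluster; requires the env files from the guide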
I would like to run docker-compose via the Python Docker SDK.
However, I couldn't find any reference on how to achieve this using the Python SDK. I could also use subprocess, but I have some other difficulties with that; see docker compose subprocess.
I am working on the same issue and was looking for answers, but found nothing so far. The best shot I can give it is to recreate the docker-compose logic yourself: for example, if you have a YAML file with a network and services, create them separately using the Python Docker SDK and connect the containers to the network.
It gets cumbersome, but eventually you can get things working that way from Python.
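For illustration, a minimal sketch of that approach with the docker package, roughly replicating a compose file with one network and two services (all image, container, and network names are examples):

import docker

client = docker.from_env()

# equivalent of the compose file's top-level "networks:" section
client.networks.create("app_net", driver="bridge")

# equivalent of two "services:" entries, attached to the same network
client.containers.run(
    "postgres:13",
    name="db",
    network="app_net",
    environment={"POSTGRES_PASSWORD": "secret"},
    detach=True,
)
client.containers.run(
    "myapp:latest",
    name="app",
    network="app_net",
    ports={"8000/tcp": 8000},
    detach=True,
)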
I created a package to make this easy: python-on-whales
Install with
pip install python-on-whales
Then you can do
from python_on_whales import docker
docker.compose.up()
docker.compose.stop()
# and all the other commands.
You can find the documentation for the package here: https://gabrieldemarmiesse.github.io/python-on-whales/
I created a bot in Python and want to auto-deploy it with every new release via my personal GitLab runner.
I have the following .gitlab-ci.yml but haven't found a solution to my problem, because the GitLab runner seems to kill the process every time.
image: python:3.7.4

before_script:
  - pip install -r requirements.txt

deploy_prod:
  stage: deploy
  script:
    - setsid nohup python __main__.py &
  environment:
    name: production
  when: manual
I also tried python __main__.py &.
GitLab CI isn't really made to be used for hosting applications as you are trying to do.
In the advanced configuration options for GitLab CI there are ways to modify the timeouts; you could try to hijack them, but that really isn't what they are meant for.
GitLab CI is meant to execute short term operations to build your code and deploy applications to other servers, not for hosting long running applications.
Say I have a file "main.py" and I just want it to run at 10-minute intervals, but not on my computer. The only external libraries the file uses are mysql.connector and requests (installed via pip).
Things I've tried:
PythonAnywhere - free tier is too limiting (need to connect to external DB)
AWS Lambda - Only supports up to Python 2.7, converted my code but still had issues
Google Cloud Platform + Heroku - I can only find tutorials covering deploying applications; I think these could do what I'm looking for, but I can't figure out how.
Thanks!
I'd start by taking a look at this question/answer that I asked previously on unix.stackexchange - I went with an AWS Red Hat installation, and it was free to use.
Once you've decided on your VM, you can SSH onto your server using any SSH client and upload your Python script. A personal preference is this application.
If you need to update the Python version on the server, you can do this by installing the required Python RPMs. A quick Google search should return the yum (or whichever package manager you're using) repository for the required RPMs.
Once you've installed the version of Python you need, I'd suggest looking into crontab, which can be used to schedule jobs. You can set a cron job to run every 10 minutes, and it will call your script.
See this site for more information on how to use crontab.
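For example, a crontab entry along these lines (the interpreter and script paths are assumptions for illustration) runs the script every 10 minutes and appends its output to a log:

*/10 * * * * /usr/bin/python3 /home/ec2-user/main.py >> /home/ec2-user/main.log 2>&1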
This sounds like a perfect use case for AWS Lambda which supports Python. You can invoke your Lambda on a schedule using Scheduled Events.
I see that you tried Lambda and it didn't work out for you, which is too bad, as that seems like the easiest route. You could also launch an EC2 instance and use user data to schedule a cron job when the instance starts.
Another option would be an Elastic Beanstalk worker with a cron.yml that defines your schedule. Elastic Beanstalk supports Python 3.4.
Update: AWS does now support Python 3.6. Just select Python 3.6 from the runtime environments when configuring.
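If you do revisit Lambda, the entry point is just a Python handler that you wire to a scheduled event rule such as rate(10 minutes); a minimal sketch, with the body standing in for whatever main.py actually does (the URL and names are illustrative):

import requests  # must be bundled in the deployment package; Lambda does not pip install at runtime

def lambda_handler(event, context):
    # invoked by the scheduled event rule, e.g. rate(10 minutes)
    response = requests.get("https://example.com/data")  # placeholder for main.py's real work
    return {"status": response.status_code}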
I need to set up a Jenkins server to build a Python project, but I can't find any good tutorials online for this. Most of what I see uses pip, but our project is simply built with
python.exe setup.py build
I've tried running it as a Windows batch command, set up through the configure options in Jenkins, entering the above line in the box provided, but the build fails.
When I try to run this, I get a build error telling me there is no cmd file or program in my project's workspace. But it seems to me that cmd would be a program inherent to Jenkins itself in this case.
What's the best way to go about setting up Jenkins to build a python project using setup.py?
I have actually used Jenkins to test a Java EE project, and I don't know whether the same principle applies or not. I downloaded jenkins.war from the website, deployed it on my JBoss server, and reached it via a URL in the browser: http://localhost:serverport/jenkins. Then I created a job, selected the server, JDK, Maven, and the location of my project in the workspace, and ran it to build the project. I am sorry if this is far from what you asked, but I wanted to give you some visibility into my use case.
I realized I did something stupid: I forgot that my coworker had set Jenkins up on a UNIX server, so I should have been using the Execute Shell step instead of a Windows batch command. Once I changed that and installed the Python plugin, I got it to build fine.