Continuously run a Python script deployed with Azure Pipelines and GitHub

I have transferred the forecasts.py file from GitHub to my virtual machine via Azure Pipelines. If I start the script from the virtual machine terminal with python3 forecasts.py &, everything goes smoothly and the script keeps running in the background. For some reason, I get the following message from Azure Pipelines if I try to start the script the same way:
The STDIO streams did not close within 10 seconds of the exit event from process '/bin/bash'. This may indicate a child process inherited the STDIO streams and has not yet exited.
Full debug logs can be found here
The core content of the forecasts.py is the following:
import schedule
import time

def job():
    print("I'm working...")

schedule.every().minute.at(":00").do(job)

while True:
    schedule.run_pending()
    time.sleep(5)
This script should print "I'm working..." once per minute. Should I start the script in some different way?
EDIT
azure-pipelines.yml might help to solve this:
variables:
- name: system.debug
  value: true

jobs:
- deployment: fmi_forecasts_deployment
  displayName: fmi_forecasts
  environment:
    name: AnalyticsServices
    resourceType: VirtualMachine
  strategy:
    rolling:
      maxParallel: 2 # for percentages, mention as x%
      preDeploy:
        steps:
        - download: current
        - script: echo initialize, cleanup, backup, install certs
      deploy:
        steps:
        - checkout: self
        - script: sudo apt install python3-pip
          displayName: 'Update pip'
        - script: python3 -m pip install -r requirements.txt
          displayName: 'Install requirements.txt modules'
        - script: rsync -a $(Build.SourcesDirectory) /home/ubuntu/$(Build.Repository.Name)/
          displayName: 'Sync files to $(Build.Repository.Name)'
        - task: Bash@3
          inputs:
            targetType: 'inline'
            script: python3 /home/ubuntu/$(Build.Repository.Name)/s/forecasts.py &
          displayName: 'Start the script'
      routeTraffic:
        steps:
        - script: echo routing traffic
      postRouteTraffic:
        steps:
        - script: echo health check post-route traffic
      on:
        failure:
          steps:
          - script: echo Restore from backup! This is on failure
        success:
          steps:
          - script: echo Notify! This is on success
EDIT
I edited the forecasts.py file to print "Sleeping..." every 5 seconds. When I execute it with nohup python -u /home/ubuntu/$(Build.Repository.Name)/s/forecasts.py & I receive the following logs. So the script works, but when I look at the running processes on the VM, there is no Python process running. The script dies when the pipeline ends, I assume.

##[debug]The task was marked as "done", but the process has not closed after 5 seconds. Treating the task as complete.
According to the debug log, this is more of an informational message indicating that some process is still running and has not been cleaned up, rather than an error message: nothing was written to the standard error stream and the task did not fail.
If you want the script to continue running in the background after the task has finished, you could try using the Start-Process command to launch the script. This makes sure the launched job keeps running when the task is finished, but the job will still be closed when the build is finished.
Start-Process powershell.exe -ArgumentList '-file xxx\forecasts.py'
For details, please refer to the workaround in this ticket.
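If you want to stay on the Linux VM rather than go through PowerShell, one possible direction (a sketch only, not verified against the agent's process-cleanup behaviour) is to have the Bash task run a small launcher that fully detaches forecasts.py: new session, and stdin/stdout/stderr redirected away from the agent's pipes, which is what the "STDIO streams did not close" message is complaining about. The paths below are placeholders.
# launch_forecasts.py - hypothetical helper, not part of the original pipeline.
# Starts forecasts.py in its own session with stdio detached from the agent,
# so the Bash task's streams can close when the step ends.
import subprocess

SCRIPT = "/home/ubuntu/repo/s/forecasts.py"  # placeholder; adjust for $(Build.Repository.Name)
LOG_PATH = "/home/ubuntu/forecasts.log"      # assumed log location

with open(LOG_PATH, "ab") as log:
    subprocess.Popen(
        ["python3", "-u", SCRIPT],
        stdin=subprocess.DEVNULL,   # do not inherit the agent's stdin
        stdout=log,                 # keep output out of the agent's pipes
        stderr=log,
        start_new_session=True,     # detach from the step's process group
    )
print("forecasts.py launched in a detached session")
Even with the streams detached, a long-running worker is usually better handed over to a service manager such as systemd on the VM, so it survives reboots and restarts on failure independently of the pipeline.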

Related

How to restart Python Docker Container from inside

My objective: I want to be able to restart a container based on the official Python image using some command inside the container.
My system: I have my own Docker image based on the official Python image, which looks like this:
FROM python:3.6.15-buster
WORKDIR /webserver
COPY requirements.txt /webserver
RUN /usr/local/bin/python -m pip install --upgrade pip
RUN pip3 install -r requirements.txt --no-binary :all:
COPY . /webserver
ENTRYPOINT ["./start.sh"]
As you can see, the image does not execute a single python file but it executes a script called start.sh, which looks like this:
#!/bin/bash
echo "Starting"
echo "Env: $ENTORNO"
exec python3 "$PATH_ENTORNO""Script1.py" &
exec python3 "$PATH_ENTORNO""Script2.py" &
exec python3 "$PATH_ENTORNO""Script3.py" &
All of this works perfectly, but I want the entire container based on this image to be restarted if, for example, script 3 fails.
My approach: I had two ideas about this problem. First, try to execute a reboot command in the python3 script, something like this:
from subprocess import call
[...]
call(["reboot"])
This does not work inside the Python Debian image, because of error:
reboot: command not found
The other approach was to mount the docker.sock inside the container, but the error this time is:
root@MachineName:/var/run# /var/run/docker.sock docker ps
bash: /var/run/docker.sock: Permission denied
I don't know whether I'm going about these two approaches the right way, but any help will be very much appreciated.
Update
After thinking about it, I realised you could send a signal to PID 1 (your entrypoint), trap it, and use a handler to exit with an appropriate code so that Docker will reschedule it.
Here's an MRE:
Dockerfile
FROM python:3.9
WORKDIR /app
COPY ./ /app
ENTRYPOINT ["./start.sh"]
start.sh
#!/usr/bin/env bash
python script.py &
# This traps user defined signal and kills the last command
# (`tail -f /dev/null`) before exiting with code 1.
trap 'kill ${!}; echo "Killed by backgrounded process"; exit 1' USR1
# Launches `tail` in the background and sets this program to wait
# for it to finish, so that it does not block execution
tail -f /dev/null & wait $!
script.py
import os
import signal
# Process 1 will be your entrypoint if you declared it in `exec-form`*
print("Sending signal to stop container")
os.kill(1, signal.SIGUSR1)
*exec form
Testing it
> docker build . -t test
> docker run test
Sending signal to stop container
Killed by backgrounded process
> docker inspect $(docker container ls -n 1 -q) --format='{{.State.ExitCode}}'
1
Original post
I think the safest bet would be to instruct docker to restart your container when there's some failure. Then you'd only have to exit your program with a non-zero code (i.e: run exit 1 from your start.sh) and docker will restart it from scratch.
Option 1: docker run --restart
Related documentation
docker run --restart on-failure <image>
Option 2: Using docker-compose
Version 3
In your docker-compose.yml you can set the restart_policy directive (it lives under the deploy key) for the service you're interested in restarting, e.g.:
version: "3"
services:
  app:
    ...
    deploy:
      restart_policy:
        condition: on-failure
    ...
Version 2
Before version 3, the same policy could be applied with the restart directive, which allows for less configuration.
version: "2"
services:
app:
...
restart: "on-failure"
...
Is there any reason why you are running three processes in the same container? As per microservice architecture basics, only one process should run in a container, so you should run three containers for the three scripts. Each script should then contain logic so that, if one of the other containers is not reachable, it gets killed.
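That "kill itself if a sibling is unreachable" logic is not spelled out above; a minimal sketch of one way to do it (hypothetical service names and ports, relying on the on-failure restart policy discussed earlier to bring the container back up):
# peer_watchdog.py - hypothetical sketch, not from the original answer.
# Exits non-zero when a sibling service stops answering, so Docker's
# on-failure restart policy recreates this container.
import socket
import sys
import time

PEERS = [("script1", 8001), ("script2", 8002)]  # assumed compose service names/ports

def reachable(host, port, timeout=3):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

while True:
    for host, port in PEERS:
        if not reachable(host, port):
            print(f"{host}:{port} unreachable, exiting so the container restarts")
            sys.exit(1)
    time.sleep(30)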
Well, in the end the solution was much simpler than I expected.
I started from the base where I mount the docker socket inside the container (I know that this practice is not recommended, but in my case, I know that it does not pose security problems), using the command in docker-compose:
volumes:
  - /var/run/docker.sock:/var/run/docker.sock
Then it was as simple as using the Docker library for Python, which provides a complete SDK over that socket and allowed me to restart the container from inside the Python script in a very simple way.
import docker
[...]
docker_client = docker.DockerClient(base_url='unix://var/run/docker.sock')
docker_client.containers.get("container_name").restart()
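If the container needs to restart itself rather than a hard-coded name, a small variation (assuming the container's hostname has not been overridden, in which case it defaults to the short container ID) would be:
import socket

import docker

docker_client = docker.DockerClient(base_url='unix://var/run/docker.sock')
# By default a container's hostname is its own short container ID,
# so this restarts the container the script is running in.
docker_client.containers.get(socket.gethostname()).restart()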

DietPi - running a script manually works - but starting from postboot.d throws I/O error

I'm trying to run a script automatically when booting a Raspberry Pi with DietPi.
My script starts a Python 3 program which, at the end, starts an external program, MP4Box, that merges two video files into an mp4 in a folder served by my lighttpd webserver.
When I start the script manually, everything works. But when the script starts automatically on boot, I get an error as soon as it reaches the external program MP4Box:
Cannot open destination file /var/www/Videos/20201222_151210.mp4: I/O Error
The script starting my Python programs is "startcam", which lies in the folder /var/lib/dietpi/postboot.d:
#!/bin/sh -e
# Autostart RaspiCam
cd /home/dietpi
rm -f trigger/*
python3 -u record_v0.1.py > record.log 2>&1 &
python3 -u motioninterrupt.py > motion.log 2>&1 &
the readme.txt in postboot.d says:
# /var/lib/dietpi/postboot.d is implemented by DietPi and allows to run scripts at the end of the boot process:
# - /etc/systemd/system/dietpi-postboot.service => /boot/dietpi/postboot => /var/lib/dietpi/postboot.d/*
# There are nearly no restrictions about file names and permissions:
# - All files (besides this "readme.txt" and dot files ".filename") are executed as root user.
# - Execute permissions are automatically added.
# NB: This delays the login prompt by the time the script takes, hence it must not be used for long-term processes, but only for oneshot tasks.
So it should also start my script with root privileges. This is the (part of the) script "record_v0.1.py" that throws the error:
import os
os.system('MP4Box -fps 15 -cat /home/dietpi/b-file001.h264 -cat /home/dietpi/a-file001.h264 -new /var/www/Videos/file001.mp4 -tmp ~ -quiet')
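Since os.system only returns an exit status, the actual reason for the I/O error is easy to miss when the script runs from postboot.d; a diagnostic variant of the call above (a sketch using the same MP4Box arguments) that surfaces the exit code and stderr could look like this:
# Hypothetical diagnostic variant of the MP4Box call, not from the original script.
import subprocess

cmd = [
    "MP4Box", "-fps", "15",
    "-cat", "/home/dietpi/b-file001.h264",
    "-cat", "/home/dietpi/a-file001.h264",
    "-new", "/var/www/Videos/file001.mp4",
    "-tmp", "/home/dietpi",   # ~ is not expanded without a shell, so spell the path out
    "-quiet",
]
result = subprocess.run(cmd, capture_output=True, text=True)
if result.returncode != 0:
    print("MP4Box failed with exit code", result.returncode)
    print(result.stderr)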
When I start the python programs manually (logged in as root) with:
/var/lib/dietpi/postboot.d/startcam
everything is OK and instead of the error I get the message:
Appending file /home/dietpi/Videos/b-20201222_153124.h264
No suitable destination track found - creating new one (type vide)
Appending file /home/dietpi/Videos/a-20201222_153124.h264
Saving /var/www/Videos/20201222_153124.mp4: 0.500 secs Interleaving
Thanks for every hint
Contrary to the description, the scripts in postboot.d are not executed as root. So I changed my script to:
#!/bin/sh -e
# Autostart RaspiCam
cd /home/dietpi
rm -f trigger/*
sudo python3 -u record_v0.1.py > record.log 2>&1 &
sudo python3 -u motioninterrupt.py > motion.log 2>&1 &
Now they are running as root and everything works as wanted.
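To confirm which user the Python programs actually run as, a one-off check like this could be dropped at the top of record_v0.1.py (a sketch, not part of the original script); if it does not report root, writes into /var/www/Videos will fail exactly as described above:
import os
import pwd

print("running as:", pwd.getpwuid(os.geteuid()).pw_name, "euid:", os.geteuid())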

How to write a starter script to start my backend and frontend?

I'm running Python Flask as my backend and React as my frontend. Every time I start my app, I have to run export FLASK_APP=app and then flask run in terminal 1, and npm start in terminal 2. How do I write a single script that starts both processes?
Here is my attempt:
#!/bin/bash
export FLASK_APP=microblog.py
flask run > /dev/null
npm start --prefix ~/app
Try this:
#!/bin/bash
export FLASK_APP=microblog.py
flask run > /dev/null & pids=$!
npm start --prefix ~/app & pids+=" $!"
trap "kill $pids" SIGTERM SIGINT
wait $pids
This script starts both flask and npm in the background and stores their PIDs. After that, we set up a trap: in case you hit Ctrl-C, both programs will get killed.
The wait line will block until both the flask and npm processes have finished, so you can easily terminate both with Ctrl-C.
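If you would rather keep the launcher in Python, the same start-both-then-wait pattern could be sketched like this (hypothetical paths, not from the original answer):
#!/usr/bin/env python3
# Hypothetical Python equivalent of the bash launcher above.
import os
import subprocess

env = dict(os.environ, FLASK_APP="microblog.py")
procs = [
    subprocess.Popen(["flask", "run"], env=env, stdout=subprocess.DEVNULL),
    subprocess.Popen(["npm", "start", "--prefix", os.path.expanduser("~/app")]),
]

try:
    # Block until both children exit.
    for p in procs:
        p.wait()
except KeyboardInterrupt:
    # Ctrl-C: terminate whichever children are still running.
    for p in procs:
        if p.poll() is None:
            p.terminate()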

coreutils timeout in a bash script not transparent for the application

I have an issue with executing an application via /usr/bin/timeout in a bash script.
In this specific case it is a simple Python fabric script (fabric version 1.14).
In order to install this version of the fabric library, run: pip install "fabric<2"
The issue does not reproduce with the newer fabric 2.x.
Shell script causing the issue:
[root@testhost:~ ] $ cat testNOK.sh
#!/bin/bash
timeout 10 ./test.py
echo "RETCODE=$?"
[root@testhost:~ ] $ ./testNOK.sh
[localhost] run: echo Hello!
RETCODE=124
[root@testhost:~ ] $
A similar script (without timeout) works fine:
[root@testhost:~ ] $ cat testOK.sh
#!/bin/bash
./test.py
echo "RETCODE=$?"
[root@testhost:~ ] $ ./testOK.sh
[localhost] run: echo Hello!
[localhost] out: Hello!
[localhost] out:
RETCODE=0
[root@testhost:~ ] $
Manual execution from the bash command line with timeout works fine:
[root@testhost:~ ] $ timeout 10 ./test.py && echo "RETCODE=$?"
[localhost] run: echo Hello!
[localhost] out: Hello!
[localhost] out:
RETCODE=0
[root@testhost:~ ] $
The Python 2.7 test.py script:
[root@testhost:~ ] $ cat test.py
#!/usr/bin/python
from fabric.api import run, settings
with settings(host_string='localhost', user='root', password='XXXXX'):
    run('echo Hello!')
[root@testhost:~ ] $
I have observed the same behavior on different Linux distributions.
Now the question is why an application executed via timeout within a bash script behaves differently, and what would be the best solution to this issue?
You need to invoke timeout with the --foreground option:
timeout --foreground 10 ./test.py
This is only required if the timeout command is not executed from an interactive shell (that is, if it's executed from a script file).
Quoting from the timeout info page:
‘--foreground’
Don’t create a separate background program group, so that the
managed COMMAND can use the foreground TTY normally. This is
needed to support timing out commands not started directly from an
interactive shell, in two situations.
1. COMMAND is interactive and needs to read from the terminal for
example
2. the user wants to support sending signals directly to COMMAND
from the terminal (like Ctrl-C for example)
What's actually going on in this case is that fabric (or something it invokes) is calling tcsetattr to turn terminal echo off. I don't know why, but I suppose it has something to do with the process used to (not) collect the user password. (I just saw it in an strace; I made no attempt to find the call.) Attempting to change tty configuration from a background process will cause the process to block until it regains control of the tty, and that's what's happening.
It doesn't happen when timeout is not used because bash doesn't create a background program group. I suppose that fabric 2 avoids the call to tcsetattr.
You could probably also avoid the issue by avoiding password-based SSH authentication but I didn't try that.
You can also avoid the problem by redirecting stdin to /dev/null (either in the timeout command or in the invocation of the shell script). If you don't need to forward stdin to the remote command (and you probably don't), that might also be useful.
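If the caller happens to be Python rather than a shell script, both ideas (a timeout and stdin detached from the terminal) can be combined in one call; a sketch:
# Hypothetical wrapper: run test.py with a 10-second timeout and no terminal
# on stdin, so nothing in the child can block on tty configuration.
import subprocess

try:
    result = subprocess.run(
        ["./test.py"],
        stdin=subprocess.DEVNULL,  # equivalent to `< /dev/null`
        timeout=10,
    )
    print("RETCODE =", result.returncode)
except subprocess.TimeoutExpired:
    print("test.py did not finish within 10 seconds")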
You can also set a timeout without using bash, just by using the time module in Python:
import time

time.sleep(5)  # change the 5 to the number of seconds you want as the timeout

Sleepwatcher on OS X 10.11 not executing script on wake

I installed Sleepwatcher 2.2 on OS X 10.11 and launch it via LaunchD as an agent.
It launches okay and shows up in the activity monitor.
However, I want it to fire off a python script when the computer wakes up.
My installation commands are as follows.
sudo mkdir -p /usr/local/sbin /usr/local/share/man/man8
sudo cp ~/Desktop/sleepwatcher_2.2/sleepwatcher /usr/local/sbin
sudo cp ~/Desktop/sleepwatcher_2.2/sleepwatcher.8 /usr/local/share/man/man8
sudo cp ~/Desktop/sleepwatcher_2.2/sleepwatcher/config/rc.sleep /etc
sudo cp ~/Desktop/sleepwatcher_2.2/sleepwatcher/config/rc.wakeup /etc
sudo cp ~/Desktop/sleepwatcher_2.2/sleepwatcher/config/de.bernhard-baehr.sleepwatcher-20compatibility-localuser.plist /Library/LaunchAgents
chmod +x /etc/rc.sleep
chmod +x /etc/rc.wakeup
chmod +x /usr/local/bin/test.py
My rc.wakeup file is as follows.
#!/bin/sh
/usr/local/bin/python3 /usr/local/bin/test.py
When executing Sleepwatcher in a terminal window by typing in the following, it seems to work:
/usr/local/sbin/sleepwatcher --verbose --wakeup /usr/local/bin/test.py
However, when trying to run it as a start-up item under LaunchD, it does not seem to execute my python script.
I have searched all over and cannot figure out why it is not working when launched via LaunchD.
Has anybody run into this type of problem?
Thanks in advance.
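One way to narrow this down is to make test.py write to an absolute path, so it is obvious whether LaunchD invokes the script at all and with what environment; a minimal diagnostic sketch (the original test.py is not shown, so this is purely hypothetical):
#!/usr/local/bin/python3
# Hypothetical diagnostic test.py: append a timestamp, user and PATH to a log
# at an absolute path, so runs triggered on wake are easy to spot.
import datetime
import getpass
import os

with open("/tmp/sleepwatcher-test.log", "a") as log:
    log.write(
        datetime.datetime.now().isoformat()
        + " user=" + getpass.getuser()
        + " PATH=" + os.environ.get("PATH", "")
        + "\n"
    )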
I encountered similar problems, so I took a different approach using another open source tool called Hammerspoon. It can automate a bunch of things on macOS, including sleep/wake events. It's quite simple to replicate Sleepwatcher's functionality by adding the following to Hammerspoon's ~/.hammerspoon/init.lua (or creating a 'spoon'): a function that triggers when the machine wakes or sleeps and calls the corresponding wake and sleep scripts from Sleepwatcher (in e.g. /Users/username/scripts - ensure username is changed):
function caffeinateWatcher(eventType)
    if (eventType == hs.caffeinate.watcher.systemWillSleep or
        eventType == hs.caffeinate.watcher.systemWillPowerOff) then
        print ("WillSleep...")
        -- Execute sleep script
        hs.task.new("/Users/username/scripts/rc.sleep", nil):start()
    elseif (eventType == hs.caffeinate.watcher.systemDidWake) then
        print ("Woken...")
        -- Execute wake script
        hs.task.new("/Users/username/scripts/rc.wake", nil):start()
    end
end

sleepWatcher = hs.caffeinate.watcher.new(caffeinateWatcher)
sleepWatcher:start()
Note: if you want Hammerspoon to launch the shell scripts, you need to ensure they start with the standard bash shebang #!/bin/bash.
