FreeBSD rc script with output to file - python

I have this script below where I start a python program.
The python program writes its output to stdout/the terminal, but I want it to be started silently via the rc script, with the output going to a log file.
I can start and stop the program perfectly, and it also creates the log file, but nothing ever gets written to it. I tried a lot of different ways, even using daemon as the starter.
Where is my problem?
#!/bin/sh
# REQUIRE: DAEMON
# KEYWORD: shutdown
. /etc/rc.subr
location="/rpiVent"
name="rpiVentService"
rcvar=`set_rcvar`
command="$location/$name"
#command_args="> $location/$name.log" // Removed
command_interpreter="/usr/bin/python"
load_rc_config $name
run_rc_command "$1"

Redirection with > is a feature of the shell, not an actual part of the command line. When commands are invoked programmatically, the arguments given to them cannot contain shell directives (unless the parent process has special support for the shell, as with Python's subprocess.Popen(shell=True)).
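For illustration, here is a minimal Python sketch of the difference (reusing the paths from the question):

import subprocess

# Here ">" is passed to the program as a literal argument, not
# interpreted as a redirection -- no log file gets written:
subprocess.Popen(["/rpiVent/rpiVentService", ">", "/rpiVent/rpiVentService.log"])

# With shell=True, a shell parses the string and performs the redirection:
subprocess.Popen("/rpiVent/rpiVentService > /rpiVent/rpiVentService.log", shell=True)

# Or redirect programmatically, without a shell, via the stdout argument:
with open("/rpiVent/rpiVentService.log", "w") as log:
    subprocess.Popen(["/rpiVent/rpiVentService"], stdout=log)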
What you can do in this case is wrap your command (/rpiVent/rpiVentService) in a shell script and invoke that shell script from the FreeBSD rc script.
Create /rpiVent/run.sh:
#!/bin/sh
/rpiVent/rpiVentService > /rpiVent/rpiVentService.log
and then use this as the command (no args needed).

The correct way to do this is probably to "override" the start command using the start_cmd variable, like this:
#!/bin/sh
# REQUIRE: DAEMON
# KEYWORD: shutdown
. /etc/rc.subr
location="/rpiVent"
name="rpiVentService"
rcvar=`set_rcvar`
load_rc_config $name
command="$location/$name"
command_interpreter="/usr/bin/python"
start_cmd=rpivent_cmd
rpivent_cmd()
{
    $command_interpreter $command > $location/$name.log
}
run_rc_command "$1"
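A further option, not from the answers above: if you can modify the Python program itself, have it write its own log file, so the rc script needs no redirection at all. A minimal sketch using the logging module (filename taken from the question):

import logging

# Send everything the program logs to a file instead of stdout.
logging.basicConfig(
    filename="/rpiVent/rpiVentService.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)
logging.info("rpiVentService started")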

Related

accessing python dictionary from bash script

I am invoking a bash script from a python script.
I want the bash script to add an element to the dictionary "d" in the python script.
abc3.sh:
#!/bin/bash
rank=1
echo "plugin"
function reg()
{
    if [ "$1" == "what" ]; then
        python -c 'from framework import data;data(rank)'
        echo "iamin"
    else
        plugin
    fi
}
plugin()
{
    echo "i am plugin one"
}
reg $1
python file:
import sys,os,subprocess
from collections import *
subprocess.call(["./abc3.sh what"],shell=True,executable='/bin/bash')
def data(rank,check):
    d[rank]["CHECK"]=check
    print d[1]["CHECK"]
If I understand correctly, you have a python script that runs a shell script, which in turn runs a new python script. And you'd want the second Python script to update a dictionary in the first one. That will not work like that.
When you run your first python script, it will create a new python process, which will interpret each instruction from your source script.
When it reaches the instruction subprocess.call(["./abc3.sh what"],shell=True,executable='/bin/bash'), it will spawn a new shell (bash) process which will in turn interpret your shell script.
When the shell script reaches python -c <commands>, it invokes a new python process. This process is independent from the initial python process (even if you run the same script file).
Because each of these scripts runs in a different process, they don't have access to each other's data (the OS makes sure that each process is independent of the others, except via specific inter-process communication methods).
What you need to do is use some kind of inter-process mechanism so that the initial python script gets data back from the shell script. You may, for example, read data from the shell's standard output using subprocess.check_output (https://docs.python.org/3/library/subprocess.html#subprocess.check_output).
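For instance, a minimal check_output sketch (reusing the script name from the question; it assumes abc3.sh is executable):

import subprocess

# Run the shell script and capture everything it writes to stdout.
output = subprocess.check_output(["./abc3.sh", "what"])
print(output.decode().strip())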
Let's suppose that you have a shell plugin that echoes the value:
echo $1 12
The mockup python script looks like this (I'm on Windows/MSYS2, BTW, hence the paths that look strange to a Linux user):
import subprocess

p = subprocess.Popen(args=[r'C:\msys64\usr\bin\sh.exe', "-c", "C:/users/jotd/myplugin.sh myarg"],
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
o, e = p.communicate()
p.wait()
if len(e):
    print("Warning: error found: " + e.decode())
result = o.strip()
d = dict()
d["TEST"] = result
print(d)
It prints the dictionary, proving that the argument was passed to the shell and came back processed.
Note that stderr is captured separately so it doesn't get mixed up with the result; it is printed to the console if an error occurs.
{'TEST': b'myarg 12'}

How to import a python file in a bash script? (to use a python value in my bash script)

I would like to know if it's possible in a bash script to include a python script in order to use (in the bash script) the return value of a function I wrote in my python program.
For example:
my file "file.py" has a function which returns a variable value "my_value" (which represents the name of a file but anyway)
I want to create a bash script which has to be able to execute a commande line like "ingest my_value"
So do you know how to include a python file in a bash script (import ...?) and how is it possible to call a value from a python file inside a bash script ?
Thank you in advance.
Update
Actually, my python file looks like that:
class formEvents():
    def __init__(self):
        ...
    def myFunc1(self): # function which returns the name of a file that the user chose on his computer
        ...
        return name_file
    def myFunc2(self): # function which calls an existing bash script (bash_file.sh), writing name_file into it (in the middle of a line)
        subprocess.call(['./bash_file.sh'])

if __name__ == "__main__":
    FE = formEvents()
I don't know if it's clear enough, but here is my problem: it's to be able to pass name_file into bash_file.sh
Jordane
The easiest way of doing this is via the standard UNIX Pipeline and your Shell.
Here's an example:
foo.sh:
#!/bin/bash
my_value=$(python file.py)
echo $my_value
file.py:
#!/usr/bin/env python
def my_function():
    return "my_value"

if __name__ == "__main__":
    print(my_function())
The way this works is simple:
You launch foo.sh
Bash spawns a subprocess and runs python file.py
Python (interpreting file.py) runs the function my_function and prints its return value to "Standard Output"
Bash captures the "Standard Output" of the Python process in my_value
Bash then simply echoes the value stored in my_value also to "Standard Output" and you should see "my_value" printed to the Shell/Terminal.
If the python script outputs the return value to the console, you should be able to just do this
my_value=$(command)
Edit: Damn, beat me to it
Alternatively, you can make the bash script process arguments.
#!/usr/bin/bash
if [[ -n "$1" ]]; then
    name_file="$1"
else
    echo "No filename specified" >&2
    exit 1
fi
# And use $name_file in your script
In Python, your subprocess call should be changed accordingly:
subprocess.call(['./bash_file.sh', name_file])
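For completeness, a sketch of how the Python side could fit together (class and method names taken from the question; the file-chooser body is a hypothetical stand-in):

import subprocess

class formEvents():
    def myFunc1(self):
        # Hypothetical stand-in for the real file chooser
        return "chosen_file.txt"

    def myFunc2(self):
        name_file = self.myFunc1()
        # Pass the filename as "$1" to the bash script shown above
        subprocess.call(['./bash_file.sh', name_file])

if __name__ == "__main__":
    formEvents().myFunc2()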

how to pass an argument to a python script when starting with nohup

I need to start a python script with bash using nohup passing an arg that aids in defining a constant in a script I import. There are lots of questions about passing args but I haven't found a successful way using nohup.
a simplified version of my bash script:
#!/bin/bash
BUCKET=$1
echo $BUCKET
script='/home/path/to/script/script.py'
echo "starting $script with nohup"
nohup /usr/bin/python $script $BUCKET &
the relevant part of my config script I'm importing:
FLAG = sys.argv[0]
if FLAG == "b1":
    AWS_ACCESS_KEY_ID = "key"
    BUCKET = "bucket1"
    AWS_SECRET_ACCESS_KEY = "secret"
elif FLAG == "b2":
    AWS_ACCESS_KEY_ID = "key"
    BUCKET = "bucket2"
    AWS_SECRET_ACCESS_KEY = "secret"
else:
    AWS_ACCESS_KEY_ID = "key"
    BUCKET = "bucket3"
    AWS_SECRET_ACCESS_KEY = "secret"
the script that's using it:
from config import BUCKET, AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY
#do stuff with the values.
Frankly, since I'm passing the args to script.py, I'm not confident that they'll be in scope for the imported config script. That said, when I take a similar approach without using nohup, it works.
In general, the argument vector for any program starts with the program itself, and then all of its arguments and options. Depending on the language, the program may be sys.argv[0], argv[0], $0, or something else, but it's basically always argument #0.
Each program whose job is to run another program—like nohup, and like the Python interpreter itself—generally drops itself and all of its own options, and gives the target program the rest of the command line.
So, nohup takes a COMMAND and zero or more ARGS. Inside that COMMAND, argv[0] will be COMMAND itself (in this case, '/usr/bin/python'), and argv[1] and later will be the additional arguments ('/home/path/to/script/script.py' and whatever $BUCKET resolves to).
Next, Python takes zero or more options, a script, and zero or more args to that script, and exposes the script and its args as sys.argv. So, in your script, sys.argv[0] will be '/home/path/to/script/script.py', and sys.argv[1] will be whatever $BUCKET resolves to.
And bash works similarly to Python; $1 will be the first argument to the bash wrapper script ($0 will be the script itself), and so on. So, sys.argv[1] in the inner Python script will end up getting the first argument passed to the bash wrapper script.
Importing doesn't affect sys.argv at all. So, in both your config module and your top-level script, if you import sys, sys.argv[1] will hold the $1 passed to the bash wrapper script.
(On some platforms, in some circumstances argv[0] may not have the complete path, or may even be empty. But that isn't relevant here. What you care about is the eventual sys.argv[1], and bash, nohup, and python are all guaranteed to pass that through untouched.)
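So the fix in the config script is to read sys.argv[1] rather than sys.argv[0]; a sketch along those lines (bucket names from the question, keys elided):

import sys

# sys.argv[0] is the script path itself; the bucket flag passed by
# the bash wrapper arrives as sys.argv[1].
FLAG = sys.argv[1] if len(sys.argv) > 1 else ""

if FLAG == "b1":
    BUCKET = "bucket1"
elif FLAG == "b2":
    BUCKET = "bucket2"
else:
    BUCKET = "bucket3"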
nohup python3 -u ./train.py --dataset dataset_directory/ --model model_output_directory > output.log &
Here I'm executing the train.py file with python3. The -u flag disables output buffering so the logs show up as they are produced; the dataset directory and model output directory are passed as arguments; the greater-than symbol (>) redirects the logs into output.log; and finally the ampersand (&) runs the whole thing in the background.
To terminate this process:
ps ax | grep train
then note the process ID and run
sudo kill -9 <process_ID>

Powershell equivalent of python's if __name__ == '__main__':

I am really fond of python's capability to do things like this:
if __name__ == '__main__':
    # setup testing code here
    # or set up a call to a function with parameters and human-format the output
    # etc...
This is nice because I can treat a Python script file as something that can be called from the command line, while its functions and classes remain easy to import into a separate python script file without triggering the default run-from-the-command-line behavior.
Does Powershell have a similar facility that I could exploit? And if it doesn't, how should I organize my library of function files so that I can easily execute some of them while I am developing them?
$MyInvocation has information about how the script was started.
If ($MyInvocation.InvocationName -eq '&') {
    "Called using operator: '$($MyInvocation.InvocationName)'"
} ElseIf ($MyInvocation.InvocationName -eq '.') {
    "Dot sourced: '$($MyInvocation.InvocationName)'"
} ElseIf ((Resolve-Path -Path $MyInvocation.InvocationName).ProviderPath -eq $MyInvocation.MyCommand.Path) {
    "Called using path: '$($MyInvocation.InvocationName)'"
}
$MyInvocation has lots of information about the current context, and those of callers. Maybe this could be used to detect if a script is being dot-sourced (i.e. imported) or executed as a script.
A script can act like a function: use param as the first non-comment/whitespace statement in the file to define parameters. It is not clear (one would need to try different combinations) what happens if you dot-source a script that starts with param...
Modules can directly execute code as well as export functions, variables, ... and can take parameters. Maybe $MyInvocation in a module would allow the two cases to be detected.
EDIT: Additional:
$MyInvocation.Line contains the command line used to execute the current script or function. When dot-sourcing, this text will start with "." but not when run as a script (obviously a case for a regex match that allows variable whitespace around the period).
As of now I see 2 options that work
if ($MyInvocation.InvocationName -ne '.') {#do main stuff}
and
if ($MyInvocation.CommandOrigin -eq 'Runspace') {#do main stuff}
Disclaimer: This is only tested on Powershell Core on Linux. It may not work the same for Windows. If anyone tries it on Windows I would appreciate if you could verify in the comments.
function IsMain() {
    (Get-Variable MyInvocation -Scope Local).Value.PSCommandPath -Eq (Get-Variable MyInvocation -Scope Global).Value.InvocationName
}
Demonstrated with a gist

Python subprocess block

I'm having a problem with the module subprocess; I'm running a script from Python:
subprocess.Popen('./run_pythia.sh', shell=True).communicate()
and sometimes it just blocks and never finishes executing the script. Before, I was using .wait(), but I switched to .communicate(). Nevertheless the problem continues.
First the script compiles a few files, then it runs the resulting executable, redirecting its output into a file:
run_pythia.sh:
#!/bin/bash
#PBS -l walltime=1:00:00
./compile.sh
./exec > resultado.txt
compile.sh:
O=`find ./ -name "*.o" | xargs`
# LOAD cernlib2005
module load libs/cernlib/2005
# Compile and Link
FC=g77
CERNLIBPATH="-L/software/local/cernlib/2005/lib -lpacklib"
$FC call_pyth_mix.f analise_tt.f $O $CERNLIBPATH -o exec
Is the script you execute, run_pythia.sh, guaranteed to finish executing? If not, you might not want to use blocking methods like communicate(). You might want to look into interacting with the .stdout, .stderr, and .stdin file handles of the returned process handle yourself (in a non-blocking manner).
Also, if you still want to use communicate(), you need to pass subprocess.PIPE for the stdout/stderr arguments of Popen's constructor.
Read the documentation on the module for more details.
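For example, a minimal sketch of capturing the script's output through pipes (script name from the question):

import subprocess

# Without stdout=PIPE/stderr=PIPE, communicate() has nothing to read
# and the child's output goes straight to the terminal.
p = subprocess.Popen('./run_pythia.sh', shell=True,
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()  # blocks until the script exits
print(out.decode())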
Maybe you can try to do a trace on it:
import pdb; pdb.set_trace()
