Can't modify environment variable (assigned in Shell Script) using Python - python

I'm trying to create a loop inside a shell script, and I want to break out of the loop and finish the shell script when I find an integer different from 0 in a specific string (using Python). The problem is that even after the first occurrence of such an integer, the shell script keeps executing. I tried to debug it by echoing the value of GET_OUT_OF_LOOP, but it just keeps echoing 0 even after finding the kind of integer I was looking for. I already looked on the web for a way to do this, but I still haven't figured it out...
Here's my shell script:
#!/bin/sh
export GET_OUT_OF_LOOP=0
while [ $GET_OUT_OF_LOOP -ne 1 ]; do
    python3 provas.py provas.txt
    ./provas < provas.txt >> data.txt
    python3 test.py data.txt
    sh clear_data.sh
done
And here is my Python code (test.py), where I'm trying to change the value of the GET_OUT_OF_LOOP variable using os.environ:
#!/usr/bin/env python3
import sys
import os
import re

script, filename = sys.argv
os.environ['GET_OUT_OF_LOOP'] = '0'
fin = open("data.txt", 'r')
for line in fin:
    if "A percentagem de aprovação foi de" in line:
        if int(re.search(r'\d+', line).group()) != 0:
            print(line)
            os.environ['GET_OUT_OF_LOOP'] = '1'

The Python process is a subprocess of the shell process, and it cannot modify the environment variables of its parent process.
For your case, you can use the exit code to pass the message instead; i.e.
shell script:
python3 test.py data.txt || GET_OUT_OF_LOOP=1
python:
#!/usr/bin/env python3
import sys
import re

script, filename = sys.argv
fin = open("data.txt", 'r')
for line in fin:
    if "A percentagem de aprovação foi de" in line:
        if int(re.search(r'\d+', line).group()) != 0:
            print(line)
            sys.exit(1)
sys.exit(0)

That is just the way environment variables work: a sub-process cannot change variables in the environment of the process that called it.
(And in a shell script, almost every line of code, apart from control structures, runs as an external sub-process.)
What you can have is a simple unsigned byte (0-255) return value from your sub-process, which can be read in the shell script as the implicit $? variable.
In Python's case, you terminate the program with this return value by calling sys.exit().
So, in your shell script you can do this to assign the variable:
python3 test.py data.txt
GET_OUT_OF_LOOP=$?
And in the Python script, change:
os.environ['GET_OUT_OF_LOOP'] = '1'
for
sys.exit(1)
Of course, it would be much more sane and maintainable if you just used Python all the way from the top: the shutil module in the stdlib makes it easy to copy files around, and, above all, you get a consistent syntax across all lines of your script, with much easier-to-use comparison operators and variables.
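For illustration, here is a minimal sketch of what the whole loop could look like in plain Python, reusing the file names from the question; clear_data.sh is kept as a subprocess call since its contents are not shown:
#!/usr/bin/env python3
# Sketch: drive the whole provas loop from Python instead of sh.
import re
import subprocess

def approval_found(path):
    # True once a non-zero percentage shows up in the results file.
    with open(path) as fin:
        for line in fin:
            if "A percentagem de aprovação foi de" in line:
                if int(re.search(r'\d+', line).group()) != 0:
                    print(line, end="")
                    return True
    return False

while True:
    subprocess.run(["python3", "provas.py", "provas.txt"], check=True)
    with open("provas.txt") as stdin, open("data.txt", "a") as stdout:
        subprocess.run(["./provas"], stdin=stdin, stdout=stdout, check=True)
    if approval_found("data.txt"):
        break
    subprocess.run(["sh", "clear_data.sh"], check=True)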

Here are two similar Stack Overflow questions that might explain yours:
how-do-i-make-environment-variable-changes-stick-in-python
environment-variables-in-python-on-linux
So the real reason for this issue is that when we run a process, the environment variables changed by that process are only available during its own runtime; it won't change the variables of the outer shell. Here is a simplified version of your script to prove it:
# test.py
import os
os.environ['test_env_var'] = '1'
# test.sh
export test_env_var=0
while [ $test_env_var -ne 1 ]; do
    python test.py
    echo $test_env_var
done
As you might have already seen coming, the loop will echo $test_env_var as 0 forever.
Hence, to my understanding, the solution to this problem would be to out-source the change into external files, if it's necessary: append the change to the configuration files of the relevant systems. For this example, you could append "export test_env_var=1" to ~/.bashrc, if you are a Linux bash user.
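If appending to ~/.bashrc is too heavy-handed, another file-based variant is a small state file that the loop itself reads back; a minimal sketch (the file name test_env_var.txt is made up for illustration, and the shell side would read it back with something like test_env_var=$(cat test_env_var.txt)):
# test.py -- sketch: persist the value to a file the parent shell can read back
with open("test_env_var.txt", "w") as fout:
    fout.write("1")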

Related

How to pass arguments to python script? [duplicate]

I know that I can run a python script from my bash script using the following:
python python_script.py
But what if I wanted to pass a variable / argument to my python script from my bash script? How can I do that?
Basically bash will work out a filename and then python will upload it, but I need to send the filename from bash to python when I call it.
To execute a python script in a bash script you need to call the same command that you would within a terminal. For instance
> python python_script.py var1 var2
To access these arguments within Python, you will need:
import sys
print(sys.argv[0]) # prints python_script.py
print(sys.argv[1]) # prints var1
print(sys.argv[2]) # prints var2
Besides sys.argv, also take a look at the argparse module, which helps define options and arguments for scripts.
The argparse module makes it easy to write user-friendly command-line interfaces.
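For example, a minimal argparse sketch for the use case above (the positional argument name filename is just an illustration, matching the question's "send the filename from bash"):
# python_script.py -- argparse sketch; invoked as: python python_script.py /path/to/file
import argparse

parser = argparse.ArgumentParser(description="Upload a file chosen by the calling bash script.")
parser.add_argument("filename", help="path worked out by the bash script")
args = parser.parse_args()
print(args.filename)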
Use
python python_script.py filename
and in your Python script
import sys
print(sys.argv[1])
Embedded option:
Wrap python code in a bash function.
#!/bin/bash
function current_datetime {
    python - <<END
import datetime
print(datetime.datetime.now())
END
}

# Call it
current_datetime

# Call it and capture the output
DT=$(current_datetime)
echo Current date and time: $DT
Use environment variables to pass data into your embedded python script.
#!/bin/bash
function line {
    PYTHON_ARG="$1" python - <<END
import os
line_len = int(os.environ['PYTHON_ARG'])
print('-' * line_len)
END
}
# Do it one way
line 80
# Do it another way
echo $(line 80)
http://bhfsteve.blogspot.se/2014/07/embedding-python-in-bash-scripts.html
use in the script:
echo $(python python_script.py arg1 arg2) > /dev/null
or
python python_script.py "string arg" > /dev/null
The script will be executed without output.
I have a bash script that calls a small Python routine to display a message window. As I need to use killall to stop the Python script, I can't use the above method, because running killall python could take out other Python programs, so I use:
pythonprog.py "$argument" &   # the & returns control straight to the bash script
As long as the Python script will run from the CLI by name (rather than as python pythonprog.py), this works within the script. If you need more than one argument, just use a space between each one within the quotes.
Also, take a look at the getopt module.
It works quite well for me!
Print all args without the filename:
for i in range(1, len(sys.argv)):
    print(sys.argv[i])
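And for the getopt module mentioned above, a minimal sketch (the -f/--file option is made up for illustration):
# getopt sketch; invoked as: python python_script.py -f somefile extra_arg
import getopt
import sys

opts, remaining = getopt.getopt(sys.argv[1:], "f:", ["file="])
for opt, value in opts:
    if opt in ("-f", "--file"):
        print("file option:", value)
print("remaining args:", remaining)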

Python curses does not work with command substitution

I was using python project pick to select an option from a list. Below code returns the option and index.
option, index = pick(options, title)
Pick uses curses library from python. I want to pass the output of my python script to shell script.
variable output = $(pythonfile.py)
but it gets stuck on the curses screen. It cannot draw anything. What can be the reason for this?
pick gets stuck because when you use $(pythonfile.py), the shell redirects the output of pythonfile.py as if it were a pipe. Also, the output of pick contains characters for updating the screen (not what you want). You can work around those problems by
redirecting the output of pythonfile.py to /dev/tty
ensuring that your pythonfile.py writes its result to the standard error, and
directing the standard error in the shell script to the output of the $(...) construct.
For example:
#!/bin/bash
foo=$(python pythonfile.py 2>&1 >/dev/tty)
echo "result '$foo'"
and in pythonfile.py, doing
import sys
print(option, index, file=sys.stderr)
rather than
print(option, index)
To pass the output of a Python script to a Bash variable you need to specify the command with which to open the python file inside the variable's declaration.
Like so:
variable_output=$(python pythonfile.py)
Furthermore, if you'd like to pass a variable from Python to bash you could use Python's sys module and then redirect the stderr.
Like so:
test.py
import sys
test_var = str(3 + 3)
sys.exit(test_var)
test.sh
test_var=$(python3 test.py 2>&1)
echo $test_var
Now, if we run test.sh we get the output 6.

accessing python dictionary from bash script

I am invoking the bash script from a Python script.
I want the bash script to add an element to the dictionary "d" in the Python script.
abc3.sh:
#!/bin/bash
rank=1
echo "plugin"

function reg()
{
    if [ "$1" == "what" ]; then
        python -c 'from framework import data;data(rank)'
        echo "iamin"
    else
        plugin
    fi
}

plugin()
{
    echo "i am plugin one"
}

reg $1
python file:
import sys, os, subprocess
from collections import *

subprocess.call(["./abc3.sh what"], shell=True, executable='/bin/bash')

def data(rank, check):
    d[rank]["CHECK"] = check
    print d[1]["CHECK"]
If I understand correctly, you have a Python script that runs a shell script, which in turn runs a new Python script. And you'd want the second Python script to update a dictionary in the first script. That will not work like that.
When you run your first Python script, it will create a new Python process, which will interpret each instruction from your source script.
When it reaches the instruction subprocess.call(["./abc3.sh what"],shell=True,executable='/bin/bash'), it will spawn a new shell (bash) process which will in turn interpret your shell script.
When the shell script reaches python -c <commands>, it invokes a new Python process. This process is independent from the initial Python process (even if you run the same script file).
Because each of these scripts runs in a different process, they don't have access to each other's data (the OS makes sure that each process is independent from the others, except for specific inter-process communication methods).
What you need to do: use some kind of inter-process mechanism so that the initial Python script gets data from the shell script. You may, for example, read data from the shell's standard output, using https://docs.python.org/3/library/subprocess.html#subprocess.check_output
Let's suppose that you have a shell plugin that echoes the value:
echo $1 12
The mockup python script looks like (I'm on windows/MSYS2 BTW, hence the strange paths for a Linux user):
import subprocess

p = subprocess.Popen(args=[r'C:\msys64\usr\bin\sh.exe', "-c", "C:/users/jotd/myplugin.sh myarg"],
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
o, e = p.communicate()
p.wait()
if len(e):
    print("Warning: error found: " + e.decode())
result = o.strip()

d = dict()
d["TEST"] = result
print(d)
It prints the dictionary, proving that the argument has been passed to the shell and came back processed.
Note that stderr has been kept separate to avoid being mixed up with the result, but it is printed to the console if an error occurs.
{'TEST': b'myarg 12'}
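As mentioned above, subprocess.check_output can do the same round trip more compactly; here is a sketch under the same assumptions, with a plain Unix-style path instead of the MSYS2 one:
# Sketch: capture the plugin's stdout directly (raises CalledProcessError on failure)
import subprocess

result = subprocess.check_output(["sh", "-c", "./myplugin.sh myarg"]).strip()
d = {"TEST": result}
print(d)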

Python/Shell script String issues

I'm trying to customize my zsh prompt. The function below calls a Python script and returns the entire working directory path minus just the current directory. E.g. ~/research would go to ~. This is a .zsh-theme file.
function collapse_pwd {
    echo $(python ~/.oh-my-zsh/themes/truncatecwd.py '%~' '%c')
}
This is the python script, truncatecwd.py.
#!/usr/bin/env python
import sys
cwd = sys.argv[1]
current_dir_end = sys.argv[2]
sys.stdout.write(cwd[0: cwd.index(current_dir_end)])
Weird things happen here. I keep getting errors saying that current_dir_end can't be found in cwd. I think that it has something to do with string formatting. I printed out cwd, and it seems to be correct: '~/.oh-my-zsh/themes'. However, when I call length on it, I get 2. Same goes for current_dir_end: I get length 2. In fact, even cwd = '~' returns a length of 2. Clearly, something subtle (but probably simple) is going on.
Thanks for your help.
I don't really understand what you're trying to do here, but wouldn't the following suffice, with no Python involved at all?
collapse_pwd() {
    local result=${1:-$PWD}
    if [[ $result = */* ]]; then
        result="${result%/*}"
    fi
    if [[ $result = "$HOME"/* ]]; then
        result="~/${result#$HOME/}"
    fi
    echo "$result"
}
Could you do something like this:
import os
import sys
cwd = os.getcwd()
ret = os.path.sep.join(cwd.split(os.path.sep)[:-1])
sys.stdout.write(ret)
Also, just an observation: because I'm not too familiar with zsh, you may need to call Python with the -u option to ensure unbuffered output; otherwise a newline may be written, and that wouldn't be good in a command prompt.

Problem with reading in parameters with special characters in Python

I have a script (a.py) that reads in 2 parameters like this:
#!/usr/bin/env python
import sys
username = sys.argv[1]
password = sys.argv[2]
Problem is, when I call the script with some special characters:-
a.py "Lionel" "my*password"
It gives me this error:-
/swdev/tools/python/current/linux64/bin/python: No match.
Any workaround for this?
Update:
It has been suspected that this might be a shell issue rather than a script issue.
I thought the same too, until I tried it out on a Perl script (a.pl), which works perfectly without any issue:
#!/usr/bin/env perl
$username = $ARGV[1];
$password = $ARGV[2];
print "$username $password\n";
%a.pl "lionel" "asd*123"
==> lionel asd*123
No problem.
So I guess this looks to me more like a PYTHON issue.
Geezzz ........
The problem is in the commands you're actually using, which are not the same as the commands you've shown us. Evidence: in Perl, the first two command-line arguments are $ARGV[0] and $ARGV[1] (the command name is $0). The Perl script you showed us wouldn't produce the output you showed us.
"No match" is a shell error message.
Copy-and-paste (don't re-type) the exact contents of your Python script, the exact command line you used to invoke it, and the exact output you got.
Some more things to watch out for:
You're invoking the script as a.py, which implies either that you're copying it to some directory in your $PATH, or that . is in your $PATH. If the latter, that's a bad idea; consider what happens if you cd into a directory that contains a (possibly malicious) command called ls. Putting . at the end of your $PATH is safer than putting it at the beginning, but I still recommend leaving it out altogether and using ./command to invoke commands in the current directory. In any case, for purposes of this exercise, please use ./a.py rather than a.py, just so we can be sure you're not picking up another a.py from elsewhere in your $PATH.
This is a long shot, but check whether you have any files in your current directory with a * character in their names. some_command asd*123 (without quotation marks) will fail if there are no matching files, but not if there happens to be a file whose name is literally "asd*123".
Another thing to try: change your Python script as follows:
#!/usr/bin/env python
print "before import sys"
import sys
print "after import sys"
username = sys.argv[1]
password = sys.argv[2]
This will tell you whether the shell is invoking your script at all.
That error comes from your shell, not from Python. Do you have a shopt -s failglob set in your .bashrc or somewhere?
/swdev/tools/python/current/linux64/bin/python: No match.
I think the problem is that the Python environment is not set.
Does Python run at all on your machine?
