I had a very strange experience with environment variables.
In summary, I needed to set an environment variable VAR1, and I was 99% sure I had run the command export VAR1=some-value. After a few hours I forgot whether I had set it, so I ran echo $VAR1 to check. The output was exactly some-value, which seemed to confirm that I had set it correctly.
However, when I do this in python:
import os
print("VAR1" in os.environ)
The output is False.
I was very confused at this point. If I trust the Python output, it means that my way of checking an environment variable with echo was wrong. Is that the case?
Because I don't know what's wrong, I cannot provide a reproducible code sample. I'd really appreciate any explanation.
The only thing I can think of is that you were mistaken: you set the variable but did not export it. That would explain why you could echo the value, but it didn't make it into os.environ in Python. Here's a demonstration:
$ VAR1="vvv1"
$ export VAR2="vvv2"
$ echo $VAR1
vvv1
$ echo $VAR2
vvv2
$ python
Python 3.7.3 (default, Sep 16 2020, 12:18:14)
[Clang 10.0.1 (clang-1001.0.46.4)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> "VAR1" in os.environ
False
>>> "VAR2" in os.environ
True
So VAR1 was only set, but VAR2 was set and exported. Both can be echoed, but only the exported one shows up in Python.
To check at the command line whether a variable is set and exported, use export and grep:
$ export | grep VAR1
$ export | grep VAR2
declare -x VAR2="vvv2"
$
It's helpful to really understand what export does. Only an exported variable will be inherited by child processes launched by the current shell process. So when you launch Python from the command line, that launches a child process. Only exported variables are copied into the child process, and so only exported variables are seen by Python.
It is this child-process behavior that explains why you can't run a shell script to set environment variables in your current shell. The script runs in a child process, so it only affects that child's variables. Once the script finishes, the child process goes away, and the main shell's environment is untouched.
There is a way to run a script that sets environment variables, and the same mechanism also gives a script access to unexported variables: run it with '.' or 'source'. When you do . myscript.sh or source myscript.sh, the script runs in the current shell process instead of a subprocess, so it sees and affects the main shell environment.
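The difference is easy to see in a few lines of shell (using a throwaway /tmp/setvars.sh created just for this sketch):

```shell
# Create a tiny script that sets (but does not export) a variable.
echo 'MYVAR="hello"' > /tmp/setvars.sh

bash /tmp/setvars.sh            # runs in a child process; our MYVAR is untouched
echo "after bash:   '${MYVAR}'"

. /tmp/setvars.sh               # runs in the current shell; MYVAR is now set
echo "after source: '${MYVAR}'"
```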
Another small bit of trivia. I wasn't sure if there was any difference between . myscript.sh and source myscript.sh. Per this SO question, the only difference is portability. In Bash and other modern shells, there is no difference, but not all shells support both variants.
Related
In Linux, when I invoke python from the shell, it replicates its environment and starts the python process. Therefore if I do something like the following:
import os
os.environ["FOO"] = "A_Value"
When the python process returns, FOO, assuming it was undefined originally, will still be undefined. Is there a way for the python process (or any child process) to modify the environment of its parent process?
I know you typically solve this problem using something like
source script_name.sh
But this conflicts with other requirements I have.
No process can change its parent process's environment (or that of any other existing process).
You can, however, create a new interactive shell that uses the modified environment.
You have to spawn a new copy of the shell that uses the updated environment, has access to the existing stdin, stdout and stderr, and does its reinitialization dance.
You need to do something like use subprocess.Popen to run /bin/bash -i.
So the original shell runs Python, which runs a new shell. Yes, you have a lot of processes running. No, it's not too bad, because the original shell and Python aren't really doing anything except waiting for the subshell to finish so they can exit cleanly, too.
It's not possible, for any child process, to change the environment of the parent process. The best you can do is to output shell statements to stdout that you then source, or write them to a file that you source in the parent.
I would use bash's eval builtin and have the Python script output the shell code.
child.py:
#!/usr/bin/env python
print('FOO="A_Value"')
parent.sh:
#!/bin/bash
eval `./child.py`
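Assuming both files sit in the current directory, the round trip can be reproduced like this (using the modern $(...) form of the backtick substitution, and python3 explicitly):

```shell
# Recreate child.py from the answer above, then eval its output
# in the current shell.
printf '%s\n' '#!/usr/bin/env python3' "print('FOO=\"A_Value\"')" > child.py

eval "$(python3 child.py)"    # sets FOO in the current shell
echo "$FOO"
```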
I needed something similar; I ended up creating a script envtest.py with:
import sys, os
sys.stdout = open(os.devnull, 'w')
# Python code with any number of prints (to stdout).
print("This is some other logic, which shouldn't pollute stdout.")
sys.stdout = sys.__stdout__
print("SomeValue")
Then in bash:
export MYVAR=$(python3 envtest.py)
echo "MYVAR is $MYVAR"
which echoes the expected: MYVAR is SomeValue
Question: are there any options to hold a session open using subprocess or some other module?
It runs only one instruction or command at a time.
I can't find any direct solution. Why does it only work this way?
It's the same question for Windows, Linux and macOS.
Example 1.
I need to do some work in cmd with admin rights.
from subprocess import run, Popen, call
run('net user Administrator /active:yes', shell=True)
run('pip install [some module]', shell=True) # or "powershell -command [some command]"
Example 2.
I need to use virtual env module and get in needed environment.
run(["workon", "Universal"])
run("[some changes]")
There is a cmd module, but it looks isolated from the system cmd terminal, like a CLI you build yourself, and it has a different purpose.
Please don't answer about bash scripts, other ways to get administrator rights, or launching scripts from cmd itself like ">python main.py". This is about the sequential, habitual execution of commands from CMD: can Python hold the cmd session, or can't it?
I suppose you want to keep administrative privileges once gained.
I don't know about windows, don't use it.
But if I try something similar on Linux, I only have to type the admin password once:
$ python3
Python 3.8.2 (default, Jul 16 2020, 14:00:26)
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import subprocess
>>> subprocess.run("sudo ls", shell=True)
[sudo] password for <me>:
a.txt b.txt
CompletedProcess(args='sudo ls', returncode=0)
>>> subprocess.run("sudo ls", shell=True)
a.txt b.txt
CompletedProcess(args='sudo ls', returncode=0)
>>>
But your actual question is whether you can 'hold the session'.
That is, if I understand you correctly: execute some commands in a shell, keep the shell running after those commands have finished while your script does something else to generate extra commands, and then send these new commands to the same old shell.
Yes that is possible.
You can use subprocess.Popen() to spawn an interactive shell (i.e. the shell itself is the command to execute)
and write actual commands to its stdin and read results from its stdout.
If that sounds complicated: yes it is!
e.g. proper error handling means parsing the text returned by the shell, compared to simply checking the return code of a synchronous call. Even detecting whether your command has finished requires monitoring what appears on the shell's stdout/stderr.
You don't want to go down the asynchronous road.
Synchronous calls to external commands (such as subprocess.run()), without any shell in between, are so much simpler.
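For completeness, here is a minimal sketch of the asynchronous approach (assuming a POSIX /bin/bash; no error handling, and communicate() closes the shell after one batch, so this only demonstrates that state persists between commands sent to the same shell process):

```python
import subprocess

# One long-lived shell process; commands are written to its stdin.
shell = subprocess.Popen(
    ["/bin/bash"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)

# Both commands run in the same shell, so the variable exported
# by the first command is visible to the second.
out, _ = shell.communicate("export FOO=bar\necho $FOO\n")
print(out.strip())
```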
I tried one more time, and it seems it works! Holds the session!!
I used:
Popen("workon Universal", shell=True) # 1
Popen("pip install termcolor") # 2 without shell=True
And that finally installed a module into the "Universal" virtual environment. And I can endlessly continue with Popen(3), Popen(4).
Same works with run().
It's all about using the shell=True argument correctly.
I guess that is the solution I was looking for. And it's universal for all commands and sessions. It's hard to understand; there are no examples on Google or YouTube.
Thanks!
I am trying to overwrite environment variables in Python. I can read the value, then write a new value and print the updated value. But if I then check the value in the command line, it's still the original value. Why is that?
First, I create the variable
export MYVAR=old_val
My test script myvar.py
#!/usr/bin/env python3
import os
print (os.environ['MYVAR'])
os.environ['MYVAR'] = "new_val"
print (os.environ['MYVAR'])
Outputs
$ ./myvar.py
old_val
new_val
$ echo $MYVAR
old_val
As you can see, the last line of the output still shows old_val.
Short version:
The python script changes its own environment. However, this does not affect the environment of the parent process (the shell).
Long version:
Well, this is a well-known but quite confusing problem.
What you have to know is that there is no single, global environment: each process has its own.
So in your example above the shell (where you type your code) has one environment.
When you call ./myvar.py, a copy of the current environment is created and passed to your python script.
Your code 'only' changes this copy of the environment.
As soon as the python script is finished this copy is destroyed and the shell will see its initial unmodified environment.
This is true for most operating systems (Windows, Linux, MS-DOS, ...)
In other words: no child process can change the environment of the process that called it.
In bash there is a trick where you source a script instead of running it as a separate process.
However if your python script starts another process (for example /bin/bash), then the child process would see the modified environment.
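A quick sketch of that direction of inheritance (the variable name is just for illustration):

```python
import os
import subprocess

os.environ["MYVAR"] = "new_val"       # changes only this process's copy

# A child of the Python process inherits the modified copy.
result = subprocess.run(
    ["/bin/sh", "-c", "echo $MYVAR"],
    capture_output=True, text=True,
)
print(result.stdout.strip())
```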
You started a new process that changed its environment and exited. That's all really.
You shouldn't expect that to affect the process you started it from (your shell).
I have a data processing pipeline setup that I want to debug.
The pipeline consists of a bash script that calls a python script.
I usually use IPython's embed() function for debugging. However, when the python script is called from the bash script, embed() is entered but immediately exited, without my being able to interact. When I run the same python program directly from the command line, I don't observe this behavior. Is this intended behavior, or am I doing something wrong?
Python 2.7.6 (default, Oct 26 2016, 20:30:19)
Type "copyright", "credits" or "license" for more information.
IPython 2.4.1 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.
In [1]:
Do you really want to exit ([y]/n)?
'follow up code prints here'
I can replicate the problem like this:
# test.py
import IPython
import sys
print(sys.stdin.read())
IPython.embed()
# session
❯ echo 'foo' | python test.py
foo
Python 3.6.8 (default, Oct 7 2019, 12:59:55)
Type 'copyright', 'credits' or 'license' for more information
IPython 7.10.1 -- An enhanced Interactive Python. Type '?' for help.
In [1]: Do you really want to exit ([y]/n)?
❯ # I didn't quit on purpose, it happened automatically
STDIN is not a TTY, so I'm thinking that IPython is worried that the inbound text (via the pipe) won't be a user typing. It doesn't want foo (from my example above) to spew into the IPython shell and do something unexpected.
You can work around this by getting your terminal id via the tty command and redirecting stdin back to the calling terminal after it has finished reading from the pipe, something like this (/dev/pts/16 is whatever tty reports for your terminal):
import sys
import IPython

with open('/dev/pts/16') as user_tty:
    sys.stdin = user_tty
    IPython.embed()
For more on ttys, see this post. Note also that if you put the wrong tty in there, input from some other terminal will control IPython.
I'm not sure if it's possible for IPython to know what the calling tty would have been, had it not been overwritten by bash to be the output-side of the pipe.
Edit: Here's my workaround put more simply: How do I debug a script that uses stdin with ipython?
I ran some experiments to see the behaviour. I noticed that IPython shows the console if any of the ancestor processes is attached to a terminal.
Following are the files in /tmp directory:
x.py
import IPython
IPython.embed()
call.sh
/usr/bin/python /tmp/x.py
call2.sh
/tmp/call.sh
Experiment 1
Running python x.py does open the IPython shell and waits.
Experiment 2
Running bash call.sh also opens the IPython shell and waits.
Experiment 3
Running bash call2.sh also opens the IPython shell and waits.
As you can see, it does not matter how deep your IPython.embed call is. It always starts the interactive console and waits.
Let's see if it also works when we fork a new process.
fork.sh
/usr/bin/python /tmp/x.py &
Experiment 4
In this case, the IPython shell started but immediately exited. Notice the & at the end: it starts the process in the background. IPython was not able to access the terminal in this case and hence exited gracefully.
I have a Python script that is always called from a shell, which can be either zsh or bash.
How can I tell which one called the script?
In Linux you can use procfs:
>>> os.readlink('/proc/%d/exe' % os.getppid())
'/bin/bash'
os.getppid() returns the PID of the parent process. This is portable, but obtaining the process name can't be done in a portable way. You can parse ps output, which is available on all unices, or use e.g. psutil.
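A rough sketch of the ps-parsing approach (POSIX ps options; it prints the parent's command name, whatever process that happens to be):

```python
import os
import subprocess

ppid = os.getppid()                          # PID of the parent process
parent = subprocess.run(
    ["ps", "-p", str(ppid), "-o", "comm="],  # POSIX: print command name only
    capture_output=True, text=True,
).stdout.strip()
print(parent)
```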
You can't do this in a reliable automated way.
Environment variables can be misleading (a user can maliciously change them). Most automatic shell variables aren't "leaky", i.e. they are only visible in the shell process itself, not in child processes.
You could figure out your parent PID and then search the list of processes for that ID. This doesn't work reliably if you're run in the background (the PPID can end up as 1).
A user could start your program from within a script. Which is the correct shell in this case? The one in which the script was started or the script's shell?
Other programs can use system calls to run your script. In this case, you'd get either their shell or nothing.
If you have absolute control over the user's environment, then put a variable in their profile (check the BASH and ZSH manuals for a file which is always read at startup; IIRC, it's .profile for BASH).
[EDIT] Create an alias which is invoked for both shells. In the alias, use
env SHELL_HINT="x$BASH_VERSION" your_script.py
That should evaluate to "x" for zsh and to something longer for bash.
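On the Python side, reading that hint could look like this (SHELL_HINT is the hypothetical variable set by the alias above):

```python
import os

# SHELL_HINT is a hypothetical variable set by the alias: env SHELL_HINT="x$BASH_VERSION"
hint = os.environ.get("SHELL_HINT", "")
if hint == "x":
    print("zsh")                  # $BASH_VERSION expanded to nothing
elif hint.startswith("x"):
    print("bash")                 # $BASH_VERSION was non-empty
else:
    print("unknown")              # alias not installed
```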
os.system("echo $0")
This works flawlessly on my system:
cat shell.py:
#!/ms/dist/python/PROJ/core/2.5/bin/python
import os
print os.system("echo $0")
bash-2.05b$ uname -a
Linux pi929c1n10 2.4.21-32.0.1.EL.msdwhugemem #1 SMP Mon Dec 5 21:32:44 EST 2005 i686 athlon i386 GNU/Linux
pi929c1n10 /ms/user/h/hirscst 8$ ./shell.py
/bin/ksh
pi929c1n10 /ms/user/h/hirscst 9$ bash
bash-2.05b$ ./shell.py
/bin/ksh
bash-2.05b$