I'm trying to generate an encryption key for a file and then save it for use next time the script runs. I know that's not very secure, but it's just an interim solution for keeping a password out of a git repo.
subprocess.call('export KEY="password"', shell=True) returns 0 and does nothing.
Running export KEY="password" manually in my bash prompt works fine on Ubuntu.
subprocess.call('export KEY="password"', shell=True)
creates a shell, sets KEY inside that shell, and exits: it accomplishes nothing in the calling process.
Environment variables do not propagate to the parent process, only to child processes. When you set the variable at your bash prompt, it is visible to all of that shell's subprocesses, but not outside the shell itself (the same situation, one level up).
The only way to do it from Python is to set the password in a master Python script, using os.environ["KEY"] = "password" (which also calls os.putenv under the hood; a bare os.putenv call will not update os.environ), and have that master script call your sub-modules or subprocesses.
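To illustrate the master-script idea, here is a minimal sketch (assuming Python 3.7+ and a python3 executable on PATH): the parent Python process sets the variable, and a child process it launches sees it, while the shell that started the parent remains untouched.

import os
import subprocess

# Set the variable in the (parent) Python process...
os.environ["KEY"] = "password"

# ...and every child process launched afterwards inherits it.
result = subprocess.run(
    ["python3", "-c", "import os; print(os.environ['KEY'])"],
    capture_output=True, text=True,
)
print(result.stdout.strip())  # -> password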
Using Python:
import os
# SET:
os.environ["EnvVar"] = "1"
# GET:
print(os.environ["EnvVar"])
This isn't a thing you can do. Your subprocess call creates a subshell and sets the env var there, but doesn't affect the current process, let alone the calling shell.
Related
I work in tcsh and have an environment setup script with commands like setenv foo bar. I usually run this script with source set_env.sh and then, once the environment is set, run the compilation command make my_target...
I would like to automate the whole process by executing it from Python by calling subprocess.Popen or even os.system. The troubles are:
The shell opened from Python does not recognize the source command. My assumption is that a /bin/bash shell is opened. How do I force Python to open /bin/tcsh?
How do I preserve the changed environment settings for the second command (make my_target...)?
If you use the subprocess package in Python, it won't run a shell by default; it will just launch the process directly. You can add shell=True, but this won't do you much good, since environment variables are set for the current process only and are not sent back to the parent process.
Python starts a tcsh child process, and child processes can't affect parent processes.
The solution would be to make your script print out the environment variables, and then parse the output of that in Python.
So if your script prints something like:
XX=one
YY=two
Then in Python do something along the lines of:
for line in output.split('\n'):
    if line == '':
        continue
    k, v = line.split('=', 1)  # split only on the first '=', in case a value contains '='
    os.environ[k] = v
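Where that output comes from is the other half. One possible sketch (assuming the settings script is the set_env.sh from the question, tcsh lives at /bin/tcsh, and Python 3.7+ for text=True): run tcsh explicitly, source the script, and dump the resulting environment with env.

import subprocess

# Run tcsh (not the default /bin/sh), source the tcsh-syntax script,
# then print the resulting environment so Python can parse it.
output = subprocess.check_output(
    ["/bin/tcsh", "-c", "source set_env.sh && env"], text=True
)

After the parsing loop above, later subprocess calls (such as make) inherit the imported variables through os.environ.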
I am trying to overwrite an environment variable in Python. I can read the value, then write a new value and print the updated value. But if I then check the value on the command line, it is still the original value. Why is that?
First, I create the variable
export MYVAR=old_val
My test script myvar.py
#!/usr/bin/env python3
import os
print(os.environ['MYVAR'])
os.environ['MYVAR'] = "new_val"
print(os.environ['MYVAR'])
Outputs
$ ./myvar.py
old_val
new_val
$ echo $MYVAR
old_val
As you can see, the last line of the output still shows old_val.
Short version:
The Python script changes its own environment. However, this does not affect the environment of the parent process (the shell).
Long version:
This is a well-known but quite confusing problem.
What you have to know is that there is no single, global environment: each process has its own environment.
So in your example above the shell (where you type your code) has one environment.
When you call ./myvar.py, a copy of the current environment is created and passed to your python script.
Your code 'only' changes this copy of the environment.
As soon as the python script is finished this copy is destroyed and the shell will see its initial unmodified environment.
This is true for most operating systems (Windows, Linux, MS-DOS, ...)
In other words: no child process can change the environment of the process that called it.
In bash there is a trick where you source a script instead of running it as a separate process.
However if your python script starts another process (for example /bin/bash), then the child process would see the modified environment.
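For example, a quick sketch extending the myvar.py script above (the bash one-liner is only there to act as a child process):

#!/usr/bin/env python3
import os
import subprocess

os.environ['MYVAR'] = "new_val"
# The child process gets a copy of this modified environment.
subprocess.run(["bash", "-c", "echo child sees: $MYVAR"])  # prints: child sees: new_val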
You started a new process that changed its environment and exited. That's all really.
You shouldn't expect that to affect the process you started it from (your shell).
I'd like to save my credentials in a file (config_vars.env) which is gitignored.
Then I'd like to start my project and have those credentials set to environment variables because my program uses os.environ.get('DB_NAME') and the like.
So I'd like those environment variables to be set while my script runs, then deleted when my program stops running.
I could literally set them using python or bash, then unset them upon exit. But that's not ideal because if my program crashes the environment variables are left there.
Ideally, I'd be able to automatically set them in a virtual environment, only available to my process, and when the process stops running the env vars are gone.
Is there any way to do this in native python? I've looked into things like click or dotenv for python, but is there no other way?
Here's what I've got so far:
import os
import subprocess

def bash_command():
    # not good:
    # subprocess.Popen(cmd, shell=False, executable=".\git-bash.exe")
    # os.popen('setenv a b')
    subprocess.call("config_vars.sh", shell=False)
    print(os.environ.get('DB_NAME'))  # prints None because the env var dies with bash

import time
time.sleep(5)  # delays for 5 seconds
bash_command()
and config_vars.sh is:
export ENV_FILE=env/config_vars.env
echo $DB_NAME
That echo command shows that it worked, but then the bash process ends, taking that environment variable (and the rest of that shell's environment) with it, and the Python process continues without access to it.
So the question is: How can I set environment variables that die when my python process dies?
You have to capture the output of the script; the child cannot affect the parent's environment, so you need some form of interprocess communication.
value = subprocess.check_output(["bash", "config_vars.sh"], text=True).strip()  # text=True (Python 3.7+) gives a str instead of bytes
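Alternatively, since the credentials already live in env/config_vars.env, here is a minimal sketch that skips the shell entirely and loads the file in-process (assuming plain KEY=value lines; this is essentially what python-dotenv automates). The variables then exist only inside the Python process and its children, and disappear automatically when the process exits or crashes.

import os

def load_env_file(path="env/config_vars.env"):
    # Read simple KEY=value lines into this process's environment.
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, value = line.split("=", 1)
            os.environ[key.strip()] = value.strip()

load_env_file()
print(os.environ.get('DB_NAME'))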
I need to run a lot of bash commands from Python. For the moment I'm doing this with
subprocess.Popen(cmd, shell=True)
Is there any solution to run all these commands in the same shell? subprocess.Popen opens a new shell at every execution and I need to set up all the necessary variables at every call, in order for cmd command to work properly.
subprocess.Popen lets you supply a dictionary of environment variables, which will become the environment for the process being run. If the only reason you need shell=True is to set an environment variable, then I suggest you use an explicit environment dictionary instead; it's safer and not particularly difficult. Also, it's usually much easier to construct command invocations when you don't have to worry about quoting and shell metacharacters.
It may not even be necessary to construct the environment dictionary, if you don't mind having the environment variables set in the running process. (Most of the time, this won't be a problem, but sometimes it is. I don't know enough about your application to tell.)
If you can modify your own environment with the settings, just do that:
os.environ['theEnvVar'] = '/the/value'
Then you can just use a simple subprocess.check_output call (or similar) to run the command:
output = subprocess.check_output(["ls", "-lR", "/tmp"])
If for whatever reason you cannot change your own environment, you need to make a copy of the current environment, modify it as desired, and pass it to each subprocess invocation:
env = os.environ.copy()
env['theEnvVar'] = '/the/value'
output = subprocess.check_output(["ls", "-lR", "/tmp"], env=env)
If you don't want to have to specify env=env every time, just write a little wrapper class.
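A minimal sketch of such a wrapper (the EnvRunner name is made up for illustration):

import os
import subprocess

class EnvRunner:
    """Build the modified environment once and reuse it for every call."""
    def __init__(self, **extra_vars):
        self.env = os.environ.copy()
        self.env.update(extra_vars)

    def check_output(self, cmd):
        return subprocess.check_output(cmd, env=self.env)

runner = EnvRunner(theEnvVar='/the/value')
output = runner.check_output(["ls", "-lR", "/tmp"])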
Why not just create a shell script with all the commands you need to run, then just use a single subprocess.Popen() call to run it? If the contents of the commands you need to run depend on results calculated in your Python script, you can just create the shell script dynamically, then run it.
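A rough sketch of that approach (the commands themselves are just placeholders):

import os
import subprocess
import tempfile

# Build the script body, possibly from values computed in Python.
commands = "\n".join([
    "export theEnvVar=/the/value",
    'echo "theEnvVar is $theEnvVar"',
    "ls -lR /tmp",
])

with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as script:
    script.write(commands + "\n")
    path = script.name

try:
    # One shell runs everything, so variables set early are visible later.
    subprocess.call(["bash", path])
finally:
    os.remove(path)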
Use multiprocessing instead; it's more lightweight and efficient.
Unlike subprocess.Popen, it does not open a new shell at every execution.
You didn't say you need to run subprocess.Popen and you may well not need to; you just said that's what you're currently doing. More justification please.
See set env var in Python multiprocessing.Process for how to set your env vars once and for all in the parent process.
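A minimal sketch of that idea (MYVAR is an arbitrary example name): set the variable once in the parent before starting the workers, and every multiprocessing.Process inherits it.

import os
import multiprocessing

def worker():
    # Child processes inherit the parent's (already modified) environment.
    print(os.environ.get("MYVAR"))

if __name__ == "__main__":
    os.environ["MYVAR"] = "set once in the parent"
    p = multiprocessing.Process(target=worker)
    p.start()
    p.join()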
I have a python program that uses the ThreadPool for multithreading. The program is one step in a shell script. When I execute the shell script manually on the command line, the entire flow works as expected. However, when I execute the shell script as a cronjob, it appears that the flow goes to the next steps before the python multithreading steps are completely finished.
Inside the python program, I do call AsyncResult.get(timeout) to wait for all the results to come back before moving on.
Run your program via batch(1) (see the output of the command man batch) as well. If that works OK, but the cron version does not, then it is almost certainly a problem with your environment variable setup. To verify that, run printenv from your interactive shell to inspect your environment there. Then do the same thing inside the crontab (you will just need to temporarily set up an extra cron entry for it). Try setting the variables in your shell script before invoking Python.
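One way to make that comparison is to dump the environment the Python step actually sees. A small sketch (the /tmp/cron_env_dump.txt path is arbitrary; Python 3.6+ for f-strings) that the cron'd shell script can run as an extra step, so you can diff the file against your interactive printenv output:

#!/usr/bin/env python3
import os

# Write the environment this process sees to a file for later comparison.
with open("/tmp/cron_env_dump.txt", "w") as fh:
    for key in sorted(os.environ):
        fh.write(f"{key}={os.environ[key]}\n")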
On the other hand, if it doesn't work via batch(1) either, it could be something to do with the files that your code has open. Try running your shell script with input redirected from /dev/null and output going to a file:
$ /usr/local/bin/myscript </dev/null >|/tmp/outfile.txt 2>&1
Try setting "TERM=xterm" (or whatever env variable you have, figure out by command 'env' on your terminal) in your crontab.