Python seems to process my attempt:
subprocess.call(['set', 'logfile=cat'], shell=True)
It returns no errors. However, when I try to use logfile as a variable, or do %logfile%, logfile doesn't seem to have been set to anything. How does one set batch variables from within a Python script?
What I am attempting to do: I have a batch script that sequentially runs several Python scripts, and I want to set a variable from within one of those Python scripts that persists for the rest of the batch script.
Your variable is set, but as soon as the call returns, that instance of the shell ends and the variable goes away.
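One common workaround is to have the Python script write the value to a file and let the batch script read it back after Python exits. A minimal sketch (the filename logfile.txt is an assumption):

# Python side: persist the value in a file instead of a throwaway shell.
# The batch script that ran this script can read it back afterwards with:
#   set /p logfile=<logfile.txt
with open("logfile.txt", "w") as f:
    f.write("cat")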
What are you trying to accomplish exactly? This sounds like an XY problem.
I'm working on a Python script that will start and stop some services based on whether or not a display is plugged in. For this I want to assign the output of tvservice -n to a variable and use that information to determine further actions. I'm using a Raspberry Pi Zero W, and I have run my script on both Python 2.7.13 and 3.5.3.
I have tried many different approaches from my research on this site; one example is "Assign output of os.system to a variable and prevent it from being displayed on the screen".
When I run the script, it first runs the command and then shows the variable's output (which is empty). The code below is one of the simplest tests I did; I also tried subprocess.run, subprocess.Popen, subprocess.check_output, etc.
import os

# os.popen captures the command's stdout only.
var = os.popen('tvservice -n').read()
print "result: ", var  # Python 2 print statement
Actual results from running my code:
[E] No device present
result:
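The output above suggests that tvservice writes "[E] No device present" to stderr, which os.popen does not capture (it reads stdout only). A sketch that captures both streams, assuming Python 3.5+ for subprocess.run:

import subprocess

# Merge stderr into stdout so messages like "[E] No device present"
# end up in the captured output instead of on the terminal.
result = subprocess.run(['tvservice', '-n'],
                        stdout=subprocess.PIPE,
                        stderr=subprocess.STDOUT)
print("result:", result.stdout.decode('utf-8', errors='replace'))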
I want to run a test via bash and abort it if it takes too much time. So far, I have found some good solutions here, but since the kill command does not work properly for me (even when I use it correctly, it reports that it is not being used correctly), I decided to solve this problem using Python. This is the execution call I want to monitor:
EXE="C:/program.exe"
FILE="file.tpt"
HOME_DIR="C:/Home"
"$EXE" -vm-Xmx4096M --run build "$HOME_DIR/test/$FILE" "Auslieferung (ML) Execute"
(The opened *.exe starts a test run which includes some Simulink simulation runs. Sometimes there are Simulink errors; in that case the tests take too long to execute, and I want to restart the entire process.)
First, I came up with the idea of calling a shell script containing these lines in a subprocess from Python:
import subprocess
import time

process = subprocess.Popen('subprocess.sh', shell=True)
time.sleep(10)
process.terminate()  # with shell=True this signals the shell, not the program it launched
But when I use this, *.terminate() or *.kill() does not close the program I started with the subprocess call.
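If you want to stay with the shell-script route, one workaround (a sketch using the third-party psutil package, an assumption not in the question) is to kill the shell's whole process tree rather than just the shell:

import subprocess
import time

import psutil  # third-party: pip install psutil

process = subprocess.Popen('subprocess.sh', shell=True)
time.sleep(10)

# terminate() would only signal the shell; kill its descendants too.
parent = psutil.Process(process.pid)
for child in parent.children(recursive=True):
    child.kill()
parent.kill()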
Since terminate() didn't work for me there, I am now trying to implement the entire call in Python. I have the following so far:
import subprocess
file = "somePath/file.tpt"
p = subprocess.Popen(["C:/program.exe", file])
Now I need to know how to pass the second argument, "Auslieferung (ML) Execute", from the bash call. This argument starts an internal test run named "Auslieferung (ML) Execute". Any ideas? Or is it better to choose one of the other ways? Or can I get the kill option for bash to work somewhere, somehow?
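If you go the pure-Python route, the extra string is simply one more element in the argument list, and Popen.wait(timeout=...) (Python 3.3+) gives you the kill-on-timeout behavior. A minimal sketch; the paths and the 600-second limit are assumptions:

import subprocess

cmd = ["C:/program.exe", "-vm-Xmx4096M", "--run", "build",
       "C:/Home/test/file.tpt", "Auslieferung (ML) Execute"]
p = subprocess.Popen(cmd)
try:
    p.wait(timeout=600)  # raises TimeoutExpired if the test run hangs
except subprocess.TimeoutExpired:
    p.kill()  # no shell wrapper in between, so this kills program.exe itself
    p.wait()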
How can I access a running Python script's variable, or call a function to set the variable? I want to access it from the command line or from another Python script; it doesn't matter which.
For example,
I have one script, run_motor.py, running with a variable called mustRun. When the user pushes the stop button, it should access the variable mustRun and change it to False.
If you want to interact with a running python script and modify some variables in it (I don't know why you want to do that, but... meh) you can have a look at Pyrasite.
Here is a demo of Pyrasite on asciinema
This is damn impressive.
By the way, just so you know, that's NOT the best practice for what you want to do. I assume this is for testing purposes, because using that kind of script in production or anything like that wouldn't be safe at all...
The easiest way of accomplishing this is to run a small TCP server in a thread and have it change the variable you want to change when it receives a command to do so. Then write a Python script that sends the stop command to that TCP server.
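A minimal sketch of that idea (port 5555 and the b"stop" command are assumptions): run_motor.py runs a tiny listener thread that flips mustRun.

import socket
import threading

mustRun = True

def control_server():
    """Listen on localhost and clear mustRun when 'stop' arrives."""
    global mustRun
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", 5555))
    srv.listen(1)
    while mustRun:
        conn, _ = srv.accept()
        if conn.recv(16).strip() == b"stop":
            mustRun = False
        conn.close()
    srv.close()

t = threading.Thread(target=control_server)
t.daemon = True  # don't keep the process alive for this thread
t.start()

# The motor loop then polls mustRun as usual:
# while mustRun: drive_motor()

The stop-button script only needs to connect and send the command:

import socket

with socket.create_connection(("127.0.0.1", 5555)) as s:
    s.sendall(b"stop")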
I've written a little Python (2.7.2+) module (called TWProcessing) that can be described as an improvised batch manager. The way it works is that I pass it a long list of commands, which it then runs in parallel while limiting the total number of simultaneous processes. That way, if I have 500 commands to run, it will loop through all of them but only run X at a time, so as not to overwhelm the machine. The value of X can easily be set when declaring an instance of this batch manager (the class is called TWBatchManager):
batch = TWProcessing.TWBatchManager(MaxJobs=X)
I then add a list of jobs to this object in a very straightforward manner:
batch.Queue.append(cmd)  # cmd is the command to run
Where Queue is a list of commands that the batch manager will run. When the queue has been filled, I then call Run(), which loops through all the commands, only running X at a time:
batch.Run()
So far, everything works fine. Now what I'd like to do is change the value of X (i.e. the maximum number of processes running at once) dynamically, i.e. while the processes are still running.

My old way of doing this was rather straightforward: I had a file called MAXJOBS that the class would know to look at and, if it existed, check regularly to see whether the desired value had changed. Now I'd like to try something a bit more elegant. I would like to be able to write something along the lines of export MAXJOBS=newX in the bash shell that launched the script containing the batch manager, and have the batch manager realize that this is now the value of X it should be using.

Obviously os.environ['MAXJOBS'] is not what I'm looking for, because this dictionary is loaded on startup. os.getenv('MAXJOBS') doesn't cut it either, because the export only affects child processes that the shell spawns from then on. So what I need is a way to get back to the environment of the parent process that launched my Python script. I know os.getppid() will give me the parent PID, but I have no idea how to get from there to the parent's environment. I've poked around the interwebz to see if there is a way for the parent shell to modify a child process's environment, and I've found that people tend to insist I not try anything like that, lest I be prepared to do some of the ugliest things one can possibly do with a computer.
Any ideas on how to pull this off? Granted, my "read from a standard text file" idea is not so ugly, but I'm new to Python and am therefore trying to challenge myself to do things in an elegant and clean manner to learn as much as I can. Thanks in advance for your help.
It looks to me like you are asking about inter-process communication between a bash script and a Python program.
I'm not completely sure about all your requirements, but it might be a candidate for a FIFO (named pipe):
1) Make the FIFO:
mkfifo batch_control
2) Start the Python server, which reads from the FIFO. (Note: the following is only a minimalistic example; you must adapt things.)
while True:
    # Opening the FIFO blocks until a writer opens the other end.
    fd = open("batch_control", "r")
    for cmd in fd:
        print("New command [%s]" % cmd[:-1])
    fd.close()
3) From the bash script you can then 'send' things to the Python server by echoing strings into the FIFO:
$ echo "newsize 800" >batch_control
$ echo "newjob /bin/ps" >batch_control
The output of the python server is:
New command [newsize 800]
New command [newjob /bin/ps]
Hope this helps.
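To connect this to the batch manager, a reader thread could parse the newsize commands and update the limit while Run() is looping. A sketch, assuming the limit lives in a MaxJobs attribute (the name is taken from the constructor shown in the question):

import threading

def fifo_reader(batch):
    # Re-open the FIFO each time the writer side closes it.
    while True:
        with open("batch_control", "r") as fd:
            for cmd in fd:
                parts = cmd.split()
                if len(parts) == 2 and parts[0] == "newsize":
                    batch.MaxJobs = int(parts[1])

t = threading.Thread(target=fifo_reader, args=(batch,))
t.daemon = True  # don't block interpreter exit
t.start()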
I am running my test harness, which is written in Python. Before running a test through this harness, I export some environment variables through a shell script, which then calls the harness. When the harness comes into the picture, it checks whether the variables are in the environment and performs operations depending on the values of those variables.
However, after the test is executed, the environment variable values don't seem to be cleared: the next time, the harness picks up those values even if they aren't set through the shell script.
If they are set explicitly, the harness picks up the new values, but if we clear them the next time, it again picks up the values set in the first run.
I tried clearing the variables using del os.environ['var'] after every test execution, but that didn't solve the issue. Does anybody know why these values are being preserved?
On the shell, these variables are not set, as seen with the Unix env command. It is just the test harness that shows the values. None of the env variables store their values in any text files.
A subshell can change variables it inherited from the parent, but the changes made by the child don't affect the parent.
When a new subshell is started, the variable exported from the parent is visible in it. del os.environ['var'] unsets the variable in the child, but the value of the variable in the parent stays the same.
The Python process cannot affect the environment of the parent shell process that launched it. If you have a parent shell, its environment will persist unless/until the shell itself changes it.
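A quick way to see this, assuming a python executable on the PATH:

import os
import subprocess

os.environ['MYVAR'] = 'parent-value'

# The child deletes MYVAR from its own environment only.
subprocess.call([
    'python', '-c',
    "import os; del os.environ['MYVAR']",
])

print(os.environ['MYVAR'])  # still 'parent-value'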
However, a bash script can set environment variables for the child script only, like this:
export OPTIONS=parent
OPTIONS=child python child.py
echo $OPTIONS
This will echo "parent", not "child", but the Python process will see OPTIONS=child. You don't describe your setup very clearly, but maybe this can help?
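For completeness, a minimal child.py to check what the child actually sees:

# child.py
import os

# Launched as "OPTIONS=child python child.py", this prints "child";
# the parent shell's OPTIONS stays "parent".
print(os.environ.get('OPTIONS'))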