Re-read environment of parent process in Python

I've written a little Python (2.7.2+) module (called TWProcessing) that can be described as an improvised batch manager. The way it works is that I pass it a long list of commands that it will then run in parallel, but limiting the total number of simultaneous processes. That way, if I have 500 commands I would like to run, it will loop through all of them, but only run X of them at a time so as not to overwhelm the machine. The value of X can easily be set when declaring an instance of this batch manager (the class is called TWBatchManager):
batch = TWProcessing.TWBatchManager(MaxJobs=X)
I then add a list of jobs to this object in a very straightforward manner:
batch.Queue.append(<CMD goes here>)
Where Queue is the list of commands the batch manager will run. When the queue has been filled, I then call Run(), which loops through all the commands, running only X at a time:
batch.Run()
So far, everything works fine. Now what I'd like to do is be able to change the value of X (i.e. the maximum number of processes running at once) dynamically, while the processes are still running.

My old way of doing this was rather straightforward: I had a file called MAXJOBS that the class would know to look at and, if it existed, check regularly to see if the desired value had changed. Now I'd like to try something a bit more elegant. I would like to be able to write something along the lines of export MAXJOBS=newX in the bash shell that launched the script containing the batch manager, and have the batch manager realize that this is now the value of X it should be using.

Obviously os.environ['MAXJOBS'] is not what I'm looking for, because that is a dictionary loaded at startup. os.getenv('MAXJOBS') doesn't cut it either, because the export will only affect child processes that the shell spawns from then on. So what I need is a way to get back to the environment of the parent process that launched my Python script. I know os.getppid() will give me the parent PID, but I have no idea how to get from there to the parent environment. I've poked around the interwebz to see if there is a way in which the parent shell could modify a running child process's environment, and I've found that people tend to insist I not try anything like that, lest I be prepared to do some of the ugliest things one can possibly do with a computer.
Any ideas on how to pull this off? Granted, my "read from a standard text file" idea is not so ugly, but I'm new to Python and am therefore trying to challenge myself to do things in an elegant and clean manner to learn as much as I can. Thanks in advance for your help.

It sounds like you are asking for inter-process communication between a bash script and a Python program.
I'm not completely sure about all your requirements, but it might be a candidate for a FIFO (named pipe):
1) make the fifo:
mkfifo batch_control
2) Start the Python server, which reads from the FIFO (note: the following is only a minimal example; you will have to adapt it):
while True:
    # Opening the FIFO blocks until a writer connects.
    fd = open("batch_control", "r")
    for cmd in fd:
        print("New command [%s]" % cmd.rstrip("\n"))
    fd.close()
3) From the bash script you can then 'send' things to the Python server by echoing strings into the fifo:
$ echo "newsize 800" >batch_control
$ echo "newjob /bin/ps" >batch_control
The output of the python server is:
New command [newsize 800]
New command [newjob /bin/ps]
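To connect this back to the question: the batch manager's Run() loop could poll the same FIFO without blocking between scheduling jobs. Here is a minimal sketch along those lines, assuming the "newsize N" message format from the echo examples above (the non-blocking open and the poll_max_jobs helper are my additions, not part of the question's TWProcessing module):

import errno
import os

# Open the FIFO non-blocking so the scheduling loop never stalls waiting
# for a writer to connect.
fd = os.open("batch_control", os.O_RDONLY | os.O_NONBLOCK)

def poll_max_jobs(current):
    """Return the new MaxJobs value if a 'newsize N' message arrived."""
    try:
        data = os.read(fd, 1024)
    except OSError as e:
        if e.errno == errno.EAGAIN:
            return current  # a writer is connected but has sent nothing yet
        raise
    for line in data.splitlines():
        if line.startswith("newsize "):
            current = int(line.split()[1])
    return current

Run() would then call poll_max_jobs(X) once per loop iteration instead of re-reading a MAXJOBS file.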
Hope this helps.

Related

Is there any way to know the command-line options available for a separate program from Python?

I am relatively new to Python's subprocess and os modules. I have been able to run processes like the bc and cat commands from Python, feeding data to stdin and reading the result from stdout.
Now I would first like to find out, from Python code, what flags a process like cat accepts (if that is possible).
Then I want to execute a particular command with some flags set.
I googled both things, and it seems I found multiple ways to do the second one. So if anyone knows how to do these things in some standard kind of way, it would be much appreciated.
In the context of processes, those flags are called arguments, hence the name argument vector (argv). Their interpretation is 100% up to the program being called. In other words, you have to read the manpages or other documentation for the programs you want to call.
There is one caveat though: If you don't invoke a program directly but via a shell, that shell is the actual process being started. It then also interprets wildcards. For example, if you run cat with the argument vector ['*'], it will output the content of the file named * if it exists or an error if it doesn't. If you run /bin/sh with ['-c', 'cat *'], the shell will first resolve * into all entries in the current directory and then pass these as separate arguments to cat.
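A quick way to see the difference from Python (a sketch using the cat and /bin/sh examples from the paragraph above):

import subprocess

# Direct invocation: '*' is passed to cat verbatim, so cat looks for a
# file literally named '*'.
subprocess.call(["cat", "*"])

# Via a shell: the shell expands '*' to the directory entries first, and
# cat receives each matching filename as a separate argument.
subprocess.call(["/bin/sh", "-c", "cat *"])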

Python & subprocess - Open terminal session as user and execute one/two commands

As much as I hate regurgitating questions, it's a necessary evil for getting to the next issue I'll present.
Using python3, tkinter and the subprocess package, my goal is to write a control panel to start and stop different terminal windows with a specific set of commands to run applications/sessions of the ROS application stack, including the core.
As such, the code would look like this per executable I wish to control:
class TestProc(object):
    def __init__(self):
        pass

    def start(self):
        self.process = subprocess.Popen(
            ["gnome-terminal", "-c", "'cd /path/to/executable/script.sh; ./script.sh'"])
        print("Process started.")

    def stop(self):
        self.process.terminate()
        print("Process terminated.")
Currently, it is possible to start a terminal window and the assigned commands/processes, yet two issues persist:

1. gnome-terminal is set to launch a terminal window, then relinquish control to the processes inside; as such, I have no further control once it has started. A possible solution for this is to use xterm, yet that poses a slew of other issues: I am required to have variables from the user's .bashrc and/or exports.
2. Certain "global commands", e.g. cd or roslaunch, are unavailable to the terminal sessions, perhaps due to the order of execution (e.g. the commands are run before the bash profile is loaded), preventing any usable terminal at all.
Thus, the question rings: How would I be able to start and stop a new terminal window that would run up to two commands/processes in the user environment?
There are a couple of approaches you can take; the most flexible here is also the most complicated, so you'll want to consider whether you really need it.
If you only need to show the output of the script, you can simply pipe the output to a file or to a named pipe. You can then capture that output by reading/tailing the file. This is the simplest option, as long as the script doesn't actually need any user interaction.
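For example, a rough sketch of that first approach (the script path and log filename are placeholders):

import subprocess

# Redirect the script's output to a log file; the control panel (or a
# 'tail -f') can then read it while the script runs in the background.
with open("script.log", "w") as log:
    proc = subprocess.Popen(["/path/to/executable/script.sh"],
                            stdout=log, stderr=subprocess.STDOUT)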
If you really only need to spawn a script that runs in the background, and you need to simulate user interaction but don't actually need to accept real user input, you can use the expect approach (via the pexpect library).
If you need to actually allow the real user to interact with the program, then you have two approaches. The first is to embed the VTE widget into your application; this is the tightest integration, as the terminal looks like part of your application, but it's also the heaviest.
The other approach is to start gnome-terminal as you've done here, which necessarily spawns a new window.
If you need to both script some interaction and allow some user input, you can do this by spawning your script in a tmux session. Use the tmux send-keys command to automate the non-interactive part, and then spawn a terminal emulator through which users interact via tmux attach. If you need to go back and forth between the automated part and the interactive part, you can combine this approach with expect.
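As a rough sketch of that tmux approach (the session name "rospanel" is arbitrary, and I'm assuming a gnome-terminal recent enough to accept -- before the command):

import subprocess

# Start a detached tmux session that the control panel owns.
subprocess.call(["tmux", "new-session", "-d", "-s", "rospanel"])

# Automate the scripted part by typing into the session.
subprocess.call(["tmux", "send-keys", "-t", "rospanel",
                 "cd /path/to/executable && ./script.sh", "Enter"])

# Give the user a window attached to the same session.
subprocess.Popen(["gnome-terminal", "--", "tmux", "attach", "-t", "rospanel"])

# Stopping is now deterministic, unlike terminating gnome-terminal itself:
# subprocess.call(["tmux", "kill-session", "-t", "rospanel"])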

Parallel Python for loop

I work primarily with the ArcGIS and PCI flavours of Python 2.7. I have a number of processes that I've created that run outside of these programs but use their libraries. They are run via .bat files through cmd.
Currently, they run the processes in a series of for loops, and each for loop processes sequentially. I was wondering if there was a way to run the processing within the for loop for each object in the list at the same time, that is, in parallel. The only way I can think of is opening a cmd window for each object in the list and running the processing separately.
Is what I am asking even possible? Where should I look for solutions?
Look into the subprocess module. You'd want a new command-line window created in the background where your .bat file runs in parallel, and in your case you don't want to wait for the command to complete before you continue your program, so use subprocess.Popen instead of subprocess.call, whose documentation reads:

subprocess.call
Run the command described by args. Wait for command to complete, then return the returncode attribute.
If you want to start an external program from your Python script, pass the program's filename to subprocess.Popen(). On Ubuntu Linux you would enter something like:
>>> import subprocess
>>> subprocess.Popen('/usr/bin/gnome-...')
<subprocess.Popen object at 0x7f2bcf93b20>
The return value is a Popen object, which has two useful methods: poll() and wait().
poll() is like asking your friend if he has finished running the code you gave him.
wait() is like waiting for your friend to finish working on his code before you keep working on yours.
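Putting that together for the for-loop case: a minimal sketch that launches one process per object and only waits at the end (process_one.bat and the item names are hypothetical placeholders for your per-object work):

import subprocess

items = ["tile_a", "tile_b", "tile_c"]

# Start all processes without waiting; each Popen returns immediately.
procs = [subprocess.Popen(["cmd", "/c", "process_one.bat", item])
         for item in items]

# Now wait for every "friend" to finish before moving on.
for p in procs:
    p.wait()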

Control an executed program with Python

I want to execute a test run via bash and restart it if the test takes too much time. So far, I found some good solutions here, but since the kill command does not work properly (when I use it correctly, it says it is not being used correctly), I decided to solve this problem using Python. This is the execution call I want to monitor:
EXE="C:/program.exe"
FILE="file.tpt"
HOME_DIR="C:/Home"
"$EXE" -vm-Xmx4096M --run build "$HOME_DIR/test/$FILE" "Auslieferung (ML) Execute"
(The opened *.exe starts a test run which includes some Simulink simulation runs; sometimes there are Simulink errors, in which case the tests take too long and I want to restart the entire process.)
First, I came up with the idea of calling a shell script containing these lines within a subprocess from Python:
import subprocess
import time
process = subprocess.Popen('subprocess.sh', shell = True)
time.sleep(10)
process.terminate()
But when I use this, .terminate() or .kill() does not close the program I started with the subprocess call.
That's why I am now trying to implement the entire call in Python. I have the following so far:
import subprocess
file = "somePath/file.tpt"
p = subprocess.Popen(["C:/program.exe", file])
Now I need to know how to implement the second part, "Auslieferung (ML) Execute", of the bash call. This starts an internal test run named "Auslieferung (ML) Execute". Any ideas? Or is it better to choose one of the other ways? Or can I get the "kill" option for bash working somewhere, somehow?
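One note first: with shell=True, .terminate() likely only signals the intermediate shell, not the program the shell spawned, which would explain why the *.exe survives. Staying with the pure-Python route, and assuming the quoted strings in the bash call are simply further command-line arguments (and Python 3.3+ for the timeout parameter of wait()), a sketch could look like this:

import subprocess

# Reconstructed from the bash call above; the argument layout is an
# assumption and may need adjusting for program.exe.
cmd = ["C:/program.exe", "-vm-Xmx4096M", "--run", "build",
       "C:/Home/test/file.tpt", "Auslieferung (ML) Execute"]

while True:
    p = subprocess.Popen(cmd)
    try:
        p.wait(timeout=600)   # give the test run 10 minutes
        break                 # finished in time, we're done
    except subprocess.TimeoutExpired:
        p.kill()              # took too long: kill and restart the run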

Can I control a powershell "session" from Python

I want to initialise a PowerShell subprocess and be able to send commands to it and then receive its output.
Slightly different from the usual subprocess.Popen(['powershell', '...']) use case because ideally I would like my powershell session to persist whilst my script is running.
This is because I might need to run 100+ commands (Get-ADUser, etc..) and the startup costs of loading the powershell console and loading the Active directory module is quite significant.
Roughly speaking I was imagining something like the following (though I'm aware this does not work):
class ActiveDirectory:
    def __init__(self):
        self.proc = subprocess.Popen(['powershell'],
                                     stdin=subprocess.PIPE,
                                     stdout=subprocess.PIPE)

    def run_command(self, command):
        output = self.proc.communicate(command)
        return output
I can't find an obvious way to do this using the subprocess module.
Does anybody have any idea how subprocess can be manipulated to achieve this? Or is there any other tool that provide a richer interface into sub processes? Possibly twisted.reactor?
Thanks,
O.
You can try combining these two posts; hope it helps:
Keep a subprocess alive and keep giving it commands? Python
Run PowerShell function from Python script
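For reference, here is the kind of sketch those posts combine into. communicate() can only be used once per process, so instead we keep the pipes open and mark the end of each command's output with a sentinel (the __DONE__ marker and the run_command helper are my own conventions, not a standard API):

import subprocess

proc = subprocess.Popen(
    ["powershell", "-NoLogo", "-NonInteractive"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE,
    universal_newlines=True)

def run_command(command):
    # Send the command, followed by a sentinel that marks end-of-output.
    proc.stdin.write(command + "; echo __DONE__\n")
    proc.stdin.flush()
    lines = []
    while True:
        line = proc.stdout.readline()
        if not line or line.strip() == "__DONE__":
            break
        lines.append(line)
    return "".join(lines)

print(run_command("Get-Date"))      # session stays alive between calls
print(run_command("Get-Location"))  # so the AD module loads only once
proc.stdin.close()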
