I've been digging into the world of Python and GUI applications and have made some considerable progress. However, I'd like some advice on how to proceed with the following:
I've created a GUI application using python (2.6.6 - cannot upgrade system due to it being legacy) and gtk that displays several buttons e.g. app1, app2, app3
When I click on a button, it then runs a bash shell script. This script will set up some required environment variables and then execute another external application (that uses these env variables)
Example:
1) user clicks on button app1
2) GUI then launches app1.sh to set up environment variables
3) GUI then runs external_app1
# external_app1 is an example application
# that requires some environment
# variables to be set before it can launch
Example app1.sh contents:
#!/bin/bash
export DIR=/some/location/
export LICENSE=/some/license/
export SOMEVAR='some value'
NOTE: Due to the way the environment is configured, it has to launch shell scripts first to set up the environment etc., and then launch the external applications. The shell scripts will be locked down so they cannot be edited by anyone once I've tested them.
So I've thought about how to have the python GUI execute this and so far, I am doing the following:
When the user clicks on app1, check if app1.sh is executable/readable; if not, return an error
Create another helper script, let's say helper1.sh, that will source app1.sh and then run the external_app1 command, and have Python execute that helper1.sh script via the below:
subprocess.Popen('helper1.sh', shell=True, stdout=out, stderr=subprocess.PIPE, close_fds=True)
Example helper1.sh contents:
#!/usr/bin/env bash
source app1.sh   # sets up env variables
if [ $? = 0 ]; then
    external_app1 &   # Runs the actual application in the background
else
    echo "Error executing app1.sh" >&2
fi
This is done so that the helper script executes in its own subshell and so that I can run multiple environment setup / external applications (app2, app3 etc).
So I ask:
Is there a better perhaps more pythonic way of doing this? Can someone point me in the right direction?
And when it comes to logging and error handling, how to effectively capture stderr or stdout from the helper scripts (e.g. helper1.sh) without blocking/freezing the GUI? Using threads or queues?
Thank you.
As I understand your question, you're trying to execute an external command with one of n sets of environment variables, where n is specified by the user. The fact that it's a GUI application doesn't seem relevant to the problem. Please correct me if I'm missing something.
You have several choices:
Execute a command in Python with custom environment variables
Rather than store the environment variables in separate files, you can set them directly with the env argument to Popen():
If env is not None, it must be a mapping that defines the environment variables for the new process; these are used instead of inheriting the current process’ environment, which is the default behavior.
So instead of having app1.sh, app2.sh, app3.sh, and so on, store your environment variable sets in Python, and add them to the environment you pass to Popen(), like so:
env_vars = {
    1: {
        'DIR': '/some/location/',
        'LICENSE': '/some/license/',
        'SOMEVAR': 'some value',
    },
    2: ...
}
...
environ_copy = dict(os.environ)
environ_copy.update(env_vars[n])
subprocess.Popen('external_application', shell=True, env=environ_copy, ...)
Modify the environment with a wrapper script
If your environment vars must live in separate, dedicated shell scripts, something like your helper is the best you can do.
We can clean it up a little, though:
#!/usr/bin/env bash
if source "$1"; then    # Set up env variables
    external_app        # Runs the actual application
else
    echo "Error executing $1" >&2
    exit 1              # Return a non-zero exit status
fi
This lets you pass app1.sh to the script, rather than create n separate helper files. It's not clear why you're using & to background the process - Popen starts a separate process which doesn't block the Python process from continuing. With subprocess.PIPE you can use Popen.communicate() to get back the process' stdout and stderr.
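To answer the second part of the question (capturing output without freezing the GUI), one common pattern is to let a background thread block on the pipe and hand lines back to the GUI thread through a queue. A minimal sketch, assuming PyGTK's gobject main loop and a hypothetical window.append_to_log() method for displaying the text:

import threading, Queue, subprocess, gobject

def launch_and_capture(cmd, log_queue):
    # Reader thread: blocks on the pipe, never on the GUI thread
    proc = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT, close_fds=True)
    for line in iter(proc.stdout.readline, ''):
        log_queue.put(line)
    proc.stdout.close()
    log_queue.put(None)   # sentinel: process finished

log_queue = Queue.Queue()
thread = threading.Thread(target=launch_and_capture,
                          args=('./helper1.sh', log_queue))
thread.daemon = True
thread.start()

def poll_queue():
    # Runs in the GUI thread via the gobject main loop
    try:
        while True:
            line = log_queue.get_nowait()
            if line is None:
                return False                 # stop polling, process is done
            window.append_to_log(line)       # hypothetical GUI update method
    except Queue.Empty:
        pass
    return True                              # keep polling

gobject.timeout_add(100, poll_queue)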
Avoid setting environment variables at all
If you have control of external_process (i.e. you wrote it, and can modify it), you'd be much better off changing it to use command line arguments, rather than environment variables. That way you could call:
subprocess.Popen(['external_command', '/some/location/', '/some/license/', 'some value'])
and avoid needing shell=True or a wrapper script entirely. If external_command expects a number of variables, it might be better to use --flags (e.g. --dir /some/location/) rather than positional arguments. Most programming languages have an argument-processing library (or several) to make this easy; Python provides argparse for this purpose.
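For illustration only, if external_command were itself a Python script, a flag-based interface might look like the sketch below (the flag names are hypothetical; argparse is in the standard library from Python 2.7 onward, or available as a backport for 2.6):

import argparse

parser = argparse.ArgumentParser(description='Example flag-based replacement for env variables')
parser.add_argument('--dir', required=True, help='working location')
parser.add_argument('--license', required=True, help='license path')
parser.add_argument('--somevar', default='some value', help='extra setting')
args = parser.parse_args()

print('dir=%s license=%s somevar=%s' % (args.dir, args.license, args.somevar))

The GUI would then launch it with something like subprocess.Popen(['external_command', '--dir', '/some/location/', '--license', '/some/license/']).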
Using command line arguments rather than environment variables will make external_process much more user friendly, especially for the use case you're describing. This is what I would suggest doing.
Related
As much as I hate regurgitating questions, it's a necessary evil to get to the issue I'll present next.
Using python3, tkinter and the subprocess package, my goal is to write a control panel to start and stop different terminal windows with a specific set of commands to run applications/sessions of the ROS application stack, including the core.
As such, the code would look like this per executable I wish to control:
class TestProc(object):
    def __init__(self):
        pass

    def start(self):
        self.process = subprocess.Popen(["gnome-terminal", "-c", "'cd /path/to/executable/script.sh; ./script.sh'"])
        print("Process started.")

    def stop(self):
        self.process.terminate()
        print("Process terminated.")
Currently, it is possible to start a terminal window and the assigned commands/processes, yet two issues persist:
gnome-terminal is set to launch a terminal window, then relinquish control to the processes inside; as such, I have no further control once it has started. A possible solution for this is to use xterm, yet that poses a slew of other issues. I am required to have variables from the user's .bashrc and/or export
Certain "global commands" eg. cd or roslaunch would be unavailable to the terminal sessions, perhaps due to the order of execution (eg. the commands are run before the bash profile is loaded) preventing any usable terminal at all
Thus, the question rings: How would I be able to start and stop a new terminal window that would run up to two commands/processes in the user environment?
There are a couple of approaches you can take; the most flexible here is also the most complicated, so you'll want to consider whether you actually need it.
If you only need to show the output of the script, you can simply pipe the output to a file or to a named pipe. You can then capture that output by reading/tailing the file. This is the simplest option, as long as the script doesn't actually need any user interaction.
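A minimal sketch of that first option, under the assumption that the tkinter window polls a log file with after() (the log path and widget names here are placeholders, not part of your code):

import subprocess
import tkinter as tk

root = tk.Tk()
output = tk.Text(root)
output.pack()

log_path = '/tmp/script_output.log'               # hypothetical location
log_file = open(log_path, 'w')
proc = subprocess.Popen(['/path/to/executable/script.sh'],
                        stdout=log_file, stderr=subprocess.STDOUT)

def make_tailer(path):
    state = {'pos': 0}
    def tail():
        # Read anything written since the last poll; never blocks the main loop
        with open(path) as f:
            f.seek(state['pos'])
            new_text = f.read()
            state['pos'] = f.tell()
        if new_text:
            output.insert('end', new_text)
        root.after(500, tail)                      # poll again in 500 ms
    return tail

root.after(500, make_tailer(log_path))
root.mainloop()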
If you really only need to spawn a script that runs in the background, and you need to simulate user interaction but don't actually need to accept real user input, you can use the expect approach (using the pexpect library).
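A minimal pexpect sketch, assuming (purely for illustration) that the script stops at a 'Continue? [y/n]' prompt:

import pexpect

child = pexpect.spawn('/path/to/executable/script.sh')
child.expect(r'Continue\? \[y/n\]')    # wait for the assumed prompt
child.sendline('y')                    # answer it automatically
child.expect(pexpect.EOF)              # let the script run to completion
print(child.before.decode())           # everything printed after the prompt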
If you need to actually allow the real user to interact with the program, then you have two approaches. The first is to embed the VTE widget into your application; this is the most seamless integration, as it makes the terminal look like part of your application, but it is also the heaviest.
The other approach is to start gnome-terminal as you've done here; this necessarily spawns a new window.
If you need to both script some interaction and also allow some user input, you can do this by spawning your script in a tmux session. Use the tmux send-keys command to automate the non-interactive part, and then spawn a terminal emulator for users to interact with via tmux attach. If you need to go back and forth between the automated part and the interactive part, you can combine this approach with expect.
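A rough sketch of the tmux route, driven from Python with subprocess (the session name and command are placeholders; newer gnome-terminal versions take the command to run after --, older ones use -e):

import subprocess

SESSION = 'ros_panel'   # hypothetical session name

# Start a detached tmux session and script the non-interactive part
subprocess.call(['tmux', 'new-session', '-d', '-s', SESSION])
subprocess.call(['tmux', 'send-keys', '-t', SESSION,
                 'cd /path/to/executable && ./script.sh', 'C-m'])

# Give the user a terminal window attached to that session for the interactive part
subprocess.Popen(['gnome-terminal', '--', 'tmux', 'attach', '-t', SESSION])

# Later, to stop everything cleanly:
# subprocess.call(['tmux', 'kill-session', '-t', SESSION])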
I'm working on a web application (using Django) that uses another piece of software to do some processing. This software needs its working directory to be set in an environment variable. When a client makes a request, the app creates the working directory (with the data to be used by the external software), then sets the environment variable used by the external software to the created directory, and finally calls the external software and gets the result.
Here's a summary of what the app is doing :
def request(data):
    path = create_working_directory(data)
    os.environ['WORKING_DIRECTORY'] = path
    result = call_the_external_software()
I haven't tested this yet (in reality it's not as simple as in this example). I'm thinking of executing this function in a new process. Will I have problems when multiple clients make simultaneous requests? If so, what should I do to fix them?
PS: I can't change anything in the external program.
See https://docs.python.org/2/library/subprocess.html#subprocess.Popen. Note that Popen takes an "env" argument that you can use to define environment variables for the child call.
def request(data):
    path = create_working_directory(data)
    env = {"WORKING_DIRECTORY": path}
    result = subprocess.call([ext_script] + ext_args, env=env)
    return result  # presumably
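One caveat: passing env= replaces the child's entire environment, so the external software will not see PATH and the rest of the parent environment unless you merge them in. Because each request builds its own dictionary instead of mutating os.environ, simultaneous requests will not interfere with each other. Something along these lines:

import os
import subprocess

def request(data):
    path = create_working_directory(data)
    env = dict(os.environ)                # copy the current environment
    env['WORKING_DIRECTORY'] = path       # add the per-request variable
    return subprocess.call([ext_script] + ext_args, env=env)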
I want to create a build pipeline, and developers need to set up a few things into a properties file which gets populated using a front end GUI.
I tried running a sample interactive CLI script using Python that just asked for a name and printed it out afterwards, but Jenkins just waited for ages and then hung. I can see that it asked for the input, but there was no way for the user to enter the data.
EDIT: Currently running Jenkins as a service. Or is there a good plugin anyone recommends, or is it the way I created the Python script?
Preference:
I would prefer to use Python because it is fairly lightweight, but if people have had success with other languages I can compromise.
Using a GUI menu to populate the data would be cool because I can use option boxes, drop-down menus and make it fancy, but it isn't a necessity; a CLI is still considerably better than our current deployment.
BTW, running all this on Windows 7 laptop running Python 2.7 and Java 1.7
Sorry for the essay! Hopefully people can help me!
Sorry, but Jenkins is not an interactive application. It is designed for automated execution.
The only viable way to get input to a Jenkins job (and everything that is executed from that job) is with the job parameters that are populated before the job is started. Granted, Jenkins GUI for parameter entry is not the greatest, but it does the job. Once the Jenkins job collected the job parameters at the start of the job, it can pass those parameters to anything it executes (Python, shell, whatever) at any time during the job. Two things have to be true for that to happen:
You need to collect all the input data before the job starts
Whatever your job calls (Python, shell, etc.) needs to be able to receive its input not interactively, but through the command line.
How to get input into program
A well designed script should be able to simply accept parameters on the command line:
./goodscript.sh MyName is the simplest way of doing it, where the value MyName will be stored in $1, the first parameter of the script. Subsequent command line parameters will be available in variables $2, $3 and so on.
./goodscript.sh -name MyName -age 30 is a better way of doing it, where the script can take multiple parameters regardless of their order by specifying a parameter name before each parameter value. You can read about using getopt for this method of parameter passing.
Both examples above assume that the goodscript.sh is written well enough to be able to process those command line parameters. If the script does not explicitly process command line parameters, doing the above will be useless.
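Since your preference is Python, the interactive "ask for a name" script could be made non-interactive along these lines (deploy.py is a hypothetical file name; argparse ships with Python 2.7). Your Jenkins "execute shell"/batch step would then call it with python deploy.py --name %NAME%:

import argparse

parser = argparse.ArgumentParser(description='Example non-interactive deployment helper')
parser.add_argument('--name', required=True,
                    help='value supplied by the Jenkins job parameter')
args = parser.parse_args()

print('Deploying for %s' % args.name)   # no raw_input(), so nothing waits for a user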
You can "pipe" some output to an interactive script that is not designed to handle command line parameters explicitly:
echo MyName | ./interactivescript.sh will pass value MyName to the first interactive prompt that interactivescript.sh provides to the user. Problem with this is that you can only pass a value to the first interactive prompt.
Jenkins job parameters GUI
Like I said above, you can use Jenkins GUI to gather all sorts of job parameters (dropdown lists, checkboxes, text entry). I assume you know how to setup Jenkins job with parameters. If not, in the job configuration click "This build is parameterized" checkbox. If you can't figure out how to set this up, that's a different question and will need to be explained separately.
However, once your Jenkins job collected all the parameters up front, you can reference them in your "execute shell" step. If you are using Windows, you will reference them as %PARAM_NAME%, and for Linux as $PARAM_NAME.
Explain what you need help with: getting your script to accept command line parameters, or passing those command line parameters from the Jenkins job GUI, and I will expand this answer further.
I've written a little Python (2.7.2+) module (called TWProcessing) that can be described as an improvised batch manager. The way it works is that I pass it a long list of commands that it will then run in parallel, but limiting the total number of simultaneous processes. That way, if I have 500 commands I would like to run, it will loop through all of them, but only running X of them at a time so as to not overwhelm the machine. The value of X can be easily set when declaring an instance of this batch manager (the class is called TWBatchManager) :
batch = TWProcessing.TWBatchManager(MaxJobs=X)
I then add a list of jobs to this object in a very straightforward manner :
batch.Queue.append(/CMD goes here/)
Where Queue is a list of commands that the batch manager will run. When the queue has been filled, I then call Run() which loops through all the commands, only running X at a time :
batch.Run()
So far, everything works fine. Now what I'd like to do is change the value of X (i.e. the maximum number of processes running at once) dynamically, i.e. while the processes are still running. My old way of doing this was rather straightforward: I had a file called MAXJOBS that the class knew to look at and, if it existed, it would check it regularly to see whether the desired value had changed.

Now I'd like to try something a bit more elegant. I would like to be able to write something along the lines of export MAXJOBS=newX in the bash shell that launched the script containing the batch manager, and have the batch manager realize that this is now the value of X it should be using. Obviously os.environ['MAXJOBS'] is not what I'm looking for, because this is a dictionary that is loaded on startup. os.getenv('MAXJOBS') doesn't cut it either, because the export will only affect child processes that the shell spawns from then on.

So what I need is a way to get back to the environment of the parent process that launched my Python script. I know os.getppid() will give me the parent pid, but I have no idea how to get from there to the parent environment. I've poked around the interwebz to see if there was a way in which the parent shell could modify the child process environment, and I've found that people tend to insist I not try anything like that, lest I be prepared to do some of the ugliest things one can possibly do with a computer.
Any ideas on how to pull this off? Granted my "read from a standard text file" idea is not so ugly, but I'm new to Python and am therefore trying to challenge myself to do things in an elegant and clean manner to learn as much as I can. Thanks in advance for your help.
It looks to me like you are asking for inter-process communication between a bash script and a Python program.
I'm not completely sure about all your requirements, but it might be a candidate for a FIFO (named pipe):
1) make the fifo:
mkfifo batch_control
2) Start the Python server, which reads from the fifo (note: the following is only a minimalistic example; you will need to adapt it):
while True:
    fd = open("batch_control", "r")
    for cmd in fd:
        print("New command [%s]" % cmd[:-1])
    fd.close()
3) From the bash script you can then 'send' things to the Python server by echoing strings into the fifo:
$ echo "newsize 800" >batch_control
$ echo "newjob /bin/ps" >batch_control
The output of the python server is:
New command [newsize 800]
New command [newjob /bin/ps]
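On the Python side, a background thread could keep reading that fifo and adjust the limit while Run() keeps going. This is only a sketch and assumes the batch manager exposes the limit as a MaxJobs attribute (the constructor argument suggests it does, but that is an assumption about TWProcessing's internals):

import threading

def watch_fifo(batch, fifo_path='batch_control'):
    while True:
        # Opening the fifo blocks until the bash side writes something
        with open(fifo_path, 'r') as fd:
            for line in fd:
                cmd = line.strip().split(None, 1)
                if len(cmd) == 2 and cmd[0] == 'newsize':
                    batch.MaxJobs = int(cmd[1])    # assumed attribute name
                elif len(cmd) == 2 and cmd[0] == 'newjob':
                    batch.Queue.append(cmd[1])

watcher = threading.Thread(target=watch_fifo, args=(batch,))
watcher.daemon = True
watcher.start()
batch.Run()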
Hope this helps.
I am running my Test Harness which is written in Python. Before running a test through this test harness, I am exporting some environment variables through a shell script which calls the test harness after exporting the variables. When the harness comes in picture, it checks if the variables are in the environment and does operations depending on the values in the env variables.
However, after the test is executed, I think the environment variable values aren't getting cleared, because the next time it picks up those values even if they aren't set through the shell script.
If they are set explicitly, the harness picks up the new values but if we clear it next time, it again picks up the values set in 1st run.
I tried clearing the variables using the "del os.environ['var']" command after every test execution, but that didn't solve the issue. Does anybody know why these values are getting preserved?
On the shell these variables are not set as seen in the 'env' unix command. It is just in the test harness that it shows the values. None of the env variables store their values in any text files.
A subshell can change variables it inherited from the parent, but the changes made by the child don't affect the parent.
When a new subshell is started, the variable exported from the parent is visible in it. The variable is unset by del os.environ['var'], but the value of this variable in the parent stays the same.
The python process cannot affect the environment of the parent shell process that launched it. If you have a parent shell, its environment will persist unless/until the shell itself changes it.
However, a bash script can set environment variables for the child script only, like this:
export OPTIONS=parent
OPTIONS=child python child.py
echo $OPTIONS
This will echo "parent", not "child", but the python process will see OPTIONS=child. You don't describe your set-up very clearly, but maybe this can help?
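For completeness, a child.py along these lines shows the difference:

import os

# Prints 'child' when launched as: OPTIONS=child python child.py
print(os.environ.get('OPTIONS', 'not set'))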