Not sure if this is possible. I have a set of Python scripts and have modified the Linux PATH in ~/.bashrc so that whenever I open a terminal, the Python scripts are available to run as commands.
export PATH=$PATH:/home/user/pythonlib/
my_command.py resides in the above path.
I can run my_command.py (args) from anywhere in a terminal and it will run the Python script.
I'd like to control this functionality from a different Python script, as this will be the quickest way to automate my processing routines. So I need to open a terminal and run my_command.py (args) from within the Python script I'm working on.
I have tried subprocess:
import subprocess
test = subprocess.Popen(["my_command.py"], stdout=subprocess.PIPE)
output = test.communicate()[0]
While my_command.py is typically available in any terminal I launch, here I have no access to it; it returns "file not found".
I can start a new terminal using os.system, then type my_command.py manually, and it works:
os.system("x-terminal-emulator -e /bin/bash")
So, is there a way to get the second method to accept a script you want to run from Python, with args?
Ubuntu 16
Thanks :)
Popen does not source your ~/.bashrc, so the PATH entry you added there is not necessarily present in the session you create from a Python script. You have to modify the PATH in that session to include the directory containing your scripts, like so:
import os
import shlex
import subprocess

# Replace (args) with the real arguments for your script.
someterminalcommand = "my_command.py (args)"
my_env = os.environ.copy()
my_env["PATH"] = "/home/usr/mypythonlib/:" + my_env["PATH"]
combine = subprocess.Popen(shlex.split(someterminalcommand), env=my_env)
combine.wait()
This allows me to run my "my_command.py" file from a different Python session just as if I had a terminal window open.
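Alternatively (a small sketch of my own, not needed if the env approach above works for you), you can sidestep PATH entirely by calling the script through its absolute path, assuming it is executable and has a shebang line; "arg1"/"arg2" are placeholders:
import subprocess

# Call the script by its absolute path so no PATH lookup is needed.
proc = subprocess.Popen(
    ["/home/user/pythonlib/my_command.py", "arg1", "arg2"],
    stdout=subprocess.PIPE,
)
output = proc.communicate()[0]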
If you're using Gnome, the gnome-terminal command is rather useful in this situation.
As an example of very basic usage, the following code will spawn a terminal, and run a Python REPL in it:
import subprocess
subprocess.Popen(["gnome-terminal", "-e", "python"])
Now, if you want to run a specific script, you will need to concatenate its path with python, since the last element of that list is the command line that will be executed in the new terminal.
For instance:
subprocess.Popen(["gnome-terminal", "-e", "python my_script.py"])
If your script is executable, you can omit python:
subprocess.Popen(["gnome-terminal", "-e", "my_script.py"])
If you want to pass parameters to your script, simply add them to the python command:
subprocess.Popen(["gnome-terminal", "-e", "python my_script.py var1 var2"])
Note that if you want to run your script with a particular version of Python, you should specify it by explicitly calling "python2" or "python3".
A small example:
# my_script.py
import sys
print(sys.argv)
input()
# main.py
import subprocess
subprocess.Popen(["gnome-terminal", "-e", "python3 my_script.py hello world"])
Running python3 main.py will spawn a new terminal in which ['my_script.py', 'hello', 'world'] is printed, and then wait for input.
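Note that on newer gnome-terminal releases the -e option is deprecated; if you see a warning about that, an equivalent call (a sketch, assuming a version that supports the -- separator) passes the command as separate list elements after --:
import subprocess

# "--" replaces the deprecated "-e"; everything after it is the command to run.
subprocess.Popen(["gnome-terminal", "--", "python3", "my_script.py", "hello", "world"])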
Related
I need to set up the ROS2 Galactic environment by sourcing the following file through Python:
"source /opt/ros/galactic/setup.bash"
If I write the above line in a terminal it will be sourced, but I need to do this from a Python script.
I tried:
import subprocess
subprocess.call("source /opt/ros/galactic/setup.bash", shell=True)
and
import os
os.system('source /opt/ros/galactic/setup.bash')
But neither of them sources the environment. I am working on Ubuntu 20.04, Python 3.8.10.
This will not impact your Python runtime environment. subprocess starts a shell wherein it runs your script (setup.bash) and then terminates.
Consider this:
import subprocess
import os
subprocess.run('export FOO=1', shell=True)
print(os.environ['FOO'])
This tries to set an environment variable in the sub-shell. That actually works, but when run() returns, the shell no longer exists. Thus, when we try to access the environment variable, we get a KeyError.
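A common workaround (a sketch, assuming plain bash can source the file non-interactively and that env -0 from GNU coreutils is available) is to let bash source setup.bash, dump the resulting environment, and copy it back into the current Python process so later subprocesses inherit it:
import os
import subprocess

# Source the setup script in a bash subshell and print the resulting
# environment NUL-separated so values containing newlines survive.
completed = subprocess.run(
    ["bash", "-c", "source /opt/ros/galactic/setup.bash && env -0"],
    capture_output=True, text=True, check=True,
)

# Import the variables into this Python process.
for entry in completed.stdout.split("\0"):
    if "=" in entry:
        key, _, value = entry.partition("=")
        os.environ[key] = value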
I am using an Anaconda environment both for the Python code and for the terminal.
When I execute the program in the shell (Windows CMD) with the environment activated, ogr2ogr returns the correct output for the given parameters. The tool ogr2ogr has been installed via a conda package.
But when I execute my Python code, ogr2ogr produces erroneous output. I thought it might be due to different installations being used because of different environments (without my knowledge), but this is only a guess.
The python code goes as follows:
from pathlib import Path
from subprocess import check_call, STDOUT
...
file_path = Path(file_name)
destination = str(file_path.with_suffix(".gpkg"))
command = f"ogr2ogr -f GPKG -s_srs EPSG:25833 -t_srs EPSG:25833 {destination} GMLAS:{file_name} -oo REMOVE_UNUSED_LAYERS=YES"
check_call(command, stderr=STDOUT, shell=True)
ogr2ogr translates a file into another format. That does happen, but when I open the resulting file I can see it is not done 100 % correctly.
When I copy the value of the string command into the shell and execute it there, the execution is done correctly!
How can I correct the behaviour of subprocess.check_call?
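One way to test the "different installations" guess (just a diagnostic sketch, not a confirmed fix) is to print which ogr2ogr Python resolves and compare it with the output of "where ogr2ogr" in the activated CMD shell:
import os
import shutil

# The executable the subprocess will pick up, based on this process's PATH.
print(shutil.which("ogr2ogr"))
print(os.environ.get("PATH"))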
I have a bash script that I can run flawlessly in my Rpi terminal in its folder:
./veye_mipi_i2c.sh -r -f mirrormode -b 10
It works like this: Usage: ./veye_mipi_i2c.sh [-r/w] [-f] function name -p1 param1 -p2 param2 -b bus
options:
-r read
-w write
-f [function name] function name
-p1 [param1] param1 of each function
-p2 [param2] param2 of each function
-b [i2c bus num] i2c bus number
When I try to run it in Python (2) via my Spyder editor with os.system, I get a "0" return, which I interpret as "successfully executed", but in fact the script has not been executed and the functions have not been performed. I know this because the script is supposed to change the camera settings, and by checking the images I take afterwards, I can see that nothing has changed.
import os
status = os.system('/home/pi/VeyeMipi/Camera_Folder/veye_mipi_i2c.sh -w -f mirrormode -p1 0x04 -b 10')
print status
Any idea what causes this? The bash script uses two other scripts that lie in the same folder location (read and write). Could it be that it cannot execute these additional scripts when started through Python? It does not make sense to me, but so do a lot of things...
Many thanks
Ok, I understand that my question was not exemplary because of the lack of a minimal reproducible example, but as I did not understand what the problem was, I was not able to create one.
I have found out what the problem was. The script I am calling in bash requires two more scripts that are in the same folder, namely the "write" script and the "read" script. When executing in a terminal in that folder there is no problem, because the folder is the working directory.
I tried to execute the script within the Spyder editor and added the file location to the PATH in the user interface, but it still was not able to execute the "write" script in the folder.
Simply executing it in the terminal did the trick.
It would help if you fix your scripts so they don't depend on the current working directory (that's a very bad practice).
In the meantime, running
import subprocess
p = subprocess.run(['./veye_mipi_i2c.sh', '-r', '-f', 'mirrormode', '-b', '10'], cwd='/home/pi/VeyeMipi/Camera_Folder')
print(p.returncode)
which runs the script with the correct working directory, should help.
Use subprocess and capture the output:
import subprocess
output = subprocess.run(stuff, capture_output=True)
Check output.stderr and output.stdout
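Applied to the script from the question (a sketch; requires Python 3.7+ for capture_output, and the folder path is the one the asker gave), that might look like:
import subprocess

# Run the script from its own folder and capture both output streams as text.
output = subprocess.run(
    ["./veye_mipi_i2c.sh", "-w", "-f", "mirrormode", "-p1", "0x04", "-b", "10"],
    cwd="/home/pi/VeyeMipi/Camera_Folder",
    capture_output=True,
    text=True,
)
print(output.returncode)
print(output.stdout)
print(output.stderr)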
I am creating a bash script which calls a Python script that in turn runs other processes in bash using subprocess.run(). However, when the bash script runs the Python script within it, I get an error message on the line where subprocess.run is called:
run_metric = subprocess.run(command, shell=True, stdout = subprocess.PIPE, universal_newlines = True)
AttributeError: 'module' object has no attribute 'run'
1) I made sure I ran the script using Python 3 by activating a conda environment with python=3.6, which should mean calling subprocess.run is not a problem. The interesting thing is that if I change subprocess.run() to subprocess.Popen() the script works, but I could not work out how to get run_metric.stdout properly.
2) I do not have any subprocess.py file within any directory I am working in.
3) The result of print(subprocess.__file__) shows me that the Python being used is not 3.6: /usr/lib/python2.7/subprocess.pyc
Also, I tried to use something like
from subprocess import run
and I made sure that both the Python script and the function had import subprocess.
The bash script is as follows:
SWC_FOLDER_PATH=$(pwd)
sudo chmod +x /media/leandroscholz/KINGSTON/Results_article/Tracing_data/run_metrics.py
echo "run /media/leandroscholz/Tracing_data/run_metrics.py ${SWC_FOLDER_PATH} /media/leandroscholz/KINGSTON/Results_article/TREEStoolbox_tree_fixed.swc"
python /media/leandroscholz/Tracing_data/run_metrics.py ${SWC_FOLDER_PATH} /media/leandroscholz/TREEStoolbox_tree_fixed.swc
And the Python script I run calls a certain function that uses subprocess.run() this way (just the part of the code where the problem arises):
import subprocess
import glob
import numpy as np
def compute_metrics(swc_folder_path, gt_file_path):
    # first get list of files in swc_folder_path
    swc_files = glob.glob(swc_folder_path + "/*_fixed.swc")
    n_swc_files = len(swc_files)
    workflow_dict = gets_workflow_dict(swc_files)
    n_images = get_n_images(swc_files)
    n_workflows = len(workflow_dict)
    for swc in range(0, n_swc_files):
        command = "java -jar /home/leandroscholz/DiademMetric.jar -G " + swc_files[swc] + " -T " + gt_file_path
        run_metric = subprocess.run(command, shell=True, stdout=subprocess.PIPE, universal_newlines=True)
I am using subprocess.run within Python because, in the end, I want to get the string from run_metric.stdout after running the process in bash, so I can later store it in an array and save it to a txt file.
I hope I was sufficiently clear and provided enough information.
Thanks!
After the comments received, I tested the output of print(subprocess.__file__), which showed that the Python being used was Python 2.7.
Thus, I changed the call of the Python script from python script.py to python3 script.py. I've found this question, which also shows another way to call Python programs from the terminal:
Running Python File in Terminal
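A small defensive addition (my own suggestion, not part of the original fix) is to fail fast when the script is started with the wrong interpreter, instead of hitting the AttributeError deep inside the code:
import sys

# subprocess.run() only exists on Python 3.5+.
if sys.version_info < (3, 5):
    sys.exit("This script requires Python 3.5+; start it with python3.")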
The Python script I would use (source code here) would parse some arguments when called from the command line. However, I have no access to the Windows command prompt (cmd.exe) in my environment. Can I call the same script from within a Python console? I would rather not rewrite the script itself.
%run is a magic in IPython that runs a named file inside IPython as a program almost exactly like running that file from the shell. Quoting from %run? referring to %run file args:
This is similar to running at a system prompt python file args,
but with the advantage of giving you IPython's tracebacks, and of
loading all variables into your interactive namespace for further use
(unless -p is used, see below). (end quote)
The only downside is that the file to be run must be in the current working directory or somewhere along the PYTHONPATH. %run won't search $PATH.
%run takes several options which you can learn about from %run?. For instance: -p to run under the profiler.
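For example (assuming the script has been saved as importer.py in the current working directory; the filename is illustrative), from an IPython console:
%run importer.py arg1 arg2
# or, to run it under the profiler:
%run -p importer.py arg1 arg2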
If you can make system calls, you can use:
import os
os.system("importer.py arguments_go_here")
You want to spawn a new subprocess.
There's a module for that: subprocess
Examples:
Basic:
import sys
from subprocess import Popen
p = Popen([sys.executable, r"C:\test.py"])
Getting the subprocess's output:
import sys
from subprocess import Popen, PIPE
p = Popen([sys.executable, r"C:\test.py"], stdout=PIPE)
stdout = p.stdout
print(stdout.read())
See the subprocess API Documentation for more details.
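On Python 3.5+, subprocess.run gives a simpler way to get the same result (a sketch using the same hypothetical C:\test.py path):
import sys
import subprocess

# Run the script with the current interpreter and capture its output as text.
result = subprocess.run(
    [sys.executable, r"C:\test.py"],
    stdout=subprocess.PIPE,
    universal_newlines=True,
)
print(result.stdout)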