Starting a VirtualBox VM from a Python Script

I have this simple script... that does not work:
import subprocess
subprocess.call(["C:\Program Files\Oracle\VirtualBox\VBoxManage.exe", "VBoxManage startvm WIN7"])
I have the same thing in a .bat file, which works perfectly:
cd C:\Program Files\Oracle\VirtualBox
VBoxManage startvm "WIN7"
I have VBoxManage.exe in the PATH of Windows 8.1 (my host OS).
The Python script recognizes the VBoxManage executable, spits out its manual, and then this:
Syntax error: Invalid command 'VBoxManage startvm WIN7'
Could you give me a way to start a VM from inside a Python script, either by invoking the .exe directly or by running the .bat file?
Note: I have searched for the vboxshell.py file but have not found it anywhere.

subprocess.call() expects a list of arguments, like so:
subprocess.call([r'C:\Program Files\Oracle\VirtualBox\VBoxManage.exe',
                 'startvm',
                 'WIN7'])
Your code passes 'VBoxManage startvm WIN7' as a single argument to VBoxManage.exe, which expects to find only a command (e.g. 'startvm') there. The subsequent arguments ('WIN7' in this case) need to be passed separately.
In addition, there is no need to repeat the executable name when using subprocess.call(). The example from the Python docs invokes the UNIX command "ls -l" as follows:
subprocess.call(['ls', '-l'])
In other words, you don't need to repeat the 'VBoxManage' part.

The trick is to pass the command and its arguments as separate list items:
import subprocess
subprocess.call([r"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe", "startvm", "WIN7"])

Related

A shell script initiated by Python's os.system fails to run, but the script does run when called from the terminal

I have a Python 3 script that needs to call a shell script with some parameters. When I call this shell script directly from the terminal, it works. The shell script call from the terminal:
source $HW/scripts/gen.sh -top $TOP -proj opy_fem -clean
But when I try to call the shell script exactly the same way from Python 3 using os.system (or os.popen - same result), the shell script fails to run. Python call to the shell script:
os.system("source $HW/scripts/gen.sh -top $TOP -proj opy_fem -clean")
I get the following errors:
/project/users/alona/top_fabric_verif_env/logic/hw/scripts/gen.sh: line 18: syntax error near unexpected token `('
/project/users/alona/top_fabric_verif_env/logic/hw/scripts/gen.sh: line 18: `foreach i ( $* )'
Could you please shed light on why the same shell script fails to run from Python?
Thank you for any help
foreach is a C-shell command. csh (and derivatives like tcsh) are not standard system shells in Unix/Linux.
If you need to use a specific shell, for instance the C-shell:
os.system('/bin/csh -c "put the command here"')
This executes /bin/csh from the standard shell, but starting two shells instead of one adds overhead. A better solution is:
subprocess.run(['/bin/csh', '-c', 'put the command here'])
Note that using the shell's source ... command does not make much sense when the shell exits after the command.
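Applied to the command from the question, a sketch could look like this (it assumes $HW and $TOP are exported in the environment the Python script inherits, so csh can expand them):
import subprocess
# One csh process sources the script with its arguments; foreach is valid syntax there
subprocess.run(['/bin/csh', '-c',
                'source $HW/scripts/gen.sh -top $TOP -proj opy_fem -clean'])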

Bash script executed from Python with os.system returns 0 but does not execute/write

I have a bash script that I can run flawlessly in my Rpi terminal in its folder:
./veye_mipi_i2c.sh -r -f mirrormode -b 10
it works like this: Usage: ./veye_mipi_i2c.sh [-r/w] [-f] function name -p1 param1 -p2 param2 -b bus
options:
-r read
-w write
-f [function name] function name
-p1 [param1] param1 of each function
-p2 [param1] param2 of each function
-b [i2c bus num] i2c bus number
When I try to run it in Python (2) via my Spyder editor with os.system, I get a "0" return code, which I interpret as "successfully executed", but in fact the script has not been executed and the functions have not been performed. I know this because the script is supposed to change the camera settings, and by checking the images I take afterwards I can see that nothing has changed.
import os
status = os.system('/home/pi/VeyeMipi/Camera_Folder/veye_mipi_i2c.sh -w -f mirrormode -p1 0x04 -b 10')
print status
Any idea what causes this? The bash script uses two other scripts that lie in the same folder location (read and write). Could it be that it cannot execute these additional scripts when started through Python? It does not make sense to me, but so do a lot of things...
Many thanks
Ok, I understand that my question was not exemplary because of the lack of a minimal reproducible example, but as I did not understand what the problem was, I was not able to create one.
I have found out what the problem was. The script I am calling in bash requires two more scripts that are in the same folder, namely the "write" script and the "read" script. When executing in the terminal from that folder, there is no problem, because the folder is the working directory.
I tried to execute the script within the Spyder editor and added the file location to the PATH in the user interface, but it would still not be able to execute the "write" script in the folder.
Simply executing it from the terminal did the trick.
It would help if you fix your scripts so they don't depend on the current working directory (that's a very bad practice).
In the meantime, running
import subprocess
p = subprocess.run(['./veye_mipi_i2c.sh', '-r', '-f', 'mirrormode', '-b', '10'], cwd='/home/pi/VeyeMipi/Camera_Folder')
print(p.returncode)
which sets the working directory for the child process, would help.
Use subprocess and capture the output:
import subprocess
output = subprocess.run(stuff, capture_output=True)
Check output.stderr and output.stdout
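Putting the two answers together, a minimal sketch (assuming Python 3.7+ for capture_output, and the folder /home/pi/VeyeMipi/Camera_Folder from the question):
import subprocess
p = subprocess.run(
    ['./veye_mipi_i2c.sh', '-w', '-f', 'mirrormode', '-p1', '0x04', '-b', '10'],
    cwd='/home/pi/VeyeMipi/Camera_Folder',  # the script's own folder, so it finds its helper scripts
    capture_output=True, text=True)
print(p.returncode)  # 0 alone does not prove the script did what you expect
print(p.stdout)
print(p.stderr)      # errors from the read/write helpers would show up here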

Shell command not found in Python script

I source a dotcshrc file in my Python script with os.system('/bin/csh dotcshrc') and it works, but when I want to use a command that the source put into the environment, like os.system('ikvalidate mycase'), Linux complains: command not found.
But when I do it all by hand, everything goes well.
Where is the problem?
If you have a command in linux like ls and you want to use it in your python code do like this:
import os
ls = lambda : os.system('ls')
# This effectively turns that command into a python function.
ls() # skadoosh!
Output is :
FileManip.py Oscar
MySafety PROJECT DOCS
GooSpace Pg Admin
l1_2014 PlatformMavenRepo
l1_2015 R
l1_201617 R64
l2_2014 Resources
os.system runs each command in its own isolated environment. If you are sourcing something in an os.system call, subsequent calls will not see that because they are starting with a fresh shell environment. If you have dependencies like the above, you might be able to combine it into one call:
os.system('/bin/csh -c "source dotcshrc; ikvalidate mycase"')
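The same idea without os.system, as a sketch (it assumes dotcshrc sits in the current working directory, as in the question):
import subprocess
# A single csh process sources the file and then runs the command, so the environment carries over
subprocess.run(['/bin/csh', '-c', 'source dotcshrc; ikvalidate mycase'])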

Linux, Python open terminal run global python command

Not sure if this is possible. I have a set of python scripts and have modified the linux PATH in ~/.bashrc so that whenever I open a terminal, the python scripts are available to run as a command.
export PATH=$PATH:/home/user/pythonlib/
my_command.py resides in the above path.
I can run my_command.py (args) from anywhere in terminal and it will run the python scripts.
I'd like to control this functionality from a different python script as this will be the quickest solution to automating my processing routines. So I need it to open a terminal and run my_command.py (args) from within the python script I'm working on.
I have tried subprocess:
import subprocess
test = subprocess.Popen(["my_command.py"], stdout=subprocess.PIPE)
output = test.communicate()[0]
While my_command.py is typically available in any terminal I launch, here I have no access to it; it returns "file not found".
I can start a new terminal using os and then type in my_command.py, and it works:
os.system("x-terminal-emulator -e /bin/bash")
So, is there a way to get the second method to accept a script you want to run from python with args?
Ubuntu 16
Thanks :)
Popen does not load your interactive shell's PATH for the session you create in a Python script. You have to modify the PATH in that session to include the directory of your project, like so:
import os, shlex, subprocess
someterminalcommand = "my_command.py (args)"
my_env = os.environ.copy()
my_env["PATH"] = "/home/usr/mypythonlib/:" + my_env["PATH"]
combine = subprocess.Popen(shlex.split(someterminalcommand), env=my_env)
combine.wait()
This allows me to run my "my_command.py" file from a different Python session, just as if I had a terminal window open.
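An alternative sketch is to call the script by the full path given in the question, so PATH does not matter at all; "some_arg" is just a placeholder, and the script needs a shebang line plus execute permission for this to work:
import subprocess
test = subprocess.Popen(["/home/user/pythonlib/my_command.py", "some_arg"],
                        stdout=subprocess.PIPE)
output = test.communicate()[0]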
If you're using Gnome, the gnome-terminal command is rather useful in this situation.
As an example of very basic usage, the following code will spawn a terminal, and run a Python REPL in it:
import subprocess
subprocess.Popen(["gnome-terminal", "-e", "python"])
Now, if you want to run a specific script, you will need to concatenate its path with python, as the last element of that list is the line that will be executed in the new terminal.
For instance:
subprocess.Popen(["gnome-terminal", "-e", "python my_script.py"])
If your script is executable, you can omit python:
subprocess.Popen(["gnome-terminal", "-e", "my_script.py"])
If you want to pass parameters to your script, simply add them to the python command:
subprocess.Popen(["gnome-terminal", "-e", "python my_script.py var1 var2"])
Note that if you want to run your script with a particular version of Python, you should specify it, by explicitly calling "python2" or "python3".
A small example:
# my_script.py
import sys
print(sys.argv)
input()
# main.py
import subprocess
subprocess.Popen(["gnome-terminal", "-e", "python3 my_script.py hello world"])
Running python3 main.py will spawn a new terminal, print ['my_script.py', 'hello', 'world'], and wait for input.
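On newer gnome-terminal releases the -e option is deprecated; passing the command after -- should work the same way (a sketch using the same example script):
import subprocess
subprocess.Popen(["gnome-terminal", "--", "python3", "my_script.py", "hello", "world"])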

Starting module shell command from python subprocess module

I'm trying to run vnc server, but in order to do it first I need to run 'module load vnc'.
If I call which module in a loaded bash shell, the command is not found in the PATH, but at the same time it is available. It looks like the command is a shell built-in.
In other words, it looks like I need to execute two commands at once, module load vnc; vncserver :8080 -localhost, and I'm writing a script to start it from Python.
I have tried different variants with subprocess.Popen like
subprocess.Popen('module load vnc;vncserver :8080 -localhost', shell=True)
which returns exit code 127, i.e. command not found.
subprocess.Popen('module load vnc;vncserver :8080 -localhost', shell=False)
showing
File <path>/subprocess.py, line 621, in __init__
errread, errwrite)
OSError: [Errno 2] No such file or directory.
If I specify shell=True, it executes from /bin/sh but I need it from /bin/bash.
Specifying executable='/bin/bash' doesn't help: it loads a new bash shell, but my command ends up as a plain string rather than a started process, i.e. in the ps list I see exactly the same command I was trying to start.
Would you please advise how to do start this command from subprocess module? Is it possible to have it started with shell=False?
Environment Modules usually just modifies a couple of environment variables for you. It is often possible to skip the module load step altogether and just not depend on those modules. I recommend
subprocess.Popen(['/possibly/path/to/vncserver', ':8080', '-localhost'],
env={'WHATEVER': 'you', 'MAY': 'need'})
instead of loading the module at all.
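If you go that way, it is usually safer to start from a copy of the current environment and add only what the module would have set; the path below is a placeholder, not something the vnc module is known to use:
import os, subprocess
env = os.environ.copy()
env['PATH'] = '/opt/vnc/bin:' + env['PATH']  # hypothetical location of the vncserver binary
subprocess.Popen(['vncserver', ':8080', '-localhost'], env=env)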
If you do insist on using this basic method, then you want to start bash yourself with Popen(['bash',....
If you want to do it with shell=False, just split this into two Popen calls.
subprocess.check_call('module load vnc'.split())
subprocess.Popen('vncserver :8080 -localhost'.split())
You can call module from a Python script. The module command is provided by the environment-modules software, which also provides a python.py initialization script.
Evaluating this script in a Python script enables the module python function. If environment-modules is installed in /usr/share/Modules, you can find this script at /usr/share/Modules/init/python.py.
The following code enables the module function in Python:
import os, subprocess
exec(open('/usr/share/Modules/init/python.py').read())
Thereafter you can load your module and start your application:
module('load', 'vnc')
subprocess.Popen(['vncserver', ':8080', '-localhost'])
