Import results of a C++ program into Python - python

I'm currently dealing with some Python-based Squish GUI tests. Some of these tests call another tool, written in C++ and built as an executable. I have full access to that tool and I'm able to modify it. The tests call it via the command line and currently evaluate the exit code, marking the test as passed or failed depending on its value.
I think there must be a better way to do this. One problem is that the exit code is limited to uint8 on Unix systems, and I would like to be able to share more than just an error code with my Python script.
My first idea was printing everything to a file in JSON or XML and reading that file back. But this somehow feels wrong to me. Does anybody have a better idea?

When I first read the question, I immediately thought piping the output would work. Check out this link to get a better idea:
Linux Questions Piping
If this doesn't work, I do think writing your output to a file and reading it with your Python script would get the job done.
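If you go the file route, the Python side stays small. A minimal sketch, assuming the C++ tool writes its results as JSON to a file (results.json is a hypothetical name):

```python
import json

def read_results(path="results.json"):
    # parse the structured results the external tool wrote to disk
    with open(path) as f:
        return json.load(f)
```

The C++ side can produce the file with any JSON library, and the test then asserts on individual fields instead of a single exit code.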

You can capture the output of the external process via Python and process it as you see fit.
Here is a very simple variant:
import os
import subprocess

def main():
    s = os_capture(["ls"])
    if "ERROR" in s:
        test.fail("Executing 'ls' failed.")

def os_capture(args, cwd=None):
    # run the external process and return everything it wrote to
    # stdout and stderr as one string
    if cwd is None:
        cwd = os.getcwd()
    stdout = subprocess.Popen(
        args=args,
        cwd=cwd,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT).communicate()[0]
    # on Python 3, communicate() returns bytes, so decode for the
    # substring check above
    return stdout.decode()
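Building on that capture, the tool itself could print a JSON object on stdout instead of squeezing everything into the exit code. A sketch, with the Python interpreter standing in for the C++ tool (the status/warnings fields are made up for the example):

```python
import json
import subprocess
import sys

def run_tool(args):
    # run the external tool and parse the single JSON document it is
    # assumed to print on stdout; check=True raises on non-zero exit
    out = subprocess.run(args, stdout=subprocess.PIPE, check=True).stdout
    return json.loads(out)

# demo: a stand-in "tool" that prints JSON
result = run_tool([sys.executable, "-c",
                   "import json; print(json.dumps({'status': 'ok', 'warnings': 2}))"])
```

This keeps the exit code for pass/fail while arbitrary structured data travels over stdout.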

Related

Can't seem to write error to output file if a script doesn't run correctly

I've written a function which reads and runs a Python script, then sends its output to a text file.
I'm trying to get it to write a simple string, or the error in question, to the text file if the script it ran is broken or doesn't work.
Below is the code in question:
file_dir = a_dir[0]
file_name = a_dir[1][:-3]
with open(f'{self.output_directory}\\output_{file_name}.txt', 'w') as f:
    try:
        subprocess.call(
            [sys.executable, file_dir], stdout=f)
    except:
        f.write("An error occurred with the script")
The first part of it works fine - it does run a functioning file and writes the output.
Do I need to be more specific with the error exception? Any help would be greatly appreciated!
If your code works fine and sys.executable is being run, then there will be no exception, so your f.write code won't run. An error in a program you run using subprocess does not propagate as an exception in the program you run it from; to detect it, you could look at the [returncode][1] from the subprocess.call function call.
Another option is that instead of running a new Python interpreter, you could load the module yourself and then run code from within it using try/except blocks. If you can't rely on any structure to the file, you could read the file as text and then run the code within it using eval or exec inside a try/except structure. That said, if you don't know anything about the file in advance, it is likely a massive security flaw for you to be executing it at all, let alone within the context of your running application.
[1]: https://docs.python.org/3/library/subprocess.html#subprocess.Popen.returncode
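To make the returncode check concrete, here is a sketch that uses a one-liner child process (`sys.exit(3)`) as a stand-in for the broken script:

```python
import subprocess
import sys

# subprocess.call/run do not raise when the child exits non-zero
# (unless check=True is passed), so inspect returncode explicitly
result = subprocess.run(
    [sys.executable, "-c", "import sys; sys.exit(3)"],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE)

if result.returncode != 0:
    message = f"Script failed with exit code {result.returncode}"
```

In the original code, that message (plus `result.stderr`, if you want the traceback) is what you would write to the output file instead of relying on an except clause.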

How to run a .py file from a .py file in an entirely different project

For the life of me I can't figure this one out.
I have 2 applications built in Python, so 2 projects in different folders. Is there a command to say, in the first application, run file2 from documents/project2/test2.py?
I tried something like os.system('') and exec(), but that only seems to work if it's in the same folder. How can I give a command a path like documents/project2, and then for example:
exec(documents/project2 python test2.py) ?
short version:
Is there a command that runs python test2.py while that test2 is in a completely different file/project?
Thanks for all feedback!
There's a number of approaches to take.
1 - Import the .py
If the path to the other Python script can be made relative to your project, you can simply import the .py. This will cause all the code at the 'root' level of the script to be executed and makes functions as well as type and variable definitions available to the script importing it.
Of course, this only works if you control how and where everything is installed. It's the most preferable solution, but only works in limited situations.
from ..other_package import myscript
2 - Evaluate the code
You can load the contents of the Python file like any other text file and execute them. This is considered more of a security risk but, given the interpreted nature of Python, not that much worse than an import under normal circumstances.
Here's how:
with open('/path/to/myscript.py', 'r') as f:
    exec(f.read())
Note that, if you need to pass values to code inside the script, or out of it, you probably want to use files in this case.
I'd consider this the least preferable solution, due to it being a bit inflexible and not very secure, but it's definitely very easy to set up.
3 - Call it like any other external program
From a Python script, you can call any other executable, that includes Python itself with another script.
Here's how:
from subprocess import run
run(['python', 'path/to/myscript.py'])
This is generally the preferable way to go about it. You can use the command line to interface with the script, and capture the output.
You can also pipe in text with stdin= or capture the output from the script with stdout=, using subprocess.Popen directly.
For example, take this script, called quote.py
import sys
text = sys.stdin.read()
print(f'In the words of the poet:\n"{text}"')
This takes any text from standard in and prints it, with some extra text, to standard out like any Python script. You could call it like this:
dir | python quote.py
To use it from another Python script:
from subprocess import Popen, PIPE
s_in = b'something to say\nright here\non three lines'
p = Popen(['python', 'quote.py'], stdin=PIPE, stdout=PIPE)
s_out, _ = p.communicate(s_in)
print('Here is what the script produced:\n\n', s_out.decode())
Try this:
exec(open("FilePath").read())
It should work if you got the file path correct.
Mac example:
exec(open("/Users/saudalfaris/Desktop/Test.py").read())
Windows example:
exec(open(r"C:\Projects\Python\Test.py").read())

Calling a python script with args from another python script

I am still a newbie to Python, so apologies in advance. There are related topics on this, but I didn't find the best solution. (Run a python script from another python script, passing in args)
Basically, I have a Python script (scriptB.py) that takes a config file as an argument and does some stuff. I need to call this script from another Python script (scriptA.py).
If I had no arguments to pass, I could have just done
import scriptB.py
However, things got a little complicated because we need to pass the config file (myconfig.yml) as an argument.
One of the suggestions was to use:
os.system('python scriptB.py myconfig.yml')
But it is often reported as not a recommended approach, and one that often does not work.
Another suggestion was to use:
import subprocess
subprocess.Popen("scriptB.py myconfig.yaml", shell=True)
I am not very sure if this is common practice.
Just want to point out that neither script has a main inside it.
Please advise on the best way to handle this.
Thanks,
this should work just fine
import subprocess
subprocess.Popen(['python', '/full_path/scriptB.py', 'myconfig.yaml'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
See https://docs.python.org/3/library/subprocess.html#replacing-os-popen-os-popen2-os-popen3
If you really need to run a separate process, using the multiprocessing library is probably best. I would make an actual function inside scriptB.py that does the work. In the below example I consider config_handler to be a function inside scriptB.py that actually takes the config file path argument.
1.) Create a function that will handle the calling of your external Python script; also import your script and the method inside it that takes arguments:
scriptA.py: importing config_handler from scriptB
import multiprocessing
from scriptB import config_handler

def other_process(*args, **kwargs):
    # forward positional and keyword arguments straight to Process
    p = multiprocessing.Process(*args, **kwargs)
    p.start()
2.) Then just call the process and feed your arguments to it:
scriptA.py: calling scriptB.py function, config_handler
other_process(name="config_process_name", target=config_handler, args=("myconfig.yml",))
Opinion:
From the information you have provided, I imagine you could manage to do this without separate processes. Just do things in sequence and make scriptB.py a library with a function you use in scriptA.py.
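If scriptB.py insists on reading its config path from sys.argv, a middle ground is to run it in-process with runpy rather than spawning a second interpreter. A sketch; the path and argument names are placeholders:

```python
import runpy
import sys

def run_script_with_args(path, *args):
    # temporarily swap sys.argv so the target script sees its own
    # arguments, then execute it as if it were the main module
    old_argv = sys.argv
    sys.argv = [path, *args]
    try:
        return runpy.run_path(path, run_name="__main__")
    finally:
        sys.argv = old_argv

# e.g. run_script_with_args("scriptB.py", "myconfig.yml")
```

run_path returns the script's globals, so anything scriptB.py defines at top level is available to the caller afterwards.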
It seems you got all your answers in the old thread, but if you really want to run it through the OS, not through Python, this is what I do:
from subprocess import run, PIPE, DEVNULL
your_command = './scriptB.py myconfig.yaml'
run(your_command.split(), stdout=PIPE, stderr=DEVNULL)
In case you need the output:
output = run(your_command.split(), stdout=PIPE, stderr=DEVNULL).stdout.decode('utf-8')
If scriptB has a shebang line telling the shell it's a Python script, it should run correctly.
The path can be either relative or absolute.
This is for Python 3.x.

Running a bash file with Python

I've got a bash file that I normally execute using Cygwin.
I need to run this file from my Python code.
I tried this:
for bashfile in files:
    p = Popen(bashfile, cwd=dname)  # dname is the current directory of the script
    stdout, stderr = p.communicate()  # both are None unless stdout=/stderr=PIPE are passed
I've also seen a similar question here, but when trying to run it that way it says that it can't find the directory of my bash file...
Any ideas? Thanks! :-)
Edit: bashfile has a full path.
Do you need its output directly in Python? If not, this may be a very fast and easy solution:
os.system("""here some code you use to execute in the terminal""")
You can also try this, though it does (and will, no matter what you try) matter where the directory is. As far as the output goes, this may be a little bit cleaner than the os method. Note that the commands module only exists on Python 2; it was removed in Python 3.
import commands
cmd="bash ./script.sh"
commands.getoutput(cmd)
If the case is that you need to change the directory:
cmd = "/path/to/your/script/script.sh"
The added benefit of using this method, versus say, os, is that you can assign the output to a variable...
fun_times = commands.getoutput("bash ./script.sh")
whereas...
not_fun_times = os.system("./script.sh")
only gives you the exit status; the script's output goes straight to the terminal and cannot be captured.
etc, etc.
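On Python 3, where the commands module no longer exists, subprocess.getoutput is the drop-in replacement and keeps the same assign-the-output style:

```python
import subprocess

# runs the command through the shell and returns its combined
# stdout/stderr as a string, just like commands.getoutput did
fun_times = subprocess.getoutput("echo fun times")
```

Like its predecessor, it is shell-based and convenient rather than robust; for anything security-sensitive, prefer subprocess.run with a list of arguments.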

when is 'commands' preferable to 'popen' subprocess?

I'm apprenticing into system administration without schooling, so sometimes I'm missing what is elementary information to many others.
I'm attempting to give my stdout line another argument before printing, but I'm not sure which process I should use, and I'm a bit fuzzy on the commands for subprocess if that's what I should be using.
My current code is:
f = open('filelist', 'r')
searchterm = f.readline()
f.close()
#takes line from a separate file and gives it definition so that it may be callable.
import commands
commands.getoutput('print man searchterm')
This is running, but not giving me an output to the shell. My more important question, though, is: am I using the right command to get my preferred process? Should I be using one of the subprocess commands instead? I tried playing around with popen, but I don't understand it fully enough to use it correctly.
Ie, I was running
subprocess.Popen('print man searchterm')
but I know without a doubt that's not how you're supposed to run it. Popen requires more arguments than I have given it, like the file location and where to run it (stdout or stderr). But I was having trouble making these commands work. Would it be something like:
subprocess.Popen(pipe=stdout 'man' 'searchterm')
#am unsure how to give the program my arguments here.
I've been researching everywhere, but it is such a widely used process I seem to be suffering from a surplus of information rather than not enough. Any help would be appreciated, I'm quite new.
Preemptive thanks for any help.
The canonical way to get data from a separate process is to use subprocess (commands is deprecated):
import subprocess
p = subprocess.Popen(['man', 'searchterm'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdoutdata, stderrdata = p.communicate()
Note that some convenience functions exist for splitting strings into lists of arguments. Most notable is shlex.split, which will take a string and split it into a list the same way a shell does. (If nothing in the string is quoted, str.split() works just as well.)
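For example, shlex.split keeps quoted arguments together where str.split() would break them apart:

```python
import shlex

# shlex.split honors shell-style quoting
args = shlex.split('man "search term"')   # ['man', 'search term']

# str.split only splits on whitespace
naive = 'man "search term"'.split()       # ['man', '"search', 'term"']
```

The shlex.split form is what you want when building the argument list for subprocess.Popen from a command string.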
commands is deprecated in Python 2.6 and later, and has been removed in Python 3. There's probably no situation where it's preferable in new code, even if you are stuck with Python 2.5 or earlier.
From the docs:
Deprecated since version 2.6: The commands module has been removed in
Python 3. Use the subprocess module instead.
To run man searchterm in a separate process and display the result in the terminal, you could do this:
import subprocess
proc = subprocess.Popen('man searchterm'.split())
proc.communicate()
