I am trying to write a program in C++, but I can't finish it because at one point in the code I need to run a Python program from C++, and I don't know how to do it. I've been trying many ways of doing it, but none of them worked. The code should look something like this: somethingtoruntheprogram("pytestx.py"), or something close to that. I'd prefer to do it without Python.h. I just need to execute this program. I need to run it because I have redirected the Python program's output and input to text files with sys.stdout and sys.stdin, and then I need to take data from those text files and compare them. I am using Windows.
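For reference, the redirection described above looks something like this on the Python side (a minimal sketch; the file names are hypothetical):

# pytestx.py
import sys

sys.stdin = open("input.txt", "r")    # read input from a text file
sys.stdout = open("output.txt", "w")  # write output to a text file

data = sys.stdin.read()
print(data.upper())                   # placeholder for the real processing
sys.stdout.close()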
You have two ways of doing that:
Use system/fork and exec*/...
Embed a Python interpreter in your program (cf. the Python 2.6 docs or Boost.Python)
Using an embedded interpreter is (IMHO) the best way to do it: it gives you more control over the execution of the script, it's not OS-dependent, and it does not rely on your target having a Python interpreter (configured as you require).
There's POSIX popen and, on Windows, _popen, which is halfway between exec and system. It offers the required control over stdin and stdout, which system does not, but on the other hand it's not as complicated as the exec family of functions.
I am using a 3rd-party python module which is normally called through terminal commands. When called through terminal commands it has a verbose option which prints to terminal in real time.
I then have another python program which calls the 3rd-party program through subprocess. Unfortunately, when called through subprocess the terminal output no longer flushes, and is only returned on completion (the process takes many hours so I would like real-time progress).
I can see the source code of the 3rd-party module, and it does not set its printing to be flushed, e.g. with print('example', flush=True). Is there a way to force the flushing from my module without editing the 3rd-party source code? Furthermore, can I send this output to a log file (again in real time)?
Thanks for any help.
The issue is most likely that many programs work differently when run interactively in a terminal than as part of a pipeline (i.e. called using subprocess). It has very little to do with Python itself and more with the Unix/Linux architecture.
As you have noted, it is possible to force a program to flush stdout even when run in a pipeline, but it requires changes to the source code, e.g. passing flush=True to each print call or manually calling sys.stdout.flush().
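For example, each print in the third-party module would have to become something like:

print("example", flush=True)   # or: call sys.stdout.flush() after printing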
Another way to print to screen is to "trick" the program into thinking it is working with an interactive terminal, using a so-called pseudo-terminal. There is a supporting module for this in the Python standard library, namely pty. Using that, you will not explicitly call subprocess.run (or Popen or ...). Instead you have to use the pty.spawn call:
import os
import pty

def prout(fd):
    data = os.read(fd, 1024)
    while data:
        print(data.decode(), end="")
        data = os.read(fd, 1024)

pty.spawn("./callee.py", prout)
As can be seen, this requires a special function for handling stdout. Above, I just print it to the terminal, but of course it is possible to do other things with the text as well (such as logging or parsing it).
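That also covers the second part of the question: to get the output into a log file in real time, the handler can simply tee what it reads. A minimal sketch, assuming the same callee.py (the log file name is made up):

import os
import pty

logfile = open("callee.log", "w")  # hypothetical log file name

def prout(fd):
    data = os.read(fd, 1024)
    while data:
        text = data.decode()
        print(text, end="")   # real-time terminal output
        logfile.write(text)   # ...and the same text to the log
        logfile.flush()       # flush so the log is real-time too
        data = os.read(fd, 1024)

pty.spawn("./callee.py", prout)
logfile.close()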
Another way to trick the program is to use an external program called unbuffer. Unbuffer will take your script as input and make the program think (as with the pty call) that it is called from a terminal. This is arguably simpler, if unbuffer is installed or you are allowed to install it on your system (it is part of the expect package). All you have to do then is change your subprocess call to:
p = subprocess.Popen(["unbuffer", "./callee.py"], stdout=subprocess.PIPE)
and then of course handle the output as usual, e.g. with some code like
for line in p.stdout:
    print(line.decode(), end="")
print(p.communicate()[0].decode(), end="")
or similar. But this last part I think you have already covered, as you seem to be doing something with the output.
So this one is a doozie, and a little too specific to find an answer online.
I am writing to a file in C++ and reading that file in Python at the same time to move a robot. Or trying to.
When I try running both programs at the same time, the C++ one runs first and then the Python one runs.
Here's the command I use:
./ColorFollow & python fileToHex.py
This happens even if I switch the order of commands.
Even if I run them in different terminals (which is the same thing, just covering all bases).
Both the Python and C++ code read / write in 'infinite' loops, so these two should run until I say stop.
The code works fine; when the Python script finally runs the robot moves as intended. It's just that the code doesn't run at the same time.
Is there a way to make this happen, or is this impossible?
If you need more information, lemme know, but the code is pretty much what you'd expect it to be.
If you are using Linux, the & sends ColorFollow to the background, so ColorFollow and fileToHex.py will run as separate, concurrent processes.
At the same time, the composition ./ColorFollow | python fileToHex.py looks interesting, because it redirects the stdout of ColorFollow to the stdin of fileToHex.py. That can synchronize the scripts: ColorFollow prints some code string upon exit, and fileToHex.py reads it and exits as well.
I would create some empty file like /var/run/ColorFollow.flag and write 1 to it when one of the processes exits. Not a pipe, because we do not care which process starts first. So, if the next loop step of ColorFollow sees 1 in the file, it deletes the file and exits (meaning fileToHex.py has already exited). The same goes for fileToHex.py: check for the flag file on each loop step and, if it exists, delete it and exit.
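A minimal sketch of that check on the Python side, with the per-iteration work left as a placeholder for whatever fileToHex.py actually does:

import os

FLAG = "/var/run/ColorFollow.flag"

def work_step():
    pass  # placeholder for one iteration of fileToHex.py's real work

while not os.path.exists(FLAG):  # run until the other process signals exit
    work_step()
os.remove(FLAG)                  # consume the flag and stop

# conversely, if this script is the one that finishes first, it creates
# the flag itself:  open(FLAG, "w").write("1")

The mirrored check goes into ColorFollow's C++ loop.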
I have a build.sh script that my automated build server executes as part of a build. A big portion of the build logic is calculating and building a version number. All of this logic is in a Python script such as calculate-version.py.
Typically what I would do in this case is set up the Python script to ONLY print the version number, read its stdout from the bash script, and assign that to an environment variable. However, the Python script is becoming sufficiently complex that I'd like to start adding logging to it.
I need to be able to output logs (stdout) from the Python script (via print()) and, at the same time, when it is done, propagate a "return value" from the Python script back to the parent shell script.
What is the best way of doing this? I thought of doing this through environment variables, but my understanding is those won't be available to the parent process.
Short answer: you can't. The return value of a *nix-style executable is an unsigned integer from 0-255. That usually indicates if it failed or not, but you could co-opt it for your own uses.
In this case, I don't think a single unsigned byte is enough. Thus, you need to output it some other way. You have a few options:
The simplest (and probably the best in this case) is to keep writing your output data to stdout, and send your logs/debugging information somewhere else. That could be to a file, or (it's more or less what it's for) to stderr; see the sketch after this list.
Output your data to a file (such as one given in a command line parameter)
Arrange some kind of named pipe scheme. In practice, this is pretty much the same thing as sending it to a file.
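A minimal sketch of that first option, assuming a calculate-version.py along the lines described (the version string is a placeholder for the real calculation):

# calculate-version.py
import sys

def log(msg):
    print(msg, file=sys.stderr)  # logs go to stderr, never to stdout

log("calculating version number...")
version = "1.2.3"                # placeholder for the real logic
log("done")
print(version)                   # the version number is the only stdout output

In build.sh, something like VERSION=$(python calculate-version.py) then captures only stdout, while the log lines still show up on the build server's console (or can be redirected to a file with 2>build.log).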
Create an executable Python script, test.py, that prints a value, e.g. 99:
#!/usr/bin/python
print(99)
Run chmod a+x test.py to make it executable.
From bash, do a=$(./test.py); if you then print a (echo $a) you should get 99.
To get only the version number, you should print only the version number.
I have a caller.py which repeatedly calls routines from some_c_thing.so, which was created from some_c_thing.c. When I run it, it segfaults - is there a way for me to detect which line of c code is segfaulting?
This might work:
make sure the native library is compiled with debug symbols (-g switch for gcc).
Run python under gdb and let it crash:
gdb --args python caller.py
run # tell gdb to run the program
# script runs and crashes
bt # print backtrace, which should show the crashing line
If crash happens in the native library code, then this should reveal the line.
If the native library code just corrupts something or violates some postcondition, and the crash happens in the Python interpreter's code, then this will not be helpful. In that case your options are code review, adding debug prints (the first step would be to just log entry and exit of each C function, to detect the last C function called before the crash, then adding more fine-grained logging for variable values etc.), and finally using a debugger to see what happens, with the usual debugger techniques (breakpoints, stepping, watches...).
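The entry/exit logging can be done from the Python side without touching the C source, assuming caller.py loads the library with ctypes (the routine name do_stuff is hypothetical):

import ctypes
import logging

logging.basicConfig(level=logging.DEBUG)
lib = ctypes.CDLL("./some_c_thing.so")

def traced(func, name):
    # log entry and exit; the last "enter" without a matching "exit"
    # identifies the call that segfaulted (logging flushes per message,
    # so the lines survive the crash)
    def wrapper(*args):
        logging.debug("enter %s%r", name, args)
        result = func(*args)
        logging.debug("exit %s -> %r", name, result)
        return result
    return wrapper

do_stuff = traced(lib.do_stuff, "do_stuff")  # wrap each routine like this
do_stuff(42)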
Take Python and the .so file(s) out of the equation. See what params are being passed, if any, and call the routines from a debugger capable of stepping through C code and binaries.
Here is a link to an article describing a simple C debugging process, in case you're not familiar with debugging C (command line interface). Here is another link on using NetBeans to debug C. Also using Eclipse...
This could help: gdb: break in shared library loaded by python (might also turn out to be a dupe)
Segfault... check whether the number and the types of the arguments you pass to that C function (in the .so) are correct. If they do not match the C signature, a segfault is the usual result.
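If caller.py uses ctypes, you can declare the signature up front so a mismatch raises a Python error instead of crashing; a sketch with a hypothetical routine compute:

import ctypes

lib = ctypes.CDLL("./some_c_thing.so")

# declare the exact C signature, here assumed to be:
#   double compute(int, double, const char *)
# with argtypes set, ctypes checks and converts the arguments instead of
# pushing whatever it guesses onto the stack
lib.compute.argtypes = [ctypes.c_int, ctypes.c_double, ctypes.c_char_p]
lib.compute.restype = ctypes.c_double

print(lib.compute(3, 2.5, b"label"))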
Prior info: I'm on a Mac.
Q: How can I get terminal-like text output from the program execution, if I compile it with py2app for redistribution?
My case is a program that copies a lot of big files and takes a while to process, so I would like to at least have an output notification every time each file is copied.
This is easy if I run it on the command line; I can just print a new line.
But when I make a self-sufficient package, it simply shows up in the Dock, with no window, and closes upon completion.
A simple text window would be fine.
Thanks in advance.
If you want to create a simple text window, you need to pick a GUI framework to do that with. For something this simple, there's no reason not to use Tkinter (which comes with any Python) or PyObjC (which is pre-installed with Apple's Python 2.7), unless you happen to be more familiar with wx, gobject, Qt, etc.
At any rate, however you do it, you'll need to write a function that takes a message and appends it to the text window (maybe creating it lazily, if necessary), and call that function wherever you would normally print. You may also want to write and install a logging handler that does the same thing, so you can just log.info stuff. (You could instead create a file-like object that does this and redirect stdout and/or stderr, but unless you have no control over the printing code, that's going to be a lot more work.)
The only real problem here is that a GUI needs an event loop, and you probably just wrote your code as a sequential script.
One way around that is to turn your whole current script into a background thread. If you're using a GUI library that allows you to access the widgets from background threads, everything is easy; your printfunc just does textwidget.append(msg). If not, it may at least have a call_on_main_thread type function, so your printfunc does call_on_main_thread(textwidget.append, msg). If worst comes to worst (and I believe with Tkinter, it does), you have to create an explicit queue to push messages through, and write a queue handler in the event loop. This recipe should give you an idea. Replace the body of workerThread with your code, and end it with self.endApplication(). (There are probably better examples out there; this was just what I found first in a quick search.)
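A minimal, self-contained sketch of that queue pattern with Tkinter (the worker here just fakes the copying loop):

import queue
import threading
import time
import tkinter as tk

root = tk.Tk()
text = tk.Text(root)
text.pack()
msgs = queue.Queue()

def worker():
    # placeholder for the real file-copying loop
    for i in range(5):
        time.sleep(1)
        msgs.put("copied file %d\n" % i)

def poll():
    # runs on the main thread, so it is safe to touch the widget here
    while not msgs.empty():
        text.insert(tk.END, msgs.get())
    root.after(100, poll)  # check the queue again in 100 ms

threading.Thread(target=worker, daemon=True).start()
poll()
root.mainloop()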
The other way around that is to have your code cooperatively operate with the event loop. Some libraries, like wx, have functions like SafeYield that make things work if you just call it after every chunk of processing. Others don't have that, but have a way to explicitly drive the event loop from your code. Others have neither—but every event loop framework has to have a way to schedule new events, so you can break your code up into a sequence of functions that each finish quickly and then do something like root.after_idle(nextfunc).
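With Tkinter, that chunked-scheduling version looks roughly like this (the work items are placeholders):

import tkinter as tk

root = tk.Tk()
text = tk.Text(root)
text.pack()

files = ["a.bin", "b.bin", "c.bin"]  # placeholder work items

def copy_next(i=0):
    if i < len(files):
        # ... copy files[i] here ...
        text.insert(tk.END, "copied %s\n" % files[i])
        root.after_idle(copy_next, i + 1)  # schedule the next chunk

root.after_idle(copy_next)
root.mainloop()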
However… are you sure you need to do this?
First, any app, including one created by py2app, will send its stdout to the terminal if you run it with Foo.app/Contents/MacOS/Foo. And you can even set things up so that open Foo.app works that way, if you want. Obviously this doesn't help for people who just double-click the app in Finder (because then there is no terminal), but sometimes it's sufficient to just have the output available when people need it and know how to follow instructions.
And you can take this farther: Create a Foo.command file that just does something like $(dirname $0)/Foo.app/Contents/MacOS/Foo, and when you double-click that file, it launches Terminal.app and runs your script.
Or you can get even simpler: Just use logging to syslog the output, and if you want to see when each file is done, just watch the log messages go by in Console.app.
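That can be as small as this, using the stdlib syslog module (the message text is just an example):

import syslog

# each message appears in the system log (viewable in Console.app) as it is sent
syslog.syslog(syslog.LOG_INFO, "copied file 3 of 120")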
Finally, do you even need py2app in the first place? If you don't have any external dependencies, just rename your script to Foo.command, and double-clicking it will run it in Terminal.app. If you do have external dependencies, you might still be able to get away with bundling it all together as a folder with a .command in it instead of as a .app.
Obviously none of these ideas are exactly a professional or newbie-friendly way to build an interface, so if that matters, you will have to create a GUI.