I am using Python's ctypes to call a C++ function from Python. Currently, I have the following C++ file:
five.cpp
extern "C" {
int get_five(){
return 5;
}
}
And the Python file:
five.py
import ctypes
from pathlib import Path
lib = ctypes.CDLL(Path(Path.cwd(),'five.dll').as_posix())
print(lib.get_five())
This works and prints the number 5 when I run it.
However, as soon as I include any headers in the C++ file, it breaks. So if I change the file to:
#include <iostream>
extern "C" {
int get_five(){
return 5;
}
}
It breaks, and I get the following error:
FileNotFoundError: Could not find module '...\five.dll' (or one of its dependencies). Try using the full path with constructor syntax.
I am compiling on Windows, with the following command:
g++ -shared five.cpp -o five.dll
I am probably missing something obvious, since I am very new to programming in C++. However, I can't seem to find what it is.
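For what it's worth, the likely cause: as soon as you include <iostream>, the DLL gains runtime dependencies on the compiler's own support DLLs (for a MinGW-w64 g++, typically libstdc++-6.dll, libgcc_s_seh-1.dll, and libwinpthread-1.dll, depending on the toolchain), and Windows cannot find them next to five.dll; hence the "(or one of its dependencies)" part of the error. One workaround, assuming a MinGW-style g++, is to link those runtimes statically:
g++ -shared -static-libstdc++ -static-libgcc five.cpp -o five.dll
Alternatively, copy those DLLs into the same folder as five.dll, or (on Python 3.8+) call os.add_dll_directory() with the compiler's bin directory before CDLL loads the library.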
(This question was asked here, but the answer was Linux-specific; I'm running on FreeBSD and NetBSD systems which (EDIT: ordinarily) do not have /proc.)
Python seems to dumb down argv[0], so you don't get what was passed in to the process, as a C program would. To be fair, sh, bash, and Perl are no better.

Is there any way I can work around this, so my Python programs can get that original value? I have administrative privileges on this FreeBSD system, and can do things like changing everyone's default PATH environment variable to point to some other directory before the one that contains python2 and python3, but I don't have control over creating /proc.

I have a script which illustrates the problem. First, the script's output:
the C child program gets it right: arbitrary-arg0 arbitrary-arg1
the python2 program dumbs it down: ['./something2.py', 'arbitrary-arg1']
the python3 program dumbs it down: ['./something3.py', 'arbitrary-arg1']
the sh script dumbs it down: ./shscript.sh arbitrary-arg1
the bash script dumbs it down: ./bashscript.sh arbitrary-arg1
the perl script drops arg0: ./something.pl arbitrary-arg1
... and now the script:
#!/bin/sh
set -e
rm -rf work
mkdir work
cd work
cat > childc.c << EOD; cc childc.c -o childc
#include <stdio.h>
int main(int argc,
char **argv
)
{
printf("the C child program gets it right: ");
printf("%s %s\n",argv[0],argv[1]);
}
EOD
cat > something2.py <<EOD; chmod 700 something2.py
#!/usr/bin/env python2
import sys
print "the python2 program dumbs it down:", sys.argv
EOD
cat > something3.py <<EOD; chmod 700 something3.py
#!/usr/bin/env python3
import sys
print("the python3 program dumbs it down:", sys.argv)
EOD
cat > shscript.sh <<EOD; chmod 700 shscript.sh
#!/bin/sh
echo "the sh script dumbs it down:" \$0 \$1
EOD
cat > bashscript.sh <<EOD; chmod 700 bashscript.sh
#!/usr/bin/env bash
echo "the bash script dumbs it down:" \$0 \$1
EOD
cat > something.pl <<EOD; chmod 700 something.pl
#!/usr/bin/env perl
print("the perl script drops arg0: \$0 \$ARGV[0]\n")
EOD
cat > launch.c << EOD; cc launch.c -o launch; ./launch
#include <sys/types.h>
#include <sys/wait.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
int main(int argc,
char **argv,
char **arge)
{
int child_status;
size_t program_index;
pid_t child_pid;
char *program_list[]={"./childc",
"./something2.py",
"./something3.py",
"./shscript.sh",
"./bashscript.sh",
"./something.pl",
NULL
};
char *some_args[]={"arbitrary-arg0","arbitrary-arg1",NULL};
for(program_index=0;
program_list[program_index];
program_index++
)
{
child_pid=fork();
if(child_pid<0)
{
perror("fork()");
exit(1);
}
if(child_pid==0)
{
execve(program_list[program_index],some_args,arge);
perror("execve");
exit(1);
}
wait(&child_status);
}
return 0;
}
EOD
What follows is a generally useful answer to what I meant to ask.
The answer that kabanus gave is excellent, given the way I phrased the problem, so of course he gets the up-arrow and the checkmark. The transparency is a beautiful plus, in my opinion.
But it turns out that I didn't specify the situation completely. Each python script starts with a shebang, and the shebang feature makes it more complicated to launch a python script with an artificial argv[0].
Also, transparency isn't my goal; backward compatibility is. I would like the normal situation to be that sys.argv works as shipped, right out of the box, without my modifications. Also, I would like any program which launches a python script with an artificial argv[0] not to have to worry about any additional argument manipulation.
Part of the problem is to overcome the "shebang changing argv" problem.
The answer is to write a wrapper in C for each script, and the launching program launches that program instead of the actual script. The actual script looks at the arguments to the parent process (the wrapper).
The cool thing is that this can work for script types other than python. You can download a proof of concept here which demonstrates the solution for python2, python3, sh, bash, and perl. You'll have to change each CRLF to LF, using dos2unix or fromdos. This is how the python3 script handles it:
import os
import subprocess

def get_arg0():
    # Ask ps(1) for the parent's (the wrapper's) full command line; the
    # first space-separated token is its argv[0]. Note this breaks if
    # that argv[0] itself contains a space.
    return subprocess.run("ps -p %s -o 'args='" % os.getppid(),
                          shell=True,
                          stdout=subprocess.PIPE,
                          stderr=subprocess.PIPE
                          ).stdout.decode(encoding='latin1').split(sep=" ")[0]
The solution does not rely on /proc, so it works on FreeBSD as well as Linux.
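The wrappers from the proof of concept aren't reproduced above, but a minimal sketch of the idea might look like the C program below (the script name ./something3.py is just an illustration). The launcher execs this wrapper with an arbitrary argv[0]; the wrapper stays alive as the script's parent, so the script's get_arg0() can read the wrapper's command line, whose first token is that arbitrary argv[0]:
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    (void)argc; /* unused; argv is forwarded as-is below */
    pid_t child = fork();
    if (child < 0) { perror("fork"); return 1; }
    if (child == 0) {
        /* Run the real script, forwarding argv[1..] untouched. */
        argv[0] = "./something3.py";
        execv("./something3.py", argv);
        perror("execv");
        _exit(1);
    }
    /* Remain alive as the script's parent so ps -p getppid() finds us. */
    int status;
    wait(&status);
    return WIFEXITED(status) ? WEXITSTATUS(status) : 1;
}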
What I think is the path of least resistance here is a bit hacky, but would probably work on any OS. Basically you double-wrap your Python calls. First (using Python 3 as an example), the python3 in your path is replaced by a small C program, which you know you can trust:
#include <stdlib.h>
#include <string.h>
int main(int argc, char **argv) {
// The "python3" below should be replaced by the full path to the real
// interpreter. In my tests I named this wrapper wrap_python, so there was
// no clash; but if you are installing it system-wide (calling the wrapper
// itself python3), you cannot leave it as a bare "python3".
const char *const program = "python3 wrap_python.py";
size_t size = strlen(program) + 1; // +1 for the terminating null
for(int count = 0; count < argc; ++count)
size += strlen(argv[count]) + 1; // + 1 for space
char *cmd = malloc(size);
if(!cmd) exit(-1);
cmd[0] = '\0';
strcat(cmd, program);
for(int count = 1; count < argc; ++count) {
strcat(cmd, " ");
strcat(cmd, argv[count]);
}
strcat(cmd, " ");
strcat(cmd, argv[0]);
return system(cmd);
}
You can make this faster, but hey, premature optimization?
Note we are calling a script called wrap_python.py (you would probably need a full path here). We want to pass the "true" argv, but we need to do some work in the Python context to make it transparent. The true argv[0] is passed as the last argument, and wrap_python.py is:
from sys import argv
argv[0] = argv.pop(-1)
print("Passing:", argv) # Delete me
exit(exec(open(argv[1]).read())) # Different in Python 2. Close the file handle if you're pedantic.
Our small wrapper replaces argv[0] with the one provided by our C wrapper, removing it from the end, and then manually executes the target script in the same context. Specifically, __name__ == "__main__" is true.
This would be run as
python3 my_python_script arg1 arg2 etc...
where python3 on your path now points to that C wrapper. Testing this on
import sys
print(__name__)
print("Got", sys.argv)
yields
__main__
Got ['./wrap_python', 'test.py', 'hello', 'world', 'this', '1', '2', 'sad']
Note I called my program wrap_python - you want to name it python3.
Use Python's ctypes module to get the "program name", which by default is set to argv[0]. See the Python source code here. For example:
import ctypes
GetProgramName = ctypes.pythonapi.Py_GetProgramName
GetProgramName.restype = ctypes.c_wchar_p
def main():
    print(GetProgramName())

if __name__ == '__main__':
    main()
Running the script with an artificial argv[0] (via bash's exec -a) prints:
$ exec -a hello python3 name.py
hello
I want to call a Python script from C, passing some arguments that are needed in the script.
The script I want to use is mrsync, or multicast remote sync. I got this working from command line, by calling:
python mrsync.py -m /tmp/targets.list -s /tmp/sourcedata -t /tmp/targetdata
-m is the list containing the target ip-addresses.
-s is the directory that contains the files to be synced.
-t is the directory on the target machines where the files will be put.
So far I managed to run a Python script without parameters, by using the following C program:
Py_Initialize();
FILE* file = fopen("/tmp/myfile.py", "r");
PyRun_SimpleFile(file, "/tmp/myfile.py");
Py_Finalize();
This works fine. However, I can't find out how to pass these arguments to the PyRun_SimpleFile(..) method.
Seems like you're looking for an answer using the Python development API from Python.h. Here's an example for you that should work:
# My python script called mypy.py
import sys
# sys.argv[0] is the script name, so two extra args mean len(sys.argv) == 3
if len(sys.argv) != 3:
    sys.exit("Not enough args")
ca_one = str(sys.argv[1])
ca_two = str(sys.argv[2])
print "My command line args are " + ca_one + " and " + ca_two
And then the C code to pass these args:
// My code file
#include <stdio.h>
#include <python2.7/Python.h>

int main(void)
{
    FILE* file;
    int argc;
    char *argv[3];

    argc = 3;
    argv[0] = "mypy.py";
    argv[1] = "-m";
    argv[2] = "/tmp/targets.list";

    Py_SetProgramName(argv[0]);
    Py_Initialize();
    PySys_SetArgv(argc, argv);
    file = fopen("mypy.py", "r");
    PyRun_SimpleFile(file, "mypy.py");
    Py_Finalize();
    return 0;
}
If you can pass the arguments into your C function this task becomes even easier:
int main(int argc, char *argv[])
{
    FILE* file;

    Py_SetProgramName(argv[0]);
    Py_Initialize();
    PySys_SetArgv(argc, argv);
    file = fopen("mypy.py", "r");
    PyRun_SimpleFile(file, "mypy.py");
    Py_Finalize();
    return 0;
}
You can just pass those straight through. Now, my solutions only used two command-line args for the sake of time, but you can use the same concept for all six that you need to pass... and of course there are cleaner ways to capture the args on the Python side too, but that's just the basic idea.
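To build either version, you need Python's headers and library on the compile line. Assuming the python2.7-config helper that ships with CPython is on your PATH, something like this should work:
gcc mycode.c -o mycode $(python2.7-config --cflags --ldflags)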
Hope it helps!
You have two options.
Call
system("python mrsync.py -m /tmp/targets.list -s /tmp/sourcedata -t /tmp/targetdata")
in your C code.
Actually use the API that mrsync (hopefully) defines. This is more flexible, but much more complicated. The first step would be to work out how you would perform the above operation as a Python function call. If mrsync has been written nicely, there will be a function mrsync.sync (say) that you call as
mrsync.sync("/tmp/targets.list", "/tmp/sourcedata", "/tmp/targetdata")
Once you've worked out how to do that, you can call the function directly from the C code using the Python API.
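For example, assuming mrsync.py is importable (i.e. on Python's module search path) and really does export such a sync function (the name is a guess, as above), the C side might look roughly like this:
#include <python2.7/Python.h>

int main(void)
{
    Py_Initialize();
    /* Import the module and call the (hypothetical) sync() function. */
    PyObject *module = PyImport_ImportModule("mrsync");
    if (!module) { PyErr_Print(); Py_Finalize(); return 1; }
    PyObject *result = PyObject_CallMethod(module, "sync", "(sss)",
                                           "/tmp/targets.list",
                                           "/tmp/sourcedata",
                                           "/tmp/targetdata");
    if (!result) PyErr_Print();
    Py_XDECREF(result);
    Py_DECREF(module);
    Py_Finalize();
    return result ? 0 : 1;
}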
The Python script imports a lot of libraries.
My C code so far:
#include <stdio.h>
#include <python2.7/Python.h>
int main(int argc, char *argv[])
{
    FILE* file;

    Py_SetProgramName(argv[0]);
    Py_Initialize();
    PySys_SetArgv(argc, argv);
    file = fopen("analyze.py", "r");
    PyRun_SimpleFile(file, "analyze.py");
    Py_Finalize();
    return 0;
}
Is there any other way to do this, so that even if the arguments change, or the number of Python scripts I call from the C program increases, the same code can be used with only small changes?
Can I use a system() call and use the result obtained from it?
One useful approach is to call a Python function from C, which is what you need here, instead of executing the whole script.
As described here.
You can do it like this to call the Python file from the C program:
char command[50] = "python full_path_name\\file_name.py";
system(command);
This piece of code worked for me...
I didn't use #include <python2.7/Python.h>
You can write the results from the Python file to a text file, and then use the results stored in that text file to do whatever you want...
You can also have a look at this post for further help:
Calling python script from C++ and using its output
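If what you actually want is the script's result back in C (as asked above), you can also skip the temporary text file and read the script's standard output over a pipe. A minimal sketch, assuming analyze.py prints its result to stdout (the arguments are placeholders):
#include <stdio.h>

int main(void)
{
    char line[256];
    /* Run the script and read whatever it prints, line by line. */
    FILE *pipe = popen("python analyze.py arg1 arg2", "r");
    if (!pipe) { perror("popen"); return 1; }
    while (fgets(line, sizeof line, pipe))
        printf("script said: %s", line);
    return pclose(pipe) == 0 ? 0 : 1;
}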
I am attempting to execute a Python script from a C++ program. The problem I am having is that I am unable to execute my Python script.
If I take out the lpParameter value by setting it equal to NULL everything works fine, my program launches the python terminal and then my program finishes when I exit the python terminal.
I have a feeling that it has to do with the lpParameters field separating arguments with spaces, so I attempted to wrap the entire Python script path in escaped quotation marks.
#include "windows.h"
#include "shellapi.h"
#include <iostream>
using namespace std;
int main()
{
cout<<"About to execute the shell command";
SHELLEXECUTEINFO shExecInfo = {0};  // zero-initialize so unset members are NULL
shExecInfo.cbSize = sizeof(SHELLEXECUTEINFO);
shExecInfo.fMask = 0;
shExecInfo.hwnd = NULL;
shExecInfo.lpVerb = "runas";
shExecInfo.lpFile = "C:\\Python25\\python.exe";
shExecInfo.lpParameters = "\"C:\\Documents and Settings\\John Williamson\\My Documents\\MyPrograms\\PythonScripts\\script.py\"";
shExecInfo.lpDirectory = NULL;
shExecInfo.nShow = SW_NORMAL;
shExecInfo.hInstApp = NULL;
ShellExecuteEx(&shExecInfo);
return 0;
}
What happens when I launch this code is my program runs, quickly pops up another terminal that is quickly gone and then my original terminal says the task is complete. In reality though the python script that I specified is never executed.
Not really an answer, but too long for a comment.
The problem with this kind of execution in a new window is that as soon as the program ends, the window is closed. And since a window did open, from the point of view of the launching program everything looks fine.
My advice here would be to use cmd /k, which forces the window to stay open after the program ends:
shExecInfo.lpFile = "cmd";
shExecInfo.lpParameters = "/k C:\\Python25\\python.exe \"C:\\Documents and Settings\\John Williamson\\My Documents\\MyPrograms\\PythonScripts\\script.py\"";
At least if there is an error anywhere, you will be given a chance to see it.
Turns out the issue was with permissions and setting this parameter:
shExecInfo.lpVerb = "runas";
Instead, I left it as
shExecInfo.lpVerb = NULL;
and also filled in the directory parameter and it is working now.
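For reference, the working configuration described above would look roughly like this; the lpDirectory value is an assumption, so point it at whatever directory script.py expects to run from:
#include "windows.h"
#include "shellapi.h"

int main()
{
    SHELLEXECUTEINFO shExecInfo = {0};  // zero-init: unset members stay NULL
    shExecInfo.cbSize = sizeof(SHELLEXECUTEINFO);
    shExecInfo.lpVerb = NULL;           // no elevation; "runas" was the problem
    shExecInfo.lpFile = "C:\\Python25\\python.exe";
    shExecInfo.lpParameters = "\"C:\\Documents and Settings\\John Williamson\\My Documents\\MyPrograms\\PythonScripts\\script.py\"";
    shExecInfo.lpDirectory = "C:\\Documents and Settings\\John Williamson\\My Documents\\MyPrograms\\PythonScripts";
    shExecInfo.nShow = SW_NORMAL;
    return ShellExecuteEx(&shExecInfo) ? 0 : 1;
}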