python wrong subprocess.call return code

Look at these two simple programs in C:
#include <stdio.h>
int main(int argc, char *argv[]) {
    return -1;
}

#include <stdio.h>
int main(int argc, char *argv[]) {
    return 1337;
}
Now look at this very basic Python script:
>>> import subprocess
>>> r=subprocess.call(['./a.out'])
I do not understand why, but the Python variable r contains:
255 for the first C program. It should be -1.
57 for the second C program. It should be 1337.
Am I doing something wrong?
Thanks

Python has nothing to do with it. This is system-dependent.
On Unix/Linux systems, the return code is stored in one byte and is unsigned, so values outside the 0-255 range are truncated.
So -1 becomes 255 and 1337 becomes 57 (which can be checked by applying modulus 256).
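A quick sketch of that check in Python, assuming the first program has been compiled to ./a.out in the current directory:
import subprocess
r = subprocess.call(['./a.out'])   # the program that returns -1
print(r)                           # 255 on Unix/Linux
print(-1 % 256)                    # 255
print(1337 % 256)                  # 57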
Note that on Windows, return codes can be higher than 255 (I was able to pass 100000000, but still not a negative value).
The conclusion is: don't rely too much on return codes to pass information. Print something on the console instead.
Related: https://unix.stackexchange.com/questions/37915/why-do-i-get-error-255-when-returning-1

Related

Calling argc/argv function with ctypes

I am creating a Python wrapper for some C code. The C code basically runs in a terminal and has the following main function prototype:
void main(int argc, char *argv[]) {
    f = fopen(argv[1], "r");
    f2 = fopen(argv[2], "r");
So basically the arguments are read as strings from the terminal. I created the following Python ctypes wrapper, but it appears I am using the wrong type. I know the arguments passed from the terminal are read as characters, but the equivalent Python-side wrapper gives the following error:
import ctypes
_test = ctypes.CDLL('test.so')
def ctypes_test(a, b):
    _test.main(ctypes.c_char(a), ctypes.c_char(b))
ctypes_test("323", "as21")
TypeError: one character string expected
I have tried passing just one character, to check whether the shared object gets executed. It does, as the print statements work, but only until the section of the code in the shared object needs the file name. I also tried ctypes.c_char_p, but get:
Process finished with exit code 139 (interrupted by signal 11: SIGSEGV)
Updated as per the suggestion in the comments to the following:
def ctypes_test(a, b):
    _test.main(ctypes.c_int(a), ctypes.c_char_p(b))
ctypes_test(2, "323 as21")
Yet getting the same error.
Using this test DLL for Windows:
#include <stdio.h>
__declspec(dllexport)
void main(int argc, char* argv[])
{
    for (int i = 0; i < argc; ++i)
        printf("%s\n", argv[i]);
}
This code will call it. argv is basically a char** in C, so the corresponding ctypes type is POINTER(c_char_p). You also have to pass byte strings, and it can't be a Python list; it has to be an array of ctypes pointers.
>>> from ctypes import *
>>> dll = CDLL('./test')
>>> dll.main.restype = None
>>> dll.main.argtypes = c_int, POINTER(c_char_p)
>>> args = (c_char_p * 3)(b'abc', b'def', b'ghi')
>>> dll.main(len(args), args)
abc
def
ghi
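For the test.so from the question, the same pattern should carry over on Linux. A minimal sketch, assuming the shared object exports main and expects the two file names as argv[1] and argv[2]:
from ctypes import CDLL, POINTER, c_char_p, c_int

_test = CDLL('./test.so')
_test.main.restype = None
_test.main.argtypes = c_int, POINTER(c_char_p)

def ctypes_test(a, b):
    # argv[0] is conventionally the program name; the real arguments follow
    argv = (c_char_p * 3)(b'test', a.encode(), b.encode())
    _test.main(len(argv), argv)

ctypes_test("323", "as21")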

Why does this supposedly infinite loop program terminate?

I was talking to my friend about these two pieces of code. He said the Python one terminates and the C++ one doesn't.
Python:
arr = [1, 2, 3]
for i in range(len(arr)):
    arr.append(i)
print("done")
C++:
#include <iostream>
#include <vector>
using namespace std;

int main() {
    vector<int> arr{1, 2, 3};
    for (int i = 0; i < arr.size(); i++) {
        arr.push_back(i);
    }
    cout << "done" << endl;
    return 0;
}
I challenged that and ran it on two computers. The first one ran out of memory (bad_alloc) because it had 4 GB of RAM. My Mac has 12 GB of RAM and was able to run and terminate just fine.
I thought it wouldn't run forever because the type of size() in vector is an unsigned integer. Since my Mac is 64-bit, I thought it could store 2^(64-2) = 2^62 ints (which is true), but the unsigned integer used for the size is 32 bits for some reason.
Is this some bug in the C++ compiler that does not make max_size() relative to the system's hardware? The overflow causes the program to terminate. Or is it for some other reason?
There is no bug in your C++ compiler manifesting itself here.
int is overflowing (due to the i++), and the behaviour of signed overflow is undefined. (It's feasible that you'll run out of memory on some platforms before this overflow occurs.) Note that there is no defined behaviour that will make i negative, although that is a common occurrence on machines with 2's complement signed integral types once std::numeric_limits<int>::max() is attained; and if i were, say, -1, then i < arr.size() would be false due to the implicit conversion of i to an unsigned type.
The Python version pre-computes range(len(arr)); that is, subsequent appends do not change that initial range.
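A minimal illustration of the Python behaviour (the loop body runs exactly three times, regardless of the appends):
arr = [1, 2, 3]
for i in range(len(arr)):   # range(3) is computed once, before the loop starts
    arr.append(i)
print(arr)     # [1, 2, 3, 0, 1, 2]
print("done")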

how to call an exe from python with integer input arguments and return the .exe output to python?

I have already checked a lot of posts and the subprocess documentation, but none of them provides a solution to my problem. At least, I can't find one.
Anyway, here is my problem description:
I would like to call a .exe from a .py file. The .exe needs an integer input argument and also returns an integer value, which I would like to use for further calculations in Python.
In order to keep things simple, I would like to use a minimal working example of my "problem" code (see below). If I run this code, the .exe crashes and I don't know why. Maybe I just missed something, but I don't know what. So here is what I did:
C++ code which I use to generate MyExe.exe:
#include <iostream>
using namespace std;
#include <stdlib.h>
#include <string>
int main(int argc, char* argv[])
{
    int x = atoi(argv[1]);
    return x;
}
My python code:
from subprocess import Popen, PIPE
path = 'Path to my MyExe.exe'
def callmyexe(value):
    p = Popen([path], stdout=PIPE, stdin=PIPE)
    p.stdin.write(bytes(value))
    return p.stdout.read
a = callmyexe(5)
b = a + 1
print(b)
I use MSVC 2015 and Python 3.6.
You have to use cout for output:
#include <iostream>
using namespace std;
#include <stdlib.h>
#include <string>
int main(int argc, char* argv[])
{
    int x = atoi(argv[1]);
    cout << x;
}
And command line parameters for the input:
from subprocess import check_output
path = 'Path to my MyExe.exe'
def callmyexe(value):
    return int(check_output([path, str(value)]))
a = callmyexe(5)
b = a + 1
print(b)
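If you also want an exception raised when the exe fails, a variant of the same idea using subprocess.run works too. This is only a sketch; the path string is a placeholder, exactly as above:
import subprocess

path = 'Path to my MyExe.exe'  # placeholder, replace with the real path

def callmyexe(value):
    # pass the value as a command-line argument and capture what the exe prints
    result = subprocess.run([path, str(value)], stdout=subprocess.PIPE, check=True)
    return int(result.stdout)

a = callmyexe(5)
print(a + 1)  # 6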

Python - How can I read input from a device using ioctl or spidev?

I have a hardware device and the vendor that supplied it gave a bit of C code to listen for button presses which uses ioctl. The device has an SSD1289 controller.
Push buttons require no additional pins; their status can be read over SPI.
That's what I want, to read which push button was pressed.
I am trying to replicate this script in Python for my own application, but the _IOR and ioctl requirements are throwing me off.
#include <stdio.h>
#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>
#include <string.h>
#include <sys/ioctl.h>

#define SSD1289_GET_KEYS _IOR('keys', 1, unsigned char *)

void get_keys(int fd)
{
    unsigned char keys;
    if (ioctl(fd, SSD1289_GET_KEYS, &keys) == -1)
    {
        perror("_apps ioctl get");
    }
    else
    {
        printf("Keys : %2x\n", keys);
    }
}

int main(int argc, char *argv[])
{
    char *file_name = "/dev/fb1";
    int fd;
    fd = open(file_name, O_RDWR);
    if (fd == -1)
    {
        perror("_apps open");
        return 2;
    }
    while (1)
        get_keys(fd);
    printf("Ioctl Number: (int)%d (hex)%x\n", SSD1289_GET_KEYS, SSD1289_GET_KEYS);
    close(fd);
    return 0;
}
Now I know that Python has an ioctl module, and at some point I should be calling
file = open("/dev/fb1")
buf = array.array('h', [0])
fcntl.ioctl(file, ????, buf, 1)
I can't figure out what the SSD1289_GET_KEYS is supposed to be. How do I get this and what is _IOR?
Also, if this is the wrong approach, knowing that would be a help too. There are libraries such as spidev which are supposedly for SPI, but I don't know what to read using it.
@alexis provided some useful steps below, which got me to this point:
import fcntl
import array
file = open("/dev/fb1")
buf = array.array('h', [0])
fcntl.ioctl(file, -444763391, buf, 1)
Now, pressing a button changes the value of buf if I keep the above in a loop.
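A minimal sketch of such a loop, reusing the request number and the /dev/fb1 device from above:
import array
import fcntl

SSD1289_GET_KEYS = -444763391   # value obtained for this particular system

with open("/dev/fb1") as f:
    buf = array.array('h', [0])
    while True:
        fcntl.ioctl(f, SSD1289_GET_KEYS, buf, 1)   # mutates buf in place
        print("Keys: %2x" % buf[0])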
You're on the right track, you just need to figure out the constant to use. Your vendor's program will actually print it out, in decimal and hex, if you just edit main() and move the printf line above the endless while loop:
printf("Ioctl Number: (int)%d (hex)%x\n", SSD1289_GET_KEYS, SSD1289_GET_KEYS);
while(1)
get_keys(fd);
Explanation:
_IOR is a macro defined in sys/ioctl.h. Its definition is as follows:
#define _IOC(inout,group,num,len) \
(inout | ((len & IOCPARM_MASK) << 16) | ((group) << 8) | (num))
#define _IO(g,n) _IOC(IOC_VOID, (g), (n), 0)
#define _IOR(g,n,t) _IOC(IOC_OUT, (g), (n), sizeof(t))
#define _IOW(g,n,t) _IOC(IOC_IN, (g), (n), sizeof(t))
I have included the relevant context lines. You can see that this macro constructs a bit mask that (we can tell from the name) deals with read operations. But your goal is to figure out the bit mask you need, which you can do without too much trouble: run your vendor's C program through cc -E and you'll see the source after the preprocessor has been applied. Track down the definition of get_keys (there'll be a whole lot of header files first, so it'll be at the very end of the output), and pull out the second argument.
The result just might be system-dependent, so you should really try it yourself. On my box, it comes out as
((__uint32_t)0x40000000 | ((sizeof(unsigned char *) & 0x1fff) << 16) | ((('keys')) << 8) | ((1)))
Not eager to translate that into python, I added the following lines at the very start of main():
printf("%d", ((__uint32_t)0x40000000 | ((sizeof(unsigned char *) & 0x1fff) << 16) |
((('keys')) << 8) | ((1))));
exit(0);
I ran the program and it gave me the output 1702458113, which may be the value you need. It should be the same as the decimal output from the printf command that was already there (but hidden below the endless while loop). But check it yourself and don't blame me if you blow out your hardware or something!
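If you would rather compute the constant in Python than paste in a magic number, a rough translation of the macro quoted above could look like the sketch below. It is only an illustration: the bit layout and the value the compiler gives the multi-character constant 'keys' are both implementation-dependent, so verify the result against what the C program prints.
IOCPARM_MASK = 0x1fff
IOC_OUT = 0x40000000

def _IOR(group, num, size):
    # mirrors: inout | ((len & IOCPARM_MASK) << 16) | ((group) << 8) | (num),
    # masked to 32 bits to mimic C's unsigned int arithmetic
    return (IOC_OUT | ((size & IOCPARM_MASK) << 16) | (group << 8) | num) & 0xFFFFFFFF

KEYS = 0x6B657973        # gcc packs the multi-char constant 'keys' as 'k','e','y','s'
SIZEOF_PTR = 8           # sizeof(unsigned char *) on a 64-bit system

print(_IOR(KEYS, 1, SIZEOF_PTR))   # 1702458113 on the machine described above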

c system() return from python script - confusing!

I need to call through to a Python script from C and be able to catch return values from it. It doesn't particularly matter what the values are (they may as well be an enum), but the values I got out of a test case confused me, and I wanted to get to the bottom of what I was seeing.
So, here is the C:
#include <stdlib.h>
#include <stdio.h>

int main(void)
{
    int out = 0;
    out = system("python /1.py");
    printf("script 1 returned %d\n", out);
    return 0;
}
and here is /1.py :
import sys
sys.exit(1)
The output of these programs is this:
script 1 returned 256
some other values:
2 -> 512
800 -> 8192
8073784 -> 14336
Is it reading in little rather than big endian, or something? How can I write a C function (or trick Python) into correctly returning and interpreting the numbers?
From the Linux documentation on system():
... return status is in the format specified in wait(2). Thus, the exit code of the command will be WEXITSTATUS(status) ...
From following the link on wait, we get the following:
WEXITSTATUS(status): returns the exit status of the child. ... This macro should only be employed if WIFEXITED returned true.
What this amounts to is that you can't use the return value of system() directly, but must use macros to manipulate it. And, since this conforms to the C standard and not just the Linux implementation, you will need to use the same procedure in any operating environment you are using.
The system() call return value is in the format specified by waitpid(). The termination status is not as defined for the sh utility. I can't recall exactly, but it works something like:
int exit_value, signal_num, dumped_core;
...
exit_value = out >> 8;
signal_num = out & 127;
dumped_core = out & 128;
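The same wait-status encoding is what Python's own os helpers expose, so you can check the decoding from the Python side as well. A sketch, assuming the /1.py script from the question exists:
import os

status = os.system("python /1.py")   # same encoding as C's system() on Unix
if os.WIFEXITED(status):
    print(os.WEXITSTATUS(status))    # 1, the value passed to sys.exit()
print(status >> 8, status & 127)     # 1 0: exit code and signal number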
