Equivalent expression in Python

I am a Python n00b and at the risk of asking an elementary question, here I go.
I am porting some code from C to Python for various reasons that I don't want to go into.
The C code contains the declarations reproduced below.
float table[3][101][4];
int kx[6] = {0,1,0,2,1,0};
int kz[6] = {0,0,1,0,1,2};
I want an equivalent Python expression for the C code below:
float *px, *pz;
int lx = LX; /* constant defined somewhere else */
int lz = LZ; /* constant defined somewhere else */
px = &(table[kx[i]][0][0])+lx;
pz = &(table[kz[i]][0][0])+lz;
Can someone please help me by giving me the equivalent expression in Python?

Here's the thing... you can't do pointers in Python, so what you're showing here is not "portable" in the sense that:
float *px, *pz; <-- this doesn't exist
int lx = LX; /* constant defined somewhere else */
int lz = LZ; /* constant defined somewhere else */
px = &(table[kx[i]][0][0])+lx;
pz = &(table[kz[i]][0][0])+lz;
^-- taking addresses and adding offsets: therefore none of this makes any sense in Python.
What you're trying to do is keep a pointer to some offset in your multidimensional array table. Because you can't do that in Python, you don't want to "port" this code verbatim.
Follow the logic beyond this point: what are you doing with px and pz? That is the code you need to understand in order to port it.
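If it turns out px and pz are only used to read and write floats at small offsets, one Pythonic stand-in for "a pointer into the table" is to carry the sub-list and an index as a pair. A sketch, with hypothetical values for i and lx (this only works while offsets stay inside a single 4-float row):
table = [[[0.0] * 4 for _ in range(101)] for _ in range(3)]
kx = [0, 1, 0, 2, 1, 0]
i, lx = 0, 2                          # hypothetical values for illustration
row_x, off_x = table[kx[i]][0], lx    # stand-in for px = &(table[kx[i]][0][0]) + lx;
row_x[off_x] = 3.0                    # stand-in for *px = 3;
value = row_x[off_x + 1]              # stand-in for reading px[1]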

There is no direct equivalent for your C code, since Python has no pointers or pointer arithmetic. Instead, refactor your code to index into the table with bracket notation.
table[kx[i]][0][lx] = 3
would be a rough equivalent of the C
px = &(table[kx[i]][0][0])+lx;
*px = 3;
Note that in Python, your table would not be contiguous. In particular, while this might work in C:
px[10] = 3; // Bounds violation!
This will raise an IndexError in Python:
table[kx[i]][0][lx + 10] = 3
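If the rest of the ported code genuinely relies on that contiguity (offsets that cross row boundaries), a flat list indexed manually gets closer to the C layout. A sketch under that assumption, with hypothetical values for i and lx:
flat = [0.0] * (3 * 101 * 4)   # one contiguous block, like the C table
def at(plane, offset):
    # index of &(table[plane][0][0]) + offset within the flat buffer
    return plane * 101 * 4 + offset
kx = [0, 1, 0, 2, 1, 0]
i, lx = 0, 7                   # hypothetical values; offsets may now cross row boundaries
flat[at(kx[i], lx)] = 3.0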

Related

What should I do to have multiple ctypes data types assigned to a single ctypes instance in Python?

I'm converting C code into Python code that uses a .dll file.
The syntax for accessing the commands from the DLL is given below:
cnc_rdmacro(unsigned short FlibHndl, short number, short length, ODBM *macro);
The ODBM data structure it points to is as follows:
typedef struct odbm {
    short datano;    /* custom macro variable number */
    short dummy;     /* (not used) */
    long  mcr_val;   /* value of custom macro variable */
    short dec_val;   /* number of places of decimals */
} ODBM;
C code used to access the DLL command:
short example( short number )
{
    ODBM macro;
    char strbuf[12];
    short ret;
    ret = cnc_rdmacro( h, number, 10, &macro );
The Python code I wrote based on the above C code is as follows:
import ctypes
fs = ctypes.cdll.LoadLibrary(r".dll filepath")
ODBM = (ctypes.c_short * 4)() #the datatype conversion code from the above C code
ret = fs.cnc_rdmacro(libh, macro_no, 10, ctypes.byref(ODBM))
I can get the output from the above code without any errors.
The actual ODBM data structure declares four variables of types short, short, long and short, as implemented in the C code. But I declared the ODBM structure in Python as ctypes.c_short * 4, i.e., four variables of short type.
What I need is to declare the ODBM structure the same as in the C code and pass it to ctypes.byref(). In other words, I want multiple data types combined in a single ctypes instance. Kindly help me out.
A ctypes.Structure should be used here:
import ctypes
class ODBM(ctypes.Structure):
    _fields_ = [("datano", ctypes.c_short),
                ("dummy", ctypes.c_short),
                ("mcr_val", ctypes.c_long),
                ("dec_val", ctypes.c_short)]
fs = ctypes.cdll.LoadLibrary(r".dll filepath")
odbm = ODBM()
ret = fs.cnc_rdmacro(libh, macro_no, 10, ctypes.byref(odbm))
print(odbm.mcr_val)
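As a sanity check, ctypes.sizeof(ODBM) should match sizeof(ODBM) on the C side, padding included; a mismatch usually means a field-order or type mistake:
# Continuing the snippet above: 2 + 2 + 4 + 2 = 10 bytes of fields,
# padded to 12 by default alignment on Windows, where c_long is 4 bytes.
print(ctypes.sizeof(ODBM))  # expect 12; compare with sizeof(ODBM) in C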

Read FileMapping object in Python (adapted from C++)

I have a process that writes a file mapping to shared memory, and I want to access it in Python. However, I have no idea what shape the file map has.
I found a solution that works perfectly fine in C++, but there's a part I can't figure out because I'm not a C++ guy.
Simplified C++ code:
struct STelemetry {
    struct SHeader {
        char Magic[32];
        Nat32 Version;
        Nat32 Size;
    };
};
// MAIN
HANDLE hMapFile = NULL;
void* pBufView = NULL;
const volatile STelemetry* Shared = NULL;
hMapFile = OpenFileMapping(FILE_MAP_READ, FALSE, "MP_Telemetry");      // FileMap handle
pBufView = (void*)MapViewOfFile(hMapFile, FILE_MAP_READ, 0, 0, 4096);  // pointer to the map view (string of bytes?)
Shared = (const STelemetry*)pBufView;                                  // somehow casts the string of bytes to the class?
Full repo : https://github.com/Electron-x/TMTelemetry/blob/master/TMTelemetry.cpp
Which I adapted in Python:
from ctypes import *
FILE_MAP_ALL_ACCESS = 0xF001F
INVALID_HANDLE_VALUE = 0xFFFFFFFF
FALSE = 0
TRUE = 1
SHMEMSIZE = 4096 #Just copied this value from the C++ code
hMapObject = windll.kernel32.OpenFileMappingW(FILE_MAP_ALL_ACCESS, FALSE, "MP_Telemetry") #OpenFileMappingA for ansi encoding, OpenFileMappingW for unicode
pBuf = windll.kernel32.MapViewOfFile(hMapObject, FILE_MAP_ALL_ACCESS, 0, 0, SHMEMSIZE)
At that point, pBuf is an int value that I think represents the pointer, so I just want to read the value it points to and create an object just like STelemetry in the C++ code.
The C++ code does Shared = (const STelemetry*)pBufView;, which I think has no equivalent in Python, so I tried to print the buffer, thinking I could create a class from a string.
I tried various things:
import mmap
shmem = mmap.mmap(0, SHMEMSIZE, "ManiaPlanet_Telemetry", mmap.ACCESS_READ)
print(shmem.read(SHMEMSIZE).decode("utf-8")) # Using OpenFileMappingW
# If SHMEMSIZE = 256: MP_Telemetry t . Stadium.....
# If SHMEMSIZE = 4096 UnicodeDecodeError: 'utf-8' codec can't decode byte 0xb2 in position 40
print(shmem.read(SHMEMSIZE).decode("ansi")) # Using OpenFileMappingA
# MP_Telemetry t . Stadium.....
shmem.close()
The "MP_Telemetry" and "Stadium" strings are things I want. But basically everything else is gibberish.
Using the A function and ANSI seems better, right? But using the A function, pBuf = 0 always, so the pointer is null but still returns a string?...
I've tried a bunch of other decoders and nothing more came out.
Other solution:
x = cast(pBuf, c_char_p)
print(x.value)
But I get None using the A function, and exit code 0xC0000005 (access violation) using the W function.
So the question is: how do I interpret that byte string? Is there a way to use the C++-defined class in Python (ctypes?)
And if you've got explanations for the other points I didn't get, you're welcome to share them.
(Also, if you've got a better title, suggest it, since this may look like a duplicate.)
Thanks
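For what it's worth, the cast step does have a ctypes counterpart: define a matching Structure and reinterpret the mapped pointer with ctypes.cast. A sketch, assuming Nat32 is a 32-bit unsigned integer; note that MapViewOfFile's restype must be declared as c_void_p, because on 64-bit Python the default int return type truncates the pointer, which can produce exactly the 0xC0000005 access violation mentioned above:
import ctypes
from ctypes import wintypes

class SHeader(ctypes.Structure):
    _fields_ = [("Magic", ctypes.c_char * 32),
                ("Version", ctypes.c_uint32),  # assuming Nat32 is a 32-bit unsigned int
                ("Size", ctypes.c_uint32)]

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
kernel32.OpenFileMappingW.argtypes = (wintypes.DWORD, wintypes.BOOL, wintypes.LPCWSTR)
kernel32.OpenFileMappingW.restype = wintypes.HANDLE
kernel32.MapViewOfFile.argtypes = (wintypes.HANDLE, wintypes.DWORD,
                                   wintypes.DWORD, wintypes.DWORD, ctypes.c_size_t)
kernel32.MapViewOfFile.restype = ctypes.c_void_p   # crucial: avoids 64-bit pointer truncation

FILE_MAP_READ = 0x0004
hMap = kernel32.OpenFileMappingW(FILE_MAP_READ, False, "MP_Telemetry")
pBuf = kernel32.MapViewOfFile(hMap, FILE_MAP_READ, 0, 0, 4096)

# The ctypes counterpart of: Shared = (const STelemetry*)pBufView;
header = ctypes.cast(pBuf, ctypes.POINTER(SHeader)).contents
print(header.Magic, header.Version, header.Size)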

Roots of Legendre Polynomials in C++

I'm writing a program to find the roots of nth-order Legendre polynomials using C++; my code is attached below:
double* legRoots(int n)
{
    double myRoots[n];
    double x, dx, Pi = atan2(1,1)*4;
    int iters = 0;
    double tolerance = 1e-20;
    double error = 10*tolerance;
    int maxIterations = 1000;
    for(int i = 1; i<=n; i++)
    {
        x = cos(Pi*(i-.25)/(n+.5));
        do
        {
            dx -= legDir(n,x)/legDif(n,x);
            x += dx;
            iters += 1;
            error = abs(dx);
        } while (error>tolerance && iters<maxIterations);
        myRoots[i-1] = x;
    }
    return myRoots;
}
This assumes the existence of working Legendre polynomial and Legendre polynomial derivative functions, which I do have, but I thought including them would make for an unreadable wall of code. The function works in the sense that it returns an array of calculated values, but they're wildly off, outputting the following:
3.95253e-323
6.94492e-310
6.95268e-310
6.42285e-323
4.94066e-323
2.07355e-317
where an equivalent function I've written in Python gives the following:
[-0.90617985 -0.54064082 0. 0.54064082 0.90617985]
I was hoping another set of eyes could help me see what the issue in my C++ code is that's causing the values to be wildly off. I'm not doing anything different in my Python code than I'm doing in C++, so any help anyone could give is greatly appreciated, thanks. For reference, I'm mostly trying to emulate the Gaussian quadrature method found on Rosetta Code: http://rosettacode.org/wiki/Numerical_integration/Gauss-Legendre_Quadrature.
You are returning the address of a temporary variable on the stack:
{
    double myRoots[n];
    ...
    return myRoots; // Not a safe thing to do
}
I suggest changing your function definition to
void legRoots(int n, double *myRoots)
omitting the return statement, and defining myRoots before calling the function:
double myRoots[10];
legRoots(10, myRoots);
Option 2 would be to allocate myRoots dynamically with new or malloc.
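Separately, note that dx is used uninitialized and updated with -=, so the Newton step accumulates instead of being recomputed each pass, and iters is never reset between roots. For comparison, a minimal Python sketch of the intended iteration (same starting guess, with P_n and its derivative from the standard three-term recurrence):
from math import cos, pi

def legendre_pair(n, x):
    # Return (P_n(x), P_n'(x)) using the three-term recurrence.
    p_prev, p = 1.0, x
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    dp = n * (x * p - p_prev) / (x * x - 1.0)
    return p, dp

def leg_roots(n, tolerance=1e-14, max_iterations=100):
    roots = []
    for i in range(1, n + 1):
        x = cos(pi * (i - 0.25) / (n + 0.5))  # same starting guess as the C++ code
        for _ in range(max_iterations):
            p, dp = legendre_pair(n, x)
            dx = -p / dp   # a fresh Newton step each pass; dx is not accumulated
            x += dx
            if abs(dx) < tolerance:
                break
        roots.append(x)
    return sorted(roots)

print(leg_roots(5))
# [-0.9061798459..., -0.5384693101..., ~0.0, 0.5384693101..., 0.9061798459...]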

Python: XOR each character in a string

I'm trying to validate a checksum on a string which in this case is calculated by performing an XOR on each of the individual characters.
Given my test string:
check_against = "GPGLL,5300.97914,N,00259.98174,E,125926,A"
I figured it would be as simple as:
result = 0
for char in check_against:
    result = result ^ ord(char)
I know the result should be 28; however, my code gives 40.
I'm not sure what encoding the text is supposed to be in, although I've tried encoding/decoding in UTF-8 and ASCII, both with the same result.
I implemented this same algorithm in C by simply XOR-ing over the char array, with perfect results, so what am I missing?
Edit
So it was a little while ago that I implemented what I thought was the same thing in C. I knew it was in an Objective-C project, but I thought I had done it this way. Totally wrong: first, there was a step where I converted the checksum string value at the end to hex, like so (I'm filling some things in here so that I'm only pasting what is relevant):
unsigned int checksum = 0;
NSScanner *scanner = [NSScanner scannerWithString:@"26"];
[scanner scanHexInt:&checksum];
Then I did the following to compute the checksum:
NSString *sumString = @"GPGLL,5300.97914,N,00259.98174,E,125926,A";
unsigned int sum = 0;
for (int i = 0; i < sumString.length; i++) {
    sum = sum ^ [sumString characterAtIndex:i];
}
Then I would just compare like so:
return sum == checksum;
So as @metatoaster, @XD573, and some others in the comments helped figure out, the issue was a difference of bases between the result, which was in base 10, and my expected value, which was in base 16.
The result from the code, 40, is correct in base 10; the value I was trying to match, 28, is given in base 16. Converting the expected value from base 16 to base 10, for example like so:
int('28', 16)
I get 40, the computed result.
# python3
s = "GPGLL,5300.97914,N,00259.98174,E,125926,A"
cks = 0
for ch in s:
    cks ^= ord(ch)
print("hex:", hex(cks))  # hex: 0x28
print("dec:", cks)       # dec: 40
I created the C version as shown here:
#include <stdio.h>
#include <string.h>
int main()
{
    char* str1 = "GPGLL,5300.97914,N,00259.98174,E,125926,A";
    int sum = 0;
    int i;
    for (i = 0; i < strlen(str1); i++) {
        sum ^= str1[i];
    }
    printf("checksum: %d\n", sum);
    return 0;
}
And when I compiled and ran it:
$ gcc -o mytest mytest.c
$ ./mytest
checksum: 40
Which leads me to believe that the assumptions you have from your equivalent C code are incorrect.
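Putting the two halves together, validation reduces to comparing the two values in the same base. A small sketch, assuming (as in the Objective-C snippet) the expected checksum arrives as a hex string:
def checksum_matches(payload, expected_hex):
    # XOR every character of the payload, then compare against the
    # checksum given as a hex string (e.g. "28" for this sentence).
    cks = 0
    for ch in payload:
        cks ^= ord(ch)
    return cks == int(expected_hex, 16)

print(checksum_matches("GPGLL,5300.97914,N,00259.98174,E,125926,A", "28"))  # True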

ctypes outputting unknown value at end of correct values

I have the following DLL ('arrayprint.dll') function that I want to use in Python via ctypes:
__declspec(dllexport) void PrintArray(int* pArray) {
    int i;
    for(i = 0; i < 5; pArray++, i++) {
        printf("%d\n", *pArray);
    }
}
My Python script is as follows:
from ctypes import *
fiveintegers = c_int * 5
x = fiveintegers(2,3,5,7,11)
px = pointer(x)
mydll = CDLL('arrayprint.dll')
mydll.PrintArray(px)
The final function call outputs the following:
2
3
5
7
11
2226984
What is the 2226984 and how do I get rid of it? It doesn't look to be the decimal value for the memory address of the DLL, x, or px.
Thanks,
Mike
(Note: I'm not actually using PrintArray for anything; it was just the easiest example I could find that generated the same behavior as the longer function I'm using.)
mydll.PrintArray.restype = None
mydll.PrintArray(px)
By default ctypes assumes the function returns a C int, so the call reads whatever garbage happens to be in the return register; the stray 2226984 is that meaningless "return value" (which the interactive interpreter then echoes). Declaring restype = None tells ctypes the function is void.
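A slightly fuller sketch of the fix, also declaring argtypes so ctypes can check the call (same hypothetical arrayprint.dll as in the question):
from ctypes import CDLL, POINTER, c_int

mydll = CDLL('arrayprint.dll')
mydll.PrintArray.restype = None              # void: no return value to read
mydll.PrintArray.argtypes = [POINTER(c_int)]

fiveintegers = c_int * 5
x = fiveintegers(2, 3, 5, 7, 11)
mydll.PrintArray(x)   # a ctypes array is accepted where POINTER(c_int) is expected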
