Sending negative values via pyserial problem - python

I need to send mouse coordinates from Python to Arduino. As you know there are X and Y axes, and the deltas can be negative values like -15 or -10. Arduino's serial only accepts bytes, so values are limited to 0 to 255. My problem starts right here: I can't send negative values from Python to Arduino. Here is my Python code:
def mouse_move(x, y):
    pax = [x, y]
    arduino.write(pax)
    print(pax)
For example, when x or y is a negative value like -5, the program crashes because a byte array only holds values 0-255.
Here is my Arduino code:
#include <Mouse.h>

byte bf[2];

void setup() {
  Serial.begin(9600);
  Mouse.begin();
}

void loop() {
  if (Serial.available() > 0) {
    Serial.readBytes(bf, 2);
    Mouse.move(bf[0], bf[1], 0);
    Serial.read();
  }
}

You need to send more bytes to represent each number.
Let's say you use 4 bytes per number.
Please note that this code needs to be adapted to the Arduino's endianness.
On the Python side you would do something like:
def mouse_move(x, y):
    # signed=True is required: to_bytes raises OverflowError for negatives without it
    data = x.to_bytes(4, byteorder='big', signed=True) + y.to_bytes(4, byteorder='big', signed=True)
    arduino.write(data)
    print(data)
On the receiver side you need to reconstruct each number from its constituent bytes, something like:
byte bytes[4];

void loop() {
  int32_t x, y; /* use a 4-byte integer type; plain int is only 2 bytes on AVR boards */
  if (Serial.available() >= 8) { /* wait until both numbers have arrived */
    Serial.readBytes(bytes, 4);
    x = (int32_t)bytes[0] << 24 | (int32_t)bytes[1] << 16 | (int32_t)bytes[2] << 8 | bytes[3];
    Serial.readBytes(bytes, 4);
    y = (int32_t)bytes[0] << 24 | (int32_t)bytes[1] << 16 | (int32_t)bytes[2] << 8 | bytes[3];
    Mouse.move(x, y, 0);
  }
}
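If the deltas always fit in -128..127, a simpler option is one signed byte per axis; Python's struct module does the two's-complement encoding for you (a sketch, assuming `arduino` is an open serial.Serial port):

```python
import struct

def mouse_move(arduino, x, y):
    # '<bb' packs two signed 8-bit values; raises struct.error outside -128..127
    arduino.write(struct.pack('<bb', x, y))
```

On the Arduino side, read the two bytes into int8_t variables so the sign survives; Mouse.move takes signed char arguments anyway.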


Same random numbers in C++ as computed by Python3 numpy.random.rand

I would like to duplicate in C++ the testing for some code that has already been implemented in Python3 which relies on numpy.random.rand and randn values and a specific seed (e.g., seed = 1).
I understand that Python's random implementation is based on a Mersenne twister. The C++ standard library also supplies this in std::mersenne_twister_engine.
The C++ version returns an unsigned integer, whereas Python's rand is a floating-point value.
Is there a way to obtain the same values in C++ as are generated in Python, and to be sure that they are the same? And likewise for an array generated by randn?
You can do it this way for integer values:
import numpy as np
np.random.seed(12345)
print(np.random.randint(256**4, dtype='<u4', size=1)[0])
#include <iostream>
#include <random>
int main()
{
std::mt19937 e2(12345);
std::cout << e2() << std::endl;
}
The result of both snippets is 3992670690
By looking at the source code of rand you can implement it in your C++ code this way:
import numpy as np
np.random.seed(12345)
print(np.random.rand())
#include <iostream>
#include <iomanip>
#include <random>
int main()
{
std::mt19937 e2(12345);
int a = e2() >> 5;
int b = e2() >> 6;
double value = (a * 67108864.0 + b) / 9007199254740992.0;
std::cout << std::fixed << std::setprecision(16) << value << std::endl;
}
Both random values are 0.9296160928171479
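The same construction can be cross-checked from Python alone: NumPy's full-range randint exposes the raw 32-bit Mersenne Twister outputs (as the first snippet above shows), so you can rebuild rand() by hand. A sketch, relying on the legacy np.random seeding used throughout this answer:

```python
import numpy as np

np.random.seed(12345)
raw = np.random.randint(256**4, dtype='<u4', size=2)  # two raw MT19937 draws
a, b = int(raw[0]) >> 5, int(raw[1]) >> 6
value = (a * 67108864.0 + b) / 9007199254740992.0     # (a * 2**26 + b) / 2**53

np.random.seed(12345)
print(value == np.random.rand())  # True: rand() consumes two raw draws per double
```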
It would be convenient to use std::generate_canonical, but it uses another method to convert the output of Mersenne twister to double. The reason they differ is likely that generate_canonical is more optimized than the random generator used in NumPy, as it avoids costly floating point operations, especially multiplication and division, as seen in source code. However it seems to be implementation dependent, while NumPy produces the same result on all platforms.
double value = std::generate_canonical<double, std::numeric_limits<double>::digits>(e2);
This doesn't work and produces result 0.8901547132827379, which differs from the output of Python code.
For completeness and to avoid re-inventing the wheel, here is an implementation for both numpy.rand and numpy.randn in C++
The header file:
#ifndef RANDOMNUMGEN_NUMPYCOMPATIBLE_H
#define RANDOMNUMGEN_NUMPYCOMPATIBLE_H
#include "RandomNumGenerator.h"
//Uniform distribution - numpy.rand
class RandomNumGen_NumpyCompatible {
public:
RandomNumGen_NumpyCompatible();
RandomNumGen_NumpyCompatible(std::uint_fast32_t newSeed);
std::uint_fast32_t min() const { return m_mersenneEngine.min(); }
std::uint_fast32_t max() const { return m_mersenneEngine.max(); }
void seed(std::uint_fast32_t seed);
void discard(unsigned long long); // NOTE!! Advances and discards twice as many values as passed in to keep tracking with Numpy order
uint_fast32_t operator()(); //Simply returns the next Mersenne value from the engine
double getDouble(); //Calculates the next uniformly random double as numpy.rand does
std::string getGeneratorType() const { return "RandomNumGen_NumpyCompatible"; }
private:
std::mt19937 m_mersenneEngine;
};
///////////////////
//Gaussian distribution - numpy.randn
class GaussianRandomNumGen_NumpyCompatible {
public:
GaussianRandomNumGen_NumpyCompatible();
GaussianRandomNumGen_NumpyCompatible(std::uint_fast32_t newSeed);
std::uint_fast32_t min() const { return m_mersenneEngine.min(); }
std::uint_fast32_t max() const { return m_mersenneEngine.max(); }
void seed(std::uint_fast32_t seed);
void discard(unsigned long long); // NOTE!! Advances and discards twice as many values as passed in to keep tracking with Numpy order
uint_fast32_t operator()(); //Simply returns the next Mersenne value from the engine
double getDouble(); //Calculates the next normally (Gaussian) distributed random double as numpy.randn does
std::string getGeneratorType() const { return "GaussianRandomNumGen_NumpyCompatible"; }
private:
bool m_haveNextVal;
double m_nextVal;
std::mt19937 m_mersenneEngine;
};
#endif
And the implementation:
#include "RandomNumGen_NumpyCompatible.h"
RandomNumGen_NumpyCompatible::RandomNumGen_NumpyCompatible()
{
}
RandomNumGen_NumpyCompatible::RandomNumGen_NumpyCompatible(std::uint_fast32_t seed)
: m_mersenneEngine(seed)
{
}
void RandomNumGen_NumpyCompatible::seed(std::uint_fast32_t newSeed)
{
m_mersenneEngine.seed(newSeed);
}
void RandomNumGen_NumpyCompatible::discard(unsigned long long z)
{
//Advances and discards TWICE as many values to keep with Numpy order
m_mersenneEngine.discard(2*z);
}
std::uint_fast32_t RandomNumGen_NumpyCompatible::operator()()
{
return m_mersenneEngine();
}
double RandomNumGen_NumpyCompatible::getDouble()
{
int a = m_mersenneEngine() >> 5;
int b = m_mersenneEngine() >> 6;
return (a * 67108864.0 + b) / 9007199254740992.0;
}
///////////////////
GaussianRandomNumGen_NumpyCompatible::GaussianRandomNumGen_NumpyCompatible()
: m_haveNextVal(false)
{
}
GaussianRandomNumGen_NumpyCompatible::GaussianRandomNumGen_NumpyCompatible(std::uint_fast32_t seed)
: m_haveNextVal(false), m_mersenneEngine(seed)
{
}
void GaussianRandomNumGen_NumpyCompatible::seed(std::uint_fast32_t newSeed)
{
m_mersenneEngine.seed(newSeed);
}
void GaussianRandomNumGen_NumpyCompatible::discard(unsigned long long z)
{
//Burn some CPU cycles here
for (unsigned i = 0; i < z; ++i)
getDouble();
}
std::uint_fast32_t GaussianRandomNumGen_NumpyCompatible::operator()()
{
return m_mersenneEngine();
}
double GaussianRandomNumGen_NumpyCompatible::getDouble()
{
if (m_haveNextVal) {
m_haveNextVal = false;
return m_nextVal;
}
double f, x1, x2, r2;
do {
int a1 = m_mersenneEngine() >> 5;
int b1 = m_mersenneEngine() >> 6;
int a2 = m_mersenneEngine() >> 5;
int b2 = m_mersenneEngine() >> 6;
x1 = 2.0 * ((a1 * 67108864.0 + b1) / 9007199254740992.0) - 1.0;
x2 = 2.0 * ((a2 * 67108864.0 + b2) / 9007199254740992.0) - 1.0;
r2 = x1 * x1 + x2 * x2;
} while (r2 >= 1.0 || r2 == 0.0);
/* Box-Muller transform */
f = sqrt(-2.0 * log(r2) / r2);
m_haveNextVal = true;
m_nextVal = f * x1;
return f * x2;
}
After doing a bit of testing, it does seem that the values are within a tolerance (see fdermishin's comment below) when the C++ unsigned int is divided by the maximum value for an unsigned int, like this:
#include <limits>
...
std::mt19937 generator1(seed); // mt19937 is a standard mersenne_twister_engine
unsigned val1 = generator1();
std::cout << "Gen 1 random value: " << val1 << std::endl;
std::cout << "Normalized Gen 1: " << static_cast<double>(val1) / std::numeric_limits<std::uint32_t>::max() << std::endl;
However, Python's version seems to skip every other value.
Given the following two programs:
#!/usr/bin/env python3
import sys
import numpy as np
def main():
np.random.seed(1)
for i in range(0, 10):
print(np.random.rand())
###########
# Call main and exit success
if __name__ == "__main__":
main()
sys.exit()
and
#include <cstdlib>
#include <iostream>
#include <random>
#include <limits>
int main()
{
unsigned seed = 1;
std::mt19937 generator1(seed); // mt19937 is a standard mersenne_twister_engine
for (unsigned i = 0; i < 10; ++i) {
unsigned val1 = generator1();
std::cout << "Normalized, #" << i << ": " << (static_cast<double>(val1) / std::numeric_limits<std::uint32_t>::max()) << std::endl;
}
return EXIT_SUCCESS;
}
the Python program prints:
0.417022004702574
0.7203244934421581
0.00011437481734488664
0.30233257263183977
0.14675589081711304
0.0923385947687978
0.1862602113776709
0.34556072704304774
0.39676747423066994
0.538816734003357
whereas the C++ program prints:
Normalized, #0: 0.417022
Normalized, #1: 0.997185
Normalized, #2: 0.720324
Normalized, #3: 0.932557
Normalized, #4: 0.000114381
Normalized, #5: 0.128124
Normalized, #6: 0.302333
Normalized, #7: 0.999041
Normalized, #8: 0.146756
Normalized, #9: 0.236089
I could easily skip every other value in the C++ version, which should give me numbers that match the Python version (within a tolerance). But why would Python's implementation seem to skip every other value, or where do these extra values in the C++ version come from?

Getting wrong values when I stitch 2 shorts back into an unsigned long

I am doing BLE communications with an Arduino Board and an FPGA.
I have a requirement which restrains me from changing the packet structure (the packet structure is basically short data types). Thus, to send a timestamp (from millis()) over, I have to split an unsigned long into 2 shorts on the Arduino side and stitch it back up on the FPGA side (Python).
This is the implementation which I have:
// Arduino code in c++
unsigned long t = millis();
// bitmask to get bits 1-16
short LSB = (short) (t & 0x0000FFFF);
// bitshift to get bits 17-32
short MSB = (short) (t >> 16);
// I then send the packet with MSB and LSB values
# FPGA python code to stitch it back up (I receive the packet and extract the MSB and LSB)
MSB = data[3]
LSB = data[4]
data = MSB << 16 | LSB
Now the issue is that my output for data on the FPGA side is sometimes negative, which tells me that I must have missed something somewhere, as timestamps are not negative. Does anyone know why?
When I transfer other data in the packet (i.e. other short values and not the timestamp), I am able to receive them as expected, so the problem most probably lies in the conversion that I did and not the sending/receiving of data.
short defaults to signed, and in the case of a negative number >> will keep the sign by shifting in one-bits from the left. See e.g. Microsoft's documentation.
From my earlier comment:
In Python, avoid attempting that yourself (by the way, a C short has no guaranteed size; you always have to look into the compiler manual or limits.h) and use the struct module instead.
You probably also need/want to first convert the long to network byte order using htonl.
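For instance, the two signed shorts can be reinterpreted as one unsigned 32-bit value with struct, with no manual bit twiddling (a sketch using the MSB/LSB values from the question):

```python
import struct

MSB, LSB = 1, -32768          # as received on the Python side
# pack the two signed shorts little-endian (low half first),
# then reinterpret the same 4 bytes as one unsigned 32-bit int
packed = struct.pack('<hh', LSB, MSB)
(t,) = struct.unpack('<I', packed)
print(hex(t))  # 0x18000
```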
As guidot reminded “short” is signed and as data are transferred to Python the code has an issue:
For t=0x00018000 most significant short MSB = 1, least significant short LSB = -32768 (0x8000 in C++ and -0x8000 in Python) and Python code expression
time = MSB << 16 | LSB
returns time = -32768 (see the start of Python code below).
So we have an incorrect sign and we are losing MSB (any value, not only the 1 in our example).
MSB is lost because in the expression above LSB is sign-extended: every bit to the left of bit 15 is already 1, so whatever MSB << 16 contributes is absorbed by the | operator and the expression simply returns LSB.
A straightforward fix (fix 1.1) is declaring MSB and LSB as unsigned short. This could be enough without any changes in the Python code.
To avoid bit operations we could use a union, as per fix 1.2.
Without access to the C++ code we could fix it in Python by converting the signed LSB and MSB (fix 2.1) or by using a ctypes Union (similar to the C++ union, fix 2.2).
C++
#include <iostream>
using namespace std;
int main () {
unsigned long t = 0x00018000;
short LSB = (short)(t & 0x0000FFFF);
short MSB = (short)(t >> 16);
cout << hex << "t = " << t << endl;
cout << dec << "LSB = " << LSB << " MSB = " << MSB << endl;
// 1.1 Fix Use unsigned short instead of short
unsigned short fixedLSB = (unsigned short)(t & 0x0000FFFF);
unsigned short fixedMSB = (unsigned short)(t >> 16);
cout << "fixedLSB = " << fixedLSB << " fixedMSB = " << fixedMSB << endl;
// 1.2 Fix Use union
union {
unsigned long t2;
unsigned short unsignedShortArray[2];
};
t2 = 0x00018000;
fixedLSB = unsignedShortArray [0];
fixedMSB = unsignedShortArray [1];
cout << "fixedLSB = " << fixedLSB << " fixedMSB = " << fixedMSB << endl;
}
Output
t = 18000
LSB = -32768 MSB = 1
fixedLSB = 32768 fixedMSB = 1
fixedLSB = 32768 fixedMSB = 1
Python
DATA=[0, 0, 0, 1, -32768]
MSB=DATA[3]
LSB=DATA[4]
data = MSB << 16 | LSB
print (f"MSB = {MSB} ({hex(MSB)})")
print (f"LSB = {LSB} ({hex(LSB)})")
print (f"data = {data} ({hex(data)})")
time = MSB << 16 | LSB
print (f"time = {time} ({hex(time)})")
# 2.1 Fix
def twosComplement (short):
if short >= 0: return short
return 0x10000 + short
fixedTime = twosComplement(MSB) << 16 | twosComplement(LSB)
# 2.2 Fix
import ctypes
class UnsignedIntUnion(ctypes.Union):
_fields_ = [('unsignedInt', ctypes.c_uint),
('ushortArray', ctypes.c_ushort * 2),
('shortArray', ctypes.c_short * 2)]
unsignedIntUnion = UnsignedIntUnion(shortArray = (LSB, MSB))
print ("unsignedIntUnion")
print ("unsignedInt = ", hex(unsignedIntUnion.unsignedInt))
print ("ushortArray[1] = ", hex(unsignedIntUnion.ushortArray[1]))
print ("ushortArray[0] = ", hex(unsignedIntUnion.ushortArray[0]))
print ("shortArray[1] = ", hex(unsignedIntUnion.shortArray[1]))
print ("shortArray[0] = ", hex(unsignedIntUnion.shortArray[0]))
unsignedIntUnion.unsignedInt=twosComplement(unsignedIntUnion.shortArray[1]) << 16 | twosComplement(unsignedIntUnion.shortArray[0])
def toUInt(msShort: int, lsShort: int):
return UnsignedIntUnion(ushortArray = (lsShort, msShort)).unsignedInt
fixedTime = toUInt(MSB, LSB)
print ("fixedTime = ", hex(fixedTime))
print()
Output
MSB = 1 (0x1)
LSB = -32768 (-0x8000)
data = -32768 (-0x8000)
time = -32768 (-0x8000)
unsignedIntUnion
unsignedInt = 0x18000
ushortArray[1] = 0x1
ushortArray[0] = 0x8000
shortArray[1] = 0x1
shortArray[0] = -0x8000
fixedTime = 0x18000
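Incidentally, the shortest Python-only fix is to mask both halves back to 16 bits before combining, which undoes the sign extension (equivalent to fix 2.1):

```python
MSB, LSB = 1, -32768
# masking with 0xFFFF converts each signed short to its unsigned 16-bit pattern
time = (MSB & 0xFFFF) << 16 | (LSB & 0xFFFF)
print(hex(time))  # 0x18000
```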

different crc16 C and Python3?

I have two CRC16 calculators (one in C and one in Python), but I'm receiving different results. Why?
calculator in C:
unsigned short __update_crc16 (unsigned char data, unsigned short crc16)
{
unsigned short t;
crc16 ^= data;
t = (crc16 ^ (crc16 << 4)) & 0x00ff;
crc16 = (crc16 >> 8) ^ (t << 8) ^ (t << 3) ^ (t >> 4);
return crc16;
}
unsigned short get_crc16 (void *src, unsigned int size, unsigned short start_crc)
{
unsigned short crc16;
unsigned char *p;
crc16 = start_crc;
p = (unsigned char *) src;
while (size--)
crc16 = __update_crc16 (*p++, crc16);
return crc16;
}
calculator in Python3:
def crc16(data):
crc = 0xFFFF
for i in data:
crc ^= i << 8
for j in range(0,8):
if (crc & 0x8000) > 0:
crc =(crc << 1) ^ 0x1021
else:
crc = crc << 1
return crc & 0xFFFF
There is more than one CRC-16; 22 are catalogued at http://reveng.sourceforge.net/crc-catalogue/16.htm. A CRC is characterised by its width, polynomial, initial state, and the input and output bit order.
By applying the same data to each of your functions:
Python:
data = bytes([0x01, 0x23, 0x45, 0x67, 0x89])
print ( hex(crc16(data)) )
Result: 0x738E
C:
char data[] = {0x01, 0x23, 0x45, 0x67, 0x89};
printf ("%4X\n", get_crc16 (data, sizeof (data), 0xffffu));
Result: 0x9F0D
and also applying the same data to an online tool that generates multiple CRCs, such as https://crccalc.com/ you can identify the CRC from the result.
In this case your Python code is CRC-16/CCITT-FALSE, while the C result matches CRC-16/MCRF4XX. They both have the same polynomial, but differ in their input-reflected and output-reflected parameters (both false for CCITT-FALSE, and true for MCRF4XX). This means that for MCRF4XX the bits of each byte are read LSB first, and the entire CRC is bit-reversed on output.
https://pypi.org/project/crccheck/ supports both CCITT and MCRF4XX and many others.
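If you would rather not add a dependency, CRC-16/MCRF4XX is short to write directly; this bit-by-bit sketch uses the reflected polynomial (0x1021 reversed is 0x8408) and reproduces the C result above:

```python
def crc16_mcrf4xx(data: bytes, crc: int = 0xFFFF) -> int:
    # reflected CRC-16 update: process each byte LSB first
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0x8408 if crc & 1 else crc >> 1
    return crc

print(hex(crc16_mcrf4xx(bytes([0x01, 0x23, 0x45, 0x67, 0x89]))))  # 0x9f0d
```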
I implemented a version of CRC16 in C based on the Python crc16 lib. This lib calculates the CRC-CCITT (XModem) variant of CRC16. I used my implementation in an stm32l4 firmware. Here is my C implementation:
unsigned short _crc16(char *data_p, unsigned short length){
    unsigned int crc = 0;
    unsigned short i; /* was unsigned char, which overflows for length > 255 */
    for(i = 0; i < length; i++){
        /* cast the byte to unsigned so a negative char can't produce a bad table index;
           CRC16_XMODEM_TABLE is the 256-entry table from the Python crc16 lib */
        crc = ((crc<<8)&0xff00) ^ CRC16_XMODEM_TABLE[((crc>>8)&0xff) ^ (unsigned char)data_p[i]];
    }
    return crc & 0xffff;
}
On the Python side, I was reading 18 bytes transmitted by the stm32. Here is the CRC part of my code:
import crc16
# read first chunk
crc_buffer = b''
bytes_read = serial_comunication.read(2)  # int_16 - 2 bytes
crc_buffer = crc_buffer.join([crc_buffer, bytes_read])
crc = crc16.crc16xmodem(crc_buffer, 0)
aux = 1
while aux < 8:
    crc_buffer = b''
    bytes_read = serial_comunication.read(2)
    crc_buffer = crc_buffer.join([crc_buffer, bytes_read])
    crc = crc16.crc16xmodem(crc_buffer, crc)
    aux += 1
print(crc)
In my tests, C and Python crc16 values always match, unless some connection problem occurs. Hope this helps someone!

C++ - Reading in 16bit .wav files

I'm trying to read in a .wav file, which I thought was giving me the correct result. However, when I plot the same audio file in Matlab or Python, the results are different.
This is the result that I get:
This is the result that Python (plotted with matplotlib) gives:
The results do not seem that different, but when it comes to analysis, this is messing up my results.
Here is the code that converts:
for (int i = 0; i < size; i += 2)
{
int c = (data[i + 1] << 8) | data[i];
double t = c/32768.0;
//cout << t << endl;
rawSignal.push_back(t);
}
Where am I going wrong? The conversion seems fine and produces such similar results.
Thanks
EDIT:
Code to read the header / data:
bool readHeader(ifstream& file) {
s_riff_hdr riff_hdr;
s_chunk_hdr chunk_hdr;
long padded_size; // Size of extra bits
vector<uint8_t> fmt_data; // Vector to store the FMT data.
s_wavefmt *fmt = NULL;
file.read(reinterpret_cast<char*>(&riff_hdr), sizeof(riff_hdr));
if (!file) return false;
if (memcmp(riff_hdr.id, "RIFF", 4) != 0) return false;
//cout << "size=" << riff_hdr.size << endl;
//cout << "type=" << string(riff_hdr.type, 4) << endl;
if (memcmp(riff_hdr.type, "WAVE", 4) != 0) return false;
{
do
{
file.read(reinterpret_cast<char*>(&chunk_hdr), sizeof(chunk_hdr));
if (!file) return false;
padded_size = ((chunk_hdr.size + 1) & ~1);
if (memcmp(chunk_hdr.id, "fmt ", 4) == 0)
{
if (chunk_hdr.size < sizeof(s_wavefmt)) return false;
fmt_data.resize(padded_size);
file.read(reinterpret_cast<char*>(&fmt_data[0]), padded_size);
if (!file) return false;
fmt = reinterpret_cast<s_wavefmt*>(&fmt_data[0]);
sample_rate2 = fmt->sample_rate;
if (fmt->format_tag == 1) // PCM
{
if (chunk_hdr.size < sizeof(s_pcmwavefmt)) return false;
s_pcmwavefmt *pcm_fmt = reinterpret_cast<s_pcmwavefmt*>(fmt);
bits_per_sample = pcm_fmt->bits_per_sample;
}
else
{
if (chunk_hdr.size < sizeof(s_wavefmtex)) return false;
s_wavefmtex *fmt_ex = reinterpret_cast<s_wavefmtex*>(fmt);
if (fmt_ex->extra_size != 0)
{
if (chunk_hdr.size < (sizeof(s_wavefmtex) + fmt_ex->extra_size)) return false;
uint8_t *extra_data = reinterpret_cast<uint8_t*>(fmt_ex + 1);
// use extra_data, up to extra_size bytes, as needed...
}
}
//cout << "extra_size=" << fmt_ex->extra_size << endl;
}
else if (memcmp(chunk_hdr.id, "data", 4) == 0)
{
// process chunk data, according to fmt, as needed...
size = padded_size;
if(bits_per_sample == 16)
{
//size = padded_size / 2;
}
data = new unsigned char[size];
file.read(reinterpret_cast<char*>(data), size);
file.ignore(padded_size);
if (!file) return false;
}
else
{
// process other chunks as needed...
file.ignore(padded_size);
if (!file) return false;
}
}while (!file.eof());
return true;
}
}
This is where the "conversion to double" happens :
if(bits_per_sample == 8)
{
uint8_t c;
//cout << size;
for(unsigned i=0; (i < size); i++)
{
c = (unsigned)(unsigned char)(data[i]);
double t = (c-128)/128.0;
rawSignal.push_back(t);
}
}
else if(bits_per_sample == 16)
{
for (int i = 0; i < size; i += 2)
{
int c;
c = (unsigned) (unsigned char) (data[i + 2] << 8) | data[i];
double t = c/32768.0;
rawSignal.push_back(t);
}
Note how "8bit" files work correctly?
I suspect your problem may be that data is an array of signed char values. So, when you do this:
int c = (data[i + 1] << 8) | data[i];
… it's not actually doing what you wanted. Let's look at some simple examples.
If data[i+1] == 64 and data[i] == 64, that's going to be 0x4000 | 0x40, or 0x4040, all good.
If data[i+1] == -64 and data[i] == -64, that's going to be 0xffffc000 | 0xffffffc0, or 0xffffffc0, which is obviously wrong.
If you were using unsigned char values, this would work, because instead of -64 those numbers would be 192, and you'd end up with 0xc000 | 0xc0 or 0xc0c0, just as you want. (But then your /32768.0 would give you numbers in the range 0.0 to 2.0, when you presumably want -1.0 to 1.0.)
Suggesting a "fix" is difficult without knowing what exactly you're trying to do. Obviously you want to convert some kind of 16-bit little-endian integer format into some kind of floating-point format, but a lot rests on the exact details of those formats, and you haven't provided any such details. Standard 16-bit PCM .wav data is signed little-endian, so reading the file into an unsigned char * buffer fixes the bit-stitching, and then casting the stitched 16-bit value to a signed type (e.g. int16_t) before the /32768.0 division gives the usual -1.0 to 1.0 range. As the code stands, unsigned stitching followed by /32768.0 yields 0.0 to 2.0, which matches no audio format I know of.
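The intended conversion is easy to sanity-check from Python: interpret byte pairs as little-endian signed 16-bit samples, then scale by 32768 (a sketch of the reference behaviour the C++ code should match):

```python
import struct

# two samples: 0x4000 -> 16384 and 0xC000 -> -16384 as signed 16-bit
raw = bytes([0x00, 0x40, 0x00, 0xC0])
samples = [s / 32768.0 for (s,) in struct.iter_unpack('<h', raw)]
print(samples)  # [0.5, -0.5]
```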

Passing an array using Ctypes

So my python program is
from ctypes import *
import ctypes
number = [0,1,2]
testlib = cdll.LoadLibrary("./a.out")
testlib.init.argtypes = [ctypes.c_int]
testlib.init.restype = ctypes.c_double
#create an array of size 3
testlib.init(3)
#Loop to fill the array
#use AccessArray to preform an action on the array
And the C part is
#include <stdio.h>
double init(int size){
double points[size];
return points[0];
}
double fillarray(double value, double location){
// i need to access
}
double AccessArray(double value, double location){
// i need to acess the array that is filled in the previous function
}
So what I need to do is pass an array from the Python part to a C function, then somehow keep that array available in C so the other functions can access and process it.
I'm stuck because I can't figure out a way to share the array between the C functions.
Can someone show me how to do this?
You should try something like this (in your C code):
#include <stdio.h>
double points[1000];//change 1000 for the maximum size for you
int sz = 0;
double init(int size){
//verify size <= maximum size for the array
for(int i=0;i<size;i++) {
points[i] = 1;//change 1 for the init value for you
}
sz = size;
return points[0];
}
double fillarray(double value, double location){
//first verify 0 < location < sz
points[(int)location] = value;
return points[(int)location];
}
double AccessArray(double value, double location){
//first verify 0 < location < sz
return points[(int)location];
}
This is a very simple solution, but if you need to allocate an array of arbitrary size you should study the use of malloc.
Maybe something like this?
$ cat Makefile
go: a.out
./c-double
a.out: c.c
gcc -fpic -shared c.c -o a.out
zareason-dstromberg:~/src/outside-questions/c-double x86_64-pc-linux-gnu 27062 - above cmd done 2013 Fri Dec 27 11:03 AM
$ cat c.c
#include <stdio.h>
#include <stdlib.h> /* malloc lives here; malloc.h is non-standard */
double *init(int size) {
double *points;
points = malloc(size * sizeof(double));
return points;
}
void fill_array(double *points, int size) {
int i;
for (i=0; i < size; i++) {
points[i] = (double) i;
}
}
void access_array(double *points, int size) {
int i;
for (i=0; i < size; i++) {
printf("%d: %f\n", i, points[i]);
}
}
zareason-dstromberg:~/src/outside-questions/c-double x86_64-pc-linux-gnu 27062 - above cmd done 2013 Fri Dec 27 11:03 AM
$ cat c-double
#!/usr/local/cpython-3.3/bin/python
import ctypes
testlib = ctypes.cdll.LoadLibrary("./a.out")
testlib.init.argtypes = [ctypes.c_int]
testlib.init.restype = ctypes.c_void_p
# declare pointer arguments too, or the pointer may be truncated on 64-bit
testlib.fill_array.argtypes = [ctypes.c_void_p, ctypes.c_int]
testlib.access_array.argtypes = [ctypes.c_void_p, ctypes.c_int]
#create an array of size 3
size = 3
double_array = testlib.init(size)
#Loop to fill the array
testlib.fill_array(double_array, size)
#use AccessArray to preform an action on the array
testlib.access_array(double_array, size)
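A variant worth knowing: let ctypes allocate the buffer on the Python side and hand it to C, so the C code never calls malloc and Python owns the memory (a sketch; the testlib function names follow the answer above and are assumptions about your library):

```python
import ctypes

size = 3
DoubleArray = ctypes.c_double * size   # array type holding 3 C doubles
arr = DoubleArray()                    # zero-initialized, owned by Python

# fill it directly from Python...
for i in range(size):
    arr[i] = float(i)

print(list(arr))  # [0.0, 1.0, 2.0]

# ...and it can be passed straight to C functions expecting double*:
# testlib.access_array(arr, size)
```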
