reading struct in python from created struct in c - python

I am very new to Python and very rusty with C, so I apologize in advance for how dumb and/or lost I sound.
I have a function in C that creates a .dat file containing data, and I am opening the file in Python to read it. One of the things I need to read is a struct that was created in the C function and written in binary. In my Python code I am at the appropriate line of the file to read in the struct. I have tried unpacking the struct both item by item and as a whole, without success. Most of the items in the struct were declared 'real' in the C code; I am working on this code with someone else, the main source code is his, and he has declared the variables as 'real'. I need to put this in a loop because I want to read all of the files in the directory that end in '.dat'. To start the loop I have:
for files in os.listdir(path):
    if files.endswith(".dat"):
        part = open(path + files, "rb")
        for line in part:
I then read all of the lines before the one containing the struct. When I get to that line, I have:
part_struct = part.readline()
r = struct.unpack('<d8', part_struct[0])
I'm trying to just read the first thing stored in the struct. I saw an example of this somewhere on here. And when I try this I'm getting an error that reads:
struct.error: repeat count given without format specifier
I will take any and all tips someone can give me. I have been stuck on this for a few days and have tried many different things. To be honest, I think I don't understand the struct module but I've read as much as I could on it.
Thanks!

You could use ctypes.Structure or struct.Struct to specify the format of the file. To read structures from the file produced by the C code in @perreal's answer:
"""
struct { double v; int t; char c;};
"""
from ctypes import *
class YourStruct(Structure):
_fields_ = [('v', c_double),
('t', c_int),
('c', c_char)]
with open('c_structs.bin', 'rb') as file:
result = []
x = YourStruct()
while file.readinto(x) == sizeof(x):
result.append((x.v, x.t, x.c))
print(result)
# -> [(12.100000381469727, 17, 's'), (12.100000381469727, 17, 's'), ...]
See io.BufferedIOBase.readinto(). It is supported in Python 3, but for a default file object it is undocumented in Python 2.7.
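If you are stuck on Python 2.7, a cautious variant is to open the file with io.open(), which returns an io.BufferedReader whose readinto() is documented:

import io

with io.open('c_structs.bin', 'rb') as file:
    x = YourStruct()
    result = []
    while file.readinto(x) == sizeof(x):
        result.append((x.v, x.t, x.c))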
struct.Struct requires you to specify padding bytes (x) explicitly:
"""
struct { double v; int t; char c;};
"""
from struct import Struct
x = Struct('dicxxx')
with open('c_structs.bin', 'rb') as file:
result = []
while True:
buf = file.read(x.size)
if len(buf) != x.size:
break
result.append(x.unpack_from(buf))
print(result)
It produces the same output.
To avoid unnecessary copying, Array.from_buffer(mmap_file) can be used to get an array of structs from a file:
import mmap  # Unix, Windows
from contextlib import closing

with open('c_structs.bin', 'rb') as file:
    with closing(mmap.mmap(file.fileno(), 0, access=mmap.ACCESS_COPY)) as mm:
        result = (YourStruct * 3).from_buffer(mm)  # without copying
        print("\n".join(map("{0.v} {0.t} {0.c}".format, result)))
        del result  # release the exported buffer before mm is closed

Some C code:
#include <stdio.h>

typedef struct { double v; int t; char c; } save_type;

int main() {
    save_type s = { 12.1f, 17, 's' };
    FILE *f = fopen("output", "wb");  /* "wb": binary mode matters on Windows */
    fwrite(&s, sizeof(save_type), 1, f);
    fwrite(&s, sizeof(save_type), 1, f);
    fwrite(&s, sizeof(save_type), 1, f);
    fclose(f);
    return 0;
}
Some Python code:
import struct

with open('output', 'rb') as f:
    chunk = f.read(16)
    while chunk != "":
        print len(chunk)
        print struct.unpack('dicccc', chunk)
        chunk = f.read(16)
Output:
(12.100000381469727, 17, 's', '\x00', '\x00', '\x00')
(12.100000381469727, 17, 's', '\x00', '\x00', '\x00')
(12.100000381469727, 17, 's', '\x00', '\x00', '\x00')
but there is also the padding issue: the padded size of save_type is 16, so we read 3 more characters and ignore them.
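If you'd rather not count padding bytes by hand, struct.calcsize can confirm the sizes; a small sketch:

import struct

# Native mode pads between fields but adds no trailing padding on its
# own: 'd' (8) + 'i' (4) + 'c' (1) gives 13, while the C struct is
# padded to a multiple of its strictest alignment (8), i.e. 16 bytes.
print(struct.calcsize('dic'))     # 13
print(struct.calcsize('dicxxx'))  # 16, matches sizeof(save_type)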

A number in the format specifier is a repeat count, but it has to go before the letter, like '<8d'. However, you said you just want to read one element of the struct, so you probably want '<d'. You may have been trying to specify the number of bytes to read as 8, but you don't need to: 'd' already implies 8 bytes.
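A quick sketch of how the repeat count works:

import struct

# '<8d' means eight little-endian doubles (64 bytes), not "a double of
# width 8": the count multiplies the following format character.
buf = struct.pack('<8d', *range(8))
print(struct.unpack('<8d', buf))  # (0.0, 1.0, ..., 7.0)
print(struct.calcsize('<d'))      # 8 -- a single 'd' is already 8 bytes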
I also noticed you are using readline. That is wrong for reading binary data: it will read until the next carriage return / line feed, which can occur anywhere in binary data. What you want is read(size), like this:
part_struct = part.read(8)
r = struct.unpack('<d', part_struct)
Actually, you should be careful, as read can return less data than you request. You need to repeat it if it does.
part_struct = b''
while len(part_struct) < 8:
    data = part.read(8 - len(part_struct))
    if not data:
        raise IOError("unexpected end of file")
    part_struct += data
r = struct.unpack('<d', part_struct)

I had the same problem recently, so I made a module for the task, stored here: http://pastebin.com/XJyZMyHX
example code:
MY_STRUCT = """typedef struct __attribute__ ((__packed__)){
    uint8_t u8;
    uint16_t u16;
    uint32_t u32;
    uint64_t u64;
    int8_t i8;
    int16_t i16;
    int32_t i32;
    int64_t i64;
    long long int lli;
    float flt;
    double dbl;
    char string[12];
    uint64_t array[5];
} debugInfo;"""
PACKED_STRUCT='\x01\x00\x01\x00\x00\x01\x00\x00\x00\x00\x00\x01\x00\x00\x00\xff\x00\xff\x00\x00\xff\xff\x00\x00\x00\x00\xff\xff\xff\xff*\x00\x00\x00\x00\x00\x00\x00ff\x06#\x14\xaeG\xe1z\x14\x08#testString\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x05\x00\x00\x00\x00\x00\x00\x00'
if __name__ == '__main__':
    print "String:"
    print depack_bytearray_to_str(PACKED_STRUCT, MY_STRUCT, "<")
    print "Bytes in Stuct:" + str(structSize(MY_STRUCT))
    nt = depack_bytearray_to_namedtuple(PACKED_STRUCT, MY_STRUCT, "<")
    print "Named tuple nt:"
    print nt
    print "nt.string=" + nt.string
The result should be:
String:
u8:1
u16:256
u32:65536
u64:4294967296
i8:-1
i16:-256
i32:-65536
i64:-4294967296
lli:42
flt:2.09999990463
dbl:3.01
string:u'testString\x00\x00'
array:(1, 2, 3, 4, 5)
Bytes in Stuct:102
Named tuple nt:
CStruct(u8=1, u16=256, u32=65536, u64=4294967296L, i8=-1, i16=-256, i32=-65536, i64=-4294967296L, lli=42, flt=2.0999999046325684, dbl=3.01, string="u'testString\\x00\\x00'", array=(1, 2, 3, 4, 5))
nt.string=u'testString\x00\x00'

Numpy can be used to read/write binary data. You just need to define a custom np.dtype instance that describes the memory layout of your C struct.
For example, here is some C++ code defining a struct (it should work just as well for C structs, though I'm not a C expert):
#include <cstdint>
#include <fstream>
#include <string>

struct MyStruct {
    uint16_t FieldA;
    uint16_t pad16[3];
    uint32_t FieldB;
    uint32_t pad32[2];
    char FieldC[4];
    uint64_t FieldD;
    uint64_t FieldE;
};

void write_struct(const std::string& fname, MyStruct h) {
    // This function serializes a MyStruct instance and
    // writes the binary data to disk.
    std::ofstream ofp(fname, std::ios::out | std::ios::binary);
    ofp.write(reinterpret_cast<const char*>(&h), sizeof(h));
}
Based on the advice I found at stackoverflow.com/a/5397638, I've included some padding in the struct (the pad16 and pad32 fields) so that serialization will happen in a more predictable way. I think that this is a C++ thing; it might not be necessary when using plain ol' C structs.
Now, in python, we create a numpy.dtype object describing the memory-layout of MyStruct:
import numpy as np

my_struct_dtype = np.dtype([
    ("FieldA", np.uint16),
    ("pad16",  np.uint16, (3,)),
    ("FieldB", np.uint32),
    ("pad32",  np.uint32, (2,)),
    ("FieldC", np.byte,   (4,)),
    ("FieldD", np.uint64),
    ("FieldE", np.uint64),
])
Then use numpy's fromfile to read the binary file where you've saved your c-struct:
# read data
struct_data = np.fromfile(fpath, dtype=my_struct_dtype, count=1)[0]
FieldA = struct_data["FieldA"]
FieldB = struct_data["FieldB"]
FieldC = struct_data["FieldC"]
FieldD = struct_data["FieldD"]
FieldE = struct_data["FieldE"]

if FieldA != expected_value_A:
    raise ValueError("Bad FieldA, got %d" % FieldA)
if FieldB != expected_value_B:
    raise ValueError("Bad FieldB, got %d" % FieldB)
if FieldC.tobytes() != b"expc":
    raise ValueError("Bad FieldC, got %s" % FieldC.tobytes().decode())
...
The count=1 argument in the above call np.fromfile(..., count=1) is so that the returned array will have only one element; this means "read the first struct instance from the file". Note that I am indexing [0] to get that element out of the array.
If you have appended the data from many c-structs to the same file, you can use fromfile(..., count=n) to read n struct instances into a numpy array of shape (n,). Setting count=-1, which is the default for the np.fromfile and np.frombuffer functions, means "read all of the data", resulting in a 1-dimensional array of shape (number_of_struct_instances,).
You can also use the offset keyword argument to np.fromfile to control where in the file the data read will begin.
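For example, a sketch combining count and offset (the 128-byte header here is hypothetical):

# Skip a hypothetical 128-byte header, then read every remaining
# struct instance into a structured array.
records = np.fromfile(fpath, dtype=my_struct_dtype, offset=128, count=-1)
print(records.shape)       # (number_of_struct_instances,)
print(records["FieldA"])   # one field across all records, vectorized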
To conclude, here are some numpy functions that will be useful once your custom dtype has been defined:
Reading binary data as a numpy array:

np.frombuffer(bytes_data, dtype=...): Interpret the given binary data (e.g. a python bytes instance) as a numpy array of the given dtype. You can define a custom dtype that describes the memory layout of your c struct.

np.fromfile(filename, dtype=...): Read binary data from filename. Should give the same result as np.frombuffer(open(filename, "rb").read(), dtype=...).

Writing a numpy array as binary data:

ndarray.tobytes(): Construct a python bytes instance containing the raw data from the given numpy array. If the array's data has a dtype corresponding to a c-struct, then the bytes coming from ndarray.tobytes can be deserialized by c/c++ and interpreted as an (array of) instances of that c-struct.

ndarray.tofile(filename): Binary data from the array is written to filename. This data could then be deserialized by c/c++. Equivalent to open(filename, "wb").write(a.tobytes()).
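A quick round-trip sketch using the my_struct_dtype defined above:

a = np.zeros(2, dtype=my_struct_dtype)  # two zeroed struct instances
a["FieldA"] = [1, 2]
raw = a.tobytes()                       # raw bytes with the C memory layout
b = np.frombuffer(raw, dtype=my_struct_dtype)
assert (b["FieldA"] == a["FieldA"]).all()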

Related

Shared memory between C and python

I want to share memory between a program in C and another in Python.
The C program uses the following structure to define the data.
struct Memory_LaserFrontal {
    char Data[372];      // original data
    float Med[181];      // Measurements in [m]
    charD;               // 'I': Invalid -- 'V': Valid
    charS;               // 'L': Clean -- 'S': Dirty
    char LaserStatus[2];
};
From Python I have managed to read the variable in memory using sysv_ipc, but it has no structure and is seen as a flat data array. How can I restructure it?
python code:
from time import sleep
import sysv_ipc
# Create shared memory object
memory = sysv_ipc.SharedMemory(1234)
# Read value from shared memory
memory_value = memory.read()
print (memory_value)
print (len(memory_value))
while True:
memory_value = memory.read()
print (float(memory_value[800]))
sleep(0.1)
I have captured and printed the data in Python; when I modify the sensor reading, the read data changes too, confirming that it corresponds to the data in the sensor's shared memory. But without the proper structure I can't use the data.
You need to unpack your binary data structure into Python types. The Python modules struct and array can do this for you.
import struct
import array
NB: Some C compilers, but not the common ones, may pad your member variables to align each of them with the expected width for your CPU (almost always 4 bytes). This means the compiler may add padding bytes. You may have to experiment with the struct format parameter 'x' between the appropriate parts of your struct if this is the case. Python's struct module does not expect aligned or padded types by default; you need to inform it. See my note at the very end for a guess at what the padding might look like. Again, per @Max's comment, this is unlikely.
NB: I think the members charD and charS are really char D; and char S;
Assuming you want the floats as a Python list or equivalent, we have to do some more work with the Python array module. The same goes for the char[] Data.
# Get the initial char array - you can turn it into a string if you need to later.
my_chars = array.array("b")  # "b" for signed byte, "f" for float, etc.
my_chars.frombytes(memory_value[:372])  # happens that 372 C chars is 372 bytes
Data = my_chars.tolist()  # a plain Python list of the byte values
# advance to the member after Data
end_of_Data = struct.calcsize("372c")
# get the length in bytes that 181 floats take up
end_of_Med = struct.calcsize("181f") + end_of_Data
# now we know where the floats are
floats_as_bytes = memory_value[end_of_Data:end_of_Med]
# unpack the remaining parts
(D, S, LaserStatus_1, LaserStatus_2) = struct.unpack("cccc", memory_value[end_of_Med:])
Now use the array module to unpack the floats into a Python list:
my_floats = array.array("f")  # "f" for float
my_floats.frombytes(floats_as_bytes)
Now Data is a list of integer byte values that you may want to convert to your preferred string encoding. Usually .decode('utf-8') is good enough:
Data_S = my_chars.tobytes().decode('utf-8')  # get a usable string in Data_S
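Alternatively, the whole block can be pulled apart in a single struct.unpack call; a minimal sketch, assuming the packed layout with no compiler padding:

import struct

# '<' disables alignment; 372s = Data, 181f = Med, two c = D and S, 2s = LaserStatus
fmt = "<372s181fcc2s"
assert struct.calcsize(fmt) == 372 + 181 * 4 + 1 + 1 + 2  # 1100 bytes
fields = struct.unpack(fmt, memory_value[:struct.calcsize(fmt)])
Data, Med = fields[0], fields[1:182]  # bytes, tuple of 181 floats
D, S, LaserStatus = fields[182], fields[183], fields[184]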
Padding
struct Memory_LaserFrontal {
    char Data[372];      // 372 is a multiple of 4, probably no padding
    float Med[181];      // floats are 4 bytes, probably no padding
    charD;               // single char, expect 3 padding bytes after
    charS;               // single char, expect 3 padding bytes after
    char LaserStatus[2]; // double char, expect 2 padding bytes after
};
So the last Python line above might be as follows, where 'x' indicates a padding byte that can be ignored.
( D, S, LaserStatus_1, LaserStatus_2 ) = struct.unpack( "cxxxcxxxccxx", memory_value[end_of_Med:] )
I always like to leave the full source code of the problem solved so others can use it if they have a similar problem. Thanks a lot, all!
from time import sleep
import sysv_ipc
import struct
import array

# Create shared memory object
while True:
    memory = sysv_ipc.SharedMemory(1234)
    # Read value from shared memory
    memory_value = memory.read()
    #print (memory_value)
    #print (len(memory_value))

    # Get the initial char array - you can turn it into a string if you need to later.
    my_chars = array.array("b")  # "b" for signed byte, "f" for float, etc.
    #my_chars.frombytes(memory_value[:372])  # happens that 372 chars is 372 bytes
    Data = my_chars.tolist()  # a plain list of byte values

    # advance to the member after Data
    end_of_Data = struct.calcsize("372c")
    # get the length in bytes that 181 floats take up
    end_of_Med = struct.calcsize("181f") + end_of_Data
    # now we know where the floats are
    floats_as_bytes = memory_value[end_of_Data:end_of_Med]
    # unpack the remaining parts
    (D, S, LaserStatus_1, LaserStatus_2) = struct.unpack("cccc", memory_value[end_of_Med:])

    print(len(floats_as_bytes) / 4)
    a = []
    for i in range(0, len(floats_as_bytes), 4):
        a.append(struct.unpack('<f', floats_as_bytes[i:i+4]))
    print(a[0])
    sleep(0.1)

How can I read back from a buffer using ctypes?

I have a third-party library, and I need to use one function from it in a Python script. Here it is:
ReadFromBlob(PVOID blob, INT blob_size, PCSTR section, PCSTR key, const void **buffer, UINT * size)
blob - some pointer? to bytes to read from
blob_size - blob size in bytes
section and key - string values like "Image"
buffer - bytes to read to
size - buffer size
The documentation gives an example of how to use it:
UINT size = 0;
PVOID buffer = NULL;
ReadFromBlob(<blob>, <blob_size>, "MainImage", "Image", &buffer, &size);
I'm not familiar with C, so argument types confusing me. I need to be able to read values from the buffer in python.
This is what I have so far:
from ctypes import *

lib = cdll.LoadLibrary(path_to_lib)
with open(filepath, 'rb') as file:
    data = file.read()

blob_size = c_int(len(data))
blob = cast(c_char_p(data), POINTER(c_char * blob_size.value))
b = bytes()
size = c_uint(len(b))
buffer = cast(cast(b, c_void_p), POINTER(c_char * size.value))
lib.ReadFromBlob(blob, blob_size, b"MainImage", b"Image", buffer, pointer(size))
But I still get an empty buffer in the end. Please help me.
It looks like the function searches the blob for data based on the section and key and returns a pointer into the blob data and a size, so I made a test function that just echoes back the blob and size as the output parameters:
#include <windows.h>
#include <stdio.h>

__declspec(dllexport)
void ReadFromBlob(PVOID blob, INT blob_size, PCSTR section, PCSTR key, const void **buffer, UINT *size) {
    printf("section=\"%s\" key=\"%s\"\n", section, key);
    *buffer = blob;  // just echo back input data for example
    *size = (UINT)blob_size;
}
The types look like Windows types, and ctypes has a submodule wintypes with Windows definitions that help get the types right. Make sure to set the .argtypes and .restype correctly with parallel ctypes types for the Windows types. This helps ctypes check that arguments are passed correctly.
import ctypes as ct
from ctypes import wintypes as w
dll = ct.CDLL('./test')
# Note the parallels between C types and ctypes types.
# PVOID is just "pointer to void" and LPVOID mean the same thing, etc.
dll.ReadFromBlob.argtypes = w.LPVOID,w.INT,w.LPCSTR,w.LPCSTR,ct.POINTER(w.LPCVOID),w.LPUINT
dll.ReadFromBlob.restype = None
# storage for returned values, passed by reference as output parameters
buffer = w.LPCVOID()
size = w.UINT()
dll.ReadFromBlob(b'filedata',8,b'Section',b'Key',ct.byref(buffer),ct.byref(size))
print(ct.cast(buffer,ct.c_char_p).value,size.value)
Output showing the received section and key, and printing the returned blob data and size:
section="Section" key="Key"
b'filedata' 8

How to read from pointer address in Python?

I want to read in a Python script a number of bytes starting from a specific address. E.g., I want to read 40000 bytes starting from 0x561124456.
The pointer is given from a C# app. I want to use this method to pass data between the app and script. I've used a TCP socket via localhost, but I want to try this method also.
How can I do this?
If you really want to, enjoy:
import ctypes
g = (ctypes.c_char*40000).from_address(0x561124456)
Looks like segfault fun. There are good socket-connection libraries on both languages (sockets, RPC etc...), so I would think about this again if this is for some large project.
Once I got a pointer to a memory location from C, I found that list((listSize * listDataType).from_address(memoryPointer)) creates an internal copy of the C memory. If the data in memory is huge, Python takes a long time to build the list object from that copy. To avoid the internal copy, I used np.ctypeslib.as_array in Python:
import ctypes
import numpy as np

myCfunslib.getData.restype = ctypes.c_void_p
#myCfunslib.getData.restype = ctypes.POINTER(ctypes.c_ubyte)  # no need to cast
dataSize = 1092 * 1208
# call the c function to get the data memory pointer
cMemoryPointer = myCfunslib.getData()
newpnt = ctypes.cast(cMemoryPointer, ctypes.POINTER(ctypes.c_ubyte))
# and construct an array using this data
DataBytes = np.ctypeslib.as_array(newpnt, (dataSize,))  # no internal copy
print "the mid byte of the data in python side is ", DataBytes[dataSize/2]
I happened to work on a similar issue. My Python script loads a .so library to get an image buffer address from the C++ side. After I got the buffer address, I needed to be able to read each byte in the buffer. I used from_address to create a list object:
imageBytes = list((c_ubyte * dataSize).from_address(pointer))
The following shows in detail how to get a memory address passed from C++ to Python and how to access the memory data on the Python side. In the C++ code frameprovider.cpp:
// DataPack is declared elsewhere with the width, height, buffersize
// and bufferPtr members used below.
DataPack* dataPackPtr = new DataPack();

DataPack* getFrame(){
    uint32_t width = 1920;
    uint32_t height = 1208;
    const size_t buffersize = width * height * 4;  // rgba, each color is one byte
    unsigned char* rgbaImage = (unsigned char *)malloc(buffersize);
    memset(rgbaImage, 0, buffersize);  // set all the buffer data to 0
    dataPackPtr->width = width;
    dataPackPtr->height = height;
    dataPackPtr->buffersize = buffersize;
    dataPackPtr->bufferPtr = rgbaImage;
    return dataPackPtr;
}

extern "C" {
    DataPack* getFrame_wrapper(){
        return getFrame();
    }
}
My Python code:
import ctypes

lib = ctypes.cdll.LoadLibrary('/libpath/frameprovider.so')
print vars(lib)

class dataPack(ctypes.Structure):
    _fields_ = [("width", ctypes.c_int),
                ("height", ctypes.c_int),
                ("buffersize", ctypes.c_int),
                ("bufferAddress", ctypes.c_void_p)]

lib.getFrame_wrapper.restype = ctypes.POINTER(dataPack)
data = lib.getFrame_wrapper()
print "in python the w=", data.contents.width, "h=", data.contents.height
print "the buffersize=", data.contents.buffersize
imageBytes = list(
    (data.contents.buffersize * ctypes.c_ubyte).
    from_address(data.contents.bufferAddress))
print "the len of imageBytes is", len(imageBytes)
print imageBytes[data.contents.buffersize - 1]  # print the last byte in the buffer
print "in python, the hex value of element 12 is", hex(imageBytes[12])

Can someone explain Python struct unpacking?

I have a binary file made from C structs that I want to parse in Python. I know the exact format and layout of the binary, but I am confused about how to use Python's struct unpacking to read this data.
Would I have to traverse the whole binary unpacking a certain number of bytes at a time based on what the members of the struct are?
C File Format:
typedef struct {
    int data1;
    int data2;
    int data4;
} datanums;

typedef struct {
    datanums numbers;
    char *name;
} personal_data;
Let's say the binary file has personal_data structs repeated one after another.
Assuming the layout is a static binary structure that can be described by a simple struct pattern, and the file is just that structure repeated over and over again, then yes, "traverse the whole binary unpacking a certain number of bytes at a time" is exactly what you'd do.
For example:
record = struct.Struct('>HB10cL')

with open('myfile.bin', 'rb') as f:
    while True:
        buf = f.read(record.size)
        if not buf:
            break
        yield record.unpack(buf)
If you're worried about the efficiency of only reading 17 bytes at a time and you want to wrap that up by buffering 8K at a time or something… well, first make sure it's an actual problem worth optimizing; then, if it is, loop over unpack_from instead of unpack. Something like this (untested, top-of-my-head code):
buf, offset = b'', 0
with open('myfile.bin', 'rb') as f:
    while True:
        if len(buf) - offset < record.size:
            buf, offset = buf[offset:] + f.read(8192), 0
            if not buf:
                break
        yield record.unpack_from(buf, offset)
        offset += record.size
Or, even simpler, as long as the file isn't too big for your vmsize, just mmap the whole thing and unpack_from on the mmap itself:
with open('myfile.bin', 'rb') as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
        for offset in range(0, m.size(), record.size):
            yield record.unpack_from(m, offset)
You can unpack a few at a time. Let's start with this example:
In [44]: a = struct.pack("iiii", 1, 2, 3, 4)
In [45]: a
Out[45]: '\x01\x00\x00\x00\x02\x00\x00\x00\x03\x00\x00\x00\x04\x00\x00\x00'
If you're using a string, you can just use a subset of it, or use unpack_from:
In [49]: struct.unpack("ii",a[0:8])
Out[49]: (1, 2)
In [55]: struct.unpack_from("ii",a,0)
Out[55]: (1, 2)
In [56]: struct.unpack_from("ii",a,4)
Out[56]: (2, 3)
If you're using a buffer, you'll need to use unpack_from.
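For example, a small sketch with a bytearray (any object supporting the buffer protocol behaves the same):

In [57]: ba = bytearray(a)  # a mutable buffer built from the packed bytes

In [58]: struct.unpack_from("ii", ba, 8)
Out[58]: (3, 4)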

export matlab variable to text for python usage

So let's start off by saying I'm a total beginner in Matlab. I'm working with Python and now I've received some data in a Matlab file that I need to export to a format I can use with Python.
I've googled around and found I can export a matlab variable to a text file using:
dlmwrite('my_text', MyVariable, 'delimiter', ',');
Now the variable I need to export is a 16000 x 4000 matrix of doubles of the form 0.006747668446927. Here is where the problem starts: I need to export the full value of each double, but that function exported the numbers in the form 0.0067477. This won't do, since I need a whole lot more precision for what I'm doing. So how can I export the full values of each of these variables? Or if you have a more elegant way of using that huge Matlab matrix in Python, please feel free to share.
Regards,
Bogdan
To exchange big chunks of numerical data between Python and Matlab I recommend HDF5 (http://en.wikipedia.org/wiki/Hierarchical_Data_Format). The Python binding is called h5py (http://code.google.com/p/h5py).
Here are two examples for both directions. First from Matlab to Python:
% matlab
points = [1 2 3 ; 4 5 6 ; 7 8 9 ; 10 11 12];
hdf5write('test.h5', '/Points', points);

# python
import h5py
with h5py.File('test.h5', 'r') as f:
    points = f['/Points'].value
And now from Python to Matlab:
# python
import h5py
import numpy
points = numpy.array([[1., 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]])
with h5py.File('test.h5', 'w') as f:
    f['/Points'] = points

% matlab
points = hdf5read('test.h5', '/Points');
NOTE: A column in Matlab will come out as a row in Python and vice versa. This isn't a bug but the difference between the way C and Fortran interpret a contiguous piece of memory.
Scipy has tools for reading MATLAB .mat files natively: see e.g. http://www.janeriksolem.net/2009/05/reading-and-writing-mat-files-with.html
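A minimal sketch, assuming the variable was saved under the name 'MyVariable' (note that scipy's loadmat does not handle v7.3, HDF5-based .mat files):

from scipy.io import loadmat

mat = loadmat('my_data.mat')    # dict mapping variable names to arrays
MyVariable = mat['MyVariable']  # numpy array with full double precision
print(MyVariable.shape)         # e.g. (16000, 4000)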
While I like the HDF5-based answer, I still think text files and CSVs are nice for smaller things (you can open them in text editors, spreadsheets, whatever). In that case I would use MATLAB's fopen/fprintf/fclose rather than dlmwrite; I like to make things explicit. Then again, dlmwrite might be better for multi-dimensional arrays.
You can simply write your variable to file as binary data, then read it in any language you want, be it MATLAB, Python, C, etc.. Example:
MATLAB (write)
X = rand([100 1],'single');
fid = fopen('file.bin', 'wb');
count = fwrite(fid, X, 'single');
fclose(fid);
MATLAB (read)
fid = fopen('file.bin', 'rb');
data = fread(fid, Inf, 'single=>single');
fclose(fid);
Python (read)
import struct

data = []
f = open("file.bin", "rb")
try:
    # read 4 bytes at a time (float)
    bytes = f.read(4)  # returns a sequence of bytes as a string
    while bytes != "":
        # string byte-sequence to float
        num = struct.unpack('f', bytes)[0]
        # append to list
        data.append(num)
        # read next 4 bytes
        bytes = f.read(4)
finally:
    f.close()

# print list
print data
C (read)
#include <stdio.h>
#include <stdlib.h>

int main()
{
    FILE *fp = fopen("file.bin", "rb");

    // Determine size of file
    fseek(fp, 0, SEEK_END);
    long int lsize = ftell(fp);
    rewind(fp);

    // Allocate memory, and read file
    float *numbers = (float*) malloc(lsize);
    size_t count = fread(numbers, 1, lsize, fp);
    fclose(fp);

    // print data
    int i;
    int numFloats = lsize / sizeof(float);
    for (i = 0; i < numFloats; i += 1) {
        printf("%g\n", numbers[i]);
    }
    return 0;
}
