C++: read a map from a binary file created in Python

I created a Python script which creates the following map (illustration):
map<uint32_t, string> tempMap = {{2,"xx"}, {200, "yy"}};
and saved it as a binary file, map.out.
When I try to read the binary file back in C++, the map doesn't get populated. Why?
map<uint32_t, string> tempMap;
ifstream readFile;
std::streamsize length;

readFile.open("somePath\\map.out", ios::binary | ios::in);
if (readFile)
{
    readFile.ignore( std::numeric_limits<std::streamsize>::max() );
    length = readFile.gcount();
    readFile.clear(); // Since ignore will have set eof.
    readFile.seekg( 0, std::ios_base::beg );
    readFile.read((char *)&tempMap, length);
    for(auto &it: tempMap)
    {
        /* cout<<("%u, %s",it.first, it.second.c_str()); ->Prints map*/
    }
}
readFile.close();
readFile.clear();

It's not possible to read in raw bytes and have it construct a map (or most containers, for that matter)[1]; and so you will have to write some code to perform proper serialization instead.
If the data being stored/loaded is simple, as per your example, then you can easily devise a scheme for how this might be serialized, and then write the code to load it. For example, a simple plaintext mapping can be established by writing the file with each member after a newline:
<number>
<string>
...
So for your example of:
std::map<std::uint32_t, std::string> tempMap = {{2,"xx"}, {200, "yy"}};
this could be encoded as:
2
xx
200
yy
In which case the code to deserialize this would simply read each value 1-by-1 and reconstruct the map:
// Note: untested
#include <cstdint>
#include <filesystem>
#include <fstream>
#include <map>
#include <string>

auto loadMap(const std::filesystem::path& path) -> std::map<std::uint32_t, std::string>
{
    auto result = std::map<std::uint32_t, std::string>{};
    auto file = std::ifstream{path};

    while (true) {
        auto key = std::uint32_t{};
        auto value = std::string{};

        if (!(file >> key)) { break; }
        file >> std::ws; // consume the newline left behind by operator>>
        if (!std::getline(file, value)) { break; }

        result[key] = std::move(value);
    }
    return result;
}
Note: For this to work, your Python program needs to write the data in the format that your C++ program will read.
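For reference, a minimal sketch of what that Python writer could look like for this plaintext scheme (assuming the values contain no newlines; temp_map stands in for the dict in your script):

temp_map = {2: "xx", 200: "yy"}

with open("map.out", "w") as f:
    for key, value in temp_map.items():
        f.write(f"{key}\n{value}\n")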
If the data you are trying to read/write is sufficiently complicated, you may look into different serialization interchange formats. Since you're working between python and C++, you'll need to look into libraries that support both. For a list of recommendations, see the answers to Cross-platform and language (de)serialization
[1]
The reason you can't just read (or write) the whole container as bytes and have it work is because data in containers isn't stored inline. Writing the raw bytes out won't produce something like 2 xx\n200 yy\n automatically for you. Instead, you'll be writing the raw addresses of pointers to indirect data structures such as the map's internal node objects.
For example, a hypothetical map implementation might contain a node like:
template <typename Key, typename Value>
struct map_node
{
    Key key;
    Value value;
    map_node* left;
    map_node* right;
};
(The real map implementation is much more complicated than this, but this is a simplified representation)
If map<Key,Value> contains a map_node<Key,Value> member, then writing this out in binary will write the binary representation of key, value, left, and right -- the latter two of which are pointers. The same is true of any container that uses indirection of any kind; the addresses will fundamentally differ between the time they are written and the time they are read, since they depend on the state of the program at any given moment.
You can write a simple map_node to test this and just print out the bytes to see what it produces; the pointers will be serialized as well, and their values will differ from run to run. Behaviorally, this is exactly the same as what you are doing when you read a map from a binary file.

You can use Protocol Buffers to serialize your map in Python and deserialize it in C++.
Protocol Buffers supports both Python and C++.
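For illustration, a minimal sketch of the Python side (assuming a proto3 schema like the one in the comment below, compiled with protoc --python_out=. map.proto to produce map_pb2.py; the message and field names here are made up for the example):

# map.proto (assumed schema):
#   syntax = "proto3";
#   message StringMap {
#     map<uint32, string> entries = 1;
#   }
import map_pb2

msg = map_pb2.StringMap()
msg.entries[2] = "xx"
msg.entries[200] = "yy"

with open("map.out", "wb") as f:
    f.write(msg.SerializeToString())

On the C++ side, the generated StringMap class can then read the same bytes back with ParseFromString or ParseFromIstream.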

Related

Fastest way to store and retrieve a large stream of small unstructured messages

I am developing an IOT application that requires me to handle many small unstructured messages (meaning that their fields can change over time - some can appear and others can disappear). These messages typically have between 2 and 15 fields, whose values belong to basic data types (ints/longs, strings, booleans). These messages fit very well within the JSON data format (or msgpack).
It is critical that the messages get processed in their order of arrival (understand: they need to be processed by a single thread - there is no way to parallelize this part). I have my own logic for handling these messages in realtime (the throughput is relatively small, a few hundred thousand messages per second at most), but there is an increasing need for the engine to be able to simulate/replay previous periods by replaying a history of messages. Though it wasn't initially written for that purpose, my event processing engine (written in Go) could very well handle dozens (maybe in the low hundreds) of millions of messages per second if I was able to feed it with historical data at a sufficient speed.
This is exactly the problem. I have been storing many (hundreds of billions) of these messages over a long period of time (several years), for now in delimited msgpack format (https://github.com/msgpack/msgpack-python#streaming-unpacking). In this setting and others (see below), I was able to benchmark peak parsing speeds of ~2M messages/second (on a 2019 Macbook Pro, parsing only), which is far from saturating disk IO.
Even without talking about IO, doing the following:
import json

message = {
    'meta1': "measurement",
    'location': "NYC",
    'time': "20200101",
    'value1': 1.0,
    'value2': 2.0,
    'value3': 3.0,
    'value4': 4.0
}
json_message = json.dumps(message)

%%timeit
json.loads(json_message)
gives me a parsing time of 3 microseconds/message, that is slightly above 300k messages/second. Comparing with ujson, rapidjson and orjson instead of the standard library's json module, I was able to get peak speeds of 1 microsecond/message (with ujson), that is about 1M messages/second.
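For reference, the orjson variant of the same micro-benchmark looks like this (a sketch; note that orjson.dumps returns bytes rather than str):

import orjson

orjson_message = orjson.dumps(message)

%%timeit
orjson.loads(orjson_message)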
Msgpack is slightly better:
import msgpack

message = {
    'meta1': "measurement",
    'location': "NYC",
    'time': "20200101",
    'value1': 1.0,
    'value2': 2.0,
    'value3': 3.0,
    'value4': 4.0
}
msgpack_message = msgpack.packb(message)

%%timeit
msgpack.unpackb(msgpack_message)
Gives me a processing time of ~750ns/message (about 100ns/field), that is about 1.3M messages/second. I initially thought that C++ could be much faster. Here's an example using nlohmann/json, though this is not directly comparable with msgpack:
#include <iostream>
#include "json.hpp"

using json = nlohmann::json;

const std::string message = "{\"value\": \"hello\"}";

int main() {
    auto jsonMessage = json::parse(message);
    for(size_t i=0; i<1000000; ++i) {
        jsonMessage = json::parse(message);
    }
    std::cout << jsonMessage["value"] << std::endl; // To avoid having the compiler optimize the loop away.
}
Compiling with clang 11.0.3 (std=c++17, -O3), this runs in ~1.4s on the same Macbook, that is to say a parsing speed of ~700k messages/second with even smaller messages than the Python example. I know that nlohmann/json can be quite slow, and was able to get parsing speeds of about 2M messages/second using simdjson's DOM API.
This is still far too slow for my use case. I am open to all suggestions to improve message parsing speed with potential applications in Python, C++, Java (or whatever JVM language) or Go.
Notes:
I do not necessarily care about the size of the messages on disk (consider it a plus if the storage method you suggest is memory-efficient).
All I need is a key-value model for basic data types - I do not need nested dictionaries or lists.
Converting the existing data is not an issue at all. I am simply looking for something read-optimized.
I do not necessarily need to parse the entire thing into a struct or a custom object, only to access some of the fields when I need it (I typically need a small fraction of the fields of each message) - it is fine if this comes with a penalty, as long as the penalty does not destroy the whole application's throughput.
I am open to custom/slightly unsafe solutions.
Any format I choose to use needs to be naturally delimited, in the sense that the messages will be written serially to a file (I am currently using one file per day, which is sufficient for my use case). I've had issues in the past with improperly delimited messages (see writeDelimitedTo in the Java Protobuf API - lose a single byte and the entire file is ruined).
Things I have already explored:
JSON (experimented with rapidjson, simdjson, nlohmann/json, etc.)
Flat files with delimited msgpack (see this API: https://github.com/msgpack/msgpack-python#streaming-unpacking): what I am currently using to store the messages (a sketch of this streaming read follows after this list).
Protocol Buffers: slightly faster, but does not really fit with the unstructured nature of the data.
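For context, the delimited-msgpack streaming read mentioned above boils down to something like this (a sketch, assuming one file of back-to-back packed messages):

import msgpack

def iter_messages(path):
    # Stream messages one at a time without loading the whole file into memory.
    with open(path, "rb") as f:
        for message in msgpack.Unpacker(f, raw=False):
            yield message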
Thanks!!
I assume that messages only contain a few named attributes of basic types (defined at runtime) and that these basic types are, for example, strings, integers and floating-point numbers.
For the implementation to be fast, it is better to:
avoid text parsing (slow because sequential and full of conditionals);
avoid checking if messages are ill-formed (not needed here as they should all be well-formed);
avoid allocations as much as possible;
work on message chunks.
Thus, we first need to design a simple and fast binary message protocol:
A binary message contains the number of its attributes (encoded on 1 byte) followed by the list of attributes. Each attribute contains its key (a string prefixed by its size, encoded on 1 byte), followed by the type of the attribute (the index of the type in the std::variant, encoded on 1 byte), followed by the attribute value (a size-prefixed string, a 64-bit integer or a 64-bit floating-point number).
Each encoded message is a stream of bytes that can fit in a large buffer (allocated once and reused for multiple incoming messages).
Here is code to decode a message from a raw binary buffer:
#include <cstdint>
#include <cstring>
#include <string_view>
#include <unordered_map>
#include <variant>
#include <climits>

// Define the possible types here
using AttrType = std::variant<std::string_view, int64_t, double>;

// Decode the `msgData` buffer and write the decoded message into `result`.
// Assume the message is not ill-formed!
// msgData must not be freed or modified while the resulting map is being used.
void decode(const char* msgData, std::unordered_map<std::string_view, AttrType>& result)
{
    static_assert(CHAR_BIT == 8);

    const size_t attrCount = msgData[0];
    size_t cur = 1;

    result.clear();

    for(size_t i=0 ; i<attrCount ; ++i)
    {
        const size_t keyLen = msgData[cur];
        std::string_view key(msgData+cur+1, keyLen);
        cur += 1 + keyLen;

        const size_t attrType = msgData[cur];
        cur++;

        // A switch could be better if there are more types
        if(attrType == 0) // std::string_view
        {
            const size_t valueLen = msgData[cur];
            std::string_view value(msgData+cur+1, valueLen);
            cur += 1 + valueLen;
            result[key] = AttrType(value);
        }
        else if(attrType == 1) // Native-endian 64-bit integer
        {
            int64_t value;
            // Required to not break the strict aliasing rule
            std::memcpy(&value, msgData+cur, sizeof(int64_t));
            cur += sizeof(int64_t);
            result[key] = AttrType(value);
        }
        else // IEEE-754 double
        {
            double value;
            // Required to not break the strict aliasing rule
            std::memcpy(&value, msgData+cur, sizeof(double));
            cur += sizeof(double);
            result[key] = AttrType(value);
        }
    }
}
You probably need to write the encoding function too (based on the same idea).
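For the Python side, here is a minimal sketch of what such an encoder could look like (an illustration only, assuming ASCII keys and values shorter than 256 bytes, and the same type ordering as the std::variant above):

import struct

# Type indices must match the std::variant order used by the C++ decoder:
# 0 = string, 1 = 64-bit integer, 2 = 64-bit float
def encode(message):
    out = bytearray()
    out.append(len(message))                  # 1-byte attribute count
    for key, value in message.items():
        key_bytes = key.encode('ascii')
        out.append(len(key_bytes))            # 1-byte key length
        out += key_bytes
        if isinstance(value, str):
            value_bytes = value.encode('ascii')
            out.append(0)                     # type index 0: string
            out.append(len(value_bytes))      # 1-byte value length
            out += value_bytes
        elif isinstance(value, int):
            out.append(1)                     # type index 1: int64
            out += struct.pack('=q', value)   # native-endian 64-bit integer
        else:
            out.append(2)                     # type index 2: double
            out += struct.pack('=d', value)   # IEEE-754 double
    return bytes(out)

# Produces the same bytes as the C++ usage example below:
assert encode({'value': 'hello'}) == b'\x01\x05value\x00\x05hello'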
Here is an example of usage (based on your json-related code):
#include <iostream>

const char* message = "\x01\x05value\x00\x05hello";

void bench()
{
    std::unordered_map<std::string_view, AttrType> decodedMsg;
    decodedMsg.reserve(16);

    decode(message, decodedMsg);
    for(size_t i=0; i<1000*1000; ++i)
    {
        decode(message, decodedMsg);
    }

    std::visit([](const auto& v) { std::cout << "Result: " << v << std::endl; }, decodedMsg["value"]);
}
On my machine (with an Intel i7-9700KF processor) and based on your benchmark, I get 2.7M message/s with the code using the nlohmann json library and 35.4M message/s with the new code.
Note that this code can be much faster. Indeed, most of the time is spent in efficient hashing and allocations. You can mitigate the problem by using a faster hash-map implementation (eg. boost::container::flat_map or ska::bytell_hash_map) and/or by using a custom allocator. An alternative is to build your own carefully tuned hash-map implementation. Another alternative is to use a vector of key-value pairs and use a linear search to perform lookups (this should be fast because your messages should not have a lot of attributes and because you said that you need a small fraction of the attributes per message).
However, the larger the messages, the slower the decoding. Thus, you may need to leverage parallelism to decode message chunks faster.
With all of that, it is possible to reach more than 100M messages/s.

Integrating the OHLC value from Python API to MT5 using MQL5

I have obtained the OHLC values from iqoption and am trying to find a way to use them with MT5.
Here is how I got the values:
import time
from iqoptionapi.stable_api import IQ_Option
I_want_money=IQ_Option("email","password")
goal="EURUSD"
print("get candles")
print(I_want_money.get_candles(goal,60,111,time.time()))
The above code library is here: iqoptionapi
The line I_want_money.get_candles(goal,60,111,time.time()) outputs JSON.
Since I am getting JSON in the output, it works like an API, I guess.
Meanwhile, I try to create a Custom Symbol in MT5 as iqoption. Now I just wanted to add the data of the OHLC from the API to it, so that it will continue fetching data from the Iqoption and display the chart on the chart window for the custom symbol iqoption.
But I am not able to load it in the custom symbol. Kindly, help me.
Edited
This is the code for live streaming data from the iqoption:
from iqoptionapi.stable_api import IQ_Option
import logging
import time
logging.basicConfig(level=logging.DEBUG,format='%(asctime)s %(message)s')
I_want_money=IQ_Option("email","password")
I_want_money.start_candles_stream("EURUSD")
thread=I_want_money.collect_realtime_candles_thread_start("EURUSD",100)
I_want_money.start_candles_stream("USDTRY")
thread2=I_want_money.collect_realtime_candles_thread_start("USDTRY",100)
time.sleep(3)
#Do some thing
ans=I_want_money.thread_collect_realtime.items()
for k, v in ans:
    print(k, v)
I_want_money.collect_realtime_candles_thread_stop(thread)
I_want_money.stop_candles_stream("EURUSD")
I_want_money.collect_realtime_candles_thread_stop(thread2)
I_want_money.stop_candles_stream("USDTRY")
OK, you need to:
1. receive the feed from the broker (I hope you succeeded)
2. write it into a file
(both steps in Python)
3. read and parse it
4. add it to the history centre / Market Watch
(both steps in MT5)
So, you receive data as a string from
I_want_money.get_candles(goal,60,111,time.time())
This string might be a JSON object or a JSON array.
The important question is, of course, where you are going to put the data. An MQL4/5 expert advisor can access only two folders (if not using DLLs):
C:\Users\MY_NAME_IS_DANIEL_KNIAZ\AppData\Roaming\MetaQuotes\Terminal\MY_TERMINAL_ID_IN_HEX_FORMAT\MQL4\Files
and
C:\Users\MY_NAME_IS_DANIEL_KNIAZ\AppData\Roaming\MetaQuotes\Terminal\Common\Files
in the latter case you need to open the file with const int handle=FileOpen(fileName,FILE_READ|FILE_COMMON);
In order to parse JSON, you can use the jason.mqh library (https://www.mql5.com/en/code/13663; there are a few others), but as far as I remember it has a bug: it cannot parse an array of objects correctly. To work around that, I would suggest writing each tick on a separate line.
Lastly, you will receive data from your Python application at random times and write it into the Common or terminal folder. The MT5 robot will read each file and then delete it. To avoid confusion, it is better to guarantee that each file has a unique name; either a random suffix (random.randint(1,1000)) or milliseconds from datetime can help.
So far, you have python code:
import os
from datetime import datetime

receivedString = I_want_money.get_candles(goal, 60, 111, time.time())
filePath = r'C:\Users\MY_NAME_IS_DANIEL_KNIAZ\AppData\Roaming\MetaQuotes\Terminal\MY_TERMINAL_ID_IN_HEX_FORMAT\MQL4\Files\iqoptionfeed'
# Colons are not allowed in Windows file names, so format the timestamp explicitly.
fileName = os.path.join(filePath, "_" + goal + "_" + datetime.now().strftime("%Y%m%d%H%M%S%f") + ".txt")
file = open(fileName, "w")
for string_ in receivedString:
    file.write(string_)
file.close()
If you created a thread, then each time you receive an answer from it you write such a file.
Next, you need that data in MT5.
The easiest way is to loop over the existing files, read each one if you can (or give up if you cannot), delete it after reading, and then proceed with the data received.
The easiest and fastest way would of course be to use 0MQ, but let us do it without DLLs.
In order to read the files, you need to set up a timer that runs as fast as possible and let it go. Since you cannot make a Windows application sleep for less than 15.6 ms, your timer should sleep for that amount of time.
string path;

int OnInit()
{
    EventSetMillisecondTimer(16);
    path="iqoptionfeed\\*";
    return(INIT_SUCCEEDED);
}
void OnDeinit(const int reason) { EventKillTimer(); }
string _fileName;
long _search_handle;

void OnTimer()
{
    _search_handle=FileFindFirst(path,_fileName);
    if(_search_handle!=INVALID_HANDLE)
    {
        do
        {
            ResetLastError();
            FileIsExist(_fileName);
            if(GetLastError()!=ERR_FILE_IS_DIRECTORY)
                processFile(path+_fileName);
        }
        while(FileFindNext(_search_handle,_fileName));
        FileFindClose(_search_handle);
    }
}
This piece of code loops over the folder and processes each file it manages to find.
Now reading the file (two functions) and processing the message inside it:
void processFile(const string fileName)
{
    string message;
    if(ReadFile(fileName,message))
        processMessage(message,fileName);
}

bool ReadFile(const string fileName,string &result,const bool common=false)
{
    const int handle = FileOpen(fileName,common?(FILE_COMMON|FILE_READ):FILE_READ);
    if(handle==INVALID_HANDLE)
    {
        printf("%i - failed to find file %s (probably doesnt exist!). error=%d",__LINE__,fileName,GetLastError());
        return(false);
    }
    Read(handle,result);
    FileClose(handle);
    if(!FileDelete(fileName,common?FILE_COMMON:0))
        printf("%i - failed to delete file %s/%d. error=%d",__LINE__,fileName,common,GetLastError());
    return(true);
}

void Read(const int handle,string &message)
{
    string text="";
    while(!FileIsEnding(handle) && !IsStopped())
    {
        text=StringConcatenate(text,FileReadString(handle),"\n");
    }
    //printf("%i %s - %s.",__LINE__,__FUNCTION__,text);
    message=text;
}
And last but not least: process the obtained file.
As suggested above, it has one JSON-formatted tick per line, separated by \r\n.
Our goal is to add them to the symbol. To parse the JSON, jason.mqh is an available solution, but you can of course parse it manually.
void processMessage(const string message,const string fileName)
{
    string symbolName=getSymbolFromFileName(fileName);
    if(!SymbolSelect(symbolName,true))
    {
        if(!CustomSymbolCreate(symbolName))
            return;
    }
    string lines[];
    int size=StringSplit(message,(ushort)'\n',lines);
    for(int i=0;i<size;i++)
    {
        if(StringLen(lines[i])==0)
            continue;
        CJAVal jLine(jtUNDEF,NULL);
        jLine.Deserialize(lines[i]);
        // Here I assume that you receive a json line like
        // { "time":2147483647,"bid":1.16896,"ask":1.16906,"some_other_data":"someOtherDataThatYouMayAlsoUse" }
        MqlTick ticks[1]; // CustomTicksAdd expects an array of ticks
        ticks[0].time=(datetime)jLine["time"].ToInt();
        ticks[0].bid=(double)jLine["bid"].ToDbl();
        ticks[0].ask=(double)jLine["ask"].ToDbl();
        ResetLastError();
        if(CustomTicksAdd(symbolName,ticks)<0)
            printf("%i %s - failed to upload tick: %s %s %.5f %.5f. error=%d",__LINE__,__FILE__,symbolName,TimeToString(ticks[0].time),ticks[0].bid,ticks[0].ask,GetLastError());
    }
}

string getSymbolFromFileName(const string fileName)
{
    string elements[];
    int size=StringSplit(fileName,(ushort)'_',elements);
    if(size<2)
        return NULL;
    return elements[1];
}
Do not forget to add debugging info and check GetLastError() if for some reason you get errors.
Can this work in a backtester? Of course not. First, OnTimer() is not supported in the MQL tester. Next, you need some history records to make it run. If you do not have any history, nobody can help you unless a broker can give it to you; the best idea would be to start collecting and storing it right now, so that when the project is ready (maybe in another couple of months) you will have the data at hand and be able to test and optimize the strategy with the available dataset. You can feed the collected set into the tester (MQL5 is really the next step in algo trading development compared to MQL4), either manually or with something like tickDataSuite and its Csv2Fxt.ex4 file that makes HST binary files the tester can read and process; anyway, that is another question, and nobody can tell you whether your broker stores their data somewhere and will provide it to you.
After re-reading what you wrote (and edited), I can see you want:
a symbol synchronized with iqoption [ through your proxy / remotely ]
The symbol could be used for backtesting
The symbol could be used for on-screen live/strategy/indicator run
That implies operations outside a strategy/indicator, which MT platforms do not allow in an automated manner; you can achieve it manually by providing a data package, parsing it to CSV and importing it into the custom symbol creator. This is well documented here.
Unfortunately, you chose a platform that by design is built for self-contained strategies and indicators, more for beginners than for professionals taking it seriously.
Refer to the link I provided and see for yourself. The official documentation states that you can create a custom symbol via the MQL reference, and even though it claims in the foreword to allow 3rd-party providers, this is not referenced anywhere else and no integration possibilities are shown.
custom indicators
custom symbol properties

Passing a record over a socket

I have basic socket communication set up between Python and Delphi code (text only). Now I would like to send/receive a record of data on both sides. I have a "C compatible" record and would like to pass records back and forth and have them in a usable format in Python.
I use conn.send("text") in python to send the text but how do I send/receive a buffer with python and access the record items sent in python?
Record
TPacketData = record
    pID      : Integer;
    dataType : Integer;
    size     : Integer;
    value    : Double;
end;
I don't know much about Python, but I have done a lot between Delphi, C++, C# and Java, even with COBOL. Anyway, to send a record from Delphi to C, first you need to pack the record at both ends:
in Delphi
MyRecord = packed record
in C++
#pragma pack(1)
I don't know how in Python, but I guess there must be something similar. Make sure that sizeof(MyRecord) is the same length on both sides. Also, before sending the records, you should take care of byte ordering (you know, little-endian vs big-endian); use socket.htonl() and socket.ntohl() in Python and the equivalents in Delphi, which are in the WinSock unit. Also, a "double" in Delphi may not be the same as in Python; in Delphi it is 8 bytes, so check this as well and change it to Single (4 bytes) or Extended (10 bytes), whichever matches.
If all of that matches, then you can send/receive binary records in one shot; otherwise, I'm afraid, you have to send the individual fields one by one.
I know this answer is a bit late to the game, but may at least prove useful to other people finding this question in their search-results. Because you say the Delphi code sends and receives "C compatible data" it seems that for the sake of the answer about Python's handling it is irrelevant whether it is Delphi (or any other language) on the other end...
The python struct and socket modules have all the functionality for the basic usage you describe. To send the example record you would do something like the below. For simplicity and sanity I have presumed signed integers and doubles, and packed the data in "network order" (bigendian). This can easily be a one-liner but I have split it up for verbosity and reusability's sake:
import struct
t_packet_struc = '>iiid'
t_packet_data = struct.pack(t_packet_struc, pid, data_type, size, value)
mysocket.sendall(t_packet_data)
Of course the mentioned "presumptions" don't need to be made, given tweaks to the format string, data preparation, etc. See the struct inline help for a description of the possible format strings - which can even process things like Pascal-strings... By the way, the socket module allows packing and unpacking a couple of network-specific things which struct doesn't, like IP-address strings (to their bigendian int-blob form), and allows explicit functions for converting data bigendian-to-native and vice-versa. For completeness, here is how to unpack the data packed above, on the Python end:
t_packet_size = struct.calcsize(t_packet_struc)
t_packet_data = mysocket.recv(t_packet_size)
(pid, data_type, size, value) = struct.unpack(t_packet_struc, t_packet_data)
I know this works in Python version 2.x, and suspect it should work without changes in Python version 3.x too. Beware of one big gotcha (because it is easy to not think about, and hard to troubleshoot after the fact): Aside from different endianness, you can also distinguish between packing things using "standard size and alignment" (portably) or using "native size and alignment" (much faster) depending on how you prefix - or don't prefix - your format string. These can often yield wildly different results than you intended, without giving you a clue as to why... (there be dragons).
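One more practical note: socket.recv(n) may return fewer than n bytes on a stream socket, so when receiving fixed-size records it is safer to loop until the whole record has arrived. A small sketch, reusing mysocket and t_packet_struc from above, that could replace the recv() call:

import struct

def recv_exactly(sock, size):
    # Keep reading until we have exactly `size` bytes, or fail if the peer closes early.
    chunks = []
    remaining = size
    while remaining > 0:
        chunk = sock.recv(remaining)
        if not chunk:
            raise ConnectionError("socket closed before the full record arrived")
        chunks.append(chunk)
        remaining -= len(chunk)
    return b''.join(chunks)

t_packet_data = recv_exactly(mysocket, struct.calcsize(t_packet_struc))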

Python generically parse data into object structure

What I want to know is: if I have a defined structure with known fields in a known order, can I parse a binary blob into this structure in a generic way?
For example, I know that my file is a binary file of this structure
typedef struct {
    uint   frCompressedSize;
    uint   frUncompressedSize;
    ushort frFileNameLength;
    ushort frExtraFieldLength;
    char   frFileName[ frFileNameLength ];
    uchar  frExtraField[ frExtraFieldLength ];
    uchar  frData[ frCompressedSize ];
} ZIPFILERECORD;
Is there a better way to do this than reading in individual fields at a time in a hard coded manner? In my real code the structure has almost 100 parameters so the hardcoded method is not my first choice.
Any Ideas?
thanks!
You are looking for the Python struct library.
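For example, a sketch of how struct could drive the parsing of the record above (a hypothetical parse_zip_file_record helper; it assumes little-endian data, a 4-byte uint, a 2-byte ushort, and no padding between fields):

import struct

def parse_zip_file_record(blob, offset=0):
    # Fixed-size header: frCompressedSize, frUncompressedSize, frFileNameLength, frExtraFieldLength
    header = struct.Struct('<IIHH')
    compressed_size, uncompressed_size, name_len, extra_len = header.unpack_from(blob, offset)
    offset += header.size

    # Variable-length fields, sized by the header values just read
    file_name = blob[offset:offset + name_len]
    offset += name_len
    extra_field = blob[offset:offset + extra_len]
    offset += extra_len
    data = blob[offset:offset + compressed_size]
    offset += compressed_size

    record = {
        'frCompressedSize': compressed_size,
        'frUncompressedSize': uncompressed_size,
        'frFileName': file_name.decode('ascii', errors='replace'),
        'frExtraField': extra_field,
        'frData': data,
    }
    return record, offset

With ~100 fields you can drive the same idea from a declarative list of (name, format) pairs instead of hardcoding each field.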

Reading a Delphi binary file in Python

I have a file that was written with the following Delphi declaration ...
Type
  Tfulldata = Record
    dpoints, dloops : integer;
    dtime, bT, sT, hI, LI : real;
    tm : real;
    data : array[1..armax] Of Real;
  End;
...
Var
  fh: File Of Tfulldata;
I want to analyse the data in the files (many MB in size) using Python if possible - is there an easy way to read in the data and cast the data into Python objects similar in form to the Delphi records? Does anyone know of a library perhaps that does this?
This is compiled on Delphi 7 with the following options which may (or may not) be pertinent,
Record Field Alignment: 8
Pentium Safe FDIV: False
Stack Frames: False
Optimization: True
Here is the full solution, thanks to hints from KillianDS and Ritsaert Hornstra:
import struct
fh = open('my_file.dat', 'rb')
s = fh.read(40256)
vals = struct.unpack('iidddddd5025d', s)
dpoints, dloops, dtime, bT, sT, hI, LI, tm = vals[:8]
data = vals[8:]
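Since the file is declared as File Of Tfulldata, it will normally contain many such records back to back. A sketch for iterating over all of them (assuming the same fixed 40256-byte layout as above):

import struct

RECORD_FORMAT = 'iidddddd5025d'
RECORD_SIZE = struct.calcsize(RECORD_FORMAT)   # 40256 bytes with this layout

def read_records(path):
    with open(path, 'rb') as fh:
        while True:
            chunk = fh.read(RECORD_SIZE)
            if len(chunk) < RECORD_SIZE:
                break  # end of file (or a trailing partial record)
            vals = struct.unpack(RECORD_FORMAT, chunk)
            yield vals[:8], vals[8:]   # (dpoints..tm, data array)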
I do not know how Delphi internally stores data, but if it is simple byte-wise data (so not serialized and mangled), use struct. This way you can treat the bytes read from a file in Python as binary data. Also, open the file in binary mode: open(filename, 'rb').
Please note that when you define a record in Delphi (like a struct in C), the fields are laid out in order, in binary, given the current alignment (e.g. Bytes are aligned on 1-byte boundaries, Words on 2-byte, Integers on 4-byte, etc., but it may vary with the compiler settings).
When serialized to a file, you probably mean that this record is written in binary to the file and the next record is written after the first one, starting at position sizeof(structure), and so on. Delphi does not specify how things should be serialized to/from a file, so the information you give leaves us guessing.
If you want to make sure it is always the same without interference from any compiler settings, use a packed record.
Real can have multiple meanings (it is a 48-bit float type in older Delphi versions and a 64-bit IEEE double later on).
If you cannot access the Delphi code or compile it yourself, just try to inspect the data with a hex editor; you should see the boundaries of the records clearly, since they start with Integers and only floats follow.
