Is there any way to convert a C++ structure to a JSON string using Python?
I have multiple C++ files that contain structures, for example the following:
#include <iostream>
using namespace std;
struct Person
{
    char name[50];
    int age;
    float salary;
};
I want to convert it to a JSON string so I can use the JSON string in my Python project.
Thanks in Advance.
JSON is a standardized format and there are libraries for nearly every common programming language that will help you with that.
I am not sure exactly what you are asking; do you really want to convert a C++ file (containing a C/C++ structure) using Python? There are C++ libraries which can do that for you, too.
Read this article about C++ and JSON.
If you want to convert a C++ struct to a JSON string, there are a lot of libraries that do that. In my example, I am using https://github.com/nlohmann/json
#include <iostream>
#include "json.hpp"
using namespace std;
using json = nlohmann::json;
struct Person
{
    string name;
    int age;
    float salary;
};
int main()
{
    Person p;
    p.name = "Shivam";
    p.age = 7;
    p.salary = 45.0;

    // creating json
    json j;
    j["name"] = p.name;
    j["age"] = p.age;
    j["salary"] = p.salary;

    string s = j.dump();
    cout << s << endl;

    // pretty print
    cout << j.dump(4) << endl;
    return 0;
}
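On the Python side, the string this program prints is plain JSON, so the standard json module can read it directly. A minimal sketch (the compact dump() output is pasted in as a literal here; in practice you would read it from a file or a pipe):

import json

# roughly what the C++ program above prints on its first line
s = '{"age":7,"name":"Shivam","salary":45.0}'

person = json.loads(s)
print(person["name"], person["age"], person["salary"])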
Related
Hi, I have a simple example of addressbook.proto that I am serializing using the protobuf SerializeToString() function in Python. Here's the code:
import address_pb2
person = address_pb2.Person()
person.id = 1234
person.name = "John Doe"
person.email = "jdoe@example.com"
phone = person.phones.add()
phone.number = "555-4321"
phone.type = address_pb2.Person.HOME
print(person.SerializeToString())
Here address_pb2 is the module I generated with the protobuf compiler. Note that the example is copied from the protobuf tutorials. This gives me the following string:
b'\n\x08John Doe\x10\xd2\t\x1a\x10jdoe@example.com"\x0c\n\x08555-4321\x10\x01'
Now I want to import this string into C++ protobuf. For this I wrote the following code:
#include <iostream>
#include <fstream>
#include <string>
#include "address.pb.h"
using namespace std;
int main(int argc, char* argv[]) {
    GOOGLE_PROTOBUF_VERIFY_VERSION;

    tutorial::AddressBook address_book;
    string data = "\n\x08""John Doe\x10""\xd2""\t\x1a""\x10""jdoe@example.com\"\x0c""\n\x08""555-4321\x10""\x01""";

    if (address_book.ParseFromString(data)) {
        cout << "working" << endl;
    }
    else {
        cout << "not working" << endl;
    }

    // Optional: Delete all global objects allocated by libprotobuf.
    google::protobuf::ShutdownProtobufLibrary();
    return 0;
}
Here I am simply trying to parse the string using the ParseFromString() function, but this doesn't work, and I am not sure how to make it work as I've been stuck on this for a long time now.
I tried changing the binary a bit to suit the C++ version, but I still have no idea whether I am on the right path or not.
How can I achieve this? Does anybody have a clue?
In Python, you are serializing a Person object. In C++, you are trying to parse an AddressBook object. You need to use the same type on both ends.
(Note that protobuf does NOT guarantee that it will detect these errors. Sometimes when you parse a message as the wrong type, the parse will appear to succeed, but the content will be garbage.)
There's another issue with your code that happens not to be a problem in this specific case, but wouldn't work in general:
string data = "\n\x08""John Doe\x10""\xd2""\t\x1a""\x10""jdoe@example.com\"\x0c""\n\x08""555-4321\x10""\x01""";
This line won't work if the string contains any NUL bytes, i.e. '\x00', because such a byte would be interpreted as the end of the string. To avoid this problem you need to specify the length of the data, like:
string data("\n\x08""John Doe\x10""\xd2""\t\x1a""\x10""jdoe@example.com\"\x0c""\n\x08""555-4321\x10""\x01""", 45);
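As a side note, if the goal is just to get the serialized bytes from Python into C++, writing them to a file from Python and reading that file on the C++ side avoids the string-literal escaping issue entirely. A rough sketch of the Python side, reusing the question's address_pb2 module (the file name is arbitrary):

import address_pb2

person = address_pb2.Person()
person.id = 1234
person.name = "John Doe"
person.email = "jdoe@example.com"
phone = person.phones.add()
phone.number = "555-4321"
phone.type = address_pb2.Person.HOME

# Write the raw serialized bytes; the C++ side can then read this file and
# call ParseFromString (or ParseFromIstream) on a tutorial::Person -- the
# same message type that was serialized here.
with open("person.bin", "wb") as f:
    f.write(person.SerializeToString())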
Say I have the following code in C++:
union {
    int32_t i;
    uint32_t ui;
};
i = SomeFunc();
std::string test(std::to_string(ui));
std::ofstream outFile(test);
And say I had the value of i somehow in Python, how would I be able to get the name of the file?
For those of you who are unfamiliar with C++: what I am doing here is writing some value in signed 32-bit integer format to i and then interpreting the same bitwise representation as an unsigned 32-bit integer through ui. I am taking the same 32 bits and interpreting them in two different ways.
How can I do this in Python? There does not seem to be any explicit type specification in Python, so how can I reinterpret some set of bits in a different way?
EDIT: I am using Python 2.7.12
I would use the Python struct module for interpreting bits in different ways.
Something like the following prints -12 as an unsigned integer:
import struct
p = struct.pack("@i", -12)
print("{}".format(struct.unpack("@I", p)[0]))
I am extending Python with some C++ code.
One of the functions I'm using has the following signature:
int PyArg_ParseTupleAndKeywords(PyObject *arg, PyObject *kwdict,
                                char *format, char **kwlist, ...);
(link: http://docs.python.org/release/1.5.2p2/ext/parseTupleAndKeywords.html)
The parameter of interest is kwlist. In the link above, examples on how to use this function are given. In the examples, kwlist looks like:
static char *kwlist[] = {"voltage", "state", "action", "type", NULL};
When I compile this using g++, I get the warning:
warning: deprecated conversion from string constant to ‘char*’
So, I can change the static char* to a static const char*. Unfortunately, I can't change the Python code. So with this change, I get a different compilation error (can't convert char** to const char**). Based on what I've read here, I can turn on compiler flags to ignore the warning or I can cast each of the constant strings in the definition of kwlist to char *. Currently, I'm doing the latter. What are other solutions?
Sorry if this question has been asked before. I'm new.
Does PyArg_ParseTupleAndKeywords() expect to modify the data you are passing in? Normally, in idiomatic C++, a const <something> * points to an object that the callee will only read from, whereas <something> * points to an object that the callee can write to.
If PyArg_ParseTupleAndKeywords() expects to be able to write to the char * you are passing in, you've got an entirely different problem over and above what you mention in your question.
Assuming that PyArg_ParseTupleAndKeywords does not want to modify its parameters, the idiomatically correct way of dealing with this problem would be to declare kwlist as const char *kwlist[] and use const_cast to remove its const-ness when calling PyArg_ParseTupleAndKeywords(), which would make the call look like this:
PyArg_ParseTupleAndKeywords(..., ..., ..., const_cast<char **>(kwlist), ...);
There is an accepted answer from seven years ago, but I'd like to add an alternative solution, since this topic seems to be still relevant.
If you don't like the const_cast solution, you can also create a write-able version of the string array.
char s_voltage[] = "voltage";
char s_state[] = "state";
char s_action[] = "action";
char s_type[] = "type";
char *kwlist[] = {s_voltage, s_state, s_action, s_type, NULL};
The char name[] = "..." form copies the string literal to a writable location.
I often have to write code in other languages that interact with C structs. Most typically this involves writing Python code with the struct or ctypes modules.
So I'll have a .h file full of struct definitions, and I have to manually read through them and duplicate those definitions in my Python code. This is time consuming and error-prone, and it's difficult to keep the two definitions in sync when they change frequently.
Is there some tool or library in any language (doesn't have to be C or Python) which can take a .h file and produce a structured list of its structs and their fields? I'd love to be able to write a script to automatically generate my struct definitions in Python, and I don't want to have to process arbitrary C code to do it. Regular expressions would work great about 90% of the time and then cause endless headaches for the remaining 10%.
If you compile your C code with debugging (-g), pahole (git) can give you the exact structure layouts being used.
$ pahole /bin/dd
…
struct option {
    const char *name;      /*  0  8 */
    int         has_arg;   /*  8  4 */

    /* XXX 4 bytes hole, try to pack */

    int        *flag;      /* 16  8 */
    int         val;       /* 24  4 */

    /* size: 32, cachelines: 1, members: 4 */
    /* sum members: 24, holes: 1, sum holes: 4 */
    /* padding: 4 */
    /* last cacheline: 32 bytes */
};
…
This should be quite a lot nicer to parse than straight C.
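For example, a rough sketch of pulling the member names, offsets and sizes out of that output with Python (the binary name is a placeholder, and the regex only covers simple member lines like the ones shown above):

import re
import subprocess

# "mybinary" is a placeholder for whatever you compiled with -g
out = subprocess.run(["pahole", "mybinary"], capture_output=True, text=True).stdout

# member lines look like:  "int has_arg;   /*  8  4 */"
member_re = re.compile(r"^\s*(?P<decl>[^;{}]+?)\s*;\s*/\*\s*(?P<offset>\d+)\s+(?P<size>\d+)\s*\*/")

for line in out.splitlines():
    m = member_re.match(line)
    if m:
        name = m.group("decl").split()[-1].lstrip("*")  # last token of the declaration
        print(name, int(m.group("offset")), int(m.group("size")))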
Regular expressions would work great about 90% of the time and then cause endless headaches for the remaining 10%.
The headaches happen in the cases where the C code contains syntax that you didn't think of when writing your regular expressions. Then you go back and realise that C can't really be parsed by regular expressions, and life becomes not fun.
Try turning it around: define your own simple format, which allows less tricks than C does, and generate both the C header file and the Python interface code from your file:
define socketopts
    int16 port
    int32 ipv4address
    int32 flags
Then you can easily write some Python to convert this to:
typedef struct {
    short port;
    int ipv4address;
    int flags;
} socketopts;
and also to emit a Python class which uses struct to pack/unpack three values (possibly two of them big-endian and the other native-endian, up to you).
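For what it's worth, a minimal sketch of such a generator (the int16/int32 type names and the "define" syntax are just this answer's made-up mini-format, and endianness handling is left out):

C_TYPES = {"int16": "short", "int32": "int"}
STRUCT_CODES = {"int16": "h", "int32": "i"}  # format characters for the struct module

def generate(definition):
    lines = definition.strip().splitlines()
    name = lines[0].split()[1]                  # "define socketopts" -> "socketopts"
    fields = [l.split() for l in lines[1:]]     # [["int16", "port"], ...]

    c_members = "\n".join("    {} {};".format(C_TYPES[t], n) for t, n in fields)
    c_header = "typedef struct {{\n{}\n}} {};\n".format(c_members, name)

    fmt = "".join(STRUCT_CODES[t] for t, _ in fields)   # e.g. "hii"
    return c_header, fmt

header, fmt = generate("""
define socketopts
    int16 port
    int32 ipv4address
    int32 flags
""")
print(header)
print("struct format:", fmt)   # use with struct.pack(fmt, port, addr, flags)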
Have a look at Swig or SIP, which will generate the interface code for you, or use ctypes.
Have you looked at Swig?
I have quite successfully used GCCXML on fairly large projects. You get an XML representation of the C code (including structures) which you can post-process with some simple Python.
ctypes-codegen or ctypeslib (same thing, I think) will generate ctypes Structure definitions (also other things, I believe, but I only tried structs) by parsing header files using GCCXML. It's no longer supported, but will likely work in some cases.
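For reference, the kind of ctypes definition such generators emit (and that you would otherwise write by hand) looks roughly like this, for a hypothetical struct point { int x; int y; double weight; }:

import ctypes

# hand-written equivalent of:  struct point { int x; int y; double weight; };
class Point(ctypes.Structure):
    _fields_ = [
        ("x", ctypes.c_int),
        ("y", ctypes.c_int),
        ("weight", ctypes.c_double),
    ]

p = Point(1, 2, 0.5)
print(ctypes.sizeof(Point), p.x, p.y, p.weight)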
A friend of mine wrote a C parser for this task, which he uses together with cog.
I need to convert between Python objects and C strings of various encodings. Going from a C string to a Unicode object was fairly simple using PyUnicode_Decode; however, I'm not sure how to go the other way.
// char* can be a wchar_t or any other element size; just make sure it is correctly terminated for its encoding
Unicode(const char *str, size_t bytes, const char *encoding="utf-16", const char *errors="strict")
    : Object(PyUnicode_Decode(str, bytes, encoding, errors))
{
    // check for any python exceptions
    ExceptionCheck();
}
I want to create another function that takes the Python Unicode string and puts it in a buffer using a given encoding, e.g.:
// fills buffer with a null terminated string in encoding
void AsCString(char *buffer, size_t bufferBytes,
               const char *encoding="utf-16", const char *errors="strict")
{
    ...
}
I suspect it has something to do with PyUnicode_AsEncodedString; however, that returns a PyObject, so I'm not sure how to put that into my buffer...
Note: both methods above are members of a C++ Unicode class that wraps the Python API.
I'm using Python 3.0
I suspect it has something to do with PyUnicode_AsEncodedString; however, that returns a PyObject, so I'm not sure how to put that into my buffer...
The PyObject returned is a PyStringObject, so you just need to use PyString_Size and PyString_AsString to get a pointer to the string's buffer and memcpy it to your own buffer.
If you're looking for a way to go directly from a PyUnicode object into your own char buffer, I don't think that you can do that.
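As a pure-Python illustration of what that encoded object holds (PyUnicode_AsEncodedString goes through the same codec machinery as str.encode), assuming the "utf-16" encoding from the question:

s = "hello"
data = s.encode("utf-16")   # the same bytes a C-level PyUnicode_AsEncodedString call would produce
print(len(data))            # 12: a 2-byte BOM plus 2 bytes per character

# a NUL terminator for UTF-16 is two zero bytes, so a C buffer would need
# len(data) + 2 bytes in this case
buffer = data + b"\x00\x00"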