I'm using gRPC in Python with Django and django-grpc-framework.
When I run the unit tests, a piece of code in my models.py produces a weird problem: I make three gRPC requests, but only one reaches the server side.
Problem details
To simplify the problem, the following code illustrates the situation:
class MyModel(models.Model):
    def some_method(self):
        for stub in [stub1, stub2, stub3]:
            resp = stub.RetractObject(base_pb2.RetractObjectRequest(
                app_label='...',
                object_type='...',
                object_id='...',
                exchange_id='...',
                remark='...',
                date_record='...',
            ))
Here, some_method is triggered from a django.test.TestCase unit test function, while the gRPC channel is connected to an external running server instance.
And the relevant .proto files are as below:
// base.proto
// ...
message CreditRecord {
    int64 id = 1;
    string username = 2;
    string app_label = 3;
    string flag = 4;
    string object_type = 5;
    int64 object_id = 6;
    int64 amount = 7;
    int64 balance = 8;
    string remark = 9;
    string date_record = 10;
    string exchange_id = 11;
    int64 retract_ref = 12;
}
message RetractObjectRequest {
    string app_label = 1;
    string object_type = 2;
    int64 object_id = 3;
    string exchange_id = 4;
    string remark = 5;
    string date_record = 6;
}
// ...
// stub1 - stub3 have a similar structure to the one below
syntax = "proto3";
package mjtest.growth;
import "mjtest/growth/base.proto";

service CreditController {
    // ...
    rpc RetractObject (RetractObjectRequest) returns (stream CreditRecord) {}
}
And the server-side code is something like this (using django-grpc-framework):
# services.py
class GenericCreditService(mixins.RetrieveModelMixin,
                           mixins.ListModelMixin,
                           GenericService):
    def RetractObject(self, request: base_pb2.RetractObjectRequest, context):
        print('Request reached!', request)
        # ...
        records = model_class.retract_object(...)
        # Here, records is already a list, not an iterator.
        for rec in records:
            yield self.serializer_class(rec).message
So when I run the unit test, the client side calls the RetractObject gRPC method THREE times,
but "Request reached!" is printed only ONCE.
Attempt #1: adding a sleep(0.5)
I guessed there was some issue on the client side, so I added sleep(0.5) after every request.
class MyModel(models.Model):
    def some_method(self):
        for stub in [stub1, stub2, stub3]:
            resp = stub.RetractObject(base_pb2.RetractObjectRequest(...))
            sleep(0.5)
Then all three requests reach the server!
Attempt #2: traversing the response explicitly
I guessed that an unconsumed stream response might cause the problem, so I converted the response to a list explicitly.
class MyModel(models.Model):
    def some_method(self):
        for stub in [stub1, stub2, stub3]:
            resp = stub.RetractObject(base_pb2.RetractObjectRequest(...))
            list(resp)
Then all three requests reach the server!
The problem is really weird, and neither approach throws an error.
Can anyone help?
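For what it's worth, both attempts point at the same lazy-evaluation behaviour: a server-streaming call returns an iterator, and the streamed work may not complete until that iterator is consumed. A rough stdlib-only analogy (plain generators, not grpc, so treat it only as an illustration of the laziness, not of grpc internals):

```python
def fake_streaming_call(reached, name):
    """Mimics a server-streaming RPC: the body runs only when iterated."""
    reached.append(name)          # stands in for 'Request reached!' on the server
    yield f"record from {name}"

reached = []
responses = [fake_streaming_call(reached, n) for n in ("stub1", "stub2", "stub3")]
assert reached == []              # nothing has "reached the server" yet

for resp in responses:
    list(resp)                    # consuming each stream triggers the work

assert reached == ["stub1", "stub2", "stub3"]
```

This is only an analogy for why list(resp) changes the behaviour; the real client sends the request eagerly but the stream can be torn down before the server handler runs if the response object is discarded unconsumed.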
Related
Python gRPC message does not serialize its first field. I updated the protos and cleaned everything many times, but it is not fixed. As you can see in the logs, settings_dict has a stop field, but after passing these fields to AgentSetting it is not seen in the logs or on the server side. I also tried to pass the stop field manually, but it is not seen either. The annoying thing is that gRPC does not throw any exception: it accepts the stop field but does not send it to the server, and the field is not shown when printing AgentSetting, although it is still reachable as agent_setting.stop.
This is my proto file:
syntax = 'proto3';
import "base.proto";

message ConnectionCheckRequest {
    string host_name = 1;
}
message RecordChunkRequest {
    string host_name = 1;
    string name = 2;
    bytes chunk = 3;
    uint32 chunk_index = 4;
    uint32 chunk_number = 5;
}
message AgentSetting {
    bool stop = 1;
    bool connection = 2;
    bool registered = 3;
    string image_mode = 4;
    uint32 sending_fail_limit = 5;
    uint64 limit_of_old_records = 6;
    string offline_records_exceed_policy = 7;
}
message AgentSettingRequest {
    AgentSetting agent_setting = 1;
    string host_name = 2;
}
message AgentSettingResponse {
    AgentSetting agent_setting = 1;
}
message AgentRegistryRequest {
    AgentSetting agent_setting = 1;
    string host_name = 2;
}
service Chief {
    rpc agent_update_setting(AgentSettingRequest) returns (AgentSettingResponse) {};
    rpc agent_registry(AgentRegistryRequest) returns (Response) {};
    rpc send (stream RecordChunkRequest) returns (Response) {};
    rpc connection_check (ConnectionCheckRequest) returns (Response) {};
}
This is my code snippet:
def register():
    try:
        with insecure_channel(settings.server_address) as channel:
            setting_dict = settings.dict()
            logger.info(f'\nSetting Dict: {setting_dict}')
            agent_setting = AgentSetting(**setting_dict)
            logger.info(f'\nAgent Setting Instance: \n{agent_setting}')
            response = ChiefStub(channel).agent_registry(
                AgentRegistryRequest(
                    agent_setting=agent_setting,
                    host_name=settings.host_name
                )
            )
            return response
    except Exception as e:
        logger.exception(f'Register Error: {str(e)}')
        return Response(success=False, message="failure")
Logs:
|2020-05-05T18:33:56.931140+0300| |5480| |INFO| |reqs:register:28|
Setting Dict: {'stop': False, 'connection': True, 'registered': False, 'image_mode': 'RGB', 'sending_fail_limit': 3, 'limit_of_old_records': 5368709120, 'offline_records_exceed_policy': 'OVERWRITE'}
|2020-05-05T18:33:56.932137+0300| |5480| |INFO| |reqs:register:32|
Agent Setting Instance:
connection: true
image_mode: "RGB"
sending_fail_limit: 3
limit_of_old_records: 5368709120
offline_records_exceed_policy: "OVERWRITE"
In proto3, an unset value and a default value are considered equivalent.
So stop: false is considered equivalent to omitting stop entirely.
See the Language Guide (proto3):
Note that for scalar message fields, once a message is parsed there's no way of telling whether a field was explicitly set to the default value (for example whether a boolean was set to false) or just not set at all: you should bear this in mind when defining your message types. For example, don't have a boolean that switches on some behaviour when set to false if you don't want that behaviour to also happen by default. Also note that if a scalar message field is set to its default, the value will not be serialized on the wire.
Note that this is different from proto2, which does track whether an optional field was explicitly set.
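If you do need a false on the wire to be distinguishable from an unset field, one option (assuming protoc 3.15 or newer, which added explicit field presence to proto3) is to mark the field optional; wrapper types such as google.protobuf.BoolValue are the older workaround. A minimal sketch:

```proto
syntax = "proto3";

message AgentSetting {
  // with explicit presence, HasField("stop") tells set-to-false apart from unset,
  // and a stop explicitly set to false is serialized on the wire
  optional bool stop = 1;
}
```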
I construct the message according to my proto file (listed below). Then I serialize it into a byte string using the SerializeToString() method. Then I take the byte string and deserialize it back into a proto object using the ParseFromString() method.
But if I fill some fields with zero values and execute the above algorithm like this:
def test():
    fdm = device_pb2.FromDeviceMessage()
    fdm.deveui = bytes.fromhex('1122334455667788')
    fdm.fcntup = 0
    fdm.battery = 3.5999999046325684
    fdm.mode = 0
    fdm.event = 1
    port = fdm.data.add()
    port.port = 1  # device_pb2.PortData.Name(0)
    port.value = 0
    c = fdm.SerializeToString()
    return c

def parse_test(data):
    print(data)
    res = device_pb2.FromDeviceMessage()
    res.ParseFromString(data)
    return res

print(parse_test(test()))
then the Python console shows me:
deveui: "\021\"3DUfw\210"
battery: 3.5999999046325684
event: PERIOD_EVENT
data {
    port: VIBR2
}
that is, without the fields whose values are zero.
But I want to see:
deveui: "\021\"3DUfw\210"
fcntup: 0
battery: 3.5999999046325684
mode: BOUNDARY
event: PERIOD_EVENT
data {
    port: VIBR2
    value: 0
}
Why is this happening, and how can I fix it?
=============Proto_File================
message FromDeviceMessage {
    bytes deveui = 1;
    uint32 ts = 2;
    int32 fcntup = 3;
    float battery = 4;
    int32 period = 5;
    Mode mode = 6;
    Event event = 7;
    repeated PortData data = 8;
}
message PortData {
    DevicePort port = 1;
    int32 value = 2;
}
enum Mode {
    BOUNDARY = 0;
    PERIOD = 1;
    BOUNDARY_PERIOD = 2;
}
enum Event {
    BOUNDARY_EVENT = 0;
    PERIOD_EVENT = 1;
}
enum DevicePort {
    VIBR1 = 0;
    VIBR2 = 1;
    TEMPL2 = 3;
}
So, I think I have guessed the reason. For the enum types (DevicePort, Event, Mode), the default value is the first defined enum value, which must be 0, so I will set the value to 1 where I need to see the field. In the other cases, fields with zero values are simply not displayed, to reduce the serialized size of the message. But if I access such a field directly, e.g. res.data[0].value in parse_test(data), it shows 0 when I have set the value field to 0. And this rule works in all cases.
I have a C header file which contains a series of classes, and I'm trying to write a function that will take those classes and convert them to a Python dict. A sample of the file is down the bottom.
The format is something like:
class CFGFunctions {
    class ABC {
        class AA {
            file = "abc/aa/functions"
            class myFuncName { recompile = 1; };
        };
        class BB
        {
            file = "abc/bb/functions"
            class funcName {
                recompile = 1;
            }
        }
    };
};
I'm hoping to turn it into something like
{CFGFunctions:{ABC:{AA:"myFuncName"}, BB:...}}
# Or
{CFGFunctions:{ABC:{AA:{myFuncName:"string or list or something"}, BB:...}}}
In the end, I'm aiming to get the filepath string (which is actually a path to a folder... but anyway), and the class names in the same class as the file/folder path.
I've had a look on SO, Google and so on, but most things I've found have been about splitting lines into dicts, rather than n-deep 'blocks'.
I know I'll have to loop through the file; however, I'm not sure of the most efficient way to convert it to a dict.
I'm thinking I'd need to grab the outermost class and its matching brackets, then do the same for the text remaining inside.
If none of that makes sense, it's because I haven't quite made sense of the process myself, haha.
If any more info is needed, I'm happy to provide.
The following code is a quick mockup of what I'm sort of thinking.
It is most likely BROKEN and probably does NOT WORK, but it's the sort of process I have in mind:
import re

def get_data():
    fh = open('CFGFunctions.h', 'r')
    data = {}      # will contain final data model
    # would probably refactor some of this into a function to allow better looping
    start = ""     # starting class name
    brackets = 0   # number of brackets
    text = ""      # temp storage for lines inside block while looping
    for line in fh:
        # find the first class (the start)
        mt = re.match(r'class ([\w_]+) {', line)
        if mt and start == "":
            start = mt.group(1)
        else:
            # once we have the first class, count all other open brackets
            if re.search(r'{', line):
                # and inc our counter
                brackets += 1
            if re.search(r'}', line):
                # find the close, and decrement
                brackets -= 1
                # if we are back to the initial block, break out of the loop
                if brackets == 0:
                    break
            text += line
    data[start] = {'tempText': text}
====
Sample file
class CfgFunctions {
    class ABC {
        class Control {
            file = "abc\abc_sys_1\Modules\functions";
            class assignTracker {
                description = "";
                recompile = 1;
            };
            class modulePlaceMarker {
                description = "";
                recompile = 1;
            };
        };
        class Devices
        {
            file = "abc\abc_sys_1\devices\functions";
            class registerDevice { recompile = 1; };
            class getDeviceSettings { recompile = 1; };
            class openDevice { recompile = 1; };
        };
    };
};
EDIT:
If I do have to use a package, I'd like to have it in the program's directory if possible, not in the general Python libs directory.
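Since the edit above asks to avoid external packages where possible, here is a stdlib-only sketch of the same idea: tokenize the text with a regex, then recursively parse the nested class blocks into dicts. The token pattern and the resulting dict shape are my assumptions, not part of the original code:

```python
import re

# crude tokenizer: the 'class' keyword, punctuation, quoted strings, bare words/paths
TOKEN = re.compile(r'class\b|[{};=]|"[^"]*"|[\w\\./]+')

def parse_classes(text):
    """Parse nested `class Name { ... };` blocks into nested dicts."""
    tokens = TOKEN.findall(text)
    pos = 0

    def parse_block():
        nonlocal pos
        block = {}
        while pos < len(tokens) and tokens[pos] != '}':
            if tokens[pos] == 'class':
                name = tokens[pos + 1]
                pos += 3                  # skip 'class', the name, and '{'
                block[name] = parse_block()
                pos += 1                  # skip the closing '}'
            elif tokens[pos] == ';':
                pos += 1                  # stray semicolons are ignored
            else:                         # assignment: name = value
                name, value = tokens[pos], tokens[pos + 2].strip('"')
                block[name] = value
                pos += 3
        return block

    return parse_block()

sample = '''
class CfgFunctions {
    class ABC {
        class AA {
            file = "abc/aa/functions";
            class myFuncName { recompile = 1; };
        };
    };
};
'''
result = parse_classes(sample)
```

For the sample above this yields nested dicts keyed by class name, with the file path and recompile entries as plain strings. It assumes well-formed input (every '{' closed) and does no error reporting, which is exactly the kind of gap a real parsing library handles for you.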
As you suspected, parsing is necessary to do the conversion. Have a look at the PyParsing package, which is a fairly easy-to-use library for implementing parsers in your Python program.
Edit: This is a very symbolic version of what it would take to recognize a very minimalistic grammar - somewhat like the example at the top of the question. It is untested and incomplete, but it might put you in the right direction:
from pyparsing import (Forward, ZeroOrMore, Keyword, Literal, ParseException,
                       QuotedString, Word, alphas, alphanums)

test_code = """
class CFGFunctions {
    class ABC {
        class AA {
            file = "abc/aa/functions"
            class myFuncName{ recompile = 1; };
        };
        class BB
        {
            file = "abc/bb/functions"
            class funcName{
                recompile=1;
            }
        }
    };
};
"""

class_tkn = Keyword('class')
lbrace_tkn = Literal('{')
rbrace_tkn = Literal('}')
semicolon_tkn = Literal(';')
assign_tkn = Literal('=')

identifier = Word(alphas, alphanums + '_')
value = QuotedString('"') | Word(alphanums + '_./\\')
assignment = identifier + assign_tkn + value + ZeroOrMore(semicolon_tkn)

# forward-declare class_block so the rule can refer to itself recursively
class_block = Forward()
class_block <<= (class_tkn + identifier + lbrace_tkn +
                 ZeroOrMore(class_block | assignment) +
                 rbrace_tkn + ZeroOrMore(semicolon_tkn))

def test_parser(test):
    try:
        results = class_block.parseString(test)
        print(test, ' -> ', results)
    except ParseException as s:
        print("Syntax error:", s)

def main():
    test_parser(test_code)
    return 0

if __name__ == '__main__':
    main()
Also, this code is only the parser - it does not generate any output. As you can see in the PyParsing docs, you can later attach the actions you want. But the first step is to recognize what you want to translate.
And a last note: do not underestimate the complexities of parsing code... Even with a library like PyParsing, which takes care of much of the work, there are many ways to get mired in infinite loops and other pitfalls of parsing. Implement things step by step!
EDIT: A few sources for information on PyParsing are:
http://werc.engr.uaf.edu/~ken/doc/python-pyparsing/HowToUsePyparsing.html
http://pyparsing.wikispaces.com/
(Particularly interesting is http://pyparsing.wikispaces.com/Publications, with a long list of articles - several of them introductory - on PyParsing)
http://pypi.python.org/pypi/pyparsing_helper is a GUI for debugging parsers
There is also a 'tag' Pyparsing here on stackoverflow, Where Paul McGuire (the PyParsing author) seems to be a frequent guest.
NOTE: From PaulMcG in the comments below: PyParsing is no longer hosted on wikispaces.com. Go to github.com/pyparsing/pyparsing
Here is my problem, guys.
I initialise the proto file as shown in the Bitcoin developer wiki, here:
package payments;

option java_package = "org.bitcoin.protocols.payments";
option java_outer_classname = "Protos";

message Output {
    optional uint64 amount = 1 [default = 0];
    required bytes script = 2;
}
message PaymentDetails {
    optional string network = 1 [default = "test"];
    repeated Output outputs = 2;
    required uint64 time = 3;
    optional uint64 expires = 4;
    optional string memo = 5;
    optional string payment_url = 6;
    optional bytes merchant_data = 7;
}
message PaymentRequest {
    optional uint32 payment_details_version = 1 [default = 1];
    optional string pki_type = 2 [default = "none"];
    optional bytes pki_data = 3;
    required bytes serialized_payment_details = 4;
    optional bytes signature = 5;
}
message X509Certificates {
    repeated bytes certificate = 1;
}
message Payment {
    optional bytes merchant_data = 1;
    repeated bytes transactions = 2;
    repeated Output refund_to = 3;
    optional string memo = 4;
}
message PaymentACK {
    required Payment payment = 1;
    optional string memo = 2;
}
and throw this view into Django, which fetches the public key associated with a newly created address, hashes it into the correct format for a script, serializes the serialized_payment_details field and returns a response object:
def paymentobject(request):
    def addr_160(pub):
        h3 = hashlib.sha256(unhexlify(pub))
        return hashlib.new('ripemd160', h3.digest())

    x = payments_pb2
    btc_address = bc.getnewaddress()
    pubkey_hash = bc.validateaddress(btc_address).pubkey
    pubkey_hash160 = addr_160(pubkey_hash).hexdigest()
    hex_script = "76" + "a9" + "14" + pubkey_hash160 + "88" + "ac"
    serialized_script = hex_script.decode("hex")
    xpd = x.PaymentDetails()
    xpd.time = int(time())
    xpd.outputs.add(amount=0, script=serialized_script)
    xpr = x.PaymentRequest()
    xpr.serialized_payment_details = xpd.SerializeToString()
    return HttpResponse(xpr.SerializeToString(), content_type="application/octet-stream")
When I point my Bitcoin v0.9 client at the URI
bitcoin:?r=http://127.0.0.1:8000/paymentobject
I am met with an error:
[libprotobuf ERROR google/protobuf/message_lite.cc:123] Can't parse message of type "payments.PaymentRequest" because it is missing required fields: serialized_payment_details
But it isn't missing the details field, is it?
Any help much appreciated, thanks :)
The answer is that (at the time of writing) you cannot specify zero as the Output.amount: the bitcoin-qt 0.9 client considers such an output dust and does not allow the transaction to proceed.
More info here.
I am a newbie to Python and ctypes. What I have is this C program:
struct query
{
    uint16_t req_no;
    uint32_t req_len;
    uint64_t req;
};

struct response
{
    uint16_t req_no;
    uint16_t status;
    uint32_t value_len;
    uint64_t value;
};

/* functions for creating query and response packets using the
 * above structs respectively, returning char buffers */
char* create_query(/* some args */);
char* create_response(/* some args */);
I have created a libquery.so from the above C code. My TCP server is a C program.
I am trying to create a TCP Python client for it (my project needs it!).
I can successfully send a query and receive data (using the functions in libquery.so) from the Python client.
But when I get the response data, I want to convert it to the "struct response" type.
I have created a similar "Structure" class in Python, but can't get anything out of it.
Please help.
A snippet of my Python code:

# some ctypes imports
lib = cdll.LoadLibrary('./libquery.so')

class Info1(Structure):
    _fields_ = [("req_no", c_int),
                ("status", c_int),
                ("value_len", c_int),
                ("value", c_int)]

header = Info1()

# Did some TCP connection code here and sent data to the server by calling
# the create_query() method; the data is confirmed to be correct on the server side...

# Receive response data
data = sock.recv(512)
header = str_to_class('Info1')
header.req_no = int(ord(data[0]))  # works, but I don't want to go this way...
header.status = int(ord(data[1]))
header.value_len = int(ord(data[2]))
header.value = int(ord(data[3]))
# print the above header values...
I tried using:

def str_to_class(name):
    return getattr(sys.modules[__name__], name)

but I don't know how to make it work.
Does anybody know how to make it work, or is there another way?
Your Info1 does not match the C struct response, so I changed it in the following code.
You can use ctypes.memmove:
from ctypes import *

class Info1(Structure):
    _fields_ = [("req_no", c_uint16),
                ("status", c_uint16),
                ("value_len", c_uint32),
                ("value", c_uint64)]

data = (
    b'\x01\x00'
    b'\x02\x00'
    b'\x03\x00\x00\x00'
    b'\x04\x00\x00\x00\x00\x00\x00\x00'
)

# Assume this data was received; I assumed both server and client are
# little-endian. Otherwise, use socket.ntoh{s|l}, socket.hton{s|l}, ...

header = Info1()
memmove(addressof(header), data, sizeof(header))

assert header.req_no == 1
assert header.status == 2
assert header.value_len == 3
assert header.value == 4
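An alternative to memmove, assuming the same field layout as above: ctypes structures can also be created directly from the received bytes with from_buffer_copy:

```python
from ctypes import Structure, c_uint16, c_uint32, c_uint64, sizeof

class Info1(Structure):
    _fields_ = [("req_no", c_uint16),
                ("status", c_uint16),
                ("value_len", c_uint32),
                ("value", c_uint64)]

data = (b'\x01\x00'
        b'\x02\x00'
        b'\x03\x00\x00\x00'
        b'\x04\x00\x00\x00\x00\x00\x00\x00')

assert sizeof(Info1) == 16             # 2 + 2 + 4 + 8, naturally aligned
header = Info1.from_buffer_copy(data)  # copies the bytes into a fresh struct

assert (header.req_no, header.status, header.value_len, header.value) == (1, 2, 3, 4)
```

This raises a ValueError if the buffer is shorter than the structure, which is a useful sanity check on data coming off a socket.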
You can also use the struct module. Note the '<' prefix: it forces the standard sizes (H=2, L=4, Q=8 bytes) with no padding, whereas the bare native format may insert padding on some platforms:

import struct

data = b'....'  # same 16 bytes as above
struct.unpack('<HHLQ', data) == (1, 2, 3, 4)  # use '>HHLQ' if the data was htons/htonl-ed on the sending side
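One caveat with the struct approach: without a byte-order prefix, struct uses native sizes and alignment, so the bare 'HHLQ' format is not guaranteed to be 16 bytes on every platform (on 64-bit Linux, L is 8 bytes and padding is inserted). The '<' prefix pins the standard sizes. A quick self-contained check:

```python
import struct

data = (b'\x01\x00'
        b'\x02\x00'
        b'\x03\x00\x00\x00'
        b'\x04\x00\x00\x00\x00\x00\x00\x00')

# standard sizes: H=2, H=2, L=4, Q=8, no padding -> exactly 16 bytes
assert struct.calcsize('<HHLQ') == 16
assert struct.unpack('<HHLQ', data) == (1, 2, 3, 4)

# native sizes/alignment are platform-dependent and may not match the wire format
print('native HHLQ size:', struct.calcsize('HHLQ'))
```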