Python gRPC message does not serialize its first field. I updated the protos and cleaned everything many times, but the problem is not fixed. As you can see in the logs, setting_dict has the stop field, but after passing these fields to AgentSetting it is not visible in the logs or on the server side. I also tried passing the stop field manually, but it still does not show up. The annoying thing is that gRPC does not throw any exception: it accepts the stop field but does not send it to the server, and it is not shown when printing AgentSetting, although the field is still reachable as agent_setting.stop.
This is my proto file:
syntax = "proto3";

import "base.proto";

message ConnectionCheckRequest {
    string host_name = 1;
}

message RecordChunkRequest {
    string host_name = 1;
    string name = 2;
    bytes chunk = 3;
    uint32 chunk_index = 4;
    uint32 chunk_number = 5;
}

message AgentSetting {
    bool stop = 1;
    bool connection = 2;
    bool registered = 3;
    string image_mode = 4;
    uint32 sending_fail_limit = 5;
    uint64 limit_of_old_records = 6;
    string offline_records_exceed_policy = 7;
}

message AgentSettingRequest {
    AgentSetting agent_setting = 1;
    string host_name = 2;
}

message AgentSettingResponse {
    AgentSetting agent_setting = 1;
}

message AgentRegistryRequest {
    AgentSetting agent_setting = 1;
    string host_name = 2;
}

service Chief {
    rpc agent_update_setting(AgentSettingRequest) returns (AgentSettingResponse) {};
    rpc agent_registry(AgentRegistryRequest) returns (Response) {};
    rpc send(stream RecordChunkRequest) returns (Response) {};
    rpc connection_check(ConnectionCheckRequest) returns (Response) {};
}
This is my code snippet:
def register():
    try:
        with insecure_channel(settings.server_address) as channel:
            setting_dict = settings.dict()
            logger.info(f'\nSetting Dict: {setting_dict}')
            agent_setting = AgentSetting(**setting_dict)
            logger.info(f'\nAgent Setting Instance: \n{agent_setting}')
            response = ChiefStub(channel).agent_registry(
                AgentRegistryRequest(
                    agent_setting=agent_setting,
                    host_name=settings.host_name
                )
            )
            return response
    except Exception as e:
        logger.exception(f'Register Error: {str(e)}')
        return Response(success=False, message="failure")
Logs:
|2020-05-05T18:33:56.931140+0300| |5480| |INFO| |reqs:register:28|
Setting Dict: {'stop': False, 'connection': True, 'registered': False, 'image_mode': 'RGB', 'sending_fail_limit': 3, 'limit_of_old_records': 5368709120, 'offline_records_exceed_policy': 'OVERWRITE'}
|2020-05-05T18:33:56.932137+0300| |5480| |INFO| |reqs:register:32|
Agent Setting Instance:
connection: true
image_mode: "RGB"
sending_fail_limit: 3
limit_of_old_records: 5368709120
offline_records_exceed_policy: "OVERWRITE"
In proto3, an unset value and a default value are considered equivalent.
So stop: false is considered equivalent to omitting stop entirely.
See Language Guide (proto3)
Note that for scalar message fields, once a message is parsed there's no way of telling whether a field was explicitly set to the default value (for example whether a boolean was set to false) or just not set at all: you should bear this in mind when defining your message types. For example, don't have a boolean that switches on some behaviour when set to false if you don't want that behaviour to also happen by default. Also note that if a scalar message field is set to its default, the value will not be serialized on the wire.
Note that this is different from proto2.
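For what it's worth, the value does arrive on the other side; it is simply not printed or put on the wire when it equals the default. A minimal sketch, reusing the AgentSetting class from the snippet above:

setting = AgentSetting(stop=False, connection=True)

# str(setting) omits stop because False is the proto3 default for bool...
print(setting)

# ...but the field is still perfectly readable, both here and after a round trip.
print(setting.stop)                  # False

wire = setting.SerializeToString()   # stop contributes no bytes to the payload
parsed = AgentSetting()
parsed.ParseFromString(wire)
print(parsed.stop)                   # False again, on the receiving side

# If you genuinely need to distinguish "unset" from "explicitly false",
# one option is the google.protobuf.BoolValue wrapper type in the .proto.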
Related
I'm using gRPC in python with django and django-grpc-framework.
When I run the unit tests, a piece of code in my models.py produces a weird problem: I make three gRPC requests, but only one reaches the server side.
Problem details
To simplify the problem, the following code explains the situation:
class MyModel(models.Model):
    def some_method(self):
        for stub in [stub1, stub2, stub3]:
            resp = stub.RetractObject(base_pb2.RetractObjectRequest(
                app_label='...',
                object_type='...',
                object_id='...',
                exchange_id='...',
                remark='...',
                date_record='...',
            ))
Here, some_method is triggered inside a django.test.TestCase unit test, while the gRPC stubs connect to an external running server instance.
The related .proto files are as below:
// base.proto
// ...
message CreditRecord {
    int64 id = 1;
    string username = 2;
    string app_label = 3;
    string flag = 4;
    string object_type = 5;
    int64 object_id = 6;
    int64 amount = 7;
    int64 balance = 8;
    string remark = 9;
    string date_record = 10;
    string exchange_id = 11;
    int64 retract_ref = 12;
}

message RetractObjectRequest {
    string app_label = 1;
    string object_type = 2;
    int64 object_id = 3;
    string exchange_id = 4;
    string remark = 5;
    string date_record = 6;
}
// ...
// stub1 - stub3 have a similar structure, as below
syntax = "proto3";

package mjtest.growth;

import "mjtest/growth/base.proto";

service CreditController {
    // ...
    rpc RetractObject (RetractObjectRequest) returns (stream CreditRecord) {}
}
And the Server-Side code is something like (using django-grpc-framework):
# services.py
class GenericCreditService(mixins.RetrieveModelMixin,
                           mixins.ListModelMixin,
                           GenericService):

    def RetractObject(self, request: base_pb2.RetractObjectRequest, context):
        print('Request reached!', request)
        # ...
        records = model_class.retract_object(...)
        # Here, records is already a list, not an iterator.
        for rec in records:
            yield self.serializer_class(rec).message
So, when I trigger the unit test, the client side calls the RetractObject gRPC method THREE times.
But Request reached! is printed only ONE time.
Attempt #1: adding a sleep(0.5)
I guessed there was some issue on the client side, so I added sleep(0.5) after every request.
class MyModel(models.Model):
    def some_method(self):
        for stub in [stub1, stub2, stub3]:
            resp = stub.RetractObject(base_pb2.RetractObjectRequest(...))
            sleep(0.5)
Then all three requests reach the server!
Attempt #2: traverse the response explicitly.
I guessed that leaving the stream response unconsumed might cause the problem, so I converted the response to a list explicitly.
class MyModel(models.Model):
    def some_method(self):
        for stub in [stub1, stub2, stub3]:
            resp = stub.RetractObject(base_pb2.RetractObjectRequest(...))
            list(resp)
Then all three requests reach the server!
The problem is very weird, and neither way of making the request throws an error.
Can anyone help?
I build the message according to my proto file (listed below). Then I serialize it into a byte string using the SerializeToString() method. Then I take the byte string and deserialize it back into a proto object using the ParseFromString() method.
But if I fill some fields with zero values and execute the above algorithm like this:
def test():
    fdm = device_pb2.FromDeviceMessage()
    fdm.deveui = bytes.fromhex('1122334455667788')
    fdm.fcntup = 0
    fdm.battery = 3.5999999046325684
    fdm.mode = 0
    fdm.event = 1
    port = fdm.data.add()
    port.port = 1  # device_pb2.PortData.Name(0)
    port.value = 0
    c = fdm.SerializeToString()
    return c

def parse_test(data):
    print(data)
    res = device_pb2.FromDeviceMessage()
    res.ParseFromString(data)
    return res

print(parse_test(test()))
then the Python console will show me:
deveui: "\021\"3DUfw\210"
battery: 3.5999999046325684
event: PERIOD_EVENT
data {
port: VIBR2
}
without the fields whose values are zero.
But I want to see:
deveui: "\021\"3DUfw\210"
fcntup: 0
battery: 3.5999999046325684
mode: BOUNDARY
event: PERIOD_EVENT
data {
port: VIBR2
value: 0
}
Why is this happening, and how can I fix it?
=============Proto_File================
message FromDeviceMessage {
    bytes deveui = 1;
    uint32 ts = 2;
    int32 fcntup = 3;
    float battery = 4;
    int32 period = 5;
    Mode mode = 6;
    Event event = 7;
    repeated PortData data = 8;
}

message PortData {
    DevicePort port = 1;
    int32 value = 2;
}

enum Mode {
    BOUNDARY = 0;
    PERIOD = 1;
    BOUNDARY_PERIOD = 2;
}

enum Event {
    BOUNDARY_EVENT = 0;
    PERIOD_EVENT = 1;
}

enum DevicePort {
    VIBR1 = 0;
    VIBR2 = 1;
    TEMPL2 = 3;
}
So, I think I guessed the reason. For the enum types (DevicePort, Event, Mode) the default value is the first defined enum value, which must be 0, so I will set the value 1 wherever I need to see the field. In the other cases, fields with zero values are simply not displayed, to reduce the size of the serialized message. But if I access the field directly, e.g. res.data[0].value in parse_test(data), it shows 0 when I set that field to 0. And this rule works in all cases.
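If you want the default/zero values to show up anyway (for debugging or logging), one option is to dump the message through json_format instead of relying on the text-format print. A small sketch reusing the test() and parse_test() helpers above; note that the keyword argument name has been renamed in newer protobuf releases:

from google.protobuf import json_format

msg = parse_test(test())

# Attribute access always returns the value, even when str(msg) hides it.
print(msg.fcntup)         # 0
print(msg.data[0].value)  # 0

# Dump every field, defaults included (older protobuf releases spell the
# keyword as shown here; newer ones have renamed it).
print(json_format.MessageToDict(msg, including_default_value_fields=True))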
I have a format like this and want to convert my Python dict to this proto format.
proto code:
message template {
    message NicSettings {
        string network_adapter = 1;
        string network_name = 2;
    }
    message NetConfig {
        bool keep_mac_address = 1;
        bool same_as_source = 2;
        repeated NicSettings nic_settings = 3;
    }
    NetworkConfig network_config = 3;
}
python dict:
template:
{ keepMacAddress: true,
sameAsSource : false,
nicSettings: [ {networkAdapter: "ethernet0",
networkName: "Calpal1"
} ,
{networkAdapter: "ethernet1",
networkName: "Calpal2"
}
]
}
How do I convert this to a proto message so that I can pass it to gRPC?
It's not entirely clear that your .proto is correct (or are there typos in it?), but this looks like it should be something along the lines of:
my_template_message = my_message_module_pb2.template(
    keep_mac_address=my_dictionary['keepMacAddress'],
    same_as_source=my_dictionary['sameAsSource'],
    nic_settings=tuple(
        my_message_module_pb2.template.NicSettings(
            network_adapter=my_subdictionary['networkAdapter'],
            network_name=my_subdictionary['networkName'])
        for my_subdictionary in my_dictionary['nicSettings']
    )
)
It's also a little odd in your .proto content that you've nested two message definitions inside another message definition; that doesn't look necessary at all.
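If the dict really uses the camelCase keys shown in the question, another option worth considering is google.protobuf.json_format.ParseDict, which maps the proto3 JSON names (lowerCamelCase) onto the snake_case fields for you. A sketch under the assumption that those fields live on the nested NetConfig message, as the .proto suggests; my_message_module_pb2 is a placeholder for your generated module:

from google.protobuf import json_format

# my_message_module_pb2 is a placeholder; use your generated _pb2 module.
net_config = my_message_module_pb2.template.NetConfig()

json_format.ParseDict(
    {
        'keepMacAddress': True,
        'sameAsSource': False,
        'nicSettings': [
            {'networkAdapter': 'ethernet0', 'networkName': 'Calpal1'},
            {'networkAdapter': 'ethernet1', 'networkName': 'Calpal2'},
        ],
    },
    net_config,
)

print(net_config)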
I have just started learning Uber's TChannel. I'm trying to run the code from the TChannel documentation in Python and Node.js. In both cases I am not able to connect the client to the server.
This is how my code looks in Node.js, following http://tchannel-node.readthedocs.org/en/latest/GUIDE/:
var TChannel = require('tchannel');
var myLocalIp = require('my-local-ip');

var rootChannel = TChannel();
rootChannel.listen(0, myLocalIp());
rootChannel.on('listening', function onListen() {
    console.log('got a server', rootChannel.address());
});

var TChannelThrift = rootChannel.TChannelAsThrift;
var keyChan = rootChannel.makeSubChannel({
    serviceName: process.env.USER || 'keyvalue'
});

var fs = require('fs');
var keyThrift = TChannelThrift({
    source: fs.readFileSync('./keyvalue.thrift', 'utf8')
});

var ctx = {
    store: {}
};

keyThrift.register(keyChan, 'KeyValue::get_v1', ctx, get);
keyThrift.register(keyChan, 'KeyValue::put_v1', ctx, put);

function get(context, req, head, body, cb) {
    cb(null, {
        ok: true,
        body: {
            value: context.store[body.key]
        }
    });
}

function put(context, req, head, body, cb) {
    context.store[body.key] = body.value;
    cb(null, {
        ok: true,
        body: null
    });
}
When I run this code I get this error:
node sever.js
assert.js:93
throw new assert.AssertionError({
^
AssertionError: every field must be marked optional, required, or have a default value on GetResult including "value" in strict mode
at ThriftStruct.link (/home/bhaskar/node_modules/thriftrw/struct.js:154:13)
at Thrift.link (/home/bhaskar/node_modules/thriftrw/thrift.js:199:32)
at new Thrift (/home/bhaskar/node_modules/thriftrw/thrift.js:69:10)
at new TChannelAsThrift (/home/bhaskar/node_modules/tchannel/as/thrift.js:46:17)
at TChannelAsThrift (/home/bhaskar/node_modules/tchannel/as/thrift.js:38:16)
at Object.<anonymous> (/home/bhaskar/uber/tchannel/thrift/sever.js:16:17)
at Module._compile (module.js:456:26)
at Object.Module._extensions..js (module.js:474:10)
at Module.load (module.js:356:32)
at Function.Module._load (module.js:312:12)
Similarly, I have tried the same thing in Python by following http://tchannel.readthedocs.org/projects/tchannel-python/en/latest/guide.html.
The code in Python looks like this:
from __future__ import absolute_import

from tornado import ioloop
from tornado import gen

from service import KeyValue
from tchannel import TChannel

tchannel = TChannel('keyvalue-server')

values = {}

@tchannel.thrift.register(KeyValue)
def getValue(request):
    key = request.body.key
    value = values.get(key)
    if value is None:
        raise KeyValue.NotFoundError(key)
    return value

@tchannel.thrift.register(KeyValue)
def setValue(request):
    key = request.body.key
    value = request.body.value
    values[key] = value

def run():
    tchannel.listen()
    print('Listening on %s' % tchannel.hostport)

if __name__ == '__main__':
    run()
    ioloop.IOLoop.current().start()
When I run this with the python server.py command I get:
Listening on my-local-ip:58092
But when I try to connect to it from the client using tcurl:
tcurl -p localhost:58092 -t ~/keyvalue/thrift/service.thrift service KeyValue::setValue -3 '{"key": "hello", "value": "world"}'
I get this:
assert.js:93
throw new assert.AssertionError({
^
AssertionError: every field must be marked optional, required, or have a default value on NotFoundError including "key" in strict mode
at ThriftException.link (/usr/lib/node_modules/tcurl/node_modules/thriftrw/struct.js:154:13)
at Thrift.link (/usr/lib/node_modules/tcurl/node_modules/thriftrw/thrift.js:199:32)
at new Thrift (/usr/lib/node_modules/tcurl/node_modules/thriftrw/thrift.js:69:10)
at new TChannelAsThrift (/usr/lib/node_modules/tcurl/node_modules/tchannel/as/thrift.js:46:17)
at asThrift (/usr/lib/node_modules/tcurl/index.js:324:18)
at onIdentified (/usr/lib/node_modules/tcurl/index.js:278:13)
at finish (/usr/lib/node_modules/tcurl/node_modules/tchannel/peer.js:266:9)
at Array.onIdentified [as 1] (/usr/lib/node_modules/tcurl/node_modules/tchannel/peer.js:257:9)
at DefinedEvent.emit (/usr/lib/node_modules/tcurl/node_modules/tchannel/lib/event_emitter.js:90:25)
at TChannelConnection.onOutIdentified (/usr/lib/node_modules/tcurl/node_modules/tchannel/connection.js:383:26)
Can anyone tell me what the mistake is?
For the Node example, the Thrift file in the guide needs to be updated. Try the following (I just added the required keyword to every field):
struct GetResult {
    1: required string value
}

service KeyValue {
    GetResult get_v1(
        1: required string key
    )

    void put_v1(
        1: required string key,
        2: required string value
    )
}
Here is my problem, guys.
I initialise the proto file as shown here in the Bitcoin for developers wiki:
package payments;

option java_package = "org.bitcoin.protocols.payments";
option java_outer_classname = "Protos";

message Output {
    optional uint64 amount = 1 [default = 0];
    required bytes script = 2;
}

message PaymentDetails {
    optional string network = 1 [default = "test"];
    repeated Output outputs = 2;
    required uint64 time = 3;
    optional uint64 expires = 4;
    optional string memo = 5;
    optional string payment_url = 6;
    optional bytes merchant_data = 7;
}

message PaymentRequest {
    optional uint32 payment_details_version = 1 [default = 1];
    optional string pki_type = 2 [default = "none"];
    optional bytes pki_data = 3;
    required bytes serialized_payment_details = 4;
    optional bytes signature = 5;
}

message X509Certificates {
    repeated bytes certificate = 1;
}

message Payment {
    optional bytes merchant_data = 1;
    repeated bytes transactions = 2;
    repeated Output refund_to = 3;
    optional string memo = 4;
}

message PaymentACK {
    required Payment payment = 1;
    optional string memo = 2;
}
Then I throw this view into Django; it fetches the public key associated with a newly created address, hashes it into the correct format for a script, serializes the 'serialized_payment_details' field, and returns a response object.
def paymentobject(request):
    def addr_160(pub):
        h3 = hashlib.sha256(unhexlify(pub))
        return hashlib.new('ripemd160', h3.digest())

    x = payments_pb2
    btc_address = bc.getnewaddress()
    pubkey_hash = bc.validateaddress(btc_address).pubkey
    pubkey_hash160 = addr_160(pubkey_hash).hexdigest()
    hex_script = "76" + "a9" + "14" + pubkey_hash160 + "88" + "ac"
    serialized_script = hex_script.decode("hex")

    xpd = x.PaymentDetails()
    xpd.time = int(time())
    xpd.outputs.add(amount=0, script=serialized_script)

    xpr = x.PaymentRequest()
    xpr.serialized_payment_details = xpd.SerializeToString()

    return HttpResponse(xpr.SerializeToString(), content_type="application/octet-stream")
When I point my Bitcoin v0.9 client at the URI
bitcoin:?r=http://127.0.0.1:8000/paymentobject
I am met with an error:
[libprotobuf ERROR google/protobuf/message_lite.cc:123] Can't parse message of type "payments.PaymentRequest" because it is missing required fields: serialized_payment_details
But it isn't missing the details field, is it?
Any help much appreciated, thanks :)
The answer was that (at the time of writing) you cannot specify zero as the Output.amount. The bitcoin-qt 0.9 client considers it dust and does not allow the transaction to proceed.
More info here.
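In code terms, the only change this implies in the paymentobject() view above is to put a non-zero, non-dust amount on the output. A sketch of that one change; the 10000 satoshis here is purely illustrative, since the actual dust threshold depends on the client and network:

# Inside paymentobject(), replace the zero-amount output with a non-dust amount.
xpd = x.PaymentDetails()
xpd.time = int(time())
xpd.outputs.add(amount=10000, script=serialized_script)  # illustrative non-dust value

xpr = x.PaymentRequest()
xpr.serialized_payment_details = xpd.SerializeToString()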