cTrader decode protobuf message from Report API Events (tunnel) - python

I am working with the cTrader trading platform.
My project is written in Python 3 on Tornado.
I have an issue decoding the protobuf messages from the Report API events.
Below I will list everything I have achieved and where the problem is.
First, cTrader has a REST API for reports,
so I got the .proto file and generated the Python 3 bindings.
The generated module is called: cTraderReportingMessages5_9_pb2
From the REST Report API I get a protobuf message and am able to decode it the following way, because I know which descriptor to pass for decoding:
from models import cTraderReportingMessages5_9_pb2
from protobuf_to_dict import protobuf_to_dict
raw_response = yield async_client.fetch(base_url, method=method, body=form_data, headers=headers)
decoded_response = cTraderReportingMessages5_9_pb2._reflection.ParseMessage(descriptors[endpoint]['decode'], raw_response.body)
descriptors[endpoint]['decode'] is my descriptor; I know exactly which descriptor to pass to decode each message.
The content of cTraderReportingMessages5_9_pb2 (the .proto file generated for Python 3) is too big to paste here:
https://ufile.io/2p2d6
So up to this point, using the REST API and knowing exactly which descriptor to pass, I am able to decode the protobuf message and move forward.
2. Now the issue I face
Connecting with Python 3 to the tunnel on 127.0.0.:5672,
I am listening for events and receiving this kind of data back:
b'\x08\x00\x12\x88\x01\x08\xda\xc9\x06\x10\xb6\xc9\x03\x18\xa1\x8b\xb8\x01 \x00*\x00:\x00B\x00J\x00R\x00Z\x00b\x00j\x00r\x00z\x00\x80\x01\xe9\x9b\x8c\xb5\x99-\x90\x01d\x98\x01\xea\x9b\x8c\xb5\x99-\xa2\x01\x00\xaa\x01\x00\xb0\x01\x00\xb8\x01\x01\xc0\x01\x00\xd1\x01\x00\x00\x00\x00\x00\x00\x00\x00\xd9\x01\x00\x00\x00\x00\x00\x00\x00\x00\xe1\x01\x00\x00\x00\x00\x00\x00\x00\x00\xea\x01\x00\xf0\x01\x01\xf8\x01\x00\x80\x02\x00\x88\x02\x00\x90\x02\x00\x98\x02\x00\xa8\x02\x00\xb0\x02\x00\xb8\x02\x90N\xc0\x02\x00\xc8\x02\x00'
As per the recommendation I got, I need to use the same .proto file generated for Python as in step 1 to decode the message, but I had no success, because I don't know which descriptor needs to be passed.
In step 1 this was working perfectly:
decoded_response = cTraderReportingMessages5_9_pb2._reflection.ParseMessage(descriptors[endpoint]['decode'], raw_response.body)
But in the second step I cannot decode the message the same way. What am I missing, or how do I decode the message using the same .proto file?

Finally I found a workaround by myself; maybe it is a primitive way, but it is the only thing that worked for me.
According to the answer I got from the providers, the same .proto file needs to be used in both situations.
SOLUTION:
1. Build a list with all the descriptors from the .proto file:
descriptors = [cTraderReportingMessages5_9_pb2.descriptor_1, cTraderReportingMessages5_9_pb2.descriptor_2]
2. Loop through the list and pass the descriptors one by one:
for d in descriptors:
    decoded_response = cTraderReportingMessages5_9_pb2._reflection.ParseMessage(d, raw_response.body)
3. Inside the loop, check whether decoded_response is not blank:
    if decoded_response:
        # descriptor was found; the response is decoded
        break
else:
    # no descriptor matched
4. After the response is decoded, parse it into a dict:
from protobuf_to_dict import protobuf_to_dict
decoded_response_to_dict = protobuf_to_dict(decoded_response)
This solution, which I spent weeks on, finally worked.


Unable to decode AWS Session Manager websocket output in python

Hope you're doing great!
The use case
I'm trying to PoC something on AWS. The use case is that we need to be able to check, across all our infrastructure, that every instance is reachable through AWS Session Manager.
To do that, I will use a Lambda in Python 3.7; for now I am building the PoC locally. I'm able to open the websocket, send the token payload, and get an output that contains a shell.
The problem is that the byte output contains characters that Python's decode function can't decode in any of the many encodings I tested; every time, something blocks.
The output
Here is the output I have after sending the payload:
print(event)
b'\x00\x00\x00toutput_stream_data \x00\x00\x00\x01\x00\x00\x01m\x1a\x1b\x9b\x15\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\xb1\x0b?\x19\x99A\xfc\xae%\xb2b\xab\xfd\x02A\xd7C\xcd\xd8}L\xa8\xb2J\xad\x12\xe3\x94\n\xed\xb81\xfa\xb6\x11\x18\xc2\xecR\xf66&4\x18\xf6\xbdd\x00\x00\x00\x01\x00\x00\x00\x10\x1b[?1034hsh-4.2$ '
What I already tried
I researched a lot on Stack Overflow and tried to decode with ascii, cp1252, cp1251, cp1250, iso8859-1, utf-16, utf-8, and utf_16_be, but every time it either doesn't decode anything or it leads to an error because a character is unknown.
I also tried chardet.detect, but the returned encoding doesn't work, and the probability it reports is really low. I also tried to strip the \x00 bytes, but strip doesn't work here either.
I already know that shell output can contain coloring characters and other things that make it look garbled, but here I tried to pass colorama on it and to match ANSI characters with regexes; nothing successfully decodes this bytes response.
The code
Here is the code for my PoC, feel free to use it to try, you just have to change the target instance id (your instance needs to have the latest amazon-ssm-agent running on it).
import boto3
import uuid
import json
from websocket import create_connection
# Setting the boto3 client and the target
client = boto3.client('ssm','eu-west-1')
target = 'i-012345678910'
# Starting a session, this return a WebSocket URL and a Token for the Payload
response = client.start_session(Target=target)
# Creating a session with websocket.create_connection()
ws = create_connection(response['StreamUrl'])
# Building the Payload with the Token
payload = {
    "MessageSchemaVersion": "1.0",
    "RequestId": str(uuid.uuid4()),
    "TokenValue": response['TokenValue']
}
# Sending the Payload
ws.send(json.dumps(payload))
# Receiving, printing and measuring the received message
event = ws.recv()
print(event)
print(len(event))
# Sending pwd, that should output /usr/bin
ws.send('pwd')
# Checking the result of the received message after the pwd
event = ws.recv()
print(event)
print(len(event))
Expected output
In the final solution, I expect to be able to do something like curl http://169.254.169.254/latest/meta-data/instance-id through the websocket, and compare the instance-id in the command output against the target, to validate that the instance is reachable. But I need to be able to decode the websocket output before achieving that.
Thank you in advance for any help on this.
Enjoy the rest of your day !
As per my reading of the amazon-ssm-agent code, the payloads exchanged over the websocket connection and managed by the session-manager channels follow a specific structure called the AgentMessage.
You will have to comply with this structure to use Session Manager with the remote agent through the MGS service, which means serializing your messages and deserializing the responses.
The fields of the above struct are themselves broken down into models via additional structs.
It shouldn't take too long to re-implement that in Python. Good luck!
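To make that concrete, here is a minimal sketch of deserializing such a message, assuming the field layout I read in the amazon-ssm-agent source (a big-endian uint32 header length, then a 32-byte padded message type, with the payload length and payload following the header); treat the offsets as assumptions, not a spec:

```python
import struct

def parse_agent_message(raw):
    # Offsets follow my reading of the AgentMessage struct in the
    # amazon-ssm-agent source; treat them as assumptions, not a spec.
    header_length = struct.unpack('>I', raw[0:4])[0]  # uint32, big-endian
    message_type = raw[4:36].decode('ascii', 'replace').strip('\x00 ')
    # SchemaVersion, CreatedDate, SequenceNumber, Flags, MessageId and
    # PayloadDigest sit between offset 36 and header_length; the payload
    # length and the payload itself come right after the header.
    payload_length = struct.unpack('>I', raw[header_length:header_length + 4])[0]
    payload = raw[header_length + 4:header_length + 4 + payload_length]
    return message_type, payload
```

For an output_stream_data message the payload is the raw shell output, which is the part you can then .decode('utf-8') and compare against the expected instance-id.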

Send metadata in mqtt message

I am using MQTT for the first time to transfer some binary files. So far I have no issues transferring them using code like the below:
import paho.mqtt.client as paho

with open("./file_name.csv.gz", "rb") as f:
    byteArray = bytearray(f.read())

mqttc = paho.Client()
mqttc.will_set("/event/dropped", "Sorry, I seem to have died.")
mqttc.connect(*connection definition here*)
mqttc.publish("hello/world", byteArray)
However, together with the file itself there is some extra info I want to send (the original file name, creation date, etc.). I can't find a proper way to transfer it using MQTT. Is there any way to do that, or do I need to add that info to the message byteArray itself? How would I do that?
You need to build your own data structure to hold the file and its metadata.
How you build that structure is up to you. A couple of options would be:
base64/uuencode-encode the file, add it as a field in a JSON object, save the metadata as other fields, then publish the JSON object.
Build a Python map with the file as one field and the other metadata as other fields, then use pickle to serialise the map.
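A minimal sketch of the first option (JSON with a base64 field; the field names filename, created, and content are my own choice, not anything MQTT-defined):

```python
import base64
import json
import os

def build_payload(path):
    # Bundle the file bytes and its metadata into one JSON message.
    with open(path, "rb") as f:
        data = f.read()
    return json.dumps({
        "filename": os.path.basename(path),
        "created": os.path.getmtime(path),
        "content": base64.b64encode(data).decode("ascii"),
    })

def unpack_payload(message):
    # Reverse the bundling on the subscriber side.
    obj = json.loads(message)
    return obj["filename"], base64.b64decode(obj["content"])
```

On the publishing side you would then call mqttc.publish("hello/world", build_payload("./file_name.csv.gz")). Note that base64 inflates the payload by roughly a third, which is the trade-off against pickling raw bytes.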

loading Behave test results in JSON file in environment.py's after_all() throws error

I'm trying to send my Behave test results to an API Endpoint. I set the output file to be a new JSON file, run my test, and then in the Behave after_all() send the JSON result via the requests package.
I'm running my Behave test like so:
args = ['--outfile=/home/user/nathan/results/behave4.json',
        '--format=json.pretty']
from behave.__main__ import main as behave_main
behave_main(args)
In my environment.py's after_all(), I have:
def after_all(context):
    data = json.load(open('/home/user/myself/results/behave.json', 'r'))  # This line causes the error
    sendoff = {}
    sendoff['results'] = data
    r = requests.post(MyAPIEndpoint, json=sendoff)
I'm getting the following error when running my Behave test:
HOOK-ERROR in after_all: ValueError: Expecting object: line 124 column 1
(char 3796)
ABORTED: By user.
The reported error is here in my JSON file:
[
{
...
} <-- line 124, column 1
]
However, behave.json is output after the run, and according to JSONLint it is valid JSON. I don't know the exact details of after_all(), but I think the issue is that the JSON file isn't done being written by the time I try to open it in after_all(). If I run json.load() a second time on the behave.json file after the file is written, it runs without error and I am able to view my JSON at the endpoint.
Any better explanation as to why this is happening? Any solution or change in logic to get past this?
Yes, it seems as though the file is still in the process of being written when I try to access it in after_all(). I put a small delay before opening the file in my code, then manually viewed the behave.json file and saw that there was no closing ] after the last }.
That explains it. I will create a new question to find out how to get past this, or whether a change in logic is required.
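One simple way past it, until a cleaner answer turns up, is to retry the load until the writer has flushed the closing bracket (a generic sketch, not behave-specific; the attempt count and delay are arbitrary):

```python
import json
import time

def load_when_complete(path, attempts=10, delay=0.5):
    # Retry json.load until the file parses, waiting out a writer that
    # has not yet flushed the closing bracket.
    for _ in range(attempts):
        try:
            with open(path) as f:
                return json.load(f)
        except ValueError:  # json.JSONDecodeError subclasses ValueError
            time.sleep(delay)
    raise ValueError("%s never became valid JSON" % path)
```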

How can I binary read a file in TCL and send it via XMLRPC to a server that is written in Python?

I have an XMLRPC server written in Python (using xmlrpclib) that has the following method defined:
def saveFileOnServer(fileName, xmldata):
    handle = open(fileName, "wb")
    handle.write(xmldata.data)
    handle.close()
    return 0
If I'm using a Python client to connect and send the file, everything works ok (file is transferred):
import sys, xmlrpclib
client = xmlrpclib.Server('http://10.205.11.28:10044')
with open("resource.res", "rb") as handle:
    binary_data = xmlrpclib.Binary(handle.read())
client.saveFileOnServer("C:\\Location\\resource.res", binary_data)
But... I have to connect to this XMLRPC server from a TCL script. And I'm doing the following:
package require XMLRPC
package require SOAP
package require rpcvar
package require http
set fd [open "resource.res" r]
fconfigure $fd -translation binary
set data [read $fd]
close $fd
XMLRPC::create ::PytharAgent::saveFileOnServer -name "saveFileOnServer" -proxy [::PytharAgent::endpoint] -params {fileName string file binary}
puts [::PytharAgent::saveFileOnServer "C:\\Location\\resource.res" $data]
From this I get the following error:
<class 'xml.parsers.expat.ExpatError'>:not well-formed (invalid token): line 2, column 154
invoked from within
"$parseProc $procVarName $reply"
(procedure "invoke2" line 17)
invoked from within
"invoke2 $procVarName $reply"
(procedure "::SOAP::invoke" line 25)
invoked from within
"::SOAP::invoke ::SOAP::_PytharAgent_saveFileOnServer {C:\Location\resource.res} IxNResourceItev1.0.0.0JTYPE2\tm_versio..."
("eval" body line 1)
invoked from within
"eval ::SOAP::invoke ::SOAP::_PytharAgent_saveFileOnServer $args"
(procedure "::PytharAgent::saveFileOnServer" line 1)
invoked from within
"::PytharAgent::saveFileOnServer "C:\\Location\\resource.res" $data"
invoked from within
"puts [::PytharAgent::saveFileOnServer "C:\\Location\\resource.res" $data]"
(file "test.pythat-agent.tcl" line 109)
I then took the binary data from the Python code and the binary data from the Tcl code and compared them with the original file. Viewing them in hex, I discovered that the data read by Tcl contained the original data plus some extra bytes from time to time, or with some bytes slightly modified.
So I'm guessing it might be related to Tcl and Python handling binary data differently. Or am I doing something wrong when reading with Tcl?
PS: I also found this issue that seems similar to mine, but I don't understand what the solution would be exactly.
Try this:
package require base64
XMLRPC::create ::PytharAgent::saveFileOnServer -name "saveFileOnServer" -proxy [::PytharAgent::endpoint] -params {fileName string file base64}
puts [::PytharAgent::saveFileOnServer "C:\\Location\\resource.res" [::base64::encode $data]]
Basically, binary does not seem to be a recognized data type in XML-RPC.
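For completeness, the server side needs no change to accept this: Python's XML-RPC layer maps the XML-RPC base64 element to a Binary object before your handler runs. Here is a Python 3 sketch of the question's server (xmlrpc.server replaces the Python 2 SimpleXMLRPCServer import; the port mirrors the question's endpoint):

```python
from xmlrpc.server import SimpleXMLRPCServer

def saveFileOnServer(fileName, xmldata):
    # xmlrpc maps the XML-RPC base64 element to a Binary object, so
    # xmldata.data already holds the raw decoded bytes whether the
    # client is Python or Tcl.
    with open(fileName, "wb") as handle:
        handle.write(xmldata.data)
    return 0

def serve(host="0.0.0.0", port=10044):
    # Port taken from the question's example endpoint.
    server = SimpleXMLRPCServer((host, port))
    server.register_function(saveFileOnServer)
    server.serve_forever()
```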

Decompress remote .gz file in Python

I have an issue with Python.
My case: I get a gzipped file from a partner platform (i.e. h..p//....namesite.../xxx).
If I click the link from my browser, it downloads a file like namefile.xml.gz.
So... if I read this file with Python, I can decompress and read it.
Code:
content = gzip.open('namefile.xml.gz', 'rb')
print content.read()
But I can't if I try to read the file from the remote source.
From the remote file I can read only the compressed bytes, not the decoded content.
Code:
response = urllib2.urlopen(url)
encoded = response.read()
print encoded
With this code I can read the compressed string, but I can't decompress it with gzip or lzip.
Any advice?
Thanks a lot
Unfortunately the method @Aya suggests does not work, since GzipFile extensively uses the seek method of the file object (not supported by response).
So you basically have two options:
Read the contents of the remote file into io.BytesIO and pass that object to gzip.GzipFile (if the file is small);
download the file into a temporary file on disk and use gzip.open.
There is another option (which requires some coding): implement your own reader using the zlib module. It is rather easy, but you will need to know about a magic constant (How can I decompress a gzip stream with zlib?).
If you use Python 3.2 or later, the bug in GzipFile (requiring tell support) is fixed, but apparently they aren't going to backport the fix to Python 2.x.
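The zlib route mentioned above boils down to one call once you know the magic constant (a sketch; 16 + MAX_WBITS is the documented way to ask zlib for gzip framing):

```python
import zlib

def gunzip_bytes(compressed):
    # 16 + MAX_WBITS is the "magic constant": it tells zlib to expect
    # a gzip wrapper (header and CRC trailer) around the deflate stream.
    return zlib.decompress(compressed, 16 + zlib.MAX_WBITS)
```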
For Python v3.2 or later, you can use the gzip.GzipFile class to wrap the file object returned by urllib2.urlopen(), with something like this...
import urllib2
import gzip
response = urllib2.urlopen(url)
gunzip_response = gzip.GzipFile(fileobj=response)
content = gunzip_response.read()
print content
...which will transparently decompress the response stream as you read it.
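For reference, on Python 3 the same idea no longer needs GzipFile at all, since gzip.decompress works on in-memory bytes (a sketch; the url argument is a placeholder for the partner link):

```python
import gzip
import urllib.request

def fetch_and_gunzip(url):
    # Download the whole compressed body, then decompress it in memory.
    with urllib.request.urlopen(url) as response:
        return gzip.decompress(response.read())
```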
