Hexadecimal and words mixed in Python bytes

I'm receiving frames from a websocket server and I'm not sure how to interpret some of the bytes objects, because they have actual words mixed in among the hex escapes.
I get something like this:
b'\x00\x17\x04\x00\x00\x00\xc0\x05FOCUS\x01\x00\xff\xfc\x00\x05;\xea\x01\x03\xe8\x81'
This one has 'FOCUS' and a ';' in it. I am expecting 'FOCUS' to be part of the payload, but I don't know why it's showing up as is, and not in hex form. Can someone explain what's going on and how I can unpack the rest of the data?
Also, it seems I'm getting the data in reverse order. I think \x81 is supposed to be the first byte of the frame.
I'm using Python 3.6 and the websocket-client lib. Thank you.
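A minimal sketch of what is going on with the repr, assuming nothing about the protocol beyond the bytes shown above; the interpretation of the first two bytes as a length field is only a guess for illustration:

frame = b'\x00\x17\x04\x00\x00\x00\xc0\x05FOCUS\x01\x00\xff\xfc\x00\x05;\xea\x01\x03\xe8\x81'

# Python's bytes repr shows any byte that happens to be printable ASCII as that
# character ('F' is 0x46, ';' is 0x3b); everything else is shown as \xNN escapes.
# The underlying data is the same either way.
print(frame.hex())     # every byte as two hex digits, nothing shown as text
print(list(frame))     # the same bytes as integers 0-255

# Once the field layout is known, struct can pull fixed-width fields out of the
# frame, e.g. the first two bytes as a big-endian unsigned short (the meaning of
# this field is only an assumption here):
import struct
(maybe_length,) = struct.unpack_from('>H', frame, 0)
print(maybe_length)    # 0x0017 == 23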

Related

Using python to measure Wi-Fi

I am working on a school project in which I must measure and log Wi-Fi (I know how to log the data, I just don't know the most efficient way to do it). I have tried using
subprocess.check_output('iwconfig', stderr=subprocess.STDOUT)
but that outputs bytes, which I really don't want to deal with (and I don't know how to, either, so if that is the only option, then can someone explain how to handle bytes). Is there any other way, maybe to get it in plain text? And please do not just give me the code I need, tell me how to do it.
Thank you in advance!
You are almost there. I assume that you are using python 3.x. iwconfig is sending you text encoded in whatever character set your terminal uses. That encoding is available as sys.stdin.encoding. So just put it together to get a string. BTW, you want a command list instead of a string.
import subprocess
import sys

raw = subprocess.check_output(['iwconfig'], stderr=subprocess.STDOUT)
data = raw.decode(sys.stdin.encoding)

Interpreting padded string

I have little experience with unicode strings. I am not even sure this fits the criteria.
In any case I was using nmap and ran:
# nmap -sV -O 192.168.0.8
against a box in my LAN. Nmap produced a string over several lines returned from an open port, but I cannot understand a lot of the output due to its formatting. For example, a small snippet looks like this:
-Port8081-TCP:V=6.00%I=7%D=10/20%Time=52642C3A%P=i686-pc-linux-gnu%r(FourOhFourRequest,37,"HTTP/1\.1\x20503\x20Service\x20Unavailable\r\nContent-Length:\x200\r\n\r\n")%r
My first thought was URL encoding, which requires decoding, but that's incorrect. It looks almost like padding from serial communication? Can anybody shed light on how to interpret the "\x200" or "\x20503", or another sequence that shows up often, "\x20"?
I thought about writing a small Python script to take in the entire string and convert to ASCII with:
>>> s = '<STRING>'
>>> eval('"' + s.replace('"', r'\"') + '"').encode('ascii')
Am I on the right track?
The string you see is a service fingerprint. It contains the responses that were received to the various probes that Nmap sends. If you think there is identifying information in the responses, please submit the fingerprint to the Nmap project to improve detection in the future.
More than likely, what happened is that the service is not sending any useful information. The sample you gave, for instance, does not have a Server: header that would identify the HTTP server.
To answer the technical problem of how to turn this string:
"HTTP/1\.1\x20503\x20Service\x20Unavailable\r\nContent-Length:\x200\r\n\r\n"
into an unescaped version, you can do this:
>>> print mystring
"HTTP/1\.1\x20503\x20Service\x20Unavailable\r\nContent-Length:\x200\r\n\r\n"
>>> print mystring.decode('string-escape')
"HTTP/1\.1 503 Service Unavailable
Content-Length: 0
"
Those numbers bring to mind hexadecimal values because of the 'x' in front.
I know that hexadecimal values actually start with '0x' and not just 'x', but I thought it was worth googling them as hex values with the '0x' prefix. I did get a full page of search results that seemed to contain these three values (perhaps it was inevitable that three random values would show up somewhere, but then again, perhaps not):
0x200, 0x20503, 0x20
Sorry that this isn't an answer as such, but I thought I would mention it since you didn't mention trying this in your post. I wanted to post this as a comment, but the option wasn't available for some reason...

Best way to send string data using python UDP packets?

To preface I'm very new to python (about 7 days) but I'm an experienced software eng undergrad.
I would like to send data between machines running python scripts. The idea I had (in order to simplify things) was to concatenate the data (strings & ints) into a string and do the parsing client-side.
The UDP packets send beautifully with simple strings, but when I try to send useful data Python always complains about the data I send; specifically, Python won't let me concatenate tuples.
In order to parse the data on the client I need to separate the data with a dash character: '-'.
nodeList is of type dictionary where the key is a string and value is a double.
randKey = random.choice( nodeList.keys() )
data = str(randKey) +'-'+ str(nodeList[randKey])
mySocket.sendto ( data , address )
The code above produces the following error:
TypeError: coercing to Unicode: need string or buffer, tuple found
I don't understand why it thinks it is a tuple I am trying to concatenate...
So my question is how can I correct this to keep Python happy, or can someone suggest I better way of sending the data?
Thank you in advance.
I highly suggest using Google Protocol Buffers (protobuf) for this, as it will handle the serialization on both ends of the line. It has Python bindings that let you use it easily with your existing Python program.
Using your example code you would create a .proto file like so:
message SomeCoolMessage {
    required string key = 1;
    required double value = 2;
}
Then after generating, you can use it like so:
randKey = random.choice( nodeList.keys() )
data = SomeCoolMessage()
data.key = randKey
data.value = nodeList[randKey]
mySocket.sendto ( data.SerializeToString() , address )
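One detail not shown above is importing the class from the module that protoc generates from the .proto file; the module name below is only an example following protoc's file_pb2.py naming convention:

# assuming the message above lives in some_cool_message.proto,
# `protoc --python_out=.` generates some_cool_message_pb2.py
from some_cool_message_pb2 import SomeCoolMessage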
I'd probably use the json module to serialize the data.
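A minimal sketch of that approach, reusing nodeList, mySocket and address from the question; recvSocket on the receiving side is just a placeholder name:

import json
import random

randKey = random.choice(list(nodeList.keys()))             # list() so it also works on Python 3
data = json.dumps({'key': randKey, 'value': nodeList[randKey]})
mySocket.sendto(data.encode('utf-8'), address)             # sendto wants bytes

# receiving side
payload, addr = recvSocket.recvfrom(4096)
msg = json.loads(payload.decode('utf-8'))
print(msg['key'], msg['value'])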
You need to serialize the data. Pickle does this for you built in, and you can ask pickle for an ASCII representation of the data instead of binary data (see the docs), or you could use json (it also serializes the data for you); both are in the standard library. But really there are a hundred thousand different libraries that handle ALL the work for you in getting data from one machine to another. I'd suggest using a library.
Depending on speed, etc. there are different trade-offs for the various libraries. In the standard library you get HTTP, and that's about it (well, that and raw sockets). But there are others.
If raw speed is more important than other things, ZeroMQ or Google's protocol buffers might be valid options.
For me, I usually use rpyc; it lets me be totally lazy and just call over to the other process across the network. It's fast enough usually.
You know that UDP has no guarantee that the data will ever show up on the other side, or that it will show up in order. For your application you may not care, I don't know, but I just thought I'd bring it up.

Handling unicode data in XMLRPC

I have to migrate data to OpenERP through XMLRPC by using TerminatOOOR.
I send a name with value "Rotule right Aurélia".
In Python the name will be encoded with the value 'Rotule right Aur\xc3\xa9lia'.
But in TerminatOOOR (the XML-RPC client) the data is encoded as 'Rotule middle Aur\357\277\275lia'.
So on the server side the data value is not decoded correctly and I get bad data.
TerminatOOOR is a Ruby plugin for Kettle (a Java product) and I guess it should encode data as UTF-8.
I just don't know why it happens like this.
Any help?
This issue comes from Kettle.
My program is using Kettle to get an Excel file, get the active sheet and transfer the data in that sheet to TerminateOOOR for further handling.
At the stage of reading data from the Excel file, Kettle cannot recognize the encoding, so it passes bad data to TerminateOOOR.
My workaround is to manually export the Excel file to CSV before giving the data to TerminateOOOR. By doing this, I lose the feature that maps an Excel column name to a variable name (provided by Kettle).
First off, whenever you deal with text (and all text is bound to contain some non-US-ASCII character sooner or later), you'll be much happier doing that in Python 3.x instead of in the 2.x series. If Py3 is not an option, try to always use from __future__ import unicode_literals (available in Python 2.6 and 2.7).
Basically, when you send text or any other data over the wire, it can only travel in the form of bytes (octets of bits), so it has to be encoded at some point. Try to find out exactly where that encoding takes place in your tool chain; if necessary, use a debugging tool (or deploy print( repr( x ) ) statements) to look into the relevant variables. The other software you mention is a Ruby plugin driving a Java tool, so there are several places where an encoding step could go wrong. You say that 'it should encode the data by utf-8', but on the other hand, when the receiving end sees the data of an incoming RPC request, that data should already be in UTF-8; it would have to be decoded to obtain unicode again.
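A quick sanity check in that spirit, sketched in Python; the only assumption is that UTF-8 is the intended wire encoding:

name = u'Rotule right Aurélia'

encoded = name.encode('utf-8')
print(repr(encoded))                    # b'Rotule right Aur\xc3\xa9lia' -- correct UTF-8
print(repr(encoded.decode('utf-8')))    # round-trips back to the original text

# The bad value from the question, \357\277\275, is octal for 0xEF 0xBF 0xBD,
# the UTF-8 encoding of U+FFFD (the replacement character): the accented letter
# was already lost before the data reached the wire.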

Python: How to transfer varying length arrays over a network connection

I need to transfer an array of varying length in which each element is a tuple of two integers. As an example:
path = [(1,1),(1,2)]
path = [(1,1),(1,2),(2,2)]
I am trying to use pack and unpack, however, since the array is of varying length I don't know how to create a format such that both know the format. I was trying to turn it into a single string with delimiters, such as:
msg = 1&1~1&2~
sendMsg = pack("s",msg) or sendMsg = pack("s",str(msg))
on the receiving side:
path = unpack("s",msg)
but that just prints 1 in this case. I was also trying to send 4 integers as well, which send and receive fine, so long as I don't include the extra string representing the path.
sendMsg = pack("hhhh",p.direction[0],p.direction[1],p.id,p.health)
on the receive side:
x,y,id,health = unpack("hhhh",msg)
The first was for illustration as I was trying to send the format "hhhhs", but either way the path doesn't come through properly.
I will also be looking at sending a 2D array of ints, but I can't seem to figure out how to send these more 'complex' structures across the network.
Thank you for your help.
While you can use pack and unpack, I'd recommend using something like YAML or JSON to transfer your data.
Pack and unpack can lead to difficult-to-debug errors and incompatibilities if you change your interface and have different versions trying to communicate with each other.
Pickle can give security problems and the pickle format might change between Python versions.
JSON is included in the standard Python distribution since 2.6. For YAML there is PyYAML.
You want some sort of serialization protocol. twisted.spread provides one such (see the Banana specification or Perspective Broker documentation). JSON or protocol buffers would be more verbose examples.
See also Comparison of data serialization formats.
If you include the message length as part of the message, then you will know how much data to read, so the entire string can be read off the network.
In any case, perhaps it would help if you posted some of the code you are using to send data across the network, or at least provided more of a description.
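To illustrate the length-prefix idea with the same pack/unpack calls the question uses, here is a minimal sketch; the big-endian 'h' fields and the helper names are assumptions made for the example, not part of the original code:

import struct

def pack_path(path):
    # number of points first (unsigned short), then each (x, y) pair as two shorts
    flat = [coord for point in path for coord in point]
    return struct.pack('!H%dh' % len(flat), len(path), *flat)

def unpack_path(data):
    (count,) = struct.unpack_from('!H', data, 0)
    flat = struct.unpack_from('!%dh' % (count * 2), data, 2)
    return list(zip(flat[::2], flat[1::2]))

msg = pack_path([(1, 1), (1, 2), (2, 2)])
print(unpack_path(msg))   # [(1, 1), (1, 2), (2, 2)]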
Take a look at xdrlib, it might help. It's part of the standard library, and:
The xdrlib module supports the External Data Representation Standard as described in RFC 1014, written by Sun Microsystems, Inc. June 1987. It supports most of the data types described in the RFC.
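A small sketch of that approach; flattening the (x, y) tuples into a flat list of ints is an assumption made here, since XDR itself has no tuple type:

import xdrlib

path = [(1, 1), (1, 2), (2, 2)]

packer = xdrlib.Packer()
# pack_array stores the element count for us, so the receiver needs no extra framing
packer.pack_array([c for point in path for c in point], packer.pack_int)
data = packer.get_buffer()

unpacker = xdrlib.Unpacker(data)
flat = unpacker.unpack_array(unpacker.unpack_int)
print(list(zip(flat[::2], flat[1::2])))   # [(1, 1), (1, 2), (2, 2)]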
Are pack and unpack mandatory?
If not, you could use JSON or YAML.
Don't use pickle because it is not secure.
