Can't seek in streamed file - python

I have developed a server that serves audio files over HTTP. Using the Content-Length header I am able to show the current position, but this source isn't seekable. How can I make it seekable?
Some people recommended sending an Accept-Ranges: bytes header, but when I tried that the audio didn't play at all anymore.

Think about what you're asking: a stream of bytes is arriving over the network, and you want random access over it? You can't have that. What you can do is implement buffering yourself, which you could do reasonably transparently with the io module. If you want to seek forward, discard the intermediate blocks; if you want to seek backward, you'll have to hold the stream in memory until you no longer need it.
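A rough sketch of the "seek forward by discarding" idea, where stream is any non-seekable binary file-like object such as an HTTP response body:

def skip_forward(stream, nbytes, chunk=64 * 1024):
    # Read and throw away nbytes from a non-seekable stream.
    while nbytes > 0:
        data = stream.read(min(chunk, nbytes))
        if not data:
            break  # the stream ended before reaching the target position
        nbytes -= len(data)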
If you don't want to buffer the entire stream client-side, you need a way to tell the server to seek to a different position and restart streaming from there.
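If the server does support range requests (it advertises Accept-Ranges: bytes and answers a Range header with 206 Partial Content), the client can "seek" by simply reopening the stream at a byte offset. A minimal sketch with urllib; the URL is a placeholder:

import urllib.request

def open_at(url, offset):
    # Ask the server to restart the stream at the given byte offset.
    req = urllib.request.Request(url, headers={'Range': 'bytes=%d-' % offset})
    resp = urllib.request.urlopen(req)
    # resp.status 206 means the server honoured the Range header;
    # 200 means it ignored it and is sending the whole file again.
    return resp

stream = open_at('http://example.com/audio.mp3', 1024 * 1024)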

Related

How to send and receive a file in SocketCAN or Python-can?

I want to send a text file from one serial device (slcan0) to another serial device (slcan1). Can this operation be performed with SocketCAN? The serial CAN device I am using is the CANtact toolkit. Or can the same operation be done with python-can?
When you want to send a text file over the CAN bus, you first have to decide which CAN ID to use for sending and receiving.
Most likely your text file is larger than 8 bytes, so you will have to use a higher-level protocol on top of CAN.
ISO-TP allows up to 4095 bytes of data in one message.
If this is still not enough, you will have to invent another protocol for sending and receiving the data, e.g. first send the length of the data, then send the data in chunks of 4095 bytes.
Once you have figured this out, it does not really matter whether you use SocketCAN, Python-CAN, pyvit or anything else.
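A hedged sketch of the "length first, then chunks" idea over raw classic CAN frames (8 data bytes each) with python-can; the slcan0 channel matches the question, but the 0x123 arbitration ID is an arbitrary choice:

import struct
import can

def send_file(path):
    # slcan0 is assumed to already be up as a SocketCAN interface (e.g. via slcand)
    bus = can.interface.Bus(channel='slcan0', bustype='socketcan')
    data = open(path, 'rb').read()
    # first frame: 4-byte big-endian length of the payload
    bus.send(can.Message(arbitration_id=0x123,
                         data=struct.pack('>I', len(data)),
                         is_extended_id=False))
    # remaining frames: the payload, 8 bytes at a time
    for i in range(0, len(data), 8):
        bus.send(can.Message(arbitration_id=0x123,
                             data=data[i:i + 8],
                             is_extended_id=False))
    bus.shutdown()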

ssl socket - Get the real number of bytes of the original message (without the encryption)

I have server/client code that transfers files in Python using sockets.
I was thinking about using SSL, but there is one simple problem.
I am using the buffer size passed to socket.recv() in order to know how many bytes have been transferred so far.
If I add SSL now, each message would grow (because of the encryption) and I won't be able to tell how much of the original file has been transferred.
Is there a way to get the original message size after the encryption?
Or do you have another method of knowing when the file has been completely transferred?
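One common method is to make progress tracking independent of the transport: prefix the file with its byte count and count plaintext bytes on the receiving side. A hedged sketch (note that recv() on an ssl-wrapped socket returns the decrypted data, so the counts refer to original bytes; the 8-byte header is an arbitrary choice):

import struct

def recv_exact(sock, n):
    # Keep calling recv() until exactly n bytes have been collected.
    buf = b''
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError('socket closed early')
        buf += chunk
    return buf

def recv_file(sock, path):
    # The sender is assumed to send an 8-byte length header before the file.
    size, = struct.unpack('!Q', recv_exact(sock, 8))
    received = 0
    with open(path, 'wb') as f:
        while received < size:
            chunk = sock.recv(min(65536, size - received))
            if not chunk:
                raise ConnectionError('socket closed early')
            f.write(chunk)
            received += len(chunk)  # received / size gives the progress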

Python TCP socket for a lot of data

We (as a project group) are currently stuck on how to handle live data coming in to our server.
We are getting updates on the data every second, and we would like to insert them into our database (security is currently not an issue, because it is a school project). We tried Python's SocketServer and asyncio to create a TCP server to which the data can be sent.
We got this working with different libraries etc., but we are stuck on the fact that if we keep an open connection with the client (in this case hardware which sends data every second), we can't split the individual JSON or XML messages: they all run together.
We know why: TCP is a stream protocol that preserves byte order but has no notion of message boundaries.
Any thoughts on how to handle this, so that every message sent can be separated from the others?
Recreating the socket won't be the right option if I recall correctly.
What you will have to do is ensure that there is a clear delimiter for each message. For example, the first 6 characters of every message could be the length of the message: whatever reads from the socket decodes the length, then reads that number of bytes and passes the data on to whatever needs it. Another way, if there is a character/byte which never appears in the content, is to send it immediately before a message; for example control-A (binary value 1) could be the lead-in character, with control-B (binary value 2) as the lead-out. Again, the server looks for these characters framing each message.
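A minimal sketch of the lead-in/lead-out variant, assuming bytes 1 and 2 never occur in the payload (true for plain-text JSON or XML, where raw control characters are not allowed):

LEADIN, LEADOUT = b'\x01', b'\x02'

def send_msg(sock, payload):
    sock.sendall(LEADIN + payload + LEADOUT)

def recv_msgs(sock):
    # Generator that yields one complete message at a time.
    buf = b''
    while True:
        data = sock.recv(4096)
        if not data:
            return
        buf += data
        while LEADOUT in buf:
            frame, buf = buf.split(LEADOUT, 1)
            yield frame.split(LEADIN, 1)[-1]  # drop anything before the lead-in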
If you can't change the client side (the thing sending the data), then you are going to have to parse the input. You can't just add a delimiter to something that you don't control.
An alternative is to use a header that encodes the size of the message that will be sent. Let's say you use a header of 4 bytes. The client first sends the server a header with the size of the message to come, then sends the message itself (up to 4 GiB or thereabouts). The server knows that it must first read 4 bytes (the header); it decodes the size n that the header contains, then reads n bytes from the socket. You are guaranteed to have read exactly one complete message. Using special delimiters is dangerous, as you MUST know all possible values that a client can send.
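A hedged sketch of that 4-byte header scheme:

import struct

def send_msg(sock, payload):
    # 4-byte big-endian length header, then the message itself
    sock.sendall(struct.pack('!I', len(payload)) + payload)

def recv_exact(sock, n):
    # Keep calling recv() until exactly n bytes have been collected.
    buf = b''
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError('connection closed mid-message')
        buf += chunk
    return buf

def recv_msg(sock):
    size, = struct.unpack('!I', recv_exact(sock, 4))
    return recv_exact(sock, size)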
It really depends on the type of data you are receiving: the kind of connection, the latency, and so on. If you have a pause of one second between packets and your connection is consistent, you could probably get away with first reading the entire buffer once to clear it, then, as soon as data becomes available, reading it and clearing the buffer again. Not a great approach, but it might work for what you need, and no parsing is involved.

Python FTP and Streams

I need to create a "turning table" platform. My server must be able to take a file from FTP A and send it to FTP B. I have built a lot of file transfer systems, so I have no problem with ftplib, Aspera, S3 and other transfer protocols.
The thing is that I have big files (150 GB) on FTP A, and many transfers will occur at the same time, from and to many FTP servers and others.
I don't want my platform to actually store these files in order to send them to another location, and I don't want to load everything into memory either... I need to "stream" the binary data from A to B, with minimal load on my transfer platform.
I am looking at https://docs.python.org/2/library/io.html with BufferedReader and BufferedWriter, but I can't find examples and the documentation is sorta cryptic to me...
Anyone has a starting point?
import io

buff = io.open('/var/tmp/test', 'wb')

def loadbuff(data):
    buff.write(data)

self.ftp.retrbinary('RETR ' + name, loadbuff, blocksize=8)
So my data is coming in buff, which is a <_io.BufferedWriter name='/var/tmp/test'> object, but how can I start reading from it while ftplib keeps downloading?
Hope I'm clear enough, any idea is welcomed.
Thanks
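One plausible direction, as a sketch rather than a definitive implementation: instead of writing to a file, connect retrbinary on the source server to storbinary on the destination server through an OS pipe, so back-pressure keeps only a small window of the file in memory. The hostnames and credentials below are placeholders:

import ftplib
import os
import threading

def relay(name):
    src = ftplib.FTP('ftp-a.example.com', 'user', 'password')
    dst = ftplib.FTP('ftp-b.example.com', 'user', 'password')

    rfd, wfd = os.pipe()
    reader = os.fdopen(rfd, 'rb')
    writer = os.fdopen(wfd, 'wb')

    def download():
        # retrbinary pushes each downloaded chunk into the pipe; the pipe's
        # limited capacity applies back-pressure, so memory use stays small.
        src.retrbinary('RETR ' + name, writer.write, blocksize=64 * 1024)
        writer.close()  # EOF tells the uploading side the file is complete

    t = threading.Thread(target=download)
    t.start()
    # storbinary reads from the pipe until EOF and uploads the data as STOR
    dst.storbinary('STOR ' + name, reader, blocksize=64 * 1024)
    t.join()
    reader.close()
    src.quit()
    dst.quit()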

Google protocol buffer for parsing Text and Binary protocol messages in network trace (PCAP)

I want to parse application-layer protocols from a network trace using Google Protocol Buffers and replay the trace (I am using Python). I need suggestions for automatically generating the protocol message description (the .proto file) from a network trace.
So you want to reconstruct what .proto messages were being passed over the application-layer protocol?
This isn't as easy as it sounds. First, .proto messages can't be sent raw over the wire, as the receiver needs to know how long they are. They need to be encapsulated somehow, maybe in an HTTP POST or with a raw 4-byte size prepended. I don't know what it would be for your application, but you'll need to deal with that.
Second, you can't reconstruct the full .proto from the messages alone. You only get tag numbers and types, not names. In addition, you will lose information about submessages - submessages and plain strings are encoded identically (you could probably tell which is which by eyeballing them, but I don't think you could do it automatically). You also will never know about optional items that never got sent. But you could parse the buffer without the proto and get some reasonable data (ints, repeated strings, and such).
Third, you need to reconstruct the application byte stream from the pcap log. I'm not sure how to do that, but I suspect there are tools that would do that for you.
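To illustrate the second point: the protobuf wire format can be walked without a .proto file, yielding field numbers and wire types but no names. A rough sketch that handles only the standard wire types:

import struct

def read_varint(buf, pos):
    # Decode a base-128 varint starting at pos; return (value, new_pos).
    result = shift = 0
    while True:
        b = buf[pos]
        pos += 1
        result |= (b & 0x7F) << shift
        if not b & 0x80:
            return result, pos
        shift += 7

def walk(buf):
    pos = 0
    while pos < len(buf):
        key, pos = read_varint(buf, pos)
        field, wire = key >> 3, key & 7
        if wire == 0:    # varint
            value, pos = read_varint(buf, pos)
        elif wire == 1:  # 64-bit fixed
            value = struct.unpack_from('<q', buf, pos)[0]; pos += 8
        elif wire == 2:  # length-delimited: string, bytes or submessage
            length, pos = read_varint(buf, pos)
            value = buf[pos:pos + length]; pos += length
        elif wire == 5:  # 32-bit fixed
            value = struct.unpack_from('<i', buf, pos)[0]; pos += 4
        else:
            raise ValueError('unsupported wire type %d' % wire)
        print('field %d, wire type %d:' % (field, wire), value)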
