Python sockets: received frames queue management

I'm creating an instrument panel for a flight simulator. The sim sends out data via UDP, and the panel parses this data and uses it to transform and render an SVG. The panel runs on Linux.
The sim sends UDP frames faster than the panel can render them, so the frames queue up and the panel ends up lagging several seconds behind the sim. I haven't found any advice on how to configure or manage this queue beyond the MSG_PEEK flag, which does the exact opposite of what I want.
Each UDP frame contains the full set of key/value pairs, so ideally I'd discard any older frames in the queue when a new one is received.
How might I do any one of these:
read the next frame and discard any later frames in the queue
read the most recent frame and discard the rest
set the queue size to 1 or a small number
I'm sure there's a straightforward way to do this but an hour or so of searching and experimentation hasn't come up with a solution.
Here's a representation of the current panel code:
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", 54321))
sock.settimeout(0.5)

while True:
    try:
        frame = sock.recv(1024)
        parse_and_render(frame)
    except socket.timeout:
        pass

This is ugly but works:
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", 54321))
sock.setblocking(False)

frame = bytearray(1024)
extra_frame = bytearray(1024)

while True:
    try:
        # load the next available frame
        sock.recv_into(frame, 1024)
        # if there are more frames queued, keep the most recent one
        try:
            while sock.recv_into(extra_frame, 1024):
                frame, extra_frame = extra_frame, frame
        except BlockingIOError:
            pass
        # render the most recent frame
        parse_and_render(frame)
    except BlockingIOError:
        # no data yet; note that this non-blocking loop busy-waits
        pass
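For comparison, here is a slightly less ugly variant of the same idea: block in select() until at least one datagram is queued, then drain the socket in a non-blocking loop and keep only the last datagram. This is a minimal sketch reusing the question's port and parse_and_render; the 1024-byte buffer is assumed to be big enough for a full frame:

import select
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", 54321))
sock.setblocking(False)

def recv_latest(sock, bufsize=1024):
    # Block until at least one datagram is queued, without spinning.
    select.select([sock], [], [])
    latest = None
    while True:
        try:
            latest = sock.recv(bufsize)
        except BlockingIOError:
            # Queue drained; latest now holds the newest frame.
            return latest

while True:
    parse_and_render(recv_latest(sock))

Shrinking the kernel queue with sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, n) is another option, but the kernel enforces a minimum buffer size, so draining in user space is usually the more predictable approach.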

Related

After sending data to the server with threading.Timer (700 ms), why was it received on the other side at 685 ms?

I have been developing an application that measures the gap between two frames to time a stopwatch with a camera, so I need to trigger a function after a specific interval to send data to a TCP server. The issue is that the server sometimes receives the request in less than the specified time: if I run a thread with a 700 ms timer, I see requests arrive at the server at 0.682, 0.690, 0.692, 0.699 s, and if I calculate the gap from the real image it is the same.
The behaviour I expected is that if the function triggers after 700 ms and then does some processing, the measured gap must be bigger than 700 ms.
I don't know where the mistake is or how this can happen. All requests have a unique name, so it's easy to trace them.
This is the main core in which I trigger the method that sends requests to the server:
# 2nd image
def generate2ndTicketBasedonTime(trackedObject, attributes, object_id, event_obj):
    try:
        while not event_obj.wait(0.700):
            ImageProcessing.getInstance().serviceSnapshot(camera, DataToQeue)
            event_obj.set()
    except Exception:
        pass

# 1st image
def snapshot(self):
    ImageProcessing.getInstance().serviceSnapshot(camera, DataToQeue)
    event_obj = threading.Event()
    threading.Thread(target=generate2ndTicketBasedonTime,
                     args=[self, self.attributes, self.car.object_id, event_obj]).start()
# client side -> send data to the server
def sendRequestToCameraService(self, data):
    Request = {'cameraRequest': json.dumps(data)}
    self.SendToServer(Request)

def SendToServer(self, data):
    try:
        if self.isConnected:
            data = json.dumps(data).encode()
            dataSize = struct.pack("<i", len(data))
            self.currentSocket.sendall(b'\xAA\xBB' + dataSize + data + b'\xCC\xDD')
            self.triggerTime = time.time()
    except BrokenPipeError:
        self.isConnected = False
# server setting
def run(self):
    srv_address = "0.0.0.0"
    srv_port = 4504
    srv_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv_sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv_sock.bind((srv_address, int(srv_port)))
    srv_sock.listen(5)
    while True:
        cl_sock, cl_address = srv_sock.accept()
        ClientHandler(cl_sock, cl_address, f"Client -{len(self.clients)}")
Here is an example of the case I'm facing (screenshots omitted): the gap time between client and server side, the gap time before the 2nd method is triggered on the clock, and a 300 ms gap time.
I figured out the issue: it was the camera returning the frame before the specified time, for a mysterious reason.
1- I tried the code without sockets and threads to avoid complexity, and the main problem was indeed the camera.
2- There is a slight difference between when the applications start up (and take time.time()), so I'm now sending the timestamp from the main application to the other servers in the request, and the sequence of the process looks good now.
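A minimal sketch of that second fix, assuming the SendToServer method and JSON payload from the question: the sender stamps each request with its own time.time(), so the receiving side can measure the gap against the originating clock rather than its own start-up time.

import json
import time

def sendRequestToCameraService(self, data):
    # Attach the sender's timestamp so the receiver can compute the gap
    # against the originating clock instead of its own.
    Request = {
        'cameraRequest': json.dumps(data),
        'sentAt': time.time(),
    }
    self.SendToServer(Request)

# On the receiving side, after decoding the payload (hypothetical handler):
#     gap = time.time() - request['sentAt']

This only measures sensibly if both processes run on the same machine or on hosts with synchronised clocks.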

How to send and receive webcam stream using tcp sockets in Python?

I am trying to recreate this project. What I have is a server (my computer), and a client (my raspberry pi). What I am doing differently than the original project is that I am trying to use a simple webcam instead of a raspberry pi camera to stream images from my rpi to the server. I know that I must:
Get opencv image frames from the camera.
Convert a frame (which is a numpy array) to bytes.
Transfer the bytes from the client to the server.
Convert the bytes back into frames and view.
Examples would be appreciated.
self_driver.py
import SocketServer
import threading
import numpy as np
import cv2
import sys

ultrasonic_data = None

# BaseRequestHandler is used to process incoming requests
class UltrasonicHandler(SocketServer.BaseRequestHandler):
    data = " "

    def handle(self):
        while self.data:
            self.data = self.request.recv(1024)
            ultrasonic_data = float(self.data.split('.')[0])
            print(ultrasonic_data)

# VideoStreamHandler uses streams, which are file-like objects, for communication
class VideoStreamHandler(SocketServer.StreamRequestHandler):
    def handle(self):
        stream_bytes = b''
        try:
            stream_bytes += self.rfile.read(1024)
            image = np.frombuffer(stream_bytes, dtype="B")
            print(image.shape)
            cv2.imshow('F', image)
            cv2.waitKey(0)
        finally:
            cv2.destroyAllWindows()
            sys.exit()

class Self_Driver_Server:
    def __init__(self, host, portUS, portCam):
        self.host = host
        self.portUS = portUS
        self.portCam = portCam

    def startUltrasonicServer(self):
        # Create the Ultrasonic server, binding to localhost on port 50001
        server = SocketServer.TCPServer((self.host, self.portUS), UltrasonicHandler)
        server.serve_forever()

    def startVideoServer(self):
        # Create the video server, binding to localhost on port 50002
        server = SocketServer.TCPServer((self.host, self.portCam), VideoStreamHandler)
        server.serve_forever()

    def start(self):
        ultrasonic_thread = threading.Thread(target=self.startUltrasonicServer)
        ultrasonic_thread.daemon = True
        ultrasonic_thread.start()
        self.startVideoServer()

if __name__ == "__main__":
    # From SocketServer documentation
    HOST, PORTUS, PORTCAM = '192.168.0.18', 50001, 50002
    sdc = Self_Driver_Server(HOST, PORTUS, PORTCAM)
    sdc.start()
video_client.py
import socket
import time
import cv2

client_sock = socket.socket()
client_sock.connect(('192.168.0.18', 50002))
# We are going to 'write' to a file in 'binary' mode
conn = client_sock.makefile('wb')

try:
    cap = cv2.VideoCapture(0)
    cap.set(cv2.cv.CV_CAP_PROP_FRAME_WIDTH, 320)
    cap.set(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT, 240)
    start = time.time()

    while cap.isOpened():
        conn.flush()
        ret, frame = cap.read()
        byteImage = frame.tobytes()
        conn.write(byteImage)
finally:
    finish = time.time()
    cap.release()
    client_sock.close()
    conn.close()
You can't just display every received buffer of 1-1024 bytes as an image; you have to concatenate them up and only display an image when your buffer is complete.
If you know, out of band, that your images are going to be a fixed number of bytes, you can do something like this:
IMAGE_SIZE = 320 * 240 * 3

def handle(self):
    stream_bytes = b''
    try:
        while True:
            # keep appending until at least one full image is buffered
            stream_bytes += self.rfile.read(1024)
            while len(stream_bytes) >= IMAGE_SIZE:
                image = np.frombuffer(stream_bytes[:IMAGE_SIZE], dtype="B")
                stream_bytes = stream_bytes[IMAGE_SIZE:]
                print(image.shape)
                cv2.imshow('F', image)
                cv2.waitKey(0)
    finally:
        cv2.destroyAllWindows()
        sys.exit()
If you don't know that, you have to add some kind of framing protocol, like sending the frame size as a uint32 before each frame, so the server knows how many bytes to receive for each frame.
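As a sketch of such a framing protocol (these helper names are illustrative, not part of the original project, and they assume the question's file-like conn on the client and rfile on the server): the client prefixes every frame with its length packed as a 4-byte unsigned int, and the server reads back exactly that many bytes.

import struct

def send_frame(conn, payload):
    # 4-byte little-endian length prefix, then the frame bytes.
    conn.write(struct.pack("<I", len(payload)) + payload)
    conn.flush()

def recv_exactly(rfile, n):
    buf = b''
    while len(buf) < n:
        chunk = rfile.read(n - len(buf))
        if not chunk:
            raise ConnectionError("stream closed mid-frame")
        buf += chunk
    return buf

def recv_frame(rfile):
    (length,) = struct.unpack("<I", recv_exactly(rfile, 4))
    return recv_exactly(rfile, length)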
Next, if you're just sending the raw bytes, without any dtype or shape or order information, you need to embed the dtype and shape information into the server. If you know it's supposed to be, say, bytes in C order in a particular shape, you can do that manually:
image = np.frombuffer(stream_bytes, dtype="B").reshape(240, 320, 3)  # height, width, channels for a 320x240 BGR frame
… but if not, you have to send that information as part of your framing protocol as well.
Alternatively, you could send a pickle.dumps of the buffer and pickle.loads it on the other side, or np.save to a BytesIO and np.load the result. Either way, that includes the dtype, shape, order, and stride information as well as the raw bytes, so you don't have to worry about it.
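For example, the np.save/np.load route could look roughly like this (a sketch assuming the same conn/rfile objects and the recv_exactly helper from the framing sketch above); the .npy blob carries the dtype, shape, and order, but a length prefix is still needed so the receiver knows where each frame ends:

import io
import struct
import numpy as np

def send_array(conn, frame):
    buf = io.BytesIO()
    np.save(buf, frame)  # embeds dtype, shape and order alongside the raw bytes
    payload = buf.getvalue()
    conn.write(struct.pack("<I", len(payload)) + payload)
    conn.flush()

def recv_array(rfile):
    (length,) = struct.unpack("<I", recv_exactly(rfile, 4))
    return np.load(io.BytesIO(recv_exactly(rfile, length)))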
The next problem is that you're exiting as soon as you display one image. Is that really what you want? If not… just don't do that.
But that just raises another problem. Do you really want to block the whole server with that cv.waitKey? Your client is capturing images and sending them as fast as it can; surely you either want to make the server display them as soon as they arrive, or change the design so the client only sends frames on demand. Otherwise, you're just going to get a bunch of near-identical frames, then a many-seconds-long gap while the client is blocked waiting for you to drain the buffer, then repeat.
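As a rough sketch of the frames-on-demand design (the request byte and helpers are illustrative, reusing send_frame/recv_frame from the framing sketch above): the server asks for a frame whenever it is ready to display one, and the client captures and sends exactly one frame per request.

# Server side: ask for a frame only when ready to display one.
def request_frame(conn_sock, rfile):
    conn_sock.sendall(b'\x01')        # hypothetical "send me a frame" byte
    return recv_frame(rfile)

# Client side: wait for a request, then capture and send a single frame.
def serve_frames(client_sock, conn, cap):
    while client_sock.recv(1) == b'\x01':
        ret, frame = cap.read()
        if not ret:
            break
        send_frame(conn, frame.tobytes())

This way the client never races ahead of the server, and the socket buffers never fill up with stale frames.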

Python Socket server check for connection in the background

I am working on a Wi-Fi thermostat with an app running on my iPhone. It uses sockets to communicate with a Python program using the built-in socket library. The problem I'm having is that I would like to be able to change the temperature when the phone is not connected. However, the server waits for a connection for 1 second and then times out (the minimum time for the iPhone to connect), and this doesn't let me adjust the temperature smoothly with a rotary encoder. Is there a way to listen in the background?
import sys
import socket
import os
import time

temp = 15

while True:
    try:
        HOST = '192.168.1.22'
        PORT = 10000
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind((HOST, PORT))
        s.listen(1)
        s.settimeout(1)
        conn, addr = s.accept()
        #conn.send(str(9).encode())
        conn.send(str(temp).encode())
        while True:
            data = conn.recv(1024)
            if not data:
                break
            print(data)
            print(data.decode())
            data2 = data.decode()
            if int(data2) in range(5, 31):
                print(data2)
                print("Setting the temperature to " + str(data2) + "°")
                conn.send(("Setting the temperature to " + str(data2)).encode())
                temp = data2
            else:
                print("Not in range")
                conn.send("Not in range!\n".encode())
    except:
        print("No Connection!")
Thanks!
Your terminology is a bit confusing, but I think I see what you're trying to do. When your program executes accept with a 1 second timeout, that's not "searching" -- it's simply waiting for a client.
It sounds like you should split your code into three pieces:
something that actually adjusts the thermostat
something that listens for a connection from your iPhone client (a TCP listener)
something that waits for ("listens for") adjustments from the rotary encoder.
I would put each in a separate thread (using the python threading module). Create a FIFO queue (with the queue module). Have the first (thermostat-adjuster) thread wait on the queue (Queue.get), and have the other two accept instructions (from TCP and rotary encoder, respectively) and feed commands through the queue (Queue.put).
Then you can get rid of the timeout in your TCP listener, and just have it block on the accept indefinitely. Likewise, your rotary encoder listener can simply wait for adjustments. And the thermostat-adjuster thread can just block waiting on instructions from the queue. This makes all three much easier to program, understand and troubleshoot.
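A minimal sketch of that structure, assuming the host and port from the question and with the rotary-encoder and temperature-control details stubbed out as hypothetical functions:

import socket
import threading
import queue

commands = queue.Queue()

def thermostat_worker():
    # Blocks until someone puts a new target temperature on the queue.
    while True:
        target = commands.get()
        print("Setting the temperature to " + str(target))  # replace with real control code

def tcp_listener(host='192.168.1.22', port=10000):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(1)
    while True:
        conn, addr = srv.accept()          # blocks indefinitely; no timeout needed
        with conn:
            data = conn.recv(1024)
            if data:
                commands.put(int(data.decode()))

def encoder_listener():
    while True:
        commands.put(read_rotary_encoder())  # hypothetical blocking read of the encoder

threading.Thread(target=tcp_listener, daemon=True).start()
threading.Thread(target=encoder_listener, daemon=True).start()
thermostat_worker()

Because the worker blocks on commands.get(), it reacts immediately to whichever source produces an adjustment first, and neither listener ever has to time out.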

How do I implement a raw frame capture using Python 3.3?

Note: I'm not sure if this is a programming issue or a hardware/OS-specific issue.
I'm trying to capture raw Ethernet frames using Python 3.3's socket class. Looking directly at the example in the Python docs:
import socket
import struct

# CAN frame packing/unpacking (see 'struct can_frame' in <linux/can.h>)
can_frame_fmt = "=IB3x8s"
can_frame_size = struct.calcsize(can_frame_fmt)

def build_can_frame(can_id, data):
    can_dlc = len(data)
    data = data.ljust(8, b'\x00')
    return struct.pack(can_frame_fmt, can_id, can_dlc, data)

def dissect_can_frame(frame):
    can_id, can_dlc, data = struct.unpack(can_frame_fmt, frame)
    return (can_id, can_dlc, data[:can_dlc])

# create a raw socket and bind it to the 'vcan0' interface
s = socket.socket(socket.AF_CAN, socket.SOCK_RAW, socket.CAN_RAW)
s.bind(('vcan0',))

while True:
    cf, addr = s.recvfrom(can_frame_size)
    print('Received: can_id=%x, can_dlc=%x, data=%s' % dissect_can_frame(cf))
    try:
        s.send(cf)
    except OSError:
        print('Error sending CAN frame')
    try:
        s.send(build_can_frame(0x01, b'\x01\x02\x03'))
    except OSError:
        print('Error sending CAN frame')
I get the following error:
OSError: [Errno 97] Address family not supported by protocol.
breaking at this specific line:
s = socket.socket(socket.AF_CAN, socket.SOCK_RAW, socket.CAN_RAW)
The only change I made to the code was the actual interface name (i.e. 'em1'). I'm using Fedora 15.
Looking further into the Python source code, it appears that AF_CAN (the address family) and CAN_RAW (the protocol) aren't the correct pair for what I want.
How do I capture raw Ethernet frames for further processing?
Ultimately, what I need to be able to do is capture raw Ethernet frames and process them as they come into the system.
I was finally able to do this with the following:
import socket
import struct
import time

# 0x0003 is ETH_P_ALL: capture every Ethernet protocol on every interface
s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(0x0003))

test = []
while True:
    now = time.time()
    message = s.recv(4096)
    # Process the message from here
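Once a frame is captured, the Ethernet header can be peeled off with struct. Here is a minimal sketch of processing the received bytes, based on the standard 14-byte Ethernet II header (the helper name is illustrative):

import struct

def parse_ethernet_header(frame):
    # Standard Ethernet II header: 6-byte dst MAC, 6-byte src MAC, 2-byte EtherType
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    fmt = lambda mac: ":".join("%02x" % b for b in mac)
    return fmt(dst), fmt(src), ethertype, frame[14:]

# Inside the capture loop above:
#     dst, src, ethertype, payload = parse_ethernet_header(message)
#     print("%s -> %s type=0x%04x (%d payload bytes)" % (src, dst, ethertype, len(payload)))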

Is multicast appropriate for one to many streaming within a localhost?

I have a central data feed that I want to redistribute to many clients. The data feed produces approx. 1.8 kB/s. Currently I'm writing the feed to a file and each client reads off the end of the file. Something about this just seems wrong. Here is pseudo code for what I have now...
The feed:
o = open('feed.txt','a',0) #no buffering, maybe line buffer would be better
while 1:
data = feed.read(8192)
data = parse_data(data)
o.write(data)
time.sleep(0.01)
The server (each client connects in a new thread):
feed = open('feed.txt','r')
feed.seek(-1024,2)
while 1:
dat = feed.read(1024)
if len(dat)==0:
# For some reason if the end of the file is reached
# i can't read any more data, even there is more.
# some how backing off seems to fix the problem.
self.feed.seek(-1024,2)
self.feed.read(1024)
buffer += dat
idx = buffer.rfind('\n')
if idx>0:
data = buffer[:idx]
buffer = buffer[idx+1:]
for msg in data.split('\n'):
client.send(msg)
time.sleep(0.01)
What I'd like to do is just replace the file with a socket and write the messages directly to multicast packets. Any time a new client connects to the server I just spin up a new thread and start listening for the multicast packets. Are there any standard design patterns to handle this case?
Even simpler, just have all the clients listen for the multicast feed on the same group and port. Then your server doesn't even need to track pseudo-connections.
We use a similar scheme for some of the software on our internal network, based on the fact that multicasting is "mostly reliable" on our networking infrastructure. We've stress-tested the load and don't start dropping packets until there are over 30K messages/sec.
#!/usr/bin/python
import sys
import socket
import struct

ADDR = "239.239.239.9"
PORT = 7999

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)  # must be set before bind
sock.bind((ADDR, PORT))
# join the multicast group so the kernel delivers traffic for this address
mreq = struct.pack("4sl", socket.inet_aton(ADDR), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    data, addr = sock.recvfrom(2048)
    print(data)
    sys.stdout.flush()
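The sending side needs only a small change. Here is a rough sketch of the feed publishing each parsed message to the same group and port (feed and parse_data are the question's own stand-ins):

import socket
import time

ADDR = "239.239.239.9"
PORT = 7999

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# keep the multicast traffic on the local network segment
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)

while True:
    data = parse_data(feed.read(8192))
    for msg in data.split('\n'):
        sock.sendto(msg.encode(), (ADDR, PORT))
    time.sleep(0.01)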
