The final requirement is to create a system which can stream video footage via a UNIX IPC socket from Python to Rust. The Python script has exclusive access to the camera/video. I'm pretty new to Rust.
I have tried an approach in which control flows as follows:
1. In Python, as the video comes in, extract each frame as a NumPy array.
2. Convert the array to a string, then to bytes.
3. Send it over a UNIX IPC socket.
4. At the receiving end, convert it back into a string.
5. Parse the string to recover a usable array and build an image from it.
Sender:
import socket, os, cv2, numpy

# print the whole array without truncating
numpy.set_printoptions(threshold=numpy.inf)

# declare the camera socket path and unlink it if already in use
camera_socket_path = '/home/user/exps/test.sock'
try:
    os.unlink(camera_socket_path)
except OSError:
    pass

# create and listen on the socket
camera_socket = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
camera_socket.bind(camera_socket_path)
camera_socket.listen()
conn, addr = camera_socket.accept()

vidcap = cv2.VideoCapture('example.mp4')
success, image = vidcap.read()
print(str(image))
while success:
    conn.send(bytes(str(image), 'utf-8'))
    success, image = vidcap.read()
    break  # after sending a frame, for testing
conn.close()
Receiver:
use std::os::unix::net::UnixStream;
use std::io::prelude::*;
use image::RgbImage;
use ndarray::Array3;

fn array_to_image(arr: Array3<u8>) -> RgbImage {
    assert!(arr.is_standard_layout());
    let (height, width, _) = arr.dim();
    let raw = arr.into_raw_vec();
    RgbImage::from_raw(width as u32, height as u32, raw)
        .expect("container should have the right size for the image dimensions")
}

fn main() {
    // connect to the UNIX IPC socket
    let mut stream = UnixStream::connect("/home/user/exps/test.sock").unwrap();
    loop {
        // 2074139 is the length of the string printed at the sender :D
        // I thought it might work out but it didn't, obviously
        // the docs suggest a power of two; the default was 1024
        let mut buf = [0; 2074139];
        let count = stream.read(&mut buf).unwrap();
        let response = String::from_utf8(buf[..count].to_vec()).unwrap();
        // TODO: write a parser that converts the string to an Array3 by iterating over lines()
        // let pic = array_to_image(convert(response));
        println!("{}", response);
        break; // get a single frame, for testing
    }
}
The two strings look nothing like each other. In the final version, I hope to do this continuously for an incoming video stream from a camera, via something like cv2.VideoCapture(). What steps do I need to take to achieve this?
An mp4 file (and a decoded frame, for that matter) consists of arbitrary byte sequences, not UTF-8 strings. So you should send the raw bytes and receive them into a Vec<u8> or a similar data type.
There is no reason to mangle things through a string encoding; UNIX sockets support arbitrary bytes.
For debugging purposes you can print the bytes in hex format, but you shouldn't send them that way, since that too would be wasteful.
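For example, here is a minimal sketch of the sender without the string round-trip, assuming a simple hand-rolled header (height, width, and payload length packed as big-endian u32s — the layout is my invention, not part of any standard). The Rust side would then read the 12 header bytes, parse the three u32s, and read exactly that many payload bytes into a Vec<u8>:
import socket, struct, cv2

camera_socket_path = '/home/user/exps/test.sock'
camera_socket = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
camera_socket.bind(camera_socket_path)
camera_socket.listen()
conn, addr = camera_socket.accept()

vidcap = cv2.VideoCapture('example.mp4')
success, frame = vidcap.read()
while success:
    data = frame.tobytes()  # raw BGR pixel bytes, no str()/utf-8 round-trip
    # hypothetical header: height, width, byte count as big-endian u32s
    conn.sendall(struct.pack('!III', frame.shape[0], frame.shape[1], len(data)))
    conn.sendall(data)
    success, frame = vidcap.read()
conn.close()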
Edit: To clarify, it does compile; it just crashes almost immediately after the stream loads. It does connect properly.
So, I've been trying for a very long time to complete this project of mine. What I'm trying to do is send a video feed over sockets using cv2. It works over LAN, but not over WAN. I get the following error:
"ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host"
Code for the client (sending video):
import cv2
import numpy as np
import socket
import pickle

host = "<insert public ip of recipient>"
port = 7643

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # create a TCP socket
s.connect((host, port))  # connect to the host & port

cap = cv2.VideoCapture(1)
while cap.isOpened():  # while the camera is being used
    ret, frame = cap.read()  # read each frame from the webcam
    if ret:
        # encode each frame; instead of sending live video it sends pictures one by one
        encoded = pickle.dumps(cv2.imencode(".jpg", frame)[1])
        s.sendall(encoded)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # break once the q key is pressed
        break
cap.release()
cv2.destroyAllWindows()
Code for the recipient (receiving video):
import cv2
import socket
import pickle

host = "192.168.1.186"
port = 7643
boo = True

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # create a TCP socket
s.bind((host, port))  # bind the socket to this host & port
s.listen(10)  # backlog: how many pending connections to queue, not a data size
conn, addr = s.accept()

while boo:
    try:
        pictures = conn.recv(256000)  # receive up to 256000 bytes of picture data
        decoded = pickle.loads(pictures)  # unpickle the received bytes
        frame = cv2.imdecode(decoded, cv2.IMREAD_COLOR)  # decode into a frame we can see!
        cv2.imshow("unique", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # break once the q key is pressed
            break
    except:
        print("Something is broken...")
        boo = False
cv2.destroyAllWindows()
s.close()
You apparently got lucky when running this over your LAN. Your code is not correctly sending a stream of images from sender to recipient, because stream sockets like TCP are a little more complicated to use by their nature. The main issue is that your sender is not communicating where each image ends and the next begins, and your recipient similarly is not organizing the data it reads into individual full images.
That is to say, socket.sendall() does not communicate the end of its data to the recipient; you need to include that information in the actual data that you send.
Error handling
But before fixing that, you should fix your error handling on the recipient so that you get more useful error messages. When you write
except:
    print("Something is broken...")
You're throwing away something that would have helped you more, like "EOFError: Ran out of input" or "_pickle.UnpicklingError". Don't throw that information away. Instead, print it:
except:
    traceback.print_exc()  # requires `import traceback` at the top
or re-raise it:
except Exception:
    # do whatever you want to do first
    raise  # a bare raise preserves the original traceback
or, since you're happy to let it crash your program and just want to do cleanup first, put the cleanup in a finally clause; no except needed:
try:
    # your code
finally:
    # the cleanup
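With the cleanup from your recipient code, that could look like the following sketch (receive_loop is a hypothetical stand-in for your recv/imshow loop):
try:
    receive_loop(conn)  # hypothetical: your recv/imshow loop goes here
finally:
    # runs whether the loop finished normally or crashed
    cv2.destroyAllWindows()
    s.close()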
Stream sockets and the sender
Back to your socket code: you're using stream sockets. They send a stream of bytes, and while you can count on those bytes arriving in the correct order, you can't count on when they'll arrive. If you send b"something" and then b"something else", you could receive b"somethingsomething else" all at once, or b"somet" and then later b"hing", etc. Your receiver needs to know where the dividing line is between each message, so step one is to put dividing lines between the messages. There are a few ways to do this:
- Making all messages the same size. Since you're encoding frames as JPEGs, which vary in size depending on how each one compresses, that would be a little complicated and maybe not what you want anyway.
- Sending an actual marker in the bytes, like a newline b"\n" or b"\r\n". This is more complicated to make work for your situation.
- Sending the size of each message before you send it. This should be the easiest for your case.
Of course if you're now sending the size of the message, that's just like another message, and your recipient needs to know where this size message ends. Once again you could end the size message with a newline:
s.sendall("{}\n".format(len(encoded)).encode("ascii"))
Or you could pack it into a fixed-length number of bytes, for example 4:
s.sendall(struct.pack("!i", len(encoded)))  # requires `import struct`
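Put together, the sending side of the loop might look like this sketch (the same pickle/imencode pipeline as your client, plus the 4-byte length prefix):
import struct

while cap.isOpened():
    ret, frame = cap.read()
    if ret:
        encoded = pickle.dumps(cv2.imencode(".jpg", frame)[1])
        s.sendall(struct.pack("!i", len(encoded)))  # 4-byte length first
        s.sendall(encoded)                          # then the message itself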
The receiver
Your receiver code now needs to read full messages, despite the fact that socket.recv() can return partial messages, or parts of multiple messages together. You can keep a buffer of the incoming data. Add to the end, and then remove full messages from the front:
buf = b''  # bytes, not str: recv() returns bytes
while boo:
    new_data = s.recv(4096)
    if not new_data:
        break  # exit, because the socket has been closed
    buf += new_data
    # if there's a full message at the beginning of buf:
    #     remove that message, but leave the rest in buf
    #     process that message
    # else:
    #     nothing, just go back to receiving more
Of course, to find your full message, first you need to get the full size message. If you encoded all your size messages as 4 bytes with struct.pack, just receive data until buf is 4 or more bytes long, then split it into the size and any remaining data:
message_size = struct.unpack("!i", buf[:4])[0]
buf = buf[4:]
Then do the same thing with the image message. Receive data until you have at least message_size bytes of data, split your buffer into the first image message, which you can decode and display, and keep the remainder in the buffer.
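One common way to package all of that is a helper that loops on recv() until it has exactly n bytes; a sketch, assuming the 4-byte struct prefix from above:
import struct

def recv_exact(sock, n):
    """Read exactly n bytes from a stream socket."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-message")
        buf += chunk
    return buf

while True:
    size = struct.unpack("!i", recv_exact(conn, 4))[0]  # size message first
    payload = recv_exact(conn, size)                    # then one full image message
    # unpickle/imdecode payload here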
Security
The documentation for pickle says:
Warning: The pickle module is not secure. Only unpickle data you trust.
It is possible to construct malicious pickle data which will execute arbitrary code during unpickling. Never unpickle data that could have come from an untrusted source, or that could have been tampered with.
In your case, someone else could in theory connect to your IP on your chosen port and send whatever they wanted to your recipient. If this is just a toy project that wouldn't be left running all the time, the odds are low.
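One way to remove the pickle risk entirely: cv2.imencode already gives you a NumPy array of JPEG bytes, so you can send encoded.tobytes() and rebuild the array on the recipient without unpickling anything. A sketch, assuming payload holds one full length-prefixed message:
import numpy as np

# payload: the raw JPEG bytes of one message (no pickle involved)
frame = cv2.imdecode(np.frombuffer(payload, dtype=np.uint8), cv2.IMREAD_COLOR)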
I'm trying to read and write data from an RFID tag using Python with this module:
https://es.aliexpress.com/item/32573423210.html
I can connect successfully with serial, but I don't know how to read any tag. The datasheet for the PR9200 (the reader I am working with) uses this:
[Image of the PR9200 operation packet] It's a raw packet of hex values that I need to send to the module for it to work.
My Python code is this:
import serial

ser = serial.Serial(port="COM27", baudrate=115200, bytesize=8, parity='N', stopbits=1)
while ser.is_open:
    rfidtag = ''
    incomingByte = ser.read(21)
    print(incomingByte)
    for i in incomingByte:
        rfidtag = rfidtag + hex(i)
Some comments to jump-start your coding:
- What you need to do is send a command to your device to ask it to start sending readings in auto mode. To do that you need to use ser.write(command). You can find a good template here.
- To prepare the command you just need to take the raw bytes (those hex values you mentioned) and put them together as, for instance, a bytearray.
- The only minor hurdle remaining is to calculate the CRC. There are some nice methods here at SO; just search CRC 16 CCITT (see the sketch after this list).
- Be aware that after writing you cannot immediately start waiting for readings; you have to wait first for the device to acknowledge the command. Hint: read 9 bytes.
- Lastly, take a new count of the bytes you will receive for each tag. I think they are 22 instead of 21.
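For the CRC point, here is a minimal sketch of a bitwise CRC-16/CCITT plus a hypothetical command frame; the actual command bytes and the initial CRC value must come from the PR9200 datasheet, the ones below are placeholders:
def crc16_ccitt(data: bytes, init: int = 0xFFFF) -> int:
    """Bitwise CRC-16/CCITT, polynomial 0x1021; check the datasheet for the
    initial value your reader expects (0xFFFF and 0x0000 are both common)."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

# hypothetical frame -- replace with the real bytes from the datasheet
command = bytearray([0xBB, 0x00, 0x22])  # placeholder preamble/opcode
command += crc16_ccitt(bytes(command)).to_bytes(2, 'big')
ser.write(command)
ack = ser.read(9)  # the reader acknowledges before it starts streaming tags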
You can use the pyembedded Python library for this, which can give you the tag ID.
from pyembedded.rfid_module.rfid import RFID
rfid = RFID(port='COM3', baud_rate=9600)
print(rfid.get_id())
https://pypi.org/project/pyembedded/
I'm trying to combine three values that I got through the serial port using pyserial. These values correspond to 3 parts of a 24-bit value transmitted from an FPGA board, and I want to recover the 24-bit value in my Python script. What kind of conversion and combination process can give me back this 24-bit value? I'm reading the data using the simple while loop below...
import serial

port = serial.Serial('/dev/ttyUSB0', 115200)
file = open("my_file.txt", "a")
while True:
    message = ord(port.read())
    print(message)
    file.write(str(message) + "\n")
file.close()
Thanks in advance :)
You can use the struct module (built into Python) to pack those 3 bytes into one value:
import struct

# your code ...
while True:
    message = port.read(3)  # read 3 bytes at once
    combined = struct.unpack(">I", b'\x00' + message)[0]
The >I means you are decoding a 32-bit unsigned integer (I) stored in big-endian order (>). Because struct.unpack() doesn't support 24-bit numbers, I prepended one zero byte to the 24-bit value, which effectively creates a 32-bit number.
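Alternatively, Python's built-in int.from_bytes handles odd widths like 24 bits directly, with no padding byte:
combined = int.from_bytes(message, byteorder='big')  # 3 bytes -> one integer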
I want to send data from a Simulink model (running in real time) to a Python script (also running in real time). I am using Simulink's built-in "UDP Send" block, which works, but I don't know how to decode the data I'm getting. This is what my Python script looks like:
import sys, struct
from socket import *

SIZE = 1024  # packet size
hostName = gethostbyname('0.0.0.0')
mySocket = socket(AF_INET, SOCK_DGRAM)
mySocket.bind((hostName, 5002))

repeat = True
while repeat:
    (data, addr) = mySocket.recvfrom(SIZE)
    data = struct.unpack('d', data)
    print(data)
I suspected the data stream should be something like a double, but while it gives me numbers, they aren't meaningful:
If simulink sends a constant "1", I get an output of "3.16e-322"
If Simulink sends a constant "2", I get an output of "3.038e-319"
Any ideas?
Turns out the packet bytes were arriving in the opposite order from what I expected. The solution was to read the data in network (big-endian) byte order:
data = struct.unpack('!d', data)
I have no clue why this happens over some networks and not others. Can someone comment on a way to tell whether I need to swap the byte order?
The problem occurs when the sender and the receiver have different byte orders.
See sys.byteorder.
Best practice should be to always convert to network order when sending and convert again when receiving.
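In Python, that convention is the ! (network, big-endian) prefix in struct format strings; a quick sketch of the round trip:
import struct, sys

print(sys.byteorder)           # 'little' on most desktop machines

wire = struct.pack('!d', 1.0)  # always big-endian on the wire
value = struct.unpack('!d', wire)[0]
assert value == 1.0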
I am streaming some data down from a webcam. When I get all of the bytes for a full image (in a string called byteString) I want to display the image using OpenCV. Done fast enough, this will "stream" video from the webcam to an OpenCV window.
Here's what I've done to set up the window:
cvNamedWindow('name of window', CV_WINDOW_AUTOSIZE)
And here's what I do when the byte string is complete:
img = cvCreateImage(IMG_SIZE,PIXEL_DEPTH,CHANNELS)
buf = ctypes.create_string_buffer(byteString)
img.imageData = ctypes.cast(buf, ctypes.POINTER(ctypes.c_byte))
cvShowImage('name of window', img)
cvWaitKey(0)
For some reason this is producing an error:
File "C:\Python26\lib\site-packages\ctypes_opencv\highgui_win32.py", line 226, in execute
return func(*args, **kwargs)
WindowsError: exception: access violation reading 0x015399E8
Does anybody know how to do what I'm trying to do / how to fix this crazy violation error?
I actually solved this problem and forgot to post the solution. Here's how I did it, though it may not be entirely robust:
I analyzed the headers coming from the MJPEG stream of the network camera I was using, then read from the stream one byte at a time; when I detected that the header of the next image was also in the byte string, I cut the last 42 bytes off (since that's the length of the header).
Then I had the bytes of the JPEG, so I simply created a new Cv image by using the open(...) method and passing it the byte string wrapped in a StringIO class.
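A sketch of the same idea in today's terms: scan for the JPEG start/end markers (0xFFD8/0xFFD9) instead of assuming a fixed 42-byte header, and use io.BytesIO where the old StringIO was. This isn't entirely robust either, since 0xFFD9 can in principle occur inside compressed data:
import io
from PIL import Image

def frames_from_mjpeg(stream):
    """Yield images by scanning the byte stream for JPEG markers."""
    buf = b""
    while True:
        chunk = stream.read(4096)
        if not chunk:
            return
        buf += chunk
        start = buf.find(b"\xff\xd8")            # start-of-image marker
        end = buf.find(b"\xff\xd9", start + 2)   # end-of-image marker
        if start != -1 and end != -1:
            yield Image.open(io.BytesIO(buf[start:end + 2]))
            buf = buf[end + 2:]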
Tyler:
I'm not sure what you are trying to do... I have a few guesses.
If you are simply trying to read images from a webcam connected to your PC, then this code should work:
import cv

cv.NamedWindow("camera", 1)
capture = cv.CaptureFromCAM(0)
while True:
    img = cv.QueryFrame(capture)
    cv.ShowImage("camera", img)
    if cv.WaitKey(10) == 27:  # stop on Esc
        break
Are you trying to stream video from an internet cam?
If so, you should check this other post:
opencv-with-network-cameras
If for some reason you cannot do it in any of these ways, then maybe you can just save the image to the hard drive and load it in your OpenCV program with a simple cvLoadImage (of course this way is much slower).
Another approach would be to set the new image's pixels by hand, reading each value from the byteString, something like this:
for(int x = 0; x < 640; x++){
    for(int y = 0; y < 480; y++){
        uchar *pixelxy = &((uchar*)(img->imageData + img->widthStep*y))[x];
        *pixelxy = buf[y*img->widthStep + x];
    }
}
This is also slow, but faster than using the hard drive.
Anyway, hope some of this helps. You should also specify which OpenCV version you are using.