Edit: To clarify: it does compile, it just crashes almost immediately after the stream loads. It does connect properly.
So, I've been trying for a very long time to complete this project of mine. What I'm trying to do is send a video feed over sockets using cv2. It works over LAN, not over WAN. I get the following error:
"ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host"
Code for the client (sending video):

import cv2
import numpy as np
import socket
import pickle

host = "<insert public ip of recipient>"
port = 7643

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # TCP stream socket
s.connect((host, port))  # connect to the host & port

cap = cv2.VideoCapture(1)
while cap.isOpened():  # while the camera is open
    ret, frame = cap.read()  # read each frame from the webcam
    if ret:
        # encode each frame as a JPEG and pickle it; instead of sending live
        # video this sends pictures one by one
        encoded = pickle.dumps(cv2.imencode(".jpg", frame)[1])
        s.sendall(encoded)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # quit when the q key is pressed
        break
cap.release()
cv2.destroyAllWindows()
Code for the recipient (receiving video):

import cv2
import socket
import pickle

host = "192.168.1.186"
port = 7643
boo = True

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # TCP stream socket
s.bind((host, port))  # bind the socket to this host & port
s.listen(10)  # backlog: how many pending connections may queue up
conn, addr = s.accept()
while boo:
    try:
        pictures = conn.recv(256000)  # receive at most 256000 bytes of picture data
        decoded = pickle.loads(pictures)  # unpickle the picture data
        frame = cv2.imdecode(decoded, cv2.IMREAD_COLOR)  # decode the JPEG into a frame we can show
        cv2.imshow("unique", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # quit when the q key is pressed
            break
    except:
        print("Something is broken...")
        boo = False
cv2.destroyAllWindows()
s.close()
You apparently got lucky when running this over your LAN. Your code is not correctly sending a stream of images from sender to recipient, because stream sockets like TCP are a little more complicated to use by their nature. The main issue is that your sender is not communicating where each image ends and the next begins, and your recipient similarly is not organizing the data it reads into individual full images.
That is to say, socket.sendall() does not communicate the end of its data to the recipient; you need to include that information in the actual data that you send.
Error handling
But before fixing that, you should fix the error handling on the recipient so that you get more useful error messages. When you write

except:
    print("Something is broken...")

you're throwing away information that would have helped you, like "EOFError: Ran out of input" or "_pickle.UnpicklingError". Don't throw that information away. Instead, print it (this needs import traceback):

except:
    traceback.print_exc()
or re-raise it:

except Exception:
    # do whatever you want to do first
    raise  # a bare raise re-raises the original exception with its traceback
or, since you want to let it crash your program, and just want to do cleanup first, do your cleanup in a finally clause, no need for except:
try:
    # your code
finally:
    # the cleanup
Stream sockets and the sender
Back to your socket code, you're using stream sockets. They send a stream of bytes, and while you can count on them arriving in the correct order, you can't count on when they'll arrive. If you send b"something" and then b"something else", you could receive b"somethingsomething else" all at once, b"somet" and then later b"hing", etc. Your receiver needs to know where the dividing line is between each message, so step one is making there be dividing lines between the messages. There are a few ways to do this:
- Make all messages the same size. Since you're encoding frames as JPEGs, which vary in size depending on how well each one compresses, that would be awkward and probably not what you want anyway.
- Send an explicit marker in the bytes, like a newline b"\n" or b"\r\n". This is harder to make work here, because binary JPEG data can contain those bytes itself.
- Send the size of each message before the message itself. This should be the easiest for your case.
Of course if you're now sending the size of the message, that's just like another message, and your recipient needs to know where this size message ends. Once again you could end the size message with a newline:
s.sendall("{}\n".format(len(encoded)).encode("ascii"))
Or you could pack it into a fixed-length number of bytes, for example 4 (this needs import struct):

s.sendall(struct.pack("!i", len(encoded)))
The receiver
Your receiver code now needs to read full messages, despite the fact that socket.recv() can return partial messages, or parts of multiple messages together. You can keep a buffer of the incoming data. Add to the end, and then remove full messages from the front:
buf = b""
while boo:
    new_data = conn.recv(4096)
    if not new_data:
        break  # the socket has been closed
    buf += new_data
    # if there's a full message at the beginning of buf:
    #     remove that message, but leave the rest in buf
    #     process that message
    # else:
    #     do nothing, just go back to receiving more
Of course, to find your full message, first you need to get the full size message. If you encoded all your size messages as 4 bytes with struct.pack, just receive data until buf is 4 or more bytes long, then split it into the size and any remaining data:
message_size = struct.unpack("!i", buf[:4])[0]
buf = buf[4:]
Then do the same thing with the image message. Receive data until you have at least message_size bytes of data, split your buffer into the first image message, which you can decode and display, and keep the remainder in the buffer.
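Putting the pieces together, here is a minimal sketch of that length-prefix protocol; the helper names (send_message, recv_exact, recv_message) are mine, not from the original code. The same "receive exactly n bytes" loop serves for both the 4-byte size and the image payload:

```python
import struct

def send_message(sock, payload: bytes) -> None:
    # prefix each message with its length, packed into 4 bytes
    sock.sendall(struct.pack("!i", len(payload)) + payload)

def recv_exact(sock, n: int) -> bytes:
    # keep calling recv until exactly n bytes have arrived
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-message")
        buf += chunk
    return buf

def recv_message(sock) -> bytes:
    # read the 4-byte size first, then that many payload bytes
    size = struct.unpack("!i", recv_exact(sock, 4))[0]
    return recv_exact(sock, size)
```

On the sender, s.sendall(encoded) becomes send_message(s, encoded); on the recipient, each loop iteration calls recv_message(conn) and decodes the result.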
Security
The documentation for pickle says:
Warning: The pickle module is not secure. Only unpickle data you trust.
It is possible to construct malicious pickle data which will execute arbitrary code during unpickling. Never unpickle data that could have come from an untrusted source, or that could have been tampered with.
In your case, someone else could in theory connect to your IP on your chosen port and send whatever they wanted to your recipient. If this is just a toy project that wouldn't be left running all the time, the odds are low.
Related
Good day dear community,
I have a system running an Nvidia NX board with an IntelSense 3D camera. The goal is to capture the video stream with Python and OpenCV on another computer that is connected to the system over WiFi.
On my computer I can access the videostream in my browser with XXX.XXX.XX.X:8080 over mjpg-streamer. The correct http-address is: http://XXX.XXX.XX.X:8080/stream.html
Unfortunately I had zero success with different approaches. First I tried to fetch the stream directly with OpenCV:
import cv2

stream = cv2.VideoCapture('http://192.168.12.1:8080/stream.html')
while True:
    ret, frame = stream.read()
    cv2.imshow('Video Stream Monitor', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
stream.release()
cv2.destroyAllWindows()
This didn't work. Too easy to be true.
Next I stumbled across this thread and thought this may be more promising, because in my first example I did nothing to convert the data. I updated my code:
import cv2
import urllib.request
import numpy as np

# stream = cv2.VideoCapture('http://192.168.12.1:8080/stream.html')
stream = urllib.request.urlopen('http://localhost:8080/frame.mjpg')
buf = b''  # must be a bytes buffer, not a str, in Python 3
while True:
    buf += stream.read(1024)
    a = buf.find(b'\xff\xd8')  # JPEG start-of-image marker
    b = buf.find(b'\xff\xd9')  # JPEG end-of-image marker
    if a != -1 and b != -1:
        jpg = buf[a:b + 2]
        buf = buf[b + 2:]
        # np.frombuffer replaces the deprecated np.fromstring,
        # cv2.IMREAD_COLOR replaces the removed cv2.CV_LOAD_IMAGE_COLOR
        i = cv2.imdecode(np.frombuffer(jpg, dtype=np.uint8), cv2.IMREAD_COLOR)
        cv2.imshow('i', i)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
stream.close()
cv2.destroyAllWindows()
And the result was an Error:
URLError: <urlopen error [WinError 10061] No connection could be made because the target machine actively refused it>
The target computer refuses the connection, and this is where I'm still stuck (I run Windows, and it may be due to that; I will try switching to a Linux system). Although I think that even with the connection established there will still be problems, because I capture data from the .html link while the code requests a .mjpg ending.
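Independent of the connection error, the start/end-marker scan from the updated code can be exercised in isolation. Here is a minimal sketch as a standalone helper (the name split_mjpeg is mine, for illustration): JPEG data starts with the two-byte marker b"\xff\xd8" and ends with b"\xff\xd9".

```python
def split_mjpeg(buf: bytes):
    """Split complete JPEG frames off the front of an MJPEG byte buffer.

    Returns (frames, remainder): every complete JPEG found, plus the
    leftover bytes to keep for the next read.
    """
    frames = []
    while True:
        start = buf.find(b"\xff\xd8")  # start-of-image marker
        if start == -1:
            break
        end = buf.find(b"\xff\xd9", start + 2)  # end-of-image marker
        if end == -1:
            break
        frames.append(buf[start:end + 2])
        buf = buf[end + 2:]
    return frames, buf
```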
Anyways, is there somebody that knows how I can get the stream from my system to my computer?
I also thought about connecting the computer to the system's IP and port with a socket to retrieve the data. Would that even be possible without setting up a server-client architecture on both systems? It would not be my first option, because I already have latency from the WiFi connection, and implementing a socket server and client would make the latency worse, so I'd prefer to retrieve the stream directly.
Any help appreciated.
Everything I tried is stated in the code examples above.
The final requirement is to create a system which can stream video footage via a UNIX IPC socket from Python to Rust. The Python script has exclusive access to the camera/video. I'm pretty new to Rust.
I have tried an approach wherein the control flows as follows:
1. In Python, as the video flows in, strip each frame out and convert it into a NumPy array.
2. Convert the array into a string, then into bytes.
3. Send it over a UNIX IPC socket.
4. At the receiving end, convert it back into a string.
5. Hopefully parse it to get a usable array and build an image from it.
Sender:
import socket, os, cv2, numpy

# print the whole array without truncating
numpy.set_printoptions(threshold=numpy.inf)

# declare the camera socket path and unlink it if already in use
camera_socket_path = '/home/user/exps/test.sock'
try:
    os.unlink(camera_socket_path)
except OSError:
    pass

# create and listen on the socket
camera_socket = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
camera_socket.bind(camera_socket_path)
camera_socket.listen()
conn, addr = camera_socket.accept()

vidcap = cv2.VideoCapture('example.mp4')
success, image = vidcap.read()
print(str(image))
while success:
    conn.send(bytes(str(image), 'utf-8'))
    success, image = vidcap.read()
    break  # after sending a frame, for testing
conn.close()
Receiver:
use std::os::unix::net::UnixStream;
use std::io::prelude::*;
use image::RgbImage;
use ndarray::Array3;

fn array_to_image(arr: Array3<u8>) -> RgbImage {
    assert!(arr.is_standard_layout());
    let (height, width, _) = arr.dim();
    let raw = arr.into_raw_vec();
    RgbImage::from_raw(width as u32, height as u32, raw)
        .expect("container should have the right size for the image dimensions")
}

fn main() {
    // create a standard UNIX IPC socket stream
    let mut stream = UnixStream::connect("/home/user/exps/test.sock").unwrap();
    loop {
        // 2074139 is the length of the string printed at the sender :D
        // I thought it might work out, but obviously it didn't
        // the docs suggest a power of 2; the default was 1024
        let mut buf = [0; 2074139];
        // use the actual number of bytes read, not the whole buffer
        let count = stream.read(&mut buf).unwrap();
        let response = String::from_utf8(buf[..count].to_vec()).unwrap();
        // write a parser function to convert the string to Array3 by iterating through lines()
        // let pic = array_to_image(convert(response));
        println!("{}", response);
        break; // get a single frame, for testing
    }
}
Both the strings look nothing like each other. In the final version, I hope to create a stream using which I can continuously do this for incoming video stream from a camera, with something like cv2.VideoCapture(). What is the list of stuff to be done to achieve it?
An mp4 file consists of arbitrary byte sequences, not UTF-8 strings. So you should send the raw bytes and receive them into a Vec<u8> or a similar data type.
There is no reason to mangle things through a string encoding; unix sockets support arbitrary bytes.
For debugging purposes you can print the bytes out in hex format, but you shouldn't send them as hex, since that too would be wasteful.
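On the Python side, a minimal sketch of sending a frame's raw bytes instead of its printed representation could look like this. The header layout (height, width, channels, payload length) and the function names are my own choices for illustration, not a standard:

```python
import struct
import numpy as np

def frame_to_message(frame: np.ndarray) -> bytes:
    # header: height, width, channels, payload length, all big-endian u32
    h, w, c = frame.shape
    payload = frame.tobytes()
    return struct.pack("!IIII", h, w, c, len(payload)) + payload

def message_to_frame(msg: bytes) -> np.ndarray:
    # parse the 16-byte header back, then rebuild the array from the raw bytes
    h, w, c, n = struct.unpack("!IIII", msg[:16])
    return np.frombuffer(msg[16:16 + n], dtype=np.uint8).reshape(h, w, c)
```

In the sender, conn.send(bytes(str(image), 'utf-8')) would become conn.sendall(frame_to_message(image)); the Rust side would then read the 16-byte header first, and then exactly n payload bytes.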
I wrote some code to convert an image to hex on the client side; the hex values are then sent to the server. On the server side, the hex is converted back to binary and the binary values are written to a file. But I am not getting the same image; the resulting image is not viewable at all.
client side:
with open('1.jpg', "rb") as f:
    contents = f.read()
contentss = binascii.hexlify(contents)
s.send(contentss)
server side:
data = c.recv(1024)
binary = binascii.unhexlify(data)
f = open('1server.jpg',"wb")
f.write(binary)
EDIT
If I print "binary" with print binary, I get output like ÿØÿà. What is happening here? Please help.
Replacing 1024 with the length of the sent data will resolve this issue: recv(1024) returns at most 1024 bytes, so only the first kilobyte of hex data ever reaches the file. (The ÿØÿà you printed is \xff\xd8\xff\xe0, the start of a valid JPEG header, so the first chunk does arrive intact; the file is simply truncated.) Since recv can also return fewer bytes than requested, loop until everything has arrived.
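A minimal sketch of such a receive loop, reading until the peer closes the connection (which works here because the client sends one file and then stops; the name recv_all is mine):

```python
def recv_all(sock) -> bytes:
    # read until the sender closes its end of the connection
    chunks = []
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            break
        chunks.append(chunk)
    return b"".join(chunks)
```

On the server, data = c.recv(1024) becomes data = recv_all(c). This requires the client to close (or shutdown) its socket after sending.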
I have a raw Ethernet frame that I want to send.
How should I do that? I tried to send the hex values of a frame, but I still can't control the packet header that contains the src/dst addresses and ports.
import socket
# the public network interface
HOST = socket.gethostbyname(socket.gethostname())
addr = ('46.165.204.237', 10000)
# create a raw socket and bind it to the public interface
s = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_IP)
s.bind((HOST, 0))
netpacket = '\xDE\xB0\x7B\xE5\xA7\xCD\x4C\x17\xEB\x07\x0D\xBC\x08\x00\x45\x00\x00\x92\x68\x94\x40\x00\x78\x06\xDC\x94\x2E\xA5\xCC\xED\xC0\xA8\x01\x02\x27\x10\x07\xC8\x04\xD7\xEA\xEA\xC3\x2A\x4E\xA2\x50\x18\x01\x02\x39\xB0\x00\x00\x3C\x3F\x78\x6D\x6C\x20\x76\x65\x72\x73\x69\x6F\x6E\x3D\x22\x31\x2E\x30\x22\x3F\x3E\x3C\x50\x61\x63\x6B\x65\x74\x3E\x3C\x6F\x70\x65\x72\x61\x74\x69\x6F\x6E\x3E\x33\x3C\x2F\x6F\x70\x65\x72\x61\x74\x69\x6F\x6E\x3E\x3C\x64\x61\x74\x61\x3E\x33\x24\x30\x24\x30\x24\x30\x24\x30\x24\x30\x3C\x2F\x64\x61\x74\x61\x3E\x3C\x65\x78\x74\x64\x61\x74\x61\x3E\x3C\x2F\x65\x78\x74\x64\x61\x74\x61\x3E\x3C\x2F\x50\x61\x63\x6B\x65\x74\x3E'
#netpacket = netpacket.encode('UTF-8')
s.sendto(netpacket.encode('UTF-8'), addr)
Does Python have a function like sendRaw() or sendRawFrame()?
I know scapy can handle this, but I need to do it many, many times, each time with different payload data. How can scapy be automated? I mean a Python script that launches scapy, creates a packet with some payload, and sends it.
scapy.py
packet1 = IP(dst='46.165.204.237')/TCP(sport=1992, dport=10000)/'<?xml version="1.0"?><Packet><operation>99</operation><data><![CDATA[8 fast]]></data><extdata><![CDATA[]]></extdata></Packet>.'
send(packet1)
The goal is to send a packet from a port that is already in use. Is there a better solution for that problem?
Offtopic: maybe someone knows how to send packets through an open socket id on Windows (not only in Python)?
You can provide Scapy with raw input by using the Raw layer.
netpacket = Raw(b'\xDE\xB0...')
To send packets at the ethernet layer - see the documentation for sendp.
sendp(netpacket, iface="eth1")
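For reference, the fixed 14-byte Ethernet II header (destination MAC, source MAC, EtherType) that precedes the IP data in a raw frame can also be built by hand with struct; this is a generic sketch with a helper name of my own, matching the first 14 bytes of the frame above:

```python
import struct

def ethernet_header(dst_mac: str, src_mac: str, ethertype: int) -> bytes:
    """Build a 14-byte Ethernet II header from colon-separated MAC strings."""
    dst = bytes.fromhex(dst_mac.replace(":", ""))
    src = bytes.fromhex(src_mac.replace(":", ""))
    return struct.pack("!6s6sH", dst, src, ethertype)  # 6 + 6 + 2 bytes
```

A frame would then be ethernet_header("de:b0:7b:e5:a7:cd", "4c:17:eb:07:0d:bc", 0x0800) + ip_payload, sent via an AF_PACKET socket on Linux; ordinary AF_INET raw sockets on Windows cannot set the Ethernet header, which is why scapy's sendp is the practical route there.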
I want to send data from a Simulink model (running in real time) to a Python script (also running in real time). I am using Simulink's built-in "UDP Send" block, which works, but I don't know how to decode the data I'm getting. This is what my Python script looks like:
import sys, struct
from socket import *

SIZE = 1024  # packet size

hostName = gethostbyname('0.0.0.0')
mySocket = socket(AF_INET, SOCK_DGRAM)
mySocket.bind((hostName, 5002))

repeat = True
while repeat:
    (data, addr) = mySocket.recvfrom(SIZE)
    data = struct.unpack('d', data)
    print(data)
I suspected that the data stream should be something like a double, but while it gives me numbers, they aren't meaningful:
If simulink sends a constant "1", I get an output of "3.16e-322"
If Simulink sends a constant "2", I get an output of "3.038e-319"
Any ideas?
Turns out the sender and my machine use different byte orders, so the bytes of each double arrive in the opposite order from what the native format expects. The solution was to unpack using network (big-endian) byte order:

data = struct.unpack('!d', data)

I have no clue why this happens with some senders and not others. Can someone comment on a way to tell whether I need to swap the byte order?
The problem occurs when the sender and receiver have different byte orders.
See sys.byteorder.
Best practice is to always convert to network byte order when sending and convert back when receiving.
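A quick way to see the effect is to pack a double in one byte order and unpack it in the other (a self-contained demonstration, separate from the scripts above):

```python
import struct

value = 2.0
wire = struct.pack("!d", value)       # network (big-endian) encoding
right = struct.unpack("!d", wire)[0]  # matching byte order: 2.0 again
wrong = struct.unpack("<d", wire)[0]  # mismatched byte order: a tiny denormal

print(right)
print(wrong)  # garbage on the order of 1e-322, like the values above
```

Packing with "!d" on the sender and unpacking with "!d" on the receiver sidesteps the question of what either machine's native order is.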