First off, I'm trying to do something like screen sharing, and I'm running into an error. I first tried to send the image row by row, then figured out that this only worked over LAN, where the connection speed was far better. So instead I transmitted each pixel individually and reconstructed the array myself, which is pretty slow. Here is the code:
Server side:
def getImg(socket, y, x):
    pixels = []
    print(x * y * 3)
    for a in range(x * y * 3):
        pixels.append(struct.unpack('i', socket.recv(4))[0])
        if a % (x * y) == 0:
            pass
    return np.array(pixels).reshape(y, x, 3)
Client side:
def sendSS(img):
    y, x, color = img.shape
    print(y * x * 3)
    for a in range(y):
        for b in range(x):
            for m in img[a][b]:
                socket.send(struct.pack('i', m))
The error is this:
(<class 'cv2.error'>, error("OpenCV(4.5.1) c:\\users\\appveyor\\appdata\\local\\temp\\1\\pip-req-build-wvn_it83\\opencv\\modules\\imgproc\\src\\color.simd_helpers.hpp:94: error: (-2:Unspecified error) in function '__cdecl cv::impl::`anonymous-namespace'::CvtHelper<struct cv::impl::`anonymous namespace'::Set<3,4,-1>,struct cv::impl::A0xeee51b91::Set<3,4,-1>,struct cv::impl::A0xeee51b91::Set<0,2,5>,2>::CvtHelper(const class cv::_InputArray &,const class cv::_OutputArray &,int)'
> Unsupported depth of input image:
> 'VDepth::contains(depth)'
> where
> 'depth' is 4 (CV_32S)
"), <traceback object at 0x0000023221307740>)
The way I constructed the numpy array works fine:
import pyautogui
import cv2
import numpy as np
import pickle
import struct
from pprint import pprint as pp
ss = np.array(pyautogui.screenshot())
y,x,color = ss.shape
pixels = []
for a in range(y):
    for b in range(x):
        for m in ss[a][b]:
            pixels.append(struct.pack('i', m))
pixels2 = []
for val in pixels:
    pixels2.append(struct.unpack('i', val)[0])
pixels2 = np.array(pixels2).reshape((1080,1920,3))
cv2.imwrite('name.png',pixels2)
How can I transmit faster and solve this problem?
edit:
Here is the part of the code where the error occurs. It says the depth is 4, but I'm sure the dimensions of the image are (1080, 1920, 3) because I can print them.
image = getImg(aSocket, y, x)
SS = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)  # error occurs in this line
aSocket.send('image is gotten'.encode('utf-8'))
print('image is gotten')
cv2.imwrite(f'{ip[0]}\{count}.png', SS)
You need to pass the new shape to reshape as a tuple, e.g. arr.reshape((y, x, 3)). You might also get another error if OpenCV expects a different data type (8 bits per channel, unsigned) than what it got from your program (32-bit signed integer); see the documentation for cvtColor(). You can specify the data type when you create the array: np.array(..., dtype=np.uint8).
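For instance, the receiver's conversion might become something like this (a sketch based on the getImg() function from the question):

import numpy as np

# build the array with the 8-bit unsigned depth cvtColor() expects,
# and pass the new shape to reshape as a tuple
image = np.array(pixels, dtype=np.uint8).reshape((y, x, 3))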
Regarding performance: in Python, for loops over large amounts of data are slow, and you can often make your code significantly faster by avoiding the loop and using a library routine instead. You should also read and write larger chunks with send() and recv(). If you are not familiar with socket programming, at minimum check out the Socket Programming HOWTO in the Python documentation. You can eliminate the loops over the image data by interpreting the data directly as bytes; the socket code below follows the examples on the linked documentation page.
def send_im(sock, im):
    totalsent = 0
    im_bytes = im.tobytes()
    while totalsent < len(im_bytes):
        sent = sock.send(im_bytes[totalsent:])
        if sent == 0:
            raise RuntimeError("socket error")
        totalsent += sent
Receiver needs to know the size of expected input.
def recv_im(sock, dtype, shape):
    # expected number of bytes (use the passed dtype, not float)
    size = np.dtype(dtype).itemsize * np.prod(shape)
    im_bytes = bytearray(size)  # buffer for the received bytes
    bytes_recvd = 0
    chunk_size = 2048
    while bytes_recvd < size:
        chunk = sock.recv(min(size - bytes_recvd, chunk_size))
        if chunk == b'':
            raise RuntimeError("socket connection broken")
        # copy only as many bytes as were actually received
        im_bytes[bytes_recvd:bytes_recvd + len(chunk)] = chunk
        bytes_recvd += len(chunk)
    # interpret bytes in the correct data type and shape and return the image
    return np.frombuffer(im_bytes, dtype=dtype).reshape(shape)
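A possible round trip with these helpers (a sketch assuming an already-connected socket pair and the 1080x1920 RGB screenshot from the question):

import numpy as np
import pyautogui

# sender
img = np.array(pyautogui.screenshot(), dtype=np.uint8)
send_im(sock, img)

# receiver: must know the dtype and shape of the incoming image up front
img = recv_im(conn, np.uint8, (1080, 1920, 3))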
In general, you should use a serialization library instead of directly sending raw bytes over the network, as those libraries usually handle various important details such as byte order, message sizes, data types, and conversions.
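For example, numpy's own save format stores the dtype, shape, and byte order in a small header, so the receiver no longer has to know them in advance. A sketch of the sending side (the 8-byte length prefix is my own assumption, so the receiver knows how much to read before calling np.load):

import io
import numpy as np

def send_array(sock, arr):
    buf = io.BytesIO()
    np.save(buf, arr)  # header records dtype, shape, and byte order
    payload = buf.getvalue()
    sock.sendall(len(payload).to_bytes(8, 'big'))  # fixed-size length prefix
    sock.sendall(payload)

The receiver would read the 8-byte length, collect that many bytes, and pass them to np.load wrapped in io.BytesIO.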
Related
I am creating a socket server to connect and speak with a C# program over TCP. Currently I am trying to create a way to convert the hex sent over the TCP socket into specific variables (the variable type will be in the packet header; and yes, I know TCP is a stream and doesn't technically send packets, but I am designing it like this). Currently I have all of the C# integral numeric types converting to and from byte arrays/integers correctly via the code below (all of the different types are the same, with a couple of edits to fit the C# type).
## SBYTE Type Class definition
## C#/Unity "sbyte" can be from -128 to 127
##
## Usage:
##
## Constructor
## variable = sbyte(integer)
## variable = sbyte(bytearray)
##
## Variables
## sbyte.integer (Returns integer representation)
## sbyte.bytes (Returns bytearray representation)
class sbyte:
    def __init__(self, input):
        if type(input) == type(int()):
            self.integer = input
            self.bytes = self.__toBytes(input)
        elif type(input) == type(bytearray()):
            self.bytes = input
            self.integer = self.__toInt(input)
        else:
            raise TypeError(f"sbyte constructor can take integer or bytearray type not {type(input)}")

    ## Return Integer from Bytes Array
    def __toInt(self, byteArray):
        ## Check that there is only 1 byte
        if len(byteArray) != 1:
            raise OverflowError(f"sbyte.__toInt length can only be 1 byte not {len(byteArray)} bytes")
        ## Return signed integer
        return int.from_bytes(byteArray, byteorder='little', signed=True)

    ## Return Bytes Array from Integer
    def __toBytes(self, integer):
        ## Check that the passed integer is not larger than 127 and not smaller than -128
        if integer > 127 or integer < -128:
            raise ValueError(f"sbyte.__toBytes can only take an integer less than or equal to 127, and greater than or equal to -128, not \"{integer}\"")
        ## Convert the passed integer to Bytes
        return integer.to_bytes(1, byteorder='little', signed=True)
This is working for all the types I have implemented so far, but I wonder if there is a better way to handle this, such as using ctypes or some other Python library. Since this will be a socket server with potentially many connections, handling this as fast as possible is best. If there is anything else you see that I can improve, I would love to know.
If all you want is an integer value from a byte array, simply index the byte array:
>>> b = bytearray.fromhex('1E')
>>> b[0]
30
After testing the differences between from_bytes, struct.unpack, and numpy.frombuffer with the following code:
import timeit

setup1 = """\
byteArray = bytearray.fromhex('1E')
"""
setup2 = """\
import struct
byteArray = bytearray.fromhex('1E')
"""
setup3 = """\
import numpy as np
type = np.dtype(np.byte)
byteArray = bytearray.fromhex('1E')
"""
stmt1 = "int.from_bytes(byteArray, byteorder='little', signed=True)"
stmt2 = "struct.unpack('b', byteArray)"
stmt3 = "np.frombuffer(byteArray, type)"
print(f"First statement execution time = {timeit.timeit(stmt=stmt1, setup=setup1, number=10**8)}")
print(f"Second statement execution time = {timeit.timeit(stmt=stmt2, setup=setup2, number=10**8)}")
print(f"Third statement execution time = {timeit.timeit(stmt=stmt3, setup=setup3, number=10**8)}")
Results:
First statement execution time = 14.456886599999999
Second statement execution time = 6.671141799999999
Third statement execution time = 21.8327342
From the initial results it looks like struct is the fastest way to accomplish this, unless there are other libraries I am missing.
EDIT:
Per AKX's suggestion I added the following test for signed byte:
stmt4 = """\
if byteArray[0] <= 127:
    byteArray[0]
else:
    byteArray[0] - 256
"""
and got the following execution time:
Fourth statement execution time = 4.581732600000002
Going this path is the fastest, although only slightly faster than just using struct. I will have to test each type to find the fastest way to convert the bytes and back, but this question gave me four different approaches to test each one with. Thanks!
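A fifth option worth adding to the benchmark is precompiling the format with struct.Struct, which skips the format-string parsing that struct.unpack repeats on every call (my own suggestion, not timed above):

import struct

sbyte_struct = struct.Struct('b')  # compile the format once, reuse many times
value = sbyte_struct.unpack(bytearray.fromhex('1E'))[0]  # -> 30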
I want to send an image, which was imported with OpenCV's imread, to another computer via a socket. But the data received and the data sent are not equal.
I converted the numpy array that I got from imread to a byte array in order to send it via the socket, and then
converted the byte array received at the other end back to a numpy array. But I can't view the image from the received data.
This is the code snippet in the senders end
im = cv2.imread('view.jpg')
stringimage = np.array_str(im)
byteimage = str.encode(stringimage)
sock.sendto(byteimage,("127.0.0.1",5002))
This is the code snippet in the receivers end
byteimage,addr = sock.recvfrom(1024)
decoded = bytes.decode(byteimage)
backstring = np.array(decoded)
cv2.imshow('RealSense', backstring)
cv2.waitKey(0)
I got this error
TypeError: mat data type = 19 is not supported
for this line of code
cv2.imshow('RealSense', backstring)
Update
After getting the suggestions below and referring to some other materials, I have come up with a solution that works for my scenario.
Image senders side
#color_image is my opencv image
retval, data = cv2.imencode('.jpg', color_image, ENCODE_PARAMS)
b64_bytes = base64.b64encode(data)
b64_string = b64_bytes.decode()
sock.sendto(str.encode(b64_string), ("127.0.0.1", 5002))
Image receivers side
data, addr = sock.recvfrom(60000)
img = imread(io.BytesIO(base64.b64decode(data)))
Please tell me if there is any bad coding in my solution
A similar question was helpfully answered here.
The issue is that constructing the numpy array from a string doesn't parse the data as a float/int the way you wrote it (and converting to a string to send the data is unnecessary).
Here's a simplified example to apply that solution:
import numpy as np
from io import BytesIO
a = np.array([1, 2])
b = BytesIO()
np.save(b, a)
"""-----send the data-----"""
# send(b.getvalue())
data = BytesIO(b.getvalue())
c = np.load(data)
print(a)
print(c)
Resulting in:
[1 2]
[1 2]
byteimage, addr = sock.recvfrom(1024)
This limits your buffer size to 1024 bytes, so anything in the datagram beyond that is lost. Read the docs for recvfrom().
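For instance, requesting the maximum possible datagram size (a hypothetical tweak to the receiver above) avoids silently truncating a single packet:

# 65535 bytes is an upper bound on a UDP datagram, so a compressed
# frame below that size arrives without truncation
data, addr = sock.recvfrom(65535)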
I'm making a LAN multiplayer arena shooter using pygame and sockets and am having trouble transferring pickled object data from server to client. I have two objects, a player and a projectile (the bullets). I don't know how to send multiple objects at once, so I decided to put the two objects in a list and pickle them. But when unpickling, I can't index the list, as I keep getting 'EOFError: Ran out of input'.
So I want to unpickle the list that I receive and separate out the two objects in that list. But Python won't let me index the list after I unpickle it. Any help would be much appreciated.
Here's my code:
# instantiating instances of the Player and Projectile classes
players = [Player(450, 400, "sprite1.png", 1), Player(500, 400, "sprite2.png", 2)]
bullets = [Projectile(50, 50, 5, "right", "projectile.png", 0)]

def threaded_client(conn, player):
    player_data = players[player]
    bullet_data = bullets[player]
    alldata = [player_data, bullet_data]  # putting the 2 objects in a list
    conn.send(pickle.dumps(alldata))  # pickling the list
    reply = ""
    while True:
        try:
            alldata = pickle.loads(conn.recv(2048))
            players[player] = alldata[0]
            ...
self.client.connect(self.addr)
alldata=pickle.loads(self.client.recv(2048)) #unpickling the list
return alldata[0] #trying to return the first object
You need to make arrangements to ensure that you have the entire object before you unpickle. You're doing a conn.recv(XXX) but that does not mean you actually received all XXX bytes. On success, it means you got somewhere between 1 and XXX bytes (inclusive). If it's a small buffer, you often get the entire thing in one chunk but you should never count on that.
Generally, you'll want to send the byte count in a fixed-size binary format (typically using the struct module), then, after retrieving the byte count, keep receiving until you have all the expected bytes or you get an error (i.e. your peer disconnected).
Something like this on the sending side:
import struct
pickled_bytes = pickle.dumps(thing_youre_sending)
p_size = len(pickled_bytes) # Size of pickled buffer
p_size_buf = struct.pack("!I", p_size) # Packed binary size (4 byte field)
conn.sendall(p_size_buf) # Send length (Note sendall!)
conn.sendall(pickled_bytes) # Send actual pickled object
On the receiving side, you'll do something like this:
import struct
...
def recv_all(conn, rlen):
    """ Function to receive all bytes """
    recvd = 0
    buf = b''
    while recvd < rlen:
        rbuf = conn.recv(rlen - recvd)
        if not rbuf:
            # Client disconnected. Handle error in whatever way makes sense
            raise ClientDisconnected()
        recvd += len(rbuf)
        buf += rbuf
    return buf  # the function must return the accumulated bytes
...
p_size_buf = recv_all(conn, 4)               # Receive entire binary length field
p_size = struct.unpack("!I", p_size_buf)[0]  # (unpack returns a tuple)
pickled_bytes = recv_all(conn, p_size)       # Receive actual pickled object
thing_you_sent = pickle.loads(pickled_bytes)
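A hypothetical round trip with these helpers, using the two-object list from the question (variable names assumed from the snippets above):

# sender (inside threaded_client)
alldata = [players[player], bullets[player]]
pickled_bytes = pickle.dumps(alldata)
conn.sendall(struct.pack("!I", len(pickled_bytes)))  # 4-byte length first
conn.sendall(pickled_bytes)

# receiver (client side)
p_size = struct.unpack("!I", recv_all(self.client, 4))[0]
alldata = pickle.loads(recv_all(self.client, p_size))
player_obj, bullet_obj = alldata  # indexing the list now works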
I have a Python library that is using ctypes to register a callback with a C library. The signature of the callback is:
CFUNCTYPE(c_int32, c_uint32, POINTER(c_byte), POINTER(c_size_t))
In this case, the third argument is a pointer to a byte array allocated by the C library, and the fourth argument is its length.
I would like to populate the byte array with data from the network by calling socket.recv_into. More importantly, I would like to completely fill this byte array: that is, if recv_into returns fewer bytes than the fourth argument of this function then I'd like to call recv_into again.
Suppose my function looks like this:
def callback(first, second, buffer, size):
    total_read = 0
    while total_read < size[0]:
        total_read += some_socket.recv_into(buffer, size[0] - total_read)
        # XXX: What happens here???
My question is: how can I manipulate the value of buffer to ensure that each call to recv_into appends to the buffer, rather than overwriting previously-written data?
Building on the comments, without intermediate casting:
# assuming we know OFFSET (where to start copying in the buffer)
# and AMOUNT (how many bytes we will read)
tmp_buf = (ctypes.c_char * size).from_address(ctypes.addressof(buffer.contents))
mview = memoryview(tmp_buf)[OFFSET:OFFSET + AMOUNT]
_ = sock.recv_into(mview, AMOUNT)
buffer.contents dereferences the pointer to the buffer, so the buffer's address can be extracted with ctypes.addressof.
This worked for me in a simple C program that passes a pre-allocated buffer into a callback registered from the Python code.
I believe the trick is to wrap it into a memoryview:
>>> buf = ctypes.create_string_buffer(b"fooxxx")
>>> sock.recv_into(memoryview(buf)[3:],3) # receiving b"bar"
3
>>> buf.value
b'foobar'
Here's a code sample that shows how to have recv_into "append" into the buffer. There is a lot of boilerplate around setting up a socket, but this allows you to run it immediately and see it in action.
import socket
import ctypes
_BUFFER_SIZE = 20
BufferType = ctypes.c_byte * _BUFFER_SIZE
if __name__ == '__main__':
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(('127.0.0.1', 10000))
    s.listen(1)
    c, _ = s.accept()
    buffer = BufferType()
    offset = 0
    available = _BUFFER_SIZE
    memview = memoryview(buffer)
    while offset < _BUFFER_SIZE:
        print('recv_into: offset=%r, available=%r' % (offset, available))
        count = c.recv_into(memview[offset:], available)
        offset += count
        available -= count
    print('done: buffer=%r' % (memview.tobytes(),))
    c.close()
    s.close()
Does this solve your problem?
I'm trying to read in data from a file in binary format and store it in a 2-d array. However, I'm getting an error that reads
error: unpack requires a bytes object of length 2
Essentially what I have is something like
import os, struct
from itertools import chain
packets = value1 #The number of packets in the data stream
dataLength = value2 #bytes of data per packet
packHeader = [[0 for x in range(14)] for y in range(packets)]
data = [[0 for x in range(dataLength)] for y in range(packets)]
for i in range(packets):
    packHeader[i][0] = struct.unpack('>H', file.read(2))
    packHeader[i][1] = struct.unpack('>H', file.read(2))
    ....
    packHeader[i][13] = struct.unpack('>i', file.read(4))
    packHeader[i] = list(chain(*packHeader[i]))  # deals with the tuple issue ((x,),(y,),...) -> (x,y,...)
    for j in range(dataLength):
        data[i][j] = struct.unpack('<h', file.read(2))
When it gets to this point, it produces the error above. I'm not sure why. Both dataLength and packets are even numbers, so I imagined unpacking 2 bytes at a time shouldn't be an issue. Any thoughts?
EDIT I did check to see what would happen if I read in the data one byte at a time. So
data[i][j] = struct.unpack('<b', file.read(1))
and that worked fine. It just is not liking to unpack anything else.
EDIT 2 I also just went ahead and made that slightly more compact by saying something like
data[i] = [struct.unpack('<h', file.read(2)) for j in range(dataLength)]
Still produces the same error - just more compactly.
As it turns out, there were still iterations left to perform when the data in the file ran out while reading 2 (or more) bytes at a time. The fix is to do something like the following:
readBytes = value_wanting_to_be_read
dataLength = int(value2/readBytes)
and then in the actual loop
data[i] = [struct.unpack('<h', file.read(readBytes)) for j in range(dataLength)]
which works if readBytes = 2.
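As an aside, when each packet body is homogeneous like this, reading it in one call and unpacking in bulk makes the end-of-file condition explicit instead of surfacing as a short read inside struct.unpack. A sketch assuming the same '<h' layout as above:

raw = file.read(2 * dataLength)  # the whole packet body at once
if len(raw) != 2 * dataLength:
    raise EOFError("file ended in the middle of a packet")
data[i] = list(struct.unpack(f'<{dataLength}h', raw))  # dataLength little-endian shorts, already flat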