DIY compression of an image given a previous frame (inter-frame compression) - Python

Is there any way I can compress a numpy image given a previous frame?
I want to do a sort of DIY live video stream for my 3D-printed RC car (Newone) from my Raspberry Pi.
Currently I am continuously sending JPEGs without any inter-frame compression. It would be cool to have some sort of motion prediction, which would save a lot of traffic.
npImg = cap.read()[1]  # capture an image
npImg = cv2.resize(npImg, (320, 180))  # downsize the image for the stream
jImg = cv2.imencode('.jpg', npImg, [int(cv2.IMWRITE_JPEG_QUALITY), 90])[1]  # numpy image to JPEG
stream = io.BytesIO(jImg)  # stream for reading the JPEG back in chunks
while run:
    part = stream.read(504)  # 506 is the max UDP packet length, minus 2 for video authentication
    if not part:
        break  # read() returns an empty string once the JPEG is exhausted
    soc.sendto(b'v' + part, address_server)
soc.sendto(b'f', address_server)  # image finished
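One simple way to get inter-frame savings without writing a real video codec is block-based differencing: keep the last frame you sent, and only re-send the blocks that changed. Below is a minimal sketch of that idea; the block size, threshold, and function names are illustrative, not from the code above.
import cv2
import numpy as np

BLOCK = 16          # block size in pixels
THRESHOLD = 10.0    # mean absolute difference that counts as "changed"

def changed_blocks(prev, curr, block=BLOCK, threshold=THRESHOLD):
    """Yield (x, y, jpg_bytes) for every block of curr that differs from prev."""
    diff = cv2.absdiff(prev, curr)
    h, w = curr.shape[:2]
    for y in range(0, h, block):
        for x in range(0, w, block):
            if diff[y:y+block, x:x+block].mean() > threshold:
                ok, jpg = cv2.imencode('.jpg', curr[y:y+block, x:x+block],
                                       [int(cv2.IMWRITE_JPEG_QUALITY), 90])
                if ok:
                    yield x, y, jpg.tobytes()

def apply_blocks(frame, blocks):
    """Receiver side: decode each patch and paste it into the kept frame."""
    for x, y, jpg_bytes in blocks:
        patch = cv2.imdecode(np.frombuffer(jpg_bytes, np.uint8), cv2.IMREAD_COLOR)
        frame[y:y+patch.shape[0], x:x+patch.shape[1]] = patch
    return frame
Each (x, y, jpg) record still has to fit in your 504-byte UDP payload, and since UDP drops packets you would want to re-send a full keyframe JPEG periodically so the receiver can resynchronise.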

Related

Image stitching of DJI Mavic Mini 2 panoramas programmatically

I used the DJI Fly app to create a panorama image taken by a DJI Mini 2 drone. My aim is to automate this process, either server-side or by creating an app that makes the panorama.
I tried to write code for creating a panorama image out of 26 images taken by the DJI Mini 2. I have gone through the DJI SDK and its sample code on Android and iOS, but these are too old and have not been updated (the Android app is not even compatible with the modern code structure). I tried OpenCV in Python and failed miserably, as it took too long to create a single image (and some of the output images were wrong too). I have seen the DJI Fly app working great; can someone tell me how I can achieve that?
I am open to all suggestions; don't limit yourself to Android devices.
My OpenCV code:
import cv2
import imutils
from imutils import paths

# grab the paths to the input images and initialize our images list
print("[INFO] loading images...")
imagePaths = sorted(list(paths.list_images("/content/drive/MyDrive/100_0040")))
images = []
# loop over the image paths, load each one, and add them to our
# images-to-stitch list
print(imagePaths)
for imagePath in imagePaths:
    image = cv2.imread(imagePath)
    images.append(image)
print("[INFO] stitching images...")
stitcher = cv2.createStitcher() if imutils.is_cv3() else cv2.Stitcher_create()
(status, stitched) = stitcher.stitch(images)
print(status)
# if the status is '0', then OpenCV successfully performed image
# stitching
if status == 0:
    # write the output stitched image to disk
    cv2.imwrite('/content/sample_data/output11.jpeg', stitched)
    # display the output stitched image on our screen
    cv2.imshow("Stitched", stitched)
    cv2.waitKey(0)
# otherwise the stitching failed, likely due to not enough keypoints
# being detected
else:
    print("[INFO] image stitching failed ({})".format(status))
This creates an image, but with black spots in between.
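The black regions are parts of the output canvas that no warped input image covers. If they sit along the edges, a common workaround (a sketch, not part of the question's code) is to crop the panorama to its non-black bounding box; black holes in the interior usually mean neighbouring shots don't overlap enough, and cropping won't fix those.
import cv2

def crop_black_border(stitched):
    """Crop a stitched panorama to the bounding box of its non-black content."""
    gray = cv2.cvtColor(stitched, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    return stitched[y:y+h, x:x+w]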

How to convert a Numpy array image to a JPEG without saving the image?

I'm using Microsoft Azure's Face API to detect the emotions of a person in a video. I have a Python program working correctly with local images, and now I'm trying to take a local video and send each frame to the API, and store the result of each analysis.
The data sent to Azure's Face API needs to be a PNG/JPG file read as bytes:
image_data=open(image_source, "rb").read()
OpenCV seems to be the standard for going frame by frame through a video with Python, but the frames are of the type Numpy array. You can take each frame of a video and save it as a JPG to disk like so:
import cv2  # OpenCV

vidcap = cv2.VideoCapture('vid.mp4')
success, image = vidcap.read()
count = 1
while success:
    cv2.imwrite("video_data/frame_%d.jpg" % count, image)
    success, image = vidcap.read()  # image is a Numpy array
    print('Saved frame ', count)
    count += 1
But this isn't exactly what I want. Is there any way to do this Numpy-array-to-JPG conversion without saving a file to disk? I just want to convert each frame to a JPG, then send that image as bytes to the Azure API.
Any and all advice and guidance is appreciated, thanks!
Edit: I've got a working workaround by converting the Numpy array frame to a PIL Image object and converting that to a PNG via io.BytesIO. If anyone has any more efficient/nicer/cleaner/better solutions, I would still love to hear them!
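For reference, the workaround described in the edit looks roughly like this (a sketch; note the BGR-to-RGB swap, since OpenCV frames are BGR while PIL expects RGB):
from io import BytesIO

import cv2
from PIL import Image

pil_img = Image.fromarray(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
buf = BytesIO()
pil_img.save(buf, format='PNG')
image_data = buf.getvalue()  # PNG bytes, ready to send to the API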
You just need cv2.imencode() like this:
success, frame = vidcap.read()
_, JPEG = cv2.imencode('.jpeg', frame)
JPEG will now be a Numpy array containing a JPEG-encoded image. If you want to send it to Azure as bytes, you can send:
JPEG.tobytes()
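For completeness, posting those bytes to the Face API with requests looks roughly like this; the region, subscription key, and attribute list are placeholders to fill in from your own Azure resource:
import requests

FACE_API_URL = "https://<your-region>.api.cognitive.microsoft.com/face/v1.0/detect"
headers = {
    "Ocp-Apim-Subscription-Key": "<your-subscription-key>",
    "Content-Type": "application/octet-stream",
}
params = {"returnFaceAttributes": "emotion"}

response = requests.post(FACE_API_URL, headers=headers, params=params,
                         data=JPEG.tobytes())
print(response.json())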

Why is OpenCV's video-reading FPS not the same as the video's encoded FPS?

In OpenCV with Python, when the FPS of the webcam and of a video file in the directory are the same, why does the video file play in fast-forward while the webcam shows frames at a normal rate? What role does the cv2.waitKey() function play here?
The FPS of a video file describes how it was encoded: how many frames it contains per second, as the name suggests. For example, extracting 1 second of the video will produce exactly that number of frames (images).
The corresponding FPS of the web camera is how many frames that camera can capture in a second. If saved to a video file, that would be how many frames are contained within each 1-second span.
There is a third (probably hidden to you) concept here, though: how fast OpenCV can read a video file. Normally, for typical resolutions on a modern computer, this rate is higher than the video's actual FPS. So your computer seems to play the video back in fast-forward mode because it reads (and displays) frames at a faster pace than the video file's FPS.
Theoretically, you can calculate the delay you should introduce into the playback loop to force it to display at the normal pace, although getting it exact (rather than by trial and error) is not trivial.
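A minimal sketch of that calculation, assuming the file reports its FPS correctly: read CAP_PROP_FPS once and pass the per-frame delay to cv2.waitKey(). This ignores the time spent decoding and drawing each frame, so it is still an approximation.
import cv2

cap = cv2.VideoCapture('vid.mp4')
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if the file reports no FPS
delay_ms = max(1, int(1000 / fps))       # waitKey() takes milliseconds

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    cv2.imshow('playback', frame)
    if cv2.waitKey(delay_ms) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()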
Hope this clarifies the issue.

Why does OpenCV's VideoCapture read video frames with wrong pixel values?

I am capturing videos from a camera and saving them using OpenCV's VideoWriter. I save the captured videos as uncompressed AVI files. When I finish recording, another script is supposed to read the video frame by frame and process the pixel values. However, when I try to read the frames of the saved video, the pixel values are off a bit.
For example, comparing the first frame of the video being written with the first frame of the video being read (which should be 100% identical), I notice that the pixel values are off by a small amount (RGB values differ by a small number, usually less than 5).
I have already made sure that I am using the exact same video codec when writing the video and when reading it (check the code below).
import cv2

def write_video(frames):
    out = cv2.VideoWriter("encrypted.avi", cv2.VideoWriter_fourcc(*'XVID'), 30, (640, 480))
    for frame in frames:
        out.write(frame)
    out.release()

def read_video():
    cap = cv2.VideoCapture("encrypted.avi")
    cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*'XVID'))
    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:
            break
For my application, the frames being written and read should match 100%. I have included an image highlighting the difference between the first frame in the video being written, and the video being read. Any help is much appreciated!
These are compression artifacts, since you are using lossy compression (XVID). If you would like your frames to match down to the last bit, write them as a sequence of .png files instead; these are losslessly compressed and preserve everything. Beware that the .png sequence will take much more of your HDD space than compressed video.
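A sketch of that suggestion: dump and reload the frames as a numbered PNG sequence instead of pushing them through a lossy codec (the file-name pattern here is just an example).
import cv2

def write_png_sequence(frames, prefix="frame"):
    for i, frame in enumerate(frames):
        cv2.imwrite("%s_%06d.png" % (prefix, i), frame)

def read_png_sequence(count, prefix="frame"):
    return [cv2.imread("%s_%06d.png" % (prefix, i)) for i in range(count)]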

YUYV Framerate faster than MJPG from USB Camera OpenCV

I am using a SOM running a Cortex-A5 at 500 MHz and am trying to maximise the frame rate received from a USB camera. My camera supports video capture in YUYV and MJPEG.
Other posts suggested forcing OpenCV to read MJPEG frames from the camera; however, this slowed down the frame rate.
I can currently get about 18 fps reading the YUYV format and about 10 fps reading MJPEG at 640x480. Currently I am just grabbing frames and am not doing any other processing. I read back the CAP_PROP_FOURCC format on each loop to make sure that OpenCV has set the capture format correctly.
I am currently running OpenCV 4 and Python 3.5.
Any ideas why this may be happening?
EDIT: Capture code:
# Repeatedly capture the current image
while True:
    ret, image = cap.read()
    if image is None:
        time.sleep(0.5)
        continue
    codec = cap.get(cv2.CAP_PROP_FOURCC)
    print(codec)
    # Print the frame rate (fps here is the asker's own frame-rate counter).
    text = '{:.2f}, {:.2f}, {:.2f} fps'.format(*fps.tick())
    print(text)
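For reference, requesting a specific capture format before the loop usually looks like this (a sketch; whether the camera and driver honour the request varies, which is why reading CAP_PROP_FOURCC back, as above, is a sensible check):
import cv2

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*'MJPG'))  # or *'YUYV'
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)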
Please provide the exact SOM and the camera that you are using.
There are many factors at play, for example the format of the images captured by the camera, how they are transferred, and how they are received and handled by the SOM.
Transferring them shouldn't be the problem in terms of bandwidth.
I am assuming that the OpenCV settings only apply on the SOM side and don't change the camera's capture format, so with MJPEG the SOM has extra decoding work to do, hence the frame rate drops.
[EDIT]
I cannot comment yet so I hope you read this ... your camera link is dead :/
