I used the DJI Fly app to create a panorama from images taken by a DJI Mini 2 drone. My aim is to automate this process, either server-side or by creating an app that makes the panorama.
I tried to write code to create a panorama image out of 26 images taken by a DJI Mini 2 drone. I have gone through the DJI SDK and its sample code on Android and iOS, but these are too old and have not been updated (the Android sample app is not even compatible with the modern project structure). I tried OpenCV in Python and failed miserably, as it took too long to create a single image (some of the outputs were also wrong). I have seen the DJI Fly app work great; can someone tell me how I can achieve that?
I am open to all suggestions; don't limit yourself to Android devices.
My OpenCV code:
# import the necessary packages
import cv2
import imutils
from imutils import paths

# grab the paths to the input images and initialize our images list
print("[INFO] loading images...")
imagePaths = sorted(list(paths.list_images("/content/drive/MyDrive/100_0040")))
images = []

# loop over the image paths, load each one, and add them to our
# images-to-stitch list
print(imagePaths)
for imagePath in imagePaths:
    image = cv2.imread(imagePath)
    images.append(image)

print("[INFO] stitching images...")
stitcher = cv2.createStitcher() if imutils.is_cv3() else cv2.Stitcher_create()
(status, stitched) = stitcher.stitch(images)
print(status)

# if the status is '0', then OpenCV successfully performed image stitching
if status == 0:
    # write the output stitched image to disk
    cv2.imwrite('/content/sample_data/output11.jpeg', stitched)
    # display the output stitched image on screen
    cv2.imshow("Stitched", stitched)
    cv2.waitKey(0)
# otherwise the stitching failed, likely because not enough keypoints
# were detected
else:
    print("[INFO] image stitching failed ({})".format(status))
This creates an image, but with black spots in between.
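The black regions are typically the empty areas left around the warped images. One simple remedy is to crop the panorama to the bounding box of its non-black pixels; a minimal numpy-only sketch (my own helper, not part of OpenCV, and it assumes the panorama interior itself is not pure black):

```python
import numpy as np

def crop_black_border(stitched):
    """Crop a stitched panorama to the bounding box of non-black
    pixels, removing the black regions the warp leaves behind."""
    gray = stitched.max(axis=2)          # any lit channel -> not black
    ys, xs = np.nonzero(gray > 0)
    return stitched[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
```

This only trims the outer border; black holes fully inside the panorama would need masking or inpainting instead.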
Currently I am working on an automatic number plate recognition system. I have used YOLOv7 for number plate detection and the text detection facility from the Google Vision API. I have tested the whole system on test images. Now I want to extend the system to detect and read number plates in video sources. I could do the detection part for video; where I am stuck is applying OCR to the detected bounding box in the video.
For images, I first apply the trained YOLOv7 model, extract the number plate, and save the detection as a cropped part of the original image in a directory. Then OCR is applied to that cropped part (the number plate) and the text is read.
(Screenshots omitted: test sample, NP detection result, the cropped number plate that OCR is applied to, and the detected text.)
I can detect the number plates in the videos, but I could not find a way to freeze the frame where a number plate is detected and apply OCR, or any other way to read the number plate.
Is there a way to achieve this? Any help would be highly appreciated.
Try getting the coordinates of the bounding box from each frame in which a bounding box appears and apply OCR to that cropped region.
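A minimal sketch of that idea, assuming the detector returns pixel bounding boxes as (x1, y1, x2, y2); the OCR call itself (pytesseract, Google Vision, etc.) is left as a comment since it depends on your setup:

```python
import numpy as np

def crop_plate(frame, bbox):
    """Crop the detected number-plate region out of a video frame.

    bbox is (x1, y1, x2, y2) in pixel coordinates, as YOLO-style
    detectors typically provide after scaling to the frame size."""
    x1, y1, x2, y2 = bbox
    return frame[y1:y2, x1:x2]

# Dummy 100x200 frame with a bright "plate" region for illustration
frame = np.zeros((100, 200), dtype=np.uint8)
frame[40:60, 50:150] = 255
plate = crop_plate(frame, (50, 40, 150, 60))
print(plate.shape)  # (20, 100)
# text = pytesseract.image_to_string(plate)  # or send the crop to the Vision API
```

There is no need to freeze the frame: crop and OCR the region per frame as the video plays (optionally only every Nth frame, or once per tracked plate, to save API calls).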
With Kivy I created several scrollable lists where each item contains one image and two buttons (similar to a KivyMD TwoLineAvatar). This is used to show music albums or tracks. The images are loaded from the tracks and their sizes vary widely (from 115x115 to 1200x1200). It works fine and the images are shown correctly in a 50x50 BoxLayout.
Now the problem: I was running out of memory (on my 1 GB Raspberry Pi). It turned out that Kivy apparently shows the image at the correct size for the layout, but keeps the original full-size image in the widget.
To reduce the size of the image I use PIL's resize before the image is added to the widget:
from io import BytesIO
from PIL import Image as pilImage
from kivy.core.image import Image as CoreImage
from kivy.uix.image import Image

# use PIL to load the image
im = pilImage.open(settings.mopidy_data + '/local/images' + track['image'])
im = im.resize((50, 50))  # and resize it to the needed size
data = BytesIO()  # prepare temporary in-memory storage
im.save(data, format='JPEG')  # save the PIL image into it
data.seek(0)  # reset the data pointer to the start
cim = CoreImage(data, ext='jpg')  # read the data as a Kivy CoreImage
self.IM = Image(texture=cim.texture)  # create an Image widget from the texture
This works and saves a lot of memory, but to me it looks like a workaround. Is there a way that works without this external conversion? I already looked at Kivy Atlas, but it does not seem to have such a resize feature.
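As far as I know Kivy has no built-in resize-on-load, so the PIL detour is a reasonable approach; it can at least be packaged as a small reusable helper. A sketch (note that `thumbnail` preserves the aspect ratio, unlike `resize`, and `convert('RGB')` avoids JPEG save errors for images with an alpha channel):

```python
from io import BytesIO
from PIL import Image as PILImage

def thumbnail_jpeg(path_or_file, size=(50, 50)):
    """Load an image, shrink it, and return a BytesIO of JPEG data
    ready to feed into Kivy's CoreImage."""
    im = PILImage.open(path_or_file).convert('RGB')
    im.thumbnail(size)            # resize in place, preserving aspect ratio
    data = BytesIO()
    im.save(data, format='JPEG')
    data.seek(0)
    return data

# usage: cim = CoreImage(thumbnail_jpeg(path), ext='jpg')
```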
Is there any way I can compress a numpy image given a previous frame?
I want to do a sort of DIY live video stream for my 3D-printed RC car (Newone) from my Raspberry Pi.
Currently I am continuously sending JPEGs without any inter-frame compression. It would be cool to have some sort of motion prediction, which would save a lot of traffic.
npImg = cap.read()[1]  # capture a frame
npImg = cv2.resize(npImg, (320, 180))  # downsize the image for the stream
jImg = cv2.imencode('.jpg', npImg, [int(cv2.IMWRITE_JPEG_QUALITY), 90])[1]  # encode the frame as JPEG
stream = io.BytesIO(jImg)  # stream to read chunks from the JPEG data
while run:
    part = stream.read(504)  # 504 = 506 (max UDP packet length here) minus 2 bytes for video authentication
    if not part:
        break  # empty read means the JPEG has been fully sent; exit the loop
    soc.sendto(b'v' + part, address_server)
soc.sendto(b'f', address_server)  # frame finished
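Full motion prediction (as in H.264) is a big job, but a simple inter-frame delta already captures the idea: only transmit the pixels that changed versus the previous frame. A rough sketch assuming greyscale numpy frames (the function names are my own, not an existing API):

```python
import numpy as np

def delta_encode(prev, curr, threshold=8):
    """Return positions and new values of pixels that changed by
    more than `threshold` since the previous frame."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    ys, xs = np.nonzero(diff > threshold)
    return ys, xs, curr[ys, xs]

def delta_decode(prev, ys, xs, values):
    """Rebuild the current frame by patching the previous one."""
    out = prev.copy()
    out[ys, xs] = values
    return out
```

The receiver keeps the last decoded frame and applies each delta. You would still want a periodic full key frame so a lost UDP packet cannot corrupt the stream forever.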
I have a list of images in 8-bit greyscale. I have a Python script that successfully detects objects in colour images. Now I want to use this script to detect objects (people) in these 8-bit images. However, when I read an image through imread, I see a black window. How can I read this type of image in OpenCV and see the details in it? Here is my code for reading an image:
import cv2
import imutils

for imagePath in imagePaths:
    # load the image and resize it to (1) reduce detection time
    # and (2) improve detection accuracy
    image = cv2.imread(imagePath)
    image = imutils.resize(image, width=min(400, image.shape[1]))
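If the files are actually stored at more than 8 bits of depth (common for scientific or thermal cameras), `cv2.imread`'s default conversion can leave everything near black; reading with `cv2.IMREAD_ANYDEPTH` and stretching the values to the full 8-bit range makes the details visible. A numpy-only sketch of that stretch (my own helper name, not an OpenCV function):

```python
import numpy as np

def stretch_to_uint8(img):
    """Linearly rescale an image so its min..max span maps to 0..255,
    making low-contrast (or >8-bit) data visible for display/detection."""
    lo, hi = int(img.min()), int(img.max())
    if hi == lo:
        return np.zeros(img.shape, dtype=np.uint8)
    scaled = (img.astype(np.float32) - lo) * (255.0 / (hi - lo))
    return scaled.astype(np.uint8)

# e.g. image = stretch_to_uint8(cv2.imread(imagePath, cv2.IMREAD_ANYDEPTH))
```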
I successfully processed a video and had the algorithm detect faces, but now I am attempting to detect faces in real time, capturing images from the screen (such as when I'm playing games, etc.). This is the bit of code I used to process a captured video:
capture = cv2.VideoCapture('source_video.avi')
How can I change this to capture images from the screen in real time? Please give me some code examples if possible.
Don't use OpenCV for this. Better to use:
from PIL import ImageGrab
ImageGrab.grab().save("screen_capture.jpg", "JPEG")
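To feed those grabs into the existing face detector, each screenshot can be converted to the BGR numpy layout OpenCV expects and processed in a loop. A sketch (the cascade/detector setup is assumed from the earlier video code):

```python
import numpy as np
from PIL import Image

def pil_to_bgr(img):
    """Convert a PIL RGB image (e.g. from ImageGrab.grab()) into the
    BGR numpy array layout that OpenCV detectors expect."""
    arr = np.array(img.convert('RGB'))   # H x W x 3, RGB order
    return arr[:, :, ::-1].copy()        # swap channels to BGR

# Real-time loop (not runnable headless; shown for illustration):
# while True:
#     frame = pil_to_bgr(ImageGrab.grab())
#     faces = face_cascade.detectMultiScale(frame)
```

Grabbing the screen this way is not fast; for higher frame rates a library like `mss` is a common alternative, but the same conversion applies.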