What would be the most efficient way of flashing images in Python?
Currently I have an infinite while loop that calls sleep at the end, then uses matplotlib to display an image. However, I can't get matplotlib to replace the current image; instead I have to close the figure and show it again, which is slow. I'd like to flash sequences of images as precisely as possible at a set frequency.
I'm open to solutions that use other libraries to do the replacement in place.
Using OpenCV, you can iterate through the images and show them like this:
import cv2

for img in images:
    # Show the image (this replaces a previously displayed image
    # in the window with the same title)
    cv2.imshow('window title', img)
    # Wait 1000 ms before showing the next image; if the user
    # presses 'q', close the window and stop looping
    if cv2.waitKey(1000) & 0xFF == ord('q'):
        cv2.destroyAllWindows()
        break
Another option is a library called PsychoPy. It guarantees that everything is drawn and lets you control exactly when a window's contents are shown (a frame) with the window.flip() method.
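A minimal sketch of the PsychoPy approach (the window size, the 0.5 s hold time, and the images list of file paths are assumptions for illustration, not from the answer):

from psychopy import visual, core

win = visual.Window(size=(800, 600))
for path in images:  # `images`: list of image file paths (assumed)
    stim = visual.ImageStim(win, image=path)
    stim.draw()      # draw to the back buffer
    win.flip()       # swap buffers on the next screen refresh
    core.wait(0.5)   # hold the image; adjust for your flash frequency
win.close()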
Hi, whenever I use the cv2.imshow function, all I get is a black rectangle like this:
My code is below; I am using Python:
import cv2
image = cv2.imread("scren.png")
cv2.imshow("image2", image)
I have tried using different file types as well as restarting my computer. Does anyone know how to fix this? Thanks.
I'll put the answer from @fmw42 here, as it solved the problem for me too and I had been looking for it for some hours.
Add cv2.waitKey(0) after cv2.imshow(...).
I don't know why it works, but it solved the issue for me.
According to the documentation, imshow must be followed by a call to waitKey, or the image will not be shown. I believe this behaviour comes from how highgui works. So to display an image in Python using OpenCV, you will always have to do something like this:
import cv2
image = cv2.imread(r"path\to\image")
cv2.imshow("image", image)
cv2.waitKey(0)
Note about imshow
This function should be followed by a call to cv::waitKey or cv::pollKey to perform GUI housekeeping tasks that are necessary to actually show the given image and make the window respond to mouse and keyboard events. Otherwise, it won't display the image and the window might lock up. For example, waitKey(0) will display the window infinitely until any keypress (it is suitable for image display). waitKey(25) will display a frame and wait approximately 25 ms for a key press (suitable for displaying a video frame-by-frame). To remove the window, use cv::destroyWindow.
I am working on a program to crop an image around a rectangle in OpenCV. How could I go about doing this? I also need it to be able to turn multiple rectangles into cropped images.
I've tried using this tutorial: https://www.pyimagesearch.com/2016/02/08/opencv-shape-detection/, but I don't know how to get the borders of the shape and crop around it.
I hope to get an output of multiple images, containing the contents of each rectangle.
Thank you in advance
I have just recently done this for one of my projects, and it worked perfectly.
Here is the technique I use to implement this in Python OpenCV:
Display the image using OpenCV's cv2.imshow() function.
Select 2 points (x, y) on the image. This can be done by capturing mouse click events with OpenCV: press the mouse button where the first point is, drag towards the second point while holding the button, and release once the cursor is over the correct point. This selects the 2 points for you. In OpenCV, you can do this with cv2.EVENT_LBUTTONDOWN and cv2.EVENT_LBUTTONUP. Write a function that records the two points from these mouse events and pass it to cv2.setMouseCallback() (see the sketch after this list).
Once you have your 2 coordinates, you can draw a rectangle using OpenCV's cv2.rectangle() function, passing the image, the 2 points, and additional parameters such as the colour of the rectangle to draw.
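As a hedged sketch of such a callback (the names points and record_points are illustrative, not from the original answer):

import cv2

points = []

def record_points(event, x, y, flags, param):
    # Record the press position as the first corner and the
    # release position as the second corner
    if event == cv2.EVENT_LBUTTONDOWN:
        points[:] = [(x, y)]
    elif event == cv2.EVENT_LBUTTONUP:
        points.append((x, y))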
Once you're happy with those results, you can crop the results using something like this:
import cv2

image = cv2.imread("path_to_image")
cv2.namedWindow("image")  # the window must exist before attaching a callback
cv2.setMouseCallback("image", your_callback_function)
cv2.imshow("image", image)
cv2.waitKey(0)  # wait here while the user selects the two points
cropped_img = image[points[0][1]:points[1][1], points[0][0]:points[1][0]]
cv2.imshow("Cropped Image", cropped_img)
cv2.waitKey(0)
Here is one of the results I get on one of my images.
Before (original image):
Region of interest selected with a rectangle drawn around it:
After (cropped image):
I started by following this excellent tutorial on how to implement it before further improving it on my own, so you can get started here: Capturing mouse click events with Python and OpenCV. You should also read the comments at the bottom of the attached tutorial to easily improve the code.
You can get the coordinates of the box using the cv2.boundingRect() function. Then use a slicing operation to extract the required part of the image.
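A minimal sketch of that approach (it assumes contours comes from cv2.findContours on a thresholded copy of image; the names are illustrative):

# Crop every contour's axis-aligned bounding box
crops = []
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    crops.append(image[y:y + h, x:x + w])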
I am doing frame slicing with the OpenCV library in Python, and I am able to successfully create frames from the video being tested.
I am doing it on a CCTV camera installed at a parking entry gateway, where the video plays 24x7; at times a car stands still for a good number of minutes, leading to consecutive frames of the same vehicle.
My question is how can I create a frame only when a new vehicle enters the parking lot?
Stack Overflow is for code-related queries. I suggest you try some code and share your results and your problems before posting anything here. That said, you can start with object detection tutorials like this one and then do tracking with SORT. Many pre-trained models that include a car class are available, so you won't even need to train a new model.
Do you need to detect license plates, etc., or just notice when something happens? For the latter, you could use a very simple approach: take an average of, say, the frames of the last 30 seconds and subtract that from the current frame. If the mean absolute value of the delta image is above a threshold, that could be the change you are looking for.
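A hedged sketch of that running-average idea (the decay factor 0.01 and the threshold 10 are placeholders to tune, and cap is assumed to be an already-open cv2.VideoCapture):

import cv2

avg = None
while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype("float32")
    if avg is None:
        avg = gray.copy()
    cv2.accumulateWeighted(gray, avg, 0.01)  # slowly-updating background average
    delta = cv2.absdiff(gray, avg)
    if delta.mean() > 10:  # mean absolute difference above threshold
        pass  # likely a new vehicle; save this frame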
You could do some simpler motion detection with OpenCV; it's nicely explained in https://www.pyimagesearch.com/2015/05/25/basic-motion-detection-and-tracking-with-python-and-opencv/
So if you have a picture of the background as a reference, you can compare each new image to the background and only save the image when it differs enough from the background (hopefully only when a car has entered). Then make this the new background, and reset for a new car once the new images again start looking like the original background.
Hopefully I've stated my idea clearly enough and that link provides enough information to implement it. If not, just ask for clarification!
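A hedged sketch of that fixed-reference comparison (the function name, the pixel threshold 25, and the 5% changed-pixel fraction are illustrative choices, not from the answer):

import cv2

def differs_from_background(frame, background, thresh=25, frac=0.05):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    ref = cv2.cvtColor(background, cv2.COLOR_BGR2GRAY)
    delta = cv2.absdiff(gray, ref)
    # Fraction of pixels that changed noticeably
    return (delta > thresh).mean() > frac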
First, you need a specific Haar cascade XML file to detect only cars; you can get it from here. I have developed code to uniquely identify and count the cars visible to the CCTV camera you are using. The result depends on the frame rate and the detection quality, so you can control the frame rate and also the total count variable.
import cv2

cascade_src = 'cars.xml'
cap = cv2.VideoCapture('rtsp_of_ur_cctv')
car_cascade = cv2.CascadeClassifier(cascade_src)
prev_count = 0
total_count = 0

while True:
    ret, img = cap.read()
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    cars = car_cascade.detectMultiScale(gray, 1.1, 1)
    if len(cars) > prev_count:
        difference = len(cars) - prev_count
        total_count = total_count + difference
        # here you can save the unique new entry and avoid counting it again
        print(total_count)
    for (x, y, w, h) in cars:
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 2)
    prev_count = len(cars)
    cv2.imshow('video', img)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
So, I am developing a hobby project where a camera is mounted on a bot powered by a Raspberry Pi. The bot moves around a room and does some processing based on the camera's response. I apologize if this is not the right place to ask.
Problem:-
The camera attached to the bot performs background subtraction continuously while the bot is moving. If the background subtraction algorithm detects an object in front of the bot, it stops the bot and does further processing with respect to the object. The working assumption here is that the ground is of only one colour and uniform to a great extent.
The algorithm works great under very controlled lighting conditions. The problem arises when there are slight lighting changes or when the ground has small patches/potholes/unevenness in it. These scenarios generate false flags, and as a result my bot stops. Is there any way to prevent these false flags with modifications to the following code?
import os
import time
import cv2
import numpy as np
from picamera import PiCamera
from picamera.array import PiRGBArray

camera = PiCamera()
camera.resolution = (512, 512)
camera.awb_mode = "fluorescent"
camera.iso = 800
camera.contrast = 25
camera.brightness = 64
camera.sharpness = 100
rawCapture = PiRGBArray(camera, size=(512, 512))
first_time = 0    # flag: capture the first frame as the background image
frame_buffer = 0  # counter: change the background image every 30 frames
counter = 0
camera.start_preview()
time.sleep(1)

def imageSubtract(img):
    # Compare frames on the V channel of the LUV colour space
    luv = cv2.cvtColor(img, cv2.COLOR_BGR2LUV)
    l, u, v = cv2.split(luv)
    return v

for frame in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
    # The first 10 frames are rejected while the camera adjusts
    if first_time == 0:
        rawCapture.truncate(0)
        if frame_buffer < 10:
            print("Frame rejected -", str(frame_buffer))
            frame_buffer += 1
            continue
        os.system("clear")
        refImg = frame.array
        refThresh = imageSubtract(refImg)
        first_time = 1
        frame_buffer = 0
    frame_buffer += 1
    cv2.imshow("Background", refImg)
    image = frame.array
    cv2.imshow("Foreground", image)
    key = cv2.waitKey(1)
    rawCapture.truncate(0)
    newThresh = imageSubtract(image)
    diff = cv2.absdiff(refThresh, newThresh)  # subtract background from foreground
    kernel = np.ones((5, 5), np.uint8)
    diff = cv2.morphologyEx(diff, cv2.MORPH_OPEN, kernel)
    diff = cv2.dilate(diff, kernel, iterations=2)
    _, thresholded = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    _, contours, _ = cv2.findContours(thresholded, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    try:
        c = max(contours, key=cv2.contourArea)
        x, y, w, h = cv2.boundingRect(c)
        cv2.rectangle(thresholded, (x, y), (x + w, y + h), (125, 125, 125), 2)
        if cv2.contourArea(c) > 300 and len(contours) <= 3:
            if counter == 0:
                # Allow the device to move ahead for 0.1 s before processing the object
                print("Going to sleep for 0.1 second")
                time.sleep(0.1)
                counter = 1
                continue
            else:
                print("Object found!")
        cv2.imshow("Threshold", thresholded)
        if frame_buffer % 30 == 0:
            frame_buffer = 0
            refImg = image
            refThresh = imageSubtract(refImg)
            os.system("clear")
            print("Reference image changed")
    except Exception as e:
        print(e)
NOTE :- The above algorithm uses the continuous capture mode of the PiCamera. The first 10 frames are rejected because I have noticed that the PiCamera takes some time to adjust its colours once it starts up. The background image is changed every 30 frames because I want the background image to remain as close as possible to the foreground image: since the room is quite big, there will be some local changes in the light/colour of the ground between one corner of the room and the other, hence the need to update the background image every 30 frames. The object needs an area greater than 300 to be detected. I have also added a delay of 0.1 s (time.sleep(0.1)) after the object has been detected, because I want the object to enter the frame completely and be right in the middle of the frame before the device stops.
Some solutions that I had in mind were:
I thought of attaching a few IR sensors at the base of the device. If an object is detected (real or false flag), the bot checks the output from the IR sensors to see whether they pick up an object as well. For shadows and patches, the IR response will be null, so the bot continues to move forward.
I thought of calculating the height of the detected object. If the height is above a certain threshold, the presence of an object is confirmed; otherwise it is a false flag. But the camera faces down, which means the image is taken from the top, so I don't think there is any way to ascertain the height of the object from its top-down image.
Please suggest any alternative solutions. I want to make the above algorithm as robust as possible, because the entire working of the device depends upon the accuracy of the background subtraction.
Thank you for your help in advance!
EDIT 1-
Background Image - Back
Foreground Image - Front
Background subtraction works well when your camera is stationary and objects move in front of it. Your case is just the opposite, so it will easily detect "movement" everywhere.
If your background is uniform, any edge-detection filter may work better than your background subtraction.
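For example, a minimal edge-detection sketch (the Canny thresholds 50 and 150 are common starting values, not from the answer; image is assumed to be a BGR frame from the camera):

import cv2

gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)  # strong responses at object boundaries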
Another alternative is thresholding (see https://docs.opencv.org/3.4.0/d7/d4d/tutorial_py_thresholding.html). I would suggest an adaptive threshold (Gaussian or mean), which works better when the edges are darker than the centre. Try different sizes of the neighbourhood and different values for the constant C.
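A hedged sketch of the adaptive-threshold suggestion (reusing gray from the sketch above; the block size 11 and the constant C = 2 are starting points to tune, not values from the answer):

mask = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                             cv2.THRESH_BINARY, 11, 2)  # per-pixel local threshold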
Then you can erode/dilate as you already do and filter by size or position as you suggest. Adding more sensors to your robot is a good idea as well.
My current program will output an image to the user and based on user input, readjust the image as necessary.
Long story short, I am trying to find circular objects in an image file. I will be using the Hough Circle Transform. However, because many of the circles in the image are not "perfect circles", I am using an algorithm that "guesses" the radii of the circles. I want to allow the user to readjust the radii as necessary.
Is there some way to ask the user for input and then, based on that input, readjust the window shown by imshow()? Right now, imshow() refuses to show the actual window until I use cv2.waitKey(0), at which point I can't ask for user input until the window is destroyed.
You can call imshow repeatedly without destroying the window. And yes, you will likely need waitKey; just don't call it with 0, or it will wait indefinitely. Call it with 1 to wait just 1 millisecond and ensure the image redraws. Try something like:
while True:
    cv2.imshow('image', img)
    cv2.waitKey(1)
    radius = input('Input radius')
    # recalculate image with new radius here...
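One caveat to be aware of: input() blocks the script while it waits for typing, and the HighGUI event loop isn't serviced during that time, so the window only repaints when the loop comes back around to waitKey. If the window looks frozen while you type, that is why.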