I need to take a single snapshot from a webcam. I chose SimpleCV for this task.
Now I try to get a single image and show it:
from SimpleCV import Camera
cam = Camera()
img = cam.getImage()
img.show()
But I only see a black image. I think the camera is not ready at that moment, because if I call time.sleep(10) before cam.getImage(), everything works fine.
What is the right way to do this? Thank you!
Have you installed PIL? I had similar problems, but after installing PIL everything worked fine. Hope this helps.
You can download PIL from Pythonware.
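If you're not sure whether PIL is visible to the interpreter that runs SimpleCV, a quick check (the Pillow fork installs under the same PIL package name) is:
try:
    import PIL  # succeeds if PIL or the Pillow fork is installed
    print("PIL import OK")
except ImportError:
    print("PIL is not installed for this interpreter")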
I ran into this same issue and came up with the following workaround. Basically it grabs an image and tests the middle pixel to see if it's black (0, 0, 0). If it is, it waits 1 second and tries again.
import time
from SimpleCV import Camera
cam = Camera()
r = g = b = 0
while r + g + b < 0.01:
    img = cam.getImage()
    r, g, b = img[img.width/2, img.height/2]
    print("r: {} g: {} b: {}".format(r, g, b))
    time.sleep(1)
img.save(file_name)
Your problem may be that when the script ends, the camera object doesn't release the webcam for other programs. When you wait before running the script again, Windows frees it up (maybe?), so it works again. If you use it while the old script still thinks it owns the webcam, it shows up as black. Anyway, the solution here seems to work: add "del cam" to the end of your script, which makes the camera object go away and lets you use the camera again.
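For example, the original script with the camera explicitly released at the end would look like this (a small sketch of the same idea):
from SimpleCV import Camera

cam = Camera()
img = cam.getImage()
img.show()
# make the camera object go away so the webcam is freed for other programs
del cam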
Try this:
from SimpleCV import Camera
cam = Camera(0)
while True:
    img = cam.getImage()
    img.show()
Does anyone know a way to create a camera source using Python? For example, I have 2 cameras:
Webcam
USB Cam
Whenever I use any web application or interface which requires a camera, I have the option to select a camera source from the two mentioned above.
What I want to achieve: I am processing real-time frames in my Python program with a camera portal of my own, like this:
import numpy as np
import cv2

cam = cv2.VideoCapture(0)  # capture device for the webcam
while True:
    _, frame = cam.read()
    k = cv2.waitKey(1)
    if k & 0xFF == ord('q'):
        cam.release()
        cv2.destroyAllWindows()
        break
    else:
        cv2.imshow("Frame", frame)
Now I want to use these frames as a camera source, so that the next time I open a piece of software or a web app which requires a camera, it shows the options as follows:
Webcam
USB Cam
Python Cam (Or whatever be the name)
Does anyone have any advice or hints on how to go about that? I have seen some premium software that generates its own camera source, but it's written in C++. I was wondering whether this could be done in Python.
Here is an example of the same:
As you can see, there are multiple camera sources there. I want to add one camera source which displays the frames processed by Python in its feed.
You can use v4l2loopback (https://github.com/umlaeute/v4l2loopback) for that. It is written in C but there are some python wrappers. I've used virtualvideo (https://github.com/Flashs/virtualvideo).
The virtualvideo README is pretty straightforward, but here is my modification of their GitHub example to match your goal:
import virtualvideo
import cv2

class MyVideoSource(virtualvideo.VideoSource):
    def __init__(self):
        self.cam = cv2.VideoCapture(0)
        _, img = self.cam.read()
        size = img.shape
        # OpenCV's shape is (y, x, channels)
        self._size = (size[1], size[0])

    def img_size(self):
        return self._size

    def fps(self):
        return 30

    def generator(self):
        while True:
            _, img = self.cam.read()
            yield img

vidsrc = MyVideoSource()
fvd = virtualvideo.FakeVideoDevice()
fvd.init_input(vidsrc)
fvd.init_output(2, 640, 480, pix_fmt='yuyv422')
fvd.run()
In init_output, the first argument is the virtual camera device number that I created when adding the kernel module:
sudo modprobe v4l2loopback video_nr=2 exclusive_caps=1
The last arguments are my webcam's size and the pixel format.
You should see this option in Hangouts:
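Before opening Hangouts, you can also quickly confirm from Python that the loopback device created by the modprobe line above actually exists (this just lists the video device nodes):
import glob

# with video_nr=2, /dev/video2 should appear in this list
print(glob.glob('/dev/video*'))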
I am using OpenCV in Python, and the aim of the code is to display one image after another in the same window without destroying it. For example, i1.jpg is displayed, and then, after a pause of 5 s, the left arrow image is displayed in the same window.
In my case, i1.jpg is displayed, then the window is destroyed, and then i2.jpg is displayed in another window.
Here is the code:
t_screen_time = 5000
import numpy
import cv2
left_arrow = "left_arrow.jpg"
right_arrow = "right_arrow.jpg"
img = cv2.imread(left_arrow)
cv2.imshow('image',img)
cv2.waitKey(t_screen_time)
cv2.destroyAllWindows()
img = cv2.imread(right_arrow)
cv2.imshow('image',img)
cv2.waitKey(t_screen_time)
cv2.destroyAllWindows()
The argument in cv2.waitKey() is the time to wait for a keypress and not exactly the time the image is displayed for.
cv::imshow() or cv2.imshow() was not designed to be a complete GUI. It was designed to let you quickly debug and display images/videos.
You can achieve what you need using some form of sleep() in a thread, but this is harder to do in Python. You could also just edit your code to:
t_screen_time = 5000
import numpy
import cv2
left_arrow = r"first/image"
right_arrow = r"second/image"
img = cv2.imread(left_arrow)
cv2.imshow('image',img)
cv2.waitKey(t_screen_time)
img = cv2.imread(right_arrow)
cv2.imshow('image',img)
cv2.waitKey(t_screen_time)
cv2.destroyAllWindows()
by removing the cv2.destroyAllWindows() before displaying the second image. I've tested this code and it works, but if you're looking for more GUI functionality, use something like Qt.
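The same pattern extends to any number of images: keep reusing one window name and only destroy the window after the last image. A minimal sketch (the file names are just placeholders):
import cv2

t_screen_time = 5000  # time each image stays up, in milliseconds
image_paths = ["i1.jpg", "i2.jpg"]  # placeholder file names

for path in image_paths:
    img = cv2.imread(path)
    cv2.imshow('image', img)    # same window name, so the window is reused
    cv2.waitKey(t_screen_time)  # waits for a key press or the timeout

cv2.destroyAllWindows()         # close the window only once, at the end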
I'm running OpenCV version 3.1.0 and Python 2.7 on Mac OS X 10.9 and want to display a black image fullscreen. My screen's resolution is 2880x1800.
However when I attempt to do so, there is a large white border on the top of the screen.
Here's my code, note that black.jpg is a 2880x1800 image.
import cv2
img = cv2.imread("black.jpg")
cv2.namedWindow("window", cv2.WND_PROP_FULLSCREEN)
cv2.setWindowProperty("window",cv2.WND_PROP_FULLSCREEN,cv2.WINDOW_FULLSCREEN)
cv2.imshow("window", img)
while True:
    key = cv2.waitKey(20)
    # exit on ESC
    if key == 27:
        break
I've also tried to just create a black image manually, using the following code.
import cv2
import numpy as np
img = np.zeros((1800, 2880))
cv2.namedWindow("window", cv2.WND_PROP_FULLSCREEN)
cv2.setWindowProperty("window", cv2.WND_PROP_FULLSCREEN,cv2.WINDOW_FULLSCREEN)
cv2.imshow("window",img)
cv2.waitKey(0)
I've adjusted the dimensions of the numpy array to make it larger but the border still remains.
Doing some research, I've discovered that this may be a bug in OpenCV. However, the solutions I found only apply to the Windows operating system; see the following:
OpenCV window in fullscreen and without any borders
and
How to display an image in full screen borderless window in openCV
If anyone has an idea of how to fix the bug for Macs I can go ahead and rebuild the library. Or if I am doing something incorrectly please let me know. Thanks!
I guess the key is to adapt the image size to the REAL MacBook screen resolution; 2880x1800 is probably not the one you are currently using.
Find out the display resolution of your MacBook at System Preferences -> Display -> Scaled.
Code with OpenCV
import cv2

def show_full_screen_image():
    while True:
        print 'loading images...'
        img = cv2.imread('preferred_image.png')
        # Note: 1440x900 is the scaled resolution of my MBP
        img = cv2.resize(img, (1440, 900), interpolation=cv2.INTER_CUBIC)
        cv2.namedWindow("test", cv2.WND_PROP_FULLSCREEN)
        cv2.setWindowProperty("test", cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)
        cv2.imshow("test", img)
        key = cv2.waitKey(0)
        if key == 27:  # ESC to exit
            break

if __name__ == '__main__':
    show_full_screen_image()
The problem in those links is not the presence of a border, but the window's background showing through for some reason. From what I understand, OpenCV's namedWindow actually creates two windows, one inside the other. The "white lines" are actually the grey background of the parent window. You might be facing the same problem on OS X, since OpenCV creates windows this way.
I solved it on Windows by changing the background colour of the parent window through the Windows API; maybe you can try something similar on OS X.
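If patching the library or poking the parent window is not an option, one workaround (not the Windows API fix above, just a sketch) is to letterbox the image yourself to the exact screen resolution with black padding, so the parent window's background is never visible:
import cv2
import numpy as np

SCREEN_W, SCREEN_H = 2880, 1800  # the resolution reported in the question

img = cv2.imread("black.jpg")
h, w = img.shape[:2]

# shrink the image first if it is larger than the screen
if w > SCREEN_W or h > SCREEN_H:
    scale = min(SCREEN_W / float(w), SCREEN_H / float(h))
    img = cv2.resize(img, (int(w * scale), int(h * scale)))
    h, w = img.shape[:2]

# paste the image onto a black canvas the size of the screen
canvas = np.zeros((SCREEN_H, SCREEN_W, 3), dtype=np.uint8)
y, x = (SCREEN_H - h) // 2, (SCREEN_W - w) // 2
canvas[y:y + h, x:x + w] = img

cv2.namedWindow("window", cv2.WND_PROP_FULLSCREEN)
cv2.setWindowProperty("window", cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)
cv2.imshow("window", canvas)
cv2.waitKey(0)
cv2.destroyAllWindows()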
I'm trying out SimpleCV and I'm noticing that every time I click the title bar, SimpleCV stops working to the point that it crashes. Before crashing it says "pythonw.exe Stopped working." That happens if I edit my script and run it from the Python IDLE. If I simply double-click it, the image is displayed for 20 seconds and then just closes.
This is what I tried. Really simple.
from SimpleCV import Image
img = Image("carro.jpg")
img = img.scale(300,300)
img.show()
Just wondering if this could cause any kind of trouble while doing some image processing, like subtracting colors and stuff like that.
I had the same problem, and after searching I found this: from http://help.simplecv.org/question/1118/why-imageshow-freezes/ it looks like it is caused by pygame requiring a while loop to keep pumping events to the window.
The solution, as indicated in that post, which worked for me, was to use the quit() method on the window handle returned by show():
from SimpleCV import Image

img = Image("carro.jpg")
img = img.scale(300,300)
win = img.show()
# wait for user input before closing
raw_input()
win.quit()
I am familiar with programming but not with Python or Linux. I am programming in Python on a Raspberry Pi, trying to create a security camera. Here is my code to test my current problem:
#!/usr/bin/python
import pygame, sys
from pygame.locals import *
from datetime import datetime
import pygame.camera
import time
pygame.init()
pygame.camera.init()
width = 640
height = 480
pic_root = "/root/cam/"
cam = pygame.camera.Camera("/dev/video0",(width,height),"RGB")
cam.start()
while True:
    raw_input("press enter")
    image = cam.get_image()
    filename = datetime.now().strftime("%Y_%m_%d_%H_%M_%S") + '.jpg'
    filepath = pic_root + filename
    pygame.image.save(image, filepath)
So when I press enter, an image is taken from the webcam and saved. But the image is always two images behind. No matter how long I wait between saving images, the first two are always very dim, as if the webcam has just started up, and then the rest are always two images late.
So if I took 5 images, one with one finger up, the next with two fingers, etc., I would end up with two dark images and then the first three images: 1, 2, and 3 fingers. It is as if the images are being stored somewhere, and when I try to save a live image it pulls up an old one.
Am I missing something here? What's the issue?
First, I'm not familiar with Pygame (but I do a lot of snapshot capturing with OpenCV -- here's one of my projects: http://vmlaker.github.io/wabbit.)
I changed your code so that on every iteration, you 1) start, 2) take snapshot, and 3) stop the camera. This works a little better, in that it is only one image behind (instead of two.) It's still pretty weird how the old image sticks around from the previous run... I haven't figured out how to flush the camera. Notice I also changed pic_root, and instead of the infinite loop I'm using 3 iterations only:
from datetime import datetime
import pygame
import pygame.camera
pygame.init()
pygame.camera.init()
width = 640
height = 480
pic_root = './'
cam = pygame.camera.Camera("/dev/video0",(width,height),"RGB")
#cam.start()
for ii in range(3):
    raw_input("press enter")
    cam.start()
    image = cam.get_image()
    cam.stop()
    filename = datetime.now().strftime("%Y_%m_%d_%H_%M_%S") + '.jpg'
    filepath = pic_root + filename
    pygame.image.save(image, filepath)
The OP's comment helped, but I actually have to pull the picture three times with get_image() before saving.
I also have a wakeup function which I call after a long standby time to wake the camera. Mine tends to return black images after a long idle time.
I guess all this weird stuff has something to do with a buffer, but the multiple calls did the trick for me.
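A sketch of that idea applied to the script above, discarding a couple of buffered frames before keeping the one that gets saved (the number of throwaway reads is a guess and may differ per camera):
from datetime import datetime
import pygame
import pygame.camera

pygame.init()
pygame.camera.init()
cam = pygame.camera.Camera("/dev/video0", (640, 480), "RGB")
cam.start()

raw_input("press enter")
# read a few frames so the saved image is current, not a stale buffered one
for _ in range(3):
    image = cam.get_image()
cam.stop()

filename = datetime.now().strftime("%Y_%m_%d_%H_%M_%S") + '.jpg'
pygame.image.save(image, filename)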