I use this library, http://videocapture.sourceforge.net/, for capturing from a webcam. But I don't understand how to send its video stream to a QPixmap.
If you check out the documentation for this library, you will see it's very short and sweet: http://videocapture.sourceforge.net/html/VideoCapture.html
There are two ways you could get your image into Qt...
The best way - Directly into a QImage
The documentation for getImage() states that it returns a PIL image. PIL, if you are using a recent version, has a module called ImageQt which can take a PIL Image object and give you back a QImage. From there you can convert it to a QPixmap:
from PyQt4 import QtCore, QtGui
from VideoCapture import Device
from PIL import Image, ImageQt
app = QtGui.QApplication([])
cam = Device()
# this is a PIL image
pilImage = cam.getImage()
# this is a QImage
qImage = ImageQt.ImageQt(pilImage)
# this is a QPixmap
qPixmap = QtGui.QPixmap.fromImage(qImage)
The other way - Write to disk first
If you follow the example given on this module's website, it shows how to use saveSnapshot() to save the image to disk. This is less desirable than the first method since it requires disk I/O, but I will still mention it. You would then read the file into your Qt app as a QPixmap:
cam = Device()
cam.saveSnapshot('image.jpg')
qPixmap = QtGui.QPixmap('image.jpg')
Use the first method. It's faster and more efficient.
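For a live preview rather than a single frame, you can drive the same conversion with a QTimer. Here's a minimal sketch under that assumption; the refresh rate and widget setup are illustrative, and the Device/ImageQt calls are the same as above:
import sys
from PyQt4 import QtCore, QtGui
from VideoCapture import Device
from PIL import ImageQt

app = QtGui.QApplication(sys.argv)
cam = Device()
label = QtGui.QLabel()
label.show()

def show_frame():
    # grab a PIL image from the camera and push it into the label
    qimage = ImageQt.ImageQt(cam.getImage())
    label.setPixmap(QtGui.QPixmap.fromImage(qimage))

timer = QtCore.QTimer()
timer.timeout.connect(show_frame)
timer.start(33)  # roughly 30 frames per second
sys.exit(app.exec_())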
Related
My current program is in Python and uses OpenCV. It relies on webcam captures, and I process every captured frame:
import cv2

# use the webcam
cap = cv2.VideoCapture(0)
while True:
    # read a frame from the webcam
    ret, img = cap.read()
    if not ret:
        break  # stop if no frame could be read
    # transform image
I would like to make a Kivy interface (or another graphical user interface) with buttons, while keeping the existing webcam-capture functionality.
I found this example:
https://kivy.org/docs/examples/gen__camera__main__py.html
— but it doesn’t explain how to acquire the webcam image to process it with OpenCV.
I found an older example:
http://thezestyblogfarmer.blogspot.it/2013/10/kivy-python-script-for-capturing.html
— it saves screenshots to disk using the ‘screenshot’ function. Then I can read the saved files and process them, but this seems to be an unnecessary step.
What else can I try?
Found this example here: https://groups.google.com/forum/#!topic/kivy-users/N18DmblNWb0
It converts the OpenCV captures to Kivy textures, so you can apply any kind of cv transformation before displaying the image in your Kivy interface.
__author__ = 'bunkus'
from kivy.app import App
from kivy.uix.widget import Widget
from kivy.uix.boxlayout import BoxLayout
from kivy.uix.image import Image
from kivy.clock import Clock
from kivy.graphics.texture import Texture
import cv2
class CamApp(App):
def build(self):
        self.img1 = Image()
layout = BoxLayout()
layout.add_widget(self.img1)
#opencv2 stuffs
self.capture = cv2.VideoCapture(0)
cv2.namedWindow("CV2 Image")
Clock.schedule_interval(self.update, 1.0/33.0)
return layout
def update(self, dt):
# display image from cam in opencv window
ret, frame = self.capture.read()
cv2.imshow("CV2 Image", frame)
# convert it to texture
        buf1 = cv2.flip(frame, 0)  # flip vertically: Kivy texture origin is bottom-left
        buf = buf1.tobytes()  # tostring() is deprecated in NumPy; tobytes() is equivalent
texture1 = Texture.create(size=(frame.shape[1], frame.shape[0]), colorfmt='bgr')
#if working on RASPBERRY PI, use colorfmt='rgba' here instead, but stick with "bgr" in blit_buffer.
texture1.blit_buffer(buf, colorfmt='bgr', bufferfmt='ubyte')
# display image from the texture
self.img1.texture = texture1
if __name__ == '__main__':
CamApp().run()
cv2.destroyAllWindows()
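Note that the cv2.namedWindow("CV2 Image") and cv2.imshow calls only open a separate OpenCV debug window alongside the Kivy one; for a pure Kivy interface you can remove those two calls and keep just the texture blit.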
Note: I have no clue how OpenCV works, but I found camera_opencv.py, which means there is an easy way to work with it.
As you can see in the camera example, this is the default way, and when you look in __init__.py for the camera core you can see opencv among the providers, so it may well work with OpenCV out of the box. Check the log for OpenCV being detected as a provider: you should see CameraOpenCV mentioned somewhere if it was detected, and it should show up when capturing an image.
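A quick way to check which provider Kivy actually resolved, without digging through the log (the printed class name depends on your installation):
from kivy.core.camera import Camera as CoreCamera

# Kivy picks a provider class at import time; with OpenCV available this
# prints something like <class 'kivy.core.camera.camera_opencv.CameraOpenCV'>
print(CoreCamera)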
If, however, you want to work with OpenCV directly (i.e. cap.read() and similar), then you need to write your own handler for the provider or append more options to the camera_opencv file.
I want to use PIL's ImageGrab to capture a specific window. My code below uses pywin32's FindWindow to get the handle of the window I want, then gets its size and location with GetWindowRect. I then call ImageGrab with a bbox equal to the result from GetWindowRect. However, this does not capture the whole window; a big chunk of it is missing. What did I do wrong? Here is my code and the result I get:
import win32gui
import cv2
from PIL import ImageGrab
import numpy as np

fceuxHWND = win32gui.FindWindow(None, 'FCEUX 2.1.4a: Super Mario Bros. (JU) [!]')
rect = win32gui.GetWindowRect(fceuxHWND)
screen = np.array(ImageGrab.grab(bbox=rect), dtype=np.uint8)
cv2.imshow('test', cv2.cvtColor(screen, cv2.COLOR_BGR2RGB))
cv2.waitKey(0)  # keep the window open until a key is pressed
[Screenshot of the result: a large part of the window is cut off]
Your DPI setting is at 125% and your process is not DPI aware. Call SetProcessDPIAware as follows:
import ctypes
...
ctypes.windll.user32.SetProcessDPIAware()
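On Windows 8.1 and later there is also a per-monitor variant; a sketch, assuming the shcore DLL is present on your system:
import ctypes
# 2 = PROCESS_PER_MONITOR_DPI_AWARE
ctypes.windll.shcore.SetProcessDpiAwareness(2)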
I can create an animated GIF like this:
from wand.image import Image
with Image() as im:
while i_need_to_add_more_frames():
im.sequence.append(Image(blob=get_frame_data(), format='png'))
with im.sequence[-1] as frame:
frame.delay = calculate_how_long_this_frame_should_be_visible()
im.type = 'optimize'
im.format = 'gif'
do_something_with(im.make_blob())
However, an image created like this loops indefinitely. This time I want it to loop once and then stop. I know that I could use convert's -loop parameter if I were using the command-line interface. However, I was unable to find how to do this using the Wand API.
What method should I call, or what field should I set, to make the generated GIF loop exactly once?
You'll need to use ctypes to bind the wand library to the correct C-API method.
Luckily this is straightforward.
import ctypes
from wand.image import Image
from wand.api import library
# Tell Python about the C-API method.
library.MagickSetImageIterations.argtypes = (ctypes.c_void_p, ctypes.c_size_t)
with Image() as im:
while i_need_to_add_more_frames():
im.sequence.append(Image(blob=get_frame_data(), format='png'))
with im.sequence[-1] as frame:
frame.delay = calculate_how_long_this_frame_should_be_visible()
im.type = 'optimize'
im.format = 'gif'
# Set the total iterations of the animation.
library.MagickSetImageIterations(im.wand, 1)
do_something_with(im.make_blob())
So I have a series of transparent PNGs and append them to a new Image():
with Image() as new_gif:
for img_path in input_images:
with Image(filename=img_path) as inimg:
# create temp image with transparent background to composite
with Image(width=inimg.width, height=inimg.height, background=None) as new_img:
new_img.composite(inimg, 0, 0)
new_gif.sequence.append(new_img)
new_gif.save(filename=output_path)
Unfortunately the background is not "cleared" when the new image is appended; each frame still shows the previous image as well:
But how do I clear the background? I thought I was doing exactly that by compositing into a new image up front... :| HALP!!
I see there is a similar thing with command-line ImageMagick, but wand doesn't have anything like that. So far I have to work around it with a fitting background color.
Without seeing the source images, I assume that -set dispose background is what's needed. For wand, you'll need to call the wand.api.library.MagickSetOption method.
from wand.image import Image
from wand.api import library
with Image() as new_gif:
# Tell new gif how to manage background
library.MagickSetOption(new_gif.wand, 'dispose', 'background')
for img_path in input_images:
library.MagickReadImage(new_gif.wand, img_path)
new_gif.save(filename=output_path)
Or alternatively...
You can extend wand to manage the background dispose behavior. This approach gives you the benefit of altering/generating each frame programmatically, but the downside is considerably more work with ctypes. For example:
import ctypes
from wand.image import Image
from wand.api import library
# Tell python about library method
library.MagickSetImageDispose.argtypes = [ctypes.c_void_p, # Wand
ctypes.c_int] # DisposeType
# Define enum DisposeType
BackgroundDispose = ctypes.c_int(2)
with Image() as new_gif:
for img_path in input_images:
with Image(filename=img_path) as inimg:
# create temp image with transparent background to composite
with Image(width=inimg.width, height=inimg.height, background=None) as new_img:
new_img.composite(inimg, 0, 0)
library.MagickSetImageDispose(new_img.wand, BackgroundDispose)
new_gif.sequence.append(new_img)
# Also rebuild loop and delay as ``new_gif`` never had this defined.
new_gif.save(filename=output_path)
[Resulting GIF: still needs delay correction]
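As the comment in the code says, new_gif never had a delay defined. One way to add it, reusing the per-frame pattern from the question, is to set it on each frame inside the with block before saving (10 centiseconds here is just an example value):
# Apply a uniform delay to every frame before new_gif.save(...)
for i in range(len(new_gif.sequence)):
    with new_gif.sequence[i] as frame:
        frame.delay = 10  # delay is measured in 1/100ths of a second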
Here's my problem.
I'm building a simple QR-code generator with qrcode and PyQt4. In particular, the application shows an image of the generated QR code inside a QPixmap on a QLabel. The problem is that the qrcode module, as far as I know, only allows saving the generated image to a file. I tried to dig into the inner workings of qrcode, but it's kind of convoluted.
Do you know if it's possible to expose the image by saving it to a file stream, like the one from tempfile.TemporaryFile()? Otherwise I can only insert the QR code by saving it to a real file and then loading it. For example:
import qrcode as q
from PyQt4 import QtGui
filename = 'path/to/file.png'
img = q.make('Data')
img.save(filename)
pixmap = QtGui.QPixmap(filename)
EDIT 1
I tried to use the PIL.ImageQt.ImageQt function as proposed in an answer, in the following way:
import sys
import qrcode as qr
from PyQt4 import QtGui
from PIL import ImageQt
a = QtGui.QApplication(sys.argv)
l = QtGui.QLabel()
pix = QtGui.QPixmap.fromImage(ImageQt.ImageQt(qr.make("Some test data")))
l.setPixmap(pix)
l.show()
sys.exit(a.exec_())
The result, however, is not consistent. This is the result obtained using the method above:
And this is the result using qrcode.make('Some test data').save('test2.png'):
Am I missing something? The ImageQt class is a subclass of QImage as far as I understand; in fact I get no runtime errors, but the image is corrupted.
It turns out it takes a couple of steps to do this, but you don't need to mess around with intermediate files. Note that the QR code that qrcode generates is a PIL object (Python Imaging Library; it's best to use its fork, Pillow). PIL provides the handy ImageQt module to convert your QR code into a QImage object. Next you need the method QtGui.QPixmap.fromImage to create a pixmap (this method seg faults unless Qt's initialisation has been done). The complete code would look something like this:
>>> import qrcode as q
>>> from PyQt4 import QtGui
>>> from PIL import ImageQt
>>> import sys
>>> app = QtGui.QApplication(sys.argv) # QPixmap.fromImage seg faults without this
>>> qrcode = q.make("some test data")
>>> qt_image = ImageQt.ImageQt(qrcode)
>>> pixmap = QtGui.QPixmap.fromImage(qt_image)
Obviously you can neaten that up somewhat but it works.
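If you'd rather avoid ImageQt altogether, an in-memory buffer works too. A sketch, assuming the object returned by qrcode.make accepts a file-like stream in its save method (the underlying PIL images do); as above, a QApplication must exist first:
import io
import sys
import qrcode as q
from PyQt4 import QtGui

app = QtGui.QApplication(sys.argv)
buf = io.BytesIO()
q.make('Some test data').save(buf, 'PNG')  # write the PNG into memory instead of to disk
pixmap = QtGui.QPixmap()
pixmap.loadFromData(buf.getvalue(), 'PNG')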