I have an animated GIF played by GIFAnimationCtrl() in my Python code. Since the GIF is triggered by user events, I need it to play only once.
I tried exporting the GIF without the looping attribute, and checked whether the Play() function has any options for this, but it doesn't. Any other function or library I could use with wxPython is also welcome.
Here is the image I'm trying to use.
After more research I found a workaround, which is to put a 240-second delay (the maximum Photoshop allows) on the last frame of the animated GIF, which does the trick.
Still looking for better approaches.
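One possibly cleaner approach is to stop the control yourself after one pass through the frames. A minimal sketch, assuming the Phoenix wx.adv.Animation / wx.adv.AnimationCtrl API (the classic GIFAnimationCtrl exposes similar Play()/Stop() methods):

import wx
import wx.adv

class GifPanel(wx.Panel):
    def __init__(self, parent, gif_path):
        super().__init__(parent)
        self.anim = wx.adv.Animation(gif_path)
        self.ctrl = wx.adv.AnimationCtrl(self, -1, self.anim)

    def play_once(self):
        # Sum the per-frame delays (milliseconds) to know how long one pass takes,
        # then stop the control once that time has elapsed.
        total_ms = sum(self.anim.GetDelay(i) for i in range(self.anim.GetFrameCount()))
        self.ctrl.Play()
        wx.CallLater(total_ms, self.ctrl.Stop)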
I would like to show an image with a transparent background to indicate something when a key combination is pressed.
Let's say when I press Ctrl+F3, it triggers a Python script. Is there any way I can make that happen?
What Python library can I use to show an image without a window border and background?
I have figured out how to trigger the script on key press. How do I deal with the display (imshow) part?
Thank you.
"show an image without window border and background"
This sounds like a task for a GUI library. There are many available, but you would need to test them to find one that can do it. The first feature is generally known as a frameless or borderless window. tkinter, which ships with Python, can work this way; see for example the tutorialspoint.com tutorial, though I don't know how it will handle the alpha channel of your image.
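A minimal sketch of the frameless-window idea in tkinter; the image path is a placeholder, and the -transparentcolor trick is Windows-only, so treat the transparency part as an untested assumption:

import tkinter as tk

root = tk.Tk()
root.overrideredirect(True)          # frameless / borderless window
root.attributes("-topmost", True)    # keep it above other windows
root.configure(bg="magenta")         # key colour we will try to make transparent
try:
    root.wm_attributes("-transparentcolor", "magenta")   # Windows-only
except tk.TclError:
    pass                             # other platforms need a different trick

img = tk.PhotoImage(file="overlay.png")   # placeholder image path
tk.Label(root, image=img, bg="magenta", bd=0).pack()
root.after(2000, root.destroy)            # auto-close after 2 seconds
root.mainloop()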
I'm working with OpenCV 3, Python 3 and PyQt5. I want to make a simple GUI in which a new window opens to play a video along with some other widgets when a button is clicked on the main window. I've used QPixmap for displaying images in the past, so I create a label and try to set the frames into the pixmap in a loop. The loop works fine, but I am unable to get the video/new window to display.
The loop I want to execute in the new window looks something like this:
def setupUi(self):
    vid = cv2.VideoCapture('file')
    ret, frame = vid.read()
    while ret:
        Qimg = convert(frame)
        self.label.setPixmap(Qimg)
        self.label.update()
        ret, frame = vid.read()
convert() is a function I've written myself that converts the cv frame to QImage type to be set into the pixmap.
I'm only a beginner with PyQt, so I don't know what I am doing wrong. I've read about using signals, threads for the new window, and QApplication.processEvents(), but I don't know how these work or how they fit into my problem.
It would be helpful if someone could set me in the right direction and also point out some resources to create good interfaces for my apps using OpenCV and python.
The reason that this isn't running is that your while loop is blocking Qt's event loop. Basically, you're stuck in the while loop and you never give control back to Qt to redraw the screen.
Your update() call isn't doing what you think it is; it's updating the data stored by the object, but this change does not show up until the program reenters the event loop.
There are probably multiple ways of handling this, but I see two good options, the first being easier to implement:
1) Call QApplication.processEvents() in every iteration of your while loop. This forces Qt to update the GUI and is much simpler to implement than 2).
2) Move the function to a separate class and use QThread combined with moveToThread() to update the data, and communicate with the GUI thread using signals/slots. This requires restructuring your code a bit, but it may be good for your code overall: right now the code that generates the data presumably lives in your MainWindow class, while the two should be kept separate according to Qt's Model-View design pattern. That is not very important for a small one-off app, but it will help keep your code base intelligible as it grows. A rough sketch of this approach follows.
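A minimal sketch of option 2, with class and signal names of my own invention; the BGR-to-QImage conversion is done inline instead of through your convert() helper:

import cv2
from PyQt5.QtCore import QObject, QThread, pyqtSignal
from PyQt5.QtGui import QImage, QPixmap

class FrameWorker(QObject):
    frameReady = pyqtSignal(QImage)
    finished = pyqtSignal()

    def __init__(self, path):
        super().__init__()
        self.path = path

    def run(self):
        vid = cv2.VideoCapture(self.path)
        ret, frame = vid.read()
        while ret:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            h, w, ch = rgb.shape
            # copy() so the QImage does not reference a buffer that is about to change
            img = QImage(rgb.data, w, h, ch * w, QImage.Format_RGB888).copy()
            self.frameReady.emit(img)
            ret, frame = vid.read()
        vid.release()
        self.finished.emit()

# Wiring it up inside the window class (sketch):
#     self.thread = QThread()
#     self.worker = FrameWorker('file')
#     self.worker.moveToThread(self.thread)
#     self.thread.started.connect(self.worker.run)
#     self.worker.frameReady.connect(
#         lambda img: self.label.setPixmap(QPixmap.fromImage(img)))
#     self.worker.finished.connect(self.thread.quit)
#     self.thread.start()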
I wanted to use Python to create animations (video) containing text and simple moving geometric objects (lines, rectangles, circles and so on).
In the book "Python 2.6 Graphics Cookbook" I found examples using the Tkinter library. At first, it looked like what I needed. I was able to create a simple animation, but then I realized that in the end I want a file containing my animation (in GIF or MP4 format). However, what I have is a GUI application running on my computer and showing me my animation.
Is there a simple way to save the animation that I see in my GUI in a file?
There is no simple way.
The question Programmatically generate video or animated GIF in Python? has answers related strictly to creating these files with Python (i.e. it doesn't mention tkinter).
The question How can I convert canvas content to an image? has answers related to saving the canvas as an image.
You might be able to take the best answers from those two questions and combine them into a single program.
I've accomplished this before, but not in a particularly pretty way.
TL;DR: save your canvas as an image at each step of the iteration, then use external tools to convert the images into a GIF.
This doesn't require any external dependencies or new packages except having ImageMagick installed on your machine.
Save the image
I assume that you're using a Tkinter canvas object. If you're posting actual images to the tk widgets, it will probably be much easier to save them; the tk canvas doesn't have a built-in save function except as PostScript. PostScript might actually be fine for making the animation (a sketch of that route follows the list below), but otherwise you can:
Concurrently draw in PIL and save the PIL image: https://www.daniweb.com/software-development/python/code/216929/saving-a-tkinter-canvas-drawing-python
Take a screenshot at every step, maybe using ImageGrab: http://effbot.org/imagingbook/imagegrab.htm
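For the PostScript route, a minimal sketch; the bouncing-oval animation and the file names are just stand-ins for whatever your canvas is drawing:

import tkinter as tk

root = tk.Tk()
canvas = tk.Canvas(root, width=400, height=300, bg="white")
canvas.pack()
ball = canvas.create_oval(10, 10, 50, 50, fill="red")

for i in range(60):
    canvas.move(ball, 5, 2)       # advance the animation one step
    canvas.update()               # force a redraw before capturing
    # zero-padded names keep alphabetical and chronological order aligned
    canvas.postscript(file="frame%04d.ps" % i, colormode="color")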
Converting the images to an animation
Once the images are saved, I used ImageMagick to dump them into either a GIF or an MPG. You can run the command right from Python (see "How to run imagemagick in the background from python" or something similar). It also means that the conversion implicitly runs as a separate process, so it won't halt your program while it happens. You can query the file to find out when the process is done.
The command
convert ../location/*.ps -quality 100 ../location/animation.gif
should do the trick.
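To launch that from Python instead of a shell, one possible sketch (it assumes ImageMagick's convert is on your PATH, and it expands the glob itself so no shell is needed):

import glob
import subprocess

frames = sorted(glob.glob("../location/*.ps"))   # zero-padded names sort correctly
proc = subprocess.Popen(
    ["convert", *frames, "-quality", "100", "../location/animation.gif"])
# proc.poll() returns None while the conversion is still running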
Quirks:
There are some small details, and the process isn't perfect. ImageMagick reads files in order, so you'll need to name the files so that alphabetical and chronological order line up. Beware that the name
name9.ps
is alphabetically greater than
name10.ps
from ImageMagick's point of view.
If you don't have ImageMagick, you can install it easily on Linux and Mac (it's a super useful command-line tool to have), and Cygwin comes with it on Windows. If you're worried about portability... well... PIL isn't part of the standard library either.
There is a way of doing that with the screen-recording method; this was explained in another question: "how can you record your screen in a gif?".
LICEcap: https://github.com/lepht/licecap
They say that it's free software for Mac (OS X) and Windows.
You could look at Panda3D, but it might be a little overkill for what you need.
You could use Blender 3D too, but I'm not really sure how it works. Someone more experienced than me could tell you more about it.
I am using the Pygame module in Python to take pictures with my webcam. The problem is that I would like to export a video file (I don't care what type) to use elsewhere. Since Pygame cannot export video directly, I guess there are two ways to do it:
Somehow stitch the photos Pygame creates into a video. (my preferred method)
Use an external library.
I only need 4 frames per second, and I don't care about the picture quality.
How can I make a video with Python / Pygame?
I have the same problem and am searching for a solution.
I found this
This seems to work well, though I haven't tried it yet.
Hope this helps.
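For what it's worth, here is a minimal sketch of the stitching idea using OpenCV's VideoWriter; cv2 and the frame file names are my own assumptions, not something the link above necessarily uses:

import glob
import cv2

# Hypothetical naming: frames saved with pygame.image.save(surface, "shot0001.png"), ...
frames = sorted(glob.glob("shot*.png"))
first = cv2.imread(frames[0])
height, width = first.shape[:2]

fourcc = cv2.VideoWriter_fourcc(*"mp4v")
out = cv2.VideoWriter("webcam.mp4", fourcc, 4.0, (width, height))   # 4 fps, as requested
for name in frames:
    out.write(cv2.imread(name))
out.release()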
I'm looking for a Python framework that will enable me to play video as well as draw on that video (for labeling purposes).
I've tried Pyglet, but this doesn't seem to work particularly well - when drawing on an existing video, there is flicker (even with double buffering and all of that good stuff), and there doesn't seem to be a way to get the frame index in the video during the per-frame callback (only elapsed time since the last frame).
Try a Python wrapper for OpenCV such as ctypes-opencv. The C API reference is here, and the wrapper is very close (see docstrings for any changes).
I have used it to draw on video without any flicker, so you should have no problems with that.
A rough outline of calls you need:
Load video with cvCreateFileCapture, load font with cvFont.
Grab frame with cvQueryFrame, increment your frame counter.
Draw on the frame with cvPutText, cvEllipse, etc.
Display to user with cvShowImage.
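The same outline sketched with the modern cv2 module instead of ctypes-opencv (the names differ but map one-to-one; the input file name is a placeholder):

import cv2

cap = cv2.VideoCapture("input.avi")
frame_idx = 0
while True:
    ret, frame = cap.read()           # grab the next frame
    if not ret:
        break
    frame_idx += 1
    # draw labels on the frame
    cv2.putText(frame, "frame %d" % frame_idx, (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.ellipse(frame, (200, 150), (60, 30), 0, 0, 360, (0, 0, 255), 2)
    cv2.imshow("labeling", frame)
    if cv2.waitKey(30) & 0xFF == 27:  # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()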
Qt (PyQt) has Phonon, which might help out. PyQt is available as GPL or payware. (Qt has LGPL too, but the PyQt wrappers don't)
Try the Python bindings for GStreamer.