I'm using Python 2.7, PyGTK 2.24, and PyGST (Gstreamer).
To ensure smooth playback from one clip to another (without a blink), I combined all the clips I needed into one larger video. This lets me seek to the exact place I need in code. One of the clips is like a "fill-in", which should loop whenever one of the other clips is not playing.
However, to make my code easier and more streamlined, I want to use segments to define the various clips within the larger video. Then, at the end of each segment (I know there is a segment end event), I seek to the fill-in clip. When I need another clip, I just seek to that segment.
My question is, how exactly do I create these segments? I'm guessing that would be event_new_new_segment(), but I am not sure. Can I create multiple clips to seek between using this function? Is there another I should use? Are there any gotchas to this method of seeking in my video that I should be aware of?
Second, how do I seek to that segment?
Thank you!
Looks like only GstElements can generate NEWSEGMENT events; you can't simply attach one to an existing element. The closest thing you could do, if not using Python, would be creating a single-shot or periodic GstClockID and using gst_clock_id_wait_async until the clock time is hit. The problem is that GstClockID is not wrapped in PyGst.
I think I'm actually working on a similar problem. The solution I'm using now is gluing video streams together in real time with gnonlin. The good side: it seems to work, though I haven't had time to test it thoroughly yet. The bad side: it's poorly documented and buggy. These sources from the Flumotion project (and the comments inside!) were very, very helpful to me in understanding how to make the whole thing work.
With OpenCV I've written a script that:
captures a running application in real-time and displays it in a window
searches for a specific image within the window
if it finds the image, it draws a box around it; otherwise it does nothing
I want to extend this functionality to be able to identify multiple images on the same window at once.
As I understand it I can either:
store the images as a list and iterate over the list doing something at each iteration
use the same functionality noted above, but in a separate thread for each image.
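The first option can be sketched with a plain loop over the templates. Here `find_template` is a hypothetical stand-in for whatever matcher you end up using (e.g. cv2.matchTemplate); it is implemented as a brute-force exact match on NumPy arrays only so the sketch stays self-contained:

```python
import numpy as np

def find_template(frame, template):
    """Return (row, col) of the first exact match of template in frame,
    or None if it is not present. A toy stand-in for a real matcher."""
    fh, fw = frame.shape
    th, tw = template.shape
    for r in range(fh - th + 1):
        for c in range(fw - tw + 1):
            if np.array_equal(frame[r:r + th, c:c + tw], template):
                return (r, c)
    return None

# iterate over a list of templates, doing something at each iteration
templates = [np.array([[1, 2]]), np.array([[9]])]
frame = np.array([[0, 1, 2],
                  [3, 4, 5]])
hits = [find_template(frame, t) for t in templates]
# first template is found at row 0, col 1; second is absent
```

For a handful of small templates on a single window, a loop like this is the simpler approach; threads mostly pay off when each search is independently slow.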
What do I need to take into consideration when deciding on the approach? Is there a best practice to follow or things I need to be aware of before deciding on a course?
I've found a number of topics on the subject, but they're all very old and mostly deal with sensors or performance, or are in other languages. It's a 2D image I'm detecting; I don't think performance is going to be an issue.
When is it appropriate to multi-thread?
When to thread and when to WAR?
Python: Multi-threading or Loops?
When multi-threading is a bad idea?
To multi-thread or not to multi-thread!
I'm trying to automate mouse movements in a way that would look somewhat human-like. The reason I need to do this is that I'm currently working on promotional videos for a web app and I want to show how it's used. I don't want to just record my own movements because what I mean by "somewhat" human-like is that I want it to feel kind of organic, but I still want it to be perfect.
I've been using pyautogui and time for now and it works great. I can get the cursor to move at whatever speed I want. I can make it slow down with tweening functions. It looks almost human, but it's all in straight lines. I'd like to have the cursor move in curves.
Is that possible? Whether that means using pyautogui with another module or other modules entirely, that'd be fine. Also, I've found only one question here about this subject, and I'm not sure the poster had the same objective as I do. They were talking about a bezier curve. I'm not particularly familiar with those, so ideally, if it's possible of course, I'd like a simpler solution.
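For what it's worth, a quadratic bezier curve is simpler than it sounds: it only needs a start point, an end point, and one control point to bend toward. A minimal sketch of sampling such a curve in pure Python (the idea would then be to feed each sampled point to pyautogui.moveTo, which is left out here so the sketch stays self-contained):

```python
def quad_bezier(p0, p1, p2, steps=50):
    """Sample steps+1 points along a quadratic Bezier curve that starts
    at p0, ends at p2, and bends toward the control point p1."""
    points = []
    for i in range(steps + 1):
        t = i / steps
        x = (1 - t) ** 2 * p0[0] + 2 * (1 - t) * t * p1[0] + t ** 2 * p2[0]
        y = (1 - t) ** 2 * p0[1] + 2 * (1 - t) * t * p1[1] + t ** 2 * p2[1]
        points.append((x, y))
    return points

# a curved path from the origin to (300, 300), bowed out toward (300, 0);
# pyautogui.moveTo(x, y) on each point would trace it on screen
path = quad_bezier((0, 0), (300, 0), (300, 300))
```

Randomizing the control point slightly per movement is one way to get the "organic but still precise" feel: the endpoints stay exact while the path in between varies.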
I would like to draw an image into another image with Wand (an ImageMagick binding for Python). The source image should totally replace the destination image (at given position).
I can use:
destinationImage.composite_channel(channel='all_channels', image=sourceImage, operator='replace', left=leftPosition, top=topPosition)
But I was wondering if there is a simple or faster solution.
But I was wondering if there is a simple or faster solution.
Not really. In the scope of Wand, this would be one of the fastest methods. As for simplicity, you're already doing everything in one line of code. Perhaps you can shorten it with Image.composite.
destinationImage.composite(sourceImage, leftPosition, topPosition)
But you're now compromising the readability of your current solution. Having the full command with the channel='all_channels' and operator='replace' kwargs will help you in the long run. Think about revisiting the code in a year.
destinationImage.composite(sourceImage, leftPosition, topPosition)
# versus
destinationImage.composite_channel(channel='all_channels',
                                   image=sourceImage,
                                   operator='replace',
                                   left=leftPosition,
                                   top=topPosition)
Right away, without hitting the API docs, you know the second option is replacing destination with a source image across all channels. Those facts are hidden, or assumed, in the first variation.
I am currently watching a video from 27C3 and I would like to filter out the applause, as it is very loud. Is this possible? I have heard something like this was done for vuvuzelas.
I use Ubuntu. If this filter would work via ffmpeg this would be great. If it is written in Python it would also be ok.
Here is an example: http://www.youtube.com/watch?v=TIViQuCX7XM#t=5m4s
No, this isn't possible. The sound of applause covers a very wide band. Filtering vuvuzelas was somewhat possible because they were all close to the same pitch. Applause is all over the spectrum.
If you want to experiment, pull up this video and play with your EQ in VLC. Even with the wide bands of a 7-band EQ, you'll be dropping quite a few to cut the audience, thus leaving you with nothing.
As Brad said, this is not possible with a static frequency filter. However, if you have some knowledge of signal theory, and a fair amount of free time, you could write an active noise control system. See Google Scholar for some examples of such a filter (like this one).
You could use a dynamic range compressor. This will not filter out the applause, but at least it will smooth out the loudness. You can give it a threshold so that it doesn't affect sound below that threshold.
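A minimal sketch of that idea, assuming the audio has already been decoded to a NumPy array of float samples in [-1, 1] (the threshold and ratio values here are made up for illustration, not tuned for this video):

```python
import numpy as np

def compress(samples, threshold=0.5, ratio=4.0):
    """Simple dynamic range compression: samples below the threshold pass
    through untouched; the part of a sample above the threshold is
    divided by the ratio, taming loud peaks like applause."""
    mag = np.abs(samples)
    over = mag > threshold
    out = samples.copy()
    out[over] = np.sign(samples[over]) * (threshold + (mag[over] - threshold) / ratio)
    return out

quiet_speech_and_loud_clap = np.array([0.2, 0.9, -0.9])
smoothed = compress(quiet_speech_and_loud_clap)
# the 0.2 sample is untouched; the 0.9 peaks are reduced to 0.6
```

In practice you would use a real compressor (e.g. one of ffmpeg's audio filters) rather than rolling your own, since real ones also smooth the gain changes over time (attack/release) instead of clamping sample by sample.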
Background
I have been asked by a client to create a picture of the world which has animated arrows/rays that come from one part of the world to another.
The rays will be randomized, will represent a transaction, will fade out after they happen and will increase in frequency as time goes on. The rays will start in one country's boundary and end in another's. As each animated transaction happens a continuously updating sum of the amounts of all the transactions will be shown at the bottom of the image. The amounts of the individual transactions will be randomized. There will also be a year showing on the image that will increment every n seconds.
The randomization, summation and incrementing are not a problem for me, but I am at a loss as to how to approach the animation of the arrows/rays.
My question is what is the best way to do this? What frameworks/libraries are best suited for this job?
I am most fluent in python so python suggestions are most easy for me, but I am open to any elegant way to do this.
The client will present this as a slide in a presentation on a Windows machine.
The client will present this as a slide in a presentation on a Windows machine
I think this is the key to your answer. Before going to a 3d implementation and writing all the code in the world to create this feature, you need to look at the presentation software. Chances are, your options will boil down to two things:
Animated Gif
Custom Presentation Scripts
Obviously, an animated gif is not ideal because it loops when it finishes, and making it last a long time would make for a large file.
Custom Presentation Scripts would probably be the other way to allow him to bring it up in a presentation without running any side-programs, or doing anything strange. I'm not sure which presentation application is the target, but this could be valuable information.
He sounds like he's non-technical and is requesting something he doesn't realize will be difficult. I think you should come up with some options, explain the difficulty of implementing them, and suggest another solution that falls into the 'bang for your buck' range.
If you are adventurous use OpenGL :)
You can draw bezier curves in 3D space on top of a textured plane (an earth map), specify a thickness for them, and draw a point (a small cone) at the end. It's easy and it looks nice. The problem is learning the basics of OpenGL if you haven't used it before, but that would be fun and probably useful if you're into programming graphics.
You can use OpenGL from Python with either PyOpenGL or pyglet.
If you make the animation this way, you can capture it to an AVI file (using Camtasia or something similar) that can be put onto a presentation slide.
It depends largely on the effort you want to expend on this, but the outline of an easy way would be to load an image of an arrow and use a drawing library to color and rotate it in the direction you want it to point (or draw it using shapes/curves).
Finally, to actually animate it, interpolate between the coordinates based on time.
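Time-based interpolation here just means turning elapsed time into a fraction between 0 and 1 and blending the start and end coordinates with it. A minimal sketch (the country coordinates are made up for illustration):

```python
def lerp(start, end, t):
    """Linearly interpolate between two (x, y) points; t runs from 0.0
    (arrow at its origin) to 1.0 (arrow at its destination)."""
    return (start[0] + (end[0] - start[0]) * t,
            start[1] + (end[1] - start[1]) * t)

# position of the arrow head halfway through its flight, for a
# transaction drawn from pixel (10, 20) to pixel (110, 220)
pos = lerp((10, 20), (110, 220), 0.5)
```

Each frame you would compute t from the transaction's start time and duration, draw the arrow at `lerp(origin, destination, t)`, and fade its opacity out as t approaches 1.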
If it's just for a presentation, though, I would use Macromedia Flash or a similar animation program (it would do the same as above, but you don't need to program anything).