How do I convert this ffmpeg command to be used in ffmpy? - python

I have a gif that I am taking apart frame by frame in order to write text onto it. I used ffmpeg to put the frames (saved as individual .png files) back together and it worked nicely. This is the code I used:
ffmpeg -f image2 -i newimg%d.png out.gif
But now I want to use the python wrapper ffmpy. Following the docs, I tried a variety of things but I keep getting syntax errors.
Here is one instance of my efforts:
ff = ffmpy(FFmpeg(inputs = {ffmpeg -f image2 -i "newimg%d.png"}, outputs = {"gif_with_text.gif"}))
ff.run()
In this attempt, the syntax error points to the "2" in image2. Could someone help me out? Note - I'm new to python, let alone ffmpeg and ffmpy.
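For reference, a minimal sketch of what a working call might look like, going by ffmpy's documented inputs/outputs mapping (option strings go in the dict values, file names in the keys):
from ffmpy import FFmpeg

# Each input/output file name maps to its option string (or None).
ff = FFmpeg(
    inputs={'newimg%d.png': '-f image2'},
    outputs={'gif_with_text.gif': None},
)
ff.run()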

Related

Load video from url in moviepy and save frame

from moviepy.editor import *
clip = ( VideoFileClip("https://filelink/file.mp4"))
clip.save_frame("frame.png", t = 3)
I am able to load the video using MoviePy, but it loads the complete video and then saves the frame. Is it possible to load only the first four seconds instead of the whole video, and then save the frame at the 3-second mark?
Unless I missed something, it's not possible using MoviePy.
You may use ffmpeg-python instead.
Here is a code sample using ffmpeg-python:
import ffmpeg
stream_url = "https://file-examples-com.github.io/uploads/2017/04/file_example_MP4_480_1_5MG.mp4"
# Input seeking example: https://trac.ffmpeg.org/wiki/Seeking
(
    ffmpeg
    .input(stream_url, ss='00:00:03')  # Seek to the third second
    .output("frame.png", pix_fmt='rgb24', frames='1')  # Select PNG codec in RGB color space and one frame
    .overwrite_output()
    .run()
)
Notes:
The solution may not work for all MP4 URLs, because the MP4 format is not very "web friendly" - I believe the moov atom must be located at the beginning of the file.
You may need to install the FFmpeg command-line tool manually (although it is supposed to be installed together with MoviePy).
Result frame: (image omitted)

Receiving multiple files from ffmpeg via subprocesses.PIPE

I am using ffmpeg to convert a video into images. These images are then processed by my Python program. Originally I used ffmpeg to first save the images to disk, then reading them one by one with Python.
This works fine, but in an effort to speed up the program I am trying to skip the storage step and only work with the images in memory.
I use the following ffmpeg and Python subprocess code to pipe the output from ffmpeg to Python:
command = "ffmpeg.exe -i ADD\\sg1-original.mp4 -r 1 -f image2pipe pipe:1"
pipe = subprocess.Popen(ffmpeg-command, stdout = subprocess.PIPE, stderr = subprocess.PIPE)
image = Image.new(pipe.communicate()[0])
The image variable can then be used by my program. The problem is that if I send more than one image from ffmpeg, all the data is stored in this variable. I need a way to separate the images. The only way I can think of is splitting on the JPEG end-of-image marker (0xFF, 0xD9). This works, but is unreliable.
What have I missed regarding piping files with subprocess? Is there a way to read only one file at a time from the pipe?
One solution to this would be to use the ppm format, which has a predictable size:
ffmpeg -i movie.mp4 -r 1 -f image2pipe -vcodec ppm pipe:1
The format is specified here: http://netpbm.sourceforge.net/doc/ppm.html
And looks something like this:
P6 # magic number
640 480 # width height
255 # colors per channel
<data>
Where <data> will be exactly 640 * 480 * 3 bytes (assuming there are 255 or fewer colors per channel, i.e. one byte per sample).
Note that this is an uncompressed format, so it may potentially take up quite a bit of memory if you read it all at once. You may consider switching your algorithm to:
pipe = subprocess.Popen(ffmpeg_command, stdout=subprocess.PIPE, stderr=sys.stderr)
while True:
    chunk = pipe.stdout.read(4096)
    if not chunk:
        break
    # ... process chunk of data ...
Note that the subprocess' stderr is set to the current process' stderr; this is important because otherwise the stderr buffer could fill up (as nothing would be reading it) and cause a deadlock.
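Building on the PPM idea, here is a minimal sketch of reading one frame at a time from the pipe (it assumes ffmpeg emits plain P6 headers with whitespace-separated tokens and no comment lines, and movie.mp4 stands in for your input file):
import subprocess
import sys

def read_token(stream):
    # Skip leading whitespace, then collect one whitespace-delimited token.
    token = b""
    while True:
        c = stream.read(1)
        if not c:
            return token
        if c.isspace():
            if token:
                return token
            continue
        token += c

def read_ppm_frame(stream):
    # Return the raw RGB bytes of one frame, or None at end of stream.
    if read_token(stream) != b"P6":
        return None
    width = int(read_token(stream))
    height = int(read_token(stream))
    read_token(stream)  # maxval; 255 means one byte per sample
    return stream.read(width * height * 3)

pipe = subprocess.Popen(
    ["ffmpeg", "-i", "movie.mp4", "-r", "1",
     "-f", "image2pipe", "-vcodec", "ppm", "pipe:1"],
    stdout=subprocess.PIPE, stderr=sys.stderr)

while True:
    frame = read_ppm_frame(pipe.stdout)
    if not frame:
        break
    # ... process one frame's worth of width*height*3 RGB bytes ...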

search for latest image inside a folder, and save it as a png file with python

I am new to Python. I want to do some image analysis using Python on a Raspberry Pi.
I am streaming images using motion to a folder. In that folder, I want to find the latest image at any point in time. Then I would like to apply my image analysis command on that one image and export the results to a text file. I have achieved most of it, but I am stuck on one small bit.
import subprocess
import os
import glob
newest = max(glob.iglob('*.[p][n][g]'), key=os.path.getctime)
print(newest)
cmd = "sudo ./deepbelief lena.png > try5.txt"
proc = subprocess.Popen(cmd, shell=True)
In the above, deepbelief is my image analysis program. The problem now is how to feed my newest image as input to the ./deepbelief command.
Also, if possible, can I save the newest image as a PNG file for later use?
Many thanks in advance!
You can use string formatting:
cmd = "sudo ./deepbelief {} > try5.txt".format(newest)
The {} will be replaced with whatever the value of newest is, so for example if newest is foo.png, your command will be "sudo ./deepbelief foo.png > try5.txt", and so on.
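Putting it together, a minimal sketch that also keeps a copy of the newest image for later use (latest_snapshot.png is just a hypothetical name):
import glob
import os
import shutil
import subprocess

newest = max(glob.iglob('*.png'), key=os.path.getctime)

# Keep a copy of the newest image for later use.
shutil.copy(newest, 'latest_snapshot.png')

cmd = "sudo ./deepbelief {} > try5.txt".format(newest)
proc = subprocess.Popen(cmd, shell=True)
proc.wait()  # wait for the analysis to finish before reading try5.txt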

OpenCV + python -- grab frames from a video file

I can't seem to capture frames from a file using OpenCV -- I've compiled from source on Ubuntu with all the necessary prereqs according to: http://opencv.willowgarage.com/wiki/InstallGuide%20%3A%20Debian
#!/usr/bin/env python
import cv
import sys
files = sys.argv[1:]
for f in files:
    capture = cv.CaptureFromFile(f)
    print capture
    print cv.GetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_WIDTH)
    print cv.GetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_HEIGHT)
    for i in xrange(10000):
        frame = cv.QueryFrame(capture)
        if frame:
            print frame
Output:
ubuntu@local:~/opencv$ ./test.py bbb.avi
<Capture 0xa37b130>
0.0
0.0
The frames are always None...
I've transcoded a video file to i420 format using:
mencoder $1 -nosound -ovc raw -vf format=i420 -o $2
Any ideas?
You don't have the gstreamer-ffmpeg, gstreamer-python, or gstreamer-python-devel packages installed. I installed all three of them, and the exact same problem was resolved.
I'm using OpenCV 2.2.0, compiled on Ubuntu from source. I can confirm that the source code you provided works as expected. So the problem is somewhere else.
I couldn't reproduce your problem using mencoder (installing it is a bit of a problem on my machine) so I used ffmpeg to wrap a raw video in the AVI container:
ffmpeg -s cif -i ~/local/sample-video/foreman.yuv -vcodec copy foreman.avi
(foreman.yuv is a standard CIF image sequence you can find on the net if you look around).
Running the AVI from ffmpeg through your source gives this:
misha@misha-desktop:~/Desktop/stackoverflow$ python ocv_video.py foreman.avi
<Capture 0xa71120>
352.0
288.0
<iplimage(nChannels=3 width=352 height=288 widthStep=1056 )>
<iplimage(nChannels=3 width=352 height=288 widthStep=1056 )>
...
So things work as expected. What you should check:
Do you get any errors on standard output/standard error? OpenCV uses ffmpeg libraries to read video files, so be on the lookout for informative messages. Here's what happens if you try to play a RAW video file without a container (sounds similar to your problem):
misha@misha-desktop:~/Desktop/stackoverflow$ python ocv_video.py foreman.yuv
[IMGUTILS @ 0x7fff37c8d040] Picture size 0x0 is invalid
[IMGUTILS @ 0x7fff37c8cf20] Picture size 0x0 is invalid
[rawvideo @ 0x19e65c0] Could not find codec parameters (Video: rawvideo, yuv420p)
[rawvideo @ 0x19e65c0] Estimating duration from bitrate, this may be inaccurate
GStreamer Plugin: Embedded video playback halted; module decodebin20 reported: Your GStreamer installation is missing a plug-in.
<Capture 0x19e3130>
0.0
0.0
Make sure your AVI file actually contains the information required to play back the video. At a minimum, this should be the frame dimensions. RAW video typically doesn't contain any information besides the actual pixel data, so knowing the frame dimensions and FPS is required. You can guess the FPS wrong and still get a viewable video, but if you get the dimensions wrong, the video will be unviewable.
Make sure the AVI file you're trying to open is actually playable. Try ffplay file.avi -- if that fails, then the problem is likely to be with the file. Try using ffmpeg to transcode instead of mencoder.
Make sure you can play other videos, using the same method as above. If you can't, then it's likely that your ffmpeg install is broken.
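If you have the newer cv2 bindings available (OpenCV 3 or later), a quick sanity check along these lines is a useful sketch for confirming whether OpenCV can open the file at all:
import cv2

cap = cv2.VideoCapture("bbb.avi")
print(cap.isOpened())  # False usually points at a codec/container problem
print(cap.get(cv2.CAP_PROP_FRAME_WIDTH), cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
ret, frame = cap.read()
print(ret)  # True if a frame was decoded
cap.release()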

How to unit test a Python function that draws PDF graphics?

I'm writing a CAD application that outputs PDF files using the Cairo graphics library. A lot of the unit testing does not require actually generating the PDF files, such as computing the expected bounding boxes of the objects. However, I want to make sure that the generated PDF files "look" correct after I change the code. Is there an automated way to do this? How can I automate as much as possible? Do I need to visually inspect each generated PDF? How can I solve this problem without pulling my hair out?
(See also update below!)
I'm doing the same thing using a shell script on Linux that wraps
ImageMagick's compare command
the pdftk utility
Ghostscript (optionally)
(It would be rather easy to port this to a .bat Batch file for DOS/Windows.)
I have a few reference PDFs created by my application which are "known good". Newly generated PDFs after code changes are compared to these reference PDFs. The comparison is done pixel by pixel and is saved as a new PDF. In this PDF, all unchanged pixels are painted in white, while all differing pixels are painted in red.
Here are the building blocks:
pdftk
Use this command to split multipage PDF files into multiple singlepage PDFs:
pdftk reference.pdf burst output somewhere/reference_page_%03d.pdf
pdftk comparison.pdf burst output somewhere/comparison_page_%03d.pdf
compare
Use this command to create a "diff" PDF page for each of the pages:
compare \
-verbose \
-debug coder -log "%u %m:%l %e" \
somewhere/reference_page_001.pdf \
somewhere/comparison_page_001.pdf \
-compose src \
somewhereelse/reference_diff_page_001.pdf
Ghostscript
Because of automatically inserted metadata (such as the current date and time), PDF output does not work well for MD5-hash-based file comparisons.
If you want to automatically discover all cases which consist of purely white pages, you could also convert to a metadata-free bitmap format using the bmp256 output device. You can do that for the original PDFs (reference and comparison), or for the diff-PDF pages:
gs \
-o reference_diff_page_001.bmp \
-r72 \
-g595x842 \
-sDEVICE=bmp256 \
reference_diff_page_001.pdf
md5sum reference_diff_page_001.bmp
If the MD5sum is what you expect for an all-white page of 595x842 PostScript points, then your unit test passed.
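Wrapped into a Python unit test, the check might look like this minimal sketch (EXPECTED_MD5 is a placeholder you would compute once from a known-good all-white page):
import hashlib
import subprocess
import unittest

# Placeholder: compute this once from a known-good all-white diff bitmap.
EXPECTED_MD5 = "<md5 of an all-white 595x842 bmp256 page>"

class PdfDiffTest(unittest.TestCase):
    def test_diff_page_is_all_white(self):
        subprocess.check_call([
            "gs", "-o", "reference_diff_page_001.bmp",
            "-r72", "-g595x842", "-sDEVICE=bmp256",
            "reference_diff_page_001.pdf",
        ])
        with open("reference_diff_page_001.bmp", "rb") as f:
            digest = hashlib.md5(f.read()).hexdigest()
        self.assertEqual(digest, EXPECTED_MD5)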
Update:
I don't know why I didn't previously think of generating a histogram output from the ImageMagick compare...
The following is a command pipeline chaining 2 different commands:
the first one is the same compare command as above, which generates the 'white pixels are equal, red pixels are different' output, only it writes ImageMagick's internal miff format to stdout instead of to a file.
the second one uses convert to read stdin, generate a histogram and output the result in text form. There will be two lines:
one indicating the number of white pixels
the other one indicating the number of red pixels.
Here it goes:
compare \
reference.pdf \
current.pdf \
-compose src \
miff:- \
| \
convert \
- \
-define histogram:unique-colors=true \
-format %c \
histogram:info:-
Sample output:
56934: (61937,0,7710,52428) #F1F100001E1ECCCC srgba(241,0,30,0.8)
444056: (65535,65535,65535,52428) #FFFFFFFFFFFFCCCC srgba(255,255,255,0.8)
(Sample output was generated from a pair of example reference.pdf and current.pdf files.)
I think this type of output is really well suited for automatic unit testing. If you evaluate the two numbers, you can easily compute the "red pixel" percentage and you could even decide to return PASSED or FAILED based on a certain threshold (if you don't necessarily need "zero red" for some reason).
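A small sketch of evaluating that histogram output in Python (the regular expression and the 1% threshold are illustrative assumptions, not part of the original pipeline):
import re

def red_pixel_fraction(histogram_text):
    # Parse lines like "56934: (...) #F1F1... srgba(241,0,30,0.8)".
    total = 0
    white = 0
    for line in histogram_text.splitlines():
        m = re.match(r'\s*(\d+):.*srgba?\((\d+),(\d+),(\d+)', line)
        if not m:
            continue
        count = int(m.group(1))
        total += count
        if (m.group(2), m.group(3), m.group(4)) == ('255', '255', '255'):
            white += count
    return (total - white) / float(total) if total else 0.0

# Example: fail the test if more than 1% of the pixels differ
# (histogram.txt is a hypothetical capture of the pipeline's output).
assert red_pixel_fraction(open('histogram.txt').read()) <= 0.01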
You could render the PDF to a bitmap (or at least a losslessly compressed) image, and then compare the image generated by each test with a reference image of what it's supposed to look like. Any differences would be flagged as an error for the test.
The first idea that pops into my head is to use a diff utility. These are generally used to compare the text of documents, but they might also be able to compare the layout of a PDF. Using one, you can compare the expected output with the output actually produced.
The first result Google gives me is a commercial tool, but there might be free/open-source alternatives.
I would try this using Xpresser (https://wiki.ubuntu.com/Xpresser). It can match images to similar images, not only exact copies, which is the problem in these cases.
I don't know if Xpresser is being actively developed, or if it can be used with standalone image files (I think so) -- anyway, it takes its ideas from the Sikuli project (which is Java with a Jython front end, while Xpresser is Python).
I wrote a tool in Python to validate PDFs for my employer's documentation. It has the capability to compare individual pages to master images. I used a library I found called swftools to export the page to PNG, then used the Python Imaging Library to compare it with the master.
The relevant code looks something like this (this won't run as there are some dependencies on other parts of the script, but you should get the idea):
# exporting
gfxpdf = gfx.open("pdf", self.pdfpath)
if os.path.isfile(pngPath):
    os.remove(pngPath)
page = gfxpdf.getPage(pagenum)
img = gfx.ImageList()
img.startpage(page.width, page.height)
page.render(img)
img.endpage()
img.save(pngPath)
return os.path.isfile(pngPath)
# comparing
outPng = os.path.join(outpath, pngname)
masterPng = os.path.join(outpath, "_master", pngname)
if os.path.isfile(masterPng):
    output = Image.open(outPng).convert("RGB")  # discard alpha channel, if any
    master = Image.open(masterPng).convert("RGB")
    mismatch = any(x[1] for x in ImageChops.difference(output, master).getextrema())
