I have to accomplish the following task: two square, equally sized PNG images have to be placed side by side and exported as a combined image. This has to be done for hundreds of pairs in a folder, with filenames ending in "_1" and "_2".
I think this can be done in GIMP with Python-Fu, but trying to understand the fundamentals of GIMP scripting is a bit overwhelming on a tight schedule, and I really just need a solution for this single task. I would really appreciate it if you could point me in the right direction.
(If there is a simpler solution than using GIMP, please let me know. It should run on Linux and ideally be executable from bash.)
After xenoid's recommendation:
I found ImageMagick's syntax and documentation to be less than optimal, so I'll share how I got it done:
On Ubuntu 18.04.4:
montage -tile x1 -geometry +0+0 input1.png input2.png output.png
The whole thing (probably not interesting for anyone else...)
#! /bin/bash
input="./Input/"
output="./Output/"
# added to the output filename
prefix="CIF_"
postfix="_2"

# get the file list (NUL-delimited, so spaces in names are safe)
readarray -d '' RRA < <(find "$input" -regextype posix-egrep -regex '.*_1_cr\.png$' -print0)
echo "Merging ${#RRA[@]} images.."

# remove the directory from each filename
RRA=( "${RRA[@]##*/}" )

# strip the last part of each filename: "_1_cr.png"
RRA=( "${RRA[@]/%_1_cr\.png/}" )

# merge the images
for fall in "${RRA[@]}"; do
    # check that there are two images to merge for the current case
    if test -f "$input${fall}_2_cr.png"; then
        echo "${fall}"
        montage -tile x1 -tile-offset +10 -geometry +0+0 -border +20+20 -bordercolor white \
            "$input${fall}_1_cr.png" "$input${fall}_2_cr.png" "$output$prefix${fall}$postfix.png"
    else
        echo "${fall} - no second image found"
    fi
done
With ImageMagick, you can loop over each pair and just do:
convert image_1.png image_2.png +append image_1_2.png
See
https://imagemagick.org/Usage/layers/#append
If you want space between them, then use +smush X, where X is the amount of space that you want. If you want them overlapped, then use a negative value for X. You can set the color of the space using -background color.
See
https://imagemagick.org/script/command-line-options.php#smush
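If you would rather avoid ImageMagick entirely, here is a minimal Pillow sketch of the same side-by-side paste. The *_1.png/*_2.png naming and the same-size assumption come from the question; the "_combined" output name is just illustrative:

from pathlib import Path
from PIL import Image

for left_path in sorted(Path(".").glob("*_1.png")):
    right_path = left_path.with_name(left_path.name.replace("_1.png", "_2.png"))
    if not right_path.exists():
        print(f"{left_path.name}: no second image found")
        continue
    left = Image.open(left_path)
    right = Image.open(right_path)
    # new canvas wide enough for both halves, same height
    combined = Image.new("RGB", (left.width + right.width, left.height), "white")
    combined.paste(left, (0, 0))
    combined.paste(right, (left.width, 0))
    combined.save(left_path.name.replace("_1.png", "_combined.png"))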
Related
I have a vectorized floorplan image. I want to identify the objects in the image through the vector data in its SVG file. The SVG path data does not contain any closepath (z) commands, so I cannot tell where one object ends and the next begins. Can somebody help me, please?
I have very little knowledge of SVG files and of using them in Tkinter, so please suggest what I can do.
This is the vector data of the image.
Use in conjunction with the SO floorplan question.
Jump to z_final_floorplan.svg for the final file.
A
Create 4 files:
w_original_floorplan.svg
x_rough_static_floorplan.svg
y_rough_live_floorplan.svg
z_final_floorplan.svg
w_original_floorplan.svg and x_rough_static_floorplan.svg are identical apart from filename.
y_rough_live_floorplan.svg and z_final_floorplan.svg are empty; to be populated.
Copy x_rough_static_floorplan.svg to y_rough_live_floorplan.svg.
Open y_rough_live_floorplan.svg in a browser via a local server.
In x_rough_static_floorplan.svg, find all M and replace each with two newlines followed by /M (case sensitive); i.e. the replacement string is shift + enter, shift + enter, /M.
B
[this section takes the most time]
Remove the 1st '/' in the path in y_rough_live_floorplan.svg [shows blackout_floorplan].
Label the corresponding code section in x_rough_static_floorplan.svg blackout_floorplan.
(This file is used for rough work, so staying XML/SVG-valid is irrelevant.)
In y_rough_live_floorplan.svg, find the next '/' and delete it [shows floorplan_top_left_whiteout].
Label the corresponding code section in x_rough_static_floorplan.svg floorplan_top_left_whiteout.
Have x_rough_static_floorplan.svg and y_rough_live_floorplan.svg open in two windows; you will be going back and forth between them. Keep repeating until you reach the end.
(Hint: the find tool seems to persist when switching between files in VS Code, so you can search for / and jump to the next match with cmd + g easily.) It may be handy to have a paper printout of the original SVG as a reference, and to label the names of the objects you create, e.g. bath, sink, table, as you go along (don't be fooled by this: one table is just 'table', but is the 2nd chair chair2, chair_2, chair_two, etc.?).
C
Reorder the labels and their corresponding code in x_rough_static_floorplan.svg so the labels sit next to each other, in the order they are found in the path:
e.g.
…
floorplan
bath
sink
table_chairs
sofa
…
Use the 'find' tool here. This process will itself require a temp file to copy and paste into, rather than reordering within the file you are working on, then rewriting the temp file back into the working file. It might be a good idea to create a checklist of objects and cross them off as they are done.
E.g. floorplan, bath, table_chairs, sink…
D
Create path elements from your grouped objects, giving each one an id: id="floorplan_main", id="bath", id="sink", etc.
Bear in mind, the drawing data here is really, really bad. Rectangles should really be drawn with rect elements when possible, and a lot of the path data is quite unnecessary, but that is evidently how the application generates the SVG.
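As a rough illustration of this last step, here is a Python sketch that splits one monolithic d attribute at each M and emits one path element per object. The d_attr data and the ids list are placeholders standing in for the real path data and the labels you collected in section B:

import re

# Hypothetical data: in the real file this is the monolithic d attribute,
# and the ids are the labels you assigned while working through section B.
d_attr = "M10 10 L90 10 L90 90 L10 90 M120 10 L200 10 L200 90 L120 90"
ids = ["floorplan_main", "bath"]

# Split immediately before every absolute moveto (M), keeping the M.
subpaths = [s.strip() for s in re.split(r"(?=M)", d_attr) if s.strip()]

for obj_id, sub in zip(ids, subpaths):
    print(f'<path id="{obj_id}" d="{sub}" fill="none" stroke="black"/>')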
I am currently working on a media project. We've shot long clips, mainly dark if not black. I have decomposed these clips into their frames (>500k single frames) and put them in folders. Now my goal is to find and select the frames that are not black or mainly dark: around a thousand out of the total.
This seems like a job that a simple Python script can handle without too much effort. I know that scikit-image is commonly used for working with images, but I don't know how to come up with a script that does the job neatly. I have some experience with scientific programming, but image manipulation is a bit outside my field.
For example, this image should be reported as black and thus ignored, while this other one, although in low light, should be kept as good.
Ideally, it would be optimal to have a script that uses one or more criteria to determine if an image is totally dark or not, and in the latter case put it into another folder for human (me) inspection.
Any help is extremely appreciated!
You can get the mean of each image very simply without writing any code using ImageMagick which is available for Windows, Linux and macOS.
Like this:
magick identify -format '%[fx:mean*255] %f\r\n' black.jpg
1.01936 black.jpg
and:
magick identify -format '%[fx:mean*255] %f\r\n' nonblack.jpg
1.72921 nonblack.jpg
To improve performance, I would use GNU Parallel on macOS or Linux. On Windows, I would open a new Command Prompt for each directory and run several scripts in parallel, or start one script processing all the files ending in 0 or 1, a second one for files ending in 2 or 3, a third for 4, 5 or 6, and a final one for 7, 8 or 9.
If I was doing it in Python I would use a multiprocessing pool to speed things up, by the way.
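For instance, a minimal sketch of that multiprocessing idea, using Pillow for the mean; the folder names and the 2/255 threshold are placeholders:

from multiprocessing import Pool
from pathlib import Path
import shutil

from PIL import Image, ImageStat

SRC = Path("frames")       # hypothetical input folder
DST = Path("to_inspect")   # hypothetical folder for frames worth keeping

def mean_brightness(path):
    # Per-channel means averaged together, on a 0-255 scale
    stat = ImageStat.Stat(Image.open(path).convert("RGB"))
    return path, sum(stat.mean) / len(stat.mean)

if __name__ == "__main__":
    DST.mkdir(exist_ok=True)
    with Pool() as pool:
        for path, mean in pool.imap_unordered(mean_brightness, SRC.glob("*.png")):
            if mean >= 2:  # brighter than "essentially black"
                shutil.copy(path, DST / path.name)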
OpenCV is enough to solve this problem.
Use np.mean(image, axis=2) to get the per-pixel mean across the channels; then you can easily check for the black ones.
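As a tiny sketch of that check (the filename is hypothetical, and the 2/255 cut-off matches the accepted approach below):

import cv2
import numpy as np

img = cv2.imread("frame_000123.png")  # BGR uint8 array
per_pixel = np.mean(img, axis=2)      # average over the colour channels
if per_pixel.mean() < 2:              # essentially black
    print("mostly black - ignore")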
As pointed out in the replies, taking the 'mean' of the image helped. After reading in the image, I compute np.mean(img, axis=2).mean() so that I have the mean over the three colour channels. If this mean is low (<2) the image is discarded; otherwise the file is copied to another folder.
The code is not really time-efficient, as it takes ~3 hours for 200k files, but it does the trick!
You'll probably want to use PIL (the Python Imaging Library).
I did a quick search for code that calculates the average of an image and found this snippet:
Image Average Color
from PIL import Image

def get_average_color(x, y, n, image):
    """Return a 3-tuple with the RGB value of the average color of the
    n x n square whose origin (top left corner) is (x, y) in the given
    image."""
    r, g, b = 0, 0, 0
    count = 0
    for s in range(x, x + n):
        for t in range(y, y + n):
            pixlr, pixlg, pixlb = image[s, t]
            r += pixlr
            g += pixlg
            b += pixlb
            count += 1
    return (r // count, g // count, b // count)

# convert to RGB so each pixel unpacks into exactly three values
image = Image.open('test.png').convert('RGB').load()
r, g, b = get_average_color(24, 290, 50, image)
print(r, g, b)
Maybe you could just iterate through all of the images in your folder and log (or copy) the ones that are above a certain value.
There's probably a more elegant way to do this using PIL but maybe this will get you started.
Hope it helps!
I am trying to trim away the parts of an image where a complete row contains nothing but white.
I tried using matplotlib:
I converted the image into a matrix, checked whether each (r,g,b) is (0,0,0) or (1,1,1), and removed an entire row of the image if every (r,g,b) in the row is of that kind.
The matrix looks like [[[r,g,b], [r,g,b], ...], ..., [[r,g,b], [r,g,b], ...]].
I achieved my requirement, but running this for around 500 images takes around 30 minutes. Can I do it in a better way?
The required image should look like this:
Edit-1:
I tried the trim method from the wand package:
from wand.image import Image as wand_img
from wand.color import Color

with wand_img(filename=path) as i:
    # i.trim(color=Color('white'))
    # i.trim(color=Color('white'))
    i.trim()
    i.trim()
    i.save(filename='output.png')
but it is not working for the following type of images:
You could use ImageMagick which is installed on most Linux distros and is available for macOS and Windows.
To trim one image, start a Terminal (or Command Prompt on Windows) and run:
magick input.png -fuzz 20% -trim result.png
That will give you this - though I added a black border so you can make out the extent of it:
If you have lots to do, you can do them in parallel with GNU Parallel like this:
parallel -X magick mogrify -trim ::: *png
I made 1,000 copies of your image and did the whole lot in 4 seconds on a MacBook Pro.
If you don't have GNU Parallel, you can do 1,000 images in 12 seconds like this:
magick mogrify -trim *png
If you want to do it with Python, you could try something like this:
#!/usr/bin/env python3
from PIL import Image, ImageChops
# Load image and convert to greyscale
im = Image.open('image.png').convert('L')
# Invert image and find bounding box
bbox = ImageChops.invert(im).getbbox()
# Debug
print(*bbox)
# Crop and save
result = im.crop(bbox)
result.save('result.png')
It gives the same output as the ImageMagick version. I would suggest you use a threading tool to do lots in parallel for best performance.
The sequential version takes 65 seconds for 1,000 images and the multi-processing version takes 14 seconds for 1,000 images.
Using two trims in ImageMagick 6.9.10.25 Q16 on macOS Sierra works fine for me. Your image has a black bar on the right-hand side; the first trim will remove that, and the second trim will remove the remaining excess white. You may need to add some fuzz (tolerance) to the trim, but I did not need it.
Input:
convert img.png -trim +write tmp1.png -trim result.png
Result of first trim (tmp1.png):
Final Result after second trim:
ADDITION:
Looking at the docs for Python Wand:
trim(*args, **kwargs)
Remove solid border from image. Uses top left pixel as a guide by default, or you can also specify the color to remove.
Parameters:
color (Color) – the border color to remove. if it’s omitted top left pixel is used by default
fuzz (numbers.Integral) – Defines how much tolerance is acceptable to consider two colors as the same.
You will need to specify color=black for the first trim, since this version of trim uses the top left corner for trimming. Command line Imagemagick looks at all corners. If that fails, then add some fuzz value.
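In Wand, that two-trim approach might look like the following sketch. The filename is a placeholder, and the fuzz value is illustrative (per the docs above, Wand's fuzz is measured against the quantum range; the answer above did not need any):

from wand.image import Image
from wand.color import Color

with Image(filename='img.png') as img:
    img.trim(color=Color('black'))  # first trim: the black bar, since Wand keys off the given colour
    img.trim(fuzz=int(0.05 * img.quantum_range))  # second trim: the remaining white margin
    img.save(filename='result.png')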
I have a gif that I am taking apart frame by frame in order to write text onto it. I used ffmpeg to put the frames (saved as individual .png files) back together and it worked nicely. This is the code I used:
ffmpeg -f image2 -i newimg%d.png out.gif
But now I want to use the python wrapper ffmpy. Following the docs, I tried a variety of things but I keep getting syntax errors.
Here is one instance of my efforts:
ff = ffmpy(FFmpeg(inputs = {ffmpeg -f image2 -i "newimg%d.png"}, outputs = {"gif_with_text.gif"}))
ff.run()
In this attempt, the syntax error points to the "2" in image2. Could someone help me out? Note: I'm new to Python, let alone ffmpeg and ffmpy.
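For what it's worth, ffmpy's documented interface takes plain dicts mapping filenames to option strings, not raw ffmpeg command text, so the call should presumably look more like this sketch (untested; filenames taken from the question):

from ffmpy import FFmpeg

ff = FFmpeg(
    inputs={'newimg%d.png': '-f image2'},  # the option string accompanies the input file
    outputs={'gif_with_text.gif': None},   # None means no extra output options
)
ff.run()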
I'm writing a CAD application that outputs PDF files using the Cairo graphics library. A lot of the unit testing does not require actually generating the PDF files, such as computing the expected bounding boxes of the objects. However, I want to make sure that the generated PDF files "look" correct after I change the code. Is there an automated way to do this? How can I automate as much as possible? Do I need to visually inspect each generated PDF? How can I solve this problem without pulling my hair out?
(See also update below!)
I'm doing the same thing using a shell script on Linux that wraps
ImageMagick's compare command
the pdftk utility
Ghostscript (optionally)
(It would be rather easy to port this to a .bat Batch file for DOS/Windows.)
I have a few reference PDFs created by my application which are "known good". Newly generated PDFs after code changes are compared to these reference PDFs. The comparison is done pixel by pixel and is saved as a new PDF. In this PDF, all unchanged pixels are painted in white, while all differing pixels are painted in red.
Here are the building blocks:
pdftk
Use this command to split multipage PDF files into multiple singlepage PDFs:
pdftk reference.pdf burst output somewhere/reference_page_%03d.pdf
pdftk comparison.pdf burst output somewhere/comparison_page_%03d.pdf
compare
Use this command to create a "diff" PDF page for each of the pages:
compare \
-verbose \
-debug coder -log "%u %m:%l %e" \
somewhere/reference_page_001.pdf \
somewhere/comparison_page_001.pdf \
-compose src \
somewhereelse/reference_diff_page_001.pdf
Ghostscript
Because of automatically inserted metadata (such as the current date and time), PDF output does not work well for MD5-hash-based file comparisons.
If you want to automatically discover all cases which consist of purely white pages, you could also convert to a meta-data free bitmap format using the bmp256 output device. You can do that for the original PDFs (reference and comparison), or for the diff-PDF pages:
gs \
-o reference_diff_page_001.bmp \
-r72 \
-g595x842 \
-sDEVICE=bmp256 \
reference_diff_page_001.pdf
md5sum reference_diff_page_001.bmp
If the MD5sum is what you expect for an all-white page of 595x842 PostScript points, then your unit test passed.
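If you want to wire that check into a test harness, a small Python sketch could look like this; gs must be on PATH, and the expected hash is a placeholder you would record once from a known-good all-white page:

import hashlib
import subprocess

def page_md5(pdf_page, bmp_out):
    # Render the single-page PDF to a metadata-free BMP, then hash it
    subprocess.run(
        ["gs", "-o", bmp_out, "-r72", "-g595x842", "-sDEVICE=bmp256", pdf_page],
        check=True, capture_output=True,
    )
    with open(bmp_out, "rb") as f:
        return hashlib.md5(f.read()).hexdigest()

# Placeholder: record this hash once from a known-good all-white page
EXPECTED_ALL_WHITE = "replace-with-recorded-hash"

assert page_md5("reference_diff_page_001.pdf",
                "reference_diff_page_001.bmp") == EXPECTED_ALL_WHITE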
Update:
I don't know why I didn't previously think of generating a histogram output from the ImageMagick compare...
The following is a command pipeline chaining 2 different commands:
the first one is the same compare as above, which generates the 'white pixels are equal, red pixels are differences' format, except that it outputs ImageMagick's internal miff format. It doesn't write to a file, but to stdout.
the second one uses convert to read stdin, generate a histogram and output the result in text form. There will be two lines:
one indicating the number of white pixels
the other one indicating the number of red pixels.
Here it goes:
compare \
reference.pdf \
current.pdf \
-compose src \
miff:- \
| \
convert \
- \
-define histogram:unique-colors=true \
-format %c \
histogram:info:-
Sample output:
56934: (61937, 0, 7710,52428) #F1F100001E1ECCCC srgba(241,0,30,0.8)
444056: (65535,65535,65535,52428) #FFFFFFFFFFFFCCCC srgba(255,255,255,0.8)
(Sample output was generated by using these reference.pdf and current.pdf files.)
I think this type of output is really well suited for automatic unit testing. If you evaluate the two numbers, you can easily compute the "red pixel" percentage and you could even decide to return PASSED or FAILED based on a certain threshold (if you don't necessarily need "zero red" for some reason).
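As an illustration, here is a hedged Python sketch that runs the pipeline above and turns the two histogram lines into a pass/fail decision; the 0.1% threshold is just an example:

import re
import subprocess

cmd = ("compare reference.pdf current.pdf -compose src miff:- | "
       "convert - -define histogram:unique-colors=true -format %c histogram:info:-")
out = subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout

red = white = 0
for line in out.splitlines():
    m = re.match(r"\s*(\d+):", line)
    if not m:
        continue
    if "srgba(255,255,255" in line:   # the unchanged (white) pixels
        white += int(m.group(1))
    else:                             # everything else counts as "red"
        red += int(m.group(1))

percent_red = 100.0 * red / (red + white)
print(f"{percent_red:.3f}% differing pixels")
assert percent_red < 0.1  # example tolerance for PASSED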
You could capture the PDF as a bitmap (or at least a losslessly-compressed) image, and then compare the image generated by each test with a reference image of what it's supposed to look like. Any differences would be flagged as an error for the test.
The first idea that pops in my head is to use a diff utility. These are generally used to compare texts of documents but they might also compare the layout of the PDF. Using it, you can compare the expected output with the output supplied.
The first result Google gives me is this. Although it is commercial, there might be other free/open-source alternatives.
I would try this using xpresser (https://wiki.ubuntu.com/Xpresser). You can try to match images against similar images, not just exact copies, which is the problem in these cases.
I don't know if xpresser is being actively developed, or if it can be used with standalone image files (I think so) -- anyway, it takes its ideas from the Sikuli project (which is Java with a Jython front end, while xpresser is Python).
I wrote a tool in Python to validate PDFs for my employer's documentation. It has the capability to compare individual pages to master images. I used a library I found called swftools to export the page to PNG, then used the Python Imaging Library to compare it with the master.
The relevant code looks something like this (this won't run as there are some dependencies on other parts of the script, but you should get the idea):
# exporting a page to PNG (gfx is the swftools Python binding)
import os
import gfx
from PIL import Image, ImageChops

gfxpdf = gfx.open("pdf", self.pdfpath)
if os.path.isfile(pngPath):
    os.remove(pngPath)
page = gfxpdf.getPage(pagenum)
img = gfx.ImageList()
img.startpage(page.width, page.height)
page.render(img)
img.endpage()
img.save(pngPath)
return os.path.isfile(pngPath)

# comparing the exported PNG against the master image
outPng = os.path.join(outpath, pngname)
masterPng = os.path.join(outpath, "_master", pngname)
if os.path.isfile(masterPng):
    output = Image.open(outPng).convert("RGB")  # discard alpha channel, if any
    master = Image.open(masterPng).convert("RGB")
    # getextrema() returns (min, max) per band; any nonzero max means a difference
    mismatch = any(x[1] for x in ImageChops.difference(output, master).getextrema())