The best mosaic code I've found is on this page:
https://github.com/codebox/mosaic
However, the code doesn't work well on my Windows computer, and I also think it is more complex than it needs to be. Here are the requirements I posted on reddit:
1) The main photo already has reduced number of colors (8)
2) I have already every image associated with colour needed to be replaced (e.g. number 1 is supposed to replace black pixels, number 2 replaces green pixels...)
3) I need to enlarge the photo by the small photos' size (9 x 9 small photos will produce an image 81 times bigger), which should push the pixels "2n" points away from each other. But instead of producing an n x n same-coloured area around every single one of them (this is how I believe enlarging works in general, correct me if I'm wrong), it should just colour the white spaces with an unrecognized colour, one which is not associated with any small photo (let's call that colour C)
4) Now all it needs is to run through all non-C coloured pixels and put an image centered on each of them, which would create the mosaic (see the sketch after this list).
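Put together, steps 3) and 4) amount to something like the sketch below. This is just my idea written out roughly as code, not something I have working; the tile filenames and the colour-to-tile mapping are made up, and it collapses the enlarging step and the pasting step into a single paste per pixel, which I believe comes to the same thing:
import numpy as np
from PIL import Image
TILE = 9  # side length of the small photos
# hypothetical manual mapping: palette colour -> tile image file
palette = {
    (0, 0, 0): "tile_black.png",
    (0, 255, 0): "tile_green.png",
    # ... the remaining colours of the 8-colour palette
}
main = np.array(Image.open("main.png").convert("RGB"))
h, w = main.shape[:2]
tiles = {c: Image.open(f).resize((TILE, TILE)) for c, f in palette.items()}
out = Image.new("RGB", (w * TILE, h * TILE))
for y in range(h):
    for x in range(w):
        colour = tuple(int(v) for v in main[y, x])
        if colour in tiles:  # anything else plays the role of colour C
            out.paste(tiles[colour], (x * TILE, y * TILE))
out.save("mosaic.png")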
Since I'm pretty new to Python (esp. graphics) and need this just for one use, could someone help me with creating that code? I think the code that inspired me is too complicated. Two things I don't need:
1) "approximation" - if the enlargement is lesser than needed for 100% quality (e.g. the pictures are 9x9, but every side of the original photo can be only 3 times larger, then the program needs to merge some pixels of different colours together, leading to quality loss)
2) colour-to-picture association: my palette of pictures is small, and so is the palette of colours, so I can do it manually
For the ones who didn't get what I mean, here is my idea: https://ibb.co/9GNhqBx
I had a quick go using pyvips:
#!/usr/bin/python3
import sys
import os
import pyvips
if len(sys.argv) != 4:
    print("usage: tile-directory input-image output-image")
    sys.exit(1)
# the size of each tile ... 16x16 for us
tile_size = 16
# load all the tile images, forcing them to the tile size
print(f"loading tiles from {sys.argv[1]} ...")
tiles = []
for root, dirs, files in os.walk(sys.argv[1]):
    tiles += [pyvips.Image.thumbnail(os.path.join(root, name), tile_size,
                                     height=tile_size, size="force")
              for name in files]
# drop any alpha
tiles = [image.flatten() if image.hasalpha() else image
         for image in tiles]
# copy the tiles to memory, since we'll be using them many times
tiles = [image.copy_memory() for image in tiles]
# calculate the average rgb for an image, eg. image -> [12, 13, 128]
def avg_rgb(image):
    m = image.stats()
    return [m(4, i)[0] for i in range(1, 4)]
# find the avg rgb for each tile
tile_colours = [avg_rgb(image) for image in tiles]
# load the main image ... we can do this in streaming mode, since we only
# make a single pass over the image
main = pyvips.Image.new_from_file(sys.argv[2], access="sequential")
# find the abs of an image, treating each pixel as a vector
def pyth(image):
    return sum([band ** 2 for band in image.bandsplit()]) ** 0.5
# calculate a distance map from the main image to each tile colour
distance = [pyth(main - colour) for colour in tile_colours]
# make a distance index -- hide the tile index in the bottom 16 bits of the
# distance measure
index = [(distance[i] << 16) + i for i in range(len(distance))]
# find the minimum distance for each pixel and mask out the bottom 16 bits to
# get the tile index for each pixel
index = index[0].bandrank(index[1:], index=0) & 0xffff
# replicate each tile image to make a set of layers, and zoom the index to
# make an index matching the output size
layers = [tile.replicate(main.width, main.height) for tile in tiles]
index = index.zoom(tile_size, tile_size)
# now for each layer, select pixels matching the index
final = pyvips.Image.black(main.width * tile_size, main.height * tile_size)
for i in range(len(layers)):
    final = (index == i).ifthenelse(layers[i], final)
print(f"writing {sys.argv[3]} ...")
final.write_to_file(sys.argv[3])
I hope it's easy to read. I can run it like this:
$ ./mosaic3.py smallpic/ mainpic/Use\ this.jpg x.png
loading tiles from smallpic/ ...
writing x.png ...
$
It takes about 5s on this 2015 laptop and makes this image:
I had to shrink it for upload, but here's a detail (bottom left of the first H):
Here's a google drive link to the mosaic, perhaps it'll work: https://drive.google.com/file/d/1J3ofrLUhkuvALKN1xamWqfW4sUksIKQl/view?usp=sharing
And here's this code on github: https://github.com/jcupitt/mosaic
Related
I have binary images where rectangles are placed randomly and I want to get the positions and sizes of those rectangles.
If possible I want the minimal number of rectangles necessary to exactly recreate the image.
On the left is my original image and on the right the image I get after applying scipy's find_objects() (as suggested for this question).
import numpy as np
import scipy.ndimage

# image = scipy.ndimage.zoom(image, 9, order=0)
labels, n = scipy.ndimage.label(image, np.ones((3, 3)))
bboxes = scipy.ndimage.find_objects(labels)

img_new = np.zeros_like(image)
for bb in bboxes:
    img_new[bb[0], bb[1]] = 1
This works fine if the rectangles are far apart, but if they overlap and build more complex structures this algorithm just gives me the largest bounding box (upsampling the image made no difference). I have the feeling that there should already exist a scipy or opencv method which does this.
I would be glad to know if somebody has an idea on how to tackle this problem or even better knows of an existing solution.
As a result I want a list of rectangles (i.e. lower-left corner : upper-right corner) in the image. The condition is that when I redraw those filled rectangles I want to get exactly the same image as before. If possible the number of rectangles should be minimal.
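For clarity, the exactness condition can be written as a small check (a sketch, assuming rectangles are given as inclusive (row, column) corner pairs):
import numpy as np
def redraw_matches(image, rects):
    # redrawing the filled rectangles must reproduce the original image
    redrawn = np.zeros_like(image, dtype=bool)
    for (y0, x0), (y1, x1) in rects:  # inclusive corners
        redrawn[y0:y1 + 1, x0:x1 + 1] = True
    return np.array_equal(redrawn, image.astype(bool))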
Here is the code for generating sample images (and a more complex example original vs scipy)
import numpy as np

def random_rectangle_image(grid_size, n_obstacles, rectangle_limits):
    n_dim = 2
    rect_pos = np.random.randint(low=0, high=grid_size - rectangle_limits[0] + 1,
                                 size=(n_obstacles, n_dim))
    rect_size = np.random.randint(low=rectangle_limits[0],
                                  high=rectangle_limits[1] + 1,
                                  size=(n_obstacles, n_dim))
    # Crop rectangle size if it goes over the boundaries of the world
    diff = rect_pos + rect_size
    ex = np.where(diff > grid_size, True, False)
    rect_size[ex] -= (diff - grid_size)[ex].astype(int)

    img = np.zeros((grid_size,) * n_dim, dtype=bool)
    for i in range(n_obstacles):
        p_i = np.array(rect_pos[i])
        ps_i = p_i + np.array(rect_size[i])
        img[tuple(map(slice, p_i, ps_i))] = True
    return img

img = random_rectangle_image(grid_size=64, n_obstacles=30,
                             rectangle_limits=[4, 10])
Here is something to get you started: a naïve algorithm that walks your image and creates rectangles as large as possible. As it is now, it only marks the rectangles but does not report back coordinates or counts. This is to visualize the algorithm alone.
It does not need any external libraries except for PIL, to load and access the left side image when saved as a PNG. I'm assuming a border of 15 pixels all around can be ignored.
from PIL import Image

def fill_rect(pixels, xp, yp, w, h):
    for y in range(h):
        for x in range(w):
            pixels[xp + x, yp + y] = (255, 0, 0, 255)
    for y in range(h):
        pixels[xp, yp + y] = (255, 192, 0, 255)
        pixels[xp + w - 1, yp + y] = (255, 192, 0, 255)
    for x in range(w):
        pixels[xp + x, yp] = (255, 192, 0, 255)
        pixels[xp + x, yp + h - 1] = (255, 192, 0, 255)

def find_rect(pixels, x, y, maxx, maxy):
    # assume we're at the top left
    # get max horizontal span
    width = 0
    height = 1
    while x + width < maxx and pixels[x + width, y] == (0, 0, 0, 255):
        width += 1
    # now walk down, adjusting max width
    while y + height < maxy:
        for w in range(x, x + width, 1):
            if pixels[w, y + height] != (0, 0, 0, 255):
                break
        if pixels[w, y + height] != (0, 0, 0, 255):
            break
        height += 1
    # fill rectangle
    fill_rect(pixels, x, y, width, height)

image = Image.open('A.png')
pixels = image.load()
width, height = image.size
print(width, height)

for y in range(16, height - 15, 1):
    for x in range(16, width - 15, 1):
        if pixels[x, y] == (0, 0, 0, 255):
            find_rect(pixels, x, y, width, height)

image.show()
From the output you can observe that the detection algorithm can be improved, as, for example, the "obvious" two top-left rectangles are split up into 3. Similarly, the larger structure in the center also contains one rectangle more than absolutely needed.
Possible improvements are either to adjust the find_rect routine to locate a best fit¹, or store the coordinates and use math (beyond my ken) to find which rectangles may be joined.
¹ A further idea on this. Currently all found rectangles are immediately filled with the "found" color. You could try to detect obviously multiple rectangles, and then, after marking the first, the other rectangle(s) to check may then either be black or red. Off the cuff I'd say you'd need to try different scan orders (top-to-bottom or reverse, left-to-right or reverse) to actually find the minimally needed number of rectangles in any combination.
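Following up on that scan-order idea, here is a rough numpy sketch of my own: a greedy decomposition like find_rect above, run under the four axis flips, keeping whichever yields the fewest rectangles. Coordinates found in flipped scans would still need mapping back to the original frame if you want more than a count.
import numpy as np
def greedy_rects(img):
    # greedily peel off maximal rectangles, scanning top-to-bottom,
    # left-to-right; returns a list of (y, x, h, w) tuples
    img = img.copy()
    rects = []
    while img.any():
        y, x = np.argwhere(img)[0]  # topmost-leftmost remaining pixel
        w = 1
        while x + w < img.shape[1] and img[y, x + w]:
            w += 1
        h = 1
        while y + h < img.shape[0] and img[y + h, x:x + w].all():
            h += 1
        rects.append((y, x, h, w))
        img[y:y + h, x:x + w] = False  # clear the found rectangle
    return rects
def best_of_scan_orders(img):
    # try the four flips and keep the decomposition with the fewest rectangles
    candidates = []
    for flip_y in (False, True):
        for flip_x in (False, True):
            view = img[::-1] if flip_y else img
            view = view[:, ::-1] if flip_x else view
            candidates.append(greedy_rects(view))
    return min(candidates, key=len)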
Background: I have images I need to compare for differences. The images are large (on the order of 1400x9000 px), machine-generated and highly constrained (screenshots of a particular piece of linear UI), and are expected to be nearly identical, with differences being one of the following three possibilities:
Image 1 has a section image 2 is missing
Image 1 is missing a section image 2 has
Both images have the given section, but its contents differ
I'm trying to build a tool that highlights the differences for a human reviewer, essentially an image version of line-oriented diff. To that end, I'm trying to scan the images line by line and compare them to decide if the lines are identical. My ultimate goal is an actual diff-like output, where it can detect that sections are missing/added/different, and re-sync the images as soon as possible for the remaining parts of identical content. For the first cut, though, I'm going with a simpler approach where the two images are overlaid (alpha blended), and the lines which differ are highlighted with a particular colour (i.e. alpha-blended with a third line of solid colour). At first I tried using the Python Imaging Library, but that was several orders of magnitude too slow, so I decided to try vips, which should be way faster. However, I have absolutely no idea how to express what I'm after using vips operations. The pseudocode for the simpler version would be essentially:
out = []
# image1 and image2 are expected, but not guaranteed, to have the same height
# they are likely to have different heights if different
# most lines are entirely white pixels
for line1, line2 in zip(image1, image2):
    if line1 == line2:
        out.append(line1)
    else:
        # ALL_RED is a line composed of solid red pixels
        out.append(line1.blend(line2, 0.5).blend(ALL_RED, 0.5))
I'm using pyvips in my project, but I'm also interested in code using plain vips or any other bindings, since the operations are shared and easily translated across dialects.
Edit: adding sample images as requested
Edit 2: full size images with missing/added/changed sections:
reference
comparison
How about just using diff? It's pretty quick. All you need to do is turn your PNGs into text a scanline at a time, then parse the diff output.
For example:
#!/usr/bin/env python3
import sys
import os
import re
import pyvips
# calculate a checksum for each scanline and write to name_out
def scanline_checksum(name_in, name_out):
    a = pyvips.Image.new_from_file(name_in, access="sequential")
    # unfold colour channels to make a wider 1-band image
    a = a.bandunfold()
    # xyz makes an index image, where the value of each pixel is its coordinate
    b = pyvips.Image.xyz(a.width, a.height)
    # make a pow gradient image ... each pixel is some power of the x coordinate
    b = b[0] ** 0.5
    # now multiply and sum to make a checksum for each scanline
    # "project" returns sum of columns, sum of rows
    sum_of_columns, sum_of_rows = (a * b).project()
    sum_of_rows.write_to_file(name_out)

scanline_checksum(sys.argv[1], "1.csv")
scanline_checksum(sys.argv[2], "2.csv")

os.system("diff 1.csv 2.csv > diff.csv")

for line in open("diff.csv", "r"):
    match = re.match(r"(\d+),(\d+)c(\d+),(\d+)", line)
    if not match:
        continue
    print(line)
For your two test images I see:
$ time ./diff.py 1.png 2.png
264,272c264,272
351,359c351,359
real 0m0.346s
user 0m0.445s
sys 0m0.033s
On this elderly laptop. All you need to do is use those "change" commands to mark up your images.
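For example, a rough follow-on sketch (my addition, using PIL and numpy to paint the changed scanlines red; diff's "change" line numbers are 1-based and inclusive):
import re
import sys
import numpy as np
from PIL import Image
img = np.array(Image.open(sys.argv[1]).convert("RGB"), dtype=np.float32)
for line in open("diff.csv", "r"):
    match = re.match(r"(\d+),(\d+)c(\d+),(\d+)", line)
    if not match:
        continue
    start, end = int(match.group(1)), int(match.group(2))
    rows = slice(start - 1, end)  # 1-based inclusive -> Python slice
    img[rows] = 0.5 * img[rows] + 0.5 * np.array([255.0, 0.0, 0.0])
Image.fromarray(img.astype(np.uint8)).save("marked.png")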
If OpenCV and NumPy are options to you, then there would be a quite simple solution at least for finding and coloring different rows.
In my approach, I just calculate pixel-wise differences using np.abs, and find non-zero row indices with np.nonzero. With these found row indices, I set up an additional black image and draw red lines for each row. The final blending is just some linear mixing:
0.5 * image1 + 0.5 * image2
for all equal rows, or
0.333 * image1 + 0.333 * image2 + 0.333 * red
for all different rows.
Here's the final code:
import cv2
import numpy as np
# Load images
first = cv2.imread('9gOlq.png', cv2.IMREAD_COLOR)
second = cv2.imread('1Hdx4.png', cv2.IMREAD_COLOR)
# Calculate absolute differences between images
diff = np.abs(np.float32(first) - np.float32(second))
# Find all non-zero rows
nz_rows = np.unique(np.nonzero(diff)[0])
# Set up image with red lines
red = np.zeros(first.shape, np.uint8)
red[nz_rows, :, :] = [0, 0, 255]
# Set up output image
output = np.uint8(0.5 * first + 0.5 * second)
output[nz_rows, :, :] = 0.333 * first[nz_rows, :, :] + 0.333 * second[nz_rows, :, :] + 0.333 * red[nz_rows, :, :]
# Show results
cv2.imshow("diff", np.array(diff, dtype=np.uint8))
cv2.imshow("output", output)
cv2.waitKey()
cv2.destroyAllWindows()
The difference image diff looks like this:
The final output looks like this:
It would be interesting to see two input images with omitted sections as you described in your question. Also, testing this approach using original sized images would be necessary, since you mentioned time is crucial.
Anyway - hope that helps!
I'm using PyCairo to draw some vector images programmatically from a Python script. It works fine. But now I'd like to access the pixel data and do some further processing on them at the pixel level (things like blur or other raster effects) and then continue using that image surface with PyCairo to draw some more vector shapes.
I found the get_data() method in the cairo.ImageSurface class, but I'm not sure how to use it, because the documentation is very cryptic about it. It just says that it returns something called a "Python buffer", but there are no code examples of how it can actually be used in a real application.
Can anyone provide example code of how to get a grip on those pixels in that "Python buffer" thingamajig? (Preferably without copying the entire image back and forth from/to PyCairo surfaces.)
The data is the raw pixel data. It's a memoryview representing the underlying ImageSurface. The data has a different interpretation depending on the format.
Let's consider only the RGB24 pixel format for simplicity's sake. Each pixel is stored as four bytes: viewed as a single 32-bit value, the top byte is unused and red, green, and blue occupy the remaining 24 bits in that order. This means that on a little-endian machine the in-memory byte order is blue, green, red, unused. The extra byte is only there for performance reasons.
The pixels are then stored row by row, the first row coming first and the second row coming after and so on and so forth.
There might be additional padding at the end of each row as well, therefore the stride of the data is a crucial property. To get the byte index of the start of a specific row y we thus compute y * stride. To this we add the x coordinate times the pixel byte width, 4.
This is all illustrated in the following small python program that draws a white rectangle on a black background.
import cairo

width, height = 100, 100
surface = cairo.ImageSurface(cairo.Format.RGB24, width, height)
data = surface.get_data()

for y in range(50 - 20, 50 + 20):
    for x in range(50 - 20, 50 + 20):
        index = y * surface.get_stride() + x * 4
        # set all three colour bytes to 255 (white); on little-endian
        # machines the order within a pixel is blue, green, red, unused
        data[index] = 255
        data[index + 1] = 255
        data[index + 2] = 255

surface.write_to_png("im.png")
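Since the question also asks about continuing with vector drawing on the same surface afterwards, here is a short sketch of that (my addition; mark_dirty tells cairo that the pixel data was modified behind its back):
# continuing from the snippet above
surface.mark_dirty()  # required after modifying the pixels directly
ctx = cairo.Context(surface)  # draw vectors on top of the raster edits
ctx.set_source_rgb(0.0, 1.0, 0.0)
ctx.rectangle(10, 10, 30, 30)
ctx.fill()
surface.write_to_png("im2.png")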
I want to create distortion effects like spiral, stretch, fisheye and wedge, and other effects like underwater and snow, like on this website, using the cv2 library in Python.
I figured out fisheye distortion.
In OpenCV version 3.0 and above it is possible to perform it using cv2.fisheye.undistortImage(). I have the code in Python if you need it.
This is what I got for the following input image:
Input Image:
Distorted image:
The function accepts a matrix, which upon modification yields different distortions of the image.
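As a minimal sketch of the call (the camera matrix K and the distortion coefficients D below are made-up values to experiment with, and the filename is hypothetical):
import cv2
import numpy as np
img = cv2.imread("input.jpg")
h, w = img.shape[:2]
# a rough pinhole camera matrix and k1..k4 fisheye coefficients
K = np.array([[w / 2, 0, w / 2],
              [0, w / 2, h / 2],
              [0, 0, 1]], dtype=np.float64)
D = np.array([0.5, 0.0, 0.0, 0.0], dtype=np.float64)
out = cv2.fisheye.undistortImage(img, K, D, Knew=K)
cv2.imwrite("fisheye.jpg", out)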
UPDATE
In order to add a snowfall effect you can add some noise like Poisson noise.
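For example, a quick sketch of Poisson noise with NumPy (my own illustration; PEAK is an arbitrary knob, with lower values giving stronger noise):
import cv2
import numpy as np
img = cv2.imread("input.jpg")  # hypothetical filename
rng = np.random.default_rng()
PEAK = 30.0
noisy = rng.poisson(img / 255.0 * PEAK) / PEAK * 255.0
noisy = np.clip(noisy, 0, 255).astype(np.uint8)
cv2.imwrite("snowy.jpg", noisy)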
Here is a replacement block to map out a fisheye in the middle of the image. Please look elsewhere for details on the math. Use this in place of the 2 for loops in the previous code.
As stated in the first half of my answer (see previous answer), the purpose of this block is to create 2 maps that work together to remap the source image into the destination image.
To create the two maps, this block sweeps through 2 for loops over the dimensions of the image. Values are calculated for the x and y maps (flex_x and flex_y). It starts by assigning each to simply x and y, for a 1-to-1 replacement map. Then, if the radius (r) is between 0 and 1, the tangential slide for the fisheye is applied and new flex_x and flex_y values are mapped.
Please see my other answer for more details.
# create simple maps with a modified assignment
# outside the bulge is normal, inside is modified
# this is where the magic is assembled
for y in range(h):
    ny = ((2 * y - 250) / (h - 250)) - 1  # play with the 250's to move the y
    ny2 = ny * ny
    for x in range(w):
        nx = ((2 * x - 50) / (w - 50)) - 1  # play with the 50's to move the x
        nx2 = nx * nx
        r = math.sqrt(nx2 + ny2)
        flex_x[y, x] = x
        flex_y[y, x] = y
        if r > 0 and r < 1:
            nr1 = 1 - r ** 2
            nr2 = math.sqrt(nr1)
            nr = (r + (1.0 - nr2)) / 2.0
            theta = math.atan2(ny, nx)
            nxn = nr * math.cos(theta)
            nyn = nr * math.sin(theta)
            flex_x[y, x] = ((nxn + 1) * w) / 2.0
            flex_y[y, x] = ((nyn + 1) * h) / 2.0
Here is half of an answer. The cv2.remap function uses maps to choose a pixel from the source for each pixel in the destination. alkasm's answer to this: How do I use OpenCV's remap function?
does a great job of defining the process, but glosses over the usefulness of those maps. If you can get creative in the maps, you can make any effect you want. Here is what I came up with.
The program starts by loading the image and resizing it. This is a convenience for a smaller screen. Then the empty maps are created.
The maps need to be the same dimensions as the image that is being processed, but with a depth of 1. If the resized original is 633 x 400 x 3, the maps both need to be 633 x 400.
When the remapping is done, cv2.remap will use the value at each coordinate in the maps to determine which pixel in the original to use in the destination: for each (x, y) in the destination, dst(x, y) = src(map1(x, y), map2(x, y)), where map1 holds the x (column) coordinates and map2 holds the y (row) coordinates.
The simplest mapping would be if for every (x,y), map1(x,y)=x and map2(x,y)=y. This creates a 1-to-1 map, and the destination would match the source. In this example, a small offset is added to each value. The cosine function in the offset creates both positive and negative shifts, creating waves in the final image.
Note that creating the maps is slow, but the cv2.remap is fast. Once you have created the map, the cv2.remap is fast enough to be applied to frames of video.
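As an aside, if map creation becomes the bottleneck, the same wave maps can be built without the Python loops. This is an equivalent numpy formulation of the two loops in the code below, with h and w as defined there:
import numpy as np
ys, xs = np.indices((h, w), dtype=np.float32)
flex_x = xs + np.cos(xs / 15) * 15
flex_y = ys + np.cos(ys / 30) * 25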
# create waves
import numpy as np
import cv2
import math
# read in image and resize down to width of 400
# load your image file here
image = cv2.imread("20191114_154534.jpg")
r = 400.0 / image.shape[1]
dim = (400, int(image.shape[0] * r))
# Perform the resizing of the image
resized = cv2.resize(image, dim, interpolation = cv2.INTER_AREA)
# Grab the dimensions of the image and calculate the center
# of the image (center not needed at this time)
(h, w, c) = resized.shape
center = (w // 2, h // 2)
# set up the x and y maps as float32
flex_x = np.zeros((h,w),np.float32)
flex_y = np.zeros((h,w),np.float32)
# create simple maps with a modified assignment
# the math modifier creates ripples. increase the divisor for less waves,
# increase the multiplier for greater movement
# this is where the magic is assembled
for y in range(h):
    for x in range(w):
        flex_x[y, x] = x + math.cos(x / 15) * 15
        flex_y[y, x] = y + math.cos(y / 30) * 25
# do the remap this is where the magic happens
dst = cv2.remap(resized,flex_x,flex_y,cv2.INTER_LINEAR)
#show the results and wait for a key
cv2.imshow("Resized",resized)
cv2.imshow("Flexed",dst)
cv2.waitKey(0)
cv2.destroyAllWindows()
I'm currently cropping images this way:
self.crop = self.img[self.y_center - self.radius:self.y_center + self.radius,
                     self.x_center - self.radius:self.x_center + self.radius]
Where y_center and x_center are defined by clicks, and radius is half the size of the crop. But I repeatedly run into a problem when the click is near an extreme side of the image. In that case, how do I extend the crop on the other side of the image? Basically, I do not want to compromise the size.
The simple & stupid (r) way:
check if the rectangle around the point goes beyond the image
move it the corresponding amount of pixels if it does
repeat for all the 4 sides
Not so s&s is how to handle the case where the rectangle is larger than the entire image, but we'll see...
# X, Y - size of the image - 1 (maximum possible values for x, y)
# x, y - click coordinates relative to the image
t = x + radius
if t > X: x -= t - X
t = y + radius
if t > Y: y -= t - Y
t = x - radius
if t < 0: x -= t  # t < 0, so x actually increases
t = y - radius
if t < 0: y -= t
del t

crop = img[y - radius:min(y + radius, Y), x - radius:min(x + radius, X)]
With min, we handle the case when the rectangle is larger than the image.
We check from the "high" side because we chose the order of adjustments so that the last ones have docked the rectangle to the "low" side.