I have code where an image gets converted to black and white.
Now I want to build a new image with reference to the original image.
The output for the original image is the X-/Y-coordinates together with a "1" or "0" for black or white.
The new image receives this information, but not in chronological order.
It therefore has to check whether it has already received information about a specific coordinate, and give a negative response in that case, so that duplicate entries are avoided.
I haven't found many similar examples to this, only some that go roughly in this direction.
Does anyone have an idea how to realize this?
UPDATE:
I built code which turns a pixel of a white image black if the corresponding pixel in the original image is black (otherwise it leaves it white).
Furthermore, every used coordinate is entered into a list, which is then checked for duplicates.
However, this part is not working properly:
although the coordinate [10, 10] has been used in the loop before, the code prints "Coordinate not in the system".
Any help would be appreciated!
import cv2
import numpy

white = cv2.imread('white.jpg')        # loading white image
white = cv2.resize(white, (640, 480))  # adjusting it to the size of the original image

y = 0  # for testing purposes the white image gets blackened manually
x = 0
j = 0
while j < 50:
    content = numpy.zeros((200, 2))  # creating a list with 200 entries, every entry contains 2 values
    content = ([x, y])               # adding two values to the list
    if condition[y, x] == 1:         # condition = 1 means that in the reference picture at this coordinate the pixel is black
        white[y, x] = 0              # "0" creates a black pixel at the specified coordinate on the white image
    x += 5
    y += 5
    j += 1

x = 10  # taking a value which already has been used
y = 10
try:
    b = content.index([x, y])  # check if coordinate is in the list
except ValueError:
    print("Coordinate not in the system")
else:
    print("Coordinate already in the system")

i = 0
while i < 100:
    cv2.imshow('Bild', white)  # displays the image
    if cv2.waitKey(1) == ord('q'):
        break
It took me a while, but I was able to solve it without any complex lists or arrays.
It might not be the most elegant way, but at least it works!
I created a second white picture (the reference) which is checked to see whether a coordinate has already been used.
If the coordinate has not been used, a black pixel is created there.
The next time this coordinate is checked, a black pixel is found, which means it has already been used.
In the end the white image contains 49 black pixels (because position [10, 10] has already been used and is not painted again).
import cv2
import numpy

white = cv2.imread(r'C:\white.jpg')      # loading white image
reference = cv2.imread(r'C:\white.jpg')  # loading a second copy as the reference image

white = cv2.resize(white, (640, 480))          # adjusting it to the size of the original image
reference = cv2.resize(reference, (640, 480))  # same size for the reference image

y = 0  # for testing purposes the white image gets blackened manually
x = 0
j = 0
reference[10, 10] = 0
while j < 50:
    if (reference[y, x] == 255).all():  # still white, i.e. this coordinate has not been used yet
        reference[y, x] = 0  # "0" creates a black pixel at the specified coordinate on the reference image
        white[y, x] = 0      # "0" creates a black pixel at the specified coordinate on the white image
        print("Coordinate not in system")
    else:
        print("Coordinate already in system")
    x += 5
    y += 5
    j += 1

i = 0
while i < 100:
    cv2.imshow('image copy', white)  # displays the image
    if cv2.waitKey(1) == ord('q'):
        break
I have a 1024x1024 image and I want to slice it into boxes of different, randomly selected sizes, for example 2 pieces of 512x512, 8 pieces of 16x16, etc. Box positions are not important, but I want to use every pixel only once. Below is my code, but when I run it a lot of pictures are created and the same regions are used several times. How can I make sure that each pixel is used only once? The picture below represents what I want.
from PIL import Image
import random

infile = 'Da Vinci.jpg'
chopsize = [512, 256, 128, 64, 32]
img = Image.open(infile)
width, height = img.size
a = random.choice(chopsize)

for x0 in range(0, width):
    for y0 in range(0, height):
        box = (x0, y0,
               x0+random.choice(chopsize) if x0+random.choice(chopsize) < width else width - 1,
               y0+random.choice(chopsize) if y0+random.choice(chopsize) < height else height - 1)
        print('%s %s' % (infile, box))
        img.crop(box).save('%s.x%01d.y%01d.jpg' % (infile.replace('.jpg',''), x0, y0))
        a = random.choice(chopsize)
That is what I want:
This is a fun problem! How do you randomly tile an area with your boxes while making sure none of them overlap?
You have a couple of issues:
As your code is written so far, you are going to have boxes that spill over the border of your image. I don't know if you care about this; in your example picture the boxes fit perfectly into the space. If you do care, you are going to have to figure that part out (although this line makes me think you have thought about it and don't care):
x0+random.choice(chopsize) if x0+random.choice(chopsize) < width else width - 1
The other issue, which is what your question is really about, is that you don't keep a record of which pixels you have already visited. There are a few different ways you could do this.
One might be something like:
import random
import numpy as np

filled_pixels = np.zeros((width, height))

x = 0
while x < width:
    y = 0
    while y < height:
        chop = random.choice(chopsize)
        if filled_pixels[x, y] == 1:
            y += 1  # this pixel is already covered, move on
        else:
            filled_pixels[x:x+chop, y:y+chop] = 1
            # do your stuff with making the boxes
            y += chop
    x += 1  # advance to the next column
You basically raster through your image making boxes, making sure that you aren't starting a box at any pixel that already belongs to an earlier box (as recorded in filled_pixels).
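As a more complete (hypothetical) version of that idea, the sketch below keeps a boolean mask of filled pixels, scans in raster order, and shrinks each randomly chosen box until it fits inside the image without touching already-used pixels, so every pixel ends up in exactly one box:

```python
import random
import numpy as np

def tile_without_overlap(width, height, chopsizes, seed=0):
    """Greedily cover a width x height area with non-overlapping boxes.

    Returns a list of (x0, y0, x1, y1) tuples; together the boxes use
    every pixel exactly once.
    """
    rng = random.Random(seed)
    filled = np.zeros((width, height), dtype=bool)
    boxes = []
    for x in range(width):
        for y in range(height):
            if filled[x, y]:
                continue  # this pixel already belongs to an earlier box
            chop = rng.choice(chopsizes)
            # Clip the box to the image border ...
            x1 = min(x + chop, width)
            y1 = min(y + chop, height)
            # ... and shrink it until it contains no already-filled pixel.
            while filled[x:x1, y:y1].any():
                x1 = max(x1 - 1, x + 1)
                y1 = max(y1 - 1, y + 1)
            filled[x:x1, y:y1] = True
            boxes.append((x, y, x1, y1))
    return boxes

boxes = tile_without_overlap(64, 64, [32, 16, 8])
print(len(boxes), "boxes, no pixel used twice")
```

The shrinking step means small filler boxes appear near borders and earlier boxes, so the result won't always look as tidy as the example picture, but the no-overlap condition is guaranteed by construction.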
I am writing a program where I chop an image up into many sub-tiles, process the tiles, and then stitch them back together. I am stuck at the stitching part. When I run my code, after the first row the tiles each shift one space over. I am working with 1000x1000 tiles, and the image size can be variable. I also get an ugly horizontal padding that I can't figure out how to get rid of.
Here is a google drive link to the images:
https://drive.google.com/drive/folders/1HqRl29YlWUrsYoZP88TAztJe9uwgP5PS?usp=sharing
Clarification based on the comments
I take the original black and white image and crop it into 1000px x 1000px black and white tiles. These tiles are then re-colored, replacing the white with a color corresponding to a density heatmap, and saved into that folder. The picture I included is one of the colored-in tiles that I am trying to piece back together. When pieced together, it should be the same shape but a multi-colored version of the black and white image.
from PIL import Image
import os

stitched_image = Image.new('RGB', (large_image.width, large_image.height))
image_list = os.listdir('recolored_tiles')
current_tile = 0

for i in range(0, large_image.height, 1000):
    for j in range(0, large_image.width, 1000):
        p = Image.open(f'recolored_tiles/{image_list[current_tile]}')
        stitched_image.paste(p, (j, i))
        current_tile += 1

stitched_image.save('test.png')
I am attaching the original image that I process in tiles and the current state of the output image:
An example of the tiles found in the folder recolored_tiles:
First off, the code below will create the correct image:
from PIL import Image
import os

stitched_image = Image.new('RGB', (original_image_width, original_image_height))
image_list = os.listdir('recolored_tiles')
current_tile = 0

for y in range(0, original_image_height - 1, 894):
    for x in range(0, original_image_width - 1, 1008):
        tile_image = Image.open(f'recolored_tiles/{image_list[current_tile]}')
        print("x: {0} y: {1}".format(x, y))
        stitched_image.paste(tile_image, (x, y))
        current_tile += 1

stitched_image.save('test.png')
Explanation
First off, you should notice that your tiles aren't 1000x1000. They are all 1008x984, because 18145x16074 can't be divided up evenly into 1000x1000 tiles.
Therefore you will have to put the correct tile width and height in your for loops:
for y in range(0, 16074, INSERT CORRECT RECOLORED_TILE HEIGHT HERE):
    for x in range(0, 18145, INSERT CORRECT RECOLORED_TILE WIDTH HERE):
Secondly, because of how Python's range works, it doesn't include the last value. For example:
for i in range(0, 5):
    print(i)
The output for that would be:
0
1
2
3
4
Therefore the width and height of the original image have to be reduced by 1; otherwise the range would start a 19th tile that doesn't exist.
Hope this works and what a cool project you're working on :)
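One way to avoid hand-tuning such offsets is to derive the grid with explicit division instead of hard-coding the dimensions. A small self-contained sketch (the image and tile sizes here are made up for illustration, not taken from the question):

```python
import math

def tile_offsets(image_w, image_h, tile_w, tile_h):
    """Grid size and top-left paste offsets needed to cover the image."""
    cols = math.ceil(image_w / tile_w)
    rows = math.ceil(image_h / tile_h)
    offsets = [(x * tile_w, y * tile_h)
               for y in range(rows) for x in range(cols)]
    return cols, rows, offsets

cols, rows, offsets = tile_offsets(4000, 3000, 1000, 1000)
print(cols, rows, len(offsets))  # 4 3 12
```

Iterating over these offsets in the same row-major order the tiles were cut in keeps the tile list and the paste positions in sync, which is what goes wrong when the loop step doesn't match the real tile size.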
I have binary images where rectangles are placed randomly, and I want to get the positions and sizes of those rectangles.
If possible, I want the minimal number of rectangles necessary to exactly recreate the image.
On the left is my original image, and on the right the image I get after applying scipy's find_objects()
(as suggested for this question).
import numpy as np
import scipy.ndimage

# image = scipy.ndimage.zoom(image, 9, order=0)
labels, n = scipy.ndimage.label(image, np.ones((3, 3)))
bboxes = scipy.ndimage.find_objects(labels)

img_new = np.zeros_like(image)
for bb in bboxes:
    img_new[bb[0], bb[1]] = 1
This works fine if the rectangles are far apart, but if they overlap and build more complex structures, this algorithm just gives me the largest bounding box (upsampling the image made no difference). I have the feeling that there should already be a scipy or opencv method that does this.
I would be glad to know if somebody has an idea how to tackle this problem, or even better, knows of an existing solution.
As a result I want a list of rectangles (i.e. lower-left corner : upper-right corner) in the image. The condition is that when I redraw those filled rectangles, I get exactly the same image as before. If possible, the number of rectangles should be minimal.
Here is the code for generating sample images (and a more complex example original vs scipy)
import numpy as np

def random_rectangle_image(grid_size, n_obstacles, rectangle_limits):
    n_dim = 2
    rect_pos = np.random.randint(low=0, high=grid_size-rectangle_limits[0]+1,
                                 size=(n_obstacles, n_dim))
    rect_size = np.random.randint(low=rectangle_limits[0],
                                  high=rectangle_limits[1]+1,
                                  size=(n_obstacles, n_dim))

    # Crop rectangle size if it goes over the boundaries of the world
    diff = rect_pos + rect_size
    ex = np.where(diff > grid_size, True, False)
    rect_size[ex] -= (diff - grid_size)[ex].astype(int)

    img = np.zeros((grid_size,)*n_dim, dtype=bool)
    for i in range(n_obstacles):
        p_i = np.array(rect_pos[i])
        ps_i = p_i + np.array(rect_size[i])
        img[tuple(map(slice, p_i, ps_i))] = True
    return img

img = random_rectangle_image(grid_size=64, n_obstacles=30,
                             rectangle_limits=[4, 10])
Here is something to get you started: a naïve algorithm that walks your image and creates rectangles as large as possible. As it is now, it only marks the rectangles but does not report back coordinates or counts. This is to visualize the algorithm alone.
It does not need any external libraries except for PIL, to load and access the left side image when saved as a PNG. I'm assuming a border of 15 pixels all around can be ignored.
from PIL import Image

def fill_rect(pixels, xp, yp, w, h):
    # fill the interior red and draw an orange border around each found rectangle
    for y in range(h):
        for x in range(w):
            pixels[xp+x, yp+y] = (255, 0, 0, 255)
    for y in range(h):
        pixels[xp, yp+y] = (255, 192, 0, 255)
        pixels[xp+w-1, yp+y] = (255, 192, 0, 255)
    for x in range(w):
        pixels[xp+x, yp] = (255, 192, 0, 255)
        pixels[xp+x, yp+h-1] = (255, 192, 0, 255)

def find_rect(pixels, x, y, maxx, maxy):
    # assume we're at the top left
    # get max horizontal span
    width = 0
    height = 1
    while x+width < maxx and pixels[x+width, y] == (0, 0, 0, 255):
        width += 1
    # now walk down while the entire next row is still black
    while y+height < maxy:
        row_is_black = True
        for w in range(x, x+width, 1):
            if pixels[w, y+height] != (0, 0, 0, 255):
                row_is_black = False
                break
        if not row_is_black:
            break
        height += 1
    # fill rectangle
    fill_rect(pixels, x, y, width, height)

image = Image.open('A.png')
pixels = image.load()
width, height = image.size
print(width, height)

for y in range(16, height-15, 1):
    for x in range(16, width-15, 1):
        if pixels[x, y] == (0, 0, 0, 255):
            find_rect(pixels, x, y, width, height)

image.show()
From the output you can observe that the detection algorithm can be improved: for example, the "obvious" two top-left rectangles are split up into 3. Similarly, the larger structure in the center also contains one rectangle more than absolutely needed.
Possible improvements are either to adjust the find_rect routine to locate a best fit¹, or store the coordinates and use math (beyond my ken) to find which rectangles may be joined.
¹ A further idea on this. Currently all found rectangles are immediately filled with the "found" color. You could try to detect obviously multiple rectangles, and then, after marking the first, the other rectangle(s) to check may then either be black or red. Off the cuff I'd say you'd need to try different scan orders (top-to-bottom or reverse, left-to-right or reverse) to actually find the minimally needed number of rectangles in any combination.
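To actually report coordinates back, the same greedy growth can be phrased on a numpy boolean mask, appending each rectangle to a list as it is carved out. This is a sketch under the assumption that the image is already a binary array (unlike the PIL-based code above); the decomposition is exact but, as discussed, not guaranteed minimal:

```python
import numpy as np

def greedy_rectangles(mask):
    """Split a boolean mask into axis-aligned rectangles (x, y, w, h).

    Redrawing the returned rectangles reproduces the mask exactly.
    """
    remaining = mask.copy()
    rects = []
    while remaining.any():
        ys, xs = np.nonzero(remaining)
        y, x = ys[0], xs[0]  # topmost, then leftmost uncovered pixel
        w = 1
        while x + w < remaining.shape[1] and remaining[y, x + w]:
            w += 1           # grow right as far as possible
        h = 1
        while y + h < remaining.shape[0] and remaining[y + h, x:x + w].all():
            h += 1           # grow down while the whole next row is set
        remaining[y:y + h, x:x + w] = False
        rects.append((x, y, w, h))
    return rects
```

Redrawing with img[y:y+h, x:x+w] = True reconstructs the input, which satisfies the exactness condition from the question; minimising the rectangle count is a much harder problem.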
Background
I have been trying to create a chessboard in the PIL module and I have got the general pattern for the first two rows, but can't figure out how to apply this to the entire board. As you can see, I have created an image:
from PIL import Image
img = Image.new("RGB", (15,15), "white") # create a new 15x15 image
pixels = img.load() # create the pixel map
My solution for the first two rows
Note - I am still learning Python so this code may seem very inefficient, but feel free to suggest improvements.
The second row:
Code:
black_2 = []
for i in range(img.size[0]):
    if i % 2 == 0:
        black_2.append(i)
This gives me all the horizontal index positions on where to put a black pixel. Therefore, for the 15x15 board I created, it returns [0, 2, 4, 6, 8, 10, 12, 14]
The first row:
Code:
I then use the second row to work out the horizontal index positions for the first row
black_1 = [i-1 for i in black_2 if i > 0]
if img.size[0] % 2 == 0: # 'that' if statement
    black_1.append(img.size[0]-1)
For the 15x15 pixel board I created, it returns [1, 3, 5, 7, 9, 11, 13]. I created that if statement because I realised that the last black pixel was not showing if the board had an even length, and that seemed to fix it.
Changing the pixels to black:
# hardcoded to check patterns
for i in black_1:
    pixels[i,0] = (0,0,0)
for k in black_2:
    pixels[k,1] = (0,0,0)

img.show()
How can I apply both patterns to the rest of the board, regardless of its size?
I suspect a for var in range() loop is needed, but I am not sure how it would change depending on whether the height (img.size[1]) of the board is odd or even.
Overall pattern so far:
black_1 applies to first row
black_2 applies to second row
A chessboard has 64 squares, not 256. First you need an (8,8) image, and then you can use double for loops to assign the color to all 8 rows.
General example for any size
from PIL import Image

size = 16
img = Image.new("RGB", (size,size), "white") # create a new 16x16 image
pixels = img.load() # create the pixel map

black_2 = []
for i in range(img.size[0]):
    if i % 2 == 0:
        black_2.append(i)

black_1 = [i-1 for i in black_2 if i > 0]
if img.size[0] % 2 == 0: # 'that' if statement
    black_1.append(img.size[0]-1)

for i in black_1:
    for j in range(0, size, 2):
        pixels[i,j] = (0,0,0)
for k in black_2:
    for l in range(1, size+1, 2):
        pixels[k,l] = (0,0,0)

img.show()
This is a simple way. In case it confuses you: one pixel in this image corresponds to a whole square of the chessboard, and you can scale it up at the end if you want to.
#!/usr/bin/env python3
from PIL import Image

# Create new black image of entire board
w, h = 12, 6
img = Image.new("RGB", (w,h))
pixels = img.load()

# Make pixels white where (row+col) is odd
for i in range(w):
    for j in range(h):
        if (i+j)%2:
            pixels[i,j] = (255,255,255)

img.save('result.png')
If you want it to be a larger image, just resize at the end. So, say you want each square of the board to be 15px x 15px:
img = img.resize((15*w,15*h), Image.NEAREST)
Solution:
Using the numpy module is very fast, and there are only 11 loop iterations (it also doesn't loop through every pixel, but rather through whole patterns). It doesn't matter how big the chessboard is; processing should be similarly fast.
Timing: a 2400 x 2400 pixel chessboard -> 0.17s
import numpy as np
from PIL import Image

n = 50 # size of one element; row = 8*n, chessboard = 8*n x 8*n

segment_black = np.zeros(shape=[n,n])
segment_white = np.ones(shape=[n,n])*255

chessboard = np.hstack((segment_black,segment_white))
for i in range(4):
    chessboard = np.hstack((chessboard,segment_black))
    chessboard = np.hstack((chessboard,segment_white))

temp = chessboard
for i in range(7):
    chessboard = np.concatenate((np.fliplr(chessboard),temp))

img = Image.fromarray(chessboard.astype(np.uint8))
img.save('chess.jpg')
img.show()
You could use PIL.Image.paste to paste img on itself.
for n in range(15//2):
    img.paste(img, (0, n*2))
Using all your existing code, just add this at the bottom.
I tried to test the example with an all-black picture, but it does not print "mostly black". Can anyone help?
from PIL import Image

im = Image.open('im.gif')
pixels = im.getdata() # get the pixels as a flattened sequence

black_thresh = 50
nblack = 0
for pixel in pixels:
    if pixel < black_thresh:
        nblack += 1
n = len(pixels)

if (nblack / float(n)) > 0.5:
    print("mostly black")
Each pixel is returned as a tuple containing the color components (e.g. RGB), not a single value. You can take the average of the components and compare that against the threshold, like this:
for pixel in pixels:
    if sum(pixel) / len(pixel) < black_thresh:
        nblack += 1
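Putting the fix together, a self-contained version might look like this. The pixel lists here are made-up stand-ins for im.getdata(); with Pillow you could also call im.convert('L') first so each pixel is already a single grayscale value:

```python
def mostly_black(pixels, black_thresh=50):
    """True if more than half of the pixels average darker than the threshold."""
    nblack = sum(1 for p in pixels if sum(p) / len(p) < black_thresh)
    return nblack / len(pixels) > 0.5

print(mostly_black([(0, 0, 0)] * 100))                          # all black -> True
print(mostly_black([(255, 255, 255)] * 90 + [(0, 0, 0)] * 10))  # mostly white -> False
```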