Horizontally flip an image [python] - python

So I need to flip an image horizontally, meaning with respect to the horizontal axis, like this picture.
However, my problem is that it's flipping with respect to the vertical axis, as shown in this picture.
This is my code right now. Can someone explain the changes I need to make? image.gif is the image I am trying to flip. The reason I am setting the newImage at a new position, as shown below, is that I want the image to come up in the same window as the old image. And it did! Only flipped vertically, not horizontally.
from cImage import *

def horizentalFlip(oldimage):
    myimagewindow = ImageWin("image", 1000, 600)
    oldimage = FileImage("image.gif")
    oldimage.draw(myimagewindow)

    oldw = oldimage.getWidth()
    oldh = oldimage.getHeight()
    newImage = EmptyImage(oldw, oldh)

    maxp = oldw - 1
    for row in range(oldh):
        for col in range(oldw):
            oldpixel = oldimage.getPixel(maxp - col, row)
            newImage.setPixel(col, row, oldpixel)

    newImage.setPosition(oldw + 1, 0)
    newImage.draw(myimagewindow)
    myimagewindow.exitOnClick()
Thank you

The type of flipping you want (reflection around the x axis, your first picture) is usually called a "vertical" flip, whereas reflection around the y axis (your second picture) is usually called a "horizontal" flip.
Nomenclature aside, you want reflection about the x axis, and if I've understood you correctly, your problem is that you're getting reflection about the y axis. This is because the following lines:
maxp = oldw - 1
...
oldpixel = oldimage.getPixel(maxp-col,row)
manipulate widths (w), columns (col), and the first coordinate argument of each getPixel call: all of these relate to the horizontal coordinate of each pixel. But you want to change the vertical coordinate of each pixel, so you need to work with the height, with rows, and with the second coordinate argument:
maxp = oldh - 1
...
oldpixel = oldimage.getPixel(col, maxp - row)
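Putting the fix into your loop, a minimal sketch (assuming the same cImage API as in your code) looks like this:

maxp = oldh - 1
for row in range(oldh):
    for col in range(oldw):
        # copy each pixel from the mirrored row, keeping the column fixed
        oldpixel = oldimage.getPixel(col, maxp - row)
        newImage.setPixel(col, row, oldpixel)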

Related

How to slice an image with different dimensions in python?

I have a 1024x1024 image and I want to slice it into boxes of different, randomly selected sizes, for example 2 pieces of 512x512, 8 pieces of 16x16, etc. The box positions are not important, but I want to use every pixel exactly once. Below is my code, but when I run it, a lot of pictures are created and the same regions are used repeatedly. How can I make sure that each pixel is used only once? The picture below represents what I want.
from PIL import Image
import random

infile = 'Da Vinci.jpg'
chopsize = [512, 256, 128, 64, 32]

img = Image.open(infile)
width, height = img.size
a = random.choice(chopsize)

for x0 in range(0, width):
    for y0 in range(0, height):
        box = (x0, y0,
               x0 + random.choice(chopsize) if x0 + random.choice(chopsize) < width else width - 1,
               y0 + random.choice(chopsize) if y0 + random.choice(chopsize) < height else height - 1)
        print('%s %s' % (infile, box))
        img.crop(box).save('%s.x%01d.y%01d.jpg' % (infile.replace('.jpg', ''), x0, y0))
        a = random.choice(chopsize)
That is what I want:
This is a fun problem! How do you randomly tile an area with your boxes while making sure none of the boxes overlap?
You have a couple of issues:
As you've written your code so far, you are going to have boxes that spill over the border of your image. I don't know if you care about this; in your example picture the boxes fit perfectly into the space. If you do care, you are going to have to figure that part out (although this line makes me think you have thought about it and don't care):
x0 + random.choice(chopsize) if x0 + random.choice(chopsize) < width else width - 1
The other issue, which is what your question is really about, is that you don't keep a record of which pixels you have already visited. There are a few different ways you could do this.
One might be something like:
import numpy as np

filled_pixels = np.zeros((width, height))

x = 0
while x < width:
    y = 0
    while y < height:
        if filled_pixels[x, y] == 1:
            y += 1  # this pixel is already covered, move down
        else:
            chop = random.choice(chopsize)
            filled_pixels[x:x+chop, y:y+chop] = 1
            # do your stuff with making the boxes
            y += chop
    x += 32  # advance by the minimum dimension of a square
You basically raster through your image making boxes, making sure you aren't starting a square at any pixel where you already have one (as recorded in filled_pixels).
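Building on that idea, here is a self-contained sketch using the file name and chopsize list from the question; the shrinking step and the output naming are my own additions, so treat them as illustrative. It shrinks each randomly chosen box until it sits entirely in unfilled space, which guarantees every pixel is cropped exactly once, at the cost of sometimes producing tiles smaller than the requested sizes:

from PIL import Image
import random
import numpy as np

infile = 'Da Vinci.jpg'
chopsize = [512, 256, 128, 64, 32]

img = Image.open(infile)
width, height = img.size
filled = np.zeros((width, height), dtype=bool)

for x in range(width):
    for y in range(height):
        if filled[x, y]:
            continue  # already covered by an earlier box
        chop = random.choice(chopsize)
        chop = min(chop, width - x, height - y)  # clip at the image border
        while chop > 1 and filled[x:x+chop, y:y+chop].any():
            chop //= 2  # shrink until the box sits in free space only
        filled[x:x+chop, y:y+chop] = True
        box = (x, y, x + chop, y + chop)
        img.crop(box).save('%s.x%04d.y%04d.jpg' % (infile.replace('.jpg', ''), x, y))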

Find minimal number of rectangles in the image

I have binary images where rectangles are placed randomly, and I want to get the positions and sizes of those rectangles.
If possible, I want the minimal number of rectangles necessary to exactly recreate the image.
On the left is my original image, and on the right is the image I get after applying scipy's find_objects()
(as suggested for this question).
import numpy as np
import scipy.ndimage

# image = scipy.ndimage.zoom(image, 9, order=0)
labels, n = scipy.ndimage.measurements.label(image, np.ones((3, 3)))
bboxes = scipy.ndimage.measurements.find_objects(labels)

img_new = np.zeros_like(image)
for bb in bboxes:
    img_new[bb[0], bb[1]] = 1
This works fine if the rectangles are far apart, but if they overlap and build more complex structures, this algorithm just gives me the largest bounding box (upsampling the image made no difference). I have the feeling that there should already be a scipy or OpenCV method which does this.
I would be glad to know if somebody has an idea of how to tackle this problem or, even better, knows of an existing solution.
As a result I want a list of rectangles (i.e. lower-left corner to upper-right corner) in the image. The condition is that when I redraw those filled rectangles, I get exactly the same image as before. If possible, the number of rectangles should be minimal.
Here is the code for generating sample images (and a more complex example, original vs. scipy):
import numpy as np

def random_rectangle_image(grid_size, n_obstacles, rectangle_limits):
    n_dim = 2
    rect_pos = np.random.randint(low=0, high=grid_size-rectangle_limits[0]+1,
                                 size=(n_obstacles, n_dim))
    rect_size = np.random.randint(low=rectangle_limits[0],
                                  high=rectangle_limits[1]+1,
                                  size=(n_obstacles, n_dim))

    # Crop rectangle size if it goes over the boundaries of the world
    diff = rect_pos + rect_size
    ex = np.where(diff > grid_size, True, False)
    rect_size[ex] -= (diff - grid_size)[ex].astype(int)

    img = np.zeros((grid_size,)*n_dim, dtype=bool)
    for i in range(n_obstacles):
        p_i = np.array(rect_pos[i])
        ps_i = p_i + np.array(rect_size[i])
        img[tuple(map(slice, p_i, ps_i))] = True
    return img

img = random_rectangle_image(grid_size=64, n_obstacles=30,
                             rectangle_limits=[4, 10])
Here is something to get you started: a naïve algorithm that walks your image and creates rectangles as large as possible. As it stands, it only marks the rectangles; it does not report back coordinates or counts. This is just to visualize the algorithm.
It does not need any external libraries except PIL, to load and access the left-side image when saved as a PNG. I'm assuming a border of 15 pixels all around can be ignored.
from PIL import Image

def fill_rect(pixels, xp, yp, w, h):
    # fill the rectangle interior, then draw its outline
    for y in range(h):
        for x in range(w):
            pixels[xp+x, yp+y] = (255, 0, 0, 255)
    for y in range(h):
        pixels[xp, yp+y] = (255, 192, 0, 255)
        pixels[xp+w-1, yp+y] = (255, 192, 0, 255)
    for x in range(w):
        pixels[xp+x, yp] = (255, 192, 0, 255)
        pixels[xp+x, yp+h-1] = (255, 192, 0, 255)

def find_rect(pixels, x, y, maxx, maxy):
    # assume we're at the top left
    # get max horizontal span
    width = 0
    height = 1
    while x+width < maxx and pixels[x+width, y] == (0, 0, 0, 255):
        width += 1
    # now walk down as long as every pixel in the row span is still black
    while y+height < maxy:
        row_is_black = True
        for w in range(x, x+width):
            if pixels[w, y+height] != (0, 0, 0, 255):
                row_is_black = False
                break
        if not row_is_black:
            break
        height += 1
    # fill rectangle
    fill_rect(pixels, x, y, width, height)

image = Image.open('A.png')
pixels = image.load()
width, height = image.size
print(width, height)

for y in range(16, height-15):
    for x in range(16, width-15):
        if pixels[x, y] == (0, 0, 0, 255):
            find_rect(pixels, x, y, width, height)

image.show()
From the output
you can see that the detection algorithm can be improved: for example, the "obvious" two top-left rectangles are split up into 3. Similarly, the larger structure in the center also contains one rectangle more than absolutely needed.
Possible improvements are either to adjust the find_rect routine to locate a best fit¹, or to store the coordinates and use math (beyond my ken) to find which rectangles may be joined.
¹ A further idea on this: currently all found rectangles are immediately filled with the "found" color. You could try to detect obviously multiple rectangles, and then, after marking the first, the other rectangle(s) to check may be either black or red. Off the cuff, I'd say you'd need to try different scan orders (top-to-bottom or reverse, left-to-right or reverse) to actually find the minimally needed number of rectangles in any combination.
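If you do want the coordinates rather than just the visualization, a minimal extension could collect them during the scan. This assumes find_rect is changed to end with "return width, height", which is not in the sketch above:

# assumes find_rect now returns (width, height) for each rectangle it marks
rects = []
for y in range(16, height - 15):
    for x in range(16, width - 15):
        if pixels[x, y] == (0, 0, 0, 255):
            w, h = find_rect(pixels, x, y, width, height)
            rects.append((x, y, w, h))  # top-left corner plus size

print(len(rects), 'rectangles:', rects)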

Why does imshow display non-integer x and y values for the pixel position?

I am trying to read the x and y positions of the pixels in images. This is an example of what is shown when I run:
import matplotlib.pyplot as plt

plt.figure(1)
plt.imshow(img)
plt.title('image')
plt.show()
Why are they non-integer values? My best guess is that some scaling is occurring? I am running Python in the Spyder IDE.
Edit: Here is the image:
Edit 2: Upon closer inspection, pixel by pixel, the ticks appear to be at the .5 marks rather than running from 0 to 1 as well. And here is a screenshot of my axis settings... something is definitely funky here. Anybody have an idea why?
My guess is that the float values you see while hovering over the shown image with your mouse are just the mouse pointer position, which does not have to be an integer. It still lies within a pixel (a square, integer-aligned area) and thus gives you information about the channels at that pixel's position.
Another, more controlled way to get information about your pixels is given here:
Here is my working code snippet printing the pixel colours from an image:
from PIL import Image

im = Image.open("image.jpg")
x = 3
y = 4

pix = im.load()
print(pix[x, y])
Answer to edit 2: It makes sense that way. The pixel centers fall on the integer .0 values you expect the pixels to have. If the edges fell on the .0 values, a direct mapping between pixel coordinates and pixel values would not be possible within the visualization. Also, each pixel having a height and width of 1 is exactly what we would expect.
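You can see this default geometry directly with a short sketch (the tiny array here is made up for illustration): imshow places pixel centers on integer coordinates, so the axes limits run half a pixel beyond them.

import numpy as np
import matplotlib.pyplot as plt

img = np.arange(12).reshape(3, 4)  # a tiny 3x4 "image"
fig, ax = plt.subplots()
ax.imshow(img)

print(ax.get_xlim())  # (-0.5, 3.5): pixel centers at 0..3, edges at the .5 marks
print(ax.get_ylim())  # (2.5, -0.5): the y axis is inverted for images
plt.show()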

OpenCV - Creating an Ellipse shaped mask in python

I've extracted a circle-shaped mask from an image in OpenCV, using the following code:
H, W = img.shape
x, y = np.meshgrid(np.arange(W), np.arange(H))
d2 = (x - xc)**2 + (y - yc)**2
mask = d2 < r**2
And used the mask to find the average color outside the circle.
outside = np.ma.masked_where(mask, img)
average_color = outside.mean()
I want to extract an ellipse-shaped mask from an image using the same process as above, in OpenCV Python.
Thank You.
Drawing an Ellipse
To draw the ellipse, we need to pass several arguments. One argument is the center location (x,y). The next argument is the axes lengths (major axis length, minor axis length). angle is the angle of rotation of the ellipse in the anti-clockwise direction. startAngle and endAngle denote the start and end of the ellipse arc, measured in the clockwise direction from the major axis; i.e. giving values 0 and 360 draws the full ellipse. For more details, check the documentation of cv2.ellipse(). The example below draws a half ellipse at the center of the image.
cv2.ellipse(img, (256, 256), (100, 50), 0, 0, 180, 255, -1)
Taken from Miki's Link in the Question Comments
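To adapt the question's meshgrid approach to an ellipse, here is a sketch; the input file name and the semi-axes a and b are made-up example values, so adjust them to your data:

import numpy as np
import cv2

img = cv2.imread('image.png', cv2.IMREAD_GRAYSCALE)  # hypothetical input
H, W = img.shape
xc, yc, a, b = W // 2, H // 2, 100, 50  # center and semi-axes (example values)

x, y = np.meshgrid(np.arange(W), np.arange(H))
mask = ((x - xc) / a)**2 + ((y - yc) / b)**2 < 1  # axis-aligned ellipse

outside = np.ma.masked_where(mask, img)
average_color = outside.mean()

# the same mask via OpenCV's drawing routine (angle 0, full 0..360 arc):
mask2 = np.zeros((H, W), dtype=np.uint8)
cv2.ellipse(mask2, (xc, yc), (a, b), 0, 0, 360, 255, -1)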

Matplotlib Patch Size in Points

How can I draw shapes in matplotlib using point/inch dimensions?
I've gone through the patch/transform documentation, so I understand how to work in pixel, data, axes, or figure coordinates, but I cannot figure out how to dimension a rectangle in points/inches.
Ideally, I would like to position a rectangle in data coordinates but set its size in points, much like how line markers work.
Here is an example of the plot I am trying to create. I currently position the black and red boxes in (data, axes) coordinates. This works when the graph is a known size, but fails when it gets rescaled, as the boxes become smaller even though the text size stays constant.
I ended up figuring it out with help from this question: How do I offset lines in matplotlib by X points
There is no built-in way to specify patch dimensions in points, so you have to manually calculate a ratio of axes or data coordinates to inches/points. This ratio will of course vary with the figure/axes size.
This is accomplished by running a (1, 1) point through the axes transform and seeing where it ends up in pixel coordinates. Pixels can then be converted to inches or points via the figure DPI.
t = axes.transAxes.transform([(0, 0), (1, 1)])
t = axes.get_figure().get_dpi() / (t[1, 1] - t[0, 1]) / 72

# Height = 18 points
height = 18 * t
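As a usage sketch (the plot data and patch placement here are made up for illustration), the ratio can be combined with a blended transform so that x is in data coordinates while the height stays fixed at 18 points:

import matplotlib.pyplot as plt
import matplotlib.transforms as transforms
from matplotlib.patches import Rectangle

fig, axes = plt.subplots()
axes.plot([0, 10], [0, 5])

# axes-fraction-per-point ratio, derived as above
t = axes.transAxes.transform([(0, 0), (1, 1)])
t = axes.get_figure().get_dpi() / (t[1, 1] - t[0, 1]) / 72
height = 18 * t  # 18 points, expressed as a fraction of the axes height

# x/width in data coordinates, y/height in axes coordinates
trans = transforms.blended_transform_factory(axes.transData, axes.transAxes)
axes.add_patch(Rectangle((4, 0.5), width=2, height=height,
                         transform=trans, facecolor='red'))
plt.show()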
