How can I make a gtk.Image from a list of gtk.gdk.Color?
color_list = [gtk.gdk.Color(100*i,100*i,100*i) for i in range(10)]
image = gtk.Image()
draw_image_from_list(color_list, image)
I need that procedure; the image should be just one column of pixels:
|color1|
|color2|
|color3|
...
Here's a widget that almost does what you want: https://github.com/ptomato/LaserCam/blob/master/src/ColorMapIndicator.py
It's a gtk.DrawingArea instead of a gtk.Image; I decided against using an image because gtk.Image is meant for displaying image files, not for drawing on. It displays the colors in a horizontal bar 128 pixels wide and 10 high, but that should be easy enough to change. Finally, it works from an array of numbers rather than of gtk.gdk.Color, but it looks like you generate those from numbers anyway in your example.
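If you do want to stay with gtk.Image, one way is to build the one-column pixel data first and then wrap it in a pixbuf. Here is a sketch of the NumPy part; the gtk calls in the comments are from the old PyGTK 2 API and are an untested assumption:

```python
import numpy as np

# Stand-ins for gtk.gdk.Color: its channels are 16-bit (0-65535),
# while a pixbuf wants 8-bit channels, hence the >> 8 below.
color_list = [(100 * i, 100 * i, 100 * i) for i in range(10)]

# One column of pixels: shape (height, width=1, 3 RGB channels)
pixels = np.array([[[r >> 8, g >> 8, b >> 8]] for r, g, b in color_list],
                  dtype=np.uint8)

# In PyGTK this array could then (untested) be wrapped and displayed with:
#   pixbuf = gtk.gdk.pixbuf_new_from_data(pixels.tostring(),
#       gtk.gdk.COLORSPACE_RGB, False, 8, 1, len(color_list), 3)
#   image.set_from_pixbuf(pixbuf)
print(pixels.shape)  # (10, 1, 3)
```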
Related
I am trying to draw 16 images side by side on a blank, untitled window.
The blank window is 2456 x 1296 pixels, and each image is 614 x 324 pixels.
I want to know how to draw or display images together on one window.
Here is my desired output:
I think you can simply concatenate the 16 images into one big image with the cv2.hconcat and cv2.vconcat functions, like below:
new_img = cv2.hconcat([im1, im2])
new_img = cv2.vconcat([im1, im2])
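Since OpenCV images are NumPy arrays, the full 4x4 layout can be sketched with np.hstack/np.vstack, which behave like cv2.hconcat/cv2.vconcat on image arrays. The image sizes below are the ones from the question; the pixel contents are made up:

```python
import numpy as np

# 16 hypothetical images, each 614 wide x 324 tall, filled with dummy values
images = [np.full((324, 614, 3), i, dtype=np.uint8) for i in range(16)]

# Concatenate each group of 4 horizontally into a row, then stack the 4 rows
rows = [np.hstack(images[r * 4:(r + 1) * 4]) for r in range(4)]
grid = np.vstack(rows)

print(grid.shape)  # (1296, 2456, 3) - the 2456 x 1296 window from the question
```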
I need to split an RGBA image into an arbitrary number of boxes that are as equally sized as possible.
I have attempted to use numpy.array_split, but am unsure how to do so while preserving the RGBA channels.
I have looked at the following questions; none of them details how to split an image into n boxes. They cover splitting the image into boxes of predetermined pixel size, or splitting the image into some shape.
While it seems it would be simple math to get the number of boxes from the box size and the image size, I am unsure how to do so.
How to Split Image Into Multiple Pieces in Python
Cutting one image into multiple images using the Python Image Library
Divide image into rectangles information in Python
While attempting to determine the number of boxes from pixel box size, I used the formula
num_boxes = (img_size[0]*img_size[1])/ (box_size_x * box_size_y)
but that did not result in the image being split up properly.
To clarify, I would like to be able to input an image that is a numpy array of size (a,b,4) and a number of boxes and output the images in some form (np array preferred, but whatever works)
I appreciate any help, even if you aren't able to provide the full method, I would appreciate some direction.
I have tried
def split_image(image, n_boxes):
    return numpy.array_split(image, n_boxes)  # doesn't work with colors
def split_image(image, n_boxes):
    box_size = factor_int(n_boxes)
    M = image.shape[0] // box_size[0]
    N = image.shape[1] // box_size[1]
    return [image[x:x+M, y:y+N] for x in range(0, image.shape[0], M)
            for y in range(0, image.shape[1], N)]
factor_int returns two integer factors as close to a square as possible, from Factor an integer to something as close to a square as possible.
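The factor_int helper itself isn't shown in the post; a minimal sketch of what such a function might look like, assuming it returns the most square-like pair of integer factors:

```python
import math

def factor_int(n):
    # Walk down from sqrt(n) to the nearest divisor, giving the most
    # square-like pair of integer factors (a, n // a) with a <= n // a
    a = math.isqrt(n)
    while n % a:
        a -= 1
    return a, n // a

print(factor_int(12))  # (3, 4)
```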
I am still not sure if your inputs are actually the image and the dimensions of the boxes or the image and the number of boxes. Nor am I sure if your problem is deciding where to chop the image or knowing how to chop a 4-channel image, but maybe something in here will get you started.
I started with this RGBA image - the circles are transparent, not white:
#!/usr/bin/env python3
from PIL import Image
import numpy as np

# Open image and get dimensions
im = Image.open('start.png').convert('RGBA')

# Make Numpy array from image and get height and width
ni = np.array(im)
h, w = ni.shape[:2]
print(f'Height: {h}, width: {w}')

BOXES = 4
for i in range(BOXES):
    this = ni[:, i*w//BOXES:(i+1)*w//BOXES, :]
    Image.fromarray(this).save(f'box-{i}.png')
You can change BOXES but leaving it at 4 gets you these 4 output images:
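The same slicing idea extends to both axes. Here is a sketch splitting a hypothetical 8x8 RGBA array into a 2x2 grid of as-equal-as-possible boxes, keeping all four channels:

```python
import numpy as np

# Hypothetical 8x8 RGBA image
ni = np.arange(8 * 8 * 4, dtype=np.uint8).reshape(8, 8, 4)
h, w = ni.shape[:2]

ROWS, COLS = 2, 2
boxes = [ni[r * h // ROWS:(r + 1) * h // ROWS,
            c * w // COLS:(c + 1) * w // COLS]
         for r in range(ROWS) for c in range(COLS)]

print([b.shape for b in boxes])  # four (4, 4, 4) boxes, RGBA preserved
```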
I am trying to read the x and y positions of the pixels in images. This is an example of what is shown when I run:
plt.figure(1)
plt.imshow(img)
plt.title('image')
plt.show()
Why are they non-integer values? My best guess is that some scaling is occurring. I am running Python in the Spyder IDE.
Edit: Here is the image:
Edit 2: Upon closer inspection, inspecting pixel by pixel, they appear to be at the .5 marks rather than 0 to 1 as well. And here is a screenshot of my axis settings... something is definitely funky here. Anybody have an idea why?
My guess is that the float values you are worried about while hovering over the shown image are just the mouse-pointer position, which does not have to be an integer. It still lies within a pixel (an integer-aligned square area) and thus gives you information about the channels at that pixel's position.
Another way to get information about your pixels in a more controlled way is given here:
Here is my working code snippet printing the pixel colours from an image:
from PIL import Image

im = Image.open("image.jpg")
x = 3
y = 4
pix = im.load()
print(pix[x, y])
Answer to edit 2: It makes sense that way. The pixel centers fall on the integer (.0) values you expect the pixels to have. If the edges fell on the .0 values, a direct mapping between pixel coordinates and pixel values would not be possible within the visualization. Also, each pixel having a height and width of 1 is exactly what we would expect.
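To make the pixel-center convention concrete: for a hypothetical w x h image, imshow puts the center of pixel (row i, column j) at (x=j, y=i), so each pixel spans half a unit on either side and the default axis extent is:

```python
w, h = 4, 3  # a hypothetical 4x3 image

# (left, right, bottom, top) - edges sit on the .5 marks, centers on integers
extent = (-0.5, w - 0.5, h - 0.5, -0.5)
print(extent)  # (-0.5, 3.5, 2.5, -0.5)
```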
I am working on an image to find outer body points, but when I save the results they have different sizes, which creates a problem.
My original image is of a person. (1.8Mb)
I create a mask of the person to detect the outer body parts from the original image and save it. (400kb)
From the mask, I obtain the outer body points and plot them on the original image, but they are not aligned because of the difference in size between the original and the mask image.
To save images without axes and at full size so that they match the original image, I use the following method. After saving they look exactly the same, but due to the difference in size the points are not aligned.
plt.axis('off')
fig.axes.get_xaxis().set_visible(False)
fig.axes.get_yaxis().set_visible(False)
plt.savefig('kmask.jpg',bbox_inches='tight',pad_inches = 0,dpi=1500)
Result when I plot points on original image:
How to deal with such problems?
From what I can tell, you are saving the mask at a different size than the original image.
One way to solve this is to first figure out the resolution of the original image. If you don't know you can always check:
img = plt.imread('body_image.jpg')
print(img.shape)
# The first two numbers correspond to the height and width of the image in pixels
The problem is that matplotlib doesn't deal with image resolution the same way. Instead it requires the figure size (in inches) and the DPI (how many pixels per inch). One approach is to calculate which values you need and save the image accordingly.
image_height_in_pixels = height_in_inches * dpi
Then use these two numbers to save the mask.
f = plt.figure(figsize=(width_in_inches, height_in_inches))
plt.axis('off')
plt.savefig('kmask.jpg', bbox_inches='tight', pad_inches=0, dpi=dpi)
If this doesn't work, try saving the original image with matplotlib too. This will ensure that both of them have the same dimensions.
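A worked example of the arithmetic, with made-up numbers; any dpi works as long as figsize is scaled to match:

```python
# Suppose (hypothetically) the original image is 1296 pixels tall, 2456 wide
img_height_px, img_width_px = 1296, 2456
dpi = 200

height_in_inches = img_height_px / dpi  # 6.48
width_in_inches = img_width_px / dpi    # 12.28

# figsize takes (width, height) in inches; the saved size is figsize * dpi
saved = (round(width_in_inches * dpi), round(height_in_inches * dpi))
print(saved)  # (2456, 1296) - matches the original image
```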
I have a question to do with the Python module pygame.
I want to change the hue of an image for a sprite, like applying a filter to the image. I have seen many posts about changing specific pixels from one color to another, but that is not what I want to do. I want something similar to what simple photo-editing software such as paint.net can do: change the overall color of an image. I could of course change the hue in a photo editor, but that would mean making, loading, and managing many images, which would quickly become tedious. I am hoping there is some way to change the hue of an image in pygame.
You can do this with Python PIL. Take a look at this question and answer, and especially the original question and answer that they link to:
Changing the color of an image based on RGB value
'''
Original post https://www.reddit.com/r/pygame/comments/hprkpr/how_to_change_the_color_of_an_image_in_pygame/
'''
blue_rgb = (0, 0, 255)
red_rgb = (255, 0, 0)
img = pygame.image.load("sky_picture.png")  # loads the picture from the given path
var = pygame.PixelArray(img)
# var.replace([colour you want to replace], [colour you want])
var.replace(blue_rgb, red_rgb)  # replaces all blue in the picture with red
del var
"""
If the picture has some unchanged pixels left, it's probably because they are
not EXACTLY the given RGB: for example (254,0,0) is not (255,0,0) and won't be
changed. To fix this you would have to match approximate colours, or just edit
the main picture.
"""
The hue of each pixel in an image can be shifted using PixelArray to iterate over each pixel, Surface.unmap_rgb to get a Color object from each pixel, and Color.hsla to do the hue shift.
# Get the pixels
pixels = PixelArray(surface)

# Iterate over every pixel
for x in range(surface.get_width()):
    for y in range(surface.get_height()):
        # Turn the pixel data into an RGB tuple
        rgb = surface.unmap_rgb(pixels[x][y])
        # Get a new color object using the RGB tuple and convert to HSLA
        color = Color(*rgb)
        h, s, l, a = color.hsla
        # Add 120 to the hue (or however much you want) and wrap to under 360
        color.hsla = (int(h) + 120) % 360, int(s), int(l), int(a)
        # Assign directly to the pixel
        pixels[x][y] = color

# The old way of closing a PixelArray object
del pixels
If the surface is small, this could be run in real time. However, it would be better to run once at load time, especially if the surface is large.
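The per-pixel hue arithmetic can be checked without pygame using the standard-library colorsys module. A sketch (note that pygame's Color.hsla uses degrees and percentages, while colorsys works in 0-1 fractions):

```python
import colorsys

def shift_hue(rgb, degrees):
    # rgb is an 8-bit (r, g, b) tuple; returns the tuple with its hue rotated
    r, g, b = (c / 255.0 for c in rgb)
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    h = (h + degrees / 360.0) % 1.0
    r2, g2, b2 = colorsys.hls_to_rgb(h, l, s)
    return tuple(round(c * 255) for c in (r2, g2, b2))

print(shift_hue((255, 0, 0), 120))  # (0, 255, 0): red rotated 120 degrees is green
```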