I loaded an image and tried to draw a red point on it:
img=mpimg.imread('./images/im00001.jpg')
red = [0,0,255]
# Change one pixel
img[ 0.,-26.10911452,0. ]=red
imgplot = plt.imshow(img)
but the following error occurred
ValueError: assignment destination is read-only
What you are doing actually modifies the image data itself.
To draw points on top of the image as it is shown, display the image in a matplotlib figure and then plot points on it. You can use the pyplot.plot() function to plot individual points, or the pyplot.scatter() function to plot an array of points.
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg

image = mpimg.imread("road.jpg")
pts = np.array([[330, 620], [950, 620], [692, 450], [587, 450]])
plt.imshow(image)
plt.plot(640, 570, "og", markersize=10)  # "og": shorthand for a green circle marker
plt.scatter(pts[:, 0], pts[:, 1], marker="x", color="red", s=200)
plt.show()
You're on the right track. You can change a pixel directly with NumPy indexing (note that the row index comes first):
img[y, x] = [B, G, R]
So, for example, to change the pixel at (50, 50) to red, you can do
img[50,50] = [0,0,255]
Here we change a single pixel to red (it's pretty tiny)
import cv2
import numpy as np
width = 100
height = 100
# Make empty black image of size (100,100)
img = np.zeros((height, width, 3), np.uint8)
red = [0,0,255]
# Change pixel (50,50) to red
img[50,50] = red
cv2.imshow('img', img)
cv2.waitKey(0)
An alternative method is to use cv2.circle() to draw your point in place.
The function header is
cv2.circle(image, (x, y), radius, (B,G,R), thickness)
Using this, we obtain the same result
cv2.circle(img, (50,50), 1, red, -1)
mpimg indicates that you are using matplotlib to read the image.
Here are a few points to remember when working with images using matplotlib:
matplotlib stores image data in NumPy arrays, so type(img) will return <class 'numpy.ndarray'> (see the quick check after this list). (Ref 1)
The shape of the ndarray represents the height, width and number of bands of the image.
Each inner list represents a pixel. For an RGB image the inner list has length 3; for an RGBA image it has length 4. For a PNG image each value is a float between 0.0 and 1.0, representing the R (red), G (green), B (blue) and A (alpha / transparency) components of the pixel.
For an RGB image, to set a pixel to red it should be assigned [1, 0, 0].
For an RGBA image, to set a pixel to red it should be assigned [1, 0, 0, 1].
In matplotlib, the Figure's size is fixed, and the contents are stretched/squeezed/interpolated to fit the figure. So, after saving the image the resolution may change. (Ref 2)
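As a quick check of these points, here is a minimal sketch using the same minion.png file as in the example further down:
import matplotlib.image as mpimg

img = mpimg.imread("minion.png")
print(type(img))    # <class 'numpy.ndarray'>
print(img.shape)    # (height, width, bands), e.g. bands == 4 for RGBA
print(img.dtype)    # float32 for PNG, with values between 0.0 and 1.0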
Following these points, I edited an RGBA image (PNG format) by putting a red dot in the center of it.
Original image:
Edited image:
code.py:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
# dpi for the saved figure: https://stackoverflow.com/a/34769840/3129414
dpi = 80
# Set red pixel value for RGB image
red = [1, 0, 0]
img = mpimg.imread("minion.png")
height, width, bands = img.shape
# Update red pixel value for RGBA image
if bands == 4:
    red = [1, 0, 0, 1]
# Update figure size based on image size
figsize = width / float(dpi), height / float(dpi)
# Create a figure of the right size with one axes that takes up the full figure
figure = plt.figure(figsize=figsize)
axes = figure.add_axes([0, 0, 1, 1])
# Hide spines, ticks, etc.
axes.axis('off')
# Draw a red dot at pixel (62,62) to (66, 66)
for i in range(62, 67):
    for j in range(62, 67):
        img[i][j] = red
# Draw the image
axes.imshow(img, interpolation='nearest')
figure.savefig("test.png", dpi=dpi, transparent=True)
References:
Ref 1: Matplotlib official tutorial
Ref 2: Stack Overflow answer on saving an image at the same resolution as the original image (https://stackoverflow.com/a/34769840/3129414)
I noticed that displaying an RGB masked image does not work as I would expect, i.e. the resulting image is not masked when displayed. Is this normal, and is there a workaround?
The example below shows the observed behaviour:
import numpy as np
from matplotlib import pyplot as plt
img=np.random.normal(0,10,(20,20)) # create a random image
mask=img>0
ma_img=np.ma.masked_where(mask, img) # create a masked image
img_rgb=np.random.uniform(0,1,(20,20,3)) # create a random RGB image
mask_rgb=np.broadcast_to(mask[...,np.newaxis],img_rgb.shape) # extend the mask so that it matches the RGB image shape
ma_img_rgb=np.ma.masked_where(mask_rgb, img_rgb) # create a masked RGB image
## Display:
fig, ax=plt.subplots(2,2)
ax[0,0].imshow(img)
ax[0,0].set_title('Image')
ax[0,1].imshow(ma_img)
ax[0,1].set_title('Masked Image')
ax[1,0].imshow(img_rgb)
ax[1,0].set_title('RGB Image')
ax[1,1].imshow(ma_img_rgb)
ax[1,1].set_title('Masked RGB Image')
Interestingly, when the mouse passes over masked pixels in the masked RGB image, the pixel value does not appear in the lower right corner of the figure window.
It seems the mask is ignored for RGB arrays; see also this question.
From the docs, the input to imshow() can be:
(M, N): an image with scalar data. The values are mapped to colors using normalization and a colormap. See parameters norm, cmap, vmin, vmax.
(M, N, 3): an image with RGB values (0-1 float or 0-255 int).
(M, N, 4): an image with RGBA values (0-1 float or 0-255 int), i.e. including transparency.
Therefore one option would be to use ~mask as alpha values for the rgb array:
img_rgb = np.random.uniform(0, 1, (20, 20, 3))
ma_img_rgb = np.concatenate([img_rgb, ~mask[:, :, np.newaxis]], axis=-1)
# ma_img_rgb = np.dstack([img_rgb, ~mask]) # Jan Kuiken
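Putting this together with the arrays from the question (a minimal sketch), the masked pixels then render as fully transparent:
import numpy as np
from matplotlib import pyplot as plt

img = np.random.normal(0, 10, (20, 20))
mask = img > 0
img_rgb = np.random.uniform(0, 1, (20, 20, 3))

# Append the inverted mask as an alpha channel: masked pixels get alpha 0
ma_img_rgb = np.dstack([img_rgb, (~mask).astype(float)])

fig, ax = plt.subplots()
ax.imshow(ma_img_rgb)
ax.set_title('Masked RGB Image (via alpha)')
plt.show()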
I am trying to create an occupancy grid map by exporting a higher-resolution image of the map to a very low resolution.
In its most basic form, an occupancy grid is a 2-dimensional binary array. The values stored in the array denote free (0) or occupied (1). Each value corresponds to a discrete location on the physical map (the following image depicts such an area).
As seen in the above image, each array location is a cell of the physical world.
I have a 5 meter x 5 meter world, which is discretized into cells of 5 cm x 5 cm. The world is thus 100 x 100 cells corresponding to the 5 m x 5 m physical world (see the sketch below).
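For concreteness, here is a minimal sketch of how such a grid can be represented (the names are illustrative):
import numpy as np

cell_size = 0.05                          # 5 cm per cell
world_size = 5.0                          # 5 m x 5 m world
n_cells = int(world_size / cell_size)     # 100 cells per side

# 0 = free, 1 = occupied
grid = np.zeros((n_cells, n_cells), dtype=np.uint8)

# Mark the cell containing the physical point (x, y) = (1.23 m, 3.07 m) as occupied
x, y = 1.23, 3.07
grid[int(y / cell_size), int(x / cell_size)] = 1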
The obstacles are randomly generated circular disks at locations (x, y) with a random radius r, as follows:
I need to convert this (above) image into an array of size 100x100. That means evaluating whether each cell is inside the region of an obstacle or free.
To speed things up, I have found the following workaround:
Create a matplotlib figure populated with the obstacles using figsize=(5,5), save the image with dpi=20 in BMP format, and finally import the BMP image as a numpy array. Alas, matplotlib does not support BMP. If I save the image as JPEG using plt.savefig('map.jpg', dpi=20, quality=100) or in other formats, then the cell boundaries become blurred and flow into other cells, as shown in this image:
So my question: how do I save a scaled-down image from matplotlib that preserves the cell sharpness of the image (akin to BMP)?
Nice hack. However, I would rather compute the boolean mask corresponding to your discretized circles explicitly. One simple way to get such a boolean map is by using the contains_points method of matplotlib artists such as a Circle patch.
#!/usr/bin/env python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Circle
world_width = 100 # x
world_height = 100 # y
minimum_radius = 1
maximum_radius = 10
total_circles = 5
# create circle patches
x = np.random.randint(0, world_width, size=total_circles)
y = np.random.randint(0, world_height, size=total_circles)
r = minimum_radius + (maximum_radius - minimum_radius) * np.random.rand(total_circles)
circles = [Circle((xx,yy), radius=rr) for xx, yy, rr in zip(x, y, r)]
# for each circle, create a boolean mask where each cell element is True
# if its center is within that circle and False otherwise
X, Y = np.meshgrid(np.arange(world_width) + 0.5, np.arange(world_height) + 0.5)
masks = np.zeros((total_circles, world_width, world_height), dtype=bool)
for ii, circle in enumerate(circles):
    masks[ii] = circle.contains_points(np.c_[X.ravel(), Y.ravel()]).reshape(world_width, world_height)
combined_mask = np.sum(masks, axis=0)
plt.imshow(combined_mask, cmap='gray_r')
plt.show()
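If you need the strict 0/1 occupancy array from the question rather than the overlap counts, you can threshold the summed mask (a short follow-up to the code above):
# 1 where at least one circle covers the cell centre, 0 otherwise
occupancy = (combined_mask > 0).astype(np.uint8)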
If I have understood correctly, I think this can be done quite simply with PIL, specifically with the Image.resize function. For example, does this do what you asked?
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image, ImageDraw
# Make a dummy image with some black circles on a white background
image = Image.new('RGBA', (1000, 1000), color="white")
draw = ImageDraw.Draw(image)
draw.ellipse((20, 20, 180, 180), fill = 'black', outline ='black')
draw.ellipse((500, 500, 600, 600), fill = 'black', outline ='black')
draw.ellipse((100, 800, 250, 950), fill = 'black', outline ='black')
draw.ellipse((750, 300, 800, 350), fill = 'black', outline ='black')
image.save('circles_full_res.png')
# Resize the image with nearest neighbour interpolation to preserve grid sharpness
image_lo = image.resize((100, 100), resample=Image.NEAREST)
image_lo.save("circles_low_res.png")
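To go from the low-resolution image to the 100x100 occupancy array, the pixel values can then be thresholded (a short follow-up sketch; the threshold of 128 is an arbitrary choice):
# Convert to grayscale and mark dark (circle) cells as occupied (1), light cells as free (0)
occupancy = (np.array(image_lo.convert('L')) < 128).astype(np.uint8)
print(occupancy.shape)   # (100, 100)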
I have an image which I define like the following:
img = np.zeros((474, 474))
I would like to draw true filled circles, not polygonal approximations, on this image at different centre coordinates and with a fixed radius. For example, I want to draw two circles with centres (100,200) and (150,372) and a radius of 2 pixels. What I am expecting is that after plotting the circles, the entries of the original image img change to ones wherever a circle is present.
I tried OpenCV's cv2.circle as well as the skimage.draw.circle module, but they generate a polygonal approximation of the circle.
I was also trying the following in matplotlib but I don't seem to understand how to plot it on my image img.
Any help would be appreciated.
from matplotlib.patches import Circle
import matplotlib.pyplot as plt
import numpy as np

img = np.zeros((474, 474))
fig = plt.figure()
ax = fig.add_subplot(111)
centers = [(100, 200), (150, 372)]
for i in range(len(centers)):
    Circle((centers[i][0], centers[i][1]), radius=2)  # patch is created but never added to the axes
You can draw circles directly in the img with OpenCV:
import cv2
import numpy as np
img = np.zeros([474, 474])
cv2.circle(img, (100,100), 5, 255, -1)
cv2.circle(img, (200,200), 30, 255, -1)
cv2.imshow('image', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
When drawing a circle in OpenCV there are several parameters you can choose; one of them defines the type of the circle boundary, and there are 4 types:
Filled
4-connected line
8-connected line
antialiased line
You can see the different effects in the following image (in the same order).
Example code (in C++):
circle(src,cv::Point(300,300), 10, Scalar(0,0,255), 1, FILLED);
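For a Python version of the same idea (a rough sketch; the coordinates are arbitrary), the boundary style is passed as the lineType argument, and a filled disk is obtained with a negative thickness (cv2.FILLED == -1):
import cv2
import numpy as np

img = np.zeros((200, 700, 3), np.uint8)

# Filled circle: a negative thickness fills the disk
cv2.circle(img, (100, 100), 50, (0, 0, 255), cv2.FILLED)
# Outline-only circles with the three boundary styles, via lineType
cv2.circle(img, (250, 100), 50, (0, 0, 255), 1, cv2.LINE_4)   # 4-connected
cv2.circle(img, (400, 100), 50, (0, 0, 255), 1, cv2.LINE_8)   # 8-connected
cv2.circle(img, (550, 100), 50, (0, 0, 255), 1, cv2.LINE_AA)  # antialiased

cv2.imshow('circle types', img)
cv2.waitKey(0)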
I am trying to slice an image into its RGB channels and I have a problem with plotting these images.
I obtain all images from a certain folder with this function:
import glob
from scipy import misc

def get_images(path, image_type):
    image_list = []
    for filename in glob.glob(path + '/*' + image_type):
        im = misc.imread(filename, mode='RGB')
        image_list.append(im)
    return image_list
This function creates a 4D array of shape (30, 1536, 2048, 3), and I am quite sure that the first value represents the number of images, the second and third are the image dimensions, and the fourth holds the RGB values.
After I obtained all the images, I stored them as a numpy array
image_list = get_images(r'C:\HDR\images', '.jpg')
temp = np.array(image_list)
After that I tried to use simple slicing in order to extract specific color channels from these images:
red_images = temp[:,:,:,0]
green_images = temp[:,:,:,1]
blue_images = temp[:,:,:,2]
When I print out the values, everything seems to be fine.
print(temp[11,125,311,:])
print(red_images[11,125,311])
print(green_images[11,125,311])
print(blue_images[11,125,311])
And I get the following:
[105 97 76]
105
97
76
So far everything seems to be fine, but the problem arises when I try to display the images. I used matplotlib.pyplot.imshow to display them, and I get an image like this:
Which is reasonable, because I chose red:
plt.imshow(temp[29,:,:,0])
But when I change it to a different color channel, like this:
plt.imshow(temp[29,:,:,2])
I get the image like this:
My question is simple. What is happening here?
I think matplotlib is just treating each channel (i.e., intensities) as a "heat map".
Pass a color map to the imshow function like so to tell it how you want it to color your image:
plt.imshow(image_slice, cmap=plt.cm.gray)
Edit
@mrGreenBrown, in response to your comment: I'm assuming that the misc.imread function you used is from scipy, i.e., scipy.misc.imread. That function is no different from that of PIL. See the scipy.misc.imread docs. Thanks to @dai for pointing this out.
A single channel of any image is just intensities. It does not have color. For an image expressed in RGB color space, color is obtained by "mixing" amounts (given by the respective channel's intensities) of red, green, and blue. A single channel cannot express color.
What happened is that Matplotlib by default displays the intensities as a heatmap, hence the "color".
When you save a single channel as an image in a format such as JPEG, the function merely duplicates the single channel 3 times so that the R, G, and B channels all contain the same intensities. This is the typical behavior unless you save it in a format such as PGM, which can handle a single-channel grayscale image. When you then visualize this image, which has the same channel duplicated 3 times, it appears grey because the contributions from red, green, and blue are the same at each pixel.
Passing plt.cm.gray to the cmap argument simply tells imshow not to "color-code" the intensities. So brighter pixels (pixels approaching white) mean there is "more" of that "color" at those locations.
If you want color, you have to make a copy of the 3-channel image and set the other channels to 0.
For example, to display the red channel as "red":
# Assuming `image` is a numpy array with 3 channels in RGB order
I_red = image.copy() # Duplicate image
I_red[:, :, 1] = 0 # Zero out contribution from green
I_red[:, :, 2] = 0 # Zero out contribution from blue
A related question from stackoverflow here.
So, you want to show the different RGB channels of an image in different colors...
import matplotlib.pyplot as plt
from matplotlib.cbook import get_sample_data
image = plt.imread(get_sample_data('grace_hopper.jpg'))
titles = ['Grace Hopper', 'Red channel', 'Green channel', 'Blue channel']
cmaps = [None, plt.cm.Reds_r, plt.cm.Greens_r, plt.cm.Blues_r]
fig, axes = plt.subplots(1, 4, figsize=(13,3))
objs = zip(axes, (image, *image.transpose(2,0,1)), titles, cmaps)
for ax, channel, title, cmap in objs:
    ax.imshow(channel, cmap=cmap)
    ax.set_title(title)
    ax.set_xticks(())
    ax.set_yticks(())
plt.savefig('RGB1.png')
Note that when you are in a dark room with a red pen on a dark table, if you turn on a red lamp you perceive the pen as almost white...
Another possibility is to create a different image for each color, with the pixel values for the other colors set to zero. Starting from where we left off, we define a function that extracts a channel into an otherwise black image
...
from numpy import array, zeros_like
def channel(image, color):
    if color not in (0, 1, 2):
        return image
    c = image[..., color]
    z = zeros_like(c)
    return array([(c, z, z), (z, c, z), (z, z, c)][color]).transpose(1, 2, 0)
and finally use it...
colors = range(-1, 3)
fig, axes = plt.subplots(1, 4, figsize=(13,3))
objs = zip(axes, titles, colors)
for ax, title, color in objs:
    ax.imshow(channel(image, color))
    ax.set_title(title)
    ax.set_xticks(())
    ax.set_yticks(())
plt.savefig('RGB2.png')
I can't tell which version I like better; perhaps the first one looks more realistic to me (maybe less artificial), but it's quite subjective...
I am a complete novice to image processing, and I am guessing this is quite easy to do, but I just don't know the terminology.
Basically, I have a black-and-white image and I simply want to apply a colored overlay to it, so that the image is overlaid with blue, green, red and yellow, like the images shown below (which I actually can't show because I don't have enough reputation to do so - grrrrrr). Imagine I have a physical image and a green/red/blue/yellow overlay, which I place on top of the image.
Ideally I would like to do this using Python PIL, but I would be just as happy to do it using ImageMagick; either way, I need to be able to script the process, as I have 100 or so images that I need to run it on.
EDIT: As mentioned by Matt in the comments, this functionality is now available in skimage.color.label2rgb.
In the latest development version, we've also introduced a saturation parameter, which allows you to add overlays to color images.
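For reference, here is a minimal sketch of the label2rgb route (the label regions below are only illustrative):
import numpy as np
from skimage import data, color

img = data.camera()                        # grey-level example image
labels = np.zeros(img.shape, dtype=int)    # 0 is treated as background below
labels[30:140, 30:140] = 1
labels[170:270, 40:120] = 2

overlay = color.label2rgb(labels, image=img, bg_label=0, alpha=0.3)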
Here's a code snippet that shows how to use scikit-image to overlay colors on a grey-level image. The idea is to convert both images to the HSV color space, and then to replace the hue and saturation values of the grey-level image with those of the color mask.
from skimage import data, color, io, img_as_float
import numpy as np
import matplotlib.pyplot as plt
alpha = 0.6
img = img_as_float(data.camera())
rows, cols = img.shape
# Construct a colour image to superimpose
color_mask = np.zeros((rows, cols, 3))
color_mask[30:140, 30:140] = [1, 0, 0] # Red block
color_mask[170:270, 40:120] = [0, 1, 0] # Green block
color_mask[200:350, 200:350] = [0, 0, 1] # Blue block
# Construct RGB version of grey-level image
img_color = np.dstack((img, img, img))
# Convert the input image and color mask to Hue Saturation Value (HSV)
# colorspace
img_hsv = color.rgb2hsv(img_color)
color_mask_hsv = color.rgb2hsv(color_mask)
# Replace the hue and saturation of the original image
# with that of the color mask
img_hsv[..., 0] = color_mask_hsv[..., 0]
img_hsv[..., 1] = color_mask_hsv[..., 1] * alpha
img_masked = color.hsv2rgb(img_hsv)
# Display the output
f, (ax0, ax1, ax2) = plt.subplots(1, 3,
subplot_kw={'xticks': [], 'yticks': []})
ax0.imshow(img, cmap=plt.cm.gray)
ax1.imshow(color_mask)
ax2.imshow(img_masked)
plt.show()
Here's the output:
I ended up finding an answer to this using PIL: basically, create a new image with a block colour, then composite the original image with this new image, using a mask that defines a transparent alpha layer. Code below (adapted to convert every image in a folder called data, outputting into a folder called output):
from PIL import Image
import os
dataFiles = os.listdir('data/')
for filename in dataFiles:
    # strip off the file extension
    name = os.path.splitext(filename)[0]
    bw = Image.open('data/%s' % (filename,))
    # create the coloured overlays
    red = Image.new('RGB', bw.size, (255, 0, 0))
    green = Image.new('RGB', bw.size, (0, 255, 0))
    blue = Image.new('RGB', bw.size, (0, 0, 255))
    yellow = Image.new('RGB', bw.size, (255, 255, 0))
    # create a mask using RGBA to define an alpha channel to make the overlay transparent
    mask = Image.new('RGBA', bw.size, (0, 0, 0, 123))
    Image.composite(bw, red, mask).convert('RGB').save('output/%sr.bmp' % (name,))
    Image.composite(bw, green, mask).convert('RGB').save('output/%sg.bmp' % (name,))
    Image.composite(bw, blue, mask).convert('RGB').save('output/%sb.bmp' % (name,))
    Image.composite(bw, yellow, mask).convert('RGB').save('output/%sy.bmp' % (name,))
Can't post the output images unfortunately due to lack of rep.
See my gist https://gist.github.com/Puriney/8f89b43d96ddcaf0f560150d2ff8297e
The core function, using OpenCV, is shown below.
import cv2
import numpy as np

def mask_color_img(img, mask, color=[0, 255, 255], alpha=0.3):
    '''
    img: cv2 image
    mask: boolean mask (or indices from np.where)
    color: BGR triplet [_, _, _]. Default: [0, 255, 255] is yellow.
    alpha: float in [0, 1].
    Ref: http://www.pyimagesearch.com/2016/03/07/transparent-overlays-with-opencv/
    '''
    out = img.copy()
    img_layer = img.copy()
    img_layer[mask] = color
    out = cv2.addWeighted(img_layer, alpha, out, 1 - alpha, 0, out)
    return out
Adding a colored, transparent overlay works on either an RGB or a gray image:
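For example (the file names and the mask rule here are only illustrative):
import cv2
import numpy as np

img = cv2.imread('input.jpg')              # hypothetical input image
mask = img.mean(axis=2) > 200              # example mask: bright regions
out = mask_color_img(img, mask, color=[0, 255, 255], alpha=0.3)
cv2.imwrite('overlay.jpg', out)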