I need to count how many pixels in an image have a particular RGB value, and I'm looking for the logic to do that.
Example:
I have an image called image.jpg with width = 100 and height = 100. How many of its pixels have the RGB value red = 128, green = 0, blue = 128? Can anyone help?
As already mentioned in this question, by using Pillow you can do the following:
from PIL import Image
im = Image.open('image.jpg', 'r')
If successful, this function returns an Image object. You can now use instance attributes to examine the file contents:
width, height = im.size
pixel_values = list(im.getdata())
print(im.format, im.size, im.mode)
The format attribute identifies the source of an image. If the image was not read from a file, it is set to None. The mode attribute defines the number and names of the bands in the image, and also the pixel type and depth. Common modes are “L” (luminance) for greyscale images, “RGB” for true color images, and “CMYK” for pre-press images.
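Building on the above, here is a minimal sketch of how the count asked about in the question could be obtained (assuming the image is first converted to RGB mode and that the target colour is (128, 0, 128) as in the example):
from PIL import Image
im = Image.open('image.jpg').convert('RGB')   # ensure the pixels are (R, G, B) tuples
target = (128, 0, 128)                        # the colour from the example
pixel_values = list(im.getdata())
count = pixel_values.count(target)            # number of pixels that exactly match the target
print(count)
getdata() returns the pixels row by row, so counting matching tuples in that list gives the total number of matching pixels in the whole image.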
I have this image, which has 4 channels. What I want to do is reduce the opacity of just the VW logo part; I want the rest of the image to remain transparent. (Using OpenCV/NumPy and Python.)
Here's what I tried :
logo = cv2.imread(FILE_PATH, cv2.IMREAD_UNCHANGED)
logo[:, :, 3] = 50
But this assigns the value 50 to the whole image, giving me this result. Notice how the remaining part of the image is no longer transparent (I want it to remain like the original one).
I thought of something like:
#This is my logic NOT ANY CODE, I want to do something like this:
if (any of) the other 3 channels are non-zero, make the alpha channel of that pixel = 50.
else, keep that pixel as it is. (Such a pixel would not be part of the logo.)
Is there any way of achieving this result using OpenCV/NumPy in Python? My last option would be to iterate through all the pixels and check the above conditions, but I feel that would be inefficient.
This answer is exactly what I DON'T want. I want only the logo part's (the coloured pixels') alpha channel to be set to 50.
Just fleshing out the comment from @yann-ziselman...
Here are the RGBA channels of your image side-by-side, with a red border so you can see the extent on Stack Overflow's background:
Here's how to do what you ask:
import cv2
# Load image including alpha channel
im = cv2.imread('logo.png', cv2.IMREAD_UNCHANGED)
# Set alpha channel to 50 wherever any of the BGR channels is non-zero
im[(im[..., :3]!=0).any(2), 3] = 50
# Save result
cv2.imwrite('result.png', im)
Result
Result split into RGBA channels side-by-side
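The boolean mask (im[..., :3] != 0).any(2) is True wherever at least one of the B, G, R values is non-zero, which is exactly the condition described in the question. For comparison, here is a more explicit sketch of the same idea using np.where (assuming the same logo.png file; the output name result_where.png is just an illustration):
import cv2
import numpy as np
# Load image including alpha channel (BGRA)
im = cv2.imread('logo.png', cv2.IMREAD_UNCHANGED)
# True for every pixel where any of the B, G, R channels is non-zero
mask = np.any(im[:, :, :3] != 0, axis=2)
# Where the mask is True set alpha to 50, otherwise keep the original alpha
im[:, :, 3] = np.where(mask, 50, im[:, :, 3]).astype(np.uint8)
cv2.imwrite('result_where.png', im)
np.where builds a full replacement channel, while the boolean-index assignment in the answer only writes the selected pixels; both produce the same output here.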
I am attempting to separate red, green and blue components of an image and display the resulting images in separate subplots.
To do this, for each colour, I have created an array of zeros the same shape as the original image (using the function np.zeros), and copied one of the image colours across using slicing.
However, the output just appears to be a red square, therefore I don't think the code is working how I would expect it to. Does anyone have any idea where I'm going wrong?
import matplotlib.pyplot as plt
import numpy as np

image = plt.imread('archway.jpg')
plt.imshow(image)
red_channel = image[:,:,0]
red_image = np.zeros(image.shape)
red_image[:,:,0] = red_channel
plt.imshow(red_image)
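One likely explanation (an assumption, not stated in the question) is that np.zeros creates a float64 array, and matplotlib clips float image data to the range [0, 1], so the copied 0-255 red values all display as fully saturated red. A minimal sketch of a fix based on that assumption, keeping the copy as 8-bit integers:
import matplotlib.pyplot as plt
import numpy as np

image = plt.imread('archway.jpg')                   # JPEG data loads as a uint8 array (0-255)

red_channel = image[:, :, 0]
red_image = np.zeros(image.shape, dtype=np.uint8)   # same integer dtype as the source image
red_image[:, :, 0] = red_channel

plt.imshow(red_image)                               # uint8 data is shown on the 0-255 scale
plt.show()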
At the upper left-hand side of the image the pixel coordinate is always going to be (0, 0). But the size of the picture changes depending on the camera's resolution and from image to image, so how do I find the width and height of an image in pixels using Python (OpenCV or NumPy)?
Not what you asked, I realise, however, Pillow will do this for you.
>>> from PIL import Image
>>> im = Image.open(r"C:\Users\Bill\Pictures\Tilly.jpg")
>>> im.size
(2448, 2448)
The module that you used to read the image should usually give you a way to retrieve the dimensions. I noticed you have the opencv tag. If you are using OpenCV, you can do this:
import cv2
img = cv2.imread('img.png')
# shape gives you (height, width, channels)
h, w = img.shape[0:2]
# get the bottom-right pixel
bottom_right = img[h-1, w-1]
I have a bunch of map files I've downloaded from Google Maps in PNG format that I want to combine into a single larger image. When I import one and look at the pixels, I see that each pixel is a single number in the range 0..256 instead of a list of three values. What's going on here?
I'm using
from PIL import Image
print Image.open('b1.png').load()[0,0]
and I get 153 instead of [r,g,b]
The reason for this result (the value 153 at [0,0]) is that the image mode is set to P (8-bit pixels, mapped to any other mode using a colour palette). If you want a different mode (e.g. RGB), you can convert the image before invoking the method load().
Here is an example of how to do this:
from PIL import Image
file_data = Image.open('b1.png')
file_data = file_data.convert('RGB') # conversion to RGB
data = file_data.load()
print data[0,0]
and the result of print is
(240, 237, 229)
For more information about Image Modes please visit the documentation.
Your image is in mode=P. It has its colors defined in a color palette.
>>> Image.open('b1.png')
<PIL.PngImagePlugin.PngImageFile image mode=P size=640x640 at 0x101856B48>
You want an RGB value. First convert to RGB:
>>> im = Image.open('b1.png')
>>> im = im.convert('RGB')
>>> im.getpixel((1,1))
(240, 237, 229)
From the docs: http://pillow.readthedocs.org/en/latest/handbook/concepts.html?highlight=mode
P (8-bit pixels, mapped to any other mode using a color palette)
...
RGB (3x8-bit pixels, true color)
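To see what such a palette index actually maps to, you can also inspect the palette directly. A small sketch, assuming the same b1.png file; getpalette() returns the palette as a flat list of RGB components:
from PIL import Image
im = Image.open('b1.png')            # P-mode image
idx = im.load()[0, 0]                # palette index at (0, 0), e.g. 153
palette = im.getpalette()            # flat list [r0, g0, b0, r1, g1, b1, ...]
r, g, b = palette[idx * 3: idx * 3 + 3]
print((r, g, b))                     # the RGB colour that the index maps to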
I am trying to create a negative of this black and white image. The opposite of white (255) is black (0) and vice versa. The opposite of a pixel with a value of 100 is 155.
I cannot use convert, invert, point, eval, lambda.
Here is my code, but it doesn't work yet. Could you please let me know which part I am getting wrong?
def bw_negative(filename):
    """
    This function creates a black and white negative of a bitmap image
    using the following parameters:
    filename is the name of the bitmap image
    """
    #Create the handle and then create a list of pixels.
    image = Image.open(filename)
    pixels = list(image.getdata())
    pixel[255] = 0
    pixel[0] = 255
    for i in range(255,0):
        for j in range(0,255):
            pixel[i] = j
            print pixels[i]
    image.putdata(pixels)
    image.save ('new.bmp')
Python is an interpreted language, which has the advantage that you can use an interactive interpreter session to try things out. Try opening the image file in an interactive session and look at the list you get from list(image.getdata()). Once you understand what that list contains, you can think about a way to invert the image.
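As a hint of where that leads, here is a minimal sketch of one possible approach (an illustration, not the only way; it sticks to getdata/putdata since convert, invert, point, eval and lambda are not allowed, and assumes a greyscale image whose pixels are single 0-255 values):
from PIL import Image

def bw_negative(filename):
    """Create a black and white negative of a greyscale bitmap image."""
    image = Image.open(filename)
    pixels = list(image.getdata())        # for a greyscale image: a flat list of 0-255 integers

    # The negative of a value v is 255 - v (255 -> 0, 0 -> 255, 100 -> 155)
    negative = [255 - value for value in pixels]

    image.putdata(negative)
    image.save('new.bmp')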