I would like to know how to change more than one pixel value in a mode P image with PIL.
In my case I have three pixel values: 0, 1, 2. I would like to map them to 255, 76 and 124 respectively.
I tried:
Image.open(my_image).point(lambda p: 255 if p==0 else (76 if p==1 else 124))
When I run the code above, I get an image with all black pixels. Why? Should I use a different function rather than point()?
Update:
.getpalette() returns {0, 255}
If mapping every pixel that has the same value to the same output value is fine for you, then Image.point() is definitely the right way to go.
Why you're getting a black image depends on the color values defined in the palette; you can check them by calling Image.getpalette(). On a mode "P" image, point() remaps the palette indices, not the colors they stand for. If your palette only defines 3 colors, anything beyond index 8 in the palette data is 0, so the remapped indices 255, 76 and 124 all land on entries that are the default black. Keep the mapped values within the range your palette covers, or change the palette itself.
If you want to use color values other than those defined in your palette, consider converting to another color mode before calling Image.point():
Image.open(my_image).convert('L').point(lambda p: 255 if p==0 else 76 if p==1 else 124)
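Another option (just a sketch, not part of the original answer) is to stay in mode "P" and rewrite the palette instead of the pixel values, so that indices 0, 1 and 2 display as the grey levels 255, 76 and 124; my_image is the same placeholder path used above:
from PIL import Image

img = Image.open(my_image)              # assumed to be a mode "P" image
palette = [0] * 768                     # 256 RGB triplets, all black to start with
for index, grey in zip((0, 1, 2), (255, 76, 124)):
    palette[index * 3:index * 3 + 3] = [grey, grey, grey]
img.putpalette(palette)                 # pixel values stay 0/1/2, only their colors change
img.save('remapped.png')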
Related
I have an image, and using steganography I want to save data in the border pixels only.
In other words, I want to save data only in the least significant bits (LSB) of the border pixels of an image.
Is there any way to get the border pixels so I can store data (at most 15 characters of text) in them?
Please help me out.
OBTAINING BORDER PIXELS:
Masking operations are one of many ways to obtain the border pixels of an image. The code would be as follows:
import cv2
import numpy as np

a = cv2.imread('cal1.jpg')
bw = 20                                     # width of border required
mask = np.ones(a.shape[:2], dtype="uint8")  # start with an all-ones mask
cv2.rectangle(mask, (bw, bw), (a.shape[1] - bw, a.shape[0] - bw), 0, -1)  # zero out the interior
output = cv2.bitwise_and(a, a, mask=mask)   # keep only the border pixels
cv2.imshow('out', output)
cv2.waitKey(5000)
After I get an array of ones with the same height and width as the input image, I use the cv2.rectangle function to draw a rectangle of zeros on it. The first argument is the image you want to draw on, the second is the start (x, y) point, and the third is the end (x, y) point. The fourth argument is the color, and the -1 is the thickness of the rectangle (-1 fills it). You can find the documentation for the function here.
Now that we have our mask, you can use the cv2.bitwise_and function (documentation) to AND the pixels. Basically, pixels that are ANDed with '1' pixels in the mask retain their values, while pixels that are ANDed with '0' pixels in the mask are set to 0, so only the border region of the input image survives (output and input images omitted).
You have the border pixels now!
Using LSB planes to store your info is not a good idea, and it makes sense when you think about it: even a simple lossy compression would wipe out most of your hidden data. Saving your image as JPEG would result in the info being lost or severely corrupted. If you still want to try LSB, look into bit-plane slicing. Through bit-plane slicing you obtain the individual bit planes (from MSB to LSB) of the image (illustration from researchgate.net).
I have done it in Matlab and am not quite sure about doing it in Python. In Matlab, the function bitget(image, 1) returns the LSB of the image. I found a question on bit-plane slicing using Python here; though unanswered, you might want to look into the posted code.
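For Python, here is a hedged sketch of bit-plane slicing with NumPy (reusing the 'cal1.jpg' file name from the masking example above; adapt it to your own image):
import cv2
import numpy as np

img = cv2.imread('cal1.jpg', cv2.IMREAD_GRAYSCALE)
# shift the bit of interest down to position 0 and keep it; planes[0] is the LSB plane
planes = [((img >> bit) & 1) * 255 for bit in range(8)]
cv2.imshow('LSB plane', planes[0].astype(np.uint8))
cv2.waitKey(5000)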
To access the border pixels and enter data into them:
The shape of an image is accessed with t = img.shape; it returns a tuple of (rows, columns, channels). The component index picks the colour channel; OpenCV stores pixels in BGR order, so 0, 1 and 2 are blue, green and red respectively. int(r[0]), int(r[1]), ... are the values being stored into the pixels (r is whatever sequence holds your data).
import cv2

img = cv2.imread('xyz.png')
t = img.shape                        # (rows, columns, channels)
print(t)
component = 2                        # 2 = red channel (OpenCV stores pixels as BGR)
r = [65, 66, 67, 68]                 # example values to store; replace with your own data
# write one value into each of the four corner pixels
img.itemset((0, 0, component), int(r[0]))
img.itemset((0, t[1] - 1, component), int(r[1]))
img.itemset((t[0] - 1, 0, component), int(r[2]))
img.itemset((t[0] - 1, t[1] - 1, component), int(r[3]))
# read the values back to verify
print(img.item(0, 0, component))
print(img.item(0, t[1] - 1, component))
print(img.item(t[0] - 1, 0, component))
print(img.item(t[0] - 1, t[1] - 1, component))
cv2.imwrite('output.png', img)
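If you then want to push a short message into the LSBs of a border, here is a rough sketch (the file names follow the snippet above, and the embedding scheme itself is my assumption, not part of the original answer):
import cv2
import numpy as np

img = cv2.imread('xyz.png')
message = 'hello'                                               # up to 15 characters per the question
bits = [(ord(c) >> i) & 1 for c in message for i in range(8)]   # 8 bits per character, LSB first

row = img[0, :, 0].copy()                                       # blue channel of the top border row
assert len(bits) <= row.size, "border row too short for this message"
row[:len(bits)] = (row[:len(bits)] & 0xFE) | np.array(bits, dtype=np.uint8)
img[0, :, 0] = row
cv2.imwrite('output.png', img)                                  # PNG is lossless, so the LSBs survive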
I'm using opencv and numpy to process some satellite images.
I need to differentiate what is "land" from what is "green" (crops and vegetation).
My question is: How can I decide which values are close to green in the RGB format?
What I'm doing so far is:
import cv2
import numpy as np

img = cv2.imread('image1.jpg', 1)
mat = np.asarray(img)
for elemento in mat:
    for pixel in elemento:
        if pixel[1] > 200:      # if the green channel is higher than 200, change the pixel to black
            pixel[0] = 0
            pixel[1] = 0
            pixel[2] = 0
        else:                   # if the level of G is lower than 200, change it to white
            pixel[0] = 255
            pixel[1] = 255
            pixel[2] = 255
This code works, but isn't really useful. I need a more precise way to decide which RGB values correspond to green and which ones do not.
How can I achieve this?
You could use the inRange function to find colors in a specific range, because you will not be able to identify green in satellite images with just one or a few pixel values. inRange lets you set a range of colors (here you would set the range of green colors) and returns a mask telling you which pixels of the original image fall inside that range. I've answered a similar question HERE with examples and code (although it is not Python, you should understand the method and easily implement it in your OpenCV project); you should find everything you need there.
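For reference, a hedged Python sketch of that idea (the HSV bounds below are guesses you would tune for your own imagery, and 'image1.jpg' is the file name from the question):
import cv2
import numpy as np

img = cv2.imread('image1.jpg')
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)            # hue is easier to threshold than raw BGR
lower_green = np.array([35, 40, 40])                  # OpenCV hue runs 0-179, so 35-80 is roughly 70-160 degrees
upper_green = np.array([80, 255, 255])
mask = cv2.inRange(hsv, lower_green, upper_green)     # white where "green", black elsewhere
cv2.imwrite('green_mask.png', mask)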
I know this is very similar to a previous question that I had posted, but I cannot figure out how to modify it to suit this problem.
I have an array of values between 0 and 1. I want to convert these values to an RGB color array ranging from red (0) to blue (1). The solution suggested in the previous question converted the values to HSV tuples first, and then used colorsys.hsv_to_rgb() to convert to RGB values.
This was fine when I wanted to generate colors between red and green: I could interpolate the hue value linearly. However, when the extreme colors are red and blue, the intermediate values take on a green color, because the green hue lies between red and blue. Is there a way I can avoid this and get colors only in the vicinity of red and blue?
I know I can linearly interpolate RGB values directly, but that does not give a good color spectrum. So please tell me a solution using HSV space.
Scale your 0-1 input to the range 0 to -120, take the result modulo 360, and use that as your hue (or scale to 0 to -1/3 and take it modulo 1.0 if your library expects hue in the 0-1 range, as colorsys does).
Hue is a cyclic value: if you add the total hue range to any color's hue (modulo its maximum value, so that it stays in range), you get the exact same color. So instead of increasing the hue from red up to blue through green, decrease it from red down to blue the other way round. HSV is a cylindrical color model.
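A minimal sketch of that with colorsys, assuming inputs in [0, 1] and full saturation and value (the function name is just an example):
import colorsys

def value_to_rgb(t):
    # run the hue backwards from red (0) to blue (240 degrees), passing through magenta
    hue = (-t / 3.0) % 1.0                 # colorsys expects hue in the 0-1 range
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)
    return int(r * 255), int(g * 255), int(b * 255)

print(value_to_rgb(0.0))   # (255, 0, 0)   red
print(value_to_rgb(0.5))   # (255, 0, 255) magenta midpoint
print(value_to_rgb(1.0))   # (0, 0, 255)   blue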
I'm trying to draw shapes with colors, with this extremely simple piece of code:
from PIL import Image, ImageDraw
img = Image.new( "RGB", (256,256))
draw = ImageDraw.Draw(img)
draw.rectangle( [(0,0),(256,128)], fill="#FF0000" )
draw.rectangle( [(0,128),(256,256)], fill=0xFF0000 )
# img.show()
img.save("test.png")
My first rectangle will be RED, but the second is BLUE. I know the values are not the same: one is a string, the other one is an integer, but surely the program should interpret them the same way, shouldn't it? Or am I overlooking some straightforward thing?
I'm drawing gradients with integers and found this strange behaviour.
Thanks for any guidance.
Well, since you did not get the same colors, obviously the library is not interpreting them the same way.
In the first form you have a color as it is specified in HTML and CSS: a string. You could have used the words "red" and "blue" just as you used "#ff0000".
In the second form you are actually passing an integer number to represent the color. Since it shows blue instead of red, the byte order is reversed when you input colors in this format; just try 0x0000FF instead (i.e. it takes BGR instead of RGB).
If you are using numbers rather than strings, note that you can also pass a 3-tuple with the RGB values, like in draw.rectangle( [(0,128),(256,256)], fill=(255, 0, 0) ) (doing it this way also uses RGB).
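A quick sketch to check this (not from the original answer): three ways of asking for red that should all match on an "RGB" image if the byte-order explanation above is right:
from PIL import Image, ImageDraw

img = Image.new("RGB", (96, 96))
draw = ImageDraw.Draw(img)
draw.rectangle([(0, 0), (96, 32)], fill="#FF0000")      # hex string, RGB order
draw.rectangle([(0, 32), (96, 64)], fill=(255, 0, 0))   # explicit RGB tuple
draw.rectangle([(0, 64), (96, 96)], fill=0x0000FF)      # packed integer, lowest byte is red
print(img.getpixel((10, 10)), img.getpixel((10, 40)), img.getpixel((10, 70)))
# should print (255, 0, 0) three times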
"reality is what you get away with"
It does not support it. Here is what is supported:
Colour Names
In PIL 1.1.4 and later, you can also use string constants when drawing
in "RGB" images. PIL supports the following string formats:
Hexadecimal color specifiers, given as "#rgb" or "#rrggbb". For
example, "#ff0000" specifies pure red.
RGB functions, given as "rgb(red, green, blue)" where the colour
values are integers in the range 0 to 255. Alternatively, the color
values can be given as three percentages (0% to 100%). For example,
"rgb(255,0,0)" and "rgb(100%,0%,0%)" both specify pure red.
Hue-Saturation-Lightness (HSL) functions, given as "hsl(hue,
saturation%, lightness%)" where hue is the colour given as an angle
between 0 and 360 (red=0, green=120, blue=240), saturation is a value
between 0% and 100% (gray=0%, full color=100%), and lightness is a
value between 0% and 100% (black=0%, normal=50%, white=100%). For
example, "hsl(0,100%,50%)" is pure red.
Common HTML colour names. The ImageDraw provides some 140 standard
colour names, based on the colors supported by the X Window system and
most web browsers. Colour names are case insensitive, and may contain
whitespace. For example, "red" and "Red" both specify pure red.
The ImageDraw Module
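For illustration, a small sketch exercising those string formats (all four bands should come out as the same pure red):
from PIL import Image, ImageDraw

img = Image.new("RGB", (128, 128))
draw = ImageDraw.Draw(img)
draw.rectangle([(0, 0), (128, 32)], fill="#ff0000")           # hexadecimal specifier
draw.rectangle([(0, 32), (128, 64)], fill="rgb(255,0,0)")     # RGB function
draw.rectangle([(0, 64), (128, 96)], fill="hsl(0,100%,50%)")  # HSL function
draw.rectangle([(0, 96), (128, 128)], fill="red")             # HTML colour name
img.save("reds.png")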
Hi, I'm trying to build a simple color-identifying program. I have taken an image (yellow & pink) and converted it to the HSV color space, then used a threshold to identify the yellow color region. I'm getting a black image as output. I want the yellow region to be filled with white and the rest with black.
IplImage *imgRead = cvLoadImage("yellow.jpeg", CV_LOAD_IMAGE_COLOR);
if (!imgRead) {
    fprintf(stderr, "Error in reading image\n");
    exit(1);
}
IplImage *imgHsv = cvCreateImage(cvGetSize(imgRead), 8, 3);
cvCvtColor(imgRead, imgHsv, CV_BGR2HSV);
IplImage *imgThreshold = cvCreateImage(cvGetSize(imgRead), 8, 1);
cvInRangeS(imgHsv, cvScalar(25, 80, 80, 80), cvScalar(34, 255, 255, 255), imgThreshold);
cvShowImage("image", imgThreshold);
cvWaitKey(0);
In the code above I calculated the HSV hue for yellow as 30 (in GIMP the hue value for yellow is 60). In cvInRangeS, apart from the hue value, I'm not sure how to specify the other values for cvScalar.
What values do I need to put? Am I missing anything?
I think the problem you are having is due to the scaling of the HSV data to fit in 8 bits. Normally, as I'm sure you noticed from using GIMP, the HSV scales are as follows:
H -> [0, 360]
S -> [0, 100]
V -> [0, 100]
But, OpenCV remaps these values as follows:
(H / 2) -> [0, 180] (so that the H values can be stored in 8-bits)
S -> [0, 255]
V -> [0, 255]
This is why your calculated Hue value is 30 instead of 60. So, to filter out all colors except for yellow your cvInRangeS call would look something like this:
cvInRangeS(imgHsv, cvScalar(25, 245, 245, 0), cvScalar(35, 255, 255, 255), imgThreshold);
The fourth channel is unused for HSV. This call gives you a tolerance of about 10 counts per dimension for noise in your color-detector threshold.
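If it helps, here is a small Python sketch (my own illustration, not part of the original answer) of converting GIMP-style HSV values into OpenCV's 8-bit ranges:
def gimp_to_opencv_hsv(h, s, v):
    # GIMP: H in [0, 360], S and V in [0, 100]
    # OpenCV (8-bit): H in [0, 180], S and V in [0, 255]
    return int(h / 2), int(s * 255 / 100), int(v * 255 / 100)

print(gimp_to_opencv_hsv(60, 100, 100))   # yellow -> (30, 255, 255)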
As mentioned by SSteve, your threshold should work, but you may need to expand the threshold boundaries to capture the yellowish color in your image.
Hope that helps!
I ran your code and it worked fine. Perhaps the yellow in your image isn't as yellow as you think.
Edit: The other potential difference is that I'm using OpenCV 2.3. Which version are you using?
Ok, one more edit: Have you tried looking at your yellow values? That would give you a definitive answer as to what values you should use in cvInRangeS. Add these two lines after the call to cvCvtColor:
uchar* ptr = (uchar*)(imgHsv->imageData);
printf("H: %d, S:%d, V:%d\n", ptr[0], ptr[1], ptr[2]);
For my image, I got:
H: 30, S:109, V:255
That's why your code worked for me.