My goal is to find the location of specific images on other images, using python. Take this example:
I want to find the location of the walnut in the image. The image of the walnut is known, so I think there is no need for any kind of advanced pattern matching or machine learning to tell if something is a walnut or not.
How would I go about finding the walnut in the image? Would a strategy along these lines work:
read the images with PIL
transform them into Numpy arrays
use Scipy's image filters (what filters?)
Thanks!
I would go with pure PIL.
Read in image and walnut.
Take any pixel of the walnut.
Find all pixels of the image with the same color.
Check to see if the surrounding pixels coincide with the surrounding pixels of the walnut (and break as soon as you find one mismatch to minimize time).
Now, if the picture uses lossy compression (like JFIF), the walnut in the image won't be exactly the same as the walnut pattern. In this case, you could define some threshold for comparison.
EDIT: I used the following code (by converting white to alpha, the colors of the original walnut slightly changed):
#! /usr/bin/python2.7

from PIL import Image, ImageDraw

im = Image.open('zGjE6.png')
isize = im.size
walnut = Image.open('walnut.png')
wsize = walnut.size
x0, y0 = wsize[0] // 2, wsize[1] // 2
pixel = walnut.getpixel((x0, y0))[:-1]  # drop the alpha channel

def diff(a, b):
    # squared distance between two RGB tuples
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

best = (100000, 0, 0)  # (distance, x, y)
for x in range(isize[0]):
    for y in range(isize[1]):
        ipixel = im.getpixel((x, y))
        d = diff(ipixel, pixel)
        if d < best[0]:
            best = (d, x, y)

draw = ImageDraw.Draw(im)
x, y = best[1:]
draw.rectangle((x - x0, y - y0, x + x0, y + y0), outline='red')
im.save('out.png')
Basically, this takes one pixel of the walnut and looks for the best match. It is a first step with not too bad an output:
What you still want to do is:
Increase the sample space (not only using one pixel, but maybe 10 or 20).
Not only check the best match, but the best 10 matches, for instance.
EDIT 2: Some improvements
#! /usr/bin/python2.7

import random
import sys
from PIL import Image, ImageDraw

im, pattern, samples = sys.argv[1:]
samples = int(samples)
im = Image.open(im)
walnut = Image.open(pattern)

# sample random pixels from the walnut pattern
pixels = []
while len(pixels) < samples:
    x = random.randint(0, walnut.size[0] - 1)
    y = random.randint(0, walnut.size[1] - 1)
    pixel = walnut.getpixel((x, y))
    if pixel[-1] > 200:  # keep only (mostly) opaque pixels
        pixels.append(((x, y), pixel[:-1]))

def diff(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

best = []
for x in range(im.size[0]):
    for y in range(im.size[1]):
        d = 0
        for coor, pixel in pixels:
            try:
                ipixel = im.getpixel((x + coor[0], y + coor[1]))
                d += diff(ipixel, pixel)
            except IndexError:
                d += 256 ** 2 * 3  # worst possible mismatch for out-of-bounds
        best.append((d, x, y))

best.sort(key=lambda t: t[0])
best = best[:3]

draw = ImageDraw.Draw(im)
for match in best:
    x, y = match[1:]
    draw.rectangle((x, y, x + walnut.size[0], y + walnut.size[1]), outline='red')
im.save('out.png')
Running this with scriptname.py image.png walnut.png 5 yields for instance:
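If you do want the NumPy route the question mentions, the same sum-of-squared-differences search can be sketched as follows (a minimal sketch of mine, assuming the same two files as above, both converted to RGB, and that the walnut fits inside the image):

import numpy as np
from PIL import Image

im = np.asarray(Image.open('zGjE6.png').convert('RGB'), dtype=np.int64)
wal = np.asarray(Image.open('walnut.png').convert('RGB'), dtype=np.int64)
wh, ww = wal.shape[:2]

best, best_xy = None, None
for y in range(im.shape[0] - wh + 1):
    for x in range(im.shape[1] - ww + 1):
        # squared difference over the whole window at once
        d = np.sum((im[y:y + wh, x:x + ww] - wal) ** 2)
        if best is None or d < best:
            best, best_xy = d, (x, y)
print(best_xy)

The inner comparison is vectorized, so this is much faster than per-pixel getpixel calls, although a full template-matching routine would be faster still.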
Related
I am trying to horizontally stretch an image in a very specific way. Each x prime coordinate should follow a tangent path with respect to the original x coordinate. I believe there are two ways to do this:
Inverse the tangent function and map it normally
Map the tangent function and then inverse the mapping
Using this answer for map inversion, I'm trying to figure out why the two images are not the same. I know that the first method gives me the correct image that I'm looking for, so why doesn't the second method work? Is it because of the "limited precision" that @ChristophRackwitz commented on the answer?
import cv2
import glob
import numpy as np
import math

A = -1010
B = -3.931
C = 5.258
D = 978.3
M = -193.8
N = 1740

def get_tan_func_value(x):
    return A * math.tan((((x - N) / M) + B) / C) + D

def get_inverse_tan_func_value(x):
    return M * (C * math.atan((x - D) / A) - B) + N

# answer from linked post
def invert_map(F, shape):
    I = np.zeros_like(F)
    I[:, :, 1], I[:, :, 0] = np.indices(shape)
    P = np.copy(I)
    for i in range(10):
        P += I - cv2.remap(F, P, None, interpolation=cv2.INTER_LINEAR)
    return P

# import image
images = glob.glob('*.jpg')
img = cv2.imread(images[0])
h, w = img.shape[:2]
map_x_tan = np.zeros((img.shape[0], img.shape[1]), dtype=np.float32)
map_x_inverse_tan = np.zeros((img.shape[0], img.shape[1]), dtype=np.float32)
map_y = np.zeros((img.shape[0], img.shape[1]), dtype=np.float32)

# x tan function map
for i in range(map_x_tan.shape[0]):
    map_x_tan[i, :] = [get_tan_func_value(x) for x in range(map_x_tan.shape[1])]

# x inverse tan function map
for i in range(map_x_inverse_tan.shape[0]):
    map_x_inverse_tan[i, :] = [get_inverse_tan_func_value(x) for x in range(map_x_inverse_tan.shape[1])]

# default y map
for j in range(map_y.shape[1]):
    map_y[:, j] = [y for y in range(map_y.shape[0])]

# convert x tan map to 2 channel (x,y) map
(xymap_tan, _) = cv2.convertMaps(map1=map_x_tan, map2=map_y, dstmap1type=cv2.CV_32FC2)

# invert the 2 channel x tan map
xymap_inverted = invert_map(xymap_tan, (h, w))

# remap and write the target image (inverse tan function with normal map)
target = cv2.remap(img, map_x_inverse_tan, map_y, cv2.INTER_LINEAR)
cv2.imwrite("target.jpg", target)

# remap and write the attempted image (normal tan function with inverted map)
attempt = cv2.remap(img, xymap_inverted, None, cv2.INTER_LINEAR)
cv2.imwrite("attempt.jpg", attempt)
Method 1: Target Image
Method 2: Attempt Image
The results show that the attempt (normal tan function with inverted map) has less stretching near the edges of the image than expected. The two images are almost identical everywhere except at the edges. I did not post the original picture to save space.
I've played around with that invert_map procedure. It seems slightly susceptible to oscillation.
Use this instead:
def invert_map(F):
    (h, w) = F.shape[:2]  # (h, w, 2), "xymap"
    I = np.zeros_like(F)
    I[:, :, 1], I[:, :, 0] = np.indices((h, w))  # identity map
    P = np.copy(I)
    for i in range(10):
        correction = I - cv2.remap(F, P, None, interpolation=cv2.INTER_LINEAR)
        P += correction * 0.5
    return P
I simply damped the correction by 0.5, which makes the fixed-point iteration tamer and converges a lot faster too.
In my experiments with your tan map, I've found that 5-10 iterations are good enough already, and further iterations bring no further progress.
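To see the convergence for yourself, here is a minimal check (a sketch of mine; it assumes the xymap_tan from your code):

# sanity check: the residual the iteration minimizes should end up near zero
inv = invert_map(xymap_tan)
ident = np.zeros_like(xymap_tan)
ident[:, :, 1], ident[:, :, 0] = np.indices(xymap_tan.shape[:2])
residual = ident - cv2.remap(xymap_tan, inv, None, interpolation=cv2.INTER_LINEAR)
print(np.abs(residual).max())  # small wherever the map is actually invertible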
Entire notebook of my explorations: https://gist.github.com/crackwitz/67f76f8a9eff21476b080c06d20660d0
Feature request: https://github.com/opencv/opencv/issues/22120
I am reading an image, getting objects that have a certain brightness value, and then plotting the X and Y coords to the image.
But there is a huge group of outliers, which are all located in a rectangular part of the image; its X and Y coords are 1110-1977 (width) and 1069-1905 (height). From here, I'm looping through this little square portion of the image, and removing from my pre-created x and y arrays any values that have the same coords as shown there.
However, this removes a lot more coords than intended, for example everything with X in the range 1110-1977. So the end result is a cross-shaped filtering pattern, when I only want the square in the center to be filtered. How would I do this?
Code
from PIL import Image, ImageDraw
import numpy as np
from math import sqrt

imag = Image.open("Centaurus_A-DeNoiseAI-denoise.jpg")
imag = imag.convert('RGB')
x = []
y = []

imag2 = Image.open("Cen_A_cropped.jpg")
imag2 = imag2.convert('RGB')
r = []
g = []
b = []
width2, height2 = imag2.size
for count2 in range(width2):
    for i2 in range(height2):
        X, Y = count2, i2
        (R, G, B) = imag2.getpixel((X, Y))
        r.append(R)
        g.append(G)
        b.append(B)
average_r = sum(r) / len(r)
average_g = sum(g) / len(g)
average_b = sum(b) / len(b)
brightness_average = sqrt(0.299 * (average_r ** 2) + 0.587 * (average_g ** 2) + 0.114 * (average_b ** 2))
print("Avg. brightness " + str(brightness_average))

def calculate_brightness(galaxy, ref_clus, clus_mag):
    delta_b = (galaxy / ref_clus)
    bright = delta_b ** 2
    mag = np.log(bright) / np.log(2.512)
    return mag + clus_mag

count = 0
X, Y = 1556, 1568
(R, G, B) = imag.getpixel((X, Y))
width, height = imag.size
brightness = sqrt(0.299 * (R ** 2) + 0.587 * (G ** 2) + 0.114 * (B ** 2))
print("Magnitude: " + str((calculate_brightness(13050, 15.79, 3.7))))
reference = brightness_average / (calculate_brightness(13050, 15.79, 3.7) / 6.84)
print("Reference: " + str(reference))
for count in range(width):
    for i in range(height):
        X, Y = count, i
        (R, G, B) = imag.getpixel((X, Y))
        brightness = sqrt(0.299 * (R ** 2) + 0.587 * (G ** 2) + 0.114 * (B ** 2))
        if (reference <= brightness <= reference + 3):
            x.append(X)
            y.append(Y)

# post processing ----------------------------------------------------------
for x2 in range(1110, 1977):
    for y2 in range(1069, 1905):
        X, Y = x2, y2
        if (X in x and Y in y):
            x.remove(X)
            y.remove(Y)
# ---------------------------------------------------------------------------

with imag as im:
    delta = 19
    draw = ImageDraw.Draw(im)
    for i in range(len(x)):
        draw.rectangle([x[i - delta], y[i - delta], x[i - delta], y[i - delta]], fill=(0, 255, 0))
    im.save("your_image.png")
Centaurus_A-DeNoiseAI-denoise.jpg
Cen_A_cropped.jpg
Your post-processing logic is flawed. You remove a bunch of X values in the range 1110-1977 without checking whether the corresponding Y value is also inside the box. Remove this code section and instead add that logic to the first loop, where you gather your x and y coords:
for count in range(width):
    for i in range(height):
        X, Y = count, i
        if 1110 <= X < 1977 and 1069 <= Y < 1905:  # add these
            continue                               # two lines
        (R, G, B) = imag.getpixel((X, Y))
However, there is a better way of doing the exact same thing by using numpy arrays. Instead of writing explicit loops, you can vectorise a lot of your computations.
import numpy as np
from PIL import Image, ImageDraw
image = Image.open('Centaurus_A-DeNoiseAI-denoise.jpg').convert('RGB')
img1 = np.array(image)
img2 = np.array(Image.open('Cen_A_cropped.jpg').convert('RGB'))
coeffs = np.array([.299, .587, .114])
average = img2.mean(axis=(0, 1))
brightness_average = np.sqrt(np.sum(average**2 * coeffs))
reference = brightness_average / (calculate_brightness(13050, 15.79,3.7) / 6.84)
print(f'Avg. brightness: {brightness_average}')
print(f'Reference: {reference}')
brightness = np.sqrt(np.sum(img1.astype(int)**2 * coeffs, axis=-1))
accepted_brightness = (brightness >= reference) * (brightness <= reference + 3)
pixels_used = np.ones((img1.shape[:2]), dtype=bool)
pixels_used[1069:1905,1110:1977] = False
rows, cols = np.where(accepted_brightness * pixels_used)
with image as im:
    draw = ImageDraw.Draw(im)
    draw.point(list(zip(cols, rows)), fill=(0, 255, 0))
    image.save('out.png')
The main trick used here is in the line
rows, cols = np.where(accepted_brightness * pixels_used)
accepted_brightness is a 2D boolean array saying, for each pixel, whether its brightness is within your preferred range. pixels_used is another 2D boolean array, where every pixel is True except for the pixels in the box near the centre that you want to ignore. The combination of the two gives you the pixel coordinates that have the correct brightness and are not in the square in the centre.
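As a tiny illustration of how the two masks combine (toy arrays, not the real image):

import numpy as np

accepted = np.array([[True, True], [True, False]])  # brightness in range?
used = np.array([[False, True], [True, True]])      # outside the excluded box?
rows, cols = np.where(accepted & used)
print(rows, cols)  # -> [0 1] [1 0]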
I am trying to write a program which fades an image in the radial direction, which means that as we move away from the centre of the image, the pixels fade to black. For this, I have written five different functions:
center: returns the coordinate pair (center_y, center_x) of the image center.
radial_distance: returns, for an image with width w and height h, an array with shape (h,w), where the number at index (i,j) gives the Euclidean distance from the point (i,j) to the center of the image.
scale: returns a copy of the array 'a' (or image) with its elements scaled to be in the given range.
radial_mask: takes an image as a parameter and returns an array of the same height and width, filled with values between 0.0 and 1.0.
radial_fade: returns the image multiplied by its radial mask.
The program is:
import numpy as np
import matplotlib.pyplot as plt

def center(a):
    y, x = a.shape[:2]
    return ((y - 1) / 2, (x - 1) / 2)  # note the order: (center_y, center_x)

def radial_distance(b):
    h, w = b.shape[:2]
    y, x = center(b)
    o = b[:h, :w, 0]
    for i in range(h):
        for j in range(w):
            o[i, j] = np.sqrt((y - i) ** 2 + (x - j) ** 2)
    return o

def scale(c, tmin=0.0, tmax=1.0):
    """Returns a copy of array 'c' with its values scaled to be in the range
    [tmin, tmax]."""
    mini, maxi = c.min(), c.max()
    if maxi == 0:
        return 0
    else:
        m = (tmax - tmin) / (maxi - mini)
        f = tmin - m * mini
        return c * m + f

def radial_mask(d):
    f = radial_distance(d)
    g = scale(f, tmin=0.0, tmax=1.0)
    # f = g[:,:,0]
    n = 1.0 - g
    return n

def radial_fade(l):
    f, g = l.shape[:2]
    q = l[:f, :g, 0]
    return q * radial_mask(l)

image = plt.imread("src/painting.png")
fig, ax = plt.subplots(3)
masked = radial_mask(image)
faded = radial_fade(image)
ax[0].imshow(image)
ax[1].imshow(masked)
ax[2].imshow(faded)
plt.show()
There is something wrong somewhere in the code, as it does not do the expected job.
One problem is that in
o = b[:h,:w,0]
you're using the same precision as the image, which may be integer (e.g. uint8).
You should use, for example,
o = np.zeros((h, w), np.float32)
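For example, radial_distance could be rewritten like this (a sketch based on the code above):

def radial_distance(b):
    h, w = b.shape[:2]
    y, x = center(b)
    o = np.zeros((h, w), np.float32)  # fresh float array, not a view into the image
    for i in range(h):
        for j in range(w):
            o[i, j] = np.sqrt((y - i) ** 2 + (x - j) ** 2)
    return o
    # or, without the loops:
    # i, j = np.indices((h, w))
    # return np.hypot(i - y, j - x).astype(np.float32)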
I have a 2D map of a coordinate transform. The data at each point is the azimuthal angle in the original coordinate system, which goes from 0 to 360. I'm trying to use pyplot.contour to plot lines of constant angle, e.g. 45 degrees. The contour appears along the 45 degree line between the two poles, but there's an additional part to the contour that connects the two poles along the 0/360 discontinuity. This makes a very jagged, ugly line as it basically just traces the pixels with a number close to 0 on one side and close to 360 on the other.
Examples:
Here is an image using the full colour map:
You can see the discontinuity along the blue/red curve on the left side. One side is 360 degrees, the other is 0 degrees. When plotting contours, I get:
Note that all contours connect the two poles, but even though I have NOT plotted the 0 degree contour, all the other contours follow along the 0 degree discontinuity (because pyplot thinks if it's 0 on one side and 360 on the other, there must be all other angles in between).
Code to produce this data:
import numpy as np
import matplotlib.pyplot as plt

jgal = np.array(
    [
        [-0.054875539726, -0.873437108010, -0.483834985808],
        [0.494109453312, -0.444829589425, 0.746982251810],
        [-0.867666135858, -0.198076386122, 0.455983795705],
    ]
)

def s2v3(rra, rdec, r):
    pos0 = r * np.cos(rra) * np.cos(rdec)
    pos1 = r * np.sin(rra) * np.cos(rdec)
    pos2 = r * np.sin(rdec)
    return np.array([pos0, pos1, pos2])

def v2s3(pos):
    x = pos[0]
    y = pos[1]
    z = pos[2]
    if np.isscalar(x):
        x, y, z = np.array([x]), np.array([y]), np.array([z])
    rra = np.arctan2(y, x)
    low = np.where(rra < 0.0)
    high = np.where(rra > 2.0 * np.pi)
    if len(low[0]):
        rra[low] = rra[low] + (2.0 * np.pi)
    if len(high[0]):
        rra[high] = rra[high] - (2.0 * np.pi)
    rxy = np.sqrt(x ** 2 + y ** 2)
    rdec = np.arctan2(z, rxy)
    r = np.sqrt(x ** 2 + y ** 2 + z ** 2)
    if x.size == 1:
        rra = rra[0]
        rdec = rdec[0]
        r = r[0]
    return rra, rdec, r

def gal2fk5(gl, gb):
    rgl = np.deg2rad(gl)
    rgb = np.deg2rad(gb)
    r = 1.0
    pos = s2v3(rgl, rgb, r)
    pos1 = np.dot(pos.transpose(), jgal).transpose()
    rra, rdec, r = v2s3(pos1)
    dra = np.rad2deg(rra)
    ddec = np.rad2deg(rdec)
    return dra, ddec

def make_coords(resolution=50):
    width = 9
    height = 6
    px = width * resolution
    py = height * resolution
    coords = np.zeros((px, py, 4))
    for ix in range(0, px):
        for iy in range(0, py):
            l = 360.0 / px * ix - 180.0
            b = 180.0 / py * iy - 90.0
            dra, ddec = gal2fk5(l, b)
            coords[ix, iy, 0] = dra
            coords[ix, iy, 1] = ddec
            coords[ix, iy, 2] = l
            coords[ix, iy, 3] = b
    return coords

coords = make_coords()
# now do one of these
# plt.imshow(coords[:,:,0], origin='lower')  # color plot
plt.contour(
    coords[:, :, 0], levels=[45, 90, 135, 180, 225, 270, 315]
)  # contour plot with jagged ugliness
plt.show()
How can I either:
1. stop pyplot.contour from drawing a contour along the discontinuity, or
2. make pyplot.contour recognize that the 0/360 discontinuity in angle is not a real discontinuity at all?
I can just increase the resolution of the underlying data, but before I get a nice smooth line it starts to take a very long time and a lot of memory to plot.
I will also want to plot a contour along 0 degrees, but if I can figure out how to hide the discontinuity I can just shift it to somewhere else not near a contour. Or, if I can make #2 happen, it won't be an issue.
This is definitely still a hack, but you can get nice smooth contours with a two-fold approach:
Plot contours of the absolute value of the phase (going from -180˚ to 180˚) so that there is no discontinuity.
Plot two sets of contours in a finite region so that numerical defects close to the tops and bottoms of the extrema do not creep in.
Here is the complete code to append to your example:
Z = np.exp(1j*np.pi*coords[:,:,0]/180.0)
Z *= np.exp(0.25j*np.pi/2.0) # Shift to get same contours as in your example
X = np.arange(300)
Y = np.arange(450)
N = 2
levels = 90*(0.5 + (np.arange(N) + 0.5)/N)
c1 = plt.contour(X, Y, abs(np.angle(Z)*180/np.pi), levels=levels)
c2 = plt.contour(X, Y, abs(np.angle(Z*np.exp(0.5j*np.pi))*180/np.pi), levels=levels)
One can generalize this code to get smooth contours for any "periodic" function. What is left to be done is to generate a new set of contours with the correct values so that colormaps apply correctly, labels will be applied correctly etc. However, there does not seem to be a simple way of doing this with matplotlib: the relevant QuadContourSet class does everything and I do not see a simple way of constructing an appropriate contour object from the contours c1 and c2.
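As a sketch of that generalization (the function name and signature are mine, not matplotlib's; it assumes the numpy/matplotlib imports from the question):

def periodic_contours(X, Y, phase, period=360.0, levels=None):
    # represent the phase on the unit circle so there is no branch cut,
    # then contour |angle| twice, a quarter period apart, so each pass
    # avoids the numerical defects near the other's extrema
    Z = np.exp(2j * np.pi * phase / period)
    half = period / 2.0
    c1 = plt.contour(X, Y, np.abs(np.angle(Z)) * half / np.pi, levels=levels)
    c2 = plt.contour(X, Y, np.abs(np.angle(Z * 1j)) * half / np.pi, levels=levels)
    return c1, c2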
I was interested in the exact same problem. One solution is to NaN out the contours along the branch cut; see here; another is to use the max_jump argument in matplotx's contour().
I molded the solution into a Python package, cplot.
import cplot
import numpy as np
def f(z):
    return np.exp(1 / z)
cplot.show(f, (-1.0, +1.0, 400), (-1.0, +1.0, 400))
Anyone have any suggestions on how to make randomized colors that are all greenish? Right now I'm generating the colors by this:
color = (randint(100, 200), randint(120, 255), randint(100, 200))
That mostly works, but I get brownish colors a lot.
Simple solution: use the HSL or HSV color space instead of RGB (convert to RGB afterwards if you need to). The difference is the meaning of the tuple: where RGB means values for red, green and blue, in HSV the H is the hue (120 degrees, or 0.33 on a 0-1 scale, means green, for example), the S is the saturation and the V is the brightness. So keep the H at a fixed value (or, for even more random colors, randomize it by adding/subtracting a small random number) and randomize the S and the V. See the Wikipedia article.
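For example, with the standard colorsys module (a minimal sketch; in colorsys, H, S and V are all in the 0-1 range):

import colorsys
import random

h = 1 / 3.0 + random.uniform(-0.05, 0.05)  # green sits at 120 degrees = 1/3 of the wheel
s = random.uniform(0.5, 1.0)               # randomize saturation
v = random.uniform(0.5, 1.0)               # randomize brightness
color = tuple(int(c * 255) for c in colorsys.hsv_to_rgb(h, s, v))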
As others have suggested, generating random colours is much easier in the HSV colour space (or HSL, the difference is pretty irrelevant for this)
So, code to generate random "green'ish" colours, and (for demonstration purposes) display them as a series of simple coloured HTML span tags:
#!/usr/bin/env python2.5
"""Random green colour generator, written by dbr, for
http://stackoverflow.com/questions/1586147/how-to-generate-random-greenish-colors
"""

def hsv_to_rgb(h, s, v):
    """Converts HSV value to RGB values
    Hue is in range 0-359 (degrees), value/saturation are in range 0-1 (float)

    Direct implementation of:
    http://en.wikipedia.org/wiki/HSL_and_HSV#Conversion_from_HSV_to_RGB
    """
    h, s, v = [float(x) for x in (h, s, v)]

    hi = int(h / 60) % 6            # hue sector (floor, not round)
    f = (h / 60) - int(h / 60)      # fractional position within the sector

    p = v * (1 - s)
    q = v * (1 - f * s)
    t = v * (1 - (1 - f) * s)

    if hi == 0:
        return v, t, p
    elif hi == 1:
        return q, v, p
    elif hi == 2:
        return p, v, t
    elif hi == 3:
        return p, q, v
    elif hi == 4:
        return t, p, v
    elif hi == 5:
        return v, p, q

def test():
    """Check examples on..
    http://en.wikipedia.org/wiki/HSL_and_HSV#Examples
    ..work correctly
    """
    def verify(got, expected):
        if got != expected:
            raise AssertionError("Got %s, expected %s" % (got, expected))

    verify(hsv_to_rgb(0, 1, 1), (1, 0, 0))
    verify(hsv_to_rgb(120, 0.5, 1.0), (0.5, 1, 0.5))
    verify(hsv_to_rgb(240, 1, 0.5), (0, 0, 0.5))

def main():
    """Generate 50 random RGB colours, and create some simple coloured HTML
    span tags to verify them.
    """
    test()  # Run simple test suite

    from random import randint, uniform

    for i in range(50):
        # Tweak these values to change colours/variance
        h = randint(90, 140)  # Select random green'ish hue from hue wheel
        s = uniform(0.2, 1)
        v = uniform(0.3, 1)

        r, g, b = hsv_to_rgb(h, s, v)

        # Convert from the 0-1 range to 0-255 for HTML output
        r, g, b = [x * 255 for x in (r, g, b)]

        print "<span style='background:rgb(%i, %i, %i)'> </span>" % (r, g, b)

if __name__ == '__main__':
    main()
The output (when viewed in a web-browser) should look something along the lines of:
Edit: I didn't know about the colorsys module. Instead of the above hsv_to_rgb function, you could use colorsys.hsv_to_rgb, which makes the code much shorter (it's not quite a drop-in replacement, as my hsv_to_rgb function expects the hue to be in degrees instead of 0-1):
#!/usr/bin/env python2.5
from colorsys import hsv_to_rgb
from random import randint, uniform

for x in range(50):
    h = uniform(0.25, 0.38)  # Select random green'ish hue from hue wheel
    s = uniform(0.2, 1)
    v = uniform(0.3, 1)

    r, g, b = hsv_to_rgb(h, s, v)

    # Convert from the 0-1 range to 0-255 for HTML output
    r, g, b = [x * 255 for x in (r, g, b)]

    print "<span style='background:rgb(%i, %i, %i)'> </span>" % (r, g, b)
Check out the colorsys module:
http://docs.python.org/library/colorsys.html
Use the HSL or HSV color space. Randomize the hue to be close to green, then choose completely random stuff for the saturation and V (brightness).
If you stick with RGB, you basically just need to make sure the G value is greater than the R and B, and try to keep the blue and red values similar so that the hue doesn't go too crazy. Extending from Slaks, maybe something like (I know next to nothing about Python):
greenval = randint(100, 255)
redval = randint(20,(greenval - 60))
blueval = randint((redval - 20), (redval + 20))
color = (redval, greenval, blueval)
So in this case you are lucky enough to want variations on a primary color, but for artistic uses like this it is better to specify color wheel coordinates rather than primary color magnitudes.
You probably want something from the colorsys module like:
colorsys.hsv_to_rgb(h, s, v)
Convert the color from HSV coordinates to RGB coordinates.
The solution with the HSx color space is a very good one. However, if you need something extremely simplistic and have no specific requirements about the distribution of the colors (like uniformity), a simplistic RGB-based solution would be just to make sure that the G value is greater than both R and B:
rr = randint(100, 200)
rb = randint(100, 200)
rg = randint(max(rr, rb) + 1, 255)
This will give you "greenish" colors. Some of them will be ever so slightly greenish. You can increase the guaranteed degree of greenishness by increasing (absolutely or relatively) the lower bound in the last randint call.
What you want is to work in terms of HSL instead of RGB. You could find a range of hue that satisfies "greenish" and pick a random hue from it. You could also pick random saturation and lightness but you'll probably want to keep your saturation near 1 and your lightness around 0.5 but you can play with them.
Below is some ActionScript code to convert HSL to RGB. I haven't touched Python in a while or I'd post the Python version.
I find that greenish is something like 0.47*PI to 0.8*PI.
/**
 * @param h hue [0, 2PI]
 * @param s saturation [0,1]
 * @param l lightness [0,1]
 * @return object {r,g,b} {[0,1],[0,1],[0,1]}
 */
public function hslToRGB(h:Number, s:Number, l:Number):Color
{
    var q:Number = (l < 0.5) ? l * (1 + s) : l + s - l * s;
    var p:Number = 2 * l - q;
    var h_k:Number = h / (Math.PI * 2);
    var t_r:Number = h_k + 1/3;
    var t_g:Number = h_k;
    var t_b:Number = h_k - 1/3;

    if (t_r < 0) ++t_r; else if (t_r > 1) --t_r;
    if (t_g < 0) ++t_g; else if (t_g > 1) --t_g;
    if (t_b < 0) ++t_b; else if (t_b > 1) --t_b;

    var c:Color = new Color();
    if (t_r < 1/6) c.r = p + ((q - p) * 6 * t_r);
    else if (t_r < 1/2) c.r = q;
    else if (t_r < 2/3) c.r = p + ((q - p) * 6 * (2/3 - t_r));
    else c.r = p;

    if (t_g < 1/6) c.g = p + ((q - p) * 6 * t_g);
    else if (t_g < 1/2) c.g = q;
    else if (t_g < 2/3) c.g = p + ((q - p) * 6 * (2/3 - t_g));
    else c.g = p;

    if (t_b < 1/6) c.b = p + ((q - p) * 6 * t_b);
    else if (t_b < 1/2) c.b = q;
    else if (t_b < 2/3) c.b = p + ((q - p) * 6 * (2/3 - t_b));
    else c.b = p;

    return c;
}
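The same idea translates to Python with the standard colorsys module (a rough sketch of mine; colorsys expects hue in the 0-1 range, so 0.47*PI to 0.8*PI of a 2*PI wheel becomes roughly 0.235 to 0.4):

import colorsys
import random

h = random.uniform(0.235, 0.4)  # greenish hues
l = 0.5                         # lightness around 0.5, as suggested above
s = random.uniform(0.8, 1.0)    # saturation near 1
r, g, b = [int(c * 255) for c in colorsys.hls_to_rgb(h, l, s)]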
The simplest way to do this is to make sure that the red and blue components are the same, like this: (Forgive my Python)
rb = randint(100, 200)
color = (rb, randint(120, 255), rb)
I'd go with the HSV approach everyone else mentioned. Another approach would be to get a nice high-resolution photo with some greenery in it, crop out the non-green parts, and pick random pixels from it using PIL.
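A rough sketch of that idea ('greenery.png' is a hypothetical file name for the cropped photo):

import random
from PIL import Image

img = Image.open('greenery.png').convert('RGB')  # hypothetical source photo
w, h = img.size
color = img.getpixel((random.randrange(w), random.randrange(h)))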