Mark pixels of a blank image - python

I want to mark pixels according to the list
Mark = [2, 455, 6, 556, 12, 654, 22, 23, 4, 86, .....]
so that the first 2 pixels are left unmarked, the next 455 pixels are marked with a color, the next 6 pixels are left unmarked again, the next 556 pixels are marked with the same color, and so on.
The size of the image is 500x500x3. How do I calculate these steps?
Img = np.zeros((500, 500, 3), dtype=np.uint8)

Your algorithm is actually in your question. By 500x500x3 I guess you mean your image is 500 (width) by 500 (height) with 3 color channels?
It could be implemented as follows, without any optimizations:
color = (128, 50, 30)
x, y = 0, 0  # x is the column, y is the row
for (skip, count) in [Mark[n:n+2] for n in range(0, len(Mark), 2)]:
    x += skip
    y += x // 500  # keep track of the lines: when x goes past 500,
                   # it means we are on a new line
    x %= 500  # keep x in bounds
    # colorize `count` pixels in the image
    for i in range(0, count):
        Img[y, x, 0] = color[0]  # numpy indexes as [row, column]
        Img[y, x, 1] = color[1]
        Img[y, x, 2] = color[2]
        x += 1
        y += x // 500
        x %= 500  # keep x in bounds
The zip([a for i, a in enumerate(Mark) if i % 2 == 0], [a for i, a in enumerate(Mark) if i % 2 != 0]) is just a way to group the pairs (skip, pixels to colorize). It could definitely be improved though, I'm no Python expert.
EDIT: modified the zip() to use [Mark[n:n+2] for n in range(0, len(Mark), 2)] as suggested by Peter, which is much simpler and easier to understand.
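For example, the comprehension pairs up the run lengths like this (a small illustration using only the first four values of Mark):
pairs = [Mark[n:n + 2] for n in range(0, len(Mark), 2)]
# with Mark = [2, 455, 6, 556] this gives [[2, 455], [6, 556]]
# i.e. (skip 2, draw 455), then (skip 6, draw 556)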

The easiest way is probably to convert the image to a Numpy array:
import numpy as np
na = np.array(Img)
And then use Numpy ravel() to give you a flattened (1-D) view of the array
flat = np.ravel(na)
You can now see the shape of your flat view:
print(flat.shape)
You can then do your colouring by iterating over the list of offsets from your question. The good news is that, because ravel() gives you a view into your original data, all the changes you make to the view will be reflected in your original data.
So, to get back to a PIL Image, all you need is:
RecolouredImg = Image.fromarray(na)
Try it out by just colouring the first ten pixels before worrying about your long list.
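Here is a minimal sketch of that idea, assuming Mark holds alternating (skip, draw) run lengths measured in pixels; it reshapes to (-1, 3) rather than fully flattening with ravel(), so that each row of the view is one pixel:
import numpy as np

na = np.array(Img)                 # Img is your PIL image (or the 500x500x3 array)
flat = na.reshape(-1, 3)           # one row per pixel, still a view into na
color = (128, 50, 30)              # assumed colour
pos = 0
for skip, count in zip(Mark[0::2], Mark[1::2]):
    pos += skip                    # leave `skip` pixels untouched
    flat[pos:pos + count] = color  # colour the next `count` pixels
    pos += count
RecolouredImg = Image.fromarray(na)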
If you like working with Python lists (I don't), you can achieve a similar effect by using PIL's getdata() to get a flattened list of pixels, processing the list against your requirements, and using putdata() to put them all back. The effect will be the same; just choose the method that fits how your brain works.

Finding lines manually in an image

I have an image (saved as a variable called canny_image) and it looks like this after preprocessing.
I am basically trying to find the distance between the first two vertical lines. I tried using the hough_line function from skimage, but it's unable to find the first line, so I thought it might be easier to solve this manually.
I am basically trying to solve this by going through each row in the image until I get to the first pixel with a value of 255 (the lines have a value of 255, while everything else is zero), and then storing the location of that pixel in an array. I take the mode of the values in the array as the x location of the first line. I'll do the same for the 2nd line, using the first x-value as a starting point.
def find_lines(canny_image):
    threshold = 255
    for y in range(canny_image.shape[0]):
        for x in range(canny_image.shape[1]):
            if canny_image[x, y] == threshold:
                return x
This is the code I wrote to get the x-location of the first line; however, I'm not getting the desired output. Any help on how to solve this will be much appreciated. Thanks!
Perhaps try something like this
import numpy as np

# Returns an array of line x positions in an image
def find_line_x_positions(image, lines_to_detect: int, buffer_zone: int):
    threshold = 255
    (height, width) = image.shape
    # For each line, store the x position sum [i, 0] and point count [i, 1]
    x_position_sums = np.zeros((lines_to_detect, 2), np.double)
    for y in range(height):
        buffer = 0
        line_index = 0
        for x in range(width):
            if buffer > 0:
                buffer -= 1
            if (image[y, x] >= threshold) and (buffer == 0):
                buffer = buffer_zone
                x_position_sums[line_index, 0] += x
                x_position_sums[line_index, 1] += 1
                line_index += 1
                if line_index == lines_to_detect:
                    break
    # Divide the x position sums by the point counts to get the average x position for each line
    results = x_position_sums[np.all(x_position_sums, axis=1)]
    results = np.divide(results[:, 0], results[:, 1])
    return results
You can also try OpenCV's HoughLines() function, which is simpler to use than the scikit-image version. When I tested the OpenCV implementation, it seemed to have a hard time finding vertical lines (within 10 degrees of vertical), but you can solve this by rotating your image X degrees and looking for lines within that range of rotation.
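For reference, here is a rough sketch of that OpenCV route, assuming canny_image is an 8-bit single-channel edge image (e.g. the output of cv2.Canny); near-vertical lines have theta close to 0 or pi, and rho is then roughly the x position. The rho/theta resolution and the threshold of 200 are just assumed starting values to tune:
import cv2
import numpy as np

lines = cv2.HoughLines(canny_image, 1, np.pi / 180, 200)  # rho step, theta step, accumulator threshold
if lines is not None:
    for rho, theta in lines[:, 0]:
        # lines within ~10 degrees of vertical
        if theta < np.radians(10) or theta > np.radians(170):
            print("vertical line near x =", abs(rho))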

Fast way to iterate over sparse numpy array but only where elements are 1

I have an array representing a light source in space made as such:
source2D = np.zeros((256, 256))
with any number of the pixels set to 1. An example I have is a point source, which is generated by:
source2D[126:128, 126:128] = 1
And I am running a Monte Carlo simulation which shoots a ray from each part of the array where the value is 1. Currently I am iterating over the entire array, but I would save a lot of time by only picking out the elements where the array is 1 and iterating over those. I should add that this function should accept a generic 256x256 array where any elements could be set to 1, so cropping the array is not an option. What is the fastest way to do this? I am also using TensorFlow, so if there is an implementation using that, that would also be an option.
Right now my code looks something like this:
while pc < 1000000:
    pc += 1
    # Randomize x and y as coordinates on source
    x = np.random.randint(0, source2D.shape[0])  # 0 to 255 for this example
    y = np.random.randint(0, source2D.shape[1])  # 0 to 255 for this example
    # Shoot raycast from x,y to point on detector
Solved with hpaulj's comment:
source2D = np.zeros((256, 256)) # Testing with point source
source2D[126:128, 126:128] = 1
nonzero_entries = np.nonzero(source2D)
# pick one random index so (i, j) is always an actual nonzero pixel
# (choosing the row and column independently could land on a zero pixel)
k = np.random.choice(len(nonzero_entries[0]))
i = nonzero_entries[0][k]
j = nonzero_entries[1][k]
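If instead of sampling one pixel at random you need to loop over every lit pixel, np.argwhere gives the coordinates directly; a small sketch using the same source2D:
for i, j in np.argwhere(source2D == 1):
    pass  # shoot a raycast from pixel (i, j) here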

Numpy: Efficient mapping of single values in np arrays

I have a 3D numpy array representing an image in HSV color space (shape = (h=1000, w=3000, 3)).
The last dimension of the image is [H, S, V]. I want to subtract 20 from the H channel of every pixel whose H value is > 20, but leave S and V intact.
I wrote the following vectorized function:
import numpy as np

def sub20(x):
    # x is an array in the format [H, S, V]
    h, s, v = x
    return np.uint8([h - 20, s, v])

vec = np.vectorize(sub20, otypes=[np.uint8], signature="(i)->(i)")
img2 = vec(img1)
What this vectorised function does is accept the last dimension of the image, [H, S, V], and output
[H-20, S, V]
I don't know how to make it subtract 20 only if H is greater than 20. It also takes 1 minute to execute, and I want the script to handle a live webcam feed. Is there any way to make it faster?
Thanks
You can simply slice with condition:
img1[:,:,0][img1[:,:,0]>=20] -= 20
Or also make use of np.where:
img1[:,:,0] = np.where(img1[:,:,0]>=20, img1[:,:,0]-20, img1[:,:,0])
Do you need to use the vectorize function?
Otherwise you could simply use the following:
# if you want to make change directly on same image.
img1[:,:,0] -= 20
# if you want to leave img1 in the same state.
img2 = np.array(img1)
img2[:,:,0] = img1[:,:,0] - 20
Update (12:08 - 5.4.2020)
To ensure that values never get below 0, I would recommend computing it in two steps, as Mercury mentioned:
# if you want to make changes directly on the same image
# (subtract in a signed type first: with uint8, values wrap around instead of dropping below 0)
h = img1[:, :, 0].astype(np.int16) - 20
img1[:, :, 0] = np.clip(h, 0, 255).astype(np.uint8)
# if you want to leave img1 in the same state
img2 = np.array(img1)
h = img2[:, :, 0].astype(np.int16) - 20
img2[:, :, 0] = np.clip(h, 0, 255).astype(np.uint8)

Python: rotate image without depending on pillow library?

First of all, I am at a complete beginner level of python, with just under a month of experience.
I am writing a simple program as part of a project. My two primary objectives are to create, from scratch, a function that flips and rotates an image, and another function that changes the RGB values of the image (e.g. makes the image greyscale). The user will be given a choice of which effect to apply to the image. I already have Pillow installed; do I need any other libraries? I want to know how I can go about creating these from scratch.
Any help would be appreciated
Thanks
EDIT: To clarify, I will be using pillow, but I will be creating the rotate and greyscale functions myself
Pillow provides access to the individual pixels of an image, which can help you achieve what you want. Of course, the library functions like rotate() are faster ways to do this, but you just want to explore and learn, which is half the fun of programming.
You can create a new image and then get the pixel at a particular coordinate.
from PIL import Image

im = Image.new('RGBA', (250, 250))
im.getpixel((0, 0))
getpixel() will return a tuple of the color information, containing (red, green, blue, alpha)
You can also loop through an image and 'put' a new pixel using the same tuple of color values.
for x in range(200):
    for y in range(30):
        im.putpixel((x, y), (250, 0, 250))
You can save your image when you are done.
im.save('myImage.png')
Rotation in 90 degree increments is very simple: you swap the x and y values of each pixel and then mirror one axis.
w, h = sourceImage.size
targetImage = Image.new(sourceImage.mode, (h, w))
for x in range(w):
    for y in range(h):
        p = sourceImage.getpixel((x, y))         # copy a pixel
        targetImage.putpixel((h - 1 - y, x), p)  # put it in the new image, rotated 90 deg clockwise
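Since your project also calls for a greyscale effect, here is a minimal sketch of the same getpixel/putpixel approach applied to that; the filename is just a placeholder, and it uses a simple average of R, G and B:
from PIL import Image

im = Image.open('myImage.png').convert('RGB')  # placeholder filename
for x in range(im.width):
    for y in range(im.height):
        r, g, b = im.getpixel((x, y))
        grey = (r + g + b) // 3                 # simple average greyscale
        im.putpixel((x, y), (grey, grey, grey))
im.save('myImage_grey.png')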
Your next stop will be to look up computer graphics techniques.
You will need to treat the image as a matrix and then swap its rows and columns. A little linear algebra helps when optimizing this; if you try to brute force it naively, you could be waiting about 30 minutes to rotate each image (been there, done that).
Here is a look at in-place rotation. The gist of the program is:
# Python3 program to rotate a matrix by 90 degrees
N = 4

# An in-place function to rotate an
# N x N matrix by 90 degrees in the
# anti-clockwise direction
def rotateMatrix(mat):
    # Consider all squares one by one
    for x in range(0, int(N / 2)):
        # Consider elements in groups
        # of 4 in the current square
        for y in range(x, N - x - 1):
            # store current cell in temp variable
            temp = mat[x][y]
            # move values from right to top
            mat[x][y] = mat[y][N - 1 - x]
            # move values from bottom to right
            mat[y][N - 1 - x] = mat[N - 1 - x][N - 1 - y]
            # move values from left to bottom
            mat[N - 1 - x][N - 1 - y] = mat[N - 1 - y][x]
            # assign temp to left
            mat[N - 1 - y][x] = temp

# Function to print the matrix
def displayMatrix(mat):
    for i in range(0, N):
        for j in range(0, N):
            print(mat[i][j], end=' ')
        print("")

# Driver code
mat = [[0 for x in range(N)] for y in range(N)]

# Test case 1
mat = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12],
       [13, 14, 15, 16]]
'''
# Test case 2
mat = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]

# Test case 3
mat = [[1, 2],
       [4, 5]]
'''
rotateMatrix(mat)
displayMatrix(mat)  # Print rotated matrix
# This code is contributed by saloni1297

How can I process images faster with Python?

I'm trying to write a script that will detect an RGB value on the screen and then click that x, y position. I know how to perform the click, but I need to process the image a lot faster than my code below currently does. Is this possible with Python?
So far I'm reading a row at a time; when x = 1920 I go on to the next row, but it takes about 10 seconds to do one row. By that time the person on screen would have moved to a completely different spot, and I have only done one row!
Can I speed this code up OR is there a better way to achieve what I want? If it is not possible in Python I am open to C++ options :)
import Image

x = 0
y = 0
im = Image.open("C:\Users\sean\Desktop\screen.jpg")
pix = im.load()
print im.size  # get width and height of the image for iterating over
while x < 1920:
    print pix[x, y]  # get RGBA value of the pixel of an image
    print "x is:" + str(x)
    x = x + 1
    print "y is: " + str(y)
    if x == 1920:
        x = 0
        y = y + 1
Generally, you want to avoid per-pixel loops in Python; they will always be slow. To get reasonably fast image processing, you need to get used to working with matrices instead of individual pixels. You have basically two options: NumPy or OpenCV, or a combination of the two. NumPy is a generic mathematical matrix/array library, but you can do many image-related things with it. If you need something more specific, OpenCV supports many common operations on images.
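As a rough illustration of the matrix approach, here is a sketch that finds every pixel matching a target colour in one vectorized pass; both the target colour and the filename are just placeholders:
import numpy as np
from PIL import Image

im = np.array(Image.open('screen.jpg').convert('RGB'))   # shape (height, width, 3)
ys, xs = np.where(np.all(im == (33, 179, 80), axis=-1))  # rows and columns of matching pixels
if xs.size:
    print(xs[0], ys[0])                                   # first match, in (x, y) order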
Thanks for the responses. Below is the code I used; I didn't change my original approach. It turns out it is fast enough, but printing is a very costly operation :) It finds the x and y coords of the RGB value in less than a second.
# check for certain RGB in image
## need to screen grab
import Image, sys

x = 0
y = 0
im = Image.open('C:\\Users\\sean\\Desktop\\test.jpg')
pix = im.load()
print im.size  # get width and height of the image for iterating over
while x < 1914:
    value = pix[x, y]  # get RGBA value of the pixel of an image
    if value == (33, 179, 80):
        # call left_click(x,y)
        print x, y
    x = x + 1
    if x == 1914:
        x = 0
        y = y + 1
print "Finished"
sys.exit()
Image.getpixel is considered very slow. Instead, consider using Image.getdata. That will give you a sequence with data for all the pixels through which you can iterate.
Something like this:
import Image
import math

x = 0
y = 0
im = Image.open("IMG_2977.JPG")
(width, height) = im.size
print width
print height
pix = im.getdata()
i = 0
for pixel in pix:
    print pixel
    x = i % width
    y = math.trunc(i / width)
    print "x is: {}".format(x)
    print "y is: {}".format(y)
    i += 1
Without printing (just storing pixel in a variable) that runs in 2 seconds of user time (0.02 seconds of processor time) on my MacBook Pro.
You may want to do one of two things here.
1. Get a single pixel from the image
In this case, you don't need to iterate through the entire file. Just use im.getpixel. @Daniel makes a valid point that this is slow in a loop, but if you just want a single pixel, it is very efficient.
from PIL import Image
im = Image.open('screenshot.png')
im.getpixel((x, y)) # Returns the colour at (x, y)
2. Process multiple pixels from the image
This is best done using NumPy, as @Lukáš suggests, for example if you want to get the average colour of the 10 x 10 grid around a pixel.
You can get the data as a NumPy array using scipy.misc.fromimage
from PIL import Image
from scipy.misc import fromimage
im = Image.open('screenshot.png')
data = fromimage(im)
Let's compare the time it takes to get this data against a for loop.
In [32]: pix = im.load()
In [33]: %timeit fromimage(im)
10 loops, best of 3: 8.24 ms per loop
In [34]: %timeit [pix[x, y] for x in xrange(im.size[0]) for y in xrange(im.size[1])]
1 loops, best of 3: 637 ms per loop
To summarise:
scipy.misc.fromimage is the fastest, at ~8ms for a 1920x1080 image
Looping through pix[x, y] takes ~640ms, about 80 times slower
There is something called pyautogui, and it will usually find an entire image on the screen within 1-5 seconds, which is not too fast but seems better than your current option.
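A rough sketch of that idea, assuming you have saved a small template image of the thing you are looking for:
import pyautogui

location = pyautogui.locateOnScreen('target.png')  # returns a Box, or None if not found
if location is not None:
    x, y = pyautogui.center(location)
    pyautogui.click(x, y)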
You can process the first half and the second half of the image in 2 threads, but for me that only gave a speed-up of about 15%. Normal speed for me is 2.7 seconds for an image of height 375 and width 483; threads speed it up to 2.3 seconds. This is why I'm looking at this question.
