I am writing a script that can encrypt and decrypt an image using the RSA algorithm. My public key is (7, 187) and my private key is (23, 187). The encryption calculation is correct: for an entry of 41 in the image matrix, the encrypted value is 46. But the decryption is not giving the appropriate result: for 46 it gives 136, and every entry of 46 in the encrypted matrix comes out as 136 in the decrypted matrix. I don't know why this is happening; when I do the same calculation at the Python prompt (or shell), it gives the correct answer.
In the script, I first convert the RGB image to grayscale and then to a 2D numpy array; then, for each element, I apply the RSA algorithm (the keys) and save the result as an image. Then I apply the decryption key to the encrypted matrix, and that is where the problem occurs. Here is the code:
from PIL import Image
import numpy as np
from pylab import *
#encryption
img1 = (Image.open('image.jpeg').convert('L'))
img1.show()
img = array((Image.open('image.jpeg').convert('L')))
a,b = img.shape #saving the no of rows and col in a tuple
print('\n\nOriginal image: ')
print(img)
print((a,b))
tup = a,b
for i in range (0, tup[0]):
    for j in range (0, tup[1]):
        img[i][j]= (pow(img[i][j],7)%187)
print('\n\nEncrypted image: ')
print(img)
imgOut = Image.fromarray(img)
imgOut.show()
imgOut.save('img.bmp')
#decryption
img2 = (Image.open('img.bmp'))
img2.show()
img3 = array(Image.open('img.bmp'))
print('\n\nEncrypted image: ')
print(img3)
a1,b1 = img3.shape
print((a1,b1))
tup1 = a1,b1
for i1 in range (0, tup1[0]):
    for j1 in range (0, tup1[1]):
        img3[i1][j1]= ((pow(img3[i1][j1], 23))%187)
print('\n\nDecrypted image: ')
print(img3)
imgOut1 = Image.fromarray(img3)
imgOut1.show()
print(type(img))
The values of the matrices:
Original image:
[[41 42 45 ... 47 41 33]
[41 43 45 ... 44 38 30]
[41 42 46 ... 41 36 30]
...
[43 43 44 ... 56 56 55]
[45 44 45 ... 55 55 54]
[46 46 46 ... 53 54 54]]
Encrypted image:
[[ 46 15 122 ... 174 46 33]
[ 46 87 122 ... 22 47 123]
[ 46 15 7 ... 46 9 123]
...
[ 87 87 22 ... 78 78 132]
[122 22 122 ... 132 132 164]
[ 7 7 7 ... 26 164 164]]
Decrypted image:
[[136 70 24 ... 178 136 164]
[136 111 24 ... 146 141 88]
[136 70 96 ... 136 100 88]
...
[111 111 146 ... 140 140 1]
[ 24 146 24 ... 1 1 81]
[ 96 96 96 ... 52 81 81]]
Any help will be greatly appreciated. Thank You.
I think you will get on better using the 3rd parameter to the pow() function which does the modulus internally for you.
Here is a little example without the complexity of loading images - just imagine it is a greyscale gradient from black to white.
# Make single row greyscale gradient from 0..255
img = [ x for x in range(256) ]
# Create encrypted version
enc = [ pow(x,7,187) for x in img ]
# Decrypt back to plaintext
dec = [ pow(x,23,187) for x in enc ]
It decrypts back into the original values for 0..186 and goes wrong above that. That is not an overflow: RSA with n = 187 can only represent messages smaller than the modulus, so greyscale values of 187..255 cannot survive the round trip.
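Applied back to the numpy-based script in the question, a minimal sketch of the same idea could look like this (untested against the actual image file). The int() call moves each pixel out of the fixed-width numpy integer type, whose overflow in pow(..., 23) most likely produced the wrong 136 values, and into Python's arbitrary-precision integers; pixel values of 187 and above still cannot round-trip, because the modulus is only 187.

import numpy as np
from PIL import Image

img = np.array(Image.open('image.jpeg').convert('L'))

# int() gives arbitrary-precision integers, and the 3rd argument of pow()
# reduces modulo 187 without ever building the huge intermediate power.
enc = np.vectorize(lambda x: pow(int(x), 7, 187))(img).astype(np.uint8)
dec = np.vectorize(lambda x: pow(int(x), 23, 187))(enc).astype(np.uint8)

Image.fromarray(enc).save('img.bmp')   # encrypted image; dec matches img for values below 187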
Here is some code:
c = np.delete(a,b)
print(len(a))
print(a)
print(len(b))
print(b)
print(len(c))
print(c)
it gives back:
24
[32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55]
20
[46, 35, 37, 54, 40, 49, 34, 48, 50, 38, 42, 47, 33, 52, 41, 36, 39, 44, 55,
51]
24
[32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55]
As you can see, all elements of b appear in a, but they are not being deleted. I cannot figure out why. Any ideas? Thank you.
numpy.delete does not remove the elements contained in b; it deletes a[b]. In other words, b needs to contain the indices to remove. Since your b contains only values larger than the length of a, no values are removed. Currently, out-of-bounds indices are ignored, but this will not be true in the future:
/usr/local/bin/ipython3:1: DeprecationWarning: in the future out of bounds indices will raise an error instead of being ignored by `numpy.delete`.
#!/usr/bin/python3
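If you do want to keep using np.delete, one option is to turn the values in b into the index positions it expects, for example with np.isin (available in any reasonably recent NumPy). A small sketch, using the same a and b as in the question:

import numpy as np

a = np.arange(32, 56)
b = [46, 35, 37, 54, 40, 49, 34, 48, 50, 38, 42, 47, 33, 52, 41, 36, 39, 44, 55, 51]

# np.delete removes positions, not values: find the indices of a whose
# values occur in b, then delete those positions.
idx = np.where(np.isin(a, b))[0]
c = np.delete(a, idx)
# array([32, 43, 45, 53])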
A pure Python solution would be to use set:
set_b = set(b)
c = np.array([x for x in a if x not in set_b])
# array([32, 43, 45, 53])
And using numpy broadcasting to create a mask to determine which values to delete:
c = a[~(a[None,:] == b[:, None]).any(axis=0)]
# array([32, 43, 45, 53])
They are about the same speed with the given example, but the numpy approach takes more memory (because it generates a 2D matrix that contains all combinations of a and b).
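np.setdiff1d is another option worth knowing about here; it returns the sorted unique values of a that are not in b, which matches the desired result since a is already sorted and free of duplicates:

# Note: the result is sorted and de-duplicated, which is fine for this a.
c = np.setdiff1d(a, b)
# array([32, 43, 45, 53])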
I have a function like:
def calcChromaFromPixel(red, green, blue):
    r = int(red)
    g = int(green)
    b = int(blue)
    return math.sqrt(math.pow(r - g, 2) +
                     math.pow(r - b, 2) +
                     math.pow(g - b, 2))
and I have an RGB image that has already been converted into a numpy array with a shape like [width, height, 3], where 3 is the number of color channels.
What I want to do is apply the function to every pixel and take the mean of the results. I have already done the obvious thing and iterated over the array with two loops, but that seems really slow... Is there a faster and prettier way to do it?
Thanks :)
Code:
import math
import numpy as np
np.random.seed(1)
# FAKE-DATA
img = np.random.randint(0,255,size=(4,4,3))
print(img)
# LOOP APPROACH
def calcChromaFromPixel(red, green, blue):
    r = int(red)
    g = int(green)
    b = int(blue)
    return math.sqrt(math.pow(r - g, 2) +
                     math.pow(r - b, 2) +
                     math.pow(g - b, 2))

bla = np.zeros(img.shape[:2])
for a in range(img.shape[0]):
    for b in range(img.shape[1]):
        bla[a,b] = calcChromaFromPixel(*img[a,b])
print('loop')
print(bla)
# VECTORIZED APPROACH
print('vectorized')
res = np.linalg.norm(np.stack(
    (img[:,:,0] - img[:,:,1],
     img[:,:,0] - img[:,:,2],
     img[:,:,1] - img[:,:,2])), axis=0)
print(res)
Out:
[[[ 37 235 140]
[ 72 137 203]
[133 79 192]
[144 129 204]]
[[ 71 237 252]
[134 25 178]
[ 20 254 101]
[146 212 139]]
[[252 234 156]
[157 142 50]
[ 68 215 215]
[233 241 247]]
[[222 96 86]
[141 233 137]
[ 7 63 61]
[ 22 57 1]]]
loop
[[ 242.56545508 160.44313634 138.44132331 97.21111048]
[ 246.05283985 192.94040531 291.07730932 98.66103588]
[ 124.99599994 141.90842117 207.88939367 17.20465053]
[ 185.66636744 133.02631319 77.82030583 69.29646456]]
vectorized
[[ 242.56545508 160.44313634 138.44132331 97.21111048]
[ 246.05283985 192.94040531 291.07730932 98.66103588]
[ 124.99599994 141.90842117 207.88939367 17.20465053]
[ 185.66636744 133.02631319 77.82030583 69.29646456]]
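One caveat: the fake data above is already int64, so it does not show up here, but a real image array is usually uint8, and uint8 subtraction wraps around. A small sketch of the same vectorized computation with a guarding cast (chroma_map is just a hypothetical helper name):

import numpy as np

def chroma_map(img):
    # Cast to a signed type first so channel differences cannot wrap (uint8 would).
    f = img.astype(np.int64)
    diffs = np.stack((f[:, :, 0] - f[:, :, 1],
                      f[:, :, 0] - f[:, :, 2],
                      f[:, :, 1] - f[:, :, 2]))
    return np.linalg.norm(diffs, axis=0)

# Mean over all pixels, as asked in the question:
# mean_chroma = chroma_map(img).mean()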
I am trying to build a character recognition program using Python. I am stuck on sorting the contours. I am using this page as a reference.
I managed to find the contours using the following piece of code:
mo_image = di_image.copy()
contour0 = cv2.findContours(mo_image.copy(),cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE)
contours = [cv2.approxPolyDP(cnt,3,True) for cnt in contour0[0]]
And added the bounding rectangles and segmented the image using this part of the code:
maxArea = 0
rect=[]
for ctr in contours:
    maxArea = max(maxArea,cv2.contourArea(ctr))

if img == "Food.jpg":
    areaRatio = 0.05
elif img == "Plate.jpg":
    areaRatio = 0.5

for ctr in contours:
    if cv2.contourArea(ctr) > maxArea * areaRatio:
        rect.append(cv2.boundingRect(cv2.approxPolyDP(ctr,1,True)))

symbols=[]
for i in rect:
    x = i[0]
    y = i[1]
    w = i[2]
    h = i[3]
    p1 = (x,y)
    p2 = (x+w,y+h)
    cv2.rectangle(mo_image,p1,p2,255,2)
    image = cv2.resize(mo_image[y:y+h,x:x+w],(32,32))
    symbols.append(image.reshape(1024,).astype("uint8"))
testset_data = np.array(symbols)
cv2.imshow("segmented",mo_image)
plt.subplot(2,3,6)
plt.title("Segmented")
plt.imshow(mo_image,'gray')
plt.xticks([]),plt.yticks([]);
However, the resulting segments appear to be in random order.
Here is the original image followed by the processed image with detected segments.
The program then outputs each segment separately, however it is in the order: 4 1 9 8 7 5 3 2 0 6 and not 0 1 2 3 4 5 6 7 8 9.
Simply adding a sort operation on "rect" fixes this, but the same solution won't work for a document with multiple lines.
So my question is: How do I sort the contours from left to right and top to bottom?
I don't think you are going to be able to generate the contours directly in the correct order, but a simple sort as follows should do what you need:
First approach
Use a sort to first group similar y values into rows, and then sort by the x offset of the rectangle. The key is a list holding the estimated row and then the x offset.
The maximum height of a single rectangle is calculated to determine a suitable grouping value for nearest. The 1.4 value is a line spacing value. So for both of your examples nearest is about 70.
import numpy as np
c = np.load(r"rect.npy")
contours = list(c)
# Example - contours = [(287, 117, 13, 46), (102, 117, 34, 47), (513, 116, 36, 49), (454, 116, 32, 49), (395, 116, 28, 48), (334, 116, 31, 49), (168, 116, 26, 49), (43, 116, 30, 48), (224, 115, 33, 50), (211, 33, 34, 47), ( 45, 33, 13, 46), (514, 32, 32, 49), (455, 32, 31, 49), (396, 32, 29, 48), (275, 32, 28, 48), (156, 32, 26, 49), (91, 32, 30, 48), (333, 31, 33, 50)]
max_height = np.max(c[::, 3])
nearest = max_height * 1.4
contours.sort(key=lambda r: [int(nearest * round(float(r[1]) / nearest)), r[0]])
for x, y, w, h in contours:
    print(f"{x:4} {y:4} {w:4} {h:4}")
Second approach
This removes the need to estimate a possible line height and also allows for possible processing by line number:
Sort all the contours by their y-value.
Iterate over each contour and assign a line number for each.
Increment the line number when the new y-value is greater than max_height.
Sort the resulting by_line list which will be in (line, x, y, w, h) order.
A final list comprehension can be used to remove the line number if it is not required (but could be useful?)
# Calculate maximum rectangle height
c = np.array(contours)
max_height = np.max(c[::, 3])
# Sort the contours by y-value
by_y = sorted(contours, key=lambda x: x[1]) # y values
line_y = by_y[0][1] # first y
line = 1
by_line = []
# Assign a line number to each contour
for x, y, w, h in by_y:
    if y > line_y + max_height:
        line_y = y
        line += 1
    by_line.append((line, x, y, w, h))
# This will now sort automatically by line then by x
contours_sorted = [(x, y, w, h) for line, x, y, w, h in sorted(by_line)]
for x, y, w, h in contours_sorted:
    print(f"{x:4} {y:4} {w:4} {h:4}")
Both would display the following output:
36 45 33 40
76 44 29 43
109 43 29 45
145 44 32 43
184 44 21 43
215 44 21 41
241 43 34 45
284 46 31 39
324 46 7 39
337 46 14 41
360 46 26 39
393 46 20 41
421 45 45 41
475 45 32 41
514 43 38 45
39 122 26 41
70 121 40 48
115 123 27 40
148 121 25 45
176 122 28 41
212 124 30 41
247 124 91 40
342 124 28 39
375 124 27 39
405 122 27 43
37 210 25 33
69 199 28 44
102 210 21 33
129 199 28 44
163 210 26 33
195 197 16 44
214 210 27 44
247 199 25 42
281 212 7 29
292 212 11 42
310 199 23 43
340 199 7 42
355 211 43 30
406 213 24 28
437 209 31 35
473 210 28 43
506 210 28 43
541 210 17 31
37 288 21 33
62 282 15 39
86 290 24 28
116 290 72 30
192 290 23 30
218 290 26 41
249 288 20 33
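As a small follow-up to the second approach: keeping the line number also makes per-line processing straightforward, for example with itertools.groupby. A sketch building on the by_line list above:

from itertools import groupby

# by_line holds (line, x, y, w, h) tuples; sorting then grouping on the
# first element collects the rectangles of each text line together.
for line, items in groupby(sorted(by_line), key=lambda t: t[0]):
    row = [(x, y, w, h) for _, x, y, w, h in items]
    print(f"line {line}: {len(row)} rectangles")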
While solving my own task, I came up with the following approach (it is not optimized and can probably be improved):
import pandas as pd
import cv2
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
import matplotlib
matplotlib.rcParams['figure.figsize'] = (20.0, 10.0)
matplotlib.rcParams['image.cmap'] = 'gray'
imageCopy = cv2.imread("./test.png")
imageGray = cv2.imread("./test.png", 0)
image = imageCopy.copy()
contours, hierarchy = cv2.findContours(imageGray, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
bboxes = [cv2.boundingRect(i) for i in contours]
bboxes=sorted(bboxes, key=lambda x: x[0])
df=pd.DataFrame(bboxes, columns=['x','y','w', 'h'], dtype=int)
df["x2"] = df["x"]+df["w"] # adding column for x on the right side
df = df.sort_values(["x","y", "x2"]) # sorting
for i in range(2):  # swap rows by their coordinates several times
    # to sort them completely
    for ind in range(len(df)-1):
        # print(ind, df.iloc[ind][4] > df.iloc[ind+1][0])
        if df.iloc[ind][4] > df.iloc[ind+1][0] and df.iloc[ind][1] > df.iloc[ind+1][1]:
            df.iloc[ind], df.iloc[ind+1] = df.iloc[ind+1].copy(), df.iloc[ind].copy()
num=0
for box in df.values.tolist():
    x, y, w, h, hy = box
    cv2.rectangle(image, (x, y), (x+w, y+h), (255, 0, 255), 2)
    # Mark the contour number
    cv2.putText(image, "{}".format(num + 1), (x+40, y-10), cv2.FONT_HERSHEY_SIMPLEX, 1,
                (0, 0, 255), 2)
    num += 1
plt.imshow(image[:,:,::-1])
Original sorting:
Top-to-bottom, left-to-right:
The original image, if you want to test it:
Given a binary image thresh, I think the shortest way is:
import numpy as np
import cv2
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)  # thresh is a binary image
cntr_index_LtoR = np.argsort([cv2.boundingRect(i)[0] for i in contours])
Here, cv2.boundingRect(i)[0] returns just x from x,y,w,h = cv2.boundingRect(i) for the ith contour.
Similarly, you can sort top to bottom.
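For instance, a sketch of the same one-liner sorted on the y value instead (index 1 of the bounding rect):

cntr_index_TtoB = np.argsort([cv2.boundingRect(i)[1] for i in contours])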
contours.sort(key=lambda r: round(float(r[1] / nearest))) will have a similar effect to (int(nearest * round(float(r[1])/nearest)) * max_width + r[0]).
After finding the contours using contours = cv2.findContours(), use:
boundary=[]
for c,cnt in enumerate(contours):
    x,y,w,h = cv2.boundingRect(cnt)
    boundary.append((x,y,w,h))
count=np.asarray(boundary)
max_width = np.sum(count[::, (0, 2)], axis=1).max()
max_height = np.max(count[::, 3])
nearest = max_height * 1.4
ind_list=np.lexsort((count[:,0],count[:,1]))
c=count[ind_list]
Now c will be sorted left to right and top to bottom.
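If the rows of text do not share exactly the same y value, the nearest value computed above (otherwise unused here) could be reused to bucket y into rows before the lexsort, borrowing the rounding idea from the earlier answer. A rough sketch:

# lexsort treats the last key as the primary one, so sort by row bucket, then x.
row_key = (nearest * np.round(count[:, 1] / nearest)).astype(int)
ind_list = np.lexsort((count[:, 0], row_key))
c = count[ind_list]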
A simple way to sort contours by their bounding boxes (x, y, w, h), left to right and top to bottom, is as follows.
You can get each bounding box with the cv2.boundingRect() method.
def sort_bbox(boundingBoxes):
    '''
    Sort bounding boxes from left to right, top to bottom.
    '''
    # combine x and y into a single key and sort based on that
    boundingBoxes = sorted(boundingBoxes, key=lambda b: b[0] + b[1], reverse=False)
    return boundingBoxes
The method is not extensively tested with all cases, but I found it really effective for the project I was doing.
Link to the sorted() documentation for reference: https://docs.python.org/3/howto/sorting.html
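For example, assuming contours has already been obtained from cv2.findContours, it could be used roughly like this:

# Build the bounding boxes for each contour, then sort them with the helper above.
boundingBoxes = [cv2.boundingRect(c) for c in contours]
sorted_boxes = sort_bbox(boundingBoxes)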
def sort_contours(contours, x_axis_sort='LEFT_TO_RIGHT', y_axis_sort='TOP_TO_BOTTOM'):
    # initialize the reverse flags
    x_reverse = False
    y_reverse = False
    if x_axis_sort == 'RIGHT_TO_LEFT':
        x_reverse = True
    if y_axis_sort == 'BOTTOM_TO_TOP':
        y_reverse = True

    boundingBoxes = [cv2.boundingRect(c) for c in contours]

    # sorting on x-axis
    sortedByX = zip(*sorted(zip(contours, boundingBoxes),
                            key=lambda b: b[1][0], reverse=x_reverse))

    # sorting on y-axis
    (contours, boundingBoxes) = zip(*sorted(zip(*sortedByX),
                                            key=lambda b: b[1][1], reverse=y_reverse))

    # return the list of sorted contours and bounding boxes
    return (contours, boundingBoxes)
contours, hierarchy = cv2.findContours(img_vh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
contours, boundingBoxes = sort_contours(contours, x_axis_sort='LEFT_TO_RIGHT', y_axis_sort='TOP_TO_BOTTOM')
I have a 2d array of 300 x 200 from an image. I would like to generate a list of coordinate pairs for every 20 x 20 chunk until the end of the array.
Generating the coordinate pairs for a grid is straightforward, but I'm stuck on how to iterate over the 20 x 20 chunks of an array. I'm new to numpy and arrays.
w, h = 300, 200
coordinates = [(x, y) for x in xrange(w) for y in xrange(h)]
If you want to iterate through the original array you can do something like this:
import numpy as np

w,h = 6,4
n = 2 #Height of window
m = 2 #Width of window
k = h / n #Must divide evenly
l = w / m #Must divide evenly
data = np.random.randint(0,90,(h,w))
data
[[45 39 36 25 30 21]
[48 27 46 48 20 87]
[19 20 59 27 41 52]
[52 11 42 30 85 49]]
for h in xrange(k):
    for w in xrange(l):
        print data[h*n:(h+1)*n,w*m:(w+1)*m]
[[45 39]
[48 27]]
[[36 25]
[46 48]]
[[30 21]
[20 87]]
[[19 20]
[52 11]]
[[59 27]
[42 30]]
[[41 52]
[85 49]]
You can switch the order of the loop to have different windows occurring first.
You can also pre generate all indices:
inds = np.arange(w*h).reshape(k,n,l,m).swapaxes(1,2).reshape(k,l,n*m)
#The final reshape can be reshape(k*l,n*m) if you do not want a double loop.
for h in xrange(k):
    for w in xrange(l):
        print np.take(data,inds[h,w])
[45 39 48 27]
[36 25 46 48]
[30 21 20 87]
[19 20 52 11]
[59 27 42 30]
[41 52 85 49]
You also have this option:
[np.split(x,k,axis=0) for x in np.split(data,l,axis=1)]
[[array([[45, 39],
[48, 27]]),
array([[19, 20],
[52, 11]])],
[array([[36, 25],
[46, 48]]),
array([[59, 27],
[42, 30]])],
[array([[30, 21],
[20, 87]]),
array([[41, 52],
[85, 49]])]]
Note that for the above I switched the output ordering; you can use:
[np.split(x,l,axis=1) for x in np.split(data,k,axis=0)]
to return the same ordering as all the others; I just wanted to give this as an example.
The following code generates a list totalList in which each element itself is a list of all coordinates in a specific 20x20 block. You can reduce this to another list comprehension if you like, but I personally don't think that's very readable.
w, h = 300, 200
blockSize = 20
totalList = []
for xStart in xrange(0, w, blockSize):
    xEnd = min(xStart+blockSize, w)
    for yStart in range(0, h, blockSize):
        yEnd = min(yStart+blockSize, h)
        partCoords = [(x,y) for x in xrange(xStart, xEnd) for y in xrange(yStart, yEnd)]
        totalList.append(partCoords)
If I understand correctly:
w, h = 300, 200
coordinates = [(x,y) for x in xrange(0, w, 20) for y in xrange(0, h, 20)]
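If you then need the actual 20 x 20 chunk at each of those coordinates, you can slice with them. A rough sketch, where img is a placeholder name for the 2-D image array from the question, assumed to be indexed [y, x] (row first):

# Each (x, y) above is the top-left corner of a block.
blocks = [img[y:y+20, x:x+20] for (x, y) in coordinates]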