I am writing a script that encrypts and decrypts an image using the RSA algorithm. My public key is (7, 187) and my private key is (23, 187). The encryption calculation is correct: for an entry of 41 in the image matrix, the encrypted value is 46. But the decryption is not giving the right result: for 46 it gives 136, and every entry of 46 in the encrypted matrix becomes 136 in the decrypted matrix. I don't know why this is happening; when I do the same calculation at the Python prompt (or shell) it gives the correct answer.
In the script I first convert the RGB image to grayscale and then to a 2D numpy array; then, for each element, I apply the RSA algorithm (the keys) and save the result as an image. Then I apply the decryption key to the encrypted matrix, and that is where the problem occurs. Here's the code:
from PIL import Image
import numpy as np
from pylab import *
#encryption
img1 = (Image.open('image.jpeg').convert('L'))
img1.show()
img = array((Image.open('image.jpeg').convert('L')))
a,b = img.shape #saving the no of rows and col in a tuple
print('\n\nOriginal image: ')
print(img)
print((a,b))
tup = a,b
for i in range(0, tup[0]):
    for j in range(0, tup[1]):
        img[i][j] = pow(img[i][j], 7) % 187
print('\n\nEncrypted image: ')
print(img)
imgOut = Image.fromarray(img)
imgOut.show()
imgOut.save('img.bmp')
#decryption
img2 = (Image.open('img.bmp'))
img2.show()
img3 = array(Image.open('img.bmp'))
print('\n\nEncrypted image: ')
print(img3)
a1,b1 = img3.shape
print((a1,b1))
tup1 = a1,b1
for i1 in range(0, tup1[0]):
    for j1 in range(0, tup1[1]):
        img3[i1][j1] = pow(img3[i1][j1], 23) % 187
print('\n\nDecrypted image: ')
print(img3)
imgOut1 = Image.fromarray(img3)
imgOut1.show()
print(type(img))
The values of the matrices:
Original image:
[[41 42 45 ... 47 41 33]
[41 43 45 ... 44 38 30]
[41 42 46 ... 41 36 30]
...
[43 43 44 ... 56 56 55]
[45 44 45 ... 55 55 54]
[46 46 46 ... 53 54 54]]
Encrypted image:
[[ 46 15 122 ... 174 46 33]
[ 46 87 122 ... 22 47 123]
[ 46 15 7 ... 46 9 123]
...
[ 87 87 22 ... 78 78 132]
[122 22 122 ... 132 132 164]
[ 7 7 7 ... 26 164 164]]
Decrypted image:
[[136 70 24 ... 178 136 164]
[136 111 24 ... 146 141 88]
[136 70 96 ... 136 100 88]
...
[111 111 146 ... 140 140 1]
[ 24 146 24 ... 1 1 81]
[ 96 96 96 ... 52 81 81]]
Any help will be greatly appreciated. Thank You.
I think you will get on better using the 3rd parameter to the pow() function which does the modulus internally for you.
Here is a little example without the complexity of loading images - just imagine it is a greyscale gradient from black to white.
# Make single row greyscale gradient from 0..255
img = [ x for x in range(256) ]
# Create encrypted version
enc = [ pow(x,7,187) for x in img ]
# Decrypt back to plaintext
dec = [ pow(x,23,187) for x in enc ]
It decrypts back into the original values for 0..186 and goes wrong above that - not because of overflow, but because RSA with modulus 187 only defines an invertible mapping on the residues 0..186: any pixel value of 187 or more is reduced modulo 187 during encryption, so the original value cannot be recovered. For full 8-bit greyscale data you would need a modulus larger than 255.
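Applied back to the image script, here is a minimal sketch of the same idea (my own adaptation, not the original code): keep the working array in a wider integer dtype than uint8 and do the modular exponentiation on plain Python ints with the 3-argument pow(). It round-trips only pixel values below 187, for the reason given above.
import numpy as np
from PIL import Image
# Load as greyscale, but in int64 so nothing gets squeezed back into uint8
img = np.array(Image.open('image.jpeg').convert('L'), dtype=np.int64)
enc = np.empty_like(img)
dec = np.empty_like(img)
for i in range(img.shape[0]):
    for j in range(img.shape[1]):
        enc[i, j] = pow(int(img[i, j]), 7, 187)   # encrypt: c = m^7 mod 187
for i in range(img.shape[0]):
    for j in range(img.shape[1]):
        dec[i, j] = pow(int(enc[i, j]), 23, 187)  # decrypt: m = c^23 mod 187
print((dec == img).all())  # True only if every original pixel was < 187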
The following Python script computes the 2D convolution of the blue color channel of a .jpg image:
It reads a 6x6 BGR image.
It extracts channel 0, which in cv2 corresponds to the blue color channel.
I print the values of the channel.
The input data type is uint8. Therefore, when calling cv2.filter2D() with ddepth=-1, the output will have data type uint8 too, and values >255 cannot be represented. Hence, I decided to convert the image from uint8 to, for example, short, to have a wider numeric range and be able to represent the values at the output of the filter.
I define a kernel of size 3x3 (see the values of the kernel in the code below).
I filter the blue channel with the kernel and obtain a filtered image of the same size, due to the padding.
The filtered values given by the filter2D() function don't correspond to what I would expect. For example, for the top-left value the function returns 449, but I would have expected 425, since 71*0+60*0+65*1+69*1+58*3+61*1+89*0+66*0+56*1=425.
Does anyone have any idea how the filtered image is being calculated by the filter2D() function? Is there anything wrong with my proposed calculation?
import cv2
import numpy as np
image = cv2.imread('image.jpg')
# Read image 6x6x3 (BGR)
blue_channel=image[:,:,0]
# Obtain blue channel
print(blue_channel)
# Result is:
#[[71 60 65 71 67 67]
# [69 58 61 69 69 67]
# [89 66 56 55 45 37]
# [65 37 27 32 31 30]
# [46 23 22 38 43 45]
# [55 36 44 60 60 47]]
blue_channel=np.short(blue_channel)
# Convert image from uint8 to short. Otherwise, output of the filter will have the same data type as the
# input when using ddepth=-1 and hence filtered values >255 won't be able to be represented
print(blue_channel)
# Result is (same as before, ok...):
# 71 60 65 71 67 67
# 69 58 61 69 69 67
# 89 66 56 55 45 37
# 65 37 27 32 31 30
# 46 23 22 38 43 45
# 55 36 44 60 60 47
kernel=np.array([ [0, 0, 1], [1, 3, 1], [0, 0, 1] ])
# Kernel is of size 3x3
# [0 0 1]
# [1 3 1]
# [0 0 1]
filtered_image = cv2.filter2D(blue_channel, -1, kernel)
# Blue channel is filtered with the kernel and the result gives:
# 449 438 464 483 473 473
# 449 425 436 449 447 451
# 494 431 390 366 324 301
# 358 281 243 242 237 240
# 257 208 219 270 289 312
# 283 251 304 370 377 347
print(filtered_image)
# Why top left filtered value is 449?
# I would expect this:
# 71*0+60*0+65*1+69*1+58*3+61*1+89*0+66*0+56*1=425
# In short, I would expect 425 instead of 449, how is that 449 computed?
Your calculation is not wrong, but you have actually written the convolution for the value at [2,2], which matches your result of 425.
To calculate a value such as [1,1] you need values outside of the image, so the surrounding edges have to be handled somehow. By default, filter2D handles them as "reflect 101" (BORDER_REFLECT_101); in the Wikipedia article on edge handling this is the mirror scheme shifted by +1.
To understand the difference between mirror (reflect) and reflect 101:
Mirror (reflect):
left edge | image | right edge
    b a   | a b c | c b
Reflect 101:
left edge | image | right edge
    c b   | a b c | b a
So the calculation for [1,1] with the default edge handling in filter2D would be:
0*58 + 0*69 + 1*58 + 1*60 + 3*71 + 1*60 + 0*58 + 0*69 + 1*58 = 449
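If you want to verify this yourself, here is a small sketch (my own addition, not part of the original answer) that reproduces the 449 by padding the blue channel with NumPy. np.pad's 'reflect' mode corresponds to OpenCV's default BORDER_REFLECT_101, while 'symmetric' would be the plain mirror scheme; note also that filter2D performs correlation, so the kernel is not flipped.
import numpy as np
blue = np.array([[71, 60, 65, 71, 67, 67],
                 [69, 58, 61, 69, 69, 67],
                 [89, 66, 56, 55, 45, 37],
                 [65, 37, 27, 32, 31, 30],
                 [46, 23, 22, 38, 43, 45],
                 [55, 36, 44, 60, 60, 47]], dtype=np.int64)
kernel = np.array([[0, 0, 1],
                   [1, 3, 1],
                   [0, 0, 1]])
padded = np.pad(blue, 1, mode='reflect')       # reflect-101 style border
top_left = np.sum(kernel * padded[0:3, 0:3])   # 3x3 window centred on the top-left pixel
print(top_left)                                # 449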
I have an array of strings that I read from a file, and I want to compare each line of the file to a specific string. The file is large (about 200 MB of lines).
I have followed this tutorial https://nyu-cds.github.io/python-numba/05-cuda/ but it doesn't show exactly how to deal with an array of strings/characters.
import numpy as np
from numba import cuda
@cuda.jit
def my_kernel(io_array):
    tx = cuda.threadIdx.x
    ty = cuda.blockIdx.x
    bw = cuda.blockDim.x
    pos = tx + ty * bw
    if pos < io_array.size:  # Check array boundaries
        io_array[pos]  # i want here to compare each line of the string array to a specific line

def main():
    a = open("test.txt", 'r')  # open file in read mode
    print("the file contains:")
    data = country = np.array(a.read())
    # Set the number of threads in a block
    threadsperblock = 32
    # Calculate the number of thread blocks in the grid
    blockspergrid = (data.size + (threadsperblock - 1)) // threadsperblock
    # Now start the kernel
    my_kernel[blockspergrid, threadsperblock](data)
    # Print the result
    print(data)

if __name__ == '__main__':
    main()
I have two problems.
First: how do I pass the sentence (string) that I want to compare each line of my file against to the kernel function (alongside io_array, without affecting the thread computation)?
Second: how do I handle a string array? I get this error when I run the above code:
this error is usually caused by passing an argument of a type that is unsupported by the named function.
[1] During: typing of intrinsic-call at test2.py (18)
File "test2.py", line 18:
def my_kernel(io_array):
<source elided>
if pos < io_array.size: # Check array boundaries
io_array[pos] # do the computation
P.S. I'm new to CUDA and have just started learning it.
First of all this:
data = country = np.array(a.read())
doesn't do what you think it does. It does not yield a numpy array that you can index like this:
io_array[pos]
If you don't believe me, just try that in ordinary python code with something like:
print(data[0])
and you'll get an error. If you want help with that, just ask your question on the python or numpy tag.
So we need a different method to load the string data from disk. For simplicity, I choose to use numpy.fromfile(). This method will require that all lines in your file are of the same width. I like that concept. There's more information you would have to describe if you want to handle lines of varying lengths.
If we start out that way, we can load the data as an array of bytes, and use that:
$ cat test.txt
the quick brown fox.............
jumped over the lazy dog........
repeatedly......................
$ cat t43.py
import numpy as np
from numba import cuda
@cuda.jit
def my_kernel(str_array, check_str, length, lines, result):
    col, line = cuda.grid(2)
    pos = (line*(length+1))+col
    if col < length and line < lines:  # Check array boundaries
        if str_array[pos] != check_str[col]:
            result[line] = 0

def main():
    a = np.fromfile("test.txt", dtype=np.byte)
    print("the file contains:")
    print(a)
    print("array length is:")
    print(a.shape[0])
    print("the check string is:")
    b = a[33:65]
    print(b)
    i = 0
    while a[i] != 10:
        i = i+1
    line_length = i
    print("line length is:")
    print(line_length)
    print("number of lines is:")
    line_count = a.shape[0]//(line_length+1)
    print(line_count)
    res = np.ones(line_count)
    # Set the number of threads in a block
    threadsperblock = (32, 32)
    # Calculate the number of thread blocks in the grid
    blocks_x = (line_length//32)+1
    blocks_y = (line_count//32)+1
    blockspergrid = (blocks_x, blocks_y)
    # Now start the kernel
    my_kernel[blockspergrid, threadsperblock](a, b, line_length, line_count, res)
    # Print the result
    print("matching lines (match = 1):")
    print(res)

if __name__ == '__main__':
    main()
$ python t43.py
the file contains:
[116 104 101 32 113 117 105 99 107 32 98 114 111 119 110 32 102 111
120 46 46 46 46 46 46 46 46 46 46 46 46 46 10 106 117 109
112 101 100 32 111 118 101 114 32 116 104 101 32 108 97 122 121 32
100 111 103 46 46 46 46 46 46 46 46 10 114 101 112 101 97 116
101 100 108 121 46 46 46 46 46 46 46 46 46 46 46 46 46 46
46 46 46 46 46 46 46 46 10]
array length is:
99
the check string is:
[106 117 109 112 101 100 32 111 118 101 114 32 116 104 101 32 108 97
122 121 32 100 111 103 46 46 46 46 46 46 46 46]
line length is:
32
number of lines is:
3
matching lines (match = 1):
[ 0. 1. 0.]
$
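As a sanity check, here is a small CPU-only sketch (my own addition, not part of the answer) that does the same comparison with plain NumPy, assuming the same fixed line width as above:
import numpy as np
a = np.fromfile("test.txt", dtype=np.byte)
line_length = 32                                          # as computed above
lines = a.reshape(-1, line_length + 1)[:, :line_length]   # drop the newline column
check = a[33:65]                                          # the second line, same as b above
print((lines == check).all(axis=1).astype(int))           # e.g. [0 1 0]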
If I have a set of data that's of shape (1000,1000) and I know that the values I need from it are contained within the indices (25:888,11:957), how would I go about separating the two sections of data from one another?
I couldn't figure out how to get np.delete() to handle this specific 2D case, and I also need both the good and the bad sections of the data for analysis, so I can't just restrict my array bounds to the good indices.
I feel like there's a simple solution I'm missing here.
Is this how you want to divide the array?
In [364]: arr = np.ones((1000,1000),int)
In [365]: beta = arr[25:888, 11:957]
In [366]: beta.shape
Out[366]: (863, 946)
In [367]: arr[:25,:].shape
Out[367]: (25, 1000)
In [368]: arr[888:,:].shape
Out[368]: (112, 1000)
In [369]: arr[25:888,:11].shape
Out[369]: (863, 11)
In [370]: arr[25:888,957:].shape
Out[370]: (863, 43)
I'm imagining a square with a rectangle cut out of the middle. It's easy to specify that rectangle, but the frame has to be viewed as 4 rectangles - unless it is described via a mask of what is missing.
Checking that I got everything:
In [376]: x = np.array([_366,_367,_368,_369,_370])
In [377]: np.multiply.reduce(x, axis=1).sum()
Out[377]: 1000000
Let's say your original numpy array is my_arr
Extracting the "Good" Section:
This is easy because the good section has a rectangular shape.
good_arr = my_arr[25:888, 11:957]
Extracting the "Bad" Section:
The "bad" section doesn't have a rectangular shape. Rather, it has the shape of a rectangle with a rectangular hole cut out of it.
So, you can't really store the "bad" section alone, in any array-like structure, unless you're ok with wasting some extra space to deal with the cut out portion.
What are your options for the "Bad" Section?
Option 1:
Be happy and content with having extracted the good section. Let the bad section remain as part of the original my_arr. While iterating through my_arr, you can always discriminate between good and bad items based on the indices. The disadvantage is that, whenever you want to process only the bad items, you have to do it through a nested double loop, rather than use some vectorized features of numpy.
Option 2:
Suppose we want to perform some operations such as row-wise totals or column-wise totals on only the bad items of my_arr, and suppose you don't want the overhead of the nested for loops. You can create something called a numpy masked array. With a masked array, you can perform most of your usual numpy operations, and numpy will automatically exclude masked out items from the calculations. Note that internally, there will be some memory wastage involved, just to store an item as "masked"
The code below illustrates how you can create a masked array called masked_arr from your original array my_arr:
import numpy as np
my_size = 10 # In your case, 1000
r_1, r_2 = 2, 8 # In your case, r_1 = 25, r_2 = 889 (which is 888+1)
c_1, c_2 = 3, 5 # In your case, c_1 = 11, c_2 = 958 (which is 957+1)
# Using nested list comprehension, build a boolean mask as a list of lists, of shape (my_size, my_size).
# The mask will have False everywhere, except in the sub-region [r_1:r_2, c_1:c_2], which will have True.
mask_list = [[True if ((r in range(r_1, r_2)) and (c in range(c_1, c_2))) else False
              for c in range(my_size)] for r in range(my_size)]
# Your original, complete 2d array. Let's just fill it with some "toy data"
my_arr = np.arange((my_size * my_size)).reshape(my_size, my_size)
print (my_arr)
masked_arr = np.ma.masked_where(mask_list, my_arr)
print ("masked_arr is:\n", masked_arr, ", and its shape is:", masked_arr.shape)
The output of the above is:
[[ 0 1 2 3 4 5 6 7 8 9]
[10 11 12 13 14 15 16 17 18 19]
[20 21 22 23 24 25 26 27 28 29]
[30 31 32 33 34 35 36 37 38 39]
[40 41 42 43 44 45 46 47 48 49]
[50 51 52 53 54 55 56 57 58 59]
[60 61 62 63 64 65 66 67 68 69]
[70 71 72 73 74 75 76 77 78 79]
[80 81 82 83 84 85 86 87 88 89]
[90 91 92 93 94 95 96 97 98 99]]
masked_arr is:
[[0 1 2 3 4 5 6 7 8 9]
[10 11 12 13 14 15 16 17 18 19]
[20 21 22 -- -- 25 26 27 28 29]
[30 31 32 -- -- 35 36 37 38 39]
[40 41 42 -- -- 45 46 47 48 49]
[50 51 52 -- -- 55 56 57 58 59]
[60 61 62 -- -- 65 66 67 68 69]
[70 71 72 -- -- 75 76 77 78 79]
[80 81 82 83 84 85 86 87 88 89]
[90 91 92 93 94 95 96 97 98 99]] , and its shape is: (10, 10)
Now that you have a masked array, you will be able to perform most of the numpy operations on it, and numpy will automatically exclude the masked items (the ones that appear as "--" when you print the masked array)
Some examples of what you can do with the masked array:
# Now, you can print column-wise totals, of only the bad items.
print (masked_arr.sum(axis=0))
# Or row-wise totals, for that matter.
print (masked_arr.sum(axis=1))
The output of the above is:
[450 460 470 192 196 500 510 520 530 540]
[45 145 198 278 358 438 518 598 845 945]
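As a side note, here is a sketch (my own addition) of an alternative way to build the same mask without the nested list comprehension, using plain NumPy slicing, with the same toy sizes and bounds as above:
import numpy as np
my_size = 10
r_1, r_2 = 2, 8
c_1, c_2 = 3, 5
my_arr = np.arange(my_size * my_size).reshape(my_size, my_size)
# Boolean mask: True inside the "good" rectangle, False elsewhere.
mask = np.zeros((my_size, my_size), dtype=bool)
mask[r_1:r_2, c_1:c_2] = True
# Same masked array as above: the good region is hidden, the bad items remain.
masked_arr = np.ma.masked_array(my_arr, mask=mask)
# The mask also gives both sections directly, as flat 1-D arrays:
good_values = my_arr[mask]    # the inner rectangle
bad_values = my_arr[~mask]    # the surrounding frame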
I want to save an array whose elements are over 255 to an image file (.jp2). The data type is 'int32'. Is there any method to save an array with elements over 255 to an image? This is for processing Sentinel-2 datasets.
I already tried it with cv2, PIL and scipy functions:
- imwrite, scipy.misc.toimage, scipy.misc.imsave, .save()...
but it doesn't work.
For example, the h_01 array looks like this:
[[1419. 1448.5 1444. ... 1388.5 1390.5 1391.5]
[1449.5 1434. 1448. ... 1370. 1372. 1373. ]
[1424.5 1428.5 1457. ... 1353.5 1354.5 1378. ]
...
[1430. 1412.5 1422.5 ... 1500. 1474.5 1495. ]
[1449.5 1409.5 1417.5 ... 1472.5 1492. 1512.5]
[1447.5 1429. 1437. ... 1492. 1511.5 1509.5]]
and I changed my data to int32.
h_01=np.array(h_01,np.int32)
then I save this array to an image:
scipy.misc.toimage(h_01).save(opt.image+"_01.jp2")
With this method, the array is saved like this:
[[28 33 32 ... 23 23 23]
[33 31 33 ... 20 20 20]
[29 30 34 ... 17 17 21]
...
[30 27 28 ... 42 37 41]
[33 26 28 ... 37 40 44]
[33 30 31 ... 40 44 43]]
I want to save the array, whose elements are over 255, to an image file (.jp2), but it doesn't work: the saved results are not over 255 (there is loss).
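One possible approach, as a sketch of my own (not something from the post): if the values fit in 16 bits, convert the array to uint16 and write it with OpenCV, which, as far as I know, can save 16-bit single-channel images for PNG, TIFF and JPEG 2000, provided it was built with JPEG 2000 support. The filename out.jp2 is made up; the loss seen above presumably comes from scipy.misc.toimage rescaling the data to 0..255 by default.
import numpy as np
import cv2
# h_01 is the array from the question; a deterministic stand-in is used here.
h_01 = np.linspace(1300.0, 1500.0, 10000).reshape(100, 100)
h_16 = np.round(h_01).astype(np.uint16)   # values around 1400 fit easily in uint16
cv2.imwrite('out.jp2', h_16)
# Read it back unchanged to confirm the >255 values survived.
check = cv2.imread('out.jp2', cv2.IMREAD_UNCHANGED)
print(check.dtype, check.max())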
I am confused about the following code:
import tensorflow as tf
import numpy as np
from tensorflow.python.framework import ops
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import control_flow_ops
from tensorflow.python.ops import math_ops
from tensorflow.python.framework import dtypes
'''
Randomly crop a tensor, then return the crop position
'''
def random_crop(value, size, seed=None, name=None):
    with ops.name_scope(name, "random_crop", [value, size]) as name:
        value = ops.convert_to_tensor(value, name="value")
        size = ops.convert_to_tensor(size, dtype=dtypes.int32, name="size")
        shape = array_ops.shape(value)
        check = control_flow_ops.Assert(
            math_ops.reduce_all(shape >= size),
            ["Need value.shape >= size, got ", shape, size],
            summarize=1000)
        shape = control_flow_ops.with_dependencies([check], shape)
        limit = shape - size + 1
        begin = tf.random_uniform(
            array_ops.shape(shape),
            dtype=size.dtype,
            maxval=size.dtype.max,
            seed=seed) % limit
        return tf.slice(value, begin=begin, size=size, name=name), begin

sess = tf.InteractiveSession()
size = [10]
a = tf.constant(np.arange(0, 100, 1))
print(a.eval())
a_crop, begin = random_crop(a, size=size, seed=0)
print("offset: {}".format(begin.eval()))
print("a_crop: {}".format(a_crop.eval()))
a_slice = tf.slice(a, begin=begin, size=size)
print("a_slice: {}".format(a_slice.eval()))
assert (tf.reduce_all(tf.equal(a_crop, a_slice)).eval() == True)
sess.close()
outputs:
[ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47
48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71
72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95
96 97 98 99]
offset: [46]
a_crop: [89 90 91 92 93 94 95 96 97 98]
a_slice: [27 28 29 30 31 32 33 34 35 36]
There are two tf.slice options:
(1). called in function random_crop, such as tf.slice(value, begin=begin, size=size, name=name)
(2). called as a_slice = tf.slice(a, begin=begin, size=size)
The parameters (values, begin and size) of those two slice operations are the same.
However, why are the printed values of a_crop and a_slice different, yet tf.reduce_all(tf.equal(a_crop, a_slice)).eval() is True?
Thanks
EDIT1
Thanks @xdurch0, I understand the first question now.
TensorFlow's random_uniform seems to behave like a random number generator, producing a new value on each evaluation.
import tensorflow as tf
import numpy as np
sess = tf.InteractiveSession()
size = [10]
np_begin = np.random.randint(0, 50, size=1)
tf_begin = tf.random_uniform(shape = [1], minval=0, maxval=50, dtype=tf.int32, seed = 0)
a = tf.constant(np.arange(0, 100, 1))
a_slice = tf.slice(a, np_begin, size = size)
print ("a_slice: {}".format(a_slice.eval()))
a_slice = tf.slice(a, np_begin, size = size)
print ("a_slice: {}".format(a_slice.eval()))
a_slice = tf.slice(a, tf_begin, size = size)
print ("a_slice: {}".format(a_slice.eval()))
a_slice = tf.slice(a, tf_begin, size = size)
print ("a_slice: {}".format(a_slice.eval()))
sess.close()
output
a_slice: [42 43 44 45 46 47 48 49 50 51]
a_slice: [42 43 44 45 46 47 48 49 50 51]
a_slice: [41 42 43 44 45 46 47 48 49 50]
a_slice: [29 30 31 32 33 34 35 36 37 38]
The confusing thing here is that tf.random_uniform (like every random operation in TensorFlow) produces a new, different value on each evaluation call (each call to .eval() or, in general, each call to tf.Session.run). So if you evaluate a_crop you get one thing, and if you then evaluate a_slice you get a different thing, but if you evaluate tf.reduce_all(tf.equal(a_crop, a_slice)) you get True, because everything is computed in a single evaluation step, so only one random value is produced and it determines the value of both a_crop and a_slice. As another example, if you run tf.stack([a_crop, a_slice]).eval() you will get a tensor with two equal rows; again, only one random value was produced. More generally, if you call tf.Session.run with multiple tensors to evaluate, all the computations in that call will use the same random values.
As a side note, if you actually need a random value in a computation that you want to maintain for a later computation, the easiest thing would be to just retrieve it with tf.Session.run, along with any other needed computation, and feed it back later through feed_dict; or you could have a tf.Variable and store the random value there. A more advanced possibility would be to use partial_run, an experimental API that allows you to evaluate part of the computation graph and continue evaluating it later, while maintaining the same state (i.e. the same random values, among other things).
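A small sketch of the "fetch once, feed back later" idea (my own illustration, using the TF1-style API from the question; the variable names are made up):
import numpy as np
import tensorflow as tf
sess = tf.InteractiveSession()
a = tf.constant(np.arange(0, 100, 1))
begin = tf.random_uniform(shape=[1], minval=0, maxval=90, dtype=tf.int32, seed=0)
# Evaluate the random offset exactly once and keep it as a plain numpy value.
begin_val = sess.run(begin)
# Feed the saved offset back in; every run now uses the same offset.
begin_ph = tf.placeholder(tf.int32, shape=[1])
a_slice = tf.slice(a, begin=begin_ph, size=[10])
print(sess.run(a_slice, feed_dict={begin_ph: begin_val}))
print(sess.run(a_slice, feed_dict={begin_ph: begin_val}))  # identical output
sess.close()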