Tkinter: format RGB values into string - python

I am new to Tkinter (and Python) and I would like to find the most efficient way to format RGB values into a string so it can be used with the PhotoImage.put() function.
Let's say I have a Numpy rank 3 array in which the RGB values are stored, the 3rd dimension having a length of 3 for red, green and blue respectively. The most intuitive way to proceed would be:
for i in range(0, n_pixels_x):
    for j in range(0, n_pixels_y):
        hexcode = "#%02x%02x%02x" % (array[i,j,0], array[i,j,1], array[i,j,2])
        img.put(hexcode, (j,i))
Unfortunately, this is way too slow for large images.
As described in the PhotoImage Wiki, it is possible to pass one large string to put() so the function is called only once. Then, I need to efficiently convert my array into such a string, which should be formatted like this (for a 4x2 image):
"{#ff0000 #ff0000 #ff0000 #ff0000} {#ff0000 #ff0000 #ff0000 #ff0000}"
Again, this could easily be done with nested for loops, but I would like to avoid them for efficiency reasons. Is there any way to use join() in order to do what I want?
If needed, I can store the content of my array differently, the only constraint being that I should be able to modify the color values easily.
Edit: After working on this a bit, I found a way to format my values approximately 10 times faster than by using nested loops. Here is the commented piece of code:
# 1. Create RGB array
array = np.zeros((n_pixels_x*n_pixels_y, 3))
array = np.asarray(array, dtype = "uint32")
array[1,:] = [0, 100, 255]
# 2. Create a format string
fmt_str = "{" + " ".join(["#%06x"]*n_pixels_x) + "}"
fmt_str = " ".join([fmt_str]*n_pixels_y)
# 3. Convert RGB values to hex
array_hex = (array[:,0]<<16) + (array[:,1]<<8) + array[:,2]
# 4. Format array
img_str = fmt_str % tuple(array_hex)
For a 640x480 array, steps 3 and 4 take ~0.1s to execute on my laptop (evaluated with timeit.default_timer()). Using nested loops, it takes between 0.9s and 1.0s.
I would still like to reduce the computation time, but I'm not sure if any improvement is still possible at this point.
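For reference, a minimal sketch of how that measurement might be reproduced with timeit.default_timer(); the 640x480 size matches the text, and the zero-filled array is just placeholder data:
import timeit
import numpy as np

n_pixels_x, n_pixels_y = 640, 480
array = np.zeros((n_pixels_x * n_pixels_y, 3), dtype="uint32")

# Build the format string (step 2) outside the timed region.
fmt_str = "{" + " ".join(["#%06x"] * n_pixels_x) + "}"
fmt_str = " ".join([fmt_str] * n_pixels_y)

start = timeit.default_timer()
array_hex = (array[:, 0] << 16) + (array[:, 1] << 8) + array[:, 2]  # step 3
img_str = fmt_str % tuple(array_hex)                                # step 4
print("steps 3-4: %.3f s" % (timeit.default_timer() - start))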

I was able to find another way to format my array, and it really seems to be the quickest solution: simply use PIL's Image and ImageTk to generate an image object directly from the array:
array = np.zeros((height, width, 3), 'uint8')
imageObject = Image.fromarray(array)
img = ImageTk.PhotoImage(image=imageObject, mode='RGB')
This takes approximately 0.02s to run, which is good enough for my needs, and there is no need to use the put() function.
I actually found this answer from another question: How do I convert a numpy array to (and display) an image?
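For completeness, a minimal sketch of how the resulting PhotoImage might be displayed; the Label/window setup is illustrative and not from the original post:
import tkinter as tk
import numpy as np
from PIL import Image, ImageTk

height, width = 480, 640
array = np.zeros((height, width, 3), 'uint8')
array[:, :, 2] = 255  # solid blue, just so something is visible

root = tk.Tk()
imageObject = Image.fromarray(array)
img = ImageTk.PhotoImage(image=imageObject)  # keep a reference, or Tk may discard the image
tk.Label(root, image=img).pack()
root.mainloop()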

Related

How can I put a list into a 2D array of known size 283*283 using Python?

I want to hide a string in a 283*283 image (an invisible watermark) using the LSB (Least Significant Bit) algorithm. The user gives a hidden message (string), then I put all the characters' ASCII codes (base 2) in a list. Now I want to turn this list into a 2D array the same size as my image, so I can use the '&' and '|' operators.
import cv2 as cv

# read image:
img = cv.imread('C:/Users/pc/Desktop/cameraman.jpg', 0)
cv.imshow("image without hidden message", img)
cv.waitKey()

# make the least significant bit of each pixel 0:
img_r = img & 0b11111110
img_w = img_r.copy()

# take the message and make sure it can be hidden in a 283*283 image:
while True:
    txt = input('What do you want to hide? (max = 10000 characters): ')
    if len(txt) > 10000:
        print('out of range: too many characters!')
    else:
        break

# put the characters' ASCII codes in a list:
ch_ascii_base2 = [bin(ord(i))[2:] for i in txt]
result = []
for ch in ch_ascii_base2:
    for val in ch:
        result.append(bin(int(val))[2:])
There is no point in zeroing out the LSBs of all pixels: if your secret is much smaller than your image, you have modified ~50% of the remaining pixels for no reason.
I would simply get the bitstream of the message, flatten the image and then hide your message in a slice of that array that fits the message. Then reshape it back to 2D.
string = 'Hello world'
# Getting the bits from each character with bitwise operations is better
# than using intermediate strings with `bin` or string formats
bits = []
for byte in map(ord, string):
    bits.extend((byte >> i) & 1 for i in range(7, -1, -1))
flat = img.flatten()
flat[:len(bits)] = (flat[:len(bits)] & 0xfe) | bits
stego = flat.reshape(img.shape)
If the image is RGB, then the order of the pixels is (0, 0, R), (0, 0, G), (0, 0, B), (0, 1, R), etc. If you want to embed your secret first in, say, only the blue channel, extract that colour plane, embed as many bits as fit in it with the above process and then move on to another channel. It's a bit more convoluted, but not really hard.
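A rough sketch of that per-channel variant, assuming a 3-channel uint8 image and taking channel index 2 as the blue plane (the actual colour order depends on how the image was loaded):
import numpy as np

img = np.zeros((283, 283, 3), dtype=np.uint8)  # placeholder colour image
bits = [1, 0, 1, 1, 0, 0, 1, 0]                # placeholder bitstream

blue = img[:, :, 2].flatten()                         # extract one colour plane
blue[:len(bits)] = (blue[:len(bits)] & 0xfe) | bits   # embed in its LSBs
img[:, :, 2] = blue.reshape(img.shape[:2])            # put the plane back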
If you insist on converting the bitstream to a 2D array the same size as your image, just count how many pixels your image has and how many bits you have, and append that many 1s or 0s to your bitstream. Then use np.reshape(). Again though, if the result is a 3D array, you have to be mindful of the final order of the bits.
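And if you do want a 2D array the same size as the image, a sketch of the padding-and-reshape step (padding with zeros here; the filler value is arbitrary):
import numpy as np

img = np.zeros((283, 283), dtype=np.uint8)   # placeholder grayscale image
bits = [1, 0, 1, 1, 0, 0, 1, 0]              # placeholder bitstream

padded = np.zeros(img.size, dtype=np.uint8)  # one slot per pixel
padded[:len(bits)] = bits
bit_plane = padded.reshape(img.shape)        # same shape as the image
stego = (img & 0xfe) | bit_plane             # now '&' and '|' work element-wise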
All in all, if you don't care about embedding your secret in specific planes, use the method I suggested. It's extremely short and clear and it doesn't involve any extraneous computations or modifications to your image.

Why is the max element value in a numpy array 255?

I am currently using numpy to create an array. I would like to use vectorized implementations to more efficiently take the average of the elements in a position, (i, j). These arrays are coming from images in a file directory which have all been standardized to a fixed size.
However, when I try to add the image arrays, the sum of each element is returned modulo 256. How can I change the maximum value of the elements?
Your arrays are presumably of type numpy.uint8, so they wrap around when they hit 256.
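A quick demonstration of the wrap-around, assuming uint8 arrays:
import numpy as np

a = np.array([200, 250], dtype=np.uint8)
b = np.array([100, 10], dtype=np.uint8)
print(a + b)  # [44 4] -> (200+100) % 256 and (250+10) % 256, not 300 and 260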
If you want to get larger results, use astype to convert the first argument to a larger data type, e.g.:
a = np.array(..., dtype=np.uint8)
b = np.array(..., dtype=np.uint8)
c = a.astype(np.uint32) + b
and you'll get a result array of the larger data type too.
Per @Eric, you can instead use the numpy add function (not method) and pass a dtype, so the result is of the larger type even though the inputs are not converted first, avoiding the temporary array (at least at the Python level):
c = np.add(a, b, dtype=np.uint32)
You would be better off creating the output array first:
average = numpy.zeros(a.shape, numpy.float32)
image = numpy.zeros_like(average)
Then traversing the images and adding them up in-place:
for i in images:
    image[:] = function_that_reads_images_as_uint8(i)
    average += image
average /= len(images)
You might get away with int types if you didn't need the precision in the division step.
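A sketch of that integer variant, accumulating into a wide integer type and dividing once at the end (the image-reading helper is the hypothetical one from the snippet above):
import numpy as np

def average_images(image_names, shape):
    # int64 accumulator: sums of many uint8 images cannot wrap around here
    total = np.zeros(shape, dtype=np.int64)
    for name in image_names:
        total += function_that_reads_images_as_uint8(name)  # uint8 is upcast on add
    return total // len(image_names)  # integer division, so sub-integer precision is lost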

Fast string to array copying python

I'm looking to cut up image data into regularly sized screen blocks. Currently the method I've been using is this:
def getScreenBlocksFastNew(bmpstr):
    pixelData = array.array('c')
    step = imgWidth * 4
    pixelCoord = (blockY * blockSizeY * imgWidth +
                  blockSizeX * blockX) * 4
    for y in range(blockSizeY):
        pixelData.extend(bmpstr[pixelCoord : pixelCoord + blockSizeX * 4])
        pixelCoord += step
    return pixelData
bmpstr is a string of the raw pixel data, stored as one byte per RGBA value. (I also have the option of using a tuple of ints; they seem to take about the same amount of time.) This creates an array for one block of pixels, selected by setting blockX, blockY and blockSizeX, blockSizeY. Currently blockSizeX = blockSizeY = 22, which is the optimal screen block size for what I am doing.
My problem is that this process takes .0045 seconds per 5 executions, and extrapolating that out to the 2000+ screen blocks to fill the picture resolution requires about 1.7 seconds per picture, which is far too slow.
I am looking to make this process faster, but I'm not sure what the proper algorithm will be. I am looking to have my pixelData array pre-created so I don't have to reinstantiate it every time. However this leaves me with a question: what is the fastest way to copy the pixel RGBA values from bmpstr to an array, without using extend or append? Do I need to set each value individually? That can't be the most efficient way.
For example, how can I copy values bmpstr[0:100] into pixelData[0:100] without using extend or setting each value individually?
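Not an answer from the thread, but one common way to avoid extend entirely is to view the raw string as a numpy array and copy whole block slices at once; a rough sketch with placeholder dimensions:
import numpy as np

img_width, img_height = 1920, 1080
block_size_x = block_size_y = 22
block_x, block_y = 3, 5

bmpstr = bytes(img_width * img_height * 4)  # stand-in for the raw RGBA string

# Reinterpret the buffer as (rows, cols, RGBA) without copying it.
pixels = np.frombuffer(bmpstr, dtype=np.uint8).reshape(img_height, img_width, 4)

# One slice pulls out the whole block; .copy() detaches it from the big buffer.
block = pixels[block_y * block_size_y:(block_y + 1) * block_size_y,
               block_x * block_size_x:(block_x + 1) * block_size_x].copy()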

Why does cvSet2D take in a tuple of doubles, and why is this tuple all 0 save for the first element?

cvSet2D(matrix, i, j, tuple)
Hi. I'm dissecting the Gabor Filter code given in http://www.eml.ele.cst.nihon-u.ac.jp/~momma/wiki/wiki.cgi/OpenCV/Gabor%20Filter.html . I have a few questions on cvSet2D especially as used in the following code snippets (as given in the link):
C code:
for (x = -kernel_size/2; x <= kernel_size/2; x++) {
    for (y = -kernel_size/2; y <= kernel_size/2; y++) {
        kernel_val = exp( -((x*x)+(y*y))/(2*var))*cos( w*x*cos(phase)+w*y*sin(phase)+psi);
        cvSet2D(kernel, y+kernel_size/2, x+kernel_size/2, cvScalar(kernel_val));
        cvSet2D(kernelimg, y+kernel_size/2, x+kernel_size/2, cvScalar(kernel_val/2+0.5));
    }
}
Python code:
for x in range(-kernel_size/2+1, kernel_size/2+1):
    for y in range(-kernel_size/2+1, kernel_size/2+1):
        kernel_val = math.exp( -((x*x)+(y*y))/(2*var))*math.cos( w*x*math.cos(phase)+w*y*math.sin(phase)+psi)
        cvSet2D(kernel, y+kernel_size/2, x+kernel_size/2, cvScalar(kernel_val))
        cvSet2D(kernelimg, y+kernel_size/2, x+kernel_size/2, cvScalar(kernel_val/2+0.5))
As I'm aware, cvSet2D sets the (j, i)th pixel of a matrix to the equivalent color value of the tuple passed as the last parameter. But why does it take in a tuple of doubles? Isn't it more natural to take in a tuple of ints, seeing that a pixel color is defined as a tuple of ints?
Lastly, if I read the docs correctly, the cvScalar method used here returns the 4-tuple (given_value_in_double, 0.000000, 0.000000, 0.000000). I surmised that cvSet2D takes the first three values and uses them as the RGB 3-tuple. But seeing that the output of Gabor filters is more or less grayscale, that can't hold, since the colors produced in my scheme would then lean towards red. So, what does cvSet2D do with this tuple?
Thank you for anyone who'll take the bother to explain!
In OpenCV, images can have 1 (grayscale), 2, 3 (RGB) or 4 (RGB plus alpha) channels. The one Set2D function is used for all images regardless of how many channels they have. Each of the elements of that tuple is used for specifying a channel value.
You always pass in four values through the tuple, but OpenCV will only use the first N of these, where N is the number of channels in the image. It will ignore the rest. Zero is just an accepted place-holder that means "this value does not matter". Of course, you can pass in any value you want, but the readability of your code will suffer.
The reason double is used as the argument type is probably because that is the highest precision type in Python that you can use. OpenCV will cast this value to the proper underlying type (which it gets from the image).
Since you're dealing with one-channel 32-bit float images (CV_32FC1), just continue using Set2D(image, (value, 0, 0, 0)) and everything should be alright.
I think that for grayscale images (matrices), CvSet2D sets the only channel (brightness) using the first value in the cvScalar. The documentation for CvSet2D seems broken, so I'll try to verify this in the code.
EDIT: In the code sample you linked to, kernel has type CV_32FC1, which means it has one channel only. So does kernelimg:
kernel = cvCreateMat(kernel_size,kernel_size,CV_32FC1)
kernelimg = cvCreateImage(cvSize(kernel_size,kernel_size),IPL_DEPTH_32F,1)
So it makes sense that the scalar only needs one channel value set within.
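For comparison, in the modern cv2 API matrices are plain numpy arrays, so a one-channel element is set by direct indexing and no 4-value scalar is involved; a minimal sketch (not a translation of the Gabor example):
import numpy as np

kernel_size = 21
kernel = np.zeros((kernel_size, kernel_size), dtype=np.float32)  # the CV_32FC1 equivalent

# One channel, one value: plain indexing replaces cvSet2D/cvScalar.
kernel[kernel_size // 2, kernel_size // 2] = 0.5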

Process RGBA data efficiently using python?

I'm trying to process an RGBA buffer (list of chars), and run "unpremultiply" on each pixel. The algorithm is color_out=color*255/alpha.
This is what I came up with:
def rgba_unpremultiply(data):
    for i in range(0, len(data), 4):
        a = ord(data[i+3])
        if a != 0:
            data[i] = chr(255*ord(data[i])/a)
            data[i+1] = chr(255*ord(data[i+1])/a)
            data[i+2] = chr(255*ord(data[i+2])/a)
    return data
It works, but performance is a major drawback.
I'm wondering besides writing a C module, what are my options to optimize this particular function?
This is exactly the kind of code NumPy is great for.
import numpy

def rgba_unpremultiply(data):
    a = numpy.fromstring(data, 'B')  # Treat the string as an array of bytes
    a = a.astype('I')  # Cast to uints, since temporary values need to be larger than a byte
    alpha = a[3::4]  # Every 4th element starting from index 3
    alpha = numpy.where(alpha == 0, 255, alpha)  # Don't modify colors where alpha is 0
    a[0::4] = a[0::4] * 255 // alpha  # Operates on entire slices of the array instead of looping over each element
    a[1::4] = a[1::4] * 255 // alpha
    a[2::4] = a[2::4] * 255 // alpha
    return a.astype('B').tostring()  # Cast back to bytes
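A minimal usage sketch for the function above, assuming Python 3 (where fromstring/tostring still work but are deprecated in favour of frombuffer/tobytes); the two pixels are made up, the second one fully transparent:
raw = bytes([64, 32, 16, 128,  10, 20, 30, 0])
print(list(rgba_unpremultiply(raw)))
# -> [127, 63, 31, 128, 10, 20, 30, 0]; the alpha-0 pixel is left untouched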
How big is data? Assuming this is on Python 2.x, try using xrange instead of range so that you don't have to allocate a large index list.
You could convert all the data to integers for working with them so you're not constantly converting to and from characters.
Look into using numpy to vectorize this; I suspect that simply storing the data as integers and using a numpy array will greatly improve the performance.
And another relatively simple thing you could do is write a little Cython:
http://wiki.cython.org/examples/mandelbrot
Basically Cython will compile your above function into C code with just a few lines of type hints. It greatly reduces the barrier to writing a C extension.
I don't have a concrete answer, but some useful pointers might be:
Python's array module
numpy
OpenCV if you have actual image data
There are some minor things you can do, but I do not think you can improve it a lot.
Anyway, here are some hints:
def rgba_unpremultiply(data):
    # xrange() is more performant than range(); it does not precompute the whole list
    for i in xrange(0, len(data), 4):
        a = ord(data[i+3])
        if a != 0:
            # Not sure about this, but maybe (c << 8) - c is faster than c*255,
            # so you could rearrange the code to do that.
            # Check for an actual performance improvement.
            data[i] = chr(((ord(data[i]) << 8) - ord(data[i]))/a)
            data[i+1] = chr(255*ord(data[i+1])/a)
            data[i+2] = chr(255*ord(data[i+2])/a)
    return data
I've just run a quick dummy benchmark of << versus *, and there doesn't seem to be a noticeable difference, but you can do a better evaluation on your own project.
Anyway, a C module may be a good option, even if the problem does not seem to be language-related.
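For reference, a throwaway way to compare the two expressions (results vary by machine and interpreter):
import timeit

print(timeit.timeit('(c << 8) - c', setup='c = 200', number=10**7))
print(timeit.timeit('c * 255', setup='c = 200', number=10**7))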
