Can't display the LBP image after using the exposed elbp function in Python

I am trying to display an LBP image after applying the elbp function to a grayscale image; the function was exposed manually in the face module for Python.
Here's the code:
LBP = cv2.face.elbp(gray_roi,1,8)
cv2.imshow("Face",LBP)
However, what I get is a pure black window. I also noticed that the output's rows and cols are always smaller than the original image by 2. Here is the error information:
could not broadcast input array from shape (95,95) into shape (97,97)
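(For reference, with radius 1 the operator apparently cannot compute a code for the one-pixel border, which would explain why the output is 2 rows and 2 columns smaller. A minimal sketch of padding the result back to the ROI size, assuming LBP is the array returned above:)
import cv2
# pad one pixel on every side so the LBP image matches the original ROI shape again
LBP_padded = cv2.copyMakeBorder(LBP, 1, 1, 1, 1, cv2.BORDER_CONSTANT, value=0)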
I noticed that someone else asked the same question but using C++ instead: Unable to display image after applying LBPH (elbp()) operator in opencv
But what I can't understand is what they meant by normalizing the image to fit the screen rendering range.
Here is the matrix output of my image:
print(LBP)
As you can see, the pixel intensity distribution looks normal.
Here is the actual elbp function:

@barny, thanks for your detailed response. Based on your solution, I multiplied the matrix by a value like 64 and finally got the image to show, but I'm not sure why I have to multiply by some value to get a properly visible image; shouldn't that be done in the original elbp function?
Also, the matrix element values become very large. Here is part of the histogram that I printed out from the shown image (after multiplying by 64):
histogram = scipy.stats.itemfreq(LBP1)
print(histogram)
[[ 0 1726]
If someone could explain why I have to multiply by such a big value, that would be great!
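For what it's worth, instead of multiplying by a hand-picked factor, here is a minimal sketch of the kind of normalization I assume the linked answer meant: rescale the codes to the 0-255 range and convert to 8-bit before display (assuming LBP is the array returned by cv2.face.elbp):
import cv2
import numpy as np
# map whatever range the LBP codes are in onto 0..255 and convert to uint8,
# which imshow displays as ordinary grey levels
LBP_disp = cv2.normalize(LBP, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imshow("Face", LBP_disp)
cv2.waitKey(0)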
PS: this is my first time asking on Stack Overflow; thanks to everyone who tried to help!

Related

What does the no_auto_scale parameter mean in rawpy?

I am currently trying to work on a raw image and I would like to apply very little processing. I am trying to understand what the no_auto_scale parameter in rawpy.postprocess (rawpy.Params) is. I don't understand what disabling pixel value scaling does. Could anyone help me, please?
My ultimate goal is to load the Bayer matrix with the colors scaled to balance out the sensitivity of each color sensor. So every pixel in the final image will correspond to a different color depending on where it is in the Bayer pattern, but they will all be on a similar scale.
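For context, here is a hedged sketch of that goal as I currently picture it (the attribute names follow my reading of the rawpy documentation, "photo.dng" is a placeholder path, and black-level handling is omitted):
import numpy as np
import rawpy

with rawpy.imread("photo.dng") as raw:
    bayer = raw.raw_image_visible.astype(np.float64)  # the raw Bayer plane
    colors = raw.raw_colors_visible                   # per-pixel channel index (0=R, 1=G, 2=B, 3=G2)
    wb = np.array(raw.camera_whitebalance, dtype=np.float64)
    if wb[3] == 0:
        wb[3] = wb[1]                                 # some cameras report 0 for the second green
    wb = wb / wb[1]                                   # normalize so green stays at 1.0
    balanced = bayer * wb[colors]                     # scale each pixel by its channel's multiplier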

Find contour and differences between two images

For a university project I need to compare two images I have taken and find the differences between them.
To be precise I monitor a 3d printing process where I take a picture after each printed layer. Afterwards I need to find the outlines of the newly printed part.
The pictures look like this (left layer X, right layer X+1):
I have managed to extract the layer differences with the structural similarity measure from scikit-image, following this question. This results in this image:
The recognized differences match the printed layer nearly 1:1 and seem to be a good starting point to draw the contours. However, this is where I am currently stuck. I have tried several combinations of thresholding, blurring, findContours, Sobel and Canny operations, but I am unable to produce an accurate outline of the newly printed layer.
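For completeness, this is the kind of pipeline I have been experimenting with, starting from the SSIM difference image (called diff below, a float image in the 0-1 range); the threshold mode, kernel size and the layer image name layer_x1 are placeholders/guesses:
import cv2
import numpy as np

diff_u8 = (diff * 255).astype(np.uint8)                  # SSIM difference as 8-bit
_, mask = cv2.threshold(diff_u8, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
kernel = np.ones((5, 5), np.uint8)
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)   # close small gaps in the mask
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4.x signature
outline = cv2.drawContours(layer_x1.copy(), contours, -1, (0, 255, 0), 2)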
Edit:
This is what I am looking for:
Edit2:
I have uploaded the images in the original file size and format here:
Layer X Layer X+1 Difference between the layers
Are there any operations that I haven't tried yet/do not know about? Or is there a combination of operations that could help in my case?
Any help on how to solve this problem would be greatly appreciated!!

Find_Peaks: Invalid shape (4951,) for image data. Possible solutions?

Hello all, I ran into this error when I tried to display an image via imshow: Invalid shape (4951,) for image data
This data is the pixel "peak_vals" output I got from running an image through photutils.find_peaks(). The original shape was (5820,). I'm pretty sure this error is occurring because of the irregular shape, but I am not sure if it is possible to reshape it to the right dimensions.
So my question is:
If there is a possible method of reshaping, what is it?
If not, how can I find the connected pixels (or hyperpixels) within the image that I am working with? My original approach was to fit the peak_vals data (pixel intensity data) to an ellipse and filter down the data/pixels to those that fit within the ellipse. Two iterations of this led to the data being reshaped from (5820,) to (4951,).
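For reference, the alternative I am considering is to keep the image two-dimensional and overlay the peak positions on top of it, rather than trying to imshow the 1-D peak values; a rough sketch (the threshold and box size are placeholders, and data is the original 2-D image):
import matplotlib.pyplot as plt
from photutils.detection import find_peaks   # photutils.find_peaks in older versions

peaks = find_peaks(data, threshold=100.0, box_size=11)   # returns a table of peak positions/values
plt.imshow(data, origin="lower", cmap="gray")
plt.scatter(peaks["x_peak"], peaks["y_peak"], s=20, marker="+", color="red")
plt.show()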

Images change color after unfolding using pytorch

I am new to PyTorch. For one of my projects I am dividing a large image into smaller tiles/patches, and I am using unfold to make this happen. My code is as follows:
data = training_set[1][0].data.unfold(1, 64, 64).unfold(2, 64, 64).unfold(3, 64, 64)
After doing this I transpose the resulting matrix, since the images are flipped, like in this sample code:
torch.t(data[0][0][0][0])
However, the resulting images lose color, or get discolored for some reason, and I am worried that this might affect any calculations I do based on these patches.
The following is a screenshot of the problem
The top is the patch and the bottom one is the complete picture
Any help is appreciated, thanks
I think your dataset is probably fine.
I had a similar issue in the past related to this. In this situation I have the feeling that the culprit is matplotlib's imshow() function.
It would be helpful if you could share the complete code you have used to plot the matplotlib figure.
You are most likely passing an RGBA image instead of a regular RGB image into the plt.imshow() function. The colors look off because you are also displaying an alpha value (A) on top of the regular red, green and blue (RGB) channels.
If that is the case, I would suggest that you try to plot this:
image = torch.t(data[0][0][0])
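If that slice really is a (3, 64, 64) RGB patch, a sketch of plotting it with the channels moved last (instead of transposing a single 2-D slice with torch.t) could look like this; the indexing is only my assumption about your tensor layout:
import matplotlib.pyplot as plt

patch = data[0][0][0]                        # assumed to be a (3, 64, 64) patch
plt.imshow(patch.permute(1, 2, 0).numpy())   # (C, H, W) -> (H, W, C) for imshow
plt.show()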

Alternative to opencv warpPerspective

I am using the OpenCV warpPerspective() function to warp a found contour in the image; to find the contour I am using findContours().
This is shown in this image:
But the warpPerspective() function takes "more time" to warp the contour to the full image. Is there any alternative to this function to warp the object in the image to the full image, as shown in the figure?
Or would traversing the pixels myself help? That would be difficult to do, but it might reduce the time the warpPerspective() function takes.
You can try to work with rotation and translation matrices (or a roto-translation matrix, a combination of both), which can warp the image as you wish. The warpPerspective() function uses a similar approach, so you will basically have an opportunity to look inside the function.
The approach is (a rough code sketch follows the list):
1. Calculate the matrix, then multiply the height and width of the original image to find the dimensions of the output image.
2. Go through all pixels in the original image and multiply their (x,y) coordinates by the matrix R (rotation/translation/roto-translation matrix) to get the coordinates on the output image (xo,yo).
3. To every calculated coordinate (xo,yo), assign the value from the corresponding original image coordinate (x,y).
4. Interpolate using a median filter/bilinear/bicubic/etc. method, as sometimes there may be empty points left on the output image.
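A rough sketch of those steps in NumPy, assuming a 3x3 transform matrix H is already available (e.g. from cv2.getPerspectiveTransform); it uses simple forward mapping with nearest-neighbour rounding, so it only illustrates the idea and is not tuned for speed:
import numpy as np

def manual_warp(src, H, out_shape):
    # forward-map every source pixel through H into an output of size out_shape
    h_out, w_out = out_shape
    dst = np.zeros((h_out, w_out) + src.shape[2:], dtype=src.dtype)
    h, w = src.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]                                  # (row, col) of each source pixel
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])   # homogeneous coordinates
    mapped = H @ pts
    xo = np.round(mapped[0] / mapped[2]).astype(int)
    yo = np.round(mapped[1] / mapped[2]).astype(int)
    keep = (xo >= 0) & (xo < w_out) & (yo >= 0) & (yo < h_out)   # discard points outside the output
    dst[yo[keep], xo[keep]] = src[ys.ravel()[keep], xs.ravel()[keep]]
    return dst   # holes left by forward mapping still need interpolation (step 4)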
However, if you work in Python your implementation may run even slower than warpPerspective(), so you may want to consider C++. Another thing is that OpenCV is compiled C++ code, and I am pretty sure that the implementation of warpPerspective() in OpenCV is very efficient.
So I think you can work around warpPerspective(); however, I am not sure you can do it faster than OpenCV without any boosts (like a GPU, a powerful CPU, etc.) :)
Good luck!
