qwt/pyqt custom scale for image plot (pixel to mm conversion) - python

I've decided to use guiqwt as my main plot library in Python and it works quite well. However, I'm missing a contour plot feature, so I had to compute my own contours for my image plots. That was quite easy using scikit-image. Now my plot shows the image with the contours on top. The scale unit in the x and y directions is pixels, as the image raw data is given per pixel and the calculated contours are as well.
My problem is converting the pixel scale into, e.g., a mm scale without scaling the image. I want to replace the original scale with a scale that represents the measured distances. The distances are available in an array.
In my first attempt I tried to change the AxisScaleDivision by creating a new one and using QwtPlot::setAxisScaleDiv. But that seems to act like a zoom function, as the image is reduced to the new interval.
Here is my code for a small example:
from guiqwt.plot import ImageDialog
from guiqwt.builder import make
from skimage import measure
import numpy as np
data = np.random.rand(80,30)
contours = measure.find_contours(data, 0.1)
win = ImageDialog(edit=False, toolbar=True, wintitle="Contrast test",
                  options=dict(show_contrast=True))
img = make.image(data)
plot = win.get_plot()
plot.add_item(img)
for n, contour in enumerate(contours):
    curve = make.curve(contour[:, 1], contour[:, 0], 'k-')
    plot.add_item(curve)
win.show()
scaleEng = plot.axisScaleEngine(2)
scaleDiv = scaleEng.divideScale(20, 30, 5, 5, 0)
plot.setAxisScaleDiv(2, scaleDiv)
plot.replot()
The syntax is very close to Qwt, so I think anybody who is familiar with Qwt might be able to help me :)
The image zoom should stay unaltered. Only the axis should be recalculated to a mm scale and afterwards, of course, adapted when the zoom function is used.

I solved the problem by using a completely different approach: I used the xyimage function of guiqwt. However, I had to scale my contours too; I had missed that before, which is why I posted the question.
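For reference, here is a minimal sketch of that approach. It assumes guiqwt's make.xyimage takes the x/y coordinate arrays followed by the data, and that the measured positions are available as one value per pixel column and per pixel row; the x_mm/y_mm names and ranges below are made up for illustration:
import numpy as np
from guiqwt.builder import make
# hypothetical measured positions in mm: one value per column, one per row
x_mm = np.linspace(0.0, 15.0, data.shape[1])
y_mm = np.linspace(0.0, 40.0, data.shape[0])
# place the image on the mm grid instead of the pixel grid
img = make.xyimage(x_mm, y_mm, data)
plot.add_item(img)
# the contours are still in pixel units, so map them onto the mm axes too
for contour in contours:
    cx = np.interp(contour[:, 1], np.arange(data.shape[1]), x_mm)
    cy = np.interp(contour[:, 0], np.arange(data.shape[0]), y_mm)
    plot.add_item(make.curve(cx, cy, 'k-'))
The zoom tools then operate on the mm coordinates, which is what the original question asked for.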

Related

How to clean binary image using horizontal projection?

I want to remove anything other than text from a license plate with a binary filter.
I have the projections on each axis, but I don't know how to apply them. My idea is to erase the white outlines.
This is the image I'm working with for now:
This is the projection on the X axis:
from matplotlib import pyplot as plt
import numpy as np
# img is the binarized plate image loaded earlier
(rows, cols) = img.shape
h_projection = np.array([x / 255 / rows for x in img.sum(axis=0)])
plt.plot(range(cols), h_projection.T)
And this is the result:
As you can see in the graph, the line shoots up at the end because of the white contour.
How can I erase everything that lies above a certain threshold of the photo? Any help is appreciated.
So, you want to extract the black areas within the white characters.
For example, you can select the columns (or rows) in your histograms where the value is less than a certain threshold.
from matplotlib import pyplot as plt
import pylab
import numpy as np
img = plt.imread('binary_image/iXWgw.png')
(rows,cols)=img.shape
h_projection = np.array([ x/rows for x in img.sum(axis=0)])
threshold = (np.max(h_projection) - np.min(h_projection)) / 4
print("we will use threshold {} for horizontal".format(threshold))
# select the black areas
black_areas = np.where(h_projection < threshold)
fig = plt.figure(figsize=(16,8))
fig.add_subplot(121)
for j in black_areas:
    img[:, j] = 0
    plt.plot((j, j), (0, 1), 'g-')
plt.plot(range(cols), h_projection.T)
v_projection = np.array([ x/cols for x in img.sum(axis=1)])
threshold = (np.max(v_projection) - np.min(v_projection)) / 4
print("we will use threshold {} for vertical".format(threshold))
black_areas = np.where(v_projection < threshold)
fig.add_subplot(122)
for j in black_areas:
    img[j, :] = 0
    plt.plot((0, 1), (j, j), 'g-')
plt.plot(v_projection, range(rows))
plt.show()
# obscurate areas on the image
plt.figure(figsize=(16,12))
plt.subplot(211)
plt.title("Image with the projection mask")
plt.imshow(img)
# erode the features
import scipy.ndimage
plt.subplot(212)
plt.title("Image after erosion (suggestion)")
eroded_img = scipy.ndimage.morphology.binary_erosion(img, structure=np.ones((5,5))).astype(img.dtype)
plt.imshow(eroded_img)
plt.show()
So now you have the horizontal and vertical projections, that look like this
And after that you can apply the mask. There are several ways of doing this; in the code it is already applied within the for loops, where we set img[:, j] = 0 for the columns and img[j, :] = 0 for the rows. That was easy and, I think, intuitive, but you can look for other methods, for instance the sketch below.
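For instance, here is a minimal alternative sketch using boolean indexing instead of the loops; the threshold_v name for the vertical threshold is made up here just to keep the two thresholds apart:
# columns / rows whose projection falls below the threshold are background
col_is_background = h_projection < threshold
row_is_background = v_projection < threshold_v
img[:, col_is_background] = 0   # blank the background columns
img[row_is_background, :] = 0   # blank the background rows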
As a suggestion, I would say you can look into the morphological operator of erosion that can help to separate the white parts.
So the output would look like this.
Unfortunately, the upper and lower parts still show white regions. You can manually set those rows to black (img[:10, :] = 0, img[100:, :] = 0), but that probably would not work on all the images you have (if you are trying to train a neural network, I assume you have lots of them, so you need code that works on all of them).
Since you now ask about segmentation as well, this opens another topic. Segmentation is a complex task, and it is not as straightforward as a binary mask. I would strongly suggest you read some material on it before you apply something without understanding it. For example, here is a guide on image processing with SciPy, but you may look for more.
As a suggestion and a small snippet to make it work, you can use the labeling from scipy.ndimage.
Here is a small piece of code (from the guide):
label_im, nb_labels = scipy.ndimage.label(eroded_img)
plt.figure(figsize=(16,12))
plt.subplot(211)
plt.title("Segmentation")
plt.imshow(label_im)
plt.subplot(212)
plt.title("One Object as an example")
plt.imshow(label_im == 6) # change number for the others!
Which will output:
As an example I showed the S letter. If you change label_im == 6 you will get the next letter. As you will see yourself, it is not always correct, and other little pieces of the image are also considered as objects, so you will have to work a little bit more on that.
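One rough way to deal with those little pieces, sketched here under the assumption that the characters are the larger components (the 50-pixel minimum area is an arbitrary guess you would have to tune):
import numpy as np
import scipy.ndimage
label_im, nb_labels = scipy.ndimage.label(eroded_img)
# area (pixel count) of each labelled component
sizes = scipy.ndimage.sum(eroded_img, label_im, range(1, nb_labels + 1))
# labels of components that are too small to be characters
small_labels = np.where(sizes < 50)[0] + 1
cleaned = label_im.copy()
cleaned[np.isin(cleaned, small_labels)] = 0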

Overplot SunPy HEK polygon mask on custom numpy array instead of Sunpy Map

I'm working on sunspot detection and I'm trying to build ground truth masks using sunpy.net.hek client to download solar events from the knowledge base.
I followed this tutorial.
My problem is that I'm not able to get the polygon pixel coordinates after the rotation. That is:
ch_boundary = SkyCoord([(float(v[0]), float(v[1])) * u.arcsec for v in p3],
                       obstime=ch_date,
                       frame=frames.Helioprojective)
rotated_ch_boundary = solar_rotate_coordinate(ch_boundary, aia_map.date)
Where p3 holds the original coordinates of the event (they have to be rotated because your picture may not have the same timing as the event on HEK). rotated_ch_boundary is an Astropy SkyCoord, but I cannot figure out how to get the pixel coordinates relative to the image from it.
Then in the tutorial it just plots the coordinates using Sunpy Map and matplotlib:
aia_map.plot(axes=ax)
ax.plot_coord(rotated_ch_boundary, color='c')
I cannot do that, because I want to draw the polygon (filled) on a numpy array and save it.
I also tried to build a custom Sunpy map and use the same function to plot:
from sunpy.net.helioviewer import HelioviewerClient
hv = HelioviewerClient()
filepath = hv.download_jp2('2017/07/10 10:00:00', observatory='SDO',
                           instrument='HMI', detector='HMI', measurement='continuum')
hmi = sunpy.map.Map(filepath)
# QUERY AND ROTATION CODE HERE...
hmi.plot(axes=ax)
ax.plot_coord(rotated_ch_boundary, color='c')
but it doesn't even show the polygon on the plot; I don't know whether that is because of the different resolution or something else.
Do you have any idea on how I can plot the polygon on a custom image and save it in order to use it later?
My purpose is to create a black image with a white polygon highlighted. The polygon should be in the exact same position as the sunspot in the corresponding image, let's say an SDO HMI intensitygram of the same day I downloaded from helioviewer.
Solution:
aia_map.world_to_pixel(rotated_ch_boundary)
or
rotated_ch_boundary.to_pixel(aia_map.wcs)
Thanks to fraserwatson for this post
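For the filled ground-truth mask itself, here is a minimal sketch of one way to rasterize the returned pixel coordinates onto a black array; it assumes scikit-image is available (skimage.draw.polygon is not part of the SunPy tutorial):
import numpy as np
from skimage.draw import polygon
# to_pixel returns the x and y pixel coordinates of the polygon vertices
px, py = rotated_ch_boundary.to_pixel(aia_map.wcs)
mask = np.zeros(aia_map.data.shape, dtype=np.uint8)
rr, cc = polygon(py, px, shape=mask.shape)   # rows come from y, columns from x
mask[rr, cc] = 255                           # white filled polygon on black
np.save('ground_truth_mask.npy', mask)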

Python OpenCV HoughLinesP Fails to Detect Lines

I am using OpenCV HoughLinesP to find horizontal and vertical lines. It is not finding any lines most of the time, and even when it finds lines, they are not even close to the ones in the actual image.
import cv2
import numpy as np
img = cv2.imread('image_with_edges.jpg')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
flag,b = cv2.threshold(gray,0,255,cv2.THRESH_OTSU)
element = cv2.getStructuringElement(cv2.MORPH_CROSS,(1,1))
cv2.erode(b,element)
edges = cv2.Canny(b,10,100,apertureSize = 3)
lines = cv2.HoughLinesP(edges,1,np.pi/2,275, minLineLength = 100, maxLineGap = 200)[0].tolist()
for x1,y1,x2,y2 in lines:
    for index, (x3,y3,x4,y4) in enumerate(lines):
        if y1==y2 and y3==y4: # Horizontal Lines
            diff = abs(y1-y3)
        elif x1==x2 and x3==x4: # Vertical Lines
            diff = abs(x1-x3)
        else:
            diff = 0
        if diff < 10 and diff != 0:
            del lines[index]
gridsize = (len(lines) - 2) / 2
cv2.line(img,(x1,y1),(x2,y2),(0,0,255),2)
cv2.imwrite('houghlines3.jpg',img)
Input Image:
Output Image: (see the Red Line):
@ljetibo Try this with:
c_6.jpg
There's quite a bit wrong here so I'll just start from the beginning.
Ok, the first thing you do after opening an image is thresholding. I strongly recommend that you have another look at the OpenCV manual on thresholding and the exact meaning of the threshold methods.
The manual mentions that
cv2.threshold(src, thresh, maxval, type[, dst]) → retval, dst
the special value THRESH_OTSU may be combined with one of the above values. In this case, the function determines the optimal threshold value using Otsu's algorithm and uses it instead of the specified thresh.
I know it's a bit confusing, because you don't actually combine THRESH_OTSU with any of the other methods (THRESH_BINARY etc.); unfortunately, that manual can be like that. What this method actually does is assume that there's a "foreground" and a "background" that follow a bi-modal histogram, and then it applies THRESH_BINARY, I believe.
Imagine this as if you're taking an image of a cathedral or a high building at midday. On a sunny day the sky will be very bright and blue, and the cathedral/building will be quite a bit darker. This means the group of pixels belonging to the sky will all have high brightness values, that is, they will be on the right side of the histogram, and the pixels belonging to the building will be darker, that is, towards the middle and left side of the histogram.
Otsu uses this to try and guess the right "cutoff" point, called thresh. For your image Otsu's alg. supposes that all that white on the side of the map is the background, and the map itself the foreground. Therefore your image after thresholding looks like this:
After this point it's not hard to guess what goes wrong. But let's go on. What you're trying to achieve is, I believe, something like this:
flag,b = cv2.threshold(gray,160,255,cv2.THRESH_BINARY)
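Incidentally, if Otsu is what you want, the flag is normally combined with a threshold type like this (just a sketch; the 0 passed as thresh is ignored because Otsu computes its own value):
ret, b = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)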
Then you go on and try to erode the image. I'm not sure why you're doing this: was your intention to "bold" the lines, or to remove noise? In any case, you never assigned the result of the erosion to anything. NumPy arrays, which is how images are represented, are mutable, but that's not how the syntax works:
cv2.erode(src, kernel, [optionalOptions] ) → dst
So you have to write:
b = cv2.erode(b,element)
Ok, now for the element and how erosion works. Erosion drags a kernel over an image. A kernel is a simple matrix with 1's and 0's in it. One of the elements of that matrix, usually the centre one, is called the anchor. The anchor is the element that will be replaced at the end of the operation. When you created
cv2.getStructuringElement(cv2.MORPH_CROSS, (1, 1))
what you created is actually a 1x1 matrix (1 column, 1 row). This makes erosion completely useless.
What erosion does is first retrieve all the pixel brightness values from the original image where the kernel element overlapping the image segment has a "1". Then it finds the minimal value of the retrieved pixels and replaces the anchor with that value.
What this means, in your case, is that you drag [1] matrix over the image, compare if the source image pixel brightness is larger, equal or smaller than itself and then you replace it with itself.
If your intention was to remove "noise", then it's probably better to use a rectangular kernel over the image. Think of it this way: "noise" is that thing that "doesn't fit in" with its surroundings. So if you compare your centre pixel with its surroundings and you find it doesn't fit, it's most likely noise.
Additionally, I've said it replaces the anchor with the minimal value retrieved by the kernel. Numerically, the minimal value is 0, which is coincidentally how black is represented in the image. This means that in your case of a predominantly white image, erosion would "bloat up" the black pixels: erosion would replace the 255-valued white pixels with 0-valued black pixels if they're within reach of the kernel. In any case it shouldn't be of shape (1,1), ever.
>>> cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
array([[0, 1, 0],
       [1, 1, 1],
       [0, 1, 0]], dtype=uint8)
If we erode the second image with a 3x3 rectangular kernel, we get the image below.
Ok, now that we've got that out of the way, the next thing you do is find edges using Canny edge detection. The image you get from that is:
Ok, now we look for EXACTLY vertical and EXACTLY horizontal lines ONLY. Of course there are no such lines apart from the meridian on the left of the image (is that what it's called?), and the end image you get after doing it right would be this:
Now, since you never described your exact idea, my best guess is that you want the parallels and meridians. You'll have more luck on maps with a smaller scale, because those aren't lines to begin with, they are curves. Additionally, is there a specific reason to use the probabilistic Hough transform? The "regular" Hough doesn't suffice?
Sorry for the too-long post, hope it helps a bit.
The text below was added in response to a request for clarification from the OP on Nov. 24th, because there's no way to fit the answer into a character-limited comment.
I'd suggest the OP ask a new question more specific to the detection of curves, because you are dealing with curves, OP, not horizontal and vertical lines.
There are several ways to detect curves but none of them are easy. In the order of simplest-to-implement to hardest:
1. Use the RANSAC algorithm. Develop a formula describing the nature of the longitude and latitude lines depending on the map in question. I.e. latitude curves will be almost perfectly straight lines on the map when you're near the equator, with the equator being the perfectly straight line, but will be very curved, resembling circle segments, when you're at high latitudes (near the poles). SciPy already has RANSAC implemented as a class; all you have to do is find it and then programmatically define the model you want to try to fit to the curves. Of course there's the ever-useful 4dummies text here. This is the easiest because all you have to do is the math. (A small sketch of this option appears right after this list.)
2. A bit harder would be to create a rectangular grid and then try to use cv findHomography to warp the grid into place on the image. For the various geometric transformations you can apply to the grid, you can check out the OpenCV manual. This is a sort of hack-ish approach and might work worse than 1., because it depends on being able to re-create a grid with enough details and objects on it that cv can identify the structures on the image you're trying to warp it to. This one requires you to do math similar to 1. and just a bit of coding to compose the end solution out of several different functions.
3. Actually do it: there are mathematically neat ways of describing curves as a list of tangent lines on the curve. You can try to fit a bunch of shorter HoughLines to your image or image segment, then try to group all found lines and determine, by assuming that they're tangents to a curve, whether they really follow a curve of the desired shape or are just random. See this paper on the matter. Out of all the approaches this one is the hardest, because it requires quite a bit of solo coding and some math about the method.
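As a rough illustration of option 1., here is a sketch using scikit-image's ransac, which offers the kind of class-based RANSAC interface described above; the circle model and the thresholds are guesses that would need tuning per map:
import numpy as np
from skimage.measure import ransac, CircleModel
# (x, y) coordinates of edge pixels belonging to one latitude curve
points = np.column_stack(np.nonzero(edges)[::-1]).astype(float)
# robustly fit a circle segment through the noisy edge points
model, inliers = ransac(points, CircleModel,
                        min_samples=3, residual_threshold=2, max_trials=1000)
xc, yc, radius = model.params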
There could be easier ways; I've never actually had to deal with curve detection before. Maybe there are tricks that make it easier, I don't know. If you ask a new question, one that hasn't already been closed as answered, more people might notice it. Do make sure to ask a full and complete question on the exact topic you're interested in. People won't usually spend so much time writing on such a broad topic.
To show you what you can do with just the Hough transform, check out the example below:
import cv2
import numpy as np
def draw_lines(hough, image, nlines):
    n_x, n_y = image.shape
    # convert to a color image so that you can see the lines
    draw_im = cv2.cvtColor(image, cv2.COLOR_GRAY2BGR)
    for (rho, theta) in hough[0][:nlines]:
        try:
            x0 = np.cos(theta)*rho
            y0 = np.sin(theta)*rho
            pt1 = ( int(x0 + (n_x+n_y)*(-np.sin(theta))),
                    int(y0 + (n_x+n_y)*np.cos(theta)) )
            pt2 = ( int(x0 - (n_x+n_y)*(-np.sin(theta))),
                    int(y0 - (n_x+n_y)*np.cos(theta)) )
            alph = np.arctan( (pt2[1]-pt1[1])/( pt2[0]-pt1[0]) )
            alphdeg = alph*180/np.pi
            # OpenCV uses a weird angle system, see:
            # http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_imgproc/py_houghlines/py_houghlines.html
            if abs( np.cos( alph - 180 )) > 0.8:  #0.995:
                cv2.line(draw_im, pt1, pt2, (255,0,0), 2)
            if rho>0 and abs( np.cos( alphdeg - 90)) > 0.7:
                cv2.line(draw_im, pt1, pt2, (0,0,255), 2)
        except:
            pass
    cv2.imwrite("/home/dino/Desktop/3HoughLines.png", draw_im,
                [cv2.IMWRITE_PNG_COMPRESSION, 12])
img = cv2.imread('a.jpg')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
flag,b = cv2.threshold(gray,160,255,cv2.THRESH_BINARY)
cv2.imwrite("1tresh.jpg", b)
element = np.ones((3,3))
b = cv2.erode(b,element)
cv2.imwrite("2erodedtresh.jpg", b)
edges = cv2.Canny(b,10,100,apertureSize = 3)
cv2.imwrite("3Canny.jpg", edges)
hough = cv2.HoughLines(edges, 1, np.pi/180, 200)
draw_lines(hough, b, 100)
As you can see from the image below, the straight lines are only the longitudes. The latitudes are not as straight, so for each latitude you have several detected lines that behave like tangents on the curve. The blue lines are drawn by the if abs( np.cos( alph - 180 )) > 0.8: condition, while the red lines are drawn by the rho>0 and abs( np.cos( alphdeg - 90)) > 0.7 condition. Pay close attention when comparing the original image with the image with lines drawn on it. The resemblance is uncanny (heh, get it?), but because they're not lines, a lot of it only looks like junk (especially that highest detected latitude line that seems too "angled", but in reality those lines make a perfect tangent to the latitude line at its thickest point, just as the Hough algorithm demands). Acknowledge that there are limitations to detecting curves with a line detection algorithm.

Why does skimage.imread() not return RGB values for my bmp?

I'm trying to slice chunks from a bmp image to use for image correlation. However, when I take a single plane from the array returned by skimage.imread(), instead of getting the red plane or the green plane, I get weird colors, as if the original data were in HSL.
I've tried converting the image to RGB using PIL but the colors just get worse...
Can anyone tell me what's going on?
I'm sure my question needs more info, so let me know what I need to add, please and thanks!
Edit:
from skimage import data
full=data.imread("Cam_1.bmp")
green_template = full[144:194,297:347,1] #Both give me a sort of reddish square
red_template = full[145:195,252:302,0]
FWIW, if skimage.match_template took color images, I wouldn't have this problem... Is there a correlation library that does color?
Here's the image I'm working with:
Here's what results when I display the small crops made with the code above:
Also using full = numpy.array(image) after opening with PIL yields the same results.
Ok I figured it out (with the help of a friend).
I don't totally understand why my pictures were displaying so weirdly, but I do know why they weren't displaying as red and green planes.
green_template = full[144:194,297:347,1]
red_template = full[145:195,252:302,0]
These were taking one slice from the final subarray, which would be the g and r values, respectively. What I should have done, if I wanted to display them properly, is create a new image with green_template and red_template as the respective g and r values and zeros in the other places, i.e. make it back into an array with shape (height, width, 3).
For example:
import Image
import numpy as np
im = Image.open('Cam_1.bmp')
#im.show()
r,g,b = im.split()
y = Image.fromarray(np.zeros(im.size[0]*im.size[1]).reshape(im.size[1],im.size[0]).astype('float')).convert("L")
red = Image.merge("RGB",(r,y,y))
green = Image.merge("RGB",(y,g,y))
If I do that with the original image, here are the images that result:
However, my original problem was that skimage's match_template only takes a 2D array. So, it turns out, I had the right arrays the whole time; I just didn't realize they were right, because displaying them results in the weird colors you see in the image in the question. If anyone knows why Python does weird things when displaying a 2D image, I'd like to know. Otherwise, I've solved my problem. Thanks to anyone who attempted to help, whether you posted or just tried stuff on your own!
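For completeness, here is a minimal NumPy-only sketch of the same zero-filling idea (equivalent to the PIL merge above), assuming full is the (height, width, 3) array returned by skimage:
import numpy as np
zeros = np.zeros_like(full[:, :, 0])
red_display = np.dstack((full[:, :, 0], zeros, zeros))     # keep R, zero G and B
green_display = np.dstack((zeros, full[:, :, 1], zeros))   # keep G, zero R and B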
Edit - requested image rendering code:
def locate_squares(im):
    r,g,b = im.split()
    red = np.array(r)
    green = np.array(g)
    green_template = green[144:194,297:347] #,144:194]
    gRadius = (green_template.shape[0]/2, green_template.shape[1]/2)
    red_template = red[145:195,252:302] #,145:195]
    rRadius = (red_template.shape[0]/2, red_template.shape[1]/2)
    #print red_template
    plt.figure(1)
    plt.subplot(1,2,1)
    plt.imshow(green_template)
    plt.subplot(1,2,2)
    plt.imshow(red_template)
    plt.figure(2)
    plt.subplot(1,2,1)
    plt.imshow(green)
    plt.subplot(1,2,2)
    plt.imshow(red)
    plt.show()

How to recognize histograms with a specific shape in opencv / python

I want to segment images (from magazines) in text and image parts. I have several histograms for several ROIs in my picture. I use opencv with python (cv2).
I want to recognize histograms that look like this
http://matplotlib.sourceforge.net/users/image_tutorial-6.png
as it is a typical shape for a text region. How can I do that?
Edit: Thank you for your help so far.
I compared the histograms I got from my ROIs to a sample histogram I provided:
hist = cv2.calcHist(roi,[0,1], None, [180,256],ranges)
compareValue = cv2.compareHist(hist, samplehist, cv.CV_COMP_CORREL)
print "ROI: {0}, compareValue: {1}".format(i,compareValue)
Assuming ROI 0, 1, 4 and 5 are text regions and the remaining ROIs are image regions, I get output like this:
ROI: 0, compareValue: 1.0
ROI: 1, compareValue: -0.000195522081574 <--- wrong classified
ROI: 2, compareValue: 0.0612670248952
ROI: 3, compareValue: -0.000517370176887
ROI: 4, compareValue: 1.0
ROI: 5, compareValue: 1.0
What can I do to avoid wrong classification? For some images, the misclassification rate is about 30%, which is way too high.
(I also tried CV_COMP_CHISQR, CV_COMP_INTERSECT, CV_COMP_BHATTACHARYYA and (hist*samplehist).sum(), but they also produce wrong compareValues.)
(See the EDIT at the end in case I misunderstood the question.)
If you are looking to draw the histograms, I had submitted one python sample to OpenCV, and you can get it from here :
http://code.opencv.org/projects/opencv/repository/entry/trunk/opencv/samples/python2/hist.py
It is used to draw two kinds of histograms. The first one is applicable to both color and grayscale images, as shown here: http://opencvpython.blogspot.in/2012/04/drawing-histogram-in-opencv-python.html
The second one is exclusively for grayscale images, which is the same kind as the image in your question.
I will show the second one and a modification of it.
Consider a full image as below :
We need to draw a histogram as you have shown. Check the code below:
import cv2
import numpy as np
img = cv2.imread('messi5.jpg')
mask = cv2.imread('mask.png',0)
ret,mask = cv2.threshold(mask,127,255,0)
def hist_lines(im, mask):
    h = np.zeros((300,256,3))
    if len(im.shape)!=2:
        print "hist_lines applicable only for grayscale images"
        #print "so converting image to grayscale for representation"
        im = cv2.cvtColor(im,cv2.COLOR_BGR2GRAY)
    hist_item = cv2.calcHist([im],[0],mask,[256],[0,255])
    cv2.normalize(hist_item,hist_item,0,255,cv2.NORM_MINMAX)
    hist = np.int32(np.around(hist_item))
    for x,y in enumerate(hist):
        cv2.line(h,(x,0),(x,y),(255,255,255))
    y = np.flipud(h)
    return y
histogram = hist_lines(img,None)
And below is the histogram we get. Remember, it is the histogram of the full image; for that, we passed None as the mask.
Now I want to find the histogram of some part of the image. The OpenCV histogram function has a mask facility for that. For a normal histogram, you set it to None; otherwise you have to specify the mask.
A mask is an 8-bit image where white denotes that the region should be used for the histogram calculation and black means it should not.
So I used a mask like the one below (created using Paint; you have to create your own mask for your purposes, or build it in code as sketched below).
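If you prefer to build the mask in code instead of in Paint, here is a minimal sketch; the rectangle coordinates are arbitrary and only meant to show the idea:
mask = np.zeros(img.shape[:2], np.uint8)
mask[100:300, 150:400] = 255   # white rectangle marks the region to include
histogram = hist_lines(img, mask)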
I changed the last line of the code as below:
histogram = hist_lines(img,mask)
Now see the difference below :
(Remember, the values are normalized to 255, so the values shown are not actual pixel counts. Change that as you like.)
EDIT:
I think I misunderstood your question. You need to compare histograms, right?
If that is what you wanted, you can use cv2.compareHist function.
There is an official tutorial about this in C++. You can find its corresponding Python code here.
You can use a simple correlation metric.
make sure that the histogram you compute and your reference are normalized (i.e. represent probabilities)
for each histogram compute (given that myRef and myHist are numpy arrays):
metric = (myRef * myHist).sum()
this metric is a measure of how much the histogram looks like your reference.
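A minimal sketch of that normalization step, assuming both histograms come from cv2.calcHist as NumPy arrays:
import numpy as np
# turn the counts into probabilities so the two histograms are comparable
myRef = myRef.astype(np.float64) / myRef.sum()
myHist = myHist.astype(np.float64) / myHist.sum()
metric = (myRef * myHist).sum()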
