Efficient way to draw line with specific falloff (eg blurry line) - python

Not completely sure what to call this problem but I will try my best to explain it here.
I have the coordinates of a line I want to draw onto a numpy array. However, I don't just want a simple line, but a thick line where I can specify the falloff (brightness as a function of distance from the line) with a curve or mathematical function. For example, I might want a Gaussian falloff, which would look something like the example below, where a Gaussian blur was applied to the image.
However, using blur filters does not allow the flexibility in functions I would like and does not enable precise control of the falloff (for example, when I want points on the line to have exactly value 1.0 and points further than say 10 pixels away to be 0.0).
I have attempted to solve this problem by creating the falloff pattern for a point, and then drawing that pattern into a new numpy channel for every point of the line, before merging them via the max function. This works but is too slow.
Is there a more efficient way to draw such a line from my input coordinates?

The solution I came up with is to make use of dilations. This method is more general and can be applied to any polygonal shape or binary mask.
Rasterize the geometry the simple way first. For points, set the corresponding pixel; for lines, draw 1-pixel-thick lines with a library function from OpenCV or similar; for polygons, draw the boundary or fill the polygon with OpenCV functions. This results in an initial mask with value 1 on the lines.
Iteratively apply dilations to this mask. This grows the mask pixel by pixel; set the strength of the newly added pixels according to an arbitrary falloff function.
The dilation operation is available in OpenCV. Alternatively, it can be implemented efficiently as a simple convolution with boolean matrices, which can then run on GPU devices.
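To make the idea concrete, here is a minimal sketch of that loop with OpenCV and NumPy, assuming a hypothetical Gaussian-style falloff over a 10-pixel radius (all names, coordinates, and parameters are illustrative):
import cv2
import numpy as np

h, w = 256, 256
mask = np.zeros((h, w), np.uint8)
cv2.line(mask, (30, 40), (220, 200), color=1, thickness=1)   # step 1: rasterize a 1-pixel line

def falloff(d, radius=10.0):
    # value 1.0 on the line, dropping to ~0.0 at `radius` pixels away
    return float(np.exp(-(3.0 * d / radius) ** 2)) if d <= radius else 0.0

out = mask.astype(np.float32)          # pixels on the line keep exactly 1.0
kernel = np.ones((3, 3), np.uint8)     # 8-connected kernel: grows roughly 1 pixel per step
grown = mask.copy()
for d in range(1, 11):                 # 10 steps = 10-pixel falloff radius
    dilated = cv2.dilate(grown, kernel)
    ring = (dilated > 0) & (grown == 0)    # pixels newly added in this dilation step
    out[ring] = falloff(d)
    grown = dilated
Any falloff function can be plugged in here; only the value assigned to each newly added ring of pixels changes.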
An example of the results can be seen with the polygonal input:
Exponential falloff:
Sinusoidal falloff:

Related

OpenCV - How to remove convexity defects in a cam scanner?

I'm having trouble finding an algorithm to remove the convexity from my photos. As you can see, the photos are captured from book pages, and I want to remove that convexity. My question is similar to this one, but all I have as input are the page boundaries; I have no grid, nor can I detect one with processing algorithms.
I want the output to look like the right-hand image in the photo below.
Obviously, a perspective transformation is the first thing that comes to mind. However, as you can see, the result is not promising:
Here's a possible pipeline to solve your problem. The main idea is to identify the text, create a super blob of it with some morphology, locate the 4 corners of this super blob and feed the points to a perspective "unwarper" (or rectifier, or whatever you wish to call that perspective correction method).
Start by converting your image to grayscale and apply adaptive thresholding to it. Try the Gaussian or Mean methods with parameters that better fit your tests. This is the result I obtain after fiddling with the values for a bit:
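A minimal sketch of this step, assuming an input file named page.jpg and parameter values you would still have to tune:
import cv2
img = cv2.imread("page.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Gaussian adaptive threshold; blockSize and C are illustrative and need tuning
binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY_INV, blockSize=25, C=15)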
Now, the idea is to isolate just the text. The solution I applied is: obtain the biggest blobs and subtract them from the original image. You're going to need a method to calculate the area of each binary blob. Check this previous post for suggestions on how to implement one.
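One possible way to build such an area filter is with OpenCV's connected components. This is only a sketch; the blob count is an assumption, and the variable names continue from the thresholding snippet above:
import cv2
import numpy as np

def keep_largest_blobs(binary, n=3):
    # label all blobs and keep only the n largest ones (by pixel area)
    num, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    areas = stats[1:, cv2.CC_STAT_AREA]            # skip label 0, the background
    largest = 1 + np.argsort(areas)[::-1][:n]      # label ids of the n biggest blobs
    return np.isin(labels, largest).astype(np.uint8) * 255

big_blobs = keep_largest_blobs(binary, n=3)
text_only = cv2.subtract(binary, big_blobs)        # remove the big blobs, keep the text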
These are the biggest blobs from the image:
Subtract the largest blobs from the original image. This is the result:
As you can see, the text is almost isolated. Let me clean up the little bits of pixels by applying an area filter again, this time to eliminate the small blobs. This is the result:
Very good, some characters are lost during the operation, but that's ok. We need a nice continuous block of text, because we are gonna dilate the hell out of it. I tried applying a rectangular structuring element of size 5 x 5 with 5 dilation iterations. Erode the output with 5 more iterations afterward, so you end up with this nice, isolated super blob where the text used to be:
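Before moving on, a minimal sketch of that dilate/erode step; the element size and iteration counts come from the description above, and the variable names continue from the earlier snippets:
import cv2
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
super_blob = cv2.dilate(text_only, kernel, iterations=5)
super_blob = cv2.erode(super_blob, kernel, iterations=5)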
Check it out. The 3 markers you see are the centroids of the biggest blobs that I detected on the image. We need to find the 4 corners of the super blob. The biggest blob in the image is what we are after. I decided to re-use the area filter and look for the blob with the biggest area. This is the isolated super blob:
From here, the operations are pretty straightforward. Again, the goal is to get the four corners of this blob. You can fit a rectangle or apply an edge detector followed by Hough transform, to get the straight lines that follow the edges of the super blob.
I decided to apply a Canny Edge detector followed by Hough transform. Of course, I tuned the transform to filter only the possible lines I’m interested in – straight lines above a certain length. This is the result of the line detection:
There's some extra info plotted on the image. The markers you see (red and yellow) are the start/endpoints of the lines. My idea here was to find a bunch of these lines and compute the mean of these points. The idea is that we have a cluster of points that are separated in "quadrants". If we compute the mean of the start and endpoints of each line per quadrant, we will end up with 4 means – and these are the approximate values of the super blob’s corners!
I applied K-means to the start and end points of the lines, but you may very well prefer other processing methods. That's ok. My approximate corners are identified by the big red O markers in the above image.
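A hedged sketch of that corner-finding chain (Canny + probabilistic Hough + K-means); every threshold here is a placeholder to tune, and super_blob is assumed from the earlier snippets:
import cv2
import numpy as np

edges = cv2.Canny(super_blob, 50, 150)
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                        minLineLength=100, maxLineGap=20)    # assumes some lines are found

points = lines.reshape(-1, 2).astype(np.float32)             # start and end points of each line
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 0.1)
# cluster the endpoints into 4 groups; the cluster centres approximate the corners
_, _, corners = cv2.kmeans(points, 4, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)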
As I suggested, try giving a fixed output position for these corners. I defined the red rectangle onto which the corners are mapped. For this test, I pretty much adjusted the rectangle manually. The perspective correction yields this result:
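For reference, a sketch of that final warp, continuing from the corners found above and the original image img loaded earlier; the destination page size is an assumption you would adjust:
import cv2
import numpy as np

# order the 4 cluster centres as top-left, top-right, bottom-right, bottom-left
s = corners.sum(axis=1)
d = np.diff(corners, axis=1).ravel()
src = np.float32([corners[np.argmin(s)], corners[np.argmin(d)],
                  corners[np.argmax(s)], corners[np.argmax(d)]])
w, h = 600, 800                                        # assumed output page size
dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
M = cv2.getPerspectiveTransform(src, dst)
rectified = cv2.warpPerspective(img, M, (w, h))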
Some suggestions:
Depending on the resolution of the input image, you could downsize it for a faster and better result, as your input seems big enough for that.
Tune the Hough line detection to yield longer lines. My current configuration detects some smaller lines, and that can hinder the corner approximation.
I chose a somewhat robust method for calculating the 4 corners of the super blob that I've personally used before (edge detection + Hough line transform + K-means), but whatever processing chain you choose to obtain the data is entirely up to you!

Automatically find optimal image threshold value from density of histogram plot

I'm looking to perform optical character recognition (OCR) on a display, and want the program to work under different light conditions. To do this, I need to process and threshold the image such that there is no noise surrounding each digit, allowing me to detect the contour of the digit and perform OCR from there. I need the threshold value I use to be adaptable to these different light conditions. I've tried adaptive thresholding, but I haven't been able to get it to work.
My image processing is simple: load the image (i), grayscale i (g), apply a histogram equalization to g (h), and apply a binary threshold to h with a threshold value = t. I've worked with a couple of different datasets, and found that the optimal threshold value to make the OCR work consistently lies within the range of highest density in a histogram plot of (h) (the only part of the plot without gaps).
A histogram of (h). The values t=[190,220] are optimal for OCR. A more complete set of images describing my problem is available here: http://imgur.com/a/wRgi7
My current solution, which works but is clunky and slow, checks for:
1. There must be 3 digits
2. The first digit must be reasonably small in size
3. There must be at least one contour recognized as a digit
4. The digit must be recognized in the digit dictionary
Barring all cases being accepted, the threshold is increased by 10 (beginning at a low value) and an attempt is made again.
The fact that I can recognize the optimal threshold value on the histogram plot of (h) may just be confirmation bias, but I'd like to know if there's a way I can extract the value. This is different from how I've worked with histograms before, which has been more on finding peaks/valleys.
I'm using cv2 for image processing and matplotlib.pyplot for the histogram plots.
Check this: link. It does not really depend on density; it works because you separated the two maxima. The local maxima correspond to the main classes: the left local maximum is the foreground (text pixels) and the right local maximum is the background (white paper). The optimal threshold should optimally separate these maxima, and its value lies in the local-minimum region between the two local maxima.
At first, I thought "well, just make a histogram of the indexes at which data appears", which would totally work, but I don't think that would actually solve the underlying problem you want to work on.
I think you're misinterpreting histogram equalization. What histogram equalization does is thin out the histogram in highly concentrated areas, so that if you take different bin sizes over the histogram, you get more or less the same quantity inside each bin. The only reason those values are dense is specifically because they appear less often in the image; histogram equalization makes other, more popular values appear less. And the reason that range works out well is, as you can see in the original grayscale histogram, that values between 190 and 220 are really close to where the image begins to get bright again; i.e., where there is a clear demarcation of bright values.
You can see the way equalizeHist works directly by plotting histograms with different bin sizes. For example, here is a loop over bin sizes from 3 to 20.
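A minimal sketch of such a loop (the exact original listing isn't shown above, so the file name and plotting details are assumptions):
import cv2
import matplotlib.pyplot as plt

img = cv2.imread("display.png", 0)       # grayscale
equ = cv2.equalizeHist(img)
for bins in range(3, 21):
    plt.hist(equ.ravel(), bins=bins)
    plt.title("equalized histogram, %d bins" % bins)
    plt.show()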
Edit: So just to be clear, what you want is the demarcated area between the lower bump and the higher bump in your original histogram. You don't need to use equalized histograms for this. In fact, this is what Otsu thresholding (Otsu's method) actually does: you assume the data follows a bimodal distribution and find the point that cleanly separates the two distributions.
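For reference, Otsu's threshold in OpenCV is essentially one call (a minimal sketch; the file name is an assumption):
import cv2
img = cv2.imread("display.png", 0)
t, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
print("Otsu threshold value:", t)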
Basically, what you're asking is to find the indexes of the longest sequence of non-zero elements in a 256 x 1 array.
Based on this answer, you should get what you want like this:
import cv2
import numpy as np
# load in grayscale
img = cv2.imread("image.png", 0)
# flatten the (256, 1) histogram from calcHist to shape (256,)
hist = cv2.calcHist([img], [0], None, [256], [0, 256]).ravel()
# start/stop indexes of every run of non-zero bins
mask = np.hstack(([False], hist != 0, [False])).astype(np.int8)
non_zero_sequences = np.where(np.diff(mask))[0].reshape(-1, 2)
# pick the longest run
longest_sequence_id = np.diff(non_zero_sequences, axis=1).argmax()
longest_sequence_start = non_zero_sequences[longest_sequence_id, 0]
longest_sequence_stop = non_zero_sequences[longest_sequence_id, 1]   # exclusive
Note that it is untested.
I would also recommend using an automatic thresholding method such as Otsu's method (here is a nice explanation of the method).
In Python OpenCV, you have this tutorial that explains how to do Otsu's binarization.
If you want to experiment with other automatic thresholding methods, you can look at the ImageJ / Fiji software. For instance, this page summarizes all the methods implemented.
Grayscale image:
Results:
If you want to reimplement the methods, you can check the source code of the Auto_Threshold plugin. I used Fiji for this demo.

Determining the pattern orientation of a spatiotemporal image

How can I obtain the average direction of the pattern shown in the figure below? It is the direction of the red arrow relative to the yellow (horizontal) line. Any ideas for an approach? I couldn't figure out a way to tackle this. This is a spatio-temporal image created from a video. Thank you.
Here is my original image:
The simplest approach would be to compute the gradient vector (x derivative and y derivative) and find its direction at each pixel (atan2(y, x)). The average orientation is what you want, not the average direction (opposite directions would cancel out), so take the angle modulo pi, then average across the image.
The best way to compute image gradients is through the Gaussian gradients.
The structure tensor is the more robust way of accomplishing this. In short, it computes local averages of the gradient vector to reduce the effect of noise. It does this by computing the outer product of the gradient vector with itself, which produces a symmetric matrix. The individual components of this matrix can then be locally averaged (i.e. apply a smoothing filter). This is similar to computing the angle of the vectors, doubling the angles to make vectors in opposite directions equal, then averaging them.
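A sketch of the structure-tensor orientation estimate using Gaussian derivatives from SciPy; the file name and sigma values are assumptions:
import cv2
import numpy as np
from scipy import ndimage

img = cv2.imread("spatiotemporal.png", 0).astype(float)
gx = ndimage.gaussian_filter(img, sigma=1.0, order=[0, 1])   # x derivative
gy = ndimage.gaussian_filter(img, sigma=1.0, order=[1, 0])   # y derivative

# structure tensor components, locally averaged with a larger Gaussian
Jxx = ndimage.gaussian_filter(gx * gx, sigma=5.0)
Jxy = ndimage.gaussian_filter(gx * gy, sigma=5.0)
Jyy = ndimage.gaussian_filter(gy * gy, sigma=5.0)

# dominant gradient orientation, averaged over the whole image, in radians
theta = 0.5 * np.arctan2(2 * Jxy.mean(), (Jxx - Jyy).mean())
# the stripes of the pattern run perpendicular to the gradient
pattern_angle_deg = np.degrees(theta) + 90.0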
Note that you can apply either of these solutions in 3D (you can think of the video data as 2D + time = 3D). That way, you compute both the speed and the direction of motion within the 2D frame, rather than just the speed along the direction in which you extracted the 2D image you show in the question. The image gradient and the concept of the structure tensor easily extend to 3D. This 3D approach is similar to the approach by Lucas-Kanade for optical flow. If you follow that link to Wikipedia, you'll see it uses the structure tensor in 2D, and adds gradients along the time dimension.
It might be useful to try the Fourier transform.
In your case you should get two vertical lines in the middle of the transformed image, corresponding to the information encountered when travelling vertically through the image.
On the other hand, there shouldn't be a horizontal line, since when travelling horizontally through the image there is little information (little change).
For example you can use this online site to play with fourier transforms:
https://www.ejectamenta.com/Fourifier-fullscreen/
It might sound like the problem remains the same but in fact it is much easier now.
The 2D pattern is converted into dominant lines which are quite easy to find in the transformed image.
For example, you can search for the strongest pixels in the image and simply determine whether they are more likely to form a horizontal line or a vertical line, or determine the angle of the dominant line, and then rotate by 90 degrees.
For example see this image of wood grain and the resulting transformed image:
And don't worry about the two lines. The image is symmetric so just ignore 3/4 of the image and look in 1 quarter.
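A sketch of inspecting the FFT magnitude for the dominant orientation; the file name and the simple peak picking are assumptions:
import cv2
import numpy as np

img = cv2.imread("spatiotemporal.png", 0).astype(float)
spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))
cy, cx = img.shape[0] // 2, img.shape[1] // 2
spectrum[cy - 2:cy + 3, cx - 2:cx + 3] = 0          # suppress the DC / very low frequencies

# angle of the strongest frequency peak relative to the centre of the spectrum
py, px = np.unravel_index(np.argmax(spectrum), spectrum.shape)
line_angle_deg = np.degrees(np.arctan2(py - cy, px - cx))
# the spatial pattern is perpendicular to this line in the spectrum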
I recommend giving the Hough transform a go; it is available in OpenCV. The Hough transform maps lines to angles and might be useful in your case.

Find edges of images

I have software that generates several images like the following four images:
Does an algorithm exist that detects the (horizontal & vertical) edges and creates a binary output like this?
If possible I'd like to implement this with numpy and scipy. I already tried to implement an algorithm, but I failed because I couldn't find a place to start. I also tried to use a neural network to do this, but that seems like overkill and does not work perfectly.
The simplest thing to try is to:
Convert your images to binary images (by a simple threshold)
Apply the Hough transform (OpenCV, Matlab have it already implemented)
In the Hough transform results, detect the peaks for angles 0 degree, + and - 90 degrees. (Vertical and horizontal lines)
In OpenCV and Matlab, you have extra options for the Hough transform which allow you to fill the gaps between two disconnected segments belonging to the same straight line. You may need a few extra post-processing operations on your results, but the main steps should be these.
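A minimal OpenCV sketch of that pipeline; the threshold value, Hough parameters, and angle tolerance are all assumptions to tune:
import cv2
import numpy as np

img = cv2.imread("input.png", 0)
_, binary = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)
edges = cv2.Canny(binary, 50, 150)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                        minLineLength=40, maxLineGap=10)

out = np.zeros_like(img)
if lines is not None:
    for x1, y1, x2, y2 in lines.reshape(-1, 4):
        angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1))) % 180
        # keep only near-horizontal and near-vertical segments
        if angle < 10 or abs(angle - 90) < 10 or angle > 170:
            cv2.line(out, (x1, y1), (x2, y2), 255, 1)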

Image Segmentation based on Pixel Density

I need some help developing some code that segments a binary image into components of a certain pixel density. I've been doing some research in OpenCV algorithms, but before developing my own algorithm to do this, I wanted to ask around to make sure it hasn't been made already.
For instance, in this picture, I have code that imports it as a binary image. However, is there a way to segment the objects from the lines? I would need to segment nodes (corners) and objects (the circle in this case), although the object does not necessarily have to be a shape.
The solution I thought of was to use pixel density. Most of the picture is made up of lines, and the objects have a greater pixel density than the lines. Is there a way to segment them out this way?
Below is a working example of the task.
Original Picture:
Resulting Images after Segmentation of Nodes (intersection of multiple lines) and Components (Electronic components like the Resistor or the Voltage Source in the picture)
You can use an integral image to quickly compute the density of black pixels in a rectangular region. Detection of regions with high density can then be performed with a moving window in varying scales. This would be very similar to how face detection works but using only one super-simple feature.
It might be beneficial to make all edges narrow with something like skeletonizing before computing the integral image to make the result insensitive to wide lines.
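A sketch of the integral-image density idea; the file name, window size, and density threshold are assumptions to tune:
import cv2
import numpy as np

binary = (cv2.imread("circuit.png", 0) < 128).astype(np.uint8)   # 1 where the drawing is
integral = cv2.integral(binary)                                  # (h+1, w+1) summed-area table
win = 32                                                         # moving-window size in pixels

# ink density for every window position, computed in one vectorized step
density = (integral[win:, win:] - integral[:-win, win:]
           - integral[win:, :-win] + integral[:-win, :-win]) / float(win * win)
dense = density > 0.2        # candidate regions for components and nodes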
OpenCV has some functionality for finding contours that is able to put the contours in a hierarchy. It might be what you are looking for. If not, please add some more information about your expected output!
If I understand correctly, you want to detect the lines and the circle in your image, right?
If it is the case, have a look at the Hough line transform and Hough circle transform.
