This question already has answers here:
How to convert a grayscale image to heatmap image with Python OpenCV
(2 answers)
Closed 10 months ago.
I am trying to implement this:
YouTube link- https://www.youtube.com/watch?v=Qk4V_x6B7jY&t=5s
Blog link- http://www.businessinsider.in/These-Heat-Maps-Show-How-Retailers-Track-You-As-You-Shop/articleshow/29512380.cms
I want to use Python and OpenCV, but I am a beginner in OpenCV and have no idea how to implement this.
I have some basic idea. I have been able to track motion and draw a rectangle around the moving object, and I am saving the coordinates of the rectangle to an external CSV file. But I am stuck on the heat-map plotting part. How do I make it so that, over time, as people move more and more in an area, the color changes from blue (normal movement) to red (high movement)? Please help.
The basic idea of plotting a heat map is to give visual feedback on the probability of a particular event. You could write your own method that takes a probability in the range 0-1 and outputs a color between (255, 0, 0) and (0, 0, 255). Alternatively, OpenCV provides built-in color maps; you may be interested in COLORMAP_JET.
You then have to normalize the probability to the range 0-255 instead of 0-1, after which you can use cv2.applyColorMap(input_prob, cv2.COLORMAP_JET) to get the desired output.
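A minimal sketch of that pipeline, assuming the per-pixel motion counts are built from a background-subtraction mask (the video and output file names here are just placeholders):

import cv2
import numpy as np

cap = cv2.VideoCapture("store.mp4")              # placeholder input video
backsub = cv2.createBackgroundSubtractorMOG2()
accumulator = None                               # per-pixel count of detected motion

while True:
    ret, frame = cap.read()
    if not ret:
        break
    fg_mask = backsub.apply(frame)               # 0/255 motion mask for this frame
    if accumulator is None:
        accumulator = np.zeros(fg_mask.shape, dtype=np.float64)
    accumulator += (fg_mask > 0)                 # +1 wherever motion was detected

cap.release()

# normalize the accumulated counts to 0-255 and apply the JET colormap:
# rarely visited areas come out blue, heavily visited areas red
norm = cv2.normalize(accumulator, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
heatmap = cv2.applyColorMap(norm, cv2.COLORMAP_JET)
cv2.imwrite("heatmap.png", heatmap)

Since you already save the rectangle coordinates to CSV, you could equally increment the accumulator only inside those rectangles instead of using the raw mask.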
Related
I'm beginning to work on a project with OpenCV (in Python), and I'm trying to figure out the best way to tackle the problem I'm facing. I'm trying to get the area of an item in an image, but the surface area I'm looking for is on a 3D object.
I found this while searching for how to calculate the area of a contour, but as you can see, it only works for a 2D object. For example, how could I find the area of the red question marks on the sphere in this image, or the size of the rash on this baby's face (which is most certainly not 2-dimensional)? Is there a way to find the depth of the surface in the image, then use the strong color-gradient difference to find contours and calculate the area based on the depths?
I found this deep-learning paper (and the associated PyTorch library), but I was wondering if there is another way that I'm missing...
Thanks to all for any ideas / replies.
I'm trying to build an indoor navigation system, and I need an indoor map that a robot can use to navigate automatically. I'm thinking of using an image that has a different color for each place (each section), and I want to know how to get the coordinates of those colors, so that I can assign places to each colored area using those coordinates. I am currently using PyCharm.
How can I get the coordinates of each of the pink, purple, and yellow parts?
The RGB codes of the colors are pink (255, 128, 255), yellow (255, 255, 0), and purple (128, 128, 255).
This is the image that I'll use
The solution to your problem involves two main parts:
Detecting the color in the input image
Converting the blob to a single coordinate
Let's take the first problem. You can use cv2.inRange() with each of the colors to get a binary mask for every marked square in the input image.
Then you can run cv2.findContours() on each binary mask, pick the largest contour, and take its mid-point.
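A short sketch of both steps for the pink square, assuming the map is read with cv2.imread (so the channels are in BGR order, not RGB) and allowing a small tolerance for compression noise; the file name is a placeholder:

import cv2
import numpy as np

img = cv2.imread("floor_map.png")                       # placeholder map image

# pink is RGB (255, 128, 255); in BGR order that happens to be the same triple
lower = np.array([245, 118, 245])
upper = np.array([255, 138, 255])
mask = cv2.inRange(img, lower, upper)                   # binary mask of the pink area

# take the largest blob in the mask and use its centroid as the coordinate
# (cv2.findContours returns two values in OpenCV 4)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    largest = max(contours, key=cv2.contourArea)
    m = cv2.moments(largest)
    if m["m00"] > 0:
        cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
        print("pink area centre:", (cx, cy))

Repeat the same inRange/findContours pass for yellow (BGR (0, 255, 255)) and purple (BGR (255, 128, 128)).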
This question already has an answer here:
matplotlib: add circle to plot
(1 answer)
Closed 5 years ago.
I have an image as follows:
The image is generated in Python. I want to plot a curve on top of the image that encloses the white region. I already have the pixel indices for the region border. I know that in MATLAB, for example, it can simply be done by:
hold on
plot(x, y, 'r-')  % x and y are the indices of the boundary pixels
The resultant image should be like this:
How can I draw that contour on top of the image in Python, without using cv2.drawContours? Can anyone help me with this problem?
You can use the Pillow library for this. Without going into the whole solution, there's a good example in their tutorial on how to read an image and do some PostScript drawing on top of it: https://pillow.readthedocs.io/en/4.1.x/handbook/tutorial.html#drawing-postscript
You pretty much only need to change the rectangle to a circle instead.
The cut down example is:
from PIL import Image
from PIL import PSDraw

im = Image.open("lena.ppm")
title = "lena"                            # document title for the PostScript header
box = (1 * 72, 2 * 72, 7 * 72, 10 * 72)   # in points

ps = PSDraw.PSDraw()                      # default output is sys.stdout

ps.begin_document(title)

# draw the image (75 dpi) and a rectangle around it
ps.image(box, im, 75)
ps.rectangle(box)

ps.end_document()
PS: I'm assuming that by "I have an image as follows:" you mean you have an image file and want to process it, not that you're already generating that graph in Python. If you're using something like matplotlib to generate it in the first place, you can add the curve there instead.
If you're already doing it in matplotlib, then the question is likely a duplicate of matplotlib: add circle to plot
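For completeness, overlaying the boundary in matplotlib is essentially the same two lines as the MATLAB version; a minimal sketch, where the file name and the example x/y values are placeholders for the boundary pixel indices you already extracted:

import matplotlib.pyplot as plt
import matplotlib.image as mpimg

img = mpimg.imread("image.png")        # or the NumPy array you already have in memory

# x, y are the boundary pixel indices (columns, rows); example values for illustration only
x = [120, 125, 130, 128, 121]
y = [80, 78, 82, 90, 88]

plt.imshow(img, cmap="gray")           # the image stays as the background ("hold on" is implicit)
plt.plot(x, y, "r-")                   # red curve on top, like plot(x, y, 'r-') in MATLAB
plt.axis("off")
plt.savefig("with_boundary.png", bbox_inches="tight")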
The following paper gives you an idea about tripwires.
I have subtracted the background from the foreground using the BackgroundSubtractorMOG2 method of OpenCV in C++.
I am able to find and draw contours, but how do I draw a line such that whenever a person or vehicle crosses it, it gets counted? You can see a sample video here: example video.
Any suggestions on drawing such a line?
Is the way I am approaching this correct? I am using Windows 7 with OpenCV 2 and OpenCV 3, in Python and C++.
For drawing a line you can use the cv::line function.
For counting objects passing the line, you can use the minimum and maximum (x, y) values of the contour or bounding box you already have. If the contour has passed the line, then both the minimum and maximum (x, y) will be on the other side of that line.
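A rough sketch of that check in Python, assuming a horizontal tripwire at row line_y and that you can associate each object's bounding box between consecutive frames; the names and the line position are illustrative, not from any particular library:

import cv2

line_y = 300                              # illustrative row of the horizontal tripwire
count = 0

def draw_tripwire(frame):
    height, width = frame.shape[:2]
    cv2.line(frame, (0, line_y), (width, line_y), (0, 0, 255), 2)

def crossed(prev_box, cur_box):
    # boxes are (x, y, w, h) as returned by cv2.boundingRect;
    # the object has crossed if it was entirely above the line in the previous
    # frame and is entirely below it now (both min and max y on the other side)
    prev_bottom = prev_box[1] + prev_box[3]
    cur_top = cur_box[1]
    return prev_bottom < line_y and cur_top > line_y

# inside the per-frame loop, something like:
#   draw_tripwire(frame)
#   if crossed(previous_box, current_box):
#       count += 1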
Can you show us the code of what you've already tried?
This question already has answers here:
Detect gray things with OpenCV
(2 answers)
Closed 8 years ago.
I am working on a project that involves using a thermal video camera to detect objects in a certain temperature range. The output I receive from the camera is an image in which the pixels of interest (within the specified temperature range) are colored yellow-orange depending on intensity, and all other pixels are grayscale. I have tried using cv2.inRange() to filter for the colored pixels, but the results have been spotty, as the pixel colors given in the color lookup tables I was provided do not match those actually output by the camera.
I figured it would then be easiest to just filter out all grayscale pixels, since I would be left with only the colored pixels of interest. I have tried looping through each pixel of each frame of the video and checking whether every channel has the same intensity, but this takes far too long. Is there a better way to filter out all grayscale pixels?
Filter out grayscale, or filter in the allowed colors.
I don't know whether the range of colors or the range of grayscale values is larger, but whitelisting instead of blacklisting might be helpful here.
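One way to avoid the slow per-pixel loop is to do the channel comparison in a single vectorized NumPy step; a minimal sketch, assuming a BGR frame and a small tolerance for pixels that are only approximately gray (file names are placeholders):

import cv2
import numpy as np

frame = cv2.imread("thermal_frame.png")            # placeholder frame from the camera

f = frame.astype(np.int16)                         # int16 so channel differences can go negative
b, g, r = f[:, :, 0], f[:, :, 1], f[:, :, 2]
tolerance = 10                                     # how far apart channels may be and still count as gray

# a pixel is "gray" when all three channels are (almost) equal
gray_mask = (np.abs(b - g) <= tolerance) & (np.abs(g - r) <= tolerance) & (np.abs(b - r) <= tolerance)

# black out the gray pixels, keeping only the colored (hot) ones
colored_only = frame.copy()
colored_only[gray_mask] = 0
cv2.imwrite("colored_pixels.png", colored_only)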