Desired Outcome
I would like to count polygons in a given image using a Python script. I am looking for a Python library/module or code example that can be trained, like an AI, on the attached image, such that when I provide another similar image it can count the polygons correctly.
Practical use case for this would be to count the number of homes in a given area. See the attached image: a map of a neighborhood with houses outlined with red polygons.
Any suggestions where to start?
I'm new to Stack Overflow, so if any further clarification is required on this question, please ask (rather than simply downvoting or closing it).
Thank you!
If the image comes in as text (e.g. an array of pixel values), NumPy or similar libraries should be enough.
If it comes in as a .jpg/.png/... file, try TensorFlow (with NumPy for the array handling) for the AI side.
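For a classical (non-ML) starting point, here is a minimal sketch that isolates the red outlines and counts them as contours with OpenCV; the colour thresholds, kernel size, area cutoff, and file name are assumptions that would need tuning for the real map image.

```python
import cv2
import numpy as np

# Load the map image (file name is a placeholder).
img = cv2.imread("neighborhood.png")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Red wraps around the hue axis, so threshold two ranges and combine them.
# These bounds are rough guesses; tune them for the actual map colours.
mask1 = cv2.inRange(hsv, (0, 80, 80), (10, 255, 255))
mask2 = cv2.inRange(hsv, (170, 80, 80), (180, 255, 255))
mask = cv2.bitwise_or(mask1, mask2)

# Close small gaps so each outline becomes a single blob.
kernel = np.ones((5, 5), np.uint8)
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

# Count outer contours, discarding tiny specks (the area cutoff is a guess).
found = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = found[0] if len(found) == 2 else found[1]  # OpenCV 3/4 compatibility
polygons = [c for c in contours if cv2.contourArea(c) > 50]
print("Polygon count:", len(polygons))
```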
Related
I have been going over this for days now and have hit a roadblock, as I am too scared to try out my hypothesis.
I would like to find out the number of grayed rectangular boxes in this image. However, I am not sure how I can do that. I was thinking of two ways:
i. Getting area of the connected components, calculating their median and getting the number of components between a certain percentile of the area (may sound pretty strange).
ii. Making a machine learning model and find out the similar boxes in the image and count them.
However, I would like the solution to be more generalized, so that I can adapt it to other images I need to process.
Here is my source Image:
Any sort of help/suggestions and even solutions would be greatly appreciated.
Thanks in advance!
Maybe you are losing a lot of image information with the filtering. Do you have an unfiltered source image too? I suppose an ML approach would work pretty nicely then.
I noticed you could achieve better resolution if your camera were rotated 90° (if that is something you can control).
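As a concrete starting point for approach (i) in the question above, here is a minimal sketch using OpenCV's connected-component statistics; the threshold setup and the area band around the median are assumptions to tune against the actual image.

```python
import cv2
import numpy as np

# Load the image in grayscale (file name is a placeholder).
img = cv2.imread("boxes.png", cv2.IMREAD_GRAYSCALE)

# Binarize with Otsu; THRESH_BINARY_INV assumes the boxes are darker than
# the background — drop the _INV if the opposite is true.
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Label connected components; row 0 of the stats is the background.
num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
areas = stats[1:, cv2.CC_STAT_AREA]

# Keep components whose area is close to the median area, as in approach (i).
# The 0.5x–2x band is an assumed tuning choice.
median = np.median(areas)
count = int(np.sum((areas >= 0.5 * median) & (areas <= 2 * median)))
print("Estimated box count:", count)
```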
For my school project, I need to find images in a large dataset. I'm working with Python and OpenCV. So far, I've managed to find an exact match of an image in the dataset, but it takes a lot of time even though I only had 20 images for the test code. So I've searched a few pages of Google and tried the code on these pages:
image hashing
building an image hashing search engine
feature matching
I've also been thinking of searching through the hashed dataset, saving the candidates' paths, and then finding the best feature-matching image among them. But most of the time, the narrowed-down candidate set is very different from my query image.
Image hashing is really great. It looks like what I need, but there is a problem: I need to find an exact match, not similar photos. So I'm asking you: if you have any suggestions, a piece of code that might help, or improvements to the reference code I've linked, can you share them with me? I'd be really happy to try or research whatever you send or suggest.
OpenCV is probably the wrong tool for this. The algorithms there are geared towards finding similar matches, not exact ones. The general idea is to use machine learning to teach the code to recognize what a car looks like so it can detect cars in videos, even when the color or form changes (driving in the shadow, different make, etc.).
I've found two approaches work well when trying to build an image database.
Use a normal hash algorithm like SHA-256 plus maybe some metadata (file or image size) to find matches
Resize the image down to 4x4 or even 2x2. Use the pixel RGB values as "hash".
The first approach is to reduce the image to a number. You can then put the number in a lookup table. When searching for the image, apply the same hashing algorithm to the image you're looking for. Use the new number to look in the table. If it's there, you have a match.
Note: In all cases, hashing can produce the same number for different pictures. So you have to compare all the pixels of two pictures to make sure it's really an exact match. That's why it sometimes helps to add information like the picture size (in pixels, not file size in bytes).
The second approach allows you to find pictures which are very similar to the eye but in fact slightly different. Imagine cropping off a single pixel column on the left or tilting the image by 0.01°. To you, the image will be the same, but for a computer, they will be totally different. The second approach tries to average such small changes out. The cost here is that you will get more collisions, especially for B&W pictures.
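A minimal sketch of both approaches with Pillow might look like this; the file names are placeholders, and the "exact" hash here is taken over the decoded pixel data plus the image size rather than the raw file bytes (either works, with the collision caveat above).

```python
import hashlib
from PIL import Image

def exact_key(path):
    """Approach 1: SHA-256 of the decoded pixels, plus the image size as metadata."""
    img = Image.open(path).convert("RGB")
    return hashlib.sha256(img.tobytes()).hexdigest(), img.size

def tiny_key(path, size=(4, 4)):
    """Approach 2: shrink to a few pixels and use the RGB values as the key."""
    img = Image.open(path).convert("RGB").resize(size)
    return tuple(img.getdata())

# Build a lookup table over the database, then probe it with the query image.
database = ["db/a.png", "db/b.png"]          # placeholder paths
table = {exact_key(p): p for p in database}
match = table.get(exact_key("query.png"))    # placeholder query image
print("Exact match:", match)
```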
Finding exact image matches using hash functions can be done with the undouble library (Disclaimer: I am also the author). It works using a multi-step process of pre-processing the images (grayscaling, normalizing, and scaling), computing the image hash, and the grouping of images based on a threshold value.
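To see the same idea without the library, a hand-rolled sketch of those steps (grayscale, scale down, compute a hash, group by a Hamming-distance threshold) could look like the following; this is not the undouble API, just an illustration of the workflow it describes.

```python
import numpy as np
from PIL import Image

def average_hash(path, hash_size=8):
    # Pre-process: grayscale and scale down, then threshold each pixel at the mean.
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = np.asarray(img, dtype=np.float32)
    return (pixels > pixels.mean()).flatten()

def hamming(h1, h2):
    return int(np.count_nonzero(h1 != h2))

# Group images whose hashes differ by at most `threshold` bits (0 = hash-identical).
paths = ["a.png", "b.png", "c.png"]          # placeholder file names
hashes = {p: average_hash(p) for p in paths}
threshold = 0
groups = []
for path, h in hashes.items():
    for group in groups:
        if hamming(h, hashes[group[0]]) <= threshold:
            group.append(path)
            break
    else:
        groups.append([path])
print(groups)
```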
I have a problem when handling images taken with cell phones.
Image sample:
So I get ghosting, especially in the question-number area.
I think the reason is a little camera shake when pressing the shutter.
Is there any way to remove the ghosting so the question-number area will look clearer?
There is another worse one:
Actually, I found some image denoising functions like cv2.fastNlMeansDenoisingColored(), and they indeed work well on some images.
Unfortunately, they don't work for the two images above.
Env: Python 3.6.5, OpenCV 3.4.0
Thanks.
Wesley
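For reference, here is the denoising call mentioned above followed by an unsharp-mask sharpening pass, which sometimes makes slightly ghosted text look crisper; the parameter values and file names are illustrative only.

```python
import cv2

img = cv2.imread("exam_sheet.jpg")           # placeholder file name

# Non-local-means denoising; h and hColor control the filter strength.
den = cv2.fastNlMeansDenoisingColored(img, None, 10, 10, 7, 21)

# Unsharp mask: blend in a negative-weighted blurred copy to emphasise edges.
blur = cv2.GaussianBlur(den, (0, 0), sigmaX=3)
sharp = cv2.addWeighted(den, 1.5, blur, -0.5, 0)
cv2.imwrite("result.jpg", sharp)
```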
I have some data about a set of coordinates, like {(x=1, y=2), (x=3, y=4), ...}. I projected these coordinates onto the picture. Then I would like to count these points in different positions. My idea is below:
First, split the picture into boxes of 10 pixels each.
Second, count the points inside each pixel box.
I am new to this area, and I use Python a lot. I think this may be a computer graphics problem.
I am not asking for a code solution. I just want to know which libraries or approaches that are related.
A hint from anyone would be highly appreciated. Thanks.
Sure, your approach seems fine. You simply want to count the number of pixels in different image regions that you placed, correct?
I answered a question recently (with Python) that was giving an indication if there was a black pixel inside an image region. It could be easily modified to count pixels instead of simply finding one. Check it out here and modify your question or post a new one if you have code problems working it out.
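A minimal sketch of the binning idea in plain Python (the point list is a placeholder):

```python
from collections import Counter

# Coordinates already projected onto the picture; placeholder values.
points = [(1, 2), (3, 4), (15, 27), (18, 22)]
cell = 10  # grid cell size in pixels, as in the question

# Map each point to the 10x10 cell it falls in and count the points per cell.
counts = Counter((x // cell, y // cell) for x, y in points)
for (cx, cy), n in sorted(counts.items()):
    print(f"cell ({cx}, {cy}): {n} point(s)")
```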
I wanted to create a photomosaic in python using the PIL library.
Will FFT help me in finding the best match for a picture from the given set of pictures?
I have tried with the RGB colour space, but the output only seems okay.
I would need robust performance, so I thought of using FFT.
Kindly provide some other valuable suggestions too.
An implementation to spark ideas might be useful. link
Otherwise, google "image similarity measures". The top link is going to be this question, which again might provide the direction you're looking for.
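For the RGB route already tried, a minimal nearest-mean-colour matcher with Pillow and NumPy might look like this (the tile paths are placeholders):

```python
import numpy as np
from PIL import Image

def mean_rgb(img):
    # Average colour of an image as a length-3 vector.
    return np.asarray(img.convert("RGB"), dtype=np.float64).reshape(-1, 3).mean(axis=0)

# Pre-compute the mean colour of every candidate tile (placeholder paths).
tile_paths = ["tiles/t1.png", "tiles/t2.png", "tiles/t3.png"]
tile_means = np.array([mean_rgb(Image.open(p)) for p in tile_paths])

def best_tile(region):
    # Pick the tile whose mean colour is closest (Euclidean distance) to the region's.
    dists = np.linalg.norm(tile_means - mean_rgb(region), axis=1)
    return tile_paths[int(np.argmin(dists))]
```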