I'm working on a project that involves reading colored data from an image (attached below). The input is in longitude/latitude, and it is important that I find a way to convert these coordinates to pixel positions in the image. I have been contemplating this issue a lot and no great solutions come to mind. Unfortunately, the raw data used to construct the image has not been released. Any ideas? :)
[Image: the map I need to find color values for]
Edit: A complication here is that these images use a curved projection of the earth, so the latitude lines are not straight parallel lines; they curve.
One way that might work, depending on your image, is to use something similar to http://www.lat-long.com/ , using Google Maps to find a point. You would have to scale the image from Google Maps, overlay it on top of your image, and read off the pixel value. You should be able to request an image at the proper zoom level, and the good news is that, since your image is static, you can hardcode the zoom level.
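If you can identify a handful of reference points (latitude/longitude plus their pixel positions, e.g. read off such an overlay), another generic option is to fit a low-order polynomial mapping from (lon, lat) to (x, y) by least squares; the quadratic terms can absorb the curvature of the latitude lines mentioned in the edit. A minimal sketch, where all the control-point values are made up for illustration:

```python
import numpy as np

def fit_lonlat_to_pixel(lonlat, pixels):
    """Least-squares fit of a quadratic 2D polynomial (lon, lat) -> (x, y)."""
    lon, lat = lonlat[:, 0], lonlat[:, 1]
    # Quadratic terms let the fit absorb the curvature of the latitude lines.
    A = np.column_stack([np.ones_like(lon), lon, lat,
                         lon * lat, lon ** 2, lat ** 2])
    coeffs, *_ = np.linalg.lstsq(A, pixels, rcond=None)
    return coeffs  # shape (6, 2)

def lonlat_to_pixel(coeffs, lon, lat):
    terms = np.array([1.0, lon, lat, lon * lat, lon ** 2, lat ** 2])
    return terms @ coeffs  # -> (x, y)

# Hypothetical control points: (lon, lat) and their pixel positions.
lonlat = np.array([[-100.0, 30.0], [-90.0, 30.0], [-100.0, 40.0],
                   [-90.0, 40.0], [-95.0, 35.0], [-85.0, 45.0],
                   [-105.0, 45.0], [-85.0, 30.0]])
pixels = np.array([[120.0, 400.0], [260.0, 410.0], [110.0, 250.0],
                   [255.0, 255.0], [190.0, 330.0], [330.0, 120.0],
                   [40.0, 110.0], [335.0, 415.0]])

coeffs = fit_lonlat_to_pixel(lonlat, pixels)
print(lonlat_to_pixel(coeffs, -95.0, 35.0))  # should land near (190, 330)
```

You need at least six well-spread control points for the quadratic fit; more points make it more robust.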
Related
I'm currently a little stuck on a problem that sounds easier than it is (at least for me):
Let's say you have satellite images taken from LEO that show an approximately 1000 km wide area (the optical axis of the camera is more or less perpendicular to the ground). There is no additional location data stored in the image, so there is no way of directly extracting the position the image was taken from.
What I want to do is write a program (in Python) that can find the location the image was taken from by matching it to a map of Earth. This should be done automatically (more or less in real time) for the purpose of calculating the orbit of the satellite taking the images.
I've no problem calculating the orbit, once I have location data (even if it's very noisy), using a technique based on an Extended Kalman Filter.
Matching a satellite image to a map of Earth using just the image data, on the other hand... I honestly don't even know where to start.
I know this is an incredibly unspecific question and not related to a specific problem, but maybe someone could point me in the right direction...
EDIT:
Just to give you an idea of how unprocessed images from LEO look, I included a few reasonably good images taken over one orbit of Earth.
The images were taken with a NIR camera. The resolution of the images I included is only 640x480 (by mistake!), but the actual image resolution should be around 4K.
These images have some artifacts in them because they were taken through a thick glass window of the ISS, so there are some reflections happening there...
I have a dataset that contains the local coordinates of tracked cars, and I have an image related to my dataset. I would like to plot my track on the image, but the coordinate systems don't match. I also know the ground resolution of the image in meters per pixel (ortho pixels to meters), but I do not know how to convert my coordinates to pixel values. Does anyone know how I can do this? If there are sources or sample code, please point me to them; I was searching and couldn't find anything. I am new to this area, so could someone explain from scratch what I should do?
Thank you
I tried using imshow to open my image, but as I need to convert my coordinates to pixel values and don't know how to do it, I couldn't finish my figure.
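Since you asked for sample code: below is a minimal sketch of the conversion, assuming the image's top-left corner corresponds to a known local coordinate, the meters-per-pixel scale is uniform, and your local y axis points up (the scale, origin, file name, and track values are all made up):

```python
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np

# Assumed values - replace with the ones that match your ortho image.
METERS_PER_PIXEL = 0.25            # ground resolution of the image
ORIGIN_X_M, ORIGIN_Y_M = 0.0, 0.0  # local coords of the image's top-left pixel

img = mpimg.imread("ortho.png")    # hypothetical file name
height_px = img.shape[0]

# Hypothetical car track in local (meter) coordinates.
track_m = np.array([[10.0, 5.0], [12.0, 6.5], [14.0, 8.0]])

# Convert meters -> pixels.
px = (track_m[:, 0] - ORIGIN_X_M) / METERS_PER_PIXEL
py = (track_m[:, 1] - ORIGIN_Y_M) / METERS_PER_PIXEL
py = height_px - py  # flip, because image rows grow downward

plt.imshow(img)
plt.plot(px, py, "r-")
plt.show()
```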
picture example
I have recently started learning Python with the Spyder IDE and I'm a bit lost, so I'm asking for advice.
The thing is that I need to program an algorithm that, given an arbitrary image representing a board with black spots on it (in the picture I uploaded it is a 4x5 board), recognizes the edges properly and draws an AxB grid on it. I also need to save each cell separately so I can work with them.
I know that OpenCV handles images, and I have even tried auto_canny, but I don't really know how to solve this problem. Can anybody give me some pointers, please?
As I understand from your question, you need as output the grid dimensions of the matrix in your picture (e.g. 4x3) and each cell as a separate image.
This is the way I would approach this problem:
Use Canny edge detection + corner detection to get the intersections of the lines
With the coordinates of the corners you can form your regions of interest, crop each one individually, and save it as a new image
For the grid you can check the X's and the Y's of the coordinates; for example, you will have something like ((50, 30), (50, 35), (50, 40)), and from this you can tell that these 3 points lie on the same grid line (they share X = 50). I would encourage you to use an error margin, as the points might not all land on exactly the same coordinate, but they should not differ by much. A sketch of all three steps follows below.
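Here is a rough sketch of the approach in Python with OpenCV (the file name, detector parameters, and the 10-pixel error margin are all assumptions to tune on your board):

```python
import cv2
import numpy as np

img = cv2.imread("board.png")  # hypothetical file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Step 1: corner detection (goodFeaturesToTrack wraps a Harris-style
# detector; the parameters here are rough guesses).
corners = cv2.goodFeaturesToTrack(gray, maxCorners=100,
                                  qualityLevel=0.01, minDistance=20)
corners = corners.reshape(-1, 2)

# Step 3: cluster X's and Y's with an error margin, since corners on
# the same grid line will not share exactly the same coordinate.
def cluster(values, margin=10):
    values = np.sort(values)
    groups = [[values[0]]]
    for v in values[1:]:
        if v - groups[-1][-1] <= margin:
            groups[-1].append(v)
        else:
            groups.append([v])
    return [int(np.mean(g)) for g in groups]

xs = cluster(corners[:, 0])
ys = cluster(corners[:, 1])
print(f"grid: {len(xs) - 1} x {len(ys) - 1} cells")

# Step 2: crop each cell between neighbouring grid lines and save it.
for i in range(len(ys) - 1):
    for j in range(len(xs) - 1):
        cell = img[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
        cv2.imwrite(f"cell_{i}_{j}.png", cell)
```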
Good luck!
I have a group of images and some separate heatmap data which (imperfectly) indicates where the subject of each image is. The heatmap data is in a numpy array with shape (224, 224, 3). I would like to generate bounding box data from this heatmap data.
The heatmaps are not always perfect, so I guess I'm wondering if anyone can think of an intelligent way to do this.
Here are some examples of what happens when I apply the heatmap data to the image:
I found a solution to this in MATLAB, but I have no idea how to read that code! I am a Python programmer, unfortunately.
https://github.com/metalbubble/CAM/tree/master/bboxgenerator
Anyone have any ideas about how to approach something like this?
I am not quite sure what the heatmap data of your project looks like exactly, but it seems to me that you could use something like Selective Search. You can also have a look at this interesting paper. Maybe you can use this approach on your dataset.
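In case it helps, a minimal sketch of running OpenCV's Selective Search implementation (it ships with the opencv-contrib-python package; the file name and the choice to keep only the first 10 proposals are arbitrary):

```python
import cv2

img = cv2.imread("image.jpg")  # hypothetical input image

# Selective Search lives in the ximgproc contrib module.
ss = cv2.ximgproc.segmentation.createSelectiveSearchSegmentation()
ss.setBaseImage(img)
ss.switchToSelectiveSearchFast()

# Each proposal is (x, y, w, h); keep the first few as candidate boxes.
for (x, y, w, h) in ss.process()[:10]:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("proposals.jpg", img)
```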
I'm attempting a similar method for automating the creation of bounding boxes (since, let's face it: creating boxes manually takes a long time).
This other Stack Overflow post covers a similar idea:
EDIT: (I originally put a link to the current Stack post 🤦 - but here is the post I was actually referring to)
Generating bounding boxes from heatmap data
The problem at hand, as I see it, is that heatmaps can be fragmented and a bit arbitrary. The solution that initially comes to mind is setting a threshold on the heatmap. So in the case of the example heatmap images, a bounding box would cover all regions that are yellow/orange/red rather than, say, green/blue.
It depends on how many bounding boxes you need. You can set a threshold and have multiple bounding boxes, one for each highly activated region, or try connecting the regions (by a morphological operation, maybe) and calculate a single bounding box for the connected activated pixels.
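A minimal sketch of that threshold-plus-morphology idea, assuming the heatmap is the (224, 224, 3) array mentioned above with higher values meaning hotter (the threshold and kernel size are guesses to tune):

```python
import cv2
import numpy as np

def heatmap_to_boxes(heatmap, threshold=0.6):
    """Threshold a heatmap and return one bounding box per connected blob."""
    # Collapse to one channel and normalise to [0, 1].
    gray = heatmap.mean(axis=2).astype(np.float32)
    gray = (gray - gray.min()) / (gray.max() - gray.min() + 1e-8)

    # Keep only the strongly activated ("yellow/orange/red") pixels.
    mask = (gray >= threshold).astype(np.uint8)

    # Morphological closing to connect fragmented regions.
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

    # One box (x1, y1, x2, y2) per connected component; label 0 is background.
    n, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    return [(x, y, x + w, y + h) for x, y, w, h, _ in stats[1:n]]

heatmap = np.random.rand(224, 224, 3)  # stand-in for your real data
print(heatmap_to_boxes(heatmap))
```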
I have two images: one that contains a box and one without. There is a small vertical disparity between the two pictures, since the camera was not at the same spot and was translated a bit. I want to cut out the box and fill the hole with the information from the other picture.
I want to achieve something like this (a slide from a computer vision course)
I thought about using the cv2.createBackgroundSubtractorMOG2() method, but it does not seem to work with only 2 pictures.
Simply subtracting one picture from the other does not work either, because of the disparity.
The course suggests using RANSAC to compute the most likely relationship between the two pictures and subtract the area that changed a lot. But how do I actually fill in the holes?
Many thanks in advance!!
If you plan to use only a pair of images (or only a few images), image stitching methods are better suited than background subtraction.
The steps are:
Calculate homography between the two images.
Warp the second image so that it overlaps the first.
Replace the region containing the human (or, in your case, the box) with pixels from the warped image.
This link shows a basic example of image stitching. You will need extra work if both images have humans in different places, but otherwise it should not be hard to tweak this code.
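As a rough sketch of those three steps, here is one way to do it with ORB features and RANSAC in OpenCV (the file names and the hand-drawn box mask are assumptions; the mask could also be derived by thresholding the aligned difference image):

```python
import cv2
import numpy as np

img_box = cv2.imread("with_box.jpg")      # hypothetical file names
img_clean = cv2.imread("without_box.jpg")

# 1. Match ORB features and estimate a homography with RANSAC.
orb = cv2.ORB_create(2000)
k1, d1 = orb.detectAndCompute(img_clean, None)
k2, d2 = orb.detectAndCompute(img_box, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# 2. Warp the clean image into the frame of the image with the box.
h, w = img_box.shape[:2]
warped = cv2.warpPerspective(img_clean, H, (w, h))

# 3. Replace the box region with pixels from the warped image.
mask = np.zeros((h, w), np.uint8)
mask[100:200, 150:300] = 255  # hypothetical box location, marked by hand
result = img_box.copy()
result[mask > 0] = warped[mask > 0]
cv2.imwrite("result.jpg", result)
```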
You can try this library for background subtraction issues. https://github.com/andrewssobral/bgslibrary
There are Python wrappers for this tool.