I am constructing a map with meaningful data using Folium, in Python, but I need to extract the information from it (for example, an image bounded by max/min lat/long values). I have tried several different approaches, but none of them gives me the data I want.
Here is a sample map, constructed using Folium and saved to an HTML file.
I need to use this as an RGB image rather than an interactive map. As far as I can see, there is no such functionality, or at least I could not find one. Is there a way?
Assuming there is no such way, I decided to crop the image using the Selenium browser approach instead. For that, I first had to set the map boundaries so that the captured image corresponds to known latitude/longitude values. I applied fit_bounds(), but the map is not bounded exactly by the given max/min lat/long values; there is a padding-like expansion beyond the boundaries, so this approach also failed. Could you please let me know if there is a solution for this? Briefly, I need the data that includes the RGB image and the lat/long values (at least the boundaries), retrieved directly from a Folium map if possible.
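For reference, here is a minimal sketch of the approach described above; the boundary coordinates, file paths and the Firefox driver are placeholders:

```python
import time
import folium
from selenium import webdriver

# Hypothetical boundaries: replace with the real max/min lat/long values.
bounds = [[40.60, 29.00], [40.80, 29.30]]  # [[min_lat, min_lon], [max_lat, max_lon]]

m = folium.Map()
m.fit_bounds(bounds)   # Leaflet still snaps to a discrete zoom level, hence the extra margin
m.save("map.html")

# Render the HTML in a headless browser and capture a screenshot.
options = webdriver.FirefoxOptions()
options.add_argument("--headless")
driver = webdriver.Firefox(options=options)
driver.get("file:///absolute/path/to/map.html")
time.sleep(3)          # give the map tiles time to load
driver.save_screenshot("map.png")
driver.quit()
```

Part of the padding-like expansion comes from Leaflet only zooming to discrete levels, so the visible area is usually somewhat larger than the requested bounds.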
Thank you in advance for any support.
I have a task where, given coordinates (latitude and longitude) as input, the corresponding map should be shown in the output with the help of PySimpleGUI, in Python of course.
So, any ideas on how to start this thing?
If you just want to show a map with a satellite image for a given lat/lon coordinate, take a look at Google's "Maps Static API". All you have to do is generate a URL with the appropriate parameters, and this API will return an image of your place and zoom level (and other parameters). As the name implies, it generates a static map image, instead of the interactive maps generated through the normal Maps APIs.
More info in the documentation:
https://developers.google.com/maps/documentation/maps-static/overview
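For illustration, here is a minimal sketch of requesting such an image; the coordinates are placeholders and YOUR_API_KEY must be replaced with your own key:

```python
import requests

# Hypothetical center coordinates: use the lat/lon taken from your GUI input.
lat, lon = 60.1699, 24.9384
url = (
    "https://maps.googleapis.com/maps/api/staticmap"
    f"?center={lat},{lon}&zoom=15&size=640x640&maptype=satellite&key=YOUR_API_KEY"
)

response = requests.get(url)
with open("map.png", "wb") as f:
    f.write(response.content)  # the API returns the image bytes directly
```

The saved PNG can then be displayed in a PySimpleGUI window with an sg.Image element.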
Note that if you are thinking about "extracting" images from these maps, make sure to review the terms of service, which generally prohibit bulk extraction of data. https://www.google.com/help/terms_maps/
I am trying to build up an algorithm to detect some objects and track them over time. My input data is a tif multi-stack file, which I read as a np array. I apply a U-Net model to create a binary mask and then identify the coordinates of single objects using scipy.
Up to here everything kind of works, but I just cannot get my head around the tracking. I have a dictionary where keys are the frame numbers and values are lists of tuples. Each tuple contains the coordinates of one object.
Now I have to link the objects together, which on paper seems pretty simple. I was hoping there was a function or a package to do so (ideally something similar to TrackMate or M2track in ImageJ), but I cannot find anything like that. I am considering writing my own nearest-neighbor tool, but I'd like to know whether there is a less painful way (and I would also like to be able to use more advanced metrics).
The other option I considered is using cv2, but this would require converting the data to a format cv2 likes, which would significantly slow down the code. In addition, I would like to keep the data as close as possible to the original input, so no cv2 for me.
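If you do end up rolling your own, here is a rough sketch of frame-to-frame linking with SciPy, assuming the dictionary layout described above; the distance cutoff is an arbitrary assumption:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def link_two_frames(prev_pts, curr_pts, max_dist=20.0):
    """Match objects between two consecutive frames by minimizing total distance.

    prev_pts, curr_pts: lists of (y, x) tuples, as stored in the detections dictionary.
    Returns (prev_index, curr_index) pairs; objects further apart than max_dist are dropped.
    """
    cost = cdist(np.asarray(prev_pts), np.asarray(curr_pts))
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]

# Example: link frame 0 to frame 1 of the detections dictionary.
# pairs = link_two_frames(detections[0], detections[1])
```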
I solved it using trackpy.
http://soft-matter.github.io/trackpy/v0.5.0/
trackpy properly reads multi-stack TIFF files (OpenCV can't).
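For reference, a minimal sketch of the kind of call that did the job, assuming detections is the {frame: [(y, x), ...]} dictionary from the question; the search_range and memory values are illustrative:

```python
import pandas as pd
import trackpy as tp

# Flatten the {frame: [(y, x), ...]} dictionary into the DataFrame layout trackpy expects.
rows = [
    {"frame": frame, "y": y, "x": x}
    for frame, points in detections.items()
    for (y, x) in points
]
features = pd.DataFrame(rows)

# Link detections across frames; search_range is the maximum displacement per frame.
linked = tp.link(features, search_range=15, memory=2)
print(linked.head())  # each row now carries a 'particle' id identifying its track
```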
I am trying to extract a time series dataset from an image (with an x-axis and a y-axis). Is there a quick way to do so in Python?
To be more precise, my graph is a "HEL Share Price" chart, and I am trying to get daily data from it.
Any help?
Thanks! :)
I know this Web App that can do it: WebPlotDigitizer
Looking at alternativeto.net I found Engauge Digitizer, which "accepts image files (like PNG, JPEG and TIFF) containing graphs, and recovers the data points from those graphs", and a recent version "adds python support". I have never used Engauge, but it sounds like what you want...
Keep in mind that it is not that easy to automate such a task, because finding the correct axis labels is tricky, and a label like "49,28" might even overlap the graph sometimes...
In Python, you could try this Python3 utility. It says it can extract raw data from plot images.
But you can more easily extract data from graph images using GUI-friendly tools, like plotdigitizer.com or automeris.io. I prefer the former over the latter. You can find the entire list of such programs over here.
I am trying to obtain a radius and diameter distribution from some AFM (Atomic force microscopy) measurements. So far I am trying out Gwyddion, ImageJ and different workflows in Matlab.
At the moment the best result I have found is to use Gwyddion, take the phase image, high-pass filter it and then try an edge detection with 'Laplacian of Gaussian'. The result is shown in figure 3. However, this image is still too noisy and doesn't really capture the edges of all the particles (some are merged together, others do not have a clear perimeter).
In the end I need an image which segments each of the spherical particles which I can use for blob detection/analysis to obtain size/radius information.
Can anyone recommend a different method?
I would definitely try granulometry; it was designed for something really similar. There is a good explanation of granulometry here, starting at page 158.
The granulometry performs consecutive openings of increasing size that erase the different patterns according to their dimensions: the bigger the pattern, the later it is erased. It gives you a curve that represents the distribution of pattern dimensions in your image, which is exactly what you want.
However, it will not give you any information about the positions inside the image. If you want a rough modeling of the blobs present in your image, you can take a look at the Ultimate Opening.
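As a rough sketch of what a granulometry looks like in code (scikit-image here; the binary segmentation and the range of structuring-element radii are assumptions):

```python
import numpy as np
from skimage.morphology import binary_opening, disk

def granulometry(binary, max_radius=30):
    """Measure how much area survives binary openings of increasing size."""
    areas = [binary_opening(binary, disk(r)).sum() for r in range(1, max_radius + 1)]
    # The negative differences of this curve give the size distribution of the patterns.
    return -np.diff(np.asarray(areas, dtype=float))

# dist = granulometry(segmented_particles > 0)  # peaks indicate dominant particle radii
```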
Maybe you can use Avizo; it is powerful software for dealing with image problems, especially for 3D data (CT).
I have photo images of galaxies. There is some unwanted data on these images (like stars or aeroplane streaks) that is masked out. I don't just want to fill the masked areas with some mean value, but to interpolate them according to the surrounding data. How do I do that in Python?
We've tried various functions in the scipy.interpolate package: RectBivariateSpline, interp2d, splrep/splev, map_coordinates, but all of them seem designed to find new pixels between existing pixels; we were unable to make them fill an arbitrary "hole" in the data.
What you want is called Inpainting.
OpenCV has an inpaint() function that does what you want.
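A minimal sketch of how that call goes (the file names, mask convention and inpaint radius are placeholders):

```python
import cv2

img = cv2.imread("galaxy.png")                       # 8-bit, 1- or 3-channel image
mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)  # non-zero pixels mark areas to fill

# 3 is the radius of the neighbourhood considered around each masked pixel (assumed value)
restored = cv2.inpaint(img, mask, 3, cv2.INPAINT_TELEA)
cv2.imwrite("galaxy_inpainted.png", restored)
```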
What you want is not interpolation at all. Interpolation depends on the assumption that data between known points is roughly contiguous. In any non-trivial image, this will not be the case.
You actually want something like the content-aware fill that is in Photoshop CS5. There is a free alternative available in The GIMP through the GIMP-resynthesize plugin. These filters are extremely advanced and to try to re-implement them is insane. A better choice would be to figure out how to use GIMP-resynthesize in your program instead.
I made my first GIMP Python script, which might help you:
my scripts
It is called a conditional filter, as it is a matrix filter that fills every transparent pixel of an image with the mean value of its 4 nearest neighbours that are not transparent.
Be sure to use an RGBA image whose transparency values are only 0 and 255.
It is rough, simple, slow and unoptimized, but bug-free.
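If you prefer to stay out of GIMP, here is a rough, slow NumPy sketch of the same idea: iteratively fill masked pixels with the mean of their valid 4-neighbours (the array names are placeholders):

```python
import numpy as np

def fill_from_neighbours(image, mask):
    """Iteratively replace masked pixels with the mean of their non-masked 4-neighbours.

    image: 2D array of pixel values; mask: boolean array, True where data is missing.
    """
    img = image.astype(float).copy()
    missing = mask.astype(bool).copy()
    while missing.any():
        progress = False
        for y, x in zip(*np.nonzero(missing)):
            values = []
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < img.shape[0] and 0 <= nx < img.shape[1] and not missing[ny, nx]:
                    values.append(img[ny, nx])
            if values:
                img[y, x] = np.mean(values)
                missing[y, x] = False
                progress = True
        if not progress:   # remaining pixels have no valid neighbours at all
            break
    return img
```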