I'm currently trying to figure out how vesselfinder.com calculates the bounding boxes (bbox) it uses to query data from its backend.
Given an input like: lat, lon = 59.8230, 22.9586
They fetch data by using this bbox: 13761899,35886447,13779795,35898097
If I try to get a similar bbox by using bboxfinder.com, I get the following values, which aren't even close to what I was expecting: 2553560.4710,8358928.9331,2556565.4293,8360514.8411
The website above converts EPSG:4326 (WGS 84) to EPSG:3857 (WGS 84 / Pseudo-Mercator) by default. I tried to verify in the JS code of vesselfinder that they're using this conversion as well:
var c = new s.geom.MultiLineString(t);
return c.transform('EPSG:4326', 'EPSG:3857'),
The following projections are also mentioned, but I'm pretty sure it has to be the transformation shown above.
it = [
new $('EPSG:3857'),
new $('EPSG:102100'),
new $('EPSG:102113'),
new $('EPSG:900913'),
The question now is: what am I doing wrong, or where is my thinking off?
I also tried using Python for the conversion and even tried the other EPSG:XXXXXX codes mentioned above, but didn't get the desired result. Swapping the order of the two EPSG codes when creating the Transformer didn't help either.
from pyproj import Transformer
TRAN_4326_TO_3857 = Transformer.from_crs("EPSG:4326", "EPSG:3857", always_xy=True)
lat = 59.823002
lon = 22.958583
expansion = 2000
res = TRAN_4326_TO_3857.transform(lon, lat)
bbox = (round(res[0]-expansion), round(res[1]-expansion), round(res[0]+expansion), round(res[1]+expansion))
print(bbox)
# (2553738, 8358436, 2557738, 8362436)
This one is close to the one I got from bboxfinder, but is again not even close to the bbox vesselfinder is using.
https://gis.stackexchange.com/a/370496 seems to have the math.
convertCoordinates(lon, lat) {
var x = (lon * 20037508.34) / 180;
var y = Math.log(Math.tan(((90 + lat) * Math.PI) / 360)) /
(Math.PI / 180);
y = (y * 20037508.34) / 180;
return [x, y];
}
Or, in C# ( https://gis.stackexchange.com/a/325551 )
public static double[] latLonToMeters(double lat, double lon)
{
    // originShift = 2 * PI * 6378137 / 2 = 20037508.34 (Web Mercator extent)
    double[] meters = new double[2];
    meters[0] = lon * originShift / 180d;
    meters[1] = Mathd.Log(Mathd.Tan((90d + lat) * Mathd.PI / 360d)) /
                (Mathd.PI / 180d);
    meters[1] = meters[1] * originShift / 180d;
    return meters;
}
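As a quick sanity check, here is the same spherical Web Mercator math as a minimal Python sketch (my own translation of the snippets above, not vesselfinder's code); for the example point it agrees with the pyproj result, which suggests the projection itself is not the source of the discrepancy:

import math

def convert_coordinates(lon, lat):
    # forward Web Mercator (EPSG:3857), same math as the JS/C# snippets above
    x = lon * 20037508.34 / 180
    y = math.log(math.tan((90 + lat) * math.pi / 360)) / (math.pi / 180)
    y = y * 20037508.34 / 180
    return x, y

print(convert_coordinates(22.958583, 59.823002))
# ≈ (2555738, 8360436) -- matches pyproj, still not vesselfinder's bbox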
In any case, note the website those came from; that may be a better place to get the algorithm. (Then if you need help converting to your preferred language, come back here.)
I want to implement Paint.NET's polar inversion effect in Python.
If you don't know Paint.NET's polar inversion effect, basically, it transforms this (I created the image using Python):
To this:
After a bit of Google searching I found this:
protected override void InverseTransform(ref WarpEffectBase.TransformData data)
{
double x = data.X;
double y = data.Y;
double invertDistance = DoubleUtil.Lerp(1.0, base.DefaultRadius2 / (x * x + y * y), this.amount);
data.X = x * invertDistance;
data.Y = y * invertDistance;
}
Source
After a bit more Google searching I found this:
float Lerp(float firstFloat, float secondFloat, float by)
{
return firstFloat * (1 - by) + secondFloat * by;
}
Source
So putting the pieces together, this is the transformation that needs to be applied to every pixel, implemented in Python:
def lerp(x, y, by):
    return x * (1 - by) + y * by

def transform_xy(x, y, width, height):
    # shift the origin to the image center, y axis up
    cx = width / 2
    cy = height / 2
    return x - cx, cy - y

def base_radius_squared(width, height):
    radius = min(width, height) / 2
    return radius ** 2

def polar_inversion(x, y, radius2, strength):
    # radius2 is the squared base radius, i.e. base_radius_squared(width, height)
    invert_distance = lerp(1, radius2 / (x**2 + y**2), strength)
    return x * invert_distance, y * invert_distance
Strength is a float between -4 and 4 (inclusive). Note that x and y are not the raw pixel coordinates of the image: the origin is not at the upper-left corner, and the y axis does not point downwards.
The x, y values used here are relative to the center of transformation, which defaults to the center of the image, and the y axis points upwards. I just want to clarify the coordinate system here.
So how do I apply the transformation to every pixel of an image as efficiently as possible, without a for loop over every pixel? In other words, how do I apply the transformation in a vectorized way?
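One way to do this with NumPy and SciPy, as a sketch under the conventions above (grayscale image, origin at the image center, y axis up; polar_inversion_image is my own name, not Paint.NET's API): build the whole coordinate grid at once, apply the inverse transform to it (like Paint.NET's InverseTransform, it maps destination pixels back to source positions), and let scipy.ndimage.map_coordinates do the sampling.

import numpy as np
from scipy import ndimage

def polar_inversion_image(img, strength):
    h, w = img.shape[:2]
    radius2 = (min(w, h) / 2) ** 2                 # base_radius_squared(width, height)
    ys, xs = np.mgrid[0:h, 0:w]                    # all pixel rows/columns at once
    x = xs - w / 2                                 # centered coordinates, y axis up
    y = h / 2 - ys
    d2 = np.maximum(x**2 + y**2, 1e-12)            # avoid division by zero at the center
    invert = (1 - strength) + strength * (radius2 / d2)   # lerp(1, radius2/d2, strength)
    cols = x * invert + w / 2                      # back to pixel coordinates
    rows = h / 2 - y * invert
    # bilinear sampling of the source image at the transformed positions
    return ndimage.map_coordinates(img, [rows, cols], order=1, mode='constant')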
I am detecting edges of round objects and am obtaining "bumpy", irregular edges. Is there a way to smooth the edges so that I get a more uniform shape?
For example, in the code below I generate a "bumpy" circle (left). Is there a smoothing or moving-average kind of function I could use to obtain or approximate the "smooth" circle (right)? Preferably with some sort of parameter I can control, as my actual images aren't perfectly circular.
import numpy as np
import matplotlib.pyplot as plt
fig, (bumpy, smooth) = plt.subplots(ncols=2, figsize=(14, 7))
an = np.linspace(0, 2 * np.pi, 100)
bumpy.plot(3 * np.cos(an) + np.random.normal(0,.03,100), 3 * np.sin(an) + np.random.normal(0,.03,100))
smooth.plot(3 * np.cos(an), 3 * np.sin(an))
You can do this in frequency domain. Take the (x,y) coordinates of the points of your curve and construct the signal as signal = x + yj, then take the Fourier transform of this signal. Filter out the high frequency components, then take the inverse Fourier transform and you'll get a smooth curve. You can control the smoothness by adjusting the cutoff frequency.
Here's an example:
import numpy as np
from matplotlib import pyplot as plt
r = 3
theta = np.linspace(0, 2 * np.pi, 100)
noise_level = 2
# construct the signal
x = r * np.cos(theta) + noise_level * np.random.normal(0,.03,100)
y = r * np.sin(theta) + noise_level * np.random.normal(0,.03,100)
signal = x + 1j*y
# FFT and frequencies
fft = np.fft.fft(signal)
freq = np.fft.fftfreq(signal.shape[-1])
# filter
cutoff = 0.1
fft[np.abs(freq) > cutoff] = 0
# IFFT
signal_filt = np.fft.ifft(fft)
plt.figure()
plt.subplot(121)
plt.axis('equal')
plt.plot(x, y, label='Noisy')
plt.subplot(122)
plt.axis('equal')
plt.plot(signal_filt.real, signal_filt.imag, label='Smooth')
plt.show()
If the shapes in question approximate ellipses, and you’d like to force them to be actual ellipses, you can easily fit an ellipse by computing the moments of inertia of the set of points, and figure the parameters of the ellipse from those. This Q&A shows how to accomplish that using MATLAB, the code is easy to translate to Python:
import numpy as np
import matplotlib.pyplot as plt
# Some test data (from Question, but with offset added):
an = np.linspace(0, 2 * np.pi, 100)
x = 3 * np.cos(an) + np.random.normal(0,.03,100) + 3.8
y = 3 * np.sin(an) + np.random.normal(0,.03,100) + 5.4
plt.plot(x, y)
# Approximate the ellipse:
L, V = np.linalg.eig(np.cov(x, y))
r = np.sqrt(2*L) # radius
cos_phi = V[0, 0]
sin_phi = V[1, 0] # two components of minor axis direction
m = np.array([np.mean(x), np.mean(y)]) # centroid
# Draw the ellipse:
x_approx = r[0] * np.cos(an) * cos_phi - r[1] * np.sin(an) * sin_phi + m[0]
y_approx = r[0] * np.cos(an) * sin_phi + r[1] * np.sin(an) * cos_phi + m[1]
plt.plot(x_approx, y_approx, 'r')
plt.show()
The centroid calculation above works because the points are uniformly distributed around the ellipse. If this is not the case, a slightly more complex centroid computation is needed.
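For instance, a sketch of such a computation: weight each point by its local arc length (half the combined length of the two adjacent segments), so that unevenly spaced points don't bias the centroid:

# arc-length-weighted centroid (sketch); x, y as above, forming a closed curve
d = np.hypot(np.diff(x, append=x[0]), np.diff(y, append=y[0]))  # segment lengths
weights = (d + np.roll(d, 1)) / 2  # half of the two segments touching each point
m = np.array([np.average(x, weights=weights), np.average(y, weights=weights)])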
I recommend using a FIR filter. The idea is to replace each point with a weighted average of the points around it. You just do something like
p(i) = 0.5*p(i) + 0.25*p(i-1) + 0.25*p(i+1)
where p(i) is the i-th point. It's a good idea to remember the original p(i), as it is used for the next iteration (to avoid shifting). You can filter each axis separately. You can use any weights, but their sum should be 1.0. You can also use any number of neighbors, not just 2 (but then you need to remember more points). Symmetric weights reduce shifting. You can apply the FIR filter multiple times...
Here is a small 2D C++ example:
//---------------------------------------------------------------------------
const int n=50; // points
float pnt[n][2]; // points x,y ...
//---------------------------------------------------------------------------
void pnt_init()
{
int i;
float a,da=2.0*M_PI/float(n),r;
Randomize();
for (a=0.0,i=0;i<n;i++,a+=da)
{
r=0.75+(0.2*Random());
pnt[i][0]=r*cos(a);
pnt[i][1]=r*sin(a);
}
}
//---------------------------------------------------------------------------
void pnt_smooth()
{
int i,j;
float p0[2],*p1,*p2,tmp[2],aa0[2],aa1[2],bb0[2],bb1[2];
// bb = original BBOX
for (j=0;j<2;j++) { bb0[j]=pnt[0][j]; bb1[j]=pnt[0][j]; }
for (i=0;i<n;i++)
for (p1=pnt[i],j=0;j<2;j++)
{
if (bb0[j]>p1[j]) bb0[j]=p1[j];
if (bb1[j]<p1[j]) bb1[j]=p1[j];
}
// FIR filter
for (j=0;j<2;j++) p0[j]=pnt[n-1][j]; // remember p[i-1]
p1=pnt[0]; p2=pnt[1]; // pointers to p[i],p[i+1]
for (i=0;i<n;i++,p1=p2,p2=pnt[(i+1)%n])
{
for (j=0;j<2;j++)
{
tmp[j]=p1[j]; // store original p[i]
p1[j]=(0.1*p0[j]) + (0.8*p1[j]) + (0.1*p2[j]); // p[i] = FIR(p0,p1,p2)
p0[j]=tmp[j]; // remember original p1 as p[i-1] for next iteration
}
}
// aa = new BBOX
for (j=0;j<2;j++) { aa0[j]=pnt[0][j]; aa1[j]=pnt[0][j]; }
for (i=0;i<n;i++)
for (p1=pnt[i],j=0;j<2;j++)
{
if (aa0[j]>p1[j]) aa0[j]=p1[j];
if (aa1[j]<p1[j]) aa1[j]=p1[j];
}
// compute scale transform aa -> bb
for (j=0;j<2;j++) tmp[j]=(bb1[j]-bb0[j])/(aa1[j]-aa0[j]); // scale
// convert aa -> bb
for (i=0;i<n;i++)
for (p1=pnt[i],j=0;j<2;j++)
p1[j]=bb0[j]+((p1[j]-aa0[j])*tmp[j]);
}
//---------------------------------------------------------------------------
I also added a BBOX check before and after the smoothing, so the shape does not change size or position. In some cases the centroid works better than the BBOX for the position correction.
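For Python users, the same symmetric FIR pass (without the BBOX correction) is a few lines of NumPy; a sketch, where np.roll handles the wrap-around of the closed curve instead of the p0/tmp bookkeeping:

import numpy as np

def fir_smooth(x, y, w=(0.25, 0.5, 0.25), passes=1):
    # each pass: p[i] = w[0]*p[i-1] + w[1]*p[i] + w[2]*p[i+1], for all i at once
    for _ in range(passes):
        x = w[0] * np.roll(x, 1) + w[1] * x + w[2] * np.roll(x, -1)
        y = w[0] * np.roll(y, 1) + w[1] * y + w[2] * np.roll(y, -1)
    return x, y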
Here is a preview of multiple applications of the FIR filter:
I am loading an image from the Google Static Maps API; the loaded satellite image shows a place a few hundred meters in width and length.
https://maps.googleapis.com/maps/api/staticmap?center=53.4055429,-2.9976502&zoom=16&size=400x400&maptype=satellite&key=YOUR_API_KEY
Additionally, the image's scale bar shows a resolution of 10 meters.
My question is: given the center geolocation (53.4055429, -2.9976502) and the resolution of this static image, how can I calculate the geolocation of the upper-left or lower-right corner of the image, and ultimately of each pixel in it?
What kind of solution is it?
It looks like you don't need a JavaScript solution but a Python one, to use on a server rather than in a browser. I've created a Python example, but it is the math that I am going to focus on: math is all you need to calculate the coordinates. Let me do it with JS as well, to make the snippet work in a browser; you can see that Python and JS give the same results.
If you just need the formulae for degrees per pixel, here they are. They are simple enough that you don't need any external libraries, just Python's math module. The explanation can be found further down.
#!/usr/bin/python
import math
w = 400
h = 400
zoom = 16
lat = 53.4055429
lng = -2.9976502
def getPointLatLng(x, y):
    parallelMultiplier = math.cos(lat * math.pi / 180)
    degreesPerPixelX = 360 / math.pow(2, zoom + 8)
    degreesPerPixelY = 360 / math.pow(2, zoom + 8) * parallelMultiplier
    pointLat = lat - degreesPerPixelY * (y - h / 2)
    pointLng = lng + degreesPerPixelX * (x - w / 2)
    return (pointLat, pointLng)

print('NE:', getPointLatLng(w, 0))
print('SW:', getPointLatLng(0, h))
print('NW:', getPointLatLng(0, 0))
print('SE:', getPointLatLng(w, h))
The output of the script is
$ python getcoords.py
NE: (53.40810128625675, -2.9933586655761717)
SW: (53.40298451374325, -3.001941734423828)
NW: (53.40810128625675, -3.001941734423828)
SE: (53.40298451374325, -2.9933586655761717)
What we have to start with
We have the parameters needed in the URL https://maps.googleapis.com/maps/api/staticmap?center=53.4055429,-2.9976502&zoom=16&size=400x400&maptype=satellite&key=YOUR_API_KEY – coordinates, zoom, and size in pixels.
Let's introduce some initial variables:
var config = {
lat: 53.4055429,
lng: -2.9976502,
zoom: 16,
size: {
x: 400,
y: 400,
}
};
The math of the Earth of 512 pixels
The math is as follows. Zoom 1 stands for a full view of the Earth's equator, 360°, at an image size of 512 (see the docs for size and zoom). See the example at zoom 1. This is a very important point: the scale (degrees per pixel) doesn't depend on the image size. When one changes the image size, one sees the same scale: compare 1 and 2 – the second image is a cropped version of the bigger one. The maximum image size for googleapis is 640.
Every zoom-in doubles the resolution. Therefore the width of your image in terms of longitude is:
lngDegrees = 360 / 2**(zoom - 1); // full image width in degrees, ** for power
Then use a linear function to find the coordinates of any point of the image. It should be mentioned that linearity works well only for highly zoomed images; you can't use it for low zooms like 5 or less, which have slightly more complex math.
lngDegreesPerPixel = lngDegrees / 512 = 360 / 2**(zoom - 1) / 2**9 = 360 / 2**(zoom + 8);
lngX = config.lng + lngDegreesPerPixel * ( point.x - config.size.x / 2);
Latitude degrees are different
A latitude degree and a longitude degree at the equator have the same size, but as we go north or south, longitude degrees become smaller, since the rings of parallels on the Earth have smaller radii (r = R * cos(lat) < R), and therefore the image height in degrees becomes smaller (see P.S.).
latDegrees = 360 / 2**(zoom - 1) * cos(lat); // full image height in degrees, ** for power
And respectively
latDegreesPerPixel = latDegrees / 512 = 360 / 2**(zoom - 1) * cos(lat) / 2**9 = 360 / 2**(zoom + 8) * cos(lat);
latY = config.lat - latDegreesPerPixel * ( point.y - config.size.y / 2)
The sign after config.lat differs from the sign used for lngX because the Earth's longitude direction coincides with the image's x direction, while the latitude direction is opposite to the image's y direction.
So we can now write a simple function that finds a pixel's coordinates from its x and y position in the picture.
var config = {
lat: 53.4055429,
lng: -2.9976502,
zoom: 16,
size: {
x: 400,
y: 400,
}
};
function getCoordinates(x, y) {
var degreesPerPixelX = 360 / Math.pow(2, config.zoom + 8);
var degreesPerPixelY = 360 / Math.pow(2, config.zoom + 8) * Math.cos(config.lat * Math.PI / 180);
return {
lat: config.lat - degreesPerPixelY * ( y - config.size.y / 2),
lng: config.lng + degreesPerPixelX * ( x - config.size.x / 2),
};
}
console.log('SW', getCoordinates(0, config.size.y));
console.log('NE', getCoordinates(config.size.x, 0));
console.log('SE', getCoordinates(config.size.x, config.size.y));
console.log('NW', getCoordinates(0, 0));
console.log('Something at 300,128', getCoordinates(300, 128));
P.S. You may ask why I apply the cos(lat) multiplier to the latitude instead of using it as a divisor in the longitude formula. I found that Google chooses to keep the longitude scale per pixel constant across latitudes, so the cos goes to the latitude as a multiplier.
I believe you can calculate the bounding box using the Maps JavaScript API.
You have the center position and know that the distance from the center to the NorthEast and SouthWest corners is 200 pixels, since the size in your example is 400x400.
Have a look at the following code, which calculates the NE and SW points:
var map;
function initMap() {
var latLng = new google.maps.LatLng(53.4055429,-2.9976502);
map = new google.maps.Map(document.getElementById('map'), {
center: latLng,
zoom: 16,
mapTypeId: google.maps.MapTypeId.SATELLITE
});
var marker = new google.maps.Marker({
position: latLng,
map: map
});
google.maps.event.addListener(map, "idle", function() {
//Vertical and horizontal distance from center in pixels
var h = 200;
var w = 200;
var centerPixel = map.getProjection().fromLatLngToPoint(latLng);
var pixelSize = Math.pow(2, -map.getZoom());
var nePoint = new google.maps.Point(centerPixel.x + w*pixelSize, centerPixel.y - h*pixelSize);
var swPoint = new google.maps.Point(centerPixel.x - w*pixelSize, centerPixel.y + h*pixelSize);
var ne = map.getProjection().fromPointToLatLng(nePoint);
var sw = map.getProjection().fromPointToLatLng(swPoint);
var neMarker = new google.maps.Marker({
position: ne,
map: map,
title: "NE: " + ne.toString()
});
var swMarker = new google.maps.Marker({
position: sw,
map: map,
title: "SW: " + sw.toString()
});
var polygon = new google.maps.Polygon({
paths: [ne, new google.maps.LatLng(ne.lat(),sw.lng()), sw, new google.maps.LatLng(sw.lat(),ne.lng())],
map: map,
strokeColor: "green"
});
console.log("NE: " + ne.toString());
console.log("SW: " + sw.toString());
});
}
#map {
height: 100%;
}
/* Optional: Makes the sample page fill the window. */
html, body {
height: 100%;
margin: 0;
padding: 0;
}
<div id="map"></div>
<script src="https://maps.googleapis.com/maps/api/js?key=YOUR_API_KEY&libraries=geometry&callback=initMap"
async defer></script>
I hope this helps!
UPDATE
In order to solve this in Python, you should understand the Map and Tile Coordinates principles used by the Google Maps JavaScript API and implement similar projection logic in Python.
Fortunately, somebody has already done this, and you can find a Python project that implements methods similar to map.getProjection().fromLatLngToPoint() and map.getProjection().fromPointToLatLng() from my example. Have a look at this project on GitHub:
https://github.com/hrldcpr/mercator.py
So you can download mercator.py and use it in your project. My JavaScript API example translates into the following Python code:
#!/usr/bin/python
from mercator import *
w = 200
h = 200
zoom = 16
lat = 53.4055429
lng = -2.9976502
centerPixel = get_lat_lng_tile(lat, lng, zoom)
pixelSize = pow(2, -zoom)
nePoint = (centerPixel[0] + w*pixelSize, centerPixel[1] - h*pixelSize)
swPoint = (centerPixel[0] - w*pixelSize, centerPixel[1] + h*pixelSize)
ne = get_tile_lat_lng(zoom, nePoint[0], nePoint[1])
sw = get_tile_lat_lng(zoom, swPoint[0], swPoint[1])
print('NorthEast:', ne)
print('SouthWest:', sw)
What is an S2Region, and how should I use it to get all S2 cells at a certain parent level (say 9) covered by a circle drawn from a given lat, lng and radius? Below is an example that uses the Python S2 library to get all cells under a rectangle.
region_rect = S2LatLngRect(
S2LatLng.FromDegrees(-51.264871, -30.241701),
S2LatLng.FromDegrees(-51.04618, -30.000003))
coverer = S2RegionCoverer()
coverer.set_min_level(8)
coverer.set_max_level(15)
coverer.set_max_cells(500)
covering = coverer.GetCovering(region_rect)
Source of the example: http://blog.christianperone.com/2015/08/googles-s2-geometry-on-the-sphere-cells-and-hilbert-curve/
I am looking for something like
region_circle = S2latLangCircle(lat,lang,radius)
I found an answer to this question for the Google S2 library implemented in C++ (Using google s2 library - find all s2 cells of a certain level within the circle, given lat/lng and radius in miles/km), but I need this in Python.
Thanks
With the help of that link, I worked out a Python solution, using the python s2sphere library.
import math
from s2sphere import Cap, Cell, LatLng, RegionCoverer

earthCircumferenceMeters = 1000 * 40075.017
MAX_S2_CELLS = 100  # assumed value; the original code took this from a const module

def earthMetersToRadians(meters):
    return (2 * math.pi) * (float(meters) / earthCircumferenceMeters)

def getCoveringRect(lat, lng, radius, parent_level):
    radius_radians = earthMetersToRadians(radius)
    latlng = LatLng.from_degrees(float(lat), float(lng)).normalized().to_point()
    region = Cap.from_axis_height(latlng, (radius_radians * radius_radians) / 2)
    coverer = RegionCoverer()
    coverer.min_level = int(parent_level)
    coverer.max_level = int(parent_level)
    coverer.max_cells = MAX_S2_CELLS
    covering = coverer.get_covering(region)
    s2_rect = []
    for cell_id in covering:
        new_cell = Cell(cell_id)
        vertices = []
        for i in range(4):
            vertex = new_cell.get_vertex(i)
            latlng = LatLng.from_point(vertex)
            vertices.append((math.degrees(latlng.lat().radians),
                             math.degrees(latlng.lng().radians)))
        s2_rect.append(vertices)
    return s2_rect
The getCoveringRect method returns all S2 cells (as rectangle boundaries) at the given parent level that cover the circle drawn around the given lat, lng center with the given radius.
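A hypothetical call, using a point inside the rectangle from the question above and a radius in meters:

# level-9 cells covering a 2 km circle; each entry is four (lat, lng) vertices
for quad in getCoveringRect(-51.15, -30.12, 2000, 9):
    print(quad)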
Here is a Go example of how to get the covering cells:
import (
    "math"

    "github.com/golang/geo/s2"
)
const (
earthRadiusInMeter = 1000 * 6371.393 // earth radius ≈ 6371.393 km
)
// S = 4πR²; s2 treats the sphere's surface area as 4π, i.e. R = 1
func getS2EarthSurfaceArea(radius float64) float64 {
area := math.Pi * radius * radius / (earthRadiusInMeter * earthRadiusInMeter)
return area
}
func GetCellIDs(lng, lat, radius float64) []string {
point := s2.PointFromLatLng(s2.LatLngFromDegrees(lat, lng))
area := getS2EarthSurfaceArea(radius)
_cap := s2.CapFromCenterArea(point, area)
_cover := s2.RegionCoverer{
MinLevel: 13,
MaxLevel: 13,
LevelMod: 1,
MaxCells: 16,
}
cellUnion := _cover.Covering(_cap)
stringCellIDs := make([]string, 0, len(cellUnion))
for _, c := range cellUnion {
stringCellIDs = append(stringCellIDs, c.ToToken())
}
return stringCellIDs
}
I'm not sure the formula used by Guarav is right.
First, the function earthMetersToRadians does not return radians; it just computes (2*pi*r) / (2*pi*R) = r/R, where R denotes the earth's radius.
From that, it computes height = (r/R)^2 / 2, and I'm not sure where this formula comes from.
From the formulae for a spherical cap, we have height = 1 - cos(theta), where theta = arcsin(r/R) in our case.
Together this gives height = 1 - cos(arcsin(r/R)), which can be computed as height = 1 - sqrt(1 - (r/R)^2).
Note, however, that both formulas are very close, so in practical cases they are pretty much the same, especially if you run an S2RegionCoverer on your cap afterwards.
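In fact, (r/R)^2/2 is exactly the first Taylor term of 1 - sqrt(1 - (r/R)^2), which explains why the two agree so well for radii small compared to R. A quick numeric check (a sketch):

import math

R = 6371000.0  # earth radius in meters
for r in (100, 1000, 10000, 100000):
    approx = (r / R) ** 2 / 2                # the formula questioned above
    exact = 1 - math.sqrt(1 - (r / R) ** 2)  # 1 - cos(arcsin(r/R))
    print(f"r={r:>6} m  approx={approx:.6e}  exact={exact:.6e}")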