How to determine deviation value in latitude and longitude values? - python

I have a script that takes latitude and longitude values and stores them in a database (essentially a cache). If the same location comes in again, I want to pull it from the database without having to calculate it again. However, I want to match the location (latitude-longitude) information within a certain deviation, since there will be very small differences depending on the environment when the location is acquired. For example:
First Location Value
latitude: 51.21318
longitude : 16.82032
Another Location Value
latitude: 51.21319
longitude : 16.82033
In the example above, if the check is done with exact one-to-one matching, these will be treated as two different locations and a new row will be added to the database. Instead, I want to treat them as the same location by allowing a certain deviation. I plan to set max and min values and check whether a coordinate falls between them. For example:
deviation = 0.005
max_lat = latitude + (latitude * deviation)
min_lat = latitude - (latitude * deviation)
I plan to treat anything in the database that falls within this range as the same location. What I want to ask here: is there any formula or accepted value in the literature for choosing this deviation value? What do you think is the best way?
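One thing to keep in mind is that a relative deviation on the degree values (latitude * deviation) ties the tolerance to the magnitude of the coordinate rather than to a physical distance; with deviation = 0.005 at latitude 51 that is roughly 0.26°, about 28 km. A common alternative is to compare points by great-circle distance and treat anything within a fixed number of meters as the same cached location. A minimal sketch, assuming a 100 m threshold chosen purely for illustration (one degree of latitude is about 111 km, so 0.001° is roughly 111 m):
import math

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points in meters (haversine formula)
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

TOLERANCE_M = 100  # assumed threshold; tune it to your GPS accuracy

# The two example points from the question are only a meter or two apart
same = haversine_m(51.21318, 16.82032, 51.21319, 16.82033) <= TOLERANCE_M
print(same)  # True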

Related

Mean center Pandas

I have a movie dataset. The dataset contains ratings from users. For every user I need the mean of all the ratings from that user. I then need to subtract this mean from every single rating to get the mean-centered rating.
mean_user_rating = df_ratings.groupby(['userId'])['rating'].mean().to_frame()
# userId is the index here; the single column is already named 'rating'
mean_user_rating = mean_user_rating.rename(columns={"rating": "mean_rating"})
film_user = df_ratings.sort_values('rating', ascending=False)
(image: example data)
But now I need to subtract the mean rating from each original rating, and this needs to be done per userId.
The question and additional info:
Users have a bias; they do not all rate in the same way, so an absolute rating is less interesting. To make a better system we can look at the average rating of a user and how a given rating deviates from this average. This way you know whether a user finds a film relatively better or worse. In other words, we mean-center the rating: we take the user's mean as the midpoint and rewrite the rating as the deviation from this midpoint.
Once you know the deviation from the center point, you can use this to compare films. For example, you can take the average of the deviation of each rating of a film and use this as a new rating for a film.
Question 25: A better top 5
It is now up to you to draw up a new top 5. This time not based on the average rating, but based on the average mean-centered rating. Again, use df_ratings_filtered as the dataset here. Store the top 5 as a Series in a variable called top_5_mean_centered with index movieId.
Note: this task is a bit bigger. Think carefully in advance about which steps you have to go through.
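A minimal sketch of one way to do the mean-centering with groupby/transform; df_ratings, df_ratings_filtered, and the column names come from the question, while the toy data below is only a stand-in so the snippet runs on its own:
import pandas as pd

# Toy stand-in for df_ratings / df_ratings_filtered from the question
df_ratings = pd.DataFrame({
    'userId':  [1, 1, 2, 2, 2],
    'movieId': [10, 20, 10, 20, 30],
    'rating':  [5.0, 3.0, 2.0, 4.0, 3.0],
})
df_ratings_filtered = df_ratings

# Mean rating per user, broadcast back onto every row via transform
user_mean = df_ratings.groupby('userId')['rating'].transform('mean')

# Mean-centered rating: how far each rating deviates from that user's average
df_ratings['rating_centered'] = df_ratings['rating'] - user_mean

# A new top 5 by average mean-centered rating, indexed by movieId
top_5_mean_centered = (
    df_ratings_filtered
    .assign(rating_centered=lambda d: d['rating']
            - d.groupby('userId')['rating'].transform('mean'))
    .groupby('movieId')['rating_centered']
    .mean()
    .sort_values(ascending=False)
    .head(5)
)
print(top_5_mean_centered)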

Most effective pathway to compare geographic locations? solutions pls

Say I have a location A, with an IP address, and a geolocator API that gives latitude and longitude. Now I want to find all instances that are within a 25-mile radius of location A. How can I compute this with the fewest steps?
Solution A: I can compute all distances between location A and all instances in the database, and display the instances within a 25-mile radius. (way too slow, especially with a dynamic location and a large database of locations)
Solution B: I can group all instances by zip code in addition to IP and (lat, long), so that fewer distances between location A and the instances need to be computed. (better, but if the IP address is near the border of another zip code, this adds to the amount of computation needed)
Solution C: I can use trigonometry: using the latitude and longitude of location A, I can find each instance within the 25-mile radius.
Can someone please describe a better way of comparing distances? Ideas and suggestions are much appreciated (if further explanation is needed, please ask). Thanks.
I'd use a combination of your proposed solutions A and C. You can query your database directly using a filter that only selects the locations within a 25-mile radius (or any other radius). Calculating the longitudinal difference in miles is a little tricky because the length of one degree of longitude varies with latitude. Kudos go out to this explanation: https://gis.stackexchange.com/questions/142326/calculating-longitude-length-in-miles#142327
Assuming you have the following DB schema with existing locations (only latitude and longitude as columns):
CREATE TABLE location (
    lat REAL,
    lon REAL
);
You're able to filter only the locations within a 25-mile radius using this query:
query = """
SELECT (lat - ?) AS difflat,
       (lon - ?) AS difflon
FROM location
WHERE POWER(POWER(difflat * 69.172, 2)
          + POWER(difflon * COS(lat * 3.14 / 180.0) * 69.172, 2), 0.5) < ?;
"""
You then use the query like this:
radius = 25  # miles
cursor.execute(query, (querylocation['lat'], querylocation['lon'], radius))
SQLite3 unfortunately doesn't support basic mathematical functions like COS and POWER but they can easily be created:
import math
con = sqlite3.connect(db_path)
con.create_function('POWER', 2, math.pow)
con.create_function('COS', 1, math.cos)
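Putting the pieces together, a minimal end-to-end sketch; the database file, the query point, and the variable names are assumptions for illustration, and the distance filter is inlined so it does not depend on SQLite resolving the difflat/difflon aliases inside WHERE:
import math
import sqlite3

con = sqlite3.connect("locations.db")  # assumed database file with the location table
# SQLite has no built-in POWER/COS, so register Python's math functions first
con.create_function('POWER', 2, math.pow)
con.create_function('COS', 1, math.cos)

query = """
SELECT lat, lon
FROM location
WHERE POWER(POWER((lat - ?) * 69.172, 2)
          + POWER((lon - ?) * COS(lat * 3.14 / 180.0) * 69.172, 2), 0.5) < ?;
"""

querylocation = {'lat': 51.21318, 'lon': 16.82032}  # assumed query point
radius = 25  # miles

cursor = con.cursor()
cursor.execute(query, (querylocation['lat'], querylocation['lon'], radius))
nearby = cursor.fetchall()  # list of (lat, lon) tuples within the radius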

Why am I getting a different h3 index with the same latitude and longitude?

I am working with an SQLite database of hexagon indexes and other information; the hexagon index is the primary key. The database is generated by code written in Python, and other code written in C uses the hexagon index to access the information stored in the database.
for res_hex in [12, 11, 10, 9, 8]:
    index_hex = h3.geo_to_h3(sonde[1], sonde[0], res_hex)
sonde[1] is the latitude, sonde[0] is the longitude, and res_hex is the resolution.
In fact, I have a list of objects represented by their latitude and longitude in a text file; I calculate the indexes around them at different resolutions (8 to 12) and insert them into the database.
But my problem is that when I calculate the hexagon in the C code from the lat, lon, and resolution, I do not find it in the database, even though the calculation is based on the same file.
GeoCoord geo = {latitude, longitude};
H3Index currentIndex = geoToH3(&geo, resolution);
Thanks for your help
I have found a solution: in C, lat/lon must be passed in radians, which is not the case in Python.
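To illustrate the mismatch from the Python side, a small sketch assuming the h3-py v3-style API used in the question (the coordinates are just an example): passing radians where degrees are expected lands in a completely different cell. On the C side, the v3 library provides degsToRads for converting before calling geoToH3.
import math
import h3  # h3-py (v3 API, matching h3.geo_to_h3 from the question)

lat_deg, lon_deg = 48.8566, 2.3522  # example point, in degrees
res = 10

idx_from_degrees = h3.geo_to_h3(lat_deg, lon_deg, res)
# Feeding radians into an API that expects degrees gives a different cell
idx_from_radians = h3.geo_to_h3(math.radians(lat_deg), math.radians(lon_deg), res)

print(idx_from_degrees == idx_from_radians)  # False: the indexes do not match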

How do I get a list of all the latitudes and longitudes from a database within a specific area?

I have a table in a Postgres DB containing places and their corresponding latitude and longitude values.
Places (id, name, lat, lng) # primary-key(id)
I will get an input of a pair of latitudes and longitudes forming a rectangle.
{
"long_ne": 12.34,
"lat_ne": 34.45,
"long_sw": 15.34,
"lat_sw": 35.56
}
I want to get all the rows that fall inside the rectangle.
The rows can't be sorted based on their lat-lng values as that will cause trouble while inserting new values.
What would be the best way to go about solving this to optimize queries to get the result?
I can obviously do it using the WHERE clause, but would it be the most optimized solution? There will be a massive number of rows in the table; is there a way this query can be optimized to speed up the result?
Is this what you want?
select *
from places
where
    lat between :lat_sw and :lat_ne
    and lng between :long_sw and :long_ne
Where :lat_sw, :lat_ne, :long_sw and :long_ne are the input parameters to the query.
This gives you all rows whose latitude and longitude fall within the rectangle's boundaries. Note that between treats the interval as inclusive of its bounds on both ends. You can change this as needed with inequality conditions; for example, this makes the match inclusive on the lower bound and exclusive on the upper bound:
where
    lat >= :lat_sw and lat < :lat_ne
    and lng >= :long_sw and lng < :long_ne
Won't a simple SQL query suffice here?
SELECT *
FROM Places
WHERE (lat > lat_sw)
AND (long > long_sw)
AND (lat < lat_ne)
AND (long < long_ne)
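As a concrete sketch of running such a bounding-box query from Python (psycopg2, the connection string, and the index suggestion are assumptions, not part of the answers above); on a large table, a B-tree index with lat as the leading column lets Postgres narrow the scan by latitude, and PostGIS with a GiST index is the usual next step for real spatial queries:
import psycopg2

conn = psycopg2.connect("dbname=mydb user=myuser")  # assumed connection string
cur = conn.cursor()

# A composite B-tree index on (lat, lng) lets Postgres narrow the scan by latitude
cur.execute("CREATE INDEX IF NOT EXISTS idx_places_lat_lng ON Places (lat, lng);")
conn.commit()

bounds = {
    "lat_sw": 34.45, "lat_ne": 35.56,    # assumed: the SW corner holds the smaller values
    "long_sw": 12.34, "long_ne": 15.34,
}

cur.execute(
    """
    SELECT id, name, lat, lng
    FROM Places
    WHERE lat BETWEEN %(lat_sw)s AND %(lat_ne)s
      AND lng BETWEEN %(long_sw)s AND %(long_ne)s;
    """,
    bounds,
)
rows = cur.fetchall()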

How to delete CSV table values outside of a longitude/latitude radius?

I have a csv file with a table that has the columns Longitude, Latitude, and Wind Speed. I have a code that takes a csv file and deletes values outside of a specified bound. I would like to retain values whose longitude/latitude is within a 0.5 lon/lat radius of a point located at -71.5 longitude and 40.5 latitude.
My example code below deletes any values whose longitude and latitude aren't between -72 and -71 and between 40 and 41, respectively. Of course, this retains values within a square bound of ±0.5 lon/lat around my point of interest. But I am interested in finding values within a circular bound with radius 0.5 lon/lat of my point of interest. How should I modify my code?
import pandas as pd
import numpy
df = pd.read_csv(r"C:\\Users\\xil15102\\Documents\\results\\EasternLongIsland50.csv") #file path
indexNames=df[(df['Longitude'] <= -72)|(df['Longitude']>=-71)|(df['Latitude']<=40)|(df['Latitude']>=41)].index
df.drop(indexNames,inplace=True)
df.to_csv(r"C:\\Users\\xil15102\\Documents\\results\\EasternLongIsland50.csv")
Basically you need to check whether a value is within a certain distance of the central point (-71.5, 40.5); to do this, use the Pythagorean theorem/distance formula:
d = sqrt(dx^2 + dy^2)
So programmatically, I would do it like this:
from math import sqrt

drop_indices = []
for row in range(len(df)):
    # Distance (in degrees) from the point of interest at (-71.5, 40.5)
    dlon = -71.5 - df.iloc[row]['Longitude']
    dlat = 40.5 - df.iloc[row]['Latitude']
    if sqrt(dlon * dlon + dlat * dlat) > 0.5:
        drop_indices.append(row)
df = df.drop(df.index[drop_indices])
Sorry, that is a sort of disgusting way to get rid of the rows and your way looks much better, but the code should work.
You should write a function to calculate the distance from your point of interest and drop the rows that are too far away. Some help here. Pretty sure the example below should work if you implement is_not_in_area as a function that calculates the distance and checks whether it is greater than 0.5.
df = df.drop(df[is_not_in_area(df.lat, df.lon)].index)
(This code lifted from here)
Edit: drop the ones that aren't in area, not the ones that are haha.
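A minimal vectorized sketch of such a helper; the Latitude/Longitude column names, center point, and radius come from the question, while the file name is only an assumed local copy:
import numpy as np
import pandas as pd

def is_not_in_area(lat, lon, center_lat=40.5, center_lon=-71.5, radius=0.5):
    # Boolean mask: True for points farther than `radius` degrees
    # (plain Euclidean distance in lat/lon space) from the center point
    return np.sqrt((lat - center_lat) ** 2 + (lon - center_lon) ** 2) > radius

df = pd.read_csv("EasternLongIsland50.csv")  # assumed local copy of the file
df = df.drop(df[is_not_in_area(df['Latitude'], df['Longitude'])].index)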
