I have a shapefile that has all the counties for the US, and I am doing a bunch of queries at a lat/lon point and then finding what county the point lies in. Right now I am just looping through all the counties and doing pnt.within(county). This isn't very efficient. Is there a better way to do this?
Your situation is a typical case where a spatial join is useful. The idea of a spatial join is to merge data based on geographic relationships instead of attribute values.
Three possibilities in geopandas:
intersects
within
contains
It seems like you want within, which is possible using the following syntax:
geopandas.sjoin(points, polygons, how="inner", op='within')
Note: you need to have rtree installed to perform such operations; if you are missing this dependency, install it with pip or conda. Also note that in newer geopandas versions the op keyword has been renamed to predicate.
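Applied to your counties question, a minimal sketch might look like the following (the file path, point coordinates, and variable names are illustrative assumptions, not from your code):
import geopandas
from shapely.geometry import Point

# Load the county polygons from the shapefile (hypothetical path)
counties = geopandas.read_file("us_counties.shp")

# Build a GeoDataFrame from the query points (Point takes lon, lat),
# then reproject it to match the counties layer
pts = [Point(-73.97, 40.78), Point(-118.24, 34.05)]
points = geopandas.GeoDataFrame(geometry=pts, crs="EPSG:4326").to_crs(counties.crs)

# One spatial join replaces the loop over counties
points_with_county = geopandas.sjoin(points, counties, how="inner", op="within")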
Example
As an example, let's plot European cities. The two example datasets are
import geopandas
import matplotlib.pyplot as plt
world = geopandas.read_file(geopandas.datasets.get_path('naturalearth_lowres'))
cities = geopandas.read_file(geopandas.datasets.get_path('naturalearth_cities'))
countries = world[world['continent'] == "Europe"].rename(columns={'name':'country'})
countries.head(2)
pop_est continent country iso_a3 gdp_md_est geometry
18 142257519 Europe Russia RUS 3745000.0 MULTIPOLYGON (((178.725 71.099, 180.000 71.516...
21 5320045 Europe Norway -99 364700.0 MULTIPOLYGON (((15.143 79.674, 15.523 80.016, ...
cities.head(2)
name geometry
0 Vatican City POINT (12.45339 41.90328)
1 San Marino POINT (12.44177 43.93610)
cities is a worldwide dataset and countries is a Europe-wide dataset.
Both datasets need to be in the same coordinate reference system. If not, use .to_crs before merging.
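For instance, a minimal sketch of reprojecting one frame to match the other, using the variable names from this example:
# Reproject cities to the CRS of countries before the spatial join
cities = cities.to_crs(countries.crs)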
data_merged = geopandas.sjoin(cities, countries, how="inner", op='within')
Finally, to see the result, let's make a map:
f, ax = plt.subplots(1, figsize=(20, 10))
data_merged.plot(ax=ax)
countries.plot(ax=ax, alpha=0.25, linewidth=0.1)
plt.show()
and the merged dataset brings together the information we need:
data_merged.head(5)
name geometry index_right pop_est continent country iso_a3 gdp_md_est
0 Vatican City POINT (12.45339 41.90328) 141 62137802 Europe Italy ITA 2221000.0
1 San Marino POINT (12.44177 43.93610) 141 62137802 Europe Italy ITA 2221000.0
192 Rome POINT (12.48131 41.89790) 141 62137802 Europe Italy ITA 2221000.0
2 Vaduz POINT (9.51667 47.13372) 114 8754413 Europe Austria AUT 416600.0
184 Vienna POINT (16.36469 48.20196) 114 8754413 Europe Austria AUT 416600.0
Here I used the inner join method, but that is a parameter you can change if, for instance, you want to keep all points, including those not within a polygon (see the sketch below).
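A minimal sketch of that variant, assuming the same cities and countries frames as above:
# A left join keeps every point; points outside all polygons get NaN attributes
all_points = geopandas.sjoin(cities, countries, how="left", op="within")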
I have a DataFrame generated from a CSV database with a list of districts in Buenos Aires Province (Argentina). The CSV has columns such as the population and surface area of each of these districts. It also contains two columns with categorical variables. The first one, called "REGION", indicates whether the district is located in the north or the south of the province. The second one, called "PERTENENCIA" (belonging), indicates whether the district belongs to the metropolitan area of Buenos Aires City (Greater Buenos Aires, GBA) or lies in the interior of the province (outside GBA), so it can take the values "GBA" or "INTERIOR", respectively. Since the metropolitan area of Buenos Aires is located in the north of the province, every district that belongs to the GBA is also categorized as north (we have no districts categorized as both south and GBA).
My table looks like this ("Municipio" is the district, "Poblacion" is population, and "Superficie" is surface area):
MUNICIPIO REGION PERTENENCIA POBLACION SUPERFICIE
0 ALSINA SUR INTERIOR ... ...
1 ADOLFO GONZ. SUR INTERIOR ... ...
2 ALBERTI NORTE INTERIOR ... ...
3 ALT. BROWN SUR GBA ... ...
4 ARRECIFES NORTE INTERIOR ... ...
5 AVELLANEDA NORTE GBA ... ...
...
140 ZARATE NORTE INTERIOR ... ...
The issue is this: I need to study the frequency of those districts jointly, by region and by belonging. I'm making a stacked bar chart for that purpose, and also a nested pie chart.
For that, I'd like to generate a cross-table with the total number of districts in those categories, something like this:
GBA INTERIOR TOTAL
NORTE 33 41 74
SUR 0 67 67
TOTAL 33 108 141
Right now I have something like this to calculate these values manually:
cant_mun_gba = municipios['PERTENENCIA'].value_counts()['GBA']
cant_mun_interior = municipios['PERTENENCIA'].value_counts()['INTERIOR']
cant_mun_norte = municipios['REGION'].value_counts()['NORTE']
cant_mun_sur = municipios['REGION'].value_counts()['SUR']
cant_mun_norte_interior = cant_mun_norte - cant_mun_gba
cant_mun_norte_gba = cant_mun_gba
cant_mun_sur_interior = cant_mun_sur
cant_mun_sur_gba = 0
Although this works, it's pretty ugly, and I'd also like to have the cross-table itself, just for displaying it.
Is there a way to achieve this?
Thanks a lot!
Try pd.crosstab:
pd.crosstab(municipios['REGION'], municipios['PERTENENCIA'])
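If you also want the TOTAL row and column from your example, crosstab can add margins; a small sketch, assuming your municipios DataFrame:
import pandas as pd

# margins=True appends row and column totals, labelled with margins_name
tabla = pd.crosstab(municipios['REGION'], municipios['PERTENENCIA'],
                    margins=True, margins_name='TOTAL')
print(tabla)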
I want to visualize the number of crimes by state using plotly express.
This is the code:
import plotly.express as px

fig = px.choropleth(grouped, locations="Code",
                    color="Incident",
                    hover_name="Code",
                    animation_frame='Year',
                    scope='usa')
fig.show()
The dataframe itself looks like this:
I only get a blank map:
What is wrong with the code?
The reason for the lack of color coding is that the location mode for US states is not specified. Below is the same figure with locationmode='USA-states' added; you can find an example in the plotly references. Sample data was created to match your data.
df.head()
Year Code State incident
0 1980 AL Alabama 1445
1 1980 AK Alaska 970
2 1980 AZ Arizona 3092
3 1980 AR Arkansas 1557
4 1980 CA California 1614
import plotly.express as px

fig = px.choropleth(grouped,
                    locations='Code',
                    locationmode='USA-states',
                    color='incident',
                    hover_name="Code",
                    animation_frame='Year',
                    scope="usa")
fig.show()
I tried graphing a data frame using pandas, and when I run it a blank image shows up with no errors or anything. I was hoping someone knows what the problem could be and how I can solve it.
I was wondering if this is a backend issue or something else. Thank you!
For faster answers, we need the code in text form and sample data to reproduce the issue. I have tried to apply the sample from the official reference to your code. Since I don't have your code or data, the reason the graph doesn't show up is a guess, but I think the country names are not being retrieved from the dictionary. I extracted the top 10 countries by population from the sample data and drew a graph from the rows of the original data frame matching those country names. The loop is driven by a dictionary of country names and arbitrary colors.
import plotly.express as px
from plotly.subplots import make_subplots
df1 = px.data.gapminder().query('year==2007').sort_values('pop', ascending=False).head(10)
df1
      country        continent  year  lifeExp         pop  gdpPercap iso_alpha  iso_num
299   China          Asia       2007   72.961  1318683096    4959.11       CHN      156
707   India          Asia       2007   64.698  1110396331    2452.21       IND      356
1619  United States  Americas   2007   78.242   301139947   42951.70       USA      840
719   Indonesia      Asia       2007   70.650   223547000    3540.65       IDN      360
179   Brazil         Americas   2007   72.390   190010647    9065.80       BRA       76
1175  Pakistan       Asia       2007   65.483   169270617    2605.95       PAK      586
107   Bangladesh     Asia       2007   64.062   150448339    1391.25       BGD       50
1139  Nigeria        Africa     2007   46.859   135031164    2013.98       NGA      566
803   Japan          Asia       2007   82.603   127467972   31656.10       JPN      392
995   Mexico         Americas   2007   76.195   108700891   11977.60       MEX      484
# create a dict mapping country -> color
colors = px.colors.sequential.Plasma
color = {k: v for k, v in zip(df1.country, colors)}
{'China': '#0d0887',
'India': '#46039f',
'United States': '#7201a8',
'Indonesia': '#9c179e',
'Brazil': '#bd3786',
'Pakistan': '#d8576b',
'Bangladesh': '#ed7953',
'Nigeria': '#fb9f3a',
'Japan': '#fdca26',
'Mexico': '#f0f921'}
# top 10 data (note the @ to reference the local df1 inside query)
df1_top10 = px.data.gapminder().query('country in @df1.country')
import plotly.graph_objects as go

fig = go.Figure()
colors = px.colors.sequential.Plasma

for k, v in color.items():
    fig.add_trace(go.Scatter(
        x=df1_top10[df1_top10['country'] == k]['year'],
        y=df1_top10[df1_top10['country'] == k]['lifeExp'],
        name=k,
        mode='markers+text+lines',
        marker_color='black',
        marker_size=3,
        line=dict(color=color[k]),
        yaxis='y1'))

fig.update_layout(
    title="Top 10 countries: life expectancy trend",
    xaxis_title="Year",
    yaxis_title="Life Expectancy",
    template='ggplot2',
    font=dict(size=16,
              color="Black",
              family="Garamond"),
    xaxis=dict(showgrid=True),
    yaxis=dict(showgrid=True)
)
fig.show()
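As a side note, a more compact sketch that produces a similar chart with plotly express directly, assuming the same df1_top10 frame (colors are then assigned automatically rather than taken from the dictionary):
# One call instead of the manual loop; one line per country, colored by country
fig = px.line(df1_top10, x='year', y='lifeExp', color='country')
fig.show()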
I'm having trouble avoiding negative values in interpolation. I have the following data in a DataFrame:
current_country =
idx Country Region Rank Score GDP capita Family Life Expect. Freedom Trust Gov. Generosity Residual Year
289 South Sudan Sub-Saharan Africa 143 3.83200 0.393940 0.185190 0.157810 0.196620 0.130150 0.258990 2.509300 2016
449 South Sudan Sub-Saharan Africa 147 3.59100 0.397249 0.601323 0.163486 0.147062 0.116794 0.285671 1.879416 2017
610 South Sudan Sub-Saharan Africa 154 3.25400 0.337000 0.608000 0.177000 0.112000 0.106000 0.224000 1.690000 2018
765 South Sudan Sub-Saharan Africa 156 2.85300 0.306000 0.575000 0.295000 0.010000 0.091000 0.202000 1.374000 2019
And I want to fill in the values for the additional year shown below using pandas' df.interpolate():
new_row =
idx Country Region Rank Score GDP capita Family Life Expect. Freedom Trust Gov. Generosity Residual Year
593 South Sudan Sub-Saharan Africa 0 np.nan np.nan np.nan np.nan np.nan np.nan np.nan np.nan 2015
I create a row containing null values in all the columns to be interpolated (as above) and append it to the original dataframe, then interpolate to fill in the NaN cells.
interpol_subset = current_country.append(new_row)
interpol_subset = interpol_subset.interpolate(method = "pchip", order = 2)
This produces the following df
idx Country Region Rank Score GDP capita Family Life Expect. Freedom Trust Gov. Generosity Residual Year
289 South Sudan Sub-Saharan Africa 143 3.83200 0.393940 0.185190 0.157810 0.196620 0.130150 0.258990 2.509300 2016
449 South Sudan Sub-Saharan Africa 147 3.59100 0.397249 0.601323 0.163486 0.147062 0.116794 0.285671 1.879416 2017
610 South Sudan Sub-Saharan Africa 154 3.25400 0.337000 0.608000 0.177000 0.112000 0.106000 0.224000 1.690000 2018
765 South Sudan Sub-Saharan Africa 156 2.85300 0.306000 0.575000 0.295000 0.010000 0.091000 0.202000 1.374000 2019
4 South Sudan Sub-Saharan Africa 0 2.39355 0.313624 0.528646 0.434473 -0.126247 0.072480 0.238480 0.963119 2015
The issue: in the last row, the value in "Freedom" is negative. Is there a way to parameterize the df.interpolate function so that it doesn't produce negative values? I can't find anything in the documentation. I'm fine with the estimates apart from that negative value (although they're a bit skewed).
I considered simply flipping the negative to a positive, but the "Score" value is the sum of all the other continuous features and I would like to keep it that way. What can I do here?
Here's a link to the actual code snippet. Thanks for reading.
I doubt this is an issue with the interpolation itself. The main reason is the method you are using: 'pchip' will return a negative value for 'Freedom' anyway. If we take the values from your dataframe:
import numpy as np
import scipy.interpolate

# The four 'Freedom' values and their positions in the series
y = np.array([0.196620, 0.147062, 0.112000, 0.010000])
x = np.array([0, 1, 2, 3])

pchip_obj = scipy.interpolate.PchipInterpolator(x, y)
# Evaluate one step beyond the known data, as the appended row does
print(pchip_obj(4))
The result is -0.126. I think if you want a positive result you should change the method you are using.
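If you prefer to keep pchip, one possible workaround (not part of this answer, just a sketch that assumes the column names shown in your dataframe) is to clip the interpolated features at zero and, since Score is the sum of the other features, recompute it afterwards:
# Column names are taken from the displayed dataframe and may need adjusting
feature_cols = ['GDP capita', 'Family', 'Life Expect.', 'Freedom',
                'Trust Gov.', 'Generosity', 'Residual']

# Clip any negative interpolated values to zero
interpol_subset[feature_cols] = interpol_subset[feature_cols].clip(lower=0)

# Keep Score consistent as the sum of the other continuous features
interpol_subset['Score'] = interpol_subset[feature_cols].sum(axis=1)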
I have this data for 2007, with population in millions, GDP in billions, and Country as the index column:
continent year lifeExpectancy population gdpPerCapita GDP Billions
country
China Asia 2007 72.961 1318.6831 4959.11485 6539.50093
India Asia 2007 64.698 1110.39633 2452.21041 2722.92544
United States Americas 2007 78.242 301.139947 42951.6531 12934.4585
Indonesia Asia 2007 70.65 223.547 3540.65156 791.502035
Brazil Americas 2007 72.39 190.010647 9065.80083 1722.59868
Pakistan Asia 2007 65.483 169.270617 2605.94758 441.110355
Bangladesh Asia 2007 64.062 150.448339 1391.25379 209.311822
Nigeria Africa 2007 46.859 135.031164 2013.97731 271.9497
Japan Asia 2007 82.603 127.467972 31656.0681 4035.1348
Mexico Americas 2007 76.195 108.700891 11977.575 1301.97307
I am trying to plot a bar chart like the following:
This was plotted using matplotlib (code below), and I want to get the same result with the df.plot method.
The code for plotting with matplotlib:
import matplotlib.pyplot as plt

ax = data.plot(y=[3], kind="bar")
data.plot(y=[3, 5], kind="bar", secondary_y=True, ax=ax, style='g:', figsize=(24, 6))
plt.show()
You could use df.plot() with the columns you need on the y axis and pass the second column to the secondary_y argument:
data[['population','gdpPerCapita']].plot(kind='bar', secondary_y='gdpPerCapita')
If you want to set the y label for each side, you have to get all the axes of the plot (in this case two y axes) and set the labels respectively:
ax1, ax2 = plt.gcf().get_axes()
ax1.set_ylabel('Population')
ax2.set_ylabel('GDP')
Output: